Yesterday, the Keras team announced the release of Keras 2.3.0, which is the first release of multi-backend Keras with TensorFlow 2.0 support. This is also the last major release of multi-backend Keras. It is backward-compatible with TensorFlow 1.14, 1.13, Theano, and CNTK.
Keras to focus mainly on tf.keras, maintaining multi-backend Keras only for bug fixes
This release comes with a lot of API changes to bring the multi-backend Keras API “in sync” with tf.keras, TensorFlow’s high-level API. However, multi-backend Keras does not support some TensorFlow 2.0 features. This is why the team recommends that developers switch their Keras code to tf.keras in TensorFlow 2.0.
Moving to tf.keras will give developers access to features like eager execution, TPU training, and much better integration between low-level TensorFlow and high-level concepts like Layer and Model.
Following this release, the team plans to mainly focus on the further development of tf.keras. “Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported,” the team writes.
To make it easier for the community to contribute to the development of Keras, the team will be developing tf.keras in its own standalone GitHub repository at keras-team/keras. François Chollet, the creator of Keras, further explained on Twitter why they are moving away from the multi-backend Keras:
Both Theano and CNTK are out of development. Meanwhile, as Keras backends they represent less than 4% of Keras usage. The other 96% of users (of which more than half are already on tf.keras) are better served with tf.keras.
Keras development will focus on tf.keras going forward.
— François Chollet (@fchollet) September 17, 2019
API updates in Keras 2.3.0
Here are some of the API updates in Keras 2.3.0:
- The add_metric method is added to Layer/Model, which is similar to the add_loss method but for metrics.
- Keras 2.3.0 introduces several class-based losses including MeanSquaredError, MeanAbsoluteError, BinaryCrossentropy, Hinge, and more. With this update, losses can be parameterized via constructor arguments.
- Many class-based metrics are added including Accuracy, MeanSquaredError, Hinge, FalsePositives, BinaryAccuracy, and more. This update enables metrics to be stateful and parameterized via constructor arguments.
- The train_on_batch and test_on_batch methods now have a new argument called reset_metrics, which defaults to True. You can set this argument to False to maintain metric state across different batches when writing lower-level training or evaluation loops.
- The model.reset_metrics() method is added to Model to clear metric state at the start of an epoch when writing lower-level training or evaluation loops.
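The class-based losses and stateful metrics above can be sketched as follows (a minimal illustration using tf.keras, which exposes the same names as standalone Keras 2.3; the sample values are arbitrary):

```python
import numpy as np
from tensorflow import keras  # standalone Keras 2.3 exposes the same names as `keras`

# Class-based loss: an object parameterized via constructor arguments,
# callable on (y_true, y_pred)
loss_fn = keras.losses.MeanSquaredError()
loss = float(loss_fn([0.0, 2.0], [1.0, 1.0]))  # ((0-1)^2 + (2-1)^2) / 2 = 1.0

# Class-based metric: state accumulates across update_state() calls
acc = keras.metrics.BinaryAccuracy()
acc.update_state([0, 1, 1], [0.2, 0.8, 0.4])  # first batch: 2 of 3 correct
acc.update_state([1, 0], [0.9, 0.1])          # second batch: 2 of 2 correct
result = float(acc.result())                  # 4 of 5 correct overall: 0.8
```

Because the metric object carries its own state, the same instance can be updated batch by batch and queried (or reset) at any point, which is what makes the lower-level training loops described above possible.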
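A lower-level training loop using reset_metrics and model.reset_metrics() might look like this (a hedged sketch against the Keras 2.3 / TensorFlow 2.0 API described above; the tiny model and random data are purely illustrative):

```python
import numpy as np
from tensorflow import keras

# Illustrative model and data
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

x = np.random.rand(8, 4).astype('float32')
y = np.random.rand(8, 1).astype('float32')

for epoch in range(2):
    # Clear accumulated metric state at the start of each epoch
    model.reset_metrics()
    for start in range(0, len(x), 4):
        # reset_metrics=False keeps metric state accumulating across batches,
        # so `mae` reflects all batches seen so far this epoch
        loss, mae = model.train_on_batch(
            x[start:start + 4], y[start:start + 4], reset_metrics=False)
```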
Breaking changes in Keras 2.3.0
Along with the API changes, Keras 2.3.0 includes a few breaking changes. In this release, the TensorBoard callback arguments batch_size, write_grads, embeddings_freq, and embeddings_layer_names are deprecated and hence are ignored when used with TensorFlow 2.0. Metrics and losses will now be reported under the exact name specified by the user; for example, passing metrics=['acc'] reports the metric under "acc" rather than "accuracy". Also, the default recurrent activation is changed from hard_sigmoid to sigmoid in all RNN layers.
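The recurrent-activation change can be verified directly (a quick check, assuming tf.keras or Keras 2.3+):

```python
from tensorflow import keras  # standalone Keras 2.3 behaves the same

# Since Keras 2.3, the default recurrent activation in LSTM and GRU
# layers is sigmoid rather than hard_sigmoid
layer = keras.layers.LSTM(4)
print(layer.recurrent_activation.__name__)  # sigmoid
```

Note that this changes the numerical behavior of freshly constructed RNN layers relative to earlier releases unless recurrent_activation='hard_sigmoid' is passed explicitly.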
The release started a discussion on Hacker News where developers appreciated that Keras will mainly focus on the development of tf.keras. A user commented, “Good move. I’d much rather it worked well for one backend then sucked mightily on all of them. Eager mode means that for the first time ever you can _easily_ debug programs using the TensorFlow backend. That will be music to the ears of anyone who’s ever tried to debug a complex TF-backed model.”
Some also speculated that Google might acquire Keras in the future, considering that TensorFlow already includes Keras in its codebase and that its creator, François Chollet, works as an AI researcher at Google.
Check out the official announcement to learn more about what has landed in Keras 2.3.0.