After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines, or TPUs with minimal code changes. The TensorFlow 2.0 beta also brings a number of major improvements, breaking changes, and bug fixes. Earlier this year, the TensorFlow team updated users on what to expect from TensorFlow 2.0.
TensorFlow 2.0 support for Keras features
Distribution Strategy for hardware
The tf.distribute.Strategy API serves multiple user segments, including researchers and ML engineers. It provides good performance and easy switching between strategies. Users can use the tf.distribute.Strategy API to distribute training across multiple GPUs, multiple machines, or TPUs, and can distribute their existing models and training code with minimal code changes.
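As a minimal sketch of how little code changes, the model below is built inside a MirroredStrategy scope; the layer sizes and the toy data are illustrative assumptions, not from the release notes. On a machine without multiple GPUs this still runs with a single replica.

```python
import tensorflow as tf

# Wrap model construction in a strategy scope so variables are
# mirrored across the available devices; training code is unchanged.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# The usual fit() call; the strategy handles the distribution.
x = tf.random.normal((32, 8))
y = tf.random.normal((32, 1))
model.fit(x, y, epochs=1, verbose=0)
```

Switching to another strategy (for example a TPU strategy) is a matter of swapping the `strategy` object, which is the "easy switching" the team describes.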
The tf.distribute.Strategy can be used with:
- Custom training loops
TensorFlow 2.0 beta also simplifies the API for custom training loops, again building on tf.distribute.Strategy. Custom training loops give flexibility and greater control over training, and they make it easier to debug both the model and the training loop.
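A custom training loop in TensorFlow 2.0 style looks roughly like the sketch below, written with tf.GradientTape on a single device; the model, learning rate, and toy regression target are illustrative assumptions. Distributing this loop is then a matter of running the step through a tf.distribute.Strategy.

```python
import tensorflow as tf

# Toy setup: a linear model fitting y = sum(x).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((64, 4))
y = tf.reduce_sum(x, axis=1, keepdims=True)

losses = []
for step in range(10):
    # The tape records the forward pass so gradients can be taken.
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    losses.append(float(loss))
```

Because every step is plain Python, it is straightforward to insert print statements or a debugger anywhere in the loop, which is the debugging benefit the article mentions.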
Building a fully customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers are created in the __init__ method and set as attributes of the class instance, while the forward pass is defined in the call method. Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively. It gives greater flexibility when creating models that are not easily expressed as a plain stack of existing layers.
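The pattern above can be sketched as follows; the class name and layer sizes are illustrative, not from the release notes.

```python
import tensorflow as tf

class TwoLayerNet(tf.keras.Model):
    def __init__(self):
        super(TwoLayerNet, self).__init__()
        # Layers are created in __init__ and stored as attributes.
        self.hidden = tf.keras.layers.Dense(32, activation="relu")
        self.out = tf.keras.layers.Dense(10)

    def call(self, inputs):
        # The forward pass is written imperatively; arbitrary Python
        # control flow could go here under eager execution.
        x = self.hidden(inputs)
        return self.out(x)

model = TwoLayerNet()
logits = model(tf.zeros((2, 8)))  # shape (2, 10)
```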
Breaking Changes
- tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, moved to tensorflow/addons, or removed entirely.
- In the tf.estimator.DNN/Linear/DNNLinearCombined family, the premade estimators have been updated to use tf.keras.optimizers instead of tf.compat.v1.train.Optimizer. A checkpoint converter tool for converting optimizers is also included with this release.
Bug Fixes and Other Changes
This beta version of 2.0 includes many bug fixes and other changes. Some of them are mentioned below:
- In tf.data.Options, the experimental_numa_aware option has been removed, and support for TensorArrays has been added.
- tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which makes the saved checkpoints compatible with model.load_weights.
- The tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.
- A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added.
- This beta version also exposes a flag that allows the number of threads to vary across Python benchmarks.
- The unused StringViewVariantWrapper and tf.string_split have been removed from the v2 API.
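To illustrate the broadcasting support added to tf.matmul above: the batch dimensions of the two operands now broadcast against each other the way elementwise ops do (the shapes below are just an example).

```python
import tensorflow as tf

# Batch dimension 1 broadcasts against batch dimension 5,
# so no explicit tf.tile is needed before the matmul.
a = tf.random.normal((1, 2, 3))
b = tf.random.normal((5, 3, 4))
c = tf.matmul(a, b)  # result shape: (5, 2, 4)
```

Previously this required manually tiling `a` to shape (5, 2, 3) before multiplying.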
The TensorFlow team has set up a TF 2.0 Testing User Group where users can report any snags they hit and share feedback.
General reaction to the release of TensorFlow 2.0 beta is positive.
Google today announced the release of the Beta version of TensorFlow 2.0. The new version of the world’s most popular open source machine learning library is being welcomed by developers. #TensorFlow #Google #AI
— Tony Peng (@tonypeng_Synced) June 7, 2019
A user on Reddit commented, “Can’t wait to try that out!”
However, some users have compared it with PyTorch, which they consider more comprehensive than TensorFlow: in their view, PyTorch provides a more powerful platform for research and is also good for production.
A user on Hacker News comments, “Maybe I’ll give TF another try, but right now I’m really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It’s great for research and proofs-of-concept. Maybe for production too.”
Another user said, “Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recording everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it’s hard to transition from one to the other.”
The TensorFlow team hopes to resolve the remaining issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.