In a Medium blog post published on the first day of the summit, Alex Ingerman (Product Manager) and Krzys Ostrowski (Research Scientist) introduced the TensorFlow Federated (TFF) framework, an open source framework for experimenting with machine learning and other computations on decentralized data.
As the name suggests, this framework uses Federated Learning, a learning approach introduced by Google in 2017. This technique enables devices to collaboratively learn a shared prediction model while keeping all the training data on the device, removing the need to store training data in the cloud.
The authors note that TFF is based on their experience developing federated learning technology at Google. TFF provides the Federated Learning (FL) API, which can be used to express an ML model architecture and then train it across data provided by multiple developers, while keeping each developer's data separate and local. It also provides the Federated Core (FC) API, a set of lower-level primitives that enables expressing a broad range of computations over a decentralized dataset.
The authors conclude, “With TFF, we are excited to put a flexible, open framework for locally simulating decentralized computations into the hands of all TensorFlow users. You can try out TFF in your browser, with just a few clicks, by walking through the tutorials.”
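As a rough illustration of the federated averaging idea underlying this approach, the plain-Python sketch below (hypothetical names, not the TFF API) has each client compute a model update on its own private data; only the updated weights, never the raw data, are averaged into the shared model.

```python
def local_update(weights, client_data, lr=0.1):
    """One gradient-descent step on a 1-D least-squares model, computed locally."""
    grad = sum(2 * (weights * x - y) * x for x, y in client_data) / len(client_data)
    return weights - lr * grad

def federated_average(weights, clients):
    """Average locally updated weights; raw client data never leaves each client."""
    updates = [local_update(weights, data) for data in clients]
    return sum(updates) / len(updates)

# Three clients, each holding private (x, y) pairs drawn from y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (1.5, 3.0)],
]

w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
print(round(w, 2))  # converges to the true slope 2.0
```

The server only ever sees model weights, which is the property that lets the training data stay on-device.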
TensorFlow 2.0.0-alpha0
The event also saw the release of the first alpha version of TensorFlow 2.0, which comes with a leaner set of APIs. First introduced in August last year by Martin Wicke, an engineer at Google, TensorFlow 2.0 is expected to come with:
- Easy model building with Keras and eager execution.
- Robust model deployment in production on any platform.
- Powerful experimentation for research.
- API simplification by reducing duplication and removing deprecated endpoints.
The first teaser, the TensorFlow 2.0.0-alpha0 version, comes with the following changes:
- API clean-up included removing tf.app, tf.flags, and tf.logging in favor of absl-py.
- No more global variables: helper methods such as tf.global_variables_initializer and tf.get_global_step have been removed.
- Functions, not sessions (tf.Session and session.run -> tf.function).
- Added support for TensorFlow Lite in TensorFlow 2.0.
- tf.contrib has been deprecated, and functionality has been either migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely.
- Checkpoint breakage for RNNs and for Optimizers.
Minor bug fixes have also been made to the Keras and Python APIs and to tf.estimator.
Read the full list of bug fixes in the changelog.
TensorFlow Lite 1.0
The TF-Lite framework is designed to help developers deploy machine learning and artificial intelligence models on mobile and IoT devices. Lite was first introduced at the I/O developer conference in May 2017 and entered developer preview later that year. At the TensorFlow Dev Summit, the team announced a new version of this framework, TensorFlow Lite 1.0. According to a post by VentureBeat, improvements include selective registration and quantization during and after training for faster, smaller models. The team behind TF-Lite 1.0 says that quantization has helped them achieve up to 4 times compression of some models.
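The roughly 4x figure follows from representing 32-bit float weights as 8-bit integers. The sketch below (plain Python, not the actual TF-Lite converter API) illustrates the affine quantization idea behind that compression:

```python
def quantize(weights):
    """Affine-quantize a list of floats to uint8 with a scale and offset."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # guard against all-equal weights
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map the 8-bit values back to approximate floats."""
    return [v * scale + lo for v in q]

weights = [-1.5, -0.2, 0.0, 0.7, 1.5]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)

# float32 storage: 4 bytes per weight; uint8 storage: 1 byte per weight.
print(len(weights) * 4 / len(weights))  # 4.0x compression
```

Each weight is stored in one byte instead of four, at the cost of a small rounding error bounded by half the quantization step.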
TensorFlow Privacy
Another interesting library released at the TensorFlow Dev Summit was TensorFlow Privacy. This Python-based open source library helps developers train their machine-learning models with strong privacy guarantees. To achieve this, it draws on the principles of differential privacy, a technique that offers strong mathematical guarantees that models do not learn or remember details about any specific user whose data they were trained on.
TensorFlow Privacy includes implementations of TensorFlow optimizers for training machine learning models with differential privacy. For more information, you can go through the technical whitepaper describing its privacy mechanisms in more detail.
The creators also note that “no expertise in privacy or its underlying mathematics should be required for using TensorFlow Privacy. Those using standard TensorFlow mechanisms should not have to change their model architectures, training procedures, or processes.”
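The core mechanism behind differentially private training can be sketched as follows (a simplified scalar illustration, not the actual TensorFlow Privacy optimizer API): clip each per-example gradient to a fixed norm, then add calibrated Gaussian noise before averaging, so no single example can dominate an update.

```python
import random

def dp_average_gradient(per_example_grads, clip_norm=1.0, noise_stddev=0.5, rng=None):
    """Clip each example's gradient to clip_norm, sum, add noise, and average."""
    rng = rng or random.Random(0)
    total = 0.0
    for g in per_example_grads:
        norm = abs(g)  # for a scalar gradient, the L2 norm is just |g|
        factor = min(1.0, clip_norm / norm) if norm > 0 else 0.0
        total += g * factor
    # Noise scaled to the clipping norm bounds any one example's influence.
    noisy = total + rng.gauss(0.0, noise_stddev * clip_norm)
    return noisy / len(per_example_grads)

grads = [0.3, -2.0, 5.0, 0.1]  # one (scalar) gradient per training example
print(dp_average_gradient(grads))
```

The outlier gradient of 5.0 is clipped to 1.0 before averaging, which is what limits how much the model can memorize about any individual example.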
TF Replicator
TF Replicator, also released at the TensorFlow Dev Summit, is a software library that helps researchers deploy their TensorFlow models on GPUs and Cloud TPUs. The creators say this requires minimal effort and no previous experience with distributed systems.
For multi-GPU computation, TF-Replicator relies on an “in-graph replication” pattern, where the computation for each device is replicated in the same TensorFlow graph.
When TF-Replicator builds an in-graph replicated computation, it first builds the computation for each device independently and leaves placeholders where cross-device computation has been specified by the user. Once the sub-graphs for all devices have been built, TF-Replicator connects them by replacing the placeholders with actual cross-device computation.
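The build-then-connect pattern described above can be sketched in plain Python (a hypothetical structure, not the actual TF-Replicator internals): each device's sub-computation is built independently with a placeholder where cross-device communication goes, and the placeholders are later patched with the real all-reduce.

```python
PLACEHOLDER = object()  # stands in for a not-yet-built cross-device op

def build_device_graph(device_data):
    """Per-device computation: a local sum, plus a placeholder for the
    cross-device reduction that cannot be built yet."""
    local_sum = sum(device_data)
    return {"local": local_sum, "cross_device": PLACEHOLDER}

def connect(graphs):
    """Replace every placeholder with the actual all-reduce result."""
    all_reduced = sum(g["local"] for g in graphs)
    for g in graphs:
        g["cross_device"] = all_reduced
    return graphs

# Two "devices", each holding a shard of the data.
graphs = connect([build_device_graph([1, 2, 3]), build_device_graph([10, 20])])
print([g["cross_device"] for g in graphs])  # [36, 36]
```

Deferring the cross-device op is what lets each replica's sub-graph be constructed without knowing about the other replicas.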
For a more comprehensive description, you can go through the research paper.
These were the top announcements made at the TensorFlow Dev Summit 2019. You can go through the Keynote and other videos of the announcements and tutorials on this YouTube playlist.