
In an effort to deepen neural network interpretability, Google has released Google Lucid, a neural network visualization library, alongside a new article, “The Building Blocks of Interpretability,” which tackles one of the most popular questions in Deep Learning: how do neural networks make decisions?

Google Lucid is a neural network visualization library that builds on Google’s earlier work on DeepDream. You may remember DeepDream as Google’s first attempt to visualize how neural networks understand images, which became famous for its psychedelic output. Lucid goes further: at its core, it is a collection of infrastructure and tools for research in neural network interpretability. In particular, it provides state-of-the-art implementations of feature visualization techniques, which can also be used to create more artistic DeepDream-style images, and flexible abstractions that make it easy to explore new research directions.
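To give a sense of what working with Lucid looks like, here is a minimal sketch along the lines of the library’s published quickstart: it loads a pretrained GoogLeNet (InceptionV1) from Lucid’s model zoo and renders a feature visualization for a single neuron. The module paths and the layer/channel identifier are taken from the library’s examples but are illustrative here and may differ across versions; Lucid also assumes a TensorFlow 1.x environment.

```python
# A minimal feature-visualization sketch, based on Lucid's quickstart example.
# Module paths and the layer identifier are illustrative and may vary by version.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

# Load a pretrained GoogLeNet (InceptionV1) from Lucid's model zoo.
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image to maximally activate one channel of a chosen layer,
# producing a DeepDream-style image of what that neuron responds to.
# In a notebook (e.g. Colab), the result is displayed inline.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```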

To make Lucid easier to work with, Google is also releasing Colab notebooks. These make it simple to reproduce the visualizations with Lucid: just open a notebook and click a button to run the code, without worrying about setup requirements.

To make things even more interesting, Google’s new Distill article, “The Building Blocks of Interpretability,” shows how feature visualization, combined with other interpretability techniques, gives a much clearer view into a neural network. It helps show which decisions a network makes at a given point and how those decisions influence the final output. For example, Google says, “we can see things like how a network detects a floppy ear, and then that increases the probability it gives to the image being a ‘Labrador retriever’ or ‘beagle’.”

The article explores techniques for understanding which neurons fire in the network by attaching a visualization to each neuron, almost a kind of MRI for neural networks. It can also zoom out and show how the entire image is perceived at different layers, revealing a progression from very simple combinations of edges, to rich textures and 3D structure, to high-level structures.
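One way to get a feel for that layer-by-layer progression is to render feature visualizations for channels at successively deeper layers of the same model. The sketch below assumes the quickstart setup shown earlier; the layer names come from Lucid’s InceptionV1 model zoo entry, and the channel index is an arbitrary illustrative choice, not a specific example from the article.

```python
# Sketch: visualize one channel at increasingly deep layers of InceptionV1,
# moving from edge-like patterns towards more complex, object-like structure.
# Layer names and the channel index are illustrative.
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

for layer in ["conv2d2", "mixed3b", "mixed4d", "mixed5b"]:
    objective = objectives.channel(layer, 0)  # channel 0 chosen arbitrarily
    _ = render.render_vis(model, objective)
```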

The purpose of this research, Google says, is to “address one of the most exciting questions in Deep Learning: how do neural networks do what they do?” However, it adds, “This work only scratches the surface of the kind of interfaces that we think it’s possible to build for understanding neural networks. We’re excited to see what the community will do.”

You can read the entire article on Distill.

