
Google strides forward in deep learning: open sources Google Lucid to answer how neural networks make decisions


In an effort to deepen neural network interpretability, Google has released Google Lucid, a neural network visualization library, along with a Distill article, “The Building Blocks of Interpretability,” which addresses one of the most popular questions in deep learning: how do neural networks make decisions?

Google Lucid is a neural network visualization library built on Google’s earlier work on DeepDream. You may remember DeepDream as Google’s first attempt to visualize how neural networks understand images, which led to the creation of psychedelic imagery. Lucid adds feature visualization techniques that produce more refined DeepDream-style images. At its core, it is a collection of infrastructure and tools for research in neural network interpretability: it provides state-of-the-art implementations of feature visualization techniques, along with flexible abstractions that make it easy to explore new research directions.
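To give a sense of what working with the library looks like, here is a minimal sketch based on Lucid’s published quickstart. It assumes a TensorFlow 1.x environment with the lucid package installed via pip; it loads a pretrained InceptionV1 (GoogLeNet) model from Lucid’s model zoo and renders a feature visualization for one channel of one layer:

```python
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

# Load a pretrained InceptionV1 model from Lucid's model zoo.
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image that maximally activates channel 476 of the
# "mixed4a_pre_relu" layer, and display the resulting visualization.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```

The "layer:channel" string is Lucid’s shorthand for an optimization objective; the same call accepts richer objective objects for combining or comparing neurons.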

For added flexibility and ease of use, Google is also releasing Colab notebooks. These notebooks make it extremely easy to use Lucid to reproduce visualizations: just open a notebook and click a button to run the code, without worrying about setup requirements.

More excitingly, Google’s new Distill article, “The Building Blocks of Interpretability,” shows how feature visualization, combined with other interpretability techniques, gives a clearer view inside a neural network. This helps show how a network makes individual decisions at intermediate points and how those decisions influence the final output. For example, Google says, “we can see things like how a network detects a floppy ear, and then that increases the probability it gives to the image being a ‘Labrador retriever’ or ‘beagle’.”

The article explores techniques for understanding which neurons fire in the network by attaching visualizations to each neuron, a kind of MRI for neural networks. It can also zoom out and show how the entire image is perceived at different layers: the network starts by detecting simple combinations of edges, then rich textures and 3D structure, and finally high-level concepts.
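As an illustration of that layer-by-layer view, the following sketch renders feature visualizations at several depths of the same network. The layer names follow Lucid’s InceptionV1 graph; visualizing channel 0 of each layer is an arbitrary choice made here for demonstration, not something prescribed by the article:

```python
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

# Layers ordered from shallow to deep: shallow layers tend to respond to
# edges, deeper ones to textures and object parts.
layers = ["conv2d2", "mixed3b", "mixed4d", "mixed5b"]

for layer in layers:
    # Optimize an input image that maximally excites one channel of this
    # layer (channel 0, chosen arbitrarily for illustration).
    objective = objectives.channel(layer, 0)
    _ = render.render_vis(model, objective, thresholds=(512,))
```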

The purpose of this research, Google says, is to “address one of the most exciting questions in Deep Learning: how do neural networks do what they do?” However, it adds, “This work only scratches the surface of the kind of interfaces that we think it’s possible to build for understanding neural networks. We’re excited to see what the community will do.”

You can read the entire article on Distill.

Sugandha Lahoti

Content Marketing Editor at Packt Hub. I blog about new and upcoming tech trends ranging from data science, web development, and programming to cloud & networking, IoT, security, and game development.
