
AmoebaNets: Google’s new evolutionary AutoML


In order to detect objects within an image, artificial neural networks require careful design by experts over years of difficult research. These networks then address one specific task, such as finding what's in a photograph, calling a genetic variant, or helping to diagnose a disease. Google believes one approach to generating such architectures is to use evolutionary algorithms. So, today Google introduced AmoebaNets, a family of image-classifier architectures produced by an evolutionary algorithm, which achieve state-of-the-art results on datasets such as ImageNet and CIFAR-10.

Google offers AmoebaNets as an answer to questions such as:

  • By using computational resources to programmatically evolve image classifiers at unprecedented scale, can one achieve solutions with minimal expert participation? How good can today's artificially evolved neural networks be?

These questions were addressed in the following two papers:

  1. "Large-Scale Evolution of Image Classifiers," presented at ICML 2017. In this paper, the authors set up an evolutionary process with simple building blocks and trivial initial conditions. The idea was to "sit back" and let evolution at scale do the work of constructing the architecture.
  2. "Regularized Evolution for Image Classifier Architecture Search" (2018). This paper scaled up the computation using Google's new TPUv2 chips. This combination of modern hardware, expert knowledge, and evolution produced state-of-the-art models on CIFAR-10 and ImageNet, two popular benchmarks for image classification.

One important feature of the evolutionary algorithm the team used in the second paper, which produced the AmoebaNets, is a form of regularization (sketched in code after the list below), which means:

  • Instead of letting the worst neural networks die, they remove the oldest ones — regardless of how good they are. This improves robustness to changes in the task being optimized and tends to produce more accurate networks in the end.
  • Since weight inheritance is not allowed, all networks must train from scratch. Therefore, this form of regularization selects for networks that remain good when they are re-trained.
  • These models achieve state-of-the-art results: on CIFAR-10, a mean test error of 2.13%; on mobile-size ImageNet, a top-1 accuracy of 75.1% with 5.1M parameters; and on ImageNet, a top-1 accuracy of 83.1%.
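The aging rule described above is simple enough to sketch in a few lines. Below is a minimal toy illustration of regularized ("aging") evolution in Python; the architecture encoding and the mutate and evaluate functions are placeholder assumptions for illustration only, not the paper's actual CIFAR-10 search space or training setup.

```python
import collections
import random

Candidate = collections.namedtuple("Candidate", ["arch", "fitness"])

NUM_OPS = 4  # size of the toy operation vocabulary (assumption)


def random_arch(length=8):
    # A random architecture: one operation choice per position.
    return [random.randrange(NUM_OPS) for _ in range(length)]


def mutate(arch):
    # Flip a single operation choice; real NAS mutations alter ops or wiring.
    child = list(arch)
    child[random.randrange(len(child))] = random.randrange(NUM_OPS)
    return child


def evaluate(arch):
    # Placeholder for "train from scratch, then measure validation accuracy";
    # returns a toy fitness in [0, 1].
    return sum(arch) / (len(arch) * (NUM_OPS - 1))


def regularized_evolution(cycles=200, population_size=20, sample_size=5):
    # The population is a queue ordered by age; each cycle the OLDEST member
    # dies regardless of fitness -- this is the "aging" regularization.
    population = collections.deque(
        Candidate(arch, evaluate(arch))
        for arch in (random_arch() for _ in range(population_size))
    )
    best = max(population, key=lambda c: c.fitness)
    for _ in range(cycles):
        # Tournament selection: sample a few members, breed the fittest one.
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda c: c.fitness)
        child_arch = mutate(parent.arch)
        child = Candidate(child_arch, evaluate(child_arch))
        population.append(child)   # youngest joins on the right
        population.popleft()       # oldest is removed, even if it is the best
        if child.fitness > best.fitness:
            best = child
    return best


if __name__ == "__main__":
    winner = regularized_evolution()
    print("best arch:", winner.arch, "fitness:", round(winner.fitness, 3))
```

The only difference from plain tournament-based evolution is the popleft() call: removal is by age rather than by worst fitness, so even the current best model eventually retires and can persist only through descendants that remain good when retrained from scratch.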

Read more about AmoebaNets on the Google Research Blog.

Savia Lobo

A data science fanatic. Loves to be updated with the tech happenings around the globe. Loves singing and composing songs. Believes in putting the art in smart.
