
Facebook has open sourced its ELF OpenGo project and added new features to it. Facebook’s ELF OpenGo is a reimplementation of AlphaGo Zero / AlphaZero. ELF OpenGo was first released last May to help AI researchers better understand how such systems learn. The open-source bot posted a 20-0 record against top professional Go players and has been widely adopted by AI researchers to run their own Go experiments.

Now, the Facebook AI Research team has announced new features and research results for ELF OpenGo. The team has retrained the ELF OpenGo model using reinforcement learning and released a Windows executable version of the bot, which can be used as a training aid by Go players. A unique archive showing ELF OpenGo’s analysis of 87,000 professional Go games has also been released, which will help Go players assess their performance in detail. The team is also releasing its dataset of 20 million self-play games and 1,500 intermediate models.

Facebook researchers have shared their experiments and findings from retraining the ELF OpenGo model in a new research paper. The paper details the results of extensive experiments that modify individual features during evaluation to better understand the properties of these kinds of algorithms.

Training ELF OpenGo

ELF OpenGo was trained on 2,000 GPUs for 9 days, after which the 20-block model was comparable to the 20-block models described in AlphaGo Zero and AlphaZero. The release also provides pretrained superhuman models, the code used to train them, a comprehensive training trajectory dataset featuring 20 million self-play games and over 1.5 million training mini-batches, and auxiliary data.
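As a reimplementation of AlphaZero, ELF OpenGo trains a single dual-headed network from self-play: the policy head is fit to the move distributions produced by Monte Carlo tree search, and the value head to the final game outcomes. Below is a minimal PyTorch sketch of that training step; the tiny network and the randomly generated “self-play” batch are illustrative stand-ins, not ELF OpenGo’s actual code.

```python
# Minimal sketch of an AlphaZero-style training step (PyTorch).
# The network size, board encoding, and random "self-play" stub are
# illustrative assumptions, not ELF OpenGo's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19 * 19  # Go board points; a real net would also include a pass move

class DualHeadNet(nn.Module):
    """Tiny stand-in for the 20-block residual tower: a shared trunk with
    a policy head (move probabilities) and a value head (expected outcome)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(BOARD, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, BOARD)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.policy_head(h), torch.tanh(self.value_head(h))

def training_step(net, opt, states, mcts_policies, outcomes):
    """One mini-batch update: the policy head matches MCTS visit-count
    distributions, the value head matches the final game result."""
    logits, values = net(states)
    policy_loss = -(mcts_policies * F.log_softmax(logits, dim=1)).sum(1).mean()
    value_loss = F.mse_loss(values.squeeze(1), outcomes)
    loss = policy_loss + value_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

net = DualHeadNet()
# weight_decay plays the role of the L2 regularization term in the AlphaZero loss
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

# Stubbed batch: in ELF OpenGo these tensors come from recorded self-play games.
states = torch.rand(32, BOARD)
mcts_policies = F.softmax(torch.rand(32, BOARD), dim=1)
outcomes = torch.sign(torch.rand(32) - 0.5)  # +1 win, -1 loss
print(training_step(net, opt, states, mcts_policies, outcomes))
```

In the real system, the states, search policies, and outcomes would be sampled from the replay of MCTS self-play games generated across the 2,000 GPUs mentioned above, rather than drawn at random.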

Model behavior during training

  • There is high variance in the model’s strength when it is evaluated against other models, and this variance persists even when the learning rate is reduced.
  • Moves that require significant lookahead to determine whether they should be played, such as “ladder” moves, are learned slowly by the model and are never fully mastered (the toy sketch after this list illustrates why ladders demand such deep reading).
  • The model quickly learns high-quality moves at different stages of the game. In contrast to the typical behavior of tabular RL, where knowledge propagates backward from the end of the game, the rate of progress in learning mid-game and end-game moves is nearly identical.
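To see why ladders are hard, note that every move in a ladder is forced: the chased stones run diagonally until they either hit the board edge (and are captured) or reach a friendly “breaker” stone (and escape). Judging whether the very first move works therefore requires reading the whole sequence. The toy Python sketch below illustrates that arithmetic; the coordinates and helper function are hypothetical, purely for illustration.

```python
# Toy illustration (not ELF OpenGo code) of why ladders need deep lookahead:
# every move in a ladder is forced, so evaluating the first move correctly
# means reading the whole diagonal sequence to the edge or a breaker stone.
def ladder_depth(start, breaker=None, size=19):
    """Count the forced plies from `start` (col, row) until the chased
    group reaches the board edge (ladder works, group captured) or a
    friendly breaker stone (ladder fails, group escapes)."""
    col, row = start
    depth = 0
    while 0 < col < size - 1 and 0 < row < size - 1:
        col, row = col + 1, row + 1
        depth += 2  # one attacking move plus one forced escaping move
        if breaker == (col, row):
            return depth, "escapes (breaker)"
    return depth, "captured (edge)"

# A ladder starting near one corner is only resolved ~30 plies later,
# and a single distant breaker stone flips the result entirely:
print(ladder_depth((2, 2)))                    # (32, 'captured (edge)')
print(ladder_depth((2, 2), breaker=(10, 10)))  # (16, 'escapes (breaker)')
```

A forced line of 30-plus plies whose outcome hinges on one distant stone is exactly the kind of deep, narrow readout that policy-guided tree search rarely explores in full, which is consistent with the slow ladder learning the researchers report.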

In a Facebook blog post, the team behind this RL model wrote, “We’re excited that our development of this versatile platform is helping researchers better understand AI, and we’re gratified to see players in the Go community use it to hone their skills and study the game. We’re also excited to expand last year’s release into a broader suite of open source resources.”

The research paper titled ELF OpenGo: An Analysis and Open Reimplementation of AlphaZero is available on arXiv.

Read Next

Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

DeepMind’s AlphaZero shows unprecedented growth in AI, masters 3 different games

FAIR releases a new ELF OpenGo bot with a unique archive that can analyze 87k professional Go games
