
A paper with inputs from 15 researchers, titled The Hanabi Challenge: A New Frontier for AI Research, examines artificial intelligence systems playing the card game Hanabi. The researchers propose an experimental framework, the Hanabi Learning Environment, for the AI research community to test and advance algorithms, and use it to assess the performance of current state-of-the-art algorithms and techniques.

What’s special about Hanabi?

Hanabi is a two- to five-player game involving cards with numbers on them. In Hanabi, you play alongside other participants, but you must trust the imperfect information they provide and make deductions to advance your cards. The game has imperfect information by design: each player can see everyone's cards except their own. It is a test of collaboratively sharing information with discretion. The rules are specified in detail in the research paper.
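The game's core twist, that each player sees every hand except their own, can be illustrated with a small sketch. This is a hypothetical, heavily simplified model for illustration only; the names, card counts, and structure below are assumptions, not the paper's or any official implementation's.

```python
import random

# Illustrative constants: real Hanabi uses five colors and a fixed
# distribution of ranks per color (three 1s, two each of 2-4, one 5).
COLORS = ["R", "G", "B", "Y", "W"]
RANKS = [1, 1, 1, 2, 2, 3, 3, 4, 4, 5]

def build_deck():
    """Build the full deck as (color, rank) tuples."""
    return [(c, r) for c in COLORS for r in RANKS]

def deal(deck, num_players=3, hand_size=5):
    """Shuffle the deck and deal a hand to each player."""
    random.shuffle(deck)
    return [[deck.pop() for _ in range(hand_size)] for _ in range(num_players)]

def observation(hands, player):
    """The observing player sees the other hands; their own cards are hidden."""
    return {
        "own_hand": ["??"] * len(hands[player]),
        "other_hands": {i: h for i, h in enumerate(hands) if i != player},
    }

deck = build_deck()
hands = deal(deck)
obs = observation(hands, player=0)
print(obs["own_hand"])  # ['??', '??', '??', '??', '??']
```

Because a player's only view of their own hand comes through hints from partners, every hint must be chosen with the receiver's reasoning in mind, which is exactly the property that makes the game interesting for AI research.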

Games have always been used to showcase or test the abilities of artificial intelligence and machine learning, be it Go, Chess, Dota 2, or others. So why would Hanabi be 'A New Frontier for AI Research'? The difference is that Hanabi needs a bit of a human touch to play. Factors like trust, imperfect information, and cooperation come into the picture with this game, which is why it is a good testing ground for AI applications.

What’s the paper about?

The idea is to test the collaboration of AI agents where information is limited and only implicit communication is allowed. The researchers say that Hanabi makes reasoning about the beliefs and intentions of other agents prominent. They believe that developing techniques that instill agents with such a theory of mind will, in addition to succeeding at Hanabi, unlock ways for agents to collaborate effectively with human partners.

The researchers have also introduced an open-source 'Hanabi Learning Environment', an experimental framework in which other researchers can assess their techniques.

To play Hanabi, a theory of mind is necessary: reasoning about human-like traits such as beliefs, intents, and desires. Theory-of-mind reasoning is important not just in how humans approach this game, but also in how humans handle communication and interactions when multiple parties are involved.

Results and further work

State-of-the-art reinforcement learning algorithms using deep learning are evaluated in the paper. In self-play, they fall short of hand-coded Hanabi-playing bots. When paired with unfamiliar partners, they fail to collaborate at all. This shows that there is a lot of room for advances in this area related to theory of mind.

The code for the Hanabi Learning Environment is written in Python and C++ and will be available on DeepMind's GitHub. Its interface is similar to that of OpenAI Gym.
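Since the article says the interface resembles OpenAI Gym, a typical interaction would follow the familiar reset/step loop. The class below is a stand-in dummy environment written for this sketch; the actual API in DeepMind's repository may differ, so treat every name here as an assumption.

```python
class DummyHanabiEnv:
    """Toy stand-in environment exposing a Gym-style reset()/step() interface.

    This is NOT the real Hanabi Learning Environment API, just an
    illustration of the interaction pattern the article describes.
    """

    def __init__(self, max_steps=3):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.steps = 0
        return {"current_player": 0,
                "legal_moves": ["hint", "play", "discard"]}

    def step(self, action):
        """Apply an action; return (observation, reward, done, info)."""
        self.steps += 1
        observation = {"current_player": self.steps % 2,
                       "legal_moves": ["hint", "play", "discard"]}
        reward = 1.0 if action == "play" else 0.0
        done = self.steps >= self.max_steps
        return observation, reward, done, {}

# Standard Gym-style episode loop over the dummy environment.
env = DummyHanabiEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = obs["legal_moves"][0]  # trivial policy: first legal move
    obs, reward, done, _ = env.step(action)
    total_reward += reward
```

The reset/step pattern is what makes Gym-style environments easy to plug into existing reinforcement learning code: an agent only needs to consume observations and emit actions from the legal-move set.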

For more details about the game and how the theory will help in testing AI agent interactions, check out the research paper.
