6 min read
Our article aims to deliver the key points from Yael Niv's keynote presentation at NIPS 2017. In her talk, she discusses how state representations in the human brain enable effective decision making, and what this means for building Artificial Intelligence systems that perform simple human-like tasks. The talk also deconstructs the complex human decision-making process: how the brain breaks complex procedures down into simple states, and how these states determine our decisions. This, in turn, offers valuable insights into the design and architecture of smart AI systems with decision-making capabilities.
Staying Simple is Complex
What do you think happens when a human being crosses a busy street, constantly keeping an eye on multiple checkpoints in order to stay safe and sound? The answer is somewhat counterintuitive: the brain breaks the complex process down into multiple simple blocks. These blocks can be termed states, and the states then determine decisions such as when to cross the road or at what speed to cross it. A state can represent anything from the density of incoming traffic to your own walking speed. These states help the brain ignore spurious or latent tasks and complete the priority task at hand, which optimizes the brain's computational effort. In short, the human brain focuses on the most important task at hand and breaks it down into multiple simple tasks.
The process of making smarter AI systems with complex decision-making capabilities can take inspiration from this process.
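To make the idea concrete, here is a minimal sketch of such state abstraction in code. The scenario, names, and thresholds (traffic density, gap in seconds) are illustrative assumptions, not details from the talk; the point is only that a rich observation collapses into a handful of discrete states, over which a policy becomes a simple lookup.

```python
# Hypothetical sketch of state abstraction for the road-crossing example.
# All names and thresholds below are illustrative assumptions.

def abstract_state(traffic_density: float, gap_seconds: float) -> str:
    """Collapse continuous observations into a few discrete states."""
    if traffic_density < 0.2 or gap_seconds > 8.0:
        return "safe_to_cross"
    if gap_seconds > 4.0:
        return "cross_quickly"
    return "wait"

# With only a few states, the policy is a small lookup table
# rather than a function of every raw sensory variable.
POLICY = {
    "safe_to_cross": "walk",
    "cross_quickly": "hurry",
    "wait": "stay",
}

print(POLICY[abstract_state(traffic_density=0.1, gap_seconds=10.0)])  # walk
```

The abstraction throws away everything irrelevant to the decision, mirroring how the brain ignores latent detail while handling the priority task.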
The Practical Human Experiment
To observe how the human brain behaves when urged to make complex decisions, a few experiments were performed. Their primary objective was to verify the hypothesis that decision-making information in the human brain is stored in a part of the frontal lobe called the orbitofrontal cortex. The two experiments are described briefly below.
In the first experiment, participants were shown random clusters of circles and asked to guess the number of circles in each cluster within 2 minutes. After the first guess, the experimenter disclosed the correct number of circles. The subjects were then given clusters of circles in two different colors (red and yellow) and asked to repeat the guessing activity for each cluster. Crucially, the experimenter never disclosed that differently colored clusters would follow.
The most important observation was that once a subject knew the correct count, their subsequent guesses revolved around that number, irrespective of whether it was relevant to the next set of clusters. In fact, the count had changed between the two colored specimens given to them.
The important point is that participants were never told that color would be a parameter determining the number of circles in each set, yet it played a huge part in their guesses. Color acted as a latent factor: it sat in the participants' subconscious rather than being a direct, stated parameter, yet it still shifted the overall estimates, which were significantly higher for the red clusters than for the yellow ones. The experiment thus supported the hypothesis that latent factors are an integral part of intelligent decision making in human beings.
The second experiment was performed to test the hypothesis that the orbitofrontal cortex contains the data the brain needs to make complex decisions. Subjects' brains were monitored with MRI to track activity during the decision-making process. Each subject was shown a straight line and a dot and asked to predict the next line from the dot, in terms of both direction and length. After repeating this a number of times, participants were asked to recall the length and direction of the first line. The sets of lines and dots differed slightly between groups: one group saw a gradual change in line length and direction, while the other saw a drastic change in the middle.
The results showed that the group exposed to gradual changes in line length and direction preserved the first line's data more accurately, while the group exposed to the drastic change was less accurate.
The MRI scans indicated that this classification information was primarily stored in the orbitofrontal cortex, which is therefore considered one of the most important regions in the human decision-making process.
Shallow Learning with Deep Representations
The decision-making capabilities and the latent factors involved in them form the basis of dormant memory in humans. An experiment on rats illustrates this phenomenon. Four rats were given electric shocks accompanied by a particular sound for a day or two; on the third day, they reacted to the sound alone, without being given any shock. Ivan Pavlov described this as classical conditioning: a relatively permanent change in behavior resulting from experience or continuous practice. Such conditioning can be deeply damaging, for example in patients with PTSD (Post-Traumatic Stress Disorder) and other trauma victims.
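Classical conditioning of this kind is often modeled with the Rescorla-Wagner learning rule, a standard textbook account rather than something specific to this talk. A brief sketch, with an assumed learning rate, shows how repeated tone-shock pairings drive the tone's predictive value toward 1:

```python
# Illustrative Rescorla-Wagner model of acquisition. The learning rate
# and trial counts are assumptions chosen for demonstration.

def rescorla_wagner(trials, v=0.0, alpha=0.3):
    """Update the associative strength v of the tone after each trial.
    Each trial outcome is 1 (tone + shock) or 0 (tone alone)."""
    for outcome in trials:
        v += alpha * (outcome - v)  # prediction error drives learning
    return v

# After repeated tone-shock pairings, the tone alone predicts the shock.
v = rescorla_wagner([1] * 10)
print(round(v, 3))  # 0.972
```

The prediction error (outcome minus expectation) shrinks on every trial, which is why the learned association persists once it is established: later confirmations barely change it.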
In order to understand how state representations are stored in memory, the reversal mechanism, i.e., how to undo the conditioning, also needs to be understood. For that, three techniques were tested on these rats:
- The rats were not given any shock but were subjected to the sound
- The rats were given shocks accompanied by sound at regular intervals and sounds without shock
- The shocks were slowly reduced in numbers but the sound continued
The best results in reversing the memory were observed with the third technique, known as gradual extinction. This shows that a simple reinforcement learning mechanism can be very effective, because it creates simple states that are efficient to manage and easy to train. If, in addition, we could extract information from brain imaging data of the orbitofrontal cortex, these simple representational states could shed a lot of light on making complex computational processes simpler and enable us to build smarter AI systems for a better future.
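One way to build intuition for why gradual extinction helps is the latent-state account: a large prediction error can lead the learner to infer a *new* state (leaving the old fear memory intact), whereas small errors modify the existing memory. The sketch below, with assumed schedules and parameters, runs the same error-driven update as above and compares the largest surprise produced by each of the three techniques:

```python
# Sketch comparing extinction schedules by the largest prediction error
# they produce. Schedules and parameters are illustrative assumptions.

def max_prediction_error(shock_probs, v=1.0, alpha=0.3):
    """Each entry is the probability the tone is paired with a shock;
    return the largest absolute prediction error encountered."""
    worst = 0.0
    for p in shock_probs:
        error = p - v          # expected outcome minus current prediction
        worst = max(worst, abs(error))
        v += alpha * error
    return worst

abrupt  = [0.0] * 10                                      # sound, no shock
partial = [0.5] * 10                                      # intermittent shocks
gradual = [0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0]

for name, schedule in [("abrupt", abrupt), ("partial", partial),
                       ("gradual", gradual)]:
    print(name, round(max_prediction_error(schedule), 2))
```

The gradual schedule keeps every prediction error small, so under the latent-state account the original memory is overwritten rather than shelved behind a newly inferred state, which is consistent with gradual extinction reversing the conditioning most effectively.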