
Yesterday, Facebook posted a detailed report of its research into brain-computer interfaces (BCIs), with the aim of building a non-invasive device that can type what a person is imagining. This device is expected to be an input solution for future augmented reality (AR) glasses. Facebook first proposed its plan to build this technology at the F8 2017 conference.

Two weeks ago, Elon Musk presented ‘Neuralink’, a brain-computer interface technology that uses a sewing machine-like robot to implant ultrathin threads deep into the human brain. The threads connect to up to four sensor chips placed under the scalp, which are wired to an inductive coil behind the ear. A wearable device containing a Bluetooth radio and a battery sits over the coil, and the system is to be controlled through an iPhone app.

Neuralink aims to give people the ability to control computers and smartphones using their thoughts, while Facebook aims to read from the human brain to build an input solution for AR glasses.

Unlike Neuralink, Facebook plans to take a non-invasive route to reading minds. One part of Facebook’s research vision coincides with a paper titled “Real-time decoding of question-and-answer speech dialogue using human cortical activity”, published by a team of researchers from the University of California, San Francisco (UCSF) with support from Facebook.

The paper demonstrates real-time decoding of perceived and produced speech from high-density electrocorticography (ECoG) activity in humans: detecting when a participant heard or said an utterance and then decoding that utterance’s identity. The researchers were able to decode perceived and produced utterances with accuracy rates of 76% and 61%, respectively. The work aims to help patients who are unable to speak or move due to locked-in syndrome, paralysis, or epilepsy to interact on a rapid timescale similar to human conversation.

How does real-time decoding of question-and-answer speech work?

  • Three human epilepsy patients undergoing treatment at the UCSF Medical Center gave written informed consent to participate in this research. ECoG arrays were surgically implanted on the cortical surface (the outer layer of the cerebrum) of each participant.

[Image: the real-time decoding approach (source: the UCSF paper)]

  • In each trial, the participant heard a question and saw a set of possible answer choices on a screen. Participants were asked to choose one of the answers and say it aloud when a green response cue appeared on the screen.
  • At the same time, the participant’s cortical activity was acquired from the ECoG electrodes implanted on the temporal and frontal cortex. This cortical activity is filtered in real time to extract high gamma band activity (a sketch of this band-extraction step appears after this list). Next, a speech detection model uses the spatiotemporal pattern of high gamma activity to predict whether a question is being heard or an answer is being produced (or neither) at each time point.
  • If a question event is detected, the corresponding time window of high gamma activity is passed to a question classifier, which uses Viterbi decoding to compute the probability of each question utterance. The question with the highest probability is taken as the decoded question. The stimulus set is designed so that each answer is plausible only for a particular set of questions; these constraints are called context priors. The context priors are then combined with the predicted question probabilities to obtain answer priors.
  • When the speech detection model detects an answer event, the same procedure is followed with an answer classifier. Finally, a context integration model combines the answer probabilities with the answer priors to yield answer posterior probabilities. The answer with the highest posterior probability is taken as the output, i.e., the decoded answer (a sketch of this context-integration step also follows the list).
  • Thus, by integrating what the participants hear and say, the researchers use an “interactive question-and-answer behavioral paradigm” that approximates a real-world assistive communication setting for patients.
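For a concrete sense of what the high gamma extraction step might look like, here is a minimal sketch in Python with NumPy and SciPy. The band edges (roughly 70 to 150 Hz), sampling rate, filter order, and function name are illustrative assumptions, not details from the paper; the paper’s real-time pipeline is necessarily causal and more involved than this offline version.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def extract_high_gamma(ecog, fs=1000.0, band=(70.0, 150.0), order=4):
    """Band-pass each ECoG channel to the high gamma range and return
    its amplitude envelope. `ecog` has shape (channels, samples)."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    # filtfilt is zero-phase but non-causal; a real-time system would
    # instead use a causal filter and update the envelope sample by sample.
    filtered = filtfilt(b, a, ecog, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))  # analytic amplitude
    return envelope

# Toy input: 128 electrodes, 2 seconds of data sampled at 1 kHz
ecog = np.random.randn(128, 2000)
high_gamma = extract_high_gamma(ecog)  # shape (128, 2000)
```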
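The context-integration logic in the last two steps is essentially Bayes’ rule: the predicted question distribution and the question-to-answer context priors yield a prior over answers, which then reweights the answer classifier’s likelihoods. The sketch below captures that arithmetic; the variable names and toy numbers are assumptions for illustration, not the paper’s implementation.

```python
import numpy as np

def answer_priors(question_probs, context_priors):
    """Combine question probabilities with context priors P(answer | question)
    to obtain a prior distribution over answers."""
    # question_probs: (n_questions,); context_priors: (n_questions, n_answers)
    priors = question_probs @ context_priors
    return priors / priors.sum()

def decode_answer(answer_likelihoods, priors):
    """Weight the answer classifier's likelihoods by the priors, normalize
    to posteriors, and return the decoded answer index plus the posteriors."""
    posteriors = answer_likelihoods * priors
    posteriors /= posteriors.sum()
    return int(np.argmax(posteriors)), posteriors

# Toy example: 2 questions, 3 possible answers
question_probs = np.array([0.8, 0.2])    # from the question classifier
context = np.array([[0.9, 0.1, 0.0],     # answers plausible for question 1
                    [0.0, 0.2, 0.8]])    # answers plausible for question 2
priors = answer_priors(question_probs, context)
likelihoods = np.array([0.3, 0.5, 0.2])  # from the answer classifier
best, posteriors = decode_answer(likelihoods, priors)
```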

Although the researchers were unable to “make quantitative and definitive claims about the relationship between decoder performance and functional-anatomical coverage” with three participants, they are satisfied that this is a promising step towards the goal of “demonstrating that produced speech can be detected and decoded from neural activity in real-time while integrating dynamic information from the surrounding context.”

Many people have found this fascinating and consider the research to have important implications for patients who are unable to communicate.

How is Facebook using brain-computer interface for AR?

The UCSF researchers have noted that their algorithm is so far capable of recognizing only a small set of words and phrases, and they are working towards decoding a much larger vocabulary. Facebook says that Facebook Reality Labs (FRL) researchers have limited access to de-identified data, which remains onsite at UCSF and under its control at all times.

In the post, Facebook states that FRL has built a research kit of a wearable brain-computer interface device. Researchers have been testing the system’s ability to decode single imagined words like “home,” “select,” and “delete” with non-invasive technologies, using near-infrared light. The post also says that though the system is currently bulky, slow, and unreliable, its potential is significant.

“We don’t expect this system to solve the problem of input for AR anytime soon. It could take a decade, but we think we can close the gap,” Facebook writes in the post.

Facebook is building towards a bigger goal of implementing systems that can “interact with today’s VR systems — and tomorrow’s AR glasses.”

Users are highly skeptical of Facebook’s vision, with many doubting Facebook’s intentions in exploring the human brain in the name of AR wearables.

Some have raised ethical questions, arguing that probing the human brain in the name of research is risky and disturbing.

Many have also raised concerns that Facebook, a company that has been in the bad books lately (read: data breaches, GDPR violations, tracking of user data), cannot be trusted.

Facebook did manage to find a few supporters who were excited about the technology.

Read Next

Along with platforms like Facebook, now websites using embedded ‘Like’ buttons are jointly responsible for what happens to the collected user data, rules EU court

The US Justice Department opens a broad antitrust review case against tech giants

Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology
