Yesterday, Facebook posted a detailed report of its research into brain-computer interfaces (BCI), with the aim of building a non-invasive device that would type what a person is imagining. The device is expected to be an input solution for future augmented reality (AR) glasses. Facebook first announced its plan to build this technology at the F8 2017 conference.
Today we’re sharing an update on our work to build a non-invasive wearable device that lets people type just by imagining what they want to say. Our progress shows real potential in how future inputs and interactions with AR glasses could one day look. https://t.co/ilk192GwAR
— Boz (@boztank) July 30, 2019
Two weeks ago, Elon Musk presented ‘Neuralink’, which is based on brain-computer interface technology. It uses a sewing machine-like robot that can implant ultrathin threads deep into the human brain. The threads connect to four sensors placed under the scalp, which are wired to an inductive coil behind the ear. A wearable device containing a Bluetooth radio and a battery will sit behind the ear and be controlled through an iPhone app.
Neuralink aims to give people the ability to control computers and smartphones using their thoughts, while Facebook aims to read activity from the human brain to build an input solution for AR glasses.
Unlike Neuralink, Facebook plans to take a non-invasive route to reading minds. Part of Facebook’s research vision is reflected in a paper titled “Real-time decoding of question-and-answer speech dialogue using human cortical activity”, published by a team of researchers from the University of California, San Francisco (UCSF) and supported by Facebook.
The paper demonstrates real-time decoding of perceived and produced speech from high-density electrocorticography (ECoG) activity in humans: the system detects when a participant hears or says an utterance and then decodes the utterance’s identity. The researchers decoded perceived and produced utterances with 76% and 61% accuracy, respectively. The work aims to help patients who are unable to speak or move due to locked-in syndrome, paralysis or epilepsy to interact on a rapid timescale similar to that of human conversation.
How does real-time decoding of question-and-answer speech work?
- Three human epilepsy patients undergoing treatment at the UCSF Medical Center gave their written informed consent to participate in the research. ECoG arrays were surgically implanted on the cortical surface (the outer layer of the cerebrum) of each participant.
Image Source: Real-time decoding approach
- In each trial, a participant heard a question while a set of possible answer choices was displayed on a screen. The participants were asked to choose one of the answers and say it aloud when a green response cue appeared on the screen.
- At the same time, the participant’s cortical activity was acquired from the ECoG electrodes implanted on the surface of the temporal and frontal cortex. The cortical activity was then filtered in real time to extract high-gamma activity. Next, a speech detection model used the spatiotemporal pattern of high-gamma activity to predict whether a question was being heard or an answer was being produced (or neither) at each time point.
- If a question event is detected, the time window of high-gamma activity is passed to a question classifier, which uses Viterbi decoding to compute the probability of each question utterance. The question with the highest probability is taken as the decoded question. The stimulus set was designed so that each answer is plausible only for a particular set of questions; these relationships are called context priors. The context priors are then combined with the predicted question probabilities to obtain answer priors.
- When the speech detection model detects an answer event, the same procedure is followed with an answer classifier. Finally, the context integration model combines the answer probabilities with the answer priors to yield answer posterior probabilities. The answer with the highest posterior probability is taken as the output, i.e., the decoded answer.
- Thus, by integrating what the participants hear and say, the researchers use “an interactive question-and-answer behavioral paradigm” to approximate a real-world assistive communication setting for patients.
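The context-integration step above is, at its core, an application of Bayes’ rule: the decoded question probabilities are combined with the context priors to get a prior over answers, which is then multiplied by the answer classifier’s likelihoods to obtain posteriors. The sketch below illustrates that idea with toy numbers; all names, shapes, and values are illustrative assumptions, not the authors’ actual code.

```python
import numpy as np

# Hypothetical sketch of the context-integration step (not the authors' code).

def answer_priors(question_probs, context_priors):
    """Combine question probabilities with context priors.

    question_probs: (n_questions,) output of the question classifier.
    context_priors: (n_questions, n_answers) matrix where row q gives
        P(answer | question q), nonzero only for plausible answers.
    Returns a (n_answers,) prior over answers.
    """
    priors = question_probs @ context_priors
    return priors / priors.sum()

def decode_answer(answer_likelihoods, priors):
    """Bayes' rule: posterior ∝ likelihood × prior; return argmax and posterior."""
    posterior = answer_likelihoods * priors
    posterior = posterior / posterior.sum()
    return int(np.argmax(posterior)), posterior

# Toy example: 2 possible questions, 3 possible answers.
q_probs = np.array([0.8, 0.2])           # question classifier output
ctx = np.array([[0.5, 0.5, 0.0],         # question 0 -> answers 0/1 plausible
                [0.0, 0.0, 1.0]])        # question 1 -> only answer 2 plausible
likes = np.array([0.2, 0.5, 0.3])        # answer classifier likelihoods

priors = answer_priors(q_probs, ctx)
best, post = decode_answer(likes, priors)
```

Because the context priors zero out answers that are implausible for the decoded question, ambiguous acoustic evidence from the answer classifier gets sharpened by what the system already knows about the dialogue.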
Although the researchers were unable to “make quantitative and definitive claims about the relationship between decoder performance and functional-anatomical coverage” with three participants, they are satisfied that this is a promising step towards the goal of “demonstrating that produced speech can be detected and decoded from neural activity in real-time while integrating dynamic information from the surrounding context.”
Many people found the research fascinating and consider it to have important implications for patients who are unable to communicate.
This is very big news for #stroke #aphasia and clinical neuroscience in general: Real-time decoding of question-and-answer speech dialogue using human cortical activity | Nature Communications https://t.co/UxUSs0rWsS
— Joseph Kwan MD 🇪🇺 (@drjkwan) July 30, 2019
How is Facebook using brain-computer interface for AR?
The UCSF researchers note that their algorithm can currently recognize only a small set of words and phrases, and they are working toward decoding a much larger vocabulary. Facebook says that Facebook Reality Labs (FRL) researchers have limited access to de-identified data, which remains onsite at UCSF and under its control at all times.
In the post, Facebook states that FRL has built a research kit of a wearable brain-computer interface device. FRL has been testing the system’s ability to decode single imagined words like “home,” “select,” and “delete” with non-invasive technologies that use near-infrared light. Facebook also says that though the system is currently bulky, slow, and unreliable, its potential is significant.
“We don’t expect this system to solve the problem of input for AR anytime soon. It could take a decade, but we think we can close the gap,” Facebook writes in the post.
Facebook is building towards a bigger goal of implementing systems that can “interact with today’s VR systems — and tomorrow’s AR glasses.”
Users are highly skeptical of Facebook’s vision, with many doubting Facebook’s intentions in exploring brain control in the name of AR wearables.
So Facebook is seeking ways to mine your mind. Totally nefarious.
— Darryl Zaontz CFA (@HuxleysRazor) July 30, 2019
People don't trust FB with their data or money, WHY would they trust FB with their brain????!
— Caroline Julianna (@Croftt) July 30, 2019
Many have also raised ethical concerns, arguing that probing the human brain in the name of research is risky and disturbing.
Title of future news article: “23 million brains hacked in a massive breach of data security.”
— Gjergj Dollani (@gjergjdollani) July 30, 2019
My thoughts are mine and aren't for companies to monetize. #DeleteFacebook
— Michael Cholod (@MichaelCholod) July 30, 2019
This is going to really speed up the flow of online stupidity. Can't wait!
— Paulo Hubert (@hubertpaulo) July 30, 2019
I want to hear about your neuroethical guidelines, I want to hear about the ethics board you have set up in place, I want to see the neuro-EULA, I want you to prove to the world facebook can be trusted with this kind of information.
— Sterling Crispin 🕊️ (@sterlingcrispin) July 31, 2019
I'm telling you my concerns. I'm concerned there's no ethics committee overseeing this work. I'm concerned you've already built a global panopticon and now you're going into people's heads. I'm concerned youll use this information for behavioral modification and information warfare.
— Sterling Crispin 🕊️ (@sterlingcrispin) July 31, 2019
Facebook did manage to find a few supporters who were excited about the technology.
What an amazing technological advancement, which I completely trust Facebook to manage and apply ethically! 😳😳😳🤯
— Ryan Forde-Kelly (@FKSportsBlog) July 31, 2019
This is why I love tech
— Drew Roberts (@DrewRoberts) July 30, 2019