Data

Researchers at Columbia University use deep learning to translate brain activity into words in epilepsy patients

2 min read

Researchers at Columbia University have carried out a successful experiment in which they translated brain activity into words using deep learning and a speech synthesizer. They used auditory stimulus reconstruction, a technique that combines recent advances in deep learning with the latest innovations in speech synthesis to reconstruct closed-set intelligible speech from recordings of the human auditory cortex.

They temporarily placed electrodes in the brains of five people who were about to undergo brain surgery for epilepsy. The five patients were asked to listen to recordings of sentences, and their brain activity was used to train deep-learning-based speech recognition software. They were then asked to listen to 40 numbers being spoken.
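The study's actual models are more elaborate, but the general recipe of regressing an audio representation from recorded neural activity can be sketched as follows. This is an illustrative outline only, not the researchers' implementation: the feature sizes, network shape, and names (NeuralToSpeechDecoder, N_ELECTRODE_FEATURES, and so on) are assumptions made for the example.

```python
# Minimal sketch: regress auditory spectrogram frames from windows of neural
# activity (e.g. high-gamma ECoG features) recorded while patients listened
# to sentences. Shapes and names are hypothetical.
import torch
import torch.nn as nn

N_ELECTRODE_FEATURES = 128   # flattened neural features per time window (assumed)
N_SPECTROGRAM_BINS = 129     # magnitude bins of a 256-point STFT frame (assumed)

class NeuralToSpeechDecoder(nn.Module):
    """Feed-forward regressor from neural features to spectrogram frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ELECTRODE_FEATURES, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, N_SPECTROGRAM_BINS),
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=10):
    """Fit the decoder on (neural_window, spectrogram_frame) pairs collected
    while the patients listened to spoken sentences."""
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for neural, target in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(neural), target)
            loss.backward()
            optimiser.step()
    return model
```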

The AI then tried to decode what they heard from their brain activity and spoke the results aloud in a robotic voice. According to listeners asked to judge the output, the synthesized voice was understandable as the correct word 75% of the time.
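As a rough picture of the decoding step, the sketch below applies a trained decoder of the kind shown above to held-out neural activity and converts the predicted spectrogram back into audio. The study's own synthesis used a vocoder and a neural-network-based synthesizer; Griffin-Lim phase reconstruction is used here purely as a simple stand-in, and all shapes and file names are assumptions.

```python
# Illustrative decoding step only: turn predicted spectrogram frames into a
# waveform. Griffin-Lim is a stand-in for the paper's vocoder-based synthesis.
import numpy as np
import librosa
import soundfile as sf
import torch

def synthesize_digit(model, neural_windows, sr=16000, hop_length=128):
    """neural_windows: (time, N_ELECTRODE_FEATURES) array recorded while a
    patient heard one spoken digit; returns a reconstructed waveform."""
    model.eval()
    with torch.no_grad():
        pred = model(torch.as_tensor(neural_windows, dtype=torch.float32))
    # Treat the network output as a magnitude spectrogram (freq_bins x time)
    # and invert it to audio.
    magnitude = np.clip(pred.numpy().T, 0.0, None)
    audio = librosa.griffinlim(magnitude, hop_length=hop_length)
    sf.write("reconstructed_digit.wav", audio, sr)
    return audio
```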


Source: Nature.com

According to MIT Technology Review, “At the moment the technology can only reproduce words that these five patients have heard—and it wouldn’t work on anyone else.” However, the researchers believe that such a technology could help people who have been paralyzed and have lost the ability to speak communicate with their family and friends.

Dr. Nima Mesgarani, an associate professor at Columbia University, said, “One of the motivations of this work…is for alternative human-computer interaction methods, such as a possible interface between a user and a smartphone.”

According to the report, “Our approach takes a step toward the next generation of human-computer interaction systems and more natural communication channels for patients suffering from paralysis and locked-in syndromes.”

To learn more about this experiment, head over to the complete report.

Read Next

Using deep learning methods to detect malware in Android Applications

Researchers introduce deep learning method that converts mono audio recordings into 3D sounds using video scenes

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Savia Lobo

A data science fanatic. Loves to be updated with the tech happenings around the globe. Loves singing and composing songs. Believes in putting the art in smart.
