
In recent years, the world has witnessed a global move towards digitization. Massive improvements in computational capability, driven by the boom in the AI chip market and large-scale computation farms, have produced an abundance of data and fast data-processing ecosystems accessible to everyone – important pillars for the growth of AI and allied fields. Terms such as ‘machine learning’ and ‘deep learning’ in particular have gained a lot of traction in the data science community, mainly because of the multitude of domains they lend themselves to. Along with image processing, computer vision and games, one key area transformed by machine learning, and more recently by deep learning, is Natural Language Processing, commonly known as NLP.

Human language is a heady concoction of otherwise incoherent words and phrases, with more exceptions than rules, full of jargon and words with multiple meanings. Making machines comprehend a human language in all its glory, not to mention its users’ idiosyncrasies, can be quite a challenge. Then there is the matter of there being thousands of languages, dialects, accents and slang terms. Yet, it is a challenge worth taking up – mainly because language finds its way into almost everything humans do, from web search to e-mail to content curation, and more. According to Tractica, a market intelligence firm, the Natural Language Processing market will reach $22.3 billion by 2025.

NLP Evolution – From Machine Learning to Deep Learning

Before deep learning turned NLP systems into smarter conversational machines, natural language was processed with machine learning. These machine learning based NLP systems were shallow models built on hand-crafted features that were often incomplete and time-consuming to engineer. They relied on algorithms such as support vector machines (SVM) and logistic regression, and found their applications in tasks such as spam detection in emails, grouping together similar words in a document, article spinning, and much more.

ML-based NLP systems relied heavily on the quality of the training data. Because of the limited capabilities of such models, the classical NLP approach fell short when it came to understanding high-level text and speech produced by humans. Machine learning algorithms could handle only narrow, hand-crafted features and as such could not perform the high-level reasoning that human conversation often requires. Also, as the scale of data grew, machine learning ceased to be an effective tool for the various NLP problems of efficiently training and optimizing models. Here is where deep learning proves to be a stepping stone.

Deep learning is built on Artificial Neural Networks (ANNs), which function similarly to neurons in the human brain – one reason they are considered to emulate human thinking remarkably well. Deep learning models perform significantly better as the quantity of data fed to them increases. For instance, Google’s Smart Reply can generate relevant responses to the emails a user receives. This system uses a pair of RNNs, one to encode the incoming mail and the other to predict relevant responses. With the incorporation of DL in NLP, the need for feature engineering is greatly reduced, saving time – a major asset. This means machines can be trained to understand languages other than English without complex, custom feature engineering, simply by applying deep neural network models. Even as language constantly evolves, the quest to make machines ever friendlier to humans is made possible using deep learning.
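To make the encoder-decoder idea concrete, here is a minimal sketch (not Google's actual Smart Reply implementation) of a pair of recurrent networks in Keras: one LSTM encodes the incoming message into a state vector, and a second LSTM, seeded with that state, generates a reply token by token. The vocabulary size, embedding dimension and layer widths are all illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000    # illustrative vocabulary size
EMBED_DIM = 128
HIDDEN_UNITS = 256

# Encoder: reads the incoming message and compresses it into a state vector.
encoder_inputs = layers.Input(shape=(None,), name="incoming_tokens")
enc_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(encoder_inputs)
_, state_h, state_c = layers.LSTM(HIDDEN_UNITS, return_state=True)(enc_embed)

# Decoder: generates the reply one token at a time, seeded with the encoder state.
decoder_inputs = layers.Input(shape=(None,), name="reply_tokens")
dec_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(decoder_inputs)
dec_outputs, _, _ = layers.LSTM(
    HIDDEN_UNITS, return_sequences=True, return_state=True
)(dec_embed, initial_state=[state_h, state_c])
reply_probs = layers.Dense(VOCAB_SIZE, activation="softmax")(dec_outputs)

model = Model([encoder_inputs, decoder_inputs], reply_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```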

Key Deep Learning techniques used for NLP

NLP-based deep learning models make use of word embeddings, pre-trained on a large corpus or collection of unlabeled data. With advancements in word embedding techniques, the ability of machines to derive deeper insights from language has increased. A widely used technique is Word2vec, which converts a given word into a vector so that machines can reason about it numerically. The continuous-bag-of-words (CBOW) and skip-gram models – the two models used for learning these word vectors – help capture the patterns in which words appear within sentences. Skip-gram predicts the surrounding context words from the center word and works well on large datasets, whereas CBOW does the reverse, predicting the center word from its context. Similarly, GloVe also computes vector representations, but by factorizing a matrix of global word co-occurrence counts.
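As a rough illustration of how such embeddings are trained in practice, the sketch below uses the gensim library (one common choice, not something the article prescribes) on a toy corpus; the sg flag switches between skip-gram and CBOW, and all sizes are illustrative.

```python
from gensim.models import Word2Vec

# Toy corpus: in practice this would be a large collection of tokenized sentences.
sentences = [
    ["deep", "learning", "transforms", "natural", "language", "processing"],
    ["word", "embeddings", "map", "words", "to", "dense", "vectors"],
    ["skip", "gram", "predicts", "context", "words", "from", "a", "center", "word"],
]

# sg=1 selects the skip-gram model; sg=0 would train continuous-bag-of-words instead.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

vector = model.wv["language"]                      # 100-dimensional vector for one word
similar = model.wv.most_similar("language", topn=3)
print(vector.shape, similar)
```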

A disadvantage of the word embedding approach is that it cannot understand phrases and sentences as units. As mentioned earlier, the bag-of-words model converts each word into a corresponding vector. This can simplify many problems, but it can also lose the context of the text. For instance, it may not collectively understand idioms or sub-phrases such as “Break a leg”. Likewise, negation or qualifier words such as ‘not’ and ‘but’, which change the semantic meaning of neighboring words, are difficult for the model to capture. A related refinement is ‘negative sampling’, a frequency-based sampling of negative (non-context) terms while training the word2vec model, which makes training more efficient. This is where neural networks can come into play.
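One common partial workaround for the phrase problem is to merge frequent multi-word expressions into single tokens before training the embeddings. The sketch below uses gensim’s Phrases/Phraser utilities; the thresholds and the tiny corpus are purely illustrative, and real idiom handling needs far more data.

```python
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

# Toy corpus where "break a leg" should ideally be treated as one unit.
sentences = [
    ["break", "a", "leg", "at", "the", "audition"],
    ["she", "told", "him", "to", "break", "a", "leg"],
    ["break", "a", "leg", "before", "the", "show"],
]

# Detect frequent bigrams such as "break_a"; applying the transform twice
# can promote them to longer units like "break_a_leg".
bigram = Phraser(Phrases(sentences, min_count=2, threshold=1))
trigram = Phraser(Phrases(bigram[sentences], min_count=2, threshold=1))
phrased = [trigram[bigram[s]] for s in sentences]

# Train word2vec on the phrase-merged tokens so idioms get their own vectors;
# negative=5 enables frequency-based negative sampling during training.
model = Word2Vec(phrased, vector_size=50, min_count=1, negative=5, epochs=50)
print(phrased[0])
```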

CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks) are the two most widely used neural network models in NLP. CNNs are good performers for text classification. However, the downside is that they are poor at learning sequential information from text. Expresso, built on Caffe, is one of the many tools used to develop CNNs. RNNs are often preferred over CNNs for NLP as they process input sequentially. For example, an RNN can differentiate between the words ‘fan’ and ‘fan-following’ by taking the surrounding sequence into account. This means RNNs are better equipped to handle complex dependencies and unbounded texts. Also, unlike CNNs, RNNs can handle input of arbitrary length because their computation unrolls over as many steps as the input requires. All of the above highlight why RNNs have better modeling potential than CNNs as far as NLP is concerned.
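As a minimal sketch of what a CNN text classifier looks like in Keras (the hyperparameters and the binary spam-vs-not-spam framing are illustrative assumptions, not taken from any production system):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000   # illustrative vocabulary size
MAX_LEN = 200        # sequences padded/truncated to this length

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # 1-D convolutions slide over word positions and pick up local n-gram patterns.
    layers.Conv1D(filters=64, kernel_size=5, activation="relu"),
    # Max-pooling keeps the strongest feature per filter but discards word order,
    # which is why CNNs struggle with longer-range sequential information.
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary label, e.g. spam vs. not spam
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```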

Although RNNs are often the preferred choice, they have a limitation: the vanishing gradient problem. This problem can be addressed with LSTM (Long Short-Term Memory), which helps capture the associations between words across long spans of text by letting errors back-propagate through many more steps. An LSTM cell includes a forget gate, which discards information from the cell state when carrying it forward is no longer useful; this allows long-term dependencies to be learned rather than lost. Other than LSTM, GRU (Gated Recurrent Units) is also widely used to mitigate the vanishing gradient problem.
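A rough sketch of how these gated recurrent layers drop into the same kind of text classifier in Keras; switching between LSTM and GRU is a one-line change, and all sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000
MAX_LEN = 200

def build_recurrent_classifier(cell="lstm"):
    # The LSTM cell's gates (including the forget gate) decide what to keep or
    # discard from the cell state, which keeps gradients from vanishing over
    # long sequences; GRU achieves a similar effect with fewer parameters.
    recurrent = layers.LSTM(64) if cell == "lstm" else layers.GRU(64)
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),
        recurrent,
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

lstm_model = build_recurrent_classifier("lstm")
gru_model = build_recurrent_classifier("gru")
```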

Current Implementations

Deep Learning is good at identifying patterns within unstructured data, and social media is a major source of unstructured content – a goldmine for sentiment analysis. Facebook uses DeepText, a deep learning based text understanding engine, which can understand the textual content of thousands of posts with near-human accuracy. CRM systems strive to maximize customer lifetime value by understanding what customers want and then taking appropriate measures. TalkIQ uses neural-network-based text analysis and deep learning models to extract meaning from the conversations that organizations have with their customers, in order to gain deeper insights in real time.

Google’s Cloud Speech API helps convert audio to text and can recognize speech in 110 languages. Other implementations include automated text summarization for condensing the key ideas in a long document, and speech processing for converting voice requests into search recommendations, among others.
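For illustration, a minimal sketch of calling the Cloud Speech API from the google-cloud-speech Python client is shown below; it assumes credentials are already configured and that a 16 kHz mono WAV file exists at the given (hypothetical) path, and parameter names can differ slightly between client-library versions.

```python
from google.cloud import speech

client = speech.SpeechClient()

# Read a short mono WAV file recorded at 16 kHz (path and format are illustrative).
with open("voice_request.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",   # one of the many supported language codes
)

# Synchronous recognition: returns transcripts for each detected utterance.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```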

Many other areas that make use of speech and text analytics – fraud detection tools, UI/UX, IoT devices, and more – can perform markedly better by incorporating deep learning neural network models.

The future of NLP with Deep Learning

With the advancements in deep learning, machines will be able to understand human communication in a much more comprehensive way. They will be able to extract complex patterns and relationships and decipher the variations and ambiguities in various languages. This will enable some interesting use-cases – smarter chatbots being a very important one. Understanding complex, longer customer queries and giving out accurate answers are what we can expect from these chatbots in the near future. The advancements in NLP and deep learning could also lead to expert systems that perform smarter searches, allowing applications to search for content using informal, conversational language. Understanding and interpreting unindexed, unstructured information – currently a challenge for NLP – should also become possible.

The possibilities are definitely there – how NLP evolves by blending itself with the innovations in Artificial Intelligence is all that remains to be seen.

