

Dr. Brandon: Welcome everyone to the first episode of ‘Date with data science’. I am Dr. Brandon Hopper, B.S., M.S., Ph.D., Senior Data Scientist at BeingHumanoid and visiting faculty at Fictional AI University.

Jon: And I am just Jon – actor, foodie, and Brandon’s fun friend. I don’t have any letters after my name, but I can say the alphabet in reverse order. Pretty cool, huh?

Dr. Brandon: Yes, I am sure our readers will find it very amusing, Jon. Speaking of the alphabet, today we discuss NLP.

Jon: Wait, what is NLP? Is it that thing Ashley’s working on?

Dr.Brandon: No. The NLP we are talking about today is Natural Language Processing, not to be confused with Neuro-Linguistic Programming.  

Jon: Oh alright. I thought we just processed cheese. How do you process language? Don’t you start with ‘to understand NLP, we must first understand how humans started communicating’! And keep it short and simple, will you?

Dr. Brandon: OK, I will try my best to do all of the above if you promise not to doze off. The following is an excerpt from the book Mastering Machine Learning with Spark 2.x by Alex Tellez, Max Pumperla, and Michal Malohlava.


 

NLP helps analyze raw textual data and extract useful information such as sentence structure, the sentiment of a text, or even translations between languages. Since many sources of data contain raw text (for example, reviews, news articles, and medical records), NLP is becoming more and more popular, as it provides insight into the text and makes automated decision-making easier.

Under the hood, NLP often uses machine-learning algorithms to extract and model the structure of text. The power of NLP is much more visible when it is applied in the context of another machine-learning method, where, for example, text can represent one of the input features.
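To make that idea concrete, here is a minimal sketch (not from the book, which builds its pipelines on Spark) using plain Python and scikit-learn: raw review text is turned into TF-IDF features, and those features then feed an ordinary classifier, exactly the “text as an input feature” pattern described above. The tiny corpus and labels are invented for illustration.

```python
# Minimal sketch: TF-IDF text features feeding an ordinary classifier.
# Assumes scikit-learn is installed; the corpus and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I loved this movie, great acting",
    "Terrible plot and poor acting",
    "An absolute delight to watch",
    "Boring, I want my two hours back",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# The vectorizer extracts the text features; the classifier consumes them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["what a great film"]))  # most likely [1]
```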

NLP – a brief primer

Just like artificial neural networks, NLP is a relatively “old” subject, but one that has garnered a massive amount of attention recently due to the rise of computing power and various applications of machine learning algorithms for tasks that include, but are not limited to, the following:

Machine translation (MT): In its simplest form, this is the ability of machines to translate text from one language into another. Interestingly, proposals for machine translation systems pre-date the creation of the digital computer. One of the first NLP applications was created during World War II by an American scientist named Warren Weaver, whose job was to try and crack German code. Nowadays, we have highly sophisticated applications that can translate a piece of text into any number of different languages we desire!

Speech recognition (SR): These methodologies and technologies attempt to recognize and translate spoken words into text using machines. We see these technologies in smartphones nowadays, which use SR systems for tasks ranging from helping us find directions to the nearest gas station to querying Google for the weekend’s weather forecast. As we speak into our phones, a machine recognizes the words we are saying and translates them into text that the computer can process and, if need be, act upon.

Information retrieval (IR): Have you ever read a piece of text, such as an article on a news website, and wanted to see similar articles to the one you just read? This is but one example of an information retrieval system, which takes a piece of text as an “input” and seeks to obtain other relevant pieces of text similar to it. Perhaps the easiest and most recognizable example of an IR system is a search on a web-based search engine: we give some words that we want to “know” more about (this is the “input”), and the output is the set of search results, which are hopefully relevant to our query.
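As a small, hedged illustration of the “input text in, similar documents out” idea (again plain scikit-learn rather than anything from the book), the snippet below ranks a handful of invented documents against a query by TF-IDF cosine similarity:

```python
# Minimal IR-style sketch: rank documents by similarity to a query.
# Assumes scikit-learn is installed; documents and query are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Central bank raises interest rates to fight inflation",
    "Local team wins the championship after dramatic final",
    "New smartphone announced with improved camera",
    "Stock markets fall as interest rate worries grow",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

query = "interest rates and inflation"
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity to the query, highest first.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```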

Information extraction (IE): This is the task of extracting structured bits of information from unstructured data such as text, video, and pictures. For example, when you read a blog post on some website, the post is often tagged with a few keywords that describe its general topics, and these tags can be assigned using information extraction systems. One extremely popular avenue of IE is called Visual Information Extraction, which attempts to identify complex entities from the visual layout of a web page, for example, which would not be captured by typical NLP approaches.
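To show the “structured records out of unstructured text” shape of the task in the simplest possible way, here is a sketch that uses nothing more than regular expressions; real IE systems use learned models, and the example text is invented, but the input/output contract is the same.

```python
# Minimal IE-style sketch: pull structured fields out of free text.
# The post text below is invented for illustration.
import re

post = (
    "Contact the author at alex.doe@example.com before 2017-11-30 "
    "or reach the editor at editor@example.org by 2017-12-15."
)

# Extract e-mail addresses and ISO-style dates as structured output.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", post)
dates = re.findall(r"\d{4}-\d{2}-\d{2}", post)

print({"emails": emails, "dates": dates})
# {'emails': ['alex.doe@example.com', 'editor@example.org'],
#  'dates': ['2017-11-30', '2017-12-15']}
```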

Text summarization (darn, no acronym here!): This is a hugely popular area of interest. It is the task of taking pieces of text of various lengths and summarizing them, for example by identifying the topics they cover. In the next chapter, we will explore two popular approaches to text summarization via topic models: Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA).
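As a teaser for that chapter, here is a minimal, hedged LDA sketch using scikit-learn (the book itself builds its topic models on Spark). The corpus is invented and tiny, so the discovered “topics” are only illustrative.

```python
# Minimal topic-modeling sketch with Latent Dirichlet Allocation.
# Assumes scikit-learn is installed; the documents are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the movie had wonderful acting and a gripping plot",
    "the film's plot was weak but the acting was fine",
    "the election results surprised political analysts",
    "voters went to the polls for the national election",
]

# LDA works on word counts rather than TF-IDF weights.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words of each discovered topic.
terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"Topic {topic_id}: {', '.join(top_terms)}")
```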

If you enjoyed the above excerpt from the book Mastering Machine Learning with Spark 2.x by Alex Tellez, Max Pumperla, and Michal Malohlava, check out the book to learn how to:

  • Use Spark streams to cluster tweets online
  • Utilize generated models for offline/online prediction
  • Transfer learning from an ensemble to a simpler neural network
  • Use GraphFrames, an extension of DataFrames to graphs, to study graphs using an elegant query language
  • Use the K-means algorithm to cluster a movie reviews dataset, and more

