What is frequency distribution and why does it matter?
In the context of natural language processing, a frequency distribution is simply a tally of how many times each unique word appears in a text. Recording these individual word counts helps us understand not only what topics are being discussed and what information is important, but also how that information is being discussed. It’s a useful method for better understanding language and different types of texts.
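To make the idea concrete, here is a minimal sketch of a word tally built with Python’s standard library alone (the sample sentence is invented for illustration):

```python
from collections import Counter

# A frequency distribution is just a tally of each unique word.
text = "the cat sat on the mat and the cat slept"
counts = Counter(text.split())

print(counts.most_common(2))  # → [('the', 3), ('cat', 2)]
```

Even this tiny tally hints at what the text is “about”: the most frequent content word, `cat`, surfaces immediately.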
This video tutorial has been taken from Natural Language Processing with Python.
Word frequency distribution is central to performing content analysis with NLP, and its applications are wide-ranging. From characterizing an author’s writing style to analyzing the vocabulary of rappers, the technique plays a large part in wider cultural conversations. It’s also used in psychological research to analyze how patients use language to form frameworks for thinking about themselves and the world. Trivial or serious, word frequency distribution is becoming more and more important in research. Of course, manually building such a word frequency distribution would be time-consuming and inconvenient for data scientists. Fortunately for us, NLTK, Python’s toolkit for natural language processing, makes life much easier.
How to use NLTK for frequency distribution
Take a look at how to use NLTK to create a frequency distribution for Herman Melville’s Moby Dick in the video tutorial above. In it, you’ll find a step-by-step guide to performing an important data analysis task. Once you’ve done that, you can try it for yourself, or have a go at performing a similar analysis on another data set.
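The core of the approach is NLTK’s `FreqDist` class. Here is a minimal sketch on a toy token list (the sample sentence is invented); the video applies the same steps to the full text of Moby Dick, which NLTK ships in its Gutenberg corpus (available after running `nltk.download('gutenberg')` once):

```python
import nltk

# FreqDist tallies how often each token occurs; the toy sentence below
# stands in for a tokenized text such as Moby Dick.
tokens = "the whale sank the boat and the whale swam on".split()
fdist = nltk.FreqDist(tokens)

print(fdist.most_common(3))  # the most frequent words with their counts
print(fdist["whale"])        # → 2, the count for a single word
```

`FreqDist` is a subclass of `collections.Counter`, so familiar methods like `most_common()` and dictionary-style lookups work exactly as you would expect.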
Learn more about natural language processing – read How to create a conversational assistant or chatbot using Python.