
In this article by Jalaj Thanaki, the author of the book Python Natural Language Processing, we discuss how to develop a natural language processing (NLP) application.

In this article, we will be developing a spam filtering application. In order to develop it, we will use a supervised machine learning (ML) algorithm named logistic regression. You could also use a decision tree, Naive Bayes, or a support vector machine (SVM). To make this happen, the following steps will be covered:

  • Understanding the logistic regression ML algorithm
  • Data collection and exploration
  • Splitting the dataset into a training dataset and a testing dataset


Understanding the logistic regression ML algorithm

Let’s understand the logistic regression algorithm first. For this classification algorithm, I will give you an intuition of how it works, and we will look at some basic mathematics related to it. Then we will build the spam filtering application.

First, to understand the algorithm and its application, we consider binary classes such as spam or not-spam, good or bad, win or lose, 0 or 1, and so on. Suppose I want to classify emails into the spam and non-spam (ham) categories, so spam and non-spam are the discrete output labels, or target concepts, here. Our goal is to predict whether a new email is spam or not-spam. Not-spam is also known as ham. In order to build this NLP application, we are going to use logistic regression.

Let’s step back for a while and understand the technicalities of the algorithm first.

Here I’m stating the facts related to the mathematics of this algorithm in a very simple manner, so everyone can understand the logic. The general approach for understanding this algorithm is as follows. If you already know some ML, then you can connect the dots, and if you are new to ML, then don’t worry, because we are going to cover every part:

  • We define our hypothesis function, which helps us generate our target output or target concept
  • We define the cost function or error function, and we choose the error function in such a way that we can derive its partial derivative easily, so we can calculate gradient descent easily
  • Over time, we try to minimize the error so that we can generate more accurate labels and classify the data accurately

In statistics, logistic regression is also called logit regression or the logit model. This algorithm is mostly used as a binary class classifier, which means there should be two different classes into which you want to classify the data. The binary logistic model is used to estimate the probability of a binary response, and it generates the response based on one or more predictors, also called independent variables or features. By the way, the basic mathematical concepts behind this ML algorithm are used in deep learning (DL) as well.

First, I want to explain why this algorithm is called logistic regression. The reason is that the algorithm uses the logistic function, also known as the sigmoid function; logistic function and sigmoid function are synonyms.

We use the sigmoid function as the hypothesis function, and this function belongs to the hypothesis class. Now, what do we mean by a hypothesis function? As we have seen earlier, the machine has to learn the mapping between data attributes and the given labels in such a way that it can predict the label for new data. This can be achieved if the machine learns this mapping using a mathematical function. That mathematical function is called the hypothesis function, which the machine will use to classify the data and predict the labels or target concept.

Here, as I said, we want to build a binary classifier, so our label is either spam or ham. Mathematically, I can assign 0 for ham or not-spam and 1 for spam, or vice versa, as per your choice. These mathematically assigned labels are our dependent variables. Now we need our output labels to be either zero or one. Mathematically, the label is y and y ∈ {0, 1}. So we need to choose a hypothesis function that converts our output value into either zero or one, and the logistic function, or sigmoid function, does exactly that. This is the main reason why logistic regression uses the sigmoid function as its hypothesis function.

Logistic or sigmoid function

Let me give you the mathematical equation for the logistic or sigmoid function, g(z) = 1 / (1 + e^(-z)). Refer to Figure 1:

Figure 1: Logistic or sigmoid function

You can see the plot showing g(z). Here, g(z) = Φ(z). Refer to Figure 2:

Figure 2: Graph of sigmoid or logistic function

From the preceding graph, you can see the following facts:

  • If the value of z is greater than or equal to zero, then the logistic function gives an output value of at least 0.5, tending towards one as z grows.
  • If the value of z is less than zero, then the logistic function, or sigmoid function, gives an output below 0.5, tending towards zero as z becomes more negative.

You can see the following mathematical condition for the logistic function: g(z) ≥ 0.5 when z ≥ 0, and g(z) < 0.5 when z < 0, so we predict the label 1 when hθ(x) ≥ 0.5 and the label 0 otherwise. Refer to Figure 3:

  Figure 3: Logistic function mathematical property

Because of the preceding mathematical property, we can use this function to perform binary classification.
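To make this property concrete, here is a minimal sketch in Python using NumPy; the sigmoid helper and the sample z values are my own choices for illustration, not code from the book:

    import numpy as np

    def sigmoid(z):
        # Logistic (sigmoid) function: g(z) = 1 / (1 + e^(-z))
        return 1.0 / (1.0 + np.exp(-z))

    # Evaluate g(z) for a few sample values of z
    z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
    print(sigmoid(z))                        # roughly [0.00005 0.269 0.5 0.731 0.99995]
    print((sigmoid(z) >= 0.5).astype(int))   # [0 0 1 1 1], the binary class labels

Notice that thresholding the output at 0.5 is exactly the binary decision rule described in the preceding paragraph.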

Now it’s time to show how this sigmoid function is represented as the hypothesis function: hθ(x) = g(θTx). Refer to Figure 4:

Figure 4: Hypothesis function for logistic regression

If we take the preceding equation and substitute the value of z with θTx, then the equation given in Figure 1 gets converted into hθ(x) = 1 / (1 + e^(-θTx)). Refer to Figure 5:

Figure 5: Actual hypothesis function after mathematical manipulation

Here, hθ(x) is the hypothesis function, θ is the vector of parameters (θT is its transpose), and x stands for the vector of all independent variables, or the feature set. In order to generate the hypothesis equation, we replace the z value of the logistic function with θTx.
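As a small, hypothetical illustration of this substitution, the following sketch computes hθ(x) = g(θTx) for a made-up parameter vector and feature vector:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def hypothesis(theta, x):
        # h_theta(x) = g(theta^T x) = 1 / (1 + e^(-theta^T x))
        return sigmoid(np.dot(theta, x))

    theta = np.array([0.5, -1.2, 2.0])   # made-up parameter vector
    x = np.array([1.0, 0.3, 0.8])        # made-up feature vector (first entry acts as the bias term)
    print(hypothesis(theta, x))          # a value between 0 and 1, here about 0.85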

By using the hypothesis equation, the machine actually tries to learn the mapping between the input variables, or input features, and the output labels. Let’s talk a bit about the interpretation of this hypothesis function. For logistic regression, can you think of the best way to predict the class label? We can predict the target class label by using the concept of probability. We need to generate the probability for both classes, and whichever class has the higher probability, we assign that class label to that particular instance of features. In binary classification, the value of y, the target class, is either zero or one. If you are familiar with probability, then you can represent the probability equation as given in Figure 6:

Figure 6: Interpretation of hypothesis function using probabilistic representation

Here, P(y=1|x;θ) = hθ(x) and P(y=0|x;θ) = 1 - hθ(x).

For those who are not familiar with probability, P(y=1|x;θ) can be read like this: the probability of y = 1, given x, parameterized by θ. In simple language, you can say that the hypothesis function generates the probability value for target output 1, where we give it the feature matrix x and some parameter θ. This is an intuitive concept, so for a while, you can keep all of this in mind. Later on, I will give you the reason why we need to generate probabilities, as well as show you how we can generate the probability values for each of the classes.

Here we complete the first step of the general approach to understanding logistic regression.

Cost or error function for logistic regression

First, let’s understand what a cost function, or error function, is. Cost function, loss function, and error function are all the same thing. This is a very important concept in ML, so here we will understand the definition of the cost function and the purpose of defining it.

The cost function is the function we use to check how accurately our ML classifier performs. Let me simplify this for you: in our training dataset we have data and we have labels. When we use the hypothesis function to generate output, we need to check how near we are to the actual prediction. If we predict the actual output label, then the difference between our hypothesis function’s output and the actual label is zero or minimal; if they are not the same, then there is a big difference between them. Suppose the actual label of an email is spam, which is 1, and our hypothesis function also generates the result 1; then the difference between the actual target value and the predicted output value is zero, and therefore the error in the prediction is also zero. If our predicted output is 1 and the actual output is zero, then we have the maximum error between the actual target concept and our prediction. So it is important for us to have a minimum error in our prediction. This is the very basic concept of the error function. We will get into the mathematics in a few minutes. There are several types of error functions available, such as the R-squared error, the sum of squared errors, and so on. As per the ML algorithm and the hypothesis function, our error function also changes.

Now, I know you want to know what the error function for logistic regression is. I have also put θ in our hypothesis function, so you may want to know what θ is and, if I need to choose some value of θ, how I can approach that. Here I will give you all the answers.

Let me give you some background on what we do in linear regression, as it will help you understand logistic regression. In linear regression, we generally use the sum of squared errors, or residual error, as the cost function. Just to give you background on the sum of squared errors: in linear regression, we try to generate the line of best fit for our dataset. As in the example stated earlier, given a height, I want to predict the weight. We first draw a line and measure the distance from each of the data points to the line. We square these distances, sum them, and try to minimize this error function. Refer to Figure 7:

Figure 7: Sum of squared error representation for reference

You can see the distance of each data point from the line, denoted by the red lines; we take these distances, square them, and sum them. We use this error function in linear regression, and we generate its partial derivatives with respect to the slope of the line, m, and the intercept, b. Every time, we calculate the error and update the values of m and b so we can generate the line of best fit. The process of updating m and b is called gradient descent. By using gradient descent, we update m and b in such a way that our error function has a minimum error value and we can generate the line of best fit. Gradient descent gives us the direction in which to move so we can generate the line of best fit. You can find a detailed example in Chapter 9, Deep Learning for NLU and NLG Problems. So, by defining the error function and generating the partial derivatives, we can apply the gradient descent algorithm, which helps us minimize our error or cost function.
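To make this background concrete, here is a minimal sketch of gradient descent for a line y = m*x + b; the toy data, learning rate, and iteration count are arbitrary choices for illustration, and I use the mean of the squared errors rather than the raw sum, which only rescales the gradient and makes the learning rate easier to pick:

    import numpy as np

    # Toy data that roughly follows y = 2x + 1
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

    m, b = 0.0, 0.0   # slope and intercept, initialized to zero
    alpha = 0.05      # learning rate

    for _ in range(2000):
        error = (m * x + b) - y
        grad_m = 2.0 * np.mean(error * x)   # partial derivative with respect to m
        grad_b = 2.0 * np.mean(error)       # partial derivative with respect to b
        m -= alpha * grad_m                 # gradient descent update for m
        b -= alpha * grad_b                 # gradient descent update for b

    print(m, b)   # converges close to slope 2 and intercept 1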

Now, back to the main question: which error function can we use for logistic regression? What do you think: can we use the sum of squared errors function for logistic regression as well? If you know functions and calculus well, then your answer is probably no, and that is the correct answer. Let me explain this for those who aren’t familiar with functions and calculus. This is important, so be careful.

In linear regression, our hypothesis function is linear, so it is very easy for us to calculate the sum of squared errors, but here we are using the sigmoid function, which is a non-linear function. If you apply the same error function that we used in linear regression, it will not turn out well, because if you put the sigmoid function into the sum of squared errors function and try to visualize all the possible values, then you get a non-convex curve. Refer to Figure 8:

Figure 8: Non-convex and convex functions (Image credit: http://www.yuthon.com/images/non-convex_and_convex_function.png)

In machine learning, we mostly use functions that provide a convex curve, because then we can use the gradient descent algorithm to minimize the error function and reach the global minimum with certainty. As you can see in Figure 8, a non-convex curve has many local minima, so reaching the global minimum is very challenging and very time consuming, because you then need to apply second-order or nth-order optimization; with a convex curve, you can reach the global minimum quickly and with certainty.

So, if we plug our sigmoid function into the sum of squared errors, we get a non-convex function, and therefore we are not going to use the same error function that we used in linear regression.

So, we need to define a different cost function that is convex, so that we can apply the gradient descent algorithm and reach the global minimum. Here we use the statistical concept called likelihood. To derive the likelihood function, we use the probability equation given in Figure 6 and consider all the data points in the training set. This gives us the likelihood function L(θ) = Πi hθ(x(i))^y(i) * (1 - hθ(x(i)))^(1 - y(i)), where x(i) and y(i) denote the ith training example and its label. Refer to Figure 9:

Figure 9: Likelihood function for logistic regression (Image credit: http://cs229.stanford.edu/notes/cs229-notes1.pdf)

Now, in order to simplify the derivative process, we take the natural logarithm of the likelihood function, which is a monotonically increasing transformation; the result is called the log likelihood. The negative of this log likelihood is our cost function for logistic regression: J(θ) = -(1/m) Σi [ y(i) log hθ(x(i)) + (1 - y(i)) log(1 - hθ(x(i))) ]. See the equation given in Figure 10:

Figure 10: Cost function for logistic regression
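Here is a minimal sketch of this cost function in NumPy; the labels and predicted probabilities are made up for illustration, and a small clipping step guards against taking log(0):

    import numpy as np

    def cost(h, y, eps=1e-15):
        # J = -(1/m) * sum( y*log(h) + (1 - y)*log(1 - h) )
        h = np.clip(h, eps, 1.0 - eps)   # keep log() well defined
        return -np.mean(y * np.log(h) + (1.0 - y) * np.log(1.0 - h))

    y = np.array([1.0, 0.0, 1.0])          # actual labels
    h_good = np.array([0.9, 0.1, 0.8])     # confident and mostly correct predictions
    h_bad = np.array([0.01, 0.99, 0.05])   # confident but wrong predictions

    print(cost(h_good, y))   # small cost, about 0.14
    print(cost(h_bad, y))    # large cost, about 4.07

This numerically demonstrates the property described next: the cost stays near zero for correct predictions and blows up as the prediction moves toward the wrong extreme.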

Here, to gain some intuition about the given cost function, we will plot it and understand what benefit it provides us. On the x axis we have our hypothesis function, whose range is 0 to 1, so we have these two points on the x axis. Start with the first case, where y = 1. You can see the generated curve, which is on the top right-hand side of Figure 11:

Figure 11: Logistic function cost function graphs

If you take any log function plot and flip that curve (because here we have a negative sign), then you get the same curve as plotted in Figure 11. You can see the log graph as well as the flipped graph in Figure 12:

Figure 12: Comparing the log(x) and -log(x) graphs for a better understanding of the cost function (Image credit: http://www.sosmath.com/algebra/logs/log4/log42/log422/gl30.gif)

Here we are interested in the values 0 and 1, so we consider the part of the graph depicted in Figure 11. This cost function has some interesting and useful properties. If the predicted or candidate label is the same as the actual target label, then the cost is zero: if y = 1 and the hypothesis function predicts hθ(x) = 1, then the cost is 0; but if hθ(x) tends towards 0, then the cost function blows up to ∞.

Now, for y = 0, you can see the graph on the top left-hand side of Figure 11. This case has the same advantages and properties that we saw earlier: the cost goes to ∞ when the actual value is 0 and the hypothesis function predicts 1, and if the hypothesis function predicts 0 and the actual target is also 0, then the cost is 0.

As I told you earlier, the reason we choose this cost function is that it makes our optimization easy: it comes from the maximum log likelihood, and it has a convex curve, which helps us run gradient descent.

In order to apply gradient descent, we need to generate the partial derivative of the cost function with respect to θ. This gives us the update equation θj := θj - α Σi ( hθ(x(i)) - y(i) ) xj(i), which is given in Figure 13:

Figure 13: Partial derivative for performing gradient descent (Image credit : http://2.bp.blogspot.com)

This equation is used for updating the parameter values of θ, and α here defines the learning rate. This is the parameter that controls how fast or slow your algorithm learns or trains. If you set the learning rate too high, then the algorithm may overshoot and fail to converge, and if you set it too low, then it takes a lot of time to train. So you need to choose the learning rate wisely.
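Putting the update rule together, here is a minimal sketch of batch gradient descent for logistic regression; the toy design matrix, learning rate, and iteration count are illustrative choices, not the book’s code:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy design matrix X (first column is the bias term) and binary labels y
    X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 3.0], [1.0, 4.5]])
    y = np.array([0.0, 0.0, 1.0, 1.0])

    theta = np.zeros(X.shape[1])
    alpha = 0.1   # learning rate

    for _ in range(5000):
        h = sigmoid(X.dot(theta))    # h_theta(x) for every training example
        gradient = X.T.dot(h - y)    # sum over i of (h_theta(x(i)) - y(i)) * x(i)
        theta -= alpha * gradient    # theta_j := theta_j - alpha * gradient_j

    print(sigmoid(X.dot(theta)))     # predicted probabilities move toward [0, 0, 1, 1]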

Now let’s start building the spam filtering application.

Data loading and exploration

To build the spam filtering application, we need a dataset. Here we are using a small dataset that is simple and straightforward. This dataset has two attributes: the first attribute is the label and the second attribute is the text content of the email. Let’s discuss the first attribute a bit more. The presence of a label makes this a tagged dataset. The label indicates whether the email content belongs to the spam category or the ham category. Let’s jump into the practical part. Here we are using numpy, pandas, and scikit-learn as dependency libraries.

So let’s explore our dataset first. We read the dataset using the pandas library. I have also checked how many total data records we have, along with basic details of the dataset.

Once we load the data, we check its first ten records and then replace the spam and ham categories with numbers. As we have seen, a machine can understand only numerical formats, so here every ham label is converted into 0 and every spam label is converted into 1. Refer to Figure 14:

Figure 14: Code snippet for converting labels into numerical format
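Since the code itself is shown as an image, here is a sketch of what such a snippet could look like with pandas; the file name spam.csv and the column names label and text are assumptions for this sketch, so adjust them to your copy of the dataset:

    import pandas as pd

    # Load the dataset; the file name, encoding, and column names are assumed here
    df = pd.read_csv('spam.csv', encoding='latin-1', header=0,
                     usecols=[0, 1], names=['label', 'text'])

    print(df.shape)                     # total number of records and attributes
    print(df.head(10))                  # first ten records
    print(df['label'].value_counts())   # how many ham and spam records we have

    # Replace the categorical labels with numbers: ham -> 0, spam -> 1
    df['label'] = df['label'].map({'ham': 0, 'spam': 1})
    print(df.head(10))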

Split dataset into training dataset and testing dataset

In this part, we divide our dataset into two parts: one part is called the training set and the other part is called the testing set. Refer to Figure 15:

Figure 15: Code snippet for dividing the dataset into a training dataset and a testing dataset

We divide the dataset into two parts because we perform training using the training dataset; once our ML algorithm has been trained on that dataset and has generated an ML model, we feed the testing data into that model, and as a result the model generates predictions. Based on those results, we evaluate our ML model.
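A sketch of this split with scikit-learn follows, continuing from the df of the previous sketch; the bag-of-words feature extraction, the 80/20 split ratio, and the random_state value are illustrative choices, not necessarily the book’s exact code:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hold out 20% of the records as the testing dataset
    text_train, text_test, y_train, y_test = train_test_split(
        df['text'], df['label'], test_size=0.2, random_state=42)

    # Turn the email text into numerical features (bag of words),
    # learning the vocabulary from the training data only
    vectorizer = CountVectorizer()
    X_train = vectorizer.fit_transform(text_train)
    X_test = vectorizer.transform(text_test)

    # Train logistic regression on the training set and evaluate on the testing set
    classifier = LogisticRegression()
    classifier.fit(X_train, y_train)
    predictions = classifier.predict(X_test)
    print(accuracy_score(y_test, predictions))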

Summary

In this article, we understood the logistic regression ML algorithm and the mathematics behind it, explored the spam dataset, and split it into a training dataset and a testing dataset for building the spam filtering application.
