
This article is an excerpt taken from the book Hands-On Data Science and Python Machine Learning authored by Frank Kane.

In this article, we're going to start by talking about the bias-variance trade-off, which is a more principled way of talking about the different ways you might overfit or underfit data, and how the two interrelate. We will then talk about the k-fold cross-validation technique, an important tool in your arsenal for combating overfitting, and look at how to implement it using Python. Finally, we look at how to detect outliers and deal with them.

Bias is just how far off you are from the correct values, that is, how good your predictions are, overall, at hitting the right value. If you take the mean of all your predictions, are they more or less on the right spot? Or are your errors all consistently skewed in one direction or another? If so, then your predictions are biased in a certain direction.

Variance is just a measure of how spread out, or how scattered your predictions are. So, if your predictions are all over the place, then that’s high variance. But, if they’re very tightly focused on what the correct values are, or even an incorrect value in the case of high bias, then your variance is small.
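To make those two terms concrete, here's a tiny toy sketch; the true value and the predictions in it are entirely made up, just to show how you might compute each quantity:

import numpy as np

true_value = 100.0
predictions = np.array([102.0, 98.0, 103.0, 97.0, 101.0])   # hypothetical predictions

# Bias: how far the average prediction sits from the true value
bias = predictions.mean() - true_value

# Variance: how spread out the predictions are around their own mean
variance = predictions.var()

print(bias, variance)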

In reality, you often need to choose between bias and variance. It comes down to overfitting vs. underfitting your data. Let's take a look at the following example:

It's a little bit of a different way of thinking about bias and variance. In the left graph, we have a straight line, and you can think of that as having very low variance relative to these observations. But the bias, the error from each individual point, is actually high.

Now, contrast that with the overfitted data in the graph on the right, where we've gone out of our way to fit the observations. The line has high variance, but low bias, because each individual point is pretty close to where it should be. So, this is an example of where we traded off variance for bias.
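The original figure isn't reproduced in this excerpt, but if you'd like to generate a similar pair of pictures yourself, here's a rough sketch; the data and the polynomial degree are arbitrary, purely for illustration:

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(2)
x = np.linspace(0, 10, 15)
y = 2 * x + 3 + np.random.normal(0, 3, 15)   # noisy, roughly linear data

fig, (left, right) = plt.subplots(1, 2, figsize=(10, 4))

# Left: a straight-line fit -- low variance, but high bias at individual points
line = np.poly1d(np.polyfit(x, y, 1))
left.scatter(x, y)
left.plot(x, line(x), color='red')
left.set_title('Low variance, high bias')

# Right: a wiggly degree-8 fit -- low bias at these points, but high variance
wiggly = np.poly1d(np.polyfit(x, y, 8))
xs = np.linspace(0, 10, 200)
right.scatter(x, y)
right.plot(xs, wiggly(xs), color='red')
right.set_title('High variance, low bias')

plt.show()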

At the end of the day, you’re not out to just reduce bias or just reduce variance, you want to reduce error. That’s what really matters, and it turns out you can express error as a function of bias and variance:
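Error = Bias² + Variance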

Looking at this, error is equal to bias squared plus variance. So, these things both contribute to the overall error, with bias entering as a square. But keep in mind, it's error you really want to minimize, not the bias or the variance specifically. An overly complex model will probably end up having high variance and low bias, whereas an overly simple model will have low variance and high bias, yet both could end up with similar error terms at the end of the day. You just have to find the right happy medium of these two things when you're trying to fit your data.

This is the bias-variance trade-off: the decision you have to make between how accurate your values are overall and how spread out (or how tightly clustered) they are. Both contribute to the overall error, which is the thing you really care about minimizing. So, keep those terms in mind!

K-fold cross-validation to avoid overfitting

Train/test is a good way of preventing overfitting and of actually measuring how well your model can perform on data it's never seen before. We can take that to the next level with a technique called k-fold cross-validation. So, let's talk about this powerful tool in your arsenal for fighting overfitting and learn how it works.

The idea, although it sounds complicated, is fairly simple:

  1. Instead of dividing our data into two buckets, one for training and one for testing, we divide it into K buckets.
  2. We reserve one of those buckets for testing purposes, for evaluating the results of our model.
  3. We train our model against the remaining K-1 buckets, then use the reserved bucket to evaluate how well the model did. We repeat this so that each of the K buckets gets a turn as the test set.
  4. We average the resulting error metrics, that is, those r-squared values, together to get a final error metric from k-fold cross-validation (a hands-on sketch of this process follows below).
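Here is a minimal sketch of that process done by hand, just to make the mechanics concrete. It uses KFold from sklearn.model_selection (the newer home of scikit-learn's cross-validation utilities; the code later in this article uses the older cross_validation module), along with the Iris dataset and SVC model that we'll meet again shortly:

import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import KFold

iris = datasets.load_iris()
X, y = iris.data, iris.target

# Divide the data into K=5 buckets (folds)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

scores = []
for train_index, test_index in kf.split(X):
    # Train on the K-1 remaining buckets...
    model = svm.SVC(kernel='linear', C=1).fit(X[train_index], y[train_index])
    # ...and evaluate on the one bucket we held out
    scores.append(model.score(X[test_index], y[test_index]))

# Average the per-fold scores into one overall metric
print(np.mean(scores))

In practice, as we'll see next, scikit-learn's cross_val_score() does all of this bookkeeping for you in a single call.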

Example of k-fold cross-validation using scikit-learn

Fortunately, scikit-learn makes this really easy to do, and it’s even easier than doing normal train/test! It’s extremely simple to do k-fold cross-validation, so you may as well just do it.

Now, the way this all works in practice is you will have a model that you’re trying to tune, and you will have different variations of that model, different parameters you might want to tweak on it, right?

Take, for example, the degree of polynomial for a polynomial fit. The idea is to try different values of your model, different variations, measure them all using k-fold cross-validation, and find the one that minimizes error against your test dataset. That's your sweet spot. In practice, you want to use k-fold cross-validation to measure the accuracy of your model against a test dataset, and keep refining that model: keep trying different values within it, keep trying different variations of that model, or maybe even different models entirely, until you find the technique that reduces error the most, as measured by k-fold cross-validation.

Please go ahead and open up KFoldCrossValidation.ipynb and follow along if you will. We're going to look at the Iris dataset again; remember, we introduced this when we talked about dimensionality reduction?

We’re going to use the SVC model. If you remember back again, that’s just a way of classifying data that’s pretty robust. There’s a section on that if you need to go and refresh your memory:

import numpy as np
from sklearn import cross_validation
from sklearn import datasets
from sklearn import svm

iris = datasets.load_iris()

# Split the iris data into train/test data sets with
# 40% reserved for testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data,
    iris.target, test_size=0.4, random_state=0)

# Build an SVC model for predicting iris classifications
# using training data
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)

# Now measure its performance with the test data
clf.score(X_test, y_test)

What we do is use the cross_validation library from scikit-learn (in newer versions of scikit-learn, the same functions live in sklearn.model_selection), and we start by just doing a conventional train/test split, a single one, and see how that works.

To do that we have a train_test_split() function that makes it pretty easy. So, the way this works is we feed into train_test_split() a set of feature data. iris.data just contains all the actual measurements of each flower. iris.target is basically the thing we’re trying to predict.

In this case, it contains all the species for each flower. test_size says what percentage of the data we want to reserve for testing. So, 0.4 means we're going to extract 40% of that data randomly for testing purposes, and use 60% for training purposes. What this gives us back is 4 datasets: a training dataset and a test dataset for both the feature data and the target data. So, X_train ends up containing 60% of our Iris measurements, and X_test contains the 40% of the measurements used for testing the results of our model. y_train and y_test contain the actual species for each of those segments.

After that, we go ahead and build an SVC model for predicting Iris species given their measurements, using only the training data. We fit this SVC model with a linear kernel, using only the training feature data and the training species (target) data. We call that model clf. Then, we call the score() function on clf to measure its performance against our test dataset. So, we score this model against the test Iris measurements and test Iris species we reserved, and see how well it does:

It turns out it does really well! Over 96% of the time, our model is able to correctly predict the species of an Iris that it had never seen before, just based on the measurements of that Iris. So that’s pretty cool!

But, this is a fairly small dataset, about 150 flowers if I remember right. So, we’re only using 60% of 150 flowers for training and only 40% of 150 flowers for testing. These are still fairly small numbers, so we could still be overfitting to our specific train/test split that we made. So, let’s use k-fold cross-validation to protect against that. It turns out that using k-fold cross-validation, even though it’s a more robust technique, is actually even easier to use than train/test. So, that’s pretty cool! So, let’s see how that works:

# We give cross_val_score a model, the entire data set and its "real" values, and the number of folds:
scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)

# Print the accuracy for each fold:
print(scores)

# And the mean accuracy of all 5 folds:
print(scores.mean())

We have a model already, the SVC model that we defined for this prediction, and all we need to do is call cross_val_score() on the cross_validation package. So, we pass into this function a model of a given type (clf), our entire dataset of feature data (iris.data), and all of our target data (iris.target), that is, all of the species.

We pass cv=5, which means it's going to split the data into 5 folds and run the evaluation 5 times, each time training on 4 of the folds and reserving the remaining one for testing. That's all we need to do. It will automatically evaluate our model against the entire dataset, split up five different ways, and give us back the individual results.

If we print the output of that, it gives us back a list of the actual error metrics from each of those iterations, that is, each of those folds. We can average those together to get an overall error metric based on k-fold cross-validation:

When we do this over 5 folds, we can see that our results are even better than we thought! 98% accuracy. So that’s pretty cool! In fact, in a couple of the runs we had perfect accuracy. So that’s pretty amazing stuff.

Now let’s see if we can do even better. We used a linear kernel before, what if we used a polynomial kernel and got even fancier? Will that be overfitting or will it actually better fit the data that we have? That kind of depends on whether there’s actually a linear relationship or polynomial relationship between these petal measurements and the actual species or not. So, let’s try that out:

clf = svm.SVC(kernel='poly', C=1).fit(X_train, y_train)
scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores)
print(scores.mean())

We'll just run this all again, using the same technique. But this time, we're using a polynomial kernel. We'll fit that to our training dataset, although it doesn't really matter what you fit it to in this case, because cross_val_score() will just keep re-fitting the model on each fold for you:

It turns out that when we use the polynomial kernel, we end up with an overall score that's even lower than our original run. So, this tells us that the polynomial kernel is probably overfitting: k-fold cross-validation reveals a lower score than we got with our linear kernel.

The important point here is that if we had just used a single train/test split, we wouldn't have realized that we were overfitting. We would actually have gotten about the same result with a single train/test split here as we did with the linear kernel. So, we might have inadvertently been overfitting our data there and not even known it, had we not used k-fold cross-validation. This is a good example of where k-fold comes to the rescue and warns you of overfitting, where a single train/test split might not have caught that. So, keep that in your tool chest.

If you want to play around with this some more, go ahead and try different degrees. You can actually specify a different degree for the polynomial kernel; the default is 3, but you can try a different value, such as 2.
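For instance, here's a minimal sketch of that experiment, assuming the variables from the earlier cells (X_train, y_train, iris, and the cross_validation import) are still in scope; it simply passes degree=2 to SVC and scores it with the same 5 folds:

# Same experiment, but with a degree-2 polynomial kernel
clf = svm.SVC(kernel='poly', degree=2, C=1).fit(X_train, y_train)
scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores)
print(scores.mean())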

Detecting outliers

A common problem with real-world data is outliers. You'll always have some strange users or agents polluting your data, acting abnormally compared to the typical user. They might be legitimate outliers, caused by real people rather than by some sort of malicious traffic or fake data. So sometimes it's appropriate to remove them, and sometimes it isn't.

Dealing with outliers

So, let’s take some example code, and see how you might handle outliers in practice. Let’s mess around with some outliers. It’s a pretty simple section. A little bit of review actually. If you want to follow along, we’re in Outliers.ipynb. So, go ahead and open that up if you’d like:

import numpy as np

incomes = np.random.normal(27000, 15000, 10000)
incomes = np.append(incomes, [1000000000])

import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()

What we're going to do is start off with a normal distribution of incomes here that has a mean of $27,000 per year, with a standard deviation of $15,000. I'm going to create 10,000 fake Americans that have an income in that distribution. This is totally made-up data, by the way, although it's not that far off from reality.

Then, I’m going to stick in an outlier – call it Donald Trump, who has a billion dollars. We’re going to stick this guy in at the end of our dataset. So, we have a normally distributed dataset around $27,000, and then we’re going to stick in Donald Trump at the end.

We’ll go ahead and plot that as a histogram:

We have the entire normal distribution of everyone else in the country squeezed into one bucket of the histogram. On the other hand, we have Donald Trump out at the right side screwing up the whole thing at a billion dollars.

The other problem is that if I'm trying to answer the question of how much money the typical American makes, and I take the mean to try and figure that out, it's not going to be a very good, useful number:

incomes.mean()

The output of the preceding code is as follows:

126892.66469341301

Donald Trump has pushed that number up all by himself to $126,000 and some odd change, when I know that the real mean of my normally distributed data, excluding Donald Trump, is only $27,000. So, the right thing to do there would be to use the median instead of the mean.
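As a quick sanity check, assuming the incomes array from above is still in scope, the median barely moves in response to that single extreme value:

# The median is robust to a single extreme outlier
np.median(incomes)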

A better thing to do would be to actually measure the standard deviation of your dataset, and identify outliers as being some multiple of a standard deviation away from the mean.

So, following is a little function that I wrote that does just that. It’s called reject_outliers():

def reject_outliers(data): 
    u = np.median(data) 
    s = np.std(data) 
    filtered = [e for e in data if (u - 2 * s < e < u + 2 * s)] 
    return filtered 
 
filtered = reject_outliers(incomes) 
 
plt.hist(filtered, 50) 
plt.show()

It takes in a list of data and finds the median. It also finds the standard deviation of that dataset. Then I filter the list so that I only preserve data points that are within two standard deviations of the median. So, I can use this handy dandy reject_outliers() function on my income data to automatically strip out weird outliers:

Sure enough, it works! I get a much prettier graph now that excludes Donald Trump and focuses in on the more typical dataset here in the center. So, pretty cool stuff!

So, that’s one example of identifying outliers, and automatically removing them, or dealing with them however you see fit. Remember, always do this in a principled manner. Don’t just throw out outliers because they’re inconvenient. Understand where they’re coming from, and how they actually affect the thing you’re trying to measure in spirit.

By the way, our mean is also much more meaningful now; it's much closer to the $27,000 it should be, now that we've gotten rid of that outlier.
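As a quick check, assuming the filtered list from the previous cell is still in scope, you can recompute the mean without the outlier:

# Recompute the mean on the filtered data -- it should land back near $27,000
np.mean(filtered)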

In this article, we came across the bias-variance trade-off and how to minimize error. We also saw the concept of k-fold cross-validation and how to implement it in Python to prevent overfitting. If you've enjoyed this excerpt, head over to the book Hands-On Data Science and Python Machine Learning to learn how to prepare your data for analysis, train machine learning models, visualize the final data analysis, and much more.
