
What are discriminative and generative models and when to use which?


Note: This article is a book excerpt from Bayesian Analysis with Python, written by Osvaldo Martin. The book covers the Bayesian framework and the fundamental concepts of Bayesian analysis in detail, and will help you solve complex statistical problems by leveraging the power of the Bayesian framework and Python.

In this article you will explore the fundamentals and implementation of two broad families of machine learning classifiers: discriminative and generative models. We have also included examples to help you understand the difference between these models and how they operate.


In general, we try to directly compute p(y|x), that is, the probability of a given class y knowing x, which is some feature we measured for members of that class. In other words, we try to directly model the mapping from the independent variables to the dependent ones and then use a threshold to turn the (continuous) computed probability into a boundary that allows us to assign classes. This approach is not the only one. One alternative is to first model p(x|y), that is, the distribution of x for each class, and then assign the classes using Bayes' theorem, p(y|x) ∝ p(x|y)p(y). This kind of model is called a generative classifier, because we are creating a model from which we can generate samples from each class. Logistic regression, by contrast, is a type of discriminative classifier, since it tries to classify by discriminating between classes, but we cannot use it to generate examples from each class.
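To make the generative route concrete, here is a tiny numerical sketch (the numbers are made up for illustration and are not from the book): we model p(x|y) with one Gaussian per class and then assign a new point via Bayes' theorem:

import numpy as np
import scipy.stats as stats

# Hypothetical numbers, for illustration only
x_new = 5.5                        # a new measurement of the feature
priors = np.array([0.5, 0.5])      # p(y): assume balanced classes
# class-conditional densities p(x|y), one Gaussian per class
likes = np.array([stats.norm.pdf(x_new, loc=5.0, scale=0.5),
                  stats.norm.pdf(x_new, loc=6.0, scale=0.5)])
posterior = likes * priors / (likes * priors).sum()   # p(y|x)
print(posterior, posterior.argmax())  # class probabilities, prediction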

We are not going to go into much detail here about generative models for classification, but we are going to see one example that illustrates the core idea behind this type of model. We will do it for two classes and only one feature, exactly as in the first model we built in this chapter, using the same data.

Following is a PyMC3 implementation of a generative classifier. From the code, you can see that the decision boundary is now defined as the average of the two estimated Gaussian means. This is the correct decision boundary when the distributions are normal and their standard deviations are equal. These are the assumptions made by a model known as linear discriminant analysis (LDA). Despite its name, the LDA model is generative:

import pymc3 as pm

with pm.Model() as lda:
    # one mean per class; a single standard deviation shared by both
    # classes, matching the equal-variance assumption of LDA
    mus = pm.Normal('mus', mu=0, sd=10, shape=2)
    sigma = pm.Uniform('sigma', 0, 10)
    # x_0 holds the sepal lengths from earlier in the chapter:
    # the first 50 values are setosa, the next 50 versicolor
    setosa = pm.Normal('setosa', mu=mus[0], sd=sigma,
                       observed=x_0[:50])
    versicolor = pm.Normal('versicolor', mu=mus[1], sd=sigma,
                           observed=x_0[50:])
    # the decision boundary is the midpoint between the two means
    bd = pm.Deterministic('bd', (mus[0] + mus[1]) / 2)
    start = pm.find_MAP()
    step = pm.NUTS()
    trace_lda = pm.sample(5000, step, start)


Now we are going to plot a figure showing the two classes (setosa = 0 and versicolor = 1) against the values for sepal length, together with the decision boundary as a red line and the 95% HPD interval for it as a semitransparent red band.

[Figure: setosa and versicolor plotted against sepal length, with the decision boundary (red line) and its 95% HPD interval (red band)]
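The figure takes only a few lines of matplotlib to reproduce. This is a minimal sketch, assuming x_0 and y_0 (the 0/1 class labels) are the same arrays used for the first model in this chapter:

import matplotlib.pyplot as plt

# assumes x_0 (sepal lengths), y_0 (0 = setosa, 1 = versicolor),
# and trace_lda from the model we just sampled
plt.scatter(x_0, y_0, alpha=0.5)
plt.axvline(trace_lda['bd'].mean(), color='r')   # decision boundary
bd_hpd = pm.hpd(trace_lda['bd'])                 # 95% HPD interval
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='r', alpha=0.3)
plt.xlabel('sepal_length')
plt.ylabel('species')
plt.show()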

As you may have noticed, the preceding figure is pretty similar to the one we plotted at the beginning of this chapter. Also check the value of the decision boundary in the following summary:

pm.df_summary(trace_lda)

          mean    sd    mc_error  hpd_2.5  hpd_97.5
mus__0    5.01    0.06  8.16e-04     4.88      5.13
mus__1    5.93    0.06  6.28e-04     5.81      6.06
sigma     0.45    0.03  1.52e-03     0.38      0.51
bd        5.47    0.05  5.36e-04     5.38      5.56

Both the LDA model and the logistic regression gave similar results for the decision boundary.

The linear discriminant model can be extended to more than one feature by modeling the classes as multivariate Gaussians. Also, it is possible to relax the assumption that the classes share a common variance (or a common covariance matrix when working with more than one feature). This leads to a model known as quadratic discriminant analysis (QDA), since now the decision boundary is not linear but quadratic.
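As a rough illustration of the quadratic variant, here is how the previous model might be modified. This is a hypothetical sketch rather than the book's code: the only change is that each class now gets its own standard deviation (shape=2), so the closed-form midpoint boundary no longer applies:

with pm.Model() as qda:
    mus = pm.Normal('mus', mu=0, sd=10, shape=2)
    sigmas = pm.Uniform('sigmas', 0, 10, shape=2)   # one sigma per class
    setosa = pm.Normal('setosa', mu=mus[0], sd=sigmas[0],
                       observed=x_0[:50])
    versicolor = pm.Normal('versicolor', mu=mus[1], sd=sigmas[1],
                           observed=x_0[50:])
    trace_qda = pm.sample(5000)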

In general, an LDA or QDA model will work better than logistic regression when the features we are using are more or less Gaussian distributed, and logistic regression will perform better in the opposite case. One advantage of the generative model for classification is that it may be easier or more natural to incorporate prior information; for example, we may have information about the mean and variance of the data that we can incorporate into the model.
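For instance, and purely as a hypothetical sketch, if we knew beforehand that sepal lengths for these species cluster around 5 and 6 cm, we could encode that knowledge directly in the priors of the generative model:

import numpy as np

# hypothetical informative priors on the class means
with pm.Model() as lda_informative:
    mus = pm.Normal('mus', mu=np.array([5., 6.]), sd=1., shape=2)
    sigma = pm.Uniform('sigma', 0, 10)
    setosa = pm.Normal('setosa', mu=mus[0], sd=sigma,
                       observed=x_0[:50])
    versicolor = pm.Normal('versicolor', mu=mus[1], sd=sigma,
                           observed=x_0[50:])
    bd = pm.Deterministic('bd', (mus[0] + mus[1]) / 2)
    trace_inf = pm.sample(5000)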

It is important to note that the decision boundaries of LDA and QDA are known in closed form, and hence they are usually computed that way. To use LDA for two classes and one feature, we just need to compute the mean of each distribution and average those two values to get the decision boundary. Notice that in the preceding model we did just that, but in a more Bayesian way: we estimated the parameters of the two Gaussians and then plugged those estimates into a formula. Where do such formulae come from? Without going into details, to obtain that formula we must assume that the data is Gaussian distributed, and hence the formula will only work well if the data does not deviate drastically from normality. Of course, we may face a problem where we want to relax the normality assumption, for example by using a Student's t-distribution (or a multivariate Student's t-distribution, or something else). In such a case, we can no longer use the closed form for LDA (or QDA); nevertheless, we can still compute a decision boundary numerically using PyMC3.
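As a rough sketch of that idea (the names t_lda, nu, and the grid search below are our own illustration, not the book's code), we can swap the Gaussian likelihoods for Student's t likelihoods and then locate the boundary numerically as the point where the two fitted densities cross:

import numpy as np
import scipy.stats as stats

with pm.Model() as t_lda:
    mus = pm.Normal('mus', mu=0, sd=10, shape=2)
    sigma = pm.Uniform('sigma', 0, 10)
    nu = pm.Exponential('nu', 1/30)   # degrees of freedom
    setosa = pm.StudentT('setosa', nu=nu, mu=mus[0], sd=sigma,
                         observed=x_0[:50])
    versicolor = pm.StudentT('versicolor', nu=nu, mu=mus[1], sd=sigma,
                             observed=x_0[50:])
    trace_t = pm.sample(5000)

# scan a grid and pick the point where the two posterior-mean
# densities are (approximately) equal
grid = np.linspace(x_0.min(), x_0.max(), 1000)
pdfs = [stats.t.pdf(grid, df=trace_t['nu'].mean(),
                    loc=trace_t['mus'][:, i].mean(),
                    scale=trace_t['sigma'].mean()) for i in (0, 1)]
bd_numeric = grid[np.argmin(np.abs(pdfs[0] - pdfs[1]))]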

To sum up, we looked at the basic idea behind generative and discriminative models and when to use each.

If you enjoyed this excerpt, check out the book Bayesian Analysis with Python to solve complex statistical problems with the Bayesian framework and Python.

