Probabilistic graphical models, or simply graphical models as we will refer to them in this article, are models that use a graph to describe the conditional independence relationships among a set of random variables. This topic has received increasing attention in recent years, and probabilistic graphical models have been successfully applied to tasks ranging from medical diagnosis to image segmentation. In this article, we'll present some of the necessary background that will pave the way to understanding the most basic graphical model, the Naïve Bayes classifier. We will then look at a slightly more complicated graphical model known as the Hidden Markov Model, or HMM for short. To get started in this field, we must first learn about graphs.


# A Little Graph Theory

Graph theory is a branch of mathematics that deals with mathematical objects known as **graphs**. Here, a graph does not have the everyday meaning that we are more used to talking about, in the sense of a diagram or plot with an *x* and *y* axis. In graph theory, a graph consists of two sets. The first is a set of vertices, which are also referred to as **nodes**. We typically use integers to label and enumerate the vertices. The second set consists of **edges** between these vertices.

Thus, a graph is nothing more than a description of some points and the connections between them. The connections can have a direction so that an edge goes from the **source** or **tail vertex** to the **target** or **head vertex**. In this case, we have a **directed graph**. Alternatively, the edges can have no direction, so that the graph is **undirected**.

A common way to describe a graph is via the **adjacency matrix**. If we have *V* vertices in the graph, an adjacency matrix is a *V×V* matrix whose entries are 0 if the vertex represented by the row number is not connected to the vertex represented by the column number. If there is a connection, the entry is 1.

With undirected graphs, the two nodes at either end of an edge are connected to each other, so the adjacency matrix is symmetric. For directed graphs, a vertex *vi* is connected to a vertex *vj* via an edge (*vi*,*vj*); that is, an edge where *vi* is the tail and *vj* is the head. Here is an example adjacency matrix for a graph with seven nodes:

```
> adjacency_m
  1 2 3 4 5 6 7
1 0 0 0 0 0 1 0
2 1 0 0 0 0 0 0
3 0 0 0 0 0 0 1
4 0 0 1 0 1 0 1
5 0 0 0 0 0 0 0
6 0 0 0 1 1 0 1
7 0 0 0 0 1 0 0
```
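The console output above appears to come from R. Purely as an illustrative sketch, here is one way to encode the same matrix and inspect it in Python; the variable and function names here are our own, not from any particular library:

```python
# The 7x7 adjacency matrix from the console output above.
# Python lists are 0-based, so vertex i maps to index i - 1.
adjacency_m = [
    [0, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 0, 0],
]

def edges(matrix):
    """List the directed edges (tail, head) encoded by an adjacency matrix,
    using 1-based vertex labels as in the console output."""
    return [(i + 1, j + 1)
            for i, row in enumerate(matrix)
            for j, entry in enumerate(row) if entry == 1]

def is_symmetric(matrix):
    """True if the matrix equals its transpose, i.e. the graph is undirected."""
    n = len(matrix)
    return all(matrix[i][j] == matrix[j][i]
               for i in range(n) for j in range(n))

print(edges(adjacency_m))         # the first edge listed is (1, 6)
print(is_symmetric(adjacency_m))  # False: this is a directed graph
```

Listing the edges recovers exactly the connections described in the text; for instance, the 1 in the first row and sixth column becomes the edge (1, 6).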

This matrix is not symmetric, so we know that we are dealing with a directed graph. The first *1* value in the first row of the matrix denotes the fact that there is an edge starting from vertex 1 and ending on vertex 6. When the number of nodes is small, it is easy to visualize a graph. We simply draw circles to represent the vertices and lines between them to represent the edges.

For directed graphs, we use arrows on the lines to denote the directions of the edges. It is important to note that we can draw the same graph in an infinite number of different ways on the page. This is because the graph tells us nothing about the positioning of the nodes in space; we only care about how they are connected to each other. For example, the graph described by the adjacency matrix we just saw can be drawn in many different but equally valid ways.

Two vertices are said to be connected to each other if there is an edge between them (taking note of the order when talking about directed graphs). If we can move from vertex *vi* to vertex *vj* by starting at the first vertex and finishing at the second, moving along the edges of the graph and passing through an arbitrary number of intermediate vertices, then the sequence of edges traversed forms a **path** between these two vertices. Note that this definition requires that all the vertices and edges along the path are distinct from each other (with the possible exception of the first and last vertex).

For example, in our graph, vertex 6 can be reached from vertex 2 by a path leading through vertex 1. Sometimes, there can be many such possible paths through the graph, and we are often interested in the shortest path, which passes through the fewest intermediate vertices. We can define the distance between two nodes in the graph as the length of the shortest path between them. A path that begins and ends at the same vertex is known as a **cycle**. A graph that does not have any cycles in it is known as an **acyclic graph**. If an acyclic graph has directed edges, it is known as a **directed acyclic graph**, often abbreviated as a **DAG**.
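To make these definitions concrete, the shortest path between two vertices can be found with a breadth-first search, and a directed graph can be tested for cycles with a depth-first search. Here is a minimal sketch in Python, reusing the example adjacency matrix; the function names are our own illustrations, not from any particular library:

```python
from collections import deque

# Adjacency matrix of the directed example graph (vertices labeled 1..7).
adjacency_m = [
    [0, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 0, 0],
]

def shortest_path(matrix, source, target):
    """Breadth-first search: returns a shortest path from source to target
    as a list of 1-based vertices, or None if target is unreachable."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        # enumerate(..., start=1) yields (vertex label, matrix entry)
        for j, entry in enumerate(matrix[path[-1] - 1], start=1):
            if entry == 1 and j not in visited:
                visited.add(j)
                queue.append(path + [j])
    return None

def has_cycle(matrix):
    """Depth-first search for a directed cycle: a cycle exists exactly
    when DFS reaches a vertex that is still on the current DFS stack."""
    n = len(matrix)
    state = [0] * n  # 0 = unvisited, 1 = on DFS stack, 2 = finished
    def dfs(v):
        state[v] = 1
        for j in range(n):
            if matrix[v][j] == 1:
                if state[j] == 1 or (state[j] == 0 and dfs(j)):
                    return True
        state[v] = 2
        return False
    return any(state[v] == 0 and dfs(v) for v in range(n))

print(shortest_path(adjacency_m, 2, 6))  # [2, 1, 6]: through vertex 1
print(has_cycle(adjacency_m))            # False: this example graph is a DAG
```

Because breadth-first search explores vertices in order of increasing distance from the source, the first path it finds to the target is guaranteed to use the fewest edges, which matches the definition of distance given above.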

There are many excellent references on graph theory available. One such reference, available online, is *Graph Theory* by *Reinhard Diestel*, *Springer*. This landmark reference is now in its *4th* edition and can be found at http://diestel-graph-theory.com/.

It might not seem obvious at first, but it turns out that a large number of real world situations can be conveniently described using graphs. For example, the network of friendships on social media sites, such as Facebook, or followers on Twitter, can be represented as graphs. On Facebook, the friendship relation is reciprocal, and so the graph is undirected. On Twitter, the follower relation is not, and so the graph is directed.

Another example is the network of web pages on the Web, where hyperlinks from one page to another form directed edges. Transport networks, communication networks, and electricity grids can also be represented as graphs. For the predictive modeler, it turns out that a special class of models, known as **probabilistic graphical models**, or **graphical models** for short, involves a graph structure.

In a graphical model, the nodes represent random variables and the edges between them represent dependencies. Before we can go into further detail, we'll need to take a short detour to visit Bayes' Theorem, a classic theorem in statistics that, despite its simplicity, has implications both profound and practical when it comes to statistical inference and prediction.

# Summary

In this article, we learned that graphs consist of nodes and edges. We also learned that a common way to describe a graph is via its adjacency matrix.

For more information on graphical models, you can refer to the books published by Packt (https://www.packtpub.com/):

- *Mastering Predictive Analytics with Python* (https://www.packtpub.com/big-data-and-business-intelligence/mastering-predictive-analytics-python)
- *R Graphs Cookbook Second Edition* (https://www.packtpub.com/big-data-and-business-intelligence/r-graph-cookbook-%E2%80%93-second-edition)
