





















































In this article by Nicholas McClure, the author of the book TensorFlow Machine Learning Cookbook, we will cover basic recipes that build an understanding of how TensorFlow works and of how to access the data and additional resources for this book.
Google's TensorFlow engine has a unique way of solving problems, and this unique approach allows us to solve machine learning problems very efficiently. We will cover the basic steps of how TensorFlow operates; understanding them is essential for the recipes in the rest of this book.
At first, computation in TensorFlow may seem needlessly complicated, but there is a reason for it: because of how TensorFlow treats computation, developing more complicated algorithms is relatively easy. This recipe will walk you through the pseudocode of how a TensorFlow algorithm usually works.
Currently, TensorFlow is only supported on Mac and Linux distributions; using TensorFlow on Windows requires a virtual machine. Throughout this book we will only concern ourselves with the Python library wrapper of TensorFlow. This book will use Python 3.4+ (https://www.python.org) and TensorFlow 0.7 (https://www.tensorflow.org). While TensorFlow can run on the CPU, it runs faster on a GPU, and it is supported on graphics cards with NVIDIA Compute Capability 3.0+. To run on a GPU, you will also need to download and install the NVIDIA CUDA Toolkit (https://developer.nvidia.com/cuda-downloads). Some of the recipes will also rely on a current installation of the Python packages SciPy, NumPy, and Scikit-Learn.
Here we will introduce the general flow of TensorFlow algorithms. Most recipes will follow this outline:
# Transform and normalize the data
data = tf.nn.batch_norm_with_global_normalization(...)

# Set algorithm parameters
learning_rate = 0.01
iterations = 1000

# Initialize variables and placeholders
a_var = tf.constant(42)
x_input = tf.placeholder(tf.float32, [None, input_size])
y_input = tf.placeholder(tf.float32, [None, num_classes])

# Define the model structure
y_pred = tf.add(tf.mul(x_input, weight_matrix), b_matrix)

# Declare the loss function
loss = tf.reduce_mean(tf.square(y_actual - y_pred))

# Initialize and train the model
with tf.Session(graph=graph) as session:
    ...
    session.run(...)
    ...
Note that we can also initiate our graph session with:
session = tf.Session(graph=graph)
session.run(...)
In TensorFlow, we have to set up the data, variables, placeholders, and model before we tell the program to train and change the variables to improve its predictions. TensorFlow accomplishes this through the computational graph. We tell it to minimize a loss function, and TensorFlow does this by modifying the variables in the model. TensorFlow knows how to modify the variables because it keeps track of the computations in the model and automatically computes the gradient for every variable. Because of this, we can see how easy it is to make changes and try different data sources.
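To make this concrete, here is a minimal sketch of that loop with a hypothetical one-weight linear model (the variable names and data values here are illustrative, not from a recipe in this book):
import tensorflow as tf

# A single-weight model keeps the gradient bookkeeping easy to see
x_input = tf.placeholder(tf.float32)
y_target = tf.placeholder(tf.float32)
weight = tf.Variable(1.0)

y_pred = tf.mul(weight, x_input)                     # model structure
loss = tf.reduce_mean(tf.square(y_target - y_pred))  # L2 loss

# TensorFlow computes d(loss)/d(weight) automatically; the optimizer
# uses that gradient to update the variable
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for _ in range(100):
        sess.run(train_step, feed_dict={x_input: [1., 2., 3.], y_target: [2., 4., 6.]})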
Tensors are the data structure that TensorFlow operates on in the computational graph. We can declare these tensors as variables or feed them in as placeholders, so first we must know how to create tensors. It is important to point out that just by creating a tensor, TensorFlow is not adding anything to the computational graph; it does so only once we create a variable out of the tensor, at which point it creates several graph structures. See the next section on variables and placeholders for more information.
Here we will cover the main ways to create tensors in TensorFlow.
zero_tsr = tf.zeros([row_dim, col_dim])       # Tensor filled with zeros
ones_tsr = tf.ones([row_dim, col_dim])        # Tensor filled with ones
filled_tsr = tf.fill([row_dim, col_dim], 42)  # Tensor filled with a constant
constant_tsr = tf.constant([1,2,3])           # Tensor from existing values
Note that the tf.constant() function can be used to broadcast a value into an array, mimicking the behavior of tf.fill(), by writing tf.constant(42, shape=[row_dim, col_dim]).
zeros_similar = tf.zeros_like(constant_tsr)  # Zeros with the shape of a prior tensor
ones_similar = tf.ones_like(constant_tsr)    # Ones with the shape of a prior tensor
Note that since these tensors depend on prior tensors, we must initialize them in order. Attempting to initialize them all at once will result in an error.
linear_tsr = tf.linspace(start=0.0, stop=1.0, num=3)
The result is the sequence [0.0, 0.5, 1.0]. Note that this function includes the specified stop value.
integer_seq_tsr = tf.range(start=6, limit=15, delta=3)
The result is the sequence [6, 9, 12]. Note that this function does not include the limit value.
randunif_tsr = tf.random_uniform([row_dim, col_dim], minval=0, maxval=1)       # Uniform random values
randnorm_tsr = tf.random_normal([row_dim, col_dim], mean=0.0, stddev=1.0)      # Normally distributed random values
truncnorm_tsr = tf.truncated_normal([row_dim, col_dim], mean=0.0, stddev=1.0)  # Normal values within two standard deviations of the mean
shuffled_output = tf.random_shuffle(input_tensor)                              # Shuffle entries along the first dimension
cropped_output = tf.random_crop(input_tensor, crop_size)                       # Random crop of the given size
cropped_image = tf.random_crop(my_image, [height//2, width//2, 3])             # Crop an image to half size, keeping all three color channels
Once we have decided how to create the tensors, we may also create the corresponding variables by wrapping the tensor in the Variable() function, as follows (more on this in the next section):
my_var = tf.Variable(tf.zeros([row_dim, col_dim]))
We are not limited to the built-in functions; we can convert any numpy array, Python list, or constant to a tensor using the convert_to_tensor() function. Know that this function also accepts tensors as an input, in case we wish to generalize a computation inside a function.
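As a quick sketch (the values here are arbitrary):
import numpy as np
import tensorflow as tf

list_tsr = tf.convert_to_tensor([1., 2., 3.])                     # from a Python list
numpy_tsr = tf.convert_to_tensor(np.array([[1., 2.], [3., 4.]]))  # from a numpy array
passthrough_tsr = tf.convert_to_tensor(list_tsr)                  # tensors pass through unchanged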
One of the most important distinctions to make with data is whether it is a placeholder or variable. Variables are the parameters of the algorithm and TensorFlow keeps track of how to change these to optimize the algorithm. Placeholders are objects that allow you to feed in data of a specific type and shape or that depend on the results of the computational graph, like the expected outcome of a computation.
The main way to create a variable is by using the Variable() function, which takes a tensor as an input and outputs a variable. This is only the declaration; we still need to initialize the variable. Initializing is what puts the variable, with its corresponding methods, on the computational graph. Here is an example of creating and initializing a variable:
my_var = tf.Variable(tf.zeros([2,3]))          # Declare the variable
sess = tf.Session()
initialize_op = tf.initialize_all_variables()  # Create the initialization operation
sess.run(initialize_op)                        # Run it to initialize the variable
To see what the computational graph looks like after creating and initializing a variable, see Figure 1 in the next part of this section, How it works….
Placeholders are just holding the position for data to be fed into the graph. Placeholders get data from a feed_dict argument in the session. To put a placeholder in the graph, we must perform at least one operation on it. In the following snippet, we initialize the graph, declare x to be a placeholder, and define y as the identity operation on x, which just returns x. We then create data to feed into the x placeholder and run the identity operation. It is worth noting that TensorFlow will not return a self-referenced placeholder in the feed dictionary. The code is shown below and the resulting graph is in the next section, How it works…:
import numpy as np
sess = tf.Session()
x = tf.placeholder(tf.float32, shape=[2,2])
y = tf.identity(x)            # An operation on the placeholder
x_vals = np.random.rand(2,2)  # Data to feed in
sess.run(y, feed_dict={x: x_vals})
# Note that sess.run(x, feed_dict={x: x_vals}) will result in a self-referencing error.
The computational graph of initializing a variable as a tensor of zeros is shown in Figure 1 below:
Figure 1: Here we can see what the computational graph looks like in detail with just one variable, initialized to all zeros. The grey shaded region is a very detailed view of the operations and constants involved. The main computational graph, with less detail, is the smaller graph outside the grey region in the upper right.
Similarly, the computational graph of feeding a numpy array into a placeholder can be seen below in Figure 2:
Figure 2: Here is the computational graph of an initialized placeholder. The grey shaded region is a very detailed view of the operations and constants involved. The main computational graph, with less detail, is the smaller graph outside the grey region in the upper right.
During the run of the computational graph, we have to tell TensorFlow when to initialize the variables we have created. While each variable has an initializer method, the most common way to do this is with the helper function initialize_all_variables(). This function creates an operation in the graph that initializes all the variables we have created, as follows:
initializer_op = tf.initialize_all_variables()
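A minimal usage sketch, assuming a session has been created:
sess = tf.Session()
sess.run(initializer_op)  # every variable declared so far is now initialized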
But if we want to initialize a variable based on the results of initializing another variable, we have to initialize variables in the order we want, as follows:
sess = tf.Session()
first_var = tf.Variable(tf.zeros([2,3]))
sess.run(first_var.initializer)
second_var = tf.Variable(tf.zeros_like(first_var))
# Depends on first_var
sess.run(second_var.initializer)
Many algorithms depend on matrix operations. TensorFlow gives us easy-to-use operations to perform such matrix calculations. For all of the following examples, we can create a graph session by running the following code:
import numpy as np
import tensorflow as tf
sess = tf.Session()
identity_matrix = tf.diag([1.0, 1.0, 1.0]) # Identity matrix
A = tf.truncated_normal([2, 3]) # 2x3 random normal matrix
B = tf.fill([2,3], 5.0) # 2x3 constant matrix of 5's
C = tf.random_uniform([3,2]) # 3x2 random uniform matrix
D = tf.convert_to_tensor(np.array([[1., 2., 3.],[-3., -7., -1.],[0., 5., -2.]]))
print(sess.run(identity_matrix))
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]]
print(sess.run(A))
[[ 0.96751703 0.11397751 -0.3438891 ]
[-0.10132604 -0.8432678 0.29810596]]
print(sess.run(B))
[[ 5. 5. 5.]
[ 5. 5. 5.]]
print(sess.run(C))
[[ 0.33184157 0.08907614]
[ 0.53189191 0.67605299]
[ 0.95889051 0.67061249]]
print(sess.run(D))
[[ 1. 2. 3.]
[-3. -7. -1.]
[ 0. 5. -2.]]
Note that if we were to run sess.run(C) again, we would re-sample the random tensor and end up with different random values.
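If we instead want random values that stay fixed for the life of a session, one option (a sketch, not part of the recipe above) is to wrap the random tensor in a variable and initialize it once:
C_fixed = tf.Variable(tf.random_uniform([3, 2]))  # sampled once, at initialization
sess.run(C_fixed.initializer)
print(sess.run(C_fixed))  # the same values on every subsequent run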
print(sess.run(A+B))
[[ 4.61596632 5.39771316 4.4325695 ]
[ 3.26702736 5.14477345 4.98265553]]
print(sess.run(B-B))
[[ 0. 0. 0.]
[ 0. 0. 0.]]
For matrix multiplication, we have the matmul() function:
print(sess.run(tf.matmul(B, identity_matrix)))
[[ 5. 5. 5.]
[ 5. 5. 5.]]
print(sess.run(tf.transpose(C)))
[[ 0.67124544 0.26766731 0.99068872]
[ 0.25006068 0.86560275 0.58411312]]
print(sess.run(tf.matrix_determinant(D)))
-38.0
print(sess.run(tf.matrix_inverse(D)))
[[-0.5 -0.5 -0.5 ]
[ 0.15789474 0.05263158 0.21052632]
[ 0.39473684 0.13157895 0.02631579]]
Note that the inverse method is based on the Cholesky decomposition if the matrix is symmetric positive definite or the LU decomposition otherwise.
print(sess.run(tf.cholesky(identity_matrix)))
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]]
print(sess.run(tf.self_adjoint_eig(D)))
[[-10.65907521 -0.22750691 2.88658212]
[ 0.21749542 0.63250104 -0.74339638]
[ 0.84526515 0.2587998 0.46749277]
[ -0.4880805 0.73004459 0.47834331]]
Note that the function self_adjoint_eig() outputs the eigenvalues in the first row and the eigenvectors in the remaining rows. In mathematics, this is known as the eigendecomposition of a matrix.
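A small sketch of separating the two pieces, assuming the stacked output format shown above:
eig_output = sess.run(tf.self_adjoint_eig(D))
eig_values = eig_output[0, :]    # first row holds the eigenvalues
eig_vectors = eig_output[1:, :]  # remaining rows hold the eigenvectors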
TensorFlow provides all the tools for us to get started with numerical computations and add such computations to our graphs. This notation might seem quite heavy for simple matrix operations. Remember that we are adding these operations to the graph and telling TensorFlow what tensors to run through those operations.
Besides the standard arithmetic operations, TensorFlow provides more operations that we should be aware of and know how to use before proceeding. Again, we can create a graph session by running the following code:
import tensorflow as tf
sess = tf.Session()
TensorFlow has the standard operations on tensors: add(), sub(), mul(), and div(). Note that all of the operations in this section will evaluate the inputs element-wise unless specified otherwise.
print(sess.run(tf.div(3,4)))  # div() returns the same type as its inputs (integer division here)
0
print(sess.run(tf.truediv(3,4)))  # truediv() always performs floating-point division
0.75
print(sess.run(tf.floordiv(3.0,4.0)))  # floordiv() rounds the quotient down
0.0
print(sess.run(tf.mod(22.0, 5.0)))  # mod() returns the remainder after division
2.0
print(sess.run(tf.cross([1., 0., 0.], [0., 1., 0.])))  # cross() is only defined for two 3-dimensional vectors
[ 0. 0. 1.]
Here are the other math functions that TensorFlow provides:

Function | Description
--- | ---
abs() | Absolute value of one input tensor
ceil() | Ceiling function of one input tensor
cos() | Cosine function of one input tensor
exp() | Base e exponential of one input tensor
floor() | Floor function of one input tensor
inv() | Multiplicative inverse (1/x) of one input tensor
log() | Natural logarithm of one input tensor
maximum() | Element-wise max of two tensors
minimum() | Element-wise min of two tensors
neg() | Negative of one input tensor
pow() | The first tensor raised to the second tensor element-wise
round() | Rounds one input tensor
rsqrt() | One over the square root of one tensor
sign() | Returns -1, 0, or 1, depending on the sign of the tensor
sin() | Sine function of one input tensor
sqrt() | Square root of one input tensor
square() | Square of one input tensor

There are also some specialty mathematical functions used in machine learning that are worth mentioning:

Function | Description
--- | ---
digamma() | Psi function, the derivative of the lgamma() function
erf() | Gaussian error function, element-wise, of one tensor
erfc() | Complementary error function of one tensor
igamma() | Lower regularized incomplete gamma function
igammac() | Upper regularized incomplete gamma function
lbeta() | Natural logarithm of the absolute value of the beta function
lgamma() | Natural logarithm of the absolute value of the gamma function
squared_difference() | Computes the square of the differences between two tensors
It is important to know what functions are available to us to add to our computational graphs. Mostly we will be concerned with the preceding functions. We can also generate many different custom functions as compositions of the preceding, as follows:
# Tangent function (tan(pi/4)=1)
print(sess.run(tf.div(tf.sin(3.1416/4.), tf.cos(3.1416/4.))))
1.0
If we wish to add other operations to our graphs that are not listed here, we must create our own from the preceding functions. Here is an example of an operation not listed above that we can add to our graph:
# Define a custom polynomial function
def custom_polynomial(value):
    # Return 3 * x^2 - x + 10
    return(tf.sub(3 * tf.square(value), value) + 10)
print(sess.run(custom_polynomial(11)))
362
Thus, in this article, we have implemented some introductory recipes that cover the basics of TensorFlow.