TensorFlow has picked up a lot of steam over the past couple of months, and there’s been more and more interest in learning how to use the library. I’ve seen tons of tutorials out there that just slap together TensorFlow code, roughly describe what some of the lines do, and call it a day. On the other end of the spectrum, I’ve seen really dense tutorials that mix universal machine learning concepts in with TensorFlow’s API. There is value in both of these sorts of examples, but I find the former a little too sparse and the latter a little too confusing. In this post, I plan to focus solely on information related to the TensorFlow API, and not touch on general machine learning concepts (aside from describing computational graphs). Additionally, I will link directly to relevant portions of the TensorFlow API for further reading. While this post isn’t going to be a proper tutorial, my hope is that focusing on the core components and workflows of the TensorFlow API will make working with other resources more accessible and comprehensible.
As a final note, I’ll be referring to the Python API and not the C++ API in this post.
Definitions
Let’s start off with a glossary of key words you’re going to see when using TensorFlow.
- Tensor: An n-dimensional matrix. For most practical purposes, you can think of them the same way you would a two-dimensional matrix for matrix algebra. In TensorFlow, the return value of any mathematical operation is a tensor. See here for more about TensorFlow Tensor objects.
- Graph: The computational graph that is defined by the user. It’s constructed of nodes and edges, representing computations and connections between those computations respectively. For a quick primer on computation graphs and how they work in backpropagation, check out Chris Olah’s post here. A TensorFlow user can define more than one Graph object and run them separately. Additionally, it is possible to define a large graph and run only smaller portions of it. See here for more information about TensorFlow Graphs.
- Op, Operation (Ops, Operations): Any sort of computation on tensors. Operations (or Ops) can take in zero or more TensorFlow Tensor objects, and output zero or more Tensor objects as a result of the computation. Ops are used all throughout TensorFlow, from doing simple addition to matrix multiplication to initializing TensorFlow variables. Operations run only when they are passed to the Session object, which I’ll discuss below. For the most part, nodes and operations are interchangeable concepts. In this guide, I’ll try to use the term Operation or Op when referring to TensorFlow-specific operations and node when referring to general computation graph terminology. There’s a short sketch just after this list that shows a couple of Ops being created. Here’s the API reference for the Operation class.
- Node: A computation in the graph that takes as input zero or more tensors and outputs zero or more tensors. A node does not have to interact with any other nodes, and thus does not have to have any edges connected to it. Visually, these are usually depicted as ellipses or boxes.
- Edge: The directed connection between two nodes. In TensorFlow, each edge can be seen as one or more tensors, and usually represents the output of one node becoming the input of the next node. Visually, these are usually depicted as lines or arrows.
- Device: A CPU or GPU. In TensorFlow, computations can occur across many different CPUs and GPUs, and it must keep track of these devices in order to coordinate work properly.
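To make some of these terms concrete, here’s a minimal sketch (the names my_graph, a, b, and total are just for illustration) that builds a small Graph containing two constant Ops feeding a third. Nothing is computed yet; the graph only describes the work to be done:
import tensorflow as tf
# Define a new Graph object (TensorFlow also creates a default graph for you automatically)
my_graph = tf.Graph()
with my_graph.as_default():
    # Two constant Ops; each is a node whose output is a Tensor
    a = tf.constant(3.0)
    b = tf.constant(4.0)
    # An addition Op; its incoming edges carry the tensors 'a' and 'b'
    total = tf.add(a, b)
# Printing 'total' shows a Tensor object, not 7.0 -- no computation happens until a Session runs the graph
print(total)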
The Typical TensorFlow Coding Workflow
Writing a working TensorFlow model boils down to two steps:
- Build the Graph using a series of Operations, placeholders, and Variables.
- Run the Graph with training data repeatedly using the Session (you’ll want to test the model while training to make sure it’s learning properly). A bare-bones sketch of this two-step structure follows this list.
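To give you a feel for that structure before we dig into the pieces, here’s a bare-bones sketch of the two steps. The names x, w, and output are just stand-ins for whatever model you actually build:
import tensorflow as tf
# Step 1: build the graph
x = tf.placeholder(tf.float32)  # input that will be fed in at run time
w = tf.Variable(1.0)            # state that persists between runs
output = x * w                  # an Op connecting the two
# Step 2: run the graph with a Session
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print(sess.run(output, feed_dict={x: 2.0}))  # Prints 2.0
sess.close()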
Sounds simple enough, and once you get the hang of it, it really is! We talked about Ops in the section above, but now I want to put special emphasis on placeholders, Variables, and the Session. They are fairly easy to grasp, but getting these core fundamentals solidified will give you the context you need for learning the rest of the TensorFlow API.
Placeholders
A Placeholder is a node in the graph that must be fed data via the feed_dict parameter in Session.run (see below). In general, these are used to specify input data and label data. Basically, you use placeholders to tell TensorFlow, “Hey TF, the data here is going to change each time you run the graph, but it will always be a tensor of size [N] and data-type [D]. Use that information to make sure that my matrix/tensor calculations are set up properly.” TensorFlow needs that information when it builds the graph, as it has to guarantee that you don’t accidentally try to multiply a 5×5 matrix with an 8×8 matrix (amongst other things).
Placeholders are easy to define. Just assign the return value of tensorflow.placeholder() to an ordinary Python variable:
import tensorflow as tf
# Create a Placeholder of size 100x400 that will contain 32-bit floating point numbers
my_placeholder = tf.placeholder(tf.float32, shape=(100, 400))
Read more about Placeholder objects here.
Note: We are required to feed data to the placeholder when we run our graph. We’ll cover this in the Session section below.
Variables
Variables are objects that hold tensor values and, unlike ordinary Tensor objects, persist across multiple calls to Session.run(). That is, they contain information that can be altered during the run of a graph, and that altered state can then be accessed the next time the graph is run. Variables are used to hold the weights and biases of a machine learning model while it trains, and their final values are what define the trained model.
Defining and using Variables is mostly straightforward. Define a Variable with tensorflow.Variable() and update its value with its assign() method, which returns an Op that performs the update when it is run:
import tensorflow as tf
# Create a variable with the value 0 and the name of 'my_variable'
my_var = tf.Variable(0, name='my_variable')
# Create an Op that will increment the variable by one each time it is run
increment = my_var.assign(my_var + 1)
One catch with Variable objects is that you can’t run Ops with them until you initialize them within the Session object. This is usually done with the Operation returned from tf.initialize_all_variables(), as I’ll describe in the next section.
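Jumping ahead to the Session for a moment, here’s a minimal sketch that reuses the Variable defined above and shows both the initialization step and the value persisting between calls to Session.run():
import tensorflow as tf
my_var = tf.Variable(0, name='my_variable')
increment = my_var.assign(my_var + 1)
sess = tf.Session()
sess.run(tf.initialize_all_variables())  # Skipping this line would cause an error when running 'increment'
print(sess.run(increment))  # Prints 1
print(sess.run(increment))  # Prints 2 -- the updated value persisted between runs
sess.close()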
The Session
Finally, let’s talk about running the Session. The TensorFlow Session object is in charge of keeping track of all Variables, coordinating computation across devices, and generally doing anything that involves running the graph. You generally start a Session by calling tensorflow.Session() and either assigning its return value to a handle or using it in a with … as statement.
The most important method of the Session object is run(). It takes two main arguments: fetches, a single Operation or Tensor (or a list of them) that you want calculated, and feed_dict, an optional dictionary mapping Tensors (often Placeholders) to the values that should stand in for them during that run. This is how you specify which values you want returned from your computation as well as the input values for training.
Here is a toy example that uses a placeholder, a Variable, and the Session to showcase their basic use:
import tensorflow as tf
# Create a placeholder for inputting floating point data later
a = tf.placeholder(tf.float32)
# Make a base Variable object with the starting value of 0
start = tf.Variable(0.0)
# Create an Op that increments the 'start' Variable by the value of 'a' and outputs the updated value
y = start.assign(start + a)
# Open up a TensorFlow Session and assign it to the handle 'sess'
sess = tf.Session()
# Important: initialize the Variable, or else we won't be able to run our computation
init = tf.initialize_all_variables() # 'init' is an Op: must be run by sess
sess.run(init) # Now the Variable is initialized!
# Get the value of 'y', feeding in different values for 'a', and print the result
# Because we are using a Variable, the value should change each time
print(sess.run(y, feed_dict={a:1})) # Prints 1.0
print(sess.run(y, feed_dict={a:0.5})) # Prints 1.5
print(sess.run(y, feed_dict={a:2.2})) # Prints 3.7
# Close the Session
sess.close()
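As mentioned above, you can also open the Session with a with … as statement, which closes it for you automatically when the block exits. Here’s a sketch of the same run written that way (the definitions of a, start, and y are unchanged):
# Open a Session that will be closed automatically at the end of the 'with' block
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())  # Initialize the Variable for this new Session
    print(sess.run(y, feed_dict={a: 1}))     # Prints 1.0
    print(sess.run(y, feed_dict={a: 0.5}))   # Prints 1.5
    # 'fetches' can also be a list; run() then returns a list of results
    print(sess.run([y], feed_dict={a: 2.0})) # Prints [3.5]
# No explicit sess.close() is needed here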
Check out the documentation for TensorFlow’s Session object here.
Finishing Up
Alright! This primer should give you a head start on understanding more of the resources out there for TensorFlow. The less you have to think about how TensorFlow works, the more time you can spend working out how to set up the best neural network you can! Good luck, and happy flowing!
About the author
Sam Abrahams is a freelance data engineer and animator in Los Angeles, CA, USA. He specializes in real-world applications of machine learning and is a contributor to TensorFlow. Sam runs a small tech blog, Memdump, and is an active member of the local hacker scene in West LA.