
[box type=”note” align=”” class=”” width=””]The following is an excerpt from the book Machine Learning with Go, Chapter 8, Neural Networks and Deep Learning, written by Daniel Whitenack. The associated code bundle is available at the end of the article.[/box]

Deep learning models are powerful, especially for tasks like computer vision. However, you should also keep in mind that complicated combinations of these neural net components are extremely hard to interpret. That is, determining why the model made a certain prediction can be near impossible. This can be a problem when you need to maintain compliance in certain industries and jurisdictions, and it can also inhibit debugging or maintenance of your applications. That being said, there are some major efforts to improve the interpretability of deep learning models. Notable among these efforts is the LIME project.

Deep learning with Go

There are a variety of options when you are looking to build or utilize deep learning models from Go. This, as with deep learning itself, is an ever-changing landscape. However, the options for building, training and utilizing deep learning models in Go are generally as follows:

Use a Go package: There are Go packages that allow you to use Go as your main interface to build and train deep learning models. The most fully featured and developed of these packages is Gorgonia. It treats Go as a first-class citizen and is written in Go, even if it does make significant use of cgo to interface with numerical libraries. A short Gorgonia sketch follows this list.

Use an API or Go client for a non-Go DL framework: You can interface with popular deep learning services and frameworks from Go, including TensorFlow, Machine Box, H2O, and the various cloud provider or third-party API offerings (such as IBM Watson). TensorFlow and Machine Box actually have Go bindings or SDKs, which are continually improving. For the other services, you may need to interact via REST or even call binaries using exec.

Use cgo: Of course, Go can talk to and integrate with C/C++ libraries for deep learning, including the TensorFlow libraries and various libraries from Intel. However, this is a difficult road, and it is only recommended when absolutely necessary.
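To give a flavor of the first option, here is a minimal sketch of building and running a tiny computation graph with Gorgonia. This is not the book's example code; it follows the hello-world style from Gorgonia's own documentation, and depending on your Gorgonia version the import path may be github.com/chewxy/gorgonia instead of the one shown here:

package main

import (
	"fmt"
	"log"

	"gorgonia.org/gorgonia"
)

func main() {
	// Define a tiny computation graph: z = x + y.
	g := gorgonia.NewGraph()
	x := gorgonia.NewScalar(g, gorgonia.Float64, gorgonia.WithName("x"))
	y := gorgonia.NewScalar(g, gorgonia.Float64, gorgonia.WithName("y"))

	z, err := gorgonia.Add(x, y)
	if err != nil {
		log.Fatal(err)
	}

	// Bind concrete values to the graph's inputs.
	gorgonia.Let(x, 2.0)
	gorgonia.Let(y, 2.5)

	// Execute the graph on a tape machine and read back the result.
	machine := gorgonia.NewTapeMachine(g)
	defer machine.Close()
	if err := machine.RunAll(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%v\n", z.Value()) // 4.5
}

Training a real network involves more machinery (weight tensors, a loss node, and gradient updates), but the graph-then-run workflow stays the same.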

As TensorFlow is by far the most popular framework for deep learning (at the moment), we will briefly explore the second category listed here. However, the TensorFlow Go bindings are under active development, and some functionality is quite crude at the moment. The TensorFlow team recommends that, if you are going to use a TensorFlow model in Go, you first train and export this model using Python. That pre-trained model can then be utilized from Go, as we will demonstrate in the next section. There are a number of members of the community working very hard to make Go more of a first-class citizen for TensorFlow. As such, it is likely that the rough edges of the TensorFlow bindings will be smoothed out over the coming year.

Setting up TensorFlow for use with Go

The TensorFlow team has provided some good docs for installing TensorFlow and getting it ready for use with Go. These docs can be found on the TensorFlow website. There are a couple of preliminary steps, but once you have the TensorFlow C libraries installed, you can get the following Go package:

$ go get github.com/tensorflow/tensorflow/tensorflow/go

Everything should be good to go if you were able to get github.com/tensorflow/tensorflow/tensorflow/go without error, but you can make sure that you are ready to use TensorFlow by executing the following tests:

$ go test github.com/tensorflow/tensorflow/tensorflow/go 
ok github.com/tensorflow/tensorflow/tensorflow/go 0.045s
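As a further sanity check (not required, but a quick way to confirm that the bindings can see the TensorFlow C library), you can run a minimal program that prints the linked TensorFlow version via tf.Version():

package main

import (
	"fmt"

	tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

func main() {
	// Print the version of the TensorFlow C library linked by the Go bindings.
	fmt.Println("TensorFlow version:", tf.Version())
}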

Retrieving and calling a pretrained TensorFlow model

The model that we are going to use is a Google model for object recognition in images called Inception. The model can be retrieved as follows:

$ mkdir model
$ cd model
$ wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
--2017-09-09 18:29:03--  https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.6.112, 2607:f8b0:4009:812::2010
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.6.112|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 49937555 (48M) [application/zip]
Saving to: ‘inception5h.zip’

inception5h.zip     100%[===================>]  47.62M  19.0MB/s    in 2.5s

2017-09-09 18:29:06 (19.0 MB/s) - ‘inception5h.zip’ saved [49937555/49937555]

$ unzip inception5h.zip
Archive:  inception5h.zip
  inflating: imagenet_comp_graph_label_strings.txt
  inflating: tensorflow_inception_graph.pb
  inflating: LICENSE

After unzipping the compressed model, you should see a *.pb file. This is a protobuf file that represents a frozen state of the model. Think back to our simple neural network. The network was fully defined by a series of weights and biases. Although more complicated, this model can be defined in a similar way and these definitions are stored in this protobuf file.
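If you are curious about what is inside the frozen graph, the following sketch (not part of the book's example code) imports the *.pb file and prints the names of the operations it defines. This is a handy way to confirm the "input" and "output" node names that we reference later; the model path is assumed to match the model directory created above:

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	tf "github.com/tensorflow/tensorflow/tensorflow/go"
)

func main() {
	// Read the frozen, serialized GraphDef.
	model, err := ioutil.ReadFile("model/tensorflow_inception_graph.pb")
	if err != nil {
		log.Fatal(err)
	}

	// Import it into an in-memory graph.
	graph := tf.NewGraph()
	if err := graph.Import(model, ""); err != nil {
		log.Fatal(err)
	}

	// List every operation defined in the graph.
	for _, op := range graph.Operations() {
		fmt.Println(op.Name())
	}
}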

To call this model, we will use some example code from the TensorFlow Go bindings docs. This code loads the model and uses it to detect and label the contents of a *.jpg image.

As the code is included in the TensorFlow docs, I will spare the details and just highlight a couple of snippets. To load the model, we perform the following:

// Load the serialized GraphDef from a file.
modelfile, labelsfile, err := modelFiles(*modeldir)
if err != nil {
	log.Fatal(err)
}
model, err := ioutil.ReadFile(modelfile)
if err != nil {
	log.Fatal(err)
}

Then we load the graph definition of the deep learning model and create a new TensorFlow session with the graph, as shown in the following code:

// Construct an in-memory graph from the serialized form.
graph := tf.NewGraph()
if err := graph.Import(model, ""); err != nil {
	log.Fatal(err)
}

// Create a session for inference over graph.
session, err := tf.NewSession(graph, nil)
if err != nil {
	log.Fatal(err)
}
defer session.Close()

Finally, we can make an inference using the model as follows:

// Run inference on *imagefile.
// For multiple images, session.Run() can be called in a loop (and
// concurrently). Alternatively, images can be batched since the model
// accepts batches of image data as input.
tensor, err := makeTensorFromImage(*imagefile)
if err != nil {
	log.Fatal(err)
}
output, err := session.Run(
	map[tf.Output]*tf.Tensor{
		graph.Operation("input").Output(0): tensor,
	},
	[]tf.Output{
		graph.Operation("output").Output(0),
	},
	nil)
if err != nil {
	log.Fatal(err)
}

// output[0].Value() is a vector containing probabilities of
// labels for each image in the "batch". The batch size was 1.
// Find the most probable label index.
probabilities := output[0].Value().([][]float32)[0]
printBestLabel(probabilities, labelsfile)
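The printBestLabel helper is not shown in the snippets above. Here is a minimal sketch of what it might look like, assuming the labels file contains one label per line in the same order as the model's output probabilities (the exact implementation in the TensorFlow example may differ):

// Assumes "bufio", "fmt", "log", and "os" are imported.
func printBestLabel(probabilities []float32, labelsfile string) {
	// Find the index of the highest probability.
	bestIdx := 0
	for i, p := range probabilities {
		if p > probabilities[bestIdx] {
			bestIdx = i
		}
	}

	// Read the labels file, which contains one label per line.
	file, err := os.Open(labelsfile)
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	var labels []string
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		labels = append(labels, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("BEST MATCH: (%2.0f%% likely) %s\n", probabilities[bestIdx]*100.0, labels[bestIdx])
}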

Object detection with Go using TensorFlow

The Go program for object detection, as specified in the TensorFlow GoDocs, can be called as follows:

$ ./myprogram -dir=<path/to/the/model/dir> -image=<path/to/a/jpg/image>

When the program is called, it will utilize the pretrained and loaded model to infer the contents of the specified image. It will then output the most likely contents of that image along with its calculated probability.
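For reference, the -dir and -image flags map onto the *modeldir and *imagefile variables used in the earlier snippets. A minimal sketch of how those flags might be declared follows; the variable names are assumptions based on the snippets above, not the exact code from the TensorFlow example:

// Assumes "flag" and "os" are imported.
var (
	modeldir  = flag.String("dir", "", "directory containing the frozen model and labels files")
	imagefile = flag.String("image", "", "path of a JPEG image to classify")
)

func main() {
	flag.Parse()
	if *modeldir == "" || *imagefile == "" {
		flag.Usage()
		os.Exit(1)
	}
	// ... load the model and run inference as shown above ...
}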

To illustrate this, let's try performing object detection on the following image of an airplane, saved as airplane.jpg:

Running the TensorFlow model from Go gives the following results:

$ go build
$ ./myprogram -dir=model -image=airplane.jpg
2017-09-09 20:17:30.655757: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use SSE4.1 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:17:30.655807: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use SSE4.2 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:17:30.655814: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use AVX instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:17:30.655818: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use AVX2 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:17:30.655822: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use FMA instructions, but these are available on your
machine and could speed up CPU computations.
BEST MATCH: (86% likely) airliner

After some suggestions about speeding up CPU computations, we get a result: airliner. Wow! That’s pretty cool. We just performed object recognition with TensorFlow right from our Go program!

Let's try another one. This time, we will use pug.jpg, which looks like the following:

Running our program again with this image gives the following:

$ ./myprogram -dir=model -image=pug.jpg
2017-09-09 20:20:32.323855: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use SSE4.1 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:20:32.323896: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use SSE4.2 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:20:32.323902: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use AVX instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:20:32.323906: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use AVX2 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:20:32.323911: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use FMA instructions, but these are available on your
machine and could speed up CPU computations.
BEST MATCH: (84% likely) pug

Success! Not only did the model detect that there was a dog in the picture, it also correctly identified that the dog was a pug.

Let's try just one more. As this is a Go article, we cannot resist trying gopher.jpg, which looks like the following (huge thanks to Renee French, the artist behind the Go gopher):

Running the model gives the following result:

$ ./myprogram -dir=model -image=gopher.jpg
2017-09-09 20:25:57.967753: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use SSE4.1 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:25:57.967801: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use SSE4.2 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:25:57.967808: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use AVX instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:25:57.967812: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use AVX2 instructions, but these are available on your
machine and could speed up CPU computations.
2017-09-09 20:25:57.967817: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library
wasn't compiled to use FMA instructions, but these are available on your
machine and could speed up CPU computations.
BEST MATCH: (12% likely) safety pin

Well, I guess we can't win them all. Looks like we need to retrain our model to be able to recognize Go gophers. More specifically, we should probably add a bunch of Go gophers to our training dataset, because a Go gopher is definitely not a safety pin!

[box type=”download” align=”” class=”” width=””]The code for this exercise is available here.[/box]

Summary

Congratulations! We have gone from parsing data with Go to calling deep learning models from Go. You now know the basics of neural networks and can implement them and utilize them in your Go programs. In the next chapter, we will discuss how to get these models and applications off of your laptops and run them at production scale in data pipelines.

If you enjoyed the above excerpt from the book Machine Learning with Go, check out the book to learn how to build machine learning apps with Go.
