
A question many deep learning engineers ask themselves is: which is better, TensorFlow or CNTK?

Well, we’re going to answer that question for you, taking you through a closely fought match between the two most exciting frameworks.

So, here we are, ladies and gentlemen, it’s fight night and it’s a full house.

In the Red corner, weighing in at two hundred and seventy pounds of Python and topping out at over ten thousand frames per second; managed by the American tech giant, Google; we have the mighty, the beefy, TensorFlow!

In the Blue corner, weighing in at two hundred and thirty pounds of C++ muscle, we have one of the few toolkits that can comfortably scale beyond a single machine. Managed by none other than Microsoft, it’s fast, it’s furious, it’s CNTK, aka the Microsoft Cognitive Toolkit!

And we’re into Round One…

TensorFlow and CNTK are eyeing each other menacingly, raring to take their opponent down. TensorFlow seems pleased that its compile times are considerably faster than those of its predecessor, Theano. But it looks like the celebration came a tad too soon. CNTK, light and bouncy on its feet, comes straight out of nowhere with a whopping seventy-thousand-frames-per-second uppercut, knocking TensorFlow to the floor. TensorFlow, though, is in no mood to give up anytime soon. It is so simple to use and understand that even students can pick it up and start training their own models; the same can’t be said of CNTK, which is still begging to shed its complexity. On the other hand, CNTK is thrashing TensorFlow at 3D convolutions, where it can comfortably recognize images in streaming video. TensorFlow also tries its best to run LSTM RNNs, but in vain.
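If the ease-of-use claim sounds like ringside hype, judge for yourself. Here is a minimal sketch of a complete TensorFlow training loop, written against the 1.x API current at the time of this fight; it fits a toy line y = 2x + 1, and the data and hyperparameters are purely illustrative.

```python
# Minimal TensorFlow 1.x linear regression on toy data (y = 2x + 1).
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])   # input feature
y = tf.placeholder(tf.float32, shape=[None])   # target value

w = tf.Variable(0.0)                           # trainable slope
b = tf.Variable(0.0)                           # trainable intercept
pred = w * x + b

loss = tf.reduce_mean(tf.square(pred - y))     # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

xs = np.linspace(0, 1, 50).astype(np.float32)  # toy dataset
ys = 2 * xs + 1

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        sess.run(train_op, feed_dict={x: xs, y: ys})
    print(sess.run([w, b]))                    # should approach [2.0, 1.0]
```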

The crowd keeps cheering… wait a minute… are they chanting for TensorFlow?

Yes they are! There’s hardly any cheering for CNTK. This is embarrassing! It looks like its community support can’t match up to TensorFlow’s. And ladies and gentlemen, that does make a difference: we can see TensorFlow improving on several fronts and gradually getting back into the game! TensorFlow huffs and puffs as it proves that it’s not just about deep learning; it has tools in its pocket that support other approaches, such as reinforcement learning. It conveniently whips out TensorBoard and drops CNTK to the floor with its beautiful visualizations. TensorFlow now has the upper hand, tries hard to pin CNTK down, and reaches for its R support to finish the job. But CNTK tactfully breaks loose and leaves TensorFlow on the floor, still not quite ready for production.
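That TensorBoard knockout blow is easy to demonstrate: a few summary ops turn any training loop into live charts in the browser. Below is a minimal sketch in the TensorFlow 1.x API, using a stand-in variable instead of a real loss; the log directory name is just an example.

```python
# Logging a scalar to TensorBoard with TensorFlow 1.x summary ops.
import tensorflow as tf

loss = tf.Variable(10.0)               # stand-in for a real loss tensor
decay = tf.assign(loss, loss * 0.95)   # pretend the loss shrinks each step

tf.summary.scalar('loss', loss)        # chart this value over time
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./logs', sess.graph)  # example log dir
    for step in range(100):
        summary, _ = sess.run([merged, decay])
        writer.add_summary(summary, step)
    writer.close()

# Then launch the dashboard with: tensorboard --logdir ./logs
```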

And there goes the bell for Round One!

Both fighters look exhausted, but you can see a faint twinkle in TensorFlow’s eye, primarily because it survived Round One. Google is working hard to prep it for Round Two, making several improvements in speed and flexibility and, above all, getting it ready for production. Meanwhile, Microsoft boosts CNTK’s spirits with a shot of Python APIs in its veins. As CNTK moves towards version 2.0, it picks up a host of improvements; Microsoft has made sure it isn’t left behind, adding support as a Keras backend, which puts it on a par with TensorFlow. It also enters the ring with a few experimental features, such as a Java API.
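The Keras backend point is very concrete: with CNTK 2.x and a recent Keras installed, ordinary Keras model code runs on CNTK simply by switching the backend before import (or by editing ~/.keras/keras.json). A quick sketch, with arbitrary layer sizes and random toy data:

```python
# Run a standard Keras model on the CNTK backend.
import os
os.environ['KERAS_BACKEND'] = 'cntk'   # must be set before importing Keras

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),  # arbitrary sizes
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='sgd', loss='binary_crossentropy')

X = np.random.rand(100, 10).astype(np.float32)   # toy data
y = np.random.randint(0, 2, size=(100, 1))
model.fit(X, y, epochs=2, batch_size=16)         # trains via CNTK kernels
```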

It’s the final round and boy, are these two locked in a serious stare-down! The referee waves them in and off they go.

CNTK needs to get back at TensorFlow. Comfortably supporting multiple GPUs and CPUs out of the box, on both Windows and Linux, it has an advantage over TensorFlow. Is it going to play that trump card? Yes it is! A thousand GPUs and a hundred machines in, and CNTK is raining blows on TensorFlow, which clearly drops the ball when it comes to scaling across multiple machines, complicating things considerably.
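CNTK’s scaling trump card rests on its built-in data-parallel training, including the 1-bit SGD trick that quantizes gradients to slash communication cost. Roughly, you wrap an ordinary learner in a distributed learner and launch the script under MPI. Below is a sketch against the CNTK 2.x Python API; the tiny logistic-regression model and the random minibatches are placeholders for a real network and reader, and in a real job each worker would read its own data shard.

```python
# Sketch: data-parallel distributed training in CNTK 2.x.
# Launch with: mpiexec -n <num_workers> python train.py
import numpy as np
import cntk as C

# Placeholder model: logistic regression on 10 features.
x = C.input_variable(10)
y = C.input_variable(1)
z = C.layers.Dense(1, activation=C.sigmoid)(x)
loss = C.binary_cross_entropy(z, y)
metric = C.classification_error(z, y)

lr = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
local_learner = C.sgd(z.parameters, lr)        # ordinary single-node learner

# Wrap it so gradients are aggregated across all MPI workers.
# num_quantization_bits=1 enables 1-bit SGD (requires a CNTK build with it).
learner = C.train.distributed.data_parallel_distributed_learner(
    learner=local_learner,
    num_quantization_bits=1,
    distributed_after=0)

trainer = C.Trainer(z, (loss, metric), [learner])
for step in range(100):                        # toy training loop
    xs = np.random.rand(32, 10).astype(np.float32)
    ys = np.random.randint(0, 2, (32, 1)).astype(np.float32)
    trainer.train_minibatch({x: xs, y: ys})

C.train.distributed.Communicator.finalize()    # shut down MPI cleanly
```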

It’s high time that TensorFlow turned the tables.

Lo and behold! It shows off its mobile deep learning capabilities with TensorFlow Lite, flipping CNTK flat on its back. This is a tremendous breakthrough for TensorFlow! CNTK, however, remains the people’s choice when it comes to language compatibility: with support for C++, Python, C#/.NET and now Java, it is the clear winner in this area.
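TensorFlow Lite’s pitch is converting a trained model into a compact flatbuffer file that runs on-device. The converter API sketched below is from later TensorFlow 2.x releases, newer than the version trading blows here, so treat it as illustrative; the paths are examples.

```python
# Convert a trained SavedModel into a TensorFlow Lite flatbuffer
# that a mobile app can bundle and run on-device.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('./my_model')  # example path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional weight quantization
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```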

Round Two is coming to an end, ladies and gentlemen, and it’s a neck-and-neck battle out there. From the looks of it, we’re not sure the judges will be able to pick a clear winner.

And…. there goes the bell!

While the scores are being tallied, we go over to the teams and some spectators for some gossip on the what’s what of deep learning. Did you know that multiple-machine support is a huge advantage? It can speed up training almost tenfold! That’s something! We also learn that TensorFlow is training hard and picking up positives from its rival, CNTK. There are also rumors about a new kid called MXNet, which has APIs in R, Python and even Julia, making it one helluva framework in terms of flexibility and speed. In fact, AWS has already adopted it, and Apple is rumored to be using it too. Clearly one to watch.

And finally, the judges have made their decision. Ladies and gentlemen, after two rounds of sheer entertainment, we have the results…

Category                            TensorFlow   CNTK
Processing speed                         0         1
Learning curve                           1         0
Production readiness                     0         1
Community support                        1         0
CPU, GPU computation support             0         1
Mobile deep learning                     1         0
Multiple language compatibility          0         1

It’s a 4-3 decision and, just as we thought, CNTK is the heavyweight champion!

CNTK edges out TensorFlow on this scorecard, thanks to its speed, its scalability and its readiness for production!

As a deep learning engineer, if you want to use one of these frameworks for your own work, you should examine their features thoroughly, evaluate them on a test dataset, and only then roll them out on your actual data.

After all, it’s the choices we make that define a win or a loss: simplicity versus resource utilisation, speed versus platform support. We must choose our tools wisely.

For more information on the kinds of tests both tools have been put through, read the research paper by Shaohuai Shi, Qiang Wang, Pengfei Xu and Xiaowen Chu from the Department of Computer Science, Hong Kong Baptist University, along with the accompanying benchmarks.

