
Amazon and Microsoft, two pioneering tech giants, have joined forces to bring out a compelling, easy-to-use, and powerful deep learning interface known as Gluon.

If you are into physics, you may know the term gluon: the elementary particle exchanged between quarks that binds them together. True to its literal meaning, a gluon, much like glue, acts as a binding agent. Drawing inspiration from this, Amazon and Microsoft have glued their efforts together to bring deep learning to a wider developer audience with the launch of Gluon, a simple, efficient, and compact API for deep learning.

Why is Gluon essential?

Any neural network has three important phases:

  • First comes the manual coding phase, where the developer specifies the behaviour of the network.
  • Next is the training phase, where the error of the output is calculated and the weights are adjusted accordingly. This activity is memory-hungry and computationally intensive.
  • After the training phase, the network is used to make predictions.

Building a neural network is labor-intensive as well as time-consuming. These networks have to be trained to parse large, complex data sets, and they are usually constructed manually, which makes them difficult to debug and reuse. Manual construction also demands the expertise and advanced skill sets of experienced data scientists. Yet machine learning is now reaching every doorstep, and a large number of developers are looking for solutions that help them build deep learning models easily and practically without compromising on power.

Gluon is a flexible and approachable way to construct and train neural networks. It offers a concise, easy-to-use programming interface that lets developers quickly prototype, build, and train deep learning models without sacrificing performance. The API works on top of MXNet to reduce the complexity of deep learning, putting it within reach of a large number of developers.

How is it different?

A few compelling advantages make Gluon stand out:

An Easy-to-Use API

A strong differentiating feature of Gluon is the interface it provides in the form of an API, which makes it easier for developers to grasp and build deep learning models out of modular components. This is simpler to comprehend than formal neural net definition methods.
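
To give a feel for those modular components, here is a minimal sketch in MXNet's Gluon API; the layer sizes are arbitrary choices for illustration, and exact API details can vary between MXNet versions.

```python
from mxnet.gluon import nn

# Stack prebuilt, modular layers into a network
net = nn.Sequential()
net.add(nn.Dense(128, activation='relu'),  # hidden layer
        nn.Dense(64, activation='relu'),   # hidden layer
        nn.Dense(10))                      # output layer
net.initialize()
```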

Data Structure Approach

Deep learning models in Gluon can be defined, flexed, and modified much like a data structure. This makes the interface familiar, especially to developers who have only recently stepped into the machine learning world. Dynamic networks are easy to manage with Gluon because it mixes the programming models of TensorFlow (symbolic representations) and PyTorch (imperative definitions of networks).
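
As a small, hedged sketch of what "like a data structure" means in practice, a Gluon model can be indexed and inspected with ordinary Python operations; the shapes below are illustrative only.

```python
from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'), nn.Dense(10))
net.initialize()
net(nd.random.normal(shape=(1, 20)))  # one forward pass so parameter shapes are inferred

# The model behaves much like an ordinary Python container:
print(net[0])                # inspect the first layer
print(net[0].weight.data())  # read its weights directly
```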

Network Defining Ability

Gluon provides the ability to define the network dynamically, so the network can be adjusted during both definition and training. This essentially means that the training algorithm and the neural model can inform one another. As a result, developers can use standard programming structures to build networks, as well as sophisticated algorithms and models to advance them.
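
One way this plays out is that a network can be written as an ordinary Python class whose forward pass uses native control flow, so its computation can change from one input to the next. The class below is a hypothetical illustration rather than a prescribed pattern.

```python
from mxnet import nd
from mxnet.gluon import nn

class DynamicNet(nn.Block):
    """A hypothetical network whose computation depends on the input at run time."""
    def __init__(self, **kwargs):
        super(DynamicNet, self).__init__(**kwargs)
        self.hidden = nn.Dense(64, activation='relu')
        self.output = nn.Dense(1)

    def forward(self, x):
        x = self.hidden(x)
        # Ordinary Python control flow can shape the computation as it runs
        while x.norm().asscalar() > 10.0:
            x = x * 0.5
        return self.output(x)

net = DynamicNet()
net.initialize()
print(net(nd.random.normal(shape=(4, 8))))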

High Speed Training

Friendly APIs and flexible approaches are all well and good, but they shouldn't come at the cost of training speed. Gluon improves on the manual approach because it performs all of these tasks without compromising performance, providing abstractions without losing training speed. It does this by blending the formal definition and the specific details of the network under the hood of a concise API, so users can focus on implementing models rather than on tasks like manual compiler optimizations.
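
In the MXNet implementation, one concrete expression of this is hybridization: a model written imperatively can be compiled into an optimized graph with a single call. A minimal sketch, with arbitrary layer sizes:

```python
from mxnet import nd
from mxnet.gluon import nn

# HybridSequential models are written imperatively but can be compiled
net = nn.HybridSequential()
net.add(nn.Dense(256, activation='relu'), nn.Dense(10))
net.initialize()

net.hybridize()  # convert to an optimized symbolic graph under the hood
out = net(nd.random.normal(shape=(32, 100)))
```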

Easy algorithmic implementations using Gluon

Gluon supports a wide range of prebuilt, optimized components for building neural networks. Developers can build deep learning models on the MXNet framework through the Gluon interface.

Gluon allows neural nets to be built from predefined layers, and it keeps track of when to record the computation graph and when not to. It can invoke highly optimized layers written in C++, and data-parallel training is easy to set up. Compared to other interfaces, Gluon runs code faster on both CPUs and GPUs, and moving from a single device to many, and initializing network parameters across them, is straightforward.
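
The sketch below pulls these pieces together under stated assumptions: the GPU contexts are assumptions (it falls back to the CPU if none are available), and the data, labels, and hyperparameters are purely illustrative.

```python
import mxnet as mx
from mxnet import nd, autograd, gluon
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'), nn.Dense(10))

# Initialize parameters directly on one or more devices
devices = [mx.gpu(0), mx.gpu(1)] if mx.context.num_gpus() >= 2 else [mx.cpu()]
net.initialize(ctx=devices)

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

X = nd.random.normal(shape=(8, 20))          # toy batch
y = nd.array([0, 1, 2, 3, 0, 1, 2, 3])       # toy labels

# Split the batch across devices for data-parallel training
data_parts = gluon.utils.split_and_load(X, devices)
label_parts = gluon.utils.split_and_load(y, devices)

with autograd.record():                      # the graph is recorded only here
    losses = [loss_fn(net(xs), ys) for xs, ys in zip(data_parts, label_parts)]
for l in losses:
    l.backward()
trainer.step(X.shape[0])
```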

Even for a simple problem like linear regression, Gluon helps in writing quick, clean code. It eliminates the need to allocate parameters individually, implement stochastic gradient descent by hand, or define a loss function from scratch. It similarly reduces the workload required for multiclass logistic regression.
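
A minimal linear regression sketch along those lines, assuming synthetic data and an arbitrary learning rate: the loss function and optimizer come prebuilt, so nothing is hand-rolled.

```python
from mxnet import nd, autograd, gluon
from mxnet.gluon import nn

# A linear regression model is just a single Dense layer with one output
net = nn.Dense(1)
net.initialize()

loss_fn = gluon.loss.L2Loss()                                  # prebuilt loss
trainer = gluon.Trainer(net.collect_params(), 'sgd',           # prebuilt SGD
                        {'learning_rate': 0.05})

X = nd.random.normal(shape=(1000, 2))        # synthetic features
y = 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2        # synthetic targets

for epoch in range(5):
    with autograd.record():
        loss = loss_fn(net(X), y)
    loss.backward()
    trainer.step(X.shape[0])
```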

Along the same lines, Gluon can turn the logic of a logistic regression model into a multilayer perceptron with a few additional lines of code, and a convolutional neural network can also be designed easily and concisely.
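
For instance, a small convolutional network can be declared layer by layer in just a handful of lines; the channel counts and kernel sizes below are arbitrary choices for illustration.

```python
from mxnet.gluon import nn

# A compact convolutional network, declared layer by layer
net = nn.Sequential()
net.add(nn.Conv2D(channels=32, kernel_size=3, activation='relu'),
        nn.MaxPool2D(pool_size=2),
        nn.Conv2D(channels=64, kernel_size=3, activation='relu'),
        nn.MaxPool2D(pool_size=2),
        nn.Flatten(),
        nn.Dense(128, activation='relu'),
        nn.Dense(10))
net.initialize()
```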

Limitations: On the flip side

Despite being easy, compact, and efficient, Gluon has certain limitations. It is currently available on Apache MXNet, with a release for the Microsoft Cognitive Toolkit still to come, and little is known about support for other frameworks. For instance, it currently lacks support for two of the most widely used deep learning frameworks, Caffe2 and TensorFlow. This could pose a problem for Gluon, because most interfaces released so far provide integration with multiple frameworks.

Ultimately, it boils down to project requirements, including the model's needs and how difficult it is to build networks with a particular tool. For a computer vision project, many people would prefer Caffe. TensorFlow is popular among developers because of its existing community, but the complexity of the platform makes a digestible deep learning interface like Gluon highly appreciated. Each framework, then, comes with its own trade-offs.

Conclusion

Gluon comes as a boon for experienced data scientists and nascent developers alike. For developers, the interface models networks like a data structure, which feels familiar. For researchers and data scientists, it provides a way to prototype complex neural networks quickly and easily without sacrificing training speed. Overall, Gluon should accelerate the development of advanced neural networks and models, resulting in robust artificial intelligence-based applications.
