
Last week, the PyTorch team announced the release of PyTorch 1.2. This version comes with a new TorchScript API with improved Python language coverage, expanded ONNX export, a standard nn.Transformer module, and more.

Here are some of the updates in PyTorch 1.2:

A new TorchScript API

TorchScript lets you create models that are serializable and optimizable from PyTorch code. PyTorch 1.2 brings a new, easier-to-use TorchScript API for converting nn.Modules into ScriptModules. torch.jit.script now recursively compiles the functions, methods, and classes it encounters, so the preferred way to create a ScriptModule is to call torch.jit.script(nn_module_instance) rather than inheriting from torch.jit.ScriptModule.
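As a minimal sketch of the new workflow (the MyCell module below is a hypothetical example, not taken from the release notes), a plain nn.Module can be compiled with a single call:

```python
import torch
import torch.nn as nn

# A plain nn.Module: no ScriptModule inheritance, no decorators.
class MyCell(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x, h):
        # torch.jit.script recursively compiles forward()
        # and anything it calls.
        return torch.tanh(self.linear(x) + h)

# Preferred in 1.2: script an instance instead of
# inheriting from torch.jit.ScriptModule.
scripted = torch.jit.script(MyCell())
out = scripted(torch.rand(3, 4), torch.rand(3, 4))
```

The resulting object is a ScriptModule that can be saved with scripted.save() and loaded without a Python dependency.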

With this update, several items are deprecated, and developers are advised to avoid them in new code: the @torch.jit.script_method decorator, classes that inherit from torch.jit.ScriptModule, the torch.jit.Attribute wrapper class, and the __constants__ array.

TorchScript also gains improved support for Python language constructs and the Python standard library. It now handles iterator-based constructs such as for..in loops, zip(), and enumerate(), as well as the math and string modules and other Python built-in functions.
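A small sketch of these constructs inside a scripted function (the rms function itself is a made-up example):

```python
import math
from typing import List

import torch

@torch.jit.script
def rms(values: List[float]) -> float:
    # for..in, enumerate(), and the math module all
    # work inside TorchScript as of 1.2.
    total = 0.0
    for i, v in enumerate(values):
        total += v * v
    return math.sqrt(total / len(values))

result = rms([3.0, 4.0])  # root mean square of the list
```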

Full support for ONNX Opset export

The PyTorch team has worked with Microsoft to bring full support for exporting ONNX Opset versions 7, 8, 9, and 10. PyTorch 1.2 can export dropout, slice, flip, and interpolate in Opset 10. ScriptModule export is improved to support multiple outputs, tensor factories, and tuples as inputs and outputs. Developers can also register their own symbolic functions to export custom ops, and set the dynamic dimensions of inputs during export.

A standard nn.Transformer

PyTorch 1.2 comes with a standard nn.Transformer module that allows you to modify the attributes as needed. Based on the paper Attention is All You Need, this module relies entirely on an attention mechanism for drawing global dependencies between input and output. It is designed in such a way that you can use its individual components independently. For instance, you can use its nn.TransformerEncoder API without the larger nn.Transformer.
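A brief sketch of both uses (the sizes below are arbitrary; nn.Transformer expects inputs shaped (sequence, batch, d_model)):

```python
import torch
import torch.nn as nn

# Full encoder-decoder transformer with configurable attributes.
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)
src = torch.rand(10, 32, 512)  # (src_seq_len, batch, d_model)
tgt = torch.rand(20, 32, 512)  # (tgt_seq_len, batch, d_model)
out = model(src, tgt)          # same shape as tgt

# The components also work standalone, e.g. an encoder
# without the surrounding nn.Transformer.
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(layer, num_layers=6)
enc_out = encoder(src)         # same shape as src
```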

Breaking changes in PyTorch 1.2

  • Comparison operations (lt, le, gt, ge, eq, ne) now return tensors of dtype torch.bool instead of torch.uint8.
  • torch.tensor(bool) and torch.as_tensor(bool) now produce tensors of dtype torch.bool instead of torch.uint8.
  • Several deprecated linear algebra functions have been removed in favor of their renamed replacements; for example, gesv is now solve, potrf is now cholesky, and trtrs is now triangular_solve. The release notes include a table (source: PyTorch) listing all the removed operations and their alternatives.
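The dtype changes above are easy to verify directly (the tensors below are arbitrary examples):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([3, 2, 1])

# Comparisons now return torch.bool (previously torch.uint8).
mask = a.lt(b)
print(mask.dtype)  # torch.bool

# Tensors built from Python booleans are torch.bool as well.
t = torch.tensor(True)
print(t.dtype)     # torch.bool
```

Code that relied on comparison results being torch.uint8 (for example, summing a mask and treating it as an integer tensor) may need an explicit cast such as mask.to(torch.uint8).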

Check out the PyTorch release notes for more details.

Read Next

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility

Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows

Facebook open-sources PyText, a PyTorch based NLP modeling framework

