It was just two months ago that Facebook announced the release of PyTorch 1.0 RC1. Facebook is now out with the stable release of PyTorch 1.0. The latest release, announced last week at the NeurIPS conference, brings new features such as JIT, a brand-new distributed package, and Torch Hub, along with breaking changes, bug fixes, and other improvements.

PyTorch is an open source, Python-based deep learning framework. “It accelerates the workflow involved in taking AI from research prototyping to production deployment, and makes it easier and more accessible to get started”, reads the announcement page.

Let’s now have a look at what’s new in PyTorch 1.0.

New Features

JIT

JIT is a set of compiler tools for bridging the gap between research in PyTorch and production. It enables the creation of models that can run without any dependency on the Python interpreter.

PyTorch 1.0 offers two ways to make your existing code compatible with the JIT: tracing it with torch.jit.trace or annotating it with torch.jit.script. Once the models have been annotated, Torch Script code can be optimized and serialized for later use in the new C++ API, which doesn’t depend on Python.
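
A minimal sketch of the two modes (the function names here are illustrative, not from the release notes):

    import torch

    # Tracing: run example inputs through a function and record the operations.
    def add_and_scale(x, y):
        return (x + y) * 2

    traced = torch.jit.trace(add_and_scale, (torch.randn(3), torch.randn(3)))

    # Scripting: compile the function directly, so data-dependent control flow
    # is preserved in the Torch Script.
    @torch.jit.script
    def positive_sum_branch(x):
        if x.sum() > 0:
            r = x
        else:
            r = -x
        return r

    # Serialize the traced model for later use from the Python-free C++ API.
    traced.save("add_and_scale.pt")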

Brand new distributed package

In PyTorch 1.0, the new torch.distributed package and torch.nn.parallel.DistributedDataParallel are backed by a brand-new, redesigned distributed library. Major highlights of the new library are as follows:

  • The new torch.distributed is performance-driven and operates entirely asynchronously for all backends: Gloo, NCCL, and MPI.
  • There are significant Distributed Data-Parallel performance improvements for hosts with slower networks, such as Ethernet-based hosts.
  • It adds async support for all distributed collective operations in the torch.distributed package (a minimal sketch follows this list).
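
For example, an asynchronous all-reduce might look like the following. This is a rough sketch assuming two or more processes launched with environment-variable initialization (e.g. via torch.distributed.launch); it is not code from the announcement:

    import torch
    import torch.distributed as dist

    # Join the process group; rank and world size are read from the environment.
    dist.init_process_group(backend="gloo")

    tensor = torch.ones(4) * dist.get_rank()

    # async_op=True returns a work handle instead of blocking, so computation
    # can overlap with communication.
    work = dist.all_reduce(tensor, async_op=True)
    # ... other work could happen here ...
    work.wait()
    print(tensor)  # every process now holds the sum over all ranks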

C++ frontend [API unstable]

The C++ frontend is a complete C++ interface to the PyTorch backend. It follows the API and architecture of the established Python frontend and is meant to enable research in high-performance, low-latency, bare-metal C++ applications. It also offers equivalents to torch.nn, torch.optim, torch.data, and other components of the Python frontend.

The PyTorch team has released the C++ frontend marked as “API Unstable” as part of PyTorch 1.0. Although it is ready to use for research applications, it will continue to be stabilized over future releases.

Torch Hub

Torch Hub is a pre-trained model repository designed to facilitate research reproducibility. Torch Hub supports publishing pre-trained models (model definitions and pre-trained weights) to a GitHub repository with the help of a hubconf.py file. Once published, users can load the pre-trained models with the torch.hub.load API.
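
As a rough sketch (the repository, entry-point name, and weight handling below are illustrative, not a real published model), a hubconf.py might look like this:

    # hubconf.py at the root of a GitHub repository
    dependencies = ['torch']  # pip packages required to load the models

    def simple_linear(pretrained=False, **kwargs):
        """Entry point: returns a model, optionally loading pre-trained weights."""
        import torch
        model = torch.nn.Linear(10, 2, **kwargs)
        if pretrained:
            # e.g. fetch a state_dict with torch.utils.model_zoo.load_url(...)
            pass
        return model

Users could then load it with something like torch.hub.load('owner/repo', 'simple_linear', pretrained=True).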

Breaking Changes

  • Indexing a 0-dimensional tensor now throws an error instead of a warning.
  • torch.legacy has been removed.
  • torch.masked_copy_ has been removed; use torch.masked_scatter_ instead.
  • torch.distributed: the TCP backend has been removed. It is recommended to use Gloo and MPI backends for CPU collectives and NCCL backend for GPU collectives.
  • The torch.tensor function with a Tensor argument now returns a detached Tensor (i.e. a Tensor where grad_fn is None) in PyTorch 1.0 (see the short example after this list).
  • torch.nn.functional.multilabel_soft_margin_loss now returns Tensors of shape (N,) instead of (N, C). This is to match the behaviour of torch.nn.MultiMarginLoss and it is also more numerically stable.
  • Support for C extensions has been removed in PyTorch 1.0.
  • torch.utils.trainer has been deprecated.
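
To illustrate the torch.tensor change, a short sketch:

    import torch

    x = torch.ones(3, requires_grad=True)
    y = x * 2
    t = torch.tensor(y)   # in 1.0 this returns a detached copy of y
    print(t.grad_fn)      # None: gradients no longer flow back through t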

Bug fixes

  • torch.multiprocessing has been fixed and now correctly handles CUDA tensors, requires_grad settings, and hooks.
  • A memory leak during tuple packing has been fixed.
  • The RuntimeError: storages that don’t support slicing error, raised when loading models saved with PyTorch 0.3, has been fixed.
  • The incorrectly calculated output sizes of torch.nn.Conv modules with stride and dilation have been fixed.
  • torch.dist has been fixed for infinity, zero and minus infinity norms.
  • torch.nn.InstanceNorm1d has been fixed and now correctly accepts 2-dimensional inputs.
  • The incorrect error message shown by torch.nn.Module.load_state_dict has been fixed.
  • A broadcasting bug in torch.distributions.studentT.StudentT has been fixed.

Other Changes

  • “Advanced Indexing” performance has been considerably improved on CPU as well as GPU.
  • torch.nn.PReLU speed has been improved on both CPU and GPU.
  • Printing large tensors has become faster.
  • N-dimensional empty tensors have been added in PyTorch 1.0, allowing tensors with 0 elements to have an arbitrary number of dimensions. They also support indexing and other torch operations (see the short example below).
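
A quick illustration of such tensors:

    import torch

    x = torch.empty(0, 3, 5)  # an empty tensor with three dimensions
    print(x.shape)            # torch.Size([0, 3, 5])
    print(x[:, 1].shape)      # indexing works: torch.Size([0, 5])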

For more information, check out the official release notes.

Read Next

Can a production-ready Pytorch 1.0 give TensorFlow a tough time?

Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support

What is PyTorch and how does it work?
