
Microsoft released version 2.6 of its popular deep learning framework, CNTK (the Microsoft Cognitive Toolkit), last week. CNTK v2.6 brings features such as .NET support, efficient group convolution, improved sequential convolution, new operators, and an ONNX update, among others.

Added .NET Support

The Cntk.Core.Managed library has been converted to .NET Standard and now supports .NET Core and .NET Framework applications on both Windows and Linux. .NET developers can now restore CNTK NuGet packages; to do so, use the new .NET SDK-style project file with the package management format set to PackageReference.
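
For instance, a minimal SDK-style project file using PackageReference might look like the following sketch (CNTK.CPUOnly is one of the published CNTK NuGet packages; the target framework here is just an example):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Example package: CNTK.CPUOnly pulls in the managed Cntk.Core.Managed library -->
    <PackageReference Include="CNTK.CPUOnly" Version="2.6.0" />
  </ItemGroup>
</Project>
```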

Efficient group convolution

With CNTK v2.6, the group convolution implementation has been updated. The new implementation uses the cuDNN7 and MKL2017 APIs directly instead of creating a sub-graph for group convolution (slicing and splicing). This improves both performance and model size.
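
Here is a minimal sketch of group convolution through the Python API (the shapes and the 4-group split are purely illustrative; the groups parameter of cntk.ops.convolution is what routes the op to the new implementation):

```python
import cntk as C

# Illustrative shapes: 8 input channels split into 4 groups of 2.
groups, in_channels, out_channels = 4, 8, 16
x = C.input_variable((in_channels, 32, 32))

# For group convolution each filter only sees in_channels // groups channels:
# kernel shape is (out_channels, in_channels // groups, kh, kw).
W = C.parameter((out_channels, in_channels // groups, 3, 3),
                init=C.glorot_uniform())

# groups > 1 now dispatches directly to the cuDNN7 / MKL2017 code paths
# instead of building a slice-and-splice sub-graph.
y = C.convolution(W, x, strides=(1, 1),
                  auto_padding=[False, True, True], groups=groups)
print(y.shape)  # (16, 32, 32)
```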

Improved Sequential Convolution

The sequential convolution implementation has also been updated in CNTK v2.6. The new implementation creates a separate sequential convolution layer and supports broader cases, such as stride > 1 on the sequence axis. For example, when sequential convolution is applied to a batch of one-channel black-and-white images, the images all share a fixed height of 640 but have widths of variable length; the width is then represented by the sequential axis.
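
A rough sketch of that image example in Python follows. The SequentialConvolution layer name comes from the release notes, but its exact signature is assumed here to mirror cntk.layers.Convolution:

```python
import cntk as C

# One-channel images with fixed height 640 and variable width: the width
# becomes the variable-length sequence axis, each element a (1, 640) slice.
x = C.sequence.input_variable((1, 640))

# Assumed signature: filter_shape covers (sequence axis, height);
# stride > 1 on the sequence axis is now supported.
conv = C.layers.SequentialConvolution(filter_shape=(3, 3), num_filters=16,
                                      strides=(2, 2), pad=True)
y = conv(x)
```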

More Operators

CNTK v2.6 adds or updates support for several operators, including depth_to_space and space_to_depth, Tan and Atan, ELU, and Convolution.

depth_to_space and space_to_depth

There are breaking changes in the depth_to_space and space_to_depth operators: both have been updated to match the ONNX specification.
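
A quick illustration of the pair (the shapes are arbitrary; the element rearrangement now follows the ONNX spec):

```python
import cntk as C

x = C.input_variable((4, 2, 2))          # (channels, height, width)
y = C.depth_to_space(x, block_size=2)    # -> (1, 4, 4)
z = C.space_to_depth(y, block_size=2)    # inverse: back to (4, 2, 2)
print(y.shape, z.shape)
```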

Tan and Atan

Support has been added for trigonometric ops Tan and Atan.
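
A minimal example of the new ops:

```python
import numpy as np
import cntk as C

x = C.input_variable(3)
vals = np.array([[0.0, 0.5, 1.0]], dtype=np.float32)
print(C.tan(x).eval({x: vals}))   # elementwise tangent
print(C.atan(x).eval({x: vals}))  # elementwise arctangent
```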

ELU

Support has been added for the alpha attribute in the ELU op.
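
A small sketch, assuming alpha is exposed as a keyword argument on cntk.ops.elu as the release notes suggest:

```python
import numpy as np
import cntk as C

x = C.input_variable(3)
# alpha scales the negative branch: alpha * (exp(x) - 1) for x < 0
y = C.elu(x, alpha=0.5)
print(y.eval({x: np.array([[-1.0, 0.0, 1.0]], dtype=np.float32)}))
```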

Convolution

The auto-padding algorithms of Convolution have been updated to produce symmetric padding, on a best-effort basis, on the CPU, without changing the final convolution output values. This increases the range of cases covered by the MKL API and improves performance, for example on ResNet50.
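
The change is internal, but it applies to ordinary auto-padded convolutions such as the ResNet50-style stem below (shapes are illustrative):

```python
import cntk as C

# 7x7 stride-2 convolution with auto padding, as in ResNet50's first layer.
x = C.input_variable((3, 224, 224))
y = C.layers.Convolution2D((7, 7), 64, strides=2, pad=True)(x)
print(y.shape)  # (64, 112, 112); the padding is now made symmetric on CPU
                # where doing so leaves these output values unchanged.
```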

Updated ONNX

CNTK’s ONNX import/export has been updated to support ONNX 1.2, including a major update to how the batch and sequence axes are handled in export and import. CNTK’s export/import of the ONNX BatchNormalization op has been updated to the latest spec, a model domain has been added to ONNX model export, and support has been added for exporting the alpha attribute in the ELU ONNX op.
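
Exporting to and importing from ONNX goes through the existing ModelFormat flag; a minimal round trip might look like this (the model itself is a throwaway example):

```python
import cntk as C

x = C.input_variable((3, 32, 32))
z = C.layers.Dense(10)(x)

# Export the model in ONNX format...
z.save("model.onnx", format=C.ModelFormat.ONNX)

# ...and load an ONNX model back into CNTK.
z2 = C.Function.load("model.onnx", format=C.ModelFormat.ONNX)
```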

Change in Default arguments order

There is a major update to the arguments property in the CNTK Python API. The default behavior has changed so that arguments are returned in Python order instead of C++ order, i.e. in the same order as they are fed into ops.
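
A minimal illustration of the new default (the variable names and the op choice are arbitrary):

```python
import cntk as C

x = C.input_variable(1, name='x')
y = C.input_variable(1, name='y')
f = C.squared_error(x, y)

# With v2.6, arguments come back in the order they were fed to the op.
print([a.name for a in f.arguments])  # expected: ['x', 'y']
```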

Bug Fixes

  • Improved input validation for group convolution.
  • Added validation for the padding channel axis in convolution.
  • Added proper initialization for ONNX TypeStrToProtoMap.
  • Updated the Min/Max import implementation to handle variadic inputs.

There are even more updates in CNTK v2.6; for more information, check out the official CNTK release notes.

Read Next

The Deep Learning Framework Showdown: TensorFlow vs CNTK

Deep Learning with Microsoft CNTK

ONNX 1.3 is here with experimental function concept
