
Amazon open-sources SageMaker Neo to help developers optimize the training and deployment of machine learning models


Amazon announced last week that it is making Amazon SageMaker Neo, a machine learning feature of Amazon SageMaker, available as open source. The code has been released as the Neo-AI project under the Apache Software License. The open-source release will help processor vendors, device makers, and AI developers bring the latest machine learning innovations to a wide variety of hardware platforms.

“With the Neo-AI project, processor vendors can quickly integrate their custom code into the compiler… [and it] enables device makers to customize the Neo-AI runtime for the particular software and hardware configuration of their devices,” states the AWS team.

Amazon SageMaker Neo was announced at AWS re:Invent 2018 as a newly added capability of Amazon SageMaker, Amazon's popular machine learning platform-as-a-service. Neo gives developers the ability to train their machine learning models once and run them anywhere in the cloud. It can deploy models on multiple platforms by automatically optimizing TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models.
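To make the workflow concrete, a Neo compilation job is described through SageMaker's `CreateCompilationJob` API: you point it at a trained model in S3, name the framework and input shape, and pick a target device. The sketch below builds the request as a plain dictionary; the job name, role ARN, and S3 URIs are placeholders, not real resources.

```python
# Hypothetical request for SageMaker Neo's CreateCompilationJob API.
# The job name, role ARN, and S3 URIs below are placeholders.
params = {
    "CompilationJobName": "my-neo-job",
    "RoleArn": "arn:aws:iam::123456789012:role/NeoRole",
    "InputConfig": {
        "S3Uri": "s3://my-bucket/model.tar.gz",          # trained model artifact
        "DataInputConfig": '{"data": [1, 3, 224, 224]}', # input name and shape
        "Framework": "MXNET",  # or TENSORFLOW, PYTORCH, ONNX, XGBOOST
    },
    "OutputConfig": {
        "S3OutputLocation": "s3://my-bucket/compiled/",
        "TargetDevice": "jetson_tx2",                    # example edge target
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 900},
}

# With AWS credentials configured, the job would be submitted via boto3:
# import boto3
# boto3.client("sagemaker").create_compilation_job(**params)
print(params["OutputConfig"]["TargetDevice"])  # jetson_tx2
```

The same model artifact can be recompiled for a different platform just by changing `TargetDevice`, which is the "train once, run anywhere" idea in practice.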

Moreover, it can convert machine learning models into a common format, which eliminates software compatibility problems. It currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm to be added in the near future.

Amazon states that, at its core, Neo-AI is a machine learning compiler and runtime built on traditional compiler technologies such as LLVM and Halide. It also uses TVM (to compile deep learning models) and Treelite (to compile decision tree models), both of which started as open source research projects at the University of Washington. Beyond these, it applies platform-specific optimizations contributed by different vendors.
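To give a feel for what a compiler like TVM does with a model, the toy sketch below (illustrative only, not Neo-AI's actual code) represents a model as a tiny expression graph and applies constant folding, one of the simplest optimization passes: subtrees whose operands are all constants are evaluated once at compile time instead of on every inference.

```python
# Toy optimization pass, illustrating the idea behind a deep learning
# compiler: a node is ("const", value), ("input", name), or
# ("add"/"mul", left, right).

def fold_constants(node):
    """Recursively replace all-constant subtrees with their value."""
    op = node[0]
    if op in ("const", "input"):
        return node
    lhs = fold_constants(node[1])
    rhs = fold_constants(node[2])
    if lhs[0] == "const" and rhs[0] == "const":
        value = lhs[1] + rhs[1] if op == "add" else lhs[1] * rhs[1]
        return ("const", value)
    return (op, lhs, rhs)

# x * (2 + 3) becomes x * 5: the constant subtree is computed once,
# at compile time, rather than on every run.
graph = ("mul", ("input", "x"), ("add", ("const", 2), ("const", 3)))
print(fold_constants(graph))  # ('mul', ('input', 'x'), ('const', 5))
```

Real compilers chain many such passes (operator fusion, layout transformation, quantization) and then emit code tuned for the target hardware, which is where the platform-specific optimizations mentioned above come in.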

The Neo-AI project will receive contributions from several organizations, including AWS, ARM, Intel, Qualcomm, Xilinx, and Cadence. The Neo-AI runtime is already deployed on devices from ADLINK, Lenovo, Leopard Imaging, Panasonic, and others. “Xilinx provides the FPGA hardware and software capabilities that accelerate machine learning inference applications in the cloud… we are pleased to support developers using Neo to optimize models for deployment on Xilinx FPGAs,” said Sudip Nag, Corporate Vice President at Xilinx.

For more information, check out the official Neo-AI GitHub repository.

Read Next

Amazon unveils Sagemaker: An end-to-end machine learning service

AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more

AWS re:Invent 2018: Amazon announces variety of AWS IoT releases

Natasha Mathur

Tech writer at the Packt Hub. Dreamer, book nerd, lover of scented candles, karaoke, and Gilmore Girls.
