
Machine learning was one of the most talked about topics at Amazon's re:Invent this year. To make machine learning models accessible to everyday users, regardless of their expertise level, Amazon Web Services launched an end-to-end machine learning service: SageMaker.

Amazon SageMaker allows data scientists, developers, and machine learning experts to quickly build, train, and deploy machine learning models at scale. The image below shows the process SageMaker uses to help developers build ML models.

[Image: the Amazon SageMaker workflow. Source: aws.amazon.com]

Model Building

Amazon SageMaker makes it easy to build ML models by simplifying the selection and training of the algorithms and frameworks best suited to a particular problem. SageMaker provides zero-setup hosted Jupyter notebooks that make it easy to explore, connect to, and visualize training data stored on Amazon S3. These notebooks can run on general-purpose instance types or on GPU-powered instances.
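As a rough illustration, the snippet below sketches what that exploration might look like inside a hosted notebook, pulling a CSV from S3 with boto3 and pandas. The bucket name and object key are placeholders, not real resources.

```python
# Minimal sketch: exploring training data from a SageMaker-hosted notebook.
# The bucket name and key below are placeholders.
from io import BytesIO

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Download a CSV of training data stored on Amazon S3
obj = s3.get_object(Bucket="my-training-data-bucket", Key="churn/train.csv")
df = pd.read_csv(BytesIO(obj["Body"].read()))

# Quick look at the data inside the hosted Jupyter notebook
print(df.shape)
print(df.describe())
```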

Model Training

ML models can be trained with a single click in the Amazon SageMaker console. SageMaker also provides for moving training data from Amazon RDS, Amazon DynamoDB, and Amazon Redshift into S3. Amazon SageMaker comes preconfigured to run TensorFlow and Apache MXNet, but developers can bring their own frameworks and package their own training algorithms as Docker containers.
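Training can also be kicked off programmatically. The sketch below uses the SageMaker Python SDK's generic Estimator with a custom Docker image; the image URI, IAM role, and S3 paths are placeholders, and exact parameter names can differ between SDK versions.

```python
# Hedged sketch: starting a SageMaker training job with the Python SDK.
# Image URI, IAM role, and S3 paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m4.xlarge",
    output_path="s3://my-bucket/model-artifacts/",
    sagemaker_session=session,
)

# Point the training job at data already staged in S3
estimator.fit({"train": "s3://my-bucket/training-data/"})
```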

Model Tuning and Hosting

Amazon SageMaker includes a model hosting service with HTTPS endpoints. These endpoints serve real-time inferences, scale to handle traffic, and allow A/B testing of multiple models simultaneously. Amazon SageMaker can also tune models automatically to achieve high accuracy, which makes the training process faster and easier. SageMaker manages the underlying infrastructure, allowing developers to scale training up to petabyte-sized datasets.
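The automatic tuning is exposed through hyperparameter tuning jobs. The sketch below builds on the estimator above; the metric name, log regex, and hyperparameter range are illustrative assumptions that depend on the training container in use.

```python
# Rough sketch of SageMaker automatic model tuning (hyperparameter
# optimization). Metric name, regex, and ranges are illustrative.
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

tuner = HyperparameterTuner(
    estimator=estimator,                     # the Estimator defined earlier
    objective_metric_name="validation:auc",  # assumed metric name
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(0.001, 0.1),
    },
    # For custom containers the metric is scraped from training logs;
    # the regex must match whatever the training script prints.
    metric_definitions=[
        {"Name": "validation:auc", "Regex": "validation-auc: ([0-9\\.]+)"}
    ],
    objective_type="Maximize",
    max_jobs=20,            # total training jobs to run
    max_parallel_jobs=3,    # jobs to run concurrently
)

tuner.fit({"train": "s3://my-bucket/training-data/"})
```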

Model Deployment

After training and tuning comes the deployment phase. SageMaker deploys models on an auto-scaling cluster of Amazon EC2 instances to run predictions on new data. These high-performance instances are spread across multiple availability zones.
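Deploying a trained estimator to a real-time HTTPS endpoint and calling it can look roughly like the sketch below; the instance type is illustrative, and the payload format depends on what the model container expects.

```python
# Minimal sketch: deploy a trained model to a real-time endpoint and invoke it.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
)

# Send a single record for real-time inference; the byte payload format
# here is an assumption and must match the serving container's expectations.
result = predictor.predict(b"4.2,1.5,0.3,7.8")
print(result)

# Delete the endpoint when finished to stop incurring charges
predictor.delete_endpoint()
```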

According to the official product page, Amazon SageMaker has multiple use cases. One of them is ad targeting, where Amazon SageMaker can be combined with other AWS services to build, train, and deploy ML models that target online ads, optimize return on ad spend, segment customers, and more. Another interesting use case is training recommender systems within SageMaker's serverless, distributed environment and hosting them on low-latency, auto-scaling endpoints.

SageMaker can also be used in industrial IoT, building highly efficient ML models that predict machine failure or schedule maintenance.

As of now, Amazon SageMaker is free for developers for the first two months. Each month, developers get 250 hours of t2.medium notebook usage, 50 hours of m4.xlarge usage for training, and 125 hours of m4.xlarge usage for hosting.

After the free period, pricing varies by region, and customers are billed per second for instance usage, per GB for storage, and per GB for data transferred into and out of the service.

Amazon SageMaker provides an end-to-end solution for developing machine learning applications. The ease and flexibility it offers can be harnessed by developers to solve a range of business problems.

