Day 2 of the ongoing AWS re:Invent 2019 conference in Las Vegas included many new announcements, such as AWS Wavelength, Provisioned Concurrency for Lambda functions, Amazon SageMaker Autopilot, and much more. Day 1 also brought exciting releases, including a preview of AWS’ new quantum computing service, Braket, and Amazon SageMaker Operators for Kubernetes, among others.
Day Two announcements at AWS re:Invent 2019
AWS Wavelength to deliver ultra-low latency applications for 5G devices
With AWS Wavelength, developers can build applications that deliver single-digit millisecond latencies to mobile devices and end-users. AWS developers can deploy their applications to Wavelength Zones, AWS infrastructure deployments that embed AWS compute and storage services within the telecommunications providers’ datacenters at the edge of the 5G networks, and seamlessly access the breadth of AWS services in the region. This enables developers to deliver applications that require single-digit millisecond latencies such as game and live video streaming, machine learning inference at the edge, and augmented and virtual reality (AR/VR).
AWS Wavelength brings AWS services to the edge of the 5G network. This minimizes the latency to connect to an application from a mobile device. Application traffic can reach application servers running in Wavelength Zones without leaving the mobile provider’s network. This reduces the extra network hops to the Internet that can result in latencies of more than 100 milliseconds, preventing customers from taking full advantage of the bandwidth and latency advancements of 5G.
To know more about AWS Wavelength, read the official post.
Provisioned Concurrency for Lambda Functions
To give customers improved control over the performance of their mission-critical serverless applications, AWS introduced Provisioned Concurrency, a Lambda feature that works with any trigger. For example, you can use it with WebSocket APIs, GraphQL resolvers, or IoT Rules. This feature gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction.
This is a feature that keeps functions initialized and hyper-ready to respond in double-digit milliseconds. This addition is helpful for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs.
On enabling Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations.
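As a rough sketch of how this could be enabled programmatically, the snippet below builds the parameters for Lambda's PutProvisionedConcurrencyConfig API. The function name `checkout-api` and alias `prod` are made-up placeholders; in a real script the dict would be handed to a boto3 Lambda client.

```python
def provisioned_concurrency_request(function_name, qualifier, executions):
    """Build the parameters for Lambda's PutProvisionedConcurrencyConfig call.

    Provisioned Concurrency is configured on a published version or an
    alias (the qualifier), which keeps that many execution environments
    initialized and ready to serve invocations.
    """
    return {
        "FunctionName": function_name,
        "Qualifier": qualifier,
        "ProvisionedConcurrentExecutions": executions,
    }


# In a real deployment you would hand these parameters to boto3, e.g.:
#   boto3.client("lambda").put_provisioned_concurrency_config(
#       **provisioned_concurrency_request("checkout-api", "prod", 100))
params = provisioned_concurrency_request("checkout-api", "prod", 100)
print(params["ProvisionedConcurrentExecutions"])
```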
To know more about Provisioned Concurrency in detail, read the official document.
Amazon Managed Cassandra Service open preview launched
Amazon Managed Apache Cassandra Service (MCS) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Since Amazon MCS is serverless, you pay only for the resources you use, and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage.
With Amazon MCS, it becomes easy to run Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today. Amazon MCS implements the Apache Cassandra version 3.11 CQL API, allowing you to use the code and drivers that you already have in your applications. Updating your application is as easy as pointing it at the Amazon MCS service endpoint.
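As a sketch of that endpoint swap, assuming the regional endpoint format `cassandra.<region>.amazonaws.com` and TLS port 9142 described for the preview (verify both against the current documentation):

```python
def mcs_contact_point(region):
    """Return the Amazon MCS CQL contact point for a region (sketch).

    MCS exposes a regional CQL endpoint and requires TLS; 9142 is the
    TLS-enabled CQL port documented for the preview.
    """
    return (f"cassandra.{region}.amazonaws.com", 9142)


# With the DataStax Python driver, migrating from a self-managed cluster
# is mostly a matter of swapping the contact point (not run here):
#   host, port = mcs_contact_point("us-east-1")
#   cluster = Cluster([host], port=port, ssl_context=my_tls_context)
host, port = mcs_contact_point("us-east-1")
print(host, port)
```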
To know more about Amazon MCS in detail, read the official AWS blog post.
Introducing Amazon SageMaker Autopilot to auto-create high-quality Machine Learning models with full control and visibility
The AWS team launched Amazon SageMaker Autopilot to automatically create classification and regression machine learning models with full control and visibility.
SageMaker Autopilot first checks the dataset and then runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms, and hyperparameters. All this happens with a single API call or a few clicks in Amazon SageMaker Studio. Further, it uses this combination to train an Inference Pipeline, which can be easily deployed either on a real-time endpoint or for batch processing. All of this takes place on fully-managed infrastructure.
SageMaker Autopilot also generates Python code showing exactly how data was preprocessed: not only can you understand what SageMaker Autopilot does, you can also reuse that code for further manual tuning if you’re so inclined.
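That single API call is CreateAutoMLJob; the sketch below builds a minimal request body for it. The job name, S3 paths, target column, and role ARN are placeholders, and the field names should be checked against the current SageMaker API reference.

```python
def autopilot_job_request(job_name, input_s3, output_s3, target_column, role_arn):
    """Build a minimal request body for SageMaker's CreateAutoMLJob API (sketch).

    Autopilot reads tabular data from S3, learns to predict the named
    target column, and writes candidate models and generated notebooks
    to the output path.
    """
    return {
        "AutoMLJobName": job_name,
        "InputDataConfig": [{
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_s3,
            }},
            "TargetAttributeName": target_column,  # the column to predict
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "RoleArn": role_arn,
    }


# In practice the request would go to boto3's SageMaker client:
#   boto3.client("sagemaker").create_auto_ml_job(**request)
request = autopilot_job_request(
    "churn-autopilot",
    "s3://my-bucket/churn.csv",
    "s3://my-bucket/autopilot-out/",
    "churned",
    "arn:aws:iam::123456789012:role/SageMakerRole",
)
print(request["InputDataConfig"][0]["TargetAttributeName"])
```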
SageMaker Autopilot supports:
- Input data in tabular format, with automatic data cleaning and preprocessing,
- Automatic algorithm selection for linear regression, binary classification, and multi-class classification,
- Automatic hyperparameter optimization,
- Distributed training,
- Automatic instance and cluster size selection.
To know more about Amazon SageMaker Autopilot, read the official document.
Announcing ML-powered Amazon Kendra
Amazon Kendra is a highly accurate, ML-powered enterprise search service. It provides powerful natural language search capabilities to your websites and applications so that end users can easily find the information they need within the vast amount of content spread across the organization.
Key benefits of Kendra include:
- Users can get immediate answers to questions asked in natural language, eliminating the need to sift through long lists of links hoping one has the information they need.
- Kendra lets you easily add content from file systems, SharePoint, intranet sites, file-sharing services, and more, into a centralized location so you can quickly search all of your information to find the best answer.
- The search results get better over time as Kendra’s machine learning algorithms learn which results users find most valuable.
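To make the "immediate answers" point concrete, here is a hedged sketch of how a client might pull a direct answer out of a Kendra Query response. The result-item shape assumed here (a `Type` of `ANSWER` and a `DocumentExcerpt` field) follows the launch documentation but should be verified against the current API reference.

```python
def best_answer(query_response):
    """Extract the direct answer from a Kendra Query response, if any.

    Kendra result items carry a Type such as ANSWER or DOCUMENT; for a
    natural-language question, an ANSWER item holds the extracted
    passage in its DocumentExcerpt.
    """
    for item in query_response.get("ResultItems", []):
        if item.get("Type") == "ANSWER":
            return item["DocumentExcerpt"]["Text"]
    return None  # no direct answer; fall back to DOCUMENT results


# A trimmed-down response of the shape the service returns (mocked):
sample = {"ResultItems": [
    {"Type": "ANSWER",
     "DocumentExcerpt": {"Text": "The VPN portal is vpn.example.com."}},
    {"Type": "DOCUMENT",
     "DocumentExcerpt": {"Text": "IT onboarding guide..."}},
]}
print(best_answer(sample))
```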
To know more about Amazon Kendra in detail, read the official document.
Introducing preview of Amazon CodeGuru
Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations. It helps developers find the most expensive lines of code, those that hurt application performance and are difficult to troubleshoot.
CodeGuru is powered by machine learning, best practices, and hard-learned lessons across millions of code reviews and thousands of applications profiled on open source projects and internally at Amazon. It helps developers find and fix code issues such as resource leaks, potential concurrency race conditions, and wasted CPU cycles.
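CodeGuru's reviewer initially targets Java code, so as a language-neutral illustration, here is a hand-written Python example (not CodeGuru output) of the resource-leak class such tools flag:

```python
# The kind of resource leak an automated reviewer flags: the file handle
# is never closed explicitly, and stays open if an exception interrupts
# the read.
def read_config_leaky(path):
    f = open(path)
    return f.read().splitlines()


# The usual fix: a context manager guarantees the handle is closed even
# when an exception is raised mid-read.
def read_config(path):
    with open(path) as f:
        return f.read().splitlines()
```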
To know more about Amazon CodeGuru in detail, read the official blog post.
A few other highlights of Day two at AWS re:Invent 2019 include:
- General availability of Amazon EKS on AWS Fargate, AWS Fargate Spot, and ECS Cluster Auto Scaling.
- The Deep Graph Library, an open source library built for easy implementation of graph neural networks, is now available on Amazon SageMaker.
AWS re:Invent will continue throughout this week until December 6. You can access the livestream. Keep checking this space for further updates and releases.