
AWS already has solutions for machine learning, edge computing, and IoT. But a recent update to AWS Greengrass combines all of these facets, so you can now deploy machine learning models to the edge of networks. That’s an important step forward in the IoT space for AWS. With Microsoft recently announcing a $5 billion investment in IoT over the next four years, extending the capability of AWS Greengrass is how the AWS team is making sure it sets the pace in the industry.

Jeff Barr, AWS evangelist, explained the idea in a post on the AWS blog:

“…You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields…”

Industrial applications of machine learning inference

Machine learning inference is bringing lots of advantages to industry and agriculture. For example:

  • In farming, edge-enabled machine learning systems will be able to monitor crops using image recognition, enabling corrective action to be taken and allowing farmers to optimize yields.
  • In manufacturing, machine learning inference at the edge should improve operational efficiency by making it easier to spot faults before they occur. By monitoring vibrations or noise levels, Barr explains, you’ll be able to identify faulty or failing machines before they actually break, as sketched after this list.
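To make the vibration example concrete, here is a minimal, illustrative sketch of the kind of check an edge device might run. It uses a simple statistical baseline rather than a trained model, and the baseline numbers are hypothetical placeholders; a real Greengrass deployment would run a model trained in the cloud instead.

```python
# Illustrative edge-side fault detection: compare the RMS amplitude of a
# window of accelerometer samples against a baseline from healthy machines.
# The baseline values below are hypothetical placeholders.
import math

HEALTHY_RMS_MEAN = 0.42   # hypothetical baseline learned from healthy machines
HEALTHY_RMS_STD = 0.05    # hypothetical spread of that baseline

def rms(window):
    """Root-mean-square amplitude of one window of vibration samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def is_anomalous(window, threshold=3.0):
    """Flag a window whose RMS deviates more than `threshold` standard
    deviations from the healthy baseline."""
    z = abs(rms(window) - HEALTHY_RMS_MEAN) / HEALTHY_RMS_STD
    return z > threshold

# A window close to the baseline passes; unusually large oscillations trip it.
print(is_anomalous([0.4, -0.45, 0.41, -0.38]))   # False: near baseline
print(is_anomalous([1.2, -1.3, 1.25, -1.18]))    # True: machine likely failing
```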

Running this on AWS Greengrass offers a clear division of labour: models are built and trained on powerful cloud infrastructure, while inference runs locally on the device. That means you can use complex models at the edge without draining the limited computing resources of your IoT hardware.
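In practice, the inference side of this is a Lambda function running on the Greengrass core. The sketch below assumes an image-classification model has already been deployed to the device as a Greengrass ML resource; the local path, topic name, and load_model stub are illustrative assumptions, not a fixed AWS API.

```python
import json
import greengrasssdk  # available on the Greengrass core device

iot_client = greengrasssdk.client('iot-data')

# Assumed local path where the Greengrass group unpacks the model artifacts;
# you choose this path when attaching the ML resource to the group.
MODEL_DIR = '/greengrass-machine-learning/mxnet/squeezenet'

def load_model(model_dir):
    """Hypothetical loader: a real function would load the model with your
    ML framework (e.g. MXNet) from the local resource path."""
    def predict(image_bytes):
        return 'placeholder-label'  # stand-in for a real inference call
    return predict

predict = load_model(MODEL_DIR)

def function_handler(event, context):
    # Inference runs on the device itself; only the compact prediction is
    # published, so an intermittent uplink doesn't block the model.
    label = predict(event.get('image'))
    iot_client.publish(
        topic='factory/camera/predictions',  # assumed topic name
        payload=json.dumps({'label': label}),
    )
```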

You can read about this in more detail in the AWS Greengrass Developer Guide.

AWS Greengrass should simplify machine learning inference

One of the fundamental benefits of AWS Greengrass is that it should simplify machine learning inference at every stage of the typical workflow. From building and training models in the cloud, to deploying them and developing inference applications that run locally within an IoT network, it should, in theory, make the advantages of machine learning inference accessible to more people.
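On the cloud side, a trained model stored in S3 can be attached to a Greengrass group as an ML resource. Below is a minimal boto3 sketch of that step; the bucket, key, and destination path are hypothetical, and a complete deployment would also need a function definition and a new group version.

```python
import boto3

greengrass = boto3.client('greengrass')

# Register a trained model archive in S3 as an ML resource definition.
# Bucket, key, and destination path are hypothetical placeholders.
response = greengrass.create_resource_definition(
    Name='MLModelResources',
    InitialVersion={
        'Resources': [{
            'Id': 'squeezenet-model',
            'Name': 'SqueezenetModel',
            'ResourceDataContainer': {
                'S3MachineLearningModelResourceData': {
                    # Assumed S3 location of the trained model archive
                    'S3Uri': 's3://my-ml-bucket/models/squeezenet.tar.gz',
                    # Local path the archive is unpacked to on the core device
                    'DestinationPath': '/greengrass-machine-learning/mxnet/squeezenet',
                },
            },
        }],
    },
)
print(response['LatestVersionArn'])
```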

It will be interesting to see how this new feature is applied by IoT engineers over the next year or so. But it will also be interesting to see if this has any impact on the wider battle for the future of Industrial IoT.

