Broadcast Date: February 21, 2019
Developers spend significant time and effort delivering machine learning (ML) models that can make fast, accurate predictions in real time. This becomes even more critical for edge devices, where memory and processing power are highly constrained but models must still deliver high accuracy at low latency. Amazon SageMaker Neo removes the barriers that keep developers from running models in the most optimized manner: it enables developers to train ML models once and run them anywhere, in the cloud or at the edge. With SageMaker Neo, ML models are optimized to run up to twice as fast, with less than a tenth of the memory footprint and no loss in accuracy. In this tech talk, we will dive deep into Amazon SageMaker Neo and how this capability of SageMaker automatically optimizes models built with TensorFlow, Apache MXNet, PyTorch, and ONNX. The optimized models can be deployed on a range of hardware platforms, and local compute and ML inference capabilities can be brought to the edge with AWS Greengrass. We will discuss the innovation driving the optimization and how it makes ML models easy to train and run anywhere.
- Learn about Amazon SageMaker Neo where you can train ML models once and run them anywhere in the cloud and at the edge
- Learn how you can optimize models built on a wide range of frameworks and deploy them on multiple hardware platforms
- Learn how Amazon SageMaker Neo and AWS Greengrass work together to bring local compute and ML inference capabilities
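The train-once, run-anywhere workflow above boils down to submitting a Neo compilation job that takes a trained model artifact, its framework, and a target device. A minimal sketch of such a request for the SageMaker `CreateCompilationJob` API follows; the bucket, role ARN, job name, and model path are hypothetical placeholders, and the actual API call is left commented out since it requires AWS credentials:

```python
# Sketch: describing a SageMaker Neo compilation job.
# All names, ARNs, and S3 paths below are hypothetical placeholders.
import json

compilation_job = {
    "CompilationJobName": "neo-demo-mxnet-resnet50",            # hypothetical job name
    "RoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    "InputConfig": {
        "S3Uri": "s3://my-bucket/models/model.tar.gz",  # trained model artifact
        "DataInputConfig": json.dumps({"data": [1, 3, 224, 224]}),  # input shape
        "Framework": "MXNET",  # other options include TENSORFLOW, PYTORCH, ONNX
    },
    "OutputConfig": {
        "S3OutputLocation": "s3://my-bucket/compiled/",
        "TargetDevice": "jetson_tx2",  # an example edge hardware target
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 900},
}

# To actually submit the job (requires AWS credentials and boto3):
# import boto3
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_compilation_job(**compilation_job)
```

The compiled artifact that Neo writes to the output location can then be deployed to a SageMaker endpoint in the cloud or pushed to an edge device, for example through AWS Greengrass.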
Who Should Attend?
Machine Learning Practitioners, Developers, Data Scientists, Technical Decision Makers, Architects
Service How To