Broadcast Date: September 30, 2022

Level: 200

High-performance, cost-effective deployment techniques can help you maximize the return on your machine learning investments. ML engineers, join us for a hands-on demo showing how to use Amazon SageMaker to optimize inference workloads and reduce infrastructure costs.

Learning Objectives

  • Discover how to easily deploy ML models for any use case.
  • Learn how to achieve the best inference performance and cost.
  • Find out how to reduce operational burden and accelerate time to value.

Who Should Attend?

MLOps engineers, ML engineers, and data scientists

Speakers

  • Shelbee Eigenbrode, Principal AI/ML Specialist SA, AWS WWSO AI/ML
  • Michael Hsieh, Sr AI/ML Specialist SA, AWS WWSO AI/ML - VAR

Learn More

To learn more about the services featured in this talk, please visit:
https://aws.amazon.com/sagemaker/deploy/
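
To give a flavor of the deployment workflow the demo covers, here is a minimal sketch using the SageMaker Python SDK to register a model and deploy it to a real-time inference endpoint. The container image URI, model artifact location, IAM role, instance type, and endpoint name are illustrative placeholders, not values from the session.

    import sagemaker
    from sagemaker.model import Model

    session = sagemaker.Session()

    # Placeholder execution role ARN; replace with a role that has SageMaker permissions.
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

    # Placeholder container image and model artifact; point these at your own.
    model = Model(
        image_uri="<inference-container-image-uri>",
        model_data="s3://<your-bucket>/model/model.tar.gz",
        role=role,
        sagemaker_session=session,
    )

    # Deploy to a real-time endpoint. Instance type and count are the main
    # levers for balancing inference performance against infrastructure cost.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.c5.xlarge",
        endpoint_name="sagemaker-inference-demo",
    )

    # Invoke the endpoint, then delete it when finished to stop incurring charges.
    # response = predictor.predict(payload)
    # predictor.delete_endpoint()

This is only a sketch under the assumptions noted above; the talk itself covers additional SageMaker deployment options for matching each workload to the most cost-effective configuration.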
