Broadcast Date: September 30, 2022
High-performance, cost-effective deployment techniques can help you maximize the return on your machine learning investments. ML engineers, join us for a hands-on demo showing how to use Amazon SageMaker to optimize inference workloads and reduce infrastructure costs.
- Discover how to easily deploy ML models for any use case.
- Learn how to achieve the best inference performance and cost.
- Find out how to reduce operational burden and accelerate time to value.
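One way to reason about inference cost, in the spirit of the talk's second bullet, is to compare candidate instance types by cost per million inferences. The sketch below is illustrative only: the prices and throughput figures are hypothetical placeholders, not real AWS pricing, and the instance names are just examples.

```python
# Illustrative sketch: ranking candidate SageMaker instance types by
# cost per million inferences. All prices and throughputs below are
# hypothetical placeholders, not actual AWS pricing or benchmarks.

def cost_per_million(hourly_price_usd, requests_per_second):
    """Cost (USD) to serve one million requests at sustained throughput."""
    requests_per_hour = requests_per_second * 3600
    return hourly_price_usd / requests_per_hour * 1_000_000

# Hypothetical candidates: (price per hour in USD, sustained req/s).
candidates = {
    "ml.c5.xlarge": (0.20, 50),
    "ml.g4dn.xlarge": (0.74, 400),
}

# Print candidates from cheapest to most expensive per inference.
for name, (price, rps) in sorted(
    candidates.items(), key=lambda kv: cost_per_million(*kv[1])
):
    print(f"{name}: ${cost_per_million(price, rps):.2f} per 1M inferences")
```

The point of the comparison is that a pricier instance can still win on cost per inference if its throughput is high enough, which is why benchmarking inference workloads (as the demo covers) matters before choosing hardware.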
Who Should Attend?
MLOps engineers, ML engineers, data scientists
- Shelbee Eigenbrode, Principal AI/ML Specialist SA, AWS WWSO AI/ML
- Michael Hsieh, Sr AI/ML Specialist SA, AWS WWSO AI/ML - VAR
To learn more about the services featured in this talk, please visit: