Broadcast Date: May 5, 2023

Level: 300

As machine learning moves from the research lab to the enterprise, questions of ethics and risk become even more critical to producing well-performing models. Bias runs counter to any code of ethics and can surface throughout the machine learning lifecycle. We will explore how bias can appear in your people, strategy, data, algorithms, and models, and I'll share questions to ask at each stage to help root it out. We will also explore Amazon SageMaker Clarify, which detects potential bias throughout the ML lifecycle and adds explainability to model predictions.
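
As a rough illustration of the data-preparation stage described above, the sketch below uses the SageMaker Python SDK to run a Clarify pre-training bias job. The IAM role, S3 paths, column names, and facet are placeholder assumptions for illustration, not values from this talk.

    # Minimal sketch of a Clarify pre-training bias check (placeholder values throughout).
    import sagemaker
    from sagemaker import clarify

    session = sagemaker.Session()

    # Processor that runs the Clarify analysis job.
    clarify_processor = clarify.SageMakerClarifyProcessor(
        role="arn:aws:iam::123456789012:role/ClarifyExecutionRole",  # placeholder role
        instance_count=1,
        instance_type="ml.m5.xlarge",
        sagemaker_session=session,
    )

    # Where the training data lives and where the bias report should be written.
    data_config = clarify.DataConfig(
        s3_data_input_path="s3://example-bucket/train/train.csv",   # placeholder path
        s3_output_path="s3://example-bucket/clarify/bias-report/",  # placeholder path
        label="approved",                                           # placeholder label column
        headers=["approved", "age", "income", "gender"],            # placeholder columns
        dataset_type="text/csv",
    )

    # Which outcome counts as favorable and which facet (sensitive attribute) to examine.
    bias_config = clarify.BiasConfig(
        label_values_or_threshold=[1],
        facet_name="gender",
        facet_values_or_threshold=[0],
    )

    # Pre-training (data preparation) bias metrics, e.g. class imbalance (CI)
    # and difference in positive proportions in labels (DPL).
    clarify_processor.run_pre_training_bias(
        data_config=data_config,
        data_bias_config=bias_config,
        methods=["CI", "DPL"],
    )

Clarify offers similar post-training and online explainability analyses, which the talk covers alongside this data-preparation check.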

Learning Objectives

  • Understand how bias can present itself across the ML lifecycle.
  • Learn techniques to root out bias throughout the ML lifecycle.
  • Understand how Amazon SageMaker Clarify can detect potential bias during data preparation, after model training, and in deployed models, and how it helps explain how input features contribute to model predictions in real time.

Who Should Attend?

Developers, data scientists, and ML practitioners

Speaker(s)

Kesha Williams, AWS ML Hero, Program Director, AWS Cloud Residency, Slalom


Learn More

To learn more about the services featured in this talk, please visit:
aws.amazon.com/sagemaker/clarify
