Broadcast Date: June 17, 2019
Every data lake initiative begins with setting up extract, transform, and load (ETL) processes that move data from various data sources into a central data repository. In this tech talk, we will show how you can use AWS Glue to build, automate, and manage ETL jobs on a scalable, serverless Apache Spark platform, and how AWS Glue also supports Python shell jobs in addition to Spark jobs.
- Learn about building a data lake on AWS
- Discover how to create ETL processes using AWS Glue
- Understand how serverless Spark and Python jobs reduce costs
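As a preview of the kind of ETL setup covered in the talk, the sketch below shows one way to define an AWS Glue Python shell job programmatically with boto3's `create_job` API. The job name, IAM role ARN, and S3 script location are placeholder assumptions, not values from the talk.

```python
# Sketch: building the arguments for an AWS Glue Python shell job.
# All names, ARNs, and S3 paths here are hypothetical placeholders.

def build_python_shell_job_args(name, role_arn, script_s3_path):
    """Build keyword arguments for glue.create_job() defining a Python shell job."""
    return {
        "Name": name,
        "Role": role_arn,
        "Command": {
            "Name": "pythonshell",           # "glueetl" would define a Spark job instead
            "ScriptLocation": script_s3_path,
            "PythonVersion": "3",
        },
        "MaxCapacity": 0.0625,               # smallest Python shell allocation (1/16 DPU)
    }

args = build_python_shell_job_args(
    "example-etl-job",                                        # hypothetical job name
    "arn:aws:iam::123456789012:role/ExampleGlueRole",         # hypothetical IAM role
    "s3://example-bucket/scripts/etl_job.py",                 # hypothetical script path
)

# With AWS credentials configured, the job could then be created with:
#   import boto3
#   boto3.client("glue").create_job(**args)
```

Swapping the `Command` name to `glueetl` (and using `WorkerType`/`NumberOfWorkers` instead of `MaxCapacity`) would define a serverless Spark job instead.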
Who Should Attend?
Analysts, Developers, Data Scientists, Data Engineers, DBAs
Service How To
December 19th, 2018 | 1:00 PM PT
Developing Deep Learning Models for Computer Vision with Amazon EC2 P3 Instances