Broadcast Date: February 18, 2019
Organizations that want to democratize access to data cannot afford a data warehouse that is slow to scale or that forces a trade-off between performance and concurrency. Data engineering teams and DBAs who run traditional data warehouses must either over-provision capacity to maintain SLAs and query performance when demand increases, or settle for unpredictable performance during demand peaks. The former approach is expensive; the latter results in end-user frustration. Amazon Redshift seamlessly scales to provide consistently fast performance, not only with rapidly growing data but also with high user and query concurrency. This tech talk highlights how Amazon Redshift enables you to scale storage and compute resources on demand and automatically, as needed. It also demonstrates the intelligent maintenance and administration operations that Amazon Redshift performs on your behalf to keep your clusters performant at any scale.
- Learn tips and tricks to prepare your Amazon Redshift clusters for varying analytical demands and concurrency needs
- See how features such as concurrency scaling automatically deploy and remove capacity as needed to serve your changing query workload
- See how automated operations such as VACUUM help to free up disk space and improve query performance as you scale
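For readers who want a concrete sense of the maintenance operations mentioned above, the sketch below assembles the standard Redshift `VACUUM` and `ANALYZE` statements as strings. The table name `sales` is hypothetical, and in practice you would submit these through a SQL client connected to your cluster; this is only an illustration of the command forms, not a connected example.

```python
# Sketch: build the Redshift maintenance statements discussed above.
# Redshift's VACUUM reclaims disk space and re-sorts rows; ANALYZE
# refreshes the table statistics used by the query planner.
# The table name "sales" is hypothetical.

def vacuum_statement(table: str, mode: str = "FULL") -> str:
    """Build a VACUUM statement. Redshift supports the modes
    FULL, SORT ONLY, DELETE ONLY, and REINDEX."""
    return f"VACUUM {mode} {table};"

def analyze_statement(table: str) -> str:
    """Build an ANALYZE statement for the given table."""
    return f"ANALYZE {table};"

print(vacuum_statement("sales"))   # VACUUM FULL sales;
print(analyze_statement("sales"))  # ANALYZE sales;
```

Note that, as the talk discusses, recent Redshift releases run delete-phase vacuuming and statistics refreshes automatically in the background, so explicit commands like these are increasingly an on-demand tool rather than a scheduled chore.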
Who Should Attend?
Developers, Engineers, IT Professionals, Architects, Technical Decision Makers
Service How To