Join us for AWS Database Modernization Week
Put your data to work with a modern data strategy
To thrive, organizations need to build a modern data strategy that will grow with them in the future. AWS Database Modernization Week is an online event for technical professionals designed to help you build your modern data strategy. Join AWS experts and partners for deep-dive technical sessions and hands-on labs where you'll learn how to build a better data foundation and put your data to work.
Gain the insights and resources you need to modernize your data infrastructure, unify the best of both data lakes and purpose-built data stores, innovate new experiences, and reimagine old processes.
Event details

May 23 - 25
10:00am - 5:00pm CEST / 9:00am - 4:00pm BST

Register Now
Agenda
Achieve Compliance and Enhanced Security with Relational Databases for Financial Services
May 23 - 10:00am CEST / 9:00am BST
Level 300
In this session, we will walk through best practices and reference architectures for achieving compliance and enhanced security with Amazon Aurora for Financial Services use cases.
Shayon Sanyal
Principal Database Specialist
Rajib Sadhu
Senior Database Solutions Architect
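To give a flavor of the controls this session covers, here is a minimal boto3 sketch of creating an Aurora PostgreSQL cluster with encryption at rest, IAM database authentication, log exports, and deletion protection enabled. All identifiers, the subnet group, and the security group are placeholders, not material from the session itself.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Create an encrypted Aurora PostgreSQL cluster with security controls enabled.
# Every identifier below is a placeholder.
rds.create_db_cluster(
    DBClusterIdentifier="fsi-aurora-cluster",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    MasterUserPassword="CHANGE_ME",           # use a secrets store in practice
    StorageEncrypted=True,                    # encryption at rest
    KmsKeyId="alias/aws/rds",                 # or a customer-managed KMS key
    EnableIAMDatabaseAuthentication=True,     # IAM-based connection authentication
    EnableCloudwatchLogsExports=["postgresql"],
    DeletionProtection=True,
    BackupRetentionPeriod=14,
    DBSubnetGroupName="private-db-subnets",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
)
```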
Amazon RDS Custom for Oracle Technical Overview
May 23 - 11:00am CEST / 10:00am BST
Level 300
In this session, we will deep dive into some of the exciting new features of Amazon RDS Custom for Oracle. We will provide deeper insight into use cases and service capabilities accompanied by a demo.
Yamuna Palasamudram
Senior Database Specialist
Amazon Aurora Cost Optimization Best Practices
May 23 - 12:00pm CEST / 11:00am BST
Level 300-400
Amazon Aurora is a cloud-native database built for speed and reliability. Aurora offers pay-as-you-go pricing with no licensing cost, and it is a unique relational database made up of several decoupled components. In this session, you will learn how the different Aurora components affect cost and best practices for optimizing it, while taking advantage of all the advanced functionality the database offers.
Krishna Sarabu
Senior PostgreSQL Specialist
Aditya Samant
Senior Database Specialist
Building a Customer 360 Graph Application on Amazon Neptune
May 23 - 1:00pm CEST / 12:00pm BST
Level 300-400
Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. In this session, we show how Amazon Neptune can be used to build a Customer 360 graph for applications in marketing and targeted advertising use cases. In part 1, we’ll walk through how to design the graph data model for Customer 360 use cases. Then join us for Part 2 to learn how to gain insights through queries and visualizations.
Justin Thomas, Principal Amazon Neptune Specialist
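As a taste of the kind of traversal Part 2 covers, here is a minimal sketch using the gremlinpython client against a Neptune endpoint. The endpoint, vertex labels (Customer, Order, Product), and edge labels (PLACED, CONTAINS) are hypothetical and are not the session's actual data model.

```python
# Requires: pip install gremlinpython
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

# Neptune endpoint and graph model are placeholders.
conn = DriverRemoteConnection(
    "wss://my-neptune-cluster.cluster-xxxxxxxx.eu-west-1.neptune.amazonaws.com:8182/gremlin", "g"
)
g = traversal().withRemote(conn)

# List products in orders placed by a given customer: Customer -> Order -> Product.
products = (
    g.V().has("Customer", "customerId", "C-1001")
    .out("PLACED").out("CONTAINS")
    .dedup()
    .values("productName")
    .limit(10)
    .toList()
)
print(products)
conn.close()
```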
Overview of Amazon Relational Database Service (RDS) Open Source Engines and Key Features
May 24 - 10:00am CEST / 9:00am BST
Level 200
In this session, you will learn about the Amazon Relational Database Service (RDS) open source database engines (RDS for PostgreSQL, RDS for MySQL, and RDS for MariaDB), the architecture of RDS, and some of the key features that you can use for your critical workloads.
Vijay Karumajji
Senior Database Specialist
Shagun Arora
Database Specialist
Krishna Sarabu
Senior PostgreSQL Specialist
Amazon RDS Custom for SQL Server Overview
May 24 - 11:00am CEST / 10:00am BST
Level 300
This session includes a technical overview of Amazon RDS Custom for SQL Server including the key features, target use cases, how the service works, and how you can identify workloads in your organization that may benefit from this new service.
Carlos Robles
Database Specialist
Saleh Ghasemi
Senior Database Specialist
Protect against Ransomware and Insider Threats with Best Practices on Data Resiliency
May 24 - 12:00pm CEST / 11:00am BST
Level 300
In this session, we will dive deep into best practices for configuring Amazon Aurora databases to safeguard against ransomware and related insider threats to your database.
Sundar Raghavan
Principal Database Specialist
Sukhpreet Kaur Bedi
Database Specialist
Introduction to Amazon DocumentDB (with MongoDB compatibility)
May 24 - 1:00pm CEST / 12:00pm BST
Level 300-400
This session will provide an introduction to Amazon DocumentDB, including use cases for document databases, differences between DocumentDB and traditional databases, and challenges scaling traditional databases. We will also dive deep into the recently launched Global Clusters/Cross Region Replication feature. Then join us to apply what you learned in the Introduction to DocumentDB hands-on lab.
Ryan Thurston
Principal Business Development
Karthik Vijayraghavan
Principal Solution Architect
Amazon Aurora Performance Optimization Techniques
May 25 - 10:00am CEST / 9:00am BST
Level 300
Amazon Aurora is a high-performance, highly scalable database service with MySQL and PostgreSQL compatibility, offering multi-region support, automatic storage scaling, and high throughput. In this session, learn about different techniques to optimize your database performance further.
Rajesh Matkar
Principal Partner SA, Database
Arabinda Pani
Principal Partner SA, Database
Building modern, high performance applications at any scale with Amazon DynamoDB Part 1
May 25 - 11:00am CEST / 10:00am BST
Level 300
Amazon DynamoDB offers an enterprise-ready database that helps you deliver apps with consistent, single-digit millisecond performance and nearly unlimited throughput and storage. In this session, we'll review some of the most powerful features that will help you save costs while driving the most business impact, such as multi-region replication with global tables, optimizing for cost with new DynamoDB table classes, on-demand capacity mode for spiky workloads, and exporting data from your continuous backups to Amazon S3.
Jason Hunter, Principal DynamoDB Specialist
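As a taste of the features listed above, here is a minimal boto3 sketch that creates an on-demand table using the Standard-Infrequent Access table class and kicks off an export to Amazon S3. Table, bucket, and account identifiers are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

# Create an on-demand table using the Standard-Infrequent Access table class.
dynamodb.create_table(
    TableName="OrderHistory",
    AttributeDefinitions=[
        {"AttributeName": "PK", "AttributeType": "S"},
        {"AttributeName": "SK", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PK", "KeyType": "HASH"},
        {"AttributeName": "SK", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",               # on-demand capacity for spiky workloads
    TableClass="STANDARD_INFREQUENT_ACCESS",     # lower storage cost for rarely accessed data
)

# Export the table's continuous backups to Amazon S3 (requires PITR to be enabled).
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:eu-west-1:123456789012:table/OrderHistory",
    S3Bucket="my-dynamodb-exports",
    ExportFormat="DYNAMODB_JSON",
)
```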
Building modern, high performance applications at any scale with Amazon DynamoDB Part 2
May 25 - 12:00pm CEST / 11:00am BST
Level 300
Amazon DynamoDB offers an enterprise-ready database that helps you deliver apps with consistent, single-digit millisecond performance and nearly unlimited throughput and storage. In this session, we'll review some of the most powerful features that will help you save costs while driving the most business impact, such as multi-region replication with global tables, optimizing for cost with new DynamoDB table classes, on-demand capacity mode for spiky workloads, and exporting data from your continuous backups to Amazon S3.
Jason Hunter, Principal DynamoDB Specialist
Oracle Applications on Amazon RDS Best Practices
May 25 - 1:00pm CEST / 12:00pm BST
Level 300
In this session, we will dive deep into why AWS is the best place to host Oracle applications. We will also discuss best practices for architecting your Oracle applications to be highly available, efficient, flexible, and fault tolerant using AWS Relational Database Services.
Sachin Vaidya
Sr. Database Specialist SA
Jeremy Shearer
Sr. Partner SA, Oracle
How to leverage AWS Modern Data Architecture to Accelerate your Data Strategy
May 23 - 10:00am CEST / 9:00am BST
Data volumes are increasing at an unprecedented rate, exploding from petabytes to zettabytes of data. To get the most out of data at any scale, companies are rapidly modernizing their data architecture into a cloud-based lake house architecture and evaluating capabilities such as a "data mesh." In this session, we provide an overview of the challenges customers face and how a "data mesh" and AWS's modern data architecture solve these pain points. The session includes customer references and touches on the architectures enabling their use cases. The content is at the 200 level and covers various Amazon database and analytics services.
Ryan Shevchik
Principal Database Specialist
Bob Maus
Senior Manager GTM Data & Streaming
Empower your Users through Embedding Analytics into your Applications with Amazon QuickSight
May 23 - 11:00am CEST / 10:00am BST
Enhance your applications with rich, interactive dashboard visualizations using QuickSight, without specialized analytics know-how or setting up and managing servers. QuickSight’s embedded dashboards are secure, serverless, highly scalable, and cost effective. In this session, learn how companies like Blackboard are using QuickSight’s new embedding and API capabilities to deliver data-driven insights to their end users.
Kareem Syed-Mohammed
Senior Product Manager
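A minimal boto3 sketch of the embedding flow discussed above: generating a dashboard embed URL for a registered QuickSight user. The account ID, user ARN, and dashboard ID are placeholders.

```python
import boto3

quicksight = boto3.client("quicksight", region_name="eu-west-1")

# Generate an embeddable dashboard URL for a registered QuickSight user.
response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="123456789012",
    SessionLifetimeInMinutes=60,
    UserArn="arn:aws:quicksight:eu-west-1:123456789012:user/default/analytics-reader",
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "11111111-2222-3333-4444-555555555555"}
    },
)
print(response["EmbedUrl"])  # hand this short-lived URL to the front end for embedding
```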
Evolve from Delivering Transactional to Continuous Real-time Intelligence for Next-level Customer Experiences
May 23 - 12:00pm CEST / 11:00am BST
Speed is a key characteristic businesses need for a competitive edge in today's world. Consumers increasingly want experiences that are timely, targeted, and tailored to their specific needs, whether they are applying for a loan, checking health alerts, shopping online, or monitoring systems. Join us in this session to understand how you can use AWS data streaming platforms to tap into real-time insights that can help take your customer experiences to the next level.
Jeremy Ber
Senior Analytics Specialist
Democratize Machine Learning using SQL and Amazon Redshift ML
May 23 - 1:00pm CEST / 12:00pm BST
Experience how Redshift can help you gain insights into the metrics needed to run your business. Redshift ML allows data analysts and data scientists to easily train machine learning models using SQL without having to move data. Data engineers will learn how the Data API simplifies access and allows for easy integration when building event-driven applications.
Srikanth Sopirala
Principal Analytics SSA
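A minimal sketch of the pattern this session covers: training a Redshift ML model with plain SQL and submitting it through the Data API. The table, columns, IAM role, and bucket names are hypothetical.

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="eu-west-1")

# Train a model with Redshift ML using SQL; all object names are placeholders.
create_model_sql = """
CREATE MODEL churn_model
FROM (SELECT age, tenure_months, monthly_charges, churned FROM customer_activity)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-artifacts');
"""

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=create_model_sql,
)

# Once training finishes, the model is invoked like any other SQL function:
#   SELECT customer_id, predict_churn(age, tenure_months, monthly_charges) FROM customer_activity;
```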
Simplify Data Integration and Preparation using AWS Glue
May 23 - 2:00pm CEST / 1:00pm BST
There are many ways to ingest and prepare data for analytics and ML. In this session, you will learn how AWS Glue Studio makes it simple to visually build data integration pipelines using hundreds of connectors and transformations. We will also cover how AWS Glue DataBrew simplifies data preparation tasks, allowing your users to get insights from data more quickly. Lastly, we will discuss the enhancements made to the underlying AWS Glue execution engine to improve performance and reliability and reduce cost.
Shiv Narayanan
Senior Technical Product Manager
Deen Prasad
Senior Analytics Specialist
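For orientation, here is a minimal AWS Glue (PySpark) job script of the kind Glue Studio generates from a visual pipeline. The catalog database, table, field names, and S3 output path are placeholders.

```python
# Minimal Glue ETL job: read from the Data Catalog, clean, and write Parquet to S3.
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table from the Glue Data Catalog (names are placeholders).
source = glue_context.create_dynamic_frame.from_catalog(database="sales", table_name="raw_orders")

# Drop an unneeded field and cast an ambiguous column to a single type.
cleaned = source.drop_fields(["debug_notes"]).resolveChoice(specs=[("order_total", "cast:double")])

# Write the curated data to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://my-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```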
Build a Serverless Analytics Framework
May 24 - 10:00am CEST / 9:00am BST
Level 200
This session describes customer pain points, gives an overview of the serverless analytics capabilities (Amazon Redshift, Amazon EMR, Amazon MSK, and Amazon Kinesis), explains how they help AWS customers overcome these pain points, and covers typical use cases where these serverless capabilities are deployed.
Bob Maus
Senior Manager GTM Data & Streaming
Migration Deep Dive with Amazon OpenSearch Service
May 24 - 11:00am CEST / 10:00am BST
Level 300-400
When businesses begin migrating their self-managed Elasticsearch, OpenSearch, and other search solutions to the Amazon OpenSearch Service, they need to understand migration patterns that will make them successful. In this session you will learn about focus areas that help you prepare for a migration. You will also learn about tooling, approaches, and mechanisms that can help you migrate your workloads to Amazon OpenSearch Service.
Principal Analytics Specialist
Building Analytics at Scale with Amazon Athena
May 24 - 12:00pm CEST / 11:00am BST
Level 200
Amazon Athena is a highly scalable analytics service that makes it easy to analyze data in Amazon S3 and other data stores. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. This session offers a deep dive into the service, customer use cases, newly launched features, and what is next for Amazon Athena.
Suresh Akena
Principal WW Athena
Naresh Gautam
Principal US-West Analytics
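A minimal boto3 sketch of running an Athena query and reading the results; the database, table, and results location are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# Start a query against data in S3 (database, table, and output location are placeholders).
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS requests FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then print the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```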
Amazon Redshift Data Sharing to Build a Scalable Multi-tenant Architecture
May 24 - 1:00pm CEST / 12:00pm BST
Level 200-300
With Amazon Redshift data sharing, organizations can now have instant, granular, and fast data access across Amazon Redshift clusters without the need to copy or move data. This session will illustrate how to use data sharing effectively through real-world use cases and discuss multi-tenant architecture patterns that meet varied tenant requirements.
Milind Oke
Analytics Specialist
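A minimal sketch of the producer-side data sharing SQL submitted through the Redshift Data API; cluster, schema, and namespace identifiers are placeholders, and the consumer-side steps are shown as comments.

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="eu-west-1")

# Producer side: create a datashare, add a schema, and grant it to a consumer namespace.
producer_sql = [
    "CREATE DATASHARE sales_share;",
    "ALTER DATASHARE sales_share ADD SCHEMA sales;",
    "ALTER DATASHARE sales_share ADD ALL TABLES IN SCHEMA sales;",
    "GRANT USAGE ON DATASHARE sales_share TO NAMESPACE 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';",
]
for sql in producer_sql:
    redshift_data.execute_statement(
        ClusterIdentifier="producer-cluster", Database="dev", DbUser="awsuser", Sql=sql
    )

# Consumer side (run against the consumer cluster):
#   CREATE DATABASE sales_db FROM DATASHARE sales_share OF NAMESPACE '<producer-namespace-id>';
#   SELECT * FROM sales_db.sales.orders LIMIT 10;
```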
What's New in AWS Lake Formation?
May 25 - 10:00am CEST / 9:00am BST
Level 200
Building data lakes and securing and sharing data can be challenging for many organizations. Learn how AWS Lake Formation can simplify data ingestion and enable a database-like permissions model for your data lakes. We will discuss recently launched AWS Lake Formation features such as row- and cell-level security, transactions and storage optimization in governed tables, and a new storage API that improves integration with other AWS analytics and third-party services.
Adnan Hasan
Global Spec Analytics
Amazon OpenSearch Service: Why a managed search service based on the core principles of open source is important to your organization
May 25 - 11:00am CEST / 10:00am BST
Level 200
Attend this session to learn: (1) What Amazon OpenSearch Service is; (2) How it provides a choice of open source engines to deploy and run, including the latest versions of OpenSearch and the currently available 19 versions of ALv2 Elasticsearch (7.10 and earlier); (3) Why open source-based software gives users greater freedom and fosters innovation; (4) How to take advantage of Amazon OpenSearch Service's new features, dashboards, and operational simplicity to power your organization's log analytics and full-text search use cases.
Arun Lakshmanan
Analytics Specialist
Building Data Pipelines using Amazon EMR on Amazon EC2 and EMR on Amazon EKS with Amazon Managed Workflows for Apache Airflow
May 25 - 12:00pm CEST / 11:00am BST
Level 300
Learn how to deploy and run data pipelines on Amazon Managed Workflows for Apache Airflow (MWAA) using EMR on EC2 and EMR on EKS. In this session, we'll show how to schedule jobs in Airflow using MWAA with EMR Operators and how to deploy and configure Airflow, EMR, and PySpark jobs.
Leonardo Gomez
Sr Analytics Specialist
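A minimal sketch of an Airflow DAG of the kind deployed on MWAA in this session, submitting a PySpark step to an existing EMR on EC2 cluster and waiting for it to finish. The cluster ID, script path, and schedule are placeholders, and the operator import paths vary with the installed Amazon provider package version.

```python
from datetime import datetime

from airflow import DAG
# Import paths assume a recent amazon provider package; older versions use different modules.
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

SPARK_STEP = [{
    "Name": "daily-aggregation",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": ["spark-submit", "s3://my-data-pipelines/jobs/aggregate.py"],
    },
}]

with DAG("emr_pipeline", start_date=datetime(2022, 5, 1), schedule_interval="@daily", catchup=False) as dag:
    # Submit the PySpark step to an existing EMR on EC2 cluster (cluster ID is a placeholder).
    add_step = EmrAddStepsOperator(
        task_id="add_spark_step",
        job_flow_id="j-XXXXXXXXXXXXX",
        steps=SPARK_STEP,
    )
    # Wait for the step to complete before the DAG run finishes.
    wait_for_step = EmrStepSensor(
        task_id="wait_for_spark_step",
        job_flow_id="j-XXXXXXXXXXXXX",
        step_id="{{ task_instance.xcom_pull(task_ids='add_spark_step', key='return_value')[0] }}",
    )
    add_step >> wait_for_step
```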
ETL Modernization with AWS Glue
May 25 - 1:00pm CEST / 12:00pm BST
Level 200-300
The ETL/Data Integration landscape is significantly changing, with organizations looking for cost effective, scalable solutions that enable self-service data integration. In this session, we'll cover emerging trends in data integration, discuss how AWS Glue enables self-service and infuses ML to transform data, and share best practices for ETL modernization.
Shiv Narayanan
Senior Technical Product Manager
[Lab] Getting started with Amazon DocumentDB
May 23 - 2:00pm-5:00pm CEST / 1:00pm-4:00pm BST
In this lab, you will set up an Amazon DocumentDB cluster, interact with it using an AWS Cloud9 environment and a Python client application, and see how to scale out the cluster.
The lab then uses the Amazon DocumentDB Index Tool (https://github.com/awslabs/amazon-documentdb-tools) to read the MongoDB source indexes, check them for DocumentDB compatibility, and restore them to DocumentDB before using AWS DMS to load the data.
Mihai Aldoiu
Senior DocumentDB Specialist
Kaarthiik Thota
Senior DocumentDB Specialist
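A minimal pymongo sketch of the kind of connection the lab sets up from the Cloud9 environment; the cluster endpoint, credentials, and CA bundle path are placeholders.

```python
# Requires: pip install pymongo, plus the Amazon RDS CA bundle downloaded locally.
from pymongo import MongoClient

# DocumentDB requires TLS and does not support retryable writes; identifiers are placeholders.
client = MongoClient(
    "mongodb://labuser:labpassword@docdb-cluster.cluster-xxxxxxxx.eu-west-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)

# Insert and read back a document to verify connectivity.
db = client["catalog"]
db.products.insert_one({"sku": "ABC-123", "name": "Widget", "price": 9.99})
print(db.products.find_one({"sku": "ABC-123"}))
```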
[Lab] DynamoDB - Relational Modeling & Migration
May 24 - 2:00pm-5:00pm CEST / 1:00pm-4:00pm BST
In this module, you will create an environment to host a MySQL database on Amazon EC2. This instance hosts the source database and simulates the on-premises side of the migration architecture.
Data often appears relational at a given point in time, but evolving business requirements cause schema changes over the project lifecycle. Every schema change is labor-intensive and costly, and the cascading impact sometimes forces the business to reprioritize its needs. Amazon DynamoDB helps IT rethink the data model in a key-value format, which can absorb the disruption of an evolving schema. DynamoDB offers a fully managed, serverless datastore for data in key-value format; its schema flexibility lets it store complex hierarchical data within an item while delivering single-digit millisecond latency at scale.
Arjaan Schaaf
DynamoDB Specialist
Shaun Farrell
Senior DynamoDB Specialist
Juhi Patil
DynamoDB Specialist
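A minimal boto3 sketch of the key-value, single-table access pattern the lab migrates toward; the table name, key names, and items are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Single-table design: customers and their orders share a partition key (names are placeholders).
table = boto3.resource("dynamodb", region_name="eu-west-1").Table("AppData")

# Store a customer profile and one of their orders under the same partition key.
table.put_item(Item={"PK": "CUSTOMER#1001", "SK": "PROFILE", "name": "Alice", "tier": "gold"})
table.put_item(Item={"PK": "CUSTOMER#1001", "SK": "ORDER#2022-05-24-001", "total": 42, "items": 3})

# Fetch the customer and all of their orders in a single query.
response = table.query(KeyConditionExpression=Key("PK").eq("CUSTOMER#1001"))
for item in response["Items"]:
    print(item["SK"], item)
```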
[Lab] Modernise Oracle database with Amazon Aurora
May 24 - 2:00pm-5:00pm CEST / 1:00pm-4:00pm BST
In this lab, you will learn how to migrate an on-premises Oracle database to Amazon Aurora (PostgreSQL). You will also learn how to migrate the associated Java application to AWS. These are called heterogeneous migrations. Migrating databases can be a complex, multi-step process that involves pre-migration assessments, conversion of database schema and application code, data migration, functional testing, and performance tuning. Fortunately, AWS provides tools that help with the migration.
Before joining the lab, it will be beneficial to have an understanding of Amazon Aurora and how it works:
- Lab guide for Oracle to Amazon Aurora PostgreSQL
- Best practices for migrating Oracle to Aurora PostgreSQL
- Converting applications using AWS SCT
- AWS re:Invent Dive Deep into AWS SCT and AWS DMS
- AWS re:Invent Migrating Oracle & SQL Server to Amazon Aurora
- Assess, Migrate & Modernise from Legacy Database to AWS
- Amazon Aurora reference material
- Migration tip for developers converting Oracle and SQL Server code to PostgreSQL
Asif Mujawar
Senior Database Specialist
Mattia Berlusconi
Senior Database Specialist
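A minimal boto3 sketch of the AWS DMS task-creation step used in a heterogeneous migration like this one; the endpoint and replication-instance ARNs and the schema name are placeholders, and the schema conversion itself is done beforehand with AWS SCT.

```python
import json
import boto3

dms = boto3.client("dms", region_name="eu-west-1")

# Select every table in a source schema (schema name and ARNs are placeholders).
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-hr-schema",
        "object-locator": {"schema-name": "HR", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Create a full-load-plus-CDC replication task from the Oracle source to the Aurora target.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-pg",
    SourceEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:SOURCEORACLE",
    TargetEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:TARGETAURORAPG",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:123456789012:rep:REPLINSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```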
[Lab] Break free of MSSQL with Babelfish for Aurora PostgreSQL
May 25 - 2:00pm-5:00pm CEST / 1:00pm-4:00pm BST
In this lab, you will learn how to migrate an on-premises MSSQL Server database to Babelfish for Aurora PostgreSQL. You will also learn how to migrate the associated application with little or no change. These are called heterogeneous migrations. Migrating databases can be a complex, multi-step process that involves pre-migration assessments, conversion of database schema and application code, data migration, functional testing, performance tuning, and many other steps. Fortunately, Babelfish for Aurora PostgreSQL enables customers to break free of MSSQL with low risk and an accelerated migration. In this hands-on lab, you will explore how to migrate your MSSQL Server database workloads to Babelfish for Aurora PostgreSQL using the AWS Database Migration Service (DMS) and Babelfish Compass.
Before joining the lab, it will be beneficial to have an understanding of Amazon Aurora and how it works.
- Lab guide: Break free of MSSQL with Babelfish for Aurora PostgreSQL: https://immersionday.com/babelfish-immersion-day/content
- Amazon Aurora reference material: https://aws.amazon.com/rds/aurora/resources/?ar-cards-aurora.sort-by=item.additionalFields.dateAdded&ar-cards-aurora.sort-order=desc
- Babelfish for Aurora PostgreSQL: https://aws.amazon.com/rds/aurora/babelfish/
Asif Mujawar
Senior Database Specialist
Jonathan Kerr
Senior Database Specialist
Brynn Binnell
Senior Database Specialist