Track 1 - Get started: Moving your data to the cloud
|
40 mins
Migrate your Oracle and SQL Server databases to Amazon RDS (200)
Organizations today are looking to free themselves from the constraints of on-premises databases and leverage the power of fully managed databases in the cloud. Amazon RDS is a fully managed relational database service that you can use to run your choice of database engines, including open source engines, Oracle, and SQL Server, in the cloud. Amazon RDS automates time-consuming database administration tasks and adds capabilities such as replication and Multi-AZ failover to make your database deployments more scalable, available, reliable, manageable, and cost-effective. This session covers why you should consider moving your on-premises Oracle and SQL Server deployments to Amazon RDS and the tools to get started.
|
40 mins
Migrate your on-premises Data Warehouse to Amazon Redshift (200)
Most companies are overrun with data, yet they lack critical insights to make timely and accurate business decisions due to the cost, complexity, and rigid architectures of traditional data warehouses. They are missing the opportunity to combine large amounts of new, unstructured big data that resides outside their data warehouse with trusted, structured data inside their data warehouse. Amazon Redshift offers 10x the performance at 1/10th the cost of traditional data warehouses, and extends queries to the data lake with no data movement needed. In this session, we discuss how moving to Amazon Redshift enables you to unlock better price-performance and scale, while automating your day-to-day administration tasks. We also show how Amazon Redshift natively integrates with your data lake and enables you to analyze open data formats with SQL without the need to load, transform, or move the data.
|
40 mins
Why cloud databases like Amazon Aurora are more scalable and reliable (300)
Amazon Aurora is a fully managed MySQL- and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. It is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. This session provides an overview of Aurora, explores Aurora features such as serverless, global databases, multi-master, replication, and Multi-AZ failover, and helps you get started.
|
40 mins
Deploying open source databases on AWS (200)
Open source databases like MySQL, PostgreSQL, and Redis now rank among the world’s most popular databases. Fast-growing companies and large enterprises alike prefer open source databases due to their low cost, freedom from traditional license models, flexibility, community-backed development and support, and large ecosystems of tools and extensions. While open source databases are widely available, they can become difficult and time-consuming to manage in production environments. AWS database services, including Amazon RDS (MySQL, PostgreSQL, MariaDB) and Amazon ElastiCache (Redis, Memcached), make it easy to manage open source database workloads in the cloud with performance, scalability, and availability.
|
|
Track 2 - Building apps with modern databases
|
Which database to choose: Pick the right purpose-built database for the job (200)
Developers building modern applications need purpose-built databases, so they have the freedom to choose the right database for the right job. AWS offers purpose-built relational, key-value, document, in-memory, graph, time series, and ledger databases so you can select the best database based on the application workload, not the other way around. Attend this session to learn how to pick the right database services to address specific application issues.
|
Building large scale data-driven apps with AWS databases (300)
Applications today require databases with unlimited scale that respond more quickly than ever before. Ecommerce and social media applications and connected devices need more than what traditional relational databases offer. Attempts to scale a relational database management system (RDBMS) involve upgrades to more powerful—and often proprietary—hardware. This work is also known as “vertical scaling,” and it usually includes the undesirable combination of rising costs, operational complexity, and performance bottlenecks. Come to this session to learn how AWS databases such as Amazon DynamoDB are built for the scale and performance needs of today’s applications without the complexity of running massively scalable, distributed databases yourself, allowing developers to focus on building applications rather than managing infrastructure.
|
Extreme performance at cloud scale: Supercharge your real-time apps with Amazon ElastiCache (300)
Microseconds are the new milliseconds. Real-time applications such as caching, session stores, and other real-time processing need microsecond latency and high throughput to support millions of requests per second. Developers have traditionally relied on specialized hardware, and on workarounds such as disk-based databases combined with data reduction techniques, to manage data for real-time applications. These approaches can be expensive and difficult to scale. Learn how you can boost the performance of real-time apps by using fully managed, in-memory Amazon ElastiCache for extreme performance, high scalability, availability, and security.
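The caching use case this session describes is commonly implemented with the cache-aside (lazy loading) pattern: check the cache first, fall back to the database on a miss, then populate the cache with a TTL. The sketch below is an assumption-laden illustration; the dict-backed cache stands in for a real Redis/ElastiCache client, and the class name is invented.

```python
import time

class CacheAside:
    """Cache-aside (lazy loading) pattern often used with Redis or
    ElastiCache. The dict here is a stand-in for a real cache client;
    in production the cache would live in a separate in-memory store."""

    def __init__(self, db, ttl_seconds=300):
        self.db = db                  # backing "database" (a dict in this sketch)
        self.ttl = ttl_seconds
        self.cache = {}               # key -> (value, expires_at)

    def get(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]           # cache hit: served from memory
        value = self.db[key]          # cache miss: read from the database
        self.cache[key] = (value, time.time() + self.ttl)
        return value

db = {"session:42": {"user": "ana"}}
store = CacheAside(db)
print(store.get("session:42"))  # first call misses and loads from db
print(store.get("session:42"))  # second call is served from the cache
```

The TTL bounds staleness: after it expires, the next read falls through to the database again, which is the usual trade-off between freshness and database load.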
|
Databases for building business-critical enterprise apps (300)
Packaged and custom enterprise apps help organizations manage core business processes, such as sales, billing, customer service, and HR. For decades, developers have built enterprise apps with old-guard commercial databases, but these databases are expensive, proprietary, have high lock-in, impose punitive licensing terms, don’t scale, and are difficult to manage. AWS offers a better way forward with fully managed, cloud-native, modern database services that make it easier, faster, and more cost-effective for you to build enterprise apps. Attend this session to learn about AWS database services, including Amazon Aurora and Amazon RDS, and dive deep into service capabilities such as scalability, reliability, and performance to quickly build enterprise apps.
|
|
Track 3 - Get started: Building your data lake
|
How to go from zero to data lake in days (200)
AWS provides the most comprehensive, secure, scalable, and cost-effective portfolio of services for building data lakes for analytics. In this session, you will learn how to discover, load, store, prepare, catalog, and secure your data in a data lake. Then, you will learn how to analyze that data with the largest choice of analytics approaches, including data warehousing, operational analytics, real-time streaming analytics, and even ML and AI. This will give you an overview of what AWS analytics can help you accomplish. Finally, you will hear about how leading companies built successful and productive data lakes.
|
Picking the right analytical engine for your needs (200)
AWS offers analytical engines for several use cases, including big data processing with Hadoop and Spark, data warehousing, ad hoc analysis, real-time analytics, and operational/log analytics. In this session, you will learn about what engines you can use to analyze the data stored in your Amazon S3 data lake. You will also learn how to use these engines together to generate new insights by complementing your data warehouse workloads with ad hoc and real-time analytics engines to quickly incorporate new data into your reports.
|
Breaking the silos: Extending your DW to your data lake (300)
Traditional data warehouses require data to be loaded before it can be analyzed. This creates silos between the data warehouse, where transformed and structured data is stored in a proprietary format that other analytical engines cannot access, and the data lake, where data is stored as it arrives. Amazon Redshift breaks through data silos by enabling you to query exabytes of data in open formats directly from your Amazon S3 data lake. In this session, you will learn how to use a data catalog such as AWS Glue to crawl the Amazon S3 data lake and create external tables, how to register those external tables in Amazon Redshift, and how to use Redshift to query data in your Amazon S3 data lake using those external tables - all without any data movement, duplication, or transformation.
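The workflow this abstract describes - registering a Glue-cataloged external schema in Redshift, then querying S3 data alongside local tables - boils down to two SQL statements, shown here as Python strings. The schema, table, and IAM role names below are hypothetical placeholders, not values from the session.

```python
# Hedged sketch: database, table, and role ARN are made-up placeholders.
# The DDL follows the Redshift Spectrum pattern of mapping a Glue
# catalog database into Redshift as an external schema.
create_schema = """
CREATE EXTERNAL SCHEMA spectrum
FROM DATA CATALOG
DATABASE 'sales_lake'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';
"""

# Once the Glue tables are visible through the external schema, a single
# query can join lake data with a local Redshift table, with no data
# movement or duplication:
lake_join_query = """
SELECT d.region, SUM(s.amount) AS total
FROM spectrum.sales s                     -- external table over S3 (open format)
JOIN dim_region d ON d.id = s.region_id   -- local Redshift table
GROUP BY d.region;
"""

print(create_schema.strip().splitlines()[0])
```

The key point is that `spectrum.sales` is only metadata in Redshift; the rows stay in S3 and are scanned at query time.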
|
Big data in the era of heavy privacy regulations (200)
The General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) are new privacy regulations that have major implications for data management and data analytics. In this session, you will learn about key parts of these new rules that you should consider while designing or evolving your analytics platform. You will then learn about approaches that will help with compliance, including metadata classification, tagging, fine-grained access controls, comprehensive encryption, anonymization, and erasure of customer data.
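One of the compliance techniques listed above, anonymization, is often implemented as keyed pseudonymization: replace an identifier with a keyed hash so analytics joins still work while the raw value is unrecoverable without the key. A minimal sketch, assuming a secret held outside the data platform (e.g. in a KMS); this is an illustration of the idea, not legal or compliance guidance.

```python
import hashlib
import hmac

def pseudonymize(value, secret_key):
    """Keyed hashing (HMAC-SHA256) as a simple pseudonymization step:
    the same input always maps to the same token, so it can still serve
    as a join key, but the original value cannot be recovered without
    the secret. Rotating or deleting the key supports erasure workflows."""
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"hypothetical-secret-kept-in-a-kms"   # placeholder secret
t1 = pseudonymize("alice@example.com", key)
t2 = pseudonymize("alice@example.com", key)
print(t1 == t2)  # deterministic, so records still link across tables
```

Using HMAC rather than a plain hash matters: without the key, an attacker could brute-force common identifiers (like email addresses) against a plain SHA-256 digest.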
|
|
Track 4 - Analyzing and getting insights
|
Amazon Redshift use cases and deployment patterns (400)
More than 10,000 customers use Amazon Redshift and collectively process over 2 exabytes of data per day. In this session, we will show some common deployment patterns for various use cases across different industries. We will also highlight best practices and offer tips to avoid common pitfalls, based on lessons learned from our customer engagements. You will walk away knowing how to get insights from all of your data in your Redshift data warehouse and Amazon S3 data lake at the best performance and lowest cost.
|
Processing Big Data with Hadoop, Spark, and other frameworks in Amazon EMR (300)
Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in EMR. EMR Notebooks, based on the popular Jupyter Notebook, provide a development and collaboration environment for ad hoc querying and exploratory analysis. In this session, you will learn how EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.
|
Scalable, secure log analytics with Amazon Elasticsearch Service (200)
You’ve got servers, you’ve got applications, you’ve got microservices; that means you’ve got logs. They’re not the most exciting data that your systems generate, but they’re often the most useful for real-time application monitoring, root-cause analysis, security analytics, and more. Customers like Autodesk, Nike, Expedia, and many more are using Amazon Elasticsearch Service to ingest, analyze, and search their log data at multi-petabyte scale. In this session, you will learn about Amazon Elasticsearch Service, how to get data into Amazon Elasticsearch Service, and how to use Kibana to visualize the insights from your log data.
|
High-performance data streaming and real-time analytics with AWS (300)
For many use cases, timing is critical and the value of data diminishes rapidly, which means every microsecond counts. Amazon Kinesis services and Amazon Managed Streaming for Apache Kafka provide customers with fully managed streaming options, enabling data to be collected, stored, and processed as soon as it is created. In this session, you will learn how to solve data streaming use cases using AWS and how to decide which of the services are best suited to your needs.
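A Kinesis producer call reduces to three fields: the stream name, a binary data blob, and a partition key that determines shard routing. The sketch below only builds that payload locally (no AWS call is made); the stream name and event fields are hypothetical, while the parameter names match the shape of boto3's `put_record`.

```python
import json

def build_kinesis_record(stream_name, event, partition_key):
    """Assemble the parameters for a single Kinesis put_record call.
    Parameter names follow boto3's kinesis client; nothing is sent here."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": partition_key,  # records sharing a key go to the same shard
    }

record = build_kinesis_record(
    "clickstream",  # hypothetical stream name
    {"user": "u-17", "action": "click", "ts": 1700000000},
    partition_key="u-17",
)
print(record["PartitionKey"])
# With AWS credentials configured, you would then send it with:
#   boto3.client("kinesis").put_record(**record)
```

Choosing the partition key is the main design decision: keying by user preserves per-user ordering within a shard, while a high-cardinality key spreads throughput evenly across shards.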
|
|
Duration: XX mins
Track 5
|
ENTER SESSION 5A ABSTRACT HERE
|
ENTER SESSION 5B ABSTRACT HERE
|
ENTER SESSION 5C ABSTRACT HERE
|
|
Duration: XX mins
Track 6
|
ENTER SESSION 6A ABSTRACT HERE
|
ENTER SESSION 6B ABSTRACT HERE
|
ENTER SESSION 6C ABSTRACT HERE
|
|