
AWS Innovate

Migrate. Modernize. Build.

Accelerate your cloud journey with AWS

Join millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—who are leveraging AWS to lower costs, enhance agility, and accelerate innovation.

At AWS Innovate, we share proven strategies and practical steps for effectively migrating workloads, modernizing applications, and building cloud-native and AI-enabled solutions. Don’t miss this opportunity to learn with the experts and unlock the full potential of AWS. Register now!

Agenda

Embark on a hands-on learning journey, guided by step-by-step architectural and deployment best practices. Tailored for all skill levels: whether you are starting your cloud journey, are an advanced user, or are simply curious, we have sessions suited to your experience and job role. Check out the latest agenda below and join us for a day of immersive learning!

Download agenda at a glance »

Opening keynote


Cloud fundamentally changes the way we build and operate applications. Digital transformation has dramatically impacted the way many organizations deliver value and the rate at which they make changes to their products and services. With many new applications expected to be built over the next few years, organizations need to find a balance between managing technologies and building new features. In this session, we analyze how successful organizations are building and running a wider variety of applications with the right migration and modernization pathways, based on customer engagements. We also discuss advancements in AI/ML and share practical guidance on how to successfully integrate machine learning into your migration and modernization journey to innovate faster, improve performance, and build new customer experiences while lowering total cost of ownership.

Migrate and modernize: Accelerate outcomes


AWS and its partners have successfully guided hundreds of thousands of customers through cloud migration and modernization over the past 19 years. Drawing from this extensive experience, we have identified key patterns for successful migrations as well as common anti-patterns to watch out for. In this session, we walk through the three-phase AWS migration process: Assess, Mobilize, and Migrate and Modernize. By leveraging real-life customer examples, we share best practices and anti-patterns, and provide strategies to mitigate potential challenges. By the end of this session, you will have a clear understanding of how to approach your cloud migration and modernization efforts for maximum efficiency and success.

Many organizations are migrating and modernizing their workloads to the cloud for agility, increased performance, and resilience. But modernizing applications often involves critical tasks such as breaking a monolith into microservices, adopting the right design patterns, data migration, and handling dependencies on legacy interfaces. In this session, we outline the different modernization pathways and services to get to the target architecture, based on engagements with thousands of customers. We share key tools, programs, and resources from AWS to transform your existing applications and infrastructure into higher-value, cloud-native services. We also discuss how you can combine these pathways with agile processes to deliver value more quickly, frequently, and reliably.

This session explores essential services and resources for successful cloud migration and modernization, with a focus on AWS compute options, operational best practices, and resilient architectures. Understand the various AWS compute options tailored to different workload requirements, and dive deep into how AWS Graviton-based instances support a wide range of computing workloads. We then explore resilience patterns and explain the specific strengths and trade-offs of each pattern. By understanding these patterns and their implications, builders like yourself can design resilient cloud architectures that deliver high availability and efficient recovery from potential disruptions. The session also covers the AWS Application Migration Service (MGN) post-launch actions feature for customizing migrated servers. We conclude by providing guidance and best practices for your workloads to deliver price performance, reliability, and security.

The unique global cloud infrastructure provided by AWS is instrumental in facilitating the development of robust, accessible, secure, scalable, and fault-tolerant applications. Join this session to learn how AWS continually enhances and expands its global infrastructure through the introduction of additional Regions and Availability Zones, and how it integrates custom hardware tailored to the requirements of modern applications. We discuss how the establishment of a purpose-built global network backbone enables connectivity between the different sites. Discover how the implementation of innovative energy management systems provides efficient, resilient services while minimizing environmental footprint. Learn how AWS is committed to delivering reduced latency, heightened reliability, enhanced scalability, and improved operational efficiencies, all geared towards empowering your organization to meet the rapidly evolving demands of modern applications, including those driven by advanced AI technologies.

In this session, we showcase key strategies for VMware users migrating workloads to AWS to simplify operations and accelerate innovation. Learn about the various AWS services and programs you can use to increase scalability, improve performance, reduce costs, and enhance security. We explain how the AWS Optimization and Licensing Assessment (AWS OLA) enables you to build a data-driven business case for cloud migration. Find out how to use AWS OLA to evaluate business goals, application portfolios, and performance needs. The session also covers the various AWS infrastructure options and third-party licensing requirements to identify the optimal migration strategy tailored to your specific needs.

Join this session to learn how RACQ built a generative AI assistant for insurance claims processing. We cover the architecture featuring serverless technologies such as AWS Lambda and AWS Step Functions for workflow orchestration. Find out how RACQ used Amazon ECS on Fargate, a serverless approach to hosting the front-end application, and Amazon Bedrock to build an automated, scalable workflow triggered by claim events. We also share how they used AWS to enable data redaction, prompt techniques, human feedback loops, and adherence to responsible AI principles for privacy, safety, and transparency. With the built-in security controls of AWS services, such as VPC integration and encryption for Amazon Bedrock, understand how RACQ deployed generative AI capabilities securely. By the end of the session, understand how RACQ's modern, event-driven approach allows for rapid development of generative AI applications.


Having a strong security posture at the core enables digital transformation and innovation. Yet, many still have questions about the security of their data and applications. How can one actually be safer in the cloud than on-premises? Many believe that a trade-off has to be made: either move fast, or stay secure. At AWS, security is our top priority, and we remain focused on helping organizations develop and evolve security, identity, and compliance into key business enablers. In this session, we address the myths about cloud security, AWS security services, and the Shared Responsibility Model (SRM). Find out how AWS provides the secure global cloud infrastructure to build, migrate, and manage your applications and workloads, enabling you to innovate securely and with confidence.

Migrate and modernize: Lifecycle of a migrated application


What does it mean to be a serverless builder? What disciplines do you need to successfully build cloud-native, serverless solutions today? In this session, we walk you through a day in the life of a serverless builder and share insights on the core disciplines, based on our engagements with builders and developers. Understand the common design challenges and the considerations that go into managing them. We then explore the architecture patterns, frameworks, and tools for developing, testing, and deploying your serverless applications.

Teams building microservices architectures often find that integration with other applications and external services can make workloads more monolithic and tightly coupled. One of the critical aspects of decomposing monolithic applications is designing how the various services interact with each other seamlessly. To do that, there are multiple considerations that need to be accounted for, and these need to align with business, performance, and resiliency requirements. In this session, we discuss different integration patterns and how they can be combined to achieve a scalable and resilient architecture. We take you through use cases via demos of how event-driven architectures and asynchronous messaging work. Understand how to bring transactions and workflows into your architecture with orchestration, as well as how both approaches work together.
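The loose coupling described above can be sketched with a toy in-process event bus, a stand-in for a managed broker such as Amazon EventBridge. The event names and handlers here are illustrative assumptions, not taken from the session:

```python
# Minimal event-driven integration sketch: publishers emit business events
# without knowing who consumes them; each subscriber reacts independently.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        # Fan out to every subscriber; services stay loosely coupled.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
audit_log = []

# Two independent consumers of the same business event.
bus.subscribe("OrderPlaced", lambda e: audit_log.append(f"invoice:{e['order_id']}"))
bus.subscribe("OrderPlaced", lambda e: audit_log.append(f"ship:{e['order_id']}"))

bus.publish("OrderPlaced", {"order_id": "42"})
print(audit_log)  # ['invoice:42', 'ship:42']
```

Adding a new consumer is a new `subscribe` call; the publisher never changes, which is the decoupling property the session highlights.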

Many builders are looking for efficient ways to run their containers for security, reliability, and scalability. Containers have changed software development, testing, and deployment by enabling faster experimentation and value delivery. And running in the cloud offers flexibility, scalability, reliability, performance, and cost-effectiveness. However, the question remains: how can one seamlessly integrate these two worlds? How do builders reap the benefits of containers and the cloud? In this session, learn how to formulate a container migration strategy, containerize your workloads, and migrate them from your source to AWS. Discover how to successfully complete these migrations, evaluate the methodology for choosing specific migration methods, and lay the right foundation for subsequent migrations. The session also covers how to accelerate your generative AI project in a cost-optimized and scalable manner. We walk you through how to run and optimize your machine learning workloads with containers on AWS.

Join this session as we discuss why many organizations transition from traditional licensing products to cloud-native approaches to build scalable, flexible, and resilient applications. We discuss how cloud-native technologies facilitate rapid updates to meet customer demands while maintaining service delivery. Learn the benefits of open-source databases such as PostgreSQL for extensibility and SQL compliance. Understand why many customers are choosing PostgreSQL as their relational database management system (RDBMS) for cloud-native applications. We also demonstrate how to build AI-enabled .NET applications using PostgreSQL as a vector database with Amazon Bedrock's Large Language Models (LLMs). Find out how these technologies can be integrated to create powerful and intelligent applications. We then showcase Amazon Q, a generative AI-powered assistant for software development, and how it can streamline the maintenance and upgrading of legacy applications, expedite critical upgrade tasks, and transform applications by leveraging the latest language features and versions.

Many organizations running legacy applications on-premises are constantly searching for efficient ways to update their applications to achieve cost savings, agility, and speed to market. In this session, we outline the benefits of moving to a modern application architecture. Learn how Amazon Q removes the tedious process of spending countless hours manually upgrading dependencies and refactoring deprecated code in Java applications. We demonstrate how to use Amazon Q to automate the end-to-end process of upgrading and transforming code, reducing the time it takes to upgrade applications from weeks to days or even minutes. The session also dives deep into how to transform your Java applications into microservices for deployment on cloud-native services that leverage containers, Kubernetes, CI/CD pipelines, serverless, service mesh, and cloud developer tools. By the end of the session, understand how these services enable you and your teams to move faster and deploy more often as you accelerate the rate of updates, features, and fixes.

A DevOps engineer typically works through the different phases of the software delivery lifecycle, including writing and testing code, deployment, and observability, to successfully operate an application. But many face challenges when implementing DevOps practices: from security and compliance, to continuous integration, to effectively monitoring and scaling applications. In this session, learn about the DevOps tools from AWS to manage these tasks, deploy at scale, and maintain your ability to deliver applications and services at high velocity. The session also provides practical guidance on how you can move from ClickOps to infrastructure as code (IaC), build a release pipeline, and automate monitoring and logging for your infrastructure and applications.

Building highly resilient applications in the cloud requires careful design and well-thought-out capacity planning to ensure redundancy, as well as having the right mechanisms so that requests are routed away from occasional, temporary failures. Join this session as we share the architectural patterns for injecting failures and running chaos engineering experiments to test for high availability. We explain how AWS provides isolation boundaries, including Availability Zones and AWS Regions, which can be used to meet high availability and continuity of operations requirements. We then demonstrate how to use chaos engineering to set up failure injection testing to validate the resiliency of your service. By referencing the AWS Well-Architected Framework, understand the design principles and AWS resources that help ensure a resilient architecture. The session also features how AWS Fault Injection Service (AWS FIS) enables you to test for hidden issues, prevent regression, and maintain application availability.
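Failure injection testing can be sketched in miniature: the wrapper below deterministically fails the first calls to a dependency, letting you verify that retry logic routes around the fault. AWS FIS performs this at the infrastructure level; this pure-Python sketch only illustrates the principle, and every name in it is hypothetical:

```python
# Toy chaos experiment: inject faults into a dependency, then assert that
# the caller's retry path still produces a correct result.
import functools

def inject_faults(n_failures: int):
    """Make the wrapped call raise for its first n_failures invocations."""
    def decorator(fn):
        state = {"calls": 0}
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            state["calls"] += 1
            if state["calls"] <= n_failures:
                raise ConnectionError("injected fault")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_faults(n_failures=2)
def fetch_profile(user_id: str) -> dict:
    # Stand-in for a downstream service call.
    return {"user_id": user_id, "status": "ok"}

def fetch_with_retries(user_id: str, max_attempts: int = 3) -> dict:
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_profile(user_id)
        except ConnectionError:
            if attempt == max_attempts:
                raise
    raise RuntimeError("unreachable")

result = fetch_with_retries("alice")
print(result)  # {'user_id': 'alice', 'status': 'ok'}
```

The experiment "passes" because the third attempt succeeds; lowering `max_attempts` below the injected failure count would surface the hidden availability gap, which is exactly what a chaos experiment is designed to reveal.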

Build data foundations


In this session, we share the comprehensive suite of services from AWS that enables you to efficiently store, process, analyze, and act on your data. We dive deep into key areas including data transfer, storage for data lakes, data warehousing, integration, and governance. We also share the analytics and visualization capabilities, and how to deliver innovative outcomes with AI and machine learning services on AWS. Learn best practices for designing scalable, secure, and cost-effective data architectures, and strategies for future-proofing your data foundation to support emerging technologies, including generative AI. By the end of the session, understand how to build a strong data foundation that provides you with clean, organized, and easily accessible data for better decision-making and business insights.

The idea of a one-size-fits-all monolithic database no longer fits today, as more organizations build highly distributed applications using many purpose-built databases. We observe an increasing number of customers wanting to build internet-scale applications that require diverse data models. In response to customers’ needs, AWS offers the choice of key-value, wide column, document, in-memory, graph, time-series, and ledger databases, with each database catering to specific use cases and requirements. In this session, find out which AWS purpose-built databases meet the scale, performance, and manageability requirements of modern applications.

Join this session as we dive deep into the pillars of a data lifecycle management strategy, so you always have access to relevant, accurate, and searchable information for making data-driven decisions. We cover the solutions from AWS that support every step of the data journey, so you can move your data efficiently and securely to the cloud. This session features Autodesk, who share how they built a cross-Region disaster recovery site for their production data on Amazon S3. The Autodesk team walks through how they successfully migrated and managed 1 PB of existing data totaling 6 billion objects. Find out their cost-optimized and robust approach to migrating existing data between Regions. We also explain how to run post-migration validation to ensure data integrity in the target Region while maintaining business continuity.

Extract, transform, and load (ETL) is the process of combining, cleaning, and normalizing data from different sources to get it ready for analytics and AI/ML workloads. But traditional ETL processes can be time-consuming and complex to develop, maintain, and scale. In this session, we share how zero-ETL eliminates the need for complex ETL data pipelines by enabling direct data movement and federated querying across databases, data lakes, and external sources. Learn how the integration of Amazon Aurora with Amazon Redshift allows near real-time analytics and ML on transactional data stored in Amazon Aurora MySQL-Compatible Edition, without building data pipelines. We then demonstrate the Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service to perform tasks such as full-text search, fuzzy search, auto-complete, and vector search for machine learning (ML) capabilities, offering new experiences that boost user engagement and satisfaction with your applications. By the end of the session, understand how zero-ETL architecture on AWS empowers you to focus on extracting value from data rather than pipeline development.

Many organizations are looking for ways to collect, ingest, and visualize log data from various sources quickly. Join the session to discover how Amazon OpenSearch Serverless enables you to manage your search and log analytics needs securely, at scale, and cost-efficiently. We share the steps to easily build a log analytics pipeline using Amazon OpenSearch Serverless. Learn how to create a collection, generate sample data with a Python data generator, build OpenSearch Dashboards, configure security policies, and analyze visualizations. We explain how to run large-scale search and analytics workloads without having to configure, manage, or scale OpenSearch clusters. The session also covers how underlying resources are automatically provisioned and scaled, allowing you to deliver fast data ingestion and query responses for even the most demanding and unpredictable workloads.
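A minimal sketch of such a Python data generator, assuming an illustrative log schema and index name (the session's actual materials may differ). It emits the NDJSON action/document line pairs that OpenSearch bulk ingestion expects:

```python
# Generate structured log records and pack them into OpenSearch bulk format:
# one "action" line followed by one document line per record.
import json
import random
from datetime import datetime, timezone

LEVELS = ["INFO", "WARN", "ERROR"]

def generate_log_records(n: int, seed: int = 7):
    rng = random.Random(seed)  # seeded so sample data is reproducible
    for i in range(n):
        yield {
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "level": rng.choice(LEVELS),
            "service": "checkout",          # illustrative service name
            "message": f"request {i} processed",
        }

def to_bulk_body(records, index: str = "app-logs") -> str:
    lines = []
    for doc in records:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                            # document line
    return "\n".join(lines) + "\n"

body = to_bulk_body(generate_log_records(3))
print(body.count("\n"))  # 6 lines: one action + one document per record
```

The resulting `body` string is what you would POST to a collection's `_bulk` endpoint with a signed HTTP client.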

Join this session as we explain why data streaming is a crucial enabler for building responsive, contextually-aware generative AI applications. We discuss why foundation models such as large language models offer immense potential, but lack the ability to dynamically incorporate real-time data at inference time, leading to hallucinations, lack of relevance, and poor personalization. This session dives into the various techniques, including in-context learning and retrieval augmented generation (RAG), that help bridge this gap by allowing models to adapt to the latest data in the context of a given prompt or query. We explore the key architectural patterns you can use when building streaming data pipelines to ingest change data capture (CDC) events, perform identity resolution for unified customer profiles, and transform unstructured content into vectorized representations, all in near real-time. Discover how to use key AWS services including Amazon MSK, Amazon Kinesis, and Amazon Managed Service for Apache Flink, as well as purpose-built vector databases for building real-time analytics with streaming data.

As organizations manage more data across more locations, ensuring the right access to data while managing data governance, compliance, security, scalability, and management overhead is challenging. Access that is too restrictive can slow down business decision-making, while access that is too lenient introduces risk. In addition, developing such solutions in-house can be complex and resource-intensive. In this session, we explain how to use Amazon DataZone, a data management service that makes it faster and easier for customers to catalog, discover, share, and govern data stored across AWS, on premises, and in third-party sources. Understand how to enable discovery and sharing of data while governing access with built-in workflows and tools integration. We also demonstrate how Amazon DataZone empowers users throughout the organization to securely access data and unlock valuable data-driven insights with ease.

Generative AI fundamentals


Everything begins with an idea. But how do you accelerate from that initial spark to a final product? Successful generative AI implementation requires aligning people, processes, and technologies. We explore how to build an end-to-end strategy to develop that alignment, build the right skills, establish scalable workflows, and deploy the appropriate technologies. Find out how to build business use cases with generative AI that advance your organizational objectives, including reinventing applications, creating innovative customer experiences, and improving productivity. By the end of the session, learn how to ideate, prototype, and deliver products and services quickly.

In this session, find out how to build with the generative AI stack on AWS, including applications, tools, and infrastructure. We explain the important considerations for building generative AI applications. We also dive deep into the generative AI resources and ways to leverage LLMs and other FMs on AWS. Discover the common architecture patterns and how to implement them using AWS. We also cover how to enable generative AI applications with your own data, and share best practices for designing and testing generative AI solutions. The session concludes with key resources from AWS to develop and deploy your generative AI solutions seamlessly.

With a large number of large language models (LLMs) available, choosing the right one is critical because of the high cost associated with deploying generative AI models. In this session, we share the key considerations when evaluating LLMs. Find out how to evaluate LLMs for tasks where the output is fact-based and for tasks where the output is creative by nature. With thousands of text generation models to choose from and endless prompt engineering possibilities to use them with, learn how you can quickly and reliably identify the best price-performance solution for your use case. We then explain how you can build a complete picture of model and prompt-template performance on AWS. The session also covers the use of automated tools that work alongside human labellers to create scalable but accurate evaluations, enabling you to build high-quality solutions faster and deploy with confidence.
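A toy evaluation harness for the fact-based case might look like this: normalized exact match over candidate model outputs. The model names and answers are invented for illustration; real evaluations add semantic metrics and human review on top:

```python
# Score candidate model outputs against references with normalized exact
# match, the simplest metric for fact-based tasks.
import string

def normalize(text: str) -> str:
    # Lowercase, trim, and strip punctuation so "Paris." matches "paris".
    return text.lower().strip().translate(str.maketrans("", "", string.punctuation))

def exact_match_score(predictions, references) -> float:
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

references = ["Paris", "4", "Mount Everest"]
candidates = {
    "model-a": ["Paris.", "4", "K2"],            # misses the last question
    "model-b": ["paris", "four", "Mount Everest"],  # "four" != "4" under exact match
}

scores = {name: exact_match_score(preds, references)
          for name, preds in candidates.items()}
print(scores)
```

Note how exact match penalizes "four" vs "4": the two models tie here, which is precisely why the session pairs automated metrics with human labellers for reliable comparisons.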

Harnessing generative AI requires overcoming significant technical and strategic challenges to deploy production-ready solutions. In this session, learn about the tools, customization methods, and models so you can scale, move fast, and manage risks while building and deploying your generative AI applications. We first dive deep into how to use Amazon Bedrock to access the key foundation models. We then explain how large language models (LLMs) are deployed on Amazon SageMaker. Discover how you can ensure the flexibility of pluggable models, prompt versioning, and customizability of RAG engines, and seamlessly integrate with data services on AWS. Understand the different techniques to generate safe and reliable responses, as well as best practices for monitoring and evaluating model outputs. We explore the key generative AI deployment patterns on AWS and how they enable you to effectively deploy multiple instances with diverse configurations, compare outputs, and evaluate performance metrics, all while ensuring enterprise-grade security measures.

Organizations across various industries are increasingly adopting machine learning for a wide range of use cases, including natural language processing (NLP), computer vision, voice assistants, fraud detection, and recommendation engines. Large language models (LLMs) that have hundreds of billions of parameters are unlocking new generative AI use cases, for example, image and text generation. But the growth of ML applications has resulted in higher usage, management, and cost of compute, storage, and networking resources. This session explains why identifying and choosing the right compute infrastructure is important for reducing power consumption and costs, as well as for managing the complexities of taking ML models from training and deployment to production. We explain how AWS offers the ideal combination of high-performance, cost-effective, and energy-efficient purpose-built ML tools and accelerators, optimized for ML applications. Learn how to choose the right infrastructure for your AI/ML workload requirements. The session also explores the highly performant, scalable, and cost-effective ML infrastructure from AWS, ranging from the latest GPUs to purpose-built accelerators including AWS Trainium, AWS Inferentia, and Amazon EC2 P5 instances, which are designed for training and running models.

Many organizations adopt generative AI to achieve high application performance, uncover new opportunities, and build sustained competitive advantage. When building with generative AI, the choices made upfront can significantly impact the overall costs. In this session, we address common questions and challenges around cost management when building AI/ML workloads. Understand key strategies to prevent unintentional cloud spend, choose cost-effective infrastructure for training ML models, and evaluate alternative hosting options. We explain how you can leverage the latest features in Amazon Bedrock to reduce expenses while maximizing the value of your workloads.

Builders’ tools for modern applications


Join this demo-packed session to acquire practical skills for building scalable and cost-effective APIs. We showcase essential topics including what you need to know when building APIs, compute options for hosting APIs, how to deploy them, and more. Learn which services and development tools from AWS and OpenAPI to use, and when. The session features best practices on how to minimize friction and build, manage, and run your APIs effectively. We also explain how to implement serverless architectures to improve performance and user experience while reducing operational costs.
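As a minimal sketch of a serverless API, here is a Lambda-style handler for an API Gateway proxy event. The route and the "orders" data are illustrative assumptions, though the event and response shapes follow the standard proxy integration contract:

```python
# AWS Lambda-style handler for an API Gateway proxy integration:
# read a path parameter, look up a record, return a JSON response.
import json

ORDERS = {"42": {"order_id": "42", "status": "shipped"}}  # stand-in data store

def handler(event, context=None):
    order_id = (event.get("pathParameters") or {}).get("order_id")
    order = ORDERS.get(order_id)
    if order is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(order),
    }

# Local invocation with a sample proxy event, as you might do in a unit test:
resp = handler({"pathParameters": {"order_id": "42"}})
print(resp["statusCode"])  # 200
```

Because the handler is a plain function of the event, it can be tested locally before being wired to a `GET /orders/{order_id}` route in API Gateway.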

This session explores essential AWS tools and services for builders and developers to build your applications. We highlight generative AI-powered capabilities for low-code abstractions, cloud development, and operations. Through technical demos, learn how to integrate these tools to enhance your productivity. We showcase Amazon Q Developer, a generative AI-powered conversational assistant that accelerates the software development process. Find out how Amazon Q Developer can streamline various aspects of coding, from ideation to implementation. We then showcase how to integrate generative AI tools with other cloud-native services for application modernization and maintenance.

Organizations need access to a broad range of foundation models to build and scale generative AI applications, and harnessing them requires more than just a model. Join this session as we dive deep into agents. We explain the benefits of using agents and the key components of the agent architecture, demonstrate how to fully utilize the capabilities of agents to automate tasks and workflows, and discuss the critical importance of democratizing access to AI.

Join this session as we delve into how organizations can leverage domain-specific knowledge and internal data to boost employee productivity and enhance customer experiences with generative AI assistants. We outline the tools to build your generative AI assistants in minutes, and how to use them to seamlessly integrate your proprietary knowledge bases, product catalogs, employee manuals, and customer data. Learn to build a generative AI assistant with Amazon Q and data hosted in data lakes, data warehouses, cloud storage including Amazon S3, and other data sources such as relational database systems, Salesforce, Confluence, SharePoint, Quip, and Jira, with enhanced accuracy, security, and privacy.

To equip foundation models (FMs) with up-to-date proprietary information, organizations use Retrieval Augmented Generation (RAG) to fetch data from company data sources and enrich the prompt with that data for more relevant and accurate responses. However, implementing RAG requires a specific skill set and time to configure connections to data sources, manage data ingestion workflows, and write custom code to manage the interactions between the foundation model (FM) and the data sources. In this session, we share how to simplify the process with Knowledge Bases for Amazon Bedrock. Learn how to give FMs and agents contextual information from your company’s private data sources for RAG to deliver more relevant, accurate, and customized responses. We also demonstrate how to automate the end-to-end RAG workflow, including ingestion, retrieval, prompt augmentation, and citations, eliminating the need to write custom code to integrate data sources and manage queries. We then explore advanced RAG techniques involving multiple data sources, including Amazon OpenSearch Service, Amazon Aurora Serverless, and container-based systems.
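Stripped of the managed services, the retrieve-then-augment step of RAG can be sketched in a few lines. The keyword-overlap scoring below is a deliberately simplistic stand-in for vector retrieval, and the documents are invented:

```python
# Bare-bones RAG: pick the most relevant snippets from an in-memory
# "knowledge base", then splice them into the prompt sent to the model.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Score each document by how many query words it shares (toy metric;
    # a real system would compare embedding vectors instead).
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment_prompt(query: str, documents: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm on weekdays.",
    "Our headquarters are in Seattle.",
]
prompt = augment_prompt("How long do refunds take?", docs)
print("Refunds" in prompt)  # True
```

Knowledge Bases for Amazon Bedrock automates exactly these steps (plus ingestion and citations) against real vector stores, which is why the session frames it as removing the custom glue code.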

Software development and IT operations face significant challenges due to the complexities of modern systems and increased demand for faster release cycles. This session is packed with demos on how you can mitigate these challenges with generative AI to accelerate end-to-end DevOps processes. We explain how builders and developers can save time and speed up application development with key tools on AWS. We share how to use Amazon CodeCatalyst and Amazon Q to take your idea to runnable, mergeable code. Understand how to debug workflows easily and set up development projects using blueprints within minutes. We outline real-world use cases demonstrating how large language models (LLMs) and code generation can reduce manual effort across the DevOps lifecycle. By the end of the session, learn how to use generative AI to streamline tasks such as deployment pipeline development, testing, infrastructure provisioning, and incident remediation. The session also features best practices for implementing responsible AI in your DevOps practice.

Closing remarks


Modern application development is a powerful approach to designing, building, and managing software in the cloud. It increases the agility of your development teams and the reliability and security of your applications, allowing you to build better products faster. This session provides a recap of the day’s sessions and addresses some of the commonly asked questions on modern applications. Learn why modern application development practices are pivotal to an organization’s growth and how organizations can realize ongoing benefits of the cloud through modernization of their applications, data, and infrastructure. We also share best practices on how organizations can use serverless, microservices, containers, CI/CD, DevOps, business applications, cost optimization, and generative AI to unlock innovation, increase agility, and enable faster time to market.

Builders Zone


In this session, we demonstrate how to develop a 'brick maestro' solution using AI/ML, IoT, and high-performance computing (HPC) solutions on AWS. Discover how this solution utilizes computer vision models with Amazon SageMaker to identify bricks. We then explain how to leverage ML models to rank the best builds that resemble real objects, and influence these rankings by indicating your preferred objects. Learn how to efficiently run this workload in the cloud with HPC solutions on AWS, which provide virtually unlimited compute capacity, a high-performance file system, and high-throughput networking.

Join this session to learn about retrieval augmented generation (RAG) and various popular architectures. We discuss the vector databases available on AWS and how they are important for implementing RAG architectures efficiently. The session also features a demo comparing the different RAG architectures and how each generates a final answer to the same question, giving you insights into which approach best delivers your desired outcomes.

Organizations today expect modernized workflows and are turning to generative AI to process large data volumes for their desired outcomes. In this session, learn how to build a generative AI application for context-sensitive information retrieval to augment decision-making and enable accurate, efficient, and ethical data-driven operations. We demonstrate how to quickly build an application to extract knowledge from documents and provide a question-answering and discovery interface to your users. We walk through the steps to streamline document ingestion and extraction, and to create embeddings stored in a vector store. Once ingestion is done, we share how this chatbot interface allows you to ask questions in natural language, get contextual answers, and query the vector store semantically.
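The ingest-embed-retrieve flow described above can be sketched end to end. The snippet below is a minimal, illustrative example in pure Python: the bag-of-words `embed` function and the in-memory `vector_store` list stand in for a real embedding model and a managed vector database (which a production deployment on AWS would use instead), but the pipeline shape is the same — embed document chunks at ingestion time, embed the question at query time, retrieve the closest chunks by cosine similarity, and pass them to an LLM as context.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding": token counts. A real deployment would
    # call an embedding model here instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Ingestion: embed each document chunk and keep (vector, text) pairs.
# This list plays the role of the vector store.
documents = [
    "AWS DMS migrates relational databases to the cloud with minimal downtime.",
    "Amazon SageMaker lets you train and deploy machine learning models.",
    "AWS Migration Hub tracks the progress of application migrations.",
]
vector_store = [(embed(d), d) for d in documents]

def retrieve(question, k=2):
    # Query: embed the question and return the k most similar chunks.
    q = embed(question)
    ranked = sorted(vector_store, key=lambda item: cosine(q, item[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

# The retrieved chunks would then be sent to an LLM alongside the user's
# question to generate a grounded, contextual answer.
context = retrieve("How do I migrate a database to the cloud?")
print(context[0])
```

The semantic query in the session works the same way, only with learned embeddings instead of token counts, so paraphrased questions still land near the right chunks.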

Join this session to learn how to build an augmented reality (AR) observability dashboard that enables you to identify and resolve application and infrastructure issues through a gamified experience. Utilizing an AR headset, Amazon Transcribe, generative AI, and observability solutions, we demonstrate how to intentionally induce failures within an application. We then walk you through creating real-time analysis within the architecture and using generative AI-powered voice interaction to request root causes and solutions. This session also showcases how to pinpoint the root cause, provide enhanced details in the AR-rendered architecture, and recommend ways to resolve the problem.

Organizations are focused on ways to deliver highly personalized user experiences at scale to achieve higher customer engagement, conversion, and revenue while creating meaningful differentiation. In this session, we showcase how you can use Amazon Personalize with generative AI to boost your user engagement and provide highly optimized customer interactions. Discover how to use a foundation model from Amazon Bedrock with algorithms from Amazon Personalize to automatically generate thematic connections between recommended content for any interface. We also demonstrate how to build a custom solution with personalized content descriptions that can be integrated into your existing websites, applications, and email marketing systems with simple APIs.

Have you ever been stuck at a traffic light even though there are no vehicles coming from the other direction? Do you wish to avoid traffic congestion and get to your destination quickly? In this session, we demonstrate how to build a smart traffic management solution, powered by machine learning at the edge with Amazon SageMaker and AWS IoT. Discover how the solution enables you to automatically observe traffic patterns and vehicle loads on the road, and control the lights so that vehicles move through quickly, reducing congestion. Uncover how it can automatically identify emergency vehicles, including ambulances and police cars, and control the traffic lights to enable these vehicles to reach their destinations in the shortest possible time. We also showcase how this solution can manage traffic flow by automatically tracking accidents, vehicle failures, or other incidents that result in road blockage. The session concludes with guidance on how to develop an analytics dashboard for real-time traffic insights.

Japanese

Korean

Session levels designed for you

Intermediate sessions

INTERMEDIATE
Sessions focus on best practices, details of service features, and demos, assuming that attendees have introductory knowledge of the topics.

Advanced sessions

ADVANCED
Sessions dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.

Featured speakers

Frequently Asked Questions

AWS Innovate is an online conference. After completing the registration form, you will receive a confirmation email.

AWS Innovate is a free online conference.

Whether you are new to the cloud or an experienced user, you can learn something new at AWS Innovate. AWS Innovate is designed to help you develop the right skills to innovate faster, enable new efficiencies, and make quicker, more accurate decisions.

The online conference is available in English, Japanese, and Korean.

If you have questions that have not been answered in the FAQs above, please email us.

Begin your migration to the cloud with AWS Free Tier

Sign up for a free account to explore free offers on AWS Database Migration Service, AWS Migration Hub, and over 100 other services.

Get Started »
Contact sales »