AWS Database Blog
Enhanced throttling observability in Amazon DynamoDB
Today, we’re announcing improved observability for throttled requests in Amazon DynamoDB. These enhancements provide developers with enriched exception messages, detailed Amazon CloudWatch metrics, and a new, more cost-effective mode for CloudWatch Contributor Insights. Together, these improvements make it straightforward to understand, monitor, and optimize your DynamoDB applications’ performance. In this post, we explore how these enhancements work and how to put them to use.
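For illustration, here is a minimal sketch (not taken from the post) of how you might surface the enriched exception message with boto3; the table name and key are hypothetical:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

try:
    dynamodb.get_item(
        TableName="orders",                 # hypothetical table
        Key={"pk": {"S": "customer#123"}},  # hypothetical key
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ProvisionedThroughputExceededException":
        # With the enhanced observability, this message carries more detail
        # about what was throttled.
        print("Throttled:", err.response["Error"]["Message"])
    else:
        raise
```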
Securing Amazon Aurora DSQL: Access control best practices
You can access an Amazon Aurora DSQL cluster through its public endpoint or through private VPC endpoints powered by AWS PrivateLink. In this post, we demonstrate how to control access to your Aurora DSQL cluster over both endpoint types, from inside and outside AWS.
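As a rough sketch of the PrivateLink side, you might create an interface VPC endpoint like the following. Assumptions: the service name follows the usual com.amazonaws.&lt;region&gt;.dsql pattern (verify the exact name for your Region), and the VPC, subnet, and security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.dsql",  # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    PrivateDnsEnabled=True,
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```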
Announcing Extended Support for Amazon DocumentDB (with MongoDB compatibility) version 3.6
Today, Amazon DocumentDB (with MongoDB compatibility) announced that Amazon DocumentDB version 3.6 will reach its end of life on March 30, 2026. Starting March 31, 2026, you can continue to run Amazon DocumentDB version 3.6 on Extended Support, which provides fixes for critical security issues and bugs through patch releases for three years beyond the end of standard support.
Improve AWS DMS continuous replication performance by using column filters to parallelize high-volume tables
In this post, we explore how you can use column filters to divide a high-activity table across multiple AWS DMS tasks during the change data capture (CDC) phase. This approach can accelerate the migration process and reduce target latency.
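To make the idea concrete, here is a hedged sketch of a table mapping that restricts one CDC task to part of a hypothetical sales.orders table by filtering on order_id; a second task would use a complementary gte filter to cover the remaining rows. The ARNs and range boundary are placeholders:

```python
import json
import boto3

# One of several tasks splitting sales.orders by order_id range.
low_range_mapping = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "orders-low-range",
            "object-locator": {"schema-name": "sales", "table-name": "orders"},
            "rule-action": "include",
            "filters": [
                {
                    "filter-type": "source",
                    "column-name": "order_id",
                    "filter-conditions": [
                        {"filter-operator": "lte", "value": "5000000"}
                    ],
                }
            ],
        }
    ]
}

dms = boto3.client("dms")
dms.create_replication_task(
    ReplicationTaskIdentifier="orders-cdc-low",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",  # placeholder ARNs
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
    MigrationType="cdc",
    TableMappings=json.dumps(low_range_mapping),
)
```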
Scaling transaction peaks: Juspay’s approach using Amazon ElastiCache
Juspay powers global enterprises by streamlining payment process orchestration, enhancing security, reducing fraud, and providing seamless customer experiences. In this post, we walk you through how Juspay transformed their payment processing architecture to handle transaction peaks. Using Amazon ElastiCache and Amazon RDS for MySQL, Juspay built a system that processes 7.6 million transactions per hour during peak events, achieves sub-millisecond latency, and reduces infrastructure costs by 80% compared to their previous solution.
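The post is a case study, but the underlying read path is broadly a cache-aside pattern. A generic sketch (not Juspay’s actual code; the endpoint and TTL are made up):

```python
import json
import redis

# Placeholder ElastiCache endpoint.
cache = redis.Redis(host="example.cache.amazonaws.com", port=6379,
                    decode_responses=True)

def get_transaction(txn_id, fetch_from_mysql):
    """Cache-aside read: try ElastiCache first, fall back to RDS for MySQL."""
    key = f"txn:{txn_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    row = fetch_from_mysql(txn_id)           # query Amazon RDS for MySQL
    cache.set(key, json.dumps(row), ex=300)  # cache for 5 minutes
    return row
```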
How Wiz achieved near-zero downtime for Amazon Aurora PostgreSQL major version upgrades at scale using Aurora Blue/Green Deployments
Wiz, a leading cloud security company, identifies and removes risks across major cloud platforms. Our agentless scanner processes tens of billions of cloud resource metadata entries daily, which demands high-performance, low-latency processing and makes our Amazon Aurora PostgreSQL-Compatible Edition database, serving hundreds of microservices at scale, a critical component of our architecture. In this post, we share how we upgraded our Aurora PostgreSQL database from version 14 to 16 with near-zero downtime using Amazon Aurora Blue/Green Deployments.
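For orientation, a minimal sketch of the two Blue/Green API calls involved (identifiers and versions are hypothetical; the post covers the validation steps in between):

```python
import boto3

rds = boto3.client("rds")

# Create the green (staging) environment on the target major version.
bg = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="aurora-pg-14-to-16",
    Source="arn:aws:rds:us-east-1:123456789012:cluster:prod-cluster",  # placeholder
    TargetEngineVersion="16.4",  # illustrative target version
)

# After validating the green environment, switch production traffic over.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=bg["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"],
    SwitchoverTimeout=300,  # seconds
)
```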
Simplify data integration using zero-ETL from Amazon RDS to Amazon Redshift
Organizations rely on real-time analytics to gain insights into their core business drivers, enhance operational efficiency, and maintain a competitive edge. Traditionally, this has involved complex extract, transform, and load (ETL) pipelines. ETL is the process of combining, cleaning, and normalizing data from different sources to prepare it for analytics, AI, and machine learning (ML) workloads. Zero-ETL integration from Amazon RDS to Amazon Redshift removes the need to build and maintain those pipelines.
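A hedged sketch of creating such an integration with boto3 (the ARNs are placeholders; the target here is an Amazon Redshift Serverless namespace):

```python
import boto3

rds = boto3.client("rds")

rds.create_integration(
    IntegrationName="orders-zero-etl",
    SourceArn="arn:aws:rds:us-east-1:123456789012:db:prod-mysql",                  # placeholder
    TargetArn="arn:aws:redshift-serverless:us-east-1:123456789012:namespace/abc",  # placeholder
)
```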
Automate conversion of Oracle SQL to PostgreSQL inside Java applications with AWS SCT
This post demonstrates how to use the AWS Schema Conversion Tool (AWS SCT) to simplify and accelerate the migration of embedded Oracle SQL code within Java applications to PostgreSQL-compatible syntax. The solution focuses on a practical use case: a source Oracle database paired with a sample Java application containing numerous Oracle-specific SQL statements. By using AWS SCT, developers can automate much of the schema and SQL conversion process, reducing manual effort and minimizing errors during migration.
How Clari achieved 50% cost savings with Amazon Aurora I/O-Optimized
In this post, we show you how Clari optimized their database performance and reduced costs by 50% by switching to Amazon Aurora I/O-Optimized.
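The switch itself is a single configuration change; a minimal sketch (the cluster identifier is hypothetical):

```python
import boto3

rds = boto3.client("rds")

# 'aurora-iopt1' selects Aurora I/O-Optimized storage; the default is 'aurora'.
rds.modify_db_cluster(
    DBClusterIdentifier="prod-cluster",  # placeholder
    StorageType="aurora-iopt1",
    ApplyImmediately=True,
)
```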
Introducing the Amazon DynamoDB data modeling MCP tool
To help you move faster with greater confidence, we’re introducing a new DynamoDB data modeling tool, available as part of our DynamoDB Model Context Protocol (MCP) server. The DynamoDB MCP data modeling tool integrates with AI assistants that support MCP, providing a structured, natural-language-driven workflow to translate application requirements into DynamoDB data models. In this post, we show you how to generate a data model in minutes using this new data modeling tool.
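For reference, registering the server with an MCP-capable assistant typically looks like the following client configuration (a sketch; check the server’s documentation for the exact package name and arguments):

```json
{
  "mcpServers": {
    "awslabs.dynamodb-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.dynamodb-mcp-server@latest"]
    }
  }
}
```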