AWS Database Blog

Category: Advanced (300)

How to optimize Amazon RDS and Amazon Aurora database costs/performance with AWS Compute Optimizer

In this post, we dive deeper into database optimization for your Amazon Relational Database Service (Amazon RDS) databases, exploring how you can use AWS Compute Optimizer recommendations to make cost-aware resource configuration decisions for your MySQL and PostgreSQL databases.
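
As a minimal sketch of how these recommendations can be pulled programmatically, assuming the AWS SDK for Java v2 and an account already opted in to AWS Compute Optimizer, the following code lists recommendation summaries by resource type as a starting point for the RDS-specific findings:

```java
import software.amazon.awssdk.services.computeoptimizer.ComputeOptimizerClient;
import software.amazon.awssdk.services.computeoptimizer.model.GetRecommendationSummariesResponse;
import software.amazon.awssdk.services.computeoptimizer.model.RecommendationSummary;

public class RdsRecommendationCheck {
    public static void main(String[] args) {
        // The account must already be opted in to AWS Compute Optimizer.
        try (ComputeOptimizerClient optimizer = ComputeOptimizerClient.create()) {
            GetRecommendationSummariesResponse response =
                    optimizer.getRecommendationSummaries(r -> r.maxResults(25));

            // Print the per-resource-type summaries; the RDS entries indicate whether
            // any database instances are flagged as over- or under-provisioned.
            for (RecommendationSummary summary : response.recommendationSummaries()) {
                System.out.printf("%s -> %s%n",
                        summary.recommendationResourceTypeAsString(),
                        summary.summaries());
            }
        }
    }
}
```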

Vibe code with AWS databases using Vercel v0

In this post, we explore how you can use v0, Vercel’s generative UI tool, to build applications with a modern UI on top of AWS purpose-built databases such as Amazon Aurora, Amazon DynamoDB, Amazon Neptune, and Amazon ElastiCache.

Demystifying the AWS advanced JDBC wrapper plugins

In 2023, AWS introduced the AWS advanced JDBC wrapper, which extends existing JDBC drivers with additional functionality. The wrapper adds support for AWS and Amazon Aurora features on top of a PostgreSQL, MySQL, or MariaDB JDBC driver of your choice, and it supports a variety of plugins, including the Aurora connection tracker plugin, the limitless connection plugin, and the read-write splitting plugin. In this post, we discuss the benefits, use cases, and implementation details for two popular AWS advanced JDBC wrapper plugins: the Aurora Initial Connection Strategy and Failover v2 plugins.
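
As a rough sketch of how the wrapper is configured, assuming the aws-advanced-jdbc-wrapper dependency and the standard PostgreSQL JDBC driver on the classpath (the plugin codes below are our reading of the wrapper's configuration and worth verifying against its documentation), plugins are enabled through the wrapperPlugins connection property:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class AuroraWrapperExample {
    public static void main(String[] args) throws Exception {
        // The aws-wrapper: prefix routes the connection through the AWS advanced
        // JDBC wrapper, which delegates to the underlying PostgreSQL driver.
        String url = "jdbc:aws-wrapper:postgresql://my-cluster.cluster-XYZ.us-east-1.rds.amazonaws.com:5432/postgres";

        Properties props = new Properties();
        props.setProperty("user", "app_user");
        props.setProperty("password", "app_password");
        // Enable the initial connection strategy and failover v2 plugins
        // (plugin codes assumed; check the wrapper documentation).
        props.setProperty("wrapperPlugins", "initialConnection,failover2");

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT aurora_db_instance_identifier()")) {
            if (rs.next()) {
                System.out.println("Connected to instance: " + rs.getString(1));
            }
        }
    }
}
```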

How Wiz achieved near-zero downtime for Amazon Aurora PostgreSQL major version upgrades at scale using Aurora Blue/Green Deployments

Wiz, a leading cloud security company, identifies and removes risks across major cloud platforms. Our agentless scanner processes tens of billions of cloud resource metadata entries daily. This demands high-performance, low-latency processing, making our Amazon Aurora PostgreSQL-Compatible Edition database, which serves hundreds of microservices at scale, a critical component of our architecture. In this post, we share how we upgraded our Aurora PostgreSQL database from version 14 to 16 with near-zero downtime using Amazon Aurora Blue/Green Deployments.
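
For illustration only, and not Wiz's actual tooling, a blue/green upgrade like this can be driven through the RDS API. The sketch below assumes the AWS SDK for Java v2 and placeholder identifiers: it creates a green environment on the target engine version, and the switchover call is left commented out until the green environment is validated:

```java
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.CreateBlueGreenDeploymentResponse;

public class AuroraBlueGreenUpgrade {
    public static void main(String[] args) {
        // Placeholder ARN of the source Aurora PostgreSQL cluster.
        String sourceClusterArn = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-pg-cluster";

        try (RdsClient rds = RdsClient.create()) {
            // Create the green environment on the target major version.
            CreateBlueGreenDeploymentResponse created = rds.createBlueGreenDeployment(b -> b
                    .blueGreenDeploymentName("aurora-pg-14-to-16")
                    .source(sourceClusterArn)
                    .targetEngineVersion("16.4"));

            String deploymentId = created.blueGreenDeployment().blueGreenDeploymentIdentifier();
            System.out.println("Created blue/green deployment: " + deploymentId);

            // After validating the green environment, switch over with a bounded timeout
            // (in seconds) so the cutover aborts rather than exceeding the downtime budget.
            // rds.switchoverBlueGreenDeployment(b -> b
            //         .blueGreenDeploymentIdentifier(deploymentId)
            //         .switchoverTimeout(120));
        }
    }
}
```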

Simplify data integration using zero-ETL from Amazon RDS to Amazon Redshift

Organizations rely on real-time analytics to gain insights into their core business drivers, enhance operational efficiency, and maintain a competitive edge. Traditionally, this has involved the use of complex extract, transform, and load (ETL) pipelines. ETL is the process of combining, cleaning, and normalizing data from different sources to prepare it for analytics, AI, and […]
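
Zero-ETL integrations replace that pipeline with managed replication from the source database into Amazon Redshift. As a minimal sketch, assuming the AWS SDK for Java v2 and placeholder ARNs for the source RDS database and the target Redshift namespace, creating the integration is a single call:

```java
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.CreateIntegrationResponse;

public class ZeroEtlIntegration {
    public static void main(String[] args) {
        // Placeholder ARNs: the source RDS database and the target Redshift namespace.
        String sourceArn = "arn:aws:rds:us-east-1:123456789012:db:my-rds-mysql";
        String targetArn = "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/example-namespace-id";

        try (RdsClient rds = RdsClient.create()) {
            CreateIntegrationResponse integration = rds.createIntegration(b -> b
                    .integrationName("rds-to-redshift-zero-etl")
                    .sourceArn(sourceArn)
                    .targetArn(targetArn));

            // Once the integration becomes active, changes in the source database are
            // continuously replicated into Redshift without a separate ETL pipeline.
            System.out.println("Integration status: " + integration.statusAsString());
        }
    }
}
```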

Automate conversion of Oracle SQL to PostgreSQL inside Java applications with AWS SCT

This post demonstrates how to use AWS SCT to simplify and accelerate the migration of embedded Oracle SQL code within Java applications to PostgreSQL-compatible syntax. The solution focuses on a practical use case involving a source Oracle database coupled with a sample Java application containing numerous Oracle-specific SQL statements. By using AWS SCT, developers can automate much of the schema and SQL conversion process, reducing manual effort and minimizing errors during migration.
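
To give a feel for the statement-level rewrites involved, here is an illustrative example (hand-written, not SCT output) of Oracle-specific SQL embedded in Java alongside a PostgreSQL-compatible equivalent:

```java
// Illustrative only: the kind of embedded Oracle SQL that AWS SCT flags inside a Java
// application, next to a PostgreSQL-compatible rewrite of the same statement.
public class OrderQueries {

    // Oracle-specific: NVL, SYSDATE, and ROWNUM are not valid PostgreSQL syntax.
    static final String RECENT_ORDERS_ORACLE =
            "SELECT order_id, NVL(discount, 0) AS discount " +
            "FROM orders WHERE order_date > SYSDATE - 7 AND ROWNUM <= 10";

    // PostgreSQL equivalent: COALESCE, CURRENT_TIMESTAMP with an interval, and LIMIT.
    static final String RECENT_ORDERS_POSTGRESQL =
            "SELECT order_id, COALESCE(discount, 0) AS discount " +
            "FROM orders WHERE order_date > CURRENT_TIMESTAMP - INTERVAL '7 days' LIMIT 10";
}
```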

Improve PostgreSQL performance: Diagnose and mitigate lock manager contention

Are your database read operations unexpectedly slowing down as your workload scales? Many organizations running PostgreSQL-based systems encounter performance bottlenecks that aren’t immediately obvious. When many concurrent read operations access tables with numerous partitions or indexes, they can exhaust PostgreSQL’s fast-path locking mechanism, forcing the system to fall back to shared-memory locks. The switch […]
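
Two quick checks can surface this condition: backends waiting on the LockManager lightweight lock in pg_stat_activity, and locks in pg_locks that could not use the fast path. The JDBC sketch below, with a placeholder connection string, runs both:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LockManagerDiagnostics {
    // Sessions waiting on the LockManager LWLock are a symptom of fast-path lock
    // exhaustion; pg_locks.fastpath = false shows locks that fell back to shared memory.
    private static final String WAITING_ON_LOCK_MANAGER =
            "SELECT count(*) FROM pg_stat_activity " +
            "WHERE wait_event_type = 'LWLock' AND wait_event = 'LockManager'";

    private static final String NON_FASTPATH_LOCKS =
            "SELECT count(*) FROM pg_locks WHERE NOT fastpath";

    public static void main(String[] args) throws Exception {
        // Placeholder connection string.
        String url = "jdbc:postgresql://my-db.example.com:5432/postgres?user=app_user&password=app_password";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            try (ResultSet rs = stmt.executeQuery(WAITING_ON_LOCK_MANAGER)) {
                rs.next();
                System.out.println("Backends waiting on LockManager: " + rs.getLong(1));
            }
            try (ResultSet rs = stmt.executeQuery(NON_FASTPATH_LOCKS)) {
                rs.next();
                System.out.println("Locks taken outside the fast path: " + rs.getLong(1));
            }
        }
    }
}
```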

Assess and migrate your database using AWS DMS Schema Conversion CLI

In this post, we demonstrate how to use DMS Schema Conversion to assess an Amazon RDS for SQL Server database and convert it to Amazon Aurora PostgreSQL-Compatible Edition. We walk you through how to automate the setup and configuration of DMS Schema Conversion components, generate an assessment report, convert database storage and code objects, export the converted code to Amazon S3, and apply the converted code to the target database.
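
As a rough sketch of what that automation can look like, assuming the AWS SDK for Java v2 DMS client and an existing migration project (the selection-rules JSON here is illustrative, not exact), the assessment and conversion steps map to the DMS metadata model operations:

```java
import software.amazon.awssdk.services.databasemigration.DatabaseMigrationClient;

public class SchemaConversionAutomation {
    public static void main(String[] args) {
        // Placeholder identifiers: the migration project wraps the source (SQL Server)
        // and target (Aurora PostgreSQL) data providers plus an instance profile.
        String projectId = "my-migration-project";
        // Illustrative selection rules; see the DMS Schema Conversion docs for the exact format.
        String selectionRules = "{\"rules\": [{\"rule-type\": \"selection\", \"rule-id\": \"1\", "
                + "\"rule-name\": \"1\", \"rule-action\": \"include\", "
                + "\"object-locator\": {\"schema-name\": \"dbo\", \"table-name\": \"%\"}}]}";

        try (DatabaseMigrationClient dms = DatabaseMigrationClient.create()) {
            // Run the migration assessment for the selected schema objects.
            dms.startMetadataModelAssessment(b -> b
                    .migrationProjectIdentifier(projectId)
                    .selectionRules(selectionRules));

            // Convert the selected source objects to PostgreSQL-compatible code; exporting
            // the result to Amazon S3 and applying it to the target are separate operations.
            dms.startMetadataModelConversion(b -> b
                    .migrationProjectIdentifier(projectId)
                    .selectionRules(selectionRules));
        }
    }
}
```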