AWS Database Blog

Category: Advanced (300)

Build graph applications faster with Amazon Neptune public endpoints

Developing applications on Amazon Neptune Database has historically required users to set up access into the VPC where the cluster is hosted and to use either third-party drivers or direct HTTP requests. In this post, we discuss how two key features, public endpoints and the Neptune Data API, solve these common challenges in Amazon Neptune application development. Public endpoints […]
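As a minimal sketch of what Data API access looks like without a graph driver, the snippet below calls the boto3 neptunedata client against a placeholder cluster endpoint; the endpoint URL and the openCypher query are assumptions for illustration, not values from the post.

```python
import boto3

# Hypothetical public cluster endpoint; replace with your own Neptune endpoint.
NEPTUNE_ENDPOINT = "https://my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182"

# The Neptune Data API is exposed through the boto3 "neptunedata" client, so
# requests are signed with standard AWS credentials instead of requiring a
# third-party graph driver or hand-built HTTP calls.
client = boto3.client("neptunedata", endpoint_url=NEPTUNE_ENDPOINT)

# Run an openCypher query over the Data API and print the results.
response = client.execute_open_cypher_query(
    openCypherQuery="MATCH (n) RETURN count(n) AS nodeCount"
)
print(response["results"])
```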

Automating vector embedding generation in Amazon Aurora PostgreSQL with Amazon Bedrock

In this post, we explore several approaches for automating the generation of vector embeddings in Amazon Aurora PostgreSQL-Compatible Edition when data is inserted or modified in the database. Each approach offers different trade-offs in terms of complexity, latency, reliability, and scalability, allowing you to choose the best fit for your specific application needs.
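One possible shape for such an approach is a small worker (for example, a Lambda function on a schedule or trigger) that embeds new rows with Amazon Bedrock and writes the vectors back. The sketch below assumes a hypothetical documents table with content and pgvector embedding columns, the psycopg2 driver, and the Titan Text Embeddings model; it is an illustration of the pattern, not the post's implementation.

```python
import json

import boto3
import psycopg2  # assumed PostgreSQL driver; any client library works

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def generate_embedding(text: str) -> list:
    """Call Amazon Bedrock (Titan Text Embeddings, an assumed model choice)."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]


def embed_new_rows(conn) -> None:
    """Backfill embeddings for rows whose vector column is still NULL.

    Table and column names (documents, content, embedding) are hypothetical;
    the embedding column is assumed to be a pgvector column.
    """
    with conn.cursor() as cur:
        cur.execute("SELECT id, content FROM documents WHERE embedding IS NULL")
        for row_id, content in cur.fetchall():
            vector = generate_embedding(content)
            cur.execute(
                "UPDATE documents SET embedding = %s::vector WHERE id = %s",
                (json.dumps(vector), row_id),
            )
    conn.commit()
```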

Scale read operations with Amazon Timestream for InfluxDB read replicas

In this post, we show how to use Amazon Timestream for InfluxDB read replicas to scale your read operations by adding read replicas while maintaining a single write endpoint. Built in partnership with InfluxData, our read replica add-on offers InfluxDB open source customers the ability to horizontally scale their read capacity.
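To illustrate the single-write, multi-read split from the application side, here is a brief sketch using the influxdb-client Python library; the endpoint URLs, token, org, and bucket names are placeholders, and routing logic would depend on your own deployment.

```python
from influxdb_client import InfluxDBClient

# Hypothetical endpoints: one write endpoint plus a read replica endpoint.
WRITE_URL = "https://primary.example.us-east-1.amazonaws.com:8086"
READ_URL = "https://replica-1.example.us-east-1.amazonaws.com:8086"
TOKEN = "<api-token>"
ORG = "my-org"

# Writes always target the single write endpoint...
write_client = InfluxDBClient(url=WRITE_URL, token=TOKEN, org=ORG)

# ...while dashboards and analytics queries can fan out to read replicas.
read_client = InfluxDBClient(url=READ_URL, token=TOKEN, org=ORG)

tables = read_client.query_api().query(
    'from(bucket: "metrics") |> range(start: -1h)'
)
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())
```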

Automating Amazon RDS and Amazon Aurora recommendations via notification with AWS Lambda, Amazon EventBridge, and Amazon SES

In this post, we walk through a solution that automates the notification of Amazon RDS and Aurora recommendations through email using AWS Lambda, Amazon EventBridge, and Amazon Simple Email Service (Amazon SES).
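A condensed sketch of the Lambda piece of such a pipeline might look like the following: an EventBridge schedule invokes the function, which pulls recommendations from the RDS DescribeDBRecommendations API and emails a summary with SES. The sender and recipient addresses are hypothetical, and the recommendation fields shown are assumptions about the response shape.

```python
import boto3

rds = boto3.client("rds")
ses = boto3.client("ses")

# Hypothetical addresses; both must be verified if SES is in sandbox mode.
SENDER = "dba-alerts@example.com"
RECIPIENT = "dba-team@example.com"


def lambda_handler(event, context):
    """Invoked on a schedule by an EventBridge rule."""
    recommendations = rds.describe_db_recommendations().get("DBRecommendations", [])
    if not recommendations:
        return {"sent": False}

    # Build a plain-text summary line per recommendation.
    lines = [
        f"- [{rec.get('Severity', 'unknown')}] {rec.get('Recommendation', rec.get('Description', ''))}"
        for rec in recommendations
    ]
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [RECIPIENT]},
        Message={
            "Subject": {"Data": "New Amazon RDS and Aurora recommendations"},
            "Body": {"Text": {"Data": "\n".join(lines)}},
        },
    )
    return {"sent": True, "count": len(recommendations)}
```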

Accelerate database migration using virtual target mode in AWS DMS Schema Conversion

AWS recently announced virtual target mode in AWS Database Migration Service (AWS DMS) Schema Conversion. This feature helps you start migration planning without provisioning target databases. In this post, we show you how to get started using virtual target mode in AWS DMS Schema Conversion.

Gracefully handle failed AWS Lambda events from Amazon DynamoDB Streams

In this post, we show how to capture and retain failed stream events for later analysis or replay using Amazon S3 as a durable destination. We compare this approach with the traditional Amazon SQS dead-letter queue (DLQ) pattern, and explain when and why Amazon S3 is a preferred option.
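As a minimal sketch of pointing a DynamoDB Streams event source mapping at an S3 on-failure destination, the snippet below updates an existing mapping with boto3; the mapping UUID, bucket ARN, and retry settings are placeholder values for illustration.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical identifiers: the UUID of an existing DynamoDB Streams event
# source mapping and an S3 bucket that will receive failed batch records.
MAPPING_UUID = "11111111-2222-3333-4444-555555555555"
FAILURE_BUCKET_ARN = "arn:aws:s3:::my-ddb-stream-failures"

# Send batches that exhaust their retries to S3 instead of blocking the shard,
# and bisect failing batches so only the problematic records are captured.
lambda_client.update_event_source_mapping(
    UUID=MAPPING_UUID,
    BisectBatchOnFunctionError=True,
    MaximumRetryAttempts=2,
    DestinationConfig={"OnFailure": {"Destination": FAILURE_BUCKET_ARN}},
)
```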