AWS Database Blog
Configure additional storage volumes with Amazon RDS for SQL Server
With the introduction of the additional storage volume feature, you can now attach up to three additional storage volumes to your Amazon RDS for SQL Server instances. This feature lets you distribute your data and log files across multiple volumes, giving you more granular control over storage configuration and performance optimization. In this post, you will learn about the following scenarios: adding a new storage volume, scaling an existing storage volume, restoring a database on an additional storage volume, and deleting a storage volume.
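To give a flavor of the restore scenario, here is a minimal sketch using RDS for SQL Server native restore from pyodbc. The server endpoint, credentials, S3 ARN, and database name are placeholders, and the option for targeting a specific additional volume is covered in the post rather than guessed here.

```python
# Hypothetical sketch: start an RDS for SQL Server native restore with pyodbc.
# Server, credentials, S3 ARN, and database name are placeholders only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlserver-prod.xxxxxx.us-east-1.rds.amazonaws.com,1433;"
    "UID=admin;PWD=<password>;TrustServerCertificate=yes",
    autocommit=True,
)

# Kick off the asynchronous native restore task from an S3 backup file.
conn.execute(
    "EXEC msdb.dbo.rds_restore_database "
    "@restore_db_name = ?, @s3_arn_to_restore_from = ?",
    "SalesDB",
    "arn:aws:s3:::my-backups/SalesDB.bak",
)

# Check progress of the restore task.
for row in conn.execute("EXEC msdb.dbo.rds_task_status @db_name = ?", "SalesDB"):
    print(row)
```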
Build and explore Knowledge Graphs faster with Amazon Neptune using Graph.Build and G.V() – Part 2
This is a guest post by Arthur Bigeard, Founder at gdotv, in partnership with Charles Ivie, Sr. Graph Architect at AWS. G.V() is a graph database IDE available for desktop or on AWS Marketplace, offering extensive graph visualization and querying capabilities for Amazon Neptune and Neptune Analytics. In Part 1 of this series, we demonstrated […]
Build and explore Knowledge Graphs faster with Amazon Neptune using Graph.Build and G.V() – Part 1
This is a guest blog post by Richard Loveday, Head of Product at Graph.Build, in partnership with Charles Ivie, Graph Architect at AWS. The Graph.Build platform is a dedicated, no-code graph model design studio and build factory, available on AWS Marketplace. Knowledge graphs have been widely adopted by organizations, powering use cases such as social […]
Introducing Amazon Aurora powers for Kiro
In this post, we show how you can turn your ideas into full-stack applications with Kiro powers for Aurora. We explore how a new innovation, Kiro powers, brings Amazon Aurora best practices into your development workflow, automatically implementing configurations and optimizations that make sure your database layer is production-ready from day one.
Build a fitness center management application with Kiro using Amazon DocumentDB (with MongoDB compatibility)
In this post, we walk through how we used Kiro, an agentic Integrated Development Environment (IDE), to build a complete fitness center management application that digitizes paper-based fitness tracking. We explore Kiro’s spec-driven development workflow and see how it transforms complex application development into a streamlined, iterative process. Our solution uses Amazon DocumentDB as the backend.
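As a rough illustration of the kind of backend calls such an application makes, the following sketch connects to an Amazon DocumentDB cluster with pymongo and records a workout. The cluster endpoint, credentials, and document shape are illustrative and not taken from the post.

```python
# Hypothetical fitness-tracking write against Amazon DocumentDB with pymongo.
from datetime import datetime, timezone
from pymongo import MongoClient

# Standard DocumentDB connection string (TLS with the Amazon CA bundle).
client = MongoClient(
    "mongodb://appuser:<password>@fitness-docdb.cluster-xxxxxx.us-east-1"
    ".docdb.amazonaws.com:27017/?tls=true&tlsCAFile=global-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)
workouts = client["fitness"]["workouts"]

# Record one logged workout for a member.
workouts.insert_one({
    "member_id": "M-1042",
    "exercise": "bench press",
    "sets": [{"reps": 10, "weight_kg": 60}, {"reps": 8, "weight_kg": 65}],
    "logged_at": datetime.now(timezone.utc),
})
print(workouts.count_documents({"member_id": "M-1042"}))
```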
Exploring the Optimize CPU feature on Amazon RDS for SQL Server
Amazon RDS for SQL Server now supports the Optimize CPU feature. With Optimize CPU, you can define the number of vCPUs when you launch new instances or modify existing database instances. The feature also provides a detailed billing breakdown of RDS infrastructure costs and of licensing costs for SQL Server and the Windows OS. It is available starting with 7th generation instance classes. In this post, we explore how to use the Optimize CPU feature with Amazon RDS for SQL Server.
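In practice, Optimize CPU settings are expressed through processor features on the DB instance. The sketch below is not from the post; the instance identifier and core/thread counts are placeholders, and the post covers supported instance classes and valid combinations.

```python
# Minimal sketch: set and verify Optimize CPU values with boto3.
import boto3

rds = boto3.client("rds")

# Placeholder values: 4 cores with 2 threads per core.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",
    ProcessorFeatures=[
        {"Name": "coreCount", "Value": "4"},
        {"Name": "threadsPerCore", "Value": "2"},
    ],
    ApplyImmediately=True,
)

# Confirm the processor configuration applied to the instance.
instance = rds.describe_db_instances(
    DBInstanceIdentifier="sqlserver-prod"
)["DBInstances"][0]
print(instance.get("ProcessorFeatures"))
```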
Netflix consolidates relational database infrastructure on Amazon Aurora, achieving up to 75% improved performance
Netflix operates a global streaming service that serves hundreds of millions of users through a distributed microservices architecture. In this post, we examine the technical and operational challenges encountered by their Online Data Stores (ODS) team with their self-managed, distributed PostgreSQL-compatible database, the evaluation criteria used to select a database solution, and why they chose to migrate to Amazon Aurora PostgreSQL to meet their current and future performance needs. The migration to Aurora PostgreSQL improved their database infrastructure, achieving up to a 75% increase in performance and 28% cost savings across critical applications.
How Letta builds production-ready AI agents with Amazon Aurora PostgreSQL
With the Letta Developer Platform, you can create stateful agents with built-in context management (compaction, context rewriting, and context offloading) and persistence. Using the Letta API, you can create agents that are long-lived or achieve complex tasks without worrying about context overflow or model lock-in. In this post, we guide you through setting up Amazon Aurora Serverless as a database repository for storing Letta long-term memory. We show how to create an Aurora cluster in the cloud, configure Letta to connect to it, and deploy agents that persist their memory to Aurora. We also explore how to query the database directly to view agent state.
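As a minimal sketch of that setup, the snippet below assumes the Letta server reads its PostgreSQL connection string from the LETTA_PG_URI environment variable and is started with the letta server command; the Aurora endpoint, database name, and credentials are placeholders, and the post walks through the full configuration.

```python
# Hypothetical sketch: point a local Letta server at Aurora Serverless PostgreSQL.
import os
import subprocess

# Placeholder Aurora cluster endpoint, database, and credentials.
os.environ["LETTA_PG_URI"] = (
    "postgresql://letta_user:<password>@letta-cluster.cluster-xxxxxx"
    ".us-east-1.rds.amazonaws.com:5432/letta"
)

# With the URI set, agent state and long-term memory persist to Aurora
# instead of the server's default local database.
subprocess.run(["letta", "server"], check=True)
```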
Lower cost and latency for AI using Amazon ElastiCache as a semantic cache with Amazon Bedrock
This post shows how to build a semantic cache using vector search on Amazon ElastiCache for Valkey. As detailed in the Impact section of this post, our experiments with semantic caching reduced LLM inference cost by up to 86 percent and improved average end-to-end latency for queries by up to 88 percent.
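The pattern behind those numbers can be sketched in a few lines. The snippet below is a simplified illustration rather than the post's implementation: it assumes an ElastiCache for Valkey endpoint with vector search enabled, an existing index named semantic_cache with an embedding vector field and an answer text field, Amazon Titan Text Embeddings v2 for query embeddings, and the redis-py client.

```python
# Simplified semantic-cache lookup: embed the query, run a KNN search against
# the cache, and return a stored answer only if a close-enough match exists.
import json
import boto3
import numpy as np
import redis  # ElastiCache for Valkey is wire-compatible with redis-py

bedrock = boto3.client("bedrock-runtime")
cache = redis.Redis(host="my-valkey.xxxxxx.cache.amazonaws.com",
                    port=6379, ssl=True)

def embed(text: str) -> bytes:
    """Embed the query with Titan Text Embeddings v2 as packed float32."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    vec = json.loads(resp["body"].read())["embedding"]
    return np.asarray(vec, dtype=np.float32).tobytes()

def cached_answer(query: str, max_distance: float = 0.2):
    """Return a cached answer if a semantically similar query was stored."""
    res = cache.execute_command(
        "FT.SEARCH", "semantic_cache",
        "*=>[KNN 1 @embedding $vec AS score]",
        "PARAMS", "2", "vec", embed(query),
        "RETURN", "2", "answer", "score",
        "DIALECT", "2",
    )
    if not res or res[0] == 0:
        return None  # cache miss: call the LLM, then store query, embedding, answer
    fields = dict(zip(res[2][0::2], res[2][1::2]))
    if float(fields[b"score"].decode()) > max_distance:
        return None
    return fields[b"answer"].decode()
```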
Build persistent memory for agentic AI applications with Mem0 Open Source, Amazon ElastiCache for Valkey, and Amazon Neptune Analytics
Today, we’re announcing a new integration between Mem0 Open Source, Amazon ElastiCache for Valkey, and Amazon Neptune Analytics to provide persistent memory capabilities to agentic AI applications. This integration solves a critical challenge when building agentic AI applications: without persistent memory, agents forget everything between conversations, making it impossible to deliver personalized experiences or complete multi-step tasks effectively. In this post, we show how you can use this new Mem0 integration.
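As a rough sketch of how such an integration is typically wired up in Mem0 Open Source: the provider names, config keys, endpoints, and graph identifier below are assumptions for illustration, not values from the post.

```python
# Hypothetical Mem0 configuration: ElastiCache for Valkey as the vector store
# and Neptune Analytics as the graph store.
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "redis",
        "config": {
            "redis_url": "rediss://my-valkey.xxxxxx.cache.amazonaws.com:6379",
            "collection_name": "agent_memories",
            "embedding_model_dims": 1536,
        },
    },
    "graph_store": {
        "provider": "neptune",
        "config": {"endpoint": "neptune-graph://g-xxxxxxxxxx"},
    },
}

memory = Memory.from_config(config)

# Persist a fact from one conversation and recall it in a later one.
memory.add("Alice prefers vegetarian restaurants", user_id="alice")
print(memory.search("Where should Alice eat?", user_id="alice"))
```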