AWS Database Blog
Category: Customer Solutions
How Aqua Security automates fast clone orchestration on Amazon Aurora at scale
Aqua Security is a leading provider of cloud-based security solutions, trusted by global enterprises to secure their applications from development to production. In this post, we explore how Aqua Security automates the use of Amazon Aurora fast clones to support read-heavy operations at scale, simplify their data workflows, and maintain operational efficiency.
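Under the hood, an Aurora fast clone is requested through the point-in-time restore API with a copy-on-write restore type. As a rough illustration of how such orchestration can be scripted (the cluster identifiers and instance class below are hypothetical placeholders, not Aqua Security's actual setup), here is a minimal boto3 sketch:

```python
import boto3

rds = boto3.client("rds")

# Create a copy-on-write (fast) clone of the source Aurora cluster.
# Identifiers are hypothetical placeholders.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="analytics-clone",
    SourceDBClusterIdentifier="production-cluster",
    RestoreType="copy-on-write",      # fast clone instead of a full copy
    UseLatestRestorableTime=True,
)

# A clone starts with no instances; add one to serve the read-heavy workload.
rds.create_db_instance(
    DBInstanceIdentifier="analytics-clone-instance-1",
    DBClusterIdentifier="analytics-clone",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",       # must match the source cluster's engine
)

# Wait until the instance is available before pointing readers at the clone.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="analytics-clone-instance-1"
)
```

Because the clone shares storage with the source and copies pages only on write, it is ready in minutes regardless of database size, which is what makes this pattern practical to automate at scale.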
How TalentNeuron optimized data operations, cut costs, and modernized with Amazon Aurora I/O-Optimized
For years, TalentNeuron, a leader in talent intelligence and workforce planning, has been empowering organizations with data-driven insights by collecting and processing vast amounts of job board data. In this post, we share three key benefits that TalentNeuron realized by using Amazon Aurora I/O-Optimized as part of their new data platform: reduced monthly database costs by 29%, improved data validation performance, and accelerated innovation through modernization.
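Moving a cluster to I/O-Optimized does not require a migration; it is a storage-type change on the existing cluster. A minimal boto3 sketch, with a hypothetical cluster name:

```python
import boto3

rds = boto3.client("rds")

# Switch an existing Aurora cluster from Aurora Standard to Aurora
# I/O-Optimized storage. The identifier is a hypothetical placeholder.
rds.modify_db_cluster(
    DBClusterIdentifier="talent-data-platform",  # hypothetical
    StorageType="aurora-iopt1",                  # Aurora I/O-Optimized
    ApplyImmediately=True,
)
```

I/O-Optimized trades per-request I/O charges for a higher instance and storage rate, so it tends to pay off for I/O-intensive workloads like TalentNeuron's data validation pipelines.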
Fluent Commerce’s approach to near-zero downtime Amazon Aurora PostgreSQL upgrade at 32 TB scale using snapshots and AWS DMS ongoing replication
Fluent Commerce, an omnichannel commerce platform, offers order management solutions that enable businesses to deliver seamless shopping experiences across various channels. Fluent uses Amazon Aurora PostgreSQL-Compatible Edition as its high-performance OLTP database engine to process its customers' intricate search queries efficiently. To upgrade its 32 TB Aurora PostgreSQL databases with minimal downtime, Fluent Commerce strategically combined AWS-based approaches, including snapshot restores and AWS DMS ongoing replication. In this post, we explore this pragmatic and cost-effective way to achieve near-zero downtime during database upgrades: a snapshot restore followed by continuous replication using AWS DMS.
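At a high level, the pattern is: restore a snapshot of the old cluster onto the new engine version, then run an AWS DMS change data capture (CDC) task to replay writes made after the snapshot until cutover. A hedged boto3 sketch with hypothetical identifiers, ARNs, and versions, not Fluent Commerce's exact configuration:

```python
import json
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")
dms = boto3.client("dms")

# 1) Restore the upgrade target from a snapshot of the old cluster,
#    specifying the new engine version. All names are hypothetical.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="orders-pg15",
    SnapshotIdentifier="orders-pg11-snapshot",
    Engine="aurora-postgresql",
    EngineVersion="15.4",
)

# 2) Replay changes made since the snapshot with a CDC-only AWS DMS task,
#    then cut over once replication lag is near zero.
dms.create_replication_task(
    ReplicationTaskIdentifier="orders-upgrade-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="cdc",
    # Start replication from the snapshot time (hypothetical timestamp).
    CdcStartTime=datetime(2024, 6, 1, 3, 0, tzinfo=timezone.utc),
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```

The application keeps writing to the old cluster throughout; downtime is limited to the final cutover window once DMS has caught up.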
How an AWS customer in the learning services industry migrated and modernized SAP ASE to Amazon Aurora PostgreSQL
In this post, we explore how a leading AWS customer in the learning services industry successfully modernized its legacy SAP ASE environment by migrating to Amazon Aurora PostgreSQL-Compatible Edition. Partnering with AWS, the customer engineered a comprehensive migration strategy to transition from a proprietary system to an open source database while providing high availability, performance optimization, and cost-efficiency.
Implement row-level security in Amazon Aurora MySQL and Amazon RDS for MySQL
Row-level security (RLS) is a security mechanism that enhances data protection in scalable applications by controlling access at the individual row level. It enables organizations to implement fine-grained access controls based on user attributes, so users can only view and modify data they’re authorized to access. This post focuses on implementing a cost-effective custom RLS solution using native MySQL features, making it suitable for a wide range of use cases without requiring additional software dependencies. This solution is applicable for both Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora MySQL-Compatible Edition, providing flexibility for users of either service.
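One common way to express this pattern with native MySQL features is a definer view that filters the base table on the connecting user, with application accounts granted access only to the view. The sketch below uses a hypothetical schema to illustrate the idea; the post's solution may structure it differently:

```python
import mysql.connector  # any MySQL DB-API driver works

# Hypothetical schema demonstrating a native-MySQL RLS pattern:
# a DEFINER view filters rows by the connecting user, and applications
# are granted access to the view only, never to the base table.
DDL = [
    """CREATE TABLE orders (
           id INT PRIMARY KEY,
           owner VARCHAR(32) NOT NULL,   -- MySQL user that owns the row
           amount DECIMAL(10,2)
       )""",
    # USER() (unlike CURRENT_USER()) still reports the connecting client
    # inside a DEFINER view, so each user sees only their own rows.
    """CREATE SQL SECURITY DEFINER VIEW orders_rls AS
           SELECT id, owner, amount
           FROM orders
           WHERE owner = SUBSTRING_INDEX(USER(), '@', 1)""",
    # 'app_user' is a hypothetical, pre-created application account.
    "GRANT SELECT ON appdb.orders_rls TO 'app_user'@'%'",
]

conn = mysql.connector.connect(
    host="appdb.cluster-xxxx.us-east-1.rds.amazonaws.com",  # hypothetical
    user="admin", password="example", database="appdb",
)
cur = conn.cursor()
for stmt in DDL:
    cur.execute(stmt)
conn.commit()
```

Because everything here is plain tables, views, and grants, the same approach works unchanged on Amazon RDS for MySQL and Aurora MySQL-Compatible Edition.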
Real-time Iceberg ingestion with AWS DMS
Etleap is an AWS Advanced Technology Partner with the AWS Data & Analytics Competency and Amazon Redshift Service Ready designation. In this post, we show how Etleap helps you build scalable, near real-time pipelines that stream data from operational SQL databases into Iceberg tables using AWS DMS. You can use AWS DMS as a robust and configurable solution for change data capture (CDC) from all major databases into AWS.
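One way such a pipeline can be wired on the AWS DMS side is an S3 target endpoint that lands changes as Parquet files, which a downstream job then commits to Iceberg tables. A hedged boto3 sketch with hypothetical names (Etleap manages this plumbing for you):

```python
import boto3

dms = boto3.client("dms")

# Land CDC changes as Parquet in S3, where a downstream job (Etleap's
# pipeline, or a tool such as Spark or Athena) merges them into Iceberg
# tables. Bucket, role, and identifiers are hypothetical.
dms.create_endpoint(
    EndpointIdentifier="cdc-to-s3-parquet",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "cdc-landing-zone",
        "BucketFolder": "orders/",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-access",
        "DataFormat": "parquet",
        "TimestampColumnName": "cdc_ts",  # per-row change timestamp
        "CdcInsertsAndUpdates": True,     # capture inserts and updates
    },
)
```

Keeping the landing format columnar means the merge-into-Iceberg step reads and rewrites far less data than it would with CSV output.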
Scaling Amazon RDS for MySQL performance for Careem’s digital platform on AWS
Careem powers rides, deliveries, and payments across the Middle East, North Africa, and South Asia. As Careem grew, so did its data infrastructure challenges. Its monolithic 270 TB Amazon RDS for MySQL database, consisting of one writer and five read replicas, experienced performance issues due to increased storage utilization, slow queries, high replica lag, and rising Amazon RDS costs. In this post, we provide a step-by-step breakdown of how Careem successfully implemented a phased data purging strategy, improving database performance while addressing key technical challenges.
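The core mechanic of a phased purge is deleting in small batches so OLTP traffic and replicas are never starved by one enormous DELETE. A minimal Python sketch with a hypothetical table and retention rule; Careem's actual implementation involved more safeguards:

```python
import time

import mysql.connector

# Purge old rows in small batches. Table, column, and retention window
# are hypothetical; a single giant DELETE would lock heavily and drive
# replica lag, which is exactly what phased purging avoids.
conn = mysql.connector.connect(
    host="rides-db.xxxx.us-east-1.rds.amazonaws.com",  # hypothetical
    user="purge_job", password="example", database="rides",
)
cur = conn.cursor()

BATCH = 5000
while True:
    cur.execute(
        "DELETE FROM trip_events "
        "WHERE created_at < NOW() - INTERVAL 2 YEAR "
        "LIMIT %s",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount < BATCH:  # nothing left to purge
        break
    time.sleep(1)             # give replicas time to catch up
```

In production you would also gate each batch on replica lag metrics and run the job in off-peak windows.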
Zupee implements Amazon Neptune to detect Wallet transaction anomalies in real time
Zupee is a leading skill-based gaming platform offering casual and board games, and is one of the fastest-growing real money gaming platforms in India. Users can play multiple skill-based games online and win prizes. In this post, we show you how Zupee integrated Amazon Neptune Database to detect anomalies in wallet transactions in real time by building a system that traces the complex relationships between users, devices, and wallet transaction metadata.
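To give a flavor of the kind of traversal such a graph makes cheap (the endpoint, labels, and property names below are hypothetical, not Zupee's schema), here is a Gremlin sketch that finds other users transacting from the same device as a flagged transaction:

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

# Connect to a hypothetical Neptune cluster endpoint.
g = traversal().withRemote(DriverRemoteConnection(
    "wss://my-neptune.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
))

# From a flagged transaction, hop to its user's device(s), then to every
# other user seen on those devices: a classic shared-device fraud signal.
suspects = (
    g.V().has("transaction", "txn_id", "txn-123")  # flagged transaction
     .in_("performed")                             # user who performed it
     .out("uses_device")                           # their device(s)
     .in_("uses_device")                           # everyone on those devices
     .dedup()
     .values("user_id")
     .toList()
)
print(suspects)
```

Expressing this as a few graph hops, rather than multi-way relational joins, is what makes the check fast enough to run in the real-time transaction path.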
How Habby enhanced resiliency and system robustness using Valkey GLIDE and Amazon ElastiCache
Habby is a game studio that creates interactive entertainment to connect players worldwide. We adopted Valkey GLIDE, a client library for Amazon ElastiCache for Valkey and Redis OSS, to address our system challenges. Our system uses the Amazon ElastiCache for Redis OSS publish/subscribe (Pub/Sub) functionality to deliver chat messages. However, we faced challenges with connection stability during infrastructure changes, such as instance scaling, Redis OSS version upgrades, and hardware failures. This post describes our messaging system architecture and explains how we improved system reliability by using Valkey GLIDE as the client communicating with Amazon ElastiCache.
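Because GLIDE owns connection management (and, by design, Pub/Sub subscriptions are declared in the client configuration so the client can re-establish them after a reconnect), transient failures during scaling or upgrades are retried inside the client rather than surfacing to application code. A minimal publish-side sketch using the valkey-glide Python client; the endpoint is hypothetical and the API details are as we understand them at the time of writing:

```python
import asyncio

from glide import GlideClient, GlideClientConfiguration, NodeAddress

async def main() -> None:
    # Hypothetical ElastiCache endpoint; GLIDE handles reconnects and
    # retries internally across scaling events and failovers.
    config = GlideClientConfiguration(
        [NodeAddress("chat-cache.xxxx.use1.cache.amazonaws.com", 6379)]
    )
    client = await GlideClient.create(config)

    # Publish a chat message: GLIDE's publish takes the message first,
    # then the channel (assumed signature, per the client docs).
    await client.publish("player42: hello!", "room:1001")

    await client.close()

asyncio.run(main())
```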
How Amazon Finance Automation built an operational data store with AWS purpose-built databases to power critical finance applications
In this post, we discuss how the Amazon Finance Automation team used AWS purpose-built databases such as Amazon DynamoDB, Amazon OpenSearch Service, and Amazon Neptune, coupled with serverless compute such as AWS Lambda, to build an operational data store (ODS) that stores financial transaction data and supports FinOps applications with millisecond latency. This data is a key enabler for the FinOps business.
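As a flavor of the DynamoDB side of such a store (table name and key schema are hypothetical), transactions can be keyed by account and queried with single-digit-millisecond latency, while DynamoDB Streams and Lambda fan changes out to OpenSearch Service for search and Neptune for relationship queries:

```python
import boto3
from boto3.dynamodb.conditions import Key

ddb = boto3.resource("dynamodb")
table = ddb.Table("finance-transactions")  # hypothetical table

# Write a transaction keyed by account (partition key) and timestamp
# (sort key). Attribute names and values are illustrative.
table.put_item(Item={
    "account_id": "ACCT#1001",
    "txn_ts": "2024-06-01T12:00:00Z",
    "amount": "125.00",
    "currency": "USD",
})

# Fetch all transactions for one account with a key-based query,
# the access pattern DynamoDB serves at millisecond latency.
resp = table.query(
    KeyConditionExpression=Key("account_id").eq("ACCT#1001")
)
print(resp["Items"])
```

Each purpose-built store then serves the access pattern it is best at: key lookups from DynamoDB, full-text and aggregation queries from OpenSearch Service, and relationship traversals from Neptune.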