AWS Database Blog

Category: Analytics

Accelerate SQL Server to Amazon Aurora migrations with a customizable solution

Migrating from SQL Server to Amazon Aurora can significantly reduce database licensing costs and modernize your data infrastructure. To accelerate your migration journey, we have developed a customizable migration solution. You can use this migration accelerator to achieve fast data migration with minimal downtime while tailoring it to your specific business requirements. In this post, we showcase the core features of the migration accelerator, demonstrated through a complex use case: consolidating 32 SQL Server databases into a single Amazon Aurora instance with near-zero downtime, while addressing technical debt through refactoring.

Better together: Amazon RDS for SQL Server and Amazon SageMaker Lakehouse, a generative AI data integration use case

Generative AI is transforming how businesses operate worldwide, and it has become paramount for businesses to integrate generative AI capabilities into their customer-facing services and applications. The challenge they often face is the need to use massive amounts of relational data hosted in SQL Server databases to contextualize these new generative AI solutions. In this post, we demonstrate how you can address this challenge by combining Amazon RDS for SQL Server and Amazon SageMaker Lakehouse.

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse – Part 2

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse allows you to run analytics workloads on your DynamoDB data without having to set up and manage extract, transform, and load (ETL) pipelines. In this post, the second in a two-part series, we cover setting up Amazon SageMaker Unified Studio, followed by running data analysis to showcase its capabilities. We illustrate the walkthrough with an example of a credit card company that wants to analyze its customer behavior and spending trends.
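
To give a flavor of the analysis step, here is a minimal sketch that queries the integrated data through Amazon Athena, assuming the zero-ETL integration has already landed the DynamoDB table in the lakehouse catalog. The database, table, and column names (cards.transactions, customer_id, amount, txn_date) and the results bucket are hypothetical placeholders, not names from the post.

```python
import time

import boto3

athena = boto3.client("athena")

# Hypothetical database/table created by the zero-ETL integration;
# substitute the names from your own SageMaker Lakehouse catalog.
QUERY = """
SELECT customer_id,
       SUM(amount) AS total_spend,
       COUNT(*)    AS txn_count
FROM cards.transactions
WHERE txn_date >= date_add('day', -30, current_date)
GROUP BY customer_id
ORDER BY total_spend DESC
LIMIT 10
"""

def run_query(sql: str, output_location: str) -> list:
    """Run an Athena query, wait for completion, and return its result rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output_location},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"query {qid} ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

rows = run_query(QUERY, "s3://my-athena-results/")  # hypothetical bucket
for row in rows:
    print([col.get("VarCharValue") for col in row["Data"]])
```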

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse – Part 1

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse allows you to run analytics workloads on your DynamoDB data without having to set up and manage extract, transform, and load (ETL) pipelines. In this two-part series, we first walk through the prerequisites and initial setup for the zero-ETL integration. In Part 2, we cover setting up Amazon SageMaker Unified Studio, followed by running data analysis to showcase its capabilities. We illustrate our solution walkthrough with an example of a credit card company that wants to analyze its customer behavior and spending trends.

Graph-powered authorization: Relationship-based access control for access management

Authorization systems are a critical component of modern applications, yet traditional approaches like role-based access control (RBAC) and attribute-based access control (ABAC) struggle to meet the complex access control requirements of today’s enterprises. In this post, we introduce relationship-based access control (ReBAC) as an alternative for enterprise-scale authorization. We explore how the proposed […]
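
To give a taste of the ReBAC idea, the following minimal in-memory sketch treats an authorization check as a reachability query over a relationship graph. In practice these edges would live in a graph database; every entity, relation, and the "granting" rule here is a hypothetical illustration, not the post's model.

```python
from collections import deque

# Relationship graph as (subject, relation, object) triples. In a real
# system these edges would live in a graph database; names are hypothetical.
EDGES = [
    ("alice", "member_of", "platform-team"),
    ("platform-team", "editor_of", "billing-service"),
    ("billing-service", "part_of", "payments-org"),
]

# Relations that grant "edit" when they appear along a path (an assumption
# of this sketch; real policies define this per relation and per action).
GRANTING = {"member_of", "editor_of", "part_of"}

def can_edit(user: str, resource: str) -> bool:
    """ReBAC check: is the resource reachable from the user via granting edges?"""
    adjacency: dict[str, list[str]] = {}
    for src, rel, dst in EDGES:
        if rel in GRANTING:
            adjacency.setdefault(src, []).append(dst)
    seen, queue = {user}, deque([user])
    while queue:
        node = queue.popleft()
        if node == resource:
            return True
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(can_edit("alice", "billing-service"))  # True
print(can_edit("alice", "payments-org"))     # True (transitive)
print(can_edit("bob", "billing-service"))    # False
```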

How Amazon Finance Automation built an operational data store with AWS purpose-built databases to power critical finance applications

In this post, we discuss how the Amazon Finance Automation team used AWS purpose-built databases such as Amazon DynamoDB, Amazon OpenSearch Service, and Amazon Neptune, coupled with serverless compute such as AWS Lambda, to build an operational data store (ODS) that stores financial transaction data and supports FinOps applications with millisecond latency. This data store is a key enabler for the FinOps business.
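
As a minimal sketch of the low-latency access pattern such an ODS serves, here is a single-item point lookup against DynamoDB. The table name and key schema are hypothetical; the post's actual schema is not shown here.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical table and partition key for illustration only.
table = dynamodb.Table("finance-transactions")

def get_transaction(transaction_id: str) -> dict | None:
    """Point lookup by partition key; typically single-digit milliseconds."""
    resp = table.get_item(Key={"transaction_id": transaction_id})
    return resp.get("Item")

print(get_transaction("txn-0001"))
```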

Improve cost visibility of an Amazon RDS multi-tenant instance with Performance Insights and Amazon Athena

In this post, we introduce a solution that addresses a common challenge faced by many customers: managing costs in multi-tenant applications, particularly for shared databases in Amazon Relational Database Service (Amazon RDS) and Amazon Aurora. The solution uses Amazon RDS Performance Insights and AWS Cost and Usage Reports (CUR) to attribute usage to individual tenants. This allows for efficient grouping of tenants on the same RDS or Aurora instances, while helping you implement accurate chargeback models, optimize resource-intensive workloads, and make data-driven decisions for capacity planning.
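
As a sketch of the Performance Insights half of this approach, the following example retrieves database load (db.load.avg) grouped by database user and apportions a known instance cost by each tenant's share of that load. The resource ID, the monthly cost figure, and the one-database-user-per-tenant assumption are all hypothetical; the post's full solution joins this data with the CUR instead of hardcoding a cost.

```python
from datetime import datetime, timedelta, timezone

import boto3

pi = boto3.client("pi")

# Hypothetical: the DbiResourceId of the shared instance and its monthly
# cost as it would appear in the Cost and Usage Reports (CUR).
RESOURCE_ID = "db-ABCDEFGHIJKLMNOP"
MONTHLY_COST = 500.0

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Average active sessions per database user, the tenant dimension here.
resp = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier=RESOURCE_ID,
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=3600,
    MetricQueries=[{
        "Metric": "db.load.avg",
        "GroupBy": {"Group": "db.user", "Limit": 10},
    }],
)

# Sum load per tenant, then split the instance cost by share of load.
load_by_tenant: dict[str, float] = {}
for metric in resp["MetricList"]:
    dims = metric["Key"].get("Dimensions", {})
    tenant = dims.get("db.user.name", "_total")  # first entry is the aggregate
    load = sum(p["Value"] for p in metric["DataPoints"] if "Value" in p)
    load_by_tenant[tenant] = load_by_tenant.get(tenant, 0.0) + load

total = sum(v for k, v in load_by_tenant.items() if k != "_total") or 1.0
for tenant, load in sorted(load_by_tenant.items()):
    if tenant != "_total":
        print(f"{tenant}: ${MONTHLY_COST * load / total:.2f}")
```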

Gather organization-wide Amazon RDS orphan snapshot insights using AWS Step Functions and Amazon QuickSight

In this post, we walk you through a solution that aggregates RDS orphan snapshots across accounts and AWS Regions, enabling automation and organization-wide visibility to optimize cloud spend based on data-driven insights. Cross-Region copied snapshots, Aurora cluster copied snapshots, and shared snapshots are out of scope for this solution. The solution uses AWS Step Functions orchestration together with AWS Lambda functions to generate orphan snapshot metadata across your organization. The generated metadata is stored in Amazon Simple Storage Service (Amazon S3) and transformed into an Amazon Athena table by AWS Glue. Amazon QuickSight uses the Athena table to generate orphan snapshot insights.
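
The core orphan-detection check can be sketched in a few lines for a single account and Region; the post's Step Functions workflow fans this logic out across an organization. This minimal sketch treats a manual snapshot as orphaned when its source DB instance no longer exists:

```python
import boto3

def find_orphan_snapshots(region: str) -> list[str]:
    """Return manual RDS snapshots whose source DB instance no longer exists."""
    rds = boto3.client("rds", region_name=region)
    live = {
        db["DBInstanceIdentifier"]
        for page in rds.get_paginator("describe_db_instances").paginate()
        for db in page["DBInstances"]
    }
    orphans = []
    for page in rds.get_paginator("describe_db_snapshots").paginate(
            SnapshotType="manual"):
        for snap in page["DBSnapshots"]:
            if snap["DBInstanceIdentifier"] not in live:
                orphans.append(snap["DBSnapshotIdentifier"])
    return orphans

print(find_orphan_snapshots("us-east-1"))
```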

How Skello uses AWS DMS to synchronize data from a monolithic application to microservices

Skello is a human resources (HR) software-as-a-service (SaaS) platform that focuses on employee scheduling and workforce management. It caters to various sectors, including hospitality, retail, healthcare, construction, and industry. In this post, we show how Skello uses AWS Database Migration Service (AWS DMS) to synchronize data from a monolithic architecture to microservices, and to ingest data from both the monolith and the microservices into our data lake.
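
As a hedged sketch of what such a synchronization task might look like, the following example creates an AWS DMS full-load-plus-CDC replication task scoped to the tables one microservice owns. The endpoint and instance ARNs, schema name, and table pattern are placeholders, not Skello's actual configuration.

```python
import json

import boto3

dms = boto3.client("dms")

# Replicate only the tables a given microservice owns; all names below
# are hypothetical placeholders.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "scheduling-tables",
        "object-locator": {"schema-name": "public", "table-name": "shifts%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="monolith-to-scheduling-service",
    SourceEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:123456789012:rep:INST",
    MigrationType="full-load-and-cdc",  # initial copy, then ongoing changes
    TableMappings=json.dumps(table_mappings),
)
```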

How Channel Corporation modernized their architecture with Amazon DynamoDB, Part 2: Streams

Channel Corporation is a B2B software-as-a-service (SaaS) startup that operates the all-in-one artificial intelligence (AI) messenger Channel Talk. In Part 1 of this series, we introduced our motivation for NoSQL adoption, the technical problems that came with business growth, and considerations for migrating from PostgreSQL to Amazon DynamoDB. In this post, we share our experience integrating DynamoDB with other services to address requirements that DynamoDB alone couldn’t meet.
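
As a minimal illustration of the DynamoDB Streams pattern this post covers, here is a sketch of an AWS Lambda handler consuming a stream with the NEW_AND_OLD_IMAGES view type. The attribute names and downstream actions are hypothetical, not Channel Corporation's implementation.

```python
# Hypothetical AWS Lambda handler subscribed to a DynamoDB stream;
# the event shape follows the documented DynamoDB Streams record format.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            # Attribute values arrive in DynamoDB's typed JSON, e.g. {"S": "..."}.
            user_id = new_image["user_id"]["S"]
            # Fan out to downstream services (search index, analytics, ...).
            print(f"propagating change for user {user_id}")
        elif record["eventName"] == "REMOVE":
            old_image = record["dynamodb"]["OldImage"]
            print(f"handling deletion of {old_image['user_id']['S']}")
    # Report partial failures if ReportBatchItemFailures is configured.
    return {"batchItemFailures": []}
```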