Simplify data integration using zero-ETL from Amazon RDS to Amazon Redshift
Organizations rely on real-time analytics to gain insights into their core business drivers, enhance operational efficiency, and maintain a competitive edge. Traditionally, this has involved the use of complex extract, transform, and load (ETL) pipelines. ETL is the process of combining, cleaning, and normalizing data from different sources to prepare it for analytics, AI, and machine learning (ML) workloads. Although ETL processes have long been a staple of data integration, they often prove time-consuming, complex, and less adaptable to the fast-changing demands of modern data architectures. By transitioning towards zero-ETL architectures, you can foster agility in analytics, streamline processes, and make sure that data is immediately actionable.
Zero-ETL is a set of fully managed integrations from AWS that minimizes the need to build ETL data pipelines. It makes data available in Amazon Redshift from multiple operational, transactional, and enterprise sources. Zero-ETL integrations help unify your data across applications and data sources for holistic insights and break down data silos. They provide a fully managed, no-code, near real-time solution for making petabytes of transactional data available in Amazon Redshift within seconds of data being written into Amazon RDS for MySQL or Amazon RDS for PostgreSQL. This alleviates the need to create your own ETL jobs, simplifying data ingestion, reducing your operational overhead, and potentially lowering your overall data processing costs, so you can focus more on deriving actionable insights and less on managing the complexities of data integration. The following image illustrates how you can achieve near real-time analytics from Amazon RDS to Amazon Redshift.
A zero-ETL integration makes the data in your RDS database available in Amazon Redshift in near real-time. When that data is in Amazon Redshift, you can power your analytics, ML, and AI workloads using the built-in capabilities of Amazon Redshift, such as ML, materialized views, data sharing, federated access to multiple data stores and data lakes, and integrations with Amazon SageMaker, Amazon QuickSight, and other AWS services.
Solution overview
To create a zero-ETL integration, you specify an RDS database as the source, and a Redshift data warehouse as the target. The integration replicates data from the source database into the target data warehouse. The following diagram illustrates this architecture.
You’ll use the AWS Command Line Interface (AWS CLI) to create the zero-ETL integration. To do so, you’ll first create a source RDS DB instance and a target Redshift cluster, and then initiate the integration.
Prerequisites
You must have the following prerequisites:
The AWS Command Line Interface (AWS CLI) v2 installed and configured with appropriate credentials.
Sufficient AWS Identity and Access Management (IAM) permissions to create and configure Amazon RDS resources. For more details, refer to Creating an Amazon RDS DB instance.
An RDS for MySQL or RDS for PostgreSQL (source) DB instance set up and accessible on its SQL port. For this post, we use RDS DB instances running MySQL 8.0 or PostgreSQL 15.7.
MySQL: For a MySQL source, use the following command to create a custom RDS DB parameter group. This example creates a parameter group for MySQL 8.0.
aws rds create-db-parameter-group \
--db-parameter-group-name zetl-mysql-parameter-group \
--db-parameter-group-family mysql8.0 \
--description "Parameter group for mysql" \
--region {region}
Then modify the binlog_format and binlog_row_image parameter values. The binary logging format is important because it determines the record of data changes that is recorded in the source and sent to the replication targets. For information about the advantages and disadvantages of different binary logging formats for replication, see Advantages and Disadvantages of Statement-Based and Row-Based Replication in the MySQL documentation.
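As an illustration, a command along the following lines modifies both parameters in the parameter group created above (the values shown, row-based logging with full row images, are what the zero-ETL documentation calls for; confirm them against the current prerequisites):

```shell
# Set row-based binary logging on the MySQL source parameter group
aws rds modify-db-parameter-group \
    --db-parameter-group-name zetl-mysql-parameter-group \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=pending-reboot" \
                 "ParameterName=binlog_row_image,ParameterValue=full,ApplyMethod=pending-reboot" \
    --region {region}
```

Remember to associate the parameter group with the source DB instance and reboot it so the pending changes take effect.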
PostgreSQL: For a PostgreSQL source, use the following command to create a custom RDS DB parameter group. This example creates a parameter group for PostgreSQL 15.
aws rds create-db-parameter-group \
--db-parameter-group-name zetl-pgsql-parameter-group \
--db-parameter-group-family postgres15 \
--description "Parameter group for postgresql" \
--region {region}
Then modify the rds.logical_replication, rds.replica_identity_full, session_replication_role, wal_sender_timeout, max_wal_senders, and max_replication_slots parameters to set their required values. See the zero-ETL documentation for more information about these parameters.
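For example, a command like the following enables logical replication on the parameter group created above (the parameter values shown are illustrative; consult the zero-ETL documentation for the complete required set):

```shell
# Enable logical replication on the PostgreSQL source parameter group
aws rds modify-db-parameter-group \
    --db-parameter-group-name zetl-pgsql-parameter-group \
    --parameters "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot" \
                 "ParameterName=session_replication_role,ParameterValue=origin,ApplyMethod=pending-reboot" \
                 "ParameterName=wal_sender_timeout,ParameterValue=0,ApplyMethod=pending-reboot" \
    --region {region}
```

As with MySQL, associate the parameter group with the source DB instance and reboot it to apply the static parameters.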
Use the following command to create a cluster subnet group for the target Redshift cluster:
aws redshift create-cluster-subnet-group \
--cluster-subnet-group-name zetl-subnet-group \
--subnet-ids "subnet-*******" "subnet-*******" "subnet-*******" "subnet-*******" \
--description "subnet group for redshift" \
--region {region}
{
    "ClusterSubnetGroup": {
        "ClusterSubnetGroupName": "zetl-subnet-group",
        "Description": "subnet group for redshift",
        "VpcId": "vpc-********",
        "SubnetGroupStatus": "Complete",
        //Skipping rest of API response
    }
}
Create a custom parameter group for the Redshift cluster
Use the following command to create a custom parameter group for the Redshift cluster:
aws redshift create-cluster-parameter-group \
--parameter-group-name zetl-redshift-parameter-group \
--parameter-group-family redshift-1.0 \
--description "cluster parameter group for zetl" \
--region {region}
{
    "ClusterParameterGroup": {
        "ParameterGroupName": "zetl-redshift-parameter-group",
        "ParameterGroupFamily": "redshift-1.0",
        "Description": "cluster parameter group for zetl",
        "Tags": []
    }
}
Modify the enable_case_sensitive_identifier parameter and set its value to ON. This is required to support the case sensitivity of source tables and columns. The enable_case_sensitive_identifier parameter determines whether name identifiers of databases, tables, and columns are case sensitive. It must be turned on to create zero-ETL integrations in the data warehouse.
aws redshift modify-cluster-parameter-group \
--parameter-group-name zetl-redshift-parameter-group \
--parameters ParameterName=enable_case_sensitive_identifier,ParameterValue=ON \
--region {region}
{
    "ParameterGroupName": "zetl-redshift-parameter-group",
    "ParameterGroupStatus": "Your parameter group has been updated. If you changed only dynamic parameters, associated clusters are being modified now. If you changed static parameters, all updates, including dynamic parameters, will be applied when you reboot the associated clusters."
}
Select or create the target Redshift cluster
If you already have a Redshift cluster, you can use that; otherwise, you can create a new cluster with the create-cluster AWS CLI command.
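A minimal sketch of such a command follows, reusing the parameter group and subnet group created earlier. The cluster identifier, node type, and credentials are placeholders, and note that zero-ETL integrations require an encrypted RA3 (or Redshift Serverless) target:

```shell
# Create an encrypted RA3 cluster as the zero-ETL target
aws redshift create-cluster \
    --cluster-identifier zetl-redshift-cluster \
    --node-type ra3.xlplus \
    --number-of-nodes 2 \
    --master-username awsuser \
    --master-user-password '{password}' \
    --cluster-parameter-group-name zetl-redshift-parameter-group \
    --cluster-subnet-group-name zetl-subnet-group \
    --encrypted \
    --region {region}
```

Wait for the cluster status to become available before proceeding; because enable_case_sensitive_identifier is a static parameter, reboot the cluster if you attached the parameter group after creation.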
Configure authorization using an Amazon Redshift resource policy
You can use the Amazon Redshift API operations to configure resource policies that work with zero-ETL integrations. To control which source can create an inbound integration into the Amazon Redshift namespace, create a resource policy and attach it to the namespace of your target data warehouse. The resource policy specifies the source that is allowed to create an inbound integration and replicate live data from the source into Amazon Redshift.
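As an illustration, a policy along the following lines authorizes a specific source to create an inbound integration. The namespace ID and account ID are placeholders, the source ARN shown is the MySQL example instance, and the exact statements your setup needs may differ, so check the zero-ETL documentation before applying it:

```shell
# Write an example namespace resource policy (placeholders throughout)
cat > rs-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "redshift.amazonaws.com" },
        "Action": "redshift:AuthorizeInboundIntegration",
        "Condition": {
            "StringEquals": { "aws:SourceArn": "arn:aws:rds:{region}:{account-id}:db:zetl-db" }
        }
    }]
}
EOF

# Attach the policy to the target namespace
aws redshift put-resource-policy \
    --resource-arn "arn:aws:redshift:{region}:{account-id}:namespace:{namespace-id}" \
    --policy file://rs-policy.json \
    --region {region}
```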
If you’re using MySQL, connect to the source MySQL database and run the following commands:
mysql -h zetl-db.************.us-east-1.rds.amazonaws.com -u test -P 3306 -p
MySQL [(none)]> CREATE DATABASE my_db;
MySQL [(none)]> USE my_db;
MySQL [my_db]> CREATE TABLE books_table (ID int NOT NULL, Title VARCHAR(50) NOT NULL, Author VARCHAR(50) NOT NULL,
Copyright INT NOT NULL, Genre VARCHAR(50) NOT NULL, PRIMARY KEY (ID));
MySQL [my_db]> INSERT INTO books_table VALUES (1, 'The Shining', 'Stephen King', 1977, 'Supernatural fiction');
MySQL [my_db]> commit;
If you’re using PostgreSQL, connect to the source PostgreSQL database and run the following commands:
psql -h zetl-pg-db.************.us-east-1.rds.amazonaws.com -d zetldb -U test -p 5432 -W
zetldb=> CREATE TABLE books_table (ID int primary key, Title VARCHAR(50), Author VARCHAR(50),Copyright int, Genre VARCHAR(50));
zetldb=> INSERT INTO books_table VALUES (1, 'The Shining', 'Stephen King', 1977, 'Supernatural fiction');
zetldb=> commit;
This data will be our historic data. After we create an integration, we will generate new live data.
Create a zero-ETL integration
In this step, we create an Amazon RDS zero-ETL integration with Amazon Redshift where we specify the source RDS database and the target Redshift data warehouse. You can optionally also provide data filters, an AWS Key Management Service (AWS KMS) key that you want to use for encryption, tags, and other configurations.
The SourceArn will be arn:aws:rds:{region}:{account-id}:db:zetl-db for MySQL and arn:aws:rds:{region}:{account-id}:db:zetl-pg-db for PostgreSQL.
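With the ARNs in hand, the integration can be created with a command along these lines. The integration name and the target namespace ARN are placeholders; the source ARN shown is the MySQL example:

```shell
# Create the zero-ETL integration from the RDS source to the Redshift namespace
aws rds create-integration \
    --integration-name zetl-integration \
    --source-arn "arn:aws:rds:{region}:{account-id}:db:zetl-db" \
    --target-arn "arn:aws:redshift:{region}:{account-id}:namespace:{namespace-id}" \
    --region {region}
```

The integration takes some time to move from Creating to Active; you can monitor its status with aws rds describe-integrations.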
Verify the solution
To verify the solution, we use Amazon Redshift Query Editor v2 to check the data on the target cluster. To use Query Editor v2, you need to create a destination database and connect to it. For instructions, see Creating destination databases in Amazon Redshift. We will use mydb as the destination database.
Verify the historic data in Amazon Redshift.
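If you prefer the CLI over Query Editor v2, the Redshift Data API can run the same check. Here the cluster identifier and database user are assumptions carried over from earlier steps, and {statement-id} is a placeholder for the ID returned by the first call; depending on how the integration maps the source database, you may need to schema-qualify the table name:

```shell
# Run the verification query against the destination database
aws redshift-data execute-statement \
    --cluster-identifier zetl-redshift-cluster \
    --db-user awsuser \
    --database mydb \
    --sql "SELECT * FROM books_table;" \
    --region {region}

# Fetch the rows once the statement finishes
aws redshift-data get-statement-result \
    --id {statement-id} \
    --region {region}
```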
Next, add some new live data on the source database:
RDS for MySQL:
MySQL [my_db]> INSERT INTO books_table VALUES (2, 'AWS', 'Jeff', 1960, 'Amazon');
RDS for PostgreSQL:
zetldb=> INSERT INTO books_table VALUES (2, 'AWS', 'Jeff', 1960, 'Amazon');
Verify that the new changes you made to the source database are replicated to Amazon Redshift within seconds.
You have successfully configured a zero-ETL integration and new changes on the source will be replicated to the target. However, there are a few limitations that apply to RDS zero-ETL integrations with Amazon Redshift.
Clean up
After verification is complete, you can clean up the resources you created: the Redshift cluster, the RDS instance, and the parameter and subnet groups. To delete the Redshift cluster without taking a final snapshot, you can use the delete-cluster command.
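A sketch of that deletion follows, assuming the cluster identifier used earlier in this post. Delete the zero-ETL integration first (for example with aws rds delete-integration), since an active integration depends on the cluster:

```shell
# Delete the target cluster without a final snapshot
aws redshift delete-cluster \
    --cluster-identifier zetl-redshift-cluster \
    --skip-final-cluster-snapshot \
    --region {region}
```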
In this post, we showed how you can run a zero-ETL integration from Amazon RDS for MySQL and Amazon RDS for PostgreSQL to Amazon Redshift using the AWS CLI. This minimizes the need to maintain complex data pipelines and enables near real-time analytics on transactional and operational data. With zero-ETL integrations, you can focus more on deriving value from your data and less on managing data movement.
As next steps, consider exploring how you can apply this zero-ETL approach to other data sources in your organization. You might also want to investigate how to combine zero-ETL with the advanced analytics capabilities of Amazon Redshift, such as ML integration or federated queries. To learn more about zero-ETL integrations and start implementing them in your own environment, refer to the zero-ETL documentation and begin simplifying your data integration today.