AWS Database Blog

Migrate a self-managed MySQL database to Amazon Aurora MySQL using AWS DMS homogeneous data migrations

Migrating your self-managed MySQL database to Amazon Aurora MySQL-Compatible Edition can significantly enhance your database’s performance, scalability, and manageability. AWS Database Migration Service (AWS DMS) makes this migration process straightforward and efficient. Homogeneous data migrations in AWS DMS simplify the migration of self-managed, on-premises databases to their Amazon Relational Database Service (Amazon RDS) equivalents. For the list of supported source databases, see Source data providers for DMS homogeneous data migrations. For the list of supported target databases, see Target data providers for DMS homogeneous data migrations.

In this post, we provide a comprehensive, step-by-step guide for migrating an on-premises self-managed encrypted MySQL database to Amazon Aurora MySQL using AWS DMS homogeneous data migrations over a private network. We show a complete end-to-end example of setting up and executing an AWS DMS homogeneous migration, consolidating all necessary configuration steps and best practices. By following this thorough walkthrough, you’ll gain practical insights into the entire migration process, from preparing your source and target environments to performing the final cutover.

Solution overview

Homogeneous data migrations are serverless and make it possible to migrate data between the same database engines, such as from a MySQL instance to an Aurora MySQL instance. With homogeneous data migrations, you can migrate data, table partitions, data types, and secondary objects such as functions and stored procedures. For more information, see Migrating data from MySQL databases with homogeneous data migrations in AWS DMS.

For homogeneous data migrations of the full load and change data capture (CDC) type, AWS DMS uses mydumper to read data from your source database and store it on the disk attached to the serverless environment. After AWS DMS reads your source data, it uses myloader to load the data into your target database. After AWS DMS completes the full load, it uses native binary log replication to replicate ongoing changes to the target.

The following diagram shows the architecture for migrating a self-managed, encrypted MySQL database to Aurora MySQL-Compatible using AWS DMS homogeneous data migrations.

You can use the same process to migrate an RDS for MySQL or self-managed MySQL database to an Aurora MySQL-Compatible database hosted in a different VPC. To implement this solution, use the following steps:

  1. Prepare the source environment.
  2. Prepare the target environment.
  3. Create an AWS DMS subnet group.
  4. Import a certificate for in-transit encryption.
  5. Create secrets for the source and target databases in AWS Secrets Manager.
  6. Create an instance profile.
  7. Create data providers.
  8. Create a migration project.
  9. Create a data migration.
  10. Monitor replication.
  11. Perform cutover.
  12. Clean up.

Limitations

Keep in mind the following limitations:

  • MySQL sources support selection rules only for full load migrations. Selection rules allow you to choose the schema, tables, or both that you want to include in your replication. For more information, see Selection rules for homogeneous data migrations.
  • You can’t use homogeneous data migrations in AWS DMS to migrate data from a higher database version to a lower database version.
  • Homogeneous data migrations migrate encrypted MySQL databases and tables as unencrypted on the target database. This is because Aurora MySQL-Compatible and Amazon RDS for MySQL don't support encryption using the keyring plugin. However, we recommend using AWS Key Management Service (AWS KMS) for encryption at rest for your Aurora MySQL DB cluster.

We recommend that you review Limitations for homogeneous data migrations before migrating your self-managed MySQL database to Aurora MySQL-Compatible.

Prerequisites

Make sure you meet the following prerequisites:

  • An on-premises, self-managed MySQL source database with private network connectivity between your on-premises network and your VPC
  • A target Aurora MySQL-Compatible DB cluster in your AWS account
  • The AWS Command Line Interface (AWS CLI) installed and configured with permissions for AWS DMS, Amazon RDS, AWS Secrets Manager, AWS KMS, Amazon EC2, and IAM

For the example in this post, we use the following configuration:

  • AWS Region: us-east-2
  • Amazon VPC ID: vpc-00000
  • Target Aurora MySQL DB cluster: aurora-mysql-01
  • VPC security group attached to the Aurora MySQL DB cluster: aurora-mysql-sg
  • Binary log format: ROW
  • VPC security group attached to the AWS DMS instance profile: dms-mtm-sg
  • KMS key used to encrypt secrets in Secrets Manager: "arn:aws:kms:us-east-2:<account-id>:key/mrk-d8e4262axxxx"
  • Source database secret in Secrets Manager: mysql-secret
  • Target database secret in Secrets Manager: aurora-secret
  • On-premises source database host IP: 10.16.2.125/16
  • IAM role for AWS DMS homogeneous data migrations: HomogeneousDataMigrationRole-01

Prepare the source environment

In this section, you prepare the source database for homogeneous data migrations.

Create a database user

To run homogeneous data migrations, you must use a database user with the required privileges for replication. Use the following script to create a database user with the required permissions in the source MySQL database:

CREATE USER 'dms_user'@'%' IDENTIFIED BY 'your_password';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dms_user'@'%';
GRANT SELECT, RELOAD, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'dms_user'@'%';
GRANT BACKUP_ADMIN ON *.* TO 'dms_user'@'%';

For more information on database user permissions, see Using a MySQL compatible database as a source for homogeneous data migrations in AWS DMS.
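Optionally, verify that the grants are in place before you continue; the following statement is a quick sanity check for the user created above:

SHOW GRANTS FOR 'dms_user'@'%';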

Enable binary logging and other required parameters

To configure CDC, enable binary logging on the source database by setting the server-id, log-bin, binlog_format, expire_logs_days, binlog_checksum, and binlog_row_image parameters in the my.ini (Windows) or my.cnf (UNIX) file of your MySQL database. You must reboot the source DB instance for the changes to take effect. Refer to Using a self-managed MySQL compatible database as a source for homogeneous data migrations for the recommended values for these parameters.

Adjust the binary log retention (binlog_expire_logs_seconds) based on the source database size and workload. If the retention is set too low and the full load takes longer than expected, binary logs can be deleted before AWS DMS reads them, which breaks ongoing replication.
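The following my.cnf snippet is a minimal sketch of these settings. The values shown are illustrative examples only; use the values recommended in the AWS DMS documentation for your MySQL version and workload, and restart the database after changing them.

[mysqld]
server-id = 1                          # example: any unique, nonzero server ID
log-bin = mysql-bin                    # enable binary logging
binlog_format = ROW                    # row-based logging for CDC
binlog_row_image = FULL                # log full row images
binlog_checksum = NONE                 # disable binlog checksums
binlog_expire_logs_seconds = 86400     # example retention of 24 hours; size this for your full load duration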

Network configuration

The on-premises security firewall should allow incoming and outgoing traffic between the on-premises network and the VPC CIDR range.
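Before you create the AWS DMS resources, you can optionally confirm that this network path is open. For example, from an Amazon EC2 instance in one of the private subnets you plan to use for the AWS DMS subnet group, a simple TCP check against the example source host in this post looks like the following (assuming the netcat utility is installed):

# Verify that the on-premises MySQL listener is reachable from the VPC
# 10.16.2.125 is the example source host IP from this post
nc -zv 10.16.2.125 3306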

Prepare the target environment

In this section, we walk through the steps to prepare the target environment.

Create an IAM role for AWS DMS homogeneous migrations

To run homogeneous data migrations, you must create an IAM policy and an IAM role in your AWS account to interact with other AWS services. For more details on required IAM permissions, see Creating required IAM resources for homogeneous data migrations in AWS DMS. Complete the following steps:

  1. Create the HomogeneousDataMigrationRole-01 IAM role:
    aws iam create-role \
        --role-name HomogeneousDataMigrationRole-01 \
        --assume-role-policy-document '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": [
                            "dms.us-east-2.amazonaws.com"
                        ]
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }'
  2. Create the AWSDMSHomogeneousRolePolicy-01 IAM policy and attach it to the HomogeneousDataMigrationRole-01 IAM role. This policy allows AWS DMS to perform the required steps for data migration.
    aws iam put-role-policy \
        --role-name HomogeneousDataMigrationRole-01 \
        --policy-name AWSDMSHomogeneousRolePolicy-01 \
        --policy-document '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": [
                        "ec2:DescribeSecurityGroups",
                        "ec2:DescribeVpcPeeringConnections",
                        "ec2:DescribeVpcs",
                        "ec2:DescribePrefixLists",
                        "logs:DescribeLogGroups"
                    ],
                    "Resource": "*"
                },
                {
                    "Effect": "Allow",
                    "Action": [
                        "servicequotas:GetServiceQuota"
                    ],
                    "Resource": "arn:aws:servicequotas:*:*:vpc/L-0EA8095F"
                },
                {
                    "Effect": "Allow",
                    "Action": [
                        "logs:CreateLogGroup",
                        "logs:DescribeLogStreams"
                    ],
                    "Resource": "arn:aws:logs:*:*:log-group:dms-data-migration-*"
                },
                {
                    "Effect": "Allow",
                    "Action": [
                        "logs:CreateLogStream",
                        "logs:PutLogEvents"
                    ],
                    "Resource": "arn:aws:logs:*:*:log-group:dms-data-migration-*:log-stream:dms-data-migration-*"
                },
                {
                    "Effect": "Allow",
                    "Action": "cloudwatch:PutMetricData",
                    "Resource": "*"
                }
            ]
        }'
        
  3. Create the AWSDMSHomogeneousRolePolicy-02 IAM policy and add the policy to the HomogeneousDataMigrationRole-01 IAM role. This policy allows AWS DMS to access the secrets in Secrets Manager for data migration.
    aws iam put-role-policy \
        --role-name HomogeneousDataMigrationRole-01 \
        --policy-name AWSDMSHomogeneousRolePolicy-02 \
        --policy-document '{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": [
                        "secretsmanager:DescribeSecret",
                        "secretsmanager:GetSecretValue"
                    ],
                    "Resource": [
                        "arn:aws:secretsmanager:us-east-2:<account-id>:secret:aurora-secret*",
                        "arn:aws:secretsmanager:us-east-2:<account-id>:secret:mysql-secret*"
                    ]
                },
                {
                    "Effect": "Allow",
                    "Action": "kms:Decrypt",
                    "Resource": [
                        "arn:aws:kms:us-east-2:<account-id>:key/mrk-d8e4262axxxx"
                    ]
                }
            ]
        }'

Create a database user

AWS DMS requires a database user with certain permissions to migrate data into your target Aurora MySQL database. For more information on creating a database user and permissions, see Using a MySQL compatible database as a target for homogeneous data migrations in AWS DMS. In this post, you use the following script to create a database user with the required permissions in your MySQL target database:

CREATE USER 'dms_user'@'%' IDENTIFIED BY 'your_password';
GRANT ALTER, CREATE, DROP, INDEX, INSERT, UPDATE, DELETE, SELECT, CREATE VIEW,
CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER, EXECUTE, REFERENCES ON *.* 
TO 'dms_user'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT  ON *.* TO 'dms_user'@'%';

For limitations on user name and password when using a MySQL compatible database as a target for homogeneous data migrations, see Limitations for using a MySQL compatible database as a target for homogeneous data migrations.

AWS DMS also assigns this user (dms_user) as the DEFINER of the migrated MySQL database objects. If you want to keep the same DEFINER in Aurora MySQL-Compatible as in your on-premises environment, consider one of the following approaches:

  1. Drop and recreate the required objects, such as triggers, procedures, functions, and views, in Aurora MySQL after the migration, ensuring the objects have the required DEFINER (see the example after this list).
  2. If app_user is the DEFINER in the source database, create app_user in Aurora MySQL-Compatible and use that user for the migration instead of dms_user.
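For the first approach, the following is a minimal sketch of recreating a trigger in Aurora MySQL with the original DEFINER after the migration completes. The schema, trigger, and table names are hypothetical placeholders:

-- Recreate a migrated trigger with the original DEFINER (placeholder names)
DROP TRIGGER IF EXISTS mydb.before_orders_insert;

CREATE DEFINER = 'app_user'@'%' TRIGGER mydb.before_orders_insert
BEFORE INSERT ON mydb.orders
FOR EACH ROW
SET NEW.created_at = NOW();

Note that creating an object with a DEFINER other than the current user can require elevated privileges; alternatively, connect as app_user to run the CREATE statement.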

Configure VPC security groups for the AWS DMS instance profile in the target AWS account

Complete the following steps to configure the aurora-mysql-sg and dms-mtm-sg VPC security groups:

  1. Create a VPC security group called dms-mtm-sg without any inbound rules in the target AWS account:
    $ aws ec2 create-security-group \
        --group-name "dms-mtm-sg" \
        --description "Security group for DMS MTM" \
        --vpc-id "vpc-022xxxxx"
    
    {
        "GroupId": "sg-0fdc4a80000342f7",
        "SecurityGroupArn": "arn:aws:ec2:us-east-2:xxxx:security-group/sg-0fdc4a80000342f7"
    }
  2. Verify the security group configuration using the following code:
    $ aws ec2 describe-security-groups \
        --filters "Name=vpc-id,Values=vpc-022xxxxx" "Name=group-name,Values=dms-mtm-sg"
  3. Fetch the security group ID of the aurora-mysql-sg security group, which is attached to your Aurora MySQL database:
    $ aws ec2 describe-security-groups \
        --query 'SecurityGroups[0].GroupId' \
        --filters "Name=vpc-id,Values=vpc-022xxxxx" "Name=group-name,Values=aurora-mysql-sg" \
        --output text
    
    sg-0e0000d07096999d
  4. Update the inbound rules of the security group aurora-mysql-sg by adding access from dms-mtm-sg:
    $ aws ec2 authorize-security-group-ingress \
        --group-id "sg-0e0000d0709699d" \
        --ip-permissions '[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{
                "GroupId": "sg-0fdc4a80000342f7",
                "Description": "Allow inbound connection from AWS DMS MTM"
            }]
        }]'    
        
    Hint:
    1. "sg-0e0000d07096999d": Security group ID attached to the Aurora MySQL DB cluster
    2. "sg-0fdc4a80000342f7": Security group ID attached to the AWS DMS instance profile

Modify the DB cluster parameter group

Set the log_bin_trust_function_creators parameter to 1 in the custom DB cluster parameter group associated with your Aurora MySQL DB cluster. For more information, see Modifying parameters in a DB cluster parameter group. Changing this parameter allows AWS DMS to create functions and triggers in the Aurora MySQL database as part of the data migration.
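For example, you can change the parameter with the AWS CLI. The parameter group name below is a placeholder for your own custom DB cluster parameter group:

# Replace aurora-mysql-custom-pg with your custom DB cluster parameter group name
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name "aurora-mysql-custom-pg" \
  --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"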

Create a subnet group for AWS DMS migration project

A subnet group includes subnets from different Availability Zones that your instance profile can use. Note that a replication subnet group is an AWS DMS resource and is distinct from the subnet groups that Amazon VPC and Amazon RDS use. Create a subnet group for your AWS DMS instance profile. For more information, see Creating a subnet group for an AWS DMS migration project. In this post, you create a subnet group using private subnets in your VPC:

aws dms create-replication-subnet-group \
   --replication-subnet-group-identifier "dms-subnet-group-01" \
   --replication-subnet-group-description "Subnet group for DMS replication" \
   --subnet-ids "subnet-ID1" "subnet-ID2" "subnet-ID3"

Import the certificates for in-transit encryption

You can encrypt connections for the source and target endpoints using SSL. To do so, you import a certificate (.pem file) into AWS DMS and assign that certificate to a data provider. Complete the following steps to import the certificate .pem files for the source MySQL database and the Aurora MySQL DB cluster. You use these certificates in a later step to create the data providers.

  1. Import the certificate for the source MySQL database:
    -- Import Certificate 
    aws dms import-certificate \
        --certificate-identifier mysql-cert \
        --certificate-pem file://<name>.pem
        
    -- Get Certificate ARN
    aws dms describe-certificates \
         --filters Name=certificate-id,Values=mysql-cert \
         --query 'Certificates[0].CertificateArn' \
         --output text
     
    Output:   
    arn:aws:dms:us-east-2:<account-id>:cert:CQCPN3ZMSJEDAHJDHJD4HEQ    
  2. Refer to Download certificate bundles for Aurora to download the latest certificate bundle to import into AWS DMS.
  3. Import the certificate for the Aurora MySQL DB cluster:
    -- Import Certificate
    aws dms import-certificate \
        --certificate-identifier aurora-cert \
        --certificate-pem file://global-bundle.pem
        
    -- Get Certificate ARN
    aws dms describe-certificates \
         --filters Name=certificate-id,Values=aurora-cert \
         --query 'Certificates[0].CertificateArn' \
         --output text
     
    Output:   
    arn:aws:dms:us-east-2:<account-id>:cert:BACPN3ZMSJEDRBBFKEEBTMDJ      
        

Create secrets for the source and target databases in Secrets Manager

You store the database credentials in AWS Secrets Manager. AWS DMS uses this information to connect to your databases. Complete the following steps to create the secrets:

  1. Create the secret for the source database user dms_user in Secrets Manager:
    aws secretsmanager create-secret \
     --name "mysql-secret" \
     --description "Credentials for self-managed MySQL database" \
     --kms-key-id "arn:aws:kms:us-east-2:<account-id>:key/mrk-d8e4262axxxx" \
     --secret-string '{
         "username":"dms_user", 
         "password":"xxxx", 
         "engine":"mysql", 
         "host":"mysql-source.abc.com", 
         "port":3306,
         "dbname":"mysql"
         }'  
    
    Output:
    {
        "ARN": "arn:aws:secretsmanager:us-east-2:<account-id>:secret:mysql-secret-ZCMgt0",
        "Name": "mysql-secret",
        "VersionId": "bc368ee6-32d4-4ca8-9658-9b8182b4bbe9"
    }
  2. Create the secret for the target database user in Secrets Manager:
    aws secretsmanager create-secret \
     --name "aurora-secret" \
     --description "Credentials for target Aurora MySQL database" \
     --kms-key-id "arn:aws:kms:us-east-2:<account-id>:key/mrk-d8e4262axxxx" \
     --secret-string '{
         "username":"dms_user", 
         "password":"xxxx", 
         "engine":"aurora-mysql", 
         "host":"aurora-mysql-01.cluster-xxx-east-2.rds.amazonaws.com", 
         "port":3306,
         "dbname":"mysql"
         }' 
            
    Output:
    {
        "ARN": "arn:aws:secretsmanager:us-east-2:<account-id>:secret:aurora-secret-py7NlB",
        "Name": "aurora-secret",
        "VersionId": "1a9ee7dd-23bf-4ad8-a89c-87c14cc7698b"
    } 

Create an instance profile

An instance profile specifies the network and security settings for the serverless environment in which your migration project runs. Create an AWS DMS instance profile for the database migration; you use it in a later step to create the migration project.

aws dms create-instance-profile \
  --instance-profile-name "dms-instance-profile-01" \
  --description "AWS DMS Instance Profile" \
  --network-type "IPV4" \
  --subnet-group-identifier "dms-subnet-group-01" \
  --vpc-security-groups "sg-0fdc4a80000342f7" \
  --no-publicly-accessible \
  --kms-key-arn "arn:aws:kms:us-east-2:<account-id>:key/mrk-d8e4262axxxx"

{
    "InstanceProfile": {
        "InstanceProfileArn": "arn:aws:dms:us-east-2:<account-id>:instance-profile:YN5NAQMVQJH7XEB2EYGJ2CL63A",
        "KmsKeyArn": "arn:aws:kms:us-east-2:<account-id>:key/mrk-d8e4262axxxx",
        "PubliclyAccessible": false,
        "NetworkType": "IPV4",
        "InstanceProfileName": "dms-instance-profile-01",
        "Description": "AWS DMS Instance Profile",
        "InstanceProfileCreationTime": "2025-05-09T12:49:10.106814+00:00",
        "SubnetGroupIdentifier": "dms-subnet-group-01",
        "VpcSecurityGroups": [
            "sg-0fdc4a80000342f7"
        ]
    }
}

Hint:
1. "sg-0fdc4a80000342f7": Security group ID attached to the AWS DMS instance profile
2. "dms-instance-profile-01": Instance profile that uses the subnet group in the VPC with access to both the source and target databases

Create data providers

A data provider stores information about your database. AWS DMS uses this information to connect to your database. Complete the following steps to create AWS DMS data providers for the source on-premises MySQL database and the target Aurora MySQL database. You use these data providers in a later step to create the migration project.

  1. Use the following configuration to create a data provider for the source on-premises MySQL database:
    $ aws dms create-data-provider \
      --data-provider-name "mysql-provider" \
      --engine mysql \
      --settings '{
        "MySqlSettings": {
          "ServerName": "mysql-source.abc.com",
          "Port": 3306,
          "SslMode": "verify-ca",
          "CertificateArn": "arn:aws:dms:us-east-2:<account-id>:cert:CQCPN3ZMSJEDAHJDHJD4HEQ"
        }
      }'
    
    Output: 
    {
        "DataProvider": {
            "DataProviderName": "mysql-provider",
            "DataProviderArn": "arn:aws:dms:us-east-2:<account-id>:data-provider:CHXYVOVOAFBBDFZJIH72QJ6XJ4",
            "DataProviderCreationTime": "2025-05-09T12:52:03.270264+00:00",
            "Engine": "mysql",
            "Settings": {
                "MySqlSettings": {
                    "ServerName": "mysql-source.abc.com",
                    "Port": 3306,
                    "SslMode": "verify-ca",
                    "CertificateArn": "arn:aws:dms:us-east-2:<account-id>:cert:CQCPN3ZMSJEDAHJDHJD4HEQ"
                }
            }
        }
    }
  2. Use the following configuration to create a data provider for the target Aurora MySQL DB cluster:
    $ aws dms create-data-provider \
      --data-provider-name "aurora-provider" \
      --engine aurora \
      --settings '{
        "MySqlSettings": {
          "ServerName": "aurora-mysql-01",
          "Port": 3306,
          "SslMode": "verify-ca",
          "CertificateArn": "arn:aws:dms:us-east-2:<account-id>:cert:BACPN3ZMSJEDRBBFKEEBTMDJ"
        }
      }' 
      
    Output:
    {
        "DataProvider": {
            "DataProviderName": "aurora-provider",
            "DataProviderArn": "arn:aws:dms:us-east-2:<account-id>:data-provider:VD2OUNX25BEHNCYFTF76MT4OYI",
            "DataProviderCreationTime": "2025-05-09T12:55:47.574122+00:00",
            "Engine": "aurora",
            "Settings": {
                "MySqlSettings": {
                    "ServerName": "aurora-mysql-01",
                    "Port": 3306,
                    "SslMode": "verify-ca",
                    "CertificateArn": "arn:aws:dms:us-east-2:<account-id>:cert:BACPN3ZMSJEDRBBFKEEBTMDJ"
                }
            }
        }
    }

Create the migration project

Migration projects in AWS DMS are serverless; AWS DMS automatically provisions the cloud resources for your migration project. You use a migration project to migrate data from your source database to a target database of the same engine type in AWS. You specify the instance profile, the source and target data providers, and the secrets from AWS Secrets Manager when creating the migration project. Create the AWS DMS migration project with the data providers, secrets, and instance profile that you created in the previous steps:

aws dms create-migration-project \
  --migration-project-name "mysql-migration-project-01" \
  --instance-profile-identifier "dms-instance-profile-01" \
  --source-data-provider-descriptors '[{
    "DataProviderIdentifier": "mysql-provider",
    "SecretsManagerSecretId": "mysql-secret",
    "SecretsManagerAccessRoleArn": "arn:aws:iam::<account-id>:role/HomogeneousDataMigrationsRole"
  }]' \
  --target-data-provider-descriptors '[{
    "DataProviderIdentifier": "aurora-provider",
    "SecretsManagerSecretId": "aurora-secret",
    "SecretsManagerAccessRoleArn": "arn:aws:iam::<account-id>:role/HomogeneousDataMigrationsRole"
  }]'
  
Output:
{
    "MigrationProject": {
        "MigrationProjectName": "mysql-migration-project-01",
        "MigrationProjectArn": "arn:aws:dms:us-east-2:<account-id>:migration-project:6LNQE3OB6JFNNG2I4RGO2PTCVA",
        "MigrationProjectCreationTime": "2025-05-09T13:21:06.416965+00:00",
        "SourceDataProviderDescriptors": [
            {
                "SecretsManagerSecretId": "arn:aws:secretsmanager:us-east-2:<account-id>:secret:mysql-secret-ZCMgt0",
                "SecretsManagerAccessRoleArn": "arn:aws:iam::<account-id>:role/HomogeneousDataMigrationsRole",
                "DataProviderName": "mysql-provider",
                "DataProviderArn": "arn:aws:dms:us-east-2:<account-id>:data-provider:CHXYVOVOAFBBDFZJIH72QJ6XJ4"
            }
        ],
        "TargetDataProviderDescriptors": [
            {
                "SecretsManagerSecretId": "arn:aws:secretsmanager:us-east-2:<account-id>:secret:aurora-secret-py7NlB",
                "SecretsManagerAccessRoleArn": "arn:aws:iam::<account-id>:role/HomogeneousDataMigrationsRole",
                "DataProviderName": "aurora-provider",
                "DataProviderArn": "arn:aws:dms:us-east-2:<account-id>:data-provider:VD2OUNX25BEHNCYFTF76MT4OYI"
            }
        ],
        "InstanceProfileArn": "arn:aws:dms:us-east-2:<account-id>:instance-profile:YN5NAQMVQJH7XEB2EYGJ2CL63A",
        "InstanceProfileName": "dms-instance-profile-01"
    }
}  

Verify the migration project configuration. The project is now ready to use for homogeneous data migrations.
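For example, the following AWS CLI query returns the project created above (a minimal check; adjust the name if yours differs):

aws dms describe-migration-projects \
  --query "MigrationProjects[?MigrationProjectName=='mysql-migration-project-01']" \
  --output json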

Create a data migration

After you create a migration project, you create a data migration within it. You can create several homogeneous data migrations of different types in a single migration project. Create a data migration in the migration project that you created in the previous step:

aws dms create-data-migration \
  --data-migration-name "mysql-data-migration-01" \
  --migration-project-identifier "mysql-migration-project-01" \
  --data-migration-type "full-load-and-cdc" \
  --service-access-role-arn "arn:aws:iam::<account-id>:role/HomogeneousDataMigrationRole-01" \
  --enable-cloudwatch-logs \
  --number-of-jobs 8
  
Output:
{
    "DataMigration": {
        "DataMigrationName": "mysql-data-migration-01",
        "DataMigrationArn": "arn:aws:dms:us-east-2:<account-id>:data-migration:PDhxxxrGgZHKpxoYGItvqfkmhyzWHmL",
        "DataMigrationCreateTime": "2025-05-09T13:25:30.513527+00:00",
        "ServiceAccessRoleArn": "arn:aws:iam::<account-id>:role/HomogeneousDataMigrationsRole",
        "MigrationProjectArn": "arn:aws:dms:us-east-2:<account-id>:migration-project:6LNQE3OB6JFNNG2I4RGO2PTCVA",
        "DataMigrationType": "full-load-and-cdc",
        "DataMigrationSettings": {
            "NumberOfJobs": 8,
            "CloudwatchLogsEnabled": true
        },
        "SourceDataSettings": [
            {}
        ],
        "TargetDataSettings": [
            {}
        ],
        "DataMigrationStatus": "READY",
        "PublicIpAddresses": []
    }
}  

After verifying your configuration, start the data migration:

aws dms start-data-migration \
  --data-migration-identifier "mysql-data-migration-01" \
  --start-type start-replication

Output:
{
    "DataMigration": {
        "DataMigrationName": "mysql-data-migration-01",
        "DataMigrationArn": "arn:aws:dms:us-east-2:<account-id>:data-migration:PDhxxxrGgZHKpxoYGItvqfkmhyzWHmL",
        "DataMigrationCreateTime": "2025-05-09T13:25:30.513527+00:00",
        "ServiceAccessRoleArn": "arn:aws:iam::<account-id>:role/HomogeneousDataMigrationsRole",
        "MigrationProjectArn": "arn:aws:dms:us-east-2:<account-id>:migration-project:6LNQE3OB6JFNNG2I4RGO2PTCVA",
        "DataMigrationType": "full-load-and-cdc",
        "DataMigrationSettings": {
            "NumberOfJobs": 4,
            "CloudwatchLogsEnabled": true
        },
        "SourceDataSettings": [
            {}
        ],
        "TargetDataSettings": [
            {}
        ],
        "DataMigrationStatus": "STARTING",
        "PublicIpAddresses": []
    }
}    

You can check the data migration status as follows:

$ aws dms describe-data-migrations \
     --filters "Name=data-migration-name,Values=mysql-data-migration-01" \
     --no-without-statistics \
     --query 'DataMigrations[0].{Status:DataMigrationStatus,Statistics:DataMigrationStatistics}' \
     --output table
-------------------------------------------------------------------------------------------------------------------------------
|                                                   DescribeDataMigrations                                                    |
+--------------------------------------------------------+--------------------------------------------------------------------+
|  Status                                                |  STARTING                                                          |
+--------------------------------------------------------+--------------------------------------------------------------------+
||                                                        Statistics                                                         ||
|+------------+--------------------+---------------------+----------------+---------------+-----------------+----------------+|
|| CDCLatency | ElapsedTimeMillis  | FullLoadPercentage  | TablesErrored  | TablesLoaded  |  TablesLoading  | TablesQueued   ||
|+------------+--------------------+---------------------+----------------+---------------+-----------------+----------------+|
||  0         |  0                 |  99                 |  0             |  0            |  0              |  0             ||
|+------------+--------------------+---------------------+----------------+---------------+-----------------+----------------+|

For more information on checking the status and progress of the data migration, see Monitoring data migrations in AWS DMS.

Monitor replication

After you start your homogeneous data migration, you can monitor the replication status and CDC latency. To see how far the target database is behind the source, connect to the target DB instance and run SHOW REPLICA STATUS (Aurora MySQL version 3) or SHOW SLAVE STATUS (Aurora MySQL version 2). In the following command output, the Seconds_Behind_Source field tells you how far the target DB instance is behind the source:

target|MySQL[(none)]> show replica status\G
*************************** 1. row ***************************
Replica_IO_State: Waiting for source to send event
Source_Host: mysql-source.abc.com
Source_User: dms_user
Source_Port: 3306
Seconds_Behind_Source: 0
........
1 row in set (0.00 sec)

To check the overall CDC latency (in seconds) during the CDC phase, you can use the OverallCDCLatency Amazon CloudWatch metric. For more information on monitoring the data migration's status and progress, see Monitoring data migrations in AWS DMS.

To view the metrics, use the following steps:

  1. On the AWS DMS console, in the navigation pane, choose Migration projects.
  2. Choose the project mysql-migration-project-01.
  3. Navigate to the Data migrations tab.
  4. Choose the data migration mysql-data-migration-01.
  5. Choose the Monitoring tab.

You can also use the following AWS CLI command to view the OverallCDCLatency metric:

aws cloudwatch get-metric-statistics \
--namespace "AWS/DMS/DataMigrations" \
--metric-name "OverallCDCLatency" \
--dimensions Name="DataMigrationExternalResourceId",Value="PDhxxxrGgZHKpxoYGItvqfkmhyzWHmL" \
--start-time "2025-05-13T00:00:00Z" \
--end-time "2025-05-15T23:59:59Z" \
--period 300 \
--statistics Average

Hint: You get the DataMigrationExternalResourceId from the ARN of the data migration created in an earlier step: "arn:aws:dms:us-east-2:<account-id>:data-migration:PDhxxxrGgZHKpxoYGItvqfkmhyzWHmL"
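For example, you can extract that identifier from the data migration ARN with a short AWS CLI pipeline (a sketch based on the hint above):

# Print the last segment of the data migration ARN
aws dms describe-data-migrations \
  --filters "Name=data-migration-name,Values=mysql-data-migration-01" \
  --query 'DataMigrations[0].DataMigrationArn' \
  --output text | awk -F: '{print $NF}'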

You can also use the AuroraBinlogReplicaLag CloudWatch metric associated with the writer instance to check the lag between the source and target database. For more information, see Instance-level metrics for Amazon Aurora.
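A similar AWS CLI query for the writer instance is shown below; the DB instance identifier and time range are placeholders for your own values:

# Replace the DBInstanceIdentifier value with your Aurora writer instance identifier
aws cloudwatch get-metric-statistics \
  --namespace "AWS/RDS" \
  --metric-name "AuroraBinlogReplicaLag" \
  --dimensions Name="DBInstanceIdentifier",Value="aurora-mysql-01-instance-1" \
  --start-time "2025-05-13T00:00:00Z" \
  --end-time "2025-05-15T23:59:59Z" \
  --period 300 \
  --statistics Average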

Perform cutover

Once the replica lag is near zero, you're ready to cut over and point the application to the Aurora MySQL DB cluster in the target account. We recommend planning your cutover during a low-traffic window and following your in-house business cutover checklist and processes. The following are the key steps for the cutover process:

  1. Stop accepting connections on the source database.
  2. Make sure CDC latency from the source to the target DB instance is 0.
  3. To get the binary log position on the source, run SHOW MASTER STATUS and compare the output with the target's binary log coordinates, such as Exec_Source_Log_Pos and Read_Source_Log_Pos; they should be the same (see the example after this list).
  4. On the target Aurora MySQL instance, run SHOW REPLICA STATUS to obtain the binary log coordinates.
  5. Check the Seconds_Behind_Source field to determine how far the target database instance is behind the source.
  6. After stopping active database connections on the source database and checking the replication status, stop the data migration:
    aws dms stop-data-migration \
    --data-migration-identifier "mysql-data-migration-01"
  7. Update the application configuration or DNS CNAME record with the target database endpoints.
  8. You can also set up replication from the target Aurora MySQL cluster to the source on-premises database using binary log replication to address fallback requirements before starting the application with the Aurora MySQL database.
  9. Start your application with the Aurora MySQL database.
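The following is a minimal example of the binary log coordinate check from steps 3-5. Run the first statement on the source and the second on the target, then compare the File and Position values from the source with the Source_Log_File, Read_Source_Log_Pos, and Exec_Source_Log_Pos values on the target:

-- On the source MySQL database
SHOW MASTER STATUS;

-- On the target Aurora MySQL writer instance
SHOW REPLICA STATUS\G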

Clean up

As part of this migration, you deployed several resources for AWS DMS, Secrets Manager, and other services in your AWS account. These resources incur costs while they are in use, so be sure to remove any resources you no longer need. Use the following steps to delete the resources created in this post; make sure to replace the values based on your deployment:

-- delete data migration
aws dms delete-data-migration \
--data-migration-identifier "mysql-data-migration-01"

-- delete migration project
aws dms delete-migration-project \
--migration-project-identifier "mysql-migration-project-01" 

-- delete instance profile
aws dms delete-instance-profile \
--instance-profile-identifier "dms-instance-profile-01"

-- delete source data provider
aws dms delete-data-provider \
--data-provider-identifier "mysql-provider" 

-- delete target data provider 
aws dms delete-data-provider \
--data-provider-identifier "aurora-provider" 

-- delete subnet group
aws dms delete-replication-subnet-group \
--replication-subnet-group-identifier "dms-subnet-group-01" 

-- delete source certificate
aws dms delete-certificate \
--certificate-arn "<replace-with-source-certificate-arn>"

-- delete target certificate 
aws dms delete-certificate \
--certificate-arn "<replace-with-target-certificate-arn>" 

-- delete source secret
aws secretsmanager delete-secret \
--secret-id "mysql-secret" 

-- delete target secret
aws secretsmanager delete-secret \
--secret-id "aurora-secret" 

-- delete IAM Policies
aws iam delete-role-policy \
--role-name "HomogeneousDataMigrationRole-01" \
--policy-name "AWSDMSHomogeneousRolePolicy-01"

aws iam delete-role-policy \
--role-name "HomogeneousDataMigrationRole-01" \
--policy-name "AWSDMSHomogeneousRolePolicy-02"

-- delete IAM Role
aws iam delete-role \
--role-name "HomogeneousDataMigrationRole-01"

-- Remove the inbound rule from the SG attached to Aurora MySQL that allows access from on-premises

aws ec2 revoke-security-group-ingress \
--group-id "<replace-with-security-grp-id-attached-to-aurora>" \
--protocol "tcp" \
--port "3306" \
--cidr "<replace-with-source-db-host-ip>" 

-- Remove the inbound rule from SG attached to aurora mysql that allows access from dms instance profile 
aws ec2 revoke-security-group-ingress \
--group-id "<replace-with-security-grp-id-attached-to-aurora>" \
--protocol "tcp" \
--port "3306" \
--source-group "<replace-with-security-grp-id-attached-to-dms-instance-profile>"

-- delete security group attached to instance profile
aws ec2 delete-security-group \
--group-id "<replace-with-security-grp-id-attached-to-dms-instance-profile>"

Conclusion

In this post, we discussed the steps involved in migrating a self-managed MySQL database to Aurora MySQL-Compatible using homogeneous data migrations in AWS DMS. We recommend testing the migration steps in non-production environments before making changes in production. We welcome your feedback. If you have any questions or suggestions, leave them in the comments section.


About the authors

Alok Srivastava

Alok is a Senior Consultant and Data Architect at AWS, specializing in database migration and modernization programs. With expertise in both traditional and cutting-edge technologies, he guides AWS customers and partners through their journey to the AWS Cloud. Alok’s role encompasses not only database solutions but also the integration of generative AI to enhance data-driven insights and innovation.

Adrian Gajewski

Adrian is a Software Development Engineer in Database Migration Service at AWS. He works on Homogeneous Migrations for MySQL and MariaDB to allow for simple migrations with the use of native database tooling, helping customers to move their databases to AWS.

Ahmed Mosaad

Ahmed is a Senior Database Engineer in Database Migration Service at AWS. He works with customers to provide guidance and technical assistance on database migration projects, helping them improve the value of their solutions when using AWS.