AWS Database Blog

Restore self-managed Db2 Linux databases in Amazon RDS for Db2

As more organizations migrate their self-managed Db2 Linux-based workloads to Amazon Relational Database Service (Amazon RDS) for Db2, migration teams are learning that preparation is key to avoiding project delays. Common roadblocks include outdated database versions, invalid objects, and improper storage configurations that surface during the migration process.

In this post, we introduce a Db2 Migration Prerequisites Validation Tool that catches these issues before they impact your timeline. This tool performs thorough pre-migration validation and guides you through the necessary preparations for migrating self-managed Db2 on Linux to Amazon RDS for Db2.

Solution overview

The Db2 Migration Prerequisites Validation Tool performs comprehensive pre-migration assessments across several validation categories to determine migration readiness. When it identifies a discrepancy, the tool offers specific, actionable recommendations for remediation, helping database administrators and migration teams systematically address potential issues. You must resolve the identified problems before creating the final on-premises Db2 backup, which will be used for restoration in Amazon RDS for Db2.

This proactive approach reduces failures and provides a smooth transition to Amazon RDS for Db2. For step-by-step guidance on migrating your self-managed Db2 databases from Linux to Amazon RDS for Db2, see Migrating from Linux to Linux for Amazon RDS for Db2.

The following diagram illustrates the general process flow to facilitate successful restoration from an on-premises, self-managed Db2 database on Linux to Amazon RDS for Db2.

The tool offers the following key features:

  • Cross-platform support
  • Multiple operation modes:
    • Interactive mode – Guided experience with user prompts
    • Non-interactive mode – Automated execution for scripting
    • Remote mode – Connect to remote Db2 databases for validation
  • Comprehensive reporting:
    • Color-coded console output
    • Detailed log files with timestamps
    • PASS/FAIL/WARNING status for each check
    • Actionable recommendations for failures
  • Inventory analysis:
    • Complete database object inventory
    • Sanity checks for migration readiness
    • Object count summaries by type

You can use the tool in the following scenarios:

  • Pre-migration planning – To identify potential issues early and allocate time for remediation.
  • Migration readiness assessment – To perform a final validation before initiating the migration process to Amazon RDS for Db2.
  • After fixpack upgrades – Validate databases after applying Db2 fixpacks to confirm proper update completion.
  • Before taking final Db2 backups – To confirm readiness before restoring to Amazon RDS for Db2; a clean validation output helps safeguard against restore failures. We provide general guidance on using the database backup command later in this post.

How to use the tool

Prerequisites

Before getting started, note the following prerequisites and limitations:

Local Mode

  • The Db2 instance must be running and accessible
  • Run the tool on the Db2 server using SYSADM authority
  • The Db2 environment must be properly sourced (. ~/sqllib/db2profile)
  • Run as the Db2 instance user (for example, db2inst1)

Remote Mode

  • A Db2 client must be installed and configured
  • Network connectivity to the remote Db2 server
  • Valid Db2 user credentials with DBADM or SYSMAINT privileges
  • The database must be cataloged locally, or a DSN entry must be available in the db2dsdriver.cfg file

In an interactive flow, the tool completes the following steps:

  1. Display Db2 version information.
  2. List available instances.
  3. Identify current instance for validation.
  4. Discover local databases in current instance.
  5. Validate remote databases when you cannot run the script on the Db2 server.
  6. Allow database selection for validation.
  7. Execute comprehensive validation checks.
  8. Generate a detailed report.

The following commands show how to run the tool interactively.

Direct execution of the tool

curl -sL https://bit.ly/precheckdb2migration | bash

Download and execute

curl -sL https://bit.ly/precheckdb2migration -o db2_migration_prereq_check.sh
chmod +x db2_migration_prereq_check.sh
./db2_migration_prereq_check.sh

Remote mode usage – direct execute

export DB2USER=myuser
export DB2PASSWORD=mypassword
export DBNAME=mydatabase
curl -sL https://bit.ly/precheckdb2migration | bash -s -- --verbose

Remote mode usage – download and execute

curl -sL https://bit.ly/precheckdb2migration -o db2_migration_prereq_check.sh
chmod +x db2_migration_prereq_check.sh
export DB2USER=myuser
export DB2PASSWORD=mypassword
export DBNAME=mydatabase
# Run validation against remote database
./db2_migration_prereq_check.sh --verbose

Note: The DBNAME environment variable used for the remote connection must be either a locally cataloged remote database name or the name of a DSN entry in the db2dsdriver.cfg file.
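If the remote database isn't cataloged locally, you can define a DSN entry instead. The following is a minimal db2dsdriver.cfg sketch; the alias, host, port, and database name are hypothetical placeholders, so adjust them to your environment.

```xml
<configuration>
   <dsncollection>
      <!-- Hypothetical entry: alias is the value you export as DBNAME -->
      <dsn alias="mydatabase" name="mydatabase" host="dbserver.example.com" port="50000"/>
   </dsncollection>
</configuration>
```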

The following commands show non-interactive usage when the tool runs locally on the Db2 server.

Validate specific instance

DB2_INSTANCES=db2inst1 \
    ./db2_migration_prereq_check.sh

With custom report location

DB2_INSTANCES=db2inst1 \
REPORT_FILE_PATH=/opt/reports/db2_check.log \
./db2_migration_prereq_check.sh

NOTE: Reports prefixed with “db2_inventory” are generated in the directory where the script was launched.

Verbose output for debugging

DB2_INSTANCES=db2inst1 \
./db2_migration_prereq_check.sh --verbose

For a multi-instance execution, you can run this tool separately in each instance. See the following sample code:

#!/bin/bash
# Validate multiple instances
INSTANCES=("db2inst1" "db2inst2" "db2inst3")
REPORT_DIR="/var/log/db2_migration_checks"
mkdir -p "$REPORT_DIR"

for instance in "${INSTANCES[@]}"; do
    echo "Validating instance: $instance"
    su - "$instance" -c "
        . ~/sqllib/db2profile
        export REPORT_FILE_PATH='$REPORT_DIR/${instance}_migration_check.log'
        /path/to/db2_migration_prereq_check.sh
    "
done

Understanding the tool’s output

The tool behaves slightly differently depending on whether you run it locally or through a Db2 client.

Local run

When you run the following command, it downloads the script db2_migration_prereq_check.sh and runs it immediately.

curl -sL https://bit.ly/precheckdb2migration | bash -s -- --verbose

When you run the tool locally in a Db2 instance, it identifies all local databases and validates each of them.

The output shows the Db2 version, the number of local databases discovered, the validation results, and the size of each database, and it includes an inventory analysis of each database.

Remote run

For a remote run, you must export the three environment variables DB2USER, DB2PASSWORD, and DBNAME before running the script. The tool validates one database at a time.

Understanding validation checks

The following table lists the various category checks. The tool will also complete a database inventory analysis and save the output locally in db2_inventory_yyyymmdd_hhmmss.json.

Category Checks, Recommendations, and Remediations
db2updv115
  • Make sure you’re using the latest Amazon RDS for Db2 release to create an RDS for Db2 instance. IBM has documented a problem where running the db2updv115 tool can cause a Db2 instance crash; the fix is available in the latest Amazon RDS for Db2 release.
  • This is one of the most common RDS for Db2 restore failures. There is no reliable way to validate whether the db2updv115 tool was run on the database.
  • Recommendation: db2updv115 -d <DBName>
InDoubt Transaction
  • An indoubt transaction is one that has been prepared but not yet committed or rolled back.
  • All transactions must be committed or rolled back.
  • Recommendation: db2 list indoubt transactions with prompting
Invalid Objects
  • All invalid objects must be revalidated or dropped.
  • Recommendation: db2 "call SYSPROC.ADMIN_REVALIDATE_DB_OBJECTS()" (might need multiple runs)
Tablespace Check
  • All tablespaces must be in Normal state.
  • Recommendation: There are over 20 different tablespace states. Take corrective action accordingly. Refer to the IBM documentation detailing different tablespace states.
Non-Fenced Routines
  • All routines must be fenced.
  • Recommendation: Non-fenced routines are not permitted in Amazon RDS for Db2. Convert all the routines to fenced.
Automatic Storage
  • This is a common cause of restore failures in Amazon RDS for Db2. At least one storage group must exist.
  • Recommendation: db2 "CREATE STOGROUP <name> ON '<PATHname>'"
Database Config
  • The following parameters must have a value of No:
    • Backup pending
    • Rollforward pending
    • Restore pending
    • Upgrade pending
Database Config
  • With circular logging (LOGARCHMETH1 and LOGARCHMETH2 set to OFF), the total number of log files must be less than or equal to 254.
  • With archive logging (LOGARCHMETH1 or LOGARCHMETH2 enabled), the total number of log files must be less than or equal to 4096.
Database Sizing
  • When sizing Amazon RDS for Db2, consider database plus log space used.
  • Recommendation: Refer to the RDS_SIZING_TIER variable in db2_migration_prereq_report_yyyymmdd_hhmmss.log.
Federation
  • Amazon RDS for Db2 doesn’t support federation to all DBMSs. Federation to unsupported DBMSs won’t cause a restore failure into Amazon RDS for Db2; however, you will lose functionality if your application depends on federation to an unsupported DBMS.
  • Recommendation: Supported libraries include libdb2drda.so and libdb2rcodbc.so.
Java stored procedures
  • If JAR files are defined in your database (sysibm.sysjarcontents), then add JAR files to the RDS for Db2 instance (if needed).
  • Recommendation:
    • Install: call sqlj.install_jar('jar-url', 'jar-id') → Ex: call sqlj.install_jar('file:/home/rdsdb/Common.jar', 'COMMON')
    • Replace: call sqlj.replace_jar('jar-url', 'jar-id') → Ex: call sqlj.replace_jar('file:/home/rdsdb/Common.jar', 'COMMON')
    • Remove: call sqlj.remove_jar('jar-id') → Ex: call sqlj.remove_jar('COMMON')
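As a quick illustration of the log-file limits above, the following shell sketch checks the total log file count against the applicable limit. The LOGPRIMARY, LOGSECOND, and LOGARCHMETH1 values here are hypothetical; in practice you would read them from `db2 get db cfg for <dbname>` output.

```shell
# Hypothetical values; in practice read them from `db2 get db cfg for <dbname>`
LOGPRIMARY=50
LOGSECOND=200
LOGARCHMETH1=OFF   # OFF for both archive methods implies circular logging

TOTAL=$((LOGPRIMARY + LOGSECOND))
if [ "$LOGARCHMETH1" = "OFF" ]; then
    LIMIT=254    # circular logging limit
else
    LIMIT=4096   # archive logging limit
fi

if [ "$TOTAL" -le "$LIMIT" ]; then
    echo "LOG CHECK: PASS ($TOTAL <= $LIMIT)"
else
    echo "LOG CHECK: FAIL ($TOTAL > $LIMIT)"
fi
```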

Reading and acting on reports

The following list summarizes the reports generated.

  • db2_migration_prereq_report_yyyymmdd_hhmmss.log – Checks, recommendations, and remediations for all the databases scanned.
  • db2_inventory_detail_yyyymmdd_hhmmss.txt – For each database scanned, inventory details such as the number of tablespaces, tables, views, indexes, and more.
  • db2_inventory_summary_yyyymmdd_hhmmss.txt – Summarized inventory information for all the databases scanned.
  • db2_inventory_yyyymmdd_hhmmss.json – For each database scanned, the same inventory details in JSON format.

Navigate to the DATABASE SUMMARY section of the report to check the DATABASE READINESS status:

==========================================
DATABASE SUMMARY: DB2DB
==========================================
Checks performed: 15
Passed: 15
Warnings: 0
Failed: 0
Informational: 48
==========================================
DATABASE READINESS: READY
Database DB2DB passed all checks
==========================================
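For scripting around the report, the DATABASE READINESS line is straightforward to parse. The following sketch writes a sample report fragment (standing in for the generated log, with a hypothetical file path) and extracts the status:

```shell
# Sample fragment standing in for db2_migration_prereq_report_yyyymmdd_hhmmss.log
cat > /tmp/sample_report.log <<'EOF'
==========================================
DATABASE READINESS: READY
Database DB2DB passed all checks
==========================================
EOF

# Extract the readiness status for use in automation
STATUS=$(grep -m1 '^DATABASE READINESS:' /tmp/sample_report.log | awk -F': ' '{print $2}')
echo "Readiness: $STATUS"
```

A wrapper script can branch on this value, for example exiting nonzero unless the status is READY.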

The migration readiness levels are as follows:

  • READY FOR MIGRATION:
    • All checks passed (PASS status)
    • No critical issues found
    • Database ready for backup and to restore in Amazon RDS for Db2
  • REVIEW REQUIRED:
    • Some warnings found
    • Manual review of recommendations needed
    • Migration possible with considerations
  • NOT READY FOR MIGRATION:
    • Critical failures found
    • Must address failed checks before migration
    • Migration should not proceed

Best practices

Consider the following best practices:

  • Run early and often:
    • Execute during migration planning phase
    • Rerun after any database changes
    • Validate immediately before backup creation
  • Address issues systematically:
    • Fix FAIL items first (blocking issues)
    • Review WARNING items for potential risks
    • Document INFO items for reference
  • Automate for multiple databases:
    • Use non-interactive mode for automation
    • Create scripts for multi-instance environments
    • Schedule regular validation runs
  • Maintain documentation:
    • Keep validation reports for audit trails
    • Document remediation actions taken
    • Track validation history over time

Advanced usage scenarios

The following code illustrates a continuous integration scenario:

#!/bin/bash
# CI/CD pipeline integration
DB_VALIDATION_EXIT_CODE=0

for instance in $(db2ilist); do
    su - "$instance" -c "
        . ~/sqllib/db2profile
        /path/to/db2_migration_prereq_check.sh
    " || DB_VALIDATION_EXIT_CODE=1
done

if [ $DB_VALIDATION_EXIT_CODE -ne 0 ]; then
    echo "db2 validation failed - blocking deployment"
    exit 1
fi

The following code illustrates a scheduled monitoring scenario:

#!/bin/bash
# Cron job for regular validation
# 0 2 * * 1 /path/to/weekly_db2_validation.sh

REPORT_DIR="/var/log/db2_weekly_checks"
mkdir -p "$REPORT_DIR"
DATE=$(date +%Y%m%d)

for instance in $(db2ilist); do
    su - "$instance" -c "
        . ~/sqllib/db2profile
        export REPORT_FILE_PATH='$REPORT_DIR/${instance}_${DATE}.log'
        /path/to/db2_migration_prereq_check.sh
    "
done

# Send summary email
mail -s "Weekly db2 Validation Report" admin@company.com < "$REPORT_DIR/summary_${DATE}.txt"

Troubleshooting common issues

The following table presents common issues and troubleshooting tips.

Issue Troubleshooting
db2 command not found
# Ensure db2 environment is sourced
. ~/sqllib/db2profile

# Verify Db2 installation
which db2
echo $DB2INSTANCE
No Db2 instances found
# Check if instances are running
db2ilist

# Verify user permissions
whoami
id
Unable to connect to database
# Check database directory and try connect manually
db2 list db directory
db2 connect to <dbname>
Taking too long to connect
# Activate database
db2 activate db <dbname>
db2 list active databases
Permission error
# Ensure running as Db2 instance user
su - db2inst1
# Verify authorities
db2 "SELECT * FROM 
TABLE(SYSPROC.AUTH_LIST_AUTHORITIES_FOR_AUTHID('DB2INST1'))"

General guidance on using database backup command

When using the database backup command, consider the following best practices:

  • Amazon RDS for Db2 uses Amazon Simple Storage Service (Amazon S3) streaming capabilities to restore the database using parallel streams. Amazon S3 streaming is most effective with multi-part database backup images. For instance, the command db2 backup database <dbname> to /backup generates a single image of the database, which might not be optimal from a performance standpoint. Instead, specify multiple target locations, such as db2 backup database <dbname> to /backup, /backup, /backup, /backup, /backup. Repeating the same path runs the backup operation in parallel and divides the database image into five parts, labeled .001 through .005.
  • Consider employing multi-part backup even for smaller databases. Determine the number of database locations based on your Linux machine’s CPU and memory capabilities. In cases of uncertainty, it is advisable to use 20 locations.
  • Consider using db2 backup directly to Amazon S3 if you have connectivity from self-managed Db2 to the AWS Region’s network. To copy multi-part database images to Amazon S3, you must create a bucket in your account with the necessary privileges. Use this bucket to create a Db2 storage access alias.
    Considerations for creating a storage alias:
  • If using Db2 on Amazon EC2, attach an IAM role that grants access to the Amazon S3 bucket, and don’t specify USER, PASSWORD, or TOKEN in the catalog storage access alias command.
    Example command:

    db2 "CATALOG STORAGE ACCESS ALIAS <aliasName> VENDOR S3 SERVER s3.<region>.amazonaws.com CONTAINER <container-name> DBUSER <masterUserName>"
  • If using self-managed Db2, you can use AWS CLI credentials to create the storage alias.
  • Using long-term credentials:
    db2 "CATALOG STORAGE ACCESS ALIAS <aliasName> VENDOR S3 SERVER s3.<region>.amazonaws.com USER $AWS_ACCESS_KEY_ID PASSWORD $AWS_SECRET_ACCESS_KEY CONTAINER <container-name> DBUSER <masterUserName>"
  • Using short-term credentials:
    db2 "CATALOG STORAGE ACCESS ALIAS <aliasName> VENDOR S3 SERVER s3.<region>.amazonaws.com USER $AWS_ACCESS_KEY_ID PASSWORD $AWS_SECRET_ACCESS_KEY CONTAINER <container-name> DBUSER <masterUserName> TOKEN $AWS_SESSION_TOKEN"
  • Use the storage alias in the backup command to directly write database backup images to Amazon S3. For instance, if the storage alias created is db2S3, use the command:
    db2 backup database <dbname> to DB2REMOTE://db2S3, DB2REMOTE://db2S3, DB2REMOTE://db2S3, DB2REMOTE://db2S3, DB2REMOTE://db2S3

    This command splits the database backup image into five multi-part files in your S3 bucket.
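The multi-part backup guidance above can be scripted rather than typed by hand. The following sketch, with a hypothetical database name and target path, builds the repeated target list and prints the resulting backup command:

```shell
DBNAME=SAMPLE     # hypothetical database name
NUM_PARTS=20      # number of parallel backup targets

# Repeat the same path NUM_PARTS times, comma-separated
TARGETS=$(printf '/backup, %.0s' $(seq 1 "$NUM_PARTS"))
TARGETS=${TARGETS%, }

# Print (rather than run) the resulting backup command
echo "db2 backup database $DBNAME to $TARGETS"
```

The same pattern works for DB2REMOTE:// targets when backing up directly to a storage alias.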

General guidance on using database restore command

When using the database restore command, consider the following best practices:

  • Make sure that you have enabled Amazon S3 integration for RDS for Db2 instances with the proper AWS Identity and Access Management (IAM) role for access to an S3 bucket with database backup multi-part copies.
  • Use the stored procedure rdsadmin.set_configuration to set performance parameters such as RESTORE_DATABASE_NUM_BUFFERS, RESTORE_DATABASE_PARALLELISM, and RESTORE_DATABASE_NUM_MULTI_PATHS.
  • Set the parameter USE_STREAMING_RESTORE to TRUE to enable Amazon S3 streaming direct from Amazon S3 to Amazon RDS for Db2.
  • Amazon RDS for Db2 automatically creates a Db2 storage alias for restoring multi-part database backup images through the rdsadmin.restore_database stored procedure.
  • Pay attention to the s3_prefix variable used in the rdsadmin.restore_database stored procedure. This prefix is the common part of multi-part backup images excluding extensions such as .001, .002, and so on so that matching files are restored to the RDS for Db2 database.
  • If using an online backup image, you must continue copying Db2 archive log files to Amazon S3 and run the stored procedure rdsadmin.rollforward_database to apply them. Repeat the process until you have applied all log files. Use a different prefix for each operation.
  • After applying all log files, run the stored procedure rdsadmin.complete_rollforward to bring the RDS for Db2 database in a connectible state.
  • Consider using the Db2MT tool to automate the online restore process instead of performing these steps manually.
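As an illustration of the s3_prefix convention mentioned above, Db2 multi-part backup images share a common name and differ only in the sequence extension. The following sketch, with a hypothetical image name, derives the prefix by stripping that extension:

```shell
# Hypothetical multi-part image name; peer files would end in .002, .003, ...
FILE="SAMPLE.0.db2inst1.DBPART000.20250101120000.001"

# s3_prefix is the common part without the sequence extension
S3_PREFIX="${FILE%.*}"
echo "s3_prefix: $S3_PREFIX"
```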

Enhancements to the tool

The source code of this tool is available in the following GitHub repository. Open an issue to submit an enhancement request, or submit a pull request with your suggested changes.

Additional resources

To learn more about Amazon RDS for Db2 and migration strategies, refer to the following resources:

Conclusion

The Db2 Migration Prerequisites Validation Tool significantly reduces migration failures by identifying and addressing common issues before they impact your migration timeline. By incorporating this tool into your migration workflow, you can achieve the following:

  • Reduce migration risk – Identify issues early in the process
  • Save time – Avoid failed restore operations and troubleshooting
  • Improve success rate – Make sure databases are properly prepared
  • Maintain documentation – Keep detailed validation records

Regular use of this tool as part of your Db2 maintenance and migration processes can help facilitate smooth, successful migrations to Amazon RDS for Db2.

Have you tried this approach in your setup? Let us know how it works for you or if you want to see enhancements in the tool.


About the Authors

Vikram S Khatri

Vikram is a Sr. DBE for Amazon RDS for Db2. Vikram has over 20 years of experience in Db2. He enjoys developing new products from the ground up. In his spare time, he practices meditation and enjoys listening to podcasts.

Umair Hussain

Umair is a Senior Database Engineer for Amazon RDS for Db2. Umair has over 20 years of Db2 experience.

Rajib Sarkar

Rajib is a Senior Database Engineer for Amazon RDS for Db2. Rajib has over 20 years of Db2 experience.