AWS Database Blog

How an AWS customer in the learning services industry migrated and modernized SAP ASE to Amazon Aurora PostgreSQL

In this post, we explore how a leading AWS customer in the learning services industry successfully modernized its legacy SAP ASE environment by migrating to Amazon Aurora PostgreSQL-Compatible Edition. Partnering with AWS, the customer engineered a comprehensive migration strategy to transition from a proprietary system to an open source database while providing high availability, performance optimization, and cost-efficiency.

The customer’s legacy SAP ASE setup posed several challenges, including maintaining outdated code, managing multiple clusters, and sustaining uptime. The modernization objectives were to refactor SAP ASE code for Aurora PostgreSQL-Compatible, consolidate infrastructure, enhance performance, reduce operational costs, and implement a resilient disaster recovery strategy.

To achieve this, the team executed a well-structured migration plan using AWS Database Migration Service (AWS DMS) for data migration and AWS DMS Schema Conversion for efficient code conversion. This approach enabled a successful migration within a strict maintenance window, minimizing business disruption.

Beyond migration, this transformation provided a scalable, high-performance, and cost-effective open source database, reducing legacy complexities through efficient code refactoring. In this post, we show you key code conversion challenges, practical solutions, and best practices to help you smoothly transition SAP ASE applications to Aurora PostgreSQL-Compatible while maintaining functionality and performance.

For details on migration from SAP ASE, see Using DMS Schema Conversion and Using an SAP ASE database as a source for AWS DMS. For more information about potential limitations, refer to the limitations when SAP ASE is the source and the limitations for PostgreSQL as the target.

Code conversion challenges

The customer had legacy applications running on SAP ASE, which introduced several challenges during the migration to PostgreSQL. Although the AWS Schema Conversion Tool (AWS SCT) facilitates the basic conversion process, several SAP ASE-specific aspects require careful consideration and manual intervention during migration.

Some of the key challenges that the customer faced during SAP ASE to PostgreSQL migrations included:

  • Data type compatibility – Some SAP ASE-specific data types and their behaviors need special handling during migration.
  • Temporary table handling – The semantics of temporary tables (#temp) in SAP ASE differ from PostgreSQL’s temporary tables, affecting application logic.
  • Handling cursors – SAP ASE and PostgreSQL have different cursor implementations, particularly in terms of cursor scope, performance characteristics, and syntax for declaration and manipulation.
  • Handling triggers – SAP ASE and PostgreSQL have different trigger implementations, including differences in row-level compared to statement-level behavior, and access to modified data through inserted and deleted tables vs. NEW and OLD references.
  • Cross-database operations – SAP ASE’s cross-database reference mechanisms differ significantly from PostgreSQL’s approach, requiring architectural redesign.
  • Handling collation – Differences in collation rules and character set handling between SAP ASE and PostgreSQL require careful mapping and testing to maintain proper text sorting and comparison behavior.

Handling code conversion patterns

This section highlights some of the conversion patterns the customer encountered when migrating from SAP ASE to Aurora PostgreSQL-Compatible.

Data type compatibility

One of the key challenges the customer faced during the SAP ASE to PostgreSQL migration was dealing with data type mismatches, especially around proprietary SAP ASE types such as UNIQUEIDENTIFIER, MONEY, and DATETIME. These types either have no direct counterparts in PostgreSQL or behave differently. SAP ASE and PostgreSQL have different data type systems, and making sure that conversions are accurate is essential for maintaining data integrity and optimizing performance. The following table summarizes some common data type mappings from SAP ASE to Aurora PostgreSQL-Compatible that we handled for the customer.

SAP ASE data type | PostgreSQL data type | Description
--- | --- | ---
TINYINT | SMALLINT | SAP ASE TINYINT (1 byte) maps to PostgreSQL SMALLINT (2 bytes) for small integer values. A difference is that TINYINT covers only 0–255; if you want the same semantics, you can create a domain based on SMALLINT with a range constraint.
SMALLINT | SMALLINT | This data type maps directly to PostgreSQL.
INT | INTEGER | SAP ASE INT (4 bytes) maps directly to PostgreSQL INTEGER.
BIGINT | BIGINT | Both SAP ASE and PostgreSQL use BIGINT for 8-byte integer values.
REAL | REAL | This data type maps directly to PostgreSQL.
FLOAT | DOUBLE PRECISION | SAP ASE FLOAT maps to PostgreSQL DOUBLE PRECISION for 8-byte floating-point numbers.
DECIMAL(p,s) | NUMERIC(p,s) | SAP ASE DECIMAL maps to PostgreSQL NUMERIC for fixed-point numbers with precision (p) and scale (s).
NUMERIC(p,s) | NUMERIC(p,s) | This data type maps directly to PostgreSQL.
MONEY | NUMERIC(19,4) | SAP ASE MONEY has a fixed precision of four decimal places.
SMALLMONEY | NUMERIC(10,4) | SAP ASE SMALLMONEY has fixed four decimal places with a smaller range than MONEY.
CHAR(n) | CHAR(n) | SAP ASE maximum length is 16 KB, whereas PostgreSQL supports up to 1 GB.
VARCHAR(n) | VARCHAR(n) | SAP ASE maximum length is 16 KB, whereas PostgreSQL supports up to 1 GB.
TEXT | TEXT | SAP ASE TEXT has a 2 GB limit, whereas PostgreSQL TEXT has no declared length limit (values are capped at about 1 GB). Consider data volume during migration.
UNITEXT | TEXT | SAP ASE UNITEXT stores Unicode text data over 16 KB. Consider character encoding differences during migration.
BINARY(n) | BYTEA | PostgreSQL BYTEA uses different escape formats and handling mechanisms than SAP ASE BINARY.
VARBINARY(n) | BYTEA | PostgreSQL BYTEA uses different escape formats and handling mechanisms than SAP ASE VARBINARY.
IMAGE | BYTEA | SAP ASE IMAGE maps to PostgreSQL BYTEA for large binary objects (for example, images).
DATE | DATE | SAP ASE DATE ranges from 1753-01-01 to 9999-12-31, whereas PostgreSQL supports 4713 BC to 5874897 AD, so SAP ASE date values fit within the PostgreSQL range.
DATETIME | TIMESTAMP(3) | SAP ASE DATETIME has roughly 3-millisecond precision, whereas PostgreSQL TIMESTAMP defaults to microsecond precision. Use TIMESTAMP(3) to match SAP ASE’s precision or handle precision differences in the application layer.
SMALLDATETIME | TIMESTAMP | SAP ASE SMALLDATETIME maps to PostgreSQL TIMESTAMP with reduced precision.
BIGDATETIME | TIMESTAMP(6) | SAP ASE BIGDATETIME stores dates from 0001-01-01 to 9999-12-31 with microsecond precision. Direct mapping is possible without data loss.
BIGTIME | TIME(6) | Direct mapping is possible without data loss.
UNSIGNED SMALLINT | INTEGER | SAP ASE range is 0–65,535. PostgreSQL doesn’t support unsigned integers, so a larger data type is needed to cover the unsigned range.
UNSIGNED INT | BIGINT | SAP ASE range is 0–4,294,967,295. PostgreSQL doesn’t support unsigned integers, so a larger data type is needed to cover the unsigned range.
BIT | BOOLEAN | SAP ASE BIT maps to PostgreSQL BOOLEAN for true/false values. BIT can’t be NULL, but BOOLEAN can.
TIMESTAMP (row version) | No direct equivalent | SAP ASE TIMESTAMP for row versioning has no direct equivalent in PostgreSQL. Consider alternative row versioning strategies, such as BYTEA for binary storage or BIGINT for numeric versioning.
UNICHAR | VARCHAR | SAP ASE UNICHAR maps to PostgreSQL VARCHAR for Unicode character data.
UNIVARCHAR | VARCHAR | SAP ASE UNIVARCHAR maps to PostgreSQL VARCHAR for variable-length Unicode data.
TIME | TIME | This data type maps directly to PostgreSQL.
XML | XML | This data type maps directly to PostgreSQL.
NCHAR | VARCHAR | For national character type data.
NVARCHAR | VARCHAR | For variable-length national character type data.

Key considerations for data type mapping

Consider the following:

  • Numeric data types – When mapping numeric data types from SAP ASE to PostgreSQL, distinguish between exact integer types (such as SMALLINT, INTEGER, and BIGINT) and arbitrary-precision types (such as NUMERIC and DECIMAL). Using integer types instead of NUMERIC can improve performance for certain operations because integer arithmetic is generally faster than decimal arithmetic.
  • Precision and scale – Make sure that the precision (p) and scale (s) of numeric types such as DECIMAL and NUMERIC are preserved during migration to avoid data truncation or loss.
  • Date and time handling – PostgreSQL’s TIMESTAMP is more flexible than SAP ASE’s DATETIME and SMALLDATETIME. Adjustments might be needed for time zone handling or precision requirements.
  • Binary data – SAP ASE’s BINARY, VARBINARY, and IMAGE types map to PostgreSQL’s BYTEA. Make sure that binary data is correctly encoded and decoded during migration.
  • String lengths – SAP ASE and PostgreSQL handle string lengths similarly, but PostgreSQL’s VARCHAR and TEXT types are more flexible for large text data.

AWS SCT maps SAP ASE numeric and decimal data types to the PostgreSQL numeric data type by default because it doesn’t know the actual data size; it provides an option to configure the mapping to a more appropriate data type during conversion.
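For example, the TINYINT semantics noted in the mapping table can be preserved with a domain. The following is a minimal sketch (the domain and table names are illustrative):

-- A domain that reproduces TINYINT's 0-255 range on top of SMALLINT
CREATE DOMAIN tinyint AS SMALLINT
    CHECK (VALUE BETWEEN 0 AND 255);

CREATE TABLE order_flags (
    flag_code tinyint NOT NULL  -- rejects values outside 0-255, matching SAP ASE TINYINT
);

-- Fails with a check constraint violation:
-- INSERT INTO order_flags (flag_code) VALUES (300);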

Temporary table handling

When migrating from SAP ASE to Aurora PostgreSQL-Compatible, the customer faced a challenge with the behavior of #temp tables, which were used heavily in their batch processing logic. We replaced #temp tables with PostgreSQL temporary tables (CREATE TEMP TABLE), accounting for differences in both session-level and transaction-level behavior. In SAP ASE, temporary tables are stored in tempdb and are automatically dropped when a session or stored procedure ends. In PostgreSQL, however, temporary tables persist for the entire session unless explicitly dropped, and their behavior within transactions requires careful handling. To optimize performance and resource management, it’s crucial to implement proper cleanup mechanisms, such as ON COMMIT DROP for transaction-scoped tables or explicitly dropping tables when they’re no longer needed.

Additionally, global temporary tables in SAP ASE, which persist across sessions but hold session-specific data, have no direct equivalent in PostgreSQL. Instead, their functionality can be replicated using unlogged tables or application-managed solutions.

By understanding these differences and implementing proper cleanup and session handling, temporary table migration can be optimized for performance, consistency, and resource efficiency.
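The following is a minimal sketch of both scopes in PostgreSQL (the table and column names are illustrative):

-- Transaction-scoped temporary table: dropped automatically at COMMIT
BEGIN;
CREATE TEMP TABLE temp_orders (
    order_id INTEGER,
    amount   NUMERIC(19,4)
) ON COMMIT DROP;
INSERT INTO temp_orders
SELECT order_id, amount FROM orders WHERE order_date = CURRENT_DATE;
-- ... batch processing against temp_orders ...
COMMIT;  -- temp_orders no longer exists after this point

-- Session-scoped temporary table: persists until the session ends or it is dropped
CREATE TEMP TABLE temp_summary (dept_id INTEGER, total NUMERIC(19,4));
DROP TABLE temp_summary;  -- explicit cleanup when no longer needed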

REF CURSOR implementation

During the migration, the customer encountered a challenge with SAP ASE stored procedures that returned a varying number of result sets using SELECT statements. PostgreSQL supports similar functionality through REF CURSOR, but the behavior and usage patterns differ, especially around cursor scope, lifecycle, and how cursors are returned from functions and stored procedures. A REF CURSOR is a special cursor type in PostgreSQL that enables a function to return a result set dynamically, so applications can fetch data row by row or as a set. We used REF CURSOR for the following reasons:

  • Flexibility – Allows dynamic query execution, returning different result sets based on conditions
  • Performance – Efficient for handling large datasets by streaming rows instead of loading all data at one time
  • Client compatibility – Helps integrate with applications that process results iteratively
  • PL/pgSQL functions compatibility – Useful when you need a stored procedure to return a result set

The following SAP ASE code returns multiple result sets:

CREATE PROCEDURE get_dept_summary
    @dept_id INT
AS
BEGIN
-- First result set
SELECT dept_name, location 
    FROM departments 
    WHERE dept_id = @dept_id
-- Second result set
SELECT COUNT(*) as cust_count 
    FROM customer 
    WHERE dept_id = @dept_id
END

The following code is the PostgreSQL equivalent:

CREATE OR REPLACE PROCEDURE get_dept_summary(
    IN p_dept_id INT,
    INOUT dept_cursor REFCURSOR,
    INOUT count_cursor REFCURSOR
)
LANGUAGE plpgsql
AS $$
BEGIN
    -- Open first cursor for department details
    OPEN dept_cursor FOR 
        SELECT dept_name, location 
        FROM departments 
        WHERE dept_id = p_dept_id;
    -- Open second cursor for customer count
    OPEN count_cursor FOR 
        SELECT COUNT(*) as cust_count 
        FROM customer 
        WHERE dept_id = p_dept_id;
END;
$$;

You can also apply this approach when returning a single result set by using just one INOUT parameter instead of multiple parameters.
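For example, from a psql session you can call the procedure inside a transaction, passing the cursor names as text, and fetch each result set from its named cursor (the cursor names are illustrative):

BEGIN;
-- The INOUT REFCURSOR parameters receive the names of the cursors to open
CALL get_dept_summary(10, 'dept_cur', 'count_cur');
-- Refcursors are only valid inside the transaction that opened them
FETCH ALL FROM dept_cur;
FETCH ALL FROM count_cur;
COMMIT;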

Handling triggers

The customer depended extensively on sophisticated triggers in SAP ASE to audit data changes and enforce business rules by logging modifications to audit tables. Although AWS SCT automated much of the trigger conversion, the customer still had to manually review some of the code, especially for complex or database-specific triggers. Additionally, triggers can have significant impacts on database performance and behavior. The following example helps illustrate this issue.

Suppose you have a trigger in SAP ASE named trg_audit_customer that logs changes made to the Customer table in an audit table called CustomerAudit. The following is the SAP ASE trigger code:

CREATE TRIGGER trg_audit_customer
ON Customer
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- Handle INSERT
    IF EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
    BEGIN
        INSERT INTO CustomerAudit (CustomerID, CustomerName, Operation, OperationDate)
        SELECT CustomerID, CustomerName, 'INSERT', GETDATE()
        FROM inserted
    END
    -- Handle UPDATE
    IF EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted)
    BEGIN
        INSERT INTO CustomerAudit (CustomerID, CustomerName, Operation, OperationDate)
        SELECT CustomerID, CustomerName, 'UPDATE', GETDATE()
        FROM inserted
    END
    -- Handle DELETE
    IF NOT EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted)
    BEGIN
        INSERT INTO CustomerAudit (CustomerID, CustomerName, Operation, OperationDate)
        SELECT CustomerID, CustomerName, 'DELETE', GETDATE()
        FROM deleted
    END
END

We used the following approach to migrate this trigger to Aurora PostgreSQL-Compatible with AWS SCT:

  • Trigger extraction – AWS SCT extracts the trigger definition and code from the SAP ASE source database.
  • Trigger code translation – AWS SCT attempts to translate the SAP ASE trigger code into equivalent PostgreSQL syntax. The translated code might look like the following example:
    -- First, create a function that will be called by the trigger
    CREATE OR REPLACE FUNCTION fn_audit_customer()
    RETURNS TRIGGER 
    LANGUAGE plpgsql
    AS $$
    BEGIN
        IF (TG_OP = 'INSERT') THEN
            INSERT INTO CustomerAudit (CustomerID, CustomerName, Operation, OperationDate)
            VALUES (NEW.CustomerID, NEW.CustomerName, 'INSERT', CURRENT_TIMESTAMP);
            
        ELSIF (TG_OP = 'UPDATE') THEN
            INSERT INTO CustomerAudit (CustomerID, CustomerName, Operation, OperationDate)
            VALUES (NEW.CustomerID, NEW.CustomerName, 'UPDATE', CURRENT_TIMESTAMP);
            
        ELSIF (TG_OP = 'DELETE') THEN
            INSERT INTO CustomerAudit (CustomerID, CustomerName, Operation, OperationDate)
            VALUES (OLD.CustomerID, OLD.CustomerName, 'DELETE', CURRENT_TIMESTAMP);
        END IF;
        
        RETURN COALESCE(NEW, OLD);
    END;
    $$;
    -- Then, create the trigger
    CREATE TRIGGER trg_audit_customer
    AFTER INSERT OR UPDATE OR DELETE ON Customer
    FOR EACH ROW
    EXECUTE FUNCTION fn_audit_customer();
  • Trigger validation – AWS SCT performs validation checks to make sure that the translated trigger code is syntactically correct and compatible with Aurora PostgreSQL-Compatible. If any issues are identified, they’re reported, and manual intervention might be required.
  • Trigger migration – Assuming the validation is successful, AWS SCT can migrate the translated trigger code to the target Aurora PostgreSQL database.

In this example, the translated trigger code in PostgreSQL uses a combination of a trigger function and a trigger definition. This is a common approach in PostgreSQL because triggers can’t have complex logic directly embedded within them; instead, the trigger calls a separate function that contains the desired logic.

Additionally, AWS SCT might encounter differences in syntax or functionality between SAP ASE and PostgreSQL, such as the handling of temporary tables, cursor operations, or database-specific features. In such cases, AWS SCT highlights the unsupported constructs, and manual intervention or alternative implementations are required.
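A quick way to validate the converted trigger is to run one statement of each type and inspect the audit table (the sample data is illustrative):

INSERT INTO Customer (CustomerID, CustomerName) VALUES (1, 'Acme');
UPDATE Customer SET CustomerName = 'Acme Corp' WHERE CustomerID = 1;
DELETE FROM Customer WHERE CustomerID = 1;

-- Expect one row each for INSERT, UPDATE, and DELETE
SELECT Operation, CustomerID, OperationDate
FROM CustomerAudit
ORDER BY OperationDate;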

Cross-database operations

The customer’s SAP ASE environment made extensive use of cross-database references, with stored procedures and queries accessing objects across multiple databases. This presented a challenge during the migration because PostgreSQL doesn’t natively support cross-database queries within a single session the way SAP ASE does, so handling these references required significant architectural changes. The following table compares cross-database capabilities and possible approaches to implementing cross-database operations.

Feature | SAP ASE | PostgreSQL | Migration notes
--- | --- | --- | ---
Database references | database.owner.object | No direct equivalent | Requires architectural redesign
Cross-database queries | Direct queries using database prefix | Foreign data wrappers | Foreign data wrapper setup required for external access
Security context | Unified sign-in across databases | Separate user mappings needed | Additional security configuration required
Transaction scope | Cross-database transaction support | Limited cross-database transaction support | Might need application-level changes

Replace SAP ASE’s cross-database references with the following:

  • Foreign data wrappers (FDWs)
  • Database links (the dblink extension)
  • Application-level solutions

Note the following key considerations:

  • Implement proper security and access controls for cross-database communication
  • Consider performance implications of distributed queries
  • FDWs are best for integrating multiple data sources within PostgreSQL, and dblink is best used for cross-database communication in PostgreSQL environments (a foreign data wrapper setup sketch follows this list)
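The following is a minimal foreign data wrapper setup using postgres_fdw, which is available in Aurora PostgreSQL-Compatible; the server endpoint, database, user, and table definitions are placeholders for illustration:

-- Enable the foreign data wrapper extension
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Register the remote database as a foreign server (endpoint and dbname are placeholders)
CREATE SERVER sales_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'sales-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com', dbname 'sales', port '5432');

-- Map the local user to a remote user (credentials are placeholders)
CREATE USER MAPPING FOR app_user
    SERVER sales_server
    OPTIONS (user 'sales_reader', password 'secret');

-- Expose a remote table locally; queries against it run on the remote database
CREATE FOREIGN TABLE sales_orders (
    order_id INTEGER,
    dept_id  INTEGER,
    amount   NUMERIC(19,4)
)
SERVER sales_server
OPTIONS (schema_name 'public', table_name 'orders');

-- A former cross-database reference such as sales.dbo.orders becomes a local query
SELECT COUNT(*) FROM sales_orders WHERE dept_id = 10;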

Handling collation

The customer’s SAP ASE 15.7 environment used a binary sort order, which enforced strict byte-level comparisons, affecting case sensitivity, accent sensitivity, and width sensitivity across their application logic. This behavior was deeply embedded in their validation rules, reporting logic, and user-facing features. When migrating to Aurora PostgreSQL-Compatible, the customer initially struggled with inconsistent query results because PostgreSQL’s default locale-based collations didn’t match the binary comparison semantics they relied on. These discrepancies led to unexpected sorting and filtering behavior in critical application components. After thorough testing, the customer adopted the C collation in PostgreSQL, which closely mimics SAP ASE’s binary sort order and provided the consistency they needed. Implementing it required careful planning, however: collation settings had to be explicitly applied at the database, column, and query levels to preserve behavior across all application layers.

The C collation provides:

  • Efficient byte-to-byte comparisons – Alleviates the need for complex character transformations
  • Lower CPU usage – Speeds up sorting and indexing operations
  • Consistent query performance – Provides predictable execution times

By adopting C collation in PostgreSQL, our customer achieved consistent comparisons and predictable sorting, making it a suitable choice for applications requiring efficient text processing.
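For reference, the C collation can be applied at each of these levels (the database, table, and column names are illustrative):

-- Database level: template0 is required when the collation differs from the template
CREATE DATABASE appdb
    TEMPLATE template0
    LC_COLLATE 'C'
    LC_CTYPE 'C';

-- Column level: override the collation for an individual column
CREATE TABLE customer_names (
    customer_name VARCHAR(100) COLLATE "C"
);

-- Query level: force byte-order comparison for a single ORDER BY
SELECT customer_name
FROM customer_names
ORDER BY customer_name COLLATE "C";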

Summary

In this post, we shared tips and best practices that you can use when migrating from commercial databases to PostgreSQL. We highlighted some of the decisions you must make during the migration and how they can affect database performance. Keeping these performance aspects in mind during the conversion can help you avoid performance issues during migration.

If you have any questions or comments about this post, share your thoughts in the comments section.


About the Authors

InduTeja Aligeti


InduTeja is a Senior Lead Database Consultant at AWS. She has over 18 years of experience working with Microsoft technologies with a specialization in SQL Server. She focuses on helping customers build highly available, cost-effective database solutions and migrate their large-scale SQL Server databases to AWS.

Amit Kumar


Amit is a distinguished Data and AI/ML Strategist at AWS Professional Services in Hyderabad, India. With over a decade of experience, he brings profound expertise in the complete data lifecycle, from storage and processing to cutting-edge AI/ML applications.

Ankit Jain


Ankit is a data enthusiast and a Lead Database Consultant at AWS. He has over 16 years of experience in end-to-end homogeneous and heterogeneous database migrations involving Oracle, Microsoft SQL Server, SAP ASE/IQ, Amazon Aurora PostgreSQL, Amazon DynamoDB, and Amazon QLDB, delivered in a cost-effective and performant manner.