AWS Storage Blog

Simplify log rotation with Amazon S3 Express One Zone

Log rotation is a standard operational practice for maintaining system health and performance while managing storage costs effectively. This practice involves systematically archiving log files to prevent them from consuming excessive storage. When a log file reaches a certain size or age, it’s rotated—meaning the current file is archived with a new name and a fresh log file is created to continue recording new events. This process helps maintain required system performance and efficient storage utilization, while also making log data less complicated to manage and analyze. Because applications continuously generate log data, organizations need storage solutions that can handle frequent writes and updates to log data with consistently high performance while optimizing their storage cost.

Amazon S3 Express One Zone is a high-performance, single-Availability Zone storage class purpose-built for your most frequently accessed data and latency-sensitive applications. In 2024, S3 Express One Zone added support for appending data directly to existing objects. With this feature, you can append new data to an existing object without having to download the object, append the new data locally, and then upload the entire object again. This makes it possible to configure applications to log directly to S3 Express One Zone, without requiring local storage.
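In this post, Mountpoint issues the append requests for you, but for illustration, here is what an append looks like when calling the API directly. This is a minimal sketch using the AWS SDK for Java 2.x; it assumes an SDK version recent enough to include the writeOffsetBytes parameter on PutObjectRequest, default credentials and Region configuration, and illustrative bucket and key names:

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class AppendSketch {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            String bucket = "amzn-s3-demo-bucket--usw2-az3--x-s3"; // illustrative directory bucket
            String key = "app.log";

            // The next append must start at the current end of the object
            long offset = s3.headObject(b -> b.bucket(bucket).key(key)).contentLength();

            // A PutObject request with a write offset appends instead of overwriting
            s3.putObject(PutObjectRequest.builder()
                            .bucket(bucket)
                            .key(key)
                            .writeOffsetBytes(offset)
                            .build(),
                    RequestBody.fromString("new log line\n"));
        }
    }
}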

In June 2025, S3 Express One Zone also added support for renaming objects with the new RenameObject API. For the first time in Amazon S3, you can rename your existing objects atomically (with a single operation) without data movement. This new API is useful for log rotation because you can now atomically rename your log files in S3 Express One Zone instead of having to rename them locally, upload the renamed log file, and then delete the original log file.
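Again as a sketch of the direct API call (Mountpoint makes this request for you in this post), a rotation rename with the AWS SDK for Java 2.x might look like the following. It assumes an SDK version recent enough to include the RenameObject operation, and illustrative names:

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.RenameObjectRequest;

public class RenameSketch {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // Atomically renames app.log to a timestamped name in the directory bucket
            s3.renameObject(RenameObjectRequest.builder()
                    .bucket("amzn-s3-demo-bucket--usw2-az3--x-s3") // illustrative
                    .renameSource("app.log")           // existing key
                    .key("app-2025-01-01-00-00-1.log") // new key
                    .build());
        }
    }
}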

In this blog post, we demonstrate a log rotation strategy using S3 Express One Zone. We cover using the append functionality to add new log entries to the end of existing log files, and the RenameObject API to atomically rename your log file for rotation. Additionally, we show you how to use Mountpoint for Amazon S3 to mount your directory bucket in the S3 Express One Zone storage class as a local filesystem. Mountpoint makes it straightforward for applications using a standard logging framework, like Log4j2, to take advantage of these new capabilities in S3 Express One Zone.

Solution overview

In this solution, you configure Log4j2, a popular logging framework for the Java programming language, to write logs directly to a directory bucket in the S3 Express One Zone storage class. You’ll use Mountpoint to mount the directory bucket as a local filesystem. Note that you need Mountpoint version 1.19.0 or later to rename files through Mountpoint.

As Log4j2 appends log entries to the filesystem, you’ll notice that Mountpoint sends append requests to S3 Express One Zone. You’ll also configure Log4j2 to rotate log files based on either a specific time period or a size limit. As these log files are rotated, you’ll notice that Mountpoint translates the log file renames to RenameObject API calls to S3 Express One Zone.

Solution walkthrough

To implement a logging solution with S3 Express One Zone, you need to follow four steps:

  1. Create the required AWS resources, such as a directory bucket, an IAM role, and an EC2 instance.
  2. Prepare your workspace by downloading dependencies such as Mountpoint and Maven, which is used to build and run a Java application.
  3. Write a Java application to use Log4j2 to write logs and rotate the log files.
  4. Run the application and monitor the logs.

Let’s go through each of these steps in detail.

Step 1: Create the required AWS resources

In this step, you create AWS resources that are required to build the logging solution.

  1. Create a directory bucket. In this example, we use the bucket name logging-on-express in the Availability Zone usw2-az3, resulting in a full directory bucket name of logging-on-express--usw2-az3--x-s3.
  2. Create an IAM role for Amazon Elastic Compute Cloud (Amazon EC2). To do so, choose Create role. On the first page, select AWS service as the trusted entity type and the use case EC2 from the dropdown to allow EC2 instances to access the directory bucket. After that, choose Next to reach the Add permissions page, and then choose Next. On the final page, enter logging-on-express as the name for the new AWS Identity and Access Management (IAM) role and choose Create role.

Figure 1: Create an IAM role

  3. You’ve created the IAM role without pre-defined policies, and will instead create an inline policy. To do so, after your IAM role is created, choose Add permissions and select Create inline policy.

Figure 2: Add permissions to the IAM role

  4. Select S3 Express as the service, and then select CreateSession to allow the role to make CreateSession API calls to S3 Express One Zone. After that, choose All under Resources and then choose Next. On the final page, enter a name for the policy and choose Create policy. In this example, we used the name express-create-session. The equivalent JSON policy document is shown after Figure 3.

Figure 3: Create an inline policy to allow CreateSession API calls to S3 Express One Zone
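For reference, the console steps above should produce an inline policy equivalent to the following JSON document:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3express:CreateSession",
            "Resource": "*"
        }
    ]
}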

  5. Launch an EC2 instance to run your logging application. For this solution, use Amazon Linux 2023 as the Amazon Machine Image (AMI), t3.micro as the instance type, and one 24 GiB gp3 Amazon Elastic Block Store (Amazon EBS) volume for storage. These values are available under the free tier.
  6. Under the Network settings section, make sure to use the same Availability Zone that your directory bucket is in. You can do so by opening the Amazon EC2 console and checking the Service health widget on the landing page, which shows a mapping between Availability Zone names and Availability Zone IDs (a programmatic alternative is sketched after Figure 4). In this example, the bucket resides in the Availability Zone ID usw2-az3, which maps to the name us-west-2c. Therefore, we selected the subnet in our Amazon Virtual Private Cloud (Amazon VPC) in us-west-2c. If you don’t have a subnet there, you can create a new subnet.

Figure 4: Network settings while launching an EC2 instance
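If you prefer to look up this mapping programmatically rather than in the console, the following is a minimal sketch using the AWS SDK for Java 2.x EC2 client, assuming default credentials and Region configuration:

import software.amazon.awssdk.services.ec2.Ec2Client;

public class AzMappingSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Prints the Availability Zone name-to-ID mapping for the current Region
            ec2.describeAvailabilityZones().availabilityZones()
               .forEach(az -> System.out.println(az.zoneName() + " -> " + az.zoneId()));
        }
    }
}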

  7. Choose Advanced details for further customization. Under IAM instance profile, select the role logging-on-express that you created earlier. Choose Launch instance to create an EC2 instance.

Figure 5: Add IAM instance profile while launching an EC2 instance

Step 2: Prepare your workspace

In this step, you download and install dependencies to build the log rotation solution.

  1. Connect to the EC2 instance that you launched in the previous step. Install Mountpoint, which is used to mount the directory bucket on the EC2 instance as a local file system, and Maven, which is a build automation tool primarily used to build and run Java applications. You can install Mountpoint by following the Installing Mountpoint instructions and Maven by following the Maven installation instructions.
  2. Verify that the Mountpoint version is 1.19.0 or later, as shown in the following command.
sh-5.2$ mount-s3 --version 
mount-s3 1.19.0
  3. Create the directory where the logging application and the logs that it generates will reside.
sh-5.2$ cd $HOME 
sh-5.2$ mkdir logging-on-express 
sh-5.2$ cd logging-on-express/ 
sh-5.2$ pwd 
/home/ssm-user/logging-on-express 
sh-5.2$ mkdir logs
  4. Mount your directory bucket. Use the system configuration file /etc/fstab to configure the operating system to mount the directory bucket using Mountpoint during instance boot. This helps make sure that even if the instance reboots, the logs will continue to be uploaded to the specified directory bucket.
sh-5.2$ export MNT_PATH="$HOME/logging-on-express/logs" 
sh-5.2$ export BUCKET_NAME="logging-on-express--usw2-az3--x-s3" 
sh-5.2$ echo "s3://$BUCKET_NAME $MNT_PATH mount-s3 _netdev,nosuid,nodev,rw,incremental-upload,write-part-size=8388608,allow-other,uid=$(id -u $(whoami)),gid=$(id -g $(whoami)) 0 0" | sudo tee -a /etc/fstab

With the preceding command, you have appended an entry to the /etc/fstab file that uses the mount-s3 command to mount the s3://logging-on-express--usw2-az3--x-s3 bucket to the /home/ssm-user/logging-on-express/logs directory. You will notice that we have added a variety of fstab options for the mount:

  • _netdev specifies that the file system requires networking to mount
  • nosuid specifies that the file system cannot contain set userid files
  • nodev specifies that the file system cannot contain special devices
  • rw specifies that the file system will have both read and write permissions

Additionally, you will notice Mountpoint specific options:

  • incremental-upload enables incremental uploads and support for appending to existing objects.
  • write-part-size specifies the part size for multipart uploads, in bytes. This value determines how much data Mountpoint buffers locally before issuing an append request. S3 Express One Zone currently supports up to 10,000 parts per object, so write-part-size and the log rotation file-size configuration must be planned accordingly (see the worked example after this list).
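As a worked example: with the write-part-size of 8,388,608 bytes (8 MiB) from the preceding fstab entry, each append request uploads at most one 8 MiB part, so an object can grow to at most 8,388,608 bytes × 10,000 parts = 83,886,080,000 bytes, or approximately 83.89 GB. Any size-based rotation threshold (such as the 10 GB limit configured in Step 3) must stay below that ceiling.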
  5. With the configuration in place, make sure that it’s valid. To do so, load the configuration changes that you just made and mount your directory bucket using the following commands:
sh-5.2$ sudo systemctl daemon-reload 
sh-5.2$ sudo systemctl restart "$(systemd-escape --suffix=mount --path $MNT_PATH)"
  6. Finally, run a check to see if everything worked as expected using the following commands:
sh-5.2$ sudo systemctl status "$(systemd-escape --suffix=mount --path $MNT_PATH)"
sh-5.2$ sudo df -h

If everything worked well, you will see an output similar to the following:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           475M     0  475M   0% /dev/shm
tmpfs           190M  440K  190M   1% /run
/dev/xvda1       24G  2.0G   23G   8% /
tmpfs           475M     0  475M   0% /tmp
/dev/xvda128     10M  1.3M  8.7M  13% /boot/efi
tmpfs            95M     0   95M   0% /run/user/0
mountpoint-s3   8.0E     0  8.0E   0% /home/ssm-user/logging-on-express/logs

Now that you’ve mounted the directory bucket, Mountpoint will translate file system operations against the /home/ssm-user/logging-on-express/logs directory into the corresponding S3 API calls.

Step 3: Write a Java application for logging

Next, you write a Java application to use Log4j2 to write logs and rotate the log files.

  1. Define a new Java application with a dependency on Log4j2. To do so, first create a pom.xml file using the sample below. This instructs Maven to take a dependency on Log4j2 and defines how to build the application.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>logging-on-express</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>17</maven.compiler.source>
        <maven.compiler.target>17</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <!-- Log4j Core -->
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <version>2.20.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.20.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.3.0</version>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>com.example.LoggingApp</mainClass>
                        </manifest>
                    </archive>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
  2. Configure Log4j2 by using the sample below for a Log4j2 configuration file src/main/resources/log4j2.xml.

This configuration specifies the directory to write logs to, a rotation period after which the log file gets renamed, a maximum size for a log file, and the log file naming pattern. It instructs Log4j2 to write logs in the logs directory relative to the application’s working directory ($HOME/logging-on-express/logs), which is where you mounted the directory bucket. The configuration defines a maximum log file size of 10 GB, well within the limit of approximately 83.89 GB implied by the Mountpoint write-part-size value. Logs are written to an object named app.log, and the object is renamed every minute (the rotation period) or after 10 GB of logs are written (the maximum log file size). The log file name pattern ${LOG_DIR}/app-%d{yyyy-MM-dd-HH-mm}-%i.log means that rotated log files have a minute-based timestamp and a counter %i, which is incremented if more than 10 GB of logs are written within the rotation period. Each rotation results in a RenameObject call against your directory bucket, which atomically changes the name of the current app.log object to its timestamp-counter name without copying the data that’s been written.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Properties>
        <Property name="LOG_DIR">logs</Property>
        <!-- Size-based trigger for rotation (10GB) -->
        <Property name="MAX_FILE_SIZE">10GB</Property>
        <!-- Time-based pattern for log file names with counter suffix -->
        <Property name="FILE_PATTERN">${LOG_DIR}/app-%d{yyyy-MM-dd-HH-mm}-%i.log</Property>
    </Properties>

    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>

        <RollingFile name="RollingFile"
                     fileName="${LOG_DIR}/app.log"
                     filePattern="${FILE_PATTERN}">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
            <Policies>
                <!-- Rotate every minute -->
                <TimeBasedTriggeringPolicy interval="1" modulate="true"/>
                <!-- Also rotate when file reaches specified size -->
                <SizeBasedTriggeringPolicy size="${MAX_FILE_SIZE}"/>
            </Policies>
            <!-- Don't cap the rollover counter (%i) -->
            <DefaultRolloverStrategy fileIndex="nomax"/>
        </RollingFile>
    </Appenders>

    <Loggers>
        <Root level="info">
            <AppenderRef ref="RollingFile"/>
        </Root>
    </Loggers>
</Configuration>
  3. Write and rotate logs with Log4j2 using the sample program below. This program spawns 50 threads that each write a log message every millisecond until the program is interrupted. You can interrupt the program by pressing CTRL-C.
package com.example;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
public class LoggingApp {
    private static final Logger logger = LogManager.getLogger(LoggingApp.class);
    private static final int NUM_THREADS = 50;
    private static final int LOG_INTERVAL_MS = 1; // Log every 1ms
    private static final AtomicLong counter = new AtomicLong(0);
    private static final ExecutorService executor = Executors.newFixedThreadPool(NUM_THREADS);

    public static void main(String[] args) {
        // When we get interrupted, shutdown our threadpool
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                System.out.println("Shutting down executor...");
                executor.shutdown();
                try {
                    // Wait for tasks to complete, or timeout after 5 seconds
                    if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
                        // Force shutdown if tasks don't complete in time
                        executor.shutdownNow();
                    }
                } catch (InterruptedException e) {
                    executor.shutdownNow();
                }
            }
        });

        logger.info("Starting logging application with {} threads. Will run until interrupted with CTRL-C.", NUM_THREADS);

        for (int i = 0; i < NUM_THREADS; i++) {
            final int threadId = i;
            executor.submit(() -> {
                logger.info("Thread {} started", threadId);

                try {
                    // Keep logging until the application is shut down
                    while (!Thread.currentThread().isInterrupted()) {
                        long count = counter.incrementAndGet();

                        logger.info("Thread {} - Log #{} - This is a sample log              message that will be written to the rotating log file.", threadId, count);

                        if (count % 10 == 0) {
                            logger.warn("Thread {} - Warning log entry #{}: This is a warning message.", threadId, count);
                        }

                        if (count % 50 == 0) {
                            logger.error("Thread {} - Error log entry #{}: This is an error message.", threadId, count);
                        }

                        Thread.sleep(LOG_INTERVAL_MS);
                    }
                } catch (InterruptedException e) {
                    logger.info("Thread {} was interrupted and is shutting down", threadId);
                }

                return null;
            });
        }

        // Block the main thread until the process is interrupted (CTRL-C);
        // joining on the current thread parks it without busy-waiting
        try {
            Thread.currentThread().join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

Step 4: Run the application and monitor the logs

In this final step, you run the logging application and see log rotation in action on S3 Express One Zone.

  1. Build your application using Maven with the following command:

sh-5.2$ mvn clean package

  2. Run the application using java.

sh-5.2$ java -jar target/logging-on-express-1.0-SNAPSHOT-jar-with-dependencies.jar

  3. Monitor the results.

At this point the application will start running and will append logs to the app.log object in the directory bucket, creating the object if it doesn’t exist. Every minute or every 10 GB of logs appended, app.log will be renamed and new logs will be written to a newly created app.log object. You can see that the timestamp in the Last modified column and the size of the object in the Size column change as new logs are appended.

Figure 6: Empty app.log file in S3 Express One Zone

Figure 7: Logs getting appended to app.log

After one minute passes or after 10 GB of logs have been appended, app.log is renamed to a name that includes the timestamp, and new logs start being appended to a newly created app.log object.

Figure 8: A log file getting rotated as new logs get appended to app.log

Figure 9: A second log file getting rotated as new logs get appended to app.log
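If you’d rather watch the rotation from the command line than the S3 console, the following is a minimal sketch using the AWS SDK for Java 2.x that lists the log objects and their sizes (same assumptions as the earlier sketches; the bucket name is the example value used in this post):

import software.amazon.awssdk.services.s3.S3Client;

public class ListLogsSketch {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // Lists app.log and any rotated log files with their sizes and timestamps
            s3.listObjectsV2Paginator(b -> b.bucket("logging-on-express--usw2-az3--x-s3"))
              .contents()
              .forEach(o -> System.out.printf("%-40s %12d bytes  %s%n",
                      o.key(), o.size(), o.lastModified()));
        }
    }
}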

If you’ve set up CloudTrail data events for directory buckets, you can observe the requests being made by Mountpoint. As shown in the following example, every append request includes the x-amz-write-offset-bytes header, which specifies the offset at which new data is appended to the existing object.

   {
      "eventVersion": "1.10",
      "userIdentity": REDACTED,
      "eventTime": "2025-07-07T21:50:37Z",
      "eventSource": "s3express.amazonaws.com",
      "eventName": "PutObject",
      "awsRegion": "us-west-2",
      "sourceIPAddress": REDACTED,
      "userAgent": "mountpoint-s3/1.19.0 mountpoint-s3-client/0.16.0-41aeca1 os/linux#6.1.141-155.222.amzn2023.x86_64 md/arch#x86_64 md/instance#t3.micro md/mp-fstab CRTS3NativeClient/0.1.x platform/unknown",
      "requestParameters": {
        "bucketName": "logging-on-express--usw2-az3--x-s3",
        "content-length": "8388608",
        "Host": "logging-on-express--usw2-az3--x-s3.s3express-usw2-az3.us-west-2.amazonaws.com",
        "if-match": "\"291d15ffc350411984ee75d8878d2ec4\"",
        "x-amz-write-offset-bytes": "58720256",
        "x-amz-checksum-crc32c": "t71vJg==",
        "x-amz-sdk-checksum-algorithm": "CRC32C",
        "key": "app.log",
        "Content-Type": "binary/octet-stream"
      },
      "responseElements": {
        "x-amz-object-size": "67108864",
        "ETag": "b6dc0e348a2e49c594723e529c2fae11",
        "x-amz-checksum-crc32c": "t71vJg==",
        "x-amz-server-side-encryption": "AES256"
      },
      "additionalEventData": {
        "SignatureVersion": "Sigv4",
        "CipherSuite": "TLS_AES_128_GCM_SHA256",
        "bytesTransferredIn": 8388608,
        "AuthenticationMethod": "AuthHeader",
        "x-amz-id-2": "T5yw00B",
        "bytesTransferredOut": 0,
        "availabilityZone": "usw2-az3",
        "sessionModeApplied": "ReadWrite"
      },
      "requestID": REDACTED,
      "eventID": REDACTED,
      "readOnly": false,
      "resources": [
        {
          "type": "AWS::S3Express::Object",
          "ARN": "arn:aws:s3express:us-west-2:REDACTED:bucket/logging-on-express--usw2-az3--x-s3/app.log"
        },
        {
          "accountId": REDACTED,
          "type": "AWS::S3Express::DirectoryBucket",
          "ARN": "arn:aws:s3express:us-west-2:REDACTED:bucket/logging-on-express--usw2-az3--x-s3"
        }
      ],
      "eventType": "AwsApiCall",
      "managementEvent": false,
      "recipientAccountId": REDACTED,
      "eventCategory": "Data",
      "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "logging-on-express--usw2-az3--x-s3.s3express-usw2-az3.us-west-2.amazonaws.com"
      }
    },

Similarly, you will also see RenameObject requests in these data events whenever the log files are rotated, as shown in the following example.

   {
      "eventVersion": "1.10",
      "userIdentity": REDACTED,
      "eventTime": "2025-07-07T21:51:00Z",
      "eventSource": "s3express.amazonaws.com",
      "eventName": "RenameObject",
      "awsRegion": "us-west-2",
      "sourceIPAddress": REDACTED,
      "userAgent": "mountpoint-s3/1.19.0 mountpoint-s3-client/0.16.0-41aeca1 os/linux#6.1.141-155.222.amzn2023.x86_64 md/arch#x86_64 md/instance#t3.micro md/mp-fstab CRTS3NativeClient/0.1.x platform/unknown",
      "requestParameters": {
        "If-None-Match": "*",
        "bucketName": "logging-on-express--usw2-az3--x-s3",
        "x-amz-rename-source": "app.log",
        "Host": "logging-on-express--usw2-az3--x-s3.s3express-usw2-az3.us-west-2.amazonaws.com",
        "key": "app-2025-07-07-21-50-1.log"
      },
      "responseElements": null,
      "additionalEventData": {
        "SignatureVersion": "Sigv4",
        "CipherSuite": "TLS_AES_128_GCM_SHA256",
        "bytesTransferredIn": 0,
        "AuthenticationMethod": "AuthHeader",
        "x-amz-id-2": "HXmEMOeMykT1",
        "bytesTransferredOut": 0,
        "availabilityZone": "usw2-az3",
        "sessionModeApplied": "ReadWrite"
      },
      "requestID": REDACTED,
      "eventID": REDACTED,
      "readOnly": false,
      "resources": [
        {
          "type": "AWS::S3Express::Object",
          "ARN": "arn:aws:s3express:us-west-2:REDACTED:bucket/logging-on-express--usw2-az3--x-s3/app-2025-07-07-21-50-1.log"
        },
        {
          "accountId": REDACTED,
          "type": "AWS::S3Express::DirectoryBucket",
          "ARN": "arn:aws:s3express:us-west-2:REDACTED:bucket/logging-on-express--usw2-az3--x-s3"
        },
        {
          "type": "AWS::S3Express::Object",
          "ARN": "arn:aws:s3express:us-west-2:REDACTED:bucket/logging-on-express--usw2-az3--x-s3/app.log"
        }
      ],
      "eventType": "AwsApiCall",
      "managementEvent": false,
      "recipientAccountId": REDACTED,
      "eventCategory": "Data",
      "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "logging-on-express--usw2-az3--x-s3.s3express-usw2-az3.us-west-2.amazonaws.com"
      }
    },

Clean up

After you’re done testing the solution and no longer need the resources that you set up for this post, use the following steps to clean up the resources to avoid incurring unwanted charges.

  1. Terminate the EC2 instance to avoid additional compute charges.
  2. Delete the log objects in the directory bucket, and then delete the bucket itself, to avoid additional storage charges.

Conclusion

In this post, you learned how to build a log rotation solution using the S3 Express One Zone storage class. You started by creating a new S3 directory bucket to store the log files, an IAM role for your EC2 instance, and an EC2 instance to run a Java application that uses Log4j2 to write logs to a directory where the directory bucket is mounted with Mountpoint. You configured Log4j2 to rotate logs every minute and wrote a basic Java application that emits log events. As log events were emitted, Log4j2 appended to the file system mounted with Mountpoint, which translated the appends into PutObject requests with a write offset against S3 Express One Zone. Every minute, Log4j2 rotated the log file, renaming the file in the mounted file system, which Mountpoint translated into a RenameObject API request to S3 Express One Zone.

You can use the example in this post to get started using S3 Express One Zone while you generate log files and implement a log rotation solution. With this solution, you can configure your logging application to log and rotate your log files directly in S3 Express One Zone, without requiring local storage. If you use other logging tools, you can build on top of this example to integrate your preferred tools while working with S3 Express One Zone. To learn more about S3 Express One Zone, visit Getting started with S3 Express One Zone.

Arushi Garg

Arushi is a Product Manager with Amazon S3. She enjoys understanding customer problems and innovating to solve them. Outside of work, she loves singing while playing her ukulele and is on a constant lookout for travel opportunities.

Matthew Russo

Matthew Russo is a Senior Software Development Engineer at AWS where he works on the S3 Express One Zone team.