Page topics
- General S3 FAQs
- AWS Regions
- Billing
- S3 Vectors
- Amazon S3 and IPv6
- S3 Event Notifications
- Amazon S3 Transfer Acceleration
- Security
- S3 Access Grants
- S3 Access Points
- Durability & Data Protection
- Storage Classes
- S3 Intelligent-Tiering
- S3 Standard
- S3 Express One Zone
- S3 Standard-Infrequent Access (S3 Standard-IA)
- S3 One Zone-Infrequent Access (S3 One Zone-IA)
- Amazon S3 Glacier Instant Retrieval storage class
- Amazon S3 Glacier Flexible Retrieval storage class
- Amazon S3 Glacier Deep Archive
- S3 on Outposts
- Storage Management
- Storage Analytics & Insights
- Query in Place
- Replication
- Data processing
- Data Access
- Storage Browser for Amazon S3
General S3 FAQs
What is Amazon S3?
What can I do with Amazon S3?
How can I get started using Amazon S3?
What can I do with Amazon S3 that I cannot do with an on-premises solution?
What kind of data can I store in Amazon S3?
How much data can I store in Amazon S3?
What is an S3 general purpose bucket?
What is an S3 directory bucket?
What is an S3 table bucket?
A table bucket is purpose-built for storing tables using the Apache Iceberg format. Use Amazon S3 Tables to create table buckets and set up table-level permissions in just a few steps. S3 table buckets are specifically optimized for analytics and machine learning workloads. With built-in support for Apache Iceberg, you can query tabular data in S3 with popular query engines including Amazon Athena, Amazon Redshift, and Apache Spark. Use S3 table buckets to store tabular data such as daily purchase transactions, streaming sensor data, or ad impressions as an Iceberg table in Amazon S3, and then interact with that data using analytics capabilities.
What is an S3 vector bucket?
A vector bucket is purpose-built for storing and querying vectors. Within a vector bucket, you do not use the S3 object APIs, but rather dedicated vector APIs to write vector data and query it based on semantic meaning and similarity. You can control access to your vector data with the existing access control mechanisms in Amazon S3, including bucket and IAM policies. All writes to a vector bucket are strongly consistent, which means that you can immediately access the most recently added vectors. As you write, update, and delete vectors over time, S3 vector buckets automatically optimize the vector data stored in them to achieve the optimal price-performance, even as the data sets scale and evolve.
What is the difference between a general purpose bucket, a directory bucket, a table bucket, and a vector bucket?
A bucket is a container for objects and tables stored in Amazon S3, and you can store any number of objects in a bucket.

General purpose buckets are the original S3 bucket type, and a single general purpose bucket can contain objects stored across all storage classes except S3 Express One Zone. They are recommended for most use cases and access patterns.

S3 directory buckets only allow objects stored in the S3 Express One Zone storage class, which provides faster data processing within a single Availability Zone. They are recommended for low-latency use cases. Each S3 directory bucket can support up to 2 million transactions per second (TPS), independent of the number of directories within the bucket.

S3 table buckets are purpose-built for storing tabular data in S3 such as daily purchase transactions, streaming sensor data, or ad impressions. When using a table bucket, your data is stored as an Iceberg table in S3, and then you can interact with that data using analytics capabilities such as row-level transactions, queryable table snapshots, and more, all managed by S3. Additionally, table buckets perform continual table maintenance to automatically optimize query efficiency over time, even as the data lake scales and evolves.

S3 vector buckets are purpose-built for storing and querying vectors. Within a vector bucket, you use dedicated vector APIs to write vector data and query it based on semantic meaning and similarity. You can control access to your vector data using the existing access control mechanisms in Amazon S3, including bucket and IAM policies. As you write, update, and delete vectors over time, S3 vector buckets automatically optimize the vector data stored in them to achieve the optimal price-performance, even as the data sets scale and evolve.
What does Amazon do with my data in Amazon S3?
Does Amazon store its own data in Amazon S3?
How is Amazon S3 data organized?
How do I interface with Amazon S3?
How reliable is Amazon S3?
How will Amazon S3 perform if traffic from my application suddenly spikes?
Does Amazon S3 offer a Service Level Agreement (SLA)?
What is the consistency model for Amazon S3?
Why does strong read-after-write consistency help me?
AWS Regions
Where is my data stored?
Why should I use Amazon S3 storage classes for AWS Dedicated Local Zones?
What is an AWS Region?
What is an AWS Availability Zone (AZ)?
The Amazon S3 One Zone-IA storage class replicates data within a single AZ. The data stored in S3 One Zone-IA is not resilient to the physical loss of an Availability Zone resulting from disasters, such as earthquakes, fires, and floods.
How do I decide which AWS Region to store my data in?
In which parts of the world is Amazon S3 available?
Billing
How much does Amazon S3 cost?
How will I be charged and billed for my use of Amazon S3?
There are no setup charges or commitments to begin using Amazon S3. At the end of the month, you will automatically be charged for that month’s usage. You can view your charges for the current billing period at any time by logging into your Amazon Web Services account and selecting the 'Billing Dashboard' associated with your console profile. With the AWS Free Usage Tier*, you can get started with Amazon S3 for free in all Regions except the AWS GovCloud Regions. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 Standard storage, 20,000 GET requests, 2,000 PUT requests, and 100 GB of data transfer out (to the internet, other AWS Regions, or Amazon CloudFront) each month for one year. Unused monthly usage will not roll over to the next month. Amazon S3 charges you for the following types of usage. Note that the calculations below assume there is no AWS Free Tier in place.
Starting July 15, 2025, new AWS customers will receive up to $200 in AWS Free Tier credits, which can be applied towards eligible AWS services, including Amazon S3. At account sign-up, you can choose between a free plan and a paid plan. The free plan will be available for 6 months after account creation. If you upgrade to a paid plan, any remaining Free Tier credit balance will automatically apply to your AWS bills. All Free Tier credits must be used within 12 months of your account creation date. To learn more about the AWS Free Tier program, refer to AWS Free Tier website and AWS Free Tier documentation.
Why do prices vary depending on which Amazon S3 Region I choose?
How am I charged for using Versioning?
1) Day 1 of the month: You perform a PUT of 4 GB (4,294,967,296 bytes) on your bucket.
2) Day 16 of the month: You perform a PUT of 5 GB (5,368,709,120 bytes) within the same bucket using the same key as the original PUT on Day 1.
When analyzing the storage costs of the above operations, note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 16. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket. At the end of the month:
Total Byte-Hour usage: [4,294,967,296 bytes x 31 days x (24 hours / day)] + [5,368,709,120 bytes x 16 days x (24 hours / day)] = 5,257,039,970,304 Byte-Hours.
Conversion to Total GB-Months: 5,257,039,970,304 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 6.581 GB-Months
The cost is calculated based on the current rates for your Region on the Amazon S3 pricing page.
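As a quick sanity check, the same calculation can be reproduced in a few lines of Python (illustrative only; the dollar cost still depends on the current rates for your Region on the Amazon S3 pricing page):

```python
# Versioning storage example above: a 4 GB object stored for the full 31-day
# month plus a 5 GB object stored from Day 16 (16 days of the month).
GIB = 1024 ** 3            # bytes per GB (binary)
HOURS_PER_DAY = 24
HOURS_PER_MONTH = 744      # 31-day month

byte_hours = (4 * GIB * 31 * HOURS_PER_DAY) + (5 * GIB * 16 * HOURS_PER_DAY)
gb_months = byte_hours / GIB / HOURS_PER_MONTH

print(f"{byte_hours:,} Byte-Hours")   # 5,257,039,970,304 Byte-Hours
print(f"{gb_months:.3f} GB-Months")   # 6.581 GB-Months
```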
How am I charged for accessing Amazon S3 through the AWS Management Console?
How am I charged if my Amazon S3 buckets are accessed from another AWS account?
Do your prices include taxes?
Will I incur any data transfer out to the internet charges when I move my data out of AWS?
I want to move my data out of AWS. How do I request free data transfer out to the internet?
Why do I have to request AWS’ pre-approval for free data transfer out to the internet before moving my data out of AWS?
S3 Vectors
How do I get started with S3 Vectors?
You can get started with S3 Vectors in four simple steps, without having to set up any infrastructure outside of Amazon S3. First, create a vector bucket in a specific AWS Region through the CreateVectorBucket API or in the S3 Console. Second, to organize your vector data in a vector bucket, create a vector index with the CreateIndex API or in the S3 Console. When you create a vector index, you specify the distance metric (Cosine or Euclidean) and the number of dimensions a vector should have (up to 4,096). For the most accurate results, select the distance metric recommended by your embedding model. Third, add vector data to a vector index with the PutVectors API. You can optionally attach metadata as key-value pairs to each vector to filter queries. Fourth, perform a similarity query using the QueryVectors API, specifying the vector to search for and the number of most similar results to return.
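As a rough illustration of these four steps, here is a minimal Python (boto3) sketch. The s3vectors client and the exact parameter names shown (vectorBucketName, indexName, dimension, distanceMetric, and so on) are assumptions inferred from the API names above, so verify them against the current AWS SDK reference for S3 Vectors before relying on them:

```python
import boto3

# Assumed boto3 client and parameter names -- verify against the current AWS
# SDK documentation for S3 Vectors before use.
s3vectors = boto3.client("s3vectors", region_name="us-east-1")

# 1) Create a vector bucket in a specific AWS Region.
s3vectors.create_vector_bucket(vectorBucketName="my-vector-bucket")

# 2) Create a vector index, choosing the distance metric and dimension count.
#    (A 4-dimensional index keeps this example short; real embedding models
#    typically produce hundreds or thousands of dimensions.)
s3vectors.create_index(
    vectorBucketName="my-vector-bucket",
    indexName="docs-index",
    dataType="float32",
    dimension=4,
    distanceMetric="cosine",
)

# 3) Add vectors, optionally attaching metadata for query-time filtering.
s3vectors.put_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs-index",
    vectors=[{
        "key": "doc-0001",                               # unique key, e.g. a UUID
        "data": {"float32": [0.12, 0.57, -0.33, 0.08]},  # embedding values
        "metadata": {"genre": "news", "year": 2025},
    }],
)

# 4) Run a similarity query for the top-k most similar vectors.
response = s3vectors.query_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs-index",
    queryVector={"float32": [0.10, 0.61, -0.30, 0.05]},
    topK=3,
    returnMetadata=True,
    returnDistance=True,
)
print(response)
```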
How do I create a vector index in a vector bucket?
You can create a vector index using the S3 Console or the CreateIndex API. During index creation, you specify the vector bucket, index, distance metric, dimensions, and optionally a list of metadata fields that you want to exclude from filtering during similarity queries. For example, if you want to store data associated with vectors purely for reference, you can specify these as non-filterable metadata fields. Upon creation, each index is assigned a unique Amazon Resource Name (ARN). Subsequently when you make a write or query request, you direct it to a vector index within a vector bucket.
How do I add vector data to my vector index?
You can add vectors to a vector index using the PutVectors API. Each vector consists of a key that uniquely identifies it within a vector index (for example, a programmatically generated UUID). To maximize write throughput, we recommend inserting vectors in large batches, up to the maximum request size. Additionally, you can attach metadata (for example, year, author, genre, and location) as key-value pairs to each vector. When you include metadata, by default all fields can be used as filters in a similarity query unless they were specified as non-filterable metadata at the time of vector index creation. To generate new vector embeddings of your unstructured data, you can use Amazon Bedrock’s InvokeModel API, specifying the model ID of the embedding model you want to use.
How do I retrieve vectors and their associated metadata?
You can use the GetVectors API to look up and return vectors and associated metadata by the vector key.
How do I query my vector data?
You can run a similarity query with the QueryVectors API, specifying the query vector, the number of relevant results to return (the top k nearest neighbors), and the index ARN. When generating the query vector, you should use the same embedding model that was used to generate the initial vectors stored in the vector index. For example, if you use Amazon Titan Text Embeddings v2 in Amazon Bedrock to generate embeddings of your documents, it is recommended that you use the same model to convert a question to a vector. Additionally, you can use metadata filters in a query, to search vectors that match the filter. When you run the similarity query, by default the vector keys are returned. You can optionally include the distance and metadata in the response.
What are the durability and availability characteristics of S3 Vectors?
S3 Vectors offers highly durable and available vector storage. Data written to S3 Vectors is stored on S3, which is designed for 11 9s of data durability. S3 Vectors is designed to deliver 99.99% availability with an availability SLA of 99.9%.
What query performance can I expect with S3 Vectors?
S3 Vectors delivers sub-second query latency times. It uses the elastic throughput of Amazon S3 to handle searches across millions of vectors and is ideal for infrequent query workloads.
What recall can I expect when querying S3 Vectors?
Several factors can affect average recall for similarity queries over your vector embeddings, including the embedding model, the size of the vector dataset (number of vectors and dimensions), and the distribution of queries. S3 Vectors delivers over 90% average recall for most datasets. Average recall measures the quality of query results: 90% means that, on average, the response contains 90% of the ground-truth closest vectors in the index for the query vector. Because actual performance may vary depending on your specific use case, we recommend conducting your own tests with representative data and queries to validate that S3 vector indexes meet your recall requirements.
How can I see a list of vectors in a vector index?
You can see a list of vectors in a vector index with the ListVectors API, which returns up to 1,000 vectors at a time with an indicator if the response is truncated. The response includes the last modified date, vector key, vector data, and metadata. You can also use the ListVectors API to easily export vector data from a specified vector index. The ListVectors operation is strongly consistent. So, after a write you can immediately list vectors with any changes reflected.
How much does it cost to use S3 Vectors?
With S3 Vectors, you pay for storage and any applicable write and read requests (e.g., inserting vectors and performing query operations on vectors in a vector index). To see pricing details, see the S3 pricing page.
Can I use S3 Vectors as my vector store in Amazon Bedrock Knowledge Bases?
Yes. While creating a Bedrock Knowledge Base through the Bedrock Console or API, you can configure an existing S3 vector index as your vector store to save on vector storage costs for RAG use cases. If you prefer to let Bedrock create and manage the vector index for you, use the Quick Create workflow in the Bedrock console. Additionally, you can configure a new S3 vector index as your vector store for RAG workflows in Amazon SageMaker Unified Studio.
Can I use S3 Vectors with Amazon OpenSearch Service?
Yes. There are two ways you can use S3 Vectors with Amazon OpenSearch Service. First, S3 customers can export all vectors from an S3 vector index to OpenSearch Serverless as a new serverless collection using either the S3 or OpenSearch console. If you build natively on S3 Vectors, you will benefit from being able to use OpenSearch Serverless selectively for workloads with real-time query needs. Second, if you are a managed OpenSearch customer, you can now choose S3 Vectors as your engine for vector data that can be queried with sub-second latency. OpenSearch will then automatically use S3 Vectors as the underlying engine for vectors and you can update and search your vector data using the OpenSearch APIs. You gain the cost benefits of S3 Vectors, with no changes to your applications.
Amazon S3 and IPv6
What is IPv6?
What can I do with IPv6?
How do I get started with IPv6 on Amazon S3?
Should I expect a change in Amazon S3 performance when using IPv6?
S3 Event Notifications
What are Amazon S3 Event Notifications?
What can I do with Amazon S3 Event Notifications?
What is included in Amazon S3 Event Notifications?
How do I set up Amazon S3 Event Notifications?
What does it cost to use Amazon S3 Event Notifications?
Amazon S3 Transfer Acceleration
What is S3 Transfer Acceleration?
How do I get started with S3 Transfer Acceleration?
How fast is S3 Transfer Acceleration?
Who should use S3 Transfer Acceleration?
How secure is S3 Transfer Acceleration?
What if S3 Transfer Acceleration is not faster than a regular Amazon S3 transfer?
Can I use S3 Transfer Acceleration with multipart uploads?
How should I choose between S3 Transfer Acceleration and Amazon CloudFront’s PUT/POST?
Can S3 Transfer Acceleration complement AWS Direct Connect?
Can S3 Transfer Acceleration complement AWS Storage Gateway or a third-party gateway?
Visit the File section of the Storage Gateway FAQ to learn more about the AWS implementation.
Can S3 Transfer Acceleration complement third-party integrated software?
Is S3 Transfer Acceleration HIPAA eligible?
Security
How secure is my data in Amazon S3?
For more information on security in AWS, refer to the AWS security page, and for S3 security information, visit the S3 security page and the S3 security best practices guide.
How can I control access to my data stored on Amazon S3?
Does Amazon S3 support data access auditing?
What options do I have for encrypting data stored on Amazon S3?
Can I comply with European data privacy regulations using Amazon S3?
Where is my object and object metadata stored in AWS Dedicated Local Zones?
By default, your object data and object metadata stay within the single Dedicated Local Zone where you put the object. Bucket management and telemetry data, including bucket names, capacity metrics, CloudTrail logs, CloudWatch metrics, customer managed keys from AWS Key Management Service (KMS), and Identity and Access Management (IAM) policies, are stored in the parent AWS Region. Optional bucket management features, like S3 Batch Operations, store management metadata, including bucket name and object name, in the parent AWS Region.
What is an Amazon VPC Endpoint for Amazon S3?
Can I allow a specific Amazon VPC Endpoint access to my Amazon S3 bucket?
What is AWS PrivateLink for Amazon S3?
How do I get started with interface VPC endpoints for S3?
You can create an interface VPC endpoint using the AWS VPC Management Console, AWS Command Line Interface (AWS CLI), AWS SDK, or API. To learn more, visit the documentation.
When should I choose gateway VPC endpoints versus AWS PrivateLink-based interface VPC endpoints?
Can I use both Interface Endpoints and Gateway Endpoints for S3 in the same VPC?
What is Amazon Macie and how can I use it to secure my data?
What is IAM Access Analyzer for Amazon S3 and how does it work?
For more information, visit the IAM Access Analyzer documentation.
S3 Access Grants
What are Amazon S3 Access Grants?
Why should I use S3 Access Grants?
How do I get started with S3 Access Grants?
What types of identity are supported for S3 Access Grants permission grants?
What are the different access levels that S3 Access Grants offers?
Can I customize my access levels?
Are there any quotas for S3 Access Grants?
Is there any performance impact for data access when I use S3 Access Grants?
What other AWS services are required to use S3 Access Grants?
Does S3 Access Grants require client-side modifications?
Since client-side modifications are necessary, what AWS services and third-party applications are integrated with S3 Access Grants out-of-box today?
Is S3 Access Grants a replacement for AWS IAM?
Does S3 Access Grants work with KMS?
How do I view and manage my S3 Access Grants permission grants?
Can you grant public access to data with S3 Access Grants?
How can I audit requests that were authorized via S3 Access Grants?
How is S3 Access Grants priced?
What is the relationship between S3 Access Grants and Lake Formation?
Is S3 Access Grants integrated with IAM Access Analyzer?
S3 Access Points
What are Amazon S3 Access Points?
Amazon S3 Access Points are endpoints that simplify managing data access for any application or AWS service that works with S3. S3 Access Points work with S3 buckets and Amazon FSx for OpenZFS file systems. You can control and simplify how different applications or users can access data by creating access points with names and permissions tailored to each application or user.
Using S3 Access Points with S3 buckets, you no longer have to manage a single, complex bucket policy with hundreds of different permission rules that need to be written, read, tracked, and audited. Instead, you can create hundreds of access points per bucket that each provide a customized path into a bucket, with a unique hostname and access policy that enforces the specific permissions and network controls for any request made through the access point.
Using S3 Access Points with FSx for OpenZFS, you can access your FSx data using the S3 API as if the data were in S3. With this capability, your file data in FSx for OpenZFS is accessible for use with the broad range of artificial intelligence, machine learning, and analytics services and applications that work with S3 while your file data continues to reside on the FSx for OpenZFS file system.
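For reference, creating an access point on an existing bucket is a single call to the S3 Control API. A minimal Python (boto3) sketch, with placeholder account ID, bucket, and access point names:

```python
import boto3

s3control = boto3.client("s3control", region_name="us-east-1")

# Create an access point scoped to one application; you can attach an access
# point policy afterwards to grant only the permissions that app needs.
s3control.create_access_point(
    AccountId="111122223333",
    Name="analytics-app-ap",
    Bucket="amzn-s3-demo-bucket",
    # Optionally restrict the access point to requests from a specific VPC:
    # VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},
)
```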
Why should I use an access point?
How do S3 Access Points attached to FSx for OpenZFS file systems work?
With S3 Access Points, you can access file data in Amazon FSx for OpenZFS using S3 APIs and without moving data to S3. S3 Access Points attached to FSx for OpenZFS file systems work similarly to how S3 Access Points attached to S3 buckets work, providing data access via S3 with access controlled by access policies, while data continues to be stored in either FSx for OpenZFS file systems or S3 buckets. For example, once an S3 Access Point is attached to an FSx for OpenZFS file system, customers can use the access point with generative AI, machine learning, and analytics services and applications that work with S3 to access their FSx for OpenZFS data.
How do S3 Access Points work?
Is there a quota on how many S3 Access Points I can create?
When using an access point, how are requests authorized?
How do I write access point policies?
How is restricting access to specific VPCs using network origin controls on access points different from restricting access to VPCs using the bucket policy?
Can I enforce a “No internet data access” policy for all access points in my organization?
Can I completely disable direct access to a bucket using the bucket hostname?
Can I replace or remove an access point from a bucket?
What is the cost of Amazon S3 Access Points?
How do I get started with S3 Access Points?
Durability & Data Protection
How durable is Amazon S3?
How is Amazon S3 designed for 99.999999999% durability?
Is data stored in a One Zone storage class protected against damage or loss of the Availability Zone?
How does Amazon S3 go beyond 99.999999999% durability?
With such high durability, do I still need to back up my critical data?
What capabilities does Amazon S3 provide to protect my data against accidental or malicious deletes?
What checksum algorithms does Amazon S3 support for data integrity checking?
Amazon S3 uses a combination of Content-MD5 checksums, secure hash algorithms (SHAs), and cyclic redundancy checks (CRCs) to verify data integrity. Amazon S3 performs these checksums on data at rest and repairs any disparity using redundant data. In addition, the latest AWS SDKs automatically calculate efficient CRC-based checksums for all uploads. S3 independently verifies that checksum and only accepts objects after confirming that data integrity was maintained in transit over the public internet. If a version of the SDK that does not provide pre-calculated checksums is used to upload an object, S3 calculates a CRC-based checksum of the whole object, even for multipart uploads. Checksums are stored in object metadata and are therefore available to verify data integrity at any time.

You can choose from five supported checksum algorithms for data integrity checking on your upload and download requests: SHA-1, SHA-256, CRC32, CRC32C, or CRC64NVME, depending on your application needs. You can automatically calculate and verify checksums as you store or retrieve data from S3, and can access the checksum information at any time using the HeadObject S3 API, the GetObjectAttributes S3 API, or an S3 Inventory report. Calculating a checksum as you stream data into S3 saves you time, as you’re able to both verify and transmit your data in a single pass instead of as two sequential operations. Using checksums for data validation is a best practice for data durability, and these capabilities increase the performance and reduce the cost of doing so.
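For example, you can ask S3 to verify an upload with an additional checksum and read the stored checksum back later. A minimal Python (boto3) sketch, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object and have S3 verify it with a SHA-256 checksum
# (CRC32, CRC32C, CRC64NVME, and SHA-1 are also supported).
s3.put_object(
    Bucket="amzn-s3-demo-bucket",
    Key="reports/2025-07.csv",
    Body=b"col1,col2\n1,2\n",
    ChecksumAlgorithm="SHA256",
)

# Retrieve the stored checksum at any time to re-verify data integrity.
attrs = s3.get_object_attributes(
    Bucket="amzn-s3-demo-bucket",
    Key="reports/2025-07.csv",
    ObjectAttributes=["Checksum"],
)
print(attrs["Checksum"])
```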
What is Versioning?
Why should I use Versioning?
How do I start using Versioning?
How does Versioning protect me from accidental deletion of my objects?
Can I set up a trash, recycle bin, or rollback window on my Amazon S3 objects to recover from deletes and overwrites?
How can I ensure maximum protection of my preserved versions?
How am I charged for using Versioning?
1) Day 1 of the month: You perform a PUT of 4 GB (4,294,967,296 bytes) on your bucket.
2) Day 16 of the month: You perform a PUT of 5 GB (5,368,709,120 bytes) within the same bucket using the same key as the original PUT on Day 1.
When analyzing the storage costs of the above operations, note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 16. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket. At the end of the month:
Total Byte-Hour usage: [4,294,967,296 bytes x 31 days x (24 hours / day)] + [5,368,709,120 bytes x 16 days x (24 hours / day)] = 5,257,039,970,304 Byte-Hours.
Conversion to Total GB-Months: 5,257,039,970,304 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 6.581 GB-Months
The cost is calculated based on the current rates for your Region on the Amazon S3 pricing page.
What is Amazon S3 Object Lock?
Learn more by visiting the S3 Object Lock user guide.
How does Amazon S3 Object Lock work?
S3 Object Lock can be configured in one of two Modes. When deployed in Governance Mode, AWS accounts with specific IAM permissions are able to remove WORM protection from an object version. If you require stronger immutability in order to comply with regulations, you can use Compliance Mode. In Compliance Mode, WORM protection cannot be removed by any user, including the root account.
How does enabling S3 Object Lock for existing buckets impact the objects already existing in the buckets?
Can I disable S3 Object Lock after I have enabled it?
No, you cannot disable S3 Object Lock or S3 Versioning for buckets once S3 Object Lock is enabled.
How do I get started with replicating objects from buckets with S3 Object Lock enabled?
To start replicating objects with S3 Replication from buckets with S3 Object Lock enabled, you can add a replication configuration on your source bucket by specifying a destination bucket in the same or a different AWS Region and in the same or a different AWS account. You can choose to replicate all objects at the S3 bucket level, or filter objects at a shared prefix level or at an object level using S3 object tags. You will also need to specify an AWS Identity and Access Management (IAM) role with the required permissions to perform the replication operation. You can use the S3 console, AWS API, AWS CLI, AWS SDKs, or AWS CloudFormation to enable replication, and you must have S3 Versioning enabled for both the source and destination buckets. Additionally, to replicate objects from S3 Object Lock enabled buckets, your destination bucket must also have S3 Object Lock enabled. For more information, see the documentation on setting up S3 Replication and using S3 Object Lock with S3 Replication.
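As a minimal sketch of that configuration in Python (boto3), with placeholder bucket names, account ID, and IAM role ARN (both buckets must already have S3 Versioning and, in this case, S3 Object Lock enabled):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="amzn-s3-demo-source",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-logs",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},   # or {} to replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::amzn-s3-demo-destination"},
        }],
    },
)
```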
Do I need additional permissions to replicate objects from buckets with S3 Object Lock enabled?
Yes, to replicate objects from S3 Object Lock enabled buckets you need to grant two new permissions, s3:GetObjectRetention and s3:GetObjectLegalHold, on the source bucket in the IAM role that you use to set up replication. Alternatively, if the IAM role has an s3:Get* permission, it satisfies the requirement. For more information see the documentation on using S3 Object Lock with S3 Replication.
Are there any limitations for using S3 Replication while replicating from S3 Object Lock buckets?
No, all features of S3 Replication, such as S3 Same-Region Replication (S3 SRR), S3 Cross-Region Replication (S3 CRR), S3 Replication metrics to track progress, S3 Replication Time Control (S3 RTC), and S3 Batch Replication, are supported while replicating from S3 Object Lock buckets.
How can I replicate existing objects from S3 Object Lock enabled buckets?
You can use S3 Batch Replication to replicate existing objects from S3 Object Lock enabled buckets. For more information on replicating existing objects, see the documentation on S3 Batch Replication.
What is the retention status of the replicas of source objects protected with S3 Object Lock?
Storage Classes
What are the Amazon S3 storage classes?
How do I decide which S3 storage class to use?
In deciding which S3 storage class best fits your workload, consider the access patterns and retention time of your data to optimize for the lowest total cost over the lifetime of your data. Many workloads have changing (user-generated content), unpredictable (analytics, data lakes), or unknown (new applications) access patterns, which is why S3 Intelligent-Tiering should be the default storage class to automatically save on storage costs. If you know the access patterns of your data, you can follow this guidance. The S3 Standard storage class is ideal for frequently accessed data; this is the best choice if you access data more than once a month. S3 Standard-Infrequent Access is ideal for data retained for at least a month and accessed once every month or two.

The Amazon S3 Glacier storage classes are purpose-built for data archiving, providing you with the highest performance, most retrieval flexibility, and the lowest cost archive storage in the cloud. You can choose from three archive storage classes optimized for different access patterns and storage durations. For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval, with retrieval in minutes or free bulk retrievals in 5-12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval within 12 hours. All of these storage classes provide multi-Availability Zone (AZ) resiliency by redundantly storing data on multiple devices in physically separated AWS Availability Zones in an AWS Region.
For data that has a lower resiliency requirement, you can reduce costs by selecting a single-AZ storage class, like S3 One Zone-Infrequent Access. If you have data residency or isolation requirements that can’t be met by an existing AWS Region, you can use S3 storage classes for AWS Dedicated Local Zones or S3 on Outposts racks to store your data in a specific perimeter.
S3 Intelligent-Tiering
What is S3 Intelligent-Tiering?
How does S3 Intelligent-Tiering work?
There is no minimum object size for S3 Intelligent-Tiering, but objects smaller than 128 KB are not eligible for auto-tiering. These smaller objects may be stored in S3 Intelligent-Tiering, but will always be charged at the Frequent Access tier rates and are not charged the monitoring and automation charge. If you would like to standardize on S3 Intelligent-Tiering as the default storage class for newly created data, you can modify your applications by specifying INTELLIGENT_TIERING in your S3 PUT API request header. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability, and automatically offers the same low latency and high throughput performance of S3 Standard. You can use AWS Cost Explorer to measure the additional savings from the Archive Instant Access tier.
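For example, a new object can be written directly into S3 Intelligent-Tiering by setting the storage class on the PUT request. A minimal Python (boto3) sketch, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Store a new object in S3 Intelligent-Tiering from the moment it is created.
s3.put_object(
    Bucket="amzn-s3-demo-bucket",
    Key="uploads/user-photo.jpg",
    Body=b"example object bytes",
    StorageClass="INTELLIGENT_TIERING",
)
```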
Why would I choose to use S3 Intelligent-Tiering?
What performance does S3 Intelligent-Tiering offer?
What performance do the optional Archive Access and Deep Archive Access tiers provide?
How durable and available is S3 Intelligent-Tiering?
How do I get my data into S3 Intelligent-Tiering?
How am I charged for S3 Intelligent-Tiering?
For a small monitoring and automation fee, S3 Intelligent-Tiering monitors access patterns and automatically moves objects through low latency and high throughput access tiers, as well as two opt-in asynchronous archive access tiers where customers get the lowest storage costs in the cloud for data that can be accessed asynchronously.
There is no minimum billable object size in S3 Intelligent-Tiering, but objects smaller than 128 KB are not eligible for auto-tiering. These small objects will not be monitored and will always be charged at the Frequent Access tier rates, with no monitoring and automation charge. For each object archived to the Archive Access tier or Deep Archive Access tier in S3 Intelligent-Tiering, Amazon S3 uses 8 KB of storage for the name of the object and other metadata (billed at S3 Standard storage rates) and 32 KB of storage for index and related metadata (billed at S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage rates).
Is there a charge to retrieve data from S3 Intelligent-Tiering?
How do I activate S3 Intelligent-Tiering archive access tiers?
How do I access an object from the Archive Access or Deep Archive Access tiers in the S3 Intelligent-Tiering storage class?
How do I know in which S3 Intelligent-Tiering access tier my objects are stored in?
Can I lifecycle objects from S3 Intelligent-Tiering to another storage class?
Is there a minimum duration for S3 Intelligent-Tiering?
Is there a minimum billable object size for S3 Intelligent-Tiering?
S3 Standard
What is S3 Standard?
Why would I choose to use S3 Standard?
S3 Express One Zone
What is the Amazon S3 Express One Zone storage class?
Why would I choose to use the Amazon S3 Express One Zone storage class?
How do I get started with the Amazon S3 Express One Zone storage class?
How can I import data into the Amazon S3 Express One Zone storage class?
You can import data from within the same AWS Region into the S3 Express One Zone storage class via the S3 console by using the Import option after you create a directory bucket. Import simplifies copying data into S3 directory buckets by letting you choose a prefix or bucket to import data from without having to specify all of the objects to copy individually. S3 Batch Operations copies the objects in the selected prefix or general purpose bucket and you can monitor the progress of the import copy job through the S3 Batch Operations job details page.
How many Availability Zones are Amazon S3 Express One Zone objects stored in?
What performance does the Amazon S3 Express One Zone storage class provide?
How does the Amazon S3 Express One Zone storage class achieve high performance?
How many transactions per second (TPS) does an S3 directory bucket support?
What happens to an S3 directory bucket with no request activity for an extended period of time?
S3 directory buckets that have no request activity for a period of at least 3 months will transition to an inactive state. While in an inactive state, a directory bucket is temporarily inaccessible for reads and writes. Inactive buckets retain all storage, object metadata, and bucket metadata. Existing storage charges will apply to inactive buckets. On an access request to an inactive bucket, the bucket will transition to an active state, typically within a few minutes. During this transition period, reads and writes will return a 503 SlowDown error code.
How should I plan for my application’s throughput needs with the S3 Express One Zone storage class?
How is request authorization different with Amazon S3 Express One Zone compared to other S3 storage classes?
How reliable is the Amazon S3 Express One Zone storage class?
How is the Amazon S3 Express One Zone storage class designed to provide 99.95% availability?
How am I charged for Amazon S3 Express One Zone?
Example 1:
Assume you store 10 GB of data in S3 Express One Zone for 30 days, making a total of 1,000,000 writes and 9,000,000 reads, accessing with Athena with a request size of 10 KB. Then, you delete 1,000,000 files by the end of 30 days. Assuming your bucket is in the US East (Northern Virginia) Region, the storage and request charges are calculated below:

Storage Charges
Total Byte-Hour usage = 10 GB-Month
Total Storage cost = 10 GB-Month x $0.11 = $1.10

Request Charges
1,000,000 PUT Requests: 1,000,000 requests x $0.00113/1,000 = $1.13
9,000,000 GET Requests: 9,000,000 requests x $0.00003/1,000 = $0.27
1,000,000 DELETE Requests: 1,000,000 requests x $0.00 (no charge) = $0
Data upload charge: 10 KB / 1,048,576 x 1,000,000 x $0.0032 = $0.03
Data retrieval charge: 10 KB / 1,048,576 x 9,000,000 x $0.0006 = $0.05

Total Charges = $1.10 + $1.13 + $0.27 + $0.03 + $0.05 = $2.58

Example 2:
Assume you store 10 TB of data for machine learning training for an 8-hour workload every day, and then delete it. During the 8-hour workload you make 5,242,880 writes and 10,485,760 reads for a 2 MB request size. Assume you do this for 30 days (a month).

Storage Charges
Total Byte-Hour usage = [10,995,116,277,760 bytes x 30 days x (8 hours / day)] = 2,638,827,906,662,400 Byte-Hours = 3,303.77 GB-Month
Total Storage cost = 3,303.77 GB-Month x $0.11 = $363.41

Request Charges
5,242,880 PUT Requests/day: 5,242,880 requests x 30 x $0.00113/1,000 = $177.73
10,485,760 GET Requests/day: 10,485,760 requests x 30 x $0.00003/1,000 = $9.44
5,242,880 DELETE Requests/day: 5,242,880 requests x $0.00 (no charge) = $0
Data upload charge: 2 MB / 1,024 x 5,242,880 x 30 x $0.0032 = $983.04
Data retrieval charge: 2 MB / 1,024 x 10,485,760 x 30 x $0.0006 = $368.64

Total Charges = $363.41 + $177.73 + $9.44 + $983.04 + $368.64 = $1,902.26
Are there any additional Data Transfer charges for using the Amazon S3 Express One Zone storage class within the same Region?
Are there any additional networking charges for using Gateway VPC endpoints with the Amazon S3 Express One Zone storage class?
S3 Standard-Infrequent Access (S3 Standard-IA)
What is S3 Standard-Infrequent Access?
Why would I choose to use S3 Standard-IA?
What performance does S3 Standard-IA offer?
How do I get my data into S3 Standard-IA?
What charges will I incur if I change the storage class of an object from S3 Standard-IA to S3 Standard with a COPY request?
Is there a minimum storage duration charge for S3 Standard-IA?
Is there a minimum object storage charge for S3 Standard-IA?
Can I tier objects from S3 Standard-IA to S3 One Zone-IA or to the S3 Glacier Flexible Retrieval storage class?
S3 One Zone-Infrequent Access (S3 One Zone-IA)
What is S3 One Zone-IA storage class?
What use cases are best suited for S3 One Zone-IA storage class?
What performance does S3 One Zone-IA storage offer?
How durable is the S3 One Zone-IA storage class?
Is an S3 One Zone-IA “Zone” the same thing as an AWS Availability Zone?
How much disaster recovery protection do I forgo by using S3 One Zone-IA?
Amazon S3 Glacier Instant Retrieval storage class
What is the S3 Glacier Instant Retrieval storage class?
Why would I choose to use S3 Glacier Instant Retrieval?
How available and durable is S3 Glacier Instant Retrieval?
What performance does S3 Glacier Instant Retrieval offer?
How do I get my data into S3 Glacier Instant Retrieval?
Is there a minimum storage duration charge for Amazon S3 Glacier Instant Retrieval?
Is there a minimum object size charge for Amazon S3 Glacier Instant Retrieval?
How am I charged for S3 Glacier Instant Retrieval?
Amazon S3 Glacier Flexible Retrieval storage class
What is the S3 Glacier Flexible Retrieval storage class?
Why would I choose to use S3 Glacier Flexible Retrieval storage class?
How do I get my data into S3 Glacier Flexible Retrieval?
Note: S3 Glacier Flexible Retrieval is also available through the original direct Glacier APIs and through the Amazon S3 Glacier Management Console. For an enhanced experience complete with access to the full S3 feature set including lifecycle management, S3 Replication, S3 Storage Lens, and more, we recommend using S3 APIs and the S3 Management Console to use S3 Glacier features.
How can I retrieve my objects that are archived in S3 Glacier Flexible Retrieval and will I be notified when the object is restored?
How long will it take to restore my objects archived in Amazon S3 Glacier Flexible Retrieval?
With S3 Glacier storage class provisioned capacity units, you can pay a fixed upfront fee for a given month to ensure the availability of retrieval capacity for expedited retrievals from S3 Glacier Flexible Retrieval. You can purchase two provisioned capacity units per month to increase the amount of data you can retrieve. Each unit of capacity ensures that at least three expedited retrievals can be performed every five minutes, and it provides up to 150 MB/s of retrieval throughput. If your workload requires highly reliable and predictable access to a subset of your data in minutes, you should purchase provisioned retrieval capacity. Without provisioned capacity, expedited retrievals might not be accepted during periods of high demand. If you require access to expedited retrievals under any circumstance, we recommend that you purchase provisioned retrieval capacity.
You can purchase provisioned capacity using the Amazon S3 console, the purchase provisioned capacity REST API, the AWS SDKs, or the AWS CLI. A provisioned capacity unit lasts for one month starting at the date and time of purchase, which is the start date. The unit expires on the expiration date, which is exactly one month after the start date to the nearest second. For provisioned capacity pricing information, see Amazon S3 pricing.
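Whether or not you have purchased provisioned capacity, an expedited retrieval itself is requested through a standard S3 restore call. A minimal Python (boto3) sketch, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Request an expedited restore of an object archived in S3 Glacier Flexible
# Retrieval, keeping the restored copy available for 7 days.
s3.restore_object(
    Bucket="amzn-s3-demo-bucket",
    Key="archives/2019/backup.tar",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Expedited"},  # or "Standard" / "Bulk"
    },
)

# Poll the restore status; the Restore header reports when the copy is ready.
head = s3.head_object(Bucket="amzn-s3-demo-bucket", Key="archives/2019/backup.tar")
print(head.get("Restore"))
```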
How is my storage charge calculated for Amazon S3 objects archived to S3 Glacier Flexible Retrieval?
1.000032 gigabytes for each object x 100,000 objects = 100,003.2 gigabytes of S3 Glacier storage.
0.000008 gigabytes for each object x 100,000 objects = 0.8 gigabytes of S3 Standard storage.
The fee is calculated based on the current rates for your AWS Region on the Amazon S3 pricing page. For additional Amazon S3 pricing examples, go to the S3 billing FAQs or use the AWS pricing calculator.
Are there minimum storage duration and minimum object storage charges for Amazon S3 Glacier Flexible Retrieval?
S3 Glacier Flexible Retrieval also requires 40 KB of additional metadata for each archived object: 32 KB of metadata, charged at the S3 Glacier Flexible Retrieval rate, required to identify and retrieve your data, and an additional 8 KB of data, charged at the S3 Standard rate, required to maintain the user-defined name and metadata for objects archived to S3 Glacier Flexible Retrieval. This allows you to get a real-time list of all of your S3 objects using the S3 LIST API or the S3 Inventory report. View the Amazon S3 pricing page for information about Amazon S3 Glacier Flexible Retrieval pricing.
How much does it cost to retrieve data from Amazon S3 Glacier Flexible Retrieval?
Does Amazon S3 provide capabilities for archiving objects to lower cost storage classes?
What is the backend infrastructure supporting the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage class?
Amazon S3 Glacier Deep Archive
What is the Amazon S3 Glacier Deep Archive storage class?
What use cases are best suited for the S3 Glacier Deep Archive storage class?
How does the S3 Glacier Deep Archive storage class differ from the S3 Glacier Instant Retrieval, and S3 Glacier Flexible Retrieval storage classes?
How do I get started using S3 Glacier Deep Archive?
How do you recommend migrating data from my existing tape archives to S3 Glacier Deep Archive?
You can also use AWS Snowball to migrate data. Snowball accelerates moving terabytes to petabytes of data into and out of AWS using physical storage devices designed to be secure for transport. Using Snowball helps to eliminate challenges that can be encountered with large-scale data transfers including high network costs, long transfer times, and security concerns. Finally, you can use AWS Direct Connect to establish dedicated network connections from your premises to AWS. In many cases, Direct Connect can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
How can I retrieve my objects stored in S3 Glacier Deep Archive?
How am I charged for using S3 Glacier Deep Archive?
How will S3 Glacier Deep Archive usage show up on my AWS bill and in the AWS Cost Management tool?
Are there minimum storage duration and minimum object storage charges for S3 Glacier Deep Archive?
How does S3 Glacier Deep Archive integrate with other AWS Services?
S3 on Outposts
What is Amazon S3 on Outposts?
Storage Management
What are S3 object tags?
Learn more by visiting the S3 object tags user guide.
Why should I use object tags?
How can I update the object tags on my objects?
How much do object tags cost?
How do I get started with Storage Class Analysis?
Why should I use Amazon S3 Metadata?
You should use Amazon S3 Metadata if you want to use SQL to query the information about your S3 objects to quickly identify specific datasets for your generative AI, analytics, and other use cases. S3 Metadata keeps metadata up to date in near real time, so you can use any Iceberg-compatible client to run SQL queries to find objects by the object metadata. For example, you can use a SQL query to return a list of objects that match certain filters such as objects added in the last 30 days across any bucket.
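As an illustration, such a query can be run with Amazon Athena. The sketch below is Python (boto3); the database, table, and column names (s3_metadata.journal, key, record_timestamp) and the output location are placeholders for illustration, so substitute the names of your own metadata table and a results bucket you own:

```python
import boto3

athena = boto3.client("athena")

# Find objects added in roughly the last 30 days, according to the metadata
# table. Database, table, and column names below are illustrative placeholders.
query = """
SELECT key
FROM "s3_metadata"."journal"
WHERE record_timestamp > current_timestamp - interval '30' day
"""

athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://amzn-s3-demo-bucket/athena-results/"},
)
```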
How does S3 Metadata work?
S3 Metadata is designed to automatically generate metadata that provides additional information about objects that are uploaded into a bucket and to make that metadata queryable in a read-only table. These metadata tables are stored in Amazon S3 Tables, which are built on Apache Iceberg and provide a managed way to store and query tabular data within S3. S3 Metadata creates and maintains system-level metadata such as object size, custom metadata such as tags and user-defined metadata during object upload, and event metadata such as the IP address that sent the request. As data in your bucket changes, S3 Metadata updates in near real time to reflect the latest changes. You can then query your metadata tables using various AWS analytics services and open source tools that are Iceberg-compatible, including Amazon Athena, Amazon QuickSight, and Apache Spark.
How do I get started with S3 Metadata?
You can get started with S3 Metadata with just a few clicks in the S3 console. Just select the general purpose S3 bucket on which you would like to enable S3 Metadata, and S3 will analyze the data in your bucket and build a fully managed Apache Iceberg table that contains metadata for all of your objects. Within minutes, you can begin to query your metadata using any query engine or tooling that supports Apache Iceberg.
Where are my S3 Metadata tables stored?
Your S3 Metadata tables are stored in an AWS managed table bucket in your AWS Account called aws-s3. Your tables will be read-only, and only S3 will have permission to write, update, or delete metadata.
What are the different types of S3 Metadata tables?
S3 Metadata stores metadata in two managed tables in your account: journal tables and live inventory tables.
The S3 Metadata journal table provides a view of changes made within your bucket. As objects are added to, updated, and removed from your general purpose S3 buckets, the corresponding changes are reflected in the journal tables in near real time. Journal tables are useful for understanding the behavior of your applications, and for identifying any change made to your datasets. For example, you can write SQL queries for journal tables to find S3 objects that match a filter such as objects added in the last 30 days, objects that were added by active requesters, or objects that have metadata changes across the last week.
The S3 Metadata live inventory table contains a complete list of all the objects in your bucket. Live inventory tables are updated hourly and contain all the information that S3 knows about your objects. Live inventory tables are useful for discovering or identifying datasets in your bucket, based on the characteristics generated in object metadata. For example, you can use live inventory tables to identify training datasets for machine learning, to use in storage cost optimization exercises, or to help enforce governance controls.
How soon are changes from my bucket reflected in S3 Metadata?
When you add new objects to your bucket, you will see entries in the journal table within minutes, and you will see entries in the live inventory table on the next hourly refresh. When you enable S3 Metadata on an existing bucket, S3 will automatically start a backfill operation to generate metadata for all your existing objects. This backfill typically finishes in minutes but can take several hours if your existing datasets contain millions or billions of S3 objects.
Can I combine S3 Metadata tables with my own metadata?
What is S3 Inventory?
The S3 Inventory report provides a scheduled alternative to Amazon S3’s synchronous List API. You can configure S3 Inventory to provide a CSV, ORC, or Parquet file output of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or prefix. You can simplify and speed up business workflows and big data jobs with S3 Inventory. You can also use S3 inventory to verify encryption and replication status of your objects to meet business, compliance, and regulatory needs. Learn more at the Amazon S3 Inventory user guide.
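For example, a daily Parquet inventory report can be configured with a single API call. A minimal Python (boto3) sketch, with placeholder bucket names, account ID, and configuration ID:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_inventory_configuration(
    Bucket="amzn-s3-demo-bucket",
    Id="daily-parquet-inventory",
    InventoryConfiguration={
        "Id": "daily-parquet-inventory",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Schedule": {"Frequency": "Daily"},
        "Destination": {
            "S3BucketDestination": {
                "AccountId": "111122223333",
                "Bucket": "arn:aws:s3:::amzn-s3-demo-inventory-reports",
                "Format": "Parquet",
            }
        },
        # Metadata fields to include alongside each object in the report.
        "OptionalFields": ["Size", "EncryptionStatus", "ReplicationStatus"],
    },
)
```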
How do I get started with S3 Inventory?
How am I charged for using S3 Inventory?
What are Amazon S3 Tables?
Why should I use S3 Tables?
How do table buckets work?
S3 Tables provide purpose-built S3 storage for storing structured data in the Apache Parquet, Avro, and ORC formats. Within a table bucket, you can create tables as first-class resources directly in S3. These tables can be secured with table-level permissions defined in either identity- or resource-based policies, and are accessible by applications or tooling that supports the Apache Iceberg standard. When you create a table in your table bucket, the underlying data in S3 is stored as Parquet, Avro, or ORC files. Then, S3 uses the Apache Iceberg standard to store the metadata necessary to make that data queryable by your applications. S3 Tables include a client library that is used by query engines to navigate and update the Iceberg metadata of tables in your table bucket. This library, in conjunction with updated S3 APIs for table operations, allows multiple clients to safely read and write data to your tables. Over time, S3 automatically optimizes the underlying Parquet, Avro, or ORC data by rewriting, or "compacting", your objects. Compaction optimizes your data on S3 to improve query performance.
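As a rough sketch of what working with table buckets looks like in Python (boto3): the s3tables client and the parameter and response names below (name, tableBucketARN, namespace, format, arn) are assumptions, so confirm them against the current AWS SDK reference for Amazon S3 Tables before use:

```python
import boto3

# Assumed boto3 client and parameter names -- verify against the current AWS
# SDK documentation for Amazon S3 Tables before use.
s3tables = boto3.client("s3tables", region_name="us-east-1")

# Create a table bucket, a namespace within it, and an Iceberg table.
bucket = s3tables.create_table_bucket(name="analytics-tables")
bucket_arn = bucket["arn"]

s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["sales"])

s3tables.create_table(
    tableBucketARN=bucket_arn,
    namespace="sales",
    name="daily_transactions",
    format="ICEBERG",
)
```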
How do I get started with S3 Tables?
How do I create and delete tables in my table bucket?
How do I query my tables?
What performance can I expect from S3 Tables?
You can expect up to 3x faster query performance and up to 10x higher transactions per second (TPS) compared to storing Iceberg tables in general purpose Amazon S3 buckets. This is because table buckets automatically compact the underlying Parquet, Avro, or ORC data for your tables to optimize query performance, and the purpose-built storage supports up to 10x the TPS by default.
Can I manually overwrite or delete an object in my table bucket?
How do table bucket permissions work?
Table buckets give you the ability to apply resource policies to the entire bucket, or to individual tables. Table bucket policies can be applied using the PutTablePolicy and PutTableBucketPolicy APIs. Table-level policies allow you to manage permissions to tables in your table buckets based on the logical table that it is associated with, without having to understand the physical location of individual Parquet, Avro, or ORC files. Additionally, S3 Block Public Access is always applied to your table buckets.
Do table buckets support concurrent writes to a single table?
What table and data formats do table buckets support?
Table buckets support the Apache Iceberg table format with Parquet, Avro, or ORC data.
What table maintenance operations are offered by table buckets?
Can I track and audit changes made to my tables?
Do table buckets support encryption at rest for my table data?
How much does it cost to use S3 Tables?
How does compaction work for S3 Tables?
How does snapshot management work for S3 Tables?
How does unreferenced file removal work for S3 Tables?
What is S3 Batch Operations?
How do I get started with S3 Batch Operations?
If you are interested in learning more about S3 Batch Operations watch the tutorials videos and visit the documentation.
What AWS electronic storage services have been assessed based on financial services regulations?
What AWS documentation supports the SEC 17a-4(f)(2)(i) and CFTC 1.31(c) requirement for notifying my regulator?
How do I get started with S3 CloudWatch Metrics?
What alarms can I set on my storage metrics?
How am I charged for using S3 CloudWatch Metrics?
What is S3 Lifecycle management?
How do I set up an S3 Lifecycle management policy?
How can I use Amazon S3 Lifecycle management to help lower my Amazon S3 storage costs?
You can also specify an S3 Lifecycle policy to delete objects after a specific period of time. You can use this policy-driven automation to quickly and easily reduce storage costs as well as save time. In each rule you can specify a prefix, a time period, a transition to S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, and/or an expiration. For example, you could create a rule that archives into S3 Glacier Flexible Retrieval all objects with the common prefix “logs/” 30 days from creation and expires these objects after 365 days from creation.
You can also create a separate rule that only expires all objects with the prefix “backups/” 90 days from creation. S3 Lifecycle policies apply to both existing and new S3 objects, helping you optimize storage and maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration.
Within a lifecycle rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. “logs/”). You can specify a transition action to have your objects archived and an expiration action to have your objects removed. For time period, provide the creation date (e.g. January 31, 2015) or the number of days from creation date (e.g. 30 days) after which you want your objects to be archived or removed. You may create multiple rules for different prefixes.
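The example rule above maps directly onto a single lifecycle configuration call. A minimal Python (boto3) sketch, with a placeholder bucket name (the GLACIER storage class value corresponds to S3 Glacier Flexible Retrieval):

```python
import boto3

s3 = boto3.client("s3")

# Archive objects under "logs/" to S3 Glacier Flexible Retrieval 30 days after
# creation and expire them 365 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="amzn-s3-demo-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }],
    },
)
```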
How much does it cost to use S3 Lifecycle management?
Why would I use an S3 Lifecycle policy to expire incomplete multipart uploads?
Can I set up Amazon S3 Event Notifications to send notifications when S3 Lifecycle transitions or expires objects?
Storage Analytics & Insights
What features are available to analyze my storage usage on Amazon S3?
What is Amazon S3 Storage Lens?
How does S3 Storage Lens work?
What are the key questions that can be answered using S3 Storage Lens metrics?
The S3 Storage Lens dashboard is organized around four main types of questions that can be answered about your storage. With the Summary filter, top-level questions related to overall storage usage and activity trends can be explored. For example, “How rapidly are my overall byte count and request count increasing over time?” With the Cost Optimization filter, you can explore questions related to storage cost reduction, for example, “Is it possible for me to save money by retaining fewer non-current versions?” With the Data Protection and Access Management filters, you can answer questions about securing your data, for example, “Is my storage protected from accidental or intentional deletion?” Finally, with the Performance and Events filters, you can explore ways to improve the performance of workflows. Each of these questions represents a first layer of inquiry that would likely lead to drill-down analysis.
What metrics are available in S3 Storage Lens?
What are my dashboard configuration options?
A default dashboard is configured automatically for your entire account, and you have the option to create additional custom dashboards that can be scoped to your AWS organization, specific Regions, or buckets within an account. You can set up multiple custom dashboards, which can be useful if you require some logical separation in your storage analysis, such as segmenting on buckets to represent various internal teams. By default, your dashboard receives the S3 Storage Lens free metrics, but you have the option to upgrade to receive S3 Storage Lens advanced metrics and recommendations (for an additional cost). S3 Storage Lens advanced metrics have 7 distinct options: Activity metrics, Advanced Cost Optimization metrics, Advanced Data Protection metrics, Detailed Status Code metrics, Prefix aggregation, CloudWatch publishing, and Storage Lens groups aggregation. Additionally, for each dashboard you can enable metrics export, with additional options to specify the destination bucket and encryption type.
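As an illustration, custom dashboards can also be configured outside the console. The sketch below uses the Boto3 S3 Control API (put_storage_lens_configuration) with a hypothetical account ID, bucket ARN, and export bucket; the activity metrics shown belong to the paid advanced tier, and the metrics export is optional.

```python
import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"  # hypothetical account ID

s3control.put_storage_lens_configuration(
    ConfigId="team-a-dashboard",
    AccountId=account_id,
    StorageLensConfiguration={
        "Id": "team-a-dashboard",
        "IsEnabled": True,
        # Scope the dashboard to a specific bucket (hypothetical ARN).
        "Include": {"Buckets": ["arn:aws:s3:::team-a-bucket"]},
        "AccountLevel": {
            # Activity metrics are part of the advanced (paid) tier.
            "ActivityMetrics": {"IsEnabled": True},
            "BucketLevel": {"ActivityMetrics": {"IsEnabled": True}},
        },
        # Optional daily metrics export to an S3 bucket you own.
        "DataExport": {
            "S3BucketDestination": {
                "Format": "CSV",
                "OutputSchemaVersion": "V_1",
                "AccountId": account_id,
                "Arn": "arn:aws:s3:::storage-lens-exports",
                "Encryption": {"SSES3": {}},
            }
        },
    },
)
```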
How much historical data is available in S3 Storage Lens?
How will I be charged for S3 Storage Lens?
S3 Storage Lens is available in two tiers of metrics. The free metrics are enabled by default and available at no additional charge to all S3 customers. The S3 Storage Lens advanced metrics and recommendations pricing details are available on the S3 pricing page. With S3 Storage Lens free metrics you receive 28 metrics at the bucket level, and can access 14 days of historical data in the dashboard. With S3 Storage Lens advanced metrics and recommendations you receive 35 additional metrics, prefix-level aggregation, CloudWatch metrics support, custom object metadata filtering with S3 Storage Lens groups, and can access 15 months of historical data in the dashboard.
What is the difference between S3 Storage Lens and S3 Inventory?
What is the difference between S3 Storage Lens and S3 Storage Class Analysis (SCA)?
What is Storage Class Analysis?
How often is the Storage Class Analysis updated?
Query in Place
What is "Query in Place" functionality?
How do I query my data in Amazon S3?
What is Amazon Athena?
What is Amazon Redshift Spectrum?
Replication
What is Amazon S3 Replication?
What is Amazon S3 Cross-Region Replication (CRR)?
What is Amazon S3 Same-Region Replication (SRR)?
What is Amazon S3 Batch Replication?
How do I enable Amazon S3 Replication (Cross-Region Replication and Same-Region Replication)?
How do I use S3 Batch Replication?
Can I use S3 Replication with S3 Lifecycle rules?
You can find more information about lifecycle configuration and replication in the S3 Replication documentation.
Can I use S3 Replication to replicate to more than one destination bucket?
Yes. S3 Replication allows customers to replicate their data to multiple destination buckets in the same or different AWS Regions. When setting up, you simply specify the new destination bucket in your existing replication configuration or create a new replication configuration with multiple destination buckets. For each new destination you specify, you have the flexibility to choose the storage class of the destination bucket, encryption type, replication metrics and notifications, Replication Time Control (RTC), and other properties.
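As a sketch of what a multi-destination configuration looks like, the Boto3 example below attaches two rules to a source bucket, one per destination; the bucket names, IAM role, and property choices are hypothetical, and versioning must already be enabled on the source and destination buckets.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names and replication role for illustration.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                # First destination: a bucket in another Region, with
                # Replication Time Control (RTC) and metrics enabled.
                "ID": "replicate-to-us-west-2",
                "Priority": 1,
                "Filter": {},
                "Status": "Enabled",
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::dest-bucket-usw2",
                    "StorageClass": "STANDARD_IA",
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            },
            {
                # Second destination: a bucket in the same Region.
                "ID": "replicate-to-analytics",
                "Priority": 2,
                "Filter": {},
                "Status": "Enabled",
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::dest-bucket-analytics"},
            },
        ],
    },
)
```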
Can I use S3 Replication to set up two-way replication between S3 buckets?
Can I use replication across AWS accounts to protect against malicious or accidental deletion?
Will my object tags be replicated if I use Cross-Region Replication?
Can I replicate delete markers from one bucket to another?
Can I replicate data from other AWS Regions to China? Can a customer replicate from one China Region bucket outside of China Regions?
Can I replicate existing objects?
Can I retry replication if objects fail to replicate initially?
What encryption types does S3 Replication support?
What is the pricing for cross account data replication?
Visit the Amazon S3 pricing page for more details on S3 Replication pricing.
What is Amazon S3 Replication Time Control?
How do I enable Amazon S3 Replication Time Control?
Can I use S3 Replication Time Control to replicate data within and between China Regions?
What are Amazon S3 Replication metrics and events?
How do I enable Amazon S3 Replication metrics and events?
Can I use Amazon S3 Replication metrics and events to track S3 Batch Replication?
What is the Amazon S3 Replication Time Control Service Level Agreement (SLA)?
What is the pricing for S3 Replication and S3 Replication Time Control?
What are S3 Multi-Region Access Points?
Why should I use S3 Multi-Region Access Points?
How do S3 Multi-Region Access Points work?
In an active-active configuration, S3 Multi-Region Access Points consider factors like network congestion and the location of the requesting application to dynamically route your requests over the AWS network to the closest copy of your data. S3 Multi-Region Access Points route your requests through the closest AWS location to your client, and then over the global private AWS network to S3. In an active-passive configuration, requests are routed only to the AWS Regions you designate as active; passive Regions receive no request traffic until you initiate a failover. In either configuration, S3 Multi-Region Access Points allow you to take advantage of the global infrastructure of AWS while maintaining a simple application architecture.
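For illustration, applications make requests through a Multi-Region Access Point by passing its ARN where a bucket name would normally go. The minimal Boto3 sketch below assumes a hypothetical account ID, access point alias, and object key; these requests are signed with Signature Version 4A, which in Boto3 requires the AWS CRT (for example, installing botocore[crt]).

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical Multi-Region Access Point ARN; it is used in place of a
# bucket name, and S3 routes the request to the closest copy of the data.
mrap_arn = "arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap"

response = s3.get_object(Bucket=mrap_arn, Key="reports/latest.csv")
data = response["Body"].read()
```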
What is the difference between S3 Cross-Region Replication (S3 CRR) and S3 Multi-Region Access Points?
S3 CRR and S3 Multi-Region Access Points are complementary features that work together to replicate data across AWS Regions and then to automatically route requests to the replicated copy with the lowest latency. S3 Multi-Region Access Points help you to manage requests across AWS Regions, while CRR allows you to move data across AWS Regions to create isolated replicas. You use S3 Multi-Region Access Points and CRR together to create a replicated multi-Region dataset that is addressable by a single global endpoint.
How much do S3 Multi-Region Access Points cost?
When you use an S3 Multi-Region Access Point to route requests within AWS, you pay a low per-GB data routing charge for each GB processed, as well as standard charges for S3 requests, storage, data transfer, and replication. If your application runs outside of AWS and accesses S3 over the internet, S3 Multi-Region Access Points increase performance by automatically routing your requests through an AWS edge location, over the global private AWS network, to the closest copy of your data based on access latency. When you accelerate requests made over the internet, you pay the data routing charge and an internet acceleration charge. S3 Multi-Region Access Points internet acceleration pricing varies based on whether the source client is in the same or in a different location as the destination AWS Region, and is in addition to standard S3 data transfer pricing. To use S3 Multi-Region Access Points failover controls, you are only charged for standard S3 API costs to view the current routing control status of each Region and submit any routing control changes for initiating a failover. See the Amazon S3 pricing page and the data transfer tab for more pricing information.
Can I use Requester Pays with S3 Multi-Region Access Points?
Yes, you can configure the underlying buckets of the S3 Multi-Region Access Point to be Requester Pays buckets. With Requester Pays, the requester pays all of the costs associated with the endpoint usage, including the request and data transfer costs associated with both the bucket and the Multi-Region Access Point. Typically, you want to configure your buckets as Requester Pays buckets if you wish to share data but not incur charges associated with others accessing the data. In general, bucket owners pay for all Amazon S3 storage associated with their bucket. To learn more, please visit S3 Requester Pays.
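As a brief sketch, a requester reading from Requester Pays buckets behind a Multi-Region Access Point acknowledges the charges with the RequestPayer parameter; the ARN and key below are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical Multi-Region Access Point ARN whose underlying buckets are
# configured as Requester Pays.
mrap_arn = "arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap"

s3.get_object(
    Bucket=mrap_arn,
    Key="shared/dataset.parquet",
    RequestPayer="requester",  # the request is rejected if this is omitted
)
```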
How is S3 Transfer Acceleration different than S3 Multi-Region Access Points?
How do I get started with S3 Multi-Region Access Points and failover controls?
The S3 console provides a simple guided workflow to quickly set up everything you need to run multi-Region storage on S3 in just three steps. First, create an Amazon S3 Multi-Region Access Point endpoint and specify the AWS Regions you want to replicate and fail over between. You can add buckets in multiple AWS accounts to a new S3 Multi-Region Access Point by entering the account IDs that own the buckets at the time of creation. Second, for each AWS Region and S3 bucket behind your S3 Multi-Region Access Point endpoint, specify whether its routing status is active or passive, where active AWS Regions accept S3 data request traffic and passive Regions are not routed to until you initiate a failover. Third, configure your S3 Cross-Region Replication rules to synchronize your data in S3 between the Regions and/or accounts. You can then initiate a failover between the AWS Regions at any time, shifting your S3 data requests within minutes, and monitor the shift of your S3 traffic to your new active AWS Region in Amazon CloudWatch. Alternatively, you can use AWS CloudFormation to automate your multi-Region storage configuration. All of the building blocks required to set up multi-Region storage on S3, including S3 Multi-Region Access Points, are supported by CloudFormation, allowing you to automate a repeatable setup process outside of the S3 console.
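As a sketch of the programmatic path, the Boto3 example below creates a Multi-Region Access Point over two existing buckets; the account ID, access point name, and bucket names are hypothetical, and routing statuses and Cross-Region Replication rules would still be configured separately.

```python
import boto3

# The Multi-Region Access Point control plane is hosted in us-west-2.
s3control = boto3.client("s3control", region_name="us-west-2")

response = s3control.create_multi_region_access_point(
    AccountId="111122223333",  # hypothetical account ID
    Details={
        "Name": "my-multi-region-access-point",
        # One existing bucket per Region to place behind the access point.
        "Regions": [
            {"Bucket": "my-bucket-us-east-1"},
            {"Bucket": "my-bucket-eu-west-1"},
        ],
    },
)

# Creation is asynchronous; the request token can be used to poll for status
# with describe_multi_region_access_point_operation.
print(response["RequestTokenARN"])
```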