AWS Cloud Financial Management
Export and visualize carbon emissions data from your AWS accounts
In March 2022, AWS announced the Customer Carbon Footprint Tool (CCFT). Since then, customers have used the CCFT available in the AWS Management Console to understand carbon emission estimates associated with their cloud usage. In April 2025, AWS added carbon emissions data to AWS Data Exports. This managed feature lets you automatically export carbon emissions data at AWS Account and AWS Region granularity on a monthly basis to Amazon Simple Storage Service (Amazon S3). When using AWS Organizations, the carbon emissions export delivers data for all member accounts linked to your management account.
This post explains how to configure the recurring delivery of carbon emissions data to Amazon S3 and how to visualize the exported data in the sustainability proxy metrics dashboard of the Cloud Intelligence Dashboards (CID). Using Data Exports and the CID, you can track emissions across more than one AWS organization, build custom visualizations, and drill down to member account-level granularity.
Previously, some customers collected emissions data for an entire AWS Organization with the sample code published as experimental programmatic access to the CCFT. If you used the experimental programmatic access before, this post also explains how to delete that application while retaining your data and switch to AWS Data Exports.
Integration of Data Exports with analytics and BI services

Figure 1: Architecture diagram for the integration of carbon emissions Data Export with analytics and BI services
AWS Data Exports delivers carbon emissions data to Amazon S3 on a recurring basis. In a common architecture, the AWS Glue Data Catalog stores table metadata describing the location and structure of the data in Amazon S3. Amazon Athena runs standard SQL queries against these tables without the need to load the data into a database. Amazon QuickSight lets you build dashboards from various datasets, including SQL queries run by Athena.
Configure and query carbon emissions data through Data Exports
Let’s look at two options to configure carbon emissions exports:
- Option 1: Create all resources manually in the AWS console, all in a single AWS Account. Choose this option to understand the concepts step-by-step.
- Option 2: Create all resources with Infrastructure as Code (IaC) using an AWS CloudFormation template. Choose this option if you want to collect carbon emissions data for one or more AWS Organizations, process the data in a separate account, or deploy the sustainability proxy metrics dashboard.
Afterwards, you will be able to query the carbon emissions data with Athena, irrespective of which option you followed.
Prerequisites
To configure the carbon emissions Data Export, make sure to complete the following prerequisites:
- You must have an Amazon S3 bucket in your AWS Account to receive and store your data exports. Learn how to set up an Amazon S3 bucket for data exports.
- To use Data Exports, an AWS Identity and Access Management (IAM) user needs to be given access to actions in the bcm-data-exports namespace in IAM. Learn more about Identity and access management for Data Exports.
- To access data in the Data Export carbon emissions table, you need the IAM permission sustainability:GetCarbonFootprintSummary.
Option 1: Create the carbon emissions Data Export and tables manually
1. Create the AWS Data Export for carbon emissions in the AWS Console (read more about creating Data Exports for details):
- Navigate to the AWS Data Exports feature in the Billing and Cost Management console
- Choose Create
- Enter an Export name, e.g. carbon-emissions
- Choose Carbon emissions as the data table
- Choose an S3 bucket name and S3 path prefix of your choice
- Leave the defaults for all other choices
- Choose Create

Figure 2: Screenshot of carbon emissions Data Export creation
The S3 bucket name, S3 path prefix, and export name are parts of the data export path we will reference later in this post. The final data export path is s3://<S3-bucket-name>/<s3-path-prefix>/<export-name>/data, e.g. s3://my-bucket/my/prefix/carbon-emissions/data.
It may take up to 24 hours before your carbon emissions data, including a backfill of 38 months, is exported to the S3 bucket.
2. Once the export delivers the data, create the table in the data catalog by running the following query in Amazon Athena’s query editor. Choose the default database, or create a dedicated database for this table. Replace <database> with default or the name of your dedicated database, and <data export path> with the data export path explained before.
CREATE EXTERNAL TABLE <database>.`carbon`(
`payer_account_id` string,
`usage_account_id` string,
`total_mbm_emissions_value` double,
`total_mbm_emissions_unit` string,
`model_version` string,
`product_code` string,
`usage_period_start` timestamp,
`usage_period_end` timestamp,
`last_refresh_timestamp` timestamp,
`region_code` string,
`location` string)
PARTITIONED BY (
`carbon_model_version` string,
`usage_period` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
'<data export path>'
3. The export delivers files with S3 prefixes according to the Apache Hive style. Each prefix contains its partitions encoded as key-value pairs connected by equal signs:
<data export path>/carbon_model_version=xxx/usage_period=2025-02/<report-name>-00001.snappy.parquet
<data export path>/carbon_model_version=yyy/usage_period=2025-03/<report-name>-00001.snappy.parquet
When the data export delivers data for the first time, or on monthly data refreshes, the data contains values for usage_period and carbon_model_version. You must add these partitions to the Glue data catalog. Make Athena discover the data partitions by running this query in the Athena query editor:
MSCK REPAIR TABLE <database>.`carbon`
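Optionally, you can verify that the partitions were registered. The following statement is a minimal check, assuming the database and table name from the CREATE TABLE statement above:
SHOW PARTITIONS <database>.`carbon`
The output should list one entry per carbon_model_version and usage_period combination delivered so far.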
You are now ready to query the data via Athena.
Option 2: Create the carbon emissions Data Export and tables with IaC
Another way to make the data accessible to Athena is to create the data export and the tables via AWS CloudFormation. Follow the instructions in this workshop. The instructions cover the programmatic creation of a carbon emissions Data Export, the collection of the data in a separate account, and the creation of an AWS Glue crawler to regularly discover new data.
Query and download the data
Once you have configured the Data Export following one of the two options, test the retrieval of data with the following query in the Amazon Athena query editor:
SELECT * FROM <database>."carbon" LIMIT 10;
The query editor shows results similar to the following sample data:

Figure 3: Amazon Athena Query editor
You may download the results as a CSV by choosing Download results CSV.
If you don’t see any results, this may be because your account was created recently or doesn’t have any carbon emissions data to display. Read the documentation on understanding your carbon emission estimations for details. Additionally, keep in mind that it may take up to 24 hours before your carbon emissions data, including a backfill of 38 months, is exported to the S3 bucket.
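To check that the backfill has been delivered and picked up, you can also count the exported rows per month. This is a small sketch that assumes the same <database> and table name used earlier:
SELECT usage_period, COUNT(*) AS row_count
FROM <database>."carbon"
GROUP BY usage_period
ORDER BY usage_period;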
Deploy the dashboards visualizing the new data
If you have followed option 2, you can now set up the sustainability proxy metrics dashboard to visualize the carbon emissions data. To do so, follow these steps:
- Follow step 3 from the Cloud Intelligence Dashboards workshop
- In step 3.2, keep the default “Yes” for “Deploy CUDOS v5 Dashboard”, as this is a requirement for the sustainability proxy metrics dashboard
- Next, follow the instructions in the sustainability proxy metrics deployment guide to set up the visualizations based on your configured carbon emissions Data Export
Once deployed, navigate to your dashboard and the Carbon emissions tab. Figures 4 and 5 below show dashboard visuals with sample data that allow you to:
- Spot monthly variations and trends across selected AWS Accounts
- Drill down to your top contributors
- Drill down to the carbon emissions distribution across products
- Drill down to the carbon emissions distribution across AWS regions
- Analyze the flow of carbon emissions in relation to AWS regions and product codes
- Spot monthly variations and trends across multiple AWS organizations

Figure 4: Screenshot of Sustainability Proxy Metrics Dashboard, Carbon Emissions tab (1/2)

Figure 5: Screenshot of Sustainability Proxy Metrics Dashboard, Carbon Emissions tab (2/2)
Delete experimental access resources while maintaining your previously collected data
If you have previously used the experimental programmatic access, delete the application to stop incurring any costs. The endpoint used by this sample code will be discontinued on July 23rd, 2025, and the sample will stop working. We recommend using Data Exports instead.
Delete the stack by either:
- Using the AWS CLI: assuming you used ccft-sam-script for the stack name, run
aws cloudformation delete-stack --stack-name ccft-sam-script
- Using the AWS CloudFormation console (see the documentation for step-by-step instructions)
During stack deletion, CloudFormation will attempt to delete the empty S3 bucket. If your bucket contains data, the stack deletion will fail unless you first empty the bucket. To preserve your historical data, either keep the data in the bucket (which will prevent stack deletion) or back up the data before proceeding with the deletion.
When you delete the stack, the Athena database, tables and views will be retained. If you chose to retain your S3 bucket with historical carbon emissions data, you can still use these tables and views for analysis.
Get notified on recurring data refreshes
You can set up notifications for new files written by AWS Data Exports using Amazon S3 Event Notifications in combination with Amazon SNS or AWS Lambda. If you want to get notified just once per export rather than for each individual file, read more on a pattern for handling duplicate events in the Amazon S3 documentation.
Calculate aggregates by AWS Organization, Region, and Service
If you’re signed in to the management account of an AWS Organization, the Customer Carbon Footprint Tool dashboard reports the consolidated member account data. If you’re signed in to a member account, the Customer Carbon Footprint Tool reports emissions data only for that account.
Unlike the CCFT console experience, the carbon emissions Data Export always provides individual account-level emissions data, regardless of whether the account is a payer or member account. If you want to calculate aggregates by AWS Organization, Region, or Product, start with the following query in the Athena query editor:
SELECT
carbon_model_version,
model_version,
usage_period,
usage_period_start,
usage_period_end,
SUM(total_mbm_emissions_value) AS total_mbm_emissions_value,
total_mbm_emissions_unit,
payer_account_id,
product_code,
region_code,
location
FROM <database>."carbon"
GROUP BY
carbon_model_version,
model_version,
usage_period,
usage_period_start,
usage_period_end,
total_mbm_emissions_unit,
payer_account_id,
product_code,
region_code,
location
If you want to aggregate by AWS Organization, remove product_code, region_code, and location from the SELECT and GROUP BY parts of the SQL statement (see the sketch below). Likewise, if you want to aggregate by AWS Region, remove payer_account_id and product_code. Leave only product_code if you want to get the emissions by service.
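For example, applying the first change yields a per-Organization aggregate. This is a sketch based on the query shown above, assuming the same <database> and table name:
SELECT
carbon_model_version,
model_version,
usage_period,
usage_period_start,
usage_period_end,
SUM(total_mbm_emissions_value) AS total_mbm_emissions_value,
total_mbm_emissions_unit,
payer_account_id
FROM <database>."carbon"
GROUP BY
carbon_model_version,
model_version,
usage_period,
usage_period_start,
usage_period_end,
total_mbm_emissions_unit,
payer_account_id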
If you want to reuse this query, you can create a view, a virtual table that saves your query logic, by running:
CREATE [ OR REPLACE ] VIEW <database>.<view_name> AS <select query above>
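As an illustration, the following sketch saves a simplified per-service aggregate as a view; the view name carbon_by_service is just an example:
CREATE OR REPLACE VIEW <database>.carbon_by_service AS
SELECT
usage_period,
product_code,
total_mbm_emissions_unit,
SUM(total_mbm_emissions_value) AS total_mbm_emissions_value
FROM <database>."carbon"
GROUP BY
usage_period,
product_code,
total_mbm_emissions_unit
You can then query the view like any other table, for example SELECT * FROM <database>.carbon_by_service.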
Cleaning up
If you want to delete the created resources and have followed option 1, delete the following resources manually:
- The Data Export you created. Deleting the Data Export will not delete any objects stored in the S3 bucket; if you don’t want to retain them, delete the S3 objects and the bucket manually.
- The Athena table you created
If you have created the Data Export with Infrastructure as Code (option 2), follow these Teardown instructions.
Conclusion
The carbon emissions Data Export simplifies the collection and analysis of AWS carbon emissions data by delivering monthly updates in CSV or Parquet format to Amazon S3. This allows you to automate the processing of carbon data for reports and dashboards across your entire AWS organization at AWS service, account, and Region granularity.
By following the setup steps outlined in this post and leveraging the Cloud Intelligence Dashboards, you can gain valuable insights into your AWS carbon emission estimates and make informed decisions to optimize your cloud sustainability efforts.