[SEO Subhead]
This Guidance demonstrates how to automate digital forensics processes for Amazon EC2 instances when security issues arise. Through orchestrated AWS services, it streamlines incident response by automatically collecting disk and memory data, isolating affected instances, and initiating forensic investigation tools. The Guidance helps security teams reduce response time through automated workflows that capture forensic artifacts and integrate with analysis and reporting tools. Organizations running Amazon EC2 workloads can enhance their security operations by deploying this Guidance, enabling rapid response to potential security incidents while maintaining consistent investigation procedures.
Note: [Disclaimer]
Architecture Diagram

[Architecture diagram description]
Step 1
Prior to running the workflow, you need a forensic Amazon Machine Image (AMI). You can use Amazon EC2 Image Builder to build a new forensic AMI, or you can use an existing forensic AMI.
Step 2
AWS Step Functions uses the forensic AMI to perform memory and disk investigation.
Step 3
In the AWS application account, AWS Config managed rules, Amazon GuardDuty, and third-party tools detect malicious activities that are specific to Amazon Elastic Compute Cloud (Amazon EC2) resources. For example, an EC2 instance queries a low reputation domain name that is associated with known abused domains. The findings are sent to AWS Security Hub in the security account through their native or existing integration.
Step 4
By default, all Security Hub findings are then sent to Amazon EventBridge to invoke automated downstream workflows.
Step 5
For a specified event, EventBridge provides an instance ID for the forensics process to target, and initiates the Step Functions workflow.
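As a concrete illustration of this step, the sketch below shows how the target instance ID might be extracted from a Security Hub finding event delivered by EventBridge. The field paths follow the AWS Security Finding Format (ASFF) as it appears in "Security Hub Findings - Imported" events; the function name is illustrative, not part of the Guidance.

```python
def extract_instance_id(event):
    """Pull the EC2 instance ID from a Security Hub finding event
    delivered by EventBridge (AWS Security Finding Format)."""
    for finding in event["detail"]["findings"]:
        for resource in finding.get("Resources", []):
            if resource.get("Type") == "AwsEc2Instance":
                # The resource Id is the instance ARN; the instance ID
                # is the final path segment after the last "/".
                return resource["Id"].rsplit("/", 1)[-1]
    return None
```

The returned instance ID becomes the input that initiates the Step Functions workflow.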
Step 6
Step Functions triages the request as follows: it first retrieves the instance information, then determines whether isolation is required (based on the Security Hub action) and whether acquisition is required (based on tags associated with the instance). Finally, it initiates the acquisition flow based on the triage output.
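The triage decision described above can be sketched as a small pure function. The Security Hub action names and the tag key below are assumptions for illustration; the Guidance's actual identifiers may differ.

```python
ACQUISITION_TAG = "IsTriageRequired"  # illustrative tag key, not the Guidance's actual key

def triage(security_hub_action, instance_tags):
    """Decide isolation and acquisition for a targeted instance.

    Isolation depends on which Security Hub custom action was selected;
    acquisition depends on a tag attached to the instance.
    """
    # Hypothetical custom action names that request isolation.
    isolation_required = security_hub_action in (
        "ForensicIsolation",
        "ForensicIsolationAndTriage",
    )
    acquisition_required = (
        instance_tags.get(ACQUISITION_TAG, "false").lower() == "true"
    )
    return {"isolate": isolation_required, "acquire": acquisition_required}
```

The dictionary returned here is what the workflow would persist and use to choose the downstream acquisition flows.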
Step 6a
Amazon DynamoDB stores triaging details.
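A triage record like the one stored in this step might look as follows. Attribute and table names are illustrative assumptions; the commented-out `put_item` call requires AWS credentials and is shown only to indicate where the write would happen.

```python
import time

def triage_record(instance_id, finding_id, isolate, acquire):
    """Build a DynamoDB item recording triage output
    (attribute names are illustrative)."""
    return {
        "InstanceId": {"S": instance_id},
        "FindingId": {"S": finding_id},
        "IsolationRequired": {"BOOL": isolate},
        "AcquisitionRequired": {"BOOL": acquire},
        "TriageTime": {"N": str(int(time.time()))},  # epoch seconds
    }

# Writing the record (requires AWS credentials; table name is an assumption):
# import boto3
# boto3.client("dynamodb").put_item(
#     TableName="ForensicTriageTable",
#     Item=triage_record("i-0123456789abcdef0", "finding-1", True, True),
# )
```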
Step 6b
Two acquisition flows are initiated in parallel. The Memory Forensics Flow is a Step Functions workflow that captures the memory data and stores it in Amazon Simple Storage Service (Amazon S3). After memory acquisition, the instance is isolated using security groups: to help preserve the chain of custody, a new security group is attached to the targeted instance, removing all access for users, admins, and developers. Isolation is initiated based on the selected Security Hub action. The Disk Forensics Flow is a Step Functions workflow that takes a snapshot of an Amazon Elastic Block Store (Amazon EBS) volume and shares it with the forensic account.
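The isolation and snapshot-sharing steps map to two EC2 API calls. The sketch below builds the request parameters as plain dictionaries so the logic is visible; the commented-out boto3 calls show where they would be used. The isolation group is assumed to be a pre-created security group with no inbound or outbound rules.

```python
def isolation_request(instance_id, isolation_sg_id):
    """Kwargs for ec2.modify_instance_attribute that replace ALL security
    groups on the instance with a single deny-all isolation group."""
    return {"InstanceId": instance_id, "Groups": [isolation_sg_id]}

def snapshot_share_request(snapshot_id, forensic_account_id):
    """Kwargs for ec2.modify_snapshot_attribute that share the evidence
    snapshot with the forensic account."""
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "OperationType": "add",
        "UserIds": [forensic_account_id],
    }

# import boto3
# ec2 = boto3.client("ec2")
# ec2.modify_instance_attribute(**isolation_request("i-0123456789abcdef0", "sg-isolation"))  # IDs are placeholders
# ec2.modify_snapshot_attribute(**snapshot_share_request("snap-0123456789abcdef0", "222233334444"))
```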
Step 6c
DynamoDB stores acquisition details.
Step 6d
Once the disk or memory acquisition process is complete, a notification is sent to an investigation Step Functions state machine to begin the automated investigation of the captured data.
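The hand-off to the investigation state machine is a Step Functions execution whose input references the captured artifacts. The field names and state machine ARN below are illustrative assumptions, not the Guidance's actual schema.

```python
import json

def investigation_input(instance_id, artifact_bucket, artifact_key, snapshot_id):
    """JSON input for the investigation state machine
    (field names are illustrative)."""
    return json.dumps({
        "instanceId": instance_id,
        "memoryArtifact": {"bucket": artifact_bucket, "key": artifact_key},
        "diskSnapshotId": snapshot_id,
    })

# import boto3
# boto3.client("stepfunctions").start_execution(
#     stateMachineArn="arn:aws:states:us-east-1:222233334444:stateMachine:ForensicInvestigation",  # placeholder ARN
#     input=investigation_input("i-0123456789abcdef0", "forensic-artifacts",
#                               "memory/i-0123456789abcdef0.lime", "snap-0123456789abcdef0"),
# )
```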
Step 6e
When the Step Functions jobs are complete, DynamoDB stores the state of forensic tasks and their results.
Step 7
Investigation Step Functions starts a forensic instance from an existing forensic AMI loaded with customer forensic tools. Step Functions loads the memory data from Amazon S3 for investigation, creates an EBS volume from the snapshot, and attaches the EBS volume for disk analysis.
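This step amounts to an `ec2.run_instances` call against the forensic AMI followed by `ec2.attach_volume` for the evidence volume. The sketch below builds the request parameters; instance type, device name, and IDs are illustrative assumptions.

```python
def forensic_launch_request(forensic_ami_id, subnet_id):
    """Kwargs for ec2.run_instances launching the forensic workstation
    from the forensic AMI (instance type is an assumption)."""
    return {
        "ImageId": forensic_ami_id,
        "InstanceType": "m5.xlarge",
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,
    }

def attach_request(volume_id, instance_id):
    """Kwargs for ec2.attach_volume mounting the evidence volume created
    from the shared snapshot as a secondary device for disk analysis."""
    return {"VolumeId": volume_id, "InstanceId": instance_id, "Device": "/dev/sdf"}
```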
Step 8
AWS Systems Manager documents (SSM documents) run the forensic investigation on the forensic instance.
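Running an SSM document against the forensic instance is an `ssm.send_command` call. The document and parameter names below are assumptions; the Guidance's actual documents encapsulate the customer's forensic tooling.

```python
def investigation_command(document_name, instance_id, artifact_path):
    """Kwargs for ssm.send_command running a forensic SSM document on the
    forensic instance (document and parameter names are assumptions)."""
    return {
        "DocumentName": document_name,
        "InstanceIds": [instance_id],
        "Parameters": {"artifactPath": [artifact_path]},
    }

# import boto3
# boto3.client("ssm").send_command(
#     **investigation_command("ForensicMemoryAnalysis",  # hypothetical document name
#                             "i-0123456789abcdef0",
#                             "s3://forensic-artifacts/memory/i-0123456789abcdef0.lime")
# )
```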
Step 9
Amazon Simple Notification Service (Amazon SNS) shares investigation details with customers.
Step 10
An AWS AppSync API can be used to query the forensic timeline. For more details, refer to Sample AppSync API to query forensic details.
Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
EventBridge enables automated event-driven architecture and seamlessly integrates AWS and third-party services. Lambda reduces operational overhead and automates routine tasks through serverless computing. Step Functions orchestrates complex workflows while providing visual management, making distributed service coordination and maintenance easier. DynamoDB delivers fully managed, scalable NoSQL database capabilities with consistent performance at any scale. Amazon SNS ensures reliable message delivery and enables automated responses to system events. Together, these services promote operational excellence through automation, integration, and reduced management overhead.
-
Security
Native AWS services create a framework to orchestrate and automate key forensics processes from initial threat detection. This Guidance reduces mean-time-to-respond for security events by orchestrating end-to-end Amazon EC2 incident response, including resource triage, forensic artifact collection, resource isolation, investigation, and reporting. AWS Identity and Access Management (IAM) implements least privilege across AWS accounts for authorized principals. The framework allows Security Operations Center (SOC) teams to continuously discover and analyze fraudulent activities across multi-account and multi-region environments, while capturing memory and disk images to secure storage and initiating automated investigation tools.
-
Reliability
EventBridge delivers highly available event routing with built-in retry policies and dead-letter queues, helping to keep event-driven applications resilient. Step Functions ensures workflow reliability through built-in error handling, automatic retries, and state management, enabling robust error recovery. DynamoDB maintains reliability through automatic multi-Availability Zone (AZ) replication, point-in-time recovery, and on-demand backups, helping ensure consistent performance at scale.
-
Performance Efficiency
EventBridge processes events in near real-time with consistent throughput and automatic scaling capabilities, handling millions of events per second without performance degradation. Lambda automatically scales compute resources in milliseconds, processing requests concurrently and allocating optimal memory and compute power based on function configuration. This event-driven architecture minimizes manual intervention and monitoring requirements.
-
Cost Optimization
Lambda uses a precise pay-per-use model that charges only for consumed compute time in 1ms increments and number of requests, eliminating idle resource costs. DynamoDB offers on-demand capacity for unpredictable workloads with per-request pricing and auto-scaling to prevent over-provisioning. The DynamoDB time-to-live feature automatically deletes unnecessary data.
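DynamoDB TTL works by reading an epoch-seconds attribute on each item and deleting the item after that time passes. A minimal sketch of stamping a forensic record with such an attribute (the attribute name and retention window are assumptions; the attribute name must match the table's TTL configuration):

```python
import time

def with_ttl(item, days=90):
    """Return a copy of a DynamoDB item with an epoch-seconds 'expireAt'
    attribute so TTL deletes it after the retention window.
    Attribute name and 90-day default are illustrative."""
    stamped = dict(item)
    stamped["expireAt"] = {"N": str(int(time.time()) + days * 86400)}
    return stamped
```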
-
Sustainability
Serverless AWS services minimize idle resources and environmental impact while enabling rapid scaling when needed. EventBridge routes events without dedicated infrastructure, reducing idle energy consumption. Lambda and Step Functions use on-demand execution models, activating compute resources only during function invocation. Amazon S3 Intelligent-Tiering and lifecycle policies automatically move data to energy-efficient storage tiers, while server-side encryption requires no additional compute resources. Long-term artifacts can be archived to more energy-efficient tiers to meet legal retention requirements.
Related Content

[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.