AWS Security Blog

Category: Artificial Intelligence

New AWS Skill Builder course available: Securing Generative AI on AWS

To support our customers in securing their generative AI workloads on Amazon Web Services (AWS), we are excited to announce the launch of a new AWS Skill Builder course: Securing Generative AI on AWS. This comprehensive course is designed to help security professionals, architects, and artificial intelligence and machine learning (AI/ML) engineers understand and implement […]

How to enhance Amazon Macie data discovery capabilities using Amazon Textract

Amazon Macie is a managed service that uses machine learning (ML) and deterministic pattern matching to help discover sensitive data that’s stored in Amazon Simple Storage Service (Amazon S3) buckets. Macie can detect sensitive data in many different formats, including commonly used compression and archive formats. However, Macie doesn’t support the discovery of sensitive data […]

Flag of Australia

Preparing for take-off: Regulatory perspectives on generative AI adoption within Australian financial services

The Australian financial services regulator, the Australian Prudential Regulation Authority (APRA), has provided its most substantial guidance on generative AI to date in Member Therese McCarthy Hockey’s remarks to the AFIA Risk Summit 2024. The guidance gives a green light for banks, insurance companies, and superannuation funds to accelerate their adoption of this transformative technology, […]

Exploring the benefits of artificial intelligence while maintaining digital sovereignty

English | German | French Around the world, organizations are evaluating and embracing artificial intelligence (AI) and machine learning (ML) to drive innovation and efficiency. From accelerating research and enhancing customer experiences to optimizing business processes, improving patient outcomes, and enriching public services, the transformative potential of AI is being realized across sectors. Although using […]

Securing the RAG ingestion pipeline: Filtering mechanisms

Retrieval-Augmented Generative (RAG) applications enhance the responses retrieved from large language models (LLMs) by integrating external data such as downloaded files, web scrapings, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically add additional context to an LLM’s prompt […]

Main Image

Threat modeling your generative AI workload to evaluate security risk

As generative AI models become increasingly integrated into business applications, it’s crucial to evaluate the potential security risks they introduce. At AWS re:Invent 2023, we presented on this topic, helping hundreds of customers maintain high-velocity decision-making for adopting new technologies securely. Customers who attended this session were able to better understand our recommended approach for […]

Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 1

April 3, 2025: We’ve updated this post to reflect the new 2025 OWASP top 10 for LLM entries. This is part 1 of a two-part blog series. See part 2. Data security and data authorization, as distinct from user authorization, is a critical component of business workload architectures. Its importance has grown with the evolution […]

AI AuthZ

Enhancing data privacy with layered authorization for Amazon Bedrock Agents

April 3, 2025: We’ve updated this post to reflect the new 2025 OWASP top 10 for LLM entries. Customers are finding several advantages to using generative AI within their applications. However, using generative AI adds new considerations when reviewing the threat model of an application, whether you’re using it to improve the customer experience for […]

Methodology for incident response on generative AI workloads

The AWS Customer Incident Response Team (CIRT) has developed a methodology that you can use to investigate security incidents involving generative AI-based applications. To respond to security events related to a generative AI workload, you should still follow the guidance and principles outlined in the AWS Security Incident Response Guide. However, generative AI workloads require […]

Network perimeter security protections for generative AI

Generative AI–based applications have grown in popularity in the last couple of years. Applications built with large language models (LLMs) have the potential to increase the value companies bring to their customers. In this blog post, we dive deep into network perimeter protection for generative AI applications. We’ll walk through the different areas of network […]