Guidance for Payments Fraud Prevention on AWS

Overview

This Guidance shows how payment service providers can implement a near real-time fraud screening system on AWS using streaming data. Transactions are scored for risk by machine learning (ML) models, and notifications are sent to customers based on the risk level of each transaction.
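A minimal sketch of the scoring-and-notification flow described above. The thresholds, action names, and field names here are illustrative assumptions, not part of this Guidance; a real system would take the risk score from the deployed ML model.

```python
# Hypothetical sketch: route a transaction to a screening decision based
# on the risk score produced by an ML model. Thresholds and action names
# are assumptions for illustration only.

def route_by_risk(transaction_id: str, risk_score: float) -> dict:
    """Map a model risk score in [0.0, 1.0] to a screening decision."""
    if risk_score >= 0.9:
        action = "BLOCK_AND_ALERT"       # decline and notify the customer
    elif risk_score >= 0.5:
        action = "STEP_UP_VERIFICATION"  # request additional authentication
    else:
        action = "APPROVE"               # allow the payment to proceed
    return {
        "transactionId": transaction_id,
        "riskScore": risk_score,
        "action": action,
    }

decision = route_by_risk("txn-001", 0.95)
```

The tiered decision keeps low-risk payments frictionless while reserving customer notifications for transactions the model flags as risky.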

How it works

This high-level reference architecture shows how payment companies can implement a near real-time fraud screening system on AWS.

Well-Architected Pillars

The architecture diagram above is an example of a solution built with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.

This Guidance shows how fully managed services such as AWS DataSync, Amazon EMR, and Kinesis allow you to break free from the complexities of database and data warehouse administration.

You can send logs directly from your application to CloudWatch using the CloudWatch Logs API, or send events using an AWS SDK and Amazon EventBridge.
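The two paths above can be sketched by the shape of their request payloads. The helpers below only build the dicts; in a real application they would be passed to boto3 `logs` and `events` clients (`put_log_events` and `put_events`). The event source name and detail type are assumptions for illustration.

```python
import json
import time

def build_log_event(message: dict) -> dict:
    """One entry for the `logEvents` list of a CloudWatch Logs
    PutLogEvents request (timestamp is in milliseconds)."""
    return {
        "timestamp": int(time.time() * 1000),
        "message": json.dumps(message),
    }

def build_fraud_event(detail: dict) -> dict:
    """One entry for the `Entries` list of an EventBridge
    PutEvents request. Source/DetailType are assumed names."""
    return {
        "Source": "payments.fraud-screening",
        "DetailType": "TransactionScored",
        "Detail": json.dumps(detail),
        "EventBusName": "default",
    }

entry = build_fraud_event({"transactionId": "txn-001", "riskScore": 0.95})
```

No AWS call is made here; the point is the payload contract each service expects.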

Read the Operational Excellence whitepaper 

Raw data is ingested into Amazon S3. Amazon S3 supports both server-side encryption and client-side encryption for data uploads.
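As a sketch of the server-side encryption option, the helper below assembles the parameters for an S3 PutObject request using SSE-KMS. The bucket name, object key, and KMS key alias are placeholders; in practice you would pass the dict to boto3's `s3_client.put_object(**params)`.

```python
def sse_put_object_params(bucket: str, key: str, body: bytes,
                          kms_key_id: str) -> dict:
    """Parameters for an S3 PutObject call with server-side
    encryption. Bucket/key/KMS alias below are placeholders."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # "AES256" would select SSE-S3
        "SSEKMSKeyId": kms_key_id,
    }

params = sse_put_object_params(
    "raw-transactions", "2024/01/txn.json", b"{}", "alias/fraud-data"
)
```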

You can encrypt metadata objects in your AWS Glue Data Catalog in addition to the data written to Amazon S3 and Amazon CloudWatch Logs by jobs, crawlers, and development endpoints.

Read the Security whitepaper 

The solution is modular and scales with transaction volume. Serverless capabilities such as Kinesis and Lambda automatically scale throughput up or down based on demand.
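A sketch of how one transaction would be framed for the stream. The stream name and partitioning choice are assumptions; the dict would be passed to boto3's `kinesis_client.put_record(**record)`.

```python
import json

def kinesis_record(stream: str, transaction: dict) -> dict:
    """Shape of a Kinesis PutRecord request for one transaction."""
    return {
        "StreamName": stream,
        "Data": json.dumps(transaction).encode("utf-8"),
        # Partitioning by account keeps one customer's transactions
        # ordered within a single shard.
        "PartitionKey": transaction["accountId"],
    }

record = kinesis_record("transactions", {"accountId": "acct-42", "amount": 19.99})
```

Because Kinesis distributes records across shards by partition key, choosing a key with high cardinality (such as an account ID) lets throughput spread evenly as shard count scales.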

Read the Reliability whitepaper 

Serverless architectures help to provision the exact resources that the workload needs. Lambda manages scaling automatically. You can optimize the individual Lambda functions used in your application to reduce latency and increase throughput.
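A minimal sketch of a Lambda handler consuming a batch of Kinesis records, assuming the stream carries JSON transaction payloads. Kinesis delivers each record's data base64-encoded; decoding the whole batch up front keeps the per-record logic small, which helps the latency and throughput tuning mentioned above. The field names are assumptions.

```python
import base64
import json

def handler(event: dict, context=None) -> list:
    """Decode a Kinesis event batch into transaction dicts.
    Scoring/notification logic would follow per record."""
    results = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        results.append({"transactionId": payload["transactionId"]})
    return results
```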

Read the Performance Efficiency whitepaper 

This Guidance is designed to be cost-optimized, provisioning resources only where necessary and accessing data only through the services appropriate for the business need.

Align all costs with defined pricing goals and clearly defined KPIs, weighing batch against near real-time requirements to ensure optimum value.

Read the Cost Optimization whitepaper 

By extensively using managed services and dynamic scaling, you minimize the environmental impact of the backend services.

Monitor the technologies that support data access and storage to ensure that assets such as data are stored in the optimal solution for their read and write access patterns, and keep the scaling of compute resources closely aligned with demand.

Read the Sustainability whitepaper 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.