Guidance for Demand Forecasting for Restaurants on AWS

Overview

This Guidance helps you use machine learning (ML) to forecast demand in restaurants so you can optimize staff scheduling. Additionally, forecasting demand can help you reduce inventory, use resources more effectively, increase revenue, and reduce waste. This Guidance includes an end-to-end data pipeline that shows you how to analyze data and present it in a format that non-technical users can interact with to derive business insights.

How it works

This reference architecture showcases an end-to-end pipeline to deliver restaurant order demand forecasts in a data format that non-technical users can update and consume.

Well-Architected Pillars

The architecture diagram above is an example of a solution created with the AWS Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.

You can update the prediction configurations for SageMaker Canvas (such as forecast frequency and forecast horizon) to achieve the required levels of granularity and explainability in forecasting outputs.

Read the Operational Excellence whitepaper

The permissions for each user are controlled through AWS Identity and Access Management (IAM) roles. Additionally, Transfer Family integration with Amazon S3 server-side encryption helps secure file transfers. Although the architecture is serverless, the Lambda functions can run within your virtual private cloud (VPC) and be associated with IAM roles that grant only the minimum required permissions.
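As a concrete illustration of the least-privilege principle described above, the sketch below builds an IAM policy document that allows a Lambda function to read and write only a single S3 prefix. The bucket name and prefix are hypothetical placeholders, not values defined by this Guidance.

```python
import json

# Hypothetical least-privilege policy for a Lambda function in this
# pipeline: it may only get and put objects under one S3 prefix.
# Bucket and prefix names below are illustrative placeholders.
def build_lambda_s3_policy(bucket: str, prefix: str) -> dict:
    """Return an IAM policy document scoped to a single bucket prefix."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            }
        ],
    }

policy = build_lambda_s3_policy("restaurant-forecast-data", "incoming")
print(json.dumps(policy, indent=2))
```

Scoping the `Resource` element to a prefix, rather than granting `s3:*` on all buckets, limits the blast radius if the function's credentials are ever misused.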

Read the Security whitepaper

This architecture will scale to meet demand based on the volume of data (such as data from restaurant receipts) you upload to Transfer Family. As you scale your workloads, consider opting for user-defined schedules for SageMaker Canvas models rather than re-forecasting and re-training models for every upload. Additionally, all components of this architecture are built on event-driven patterns, meaning the system will only run when an event or change occurs in Amazon S3.
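To make the event-driven pattern concrete, here is a minimal sketch of a Lambda handler that fires on an S3 object-created event (for example, when a new receipts file is uploaded) and extracts the object locations for downstream processing. Only the standard S3 event shape is assumed; the function name and return structure are illustrative.

```python
import urllib.parse

def handler(event, context):
    """Parse an S3 event notification and return the uploaded objects.

    S3 delivers events with a top-level "Records" list; object keys in
    these events are URL-encoded (spaces arrive as '+').
    """
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        objects.append({"bucket": bucket, "key": key})
    # Downstream steps (Glue crawl, Canvas re-forecast) would be
    # triggered from here in a fuller implementation.
    return {"objects": objects}
```

Because nothing runs until S3 emits an event, you pay only for invocations driven by actual data uploads.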

Read the Reliability whitepaper

You can adjust data input into the architecture through direct integrations with your own systems using Lambda. You can add more data relevant to your business use case through Data Exchange and adjust the Glue Crawler to construct modified data sets with which to forecast. You can also adjust configurations for SageMaker Canvas predictions and training on an as-needed basis. QuickSight allows you to view and compare variations of forecasts, explainability data, and accuracy metrics in one place using an intuitive user interface.

Read the Performance Efficiency whitepaper

Using SageMaker Canvas' pay-per-use pricing, you can train a predictor for under $1 USD (assuming less than 3 hours of training time) and produce 1,000 forecasts on updated data for $2 USD. The architecture also benefits from using Amazon S3 for cost-effective data storage. You can further contain costs by re-using existing SageMaker Canvas predictors rather than training new ones.
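The figures above can be sanity-checked with a back-of-the-envelope estimate. The hourly and per-forecast rates below are assumptions chosen to match the quoted figures, not published SageMaker Canvas prices; check current pricing before budgeting.

```python
# Assumed rates (illustrative only, consistent with the figures above:
# < $1 for < 3 hours of training, $2 per 1,000 forecasts).
TRAINING_RATE_PER_HOUR = 0.33  # assumed USD per training hour
FORECAST_RATE_EACH = 0.002     # assumed USD per forecast

def estimate_cost(training_hours: float, n_forecasts: int) -> float:
    """Rough total cost: training time plus per-forecast charges."""
    return (training_hours * TRAINING_RATE_PER_HOUR
            + n_forecasts * FORECAST_RATE_EACH)

# 3 hours of training plus 1,000 forecasts: about $0.99 + $2.00
print(round(estimate_cost(3, 1000), 2))  # 2.99
```

Re-using an existing predictor drops the training term to zero, leaving only the per-forecast charges.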

This Guidance offers non-technical users a low start-up cost for trialing forecasting, and it can be further customized with automation as your familiarity with AWS technology grows.

Read the Cost Optimization whitepaper

By default, the architecture’s resources are only activated when there are changes in Amazon S3 buckets. Additionally, by adopting a serverless architecture, you can scale based on usage so that you consume only required resources.

Read the Sustainability whitepaper

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.