
Overview
Scanned documents often contain flipped pages, which create challenges for OCR, ICR, text extraction, image-based ML/AI modeling, and similar document-processing tasks. This solution uses deep learning models, trained on a dataset of thousands of pages, to detect flipped pages and correct their alignment. Correcting page orientation enables OCR/ICR engines to achieve higher accuracy and improves downstream text extraction pipelines.
Highlights
- This solution corrects flipped pages introduced when a document is scanned with a scanner or phone.
- It incorporates a CNN model, trained on a large dataset of documents (e.g., invoices, financial statements, legal documents), that identifies whether a page is flipped; flipped pages are then corrected using computer vision algorithms.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction & predictive analytics capabilities. Need customized Machine Learning and Deep Learning solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.xlarge Inference (Batch), recommended | Model inference on the ml.m5.xlarge instance type, batch mode | $8.00 |
| ml.m5.xlarge Inference (Real-Time), recommended | Model inference on the ml.m5.xlarge instance type, real-time mode | $4.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $8.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $8.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $8.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $8.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $8.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $8.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $8.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $8.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Bug fixes and performance improvements
Additional details
Inputs
Input:
The following inputs are mandatory for the rotation-correction algorithm on scanned documents:
- Supported content type: application/zip
- The algorithm corrects upside-down pages in scanned documents.
- The algorithm accepts scanned documents as PDFs or images; the input documents must be zipped.
- The input zip file can contain up to 5 images (see supported types below) or one scanned PDF document with at most 5 pages.
- Each image must be smaller than 2 MB, and a PDF must be smaller than 4 MB.
- Supported image types: bmp, dib, jpeg, jpg, jpe, png, pbm, pgm, ppm, tiff, tif
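The input constraints above can be enforced before upload. The sketch below packages pages into the expected application/zip payload; the function name and limits-as-constants are illustrative, not part of the product API.

```python
# Sketch: validate the listing's input limits and build the zip payload.
import os
import zipfile

IMAGE_EXTS = {".bmp", ".dib", ".jpeg", ".jpg", ".jpe", ".png",
              ".pbm", ".pgm", ".ppm", ".tiff", ".tif"}
MAX_IMAGES = 5                      # at most 5 images per zip
MAX_IMAGE_BYTES = 2 * 1024 * 1024   # each image must be under 2 MB
MAX_PDF_BYTES = 4 * 1024 * 1024     # each PDF must be under 4 MB

def build_input_zip(paths, out_path="input.zip"):
    """Check the documented size/count limits, then zip the files."""
    images = [p for p in paths
              if os.path.splitext(p)[1].lower() in IMAGE_EXTS]
    pdfs = [p for p in paths if p.lower().endswith(".pdf")]
    if len(images) > MAX_IMAGES:
        raise ValueError("at most 5 images per input zip")
    for p in images:
        if os.path.getsize(p) >= MAX_IMAGE_BYTES:
            raise ValueError(f"{p}: image must be under 2 MB")
    for p in pdfs:
        if os.path.getsize(p) >= MAX_PDF_BYTES:
            raise ValueError(f"{p}: PDF must be under 4 MB")
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in images + pdfs:
            zf.write(p, arcname=os.path.basename(p))
    return out_path
```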
Output:
- Content type: application/zip
- The output zip file contains the images/PDF with corrected rotation, matching the documents inside the input zip file.
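Since the model returns its results as a zip archive, a small helper can unpack the corrected pages; the function and directory names here are placeholders.

```python
# Sketch: unpack the corrected pages from the model's output archive.
import zipfile

def extract_output(zip_path="output.zip", dest="corrected_pages"):
    """Extract all corrected pages and return their file names."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return zf.namelist()
```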
Invoking endpoint
AWS CLI Command
If you are using real-time inference, first create the endpoint, then use the following command to invoke it:

aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$file_name --content-type 'application/zip' --region us-east-2 output.zip

Substitute the following parameters:
- model_name - name of the inference endpoint where the model is deployed
- file_name - input zip file name
- application/zip - content type of the given input
- output.zip - file name where the inference results are written
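The same call can be made from Python with boto3's SageMaker runtime client; this is a sketch assuming the model is already deployed to a real-time endpoint, with the endpoint name, region, and file names as placeholders.

```python
# Sketch: boto3 equivalent of the CLI invoke-endpoint call above.
def correct_rotation(endpoint_name, zip_path,
                     out_path="output.zip", region="us-east-2"):
    """Send a zipped batch of scanned pages; save the corrected zip."""
    # boto3 is imported lazily so the sketch only needs the AWS SDK
    # when actually invoked.
    import boto3
    runtime = boto3.client("sagemaker-runtime", region_name=region)
    with open(zip_path, "rb") as f:
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="application/zip",
            Body=f.read(),
        )
    # The response body is the output zip with rotation-corrected pages.
    with open(out_path, "wb") as out:
        out.write(response["Body"].read())
    return out_path
```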
Resources:
- Input MIME type: application/zip
Resources
Vendor resources
Support
Vendor support
For any assistance reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.