
Overview
Model Performance Estimator evaluates and monitors the performance of regression and classification models against the control limits of its performance estimators. It can also detect potential data drift in individual features of the dataset. The solution provides a proactive approach to model performance estimation, enabling better data-driven decision making. It uses evaluation metrics such as accuracy and F1 score to measure model performance and returns alerts as text messages, charts, and tables for given timestamps or index chunks. The solution draws attention to statistically significant events and raises warnings about data drift that affects model performance after deployment.
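The input files described under "Inputs" below reference NannyML, so a minimal sketch of performance estimation without ground truth can be given with that open-source library. This is illustrative only: the column names (`y_pred`, `y_pred_proba`, `target`, `timestamp`) and chunk size are assumptions, not this solution's fixed schema.

```python
# Sketch of estimating classification performance on unlabeled data with
# confidence-based performance estimation (CBPE) from the open-source NannyML
# library. Column names and chunk size below are illustrative assumptions.
import nannyml as nml
import pandas as pd

reference_df = pd.read_csv("reference_data.csv")   # labeled data the model was trained/validated on
analysis_df = pd.read_csv("analysis_data.csv")     # new data without ground truth

estimator = nml.CBPE(
    y_pred="y_pred",
    y_pred_proba="y_pred_proba",
    y_true="target",
    timestamp_column_name="timestamp",
    problem_type="classification_binary",
    metrics=["roc_auc", "f1"],
    chunk_size=5000,                               # estimate metrics per chunk of 5,000 rows
)
estimator.fit(reference_df)                        # calibrate on the labeled reference data
estimated = estimator.estimate(analysis_df)        # estimate metrics for the unlabeled analysis data

print(estimated.to_df().head())                    # per-chunk metric estimates with alert flags
```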
Highlights
- This solution enables accurate assessment of model performance in real-world scenarios where ground truth for new incoming data is unavailable, providing crucial insight into a model's effectiveness over time. With the ability to monitor data drift, it helps ensure models consistently deliver accurate and reliable results, so businesses can make data-driven decisions with confidence. It detects silent model failure and warns about data drift ahead of time with relevant alerts.
- Individual users can apply this solution to test the performance of their machine learning models in a production environment. Weekly estimates of drift on outputs and features, individually and as a whole, can point to the root cause of a drop in model performance. For example, the solution can flag model performance deteriorating for a particular time period or for a certain set of data entries (see the drift-detection sketch after this list).
- Mphasis HyperGraf is an Omni-channel customer 360 analytics solution. Need customized Deep Learning/NLP solutions? Get in touch!
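A hedged sketch of the feature-drift check mentioned above, again using the open-source NannyML library that the input files reference; the feature and timestamp column names are placeholders.

```python
# Sketch of univariate feature-drift detection (assumed NannyML usage;
# feature and timestamp column names are placeholders).
import nannyml as nml
import pandas as pd

reference_df = pd.read_csv("reference_data.csv")
analysis_df = pd.read_csv("analysis_data.csv")

drift_calc = nml.UnivariateDriftCalculator(
    column_names=["feature_1", "feature_2", "feature_3"],
    timestamp_column_name="timestamp",
    continuous_methods=["jensen_shannon"],
    categorical_methods=["chi2"],
    chunk_size=5000,
)
drift_calc.fit(reference_df)                       # learn baseline distributions from reference data
drift_results = drift_calc.calculate(analysis_df)  # compare analysis chunks against the baseline

# Per-chunk drift statistics and alert flags for each monitored feature
print(drift_results.filter(period="analysis").to_df().head())
```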
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), recommended | Model inference on the ml.m5.large instance type, batch mode | $4.00 |
| ml.m5.large Inference (Real-Time), recommended | Model inference on the ml.m5.large instance type, real-time mode | $8.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $4.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $4.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $4.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $4.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $4.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $4.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $4.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $4.00 |
Vendor refund policy
Currently, we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
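As an illustration of how a subscribed model package is typically used with the SageMaker Python SDK: the model package ARN, IAM role, and S3 URIs below are placeholders, not values from this listing.

```python
# Illustrative use of a subscribed model package via the SageMaker Python SDK.
# The model package ARN, IAM role, and S3 URIs are placeholders.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::<account-id>:role/<sagemaker-execution-role>"

model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:<region>:<account-id>:model-package/<listing-id>",
    sagemaker_session=session,
)

# Batch transform on the zipped input described under "Inputs" below
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://<bucket>/model-performance-estimator/output/",
)
transformer.transform(
    data="s3://<bucket>/model-performance-estimator/input/input.zip",
    content_type="application/zip",
)
transformer.wait()

# Alternatively, deploy a real-time endpoint:
# predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```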
Version release notes
Version 1
Additional details
Inputs
- Summary
The input must be a '.zip' file containing four files: reference_data.csv, analysis_data.csv, nannyml_model.sav, and nannyml.json (a packaging sketch follows this list):
- reference_data.csv: data with ground truth on which the model was trained
- analysis_data.csv: data without ground truth for which the metrics are to be estimated
- nannyml_model.sav: the ML model trained on your reference data
- nannyml.json: file specifying the type of task, the name of the target variable, and the timestamp column name (if any)
- Input MIME type
- application/zip
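A minimal packaging sketch for the input archive. The listing states what nannyml.json must specify but not its exact schema, so the JSON key names below (`task`, `target_column`, `timestamp_column`) are assumptions.

```python
# Sketch of assembling the input .zip. The nannyml.json keys shown here are
# assumed names; only the information they carry is stated in the listing.
import json
import zipfile

config = {
    "task": "classification",         # or "regression"
    "target_column": "target",
    "timestamp_column": "timestamp",  # optional, include only if the data has one
}
with open("nannyml.json", "w") as f:
    json.dump(config, f)

with zipfile.ZipFile("input.zip", "w") as zf:
    for name in ["reference_data.csv", "analysis_data.csv", "nannyml_model.sav", "nannyml.json"]:
        zf.write(name)
```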
Resources
Vendor resources
Support
Vendor support
For any assistance, please reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.