
Overview
The idea is to take textual data as input (such as IT incident tickets, customer helpdesk queries, or documents and emails) and predict the appropriate category at each level of a hierarchy. This system is useful when dealing with data that has multiple levels of granularity, and it is crucial for organizing information into broader or more specific categories. This trainable listing fine-tunes the Phi-3 model, and the resulting LoRA adapters can be used directly for inference. Users must provide a dataset containing textual descriptions from their specific domain and the corresponding multi-level labels, ensuring the label count does not exceed 256.
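As a hypothetical illustration (the rows and category names below are invented for the example; only the DESCRIPTION and CATEGORY_<level> column names come from the input description further down this page), a two-level training file might be prepared like this:

```python
# Hypothetical sample of the expected train.csv layout: one free-text
# DESCRIPTION column plus one CATEGORY_<level> column per hierarchy level.
import pandas as pd

train_df = pd.DataFrame(
    {
        "DESCRIPTION": [
            "VPN connection drops every few minutes when working remotely",
            "Customer requests a refund for a duplicate invoice charge",
        ],
        "CATEGORY_1": ["IT Support", "Billing"],
        "CATEGORY_2": ["Network", "Invoicing"],
    }
)
train_df.to_csv("train.csv", index=False)
```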
Highlights
- This solution streamlines classification workflows by automatically mapping textual input to the most relevant hierarchical categories, reducing manual tagging efforts and accelerating decision-making processes across diverse industries.
- Using LoRA adapters, our solution enables efficient fine-tuning of the Phi-3 model while minimizing computational overhead. The resulting LoRA adapters can be seamlessly applied in multi-adapter settings, providing flexibility for deployment across various use cases (see the sketch after this list).
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction & predictive analytics capabilities. Need customized Machine Learning and Deep Learning solutions? Get in touch!
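A minimal sketch of what directly usable LoRA adapters can look like in practice, assuming Hugging Face PEFT; the base-model ID and adapter paths below are assumptions, not values published by this listing:

```python
# Sketch only: attach a trained LoRA adapter to a Phi-3 base model with PEFT.
# The model ID and adapter paths below are placeholders/assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Load the fine-tuned hierarchical-classification adapter produced by training.
model = PeftModel.from_pretrained(base_model, "path/to/lora_adapter")

# Multi-adapter setting: register a second adapter and switch between them.
model.load_adapter("path/to/another_adapter", adapter_name="other_domain")
model.set_adapter("other_domain")
```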
Details

Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.p2.8xlarge Inference (Batch), Recommended | Model inference on the ml.p2.8xlarge instance type, batch mode | $2.00 |
| ml.p2.8xlarge Inference (Real-Time), Recommended | Model inference on the ml.p2.8xlarge instance type, real-time mode | $2.00 |
| ml.g5.4xlarge Training, Recommended | Algorithm training on the ml.g5.4xlarge instance type | $2.00 |
| ml.p2.xlarge Inference (Batch) | Model inference on the ml.p2.xlarge instance type, batch mode | $2.00 |
| ml.p3.8xlarge Inference (Batch) | Model inference on the ml.p3.8xlarge instance type, batch mode | $2.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $2.00 |
| ml.p2.16xlarge Inference (Batch) | Model inference on the ml.p2.16xlarge instance type, batch mode | $2.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $2.00 |
| ml.p2.xlarge Inference (Real-Time) | Model inference on the ml.p2.xlarge instance type, real-time mode | $2.00 |
| ml.p3.8xlarge Inference (Real-Time) | Model inference on the ml.p3.8xlarge instance type, real-time mode | $2.00 |
Vendor refund policy
Currently, we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker algorithm
An Amazon SageMaker algorithm is a machine learning model that requires your training data to make predictions. Use the included training algorithm to generate your unique model artifact. Then deploy the model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
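A minimal sketch of how a Marketplace algorithm like this one could be trained and deployed with the SageMaker Python SDK; the algorithm ARN, IAM role, S3 paths, and channel name are placeholders or assumptions, while the instance types follow the recommendations in the pricing table above:

```python
# Sketch only: train the Marketplace algorithm on train.csv and deploy a
# real-time endpoint. All ARNs and S3 paths below are placeholders.
import sagemaker
from sagemaker.algorithm import AlgorithmEstimator

session = sagemaker.Session()
estimator = AlgorithmEstimator(
    algorithm_arn="arn:aws:sagemaker:<region>:<account>:algorithm/<name>",
    role="arn:aws:iam::<account>:role/<sagemaker-execution-role>",
    instance_count=1,
    instance_type="ml.g5.4xlarge",  # recommended training instance
    sagemaker_session=session,
)

# Train on the uploaded train.csv, then deploy for real-time inference.
# The channel name "training" is an assumption; check the algorithm's spec.
estimator.fit({"training": "s3://<bucket>/<prefix>/train.csv"})
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.p2.8xlarge",  # recommended inference instance
)
```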
Version release notes
Latest version
Additional details
Inputs
- Summary
  - The input file should be train.csv.
- Limitations for input type
  - train.csv may contain a maximum of 10,000 rows (see the check after this list).
- Input MIME type
  - application/json
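An assumed pre-check (not part of the listing itself) to confirm the training file respects the documented limits before upload:

```python
# Assumed pre-check: make sure train.csv stays within the 10,000-row limit
# and includes the required columns before uploading it to S3.
import pandas as pd

df = pd.read_csv("train.csv")
assert len(df) <= 10_000, f"train.csv has {len(df)} rows; the limit is 10,000."
assert "DESCRIPTION" in df.columns, "A DESCRIPTION column is required."
assert any(col.startswith("CATEGORY_") for col in df.columns), (
    "At least one CATEGORY_<level> column is required."
)
```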
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| DESCRIPTION | The textual description based on which categories are assigned. | Type: FreeText | Yes |
| CATEGORY_<level> | The multi-level categories, supplied in columns whose names start with "CATEGORY_". | Type: Categorical; Allowed values: user-defined categories | Yes |
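A hedged example of invoking a deployed real-time endpoint with the application/json MIME type; the endpoint name is a placeholder, and the exact request schema is defined by the vendor, so the payload below only mirrors the field names in the table above:

```python
# Sketch only: call the deployed real-time endpoint. The endpoint name is a
# placeholder, and the JSON body simply reuses the DESCRIPTION field name;
# the vendor's actual request schema may differ.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
payload = {"DESCRIPTION": "Email client crashes when opening large attachments"}

response = runtime.invoke_endpoint(
    EndpointName="<your-endpoint-name>",
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(response["Body"].read().decode("utf-8"))
```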
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
