
Overview
Systems can retrieve similar content by computing vector similarity between embeddings. This solution uses generative AI to fine-tune the LLM that produces those embeddings so it better captures the nature of the questions and the semantics of your domain. Aligning embeddings with a custom fine-tuned LLM improves search quality, observed as higher rankings for relevant content. The input is a CSV of questions and corresponding answers, for example a FAQ dataset, and the embedding model is fine-tuned on this data. At inference time the trained model can embed any text, so it can vectorize both the target corpus and incoming search queries. The solution applies current state-of-the-art generative AI techniques to fine-tune custom LLMs that produce well-aligned embeddings.
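The listing does not disclose the underlying embedding model or training library. As a hedged illustration of the general idea, the sketch below fine-tunes an open-source embedding model on question-answer pairs with the sentence-transformers library, treating each pair as a positive example; the model name, column names, and hyperparameters are assumptions, not values from this product.

```python
# Illustrative sketch only: fine-tune an embedding model on Question/Answer pairs.
# "all-MiniLM-L6-v2", the column names, and the hyperparameters are assumptions.
import pandas as pd
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

df = pd.read_csv("train.csv")  # expects "Question" and "Answer" columns

# Each (question, answer) pair is a positive example; other items in the batch act as negatives.
examples = [InputExample(texts=[q, a]) for q, a in zip(df["Question"], df["Answer"])]
loader = DataLoader(examples, shuffle=True, batch_size=10)

model = SentenceTransformer("all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=5, warmup_steps=100)
model.save("finetuned-embedding-model")
```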
Highlights
- The solution is intended to be used as part of a search and retrieval workflow. The fine-tuned LLM can embed your documents into a vector database and vectorize incoming search queries (see the retrieval sketch after this list).
- The solution takes input in the form of simple Q&A pairs. This helps the LLM align with the type of questions that will be posed to the system in production.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction & predictive analytics capabilities. Need customized Machine Learning and Deep Learning solutions? Get in touch!
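As a hedged illustration of how the fine-tuned model fits into a retrieval workflow, the sketch below embeds a small corpus and a query, then ranks documents by cosine similarity. The model path and corpus are placeholders; a production setup would store the document vectors in a vector database rather than in memory.

```python
# Illustrative retrieval sketch: embed a corpus and a query with the fine-tuned
# model, then rank documents by cosine similarity. Paths and data are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("finetuned-embedding-model")  # model produced by training

corpus = [
    "How do I reset my password?",
    "Our refund policy allows cancellation at any time.",
    "Supported input formats are zip, gzip and plain text.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True, normalize_embeddings=True)

query = "cancel my subscription"
query_embedding = model.encode(query, convert_to_tensor=True, normalize_embeddings=True)

# Cosine similarity reduces to a dot product on normalized vectors.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```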
Details
Pricing
| Dimension | Description | Cost |
|---|---|---|
| ml.m5.2xlarge Inference (Batch), recommended | Model inference on the ml.m5.2xlarge instance type, batch mode | $3.00/host/hour |
| ml.g4dn.xlarge Training, recommended | Algorithm training on the ml.g4dn.xlarge instance type | $10.00/host/hour |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $3.00/host/hour |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $3.00/host/hour |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $3.00/host/hour |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $3.00/host/hour |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $3.00/host/hour |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $3.00/host/hour |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $3.00/host/hour |
| ml.c4.2xlarge Inference (Batch) | Model inference on the ml.c4.2xlarge instance type, batch mode | $3.00/host/hour |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker algorithm
An Amazon SageMaker algorithm is a packaged training procedure that uses your training data to produce a model. Use the included training algorithm to generate your unique model artifact, then deploy the model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
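A hedged sketch of running a subscribed Marketplace algorithm with the SageMaker Python SDK is shown below. The algorithm ARN, S3 paths, IAM role, and channel name are placeholders, and argument names can differ between SDK versions; consult the listing's usage instructions for the authoritative workflow.

```python
# Illustrative sketch using the SageMaker Python SDK. The algorithm ARN, S3 URIs,
# role, channel name, and instance choices are placeholders, not values from this listing.
import sagemaker
from sagemaker.algorithm import AlgorithmEstimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = AlgorithmEstimator(
    algorithm_arn="arn:aws:sagemaker:us-east-1:111122223333:algorithm/your-subscribed-algorithm",  # placeholder
    role=role,
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    sagemaker_session=session,
)

# Train on the packaged Q&A data (train.zip) uploaded to S3.
estimator.fit({"training": "s3://your-bucket/path/train.zip"})

# Run batch inference on the text you want to embed.
transformer = estimator.transformer(instance_count=1, instance_type="ml.m5.2xlarge")
transformer.transform(
    data="s3://your-bucket/path/inference-input.zip",
    content_type="application/zip",
)
transformer.wait()
```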
Version release notes
This is the latest version.
Additional details
Inputs
- Summary
The input to the training pipeline is a "train.zip" file containing a train.csv file (with "Question" and "Answer" columns) and a user_input JSON file holding hyperparameters, for example {"BATCH_SIZE": 10, "EPOCHS": 5}. A packaging sketch follows this list.
- Input MIME type
- application/zip, application/gzip, text/plain
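A minimal sketch of assembling the training input is shown below. The file name "user_input.json" is an assumption; the listing only states that a user_input JSON file with hyperparameters is included alongside train.csv.

```python
# Minimal sketch: build the train.zip expected by the training pipeline.
# The JSON file name ("user_input.json") and the sample rows are assumptions.
import json
import zipfile

import pandas as pd

# Q&A pairs with the required "Question" and "Answer" columns.
pd.DataFrame(
    {
        "Question": ["How do I reset my password?", "What is the refund policy?"],
        "Answer": [
            "Use the 'Forgot password' link on the sign-in page.",
            "You can cancel your subscription at any time; refunds are not supported.",
        ],
    }
).to_csv("train.csv", index=False)

# Hyperparameters read by the training job.
with open("user_input.json", "w") as f:
    json.dump({"BATCH_SIZE": 10, "EPOCHS": 5}, f)

with zipfile.ZipFile("train.zip", "w") as zf:
    zf.write("train.csv")
    zf.write("user_input.json")
```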
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.