
Overview
The Document Similarity solution helps a user find pairwise similarity between documents, identifying whether two documents share similar wording and contextual information. A higher similarity score means the documents convey very similar contextual information and are written in similar wording. This helps in removing duplicate documents from a set.
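The listing does not document the vendor's exact algorithm, but the idea of a 0-to-1 pairwise score can be illustrated with a minimal cosine-similarity sketch over term-count vectors; this is a stand-in for whatever representation the model actually uses, not the vendor's implementation:

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between two documents' term-count vectors.

    Returns a score in [0, 1]: 0 = no shared terms, 1 = identical term profile.
    (Illustrative only; the marketplace model's scoring method is not documented.)
    """
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Computing this for every pair of documents yields a symmetric document-to-document matrix like the one this solution outputs.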
Highlights
- An unsupervised text-mining solution that finds similarity between documents based on content and context. It computes an intuitive similarity score ranging from 0 (low similarity) to 1 (high similarity), which can be used to segregate documents belonging to distinct subject groups.
- Practical applications of this solution include: 1) reducing the time spent searching for relevant information, such as de-duplicating search engine results; 2) grouping documents with similar content; 3) recommending personalised learning paths for effective knowledge interventions.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction and predictive analytics capabilities. Need customized deep learning and machine learning solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), Recommended | Model inference on the ml.m5.large instance type, batch mode | $20.00 |
| ml.t2.medium Inference (Real-Time), Recommended | Model inference on the ml.t2.medium instance type, real-time mode | $10.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $20.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $20.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $20.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $20.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $20.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $20.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Bug fixes and performance improvements.
Additional details
Inputs
Input
- The input file should be a zip of text files (.txt) in UTF-8 encoding
- The zipped file can contain a maximum of 20 documents
- Each file must be at most 10 KB (1000 lines)
- Supported content type: application/zip
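The constraints above can be enforced client-side before upload. A sketch using only the standard library; the limits mirror the listing, while the function name and error handling are illustrative:

```python
import zipfile
from pathlib import Path

MAX_DOCS = 20          # per the input constraints
MAX_BYTES = 10 * 1024  # 10 KB per file, per the input constraints

def build_input_zip(txt_paths, out_path="input.zip"):
    """Validate UTF-8 .txt files and bundle them into the zip the model expects."""
    paths = [Path(p) for p in txt_paths]
    if len(paths) > MAX_DOCS:
        raise ValueError(f"at most {MAX_DOCS} documents allowed, got {len(paths)}")
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in paths:
            data = p.read_bytes()
            if len(data) > MAX_BYTES:
                raise ValueError(f"{p.name} exceeds {MAX_BYTES} bytes")
            data.decode("utf-8")  # raises UnicodeDecodeError if not valid UTF-8
            zf.writestr(p.name, data)
    return out_path
```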
Output
- The output from the model is a CSV file; supported content type: text/csv
- The output file contains a document-to-document matrix of similarity scores between 0 and 1, interpreted as:
- 0 being least similar
- 1 being most similar
- Sample output file:

|  | Document-1 | Document-2 | Document-3 |
|---|---|---|---|
| Document-1 | 1 | 0.167 | 0.648 |
| Document-2 | 0.167 | 1 | 0.070 |
| Document-3 | 0.648 | 0.070 | 1 |
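For de-duplication, the returned matrix can be parsed with the standard library to flag highly similar pairs. This sketch assumes the CSV has a header row of document names and a matching first column, as the sample suggests; the 0.6 threshold is illustrative, not something the listing prescribes:

```python
import csv
import io

def near_duplicates(csv_text: str, threshold: float = 0.6):
    """Return (doc_a, doc_b, score) pairs from the similarity-matrix CSV
    whose off-diagonal score meets `threshold` (threshold is illustrative)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    names = rows[0][1:]  # header: empty corner cell, then document names
    pairs = []
    for i, row in enumerate(rows[1:]):
        for j, score in enumerate(row[1:]):
            if j > i and float(score) >= threshold:  # upper triangle only
                pairs.append((names[i], names[j], float(score)))
    return pairs
```

Applied to the sample matrix above, this would flag Document-1 and Document-3 (0.648) as a candidate duplicate pair.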
Invoking endpoint
AWS CLI Command
If you are using real-time inferencing, create the endpoint first and then use the following command to invoke it:
aws sagemaker-runtime invoke-endpoint --endpoint-name "endpoint-name" --body fileb://input.zip --content-type application/zip --accept text/csv result.csv

Substitute the following parameters:
- endpoint-name - name of the inference endpoint where the model is deployed
- input.zip - input file
- application/zip - MIME type of the given input file (above)
- result.csv - filename where the inference results are written
- Input MIME type: application/zip
Resources
Vendor resources
Support
Vendor support
For any assistance, please reach out at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.