
Overview
Mphasis Knowledge Graph is a novel approach to summarizing unstructured data by converting it into queryable Subject-Predicate-Object triplets using NLP, enabling semantic understanding of the text. The algorithm takes English text as input and generates two outputs: the triplets (Subject-Predicate-Object) and a graphical representation of those triplets.
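To make the triplet idea concrete, here is a minimal sketch of dependency-parse-based Subject-Predicate-Object extraction. This is not Mphasis's implementation, only an illustration of the general technique, assuming spaCy with the en_core_web_sm model installed.

```python
# Naive SVO extraction via dependency parsing - an illustration of the
# triplet concept, NOT the vendor's algorithm.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triplets(text):
    """Yield naive (subject, predicate, object) triplets from English text."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr", "pobj")]
            for subj in subjects:
                for obj in objects:
                    yield (subj.text, token.lemma_, obj.text)

print(list(extract_triplets("Mphasis builds cognitive computing platforms.")))
# -> [('Mphasis', 'build', 'platforms')]
```

A production extractor would also resolve coreference and normalize entities; this sketch only shows the basic shape of a triplet.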
Highlights
- The solution takes English text as input and uses NLP to convert it into semantically correct Subject-Predicate-Object triplets, summarizing the unstructured data in a graphical format that shows the associated entities and their relationships.
- The triplets can be imported into graph databases to ease information retrieval. This enables users to build dialogue systems such as question-answering systems and chatbots, and supports knowledge discovery, compliance, customer 360, KYC, and similar use cases.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction & predictive analytics capabilities. Need customized Machine Learning and Deep Learning solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.2xlarge Inference (Batch), recommended | Model inference on the ml.m5.2xlarge instance type, batch mode | $16.00 |
| ml.m5.2xlarge Inference (Real-Time), recommended | Model inference on the ml.m5.2xlarge instance type, real-time mode | $8.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $16.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $16.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $16.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $16.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $16.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $16.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $16.00 |
| ml.c4.2xlarge Inference (Batch) | Model inference on the ml.c4.2xlarge instance type, batch mode | $16.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
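As a minimal sketch, the model package can be deployed for real-time inference with the SageMaker Python SDK roughly as follows; both ARNs below are placeholders to be copied from your own account and Marketplace subscription.

```python
# Sketch: deploy a Marketplace model package for real-time inference.
# The role and model package ARNs are placeholders, not real values.
from sagemaker import ModelPackage, Session

session = Session()
model = ModelPackage(
    role="arn:aws:iam::<account-id>:role/<sagemaker-execution-role>",  # placeholder
    model_package_arn="arn:aws:sagemaker:<region>:<account-id>:model-package/<name>",  # placeholder
    sagemaker_session=session,
)

# ml.m5.2xlarge is the recommended real-time instance type (see Pricing).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.2xlarge",
    endpoint_name="knowledge-graph-endpoint",  # hypothetical name
)
```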
Version release notes
Bug fixes and performance improvements.
Additional details
Input
- Supported content type: text/plain.
- The input file must be UTF-8 encoded.
- The algorithm works with any English text between 100 and 250 words (a pre-flight check for these limits is sketched below).
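The encoding and word-count constraints above are easy to check before invoking the endpoint. A minimal pre-flight helper might look like this; the function name is ours, not part of the product:

```python
# Sketch: validate the documented input constraints (UTF-8, 100-250 words).
def validate_input(path: str) -> str:
    with open(path, "rb") as f:
        raw = f.read()
    text = raw.decode("utf-8")  # raises UnicodeDecodeError for non-UTF-8 files
    n_words = len(text.split())
    if not 100 <= n_words <= 250:
        raise ValueError(f"expected 100-250 words, got {n_words}")
    return text

validate_input("Input.txt")  # Input.txt as in the invocation example below
```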
Output
- Content type: application/zip.
- The zipped folder contains two output files: a ".csv" file with the triplets and a ".png" file with the knowledge graph (see the unpacking sketch after this list).
- The .csv file lists the extracted Subject-Predicate-Object triplets; the .png file is their graphical representation.
- In the knowledge graph (.png), nodes represent Subjects and Objects, and edges represent Predicates.
- The nodes in the knowledge graph are color-coded based on NER tags:
- Location: Green
- Org: Maroon
- Date: Red
- Person: Blue
- Other NER tags: Skyblue
- Non-NER tags: Yellow
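Once you have the archive, the two files can be pulled out with the standard library. A minimal sketch, assuming the response was saved as output.zip (any CSV structure beyond the Subject-Predicate-Object columns is an assumption; inspect your own output first):

```python
# Sketch: unpack the returned archive and print the triplets.
import csv
import zipfile

with zipfile.ZipFile("output.zip") as zf:
    zf.extractall("kg_output")
    csv_name = next(n for n in zf.namelist() if n.endswith(".csv"))

with open(f"kg_output/{csv_name}", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        print(row)  # each row holds a Subject-Predicate-Object triplet
# The .png knowledge graph is extracted alongside the CSV in kg_output/.
```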
Invoking endpoint
AWS CLI command: if you are using real-time inferencing, create the endpoint first and then invoke it with the following command (a boto3 equivalent is sketched after the parameter list):

aws sagemaker-runtime invoke-endpoint --endpoint-name "endpoint-name" --body fileb://Input.txt --content-type text/plain --accept application/zip output.zip

Substitute the following parameters:
- "endpoint-name" - name of the inference endpoint where the model is deployed.
- Input.txt - Input file.
- text/plain - MIME type of the given input file.
- output.zip - filename where the inference results are written to.
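For programmatic access, here is a boto3 equivalent of the CLI call; the endpoint name is the same placeholder used above.

```python
# Sketch: boto3 equivalent of the invoke-endpoint CLI call.
import boto3

runtime = boto3.client("sagemaker-runtime")
with open("Input.txt", "rb") as f:
    response = runtime.invoke_endpoint(
        EndpointName="endpoint-name",  # placeholder: your deployed endpoint
        Body=f.read(),
        ContentType="text/plain",
        Accept="application/zip",
    )

# The response body is a stream; persist it as the zip archive.
with open("output.zip", "wb") as out:
    out.write(response["Body"].read())
```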
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities successfully utilize the products and features provided by Amazon Web Services.