
Overview
Key Phrase Extractor uses an end-to-end text extraction pipeline together with text analysis and natural language processing techniques to automate the extraction of key phrases/words from text documents. The solution relies on unsupervised graph-based, topic-based, and statistics-based algorithms to construct a word network and rank its nodes, identifying the most relevant keyphrases.
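The listing does not publish the exact algorithm, so as an illustration of how graph-based ranking works in general, here is a toy TextRank-style sketch (the window size, damping factor, and token filter are illustrative choices, not the vendor's):

```python
import re
from collections import defaultdict

def textrank_keywords(text, window=2, iterations=30, damping=0.85, top_n=5):
    """Toy graph-based keyword ranking: build a word co-occurrence
    graph and score nodes with a PageRank-style iteration."""
    # Keep only alphabetic tokens longer than 3 characters.
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3]

    # Undirected co-occurrence edges within a sliding window.
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            if words[j] != w:
                neighbors[w].add(words[j])
                neighbors[words[j]].add(w)

    # PageRank-style score propagation over the graph.
    scores = {w: 1.0 for w in neighbors}
    for _ in range(iterations):
        scores = {
            w: (1 - damping)
            + damping * sum(scores[u] / len(neighbors[u]) for u in neighbors[w])
            for w in neighbors
        }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Words that co-occur with many other words accumulate the highest scores, which is the intuition behind graph-based keyphrase ranking.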
Highlights
- This solution provides a list of the most relevant keyphrases in a text document using a graph-based, topic-based, and statistics-based ranking model.
- Applications of keyword extraction include data understanding, indexing, search, and content scalability. A few use cases are Search Engine Optimization (SEO) and Real-Time Analysis (RTA) of social media posts, customer reviews, emails, chat transcripts, and surveys.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction & predictive analytics capabilities. Need customized deep learning and machine learning solutions? Get in touch!
Details

Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), recommended | Model inference on the ml.m5.large instance type, batch mode | $8.00 |
| ml.m5.large Inference (Real-Time), recommended | Model inference on the ml.m5.large instance type, real-time mode | $4.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $8.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $8.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $8.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $8.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $8.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $8.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $8.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $8.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
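As a sketch of how the subscribed model package can be turned into a live endpoint with the SageMaker Python SDK (the ARN and role arguments are placeholders from your own subscription and account, not values from this listing):

```python
def deploy_model_package(model_package_arn, role_arn,
                         instance_type="ml.m5.large"):
    """Create a deployable model from a Marketplace model package
    and spin up a real-time inference endpoint."""
    # Imported inside the function so the module loads without the SDK.
    from sagemaker import ModelPackage  # Amazon SageMaker Python SDK

    model = ModelPackage(role=role_arn,
                         model_package_arn=model_package_arn)
    predictor = model.deploy(initial_instance_count=1,
                             instance_type=instance_type)
    return predictor
```

Calling `deploy_model_package(...)` provisions billable infrastructure; delete the endpoint when you are done to stop charges.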
Version release notes
Bug fixes and performance improvements
Additional details
Inputs
- Summary
Usage Methodology for the algorithm:
- The input has to be a '.txt' file with UTF-8 encoding. PLEASE NOTE: if your input .txt file is not UTF-8 encoded, the model will not perform as expected.
- To make sure that your input file is UTF-8 encoded, use 'Save As' and select 'UTF-8' as the encoding.
- The input should have at least 3 sentences with 50 words (model limitation).
- The input can have a maximum of 750 words (SageMaker restriction).
- Supported content types: text/plain
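The constraints above can be checked before invoking the endpoint. A minimal pre-flight sketch (the `validate_input` helper and its rough sentence split are ours, not part of the product):

```python
import re

def validate_input(path, min_sentences=3, min_words=50, max_words=750):
    """Pre-flight check mirroring the listed input constraints.
    Raises UnicodeDecodeError if the file is not UTF-8, and
    ValueError if it violates the size limits."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    words = text.split()
    # Rough sentence split; good enough for a length check.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) < min_sentences or len(words) < min_words:
        raise ValueError("input needs at least 3 sentences and 50 words")
    if len(words) > max_words:
        raise ValueError("input exceeds the 750-word limit")
    return text
```

Running this locally before each invocation avoids paying for inference calls that would fail or degrade on malformed input.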
Input
Supported content types: text/plain
sample input:
Uttar Pradesh Chief Minister Yogi Adityanath on Friday flagged off the Tejas Express, the country's first "private" train run by its subsidiary IRCTC, on the Lucknow-New Delhi route. The commercial run of the train starts on Saturday. The Tejas Express cuts the time travelled between the two cities to 6.15 hours from the 6.40 hours taken by the Swarn Shatabdi, currently the fastest train on the route. "It is the first corporate train of the country.......
Output
Content type: text/csv
sample output:
| SNo | Key Topics |
|---|---|
| 1 | environment friendly public transport |
| 2 | fastest train |
| 3 | first corporate train |
| 4 | minister piyush |
| 5 | tejas express |
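The listing documents only the `text/csv` content type; assuming the response uses the same SNo/Key Topics columns as the sample above (the layout of `sample_csv` below is our assumption), the result can be read back with the standard library:

```python
import csv
import io

# Hypothetical response body matching the sample output's columns.
sample_csv = (
    "SNo,Key Topics\n"
    "1,environment friendly public transport\n"
    "2,fastest train\n"
    "3,first corporate train\n"
)

rows = list(csv.DictReader(io.StringIO(sample_csv)))
phrases = [row["Key Topics"] for row in rows]
print(phrases)
```

Inspect your actual `output.csv` first; adjust the column names if the deployed model's header differs.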
Invoking endpoint
AWS CLI Command
You can invoke the endpoint using the AWS CLI:

aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$file_name --content-type 'text/plain' --region us-east-2 output.csv

Substitute the following parameters:
- $model_name - name of the inference endpoint where the model is deployed
- $file_name - path to the input file to run inference on (for example, 'input.txt')
- text/plain - content type of the input file
- output.csv - file where the inference results are written
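The same invocation can be sketched with boto3; the endpoint name, file paths, and region below are placeholders you would substitute just like the CLI parameters above:

```python
def invoke_keyphrase_endpoint(endpoint_name, input_path, output_path,
                              region="us-east-2"):
    """Send a UTF-8 text file to the deployed endpoint and
    save the returned CSV of keyphrases."""
    # Imported inside the function so the module loads without boto3.
    import boto3

    runtime = boto3.client("sagemaker-runtime", region_name=region)
    with open(input_path, "rb") as f:
        body = f.read()
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/plain",
        Body=body,
    )
    with open(output_path, "wb") as out:
        out.write(response["Body"].read())
```

For example, `invoke_keyphrase_endpoint("my-endpoint", "input.txt", "output.csv")` mirrors the CLI command with the same content type and output file.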
Resources
- Input MIME type
- text/plain
Vendor resources
Support
Vendor support
For any assistance, please reach out to:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.