
Overview
Influencer identification in a network is a component of Mphasis HyperGraf, an omni-channel customer 360 analytics solution. A social network consists of a set of socially relevant actors connected by one or more relations, such as the communications between those actors. Identifying the influential nodes in such a network enables organizations to optimize how information spreads through it. The underlying algorithms are driven by state-of-the-art network analysis and graph theory methods, e.g. PageRank, degree centrality, betweenness, and clustering coefficients.
Highlights
- Provides several measures for key influencer identification using state-of-the-art methods in network analysis, covering the following classes of measures: 1. Influence; 2. Closeness; 3. Centrality; 4. Knowledge broker; 5. Hub & Authority; and 6. Communities & Clusters
- We organize the measures of influence into the following categories. Influence: how strongly a node is connected to other important nodes in the network. Centrality: based on the number of links incident upon a node. Closeness: how tightly knit the nodes in the network are. Brokerage: whether nodes act as bridges between different groups of the network. Hub & Authority: hubs point to many authoritative pages, while authorities are pointed to by many hubs. Communities: measures of the cluster membership of the nodes; each node is tagged with the specific cluster it belongs to in the network
- Need customized predictive analytic solutions? Get in touch!
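To make the measure categories above concrete, the sketch below computes two of them, degree centrality and the local clustering coefficient, on a hypothetical toy edge list in plain Python. The definitions used are the standard textbook ones; the graph, node labels, and implementation details are illustrative assumptions, not HyperGraf's actual code.

```python
from collections import defaultdict

# Toy undirected edge list: (Vertex 1, Vertex 2) pairs, as in the model's input.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

# Build an adjacency map from the edge list.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

n = len(adj)

# Degree centrality: fraction of the other nodes each node is directly linked to.
degree_centrality = {v: len(adj[v]) / (n - 1) for v in adj}

# Local clustering coefficient: how close a node's neighbours are to forming a clique.
def clustering(v):
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count edges among the neighbours (each unordered pair once).
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2 * links / (k * (k - 1))

clustering_coeffs = {v: clustering(v) for v in adj}
```

On this toy graph, node C has the highest degree centrality (0.75), while node A sits in a closed triangle and so has a clustering coefficient of 1.0.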
Details

Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), Recommended | Model inference on the ml.m5.large instance type, batch mode | $8.00 |
| ml.m5.large Inference (Real-Time), Recommended | Model inference on the ml.m5.large instance type, real-time mode | $4.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $8.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $8.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $8.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $8.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $8.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $8.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $8.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $8.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
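The flow described above, creating a SageMaker model from a subscribed model package, can be sketched with boto3 as follows. The function name, and all three arguments (model name, model package ARN, and execution role ARN) are placeholders you must substitute with your own resources; running it requires boto3 and AWS credentials with SageMaker permissions.

```python
def create_model_from_package(model_name, model_package_arn, role_arn):
    """Register a subscribed marketplace model package as a SageMaker model.

    All three arguments are placeholders for your own resources.
    """
    import boto3  # imported inside the function so the sketch stays inert until called

    sm = boto3.client("sagemaker")
    return sm.create_model(
        ModelName=model_name,
        PrimaryContainer={"ModelPackageName": model_package_arn},
        ExecutionRoleArn=role_arn,
        # Marketplace model packages typically run with network isolation enabled.
        EnableNetworkIsolation=True,
    )
```

Once the model exists, you can attach it to a real-time endpoint or a batch transform job, matching the inference modes priced in the table above.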
Version release notes
Bug fixes and performance improvements
Additional details
Inputs
- Summary
Input:
The following are the mandatory inputs for predictions made by the algorithm:
- Vertex 1
- Vertex 2
- Here, Vertex 1 represents actors/nodes that communicate with actors/nodes in Vertex 2.
- Supported content types: 'text/csv'
Output:
- Supported content types: 'text/csv'
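A minimal sketch of preparing the mandatory input described above: a two-column CSV where each row records one Vertex 1 → Vertex 2 communication. The header names and the `user_*` labels are assumptions for illustration; check them against the vendor's sample input file.

```python
import csv

# Hypothetical communication edges: each row says Vertex 1 contacted Vertex 2.
interactions = [
    ("user_17", "user_42"),
    ("user_17", "user_08"),
    ("user_42", "user_08"),
]

with open("input.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Vertex 1", "Vertex 2"])  # assumed header names
    writer.writerows(interactions)
```

The resulting `input.csv` is what gets passed as the request body in the invocation command below, with content type `text/csv`.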
Invoking endpoint:
If you are using real-time inference, create the endpoint first and then use the following command to invoke it:
aws sagemaker-runtime invoke-endpoint --endpoint-name "endpoint-name" --body fileb://input.csv --content-type text/csv --accept text/csv out.csv
Substitute the following parameters:
- "endpoint-name" - name of the inference endpoint where the model is deployed
- input.csv - the input CSV file to run inference on
- text/csv - MIME type of the given input file (above)
- out.csv - filename where the inference results are written to
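For callers who prefer the SDK over the CLI, the command above maps onto boto3's `invoke_endpoint` call. This is a hedged sketch: the endpoint name and file paths are placeholders, and running it requires boto3, AWS credentials, and a deployed endpoint.

```python
def invoke_hypergraf(endpoint_name, input_path, output_path):
    """boto3 equivalent of `aws sagemaker-runtime invoke-endpoint`.

    endpoint_name, input_path, and output_path are placeholders.
    """
    import boto3  # imported inside the function so the sketch stays inert until called

    runtime = boto3.client("sagemaker-runtime")
    with open(input_path, "rb") as f:
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            Body=f.read(),
            ContentType="text/csv",  # MIME type of the input file
            Accept="text/csv",       # MIME type requested for the response
        )
    body = response["Body"].read()
    with open(output_path, "wb") as out:
        out.write(body)  # same role as out.csv in the CLI command
    return body
```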
Resources:
- Input MIME type
- text/csv, text/plain
Resources
Vendor resources
Support
Vendor support
For any assistance, please reach out at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.