
Overview
GenAI-based chatbots require planning to orchestrate APIs (or tools) that accomplish a user's request. Traditionally, user journeys and dialogue designs were fixed, with NLP models identifying user intents. Generative AI models make it possible to create dialogue flows on the fly; this requires understanding the user's goals and breaking them down into steps to execute. Users can leverage our solution to create dynamic plans of execution. Our solution leverages the capabilities of Anthropic's Claude to determine the required APIs and the order in which they must be called to fulfil the user's request to the LLM chatbot. The model takes an OpenAPI schema and the given user query as input. Users benefit from streamlined, accurate API interactions, saving time and effort while ensuring efficient, reliable responses to complex queries. The solution also allows users to customize plans by providing specific instructions and counter-examples for the model to follow.
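The planning step described above can be pictured as assembling a prompt from the OpenAPI schema, the user query, and any custom instructions, then asking Claude for an ordered call plan. The exact prompt wording the product uses is not published, so the layout below is a minimal illustrative sketch with assumed field names:

```python
import json

def build_planning_prompt(openapi_schema: dict, user_query: str,
                          instructions: str = "", examples: str = "") -> str:
    """Assemble a planning prompt from an OpenAPI schema and a user query.

    The prompt structure here is an assumption for illustration; the
    deployed solution's internal prompt may differ.
    """
    return (
        "You are an API orchestration planner.\n"
        f"Available APIs (OpenAPI schema):\n{json.dumps(openapi_schema, indent=2)}\n"
        f"Instructions: {instructions}\n"
        f"Examples: {examples}\n"
        f"User request: {user_query}\n"
        "Return an ordered JSON list of API calls that fulfils the request."
    )

# Toy schema with two operations the planner could sequence.
schema = {
    "openapi": "3.0.0",
    "paths": {
        "/orders/{id}": {"get": {"summary": "Fetch an order"}},
        "/refunds": {"post": {"summary": "Issue a refund"}},
    },
}
prompt = build_planning_prompt(schema, "Refund my order 42",
                               instructions="Fetch the order before refunding.")
```

The resulting string would then be sent to Claude (e.g. via Amazon Bedrock), and the returned JSON list of calls executed in order.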
Highlights
- Users can leverage our solution to customize workflows to their specific requirements and orchestrate dynamic workflows without having to manually create rules and logic flows for the services involved. This provides flexibility in executing business-specific workflows, reducing manual overhead. Users can also handle complex workflows seamlessly, allowing for scalable solutions that adapt to different requirements and changes in the API schema.
- Our solution enhances reliability by ensuring accurate responses because it enables structured data retrieval. Integration with external tools via JSON Schema facilitates seamless interactions. By generating a plan for API calls, the solution minimizes the risk of errors that can occur from incorrect sequencing or missing dependencies between API calls. This approach improves user experience by providing more relevant responses. Additionally, automating tasks saves time and costs by reducing manual intervention.
- The Mphasis AI for Software Development service enables enterprises to build customized no-code/low-code solutions to accelerate development and deployment of software. We leverage our patented AI/ML platforms and frameworks to engage with clients across multiple use cases, such as intelligent code recommendation and rapid prototyping. We help enterprises target impactful AI/ML interventions that drive business benefits. Our Assessments, Workshops, and Implementations identify the most relevant use cases in software engineering and outline the potential benefits, such as efficiency, cost, and innovation.
Details
Unlock automation with AI agent solutions

Pricing
| Dimension | Description | Cost |
|---|---|---|
| ml.m5.2xlarge Inference (Batch), Recommended | Model inference on the ml.m5.2xlarge instance type, batch mode | $2.00/host/hour |
| ml.p3.8xlarge Inference (Batch) | Model inference on the ml.p3.8xlarge instance type, batch mode | $2.00/host/hour |
| ml.p2.8xlarge Inference (Batch) | Model inference on the ml.p2.8xlarge instance type, batch mode | $2.00/host/hour |
| inference.count.m.i.c Inference Pricing | inference.count.m.i.c Inference Pricing | $0.05/request |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
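Since the listing prices batch-mode inference, the model package would typically be invoked through a SageMaker batch transform job. The sketch below constructs the request parameters only; the job name, model name, and S3 URIs are hypothetical placeholders, and the commented-out final call requires valid AWS credentials and an active subscription:

```python
# Hypothetical names: the actual model (created from the Marketplace model
# package) and the S3 bucket come from your own AWS account.
transform_request = {
    "TransformJobName": "api-planner-batch-001",
    "ModelName": "api-planner-model",
    "TransformInput": {
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/input/input_zip.zip",
        }},
        "ContentType": "application/zip",  # input MIME type from this listing
    },
    "TransformOutput": {"S3OutputPath": "s3://my-bucket/output/"},
    "TransformResources": {
        "InstanceType": "ml.m5.2xlarge",   # recommended instance type
        "InstanceCount": 1,
    },
}
# import boto3
# boto3.client("sagemaker").create_transform_job(**transform_request)
```

The output plan is written back to the configured S3 output path when the job completes.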
Version release notes
Latest version
Additional details
Inputs
- Summary
The input file 'input_zip.zip' should contain two files, namely:
- config.json
- openAPI_schema.json
config.json - This file contains the Amazon Bedrock credentials ("region_name", "aws_access_key_id", "aws_secret_access_key"), the file path of the API schema ("file_path"), and optional information that helps generate a correct response ("instructions", "examples", "extra_tasks"). The user query is also provided in this file.
openAPI_schema.json - This file contains the OpenAPI schema of the APIs to orchestrate.
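The two files above are packaged together as `input_zip.zip`. A minimal sketch of assembling that archive is shown below; the credential values and the `query` key name are placeholder assumptions, and a real run needs valid Bedrock credentials plus your actual OpenAPI schema:

```python
import io
import json
import zipfile

# Placeholder config values; replace with real Bedrock credentials and paths.
config = {
    "region_name": "us-east-1",
    "aws_access_key_id": "<ACCESS_KEY>",
    "aws_secret_access_key": "<SECRET_KEY>",
    "file_path": "openAPI_schema.json",
    "instructions": "Call authentication APIs before any data APIs.",
    "examples": "",
    "extra_tasks": "",
    "query": "Cancel order 42 and email me a confirmation",  # key name assumed
}

# Toy schema standing in for the real API definitions.
openapi_schema = {
    "openapi": "3.0.0",
    "info": {"title": "Orders API", "version": "1.0"},
    "paths": {},
}

# Write both files into input_zip.zip, as the input summary requires.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps(config, indent=2))
    zf.writestr("openAPI_schema.json", json.dumps(openapi_schema, indent=2))

with open("input_zip.zip", "wb") as f:
    f.write(buf.getvalue())
```

The resulting archive matches the `application/zip` input MIME type listed below.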
- Limitations for input type
- The input file 'input_zip.zip' should contain two files, namely: 1. config.json 2. openAPI_schema.json
- Input MIME type
- application/zip
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.