
Overview
Autocode Text To Python Code Recommender takes a code-related text query as input and returns the three most relevant code recommendations from GitHub, selected to be syntactically and semantically correct. Given the ever-increasing number of programming languages and the frameworks built around them, it is difficult to be technically fluent in all of them. Another challenge is the development time and effort spent looking up efficient solutions to a problem. This solution helps address these practical problems faced by the developer community.
Highlights
- This solution helps accelerate the application development cycle by providing developers with targeted code recommendations.
- The system uses a similarity-based distance measure to find the code samples that most closely match the user query. Queries should be coherent and focused on a single topic.
- Autocode is a Deep Learning-based automated software development platform for rapid prototyping that can help software developers, testers, and support teams. Need customized Deep Learning solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.c4.2xlarge Inference (Batch) (Recommended) | Model inference on the ml.c4.2xlarge instance type, batch mode | $20.00 |
| ml.c4.2xlarge Inference (Real-Time) (Recommended) | Model inference on the ml.c4.2xlarge instance type, real-time mode | $10.00 |
| ml.p2.xlarge Inference (Batch) | Model inference on the ml.p2.xlarge instance type, batch mode | $20.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $20.00 |
| ml.m5.12xlarge Inference (Batch) | Model inference on the ml.m5.12xlarge instance type, batch mode | $20.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $20.00 |
| ml.p2.16xlarge Inference (Batch) | Model inference on the ml.p2.16xlarge instance type, batch mode | $20.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $20.00 |
| ml.c4.4xlarge Inference (Batch) | Model inference on the ml.c4.4xlarge instance type, batch mode | $20.00 |
| ml.c5.9xlarge Inference (Batch) | Model inference on the ml.c5.9xlarge instance type, batch mode | $20.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
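Deployment follows the standard SageMaker model-package flow: create a model from the package, create an endpoint configuration, then create the endpoint. The sketch below only assembles the three request payloads; every name in it (model name, role ARN, package ARN) is a placeholder for illustration, not a value from this listing.

```python
# Illustrative sketch of deploying a subscribed model package for
# real-time inference. All ARNs and names below are placeholders.

def build_deploy_requests(model_package_arn: str, role_arn: str,
                          name: str = "autocode-recommender") -> dict:
    """Assemble the three boto3 'sagemaker' client requests needed to
    go from a subscribed model package to a live endpoint."""
    return {
        "create_model": {
            "ModelName": name,
            "PrimaryContainer": {"ModelPackageName": model_package_arn},
            "ExecutionRoleArn": role_arn,
            # Marketplace model packages require network isolation.
            "EnableNetworkIsolation": True,
        },
        "create_endpoint_config": {
            "EndpointConfigName": name + "-config",
            "ProductionVariants": [{
                "VariantName": "AllTraffic",
                "ModelName": name,
                # Instance type recommended in the pricing table.
                "InstanceType": "ml.c4.2xlarge",
                "InitialInstanceCount": 1,
            }],
        },
        "create_endpoint": {
            "EndpointName": name,
            "EndpointConfigName": name + "-config",
        },
    }

# Each dict is passed to the matching boto3 call, e.g.:
#   sm = boto3.client("sagemaker")
#   sm.create_model(**requests["create_model"])
requests = build_deploy_requests(
    "arn:aws:sagemaker:us-east-2:123456789012:model-package/autocode",  # placeholder
    "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
)
```

The actual boto3 calls are left commented out so the sketch runs without AWS credentials; substitute the ARNs you receive after subscribing.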
Version release notes
Bug fixes and performance improvements.
Additional details
Inputs
Input
Supported content types: text/plain
There is no character limit on the query. The query should be coherent and focused on a single topic. The system may have trouble capturing context across multiple sentences, so it is advised to use a single-sentence query.
Sample input queries:
- Create confusion matrix?
- How to input a csv file in Python?
- Convert a date string into yyyymmdd format
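Since the endpoint expects a text/plain payload containing one single-sentence query, preparing the input is a one-liner. A minimal sketch (the filename `input.txt` is an arbitrary choice, not prescribed by the listing):

```python
# Save a single-sentence query as the text/plain payload the
# endpoint expects. "input.txt" is an illustrative filename.
from pathlib import Path

def write_query(query: str, path: str = "input.txt") -> str:
    """Write one coherent, single-topic query to a plain-text file."""
    # The model handles single sentences best, so strip surrounding
    # whitespace and keep the query on one line.
    Path(path).write_text(query.strip(), encoding="utf-8")
    return path

payload = write_query("Convert a date string into yyyymmdd format")
```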
Output
Content type: text/csv

Sample output:

| Result | Function Name | URL |
|---|---|---|
| Result 1 | Create | <https://github.com/cloudfoundry/>.. |

Invoking the endpoint
AWS CLI Command
If you are using real-time inference, create the endpoint first and then use the following command to invoke it:
!aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$input.json --content-type 'text/plain' --region us-east-2 output.csv
Substitute the following parameters:
- "endpoint-name" - name of the inference endpoint where the model is deployed
- input.json - input json with query
- application/json - MIME type of the given input
- out.json - filename where the inference results are written to.
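The same invocation can be scripted with boto3, and the text/csv response parsed programmatically. The sketch below is illustrative: the endpoint name is whatever you chose when deploying, and the parser assumes the three-column layout (Result, Function Name, URL) shown in the sample output above.

```python
import csv
import io

def parse_recommendations(body_text: str) -> list:
    """Parse the text/csv response into a list of row dicts.

    Assumes the three-column layout shown in the sample output:
    Result, Function Name, URL.
    """
    reader = csv.DictReader(io.StringIO(body_text))
    return list(reader)

def invoke_recommender(endpoint_name: str, query: str) -> list:
    """Send a single-sentence query to the deployed endpoint.

    Requires AWS credentials and the boto3 SDK; endpoint_name is
    the name you chose when creating the real-time endpoint.
    """
    import boto3  # imported lazily so the parser works without AWS

    client = boto3.client("sagemaker-runtime", region_name="us-east-2")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/plain",
        Body=query.encode("utf-8"),
    )
    return parse_recommendations(response["Body"].read().decode("utf-8"))

# Parsing a response offline (no AWS call); the row values are
# hypothetical, shaped like the sample output above.
sample = "Result,Function Name,URL\r\nResult 1,Create,https://github.com/cloudfoundry/\r\n"
rows = parse_recommendations(sample)
```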
Resources
Link to Instructions Notebook: https://tinyurl.com/qnvw8nv
Link to Sample Input: https://tinyurl.com/uc5ez2t
Link to Sample Output: https://tinyurl.com/t6fm8q4
- Input MIME type
- application/json, text/plain, text/csv
Resources
Vendor resources
Support
Vendor support
For any assistance reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.