
Overview
Autocode is an automated software development platform that converts wire-frames and visual designs supplied as images into corresponding HTML, CSS, and HTML-JET code. The solution automatically learns to recognize web elements in hand-drawn wire-frames and maps them to the corresponding HTML code. It is a Deep Learning based rapid prototyping platform designed to help design thinking teams, software developers, testers, and support teams, and it can generate code from multiple input formats such as wire-frames and visual designs.
Highlights
- Automated code generation from hand-drawn as well as digital wire-frames, which speeds up prototype creation and accelerates application development. The solution can detect user interface elements such as buttons, text boxes, and labels in wire-frames and convert them to the corresponding HTML and CSS code.
- Uses image processing models that capture element-level details from wire-frames and generate the corresponding HTML code. The Deep Learning based models were trained using transfer learning.
- Autocode is a Deep Learning based automated software development platform for rapid prototyping that can help software developers, testers and support teams. Need customized Deep Learning solutions? Get in touch!
Details

Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch) (Recommended) | Model inference on the ml.m5.large instance type, batch mode | $20.00 |
| ml.m5.large Inference (Real-Time) (Recommended) | Model inference on the ml.m5.large instance type, real-time mode | $10.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $20.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $20.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $20.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $20.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $20.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $20.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
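As a sketch of how a subscribed model package becomes a deployable SageMaker model, the request below builds the parameters for the SageMaker `CreateModel` API. The model name, model package ARN, and IAM role ARN are placeholders, not real identifiers from this listing.

```python
def build_create_model_request(model_name: str, package_arn: str, role_arn: str) -> dict:
    """Build the parameters for sagemaker.create_model from a Marketplace model package."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {"ModelPackageName": package_arn},
        "ExecutionRoleArn": role_arn,
        # Marketplace model packages run with network isolation enabled.
        "EnableNetworkIsolation": True,
    }

request = build_create_model_request(
    "autocode-model",                                                   # placeholder name
    "arn:aws:sagemaker:us-east-2:123456789012:model-package/example",   # placeholder ARN
    "arn:aws:iam::123456789012:role/SageMakerRole",                     # placeholder role
)
# Pass with: boto3.client("sagemaker").create_model(**request)
```

The resulting model can then back either a real-time endpoint or a batch transform job, matching the pricing dimensions above.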
Version release notes
Bug fixes and performance improvements.
Additional details
Inputs
- Summary
Input
Supported content types: image/jpeg
The input image must be in JPEG or PNG format. Guidelines:
- Wire-frames should be either a scanned image (e.g. using CamScanner) or a digital wire-frame.
- The image should be scanned via a phone app or a scanner, without any shadow or noise, for the model to work properly.
- Draw wire-frame objects as straight as possible.
- File size limit: < 4 MB.
Objects supported by Autocode: button, imagebox, text box, text area, combo box, search box, paragraph, help, logo, radio button, checkbox, table grid, and mail box.
Output
Content type: application/json
Sample output (the generated HTML string is truncated here for brevity):

```json
{ "generated_webpage_html": "<!DOCTYPE html> <html lang=\"en\"> <head> <title>Bootstrap Example</title> ..." }
```

Invoking endpoint
AWS CLI Command
You can invoke the endpoint using the AWS CLI:

```shell
aws sagemaker-runtime invoke-endpoint \
    --endpoint-name $model_name \
    --body fileb://sample.jpg \
    --content-type 'image/jpeg' \
    --region us-east-2 \
    output.json
```

Substitute the following parameters:
- endpoint-name - name of the inference endpoint where the model is deployed
- sample.jpg - the input image file, sent as raw binary via the fileb:// prefix
- image/jpeg - MIME type of the given input image
- output.json - filename where the inference results are written
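The same call can be made from Python via boto3's SageMaker runtime client. This is a hedged sketch (the function name is hypothetical, and the endpoint name must match your deployment); it requires boto3 and valid AWS credentials to actually run:

```python
def invoke_autocode_endpoint(endpoint_name: str, image_path: str,
                             region: str = "us-east-2") -> dict:
    """Invoke the deployed endpoint with a JPEG wire-frame and return the parsed JSON."""
    import json
    import boto3  # imported inside so the sketch can be read without AWS SDKs installed

    runtime = boto3.client("sagemaker-runtime", region_name=region)
    with open(image_path, "rb") as f:
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,  # name of the deployed inference endpoint
            ContentType="image/jpeg",    # MIME type of the input image
            Body=f.read(),               # raw image bytes (the CLI uses fileb:// for this)
        )
    # The response Body is a streaming object containing the JSON result.
    return json.loads(response["Body"].read().decode("utf-8"))
```

This mirrors the CLI invocation above, returning the decoded JSON instead of writing it to output.json.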
Python
Python code to process the output (a more detailed example can be found in the sample notebook):

```python
import json

def prediction_wrapper(prediction):
    return json.loads(prediction)

with open('output.json', mode='r', encoding='utf-8') as f:
    generated_code = prediction_wrapper(f.read())
```

Resources
- Instructions notebook: https://tinyurl.com/y3wsstzr
- Sample input images: https://tinyurl.com/y3wkfn7y
- Sample output: https://tinyurl.com/sws7as9
- Input MIME type
- image/jpeg
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.