Overview
ProteinMPNN (Protein Message Passing Neural Network) is a cutting-edge, deep learning-based graph neural network designed to predict amino acid sequences for given protein backbones.
This network leverages evolutionary, functional, and structural information to generate sequences that are likely to fold into the desired 3D structures.
The input to the neural network is a 3D structure of a protein in PDB format, and the output is one or more amino acid sequences in multi-FASTA format.
ProteinMPNN is one of many NVIDIA NIM microservices (NIMs) that you can apply to tasks in biosciences and drug discovery. NIMs make it easy to chain models together into a complete in silico drug discovery pipeline.
For example, you can run the ProteinMPNN NIM as a step after the RFdiffusion generative model NIM to determine possible amino acid sequences for a generated backbone, as sketched below.
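The sketch below shows how a deployed ProteinMPNN endpoint might be called from Python once the model package is running on SageMaker. The endpoint name and the request fields (input_pdb, num_seq_per_target) are illustrative assumptions, not the authoritative API schema; see the Inputs section and the NIM documentation for the exact fields.

```python
# A minimal sketch of calling a deployed ProteinMPNN endpoint from Python.
# The endpoint name and the request fields ("input_pdb", "num_seq_per_target")
# are illustrative assumptions; consult the NIM API reference for the exact schema.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# A protein backbone in PDB format, e.g. one generated by the RFdiffusion NIM.
with open("backbone.pdb") as handle:
    pdb_text = handle.read()

payload = {
    "input_pdb": pdb_text,        # assumed field name for the PDB input
    "num_seq_per_target": 4,      # assumed field: number of sequences to sample
}

response = runtime.invoke_endpoint(
    EndpointName="proteinmpnn-nim-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

# The response is expected to contain the designed sequences in multi-FASTA form.
print(response["Body"].read().decode("utf-8"))
```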
Highlights
- Increased productivity: NIMs enable developers to build generative AI applications quickly, in minutes rather than weeks, by providing a standardized way to add AI capabilities to their applications.
- Simplified deployment: NIMs provide containers that can be easily deployed on various platforms, including clouds, data centers, or workstations, making it convenient for developers to test and deploy their applications.
- ProteinMPNN's potential applications span from accelerating drug discovery to advancing synthetic biology.
Details
Pricing
Free trial
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.12xlarge Inference (Batch), Recommended | Model inference on the ml.g5.12xlarge instance type, batch mode | $1.00 |
| ml.g6e.12xlarge Inference (Real-Time), Recommended | Model inference on the ml.g6e.12xlarge instance type, real-time mode | $1.00 |
| ml.g6e.24xlarge Inference (Real-Time) | Model inference on the ml.g6e.24xlarge instance type, real-time mode | $1.00 |
| ml.g6e.48xlarge Inference (Real-Time) | Model inference on the ml.g6e.48xlarge instance type, real-time mode | $1.00 |
| ml.p4d.24xlarge Inference (Real-Time) | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $1.00 |
| ml.p4de.24xlarge Inference (Real-Time) | Model inference on the ml.p4de.24xlarge instance type, real-time mode | $1.00 |
| ml.p5.48xlarge Inference (Real-Time) | Model inference on the ml.p5.48xlarge instance type, real-time mode | $1.00 |
| ml.p5e.48xlarge Inference (Real-Time) | Model inference on the ml.p5e.48xlarge instance type, real-time mode | $1.00 |
| ml.p5en.48xlarge Inference (Real-Time) | Model inference on the ml.p5en.48xlarge instance type, real-time mode | $1.00 |
Vendor refund policy
None
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
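As a rough illustration, the subscribed model package can be deployed with the SageMaker Python SDK along the lines below. The model package ARN, endpoint name, and execution-role lookup are placeholders and assumptions; substitute the ARN shown for your region after subscribing and one of the instance types listed under Pricing.

```python
# A minimal sketch of deploying the subscribed model package with the
# SageMaker Python SDK. The model package ARN and endpoint name are
# placeholders, not real identifiers.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:us-east-1:111122223333:model-package/proteinmpnn-nim",  # placeholder ARN
    sagemaker_session=session,
)

# Create a real-time endpoint on one of the supported instance types.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6e.12xlarge",           # recommended real-time instance type from the Pricing table
    endpoint_name="proteinmpnn-nim-endpoint",  # placeholder endpoint name
)
```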
Version release notes
Additional details
Inputs
- Summary
The model exposes /invocations and /ping APIs and accepts JSON requests whose parameters control the generated sequences. See the example and field descriptions below.
- Input MIME type
- application/json
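As an illustration, an /invocations request body might look like the following sketch. The field names (input_pdb, num_seq_per_target, sampling_temp) are assumptions based on common ProteinMPNN parameters; consult the NIM documentation for the authoritative schema.

```python
# A hedged sketch of an /invocations request body. The field names below are
# assumptions, not the authoritative schema.
import json

with open("backbone.pdb") as handle:
    pdb_text = handle.read()       # PDB text of the target backbone

request_body = {
    "input_pdb": pdb_text,         # assumed: backbone structure to design against
    "num_seq_per_target": 4,       # assumed: number of candidate sequences to sample
    "sampling_temp": [0.1],        # assumed: sampling temperature(s); lower is more conservative
}

# Serialized as application/json, the declared input MIME type.
print(json.dumps(request_body)[:120])
```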
Resources
Support
Vendor support
Free support is available via the NVIDIA NIM Developer Forum.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.