
Overview
Meta Llama 3.1 is a collection of multilingual large language models (LLMs): pretrained and instruction-tuned generative models. The NVIDIA NIM microservice for Llama 3.1 70B-Instruct simplifies deployment of the Llama 3.1 70B instruction-tuned model, which is optimized for language understanding, reasoning, and text-generation use cases. Llama 3.1 70B-Instruct is available as an NVIDIA NIM microservice, part of NVIDIA AI Enterprise on the AWS Marketplace. NIM is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing across clouds, data centers, and workstations.
The Llama 3.1 70B-Instruct NIM is a prebuilt container that packages the Meta Llama 3.1 large language model with inference engines such as Triton Inference Server, TensorRT, TensorRT-LLM, and PyTorch. NIM provides low-latency, high-throughput inference along with function calling, metrics export, a standard API, optimized profiles, and enterprise support.
Highlights
- NVIDIA Llama 3.1 70B-Instruct is a 70-billion-parameter multilingual large language model (LLM): a pretrained and instruction-tuned generative model. The Llama 3.1 instruction-tuned, text-only model is optimized for multilingual dialogue use cases. It is available as an [NVIDIA NIM microservice](https://docs.nvidia.com/nim/large-language-models/latest/introduction.html).
- NVIDIA NIM, part of the [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/) software platform available on the [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-ozgjkov6vq3l6), is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing.
Details
Pricing
Free trial
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.48xlarge Inference (Batch), recommended | Model inference on the ml.g5.48xlarge instance type, batch mode | $8.00 |
| ml.p5.48xlarge Inference (Real-Time), recommended | Model inference on the ml.p5.48xlarge instance type, real-time mode | $8.00 |
Vendor refund policy
No refunds. Please contact NVIDIA at https://www.nvidia.com/en-us/data-center/lp/aws-marketplace-offer/ for further assistance.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
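For illustration, the model-package workflow can be scripted with boto3. The sketch below is a minimal example, not a definitive recipe: the model package ARN, IAM role, and resource names are hypothetical placeholders to replace with values from your Marketplace subscription, and the instance type follows the real-time recommendation above.

```python
import boto3

# Hypothetical placeholders: substitute the model package ARN from your
# AWS Marketplace subscription and an IAM role with SageMaker permissions.
MODEL_PACKAGE_ARN = "arn:aws:sagemaker:us-east-1:123456789012:model-package/llama-3-1-70b-instruct-nim"
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

sm = boto3.client("sagemaker")

# Create a SageMaker model from the pre-trained model package.
sm.create_model(
    ModelName="llama31-70b-instruct-nim",
    ExecutionRoleArn=ROLE_ARN,
    PrimaryContainer={"ModelPackageName": MODEL_PACKAGE_ARN},
    EnableNetworkIsolation=True,  # commonly required for Marketplace packages
)

# Stand up a real-time endpoint on the recommended ml.p5.48xlarge instance.
sm.create_endpoint_config(
    EndpointConfigName="llama31-70b-instruct-nim-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "llama31-70b-instruct-nim",
        "InstanceType": "ml.p5.48xlarge",
        "InitialInstanceCount": 1,
    }],
)
sm.create_endpoint(
    EndpointName="llama31-70b-instruct-nim",
    EndpointConfigName="llama31-70b-instruct-nim-config",
)
```

The same model can instead back a batch transform job on the ml.g5.48xlarge instance type listed under Pricing.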
Version release notes
Supports real-time inference on NVIDIA H100 GPUs (ml.p5.48xlarge instance type).
Additional details
Inputs
- Summary
The model exposes /invocations and /ping APIs; JSON parameters in /invocations requests control the generated text. See the examples and field descriptions below.
- Input MIME type
- application/json
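For reference, a /invocations request body using the fields documented below might look like the following sketch; all values are illustrative, and the chat-style `messages` structure follows the OpenAI-compatible schema that NIM exposes.

```python
import json

# Example /invocations request body; values are illustrative only.
payload = json.dumps({
    "model": "meta/llama-3.1-70b-instruct",
    "messages": [
        {"role": "user", "content": "Write a haiku about GPUs."}
    ],
    "max_tokens": 1024,
    "temperature": 0.5,
    "stream": False,
})
print(payload)
```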
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| model | Name of the model: `meta/llama-3.1-70b-instruct` | Type: FreeText | Yes |
| messages | Text input for the model to respond to. | Type: FreeText | Yes |
| max_tokens | The maximum number of tokens the model will generate in the response. Note: a low value may result in incomplete generations. | Type: FreeText. Default: 1024 | No |
| stream | When `true`, the response is a JSON stream of events; when `false`, the entire response is sent to the client at once. | Type: Categorical. Allowed values: true, false. Default: false | No |
| temperature | Use a lower value to decrease randomness in the response. Randomness can be increased further by raising the `top_p` parameter. | Type: Continuous. Minimum: 0. Maximum: 2. Default: 0.5 | No |
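Putting the fields together, the sketch below invokes a deployed endpoint with the SageMaker runtime. The endpoint name is the hypothetical placeholder from the deployment sketch above, and response parsing assumes the OpenAI-style chat completions schema that NIM returns. When `stream` is true, the request pairs with SageMaker's `invoke_endpoint_with_response_stream` API, which yields event chunks as they arrive.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# Non-streaming request: the full chat completion returns in one JSON body.
resp = runtime.invoke_endpoint(
    EndpointName="llama31-70b-instruct-nim",  # hypothetical name from the sketch above
    ContentType="application/json",
    Body=json.dumps({
        "model": "meta/llama-3.1-70b-instruct",
        "messages": [{"role": "user", "content": "Explain KV caching briefly."}],
        "max_tokens": 256,
        "temperature": 0.5,
        "stream": False,
    }),
)
print(json.loads(resp["Body"].read())["choices"][0]["message"]["content"])

# Streaming request: with "stream": true, read server-sent-event chunks
# from the response event stream as they are produced.
stream_resp = runtime.invoke_endpoint_with_response_stream(
    EndpointName="llama31-70b-instruct-nim",
    ContentType="application/json",
    Body=json.dumps({
        "model": "meta/llama-3.1-70b-instruct",
        "messages": [{"role": "user", "content": "Explain KV caching briefly."}],
        "stream": True,
    }),
)
for event in stream_resp["Body"]:
    if "PayloadPart" in event:
        print(event["PayloadPart"]["Bytes"].decode(), end="")
```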
Resources
Support
Vendor support
Free support via the NVIDIA NIM Developer Forum: https://forums.developer.nvidia.com/c/ai-data-science/nvidia-nim/
Global enterprise support is included with an NVIDIA AI Enterprise subscription: https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite/support/
For additional support information, please contact NVIDIA: https://www.nvidia.com/en-us/data-center/lp/aws-marketplace-offer
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.