
Overview
Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is distilled from the larger Llama-3.1-405B-Instruct model using offline logits extracted from the 405B variant. This 8B version of Llama-3.1-SuperNova maintains high performance while offering strong instruction-following capabilities and domain-specific adaptability.
IMPORTANT INFORMATION: The model is available in two package versions. Please make sure to select the appropriate one. Once you have subscribed, we strongly recommend that you deploy it with our sample notebooks at https://github.com/arcee-ai/aws-samples .
Highlights
- The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
- Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.inf2.xlarge Inference (Real-Time), Recommended | Model inference on the ml.inf2.xlarge instance type, real-time mode | $0.00 |
| ml.p3.8xlarge Inference (Batch), Recommended | Model inference on the ml.p3.8xlarge instance type, batch mode | $0.00 |
| ml.inf2.24xlarge Inference (Real-Time) | Model inference on the ml.inf2.24xlarge instance type, real-time mode | $0.00 |
| ml.g6.16xlarge Inference (Real-Time) | Model inference on the ml.g6.16xlarge instance type, real-time mode | $0.00 |
| ml.g6.2xlarge Inference (Real-Time) | Model inference on the ml.g6.2xlarge instance type, real-time mode | $0.00 |
| ml.g5.8xlarge Inference (Real-Time) | Model inference on the ml.g5.8xlarge instance type, real-time mode | $0.00 |
| ml.g6.4xlarge Inference (Real-Time) | Model inference on the ml.g6.4xlarge instance type, real-time mode | $0.00 |
| ml.g5.2xlarge Inference (Real-Time) | Model inference on the ml.g5.2xlarge instance type, real-time mode | $0.00 |
| ml.g5.4xlarge Inference (Real-Time) | Model inference on the ml.g5.4xlarge instance type, real-time mode | $0.00 |
| ml.g6.8xlarge Inference (Real-Time) | Model inference on the ml.g6.8xlarge instance type, real-time mode | $0.00 |
Vendor refund policy
This product is offered for free. If you have any questions, please contact us for clarification.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
This version is configured for Inferentia2 instances: inf2.xlarge, inf2.8xlarge, inf2.24xlarge, inf2.48xlarge. As the model has been compiled with a tensor parallelism level of 2, the number of model copies running on the instance is respectively 1, 1, 6, and 12. Context size is set to 8K, and batch size to 4.
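The per-instance copy counts in the release notes follow directly from the number of NeuronCores on each instance divided by the tensor parallelism degree. The sketch below illustrates this arithmetic; the NeuronCore counts are assumptions based on public AWS Inferentia2 specifications (two NeuronCores per Inferentia2 chip), not values stated in this listing.

```python
# Sketch: deriving model-copy counts from NeuronCores and tensor parallelism.
# Core counts per instance type are assumed from public AWS inf2 specs.
NEURON_CORES = {
    "inf2.xlarge": 2,     # 1 Inferentia2 chip
    "inf2.8xlarge": 2,    # 1 chip
    "inf2.24xlarge": 12,  # 6 chips
    "inf2.48xlarge": 24,  # 12 chips
}

TENSOR_PARALLELISM = 2  # the model package is compiled with TP=2

def model_copies(instance_type: str) -> int:
    """Each model copy occupies TENSOR_PARALLELISM NeuronCores."""
    return NEURON_CORES[instance_type] // TENSOR_PARALLELISM

for itype in NEURON_CORES:
    print(f"{itype}: {model_copies(itype)} copies")
```

With TP=2 this yields 1, 1, 6, and 12 copies respectively, matching the release notes.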
Additional details
Inputs
- Summary
You can invoke the model using the OpenAI Messages API. Please see the sample notebook for details.
- Input MIME type: application/json, application/jsonlines
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| OpenAI Messages API | Please see sample notebook. | Type: FreeText | Yes |
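As an illustration of the input format, here is a minimal sketch of a request body in the OpenAI Messages (chat) style. The field names follow the OpenAI chat API convention; the exact parameters the endpoint supports are an assumption, so treat the vendor's sample notebook as the authoritative reference.

```python
import json

# Hypothetical request body in OpenAI Messages format; parameter names
# (max_tokens, temperature) are assumptions based on the OpenAI chat API.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize model distillation in one sentence."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

# Serialize to the application/json body expected by the endpoint.
body = json.dumps(payload)
print(body)
```

In practice this body would be sent to the deployed SageMaker endpoint (for example via the boto3 SageMaker runtime's `invoke_endpoint` call with `ContentType="application/json"`).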
Resources
Vendor resources
Support
Vendor support
IMPORTANT INFORMATION: Once you have subscribed to the model, we strongly recommend that you deploy it with our sample notebook at https://github.com/arcee-ai/aws-samples/blob/main/model_package_notebooks/sample-notebook-llama-supernova-lite-on-sagemaker.ipynb . This is the best way to guarantee proper configuration.
Contact: julien@arcee.ai
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.