
    The Inference Server - Llama.cpp - CUDA - NVIDIA Container - Ubuntu 22

    Deployed on AWS
    Run AI Inference on your own server for coding support, creative writing, summarizing, ... without sharing data with other services. The Inference server has all you need to run state-of-the-art inference on GPU servers. Includes llama.cpp inference, latest CUDA and NVIDIA Docker container support. Support for llama-cpp-python, Open Interpreter, Tabby coding assistant.

    Overview


    The Inference server offers the full infrastructure to run fast inference on GPUs.

    It includes llama.cpp inference, the latest CUDA, and the NVIDIA Container Toolkit for Docker.

    Leverage the many freely available models and run inference with 8-bit or lower quantized models, which makes inference possible on GPUs with e.g. 16 GB or 24 GB of memory. For example, a 13B-parameter model quantized to about 5 bits per weight needs roughly 13 · 5/8 ≈ 8 GB for the weights, plus context buffers, and so fits on a 16 GB GPU.

    Llama.cpp offers efficient inference of quantized models in interactive and server mode. It features:

    • Plain C/C++ implementation without dependencies
    • 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
    • Running inference on the GPU and CPU simultaneously, allowing larger models to run when GPU memory alone is insufficient (see the sketch after this list)
    • AVX, AVX2 and AVX512 support for x86 architectures
    • Supported models: LLaMA, LLaMA 2, Falcon, Alpaca, GPT4All, Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2, Vigogne (French), Vicuna, Koala, OpenBuddy (Multilingual), Pygmalion 7B / Metharme 7B, WizardLM, Baichuan-7B and its derivations (such as baichuan-7b-sft), Aquila-7B / AquilaChat-7B, Starcoder models, Mistral AI v0.1, Refact
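
    For example, a model that does not fit entirely into GPU memory can still be run by offloading only part of its layers to the GPU. A minimal sketch, assuming an illustrative model file and layer count (neither is part of the AMI):

    # Sketch: partial GPU offload when the model is too large for GPU memory alone
    cd ~/inference/llama.cpp-g5          # use ~/inference/llama.cpp-g4dn on g4dn instances
    ./main -m models/my-model.Q5_K_M.gguf -p 'Hello' -n 128 -ngl 20
    # -ngl 20 moves 20 transformer layers to the GPU; the remaining layers run on the CPU.
    # Lower -ngl if you run out of GPU memory, raise it for more speed.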

    Here is our guide How to use the AI SP Inference Server 

    In addition, the Inference Server supports:

    • llama-cpp-python: an OpenAI API compatible Llama.cpp inference server (see the sketch after this list)
    • Open Interpreter: lets language models run code on your computer; an open-source, locally running implementation of OpenAI's Code Interpreter
    • Tabby coding assistant: a self-hosted AI coding assistant, offering an open-source alternative to GitHub Copilot
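
    As a minimal sketch of the OpenAI-compatible server (the model file, layer count, and request below are illustrative assumptions, not defaults of this AMI), llama-cpp-python can be started and queried like this:

    # Start an OpenAI-API-compatible server with llama-cpp-python (model path illustrative)
    python3 -m llama_cpp.server --model models/xwin-lm-13b-v0.1.Q5_K_M.gguf --n_gpu_layers 40
    # Query it from another shell; llama-cpp-python listens on http://localhost:8000 by default
    curl http://localhost:8000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "Write a haiku about GPUs."}]}'

    Any client that speaks the OpenAI API (IDE plugins, Open Interpreter, ...) can then be pointed at this endpoint instead of a public service.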

    Includes remote desktop access via NICE DCV high-end remote desktops or via SSH (PuTTY, ...).

    Highlights

    • Ready to run Inference. Everything pre-installed. Download a model for coding, text generation, chat, ... and start creating output
    • Different options to run Inference servers for text generation, coding integration for IDE support, summarizing, sentiment analysis, ...
    • You own the data and inference. No data is shared with any public service for AI inference.

    Details

    Delivery method

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system
    Ubuntu 22

    Deployed on AWS



    Pricing

    The Inference Server - Llama.cpp - CUDA - NVIDIA Container - Ubuntu 22

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator  to estimate your infrastructure costs.

    Usage costs (13)

    Dimension                    Cost/hour
    g5.xlarge (Recommended)      $0.10
    g5.2xlarge                   $0.13
    g5.4xlarge                   $0.18
    g5.8xlarge                   $0.24
    g5.12xlarge                  $0.48
    g5.16xlarge                  $0.56
    g4dn.4xlarge                 $0.12
    g4dn.8xlarge                 $0.16
    g4dn.12xlarge                $0.32
    g4dn.metal                   $0.48

    Vendor refund policy

    No refunds. The instance is billed by the hour of actual use; terminate it at any time and product charges stop.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    Includes Llama.cpp as of Dec 21, 2024. Security fixes.

    Additional details

    Usage instructions

    Make sure the instance security groups allow inbound traffic on TCP and UDP ports 8443 and 22.
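
    If you prefer the AWS CLI over the console, the ingress rules can be added roughly as follows; the security group ID and the CIDR range are placeholders you must replace with your own values:

    # Sketch: open ports 8443 (TCP/UDP, NICE DCV) and 22 (TCP, SSH) - placeholder values
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8443 --cidr 203.0.113.0/24
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 8443 --cidr 203.0.113.0/24
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24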

    To connect to your Inference Server you have different options:

    Option 1: Connect with the native NICE DCV Client for best performance

    1. Download the NICE DCV client from: https://download.nice-dcv.com/  (includes Windows portable client)
    2. In the DCV client connection field enter the instance public IP to connect.
    3. Sign in using the following credentials: User: ubuntu. Password: last 6 digits of the instance ID.

    Option 2: Connect with NICE DCV Web Client for convenience

    1. Connect with the following URL: https://IP_OR_FQDN:8443/, e.g. https://3.70.184.235:8443/ 
    2. Sign in using the following credentials: User: ubuntu. Password: last 6 digits of the instance ID.

    Option 3: Set your own password and connect

    1. Connect to your remote machine with ssh -i <your-pem-key> ubuntu@<public-dns>
    2. Set the password for the user "ubuntu" with sudo passwd ubuntu. This is the password you will use to log in to DCV
    3. Connect to your remote machine with the NICE DCV native client or web client as described above
    4. Enter your credentials and you are ready to rock (the full sequence is sketched below)
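
    Put together, the sequence above looks roughly like this (key file and host are placeholders, as in the steps above):

    # Sketch of Option 3 - replace the key file and host with your own values
    ssh -i <your-pem-key> ubuntu@<public-dns>
    sudo passwd ubuntu        # set the password used for the DCV login
    exit
    # then connect with the NICE DCV native client or https://<public-ip>:8443/
    # and sign in as "ubuntu" with the password you just set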

    Please do not update to a new kernel or a higher OS release, as this might disable the GPU driver.

    Here is our guide How to use the AI SP Inference Server 

    Quick start

    How to run neural network inference with llama.cpp for quantized models - example with Xwin-LM-13B:

    # depending on the instance type g4dn or g5 please use one of the 'cd' below
    cd ~/inference/llama.cpp-g4dn
    cd ~/inference/llama.cpp-g5

    # now download the model - example is Xwin-LM-13B with 5bit quantization
    cd models
    wget https://huggingface.co/TheBloke/Xwin-LM-13B-V0.1-GGUF/resolve/main/xwin-lm-13b-v0.1.Q5_K_M.gguf
    cd ..

    # start inference
    ./main -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -p 'Building a website can be done in 10 simple steps:\nStep 1:' -n 600 -e -c 2700 --color --temp 0.1 --log-disable -ngl 52   # move 52 layers into the GPU

    # or you can put your prompt into the file "prompt.txt" and run
    bash run.sh

    # please note that llama.cpp also supports a chat mode by adding the option '-i':
    ./main -i -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -p 'Building a website can be done in 10 simple steps:\nStep 1:' -n 600 -e -c 2700 --color --temp 0.1 --log-disable -ngl 52   # move 52 layers into the GPU
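
    Besides the interactive mode, the bundled llama.cpp also includes an HTTP server mode. A minimal sketch, assuming the same model and an illustrative port (the binary name may differ between llama.cpp versions):

    # start llama.cpp's built-in HTTP server (sketch)
    ./server -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -c 2700 -ngl 52 --host 0.0.0.0 --port 8080
    # request a completion from another shell
    curl --request POST --url http://localhost:8080/completion --header "Content-Type: application/json" --data '{"prompt": "Building a website can be done in 10 simple steps:\nStep 1:", "n_predict": 128}'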

    Have fun inferring!

    (At the moment the AMI supports g4dn and g5 instances - you can clone and recompile llama.cpp for other instance types such as p3.)

    Support

    Vendor support

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Accolades

    • Top 10 in Summarization-Text, Generation-Text
    • Top 25 in Generation-Text, Summarization-Text, Natural Language Processing
    • Top 10 in Text/Captioning, Generative AI, ML Solutions

    Overview

    AI generated from product descriptions:

    • Model Quantization Support: Supports multi-bit integer quantization from 2-bit to 8-bit, enabling efficient model inference on limited GPU memory
    • GPU and CPU Inference: Capable of running inference simultaneously on GPU and CPU, allowing processing of larger models across different hardware resources
    • Model Compatibility: Supports diverse language models including LLaMA, LLaMA 2, Falcon, Alpaca, GPT4All, Vicuna, Mistral AI, and multiple multilingual models
    • Inference Framework: Utilizes llama.cpp with plain C/C++ implementation, offering efficient and lightweight model inference without complex dependencies
    • Architecture Optimization: Provides support for x86 architecture extensions including AVX, AVX2, and AVX512 for enhanced computational performance
    • Large Language Model Architecture: "Transformer-based architecture with 13 billion parameters optimized through supervised fine-tuning and reinforcement learning with human feedback"
    • Text Generation Capability: "Advanced generative text model designed for precise and nuanced textual output processing"
    • Model Training Methodology: "Utilizes extensive pretrained dataset with supervised fine-tuning and reinforcement learning techniques"
    • API Compatibility: "Fully integrated with OpenAI API ecosystem for seamless application interactions"
    • Model Specialization: "Specifically optimized for text input and output operations with enhanced dialogue generation capabilities"
    • Local Language Model Execution: Enables direct execution of large language models on local machines without cloud dependencies
    • Model Customization Framework: Provides capabilities to modify, create, and tailor AI models for specialized applications
    • API Integration Mechanism: Supports OpenAI-compatible API integration with multiple external language model platforms
    • Web Search Retrieval Augmentation: Implements web search capabilities for Retrieval Augmented Generation (RAG) using multiple search providers
    • Multi-Modal Content Generation: Integrates image generation capabilities through local and external APIs to enhance conversational experiences

    Contract

    Standard contract

    Customer reviews

    Ratings and reviews

    No customer reviews yet