
    AI inference stress server

    Sold by: Baideac
    Deployed on AWS
    This product stress-tests your inference server so you can evaluate your application at scale.

    Overview

    This product stress-tests an inference server with concurrent queries over custom large datasets and analyses server resource utilization (e.g., GPU utilization, GPU memory, CPU utilization, and CPU memory) across one or more GPUs. The monthly charge covers support and customization on the go.

    Highlights

    • Determines and analyses performance against large datasets
    • Accepts any JSON-based data URL; the server ingests the data, and you can then chat about anything using that data
    • Priority support provided by email, plus customization on the go

    Details

    Delivery method

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system
    Ubuntu 22.04

    Deployed on AWS

    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    AI inference stress server

    Pricing is based on a fixed subscription cost and actual usage of the product. You pay the same amount each billing period for access, plus an additional amount according to how much you consume. The fixed subscription cost is prorated, so you're only charged for the number of days you've been subscribed. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.
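    As a rough illustration (not an official billing formula), the proration described above can be sketched as follows, assuming a 30-day billing month and the listing's prices ($850.00/month fixed, $0.55/hour usage):

```shell
# Hedged sketch, not AWS's billing engine: prorate the $850/month fixed
# fee by days subscribed, then add metered usage at $0.55/hour.
days=15      # days subscribed in this billing period
hours=100    # metered instance hours (e.g. on g5.xlarge)

awk -v d="$days" -v h="$hours" \
    'BEGIN { printf "%.2f\n", 850.00 * d / 30 + 0.55 * h }'
```

    With 15 of 30 days subscribed and 100 usage hours, this prints 480.00 (425.00 prorated fixed plus 55.00 usage).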

    Fixed subscription cost

    $850.00/month

    Usage costs (15)

    Dimension                  Cost/hour
    g5.xlarge (Recommended)    $0.55
    g5.2xlarge                 $0.55
    g5.4xlarge                 $0.55
    g4dn.metal                 $0.55
    g4dn.16xlarge              $0.55
    g4dn.2xlarge               $0.55
    g4dn.12xlarge              $0.55
    g5.8xlarge                 $0.55
    g4dn.xlarge                $0.55
    g5.24xlarge                $0.55

    Vendor refund policy

    No refund policy

    How can we make this page better?

    We'd like to hear your feedback and ideas on how to improve this page.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information

    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    • Openstack plugin support
    • Llama-bench support for token based benchmarking
    • Minor Bug Fixes

    Additional details

    Usage instructions

    All services come up automatically once the instance boots.

    We recommend manually configuring your Security Group/firewall settings to control access to your instance. The 1-Click Security Group opens only ports 22 and 80, so you can access the instance via SSH with the username 'ubuntu'. If you chose the 1-Click Security Group, you can change it later through the AWS Console or API to enable other applications.

    To connect to this instance via SSH, generate a new key pair or use an existing key pair (.pem) when launching the EC2 instance. With this key pair, you can log in to the instance as follows: ssh -i <key-pair> ubuntu@<public_IP_of_the_instance>.

    Alternatively, you can log in to the instance from the AWS Console by clicking the "Connect Instance" option on the EC2 instances page. The username must be "ubuntu".
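    The SSH login described above can be sketched as a small shell snippet; the key file name and IP address below are placeholders, not values from this listing:

```shell
# Hedged sketch of the SSH login. Substitute the key pair chosen at
# launch and the instance's public IP from the EC2 console.
key="my-keypair.pem"          # placeholder key file name
ip="203.0.113.7"              # placeholder IP (RFC 5737 documentation range)

chmod 400 "$key" 2>/dev/null || true   # ssh refuses world-readable private keys
echo "ssh -i $key ubuntu@$ip"          # the command to run (printed here, not executed)
```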

    To check the logs:

    1. Connect to the instance via ssh or console.
    2. Run "sudo su" to become the superuser (no password required).
    3. Run "bv-ai-stress verbose". You should see logs like the sample below; more logs are generated once the inference server is in use.

    root@ip-<>:~$ bv-ai-stress verbose
    Streaming logs for process: bv_inference_stress (ID: 0)
    [bv_inference_stress] 2024-09-06 09:07:12 - INFO - 127.0.0.1:35034 - GET / HTTP/1.1 - 200 OK

    [bv_inference_stress] 2024-09-06 09:07:14 - INFO - 127.0.0.1:35036 - GET /models/ HTTP/1.1 - 200 OK

    [bv_inference_stress] 2024-09-06 09:07:14 - INFO - 127.0.0.1:37152 - GET /favicon.ico HTTP/1.1 - 404 Not Found

    [bv_inference_stress] 2024-09-06 09:07:17 - INFO - 127.0.0.1:37166 - POST /queue-count/ HTTP/1.1 - 200 OK

    Note: Right after launching the instance, it may take a few minutes for logs to appear. If you do not see any logs, try again after some time. There are two options for accessing the platform:

    1. Via the UI, at http://<instance-public-ip>.
    2. Via the CLI; see the help/docs section with the command "bv-ai-stress -h".
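    For reference, the access options can be sketched as below; the IP is a placeholder, and the /models/ route is taken from the sample logs earlier (other routes are not documented here):

```shell
# Hedged sketch of the access options above (commands are printed, not
# executed, so no live instance is required).
ip="203.0.113.7"                       # placeholder public IP
echo "UI : http://$ip"                 # open in a browser
echo "CLI: bv-ai-stress -h"            # run on the instance for help/docs
echo "API: curl -s http://$ip/models/" # route seen in the sample logs
```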

    For any issues or queries, contact our support team at aws-mp-support@bhojr.com. For more information on the product, see: https://www.bhojr.com/prod/ai-inference-stresser.html

    Resources

    Vendor resources

    Support

    Vendor support

    Support is available by email at support@baideac.com.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Accolades

    • Top 100 in Generative AI
    • Top 10 in Summarization-Text, Generation-Text
    • Top 10 in Serverless Workloads

    Overview

    AI generated from product descriptions
    • Inference Load Testing: Capability to stress test inference servers with concurrent queries and custom large datasets
    • Resource Monitoring: Comprehensive monitoring of server resources including GPU utilization, GPU memory, CPU utilization, and CPU memory
    • Data Ingestion: Supports JSON-based data input from various sources for analysis and interaction
    • Concurrent Query Handling: Ability to process multiple queries simultaneously on inference servers
    • Multi-GPU Support: Enables performance analysis across multiple GPU configurations
    • Model Quantization Support: Supports multi-bit integer quantization from 2-bit to 8-bit, enabling efficient model inference on limited GPU memory
    • GPU and CPU Inference: Capable of running inference simultaneously on GPU and CPU, allowing processing of larger models across different hardware resources
    • Model Compatibility: Supports diverse language models including LLaMA, LLaMA 2, Falcon, Alpaca, GPT4All, Vicuna, Mistral AI, and multiple multilingual models
    • Inference Framework: Utilizes llama.cpp with plain C/C++ implementation, offering efficient and lightweight model inference without complex dependencies
    • Architecture Optimization: Provides support for x86 architecture extensions including AVX, AVX2, and AVX512 for enhanced computational performance
    • Serverless Compute: Provides serverless compute infrastructure specifically designed for AI, ML, and data processing workloads
    • GPU Container Deployment: Enables rapid GPU-enabled container deployment with startup times as low as one second
    • Infrastructure as Code: Supports deploying Python functions to cloud environments with custom container image and hardware specification definitions
    • Dynamic Resource Scaling: Automatically scales computational resources up to hundreds of GPUs and down to zero based on workload requirements
    • Cloud Workload Optimization: Supports complex computational tasks including ML inference, fine-tuning, and batch data processing

    Contract

    Standard contract

    Customer reviews

    Ratings and reviews

    0 ratings
    0 AWS reviews
    No customer reviews yet
    Be the first to review this product. We've partnered with PeerSpot to gather customer feedback. You can share your experience by writing or recording a review, or by scheduling a call with a PeerSpot analyst.