Overview
Multi-LLM and Image Generation AI Server with Open WebUI & Ollama
Deploy a Private, Powerful, Self-Hosted AI Platform on AWS that lets YOU wear the pants.
Transform your AI workflows with our pre-configured Multi-LLM Server, combining the flexibility of Ollama (with DeepSeek and other models pre-installed) and the intuitive Open WebUI interface. This AWS Marketplace offering provides a seamless, scalable solution for businesses and developers to run, customize, download, and manage multiple open-source language models (LLMs) and image generation models in a secure, private cloud environment.
Not ready to buy? Demo for free at: https://demo.khakicloud.com
Key Features & Benefits
1. Effortless Deployment & Management
- 1-Click AWS Deployment: Launch a fully configured AI server with Open WebUI and Ollama in minutes, eliminating complex setup hassles.
- Pre-Installed Models: Includes DeepSeek R1, Stable Diffusion, Llama, Gemma, Mistral, and Phi, and supports additional Ollama-compatible LLMs, enabling immediate experimentation and production use.
2. Open WebUI: A Feature-Rich Interface
- User-Friendly Chat Experience: Inspired by ChatGPT's UI, Open WebUI offers a responsive design for desktop and mobile, with Markdown and LaTeX support for technical users.
- Retrieval-Augmented Generation (RAG): Integrate documents (PDFs, Word, Excel) into chats using # commands, enabling context-aware AI responses.
- Multi-Model Conversations: Run multiple LLMs concurrently (e.g., DeepSeek for coding, Mistral for creative tasks) and compare outputs in a single interface.
- Granular Access Control: Role-based access control (RBAC) ensures secure collaboration, with admin controls for model deployment and user management.
3. Enterprise-Grade Customization
- Local & Remote RAG Pipelines: Enhance LLM accuracy by connecting to internal knowledge bases or web searches.
- Plugin Framework: Extend functionality with Python-based pipelines for toxic-content filtering, rate limiting, or custom API integrations.
- Self-Hosted Privacy: Keep data on your own AWS instances, avoiding the privacy risks of third-party LLM providers.
4. Cost-Effective & Flexible Pricing
- Consolidated AWS Billing: Simplify budgeting with hourly/monthly pricing tied to your AWS account.
- Configure EC2 instance scheduling for cost savings: https://docs.aws.amazon.com/solutions/latest/instance-scheduler-on-aws/solution-overview.html
5. Use Cases
- Developers: Rapidly prototype AI applications with Ollama's local models and Open WebUI's API integrations.
- Businesses: Deploy secure, internal AI chatbots with document retrieval for HR, IT, or customer support.
- Researchers: Compare LLM performance or fine-tune models using Open WebUI's model builder and RAG tools.
Why Choose This Solution?
- Open-Source Advantage: Avoid vendor lock-in with Open WebUI's modular design and Ollama's model ecosystem.
- AWS Optimized: Pre-validated Amazon Linux AMI ensures compatibility with EC2, ECS, and other AWS services.
Technical Specifications
- Supported Models: DeepSeek R1 (default), Stable Diffusion (for image generation), Llama, Gemma, Mistral, and Phi. Download and run other Ollama-compatible LLMs.
- Integration: OpenAI-compatible API endpoints for third-party tooling.
- Security: End-to-end encryption, RBAC, and AWS VPC isolation.
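Because the endpoints are OpenAI-compatible, existing tooling can point at the server with only a base-URL change. A minimal stdlib-only sketch, assuming Ollama's default port 11434 and a pre-pulled `deepseek-r1` model tag (the hostname is a placeholder; Open WebUI also proxies an authenticated chat-completions API on port 3000):

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the given server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example against a running instance (<instance-url> is a placeholder):
# req = build_chat_request("http://<instance-url>:11434", "deepseek-r1", "Hello")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works with OpenAI client libraries by setting their base URL to the instance, which is what "third-party tooling" integration relies on.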
Get Started Today!
Ideal for DevOps teams, AI researchers, and businesses seeking a private, customizable AI platform, this AWS Marketplace listing delivers the power of open-source LLMs with enterprise-grade manageability. Deploy now and unlock the future of self-hosted AI.
Highlights
- Get support from the Khaki Support bot pre-configured to perform RAG on Open WebUI docs. (See screenshot)
- DeepSeek R1, Stable Diffusion, Llama, Gemma, Phi, and Mistral models pre-installed. Download and run your own models via Open WebUI: https://docs.openwebui.com/getting-started/quick-start/starting-with-ollama#a-quick-and-efficient-way-to-download-models
- SSL, WebSockets, and significantly faster model response times, with models served from the high-performance NVMe drive included with G instance types (configured by default).
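Beyond the Open WebUI model-download page linked above, installed models can also be listed programmatically through Ollama's REST API. A small sketch, assuming the default Ollama port 11434 and its `/api/tags` endpoint (the hostname is a placeholder):

```python
import json
import urllib.request


def list_model_names(tags_response: dict) -> list:
    """Extract model names from an Ollama /api/tags JSON response."""
    return [m["name"] for m in tags_response.get("models", [])]


# Example against a running instance (<instance-url> is a placeholder):
# with urllib.request.urlopen("http://<instance-url>:11434/api/tags") as resp:
#     print(list_model_names(json.load(resp)))
```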
Details

Pricing
Free trial
| Dimension | Cost/hour |
|---|---|
| g6.xlarge (Recommended) | $1.00 |
| g4dn.xlarge | $1.00 |
| g6e.xlarge | $1.00 |
| g6e.24xlarge | $2.00 |
| g5.xlarge | $1.00 |
| g6.24xlarge | $2.00 |
| g5.24xlarge | $2.00 |
Vendor refund policy
Email support@khakicloud.com for refund inquiries.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
- Updated Open WebUI
- Updated models
Additional details
Usage instructions
Just browse to port 3000 on the EC2 instance's public URL (http://<instance url>:3000) and get started! For more information, visit https://www.khakicloud.com/usage.html
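A freshly launched instance can take a little while before port 3000 accepts connections. A small readiness-check sketch (the host value is a placeholder for your instance's public URL):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 120.0) -> bool:
    """Poll until a TCP port accepts connections, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(2)  # not listening yet; retry shortly
    return False


# Example: wait_for_port("<instance url>", 3000) before opening the browser.
```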
Resources
Vendor resources
Support
Vendor support
For support, email support@khakicloud.com
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.