Twin Talk Knowledge Builder Overview
Industrial enterprises often struggle to fully leverage the benefits of Generative AI in plant operations due to security and intellectual property concerns. The Twin Talk Knowledge Builder is a comprehensive GenAI solution that combines industry-leading language models with domain-specific training on your enterprise data. It is deployable in both offline and customer-managed cloud environments, enabling secure, AI-driven transformation of traditional operations and unlocking new use cases across the enterprise.

The Twin Talk GenAI Knowledge Builder delivers real-time, accurate insights from operational data through custom AI agents powered by advanced GenAI models. These agents integrate seamlessly with live operational data feeds to provide instant answers to complex industrial questions. Built on a secure, enterprise-specific foundation model, Twin Talk ensures high accuracy and robust data protection without the vulnerabilities often associated with SaaS-based GenAI platforms.
Highlights
- Real-Time Data Integration - Instantly feeds real-time operational data into Generative AI models, providing up-to-the-minute insights for decision-making. Custom agents automatically pull the latest data from relevant systems, ensuring accuracy in AI-generated responses.
- Secure, Non-SaaS Environment - Hosted in-house, ensuring that data remains secure within the organization's network, eliminating the security risks associated with SaaS AI models. Custom domain-trained LLMs (Large Language Models) ensure your AI is purpose-built for your industry, reducing irrelevant or inaccurate results.
- Domain-Specific Foundation Models - Built on a foundation model pre-trained with the enterprise's specific domain knowledge, ensuring accurate and relevant AI outputs tailored to the unique operational challenges of each business.
Details
Unlock automation with AI agent solutions

Pricing
| Dimension | Cost/hour |
|---|---|
| g4dn.xlarge (Recommended) | $0.0001 |
Vendor refund policy
Refunds are available within 48 hours of purchase.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
Support for Amazon Bedrock
Additional details
Usage instructions
Welcome to Twin Talk Knowledge Builder!
Twin Talk Knowledge Builder (TTKB) is a self-contained Retrieval Augmented Generation (RAG) system built using FastAPI, Qdrant vector database, and local LLMs served via Ollama. This AMI comes pre-configured with all required components for secure, offline document ingestion, embedding, querying, and user access control.
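The components listed above communicate over local HTTP. As a rough sketch of how a client might talk to the bundled Ollama server (assuming Ollama's default port 11434; the model name here is a placeholder, not necessarily one shipped with the AMI):

```python
import json
import urllib.request

# Ollama's default non-streaming generation endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Assemble a non-streaming generation request body for Ollama."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the request to a locally running Ollama server.

    Only works on the instance itself (or over a tunnel), since Ollama
    is not exposed through the security group in the steps below.
    """
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This is illustrative only; the application's own API on port 8000 is the supported interface.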
Step 1. Launch the AMI
- Subscribe to Twin Talk Knowledge Builder on AWS Marketplace.
- Choose an instance type: Recommended (GPU): g4dn.xlarge or higher
- Configure the security group to allow the following ports:
  - Port 8000 - Web and API access
  - Port 6444 - Optional internal service communication
- Launch the instance into your preferred VPC and subnet.
- Assign an SSH key pair (optional for future debugging, not required for normal usage).
Step 2: Access the Web Interface
Once the instance is in the "running" state, open your browser and access the application via: http://<EC2-Public-IP>:8000/
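The instance can take a few minutes after entering the "running" state before the application answers. A small helper like the following (a sketch, not part of the product) builds the URL and polls until the web interface responds:

```python
import time
import urllib.error
import urllib.request

def app_url(public_ip: str, port: int = 8000) -> str:
    """Build the web-interface URL from the instance's public IP."""
    return f"http://{public_ip}:{port}/"

def wait_until_up(url: str, timeout_s: int = 300) -> bool:
    """Poll the URL every few seconds until it answers or the timeout elapses."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True
        except (urllib.error.URLError, OSError):
            time.sleep(5)
    return False
```

For example, `wait_until_up(app_url("203.0.113.10"))` returns once the app on that (example) IP starts serving.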
Step 3: Default Login Credentials
Log in using the preconfigured administrator credentials. You will be prompted to update your password on login.
- Login: admin@eot.ai
- Password: <Instance ID>
Step 4: Application Functionalities
Chat Playground: This feature allows users to engage in general conversation with the local LLM. Responses are generated purely through language modeling without vector-based retrieval.
Query Knowledge: This functionality enables users to ask questions related to uploaded documents. The system performs similarity search using Qdrant and generates answers using the LLM, enabling Retrieval Augmented Generation (RAG).
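The retrieval step behind Query Knowledge can be sketched in miniature: embed the query, rank stored passages by vector similarity, and hand the best matches to the LLM as context. The toy 3-d "embeddings" and sample texts below are invented for illustration; the real system uses Qdrant and full-size embedding models.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the k passages whose embeddings are most similar to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item["vector"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

# Toy corpus standing in for embedded document chunks.
corpus = [
    {"text": "pump maintenance schedule", "vector": [0.9, 0.1, 0.0]},
    {"text": "boiler safety limits", "vector": [0.0, 0.8, 0.2]},
    {"text": "pump impeller specs", "vector": [0.8, 0.2, 0.1]},
]

context = retrieve([1.0, 0.0, 0.0], corpus, k=2)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The assembled `prompt` is what a RAG system would pass to the LLM so answers stay grounded in the uploaded documents.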
Knowledge: This section is used to manage vector collections. It allows users to:
1. Create and manage custom vector data collections
2. Upload and embed PDF files
3. Store and retrieve embeddings using Qdrant
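Before a PDF's text can be embedded into a collection, it is typically split into overlapping chunks so each embedding covers a manageable passage. A minimal word-window chunker (a hypothetical sketch; TTKB's actual chunking parameters are not documented here) looks like:

```python
def chunk_text(text: str, max_words: int = 100, overlap: int = 20) -> list[str]:
    """Split extracted document text into overlapping word-window chunks.

    Overlap keeps sentences that straddle a boundary visible to both
    neighboring chunks, which helps retrieval quality.
    """
    words = text.split()
    if not words:
        return []
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```

Each resulting chunk would then be embedded and upserted into a Qdrant collection alongside its source metadata.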
Users: This feature allows administrative users to manage platform access:
1. Create new users
2. Assign user roles as either Creator or User
Role definitions:
1. Creator: Has full access to all functionalities, including document uploads, querying, and user management.
2. User: Has limited, scoped access based on login session; cannot manage collections or other users.
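The two roles above map naturally onto a permission table. The action names below are hypothetical labels chosen for illustration, not TTKB's internal identifiers:

```python
# Hypothetical permission map reflecting the two documented roles.
PERMISSIONS = {
    "creator": {"upload_documents", "query", "manage_collections", "manage_users"},
    "user": {"query"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role.lower(), set())
```

For example, `can("User", "manage_collections")` is False, matching the restriction that Users cannot manage collections or other users.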
Support
Vendor support
Standard Service Level Support: L3. SLA goals: 12-hour time to first response and 24-hour time to resolution, within business hours (9 am-5 pm PST). EOT Help Center at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.