
    EOT Agentic AI Knowledge Builder for Asset-Intensive Industries

    The Twin Talk Knowledge Builder empowers industrial organizations to quickly create and manage custom AI agents tailored to their operational needs, enabling real-time decision-making based on the latest available data and intelligence. Hosted entirely on-premises or in the customer's AWS cloud environment, the Twin Talk Knowledge Builder eliminates common risks such as AI poisoning, ensuring the integrity of sensitive industrial data.

    Overview


    Industrial enterprises often struggle to fully leverage the benefits of Generative AI in plant operations due to security and intellectual property concerns.

    The Twin Talk Knowledge Builder is a comprehensive GenAI solution that combines industry-leading language models with domain-specific training on your enterprise data. It is deployable in both offline and customer-managed cloud environments, enabling secure, AI-driven transformation of traditional operations and unlocking new use cases across the enterprise.

    The Twin Talk GenAI Knowledge Builder delivers real-time, accurate insights from operational data through custom AI agents powered by advanced GenAI models. These agents integrate seamlessly with live operational data feeds to provide instant answers to complex industrial questions. Built on a secure, enterprise-specific foundation model, Twin Talk ensures high accuracy and robust data protection without the vulnerabilities often associated with SaaS-based GenAI platforms.

    Highlights

    • Real-Time Data Integration - Instantly feeds real-time operational data into Generative AI models, providing up-to-the-minute insights for decision-making. Custom agents automatically pull the latest data from relevant systems, ensuring accuracy in AI-generated responses.
    • Secure, Non-SaaS Environment - Hosted in-house, ensuring that data remains secure within the organization's network, eliminating the security risks associated with SaaS AI models. Custom domain-trained LLMs (Large Language Models) ensure your AI is purpose-built for your industry, reducing irrelevant or inaccurate results.
    • Domain-Specific Foundation Models - Built on a foundation model pre-trained with the enterprise's specific domain knowledge, ensuring accurate and relevant AI outputs tailored to the unique operational challenges of each business.

    Details

    Delivery method: 64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system: Ubuntu 2023

    Deployed on AWS


    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled at any time. Additional AWS infrastructure costs may apply; use the AWS Pricing Calculator to estimate your infrastructure costs.

    Usage costs

    Dimension                      Cost/hour
    g4dn.xlarge (Recommended)      $0.0001

    Vendor refund policy

    Refunds are available within 48 hours of purchase.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    Support for Amazon Bedrock

    Additional details

    Usage instructions

    Welcome to Twin Talk Knowledge Builder!

    Twin Talk Knowledge Builder (TTKB) is a self-contained Retrieval Augmented Generation (RAG) system built using FastAPI, Qdrant vector database, and local LLMs served via Ollama. This AMI comes pre-configured with all required components for secure, offline document ingestion, embedding, querying, and user access control.

    Step 1: Launch the AMI

    1. Subscribe to Twin Talk Knowledge Builder on AWS Marketplace.
    2. Choose an instance type. Recommended (GPU): g4dn.xlarge or higher.
    3. Configure the security group to allow the following ports: Port 8000 (web and API access) and Port 6444 (optional internal service communication). A scripted example of this setup follows the list.
    4. Launch the instance into your preferred VPC and subnet.
    5. Assign an SSH key pair (optional for future debugging, not required for normal usage).
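    The same setup can be scripted. Below is a minimal boto3 sketch of the steps above, assuming the us-east-1 region; the AMI ID, VPC ID, subnet ID, key pair name, and CIDR ranges are placeholders to replace with values from your Marketplace subscription and your account.

    # Sketch: open ports 8000 and 6444, then launch the Marketplace AMI.
    # All IDs and CIDR ranges below are placeholders, not real values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Security group for web/API access (8000) and optional internal traffic (6444)
    sg = ec2.create_security_group(
        GroupName="ttkb-sg",
        Description="Twin Talk Knowledge Builder access",
        VpcId="vpc-0123456789abcdef0",  # placeholder
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 8000, "ToPort": 8000,
             "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},  # restrict to your network
            {"IpProtocol": "tcp", "FromPort": 6444, "ToPort": 6444,
             "IpRanges": [{"CidrIp": "10.0.0.0/16"}]},     # internal traffic only
        ],
    )

    # Launch one GPU instance from the Twin Talk Knowledge Builder AMI
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder: your TTKB AMI ID
        InstanceType="g4dn.xlarge",
        SubnetId="subnet-0123456789abcdef0",  # placeholder
        SecurityGroupIds=[sg["GroupId"]],
        KeyName="my-keypair",                 # optional, for debugging only
        MinCount=1,
        MaxCount=1,
    )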

    Step 2: Access the Web Interface

    Once the instance is in the "running" state, open your browser and access the application via: http://<EC2-Public-IP>:8000/
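    The application can take a minute or two to start after the instance reports "running". A small, optional Python check, assuming the requests package is installed and using a placeholder public IP:

    # Poll the Twin Talk Knowledge Builder web interface until it responds.
    import time
    import requests

    URL = "http://203.0.113.10:8000/"  # placeholder: replace with your EC2 public IP

    for _ in range(30):
        try:
            resp = requests.get(URL, timeout=5)
            print(f"Web interface is up (HTTP {resp.status_code})")
            break
        except requests.exceptions.RequestException:
            print("Not ready yet, retrying in 10 seconds...")
            time.sleep(10)
    else:
        print("Timed out waiting for the web interface.")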

    Step 3: Default Login Credentials

    Log in using the preconfigured administrator credentials. You will be prompted to update your password on login.
    Login: admin@eot.ai
    Password: <Instance ID>
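    The instance ID used as the initial password is shown in the EC2 console. If you are logged into the instance itself (for example over SSH), it can also be read from the instance metadata service; a minimal sketch using IMDSv2:

    # Read the instance ID from the EC2 instance metadata service (IMDSv2).
    # This only works when run on the instance itself; from a workstation,
    # use the EC2 console or the AWS CLI instead.
    import requests

    token = requests.put(
        "http://169.254.169.254/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        timeout=2,
    ).text

    instance_id = requests.get(
        "http://169.254.169.254/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
        timeout=2,
    ).text

    print("Initial password (instance ID):", instance_id)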

    Step 4: Application Functionalities

    Chat Playground: This feature allows users to engage in general conversation with the local LLM. Responses are generated purely through language modeling, without vector-based retrieval.

    Query Knowledge: This functionality enables users to ask questions related to uploaded documents. The system performs similarity search using Qdrant and generates answers using the LLM, enabling Retrieval Augmented Generation (RAG).
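    Conceptually this is the standard RAG flow: embed the question, retrieve similar chunks from Qdrant, and let the local LLM answer from that context. The sketch below illustrates the pattern with the open-source qdrant-client and ollama Python packages; the collection name, model names, and port are assumptions for illustration, not the product's internal API.

    # Illustrative RAG query: embed the question, retrieve context from Qdrant,
    # and answer with a local Ollama model. Names and ports are assumptions.
    import ollama
    from qdrant_client import QdrantClient

    qdrant = QdrantClient(host="localhost", port=6333)  # assumed Qdrant default port
    COLLECTION = "plant_manuals"                        # hypothetical collection

    question = "What is the shutdown procedure for compressor K-101?"

    # 1. Embed the question with a local embedding model
    q_vec = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]

    # 2. Similarity search against the vector collection
    hits = qdrant.search(collection_name=COLLECTION, query_vector=q_vec, limit=4)
    context = "\n\n".join(hit.payload.get("text", "") for hit in hits)

    # 3. Ask the LLM to answer using only the retrieved context
    answer = ollama.chat(
        model="llama3",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    print(answer["message"]["content"])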

    Knowledge: This section is used to manage vector collections. It allows users to:
    1. Create and manage custom vector data collections
    2. Upload and embed PDF files
    3. Store and retrieve embeddings using Qdrant
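    Ingestion follows the same building blocks: extract text from a PDF, embed each chunk, and upsert the vectors into a Qdrant collection. The sketch below again uses the open-source pypdf, ollama, and qdrant-client packages to illustrate the pattern; file, collection, and model names are assumptions rather than the product's own API.

    # Illustrative ingestion: chunk a PDF, embed each chunk, upsert into Qdrant.
    # File, collection, and model names are placeholders for the example.
    import ollama
    from pypdf import PdfReader
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    qdrant = QdrantClient(host="localhost", port=6333)
    COLLECTION = "plant_manuals"

    # Create the collection if needed (vector size matches the embedding model)
    if not qdrant.collection_exists(COLLECTION):
        qdrant.create_collection(
            collection_name=COLLECTION,
            vectors_config=VectorParams(size=768, distance=Distance.COSINE),
        )

    # Extract text and split it into simple fixed-size chunks
    text = "\n".join(page.extract_text() or "" for page in PdfReader("manual.pdf").pages)
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]

    # Embed each chunk and store it with its source text as payload
    points = []
    for idx, chunk in enumerate(chunks):
        vec = ollama.embeddings(model="nomic-embed-text", prompt=chunk)["embedding"]
        points.append(PointStruct(id=idx, vector=vec, payload={"text": chunk}))

    qdrant.upsert(collection_name=COLLECTION, points=points)
    print(f"Embedded and stored {len(points)} chunks.")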

    Users: This feature allows administrative users to manage platform access:
    1. Create new users
    2. Assign user roles as either Creator or User

    Role definitions:
    1. Creator: Has full access to all functionalities, including document uploads, querying, and user management.
    2. User: Has limited, scoped access based on login session; cannot manage collections or other users.

    Support

    Vendor support

    Standard Service Level Support: L3. SLA goals: 12-hour time to first response and 24-hour time to resolution, within business hours (9am-5pm PST). Support is provided through the EOT Help Center.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Customer reviews

    Ratings and reviews

    0 ratings. No customer reviews yet.