
Customize your applications

Securely customize generative AI applications and agents with your data

Build secure, tailored AI applications with enterprise data

Organizations can use their unique enterprise data to build differentiated experiences for their business. Using techniques such as retrieval-augmented generation (RAG), model fine-tuning, model distillation, and multimodal data processing, you can build generative AI applications tailored to your specific use case. You maintain complete control over sensitive information: your data is never used to train base models or shared with any model providers, including Amazon.


Create differentiation for your apps

Combine multiple data customization tools to optimize models for domain-specific accuracy

Amazon Bedrock Knowledge Bases

Amazon Bedrock Knowledge Bases offers an end-to-end managed RAG workflow that lets you create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from your own data sources.

  • End-to-end RAG workflows
  • Securely connect FMs and agents to data sources
  • Deliver accurate responses at runtime
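
To make the managed RAG workflow concrete, here is a minimal Python sketch using the AWS SDK (boto3) that queries an existing knowledge base through the RetrieveAndGenerate API. The region, knowledge base ID, and model ARN are placeholders you would replace with your own values.

    import boto3

    # Runtime client for querying Amazon Bedrock Knowledge Bases.
    client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

    # RetrieveAndGenerate runs the full RAG loop: it retrieves relevant chunks
    # from your connected data sources and asks the chosen model to answer
    # using that context. The knowledge base ID and model ARN are placeholders.
    response = client.retrieve_and_generate(
        input={"text": "What is our refund policy for enterprise customers?"},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "KB1234567890",
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
            },
        },
    )

    # The generated answer, plus citations pointing back to the source documents.
    print(response["output"]["text"])
    for citation in response.get("citations", []):
        for reference in citation.get("retrievedReferences", []):
            print(reference["location"])

The same client also exposes a Retrieve operation if you only want the matching chunks and prefer to run generation yourself.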

Model Fine-tuning

Train a foundation model on labeled examples to improve its performance on specific tasks (known as fine-tuning), or continue pre-training it on unlabeled domain data to familiarize it with certain types of inputs (known as continued pre-training). Either way, you adapt foundation models to your specific needs and improve performance on specialized tasks.
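
As a rough illustration of how a customization job is started programmatically, the sketch below uses boto3 to submit a fine-tuning job on labeled data stored in S3. The role ARN, bucket names, base model identifier, and hyperparameter values are placeholders, and the accepted hyperparameters differ by base model, so treat this as a template rather than exact values.

    import boto3

    # Control-plane client for Amazon Bedrock model customization jobs.
    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Submit a fine-tuning job on labeled prompt/completion pairs in S3.
    # All ARNs, bucket names, and hyperparameter values are placeholders.
    job = bedrock.create_model_customization_job(
        jobName="support-assistant-finetune-001",
        customModelName="support-assistant-v1",
        roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
        baseModelIdentifier="amazon.titan-text-express-v1",
        customizationType="FINE_TUNING",  # use "CONTINUED_PRE_TRAINING" for unlabeled data
        trainingDataConfig={"s3Uri": "s3://my-bucket/train/data.jsonl"},
        outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
        hyperParameters={
            "epochCount": "2",
            "batchSize": "1",
            "learningRate": "0.00001",
        },
    )

    print(job["jobArn"])  # track progress with GetModelCustomizationJob

Once the job finishes, the resulting custom model can be used for inference in place of the base model.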


Data automation

Amazon Bedrock Data Automation is a fully managed API that integrates easily into your applications. It streamlines the development of generative AI applications and automates workflows involving documents, images, audio, and video.

  • Build intelligent document processing, media analysis, and other multimodal data-centric automation solutions
  • Industry-leading accuracy at lower cost, along with features such as visual grounding with confidence scores for explainability and built-in hallucination mitigation
  • Integrated with Bedrock Knowledge Bases, making it easier to generate meaningful information from unstructured multimodal content to provide more relevant responses for RAG
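
The sketch below shows what calling Bedrock Data Automation from application code can look like, using boto3's asynchronous invocation operations. The project ARN, profile ARN, and S3 URIs are placeholders, and the exact request fields can vary by SDK version and region, so verify them against the current API reference.

    import boto3

    # Runtime client for Amazon Bedrock Data Automation (asynchronous jobs).
    runtime = boto3.client("bedrock-data-automation-runtime", region_name="us-east-1")

    # Start asynchronous processing of a document (images, audio, and video work
    # the same way). Project ARN, profile ARN, and S3 URIs are placeholders, and
    # the request shape should be checked against the current API reference.
    job = runtime.invoke_data_automation_async(
        inputConfiguration={"s3Uri": "s3://my-bucket/input/contract.pdf"},
        outputConfiguration={"s3Uri": "s3://my-bucket/bda-output/"},
        dataAutomationConfiguration={
            "dataAutomationProjectArn": "arn:aws:bedrock:us-east-1:123456789012:data-automation-project/my-project",
            "stage": "LIVE",
        },
        dataAutomationProfileArn="arn:aws:bedrock:us-east-1:123456789012:data-automation-profile/us.data-automation-v1",
    )

    # Poll for completion; structured results are written to the S3 output location.
    status = runtime.get_data_automation_status(invocationArn=job["invocationArn"])
    print(status["status"])

The structured output written to S3 is what can then be indexed into a knowledge base for RAG.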

Model distillation

With Amazon Bedrock Model Distillation, you can use smaller, faster, more cost-effective models that deliver use-case-specific accuracy comparable to the most advanced models in Amazon Bedrock. Distilled models in Amazon Bedrock are up to 500% faster and up to 75% less expensive than the original models, with less than 2% accuracy loss for use cases like RAG.

  • Fine-tune a ‘student’ model with a ‘teacher’ model that has the accuracy you want
  • Maximize distilled model performance with proprietary data synthesis
  • Reduce cost by bringing your production data: Model Distillation lets you provide prompts, then uses them to generate responses and fine-tune the student model
  • Boost function-calling prediction accuracy for agents: enabling smaller models to predict function calls accurately helps deliver substantially faster response times and lower operational costs
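
For a sense of how a distillation job is set up, the sketch below uses boto3's create_model_customization_job with the distillation customization type: you point it at a larger teacher model, a smaller student base model, and a set of prompts, and the service generates teacher responses and fine-tunes the student on them. The model identifiers, ARNs, and S3 URIs are placeholders, and the nested configuration fields should be verified against the current API reference.

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Distillation job: the teacher model answers your prompts, and the smaller
    # student model is fine-tuned on those responses. All identifiers, ARNs, and
    # S3 URIs are placeholders; verify the config fields in the API reference.
    job = bedrock.create_model_customization_job(
        jobName="agent-routing-distillation-001",
        customModelName="agent-routing-distilled-v1",
        roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
        baseModelIdentifier="amazon.nova-lite-v1:0",  # student model
        customizationType="DISTILLATION",
        trainingDataConfig={"s3Uri": "s3://my-bucket/distillation/prompts.jsonl"},
        outputDataConfig={"s3Uri": "s3://my-bucket/distillation/output/"},
        customizationConfig={
            "distillationConfig": {
                "teacherModelConfig": {
                    "teacherModelIdentifier": "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0",
                    "maxResponseLengthForInference": 1000,
                }
            }
        },
    )

    print(job["jobArn"])

The training data configuration can also reference your model invocation logs instead of an S3 prompt file, which is the "bring your production data" path described above.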