Artificial Intelligence

Tag: Generative AI

Build conversational interfaces for structured data using Amazon Bedrock Knowledge Bases

This post provides instructions for configuring a structured data retrieval solution, with practical code examples and templates. It covers implementation samples and additional considerations, so you can quickly build and scale conversational interfaces over your structured data.
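
As a rough sketch of what such an interface can look like in code (the knowledge base ID and model ARN below are placeholders, not values from the post), an application can send a natural language question to a Knowledge Base through the Bedrock RetrieveAndGenerate API with boto3:

```python
import boto3

# Placeholder region, knowledge base ID, and model ARN; substitute your own.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What were the top five products by revenue last quarter?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)

# The Knowledge Base retrieves (or, for a structured store, queries) the data
# and the model generates a grounded natural language answer.
print(response["output"]["text"])
```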

Extend your Amazon Q Business with PagerDuty Advance data accessor

In this post, we demonstrate how organizations can enhance their incident management capabilities by integrating Amazon Q Business with PagerDuty Advance, an innovative set of agentic and generative AI capabilities that automate response workflows and provide real-time insights into operational health. We show how to configure PagerDuty Advance as a data accessor for Amazon Q indexes so you can search and access enterprise knowledge across multiple systems during incident response.
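
As an illustration of the kind of query a data accessor makes against an Amazon Q index, here is a minimal sketch using the Amazon Q Business SearchRelevantContent API via boto3. The application ID, retriever ID, and response field names are assumptions for illustration; the actual data accessor and cross-account setup follow the console steps described in the post.

```python
import boto3

# Placeholder IDs; the real values come from your Amazon Q Business application
# and the data accessor configuration described in the post.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.search_relevant_content(
    applicationId="app-1234567890abcdef",
    queryText="Recent incidents affecting the payments service",
    contentSource={"retriever": {"retrieverId": "retriever-1234567890abcdef"}},
    maxResults=5,
)

# Field names below are assumptions; check the API reference for exact shapes.
for item in response.get("relevantContent", []):
    print(item.get("documentTitle"), item.get("documentUri"))
```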

Effective cost optimization strategies for Amazon Bedrock

As adoption of Amazon Bedrock grows, cost optimization is essential to keep the expense of deploying and running generative AI applications manageable and aligned with your organization’s budget. In this post, you’ll learn strategic cost optimization techniques for using Amazon Bedrock.
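
One building block for several of these strategies is knowing what each request costs. The sketch below reads token usage from the Bedrock Converse API response and applies example per-token rates (assumed for illustration, not current AWS pricing) so you can compare models and prompts:

```python
import boto3

# Assumed example rates per 1K tokens, for illustration only (not AWS pricing).
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model choice
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 results in two sentences."}]}],
    inferenceConfig={"maxTokens": 200},
)

# The Converse API returns token counts you can turn into a cost estimate.
usage = response["usage"]
cost = (usage["inputTokens"] / 1000) * PRICE_PER_1K["input"] \
     + (usage["outputTokens"] / 1000) * PRICE_PER_1K["output"]
print(f"input={usage['inputTokens']} output={usage['outputTokens']} est_cost=${cost:.5f}")
```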

Building intelligent AI voice agents with Pipecat and Amazon Bedrock – Part 1

In this series of posts, you will learn how to build intelligent AI voice agents using Pipecat, an open-source framework for voice and multimodal conversational AI agents, with foundation models on Amazon Bedrock. The series includes high-level reference architectures, best practices, and code samples to guide your implementation.
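
As a framework-agnostic sketch of the LLM turn such a voice agent makes (Pipecat's own service and processor classes are not shown here), the following uses the Bedrock Converse API; a Pipecat pipeline would sit between its speech-to-text and text-to-speech stages and call something similar:

```python
import boto3

# Framework-agnostic sketch of the conversational turn a voice agent makes.
# Pipecat's actual service/processor classes are not shown here.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def agent_reply(history, user_utterance):
    """Send the transcribed user utterance plus prior turns to a Bedrock model."""
    messages = history + [{"role": "user", "content": [{"text": user_utterance}]}]
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model choice
        system=[{"text": "You are a concise, friendly voice assistant."}],
        messages=messages,
        inferenceConfig={"maxTokens": 150},
    )
    # The reply text would be handed off to the text-to-speech stage.
    return response["output"]["message"]["content"][0]["text"]

print(agent_reply([], "What's the weather like for a picnic tomorrow?"))
```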

Implement semantic video search using open source large vision models on Amazon SageMaker and Amazon OpenSearch Serverless

In this post, we demonstrate how to use large vision models (LVMs) for semantic video search using natural language and image queries. We introduce some use case-specific methods, such as temporal frame smoothing and clustering, to enhance the video search performance. Furthermore, we demonstrate the end-to-end functionality of this approach by using both asynchronous and real-time hosting options on Amazon SageMaker AI to perform video, image, and text processing using publicly available LVMs on the Hugging Face Model Hub. Finally, we use Amazon OpenSearch Serverless with its vector engine for low-latency semantic video search.
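
The query side of such a system can be sketched as follows, assuming frame embeddings from a publicly available CLIP-style model on the Hugging Face Hub were already indexed into an OpenSearch Serverless collection with a k-NN vector field; the endpoint, index, and field names below are placeholders:

```python
import boto3
import torch
from transformers import CLIPModel, CLIPProcessor
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

# Embed the text query with the same CLIP-style model used for the video frames.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_text(query: str):
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        features = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(features, dim=-1)[0].tolist()

# Placeholder OpenSearch Serverless endpoint and index/field names.
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "us-east-1", "aoss")
client = OpenSearch(
    hosts=[{"host": "your-collection-id.us-east-1.aoss.amazonaws.com", "port": 443}],
    http_auth=auth, use_ssl=True, verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# k-NN search over pre-indexed frame embeddings.
results = client.search(index="video-frames", body={
    "size": 5,
    "query": {"knn": {"frame_embedding": {"vector": embed_text("a dog catching a frisbee"), "k": 5}}},
})
for hit in results["hits"]["hits"]:
    print(hit["_source"].get("video_id"), hit["_source"].get("timestamp"), hit["_score"])
```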

Build a scalable AI assistant to help refugees using AWS

The Danish humanitarian organization Bevar Ukraine has developed Victor, a comprehensive generative AI-powered virtual assistant aimed at addressing the pressing needs of Ukrainian refugees integrating into Danish society. This post details our technical implementation using AWS services to create a scalable, multilingual AI assistant that provides automated assistance while maintaining data security and GDPR compliance.

Enhanced diagnostics flow with LLM and Amazon Bedrock agent integration

In this post, we explore how Noodoe uses AI and Amazon Bedrock to optimize EV charging operations. By integrating LLMs, Noodoe enhances station diagnostics, enables dynamic pricing, and delivers multilingual support. These innovations reduce downtime, maximize efficiency, and improve sustainability. Read on to discover how AI is transforming EV charging management.

Fast-track SOP processing using Amazon Bedrock

When a regulatory body like the US Food and Drug Administration (FDA) introduces changes to regulations, organizations are required to evaluate the changes against their internal standard operating procedures (SOPs). When necessary, they must update their SOPs to align with the regulation changes and maintain compliance. In this post, we show different approaches using Amazon Bedrock to identify relationships between regulation changes and SOPs.
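
The simplest of those approaches can be sketched as a single prompt that asks a Bedrock model whether a regulation change affects a given SOP section; the regulation text, SOP excerpt, and model choice below are illustrative, not taken from the post:

```python
import boto3

# Hypothetical inputs for illustration; real use would pull these from your
# regulation-change feed and SOP document store.
regulation_change = "Updated guidance: electronic batch records must be backed up daily rather than weekly."
sop_excerpt = "SOP-014 Data Integrity: system backups of batch records are performed weekly."

prompt = (
    "You are a compliance analyst. Given the regulation change and the SOP excerpt, "
    "state whether the SOP is impacted, and if so, what must change.\n\n"
    f"Regulation change:\n{regulation_change}\n\nSOP excerpt:\n{sop_excerpt}"
)

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model choice
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 300, "temperature": 0},
)
print(response["output"]["message"]["content"][0]["text"])
```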

Architect a mature generative AI foundation on AWS

In this post, we give an overview of a well-established generative AI foundation, dive into its components, and present an end-to-end perspective. We look at different operating models and explore how such a foundation can operate within those boundaries. Lastly, we present a maturity model that helps enterprises assess their evolution path.

A generative AI prototype with Amazon Bedrock transforms life sciences and the genome analysis process

This post explores deploying a text-to-SQL pipeline using generative AI models and Amazon Bedrock to ask natural language questions of a genomics database. We demonstrate how to implement an AI assistant web interface with AWS Amplify and explain the prompt engineering strategies adopted to generate the SQL queries. Finally, we present instructions to deploy the service in your own AWS account.
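
The text-to-SQL step can be sketched as follows, with an assumed toy schema and placeholder model choice; the post's full pipeline adds the Amplify web interface and query execution against the genomics database:

```python
import boto3

# Assumed toy schema for illustration; the post's pipeline uses the real
# genomics database schema.
schema = """
TABLE variants(sample_id TEXT, chromosome TEXT, position INT, gene TEXT, impact TEXT)
TABLE samples(sample_id TEXT, tissue TEXT, collection_date DATE)
"""
question = "How many high-impact variants were found in gene BRCA1 per tissue type?"

prompt = (
    "Given this schema, write a single SQL query answering the question. "
    "Return only SQL.\n\n"
    f"Schema:\n{schema}\nQuestion: {question}"
)

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model choice
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 300, "temperature": 0},
)

sql = response["output"]["message"]["content"][0]["text"]
print(sql)  # review/validate before running against the database
```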