Artificial Intelligence

Category: Generative AI

AWS architecture for Netsertive showcasing EKS, Aurora, Bedrock integration with insights management and call reporting workflow

How Netsertive built a scalable AI assistant to extract meaningful insights from real-time data using Amazon Bedrock and Amazon Nova

In this post, we show how Netsertive introduced a generative AI-powered assistant into MLX, using Amazon Bedrock and Amazon Nova, to bring the next generation of their platform to life.
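As a rough illustration of the kind of call such an assistant makes, here is a minimal sketch of invoking an Amazon Nova model through the Amazon Bedrock Converse API with boto3. The Region, model ID, and prompt are placeholder assumptions for illustration, not details from Netsertive's implementation.

```python
import boto3

# Bedrock runtime client; the Region is a placeholder assumption
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Ask an Amazon Nova model to summarize a (hypothetical) call transcript
response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",  # representative Nova model ID; confirm availability in your Region
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the key customer intents in this call transcript: ..."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```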

Training Llama 3.3 Swallow: A Japanese sovereign LLM on Amazon SageMaker HyperPod

The Institute of Science Tokyo has successfully trained Llama 3.3 Swallow, a 70-billion-parameter large language model (LLM) with enhanced Japanese capabilities, using Amazon SageMaker HyperPod. The model demonstrates superior performance in Japanese language tasks, outperforming GPT-4o-mini and other leading models. This technical report details the training infrastructure, optimizations, and best practices developed during the project.
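For orientation, the snippet below is a minimal sketch of provisioning a SageMaker HyperPod cluster with boto3. The Region, cluster name, instance type and count, lifecycle script location, and IAM role ARN are illustrative placeholders, not the configuration used to train Llama 3.3 Swallow.

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="ap-northeast-1")  # placeholder Region

# Create a HyperPod cluster with a single GPU instance group.
# All names, counts, and ARNs below are hypothetical examples.
sagemaker.create_cluster(
    ClusterName="llm-training-cluster",
    InstanceGroups=[
        {
            "InstanceGroupName": "gpu-workers",
            "InstanceType": "ml.p5.48xlarge",
            "InstanceCount": 4,
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://example-bucket/hyperpod-lifecycle/",
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::111122223333:role/HyperPodExecutionRole",
            "ThreadsPerCore": 1,
        }
    ],
)
```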

Accelerating Articul8’s domain-specific model development with Amazon SageMaker HyperPod

Learn how Articul8 is redefining enterprise generative AI with domain-specific models that outperform general-purpose LLMs in real-world applications. In our latest blog post, we dive into how Amazon SageMaker HyperPod accelerated the development of Articul8's industry-leading semiconductor model, achieving 2X higher accuracy than top open source models while slashing deployment time by 4X.

A diagram illustrating the high-level workflow of VideoAmp's Natural Language Analytics solution

How VideoAmp uses Amazon Bedrock to power their media analytics interface

In this post, we illustrate how VideoAmp, a media measurement company, worked with the AWS Generative AI Innovation Center (GenAIIC) team to develop a prototype of the VideoAmp Natural Language (NL) Analytics Chatbot, which uses Amazon Bedrock to uncover meaningful insights at scale in media analytics data.

Adobe enhances developer productivity using Amazon Bedrock Knowledge Bases

Adobe partnered with the AWS Generative AI Innovation Center, using Amazon Bedrock Knowledge Bases and the Vector Engine for Amazon OpenSearch Serverless. This solution dramatically improved their developer support system, resulting in a 20% increase in retrieval accuracy. In this post, we discuss the details of this solution and how Adobe enhances their developer productivity.
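As a rough sketch of what a knowledge base lookup like this involves, the following queries an Amazon Bedrock knowledge base with the Retrieve API via boto3. The Region, knowledge base ID, and query text are hypothetical placeholders, not Adobe's configuration.

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # placeholder Region

# Retrieve the most relevant document chunks for a developer question.
response = agent_runtime.retrieve(
    knowledgeBaseId="EXAMPLEKBID",  # hypothetical knowledge base ID
    retrievalQuery={"text": "How do I refresh an expired API access token?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

for result in response["retrievalResults"]:
    print(f"{result['score']:.3f}  {result['content']['text'][:120]}")
```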

Automate customer support with Amazon Bedrock, LangGraph, and Mistral models

In this post, we demonstrate how to use Amazon Bedrock and LangGraph to build a personalized customer support experience for an ecommerce retailer. Using the Mistral Large 2 and Pixtral Large models, we guide you through automating key customer support workflows such as ticket categorization, order details extraction, damage assessment, and contextual response generation.
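To give a feel for the pattern, here is a minimal sketch of a two-step LangGraph workflow (categorize a ticket, then draft a reply) whose nodes call a Mistral model through the Amazon Bedrock Converse API. The Region, model ID, prompts, and state fields are illustrative assumptions, not the post's exact implementation.

```python
import boto3
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # placeholder Region
MODEL_ID = "mistral.mistral-large-2407-v1:0"  # representative Mistral Large 2 ID; confirm in your account


def ask(prompt: str) -> str:
    """Send a single-turn prompt to the model via the Converse API."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 300, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]


class TicketState(TypedDict):
    ticket: str
    category: str
    reply: str


def categorize(state: TicketState) -> dict:
    # Classify the ticket into a small set of categories
    category = ask(
        "Classify this support ticket as billing, shipping, damage, or other. "
        f"Answer with one word.\n\n{state['ticket']}"
    )
    return {"category": category.strip().lower()}


def draft_reply(state: TicketState) -> dict:
    # Draft a response conditioned on the assigned category
    reply = ask(f"Write a short, polite reply to this {state['category']} ticket:\n\n{state['ticket']}")
    return {"reply": reply}


graph = StateGraph(TicketState)
graph.add_node("categorize", categorize)
graph.add_node("draft_reply", draft_reply)
graph.add_edge(START, "categorize")
graph.add_edge("categorize", "draft_reply")
graph.add_edge("draft_reply", END)
app = graph.compile()

result = app.invoke({"ticket": "My order arrived with a cracked screen.", "category": "", "reply": ""})
print(result["category"])
print(result["reply"])
```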

Mental model for choosing Amazon Bedrock options for cost optimization

Effective cost optimization strategies for Amazon Bedrock

With the increasing adoption of Amazon Bedrock, optimizing costs is essential to keep the expenses of deploying and running generative AI applications manageable and aligned with your organization's budget. In this post, you'll learn strategic cost optimization techniques to apply while using Amazon Bedrock.
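Because model choice is one of the biggest cost levers, a back-of-the-envelope comparison is often a useful first step. The sketch below estimates per-request on-demand cost from token counts; the prices are made-up placeholders, so look up current Amazon Bedrock pricing for the models and Region you actually use.

```python
# Rough per-request cost estimate for on-demand inference.
# All prices below are hypothetical placeholders, not actual Amazon Bedrock pricing.
PRICE_PER_1K_TOKENS = {
    "large-model": {"input": 0.0030, "output": 0.0150},  # placeholder USD values
    "small-model": {"input": 0.0002, "output": 0.0008},  # placeholder USD values
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request for the given token counts."""
    price = PRICE_PER_1K_TOKENS[model]
    return (input_tokens / 1000) * price["input"] + (output_tokens / 1000) * price["output"]


# Example: the same 2,000-token prompt with a 500-token answer, on two candidate models.
for model in PRICE_PER_1K_TOKENS:
    print(model, f"${estimate_cost(model, 2000, 500):.4f} per request")
```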

How Kepler democratized AI access and enhanced client services with Amazon Q Business

At Kepler, a global full-service digital marketing agency serving Fortune 500 brands, we understand the delicate balance between creative marketing strategies and data-driven precision. In this post, we share how implementing Amazon Q Business transformed our operations by democratizing AI access across our organization while maintaining stringent security standards, resulting in an average savings of 2.7 hours per week per employee in manual work and improved client service delivery.