AWS Insights
Why model choice matters: Flexible AI unlocks freedom to innovate

Generated by Amazon Nova in Amazon Bedrock
Successful generative AI (gen AI) implementation hinges on a critical and fundamental decision that many organizations overlook: selecting the right foundation models (FMs) for their unique business challenges. Every organization has diverse needs requiring specific AI capabilities—what works perfectly for one use case may be imperfect or impractical for another due to cost-benefit tradeoffs, operational inefficiencies, or misalignment with business outcomes.
Each model brings its own strengths, whether in reasoning, multilingual support, code generation, or creative content production. When organizations lack model choice, they face costly limitations: either paying premium prices for overqualified models that waste resources, or accepting inadequate performance that compromises accuracy, speed, and customer satisfaction. Equally crucial is the ability to evaluate these models using your own data and real-world scenarios, as theoretical capabilities often differ from practical performance. This hands-on evaluation enables organizations to empirically determine the optimal balance between output quality, response latency, and operational costs for their specific use cases.
The freedom to select and thoroughly test the right model — or set of models — for each specific task is the cornerstone of an effective AI strategy, and it directly determines whether your technology investments deliver transformative business impact or result in underutilized resources and missed opportunities.
What to keep in mind about model selection
There are three critical points to consider when thinking about model selection:
- Different use cases require different tools. Just as you wouldn’t use a hammer for every home repair, you shouldn’t expect one model to solve every business problem optimally.
- The ability to experiment and switch between models isn’t just convenient – it’s a competitive advantage. Organizations that can rapidly test and deploy different models for specific use cases consistently outperform those locked into single-model approaches.
- Cost optimization comes from matching the right model to each task – avoiding both overengineering and underperformance.
These business realities are why we designed Amazon Bedrock with model choice and flexibility at its core. In fact, with the launch of Amazon Bedrock, AWS was the first cloud service provider to offer a diversity of fully managed foundation models from leading AI companies, and we consistently make new models from the industry’s top providers available as they are released, often with exclusive first access. (See Figure 1.) The majority of Amazon Bedrock customers that use multiple models draw on models from several providers, optimizing both capability and cost-efficiency for their unique use cases.

Figure 1.
Let’s take a look at some examples of how our customers are reaping the benefits of model selection through Amazon Bedrock:
Giving Veolia the right tools for their tasks: When Veolia, a leading provider of environmental services headquartered in France, began their AI journey, they faced a complex challenge: how could they deploy AI tools that would deliver genuine value to tens of thousands of employees across their global operations? The answer wasn’t a single model from a single provider; it was access to multiple models across multiple providers. Amazon Bedrock’s extensive selection of models proved essential to Veolia’s success. With access to more than 100 models from leading AI companies including Amazon, AI21 Labs, Anthropic, Cohere, DeepSeek, Luma AI, Meta, Mistral AI, Stability AI, Writer, and more, Veolia found they could address diverse needs across the organization; in fact, they now use nearly all available models in Amazon Bedrock.
Their “Veolia Secure GPT” platform, built with Amazon Bedrock, streamlines translation services, image generation, and knowledge retrieval across the organization. By leveraging Amazon Bedrock’s flexible model selection, the company’s platform has experienced explosive growth—scaling from 2,000 users in Sept. 2023 to 48,000 by Nov. 2024, and now exceeding 64,000 users today. This remarkable adoption stems directly from employee empowerment: Team members can select the ideal models for each specific task rather than forcing one-size-fits-all solutions that may deliver subpar results. This freedom to innovate within their unique roles has driven sustained engagement and continuous value creation, as evidenced by consistent and growing usage metrics. The strategic implementation has not only measurably enhanced operational efficiency but also accelerated the core Veolia mission of ecological transformation.
Aligning models with company values for Showpad: By selecting models aligned with specific governance requirements, organizations can implement appropriate guardrails for different use cases—applying more stringent controls for customer-facing applications while optimizing for performance in internal tools. This tailored approach to risk management and oversight offers both flexibility and peace of mind—increasingly critical advantages as global AI regulations continue to evolve. Showpad, a sales enablement platform company headquartered in Belgium, offers a clear example of this in action. The company needed to expand its AI capabilities while maintaining strict security and trust standards for their sales platform. By carefully evaluating their options, they chose Claude models from Anthropic because the models aligned with their core principles of human agency, transparency, inclusivity, and integrity. This model choice paid off: Showpad successfully launched 12 new AI features in just one year while maintaining their high compliance standards. Their approach shows how selecting the right model helps companies grow their AI capabilities without compromising on security or trust.
Enhancing efficiency for TUI: The marketing team at TUI, one of the world’s largest tourism and travel companies, headquartered in Germany, faced a different challenge: generating authentic, customer-facing, and brand-compliant content at scale. Their breakthrough came not from using a single model, but from combining the strengths of multiple models through Amazon Bedrock. By using Llama models from Meta for initial content generation (in this case, hotel descriptions) and Claude models from Anthropic for refinement and formatting, they transformed an eight-hour content creation process into one that takes mere seconds. This hybrid approach delivered results that neither model could achieve alone.
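A draft-then-refine pipeline like TUI’s can be sketched in a few lines. This is an illustrative sketch, not TUI’s actual implementation: the model IDs and prompts are assumptions, and the `invoke` callback stands in for the real model call (in production it would wrap the Amazon Bedrock Converse API, for example via boto3).

```python
# Illustrative two-stage, multi-model pipeline: one model drafts content,
# a second model refines it. Model IDs and prompts are assumptions, not
# TUI's actual configuration.

DRAFT_MODEL = "meta.llama3-70b-instruct-v1:0"               # assumed drafting model
REFINE_MODEL = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # assumed refining model


def two_stage_description(facts: str, invoke) -> str:
    """Draft a hotel description with one model, then refine it with another.

    `invoke(model_id, prompt)` performs the actual model call; in a real
    deployment it would wrap the Amazon Bedrock Converse API.
    """
    draft = invoke(
        DRAFT_MODEL,
        f"Write a hotel description from these facts:\n{facts}",
    )
    return invoke(
        REFINE_MODEL,
        f"Polish this draft for brand tone and formatting:\n{draft}",
    )
```

Passing the model call in as `invoke` keeps the orchestration logic independent of any one provider, so either stage’s model can be swapped without touching the pipeline itself.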
Accelerating AI implementation with safety standards for Stride Learning: As an early pioneer in online learning, Stride—headquartered in the U.S.—has evolved over 25 years to become a leading lifelong learning company offering K–12 education, career learning, professional skills training, and talent development. When developing educational technology for young learners, speed and safety must work hand in hand. Stride Learning faced the challenge of rapidly deploying their Legend Library app, also known as K12 Story Studio—an AI-powered platform that enables K–4 students to create personalized, illustrated storybooks in just minutes. The key was finding models that could generate rich, detailed illustrations while maintaining the accuracy and safety standards essential for children’s educational content. Through Amazon Bedrock, Stride Learning used Stability AI Stable Image Core models to create engaging visual content that reduces nonsensical text and ensures age-appropriate imagery. Story Studio has reached more than 20,000 unique users since its launch in February 2025 and, with the backend support of Stable Image Core in Amazon Bedrock, students have created more than 100,000 unique AI-generated images. This model selection also delivers impressive scale: by using Amazon Bedrock and other AWS serverless solutions, Stride achieved a throughput of 1,000 images per minute. This strategic model choice, combined with close collaboration with AWS and Stability AI solution architects, enabled Stride Learning to deploy their complete Legend Library product in under six months—transforming students’ imaginative stories into vivid, illustrated books with just a few clicks. Their success demonstrates how the right model selection can accelerate time-to-market while maintaining the rigorous safety and reliability standards that educational applications demand.
Reducing cost and latency for AWS: The AWS Field Experience Team empowers AWS sales teams with generative AI solutions built on Amazon Bedrock, improving how AWS sellers and customers interact. The team selected Amazon Nova models to power account summaries, a feature frequently used by sellers that requires fast and reliable responses. Since moving to the Nova Lite model, the team has seen a 90% reduction in inference costs and a 72% favorability rate for the feature.
Speeding transformation for BigDataCorp: BigDataCorp, a datatech company based in Brazil, leveraged Mistral AI models through Amazon Bedrock to design, develop, and deploy their Generative Biography solution—transforming structured spreadsheet data into readable text—in just 20 days.
How Amazon Bedrock turns model flexibility into a strategic business advantage
The power of Amazon Bedrock extends far beyond its diverse model offerings—it fundamentally reshapes how organizations approach AI implementation. Through a unified interface, teams can experiment with different models in real time, directly comparing their responses to identical tasks using Amazon Bedrock’s interactive playground and built-in evaluation tools. This consolidated approach eliminates the complexity of juggling multiple vendor relationships and inconsistent evaluation methodologies across platforms. Then, using unified APIs and comprehensive SDKs, organizations can seamlessly incorporate multiple models from different providers into their existing systems—from legacy applications to modern microservices. This integration-first approach protects current technology investments while enabling progressive AI adoption. Throughout the AI implementation journey, Amazon Bedrock provides comprehensive tools that continuously optimize your deployment—from initial model selection to ongoing refinement. Its evaluation capabilities help identify the most effective models for specific tasks, while advanced features like intelligent prompt routing, prompt caching, and prompt optimization work alongside model distillation options to systematically reduce costs, improve latency, and enhance accuracy for all your Amazon Bedrock-powered applications.
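In practice, the unified API means that switching providers is often a one-argument change. A minimal sketch using the Bedrock Converse API through boto3 (the model IDs shown are examples; availability varies by region and account):

```python
def build_messages(prompt: str) -> list:
    """Shape a single user turn for the Bedrock Converse API."""
    return [{"role": "user", "content": [{"text": prompt}]}]


def converse(model_id: str, prompt: str) -> str:
    """Send one prompt to any Bedrock model and return the reply text."""
    import boto3  # deferred import so the module loads without AWS credentials

    client = boto3.client("bedrock-runtime")
    resp = client.converse(modelId=model_id, messages=build_messages(prompt))
    return resp["output"]["message"]["content"][0]["text"]


# Switching providers is just a different modelId (example IDs):
# converse("amazon.nova-lite-v1:0", "Summarize this account plan.")
# converse("anthropic.claude-3-haiku-20240307-v1:0", "Summarize this account plan.")
```

Because every model behind the Converse API accepts the same message shape, evaluation harnesses and application code can iterate over candidate model IDs without any provider-specific request handling.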
By eliminating traditional barriers to model access and deployment, Bedrock shifts the focus from technical integration hurdles and vendor management to strategic solution discovery. The real-world impact is clear: Companies like Veolia, Showpad, TUI, Stride, and BigDataCorp—and Amazon itself—demonstrate that model flexibility isn’t merely a technical advantage—it’s a business differentiator that directly influences customer experiences, operational efficiency, and competitive positioning. As AI continues to evolve into the foundation of modern business strategy, organizations that can fluidly adapt between models to meet changing demands will establish themselves as market leaders, leaving less agile competitors behind.
Learn more:
Contact us to learn more about Amazon Bedrock or sign in to the Amazon Bedrock console to try it today. Or, check out the following helpful resources:
- Amazon Bedrock (product page)
- Optimizing costs of generative AI applications on AWS (blog)
- Amazon Bedrock customer stories (playlist)
- Amazon Bedrock demo videos (playlist)