
Calculating the Cost and ROI of Generative AI

Featuring three AWS Enterprise Strategists

Generative AI has the potential to revolutionize industries and organizations, but how much will it cost you, and how do you measure return on that investment? In this episode, get practical guidance on estimating the costs of generative AI adoption, from model selection to infrastructure requirements. Learn how to quantify the productivity gains and efficiency improvements enabled by generative AI, strategies for upskilling your workforce, and advice on driving organization-wide adoption of generative AI capabilities.

Highlights

01:58 - How do you estimate the cost of generative AI?
07:52 - Should you buy or build your own model?
16:33 - Getting employee buy-in on generative AI
25:37 - What value does generative AI bring to your organization?
36:44 - Getting return on your investment

Transcript of the conversation

Featuring AWS Enterprise Strategists Helena Yin Koeppl, Chris Hennesey, and Jake Burns

Jake Burns:
Welcome to “Conversations With Leaders”. I’m Jake Burns, Enterprise Strategist with AWS. In my former role, I was with Live Nation Ticketmaster and I led an all-in migration to AWS that we successfully completed in 2017. I'm joined today by two of my colleagues, Helena and Chris.

Helena Yin Koeppl:
Hi, I am Helena Yin Koeppl. I'm also an Enterprise Strategist with AWS. Before joining AWS, my most recent role was Head of Thomson Reuters Labs for AI and generative AI innovation.

Chris Hennesey:
Yeah. Hi, this is Chris Hennesey here on the same team as Helena and Jake. Excited to be here today. Prior to joining AWS three years ago, I was the technology CFO at Capital One and helped lead through their transformation to the cloud. So, excited to spend time with you today on the topic we're getting from a lot of customers.

Jake Burns:
Yeah, and that topic is the cost and value of generative AI. And like you, Chris, I've been hearing a lot of questions from customers on this topic, and I think there are a lot of interesting things to talk about here. I saw a Gartner Finance keynote recently, and the title was “AI Success Depends on the CFO, Not IT.” I thought that was really interesting, and having a former CFO here I think is appropriate. So, we're going to talk about both sides of the equation here: the cost side, where I think our customers are all trying to minimize their costs as they go through this process, but I think the more interesting part is how to ensure that we get value. But let's tackle the cost side first. One of the questions that I get fairly often is, how do we estimate costs when it comes to generative AI? Is it the same as cloud? Is it the same as a traditional IT deployment? Are there some things that are unique about generative AI that we need to consider?

Helena Yin Koeppl:
When we are talking about the cost of generative AI, we need to break it down: what do we mean by that? Is it the cost of adopting the entire technology? And for what? So, the starting point for talking about cost is exactly that: what are you going to do with generative AI? Start from the key problems to solve, then work backwards to identify everything a generative AI-based solution needs, and then calculate the cost of each of the components required to reach that solution. I think that's the way to break down a very big question, “How much does it cost for me to adopt generative AI?”, into what are the problems to solve and how am I going to solve them, and then calculate backwards.

Jake Burns:
Right, and what are some of the categories of costs that customers should be looking at? Are they the same as, like I was saying before, like IT deployment? Seems like there's maybe some additional things that they need to consider here with generative AI that may be unique to it.

Chris Hennesey:
Yeah, as with any traditional investment, when you think about generative AI there's the buy-versus-build question that comes up. Jake, to your question, what are the cost components? If you think about purchasing and buying, you need to think about, one, “What services and models am I going to leverage?” and about the usage of those. So, model choice is a big decision, and we're going to talk a little bit more about that later. There could also be complementary software and analytics capabilities that you want to consider as part of it, so there could be some subscription-based pricing to consider on the buy side. But the beauty of testing with generative AI is that you can sample and try a lot of those services at a small scale for a small cost until you can see the value.

Obviously, if you start to build, you need to think about the resourcing and labor considerations. I know a lot of companies are looking to reskill or to enhance the knowledge of individuals: how do you do prompt engineering and generative AI effectively? And then you'll also have cloud provider costs, such as AWS, for leveraging those capabilities. So, I think the traditional categories all exist: people, software, and, if you're doing things with partners, third-party costs. In most of the use cases I've seen in customer engagements, people are looking to leverage the great LLMs that are already out there and apply them with their data to their use case, rather than starting from scratch. I know there are plenty of companies building their own, and I think more of that will come in the future, but most of the use cases I've seen today are leveraging existing models.

Jake Burns:
Yeah, I think you can get a lot of value from something like Amazon Q Business, which is pretty turnkey as these things go, right? Connect your data source, and then start creating prompts and queries to get insights out of it. Amazon Bedrock requires a bit more of an investment, but compared to building your own from scratch, it's far easier, quicker, and less expensive, with far more flexibility. So I think the approach Amazon has taken, with the service offering and the three layers of the stack, serves customers of all maturity levels. I've seen very few customers, and I'm curious to hear if you two have seen otherwise, but very few who say they're interested in building their own model from scratch. It may be because I'm talking to enterprise leaders, but I'm curious to hear what you think.

Helena Yin Koeppl:
So you two have talked a lot about the similarities of the cost components compared to other software solutions. Let me talk a little bit about the differences. One key thing: we talked about buy versus build, but there's actually a third option for generative AI, which is customize. You can take a pre-built model, and rather than using it completely as is, really customize it with your own data. That normally has a few advantages. It can help you get better performance, and it can actually control the cost, because you don't have to go for the biggest and greatest generative AI model there is. You can take a smaller one, really customize it for the specific topic and domain you have with your data, and get a better result while controlling the cost.

So that really helps you with the cost components while also getting better performance out of that model. That third option is, I would say, quite special to generative AI. And from what I have seen, not only from my past experience but from a lot of the customers I've been talking to, this is the option they're going for.

Jake Burns:
Yeah, that makes sense. I think this trend toward using smaller, more specialized models makes a lot of sense from an ROI perspective. A year ago, a lot of people were thinking, “What's the best model? What's the biggest model?” But the reality has set in that you don't necessarily need all the functionality of these really big models for everything you're doing, and they tend to be the most expensive option.

Helena Yin Koeppl:
Yeah, when we're talking about cost, let's not forget that beyond the initial cost to set up the model, there is a big long-term component, which is the inference cost: you give the model input data, for example a prompt, and get the result back. A bigger model definitely has a bigger cost per inference, because pricing is calculated by token, and tokens measure the size of inputs and outputs. So that is another reason to think about model choice and having access to the right model, be it specialized, or be it starting to test with smaller models, and to gradually find the right balance between the model itself, the customization, and the result, in terms of speed and the size of inputs and outputs.
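Helena's point about token-based pricing can be sketched in a few lines. The per-token prices, token counts, and request volume below are hypothetical placeholders, not actual provider pricing; check your provider's current rates before running numbers like these.

```python
# Back-of-the-envelope inference cost model. All prices are hypothetical
# placeholders -- look up your provider's current per-token pricing.

def inference_cost(input_tokens, output_tokens,
                   price_in_per_1k, price_out_per_1k):
    """Cost of a single request, priced per 1,000 tokens."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# Compare a large general model against a smaller, customized one
# on the same workload (illustrative prices only).
large = inference_cost(2000, 500, price_in_per_1k=0.0030, price_out_per_1k=0.0150)
small = inference_cost(2000, 500, price_in_per_1k=0.0002, price_out_per_1k=0.0006)

monthly_requests = 1_000_000
print(f"Large model: ${large * monthly_requests:,.0f}/month")
print(f"Small model: ${small * monthly_requests:,.0f}/month")
```

Even with made-up numbers, the point stands: per-token price differences between models compound quickly at production request volumes, which is why model choice and customization are cost levers, not just quality levers.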

Jake Burns:
So I guess, would it be fair to say that a reasonable approach would be to build a prototype or a proof of concept, run it, see what the setup costs look like, see what the run costs, the inference costs, will be, and then extrapolate that to predict what running it in production would cost? That would give you some sort of ballpark estimate. Would that be a reasonable approach?

Chris Hennesey:
What you just described sounds good, but then you apply that to a lot of use cases. As we talk to customers, it's not like they have one or two use cases; many have tens of use cases they're considering. So, I think in aggregate, one good control to have in the beginning is some really broad financial guardrails. How much investment capacity do you want to allocate toward this? Maybe just set aside that kind of funding up front, because you don't want to slow down the speed and pace of innovation just to have a really good forecast and estimate.

I think investing a small amount of capacity in the beginning doing these assessments, like you just outlined, Jake, and then thinking about what that means at scale gives you a really good foundation. We're going to talk about the value in a little bit, but it gives you a really good foundation for the range of outcomes, especially at scale. I think that's where some people get hung up: “I know this use case, I know what we want to do, but if I have X number of developers or X number of resources in the company putting prompts against this, what is the impact going to be at scale?” So I think using some financial guardrails, or allotting some amount of investment capacity, can be really helpful to enable innovation while keeping some awareness before things get out of whack.
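Chris's scale-up question, what a measured pilot cost means across X users, reduces to simple arithmetic. All the figures below (pilot cost, user count, working days, guardrail) are hypothetical placeholders you would replace with your own pilot's measurements:

```python
# Extrapolate a measured pilot cost to organization scale, then check it
# against a financial guardrail. All figures are hypothetical; plug in
# what your own pilot actually measured.

pilot_cost_per_user_per_day = 0.85   # $ per user per day, measured in the pilot
users_at_scale = 400                 # e.g. developers who would use the tool
working_days_per_month = 21

monthly_estimate = (pilot_cost_per_user_per_day
                    * users_at_scale
                    * working_days_per_month)

monthly_guardrail = 10_000           # $ budget carved out before scaling
print(f"Projected monthly cost at scale: ${monthly_estimate:,.2f}")
if monthly_estimate > monthly_guardrail:
    print("Over guardrail: revisit model choice, prompts, or scope first.")
```

The guardrail check is the part that matters: it turns "is this too expensive?" into a yes/no decision you can make before scaling, rather than a surprise on the bill afterward.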

Jake Burns:
Indeed. I think that also helps answer the question a little bit on how to minimize costs, as well. Because while you're in that experimentation phase, while you're building that prototype, you could test different models and you could test different strategies and different prompts, and you can get better at writing prompts.

I mean, it's amazing how much the skill of being good at writing prompts or prompt engineering can affect your costs. So there's so many different levers that you have. And so I think, really, spending some time experimenting, you could still go fast, but really realizing what those levers are, whether it be prompt engineering, or RAG, or fine-tuning, or model choice, or even just opening your mind up to thinking about, “Maybe this application isn't going to use one model, maybe it's going to use many different models for many different parts of the application.”

And then also realizing that generative AI isn't the solution to everything and using other technologies that may be less expensive for the majority of what the application does, and really relying on generative AI and these models for the things that you really need them for. And I think if you combine all of those things and you get really good at that, I think you'll build that kind of cost optimization or that efficiency muscle over time.
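One of the levers Jake mentions, using different models for different parts of an application, can start as simply as a routing function. The model names and the heuristic below are invented for illustration; a real router would use whatever signals your application actually has:

```python
# Minimal sketch of routing requests across models by task complexity,
# so only the requests that need a large model pay for one. The model
# names and the heuristic are hypothetical.

def pick_model(prompt: str) -> str:
    # Toy heuristic: long or analysis-heavy prompts go to the large model;
    # everything else goes to a cheaper specialized model.
    if len(prompt) > 500 or "analyze" in prompt.lower():
        return "large-general-model"
    return "small-specialized-model"

print(pick_model("Summarize this ticket in one line."))  # small-specialized-model
print(pick_model("Analyze these contracts for risk."))   # large-general-model
```

The design point is that routing is just application logic: you can start with a crude rule like this, measure how often each model is hit and what it costs, and refine from there.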

Chris Hennesey:
Yeah, I love that.

Jake Burns:
Absolutely. I think one of the major topics that customers are asking about is the talent consideration. We've kind of been through this before with cloud. You could argue it was a bigger scale or a smaller scale, maybe we don't even know. But it's not like this is the first transformation that most of us have led or been a part of. I'm a big advocate for taking those lessons learned from the very transformative technology of cloud and applying it to generative AI.

And I think one of my favorite lessons is really getting buy-in from your existing employees, and training them and giving them opportunities to take these new roles and fill these new roles and learn these new skills. It may sound almost like a charitable thing that you're doing for your team, and it is in a sense. You want to build morale and you want to do good things for your employees, but it's also a very selfish thing as well, because recruiting for these roles is very, very difficult and very expensive. So if you can utilize your existing employees, it's a much easier path forward, a much quicker path forward, and typically a lot less expensive and more likely to succeed.

Helena Yin Koeppl:
Absolutely. Because it's such a new technology, people jump right to, "Okay, what are the technical skills I need in my organization in order to adopt it?" But we should think more broadly. There is, of course, a technology component to talent, but there are also skills that everybody needs, and skills that somebody with domain knowledge or business knowledge could actually apply better.

For example, prompt engineering is actually less about engineering and more about knowing the right questions to ask and what answers you expect, so you can evaluate the answer. That normally comes from business experts and subject matter experts rather than from AI scientists or machine learning engineers. However, if we're really talking about technical skills, I absolutely agree that you probably already have the talent you need; you just need to upskill and reskill them. There are a lot of adjacencies to what we need for generative AI and large language models.

For example, people who already have data science skills can pivot into learning the additional concepts: the kinds of data large language models are trained on, the algorithms they leverage. Then they can be the ones who understand how best to customize a model, as an example. Similarly, data engineers, software engineers, and architects are very much needed if you want to leverage external large language models or generative AI models via API. All of them are needed to efficiently build the right architecture and optimize costs as well.

Chris Hennesey:
I was reading an article on LinkedIn this morning that said, I think, that over half of the jobs posted on LinkedIn now carry an expectation of understanding and leveraging AI in the job. And I love what you just shared, Helena: obviously there are a bunch of technical personas, but there are also a lot of non-technical expectations around knowing how to leverage this technology. And really, the promise for hiring managers is around operational efficiency.

So productivity: how do you get that? How do you ultimately improve decision-making? I think the ability of generative AI to consolidate information from multiple sources and distill it down to something that can be effectively leveraged for decisions inside an organization is really powerful. And there's so much promise and excitement around potentially opening new revenue streams and new businesses with generative AI, and new ways of servicing your existing customers.

So, I think we're seeing more and more that while it's a new technology, you need to really immerse yourself to learn how you can apply it to the work you do. And for those who can apply it effectively, it can be a big differentiator for being successful, either in the job they're in or in the roles they're looking to join.

Jake Burns:
Yeah, yeah, great points. It occurs to me that this isn't just about hiring engineers or upskilling engineers. It's almost like everyone in the organization, eventually, is going to have to learn how to interface with these systems, how to get good results out of them. And so to your point, Helena, you want to take the people who have that domain knowledge and empower them with this technology, but there's going to be a small skills gap that they're going to have to bridge in order to learn how to effectively create prompts to get the results they need or work with the technology. It's going to be integrated in just about every piece of software that we use eventually if we stay on this trajectory. So yeah, I think we tend to focus on those technical skills and upskilling them. But really, everybody needs to be a prompt engineer to some degree.

Helena Yin Koeppl:
Plus, probably everybody will need a little bit of design thinking or critical thinking as well, to really think about what I'm doing today, how it can be more efficient, which parts I can automate, and which parts open up a new way of doing things and generate, to your point, new products and new revenue streams.

Jake Burns:
Yeah, it's interesting. We went through this with cloud a little bit, but with generative AI it's even more the case that people are afraid, "This is maybe going to replace my job." But in reality, what we're seeing is that people who are adopting this technology, and again, these aren't always engineers per se, just regular people in the company, knowledge workers who use a computer, are able to use it as a force multiplier. Their skills are still needed, but they're relying on this technology almost like the computer itself.

Helena Yin Koeppl:
So shall we move on to value? This is a natural transition from what you're talking about: why would you want to adopt generative AI? What are the potential values it can bring to your organization?

Chris Hennesey:
No, it's funny, it blends the two as a transition, because I spend a lot of time with CFOs, and for most organizations the CFO obviously plays a very critical role in the company, but their teams are pretty small in the grand scheme of things, especially relative to IT or product teams. But I think it's a balancing act, Jake, to what you just said: there are continued demands on these organizations, especially shared-service teams. There's not enough capacity, and this could be a way to expand capacity and get more productivity gains.

But Helena, to your point on the pivot to value, all eyes are on what value it's going to deliver. And I know we're talking a lot about this from a consumer standpoint, but I've spent a lot of time this year with software companies who are also trying to think about how they price generative AI into their solutions, because they know value is being delivered through the investments they're making.

At least for me, I'll run through the value, the things at the top of my mind, and I'd love your input. First and foremost, what typically comes up first is productivity and efficiency: I can do things at a faster rate. That could be coding, depending on how it's being leveraged. Or, I came from financial planning and analysis, so I had to answer a lot of questions for customers throughout the company. If I could aggregate a lot of financial data and varying source data and distill it down to a narrative and a story that helps guide people to what they need to focus on, that's really powerful.

We've all read a lot of use cases in legal, and maybe even healthcare, where it's taking huge volumes of data and synthesizing them in a powerful way to streamline the way you work. So I think that's one value prop. I know a lot of companies are also thinking about this from a customer-servicing standpoint: what are the ways customers could self-service, or ways to support the individuals who are servicing customers? Call center agents come up pretty often. I've spent a lot of time on this because I came from a bank: how do you best support call center agents in the way they service customers? I know Amazon Connect is using generative AI in ways that are transforming the scripts and the way you communicate, and the consistency, from a risk-management standpoint, of how you communicate with customers. So there are huge benefits in servicing, quality of service, and risk management.

And I meet with a lot of companies, I'm sure you do too, who are looking for totally new revenue streams. How can this open up new businesses, new ways of working, even adjacent businesses and new product lines inside organizations that open up new revenue streams and opportunities? So those are the three big areas I typically hear from customers. I'm sure there are many more; I'd love to get your input.

Helena Yin Koeppl:
Yes, absolutely. To add to your points: very often what we need to do is look at our total workflow and find the points that today are more repetitive and mundane. As an example I've seen in actual customer use cases and in my previous jobs: form autofill. It sounds very simple, but imagine doing that tens or hundreds of times per week to get to the end result you want, the form being filled.

And 80% of the time, it's actually very repetitive information: addresses, dates, and all of those things. That can very easily be done and accelerated by generative AI. That's just one simple example.

Additional examples are really in the professional fields, like legal, where you have to sort through a lot of information, or accounting. In all of these, the value added is really the human expertise that does the analysis and comes up with insights, while sorting through the information and summarizing it the right way can largely be automated by generative AI, learning on labeled data, basically. So that's an example.

Another example, I would say, is enhancing customer experience. Today there are many, many different processes, travel booking for example, where the customer is trying to find things, reading through tons of reviews, and trying to find the right product for their preferences.

And all of this can actually be accelerated by generative AI, because again, it's sorting through a huge amount of information, the reviews, your preferences, who you are, and your past history of ordering things, and coming out with recommendations with narratives, because there's also the generative function.

And not to mention, using generative AI, especially with images or videos, it can generate further prototypes, designs, and different versions. You can go through 500 different versions, personalize them, and generate them with the images put in front of you for you to choose. All of these are really great ways to enhance customer experience.

Jake Burns:
Yeah. I think this is definitely the interesting part of the conversation, because it has so many layers to it, right? I love your example of the autofill, because how many little paper cuts do we have in our day-to-day lives, things where we have to do these repetitive tasks over and over?

It may seem small, but a lot of these small things add up in your day-to-day: how much can you free up your time to do the things that only a human can do, or that your specific expertise lets you do? So of course there's the automation part and the efficiency gains through that.

And then I think there's a theme in what both of you are saying, and I'm seeing the same thing: some things just weren't possible before, because there weren't enough human resources to do them and the technology couldn't do them, things like finding insights in huge amounts of data.

People like to push back on this because I think they don't quite understand that this isn't a fully automated system. If we're talking about going through legal documents, people think, okay, the model is going to come up with the answer that you're just going to use as a response in a legal proceeding.

But in reality, the way I imagine it being used is an attorney or paralegal going in there and saying, "Come up with some ideas, based on all of these documents, for things we could pursue. And then based on those ideas, we'll go do the work we would normally do." It comes up with directions we might never have considered, because who's going to go through thousands and thousands of pages of documents and be able to read all of them, right? So this technology could be very interesting in that regard.

And of course, this was one of the first use cases I think I ever heard, but it's still very valuable: imagine that your entire organization's knowledge base is accessible to each one of your employees. With guardrails in place, of course, not everyone is going to have access to the same level of information, but how much more effective can you be in your day-to-day work?

Not just from removing the undifferentiated heavy lifting of “It took me 30 minutes to find that document or that policy,” or whatever the case may be, but also from coming up with ideas I never would have been able to come up with, because maybe there was some pattern in all of this data that I just never would have seen before, right?

And then thirdly, I think the other category is this creativity category, right? I'm not of the opinion that the models have creativity in and of themselves, but they're drawing upon the creativity of the source material that goes into them and creating mashups, which can be very inspirational for people to use.

Now, of course, you can use those mashups yourself. People use large language models to do their homework or write a document for them, or the diffusion models to create images and use them as-is. But as time goes on, I think the more impactful way to use that is going to be as a starting point, right?

Or, with image generation or video generation, maybe you create a character and it shades it for you, or creates a new background for you. Or it comes up with 20 different ones, and you get to pick the one you like and then enhance it, right? So I think the theme here is that it's a collaborative effort between humans and these models at this point. And that's where, in my mind, we're getting the most value.

Helena Yin Koeppl:
Yes. As we always say, there's human in the loop, human on the loop, and human out of the loop. In 99% of the cases, we're seeing either human on the loop or human in the loop, because, to your point, in highly specialized professions we really need a human to be the one who eventually makes the final decision, right? For example, radiologists making the final decision on a diagnosis, or lawyers.

That is the ethical thing to do, the right thing to do, and the right process. But their job can be made much easier if AI provides the drafts and the first pass, highlighting and pointing out the areas they need to pay attention to, drawing on a large amount of training data.

Chris Hennesey:
Yeah, Jake, I love what's being called out here, but when we get down to brass tacks on cost and value: as you know, a lot of companies will do business cases, trying to articulate all the components of cost and all the components of value.

And having done and partnered with a lot of customers on business cases for cloud migrations, anytime you call out productivity as one of the big value props, it's very divisive how much you want to “count” that. Because you're making investments in generative AI and you're seeing value, but how do you get the returns for that value?

Some of this could be, the way you both are describing, that you can be way more effective and efficient in what you're doing. You have more satisfied employees who maybe don't attrite, who stick around longer, so that's value. Maybe you're able to scale your organization without having to invest more resources; you're getting more scale out of it.

But anytime you get into cost avoidance, and this is just speaking as a finance person, it sometimes feels a little squishy in a business case. In reality, it's real, but some people are looking at, “How do I bring the water level down on existing spend, or how do I grow the existing base of revenue?” So I would be cautious as companies and organizations go through this.

As you get into productivity gains, get really clear about where those gains are being realized and what their impact is. Are we making a decision to invest that productivity into more capacity, or do we want to drop some of that productivity to the bottom line? That may mean you don't backfill a role because you have more efficiency, so you start translating that.

So as I hear you both talk, I have my finance mindset going through the business case here. Just be cautious on the productivity side that you're being explicit and embedding it, versus ignoring it or putting it to the side. Because it's a really big portion of the equation for many of the people and personas we've talked about.

Helena Yin Koeppl:
That is an excellent point. I've seen actual customer examples of this. One customer was telling me that he currently has a team of 50 people serving three markets, doing things like know-your-customer checks: sorting through companies, internet information, company backgrounds, et cetera.

And he's saying that with generative AI's help making his existing 50-person team much more efficient, he can now scale into many more markets. That is, to your point, a much better value proposition compared to others that purely say, “Okay, we can cut the bottom line.”

Jake Burns:
Yeah. As we talk about this, customers are going to want to know, if they're not already doing this, how they can get started, right? And I think it's such a privilege to have you in this conversation, Chris, because I'm sure some CFOs and finance leaders are listening to this, but I suspect we also have a lot of IT leaders who want to know, “How can I build a business case that my CFO is going to agree with?” At least in my experience, that's been a challenge in my career, and what the finance folks are thinking can sometimes be a mystery.

So, what advice would you give to a CIO, for example, who's putting together a business case for generative AI and needs funding, and so needs support from their finance organization and from their CFO? What are some of the things you would want to see?

Chris Hennesey:
Yeah, great question. It depends on the size of the company, but usually it's best to start out with some existing resourcing and existing capacity, and reprioritize toward where you think some of these opportunities are. I know not every company can afford to just add more money, so I think making it a priority and carving out some capacity for the pilots we've talked about is a good first step: defining the use cases, testing some of them on a small scale, understanding what the cost ranges are, understanding some of the value.

You need some information to base this on beyond the qualitative, “We feel that this can change our business.” Well, that's great that you feel these things; let's see a few more facts. So the advice we've given throughout this, which is start small and scale, is a perfect one, and that's true for the financials as well.

Start small, use existing capacity, maybe dedicate portions of an agile sprint toward teams focusing on this. Maybe ring-fence a couple of resources to go a little bit deeper. Maybe bring in a partner to lean on, AWS or others.

Then you elevate some of the insights from those learnings, because you're never going to just get an open checkbook: “Oh, Gen AI is going to change everything? Here, go spend X millions of dollars and go after it.” It's going to be prove-as-you-go.

Obviously, there are some big bets many companies are making, especially at the C-suite level or the board level of companies. And they do want to make big bets in this regard, which is great. Let's redirect some capacity there.

But just like any other good analyst, you want to make sure you have data informing the decisions over the long term. So as you think about those long-term implications, make sure you're accounting for them.

So work with your finance team, work with your IT leadership, and really build a partnership that you can scale through this. That would be my advice.

Obviously, lean on AWS and other partners who can help through this. As you all know, we do a lot of peer-to-peer connects and recommendations as part of our job. Maybe you're reading articles about other companies that are doing things. I know for me, Amazon Finance went public in The Wall Street Journal and shared a lot about the ways Amazon is leveraging Gen AI within finance: for smart contracts, for earnings call prep, all these things. Great. How do you set up time and make peer-to-peer connections to learn from them, so that you can accelerate your learning? I would do that as well, Jake.

Jake Burns:
Yeah, no, that's great advice. But let me ask just one more question, because this is the part I've struggled with in the past, especially with generative AI: a lot of the value is in use cases we don't know yet. So I think we need to spend some money, not a lot of money, I think we can do this in a frugal way, but we need to be experimenting with this technology to some degree.

How can we make a business case to a CFO to allow us to experiment? Do we show examples of how other companies have done that and had good results? Is there any particular approach you would recommend?

Chris Hennesey:
Yeah. It's always a running joke: anybody from IT who's coming to ask for more money feels like they don't have any to work with, even though they have a lot to work with; everything feels marginal and incremental. So one of the questions I always love to ask CTOs and CIOs is, "Oh great, you want to do something new? What are you willing to trade off to do that?"

So thinking about trade-offs is something I would advise. Most organizations also reserve some capacity for new opportunities and investment. Not everybody does it that way, but if you escalate and elevate some of the great opportunities, there typically is capacity to support small investments across the organization; you just need to create awareness around what those opportunities are.

Product owners typically have that flexibility as well, depending on how your IT team is structured and how much localized decisioning you have versus corporate-level decisions. Obviously, if you're trying to make a material investment, you need a bit more of a bolstered case.

But for people wanting to try out Amazon Bedrock, or apply it to a use case, my guess is these are all local decisions, and they're not big investments at all. It's not taking a ton of time or money from your teams. And that's where, again, a lot of companies will just lean on partners to help accelerate a lot of this as well.

That's another route you can go: if you're already engaged with partners, maybe you redirect them away from some lower priorities to focus on this, get some excitement, and then scale it. You can make the case from there.

Jake Burns:
Alright, any closing thoughts?

Chris Hennesey:
Yeah, so I think we covered a lot. There's a little bit of the art of the possible here: use case creation is something I know a lot of companies have spent a lot of time and mind share on. And Jake, you mentioned earlier, these don't all have to be home runs. A lot of these are singles, to use a baseball analogy.

I know within every department there are opportunities to leverage this. So what are the ways you think you can apply it? Really start to invite and encourage people all over the organization to take advantage of it. I know points of view are evolving over time, but some companies, especially highly regulated ones, were concerned about the level of security and information sharing, and weren't opening this up to their organizations and their people.

But obviously, leveraging Amazon Bedrock enables you to have really good security and an ecosystem for doing this in a well-managed way. You can fully control all that information, and none of that data will be used outside of it. So take advantage of Amazon Bedrock, find ways to open this up in your organization, allow a small level of capacity to go after this, and make sure you don't underestimate the importance of communicating the results inside your company, because you can get some really good wins and inspire others just from hearing what other teams are doing.

So opening the lines of communication and celebrating the wins inside your organization would be my last piece of advice.

Helena Yin Koeppl:
Yeah. The only thing I would add is, yes, start small, but involve your organization. When you are brainstorming potential use cases, let people think about their roles and where they can optimize. As both of you have mentioned, people would be surprised: there are actually many, many potential use cases in each function, both customer-facing and internal.

So get started there, get everybody involved, and inspire everybody with that mindset of experimentation. And leverage technologies that are easy to experiment with, like Amazon Bedrock, where you can try out different models and customizations and bring your own data. That would be my advice.

Jake Burns:
Yeah, great advice. As I've talked to customers over the past year or two on this topic, when I have, let's say, five minutes with them, my summary is that there are usually two parallel paths if you're just starting out. One is, of course, finding good use cases. There's a lot of research you can do on that, and a lot of internal brainstorming and learning; this doesn't have to be expensive. AWS is here to help. We have the reference finder that we can link to in the show notes here.

See what others are doing and get inspiration from that. See what makes sense for your business and focus on those things. The other path is, even if you don't know how you're going to use this technology, you know you probably will in some way. So don't wait for that perfect use case before you start putting in guardrails, training your teams, and doing all those things. You can do that in parallel. My advice is to do both at the same time. Get people trained, get people bought in, get people to understand, “This is something the company supports, and we're going to use enterprise-grade technology, like the AWS suite of services, rather than public models.” If you're an enterprise, you want all that security, privacy, and control in place. Put those guardrails and that governance in place, so that when you come up with that great idea you want to execute on, you can execute on it quickly.

And so, do that, and then also look for those early, quick wins, so you can show value. Especially speaking to enterprise CIOs: if you need to win over your CFO, in my experience, the best way to do it is with a track record of success. So maybe focus on things like coding companions, such as Amazon Q Developer, where you can prove, at very low cost, that you're getting value out of this technology. Then you can earn trust with the finance teams and others within the organization as you go.

So yeah, work on those smaller wins; I think we're all in agreement on that. Start small, but think big, and work your way up. I think you can build momentum very quickly.

Jake Burns, Enterprise Strategist, AWS:

"This trend towards using smaller, more specialized models makes a lot of sense from an ROI perspective. A year ago, a lot of people were thinking, “What's the best model, what's the biggest model?” But the reality has set in that you don't necessarily need all the functionality of these really big models for everything you're doing, and they tend to be the most expensive option."
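The model-size trade-off in that quote can be sketched with a simple cost comparison. The per-1K-token prices and workload numbers below are hypothetical placeholders, not published rates for any specific model:

```python
# Hypothetical comparison of monthly inference cost for a large
# general-purpose model versus a smaller, specialized one.
# All prices and workload figures are illustrative assumptions.

def monthly_inference_cost(requests_per_day, tokens_per_request,
                           price_per_1k_tokens, days=30):
    """Estimate monthly cost from a simple per-token price."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1000 * price_per_1k_tokens

workload = dict(requests_per_day=10_000, tokens_per_request=1_500)

large_model = monthly_inference_cost(**workload, price_per_1k_tokens=0.03)
small_model = monthly_inference_cost(**workload, price_per_1k_tokens=0.002)

print(f"Large model: ${large_model:,.0f}/month")
print(f"Small model: ${small_model:,.0f}/month")
```

Even with made-up prices, the structure of the calculation shows why matching model size to the task, rather than defaulting to the biggest model, dominates the ROI at volume.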
