Tom Soderstrom:
So, what do you think the future looks like for AI?
Ron Keesing:
Well, okay, so now it's my turn to say something controversial back to you.
Tom Soderstrom:
Good. Please.
Ron Keesing:
It is such an interesting time in the field, because if you think about how we work today, we have all these processes that are based on the generation of text and then the consumption of text: humans produce text, and humans consume text. Now we're entering this era where we're going to be using AI more and more to be the ultimate producer of that text. And ultimately, on the other end, there'll be AI that's consuming more and more of that text.
Let me give you an example of how this might play out in the world I live in, where we're writing proposals to the US government to pursue work. We are using AI, like everyone is; we're trying to think about how we can use AI to streamline and improve the proposal-writing process. As you mentioned, of course, that's a first draft and humans are iterating, and I expect AI is going to get better and better at this over time. Meanwhile, our customers, who are receiving more and more proposals while having fewer and fewer people, are starting to think about how they're going to use AI to read those proposals.
Tom Soderstrom:
Really interesting point.
Ron Keesing:
Right?
Tom Soderstrom:
Yes.
Ron Keesing:
So if you think about what we're going to do in the limiting case, we'll be creating a set of knowledge chunks that we then feed into an AI system that produces a beautiful written proposal. They'll have an AI system on their end that reads our proposal and extracts the knowledge chunks, which they then compare in some way to decide who they want to select. And we'll both be crediting tremendous productivity gains to AI systems that actually added no fundamental value at all, because all of that encoding and decoding as text isn't really being used.
So it's actually a really interesting point. We have all these workflows we've developed, not just internal to organizations but even across organizations, where text becomes the medium of knowledge transfer. We're all excited about using AI to transform the way we do this, but we may not be generating real, ultimate value until we rethink how those processes work and realize that maybe text is no longer going to be the medium of exchange we think it is today.
Tom Soderstrom:
Super interesting. Both you and I have written many, many large proposals, and the thing that you deliver is a fraction of what you wrote initially. So you wasted a lot of time.
Ron Keesing:
Sure.
Tom Soderstrom:
So can you reuse those pieces, et cetera, and maybe AI can help you. That's a super interesting point.
Ron Keesing:
Look, we're a long way from being able to do that with the systems we have today. But it is ultimately the case that we're already seeing it: I see people today use AI to take some bullet points and expand them into an email, and then someone on the other side asks AI to read that email and compress it back down to the point. So we're already seeing the beginnings of these kinds of workflows. And I think, more and more, we're going to have to ask: how do we really want to work?
For example, if you think about how that might work in an age of selecting a bidder, maybe we move to a system where there are actually oral presentations and people meet the people presenting these ideas, because big written proposals that are 500 pages long aren't really that technically meaningful anymore.
Tom Soderstrom:
I think what you just did is spawn a whole bunch of startups out there going, "Oh yeah, I can write that." I think that's what's going to happen. We're in the infancy here. It's an exciting time. So you have people watching and thinking, "I want to be a chief AI something officer." What advice would you give? What are three pieces of advice, arbitrarily, you can give more, that you would offer to somebody who wants to be a chief AI officer?
Ron Keesing:
Well, maybe I'll give these pieces of advice contextually in the moment we're in right now.
Tom Soderstrom:
Yeah.
Ron Keesing:
If you're trying to help your organization move forward in the world of AI in this moment, the first piece of advice I'd give is to really focus in on your data. Most organizations have tremendous aspirations about what they can accomplish with AI, but their data is not in what I would call AI-ready form, and without that, they're just not going to have the opportunity to use AI to solve the problems they want to solve.
Now, it's easy to turn any exercise in trying to improve your data into a decade-long journey, a slog that will never go anywhere, so you have to be very focused in how you do this. You have to think about, how do you create, essentially, an AI substrate layer of data products that express your core business practices in a way that will unlock the potential for AI to transform the way that you work?
The second piece of advice I'd give is to really think about how AI works with people as a partnership. Too much of the conversation around AI turns into, "Is AI going to take human jobs? Are we going to get rid of people?"
And we all hear this, and we all see it. And the truth is those of us who've been working in AI for as long as you and I have know that successful AI projects almost always involve marrying what humans do well with what machines do well. And creating really synergistic partnerships and framing the way your AI works as a synergistic partnership between the person and the AI system is the key to success.
Much of the most important data you get to actually improve the way an AI system works comes as humans actually interact with that AI system. So that human-machine partnership as a core construct is something I really recommend to everyone trying to think about how to approach building their own AI solutions.
The third thing is, wearing my governance hat: most organizations I know are starting to take on the challenge of how to govern AI, and how to be both innovative and have governance at the same time. Finding that balance point, to me, all comes down to having a really well-articulated understanding of AI risk, how it comes about, and how you can manage it.
Don't focus your governance attention and resources on low-risk AI use cases. Let people experiment, let people go quickly, let people figure out what works, and focus what will always be limited governance resources on the things that present real business risk. And link up your AI governance practice with your broader risk-management practice so it can be grounded in real business principles.
Tom Soderstrom:
I like that a lot. Have you heard about the Amazon principle of one-way door decisions and two-way door decisions?
Ron Keesing:
No, actually, I don't know that one.
Tom Soderstrom:
So a one-way door decision is something that only the highest-level executive can make; for us, creating a new region meant billions of dollars. Big decisions. Most decisions are two-way door decisions: you can walk through the door, and you can come right back. What you're describing fits that very well. Keep the risk experts on the big things, and let people experiment at low risk. If you can lower the risk on most things, you can move forward really fast.
Ron Keesing:
Yes, that's exactly right.
Tom Soderstrom:
I think it's great advice. But I really want to thank you for this very intriguing conversation, and I see many more coming. And it's going to be interesting to see how we measure success. It's in its infancy.
Ron Keesing:
Absolutely.
Tom Soderstrom:
And if you are an AI officer out there and you want to know what life is like as a chief AI officer, contact Ron.
Ron Keesing:
Yes.
Tom Soderstrom:
Thank you very much.
Ron Keesing:
Thank you, Tom.