
A Chief AI Officer’s Perspective on Leading AI Transformation and Governance

A conversation with Ron Keesing, Chief AI Officer, Leidos

In this episode...

Join AWS Enterprise Strategist Tom Soderstrom as he sits down with Ron Keesing, Chief AI Officer at Leidos, for an insightful discussion on leading enterprise AI transformation. As a newly minted CAIO, Keesing aims to infuse AI into Leidos' strategy at every level, moving from centralized AI projects to a distributed model of excellence across the organization. Listen in as he shares insights from his 20+ years of experience, including strategies for measuring AI success, creating effective human-AI partnerships, and managing AI governance at scale. Whether you're an aspiring AI leader or scaling AI in your organization, this discussion offers valuable insights from a pioneering AI executive.

Transcript of the conversation

Featuring Tom Soderstrom, Enterprise Strategist, AWS, and Ron Keesing, Chief AI Officer, Leidos

Tom Soderstrom:
Welcome to the Executive Insights podcast, brought to you by AWS. My name is Tom Soderstrom. I am an enterprise strategist in AWS. As we look at the industry trends of what's coming, one of the key trends is the emergence of generative AI and cloud. Those go together. You really can't do generative AI without cloud.

So as we look at this, we know that technology is kind of meaningless unless you have the people to drive it and adopt it. So today, we're very fortunate indeed to have one of the newly minted chief AI officers, Ron Keesing, of Leidos with us to kind of explore this new title and what it means. So Ron, thank you so much for joining us.

Ron Keesing:
Oh, thank you, Tom. It's great to be here.

Tom Soderstrom:
Would you tell us a little bit about Leidos and yourself, your background? And then we'll get into the details of what you're now supposed to do.

Ron Keesing:
Sure. Well, Leidos is... Many people don't know us. We're not necessarily a household name, but we're a very large company. We're a Fortune 300 company, and we actually get to solve many of the world's most vexing problems in security and health.

So this is a really great position I've got, leading the AI practice for Leidos, because we work on everything, typically for the US government, on missions like keeping the FAA air traffic control systems operating. So we write the software that operates and keeps our air traffic safe. We operate one of the largest electronic health record systems in the world for DoD, over 10 million users, actually, that we maintain. We do more disability claims and disability examinations than any other entity in the United States. That's just in the healthcare business.

We operate some of the coolest logistics supply chains you can imagine, including the logistics supply chain for the International Space Station.

Tom Soderstrom:
Now, that's cool.

Ron Keesing:
Yeah, it is cool. We operate the third largest IT network in the world on behalf of the US Department of Defense.

So when you think about the scale at which we operate, there are these massive data-centric jobs, and AI is really core to them. Now, I've been in the world of AI for a long time. We compared notes some time ago about your experience when you were at JPL; I worked on a NASA mission where we partnered with JPL to build the first generation of autonomous spacecraft back in the '90s. So I go way back in the world of AI. I've been at Leidos for over 20 years, leading a broad range of AI projects there.

Tom Soderstrom:
We're seeing the emergence of chief AI officer, and other people are going to ask, "What does success mean? What are my expectations?" So let's start with, what's the expectations of your role?

Ron Keesing:
One of the leadership principles that I really believe in and, I think, share with AWS values is the importance of taking ownership. So for me, I looked at where Leidos was as a company. We'd been an AI company for a long time, but even as we articulated what our AI strategy was, there was no one who was really taking ownership of it. And I kind of stood up and said, "Hey, look, we need some position, some role that's really going to be responsible, not just from a technology standpoint, but from a broader corporate standpoint. What is our AI strategy?"

And I really focused on defining a chief AI officer role that would lead the execution of a comprehensive AI strategy for Leidos. And that's what I'm focused on. It's the combination of AI strategy and then also owning and defining our AI governance principles and how we do AI governance as an enterprise.

Tom Soderstrom:
Super interesting. And one of the key lessons learned from Amazon is to have... If you want to make something work, you need a single-threaded leader, somebody who wakes up and is really... That's all they worry about, every day. And in your case, it's AI. So you actually defined the job description?

Ron Keesing:
I did, and-

Tom Soderstrom:
Very good.

Ron Keesing:
Yeah, no, it's been great, and I've had wonderful support from the executive leadership and the board at Leidos to actually define this role and to make it real, because I think we as a company, I mentioned strategy, we've embarked in the last year on what we call a year of deep strategic thinking as a company. And we've really looked at our future and looked at the future of our customers because, as I mentioned, really, our work is largely on behalf of addressing many of the most important missions performed by the US government.

As we did that, we recognized that AI is crucial to our strategy, both for Leidos and for our customers. And really having a single point of focus for the execution of that comprehensive AI strategy was the basis for our broader, not just AI strategy, but our broader corporate strategy, because so much of what we do hinges on data and hinges on building large complex systems that manipulate massive amounts of data. And to those kinds of missions, AI is the future.

Tom Soderstrom:
Yeah, and AI is table stakes, and this new generative AI is the new kid on the block. We were together, and I was listening to your CEO who said, "The number one priority is speed." So how are you going to execute your mission now of single-threaded leader of AI in Leidos?

Ron Keesing:
So I think of the mission, and really, the core, my core goal is to truly infuse AI into the DNA of Leidos at every level. And let me explain what I mean by that.

Tom Soderstrom:
Please. That's nice.

Ron Keesing:
Because in truth, Leidos has been an AI company for a long time. We actually have participated in some groundbreaking AI work over the last 20 years. Back in 2004, we were part of the first DARPA Grand Challenge that demonstrated autonomous cars. We built the first generation of autonomous seagoing vessels for the US Navy. We've been at the forefront of a lot of exciting autonomy and AI breakthroughs for the US government over the years. So we know how to do AI as a company, but those AI efforts have always been kind of focused, individualized projects, not something we do as part of all of our work.

So the challenge for us as we move forward is, how do we take the AI expertise we have, which is embodied, for example, in an organization we call our AI Accelerator, where we have our true rocket scientist talent? How do we take that talent and start to develop much more of a hub-and-spoke model where there's excellence in AI at every level of the organization? Because the truth is, to scale, to attack the opportunities presented by generative AI isn't something you can do anymore with just a centralized organization. You actually have to have AI capability spread across the entire organization.

And I'd argue it goes even deeper than that. When you think about the future of the workforce that we're moving toward, it's not just about, how do you have excellence in having AI scientists and engineers? You actually have to think about how you're going to develop an entire workforce that's capable of working in partnership with AI to do their jobs every day.

One of the things I recognized about doing this job properly is I can't be the owner of every AI resource in the company. My function actually needs to be to integrate everything we do across the organization. So the way I've actually organized what I do, again, with great support from across the company, is there's a series of initiatives that bring together all the work everyone else is doing across the company and execute it in a unified way.

So really, it's less about ownership and it's more about providing the vision and providing clear and steady coordination and orchestration of a bunch of different efforts that are already underway, because there's so much enthusiasm, as you mentioned, around generative AI to start experimenting, to do pilots. And that's all great. At a certain point, you have to decide what you're going to put your money behind, what you're going to go after and really invest in, really try to be world-class at, where you think the real value is.

So I see my role as coordinating what everyone's doing across the company so we are operating in a unified manner, identifying what that vision is, where we need to go as a company, and steering our collective movement in that direction so we can get there quickly.

Tom Soderstrom:
I like this. I'll very controversially say something, and I want you to react to it. Everybody creates a COE and thinks that a COE is the center of excellence. You need a small group that's excellent. But if you create a center of excellence, they all think they're excellent, everybody else hates them, creates their own center of excellence, and you get shadow COEs and shadow IT. Instead, create the center of engagement, where you create the tide that lifts all ships. And it sounds to me like that's what you're doing. Would you smack me around or agree with me?

Ron Keesing:
Well, I'd agree with you that that is how we absolutely think about it. And in fact, what we've been doing is taking some of the real leaders who grew up in that Accelerator organization and actually using them to seed those centers around the rest of the company so that you have that connective tissue.

There's nothing worse than when you've developed a lot of centralized capability and then everyone goes off and does things their own way. And there's no way to achieve scale, there's no way to achieve consistency.

Tom Soderstrom:
No, it's a complete waste of all those resources.

Ron Keesing:
Exactly. So how we maintain that connective tissue is absolutely critical to my plan, to how we're trying to execute on the development of this much more distributed AI capability across the entire organization. And the good news is we have people who've been in this organization now for a number of years, who are really ready to take that next step. And it gives them a great opportunity to grow their careers, to get out into those different organizations and see how they can lead and scale and help those organizations grow, too.

Tom Soderstrom:
That's fantastic. So now the really hard question. How do you measure your success as a chief AI officer? And everybody who's thinking of becoming a chief AI officer is going to go like, "Okay, tell me how. Give me the recipe." So how do you measure it? What's success?

Ron Keesing:
A big part of developing those solutions and measuring success in developing those solutions is just looking at, how much of the pipeline are we actually impacting through AI?

So we actually measure that quite explicitly. We track all the engagements we have. We look at, for example, these AI initiatives that we talked about. And we explicitly track, okay, which things in our pipeline are using the technology we're developing within these initiatives, and how are they being impacted? Eventually, we even want to look at how much are those kind of contributions helping us win new work? Are they getting cited as strengths in the way that we're selected for proposals, for example? So that's sort of the pipeline side of this.

Now, another big thing that the company wants to measure, and I think everyone expects this out of a chief AI officer role, is how are you contributing to becoming more productive as an organization? How are you becoming more efficient? And this is such an interesting topic to me because one of the real challenges, I think, around AI today is we understand how to use, for example, generative AI to increase the efficiency of certain workflows, but it's not entirely clear how we'll turn those efficiencies into improved business. And what do I mean by that? Let's take an example like using AI as a coding assistant.

It's a great technology if used properly as a coding assistant, but does that mean if you're, let's say, 30 or 40% more efficient with your software developers that you're going to do the job with 30 or 40% fewer software developers? Or are you going to write software that's more secure? Or are you going to service some of that legacy tech debt that you've never been able to get to?

So if you think about this, some organizations want to measure what the dollar impact will be. Well, it's easy if you're cutting head count, but I don't think that's how it's going to work in most organizations. I think as you develop good human-AI partnerships, what you'll find is you're taking on more work or solving problems that weren't previously being adequately addressed. And those kinds of impacts become, frankly, harder to measure. So I am measuring potential labor hours saved, but how those labor hours will actually be invested by the businesses, that's got to be kind of up to them.

Tom Soderstrom:
Interesting.
So one of the things, you're in a regulated business, as was I, and if you are not getting the compliance check, you can't move forward. So check mark, not dollars.

Ron Keesing:
Sure.

Tom Soderstrom:
So what happens a lot is those compliance checks happen after the thing is built. It sits and waits and waits and waits. I think, and I'd like you to comment on this, that generative AI and the coding assistant helps in that. We saw, for instance, that when teams used generative AI, 27% of those solutions were accepted. They went into production, whether internal or for customers. But if you can embed security and compliance into the code, then you shorten that compliance cycle to get the check mark. That is money, and it's measurable. What do you think about that?

Ron Keesing:
Yeah, I think that's absolutely right. And it's one of the ways that, for example, improving the code security and the code delivery actually translates into real business value. And I think you mentioned this time to value. This is such an interesting area right now, because it's true of all software now that anything you write becomes obsolete within six months to a year. So if you think about leaving something on the shelf for six months, you've eroded away half that value. So speed to delivery, speed to deployment is really critical, and it's interesting that you mentioned that particularly.

A lot of people think that code AI is really valuable for sitting down and writing out lines of code. It's actually, in our experience, most valuable more broadly across the entire DevOps lifecycle. How do you speed all those compliance checks? How do you speed all the quality to get to deployment as quickly as possible? So I am violently in agreement that a lot of the value, especially from the software perspective, becomes about time to deploy, time to get to compliance. Absolutely.

Tom Soderstrom:
How will you measure if people are adopting and using the code AI?

Ron Keesing:
We're actually seeing really interesting patterns when we look at how our developers are using code AI and code AI assistants. People think that when you have a code AI assistant, developers are going to spend less of their time coding and the AI is going to do that. What we're finding, actually, when you do this at scale is that our developers spend more time coding with a coding assistant. What they spend less time doing is writing the packages, writing all the acceptance criteria, writing the unit tests, writing the documentation.

And when you can kind of interpose a coding assistant that helps in those ways-

Tom Soderstrom:
Exactly.

Ron Keesing:
... then suddenly people become really accepting of it. And if it can help them a little bit to improve their coding, too, that's great. If you try and take away the coding part from a software developer, they're very resistant, but you have to-

Tom Soderstrom:
Yeah. As an ex-software developer, it would not make me happy.

Ron Keesing:
Exactly. And if you think about, instead, how do you add the coding assistant into the broader work that they do, helping contextualize code, because most developers, especially in our world, are working in massive legacy code bases that may contain legacy code that's decades old. And much of their time is just spent trying to understand what that code does. So if a coding assistant can help them orient and actually work effectively in that legacy code base, well, then they love it, right? It's actually making their job easier and taking away the part of the job they hate.

Tom Soderstrom:
I agree. One of the things that's interesting, and I'm curious what you see here, is technical debt. Technical debt is real. It's very hard to get funded. So how do you eliminate it? And I think that AI can help. With the cloud, you can use infrastructure as code, which documents how you spin up servers. So all of a sudden, you can remove those old servers from your data center that some group has to maintain.

So that's the hardware. Then the software is this old code. How do you get rid of it? Those developers often would want nothing more than to be able to do other things, but it's so critical. Generative AI and coding assistants can help, like you said, understand that, and maybe even translate the code and move it. Now you could move it to the cloud, and those developers could do other things. That's the hope. What do you think? Do you think it's possible?

Ron Keesing:
Absolutely, in certain ways. I would say it's not a panacea, at least what we're seeing today, but there are really good examples where it's doing fantastic work for us. So you mentioned infrastructure as code. There are certain people who are great at infrastructure as code, and we all love them, and we all value them. Most people are not. And actually, what we're finding is well-done coding assistants can be great helps to make non-experts be able to do a lot of that infrastructure as code work.

For example, one of the biggest adoptions we have internally of coding assistants is within our own CIO shop, where they're using generative AI coding assistants to actually write a lot of new infrastructure as code, to exactly your point, right? That's how you can start to service some of that technical debt and spend less of your time doing manual configuration of all of your systems and moving into, how do I move this into the cloud, operate in a more modern way, and actually then spare cycles to take on more interesting parts of the problem, truly modernizing and transforming the way that my IT systems operate so I can do it more efficiently?

Tom Soderstrom:
That's a great example. Do you have any other examples of that where you've seen AI, and particularly generative AI if you have them, paying business dividends already?

Ron Keesing:
Yeah, it's such a great question because I think everyone's so excited. We've all tried playing with various ChatGPT or other generative AI systems, and it seems very intuitive that you ought to be able to do great things with them, right? But actually finding the cases that consistently deliver business value is much trickier.

I will say one of the areas I think, and we're seeing business value, is in transforming just a general IT service provisioning process. This is a very manual process, traditionally. As I mentioned, Leidos manages massive networks for the US government with huge help desks and traditional kind of support infrastructure. And the truth is most human beings don't want to ever have to call someone when their IT systems don't work.

Tom Soderstrom:
That's right.

Ron Keesing:
First of all, they just want their IT systems to self-heal and fix, or if they do have to, they want to just be able to deal with some kind of a chat-based assistant that can actually resolve the problem for them on the fly.

We're seeing great success in actually using generative AI chatbots that provide tailored IT service support in a way that makes users much happier. I hit my IT support agent at least two or three times a week. We have one we use within Leidos that's really powerful and hugely successful, and now we're rolling that out to a lot of our government customers.
The area that, in many ways, I'm most excited about seeing a transformation around, but I think still the jury's out on how much we'll get out of it, is transforming the world of digital engineering, design, systems engineering through generative AI, taking on a lot of what are traditionally highly manual processes.

Tom Soderstrom:
That just makes sense to me, because one of the things that we're seeing is scale. Everything is scaling up, and things are difficult at scale. And when you take something like a very complex systems engineering task, it's a lot to keep track of. If there's one thing that AI is good at, it's keeping track of lots and lots of details.
So, what do you think the future looks like for AI?

Ron Keesing:
Well, okay, so now it's my turn to say something controversial back to you.

Tom Soderstrom:
Good. Please.

Ron Keesing:
It is such an interesting time in the field, because if you think about how we work today, we have all these processes we use that are based on the generation of text and then the consumption of text by... So humans produce text and humans consume text. Now, we're entering this era where we're going to be using AI more and more to be the ultimate producer of that text. And ultimately, on the other end, there'll be AI that's consuming more and more of that text.

Let me give you an example of how this might play itself out in the world I live in, where we're writing proposals to the US government to pursue work. Well, we are using AI, like everyone is. We're trying to think about how we can use AI to streamline and improve the proposal writing process. Of course, today the AI produces a first draft and humans iterate on it. I expect AI is going to get better and better at this over time. Well, our customers are over there looking at getting more and more proposals and having fewer and fewer people; well, they're turning to thinking about how they're going to use AI to read these proposals.

Tom Soderstrom:
Really interesting point.

Ron Keesing:
Right?

Tom Soderstrom:
Yes.

Ron Keesing:
So if you think about what we're going to do in the limiting case, we'll be sort of creating a set of knowledge chunks that we then feed into an AI system that then produces a beautiful written proposal. They'll be having an AI system on their end that reads our proposal and extracts out the knowledge chunks, which they then kind of compare in some way and decide who they want to select. And we'll both be crediting tremendous productivity gains to the fact that we have these AI systems that actually added no fundamental value at all, because all of the encoding and decoding as text actually isn't being used.

So it's actually a really interesting point about how, as we have all these workflows we've developed, not just internal to organizations, but even across organizations, where text becomes kind of the medium of knowledge transfer, we're all excited about using AI to transform the way we do this, but we may not be generating real, ultimate value until we actually rethink how those processes work and realize that maybe text is no longer going to be the medium of exchange we really think it is today.

Tom Soderstrom:
Super interesting. Both you and I have written many, many large proposals, and the thing that you deliver is a fraction of what you wrote initially. So you wasted a lot of time.

Ron Keesing:
Sure.

Tom Soderstrom:
So can you reuse those pieces, et cetera, and maybe AI can help you. That's a super interesting point.

Ron Keesing:
Look, we're a long way from being able to do that with the systems we have today. But it is ultimately the case that we're already seeing people, I mean, I see people today use AI, create some bullet points, ask AI to expand it out into an email, and then someone on the other side asking AI to read that email for them and compress it down to the point. So we're already seeing the beginnings of these kinds of workflows starting to occur. And I think it's going to be more and more, we have to ask, how do we really want to work?

For example, if you think about how that might work in an age of selecting a bidder, maybe we move more to a system where there are actually oral presentations and people meet the people who are presenting these ideas, because big written proposals that are 500 pages long aren't really that technically meaningful anymore.

Tom Soderstrom:
I think what you just did, you spawned a whole bunch of startups out there going, "Oh, yeah, I can write that." I think that's what's going to happen. We're in the infancy here. It's an exciting time. So you have people watching and thinking, I want to be a chief AI something officer. What advice would you give? What are three pieces of advice, arbitrarily, you can go more, that you would give to somebody who wants to be a chief AI officer?

Ron Keesing:
Well, maybe I'll give these pieces of advice contextually in the moment we're in right now.

Tom Soderstrom:
Yeah.

Ron Keesing:
If you're trying to help your organization move forward in the world of AI in this moment, first piece of advice I'd give is to really focus in on your data. Most organizations have tremendous aspirations about what they can accomplish with AI, but their data is not really in what I would call AI-ready form. They're just not going to have the opportunity to actually use the AI to solve the problems they want to solve.

Now, it's easy to turn any exercise in trying to improve your data into a decade-long journey, a slog that will never go anywhere, so you have to be very focused in how you do this. You have to think about, how do you create, essentially, an AI substrate layer of data products that express your core business practices in a way that will unlock the potential for AI to transform the way that you work?

The second piece of advice I'd give is to really think about how AI works with people as a partnership. Too much of the conversation around AI turns into, "Is AI going to take human jobs? Are we going to get rid of people?"

And we all hear this, and we all see it. And the truth is those of us who've been working in AI for as long as you and I have know that successful AI projects almost always involve marrying what humans do well with what machines do well. And creating really synergistic partnerships and framing the way your AI works as a synergistic partnership between the person and the AI system is the key to success.

Much of the most important data you get to actually improve the way an AI system works comes as humans actually interact with that AI system. So that human-machine partnership as a core construct is something I really recommend to everyone trying to think about how to approach building their own AI solutions.

The third thing is kind of wearing my governance hat. Most organizations I know are starting to take on the challenge of how do you govern AI, and how do you be both innovative and have governance at the same time? Finding this balance point, to me, all becomes about having a really well articulated understanding of AI risk, how it comes about, and how you can manage it.

Don't focus your governance attention, your governance resources, on low-risk AI use cases. Let people experiment, let people go quickly, let people figure out what works, and focus what will always be limited governance resources on the real things that present business risk. And also, link up your AI governance practice with your broader risk management practice so it can be grounded in real business principles.

Tom Soderstrom:
I like that a lot. Have you heard about the Amazon principle of one-way door decisions and two-way door decisions?

Ron Keesing:
No, actually, I don't know that one.

Tom Soderstrom:
So a one-way door decision is something that only the highest level executive can make. For us to create a new region, billions of dollars. Big decisions. Most decisions are two-way door decisions. You can walk through the door, you can come right back. And so what you are describing fits that very well. Leave those risk experts on the big things, and let people experiment at a low risk. So if you can lower the risk on most things, you can move forward really fast.

Ron Keesing:
Yes, that's exactly right.

Tom Soderstrom:
I think it's great advice. But I really want to thank you for this very intriguing conversation, and I see many more coming. And it's going to be interesting to see how we measure success. It's in its infancy.

Ron Keesing:
Absolutely.

Tom Soderstrom:
And if you're an aspiring AI officer out there and you want to know what life is like as a chief AI officer, contact Ron.

Ron Keesing:
Yes.

Tom Soderstrom:
Thank you very much.

Ron Keesing:
Thank you, Tom.

Ron Keesing

Chief AI Officer, Leidos

"So much of what we do hinges on data and hinges on building large complex systems that manipulate massive amounts of data. And to those kinds of missions, AI is the future."
