Building trust in AI as technology rapidly advances
AI technology is rapidly advancing. The capabilities and ongoing projects continue to expand at an astounding rate. But the faster the tech grows, the faster human trust in the tech needs to grow. Without trust in the technology from the humans using it and benefiting from it, the potential of AI will go unfulfilled. That's why Leidos is working to not only build AI humans can trust but also exemplify why humans should trust it.
"If you can get the trust relationship right, when humans and machines actually work together to solve problems you can really transform the way that business is done. If you build that relationship with intention based on trust, then humans actually really like working with AI-enabled capabilities traditionally."
Some of our best work in technology comes when humans and machines work together, and that also applies to AI-enabled tech. But to see those rewards, like any relationship, trust needs to be present, and building that trust is a task the team at Leidos is heavily focused on. Today, Ron Keesing, senior vice president for technology integration, and Tifani O'Brien, lead for the AI and Machine Learning Accelerator, join us to walk us through how they're doing that and the challenges they face.
On today's podcast:
- How trust in AI impacts the application of the technology
- Methods for evoking trust in AI
- How AI can help humans unlearn biases
Star Wars Clip: How would you know the hyperdrive is deactivated? The city central computer told you? R2-D2, you know better than to trust a strange computer.
Shaunté Newby: Computers and technology have brought a lot of value to us, and in recent years, AI specifically has begun assisting humans in ways we never thought possible. But a key factor in making sure that AI works as it's supposed to is human cooperation, and that means human trust in AI. Like any relationship, trust has to be built.
Ron Keesing: When people think about AI, the first question people ask is, "How are you going to build AI so it doesn't turn into the next Terminator or the next HAL from 2001?"
Shaunté Newby: That's Ron Keesing. He's been working in machine learning and AI for over 25 years. Before working at Leidos, he spent time with NASA building the first autonomous spacecraft for deep space travel, and he learned firsthand the potential of AI and related tech. Now he works to bring that knowledge to the wider public, but there are a lot of important factors to consider when bringing AI into the game, like this one.
Tifani O'Brien: We take the approach that what you really need is humans and computers teaming together to get the best outcome. That way, we take advantage of human strengths and computer strengths and use them together appropriately.
Shaunté Newby: That voice is the Leidos lead for the AI and Machine Learning Accelerator, Tifani O'Brien. Both Tifani and Ron will join us in this episode to tell us what AI trust means, the challenges they face in AI trust, and why it's important to get the world to trust AI. My name is Shaunté Newby. This is MindSET, a podcast by Leidos. In this series, our goal is to have you walk away from every episode with a new understanding of the complex and fascinating technological advancements going on at Leidos, from space IT to trusted AI to threat-informed cybersecurity. We've got a lot going on and we're excited to share it with you. So let's start off with: what do you do at Leidos right now?
Ron Keesing: This is Ron. I'm our senior VP for technology integration here at Leidos, which is a new role we've created that ties together a lot of our activities, especially in our office of technology around emerging technologies. So I manage our accelerator in AI and machine learning, as well as our accelerators in digital modernization, cyber, and software. I also help manage an investment portfolio of significant enterprise-level investments in these technologies. And finally, I manage our technical core capability teams and enablers, our external technology function, and our technology transition function. Those are all different threads that focus on how we advance technology across Leidos.
Shaunté Newby: Thank you. And Tifani, what about you, what do you do at Leidos?
Tifani O'Brien: As the lead for the AI and Machine Learning Accelerator, I'm leading a team of data scientists and machine learning researchers, and we work on solutions across many different kinds of missions and use cases for Leidos. We're leading research, developing new algorithms and approaches for really secure, resilient, and adaptive AI and machine learning. These are in areas that include defense, intelligence, health, and civil, and also things like developing and deploying predictive analytics for predictive maintenance and inventory management, as well as detecting patterns in IT and network operations data that indicate when you're going to have a major outage that affects the user base. So a lot of different domains. It's really exciting.
Shaunté Newby: As I understand it, Ron, you're passing on the torch here to Tifani. And so Tifani, what makes you excited about taking on this new role?
Tifani O'Brien: Well, I'm excited because there has been such an explosion of data. What I've seen, for example working with the intelligence community, is that you have analysts looking at all this data, and basically they need help. How do we help them actually make sense of it, make connections, and get insights across a large amount of data, doing something that humans can no longer easily do by themselves? So I think it's a fantastic opportunity.
Shaunté Newby: It is. And congratulations, actually, congratulations to both of you because I believe Ron was promoted and you were promoted as well. So at Leidos, it seems that you're focusing a lot on machine learning and AI, can you explain what this is and why it's significant?
Tifani O'Brien: So machine learning, the exciting part about machine learning, is that you're learning from the data instead of from rules written by humans. Customers talk about the explosion of data, and their folks are overwhelmed. Machine learning is exciting because it can handle this challenge. Basically, the machines learn from the data itself, and you're turning that explosion of data into an advantage instead of something that's overwhelming the humans. In the old days, we tried to tell machines how to solve problems, which meant we were always trying to catch up. There's so much happening, so much change. Now we're turning it over to the machines to learn from history, and from the data itself, how to address these problems. And I think that's a fantastic advantage.
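To make that contrast concrete, here is a minimal sketch of "learning from the data itself" versus hand-written rules, using a hypothetical spam-filtering task and scikit-learn. It is purely illustrative, not a Leidos system, and the messages and labels are made up for the example.

```python
# A minimal sketch: hand-written rules vs. a model that learns from labeled data.
# Hypothetical spam-filtering example using scikit-learn; not a Leidos system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-written rule: brittle, and someone has to update it every time the data changes.
def rule_based_is_spam(message: str) -> bool:
    text = message.lower()
    return "free money" in text or "winner" in text

# Learned model: the same kind of decision is induced from labeled examples.
messages = ["Free money, claim now", "Meeting moved to 3pm",
            "You are a winner!!!", "Quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy labels for illustration)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)                          # learn patterns from the data itself
print(model.predict(["Claim your free money today"]))  # model labels text it has never seen
```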
Shaunté Newby: Ron, can you briefly describe what trusted AI refers to?
Ron Keesing: When we talk about trusted AI at Leidos, which has really become a key focus for us, we're talking about building AI that humans can trust and that is really worthy of that trust, and also building AI that humans have an intuitive sense of, so that they can appropriately gauge their level of trust as they continue to operate with it. So we're always thinking about humans and machines teaming together to solve problems wherever possible, and we want the AI to be a real partner to the human. We want to build AI that never puts humans or their missions at risk. We want to build AI that always responds in predictable ways humans can understand; in some cases we want it to be really explainable or obvious to the human, so they can understand what the AI is doing, because that's really crucial to trust, too.
Ron Keesing: So when we think about trusted AI, I take a step back. The whole reason we've focused on this so much is that we've been building AI solutions for the US government on a variety of really challenging problems for over a decade. And as we looked at what had been successful and what hadn't, what we realized was that trust was a core element across all the places where we'd been really successful, and was a gap in the places where AI just hadn't taken off the way we'd planned. It often came down to whether we had built AI that humans could have the right trusting relationship with. If we got that right, the AI programs were successful, and if we didn't, they often failed. So we chose trusted AI as the focus area for us as a corporation, and we have dedicated our research, and the intention behind the way we develop AI, as a corporate strategy around this issue: how do we consistently develop AI that humans can trust and that is worthy of that trust?
Shaunté Newby: Well, I noticed you use the word partner versus replace, so partnering with humans as opposed to replacing them. And with that in mind, why is it so important to build AI that humans trust?
Ron Keesing: Well, you hit on a key point. Humans are constantly concerned about being displaced by technology, and that's one of the sources of distrust. So we want to build AI that humans trust because it's really crucial to the successful adoption of AI. There are a bunch of perspectives that need to get included as we think about how humans are going to work with AI: you've got the users, you've got the system owners that have to trust that the AI's not going to shut down their networks, and in fact, even the public. There are different examples where, if humans don't have the right level of trust in AI, they can resist adoption; they can literally make the decision to not use the systems in various ways. And all those things are really huge blockers to successful AI. On the other hand, if you can get the trust relationship right, when humans and machines actually work together to solve problems, you can really transform the way that business is done. If you build that relationship with intention, based on trust, then humans actually really like working with AI-enabled capabilities.
Shaunté Newby: And Tifani, what are your thoughts on that too?
Tifani O'Brien: If humans don't trust AI and don't accept it, then they won't get the advantages. So you need to think in terms of the different ways that AI might fail, and basically address those problems, not ignore them. And that includes, as Ron said, the people who are actually responsible for the system, the end users, and the public. All of them need to have a level of trust that's appropriate to what they're expecting the AI to do for them.
News Clip: Artificial intelligence, drones, warfare, and Google. It's a mixture that caused an uproar inside the tech giant, where the early motto was "don't be evil." So what's behind Google's contract with the Department of Defense for a project called Maven? Joining me now from Oakland is Gizmodo reporter King-
Shaunté Newby: There's more to AI trust than trust from users and stakeholders. Public trust is also a major consideration. The power of public distrust was on full display when Google's involvement with Project Maven was put on the public stage. That's something that Ron describes as a turning point for how the industry approaches AI and AI ethics, along with how that interacts with AI trust. I asked him to explain more about what happened and what it meant for the industry. Here's what he had to say.
Ron Keesing: There's a lot of history there, but with Project Maven, one of the aspects that came up at a certain point is that a number of engineers at Google found out that they were working on overhead imagery exploitation using AI in support of DOD missions. And there was a little mini-revolt within Google. A lot of people were really concerned: was this going to be an ethical application of AI? Was it an appropriate thing for Google to be involved with? In fact, Google ended up pulling out of that project entirely. And it caused everyone to step back and rethink: what is the level of ethical commitment we want as we take on AI projects, and how do we make sure that people understand what we're doing with AI and that our intentions are really the right ones?
Ron Keesing: So that's a great example of the public side of AI. We talked about stakeholders and users having to trust AI; often it's the public that has to trust AI as well. In this case it was really an issue of public trust. Both the public and engineers at Google who weren't even working on the program were concerned about what the AI might be doing, and that led a whole project to be shut down. This is really important for Leidos because we work in a lot of applications where we have public-facing AI as well. For example, we build AI that goes into airport scanning systems and keeps the flying public safe. But if the public perceives that AI is treating certain people unfairly, then they might not trust it; they might resist the adoption of that technology. So we have to work very hard to make sure that our AI is trusted not just by the system owners or the end users of the technology, but by the public as well.
Shaunté Newby: Let's talk about how things have changed. You spoke about trusted AI on previous seasons, and I guess I want to hear what has changed since maybe last year.
Ron Keesing: Sure. So look, a lot has been changing very quickly in this area for us, mostly because we've been so successful in continuing to deliver really exciting solutions for our customers and our programs: everything from improving the way we deliver healthcare to our veterans, to transforming the way we do IT operations, as Tifani mentioned earlier, by using AI to predict failures and allow us to repair major IT networks more quickly, to applications for the intelligence community, to, in fact, the very exciting milestone of a project called Gremlins, where we did the first air-to-air docking of a UAV. It's been a really exciting series of successes for Leidos around AI, all of which, again, really center on AI trust. In parallel with that, the AI accelerator has grown tremendously, so we're up to about 50 people, and we've deepened our investment at the corporate level in AI in a number of ways. All of these are really exciting changes that have been occurring over the last couple of years and really have taken off in the last year. It's an exciting time.
Shaunté Newby: Something that was really interesting in our research for this episode was Leidos' framework for AI resilience and security. Can you explain this?
Tifani O'Brien: We found, as we were continuing our research on trust and how to create AI you can trust, that there are really seven major critical capabilities that need to be covered in a trustworthy system in order for it to really operate and deliver that result. Just to walk through them: explainability; you need results that are understandable and transparent, and that you can even audit, when they're coming from the AI system. Another important capability is resilience; you have to know that you're going to have consistent performance no matter what the actual environment is like, so you can count on performance at a certain level. Another capability that's obviously important is security; you need to be able to defend your data and your models against adversaries. And then assurance, in the sense of: can you actually prove you have a particular level of performance, particularly for critical applications where people and equipment are at risk?
Tifani O'Brien: You want your AI system to be fair. That means being able to detect bias and also mitigate it, so that you're getting fair results you can trust. You want your system to be adaptive: as the world changes and as your data changes, can your AI sense, detect, and respond quickly enough to keep those results trustworthy, with the same level of performance? And then, finally, the seventh is having accurate results. You want models giving answers that are correct and that a human can trust. All these things together are what make an AI system you can trust.
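As one rough illustration of the explainability capability Tifani describes, here is a minimal sketch of auditing what a model relies on using permutation importance, with synthetic data and scikit-learn. The data and model are hypothetical stand-ins; this is not the Leidos framework itself.

```python
# A minimal sketch of one trust capability, explainability, on a generic tabular model.
# Synthetic data and a stock scikit-learn classifier; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much the
# score drops, giving a human-auditable view of what the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```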
Shaunté Newby: Before this interview, you described the pickup in AI focus at Leidos as a Renaissance of sorts. Why is that?
Ron Keesing: First off, I would say the field of AI itself has really undergone a Renaissance over the last decade. As people have figured out ways to apply machine learning to solve AI problems that were previously unsolvable, there has been growth in interest in the area. And that's also what we've seen here at Leidos. We've seen this tremendous increase in the number of different use cases we're approaching using AI, and we're really changing the way we think about AI and making it part of our DNA as a company, so we think about it in everything we do. Just to give some examples, we use AI to help improve our bidding process, so we think about how to use AI to help us with the way that we write proposals. And we also apply it across almost every aspect of the business now, from the way we think about human resources and the way we staff teams, to the way we deliver solutions on our programs and in capture.
Ron Keesing: So the range of problems we apply it to has really undergone a Renaissance, but there's also been this Renaissance in the community at Leidos; we've seen tremendous growth there. Every year we have an event called our AI Palooza. When we first started, we could gather everyone in one room; over the last couple of years, during COVID, it of course became a virtual event, but now we have literally hundreds of people from all around the company. Another sign of that community's strength is the number of people who are pursuing upskilling in AI and machine learning. At last count, I think we had almost a thousand people who've been involved in various kinds of degree and micro-degree programs in AI from across all Leidos groups. So it's really been this tremendous growth of interest and this explosion of a community of people who want to use AI as part of the way that they solve every problem.
Terminator Clip: In the 21st century, a weapon will be invented like no other. This weapon will be powerful, versatile and indestructible. It can't be reasoned with. This weapon will be called the Terminator.
Shaunté Newby: That's from the trailer for the 1984 smash hit, you guessed it, The Terminator. It's obviously a fictional story, but the way things are presented in fictional media can still have a major impact on public perception, and The Terminator is far from the only piece of media that depicts the dangers of imagined versions of AI. For the most part, the science and tech industry can watch these types of depictions through the lens of entertainment, knowing that the work is not based on reality. But for Ron and Tifani, their work can be made more complicated when public opinion is affected in this way. I asked Ron how he felt about this.
Ron Keesing: I get this question all the time. When people think about AI, the first question people ask is, "How are you going to build AI so it doesn't turn into the next Terminator or the next HAL from 2001?" And I think it really reflects an incorrect conception of, first of all, how powerful current-generation AI is and what problems it can really solve, but also of how we should be approaching the way we even frame AI ethics. Because those concerns really are about: are we going to build AI that violates our ethical sense? And how can we be sure that AI will behave in a way that we humans perceive as ethical? It's a really important concern, but when these conversations about how we keep from building the Terminator happen, they turn into a really unproductive discussion, because almost always what they cause us to think about is: how do we put rules or constraints around AI so it won't do something terrible?
Ron Keesing: And yes, of course, we should do that. But what we also should be doing is thinking very proactively about how we can solve human problems, keep humans safe, and operate better with AI. For example, I get asked all the time: in a military situation, how would you keep an AI system from mistaking a school bus for a tank? Well, that would be terrible and we don't want that to happen, but let's be honest: in wartime, right now, school buses get mistaken for tanks and bad things happen. So the real question we need to ask ourselves is: can we introduce AI in a way that helps make things safer and better and leads to more ethical outcomes? And I think the answer is a resounding yes, if we get it right.
News Clip: Can a person be convicted of manslaughter if that person is behind the wheel of a car, but the car is on autopilot? It's a key and brand new question right now in a first-of-its-kind case in Southern California.
Shaunté Newby: Tesla Autopilot crashes have been making the news more frequently as the cars continue to grow their presence on our roads, and the legal situations the crashes are presenting have been an easy entry point for a larger conversation about blame when it comes to AI. As we increase our use of AI alongside humans, this question becomes more important: who is responsible, the creator of the tech, the person overseeing the work, or perhaps even the tech itself? Here's what Ron said when I posed that question to him.
Ron Keesing: It's part of the reason that we emphasize, wherever we can, building humans and machines as teams to work together. The risk issues that you raised, like how you attribute a failure, matter a lot in government settings too. For our customers in a DOD setting, if something happens that violates military doctrine or contravenes the laws of war, you literally, legally have to be able to attribute who's responsible and allocate the blame for that. So part of the reason it's really important to build AI that leaves humans in control is that you absolutely have humans who are accountable as the owners and the controllers of how that AI was operating in the first place, who understand it, are well trained, and have had experience with the system, so they can effectively govern it and be responsible for the actions it takes.
Shaunté Newby: Another side of AI that sows distrust is AI bias, and we've seen it talked about a lot more recently. It's been exemplified by the high-profile DALL-E 2 reviews; DALL-E 2, for folks who don't know, is an image-generating AI whose images portrayed a lot of negative stereotypes. So how can we move forward with trusting AI if these biases are built right in?
Ron Keesing: It's a great question, Shaunté. And I think one of the things we have to remember about this is that it isn't as though the AI builds these biases in. What really happens is that the biases are reflected in the data from which the AI systems learn. Tifani talked earlier about machine learning and the fact that we use data to do this, but the problem occurs when we use data that reflects real-world human biases. If we get it wrong, that AI can not only reflect those human biases but actually amplify them, and can behave in ways that seem terrible to us. So you can throw your hands up at this and say, "Boy, AI is terrible." Or you can say, "Actually, there's something here that's exciting too."
Ron Keesing: Because the truth is that what we're seeing exposes biases that exist in the real world, and AI allows us to address those things, if we build the right human-machine solutions and tackle that area of bias up front. Tifani talked earlier about building AI that, as part of the trust framework, deals with these bias issues. If you deal with this issue of bias in the models, you can actually end up with a fairer outcome than anything that existed before. If you're careless, you can end up with a DALL-E 2 kind of system. If you get it right, which we are doing in areas like healthcare and in our scanning systems for airports, and you really intentionally take the bias out, what you can end up with is a system that's actually more fair than the way things exist today.
Ron Keesing: And that reflection of the biases in data really just reflects the way the real world operates today. So instead of imagining that we just hand the job over to AI and then maybe end up with these biases, I always like to think about this: if we intentionally take the bias out of AI systems and then make them a partner to humans, maybe we can actually help humans overcome some of their own biases. Imagine a loan officer who maybe has had a little bit of subconscious bias in the way they make loan decisions, and they've got an AI system assisting them that helps them see things a little more objectively because that system has been effectively de-biased. In that way, I actually hope that in the future, AI can be a teammate that helps humans get better at removing bias from their own behavior.
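As a rough illustration of the kind of bias check this implies, here is a minimal sketch that measures the gap in approval rates between two groups in hypothetical loan-decision data. All numbers are made up for the example, and a real fairness audit would involve far more than this single metric.

```python
# A minimal sketch of detecting one form of bias in model outputs.
# Hypothetical loan-approval data with a sensitive group attribute; illustrative only.
import numpy as np

approved = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])   # 1 = approved
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # group membership (0 or 1)

rate_0 = approved[group == 0].mean()   # approval rate for group 0
rate_1 = approved[group == 1].mean()   # approval rate for group 1

# Demographic parity difference: a large gap is one signal of bias to investigate and mitigate.
print(f"group 0: {rate_0:.2f}, group 1: {rate_1:.2f}, gap: {abs(rate_0 - rate_1):.2f}")
```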
Shaunté Newby: Definitely needed. Let's talk about the future. So Ron, you've been on this podcast now three times and we see a lot of changes each time we speak. And I guess that means that we can assume that we're bound to see even more rapid changes in the years to come, especially given how much Leidos has increased their focus on trusted AI. Can you share anything you're really excited about?
Ron Keesing: One area I'm really, really excited about is called reinforcement learning. In reinforcement learning, rather than training models based on data as Tifani described, we actually let them learn from experience. Typically we set up a simulation, or we let them experiment in the real world, and they learn how to operate by trying, failing, and getting better at things. Whether it's learning to walk through a maze or fly a drone, we use reinforcement learning to learn the rules that govern behavior. And what's really exciting about this at Leidos, first of all, is that there are just a huge number of problems that are really important for our customers that we can solve using reinforcement learning. So this is one of those areas where we've seen tremendous growth and tremendous excitement. For example, we had a competition earlier this year where we used something called DeepRacer, which is an Amazon system for training these reinforcement learning systems to drive race cars around a track.
Ron Keesing: We had approximately a hundred participants from across Leidos who got in, rolled their sleeves up, and built their own reinforcement learning algorithms to drive these cars around a track, and the top performers came and actually drove a real car around a track at Leidos headquarters. It was a great, exciting event. More broadly, when we think about exciting trends in reinforcement learning at Leidos, we have some great reinforcement learning projects at places like DARPA, where we're using reinforcement learning to transform the way we do, let's say, battle management: making decisions and helping humans figure out what actions to take with a machine that helps them. Again, this is the same technology that powers the computers that are now the best in the world at Go and chess; they learned to play by trying lots of moves. So we use that same technology now for things like battle management or driving cars or controlling drones, and it's really an exciting area where things are moving very fast for us.
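To illustrate the "learning from experience by trying and failing" idea, here is a minimal tabular Q-learning sketch on a tiny corridor environment. It is a toy example under simple assumptions, not the DeepRacer or DARPA work described above.

```python
# A minimal sketch of reinforcement learning: tabular Q-learning on a 1-D corridor.
# The agent starts at state 0 and earns a reward only when it reaches the goal state.
import random

n_states, goal = 5, 4          # states 0..4; state 4 is the goal
actions = [-1, +1]             # move left or move right
q = [[0.0, 0.0] for _ in range(n_states)]   # Q-table: one value per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def pick_action(s):
    # Explore occasionally (and when values are tied); otherwise act greedily.
    if random.random() < epsilon or q[s][0] == q[s][1]:
        return random.randrange(2)
    return q[s].index(max(q[s]))

for episode in range(200):
    s = 0
    while s != goal:
        a = pick_action(s)
        s_next = max(0, min(n_states - 1, s + actions[a]))
        reward = 1.0 if s_next == goal else 0.0
        # Q-learning update: nudge the value toward reward + discounted best future value.
        q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])
        s = s_next

# Learned values grow as states get closer to the goal (the terminal state stays 0).
print([round(max(row), 2) for row in q])
```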
Shaunté Newby: Thank you. And Tifani, before I even ask you just know you're on the hook to come back next season to give us updates now.
Tifani O'Brien: So we're good. I'm looking forward to it.
Shaunté Newby: So is there anything you're really excited about?
Tifani O'Brien: Yes, actually: transformer models. The transformer model is a neural network that learns context, and ultimately meaning, by tracking relationships in data. So this is really exciting. It was originally developed for natural language processing, for understanding text and speech, and we're able to see whole new levels of performance, better than ever before, on these really complex language tasks. What's exciting now, and what I'm looking forward to going forward, is that this same technology can now be applied in a lot of different domains, doing things like playing video games or captioning photos or moving robotic arms. It's a really interesting new technique. For example, we're currently adapting transformers to whole new domains, like IT operations and medical claims processing, to understand those specialized languages. So I really think there's going to be a lot more we can do with transformer models, and I'm looking forward to seeing where that goes.
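For readers curious what "tracking relationships in data" looks like mechanically, here is a minimal sketch of the scaled dot-product attention at the core of transformer models, written with NumPy and toy dimensions. Real transformers stack many such layers with learned weights; this is illustrative only.

```python
# A minimal sketch of scaled dot-product attention, the core transformer operation.
# Toy dimensions and random inputs; real models learn Q, K, V projections from data.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each token's query is compared with every token's key, so every position
    # can draw context from every other position in the sequence.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d = 4, 8                       # 4 tokens, 8-dimensional embeddings (toy sizes)
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8): one context-aware vector per token
```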
Shaunté Newby: I just want you both to know, hearing about reinforcement learning and how you describe the transformer models just made it sound so cool. Even though my mind went to Transformers, the toys, sorry. But-
Ron Keesing: There's more than meets the eye, Shaunte.
Shaunté Newby: More than meets the eye. Thank you. So I was listening to a previous podcast, Ron, I think you did. And one of the challenges you mentioned was talent, and this is my personal opinion. I believe sometimes the challenge in getting talent is probably people's awareness because something like AI and machine learning, it sounds so far away and how do you approach it? So I guess for me, it's like, if you could share, what are some other roles that are involved in this type of work that aren't so obvious?
Ron Keesing: One of the things we're seeing, first of all, is that there are a lot of people with skills adjacent to AI and machine learning who can quickly get up to speed and contribute to AI projects really effectively. For example, people who are really good computer scientists can now learn to build AI solutions, because there are much better tools; they can work on the AI engineering components and on how you actually build AI as part of larger software systems, and we're helping people get involved that way. We're also seeing a lot of people with fundamental engineering skills, in areas like electrical engineering or other kinds of engineering, who've got the right mathematical background to learn machine learning very quickly and be able to use it to solve their problems.
Ron Keesing: And we're also seeing an explosion of what we might call no-code, low-code modeling systems, where you don't even have to be an expert in AI and machine learning. For example, we use a tool called Dataiku, which is really nice because you don't even have to be a computer scientist to start generating models and applying them to your data. So I think those are all areas where we're seeing tremendous adjacency. And then, as I said, we're also seeing all these people who want to take their own backgrounds and get upskilled in AI and machine learning so that they can start doing more of it themselves as well.
Shaunté Newby: AI and machine learning stand to bring us an incredible number of useful applications that will help better our world, but trust in AI is vital to making those applications successful. Ron and Tifani gave us a lot of amazing knowledge about how to make sure AI is trusted and the challenges they're working to overcome. If you want to learn even more, you can visit leidos.com/AI; that link will also be in the description. Thank you for joining me on this episode of MindSET, a podcast by Leidos. If you liked this and want to learn even more about the incredible tech sector work going on to push humanity forward, make sure you subscribe to the show. New episodes go live every two weeks. Also rate and review; we're excited to hear your thoughts on the show. My name is Shaunté Newby. Talk to you next time.