Takeaways from AI Palooza 2023
Leidos hosted AI Palooza last month at Leidos Global Headquarters in Reston, Virginia. Photo: Jay Townsend
Leidos recently hosted its sixth annual AI Palooza, an event to promote learning and collaboration around artificial intelligence (AI) within the company.
Hot topics this year included large language models like ChatGPT, ethical concerns surrounding AI, and mission implications for the federal government.
In his opening remarks, Leidos Chairman and CEO Roger Krone shared that AI adoption is expanding so rapidly among Leidos customers that four in five business proposals submitted by the company now include AI content.
Krone said Leidos has seen positive results from its investments in AI research and has expanded its bench of AI talent to more than 80 subject matter experts.
Leidos CTO Jim Carlini said these investments reflect the company’s vision to innovate at the nexus of global challenges and emerging technology.
- “AI is one of those technologies that presents tremendous challenges and opportunities for our government customers,” said Carlini. “We’re building a culture where AI is in our DNA, but we haven’t even come close to tapping into its mission implications.”
In a fireside chat, Carlini hosted Dr. Andrew Moore, Google Director of Cloud AI, who shared his optimism about AI adoption in the government.
- “I think there’s a sense of urgency in the U.S. government and meaningful agreement among lawmakers in both parties on the need to make sure the U.S. is preeminent in AI,” said Moore.
A large portion of the day was spent covering the meteoric success of ChatGPT, a chatbot that has dominated headlines since December.
Leidos Sr. Vice President Ron Keesing said large language models like ChatGPT will open the door to a wide range of use cases that will simplify workflows in unexpected ways.
- “I predict we’re going to spend the coming years nailing down these use cases, which will include machine-assisted software development and a fundamental change in how we search for information on the web,” said Keesing.
Moore added that while it’s easy to be skeptical of hype cycles surrounding new technology, he believes these language models will fundamentally alter the design of autonomous systems.
- “These models are really good at understanding human utterances and disambiguating requests, but we also need to create AI that does something useful on the basis of this,” he said. “I don’t expect us to rely solely on these as standalone models, but that we’ll ingeniously work out how to use them to generate value in other systems.”
In a panel discussion on diversity and bias, technology leaders from Historically Black Colleges and Universities explored how to maintain high ethical standards as society develops and adopts AI.
- “When it comes to the education of technologists, we need to put a greater emphasis on the history of technology and its differential effects on various groups of people,” said Keesing. “We know AI can be riddled with bias because it reflects human biases in the data. If we’re not careful, these biases can be used to reinforce traditional power structures and lead to inequitable outcomes.”
Keesing said Leidos has formed a working group focused on identifying ethical risks, recommending governance and offering tools to mitigate AI bias and inequality.
Other sessions throughout the day included presentations from Leidos experts who have successfully applied AI to help solve challenges in veteran health, airport security, intelligence analysis, cyber operations, radio spectrum sensing and more.
Please contact the Leidos media relations team for more information.