National security AI leaders discuss guiding principles, commitment to ethical AI
In an era where technological advancements are rapidly reshaping the landscape of national security, the Department of Defense (DOD) and the intelligence community (IC) are spearheading efforts to harness the power of trusted mission artificial intelligence (AI). Leidos recently had the opportunity to sit down at the ACT-IAC Emerging Technology Conference for a fireside chat with two of the top minds in this area.
Andy MacDonald, Leidos chief technology officer for decision advantage solutions, moderated a discussion with Dr. William Streilein, former chief technology officer for the DOD’s Chief Digital and AI Office (CDAO), and Dr. John Beieler, chief AI officer for the Office of the Director of National Intelligence. The experts shared their guiding principles and insights from their efforts to date. Some key takeaways from the discussion transcript are outlined below.
Bring the data to where analytics are done
The CDAO Data Mesh Reference Architecture provides a blueprint that guides and constrains data mesh solution architectures, giving department stakeholders a common language and a way to validate their designs against proven reference architectures (RAs). In doing so, CDAO develops connection points among stakeholder RAs while adhering to a common set of patterns that bring the data to where the analytics are built, enabling dashboards and decision support. Moving forward, CDAO’s goal is to make these data sets accessible across the department in a self-service manner, accelerating and scaling decision advantage outcomes in support of the DOD’s digital transformation goals.
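To make the pattern concrete, here is a minimal sketch in Python of a self-service data product catalog in the spirit of a data mesh. The class names, fields, registry, and URL below are illustrative assumptions, not part of the CDAO Data Mesh Reference Architecture.

```python
# Minimal sketch of a self-service data product pattern, loosely in the
# spirit of a data mesh. All field names, the registry, and the URL are
# hypothetical illustrations, not the CDAO reference architecture.
from dataclasses import dataclass, field


@dataclass
class DataProduct:
    """Metadata a domain team might publish so analytics can discover and
    consume its data where the analytics are built."""
    name: str                      # e.g., "logistics.readiness_reports"
    owner: str                     # accountable domain team
    endpoint: str                  # where consumers pull the data
    schema_version: str            # contract consumers can validate against
    classification: str            # handling caveats for the data set
    tags: list[str] = field(default_factory=list)


class DataProductRegistry:
    """Toy catalog enabling self-service discovery across domains."""

    def __init__(self) -> None:
        self._products: dict[str, DataProduct] = {}

    def register(self, product: DataProduct) -> None:
        self._products[product.name] = product

    def find(self, tag: str) -> list[DataProduct]:
        return [p for p in self._products.values() if tag in p.tags]


# Usage: a dashboard team discovers a data product by tag rather than
# negotiating a bespoke pipeline with the producing team.
registry = DataProductRegistry()
registry.register(DataProduct(
    name="logistics.readiness_reports",
    owner="logistics-domain-team",
    endpoint="https://data.example.mil/api/readiness",  # placeholder URL
    schema_version="1.2.0",
    classification="UNCLASSIFIED",
    tags=["readiness", "decision-support"],
))
print([p.endpoint for p in registry.find("decision-support")])
```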
Investing in the totality of the stack
The IC has been on an AI journey for decades, and its AI investments need to enhance its ability to provide relevant insights to policymakers in an actionable timeframe. With an understanding of what algorithms can be built today, the IC is reimagining what the whole analyst workflow architecture should look like moving forward. Another IC goal is to invest in making its technology stack more effective and efficient, better enabling tasking, collection, processing, exploitation, and dissemination.
Related reading: Principles of artificial intelligence ethics for the intelligence community
United efforts in AI assurance
Although they operate under different authorities, the DOD and IC collaborate in areas such as AI assurance and data infrastructure to support the warfighter. While their approaches differ, important commonalities remain, and national security organizations share best practices using a common lexicon so that stakeholders know how to apply them.
Maturity model for human and machine teaming
Currently, large language model (LLM) capabilities are useful for low-consequence work such as back-office tasks and legacy code updates. As users move to higher-consequence use cases, a maturity model will help stakeholders understand what needs to be done to protect their workflows. CDAO is developing a human and machine teaming maturity model, structured like a rubric with five levels, so operators understand a model’s limitations and how it can be leveraged successfully.
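As an illustration only, the sketch below shows what a five-level teaming rubric could look like in code. The level names, consequence tiers, and gating logic are assumptions made for this example, not the CDAO's actual maturity model.

```python
# Minimal sketch of a five-level human-machine teaming rubric. The level
# names and gating logic here are illustrative assumptions, not the CDAO's
# actual maturity model.
from enum import IntEnum


class TeamingMaturity(IntEnum):
    # Hypothetical levels, ordered from least to most autonomous use.
    HUMAN_ONLY = 1           # model outputs used as reference material only
    HUMAN_IN_THE_LOOP = 2    # every model output approved before use
    HUMAN_ON_THE_LOOP = 3    # model acts; humans monitor and can intervene
    SUPERVISED_AUTONOMY = 4  # model acts within tested, bounded workflows
    FULL_DELEGATION = 5      # reserved for low-risk, well-characterized tasks


# Hypothetical mapping from use-case consequence to the maximum teaming
# level an operator should rely on without additional safeguards.
MAX_LEVEL_BY_CONSEQUENCE = {
    "low": TeamingMaturity.FULL_DELEGATION,     # e.g., back-office tasks
    "medium": TeamingMaturity.HUMAN_ON_THE_LOOP,
    "high": TeamingMaturity.HUMAN_IN_THE_LOOP,  # e.g., operational decisions
}


def allowed(level: TeamingMaturity, consequence: str) -> bool:
    """Return True if the requested teaming level is within the rubric's
    ceiling for a use case of the given consequence."""
    return level <= MAX_LEVEL_BY_CONSEQUENCE[consequence]


print(allowed(TeamingMaturity.SUPERVISED_AUTONOMY, "high"))  # False
print(allowed(TeamingMaturity.FULL_DELEGATION, "low"))       # True
```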
Related reading: Trusted AI: the Leidos way
Commitment to ethical AI principles
Adversaries seek to exploit AI, and they may not be bound by the same legal standards or civil liberties and privacy protections as the U.S. However, the U.S. commitment to the ethical, responsible use of AI is what makes it a leader on the AI frontier. Published AI principles already include making sure that AI is equitable and traceable. The DOD and IC’s alignment on ethical AI guardrails enables them to innovate quickly and efficiently while keeping AI use ethical.
Related reading: Pentagon official lays out DOD Vision for AI
New paradigm in industry collaboration
CDAO’s AI strategy includes lessons learned from industry, particularly in machine learning operations. Now, the DOD seeks to learn from industry about its application of AI across the data hierarchy of needs. Similarly, the IC understands that only a few commercial entities hold multi-trillion-parameter LLMs, and that government access for testing and evaluation runs through APIs. This calls for new government-industry collaboration paradigms for large, complex models to ensure an understanding of ethical, trusted AI before adoption in government systems.
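As a rough illustration of API-mediated test and evaluation, the sketch below sends a fixed prompt set to a hosted model and logs the responses for offline scoring. The endpoint, authentication scheme, and response fields are placeholders, not any specific provider's API.

```python
# Minimal sketch of API-mediated test and evaluation: run a fixed prompt set
# against a commercially hosted model and record responses for later review.
# The endpoint, auth header, and response fields are placeholders; a real
# provider's API will differ.
import json
import requests

API_URL = "https://api.example.com/v1/generate"   # hypothetical endpoint
API_KEY = "REPLACE_ME"                            # supplied by the provider

EVAL_PROMPTS = [
    "Summarize the attached report in three sentences.",
    "List the assumptions in the following analysis.",
]


def query_model(prompt: str) -> str:
    """Send one prompt to the hosted model and return its text output."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 256},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")            # field name is an assumption


def run_eval(prompts: list[str], out_path: str = "eval_log.jsonl") -> None:
    """Record prompt/response pairs so evaluators can score them offline."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "response": query_model(prompt)}
            f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    run_eval(EVAL_PROMPTS)
```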
Learn about our role as an industry collaborator providing Trusted Mission AI solutions