DOD Announces Establishment of Generative AI Task Force - press release 8/10/2023
Generative AI is, of course, the type of AI that tends to make LOTS of mistakes, often charmingly called "hallucinations."
Just what we need for the military...
Posting in full since it is a press release:
Deputy Secretary of Defense Dr. Kathleen Hicks directed the organization of Task Force Lima; it will play a pivotal role in analyzing and integrating generative AI tools, such as large language models (LLMs), across the DoD.
"The establishment of Task Force Lima underlines the Department of Defense's unwavering commitment to leading the charge in AI innovation," Hicks said. "As we navigate the transformative power of generative AI, our focus remains steadfast on ensuring national security, minimizing risks, and responsibly integrating these technologies. The future of defense is not just about adopting cutting-edge technologies, but doing so with foresight, responsibility, and a deep understanding of the broader implications for our nation."
Led by the Chief Digital and Artificial Intelligence Office (CDAO), Task Force Lima will assess, synchronize, and employ generative AI capabilities across the DoD, ensuring the Department remains at the forefront of cutting-edge technologies while safeguarding national security.
"The DoD has an imperative to responsibly pursue the adoption of generative AI models while identifying proper protective measures and mitigating national security risks that may result from issues such as poorly managed training data," said Dr. Craig Martell, the DoD Chief Digital and Artificial Intelligence Officer. "We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions."
Leveraging partnerships across the Department, Intelligence Community and other government agencies, the task force will help minimize risk and redundancy while pursuing generative AI initiatives across the Department.
Artificial intelligence has emerged as a transformative technology with the potential to revolutionize various sectors, including defense. By leveraging generative AI models, which can use vast datasets to train algorithms and generate products efficiently, the Department aims to enhance its operations in areas such as warfighting, business affairs, health, readiness, and policy.
"The adoption of artificial intelligence in defense is not solely about innovative technology but also about enhancing national security," said U.S. Navy Capt. M. Xavier Lugo, Task Force Lima mission commander and member of the CDAO's Algorithmic Warfare Directorate. "The DoD recognizes the potential of generative AI to significantly improve intelligence, operational planning, and administrative and business processes. However, responsible implementation is key to managing associated risks effectively."
The CDAO became operational in June 2022 and is dedicated to integrating and optimizing artificial intelligence capabilities across the DoD. The office is responsible for accelerating the DoD's adoption of data, analytics, and AI, enabling the Department's digital infrastructure and policy adoption to deliver scalable AI-driven solutions for enterprise and joint use cases, safeguarding the nation against current and emerging threats.
For more information about Task Force Lima, please visit the CDAO website at ai.mil. You can also connect with the CDAO on LinkedIn (@ DoD Chief Digital and Artificial Intelligence Office) and Twitter (@dodcdao). Additional updates and news can be found on the CDAO Unit Page on DVIDS.
Some months back I read something about one defense-industry company's plans (I think it was Palantir, but I'm not sure) to use generative AI for autonomous or largely autonomous weaponry. They acknowledged the hallucination problem but said they planned to address it by running two LLMs at the same time and checking whether their results largely agreed.
One possibly-hallucinating AI to fact-check another possibly-hallucinating AI.
Which might reduce the potential for disastrous errors, though not eliminate it. And if the LLMs are trained on the same data sets with the same biases, not by much.
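For what it's worth, here is a minimal sketch of what that kind of cross-check might look like. Everything here is an assumption: the `cross_check` function, the similarity threshold, and the two sample outputs are all hypothetical stand-ins, and crude text similarity is no substitute for real semantic comparison. The point is just how thin the safeguard is: if both models share the same blind spots, they can confidently agree on the same hallucination.

```python
import difflib


def cross_check(answer_a: str, answer_b: str, threshold: float = 0.9) -> bool:
    """Return True only if two model outputs largely agree.

    Agreement is measured by crude lexical similarity (difflib ratio).
    Two models trained on the same biased data could still agree on
    the same wrong answer, so this is a weak safeguard at best.
    """
    ratio = difflib.SequenceMatcher(
        None, answer_a.strip().lower(), answer_b.strip().lower()
    ).ratio()
    return ratio >= threshold


# Hypothetical outputs from two models given the same prompt.
out_a = "Target identified at the northern crossing"
out_b = "No target identified in the area"

if cross_check(out_a, out_b):
    print("outputs agree; proceed")
else:
    print("outputs disagree; escalate to a human")
```

In this example the two outputs conflict, so the check fails and the decision is kicked to a human. The worrying case is the opposite one, where both models produce the same confident error and the check passes.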
What could go wrong?
Quite frankly, insanity by any other name would smell as sweet, and we can all participate and be complicit in the delusions of grandeur. Dangerous stuff - hope they know what they're unleashing from Pandora's bag of tricks. One EMP episode leads to more, and then it's mostly ALL done!