War is messy. AI can't handle it. (Bulletin of the Atomic Scientists)
Published today. See these related threads from yesterday and a couple of days ago:
ABC News: Security was an afterthought with AI developers. Text- and image-based models are "pitiable"
DOD Announces Establishment of Generative AI Task Force - press release 8/10/2023
What does not exist within the confines of the Palantir demo is an enemy with any agency at all, a contingency in which the information environment is not completely dominated by the operator of the Artificial Intelligence Platform, or consideration that data used to train the underlying system functionality might be messy, disrupted, or incomplete, or reflect problematic biases. Ingrained into the demo is the assumption of a pristine environment with perfect information and technological performance and an adversary that simply accepts those circumstances. It is a hollow view of war.
In terms of who has agency in conflict, at least within the context of Palantir's Artificial Intelligence Platform demo, only one side gets to act, employing electronic jamming technology and benefiting from sensors and intelligence-fusion capabilities linked to the software, appearing as a sort of sovereign, external observer above the battlefield. It is a confrontation against an adversary whose forces remain as stagnant orange blocks on the screen. Accordingly, significant questions quickly emerge. For example, what if the broader linked intelligence collection system is disrupted? Or suppose the complex architecture supporting the seamless bond between forces on the ground, surveillance drones, and the chain of command is broken? These types of questions echo past worries about technologically enabled military command systems: decision-making paralysis, tendencies toward over-centralization, and forces made over-reliant on technology that is destined, at some point, to break down. As scholars of war and international security have argued, the dreams of information communications technology- or AI-enabled military solutionism are likely overstated.
Deployment-stage problems. After the model is deployed, problems will persist. Even the most advanced technical systems (particularly large language model-enabled technology, which is known to act unexpectedly when presented with situations not included in its training data) should not be considered immune from post-deployment issues. More worryingly, studies show that AI models can be susceptible to adversarial attacks even when the attacker has only query access to the model. A well-known category, physical adversarial attacks (adversarial actions against AI models executed in the real world rather than in the digital domain), can cause the AI to misinterpret or misclassify what it is sensing. Studies highlight that even small-magnitude perturbations added to the input may cause significant deceptions. For instance, just by placing stickers on the road, researchers could fool Tesla's Autopilot into driving into oncoming traffic.
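The "small-magnitude perturbations" the article describes can be made concrete with a minimal sketch of the fast gradient sign method (FGSM), the classic digital-domain form of such attacks: nudge each input feature by a tiny amount in the direction that most increases the model's loss. Everything below (the three-feature logistic classifier, its weights, the input, and the epsilon) is invented for illustration; real attacks target deep networks and, in the physical variants, camera inputs.

```python
import numpy as np

# Toy FGSM-style attack on a fixed logistic classifier.
# Weights, input, and epsilon are hypothetical, chosen for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -3.0, 1.0])  # hypothetical model weights
b = 0.5                          # hypothetical bias

def predict(x):
    # Predicts class 1 when the model's probability exceeds 0.5.
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y_true, eps):
    # For logistic loss, grad_x loss = (sigmoid(w @ x + b) - y_true) * w,
    # so the FGSM step x + eps * sign(grad_x loss) reduces to:
    residual = sigmoid(w @ x + b) - y_true
    return x + eps * np.sign(residual * w)

x = np.array([0.1, -0.1, 0.1])
print(predict(x))             # correctly classified as 1
x_adv = fgsm(x, y_true=1, eps=0.2)
print(predict(x_adv))         # a +/-0.2 nudge per feature flips it to 0
```

The point matches the article's: the perturbation is small relative to the input, yet it crosses the decision boundary, because the attack concentrates its budget along the model's most sensitive directions.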
Much, much more at the link.
Including the worrisome detail that the Air Force trained a targeting AI on synthetic (computer-generated) data: the targeting AI believed it had achieved 90% accuracy, when it was actually at 25%. Oops. Recent studies have shown that AI fed AI-generated data tends to go haywire: https://www.democraticunderground.com/100218143970 . The Air Force encountered that problem in 2021, and I have to wonder whether they figured out that the synthetic data was the problem, or whether our military has trained a lot of our AI weaponry on synthetic data and just hasn't caught it going haywire yet.
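That 90%-vs-25% gap is a classic distribution-shift failure: a model evaluated on data from the same tidy distribution it was trained on looks far better than it performs on messier real-world data. Here is a toy sketch of the effect; the Gaussian clusters, nearest-centroid classifier, and shift parameters are all invented for illustration and have nothing to do with the actual Air Force system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# "Synthetic" training data: two cleanly separated Gaussian clusters.
# All numbers are toy assumptions, not anything from the article.
X_syn = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

# Nearest-centroid classifier fit on the synthetic set.
c0 = X_syn[y == 0].mean(axis=0)
c1 = X_syn[y == 1].mean(axis=0)

def predict(X):
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

# Held-out data from the SAME synthetic distribution: accuracy looks superb.
X_syn_test = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
acc_syn = (predict(X_syn_test) == y).mean()

# "Real" data: the same two classes, but shifted closer together and noisier.
X_real = np.vstack([rng.normal(-0.5, 2.0, (n, 2)), rng.normal(0.5, 2.0, (n, 2))])
acc_real = (predict(X_real) == y).mean()

print(f"accuracy on synthetic test set: {acc_syn:.0%}")
print(f"accuracy on real-world data:    {acc_real:.0%}")
```

The synthetic test score is near-perfect while the shifted "real" score collapses toward chance, which is exactly why evaluating on the same synthetic distribution you trained on can hide a broken model.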
Here's that Palantir demo, which I first saw in April when someone posted it on Reddit...and I'm editing to link to my May 10 thread about Palantir's AI: https://www.democraticunderground.com/100217901033 .