
highplainsdem

(49,172 posts)
Mon Aug 14, 2023, 12:39 PM

War is messy. AI can't handle it. (Bulletin of the Atomic Scientists)

Published today. See these related threads from yesterday and a couple of days ago:
ABC News: Security was an afterthought with AI developers. Text- and image-based models are "pitiable"
https://www.democraticunderground.com/100218179000
DOD Announces Establishment of Generative AI Task Force - press release 8/10/2023
https://www.democraticunderground.com/100218175847


https://thebulletin.org/2023/08/war-is-messy-ai-cant-handle-it/

In April 2023, the technology company Palantir released a demo of its large language model (LLM)-enabled battle-management software, the Artificial Intelligence Platform (AIP) for Defense. The platform links interactive, AI-enabled, chat-based functionality with seemingly perfect intelligence collection and query, paired with course-of-action generation for military command decision-making.

-snip-

What does not exist within the confines of the Palantir demo is an enemy with any agency at all, a contingency in which the “information environment” is not completely dominated by the operator of the Artificial Intelligence Platform, or consideration that data used to train the underlying system functionality might be messy, disrupted, or incomplete, or reflect problematic biases. Ingrained into the demo is the assumption of a pristine environment with perfect information and technological performance and an adversary that simply accepts those circumstances. It is a hollow view of war.

-snip-

In terms of who has agency in conflict, at least within the context of Palantir's Artificial Intelligence Platform demo, only one side gets to act: employing electronic jamming, benefiting from sensors and intelligence-fusion capabilities linked to the software, and appearing as a sort of sovereign or external observer above the battlefield. It is a confrontation against an adversary whose forces remain stagnant orange blocks on the screen. Significant questions quickly emerge. What if the broader linked intelligence-collection system is disrupted? What if the complex architecture supporting the seamless bond between forces on the ground, surveillance drones, and the chain of command is broken? These questions echo past worries about technologically enabled military command systems: decision-making paralysis, tendencies toward over-centralization, and forces over-reliant on technology that is destined, at some point, to break down. As scholars of war and international security have argued, the dreams of information-communications-technology- or AI-enabled military solutionism are likely overstated.

-snip-

Deployment-stage problems. After the model is deployed, problems will persist. Even the most advanced technical systems (particularly large language model-enabled technology, which is known to act unexpectedly when presented with situations not included in its training data) should not be considered immune from post-deployment issues. More worryingly, studies show that AI models can be susceptible to adversarial attacks even when the attacker has only query access to the model. A well-known category, physical adversarial attacks (adversarial actions against AI models executed in the real world rather than in the digital domain), can cause an AI to misinterpret or misclassify what it is sensing. Studies highlight that even small-magnitude perturbations added to the input can cause significant deceptions. For instance, just by placing stickers on the road, researchers could fool Tesla's Autopilot into driving into oncoming traffic.

-snip-


Much, much more at the link.
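
For anyone curious what the "small-magnitude perturbations" in that excerpt look like in practice, here's a minimal sketch of the general technique (the classic fast-gradient-sign attack) in Python/PyTorch. To be clear, this is my own illustration, not code from any of the studies the article cites, and the model, image, and epsilon value are all arbitrary stand-ins:

# Minimal sketch of a small-perturbation adversarial attack ("FGSM").
# Illustrative only -- not the code from any study cited in the article.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    # Nudge each pixel by at most epsilon in the direction that most
    # increases the model's loss, then return the perturbed image.
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The change is tiny and often invisible to a human, but it is
    # aimed exactly where the model is most sensitive.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# A random stand-in image and label, just to show the call:
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())  # predictions may differ

The point is that the attacker doesn't need to blind the sensor or spoof the whole feed; a nudge the size of a sticker, aimed at the model's blind spot, can be enough.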

Including the worrisome info that the Air Force trained a targeting AI on synthetic data (computer-generated data used in place of real-world examples to build the training set), and the targeting AI believed it had achieved 90% accuracy when it was actually at 25%. Oops. Recent studies have shown that AI models fed AI-generated data tend to go haywire: https://www.democraticunderground.com/100218143970 . The Air Force ran into that problem in 2021, and I have to wonder whether they ever traced it back to the synthetic data, or whether our military has trained a lot of our AI weaponry on synthetic data and just hasn't caught it going haywire yet.
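
If it seems strange that a model could "believe" it was at 90% when it was really at 25%, here's a toy sketch of how that happens: score the model on the same clean synthetic distribution it was trained on and the number looks great; score it on messier real-world data and it collapses. Everything below is invented for illustration and has nothing to do with the actual Air Force system:

# Toy illustration of the synthetic-data trap. Made-up data and numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Synthetic" training data: clean, well-separated clusters.
X_syn = np.vstack([rng.normal(-2, 0.5, (500, 2)), rng.normal(2, 0.5, (500, 2))])
y_syn = np.array([0] * 500 + [1] * 500)

# "Real" data: the same task, but noisier and shifted -- the messiness
# the data generator never captured.
X_real = np.vstack([rng.normal(-0.5, 1.5, (500, 2)), rng.normal(1.0, 1.5, (500, 2))])
y_real = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_syn, y_syn)
print("accuracy on synthetic data:", accuracy_score(y_syn, model.predict(X_syn)))
print("accuracy on real data:", accuracy_score(y_real, model.predict(X_real)))

# Typical result: near-perfect on the synthetic set, far worse on the
# real one. The model never saw the real world, so its self-reported
# score is a statement about the simulation, not about reality.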

Here's that Palantir demo, which I first saw in April when someone posted it on Reddit. I'm editing to add a link to my May 10 thread about Palantir's AI: https://www.democraticunderground.com/100217901033


Reply #1 (highplainsdem, Aug 2023): I've edited the last paragraph of the OP to link to a May 10 thread on Palantir's AI.

