General Discussion
War is messy. AI can't handle it. (Bulletin of the Atomic Scientists)
Published today. See these related threads from yesterday and a couple of days ago:
ABC News: Security was an afterthought with AI developers. Text- and image-based models are "pitiable"
https://www.democraticunderground.com/100218179000
DOD Announces Establishment of Generative AI Task Force - press release 8/10/2023
https://www.democraticunderground.com/100218175847
https://thebulletin.org/2023/08/war-is-messy-ai-cant-handle-it/
In April of 2023, technology company Palantir released a demo of large language model (LLM)-enabled battle management software called the Artificial Intelligence Platform (AIP) for Defense. The platform links interactive AI-enabled chat-based functionality with seemingly perfect intelligence collection and query. This is paired with course-of-action generation capabilities for military command decision-making.
-snip-
What does not exist within the confines of the Palantir demo is an enemy with any agency at all, a contingency in which the information environment is not completely dominated by the operator of the Artificial Intelligence Platform, or consideration that data used to train the underlying system functionality might be messy, disrupted, or incomplete, or reflect problematic biases. Ingrained into the demo is the assumption of a pristine environment with perfect information and technological performance and an adversary that simply accepts those circumstances. It is a hollow view of war.
-snip-
In terms of who has agency in conflict, at least within the context of Palantir's Artificial Intelligence Platform demo, only one side gets to act, employing electronic jamming technology and benefiting from sensors and intelligence-fusion capabilities linked to the software that appear as a sort of sovereign or external observer above the battlefield. It is a confrontation against an adversary whose forces remain as stagnant orange blocks on the screen. Accordingly, significant questions quickly emerge. For example, what if the broader linked intelligence collection system is disrupted? Or suppose the complex architecture supporting the seamless bond between forces on the ground, surveillance drones, and the chain of command is broken? These types of questions echo past worries about technologically enabled military command systems regarding issues of decision-making paralysis, of tendencies towards over-centralization, or of making forces over-reliant on technology that is destined to, at some point, break down. As scholars of war and international security have argued, the dreams of information communications technology or AI-enabled military solutionism are likely overstated.
-snip-
Deployment-stage problems. After the deployment of the model, problems will persist. Even the most advanced technical systems, particularly large language model-enabled technology (which is known to act unexpectedly when presented with situations not included in training datasets), should not be considered immune from post-deployment issues. More worryingly, studies show that AI models can be susceptible to adversarial attacks even when the attacker only has query access to the model. A well-known category of attacks, called physical adversarial attacks (adversarial actions against AI models that are executed in the real world as opposed to the digital domain), can cause the AI to misinterpret or misclassify what it is sensing. Studies highlight that even small-magnitude perturbations added to the input may cause significant deceptions. For instance, just with the placement of stickers on the road, researchers could fool Tesla's Autopilot into driving into oncoming traffic.
-snip-
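The "small-magnitude perturbations" the excerpt describes are easy to demonstrate on a toy model. The sketch below is entirely hypothetical (a trivial linear classifier, invented numbers): it applies an FGSM-style attack, stepping each input feature against the sign of the model's input gradient, which for a linear model is just the weight vector. Real attacks on image models work the same way across thousands of pixel dimensions, which is why the per-pixel change can stay imperceptibly small.

```python
# Hypothetical sketch of a small-perturbation (FGSM-style) adversarial attack
# on a toy linear classifier. Assumes white-box access to the model weights.
import numpy as np

# Toy linear classifier: score = w . x + b, predict class 1 if score > 0
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, -0.2, 0.3])   # clean input, correctly classified as 1
print(predict(x))                 # 1

# FGSM step: move each feature a small amount against the sign of the
# gradient of the score w.r.t. the input (for a linear model, that's w).
eps = 0.4                         # perturbation magnitude per feature
x_adv = x - eps * np.sign(w)      # push the score toward the other class

print(predict(x_adv))             # 0 -- the small perturbation flips the label
```

For deep networks the same recipe uses the backpropagated gradient instead of a fixed weight vector, but the core point the article makes survives: a perturbation too small to matter to a human can fully flip a model's output.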
Much, much more at the link.
Including the worrisome info that the Air Force trained a targeting AI on synthetic data (computer-generated data used to build the training set), and the targeting AI believed it had achieved 90% accuracy when it was actually at 25%. Oops. There have been studies lately showing that AIs fed AI-generated data tend to go haywire: https://www.democraticunderground.com/100218143970 . The Air Force encountered that problem in 2021, and I have to wonder whether they figured out the problem was the synthetic data, or whether our military has trained a lot of our AI weaponry on synthetic data and just hasn't caught it going haywire yet.
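That believed-90%, actual-25% gap is a classic distribution-shift failure, and it can be reproduced in miniature. The sketch below is entirely hypothetical (invented data, a trivial nearest-centroid classifier, numbers chosen for illustration, nothing here reflects the actual Air Force system): a model fit on clean synthetic clusters looks near-perfect when tested on more synthetic data, then falls apart on noisier "real" data.

```python
# Hypothetical illustration of the synthetic-data trap: evaluate on the same
# clean synthetic distribution you trained on and accuracy looks great;
# evaluate on messier real-world data and it collapses.
import numpy as np

rng = np.random.default_rng(42)

# "Synthetic" training data: two cleanly separated 2-D clusters
X_syn = np.vstack([rng.normal(-2, 0.3, (200, 2)), rng.normal(2, 0.3, (200, 2))])
y_syn = np.array([0] * 200 + [1] * 200)

# "Real" data: same labels, but the clusters are shifted and far noisier
X_real = np.vstack([rng.normal(-0.3, 1.5, (200, 2)), rng.normal(0.3, 1.5, (200, 2))])
y_real = y_syn.copy()

# Fit a nearest-centroid classifier on the synthetic data
c0 = X_syn[y_syn == 0].mean(axis=0)
c1 = X_syn[y_syn == 1].mean(axis=0)

def accuracy(X, y):
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

print(f"accuracy on synthetic test data: {accuracy(X_syn, y_syn):.0%}")   # near-perfect
print(f"accuracy on real data:           {accuracy(X_real, y_real):.0%}") # far worse
```

The model isn't lying about its synthetic-data accuracy; the synthetic distribution simply wasn't the one it would face in the field, which is exactly the gap the Air Force reportedly hit.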
Here's that Palantir demo, which I first saw in April when someone posted it on Reddit...and I'm editing to link to my May 10 thread about Palantir's AI: https://www.democraticunderground.com/100217901033 .
War is messy. AI can't handle it. (Bulletin of the Atomic Scientists) (Original Post) by highplainsdem, Aug 2023
1. I've edited the last paragraph of the OP to link to a May 10 thread on Palantir's AI. (highplainsdem, 51,740 posts, Aug 2023)