General Discussion
EU: AI Act must ban dangerous, AI-powered technologies in historic law (Amnesty International)
https://www.amnesty.org/en/latest/news/2023/09/eu-ai-act-must-ban-dangerous-ai-powered-technologies-in-historic-law/

Numerous states across the globe have deployed unregulated AI systems to assess welfare claims, monitor public spaces, or determine someone's likelihood of committing a crime. These technologies are often branded as technical fixes for structural issues such as poverty, sexism and discrimination. They use sensitive and often staggering amounts of data, which are fed into automated systems to decide whether or not individuals should receive housing, benefits, healthcare and education, or even be charged with a crime.
-snip-
"These systems are not used to improve people's access to welfare; they are used to cut costs. And when you already have systemic racism and discrimination, these technologies amplify harms against marginalized communities at much greater scale and speed," said Mher Hakobyan, Amnesty International's Advocacy Advisor on AI Regulation.
-snip-
In 2021, Amnesty International documented how an AI system used by the Dutch tax authorities had racially profiled recipients of childcare benefits. The tool was supposed to ascertain whether benefit claims were genuine or fraudulent, but the system wrongly penalized thousands of parents from low-income and immigrant backgrounds, plunging them into exorbitant debt and poverty.
-snip-
LudwigPastorius
(9,249 posts)

Much like climate change and nuclear or bioweapons, it's got to be a global effort.
Fear Of Missing Out is going to drive someone, somewhere to create a runaway AI intelligence explosion.
Ms. Toad
(34,124 posts)

None of the technologies listed are the new generative AI that is getting all of the attention. They are just algorithm-based data processing being rebranded to take advantage of the current momentum around AI.
That doesn't mean they don't need to be fixed, but suddenly labeling systems that have existed for years or decades as "AI," to capitalize on the skepticism about generative AI without distinguishing the two, is misleading.
(I know that connection is in the article, not one you created. But it still needs to be addressed as part of the distinct concerns about generative AI v. algorithm-based data processing that inappropriately embeds stereotyping in the algorithm and removes people from the review process.)