
Nevilledog

(51,157 posts)
Wed Jun 22, 2022, 02:27 PM Jun 2022

FTC Issues Report On Using AI to Address Online Harms



Tweet text:

Justin Hendrix
@justinhendrix
Yesterday the @FTC released an incredibly thoughtful report on "Combatting Online Harms Through Innovation" that raises both the opportunities and challenges in using AI & machine learning to deal with problems on the internet. Here's the quick skinny:

techpolicy.press
FTC Issues Report On Using AI to Address Online Harms
AI must be cautiously applied so as not to exacerbate problems that are themselves often a result of automated systems, says report.
12:38 PM · Jun 17, 2022


https://techpolicy.press/ftc-issues-report-on-using-ai-to-address-online-harms/


*snip*

The report includes a range of recommendations, and argues that despite the “intense focus on the role and responsibility of social media platforms, it is often lost that other private actors — as well as government agencies — could use AI to address these harms,” including “search engines, gaming platforms, messaging apps, marketplaces and app stores, but also those at other layers of the tech stack such as internet service providers, content distribution networks, domain registrars, cloud providers, and web browsers.”

The first recommendation is to recognize that AI detection tools are “blunt instruments,” with “built-in imprecision,” and that there is a danger in over-relying on such tools. There are also political ramifications to consider, including tradeoffs: “blocking more content that might incite extremist violence (e.g., via detection of certain terms or imagery) can result in also blocking members of victimized communities from discussing how to address such violence. This fact explains in part why each specified harm needs individual consideration; the trade-offs we may be willing to accept may differ for each one.” Imprecision, context and meaning, and bias and discrimination, for instance, all need to be weighed.
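
To make that tradeoff concrete, here is a small illustrative sketch (not from the report; the posts, scores, and thresholds are invented) showing how a single-threshold automated filter trades missed extremist content against wrongly blocked counter-speech from affected communities:

posts = [
    # (score from a hypothetical upstream model, actually extremist?, description)
    (0.95, True, "explicit call to violence"),
    (0.80, True, "coded incitement using extremist imagery"),
    (0.75, False, "survivor describing an attack and urging a response"),
    (0.60, False, "organizer quoting the threats in order to condemn them"),
    (0.20, False, "unrelated political commentary"),
]

def evaluate(threshold):
    # Block everything at or above the threshold, then count what was caught
    # versus what was swept up by mistake.
    blocked = [p for p in posts if p[0] >= threshold]
    caught = sum(1 for p in blocked if p[1])
    collateral = sum(1 for p in blocked if not p[1])
    return caught, collateral

for threshold in (0.9, 0.7, 0.5):
    caught, collateral = evaluate(threshold)
    print(f"threshold={threshold}: extremist posts blocked={caught}, "
          f"counter-speech wrongly blocked={collateral}")

Lowering the threshold catches more incitement but also silences more of the people the policy is meant to protect, which is the report's point that the acceptable tradeoff differs harm by harm.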

The second recommendation revolves around ‘humans in the loop,’ or human oversight of AI systems. The FTC acknowledges that “[s]imply placing moderators, trust and safety professionals, and other people in AI oversight roles is insufficient,” and that human oversight “also shouldn’t serve as a way to legitimize such systems or for their operators to avoid accountability.”
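
One common way engineers wire a person into such a system is to let the model act only on its most confident calls and queue everything ambiguous for a human reviewer. The sketch below is purely illustrative (the thresholds and names are assumptions, not anything the FTC prescribes), and, as the report stresses, this routing by itself neither legitimizes the system nor absolves its operators:

from dataclasses import dataclass

@dataclass
class Decision:
    post_id: str
    score: float   # model's confidence that the post violates policy
    action: str    # "remove", "keep", or "human_review"

AUTO_REMOVE = 0.98  # act automatically only when the model is very sure
AUTO_KEEP = 0.05    # ignore posts the model is very sure are fine

def triage(post_id, score):
    if score >= AUTO_REMOVE:
        return Decision(post_id, score, "remove")
    if score <= AUTO_KEEP:
        return Decision(post_id, score, "keep")
    # Everything ambiguous goes to a person rather than being auto-enforced.
    return Decision(post_id, score, "human_review")

print(triage("post-1", 0.99))
print(triage("post-2", 0.40))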

The third recommendation addresses transparency and accountability, defined as “measures that provide more and meaningful information about these systems and that, ideally, enable accountability, which involves measures that make companies more responsible for outcomes and impact.” A key plank of this recommendation is researcher access to platform data. The report also puts forward assessments and audits, as well as protections for auditors and employees.
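
As a rough illustration of what “meaningful information” about these systems could look like in practice, here is a hypothetical structured log entry for an automated moderation decision, the kind of record that audits or vetted researcher access might draw on (the field names are invented for the example, not an FTC format):

import json
from datetime import datetime, timezone

def audit_record(post_id, model_version, score, action, reviewer=None):
    # One possible shape for a moderation-decision record.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "post_id": post_id,
        "model_version": model_version,
        "score": score,
        "action": action,            # e.g. "remove", "keep", "human_review"
        "human_reviewer": reviewer,  # filled in when a person made the call
    }

print(json.dumps(audit_record("post-1", "toxicity-v3.2", 0.87, "human_review"),
                 indent=2))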

*snip*