FTC Issues Report On Using AI to Address Online Harms
Tweet text:
Justin Hendrix
@justinhendrix
Yesterday the @FTC released an incredibly thoughtful report on "Combatting Online Harms Through Innovation" that raises both the opportunities and challenges in using AI & machine learning to deal with problems on the internet. Here's the quick skinny:
AI must be cautiously applied so as not to exacerbate problems that are themselves often a result of automated systems, says report.
12:38 PM · Jun 17, 2022
https://techpolicy.press/ftc-issues-report-on-using-ai-to-address-online-harms/
*snip*
The report includes a range of recommendations. It argues that, despite the intense focus on the role and responsibility of social media platforms, it is often overlooked that other private actors as well as government agencies could use AI to address these harms — not only search engines, gaming platforms, messaging apps, marketplaces, and app stores, but also those at other layers of the tech stack, such as internet service providers, content distribution networks, domain registrars, cloud providers, and web browsers.
The first recommendation is to recognize that AI detection tools are blunt instruments with built-in imprecision, and that over-reliance on such tools is dangerous. There are also political ramifications to consider. These involve tradeoffs: for example, blocking more content that might incite extremist violence (e.g., via detection of certain terms or imagery) can also block members of victimized communities from discussing how to address such violence. This is part of why each specified harm needs individual consideration; the trade-offs we may be willing to accept can differ for each one. Imprecision, context and meaning, and bias and discrimination all need to be weighed, for instance.
The second recommendation revolves around humans in the loop, or human oversight of AI systems. The FTC acknowledges that "[s]imply placing moderators, trust and safety professionals, and other people in AI oversight roles is insufficient," and that human oversight shouldn't serve as a way to legitimize such systems or allow their operators to avoid accountability.
The third recommendation addresses transparency and accountability: transparency is defined as measures that provide more and meaningful information about these systems, while accountability, ideally enabled by that transparency, involves measures that make companies more responsible for outcomes and impact. A key plank of this recommendation is researcher access to platform data. The report also puts forward assessments and audits, as well as protections for auditors and employees.
*snip*