
Thu Aug 1, 2019, 09:58 AM

Researchers, scared by their own work, hold back "deepfakes for text" AI

OpenAI's GPT-2 algorithm shows machine learning could ruin online content for everyone.

https://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/

The performance of the system was so disconcerting that the researchers are releasing only a reduced version of GPT-2, trained on a much smaller text corpus. In a blog post on the project and the decision, researchers Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever wrote:

Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.


OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal "mafia"—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of Y Combinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. Brockman now serves as OpenAI's CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for "fake news" publishers and potentially spread misinformation and undermine public debate, GPT-2's output certainly qualifies as concerning. Unlike other text generation "bot" models, such as those based on Markov chain algorithms, the GPT-2 "bot" did not lose track of what it was writing about as it generated output, keeping everything in context.
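The Markov-chain contrast can be made concrete: a word-level Markov generator conditions only on its current state (the last word, or last few words), so any topic that falls outside that window is simply forgotten. A minimal illustrative sketch (the corpus and function names are invented for this example):

```python
import random
from collections import defaultdict

def build_markov_chain(text, order=1):
    """Map each `order`-word state to the words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=15, seed=0):
    """Random-walk the chain; the generator 'remembers' only the current state."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
        state = tuple(out[-len(state):])  # everything earlier is forgotten
    return " ".join(out)

corpus = ("the unicorn lived in the andes . the scientists found the unicorn . "
          "the scientists wrote a report about the discovery .")
chain = build_markov_chain(corpus, order=1)
print(generate(chain))
```

Because the next word depends only on the current one-word state, the output wanders between sentences that happen to share words; GPT-2's ability to keep a whole passage in context is exactly what this kind of model lacks.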

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student's report on the causes of the US Civil War.


We truly won’t be able to believe our eyes and ears one day....

7 replies, 595 views


Response to Roland99 (Original post)

Thu Aug 1, 2019, 10:00 AM

1. And then there's this, too...




Response to Roland99 (Original post)

Thu Aug 1, 2019, 10:02 AM

2. this is Big Brother.....



Response to Roland99 (Original post)

Thu Aug 1, 2019, 10:07 AM

3. There is only one solution: Doing away with anonymity on the internet.

If a story gets posted, it must be clear who posted it and who shall be held accountable if it turns out to be false.



Response to DetlefK (Reply #3)

Thu Aug 1, 2019, 10:54 AM

5. Or, an independent third party authenticator service.

Just spitballing ...
Legit news sources would be accompanied by a crypto token that could be authenticated through the third party.
Intelligent browsers could screen out stories/images/sound files... that lack the authenticated token - or mark them as "dubious."
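The token scheme sketched above can be illustrated as a signed-content check. A toy sketch (all names and the key are hypothetical; a real authenticator service would use public-key signatures such as Ed25519, so that browsers verify with a public key and never hold the signing secret — HMAC is used here only to keep the example in the standard library):

```python
import hmac
import hashlib

# Hypothetical shared secret held by the third-party authenticator.
AUTHENTICATOR_KEY = b"shared-secret-held-by-the-authenticator"

def issue_token(article_bytes):
    """Authenticator signs the article content; the publisher ships the token with it."""
    return hmac.new(AUTHENTICATOR_KEY, article_bytes, hashlib.sha256).hexdigest()

def verify_token(article_bytes, token):
    """Browser-side check: recompute the tag and compare in constant time."""
    expected = hmac.new(AUTHENTICATOR_KEY, article_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

article = b"Legit news story body ..."
token = issue_token(article)
print(verify_token(article, token))            # authentic content passes
print(verify_token(b"tampered story", token))  # altered content would be marked "dubious"
```

Any edit to the content invalidates the token, so a browser could flag unverifiable stories exactly as the post suggests.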

There's probably a business case for that.

There are ways to screen out deceptions - just as there are ways to prevent robocalling - but because it's time-consuming and not considered "profitable," it will likely have to be mandated by regulation.

If things are fucked up - it's because someone somewhere is making $$ on the fuckedupedness.



Response to harumph (Reply #5)

Thu Aug 1, 2019, 11:23 AM

6. Integrating some sort of blockchain technology into all online publishing.

Including social media.
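The kernel of the blockchain idea for publishing is a tamper-evident log: each published item commits to the hash of the previous one, so rewriting any past entry breaks every later link. A minimal sketch (the record layout is invented for illustration; there is no consensus or distribution layer here):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def _entry_hash(content, prev_hash):
    payload = json.dumps({"content": content, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_entry(log, content):
    """Append a publication record whose hash covers the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    log.append({"content": content, "prev": prev_hash,
                "hash": _entry_hash(content, prev_hash)})
    return log

def verify_log(log):
    """Recompute every link; returns False if any entry was altered."""
    prev = GENESIS
    for rec in log:
        if rec["prev"] != prev or rec["hash"] != _entry_hash(rec["content"], prev):
            return False
        prev = rec["hash"]
    return True

log = []
add_entry(log, "Story A")
add_entry(log, "Story B")
print(verify_log(log))        # intact history verifies
log[0]["content"] = "edited"  # tamper with an old story
print(verify_log(log))        # the chain no longer verifies
```

This shows the tamper-evidence property only; it says nothing about whether the original content was true, which is the harder problem raised elsewhere in the thread.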



Response to DetlefK (Reply #3)

Thu Aug 1, 2019, 11:25 AM

7. How does that help?

How do I know some random Joe in Missouri isn't an AI in Russia? How do I know Mitch McConnell isn't a robot?

If an AI's blathering is consistent, how is that any different than the blathering of, say, our asshole president or any hate radio troll?

What eliminating anonymity would do is make it much harder for people to tell their own truths, especially in places or situations where such honesty might cost a person their job, their freedom, or even their life.



Response to Roland99 (Original post)

Thu Aug 1, 2019, 10:14 AM

4. When all the powers that be

perfectly program our brains, this won't make any difference. Our masters will guide us wherever they/it want to take us... but who/what will be our masters? I'll leave it right there....

