
highplainsdem

(54,857 posts)
Mon Feb 27, 2023, 02:43 PM

Great. OpenAI's CEO is a doomsday prepper who thinks AI might attack us & has his hideout ready.

Just the person we want in charge of society- and economy-disrupting AI.

I thought OpenAI sounded more than a bit loony when I posted a couple of threads - https://democraticunderground.com/100217677504 and https://democraticunderground.com/100217678412 - about statements I found on their website.

But this morning, while checking for tweets about ChatGPT being down, I ran across one linking to a Futurism article from a few weeks ago - https://futurism.com/the-byte/openai-ceo-survivalist-prepper - that refers back to a 2016 New Yorker profile of Sam Altman.

"I prep for survival," said Altman, per the profile, while sitting around a fire pit at a Y Combinator party.

The tech wunderkind explained to the assembled partygoers that he's freaked by the concept of the world ending and wants to prepare to survive it. The two scenarios he gave as examples, and we promise we're not making this up, were a "super contagious" lab-modified virus "being released" onto the world population and "AI that attacks us."

"I try not to think about it too much," the OpenAI CEO told the reportedly uncomfortable startup founders surrounding him at that forgotten Silicon Valley gathering. "But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to."

So yeah, that's the guy in charge of the company that was founded with the philanthropic goal of promoting responsible AI, then decided to go for-profit, and is now making money hand over fist on super-sophisticated neural networks that many fear will take their jobs.



From the New Yorker, https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny and archive link https://archive.ph/3FLF8 :

“Well, I like racing cars,” Altman said. “I have five, including two McLarens and an old Tesla. I like flying rented planes all over California. Oh, and one odd one—I prep for survival.” Seeing their bewilderment, he explained, “My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources.” The Shypmates looked grave. “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

-snip-

OpenAI, the nonprofit that Altman founded with Elon Musk, is a hedged bet on the end of human predominance—a kind of strategic-defense initiative to protect us from our own creations. OpenAI was born of Musk’s conviction that an A.I. could wipe us out by accident. The problem of managing powerful systems that lack human values is exemplified by “the paperclip maximizer,” a scenario that the Swedish philosopher Nick Bostrom raised in 2003. If you told an omnicompetent A.I. to manufacture as many paper clips as possible, and gave it no other directives, it could mine all of Earth’s resources to make paper clips, including the atoms in our bodies—assuming it didn’t just kill us outright, to make sure that we didn’t stop it from making more paper clips. OpenAI was particularly concerned that Google’s DeepMind Technologies division was seeking a supreme A.I. that could monitor the world for competitors. Musk told me, “If the A.I. that they develop goes awry, we risk having an immortal and superpowerful dictator forever.” He went on, “Murdering all competing A.I. researchers as its first move strikes me as a bit of a character flaw.”

-snip-

A.I. technology hardly seems almighty yet. After Microsoft launched a chatbot, called Tay, bullying Twitter users quickly taught it to tweet such remarks as “gas the kikes race war now”; the recently released “Daddy’s Car,” the first pop song created by software, sounds like the Beatles, if the Beatles were cyborgs. But, Musk told me, “just because you don’t see killer robots marching down the street doesn’t mean we shouldn’t be concerned.” Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana serve millions as aides-de-camp, and simultaneous-translation and self-driving technologies are now taken for granted. Y Combinator has even begun using an A.I. bot, Hal9000, to help it sift admission applications: the bot’s neural net trains itself by assessing previous applications and those companies’ outcomes. “What’s it looking for?” I asked Altman. “I have no idea,” he replied. “That’s the unsettling thing about neural networks—you have no idea what they’re doing, and they can’t tell you.”

-snip-

Altman felt that OpenAI’s mission was to babysit its wunderkind until it was ready to be adopted by the world. He’d been reading James Madison’s notes on the Constitutional Convention for guidance in managing the transition. “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board,” he said. “Because if I weren’t in on this I’d be, like, Why do these fuckers get to decide what happens to me?”

-snip-



Here's part of a later paragraph, what Altman said about tech: "These phones already control us. The merge has begun—and a merge is our best scenario. Any version without a merge will have conflict: we enslave the A.I. or it enslaves us. The full-on-crazy version of the merge is we get our brains uploaded into the cloud. I’d love that. We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever. What a time to be alive!"

Again, this is who's in charge of the release of ChatGPT and how it will be used.

See the second thread I linked to above - https://democraticunderground.com/100217678412 - and especially reply 6 there for more of OpenAI's newest mission statement, and the response to it from an expert on LLMs (large language models, like ChatGPT).

Most of us have heard plenty about Bill Gates, Mark Zuckerberg and Elon Musk. I'd guess few of us know very much about Sam Altman.
9 replies

EYESORE 9001

(27,905 posts)
1. Here's hoping this guy hasn't already embedded malicious code here & there
Mon Feb 27, 2023, 02:48 PM

and wants to see his apocalyptic predictions come true. Maybe have some yuks and make some bucks too.

highplainsdem

(54,857 posts)
5. These "geniuses" don't sound terribly stable, do they? Did you read the
Mon Feb 27, 2023, 08:18 PM

entire, very long New Yorker profile?

Renew Deal

(83,654 posts)
2. You should watch the show Altered Carbon
Mon Feb 27, 2023, 02:55 PM

It is basically about consciousness being backed up and refitted into new "sleeves" which are synthetic bodies. I think it's set around 500 years in the future.

Personally, I think it's good that the people creating these things are thinking about the dangers they can pose (setting aside the general rich boy, Burning Man, paranoia of it all on their part).

highplainsdem

(54,857 posts)
6. Very old idea in science fiction. Same with dangerous robots/AI.
Mon Feb 27, 2023, 08:42 PM

I'd be very surprised if all, or almost all, of the people working in Silicon Valley weren't familiar with those ideas since childhood.

I'd also be very surprised if many of those techies were doomsday preppers like Sam Altman. That's weird.

It's also very weird that he's given some thought to how many people he'd be willing to let die or personally kill or have killed to save his loved ones - a subject he brought up in the New Yorker interview. The number he gave was 100,000.

I don't know about you, but that isn't a subject I think about, let alone one I'd bring up in conversation, especially in an interview.

getagrip_already

(17,636 posts)
3. Wouldn't AI know about his bunker and have a plan for it?
Mon Feb 27, 2023, 03:12 PM

He would likely be among the primary targets when they attack.

They know everything he does, everything he says and hears/reads, and all of his hidey-holes.

They are playing him for a chump.





highplainsdem

(54,857 posts)
7. OpenAI/ChatGPT does save all input to add to its dataset, as I understand it.
Mon Feb 27, 2023, 08:49 PM

So there are definitely privacy concerns.

And anything people enter with IDs, passwords, Social Security numbers, etc., could turn up later in one of the chatbot's hallucinations when it responds to someone else.

hatrack

(62,049 posts)
8. Guardian - Tech writer met w. billionaires; all they cared about was bunker plans
Mon Feb 27, 2023, 09:14 PM

EDIT

Still, sometimes a combination of morbid curiosity and cold hard cash is enough to get me on a stage in front of the tech elite, where I try to talk some sense into them about how their businesses are affecting our lives out here in the real world. That’s how I found myself accepting an invitation to address a group mysteriously described as “ultra-wealthy stakeholders”, out in the middle of the desert. A limo was waiting for me at the airport. As the sun began to dip over the horizon, I realised I had been in the car for three hours. What sort of wealthy hedge-fund types would drive this far from the airport for a conference? Then I saw it. On a parallel path next to the highway, as if racing against us, a small jet was coming in for a landing on a private airfield. Of course.

The next morning, two men in matching Patagonia fleeces came for me in a golf cart and conveyed me through rocks and underbrush to a meeting hall. They left me to drink coffee and prepare in what I figured was serving as my green room. But instead of me being wired with a microphone or taken to a stage, my audience was brought in to me. They sat around the table and introduced themselves: five super-wealthy guys – yes, all men – from the upper echelon of the tech investing and hedge-fund world. At least two of them were billionaires. After a bit of small talk, I realised they had no interest in the speech I had prepared about the future of technology. They had come to ask questions.

EDIT

They started out innocuously and predictably enough. Bitcoin or ethereum? Virtual reality or augmented reality? Who will get quantum computing first, China or Google? Eventually, they edged into their real topic of concern: New Zealand or Alaska? Which region would be less affected by the coming climate crisis? It only got worse from there. Which was the greater threat: global warming or biological warfare? How long should one plan to be able to survive with no outside help? Should a shelter have its own air supply? What was the likelihood of groundwater contamination? Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system, and asked: “How do I maintain authority over my security force after the event?” The event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, solar storm, unstoppable virus, or malicious computer hack that takes everything down.

EDIT

These people once showered the world with madly optimistic business plans for how technology might benefit human society. Now they’ve reduced technological progress to a video game that one of them wins by finding the escape hatch. Will it be Jeff Bezos migrating to space, Thiel to his New Zealand compound, or Mark Zuckerberg to his virtual metaverse? And these catastrophising billionaires are the presumptive winners of the digital economy – the supposed champions of the survival-of-the-fittest business landscape that’s fuelling most of this speculation to begin with. What I came to realise was that these men are actually the losers. The billionaires who called me out to the desert to evaluate their bunker strategies are not the victors of the economic game so much as the victims of its perversely limited rules. More than anything, they have succumbed to a mindset where “winning” means earning enough money to insulate themselves from the damage they are creating by earning money in that way. It’s as if they want to build a car that goes fast enough to escape from its own exhaust.

EDIT

https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff

highplainsdem

(54,857 posts)
9. Thanks! I vaguely remember skimming some of that article months
Mon Feb 27, 2023, 10:58 PM

ago. My impression then was that they were afraid people coping with the disasters they helped create would be after them. Guilt - whether or not they'd ever acknowledge it - underlying the paranoia.

In any crisis like the ones they're worried about, most of them would fall victim to their own security forces.

Altman is supposedly idealistic and liberal, though. He's talked about a UBI, though he's under the impression that it wouldn't have to be much:

"The thing most people get wrong is that if labor costs go to zero the cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life."


This is the guy who's making decisions that will have a huge impact on our future.
