General Discussion
Great. OpenAI's CEO is a doomsday prepper who thinks AI might attack us & has his hideout ready.
Just the person we want in charge of society- and economy-disrupting AI.
I thought OpenAI sounded more than a bit loony when I posted a couple of threads - https://democraticunderground.com/100217677504 and https://democraticunderground.com/100217678412 - about statements I found on their website.
But this morning, while checking for tweets about ChatGPT being down, I ran across one linking to a Futurism article from a few weeks ago - https://futurism.com/the-byte/openai-ceo-survivalist-prepper - that refers back to a 2016 New Yorker profile of Sam Altman.
The tech wunderkind explained to the assembled partygoers that he's freaked by the concept of the world ending and wants to prepare to survive it. The two scenarios he gave as examples, and we promise we're not making this up, were a "super contagious" lab-modified virus "being released" onto the world population and "AI that attacks us."
"I try not to think about it too much," the OpenAI CEO told the reportedly uncomfortable startup founders surrounding him at that forgotten Silicon Valley gathering. "But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to."
So yeah, that's the guy who is in charge of the company that was initially founded with the philanthropic goal of promoting responsible AI, and which subsequently decided to go for-profit and is now making money hand over fist on its super-sophisticated neural networks that many fear will take their jobs.
From the New Yorker, https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny and archive link https://archive.ph/3FLF8 :
-snip-
OpenAI, the nonprofit that Altman founded with Elon Musk, is a hedged bet on the end of human predominance - a kind of strategic-defense initiative to protect us from our own creations. OpenAI was born of Musk's conviction that an A.I. could wipe us out by accident. The problem of managing powerful systems that lack human values is exemplified by the paperclip maximizer, a scenario that the Swedish philosopher Nick Bostrom raised in 2003. If you told an omnicompetent A.I. to manufacture as many paper clips as possible, and gave it no other directives, it could mine all of Earth's resources to make paper clips, including the atoms in our bodies - assuming it didn't just kill us outright, to make sure that we didn't stop it from making more paper clips. OpenAI was particularly concerned that Google's DeepMind Technologies division was seeking a supreme A.I. that could monitor the world for competitors. Musk told me, "If the A.I. that they develop goes awry, we risk having an immortal and superpowerful dictator forever." He went on, "Murdering all competing A.I. researchers as its first move strikes me as a bit of a character flaw."
-snip-
A.I. technology hardly seems almighty yet. After Microsoft launched a chatbot, called Tay, bullying Twitter users quickly taught it to tweet such remarks as "gas the kikes race war now"; the recently released "Daddy's Car," the first pop song created by software, sounds like the Beatles, if the Beatles were cyborgs. But, Musk told me, just because you don't see killer robots marching down the street doesn't mean we shouldn't be concerned. Apple's Siri, Amazon's Alexa, and Microsoft's Cortana serve millions as aides-de-camp, and simultaneous-translation and self-driving technologies are now taken for granted. Y Combinator has even begun using an A.I. bot, Hal9000, to help it sift admission applications: the bot's neural net trains itself by assessing previous applications and those companies' outcomes. "What's it looking for?" I asked Altman. "I have no idea," he replied. "That's the unsettling thing about neural networks - you have no idea what they're doing, and they can't tell you."
-snip-
Altman felt that OpenAI's mission was to babysit its wunderkind until it was ready to be adopted by the world. He'd been reading James Madison's notes on the Constitutional Convention for guidance in managing the transition. "We're planning a way to allow wide swaths of the world to elect representatives to a new governance board," he said. "Because if I weren't in on this I'd be, like, Why do these fuckers get to decide what happens to me?"
-snip-
Here's part of a later paragraph, quoting what Altman said about tech: "These phones already control us. The merge has begun - and a merge is our best scenario. Any version without a merge will have conflict: we enslave the A.I. or it enslaves us. The full-on-crazy version of the merge is we get our brains uploaded into the cloud. I'd love that. We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever. What a time to be alive!"
Again, this is who's in charge of the release of ChatGPT and how it will be used.
See the second thread I linked to above - https://democraticunderground.com/100217678412 - and especially reply 6 there for more of OpenAI's newest mission statement, and the response to it from an expert on these LLMs (Large Language Models like ChatGPT).
Most of us have heard plenty about Bill Gates, Mark Zuckerberg and Elon Musk. I'd guess few of us know very much about Sam Altman.

EYESORE 9001
(27,905 posts)
and wants to see his apocalyptic predictions come true. Maybe have some yuks and make some bucks too.
highplainsdem
(54,857 posts)
Did you read the entire, very long New Yorker profile?
Renew Deal
(83,654 posts)
It is basically about consciousness being backed up and refitted into new "sleeves," which are synthetic bodies. I think it's set around 500 years in the future.
Personally, I think it's good that the people creating these things are thinking about the dangers they can pose (setting aside the general rich boy, Burning Man, paranoia of it all on their part).
highplainsdem
(54,857 posts)
I'd be very surprised if all, or almost all, of the people working in Silicon Valley weren't familiar with those ideas since childhood.
I'd also be very surprised if many of those techies were doomsday preppers like Sam Altman. That's weird.
It's also very weird that he's given some thought to how many people he'd be willing to let die or personally kill or have killed to save his loved ones - a subject he brought up in the New Yorker interview. The number he gave was 100,000.
I don't know about you, but that isn't a subject I think about, let alone one I'd bring up in conversation, especially an interview.
getagrip_already
(17,636 posts)
He would likely be among the primary targets when they attack.
They know everything he does, everything he says and hears/reads, and all of his hidey-holes.
They are playing him for a chump.
highplainsdem
(54,857 posts)
So there are definitely privacy concerns.
And anything people enter with IDs, passwords, Social Security numbers, etc., could turn up later in one of the chatbot's hallucinations when it responds to someone else.
Hekate
(96,747 posts)
hatrack
(62,049 posts)
EDIT
Still, sometimes a combination of morbid curiosity and cold hard cash is enough to get me on a stage in front of the tech elite, where I try to talk some sense into them about how their businesses are affecting our lives out here in the real world. That's how I found myself accepting an invitation to address a group mysteriously described as "ultra-wealthy stakeholders", out in the middle of the desert. A limo was waiting for me at the airport. As the sun began to dip over the horizon, I realised I had been in the car for three hours. What sort of wealthy hedge-fund types would drive this far from the airport for a conference? Then I saw it. On a parallel path next to the highway, as if racing against us, a small jet was coming in for a landing on a private airfield. Of course.
The next morning, two men in matching Patagonia fleeces came for me in a golf cart and conveyed me through rocks and underbrush to a meeting hall. They left me to drink coffee and prepare in what I figured was serving as my green room. But instead of me being wired with a microphone or taken to a stage, my audience was brought in to me. They sat around the table and introduced themselves: five super-wealthy guys - yes, all men - from the upper echelon of the tech investing and hedge-fund world. At least two of them were billionaires. After a bit of small talk, I realised they had no interest in the speech I had prepared about the future of technology. They had come to ask questions.
EDIT
They started out innocuously and predictably enough. Bitcoin or ethereum? Virtual reality or augmented reality? Who will get quantum computing first, China or Google? Eventually, they edged into their real topic of concern: New Zealand or Alaska? Which region would be less affected by the coming climate crisis? It only got worse from there. Which was the greater threat: global warming or biological warfare? How long should one plan to be able to survive with no outside help? Should a shelter have its own air supply? What was the likelihood of groundwater contamination? Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system, and asked: "How do I maintain authority over my security force after the event?" The event. That was their euphemism for the environmental collapse, social unrest, nuclear explosion, solar storm, unstoppable virus, or malicious computer hack that takes everything down.
EDIT
These people once showered the world with madly optimistic business plans for how technology might benefit human society. Now they've reduced technological progress to a video game that one of them wins by finding the escape hatch. Will it be Jeff Bezos migrating to space, Thiel to his New Zealand compound, or Mark Zuckerberg to his virtual metaverse? And these catastrophising billionaires are the presumptive winners of the digital economy - the supposed champions of the survival-of-the-fittest business landscape that's fuelling most of this speculation to begin with. What I came to realise was that these men are actually the losers. The billionaires who called me out to the desert to evaluate their bunker strategies are not the victors of the economic game so much as the victims of its perversely limited rules. More than anything, they have succumbed to a mindset where winning means earning enough money to insulate themselves from the damage they are creating by earning money in that way. It's as if they want to build a car that goes fast enough to escape from its own exhaust.
EDIT
https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff
highplainsdem
(54,857 posts)
I read that article a while ago. My impression then was that they were afraid people coping with the disasters they helped create would be after them. Guilt - whether or not they'd ever acknowledge it - underlying the paranoia.
In any crisis like the ones they're worried about, most of them would fall victim to their own security forces.
Altman is supposedly idealistic and liberal, though. He's talked about a UBI, but he's under the impression that it wouldn't have to be much.
This is the guy who's making decisions that will have a huge impact on our future.