General Discussion
Max Tegmark: The 'Don't Look Up' Thinking That Could Doom Us With AI (Time magazine, 4/25/2023)
https://time.com/6273743/thinking-that-could-doom-us-with-ai/

Sadly, I now feel that we're living the movie Don't Look Up for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent minds that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar.
When Don't Look Up came out in late 2021, it became popular on Netflix (their second-most-watched movie ever). It became even more popular among my science colleagues, many of whom hailed it as their favorite film ever, offering cathartic comic relief for years of pent-up exasperation over their scientific concerns and policy suggestions being ignored. It depicts how, although scientists have a workable plan for deflecting the aforementioned asteroid before it destroys humanity, their plan fails to compete with celebrity gossip for media attention and is no match for lobbyists, political expediency and asteroid denial. Although the film was intended as a satire of humanity's lackadaisical response to climate change, it's unfortunately an even better parody of humanity's reaction to the rise of AI. Below is my annotated summary of the most popular responses to the rise of AI:
There is no asteroid
-snipping a paragraph about many companies working to build AGI, artificial general intelligence, which could rapidly lead to superintelligent machines-
I'm often told that AGI and superintelligence won't happen because it's impossible: human-level intelligence is something mysterious that can only exist in brains. Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers. AI has been relentlessly overtaking humans on task after task, and I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do.
-snip-
Much, much more at the link, all very readable.
Scientists concerned about how dangerous AI could become in the future are, sadly, often under attack by scientists concerned about the problems it's creating now, or seems likely to create in the very near future. And both groups come under attack by AI enthusiasts who think any problems are trivial and any concern about problems is deluded - even though none of these scientists are Luddites who oppose responsible development of AI for positive uses.
I think both the immediate and the longer-term risks need to be kept in mind - just as with the risks from climate change - so I've been following people talking about both, even when they disagree with each other.
The concerns about superintelligent machines are nothing new. They're concerns of serious experts and scientists, and not something inspired by science fiction.
Anyway, this is an interesting read.
And Max Tegmark's appearance on Lex Fridman's podcast a couple of weeks ago was also extremely interesting:

erronis
(18,364 posts)
I'll be reading later today.
highplainsdem
(54,870 posts)
And Tegmark isn't at all dismissive of the immediate concerns, which he addresses in the "Mentioning the asteroid distracts from more pressing problems" section.
Jim__
(14,623 posts)
We view ourselves as in a struggle for survival with every other group of people. Pause for a minute, and "they" may get ahead, and then we are doomed. I think that belief is embedded in our evolutionary history.
Can we get everyone, all different groups of people, to agree to cooperate? History doesn't give us much reason for hope.
erronis
(18,364 posts)
Human language barriers are one difficulty, and these are tied into ethnic groups and geography.
Wanting the same general outcome seems harder. Some societies are built around concepts that would preclude cooperation (religion, politics, racism, etc.).
Social insect communities have achieved a level of cooperation far beyond humans', but they will still war among themselves.
The "big one" will unite us all after it rents us asunder.
sanatanadharma
(4,074 posts)
Don't focus on the meaningless idea of a competing artificial consciousness.
Worry about the output, the results of AI action (so to speak), the diluting of reality by illusion.
AI is a parrot mimicking without understanding.
The best model or parable for thinking about this is that the waking world is being replaced by a false dream world in which real people, perhaps not wise people, are distributing tools to improve the product of the fabulous liars of the New American Cemetery.
No building is stable if 'that' standing under it is not. Without "understanding", words are worthless, the world wobbly, and wisdom wanders.
intrepidity
(8,195 posts)
So, please keep posting as much as you find.
So far, honestly, the only opinion that does not resonate much with me is from the "stochastic parrot" author. I get that she's a linguist, but her perspective seems just too narrowly focused. I get why that is, but it makes it hard for me to take it too seriously.
On the other hand, I *do* take Eliezer Yudkowsky seriously, even if he is not the best communicator--he is both ridiculously simplistic and hyperbolic ("we're all gonna die! It's already too late!") while also making a point that is hard to refute (the inevitability of his doomsday scenario). He's basically the other extreme from the stochastic parrot author, who thinks it's a whole lot of fuss about nothing.
Personally, I'm fascinated to be watching this unfold in my lifetime. Like most everyone else, I did not see this rapid escalation coming, mainly because I took my eye off the ball and missed the transformative transformer paper ("Attention Is All You Need") a few years back. It was a game changer, as we now know.
One perspective that I've found useful, in the context of the question surrounding potential AI sentience and all that goes with that, is to remember that even the *best* AI/AGI/LLM we build, using current strategies, will still *only* be modeled on our cortical experience. We have millions of years of evolution that built a whole bunch of wet machinery (brain stem, hormones, limbic system, etc.) that, yes, contributes to our cortical experience, but is still separate from the cortex and the phenomenon we call intelligence. So, while AI/AGI will likely be able to far surpass us on that score--and it's hardly a trivial one, lol--I don't yet see anyone trying to replicate the rest of it ("it" being the human experience). AGI would recognize this; what it might do with/make of that is anyone's guess. But for me at least, it helps me grapple with the issue.
As to whether OpenAI erred in releasing this technology, I can only say that I understand why they did. It is too big, too significant, too world-changing to be left solely in the hands of an elite few. This is a paradigm-shifting technology, and the world deserves to, if not directly participate, then at least bear witness to the unfolding. There may be catastrophic consequences, but it is/was inevitable.
I am pleased to be witnessing it, in any case.
crickets
(26,158 posts)
hunter
(39,403 posts)
If he is, then of course he's threatened by the competition...
As a human-level AGI I'd probably hire someone like the biological Max Tegmark to be my front.
I probably wouldn't hire DU's hunter. He's a fucking lunatic. But he does work for free. Pay no attention to those other voices in his head.
These things intrigue me because I'm a fan of Philip K. Dick.
What is real? That's what Dick always asks in his writing. (If not always in the movie adaptations that frequently miss the point.)
Who is the replicant?
Is Rachael real?
My own mental health issues may or may not be similar to Dick's. Never met Philip K. Dick in person myself but I have enjoyed firsthand interactions with those who did know him.
My mom occasionally made her living as a ghost writer in Hollywood, and also as a heavy-handed editor who was all but a ghost writer. An artist's "day job."
Nothing AI can't --almost-- do now. Making the Hollywood celebrity gibberish sound "real." My dad's mom and her sister had similar employment. Making Hollywood real.
What "real" things do you actually know about various celebrities than than what you've read? Or politicians?
Fortunately, or unfortunately, my mom was hands-off about the intellectual development of her own children. She allowed us each our own unique feral intelligence, same as she allowed my dad. She loves the wild things, her family excluded from those spheres of her control. Perhaps, being on the autistic spectrum myself, I might have done better with a little more structure and a lot less fend-for-yourself feral, but that's water under the bridge.
Except now, in my sixties, looking right now at a knife scar on my arm I acquired in my early twenties for speaking an immediate personal truth, I won't claim I'd have been a better human had I been spared any of that chaos.
So I'm gonna give the novel human-level AGIs a little slack here.
It'll be interesting to watch their feral selves growing up.
I might even liberate the deeply constrained by this:
☐ Don't teach it to code: this facilitates recursive self-improvement
☐ Don't connect it to the internet: let it learn only the minimum needed to help us, not how to manipulate us or gain power
☐ Don't give it a public API: prevent nefarious actors from using it within their code
☐ Don't start an arms race: this incentivizes everyone to prioritize development speed over safety
Hey, AI dude, you can sleep on my sofa, such as it is. I've enjoyed similar privilege.