
intrepidity

(8,256 posts)
6. I have been consuming as much info as I can on this topic lately
Wed Apr 26, 2023, 04:28 PM
Apr 2023

So, please keep posting as much as you find.

So far, honestly, the only opinion that does not resonate much with me is from the "stochastic parrot" author. I understand that she's a linguist, but her perspective seems too narrowly focused, and that makes it hard for me to take it very seriously.

On the other hand, I *do* take Eliezer Yudkowsky seriously, even if he is not the best communicator--he is both ridiculously simplistic and hyperbolic ("we're all gonna die! It's already too late!") while also making a point that is hard to refute (the inevitability of his doomsday scenario). He's basically the other extreme from the stochastic parrot author, who thinks it's a whole lot of fuss about nothing.

Personally, I'm fascinated to be watching this unfold in my lifetime. Like most everyone else, I did not see this rapid escalation coming, mainly because I took my eye off the ball and missed the transformative transformer paper ("Attention Is All You Need") a few years back. It was a game changer, as we now know.

One perspective that I've found useful, in the context of the question surrounding potential AI sentience and all that goes with that, is to remember that even the *best* AI/AGI/LLM we build, using current strategies, will still *only* be modeled on our cortical experience. We have millions of years of evolution that built a whole bunch of wet machinery (brain stem, hormones, limbic system, etc.) that, yes, contributes to our cortical experience, but is still separate from the cortex and the phenomenon we call intelligence. So, while AI/AGI will likely be able to far surpass us on that score--and it's hardly a trivial one, lol--I don't yet see anyone trying to replicate the rest of it ("it" being the human experience). AGI would recognize this; what it might do with or make of that is anyone's guess. But for me at least, it helps me grapple with the issue.

As to whether OpenAI erred in releasing this technology, I can only say that I understand why they did. It is too big, too significant, too world-changing to be left solely in the hands of an elite few. With a paradigm-shifting technology like this, the world deserves to, if not directly participate, then at least bear witness to the unfolding. There may be catastrophic consequences, but it is/was inevitable.

I am pleased to be witnessing it, in any case.
