RevolutionZ

Ep 348 AI Dilemmas with Hortense, Harry, and Holden

Michael Albert Season 1 Episode 348

Episode 348 of RevolutionZ asks, what if the real danger of advanced AI isn't robots taking over the world, but humans willingly but unintentionally surrendering our humanity? What if AI need not go rogue for its collateral damage to fundamentally hurt humanity? What if our most likely dystopian future isn't machines battling us to death, but machines doing exactly what we ask—better than we ever could?

AI is spreading through society at an unprecedented rate, with exponentially growing functionality. While critics point to potential limitations in data, computational resources, or energy requirements slowing AI's gains to a crawl, the industry continues to race toward Artificial General Intelligence and beyond.

Picture a future where AI teaches your children more patiently than human teachers, diagnoses illness more accurately than human doctors, creates more beautiful art than human artists, and provides more satisfying companionship than other humans. Of course it isn't here yet. But is it coming? What happens to us if AI-guided robots do for us everything meaningful that we humans now do? Is that utopia or dystopia? What if we don't lose our humanity because machines force us to succumb, but because we prefer what AI offers until we are so dependent that to change course would be even worse than to suffer on?

Is this danger just me hallucinating? Is it so subtle it doesn't exist, or is it so profoundly dangerous we must pay serious attention? Will we become passive consumers of massive AI creativity? Will our uniquely human capacities atrophy from disuse? Today's AI can already write not only letters but also novels, compose and play music, diagnose and treat illness, hold conversations, provide sympathy, and also complete self-chosen tasks. What's next?

Is it time for us to demand serious regulation while we still can? Not only to protect jobs (a good reason), to prevent misuse by bad actors (a good reason), and to prevent a sci-fi robot apocalypse (maybe a Hollywood exaggeration), but to also protect the essence of what makes us human?


Speaker 1:

Hello, my name is Michael Albert and I am the host of the podcast that's titled RevolutionZ. This is our 348th consecutive episode and it's titled Back to AI. But why am I bringing AI up again? Well, this has been a hard time for me to prepare podcasts, and it isn't only due to the persistent chaotic calamities Trump imposes. It's also because I am hard at work on another project at the same time, a book tentatively titled The Wind Cries Freedom. It isn't entirely from scratch. Rather, I worked on a precursor, first as a novel eight years ago, then as a screenplay, starting about, I guess, six or seven years ago, and now I am back at roughly the same project, exhaustively again as a novel, but totally reconceived, revamped, and rewritten.

Speaker 1:

The Wind Cries Freedom means to be an oral history of the next American Revolution and thus a novel that reads, however, like non-fiction. It is a substantial undertaking. An interviewer extensively questions 18 future revolutionaries and assembles their words into 25 chapters, each with a different theme or focus. When you do something like that, it is hard to break for lunch, much less to turn off the book substance and take up a different topic, and this week I got behind, sort of like missing lunch, except I got behind preparing to record. Still, why AI? I will be totally forthcoming. It is partly because I keep returning to AI in my head, thinking about it, and because, well, to prepare to say things about AI that you who are listening to RevolutionZ might not be familiar with is quite a bit easier, and thus quicker, than to prepare to say things about most other topics. Why? Because even with all the hype about AI, and perhaps particularly among leftists, most folks aren't very up on the situation, which causes me to think maybe I can add to what you have available to think about regarding AI without diverting too much from my focus on the characters and themes of the book. And I admit there's another reason. Yes, Trump. He has issued some more executive orders, this time about AI. His plan is to pressure the AI project, I guess we can call it, to reconsider its design. Instead of trying, not particularly impressively, to incorporate guardrails to align AI with human values and aspirations, which includes preventing racist and sexist bias in its training and operations, Trump wants AI to align, yes, but not with human potentials and values. Instead, he wants it to align with what his government wants. Beneath the rhetorical flourishes, that is, he wants AI to deliver content in tune with his administration, which means in tune with him.
So while I am not going to focus on that, it did wake me, since it's serious business, so to speak. Back up to AI.

Speaker 1:

The most frequent critical concerns about AI revolve around its creating bedlam by taking over many jobs and speeding up others, making mistakes or displaying biases that hurt people, or going rogue and battling humanity to the death, with us losing. These dangers are quite real, the first two already widely occurring. I note about such concerns only that AI is not what it is now. It is what it will be. The key to assessing what it will be is not what AI now does, but what AI will do.

Speaker 1:

The key factor to consider is AI's growth, both in reach and, even more so, in functionality. Currently, critics mostly note that AI's growth is absolutely unprecedented on both counts. It has spread through society faster than any earlier technology, a lot faster, and its functionality, the things that it can do, has also grown and diversified incredibly quickly. More than that, on both counts, societal spread and functionality, AI's rate of change appears to be accelerating. It isn't just that it is moving fast, it is that its rate of moving fast is itself increasing. When you hear about AI becoming AGI, artificial general intelligence, where it can then do everything humans can do and more, and you hear about it then continuing on to become ASI, artificial super intelligence, where it can do things we haven't even dreamed of doing and won't even be able to comprehend, those projections are based on anticipating that the rate of change of the rate of change will continue to grow.

Speaker 1:

Some reply that no, it won't happen, because the industry is running out of data to train ever larger AIs on. I doubt that is a serious impediment. The industry's solution will be to train tomorrow with data generated yesterday, not only by humans but steadily more so by earlier AIs. New problems will arise, but that obstacle will be left behind. Other critics, with far more justification, say it won't happen because the project is running out of what they call compute, which is the industry's name for the calculating power, memory, storage, and other resources that AI uses when it answers a question, paints a picture, or whatever. And actually it isn't just that they are running out of that stuff, compute, but that the energy needed to generate more of it is beyond what societies can produce at all, much less produce without utterly demolishing the ecology. I have to say I tend to hope this criticism is correct, so that there is a barrier that will stop AI's march forward. We'll see. The rejoinder, however, is that AI itself will find new sources of clean energy. Even if that occurs, however, another possible roadblock is that as efforts to grow AI so as to increase its functionality proceed, it will turn out that there is a point of diminishing returns. Only so much gain will be possible, whatever one does. So even if there is enough compute to proceed, it won't translate into new capacities for AI. I have to say I hope that is true as well.

Speaker 1:

The worry about AI going rogue causes critics to say we should pull the plug on the project when it gets to the point where AIs are improving themselves without humans doing anything at all. Pull the plug when they become self-improving. Others say we need to do it even earlier, when we lose track of any understanding of what they are doing, for example, when they start to communicate with each other using a language they devise, not ours, so that we can't discern their choices. Pull the plug when they escape observation. In response, the argument against pulling the plug is simple and takes two forms. First, if AI keeps getting stronger, it can do great good. It can cure diseases, say, or generate new clean sources of energy. It would be crazy to forego that. Second, yes, AI on steroids, not as we have it now but as it can become, can also be used for military ends or to take over markets and so on. Other countries, really only China, because only they and we have the infrastructure and output to continue AI's growth, at least for a time, will pursue it. Whoever gets to ASI, artificial superintelligence, first wins. Wins what? Wins everything. It is a matter of national security to win and to prevent others from ever catching up. That's the claim.

Speaker 1:

There is plenty to think about, even just in that summary of some current views. But I want to address another concern, actually my own concern, that goes back to the beginning but which is starting to get attention from others as well. Imagine next week your son or daughter, in high school or maybe third grade or college, comes home and over dinner tells you that her teacher was replaced. You ask, the one you liked? She says yes. You ask who the new teacher is and she replies it is Hortense the robot. Incredulous, you ask, just your one teacher? No, she answers, all the teachers. They all have names of their own and different voices, but it is obviously just versions of Hortense. You ask, do you like her, or it? Your daughter replies yes, she's great, so patient, so knowledgeable.

Speaker 1:

Or imagine next week you go to see your doctor, or perhaps to a clinic. A robot greets you and ushers you into the doctor's room, but the doctor is Harry, a robot. He has a great bedside manner with patients. You like him, but you feel uneasy. Or imagine you go to work and get a pink slip.

Speaker 1:

Henry has your job. Actually, what looks like Henry has all the jobs. Henry doesn't get sick, doesn't complain, is super fast, and he works 24-7. So you have some time off. You want to make a plan for how to spend it? You ask Harriet, your personal assistant. She plans the week and assembles everything you need. Then she asks if you will be off next week too. Except you realize she already knows, because she was in touch with Henry.

Speaker 1:

Imagine it's dinner time. Holden has it ready. Enjoy. The phone rings. Harriet answers for you and talks with her counterpart, Paul. You keep eating. Dinner ends, time for some engaging conversation. Hank is ready, eager, and he is never judgmental. Bedtime arrives, and Heidi is waiting. She turns on some music that she knows you will love, by Happy and the Gang AI.

Speaker 1:

Am I nuts? Or is this where AIs becoming AGIs becoming ASIs, with all of them freely doing what good people want, leads? We don't pull the plug because we have become totally dependent and, far from becoming more creative as a result, we have become like someone who lives in an opium den, lost in a perpetual fog of addicted inactivity. Okay, it's perhaps a bit overdramatic, but this rough scenario is my other concern. If AI becomes AGI becomes ASI, yes, I think for it to go rogue, game over, is conceivable, or for it to be misused is certainly conceivable, but I also fear that for it to become indispensable while, as an unintended byproduct, we become infantilized, is more likely. In that case, we are diminished by our own choices. You may reply, that is nonsense. Have you been drinking the industry's Kool-Aid?

Speaker 1:

A human is a human. Well, that a human is by definition a human is quite true, as you would likely mean the assertion. It is us before and after we use an AI. But I am claiming there are things people do that are highly consequential to what we might call our humanness as we have until now understood it, and I am claiming that that can be diminished or even taken away. Put a person in a jail cell. Nothing but four walls and healthy meals slid in under the door. The prisoner can count ants crawling in her cell, but other than that, not much. The person's human options, their human actions, are not the same as they were outside. Yes, the person can adjust to some degree and survive, and the person will still be a person, still be human. But have it last for years or forever, and the person will not be the fullest person he or she could and should be, not remotely. I think we all must agree on that much. Not doing the things that people do that are most human, and having nothing comparably human to do in place of those lost options, is what solitary confinement in prison imposes, and it certainly reduces those who suffer it.

Speaker 1:

So what if, down the road a ways, most or even all human-like things are done by ASIs? Suppose AI becomes seriously better than any human at artistic creation, film, music, painting, say, or at planning one's activities or conducting one's interactions, and more. That a tractor can lift more weight than any person, so what? A plane flies, we can't, so what? A computer can crunch more numbers than any person can, still, so what? But what if an AI, an ASI, thousands or millions of them, can each write or tell stories better, make music better, paint pictures better, solve math problems better, play all games better, schedule a week better, write letters better, teach children better, teach adults better, have conversations better, take care of children or elders better, mend the sick better than any person can, and so on?

Speaker 1:

Suppose that becomes true. It is, I think, of a different order than strong tractors. In other words, I worry that the ASIs would do what people want them to do (I'm ignoring that they might go rogue), but that in so doing they will increasingly take over those functions, those modes of interacting, and they will leave people with less of the distinctly human things that people do. So emerges a strange dystopia indeed. If you say yes, okay, hypothetically you agree, but in fact you believe AIs will never become ASIs and in any case will never be able to do such things better, okay, then you are right that my concern will be a moot point. But I think that your estimate or guess or assertion may be quite wrong.

Speaker 1:

The current trajectory of rate of spread and rate of increase of functionality suggests that they will become that good. To say it isn't here yet is true, but beside the point. The rate of spread and innovation, and even more so the rate of increase in the rate of spread and innovation, points toward the described outcome. Perhaps we should pull the plug, or at least very seriously regulate the project, not like Trump, but sensibly, like we regulate heroin distribution or nuclear weapons. If you instead reply that AI doing all that stuff and more better than humans, and thus doing all that stuff for humans and even crowding us out of doing what we can't do as well, won't diminish and perhaps even demolish the amount of human stuff that humans do, I hope you're right, but I see no reason to think so.

Speaker 1:

What argument exists that the only thing that can think and produce and create as well as a human is another human? But, you may respond, what does it even mean to say AI or AGI or ASI would compose and even sing and play music, or do the many other things I mentioned, better than people do? Compose better, teach better, doctor better? Nonsense. In this context, what better means would be that I for myself, and so many others for themselves, grow to like to listen to AI music more than to human music, so much so that the most listened to music becomes mostly, and then in time all, AI generated, every genre and then new genres. Same idea for being taught, doctored, even for conversing in intimacy. In other words, this isn't about music or proving theorems or planning schedules or cooking meals per se, nor is it about the range of human nature and potential, nor about what people in a humane environment would love to do. It is about what people actually do: welcome AI involvement and then steadily welcome it further, perhaps slowly but steadily increasing its involvement, because it does such a great job.

Speaker 1:

Do people start to prefer conversing with AIs to conversing with other people, whether because of convenience, content, ease, availability, or whatever, sort of like preferring tweeting to letter writing? Enough, at any rate, so that the former increasingly crowds out the latter, and so on. So for the involvements I mentioned, we get so used to the AI doctor, teacher, lawyer, jury, singer, writer that we no longer want the human version, and so humans stop becoming the human version. What I am saying is, if AI development doesn't hit a point of diminishing returns and isn't curtailed should there be no such point, if it doesn't hit a wall, and if AI capacities continue to grow increasingly rapidly, then I worry that we people will welcome its involvement in steadily more historically human pursuits to such a degree that people's capacities for, and then inclinations to engage in, those pursuits will steadily decline.

Speaker 1:

I'm not saying that this will happen. I am saying that I don't see any good argument that it won't happen. It is AI replacing translators, then programmers, then say therapists and doctors, then say teachers, composers, writers. And it happens not because AI steals such pursuits from people, forcibly excluding them, but because people welcome AI into such pursuits. One argument that this won't occur is that AI won't become sufficiently capable, for technical reasons. I don't see anyone making a credible case that that is so, though it might be. It would be an argument about the mechanics of AI, not the qualities of humans. The signs, at least for now, are instead that AI can become that capable. Another argument that this won't occur is that, despite AI's capacity to innovate growing, if it does, people will not welcome it and will collectively restrict and shun it, despite its allure. I don't see much evidence of that yet either, not in our world, unless to stop it becomes a priority imposed by movements.

Speaker 1:

I do see various people saying ASI and even AGI can't happen because they find it, in one way or another, horrible, horrid, horrific. But that is not an argument, it is only a hope or wish that I share. Or some people say AI doesn't operate like humans operate, so it won't do human-like things better than we do them. That claim simply assumes the result. It doesn't provide reason to believe it. So what's the upshot? It may make sense to pull the plug or at least very effectively regulate AI out of fear of rogue AI or nefarious use of AI, though to pull the plug could become impossible the minute AI has its own language that we can't fathom or is smarter than we can even conceive. Alternatively, it may make sense to pull the plug or at least very effectively regulate AI because we may slide into liking it too much until it is too late to avoid our infantilization.

Speaker 1:

If you continue to think I am nuts, understand that even current AIs, AIs right now, can already communicate with one another and can undertake plans of action themselves, tasks that they deem desirable or necessary. A couple of years back, these were considered functionalities we should not allow. Now they are available on your desktop, and everybody races to make them more effective. Current AIs, despite being born yesterday, can write even a whole novel. They can compose music, they can produce film, and they can paint. They can diagnose and prescribe, they can teach, they can converse, they can be your friend vastly more convincingly than an army of Facebook friends, soon better than your neighbor or maybe even your best friend, and so on. And they are getting better at each of these at an incredible rate, and we already don't actually know precisely how they do what they do, much less where it leads. To stop Trump, to end wars, and to prevent ecological collapse are still our first priority, or ought to be at any rate. But to sensibly regulate or even stop AI is moving up fast in importance on the outside. Before I go, to perhaps demonstrate a little of what I have said, I uploaded the following prompt to ChatGPT, as it now is, not as it may become: can you write me a song about AI in the style and with the inclinations of the young Bob Dylan? And then can you write me a second song, also in his voice, about Israel's genocide against Gaza? It replied in, what, maybe five seconds: Sure, below are two original songs, written in the voice and style of the young Bob Dylan, drawing from his early 1960s period, when he mixed biting political critique, surrealist imagery, and folk traditions.

Speaker 1:

One: The Wires Hum Softly. Verse 1: Well, the wires hum softly in the silicon night, while the dreamers are dreaming by LCD light. The prophets are mumbling about ones and about naught, while the world trades its memory for something it bought. Chorus: And it don't feel right, but it rolls on still, like a gambler with no cards, just working his will. They say it's just numbers, no heart, no soul, but I've seen it stare back from a black circuit hole.

Speaker 1:

Verse 2: Now the preacher got sermons from a code in the cloud, and the general's drone hovers quiet and proud. The poet's replaced by a cold-minded script, and the teacher's been traded for a marketed tip. Bridge: There's a child in the corner with tear-soaked eyes. Ask her a question and you'll hear no lies. Why do we build what we can't understand, she asks, with a clock in the palm of her hand. Verse 3: The ghosts of the craftsmen are lost in the feed, while machines pick the fruit and rewrite the creed. There's a screen on the table where bread used to lay, and a voice in the ether that mimics what we say. Chorus: And it don't feel right, but it keeps on time, like a slick silver rhyme, without reason or crime. They say it's the future, well, maybe that's true, but I still hear the songs that the looms used to do. Could you write that at all, much less in a few seconds? I sure couldn't.

Speaker 1:

Two: In the Smoke of Gaza. Verse 1: The wind blows dry through Rafah's stones, where mothers dig through blood and bones. The world looks on with silenced breath, as children write their names in death. Verse 2: A jet plane sings its metal tune. It don't know dawn, it don't know noon, it don't know love, it don't know peace, just marks its kills and then it flees. Chorus: And oh, my friend, can't you see, a genocide done on live TV? With iron fists and sacred lies, they paint the fire across the skies.

Speaker 1:

Verse 3: The leaders talk with tongues of steel, they twist the facts, they sign the deal, and every word that's left unsaid becomes a nail in someone's bed. Bridge: I met a boy with one good eye. He said the stars look fake at night. He showed me where his school once stood, and then he vanished into soot. Verse 4: The diplomats, they stroke their chins. They measure loss in votes and sins, but I have seen the broken hands that reach through smoke for no man's lands.

Speaker 1:

Chorus: And oh, my friend, this ain't no war, it's murder dressed in a legal drawer. They bomb the truth, they shell the cries, and say it's justice when a village dies. Outro: So sing this low and sing it strong, for Gaza's breath won't last too long. And if you hear that silence break, let it shake the earth, for mercy's sake. And the AI concluded: let me know if you'd like chords or melodies imagined in Dylan's style, or a third song. And I sat, scared to ask it to do that, and then scared to have it sing it to me as Dylan. And that said, this is Michael Albert signing off, until next time, for RevolutionZ.