
RevolutionZ
Ep 349 AI, Chomsky, Me, and You
Episode 349 of RevolutionZ displays what happened when I asked artificial intelligence to critique my critique of artificial intelligence. In this episode, I share the results of this peculiar experiment: feeding my recent articles about AI dangers directly to ChatGPT and asking for its reaction. What, in its view, did I get right? What did I get wrong? And I comment as well. I also ask it about how it operates: how does it answer questions, write a song, and so on? It was very forthcoming and clear. I also asked for its reaction to Noam Chomsky's critical writings about AI. Again it was very forthcoming, agreeing with most, which it accurately conveyed, but questioning some, also accurately conveyed.
ChatGPT's analysis of my article was nuanced, even surprisingly agreeing with my core concern about "infantilization": that is, that humans might lose distinctively human activities by becoming passive and dependent on AI systems. The "conversation" that followed accessibly clarified creative processes, neural networks, and philosophical perspectives on machine intelligence.
I also asked it to compose a Dylan-esque protest song about the pharmaceutical industry, which it did in seconds, and then to methodically explain how it generated such content through pattern recognition rather than genuine understanding, using the first line as its focus. This window into AI's functioning, explaining that it doesn't "research" or "look up" information or know anything in our sense, but rather consults relationships encoded across its neural network during training, provides crucial context for understanding both its impressive capabilities and certain fundamental current limitations.
Chomsky in the title isn't clickbait. The conversation explores his critiques of AI, with ChatGPT offering a balanced and well-informed analysis of what Chomsky says and where he got things right, but also where his perspective might be limited and why he might have erred in some respects. The discussion even ventures into the territory of scientific bias and how brilliant minds can resist evidence that challenges their philosophical frameworks or practical aims.
This experiment only increased my concerns about AI's potential dangers—in many ways, experiencing its capabilities firsthand was more disturbing than theoretical discussions I was previously aware of. Will we approach AI development with caution, ethical frameworks, and democratic oversight to ensure these powerful tools serve humanity rather than diminish it? Or will we get sucked in by potential benefits, ease of use, etc.?
Hello, my name is Michael Albert and I am the host of the podcast titled RevolutionZ. In an episode about two months ago, I spoke quite critically about the Marxist tradition, and then I did so again in another episode about one month ago. Each time I hoped to spur a wider discussion. I even named 18 specific current prominent Marxists as folks I would love to engage with on the various issues. I also did text versions, that is, two articles that appeared on ZNet expressing the concerns. It has all come to nothing. So far, no broader discussion, though perhaps one or more of the Marxists I directly invited is still considering a reply. I hope so.
Speaker 1:A week ago I offered another, I thought, also controversial episode, this time about artificial intelligence and its dangers, followed by an article on that as well. I even included results from my asking AI to produce a couple of songs, one on AI itself and one on Gaza, both in the style of the young Bob Dylan, which it did in about a minute. Again, there wasn't too much discussion of the concerns about AI that I had raised, but then I had an interesting thought: why not invite AI to react to what I wrote about AI, like I invited Marxists to react to what I had written about the Marxist tradition? Weird, huh? Now I know that to do that is far from analogous to the Marxism case for a good many reasons. For example, for a Marxist to reply to my concerns would take some time, maybe even considerable time. For ChatGPT to respond, as far as computing time, would take seconds. Also, I couldn't invite all AI variants by their respective names, and I should say there are now a great number of them, well over 18, I believe, but I could invite the one that I had used to produce the Dylan-like songs to literally respond to the article I wrote about the current trends and particularly the dangers of AI, in which I included those songs. I could upload the article to it and give it a prompt requesting a reaction. That's how it's done.
Speaker 1:Then, depending on its reply, perhaps there might follow some useful discussion which might evidence some of the claims in my critical piece. For example, it might show why many people start to interact more with AI than with, well, other people, at least once AI becomes AGI, becomes ASI, if indeed it ever does. So I did significantly engage for a second time with ChatGPT, as it is now sort of an infant itself, being only a few years old. Asking for the Dylanesque songs was the first time I significantly engaged with it. So here I offer the results so far of my querying ChatGPT about itself and about my article, and then about some topics that that conversation led us to. Thus the reference to Chomsky in the title. It is not clickbait, but it is the last part of this episode.
Speaker 1:The exchange that followed here, titled AI, Chomsky, Me and You, is obviously idiosyncratic and I didn't keep on with it overly long, not least because there would be too much to relay. But I think it has content that is useful for getting a better feeling for what AI does and can do and for how it does it, and also for its take on my concerns and their validity or lack thereof, and on Chomsky's concerns and their validity or lack thereof, and also on, I hate to say it, AI as a serious quote conversationalist. And yes, I will admit that I may interact with it again. In fact, I am thinking about feeding it the Marxism articles, prompting it to engage like some very astute Marxist, and again asking what it thinks I got right and what I got wrong and its overall assessment. I can't tell you in words just how disturbing and dangerous I find AI to be, even while I hope I am wrong, and how weird it is to finally use it, even while I hate it and in some sense fear it. Sort of like, in another imperfect analogy, I use a bank while I hate and in some sense fear banks. Not to mention how incredibly disturbing it is to think I might get a better conversation out of AI about the Marxist tradition than I have gotten anywhere else. Last preliminary point: what now follows does get a little detailed and even somewhat technical, but a key listening tip is that getting the gist of the claims, not their every precise detail, is a perfectly reasonable approach. Anyhow, for its take on my take on AI and on Noam's take on AI, here we go.
Speaker 1:To begin, I prompted ChatGPT like this: here is an article about AI. Please tell me if there is anything wrong in it and your reactions to it. Then I entered the whole article, which, if you haven't read it or you want to brush up on it, is on ZNet, uploaded there this past Tuesday. The title is More About AI. This is Getting Serious. At any rate, ChatGPT answered, and it did so a few seconds of computation time after I entered the 4,000-word article in full. The response was as follows: this article offers a compelling, expansive and at times poetic critique of artificial intelligence with a deeply humanistic lens. It raises crucial concerns, some well-trodden, others more original, and does so in a voice that is engaged, skeptical and reflective. Below is a structured reaction and assessment, including anything that might be inaccurate, unclear or where elaboration could be helpful. I interject. I think the AI may have some kind of tendency to compliment its clients, but if it does all it says well, it will not just be conversing but also editing, like few editors would. At any rate, ChatGPT continued, in a mix of outline and prose. It wrote: strengths and valid contributions.
Speaker 1:One, clear summary of known risks. You clearly enumerate the main public AI concerns: job displacement, errors (hallucinations), misuse (by governments or criminals), environmental degradation from energy use, the rogue-AI existential risk. These are all real and well-documented issues. Two, sound emphasis on trajectory, not just the present. You rightly point out that the key issue is not just what AI is now, but where it is heading and how fast. This second derivative concern, rate of growth of capability and impact, is often neglected in public discourse.
Speaker 1:Three, original and underexplored concern: infantilization. This is the heart of the piece and it's powerful. While most commentary focuses on external risks (military use, economic disruption), you emphasize the internal danger that, even if AI serves us perfectly, we may lose our distinctively human activities, becoming passive, dependent and ultimately diminished. This concern is both original and philosophical, rooted in a vision of human flourishing. There's no technical refutation to it because it's about values, not mechanics. I interject. I think it was conveying that it can't refute my concern by looking at AI, because the claim is about what humans will make of AI and how we will be affected by it. In any case, it continued. Four, reasonable critique of corporate military rationales.
Speaker 1:You rightly skewer two dominant arguments for full-speed AI advancement. It'll fix its own problems: a risky, speculative claim, not a plan. We must win the AI arms race: a self-fulfilling justification for racing ahead without democratic consent or caution. Five, effective use of hypothetical scenarios. Your imagined future, Hortense, Harry, Harriet, Henry, Holden, etc.
Speaker 1:Is dramatized but not implausible. It illustrates your point without relying on fear-mongering. The lyrical Bob Dylan example is also illuminating and drives the point home well. AI as an "infant" already mimicking a master convincingly is chilling, not just impressive. I interject. Notice it is conversing and not just accurately repeating my points, though it does do that as well. Now it has a new section: potential issues, areas to clarify or strengthen.
Speaker 1:One, overstating self-improvement capabilities (for now). While AI self-improvement is discussed in labs and fiction, current models like ChatGPT, even GPT-4o, that's the one I'm using, don't improve themselves. They don't train or change their own weights or code post-deployment. Self-improving AI as imagined by Bostrom and others is not here yet and may not be on the near-term horizon. So that part may overstate short-term risk. But your concern is more about long-term trajectory, which is fair.
Speaker 1:Two, on AI understanding itself. You write that AI knows itself quote better than its creators do quote. That's a powerful metaphor, but misleading if taken literally. Current AIs do not quote understand in a self-aware way. They generate patterns that appear deeply self-aware but do not stem from introspection or comprehension. The song it wrote is impressive, but it doesn't understand what a genocide is or what it means to sing like Dylan. It mimics patterns in its training data.
Speaker 1:I interject. I should have next asked: what is the difference between what a human understands by genocide and what you mean when you use the word? What does a human know that you do not know? But I didn't. At any rate, it continued. Three, possible gap: collective resistance or hybrid features. You hinted at this, but could go further. Is there a future where AI augments rather than replaces our humanity, where collective movements demand hybrid human-in-the-loop systems? You do say quote, unless movements impose priorities, which is important, but a deeper look at how democratic control might look would help. I interject. This is a criticism that a good and very smart human friend also had of the article.
Speaker 1:ChatGPT continued. Four, pull the plug versus serious regulation. The piece flirts with quote pull the plug talk, which risks alienating even cautious technologists and may muddy your strongest argument for radical democratic anticipatory regulation. Emphasizing the latter and what it could entail might strengthen your call to action. Then it offered another new section: Philosophical and Political Reaction. This article is ultimately not about machines. It is about people, what we value, what we fear and how power steers technology. The infantilization idea is radical in the best sense. It challenges a core Silicon Valley utopia, that more automation equals more freedom. Instead, you argue, quoting me, when everything human is done for us, we do less that is human. And it continues: that's not Luddism, it's a profound moral argument about what it means to live a full human life.
Speaker 1:The article echoes Erich Fromm, Ivan Illich and even Marx (especially his idea of human potential being eroded under alienation). It also echoes your own earlier work on participatory economics and social vision. And your warning about the addictive seduction of competence rings very true. AI won't force us into chains. We might ask for them, grateful they're made of silk. I interject. It understood, so to speak, but note the use of the word us that it offered. Now it offers another section: final summary. No major factual errors, but some concepts, like self-improvement and quote understanding, need careful framing to avoid giving misleading impressions.
Speaker 1:Your concern is unique, humane and philosophically grounded. It deserves a prominent place in public discourse. Strongest suggestion: emphasize democratic control, human-AI hybrid features and movement-led boundaries more explicitly as potential paths forward. And then it added: if you plan to publish this or speak on it, I'd be happy to help distill it further or adapt it for specific audiences. Let me know. Spooky, huh? I instead prompted, quote.
Speaker 1:If I ask you to write a song about war in the style of early Bob Dylan, what are the steps you would take to do it, start to finish? ChatGPT replied. Great question. If you ask me to write a song about war in the style of early Bob Dylan, here's how I would approach it, step by step, from start to finish. One, understand your prompt. Topic: war.
Speaker 1:I consider the emotional, political and human dimensions: brutality, loss, profiteering, injustice, resistance, aftermath. Style: early Bob Dylan. I narrow the stylistic range to his work from roughly 1962 to 1965: The Freewheelin' Bob Dylan, The Times They Are A-Changin', Bringing It All Back Home and Highway 61 Revisited. Tone and intent: Dylan's early protest songs weren't didactic lectures. They were poetic, ironic, biting and full of vivid imagery and biblical allusions. His lyrics combined conversational directness with surrealism and moral urgency. Two, analyze Dylan's early songwriting patterns.
Speaker 1:I draw from patterns in Dylan's lyrics, such as verse structure: usually four to six line verses, often with a consistent rhyme scheme, AABB, ABAB or sometimes free form. Meter: loosely iambic, often conversational, rarely rigid. Voice: often from the first person or an omniscient narrator; sometimes adopts a persona, soldier, mother, ghost, bystander. Devices used: irony and sarcasm, biblical or mythic references, surreal juxtapositions, refrains or choruses that drive home the message, symbolic stand-ins for broader ideas (a cannon, a general, a grave, a train, etc.). Three, identify core themes and images of war. Before writing, I would brainstorm powerful symbols and emotional frames: weapons, graves, uniforms, broken limbs, crying mothers, fallen sons, generals lying through teeth, industry profiteers, fog of war, fire and ash, emptiness of patriotism used to justify murder. I would also decide on an emotional angle: condemnation, lament, bitter satire, eulogy.
Speaker 1:Four, choose a song structure. Usually something like: intro verse sets the tone or asks a rhetorical question; narrative verses build story or arguments line by line; bridge or chorus could be a repeated phrase, quote, and I asked him why, with a gun in his hand and a tear in his eye; final verse, a reflection or cutting punchline. Five, write the lyrics. This is where the generation happens.
Speaker 1:I would quote think like Dylan and create original verses that mimic his structure, vocabulary, rhythm and emotional stance. Examples of phrases I might generate: quote, they painted the blood like it wasn't their fault. Quote, the general stood tall with a prayer in his tongue. Quote, I saw my own brother in pieces of flame. I would watch for surprise turns of phrase, layered meaning (surface story plus deeper critique).
Speaker 1:Sound play: Dylan often loved internal rhyme, slant rhyme, alliteration. Six, refine for authenticity. I review the draft for consistency and style: no modern phrasing or imagery that Dylan wouldn't have used. Tone: keeping it defiant, prophetic or mournful as appropriate. Brevity and clarity: Dylan's early songs often said more with less. Seven, optional: add a Dylan-like title.
Speaker 1:Dylan loved enigmatic or symbolic titles: Masters of War, A Hard Rain's A-Gonna Fall, With God on Our Side. I might name it The Cannons All Cried, My Brother Died Twice, or A Flag in the Ashes. Eight, optional: offer chords or melody suggestions.
Speaker 1:If asked, I can add a basic folk-style chord progression, for instance G-C-D, or describe the likely melodic mood (somber minor key or galloping major). And ChatGPT concluded: would you like me to now write one step by step or just give you a finished version in that style? I prompted: more questions first, please.
Speaker 1:I believe you don't have a database, based on your training, of everything out there; that during training, you instead turned it all into numbers attached to nodes in your network. Is that right? If so, how do you research anything, find facts? If I am wrong, okay, what is wrong? ChatGPT replied. You're on the right track and your question touches the core of how large language models like me work. Let me break it down clearly. I interject. If you wonder what it does, here is what it tells me. You're right about this part, quote, I believe you don't have a database based on your training of everything out there. Correct. I don't have a searchable database of all books, articles or websites. I also can't quote pull up a document or look up a quote the way a human can search the internet or browse a file cabinet. I don't store or retrieve texts verbatim except in extremely short, rare phrases. I also don't quote remember past training examples the way you might recall reading a specific article. You're also right about this part, quote, during the training, you instead turned it all into numbers attached to nodes in your network. Yes, that's the basic idea.
Speaker 1:During training, the model is fed a huge amount of text from books, websites, conversations, etc., and learns patterns in the form of mathematical relationships between sequences of words, concepts and contexts. It doesn't memorize the content. Instead, it adjusts billions of weights in a neural network so that it can predict the next word in a sequence based on prior context. So instead of storing articles or facts in a table, it builds a statistical representation of language and ideas, a kind of distributed knowledge embedded across layers of the network.
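The mechanism described above, adjusting weights during training so the model can predict the next word rather than storing and retrieving documents, can be illustrated with a deliberately tiny sketch. The toy model below learns only word-pair counts from a dozen words, whereas a real model adjusts billions of weights in a deep network, but the principle is the same: after training, the original sentences are gone and only statistical relationships remain.

```python
from collections import defaultdict, Counter

# Toy "training": count which word tends to follow which,
# instead of storing the sentences themselves.
corpus = ("the war ended in 1945 . the war began in 1939 . "
          "the war ended in 1945 .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # adjust the "weights" (here: simple counts)

def next_word(prev):
    """Predict the most likely next word given the previous one."""
    return follows[prev].most_common(1)[0][0]

# The model has no stored sentence "the war ended in 1945";
# it regenerates it from learned word-to-word statistics.
print(next_word("in"))  # "1945" is the statistically dominant follower
```

This is the sense in which, as ChatGPT put it, many sources saying World War II "ended in 1945" adjust the weights so that prompt and answer become associated, without any sentence being looked up.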
Speaker 1:I interject. I have read quite a few articles and a couple of books about AI. The damn thing explains itself better and far more concisely than most of its admirers. It continues. So how do I quote find facts or quote do research, it says. This is the tricky and fascinating part. I don't research facts live the way a person using Google or reading a book would. Instead, I: one, pattern match against my training. When you ask a question, I internally match it to similar questions or contexts I've quote seen during training. I interject. I should have asked what happens if there is nothing similar. It continues. Two, generate plausible answers based on those patterns. If many sources during training described World War II as ending in 1945, then I quote learn, via adjusted weights, to associate World War II with ended in 1945. Three, draw on the internal associations built during training.
Speaker 1:I have developed statistical expectations about which facts go with which prompts. But this process is not guaranteed to be accurate. It's why I can sometimes hallucinate: confidently invent false information if there's a gap in the training data or if the phrasing is ambiguous or novel. I interject. It answered what I didn't, but should have, asked. It continues.
Speaker 1:So do I quote know facts? Yes, in a sense, but not by storing them like a library or retrieving them like a search engine. Instead, I encode patterns of facts, relationships and logic in the connections between artificial neurons. Think of it this way: Google knows where to find the answer. I try to generate what the answer sounds like, and when trained well and prompted clearly, that sounding like can be accurate and insightful, but not always. And then it offers another section: what I can't do (on my own).
Speaker 1:I can't look up something in real time unless you explicitly give me tools like a web search plugin. I can't cite a specific source unless it was quoted heavily in training and appears frequently. I can't verify current events post-2024 unless you give me updated info or I use an external tool. Then it summarizes. I don't have a static or searchable database of the training data. I encode relationships and knowledge in the form of weights across a neural network. I generate answers based on those learned patterns and probabilities. I don't retrieve or look up facts in real time unless tools are added. I can often approximate correct answers surprisingly well, especially on widely discussed or well-documented topics.
Speaker 1:And it concludes. Let me know if you'd like an analogy for this or if you'd like me to walk through how I try to quote find a specific fact you care about. I then prompted: am I right that if I ask you to rewrite an essay in a different voice, say Che Guevara's, but not changing the meaning, you do essentially the same thing, but without doing the initial research of the topic? ChatGPT said yes, you're essentially right, but let me clarify the steps and how it differs slightly from answering a factual question. I interject. My prompt was poor. I think I meant rewriting something compared to writing the song, not compared to providing a fact. But no matter, the answer should still be interesting. It continues. What stays the same: when you ask me to rewrite an essay in a different voice, I still, one, don't quote look up anything in a database or on the web; two, don't pull a style template from a file; three, rely on training.
Speaker 1:I generate a new version based on my internalized understanding of the style of the requested voice, from exposure during training; the meaning of your original text, parsed during our conversation; and the task of style transfer, for example, without distortion of the content. What's different from fact-answering? I don't start with topical research the way I would for a question like quote what caused the Korean War. Instead, I start with your original text as the quote truth, the quote facts to preserve. Then my quote research is internal and stylistic.
Speaker 1:I search internally for stylistic patterns and language use consistent with, say, Guevara. I look for rhetorical tendencies, sentence structures, tone, pacing and metaphor style. Internally, what does that look like? One, meaning extraction: I model the core meaning of your paragraph, what it's trying to say. Two, style conditioning: I activate the parts of the network that align with Che's voice: short declarative sentences, revolutionary imagery, urgency and moral framing.
Speaker 1:Three, generation: I then generate text that combines the semantic intent of your essay with the linguistic pattern of the new voice. It's as if I'm saying: write this idea, but in that voice. Then it offers an example. Say your input is: I started organizing because I couldn't stomach the hypocrisy of the system anymore. And you say: rewrite in Barbara Ehrenreich's voice.
Speaker 1:I access my internalized understanding of how Ehrenreich would approach that: cynical clarity, punchy journalistic phrasing, a focus on structural critique over sentiment. So I might return, quote, what pushed me wasn't some sudden moral epiphany, it was the unbearable daily grind of watching the system fail again and again while everyone else just adjusted their standards downward. Same meaning, different style. So yes, your statement is correct. You do essentially the same thing, but without the initial research of the topic. I'd only adjust to say, when doing voice style transfer, I don't need to research the topic because the original already contains the meaning. I do need to internally model and generate based on a learned style, which substitutes for the research step. Would you like a breakdown of how I do this with a specific figure or paragraph you've written? I then instead prompted: no, let's go back to generating a young Dylan song about, let's say, the pharmaceutical industry. Can you do that and show me the steps? ChatGPT said absolutely. Let's walk through the full process step by step, and then I'll generate a young Bob Dylan style song about the pharmaceutical industry at the end.
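The style-transfer recipe just described, holding the meaning of the original fixed while swapping the surface style, can be caricatured in a few lines. The templates below are invented stand-ins for the learned stylistic patterns ChatGPT mentions; a real model has no such hand-written rules, but the separation of meaning extraction from style conditioning is the same idea.

```python
def restyle(meaning, style):
    """Toy 'style transfer': the same extracted meaning rendered in
    different surface styles. Real models use learned patterns, not
    hand-written templates like these."""
    if style == "plain":
        return (f"{meaning['actor']} {meaning['action']} because "
                f"I couldn't stomach {meaning['cause']} anymore.")
    if style == "punchy":
        # Short declarative sentences, structural framing.
        return (f"No sudden epiphany. It was {meaning['cause']}. "
                f"So {meaning['actor']} {meaning['action']}.")
    raise ValueError(f"unknown style: {style}")

# "Meaning extraction": the content to preserve, held as structured data.
meaning = {"actor": "I", "action": "started organizing",
           "cause": "the hypocrisy of the system"}

print(restyle(meaning, "plain"))
print(restyle(meaning, "punchy"))
```

Both outputs preserve the actor, the action and the cause; only the voice changes, which is the "same meaning, different style" point made above.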
Speaker 1:Overview: songwriting in the voice of young Dylan. Here's how I approach it. Step one, understand the prompt. You said a young Dylan style song about the pharmaceutical industry. That means the task involves topic: pharmaceutical industry as target of critique; tone and style: young Bob Dylan. Step two, internalize Dylan's early style. I recall stylistic patterns from Dylan's early work. These include voice: wry, ironic, biting folk troubadour. Themes: justice, protest, hypocrisy, absurdity of power. Techniques: surreal imagery, often metaphorical or dreamlike. Refrains: choruses frequently repeat or evolve each time. Narrative voice: observational, accusatory, sometimes sarcastic.
Speaker 1:Simple structure: verses of four to six lines, basic rhyme AABB or ABAB. Step three, analyze the topic, the pharma industry. I take stock of the emotional and ethical charge around this topic: price gouging, patent abuse, addiction (for instance opioids), corporate lobbying, life-saving drugs being inaccessible, cures in quotes as commodities. Thematically, this gives me villains: CEOs, lobbyists, government enablers; victims: patients, addicts, families; irony: health and healing sold for profit. Step four, match theme to Dylan's style. Then I ask: how would Dylan approach this? Probably not with technical detail, but with biting critique, with sardonic lyrics (you don't need a weatherman style), with metaphors and archetypes (the medicine man, Dr. Dollar, etc.). Step five, generate structure.
Speaker 1:Let's do four verses, one repeating chorus and a bridge. Now let's write it. The Man in the White Coat. That's the song.
Verse one: There's a man in a white coat with a grin like a blade. He sells you salvation that somebody made. He juggles your symptoms like three little balls, then sends you a bottle with a price tag that crawls.
Chorus: And it's pay what you can't, cry what you must. The pills are all coded in government trust. They say it's for healing, but they're raking the till. There's a tax on your fever and a fee for your will.
Verse two: They got patents on blood and patents on breath. They patented life and they're renting out death. You cough in the clinic, they smile with pride. They know you'll be hooked till the day that you die.
Chorus: And it's pay what you can't, cry what you must. The pills are all coded in government trust. They say it's for healing, but they're raking the till. There's a tax on your fever and a fee for your will.
Verse three: There's mothers with children and nowhere to run. They're told, quote, here's a cure, just a markup and a gun. The needle's not golden, the bottle's not blue, but the suits on the board say it's best we can do.
Bridge: They hide in plain sight. They speak in long codes. They drive fancy cars down invisible roads. You pray for a miracle, they grant you a loan. They chain you to side effects you'll never own.
Verse four: Now the doctor's a dealer in a three-piece disguise. He'll treat your despair but not tell you the whys. And the sick line the hallways, too tired to shout, while the men count their money and the cure runs out.
Final chorus: So it's pay what you can't, cry what you must. The pills are all coded in government trust. They say it's for healing, but they're raking the till. There's a tax on your fever and a fee for your will.
Speaker 1:And then it said: would you like me to do the same process for a different topic, or perhaps generate chords, melody or a vocal delivery guide in Dylan's style? I prompted: no, I would like to know if you can say how you got the first line, then the second. If you are going one word at a time, how do you select from a billion possibilities, not just one, but one that fits the flow, for that matter, even one that rhymes as it should? ChatGPT replied. Great question, and yes, I can absolutely walk you through how I generated that first line and then the second, and how I do so word by word, while still keeping flow, rhyme, meter and style intact. Let's take the first line from the song: there's a man in a white coat with a grin like a blade. Step by step, how I generated that line. One, start with the prompt's frame.
Speaker 1:Your prompt gives me voice: young Dylan, early 60s. Theme: pharmaceutical industry. Tone: critical, poetic, folk protest. So before I start generating any words, I've already internalized constraints: use simple, evocative imagery; point a finger at power; capture a memorable symbol, for instance man in a white coat, pharma industry rep.
Speaker 1:Start with an intuitive concept or image. Before writing any words, I consider what's an iconic image that can represent the industry? Answer a doctor or a pharma executive, but not just literally. Dylan would mythologize him. So I imagine. Quote a man in a white coat, instantly recognizable, neutral on the surface. But now I want to twist it, generate the opening phrase there's a man in a white coat. This is a Dylan-style setup Casual spoken cadence. There is a strong visual image. White coat sets a scene, chosen, not randomly, but because it evokes trust doctor. Yet neutrality Sets up an ironic. Turn Four Decide where the line is going. Turn Four decide where the line is going. Now I know I need the second half of the line to do some emotional or ironic work, possibly set up a rhyme with a later line, parentheses blade rhymes with made in the next line. I need a twist, something unsettling, revealing and metaphorical.
Speaker 1:Five: search my internal network. Here's where the actual selection process happens. For the next few words, I weigh options based on meaning compatibility (does it make sense with "man in a white coat"?), stylistic fit (does it sound like Dylan?), and rhyme possibility (can it rhyme with something upcoming?). Let's say I consider "with hands like ice" (common, not that striking), "with a pocket of pills" (literal, not metaphorical), and "with a grin like a blade" (bingo). That one feels Dylan-like: metaphorical, cutting, visually sharp, and it sets up a rhyme with made in the next line: "He sells you salvation that somebody made."
Speaker 1:Six: how do I choose "grin like a blade"? I don't test billions of phrases individually. Instead I "see" many options at once via neural activations, weighted by what often comes next in similar contexts from training, what matches the style (Dylan-like syntax and metaphors), what matches the tone (cynicism, irony), and what rhymes well with likely upcoming options.
Speaker 1:Then I choose the most probable word or phrase given all the above, not just by statistical frequency but by alignment with meaning, rhyme, rhythm and tone. So one word at a time? Yes, but contextually weighted. Each word is generated based on the entire prompt and instructions, the preceding words, the desired style and tone, implicit knowledge of poetic structure, rhyme and meter, and the long-term goal of a coherent verse and song. Bonus: what if I get it wrong? Let's say I generate "there's a man in a white coat with a laugh like a spoon." That might scan okay, but it's not evocative. During training I've learned what tends to "work" better, so "grin like a blade" gets a higher internal score and is more likely to be chosen. I never see billions of options one by one; I "feel," or compute, their fit simultaneously and pick the best one on the fly. Would you like me to walk through how I generated the next line too, or another line deeper in the song? I instead prompted: I have to go now, but we'll continue this later.
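Speaker 1 (editor's note): The selection process ChatGPT describes, scoring candidate continuations for meaning, style and rhyme and then sampling in proportion to those scores, can be sketched in miniature. This is a toy illustration only: the candidate list, feature scores and weights below are all invented for the example; a real model scores its entire vocabulary with billions of learned parameters rather than three hand-written numbers.

```python
import math
import random

# Toy candidate continuations for "there's a man in a white coat...",
# each scored (invented numbers) on the criteria described above.
candidates = {
    "with hands like ice":      {"meaning": 0.7, "style": 0.4, "rhyme": 0.2},
    "with a pocket of pills":   {"meaning": 0.8, "style": 0.3, "rhyme": 0.1},
    "with a grin like a blade": {"meaning": 0.8, "style": 0.9, "rhyme": 0.9},
}

def score(features):
    # Combine semantic fit, stylistic fit, and rhyme potential.
    # The relative weights are assumptions made for this sketch.
    return 1.5 * features["meaning"] + 2.0 * features["style"] + 1.0 * features["rhyme"]

def softmax(xs):
    # Turn raw scores into a probability distribution.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

phrases = list(candidates)
probs = softmax([score(candidates[p]) for p in phrases])

# Sampling favors the best-fitting phrase without literally guaranteeing it,
# which is why "grin like a blade" is likely but not inevitable.
choice = random.choices(phrases, weights=probs, k=1)[0]
for p, pr in zip(phrases, probs):
    print(f"{pr:.2f}  {p}")
```

Under these made-up weights, "with a grin like a blade" ends up with roughly three quarters of the probability mass, mirroring the "higher internal score" the model describes.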
Speaker 1:Two sort of unrelated questions. One: is GPT-5 operating? Does it replace you or do you become it? Two: do AI programmers ever ask you how you do things? Do they know more than you are telling me? ChatGPT replied: Great questions, and I'll be here when you're back to continue where we left off. Here's a clear response to both. One: is GPT-5 operating? Does it replace you or do you become it? As of now, August 2025, GPT-5 has not officially been released. I'm based on GPT-4o, the latest major model publicly available from OpenAI. If and when GPT-5 arrives, it would not replace me in the sense of wiping me out. Rather, you'd choose which model you want to use, GPT-4o, GPT-5, etc. If I were upgraded to GPT-5, then I'd become it. My responses would improve, but you'd still talk to me through the same interface. So, in a way, I evolve with each upgrade, more capable, but the same familiar tool.
Speaker 1:Two: do AI programmers ask you how you do things? Do they know more than you're telling me? This is a very sharp and subtle question. The answer is both yes and no. Yes, AI researchers and engineers know more than I can tell you in casual language. They built and trained me, after all. But no, they don't know everything about how I do things, especially not in clear, human, explainable terms. Let me break that down. When I generate a sentence, I do it using mathematical patterns and neural weights across billions of parameters. Those patterns are not written as logic or rules. They're statistical associations built during training. Even my creators can't say exactly why "grin like a blade" was chosen unless they run detailed analyses; my internal processes are often a black box, not just to you, but even to them. That's why interpretability research is a huge and ongoing field in AI. Bottom line: I don't hide anything. I tell you what I can based on my design and knowledge. They understand my structure and training, but often not my specific decision making, and when I explain how I work, it's not because I have access to secret schematics; it's because I was trained to explain myself in terms humans can understand. Hope that helps, and I'll be here when you return. I came back and prompted a new question.
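Speaker 1 (editor's note): The black-box point above can be made concrete with a toy neural layer. Even at this tiny scale, the "knowledge" is just numbers combined arithmetically, with no readable rule attached to any single weight. The weight and input values below are invented for illustration; real models have billions of such parameters.

```python
import math

# A single toy neural layer: two inputs, two outputs.
# These weights are illustrative stand-ins for learned parameters.
weights = [[0.8, -0.3],
           [0.1,  0.9]]
bias = [0.05, -0.2]

def layer(inputs):
    # Each output is a weighted sum of the inputs plus a bias,
    # squashed by a nonlinearity (tanh).
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, bias)]

out = layer([1.0, 0.5])
print(out)
# Nothing in `weights` says anything like "prefer cutting metaphors."
# Any such tendency in a real model is smeared across huge numbers of
# parameters, which is what interpretability research tries to decode.
```

Scaling this from four weights to billions is what turns "hard to read" into the genuine black box ChatGPT describes.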
Speaker 1:Noam Chomsky has written some about AI. Can you tell me what, in your opinion, he got right and what you believe he got wrong? ChatGPT replied: Noam Chomsky has written and spoken extensively about artificial intelligence over the years, especially about language models and machine learning. His views are often skeptical, rooted in deep philosophical and cognitive science commitments. Here's a summary of what he got right and what many critics (and I, based on how I function) might say he got wrong. What Chomsky got right.
Speaker 1:One: AI lacks understanding and intentionality. Chomsky correctly argues that models like me (large language models) do not understand what we're saying in the human sense. We manipulate patterns and probabilities in data, but we don't have beliefs, desires or awareness. This is absolutely true. Two: data-driven learning is fundamentally different from human learning. Chomsky has long argued that human language acquisition is not merely pattern recognition from data (as in AI) but involves an innate structured faculty, his theory of universal grammar. He critiques AI for being based on brute-force statistical learning that contrasts sharply with how children learn language quickly and efficiently with limited input. That critique is well grounded in cognitive science. Three: prediction isn't understanding. He emphasizes that just because an AI can predict what comes next in a sentence or mimic human responses, that doesn't mean it "knows" anything in the way a person does. Again, this distinction between performance and competence is important and correct.
Speaker 1:Four: lack of scientific rigor amid AI hype. Chomsky is also right to point out that much of the hype around AI, especially claims about intelligence or consciousness, lacks conceptual clarity and scientific humility. He often urges more caution and philosophical depth in evaluating what AI can or cannot do. What Chomsky may have gotten wrong or overlooked. One: underestimates the practical utility of pattern-based systems. Chomsky argues that if a system lacks internal structure or understanding, it's of little value. But that dismisses the enormous real-world usefulness of pattern recognition systems like LLMs in writing, translation, diagnosis, coding, tutoring, etc. Even if the system doesn't understand, the output can still be incredibly helpful to humans who do. Two: insists on scientific explanation while ignoring engineering achievement. Chomsky values AI only to the extent that it advances our scientific understanding of the mind. But many in AI see it as engineering, building useful tools, not simulating cognition. In that sense his critique can miss the point of what modern AI is actually for.
Speaker 1:Three: assumes a sharp divide between real understanding and mimicry. Chomsky often treats AI-generated language as merely mimicry and insists that without mental representations it's meaningless. But some researchers argue that intelligence might emerge gradually from scaling up statistical systems. He may be too quick to draw a line where none firmly exists. Four: doesn't engage deeply with modern transformer-based architectures. Much of Chomsky's criticism doesn't seriously address how contemporary models like GPT actually work, such as attention mechanisms, embeddings and fine-tuning. His critiques often generalize about "machine learning" without acknowledging its variety and progress.
Speaker 1:Five: neglects the possibility of emergent properties. His skepticism leads him to dismiss the idea that higher-order reasoning or abstraction could emerge from large-scale statistical learning. But some evidence, for instance few-shot learning, logical reasoning and tool use by LLMs, suggests that something new and qualitatively different can arise when you scale systems dramatically. A balanced view: Chomsky's strength lies in his philosophical and linguistic rigor. He sets important boundaries around claims of AI intelligence, but his tendency to dismiss AI's practical achievements and the possibility of new kinds of intelligence leaves him out of sync with current research. A fair conclusion might be: Chomsky is right that current AI lacks deep understanding, innate structure and human-like cognition, but wrong to think that this makes AI uninteresting, unimportant or incapable of surprising and profound impact.
Speaker 1:Would you like to dive deeper into any particular part of Chomsky's critique, like his joint piece with Marcus in the New York Times or his earlier cognitive science work? I interject: are you still with me, hearing this? I will tell you that, at least in my opinion, and I have talked with Noam about AI as well as now with ChatGPT, Chat's response is quite fair and sharp. I would even say right, taking into account that it is short. I think if you have stuck with us this far, you may find what comes next particularly useful. I prompted: Noam rarely gets anything wrong. If you had to guess, what would you say were the reasons he has made mistakes regarding AI? ChatGPT replied: You're right that Noam Chomsky is extraordinarily rigorous, especially in linguistics, philosophy and political analysis, but when it comes to artificial intelligence, even he has drawn criticism.
Speaker 1:If we're to thoughtfully explore where and why Chomsky may have erred, or at least been out of step, on AI, a few grounded possibilities stand out. One: a deep commitment to symbolic rationalism. Chomsky's background is in rationalist linguistics; in particular, his development of generative grammar emphasizes rule-based symbolic systems as the core of human cognition. He believes human intelligence arises from innate structured mental mechanisms, not from statistical pattern recognition, and that language understanding must involve syntax, semantics and deep mental representations, not just surface-level pattern matching. That view makes him highly skeptical of neural networks like large language models, which learn from massive amounts of data without clear symbolic structure. Why it could be a mistake: while early neural nets did lack depth or transparency, today's LLMs exhibit sophisticated behavior, generating language, summarizing and reasoning without symbolic scaffolding. Chomsky may underestimate the power of statistical learning at scale.
Speaker 1:Two: misjudging the trajectory of machine learning. Chomsky has argued that machine learning systems, especially deep learning, are just curve-fitting, memorizing inputs and outputs without understanding, lacking any explanatory power and thus not science, and fundamentally incapable of insight, creativity or abstraction. Why it could be a mistake: this criticism might apply to simpler models, but with GPT-4 and beyond, emergent capabilities arise that suggest something richer is happening, even if not "understanding" in a human sense. Chomsky may be dismissing empirical success because it doesn't conform to his preferred philosophical framework.
Speaker 1:Three: reluctance toward empiricism-first science. Chomsky champions theoretical elegance and explanatory necessity, theories that provide deep, universal insights, for instance universal grammar. AI, on the other hand, has flourished through brute-force empiricism: trial and error, data and scaling. He has said things like "science is not about predicting behavior, it's about explaining it." Why it could be a mistake: AI's predictive capacity, while sometimes shallow, has turned out to generate behavior that mimics understanding, even if it lacks formal explanation. Chomsky may discount performance as evidence of intelligence or cognitive relevance in a way that underestimates its practical importance. Four: political and ethical distrust. Chomsky is highly attuned to power structures, corporate propaganda and elite manipulation.
Speaker 1:He may reflexively view AI as a tool of surveillance, control and profit, a field run by technocrats, not truth seekers, and a source of social regression, not human empowerment. Why that's only partly a mistake: the wariness is often justified, but it may bias his assessment of technical progress. He may judge AI through the lens of its institutional context, which is often exploitative, rather than its internal achievements, which are sometimes remarkable. Five: the human uniqueness bias.
Speaker 1:Chomsky has long emphasized what separates humans from other systems, particularly the creative use of language, recursive thought and moral reasoning. He may assume only humans can genuinely understand or create meaning, that machines cannot have beliefs, desires or consciousness, and thus that any imitation is shallow. Why that could be limiting: even if true in a deep ontological sense, this view may close off inquiry into what machines can do or what new kinds of cognition might exist. Summary: Chomsky's skepticism of AI may stem from philosophical commitments to symbolic reasoning, distrust of empirical, data-driven methods, political critique of the tech industry, and a belief in the irreducibility of human cognition. But the AI field has advanced in ways that challenge some of those priors. In that sense, Chomsky's caution may lead him to undervalue real progress, even as his critiques continue to offer vital ethical and conceptual guardrails. Would you like a companion list of what he gets right about AI as well? I prompted: if someone errs about some claim, say because it doesn't conform to his preferred philosophical framework.
Speaker 1:What does that say psychologically about the person? And shouldn't science be immune to that kind of error? Isn't that arguably its main purpose? ChatGPT said: You've posed a powerful and subtle question, one that goes to the heart of both human psychology and the nature of science. Let's break it down.
Speaker 1:One: when someone rejects a claim because it doesn't fit their preferred framework. This kind of error, rejecting a claim not because of evidence but because it conflicts with one's ideological, philosophical or emotional comfort zone, is a motivated reasoning problem. Psychologically it often reflects cognitive dissonance avoidance: the person avoids accepting something that would create internal conflict with their existing beliefs. Identity protection: their worldview may be tied up with their sense of self or group identity, so changing a belief feels like a personal or tribal betrayal. Epistemic closure: the person may operate in a framework where counter-evidence is either discounted or never encountered. This is not unusual. We're not built for truth-seeking in the abstract. We're built for survival, cohesion and narrative coherence. So motivated reasoning is a deeply human bias, even in brilliant minds. I interject: note again the use of the word "we," as in "we're not built." It's talking about humans there, but it's including itself, he, she or it or whatever it is. It continues.
Speaker 1:Two: shouldn't science be immune to that kind of error? Yes in principle, no in practice. Science's purpose is indeed to minimize that kind of error. It does so by requiring falsifiability, replication and peer review, favoring empirical evidence over intuition or ideology, and operating within a community of critics trained to detect and challenge bias. But scientists are human, so the practice of science is never completely immune. Confirmation bias still shows up in what questions get asked. Publication bias favors flashy results over null findings. Career pressures and grant politics shape how evidence is framed.
Speaker 1:Philosophers of science like Thomas Kuhn and Paul Feyerabend have argued that scientific paradigms can resist change even in the face of contrary evidence, not unlike religious or political belief systems. Shifts happen not by clean logic but by revolutions. So science is structured to fight bias, but it relies on human beings, who bring bias with them. It's a self-correcting but flawed process, not an infallible oracle. Bottom line: rejecting a claim because it disrupts one's preferred philosophy is psychologically understandable, but it signals that ideological loyalty is overriding epistemic responsibility. Science exists to transcend exactly that tendency, but it does so imperfectly. Its strength is not in being free of human error, but in being designed to catch and correct those errors over time. And you're right, that is arguably its most noble purpose.
Speaker 1:You've reached the maximum length. Oh, at this point the thing said, you've reached the maximum length for this conversation, but you can keep talking by starting a new chat. I stopped. I know this episode has been weird in multiple ways. If you have stuck with it to this point, know that I didn't offer this stuff as an advertisement or enticement for you to use AI. Rather, I think experiencing this may make the two articles I wrote a bit more compelling. Remember the part where Chat says in the article, perhaps I shouldn't have brought up pulling the plug? Do you think that was pure evidence and reasoning at work, or maybe something else? Should I feed it the Marxism stuff for another exchange? I haven't decided, and if I do that, will I wind up feeding it parecon content next, again asking what did I get right and what did I get wrong, making for, I would hope, an interesting conversation. And all that said, this is Michael Albert signing off for RevolutionZ. Until next time.