RevolutionZ

Ep 305 More Confusions and Some Resolutions

Michael Albert Season 1 Episode 305

Episode 305 continues, reacts to, and adds to episode 304. It again addresses the Mideast and Israeli and U.S. motivations, the U.S. election and Democratic and Republican Party motivations, plus popular reactions to all that, as well as rampant, escalating, ubiquitous dishonesty, AI's dangers, and finally the larger implications of a personal AI experience in podcast creation. Why again with confusions? Partly because these matters are incredibly important. Partly because I suspect I am not alone in feeling as I do. And partly because the confusions that I discussed last time left some listeners confused about why I am confused, so I clarify and also raise the stakes this time.

Support the show

Speaker 1:

Hello, my name is Michael Albert and I am the host of the podcast that's titled Revolution Z. This is our 305th consecutive episode and this time, like last, I have various things on my mind and, like last episode, I'm going to spontaneously address them, and I guess the concerns are unlikely to disappear anytime soon. Though, if anyone out there listening in feels confident about these issues with reason and evidence, by all means let me know. In the interest of time, I will confine myself to just a few. Put differently, I am at a loss, like many others, to explain and navigate, much less engage with and transcend, certain current events. I feel confused, still like I felt last week, except a week darker, and this is so despite many emails, conversations and the like offering to remove my confusions. That didn't work.

Speaker 1:

Let's start with the Mideast. We have in the Mideast the potential for a cataclysmic war on top of a cataclysmic war, or cataclysmic massacre. First, consider Israel. Some will say, well, it's trivially simple to explain Israel's activity. What have they done? They have attacked Gaza. They have attacked hospitals and schools and homes. They've laid waste to the area. They've told people to go someplace for safety and then they have proceeded to bomb that place. Now they've extended that to the West Bank and they've extended it to Lebanon.

Speaker 1:

Why is this happening? That's one question, and it's a hard one to answer, I think. You might say, well, they want to get rid of the Palestine problem, or what they see as the Palestine problem. Okay, I think there's a lot of truth in that. And how do you get rid of the Palestine problem? You get rid of Palestinians. Maybe you kill them all, maybe you drive them all out. That's, I suppose you could say, however ethically disgusting, however horrible, morally, at least rational. That is, if they want to drive them out, they are behaving in such a manner as to drive them out or, for that matter, kill them all. But there arises a question, which is: why are they attempting to involve the United States in a war with Iran? And it definitely has seemed for weeks now, months actually, that that's what they're trying to do. On top of the other explicable, if horrendous, behavior, they're also trying to provoke Iran into an attack (they succeeded) and then cajole the United States into a response.

Speaker 1:

What exactly does Israel gain out of this? To me it is not obvious. It seems to me that they are risking their very existence. They have alienated a good part of the world. They are now perhaps causing to emerge a war which could impact them.

Speaker 1:

If we set aside all ethics and all morality, and sadly I think we have to do that in this case, because I don't see any sign of ethics or morality coming from the Israeli policies and government. Just no sign of it at all. I mean, it's more blatant than anything I've seen in my lifetime, even than the United States, and that is a very high bar of immorality to try to jump over, and yet jump it they are. So I'm confused as to what the Israeli motive is in that respect. Then we have the United States.

Speaker 1:

Okay, people tell me, surely you must understand that for the United States, Israel is like an aircraft carrier in the Mideast, except a giant one. Israel is a client, Israel is an agent, Israel is an abettor of US agendas. Israel is a force, indeed by far the strongest military force in the area, and the United States wishes to keep it that way and to keep connections intact. Yeah, okay, I get that. I'm not an idiot. I understand that US imperialism is also without ethics and has something to gain, has had something to gain for decades, from the relationship with Israel. But it has gotten to the point that Israel is behaving in such fashion, or manifesting itself in such fashion (I don't even want to use the word behaving; I don't know what words to use), that it isolates itself internationally even worse than in the past. It was bad enough for decades when there would be votes in the UN, the world on one side and the US and Israel and maybe two or three others who can't break with the US on the other side. Okay, isolation. But this is far greater, far more intense, and the impact of it on the US, the potential impact of it on the US in the form of a wider war, is extreme.

Speaker 1:

And then there's also the domestic situation in the United States. It is perfectly plausible to say that part of Israel's motivation, because it certainly is a possible outcome, is to elect Trump. Part of the motivation is to create a situation in which getting the United States to behave as desired by Israel has the added bonus, if you will, from their perspective, of hurting Harris and helping Trump. And then you have to wonder again: what are the Democrats doing? What is Biden doing? What is he thinking when he continues with policies, and Harris winds up having to support them, policies that risk the election? I don't know the answers to these questions. It is not obvious to me what US motives are at this point in time in the Mideast. All right, some of them are clear enough, but the overall motives are very fuzzy in my mind.

Speaker 1:

Then there's public support and public passivity. I said in the last session that I didn't understand. Let me put it this way: when I was in high school I learned about Nazi Germany and I had a hard time understanding it. I couldn't quite grok it. I couldn't understand how people could partake of such activity, how they could abide such activity. And then I also couldn't understand not just how people could support it, but how people could see it and be passive about it, not be opposed to it. And I felt some of that during the American interventions in Indochina and elsewhere, but it wasn't at the same scale of obviousness for the public.

Speaker 1:

In this case it seems like it is. It seems like the Israeli population can't not know, can't not be aware of what's going on, and that's evidenced by the calls to do even worse, to kill even more. So it's hard for me to understand the mentality that has led to that. At least the Israelis, I guess, have some kind of pitiful excuse: that they're afraid. When you get to the United States and you get to public support for genocide, public support for starving people, for bombing the hell out of people, I don't understand that either. And people tell me, well, it's because Jews feel an identity with Israel, or because Jews feel like we're under assault on all sides. Well, I'm Jewish and I don't feel an identity with Israel at all. I don't feel like we are under assault on all sides. Certainly I don't feel it's true in the United States. Is there anti-Semitism? Yes, of course there is. Israel is one of the most effective, if not the most effective, organizers of people into an anti-Semitic attitude in the world. Can the population of Israel feel afraid? I don't know. I guess they do. I guess it doesn't matter to them that they're the ones who are unleashing holy hell on others and not vice versa.

Speaker 1:

Next, connected to that, I had talked some about the US election, and again part of it was, you know, understanding the actions of the principals, and part of it was understanding the actions of other people. People wondered: info is everywhere. You know, there's coverage of the election and all kinds of coverage of people's attitudes toward it everywhere. What could you possibly be confused about? Well, it's true, there are a lot of things that I don't feel confused about.

Speaker 1:

Steve Shalom and I did a question and answer article. I don't know, it's about, I'm guessing now, 6,000, 7,000 words, something like that. It went online on Z and on New Politics, where Steve is active, simultaneously, I think Tuesday. So it's been up for a while now, and in it we together put forward a variety of questions about the election and a variety of things that we thought addressed those questions. Answers, if you will. And I felt confident about that. I didn't feel like, you know, I'm winging it, I don't really know what I'm talking about here. I thought it pretty straightforward.

Speaker 1:

But then we have the question, or the problem, that lots of people, people who I care about, people who I respect, people who I know are intelligent and capable, don't agree with lots that seems to me to be virtually self-evident, not complicated, not rocket science, but totally obvious. The lesser evil argument stinks. It's not a good argument if the gap between the lesser evil and the greater evil is not very large, or if voting for the lesser evil has an adverse effect that's so bad that it outweighs the danger of the greater evil, and so on. In other words, you can construct a perfectly rational rebuttal of voting for the lesser evil, in this case Harris.

Speaker 1:

But I don't see being able to do that where we have the situation that we have in the United States: a fascistic lunatic on the one hand, and on the other a rather typical Democratic Party "leader" who supports the system, who will defend the system, but who will also bend in certain respects toward modest changes and will respond to pressure and so on. They make it difficult, I understand. People have put it to me that it's young people. That's nonsense: I understand that not only young people but middle-aged people and old people like me, thinking about voting for Harris, have a very hard time doing it. I'm not in a swing state, not in a contested state, and so I don't feel the need to do it. And I'm not going to do it. Why? Because what's going on in the Mideast is despicable.

Speaker 1:

Even if that wasn't going on, it's not as if I would be gung-ho Harris as some kind of tribune of the people. Would she still, if the Republican side had a normal candidate, be better? Probably. Very likely. It's hard to imagine not. But not something to write home about. She's not Sanders, and Sanders isn't all that I would want.

Speaker 1:

But the election isn't about: if it's anything less than all that you want, go home. The election is about: what are the consequences for society of each of the two candidates (they're the only ones who can win) winning? And what's the difference in those consequences? And is that difference substantial? I don't understand how people can look and not see that that difference is well beyond substantial. It's huge. There's another thing about the election. This is completely different. That is, I don't know what to call it, edifying.

Speaker 1:

For example, a couple of nights ago was the vice presidential debate. After it was over, there were, you know, the post-debate roundups and all sorts of people rendering their judgment as to who won. It's the third presidential election debate that I've watched, I guess: I watched some of the Biden one, then I couldn't handle it, then I watched the Harris one, and now I watched this one. It was depressing for obvious reasons; the other side is something to behold on the issues that were being discussed.

Speaker 1:

Afterwards, it seemed to me, my perception was, that Walz started off, I don't know whether he was scared or just off, but pretty abysmal, and he got better, but he never got near the ease and the effectiveness that he shows when he's giving a speech. I've watched some of those. Vance, on the other hand, started strong and continued strong. He overachieved, given expectations, whereas Walz underachieved. I thought, if we ignore lying (and the people who evaluate these things do ignore lying), and if we ignore underlying positions and so on and so forth, and we just consider them as debaters, so the content is not what's relevant, only how good a presentation they gave, which is what people pontificate about afterwards, I thought it was pretty clear that Vance won. Which was a scary fact, but nonetheless pretty clear. But every single liberal commentator who I saw sum it up seemed to think Walz won in a walk. Easily. It's just indicative of the extent to which people, all kinds of people, are seeing what they want to see and not what's in front of them.

Speaker 1:

There's another thing now about the election that strikes me as very hard to explain: Democratic Party strategy. Everybody seems to agree that what Harris is doing, at least part of what she's doing, is seeking to get independent and undecided voters. Okay, that certainly makes sense. Seeking even to encroach on Republican voters; I think that actually makes good sense. But to do those things, she's moderating her stances. In other words, she's moving toward the center. She is weakening and certainly not moving leftward.

Speaker 1:

I don't understand why that is. It is not obvious to me how that works. Is it totally a function of what she thinks the media will do with what she says? Thus, if she doesn't moderate her stance, if she moves in the other direction, if she moves toward, say, you know, the left, if she moves toward things that we would at least consider relatively positive, maybe even outright positive, she will be translated in a way that will cost her votes? Or does she think people will hear it and that will cost her votes? It's not obvious to me, because most polls on these issues show more support for the issues than for her as a candidate. So you would think that not moving toward the center, but moving toward sanity on a lot of the issues, not least Palestine, of course, but also many others, would help her. But it's generally conceded that it would hurt her. I don't understand how people know that. Set aside whether or not it's ethically right to be seeking votes by moderating or altering one's message in order to reach a certain constituency, assuming other constituencies will stay aboard.

Speaker 1:

And then there's another part of the election which befuddles me, which is the idea that voting for Stein or West has so much positive implication that it outweighs the negative implication of taking votes, presumably, away from Harris and therefore potentially causing a Trump win. In various swing states it is conceivable that Stein support, or Stein plus West support, could be the difference. It is staggering to me that a leftist would think that that's a good use of their time. And what makes it beyond staggering, to almost, I don't know what other kinds of words to use, I really don't, is that Stein could get more votes campaigning in New York or California or New Jersey and so on, in safe states, than she can in the contested states, in the swing states. And she could do it without any fear or any risk that she would be hurting Green future prospects by alienating lots of people who think that the strategy of running in contested states, in the swing states, is just despicable. I don't understand why that doesn't cause her, forget the people who are voting for her in a swing state, why it doesn't cause her to want to campaign more in large states, where to vote for her is not to risk Trump, and therefore she has every plausible possibility of picking up more votes per hour expended, I suppose you could say, than she does in a swing state.

Speaker 1:

Last time I also talked about something that's called clickbait, and again people thought: Michael, how could you not understand that? Well, I thought I was clear; maybe not. I understand why the perpetrators do it. I understand why, if there are ads to be shown to a public, perpetrators of clickbait just want you to see the ads and don't give a damn about anything else, and therefore they will use the title. They will go as far as they can go in the title to get you to click, as long as it won't so alienate you that you'll never click again. And it isn't just corporate advertisers seeking profits who do it. Sadly, I think the left is spending a certain amount of effort on generating clickbait as well. In any event, that's not what I don't understand, however vile I find it, and I do find it just incredibly vile. It's so out front. It's a bit like Trump. There's no hiding it.

Speaker 1:

If you go on YouTube and you're seeing this array of videos, the titles are almost all clickbait. Oftentimes the titles tell you something is going to be in the video and it isn't even there. It may not even be remotely there, or it's there to a very small degree, you know, one minute out of ten. It is clear what's going on. So what don't I get? What I don't get is why we click on clickbait. It's our side of it that I don't get. I don't get why it works. But it does, and I can tell you, as embarrassed as I am to admit it, that it does work, that it worked on me when I started watching YouTube videos. I was watching them earlier because of what I would watch.

Speaker 1:

Well, I happen to be interested in physics and various science stuff. I would watch good videos, and there are really some incredible ones on such topics. I would watch some sports videos where I wanted to see the highlights of something. If you like tennis, for example: there was just a championship in China and the final was Alcaraz against Sinner, I think the two best tennis players in the world now and probably for the next ten years, and the play was just incredible. I like watching that. Okay, but as clickbait encroaches, it starts to be the case that even those things have an element of clickbait. But I also started to watch videos about the election, and there the clickbait is just huge, and it works.

Speaker 1:

I would click on stuff. I would be annoyed. I would click on another one. I would be annoyed. Two days later I would click on one. Why the hell am I clicking on this shit? Why do I keep doing it? Is it because, I want to be honest, they've got a hook in my head? I have no idea, but it's hard not to click. And finally, and this one is not going to take that long.

Speaker 1:

There's the topic of AI, artificial intelligence. I may have mentioned it a little last episode; I don't remember, but I don't think so. It comes up now because of an experience that I had this past week. Steve and I both sent the Q&A (I hope you'll give it a look) to various people, and one of the people Steve sent it to, an old friend of his, fed it into an AI product. It's an AI large language model project, let's call it, I don't know exactly, and it exists. It describes itself as being a helpmate for people, in particular for writers. So, for instance, you can write something and you can feed it into this thing and ask it to do various things to it. You can have it do things for you, go out and do research for you. You can have it do other stuff. But let me get to the heart of this.

Speaker 1:

First, AI itself (and I've done a few old episodes on this) does cause people consternation and, I think, rightly so. Back when I was doing those episodes, a lot of people had that consternation, and it took two forms mainly. The first form was that AI would attain a capacity for independent activity, artificial general intelligence. It would be as capable as humans in all manner of activities, and more so, indeed a whole lot more so, and it would go rogue. So the worry was that the AIs would escape human control and then hurt us or annihilate us.

Speaker 1:

Another version of concern about AI was different and already happening, and that is nefarious use of it. So, for example, the creation of false information, of videos of actual people saying things and doing things that they've never done. You can see and imagine how that could be operative and effective during election campaigns. I suspect that we may be about to see quite a bit of it in the next two weeks or three weeks, whatever it is until the election. So there was the rogue AI that people were worried about, and there was the nefarious AI, the bad use of AI. And there are other bad uses that people would worry about: using it not to cure cancer but to create biological weapons, for example, things like that. And I hate to say it, but, barring one technical possibility and one social possibility, both those dangers are real. The social possibility is obviously that societies get a grip on AI, on control of AI and on limits on what uses AI can be put to. It's a long shot, but it's a possibility.

Speaker 1:

The other possibility is that, while AI's growth of capability has been incredibly startling and consistent now for a considerable period of time, and accelerating, maybe there's a lid. I don't want to get into the technology, but basically think of an AI as a ton of numbers. You can increase the number of numbers (this isn't precise, but it's ballpark), and you can increase the amount of information that you train it on. So there are two ways that you can sort of increase its scale: training it on more information, or giving it more nodes, we'll call them. And maybe it will hit a plateau, and capabilities will no longer increase as those involved add to the number of nodes or add new information. Or maybe they'll simply run out of information to add: it's already been trained on virtually everything on the internet, and now it's being trained on all available video and so on. But here's the thing I don't get, because I get all that. And I'll tell you, I am worried less about it going rogue, although not entirely. Initially, when AI was first accelerating due to the large language models, back when I did some sessions on it, computer scientists, people who were working in the field of AI, put forward a number of things (I'm not going to be able to remember all of them) that should not be done technically, to enhance or enlarge AI possibilities, without a tremendous amount of preventive clarity and control.

Speaker 1:

So, for instance, one was that AIs should not be allowed, on their own, to run wild, to go free, so to speak, on the Internet. They shouldn't be allowed to do that, to cross all those borders, etc. They are now allowed to do that. A second one was that AIs shouldn't be able to program other AIs. You can see where that could lead. That should not be a built-in capability, and it should be blocked. It is now a built-in capability, and it is not blocked.

Speaker 1:

So those two things give some indication that the idea of it going rogue is no longer constraining. It still exists as a criticism, but it doesn't seem to be constraining the people who are working on it, which is worrisome. The nefarious part definitely still exists as a criticism too, but again, there's not very much being done to prevent it. And then there's a kind of criticism not of it being used by bad actors, and not of it going rogue, but of it even when used by good actors. So, for instance, one form of that criticism, which did exist in great numbers and is now declining, even though the danger is still exactly what it was, or worse, was a worry about the impact it would have on jobs and, as a result, also on distribution of income and distribution of circumstances. That still exists, but it isn't being voiced as much, as best as I can see, inside the community of people who actually work on AI, as it was earlier. I don't know exactly why that is, not least because one of the jobs that's being decimated is programming, that is, you know, computer programming. But there's still another concern, and I find that almost nobody voices this one, and that's where I'm confused. I'm confused why that is so.

Speaker 1:

The last concern is that as AI becomes more and more capable, it will, well, how do I put this, do more and more human-like things, and it will reduce what humans do, by large amounts, to less and less human-like things. And this brings us to the experience that I had this past week. So this past week Steve sent the Q&A, remember, about 6,000 or 7,000 words, something in that vicinity, to a friend, and the friend had on his computer the capacity to work with NotebookLM, this new Google program that, as best I can tell, is at the cutting edge in terms of certain kinds of capabilities. And so what this friend did was to load the 6,000-to-7,000-word Q&A into NotebookLM. And that's what you're supposed to do. That's the kind of thing that people using it (and you can go access it, you can look it up online and find NotebookLM and do things like this on your own) are supposed to do. So he put it in. Writers, for example, are supposed to put all their writing in. So somebody like me is supposed to put in everything I've ever written, books, articles, whatever, and then what I've got, apparently, is an entity which not only lets me instantly access even what I've totally forgotten; it lets me sort of cull from it, it lets me do all sorts of things.

Speaker 1:

But back to the Q&A. Steve's friend said: create a podcast using the Q&A and evidencing the Q&A's points. So not critiquing the Q&A, but putting forth the Q&A. Well, it did that. It did it in about five minutes. So in about five minutes it took in (I hesitate to use words like read, but it inputted) the Q&A and it developed a podcast. What kind of a podcast? Two people, one man and one woman, discussing the issues of the election, but guided by the content of the Q&A.

Speaker 1:

And here's the thing, and it sort of makes me nauseous to say this, but it did it very well. The voices were indistinguishable from human voices. They were AI voices, but you could not listen to this, or at least I could not listen to this, and say to myself: oh, that's an AI that did that. It was human-like. It had humor, it had intonation; it had more of that than I have, right? I used to do these podcasts at times with a friend, Alexandria Shaner, and we would banter a bit and we would try to address the issues that we were trying to address. So it was like that: the two AI manifestations, the two AI alter egos, would banter and would present the content. So that's already mind-boggling enough.

Speaker 1:

But the second thing that was mind-boggling, I have to tell you: I don't think I know anybody who could take the Q&A and so closely embody its every argument, as intended, as accessibly (maybe more accessibly than the Q&A itself), and as effectively. So it wasn't just that the style was startling, that the words were startling, and that the bantering was startling, all of which is true; it got the content right too, and it did it in five minutes. And I couldn't do it. If Steve and I tried to turn it into a podcast in which we were bantering back and forth, and amusing and, you know, entertaining, at the same time as we were getting across all the information, it would not take five minutes. I'm not sure we could do it at all, but it would certainly take a long time.

Speaker 1:

So what's the problem here? What is it that I don't understand? What's confusing? Many people would say terrific, fantastic. It's like a helpmate. It's like something that gets something done which you want to have done, and it does it really effectively and it does it well and it does it fast. All that's true.

Speaker 1:

What bothers me is that it is doing things that are what makes humans human, and it's doing them fast and it's doing them effectively and it's doing them well. That doesn't mean it does everything, but lots. And so, as it does more and more, what do humans do? How many humans still multiply numbers and add them, big piles of them? Nobody, because calculators do it so well and so quickly that we use the calculator. That's not a problem in that case. It's like what people say, I think.

Speaker 1:

But in this other case, where the AI expresses content for us, where the AI (and it also does things like this) plans our days and weeks for us, where the AI answers all our questions and then starts providing us answers without us asking, and so on and so forth, is what's happening that we're becoming more human and the AI is an aid? Or is what's happening that the AI is taking over more human functions, the ones that make us human, and we're becoming more machine-like? I wonder why other people don't have that concern. Not a concern that it goes rogue, not a concern that it's intentionally misused, not a concern that it has unintended consequences, like on jobs and on employment, but a concern that it has attributes which humans will seek out and will celebrate while those attributes hurt humanity. Sort of like heroin or any other drug that's addictive. So I'm also confused: is it an inexorable process? First of all, will the capabilities keep going up, or will there be a point of diminishing returns on adding nodes and on adding information? You notice, I don't describe it getting stronger as a function of innovative theory on the part of AI scientists that unleashes new potentials, because that's not what's happening. Nobody understands that kind of thing in a way to implement that kind of possibility. The only thing that people understand, as best I can tell, is that scale yields speed, accuracy, breadth and so on, by means we don't really fully understand.

Speaker 1:

Okay, so if people want to explain to me what I'm missing, what I'm missing in the Mideast, what I'm missing in the election, what I'm missing in clickbait, what I'm missing in AI, please do so. I would like to be corrected. For one thing, I would love it to be the case that the correction would be a brighter image than the one that currently inhabits my own view of the situation. Here's how weird the AI thing, NotebookLM, is. Some of you probably realize, or have guessed, that I'm working on turning the episodes that were NAR, N-A-R, Next American Revolution, the episodes that were relaying the oral history of the next American revolution, into a book, and I'm making, you know, a whole lot of changes.

Speaker 1:

I'm doing all sorts of things to try to make it a novel, obviously a novel of a very strange sort, an oral history of a future revolution. And when I listened to the podcast based upon the Q&A, like everybody else, I thought to myself: holy shit, I could feed the, I don't know, 180,000 words (not 6,000 words; that's a guess) of the oral history into this thing. And you could feed that much; you can feed way more than that. And then I could ask it to do various things to it, things that I can't really do well, talents that I don't have, that maybe it would display. And you notice how, sort of, well, I don't know. Maybe everybody thinks: great, do it, do it, we want to read the result. And others may think: good Lord, is that really where we're headed? So, okay, that was all a bit loose.

Speaker 1:

I did the last episode, and I'm doing this one, partly to mirror what I think is probably going on in many people's minds. I'll finish up with this point. It's certainly true for some of my friends, a lot of my friends, a lot of people, that is: people are very upset, for obvious reasons. The war in the Mideast, the mayhem and the human travail and suffering, and also the trajectory of it and the confusion of it, the difficulty of explaining it. There's the election: the incredulity that most must feel that it's still close, or seems to be still close. How is that possible? The tendency that people have is to say, well, it's possible because half the population is totally fucked up, which of course is more fucked up than that half of the population is, because it's a reaction that is disastrous if one wants to try to make the world any better.

Speaker 1:

The clickbait dynamic, which, basically, I mean, it's getting to the point where to lie is not only normal, it's expected, and if people don't do it, they're considered naive or foolish or wimps or something. Lying is becoming the way you deal with even daily life, much less hiding the fact that you've engaged in a crime, you've stolen stuff, or, I don't know, whatever the usual notion is of where people would lie. But nowadays I think people lie about almost everything. And then there's AI. Sorry for the downers; maybe you can lift it up in your response. And all that said, this is Mike Albert signing off until next time for RevolutionZ.