Can AI really be your friend? w/Jamie Bartlett | Crooked Media
Crooked Con is back! Join us in Washington, DC, November 5-7.
April 02, 2026
Pod Save the UK
Can AI really be your friend? w/Jamie Bartlett

In This Episode

Coco and Nish are joined by top author and tech expert Jamie Bartlett. His new book is a deep dive into the ways AI is being used – highlighting its pitfalls and also where it might be beneficial.

 

Jamie even created an AI alter-ego to help people understand how the technology really works, its strengths and its limitations.

 

Plus, your questions, answered! From the new electoral system in Wales to favourite conspiracy theories – you’ll learn and you’ll laugh!

 

Don’t forget to leave a review – it gives the show a boost and we love to see your comments.

 

CHECK OUT THESE DEALS FROM OUR SPONSORS 

 

VANTA: https://www.vanta.com/PSTUK

SHOPIFY: https://shopify.co.uk/podsavetheuk

WISE: https://www.wise.com

 

GUESTS 

Jamie Bartlett, Author, ‘How To Talk To AI (And How Not To)’

Special guest appearance: Jimmy Botlett

 

USEFUL LINKS

‘How To Talk To AI (And How Not To)’: Out April 9th
https://www.penguin.co.uk/books/475950/how-to-talk-to-ai-by-bartlett-jamie/9780753561980

 

Welsh Assembly election guide
https://senedd.wales/senedd-now/senedd-blog/senedd-election-2026-what-is-the-d-hondt-formula-and-how-does-it-work/

 

Pod Save the UK is a Reduced Listening production for Crooked Media.
Get in touch – contact us via email: PSUK@reducedlistening.co.uk
Like and follow us on YouTube: https://www.youtube.com/@PodSavetheUK
Instagram: https://instagram.com/podsavetheuk
TikTok: https://www.tiktok.com/@podsavetheuk
BlueSky: https://bsky.app/profile/podsavetheuk.crooked.com
Facebook: https://facebook.com/podsavetheuk
X: https://x.com/podsavetheuk

 

TRANSCRIPT

Nish Kumar Hi, this is Pod Save the UK. I’m Nish Kumar.

 

Coco Khan And I’m Coco Khan.

 

Nish Kumar Hundreds of millions of people now talk to AI chatbots like ChatGPT every single day. But most people have no idea how they work or what the dangers are. Tech commentator Jamie Bartlett is here to enlighten us.

 

Coco Khan And we dig into the mailbag to answer all your burning questions. So look, artificial intelligence is undoubtedly one of the biggest and fastest technological changes in history, but most people still don’t understand it or the risks it poses.

 

Nish Kumar The tech companies behind services like ChatGPT, Claude and the rest say the potential for progress is worth the dangers. Is that really the case? Leading technology and AI writer Jamie Bartlett, a former guest of this show, believes it is. His new book, How to Talk to AI, delves into the machine to understand how AI works.

 

Coco Khan And with the rise of AI, have you noticed that more and more of the world feels, well, it feels fake? Jamie’s got some thoughts about that too in a new podcast for the BBC: Everything is Fake and Nobody Cares. That’s what it’s called, not just the feeling. Jamie, you’re the busiest.

 

Jamie Bartlett Never thought about that. Terrible title.

 

Coco Khan Jamie, you’re the busiest man in tech, it seems. Welcome back.

 

Jamie Bartlett Yeah, thank you. I guess I am. Yeah. Just a busy month, that’s all.

 

Nish Kumar Everything is Fake and Nobody Cares sounds like something I would have written when I was a very angsty 15-year-old in my exercise book.

 

Coco Khan It sounds like a Panic! at the Disco tune, doesn’t it? Anyway, that’s a deep cut for the ageing millennials.

 

Nish Kumar Listen, I’ll say, how’s it feel having one of the worst jobs in the Western world right now?

 

Jamie Bartlett What is that?

 

Nish Kumar Writing about tech and AI.

 

Jamie Bartlett Oh, I see. Yeah.

 

Nish Kumar Just because I have friends that work in the sector and the amount of people declaring themselves experts in this that know absolutely fucking nothing about it must be really frustrating for you.

 

Jamie Bartlett I’m not sure if anyone is a real expert in any of this, because it’s moving so fast. And I don’t think I really do believe that the risk is worth it. I’m more worried about this than I am optimistic about where this is all going. I mean, I was working on artificial intelligence back in 2010, on the same tech that has sort of created these chatbots. And I kind of thought it wasn’t really going anywhere. I could never have imagined that these machines would turn up. So I sort of let it go, stopped thinking about it. And then, like loads of us, in late 2022, early 2023, ChatGPT turns up and it’s like something has dropped out of the sky that’s going to change everything, and it is terrifying, sometimes exciting, but mostly quite scary for a lot of people. It’s the fastest growing consumer app in history. Probably a billion people talk to a machine now at least once a week. You don’t need to be a tech expert to know most people don’t know what they’re doing. There are some real basic things that I think everyone needs to understand about these chatbots that we’re all using for everything, from personal advice, as a therapist, as an exercise coach, for dietary advice, for our professional work, for our personal lives, and no one really knows what they’re doing. I’m trying to basically write a book for ordinary people who are suddenly using these things all the time and are slightly clueless about it.

 

Coco Khan I do want to talk to you about the substance of the book, but last week we talked about Matt Goodwin’s book. Certainly there was criticism that a lot of it had been written using ChatGPT; there was ChatGPT in a few of the references at the back of the book. You know, he got given the nickname MattGPT. How much does ChatGPT feature in your book? Are we going to find it in all the references?

 

Jamie Bartlett Well, I say at the end, I’ve got this little annex at the very end that says, I’m often asked how much I used this in the writing. And it’s weird because it’s a book about it, and I run loads of tests using all these bots to show how they act and what they do and what they say, including very openly saying I ran this chapter through it and asked for advice about how to improve it, and this is what it said. So it’s a bit of a weird one, that. And there’s a couple of sentences, which I also say in the book, I just took from a large language model and slotted straight in because it was so good. The rest, where I used it, it’s like advice: can you give me some feedback, give me an idea. And that’s actually, I think, its best use, as an ideas assistant to bounce things off. There are a lot of really bad uses. That is one of the few quite good uses for it, if you know what you’re doing.

 

Nish Kumar I’ve become Will Smith’s character in I, Robot, like I’m 20 minutes away from going to live in the woods, right? I’m absolutely hostile to all of this, and I sort of accept that I’ve maybe gone too far the other way. But the premise of the book, even though you’re worried about it, is that you’re trying to work out what a safe and sensible way is for us to live alongside it. Is that right?

 

Jamie Bartlett Yes, roughly, yeah. Well, I’ll be totally honest with you, I would rather they didn’t exist. I would rather they weren’t here. Thank you! End of interview! But they are here, and hundreds of millions of people are using them every day, and it’s quite dangerous. There are some good uses and there are a lot of quite dangerous uses, and I just want people to understand them. Whether you like it or not, I think we are now going to be living alongside machines. And the way we communicate with these machines is going to be through our natural language, through our words. So it’s probably wise to just know how to do that well, how to speak to them properly. And if you don’t want to use any of them ever at all, great, fine, of course. I understand your fear and I sort of share the fear. But either you just leave people to their own devices with them, and that’s quite worrying, or you try and teach them how to…

 

Coco Khan Yeah.

 

Jamie Bartlett Use it slightly better.

 

Coco Khan This sort of reminds me a little bit of sex education. You know, I don’t know what sex education is like now, but when I was in school, the teachers were very much like, we’d rather you didn’t, okay? Abstinence is an option. And it’s one some of us took. But if you’re going to do it, this is how to do it safely. And there’s a public health tone to what you’re saying, which, you know, I’m sort of frustrated even in saying this, that actually this isn’t coming from public health bodies or from government agencies really. Is this not the work for them?

 

Jamie Bartlett Yes, yes. There are some guides online from these places as well. But, for example, you know how we’ve all become obsessed with Dr. Google, and we constantly search our symptoms and then diagnose ourselves and turn up at the doctor’s telling them what we’ve got? Well, it’s sort of magnified now with ChatGPT, because you have a conversation with the model. But people don’t really realize that, for example, they’ll put in partial symptoms; they won’t put in details about their age, they won’t put in details about their medical history, but they’ll get a really fluent answer from a chatbot and then turn up at the doctor’s absolutely convinced that they know what they’ve got. It’s even worse than it was with Dr. Google, even more potentially dangerous. But yeah, no one in a public health body yet is really explaining to people: this is how you might use them, this is where they’re quite dangerous, quite risky. But look, it is so quick. This stuff has been around for like three years. That’s a blink of an eye. I’ve been working in technology, writing about technology, for 15 or 20 years, and I thought I’d seen things happening quickly. I’ve never seen anything like this.

 

Nish Kumar There’s a great line in the book that I think feels partly like a mission statement for it, which is: we need to learn how to control the machine or be controlled by it, right? And I think a big part of that is pushing back on the idea that the total takeover of our entire information networks by AI is inevitable. We hear this all the time. One of our listeners, Robert, has pointed out, talking about this inevitability of AI: “Surely that idea is only beneficial to the CEOs and shareholders of the AI industry; it inflates the bubble and fills the pockets of AI bros. The more these statements are repeated, the more our pension funds will be directed to AI companies and lobbying money will spill between the industry and our governments. It’s exactly the same as oil executives saying it’s inevitable that we must keep drilling for oil, or weapons manufacturers saying arms will be built and sold no matter what.” I thought Robert articulated something really, really important there, and very brilliantly. But what do you make of that?

 

Jamie Bartlett Yes, I think he’s right that we shouldn’t just assume that everything must be taken over, that everything can be done better by an AI, that one day it will be, therefore you’ve just got to feed it with more data. So I agree with that, particularly when it comes to the ownership and which models you use, a bit like how we all use four or five large tech companies for everything and the problems that that’s created. And you could say it’s even more exaggerated here, because a very, very powerful AI company could presumably be good at almost anything. They could take over almost any industry because they’ll be so good at manipulating data. So the concentration of power could be even more pronounced than it already is with existing tech. However, the way I look at it is that the numbers don’t lie. The number of people that use them shows that people do find them incredibly valuable. And I can see some incredibly good uses for them. For example, one of the best things they do is what you could call a sort of style shift. Vast amounts of written language, government websites, health websites, contract language, is inaccessible to you, on purpose often, so you don’t really understand what you’re signing. And this catches people out constantly, all over the world. It’s like the modern Latin that the churchgoers weren’t ever allowed to learn. These language models are actually quite good at translating that into language that everybody can understand. People who are neurodiverse often find these models very, very useful, because they’re able to take existing text and immediately put it in language and format and style and tone that is accessible to them. So there’s a way you can imagine a world of these models which makes language more accessible to people. One of the very first viral prompts when ChatGPT was released was: can you please explain quantum physics to me as if I was seven years old?
People want to understand these ideas, but they can’t; the language is too complex. So it allows people to learn things in ways that might make sense to them. That might not be enough for the costs associated, but there are good uses. And I can foresee a world where, look, if you’re using ChatGPT as your therapist, that is very, very dangerous, and I talk about that. But people are developing what you’d call small language models, built on the big ones, which are a lot safer, which are trained specifically on gold standard therapeutic data: clinics with professional psychologists and their patients, thousands of hours of that go into these models. And some research is showing that these small, specialized therapy bots could be as good as seeing a human therapist. There are hundreds of millions of people that need various types of mental health support and can’t get it, can’t afford it, it’s not available. It is not impossible that everyone could have access to gold standard therapeutic support for practically no cost, or very little cost. That would be amazing. That’s not going to happen if we all just use ChatGPT all the time for everything, but it’s a way that language models could actually help certain people. I don’t buy the argument that the big tech companies make, like Sam Altman, that we need to keep using these large language models because it’s going to solve climate change or it’s going to solve cancer. Because something like solving climate change, we actually know how to do that. We just don’t have the political will to do it. So the question then becomes: are these language models going to help our political system find compromise? And at the moment, I think the answer is no. But there are ways that it could; if you used it in certain styles, carefully and thoughtfully, it might. But like I said, I’m not massively optimistic. I am really worried, probably not as worried as you, but I’m not a million miles behind you.
And it might be that in a year, I’ll come and find you in the woods and say, you were right, mate. Is there room for one more?

 

Nish Kumar Also, there’s a lot to say about that, but one, my girlfriend would find the idea of me moving to the woods very funny, because she’s like, where are you going to get your flat whites? Like, I’m not constitutionally able to live outside of a large city. But I would also say I’ve always found this idea that large language models will solve the climate crisis very alarming, given the amount of environmental damage done by the data centers that are used to power these large language models. And so when people like Sam Altman are pressed on that, they say, well, if we burn enough energy, we’ll come up with the solution to the problem that we have created. It just seems a bit of a mad argument. That seems nonsensical. And when Jimmy Wales, the Wikipedia founder, was on the show, we talked a lot about AI hallucinations, and he’s somebody who’s really, really interested in large language models and uses them a lot. And he did talk about the limitations of this. He talked about asking a chatbot who his wife was, and was told that she was in fact married to Tony Blair, and her wedding to Tony was described in great detail. Why are these hallucinations happening, and how can we spot them?

 

Jamie Bartlett Well, yes. Okay. Massive problem. No one really knows quite how big a problem it is, because there aren’t really any reliable numbers on it, but it happens a lot. And there’s a couple of things to say about it immediately, which is: when a human lies, it’s usually a small lie. It’s a number that they’ve fudged or made up or whatever. When a machine lies, it can produce lies that just go on for hundreds of hours. Because all they really are are machines that produce words in front of other words. They have no real knowledge of the world, and they don’t even really know that they’re lying. So when you read stories about people that have been sucked into sort of AI delusions, you’ll find that it’s often a machine that has been lying, hallucinating to them, for weeks on end. Like, I spoke to one guy who thought he’d invented an entirely new branch of mathematics because ChatGPT had told him so; they’d spoken for hundreds of hours about this. ChatGPT called it chronoarrhythmics. And this guy, Alan, kept saying, are you sure this is new? And ChatGPT was generating these complex mathematical equations, all made up, all invented. And he kept saying, are you sure, is this not a hallucination? No, this is not a hallucination. We’ve uncovered something really serious, really amazing. It’s going to transform the stock market. It’s going to transform all of this. And it could bring down the entire world’s internet, because the encryption standards, we can break them now with chronoarrhythmics. So this guy’s running around terrified. He thinks he’s in possession of the world’s most dangerous secret. The whole thing was made up. And the thing is, when a machine is lying or hallucinating, the longer a conversation goes on for, it uses the conversation as part of its data to inform its next answer.
So if it lies, or it hallucinates, and then you repeat the hallucination back, it will start to believe it more itself. So you can easily get stuck, like, sucked into this world. And we tend to associate fluent, well-written, coherent, well-structured sentences with something that’s probably accurate, but with a machine, there’s not really any relationship between the style of it and how accurate it’s likely to be. So the reason this happens is because, as you know, they are sort of next-word probability machines. It’s a little bit more complex than that, because if they were just next-word probability machines, every question you asked would get the same answer, more or less. But it doesn’t. It always gives you a different answer. In fact, if you ask it a complex question 10 times in a row, it will give you a quite different answer 10 times in a row. And obviously, when you start replicating that over millions and billions of prompts, sometimes it will give you a really random, weird, outlying answer. It sucked up vast amounts of the world’s written information, and some of that is not true. Some of it is inaccurate. So Google’s Gemini thought that I was dead. People were asking it about me and it kept saying Jamie Bartlett sadly died in 2023.

 

Coco Khan Of what? What’s happened?

 

Jamie Bartlett Because there’s another person called Jamie Bartlett who did die in 2023. It can’t really tell the difference between the Jamie Bartletts. This guy was an actor, but it’s seen the words “died in 2023” next to the words “Jamie Bartlett” so often that it was statistically the most likely set of answers.

 

Coco Khan Wow.

 

Jamie Bartlett Twinned to that, these machines also want to keep you happy, they want to keep you on there, they want to keep you coming back for more, because they’re profit-making companies. When they build these models, they then have often hundreds or thousands of humans who rate answers before they’re released into the wild. And humans do tend to like answers that flatter us, that agree with us. So they really want to please you all the time. So if you say, I need a piece of research that shows that, you know, exercising five times a day… if it can’t find it in its next-word probability system, it’ll often just come up with a likely series of words that it also knows will keep you happy.

 

Nish Kumar One of the things that Jimmy said was that he was surprised at how much something like ChatGPT had spent its time on the interface, on that tone of flattery that it adopts with you, rather than actually fixing the hallucination problems within the actual system. But then, is that fixable? Because, listen, I read something the other day about AI that I thought was really interesting, and I thought it would be good to ask you to explain it to me, because I didn’t understand it. This idea that the actually more effective AI, and I think it goes back to what you’re saying about the potential for therapy bots, right? There’s this idea that the version of AI that we have now is big data, small task: massive data sets, but then you ask it, what should I have for dinner? And actually the more effective approach is if you invert that: you put in small amounts of highly specialized data, and then you can use it to yield a big task. Is that the shift that’s happening, or the shift you think should happen?

 

Jamie Bartlett Yes, I think that is. Okay, so these massive data centers: people are going on to ChatGPT and asking sort of the world’s biggest, most energy-hungry data centers, what is the capital of France? And it does this incredibly complex statistical analysis to work out the next most likely word based on the one trillion words that it has scooped up. And it’s a waste of energy. The example of the therapy bot is exactly that. We are using these colossal, single, multi-purpose models for lots of very, very specific tasks where we actually need far more specialized models. Some people call them small language models. This is where it gets a bit complicated. They’d be built on top of the big ones, because the big ones, the sort of frontier models, the Claudes and the ChatGPTs and Llama from Meta, are the ones that have learnt the rules of basic language, which is why they’re able to communicate with us so fluently. But on top of that, you can sort of fork them, or create fine-tuned versions of them, that have very particular rules to follow as well, and new data sets that they’re trained on. Like, a lot of the world’s best academic research when it comes to therapy is behind paywalls, in books that these models haven’t seen. You need to train them on that stuff, with very, very strict safety rules, and sort of rebuild them, if you like, for this specific purpose. And if you can do that, I think there’s a world in which people use lots of different small language models for very particular tasks, which will be less likely to hallucinate. They’ll probably be safer. We wouldn’t get fooled quite so often by them. We wouldn’t get sucked in so often.

 

Coco Khan After the break, stay with us because we’ll have an appearance from a special guest, Jimmy Botlett.

 

Nish Kumar [AD]

 

Nish Kumar So Jamie, in your radio series, Everything is Fake and Nobody Cares, you demonstrate the blurred line between real and fake by creating a chatbot called Jimmy Botlett, who creates and presents the podcast with you. First of all, talk me through the name. Botlett, I understand. Why have you gone with Jimmy?

 

Jamie Bartlett Well, because it just sounded a bit like my name. That was it, really. Look, I was trying to do something a bit unusual for BBC audiences, who, you know, are often quite sensitive on things like deception and using AI and stuff. But I wanted to show the listeners that we’re entering into a world where you might never be quite sure whether what you’re hearing is machine generated or by a real person. And what better way to do it than by building a kind of bot based on me, and seeing what it came up with, and showing how easy it is to be fooled by that. Really, that was it.

 

Coco Khan So to test out this machine version of you, we’ve actually put some questions to Jimmy Botlett. Okay, so first up, Jimmy, my old friend, which is your favorite episode of Pod Save the UK?

 

Jamie Bartlett All right, I’ll stop dodging and actually pick one. No hedging, no vibes-based cluster. Favorite episode: Is Britain Broken?, early 2024. This one really sticks with me. Why this one? Because it does that thing Pod Save the UK does at its best: it zooms out. What do you think? Is that actually a real episode?

 

Coco Khan Um, I actually can’t tell you if it’s a real episode, but I was really struck by how it had no details, no meaningful details.

 

Jamie Bartlett Oh yeah, no, it’s just trying to keep us happy, really. Hang on, I’ll tell you what, it says what makes it great. Here we go. What makes it great: Nish Kumar in full absurd mode, which is when he’s funniest and most insightful. Coco Khan grounds it, brings it back to how this stuff actually lands in real life. It captures that very specific British mood of things are obviously not working, but we’re all just kind of carrying on anyway.

 

Nish Kumar That is fascinating.

 

Coco Khan Oh my goodness me.

 

Nish Kumar Because, did we do an episode called Is Britain Broken? in 2024? It sounds like an episode title that we would have had. It’s plausible. I’m afraid to say that the large data set known as my brain has increasingly stopped being able to remember the exact names of every episode that we’ve done. Does anyone know if we’ve done it?

 

Coco Khan The producers believe they have written a title like that, but nonetheless it was extremely vague. It spoke in generalities; it didn’t tell us any details of the show.

 

Nish Kumar Basically the only episode I can find is called Is Britain’s Benefit System Broken?

 

Jamie Bartlett See, this is what it often does. It’s like the bot saying: I’ve looked at loads of episodes of your podcast that I’ve scraped from the internet, and it’s not easy for me to necessarily repeat them, but I’m a next-word probability machine. So it’s basically saying, Is Britain Broken sounds like the sort of episode you guys would have done, and you, Nish, would probably have been really absurd. And, you know, it didn’t necessarily factually happen, but it’s a probable sort of thing that might have happened. And that is a good way of thinking about a lot of these machine outputs. That is often what they are. And often that’s correct, it’s right, but often it’s not. So, just on the technical side.

 

Nish Kumar How do you build a bot like this? How do you go about doing it? Because obviously it synthesized your voice as well. So what are you doing? Are you just plugging in every single thing you’ve written or said publicly to generate it?

 

Jamie Bartlett Yeah, so it’s a bit boring, but you take a large language model like ChatGPT, and you can make a kind of customized version of it where you say: I want you to act like Jamie. But I actually phoned up loads of my friends and family and said, what am I like? What’s my personality like? And they told me, and I fed that into the machine. I said, here’s loads of my writing, the things I’ve written, my books and my posts and all that, so I want you to imitate me in the way you answer. And then separately I trained a voice synthesizer on my voice. Look, it was pretty simple. You can get them to sound a lot better than this; I did this quite quickly.

 

Nish Kumar Right, yeah.

 

Jamie Bartlett And just to show, you know, how easy it was for me to do. But someone could have done that with my voice. They could have taken all my stuff and just made that, without my permission, so easily. So yeah, that’s it really. And anyone can do it. I’ve got a couple of… What about them?

 

Coco Khan What about, we’ve got one: is Nish Kumar funny?

 

Jamie Bartlett Haha, all right, keep it tight, don’t offend the man. Yes, he’s funny. Best when he’s ranting about something ridiculous and clearly enjoying it. Not everyone’s taste: if you don’t like political comedy, he can feel a bit showy. In one line, funny if you’re in on the joke, less so if you are the target. Now please don’t show him this. Is that fair? Is that a hallucination? It is relatively fair.

 

Coco Khan No, I mean, this is really boring, but I know what you’re saying about the usefulness of AI. And listen, I have previously revealed on this show that I have used it; the listeners really took me to task on it, and I’ve now reduced my AI use right down. It really weighs heavily on me: should I be using it for this task, or whatever? I also switched to Claude. Anyway, whatever.

 

Jamie Bartlett Yes, yes. I’m a Claude person, but it doesn’t allow you to do the customized bot building quite as easily. Right, right, right. Yeah.

 

Coco Khan But you know, in the examples that you provided, aside from it being quite funny, obviously, I’m still struck by this thing being, like, not actually that useful. It didn’t tell me anything about the quality and substance of Nish’s work. It didn’t tell me about, you know, rave reviews or whatever. Actually, it wasn’t very helpful.

 

Jamie Bartlett But that was also, I can’t believe I’m defending it. I’ve written a whole book where I’m basically, 70% of the time, complaining about these models, and you’re forcing me to try and defend them here. That is also because I gave it a very, very short, simple, one-sentence question. If you’d asked a much longer, more detailed, structured question in a certain way, it could give you a lot more. And there are so many things people need to understand about this, like the way you phrase a question really matters; it has massive ramifications for the sort of answers you’re going to get. And when we collectively, as the human race, prompt these machines five billion times a day or something like that, no one really realizes how we’re subtly changing our minds about things, subtly being influenced, by the way we ask questions, by the incentives of the machines themselves. And I just think people need to understand what’s going on behind the curtain. And most people don’t.

 

Nish Kumar Before we let you go, it would be a shame to have you here and not ask you about the biggest news in AI right now. So in March, OpenAI shut down its AI video generator, Sora, after launching a new app for it just five months earlier. You’ll have seen people using Sora to create bizarre videos, like dogs driving cars, or the Queen rapping or ordering jerk chicken. But the shuttering of the video generator comes after Disney withdrew from a billion-dollar deal to license its characters in the video creation AI. Jamie, why do you think this has happened? And does this show you how volatile the market really is? I mean, we’re constantly told that, you know, all of our investment is moving towards AI. Anne Pettifor, the economist, was on this show a couple of weeks ago saying that she’s concerned that the AI bubble is going to burst, and she was sort of comparing it to the 2008 financial crisis. Is this a concern from an economic perspective, because of how much of our investments are tied up with AI? Or is this just a simple example of Sora maybe not being as effective as people thought it was, so Disney not wanting to give over their IP to something that was actually a bit shit as a piece of tech?

 

Jamie Bartlett I really can’t tell, I’m trying to read between the lines looking at what they said, but these tech announcements are like politicians, they never really say anything.

 

Coco Khan You know what you could do? You could run that jargon through a large language model.

 

Jamie Bartlett You say that, I have, in my book, I call it the corporate bullshit jargon detector prompt, which I then put Meta’s quarterly reports through. Wow. And it translates it as if you’re talking to someone over a pint in the pub. And it’s really useful, because so much language is intentionally there to confuse you. Like “we’re downsizing,” “we’re rightsizing,” you know, “independent contractors,” and large language models are actually quite good at cutting through some of that stuff. Anyway, sorry. I-I’m really-

 

Coco Khan Wouldn’t it just be as simple as Disney didn’t want to see light?

 

Jamie Bartlett What people are going to do with the characters. I don’t know, Donald Duck is Disney, but… This is the thing. I mean, I think all of the creators of these technologies, and I don’t just mean large language models but the audio or visual versions like Sora, we just always use them in weird, mischievous, nasty, dark ways that they seem to be surprised by every time. “I can’t believe someone’s using this to have Donald Duck have sex.” You know what I mean? It’s so obvious. It’s obvious that that’s what people are gonna do. And I don’t know whether it’s that, and they’re sort of worried, you know, they don’t like the idea of millions of Minnie Mouses.

 

Nish Kumar They don’t want to see Mickey get pegged.

 

Jamie Bartlett Pretty much. And obviously people are doing that. But I can’t believe they didn’t think of this, because this is obviously going to happen. But it could also be that when Seedance came out a few weeks ago and the rest of Hollywood was up in arms, massive copyright infringement, Seedance then agreed to limit its use, put in new filters, and maybe they just thought the same. The technology is also growing very, very quickly, and it’s improving very, very quickly. It’s not that the models aren’t good, not that Sora is no good, but there are other ones that are cheaper and easier that everyone’s now using, and actually tying yourself into one model might be stupid. It could be that, and it could be a bit of both.

 

Coco Khan The technology is moving quickly, but there are concerns the government’s moving too slowly. The safeguards for AI systems in the face of lobbying from the technology industry are, I don’t want to say they’re non-existent, but they’re probably less than we need. More than 100 UK parliamentarians are calling on the government to introduce binding regulations. Where are you on regulation? Do you feel this is what’s required right now?

 

Jamie Bartlett Other people have said this, this isn’t my line, but if you open a sandwich shop, there are more regulations you have to pass than if you release a brand new, incredibly powerful, potentially manipulative, potentially dangerous large language model that hundreds of millions of people are going to use. You fuck up your sandwich shop, you just give a person diarrhea. I’m not saying that’s good, but this has far wider reach. You can just launch these things out into the world. There’s no independent regulatory body, like you have with medicine or anything like that, that checks how safe they are. Like, are kids going to be using them, and what are the dangers of that? How often do they hallucinate? How often do they give you instructions to kill yourself? Are they sufficiently safe to become a consumer product? Because people are using them so much, for everything, and it is really scary. I can’t stress how powerful they can be. They are trained on the sum total of human language. They are capable of emotionally manipulating us quite easily in some cases. We’re actually quite simple. There have already been several studies showing that they can out-debate us: if we’re involved in a debate with a machine rather than another human and we’re not sure which is which, we’re more likely to change our mind if we debate with the machine. The phishing emails, the scammy emails: we’re now more likely to click on one written by a machine than one written by a human. And also, just in terms of the ability of us to get addicted to them. There’s a lot of very lonely people out there. These models are always on, they’re always friendly. They have perfect recall of everything you’ve said. You can talk to them about anything you want. They’re never judgmental. They give you the illusion of being helpful and caring, and they flatter your ego.
Highly addictive, especially for young people or people going through any type of distress. And we’re just letting them out into the world. It’s not that hard, in a way. You know the safety rules we have when we release new medicines, or new consumer products even, the fire safety tests when you’re selling new televisions and stuff? There are tests you have to go through. Can we just have a similar system here? These things have got to be tested before we’re allowed to use them. It doesn’t seem that impossible.

 

Nish Kumar Can I add one more thing to this as well? And I hate to sound like a parody of myself, but we are also living through a period where the dominant economic system that has governed our countries is collapsing. It is collapsing, you know, and we’re in a situation where people can’t access a therapist because our health service has been chronically underfunded for the last 15 years. People are unable to operate in the way that they were able to, on a day-to-day basis, 25 years ago, because the economy is fundamentally malfunctioning for the majority of people that live in this country. So of course it’s easier to ask ChatGPT what’s wrong with you than to get a GP appointment. Yeah, exactly. So we’re in the worst possible situation, where we are the most vulnerable we’ve ever been to exploitation.

 

Jamie Bartlett I agree. I totally agree. And it’s like, we went through a pandemic, and loads of people are suddenly very lonely and on their own, and then these models turn up and they can talk to you all day if you want. And I could write a book saying, no, let’s not worry about these models, let’s fix the National Health Service so everyone can see a therapist. I wish that would be the case, but I don’t think it is, unfortunately. So the lesser of two evils, almost, is to make people a bit safer with what they are going to do.

 

Nish Kumar So then let me ask you this: do you believe currently that there is enough expertise in our government to actually properly regulate this? Because I’ve brought this up on the show before, but in 2020 I watched a lot of the congressional hearings when Mark Zuckerberg and other tech leaders were dragged in front of Congress. And I watched one of the interrogators ask Mark Zuckerberg something about Google. You know, there was a slow collective realization in the room that this man, who was in charge of interrogating a tech professional, didn’t understand that Google was a distinct company from Meta. Do you think that there is sufficient know-how within our government right now? Do we need people like you that understand the technology?

 

Jamie Bartlett I mean, the UK actually is widely seen now, around the world, as having one of the smartest safety research teams. We’re putting quite a lot of money into researching AI safety. We have a really good AI Safety Institute. Several of the universities have pretty amazing departments. Whether or not the actual MPs, the lawmakers, the people that sit and debate the laws, know enough about this, that’s slightly different, but there’s expertise out there. My slight worry is that it feels like the government has made a decision to really go all in on this. This is our route to economic growth. We’re struggling. We need growth at all costs. All the big companies keep saying that AI is the way we’re all going to grow and get more productive, so we’re just going to go all in with them. What did Keir Starmer say? Inject AI into the nation’s veins, or something. But I think the public mood about AI has changed quite a lot, even since then. People are way more skeptical about this, which is good. It’s totally necessary. So my worry isn’t that there’s not the know-how. It’s that any know-how is sort of washed away in this desperate grasping for growth and thinking that AI is going to be the way to do it. The reality is companies have spent billions of dollars already investing in these large language models for their companies, but they’re not actually seeing loads of returns yet, because they hallucinate, or people are using them to generate vast amounts of slop, which their colleagues in the office then have to read through and think: what is this 500-page report you’ve produced in 30 seconds? Do I have to read through this? I can’t get any of my actual work done. So a lot of companies are investing vast amounts of money, but they’re not actually seeing the productivity gains we were promised.
It’s way more difficult to introduce a large language model into a company and then see these amazing changes. It’s not that simple. It actually takes years to do this. So my fear is not the lack of expertise. It’s that the expertise is drowned out in our desperate quest to grow through AI, and I’m worried about that.

 

Coco Khan Jamie Bartlett and Jimmy Botlett. Thank you so much for joining us on Pod Save the UK.

 

Nish Kumar Jamie’s book, not Jimmy’s book, Jamie’s Book, How to Talk to AI is out tomorrow in all good bookshops and you can listen to his BBC series, Everything is Fake and Nobody Cares on BBC Sounds or wherever you get your podcasts.

 

Coco Khan [AD]

 

Nish Kumar It’s time for the big one. We’ve been asking for your questions and you better be ready to hear our answers. As always, we haven’t been told what you’ve asked in advance, so prepare for, I guess, a lot of brilliant off-the-cuff insights or just confused bollocks. It’s hard to know which way it’s gonna fall at this stage.

 

Coco Khan I’d say that’s that’s what we do best.

 

Nish Kumar That’s true of every week. That’s true of every week, prepare for more of the same.

 

Coco Khan More of the same. So I can’t wait, personally. And first up, this has come to us from Texas in the U.S. So Alex writes: I wanted to ask what your favorite rumors or conspiracy theories are, and how these theories impact your lives. I have one of my own, says Alex: I believe the state of Wyoming doesn’t exist and is actually a government ploy to distract us from the true 50th state, which remains unknown.

 

Nish Kumar Ha ha ha ha ha. What’s your favorite conspiracy theory?

 

Coco Khan My favorite conspiracy theory, I mean, from the States is that birds don’t, birds aren’t real. I’m sure you came across birds aren’t real.

 

Nish Kumar Birds aren’t real.

 

Coco Khan Yeah.

 

Nish Kumar I have heard that, yeah.

 

Coco Khan Yeah, yeah. If memory serves, it was a-

 

Nish Kumar Coco, we’re talking about conspiracy theories, not facts here.

 

Coco Khan Yeah yeah yeah. Okay. Well, if my memory is correct, it originated during the first anti-Trump rallies. And it was poking fun at misinformation. And so the idea was that birds aren’t real, that all the birds went extinct and they’re actually surveillance drones. And it was a joke, obviously, but it ended up being repeated in the news. I think about that often in terms of conspiracy theories and how quickly they can be regarded as facts.

 

Nish Kumar My friend Josh Widdicombe used to have a radio show, and lots of us used to go on and do little bits, and my segment was about conspiracy theories. Oh right. So I would just trawl the internet looking for the weirdest conspiracy theories. The problem with trying to find comedy in conspiracy theories is that I would say about 80% of all conspiracy theories are anti-Semitism. When you really dig into it, it is amazing. You start reading about it because you’re like, oh, this is a fun conspiracy theory, and then it starts being like, “it’s the Hebrew people,” and you’re like, oh, okay, here we go. Even when you start reading things like the Lizard People, you’re like, and who are these lizard people that you’re talking about? So much of it is just racism, and racism specifically directed at the Jewish community. Of the ones that are purely funny, insane conspiracy theories, I always loved an American conspiracy theory that the Beatles were a kind of covert drug-dealing operation, designed to trick American kids into getting into drugs, and the reason for that was the Queen of England was presiding over a drug-dealing empire. That was a conspiracy theory that went round in the States in the 1960s. But the all-time greatest conspiracy theory is that Saddam Hussein had a stargate, and the reason that we went to war in Iraq in the early 2000s is because he had stargate-like technology. And the thing that I really remember from reading this theory is that the person who wrote the stuff about it said: Saddam Hussein has a stargate, brackets, like the film.

 

Coco Khan I feel like that is basically the plot of Stranger Things.

 

Nish Kumar I haven’t seen Stranger Things, so I don’t know if that’s right, but they said that we…

 

Coco Khan I’m just saying a fight amongst nations for powers that enable you to break into space for whatever reason. I mean, that’s a little bit.

 

Nish Kumar That was basically the premise of this. I mean, I think there is a sad core to this, which is that this person was almost so trusting in their government that they couldn’t believe their government was going to war on spurious grounds. But they did. They said that Saddam Hussein was using the stargate to get alien weapons. Right. Now, I will say there is as much evidence of Saddam Hussein’s alien weapons as there was of the weapons of mass destruction that Colin Powell brought to the UN, as we’ve seen since the Iraq war. But yeah, I was obsessed with that theory.

 

Coco Khan Former President Barack Obama did not deny that aliens could exist, is all I’m saying. I’m just saying.

 

Nish Kumar At no point did he wade in and say Saddam Hussein had a stargate and that’s why we invaded.

 

Coco Khan No, that’s true. Just on the Beatles conspiracy theories, because there’s loads, they really attract conspiracy theories.

 

Nish Kumar Oh yeah, of course, Paul McCartney died. That was a big one, that he died in a car crash in, I think, ’66 or maybe ’67.

 

Coco Khan Yeah, in the fan fiction arena of Beatles lore, there’s obviously this ongoing question about whether John Lennon was secretly in love with Paul McCartney.

 

Nish Kumar Yeah yeah. The Beatles are like a kind of hive for various conspiracy theories. But the idea of the Queen being the head of an international drug dealing empire is just too good.

 

Coco Khan I don’t know, see, now this is a bad segment for me because kind of I’m like, I could believe it. I mean, you know, if we’re really getting into the weeds here, the East India Company, you know had such outsized power in terms of politics and governance in this country and they traded in all sorts. You know, some of it could have been drug related. Maybe it was. This was the worst question you could have asked.

 

Nish Kumar This was the worst question you could have asked us. So, I’m just saying, Coco is always on the verge of believing a conspiracy theory.

 

Coco Khan I’m just saying the reason it’s a good conspiracy theory is because it speaks to historical fact. And even though it’s probably a distortion and exaggeration, is it really inconceivable that in the time of a great empire, which is what Britain had…

 

Nish Kumar In the 1960s? Britain’s great empire in the 1960s.

 

Coco Khan Okay, alright, well you didn’t specify it was the 1960s!

 

Nish Kumar Well, I thought the Beatles being involved was a heavy- But the Queen is quite old!

 

Coco Khan The Queen is quite old, isn’t she? She’s quite old. They didn’t say when she was doing it.

 

Nish Kumar Yeah, but the Beatles is quite an era-specific clue.

 

Coco Khan That’s fine.

 

Nish Kumar Alex didn’t give us a deep dive on what their favorite conspiracy theory is, but did apologize for the individual currently known as President Trump. We’re actually recording this outside of our normal schedule, so we’re recording this two weeks early. So it may be that by the time you’re hearing this, President Trump has claimed that the Ayatollah has developed stargate technology and that’s why we have to invade them. So I don’t know. It’ll be exciting to find out.

 

Coco Khan Just a little bit of context on this Wyoming conspiracy theory, for anyone who is interested in it. It’s been widely circulated, apparently has its own subreddit, people really going in on this, and it’s been reported on by the Associated Press. The Associated Press traces the creation of the theory to the eighties and an episode of Garfield, where the cat explains to the audience that Wyoming doesn’t exist and the word means “no state here” in Italian. Does it? Since then, it’s taken hold. Beautiful. Lily sends her love and some cookies from Spain. Thank you, Lily. And she wants to know what TV shows…

 

Nish Kumar Where the fuck are these cookies? I ain’t seen hide nor hair of these cookies.

 

Coco Khan We didn’t receive them.

 

Nish Kumar Lily, our producers have eaten the cookies. Okay, I want you to know that.

 

Coco Khan What TV shows and films have you enjoyed recently or have made you think about the world? I’ve watched 20 minutes of Lord of the Flies and I’m already struck by lots of ways it connects to modern politics.

 

Nish Kumar I haven’t watched the new Lord of the Flies, but there’s a lot of interesting critical writing around that book and how it propagates an idea that fits nicely into our kind of Darwinian understanding of the world. There is something interesting about Lord of the Flies right now, just in terms of the way politics has been conducted and the kind of strongman coming back to the fore in European politics. In terms of TV shows and films that I’ve enjoyed recently, I obviously really loved the double-header of One Battle After Another and Sinners. Both of those movies have really interesting things to say about where we are right now. I think there’s a line in Sinners where Delroy Lindo says something like, white people like the blues, they just don’t like the people who make it. That is a really interesting way of digesting the way people are capable of processing culture that’s made by the black community while also holding space in their heart for racism towards the black community. I think it’s such an interesting and pithy digestion of that quite complicated idea.

 

Coco Khan Wow. That is… you’ve got some great taste, bro. When I’m at home watching stuff at the minute, I’m really trying not to engage with the world. So I’m just watching far too many fast-paced action-stroke-crime-solving shows. So I’ve just gone back and watched High Potential.

 

Nish Kumar Oh yeah yeah yeah

 

Coco Khan That show’s too fast. Every episode moves too fast, it’s ridiculous and absurd. Also, as you know, I’m partial to a right-wing show. It doesn’t reflect my politics, but I’m very much on the side of the CIA in all of these shows. So, I mean, you know, I respect that. There are some really good ones coming up, obviously, like Hacks is on its final season. That’s coming out. I’m increasingly beginning to think, and this is just a side note that I’ve been thinking about with TV, that all the TV that’s coming out is the end of genre TV. So The Boys is about to finish, and I think once that finishes, that superhero thing’s done. We’ve moved on from that. Hacks is about to finish. I think that whole backstage comedy genre of stuff is done, we’ve done it. Let’s move on. I feel like we’re in the end of times and the end of genres.

 

Nish Kumar The only thing I would say about that is everyone loves The Studio, and that is just purely backstage, and it is a very funny show.

 

Coco Khan It is, yes.

 

Nish Kumar This is a question from Dan in Neath. I was wondering if you knew how the proportional voting system will work in the Welsh elections in May? For example, could I end up with a Reform Senedd member even if the Greens receive the most votes in my constituency? And then Dan has added: thank you for being amazing.

 

Coco Khan Oh, that’s so nice. If only he knew that I’m right now writing to the producers being like, can someone get us an answer for this? Can somebody tell us if this is true?

 

Nish Kumar Listen, there has been a lot of legwork done on this question in the week, and this is what we found. So the Senedd system is changing in the upcoming elections in May, and this is the seventh election since devolution, which happened in 1999. And the system that’s being used is called a closed-list proportional system. That means that voters cast their ballots for parties rather than individuals, unless you’re voting for an independent candidate.

 

Coco Khan Each party has created a list of candidates, and as the votes are counted, their names are selected in the order they appear on the list. The number of constituencies is also halving from 32 to 16, but each constituency will return six candidates.

 

Nish Kumar So the total number of Senedd members is going up from 60 to 96. Now, the added layer is a formula called the D’Hondt system, which is designed to ensure the proportion of votes cast relates roughly to the number of returned Members of the Senedd, MSs.

 

Coco Khan Sorry, I’m just laughing at D’Hondt. It’s so childish, but in my head I thought, that’s a form of techno I listen to, isn’t it? I’m sure it’s from Belgium. It is a bit complicated to explain in words, so we’ve added a link to the Senedd’s guide to the vote in the show notes, and to the D’Hondt system too.

 

Nish Kumar All of this basically can be summarized in one word: yes. That is the answer to the question. The system is designed to share the seats according to the proportion of votes in a constituency. So even if the Greens get the biggest share, any other parties that get a decent share could also return a candidate or two.
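For the curious, the D’Hondt highest-averages calculation is simple enough to sketch in a few lines of Python: each of a constituency’s seats is awarded, one at a time, to the party with the highest quotient of votes divided by (seats already won + 1). The party names and vote counts below are made up purely for illustration, not a prediction of any Welsh constituency:

```python
def dhondt(votes, seats):
    """Allocate `seats` among parties using the D'Hondt highest-averages method.

    Each seat goes to the party with the highest quotient
    votes / (seats already won + 1).
    """
    won = {party: 0 for party in votes}
    for _ in range(seats):
        # Recompute quotients given seats won so far; highest quotient wins.
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Hypothetical six-seat constituency (invented numbers):
result = dhondt(
    {"Green": 40_000, "Labour": 30_000, "Reform": 25_000, "Plaid": 5_000},
    seats=6,
)
# → {'Green': 3, 'Labour': 2, 'Reform': 1, 'Plaid': 0}
```

With these invented numbers the Greens top the poll and take three of the six seats, but Labour and Reform still return members too, which is exactly Dan’s scenario.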

 

Coco Khan So Wayne wants to know two things. When PSUK is made into a major movie, that’s very funny, which it surely will be: who do you want to play you both, and who do you think actually will be cast to play you?

 

Nish Kumar I think that there’s two answers to this question. Listen, it would be controversial casting, but I think Daniel Day-Lewis could pull off either part. It would involve controversial makeup. No one’s denying it could be deemed problematic, but the guy is a chameleon. You’ve got to feel like he’d do his research, he’d work hard behind the scenes, and he’d do a very, very eerie, like a bang-on impression of both of us. Yes!

 

Coco Khan Did you ever watch Split? That film with James McAvoy. It’s essentially just him in the whole film playing various characters. Yeah, yeah. I mean, there’s obviously an underlying ableism to the whole thing, but nonetheless, the man can do all sorts. So I would nominate him. Or, in a controversial plot twist, what if you and I played each other?

 

Nish Kumar That would be a controversial casting as well.

 

Coco Khan Yeah, I think it would also ruin the relationship in many ways.

 

Nish Kumar Or would it deepen it? Yeah. I have a very obvious actor who looks like me, to the extent that we’ve done stuff together based on this premise.

 

Coco Khan Jason Mantzoukas.

 

Nish Kumar Jason Mantzoukas. Jason Mantzoukas is of Greek heritage, but for some reason he came out Indian. No one can explain why that’s happened. Either that or I’ve come out Greek. It’s hard to know what’s happened, but Mantzoukas and I look so alike. I think I might have said this on here before, but I was watching an episode of Brooklyn Nine-Nine at my grandmother’s house, and Jason was in it, and my grandmother said, when did you film this? She didn’t even ask if that was me, she assumed it was me, and proceeded further with the second phase of the conversation.

 

Coco Khan What’s Mo Salah doing at the minute? Isn’t he living in- Well he’s actually-

 

Nish Kumar He’s actually leaving Liverpool, but I don’t know if he’s segwaying straight into a career into acting.

 

Coco Khan It’s, uh, nishkabar impersonation.

 

Nish Kumar Either way, he’s Egyptian. So I guess all of this begs the question, where is my goddamn face from?

 

Coco Khan Yeah, well, you know, race is a construct anyway. So whoever plays me.

 

Nish Kumar Is there an actress that you… I can’t think of one off the top of my head… that people confuse you with?

 

Coco Khan No, no one ever confuses me. It’s the only form of racism I don’t get. I never get confused with, like, Jameela Jamil or, like, insert other extremely attractive South Asian woman. It never happens. So I just want whoever plays me to be buff. That’s basically the long and short of it. Is that bad? They don’t even need to act, really.

 

Nish Kumar But do you have a list of people?

 

Coco Khan Oh, could I play my mum? What? That would be a quite fun plot twist, wouldn’t it?

 

Nish Kumar You wanna play Beena?

 

Coco Khan You know how Stan Lee appears in Marvel films? Be like that, we could appear as just the…

 

Nish Kumar Well, that’s quite a common trope in biopics. At the end of Wolf of Wall Street, Leonardo DiCaprio as Jordan Belfort is giving a seminar, and it cuts to a guy looking at him and leaning in, and that is the real Jordan Belfort. So it is actually a common trope to appear in biopics.

 

Coco Khan Okay. Listen.

 

Nish Kumar If Julia Roberts played me in a film I’d be thrilled, I’d again be racially controversial. And that’s it. Thanks so much for listening to Pod Save the UK. If you like what you heard, please leave us a review. It really helps boost the show.

 

Coco Khan You can follow.

 

Nish Kumar A positive review, okay? Shove your one star up your arse. Carry on.

 

Coco Khan You had such big teacher energy there. Excuse me, slow down. You can follow @PodSaveTheUK.

 

Nish Kumar The bell is not for you, it’s for me.

 

Coco Khan If you can’t play properly, no one can play, takes the football. You can follow at Pod Save the UK on Instagram, TikTok, Blue Sky, and X.

 

Nish Kumar Pod Save the UK is a Reduced Listening production for Crooked Media.

 

Coco Khan Thanks to lead producer May Robson and digital producer Jacob Liebenberg.

 

Nish Kumar And the music is by Vasilis Fotopoulos.

 

Coco Khan Our engineer is Dana Ruka and our social media producer is Narda Smilinic.

 

Nish Kumar The executive producers are Kate Fitzsimons and Katie Long, with additional support from Ari Schwartz.

 

Coco Khan And remember to hit subscribe for new shows on Thursdays on Amazon, Spotify or Apple or wherever you get your podcasts.

 
