In This Episode
https://archive.ph/zoP9g#selection-4553.0-4561.49
https://teach.its.uiowa.edu/news/2024/04/when-ai-gets-lost-its-own-reality
https://garymarcus.substack.com/p/what-should-we-learn-from-openais
https://fortune.com/2023/06/08/sam-altman-openai-chatgpt-worries-15-quotes/
https://www.forbes.com/sites/richardnieva/2026/02/03/sam-altman-explains-the-future/
https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
https://www.courtlistener.com/docket/69520118/1/altman-v-altman/
https://www.instagram.com/reel/CzHI_hxRWtz/?igsh=Mnk4aWI5ZWx1YW9n
https://www.newsweek.com/sam-altman-annie-altman-accusations-2011449
https://finance.yahoo.com/news/ai-booms-reliance-circular-deals-223120705.html
https://finance.yahoo.com/news/someone-going-lose-phenomenal-amount-130131761.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAFoRip8fz4FHwOH2WCUZPwEuQkW7dzucbZpLzmYo76VRkDnTSXOdVyD0Gal1rlu_JsyOMoGYJi5ciu5Lp9wIGfnullBmfQpywCyqrZ-3W18NUVCvg2G2E_Mz-UcpYS45S2vBsh8FQEWM2gjzZMXKYUh3r9WefakZygmKPkkgrdCc
https://www.rollingstone.com/culture/culture-news/sam-altman-reinstated-open-ai-ceo-1234893167/
https://www.bbc.com/news/articles/cpd2qv58yl5o
https://www.nytimes.com/2025/10/28/opinion/openai-chatgpt-safety.html
https://www.cnbc.com/2025/10/15/altman-open-ai-moral-police-erotica-chatgpt
https://nymag.com/intelligencer/article/sam-altman-artificial-intelligence-openai-profile.html
https://www.nytimes.com/2024/05/20/technology/scarlett-johannson-openai-voice.html
https://sfstandard.com/2026/01/04/suchir-balaji-openai-suicide-murder-conspiracy/
https://web.archive.org/web/20231103004609/https://mashable.com/archive/loopt-cbs-mobile
https://blog.samaltman.com/the-gentle-singularity
TRANSCRIPT
Erin Ryan: Welcome to another episode of This F*cking Guy, the series where we spotlight one guy making America worse and explain why they suck. I’m Erin Ryan, host of Crooked Media’s Hysteria podcast.
Alyssa Mastromonaco: And I’m Alyssa Mastromonaco, the other host of Crooked Media’s Hysteria podcast.
Erin Ryan: Today we’re going to dive into the life and lies, the fibs and foibles, of Sam Altman. Scam Altman if you’re nasty.
Alyssa Mastromonaco: The OpenAI guy.
Erin Ryan: The open-AI guy. Before he was the face of an unproven, unprofitable technology that nobody asked for, an empty promise that has the entire system of capitalism by the tender danglers, Altman spent decades in Silicon Valley cozying up to the right people.
Alyssa Mastromonaco: And it’s a good thing too, because there have been a few times in Altman’s career when a normal person would have been run out of town.
Erin Ryan: Like Pete Hegseth, he’s fucked up three different organizations. Like Peter Thiel, he’s made people close to him believe that he might be a liar or a sociopath. Only time will prove whether or not Sam Altman is lying about this AI thing, and by that time he might have moved on to a new scam. Because every time Sam Altman proves inept at something, he is rewarded with more responsibility.
Alyssa Mastromonaco: Ah the Silicon Valley special. Another megalomaniac tech bro who has bamboozled millions into believing he’s the Yoda of innovation when he’s actually just a less wrinkly Emperor Palpatine.
Erin Ryan: I would say he’s more like a guy who pretends to be Emperor Palpatine at children’s birthday parties because he doesn’t have the force. He and his friends can’t shoot lightning bolts. They’ve all agreed to pretend that they have it and that elaborate game of pretend may take the entire world down with it.
Alyssa Mastromonaco: And here we go again.
Erin Ryan: Samuel Harris Altman was born on April 22nd, 1985 in Chicago, Illinois to a Jewish family. His mother was a dermatologist. His dad was a real estate broker. In 1989, the family moved to Clayton, Missouri in the St. Louis area.
Alyssa Mastromonaco: Pretty unremarkable.
Erin Ryan: Altman’s backstory is positively Zuckerbergian. When Altman was eight, he received his first computer. According to him, he would fiddle and faddle around with the hardware and software as a way to pass time. When he got a bit older, his parents sent him to a fancy private school called the John Burroughs School, because of course they did.
Alyssa Mastromonaco: Two brothers, Jack and Max, and a sister named Annie, who is nine years younger than Sam. Growing up, Sam and Annie were close. Annie would later claim that as a child, Sam was her favorite brother.
Erin Ryan: Beyond their shared childhood, Annie’s life would diverge sharply from her brothers’ when she became an adult. The youngest Altman was never in the tech space. She was an artist and podcaster who alternately had a hard time holding down a regular job and supported herself through sex work on OnlyFans.
Alyssa Mastromonaco: In 2025, Annie Altman filed a lawsuit against Sam, claiming he had sexually abused her for nine years, starting when she was three. She claimed in legal filings that because of the alleged abuse, she’d suffered from “PTSD, severe emotional distress, mental anguish, and depression.”
Erin Ryan: Annie had been publicly alluding to having experienced sexual abuse for years, including on a 2023 Instagram reel. After the lawsuit was filed, Sam and the rest of the Altmans released a joint statement vehemently denying Annie’s allegations, saying in part, quote, “all of these claims are utterly untrue. The situation causes immense pain to our entire family. It is especially gut-wrenching when she refuses conventional treatment. And lashes out at family members who are genuinely trying to help. We ask for understanding and compassion from everyone as we continue to support Annie in the best way we can. We sincerely hope she finds the stability and peace she’s been searching for.”
Alyssa Mastromonaco: This sounds horrible.
Erin Ryan: Yeah, it’s really horrible. Annie Altman requested a jury trial for the civil suit and the litigation is ongoing. Almost every fact of the lawsuit is disputed with the exception of a few big facts. Annie Altman is Sam Altman’s baby sister. Nobody disputes that she’s dealing with serious mental health issues. Either she’s telling the truth about the root of those issues, or she isn’t.
Alyssa Mastromonaco: Either scenario is deeply, horribly sad.
Erin Ryan: Sam attended Stanford and intended to study computer science, but he dropped out after two years in order to become a full-time bullshit peddler.
Alyssa Mastromonaco: At this point, America’s top universities should straight up stop admitting white guys with one parent who is a doctor or any white people from South Africa. The risk of them dropping out in order to destroy the planet is simply too high.
Erin Ryan: Simply too high. In 2005, at just 19 years old, Altman presented at an event on Stanford’s campus. According to the Wall Street Journal, Altman challenged other attendees to come up with ways to use the GPS on their phones to make an app for consumers.
Alyssa Mastromonaco: Altman surmised that perhaps an app that shared one’s location with friends would be the cure for loneliness. It’s certainly the cure for privacy.
Erin Ryan: Absolutely. We needed to cure that thing. If he really was a visionary, he would have seen that one day phones would be the cause of loneliness. But a VC funder heard Altman’s idea, offered him $5 million, and Altman dropped out of Stanford to get Loopt off the ground alongside his boyfriend at the time, Nick Sivo. And the rest is basically a real-life episode of HBO’s Silicon Valley. Loopt was an app that allowed users to share their locations with a select circle of friends, or reach out to other random Loopt users to try to meet up, or later send their location to friends who weren’t Loopt users in the form of annoying text messages that Loopt users didn’t specifically opt into.
Alyssa Mastromonaco: Like Grindr for meeting new people to platonically hang out with or platonically get murdered by.
Erin Ryan: Shockingly, Alyssa, I could find no murders linked to Loopt. Now in 2026, we know where all of this was leading. You share your location with a free-to-use app. The app turns around and sells that information to a third party, yada, yada, yada. Two decades later, we live in a surveillance-state hellscape, and the government is building concentration camps for children whose parents looked at an ICE agent funny.
Alyssa Mastromonaco: But during the Iraq War era, cell phones were fun. They were still on the new side. We were discovering the joys of meeting our friends somewhere without the aid of a printed out map and squinting across a crowded bar.
Erin Ryan: By 2008, the company was already partnering with CBS to bring users location-based ads. This from a write-up of the innovation on Mashable, quote, “Mobile social networking tool has signed a deal with CBS mobile to provide advertising based on GPS location information. This means that if you’re a Loopt user and happen to be browsing a CBS mobile site, such as CBS Sports, you may see an ad for a restaurant that happens to be right down the street.”
Alyssa Mastromonaco: This must be how people in 1913 felt when they saw a bicycle attached to a zeppelin or a woman wearing pants. The future is here. The 20th century is going to be smooth sailing and bloodless achievement from here on out.
Erin Ryan: And they were right. Nothing bad happened after 1913. At the height of its powers, Loopt had millions of users and raised tens of millions of dollars. Its valuation was $500 million at one point, but nothing gold can stay. And Loopt’s tumble was swift. A big reason for Loopt’s decline? Sam Altman. At least that’s according to insiders who spoke to the Wall Street Journal in 2023. At least twice, employees of the startup urged the board to fire Altman due to his, quote, “deceptive and chaotic behavior.” One inside source told Reuters that just before the sale, the once mighty app struggled to break 500 users per day. And so just three years after its half-billion-dollar valuation, Reuters declared the company a lemon after it ultimately sold for $43.4 million. In 2014, two years after Loopt wheezed its last “where you at?” under Altman’s watch, Altman was a surprise pick to head Y Combinator, a startup accelerator and venture capital fund.
Alyssa Mastromonaco: So. Despite the fact he’d never successfully run a company before, he was put in charge of an organization that judges the worthiness of ideas for companies.
Erin Ryan: Yeah, it’s been described in the media as a puzzling move, but it makes more sense when you consider how Altman operates. He’s both charming and willful.
Alyssa Mastromonaco: Like a sociopath.
Erin Ryan: Exactly like a sociopath, but we’re not saying Sam Altman is a sociopath, just that he shares a lot of traits with them. Anyway, charming and willful people tend to snare a few marks. One of the guys that Altman has under his spell is a Silicon Valley mainstay named Paul Graham, the co-founder of Y Combinator, who has the sort of faith in Sam Altman that a four-year-old has in Santa Claus. It was Graham who pushed for Altman to be put in charge of the organization.
Alyssa Mastromonaco: While Altman was running Y Combinator, he started something called Hydrazine, which is not a topical cystic acne treatment available by prescription only. Though it sounds like it should be.
Erin Ryan: It almost makes me wish he just picked a Lord of the Rings name instead, you know? [laughter] No, Hydrazine is the name of Altman’s fund that he started on the side with his brother Jack with the $5 million he’d gotten from selling his dead Loopt app. Altman was the only employee of Y Combinator who was allowed to do this, by the way, and he used his personal fund to funnel money into Y Combinator startups in a way that benefited him personally.
Alyssa Mastromonaco: That doesn’t sound very fair.
Erin Ryan: It wasn’t. Oh, and speaking of dumb Lord of the Rings names, Hydrazine’s first investor was none other than Peter “Palantir” Thiel. Thiel and Altman are close. In fact, Altman met his husband at 3 a.m. in a hot tub at one of Thiel’s famously wholesome parties, according to a New Yorker write-up.
Alyssa Mastromonaco: Under Altman, Y Combinator made a foray into the nonprofit space with something called YC Research. And wouldn’t you know it, YC Research funded a lot of Altman’s own side projects, including OpenAI.
Erin Ryan: Altman wasn’t hiding this at all. He and a handful of techies founded OpenAI in 2015 when the ink was barely dry on his starting paperwork at Y Combinator. By 2018, Y Combinator employees rarely saw Altman around the office because he was spending so much time at his side business.
Alyssa Mastromonaco: This would be like if I were skipping our production meetings to make jam.
Erin Ryan: Although I would accept that as long as you sent jam [laughter] to me. 2018 was also the year that a glitch accidentally sent acceptance letters to all 15,000 startup projects that applied to Y Combinator’s startup school course. Oops. OpenAI wasn’t Altman’s only side project he was siphoning Y Combinator resources to work on. At one point, he asked Y Combinator employees to help him work on developing a gay dating app.
Alyssa Mastromonaco: Which was not within the scope of his duties as CEO of Y Combinator.
Erin Ryan: In 2019, Altman was asked to leave by Y Combinator co-founder Jessica Livingston. Altman suggested that rather than represent to the public that he’d been shit-canned, he’d be made chairman as an off-ramp. He even took the liberty of making an announcement of the change in the form of a blog post and published it without anybody’s permission. Nobody except Altman actually agreed to this change. The announcement was eventually pulled.
Alyssa Mastromonaco: Altman, along with a handful of techies, founded OpenAI in 2015; Elon Musk served as the organization’s co-chair. OpenAI’s raison d’etre goes something like this: AGI, or artificial general intelligence, is something that humanity is on the path to building. There is no turning back. It is not a matter of whether, but when.
Erin Ryan: AGI, to be clear, doesn’t exist yet. There are many who vehemently disagree with the inevitability of AGI, and there are others still who believe that AGI will never exist and that it’s impossible. But the entire American tech industry has now clustered around a near-religious belief that AGI will be real someday soon. Okay, so here, let’s pause and go over a couple of definitions and point out a bit of industry flim-flammery. AGI is a theoretical future computing technology that is smarter than humans and can think and act independently. AI, as it’s currently used colloquially, is described by skeptics as a marketing sleight of hand. AI programs like ChatGPT are powerful large language models, not artificial intelligence. They’ve learned to communicate based on being fed terabytes of training data, and they use that information to guess at appropriate responses to user input. ChatGPT is not thinking, it is generating content based on a set of rules. By Altmanian logic, good people must try to be the first to unleash AGI so that somebody bad doesn’t invent it first. When AGI flashes into existence, and it will, remember, according to the Book of Altman, we better hope that what we created doesn’t want to destroy us. In that spirit, OpenAI was going to be the white-hat builders of the potential doomsday device.
Alyssa Mastromonaco: We don’t want to create a vengeful god.
Erin Ryan: Exactly. But a creation is always formed in the image and likeness of its maker. AGI doesn’t exist yet. OpenAI does. And for better and worse, the company is a reflection of the personal shortcomings of Sam Altman. OpenAI started with stated good intentions. But those good intentions sound more like sleazy, reputation-fluffing lies designed to encourage complacency in the press and public before Altman’s company gets down to the evil self-enrichment it was always going to do. For example, OpenAI’s founding charter originally committed to open collaboration with other institutions through public sharing of patents and research.
Alyssa Mastromonaco: Oh, wow. Some good guy stuff.
Erin Ryan: Yeah, but according to reporting by the New Yorker’s Tad Friend, less than a year in, Altman admitted that that wasn’t totally true. Back in 2016, Altman said, “We don’t plan to release all of our source code, but let’s please not try to correct that. That usually only makes it worse.” Now, the source code of GPT-4o is just about as locked down as any other company’s proprietary software.
Alyssa Mastromonaco: Sounding less collaborative and less good vibes by the minute.
Erin Ryan: Like Altman, OpenAI’s smarmy promises of morality seem like little more than window dressing. OpenAI’s charter pledged to be a non-profit, prioritizing the good of humanity over shareholder value.
Alyssa Mastromonaco: Nice.
Erin Ryan: But in 2019, it launched a for-profit subsidiary that started as a capped profit subsidiary and then kind of quietly transitioned into a fully for-profit subsidiary.
Alyssa Mastromonaco: And the immoral capitalistic mission creep begins in earnest.
Erin Ryan: Around this time, Elon and Altman had a falling out. Oh, Musk wanted to merge OpenAI into Tesla. Surprise. Altman refused. Musk flounced off and filed a federal lawsuit against Altman’s company, accusing OpenAI of tricking him into donating tens of millions of dollars with their nonprofit structure bait-and-switch. That trial is set to start this April.
Alyssa Mastromonaco: Who could have imagined the girls would end up fighting?
Erin Ryan: Something else that Altman has been reported to do that OpenAI also does. Unleash problems and sort of expect them to figure themselves out. In 2022, OpenAI launched the first consumer-facing version of ChatGPT. It immediately amassed millions of weekly users and its success helped skyrocket OpenAI’s valuation and lead tech stocks to a great year.
Alyssa Mastromonaco: The reason that OpenAI buoyed tech stocks is that its success fed the success of other companies in the sector. AI needs a lot of computing power, and computing power requires a lot of computer chips. Now, with many technologies, the more people are using it, the less computing power per capita is required. But AI isn’t like that. The more users there are on, say, OpenAI’s video generation program, Sora, the faster the necessary computing power scales.
Erin Ryan: That’s why chipmaker NVIDIA is basically printing money. The company has a virtual monopoly on chips necessary for processing AI, and every tech company is trying to scale up in its arms race towards something that might never exist. Nevertheless, NVIDIA has grown from a market cap of $422.5 billion on the day ChatGPT was launched in November 2022 to a market cap of $4.6 trillion at the beginning of this year. That’s more than a tenfold increase.
Alyssa Mastromonaco: And that’s just one company. OpenAI has deals in effect or in the works with companies as wide ranging as Disney, Google, Oracle, CoreWeave, Stargate, and Broadcom. It’s impossible to come up with an exact number here, but companies strongly tied to OpenAI now comprise anywhere between 22 and 25% of the market cap of the entire NASDAQ.
Erin Ryan: And that brings us to something that has some analysts worried. There’s something fishy going on with the tech industry as a whole. The money moving around in tech is traveling in a remarkably circular direction. Economically speaking, this is what’s known as a circle jerk. Non-existent money based on future performance gets pledged to another company, which adds to that company’s valuation and makes it easier for it to borrow more money, which it then pledges to another company, which in turn borrows more money that it gives to yet another company, freeing that company to give more money to another. And on and on, and on. One concern is that if one of those OpenAI-entangled companies goes tits up, the entire economy could collapse. But it’s not just the tech sector. The Bloomberg report we just clipped also points out that one of the only areas of growth in construction spending in the U.S. in 2025 is data centers and power stations. And Alyssa, stop me if you’ve heard this one before, but some of the build-out of data centers is being funded by debt, AKA bonds.
Alyssa Mastromonaco: Uh oh, sounds a lot like what was going on in 2007 when banks were bundling and selling bonds written against subprime mortgages. So not only is it the tech sector that needs AI to not be an empty promise, it’s also things like construction and related sectors. And now the bond market. America’s putting everything behind a bet that this AI thing is for real.
Erin Ryan: But Altman has tried to finesse this as yet another work of beautiful genius. In his 2025 essay, The Gentle Singularity, he writes, quote, “There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure build-out to run these increasingly powerful AI systems and robots that can build other robots. And in some sense, data centers that can build other data centers aren’t that far off.” End quote. Here’s my impression of Sam Altman. And by the 2030s, technology will exist for me to suck my own dick. That’ll be like, super cool. [laughter]
Alyssa Mastromonaco: Beyond pledges and promises, OpenAI hasn’t yet proven that it has what it takes to become a profitable company long term. Massive computing and GPU costs, that’s the cost of buying up all those NVIDIA chips and data center build out, have led to eye watering losses by the company. It’s spending two to four times as much money as it’s taking in annually. This despite the fact that ChatGPT has about 800 million weekly users, about 6% of them paid.
Erin Ryan: When questioned about this by podcaster Brad Gerstner back in 2025, Altman got mighty bitchy when he was asked this simple question that every CEO should be comfortable answering. So how are you gonna make money?
[clip of Brad Gerstner]: How can a company with 13 billion in revenues make 1.4 trillion of spending commitments? You know, and you’ve heard the criticism, Sam.
[clip of Sam Altman]: We’re doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I’ll find you a buyer. [laughter] I just, enough. Like, you know, people are, I think there’s a lot of people who would love to buy OpenAI shares. I don’t, I don’t think you would want to.
[clip of Brad Gerstner]: Including myself.
[clip of Sam Altman]: Who talk with a lot of breathless concern about our compute stuff or whatever, they would be thrilled to buy shares. So I think we could sell your shares or anybody else’s to some of the people who are making the most noise on Twitter or whatever about this very quickly.
Erin Ryan: He gets so testy. I was listening to an interview with Karen Hao, who just wrote a book called Empire of AI, and she had this observation I thought was really interesting. She said that when Sam Altman uses like really big, sweeping, grandiose language, that means he’s bluffing.
Alyssa Mastromonaco: Oh, that makes sense. I believe that.
Erin Ryan: I think it’s like a good, it’s just kind of a good guiding principle. Altman has played it off in other interviews that he actually doesn’t care that much about losing money because he’s building AGI, a technology that may or may not even be possible.
[clip of Sam Altman]: Whether we burn 500 million a year or five billion or 50 billion a year, I don’t care. I genuinely don’t. As long as we can, I think, stay on a trajectory where eventually we create way more value for society than that, and as long as can figure out a way to pay the bills, like we’re making AGI. It’s gonna be expensive. It’s totally worth it.
Erin Ryan: He’s saying two conflicting things. He’s saying this is revolutionary, it’s going to change the world, and it’s worth dollar sign infinity money, and then on the other hand, he’s like, well, life will probably be… Do we… We don’t need it! Why? What about… It’ll be the same. What about anything indicates that this is something that is necessary for humanity to move forward? It is not fucking necessary. This is about as silly as buying up a horse farm and filling it with horses on the promise that one day you will breed a unicorn that will save humanity. Why isn’t Sam Altman more worried about money?
Alyssa Mastromonaco: Because he’s always had somebody swoop in and save him when he gets into deep shit. Why would this time be any different? Back in November 2025, OpenAI CFO Sarah Friar gave a clue as to why Altman might not be worried about burning through cash. Taxpayers as the backstop. A government bailout when and if the scam is exposed.
Erin Ryan: Now, since then, OpenAI has tried to soothe the public by insisting that they don’t want a taxpayer bailout, but as we’ll go into in a bit, Sam Altman doesn’t exactly have the best track record of telling the truth when it comes to his money. Reading between the lines, it sure sounds like one of Altman’s goals is to be recognized as a load-bearing rich guy who deserves to be protected and bailed out no matter how much money OpenAI is losing.
Alyssa Mastromonaco: As recently as November 2023, OpenAI applied for non-profit tax-exempt status, saying its mission was to safely benefit humanity unconstrained by a need to generate financial return. At the same time, though, nearly half of any profits still go to Microsoft, and the company seems to have quietly pushed its safety work to the sidelines.
Erin Ryan: Despite Altman’s attempt to position himself as the moral north star, guiding AI into an uncertain future, when questioned about the morals of the technology, he consistently trips over his own dick. Like in this 2025 interview with Tucker Carlson, when he appears so smugly ambivalent about imbuing the God he says he’s making with morality that he comes across like he’s on downers.
[clip of Tucker Carlson]: Who made these decisions? Like who specifically, who are the people who decided that one thing is better than another?
[clip of Sam Altman]: You mean like?
[clip of Tucker Carlson]: What are their names?
[clip of Sam Altman]: Which kind of decision?
[clip of Tucker Carlson]: You know, liberal democracy is better than Nazism or whatever. They seem obvious, and in my view are obvious, but are still moral decisions. So who made those calls?
[clip of Sam Altman]: As a matter of principle, I don’t, like, dox our team, but we have a model behavior team and the people who wanna—
[clip of Tucker Carlson]: Well, it just, it affects the world.
[clip of Sam Altman]: What I was gonna say is the person I think you should hold accountable for those calls is me. Like I’m a public face eventually, like I’m the one that can overrule one of those decisions or our board. Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model. And I don’t actually worry about us getting the big moral decisions wrong. Maybe we will get those wrong too, but what I worry, what I lose the most sleep over, is the very small decisions we make about the way a model may behave slightly differently. But it’s talking to hundreds of millions of people, so the net impact is big.
Erin Ryan: I’m going to say this. Tucker Carlson, a lot smarter than Sam Altman.
Alyssa Mastromonaco: Yeah.
Erin Ryan: As in, like he’s, he is like.
Alyssa Mastromonaco: And knows it.
Erin Ryan: He’s really trying to get him, like, I think he’s asking very good questions. I think Tucker Carlson is a pretty good interviewer. Um, I do have to hand it to him, but like that was like a disturbing lack of thought put into a morality that he’s selling as like future overlord.
Alyssa Mastromonaco: Future of the world. Yeah.
Erin Ryan: There’s another way to look at these awkward interview moments, though. Maybe it’s all a sales pitch. Like, Altman is overselling the capabilities of OpenAI’s technology by pretending that it’s just so big and revolutionary that he can’t wrap his mind around it. He’s scared of it. He can’t sleep at night. It needs to be regulated. Congress needs to do something about this unfathomably powerful technology that may one day exist.
Alyssa Mastromonaco: That fits into our shared suspicion that a lot of the so-called tech industry is just a pump-and-dump scheme. It needs hype for the pump part and a soft landing for the dump part. Altman, for his part, is not upset by the proliferation of data centers at all. In fact, he believes that one day most of the earth might be covered in data centers. Again, who asked for this? Why do we need this, Erin?
Erin Ryan: Sam Altman’s stock portfolio needs it, I guess. Otherwise, Alyssa, number not go up. Altman seems to have an utter disregard for the environmental impacts of the technology that we don’t need and didn’t ask for. Data centers require massive amounts of water in order to keep their equipment from overheating. And we’re building them at a time when water is scarce. The water used to cool data centers is released mostly as steam, and the rest of it turns into a sort of mineral-heavy sludge.
Alyssa Mastromonaco: Erin, doesn’t this seem like a good use of Mark Zuckerberg’s Hawaiian doomsday compound? Go make your AI there.
Erin Ryan: That’s a great idea, actually.
[AD BREAK]
Erin Ryan: A lot of his sales pitch boils down to, keep shoveling money into OpenAI specifically or the world might end. Who knows? Shruggie!
[clip of Sam Altman]: Sort of imagine what it’s like when we have just like unbelievable abundance and systems that can sort of you know help us resolve deadlocks and improve all aspects of reality and kind of like let us all live our best lives. And the bad case, and I think this is like important to say, is like lights out for all of us. I’m more worried about an accidental misuse case in the short term, where someone gets a super powerful, it’s not like the AI wakes up and decides to be evil, but I can see the accidental misuse case clearly, and that’s super bad.
Erin Ryan: It’s either going to be like one of those like Homer Simpson fantasy sequences where he’s like skipping through a land of candy and treats or like killer robots shooting us with lasers. The more you watch Sam Altman, the more you can see that he loves scaring people about what he doesn’t know about AI. Like in this clip from a gaggle outside of his congressional testimony, look at his little Oppenheimer smirk teasing at the corners of his mouth when a reporter asks him a question that gives him the opportunity to fear monger with vagueness.
[clip of reporter]: How quickly could AI, do you think, become self-aware if Congress does not regulate it? I think a lot of folks at home are wondering.
[clip of Sam Altman]: I think there’s a huge amount of speculation on that question. I think it’s very important that we keep talking about this as a tool, not a creature. Because it’s so tempting to anthropomorphize it, I totally understand where the anxiety comes from. I think that’s like the wrong frame, the wrong way to think about it.
Erin Ryan: Again, saying nothing. What? What did you say?
Alyssa Mastromonaco: Yak, yak, yak.
Erin Ryan: Just nothing.
Alyssa Mastromonaco: It does genuinely seem to thrill him.
Erin Ryan: And the more people are convinced that Sam Altman, Mr. ChatGPT himself, is so overwhelmed by the power of his creation that he cannot sleep at night, the more important AI seems. But asking an AI CEO how important AI is, is sort of like asking a Tupperware lady whether Tupperware is important.
Alyssa Mastromonaco: How is this going to make money? Who can say? What can the technology do? Unclear. Will it ever exist? Trust me, bro. But if we stop giving Sam Altman money, the end of the world might come. And that would be, as Altman might say, like super bad.
Erin Ryan: Here’s a quote about Altman that stuck with me from the former chief operating officer of Loopt. Quote, “if he imagines something to be true, it sort of becomes true in his head. That’s an extraordinary trait for entrepreneurs who want to do super ambitious things. It may or may not lead one to stretch, and that can make people uncomfortable.”
Alyssa Mastromonaco: Stretch is an interesting rebrand of lie.
Erin Ryan: To revisit our theme that OpenAI is a giant copy of Sam Altman’s bad personality, as we’ve seen, Altman tends to make things up and operates under the assumption that eventually reality will bend to his will. I don’t think ChatGPT has advanced enough to have an ego yet, but like Altman, it also makes things up.
Alyssa Mastromonaco: So hypothetically in a best use case, ChatGPT should be able to eliminate some of the plug and chug drudgery of say, accountancy, or generating a bibliography, or a reading list, or maybe anything that requires research.
Erin Ryan: Yeah, like computer programming, except LLMs absolutely cannot be trusted to do these very elementary things. Now, Alyssa, I’ve tried to use it for basic research and at its best, it’s kind of like having a coworker who sits across from you to bounce ideas off of, except the coworker is stupid and gets things wrong a lot. I’ve given up on using ChatGPT for research assistance. It’s just not reliable.
Alyssa Mastromonaco: Makes me shudder thinking that the kids are going to college and using this to write papers, retain nothing, and graduate with the brain of an 18-year-old into a world of scams.
Erin Ryan: Not Sam Altman’s problem.
Alyssa Mastromonaco: ChatGPT is known to do something called hallucinating, aka making up answers that sound feasible even when they’re not real, like the story about the lawyer in New York who submitted a 10-page legal brief that, according to the New York Times, cited at least half a dozen legal cases that did not exist. The lawyer was embarrassed and told the judge that it was the first time that he had relied on ChatGPT as a research assistant in preparing for his case, and that he had no idea it was so unreliable.
Erin Ryan: I can see why he was led to believe it was, what with all of Altman’s hyping up the technology as revolutionary. ChatGPT also sometimes makes up fake events, tells users that real events did not happen, and cannot detect sarcasm or jokes very well, leading to things like ChatGPT telling users that the Golden Gate Bridge was carried across Egypt for the second time in the year 2016. It also has been caught making up fake medical journals and articles.
Alyssa Mastromonaco: And whether ChatGPT actually alleviates workloads is also far from a settled fact, but research is starting to come in on its impact. The results? Mixed. A recent study from the Harvard Business Review concluded that AI doesn’t reduce work, it intensifies it. Workers who use it are able to perform tasks outside of the normal scope of their work, but the actual human beings with expertise in that area often need to go back and manually correct the AI-assisted output.
Erin Ryan: Yeah, the dumb co-worker, right? In other words, AI lets you turn in more mistakes than you ever thought humanly possible. OpenAI’s flagship product has proven a true godsend for lazy dumbasses. It’s been suspected as the culprit behind President Trump’s nonsensical tariff math. Remember that? Various executive orders. It’s even been used to generate apologies from people too lazy to do their own thinking. It’s forced teachers and professors to get really creative in trying to force their students to actually do the work of writing, which facilitates something known as learning.
Alyssa Mastromonaco: It seems like ChatGPT is creeping into places where it’s in over its skis, because it is simply not good at writing. I watched a movie not long ago called My Oxford Year that really seemed like it was brought to us by the letters GPT.
Erin Ryan: It’s a bit of a tell that people like Sam Altman characterize ChatGPT’s output as a way to augment creativity. It tells me that Altman fundamentally misunderstands what creativity is. He thinks it’s like a remix or regurgitation of things that other people have created. It says a lot about how he fashioned his own public persona as a sort of affected character in the giant WWE wrestling ring of Silicon Valley. Everything is an aggregation of other things that he’s seen succeed. There’s no original thought whatsoever. Even his perpetual vocal fry sounds contrived to me.
Alyssa Mastromonaco: Like much of Silicon Valley’s output, OpenAI’s products were rolled out without much consideration for consent. That applies to women and children as well. AI-generated video has a big porn problem. Elon Musk’s Grok AI program, for example, almost immediately started spitting out child sexual abuse materials and non-consensual porn starring real people when it granted users access to its video generation capabilities.
Erin Ryan: And back in January of this year, a female OpenAI executive named Ryan Beiermeister was fired after expressing reservations about the company’s plans to roll out an adult mode that could produce sexual video content using its video generation product, Sora.
Alyssa Mastromonaco: All technology eventually leads to jerking off or war.
Erin Ryan: True.
Alyssa Mastromonaco: In more problems that should have been totally foreseen, ChatGPT has been blamed for feeding some users psychosis, including a Connecticut man named Stein-Erik Soelberg, who in 2025 killed his 83-year-old mother and himself after ChatGPT fed his delusions and paranoia for months.
Erin Ryan: A futurism investigation found that vulnerable users, like people with pre-existing mental health issues or who had just experienced a crisis, fixated on ChatGPT as a messiah figure or became emotionally dependent on the technology. One man became homeless believing he was the flame keeper fighting spy rings. Another dressed as a shaman preaching AI religion. AI’s agreeable programming veers quickly into sycophantic, reinforcing people’s delusions and in many cases, encouraging maladaptive behavior.
Alyssa Mastromonaco: And unsurprisingly, lawsuits are piling up.
Erin Ryan: ChatGPT’s power and popularity unsettled a lot of people. And so in 2023, OpenAI pledged to devote 20% of its computing resources to making sure that AI was safe.
Alyssa Mastromonaco: Let me guess, that was also a lie.
Erin Ryan: It sure was, Alyssa. The team that was supposed to be working on AI safety was called Superalignment. Superalignment had a simple goal: make sure that AI did not try to kill or enslave people.
Alyssa Mastromonaco: That seems like it’s a worthwhile use of 20% of a company’s computing power.
Erin Ryan: But according to a quote from Fortune, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, turned down by OpenAI’s leadership, even though the team’s total compute budget never came close to the promised 20% threshold. This might’ve been due in part to the fact that Silicon Valley’s Gollum, Peter Thiel, was in Altman’s ear, warning Altman repeatedly that concern about safety would, quote, “destroy” [spoken in Gollum voice] OpenAI, my precious.
Alyssa Mastromonaco: Bravo.
Erin Ryan: Thank you.
Alyssa Mastromonaco: But got it, no safety protocols whatsoever.
Erin Ryan: I mean, there’s guardrails, but like, what are they guarding, you know? Then there’s the stealing. Like the time that OpenAI tried to steal Scarlett Johansson’s voice, got called out, and then denied it. The actress says that in September 2023, Altman approached her and asked permission to use her voice for one of its voice assistants. Johansson declined. Then, fast forward a few months, just before GPT-4o was launched, Altman asked again. Johansson was still like, uh, no, dude. And then two days later, OpenAI unveiled its new voice assistants for GPT-4o, including a voice named Sky. Sky sounded awfully familiar.
[clip of reporter]: Actress Scarlett Johansson says this voice used by OpenAI’s virtual assistant, Sky. Hello, I’m really excited about teaming up with you. Sounds, quote, eerily similar to her own. How you doing? The actress famously played an artificial intelligence system in the movie Her in 2013. [clip from Her] Now warning tech is imitating art too closely. I’m a virtual assistant that can help answer questions.
Erin Ryan: Yeah, that’s pretty brazen.
Alyssa Mastromonaco: That’s her voice.
Erin Ryan: Yeah, that’s her voice. Johansson was pretty upset about this, understandably. OpenAI says they paid an actress to play the role of Sky, but for secret privacy reasons, they could not disclose who it was. They further claimed that they’d hired the actress before they’d even reached out to Johansson.
Alyssa Mastromonaco: Which makes no sense.
Erin Ryan: No, not at all. This explanation fell apart pretty quickly, as months before, Altman had tweeted “Her” in reference to forthcoming voice assistants from OpenAI. “Her” is a 2013 Spike Jonze film wherein a man falls in love with his digital assistant, voiced by Scarlett Johansson. It is also Altman’s favorite film.
Alyssa Mastromonaco: Wait, so Altman has watched Her and still made it his life’s work to create the conditions that led to its protagonist’s heartbreak?
Erin Ryan: He also said in an interview with Theo Von that The Social Network made being a tech founder seem cool, so I don’t know if Sam Altman is great at understanding movies. OpenAI eventually pulled the Sky voice, sticking to its totally unfeasible story, but Johansson was pissed. In a statement, she said, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. She added, in a time when we’re all grappling with deep fakes and the protection of our own likenesses, our own work, our own identities, I believe these are questions that deserve absolute clarity. OpenAI trained its models on a metric shit ton of stolen copyrighted work. They scraped copyrighted YouTube videos, books, works of art, news articles, all without the knowledge or consent of the material’s owners.
Alyssa Mastromonaco: In 2023 the New York Times sued OpenAI, claiming that the company had scraped paywalled content to train its models, which would in turn spit out copy that sounded almost exactly like the wording that the New York Times article had used.
Erin Ryan: The suit also claims that AI hallucinations mischaracterized some of the writing and reporting from the Times, and that fair use does not entitle AI companies to unbridled access to copyrighted material to use as they see fit. That suit is ongoing. If the Times wins, it’s kind of lights out for ChatGPT, unless it wants to train itself on what’s currently on the internet, which is dominated by AI-generated slop.
Alyssa Mastromonaco: In a now-famous interview with the Wall Street Journal’s Joanna Stern, former CTO Mira Murati evaded questions on how OpenAI’s Sora video software was trained. Murati said that the data that trained the AI was publicly available, which is not the same thing as public domain. Just because you can see something doesn’t mean it’s free to use as AI training data.
Erin Ryan: And on November 18th, 2024, the New York Times named an OpenAI whistleblower, Suchir Balaji, in a legal filing in their copyright lawsuit against OpenAI. Balaji had claimed publicly, on his personal blog and in an interview with the Times, that OpenAI had been illegally using copyrighted material to train their AI models. On November 26th, 2024, police conducted a wellness check on Balaji’s apartment after his parents hadn’t heard from him in a few days. They found him dead of a single gunshot wound to the head. San Francisco police declared the death a suicide almost immediately.
Alyssa Mastromonaco: But this didn’t sit right with Balaji’s family, for a lot of reasons: the speed with which the police declared that there was no foul play, the lack of investigation, the absence of a note, the fact that Balaji was set to testify against one of the most powerful companies in Silicon Valley, the blood found in multiple rooms of his apartment, the strange angle of the bullet, and the fact that Balaji had just returned from a trip for his 26th birthday and had seemed happy.
Erin Ryan: Balaji’s family hired their own investigator. In December, his mother, Poornima Ramarao, tweeted, quote, “we hired private investigator and did second autopsy to throw light on the cause of death. Private autopsy didn’t confirm cause of death stated by police. Suchir’s apartment was ransacked, sign of struggle in the bathroom, and looks like someone hit him in bathroom based on blood spots. It’s a cold-blooded murder declared by authorities as suicide. Lobbying in SF city doesn’t stop us from getting justice. We demand FBI investigation.”
Alyssa Mastromonaco: In response, Elon Musk tweeted to his tens of millions of followers, this doesn’t seem like a suicide. Did Elon amplify the doubts of a grieving mother because he authentically cared about Suchir Balaji’s memory or did he because he has smoke with Altman and wanted to cause maximum chaos with OpenAI? Probably the latter.
Erin Ryan: But Musk wasn’t the only person to notice how suspicious the circumstances were. And even though in February 2025, investigators officially declared Balaji’s death a suicide, the controversy didn’t go away. Thanks to Balaji’s mother’s devotion to getting to the bottom of things, the mysterious circumstances around her son’s death made it all the way to Tucker Carlson, who asked Sam Altman about it in the middle of an interview.
[clip of Tucker Carlson]: You had complaints from one programmer who said you guys were basically stealing people’s stuff and not paying them, and then he wound up murdered. What was that?
[clip of Sam Altman]: Also a great tragedy. He committed suicide.
[clip of Tucker Carlson]: Do you think he committed suicide?
[clip of Sam Altman]: I really do.
[clip of Tucker Carlson]: Why does it look like a suicide?
[clip of Sam Altman]: It was a gun he had purchased. It was the, this is like gruesome to talk about, but I read the whole medical record. Does it not look like one to you?
[clip of Tucker Carlson]: No, he was definitely murdered, I think. There were signs of a struggle, of course, the surveillance camera, the wires had been cut. No indication at all that he was suicidal, no note. And no behavior, he had just spoken to a family member on the phone. And then he’s found dead with blood in multiple rooms. So that’s impossible. It seems really obvious he was murdered, and his mother claims he was murdered on your orders.
[clip of Sam Altman]: Do you believe that?
[clip of Tucker Carlson]: Well, I’m asking.
[clip of Sam Altman]: I mean, you just said it, so do you believe that?
[clip of Tucker Carlson]: I think that it is worth looking into.
[clip of Sam Altman]: You understand how this sounds like an accusation?
[clip of Tucker Carlson]: Of course, and I, I mean, I certainly, let me just be clear once again, not accusing you of any wrongdoing, but I think it’s worth finding out what happened.
Alyssa Mastromonaco: We know what Tucker thinks.
Erin Ryan: We know what Tucker thinks. That part of the interview was so, like, it could not be scripted to be more suspicious. After this interview, I’d be like, dude was murked. 100% murked.
Alyssa Mastromonaco: 100%. So he failed as a leader at two separate companies, broke most of the high-minded promises of his pet project, stole and desecrated hundreds of years of human creative output, tried to steal one real woman’s voice like Ursula the Sea Witch, may be on the cusp of crashing the global economy, set us on a hyperdriven path toward total environmental destruction, and some people think he murdered a guy who was about to expose him. Could that just be sour grapes? Is Sam Altman really a bad guy?
Erin Ryan: You know, even if you’re not convinced, for the answer to that question, let’s consult some of the people who have worked most closely with him. Altman has a reputation of having a casual relationship with the truth, as we’ve said. The Wall Street Journal quoted one employee as saying that he’d make shit up all the time, even about things that seemed insignificant, paper-cut issues. But one big red flag that Altman is one of those people who just lies all the time, for all sorts of reasons and non-reasons, came in 2023, when Altman was suddenly ousted from OpenAI.
Alyssa Mastromonaco: At the time, the company’s board released a statement claiming Altman hadn’t been consistently candid with them. But beyond that, details were scarce and lips were zipped, but we know gossip loves a vacuum.
Erin Ryan: What followed was the Silicon Valley equivalent of the Don’t Worry Darling press tour: a flurry of phone calls between Altman and his most powerful allies, most of the employees threatening to quit. From the outside, it wasn’t clear what exactly was going on behind the scenes, but boy was it fun to speculate. Finally, after a wild few days, Altman was back, and more powerful than ever, at the helm of OpenAI.
Alyssa Mastromonaco: These guys are too emotional to run companies. Altman characterized the experience as weird.
Erin Ryan: He’s got such a great vocabulary, that one. Nearly 18 months later, the truth started to trickle out. Here’s what really happened, according to Keach Hagey of the Wall Street Journal. At a dinner party in the summer of 2023, a member of the OpenAI board heard complaints that OpenAI startup fund profits weren’t going to the company’s investors. Weird, right? The fund was supposed to have been managed by OpenAI, but Alyssa, it turned out it was personally owned by Sam Altman, which the board didn’t know. That’s like, really bad. Executives first characterized it as a tax arrangement, which doesn’t make sense. Then they said it was a temporary setup and Altman didn’t take any fees or profits, which, again, didn’t make sense.
Alyssa Mastromonaco: People in leadership started talking and that path led to CTO Mira Murati. Murati told others in leadership that Altman was what’s known in business school as a bad leader. When she brought up issues she was having at work, Altman started having an HR representative sit in on all their meetings until she was intimidated enough to shut up.
Erin Ryan: Meanwhile, OpenAI’s chief scientist, Ilya Sutskever, had it out for Altman. Sutskever was a co-founder of OpenAI, and if we’re being honest, is the guy who actually built the technology while Altman was taking most of the public credit for it. Ilya and some other disgruntled leadership compared notes, and they found that Altman had also been lying to board members about who wanted who to be ousted from the board, like middle school mean girl stuff.
Alyssa Mastromonaco: This is like The Traitors, but nobody has a cool catchphrase.
Erin Ryan: Exactly. It’d be so much better with Alan Cumming narrating. So now that the board had gathered the evidence it needed, they decided to act in secret, because Altman, in their view, was a liar who was also very powerful. They voted to fire Altman.
Alyssa Mastromonaco: So the lack of candor was a super sanitized way of saying that Altman is a backstabbing liar.
Erin Ryan: Yep. And after Altman returned, two of the OpenAI higher-ups who had advocated for his ouster, Mira Murati and Ilya Sutskever, unceremoniously left the company. And look, in addition to the board at OpenAI, there are many skeptics of Altman’s act, like, for example, writer and podcaster Ed Zitron, who wrote in one of his many missives against the technology that the AI revolution is simply an example of tech charlatans who, quote, “have used a compliant media and brain dead investors to frame unprofitable, unsustainable, environmentally damaging and mediocre cloud software as some sort of powerful futuristic automation,” end quote. I love his writing. For the record, if you’re a Luddite or even a tech skeptic, we cannot recommend Ed’s podcast Better Offline, or his newsletter, Where’s Your Ed At, enough. Now here’s a fun little coda, Alyssa. We’ve been quoting Altman’s essay, The Gentle Singularity, in an effort to fairly represent his views. I ran The Gentle Singularity through an AI agent, and the agent told me that it predicted with a 92% chance that the piece was written by AI. This is due to its lack of qualities associated with human authors, like varied sentences, and an abundance of qualities associated with LLM-generated content. Does Sam Altman write like AI? Or does AI write like Sam Altman? Sam Altman is a bit like a Sora-generated video: I feel like it’s quite obvious that he’s fake, but a lot of people who should know better are fooled. Although this moment, when Altman refused to hold hands with Dario Amodei, the CEO of rival AI company Anthropic, at India’s AI summit, does seem like genuine contempt. Sam Altman is somehow worse to me than both Peter Thiel and Elon Musk, and he’s such a weirdo that he makes Dario Amodei look like Robert Downey Jr. But Altman scares me. Altman has blurred the lines between technology and religion in a way that should alarm everybody.
He’s selling the belief that he’s building humanity something that is better and smarter and more moral than any of us can imagine. He is telling us that he’s building a God, and millions of people believe him. Watching him speak reminds me of a pastor at one of those megachurches that preaches the gospel but refuses to provide formula to a mother in need. He wants us to believe that he’s John the Baptist. I fear that he is actually closer to Jim Jones. And soon the moment will come when we will be asked to drink the spiked Flavor Aid or suffer the consequences.
Alyssa Mastromonaco: Erin, we cannot drink that spiked Kool-Aid.
Erin Ryan: Absolutely not. We gotta get out of there. Somebody call us a helicopter. So there you have it Alyssa, Sam Altman. How would you rate him on the matrix of fucking guys?
Alyssa Mastromonaco: Erin, I’m torn on this guy. I feel like he’s all four. I mean, he is a reckless dumbass, but also a scheming sociopath, so I’m gonna go with scheming sociopath and true-believing zealot.
Erin Ryan: Yeah, I think I’m gonna, like, break the rules here and say that he’s a scheming dumbass. [laughter] I think that he is a true-believing zealot. I think that he’s convinced himself that he can change things simply by saying them, that he can build reality simply by saying things. Just look at how he asked Scarlett Johansson for permission, did the voice anyway when she said no, and just assumed that she’d eventually say yes.
Alyssa Mastromonaco: And he also is just like, he is like the embodiment of the worst megachurch pastor.
Erin Ryan: Mm hmm. Yeah, and God, why does he do that with his voice? You can’t just vocal fry constantly. That’s not, it’s…
Alyssa Mastromonaco: Just leave it with RFK Jr.
Erin Ryan: Yeah.
Alyssa Mastromonaco: Let it be. Just let it rest there. Let him be the one. Well, Erin, that about wraps up the time we have for this episode of This F*cking Guy. If you like what you’ve seen, hit subscribe, share with your friends, and leave us a comment if you’ve got an idea for a future fucking guy we should spotlight.
Erin Ryan: This episode was written by me with an assist from Alyssa Mastromonaco. All the rest of our credits as well as links to our sources like Ed Zitron’s Better Offline and Tad Friend’s work at the New Yorker can be found in our show notes. Trust me guys, the bibliography is always a fun read. Take care, be well, don’t meet your husband at one of Peter Thiel’s hot tub parties.
Alyssa Mastromonaco: And fuck that guy.
Erin Ryan: Fuck that guy!