DrGPT with Dr. Eric Topol | Crooked Media
June 13, 2023
America Dissected
DrGPT with Dr. Eric Topol

In This Episode

ChatGPT set off a flurry of excitement — and anxiety — over the impact Artificial Intelligence will have on every aspect of society. One of the most important disruptions that AI will impose is on health and healthcare. Abdul reflects on the power, promise, and peril of AI. Then he sits down with Dr. Eric Topol, one of healthcare’s foremost futurists and author of “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” to discuss it.

 

TRANSCRIPT

 

[AD BREAK] [music break]

 

Dr. Abdul El-Sayed, narrating: Smoke from a Canadian wildfire plunges the Northeast and Midwest into darkness. Devastating air quality for days. FDA advisers recommend a new monoclonal antibody against RSV for infants. Pharmaceutical Corporation Merck sues the federal government over Medicare drug price negotiations. This is America Dissected. I’m your host, Dr. Abdul El-Sayed. [music break] Let me put my cards on the table. I am an AI skeptic. I’m old enough to remember the unbridled optimism over the Internet. It was going to connect us all, and that was going to bring us together in ways that would end war, famine, destruction of all sorts. Yeah. That’s not how it worked out. Don’t get me wrong. The Internet is an incredible tool. The ability to have any information at my fingertips with a couple of thumb taps? That’s truly incredible. I love that I can reach out to long lost friends and that so many of them were never lost in the first place. And I get to share my perspectives with you through this podcast, which would be impossible without the Internet. But to say that unlimited connections have made us better as a society? Absolutely not. There’s, of course, the rising threat to democracy driven by the spread of mis- and disinformation and the greed of major tech companies who decided to feed us the worst of one another to turn unprecedented profits. There’s the way that the Internet has destroyed social cohesion by forcing every major interaction we share into an online simulacrum of itself. There’s the way that it’s led to skyrocketing rates of loneliness, depression, anxiety and even suicidality, as we talked about last week. All of that just so happened to take off in the early 2010s, right around the time broadband became fast enough for convenient use and smartphones made the Internet portable, which put social media on steroids. So when folks tell us that A.I. is going to, quote, “change everything,” it’s not that I don’t believe them, it’s that I’m not so sure that’s a good thing. But look, I’m just as enamored by the possibilities of AI as everyone else. I mean, I asked ChatGPT to do the following: Write me an introduction to an episode of America Dissected, a podcast about health and society, about the risks and benefits of A.I. for health care and public health, in the voice of Dr. Abdul El-Sayed. And here’s what it had to say. Welcome back, my friends, to another riveting episode of America Dissected, the podcast where we dig deep into the complex tapestry of health and society. I’m your host, Dr. Abdul El-Sayed, physician, public health expert and your guide on this journey. Today we embark on a topic that both fascinates and unsettles us: the rapidly evolving realm of artificial intelligence in health care and public health. A.I., with its promise of precision and efficiency, has the potential to revolutionize the way we diagnose, treat and prevent disease. But as we navigate these uncharted waters, we must also confront the ethical, societal and privacy implications that come hand in hand with this transformative technology. It went on for another seven paragraphs, and, look, that was fine. But I don’t think ChatGPT is going to replace me as this podcast’s host anytime soon. But AI is only going to get better, and that’s why it’s so frightening. Look, I don’t really care that much about what AI means for podcast hosts. I’m a lot more worried about what it means for health care. Here’s how I think about it. A.I. 
is basically going to accelerate all the things we love and hate about the Internet, especially when it comes to large language models or LLMs. Just like broadband put social media on steroids. I think LLMs are going to put the Internet on steroids. They amplify the promise and the peril. And when it comes to health care, those perils can have disastrous consequences. With the advent of the Internet, more people have more medical information at their fingertips, and that’s generally a good thing. But then there’s the problem that people Google their symptoms and they could either have a cold or cancer and then they conclude the wrong way. Now, apply that to the problem of, quote, “hallucination.” The fact that these models just make things up. Sure. I’m told that this will happen less and less over time as the models get better, which makes it feel like we’ll be that much worse at catching it when it happens. And when people take what AI says as gospel truth, these are as good as lies and they can hurt people. Now, you might push back and say, but Abdul. That’s not the same thing as a highly specialized LLM designed for medical applications. Sure. But that creates a whole other issue. The creators of those specialized medical applications will be the companies with the resources and power to build them. And those are either going to be health care companies or tech companies, and they’ll use their tools just as they have in their respective industries, to corner more and more of the market. That means more consolidation in an already consolidated health care market, squeezing out profits from workers and squeezing out patients entirely. But then, for as many great medical applications as there might be that ostensibly make things better, think about how many bad actors will use AI to make things worse. Think about how much more misinformation can be manufactured by an LLM created specifically for this purpose. Think about all the deepfakes created to push conspiracy theories that can be littered all over the Internet. Now, look, I know I have a really skeptical view on things, so I wanted to make sure you heard an optimistic take, too. And I could think of no one better than Dr. Eric Topol. Truly one of the most impactful thinkers in health care and author of the 2019 book Deep Medicine: How A.I. Can Make Health Care Human Again. In his substack, Ground Truths, he’s been writing a lot more about A.I. and health care recently. My conversation with Dr. Eric Topol after this break. [music break] 

 

[AD BREAK] 

 

Dr. Abdul El-Sayed, narrating: Here’s my conversation with Dr. Eric Topol. 

 

Dr. Abdul El-Sayed: Let’s uh let’s jump right in. Can you introduce yourself for the tape? 

 

Dr. Eric Topol: I’m uh Eric Topol, and I’m really glad to join you Abdul. I am at Scripps Research as a professor and the founder and director of our Translational Institute. 

 

Dr. Abdul El-Sayed: For folks who don’t understand, what is translational medicine, how do we think about that? 

 

Dr. Eric Topol: Yeah, what that really means is that you’re trying to take discoveries and advances that are just sitting in the idea and research space publications, but you’re trying to actually use them to improve medicine, to to improve patients and, and promote human health. 

 

Dr. Abdul El-Sayed: And I think the reason why that’s important is because I think people assume that you got a bunch of scientists and they’re like hanging out with the doctors, and in the cafeteria the scientists talk to the doctors and are like, hey, this could be really helpful for this patient, because at least that’s how they do it in house. But the truth of the matter is that medicine and science, though very connected in the grand scheme of all of the um all of the different industries that exist out there, there really is a challenge in getting breakthroughs at the bench to the bedside. And we require people like you who are thinking about both what is new and hot in science and what are the problems that we’re encountering at scale at the bedside, and asking which of these can we use today? And that is not an altogether um obvious thing to do. And that also puts you in a unique position, because your career has been built around asking these kinds of questions around the opportunity and also the very clear risks that are posed by AI. I would say that you’re also, in that role, one of medicine’s foremost futurists. You know, if you think about it, the ability to say, here’s what’s cutting edge and interesting in science and here’s the problems it maps to in medicine, helps you to sort of be thinking about what the future of medicine looks like, what is a medicine unburdened by some of the challenges that we’ve often had, but then also uniquely burdened by some of the new things that technology might impose. And it also strikes me that you’re somebody who’s really optimistic about AI. And I have to tell you, like, I’m not. Like I actually, I think my existential dread, you know, I, we were talking beforehand and I kind of compared um AI to meeting a new, very large dog when you’re a child. It’s like this thing could maul my face off and I could die. But it also could be really fun to play with, and I don’t really know which one to expect. And maybe, maybe it helps to say that I’m someone who, you know, having been bit by a dog when I was a kid, am afraid of dogs. So um you’re you’re an optimist about our AI future. Walk us through why, and if I’m wrong about my assessment based on reading you, um tell me why, why, why maybe my assessment’s off. 

 

Dr. Eric Topol: Right. Well, I’m fully cognizant of the liabilities. So I don’t want to be considered as a pure optimist. But back four years ago, I published a book called Deep Medicine, and in that book I outlined where AI could take us. And as you know, there’s considerable disenchantment in the medical profession among doctors and nurses. And the whole issue there is the inability to actually care for patients because there’s not enough time, because there’s all this data clerk function, and it’s not a good situation at all. And the whole idea of deep medicine was there’s potential for AI to get us to a level where our patient doctor relationship was restored to where it was many decades ago and that we would have the gift of time. Now, recently, of course, in November, when ChatGPT was released, in the next three months a billion unique users were onto it. And then now GPT-4 in March, we’re into this kind of hyper accelerated new form of AI, the large language models, and the power, the pluripotent aspect of this form of AI brings to mind the ability to actualize that vision I laid out four years ago. But it also brings the concerns about these hallucinations, misinformation. It brings out the doomsayers with the existential threat of AI and all the concerns that are on both sides, because it’s something that’s so powerful and not well understood. And we’ve already seen the guardrails can come off, and you sure don’t want that going on in the health care setting. So what we’re seeing is a revolution, something we’ve never seen in the history of medicine, and it’s still in the earliest stages. And that’s what I think we have to consider, is whatever good and bad that could come out of this, we’re in you know, we haven’t gotten to the first inning yet. 

 

Dr. Abdul El-Sayed: I had read Deep Medicine as a big fan of your work. I want listeners to understand I don’t I don’t nerd out or uh fanboy over very many people. Uh. Our guest today is somebody I am fanboying about, about getting, and like, if we get too nerdy, I’m going to make sure to pull myself back. And our producers can remind me in my ear that I’m I’m I’m nerding out too much. But I remember reading Deep Medicine back in 2020, before the advent of ChatGPT. That was also before the advent of a certain um pandemic. And I remember sort of being like, oh man, this is going to be amazing. It’s going to change everything. I really want to be a practicing doctor in the world that AI is going to unlock. And, you know, for for listeners, they know this, I don’t practice medicine because of a lot of the moral drag that I think you identified um early on in the conversation. I don’t believe we have geared what we do for the right people because of the preoccupation with making money, and because of the preoccupation with making money, there has become this immense drag on the practice of medicine. And I think those two things are related. Um. And and I thought to myself, like, after reading the book, I was like, this really could usher in the kind of world where uh where we really could practice medicine at scale in a way that addresses both the moral and the bureaucratic failures of the practice. And then and then I um was brushing up on the book post ChatGPT, and I have to say my experience of it was just. I kept going back to the issues and the the the real you know the what what I think um folks have taken to calling P(doom), right? The probability that that we all just uh all of society gets gets enveloped by this thing. And those kept coming to coming to mind. And the feeling that I had kind of was like, oh, I’ve been here before. And it was at the advent of the Internet. I remember sitting in my seventh grade classroom and we um we had we had used this thing called a webcam uh to um look into the explorations of a group of explorers who were exploring never before uh seen Mayan ruins. And I was like, this is so cool. The Internet’s going to be amazing and it’s going to make everything better, and look, like you and I are having a real time conversation on this thing called a webcam. Uh. And this podcast exists because of the Internet. And I have to say, I appreciate that very much. And also, Donald Trump became president because of the Internet. And I don’t know which version of the earth I would have liked better. Um. So all of that is to say that I, you know, it almost feels like we’re in that moment again where you’re like, this sounds like a really great idea, except for maybe we’re just optimistic as a species. And we did this already and it just didn’t work out so great. How do you you know, when you reflect on on your thinking um before the advent of ChatGPT and LLMs, how has your I don’t know. I don’t I don’t mean to say it this way. Speaking to a preeminent a physician scientist. But what are the vibes, like how have the vibes changed for you now that you’ve sort of seen how fast uh A.I. can come? Has it changed the way, or at least the tone you would have written the book in, um you know, versus the way that I read it uh before and after ChatGPT? 

 

Dr. Eric Topol: Yeah, that’s a great question that you’re asking uh because back in 2019, the issue there was we had these deep neural networks that could take scans, medical images, and do a great job in helping their interpretation and promote accuracy. And that that was great. And we’ve seen that across every type of scan of medical imaging. Cardiograms and retina photos and you name it. Skin lesions. I mean, so that of course, improving accuracy in medicine is really important, and there’s not much that can go wrong there because you have the human in the loop. And uh I think that’s a contribution that is going to be considered a really important one, even a momentous one. Now, what [laugh] was not ready back then, I mean, a few years ago, was how do we take what’s so-called multimodal data, when we take all your medical records, your images, your labs, your sensor data, your genome, your gut microbiome, your environment and everything about you, and say, we’re going to be your virtual health assistant, or we’re going to keep you out of the hospital because we can do remote monitoring. That is, there was no ability with multimodal data to integrate all that and um give it as a package to a clinician, a doctor, or to the patient. Now there is, and now it becomes where you have to regulate it. You have to be very, very careful of the harm it could induce uh because it’s a whole different capacity. It’s taking whatever we had three years ago and a multiplier that was unforeseen at this early point in time. You know, interestingly, I spoke to all the leading A.I. gurus back then and they said, well, you know, we just don’t have the models to do this. [laugh] Someday, someday we will. And you say, well, maybe that would be eight or ten years, turned out it was just a few years. It turned out that the actual model to do it, the so-called transformer model, had already been invented by, discovered by Google, but they sat on it because they didn’t want to challenge their search uh monopoly, if you will. And then meanwhile, you know, OpenAI just ran with it. Um. And basically, at least uh at the moment, they’re in a kind of lead situation. So I think the key here was we got to another state of machines being more advanced with their apparent intelligence than we had foreseen, and we’re in an accelerated phase of that, and we have to be very careful. But the net potential here for good things is really quite considerable. 

 

Dr. Abdul El-Sayed: So walk me through some of the things that AI has already changed in medicine. I think you walked through a couple of them. You know, one that comes to mind is AlphaFold. Walk us through what it’s already delivered and how that tangibly changes the scope and practice of medicine. 

 

Dr. Eric Topol: Yeah, well, AlphaFold. I’m glad you mentioned it. That is the biggest life science uh contribution of AI. Because the fact that you could uh take the amino acid sequence and generate the accurate 3D crystal structure of a protein. Any protein in the universe. You know I mean, we’re talking about a couple hundred million proteins. Uh. The fact that you can do that is extraordinary, and that happened quickly. And that will facilitate not only our understanding of biology, but drug discovery, and already has. And it’s already unlocked secrets where we’ve never known the structures, such as the nuclear pore complex. The way things get into the nucleus of all of our cells. So that’s exciting. Uh. It’s made a big contribution, obviously, in medical image interpretation. The next thing it’s going to do is change the data clerk functions, because the natural language processing and all of its downstream impact is ready to go, in terms of, instead of having to sit at a keyboard, the ability to make a note synthetically from a conversation, to extract all the important stuff from that, and a better note than we write as physicians. But not just that, but from that note: set up any lab tests and procedures that are needed, follow up appointments, do the pre-authorizations, do the codes for billing, make future contacts with the patients to nudge them about things that were discussed during the conversation and uh follow up appointments and on and on. So the point being is that this is a fundamental change in the way medicine is practiced, and we’re going to see this change over the next you know couple of years. And this is one that will hopefully be one of the most welcome, embraced changes in medicine, because it really gets rid of a lot of the work that nurses and doctors and you know all clinicians really don’t want to do. They’ve been saddled with this data clerk function. So what we’re talking about, these things, the the life science contribution, the image interpretation, and now this third chapter, which is the administrative, operational health care aspects. These are going to be seen, I think, as very important contributions that are not really with significant risk, because they all have human oversight. Each one of them, you don’t you don’t pick a 3D structure of a protein and then just accept it. You know, you do a lot of checkpoints to make sure that the model that is being created from the deep neural network is indeed accurate. For any one of these things, the guardrails really can’t come off, as long as the doctor checks the note, you know, and as long as the image is not just only interpreted by an algorithm. When you start getting the humans out of the loop, potentially you basically are doing things that are clinical, diagnosing, treating. That’s when you start to get potentially into trouble. 

 

Dr. Abdul El-Sayed: So I want to just explain a couple of pieces there for our listeners who might not have a background just in basic science, but what AlphaFold does is, it takes an extremely complex set of equations that tell you about how the functional pieces of different amino acids interact. So a protein is just a chain of different amino acids, each of which have this thing called a side chain. And that side chain interacts differently with different side chains. And it’s those side chain interactions, both directly, but then indirectly, that that predict how a protein is going to go from a chain of amino acids into a folded three dimensional whole. And what AlphaFold did, it basically, before we had done this, we basically just did, I guess, guess-and-check, you know, at scale. I think what what computer scientists would call like brute force analysis, which of these does the thing that we think it should do, versus now it can operate at a level of complexity that breaks, you know, our ability as as theorists or scientists to predict. And it’s extremely accurate. And at that point, once you can do that, right, once you know you can map from DNA to an amino acid sequence to a three dimensional protein, that gives you a whole lot of power around being able to manipulate biology. The other point that you were making was about images, and I think a lot of folks don’t really appreciate this, but there are whole branches of medicine that are basically built around reading images, whether you are a pathologist who’s like, unless you’re a forensic pathologist and you’re doing autopsies, your job is basically to look at very, very, very, very thin slices of human tissue and identify what’s wrong with them based on the ways that cells look. And if you’re a radiologist, right, your job is to look at different kinds of scans and identify what’s wrong. And, you know, it’s interesting. Anybody who’s ever gone to medical school or trained in clinical medicine at all, the the the ability for the human brain to identify patterns is on full display when you watch a very, very capable radiologist read a scan. Right. I mean, I remember reading my first chest X-ray, and you read it and then, you know, the senior medical student reads it, picks up some more things, and then an early resident reads it, and then a late resident reads it. And then you have a fellow who reads it, and all of them are picking up new stuff. And then you have this like, you know, attending with 25, 30 years of experience. And they’re like, you know, in 5 seconds, you’re like, you missed this, you missed that, you missed this other thing. And you’re like, is it really there? And they just see it, because because they’ve been looking at these their whole lives. Now we have machines that can train on literally every single chest X-ray that’s ever been done to identify patterns that that the human brain simply can’t see, because you’ve only seen the ones that you’ve seen. Right. And and so there’s just something really unique about this in terms of the ability to, like, fundamentally change things. What’s also interesting, to to your point, you know, I sort of think about, I went to medical school and decided not to do a residency, and somebody asked me, what did you learn in medical school? And I was like, well, the soft skill was to listen to and speak to people in pain. And that’s a really, really important skill. And I’m really grateful that I have that skill in the work that I do every day. 
The other skill, though, was the ability to generate a differential diagnosis. Like, of all the things that you have to sort of be able to do at the end of medical school, the ability to generate a coherent differential diagnosis is basically what you learn in medical school. That’s basically it. Now, ideally, from a differential, you can identify what the next step is in terms of diagnostic testing and what treatments would look like. But really that ability, like the sort of intellectual firepower to generate a differential, that’s where the ability to use both inductive and deductive reasoning collectively comes in. And that takes a lot of training, four years of training. And we’ve got machines now that, based on what you’re sharing, can just sort of observe a set of conversations and can do it for us. And so I want to get to the question you asked about the more disruptive, potentially dangerous parts. But one of the disruptions that I see is just that. I don’t know that medicine as we know it will continue to exist for very long. Right. I don’t see why, aside from a sort of a check function, you’d have the same level of radiology or the same level of pathology as you do in the pre-A.I. world. Where am I wrong on that? 

 

Dr. Eric Topol: Well, I think one of the things you have to bear in mind is machine AI sees things that humans can’t. Yeah. Okay, so if I show a retinal photo of yours to a trained neural network, it will not just tell me about the status of your retina in your eye, but it will tell me about things like kidney disease, hepatobiliary disease, your risk of heart disease. It would tell me about your control of your diabetes if you had that, your blood pressure. And so any medical image that’s been trained properly, there’s things that are rich that you will never be able to see, because these are trained by, as you were alluding to, hundreds of thousands, if not millions of images. So we see that, I mean, a cardiogram that can tell your hemoglobin level if you’re anemic, and also your ejection fraction, and even your age and your sex from your cardiogram. And I’ve read cardiograms for the last 30-some years, and I would never be able to do any of those things, really. And so it’s each image. You touched on the pathologist. Well, when that pathologist is looking at a slide, he or she wouldn’t be able to say, well, this is from a cancer where the prognosis is such and such, the driver mutation is such and such, it has these many structural variations, and by the way, the tumor is coming from this, not that, all from the slide. The pathologist could never see that. So if you want to understand the power of AI, when you talk about unsupervised learning, it’s extraordinary. And so just because we’ve gotten into this new phase where it’s kind of gone into, you know, supercharge, let’s not forget some of the things it’s recently been able to accomplish, which are great contributions to not just accuracy in medicine. But, you know, I don’t think medicine is going to change as much as you’re, I think, suggesting. But someday you’ll take a picture of your retina with the smartphone and you’ll get a checkup medically of every organ system, essentially, whether, you know, you have any signs of neurodegenerative disease at the earliest possible timeframe, and and on and on. So that’s how I think medicine can change in a very positive way because of leaning on machines. So when you say, am I an optimist, I do see some of those very positive things. 

 

Dr. Abdul El-Sayed: And I fully and 100% appreciate that. I guess I was thinking through that. It strikes me that there are a couple of issues. The first, which you were going to get to, was about when AI operates at the continuum between diagnosis and treatment without a human on the loop. Walk us through what that would look like and, of course, some of the risks that come out of that. 

 

Dr. Eric Topol: Well, there we get to the real frontier, which is patients, because most of the time when people talk about A.I. and health care, they’re just thinking about doctors and nurses, clinicians. But actually, the patients, if you look at that 1 billion unique users of chatbots in the first three months, they’re all patients, right? I mean, the mass use of these large language models will be patients. And now with GPT-4 it’s multimodal, that is, you can use images, videos as input with prompts, not just text and and speech or voice. So what happens when you, let’s say you have a skin lesion and you put it into the large language model as a patient, because you want to know, should you go to a dermatologist? And what happens is the large language model interpretation says, oh, nothing to worry about, you know. And it turns out you actually have a very serious skin cancer that needed attention. That’s just an example of how things could go wrong, getting a bad screening diagnosis. And, you know, we already have, you know, we have like a smartwatch which would tell you if you likely had atrial fibrillation, an important rhythm problem, or, you know, the idea that you could diagnose a urinary tract infection from an AI kit you could get from the drugstore, which already exists in many countries outside the U.S., probably will be here imminently. So doctorless screening through AI is going to become an issue, because if it gives people the wrong answer, particularly if it’s a potentially serious mistake, that’s not going to go over well. So these these systems for doing doctorless screening have to be really proven to be of benefit and not harm. 

 

[AD BREAK] 

 
Dr. Abdul El-Sayed: Let me let me play devil’s advocate. I guess we’re switching here. I think about how often disease is missed in screening. You know, we came up learning about sensitivity and specificity, which are basically metrics that are fancy ways to say, how often do you get it wrong either way? And those are those are not particularly high for a lot of screening that we do. And so the analogy here that I think a lot of our listeners might have heard is the question of self-driving cars, right? Will self-driving cars cause accidents? Absolutely they will. But will they cause fewer accidents than human driven cars? You can imagine a world where they get to doing that pretty quickly, particularly as the density of self-driving cars and the predictability of the algorithms that they use is higher. Right. Versus humans who, you know, predictably text and predictably fall asleep and predictably drive drunk or high. And I guess the, you know, if the question is whether or not AI based algorithms will get it wrong, I think that they they certainly will. The question to me more is, will they get it wrong less than human doctors get it wrong? And what happens then when you introduce a human on the loop who, because they have, and this is the thing I’m worried about, they have ceded so much of the deep learning that we’re required to do when we’re the only ones involved in diagnosis and treatment to AI, that when they do intervene, they get it wrong. Because I can imagine a world where, like, if AI does everything for you, like, I remember being in medical school, and I trained at a particularly rigorous medical training program I’m very grateful for, even as someone who doesn’t practice. You know, the hospitals I trained at, we were so understaffed that, like, you went and drew your own blood and then you took it down to pathology and you waited. And then you sit with the pathologist and look at it. And so there was really good learning that happened there. And I worry that the quality of training, because of how easy it is to cede that work that goes into, like, really becoming an expert, I worry that the quality of training that a lot of our clinicians, or the next generation of AI empowered clinicians, is going to get is, in effect, going to make them useless, because you really don’t have to get it right. And I just, I want to understand sort of your sense of that. Like, is it possible to actually, after some washout period of, like, folks who did it the old way? I mean, I don’t, I’m saying this as somebody who had, you know, our version of AI was Google Scholar, or even better, what’s that, what’s that website that that everyone uses as CliffsNotes? 

 

Dr. Eric Topol: UpToDate. 

 

Dr. Abdul El-Sayed: UpToDate. Thank you. We all had UpToDate. Right. And so I think about my father in law, who’s literally at the end of his practice career. He’s like, oh, you guys are so lazy, right? You’re just so lazy. You just go check UpToDate. Like, you didn’t actually have to go and find the information in the journals, like, go to the library and read up on your patient. And so AI just makes it even easier. So I guess my question is, is it even possible at some point, when you’re training clinicians in the world of AI, for them to be as human, right, as as clinicians of the past would be? Like, is it even possible to imagine a world where, once A.I. has gotten good enough, that you even have on the loop clinicians? 

 

Dr. Eric Topol: Yeah. So there’s a lot to unpack there. So firstly, I want to just make a point that may be obvious, but it’s worth mentioning: half of doctors are below average. Yeah. Fair point. Okay. So if you can help the half that are below average in many aspects of their care of patients, that’s a good thing. Now, with respect to the medical knowledge domain, which is going to be in large language models, already in GPT-4 you’re getting an A, and this makes UpToDate look weak, right? I mean, you’re getting information up to the moment. You know, we’re not frozen a year and a half ago, but I mean it’s it’s ridiculous. So when you have real time medical domain knowledge for any given patient you’re seeing, and then the inputs of those symptoms or, you know, the test results, all the above put in, you’re getting a helper function, an augmented function that, you know, you just can’t do on your own. We have to, as you say, cede some of it, lean on a machine. Now, does that mean we’re going to become lazy? Does that mean that pilots, when they have autopilot, they don’t know how to fly the plane? No. I mean, there will never be level five self-driving cars, which means driving under all conditions, where the car is autonomous. No, it’ll never happen. And there’s been a lot of hype that it would, and it never will. And there will never be this idea that doctors are going to lose their ability to function as doctors. They’re just going to get helped a lot. Now, are there lazy people out there in any profession that don’t want to, you know, really review things carefully, or are just too busy with other things in their life? I mean, I guess so. But I think most people who are involved in caring for patients are genuinely caring. Right. And they’re not going to let this stuff slip. So I see it. I see the positives. You bring up the potential negatives. I see that too. But I don’t think that, I think we’re still going to be able to land the plane with the autopilot functioning at the time. 

 

Dr. Abdul El-Sayed: I, I appreciate the analogy of autopilot. I worry that so much of medical knowledge is about repetition, that by definition, even the greatest experts will not have had the kind of deep repetition that that they would have had in the past. Your ability to to trust the AI, right, or trust autopilot, is high. And yes, and maybe this is the way training will have to change, is that in the pilot scenario, yes, you may fly on autopilot, but you need to do your own reps every once in a while just to just to make sure you get it right. I just worry that landing a plane is the same most of the time. I think that clinical care is so, can be so vastly different. And in the scenarios where you second guess the AI, by definition it’s what we call the zebras, not the horses. That because you will not have seen them as often, the ability to differentiate and second guess the AI, which is a way better pattern recognizer than you are, makes the analogy to flying an airplane a little bit less clear, right? Like, landing an airplane most of the time looks the same way. Treating a patient, I would argue that the times where it matters for you are the times where it looks different. I want to also move on just to a different scenario, which is one of the challenges I have with AI is recursion, is that, you know, and you get this in the information space all the time, which is a lot of the hallucination doesn’t really get picked up by the average user. So as someone who’s an expert, when I ask it a question, I’ll take it to the edge of its knowledge and it’ll just hallucinate an answer. And I’m like, I know for certain that that’s wrong, I can cite you the paper that proves it. But most people would be like, I guess that’s the right answer. And there’s a risk of inadvertent misinformation, simply because AI is going to create more and more of the information that other A.I. trains on. And I worry that that’s particularly a problem when it comes to health because of the scenario that the pandemic created, which is, you have a completely novel virus that looks somewhat similar to other things. And all of a sudden, right, you have an AI based algorithm that has learned on everything that has already existed and is not well geared to give you an answer that does not yet exist. And I worry about that both in terms of its ability to identify novel issues, right? You think about the advent of COVID-19, or the advent of HIV, etc., but also in terms of thinking about the way we integrate new treatment, right? Because by definition, it’s going to recommend other treatments that it has data for, that it’s learned on. So how do we think about novelty and change in a world where AI ends up training so much of itself, and it in fact holds us back? 

 

Dr. Eric Topol: Right? Well, I mean, the idea is that whatever models we’re seeing now, like GPT-4, they’re going to improve. And I think there’s lots of improvement that we’re going to see with this. It’ll be iterative and then, you know, pretty substantial improvement, such that incorporating real time information, like the example you’re giving, a new virus, and trying to anchor that with, you know, new links that are particularly insightful or germane. I mean, that’s the kind of challenge, I think we’ll see how future models perform. But I think that the thing that you’re not potentially appreciating as much as I am is that the knowledge part of a patient doctor relationship is one dimension. The caring part. Okay. Mm hmm. That is, I really understand your concern, and I’ve got your back. I care about you. Yeah. This is the part that’s missing in medicine today. This is where patients are getting roughed up. I mean, I wrote about being roughed up in the book Deep Medicine. And when I go in a room and ask, you know, an audience, how many of you or your relatives have been roughed up by a doctor? I mean, everybody raises their hand. That’s the part that we can improve. I mean, the knowledge part would be augmented. And yeah, there may be some exceptions, like something that’s never happened before. But what about all the rare things that doctors can’t keep track of, or the new things, you know, who can keep up with all the medical literature on a daily basis, no less on a monthly or yearly basis? Mm hmm. So just keep in mind, I think the biggest shortcoming in medicine today is the relationship has suffered. You know, I’m an old dog in medicine. You’re much younger in your experience. But when I was in medical school in the late seventies, the the patient doctor relationship was precious. It was an intimate relationship. What is it today? It’s very rare to be able to find a doctor who you feel really cares deeply about you and is there for you at any time to help you. So the point being is, don’t forget how much AI can help decompress things. To have that gift of time and restore the caring, which is the essentiality of medicine, right? 

 

Dr. Abdul El-Sayed: Yeah. I don’t I don’t disagree with you on that one. And I think I think you’re spot on in terms of what’s missing in so much of medicine and health care today. I would diagnose the challenge as being, like you said, one of complexity and bureaucratic drag. But I would also characterize that challenge as being one of misaligned incentives. One of the biggest issues, and you’ve written about it, has been consolidation in health care. And you have more and more of the means of health care owned by fewer and fewer corporations, who have the ability to squeeze providers in ways that tend to force out the humanism first. And one of the fears I have about the future is the way that the owners of the means of AI based health care are going to be able to own the future of health care. And I worry that that’s going to drive a lot more of the consolidating impact of the economics of health care. Can you speak to that and what it looks like to democratize some of this tech in ways that really do allow us to be more human at the bedside? 

 

Dr. Eric Topol: Yeah, I share your concerns. That’s actually my fundamental worry about where this is headed, because at least in American medicine, what basically the structure is, there are overlords. There are the CEOs of health systems. These are largely, almost exclusively non-clinical people, who are business people running American medicine. And so the problem with that is the power of AI to squeeze more out of clinicians, you know, read more scans, see more patients, read more slides and everything, do more and more and more, because that’s more supporting the financials. This is where having overlords like this sets all of this up to fail. And unless we as physicians and the health care profession stand up for patients and for our profession and for our purpose, it isn’t going to get fulfilled. So this is a really serious obstacle, because unlike other countries where there is universal health care and these mal-incentives are not present like they are here, we’ve got a real structural, serious problem. And you’re seeing now much more forming of unions among physicians than we’ve ever seen before in the history of the medical profession in the US. And we’re likely going to see much more of that. But it’s all being done, you know, kind of at local levels. We don’t have a professional society that is willing to take this on. And so we’ve got problems. And of all the things that I worry about in the era of AI, this is number one. Mm hmm. 

 

Dr. Abdul El-Sayed: Well, I think we’ve we’ve we’ve found a very clear point of consensus, even if a dour one. But I also, it strikes me that that you are as optimistic as you are and I guess as cynical as I am. And I and I think part of that is, you know, if you if you sort of dissect this out into an age, period, and cohort effect, I think unfortunately, unfortunately, I think my cohort has sort of seen the worst of what the Internet can create. And I think many of us are lonelier, more depressed and and more frustrated overall. And then particularly, I think about my colleagues in medicine, the ones who stayed in medicine, and I don’t know any of them who aren’t trying to get out. Every day I talk to, I have a large group of friends, I went to, ended up going to medical school in two different places. I have two different sets of classmates, and every single one of them is just frustrated, burnt out, jaded. And, you know, I really, really hope that we we can achieve some of the the the bigger, broader, you know, more marvelous goals that AI can offer without bearing some of the potential costs, let alone this whole notion of doom. The other interesting thing, I think, I had this debate with a friend about whether or not people just get more cynical with age or whether or not cynicism is a U-shaped curve. And I think I think my my conversation with you today reminds me that even as a pretty optimistic guy more generally, that there is a U-shaped curve. And I you know, I know that you’ve, your your your personal model is trained on a lot of history. And I really appreciate you bringing that history and that perspective to our conversation today. Our guest today was Dr. Eric Topol. He is, I would say, the foremost futurist in American medicine and health care. He’s also professor of molecular medicine and executive vice president of Scripps Research, and founding director of the Scripps Research Translational Institute, and also author of the book on AI and medicine, Deep Medicine. Dr. Topol, thank you so much for taking the time to join us today. 

 

Dr. Eric Topol: Thank you. I really enjoyed it. 

 

Dr. Abdul El-Sayed: As usual. Here’s what I’m watching right now. 

 

Speaker 4: Tonight, hazy skies are blanketing wide swaths of the country, from Boston to Philadelphia to New York City, where you can barely make out the skyline. 

 

Speaker 5: Hundreds of wildfires in Canada are still burning out of control. And while the hazardous smoke is finally easing up, it’s all raising concerns over whether dangerous air could be the new normal. 

 

Dr. Abdul El-Sayed: Last Wednesday, while my daughter and I were in the car on her way to school, she said, Baba, I can’t see the edges of the sun. When I looked up, she was right. After reminding her never to look directly into the sun, I had to think about why she was right. It was all that PM 2.5 in the air. In other parts of the country, it had turned the sky the color of 7 p.m. PM 2.5 stands for particulate matter smaller than 2.5 microns. Particulate matter is a fancy way of saying stuff, and that stuff is so small that our bodies’ usual mechanisms for stopping it from getting into our lungs can’t even catch it. So there’s this system of tiny cell fingers lining our throats that literally move the stuff that gets caught in our throats upward. That’s why, if you’ve ever visited or lived in a place with lots of smog, you cough up black sputum. But those little throat fingers we all have can’t catch the smallest particles. So they go deep into the balloons that make up the ends of our lungs, called alveoli. That’s why PM 2.5 is so dangerous. It’s bad for pregnant women, infants, children, particularly those with asthma, and vulnerable adults with chronic illnesses. It can cause heart attacks, as all that junk increases the resistance in our lungs, forcing hearts to work harder in the process. And it’s why it’s estimated that spending 24 hours under the kind of air quality that we experienced in Michigan is the equivalent of smoking ten cigarettes. Make no mistake, these kinds of bad air days are becoming more common. And it’s all because of climate change. What caused that horrible air? Smoke from fires thousands of miles away in Canada. What caused the fires in Canada? Unusually dry weather. What caused that? Well, climate change. It’s another reminder of how climate change will affect us. And don’t forget, if you’re someone who’s never been exposed to that kind of putrid air before last week, that’s called privilege. There are millions of Americans, predominantly black and brown, who live in the airsheds of smokestacks that are driving climate change every single day, and the air they breathe and the bodies that it wrecks tell that story. But it’s not all doom when it comes to lung health. A panel of advisers to the FDA just recommended a new medication to prevent RSV in infants. RSV, short for respiratory syncytial virus, is an infectious illness that sends tens of thousands of babies to the emergency room every single year. If you’ll remember, RSV was really bad this winter. Nirsevimab, an injectable monoclonal antibody designed to target RSV among infants, was shown to reduce the probability of infections requiring hospitalization by up to 75%. And even as we bring you that promising story, finally, a reminder that pharma’s gonna pharma: the giant pharmaceutical corporation Merck is suing the U.S. Department of Health and Human Services over the prescription drug negotiation piece of the Inflation Reduction Act. If you’ll remember, the law would allow the federal government to negotiate the prices of ten of the most expensive prescription drugs that, A, have no generics or biosimilars and, B, have been on the market for at least nine years or more. And the kicker here is it doesn’t even take effect until 2026. But Merck couldn’t even stomach that. Get this, they argue that prescription drug price negotiation violates their Fifth Amendment rights because it would, quote, “take their private property without due compensation.” Sir. 
Need I remind you that we, the federal taxpayers, underwrite their prescription drugs by literally paying for the initial research that produces them, Merck? That’s it for today. On your way out, don’t forget to rate and review. It really does go a long way. Also, if you love the show and want to help us, I hope you’ll drop by the Crooked store for some America Dissected merch. America Dissected is a product of Crooked Media. Our producer is Austin Fisher. Our associate producers are Tara Terpstra and Emma Illick-Frank. Vasilis Fotopoulos mixes and masters the show. Production support from Ari Schwartz. Our theme song is by Taka Yasuzawa and Alex Sugiura. Our executive producers are Alisha Duran, Sara Geismar, Mike Martinez and me, Dr. Abdul El-Sayed. Thanks for listening. The show is for general information and entertainment purposes only. It’s not intended to provide specific health care or medical advice, and should not be construed as providing health care or medical advice. Please consult your physician with any questions related to your own health. The views expressed in this podcast reflect those of the host and guests and do not necessarily represent the views and opinions of Wayne County, Michigan, or its Department of Health, Human and Veterans Services.