This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

The 229 Podcast: Shockingly Human Care and Why Scribes Won’t Fix Healthcare with Spencer Dorn, MD

Bill Russell: [00:00:00] Today on the 229 Podcast.

Spencer Dorn: We talk a lot about AI providing differential diagnoses, et cetera, but ultimately healthcare is not just about providing knowledge. It's about coordinating care, and it's about accepting risk when things go wrong.

Bill Russell: My name is Bill Russell. I'm a former health system CIO and creator of This Week Health, where our mission is to transform healthcare one connection at a time. Welcome to the 229 Podcast, where we continue the conversations happening at our events with the leaders who are shaping healthcare.

Let's jump into today's conversation.

All right. Today we have the 229 Podcast, where we continue conversations we've started at the 229 Project events. And today we're joined by Spencer Dorn from UNC Health: practicing gastroenterologist, vice chair, professor, teacher,

informaticist. Any other titles you want to throw in there, as if that wasn't enough? [00:01:00]

Spencer Dorn: That's more than enough, Bill.

Bill Russell: Man. Hey, welcome to the show. I mean, one of the things that has brought us together is that you're one of the people I follow on LinkedIn, and I read your stuff all the time.

I love how you get people thinking. I love the fact that you're sharing your journey as well. I think you described yourself as a tech optimist, or maybe the word is tech realist, but you've become someone who can see both sides, essentially. That's how I think I would describe your journey.

Spencer Dorn: Thanks. I appreciate that it's resonated with you. I just try to write and share what I'm experiencing and what I'm thinking. And most of all, and maybe we'll get to this later, I write because, for me, writing's part of the thinking process, right? How do we make sense of this wild experience that we're all having right now?

So, yeah, it's great to get to meet people like you and learn from others. So it's been really fun.

Bill Russell: Talk to me a little bit about your journey into technology. Most doctors didn't start off saying, I really want to explore the technology aspect of it. But [00:02:00] maybe you did, you're young enough.

Maybe you're in that generation that likes the intersection of technology and healthcare.

Spencer Dorn: I've written about this a bit, and I've given presentations. I'm from this strange micro-generation called the Oregon Trail Generation: technically a little too young to be Gen X, a little too old to be a millennial. I'm from this generation born in the late 1970s where we grew up in an analog world, but the world digitized when we were still fairly young, like high school, college age. And I think it shaped my thinking significantly, because I've kind of lived on both sides of digitalization.

Likewise in healthcare as well. When I was in medical school, everything was paper, right? You walked to the medical records department and tried to decipher all these chicken scratch notes. And that continued through the early parts of my training and residency. I actually used one of the first electronic order entry systems and really some of the proto-EHR type stuff.

And then I guess I was young enough that EHRs became a thing later on in my training and [00:03:00] my fellowship and when I became an attending. So I'm not a technophile per se. I would say I don't line up outside the Apple Store for the latest. I don't know if people still do that, but I don't do that.

I like technology. It's interesting, and I think about technology all the time, but I don't need to have the latest and greatest. What's really interesting to me is just how technology is reshaping our lives and reshaping medicine. And I like to bring some of the experience I have from, you know, the quote, old days, before everything was digitized and before AI was something we talked so much about.

So, yeah, that's kind of how I wound up thinking about this and working on this.

Bill Russell: There's part of me that wouldn't mind just walking through some of your posts. You wrote about taking your wife to, I think it was, the Oxford emergency department after an accident.

And you were struck by the experience. No forms, no signatures, just care. And you called it refreshingly human. This is one of the places where we ended up interacting a little bit, because I commented on [00:04:00] that. Tell us about that experience.

Spencer Dorn: Yeah, sure. Our family was in the UK, and we took a day trip out to Oxford, which, if you've been there, you know is a beautiful place. And one of the things to do in Oxford is punting, which is, you get on this little boat, almost like a gondola if you've been to Venice, you get a big pole, and you go out punting.

And what was wild about this is, right when we got on the water, another boat slammed into us. And sadly, her hand...

Bill Russell: How fast can you be going when you're punting?

Spencer Dorn: What's that?

Bill Russell: How fast can you be going when you're punting?

Spencer Dorn: Uh, we weren't going that fast, but this boat kind of slammed into her hand, which was outside the boat, which was maybe a mistake.

And blood, it was like a movie scene. She screamed, and blood just started spurting. Fortunately it wasn't that bad. We got back to the shore and took an Uber over to the local emergency department, which, as you may know, they call the A&E.

And it was just refreshing to get the care there. First of all, I've learned, by the way, that the Oxford A&E is a premier A&E; they're not all that great. But this was a really great service experience. [00:05:00] What I wrote about is, I waited in a queue, and when I got to the front of the queue I explained what happened and that my wife needed her finger looked at, because it was probably broken and might need some stitches.

They took my information, they said, thank you, have a seat there. And maybe an hour or so later, a nurse practitioner came and saw us, and we got an X-ray and the wound cleaned and antibiotics and a sling, all of this. And what was shocking, of course, is there wasn't a charge. For any American, that would be shocking.

But what shocked me, as kind of a healthcare nerd, is I did not sign a single thing. There was not a digital signature. There was not a hand signature. It was just, yeah, shockingly refreshing. Because, as you know, in American healthcare, the first thing we do is have people sign, not just once, usually many times.

So yeah, it was an interesting experience culturally, and I was really impressed with the National Health Service.

Bill Russell: Are they not worried about liability at all? If I had to think of why we sign so many things, it's regulatory and it's liability.

Those are probably the [00:06:00] two primary reasons we sign so much.

Spencer Dorn: Probably. And yeah, I guess the third thing I'd say is the administrative complexity of American healthcare. It's because there are so many players here. In the NHS, you know, the government is the payer and the employer, it's all kind of unified, so there's simplification there as well.

So maybe there's not as much concern about liability. But I think it's also probably the administrative simplification that we lack here in the States.

Bill Russell: You know, the humanity of healthcare is something you end up talking about a fair amount. There was another post where you described a tense moment

during a procedure, when a patient's oxygen saturation dropped to 30. I'm not a practicing doctor. I think it was like 30%. Is that right? 30 percent-ish.

Spencer Dorn: Yeah. Although my anesthesiologist colleagues have been correcting me, saying it couldn't have dropped that low, but I was like, ah, that's what we saw.

That's at least what I remember. But yeah, the patient desaturated. So what happened was a relatively routine procedure, and the patient just got into trouble. It sometimes happens, even during relatively routine [00:07:00] procedures. And what that little vignette was about was just the amazing response of my anesthesiologist colleague, who stepped in with complete calm and really did two things.

Two things that AI can't do. The point of that post really was that AI can't do everything. AI can unbundle knowledge from experts, but knowledge alone is not everything. And the two things my colleague demonstrated that AI cannot yet do are: one, he very quickly coordinated care, right?

You do this, you do that, you go get the Ambu bag, someone get the coso, very quick coordination. Technology can help coordinate, and many people would say that's actually one of the great promises of technology, but in that situation, technology wasn't doing that. And then the other thing is he managed risk, right?

Like, we talk a lot about AI providing, you know, differential diagnoses, et cetera, but ultimately healthcare is not just about providing knowledge. It's about coordinating care, and it's about accepting risk when things go [00:08:00] wrong. So that's what that vignette was about.

Bill Russell: I sort of looked at it from the humanity aspect of the whole thing.

You know, one of the things I'm hearing from clinicians is... so, you were at UGM, I assume?

Spencer Dorn: I wasn't this year, but...

Bill Russell: Okay. But you've read and heard all the announcements and everything that's come out. So, I mean, one of the most interesting, obviously, was Cosmos and Comet,

and its ability to go through the 300 million patients and look at all those things. As they do, they create these scenes on stage which are, you know, acted out, and they do all the stuff. And at one point there was, you know, a doctor saying something like, hey, prescribe this.

And essentially Cosmos comes back and says, hey, have you considered this? Have you considered this based on those kinds of things? And I asked a physician, I'm like, what'd you think of that presentation? He goes, well, first of all, it's demonstrating the amazing power of that much data and AI. It's just unbelievable what it's [00:09:00] demonstrating.

He goes, but we just went through this with alert fatigue. I mean, if every time I say something into the system, it's looking over my shoulder saying, hey, what about this? Hey, have you considered this? He goes, at some point I'm just gonna turn it off, because it's gonna drive me crazy.

We could talk about the AI-humanity challenge from two perspectives: one from a patient perspective, one from a clinician's. And starting on the clinician side, how are clinicians going to feel about having what I think I'm describing as an AI partner that's with them throughout the day, helping?

Is that a positive, is that a negative, or does it depend on the physician?

Spencer Dorn: Yeah, in general my viewpoint on these things is, it depends, right? It's a balance. There are trade-offs. There's no unbridled good, and nothing's completely terrible. Well, few things are completely terrible. And as you mentioned, Bill, we've learned a lot of these lessons before with electronic health records, right? We have decades-long experience [00:10:00] now using EHRs. AI could make physicians' work easier, and I think we're already seeing that with a lot of tools. AI makes it much easier to document.

Right. There's a reason why AI scribes have proliferated; pretty much every health system in America now has chosen a scribe to partner with. AI can help us summarize information when taking care of people. People very often have very long, complicated medical records, and you're looking just for a nugget in there, or a few nuggets, and reading through the whole thing to find those can be quite laborious.

And AI can bring knowledge to the point of care, whether it's a tool like Cosmos or a tool like OpenEvidence. There are plenty of tools that now empower us to summarize information and help us make better decisions. But there's a flip side that you're alluding to, and the flip side is we can be overloaded with even more junk and nonsense than we're already seeing.

Right? Alert fatigue is a real thing with old-school clinical decision support. It certainly can be [00:11:00] with AI-based clinical decision support as well. AI changes the way we work. Traditionally, clinicians have kind of come up with the idea.

They've kind of done everything on their own, and then a lot of the decision support is there to say, hey, did you think of this? Which is great. We love blind spot detectors. But what a lot of AI is doing is flipping that scenario around, and AI starts producing or creating the work.

And physicians are gonna start sitting more and reviewing that work and editing that work. That can create a lot of challenges. So I think there's a tremendous opportunity for AI tools to help us work more easily, effectively, efficiently, productively. But there's also a risk that, one, AI overloads us with a lot of information that really isn't useful.

And two, the AI kind of flips the work around, so we're no longer the primary producers. We're the reviewers, the editors, you know, the okay-button pushers.

Bill Russell: It can become sort of that [00:12:00] overbearing partner that's always telling you what to do. And some doctors will respond to that by just turning it off.

Others will respond to it by saying, all right, you want the wheel? Take the wheel.

Spencer Dorn: Yeah, I mean, I think it all depends on the situation. It all depends on the level of trust in the output. One of the challenges right now with AI, as I'm sure your listeners know, is the reliability. I guess two things.

One is the reliability of the output, right? It's not perfect, so applying it to high-stakes situations is tough. It would almost be easier if it was consistently wrong, because then you'd say, okay, I just need to supervise and ignore all these messages. But it's right enough of the time, and it's very convincing in the way that it speaks.

So it creates this kind of tension of, do I look at the information, do I just kind of rubber-stamp it and say, yes, this looks good, or, you know, do I just do it on my own? So I think that's a real tension. And I just think we need to think more broadly about how we work and not assume that these tools are necessarily going to make us better.

In some ways they can, but only if we apply them in the right [00:13:00] places in a thoughtful way.

Bill Russell: It's interesting to me because, you know, summaries are a killer application. Because if you say, hey, don't hallucinate, just take this information and summarize it, right?

It does a really good job of that. If you say, hey, just use it as sort of a RAG system, go out there and look at this data and find me the information, it does that extremely well. And then it captures, if I had to sort of point to the three

areas, those two, and then capturing the visit. Man, the satisfaction rates from the various players, from, you know, Microsoft to Abridge to Ambience, are pretty high. And the adoption rates are a lot higher than most other technologies we have rolled out. So I put those three in the category of,

sort of, I don't know, the killer applications at this point in history. I think that will change.

Spencer Dorn: No, I think you're right. I think your point, I mean, for clinician-facing tools, right? For tools that are kind of built for clinicians, I think you're right. AI scribes: [00:14:00] tremendous. Right? Not in all situations.

Not in all circumstances. I don't use AI scribes a lot; we could talk about why. But AI scribes are tremendous for many people. And what we're seeing is about one-third of physicians at large health systems adopt AI scribes. Some don't because they're surgeons and they're mostly in the OR and the tools aren't as useful, or they're emergency physicians, or they're hospitalists and maybe the tools haven't spread to their environments.

Some don't use it because they like writing their own notes, or they have other tools and templates they've developed, so it doesn't save them much time. But for the one-third that do use it, I believe, to your question, that it reduces pajama time, or after-hours work. But we're not really seeing it in the data.

If you look at the data from the large studies that have been published on it, it looks like AI scribes save doctors about a minute, or even less than a minute, per patient. You could say, well, Spencer, what if you're a busy doctor and you're seeing 20 patients a day? That's 20 minutes. But I think we're hoping for a lot more than a minute per patient.

And I think [00:15:00] part of it's measurement; it's kind of hard to measure this. But yes, I do think that pajama time is going down. You alluded to summarization. I'm more excited about summarization than I am about AI scribes, because I think summarization is just something that applies to all healthcare roles, regardless of whether you're a physician or an administrative person or,

you know, whoever you are in healthcare, summarization is a useful tool, and I think that will help. I know the evening before I go to clinic, I spend about two hours preparing to see patients. Admittedly, I see people with complicated conditions, so I'm probably spending more time than most physicians, but every physician is spending a lot of time preparing to see patients and figuring out who they are, especially at the first visit.

So I do think we'll see some benefits, some reduction in pajama time. And I think we'll see some improved morale, reduced cognitive burden, ability to do other things.

Bill Russell: But you don't use it. So I assume you tried it and have sort of put it on the shelf, sort of like my Apple Vision Pro that's sitting [00:16:00] on the shelf.

Spencer Dorn: I don't use it for a few reasons. One is I'm pretty efficient, I type quickly. I wrote about this a couple weeks ago: I think your typing proficiency is a big predictor of whether or not you'll use a scribe, you know, if you're kind of hunting and pecking for individual keys. I can type and look at you and speak to you at the same time; I'm a good typist.

Bill Russell: And you've probably highly customized it. I mean, yeah.

Spencer Dorn: That's the other thing. So I'm a good typist, best class I took in middle school or high school. And I've kind of tricked out my templates, so they pull in specifically the information I need. And then the third thing is what I mentioned a little earlier.

I write a lot of my note before I see the patient, because I review all their records. So a lot of my note is pre-written before going in. So for me, I could easily sit with you, speak to you, just kind of take some notes as we're going along, and then when you leave the office I could wrap up my note before seeing the next patient, or at lunch I'll catch up, or at the end of the day.

So, I mean, I think that's one of the key messages with all technology. We tend to [00:17:00] oversimplify and say, oh, AI scribes are gonna be the cure for physician burnout. No, AI scribes will help some doctors some of the time, and some will love it, some will swear by it and won't work without 'em.

Other doctors are like, it's kind of cool, but I don't need to use it myself.

Bill Russell: People would be disappointed if we didn't have the conversation about the OpenAI demo, because I think I said in the comment itself that we'll have to talk about this on the show. So the OpenAI demo, for those who aren't familiar: when they rolled out ChatGPT-5, right in the middle, you know, it literally is right in the middle,

it's like minute 28 or 30 or something like that, they bring up a woman who had received a cancer diagnosis. And she actually received it, I think, through the portal, before she had talked to her doctor, and she utilized ChatGPT to ask, you know, what does this mean?

Just all the questions you would ask a doctor, she asked ChatGPT. I was sort of taken aback, [00:18:00] to be honest with you. I mean, I'm pro technology, but I was sort of taken aback by a couple of things. One is that she received the diagnosis, a cancer diagnosis, through the portal before the doctor had a chance to call.

I was sort of taken aback by that. And second, I was taken aback by the fact that OpenAI chose to really highlight this as a feature and a case study for OpenAI. I'm curious what your thoughts are.

Spencer Dorn: Yeah, no, I'm writing a Forbes piece on this right now. Not about OpenAI, just about how AI is changing the way that we access information and, you know, the direct-to-consumer applications.

So yes, it's heartbreaking to hear that someone received a cancer diagnosis through a pane of glass. I mean, that's just heartbreaking, right? I don't wanna say it's inexcusable, because it's not like this woman's doctor was a bad doctor, but this is a consequence of digitization, right?

So we've empowered people with information, right? Patient portals and open notes and immediate access to [00:19:00] labs and whatnot, which I think by and large is a positive. But unfortunately, a consequence of that is sometimes serious diagnoses come through the portal, and patients are looking at this information without context.

So then they turn to other tools like ChatGPT. But yes, often our patients see their results before we do. And by and large that's a good thing, but sometimes, personally, I don't think that's a good thing.

Bill Russell: Most systems have, like in our technology world, we had something that checked for those and kept them from going directly to the portal.

Spencer Dorn: Yeah. For sharing information, there needs to be, you know, this is a regulatory space, and there must be a reason, a justifiable reason, for not releasing some things. And generally it's around certain sensitive diagnoses, like maybe HIV.

And often it's related to releasing mental health notes for people who, if they read them, it may be bad for their mental health. But for the most part, everything else [00:20:00] gets released immediately. And I don't know where that came from.

I don't think it's a meaningful use thing. I don't know if it's 21st Century Cures. I don't know which law the regulations were born out of.

Bill Russell: That's a scary proposition. Listening to her, though, it was interesting to hear her take on the conversation with ChatGPT about her cancer diagnosis.

For her, it was comforting, it was informative. It calmed her nerves a little bit. There are a lot of different ways it helped prepare her for her conversation with her physician. So she felt good about that.

Spencer Dorn: And I think that these tools, again, they're not all good,

they're not all bad. I think there are great healthcare uses for consumers. Helping them understand diagnoses is a great one. Helping them prepare to see their doctor, right? Setting an agenda before going into the doctor's office, getting up to speed on what you may want to ask.

How do you prioritize your concerns? I think that's wonderful. So I'm [00:21:00] certainly not bashing this woman for using ChatGPT. The bigger concern to me is that she got her diagnosis through a pane of glass. But yeah, if you look at these consumer-facing tools like ChatGPT: Sam Altman tweeted not long ago that health and medical

queries are one of the biggest categories of ChatGPT use. And there was an article in The Economist, I think in July, with some provocative title like the end of the browser or something like that, where they showed that Google searches for health conditions have gone down by 30% over the past year.

So you put those two data points together, and you can see people are clearly using AI for health-related concerns. Why wouldn't they? This is the next iteration of democratizing information. By and large, I think that's a good thing. I think we all want to have more prepared consumers.

We want to have people who feel more engaged and empowered to make decisions that are good for themselves. But of course there's a flip side, you know, there's a downside to that. And you know, we could talk about what those downsides may be.

Bill Russell: The difference with Google is you'd [00:22:00] ask it a question and it would take you to sites. There were at least two or three funny plaques on physicians' walls that I saw when I had conversations with them at the health system, and, you know, one of 'em was like, Dr. Google turning the common cough into a cancer diagnosis, that kind of thing. But that's what happens, right? No, it's different. I have no clinical background, I ask it a question, and all of a sudden I'm researching stuff I know nothing about. And I assume you could say that was true

Spencer Dorn: with Google as well, right?

But Google, like you said, presents you with 12 links. Google, I say, is like the librarian: you show up and you say, hey, I want to find a book on this, and it points you in a direction. And you can kind of see, is this WebMD or some website that's probably relatively legit, or is this something that's just completely wrong?

So you have some context for where the information is coming from, and you can use some judgment as to whether you think it's realistic. But there's a downside, right? Google points you to more generic [00:23:00] type sites. You know, it doesn't know you, Bill Russell; it doesn't know your specific question.

It may just kind of point you to, you're having heartburn, so it gives you, you know, something about heartburn. It sends you to the WebMD heartburn patient education page. But maybe your heartburn's a little different, or maybe you have a history of having had a Nissen fundoplication, or whatever it is.

So I think the benefit of ChatGPT and similar tools is there's a lot more potential for customization and personalization and even contextualization. It's certainly more convincing and engaging. The downside is, it's still a bit of a mystery where this information is coming from.

And then I think two other things, going back to what we discussed earlier. There's no coordination, right? Like, where does it get you? It may lead you to nowhere, right? You still often need to go and see a doctor to discuss it, you know, get a prescription, get a test. That's one thing. But then the liability piece, right?

Like, who's accepting risk? What if the information is wrong and it leads you down a rabbit hole you should have [00:24:00] never gone down in the first place? So I think by and large these tools are generally positive. I just think we need to have some nuance in our discussion about what this new world looks like.

Bill Russell: We're coming up on the end here. I'd love to do some rapid-fire things. We had the leaked announcement prior to UGM around the AI scribes and the valuation. I think you talked about potentially an AI bubble in healthcare. Do I remember that correctly?

I might be attributing something to you that isn't yours. I mean, do you think the valuations are justified by what AI is gonna bring to healthcare?

Spencer Dorn: I didn't say there's an AI bubble, though arguably there is. I think what I was commenting on was the massive valuations of some of the scribe companies, and I was thinking about this even before that announcement that Epic made.

You know, I think these tools are amazing in many ways. I just think, with all technology, we have to temper our expectations in terms of what they can accomplish. We have these kind of polarized views of healthcare, you know, many views of healthcare.

It's all broken, everyone's dissatisfied, doctors are all burned out. And by the way, we look at [00:25:00] technology as one of the key drivers of those problems. But then on the flip side, we have this: oh, technology's gonna save us from all these problems, right? AI's our savior. Just give all doctors an ambient scribe and they're gonna be happy, they're gonna be productive.

So I think we just need to be careful with what we assume, and I think we need to avoid these kind of polar extremes: all this stuff is terrible, it's just gonna cause more problems, it's already caused all these problems; and the flip side, this stuff is magical, this is gonna save us, this is going to fix these problems.

These problems really aren't technological problems. Often they're just, you know, social, economic, political problems that we deal with in healthcare. And I think in many ways we expect too much from technology.

Bill Russell: This is what we learned with social determinants, right?

Even if healthcare was perfect in the United States, we'd still have health issues, because healthcare itself is only a percentage.

Spencer Dorn: Yeah, it's only about 20%. It's [00:26:00] estimated to be about 20% of health. Right. And likewise, the in-basket is not the only reason why your doctors are unhappy.

Or, you know, it's not just patient messages. There may be many reasons, and if you just say, I'm gonna give you this AI tool and this is gonna fix all your problems, it hopefully will help improve some of the challenges, and hopefully we can combine some of these technological capabilities and products and features into a basket of things that really make people's lives better.

But I just think we need to be careful and not oversimplify the problems, and not too quickly suggest that these tools will magically fix things, or at least fix them immediately. Right? We know there's a long arc to technology, and this is Amara's Law, right? We tend to overestimate the short term and underestimate the long term.

So I don't know if I'd say there's a bubble, but I think we just need to keep our feet on the ground.

Bill Russell: What did you write out on LinkedIn where you were surprised at the response after you wrote it? I have a couple of these in my life where I wrote something

and I'm [00:27:00] like, wow, that struck a nerve.

Spencer Dorn: I think maybe somewhat the theme that we expect too much too soon. People say I'm a contrarian. I'm like, I'm not a contrarian. I love this stuff. I use these tools. Maybe I don't use AI scribes, but, you know, I kick the tires. I think this stuff is really amazing.

I'm not here just to argue. Technology has amazing capabilities, and if we use it the right way, I see tremendous potential. But one of the things I consistently try to explain is, let's not expect too much too soon, because then we're just setting ourselves up for failure.

So I think that's one thing. Another, maybe more provocative one: I always push back on the AI empathy piece. Like, do people really want a machine that makes 'em feel better, or, you know, is that some sort of synthetic empathy? But I guess the counterweight to that is that in a lot of our roles as clinicians, nurses, caregivers, we act in machine-like ways.

So sometimes the machine is more empathetic than we are, not because we're bad, you know, coldhearted people, just because we're often working in scenarios that don't allow our [00:28:00] humanity to flourish. But yeah, I don't know, those are maybe two themes that sometimes people push back on me about.

Bill Russell: I interviewed a woman whose son recently died from cancer. We were having a conversation, and I asked her how she was using AI in the care of her son. And she said, well, you know, I validate everything the doctors tell me against AI. I thought, that's interesting; we could have had that conversation for 20 minutes if I really wanted to pull on that thread.

And she goes, but that wasn't the biggest use case for me recently. I said, really? Well, what was the biggest use case? She goes, essentially as a friend. You know, when you have somebody you've cared for a long time and they're in and out of the ER and in and out of the hospital over and over again, you really burn through your friends.

Like there's nobody to call at two o'clock in the morning after you do that for, you know, six, seven years.

Spencer Dorn: Yeah,

Bill Russell: And her comment was, I started having conversations with ChatGPT. And she goes, you know, I knew what I was doing. I knew I was talking to technology, but it was really affirming.

It felt like a [00:29:00] conversation. And she goes, it was good for my current condition, my mental health, you know, to have a conversation. And I knew it was technology, she goes, but

it helped. I was like...

Spencer Dorn: Yeah. And, well, I mean, obviously I'm super sorry to hear about her experience.

There's a Harvard Business Review article that came out maybe in April. It showed the number one reason people use ChatGPT is actually for therapy. I think people would prefer having a human therapist. But here's the thing, and this isn't so controversial: we often compare things to the best possibility, right?

The world-class psychiatrist who's just uncommonly gifted and empathetic and kind, or the cardiac surgeon who's, you know, leading the field. Most people don't have access to that. We have constrained resources. We have limited access. Even if people are fortunate enough to have a clinician they believe in and who is capable, they don't get much time with them.

So I certainly think technology and, you know, [00:30:00] AI play a potential role here. And I sometimes speak to AI; I find it's a great brainstorming partner when I'm exploring a topic or when I'm outlining a piece I'm about to write. So I certainly would not push back against that.

I think sometimes it goes too far. We're seeing people emerging with, you know, AI delusions, right? And we have sadly seen reported cases of people committing suicide after speaking to some of these bots. But I think by and large these are mostly positive developments,

provided, again, we keep our feet on the ground.

Bill Russell: I'll close with this question, and this is off the beaten path; we haven't really talked about this before, and I'm not sure I've seen you post about it. But you live in North Carolina, and North Carolina is predominantly rural. Do you think this technology, I mean, it clearly doesn't answer all the challenges we have with rural healthcare.

There's a potential government shutdown at the end of September. There are Medicaid conversations going on. I mean, there are a lot of things to solve, but in what ways will [00:31:00] technology be applied to rural healthcare, where the closest major hospital is, I don't know, an hour or two away?

Spencer Dorn: I think technology is an extender, right? Look at how we're having this conversation here. Technology can connect us more easily, and I take care of many people from rural North Carolina, many of them in person, but many of them remotely as well over, you know, telemedicine, et cetera.

And with these tools, initially there's a tendency maybe for the early adopters to live in urban areas or be highly educated. But I think perhaps there's more opportunity for spreading these tools to underserved areas or underserved communities that may lack the easy access where, you know, they could drive 10 minutes to see their doctor, or quickly call through their patient portal and understand everything that's happening.

So, yeah, I think these tools could potentially benefit everyone, regardless of where they [00:32:00] live and who they are. But I think there is additional potential promise for people who maybe get left out of healthcare too often.

Bill Russell: Well, Spencer, I wanna thank you for coming on.

I'm gonna keep monitoring your posts on LinkedIn. You're pretty prolific on LinkedIn, and as you said, you're writing for Forbes as well. I don't know how you do it all, and also maintain one of the coolest offices, with the jazz poster behind you and stuff like that.

I appreciate you getting the conversation started out there so that we can continue to have it here.

Spencer Dorn: Yeah, I appreciate it, Bill. I've been listening to you for years, so it's been a pleasure to come on and speak with you. Thanks so much.

Bill Russell: Thanks for listening to the 229 Podcast. The best conversations don't end when the event does. They continue here with our community of healthcare leaders. Join us by subscribing at thisweekhealth.com/subscribe.

If you have a conversation that's too good not to share, reach out. Also, check out our events on the 229project.com website. Share this episode with a [00:33:00] peer. It's how we grow our network, increase our collective knowledge, and transform healthcare together. Thanks for listening. That's all for now.