This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Newsday: Unsanctioned AI Risk, Governance, and the Path Forward with Dr. Holly Urban
Sarah Richardson: [00:00:00] Health systems run better when clinicians get fast, trusted answers. UpToDate now brings evidence-based content together with new AI-powered search to cut workflow friction and give leaders clear insight into how care decisions happen across the enterprise.
UpToDate is where trusted evidence meets real-time efficiency.
I'm Bill Russell, creator of This Week Health, where our mission is to transform healthcare one connection at a time. Welcome to Newsday, breaking down the health IT headlines that matter most. Let's jump into the news.
Bill Russell: All right, it's Newsday, and today I'm joined by Drex DeFord, in black today, and Holly Urban with Wolters Kluwer, a pediatrician and informaticist currently working at Wolters Kluwer. And today, hard to believe, we're gonna talk a little bit about AI. Is that hard to believe these days?
Holly Urban: You rarely hear about it, so. [00:01:00]
Bill Russell: I don't, the conference season's coming up, I'm wondering, like we used to say, oh my gosh, every booth has AI in it. And we were kind of surprised. I, I'd be surprised if every booth doesn't have AI in it this time. If they don't, I'm, I'm gonna wonder why they spent the money to be in the, uh, in the conference space altogether,
Drex DeFord | This Week Health: I hope somebody goes around and does the tally of all the booths. You know, we've had this before with big data, and, you know,
Holly Urban: Population health.
Drex DeFord | This Week Health: The buzz with blockchain, the buzzwords of the day. Yeah, it'll be interesting to see. 95% of the booths will be talking about their AI capability.
Bill Russell: And for the ones that aren't?
Holly Urban: They're not talking about their AI capability, they're talking about why they're not talking about their AI capability.
Bill Russell: Alright, well, hey, we're going to January 22nd: Wolters Kluwer survey finds broad presence of unsanctioned AI tools in hospitals and health systems. A great survey, a lot of findings, and we're gonna talk about these findings. I'm just [00:02:00] curious, Holly, are you watching The Pitt at all?
Holly Urban: I'm not. I've heard it's amazing, very true to life, very realistic, but I have not seen the show.
Bill Russell: Well, they have a storyline right now about generative AI in the health system. If people aren't familiar, it's on HBO. The Pitt is, you know, Noah Wyle following healthcare, and they took on a storyline, which is generative AI in healthcare. They have this whole thing where the documentation came up with stuff that didn't even happen. Some surgery that didn't happen, some person who isn't even a practicing physician at the place, and it got documented, and they're following that storyline. This is very real, and it's happening right now in our health systems. Obviously the ambient listening is huge, but shadow AI to me is a really interesting concept.
It's not new to us. We've had shadow IT for [00:03:00] years. I remember when I came into St. Joe's, one of the first phone calls I got was from Dropbox, and they said, hey, you're one of our largest clients on the West Coast. You know what the problem with that was? We weren't a client. It was all these people using it with St. Joe's email addresses, to the point where we had radiologists sharing data with their patients through Dropbox. And I was like, oh my gosh, what just happened? We have something similar going on today with AI. Holly, I'd love it if you could walk us through a little bit of the findings on this.
Holly Urban: Yeah, absolutely. And you're right, this isn't anything new. As healthcare leaders know, people are doing things that leadership maybe isn't quite aware of, but I think the risks are a little bit higher when we think about AI. Shadow AI is the use of AI tools that haven't been vetted. They haven't been [00:04:00] reviewed, there hasn't been a formal governance process or a security review. And one of the things the study found was that there's fairly widespread shadow AI happening in healthcare today. About 17% of providers admit to using these tools, but about 57% of staff say they're aware of people doing this.
And if I were pushed, I would say that 17% is probably low. I think there are a lot of individuals going out and leveraging AI tools. One pertinent example: there was a tumor board at a large healthcare system. A tumor board is where pathologists, oncologists, surgeons, radiologists, everyone gets together to review the findings of a case and come up with the best plan of treatment for that patient. It's often used when there are particular complexities, such as you might find with patients with cancer. Anyway, people were logging into the tumor board with their [00:05:00] personal emails, and somebody sent in one of these bots that does the transcription of the meeting. So the bot transcribed the meeting and then promptly sent the transcription of that meeting, with all the PHI, to every single person on the invite list, which was obviously a flagrant PHI violation. So you can see this isn't malicious shadow AI, it's not nefarious. It's well-meaning people trying to do the right thing, but because these tools haven't been vetted or had a security review, that's leading to some of these compliance risks, and of course patient safety risks.
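[Editor's aside: a minimal version of the control that would have caught the tumor-board incident, filtering a transcript's recipient list down to the organization's own email domain before anything is sent, could be sketched like this. The domain name is a made-up example, not any real health system's.]

```python
# Toy sketch of a send-guard for meeting artifacts that may contain PHI:
# only recipients on the organization's own email domain receive the
# transcript. The approved domain below is hypothetical.

APPROVED_DOMAIN = "example-health.org"

def filter_recipients(invitees: list[str]) -> list[str]:
    """Keep only org-domain addresses; personal emails are dropped."""
    return [
        addr for addr in invitees
        if addr.lower().endswith("@" + APPROVED_DOMAIN)
    ]
```

A real control would live in the meeting platform or data loss prevention layer, but the idea is the same: the bot's distribution list is never trusted as-is.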
Bill Russell: 40% have encountered unauthorized AI tools, roughly 20% have used them, and one in 10 used unauthorized AI for direct patient care. These are just some of the numbers we're pulling out of this. In the [00:06:00] direction of the example she just gave, the transcription of the meeting: Drex, you don't allow us, within our organization, to use those transcription services anymore, for exactly that reason.
Drex DeFord | This Week Health: Yeah, I've talked about it to our team, but I've talked to CIOs across the country too. A lot of them have sort of taken the attitude that these things are almost like viruses. Once you get them embedded into your calendar and into your Zoom calls, or whatever you're using, it's really hard to get them out. Once they've figured out how to get stuck in there and invite themselves to your meetings, it's hard to remove them. And like you said, I don't think anybody does it maliciously. I think they're trying to figure out how to be more efficient, and how to make sure they don't miss something in the meeting, or some to-do or task that gets assigned to them. So they use the AI to help them with that. But it's all the unintended consequences that go along with it. [00:07:00]
Bill Russell: And that was the response from the clinicians when I said, hey, you're using this to share PHI across Dropbox, you can't do that. They said, give me something that I can use. I mean, we can talk about governance, and we will talk about governance around this, but to a certain extent, have we given them the tools they need to do the job, or have we given them substandard tools compared to what they can get themselves from their personal account?
Holly Urban: And I think that hits it on the head. I would think of shadow AI as a signal of unmet need. They're looking to be more efficient, they're looking to get quicker answers, they're looking to leverage AI to answer questions, even for clinical decision support. And maybe they're not even aware that there are sanctioned tools. So it's a signal that says, my staff is looking for things they want to do using this [00:08:00] technology. The other piece I think we all have to acknowledge is that this technology is moving so fast. Healthcare systems are kind of slow-moving organizations, so trying to keep up with governance is really, really tough when you're talking about a technology where new tools are being introduced on basically a weekly basis.
Drex DeFord | This Week Health: We just did a CISO summit, and the conversation there with a few folks was about blocking, blocking sites so folks can't get to them. And that became okay. But blocking is exactly what you're saying: they obviously need it, they want it, they're trying to do something with it, and we don't necessarily understand what that is. Some of it, though, turned out to be that they had an approved version of a GPT that was running just for their health system, and they wound up setting up sort of relays, so that if you tried to log into a particular website, instead of taking you to [00:09:00] that website, it took you to the place where you were actually supposed to be. And the response they got from a lot of the users was, oh, I didn't realize we had this internally, and we could use it. To your point, sometimes end users just aren't informed enough either. We may do our best, but if you don't get the right message at the right time into the right, you know, pocket, they just won't read it, or they won't internalize that, no, we have something for that.
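[Editor's aside: the "relay" approach Drex describes could be sketched, very loosely, as a routing rule that intercepts requests to public AI chat sites and redirects the user to the sanctioned internal instance instead of just blocking. All host names and the internal URL below are hypothetical.]

```python
# Sketch of redirect-instead-of-block: requests to public AI chat hosts
# get a 302 redirect to the health system's own sanctioned assistant.
# Host names and the internal URL are illustrative stand-ins.

PUBLIC_AI_HOSTS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

INTERNAL_ASSISTANT_URL = "https://ai.example-health.org"

def route_request(host: str) -> tuple[int, str]:
    """Return (HTTP status, target URL) for an outbound request.

    Public AI chat sites are 302-redirected to the internal tool;
    everything else passes through unchanged.
    """
    if host.lower() in PUBLIC_AI_HOSTS:
        return 302, INTERNAL_ASSISTANT_URL
    return 200, f"https://{host}"
```

In practice this logic would live in the web proxy or secure web gateway rather than application code; the point of the sketch is that a redirect, unlike a plain block, teaches users that a sanctioned tool exists.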
Holly Urban: , Even if your organization does have policies, not all do. Not broad awareness of it. So you know, maybe they don't know that there's an approved tool or there's a policy around the use of these ais. There's just not awareness,
Bill Russell: Well, here's my phone. There are five AI tools on it, and every single one of them, if I took a picture of this article and said, summarize it, could summarize the article. It could do the same thing with a screenshot of Epic. I know we're not supposed to do that. But, you know,
Holly Urban: Mm-hmm.
Bill Russell: you know, at that point, it's a personal phone on a personal network going out, and the IT organization has sort of [00:10:00] lost its ability to control it.
Let me hit a couple more things in the article. Top concern: patient safety, at 25 to 26%, with privacy and security close behind at 23%. The bottom line is that unsanctioned tools are regulatory exposure and PHI risk, and we're not even talking about the sanctioned tools, where we still have drift, and hallucinations, and all sorts of other things we have to be concerned about. And I will get to governance in a minute. The other part of this story I found interesting was that AI optimism is high: 50% frequently use AI tools, and 90% believe AI will significantly improve healthcare in five years. Is that the reason this is so hard to rein in, because the optimism is so high?
Holly Urban: Absolutely. And as we've already said, it's meeting a need, right? It's helping me be faster. Maybe if I use an unsanctioned but very [00:11:00] efficient scribe that isn't sponsored by my enterprise or my health system, it lets me not have to do my notes when I get home at night. It's reducing pajama time. So I do think one of the primary reasons we're seeing this is that it's allowing people to be efficient. So how can we make sure we work with staff so that we have tools that have been a hundred percent vetted, have gone through a formal approval process, and, as a health system, are gonna be safe and provide high-quality care? How do you balance that? That's really the rub with what we're seeing in terms of shadow AI today.
Bill Russell: The bigger question I'm asking here, before we get into the details, is how do we do this correctly? It feels a little different. We've had governance for years: we have policies and procedures, we have groups that meet, we have groups that oversee things and control the inflow of applications that have AI, those kinds of things. But this feels different to me, and I'm trying to figure out why. [00:12:00] I think it feels different because every one of the tools we currently use is going to throw AI features at us. Every single one. If we just took that ballast of work that has to be done around those tools, to make sure they're meeting our standards and objectives, safety, quality, all those things, that would be enough to keep a governance group busy for a long time. And then we have these other tools, where people are coming in saying, hey, look, we could do this on imaging and it's going to increase our quality and safety. And you sit there and look at these numbers, and it's like, well, we gotta vet those things too. And then you have the problem of the unsanctioned tools as well. I mean, this could be a team of 20 people's full-time job every day. At this point, is this different? I don't know. I'm [00:13:00] saying this feels different to me than things in the past. Do you guys feel that way? Or maybe I'm overblowing this a little bit.
Holly Urban: I mean, to the degree that, like I said, there are new AI applications being introduced in healthcare on a weekly basis, really, I think you're right. There's just so much out there that clinicians and other clinical folks can use that it's really hard to keep on top of. That's why you need governance that also has policies everyone's aware of, so that they can say, oh, I really shouldn't be using these tools until they've been sanctioned by my governance processes. The other piece that I think a lot about is that patient safety angle, because I think we're gonna start to see a little bit of backlash when it makes mistakes. If you're using an unsanctioned tool, you don't know where the content, the evidence, whatever it's drawing its information from, comes from. You can [00:14:00] get mistakes, and then that puts a lot of clinical risk and patient safety implications on your individual clinicians as well as on you as a health system.
So one example I talk about is making sure it's grounded in good-quality evidence, and that it has some guardrails. If you pull up your favorite LLM and you ask it, okay, I have a patient with a complicated urinary tract infection, but I wanna manage it in the outpatient setting of care, not the inpatient setting, it'll tell you to use a fluoroquinolone. Which is great, unless the patient is pregnant, and then that's actually a big patient safety risk in terms of harm to the fetus. So if the LLM doesn't have the context of whether or not the patient is pregnant, it's just opening you up to all kinds of risks. Those are the kinds of things I think we really need to be cognizant of as an industry. We do not want our patients to be put at risk.
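[Editor's aside: the guardrail Holly describes, checking patient context before a drug suggestion is surfaced, could be sketched roughly as below. The contraindication table is a toy, illustrative stand-in, not clinical guidance.]

```python
# Simplified illustration of a context guardrail: before an AI suggestion
# like "fluoroquinolone for complicated UTI" reaches a clinician, check it
# against known patient-context flags. The table is a toy example only.

# Toy table: drug class -> context flags that should block the suggestion
CONTRAINDICATIONS = {
    "fluoroquinolone": {"pregnant"},  # fluoroquinolones carry fetal risk
}

def check_suggestion(drug_class: str, patient_flags: set[str]) -> str:
    """Return the suggestion if safe, or a block notice if context forbids it."""
    blocked_by = CONTRAINDICATIONS.get(drug_class, set()) & patient_flags
    if blocked_by:
        return f"BLOCKED: {drug_class} contraindicated ({', '.join(sorted(blocked_by))})"
    return f"OK: {drug_class}"
```

The design point is the one Holly makes: the check only works if the system actually has the patient context, which a general-purpose chatbot queried in isolation does not.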
Bill Russell: Drex, I'm going to give you a statement that was said at one of our city tour dinners, [00:15:00] and it was a CIO who said, I'm just turning on all the Epic AI tools by default, because they're a trusted vendor. And that created a pretty wild back and forth at the table, because there were some CMOs there, some clinicians, who were like, no, you can't do that. And there were other people who were like, you know, where do you draw the line? This is a trusted partner, they're already in our system, let's go ahead and turn those things on. What's your response when you hear that statement?
Drex DeFord | This Week Health: I mean, I think there's a continuum here, right? Of partners that you use, applications that you use, and the things they're rolling out that may need some review and oversight. But like you said, you're coming into this with a partner that you already know and trust, so maybe the oversight is lessened a little bit and you allow those tools to run. [00:16:00]
Bill Russell: Whereas if some VC startup just walks in the door, you're gonna vet them a little more.
Drex DeFord | This Week Health: Right. But all the way on the other end, the thing I was thinking about was, you know, the good old, bad old days, when somebody would build their own database in the X, Y, Z department, and the whole department would become completely reliant on this database that nobody in information services knew about. And then that person would move away, the database would break, and they would call somebody in information services. So I think you've got those AI agents and applications that are running inside of vendors that we know, and you've got these unsanctioned applications coming in from the outside that are very helpful.
But I also think there are tons of builder tools now, and you find people inside the organization who are building [00:17:00] their own things. That's another really interesting angle on all of this: that version of shadow AI can very easily turn into an application custom-built just for the pharmacy or just for RevCycle.
Bill Russell: Holly, as a clinician, I'm curious. ChatGPT is out there sort of presenting itself as a healthcare tool, a trusted healthcare tool.
Drex DeFord | This Week Health: GPT for health or GPT Healthcare.
Holly Urban: Yeah.
Bill Russell: and, you know, I'm wondering if that marketing, uh, I mean my experience with, with, uh, health system, uh. Physicians and clinicians as they're looking at that, like, that's insane.
I can't believe they're out there doing that. Not that it can't come up with the right answer, but it doesn't have the context for the complete context to come up with the right answer. And, but they're telling patients that, uh, to, to, to go ahead and use it. And we have probably some clinicians using it as well on, on the other side.
I mean, what I, what do you tell patients? What do you tell clinicians who are [00:18:00] using chat GPT, like. Hey, time out. Like what? You know, you can't validate what, how it came up with that answer. Where it came up, what was the source? Nothing. You can't validate almost any of it.
Holly Urban: I will say, though, there's sort of the promise, and then the risk. The promise is, could these AI models actually help patients understand their illnesses, understand their treatment courses, and be more empowered? If it's a means to a more empowered patient, I'm all in on that. At the same time, the information they're getting has to be legitimate, and it has to be based on reputable sources. If it's just going out to the wild world of the internet and finding gosh knows what from gosh knows where, then it's almost doing the patient a disservice, because they're not getting the information they need in a way that can be trusted. So I'm all in if it's gonna empower patients. I just wanna make sure it's grounded in a way that they're getting the right answers, and not just fluff, or misinformation at worst.
Bill Russell: I realize I'm part of the [00:19:00] problem. I do have concierge medicine, and I can call my doctor pretty much 24/7, 365, which I believe is the way healthcare should be for everybody, quite frankly, if we could figure out how to get there. But the second alternative to that, when I don't feel like calling him, is I go to Claude and I say, I'm feeling this and this, whatever, and it gives me information. Sometimes that's comforting. And actually, I will give it credit: a fair number of times it will say, you should talk to a physician. If your blood pressure's elevated, you should talk to your doctor. It's like every third sentence is "you should."
Drex DeFord | This Week Health: If this is a medical emergency, please hang up and dial nine...
Bill Russell: One one. But it's always there, it's always available, and you can have a conversation with it. And I think that's the thing we're going to struggle with for quite some time: where do these tools fit? And the liability [00:20:00] framework around it has not been set yet.
Eventually we'll have some lawsuits going in the direction of health systems, and some lawsuits going in the direction of maybe some of these foundation models, and we'll have the courts weigh in on where this goes. But until then, to be quite frank with you, I'm pretty happy I have this tool available to me as a patient. I don't know whether it's right or wrong, but I do like having it.
Drex DeFord | This Week Health: I think patients are gonna pick this stuff up faster than a lot of health systems too, and this is gonna be another one of those gaps where patients are gonna feel like, why is my health system so far behind? Is that part of what you think is the pressure health systems are feeling around all this?
Holly Urban: The adoption is so high among clinicians, and among patients as well, so there's sort of a forcing function there. You gotta get on this bandwagon, because otherwise it's gonna run you over. But again, in healthcare, our risk profile is just different. The [00:21:00] tolerance for being wrong is so much smaller than it is elsewhere. If I ask Gemini for a recipe and it's wrong, it's okay, maybe I get a bad dish. But if we do that in healthcare and it's wrong, you can harm a patient. So I just think the threshold is quite a bit different.
Bill Russell: I mean, some of the protections of me using it are that if it says, hey, you should be on this drug, I can't act on that myself. I've gotta go talk to somebody: hey, I think I should be on this drug. And they go, well, why do you think you should be on this drug? Well, my doctor, Dr. GPT, told me this is the drug I should be on. And they go, well, let's take a look at some of your blood work first before we make that determination.
Holly Urban: That's right.
Bill Russell: It's good stuff. Holly, I wanna thank you for your time. This was a good and very timely conversation. I appreciate you coming on the show.
Holly Urban: Well, I'm very happy to be here. Thank you so much for the [00:22:00] opportunity. I enjoyed chatting with you.
That's Newsday. Stay informed between episodes with our Daily Insights email. And remember, every healthcare leader needs a community they can lean on and learn from. Subscribe at thisweekhealth.com/subscribe. Thanks for listening. That's all for now.