This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

The 229 Podcast: Inside Stanford Medicine’s AI Sandbox With Michael Pfeffer, MD

Bill Russell: [00:00:00] Today on the 229 Podcast.

Michael Pfeffer: that's ultimately when you boil this whole thing down, that's the only thing that matters, that the best decisions are being made for our patients in real time.

The best evidence

Bill Russell: My name is Bill Russell. I'm a former health system CIO and creator of This Week Health, where our mission is to transform healthcare one connection at a time. Welcome to the 229 Podcast, where we continue the conversations happening at our events with the leaders who are shaping healthcare.

Let's jump into today's conversation.

All right. It's the 229 Podcast, where we continue the conversations that started in our 229 rooms. And today I'm joined by Dr. Michael Pfeffer, Chief Information and Digital Officer of Stanford Medicine. Mike, how's it going?

Michael Pfeffer: It's great, Bill. It's always good to be here.

Part of the 229. [00:01:00] How are you?

Bill Russell: Good. You know, I always joke with people that the best 10 minutes are the 10 minutes I don't record, like the 10 minutes before we hit record and the 10 minutes after we stop the recording at the end. We're gonna talk a lot about AI.

AI is one of those things that's been really interesting to me, because if you thought the internet never forgot anything, AI proved that the internet never forgets anything. It scours into the deep corners and pulls things out. And I was just telling you that in my research, AI keeps telling me, oh, you gotta ask Mike about being a soccer referee.

Right. You're like, we had a whole great

Michael Pfeffer: discussion about that.

Bill Russell: Yeah. It's like, soccer referee? Where is it even getting that from? And then as we start talking about it, you're like, oh yeah, you did use that. So just out of curiosity, how long were you a soccer referee?

Michael Pfeffer: Oh, it was a long time ago.

I was probably a soccer referee for like six or seven years. Yeah. Wow, that was a long time ago. Much younger [00:02:00] Mike, yeah, when Mike could run up and down a soccer field for the whole weekend.

Bill Russell: Well, my question for this was gonna be, you can't possibly still be doing this. Like, you can't be leaving work and going over to referee soccer games.

I can't imagine with all the stuff you have going on, I don't

Michael Pfeffer: anymore, but I do like watching soccer, or football, whatever you want to call it. I think it's such an amazing game, and so I really enjoyed it, you know, growing up, making a little cash on the side. It's funny, because a little bit of the geeky me back in the day used some of that money to buy Microsoft Word, when it came in a box with floppy disks, and I was all excited 'cause I could use this new word processor on my ancient computer.

Bill Russell: I miss those days. And to be honest with you, as a CIO, I really missed those days when things went to that annual pricing instead of the box. It's really challenging not to stereotype things, but it's been a while since we've talked, and I'm gonna focus really in on [00:03:00] AI.

It used to be we'd apologize for talking about AI so much, but I think it has pretty much started to prove that it has serious, practical application. In fact, that's been one of your topics for this year. As I'm going back through Becker's and some other things that you've done, you talked about AI made practical and whatnot.

You highlighted ambient listening, generative AI, and computer vision as levers. I'm just curious, from what you're seeing at Stanford, what's crossed the line from promising to practical at this point?

Michael Pfeffer: Yeah, I mean, of course we have to talk about AI, Bill. I mean, that's what everybody's talking about.

But, you know, I like to think of it a little bit more as, AI is one of the tools in our tool belt that we didn't have before in the same way, right? We kind of always had some form of AI. We were predicting [00:04:00] things. We had rule-based artificial intelligence.

So it's not like AI and medicine haven't been good friends for a while, but now we've got a super advanced version of AI capabilities that is really exciting and changing the way we think about how we do things, which is fun, because we've always been trying to think about how you do things differently in the informatics world.

And this is just such a great tool to rethink a lot of the ways we have done things in medicine. But, you know, the big picture is, it's just accessible. People get it, people get to play with it, people get to use it in different ways, learn about it, write fun prompts, dig through the internet in ways they've never done before.

Find dirt on people you're gonna interview, stuff like that. It's just available, and that's what makes it most fun. So when you're having discussions about how to solve problems, what can [00:05:00] we do better, whatever, this is starting to enter into people's thinking: well, we could do it this way using AI. And that's what I think is most transformative here.

Bill Russell: I was reading some other stuff about, you know, the green button. You did a green button interview and talked about that a little bit. For those who don't know, give us a little rundown of what the green button is. It's not the red button.

It's not the blue button. It's the green button.

Michael Pfeffer: Yeah. So that is a great question. I mean, that was an idea that came about at Stanford a while ago, before I was here, to really think about how you use data in the electronic health record to better determine what the next step of care is where there aren't necessarily evidence-based guidelines.

'Cause there aren't evidence-based guidelines or randomized controlled trials for most decisions you have to make in medicine. And so that grew out of this idea of the green button, which then led to a [00:06:00] company spinning off. And we use that here in a clinical informatics consult service, where you can ask a question, and clinicians will go and actually review the data and bring back what's called a prognostogram.

So that definitely is a uniquely Stanford thing, which is out there and very exciting. But it really was one of the first of its kind to leverage data in the electronic health record to make decisions on patients you're caring for at the time. So that was really cool, and you can imagine how that's growing to be more and more automated, more and more integrated.

We're actually doing pilots around that in our primary care clinics to bring evidence in real time to our clinicians.

Bill Russell: 10 years ago, though, man, 10 years ago. I'm thinking about the technology that was available, and now we have been on an AI journey for the better part of 30-some-odd years, probably longer than [00:07:00] that if I thought about it.

But 10 years ago, the technology around that had to be fairly, I don't wanna use the word primitive, but early on in the process of being able to pull all that information together. It was almost like search and find and use NLP. And I would assume that has progressed pretty rapidly over the last three years.

Michael Pfeffer: Yeah, for sure. I mean, large language models are really good at summarization, categorization, generating new text, pictures, whatever. So when you're trying to look through medical records, which are huge amounts of unstructured data plus the structured data, these are just great tools to do that.

And there's a lot to learn still on how these perform. But fundamentally, this is just such an amazing set of tools to look into unstructured data in ways we haven't been able to before.

Bill Russell: Epic, in their, I don't know what [00:08:00] to call 'em, I always call 'em skits, where they do the role-playing kind of thing and here's what the future's gonna look like.

It looks an awful lot like this from 10 years ago, right? So they were highlighting what Cosmos would look like and how it would find the evidence, bring these things together, and even look at different population sets, all in real time. Now, they were talking about that in the future.

And I think people sort of think this already exists within Epic. I mean, does it exist or are there still some barriers that are keeping us from just making it a green button that's there for everybody?

Michael Pfeffer: Yeah. Some of it does. It still has, I think, some ways to go. But instead of having to do it with older technologies, now, with large language model capabilities and the models that they're building, you can harness the data at an entirely different level.

And I think part of the initial thinking around this was for particular disease [00:09:00] states, so diabetes, hypertension: here are diabetics that look like this patient, what medicine would be the best medicine for them?

But you can now think about how you could expand that much quicker across many disease states. So I think that's what's really exciting about it, and then bringing the tools right to the clinicians in real time is also really exciting. A lot of exciting stuff happening.

And I think we're gonna continue to see these kinds of things iterate.

Bill Russell: I want to slice the population a little bit here with ambient. Everybody always asks about ambient, and I find the questions to be a little too generic. The question I wanna ask you is this: you're pretty far along in your ambient journey at Stanford, which we've talked about for a while now.

Which clinicians have benefited the most or the least from ambient listening?

Michael Pfeffer: It's a great question. I think it's less about groups of clinicians and more about the individual clinician and how they use the [00:10:00] system. You know, lots of clinicians love this. It's really good for an initial consult or an initial H&P, where you're seeing the patient for the first time.

Where I think it has a little more room to grow is around follow-up visits, where you already have a note, you already have information about the patient, and you don't necessarily wanna just start from scratch.

And, you know, for highly specialized clinicians, some of them have gotten really good with the templates that they've built over the years, so do you really even need ambient scribes? So it's interesting to see. I think it varies across the board.

Bill Russell: It really is individual, how they practice and how they have used the technology before. And that's interesting, because I've heard from a lot of systems. As we travel around the country, I'll ask people, and everyone touts, we're doing AI, we're doing ambient scribes.

I'm like, okay, everybody's doing ambient scribes. So, you [00:11:00] know, what percentage are you at? And there seems to be this top-end barrier of about 60 to 70%. And the other 30% is almost a case of, they don't want it. Like, I'm faster with my keystrokes, or it's just not practical for the way I practice.

Michael Pfeffer: Absolutely. And I think that's one of the nice things about it: it's an option, right? It's another tool that clinicians can use to take care of patients. And if it works really well, great. If it doesn't work well in that particular practice, great. You don't have to use it.

In fact, it may work well for certain kinds of patients that you're gonna use it for, and other kinds of patients you don't use it for. And I think that level of flexibility is really important, that real personalized experience, and you can begin to customize how these things work. So how do you like it to draft your note? Do you like bullets? Do you like it by problem? How do you like these things?

So as you get more and more customized, and these tools continue to [00:12:00] grow in capabilities, I think you'll see more and more of that ceiling increase. And again, it may be that 90% of clinicians are using it, but for 70% of visits, right?

I mean, we'll have to see where this all goes. But remember, this is like version one. I mean, many tools have been version one for like 20 years, right?

Bill Russell: I was gonna say, this is not version one. What I rolled out back in 2012, that was version one, and that was a little painful.

Michael Pfeffer: Right. So this is version one of ambient scribes. Big version two is gonna be amazing. Big version three, it's just gonna get better and better. And you can imagine that things are gonna be integrated into that ambient voice workflow, and then add video to that. It's gonna come together in a really amazing way.

Bill Russell: I do want to talk to you about video. I got in trouble recently, and you've seen my style. [00:13:00] Well, actually, multiple health systems were sort of touting, hey, we're doing ambient listening, computer vision,

AI models in the patient room. And when I pushed them on their use cases, I'm like, you could do that with Zoom. Why did you buy an AI platform for that? It was like, well, you know, we haven't implemented any AI around that yet; we want to start with this. And as you can imagine, that got me in trouble.

'Cause they're kind of touting, you know, we have this AI platform, but I'm like, all you really have is a computer and a microphone in a room, connected to a TV. And let's call that version 0.5 of where we're going with this. How does the experience with ambient listening inform what the future of the patient room looks like, with cameras, voice, ambient?

I mean, have you learned anything in terms of the [00:14:00] adoption and the use of standard ambient listening that's gonna inform how we progress with the more sophisticated room of the future, if you will?

Michael Pfeffer: Yeah. They definitely go together, right? I think the room of the future is gonna have computer vision and ambient voice interacting together in a way.

And it's gonna be for the clinicians and staff that work in the room, and the patient, and the family, and the whole experience. It's all gonna ultimately come together in some way. And I would say that as we think about all these different technologies, we really wanna get back to, what problem are we trying to solve, with what outcome, right?

Because ultimately that's what's most important. You could put algorithms in everywhere, but if it's not really solving a particular problem or moving the needle on an outcome that's really important, then what are you really doing? And I think that's what we've really tried to do here at Stanford: [00:15:00]

measure that, understand from the beginning, this is what we're trying to solve and here's the outcome we're trying to move, and let's make sure we actually do that. And sometimes it doesn't even involve AI, right? So as I think about all of these things, I wanna make sure that we're using the right tools and the right solutions to fix the problems and move the outcomes that we want.

And with that really being the north star, I think you'll drive value out of all of these technologies.

Bill Russell: That phrase, uniquely Stanford, has come up a bunch in the research; it comes out of your mouth a fair amount. I mean, what does it mean to be uniquely Stanford?

Michael Pfeffer: Well, I mean, there's an incredible culture here of innovation. It really permeates everything, from, you know, obviously Stanford University to the School of Medicine to all of our health system enterprise, adult and children's. And it is just such an amazing [00:16:00] culture, and it's hard to explain until you kind of feel it and live it.

But it really allows us to think big and do things that hopefully, again, solve problems, move outcomes in the right direction, and then can be taken and disseminated across the world. That's kind of the uniquely Stanford.

Bill Russell: Think big is a, well, actually, that's a gift.

'Cause I know a lot of CIOs that are struggling to get their organizations to think big, and you don't have to worry about thinking big. But that's a double-edged sword too, right? So it's like, hey, we're doing research, we need access to this, this, and this. And, I don't wanna go too far into this, but we've talked about the sandbox that you guys had to develop to meet the needs of researchers.

I wouldn't mind you talking about it a little bit, because I reference this a lot as I'm out there, because it's unique. Maybe [00:17:00] not specifically to Stanford, but it's very rare within healthcare to have this kind of sandbox environment. I'd love for you to talk about it a little bit.

Michael Pfeffer: Yeah. Well, you know, there's a lot of talk about learning health systems, and I think this is a really great example of one of the ways you can be a learning health system. And so, early on in the launch of large language models, we decided that we were going to create a place for everyone in Stanford Medicine to go that's secure and has a couple of different models to choose from, to start to play and learn and see how these things work.

And we called it SecureGPT. That creative name. Right.

Bill Russell: That's so creative.

Michael Pfeffer: we put Stanford in front of it so like, you know, it got more creative but secureGPT and it incredible work by the team. It was like we sat down and talked about this and they're like my CTO Christian Lindmark who's amazing is like, well, we can [00:18:00] probably do this in six months.

And I'm like, how about six weeks? And he's like, okay. And six weeks later we had it. Since then, we now have over 18 models in it. But what's really cool about it is we get to learn what people are using it for. So again, using models, we can look at the prompts anonymously and say, okay, let's categorize them into different things and see what people are doing.

And we learned a lot about, okay, what kind of automations do we want to build for the organization based on what we learned in the portal. And then we also have APIs into the secure portal, so researchers can use it to do things. And one of my favorite examples is, we had this lab that was recording these very complex patient interactions. They're doing really amazing research in the mental health space.

And the question was, can we somehow use SecureGPT to help us? And we had added an audio [00:19:00] tool in there. So what they did was they recorded these conversations, which took them hours and hours to transcribe and summarize, and they just uploaded the voice file.

It transcribed it in a beautiful way. It summarized. And it's just that kind of excitement of, oh my God, this is amazing, it's saving me so much time. So we learned from what was going on, and that, in combination with the research enterprise in the School of Medicine and our chief data scientist, Nigam Shah,

really came to say, well, let's bring the clinical record together with the large language models to do some of the things people were doing in SecureGPT in an easier way, right? You don't have to move things around. It's just right there. And that's how ChatEHR was born.
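The lab workflow described here (record a session, upload the voice file, get back a transcript and a summary) can be sketched as a simple pipeline. The function names are illustrative, not SecureGPT's actual API; the model calls are passed in so the sketch stays self-contained:

```python
def process_session(audio_bytes, transcribe, summarize):
    """Turn a recorded patient interaction into a transcript plus summary.

    `transcribe` and `summarize` stand in for model calls behind a secure
    portal; they are injected as callables so the pipeline itself can be
    exercised without any external service.
    """
    transcript = transcribe(audio_bytes)   # speech-to-text step
    summary = summarize(transcript)        # LLM summarization step
    return {"transcript": transcript, "summary": summary}
```

In a real deployment the two callables would hit the portal's speech-to-text and chat endpoints; the point of the sketch is that hours of manual transcription collapse into one chained call.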

Bill Russell: And is...

Michael Pfeffer: So those are just examples, but we keep learning from this.

We keep learning from what people are doing and how they're thinking, and then we can develop [00:20:00] AI-based automations and tools and products that match those needs.
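The learning loop described above, classifying anonymized prompts to decide which automations to build, can be made concrete with a small sketch. A real system would likely use an LLM as the classifier; the category names and keyword rules below are invented purely for illustration:

```python
# Invented, coarse categories keyed to illustrative keywords.
CATEGORY_KEYWORDS = {
    "summarization": ("summarize", "summary", "tl;dr"),
    "drafting": ("draft", "write a letter", "compose"),
    "coding": ("python", "sql", "function"),
}

def categorize(prompt):
    """Assign an anonymized prompt to the first matching category."""
    text = prompt.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

def usage_counts(prompts):
    """Tally categories across a batch of prompts to spot automation targets."""
    counts = {}
    for prompt in prompts:
        category = categorize(prompt)
        counts[category] = counts.get(category, 0) + 1
    return counts
```

The tally is the actionable part: a spike in one category suggests which workflow is worth turning into a dedicated tool.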

Bill Russell: Is ChatEHR still in sort of a pilot phase, or is that rolled out?

Michael Pfeffer: It is live, yeah. No, fully. We did pilot it for some time, because it's important that we learned that it works and can scale. But it's been live since September to all of our physicians, residents, and APPs.

Bill Russell: And the number one question people want to ask is, how do you make sure it's accurate? Right? So it's going through massive amounts of unstructured data, and it's coming back with insights, I assume, based on the medical record.

How do you, I mean, that's obviously why it was in pilot for so long, right, to try to do that. How did you get past that? I mean, how did you figure that out?

Michael Pfeffer: Yeah, well, a lot of work in iteration and monitoring of the system. And obviously we limit it to just that one patient's record, in a very secure way.

So [00:21:00] first, you have to be taking care of that patient in the chart, just like all of our typical security and privacy measures, to get access to the ChatEHR tab, where you can then ask questions about the patient record. But of course, it's just the data from that patient that it's allowed to return.

And so that significantly reduces the hallucinations. But we continue to monitor it, using models to monitor it. We have a whole framework called MedHELM, which you can Google, and there are some really interesting things about that to help really understand how the large language models are performing.

And we get feedback. So with every response you can give a thumbs up or thumbs down, and we iterate on that.

Bill Russell: And it's Care Everywhere? It's pulling from the various HIEs, anything that's out there with regard to this patient?

Michael Pfeffer: Yep, yep. And so, it's been really amazing to see this in action and have people just love it and [00:22:00] find things that they maybe wouldn't have found. It saves time. You can do summarization, you can ask questions about trends, just what a large language model would do.

Bill Russell: Yeah. It could find out that you're a soccer referee, and maybe, you know, it would explain some injury that you have.

I have no idea. Right.

Michael Pfeffer: If it's in the record. Uh, But um, for sure. And so it's been really exciting. Of course, we're iterating on. What it can do. But the other piece of it, which I think is equally as exciting as the user interface in the EHR, is the ability to then run very sophisticated analytics.

You know, so instead of just rules-based, you can have a huge set of criteria, right, and run that against the electronic health record data in real time, and then produce output that you can put back into the workflow. So an example of that is, we have a wing of a hospital nearby that's staffed by us. [00:23:00]

And academic medical centers are very full, so we wanna make sure patients can get beds in a timely manner. And so we can actually send patients from our emergency room here in Palo Alto over to this other wing in Redwood City. And so how do you know which patients are eligible for that transfer?

Well, you have to meet a whole set of criteria, because we don't do all of the things there that we do on the main campus, which is, you know, how lots of health systems work. And so we took that large set of criteria, and we can run it against every patient in the emergency department and then flag the ones that meet the criteria, instead of having people do manual chart review to find these patients.

So imagine trying to do that before; you couldn't. Now you can. And so imagine all of those kinds of automations that you can do in real time, using very complex sets of criteria against [00:24:00] patients in the electronic health record.
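The transfer screen described here can be sketched as a set of predicates run over every ED patient. The criteria below are invented examples, not Stanford's actual transfer rules, and in practice some checks would be LLM evaluations of unstructured notes rather than simple field tests:

```python
# Illustrative eligibility rules; real criteria would be clinically defined.
TRANSFER_CRITERIA = [
    lambda p: p["age"] >= 18,                           # adults only
    lambda p: not p["needs_icu"],                       # no ICU-level care
    lambda p: p["service"] in {"medicine", "surgery"},  # supported services
]

def eligible_for_transfer(ed_patients):
    """Flag the IDs of ED patients meeting every transfer criterion."""
    return [
        p["id"]
        for p in ed_patients
        if all(rule(p) for rule in TRANSFER_CRITERIA)
    ]
```

Structuring each criterion as its own predicate is what lets the list grow to the "very complex" sets mentioned above without changing the screening loop.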

Bill Russell: I don't know how relevant a question this is for our audience, but it is for me.

So I'm curious, have you learned anything about the models? Like which ones get used the most, and for what tasks, those kinds of things?

Michael Pfeffer: We sure do. And MedHELM actually shows you how models perform for different healthcare tasks. And it's really interesting to see how different models perform in different ways.

But yes, I mean, people are using different models for different things, and they do perform in different ways. So that is a very important thing for people to understand. Not every model's the same, right? They all perform different tasks in different ways, some a lot better than others.

Bill Russell: And there's no one model that you would say, yeah, just use this model. I mean, just like people, there are things that they specialize in, that they're really good at.

Michael Pfeffer: And the prompting for that model. So it's a combination of a couple of different things to [00:25:00] get to where you need to be.

And so, yeah, it's really interesting. I mean, you have different models to choose from, and then you can set the temperature on those models, and then you can prompt them in different ways. And so there's a lot of creativity that can actually occur in this space, which I think is really exciting.
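One way to picture that combination of model choice, temperature, and prompting is a small per-task registry. The model names and settings below are placeholders, not the portal's actual configuration:

```python
# Conservative default: deterministic output from a placeholder model.
DEFAULT_SETTINGS = {"model": "model-a", "temperature": 0.0}

# Hypothetical per-task profiles: low temperature for faithful
# summarization, higher temperature for open-ended brainstorming.
TASK_SETTINGS = {
    "summarization": {"model": "model-a", "temperature": 0.2},
    "brainstorming": {"model": "model-b", "temperature": 0.8},
}

def settings_for(task):
    """Look up model settings for a task, falling back to the default."""
    return TASK_SETTINGS.get(task, DEFAULT_SETTINGS)
```

Falling back to the most conservative profile for unknown tasks mirrors the safety-first posture described throughout this conversation.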

It does require training. So part of what you have to do to get access to ChatEHR is take a training course, which talks about how it works, what its limitations are, how to prompt, et cetera. So part of what's exciting about the tool is that people actually also learn how to use these models.

Yeah, so that's also I think a lot of fun.

Bill Russell: Are you still practicing, or has anything changed there?

Michael Pfeffer: Still practicing.

Bill Russell: All right. I want the user perspective now, not the strategic CIO guy that shows up on the show. From a user perspective, what [00:26:00] technology or tool has changed your shift?

Michael Pfeffer: ChatEHR

Bill Russell: Really. So you're using it in your...

Michael Pfeffer: Practice, yeah. Yeah. I mean, definitely, if you're picking up a set of 20 patients on service, you can ask questions of each chart and really get to know the patients really well, as opposed to trying to read hundreds of notes or whatever.

If you're, you know, admitting a patient for the first time, it's really helpful to dive into the record and make sure you pick up all the details that you need. So yeah, ChatEHR has been really game changing. Discharge summary writing, it will summarize the hospital course better than I've ever seen anybody do it.

So that's kind of exciting. And I think that's been really fun. And, you know, I think the evidence-based tools now that are powered by large language models, where you can ask questions and it gives you back the evidence on how to take care of a patient, I think are really game changing as well.

So I kind of teach the residents now, and [00:27:00] in the past it's always been, how much can you memorize? And I think we're all shifting to this idea that you obviously need to have a knowledge base as a physician, that's undeniable. You can't just be going to a large language model asking for things.

But there are so many nuances, and so much new evidence generated, that using these technologies, which provide evidence quickly, is something that I'm a huge advocate for, to make sure we're doing the right thing every time for every patient. Because that's ultimately, when you boil this whole thing down, the only thing that matters: that the best decisions are being made for our patients in real time, with the best evidence.

Bill Russell: We're having conversations with a lot of other health systems. They're a little skittish on AI in the clinical setting. Now, we've talked a lot about how you're mitigating those risks and whatnot, but you have the resources of Stanford, and a lot don't.

So they're gonna rely heavily on partners and [00:28:00] vendors. We're seeing AI, obviously, in ambient listening, and I think the imaging space is another one where we're seeing a lot. But everyone seems to be bullish in areas like the call center, service desk, and revenue cycle. I'm wondering if you're seeing and experiencing those same opportunities, and how you're approaching them.

Michael Pfeffer: Yeah, absolutely. I mean, automation of our support services is a huge opportunity. So we're doing a lot of work on revenue cycle. We have a great partnership with our chief patient experience officer around the call centers, and we're looking at new omnichannel capabilities, obviously with AI agents that can do some of the work.

And I think part of it is also, how can we free up humans so that when humans need to talk to humans, they can? I mean, it's not getting rid of that; it's actually enhancing that capability, and moving all the other things [00:29:00] that really could be automated and handled with AI and agentic AI to where that needs to be.

So we're absolutely looking at all of those areas, and I think that's where you'll see a lot of value from AI in healthcare initially, because it's so ripe now. The big question in my mind is, are we going to automate the same processes we have today, or are we going to be able to do new things?

Right. And sure, we could automate prior authorizations, and the insurance company will automate rejections of prior authorizations or approvals of prior authorizations, and this goes back and forth and back and forth. But if you kind of go back to, what's the purpose of a prior authorization?

I mean, I'm biased as a physician. I think they're just such a huge administrative burden for not much value. But anyway, let's assume they have some value, okay? It's really about doing the right thing for the patient at the [00:30:00] right time. So we're back to this idea of precision medicine, precision health: how best to do that, provide evidence in real time, make sure we're ordering the right scan at the right time.

You know, all of these things can be enabled by AI-based decision support. If you remember, a few years ago there was this whole big push to put in radiology decision support that was being mandated by the government, where you had an order and you had to explain why, and all of these things.

And that ended up like disappearing, which I think in hindsight was pretty obvious because it's too hard to do that with rules-based decision support, right? It's too hard. But with ai. And , the ability to really move this in the right direction. I think there's, it's not like blocking you, but it's just now kind of helping you synthesize, well, what is the best imaging to order?

Is it that MRI, or is it that CT scan? Right. And so I think if we kind of focus on that [00:31:00] instead of how we automate prior authorizations, we're gonna actually make a difference for patients. 'Cause I'm gonna keep going back to that, because that's what it's all about.

Right? Right. The most important thing is that the right test is ordered for the patient at the right time. Not delayed, not, you know, we gotta get this test first before we can do it, but the right test based on the evidence, given the first time. If we get there, then you don't need prior authorizations.

Bill Russell: There are problems we're gonna be able to solve within the four walls of the health system. I mean, not that those are easy, but they're easier. And then there's things that cross the boundary: payer, provider, those kinds of things. To a certain extent you can influence that, depending on, you know, the scale of your health system and the market and those kinds of things.

And some of that is going to require some redoing of the models. One of the people I follow is Aaron Levie, the [00:32:00] Box CEO. Oh, he's great. He writes a lot of really thought-provoking things. And one of the things he's really keen on talking about now is that people are all worried about, hey, we're gonna replace all these jobs.

And he's like, there are absolutely some jobs that will not exist a couple years from now. However, he goes, we will be doing things with this technology that we never did before. Like, hey, we're gonna go through all the images we've done for the last 10 years and do a secondary read on all of those images, looking for something specific, which now is just: fire up the computers and away you go.

Whereas before it would've been, well, we couldn't possibly hire all the people to do those kinds of things. And he just keeps putting these use cases out there. Now, obviously Box is a storage company. He's like, people are scanning all these files that they never had enough manpower to get through.

Now they have it. It's like having a hundred interns to just, you know, go do this work. It gives us a chance to [00:33:00] rethink things. But within the four walls, it's easier to rethink things. As you get out, that's where we really have to exercise new muscles: it's policy, it's politics, it's people, it's skills.

You know, for a lot of CIOs, we were just learning those skills within the four walls of the health system. Now we're thinking across the community and potentially across the nation.

Michael Pfeffer: Yeah. I'm optimistic. I really am. And, you know, I think we kind of have to be, because these things have to change.

Healthcare is, you know, expensive. It's challenging. Being a patient is not easy. And I really believe that people go into healthcare because they want to help people live their full lives. So ultimately this stuff is gonna come together. And I would say informatics over the last 20 years has always had that mission, but has been hampered [00:34:00] by not having the technology to handle the kinds of data sets that medicine is based on.

Back in the day, when we put in the electronic health record, there was a way to configure it so you could click a whole bunch of buttons to generate the history and physical and all this stuff. And I was just completely against it. My thinking was, it doesn't tell a patient's story at all.

It's just a bunch of pre-generated text, and you lost the whole kind of feeling about the encounter and what the patient's trying to tell you. And so we didn't implement that. We stuck with old-school dictation, or you could type or create a template or whatever. And I was always dreaming of the time when the technology would capture the patient's story for you. And we're here.

Right. [00:35:00] We never got discrete, documented notes by pushing a whole bunch of buttons. It never happened, because clinicians wanted the patient's story, and man, notes that were pre-generated by just clicking boxes were awful to read. You couldn't even remember them, 'cause they all looked the same.

So now here we are with the technology available to capture the patient's story, the patient's words, in a way more accurate way than probably we've ever done in the past. That's just incredibly exciting. So imagine five years from now, all of those notes that have been generated by ambient, with the patient's story at the center, with much more accurate information: what are we gonna learn from those notes?

Right? And that's really exciting to me as well. So, you know, that's why I'm very optimistic about this. Because I think ultimately we're all gonna come together, both inside the health systems, obviously a lot of work going [00:36:00] on there, but also across health systems and payers.

'cause it's the right thing to do. And now we have the tools to kind of do that. So

Bill Russell: I should end here, but I'm not going to. A good podcaster would end right here, because that was such a great close. But I'm gonna go into a really unsexy area, and I want to just brainstorm a little bit with you on plumbing.

So there's an awful lot of people struggling out there with AI from a plumbing standpoint. We talked about data, so you have fairly clean data that you're working with. You talked about a sandbox of models. You have privacy and security of the data. You talked about monitoring the use of those models.

You probably had governance over the selection of those models, I would imagine, as well. But what else am I missing here? I'm trying to piece this together for somebody who's sitting there going, okay, how did they do this?

Michael Pfeffer: I mean, look, are we perfect here? No. I mean, we're still [00:37:00] learning how to think about all of this stuff in a scalable way.

Foundationally, we have something we call RAIL, the Responsible AI Lifecycle, which is really built into our standard processes around project management and how we manage applications, et cetera. And we're learning how to continue to refine that and make it even better, faster, and more streamlined. We use a FURM assessment, which I think we probably talked about in a prior podcast, to help assess AI: what are the things we need to understand about the project, the models, the workflows, the outcomes to measure, et cetera. So all of our AI goes through that, but we're learning.

I mean, I think there are many different philosophies around all of this. I would say most, if not all, health systems have some kind of AI governance now; that's pretty standard. I think a lot of organizations have built it into their processes and [00:38:00] flag things that have AI, determining risk levels and how to handle them.

But, you know, our philosophy is really that it's not one team, it's not its own thing. It's something that everyone in IT needs to think about, know, own, and be part of. And it's a tool that can be leveraged at the right time. Again, going back to solving the problem, right? Not every problem needs AI, and the flip side is true too: taking AI and going looking for problems isn't the right way to do it either. So it's really kind of bringing all of that together. And we continue to learn, we continue to iterate. I don't think there's any magic bullet, but having a philosophy, sticking to it, and having governance, I think, is really key.

Bill Russell: Last item. So Chris Longhurst has gone the doctor-to-technology-CEO route. Do you think we're gonna see that more? I mean, no better person to lead Seattle Children's. Yeah.

Michael Pfeffer: I'm so excited for Chris. I [00:39:00] mean, they couldn't have chosen a better person, in my opinion. And I'm not in any way biased at all because he was a CIO. But yeah, I mean, how could you separate out technology and healthcare now? It's impossible. And when you think about the future, everything you're gonna do is gonna involve technology in some way. So yeah, I think that's really exciting. Will we see more following in Chris Longhurst's footsteps?

I hope so. But Chris is, you know, a unique guy. He trained at Stanford, so maybe he's uniquely Chris. I don't know. But I'm super excited. He's uniquely Chris.

Bill Russell: That's, that is truth.

Michael Pfeffer: I'm just super excited for what he's gonna do at Seattle Children's. They're very lucky to have him.

And yeah. Super exciting.

Bill Russell: Mike, I want to thank you for coming on the show, and thank you for being a part of the 2 29 project.

Michael Pfeffer: My pleasure, Bill. It's always so much fun to talk to you. I never know what I'm gonna get asked, which is fun as well. But it's an exciting time, and you [00:40:00] know, I love how you're bringing different viewpoints and ideas to the health IT community so we can all grow together, because, you know, it really is just about the patient.

And how do we make people's lives healthier? And I think, again, I'll go back to being optimistic. I think we're gonna be able to continue to move the needle, and we didn't even talk about what's possible in research, in medical research. Yeah. And discovery. All right.

Bill Russell: Well that tees up our next conversation.

Michael Pfeffer: Yeah. I mean, that's incredibly exciting: clinical trials, all the work that's going on in cancer, and, you know, using AI for protein discovery, drug discovery; the list goes on and on. It's just remarkable, and it's gonna completely shape how healthcare is delivered in the next three to five years, easily.

Yeah,

Bill Russell: absolutely. Thanks again, Mike. Appreciate it.

Michael Pfeffer: All right. Thanks, Bill.

Bill Russell: Thanks for listening to the 2 [00:41:00] 29 podcast. The best conversations don't end when the event does. They continue here with our community of healthcare leaders. Join us by subscribing at thisweekhealth.com/subscribe.

If you have a conversation that's too good not to share, reach out. Also, check out our events on the 229project.com website. Share this episode with a peer. It's how we grow our network, increase our collective knowledge, and transform healthcare together. Thanks for listening. That's all for now.