This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

The 229 Podcast: AI Governance Webinar with Dr. James McCabe, Dr. Ben Hohmuth, and Kristin Myers

Bill Russell: [00:00:00] Today's episode is brought to you by Abridge. It's no secret that care teams spend too much time on documentation before, during, and after patient visits. Abridge transforms clinical conversations into complete, compliant, and accurate notes in real time, integrated directly into EHR workflows.

With linked evidence, every part of the note maps back to the source conversation, so clinicians can quickly trust and verify what they're signing. It works across care settings, specialties, and languages, and has been shown in peer-reviewed academic research to reduce documentation burden and clinician burnout.

See how leading health systems are improving clinician wellbeing and patient experience with Abridge at thisweekhealth.com/abridge.

Speaker: Today on the 229 Podcast.

Benjamin Hohmuth: We have to do that. We have to make it easier for doctors and nurses to take care of patients. We have to have them spending less time outside of work. We have to have them working more at top of license.

[00:01:00]

Speaker: My name is Bill Russell. I'm a former health system CIO and creator of This Week Health, where our mission is to transform healthcare one connection at a time. Welcome to the 229 Podcast, where we continue the conversations happening at our events with the leaders who are shaping healthcare.

Let's jump into today's conversation.

Bill Russell: Welcome, everybody. We're going to get started here in a little bit. We have a great discussion planned for today. We're gonna talk about AI governance, the AI governance game plan. We have Jefferson Health, Geisinger, and Northwell here, and we're gonna talk about managing enterprise-wide AI implementation.

We're gonna give people about three minutes to join; it sometimes takes people a little bit of time to get from one meeting to the other, and we'll kick off in a couple of minutes. So we're just gonna have a little bit of a conversation here.

I'm joined by Dr. Jim McCabe, [00:02:00] CMIO at Jefferson Health, and Ben Hohmuth. Ben, am I getting that last name even close?

Benjamin Hohmuth: You, you nailed it.

Bill Russell: Great. CMIO at Geisinger. And then Kristin Myers, Chief Digital Officer and Head of Technology at Northwell Health. Welcome, welcome, y'all.

While we're waiting for people to join, we've got another couple of minutes, so let's have a little bit of fun here. What's something you've done with AI outside of work? We're gonna talk a lot about what's going on inside of your health systems and what you're doing.

I assume you guys have played around with AI outside of work. What's something maybe that you've done? Jim, I'll start with you.

James McCabe: My wife and I took the family to Italy about 12 years ago, and it was weeks and weeks of work planning where to go and how to get there and what to see.

And we've been threatening to go back again soon. So I took advantage of ChatGPT and asked it to [00:03:00] plan a trip out. We cited the regions we wanted and asked, look, where are the trains, and what are the great hotels, and where are the great historical spots? And in a matter of minutes it put together this just fabulous trip plan.

So we're excited to manifest that soon.

Bill Russell: So is that something you're gonna share with us, the itinerary? That sounds like a lot of fun. How about you guys? Ben, do anything fun with AI outside of work?

Benjamin Hohmuth: Outside of work, I try to stay low tech, but, yeah, mostly research, trip planning, that sort of thing. I've created some sort of fun inspirational images for youth sports, based on mascots, things like that. But mostly research and trip planning. A lot of fun.

Bill Russell: And Kristin, how about you?

Kristin Myers: I do lots of trip planning with it, but recently it was my son's 12th birthday, so I created an invitation for some of his friends with an embedded video of [00:04:00] Alexander playing soccer with his idol, uh, Al, who's a Barcelona football player.

So it was pretty cool.

Bill Russell: We just had a thing with our team. You know those people who are on the boardwalk? Do we still have boardwalks? You go to a boardwalk and they have the cartoon artists who do the caricatures with all the stuff behind 'em.

Mm-hmm. We had everybody from our team do a prompt of who they are and whatnot, and they each created a caricature of themselves and displayed them on a Slack channel for everybody to see. That was a lot of fun. All right, but we're gonna get into the serious stuff here.

Again, I wanna thank everybody for coming. This is part of our leadership series, the AI Governance Game Plan. I want to introduce our panelists one more time. Dr. Jim McCabe is the CMIO for Jefferson Health, out of Philadelphia and stretching far outside of Philadelphia these days as well. I mean, you guys just acquired a health system last year, a health system in my hometown.

I grew up in Bethlehem, [00:05:00] Pennsylvania, and you acquired Lehigh Valley. So you guys are continuing to grow and expand. Same theme with Kristin Myers, Chief Digital Officer and Head of Technology at Northwell. You guys are pretty static, right? You're not growing at all or acquiring?

Kristin Myers: No, we are actually. We just merged with Nuvance in Connecticut and New York.

Bill Russell: Yeah, yeah. So a lot of growth going on there. And then Ben Hohmuth, the CMIO at Geisinger, is here as well. And I'm looking forward to this conversation. So, to get us started, we want to talk about a lot of things. You guys sent in a lot of questions, and I wanna encourage you, if you have additional questions, go ahead and put them in the chat or in the Q&A.

That is in the interface here, and we will try to get to those questions. But in the sign-up, we had the ability for you to put in questions ahead of time, and it was pretty well utilized. So we're going [00:06:00] to try to stick to the questions that you actually gave us. The framework is: we're gonna talk about executive expectations.

We're gonna talk about building a governance engine within an organization. We're gonna talk about enterprise vendors, platform strategy, those kinds of things. Distributed governance, monitoring at scale, risk, ethics, and security, ROI measurement, and making the case for AI projects.

And then, you know, this is pretty aggressive, but if we actually get through all of that, we're gonna talk about agentic AI experience and scale. So, are you guys ready? That's an awful lot to get through.

Kristin Myers: Ready? Yes.

Bill Russell: Let's start with this one, just to sort of frame things up. I'd like to hear about each one of your organizations.

How are your executives framing AI expectations in 2026? What's the AI ambition of your organization, or how are they viewing AI? Kristin, we'll start with you on that.

Kristin Myers: [00:07:00] Yeah, look, I think our executives really position it as a strategic imperative. Dr. D'Angelo, our CEO, has articulated the importance of AI and its potential to transform how we operate and deliver care and innovate as an organization.

I think our ambition is grounded in realistic, value-driven transformation. We wanna embed AI into culture and strategy and day-to-day operations in ways that advance quality, access, and long-term sustainability. So we look at the AI ambition through cost and impact and value. Our priorities, which really are top-down driven, are around patient experience and access, clinical operations and physician experience, and quality and decision support.

And then lastly, operational efficiency, which I think is really important to [00:08:00] reduce administrative burden, especially in the clinical space.

Bill Russell: Ben, how about Geisinger? How are you guys framing up AI at your organization?

Benjamin Hohmuth: Yeah, a lot of what Kristin said. We may be a little unique in that our CEO is the former CMO at an AI company, and our CMO is a former CMIO. So sort of the top two in our organization get it. And our CEO, Terry, would say that going early with AI is critical to the success of Geisinger in a lot of ways, to meet our strategic priorities and to solve problems that have been historically really challenging for us to solve, around access, around affordability, around quality, around workforce. In the communities we serve, the workforce is shrinking year over year. So AI is perceived as critical to our future, and as a new tool in the tool belt that may help us solve problems that have been difficult to solve in the past.

Bill Russell: And, [00:09:00] Jim, how about Jefferson?

James McCabe: Well, our executive team has laid down a fairly bold gauntlet. They're looking for 10 million hours saved in clinicians' time over the next three years, to help deliver physicians and nurses back to the bedside. They're also looking to solve some of our rev cycle issues.

I think we're all aware of the sort of structural deficit in healthcare, with the cost of healthcare going this way and the reimbursement going this way, so we're looking to see if we can somehow level-set that. And then I guess the third area is in clinician wellbeing and patient experience.

Bill Russell: 10 million hours. I guess that's what scale does for you, right? 10 million hours sounds like a ton, but that might just be, you know, a couple hours a week for every physician.

James McCabe: Well, with 32 hospitals and 700 sites of care, you do have a force multiplier.

Bill Russell: That's pretty amazing. So [00:10:00] you guys are describing executives that are, I'm not gonna say bullish, bullish is probably the wrong word, optimistic about what AI can do for healthcare. How do you manage their leadership expectations for what AI can do against what you know from your perspective, or what you're hearing, maybe some of the challenges that you've heard about AI?

How do you manage that conversation, and what are the kinds of things you find yourself talking about as you come into those conversations? James, how about you?

James McCabe: This has come at light speed through a fire hose, and I think there's an over-assumption of the magic that's available. So we're trying to shift the narrative from magic to more augmentation; we're trying to use the term augmentative intelligence and be careful about using the word AI constantly. And also focusing on sort of a [00:11:00] problem-first framework. Rather than, what can AI do for us, let's go to the AI store and buy a bunch of things and put them in place, it's, what problem are you trying to solve, and let's see what the best tool is. Because AI and generative AI isn't always the answer. So we're trying to do that. We're trying to reiterate that in healthcare, we all feel there needs to be a human in the loop, always, for safety and other reasons, and to just continue to communicate that and update our executive leadership team.

Bill Russell: I saw you both shaking your heads, because it sounds like you're feeling it's a similar conversation at your organizations.

Kristin Myers: Yeah, I mean, definitely very similar from my perspective. Ben, I'm sure it is for you too.

Benjamin Hohmuth: Yeah. And the question I always ask myself, because a lot of times you start talking about AI, and I feel like at least 75% of the time we could have had the same conversation prior to [00:12:00] 2022. We're talking about onboarding and governance, and a lot of the issues are not new. The volume and the velocity are new. One thing I struggle with a lot, the same thing that James talked about, is we have to be problem focused. What use case are we talking about? We hear a lot of sort of AI as peanut butter to solve intransigent problems: if we just use AI, we could solve access. Well, access is a hundred different things. We need to break that down. What's the specific problem? If it's a use case of, we have too many referrals within seven days, and we wanna use AI (and this is something we're working on) to look at the referral queue for cardiology and determine who needs to be seen this week, who needs to be seen this month, and who perhaps does not need to be seen, that's a specific use case. But we get a lot of, well, let's just use AI to solve this undefined large problem.

And then it's having the discipline to break that down into component use [00:13:00] cases, and to think, to Jim's point, is it AI or is it not AI? But starting with problems and use cases, which again is not a new problem, but it feels like, like a lot of things, it's been exacerbated with AI and gen AI.

Bill Russell: These things kick off huge operational projects, don't they? I mean, as we were traveling around, I've heard people talk about how they've implemented AI, especially in the imaging space, and it's now finding incidental findings, or actionable findings, depending on what terminology you want to use.

And they said, well, this was all great, except we didn't have the capacity to respond to the incidental findings. And you talk about access; there's a whole bunch of operational projects there. I would think that's a significant conversation for the organization: look, we can implement a lot of technology, and we can even do it very fast, but this is healthcare, and we're bolting it onto an existing mechanism. Your team [00:14:00] has to be firmly aware of that. But as they see other health systems announce, hey, we're doing this with AI, we're doing this with AI, do you feel like some of that being tethered to reality is being lost?

Kristin Myers: I mean, from my perspective, it starts with awareness and education, right, for our leadership team. As technology leaders, it's helping them understand the opportunities AI presents, but also the risks and limitations. And structured governance, and I know we're gonna get to that, but I think it plays a critical role, because you need to look at what the problem statement is, which both James and Ben have spoken about, and look at it from clear feasibility and risk and value, and make sure the decisions are both innovative and responsible. So again, I go back to: part of our role [00:15:00] is educating and making people aware of some of the challenges.

Bill Russell: So let's go in that direction.

I mean, what does the governance structure look like at your organizations? Who sits on enterprise governance? What does the intake process look like? Whatever you can tell us about that process, because right now, at least in the stats I was reading last week, only 16% of health systems would say they have enterprise-wide AI governance, and I'm sure that number's going to double, triple, quadruple in the near future. But Ben, let's start with you. What does governance look like at Geisinger?

Benjamin Hohmuth: My bias, as I mentioned earlier, is to think about how AI is not different. What's different about AI strategy, as opposed to how AI can support our organizational strategy? What's different about AI governance, as opposed to how we need to think about AI features within our standard governance process? So we have an intake process in ServiceNow, and people are expected to say: is there an AI feature to what you're [00:16:00] asking for?

And that can be second-guessed post-intake. And then if there is, there's a form that the requester needs to fill out, and they often need help, again not unique to AI, to fill out the form. That form tends to focus on failure mode analysis, sort of, what could go wrong with this, both in terms of the tool and how you anticipate using it. And then based on that, it's assigned a risk tiering. If it's low risk, and a little more than half of them are, then there's not much; it's pretty light touch. If it's higher risk, because it's touching patients in some way, or because there's not a human in the loop (again, we have a human in the loop for essentially all of ours at present, similar to Jim), then it goes through an evaluation, and it's all defined, which is sometimes hard to do: a monitoring plan, and how you define things like performance, adoption, equity, and outcomes.

And then [00:17:00] actually, for a summarization use case, how do you objectively define performance? How do you then monitor that? For those higher-risk ones, it's sort of assessing the AI component and then saying, you need to come up with a plan for monitoring. And then there's actually an audit function to check in periodically and see if you are monitoring.

I don't think the current model is gonna be scalable if we're using humans to do some of those audits. We need to get to a point where technology is doing the audit, and we need to get more help from our vendors to provide tools out of the box, because, given the volume of what's coming, the current strategy isn't scalable.

I also think about pre-AI: how many of us had perfect monitoring strategies for our oass and our summarization? Does anyone have a summarization report where your troponin component changed and it didn't get updated, and there was an error of [00:18:00] omission because the cardiologist thought it was there and it wasn't? Has anyone seen hallucinations created by macros or voice-to-text that wasn't edited? So part of this is, we need to tighten up our governance for a lot of things, as we have over the past few years. But historically, I don't think we did a perfect job of governing a lot of non-AI features.

So again, I tend to come back to what's unique and what's not unique. But that's sort of our process, and it continues to iterate every few months.

Bill Russell: So what you're describing is essentially the same governance process you had before, and you're just saying, hey, if there's an AI component, it triggers this workflow that you have to go through, almost like security or

Benjamin Hohmuth: Yeah.

Bill Russell: You know, or some other review.

Benjamin Hohmuth: Yep. And you still have to go through the same security and compliance review, the same governance process, as if there wasn't an AI feature. It's added on to our current intake.

Bill Russell: Kristin [00:19:00] and Jim, do either of you have a distinct AI governance, or have you tried to weave it in and bake it in as well?

Kristin Myers: Yeah, from a Northwell perspective, we have integrated the AI component into our broader intake, because AI is just being incorporated into so much technology at this point. Where we deviate is that we have the AI risk, security, and ethics group, a cross-functional group with leaders from risk, cyber, legal, and compliance; we look at data use, ethical considerations, et cetera. We also have an AI executive committee, though, which is a little different, just to ensure that our senior leaders, such as our COO, CFO, and CHRO, are looking at the strategic [00:20:00] component of this, mapping back into the areas that I outlined as our priorities.

And then we look at quantitative and qualitative value. So those are probably the two areas that differ from the broader intake, but we've tried to integrate as much as possible into the broader technology intake.

Bill Russell: Jim, do you differ a little bit from either of those?

James McCabe: A little bit, in that a year ago we had an AI steering committee and an imaging AI committee that tagged along the ITSM process, similar to what Ben described. But we received so many requests from so many different directions that we now have 10 AI subcommittees that report up to the AI steering committee.

We have a clinical AI advisory committee, made up largely of frontline clinicians, some operational leaders, and informatics folks, who are looking at [00:21:00] this whole suite of Epic AI tools that are coming out, so we can get a sense from our frontline users whether they would be of value to them.

We have an AI imaging committee, because I can't really speak to these things that overlay the PACS system; my background is as an emergency physician. We have a technical AI subcommittee that gets on the phone with the vendors and asks them to lift up the hood and show us what's inside of this black box.

We've got a responsible AI subcommittee that takes a look and makes sure there's no concern for bias, particularly in the higher-risk models. We have a KPI tracking committee out of our enterprise analytics team that sets up some of the monitoring systems that Ben described.

We have an AI education committee that's out and about trying to educate all of our leaders, and then we have a community exchange, sort of a knowledge-sharing committee. So when a request comes in, we make sure it makes sense to [00:22:00] whatever service line is interested in it, and that there's enterprise agreement that we should take a look at it.

We've got technical, legal and compliance, and reporting and analytics folks all taking a look at it, and then making a recommendation to the AI steering committee before it goes through to the PMO for implementation.

Bill Russell: In the spirit of getting through more questions, I would love to explore the commonality. I hear it, but they're all a little different in how they're set up, and I think that's probably because you're all three very different organizations: how you've grown and how you're structured is based on how you get work done. I wanna talk about monitoring at scale.

Because you all talked about this idea of monitoring AI as it goes out there. And this is one of the things that I think is a top priority, especially as we move into this agentic world where not only are the tools going to look at data and [00:23:00] essentially inform decisions (we all talked about human in the loop), but at some point, we believe in the next year, there are gonna be a lot more systems taking some action in other systems. And in order to do that, monitoring at scale becomes critical. It's even critical at this point. How do you gain visibility and put monitoring in place for the AI that's being deployed across your system? And what are you looking for?

It's kind of interesting. I was talking to somebody the other day, and they said, you know, yeah, we monitor all the AI use within our health system. I'm like, I'm not even sure you can guarantee that, because I could take a picture of an Epic screen and go to ChatGPT and say, hey, give me a diagnosis on this, because it's not on your network; I just took a picture. And they said, well, that's always been the case. People can do [00:24:00] things outside, but that's training, and that's governance, and that's other things that we do along those lines. But let's just talk about the things we do control: the things on the wire, the things on our systems.

How are you gaining visibility, and what monitoring do you see as important moving forward?

James McCabe: It does depend on the application. Epic has a dashboard for generative AI that allows you to monitor who's using what. That's pretty straightforward. For some of the other applications, third-party and home-built things, it takes work to build out these dashboards and reports, but we build them, and we track who's using what. We built out some reports looking at our outcomes from ambient speech, which I'm sure has been as successful at Ben's and Kristin's places as it has with us. And we're looking at Epic Signal data for time in notes and pajama time. We've also extended that to some models looking at whether there's been an increase in level-four billing, for example. And we actually [00:25:00] were able to track CSN numbers to our Press Ganey data to see if patients were feeling an improved experience with ambient speech, and we found a significant jump in patient experience, I think because when we're using ambient speech with a patient, we're able to look at them face to face.

So it really does depend on the application. Where we have built-in dashboards, we take advantage of them; where we write reports, it takes more work. I have to give a nod to our infrastructure team for putting us on the Azure cloud about a year or so ago and transitioning to Power BI for our data visualization and other data tools. We now have the substrate to build out a lot of these reports and track what we're doing. So we're still early in the game, but that's the direction we're headed in.

Bill Russell: That's fantastic. Kristin, I'd love to hear your perspective from sort of an IT lens: how do we know what [00:26:00] AI tools are being used across the enterprise at this point?

Kristin Myers: How do we know? Yeah, it's a challenge, to be frank with you, especially if there have been distributed intake processes beforehand. As we talked about, we are really trying to centralize intake for all technology, and that includes AI. But I think the points that Jim made around visibility and monitoring are very on point, because you can use the dashboards that platforms like Epic and Salesforce are providing in terms of the AI that's being deployed. But I think it's a very challenging space. And Ben also mentioned that the audits are, quite frankly, resource-intensive. [00:27:00] It's a struggle that, quite frankly, many organizations have, including ours.

Benjamin Hohmuth: I think it's sometimes non-trivial. It's easy to map out what should be there; we have an AI group, and they know what's live and what's in the pipeline. But even when you think about what the KPIs are: adoption's relatively easy, but for some of these, defining performance and outcomes is non-trivial. Just defining what it is that you wanna track is actually hard.
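Ben's point that adoption is easy to measure while performance is not can be made concrete with a small sketch. Assume, hypothetically, that "performance" for a summarization tool is proxied by how often clinicians edit the AI draft; none of these metric names, numbers, or thresholds come from the panelists' systems.

```python
# Hypothetical monitoring sketch: adoption falls out of usage logs, but
# "performance" first needs a definition. Here we use clinician edit rate
# on AI drafts as an illustrative proxy, not any panelist's actual metric.
from statistics import mean

def adoption_rate(active_users: int, eligible_users: int) -> float:
    """Share of eligible clinicians actually using the tool."""
    return active_users / eligible_users

def drift_alert(weekly_edit_rates: list[float],
                baseline: float,
                tolerance: float = 0.10) -> bool:
    """Flag when clinicians edit AI drafts notably more than at go-live."""
    return mean(weekly_edit_rates) > baseline + tolerance

print(adoption_rate(450, 600))                         # 0.75
print(drift_alert([0.38, 0.40, 0.42], baseline=0.25))  # True
```

The hard part, as the panel notes, is agreeing that edit rate (or anything else) is the right definition of performance in the first place; the arithmetic afterward is trivial.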

Bill Russell: I'm not putting words in your mouth, but it would appear to me that you're giving good marks right now to Epic in terms of transparency, monitoring capabilities, and those kinds of things. I'm not speaking about their specific tools; there could be some that are better than others, I'm sure, across the board. But just in terms of understanding what you have to go through in reporting back, monitoring drift, and maintaining and auditing the capabilities, they seem to be surfacing that for you. Does that [00:28:00] inform your approach moving forward? Are you looking at platform plays more than, say, building your own? There are some systems out there building their own, and you guys have the kind of scale that would allow you to do such things. Kristin, I'll start with you. You guys traditionally have been really forward-leaning on some of this stuff, but I hear, at Northwell, almost a reliance on platforms these days?

Kristin Myers: Yeah, look, I think it all has to tie back to the overall digital strategy, which is tied to the institutional strategy. For us, we've made a lot of commitments to vendor platforms: Epic for our EHR, revenue cycle, and access system; Salesforce for customer relationship management; and a strategic relationship with Google for [00:29:00] our cloud infrastructure and a lot of the AI work that we are doing with them as part of our centralized data hub. So for us, when we are looking at what problem is to be solved, whether it's the strategic priorities or something coming through our intake from many of the innovative physicians and nurses that we have at Northwell, we try to look at the capabilities: do we already have it as part of the platforms that we've already purchased, or is there a gap?

Now, there are plenty of gaps in this space. And then the question is: do we want to utilize some of the AI tools that we have as part of what we call our internal AI hub? We have about 28 models there. Do we want to try to build something internally? [00:30:00] Do we want to have a niche system for this particular use case? That's how we carefully evaluate what makes sense and what doesn't. But we do find that the majority of the ideas that come in are going to be covered, whether it's through Epic, Salesforce, Google, or any of the models and agents in our AI hub. So again, I think that if you've made the investments in these platforms, you need to try to maximize that first.

Bill Russell: One of the things I enjoyed as a CIO is the number of physicians who, maybe it's just an entrepreneurial bent or whatever, but the number of conversations I had on a weekly basis of people going, hey, I've developed this thing, I've come up with this thing, I'm doing this thing. I assume that still happens. At Jefferson, how do you handle that? I'm sure you have a lot of entrepreneurial physicians who [00:31:00] say, hey, I can solve this problem.

James McCabe: We do. And we unfortunately have to say no often. We follow a very similar tiered approach to what Kristen just described. We start with leveraging our existing core applications. So if the problem we're trying to solve lives inside of Epic or ServiceNow or Microsoft or Workday, we're going to leverage those, because we've invested in them significantly.

We do have an in-house data science team, and they're a pretty talented group, and they can build models for us in all kinds of places. We do have one strategic development partnership with a vendor out there that we are working with to look at a couple of things. And we reach for a third party more as the exception, if we can't find what we need with one of our core applications.

But if we do find a third party that solves a big problem for us and they've got a success story, then we'll definitely take a look at it. With regard to the research and entrepreneurial [00:32:00] physicians who are building apps, we try to work with them to the best of our ability.

We leverage our data science team for them. Things get a little tricky when they want to integrate with Epic, because, as you know, integration is always the most challenging part of these things. So sometimes we'll duplicate our data warehouse and see if they can test it there. But it is very resource intensive.

We do work with them, but again: leverage the core apps, then the data science team, and then third party.

Bill Russell: You know, Ben, I want to ask you: what tools are you looking for from a vendor to help you manage this environment and stay in front of it?

Benjamin Hohmuth: Well, monitoring is a big one.

So you talked about Epic, and they do provide some tools that are useful, but they're not yet adequate. I think, thematically, the tools we have to govern and monitor are lagging behind the tools we have in use. And I think that's appropriate, because if we waited for perfect tools to govern and monitor, we [00:33:00] wouldn't get the speed to value that we want.

I've talked to folks at Epic, and they're going to have more automated ways to monitor, but we don't have them right now. So we can look at adoption, we can in some cases give people the opportunity to provide feedback, and we can put that in a central place. And that's great.

But we need more automation in terms of monitoring performance. And I don't think the expectation can be that each health system does that on their own. I think we need those big vendors that Kristen and Jim talked about to help us with that. Our strategy is the same. We've been on Epic since '96.

We used to do all sorts of customizations, and Epic and some other enterprise vendors have obviously become orders of magnitude more capable than they were, and all that custom work becomes challenging to maintain. So we're enterprise first. And we also try to say it's not, does Epic meet the need, or [00:34:00] Microsoft or Salesforce; if it meets 80% of the need, that's probably good enough, because of the out-of-the-box integration, the lower cost, and the easier maintenance. We want to go with our enterprise vendors. There are important problems, including in the AI space, where our current vendors may not meet the need, and if it's a really important problem, we do partner with other vendors. And we also have a data science team that is building out some of their own tools.

So very similar to what Kristen and Jim described.

Bill Russell: Let's dive into ROI for a little bit. I think this is an area where a lot of people have struggled, and we hear a lot of anecdotal things like, the ROI on ambient listening has been phenomenal.

And then we ask, what is the actual ROI? And every health system, depending on who your CFO is and what they actually allow, will say, well, there was really no ROI, but it is a huge advantage to our clinicians. They've reduced pajama time, some have seen more [00:35:00] patients, those kinds of things.

Do you find that there's a defined model that applies to every other project, and you're just applying it to the AI projects now? And where are you finding ROI right now with AI projects? You can go in either direction with this question, Ben.

We'll start with you, since your voice is already warmed up.

Benjamin Hohmuth: Sure. Yeah, I think having a solid business case, whether you're opening a new clinical service or buying a new piece of software or doing something that's AI dependent, doesn't change, right? We should have a solid business case to move forward.

I've grouped our AI use cases: there are some with hard-dollar ROI, and they tend to be on the rev cycle side, helping with denials and coding and things like that. And there are ones with soft-dollar ROI. Even [00:36:00] before gen AI became so prevalent, I think most CFOs were understandably skeptical of soft-dollar ROI. Oh, I saved 10 minutes per shift, or 30 minutes per shift, for user X.

Well, unless you're hiring fewer user Xs tomorrow, you can say, well, they can do other important things, but that doesn't hold the same weight that it did. But I think there is a domain around user experience, wellness, and satisfaction that's important. When we went live with ambient, we very deliberately said, yes, this may create capacity. People may be able to see more patients because we're meaningfully decreasing administrative burden. But our principal KPIs are not financial. It's provider burnout, it's pajama time, it's things like that. And our strategy around a lot of these tools that are aimed at wellness and decreasing cognitive burden is that we have to do that. We have to make it easier for doctors and nurses to take care of patients. We have to have them [00:37:00] spending less time outside of work. We have to have them working more top of license. And, unrelated: yes, we may need to make people more productive, but we've said very specifically, we don't want it to be a quid pro quo where, hey, you can have this tool if you see two more patients.

So we've tried to keep those as separate lanes. We need to create capacity, we need to decrease administrative burden. And yes, there will be demands to increase productivity, but they're not intimately related, if that makes sense. That's how we've tried to frame it.

Bill Russell: Jim, you guys have 10 million hours. I assume that's a very real metric and that you're measuring against it.

James McCabe: We are. It only started a couple of months ago, but we're seeing it. It is hard to translate that into hard ROI dollars, and I don't think there's a single formula or report that's going to do that.

Our KPI analytics leader is probably the busiest and most anxious person in the institution right [00:38:00] now. And it's coming from different places. Overturning denials and increasing coding with coding assistance are helping. We've got a lot of clinical documentation improvement tools that are helping document risk variables and other things that increase value-based care payments. We've done a lot of great work with reducing heart failure readmissions; a lot of dollars there. Increasing HCC coding in the ambulatory space; a lot of dollars there. Our data science team built a procedure finder that looks at an echocardiogram and says, this patient looks like they could use valve surgery, and brings some of these procedures into the system. So it's really a smorgasbord of different things to measure. And as Ben and Kristen mentioned, it's very difficult, but we're going after everything we can.

Bill Russell: But Kristen, every so often you have to get in front of leadership, or the team tries to talk about, [00:39:00] how are we doing with AI? How are we progressing? Are you pulling all that into a single report, and is that a significant administrative burden in and of itself?

Kristin Myers: I've included finance as part of our ROI team, alongside my team. So anything that's going in front of the executive committee in terms of quantitative or qualitative measures has already been vetted by the CFO's office, and you can imagine that's a tough process to go through.

And sometimes it's really hard to show whether there's cost savings or revenue impact. With the revenue cycle systems, it's much easier. But a lot of these are also experience plays, or clinician satisfaction. Sometimes, even from our CFO's perspective, and other executives', they'll say, look at ambient.

That is now just a foundational technology, and if [00:40:00] we don't implement that, we're not being competitive. So while ROI is important, and we go through that entire process for the larger AI investments with the finance team, the strategic decisions on whether we should invest or not include that but also look at those other factors.

Benjamin Hohmuth: There are also sometimes, and again, maybe not specific to AI, opportunities to piggyback a non-hard-ROI item onto something with hard ROI. Epic's a good example, right? If you get enough rev cycle cases with hard-dollar ROI, and you hit a threshold where it makes sense to go from their a la carte to buffet pricing, all of a sudden the financial barrier to implementing some of those other non-hard-dollar use cases goes down. Or if you're creating something that has potential for front-end CDI but can also be used by your hospitals to generate [00:41:00] problem-oriented views, to make it a little easier to query the chart, you've got one that gives hard dollars, but you can piggyback some other experience and quality use cases on that.

So we've also tried to do that. Not always possible, but when possible, we've tried to piggyback onto those hard-dollar ones.

Bill Russell: So, a question from Donna Roach: how successful have you been with your current vendors sharing what AI they're using, and the version? I guess this is a transparency question. You have a lot of vendors using different tools, different models, those kinds of things.

How are those conversations going?

James McCabe: I would say that Epic's been pretty transparent with us on most of their AI models, and Microsoft as well, with Copilot and Microsoft 365 Copilot, which we have available upon request. Some of the other ERP and ITSM systems, we're still trying to learn what they've got under the hood.

Kristin Myers: I would say that there's not as much transparency with some of the other vendors. [00:42:00]

Bill Russell: Yeah.

Kristin Myers: Which is problematic.

Bill Russell: They feel like that is their proprietary intellectual property. Is that going to fly?

We haven't talked about the regulatory environment. The regulatory environment almost feels like it hasn't been set yet; it feels like it's still shifting. You have state regulations. I don't know if we have state AI regulations; maybe you do in the states that you're in.

Federal? None of this really feels like it's been set, and none of it really feels like it's been tested yet. We haven't had a big case or something that has set the standard. Is that part of the conversation as you move forward? Jim, I'll start with you.

Is that a concern?

James McCabe: Yeah. I think getting federal regulations is going to be a real challenge, because there's so much politics involved, and when you go from one administration to another, one's going to erase what the other guy started. And out of that, I think at least three states, [00:43:00] Texas, Colorado, and California, have started this year to put together some healthcare AI regulations.

So I think it's probably going to end up at the state level.

Bill Russell: That's interesting, because you all practice across state lines. Is Geisinger outside of Pennsylvania?

Benjamin Hohmuth: We are not, but we're now part of Risant, which is national. So it depends what lens you're using.

Bill Russell: I find that to be one of the more challenging items, because we were in California, Texas, and New Mexico, and California and Texas aren't known as having the same kind of environment. So even with the same EHR, we had things we had to consider. Having AI be that way could be fascinating. Is it one of those where you're stepping into this cautiously, saying, hey, somebody has to set the environment at some point, and we're just going to see where it goes?

Kristin Myers: Yeah, I think we are [00:44:00] monitoring it closely. At the federal level, to your point, there won't be a huge amount of regulation of AI. In fact, I think there was even consideration of an executive order indicating that there would be an override of states putting legislation in place, due to fragmentation of policies across the states. So I think we've got to watch it very closely to see how it plays out, and also what changes with, potentially, the next administration.

James McCabe: We're following the Coalition for Health AI, CHAI, I think it's called. I believe it's out of Colorado. They're trying to stay out of politics and put together some kind of thought process around how to manage this.

I know that they're struggling as well, because it's hard to get across the regulatory line. But we're paying [00:45:00] close attention to them.

Bill Russell: I want to spend the last five minutes on futures, and really on the promise. What have you seen as the most promising opportunities for AI to have a transformative impact on healthcare, either clinical or operational?

You've touched on some of them as you've gone through. Maybe we just hit each of you and talk about one area that you've seen that has shown a lot of promise. Kristen, we'll start with you.

Kristin Myers: Yeah. Look, I think the agentic work that can take place around patient experience and access, streamlining scheduling, getting that personalization, and connecting patients to the right care more quickly could be a great experience play.

And then just the reduction of administrative burden for our clinical teams, so that they can focus more on patient care. I think both of them could [00:46:00] be transformational in a positive way for our patients and for our workforce.

Bill Russell: Yep. Jim?

James McCabe: I have to go with ambient. I mean, that's been the biggest game changer that I've seen since we went from pen and paper to electronic health records.

Bill Russell: It was pretty amazing. I heard CIOs saying, hey, it's the first time people were coming up and high fiving them and patting them on the back. That didn't happen when we rolled out the EHR.

James McCabe: No. We've had spouses call and say, thank you for giving me my spouse back at night.

And just the ability to capture a lengthy conversation and have it all right there in a format, and you get to sit and look at your patient face to face. It's been transformative for us.

Bill Russell: And Ben, how about you guys?

Benjamin Hohmuth: There are sort of three domains to think about. Synthesizing information from the conversation: that's been a hit with ambient. Synthesizing information from the chart: we're early, but that's critical. For some people's workflows, it's less about getting stuff from the conversation; it's more about extracting from the chart. [00:47:00]

And then the third is more modern ways to interface with knowledge, like vendors such as OpenEvidence and Doximity GPT. And I think what needs to happen is those need to come together, right? So if I am seeing a patient, synthesizing what comes from the conversation with what comes from the chart, informed by what should happen based on what we know about that patient. We know the patient has diabetes, we know what they told me about the diabetes, we know from the chart what they've tried in the past, and we know from society guidelines or other knowledge resources what the next step should be. And we also know from the formulary, from their PBM, what the options are. Starting to bring that together into next best actions that are informed by what we know about the patient, from the conversation, from the chart, and from other knowledge resources, is something I'm really excited about.

Bill Russell: I want to thank the three of you for coming on the webinar and having a conversation about this. It's an exciting time, and there's a lot going on. I was going to get into futures, but I imagine you're going to say, man, it's moving fast.

I've never seen anything progress this fast in my career in IT, and I have gray hair now, so it's been a while. The mobile phone took a long time to get from the car to your hand. AI seems to have gotten overnight to this level of ubiquity, where my parents are saying, hey, I'm using AI to do this, and they're 88 years old, and you're like, wow.

It's going to be interesting to watch, and I appreciate you guys coming on to talk about this. I want to thank Abridge for making this conversation possible; we didn't even mention their name but twice on this. So I appreciate them investing in the next generation of health leaders and this leadership series.

And [00:49:00] thanks to everybody who joined us today. That's all for now. Thank you, everybody.

Benjamin Hohmuth: Thank you.

Bill Russell: Thanks for listening to the 229 Podcast. The best conversations don't end when the event does; they continue here with our community of healthcare leaders. Join us by subscribing at thisweekhealth.com/subscribe.

If you have a conversation that's too good not to share, reach out. Also, check out our events on the229project.com. Share this episode with a peer. It's how we grow our network, increase our collective knowledge, and transform healthcare together. Thanks for listening. That's all for now.