This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
[00:00:00] Today on Town Hall.
Molly Zimmer: (INTRO) I really think that it's not so much about resistance to change or even resistance to technology. I think it's about making sure that we help people feel comfortable and understand the why and understand how their lives are actually going to be better, and make sure they actually are better.
I am Sarah Richardson, a former CIO and President of This Week Health's 229 community development, where we are dedicated to transforming healthcare one connection at a time.
Our Town Hall show is designed to bring insights from practitioners and leaders on the front lines of healthcare. All right, let's jump right into today's episode.
Sarah Richardson: (MAIN) Welcome to Town Hall. Today I'm joined by Molly Zimmer, director of Emerging Technologies at St. Luke's Health System. Molly leads the system's AI governance efforts, oversees the evaluation of new technology solutions, and leads their emerging technologies lab.
A space dedicated to hands-on testing, ideation, and collaboration with both clinicians and patients. With a [00:01:00] background in education and a passion for making complex technologies practical and approachable, Molly is helping her organization build real-world AI literacy, thoughtfully adopt new tools, and set a strong foundation for the future of healthcare innovation.
With over seven years at St. Luke's, Molly's been instrumental in transforming how the organization approaches change and technology adoption, particularly in this new area of artificial intelligence in healthcare. Welcome to the show, Molly.
Molly Zimmer: Thanks, Sarah. So happy to be here.
Sarah Richardson: I'm so happy to have you here because AI literacy is literally a conversation that everybody is having, and you have done a phenomenal job at St.
Luke's. I mean, you have led the AI governance committee and previously worked in education and tech adoption. How did all of this background help you build an organizational mindset that is embracing AI learning?
Molly Zimmer: Wonderful first question, because I feel like at some point I always bring up my background in education, and it's totally how I ended up doing what I'm doing now.
So I feel like they, [00:02:00] both technology adoption and education, have been foundational to my philosophy. You hear a lot of times that people are afraid of technology or they're afraid of change. But I actually think if you go back to the root of it, they're just afraid of looking silly or incompetent, or sometimes afraid that investing time in learning something new may not even bring value, especially if there's no kind of associated pain point that they feel.
So I would say whether we're talking high school government or leading technology adoption, it's really about helping people feel comfortable with not knowing, with being uncomfortable, and leaning into that. And I feel like that's where the magic happens. One more thing, I guess I would add to that.
My team currently, we have a 10-word bio that we created about a year ago, and it's: we make new and scary feel like business as usual. And so whether we're talking about bringing clinicians into the lab to get their hands on new technology, or we're talking about AI literacy and introducing these [00:03:00] concepts, it's all about making that new and scary feel like this is something we can all do together.
Sarah Richardson: I love that ethos, that approach of, hey, you may not have any idea how to do this, we're gonna show you how to do it. And I feel like this is almost like the cybersecurity space, where when you make it approachable to everybody, they're using it at home and they're using it at work. So you're teaching skill sets that are applicable across the continuum of their lives.
Molly Zimmer: Absolutely. Yep.
Sarah Richardson: Tell us more about the foundations class because I was floored when you said, oh yeah, I've had over 2,500 people voluntarily participate in this program. So in addition to your approach about, Hey, this isn't too scary to learn something new, what were some of the specific elements that made it so appealing to the staff?
Molly Zimmer: Yeah. I know, that number, it's a lot. I mean, we still have a lot more to reach, and that number has climbed even since then. We need to go back and keep counting. But I feel like, honestly, as much as my team really is, I think, top tier in our ability to create and deliver education,
a big [00:04:00] factor has just been that there's so much demand. People in our organization, and I think people in general, are really interested in this topic. So getting people to voluntarily sign up was actually the easiest part. All we did in this particular case was put a link in a weekly senior leader memo that goes out, and people cascade it.
And we had a link for people to sign up. We made it so that we would work with them on the schedule. I think that's key, right? When you just offer a generic, you know, org-wide training, which we do those as well, but working with their team and working on a time to bring them together with their team is huge.
But man, we had to shut off access to that form, because we got so many requests that our calendars were just full of AI trainings, which was amazing. But we're working on how we make that a little more sustainable. But man, I swear people are eager to learn, and especially with the approach of hands-on.
I think that has been like word of mouth. We get people reaching out because they've heard from their friend who went through [00:05:00] it. But we actually bring people in. We have them bring their laptops, we walk them through how to get to Copilot, which is our LLM of choice. And then we give them different scenarios, and we walk through it as a group, and we hear people share out their answers, and we're there to help.
And that has just made it, I think it's just really caught on when you make it fun and interesting for folks.
Sarah Richardson: I feel like you just gave me some goofy little parlor trick, like, so what's your LLM of choice?
Molly Zimmer: An icebreaker question for an awkward IT moment, what you can tell about an individual from their LLM of choice, I think.
Sarah Richardson: So I'm also interested in your approach to teaching AI, what you shared with me as friend or foe, to create that level of curiosity. How do you balance addressing concerns that people may have while also encouraging safe use and adoption?
Molly Zimmer: Yeah, we think it's really important to like lean into that tension.
So, I mean, we frame the whole conversation as: with AI, we wanna have a balanced approach. But before we kind of talk about what that looks like, let's go to the extremes. Like, what are the most [00:06:00] exciting, amazing moonshot ideas you have for what AI can do for the world? And then what are, like, the scary,
crazy Terminator-type scenarios that you can think of when we think about all of the risks? And I think that leaning into that, one, it's inclusive, because everybody has had kind of one reaction or the other at some point to AI. So it starts the conversation. And two, I think it really just illuminates the reality that
we have to acknowledge, as amazing as the potential for these tools is, there's kind of equal parts incredible risk and things that we need to understand and mitigate. So we talk about the risks, we talk about bias, misinformation, over-reliance. But then we talk about how it's already improving clinical documentation and reducing burnout and amplifying impact.
So we think it's really important to focus on both of those aspects.
Sarah Richardson: There's a bit of an aha moment too, I have to imagine, when people realize the history, how far back it goes, and where AI already exists, [00:07:00] even in their daily lives. People don't realize, like on their phones when it's coming up with next best actions or it's helping you write emails, you're like, that's AI.
You're already using it. And you talked about incorporating hands-on learning with Copilot prompts. Yes. What kind of exercises have been most effective in helping them understand how to use AI?
Molly Zimmer: I kind of touched on this a little bit. You know, we have folks bring their device. Like, first of all, we start there, we walk them through how to get there.
So even if you already know, great, you're gonna feel really smart. If you don't, you're just not gonna feel dumb; everyone's gonna be along for the ride. And then we always start with fun. So we start with something very approachable, like generating a silly image to share in the chat, or asking Copilot to explain an obscure topic to an 8-year-old and then to a subject matter expert, and then having people share out their answers.
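As a rough illustration, here is the shape of that two-audiences exercise as a small Python sketch. The topic and exact wording below are invented for the example, and in the class participants type prompts like these straight into the Copilot chat window rather than running any code.

```python
# A minimal sketch of the "explain it to two audiences" exercise described above.
# The topic and wording are invented for illustration; nothing here calls Copilot,
# it just shows how the same question gets reframed for different audiences.

TOPIC = "how vaccines train the immune system"  # any obscure or complex topic works

prompts = {
    "an 8-year-old": f"Explain {TOPIC} to an 8-year-old in three short sentences.",
    "a subject matter expert": (
        f"Explain {TOPIC} to an immunology researcher, using precise terminology "
        "and naming the key mechanisms involved."
    ),
}

# Reading the prompts side by side makes the point of the exercise visible:
# the same question, tailored to two audiences, produces very different answers.
for audience, prompt in prompts.items():
    print(f"For {audience}:\n  {prompt}\n")
```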
So you just start by creating an environment where you're laughing, you're joking, you're having fun with it, it's accessible. But I think the most powerful thing, you know, then moving into the practical use cases, has been, kind of like I alluded [00:08:00] to, we encourage people to sign up as teams, and when you have a team go through a training
together, they typically have a set of common pains in the work that they do. So a powerful thing is having those teams brainstorm: what are a couple of our biggest pains that we would love to see if AI can solve? And then we use part of that session to brainstorm with them and walk through how we might actually use it.
And I think there's a few reasons that's powerful. One is that, because you're asking them for a pain, you know you're addressing something that they actually want to fix. But also there's this concept of behavior contagion, kind of a behavioral science concept: it's really simple, when we see someone else doing something, we're much more likely to do it. So part of it is, if people are going through this training together and they know the people around them are using these tools, all of a sudden they're much more likely to. In fact, research shows people are like three times more likely to use AI if they know one other [00:09:00] person who does, which is kind of crazy, right?
So we wanna make this into a norm rather than an anomaly. And I feel like getting those collective groups together and walking through use cases that really impact them is key.
Sarah Richardson: I love the team approach idea. And so when a team comes in and says, oh my gosh, here's, let's just say, five processes that AI can help us do more effectively,
I mean, St. Luke's has embraced utilizing Copilot; not every organization even has it open past certain elements of the org. How do you help them put it into practice, or create the space for them to have the right licensing mechanisms to say, this is a workflow productivity enhancer, and here's the five use cases that we came up with for the X, Y, Z team?
What does that look like from, Hey, we're having fun in the lab to, oh my gosh, it's actually gonna help us do our jobs better.
Molly Zimmer: Yeah, so the licensing component with Copilot is really tricky. And I would say, to be clear, we have only 300-ish people in our organization that have that enhanced Copilot license where it's embedded throughout [00:10:00] all of the M365 suite of applications.
So what we've really tried to do is encourage people to use, I think it's now called Microsoft 365 Copilot Chat, which is, you know, part of their license, and we get creative, to be quite honest. So there's a really common one, almost every group we've trained: meeting minutes.
And while that's built in to that licensed version of Copilot, which is amazing and I can't imagine life without it, you can do that, I mean, you can have the same impact, with a couple more steps, even with the free version. So a lot of times we're getting creative with their use case, working around whatever license they have, to see how we can maximize the value of that built-in Copilot license.
Sarah Richardson: Managing the licensing, the costs, the expectations, and everything else ends up being one of the biggest factors, as far as the value created relative to the cost of the licensing and the outcomes that are out there. It's a conversation we hear; you were at a recent summit.
That's part of the continuous [00:11:00] conversation: justifying the value of the spend and where it's being utilized most effectively.
Molly Zimmer: I guess one more thing to add. I mean, it's a full-time job to keep track of what's happening with those licenses. So we just ran into a situation the other day; we have a team that's an enthusiast group, an AI enthusiast group.
And so sometimes we'll hear about issues from them before we even know of them ourselves. And so someone had been trying to use Copilot to generate images, and I think they could only generate like one or two, and then it was like, you've reached your limit, and that was a new thing. So I feel like that's another thing:
you can have your plan, but it's of course a moving target in terms of what we'll actually have to be providing for staff as we move forward.
Sarah Richardson: For sure. Because the citizen advocacy is helpful up until the point where you also have to manage those expectations. Absolutely. You're also integrating the literacy into your mandatory information security program.
So it's the fun part, but there's also the ethical components that people need to understand. Why is the human-in-the-loop concept so important to emphasize in the use of AI?
Molly Zimmer: [00:12:00] So I think probably anyone who is interested enough in AI to be listening to this has had an experience or two where ChatGPT or their LLM of choice was wrong.
And so we know that's just an inherent risk, and in medicine it is a lot more high-stakes than in other industries, for sure. So in my mind, the human-in-the-loop message, it's really about accountability. AI, just like it's not replacing people, right, it doesn't replace responsibility.
It actually amplifies your responsibility. So, I think it's a lot about making sure we guard against over-reliance, because at the end of the day, that human in the loop is ultimately responsible for that in-basket response or the, you know, the patient note or that clinical decision that was made.
And so that's why we really hammer that one home.
Sarah Richardson: And you adapted your materials from the MIT program, and then you added the entertainment aspect about the history of where things have come from. What historical context do you find most valuable in helping [00:13:00] the staff understand where AI is today?
Molly Zimmer: So I could talk about the history stuff the whole day, because that's just near and dear to my heart, again with my
social studies teaching background and all that. So yeah, I took an MIT course early on, like a couple years ago now, just to educate myself, because probably like many organizations, we didn't happen to have a generative AI expert on staff, so we've had to do a lot of self-skilling. But the thing that really stuck with me from the MIT course was the history, and the fact that this feels, to so many of us who work outside of AI,
like this just came out of nowhere, when in reality it's been a concept since before the 1950s. But 1956 was when the phrase, the actual discipline of AI, artificial intelligence, was coined. And so it goes back a long way, and there's a certain comfort for folks to realize
this crazy, kind of magical-seeming technology did not come out of nowhere. It's been decades and decades [00:14:00] in the making. And I also like that when you walk through the history of AI the way that we did it, so I had the history and I didn't know if people would think that was necessarily as interesting as I did, 'cause I'm a nerd in that way.
But what we did was we went through and we picked out examples of technology advancements throughout that timeline. And we connected those to definitions. So there's a handful of definitions in our foundation course that we want people to know. We want them to understand that AI is just this big bucket.
We want 'em to understand what machine learning is, what deep learning is, and what generative AI is. Those are the handful of distinctions that we want people to be able to make when we're thinking about this technology. So going through each of those advancements, it really hammers home an example of that
technology. So I'll only share one, even though I would love to share more. But here's an example. One of the things that we talk about is, in the 1970s, there was this expert system called Mycin. And this one's a great one to talk [00:15:00] about in healthcare, because Mycin essentially was a rules-based system.
It had over 450 rules to help make a diagnosis of blood infections. So it could actually perform at or better than a doctor in that field, a subject matter expert, which is amazing, right? You hear about that stuff all the time now. I know it doesn't always make everybody super happy, but it's a great example of how even in the 1970s we were looking for how we can reduce diagnostic uncertainty and standardize care.
But at that point, it's not scalable to have, you know, a Mycin with 450 if-then rules for all of the different narrow areas in which this work happens. And so I think it just does a really good job showing how the technology changed, why things like generative AI are such a leap forward, and why it can apply in such amazing ways to healthcare today.
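To make the "if-then" idea concrete, here is a toy sketch of that rules-based style in Python. The rules, findings, and outputs below are invented for illustration; they are not Mycin's actual rule base, which numbered in the hundreds and included certainty factors and an interactive questioning engine.

```python
# A toy illustration of 1970s-style rules-based expert systems like Mycin.
# Every rule is a hand-written if-then statement over structured findings.
# All rules here are hypothetical examples, not real clinical guidance.

def suggest_organism(findings: dict) -> str:
    """Apply a few hand-coded rules to a dictionary of lab findings."""
    # Hypothetical rule 1: gram-negative rod growing aerobically.
    if (findings.get("gram_stain") == "negative"
            and findings.get("morphology") == "rod"
            and findings.get("aerobic")):
        return "suggest gram-negative rod, e.g. E. coli (hypothetical rule)"
    # Hypothetical rule 2: gram-positive cocci in clusters.
    if (findings.get("gram_stain") == "positive"
            and findings.get("morphology") == "cocci"
            and findings.get("arrangement") == "clusters"):
        return "suggest Staphylococcus (hypothetical rule)"
    # No rule fired: the system simply has nothing to say.
    return "no rule matched; more findings (or more rules) needed"

print(suggest_organism({"gram_stain": "negative", "morphology": "rod", "aerobic": True}))
```

Even this two-rule toy shows why the approach doesn't scale: every new organism, finding, or specialty means another hand-written rule, which is the limitation described here.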
Sarah Richardson: It makes me think [00:16:00] about 25 years ago, putting in a clinical decision support system for the county hospital, and at that time it was the biggest PO I'd ever written, you know, over a million dollars, thinking, oh my gosh, this is such a big thing we're doing right now. That was a form of AI, just the ability for all of that information to help serve up a
conversation or a perspective for the physician to review, and it just keeps building upon itself. Obviously that's part of the space that it's taken up, but it was those moments where you're like, yeah, it has been around a long time. I love the idea, and I think about those rules-based engines with those hundreds and hundreds of lines of code that were hardcoded into systems, and one bug happened or one thing came up, and someone had to dig through to figure out which of, let's just say, those 450 rules had an error, and that was someone's job.
Molly Zimmer: Yeah, it's insane. I mean, it's fun when we go through that timeline and we talk about, there actually were a couple of AI winters where it was like, there's all this promise, look how cool, this can do all these things. But then things like you're [00:17:00] just describing: in practice, applying technology like that is like trying to code the entire medical profession, one if-then at a time, right?
It just doesn't scale. It's just exciting to think about how far we've come, to where I think now it can.
Sarah Richardson: And you love the history aspect, so you're not a nerd in that space at all. In fact, knowing how and why things ever got to where they are is part of the journey, and absolutely part of the learning aspect.
So as you create and develop level-two education that focuses really on LLMs and additional prompts, what are some of the specific skills or knowledge that you're prioritizing to help the staff advance their capabilities?
Molly Zimmer: You know, it's funny, we've redone our 200-level curriculum about three different times, because things do keep changing.
But right now we're really focusing on more advanced prompt engineering, and the tools are getting better, right? But really helping people know, what are all the ways that I can get the most out of asking this question, to really get something useful and cut down on the time I'm [00:18:00] then, you know,
spending changing the output and tailoring it. Another thing we're really focusing on, especially for leaders and folks who are responsible for understanding how to apply AI, is understanding conceptually what is a good use case. Like, we want our leaders, not just our leaders, but especially our leaders, to be able to spot:
there's a huge process, it's a huge pain in your butt, and knowing when it might be a good candidate for some form of AI, and even being able to articulate what type of AI that is, is a huge skill set that would benefit our whole organization. So being able to spot that, identify tools. And then I would also say a big push right now is responsible AI.
So, people understanding how we really evaluate and implement artificial intelligence tools, especially when we're pulling in a third party or enabling a new feature from the gen AI suite in Epic. How do we make sure we're doing that responsibly and in a way [00:19:00] that always has the patient and our workforce at the center?
Sarah Richardson: So how are you preparing staff for careers where AI amplifies their productivity? To your point, it's a companion, not a replacement.
Molly Zimmer: Absolutely. Yeah. I mean, you know, it's funny, there's been a lot of talk, there still is, but I think especially early on, about AI, like, coming for people's jobs.
I still think there's a lot of mixed opinions on that. But um, in this Irrational Labs poll, I think I gave a statistic from it earlier, but there was a statistic that only 8% of people, if you ask, are concerned about AI replacing their job. If you ask about their colleagues, that doubles, which I think is hilarious.
And then if they ask them to think about jobs outside their industry, it jumps to 30%. So it's funny, 'cause in some ways I actually think that people might want to be thinking a little bit more about their own work and looking at those tasks that could be augmented by AI, and why not get rid of those now, right?
So, [00:20:00] we're really encouraging people to think about, what are those pieces of your job? Because, in my mind, it's unlikely that whole jobs are gonna be eliminated anytime soon. It's much more likely that pieces and parts are going to be taken by AI. So how do we really make sure that we're ready for that shift, and how do we prepare our workforce?
So I think one thing, kind of like we were talking about with the friend-or-foe concept of being real about the extremes of AI, we have to start with honesty and be honest about change, because we shouldn't be saying AI is not gonna change anything. AI really should shift the way that we work, and that doesn't have to be bad.
So we do spend a lot of time, especially in our foundational courses, but anytime we get to talk with folks, on, like, how can AI take those annoying items off your plate so that you can do the human stuff better, or the things that you enjoy? Because at the end of the day, it gives you the freedom of choice, kind of like with our providers.
And one of the biggest wins I think that [00:21:00] we've seen so far, for sure, is ambient documentation and what that allows for our providers. We don't, I know some systems do, but we don't then say, you have to see another patient, we're giving you this tool and you're gonna have more time. We actually give them back that freedom of choice.
Maybe you spend a little bit longer with a patient, maybe you do see more patients, maybe you go home early and are there with your family for dinner or catch your kid's sports game, right? So I think the message should be, AI is going to change things, but it's all about lifting the burden of those things we don't really need to do, to give you the choice of what to do with your time.
So I think that's an exciting way to frame it. And that's kind of the lens we look through in preparing our workforce.
Sarah Richardson: And what I love so much about what you shared is exactly that: the physicians and clinicians using it first, setting the
organizational barometer for how it can be used, and how it's demonstrated some of that practical value. And they get to choose how they're using their found time. [00:22:00] 'Cause to be honest, if you find time, something is going to fill it, and is it, you know, blocking you into additional time doing other things that were on your to-do list, or letting you go to the baseball game, or letting you get away from all of the activity that maybe is potentially creating some of that burnout?
And what I also love most about that perspective is the St. Luke's culture, because you all have embraced change and innovation. I mean, this journey to even having AI governance and deciding to start with those 300 licenses. And I remember the story you told at a summit, you're like, yeah, my CFO was kind of on the fence,
and then he would've spent 10 hours looking at a contract, and it gave him the summary in 20 minutes, like, I'm a believer. Those are the types of things where you're like, you got 10 hours back, and now there's probably more than that in your backlog. So how has the culture of St. Luke's really created the support and initiative for AI adoption, AI literacy, AI [00:23:00] ambient technology, just cohesively?
It's happening throughout your org, but it started to a degree at the top. How long has that journey been?
Molly Zimmer: Yeah, this is another one I could probably talk about forever. When I think about just our ability to handle change, I think we're a completely different culture than when I started, now, you know, seven years ago.
I just would be so curious. I'm sure there's research out there, I should probably spend some time looking at it, but I'd be curious to know if this is similar to other organizations, because I really attribute a lot of our ability to handle change to going through Covid. And it's funny, because our Covid journey, obviously it showed us all how adaptable we are on so many levels.
But in my world, what was happening is, months before Covid, we were rolling out Microsoft 365. That doesn't even seem like it should be that big of a deal, but it absolutely was. People were like, you can pry Skype out of my cold dead hands. They loved Skype. They were like, what is [00:24:00] this Teams stuff?
Why is it so purple? Why are there memes? What is happening? So there was a lot of change resistance, and I'm sitting here like, Teams is amazing, all your documents are here. You know, like, all I could see was the upside. But then what happened? I mean, we did have a few months where we were working through that, and there were of course pockets of folks who were so excited and caught the vision and were really champions.
But honestly, what really accelerated adoption was when everybody got sent home and they had to use it. They just had to, they didn't have a choice. And so I feel like in a way we had to just so rapidly adopt, not just technology, obviously a lot of different things during the Covid era, but I really think that something changed in our DNA.
And so again, kind of, you know, back to the beginning, I really think that it's not so much about resistance to change or even resistance to technology. I think it's about making sure that we help people feel comfortable and understand the why and understand how their lives are actually going to be better, [00:25:00] and make sure, in the way that we implement the technology, they actually are better.
So I would also say, to your point, we had just such strong executive support. So much curiosity, so much engagement. I mean, even, it's not AI specific, but we have an emerging technologies lab, and in the last couple of weeks we've had so many of our executives come down and wanna tour our lab and see what we're offering there and what we might be thinking about for the future of our organization.
I just think there's this tremendous curiosity from top to bottom at St. Luke's. So I feel really lucky to get to work in this type of environment with this type of culture.
Sarah Richardson: I love that you're setting the pace for others as well. I cannot tell you how many people I refer to Reid and now to you about what you're doing, and I have to pace what's going to you, because I'm like, oh, but Molly's got that, or so-and-so has that, and you're like,
please don't send me four AI intros per week. Like, hey, go listen to the podcast, do some research. You're also the [00:26:00] fourth person that's gone through that MIT training that I have met, and all have said it's really worth it, it's really good. And I'm thinking, okay, you know, about jumping in and taking it as well.
And then just the constant iteration of things that are coming. This whole conversation's been super fun. I'm gonna throw some fun little speed-round questions at you. Okay. More than anything, be prepared to come back every few months and share things that are happening at St.
Luke's in the universe of AI, mostly because you've already set the tone for our industry, and for others to hear what's happening is really important, because they realize, wow, they're using it for all of these purposes, and it's not as scary and expensive in some cases as some believe that it could be.
So are you ready for the speed round? I think so. I crack up at some of the questions that get generated for this. So if you had to explain AI to someone using only a kitchen metaphor, what would it be?
Molly Zimmer: I was thinking of like a kitchen tool, but how about this? I feel like, because AI's an assistant,
AI is like [00:27:00] the sous chef, the sous chef that never sleeps. It's gonna do all the stuff I hate: the prep work, the chopping, the cleaning up, right? And all you have to do is come and cook the delicious food and decide what's for dinner.
Sarah Richardson: Absolutely. I kind of want it to do the dishes too, 'cause that's my job at my house for sure.
And the dishwasher. Yeah. But meal prep works too. Love it. Sous chef. So AI is your sous chef. So if you were to choose a sci-fi movie or book that you think most accurately predicted today's AI landscape, what would it be?
Molly Zimmer: Okay. I don't think that any of them have done that great of a job, at least I hope not.
But I'm gonna say Her, not the creepy parts of Her, but the parts where, I could just think and philosophize all day long about that kind of blurring of the lines between human and machine. Even when we're talking about how it's gonna augment our jobs, and already is, I just think that's so fascinating.
So I'm gonna go with Her.
Sarah Richardson: That's a good one. If you could automate one mundane task from your daily [00:28:00] routine forever, what's it gonna be?
Molly Zimmer: Immediately, my head goes to my calendar. I am not lucky enough to have anyone who helps me with my calendar, and I hate it. It just sucks all of the joy out of my soul when I have to put together a meeting with a lot of people that have crazy calendars.
And it could do it. I mean, I just gotta figure out, I gotta make that happen, Sarah.
Sarah Richardson: You gotta play with it and get used to it and be comfortable with the loop. This is also the first time in over 20 years I have not had somebody do all of those things for me as well. And I'm terrible at calendar and time zone management.
Forget it, oh, that's a whole other meeting.
Molly Zimmer: I don't either; mine's a mess. That's,
Sarah Richardson: So that'll be our follow-up for the next time. We'll be like, here's what we've learned since the last time we chatted, here's how you manage your calendar using AI. You're gonna be like, Sarah, we've been doing that for months.
What's wrong with you? I know, we were so worried about the other aspects, we forgot about the easy stuff. Oh, I know, we can do the hard stuff with AI. Last question for the speed round: what is the most surprising or creative use of AI you [00:29:00] have seen from someone at St. Luke's after completing your program?
Molly Zimmer: This isn't really a creative use necessarily, but I would say one of the most creative sources of AI use has been, we have this group of chaplains. They, I swear, are some of the most active. We have an email where you can send any questions you have, or if you have ideas or anything like that.
And our little group of chaplains have been like, they have all kinds of ideas and we're constantly exchanging emails with them. So I feel like that's been a surprising and really kind of fun source of AI use.
Sarah Richardson: I love that the chaplains are using it. I had to ping AI to figure out how long it takes for a new Pope to get, you know, chosen.
And I was like, oh, all these things I'd learned, 'cause you don't always pay attention, it's only happened, oh, that's fascinating, three or four times in my lifetime. And I went back and I was so proud of myself, because I hadn't really paid that much attention. And I thought, oh, 'cause I use AI instead of Google now for most things.
Me too. I have to [00:30:00] verify the truth, obviously. But it's pretty fascinating when, like, you need to book a flight and it'll tell you which airline goes from here to there, you know, the most easily. And it just cuts out having to have four browsers open, because it goes back to, oh yeah, if you don't have, like, a Concur or a program to utilize or an admin, you're having to do it all yourself, and you're like, hallelujah
for AI, being able to pull up something as simple as doing multiple-city travel itineraries for an upcoming work trip, as an example.
Molly Zimmer: For real. Yeah, you can just
Sarah Richardson: do my expense reports, although it's pretty darn close there. You can just forward the receipts now and it's got it. But all that being said, I love what you are doing at St.
Luke's. Thank you for being in our summits. Thank you for being a part of our Town Halls. Thank you for getting ready to host a city tour dinner in Boise in September. There's just a little bit of St. Luke's always involved in This Week Health, and we are so grateful for all of you. Cannot wait to keep telling the story of this AI literacy journey with you, Molly, and with St.
Luke's. Thank you for being on the show today. [00:31:00]
Molly Zimmer: Thank you. So much fun. Really appreciate it, Sarah.
Sarah Richardson: Likewise. For all of you listening to Town Hall, that's all for now.
Thanks for listening to this week's Town Hall. A big thanks to our hosts and content creators. We really couldn't do it without them. We hope that you're going to share this podcast with a peer or a friend. It's a great chance to discuss and even establish a mentoring relationship along the way.
One way you can support this show is to subscribe and leave us a rating. That would be very much appreciated. Thanks for listening. That's all for now.