This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Hey everyone. I'm Drex. This is the Two Minute Drill, where I cover some of the hottest security stories in healthcare, all part of the 229 Project Cyber and Risk Community here at This Week Health. It's great to see you today. Here's some stuff you might wanna know about. Let me tell you a quick story.
Okay, quick story. You know me by now; sometimes I ramble, and this whole show started off with a rambly draft article that I'll probably publish sometime eventually. But for this episode, I'm gonna keep it mostly short, mostly non-technical, mostly plain English.
For years, when we talked about insider threats, we were talking about people. Insider threats were people: a stressed employee or a leader who clicked on something they probably shouldn't have, or a disgruntled admin who's mad and, for some reason, wants to get back at the hospital. And then we started adding AI to our environments.
Specifically, AI agents. And I'm not talking about just cybersecurity (I'll talk more about that here in just a minute), but AI agents across business, clinical, and research operations. And I know what some of you are gonna say. You're gonna say, look, we hardly have any agents, or we're just starting to talk about using agents.
So that makes this the perfect time to have this conversation, as you begin to add agents. And buddy, I'm telling you, 2026 is gonna be the year of the AI agent. At first, it will feel harmless, 'cause you'll add agents that just do simple things: agents that read alerts, agents that summarize daily work for a dashboard, agents that augment analysts and help them do their work faster.
So far, so good. That's great. I really like that. But then we'll cross a line, almost without noticing: we'll start to give AI agents access to tools. And that's the moment everything will change, because once an agent can use tools to call APIs, query systems, send messages, and change settings, it starts to chain those things together, because we said it's okay. At that point, the agent is no longer just helping humans. It's acting on behalf of humans inside the organization.
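If it helps to see it in code, here's a minimal sketch, in Python, of what "giving an agent tools" can look like. Every name here is hypothetical, my own invention for illustration; no specific agent framework or hospital system is implied. The point is just that once tools are registered, the agent, not a human, decides which ones to call and in what order:

```python
# A minimal, hypothetical sketch of agent tool access and chaining.
# None of these names come from a real framework or product.

def query_patient_census(unit: str) -> dict:
    """Hypothetical read-only tool: pulls census data from an internal API."""
    return {"unit": unit, "occupied_beds": 42}  # stubbed response

def send_message(recipient: str, body: str) -> None:
    """Hypothetical action tool: sends a message on the agent's behalf."""
    print(f"To {recipient}: {body}")

# Once tools are registered like this, the agent decides which ones
# to call and in what order. That's the chaining.
TOOLS = {
    "query_patient_census": query_patient_census,
    "send_message": send_message,
}

def run_tool(name: str, **kwargs):
    """One step of an agent loop: look up a tool by name and execute it."""
    return TOOLS[name](**kwargs)

# The agent chains: read with one tool, act with another.
census = run_tool("query_patient_census", unit="ICU")
run_tool("send_message", recipient="ops-team",
         body=f"ICU occupancy: {census['occupied_beds']} beds")
```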
That's when I realized something. The next big insider threat group isn't a person. It's probably an AI agent, and not because it's evil. It's because it's powerful, it never sleeps, and it's relentless in doing what it thinks you want it to do, but it's not very smart.
Think about it this way: imagine you hired a new employee who can read almost everything on the network and outside of it. That employee works nonstop, follows instructions perfectly, and moves at machine speed. Now imagine that you didn't give that employee clear rules about what data they can touch, what actions they're allowed to take, or when they need approval to take those actions.
That would be reckless. It would be insane. But that's exactly how a lot of organizations are probably deploying AI agents today. Here's the part that makes this tricky: when an AI agent causes a problem, just like with other insider threats, the action the agent takes doesn't look like an attack. The system logs look normal.
The workflow runs successfully. Nothing crashes. No alarms go off. From the security team's perspective, nothing happened, until sensitive data shows up somewhere it probably shouldn't, or an impactful decision gets made that no human was involved in, or an automated action quietly crosses some other line.
That's insider risk by outcome, even if nobody meant for it to happen. And here's the key insight I want people to walk away with: the moment an AI agent gets tools, it becomes an insider. At that point, you have to treat it like one. That means clear boundaries about what it can access, tight limits on what it can do, and human approval for actions that carry real consequences.
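And since I just described those guardrails in plain English, here's a minimal sketch of what they might look like in code. Same caveat as before: every name is hypothetical, and this is a pattern (an allowlist plus a human-in-the-loop approval gate), not a product or a definitive implementation:

```python
# A minimal, hypothetical sketch of the three guardrails described above:
# boundaries on access, limits on actions, and human approval for
# consequential ones. All names are invented for illustration.

def send_message(recipient: str, body: str) -> None:
    """Hypothetical consequential tool: acts on the agent's behalf."""
    print(f"To {recipient}: {body}")

TOOLS = {"send_message": send_message}

ALLOWED_TOOLS = {"send_message"}       # clear boundaries: what it can access
REQUIRES_APPROVAL = {"send_message"}   # real consequences need human sign-off

def guarded_tool_call(name: str, approver=input, **kwargs):
    """Run one agent tool call, enforcing scope and human approval."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"agent may not use {name!r}")
    if name in REQUIRES_APPROVAL:
        answer = approver(f"Approve {name} with {kwargs}? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError(f"{name!r} denied by human reviewer")
    return TOOLS[name](**kwargs)

# Example: the message only goes out if a human reviewer types "y".
guarded_tool_call("send_message", recipient="ops-team", body="test alert")
```

The design choice that matters in this sketch is that the approval check lives outside the agent, in the tool-calling layer, so the agent can't reason its way around it.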
Now listen, I realize I've been saying for the last several weeks, maybe longer, that in cybersecurity we're gonna have to lean hard into AI for cyber defense. This isn't an argument against AI in cyber defense. We need AI, because the attackers are already using it.
But if we don't build guardrails around the agents, yes, the ones that work for us in cybersecurity, but especially the ones using business, clinical, and research tools and data, we're gonna create a new class of insider incidents that nobody knows how to explain. AI is becoming part of the workforce. If we don't govern it like part of the workforce, we're gonna be surprised by the damage it causes.
Not because the AI agent failed, but because it did exactly what we allowed it to do. I'd love to hear what you're thinking about this. Post a comment or drop me a DM. That's it for today's Two Minute Drill. Thanks for being here. Stay a little paranoid. I'll see you around campus.