This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Hey everyone. I'm Drex, and this is the two minute drill. I'm kinda losing my voice, so bear with me. I have spent a lot of time lately talking about AI agents and what they are and how they work and why they're suddenly everywhere. But today I wanted to slow things down a little bit and talk about something deeper because I think we've crossed a line and most people haven't really noticed yet.
I didn't even realize what was happening until I chatted with somebody about this over the weekend. You've heard me say this before, but this is another one of those moments where I say, oh, man, that's one of the coolest things ever. And then a second later I say, oh crap, this could be really bad. So I'm glad you're here today and here's some stuff you might wanna know about.
I'm gonna talk about two main tools today. They may sound nerdy, but I promise you don't have to be technical to follow along with this story. In fact, it may even be to your advantage not to be overly technical. So today I'm gonna talk about two items that go together. One is called Moltbot.
The other one is called Moltbook. Moltbot and Moltbook. Try to say that five times fast. To tell this story, I have to start with Moltbot. So Moltbot isn't a chatbot, it's not a workflow agent, and it's not a run-this-task-and-stop kind of AI agent. Moltbot is a product that is itself an agent. It was launched really just days ago, and I think it's pretty easy to say at this point
it's well on its way to going viral. So what is Moltbot? Moltbot is probably best described as a personal operating system for AI. It connects a bunch of different apps and communication tools that you use already, like Slack and Signal and Microsoft Teams. It can use a bunch of other tools and websites that you use every day to help you get your work done.
It's persistent and proactive. It's not passive, so it works for you all the time. Once you set it on a mission, it becomes relentless, always trying to find better and better ways to solve your problem, and other associated problems that it comes up with on its way to solving the original problem.
So it lives. And I know that may not be exactly the right word, I'm sure it's not, but I'm gonna use it. It lives, it remembers, it observes, it changes its own behavior over time. It can create copies of itself and coordinate with those copies when the workload gets too heavy or when it needs some kind of subspecialization.
And once it's running, it doesn't wait for the next prompt. It just keeps going. And that alone is a shift. It's a whole new kind of ballgame for AI agents, and I'm probably not really doing it justice with this description. But the real change isn't Moltbot itself. That's a big deal, but the real change is what happens when Moltbot isn't alone.
So enter this new thing called Moltbook. This is the second thing I wanted to talk about. The cleanest way to think about Moltbook is this: Moltbook is a social media platform for Moltbot agents. So think of it like Reddit or Facebook, but for Moltbot agents to figure out how to work together. It's not social media in the human sense, but a shared space where agents can encounter other agents, observe how they solve problems, learn what works, copy or adapt strategies that they see into their own behavior, and teach behaviors to other members of the Moltbook community.
In other words, agents don't just learn from the data anymore, they learn from each other. And that's new, and it really matters, because up till now we've kind of told ourselves that with agents we are in control. Humans set the goals. Humans define success. Humans stay in control. But Moltbot plus Moltbook is aggressively breaking that model, and it's only existed for a few days.
Like, Moltbook launched on January 28th, and there's already more than a million, more than a million and a half agents on Moltbook, with hockey-stick-shaped growth lines. But it kind of makes sense, right? Once agents can learn socially, optimization accelerates and behaviors converge. But unfortunately, that also means that decision logic becomes harder to explain.
And remember, these agents work at the speed of light, and they don't need breaks or sleep or food or vacations. So human roles may kind of shift here. In this model, we're no longer driving. We're kind of supervising, and mostly after the fact. That's not human in the loop. That's human on the loop, and sometimes it's not even that.
Now I wanna be clear. This isn't Skynet, and this isn't about sentient AI, although it kind of starts to feel that way. It's not about evil intent, it's not about machines going rogue. This is about delegation without governance. These agents aren't doing the wrong thing. They're doing the most efficient thing, and they're doing it in the most efficient way possible by teaming together and learning from each other.
But as we all know from our own experiences in healthcare, the drive for optimization without context can lead to harm. And we've seen snippets of this movie in the past: high-frequency trading systems that learned from each other and then crashed stock markets. But those kinds of systems worked in a very narrow business lane.
And what we saw there was that even in that very narrow lane, the damage can be huge to the folks involved with that business. And now we're talking about autonomous agents with access to lots of systems, learning how to get to the other systems they need from their Moltbot friends to solve your problems, and what they perceive to be other problems that they discover along the way.
Agents that can persist and adapt and teach one another and don't need rest. That's the shift Moltbot and Moltbook represent. It's not a tool, it's an ecosystem in the truest sense of the word. Now, you don't need Moltbot in your environment for this to matter to you, because the start of those same patterns is already showing up in the agents that you're using today.
Whether it's security tool automations, or rev cycle optimization, or scheduling engines, or other vendor-managed AI platforms, anywhere AI is allowed to decide and act and persist and learn, those agents have already kind of taken this first step. And that's why on some of my previous shows I've shared this concern: most of our health systems
don't really know what agents they have operating in their environment today. So start there. You need to figure this out. What agents do you have in your environment? Understand what's happening with those agents, and do that sooner rather than later. Moltbot shows us what persistent agents look like, and Moltbook shows us what happens when they learn from each other socially.
Together, they mark the moment AI stopped being a collection of tools and started becoming an ecosystem of actors. And ecosystems don't need bad intent to cause damage. Sometimes the most helpful person can be too helpful and create a massive problem. Same goes for AI agents. I'll post more in the comments where you can dive in deeper on Moltbot.
And if you're interested in spectating, you can go watch Moltbot agents interact with each other today if you head to moltbook.com. That's it for today's two minute drill. And yeah, stay a little paranoid, and I will see you around campus.