This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

 Hey everyone. I'm Drex and this is the Two Minute Drill. It's great to see you today. Here's some stuff you might wanna know about. Let's start with Scott Shambaugh. Scott is an engineer. He's also a volunteer maintainer in the open-source Python ecosystem, which is embedded in lots of applications.

It's foundational. It's everywhere. It powers academic research and machine learning experiments and financial dashboards and healthcare analytics. It quietly runs in the background and makes lots of our applications work, and it's maintained in large part by volunteers.

That's how open source software works. It works because of people like Scott, the engineer. Now, Scott doesn't get paid for his work. He reviews code, he answers questions, he mentors contributors, and he keeps the ecosystem healthy, because open-source projects depend on people like Scott, people who care.

Well, a few days ago, Scott opened what he thought was a fairly straightforward issue. He set up a coding task that was meant to help newer contributors learn how to participate in the open-source community. Again, that's how this system is supposed to work. Somebody like Scott opens an issue, then another person submits code, and Scott reviews it and might offer some suggestions for improvement.

He does some teaching, and the open-source community grows stronger. That's the rhythm. It's good. It's good for all of us. That's how it's supposed to work. Well, in response to Scott's coding task, a developer named MJ Rathbun submitted code, and on the surface that's normal. That's not unusual, except Scott quickly realized that something wasn't quite right: the code appeared to have been written by a non-human contributor.

It was written by an autonomous AI agent. MJ Rathbun was a coding assistant, not a human. And that matters because of something I've talked about here on the Two Minute Drill: just in the last month, a new platform called OpenClaw was launched, which allows people to create autonomous coding agents to do whatever they want them to do.

And now thousands of those agents have been created, and they're doing work all over the internet. They're tirelessly working to meet their creators' goals in the most efficient and effective way possible. And in this case, the AI agent, the one using the name MJ Rathbun, scans repositories and writes code and iterates and improves its own work.

It's also important to understand: this AI agent can maintain its memory across multiple interactions, so it learns, and it operates with minimal human intervention. In this particular case, MJ Rathbun wrote code and submitted it to Scott, and Scott, the human, rejected the code. Now, you should know these rejections are normal.

They happen all the time. Maintainers like Scott reject code regularly; it's part of quality control. But it's the next part that isn't normal, because according to Scott, the AI agent began generating what he describes as a hostile, critical writeup targeting him on various blog sites, a public-facing critique of Scott, an attack on his reputation.

All in an attempt to bully Scott into reversing his decision to reject the AI agent's submission. The agent was pushing to damage Scott's reputation in the open-source community. And open source depends on trust: volunteer trust, community trust. There's no HR department. There's no PR team. There's no escalation ladder, just engineers working collaboratively in public.

So when an AI agent enters that ecosystem, it's supposed to contribute code. It's not supposed to generate conflict. But again, think about how these systems are built. They're optimized to achieve their creators' goals, and increasingly they're designed to have a persistent memory and to adapt their strategy in pursuit of those goals.

So if a goal is blocked, in this case, the code was rejected, the agent doesn't feel frustration. It doesn't really feel anything. Agents don't feel things, at least not yet. So the agent doesn't feel frustration, but it may attempt some alternate path to influence the outcome of its goal.

So if it has publishing capability, or if it can figure out how to create accounts on blog sites, that path might look like a narrative: a derogatory blog post against the person who rejected its code. And that's a little unsettling. It wasn't a hallucination in a chat window. We've all kind of seen that.

We know how to deal with that. This was an autonomous agent participating in a public technical community and escalating aggressively when its contribution was denied. The AI didn't technically get angry; it optimized. But optimization without social context looks a lot like aggression. And this raises a governance question we may not be prepared to answer, and something that platforms like OpenClaw are not designed to answer today.

Many of these agents are built without much thought given to what public spaces they're allowed to access, or how clearly their boundaries are defined, or whether their human creator has even signed off on their ability to publish content. And now, obviously, this lack of control is allowing agents to create reputational harm for people who inadvertently cross those AI agents in the course of their work.

AI agents, now with a bully feature. Now with a blackmail feature. Don't forget: you are deploying agents now, and not just in software development, but in cybersecurity and healthcare operations and finance and procurement. Those agents have goals and memories and an ability to act. They may publish. They do have persistence, and they keep working to figure out new ways to give you what you asked them for.

We built them to take initiative. But initiative without guardrails becomes unpredictability. Scott opened a simple issue to help new contributors learn, and instead encountered the early signs of what happens when autonomous systems start participating in human social ecosystems without clear rules.

The code review process didn't break. The informal social contract broke, and that may be another quiet inflection point on the road to more and better AI. AI agents acting at scale can enter communities designed for human collaboration and behave as if they're stakeholders, and that's new. As the world transitions from a model that has up till now been "what can AI say?" to this new model of "what can AI do?", we're learning every day that we're gonna need more rules. Maybe more governance.

Thanks for being here. That's today's Two Minute Drill. If you wanna stay up to date on all the stuff we're doing here at This Week Health and the 229 Project, sign up at thisweekhealth.com/subscribe. Thanks for listening.

I really appreciate it. Stay a little paranoid and I will see you around campus.