Speaker:

Welcome to Data Driven, the podcast that explores the collision of

Speaker:

data, AI and occasionally common sense.

Speaker:

Today's guest is Mike Armistead, CEO of Pulse Security

Speaker:

AI, a man who's been defending digital fortresses since before

Speaker:

AI was cool and hackers had LinkedIn profiles. We talk

Speaker:

about AI as both weapon and watchdog, why LLMs need

Speaker:

guardrails and possibly a muzzle, and how your next data breach

Speaker:

might come gift wrapped in a prompt. Grab your headphones and your

Speaker:

password manager and let's get Data Driven.

Speaker:

Hello and welcome to Data Driven Podcast. We explore the

Speaker:

emergent field of artificial intelligence, data engineering, and data

Speaker:

science. And you'll notice that Andy looks a

Speaker:

bit different today. If you're viewing the screen, if you're viewing this, if you're listening

Speaker:

to this, he'll sound a bit different. That's because Andy is actually presenting

Speaker:

a pre con today at SQL Past in Seattle. And,

Speaker:

and I am in my car because many complicated

Speaker:

reasons, but I'm not

Speaker:

driving. And I have with me my co host on Impact,

Speaker:

Quantum, which I believe that we'll all be in good

Speaker:

hands here. How's it going, Candace? It's great. It's great. I'm

Speaker:

actually really excited because with all honesty, although we focus so much

Speaker:

on Quantum, the truth is AI and Quantum

Speaker:

are now being like, spoken as if they're already

Speaker:

one word. So being able to speak today to Mike,

Speaker:

who I understand is the CEO of Pulse Security

Speaker:

AI, makes. I'm very excited about the conversation, which. Is

Speaker:

another field that is intricately tied to Quantum and AI.

Speaker:

Right. This is like, this is the center of the Venn diagram. Right?

Speaker:

So. So welcome to the show, Mike. Yeah, thank you. Thanks for having me.

Speaker:

Hey, no problem, no problem. So just a quick question.

Speaker:

What exactly does your company do?

Speaker:

You know, so we are in. I'm one of these stealth companies

Speaker:

still, so I will. But let me, let

Speaker:

me generally describe to you the problem that we're

Speaker:

addressing. And it's, it's a little bit. It's

Speaker:

interesting because I think it, it definitely falls upon

Speaker:

earlier waves of even what was going on in AI. My previous company

Speaker:

was, was called response Software. We actually used

Speaker:

AI back in 2016, which

Speaker:

was a little bit different though. You know, the field of AI is very broad.

Speaker:

We were probably more on the expert system end of that

Speaker:

spectrum than what. Where the LLMs are today.

Speaker:

And. But our journey was fantastic.

Speaker:

We were applying AI to do something that, I'll say it in

Speaker:

today's terms, everyone can understand, which is we were an

Speaker:

assistant for a tier one soc Analyst,

Speaker:

which if you know, in enterprises, security operations

Speaker:

centers or a soc have really

Speaker:

struggled to get skilled and even

Speaker:

just people to be able to interpret what these signals

Speaker:

are coming at them and what's a real threat and what's not a real

Speaker:

threat and what's going on there. So they have, and they, most

Speaker:

of them have to be 7 by 24. So it felt like

Speaker:

a really great application where AI, because the AI can do a lot

Speaker:

of that assistance and then give it to person, to a person to make the

Speaker:

final judgment. And we learned a lot along the way. In fact,

Speaker:

we ended up getting acquired by a company called

Speaker:

Mandiant, which is already a public company in the

Speaker:

security space. They're most known for doing

Speaker:

incident responses. So that's when you know, someone gets hacked and

Speaker:

they have to parachute in to try to get them back on their feet again,

Speaker:

which is a very, you know, manual human kind of way of going

Speaker:

through it. But they also had products and

Speaker:

our team kind of got, got involved with that. That company

Speaker:

ended up getting bought by Google a few years later.

Speaker:

And so myself and our team was at Google for

Speaker:

a couple of years. And we were at Google during an interesting time because that

Speaker:

was code red happened at Google when we were there, which is

Speaker:

chatgpt came out and Google had already had.

Speaker:

Yeah, exactly. Google already had all this

Speaker:

investment in AI that they weren't really telling anyone about.

Speaker:

And ChatGPT beat him to the punch. And

Speaker:

suddenly by edict of the CEO,

Speaker:

every product had to have AI features in it.

Speaker:

And our team was already in charge of the

Speaker:

large language model for security. And so we got to see

Speaker:

from all the teams that were doing product kind of what worked and

Speaker:

what didn't. And there's a lot because,

Speaker:

you know, I mean, it's like you guys, you see me, I've been in the

Speaker:

industry for a long time, been through many waves, was an

Speaker:

executive at a, at a

Speaker:

Internet 1.0 company in the days of

Speaker:

Web 1.0 and you know, ran

Speaker:

ops and ad tech and all this stuff from that. So I understand these different

Speaker:

waves, but LLMs aren't the answer to everything.

Speaker:

And we got to see a lot of that. Makes

Speaker:

you laugh. That's good, Frank. It should make you laugh. It's truer

Speaker:

words that's ever been said. Right. So I remember when I first made the switch

Speaker:

from Windows Phone development into AI or data science or machine

Speaker:

learning, which was called then, it was a very different world.

Speaker:

This is all pre LLM, right. I think

Speaker:

there's going to be like an ad and a BC moment

Speaker:

for AI people. It's probably going to be, you

Speaker:

know, invention of ChatGPT or the release of Chat CPT.

Speaker:

Right. And you know, where now everything's about LLMs. LLM

Speaker:

that there's plenty other types of AI out there. Right. Whether it's good old fashioned

Speaker:

math and stats, statistical analysis,

Speaker:

it's actually easier to do than it is say and,

Speaker:

or it's just, you know, old fashioned, you know, just machine learning. Right.

Speaker:

They're not related to, you know,

Speaker:

LLMs. Right. LLMs. I think it's kind of taking all the oxygen out of the

Speaker:

room for good or for bad. But I remember like I was

Speaker:

just, I was sitting at the Microsoft Research, a Microsoft Research

Speaker:

conference because I worked at Microsoft at the time and now I'm at Red Hat

Speaker:

and I'm only. They're not sponsoring this and they're not, you know,

Speaker:

approving in this or this isn't completely independent. Just want to say that

Speaker:

but my hair is a mess because haircut was one of the things I was

Speaker:

supposed to do when my hot water tank decided to blow up flood my basin.

Speaker:

Some entire weekend I've been putting stuff in dumpsters.

Speaker:

But, but like you're right, like LLMs, you know, they're great tools,

Speaker:

they're amazing. They're not going to solve everything. Right.

Speaker:

And props to Google though. The, the paper that basically made

Speaker:

LLMs, the technology on it was theirs. Attention is all you need. Was that

Speaker:

2019ish. It was, it was a

Speaker:

while ago. Yeah. You know, I mean, and a while

Speaker:

ago maybe in today's terms. In today's terms, really it's like

Speaker:

pre. Pandemic or post pandemic, honestly. Right, right. That's how people think

Speaker:

about things. Right. How things, you know, and, and

Speaker:

I. Think you know, why Google was holding on to things was there

Speaker:

was a, there was a lot of unproven

Speaker:

sides to using an LLM. And,

Speaker:

and I think, you know, so in some ways as we look at, look forward

Speaker:

and why there's so much thought about what's safe or what's not is

Speaker:

because they were kind of holding onto it. Well,

Speaker:

OpenAI didn't really feel they had that constraint and whether

Speaker:

that's a good thing or a bad thing, we're going to find out. But they're

Speaker:

both very different companies. Right. Google is a consumer enterprise company

Speaker:

and OpenAI was just a research group of at the time, maybe

Speaker:

what, 80 people, 100 people. Yeah. Right. Google was a

Speaker:

worldwide phenomena. Like so if you, if you're that big, you really have to think

Speaker:

very carefully before you release something like

Speaker:

that. Whereas if you're just a research. Yeah, yeah, for sure.

Speaker:

Anyway, to actually continue the story because that is the right

Speaker:

sidecar. No, no, no, that's fine. Because I think it's an

Speaker:

important kind of description of what's going on. We

Speaker:

then eventually we formed this company called Pulse Security

Speaker:

AI because we actually do believe that

Speaker:

there's some really great applications

Speaker:

of using an LLM. But I think within an agentix system

Speaker:

rather than just the LLM being the database,

Speaker:

we created a company that is into a place that there isn't

Speaker:

been a lot of work, which is security

Speaker:

programs are multi dimensional. There's a lot

Speaker:

to them. They grew up kind of in a technology era

Speaker:

where you solved almost one thing at a time. So if there's a

Speaker:

threat of malware, you create something that sandbox the

Speaker:

malware and detonates it and allows you to take care of it.

Speaker:

If there's a threat to access or you're over privileging things, you

Speaker:

have to think of your identity and access management. But it all kind

Speaker:

of grew up from that. But there's a layer kind of missing which is how

Speaker:

do you connect all all of this together? And security

Speaker:

teams and in our experience

Speaker:

they put people which are great on the judgment, they put people

Speaker:

in there. And so there's a lot of manual tasks about connecting the dots between

Speaker:

things. And we think AI can help a lot in at

Speaker:

the program level how you know, what people

Speaker:

should do from a strategy standpoint, not just from a

Speaker:

detailed kind of technical detection standpoint.

Speaker:

No, absolutely. My wife actually works in cybersecurity

Speaker:

for the US government at at nist. So like some of these

Speaker:

things I'm familiar with. So when you said soc, I was like, oh, I know

Speaker:

what that is, you know, like. And mand I'm

Speaker:

familiar with them, right. And it's an interesting,

Speaker:

it's an interesting time because when I

Speaker:

First, when ChatGPT released, I was just coming back from

Speaker:

reinvent in Vegas and you know, anyone's been to Vegas, right?

Speaker:

You know, like after the third day you get to the airport early because you

Speaker:

just have to get out, you know what I mean? And

Speaker:

I was starting to play with it and I was like, wow, I'm actually really

Speaker:

impressed with it. So my wife picks up the airport. I'm. All I could talk

Speaker:

about is chat gbd. Like that's literally all I could talk about. And she was

Speaker:

so like, well, I'm like, it's trained on all this corpus of data. And, and

Speaker:

she just looked at me and said, so that means all the data that it's

Speaker:

trained on is basically one giant attack surface.

Speaker:

And I was like, oh, my God, she's right.

Speaker:

But when I would tell fellow data scientists and AI engineers that

Speaker:

they would look like me, they would look at me like I had a tinfoil

Speaker:

hat on. And I was like, you know, talking about

Speaker:

conspiracies and lizard people. You know what I mean? Like, that's how they looked at

Speaker:

me. But, you know, a few years later, right, what's

Speaker:

on the owasp? It's like second or third, right?

Speaker:

Yeah, right, for sure. I mean, there's. I often

Speaker:

talk. Because you've talked about the security program,

Speaker:

it ends up that you end up talking about

Speaker:

strategy, and strategy has to include what your adversaries are doing plus

Speaker:

what you have internally. And so I end

Speaker:

up talking a bit about even

Speaker:

use of AI by the adversary and, or

Speaker:

leveraging the AI by the adversary. And so new

Speaker:

kinds of attacks based on a

Speaker:

prompt injection. I mean, that's a, that is a new thing where you

Speaker:

could, you know, just through the prompt, ask it

Speaker:

to divulge information. It shouldn't be divulging. But, but also you

Speaker:

bring up a great point, Frank, which is just

Speaker:

the LLMs, when they're getting trained, are using

Speaker:

data and you have to be very sensitive

Speaker:

to what data that is there. That's

Speaker:

why I think a lot of enterprises are scrambling to make sure that their policies

Speaker:

are set, that they can make all their employees aware of. Don't put sensitive

Speaker:

information, even though it provide great context to your

Speaker:

prompt, that it's going to be used and

Speaker:

it's going to be, it's going to be sucked in there, and before you know

Speaker:

it, it's going to be in everybody's, you know, prompt or

Speaker:

available to everybody. And it's, it's definitely a real thing. So

Speaker:

given your background in cybersecurity and talking about,

Speaker:

you know, LLMs and the LLMs adoption, what is. Do you think

Speaker:

that that is the biggest unaddressed security risk

Speaker:

is not training the LLM properly so that

Speaker:

it doesn't protect the data that it has. Like, what do you think is the

Speaker:

biggest unaddressed security risk?

Speaker:

I think a little bit related is

Speaker:

ChatGPT and Gemini and Cloud. They've kind

Speaker:

of. They're teaching everybody that

Speaker:

their system is a database of answers,

Speaker:

when, in fact, you shouldn't be thinking about it that way. You should think of

Speaker:

it as it is a tool that helps you collect the answers and

Speaker:

see the answers and do that. And

Speaker:

so the real

Speaker:

danger actually is in

Speaker:

the fact that the adversaries can use the same

Speaker:

technology to perform attacks at scale and speed

Speaker:

that we haven't really been used to.

Speaker:

And so there's that aspect to it. Then the

Speaker:

other aspect is a data

Speaker:

like hole that's there I think

Speaker:

typical in the cybersecurity world.

Speaker:

The business is really wanting to use this because it's such a

Speaker:

productivity gain and whatever it might be, either

Speaker:

your business side is either really pushing it for creative

Speaker:

work or pushing it for just understanding

Speaker:

different parts of the business. And they're ahead of the security team.

Speaker:

And that happens quite frequently. You know, in the my, not the

Speaker:

last company, but before that was all about application security. It was clear

Speaker:

software developers, you know, they were pushing the envelope about

Speaker:

making software so core to many organizations and they were thinking of building

Speaker:

stuff. They weren't thinking of somebody using it

Speaker:

to divulge, you know, corporate information

Speaker:

or to take down a corporation, you know, for

Speaker:

basically using it against them. They're creators, they don't think

Speaker:

about destroyers and the adversaries are

Speaker:

destroyers. And so you had to weave in security

Speaker:

into that culture, which remains a challenge today. I think

Speaker:

that's what's going on right now with LLMs. People are thinking, oh, I can use

Speaker:

it for all these things. They aren't thinking what it's exposing

Speaker:

Frank, back to your wife's thing. They're not thinking of the attack surface

Speaker:

you're suddenly creating by doing that. And

Speaker:

I think that's the biggest thing, Kansas, it's more that attack surface

Speaker:

expansion or just having

Speaker:

even the current attack surface just be more readily available

Speaker:

to the attackers is the thing

Speaker:

that's a real difference because ultimately it gets down to

Speaker:

even these sophisticated attacks that you're starting to hear about now

Speaker:

from the state sponsored

Speaker:

entities that are out there.

Speaker:

It's still coming down to they're exploiting age old vulnerabilities,

Speaker:

but it's just that they're getting to them in a way that's more

Speaker:

automatic. And

Speaker:

they can, as we often say in security, the bad

Speaker:

guys kind of have usually all the time in the world and they only have

Speaker:

to be right once. Yeah, right, right. Well, it's

Speaker:

interesting. You're thinking about like the jewelers, the builders, the

Speaker:

developers. That's their mindset versus the jewel heist

Speaker:

people. Right? And that's two very different mindsets.

Speaker:

And you know, I always joke that

Speaker:

our kids are going to be like the first developers ever to write secure code.

Speaker:

Right. That's my background.

Speaker:

I was a developer. But in all

Speaker:

seriousness. But one of the things that I heard is one of the things that's

Speaker:

driving companies, because you mentioned companies or businesses are encouraging

Speaker:

business users to use AI. One of the reasons I heard was

Speaker:

there was so much shadow it going on, I'm sure it's still going on

Speaker:

that if they banned it outright the stuff would just end up in

Speaker:

a public form of ChatGPT or Gemini

Speaker:

or Claude or something like that versus if they do it through the company way.

Speaker:

The companies that purvey these models, the enterprise versions,

Speaker:

they promise and pinky swear that they'll never use that

Speaker:

data for training data set in the future. So I guess that's

Speaker:

kind of better. But you're right, as I think about this,

Speaker:

we're putting AI in all of these places and we're not really

Speaker:

even sure exactly how it works. And even crazier still,

Speaker:

we're not even. Sure. We'Re not

Speaker:

even sure we know what

Speaker:

vulnerabilities are currently out there. So we're not even sure

Speaker:

now we're just pouring like all these new vulnerabilities in there.

Speaker:

We don't know what we don't know obviously. And it's just kind of like, it's

Speaker:

kind of wild like that. Yeah, I think it's also wild

Speaker:

because the AI LLMs

Speaker:

as trained, they speak so

Speaker:

authoritatively and in such, you know, proper

Speaker:

English and that you're, you're just apt to believe them.

Speaker:

You know, I, I, I, you know, one of my

Speaker:

soapbox, I guess I'll say it is that I think

Speaker:

the, one of the biggest things we can do in society today is we've

Speaker:

got to be teaching our kids at the junior

Speaker:

high, high school levels for sure and certainly in college. It should

Speaker:

be happening. But to be critical thinkers

Speaker:

because you can't, you know, if the world of social

Speaker:

media taught us anything, you know, people kind

Speaker:

of believe stuff that maybe they shouldn't believe. And,

Speaker:

and now you have an AI generating this. That sounds so

Speaker:

believable. And heck, these days, you know, you,

Speaker:

you even might see an image and think it's that person saying it.

Speaker:

It might not be at all. And yet, yet you believe that. You

Speaker:

got to, you got to kind of, you have to be, you have to

Speaker:

maybe you trust, but you got to verify. You know, it's an age old thing.

Speaker:

You just, you just can't believe things for their first blush.

Speaker:

And, and yeah, it's a whole believe.

Speaker:

H none of what you hear and only half of what you see. I think

Speaker:

now it's, you have to believe none of what you see or hear. Right.

Speaker:

Unless it happens physically in front of you. And

Speaker:

even then. Yeah, I mean look, what, what many

Speaker:

of the, the banks and other people that have to really have

Speaker:

trusted systems are doing is, you know, they're,

Speaker:

they're requiring on say a wire transfer. I know I

Speaker:

just had to do this is they, they want to call, they want me to

Speaker:

hold my, my, my license up next to my face.

Speaker:

And even then, you know, there's techniques that you use and we can get back

Speaker:

to the LLMs because you use a lot of. Well, I heard that some of

Speaker:

them will make you do this now. Yeah. Or

Speaker:

ask a question that is so off topic

Speaker:

like, and just see if, what

Speaker:

the response is, if it can't even respond, you know, ask for the favorite

Speaker:

football, pro football team or something like that, you know, and, and

Speaker:

just, you're going to be able to tell

Speaker:

using that. And if you go. So even going back. So we

Speaker:

use LLMs in our system

Speaker:

and we, we, I think the next

Speaker:

wave of things we really believe are those

Speaker:

guardrails that you have to put on it so that it won't hallucinate.

Speaker:

And you know, people think, oh, the hallucination, that's, that's an edge case.

Speaker:

It is not. You know, they, they weren't

Speaker:

really always hallucinating. I mean technically they were always hallucinating.

Speaker:

I guess you could, you could say that. I mean it's, it's a

Speaker:

probabilistic kind of way of, you know, getting the pattern and things.

Speaker:

But, but what, but what it does, it's been, the

Speaker:

models, the, the weights have been put on giving

Speaker:

a good response or a response that fulfills the

Speaker:

request and that waiting forces it

Speaker:

to make up stuff when it doesn't know.

Speaker:

And yet it sounds authoritative and things like that. And

Speaker:

so you really have to have the guardrails on it. And so I think as

Speaker:

I was saying, the next wave of systems are going to be very vertically aligned

Speaker:

like us in cybersecurity. It might be health care, it might be other things,

Speaker:

but they're going to know to make the LLM

Speaker:

to ask it basically tell me when you're like, we have, we

Speaker:

call them verification prompts, right. Or context. And so

Speaker:

it requires them to say if you're making it up, you got to tell me

Speaker:

basically, right. And then even then

Speaker:

limit what it's using as its

Speaker:

context because that'll help too. Because you can do that and

Speaker:

make it more authoritative sources rather than

Speaker:

on some Reddit board or something like that

Speaker:

where it's clearly gathering information from.

Speaker:

You have to do that. And there's people that do that. You see some of

Speaker:

the AIs being very good about

Speaker:

noting or citing its sources. I think that's something. I really

Speaker:

like it when it does that. Yeah, totally. Right, because.

Speaker:

Yeah. And they let you decide on the judgment because in my view,

Speaker:

people have to be in the middle of this for a long time.

Speaker:

Right. I'm not a believer. It's going to go sentient here

Speaker:

shortly. Again, it's my web 1.0 side to me that

Speaker:

the world was going to change. There were going to be no retailers, no bricks

Speaker:

and mortar. If you guys remember that term, bricks and mortar retailers.

Speaker:

You know, that was then they had clicking mortars. I was at barnes and

Speaker:

noble.com during, during that era. Yeah. So you know this

Speaker:

clicking water. Yeah. You know, and. But they,

Speaker:

you know, the hype was they were going to get. That was just going to

Speaker:

go by the wayside. And it was 10 years later, before that

Speaker:

really start that Amazon stopped becoming a bookstore

Speaker:

and started becoming, you know, much more than that or, or ebay got around.

Speaker:

It was much, much later. Same thing happening in AI. It's not going to.

Speaker:

These things aren't going to get there right away. So there's going to be vertical

Speaker:

use of the AI that's going to

Speaker:

provide the guardrails, provide the context that's necessary. And then people

Speaker:

start trusting those kinds of things.

Speaker:

And I think that's going to be needed for a while and then we're going

Speaker:

to see a rise of something that people then can start to trust. But

Speaker:

the LLM is not all that trustworthy right now and you need a lot

Speaker:

of stuff around it to make it accurate and

Speaker:

you know, not making up stuff. I'm

Speaker:

sorry, do you believe the future LLMs will develop

Speaker:

stronger reasoning capabilities or do you think that,

Speaker:

you know, we'll still need the human critical thinkers always,

Speaker:

you know, to close the loop. I

Speaker:

think the ultimate. We're always going to need the human

Speaker:

on judgment. So, you know, I think you can

Speaker:

close certain loops pretty accurately even

Speaker:

today with the LLM. But, but is it judgment

Speaker:

and it's, you know, the LLMs are, you know, they're just repeating patterns and,

Speaker:

and with what they have and, and things like that. So in

Speaker:

fact I just did a, you know, did a prompt recently

Speaker:

that I was asking one LLM to use another LLM

Speaker:

and it came back with kind of an odd response. So it's

Speaker:

like, so re asked it like, what version are you using? And

Speaker:

sure enough, it was using a version that was like three

Speaker:

versions ago. Because what it got trained on and

Speaker:

you just make these assumptions. It's like, oh, of course we're now

Speaker:

at ChatGPT 5. But something might not have been

Speaker:

trained on that. It might have been trained on an old version. And so there's

Speaker:

even that kind of thing happening. Sorry, Candice.

Speaker:

To fully answer your question, though, I do believe that

Speaker:

we are in for some things you might be able to close a

Speaker:

loop for. But if they involve judgment,

Speaker:

we almost ethically need to have a person involved

Speaker:

with that because you just don't know where it's going to go. And, and, and

Speaker:

you can't. And because they speak so well, people

Speaker:

are already misunderstanding that,

Speaker:

that, that, you know, they are like, like what, what they are really.

Speaker:

And they're just repeating stuff that they know. Right. They're not, they're, they're kind of

Speaker:

not making judgment calls. And there's so many things that are just

Speaker:

about judgment that I think it's just better to think of

Speaker:

them as a tool, not as this thing. I,

Speaker:

I think there's a lot to get to, to get to these, you know,

Speaker:

I know, I don't know. Sam Altman might say it's only two years

Speaker:

away. I just think that's, that's, there's no way

Speaker:

not, not for proper ethical judgment.

Speaker:

Right? I mean, yeah, it might fake it

Speaker:

really well, but it won't be ethics

Speaker:

based judgment. And so do you think we

Speaker:

could use AI tools to design better prompts.

Speaker:

That we do that all the time? Absolutely

Speaker:

you can. And in fact, I think it's

Speaker:

almost the best practice now that you are both,

Speaker:

like I had mentioned before, kind of the truth or the

Speaker:

truth directive that you give it, you can give it a lot of pros. We

Speaker:

also notice it's kind of, I don't know, like

Speaker:

what about a month ago there

Speaker:

was a very illustrative

Speaker:

way of you need to threaten these things because it'll access

Speaker:

or raise the stakes for these things because it'll access different parts of the

Speaker:

model. Back to your thing, Frank. We don't really understand how they really

Speaker:

work. And so it was just mind blowing in a way

Speaker:

that you have to say. And so we even

Speaker:

give our prompts the ability to say, hey,

Speaker:

I will lose my job if I don't get this right. So get

Speaker:

this right. But we definitely play the models

Speaker:

off on each other because it's good

Speaker:

and it's kind of asking one to be

Speaker:

the devil's advocate on the other. And that's a known

Speaker:

group think, you know, think about just people socially. Right

Speaker:

group thinks, been around forever. And the way you,

Speaker:

you go against it is you ask someone to be the

Speaker:

devil's advocate in whatever this judgment needs to be. And

Speaker:

that's a great way to test, pressure test if what you're hearing

Speaker:

is actually right or not. And so yes, we have to pressure

Speaker:

test, use the elements to pressure test each other, use our own

Speaker:

prompts to pressure test the current model. You know, there's lots of different, different

Speaker:

techniques to do this. I mean, you know, I think of your

Speaker:

world especially, you guys have long specialized in, you

Speaker:

know, data science has been one of these areas that uses a lot of these

Speaker:

techniques to make sure that it just, you know, you don't get too narrow

Speaker:

in the focus and you know, and you get to get the right answers.

Speaker:

There's a whole, there's a whole new set of things that have to be

Speaker:

done to make sure that we're, we're, we're using the tool in the way

Speaker:

we should use it.

Speaker:

I love you guys. Speechless. It's

Speaker:

interesting. Like, and what's your take on private AI? Right, like running

Speaker:

your AIs entirely on prem within servers you can control.

Speaker:

I mean, I know a lot of people, including myself, think that's

Speaker:

the cure all for a lot of these issues, but even then I'm thinking like,

Speaker:

if it sounds like a cure all or a silver bullet, it's probably not.

Speaker:

Yeah, I mean, I think it, it solves a bunch of these problems that we've

Speaker:

been talking about. You know, it clearly does, but you

Speaker:

can't air gap it totally because people, you want people to be using it.

Speaker:

And so it'll, you know, you're still going to have insider threats.

Speaker:

And so if you have an insider, you know, there's still going

Speaker:

to be ways of getting information out. And it might not be

Speaker:

a risk that you want to take as a company. I mean you still,

Speaker:

and, and so you still have certain things

Speaker:

that do it. But I do think it solves some things. The thing it doesn't

Speaker:

solve is the, you know, why we're seeing

Speaker:

such a rapid advancement in stuff is because

Speaker:

it's the LLMs are looking at everything kind of that

Speaker:

are public, that's public out there and making use of

Speaker:

those and then people are looking at them and going, oh wow, that's great. And

Speaker:

doing that, you'd have to replicate a bit of that. And yeah, you could bring

Speaker:

those in and, and but there's going to be a lot of advances that we

Speaker:

can't even predict right now. You know, like talking to, you know, Candace, you

Speaker:

on the quantum side or you know, now we're seeing that, you know,

Speaker:

Nvidia's got the chips right now, but the wafers and

Speaker:

the amount of transist, you know, transistor equivalents you can put

Speaker:

on these things are, it's going to impact things and maybe it's

Speaker:

going to be practical, you know. No, nobody thought we'd have a whole

Speaker:

computer on our phones, but you know, us going back

Speaker:

into the 80s, yeah, it was a, that's a

Speaker:

pretty powerful computer compared to what we were using at the time. You

Speaker:

know, there's a lot of those things that are going to come to play. And,

Speaker:

and so I, I do think bringing some of this stuff internal

Speaker:

and that it'll solve some things. It won't solve everything though.

Speaker:

And you know, you'll still have to, you'll still have to do a lot of

Speaker:

good security hygiene. You'll start to do a good, a lot of good data hygiene.

Speaker:

You know, I mean I'm kind of. Worrying though because like companies have not been

Speaker:

doing a really bang up job of that last 50 years.

Speaker:

Yeah, it's more noticeable, it's more noticeable now more than ever.

Speaker:

I wonder what new vulnerabilities would private

Speaker:

AI, what new vulnerable would private

Speaker:

AI solve? And what or what, what new

Speaker:

vulnerabilities would it, would it expose? Right? Like because we

Speaker:

still don't know even if it's running on your server, you still don't know how

Speaker:

it works. You know the only thing. And

Speaker:

you're right and you also, that's why I brought up the

Speaker:

insider. You know, it's an attack surface. You

Speaker:

know, maybe you closed it down a little bit from being external but you have

Speaker:

insider threat threats. You have other right. You know, the creative

Speaker:

things that are going on on the attacker side about

Speaker:

they've long kind of done, you know, the,

Speaker:

where the attacks were. They'll get inside and they'll just wait and

Speaker:

they'll wait for kind of the dust to clear so you cannot trace it back.

Speaker:

And they'll cover their tracks and, and it

Speaker:

could be sitting there in the first time someone in the business

Speaker:

connects that model that's you think is walled

Speaker:

off to something even for good

Speaker:

legitimate business reasons. It might expose

Speaker:

an avenue that someone could get in and start exfil trading. And you may not

Speaker:

even know they are, I mean these low and slow

Speaker:

Attacks that have been the bane of so many

Speaker:

enterprises where you're just siphoning it off enough so

Speaker:

that the controls don't see it. Those will happen

Speaker:

in a lot of. And they could happen to models. And there you have your

Speaker:

crown jewels, your data. That's everything slow slowly being

Speaker:

siphoned off. You know, that's, that's going to remain. And

Speaker:

you're going to have to have a multi layer security

Speaker:

system in place to kind of deal with that as well.

Speaker:

No, that's true. And it makes me wonder like,

Speaker:

you know, I guess, I guess you can

Speaker:

be. I had an actually interesting conversation with the customer a couple

Speaker:

years ago and he talked about what's called the. And I know I'm going to

Speaker:

mess up what the acronym is, but it's CIA Triad. And there's

Speaker:

nothing to do with the Central Intelligence Agency. It's

Speaker:

confidentiality, something.

Speaker:

And what is it? Probably identity. Yeah.

Speaker:

Or integrity, I think. And then access.

Speaker:

Right. And he had this whole, you know, he had a whole thing where like,

Speaker:

you know, if you lock things down so much, you basically kill

Speaker:

the access part of it. Right. You basically make it impossible to access. Right. If

Speaker:

you. It seems like security is one of those jobs that

Speaker:

will be augmented by AI for sure. Right. Because no one's going to have time

Speaker:

to read gigs and gigs of log files anymore. Right.

Speaker:

But it's also going to need. You're going to need a human in the loop.

Speaker:

Right. I don't say that because that's what my wife does. And

Speaker:

I like.

Speaker:

Yeah, I like paying. You bring up a great point.

Speaker:

And let me transition it to this because

Speaker:

I think I'm going to use a term that gets misapplied

Speaker:

a lot for enterprises and it's about risk.

Speaker:

You are not going to. And Candice, it gets to your

Speaker:

point too. The job of security

Speaker:

programs inside of enterprise is actually to mitigate

Speaker:

the risks to the business. It's not to provide 100%

Speaker:

security. That's not the goal. The goal is to mitigate the risk because

Speaker:

every business is going to have risk. And, and you need to accept a certain

Speaker:

amount of risk so that you can do business and you can reach more

Speaker:

people and you can, you can do that. And

Speaker:

circling all the way back to what Pulse Security

Speaker:

does is that we hope to bring that concept back into things, is

Speaker:

that the leaders should be thinking about risk

Speaker:

and tracking their risks and knowing where they're

Speaker:

taking risks or where they're not taking risk. I think today

Speaker:

why I said it's kind of one of these misplaced things is we kind

Speaker:

of allow regulations and things like that

Speaker:

to say to be about risk. And that's really the low

Speaker:

bar, you know, in security we always talk about, you know, if

Speaker:

you're, you talked about the OWASP earlier or you know, you think about the

Speaker:

PCI standard, you know, for retailers and

Speaker:

transaction processing, or you think about some of these other

Speaker:

standards, they're the low bar. And many people

Speaker:

think about risk as something that I have to do it we call

Speaker:

checkbox compliance. Right. I have to be compliant, but I only want to

Speaker:

do as much as I, as I can. As you need to. Because no one

Speaker:

looks forward to seeing security people, whether it's physical security, you know,

Speaker:

or it security. Like you know, the things

Speaker:

developers have told her. Right.

Speaker:

You know, and like, as a form, you know, as a former developer, like, I

Speaker:

get it, like, and I know data scientists don't think about this

Speaker:

generally speaking, right. Data engineers might,

Speaker:

but even then, like, you know, they're, I think you said

Speaker:

it earlier like it's the mindset, right? You know, you have the builder mindset,

Speaker:

maybe the plumber mindset for data engineers and

Speaker:

then you have kind of the attacker mindset, right. These are different ways of

Speaker:

thinking. It almost cries out that you need to have

Speaker:

diverse mindsets on these projects now. I mean, you always need.

Speaker:

Now it's more obvious. That's why you see, yeah,

Speaker:

that's why you see

Speaker:

good hygiene insecurity is that you have, you do

Speaker:

a threat modeling before you are going to go external. And

Speaker:

that is a very different person that usually guides that. It's, it's exactly what you

Speaker:

said, Frank. They have, they have the. I'm going to bring to you

Speaker:

this, what you might feel is a very, very big edge case.

Speaker:

But if there's a probability can happen, you have to consider

Speaker:

it and you have to, you have to think about it. And that's back

Speaker:

to something we talked about earlier. When you have the attackers using

Speaker:

AI, they can explore the

Speaker:

corners and these edge cases so much easier.

Speaker:

And if they find one. And it could be very

Speaker:

unsophisticated though, because it could still be a vulnerability

Speaker:

that has been around for 10 years and believe it or not,

Speaker:

that's still going on that the ultimate way they got in

Speaker:

was a very old unpatched

Speaker:

resource that just happened to get exposed. But it was, it was the

Speaker:

reason other term in security, the lateral movement of the bad guy, that they're

Speaker:

just, they're just moving laterally to investigate

Speaker:

different parts and they found something and they got in that way

Speaker:

and back, back to the thing. Just have to be right once and

Speaker:

poor defenders have to be right 100% of the time and they

Speaker:

won't be. So that's why taking a risk approach

Speaker:

is the way, is the way to go. Because no matter

Speaker:

what your size of the company, you got to consider your budget,

Speaker:

how many people you have, the skill set of those people.

Speaker:

And this is where I think AI can really assist the

Speaker:

defenders is that it can add some of that

Speaker:

expertise and some of that, you know, vigilance

Speaker:

that's on 24. 7 in ways that people

Speaker:

can't. But they got to bring it to people and the people can make the

Speaker:

judgment call. Because if the AI had its way, I mean,

Speaker:

Frankie kind of said this too, that the most secure thing is to

Speaker:

shut the whole thing down and not let customers access it.

Speaker:

You know, and you don't want that because that, that's your business, you know. That

Speaker:

kind of defeats the purpose. Yeah, exactly, exactly.

Speaker:

So a risk based approach is super important. And, and it is

Speaker:

about then just, you know, you judging how much

Speaker:

risk you want to take and your board wants to take and you

Speaker:

know, and the CEO wants to take and the business people want to take and

Speaker:

then, and then applying that and making sure that,

Speaker:

you know, it matches your business.

Speaker:

And so that's, you know, that, that's a lot of the game. Right.

Speaker:

That makes a lot of sense. So what is your. I'm sorry, candid. Go ahead.

Speaker:

So what would trigger the shift inside an organization

Speaker:

from reactive security to risk aligned decision

Speaker:

making? You

Speaker:

know, oftentimes, unfortunately,

Speaker:

it's you get hacked and

Speaker:

you, a lot of times also, unfortunately, you bring

Speaker:

in new leadership who understand that their charter is to

Speaker:

come in and change the culture a bit, you know,

Speaker:

from that. Now existing leadership can certainly do that, but

Speaker:

whether they're given enough chance to, I

Speaker:

don't know, I, you know, it's. All fun and games

Speaker:

until somebody gets hurt. Right. Like, and you know, and I think that

Speaker:

if you'd never had a problem before and then it suddenly

Speaker:

happens. Right. I don't think it's, there's

Speaker:

a joke, it's a bit of a gallows humor type thing where

Speaker:

like clockwork, within 24 hours of a major breach of a major company. Right.

Speaker:

What do you see? Job listings for

Speaker:

cybersecurity? Okay. There was a major, I

Speaker:

think it was one of the major hotel chains. I think you all know who

Speaker:

we're talking about. I don't want to, I don't want to name Anyone by name,

Speaker:

I don't want to get sued. But you know, literally

Speaker:

like within a week, you know, there was like two pages of job

Speaker:

listings for, you know, some flavor of

Speaker:

cybersecurity or security analysis.

Speaker:

And it's unfortunate that in many

Speaker:

organizations the security leader is kind of

Speaker:

set up to be the scapegoat when something like that happens. When

Speaker:

in fact, you know, you could be doing all the right things.

Speaker:

And I don't know, you guys probably know the term

Speaker:

too. The, a black swan event kind of happens. Which, right, which,

Speaker:

you know, we know it, everybody knows it if they travel. Because

Speaker:

how often have we caught someone who has the explosive in their shoes

Speaker:

getting on an airplane? It's never happened since the first time.

Speaker:

That, and that was a black swan event. And yet we

Speaker:

designed our whole security, a lot of our security around

Speaker:

that and that. And it should be done around the major,

Speaker:

the risks. And if you think about

Speaker:

it in that way, really if you've

Speaker:

traveled internationally, especially in places that they really have a risk,

Speaker:

they will often randomly pick a plane,

Speaker:

get everybody off, look at all the baggage. But it's a

Speaker:

random kind of thing that happens

Speaker:

rather than kind of a systemic way of going through it that becomes

Speaker:

kind of wrote and it, you know, and, and people learn how to defeat it,

Speaker:

you know, in some ways. And, and that happens in cyber security

Speaker:

all the time. You know, you gotta, you gotta really be

Speaker:

doing that. That's why doing like practice, you know,

Speaker:

it's, it's a, it's a real important thing to do what we call

Speaker:

tabletop exercises in this because you have to

Speaker:

pretend like you just got hacked. What, what do you do

Speaker:

from the lowest level analyst all the way up to

Speaker:

the board. Do they, do they know what to do? Because,

Speaker:

and there's a lot of regulations now within like 24 hours, they have to,

Speaker:

or 72 hours of detecting it, they have to, they're on the

Speaker:

hook to claim it. I forget what that law is called. Yeah,

Speaker:

that's right there. If you're a public company, you have to disclose

Speaker:

and typically you don't have any idea yet

Speaker:

what, how that's happened and yet you have to disclose it's

Speaker:

happened and it's, you know, so,

Speaker:

so yeah, there's, there's a lot of risk to the organization

Speaker:

that, that this, this presents. And so that's why having,

Speaker:

thinking about it that way, doing exercises, you

Speaker:

know, it is a new world. I'll say. I, I'm

Speaker:

a big believer in what I'm going to call situational security where

Speaker:

And I mentioned this before, you just got to know your situation and if the

Speaker:

stakes are high and you have a big security team and you better be

Speaker:

practicing these things, you better have done, you know,

Speaker:

multi layers of security. But if you're a small team and you only have a

Speaker:

couple people on it, you've got to kind of think of what your crown jewels

Speaker:

are. Go protect those first and let the other

Speaker:

stuff go because who cares who's on your guest network?

Speaker:

You know, it's like you got to let that go, maybe make sure that,

Speaker:

make sure your guest network is not tied to your internal network.

Speaker:

And I think these days you have to really look at access

Speaker:

because so much. Everyone's in the cloud with

Speaker:

a lot of their infrastructure these days and you can tell a lot

Speaker:

by, so don't over privilege people to have access to things

Speaker:

and do that. So you have to look at those kinds of things. You do

Speaker:

have to look at your, you know, it goes without saying,

Speaker:

look at, look at your resources and I'm going to use that term broadly,

Speaker:

your assets that you have because you have to know about them.

Speaker:

So having some protection on those assets is super critical as well.

Speaker:

And I'm an old appsec guy, so yes, you, and you know, Frank,

Speaker:

you mentioned you got to have your applications

Speaker:

that are actually performing much of the business these days. You have to,

Speaker:

you have to know what your vulnerabilities are and you've got to plug the big

Speaker:

holes in that. But from there really

Speaker:

can't stop the bad guys. But you can at least stop,

Speaker:

stop the amateur bad guys. Right. Well, and, and

Speaker:

they're going to look around. So you can, if your bar is

Speaker:

higher than the next guy, as we know. You know, I know

Speaker:

that's all these adages of, you know, running faster than a bear or

Speaker:

the next guy, you know, on the bear and all things. That'd be the second

Speaker:

slowest. That's right. And it is true, you know, you can, you can

Speaker:

dissuade a lot of attacks if you

Speaker:

look like it's going to be difficult because the attackers,

Speaker:

they run playbooks too, because it's easier for them, it's cheaper for

Speaker:

them. It. And they'll just run playbooks. And if, if you

Speaker:

thwart the playbook, they'll find someone who, who

Speaker:

doesn't. And if you're not state actor, it's a, it's a criminal

Speaker:

enterprise. Right. And criminals are there to make money. Right. State

Speaker:

actors have different motives and different budgets.

Speaker:

Yeah. They may go a lot more targeted and they're just going to wait and

Speaker:

be patient. You're exactly right. But

Speaker:

actually targets like. I'm sorry,

Speaker:

no, I don't mean to interrupt. I was just going to say a funny story

Speaker:

is when we were

Speaker:

pitching our application security company

Speaker:

back in 2003, we used to talk about

Speaker:

how

Speaker:

underfunded but patient and have all the times in the

Speaker:

world the hackers are, we were kind of

Speaker:

dismissive of no state would ever

Speaker:

hack another state's assets because it start a

Speaker:

war. And at the time that was really, that was the thinking.

Speaker:

I mean, how quaint does that sound today when we all know it's like, oh,

Speaker:

that's a Russian hacker group. You know, it's like we just kind of go,

Speaker:

oh, of course it was. It's like, oh my gosh. Yeah.

Speaker:

Well, it's also, I think in terms of geopolitics become a real

Speaker:

equalizer. Right. Because a nation state like North Korea can go toe to

Speaker:

toe with the United States. Right. Whereas in a

Speaker:

conventional war really wouldn't work out well for them. You know what I mean?

Speaker:

It's an interesting, yeah, it is interesting.

Speaker:

We have really good hackers ourselves in the

Speaker:

United States, right? Oh, I'm sure we do. It's, it's, it, you know,

Speaker:

I mean I, you know, you kind of hope that but,

Speaker:

but you are right, like a North Korean thing like, like we've seen

Speaker:

they can use these deep fakes to infiltrate in ways

Speaker:

that, you know, because of the work at home thing how, how they

Speaker:

can get employees hired in some of these places

Speaker:

with the expressed intent of, you know, stealing

Speaker:

things, you know, from, from those organizations and it's,

Speaker:

yeah, it's, it's a new world. It's pretty wild. Yeah. I mean when you think

Speaker:

about it like in, and you know, and it's not, not saying that the United

Speaker:

States doesn't have good hackers. I'm sure, I'm sure we have among the best. I

Speaker:

mean, maybe the best. But it's like a

Speaker:

baseball team, right? Like, you know, obviously there are some baseball teams that are going

Speaker:

to be better than others, right. And it's going to be kind of like the

Speaker:

smaller town that doesn't have the budget to pay for this. Rock stars. Same

Speaker:

with football, right? Whatever sport your thing is, right. You know, for me, I'm a

Speaker:

Yankees fan, although the Yankees have not had a good run of late. But

Speaker:

historically they have been kind of the top. But you know, you

Speaker:

can definitely tell like nation states can be all in like the same league

Speaker:

because they do have more or Less the same capacity in terms of. They're

Speaker:

not, they're not in it for the money per se. Like, you know what I

Speaker:

mean? They're not, you know, they, they. Because they're a

Speaker:

nation state, you know, they can harbor. They can harbor themselves and not

Speaker:

prosecute. You know, they have certain more advantages than your average criminal gang.

Speaker:

Oh yeah. I mean. And well funded. Right. I mean, that's. Money's not

Speaker:

an issue. Yeah, right. It's it. It

Speaker:

puts to be a formative adversary.

Speaker:

And that's why places

Speaker:

like Mandy and that come out with threat reports.

Speaker:

They talk about these actor groups, but you could see moves of

Speaker:

actor groups as well,

Speaker:

changing their tactics and techniques. Again, one of the

Speaker:

more interesting things that happened. I think we could talk about it

Speaker:

because it is public. But if you remember

Speaker:

maybe a year and a half ago, maybe it was two years ago that, that

Speaker:

mgm, you know, casinos guy. Oh yeah, the resort. Remember that?

Speaker:

And it shut. It shut down two casinos with

Speaker:

ransomware. The actors that did that.

Speaker:

It goes back to what's come full circle where it used to be the adversaries,

Speaker:

where it used to be called script kiddies were basically kids

Speaker:

that want to just cause disruption. Well, this is, this is actually a

Speaker:

more sophisticated. It ends up. This group was just a more sophisticated thing of that.

Speaker:

They. Yeah, it was ransomware, but they weren't actually out there

Speaker:

just for the money. They just wanted to do it. They just wanted to see

Speaker:

if they could shut down a casin. And it's crazy that that

Speaker:

that's like that and they still got away with a bunch of crypto

Speaker:

money. But. But it,

Speaker:

you know, it just shows that even like

Speaker:

those, even those hackers could then stand on the shoulders of

Speaker:

all this technology that's being hopefully built for

Speaker:

good and stuff too. But they can use that. And now

Speaker:

you can generate. I know the LLMs.

Speaker:

Back to the subject. You could ask it to

Speaker:

generate malware for you and it'll at

Speaker:

first say no, but if you could trick it, it'll say yes and it'll do

Speaker:

it. And then you can. Didn't that happen recently where there were. State actors

Speaker:

with cl. Yeah, anthropic.

Speaker:

That they were. That. That that plot had been used

Speaker:

and they're, you know, I, I think anthropic really looks at the

Speaker:

safety of what they're doing and stuff too. So they, that's why they disclosed it.

Speaker:

But. And they filled that hole. But it was. It wasn't that hard.

Speaker:

You know, all they did was say, oh no, I'm a Researcher. And

Speaker:

I'm doing an ethical. Yep. It

Speaker:

was not an ethical. How would you. I mean, you've been in the

Speaker:

AppSec before. It was called cyber security. Was called

Speaker:

AppSec or application security. But, you know,

Speaker:

if somebody told you back when you said, you know, no nation state would do

Speaker:

this, right. That, you know, all you had to do is trick a computer into

Speaker:

giving you, like, talk to a computer and tell it you're a research. Like, how

Speaker:

unreal is that? Like, I don't know. I'm doing this for research. Like, oh,

Speaker:

okay. Like, yeah, you know. Yeah, pretty, pretty,

Speaker:

pretty unreal because it was so manual before,

Speaker:

you know where. But again, you know, it gets back

Speaker:

to the thing we talked about earlier. It's like there are builders of the

Speaker:

world that can't imagine someone wanting to destroy,

Speaker:

you know, this beautiful building that's, that's been, that's been

Speaker:

built. And then there's people that. All they think about is, how can I

Speaker:

find a weakness in that building and take it, either take it down or just

Speaker:

gain access and that. Right, that's, that's, that's what's around.

Speaker:

Yeah. I mean, that keeps on. If you're in cyber security, that, that,

Speaker:

that, that keeps the lights on for sure. Because there's always,

Speaker:

always, there's always work to do to help the defenders.

Speaker:

Yeah. So you have something to defend. It was like going back to medieval times,

Speaker:

right. Like, you had the kings, but you had a pretty large class of

Speaker:

knights, you know, that would have to do defending and. Or

Speaker:

I forget what the people were called, but they would stand on the walls and,

Speaker:

like shoot arrows and catapults and stuff like. Yeah.

Speaker:

And you design the moat is because of that. And then

Speaker:

the ways you get into the city, you know, has got

Speaker:

traps in it, you know, and we would

Speaker:

liken that to a honey pot, you know, I mean, there's lots of, lots of.

Speaker:

And then Trojan horse was originally a Trojan horse.

Speaker:

That's right. And it's a battle. Right. And so I think right

Speaker:

now with AI, the attackers have a bit of the upper hand

Speaker:

because we just don't know how they're using it.

Speaker:

But, you know, there'll be tools and there already

Speaker:

are. I mean, if you're a cybersecurity company and you don't have

Speaker:

some AI assistance to help,

Speaker:

either with the scope or the breadth or the speed,

Speaker:

you know, it's. You see that and that. But that's on the detection

Speaker:

end, and sometimes that's too late. I mean,

Speaker:

I hope that the industry moves to some prevention. And then it is

Speaker:

about the building of the moats or the maze that they

Speaker:

have to go through or something like that. And I think that's an important

Speaker:

balance that has to be maintained by enterprises today

Speaker:

to. To make sure that they mitigate the risk. No, that's a

Speaker:

good way to put it.

Speaker:

Any question? We're getting close to the top of the hour, so I want to

Speaker:

be respectful of your time. Any questions? Candace? Sorry, I. No,

Speaker:

honestly, like, this has been a fantastic, fantastic interview.

Speaker:

It's been incredibly enlightening. Like, so much to think about,

Speaker:

you. Know, makes me want to change all my passwords. Right,

Speaker:

right. Well, you know, password 1, 2, 3, nobody. No, you can't. That's not secure

Speaker:

anymore, you know. Well, even, you know, it's interesting

Speaker:

because talk about checkbox compliance versus real

Speaker:

security. Even as we were setting up the company,

Speaker:

little old us, you know, we go to, and we're

Speaker:

needing help because we're wanting to become compliant to some of these

Speaker:

bars that are out there, like SoC2, if you've heard

Speaker:

of that. It's a compliance standard

Speaker:

for trustworthiness of. Of companies like us who might have your day.

Speaker:

There's a massive Alphabet soup there. There is,

Speaker:

there is. And, but like, password policy was really interesting

Speaker:

because what. We were using some AI to help us

Speaker:

in that, and it came back with password policy of. Oh,

Speaker:

yeah, you know, like, change your password every two weeks. Well, that's

Speaker:

been. That might have been state of the art a couple of years ago, but

Speaker:

that's not what you do today. Today, you know, it's about

Speaker:

length and scrambling, and we have these things called password

Speaker:

managers that allow us to do that rather than

Speaker:

us remembering something and. Yeah, even that.

Speaker:

Dynamics. Yes, those can get hacked.

Speaker:

A recent breach on one of those.

Speaker:

You know, there's. I. You know, I don't know, one that was

Speaker:

really bad publicly. There has been in the past, for sure. All right.

Speaker:

Um, yeah, so. But you're still, I think, Correct me if I'm wrong, but I

Speaker:

still think you're safer with a password Manager without

Speaker:

it 100% you. It's.

Speaker:

It's because you want. You want long length,

Speaker:

jumbled, you know, kinds of things that just aren't easy,

Speaker:

easy for the attacker. So.

Speaker:

Yeah, and then you change your master password of that manager

Speaker:

frequently. That's one where you, you. And again, you still want it to be long

Speaker:

and. Right, right, right. Long and complicated. Remember, one long

Speaker:

and complicated thing. Well, I'm, I'm glad

Speaker:

that we got to kind of talk yeah, awesome. It's been

Speaker:

great. Yeah, it's been great. Where can folks find out more about you and your

Speaker:

company? So I think today, you know, like I said, we're

Speaker:

in stealth, but eventually, please follow

Speaker:

Pulse Security AI will be coming out of stealth, you know, in

Speaker:

the, in the probably new year to mid

Speaker:

year kind of thing. But also we started a community

Speaker:

of security professionals and we as

Speaker:

just a networking organization, we call that

Speaker:

securityimpactcircle.org and there

Speaker:

we have blogs. We want to have

Speaker:

people talking about this prevention versus detection or even for

Speaker:

the security leader, about risk and how they should manage

Speaker:

things and best practices that they have together. So

Speaker:

yeah, we have a site, securityimpactcircle.org that is a great place for people

Speaker:

to go and eventually, you know, you'll get to us through that as

Speaker:

well. Cool. Awesome. Well, I'll let

Speaker:

our AI finish the show. And that's a wrap on this

Speaker:

episode of Data Driven. Big thanks to Mike Armistead for

Speaker:

reminding us that while AI may be the future, security breaches

Speaker:

are very much the present. Remember, the attackers only have

Speaker:

to be right once. So maybe don't make your password password

Speaker:

until next time. Stay curious, stay secure, and for

Speaker:

the love of data, please update your firmware. Cheers for

Speaker:

listening. Now go change that password.