Speaker:

Welcome to Data Driven, where we dive into the thrilling world of data,

Speaker:

AI, and on occasion, misbehaving chatbots suggesting

Speaker:

glue for your pizza. This episode features Barr Moses,

Speaker:

CEO of Monte Carlo. Not the casino, not the car,

Speaker:

but the company keeping your data from quietly wrecking your business.

Speaker:

We talk observability, the chaos of unreliable data,

Speaker:

and why one tiny schema change cost a company

Speaker:

$100,000,000. Ouch. So buckle

Speaker:

up. Because if your AI bots are making decisions without

Speaker:

reliable data, well, hope you like eating rocks for the

Speaker:

minerals. Hello, and

Speaker:

welcome back to Data Driven, the podcast where we explore the emergent

Speaker:

fields of data science, artificial intelligence, and, of course, data

Speaker:

engineering. And with me today is my favorite data engineer in the

Speaker:

world, Andy Leonard. How's it going, Andy? It's going well, Frank.

Speaker:

How are you? I'm doing well. I'm doing well. I was in Raleigh last

Speaker:

week, drove down, rented a car actually,

Speaker:

to save mileage on, on ours, and,

Speaker:

spoiled because it's been a while since I bought a new car. And

Speaker:

this is the second time I rented a car, and I'm getting tempted. I ain't

Speaker:

getting tempted. It was a Chevy. It was

Speaker:

a Chevy Malibu. Not a Monte not a Monte Carlo.

Speaker:

See what I did there? I don't even know if they still make them. I

Speaker:

I was dropping the little one off at daycare,

Speaker:

and I was behind a Chevy Monte Carlo, like, an early

Speaker:

two thousands vintage. But that is actually quite relevant

Speaker:

to our discussion today because with us today, we have Barr Moses, who is the

Speaker:

CEO and cofounder of Monte Carlo, the data

Speaker:

and AI reliability company, not the casino

Speaker:

or the car, I would assume, or the town. Monte Carlo

Speaker:

is the creator of the industry's first end to end data and

Speaker:

AI, observability platform with

Speaker:

$236,000,000 in funding from Accel,

Speaker:

ICONIQ Growth, and others. They are on a mission to bring

Speaker:

trustworthy and reliable data and AI, to

Speaker:

companies everywhere. The company was recently recognized as

Speaker:

an Enterprise Tech 30 company, a CRN

Speaker:

emerging vendor, and an Inc.com

Speaker:

best workplace, and counts Fox, Roche,

Speaker:

Nasdaq, and PagerDuty, among others, as their customers. Welcome

Speaker:

to the show, Barr. Thank you so much. Great to be here, Frank and

Speaker:

Andy. Awesome intro. No problem. Do you drive a

Speaker:

Monte Carlo? Because that would be epic. You know, I really should

Speaker:

be driving a Monte Carlo. I do not, and I've never actually been to

Speaker:

Monte Carlo either. So I will tell you if you're into cars,

Speaker:

like, I'm, like, a recovering car nerd. Oh,

Speaker:

very cool. It looks like a car show. Like, honestly, I went to Monte

Speaker:

Carlo, and we had rented, like, a Saab convertible. And I felt like we were

Speaker:

driving. We were driving, like, the low end

Speaker:

of the car thing. I mean, I've never

Speaker:

seen Bentleys in the wild, like, just parked on the street,

Speaker:

like, no big deal. Wow. Like, I mean, every

Speaker:

luxury car if you're in a Saab and you feel like you're slumming it

Speaker:

Yeah. It is clearly a high money area.

Speaker:

But, so welcome to the show. So Monte Carlo

Speaker:

why'd you get the name? I I'm assuming it might have something to do with

Speaker:

Monte Carlo simulations, but that's in the Great question. Yeah. The

Speaker:

unofficial story is that one of our cofounders is a fan

Speaker:

of Formula One, and, you know, Formula One races in Monte Carlo.

Speaker:

So, right. You know, clearly, that's the

Speaker:

unofficial story. The official story is that, you know, we

Speaker:

had to we had to name the company. We started working with customers when we

Speaker:

started the company, and we we had to choose some name.

Speaker:

And, I studied math and stats in college, and so I sort

Speaker:

of opened my stats book and sort of looked through and,

Speaker:

you know, reviewed my options, and, you know, Markov

Speaker:

chains didn't seem like a great name. And next up was

Speaker:

Bayes' theorem, which was similarly kind of not great. And

Speaker:

and then, you know, I was reminded of Monte Carlo and Monte Carlo simulations. I

Speaker:

actually I actually did some work with Monte Carlo simulations earlier in my career.

Speaker:

And it seemed like it seemed like a great name, a name that would speak

Speaker:

to, you know, data engineers, data analysts, folks that have been in the space.

Speaker:

And, you know, I think naming a company is a very difficult

Speaker:

thing to do. We decided to go with it in the spirit of Monte Carlo.

Speaker:

One of our values is ship and iterate. And so, the

Speaker:

name has sort of stuck with us since. And, it's quite memorable. People either

Speaker:

love it or hate it. So I think it works for us. I think it

Speaker:

it works. Like, I think of the car. I think of the casinos. It has

Speaker:

a certain amount of, high class, maybe more so than Markov

Speaker:

chains. Although I did for a time flirt with the

Speaker:

idea of of also starting a company called Markoff Chains, but,

Speaker:

like, see if we could get money for Mr. T

Speaker:

to be the spokesman. That would

Speaker:

have been epic. Yeah. Jeez. You and your ideas, Frank. I was the

Speaker:

only one I was the only one that thought that was a good idea, but,

Speaker:

you know, I was a big fan of Mr. T as a kid. Marketing. Yeah.

Speaker:

That's funny. That's what I do in my day job now. Oh, yeah.

Speaker:

I swear, folks, I didn't pay her to say that.

Speaker:

So you talk about data and AI

Speaker:

reliability. And to me, when when I hear that,

Speaker:

a slew of things come to mind. Like, there's security, there's the

Speaker:

veracity, like, the five v's and all that or four v's or whatever it

Speaker:

was. What exactly is kind of Monte Carlo's, like,

Speaker:

wheelhouse there? Yeah. Great question. I'll

Speaker:

actually sort of anchor ourselves in in kind of the metaphor or sort of a

Speaker:

corollary that we like to use here, which is really based on software engineering.

Speaker:

So we didn't reinvent the wheel when we say data and AI observability.

Speaker:

We really take concepts that work for engineering and adapt them.

Speaker:

So, you know, when we started the company, the idea, the

Speaker:

hypothesis, the the thesis that we started the company on was data

Speaker:

was going to be as important to businesses as applications, as online

Speaker:

applications. And data was going to

Speaker:

drive the most critical sort of, you know, lifeblood of companies through

Speaker:

decision making, internal products, external products.

Speaker:

And, while software engineers had all the solutions and tools in the

Speaker:

world to make sure their applications were reliable, and so some, you

Speaker:

know, some off the shelf solutions like Datadog, New Relic, Splunk might be

Speaker:

familiar to you, data teams were flying blind. So there was literally

Speaker:

nothing that they could use to know that their data was

Speaker:

actually accurate and trusted. That's sort of, like, the core problem that

Speaker:

we started with. Fast forward to today, you know, we created the data observability

Speaker:

category. We're continuing to create it. AI is making this problem just

Speaker:

infinitely bigger, harder, more important. Why? Because

Speaker:

data and AI products are now you know, there's a proliferation of those.

Speaker:

An AI application is only as good as the data that's powering it,

Speaker:

and the AI application itself can be inaccurate, can be

Speaker:

unreliable. Right? And so at a very high level

Speaker:

I know this is, you know, very vague, but at a very high

Speaker:

level, the idea was the same diligence that we treat software

Speaker:

applications, we should be treating for data and AI applications. Now,

Speaker:

what does that actually mean? How do we do that? Enter the concept of

Speaker:

observability. Observability is basically understanding or

Speaker:

assessing a system's health based on its output.

Speaker:

And so basically, the thesis was, can we observe end to end the

Speaker:

data and AI estate, learn what the patterns

Speaker:

are in the in the data, bring together metadata and context,

Speaker:

lineage, for example, about the data, derive insights

Speaker:

based on that to understand and determine what the system should

Speaker:

behave like, and alert if that gets violated. So that's sort

Speaker:

of the first part. The first is actually being able to help data teams

Speaker:

detect issues. The second part is actually

Speaker:

helping data teams resolve issues. Now here's the interesting thing

Speaker:

that we sort of learned over the years. We've worked with hundreds of

Speaker:

enterprises. So, you know, we mentioned a few. We really work with the top

Speaker:

companies in every single industry. So,

Speaker:

you know, in in, in health care, in retail,

Speaker:

in manufacturing, in technology. In each of these

Speaker:

areas, the data estate

Speaker:

obviously varies, but interestingly, there are actually commonalities. And the

Speaker:

commonality is that every single issue can be

Speaker:

traced back to a problem with the data, problem with the code,

Speaker:

problem with the system, or a problem with the model output. I can go

Speaker:

into more detail on each of those, but that's sort of the high level

Speaker:

framework. We basically provide end to end coverage to help data teams

Speaker:

understand what the issues are and help them trace them back to data issues,

Speaker:

code issues, system issues, or model output issues. So when did

Speaker:

you get the idea that I'm sorry, Andy. I cut you off. Okay. When

Speaker:

did you get the idea when you realized that data is gonna be as important

Speaker:

as applications are to businesses? Oh, great question.

Speaker:

Yeah. Great question. So we started the company in 2019.

Speaker:

And, actually, what's interesting, it was pretty clear to us then, but we

Speaker:

had to prove that, or we had to convince people of that. Definitely.

Speaker:

Yeah. It was not obvious. There's still a

Speaker:

lot of people that are kind of, like, I guess, they'd be in the quadrant

Speaker:

of laggards where they realize, oh, I guess this is important.

Speaker:

A hundred percent. I would imagine in 2019, that would have

Speaker:

been, you would have sounded insane. We

Speaker:

sounded insane, a hundred percent. People were like, what? Data is

Speaker:

gonna be important? Are you sure? Now a couple of things happened

Speaker:

since, which I think helped. First is,

Speaker:

there were some large acquisitions in the data space, like Tableau and

Speaker:

Looker earlier on, and then Snowflake IPO'd. Snowflake was the

Speaker:

largest software IPO of all time. It was quite interesting that the

Speaker:

largest software IPO of all time is a data company. So I think those

Speaker:

things sort of helped convince, you know,

Speaker:

at least externally, you know,

Speaker:

to the market that data will continue to be

Speaker:

important and critical. I think the thing that I noticed is, you know,

Speaker:

before we even started the company, we spoke to hundreds of data leaders, and I

Speaker:

speak to dozens of data leaders every single month. That continues,

Speaker:

and I think what you hear from them is more and more

Speaker:

data teams and software engineering teams are building products hand in hand.

Speaker:

So they're actually side by side building. Right? And so,

Speaker:

more and more critical business

Speaker:

applications, revenue generating products are based off of

Speaker:

data, and they're being powered by data. I'm not even talking

Speaker:

about generative AI, which is a whole whole other story why that matters, but just

Speaker:

data products by themselves. Think about reports that people look at internally.

Speaker:

You know, just to give you an example, we work with many

Speaker:

airlines, for example. Airlines have a lot of data that goes to internal

Speaker:

operations. Like, what's the connecting flight? What's your flight number? How

Speaker:

many flights left today? What time did they leave? How many passengers were on

Speaker:

the airplane? Where is your luggage? Right? That

Speaker:

information is powering internal and external products. You know, it's powering the application

Speaker:

that you're using in order to board the plane, in order to connect

Speaker:

to your next flight. If that data is inaccurate, like,

Speaker:

you're screwed. Right? And that hurts tremendously. Your brand

Speaker:

as an airline, your reputation. It leads to

Speaker:

reduced revenue, increased regulatory risk that you're exposing

Speaker:

yourself to. Right? So the data,

Speaker:

what we see from our customers is powering critical use cases like

Speaker:

airlines. I'll give you another example. You know, we work with a,

Speaker:

you know, a Fortune 500 company, perhaps your favorite cereal.

Speaker:

I don't know if you guys are big cereal eaters. I, like, eat cereal

Speaker:

for breakfast, lunch, and dinner. It's, like, my go-to.

Speaker:

You'd be surprised into how much data optimization, machine learning,

Speaker:

and AI goes into actually optimizing the number and

Speaker:

location of cereal on the shelf. So there's a lot of

Speaker:

data that goes into supply chain management to make sure that you're

Speaker:

actually, like, fulfilling the right warehouse,

Speaker:

demands on time and, you know, making sure that everyone gets

Speaker:

their cereal on time. There's actually a lot of data that goes into all of

Speaker:

that. So I think what gave me conviction was in speaking with

Speaker:

so many companies across so many industries, data was

Speaker:

actually allowing data teams, allowing

Speaker:

organizations to build better products, to build more

Speaker:

personalized products, and to make better decisions about the organization.

Speaker:

So I think that really sort of made it clear that the future was going

Speaker:

to be based on on data. Well, I I like that

Speaker:

you pointed out, the importance of observability.

Speaker:

My career path winding as it was,

Speaker:

I made a a leap from being a software developer to being

Speaker:

a data really a database developer. When I made that

Speaker:

transition, one of the things I had noticed, this was two, two and a half

Speaker:

decades ago, I had just started in software development

Speaker:

doing test driven development and it had just

Speaker:

come out, it was called fail first development. I remember thinking

Speaker:

this was perfect. It was a big deal. Yeah. It was. Yeah. Twenty five

Speaker:

years ago. And I remember thinking this is perfect because I'm always failing.

Speaker:

So this will work. Nothing ever runs the first time, and if it does,

Speaker:

it's suspect. But when I got over into data, I had just

Speaker:

become, you know, kind of a big believer in the power

Speaker:

and really the confidence that

Speaker:

test driven development gave me. And I was like, we need that

Speaker:

over here. And so it was, just a

Speaker:

field that fascinated me. I have an engineering background, and so it kind of flowed

Speaker:

right through. Instrumenting the data engineering,

Speaker:

was a big deal so that, again, you could achieve what we now call

Speaker:

observability. But being able to watch that data flow

Speaker:

and when I would mention this to people kinda like you in 2019, I

Speaker:

I would get all sorts of responses. Most of them kinda raised

Speaker:

eyebrows. And some of the more interesting ones

Speaker:

were things along the lines of, well, the data is sort of self

Speaker:

documenting. I mean, it's just there. And I'm

Speaker:

like, no, it's not. Especially when you've moved it through

Speaker:

a bunch of transformation to put it into a business intelligence solution or data

Speaker:

warehouse or any of that. And that now feeds,

Speaker:

you know, modern LLMs, AI, and the like. Those

Speaker:

same sorts of, I guess, old school processes still apply, I

Speaker:

assume. Or at least that's my understanding. Maybe I'm reading too much into

Speaker:

that, but I love the idea of having observability go

Speaker:

all the way through. You mentioned lineage. That's huge. You wanna make sure that when

Speaker:

you, you know, you make this one change, that's not gonna affect anything

Speaker:

else. Usually, it does affect other things, and having

Speaker:

that lineage view is huge. That is spot on.

Speaker:

That's exactly how we've thought about this as well. So, you know, I

Speaker:

think there are specific things that you can test for in data. Like, for

Speaker:

example, a specific thing that you can declare. You can say, like,

Speaker:

you know, a T-shirt

Speaker:

size should only be, you know, small, medium, large, extra large, whatever.

Speaker:

Right? But then there are some specific things that, you

Speaker:

know, you don't necessarily know. Like, for example, if there's a particular,

Speaker:

you know, pattern in how the data is being updated,

Speaker:

you can actually use machine learning to automatically learn that pattern and then forecast

Speaker:

when it should get updated again, so it's not necessary for someone to

Speaker:

manually write a test for that. Right?
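
A minimal sketch of those two flavors of check, purely as an illustration and not Monte Carlo's actual implementation: one rule declared up front, like the T-shirt sizes, and one learned from a table's update history (here a simple mean-gap-plus-three-sigma freshness model, assuming at least a few historical update timestamps). All names and thresholds are hypothetical.

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

# Declared check: a rule you can state up front.
VALID_SIZES = {"S", "M", "L", "XL"}

def bad_size_rows(rows):
    """Return rows whose size column violates the declared rule."""
    return [r for r in rows if r["size"] not in VALID_SIZES]

# Learned check: fit the historical update cadence of a table and
# alert when the next update is later than the pattern predicts.
def expected_update_deadline(update_times, tolerance_stds=3.0):
    """Learn the typical gap between updates (needs 3+ timestamps)
    and return the latest time the next update should arrive by."""
    gaps = [(b - a).total_seconds()
            for a, b in zip(update_times, update_times[1:])]
    threshold = mean(gaps) + tolerance_stds * stdev(gaps)
    return update_times[-1] + timedelta(seconds=threshold)

def is_stale(update_times, now=None):
    """True if the table has not updated when history says it should."""
    now = now or datetime.utcnow()
    return now > expected_update_deadline(update_times)
```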

Speaker:

I actually think it's a combination of both of those things which really

Speaker:

give confidence to to data teams over time. So there there's sort of a

Speaker:

couple components to it. The first, I think it really starts with visibility,

Speaker:

sort of call it end to end observability, but it really includes, like, you know,

Speaker:

you mentioned a few of these parts, but, the data

Speaker:

lake, the data warehouse, the orchestration,

Speaker:

BI, ML, the AI application, which can include the agent,

Speaker:

the vector database if you have one. Right? All of those

Speaker:

components you have to have visibility into. The first thing is actually, to

Speaker:

your point, like, having lineage into what the different components are that cross

Speaker:

this, so all the way from, you know, ingestion of the data to

Speaker:

consumption of it. And the second is to start observing.

Speaker:

And, you know, there are some specific things that you can declare

Speaker:

and test based on your business needs, and there are some things that you

Speaker:

can do in an automated way. And, actually, I think this is an area

Speaker:

where AI can help. So for example,

Speaker:

what oftentimes teams end up doing is spending a lot of time

Speaker:

trying to define what are data quality rules. And,

Speaker:

actually, you can use LLMs to profile the data,

Speaker:

make some inference

Speaker:

based on the semantic meaning of data and then make recommendations.

Speaker:

So for example, and I love this example, we work with lots

Speaker:

of, sports teams. And so you can imagine that,

Speaker:

you know, you have a particular field called, like, let's say this is

Speaker:

in baseball, a baseball team, sort of, like, you know, pitch type,

Speaker:

and then, like, the speed that matches that. And

Speaker:

so you can imagine that, like, an LLM can recommend or infer that

Speaker:

a fastball should not be, you know, less than

Speaker:

70 miles per hour or whatever it is. Even though I don't know what

Speaker:

the real number is, I just made that up, there is, like, something you

Speaker:

can infer based on that and make a recommendation.
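
As a hedged illustration of that idea, an LLM proposing a range rule from a column's profile, here is one way it could be wired up. The `ask_llm` callable is a stand-in for whatever chat-completion client you use (it is not a real library API), and the sketch assumes the model replies with bare JSON.

```python
import json

def propose_range_rule(column_name, profile, ask_llm):
    """Ask an LLM to infer a plausible numeric range for a column.

    `ask_llm` is any prompt-in, text-out callable; it is abstracted
    here so the sketch does not depend on a particular vendor SDK.
    """
    prompt = (
        "You are proposing data quality rules.\n"
        f"Column: {column_name}\n"
        f"Sample values: {profile['samples']}\n"
        f"Observed min/max: {profile['min']} / {profile['max']}\n"
        "Based on the semantic meaning of the column, reply with JSON "
        'like {"min": ..., "max": ..., "reason": "..."} giving '
        "plausible real-world bounds, not just the observed ones."
    )
    return json.loads(ask_llm(prompt))

# Hypothetical usage for the baseball feed:
# propose_range_rule(
#     "fastball_speed_mph",
#     {"samples": [92.1, 95.4, 88.7], "min": 88.7, "max": 101.3},
#     ask_llm=my_model_call,
# )
# -> {"min": 70, "max": 110, "reason": "MLB fastball speeds ..."}
```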

Speaker:

And so, actually, I find that AI and LLMs are a really cool

Speaker:

application for making observability faster and

Speaker:

easier for teams. So, yeah, I'm

Speaker:

very excited about what you just shared, Andy. Well,

Speaker:

I love what you brought up about machine learning being able to

Speaker:

basically make predictions about things.

Speaker:

And one of the terms that, you know, as a practitioner

Speaker:

of business intelligence, especially the data engineering that supports

Speaker:

it, Mhmm, is data volatility. Mhmm. So if I'm

Speaker:

especially if I'm looking at an outlier. So I'm consuming this

Speaker:

data day in and day out. And let's

Speaker:

say, you know, 10% of the data is new stuff,

Speaker:

and maybe another 10 or 15% are things that are have

Speaker:

been updated, old stuff that's been updated, and the rest of it's relatively

Speaker:

stable. If I see those numbers go crazy out of bounds,

Speaker:

you know, and machine learning would be able to pick that up right

Speaker:

away and say, there may be a problem with the data we're

Speaker:

reading today. You know, that sounds like one of

Speaker:

the problems this would solve: that volatility,

Speaker:

expected ranges of volatility of data. That's exactly

Speaker:

right. Yeah. Cool. Interesting.
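
A small Python sketch of the volatility check Andy describes: daily fractions of new and updated rows, flagged when they leave a band learned from history. The field names and the three-sigma band are assumptions for illustration, and it needs at least two days of history.

```python
from statistics import mean, stdev

def volatility_alerts(history, today, k=3.0):
    """history: prior days as dicts like
    {"new_frac": 0.10, "updated_frac": 0.12}; today: the same shape
    for the current load. Flags any fraction outside mean +/- k*std."""
    alerts = []
    for metric in ("new_frac", "updated_frac"):
        past = [day[metric] for day in history]
        mu, sigma = mean(past), stdev(past)
        if abs(today[metric] - mu) > k * sigma:
            alerts.append(
                f"{metric} = {today[metric]:.2%} is outside "
                f"{mu:.2%} +/- {k} * {sigma:.2%}"
            )
    return alerts
```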

Speaker:

I think there's also something you said about, you know, LLMs. Because, obviously, we have

Speaker:

to talk about GenAI because it's 2025, and I think you're in

Speaker:

Silicon Valley. I think if you don't mention GenAI every twenty five

Speaker:

minutes, the cops come and knock on your door and check it out. Welfare check.

Speaker:

Could get in trouble. Or they make sure you're okay. Make

Speaker:

sure you're okay. But I think one of the things that really

Speaker:

kind of makes me worry about GenAI is that it's not

Speaker:

immediately obvious. Like, if you're at the airport, obviously, it's not a good look for

Speaker:

you. Like, and this has happened to me, where the app

Speaker:

says one thing, the screen says something else, and my ticket says yet a

Speaker:

third thing. So I'm not really sure where I'm supposed to go.

Speaker:

Generally speaking of those, the app tends to be more accurate.

Speaker:

But, that depends on the airline.

Speaker:

But with LLMs, the latency

Speaker:

between you seeing the data and the bad

Speaker:

consequences of the data tends to be a lot longer.

Speaker:

I'll use a $10 word today. I can't even say

Speaker:

it, but it's not immediately obvious. Right? There goes my

Speaker:

fail and my $10 word. But, like, there's a lot

Speaker:

more steps in it. Labyrinthine. I'll go with that one because I can say that.

Speaker:

But, like, so how do you provide

Speaker:

observability in something like LLMs where

Speaker:

the input and the output time tends to not

Speaker:

be quite as straightforward as an old school data pipeline?

Speaker:

Yeah. Such a great question. And maybe I'll just share some of my favorite

Speaker:

ones, if that's helpful. And I think I'll share them

Speaker:

because it's helpful to explain the gravity

Speaker:

of these issues. So, for example, you know, if you're in an airport and, you

Speaker:

know, the app doesn't say the same as what you have,

Speaker:

hopefully, you arrive early at airports, Frank. I don't know if you have enough time

Speaker:

to, like, figure out the discrepancy and you won't miss your flight. Right?

Speaker:

But oftentimes, those things can lead to to really big disasters.

Speaker:

Even pre gen AI. So I think this was in 2020.

Speaker:

Unity, which is a gaming company, they had one schema

Speaker:

change, resulting in a hundred million dollar loss.

Speaker:

Their stock dropped 37%. Oh my gosh. Pretty

Speaker:

meaningful. Right? Fast forward, I think this was

Speaker:

2023 or 2024,

Speaker:

but not so much related to AI yet.

Speaker:

Citibank was hit with a $400,000,000 fine for

Speaker:

I remember that. For data quality practices, for lack

Speaker:

of data quality practices. So think about all the regulatory

Speaker:

industries like health care, financial services,

Speaker:

like, you know, wherever there's, like, PII and,

Speaker:

and, like, you know, the

Speaker:

implications there are pretty grave. Some fun examples from more recently.

Speaker:

I don't know if fun. I shouldn't call them fun. Some other examples from

Speaker:

yeah. You mentioned Chevy. So I think there was a user

Speaker:

that convinced a chatbot to sell the Chevy Tahoe

Speaker:

for $1. I commend the user for being able to

Speaker:

do that, but that is terrible. Right? That's terrible

Speaker:

that, that happened. And that chatbot went down

Speaker:

the next day. They took it offline the next day. I think it was

Speaker:

in Fremont, California, so not that far from the bay.

Speaker:

Yeah. So right. So that's pretty pretty consequential.

Speaker:

I'll just give another, like, example. This is my favorite example. This one

Speaker:

went viral on X a couple months ago. Someone googled, what should I

Speaker:

do when cheese is slipping off my pizza? And Google responded,

Speaker:

oh, you should just use organic superglue.

Speaker:

Great answer. They had some really good gaffes.

Speaker:

There was the eat one rock a day to get your

Speaker:

minerals and stuff like that. Yeah. So I I

Speaker:

love that because that's an example of where, like, the prompt was

Speaker:

fine, the context was probably fine, the model was

Speaker:

fine, but the model output was totally not fine.

Speaker:

Right? Right. And so and by the way, maybe Google can get away with it

Speaker:

because it's Google, but, like, 99.9% of brands can't get

Speaker:

away with the mistakes. Right? And so what, you know, what

Speaker:

do you do? How do you provide observability in in that world? What does that

Speaker:

look like? First, I'll just say, I think

Speaker:

there's still human in the loop, and there will be. So, actually, you know,

Speaker:

it's interesting going back to 2019 when we started the company. People would tell us,

Speaker:

oh, you know, I have this important report that my CEO looks at.

Speaker:

But before they look at it, I have, like, six different people looking at the

Speaker:

report with, like, you know, sets of eyes to make sure that the data is

Speaker:

accurate. So, like, people use manual stuff back then. Today, what I

Speaker:

hear is, I was just speaking with this head of AI in Silicon Valley,

Speaker:

and I was like, how do you make sure the answers are accurate? And they

Speaker:

were like, well, we have someone sifting through dozens, hundreds of

Speaker:

responses every single day to make sure they're accurate. So I don't think human in

Speaker:

the loop evaluation is going anywhere. There's more advanced techniques, you know,

Speaker:

comparing to ground truth data, using LLM

Speaker:

as a judge. There are various sorts of things that we can do, but I

Speaker:

think the human isn't going away. In terms of observability,

Speaker:

I talked before about this sort of framework

Speaker:

where, you know, data issues can really be traced back

Speaker:

to these four core root causes, and I think it's

Speaker:

important to have observability for each in in sort of this world.

Speaker:

So the first I mentioned is data. And so by that, I mean,

Speaker:

you know, let's use another example. Credit Karma, for example,

Speaker:

has a financial advisor chatbot where, basically, they take in information

Speaker:

about you that they have, you know, like, what kind of car you

Speaker:

have and, you know, where you live and whatnot, and then

Speaker:

they make financial recommendations based on that. If the

Speaker:

data that they are ingesting from third party data is late or isn't

Speaker:

arriving or is incomplete, that messes up everything downstream. So one

Speaker:

root cause can be the data that you're ingesting is just wrong. Maybe it's all

Speaker:

null values, for example. The second can

Speaker:

be due to a change in the code. So the code could be, like, a

Speaker:

bad schema change, like in the Unity example. It could be a change

Speaker:

in the code that's actually, being used for the

Speaker:

agent. Really, a code change can happen anywhere. And, by the

Speaker:

way, not necessarily by the data team. It can happen by an engineering team or

Speaker:

someone else. It may have nothing to do with the data estate. Right? So

Speaker:

code changes can contribute. The third is system.

Speaker:

A hundred percent of systems fail. What do I mean by system? I

Speaker:

mean system is, like, basically the infrastructure that sort of runs all these jobs.

Speaker:

So this could be, like, an Airflow job that fails or a dbt job

Speaker:

that fails. You know, again, a hundred percent of systems fail,

Speaker:

and so you would definitely have something that goes wrong in systems.

Speaker:

And then the fourth is you could just have the model output be wrong, kinda

Speaker:

like with the cheese-on-pizza Google example. And

Speaker:

so when we think about what it means,

Speaker:

what does observability mean in this age, I think it has to

Speaker:

have coverage for all four of those things. And here's the problem. It oftentimes

Speaker:

includes all four together. And, you know, it's typically on

Speaker:

a Friday at 5PM. You're just about done, and then

Speaker:

everything breaks at the same time.
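
One hedged way to encode that four-bucket framework in code, purely as an illustration. The incident fields and thresholds below are hypothetical, not how Monte Carlo actually triages; the point is only that every alert gets walked through the same four root-cause buckets, and an incident can land in several at once.

```python
from enum import Enum, auto

class RootCause(Enum):
    DATA = auto()          # bad, late, or incomplete inputs (e.g. all nulls)
    CODE = auto()          # schema change, bad deploy, transformation bug
    SYSTEM = auto()        # failed Airflow/dbt job, infrastructure outage
    MODEL_OUTPUT = auto()  # pipeline fine, but the model's answer is wrong

def triage(incident):
    """Walk the four buckets; an incident can hit more than one."""
    causes = []
    if incident.get("input_null_rate", 0.0) > 0.5 or incident.get("feed_late"):
        causes.append(RootCause.DATA)
    if incident.get("recent_schema_change") or incident.get("recent_deploy"):
        causes.append(RootCause.CODE)
    if incident.get("failed_jobs"):
        causes.append(RootCause.SYSTEM)
    if incident.get("eval_score", 1.0) < incident.get("eval_floor", 0.8):
        causes.append(RootCause.MODEL_OUTPUT)
    return causes
```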

Speaker:

That's an interesting point. And you also used a term

Speaker:

a couple of times which, I can count on one hand how many

Speaker:

non-Microsoft people have used this term:

Speaker:

data estate. And I'm just curious about it. I know where I picked it up:

Speaker:

Microsoft. No, no, no. Like, I mean, I always

Speaker:

thought it was a, you know, Microsoft invention. I don't think it is.

Speaker:

But, like, where did you pick up that term? Because, seriously, you

Speaker:

were, like, the third or maybe fourth person who has

Speaker:

never worked for Microsoft, never worked with Microsoft. I mean, I don't know if

Speaker:

you work with Microsoft, but whenever I hear someone say

Speaker:

data estate publicly, I'm like, so who'd you work for at Microsoft? What division?

Speaker:

Like, like Oh, wow. Yeah. It's like that. And at first, I

Speaker:

didn't like, I'll be honest, I didn't like the term at all. But eventually, I

Speaker:

kinda grew to like the term because there's a lot behind it, and I'd

Speaker:

be curious to get, like, one, where'd you

Speaker:

pick that up? And then two, what does it mean to

Speaker:

you? Like, what does that term data estate mean to you? Great question. For

Speaker:

what it's worth, I actually didn't like it either. For the record, I didn't even

Speaker:

like data observability to begin with, Mhmm, to be totally honest. Really?

Speaker:

Yeah. English is my second language, and observability was such a difficult word to

Speaker:

pronounce. When we started the, you know,

Speaker:

the company and the category, we had to give it a name. So we

Speaker:

didn't really know. You know, we coined the term data

Speaker:

downtime, you know, as a corollary to application downtime. We thought maybe

Speaker:

data reliability. There are lots of

Speaker:

options. At the end of the day, I always try to gravitate towards where

Speaker:

my customers are, so whatever language my customers use. And so customers

Speaker:

started using the word observability, so I started using that too. And same with data

Speaker:

estate. They started using the data estate sort of as a language. And so

Speaker:

Interesting. Full disclosure, I have no

Speaker:

ties to Microsoft, but I've just heard

Speaker:

mostly enterprises sort of think about that. I think my understanding,

Speaker:

you know, of what they mean is, you know, wherever

Speaker:

you store, aggregate, or process data. And so that, you know, can

Speaker:

include, you know, upstream

Speaker:

sources, upstream data sources. But, you know, it could be,

Speaker:

like, an Oracle or SAP database. It could be data

Speaker:

lakehouse or data warehouse like Snowflake, Databricks,

Speaker:

AWS Redshift, S3, all the

Speaker:

way to wherever you're consuming that. That could be a BI report. You know, Power

Speaker:

BI. Sorry, Microsoft.

Speaker:

Right, Looker, Tableau, you know,

Speaker:

various, various options. And,

Speaker:

honestly, the, you know, the most common enterprise has all of

Speaker:

the above in some shape, form, or fashion. And so to sort

Speaker:

of include all of that, I think

Speaker:

some of the thesis that we have around observability is that, by the way,

Speaker:

each of those by themselves has some concept of observability.

Speaker:

Right? Like, you

Speaker:

can, for example, with Snowflake, you can set up some basic,

Speaker:

sort of checks, if you will, like a sum check or whatever. Right?

Speaker:

You could do that in Snowflake. However, we think that observability

Speaker:

needs to be sort of third party and to be end to end. And,

Speaker:

again, that draws on the software corollary. So,

Speaker:

you know, like, AWS has CloudWatch, for example,

Speaker:

but that's probably not sufficient for whatever you're building. You're probably

Speaker:

gonna use, again, like, New Relic or Datadog to connect

Speaker:

across the board to, you know, a variety of

Speaker:

integrations. Right? They have hundreds. So that's what I think about when I

Speaker:

say data estate. But it's a great question. It's definitely not my

Speaker:

word. No. I was just curious. Like like, you know,

Speaker:

because, first, I hated the term too. Right? And maybe it's

Speaker:

Stockholm Syndrome. I don't know. But,

Speaker:

the more I kind of sat on it and kind of digested it, I was

Speaker:

like, I like it because it explains, like, you know, historically.

Speaker:

Right? Like, an estate is, you know, whoever

Speaker:

owned the land got to call the shots and whoever called the shots owned the

Speaker:

land. Like, there was a very, you know, you grew the food, you cut

Speaker:

down the trees, you, you know, you mined for, I think the Minecraft

Speaker:

movie is coming out. So you mined for all these things. Right? My kids are

Speaker:

into it. But, like, it's

Speaker:

really kinda, it's just the idea of seeing it like it's land. It's kinda

Speaker:

like land. It's kinda like a natural resource. It's not really natural, but it is

Speaker:

a resource. Right? And if I say unnatural resource, that's really weird. But it's a

Speaker:

resource. Right? And you have it. You already have

Speaker:

it. You either develop it or you don't. And, you know, do

Speaker:

you, you know, do you grow food on it? Do you, you know, like so

Speaker:

see, I I liked it because it was the idea that it's already there. Right?

Speaker:

Mhmm. And it might be in forms you don't really think about. Right? Like,

Speaker:

you know, PDFs in an SMB share somewhere.

Speaker:

Right? Mhmm. I mean, that's part of your data estate. Yep. Right?

Speaker:

And it's that's how I kinda, like, came to terms with it. And,

Speaker:

like, I really kinda like it because it helps you to think holistically about data

Speaker:

because I think a lot of business decision

Speaker:

makers and even technical decision makers don't see data as a

Speaker:

resource. I think that's changed

Speaker:

over the last maybe five, six years.

Speaker:

But it really used to be something where they didn't see

Speaker:

it as a resource they could mine, that they could get value out of. Right? The

Speaker:

smart people did. But, for the most part That's

Speaker:

right. Yeah. You had to convince them. Right? Exactly.

Speaker:

It sounds like based on what you say because, like, you know, my wife works

Speaker:

in IT security. Right? So we're a two engineer

Speaker:

household. So the kids are super nerds. But, like, I was telling

Speaker:

her after ChatGPT came out, I was all excited about it. And I was

Speaker:

telling her about how this works. I was like, you give it this big corpus

Speaker:

of data, and it chews through it, and it comes up with these vectors

Speaker:

and stuff like that. And then she looked at me and it's like, so all

Speaker:

the training data is now a massive attack surface.

Speaker:

And, yep. That's just why I love my wife. So I

Speaker:

am wrong. She's never wrong. Well, that's true. But at

Speaker:

first, I was thinking, but you're missing. And then I was

Speaker:

gonna say you're missing the point, which is never a good thing to say.

Speaker:

But, like, midway through, I was like, oh my gosh,

Speaker:

she's right. Oh my gosh. She's right. So then

Speaker:

when I started talking to other data science and AI types, and I was like,

Speaker:

but but don't you think this could be, like, a big attack surface? I look

Speaker:

like that meme with the guy from It's Always Sunny in Philadelphia,

Speaker:

where he had, like, the conspiracy thing. Like, I swear, I was

Speaker:

like that meme. Yeah. And, you know, if you

Speaker:

look at, I think OWASP has, like, the top 10 vulnerabilities for LLMs, and

Speaker:

that's either number two or three. Right? So it's

Speaker:

kinda like there's a fine line between,

Speaker:

like, thinking too much about the problem, but also kind of thinking ahead of the

Speaker:

problem. I don't know. No. Oh, I think you

Speaker:

cut off a little bit, Frank, but, Andy,

Speaker:

to me, that resonates a lot, and I think it's sort of really the overlap

Speaker:

between data and engineers. And, by the way, like, we didn't even talk

Speaker:

about security. Like, all these concepts also exist in security.

Speaker:

Right? And I think in the same way that we sort of manage, like, you

Speaker:

know, sev zero, sev one issues in security engineering, data

Speaker:

issues should be treated the same way. You should have a framework to understand what's

Speaker:

a sev zero, what's a sev one for data issues. It should be

Speaker:

connected to PagerDuty. Like, people should wake up in the middle of the night

Speaker:

when you have data issues. I think that's right. It's

Speaker:

improving, but, we're not quite there. It'll

Speaker:

happen. No. You're right, though. Like, they don't think about this in

Speaker:

terms of, I wouldn't say it's not disciplined. Sorry,

Speaker:

Andy. I cut you off. No, but in my experience, we've talked to data engineers. Sorry,

Speaker:

Andy. And I am a former data engineer

Speaker:

myself. Like, I thought of it in terms of schema structures and pipelines.

Speaker:

Mhmm. Not necessarily securing those pipelines. Right? Mhmm. Sorry,

Speaker:

Andy. Go ahead. No, I was curious. I wanted to shift back

Speaker:

to you. You mentioned the four areas that your software,

Speaker:

your AI and observability software, looks over. What

Speaker:

happens when it detects something amiss?

Speaker:

Great question. So not even talking about Monte Carlo specifically, but rather

Speaker:

an observability solution. I think an observability solution needs to

Speaker:

have coverage or an observability approach, by the way. Like, some people build this

Speaker:

in house. An observability approach should take into consideration

Speaker:

your data estate, should take into consideration, right, your

Speaker:

entire data estate. I think, oftentimes, the mistake is people will even if they

Speaker:

build it in house or do anything else, they'll really just focus on, like, the

Speaker:

data and their data lake or the data in a particular report. Like, that's

Speaker:

not sufficient. Right? It it just isn't. And so people waste

Speaker:

a ton of time trying to understand, like, what's wrong and where. So I think

Speaker:

the first is, like, you need visibility across the data

Speaker:

estate, which hopefully we've defined as an unnatural resource that should be

Speaker:

managed securely. And I think that's right because,

Speaker:

by the way, Monte Carlo doesn't do the security

Speaker:

part, but I similarly believe that in the same kind of diligence

Speaker:

that we apply in software engineering, you want data products to

Speaker:

be reliable but also secure, scalable,

Speaker:

like all those concepts should adapt. By chance, we happen to

Speaker:

focus on the reliability and observability part, but all the other,

Speaker:

principles of software engineering should apply.

Speaker:

We specifically don't do it, but very much believe that should be

Speaker:

the case. But back to your question, you

Speaker:

know, so so what happens when there is an issue?

Speaker:

Very similar to workflow that you might find in Datadog,

Speaker:

New Relic, and and PagerDuty. So there is an alert that goes out,

Speaker:

often you know, in whatever flavor of choice. If you're an enterprise that has a

Speaker:

Microsoft estate, this is likely Microsoft Teams. If not, this would mean

Speaker:

Slack or an email. Or, you know, some teams like to have it connected

Speaker:

to Jira and PagerDuty for sev zeros or sev

Speaker:

ones. And, you know, the first thing

Speaker:

that people will do is start. You know, typically an analyst,

Speaker:

and I was an analyst prior, the first thing you start

Speaker:

asking yourself is, why the hell is the data wrong?

Speaker:

Right. Yeah. You're like, well, was the report on time?

Speaker:

Was the data accurate? Was it complete? You start going through it all,

Speaker:

and then you basically come up with hypotheses. And then you start

Speaker:

researching those hypotheses, and you're like, well, let me

Speaker:

trace the data all the way through all the steps of the transformation

Speaker:

and start looking. Was the data okay here? Yes. Check. Okay. Move on. Was it

Speaker:

data right there? You literally start this, like, recursive process. Gotcha.

Speaker:

Before we started the company, I used to do this all manually. So I remember,

Speaker:

like, I would go into, you know, a room. Maybe you did this

Speaker:

too. And, like, on a whiteboard, I would start, like, basically mapping out

Speaker:

the lineage. Okay, this broke here. Was the data here okay? Let's

Speaker:

sample the data and make sure it's okay. Okay, move on. Like, literally,

Speaker:

we had this, every morning, actually, you know, this

Speaker:

became such a problem because we were so reliant on this particular

Speaker:

dataset that every morning, me and my team would wake up, and we would basically

Speaker:

go step by step and diligently, like, make sure that the data was accurate,

Speaker:

which I felt was, like, totally, you know, crazy.

Speaker:

So, you know, I think, particularly in Monte

Speaker:

Carlo, or, like, what observability does, is provide the

Speaker:

information that you need in order to troubleshoot and understand where the issue is. And

Speaker:

so we can surface information like, hey, at the same time that

Speaker:

this dataset, you know, maybe the percentage of null values in a

Speaker:

particular field was inaccurate, there was also a pull

Speaker:

request that happened. Maybe those are correlated, actually. Gotcha.

Speaker:

And maybe, actually, you can also

Speaker:

do a code analysis. So, like, basically, you know, as analysts,

Speaker:

what we used to do is, like, sift through lines of code and try to

Speaker:

see what changed. Why not surface to you that, like, there was

Speaker:

a particular change in the, you know, name of a field

Speaker:

at the same time, as an example? So bringing all that data into one

Speaker:

place can help you sort of troubleshoot that.
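
As an illustrative sketch of that correlation step, lining up an anomaly's timestamp against recently merged pull requests, something like the following would do. The dict shape and the 24-hour window are assumptions, not the product's actual logic.

```python
from datetime import timedelta

def correlated_code_changes(anomaly_time, pull_requests, window_hours=24):
    """Return pull requests merged close enough to the anomaly to be
    worth surfacing next to the alert. `pull_requests` is a list of
    dicts like {"id": 123, "merged_at": datetime, "touches": [...]}."""
    window = timedelta(hours=window_hours)
    return [
        pr for pr in pull_requests
        if abs(pr["merged_at"] - anomaly_time) <= window
    ]

# A null-rate spike on a field at 09:14, plus a PR renaming that field
# merged at 08:50, would come back as a likely culprit to show the
# on-call analyst alongside the alert.
```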

Speaker:

And sorry for another LLM plug, but you can actually have

Speaker:

an LLM do this for you, which is pretty sick. It's, like, an early

Speaker:

beta test for us. We haven't released it yet. But, basically, what we're

Speaker:

testing internally is, for data incidents,

Speaker:

there's basically, like, a troubleshooting agent that

Speaker:

spawns agents for each of the hypotheses.

Speaker:

Yeah. It's really cool. There's an agent that

Speaker:

looks into, like, the code change, the data change, the system

Speaker:

change, and then it does it recursively on

Speaker:

all those tables. So you can actually run up to a hundred agents in under

Speaker:

one minute. And then there's a larger LLM that takes all that information

Speaker:

and summarizes it and synthesizes it. So, again, early days. We're still

Speaker:

building it. Very cool. But the early results are really cool.
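
A toy version of that fan-out pattern, one task per hypothesis gathered and then summarized, might look like this with asyncio. The check callables and the summarizer are placeholders, not the product's actual agents.

```python
import asyncio

async def run_hypothesis(name, check):
    """One 'agent': investigate a single hypothesis, report back."""
    finding = await check()
    return name, finding

async def troubleshoot(hypothesis_checks, summarize):
    """Fan out one task per hypothesis (data change, code change,
    system change, one per upstream table, ...), gather the findings
    concurrently, then hand everything to a summarization step."""
    tasks = [run_hypothesis(name, check)
             for name, check in hypothesis_checks.items()]
    findings = await asyncio.gather(*tasks)
    return summarize(dict(findings))

# Hypothetical usage:
# report = asyncio.run(troubleshoot(
#     {"code_change": check_code, "data_change": check_data},
#     summarize=ask_llm_to_synthesize,
# ))
```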

Speaker:

Yeah. It's like basically turbocharging your data analysts and your data

Speaker:

stewards. Sorry, I got all excited. No, it is. That's really

Speaker:

cool. Fascinating, and I love that you're excited about it. One of the

Speaker:

jokes that I make when I'm working with my kids on something, if

Speaker:

they nail something, I'll say to them, you know,

Speaker:

something similar to this: if you

Speaker:

can only run a hundred in one minute, I guess, if that's the best

Speaker:

you can do, we'll just have to live with it. Yeah. Exactly.

Speaker:

That's an amazing stat. Yeah. Yeah. That is interesting. And I

Speaker:

also think too that, like, observability could help

Speaker:

with the security story. Right? Because if, you know, you're looking at a

Speaker:

pipeline and it's like, hey. Weren't there a bunch of

Speaker:

sketchy looking IPs, like, poking around our system about the time that this

Speaker:

pipeline ran? Maybe the rest of the data that goes out of that pipeline

Speaker:

run is a little bit suspicious too. Yeah. A hundred

Speaker:

percent. Like, you know, for example, we work with a,

Speaker:

call it, delivery service, and there was a very

Speaker:

suspicious tip, a very suspicious

Speaker:

tip amount that was given. Like, you

Speaker:

know, you can imagine, you know, the range of tips can be between x

Speaker:

dollars and y dollars, and suddenly that's, like, you know,

Speaker:

10,000 times y, like, 10,000 times the upper limit.

Speaker:

Yeah. You know, that triggers a suspicious-activity alert. It's

Speaker:

not a normal tip, and it's not a mistake. It's actually, you know, a security

Speaker:

issue. So that's an example.
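
For flavor, a tiny robust-outlier check in the spirit of that tip example. It uses the median absolute deviation, so one enormous value cannot hide by inflating the statistics it is judged against; the cutoff is an arbitrary illustration.

```python
from statistics import median

def suspicious_amounts(amounts, cutoff=10.0):
    """Flag values wildly outside the typical range."""
    med = median(amounts)
    # Median absolute deviation; tiny floor avoids division by zero.
    mad = median(abs(a - med) for a in amounts) or 1e-9
    return [a for a in amounts if abs(a - med) / mad > cutoff]

# suspicious_amounts([3, 5, 4, 6, 5, 80000]) -> [80000]
```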

Speaker:

Yeah. Interesting. Yeah. I love the anomaly detection aspect of that. I mean,

Speaker:

it's something that we've been doing for a long time,

Speaker:

but then wrapping it with automation and then

Speaker:

combining that automation with what you just described with all the

Speaker:

agents running down all of the permutations,

Speaker:

that just sounds amazing. Yeah. It's really cool. I can't

Speaker:

take credit. This isn't me. It's it's it's my team. But,

Speaker:

but I I was like, woah. It's like a hundred bars

Speaker:

running at the same time under one minute. That's amazing. There you go. It's really

Speaker:

cool. Probably smarter than me. But yeah.

Speaker:

That is so awesome. That is cool.

Speaker:

So we generally have kind of our

Speaker:

stock questions that we ask, if you're interested in doing them.

Speaker:

We're not Mike Wallace. We're not trying to, I don't even think

Speaker:

anyone gets that reference anymore, but we're not trying to catch you in a,

Speaker:

I gotta come up with a new one, in a thing. But it's mostly, like,

Speaker:

how'd you find your way in. The first one is, I'll get the rest of

Speaker:

them up for you in a second. But the first one is, how'd

Speaker:

you find your way into data? Did you find the data

Speaker:

life, or did the data life find you? Oh, that's such a great

Speaker:

question. You know, it's funny.

Speaker:

I grew up, you know, my mom is a meditation and dance

Speaker:

teacher and my dad is a physics professor. And so,

Speaker:

yeah, and so I, you know, grew up with very sort of, like, yin

Speaker:

yang in my family, if you will.

Speaker:

At a very early age, I used to, like, hang out in my dad's

Speaker:

lab and, like, do scientific research and stuff like that. Or, you know,

Speaker:

at a very young age, my memories are, like, sitting in a

Speaker:

cinema, watching a movie with my dad and trying to, like, guesstimate how

Speaker:

many people are sitting in the in the audience.

Speaker:

Right? Yes. Just like, you know, I think for, like, a five year

Speaker:

old, it's sort of like a fun thing. But, you know, throughout my

Speaker:

adulthood, like, I always sort of had that in the background. And,

Speaker:

you know, I think later on in life, I sort of always gravitated towards

Speaker:

data. And when I decided to start a company,

Speaker:

I was actually debating between various areas

Speaker:

like IT and actually blockchain, or, you know,

Speaker:

crypto for a little bit, and data. I think at the end of the

Speaker:

day, like, my heart was really in data. If I look at, like,

Speaker:

the next ten, twenty years, it's pretty clear to me that data is

Speaker:

gonna be, I think it still is, the coolest party, and I think it

Speaker:

will be the coolest party to be in. And I personally,

Speaker:

like, you know, it's funny. Throughout my

Speaker:

career, I've also learned the limitations of data. Right? So data can

Speaker:

tell you whatever story you want. It could tell you, you know, for every question,

Speaker:

it can give you a yes, and you can also tell a no story.

Speaker:

Right? So there are also limitations to data,

Speaker:

but I have always been fascinated

Speaker:

by data and the space. So can I say both? That's

Speaker:

Yeah. I mean, that's fair. That's fair. Good answer. That's fair. Yep. So

Speaker:

what's your favorite part of your current job?

Speaker:

Oh, that's hard to choose. I love my job.

Speaker:

I just love it. I think, you know,

Speaker:

the ability to work with customers and actually, like, change the way they

Speaker:

work, I think that's probably the biggest gratification that I

Speaker:

get, you know, from from my my career. Like, the fact that you can

Speaker:

actually work on something that matters is pretty insane. You know? And when I think

Speaker:

about, like, the future, I'm like, what? So data is gonna be wrong? Like, we're

Speaker:

just gonna be, you know, making decisions off of wrong data? Like, what? I don't

Speaker:

wanna live in that world. You know? And so Yeah. I think

Speaker:

there's something that's, like, really fulfilling in helping, you know, drive a mission that

Speaker:

I believe in that has an impact on customers. And, you know, when customers will

Speaker:

tell me, you know, I started sleeping at night because I

Speaker:

know that, like, I have some coverage for my data. I'm like, yeah. Oh, wow.

Speaker:

I'm glad you're sleeping. You know? Like, good for you. I love

Speaker:

sleeping. What a cool thing to hear. Yeah. Exactly. I

Speaker:

think that's probably, you know, maybe one part. And then the second is, like,

Speaker:

just working with an amazing team. You know, I spend most of my

Speaker:

day, maybe kinda like you guys, like, hanging out, having fun,

Speaker:

laughing. So, you know, I'm very

Speaker:

grateful that I get to work with the smartest people on on

Speaker:

worthwhile challenges. Oh, very cool.

Speaker:

We have, three complete these sentences. When I'm not

Speaker:

working, I enjoy blank. Sleeping.

Speaker:

Yeah. We recently

Speaker:

have added, we had two kids, and we adopted a cousin. And

Speaker:

I forgot how draining a toddler can be. And I'm

Speaker:

eight to 10 years older than the last time I had a toddler, so

Speaker:

it's like, I have two

Speaker:

kids, two under four. So I

Speaker:

respect the sleep even more. I can't

Speaker:

even wrap my head around that. It gets better. I can say

Speaker:

that. It's my own role. I appreciate that.

Speaker:

So our second one is I think the coolest thing in technology

Speaker:

today is blank. The coolest thing in

Speaker:

technology, I think the pace of innovation. I think that's really

Speaker:

freaking cool. You know, you can, like, work at a problem today and you're like,

Speaker:

you can't solve this. Two days later, a new model will come out.

Speaker:

Boom. You're done. So it's harder. Right? The bar is

Speaker:

higher in order to, like, actually, it's

Speaker:

harder to know what to bet on. It's harder to know what the future will

Speaker:

look like, but it's a lot more exciting. So I'm in it.

Speaker:

Cool. Our third and final complete sentence is, I look forward

Speaker:

to the day when I can use technology to blank.

Speaker:

I was always a big fan of teleportation. I think teleportation is really

Speaker:

freaking cool. That would be nice. Can't wait for that. That would be cool.

Speaker:

That would be cool. You know, you're not the first person to answer with that.

Speaker:

Oh, really? Yeah. It's pretty cool. Pretty cool. Sorry.

Speaker:

Number six is share something different about

Speaker:

yourself. Something different.

Speaker:

Yeah. Something different. Let's

Speaker:

see. I mentioned I have two kids. I

Speaker:

meditate when I don't sleep. I like to meditate.

Speaker:

I, what else? I'm married to

Speaker:

my cofounder. Oh, wow. So we,

Speaker:

yeah, we're fortunate to share our lives both at work and at

Speaker:

home. That is cool. Yeah. I can

Speaker:

imagine that would work out really well or not. Like, there's not a lot of

Speaker:

middle ground there. High risk, high reward. High risk, high reward. I

Speaker:

get, like, you know, my wife, she's a

Speaker:

federal employee, and she's, you know, reevaluating what her career

Speaker:

futures look like, you know, and she's like, you

Speaker:

know, I was like, well, you know, you could help. You can start

Speaker:

a new podcast. I can help you with that. She's like, yeah. But then I

Speaker:

have to work with you. And, like, I know what she meant. I know how

Speaker:

it sounds. I know how it sounds, but I know what she means. Like, so

Speaker:

when she did work from home, like, there was literally a, like, an entire floor

Speaker:

between us because Yep. Like, it's too loud. I'm too loud. Yeah. Yeah. Yeah.

Speaker:

Yep. We're very loud too. So

Speaker:

where can folks find more, learn more about, Monte

Speaker:

Carlo and, and and what you're up to?

Speaker:

Probably, the place where I hang out is LinkedIn. So,

Speaker:

I know we just got connected on LinkedIn. That's great. Probably follow me

Speaker:

on LinkedIn or, honestly, reach out to me directly:

Speaker:

Moses@MonteCarlodata.com. I hope I don't get a lot of phishing now because

Speaker:

of that. But Well, hopefully, make sure it's the right account because we found out

Speaker:

in the process that there's there was another suspect in

Speaker:

suspicious looking account. And I also think that for our

Speaker:

listeners, it's worth pointing out that I think that people have realized that LinkedIn is

Speaker:

a major security vector, because I've been getting a lot of

Speaker:

weird ones a lot more lately. Now, I don't think it's related to

Speaker:

the refrigerator scandal. Andy and I will do a whole show on that

Speaker:

later, because there's actually an interesting AI component to that. Okay.

Speaker:

Good to know. And finally, last but not least, Audible

Speaker:

is a sponsor of the podcast. Do you do audiobooks? If

Speaker:

so, recommend one. Otherwise, just recommend a good book.

Speaker:

A good book. Let's see.

Speaker:

Thinking in Bets by Annie Duke.

Speaker:

Professional poker player. Interesting. It's about how

Speaker:

lessons from poker can be applied in life

Speaker:

and in business. Interesting. I

Speaker:

once worked at a financial services company, and one of the

Speaker:

big shots used to play online poker. And

Speaker:

they're, not on company money, but on company time. And a

Speaker:

lot of people took a dim view of that.

Speaker:

Rightfully so. But he was

Speaker:

making so much money, you know, the people that mattered didn't take a dim view of

Speaker:

it. When he stopped making so much money, everyone took a dim view of

Speaker:

it. And that does end the story. It

Speaker:

is on, let me see if it's an audiobook. Oh, it is

Speaker:

an audiobook. It is an audiobook. Awesome. I'm gonna add that to my

Speaker:

list. I'm done. Okay. And, you know, they are a sponsor.

Speaker:

So if you go to thedatadrivenbook.com, you know,

Speaker:

you'll get a free audio book on us. And, you know, if you sign up,

Speaker:

we'll get enough to, you know, buy a coffee.

Speaker:

Maybe not tip them $8,000, but, you know,

Speaker:

we'll get enough for a Starbucks maybe. Maybe. Yeah.

Speaker:

I just tested the link, Frank. Every now and then, we had trouble early on

Speaker:

with the link coming and going. So when you saw me turn away

Speaker:

a minute ago when Frank started this question, that was me typing

Speaker:

in. It worked. It worked.

Speaker:

It's always DNS. Always. It's interesting

Speaker:

you mentioned that. I read an article. Actually, it was a newsletter recently that talked

Speaker:

about, betting being the first stage

Speaker:

in, kind of, the path to minimum viable products. And

Speaker:

I thought, now that's curious. And, again, I haven't

Speaker:

read the book. I will listen to it. But the idea of

Speaker:

engaging your team. I manage a team as well.

Speaker:

And engaging the team by having them do

Speaker:

interesting things and taking these very large bets

Speaker:

that look nearly impossible,

Speaker:

perhaps. And it's like you said, the problem

Speaker:

comes up, and you're thinking this is unsolvable. And two days

Speaker:

later, it's solved. And over and over again, I've had that

Speaker:

experience, but I never tied it to the concept of

Speaker:

bets. And I saw this newsletter that talked about doing

Speaker:

that first. And it reminded me a little bit

Speaker:

of Collins talking about the big hairy

Speaker:

goals, you know, back in the day. It's very

Speaker:

similar to that maybe in concept. I don't know. I'll have to listen to the

Speaker:

book and check it out, but I was intrigued by the newsletter. Yeah.

Speaker:

There are interesting concepts. Like, I think some of the ideas are, I mean, even

Speaker:

when you start a company or sort of, you know, start working on a team,

Speaker:

like, you basically have a set of cards, which are, like, your strengths,

Speaker:

your weaknesses. And so how do you play your cards? Like, you

Speaker:

know, if you wanna win a round, you can't play with someone else's cards.

Speaker:

You are what you are. And so the best thing you can do is play

Speaker:

with your cards. I think that's true for a team solving a problem or a startup

Speaker:

or whatever it is. I love that. Yeah.

Speaker:

Interesting. Any final thoughts? This was so fun. Thanks for

Speaker:

having me. Thank you. And you did mention kinda offhand early

Speaker:

on. I don't remember if it was in the green room or not. You have

Speaker:

a podcast yourself? I do not have a podcast myself.

Speaker:

Alright. That was my mistake. Maybe I'll start it tomorrow. Okay. All

Speaker:

good. Life goal one day. There

Speaker:

you go. There you go. And with that, we'll let our AI finish

Speaker:

the show. And that wraps up another data packed episode of

Speaker:

Data Driven. A massive thank you to our brilliant guest, Barr

Speaker:

Moses, for taking us deep into the world of data observability,

Speaker:

sketchy LinkedIn impersonators, and the dark arts of tipping

Speaker:

anomalies. Who knew a dodgy schema change could cost more than

Speaker:

a luxury sports car? Now, dear listener, if you've made

Speaker:

it this far, you clearly have excellent taste. So why not

Speaker:

put that good judgment to work and leave us a rating and review on

Speaker:

whatever platform you're tuning in on? Apple, Spotify,

Speaker:

Pocket Casts, Morse code, however you get your fix. We'd love

Speaker:

your feedback. And dare I ask, are you subscribed?

Speaker:

I mean, you wouldn't want to miss out on future episodes filled with more

Speaker:

wit, wisdom, and the occasional fridge based conspiracy,

Speaker:

would you? Until next time, stay curious, stay

Speaker:

observant, and for heaven's sake, keep your data tidy.