Speaker:

Essentially, it's a swarm of models, AI models that

Speaker:

emulate human cognition and emotion and become highly

Speaker:

predictive of behavior across populations. So you're

Speaker:

creating synthetic populations of people that are then situated

Speaker:

in context. Forget Personas, Jill Axline is building

Speaker:

synthetic populations that predict real human behavior and that changes

Speaker:

everything. Keep watching to learn how.


Speaker:

Hello, and welcome to Data Driven, the podcast. We explore the

Speaker:

exploding world of artificial intelligence, data science, and of

Speaker:

course, none of this would be possible without the underlying data

Speaker:

engineering. And with me on this road trip down the information

Speaker:

superhighway of the future and buzzwords

Speaker:

is my most favorite data engineer in the world. How's it

Speaker:

going, Andy? Hey, Frank. It's going pretty good. How are you? I'm

Speaker:

doing all right. I'm still wearing the hipster glasses because we're

Speaker:

recording this about three weeks after my concussion.

Speaker:

And as we were telling our guest in the virtual green room that

Speaker:

we kind of owe the show's name to a concussion.

Speaker:

So true. Folks who are longtime listeners know

Speaker:

the lore, so we won't bore them or waste any of our guest's precious

Speaker:

time. With us, we have Jill Axline, who

Speaker:

holds a Ph.D. and is the co-founder and

Speaker:

CEO of Mavera, which is an

Speaker:

interesting company. "Maverick era" is what I'm told it's short for.

Speaker:

So welcome to the show, Jill. Hey, thanks. So happy to be here.

Speaker:

Yeah. So you also have three kids and. I

Speaker:

have three kids. Andy has three. Three plus two.

Speaker:

Yes, that's. I think,

Speaker:

I think there's a correlation between number of kids and gray hairs.

Speaker:

I know, I have kids and five grandchildren, so there you go. But

Speaker:

I'm old. I'm just saying you have an age today,

Speaker:

you know. So

Speaker:

what does Mavera do and

Speaker:

what does brand and business mean? What does that mean in

Speaker:

high-growth companies? Brand and business.

Speaker:

I totally botched that. I'm sorry. I'll blame the concussion because I can do that

Speaker:

for another week or so. So what exactly does Mavera

Speaker:

do? Sure. So essentially it's a swarm of

Speaker:

models, AI models that emulate human

Speaker:

cognition and emotion and become highly predictive of behavior

Speaker:

across populations. So you're creating synthetic populations

Speaker:

of people that are then situated in context.

Speaker:

So as opposed to a model that's trained six months ago and

Speaker:

then is rapidly trying to iterate, its synthetic

Speaker:

database will actually update on a

Speaker:

second-to-second basis. So you always look at your population in

Speaker:

situ. Additionally, I would say

Speaker:

it provides a really strong pulse of what that population

Speaker:

looks like within the context of your business or your vertical.

Speaker:

Because we support it with a foundation of deep business context

Speaker:

that takes into account not just your business from the time that it

Speaker:

was instantiated, but it also is updating

Speaker:

temporally and it creates relational,

Speaker:

like relational connections across your business. So for instance,

Speaker:

if there's a marketing spend five years ago or about

Speaker:

the same time that you launch your flagship product or a secondary product,

Speaker:

it's going to show a lot of data on how the context

Speaker:

around that might have influenced your outcomes.

Speaker:

So I guess like long and short of it is you have

Speaker:

populations situated in context and wrapped around your business,

Speaker:

and you can use that pretty expeditiously to make

Speaker:

decisions in a much less expensive way than most market research

Speaker:

or, you know, strategy research, strategy based research.
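The architecture Jill describes, a synthetic population situated in a context that refreshes on every query, can be sketched roughly in Python. Everything below (the field names, the decision rule, the `poll` helper) is an illustrative assumption of mine, not Mavera's actual schema or logic:

```python
from dataclasses import dataclass

@dataclass
class SyntheticRespondent:
    """One member of a toy synthetic population. Each respondent combines a
    stable trait with a context snapshot passed in fresh on every query, so
    the population is always evaluated 'in situ'."""
    segment: str
    risk_aversion: float  # stable trait in [0, 1]

    def answer(self, question, context):
        # Toy decision rule: "buy" when optimism in the current context
        # outweighs this respondent's risk aversion.
        if question == "would_you_buy":
            return context["market_optimism"] > self.risk_aversion
        raise ValueError(f"unknown question: {question}")

def poll(population, question, context):
    """Fraction of the population answering 'yes' under this context."""
    votes = [p.answer(question, context) for p in population]
    return sum(votes) / len(votes)

pop = [SyntheticRespondent("asset_manager", r / 10) for r in range(10)]
# Re-running the same poll under a fresh context snapshot changes the answer.
print(poll(pop, "would_you_buy", {"market_optimism": 0.3}))  # → 0.3
print(poll(pop, "would_you_buy", {"market_optimism": 0.8}))  # → 0.8
```

The point of the sketch is only that the answer is a function of population plus current context, which is why updating the context second to second keeps the prediction fresh.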

Speaker:

It's almost like you're taking kind of like the SIMS

Speaker:

approach of having these individual entities, I wouldn't call them

Speaker:

agents because it doesn't sound like they're agents. It sounds like they're simulated entities, like

Speaker:

you said. Right, exactly. That's interesting. Is there like a.

Speaker:

That. That's an interesting approach because that does,

Speaker:

it probably doesn't completely insulate you from model drift, but it

Speaker:

probably does a good job of, well,

Speaker:

we're having a massive windstorm and it's, like, you know, negative whatever outside. In

Speaker:

Chicago, it's really cold. It's always sunny in Farmville, as

Speaker:

I like to tell Andy. But, but I mean, you can

Speaker:

insulate against a certain amount of cold, but you can't really stop it.

Speaker:

That's the right way to think about it. So you can't really stop model drift, but you

Speaker:

probably can prolong how long your

Speaker:

models are valid with this approach. That's correct. In

Speaker:

addition to that, something that I've pushed on because I've been an

Speaker:

advisor with this team for well over a year. And

Speaker:

since I'm a Ph.Dork and, you know, I'm always looking at evidence.

Speaker:

I was the original skeptic of synthetic

Speaker:

populations. In my last role at Morningstar, I built our market research

Speaker:

team. And when I was first introduced to the idea of

Speaker:

synthetic populations, I was like, you know, tons of skepticism.

Speaker:

I think the big thing here is they've built in a level of AI

Speaker:

governance around things like drift, but also to

Speaker:

model the difference between evidence and inference. And so

Speaker:

they're looking for confidence scores. They'll gather first party data

Speaker:

around your population and then create a synthetic data layer on top

Speaker:

of that. And a good example would say

Speaker:

asset managers like ice cream. Asset managers like cold

Speaker:

things. They like cold, wet things; they like cold, wet, sweet things. And then a

Speaker:

coefficient is assigned to each of those new synthetic data points. And so

Speaker:

while it makes a more robust data set in the

Speaker:

billions that allows it to draw inference, it's also accounting

Speaker:

for, again, what's based on evidence and what is

Speaker:

inference of the machine. And then there's also a governor across

Speaker:

this swarm of models. So it's going to call on the right model

Speaker:

for the right facet of human thinking or

Speaker:

feeling that it's trying to construct. And so

Speaker:

I think in doing that it creates safeguards around confidence. So

Speaker:

we produce confidence scores, and it will give a spread of opinion across

Speaker:

a population. So unlike a custom GPT or

Speaker:

a persona in some pre-existing platforms that are emulating

Speaker:

language, it's actually taking a look at

Speaker:

where there's entropy across emotional response and cognitive

Speaker:

response in this data set and what does that look like in the spread of

Speaker:

opinion for that audience. So it'll tell you the nature of the spread

Speaker:

and where that spread is happening. So now you can account for almost,

Speaker:

you know, sub segmentation within the population. And that might

Speaker:

look very different at the top of the funnel when we're looking at thought leadership

Speaker:

topics versus the bottom of the funnel in marketing where we're thinking of features,

Speaker:

functions, benefits, et cetera. And so

Speaker:

that allows at least marketers, but I think others,

Speaker:

anyone going to market, to really think about what is their message for the right

Speaker:

audience at the right time based on, you know, where they are in their

Speaker:

buyer's journey. And so that to me is a little bit

Speaker:

different because I would say the last facet of this is

Speaker:

the response stability. We're also providing a level of

Speaker:

test-retest reliability. Take ChatGPT:

Speaker:

recently, someone was flaming me because I've never made

Speaker:

caramelized onions. And so, you know, as a joke, he kind of went in and

Speaker:

said how many people who are 40 something, you know, like know how to make

Speaker:

caramelized onions? And these percentages swung

Speaker:

quite significantly from the first time he queried to the second time to the

Speaker:

third time. Whereas we're looking at population response stability

Speaker:

and modeling that, projecting it into the future and looking at the trend

Speaker:

line from the past on how this population would continuously

Speaker:

answer the question. So I kind of guess like when we think about model

Speaker:

drift, I think that's likely inevitable. But if you're

Speaker:

situating and updating with minute to minute context and then you're surfacing

Speaker:

some of these governance factors around what the Outputs are,

Speaker:

we're getting to a closer place where we can actually be collaborators

Speaker:

with the AI and govern it and then build,

Speaker:

you know, a greater level of trust is the hope.

Speaker:

That's interesting. I'm glad you addressed the skepticism because that was going to be my

Speaker:

next question. Like, how do you know this is real? How do you know that

Speaker:

it's accurate? The other question I had, and sorry, Andy,

Speaker:

I had a bunch of monster energy drinks today.

Speaker:

You could probably run different simulations, like in

Speaker:

parallel, right. Assuming you had the compute. So

Speaker:

you can see if this happens, if that happens, right. If there's

Speaker:

a recession, people are going to do this, go this way. If there's a boom,

Speaker:

if it kind of meanders somewhere in the middle. You're probably

Speaker:

only limited by what compute you have, right? I mean,

Speaker:

yeah, I mean, it's a credit based system. So, you know, you buy

Speaker:

credits like a tank of gas and it's going to, you

Speaker:

know, give you enough gas to, to build whatever it is you

Speaker:

want within limits. But I would say,

Speaker:

yeah, I don't think you're really

Speaker:

restricted in terms of what outputs look like on, on a scenario

Speaker:

analysis. I think, obviously, the more

Speaker:

data we have, let's call it for a specific company, when I was working at

Speaker:

Morningstar, that's 40 plus years of data on the back end in

Speaker:

that deep business context, that makes that prediction that much easier.

Speaker:

And so I think it also depends on what's coming into the model and

Speaker:

what its power is and its ability to be predictive. I

Speaker:

guess I should say that's cool. Because I think this is an interesting, it seems

Speaker:

like it's an interesting mix of kind of predictive modeling and

Speaker:

LLMs. Right. Because predictive models, I mean, they're not

Speaker:

new. Right. But I think

Speaker:

traditionally they're

Speaker:

very susceptible to drift. Right. But

Speaker:

I think also by simulating the individual actors, because a society

Speaker:

and economy, a customer base consists of, you know,

Speaker:

X number of, you know, not sovereign

Speaker:

but unique individuals that are going to have certain

Speaker:

personality traits. And some of those you kind of can

Speaker:

guess from. Like you said, you know, asset managers. Asset

Speaker:

managers, everybody likes ice cream, but asset managers probably really

Speaker:

like luxury cars. I'm going to go out on a limb. Right,

Speaker:

right. And probably how many luxury cars they have and which model

Speaker:

of luxury car they have is probably not going to

Speaker:

determine how successful they are, but there's probably a correlation between

Speaker:

how successful they are versus like how not. You know, I don't

Speaker:

know. I. If you're an asset manager and you're driving around the Hyundai,

Speaker:

there's gotta be a good story behind that. That's right.

Speaker:

I agree with you. And I think again, when

Speaker:

you can ask the synthetic audience and poll them, you can start to get into

Speaker:

further nuance whether those are B2B

Speaker:

dimensions of, you know, like firm type, role type,

Speaker:

etc., AUM, or it can get into that more

Speaker:

psychographic level, or it can start to break down

Speaker:

archetypal differences and you know, all of those

Speaker:

then can be mapped into attributes that are built into the channels where we

Speaker:

communicate with them.

Speaker:

Go ahead, Andy. I don't want to hog the mic. No, no, it's all good.

Speaker:

I'm fascinated and

Speaker:

kind of playing off your, your idea of model drift, Frank,

Speaker:

and your questions along those lines. I

Speaker:

mean, in one sense I would say, you know,

Speaker:

a synthetic audience or you know, a synthetic sample

Speaker:

or cohort, however you want to classify that. Is

Speaker:

model drift happening in that

Speaker:

context is probably not unheard of because

Speaker:

there's cultural drift. And if you're looking for

Speaker:

ways to effectively simulate that

Speaker:

and run marketing campaigns against, you know, the

Speaker:

synthetic cohort, it doesn't strike me

Speaker:

as out of the realm of possibilities that

Speaker:

some of that you may even want to tune for, especially

Speaker:

if you're looking at a younger audience.

Speaker:

There are emerging trends that come out of

Speaker:

those demographics. It's just part of the nature of those

Speaker:

demographics. I mean, I'd love to hear your thoughts on. On that.

Speaker:

Yeah, I mean, I don't know that it's a function of.

Speaker:

I don't want to make it like a generational distinction, but I do think

Speaker:

that anything that's current to context is going to

Speaker:

impact on a minute to minute basis in some cases how

Speaker:

the population is going to make decisions and what level of like

Speaker:

arousal they have. And I don't mean that in the, you know, cheeky

Speaker:

sort of way, but I would say like we're working with

Speaker:

an index team in financial services and they asked me on the spot,

Speaker:

can you please model a high net worth investor in Denmark?

Speaker:

You know, and this was last week just to, just to say, are you thinking

Speaker:

about, you know, rebalancing out of blah, blah,

Speaker:

blah, US broad index? And you know, the

Speaker:

answer was not immediately, but here's my thinking on that

Speaker:

and here's what I would be investing in instead. So now they're trying to think

Speaker:

through what's their messaging around outflows in that

Speaker:

predominant US broad index? And then how are we

Speaker:

surfacing the rest of our family of indexes in its

Speaker:

stead? And then he asked, how does this, does

Speaker:

the audience, is there a large spread here? And if so,

Speaker:

you know, what is the nature of that? So now we can think about

Speaker:

discrete campaigns across this population, which

Speaker:

is pretty narrow of, you know, ultra high net worth investors in

Speaker:

Denmark. Right. So I think it's

Speaker:

applicable depending on what, what is that trigger, you know, that what

Speaker:

is that zero moment of truth for any given population that is going to be

Speaker:

influenced by their immediate context. And

Speaker:

you know, with that response stability score, we can then tell them this is something

Speaker:

we think will persist over time versus this is ephemeral and based on what's

Speaker:

happening in the news around tariffs today. So here's something to push out in

Speaker:

your channels today versus here's something to build into,

Speaker:

you know, a long tail campaign and how to think about product strategy in

Speaker:

a different sort of way. That, that's pretty fascinating.

Speaker:

So pivoting just a little bit, you,

Speaker:

you mentioned quite a few instances of

Speaker:

incorporating evidence into this. And I would

Speaker:

imagine that I'm an engineer. Okay, that's a warning.

Speaker:

So is our CTO. I'm getting used to it.

Speaker:

I think about open and close loops all the time. It's just, you know, I

Speaker:

don't even have to think about thinking about it. It just happens. But

Speaker:

being able to, to become predictive

Speaker:

and have that feedback where you, you

Speaker:

made some, you know, you made some prediction, some predictive

Speaker:

analytic, and then you collect evidence on

Speaker:

how accurate you were and not just, you

Speaker:

know, percentage wise, it doesn't really apply that much, especially in

Speaker:

marketing type

Speaker:

and especially in the age of AI where you can collect information and feed it

Speaker:

back into the system as training data,

Speaker:

effectively as responses to prompts. So the

Speaker:

prompts themselves become part of the data

Speaker:

that goes in and then the outcome that was

Speaker:

predicted, that's very easy to see. That

Speaker:

part happens. But then supplying the evidence

Speaker:

you predicted this, the delta between the

Speaker:

predicted and the actual, that's evidence. And

Speaker:

so being able to quantify that, being able to

Speaker:

feed that back into the engine, I think in early

Speaker:

2026, as we're talking about this, we've not

Speaker:

had the ability to,

Speaker:

I'd say in, you know, natural language, to provide that

Speaker:

sort of information with any sort of confidence that

Speaker:

the algorithm that we're supplying that information to, that feedback,

Speaker:

closing the loop on the evidence, supplying the

Speaker:

evidence, we just hadn't had the confidence that the

Speaker:

machine was going to understand what we meant. And one of the

Speaker:

things that sort of slipped into invisibility over the

Speaker:

past, gosh, what's it been, three years and a few

Speaker:

months since ChatGPT was released?

Speaker:

Is that the model mostly understands what you're

Speaker:

saying now. And by mostly I mean some number well above

Speaker:

90%, you know, it's going to get what you mean

Speaker:

and when it hallucinates, you know, it's going to be because it

Speaker:

misunderstands what you said, not because it went off, you

Speaker:

know, and started interpolating what you said and

Speaker:

made something completely different out of it. It's the way it was

Speaker:

stated, wasn't quite clear. And nowadays

Speaker:

I hang out mostly in Claude and Claude Code.

Speaker:

So when I'm going back and forth with, you know, with the engine,

Speaker:

especially in Claude Code, it very often

Speaker:

will pause the conversation and stop and say, hey, I have this question,

Speaker:

you know, and here's the options. I think you're, you know, based on what you

Speaker:

said, I give you 1, 2, 3. And then number four is you just type

Speaker:

and tell me if I completely missed it. And I rarely find myself

Speaker:

on that bottom option. Most of the time I'm picking the, the

Speaker:

top option, which the one it thinks is most likely. And

Speaker:

so having that sort of evidence-based

Speaker:

feedback is, number one, so much easier

Speaker:

than it was before. And so I can see that limiting model

Speaker:

drift. I can also see it kind of making

Speaker:

your predictions align with

Speaker:

the timescale that you mentioned. So not just the population

Speaker:

being so, so small, which is

Speaker:

infinitely harder than dealing with big data, right? Dealing with a

Speaker:

small set of data. How do you predict in all of that? And before I

Speaker:

ramble anymore, I'll just stop and let you respond. How about that?
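Andy's closed-loop point, that the delta between a predicted and an observed outcome is itself evidence to feed back, can be sketched minimally. The function name, the learning rate, and the update rule below are all hypothetical illustrations of the idea, not any real platform's API:

```python
def close_the_loop(predicted_rate, actual_rate, confidence, lr=0.5):
    """Toy feedback step: treat the prediction error as evidence and
    discount our confidence in the model in proportion to it. The
    clamping keeps confidence a valid score in [0, 1]."""
    delta = actual_rate - predicted_rate  # the delta Andy calls evidence
    confidence = max(0.0, min(1.0, confidence - lr * abs(delta)))
    return delta, confidence

# The model predicted 12% engagement; the campaign actually saw 9%.
delta, conf = close_the_loop(0.12, 0.09, confidence=0.9)
print(round(delta, 3), round(conf, 3))  # → -0.03 0.885
```

In practice the delta (and the prompt that produced the prediction) would be logged and fed back as training data, which is the loop-closing step the passage describes.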

Speaker:

Well, it's interesting and I don't want to get over my skis

Speaker:

because this is really where our CTO shines.

Speaker:

He has the ability to create

Speaker:

some audiences out of what he would call dark

Speaker:

matter. The best way for me to think that through is when I look at

Speaker:

a tree and I see its various branches. I'm looking at the

Speaker:

tree to define the tree, but there's so much more sky

Speaker:

and negative space around that tree that also defines it.

Speaker:

And so he's starting to look at data and how it affects other

Speaker:

data and then putting that in context and using that

Speaker:

kind of negative space to then define the audience that's

Speaker:

so small. So that is, you know, in the case

Speaker:

of when I was at Morningstar, asset owners, a really small group of

Speaker:

people, but one that Morningstar really wanted to understand a

Speaker:

lot better. And so that institutional audience, they're

Speaker:

regulated. It's hard to, you know, get permissions because they're so small.

Speaker:

Their time is worth a lot. So it's an expensive panel to construct.

Speaker:

And here he was able to build from again, like that negative

Speaker:

space to then recreate the audience. And, and he is

Speaker:

surfacing that confidence variable. And if there is a

Speaker:

hallucination risk, it's tagged and it will prompt you for what sort of

Speaker:

data it then needs. Or it's going to say, actually, I have to refactor the

Speaker:

audience a little bit differently. There's too much entropy for me to continue and

Speaker:

it will go and run it again. So. And again, I don't want to get

Speaker:

over my skis because I'm the social scientist in the mix, but that's how it's

Speaker:

been described to me that I can, I can best understand it. That makes

Speaker:

a lot of sense actually. And like you can kind of, I think there's a

Speaker:

lot of inference here in terms of what you can infer. Right. So

Speaker:

my

Speaker:

two younger kids are really into it, and really the three

Speaker:

year old just likes hanging out with his big brother. They watch Dragon Ball Z,

Speaker:

they watch Jujutsu Kaisen, like all the crazy anime that's

Speaker:

very popular now. I bet one of the things you could do, I,

Speaker:

I've actually gotten into it. I was never much of an anime fan, but like,

Speaker:

you'd say, like, say Dragon Ball Z. Right. Dragon Ball Z has been around

Speaker:

that I'm aware of, maybe 20, 30 years. Right. But. So you can probably,

Speaker:

you could probably make a solid assumption that there might be some Gen X folks

Speaker:

that are Dragon Ball Z fans, probably a lot of millennials, a lot of Gen

Speaker:

Z, Gen Alpha, whatever they're calling them now. But there's probably not a

Speaker:

lot of people in retirement homes, boomers and

Speaker:

up there are big fans of it. Is it because they wouldn't like it?

Speaker:

I don't know. Maybe. It just tends that, since that demographic

Speaker:

skew is kind of small, you're probably not going to find

Speaker:

a lot of them that are going to be into that in the retirement home. I

Speaker:

don't know that that's just me just firing an analogy.

Speaker:

I mean, my parents liked KPop Demon Hunters when my kids made them watch

Speaker:

it, but I have girls, so I don't know,

Speaker:

they're just really cute though. That's really

Speaker:

cute. It's a very well done kind of cross genres, but yeah, yeah.

Speaker:

And K pop is very, very, very

Speaker:

addictive. Yeah. You know, so like it just

Speaker:

sticks in your head. I don't know how they did it, but

Speaker:

who, who are the industries? What are the industries that are really interested in this?

Speaker:

You mentioned Morningstar, obviously, so I would imagine financial

Speaker:

services. And

Speaker:

Morningstar is asset management. Right. Is that what it is? Or a hedge

Speaker:

fund, or. I'm not exactly sure. Data and research. So I mean, I think

Speaker:

primarily they're known for their research and data and how they've rated

Speaker:

funds over the years and they've expanded from there by way of acquisition.

Speaker:

So PitchBook is a part of it. DBRS is an index business. So

Speaker:

they, they have seven different P&Ls that really, like, traverse

Speaker:

financial services. At this point I

Speaker:

think financial services has been interested partially because I'm in financial

Speaker:

services and I'm literate and being able to discuss it and showcase its

Speaker:

benefits. Right, right. I would say this is more like

Speaker:

functionally, like accurate for any

Speaker:

place that needs human intelligence. Right. So I've worked with

Speaker:

private equity teams that are helping to arm their

Speaker:

portfolio companies with a marketing tool that doesn't

Speaker:

have them, then looking to boutique agencies to do this level of market

Speaker:

research and understand their ICP and find product market fit or message

Speaker:

market fit. So there for them, it's very easy to kind of get in

Speaker:

there, even the technical founders, and try to augment maybe a gap in

Speaker:

their marketing acumen. I would say marketing

Speaker:

agencies, creative performance, et cetera, they have taken

Speaker:

to it really easily because they're already wizards who

Speaker:

wield, you know, traditional wands on doing this kind

Speaker:

of work to understand a market, to understand the message that's going to

Speaker:

fit with that market and then to make sense of what the real results were

Speaker:

when the market either engaged or didn't. Right. So and

Speaker:

building the creative around that. So the ability to pre test all of that with

Speaker:

the audience gets them to the starting line before they put money behind it

Speaker:

or have their client put money behind it with the best possible set of

Speaker:

options. So I think agency has been pretty prolific there too. And then

Speaker:

the last. And again, I'm kind of biased because I came out of enterprise.

Speaker:

Enterprise marketers who are finding gaps

Speaker:

in the kind of traditional products that have easy distribution

Speaker:

within the enterprise are looking to a tool like

Speaker:

Mavera to try to get more of

Speaker:

that decision intelligence that's human-based in what they're doing,

Speaker:

and so that's, that's where we're seeing a good amount of traction would be

Speaker:

like in that mid market and enterprise level marketing team,

Speaker:

whether that be product marketing or demand gen or market

Speaker:

intelligence. And I came out of brand strategy so I found great

Speaker:

utility for it there in corporate comms. So again I think

Speaker:

it's really that go to market team where human intelligence becomes so

Speaker:

important to decisions and current like traditional research methods

Speaker:

are really slow and they're quite expensive and

Speaker:

not everyone can do them, you know, or they think to just grab

Speaker:

the information from within the four walls of the firm and

Speaker:

anecdotes of talking to customers. Right. So this is a good

Speaker:

way, an inexpensive way, to augment some of that decision

Speaker:

support. So you can like throw together like a,

Speaker:

what's the, like a test market simulations and you can probably,

Speaker:

there's probably knobs and dials you could do. So you can kind of like get

Speaker:

multiple answers and I, I get it. So you can kind of, you can hit

Speaker:

your, whatever your campaign is going to be with the running start as opposed

Speaker:

to. It's a little bit more guided than just throwing

Speaker:

stuff at the wall and seeing what sticks. That's right. You know what to throw.

Speaker:

You have better idea what to throw and where to throw it. That's right. And

Speaker:

I mean we had, even when I was still at Morningstar, pre

Speaker:

tested like the first time ever they built commercials. You know, they

Speaker:

didn't, they don't really do brand level, you know, television commercial.

Speaker:

They were deploying in Chicago, New York and London. And it was

Speaker:

shown that in London it wasn't, whatever it was, the voiceover, the

Speaker:

creative itself wasn't going to resonate with that audience as well.

Speaker:

And so that gave us the foresight to take a look at what the voiceover

Speaker:

is, what channels we might use, how much money we would put behind it before

Speaker:

we deployed in that market. And so that, that kind of helped with channel

Speaker:

strategy, it helped with content strategy. It certainly helped to

Speaker:

evaluate that creative before any money

Speaker:

changed hands. And so I think that was a super helpful thing. And now it's

Speaker:

an award-winning campaign. I'd love to feel like Mavera had something to do with

Speaker:

it along with all the brilliant minds that worked on it.

Speaker:

That's cool. So you can get down to the macro, not macro, micro level of

Speaker:

like the voiceover may not work in this market and things like that. That's

Speaker:

cool. Yeah. In fact there's a. So we're in multiple modalities.

Speaker:

We had used, I helped to co author

Speaker:

the CEO's speeches for multiple years. And so we made him

Speaker:

practice again and again and again, and we would record

Speaker:

them. And so the video analysis tool would look at the

Speaker:

substance of what he was saying, the creative that was behind him on the

Speaker:

deck, and then also his performance. So as it evaluated him,

Speaker:

it said, you know, you're not taking time to pause for emotional

Speaker:

resonance. And it gave all the timestamps across his speech where he

Speaker:

should pause and why, and potentially even for how long.

Speaker:

So it was looking at audience engagement and emotional connection. Then it started

Speaker:

to take a look at, well, your message isn't that highly differentiated. And because we

Speaker:

have this deep business context, we know that X, Y and

Speaker:

Z are also talking about the convergence of public and private markets. This

Speaker:

is what they're saying, here's what you should say so that it sounds uniquely

Speaker:

Morningstar. So it now is helping to differentiate the message.

Speaker:

And then when we got down to the creative, it's saying, you should do things

Speaker:

that are a little bit more dynamic. You should back up what you're saying here

Speaker:

with, you know, more data, graphs, charts, et

Speaker:

cetera, less imagery. And so it was giving us guidance on three

Speaker:

dimensions of that speech. And as we did it over time and recorded

Speaker:

him, we saw his scores go up and up and up. And then

Speaker:

it ended up being a really successful speech at

Speaker:

the flagship conference that spring. So, you know, I

Speaker:

had even said to him, like, maybe we should use this before earnings calls. You

Speaker:

know, you never know.

Speaker:

I could see the. I could see the applications and, you know, in fintech,

Speaker:

I could also see applications of this in political campaigns.

Speaker:

Yes. I was just thinking that. I'm like, you know, yeah, they

Speaker:

would eat this up. Yeah. So we have been in

Speaker:

some conversations, and I obviously can't talk about it with someone in the House

Speaker:

of Representatives because we also have a news digest that

Speaker:

will metabolize the news and give you the perspective of specific

Speaker:

audiences. So he wanted to look at the two counties, you

Speaker:

know, that are part of his constituency. But then he was

Speaker:

also looking at the committees, you know, so he's on two

Speaker:

different committees and how are they responding to the news and what is it that

Speaker:

they're doing? So it was doing this kind of social listening and, you know,

Speaker:

modeling of the audience. And then he could say, well, this is what my response

Speaker:

would be to it and get them to vet it before he, you know, would

Speaker:

push send on a communication. So, yeah, that was. That was

Speaker:

something that. It's so timely, particularly with that news

Speaker:

digest. Yeah, sure. And you know,

Speaker:

particularly, you know, the sentiment

Speaker:

analysis angle on that, that's huge. And

Speaker:

being able to do that in near real time,

Speaker:

I think has, you know, applications across not just those two markets,

Speaker:

but a bunch of different verticals as well. Because it's

Speaker:

almost that the perception is, if you don't

Speaker:

respond or react, that's a response or

Speaker:

reaction, you know, so.

Speaker:

Yeah, that's right. So I, I'd say between access

Speaker:

to news content and then also connection with APIs. So

Speaker:

we have Bloomberg flowing through the platform Pitchbook. We've got it

Speaker:

for marketers, Ahrefs and Semrush data. If you're looking at SEO and you have

Speaker:

thoughts towards what does it mean to show up in answer engines, all of

Speaker:

this data flows and could be called through the platform so that you're

Speaker:

looking at real data. Again, we leave a receipt of, like, this is where we

Speaker:

drew this data from. You can see it. And here's where we

Speaker:

inferred. So now you can use your own best thought

Speaker:

and strategic thinking on. Okay, do I need to get that

Speaker:

inference score down or do I feel good about this

Speaker:

and I can build it into my argument in a really defensible way?

Speaker:

So just curious. That's cool. Yeah, I'm down

Speaker:

with it. I'm just curious,

Speaker:

in your experience, how have

Speaker:

the opportunities presented themselves for someone to kind of step

Speaker:

out and be creative is probably a nice way to

Speaker:

say it. Or, and, or controversial. You know,

Speaker:

there's value in that some of the time. I mean, if you're

Speaker:

talking about marketing a product or service, you

Speaker:

definitely want the differentiation. You mentioned that earlier.

Speaker:

If you're talking about a campaign, whether it's a marketing

Speaker:

campaign or a political issues type

Speaker:

campaign, the opportunity to

Speaker:

either be portrayed as a maverick (see what I did

Speaker:

there) or to be, you

Speaker:

know, just portrayed as somebody kind of breaking the mold, stepping outside

Speaker:

the talking points. You know,

Speaker:

how's, you know, how's your product and service

Speaker:

addressing that. But also too, there might be some. I'm sorry, I

Speaker:

didn't mean to cut you off. No, I was trying to cut off Andy. And then

Speaker:

I cut you off by mistake. But also to the

Speaker:

inverse of that. Like maybe there's some things where people don't want

Speaker:

mavericks; they want stability. Financial services kind of comes to mind.

Speaker:

So sorry, I'll shut up. Yeah. So I mean, you can

Speaker:

construct your own brand identity that's going to say, you

Speaker:

know, typically, here's our brand standards and here's our

Speaker:

brand expression, which can come across creatively or tone or

Speaker:

what have you. So that can be constructed and put on the back end so

Speaker:

that everything is then scored against that and can tell you how far away from

Speaker:

that you're drifting. Then you can put it in front of the audience.

Speaker:

Typically, anyone we're working with is going to have their own framework for

Speaker:

understanding. You know, how do I evaluate whether this message can go to market

Speaker:

under my brand and how much risk am I willing to take? You can ask

Speaker:

it even to assess the risk given the audience response.

Speaker:

And as it splits that audience where people are having a difference of

Speaker:

opinion, you can isolate that and say, is this my most

Speaker:

likely buyer or is this the part of the audience that maybe there's a huge

Speaker:

population that would like this more provocative

Speaker:

message, but it's an audience, as it's described, that would churn.

Speaker:

So, like, it allows you to make a little bit more strategic business

Speaker:

decisions based on, like, what are the attributes of that

Speaker:

audience that are going to resonate with that more provocative message.

Speaker:

The other thing I would say is just, oh, no, it's okay. This is built

Speaker:

on a GAN. So it's an adversarial network. And I

Speaker:

would say, as opposed to being sycophantic, like so many models that

Speaker:

are like, oh, yeah, I agree with you. And then you're like, no, don't agree

Speaker:

with me. Be like adversarial. You know, push back. It's built

Speaker:

to push back. In fact, we have a Persona specifically meant to

Speaker:

poke holes and ask you questions and get you to question your assumptions. And

Speaker:

I always start there. It's called Osprey. And, like, that's my number

Speaker:

one first stop on the bus: here's how I'm thinking about

Speaker:

this competitive analysis, let's sort through

Speaker:

what is wrong with it or how I can improve it. Same thing with a

Speaker:

market sizing exercise. It feels like that should be rote, but as you lend

Speaker:

more specificity to it, I might be market sizing against not just

Speaker:

a product, but a specific use case that I want to build up, campaign around.

Speaker:

And now it becomes like a much more nuanced way of modeling

Speaker:

an audience. So I always, again, start with that

Speaker:

adversarial model to get me to think better, you know, like, really improve

Speaker:

my strategic critical thinking. Kind of like the

Speaker:

10th man in World War Z. Okay, I don't know what that is,

Speaker:

but should I watch it? I'm sorry, Andy, I cut you off. Yes,

Speaker:

it's an interesting concept. I don't want to spoil it for you, but, like. And

Speaker:

it's based on a real army unit where

Speaker:

they basically have a designated contrarian. If nine people agree

Speaker:

on something, or 9 out of 10, or 10

Speaker:

out of 10, they will randomly pick one who has to

Speaker:

poke holes in it. Oh,

Speaker:

sorry. That's okay. I first encountered that in World War

Speaker:

Z. So. Yeah, that. That was where I saw

Speaker:

that. What I

Speaker:

was thinking as you were describing that. I guess the phrase that popped into my.

Speaker:

My mind was, you know, there's no such thing as bad publicity.

Speaker:

And if you are piquing interest, whether it's

Speaker:

positive or negative interest, if you're provoking some sort

Speaker:

of reaction in that, and I think a lot of the social media

Speaker:

algorithms are tuned around being able to do that very thing,

Speaker:

you know, to. To get a reaction, either an agreement or a

Speaker:

disagreement, then that can lead to

Speaker:

engagement. And if that's the goal, that makes perfect sense.

Speaker:

That's right. In fact, I have a book right here called Filterworld.

Speaker:

I think that's what it's called. Yeah, Filterworld. And it's really all about

Speaker:

how algorithms can do that, feed you back things that are more

Speaker:

sensationalized, kind of like yellow journalism going back to Hunter S. Thompson.

Speaker:

Right. That are meant to create some sort of response, whether good, bad,

Speaker:

or ugly. So, yeah, I think that's right. But at least

Speaker:

you could test. Yeah, at least you can test some assumptions first

Speaker:

prior to taking it to market and getting slammed for it and

Speaker:

having unintended consequences, potentially. Yeah. Well,

Speaker:

I mean, if you think about it, I'm just basing this on my

Speaker:

experience, because I have the most experience with my experience.

Speaker:

I love a comeback. Right. I just. I love it. And

Speaker:

often the way that that comeback begins, the arc

Speaker:

starts with me first

Speaker:

noticing something and having a negative reaction to it.

Speaker:

And then as I get more information, I go, well, yeah, I could kind of

Speaker:

see where they're coming from and, you know, begin to identify with it and

Speaker:

empathize. And then every now

Speaker:

and then it's rare, but when it happens, it happens huge. And I

Speaker:

think part of the reason is because I started so negative with it, my support

Speaker:

skyrockets, you know. It's not a line, it's an

Speaker:

exponential, you know, very exponential curve of

Speaker:

support that grows out of that. And like I said, I think it's powered by

Speaker:

stretching that rubber band in the opposite direction to start with.

Speaker:

Yep. Although I would say some people are built that way because my

Speaker:

dissertation looked at processes of empathy and processes of

Speaker:

perspective taking and how counterargumentation happens.

Speaker:

Right. What are the various factors, either in an environment or in a

Speaker:

message that are going to create that? But there are also some things just in

Speaker:

you that might have that approach. I would say

Speaker:

my 7 year old, my little guy has like, he comes from a space of

Speaker:

no. We always start with no. He's also like

Speaker:

in the 99th percentile for math. I think he has like an engineering mind. Like,

Speaker:

I was just gonna say, he sounds like an engineer before you even

Speaker:

mention math. Yeah, yeah, yeah. Likes to take things apart

Speaker:

and put them back together. So that's it. No is a good spot. Yeah. Yes.

Speaker:

That's funny. It reminds me

Speaker:

of a story from way back when. Once upon a

Speaker:

time, I worked for what we'd now call a fintech startup. It wasn't called

Speaker:

fintech then, but it was basically in the early

Speaker:

2000s. And it was a banking portal, but it was meant to be kind

Speaker:

of banking for people who

Speaker:

weren't comfortable with finance. Right. But

Speaker:

the rationale was they wanted to make the site really friendly. And one of the

Speaker:

things they did was they put little cute cartoon characters

Speaker:

on every page.

Speaker:

And this was in Germany. So, you know, the banking

Speaker:

culture in the US is very conservative, and

Speaker:

Germany is even more so. And

Speaker:

that's being kind. Turns out

Speaker:

people didn't want to put their money into a website.

Speaker:

Which again, early 2000s. Right. Still, that was already a stretch

Speaker:

with cute little cartoon characters. They wanted serious, they wanted stable,

Speaker:

they wanted boring, they wanted, they wanted the suits, they wanted that.

Speaker:

And it was kind of like when I saw the website, the design rolled

Speaker:

out, I was like, I don't think this is gonna work. I better have my

Speaker:

plane ticket home just in case. And

Speaker:

you know, it turns out I was right. You know, trust me,

Speaker:

I, you know, I didn't want to be right because I would have, you know,

Speaker:

had dot com dreams and, you know, all that. But.

Speaker:

But I mean, you're right. Like sometimes it would have been helpful

Speaker:

if they were to test out, if they had the capacity to test out.

Speaker:

Hey, what if we went for a cutesy K-pop demon hunter kind of thing

Speaker:

for a banking portal. It might fly today maybe,

Speaker:

but probably not.

Speaker:

Just depends on the audience. Again, yes, Exactly. Know your audience. Right.

Speaker:

That seems like a tough sell, you know, in Germany in the late

Speaker:

90s, early 2000s. I don't know. Right. It definitely was.

Speaker:

I think after half a billion euros

Speaker:

were spent, I think they acquired 120 new customers.

Speaker:

So, yeah, it was bad. It was bad

Speaker:

right there. And I might be rounding

Speaker:

up that ratio right there. I can do that.

Speaker:

Yeah. So, I mean, again, I think

Speaker:

audience, you can't really replace, like human response to something. You have to

Speaker:

get something out into market and see if trust is established and people engage

Speaker:

and ultimately make a decision to purchase. But I think getting

Speaker:

to the starting line with the best set of options, with

Speaker:

defensible reasons behind why you went with those options,

Speaker:

is kind of a better spot than we were a year ago or two years

Speaker:

ago. Right. And so I think,

Speaker:

I mean, we can only go up from here, but I think, you know, I'm,

Speaker:

I'm optimistic that if people were to start integrating this, it doesn't have

Speaker:

to take them out of the job force. It just can help them do their

Speaker:

job a lot better, you know. No, absolutely.

Speaker:

Yeah.

Speaker:

How did you get into this? Because your background

Speaker:

is in. Your PhD is in communications.

Speaker:

You're getting used to dealing with engineers. Yes.

Speaker:

How did you. How did you end up at a company that is largely driven

Speaker:

by engineers? That seems. Yeah, this is a great question.

Speaker:

So again, I was kind of that skeptic who was running a market research

Speaker:

team and always being pressed on my budget. So the budget was,

Speaker:

you know, in the high six figures. And it's like that's the

Speaker:

first place everyone wants to cut when everyone's looking at margins. But

Speaker:

it's also such an important place to make sure that product

Speaker:

strategy, message strategy, all these things are kind of coming together in the right sort

Speaker:

of way instead of wasting money downstream. And

Speaker:

so I was trying to, you know, A, look for a way to

Speaker:

cut costs, but B, I also really wanted to understand

Speaker:

what was coming with this whole, like, generative AI thing, you

Speaker:

know. So when I heard about, let's scan

Speaker:

LinkedIn profiles and create synthetic Personas, I

Speaker:

really started to pound the pavement to try to understand who's approaching this in

Speaker:

the right sort of way aligned to how I think about modeling human

Speaker:

populations, which is what I was studying. So when

Speaker:

the strategist I was working with kind of heard me thinking out loud about it,

Speaker:

he introduced me to the co founders at Mavera and,

Speaker:

you know, I think I asked some hard questions. They could see that I was

Speaker:

nerdy and skeptical and willing to try. And

Speaker:

so they gave me access to it for almost a full year.

Speaker:

I took it through the compliance process, which was helpful for them, and it was

Speaker:

good to see how Morningstar was thinking about this progressively

Speaker:

and then just hammered it and, you know, brought it into the C suite and

Speaker:

brought it across the firm in my presentations. And I

Speaker:

think through that, it really helped me to understand what the true value

Speaker:

of it was. And after seven years at an enterprise, I, you

Speaker:

know, I was definitely someone that liked to make decisions quickly, thoughtfully,

Speaker:

but quickly. And I was kind of looking for, you know, maybe

Speaker:

there's another opportunity to take my expertise and apply it in a different

Speaker:

way. So I had a sabbatical. It was like

Speaker:

a, you know, six weeks every four years. Thank you, Morningstar.

Speaker:

And during that time, I just spent some time with them to really understand

Speaker:

the technology, really understand the go to market motion

Speaker:

and look at their capital raise and try to get involved in that

Speaker:

process. And then six months later, they asked me to join

Speaker:

them. Oh, that's cool. Yeah, that's cool.

Speaker:

It was cool. I have to say, I'm drinking from the fire hose because

Speaker:

working with the AI engineer, full stack

Speaker:

developer, and looking at operations and looking at

Speaker:

corporate taxes and all these things. No, that was not really. I carried

Speaker:

my. You didn't wake up and you were like, I didn't want to do that.

Speaker:

Like, that's interesting.

Speaker:

The first thing that comes to mind, and I totally lost my train of thought.

Speaker:

So if, Andy, this is an opening for you while I kind of reboot my

Speaker:

brain. Blue screen. So give me a second.

Speaker:

Oh, now I remember. You're welcome

Speaker:

anytime, man. When

Speaker:

you mentioned regulations, it kind of came back to me. True, I was very

Speaker:

skeptical of synthetic data, just in general, just

Speaker:

because, you know, you're basically feeding machines into machines. And I'm old enough

Speaker:

to remember when you took, like, a tape cassette and you copied it and

Speaker:

you did that enough generations, whether it was VCR or audio cassette,

Speaker:

you had an issue. Right? You would get some kind of degradation. However, in

Speaker:

reality, I've seen synthetic data do amazing things in the AI

Speaker:

space, in the data space, more than it has any right to,

Speaker:

basically. So that's why I was not skeptical when you mentioned synthetic

Speaker:

crowds, because it's one of those things where it's worked better.

Speaker:

But one of the upshots of synthetic data is that

Speaker:

the regulatory side, particularly around generating synthetic

Speaker:

health data and things like that, you don't quite have the same

Speaker:

regulatory constraints. Right? There is no PII

Speaker:

to speak of. And you mentioned that there were regulatory hurdles for this.

Speaker:

Like what, what were the regulatory hurdles in

Speaker:

this case? I'm curious. Well, how could

Speaker:

you use the outputs? Where would they be applied? If you're reconstructing

Speaker:

the brand voice, what are you basing that off of? Is that, you

Speaker:

know, is that considered for them proprietary information that would

Speaker:

then feed the system for other, you know, competitors or

Speaker:

just writ large? I think that was something that they were looking

Speaker:

at. They were of course looking at data privacy. So

Speaker:

you know, I was uploading not just our creative,

Speaker:

but I was looking at our business strategy across the P&Ls and trying to

Speaker:

get it to incorporate when things are launching and where is their convergence.

Speaker:

So that if I'm trying to create an umbrella level message at the brand level,

Speaker:

it can render really strategically down to the different business units

Speaker:

and create continuity and coherence in the message.

Speaker:

But that's, you know, their strategy.

Speaker:

So they're really worried about like, you know, at what point

Speaker:

can we feel like this is safe? And so, you know, in earnest,

Speaker:

the team approached the ISO

Speaker:

42001. They had a SOC 2, the

Speaker:

ISO 37000-something, because I'm never great at

Speaker:

remembering all the numbers but like they really did like get after it in terms

Speaker:

of ensuring that enterprises specifically would feel,

Speaker:

you know, really like safe in this environment and

Speaker:

everything. It was an abundance of caution. What's that? It was. I'm sorry.

Speaker:

Okay, that's fair. Yeah. Because one of the big selling

Speaker:

points I've seen is it's not real. Right. So I'm sorry to cut you off

Speaker:

again, but. Yeah, no, because it's not. But it's a synthetic data layer

Speaker:

that sits on top of proprietary data and data gathered

Speaker:

from like first party sources externally. So I think

Speaker:

once you have the mix of multiple things, they just have to ensure that

Speaker:

whatever's put in there that is proprietary is protected.

Speaker:

That makes a lot of sense. Cool. Yeah.

Speaker:

This is the world we live in, you know, that's cool.

Speaker:

So any other questions, Andy? I didn't mean to hog them up.

Speaker:

I'm just fascinated

Speaker:

by the discussion. It's one of those

Speaker:

discussions and topics where we see

Speaker:

kind of the real world interacting with the

Speaker:

artificial and I don't say artificial in any kind of

Speaker:

negative way, you know, in the sense of

Speaker:

synthetic. And to me it feels a lot more like art

Speaker:

imitating life, you know, and

Speaker:

as we find more and

Speaker:

better ways to have technology enrich

Speaker:

our ability to do our jobs well. I just. I find it fascinating.

Speaker:

So. And it's cool. I can tell

Speaker:

that you found a real fit

Speaker:

for your education and your skills and it sounds like your

Speaker:

personality and, you know, kind of your likes.

Speaker:

That's always good, you know, when you can do what

Speaker:

you love. Yeah, it's so true. I love nerding out

Speaker:

every day on this stuff. Plus, like, I

Speaker:

just can't naturally sell anything. I have no selling

Speaker:

ability. But I can talk about it from the perspective of, like,

Speaker:

a practitioner, you know, and a skeptic one at that. So

Speaker:

that's really where I'm coming from in any conversation is, like, tell me

Speaker:

why you don't buy it. Because, like, I'm gonna get on your bandwagon

Speaker:

and not buy it with you until we can figure out how it actually, like,

Speaker:

works and fits into this process, you know? So.

Speaker:

That's a good way to look at it. That's really not just selling

Speaker:

with empathy, but selling with, like, sympathy, I guess. Right? Like, yeah,

Speaker:

yeah, that's cool. That's cool. Where can folks find out more about

Speaker:

you and Mavera? Yeah. I love talking to everyone, so please

Speaker:

connect with me on LinkedIn. My name is Jill Axline.

Speaker:

Again, Mavera.io is where you can go and check it out.

Speaker:

We like people to kick the tires, so there's a free trial for everyone for

Speaker:

14 days and you can connect with anyone on our team to

Speaker:

walk through how to use it. Cool. Awesome. And we'll let our

Speaker:

AI finish the show. Awesome.