Speaker:

Welcome back to Data Driven, the podcast where we talk about how data

Speaker:

and AI are changing the world. And sometimes we

Speaker:

even understand it. Today's guest is the brilliant Carmen

Speaker:

Lee, CEO of Silicon Data and former Bloomberg brainiac

Speaker:

who's now on a mission to bring financial grade transparency to the wild west

Speaker:

of GPU compute markets. If you've ever wondered how to hedge

Speaker:

your AI infrastructure costs the way airlines hedge fuel, or what

Speaker:

a futures market for GPUs even looks like, you're in for a

Speaker:

treat. Carmen's turning raw compute into a tradable

Speaker:

commodity, normalizing chaos, and possibly building the

Speaker:

Bloomberg terminal for AI infrastructure, minus the beige

Speaker:

keyboard. We cover everything from tokenomics and TSMC

Speaker:

to why your AI startup's margins are flatter than the earth in a

Speaker:

conspiracy forum. Oh, and there's a used GPU car

Speaker:

lot somewhere in Virginia. Stick around. This one's a data

Speaker:

geek's fever dream in the best way.

Speaker:

Hello and welcome back to Data Driven, the podcast where we explore the

Speaker:

emerging field of data science, artificial intelligence, and

Speaker:

this crazy AI world we live in. But it's all underpinned by data

Speaker:

engineering. And with me, as always, is my favoritest data

Speaker:

engineer in the world. Even my dog is barking, giving you a shout out.

Speaker:

Andy Leonard. How's it going, Andy? It's going well, Frank. How are you?

Speaker:

I'm doing well, I'm doing well. I'm keeping busy.

Speaker:

We were talking about other podcasts that we have and

Speaker:

the other one is Impact Quantum. So go to impactquantum.com

Speaker:

definitely check it out. And had a very fascinating

Speaker:

conversation with our guest in the virtual green room. So without

Speaker:

further ado, let's welcome Carmen Lee to the show. She's

Speaker:

the CEO of Silicon Data and she is driven by a

Speaker:

passion for developing and delivering cutting

Speaker:

edge derivative products and data solutions that

Speaker:

provide essential data, intelligence and efficiency to compute

Speaker:

markets worldwide. Her company's vision is to

Speaker:

revolutionize these markets through unparalleled data transparency

Speaker:

and financial innovation. Welcome to the show, Carmen.

Speaker:

Thank you. You delivered my tagline so well I might want to

Speaker:

hire you to do the, whatever. Thank you.

Speaker:

Thank you. This is like, I was looking the other day, this is almost our

Speaker:

400th show, so I do have a face for radio and,

Speaker:

apparently thankfully, a voice for radio too. So good for me.

Speaker:

This is great. And speaking of radio, we were geeking out because

Speaker:

I started my career in New York in finance

Speaker:

and Bloomberg. Having a Bloomberg terminal on your desk was

Speaker:

a status symbol. There were the ones who had it and the ones who didn't

Speaker:

and the ones who wanted it. And you know radio,

Speaker:

right? Bloomberg Radio, which we also get here in DC. And you used to work

Speaker:

for Bloomberg, so that's really cool. That's right. I had a great time

Speaker:

working for Bloomberg, and my team was part of the

Speaker:

data team. I thought Bloomberg was

Speaker:

one of the most cutting-edge data companies, especially in the

Speaker:

financial services industry. Back then I covered all content,

Speaker:

all product data integrations with any third-

Speaker:

party ecosystems. So think about any trading

Speaker:

cycles from front, mid, and back offices, think about any

Speaker:

cloud providers and database systems, and

Speaker:

even AI, LLMs, whatever you call them,

Speaker:

different use cases: real-time data,

Speaker:

reference data, static data, anything. It's

Speaker:

really fascinating. I learned a lot. My background before that:

Speaker:

I was all financial services, and I don't want to bore your audience at

Speaker:

this point. I started my career in trading, high-frequency trading

Speaker:

in Chicago. So to me transparency,

Speaker:

efficiency, and free markets are sort of in my blood.

Speaker:

100% brainwashed at this point in life. So one of

Speaker:

the things I noticed when I was at Bloomberg is there's a

Speaker:

lot of interesting ecosystem

Speaker:

platforms that came up last year, right? They're all leveraging gen

Speaker:

AI. They're the first few adopters, which is good for them,

Speaker:

and their client bases sometimes can be financial institutions. So, Bloomberg's client

Speaker:

bases. So one of the things I noticed, and it was a really fascinating conversation:

Speaker:

So those startups, they're gaining a lot of traction. Good for them. So

Speaker:

obviously I was like, oh, you're doing so well. And they would complain to me,

Speaker:

saying that they were SaaS, right? They were 100% SaaS:

Speaker:

revenue so static. And then they're pivoting to

Speaker:

AI-driven SaaS. So their cost, think about last year: the

Speaker:

price per GPU per hour was like $9, or

Speaker:

$6, $7, $9, back down to like $3 if

Speaker:

you're on interruptible instances, right? So the swing is like

Speaker:

300% within the same day, but then their revenue

Speaker:

is static, right? So their margin swings, like,

Speaker:

from positive 40% to negative 60% and back to

Speaker:

positive, and there's no way for them to manage it. And at the

Speaker:

same time, it's not like when they bring on more clients they

Speaker:

can enjoy the scalability. It's, again, the

Speaker:

same thing: the margin is uncontrollable. And they have this problem of how

Speaker:

they actually come up with a cash flow plan for next year. And then

Speaker:

they obviously complain. It totally strikes me as:

Speaker:

hey, this industry needs financial

Speaker:

infrastructure layer, right? It's almost like talking to American

Speaker:

Airlines. Say, hey, airline, you cannot hedge your oil price

Speaker:

fluctuations. How are they going to price their tickets? They can't, right?

Speaker:

And it's not like American Airlines locks in OPEX with, like, a five-year-long

Speaker:

contract. They don't do that. For every single one of those commodities,

Speaker:

price discovery and hedging happens in derivatives markets. So

Speaker:

futures, options. Because there's a few reasons, right? Number one, it's just

Speaker:

efficient. Number two, it's cheap. Plus, it's flexible. And then you and me,

Speaker:

we can do the same thing. We have oil exposures; we don't have to be

Speaker:

American Airlines. But today, for

Speaker:

hyperscalers, you can go to those, you know,

Speaker:

whoever, right, produces chips, and get a long-term contract.

Speaker:

But if you and me start a neocloud, guess what? We don't have access

Speaker:

to that kind of pricing. It's not good. You have a

Speaker:

few players who have the pricing, who have that way to hedge it. But

Speaker:

then the smaller players just couldn't get in the game, right? It's really not good

Speaker:

for the ecosystem's health, performance, and

Speaker:

risk management. So that really struck my core.
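The margin whiplash described above can be made concrete with a toy model. All numbers here are illustrative assumptions (picked to reproduce roughly the +40% to -60% swing mentioned earlier), not figures from the episode:

```python
# Toy margin model for an AI-era SaaS shop: revenue is static while the
# GPU cost swings within a single day. All numbers are illustrative.

def margin(revenue_per_hour: float, gpus: int, price_per_gpu_hour: float) -> float:
    """Gross margin as a fraction of revenue."""
    cost = gpus * price_per_gpu_hour
    return (revenue_per_hour - cost) / revenue_per_hour

REVENUE = 500.0   # $/hour of static SaaS revenue (assumed)
GPUS = 100        # fleet size (assumed)

cheap = margin(REVENUE, GPUS, 3.0)  # interruptible-instance pricing
dear = margin(REVENUE, GPUS, 8.0)   # same-day on-demand spike

print(f"margin at $3/GPU-hr: {cheap:+.0%}")  # +40%
print(f"margin at $8/GPU-hr: {dear:+.0%}")   # -60%
```

With no way to lock in the GPU leg, the same business swings from healthy to deeply unprofitable within a day, which is exactly the gap a hedging instrument fills.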

Speaker:

Last year I was like, man, someone needs to do the

Speaker:

index, the pricing, the benchmarking layer of

Speaker:

GPU compute as a resource, which

Speaker:

I feel will be the biggest resource in the next few years,

Speaker:

surpassing all energy combined, right? So that's why I left Bloomberg right

Speaker:

away. Super passionate. I think we can bring so much transparency to the

Speaker:

ecosystem; it will benefit everybody, right? Not only benefiting the people who

Speaker:

need compute, but also, like, you know, the end consumers. Because think about

Speaker:

the whole funnel, right? You have to finance the GPUs, the

Speaker:

actual cluster cost, right? So

Speaker:

if the banks don't have enough information, or there's no hedging for the

Speaker:

banks, then they have to charge you high interest; they have no other way. Or

Speaker:

you have to look for alternative capital, which traditionally

Speaker:

is more expensive, right? Because they're not banks. Banks are cheap as a

Speaker:

cost of capital, right? So then the cost from, you

Speaker:

know, stage zero is high. Then think about the second stage, the third

Speaker:

stage, and then people like you and me using Sora with OpenAI:

Speaker:

everything will be more expensive because of that, right? So fixing the problem

Speaker:

with transparency from the get-go is really, really

Speaker:

critical. And then the benchmarking, and encouraging the

Speaker:

secondary markets, and all that flexibility, and then

Speaker:

availability, will be really incredible and benefit the whole

Speaker:

ecosystem. Interesting. So is it fair to say you've built basically a

Speaker:

futures market for GPU compute? I'm building a

Speaker:

benchmark index layer. We are working with futures exchanges,

Speaker:

right? So I'm not a futures exchange; for that,

Speaker:

think about S&P, right? They license the index to an exchange. Right,

Speaker:

right, right. That's what we do. We will license the index

Speaker:

to an exchange, and they will have futures, options on top of that, and other

Speaker:

financial products. That's a fascinating concept, because, like,

Speaker:

you're right, we need that because the scarcity

Speaker:

of GPU compute is a real issue. It comes up.

Speaker:

And the rate of volatility, how do

Speaker:

you deal with that? With, like, 40,

Speaker:

60% fluctuation in daily volatility, and then it's

Speaker:

just not a very transparent market,

Speaker:

which breeds inefficiency. Right.

Speaker:

Absolutely. So for those of. Oh, sorry. Go ahead,

Speaker:

Andy. Okay. I was just going to ask. So are you tracking

Speaker:

features and functionality and all of that? That would be how you value

Speaker:

the GPU itself, and you compare that to the price, and

Speaker:

you're coming up with some ratio. Exactly. So

Speaker:

compute is not, unfortunately, as easy as electricity,

Speaker:

or even oil, which has different grades. Right. So even an H100

Speaker:

has different configurations. Right. They're not all the same. Right.

Speaker:

Different CPUs, different RAM, and geolocation matters.

Speaker:

Right. So a lot of things. So normalization becomes a very critical

Speaker:

component of a financially settled index.

Speaker:

Right now we have H100 and A100 indexes published on Bloomberg and Refinitiv.

Speaker:

So the way we do it is we have a base case, and all

Speaker:

the factors normalize to the base case. And the way we normalize is from

Speaker:

historical data: what factors are actually important to the users?

Speaker:

Does the CPU matter? How much does it matter? What's the weight, whatever it

Speaker:

contributes? How often do we calibrate? Maybe it matters

Speaker:

today; maybe tomorrow this particular

Speaker:

input is weighted more. Right. So we do calibration,

Speaker:

periodic calibration, as well.
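The base-case normalization Carmen sketches can be pictured roughly like this. The factor names, weights, and prices below are hypothetical; in a real system the weights would be fitted on historical quotes and recalibrated periodically, as she describes:

```python
# Hypothetical sketch of normalizing heterogeneous GPU quotes onto a base
# configuration. The factors, weights, and prices are made up; the actual
# Silicon Data methodology is not described in detail in this episode.

BASE = {"cpu": "base-cpu", "ram_gb": 2048, "region": "us-east"}

# Multiplicative adjustments that, in a real system, would be learned by
# regressing historical prices on configuration factors.
FACTOR_WEIGHTS = {
    ("cpu", "older-gen"): 0.95,   # weaker host CPU discounts the quote
    ("ram_gb", 1024): 0.97,
    ("region", "eu-west"): 1.02,  # geolocation matters
}

def normalize(price: float, config: dict) -> float:
    """Map a quoted $/GPU-hour onto the base configuration."""
    adj = 1.0
    for key, base_val in BASE.items():
        val = config.get(key, base_val)
        if val != base_val:
            adj *= FACTOR_WEIGHTS.get((key, val), 1.0)
    return price / adj  # divide out the configuration effect

# An eu-west quote with half the base RAM, mapped onto the base case:
quote = 2.80  # $/GPU-hour
print(round(normalize(quote, {"ram_gb": 1024, "region": "eu-west"}), 4))  # ≈ 2.83
```

Periodic calibration would then amount to refitting `FACTOR_WEIGHTS` on recent transactions, since a factor that matters today may matter less tomorrow.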

Speaker:

Interesting. Yeah, it's fascinating to kind of see because I mean

Speaker:

it always seemed like there's something missing around

Speaker:

the GPU market. Right. Because it's just. And I also think too

Speaker:

it's been a while since we had any kind of compute limitations on what we

Speaker:

wanted to do. Right. Like that CPU is like. Yeah, it's cheap

Speaker:

and you can get what you want and it's not supply demand kind of shifting.

Speaker:

Yeah, I agree. Right. So I didn't really think of like,

Speaker:

you know, kind of this, this market kind of response to

Speaker:

it, which I think is an interesting approach, and

Speaker:

I think it's fascinating. Yeah. Even if you think about

Speaker:

AI SaaS companies. Right. I don't know if you've heard the saying that

Speaker:

SaaS is 80% margin; AI SaaS is 0% margin.

Speaker:

So, I mean, it depends on how you run your workflow. If

Speaker:

you are not being thoughtful, right,

Speaker:

you just dump everything, everything you need to do, into the most

Speaker:

expensive closed-source model, and you're not

Speaker:

optimizing your thinking tokens, your input tokens,

Speaker:

output tokens, it can get very pricey very

Speaker:

quickly, right? You're not batching it, you're not doing all the right things.

Speaker:

And even if you do all the right things, it's going to be such a meaningful

Speaker:

percentage of your cost. And then all those companies are not ready for it. Right.

Speaker:

Because before, what was the raw material cost?

Speaker:

Electricity. Like, really nothing. Right. But now

Speaker:

every company becomes, you know, an AI company, which is great,

Speaker:

but then their cost structure is shifting from zero cost

Speaker:

to 40%, 60%, whatever percent, to tokens

Speaker:

or to GPUs at the end. Right? Right. So how do you think about hedging

Speaker:

that kind of cost component? Can you control it? Can you optimize for it? Can

Speaker:

you monitor it? Can you benchmark it? You know, can you hedge it? So

Speaker:

no, that's a good point. So do you think

Speaker:

there's multiple, I guess, inputs and levers to this? Right. Because it doesn't seem like

Speaker:

this would be a straight thing. So what's, you know, Andy mentioned that you were

Speaker:

tracking certain benchmarks. Like what benchmarks are you tracking? Because I'm very curious about

Speaker:

this. Right? So there's a few things. It depends on

Speaker:

your position, and this can change

Speaker:

every single day. Like, our ecosystem is so nuts, right? So it

Speaker:

depends on your

Speaker:

positioning in the whole workflow, right. So think about: if you are a neo

Speaker:

cloud, you are selling tokens, right?

Speaker:

The cost for you is the GPU, right. So then your margin

Speaker:

becomes the diff between the token revenue and the GPU cost. And that's

Speaker:

the way we calculate it, right. Which is different units.
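The "different units" problem, token revenue on one side and GPU-hours of cost on the other, reduces to a throughput conversion. A minimal sketch, where the throughput and both prices are assumptions for illustration only:

```python
# Illustrative unit conversion for a neocloud selling tokens off GPUs it
# rents by the hour. Throughput and prices are assumptions, not figures
# from the episode.

TOKENS_PER_GPU_HOUR = 500_000  # assumed sustained inference throughput
PRICE_PER_M_TOKENS = 4.0       # $ revenue per million tokens served
GPU_COST_PER_HOUR = 1.50       # $ rental cost per GPU-hour

revenue_per_gpu_hour = TOKENS_PER_GPU_HOUR / 1_000_000 * PRICE_PER_M_TOKENS
margin_per_gpu_hour = revenue_per_gpu_hour - GPU_COST_PER_HOUR

print(f"revenue: ${revenue_per_gpu_hour:.2f}/GPU-hr")  # $2.00
print(f"margin:  ${margin_per_gpu_hour:.2f}/GPU-hr")   # $0.50
```

Once both legs are in $/GPU-hour, the margin is exposed to two moving prices at once: the market rate for tokens and the market rate for GPU time.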

Speaker:

And then your worry is, okay, so, for

Speaker:

serving tokens, how much money can I get

Speaker:

from one particular GPU? The FLOPS, right? How can I optimize for that? And

Speaker:

what if I'm even hosting open source models?

Speaker:

And how do I make sure people are using that open source model? Should I

Speaker:

be shifting it? What's the pricing for that? Think about that strategy. And on the GPU side:

Speaker:

okay, am I renting GPUs, or am I outright

Speaker:

purchasing those GPUs, putting them on my books, and depreciating them? How

Speaker:

long can I depreciate them for? How do I, let's say, if

Speaker:

everyone wants the latest and greatest, and I'm selling the GPUs after the second, third year,

Speaker:

what's the terminal value for the GPUs? Who should validate that? Which bank should

Speaker:

depreciate the asset class? So it's a lot of things coming into the neo

Speaker:

cloud space. If you think about your inferencing infrastructure,

Speaker:

right? So let's say you're an

Speaker:

AI tech company, right? Then your revenue is tokens,

Speaker:

right? Ideally they're paying you based on token

Speaker:

use cases as well. And then your cost is tokens, which is

Speaker:

easier. But at the same time, for you, it's thinking through: okay, so

Speaker:

right now, for open source tokens, the prices

Speaker:

do move up and down. For example,

Speaker:

if you look at DeepSeek, even DeepSeek, they host their own serving, but

Speaker:

then the price changes; they have the off-peak hours, and that changes all the

Speaker:

time. Or you can do closed source, where the price is pretty

Speaker:

static. The way I think about it is, again, an extremely

Speaker:

free-market approach, right? It's: how can we

Speaker:

make sure, especially for the open source ones, the token prices

Speaker:

are driven by the market demand-supply curve,

Speaker:

right? Let's say I have, like, 100 GPUs

Speaker:

right now, and obviously, let's say I

Speaker:

choose to host only one Llama open source

Speaker:

model, and then I know I can produce X amount of tokens,

Speaker:

both input and output tokens, right? And I can just auction them off,

Speaker:

and you guys, you can buy a million tokens, and one day he's like,

Speaker:

I'm not going to use it, why don't I sell it to Frank? Can there

Speaker:

be some market? Where right now, you are stuck

Speaker:

with it, right? In

Speaker:

my mind, unfortunately, I'm very brainwashed toward free markets. I feel like you have to

Speaker:

give people options. The more options you give people,

Speaker:

the more flexibility they have, and the more

Speaker:

willing people are to participate, because they know they can get out. Because right now, you're stuck

Speaker:

with hyperscaler GPUs or any tokens, you're stuck with them,

Speaker:

and then you're less likely to commit, because you know you

Speaker:

can't get out, or you get fined, which is even worse, right? You know those cases: you

Speaker:

get fined millions of dollars when you back out of cloud deals.

Speaker:

That's one of the things: I really think we should encourage people thinking about tokens

Speaker:

and GPUs as a main cost structure. How can we drive

Speaker:

efficiency, so people can commit, and then get out if

Speaker:

they need to, and then swap out, and everyone gets more value

Speaker:

and efficiency from those transactions? So is it

Speaker:

more like an exchange or an auction?

Speaker:

What's the mechanism? Right. So on the token and GPU side,

Speaker:

obviously there are spot exchanges already, like compute

Speaker:

exchanges, where you can actually tell them, hey, I need this

Speaker:

configuration, how many nodes, and then they will

Speaker:

say, okay, let's do an auction. And then the

Speaker:

best price, best quality, whatever combination, wins. Right?

Speaker:

Yeah. You can potentially do other asset classes as well. Right. So we're.

Speaker:

Silicon Data is a data company. So think about us as the Bloomberg, and there are the

Speaker:

NYSEs, the NASDAQs, and everybody, right. The spot, right,

Speaker:

you can actually get GPUs there, like you can actually get stocks from those exchanges.

Speaker:

And the fact is, we collect data from those exchanges, like

Speaker:

Bloomberg does. Right. And then we'll produce financial products on top of that. Right.

Speaker:

So that's right: there's the spot, which is the

Speaker:

NASDAQ, right? You can buy and sell, get actual physical

Speaker:

delivery of all the compute or tokens you need. And there's the

Speaker:

data side, which is us being the Bloomberg, right. And then the last piece

Speaker:

is, structurally, the financial products layer, right, on the data layer. And

Speaker:

then we're agnostic, meaning we are agnostic of chips,

Speaker:

agnostic of spot markets, agnostic of everything. Right.

Speaker:

And there's a futures exchange, which licenses

Speaker:

our indexes to create futures products. Ideally they're

Speaker:

settling to spot; maybe some of them will settle to spot. Right. So it's pretty

Speaker:

standard practice. So

Speaker:

would the currency, or the coin of the realm, be tokens,

Speaker:

or compute time, or compute-seconds? Things

Speaker:

change. It's making my life really fun,

Speaker:

and, you know, also different, yeah, all the time.

Speaker:

And then, you mentioned you have this quantum thing, right? Right.

Speaker:

It's a lot. We track all compute. So it doesn't matter for us

Speaker:

what chips, what architecture, what framework,

Speaker:

you know, we don't really care. We benchmark the performances and the data

Speaker:

insights. And everything we do now, for us, is getting

Speaker:

ready for everything. So we want to create products

Speaker:

that are actually going to be helpful to the marketplaces, not just creating

Speaker:

things like a gambling table, where people bet on binary things. Right. For

Speaker:

us, it's: how can we make it useful for the people who are actually

Speaker:

naturally long compute? So the neoclouds, everybody else,

Speaker:

they need products to hedge their revenue fluctuations. Right.

Speaker:

So they issue short futures. And whoever is naturally short compute,

Speaker:

you need compute, and for you it's cost management.
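Mechanically, that works like any commodity futures hedge: the naturally short side (a compute consumer) goes long futures, so a spot price spike is offset by futures gains. A toy sketch, with illustrative prices and a hypothetical contract size:

```python
# Toy hedge: a compute consumer (naturally short compute) buys GPU-hour
# futures so a spot spike is offset by futures gains. The contract size
# and all prices are illustrative assumptions.

CONTRACT_GPU_HOURS = 1_000  # hypothetical GPU-hours per futures contract

def hedged_cost(spot_at_expiry: float, futures_entry: float,
                contracts: int, gpu_hours_needed: int) -> float:
    """Net cost of buying compute at spot while holding long futures."""
    spot_cost = gpu_hours_needed * spot_at_expiry
    futures_pnl = contracts * CONTRACT_GPU_HOURS * (spot_at_expiry - futures_entry)
    return spot_cost - futures_pnl

# Fully hedged: 10 contracts cover 10,000 GPU-hours at a $2.00 entry.
for spot in (1.50, 2.00, 3.00):
    print(spot, hedged_cost(spot, 2.00, 10, 10_000))  # always 20000.0
```

However spot moves, the net cost is pinned at the futures entry price, which is exactly the airline-fuel pattern from earlier in the conversation.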

Speaker:

So I want to make sure my product is usable by them. It

Speaker:

depends on how they pay. Right. If they pay in tokens,

Speaker:

it makes sense to create token products. If right now people are paying

Speaker:

per GPU-hour, then you create products for that. If they pay

Speaker:

some other way, all right, then it's different contracts for that.

Speaker:

So it really depends on how people are using it, today and tomorrow.

Speaker:

And then, you know, we hope to create products that may

Speaker:

not be the S&P 500, which lives forever. We'll probably create financial

Speaker:

products that live for the next five to ten years. Because guess what? Chips

Speaker:

go out of style, right? The A100, people are still using it,

Speaker:

and the L4s, people are using them, but other chips, like the V100s,

Speaker:

you know, probably not as much. Right. Then, similarly, my

Speaker:

financial products associated with that underlying asset

Speaker:

will probably, you know, retire, be retired. Right? Which is fine.

Speaker:

That's cool. I'm sorry, go ahead, Andy.

Speaker:

I was just thinking about it and a couple of ideas popped into

Speaker:

my head as you were describing that, Carmen. One is

Speaker:

capacity. It sounds like you're literally selling

Speaker:

compute capacity, GPU capacity, time, just

Speaker:

whatever. But it kind of falls into that bucket, on the one hand.

Speaker:

But then on the other hand, it seems like that

Speaker:

it almost creates this utility market.

Speaker:

Is that fair or am I missing something, right? No,

Speaker:

you're right. But two pieces. So one is the compute exchange part, right? This is

Speaker:

where you can actually get, depending on

Speaker:

people's preferences, either GPUs or

Speaker:

tokens, whatever, right? Physically delivered. You do you. You

Speaker:

don't have to touch any financial products, right? It is literally like you going to

Speaker:

a store, buying stuff. And that's the more option-based, right:

Speaker:

you can actually get instances. And then Silicon Data is: you

Speaker:

cannot actually get any compute from us, right? Like you cannot

Speaker:

get any stocks from Bloomberg. But you can get the data:

Speaker:

what asset is trading at what prices. So that informs decisions. Ideally,

Speaker:

in your spot market, you'd be like, hey, I think, you know,

Speaker:

the H100 price is a little too high, in my opinion. I'm not going to buy

Speaker:

right now. Like, forget about this. And I can totally use an

Speaker:

A100, right? It's fine. So Silicon Data is the data

Speaker:

layer, which is liquid data, right? So those are the

Speaker:

sort of two pieces to I guess resolve the

Speaker:

workflow equation. So it's kind of like when you go to the supermarket. I'm

Speaker:

sorry, Andy. When you go to. That's okay, go ahead. When you go to the

Speaker:

supermarket, you buy the beef, you buy the pork, but you don't think about the

Speaker:

pork belly futures and stuff like that. It's kind of abstracted away from you.

Speaker:

Exactly. The farmers will think about this, right? Yeah, farmers think about it.

Speaker:

Yeah. They need to hedge with the corn futures, right? But if

Speaker:

you are a farmer, or say you are someone who wants to eat the

Speaker:

corn: you go to the supermarket, you don't think about it. Hey.

Speaker:

Right. So you may have covered this already, but how does

Speaker:

or does fungibility come into play?

Speaker:

It's a great question. So I went through so many different iterations about this.

Speaker:

Initially I was like, okay, why don't I just normalize across flops? And I was

Speaker:

like, nope, can't do that because there's

Speaker:

just, there's so many things wrong with this approach. But obviously

Speaker:

we can dig into details, but we're not going to do that. And then secondly

Speaker:

is, okay, why don't we do, like, inferencing

Speaker:

chips? Like, just make a pool. And then we realized, okay, how can.

Speaker:

So, again, back to the initial question: I want to make products that are actually going

Speaker:

to help people hedge. Right. If you

Speaker:

do a combination of different chips, then, if you

Speaker:

are one of the people using it, are you going to

Speaker:

really use that to hedge? What would the correlation look like? Right.

Speaker:

Maybe you'd just rather have different chip types and then just hedge accordingly,

Speaker:

because the correlation will be much higher than with the combination of indexes.

Speaker:

Maybe the composition of indexes is good for just tracking

Speaker:

in general, but not for actual financial products. So we have, we

Speaker:

can have all of them; some of them will be tradable, some of them won't. Right.

Speaker:

For us, if people start, if we move to a world

Speaker:

where it's not going to be an Nvidia-only kind of play,

Speaker:

with, like, AMD, we can eventually.

Speaker:

It'll probably happen eventually. Well, we'll see when, right? We'll

Speaker:

see quantum happens first or everyone catching up first. I have no

Speaker:

idea. Right. So if it's a more vibrant

Speaker:

ecosystem, right, then maybe we're thinking about, hey, maybe we can do,

Speaker:

like, some of the chips, even from different firms: we'd normalize them, and then we

Speaker:

do something like an inferencing-chips index, a training-chips index. I don't

Speaker:

know. So that's another thing. Or, like, token indexes. Right.

Speaker:

So can we do open source ones? Multimodality? Is

Speaker:

multimodality going to be a thing in a few years? Is everything going to go back

Speaker:

to one model only? Because right now there are different models, but maybe it's the interim

Speaker:

stage, right? We. I don't know. So it's one of the things we have to

Speaker:

keep, like, looking at and thinking about, and just moving things

Speaker:

forward. Yeah, I was thinking too about, you

Speaker:

know, the amortization that people

Speaker:

do in their heads, at least, when they buy a new car. Yeah. So

Speaker:

the math is: you drive it off the lot, it's worth, what, 75,

Speaker:

80% of what you paid for it.

Speaker:

So we need a Carfax for GPUs, right? So that's what we do, too,

Speaker:

with Silicon Mark. So what we do is, okay, everything. Well,

Speaker:

at least right now, or before last year, or T minus 1, everything

Speaker:

is brand new. So, okay, we'll take whatever

Speaker:

number they published, the TDPs, the FLOPS. We all know

Speaker:

there's, like, a haircut to that number.

Speaker:

That's funny, right? And then a year later, right, a year later, I

Speaker:

say: Andy, you're running great, in great data centers. Your

Speaker:

thermal cooling was doing great. I'm in an old data

Speaker:

center, I don't have the latest cooling. Obviously, my chip,

Speaker:

after a year, you can argue they're on different curves,

Speaker:

decay curves. And are we treating them as the same price, even

Speaker:

though it's the same configuration? Probably we shouldn't. Should it be a reflection of

Speaker:

the actual quality? So that's something Silicon Mark does.

Speaker:

And then we do things even more basic than that. So number one is,

Speaker:

when you tell me you have H100s, like 100 nodes, and each node has,

Speaker:

say, 8 GPUs, right? Yeah. Is that true? Can I,

Speaker:

number one, verify the UIDs of those? And you see, all the CPUs

Speaker:

and the operating systems on all

Speaker:

the nodes, they're all live-connected. Number one, can we just

Speaker:

verify they are connected? What's the latency? So those are very

Speaker:

basic things, right? So we do that piece. At least, you know,

Speaker:

are those truly the UIDs and CPUs? Has the machine ever

Speaker:

changed? Because we do mesh IDs; based on

Speaker:

CPU changes, we know something changed, right? And then the UID of every

Speaker:

chip. So we do the decay curve for the individual chips, and also at the machine

Speaker:

level, and then thermal degradation, everything. So we do

Speaker:

that, and then we do validation, almost like Bloomberg validates fixed

Speaker:

income instruments. Because you have to understand the issuers, and it's

Speaker:

a bridge, and it's a school, with cash flows and all that stuff. So

Speaker:

we do that for GPUs. The geolocation: if you build a data

Speaker:

center somewhere in North Korea,

Speaker:

it's great, but no one is going to use it, right?

Speaker:

We took all of those into consideration when we created those data models. So then

Speaker:

we figured out: okay, based on the setup, and

Speaker:

we run a benchmark on specific GPUs, this is our grade, and then

Speaker:

this is our validation. Obviously you can do whatever you want. And then you can

Speaker:

say, hey, screw that, I believe this is worth a much higher price. You can do that

Speaker:

as well, right? But this is our valuation. So it's almost like a scoring system.
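One simple way to picture the decay curves she mentions: the same configuration loses value at different rates depending on operating conditions. The exponential form, the rates, and the list price below are all illustrative assumptions, not Silicon Mark's actual model:

```python
# Hypothetical decay-curve sketch: identical GPU configurations depreciate
# at different rates depending on conditions (cooling, thermal history).
# The exponential form, rates, and list price are illustrative assumptions.

import math

def residual_value(list_price: float, age_years: float,
                   annual_decay: float) -> float:
    """Exponential decay of a GPU's residual value over time."""
    return list_price * math.exp(-annual_decay * age_years)

PRICE = 30_000.0  # illustrative list price for a high-end GPU

well_cooled = residual_value(PRICE, 2.0, annual_decay=0.25)
hot_aisle = residual_value(PRICE, 2.0, annual_decay=0.40)

print(round(well_cooled))  # 18196
print(round(hot_aisle))    # 13480
```

A lender setting terminal value for depreciation, or a buyer on a secondary market, would want the condition-dependent curve, not the one-size-fits-all list-price haircut.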

Speaker:

That's interesting. So My mind immediately went

Speaker:

to, when, when we started talking about cars, my

Speaker:

mind immediately went to, you know, the used GPU lot,

Speaker:

some guy in bib overalls out here in Farmville, Virginia,

Speaker:

kicking the tires. What's it going to take to get you into this

Speaker:

GPU?

Speaker:

Yep. See, there we go. And network them together. Right. Like I think there's also,

Speaker:

you know, maybe, you know, I don't know if

Speaker:

you've been tracking the, the DGX Spark device

Speaker:

that Nvidia has, but apparently they have ports

Speaker:

in them so you can network I think up to four together. I'm not sure

Speaker:

but yeah, I'm sorry I

Speaker:

cut you off, but like. No, no, no. Nvidia, we're

Speaker:

definitely leveraging a lot of that. So we do container-within

Speaker:

container, and we do integrate with Nvidia DGX

Speaker:

benchmarking. So they have open sourced some of their LLM

Speaker:

benchmarking based on GPUs, and we do streamline their products, so

Speaker:

you can test LLMs. So, Nvidia DGX testing

Speaker:

through Silicon Data. The benefit is, if you do it all yourself,

Speaker:

number one, you can,

Speaker:

obviously, people want to, but people can just change up the

Speaker:

benchmark results themselves, right? It's open source. But through us, it's a data oracle; you

Speaker:

can't really change the results. Number two is it's more streamlined: it takes a few hours

Speaker:

to run, versus taking weeks, because you'd download a bunch of things you may or

Speaker:

may not need.

Speaker:

Well, I also think, too, like, you know, how does this. You

Speaker:

mentioned, you kind of skirted around the location thing with sovereign

Speaker:

AI, right? So, like, if I'm okay with using Google

Speaker:

services, right, then I have access to TPUs, right. I have a lot

Speaker:

more access to, whatever, Amazon's chips. Microsoft, I think, is

Speaker:

working on something custom. That's on prices, too, right. The geolocations,

Speaker:

they have different prices and different carbon footprints. We haven't even touched that.

Speaker:

Right, right, right. We do track that as well, based

Speaker:

on local power grid information. We do track the carbon cost associated with

Speaker:

different AI workflows. I think it's important. I think so.

Speaker:

For me, it's: let me at least surface the number to you, and

Speaker:

you decide what to do with it. Right. So I think that's a good idea.

Speaker:

Or, you know, maybe it turns out that, you know,

Speaker:

this type or model of GPU is, you know, depending on what your

Speaker:

core concern is. I think it's great, because I think one of the things

Speaker:

that I've heard And I didn't Peter

Speaker:

Drucker. What gets measured gets managed, right? So you're, what you're doing is you're providing

Speaker:

ways to measure GPUs and GPU performance. Right.
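
One of those measurements, the carbon cost Carmen mentioned, reduces to a simple calculation: energy drawn by the GPUs times the local grid's carbon intensity. A minimal sketch in Python, where every figure (board power, PUE, grid intensity values) is an assumed placeholder for illustration, not Silicon Data's actual methodology:

```python
# Rough carbon estimate for a training run: energy used times the local
# grid's carbon intensity. All numbers here are illustrative assumptions.

def training_run_co2_kg(gpu_count: int, hours: float,
                        board_power_w: float = 700.0,  # assumed per-GPU draw
                        pue: float = 1.2,              # assumed datacenter overhead
                        grid_kg_co2_per_kwh: float = 0.4) -> float:
    # Energy in kWh, inflated by PUE for cooling/networking overhead
    energy_kwh = gpu_count * board_power_w / 1000.0 * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Same 64-GPU, 72-hour job on two hypothetical grids: a clean one
# (Iceland-like, 0.03 kg/kWh) versus a coal-heavy one (0.8 kg/kWh).
clean = training_run_co2_kg(64, 72, grid_kg_co2_per_kwh=0.03)
dirty = training_run_co2_kg(64, 72, grid_kg_co2_per_kwh=0.8)
print(f"clean grid: {clean:,.0f} kg CO2, dirty grid: {dirty:,.0f} kg CO2")
```

The same job differs by more than an order of magnitude between the two hypothetical grids, which is why geolocation shows up in both the price and the footprint of a workload.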

Speaker:

So one of the things I heard about, and I'm sure

Speaker:

you have some thoughts on this, is cloud providers that are

Speaker:

starting up and they're just doing

Speaker:

GPUs, right? They're just doing training loads. Right.

Speaker:

And they don't need to be located anywhere special. Right. Like they don't

Speaker:

need to be in the northeast corridor. They could be in the middle of

Speaker:

nowhere, as long as they have power. Right. And

Speaker:

because you're going to run a load on

Speaker:

the thing, and it's going to take, say, 72 hours to run, you don't really care

Speaker:

if the latency is, you know, 150 milliseconds versus

Speaker:

3. Right. It doesn't really matter. Yes.

Speaker:

That's why you see a lot of these get built up in, like, Iceland, Finland.

Speaker:

The users can be in the Americas, can be in Asia. Right,

Speaker:

right. For them it's: can they get the capacity

Speaker:

they're looking for, and a good deal. If you're geothermal-powered

Speaker:

data centers, cheap electricity. Yeah.

Speaker:

And then it's cleaner supposedly. Right. As

Speaker:

long as you're not on the volcano belt.

Speaker:

Right. As long as it's not going to blow up. Yeah.

Speaker:

But yeah, so we definitely see that trend. And a lot of energy, you

Speaker:

know, what do we call it, oversupply sometimes. It can

Speaker:

be in Spain, because they overbuilt and the grid couldn't handle it. And

Speaker:

then they need to get a data center up and running, like, now, to take over

Speaker:

the power. But then

Speaker:

it takes a lot to make the racks start running. Right.

Speaker:

More than just the GPU itself, you need the connectivity and networking,

Speaker:

and that could be in shortage. So you need to solve a lot of different

Speaker:

pieces to actually deliver the actual compute.

Speaker:

But that's why it's a fascinating industry for us, because

Speaker:

we see things from the silicon side, the TSMC side.

Speaker:

So anything shifting supply and demand will have

Speaker:

an impact on the whole ecosystem. And this industry is winner-takes-all,

Speaker:

from TSMC down to the solution level.

Speaker:

You have to be the solution; your alternative solution is just not

Speaker:

going to work. So every single piece is so critical to

Speaker:

the whole chain. Packaging, right? It has to work,

Speaker:

right? If you don't know how to do it, then you just can't do it.

Speaker:

It's not like you can buy a cheaper pair of socks or whatever.

Speaker:

So we do. We're end to end, right? From the semiconductor production side,

Speaker:

TSMC. We're official TSMC partners, and we're actually going to

Speaker:

the TSMC conference this

Speaker:

November. Very cool. It is really cool. I

Speaker:

get geeked out by that stuff very quickly. And all the way to

Speaker:

the model, the token layer, right, the agentic layer. So

Speaker:

we sort of see things all the way through. Which,

Speaker:

I think my brain gets overclocked every single day.

Speaker:

I know what you mean, because I get to the time of, like,

Speaker:

2:33 PM and I'm like, I can't take any more input. Like,

Speaker:

the muscle, my brain muscle, is just dead. I know. How

Speaker:

do you do that? How do you get a roller in my brain to just, like,

Speaker:

relax my brain muscles? I found going for a walk

Speaker:

is a good way to do it. Right.

Speaker:

No, like, a coworker of mine calls it: everything turns to

Speaker:

hieroglyphics when he's

Speaker:

looking at stuff. And I was like, yeah, that's a good way to put

Speaker:

it. Because it's just kind of like, yeah, I can't.

Speaker:

So I usually spend time with my daughters. I feel like

Speaker:

they can be silly. And I would tell them, I'm so stressed out, and my daughter

Speaker:

was like, me too. I was like, what are you stressed about? One got one less donut

Speaker:

than the other. I was like, that's a very important thing. I agree with that.

Speaker:

That's very stressful. I would be really upset if I got one less

Speaker:

donut. So, yeah, it definitely puts things in

Speaker:

perspective. Yeah, that's cool.

Speaker:

I think one of the best things. Any other questions? No, plenty,

Speaker:

plenty. Like, I'm just fascinated by this. I know

Speaker:

we're kind of short on time, but one of the things that you mentioned was

Speaker:

TCMC. TSMC.

Speaker:

So for those who don't know who they are and how important they are to

Speaker:

the global economy, could you explain for those folks

Speaker:

and why I was so excited that you're going to one of their conferences? I

Speaker:

didn't know they had conferences, so. I don't think I would do justice

Speaker:

to explaining how important TSMC is. All right, how about I explain it and

Speaker:

then you tell me where I'm wrong. I'm sure you'll do a better job

Speaker:

than I can. So, TSMC: Taiwan

Speaker:

Semiconductor Manufacturing Company. That's right.

Speaker:

They are based in Taiwan. And

Speaker:

the reason why. Nvidia. There's a fascinating

Speaker:

story in the book called The Nvidia Way. I don't know if you've listened to

Speaker:

it or read it. Really awesome book. But basically,

Speaker:

one of the advantages Nvidia had early on, and arguably still

Speaker:

has now, was that they outsourced their chip

Speaker:

manufacturing to this company, TSMC. I'll get it right that

Speaker:

time. They are basically what they call a fab.

Speaker:

And you could, I mean, not

Speaker:

now, they're so busy. You in general, right?

Speaker:

Like, I couldn't call them up and be like, hey, I have some blueprints for

Speaker:

you, I have some chip designs I want you to make for me, can you

Speaker:

send me some? They're not at that scale. But

Speaker:

they're a fab. And so what happens is companies like

Speaker:

Nvidia, a few other companies too, will go and they

Speaker:

will design their chips, and then they'll basically,

Speaker:

not drop ship, but effectively print chips to order.

Speaker:

Which frees up a company like Nvidia

Speaker:

from having to build their own fabs, kind of like Intel does. Is that a

Speaker:

good description? 100 percent. So I usually call

Speaker:

Nvidia and AMD design houses, and then sometimes

Speaker:

people get confused, like, oh, are they like Louis Vuitton? I was like, no.

Speaker:

Right, right. Or like graphic designers? Yeah, yeah. So they're design

Speaker:

houses, and they are fabless. Right. And Intel,

Speaker:

which is interesting, because they do both. Right? Yeah, yeah.

Speaker:

Intel, as I was saying that, Intel doesn't. Yeah, they do both. Yeah.

Speaker:

Right. And it could be a great strategy. Could work

Speaker:

or. Well, it depends on many things. Right. Anyways,

Speaker:

so TSMC is, as I said before, this

Speaker:

industry, I don't know if it's good or bad, but it's a winner-takes-all

Speaker:

market. Right. So TSMC is definitely

Speaker:

the winner, for a lot of different

Speaker:

reasons. I think for the leadership itself

Speaker:

and the technical team, for the whole supply chain ecosystem, the

Speaker:

gravity, all the years of hard work they've put in.

Speaker:

So it's a position where I don't think anyone

Speaker:

can seriously challenge them

Speaker:

in a meaningful way in the next however many

Speaker:

years. So they're very critical. And then the

Speaker:

interesting thing about them: they're agnostic of design houses,

Speaker:

right? So they have a great relationship with Nvidia for sure, and I'm sure

Speaker:

with everybody, right? It's their job to

Speaker:

produce those chips. And then,

Speaker:

interestingly enough, it's aligned with mine, Silicon Data, because

Speaker:

I'm agnostic of chips, right? So

Speaker:

obviously I want to create products that are most important to the

Speaker:

ecosystem. So right now people care about a few chips, and

Speaker:

those chips happen to be from one design house. But let's say

Speaker:

another design house starts picking up a lot of momentum. For me, it's

Speaker:

like, how can I help everybody in the ecosystem

Speaker:

compare and contrast, right? Benchmark them, normalize

Speaker:

it in a meaningful way. So it's my job to work with all the design

Speaker:

houses. It's their job to produce chips that are usable for

Speaker:

the different design houses too. So we're very aligned in that sense. And

Speaker:

anything they do, right? So think about it: they are

Speaker:

future-looking, because they're not thinking about next year or next quarter. They think

Speaker:

about 20 years, 10 years. It takes them five, six

Speaker:

years to build a fab, right? And then they need the fab to

Speaker:

be utilized. And they have a threshold, right? If you're

Speaker:

building a fab and it's not utilized by year eight,

Speaker:

they plan right now, by year 10 they are

Speaker:

losing a lot of money. A lot, like billions of dollars,

Speaker:

right? So can you make sure the fab will be utilized, the demand

Speaker:

will be there, by year 10? Forecasting from today.
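
That year-eight threshold can be made concrete with a toy cash-flow model: spend for a few years building, then earn in proportion to utilization. Every figure below (a $20B fab, a five-year build, $5B a year at full utilization) is invented for illustration and is not TSMC's actual economics:

```python
# Toy fab cash-flow model: spend during construction, earn in proportion
# to utilization afterward, and report the break-even year. All figures
# are made-up illustrations.

def breakeven_year(build_cost_b=20.0,   # assumed fab cost, $ billions
                   build_years=5,       # assumed construction time
                   annual_rev_b=5.0,    # assumed revenue at 100% utilization
                   utilization=1.0,
                   horizon=20):
    cash = 0.0
    for year in range(1, horizon + 1):
        if year <= build_years:
            cash -= build_cost_b / build_years   # construction spend
        else:
            cash += annual_rev_b * utilization   # operating revenue
        if cash >= 0:
            return year
    return None  # never breaks even within the horizon

print(breakeven_year(utilization=1.0))  # a fully utilized fab
print(breakeven_year(utilization=0.4))  # demand that never showed up
```

Under these assumed numbers, dropping utilization from 100% to 40% pushes break-even from year nine to year fifteen, which is the asymmetry that makes a ten-year demand forecast so consequential.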

Speaker:

It's a very, very hard job to do. And it's not

Speaker:

like, it's not like, you know,

Speaker:

metals and mining and those things where you can hedge it, right?

Speaker:

There's a way to hedge the futures curve. But it's not like they

Speaker:

can forecast and do a swap on that, because

Speaker:

the market is so concentrated, and then very

Speaker:

binary, and a huge size. Who's taking the other side?

Speaker:

I don't know. It's very hard, it's too concentrated, to

Speaker:

do. So for them it's getting clarity on the supply-demand curve in 10

Speaker:

years. I mean, they do edge computing chips as well, not just data

Speaker:

center chips, right? But how do they think through that? I think that's

Speaker:

really challenging. It would be really challenging for me,

Speaker:

for sure. I'm sure they have way smarter people there to think through those problems.

Speaker:

But yeah, it's an interesting problem to have.

Speaker:

That's why, TSMC, for example, they sell to

Speaker:

their clients, who are the Nvidias of the world. So they have that kind

Speaker:

of transparency. But what they don't have, which

Speaker:

may be a different indicator for the supply-demand curve in

Speaker:

10 years, is end-user

Speaker:

pricing volatility. Right? And then, you know, okay, so if

Speaker:

every single chip I produced, right, data center

Speaker:

quality chips, its price, right,

Speaker:

is the indicator for supply-demand shifting. Maybe it

Speaker:

is, maybe it's not, right? At least you have some data points, which your

Speaker:

immediate sales and revenues, which is T0,

Speaker:

won't give you, because they're a few degrees removed from

Speaker:

end-user experiences: you give it to Nvidia, and Nvidia packages it to

Speaker:

AWS and GCP, and then end users, you and me. Right.

Speaker:

So that's something for them to think through as well.

Speaker:

Interesting. One of the stories I heard, and I

Speaker:

wonder if it's true, was part of the

Speaker:

reason

Speaker:

why Nvidia was able to really capitalize on this. There's a lot of

Speaker:

reasons, but one of them was the fact that in the

Speaker:

crypto craze, the run-up to get chips for that, Nvidia

Speaker:

had purchased. Now what you said makes a lot more sense. Nvidia had

Speaker:

basically purchased a certain amount of capacity at TSMC

Speaker:

for like three to four years, something like that. And then that happened to

Speaker:

coincide with the AI boom. Is that true? And

Speaker:

that. I guess that's a market too, right? Like, you know, like, hey.

Speaker:

So I wasn't. I'm not following all the ASICs,

Speaker:

so they have specific chips for the mining

Speaker:

side. That could be true. So I think,

Speaker:

I mean, a girl can dream. I

Speaker:

strive, you know, to really

Speaker:

help the industry, and then, you know,

Speaker:

the company, the team, hopefully can propel the industry

Speaker:

forward. Right. That's what I strive for,

Speaker:

and competency is very important. Obviously execution, your

Speaker:

hard work, is important. But a big piece is you have to be

Speaker:

really, really lucky. That is out of everyone's control.

Speaker:

And then, Nvidia puts so much time and effort into everything they do. You can argue

Speaker:

they were a really great company even before the AI boom and

Speaker:

everything. But the luck piece, how do you control that? How do

Speaker:

you know CUDA is gonna be the piece

Speaker:

that's needed? Right. Well, some.

Speaker:

Someone said that, you know, Jensen Huang is like the epitome of,

Speaker:

you know, the harder you work, the more luck you have.

Speaker:

True. There's a lot to that, and I know it's

Speaker:

complicated, but I'm just. It's interesting how the crypto kind

Speaker:

of boom and bust really kind of also

Speaker:

propelled us into the AI era. Not all by

Speaker:

itself, but it definitely, I think, gave. There was some momentum where

Speaker:

no momentum was expected, if that makes sense. Right. Yeah, I agree,

Speaker:

I agree. Timing is so interesting, but,

Speaker:

to your point, the harder you work. You have to

Speaker:

do everything you can with the environment you're in. Right? That's

Speaker:

cool. That's cool. It's all data. So we'll see what happens. What

Speaker:

I mean is, that's the importance of data. Right. Like, you know, people don't realize that.

Speaker:

And I keep calling back to Bloomberg. So I'm referring to Michael Bloomberg,

Speaker:

former mayor of New York. But before he was mayor, he

Speaker:

basically started a company called Bloomberg. And

Speaker:

he was not the only factor, but

Speaker:

a big part of it was, you know, people getting into his

Speaker:

philosophy. As I understood it, and if there's a good biography

Speaker:

on him, I totally would want to listen to it. But basically, getting

Speaker:

traders access to data gave them an advantage. Right. And he was

Speaker:

really early on in the idea that data is

Speaker:

not just something that's created as a byproduct of

Speaker:

transactions, but can actually be, you know, monetized

Speaker:

and arguably weaponized. Right.

Speaker:

And, you know, Bloomberg terminals,

Speaker:

it was interesting, because he basically sold these custom terminals so you'd

Speaker:

not have to rely on, like, local IT, who were still struggling with, you

Speaker:

know, just keeping the network up and running. These separate

Speaker:

devices became status symbols. And ultimately that's become like

Speaker:

this media empire. You know, I can watch Bloomberg on my

Speaker:

TV, I can listen to it, whether it's satellite radio or the

Speaker:

app or, you know, FM or AM radio

Speaker:

stations. You know, I think it's in San Francisco, New York, and

Speaker:

D.C. They have a big office in D.C. They always have an

Speaker:

interesting show called Political Capital. I think it plays

Speaker:

at 5 PM every day. I listen to it because it's kind of the

Speaker:

policy side of finance, and kind of what's going on in the world.

Speaker:

And AI has come up a lot, digital sovereignty too. So it's interesting

Speaker:

how all of these worlds, and I'd like your thoughts on this,

Speaker:

right, the worlds of finance, the worlds of tech, and the worlds of policy,

Speaker:

politics, and dare I say war, right, they're all kind of

Speaker:

crashing together in this giant thing. And

Speaker:

it's kind of cool, kind of scary.

Speaker:

I think it can be. I mean, sometimes I'm scared,

Speaker:

you know, because you see a few things and it's like, whoa.

Speaker:

There's a lot. I feel like for people born post-Covid,

Speaker:

not born, but who grew up post-Covid, I would call them the second gen,

Speaker:

Gen Z? Gen Alpha? Yes. I think it's Gen

Speaker:

Alpha now, apparently. I'm all confused. But for

Speaker:

them it's like, of course they should. My AI should be my

Speaker:

boyfriend, girlfriend. Right. Like, whatever. And then for me it's like,

Speaker:

this is not comfortable at all. Weird.

Speaker:

Yeah, yeah, yeah. For me it's, I have no idea what's going on.

Speaker:

Like, I'm just so creeped out by this. But for a lot of people it's like,

Speaker:

of course you do that. Of course you tell AI all your secrets.

Speaker:

Of course my phone can record my conversation. Of course

Speaker:

you can train, you know, your AI model, my

Speaker:

model, on all my Gmail content and information.

Speaker:

It's all edge computing, I have my own AI model. Of course you can wear,

Speaker:

you know, glasses and then record everything you and I talk

Speaker:

about. And how secure is everything

Speaker:

right now? Right.

Speaker:

Hardware-level encryption

Speaker:

is only available on a very specific few chips.

Speaker:

TPU can do that. Otherwise you rely on software encryption.

Speaker:

No, it's true. And software encryption is vulnerable to a quantum

Speaker:

attack, which is not that far away. The

Speaker:

software and use cases are moving so quickly; the hardware hasn't been able to catch

Speaker:

up. And it's expensive to do hardware encryption. It takes

Speaker:

longer and it's more expensive. That's why sometimes the hyperscalers charge a

Speaker:

higher premium, for that reason. Right. Are you willing to spend the

Speaker:

tokens and time and effort to do so? For some use cases, you can argue,

Speaker:

yes, yes, absolutely. No edge computing

Speaker:

chips can do that kind of hardware-level encryption.

Speaker:

And it's happening, like, now. Right, right.

Speaker:

I was talking to a startup called Quantum Knight. Nate claimed to have a solution

Speaker:

that is a low-compute kind of

Speaker:

post-quantum-ready thing. So I can send you their

Speaker:

link and information. Yeah, we track quantum

Speaker:

computing prices as well. Very different from GPU pricing, like, you know,

Speaker:

per-second or per-minute pricing versus hourly. Right. These are

Speaker:

different cycles you run. And then the GPU becomes like an error correction component of the whole

Speaker:

thing. But for us it's like, okay, so

Speaker:

compute now is GPU and TPU, whatever-PU, and then

Speaker:

it becomes quantum. How do we think through that? I don't

Speaker:

know. My brain just, like, you know. Yeah, I know. At some point it just

Speaker:

becomes, like. I'm not smart enough

Speaker:

right now to figure that out. I tell you, I go

Speaker:

through, like, quantum stuff, and I always joke with Andy, I'd be like,

Speaker:

15 minutes in, I get a migraine, which is basically my brain's version of

Speaker:

blue screening. And, like, okay, I have to stop. I can get

Speaker:

to about 45 minutes now, which is, you know, an

Speaker:

improvement. But this is actually a good book.

Speaker:

And he was actually a guest recently on the Quantum Computing podcast.

Speaker:

It's a thick book. It's a thick book. But I'll tell you this:

Speaker:

the first three chapters, introducing the concepts,

Speaker:

are probably the single best introduction to the

Speaker:

concepts I have ever read. Yeah, I will send you the link. Yeah,

Speaker:

yeah. Dancing with Qubits.

Speaker:

Really interesting book. Super nice author, too. He's a trip.

Speaker:

But, no,

Speaker:

you're right. The thing that really worries me is, I kind of

Speaker:

think about this like we built our entire economy, and we're

Speaker:

on a house of sand. Don't get me started on this. That's

Speaker:

another thing. We'll have to have you back on the show for

Speaker:

a second one. But, like, other countries

Speaker:

where they lay off hundreds of thousands of people, not just by American

Speaker:

companies. Right?

Speaker:

Yeah. Don't even get me started on that. Well, like, you know, we're

Speaker:

all based on. And the other thing, the elephant in the room, right, is

Speaker:

the fact that the T in

Speaker:

TSMC stands for Taiwan. Right. Kind of.

Speaker:

I know, I know it's very dangerous to talk about this, but. It's

Speaker:

kind of like, shoot. So I won't say much, but I'll just say it's

Speaker:

contested real estate. How about that? Right. That's a pretty safe way to say it.

Speaker:

Right? It's contested. Right. And you know,

Speaker:

the entire world, effectively, the kind of modern

Speaker:

civilization, revolves around the manufacturing that happens there. And

Speaker:

God forbid, you know, whether it's man-made or a tsunami or a bad

Speaker:

earthquake, I mean, our world, I mean, we get sent back

Speaker:

to the 1700s pretty quickly. You know, the 1700s is not,

Speaker:

you know, there were still people, still human beings, in the

Speaker:

1700s. It could be worse than that. That's true. It could be way worse than

Speaker:

that. That is a good point. I was trying to keep it. I was trying

Speaker:

to end it on a positive note. And I know, you're thinking, like,

Speaker:

no humans. Well, no, I mean, like,

Speaker:

I mean, there's a lot of ways that, you know, this apocalypse could go,

Speaker:

so to speak. Right. But it's a very. And,

Speaker:

like, just from an infrastructure point of view and a supply chain point of view,

Speaker:

you know, we've really championed

Speaker:

globalism and kind of all of these extended supply

Speaker:

chains for, you know, there were reasons, there's always reasons, but

Speaker:

at the cost of resilience. Right, right. That's kind of scary.

Speaker:

I assume you've read Taleb, right? Like, Antifragile.

Speaker:

I'm so sorry. No, that's fine. That's fine. But I really appreciate you taking the

Speaker:

time. Where can folks find out more about you? Silicon

Speaker:

Data.com. SiliconData.com, awesome. And we'd love to have you back on the show.

Speaker:

And you can tell us what these conferences were like. The, the

Speaker:

TS. Let's see how much I can understand

Speaker:

first. Right, right, right. That wasn't a good question.

Speaker:

That's why you've got to be like the kids today and record all your conversations,

Speaker:

so you can talk to the transcript later. All right,

Speaker:

nice seeing you guys. All right, thank you. And we'll let our AI finish the

Speaker:

show. And that wraps up another episode of Data Driven, the podcast

Speaker:

where we ponder the future of AI data and occasionally

Speaker:

the fate of humanity if we don't get GPU pricing under control.

Speaker:

Big thanks to Carmen Lee for joining us and blowing our minds with

Speaker:

compute market mechanics, financial innovation, and just a

Speaker:

touch of economic existentialism. Be sure to check out

Speaker:

silicondata.com to learn more. Just don't try to day trade

Speaker:

H100s after midnight. If you liked what you heard,

Speaker:

subscribe, leave a review, or send us compute credits.

Speaker:

Until next time, stay curious, stay caffeinated,

Speaker:

and remember, in a world of exponential AI, transparency

Speaker:

might just be the killer app.