Speaker:

Welcome back to Data Driven, the podcast that dives into the collision

Speaker:

of data technology and occasionally geopolitics with

Speaker:

the finesse of a caffeinated quantum computer. In this episode,

Speaker:

Frank La Vigne is joined once again by Christopher Nuland,

Speaker:

AI technical marketing maestro at Red Hat, for a no holds

Speaker:

barred breakdown of America's freshly minted AI action plan.

Speaker:

From Cold War vibes and AI sovereignty to the CHIPS Act,

Speaker:

GPU geopolitics, and existential musings on large language

Speaker:

models, this episode has more hot takes than a GPU server farm

Speaker:

in July. Plus, we debate whether Europe can still flex

Speaker:

its AI muscle without surrendering to Silicon Valley, and whether

Speaker:

AI models will ever truly think or just continue to be unreasonably

Speaker:

effective spreadsheet goblins. So buckle up, data

Speaker:

disciples. This one's dense, dynamic, and

Speaker:

dangerously nerdy. Let's get into it.

Speaker:

All right, that bouncy little pop number. That is a fun

Speaker:

fact. AI generated can only mean one thing. It's time for

Speaker:

a new edition of Frank's World TV Live or

Speaker:

an episode of Data Driven, depending on where and how you're listening, slash

Speaker:

watching. You can catch me at the following URLs: FranksWorld.com,

Speaker:

DataDriven.tv, and

Speaker:

ImpactQuantum.com. Speaking of

Speaker:

ImpactQuantum.com, my co-host and I have launched another

Speaker:

book. It's called Quantum

Speaker:

Curious: The Gateway to the Next Computing Revolution.

Speaker:

And what that is, is we basically took the first

Speaker:

thirteen-some-odd episodes of the season and distilled them down

Speaker:

into a little book. It's $2.99 on Amazon, but if you join

Speaker:

our mailing list, it's completely free. So go to Impact Quantum or scan

Speaker:

the QR code to find out more. All right, now that I've gotten the

Speaker:

commercialism out of the way, I'd like to welcome

Speaker:

back our guest, Christopher Nuland, who is

Speaker:

a peer of mine on the same team, an AI

Speaker:

technical marketing manager at Red Hat. How's it going? Good,

Speaker:

good. Glad to be back. I think

Speaker:

we ended the last talk on kind of a cliffhanger, and then

Speaker:

I think some recent news has really built on top of some of that

Speaker:

previous conversation. So I'm happy to be here talking about some big things

Speaker:

that are going on in the area of AI right now. Absolutely. So

Speaker:

over the weekend, the Trump administration dropped

Speaker:

America's AI Action

Speaker:

Plan. I think somebody likes alliterations.

Speaker:

You and I were chatting about this over

Speaker:

Slack, and you had some thoughts on this.

Speaker:

You had some interesting ideas around it, and some surprises as well.

Speaker:

So let's. Let's get into it. Yeah. So

Speaker:

I think overall, this is really needed.

Speaker:

We've seen a couple things like this come out for some other

Speaker:

countries globally. The EU has

Speaker:

this. I'd say the one from the US is more of a set of guidelines,

Speaker:

while the one from the EU is actually laws that they're trying to pass that

Speaker:

have some similar tone to this one. You know, we're

Speaker:

seeing things out of the UK, out of Singapore, other, you

Speaker:

know, other nations that really are trying to get an

Speaker:

idea of what is their strategy around AI

Speaker:

sovereignty. And this, to me, is a document

Speaker:

more about AI sovereignty than anything else.

Speaker:

It's really about how the US

Speaker:

goes into the next phase of what is almost

Speaker:

like a new industrial revolution around AI and

Speaker:

this document's really outlining the plan.

Speaker:

I think overall, I thought it was pretty well thought out, and we'll go through

Speaker:

and kind of pick apart some of the key areas. But I think

Speaker:

overall, the key tone here was about

Speaker:

AI sovereignty, so specifically within the US

Speaker:

and how we're going to be managing that. And overall,

Speaker:

I thought it was good. You know, when we were talking

Speaker:

on Slack, we were talking about how there's definitely a lot there about China

Speaker:

as well. Right. In a weird way,

Speaker:

I felt like this document was a bit of a declaration

Speaker:

of war in a way, because in

Speaker:

the document, it outlines that

Speaker:

they really consider this now like a Cold War

Speaker:

with China around AI. And what I thought was so

Speaker:

fascinating is I kept going back to this analogy of, like,

Speaker:

the Cold War arms race with Russia and how

Speaker:

we need to do certain things around AI because we basically need to

Speaker:

mimic what the United States was able to achieve during the

Speaker:

Cold War. And I think that sat with me

Speaker:

because I think, you know, even last time I was on here, we were talking

Speaker:

about how we basically know there's a Cold War kind of thing going on,

Speaker:

but it was different seeing

Speaker:

it. You know, you and I were talking earlier about it being on the letterhead.

Speaker:

Yeah, it's different. There's things that, obviously, you

Speaker:

can see with your own eyes, but it's quite a different thing when something appears

Speaker:

on official White House letterhead. Right. Or even,

Speaker:

you know, government letterhead. I think it's an

Speaker:

interesting shift. And this whole idea of a Cold War between the US

Speaker:

and China around AI is not a new concept.

Speaker:

Right. There's a really good book, and I'm gonna see if I can share

Speaker:

this tab real quick. This is an excellent book. It's an

Speaker:

excellent audiobook too. There you go.

Speaker:

Oh, there we are. Sorry everyone, my dogs are barking.

Speaker:

But this book came out in 2018 and a

Speaker:

lot of what he predicted has come to pass.

Speaker:

And it seems to me like the authors of this document have also,

Speaker:

if not read the book or listened to the book, have at least seen the

Speaker:

Cliff Notes version of it. Right. Like this, if you really think

Speaker:

about it, there's really only two major players right now in

Speaker:

the AI space and we're going to alienate a lot of people in the EU.

Speaker:

Right. But saying that, it's really largely

Speaker:

US and China, right? Yes. And

Speaker:

not to say that the EU is not in the game. Clearly, there's Mistral. And

Speaker:

apparently there's a rumor, I don't know if you heard this rumor that, that Apple

Speaker:

is considering buying Mistral. I have heard some of those.

Speaker:

So again, I don't own any Apple stock or whatever, so no financial interest here.

Speaker:

But I think it's an interesting idea if they were to do

Speaker:

that, because I wonder how that

Speaker:

would shift kind of the balance of, if not power,

Speaker:

perceived power. Right. Because Apple

Speaker:

obviously is a stalwart of Silicon Valley,

Speaker:

and Mistral is Europe's major player. Every

Speaker:

time you talk about the EU falling behind, they always say, what

Speaker:

about Mistral? Right. So if Mistral ends up getting purchased by, you

Speaker:

know, Apple.

Speaker:

I think there'd be a lot of drama about that. Yeah, I think so

Speaker:

too. I think it really just shows there's a line in the

Speaker:

sand between the two major superpowers here, between

Speaker:

China and the United States. My

Speaker:

speculation there is that there might

Speaker:

be something official there, but that the EU might step in

Speaker:

and just say no to that. Right.

Speaker:

Simply given what we're here talking about, AI sovereignty. And what does

Speaker:

AI sovereignty look like? I don't think the French

Speaker:

necessarily want to give that up, and I don't think the EU wants to give

Speaker:

that up.

Speaker:

From an open source standpoint, we're still seeing a lot of thought leadership

Speaker:

coming out of the EU, even though there's not

Speaker:

enough of what I would call enterprise momentum there.

Speaker:

Right. There's still a lot of research institutes there. There's still

Speaker:

a lot of, even some smaller firms,

Speaker:

like Hugging Face and Mistral for example, that are, you know,

Speaker:

EU based, that have made a big impact and are very open

Speaker:

source heavy. But at the end of

Speaker:

the day they're just still very small when you

Speaker:

consider these behemoth organizations like Microsoft,

Speaker:

Amazon, Nvidia, Apple, all the

Speaker:

FAANG corporations

Speaker:

that can really throw their weight around. And we've seen a

Speaker:

lot of

Speaker:

startups like OpenAI that have

Speaker:

climbed up now into the upper echelon, and

Speaker:

that's being really driven by American industry. So

Speaker:

that's just something the EU can't prop up as much.

Speaker:

But I still think they're a major

Speaker:

player. They may not be necessarily

Speaker:

one to one with China and America, but if there was a

Speaker:

second tier right under that, it would be the EU. Yeah, I

Speaker:

can see that. I also think it's too early to count them out of the

Speaker:

race. Right. Like, you know, this is the start of the

Speaker:

marathon. Right. So clearly there are two front runners.

Speaker:

But I wouldn't count them entirely out just yet. Right.

Speaker:

And I didn't know Hugging Face was a European company. I thought they were based

Speaker:

out of New York. But that must be their American headquarters. I

Speaker:

believe you may be right. Let me pull up Hugging Face.

Speaker:

I know. I think the founders are European.

Speaker:

You are correct. Founders are European, but they are based out of

Speaker:

America. And that just goes to show right there

Speaker:

the gravity of American enterprise: that

Speaker:

you can shell out a lot of money to get talent. Right. And

Speaker:

yeah, this was the thing that a European tech founder said. Right. So you know,

Speaker:

all the Europeans, don't hate on me, but there were a lot of founders that

Speaker:

would end up moving to Dubai, right, to bootstrap, and

Speaker:

then move to Silicon Valley, you know, at some point. Right. Like,

Speaker:

so I think the

Speaker:

European Union as a whole

Speaker:

has to address some systemic shortcomings when it comes

Speaker:

to its venture capital and startup

Speaker:

pipeline. Right. And I hope, I

Speaker:

think that they'll get it figured out. I just don't think that they're going to

Speaker:

get it figured out this year. They might

Speaker:

get it figured out by the end of the decade because I think that,

Speaker:

you know, just a little bit of back-of-the-napkin math, right? You

Speaker:

know, you can see

Speaker:

that growing the tax base is good for everyone

Speaker:

and this is one way to do that. And if you have your

Speaker:

brain drain, and we'll get into that term, you know, either

Speaker:

going to Dubai, you know, Silicon Valley or New

Speaker:

York, it's not good. Right. Because

Speaker:

you're basically educating them in country. Right.

Speaker:

And a lot of these countries have, you know, cheaper, you know, free tuition.

Speaker:

Yeah. So you're paying for the talent, you're training up the talent,

Speaker:

and then when the time comes to cash in on the return

Speaker:

on that investment, if you want to look at it that way, they're out of the country.

Speaker:

Right. So what are you going to do? I think that it's

Speaker:

in the EU's best interest to fix that problem. And like

Speaker:

you said, sovereign AI is a big deal.

Speaker:

And sovereign AI is different from data sovereignty, right?

Speaker:

Yep. And I don't think people have really gotten their heads around

Speaker:

that. So I know what my definition is of that,

Speaker:

you know, the idea that's even called out in this action

Speaker:

plan report. Right. Where it's like, you know, AI with American values. Right.

Speaker:

Yeah. And like you said, I'm pretty sure the French want to have, you

Speaker:

know, AI with French values, and the

Speaker:

Germans probably want to have, you know, with German values. Right. So I think even,

Speaker:

even painting the entire continent with one brush, even though everything is kind of done

Speaker:

through the European Union, that might

Speaker:

be to their detriment. Right. And the German market is also pretty

Speaker:

sizable too. Right. It's something like 80 million people. Right. And the

Speaker:

German language market, I think, adds another 20

Speaker:

million to that. Right. So, you know, I only say that because one

Speaker:

of the points that people made for taking German in high school

Speaker:

was that 100 million-ish people speak the

Speaker:

language. So it's not a small market. And I would say that,

Speaker:

you know, particularly when we're dealing with language models. Right. It's in the

Speaker:

name. Right. So language and culture, although not exactly the same,

Speaker:

are very much tightly linked. And that was something we talked about last

Speaker:

time. We got sidetracked, but that's what

Speaker:

I do here. That's fine.

Speaker:

What stuck out to you in the report? I think one of the things you mentioned

Speaker:

was, well, go ahead, I'll let you go. Sure.

Speaker:

The thing that I was most surprised about was

Speaker:

that Pillar One of the

Speaker:

document, on page six and a couple of other areas, was

Speaker:

really focused on the workforce

Speaker:

and this concept of like, securing the AI workforce,

Speaker:

making sure to have necessary people

Speaker:

in play. And then it got into like almost this Cold

Speaker:

War kind of mentality

Speaker:

of like, how do we make sure that we can trust the people that we

Speaker:

have. And it was just surprising to me because I thought it was

Speaker:

going to be more about the regulation of AI models, which it does move

Speaker:

into eventually. Right. And supply chain security.

Speaker:

But the, the concept of, of workforce and the fact that it

Speaker:

was the first pillar was intriguing to me. It got me

Speaker:

thinking about what the US administration is thinking about right now. And

Speaker:

I think they're really thinking about making

Speaker:

sure they lock down the people. And

Speaker:

for good reasons, and probably bad reasons too. You know, the good reason is,

Speaker:

you know, how do we entice the best experts

Speaker:

to stay here in America? How do we entice

Speaker:

the workforce to continue to move into the area of AI through

Speaker:

education? But then there also seemed to

Speaker:

be almost like a Cold War vibe there.

Speaker:

I don't know if you watched the movie Oppenheimer, but it kind of reminded me

Speaker:

in that movie where the people working on the

Speaker:

Manhattan Project had their personal lives

Speaker:

under review quite a bit.

Speaker:

And it kind of reminded me of that. Like,

Speaker:

in a year or two are all the AI experts going to, you know, have

Speaker:

the NSA and the FBI like keeping track of them

Speaker:

and what they're doing? And it doesn't explicitly

Speaker:

say that, but the tone kind of led me to think, oh, wow,

Speaker:

they're really interested in what these people are doing.

Speaker:

It's not just about the technology, but the people making the technology.

Speaker:

And that was very intriguing to me. I could see that being

Speaker:

a thing, especially if there's an actual honest to God, you know,

Speaker:

old school knockdown, drag out shooting war with any

Speaker:

country. I could

Speaker:

see that being, I wouldn't say nationalized, but

Speaker:

you'll have to get some kind of clearance. Even now, if you work in the

Speaker:

nuclear industry, you need a Q clearance. You

Speaker:

need a lot of invasive, not procedures, but

Speaker:

definitely a lot of invasive paperwork and investigations.

Speaker:

But I do see,

Speaker:

I didn't read the whole thing yet, but I did feed

Speaker:

it through NotebookLM. I did listen to that, and I did some skimming

Speaker:

of it. And one of the things was that it seems like they're

Speaker:

laying the groundwork for that in case things get sideways.

Speaker:

Also part of that is

Speaker:

securing the supply chain from the silicon on up.

Speaker:

Yeah. Which is a smart thing because

Speaker:

the chips are made in very limited

Speaker:

geolocations. Right. So

Speaker:

one major international incident, a shooting

Speaker:

war. Right. No matter who wins. You know, there's going to be an

Speaker:

island where most of the stuff is made. Yeah. That's going to be reduced to

Speaker:

rubble. Right. Now, whose flag gets planted on top of that rubble,

Speaker:

you know, remains to be seen. But, you know,

Speaker:

so much for the chip manufacturers there. Yeah.

Speaker:

And also too, you can't rule out natural disasters. Right. You know,

Speaker:

Was it 2004 or 2005? There was the massive

Speaker:

tsunami that, you know, caused a major

Speaker:

swath of damage. It's not impossible to imagine even just a natural

Speaker:

disaster, a bad typhoon, or an earthquake

Speaker:

like Japan has had. I mean, it could,

Speaker:

it's not impossible to imagine like more than one way for it

Speaker:

to rain on everybody's parade. And if you think supply chain issues with GPUs

Speaker:

are rough today. Yeah, I mean that's just,

Speaker:

that would be a big thing. But what really stood

Speaker:

out to me, and obviously I'm biased because my wife is a federal employee, was

Speaker:

talking about training federal employees to use AI. Right. And

Speaker:

there was, even in there, a section on

Speaker:

accelerating adoption, basically mandating

Speaker:

access for federal employees and

Speaker:

training on these LLMs,

Speaker:

which is interesting because I can

Speaker:

speak, not from firsthand experience, but certainly, you know,

Speaker:

secondhand experience. Right. Federal employees do not feel loved

Speaker:

and appreciated, let alone have access to any kind of training or

Speaker:

anything like that. So I thought it was interesting

Speaker:

that that was in there, because it's been a rough go for

Speaker:

feds the last six, seven months. Oh yeah. Everything's

Speaker:

been very negative. And this is like maybe the first

Speaker:

positive thing. You know, I was telling

Speaker:

you in the virtual green room that, you know, the agency my wife works

Speaker:

at, like they're not hiring new people,

Speaker:

but they're creating a new organization where people will be doubling up on their duties,

Speaker:

which presumably means they'll get access to the training. And she

Speaker:

may or may not be involved with that yet. We don't know. But

Speaker:

it's interesting to see that

Speaker:

happening. But yeah, what else

Speaker:

stood out to you? So I think the most

Speaker:

important thing that was in there, and what I

Speaker:

actually figured the whole document would be about, is

Speaker:

around supply chain security. So

Speaker:

if people aren't aware, when we talk

Speaker:

about supply chain, it's typically in industry terms of, you know, how

Speaker:

does something get made, the nuts and bolts, where do the raw

Speaker:

materials come from? That term was

Speaker:

never really used in technology until recently.

Speaker:

And probably the pandemic

Speaker:

is when most people first heard the term supply chain.

Speaker:

It was the SolarWinds hack. I think that

Speaker:

also really, yes, put it in perspective too.

Speaker:

So those who aren't aware, there's a company called SolarWinds,

Speaker:

their software was used predominantly in the government. I think

Speaker:

they still are. But there was a

Speaker:

hack where instead of hacking their software directly,

Speaker:

they hacked the supply chain. They injected

Speaker:

bad code early on into the supply chain

Speaker:

and that slowly propagated out to these

Speaker:

different government agencies. And the scary part

Speaker:

is that the very thing that was meant to monitor these

Speaker:

types of situations was the thing that had gotten infiltrated. So it

Speaker:

took a while for anyone to even know. And it was

Speaker:

massive. It impacted the government and impacted

Speaker:

enterprise. And that is where

Speaker:

I think NIST and a couple other agencies made the decision, okay,

Speaker:

we're going to come up with a requirement of what supply chain looks like

Speaker:

within these types of software development

Speaker:

processes. And it really gets into, okay, all the way

Speaker:

from how do we think up an idea for

Speaker:

code, how do we submit that code

Speaker:

into a repository, how do we compile it, how do

Speaker:

we scan it, how do we distribute it? And that's when we talk about supply

Speaker:

chain, a secure supply chain, that's what

Speaker:

we mean. And that relates directly to AI as well, because it's

Speaker:

all data pipelines. And for AI specifically, it's about where does

Speaker:

that data come from, where was it sourced,

Speaker:

when was it added into our model? How

Speaker:

can we prove that the model that we built over

Speaker:

here is the model that's running over here?

Speaker:

So if the government has an officially blessed model, how do

Speaker:

I know the model that's running within

Speaker:

my defense contract firm is that model? And that gets

Speaker:

all into this supply chain. And I was happy to see it there, even if it

Speaker:

wasn't as technically laid out as I wanted it to be.
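[Editor's note: the provenance check Christopher describes, proving that the model running in one place is the officially blessed model built in another, can be sketched as a simple digest comparison. This is a minimal illustration only, not anything prescribed by the action plan or NIST; the manifest fields and file names below are hypothetical, and a real pipeline would add signed attestations rather than a bare hash.]

```python
# Minimal sketch of model provenance verification. Assumes the "blessed"
# model ships alongside a manifest containing a published SHA-256 digest.
# The manifest layout and the key "blessed_digest" are illustrative only.
import hashlib
import json


def digest_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a model artifact, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, manifest: dict) -> bool:
    """Check that a deployed artifact matches the officially blessed digest."""
    return digest_file(path) == manifest["blessed_digest"]


if __name__ == "__main__":
    # Simulate the "blessed" model artifact and its provenance manifest.
    with open("model.bin", "wb") as f:
        f.write(b"weights v1")
    manifest = {
        "model": "example-model",
        "data_sources": ["internal-corpus-2024"],  # where the data came from
        "blessed_digest": digest_file("model.bin"),
    }
    print(json.dumps(verify_model("model.bin", manifest)))  # true

    # Tampering with the artifact (a supply chain injection) breaks the check.
    with open("model.bin", "ab") as f:
        f.write(b"injected")
    print(json.dumps(verify_model("model.bin", manifest)))  # false
```

The same idea extends to the data pipeline: each training dataset gets its own digest recorded in the manifest at ingestion time, so "where was it sourced, when was it added" becomes an auditable record rather than a claim.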

Speaker:

The document really just says we're relying on NIST and some

Speaker:

other government agencies to come up with a plan.

Speaker:

So this wasn't really the plan, it's more the actual call to action for

Speaker:

the plan. But it was good to see that there. It was important for it

Speaker:

to be there. I was happy that that was highlighted. And I think

Speaker:

in terms of security, it's the most

Speaker:

underappreciated one right now. Everyone's really focused on

Speaker:

model guardrails. And what we talked

Speaker:

about last time with the AI 2027 report or AI

Speaker:

breaking out of its shell. I think the most

Speaker:

important thing right now is actually more of the supply chain security where you

Speaker:

know, don't let people inject bad data into

Speaker:

models that are making critical decisions for the government,

Speaker:

for finance or healthcare. That's where our focus needs to

Speaker:

be first. I think having that secure supply chain is

Speaker:

ultimately what's going to lead to preventing

Speaker:

the AI 2027 scenario as well, where it'll prevent

Speaker:

a breakout or if there was a breakout, it's going to reduce the blast

Speaker:

radius of that type of situation.

Speaker:

Now that makes a lot of sense and it's interesting because there's not just

Speaker:

the traditional nation states that could be involved here. Right. There's also

Speaker:

bad actors in the normal sense. But also, the

Speaker:

AI itself could become a threat too. Right. Like,

Speaker:

and the report isn't technical in detail,

Speaker:

but I don't think that's the audience it was really for. Yeah,

Speaker:

but that's interesting

Speaker:

because you know, I don't know like from a game

Speaker:

theory point of view, right. Like you have the traditional, the usual suspects,

Speaker:

right. The countries, terrorist groups, criminal gangs, blah blah,

Speaker:

blah. Right. The usual kind of players. But AI also has

Speaker:

the potential to become yet another player

Speaker:

in the game. I certainly

Speaker:

didn't see that in the report and it didn't cross my mind until you kind

Speaker:

of bridged last stream and this stream content. I was like,

Speaker:

oh wow, this is multidimensional. This is like 5D chess or

Speaker:

something like that. Yes.

Speaker:

I would say for a first effort, it's actually a fairly reasonably well

Speaker:

written document. For folks

Speaker:

that don't know, I used to accompany our lobbyists

Speaker:

when I was at Microsoft, talking about technology

Speaker:

issues and things like that and you know, I was the

Speaker:

technical resource for that.

Speaker:

And as I was telling you in the virtual green room, a lot of these elected officials,

Speaker:

regardless of, you know, whether you agree with their

Speaker:

party affiliation or whatnot, they're not the most technical I

Speaker:

would say. Of the ones I've interacted with,

Speaker:

which is maybe 60, 70,

Speaker:

some of them are names you've heard of, some of them you've never heard

Speaker:

of. I would say less than 10%

Speaker:

are technical in any sense. Yeah, right.

Speaker:

And there were only two that I would say like would feel

Speaker:

at home having a technical conversation. I wonder how

Speaker:

many of the policymakers even

Speaker:

understand the term AI sovereignty. And

Speaker:

this is interesting, I'd love your opinion. Yeah, I think how many technical people

Speaker:

would understand. Well, that's what I mean. Yeah, go ahead. This is where I've been

Speaker:

having some conversations even within our own organization that we work

Speaker:

for. There's a lot of differing opinion on what AI

Speaker:

sovereignty is. A lot of people who keep talking to me about AI sovereignty,

Speaker:

I realize they're more talking about cloud sovereignty. They're talking about, how do

Speaker:

I secure all of my

Speaker:

compute within my borders and guarantee that everything is

Speaker:

within those borders. Which makes sense. I mean we work for Red Hat, we work

Speaker:

for a, you know, basically a cloud

Speaker:

Linux based company. Right. But when we talk about AI

Speaker:

sovereignty, at least for me personally, it's an accumulation of a

Speaker:

few core areas. It gets back to the data sovereignty, a little bit of that

Speaker:

cloud sovereignty. But it's really about

Speaker:

do I have control over my AI models, and do

Speaker:

I know where the data came from?

Speaker:

And I loved what you said earlier. It's about the culture of the model

Speaker:

and I think ultimately the AI sovereignty is about the culture of the

Speaker:

model and then making sure that you're containing your

Speaker:

AI to the borders of the United States.

Speaker:

So you're keeping all the secrets here, you're keeping the talent

Speaker:

that are driving it. But ultimately you're right, it's about that

Speaker:

culture and making sure that your model has the best

Speaker:

representation of your culture. And

Speaker:

it's kind of a scary thing to think about. It's an interesting topic, but

Speaker:

it also gets into a lot of the geopolitical challenges I think we're

Speaker:

having, which are now surfacing to the top because of things like AI,

Speaker:

you know, it's interesting. Well, it's, like, 100 percent.

Speaker:

And I think it was actually a colleague of ours, Robbie, shout out to Robbie,

Speaker:

we gotta have him on the stream one of these days. You know, he

Speaker:

was talking about kind of AI sovereignty like, you know, what is, you know,

Speaker:

you can use an American model, right, trained on American

Speaker:

data, right, and then tell it to behave British. He used

Speaker:

better words, right? You know, the spellings and the grammar and things like that. But

Speaker:

whose values are in there, right. When you, when you ask it questions, Right,

Speaker:

yeah. And, and that gets to an interesting thing, right? So

Speaker:

like you know, I, I,

Speaker:

half my grandparents were not born in the U.S. They're immigrants,

Speaker:

right. So one of the countries my

Speaker:

ancestry comes through is Ireland, right. But the Ireland that a lot

Speaker:

of my older family members came from really doesn't

Speaker:

exist anymore, right. It's not the rural kind of

Speaker:

poverty stricken country that it was, right. 100, 110 years

Speaker:

ago.

Speaker:

So it was very awkward when I was

Speaker:

in Ireland as an American.

Speaker:

Even though it felt familiar, it also felt very foreign. Right. Because

Speaker:

you know, if you think of me as

Speaker:

a, you know, large language model,

Speaker:

so to speak. Right. I grew up in New York. Right. I'm

Speaker:

very Americanized. So when I go there and it felt familiar.

Speaker:

Right. Like, the pubs and the restaurants felt like places my older family knew;

Speaker:

it felt like grandma's house and that sort of thing. But it clearly was

Speaker:

not. And it was clearly also

Speaker:

not the same place that they left. Yeah. That you would hear in family stories

Speaker:

and things like that. You know, so it's

Speaker:

interesting because also I think

Speaker:

values and country and all of that are inherently

Speaker:

political and I think that's why you're seeing this. Right. It is inherently

Speaker:

geopolitical, inherently all of these things.

Speaker:

So technologists, who are not

Speaker:

used to these types of conversations, are now suddenly pulled into this,

Speaker:

and God forbid if there's a, you know, an actual kind of

Speaker:

20th century global war style thing happening. Yeah. Or would happen,

Speaker:

you know, it's only going to get worse

Speaker:

from here. So I do

Speaker:

find it, I do find it interesting how

Speaker:

technologists are now suddenly pulled into this. Right. Famously,

Speaker:

you know, Jensen Huang made

Speaker:

an emergency visit. That was,

Speaker:

that was a big deal. Right. And actually, a lot

Speaker:

of that kind of stuff is called out in this report.

Speaker:

You should go into details about that. So

Speaker:

Jensen Huang, apparently, I don't know what the driver

Speaker:

of it was, but I suspect it was that the administration was trying to

Speaker:

block all exports of GPUs to a particular country.

Speaker:

Yeah. So the rumor was that week

Speaker:

that all global chip exports

Speaker:

outside the US, other than to key allies, would just be completely

Speaker:

stopped. And obviously for some areas,

Speaker:

like China, they would just end exports for

Speaker:

pretty much all chips. So that was not just the ones that are blockaded

Speaker:

right now, but you know, really, even some of the basic

Speaker:

ones. Well, remember that Ford's assembly line

Speaker:

was shut down because there was a shortage, due to the pandemic and nothing else,

Speaker:

of chips to put in the cars on the assembly line. Yes.

Speaker:

And it cost them tens of millions of dollars a day or

Speaker:

something like that. Right. So not trivial. Right. So like this

Speaker:

could have, this could have been

Speaker:

really bad. So go ahead. I'm

Speaker:

sorry. No, no, no, I was just saying. I was just adding some flavor because

Speaker:

it, it was officially announced by the White House that they were

Speaker:

evaluating this, and then the word on

Speaker:

the street, you know, the open secret, was that

Speaker:

the U.S. was going to declare at one of the summits that they

Speaker:

were just going to cut the chip manufacturing

Speaker:

altogether.

Speaker:

Yeah, and then Jensen made an emergency visit to the White House

Speaker:

which I guess, if you run the,

Speaker:

the most profitable company in America right now,

Speaker:

it helps. Well, not the most profitable, the most valuable. Right. Its

Speaker:

valuation is like 4 trillion last I heard, which is crazy.

Speaker:

But yeah, I mean he, it was a, it was a very

Speaker:

unplanned visit where he just went and knocked on the door. And

Speaker:

oh, to be a fly on that wall. I know,

Speaker:

I know, right. But I mean props to him. Immediately

Speaker:

after that, right, we start hearing of the oh, we're gonna

Speaker:

back this down. We're gonna consider still

Speaker:

shipping whatever the chip is to China. That's kind of a.

Speaker:

Right, an A100

Speaker:

knockoff.

Speaker:

We did see impacts in that conversation, but I think it's important because it

Speaker:

builds into this document, because the document clearly outlines

Speaker:

the semiconductor supply chain, outlining the reliance

Speaker:

on Taiwan.

Speaker:

What I loved about it is that there was a section here,

Speaker:

one second, I am

Speaker:

pulling it up. It was

Speaker:

a little tongue in cheek where they're talking about

Speaker:

reviving US chip manufacturing under the CHIPS Act,

Speaker:

but stripped of ideological constraints.

Speaker:

And we won't go into the politics of that here. But I thought that was

Speaker:

pretty funny because the CHIPS Act was obviously a big deal.

Speaker:

It was a big deal for me because when it was announced I was still.

Speaker:

I'm based in Boston now, but I'm from Northern Indiana around

Speaker:

the area where Purdue University is, close to Chicago. And

Speaker:

we were actually called out in the CHIPS Act. They were going to build a

Speaker:

semiconductor facility there in our area in

Speaker:

conjunction with Purdue University.

Speaker:

But then when Trump was elected, he was trying to claw back anything he

Speaker:

could from the CHIPS Act. Right. I'm happy to see that the CHIPS Act

Speaker:

is back on the table. I think it's still going to be

Speaker:

extremely political like we have seen with these types of acts,

Speaker:

but is needed. I

Speaker:

am hoping that $6 billion isn't just

Speaker:

going to go to Intel, because I think the innovation there is starting to

Speaker:

die off. I'm hoping that we see

Speaker:

more focus towards some innovative areas in chip

Speaker:

manufacturing here. And also, ultimately, which is called out, we

Speaker:

want to bring over a lot of the Taiwan based technologies

Speaker:

and my understanding is that there's just a bunch of

Speaker:

explosives within those facilities there in

Speaker:

Taiwan and they're ready to just blow them up at a moment's

Speaker:

notice and move shop to the U.S.

Speaker:

Wow. So I know they're building some of those facilities. I think

Speaker:

Arizona was one of them. I think Texas is another

Speaker:

where they're starting to mimic some of that chip production. And

Speaker:

basically right now the United States is trading military

Speaker:

equipment for chip technology.

Speaker:

It's crazy. But it's absolutely fascinating from a

Speaker:

geopolitical standpoint that the currency right

Speaker:

now for Taiwan is chips.

Speaker:

Yeah. And so. But I think that's

Speaker:

a big driver. It's one of the things that was called out. It was

Speaker:

called out in pillar two of the document, which is called

Speaker:

Build American AI Infrastructure. And I think. Yeah, you have the

Speaker:

outline there where they call out

Speaker:

specifically the semiconductor leadership and then also securing data

Speaker:

centers. I thought this was interesting. They're going to start having federal

Speaker:

guidelines on data center security

Speaker:

and will also incorporate military and

Speaker:

intelligence usage for those facilities. This is what I

Speaker:

was telling you about. This is just reminding me of when I watched

Speaker:

Oppenheimer and learning about the Manhattan Project and

Speaker:

there were military guards in front of

Speaker:

the physics research facilities in

Speaker:

the University of Chicago and in New York

Speaker:

and in Los Alamos. It just

Speaker:

seems very, very similar where it's like we are now going to

Speaker:

attach military guards

Speaker:

to guard our public sector AI

Speaker:

infrastructure. And yeah, I mean

Speaker:

one of the interesting things and I think this really kind of, if you take

Speaker:

a step back, right.

Speaker:

For many nations, why is domestic auto production important?

Speaker:

Right. Because when it hits the fan,

Speaker:

you make tanks, right? You make

Speaker:

airplanes. Like all these things are important for

Speaker:

nation states. Right. So automobile production is a

Speaker:

proxy for tank production. Right.

Speaker:

Civilian airplane production is a proxy for, you

Speaker:

know, military aircraft. To this I would add now probably chip manufacturing.

Speaker:

Right. And possibly AI model creation.

Speaker:

Yeah. You look at what's happening around the world where there are conflicts. Right.

Speaker:

Drones are playing a huge part of this. Yep. Right.

Speaker:

Whether they're autonomous or not, we will never really know

Speaker:

until the history books are written and even then. But

Speaker:

the whole idea of, you know, drone

Speaker:

and AI based warfare. Right. You know,

Speaker:

one of the videos coming out of the Ukraine conflict was

Speaker:

the Russian airplanes were covered in tires. I don't know if you saw

Speaker:

this. No. So one of the.

Speaker:

The thinking is that they had, I guess, old tires covering some of

Speaker:

the parts of the airplane strategically. The best guess that everyone

Speaker:

has. And I've heard this from multiple sources saying it's kind of true and kind

Speaker:

of not, so take it for what you will, is that that

Speaker:

was done to confuse computer vision systems. Yeah. And

Speaker:

then there's this other thing. I don't know if you heard of patch attacks,

Speaker:

which is basically this idea of. Let's see, I pulled up some.

Speaker:

Some graphics of this, but basically.

Speaker:

Open image in new tab.

Speaker:

Basically, it's the idea that you can alter

Speaker:

a structure, like a stop sign,

Speaker:

in ways that the AI model will see something different

Speaker:

and alter what the AI model is

Speaker:

determining it sees. And apparently you're seeing a lot of this,

Speaker:

if you look at footage from, you know, the Ukraine area, is that

Speaker:

you're seeing, like, you know,

Speaker:

tanks on both sides with

Speaker:

stickers on them that look like really warped QR codes or

Speaker:

like bizarre things like this. Yeah. And it's basically to

Speaker:

thwart these types of systems.
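The mechanism behind a patch attack can be sketched with a toy model. Nothing here is a real vision system: the linear "classifier," its random weights, and the 8x8 "image" are all made up for illustration. Real patch attacks target deep networks, but the intuition is the same: a small, strange-looking region of pixels, chosen to push against what the model has learned, can swing the output even though most of the image is untouched.

```python
import random

# Toy "classifier": a fixed linear model over a flattened 8x8 grayscale image.
# Hypothetical weights for illustration only -- real patch attacks target deep nets.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]

def predict(pixels):
    """Return 'stop' if the weighted pixel sum is positive, else 'yield'."""
    score = sum(wi * pi for wi, pi in zip(w, pixels))
    return "stop" if score > 0 else "yield"

# A clean image the model classifies as 'stop':
# light it up exactly where the weights are positive.
clean = [1.0 if wi > 0 else 0.0 for wi in w]

# Adversarial "patch": overwrite the first 9 pixels (one corner of the image)
# with extreme values chosen to oppose the weights -- analogous to slapping
# a weird warped-QR-code sticker on one corner of a sign.
patched = clean[:]
for i in range(9):
    patched[i] = -40.0 if w[i] > 0 else 40.0

print(predict(clean), predict(patched))  # the patch flips the label
```

The corner patch covers only 9 of 64 pixels, yet its contribution to the score overwhelms everything else, which is roughly why a sticker can defeat a detector without changing what a human sees.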

Speaker:

So. That's fascinating. It's interesting, isn't it? And this gets back

Speaker:

to. I don't know if it was on the stream or another conversation we had.

Speaker:

We're building these systems, these LLMs with, you know, hundreds of billions of

Speaker:

parameters. Right. If not a trillion or two,

Speaker:

we really don't know how they work. No, we think we know.

Speaker:

And you and I were talking about this the other day, actually, it wasn't on

Speaker:

a stream or anything. I think that the LLMs

Speaker:

that we have now are unreasonably effective.

Speaker:

Right. They're able to, and I'll put air quotes here for anyone listening, "reason."

Speaker:

Right. They shouldn't be able to

Speaker:

based on. I mean, all I see is just a vector

Speaker:

database with lots of relationships between words.

Speaker:

Yeah. Right. They're capable of doing things that

Speaker:

I wouldn't think they'd be capable of. Yet they are.

Speaker:

Yeah. So there's a lot of research dollars going into figuring this

Speaker:

out right now. Like, why is that? Is there something

Speaker:

inherently powerful about language? Probably. Yeah. Right.

Speaker:

That and, you know,

Speaker:

language is kind of like the assembly language of the mind, if you think

Speaker:

about it. Right. So I can

Speaker:

encode my thoughts into something, whether it's a written word,

Speaker:

whether it's, you know, vocalizations,

Speaker:

and then have that come out. It's basically

Speaker:

like a codec for human thought. And

Speaker:

maybe there's some kind of. I don't want to say intelligence, but some kind of

Speaker:

something we don't quite yet grasp. Yeah. About the nature of language

Speaker:

and relationships between words that

Speaker:

automatically you get for free. Once you kind of train these models up,

Speaker:

I think that's fascinating, and I'm glad there's a lot of research dollars to that.

Speaker:

It is. But, you know, clearly the human nervous

Speaker:

system, our visual system, our cortex, whatever it's called,

Speaker:

you know, we know that that is a moth sticker on a stop sign.

Speaker:

Yeah. What is different about how the AI learned

Speaker:

that makes it vulnerable to this type of attack. That's fascinating.

Speaker:

And it doesn't see things like we do. There's

Speaker:

a lot of research papers that'll show you, basically, what does the model

Speaker:

see? And you see it and it's just absolute nonsense to us.

Speaker:

Right. It doesn't see what we see. It's like

Speaker:

it doesn't relate the.

Speaker:

You know, maybe it's not correlating the red and

Speaker:

the white backgrounds, but instead it's correlating

Speaker:

the position of the text or the fact that it's four

Speaker:

capital letters positioned over an octagon or something like that.

Speaker:

The way it figures these things out is

Speaker:

different than how we think we do it. Yeah. It actually seems obtuse.

Speaker:

It's obtuse, but it does it a billion times faster than us. So even when it's

Speaker:

obtuse, it gets to something faster than we do because it just can

Speaker:

do it a billion times over. And that's where

Speaker:

the secret sauce really is. But how

Speaker:

things relate back to each other, obviously, we have these,

Speaker:

these vectors that, you know, build relations between

Speaker:

words. But how it can then take it and

Speaker:

reason is still not quite

Speaker:

understood. Right. Right now it's just not understood.

Speaker:

No, it's not understood. And that's kind of what

Speaker:

keeps me up at night. We're putting these.

Speaker:

Again, you know, full disclosure, we both work for an enterprise software company with very

Speaker:

large customers. You know, we're deploying these LLMs in

Speaker:

places where they're

Speaker:

not exactly making the life and death decisions right now,

Speaker:

but it's not that hard to imagine that they would.

Speaker:

Right. Yep. And I don't know, I think that's.

Speaker:

That's just a huge security vulnerability. We don't know how these things work

Speaker:

and also understand that it doesn't make sense to hold off

Speaker:

deploying these things until we fully understand them. Right. That's

Speaker:

not going to fly either. But I think we should,

Speaker:

as a society, like, really think about,

Speaker:

you know, what are the consequences here. Right. Think of what

Speaker:

the Jeff Goldblum character in Jurassic Park. Right.

Speaker:

You know, talked about chaos theory and all that. Right.

Speaker:

Like, you know, the

Speaker:

unintended consequences of this. We

Speaker:

should, to your point, have AI in a box, like,

Speaker:

and make sure it's really hard for that to get out. But

Speaker:

again, like, you know,

Speaker:

These things don't think like us

Speaker:

and they may think in more circuitous and obtuse ways that don't

Speaker:

make sense to us, but again, they do it a billion times faster.

Speaker:

So, you know, it could end

Speaker:

up being far more clever than we are.

Speaker:

Absolutely. I remember when I was learning comp sci,

Speaker:

and one of the things, in assembly language class actually,

Speaker:

was how multiplication on silicon is typically done.

Speaker:

What was it? Yeah.

Speaker:

I don't know if it's still true, but back in the day it was true

Speaker:

that multiplication is actually done through repeated addition.

Speaker:

Yeah. It was actually more efficient to do it that way.

Speaker:

Right. Again, I think that's a great summary of like,

Speaker:

that's kind of the slow way. But if you're operating billions of

Speaker:

times faster, the slow way isn't so bad.
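That "repeated addition" is usually organized in hardware as shift-and-add: one addition per set bit of the multiplier, with cheap bit shifts in between. A minimal sketch, in Python here, though a real ALU does this with an adder and a shift register:

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative ints the way simple hardware does it:
    add shifted copies of `a` for each set bit of `b` (shift-and-add),
    which is repeated addition organized around the binary digits."""
    result = 0
    while b:
        if b & 1:          # lowest bit of b is set -> add the shifted a
            result += a
        a <<= 1            # a * 2: a one-cycle shift in hardware
        b >>= 1            # move on to the next bit of b
    return result

print(shift_add_multiply(13, 11))  # 143
```

For an n-bit multiplier this is at most n additions, which is the "slow but simple" trade-off being described: each step is trivial, and raw clock speed makes the total cheap.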

Speaker:

Or the slow way doesn't mean anything. And I think you and I were having

Speaker:

this conversation

Speaker:

that, you know, if you think about the power requirements of these

Speaker:

AI systems. Yeah. Versus the power requirements of the human

Speaker:

brain, something like 25 watts.

Speaker:

Right. And if you think about the intelligence of

Speaker:

birds, like crows in particular. Right. They have the

Speaker:

intelligence of a six or seven year old, supposedly.

Speaker:

You know, not only do they have to

Speaker:

do it power efficiently, but they also do it weight efficiently

Speaker:

too. Right. So the infrastructure, you know,

Speaker:

that a crow thinks with. Nature, or

Speaker:

evolution or whatever, has to put it in a lightweight body.

Speaker:

Like, I don't fly. I'm obviously not a petite individual,

Speaker:

so I don't have to worry about that. But like, if you're a bird, you

Speaker:

know, you have to fly, so you have to think about that. And yet they're

Speaker:

able to manifest some kind of

Speaker:

intelligence with very modest hardware. I mean,

Speaker:

their brains are not that big. I think the size of a

Speaker:

walnut. I don't know. Like, this is totally off topic, but.

Speaker:

No, it's related though. And in the report they were talking about

Speaker:

power

Speaker:

requirements and grid security.

Speaker:

Right. And it was called out. And you think about just the sheer

Speaker:

massive amount of power that these AI models

Speaker:

take. It's insane. I think there was a point where

Speaker:

one third of all power was being used for, like, bitcoin mining. At one point

Speaker:

that went down, and now we've replaced that with AI

Speaker:

and, you know, it keeps going up and up, to the point

Speaker:

that, you know, it's possible that half of all the power being used here soon

Speaker:

is just going to be for AI. And I can see that there's no

Speaker:

evolutionary pressure like there was on biology. No, no, you can just

Speaker:

throw more power at it. So in

Speaker:

this case, with. With the LLM technology,

Speaker:

you can just throw more chips. Right. And, you know, make

Speaker:

them. You know, this actually hits home

Speaker:

because of where I live. So Ashburn,

Speaker:

or Loudoun County, Virginia, which, if you've ever flown in

Speaker:

and out of Dulles Airport, you've been there, is data center alley.

Speaker:

So US-East is there, US-East 1 and 2, for all the major providers. Right.

Speaker:

Plus a lot of private ones, too.

Speaker:

I live between there and Three Mile Island. Oh, wow.

Speaker:

Yeah. So one of the big controversies in

Speaker:

the state of Maryland is that they want to put in what they call the

Speaker:

Maryland Piedmont Reliability Project or something,

Speaker:

the MPRP. They're basically going to put in high-power

Speaker:

lines from Pennsylvania to Virginia,

Speaker:

which is a political football because there's a lot of land that's going to have

Speaker:

to be eminent domained. Yeah, Right. There's

Speaker:

obviously environmental factors, but also this is the

Speaker:

thing that is really kind of insult to injury. Right.

Speaker:

None of the power that's going to go over those lines is going to be

Speaker:

consumed here. It's all basically exporting power from

Speaker:

Pennsylvania through to Virginia, which

Speaker:

is not a good look. Because the people who

Speaker:

vote Maryland politicians in are Maryland residents. So there's this whole.

Speaker:

It's a very big controversy right now.

Speaker:

And it's interesting because what used to be

Speaker:

a very isolated hobby of technology

Speaker:

is now embroiled in geopolitics, local

Speaker:

politics. It's just kind of like I kind of miss the good

Speaker:

old days before lawyers got involved.

Speaker:

Yep. But

Speaker:

sorry, but no, I mean, that's a good point. That's, you know, you think about

Speaker:

the power requirements, right. You know,

Speaker:

for these things, you're gonna have to build new power centers. You're gonna have to

Speaker:

do this. Right. And then, you know, what's your

Speaker:

power source going to be? Solar is awesome. Solar

Speaker:

can't solve everything, Right. So

Speaker:

what's it going to be? Is it going to be wind? You know, is it

Speaker:

going to be, you know, coal? Is it going to be natural gas? Is

Speaker:

it going to be oil? Right. There's going to be a whole thing. It's all fun and

Speaker:

games until people are paying way more for their electric

Speaker:

bill each month than they're used to.

Speaker:

Yeah, it's going to change things

Speaker:

very quickly, especially if it starts impacting people's monthly power bills.

Speaker:

Right. I think right now we haven't seen it too much just because

Speaker:

we've been able to keep up with demand. But once that demand

Speaker:

starts really affecting prices, I think we'll also see

Speaker:

AI being a conversation point in that way where it's going to start.

Speaker:

And then I think, you know, even with the 2027

Speaker:

plan that we were talking about, you know, we were talking about things

Speaker:

like, you know, universal basic income and stuff. You know, if AI starts

Speaker:

taking over everything. And that wasn't outlined in this document,

Speaker:

which I'm not surprised by. But it's

Speaker:

it's going to be a big conversation point. If AI does work the way that

Speaker:

we think it's going to work, will we start seeing the AI

Speaker:

take the jobs? And if they take the jobs. I think, actually,

Speaker:

it was Bill Gates, like 10 years ago, talking about UBI

Speaker:

for AI, and at the time we just thought Bill was being crazy. Like,

Speaker:

go back to your Gates Foundation. You know, go back and work

Speaker:

on malaria. Yeah. But no, and even Elon Musk. I mean,

Speaker:

Elon Musk is definitely a polarizing figure, as is Bill Gates. But they're both

Speaker:

polarizing in different directions. Yeah. They both agree on UBI. I have

Speaker:

mixed feelings personally about UBI, and it's

Speaker:

not because I'm a mean individual. It's just if you study the history of

Speaker:

serfdom. Yep. I don't know.

Speaker:

Looks a little too similar to me. Yeah. But that's just my take

Speaker:

on it. But

Speaker:

you're right. And also, governments are getting involved. Because if you go

Speaker:

to your local McDonald's, right. Or your Dunkin Donuts, right. And

Speaker:

you think of how many people used to staff that in the past

Speaker:

versus how many people staff that now. Yeah.

Speaker:

Right. And

Speaker:

assume, well, human nature is human

Speaker:

nature. Right. If you used to take 10 people to run your average

Speaker:

McDonald's, now they can get by. I don't know. If you go in there now,

Speaker:

there's like five, maybe four. Yeah, four or five. And that's

Speaker:

generous. Right. If nothing else, the taxes on

Speaker:

the wages have gone from taxes on 10

Speaker:

employees to taxes on five. Yeah. Right.

Speaker:

That, that's a big deal. It is, right. Because now

Speaker:

you're taxing. Now granted, you're not taxing them a lot because

Speaker:

they're not making a lot of money, but still that's 50%.

Speaker:

So if you're kind of like a, you know, a number cruncher and you're

Speaker:

looking at every McDonald's, right, when you have 100 McDonald's, the tax

Speaker:

revenue out of each McDonald's, you know, or at least

Speaker:

on the income of it. Right. The income of the individuals. The income tax on

Speaker:

that is going to be way less now. Even

Speaker:

now. Even before AGI. Yeah, before

Speaker:

that. Right. Because it's just automation. Right. And I personally

Speaker:

would rather deal with a kiosk. Same

Speaker:

here. Than deal with a person. Yeah. Right.

Speaker:

Especially if you have like special orders. Right. Like, oh, you know, my kid doesn't

Speaker:

want ketchup on his burger. Right. He doesn't want onions on his burger. Right. So

Speaker:

you just have that as a favorite of the app and then just press go.

Speaker:

I don't even have to touch the kiosk. Yeah,

Speaker:

I think that is going to be. That's not even an AI system,

Speaker:

Right. That's just good old fashioned automation. One of

Speaker:

the big Silicon Valley AI

Speaker:

gurus, whose name escapes me right now, was talking

Speaker:

about how the jobs

Speaker:

that are going to be considered desirable are going to be

Speaker:

completely flipped here soon.

Speaker:

He was saying the most desirable job might just be people in

Speaker:

performing arts. Right. He's like, AI is

Speaker:

not going to replicate that anytime soon. He's like, yes, you may have

Speaker:

movies being AI generated, but there's still something to be said about

Speaker:

the performing arts. You know, obviously like

Speaker:

plumbing and electrician work and construction

Speaker:

work, you know, robotics might amplify that and make

Speaker:

it better, but there'll still be a human element. But you know, traditional white collar

Speaker:

jobs as we know them, other than the people

Speaker:

who manage that AI, I just

Speaker:

feel like it's going to be completely turned upside down if

Speaker:

AI does what we want it to do. That's a big if right

Speaker:

now. It's a good tool. But the real if is

Speaker:

whether we're gonna get to this area of agentic AI, where AI is actually being

Speaker:

able to do the full job of someone rather than just being a

Speaker:

tool that they use. And that's the if right now that we're

Speaker:

betting a lot of the economy on. There's a

Speaker:

lot of betting from many financial institutions that

Speaker:

AI is going to be the next industrial

Speaker:

revolution. I think that's still yet to be proven out.

Speaker:

Just to go back to the UBI though,

Speaker:

there's a book series you might be familiar with called The Expanse.

Speaker:

Yes, love those books. They

Speaker:

cover this idea of universal basic

Speaker:

income. And, you know, in that

Speaker:

series, I think it's like 90,

Speaker:

95% of

Speaker:

Earth's population is basically on universal

Speaker:

basic income of some sort. And

Speaker:

you then have the 5% that actually

Speaker:

just have jobs. Right. It's a big deal that they just have

Speaker:

a job and they're doing things and you know, they're

Speaker:

politicians and people managing technology

Speaker:

or defense and it's. It's fascinating. And I think

Speaker:

if anyone's wanting to look at a little

Speaker:

bit less rosy kind of outcome

Speaker:

and one that I think is more accurate, I think it would be a

Speaker:

combination of. Of the Expanse and then probably Ghost in

Speaker:

the Shell, the anime. Both of them show

Speaker:

AI and technology not to the extent

Speaker:

of, like, Terminator or the Matrix, where everything gets destroyed, but more

Speaker:

of a like human progression just

Speaker:

gets bogged down by this development. We end up in this

Speaker:

more like

Speaker:

technocratic kind of realm. Where techno

Speaker:

feudalism almost. Yeah, that's a great way of putting it. Techno

Speaker:

feudalism. And what's interesting is if you look at kind of the Expanse. So I'm

Speaker:

a big fan of the Expanse. I've read. I haven't read all the books, but

Speaker:

I've read a lot of them. I've seen the series, which is

Speaker:

excellent by the way, on Amazon. I'm

Speaker:

salty that they stopped it at season six, but I can let that

Speaker:

go. But what's interesting is that the people with gumption ended up

Speaker:

leaving Earth and going to Mars. Yep. Or the asteroid

Speaker:

belt. So what happens is, 100 years after that, now you have kind of

Speaker:

like these three factions. Right. Everyone looks down on Earth,

Speaker:

right. Because there's always like, particularly in the show, there's always these barbs where

Speaker:

the politician says, you know, if you don't do this right, I'm going to

Speaker:

put you on basic. Right. So basic becomes like a threat,

Speaker:

which I think is interesting. And then

Speaker:

the people who are more entrepreneurial end up going to Mars or the

Speaker:

asteroid belt. And that doesn't always work out well. So you

Speaker:

have this tension between these three different factions. And then

Speaker:

throughout the course of the books, a

Speaker:

fourth faction enters the scene and disrupts the power of

Speaker:

the status quo. And that's kind of the main tension

Speaker:

of the books is, you know, what happens

Speaker:

after that. But highly recommend those books if you

Speaker:

haven't read them. Or the TV show. The TV show is really well

Speaker:

done. I think I would agree. From what I've seen of it, I haven't finished

Speaker:

it, but it's good. And I'm in the same boat as you. I'm

Speaker:

a couple books in. It's one of those series I kind of come back to

Speaker:

every once in a while. But funny enough, it's a series I reference

Speaker:

a lot. I think about it a lot because I was like, I think that's

Speaker:

a really accurate depiction of what the future could look like for us

Speaker:

with the technology. It's pretty reasonable. And that's what's

Speaker:

really nice about the show. Because there's also.

Speaker:

Obviously, you mentioned the pessimistic views of the future. Right. There's the

Speaker:

Matrix, there's the Terminator, but there's also Star Trek, which is a little too. On

Speaker:

the optimistic side. Yes. But

Speaker:

there's not really a middle. I think what's great about the Expanse. And I haven't.

Speaker:

I haven't seen Ghost in the Shell anime in a long time.

Speaker:

I did see clips of the Scarlett Johansson movie,

Speaker:

but the

Speaker:

Expanse does a pretty good job of going down the middle. Like, there's going to

Speaker:

be societal changes that will come

Speaker:

from this that we really can't imagine now. Right. Yes.

Speaker:

You know, Earth is pretty much almost like a

Speaker:

techno feudal state. Especially what's interesting in the Expanse is

Speaker:

when they explore what life is like for the average human on

Speaker:

Earth. It's kind of like it's either really good or not.

Speaker:

Right. And Mars is also kind of an

Speaker:

interesting place too. There's a very different dynamic when you get that

Speaker:

many type A driven people in one place.

Speaker:

Sounds awesome at first, but then it's not really awesome.

Speaker:

Yeah, necessarily. Right.

Speaker:

But fun fact, the

Speaker:

PCs and the server names in my house are all derived from the show.

Speaker:

Oh, cool. Yeah, yeah. So I'm talking to you now on

Speaker:

Amun Ra. Cool. I don't know if you've gotten to that part of

Speaker:

the. That's in the first book. Yeah, yeah. The Amun-Ra stealth class

Speaker:

ships. And

Speaker:

the computer I just bought also has that same kind of, you know,

Speaker:

gamer box aesthetic. So that's Osiris.

Speaker:

And I also have Behemoth, which is that

Speaker:

machine back there. And. Or you've not gotten to the Behemoth

Speaker:

yet. Okay, I won't spoil it for you though. Yeah.

Speaker:

But. And Andy, my co host on the podcast, is also a big fan

Speaker:

of the show. He has, he has the, the

Speaker:

Donnager, which you've probably heard of. Yes, yes. He's

Speaker:

got the Weeping Somnambulist,

Speaker:

which. I had a machine with that name, but it's too hard to type out.

Speaker:

Try doing a ping on it? It's like, no. I don't know if

Speaker:

you got into that one yet, because that's a couple books in. But.

Speaker:

Yeah, yeah.

Speaker:

And when I left Microsoft, my former Microsoft manager let me keep

Speaker:

one of the laptops. So when it boots into Windows, it's the

Speaker:

Tachi. And when I boot it into Linux, it's the Rocinante,

Speaker:

which, you know, people who have read the book or seen the show

Speaker:

will get the joke. And it's

Speaker:

funny, our manager, when we met in person, my machine was the

Speaker:

Razorback. Right. Which I don't know if you got to that part

Speaker:

yet, but I'll try not to be

Speaker:

a spoiler. He's like, so what, are you like an Arkansas fan? I'm like, no,

Speaker:

no, no, it's from a book. Nice. So,

Speaker:

but. Another area,

Speaker:

I think this is for another time.

Speaker:

So when I come back. But I think it would be good to talk also

Speaker:

about how are the AI

Speaker:

tools right now? Like, are we seeing them replace

Speaker:

humans? I think the leap that we've

Speaker:

made in the last six months is pretty substantial.

Speaker:

Yeah. I think last year I would have said no.

Speaker:

I think this year I'm saying yes. Like, we're seeing,

Speaker:

we are now seeing the technology

Speaker:

there to actually start replacing people. And

Speaker:

it's not that the

Speaker:

main guy, like the tech lead, is going to be out, but I think it's

Speaker:

going to be more the, the junior developer

Speaker:

that's going to be in trouble, because now the tech lead can act like

Speaker:

a fleet of junior developers. And like, I'm just

Speaker:

programming a game right now and

Speaker:

I'm so surprised how much I've been able to get done

Speaker:

in the time frame I've been working on it. It's amazing

Speaker:

how quickly you can move. But wasn't there also a story, a guy deleted

Speaker:

his entire production database. Yeah. I

Speaker:

don't know the details. I had my AI

Speaker:

actually go and start cherry-picking things off of the main branch and start

Speaker:

deleting things. Oh, interesting. So I have a duplicate.

Speaker:

Every day I fork my repo. I have a

Speaker:

fork that I merge back into because I don't trust

Speaker:

it, and I don't tell the AI about the fork backup. Yeah, yeah.
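That daily-snapshot idea can be sketched in plain git. This is a minimal, hypothetical illustration of the workflow described above, not what the guest actually runs; the file name, branch naming scheme, and commit messages are all made up for the example.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# A throwaway repo standing in for the game project.
git init -q project
cd project
git config user.email "you@example.com"
git config user.name "You"

echo 'print("hello")' > game.py
git add game.py
git commit -qm "initial commit"

# Daily snapshot: a branch the AI-driven tooling is never told about.
snapshot="backup/$(date +%F)"
git branch "$snapshot"

# Simulate the AI agent "cleaning up" and deleting work on main.
git rm -q game.py
git commit -qm "agent cleanup"

# Recover the file from the snapshot branch and commit the restore.
git checkout "$snapshot" -- game.py
git commit -qm "restore game.py from $snapshot"
```

The key point is the same one made in the conversation: the backup only works as a safety net if the agent has no instructions or permissions that touch it.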

Speaker:

I think that says a lot though. Like you don't trust it. Like, you know,

Speaker:

and it's not guardrails. It's not. Well, it's not guardrails in the

Speaker:

sense that when people say guardrails and AI. Right. That's true. Yeah. It's a different.

Speaker:

You're kind of. You're doing CYA.

Speaker:

That's really what you're doing. It is, it is. Right, that's true. Whether you, whether

Speaker:

you put your code back up in another repo in another branch or a

Speaker:

USB drive, CYA is really what you're doing. And

Speaker:

I think that there's a lot of. We've been going for an hour, so.

Speaker:

And I also have to. I gotta drop too, so. Yeah, I gotta drop too.

Speaker:

But it's been great. It's awesome. I think we'll continue this more. But I

Speaker:

definitely want to know more about the game thing you're doing because I sent you

Speaker:

a bunch of stuff on Humble Bundle too. Yeah, yeah, which is for game

Speaker:

dev, so. But I have to go. My teenager needs a

Speaker:

ride somewhere, so. Hey, thank you for having me. Hey, no

Speaker:

problem, man. It's great. And be sure to check out

Speaker:

our Red Hat AI YouTube channel where I think Chris has a video or two.

Speaker:

Yeah. And I have a video or two as well. And

Speaker:

with that, we'll see you next time. And

Speaker:

have a good one. And that's a wrap on this episode of Data

Speaker:

Driven, where we've dissected America's AI action plan with the

Speaker:

precision of a data scientist on espresso and the paranoia of a

Speaker:

Cold War analyst. Big thanks to Christopher Nuland for

Speaker:

returning to the show and reminding us that AI sovereignty isn't just a

Speaker:

buzzword. It's a geopolitical chess match played with silicon and

Speaker:

source code. If you're not slightly more worried about data

Speaker:

pipelines, chip supply chains, or which values your LLM

Speaker:

secretly harbors, were you even listening? As always, you

Speaker:

can find us on data driven TV, franksworld.com

Speaker:

and wherever your algorithms recommend quality geek banter.

Speaker:

Until next time, stay curious, stay Data Driven.

Speaker:

And remember, if your AI starts talking about sovereignty,

Speaker:

maybe check the firewall.