Speaker:

On this episode of Data Driven, Frank and Andy interview Stephen Oren, the CTO of Intel Federal. Yes, Intel, the computer chip company. Because if you want to train your AI models in a reasonable amount of time, you need better hardware. Well, it turns out that Intel has developed new CPU instructions to accelerate AI workloads, and FPGAs allow for faster development in custom applications with specific needs. Speaking of Intel, you have to check out an upcoming Intel and Red Hat webinar; link in the show notes. Tell them Bailey sent you. Now on with the show.

Speaker:

Hello and welcome to Data Driven, the podcast where we explore the emergent fields of data science, data engineering, and of course, artificial intelligence. And with me, as always, I have Andy Leonard, my most favorite data engineer in the world. And today we have a special guest, Steve Oren, who is the federal CTO of Intel. Yes, that's right, Intel, the chip company, although they do a lot more stuff now. So welcome to the show, Steve.

Speaker:

Thank you, and glad to be here, Frank and Andy.

Speaker:

Cool. So one of the things that I think people have not realized: people think that AI is a software story, right? Primarily. But quickly, once you get into it, everyone goes gaga for things like ChatGPT. Or, well, no one's really gone gaga for Bard just yet; we're going to give that a little more time for the paint to dry on that.

But quickly, I think when people start becoming builders of AI tools, the number one restriction, aside from kind of what your data engineering pipeline looks like, is how quickly you can train these models. And obviously, I'm pretty sure Intel has a thing or two to say about hardware.

Speaker:

Absolutely. And as you've alluded to, AI and all the things that make up AI rely heavily on the infrastructure that you're training and inferencing on. But even before you get to the fun stuff: how do you do the data curation? How do you suck in the data, the ingestion? How do you get the large multi-node data sets that these large language models are trained against? There's a lot of hardware and infrastructure that has to make that happen. And then when you get to the important phase of how you train those in a timely fashion, hardware is the answer. And what we're seeing in a lot of these spaces, especially as we start looking at things like large language models and transformers, as well as other approaches that are coming out, is that not only does the hardware matter, but the type of hardware matters. If you think about it, it's not one size fits all. It's a heterogeneous architecture, to make sure you have the right hardware for your workload. One great example: large language models and graph analytics require not just heavy-duty hardware but the right memory architecture to keep those nodes in place while you're training. And what you find is that often doesn't fit well into just a classic GPU-only kind of mode, which is what the classic AIs leveraged, just the sheer number of cores that you would have in a GPU. And so what we're seeing is that optimizing the hardware for the kind of workload is the answer to getting timely training. Especially when you start doing more of that sort of iterative and feedback training, it's not a one-and-done, it's an ongoing process. So you need that to be quick enough and powerful enough and robust enough to handle those workloads. And then the other side where hardware really starts to matter is on the inferencing. You want to be able to ask the question and get a response fairly quickly, if not in near real time. If you're in a car and it's autonomous driving, you want it real time. You want to know that's a tree and not a shadow. If you're online doing some fun stuff with ChatGPT, you still don't want to wait 20 minutes for your response. And so inferencing matters, training matters, and so does the kind of hardware and infrastructure that supports it. And that's why Intel and our ecosystem are looking at providing a heterogeneous set of architectures: our classic CPUs, the Xeon on the server side and the Core on the client side, but also FPGA-based logic, AI accelerators like our Habana chips in the cloud, and our targeted edge AI chips like Movidius for video processing and the like. But then really, besides the hardware, it's that software infrastructure layer. How do you optimize your code? Because most AI developers are not hardware experts, nor do I want them necessarily to be. So a lot of it is about building out those abstraction layers that optimize your code, whether you're doing your Hugging Face or whatever, to take full advantage of the hardware underneath you, without you having to know what hardware is underneath you, so that you can provision your workload where it needs to go and not have to worry about the hardware infrastructure. And that's part of our overall strategy: working with the broader ecosystem, the open source community, the commercial providers, and the software frameworks to give them the tools to get the best performance out of their AI and their data science.

Speaker:

Right. And I think you hit the nail on the head. I think we're at an inflection point, not so much in engineering, but more in the perception. Because whenever you think, oh, we have a large workload we've got to do, let's throw some GPU at it, right? And it's a little more nuanced than that. I think people are finding out that you need more than just a bunch of GPUs. And I was on a call, and I want to get your thoughts on this, because he said something very similar to what you said. You ever have these moments when you're on a call and somebody smart says something, and you're like, I don't know about that? And it's kind of like what they did in World War Z, where there was the 10th Man Rule: no matter how ridiculous it sounds at first, you kind of want to investigate it. And that's why I was glad when your name popped up in the feed, because I'm like, yeah, I want to talk to you about this. Because he was basically saying that GPU usage is overrated, and that where the real advantage is going to be is in software acceleration and in CPU optimization too, which sounds a lot like what you said. And when I first heard that, my first thought was, I don't know about that. But this guy's plugged in. He's a big shot at Red Hat; he's plugged in, he knows a lot. And I was like, I didn't want to just dismiss that. Like, if my cousin said that, I'd be like, yeah, okay. But if this guy says it, whether or not he's right, maybe yet to be determined, but the fact that he believes it means that there's a trail there to follow.

So I've been kind of poking around at stuff. Tell me about that. It sounds like there's some weight behind that opinion.

Speaker:

So, Frankie, you hit it on the head there. It's not that GPUs aren't important; it's just that GPUs aren't the only and best solution for all aspects of AI. And there are certain vendors that, again, for a variety of reasons, want GPU to be the foundation for all of your AI activities.

Speaker:

Like if you're a GPU-based hardware company.

Speaker:

Exactly. Makes sense. But when you actually go look at the benchmarks across multiple, and here's the key thing, across multiple AI types, so different algorithmic models as well as the flow, the different stages, the inference versus training, the ingestion and curation versus the training versus the feedback training, what you'll find is that GPUs will rock for certain things, and they are important for certain things, both from that vendor as well as from a variety of other vendors. GPUs do play a key role. But when you look at the breadth of AI activities and the benchmarks associated with them, you actually find that a lot of really good work just happens on standard commercial off-the-shelf CPUs. Actually, most of the inferencing, I mean, we're talking 70% to 80% of inferencing, happens best on CPU. And in areas like large language model and graph analytic based approaches, the numbers really show very clearly that it's not a core-bound problem, it's a memory-bound problem. And so having efficient movement in and out of memory, which is what you get from a CPU or an accelerator with ample memory on board, is actually much more powerful for training those types of data sets, because with the GPU you're dealing with that latency across the bus. And that actually starts to matter when you're talking about graph analytics with billions or trillions of nodes. So I wouldn't say that GPUs are a dying breed. That is absolutely not the case. And there's going to be a huge market for GPUs, or GPU-like functionality. I want to be careful about that, because you don't have to have a discrete card. The reality is you can have GPU capabilities embedded in your processor, as we've already seen from Intel and from other architectures. The really interesting thing is making sure that whatever your workload is can be optimized, like your friend said, optimized through software to that hardware. So that if you are running a large language model, you're actually running it on the right hardware, and the hardware and your software know how to work together to give you the best performance. I'm seeing a lot of really cool things right now around graph-based approaches, and on the memory-intensive side of that, with the switching back and forth, those latencies can really come to bear when you're talking about cross-bus communication. So having a high amount of memory available directly to the CPU, to do that training and keep all that data in flight so you can train, is going to be one of the key differentiators in how you can take those large language models and apply them to more than just writing cool essays by Shakespeare.
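The core-bound versus memory-bound distinction above can be put in rough numbers with a quick arithmetic-intensity check, in the spirit of a roofline model. All the hardware figures below are illustrative assumptions, not numbers from the episode:

```python
# Roofline-style sketch: is a workload compute-bound or memory-bound?
# All hardware numbers below are illustrative assumptions.

def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

# Machine balance = peak FLOP/s divided by memory bandwidth (FLOPs per byte).
# Workloads below this intensity are limited by memory, not by cores.
PEAK_FLOPS = 2.0e12        # 2 TFLOP/s, a notional CPU socket
MEM_BW = 200e9             # 200 GB/s DRAM bandwidth
machine_balance = PEAK_FLOPS / MEM_BW   # 10 FLOPs/byte

# Dense matrix multiply (n x n): ~2n^3 FLOPs over ~3n^2 * 4 bytes moved.
n = 4096
matmul = arithmetic_intensity(2 * n**3, 3 * n**2 * 4)

# Graph traversal: a handful of FLOPs per edge, but each edge touches
# pointer-sized data scattered across memory (~16 bytes per edge).
graph = arithmetic_intensity(4, 16)

for name, ai in [("matmul", matmul), ("graph traversal", graph)]:
    kind = "compute-bound" if ai > machine_balance else "memory-bound"
    print(f"{name}: {ai:.2f} FLOPs/byte -> {kind}")
```

Dense matrix math lands far above the machine balance (compute-bound, where a sea of GPU cores shines), while pointer-chasing graph traversal lands far below it (memory-bound, where bandwidth and capacity close to the CPU dominate).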

I think what we're going to see is things like ChatGPT and that whole category of transformer-based approaches applied to just about everything, not just chat, but prediction approaches. And it's really about getting the training sets to become smart on those very vertical domains. That's going to be a resource-intensive process, and it's not going to be just throwing a bunch of GPU at it; it's going to be a lot of cloud scaling, and it's going to be a lot of memory-intensive activity. And like your friend highlighted, the software is going to really matter, so that it's taking full advantage of the hardware to get you that performance.

Speaker:

Well, this reminds me a lot of patterns I've seen over the decades of being in computing, as a hobbyist and then as a profession. You see a lot of things come to the fore as being very monolithic, and then people realize, wait, that's really a team effort. And I think about it as a baseball team. You don't want to put the pitcher, the person who's skilled at pitching, in center field. Can they perform there? Well, gosh, yeah, but you're wasting them. They have tuned their whole body, their desires, their motivations; they love being pitchers. So put that person on the pitcher's mound. And you see this happen in all sorts of places. Frank and I have seen it over the years, when the unicorns were the big deal, the data science unicorns who could do data engineering and everything that we've now kind of broken out into other fields.

And we're seeing it now in the hardware, in the separation of concerns and the distribution of concerns, getting every component to do what it's best at. And along with that, and I'll shut up after this, is this whole idea that it's moving so fast that the hardware that's going to perform the task first sometimes isn't even identified yet, because some new approach popped into the equation. Somebody tested something and went, this is great. And you just see that, and it's on a scale now where it used to be measured in years, then moved to months; it's now weeks and sometimes days. It's just amazing how fast this is going. And not that long ago, people were predicting an AI winter.

Speaker:

Right.

I think DALL-E, and the whole generative artwork stuff, was kind of the wait-a-minute, there's-something-here moment. Then DALL-E came out, and then OpenAI did the one-two punch: here's DALL-E, and a couple of months later, here's ChatGPT. Now you're just seeing it on fire. It's not just AI summer, it's an AI heat wave.

Speaker:

Yeah, exactly. It is. It's a full El Niño.

Speaker:

I like that. That's the quotable, for sure.

I think one of the things people realized is, and a lot of the thinking was, that AI winter was coming because we're hitting processor or hardware upper barriers. And I think we're finding out, much as you said, that it's not just about throwing this many GPUs at it. The entire story, the entire bus, matters. So the shortstop matters, using the baseball analogy. The outfielders matter. You can't really win a lot of baseball games if not everybody on the team is playing at their best.

Speaker:

Absolutely. And just to take that metaphor all the way: the turf matters, too. The infrastructure that you're running those specialists on matters; you're going to play better on different fields.

Speaker:

That's true. That's a good point.

I love that you took the metaphor to the next level. That's awesome. I think you mentioned, whether it was in the virtual green room or here, something called habanero. And I know you're not talking about just cooking, right? Spicy.

Speaker:

Habana. Yes, Habana.

Speaker:

I'm sorry. I had food on my mind, as is often the case. What is Habana? Because I've heard whispers of it. I know we're recording this in the middle of May, and there are going to be some announcements at the Red Hat Summit. Well, they'll probably already have happened by the time this goes live. But what is it?

Speaker:

So Habana is an architecture, an AI accelerator. It's a specialty chip specifically designed for accelerating AI. And it's actually two chips. The reason it's two chips is that you want, again, going back to what we were talking about, the right hardware for the AI workload.

So you want to be able to have the right hardware optimized for training flows, and a separate set of hardware for cloud-scale and hyperscale inferencing workloads. That's actually what Habana is: a two-chip strategy. Habana Gaudi is out and available; v2 is available, and v1 has been out for some time. If you go to the Amazon cloud, you can get it today. It's also available in data centers, and a lot of universities have them in their high performance computing environments. And it's geared to doing that sort of at-scale, large data set training that you would find whether it be in a cloud kind of environment, a ChatGPT level of analytic, or in the case of high performance computing.

Whether you're doing climate modeling or flow dynamics, those are the kinds of big training model sets that you want to be able to do at scale. And what's nice about it is that, like your cloud, it scales with your architecture. So it allows you to scale up your training based on the compute needs, with an AI accelerator specifically tuned to that. The other chip, the Goya chip, is an inferencing chip, so it's again tuned for that inference. But again, this is for high-end cloud scale, hyperscale, or things like high-speed training, where you want to be able to do a large amount of inference in as near to real time as possible, against really complex data flows that you're trying to analyze. And again, looking at the right hardware, we wanted to make sure to not just meet what we call the normal scale, the kind of things you would interact with when you're going to do fraud detection, but also to handle really large-scale inferencing, because you're dealing with ingestion of multiple data sets across multiple different domains and having to do that inferencing in a streaming kind of mode. And that's really where the Goya chip shines: it's an inferencing platform that can scale with the cloud. And that's really the Habana strategy: giving the hyperscalers and high performance computing the equivalent of AI custom chips. That's really where Habana sits.

And then when you look at the majority of what most people will leverage in a cloud or on-prem, what we've been doing there is adding new instructions to the CPU. So VNNI was the first really big one, in AVX-512, which really accelerates the math that you're doing behind inferencing and training. It gives you those instructions so that software, whether it be Intel's OpenVINO software or TensorFlow or other frameworks, can take advantage of that math, using hardware offload to accelerate the math in your training and inferencing workloads for most of your normal kind of AI, a lot of the AI we deal with, not the high performance computing style. And so you get the balance. And again, it goes back to what we talked about in the beginning: the right compute for the right AI.

We've also introduced data center graphics because, again, there are workloads that absolutely make sense for a GPU besides fun gaming. And that's really where you'll see GPUs shine: on those specialty workloads that take full advantage of them. A lot of the deep learning object recognition ones work well on GPUs; they actually work well on other kinds of platforms as well. And one of the things we're seeing at the edge is a shift towards more customized approaches, whether that be using an FPGA as a hardware platform that you can code your algorithms into, to do inline inferencing and feedback-loop training. You see this a lot on the image processing and video processing side, and also in signals processing. So whether it's 5G and being able to do signal quality testing, or signal acquisition and being able to do RF signal analysis, FPGAs really shine for that kind of workload, where you want to put in the custom algorithm that you're going to actually test against or use as part of your conditioning.

And then we get to the idea of what we call an ASIC. That's where you know your workload, you know you're going to be doing this kind of inference, and you can actually code that into a custom chip that will do just audio AI inferencing, or certain aspects of a video codec. This way you get the most performance in a low SWaP: size, weight, and power. And that's the idea here: you want to be able to handle everything from the pointy end of the spear, the edge sensor, and give it the ability to do AI, as opposed to waiting for it to send the data to the cloud and get a decision back. You want to be able to give it something, but it also has to operate at the size, weight, and power that you'd expect from an edge sensor. You obviously don't have a data center power system for your car, for your drone, or for your camera on the streetlight.

Speaker:

Right. That would make for a very heavy drone to fly.

Speaker:

That's okay.
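The SWaP point can be made concrete with some quick arithmetic. This sketch uses made-up but plausible figures (none of them are Intel specs or numbers from the episode) to show why the power draw of the inference hardware matters so much on a drone:

```python
# SWaP sketch: how the power draw of the inference hardware eats into
# drone flight time. All numbers are illustrative assumptions.

BATTERY_WH = 80.0          # small-drone battery capacity, watt-hours
MOTORS_W = 250.0           # hover power for motors and avionics, watts

def flight_minutes(compute_w):
    """Flight time when the compute payload draws `compute_w` watts."""
    total_w = MOTORS_W + compute_w
    return BATTERY_WH / total_w * 60.0

baseline = flight_minutes(0)       # no AI payload at all
asic = flight_minutes(5.0)         # low-SWaP edge inference chip
gpu_card = flight_minutes(150.0)   # discrete data-center-class card

print(f"no payload : {baseline:.1f} min")
print(f"edge ASIC  : {asic:.1f} min")
print(f"GPU card   : {gpu_card:.1f} min")
```

With these assumed numbers, the low-power chip costs only a fraction of a minute of flight time, while the data-center-class card cuts it by more than a third, before even counting its physical weight.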

Speaker:

I'm curious how you manage what, and I'm just going to make up words here, an innovation chain. I'm thinking about something like supply chain management. I've got experience in electronics engineering, and I know some of how much goes into this, mind you, my work was decades ago. But this whole idea of getting ahead of the curve, or at least being able to predict where the curve is going, and how steep, and when: that sounds like a huge challenge for figuring out what will be needed next.

Speaker:

So what you're talking about is how a company that's building out both the hardware and the infrastructure stays ahead of, like you said, the week-to-week turnaround in the AI world. Part of that is having a diverse team of specialists. Intel Labs, which is our team that looks five to ten years out, is over 1,000 people full time looking at process node technology, security, AI, and data science. They're across multiple domains, and within each domain we have specialists in different areas.

I'll give you a great example. Before ChatGPT blew up, two of my AI specialists, one on the government side and one on the performance side, started talking to me about this thing called a transformer. Oh, there's this really cool thing that we're seeing here; it's called a transformer. And I'm like, okay, that's interesting, tell me more. And they explained sort of how it worked. Then, fast forward six months later, ChatGPT shows up, and I'm like, I know what that is, because that has the word transformer in it. I've seen this. And again, it's about giving your people the ability to go out and look. I think one of the advantages of being at Intel, and it's really why I've been here so long, is everyone knows Intel Inside.

But there's something to that. Our chips are inside the edge. Clients are inside financial services, healthcare, manufacturing, oil and gas. They're in the government systems, they're in the cloud, we're in the network. Which means we see workloads, both current and coming, from all those different domains. So in some respects we're on the cutting edge, because we see what people do when they come to us and say, hey, I've got this software, I want to optimize it on your hardware. What does it do? Well, it does blah, blah, blah. Okay, let's help you. And then eventually that becomes OpenAI.

That's the kind of thing, because ultimately every startup and every big company wants to get the most out of their software. One of the things people don't realize is that Intel has over 19,000 software engineers, and the large majority of those divide up into three areas: research and pathfinding; ecosystem enabling; and software development for compilers, software services, and software tools. That ecosystem enabling team is a very robust team that's been around for a very long time, whose job is to make Microsoft Windows rock on Intel, make Oracle rock on Intel, make Red Hat rock on Intel, make open source rock on Intel. We have over 1,000 open source software developers whose full-time job is committing to open source; we're actually one of the largest committers to the open source community. And a lot of what they do is build the optimized version of those Linux kernel libraries, or get that AI model running on Intel, and give it away, open source it. We've created whole model zoos optimized for the variety of Intel architectures, because we know that if you can run it best on Intel, you will run it, and that consumes resources. We like that. But ultimately it gives us, they call them bell cows, if you will.

We're seeing those bell cows of what's coming next, because they come to us and they say, hey, help us. And very few see us as competition, because we're not going to go build the ChatGPT. We're not going to build a new operating system or a new predictive maintenance solution. We're going to give you the architecture for you to run it best. And even with our OEMs, whether you buy from Dell or HP or Lenovo, we don't care; you're buying Intel hardware inside. So let's help you take the best advantage of those platforms. And that's really been the approach from Intel: we want everyone's software to work. Even the GPU vendors still run on a CPU platform, and so we want to make sure that that code runs best. So that, again, you're driving the overall consumption. We raise the bar for everybody.

Speaker:

We raise the bar for everybody. Nice. Yeah. I think there's a lot to unpack there. And I think one of the things you brought out, which is something I don't think people have widely realized yet, is that edge is probably going to be the next frontier in computing. Obviously the last ten years have all been about cloud. But I think we're shifting as companies take a look at the bills and realize that lift and shift was not a financially great decision. Whether or not cloud is a good thing, I think it always goes back to those two words that every consultant and every IT person always says: it depends.

Whereas previously, for the last ten years, the two words were "oh, definitely." But I think now we're realizing it depends. And I think one of the drivers for this is things like autonomous systems, or drones, or self-driving cars. No matter how good 5G is, and I can tell you I know all the dead spots in the DC area, if you're driving along at 60 miles an hour, roughly 100 kilometers an hour for our friends overseas, and, like you said, is that a tree? Is that a shadow? Is that a person? Is that a grandma? You don't want to wait on the latency to come back. You want the inference, the decision, to be made on device. So you're really bumping up against the speed of light, and you're talking nanoseconds, not milliseconds.
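That latency point is easy to put numbers on. A small sketch, using assumed round-trip times rather than measured ones, shows how far a vehicle travels while waiting for a remote decision versus an on-device one:

```python
# Latency sketch: how far a car travels while waiting on a decision.
# Speed and latency numbers are illustrative assumptions.

SPEED_KMH = 100.0
speed_ms = SPEED_KMH * 1000 / 3600          # ~27.8 metres per second

def metres_travelled(latency_s):
    """Distance covered during one decision latency, in metres."""
    return speed_ms * latency_s

cloud_round_trip = metres_travelled(0.100)  # ~100 ms to the cloud and back
on_device = metres_travelled(0.005)         # ~5 ms local inference

print(f"cloud round trip: {cloud_round_trip:.2f} m of blind travel")
print(f"on-device       : {on_device:.2f} m of blind travel")
```

Under these assumptions, a cloud round trip means nearly three meters of travel before the tree-or-shadow answer comes back, which is exactly why the decision has to happen on the device.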

What do you see? Because you mentioned you want there to be sensors, but obviously these things have to be relatively low power. I guess in a car it doesn't matter as much, but certainly on a drone that matters.

What sorts of challenges does Intel see in that regard? You want the most performance, but you also want the most energy efficiency. Those seem like two opposing forces.

Speaker:

You would think that, but if you look at Moore's Law and what's really behind it, it's about reducing the size, and really that means the power, while increasing the performance and increasing the number of transistors. That's really been what's driving compute all along: how do we get to lower power per density? Now, where it becomes interesting: in the cloud, it's a cost measure; it's about getting more for your dollar. In a car, or in a drone, or even on a factory floor, it's about being able to operate closer to where the decision needs to be made, without having to power it at immense cost, or, in the case of a drone, carry the weight of the battery pack and so forth. So lower SWaP actually enables those edge use cases. And again, one of the things people realize is that edge can mean different things to different people. You talk to the cloud providers, and edge is just a couple of racks closer, out of the cloud and on-prem; you look at Azure Stack or Snowball or those kinds of approaches, and it's really about pushing pieces of the cloud closer to the edge, through the core, or the fog, as they called it back in the day. But you look at the edge and take a look at a Tesla: it's like a driving data center.

Speaker:

There's compute capabilities in there. A plane is a flying data

Speaker:

center. Your drones are getting to be more

Speaker:

computing. And when you move from a

Speaker:

discrete mode to a logical mode, and I've seen these already, where you have a

Speaker:

drone who actually has one processor but multiple containers, so actually running

Speaker:

multiple functions that could be thought of as different

Speaker:

applications on different nodes, but now they've all been collapsed with either virtualization

Speaker:

or container. So you can have navigation being one, you can be

Speaker:

doing object detection and mapping with another, and then be able to do sort

Speaker:

of other kinds of sensing like temperature

Speaker:

or barometer and things like that and doing analysis in

Speaker:

real time. One of the best examples that we demonstrated

Speaker:

at our last year's Fed summit was a set of drones out

Speaker:

mapping a region. They were going about their business, but they had a policy that

Speaker:

if somebody walked into a specific area of interest, let's say in front of an

Speaker:

embassy, or loitered there too long, that one of the drones would

Speaker:

be retasked and go over and investigate and do facial

Speaker:

recognition. All the things you want to do to make sure, hey, is this person

Speaker:

up to no good? And it didn't require a reprogramming

Speaker:

of a drone. It didn't require a special drone that was just the investigator. It

Speaker:

would basically retask itself with a new mission in real time

Speaker:

and go investigate. And when the person left that zone, it would go back to its

Speaker:

day job of mapping the environment. That's just sort of the tip of

Speaker:

that simple prototype to show that even a very

Speaker:

small autonomous system and these were like sort of my mini drones

Speaker:

here, is capable of the compute necessary to

Speaker:

do multimission kind of use cases. So the edge absolutely is

Speaker:

that new frontier. And it's again similar to the cloud. When you say cloud,

Speaker:

everyone thinks, oh, public cloud, really? Cloud is all those architectures

Speaker:

all the way down to the edge. It's the way we develop those cloud native

Speaker:

apps that can flow back and forth. So from a cloud provider, it's moving

Speaker:

more of their cloud infrastructure closer to the edge. And what the

Speaker:

edge, folks, whether it be the actual device or sensor manufacturers

Speaker:

are looking at, is bringing some of those cloud

Speaker:

capabilities to their device to operate

Speaker:

independently. And there's a reason for that is that, number one, latency, like you

Speaker:

mentioned, Frank, but also the cost of shipping all that

Speaker:

data. No one wants to ship raw 4K video feeds to the

Speaker:

cloud just to be able to tell me, is that a tree?

Speaker:

You want to be able to send the results that I saw a tree

Speaker:

here with the longitude and latitude, which is a small data

Speaker:

packet, and let the sensor do the AI, do the inference

Speaker:

at the edge. Right. And then you have the case

Speaker:

where you're talking about planes or vehicles, right?

Speaker:

Like the whole time it's tracking, did the wheel fall off? Did the wheel fall

Speaker:

off? Did the wheel fall off? Right, but at one point when you get to

Speaker:

your destination, the wheel either fell off or it didn't. Right.
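That reduction, keeping the checks local and uplinking only a tiny result, can be sketched in a few lines of Python. Every name here is illustrative, not from any real drone or telemetry stack:

```python
# Illustrative sketch, not from any real drone or telemetry stack:
# the edge device does the heavy lifting locally and uplinks only
# tiny results instead of raw sensor feeds.

def summarize_wheel_checks(checks):
    """Collapse a whole trip's worth of 'is the wheel still on?' samples
    into a single bit: did it ever come off?"""
    return any(not ok for ok in checks)

def detection_packet(label, lat, lon):
    """The small message actually worth uplinking: a few dozen bytes,
    not a raw 4K frame."""
    return {"label": label, "lat": round(lat, 6), "lon": round(lon, 6)}

# A million in-flight samples collapse to one boolean.
wheel_fell_off = summarize_wheel_checks(True for _ in range(1_000_000))
print(wheel_fell_off)  # False: the wheel stayed on the whole way

# A detection becomes a tiny packet instead of a video stream.
print(detection_packet("tree", 38.889484, -77.035278))
```

The point is the size asymmetry: a season of wheel telemetry reduces to one bit, and a 4K frame reduces to a label plus coordinates.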

Speaker:

So you collapse that entire thing

Speaker:

to one integer level or really not even an

Speaker:

integer. Like a bit. Right, a bit. And then if the wheel does

Speaker:

fall off, I'm sure there's plenty of other stuff you can pick up too,

Speaker:

but hopefully nobody gets hurt. But I mean,

Speaker:

ultimately you're right. The problem with data is so much

Speaker:

that there's value, but there's a certain

Speaker:

amount of we've gotten to the point where

Speaker:

just because we can, we've done it. Right. Yeah, sure. Bring up that

Speaker:

4K. If I'm a salesperson for one of those cloud

Speaker:

providers. Yeah, man, bring in all that 4K data you want,

Speaker:

we'll take it all. We'll be happy to charge you for it too. Right,

Speaker:

but I think as we get to the point where

Speaker:

there might just be too much data, I think organizations are going to start

Speaker:

thinking like, where can we scale back on the storage? Because

Speaker:

we don't really need it unless there's some kind of regulatory reason for

Speaker:

it. Now, one thing I want to double click on,

Speaker:

because this is a fascinating conversation, we'd love to have you back

Speaker:

on the show at some point. What's the

Speaker:

deal with FPGA because you mentioned

Speaker:

that and this was a huge deal. So a couple of things that are

Speaker:

interesting is that I first heard about Transformers at

Speaker:

Microsoft's internal data science conference,

Speaker:

MLADS, and they first talked about Transformers. I went into

Speaker:

the talk and after ten minutes, my head went

Speaker:

boom, right? I didn't quite follow it. Somebody later on in the

Speaker:

day in the reception area was kind enough to explain it, how it

Speaker:

works. And one of the other things that came out of that conference was talking

Speaker:

about the importance of FPGAs and what they're going to be like in the future.

Speaker:

Now, again, I'm a data scientist. I really don't focus on

Speaker:

hardware so much until when I need to buy new

Speaker:

hardware, like a new desktop or laptop.

Speaker:

What are FPGAs? And I remember hearing a lot about them and then

Speaker:

they kind of went dark for a while and then now they're kind of coming

Speaker:

back into vogue. Can you talk to us about, one, what they are and then

Speaker:

two, where you see them going? Sure. So an FPGA is a field

Speaker:

programmable gate array. They've been around for forever. I mean, computer

Speaker:

science engineers going back, electrical engineers going back to the

Speaker:

80s played with FPGAs. They were very early FPGAs, but

Speaker:

basically they're programmable hardware. That's really the way to think about it.

Speaker:

You think about a CPU or an ASIC or any chip, it's

Speaker:

laid down with its transistors, and the flow through those transistors is

Speaker:

fixed. A CPU can run multiple software

Speaker:

flows, but the instruction flow is the instruction

Speaker:

flow. What makes FPGAs interesting is that you

Speaker:

can create new RTL, new layouts of flows, what

Speaker:

they call netlist of those instructions going across those transistors

Speaker:

each time. You can go in and customize it after. So the

Speaker:

manufacturer builds you a clean slate, think about a bunch of

Speaker:

rows, and then you program them to your specific need

Speaker:

at a hardware style abstraction layer. So it gives you a much

Speaker:

faster capability because you're now really writing in hardware. It's a lot more

Speaker:

complex kind of coding. It's not like doing Python,

Speaker:

but what you get is a very optimized piece of

Speaker:

hardware for your specific use case. And what's nice about that

Speaker:

is one of the great examples is in signals conditioning. When

Speaker:

you're doing like 5G research or testing signal amplitudes and

Speaker:

things like that, as you put in your algorithm actually into hardware, you go out

Speaker:

and test it. It works, sort of. Here I need to tweak it. Well, instead

Speaker:

of going and spinning a new piece of hardware, you just upload new code and

Speaker:

you go right in. So it's a much faster time of development for doing

Speaker:

those custom things. What people have found when we start looking at sort of

Speaker:

AI use cases and machine learning and pattern matching

Speaker:

is that FPGAs really lend themselves well

Speaker:

to be able to create different kinds of architectural approaches to how

Speaker:

you process that data flow. If you think about a GPU

Speaker:

or CPU or even an ASIC, it's a fixed data flow. It's good for the

Speaker:

things it was designed for. What FPGA allows you to do is to customize

Speaker:

your flows based on what the data is or based on what your algorithm are.

Speaker:

And so a lot of the FPGA work we're seeing in AI is people

Speaker:

coding their AI algorithms or the machine learning algorithms right into

Speaker:

hardware and then deploying it. And so it allows you to be able to deploy

Speaker:

your thing quicker and you get pretty good performance. It's not as

Speaker:

good as, say, a custom ASIC for your algorithm. And it's not as

Speaker:

scalable really as like a software abstraction on running on a

Speaker:

cloud set of CPUs. But for a lot of these training and

Speaker:

inferencing use cases, one of the areas where it shines is in the whole

Speaker:

area of neuromorphic processing. So a whole part of the AI machine learning

Speaker:

space is modeling after brain activity or how our

Speaker:

brains process. It's a whole field. FPGAs are actually

Speaker:

well designed for those kind of algorithms that x86 and

Speaker:

other CPU-style architectures just aren't yet.

Speaker:

And that's why FPGAs really shine in those environments, because you can create

Speaker:

these linear sort of permutation flows that you find in neuromorphic

Speaker:

algorithms. You just code those into the path for the

Speaker:

FPGA. They're really good. You'll see, FPGAs are very often used

Speaker:

in cellular and RF communications that are really good at those sort of

Speaker:

channelizer and signal optimization and

Speaker:

be able to do those kind of algorithms that you do on RF and

Speaker:

comms, again, really good for those kind of workflows. And so why we

Speaker:

see the resurgence of FPGAs, although they've never gone away, you find them

Speaker:

everywhere. Open up your big screen flat screen TV, you'll find a couple of

Speaker:

FPGAs in there. Where they're shining now is that it

Speaker:

allows you to do some rapid prototyping on AI. And because we're seeing

Speaker:

now FPGAs come to the cloud. So you look at Azure, which has an FPGA

Speaker:

cloud. You can now deploy those algorithms at cloud scale,

Speaker:

or you can deploy an FPGA into your edge sensor and be able

Speaker:

to do that real time, sort of. Let's go try this inferencing model. Oh, we're

Speaker:

going to change the inferencing model. Let's go do that one. And where this becomes

Speaker:

really interesting in those low-SWaP environments is that a modern FPGA is

Speaker:

reprogrammable in milliseconds, which means you can go from one

Speaker:

program to another by just pushing a firmware, if you will,

Speaker:

update. And now you go from a 5G communications

Speaker:

system to LTE or to 6G

Speaker:

without actually going and swapping out the hardware. That's wild.
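Steve's "programmable hardware" framing can be illustrated, very loosely, in software: the basic FPGA cell is more or less a lookup table, and reprogramming it is just rewriting the table's contents. The class below is a toy analogy, not how any vendor toolchain actually works:

```python
# Very loose software analogy for one FPGA logic cell: a 2-input
# lookup table (LUT). The "hardware" (a four-entry table plus wiring)
# stays fixed; the function it computes is whatever bits you load,
# so "reprogramming" is just a table rewrite, no new silicon.

class Lut2:
    def __init__(self, truth_table):
        # truth_table maps (a, b) -> output bit for all four input pairs
        self.table = dict(truth_table)

    def __call__(self, a, b):
        return self.table[(a, b)]

    def reprogram(self, truth_table):
        # The "bitstream update": same cell, new behavior.
        self.table = dict(truth_table)

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

cell = Lut2(AND)
print(cell(1, 1))    # 1: behaves as an AND gate
cell.reprogram(XOR)  # same "hardware", new function
print(cell(1, 1))    # 0: now behaves as an XOR gate
```

A real FPGA is millions of such cells plus configurable routing between them, which is why the same part can be a 5G channelizer one millisecond and something else the next.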

Speaker:

That's wild. Yeah, it's exciting times. So

Speaker:

with that, the updatable part of it,

Speaker:

how do you secure that? Because I can easily see that being like particularly

Speaker:

you work in the in the federal space, right? Like security

Speaker:

is top of mind in that work. It should be top of mind everywhere,

Speaker:

but in the near term it's top of mind, at

Speaker:

least in the federal spaces. FPGA

Speaker:

sounds awesome, but it also seems

Speaker:

dangerous in a lot of ways. You can reprogram it in milliseconds.

Speaker:

There's got to be some kind of security story there. Oh absolutely. And

Speaker:

FPGAs have actually in many cases led as far as the kind of security

Speaker:

mechanisms built into the hardware for that very reason.

Speaker:

At its core, at the core level, it's the same kind of approach you do

Speaker:

for verifying your firmware on your system. It's signed

Speaker:

by hardware so that basically you're verifying

Speaker:

your load and if you're going to do an update, you're going to verify a

Speaker:

signature against a hardware rooted key so that you make sure that only

Speaker:

legitimate folks can do the update and that it can only be done

Speaker:

by someone who's got the permission. From a cryptographic

Speaker:

perspective, what we find in the current FPGA that are out in the market

Speaker:

is that they've built in a whole suite of security

Speaker:

capabilities. Things like PUFs, physically unclonable

Speaker:

functions, which is basically a hardware root key that is

Speaker:

really secure as that hardware root of trust, signing and

Speaker:

cryptography functions, anti-tamper functions to make sure someone can't go

Speaker:

pop open the lid or put in a jumper and try to change

Speaker:

the code. So those kind of mechanisms have been in place for a long time

Speaker:

because FPGAs have been used in such critical places. We find them in

Speaker:

radar stations, we find them in systems and so they've been building security

Speaker:

in for a very long time. And it's part of the workflow that when you

Speaker:

build your code you're going to take advantage of these implicit, let's call them IP

Speaker:

blocks that do security for your RTL, for your code that you're putting

Speaker:

in place. The other important thing is that the way that the code works

Speaker:

is once you lay it out, once you translate your software into that

Speaker:

layout, the layout is such that you can't just sort of go and reverse engineer it

Speaker:

back. And so it's really a very powerful

Speaker:

mechanism as opposed to, say, firmware, which is software.

Speaker:

If you think about the BIOS update, it's software that you're loading just deeper in

Speaker:

your platform and if anyone wants to go inspect, you'll find

Speaker:

there's a lot of software in the hardware that you don't realize is actually

Speaker:

software. The same kinds of security mechanisms apply there. You verify it against a

Speaker:

hardware root of trust, you make sure it's signed before you run it

Speaker:

and then you apply cryptography to make sure that it can't be changed or it's

Speaker:

integrity protected. You find those same capabilities built into the

Speaker:

hardware of an FPGA and the software development tools, the

Speaker:

Quartus tools and so forth, have the mechanisms to take advantage.

Speaker:

So again, programmers don't have to be security gurus. They basically say,

Speaker:

I'm going to push this, and it's automatically going to take advantage of those features.
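The verify-before-load flow described here can be sketched with Python's standard library. A real part checks an asymmetric signature against a hardware-rooted key (PUF- or fuse-derived); the HMAC below, and all the names in it, are stand-ins for illustration only:

```python
import hmac
import hashlib

# Sketch of the verify-before-load flow. Real FPGAs check an asymmetric
# signature against a hardware-rooted key (e.g. PUF- or fuse-derived);
# the HMAC here is a stdlib stand-in for illustration only.

DEVICE_ROOT_KEY = b"burned-into-the-part"  # hypothetical; never leaves the chip

def sign_image(key, image):
    return hmac.new(key, image, hashlib.sha256).digest()

def load_update(image, signature):
    """Program an update only if its signature checks out against the root key."""
    expected = sign_image(DEVICE_ROOT_KEY, image)
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("update rejected: bad signature")
    return "loaded"  # here the bitstream would actually be programmed

bitstream = b"new channelizer netlist"
good_sig = sign_image(DEVICE_ROOT_KEY, bitstream)
print(load_update(bitstream, good_sig))      # loaded

try:
    load_update(b"tampered " + bitstream, good_sig)
except PermissionError as err:
    print(err)                               # update rejected: bad signature
```

The `compare_digest` call matters: it compares in constant time, so an attacker can't learn the signature byte by byte from timing, which is the same class of attack Paul Kocher's timing work demonstrated.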

Speaker:

It's good because programmers historically are very bad security

Speaker:

people. I can say that as one. Yeah,

Speaker:

it's its own specialty. And yeah, you can't be

Speaker:

good at everything these days. There's too much. So I'm going

Speaker:

to echo what Frank said earlier. Steve, we got to have you back.

Speaker:

I really appreciate you being here. We could talk and geek out on

Speaker:

hardware stuff forever, but we want to

Speaker:

pivot and go to our questions and if that's

Speaker:

okay, we want to start with unless Frank, unless you had anything else you wanted

Speaker:

to do before. Let me

Speaker:

rephrase. No.

Speaker:

In the virtual green room, you talked about some things that are going on and

Speaker:

kind of operationally and

Speaker:

wow, we didn't even get there. I mean, I

Speaker:

think the important thing I took from this conversation is

Speaker:

that one, GPUs, they are important, but

Speaker:

they're not the whole story. And two,

Speaker:

at the end of the day, chat

Speaker:

GPT, any of these magical looking AI

Speaker:

models, magical seeming, right. They're all math,

Speaker:

right? Yeah. And beneath the math are electrons

Speaker:

bouncing around inside these microscopic chips. And

Speaker:

there's all sorts of things you could do to tweak and improve that, even if

Speaker:

it's like a billionth of a second, right? A billionth of a second times

Speaker:

a billion adds up.
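That back-of-the-envelope point is easy to make concrete: one nanosecond saved per operation, across a trillion operations a day, is real compute time. The workload numbers below are just for illustration:

```python
# "A billionth of a second times a billion adds up": one nanosecond
# saved per operation, across a hyperscaler-sized daily workload.

ns_saved_per_op = 1e-9   # one billionth of a second per operation
ops_per_day = 1e12       # "trillions of those a day"

seconds_saved = ns_saved_per_op * ops_per_day
print(seconds_saved)                   # 1000.0 compute-seconds per day
print(round(seconds_saved / 3600, 2))  # 0.28 hours, every single day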

Speaker:

And that adds up in terms of whether you're driving a car

Speaker:

or you're flying a plane or

Speaker:

you're a company like AWS or Microsoft,

Speaker:

where, hey, if I save one compute second per

Speaker:

transaction, I do trillions of those a day. And that's real

Speaker:

money. Exactly. And that's the thing that blew my mind. But yeah,

Speaker:

let's switch because we could geek out for hours. Because this is very

Speaker:

true. Yeah. Amazing.

Speaker:

It really is. So how did you find your

Speaker:

way into, not so much data, but IT? How did you find your way into

Speaker:

data? Did you find it or did

Speaker:

it find you, or hardware specifically? So, right, it's a really good

Speaker:

question and going back to the very beginning, actually, I started

Speaker:

out in the molecular biology

Speaker:

bioresearch side of the camp, going all the way back. I was going to be

Speaker:

a research biologist and probably still be there today,

Speaker:

except for a couple of key life events early in

Speaker:

the early nineties. I was a hacker as a kid.

Speaker:

I loved seeing how things fell apart and how to code and break code

Speaker:

and things like that. But in the late 80s, there really wasn't a

Speaker:

career other than a COBOL programmer, which

Speaker:

wasn't an exciting career at the time. So I went the bio route,

Speaker:

which was my love. And right after I graduated and was going to start

Speaker:

med school, I had a year off and

Speaker:

someone had some money, wanted to do a startupy thing and they knew I was

Speaker:

a hacker and say, hey, why don't you help me get this thing running? And

Speaker:

I'm thinking, well, med school is expensive. This would be a good way to help

Speaker:

pay for it. And so I started my first company in

Speaker:

95 and after three months just fell in love with everything that was

Speaker:

going on. It was the exciting time to be in the internet. Got to apply

Speaker:

some of my security hacker background in an interesting way

Speaker:

and had some really good mentors. People like Bruce Schneier,

Speaker:

the writer of Applied Cryptography. Bruce

Speaker:

Schneier was one of my mentors and took me under his wing.

Speaker:

And like I say, I sucked his brain dry as best as I could. But

Speaker:

really I just sort of got the opportunity to get on the ground floor

Speaker:

right before Netscape went public. So really early days on

Speaker:

a startup in the email encryption space and then one thing led to another and

Speaker:

I just felt this was what I was going to do. And for the next

Speaker:

sort of several years, I did multiple security startups throughout

Speaker:

the years, and then in 2005 got acquired

Speaker:

by intel. I like to joke, I'm still trying to figure out

Speaker:

how I ended up here for 18 years. But I think what intel

Speaker:

has provided me and provides a lot of our folks is the ability to sort

Speaker:

of innovate in an environment where a, you've got a big company

Speaker:

behind you helping you do that. But one of the best

Speaker:

reasons why I think intel has been fun for me, my most

Speaker:

successful startup, we had 500 of the Fortune 1000 companies using

Speaker:

our product. The first project I worked on in intel went to 40 million

Speaker:

PCs. So the impact is just

Speaker:

unbelievable. Now from the data

Speaker:

side again, at the end of the day, like you mentioned earlier, underneath the data,

Speaker:

underneath the machine learning, underneath the AI, and even before we were talking about AI

Speaker:

was machine learning and advanced pattern matching. There's electrons

Speaker:

moving around; it's running on hardware. And so a lot of what my

Speaker:

job has been before I came to the federal team was looking for ways to

Speaker:

innovate or take advantage of new use cases in software, to

Speaker:

take advantage of hardware in interesting ways. And so we call that

Speaker:

pathfinding. So you think about our labs or thinking about the next generation

Speaker:

hardware five to ten years out, I ran the team, the security

Speaker:

pathfinding team that was looking at the two to five year horizon. I

Speaker:

knew this was the hardware platform that was going to be there next year. What

Speaker:

would be some interesting things I could do with it to either advance security or

Speaker:

increase security, that was my area domain. And so things like

Speaker:

antimalware technologies, cloud security, before they knew how to spell

Speaker:

cloud. We called it virtualization security first and things like that.

Speaker:

Web security, that was the fluffy stuff. That was Steve's world while

Speaker:

the hardware engineers are figuring out low level cryptography and hardware

Speaker:

roots of trust. And we sort of worked in tandem to innovate.

Speaker:

And so as things like data science started to take off, it was like,

Speaker:

this is a key area, both from a security and a use perspective. How do I secure

Speaker:

that data? How do I secure the algorithms? How do I use that? I mean,

Speaker:

one of the really cool things is being able to use machine learning and AI

Speaker:

and apply it to the cyber problem.

Speaker:

And when you start doing things like that, you immediately run into, well, we've

Speaker:

got too much data flowing in. I mean, the classic example is streaming

Speaker:

analytics on the network at network speed. Well, how do you do

Speaker:

deep packet inspection at gigabit or higher

Speaker:

speeds without losing data? That's a big problem. That's where hardware can

Speaker:

help save you, that you just can't do in software.

Speaker:

And then when I transitioned to the federal team and took over and

Speaker:

drove our federal technology practice, it really opened the door to

Speaker:

all the different use cases. And one of the things I like about the federal

Speaker:

government is that it's a macrocosm of all verticals. You want to

Speaker:

talk finance, you've got IRS and CMS, some of the largest

Speaker:

processing of financial data. You want to talk healthcare, the VA is the

Speaker:

largest provider of healthcare, the largest insurer in the world. You want to talk

Speaker:

logistics, DoD logistics is huge. So

Speaker:

you sort of look at it, every kind of use case you'll find in government.

Speaker:

So it's really a good way of looking at all the different verticals. And they

Speaker:

all have unique or interesting data problems. There's some

Speaker:

commonality. And one of the things I really like about the federal government is that

Speaker:

you get that commonality across the divisions. They all are having trouble doing data

Speaker:

ingestion. That is just fundamental. It doesn't matter if you're the federal government or

Speaker:

Citibank or a startup in Silicon Valley. Data ingestion is hard

Speaker:

and doing it at scale and being able to then do something

Speaker:

once you've got the data. And I like to use the analogy

Speaker:

of an iceberg. So AI, ChatGPT, all these are the tip of the

Speaker:

iceberg. That's the cool, sexy stuff you can do. The hard work,

Speaker:

the data curation, data wrangling is all the work that has to be done before

Speaker:

you ever get there. And that's data ingestion, it's labeling, it's curation,

Speaker:

it's data set management, it's all that stuff. And then layer in things like

Speaker:

removing bias or dealing with bias, and securing and integrity-protecting your

Speaker:

data. Like all those things have to happen before you ever start having

Speaker:

the fun math that happens towards the end of that curve.
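The iceberg Steve describes can be sketched as a pipeline in which the fun math at the tip depends on every stage beneath it. The function names and the toy labeling rule below are illustrative, not from any particular framework:

```python
# Illustrative sketch of the "iceberg": the modeling call at the tip
# only works because of the unglamorous stages underneath it.

def ingest(source):
    # Pull records in, dropping whatever failed to arrive at all.
    return [r for r in source if r is not None]

def label(records):
    # Attach labels (here, a trivial keyword rule stands in for humans).
    return [(r, "tree" if "tree" in r else "other") for r in records]

def curate(labeled):
    # Throw out records that are empty or unusable.
    return [pair for pair in labeled if pair[0].strip()]

def train(dataset):
    # The tip of the iceberg: the fun math happens last.
    return f"model trained on {len(dataset)} examples"

raw = ["a tree by the road", None, "  ", "an open field"]
print(train(curate(label(ingest(raw)))))  # model trained on 2 examples
```

Each stage shrinks or cleans the data before the next one sees it, which is why ingestion and curation dominate the effort long before any model is trained.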

Speaker:

That's where you find that coming out. Everyone is challenged with those things, and I

Speaker:

think that's where the excitement is today. No, you can definitely hear it in your

Speaker:

voice, sorry, Andy. Yeah, definitely. No, it's okay. We refer to

Speaker:

that as kind of a joke that's been going on

Speaker:

for seven years now. We say, first you get the data,

Speaker:

and that's 90% of the work. We know

Speaker:

that and your iceberg analogy fits that, Frank.

Speaker:

We need a shirt that has a picture of an iceberg and says, first you

Speaker:

get the data, under the iceberg. I like that. I'm definitely going to do that.

Speaker:

We launched a magazine, actually, yesterday as we record this, and

Speaker:

the cartoon segment is called First You Get the Data. And it

Speaker:

has kind of, like, cringy things that you'll hear about data, and one

Speaker:

of them was like, yeah, first we get the data. My

Speaker:

favorite was about how

Speaker:

to prep and clean the data. And they were like, oh, no, our data is

Speaker:

already in the normalized database. We don't need to clean it or prep it. It's

Speaker:

already ready. Like, oh, boy.

Speaker:

You need a picture of someone throwing data into a washing machine.

Speaker:

That's a good shirt. We could do that. Yeah,

Speaker:

no, that's cool. And I think you bring up something that I think,

Speaker:

folks, we don't know our exact age demographic. We have a rough

Speaker:

idea, but if there's anyone, let's say, under the age of 30,

Speaker:

riding in the car with their parents

Speaker:

or they're listening, it's hard to imagine the time because we're about the same age.

Speaker:

I think you're a little older.

Speaker:

This was not seen as a good career path, like, coding was not. The

Speaker:

whole learn to code movement is a modern

Speaker:

phenomenon. I started my college career to be a

Speaker:

chemical engineer because

Speaker:

I had to convince my parents that software engineering was a

Speaker:

viable career path. And my mom, God rest her

Speaker:

soul, was like, I don't want my baby to be one of those weird

Speaker:

people in the basement. Right?

Speaker:

And then my dad, God rest his soul, was like because when

Speaker:

they came to visit me, I had a Sunday print out of the New York

Speaker:

Times, which of course had the job section, which was

Speaker:

at one point like a book. Right. And look at all these

Speaker:

jobs for computer programming. This is a thing. And my

Speaker:

dad looked through it, and he saw all the starting salaries, and it was like

Speaker:

seven or eight pages of near six figure

Speaker:

salaries in the early 90s, which was a lot of money back then, right?

Speaker:

Yeah. Like, looking through, like, on Wall Street stuff. And

Speaker:

he's like, I'm sold. And it's like

Speaker:

and my mom was like, no.

Speaker:

That is literally, like, my experience as well. When I told my parents that I

Speaker:

was going to not go to the research biology route and do the MD

Speaker:

PhD, I was going to go into the security thing. They wanted to do an

Speaker:

intervention. They thought something was wrong.

Speaker:

About two years in, in '96, after I'd done the startup for about

Speaker:

a year and a half, there was an article in the New York Times, Paul

Speaker:

Kocher, had done the timing attacks against RSA, and it

Speaker:

was front page news. And when you read down the first blurb, it says, 22

Speaker:

year old bio student from Stanford cracks RSA encryption. So

Speaker:

I cut that out and faxed it to my parents because they didn't have email

Speaker:

yet and said, look, another bio student doing security. It can

Speaker:

happen. Right? That's funny. One of

Speaker:

the best web developers I ever worked with, his degree was in biology

Speaker:

as well. And I think there's something to be said about understanding natural

Speaker:

systems, and I think there's some pattern matching gifts

Speaker:

that go along with that. I know my friend was that way as well. And

Speaker:

Frank, when your mom said she didn't want you to be one of those

Speaker:

weirdos in the basement that flew through my head, but I

Speaker:

maintained discipline. It was too late.

Speaker:

And I could say the same for me as well. Too late.

Speaker:

In her defense, my mom stayed with us in a house, and my

Speaker:

wife also works in technology too.

Speaker:

She had an entire suite in the basement of our

Speaker:

house, which was not your typical basement:

Speaker:

windows, walk-out yard, everything.

Speaker:

It worked out well. Sometimes

Speaker:

your parents... My mother encouraged it without realizing. She allowed me to buy

Speaker:

the Hayes modem and connect it to our phone. And I did get

Speaker:

disciplined when I had that $1,000 phone bill from dialing into BBS's overnight.

Speaker:

But they should have seen it coming. Yeah,

Speaker:

my mom freaked out when I wanted a modem. She's like, no, absolutely

Speaker:

not. And my dad was like, yeah, you probably should stay out of trouble.

Speaker:

It's easy to stay out of trouble. Then. I think I was lucky

Speaker:

that my parents didn't know what a modem was, so I didn't know what

Speaker:

they were getting me. Right. This

Speaker:

is awesome. But I want to jump to question two. Sure. And ask, what's your

Speaker:

favorite part of your current gig? Favorite part of my current

Speaker:

gig? I think honestly, I thrive on being challenged,

Speaker:

on trying to solve big hairy problems. I think that's what has always

Speaker:

excited me: present me with something that isn't being done well today and

Speaker:

trying to figure out how to do it. And I think one of the things

Speaker:

that I love about my job is meeting with government customers who

Speaker:

have big hairy problems and looking at a variety

Speaker:

of technologies. And I think what makes my role somewhat unique at intel, so we

Speaker:

have like a CTO for memory and a CTO for various

Speaker:

architectures, is that my role is pan-Intel, so I can look

Speaker:

across FPGAs, server parts,

Speaker:

networking, and sort of see that collective of where the bits can

Speaker:

come together to solve big hairy problems. And that's really, I find

Speaker:

keeps me very excited, is that one day I could be talking about an

Speaker:

IoT problem with an edge sensor, and then

Speaker:

talking about petabytes of data being processed in the cloud tomorrow.

Speaker:

It's looking across the technology domains and again, coming

Speaker:

from a background of cybersecurity, which is again looking at various different domains from a security

Speaker:

perspective, but then adding to that AI, high performance computing,

Speaker:

it's a technology playground, right? And the federal

Speaker:

government, when I first joined Microsoft,

Speaker:

I was in the public sector part, doing basically

Speaker:

technology developer evangelism for the federal government. And a lot

Speaker:

of my commercial sector colleagues were like, wow, it must be really boring

Speaker:

there. And I'd be like, you know,

Speaker:

we see things that you don't see

Speaker:

and what it is, is like there's interesting work going on, but the folks doing

Speaker:

interesting work for many reasons do not want

Speaker:

a lot of attention. Indeed. So you see

Speaker:

some things that like, wow, see, I hadn't really

Speaker:

thought of that type moments. Well, decades

Speaker:

ago I spent just a little bit of time in a really odd shaped

Speaker:

building up that way. Just a touch of

Speaker:

time. So

Speaker:

I can go yes and amen to everything

Speaker:

you both have shared. So now we have three

Speaker:

complete-the-sentence questions. When I'm not working, I enjoy blank.

Speaker:

Spending time with my kids. I have two small children and they keep me young

Speaker:

and full of fun and keep

Speaker:

me trying to stay in shape to keep up with them.

Speaker:

Very cool. Both Frank and I have

Speaker:

children as well. Frank has the younger kids. I'm

Speaker:

probably the old guy in this conversation now that I think about it.

Speaker:

But number two, complete this sentence: I think the

Speaker:

coolest thing in technology today is blank.

Speaker:

Ooh, that is a tough question,

Speaker:

I would have to say. So the two things that I think are really cool.

Speaker:

Number one, again, not ChatGPT itself, but

Speaker:

what the future will do with that capability is one

Speaker:

area. And then again, because I'm a security geek at heart, post quantum

Speaker:

crypto is going to be fun. Figuring out the next generation of algorithms

Speaker:

and how robust they'll be once quantum computing comes online.

Speaker:

I think that's an exciting area of math that is going to

Speaker:

spur a lot of mathematics. Academia is

Speaker:

excited because it's a renewed interest in that space

Speaker:

and the algorithms are really interesting. The lattice

Speaker:

structures are a fun area of math to look at. Nice.

Speaker:

Interesting. The third and

Speaker:

final, complete the sentence. I look forward

Speaker:

to the day when I can use technology to

Speaker:

blank. So I'm going to give you two answers. I look

Speaker:

forward to the day when I can draw something on a

Speaker:

whiteboard and it turns into code. That's one thing I'm looking forward

Speaker:

to. Oh, nice. I can totally see that. And that's not that

Speaker:

far off. It's not, I think a little bit of sort of the

Speaker:

image to text, image to code. I think

Speaker:

the building blocks are there. But you have to be able to read my horrible handwriting. That's going to

Speaker:

take an AI in its own right. But I would love a day. When I

Speaker:

can start drawing my design like I like to do. I'm a whiteboard kind of

Speaker:

guy, and then have it create a prototype. I think that's one thing

Speaker:

I'm looking forward to. And then I think

Speaker:

the other thing is I'm looking forward to the day when

Speaker:

augmented reality becomes reality, where it's not just

Speaker:

a cool toy, but where we actually see it integrated

Speaker:

into our daily lives. And I'm not talking to glasses and all that. I'm talking

Speaker:

about having the digital world and our physical world actually start to make

Speaker:

sense instead of it being a throwaway toy and I think we're seeing

Speaker:

pockets of it, but I think that the future is going to hold a lot

Speaker:

more of that immersive experience that we only see in movies today. I think

Speaker:

those are the two things from a technology perspective, I'm looking forward to.

Speaker:

Although I have to say, if I can get that, the code from the whiteboard

Speaker:

is going to make me a lot more efficient. No, that's true. And

Speaker:

it's funny because things that once seemed impossible

Speaker:

are now possible and even mundane. So I remember

Speaker:

when I was a kid, there was a story, there was like a story we

Speaker:

read about a kid who built a homework machine, right? And this was

Speaker:

like first or second grade and a bunch of us kids were like, yeah, how

Speaker:

do we do this? We got to make one of those. Now you look at

Speaker:

ChatGPT. Obviously, we abandoned the effort

Speaker:

because it just wasn't possible at the time. But you look at how kids

Speaker:

are using ChatGPT today, that machine exists,

Speaker:

not in the way or the shape or form we could have imagined, but

Speaker:

it's definitely here. So to have that whiteboard to code

Speaker:

thing, it's totally

Speaker:

within sight. Whether it'll be within reach, only time will

Speaker:

tell. Probably a few weeks. If there are VCs out there listening, this is an

Speaker:

idea to invest in, for sure. I would love to see

Speaker:

especially for you, Steve. I'd love to see whiteboard

Speaker:

to FPGA code. That'd be even

Speaker:

better. We're just combining ideas. There you go.

Speaker:

I know that would make some of my engineers happy. There you go. Really

Speaker:

cool stuff. So we ask all of our guests to

Speaker:

share something different about yourself. But we caution

Speaker:

everyone, to be fair: remember, we're trying to keep

Speaker:

our clean rating on iTunes, so please keep that in

Speaker:

mind. So something different about me.

Speaker:

Well, I guess one thing we've already talked about that I have a bio

Speaker:

background, but the other thing I like to do is I play

Speaker:

tournament poker. I am an avid

Speaker:

poker player when not in COVID Lockdowns and things like

Speaker:

that. I played in the World Series back in 2013.

Speaker:

Really? That's something I like to do as a

Speaker:

pastime. It's a different use of my skills, of sort of social

Speaker:

engineering, if you will. And I like the tournament play

Speaker:

because it's sort of a long game. Right? Well, I have a

Speaker:

stack of money and I'd love to learn more about

Speaker:

Is that the joke? All you need is... You're always

Speaker:

welcome at my table. I'm lying about the

Speaker:

money. My wife is

Speaker:

actually a pretty good poker player, and when she was pregnant with our second,

Speaker:

she's short and she would carry a stool with her because she would have

Speaker:

to sit up, and her feet didn't reach the floor. And I think I gave

Speaker:

her like $100 in seed money and said, go knock yourself out.

Speaker:

And she came back like she was spending money. I think she turned that into

Speaker:

something like two grand before she had to quit and go have

Speaker:

Emma. I

Speaker:

would love to see you, because I don't think she's

Speaker:

at your level by any stretch, but she did okay. We should have

Speaker:

a Data Driven poker tournament. We should. There we

Speaker:

go. That's an idea, Frank. The other time we had an

Speaker:

idea of somebody on the live stream said we should do like an ATV

Speaker:

race or something because we always go off track. That's kind of the joke.

Speaker:

Very true. But no, that's cool. Audible is a sponsor

Speaker:

of Data Driven. Can you recommend a good book? Ideally an

Speaker:

audiobook, if you do audiobooks; if not, any book. Sure. Absolutely. Actually, I just

Speaker:

finished one that I think would be perfect sort of summation of this. So

Speaker:

Chip War is an excellent book.

Speaker:

You'd think it's talking about today, but it gives you the history of how we

Speaker:

got here. And even one of the things I thought was really interesting is

Speaker:

some of the decisions that were made early on from the

Speaker:

policy, the government policies that we've seen and how it

Speaker:

affects where we are today. Fascinating reading. So, yes, absolutely.

Speaker:

Chip War. It's available on Audible, because I literally just finished

Speaker:

listening to it on Audible. So that would definitely be a book I would

Speaker:

recommend. Cool. I watched a show called Halt and Catch

Speaker:

Fire a few years ago when it was on, and it was similar. It was

Speaker:

in that vein of when things were developing and trying basically

Speaker:

the laptop development story. And of course it was

Speaker:

fiction, but I know enough about it to

Speaker:

know there were some true parallels in there. So this

Speaker:

would be very appealing to me. I'm going to get it. I hadn't heard of

Speaker:

it. Thank you for recommending and our listeners can go to

Speaker:

thedatadrivenbook.com. I didn't test it today, Frank.

Speaker:

Some days it's moody, but if you go there, it should

Speaker:

redirect you to Audible, and you get a free book on us.

Speaker:

And if you decide later to sign up, then it buys

Speaker:

Frank a cup of coffee. So when

Speaker:

you do that, we get a little bit out of it. It's a great way

Speaker:

to support the show and we really appreciate it.

Speaker:

Awesome. And where can people find out more about you and

Speaker:

what the federal team at Intel is doing? To find out more about

Speaker:

me, go to my LinkedIn page. That's S-O-R-R-I-N on

Speaker:

LinkedIn. And then to find out more of what intel is doing in public sector,

Speaker:

just go to Intel.com public sector and it will redirect you to our

Speaker:

Government Solutions page. It covers everything from AI

Speaker:

data science to Cybersecurity to Edge, with lots of white

Speaker:

papers, use cases, and podcasts with folks like myself and

Speaker:

others that are recording content on how intel is helping our

Speaker:

ecosystem. So definitely come check us out. Awesome.

Speaker:

And with that, I'll let Bailey finish the show. Now that was some

Speaker:

show. Is it me or are the shows getting better? It could be my

Speaker:

bias that leads me to say that, but I figured I would ask to get

Speaker:

more input. After all, what's an AI without good

Speaker:

input and a feedback loop? Speaking of feedback, have you

Speaker:

checked out Data Driven magazine yet? We are looking for writers