One of the things they have to do is write
down, here's the rules of the game,
2
00:00:04,170 --> 00:00:08,383
and here's what, doing a certain
good thing pays you, right?
3
00:00:08,383 --> 00:00:10,218
They call it a payoff function.
4
00:00:10,218 --> 00:00:13,221
And so if you, you know, take the
opponent's pawn, their little piece,
5
00:00:13,555 --> 00:00:14,222
here's the payoff.
6
00:00:14,222 --> 00:00:16,558
Or if you take their queen,
it's a bigger payoff.
7
00:00:16,558 --> 00:00:20,895
And so, if you can define that, and
in chess you actually can,
8
00:00:21,771 --> 00:00:24,566
you can make a system
that does very well.
9
00:00:24,566 --> 00:00:28,737
But think about a dating relationship
or a marriage.
10
00:00:29,154 --> 00:00:32,240
And we pretty quickly
realize that with our attempts
11
00:00:32,240 --> 00:00:35,243
to quantify and define value,
12
00:00:36,077 --> 00:00:38,872
we're in a different layer
of abstraction.
13
00:00:38,872 --> 00:00:41,666
You can't... these just don't go together.
14
00:00:47,297 --> 00:00:49,174
So today on Anabaptist Perspectives,
15
00:00:49,174 --> 00:00:53,511
I'm joined by Ben Harris,
and we're going to be diving into
16
00:00:53,595 --> 00:00:56,765
the ontological limits
of artificial intelligence.
17
00:00:58,808 --> 00:00:59,684
Ben, you want to start with a
18
00:00:59,684 --> 00:01:02,771
little introduction,
and we'll jump into the topic after that.
19
00:01:03,646 --> 00:01:04,230
Sure, Marlin.
20
00:01:04,230 --> 00:01:06,191
Good to be with you. So,
my name is Ben Harris.
21
00:01:06,191 --> 00:01:07,484
Hi, everyone.
22
00:01:07,484 --> 00:01:11,071
So I'm a professor up at Sattler
College in Boston, Massachusetts.
23
00:01:11,071 --> 00:01:13,364
I coordinate the business program up here,
24
00:01:13,364 --> 00:01:15,825
but my background
comes out of the engineering world.
25
00:01:15,825 --> 00:01:20,747
I spent more than a decade working in
machine learning, artificial intelligence,
26
00:01:20,747 --> 00:01:22,749
just many of the classical engineering
disciplines.
27
00:01:22,749 --> 00:01:25,752
So, I would say,
28
00:01:26,461 --> 00:01:29,339
I think it's hard to define
what an expert in the field is, but,
29
00:01:29,339 --> 00:01:33,218
I spent a long time thinking about it
and continue to do so because it affects
30
00:01:33,510 --> 00:01:37,514
not only in the technical world,
but also in academia.
31
00:01:37,514 --> 00:01:39,015
We contend with AI every day.
32
00:01:39,015 --> 00:01:41,518
So, Marlin, it's good to be with you.
33
00:01:41,518 --> 00:01:44,521
Yes. Thanks for coming on, Ben.
34
00:01:44,771 --> 00:01:47,607
Excited to have the kind of engineering,
computer background,
35
00:01:47,607 --> 00:01:48,608
that you bring to it.
36
00:01:49,734 --> 00:01:51,069
So to
37
00:01:51,069 --> 00:01:55,949
start
with some of the drama around AI,
38
00:01:55,949 --> 00:01:58,952
a little over a year ago, May of 2023,
39
00:01:59,953 --> 00:02:03,998
there was this famous statement on AI
risk, signed by,
40
00:02:03,998 --> 00:02:09,504
you know, a bunch of the big names,
and the short version was: mitigating
41
00:02:09,504 --> 00:02:13,007
the risk of extinction from
AI should be a global priority
42
00:02:14,008 --> 00:02:17,971
alongside other societal scale risks
such as pandemics and nuclear war.
43
00:02:18,763 --> 00:02:21,641
And so that got a bunch of press.
44
00:02:21,641 --> 00:02:22,684
I found it interesting.
45
00:02:22,684 --> 00:02:25,687
A few weeks later,
there was a CNN report on a,
46
00:02:26,521 --> 00:02:29,524
a survey from a number of CEOs.
47
00:02:30,066 --> 00:02:32,402
they got 119 responses.
48
00:02:32,402 --> 00:02:35,613
And out of those, 50 of them said,
49
00:02:37,490 --> 00:02:41,327
yeah, AI could potentially destroy humanity
in the next 5 to 10 years.
50
00:02:42,078 --> 00:02:44,581
And the other 69 were unconcerned.
51
00:02:45,915 --> 00:02:47,250
So I guess we
52
00:02:47,250 --> 00:02:50,253
can start with, where are you
at on that question?
53
00:02:50,253 --> 00:02:53,256
Your view of artificial intelligence
and that kind of.
54
00:02:54,048 --> 00:02:57,468
Yeah,
those kinds of dramatic fear statements.
55
00:02:58,720 --> 00:03:01,973
Oh, that's a classic question
for anybody who thinks about AI.
56
00:03:02,390 --> 00:03:05,768
I am not what
we call an AI doomer.
57
00:03:05,768 --> 00:03:07,770
That's probably where you'd put,
58
00:03:07,770 --> 00:03:09,898
you know,
the Geoffrey Hintons of the world,
59
00:03:09,898 --> 00:03:13,484
who would... you know, he's
had a much longer background here.
60
00:03:13,484 --> 00:03:14,736
This is his view.
61
00:03:14,736 --> 00:03:19,449
I do not see AI, within that short
period of time, 5 to 10 years,
62
00:03:20,283 --> 00:03:23,286
being an existential threat
to humanity.
63
00:03:24,078 --> 00:03:26,956
Both for a theological reason,
I think, like God has defined
64
00:03:26,956 --> 00:03:30,793
the end of humanity for us,
but also because
65
00:03:31,169 --> 00:03:36,174
when we look at the technology,
the forecast of,
66
00:03:36,174 --> 00:03:39,219
you know, AI improving
or growing to a certain level assumes
67
00:03:39,219 --> 00:03:42,222
no major hurdles
that we encounter on the way.
68
00:03:42,347 --> 00:03:44,432
And I just think we're starting
to see the hurdles.
69
00:03:44,432 --> 00:03:44,724
Right?
70
00:03:44,724 --> 00:03:47,936
We at the moment cannot produce
71
00:03:47,936 --> 00:03:51,231
data fast enough in order to support
these bigger and bigger AI models.
72
00:03:51,231 --> 00:03:51,522
We don't.
73
00:03:51,522 --> 00:03:52,857
We're running out of,
74
00:03:52,857 --> 00:03:57,195
raw materials for creating training
chips and systems that do this.
75
00:03:57,654 --> 00:03:59,364
So we're starting to see the hurdles.
76
00:03:59,364 --> 00:04:00,323
And I think
77
00:04:00,323 --> 00:04:03,326
that's going to skew the forecast
fairly dramatically.
78
00:04:03,701 --> 00:04:07,747
I am not particularly concerned about
existential issues,
79
00:04:07,747 --> 00:04:08,581
partly
80
00:04:08,581 --> 00:04:12,377
because what we're going to talk about
today is the ontology of AI, right?
81
00:04:12,377 --> 00:04:16,422
The movie
versions of AI being a threat to humans,
82
00:04:17,840 --> 00:04:20,927
in the
narrative tend to revolve around a moment
83
00:04:21,094 --> 00:04:25,098
when the AI realizes what it is
and what it could be, and it has this self-
84
00:04:25,098 --> 00:04:29,352
defensive response or reaction to humanity
trying to shut it down.
85
00:04:30,395 --> 00:04:33,398
I don't yet think that AI
86
00:04:33,564 --> 00:04:37,193
has an existential understanding
of what it is.
87
00:04:37,485 --> 00:04:40,071
You know, it
probably can give you a text response.
88
00:04:40,071 --> 00:04:43,825
Yes, I am an AI system,
89
00:04:44,325 --> 00:04:48,997
but that does not yet imbue the value
and self-defensive reaction,
90
00:04:48,997 --> 00:04:51,499
right, that
we naturally experience as humans.
91
00:04:51,499 --> 00:04:55,545
So I think there's still a gap
ontologically, as well as in what
92
00:04:55,545 --> 00:04:59,716
we would call artificial
general intelligence, or AGI, right?
93
00:04:59,716 --> 00:05:02,719
Systems
that can think and reason for themselves.
94
00:05:03,970 --> 00:05:05,638
I mean, it's still an ongoing open
95
00:05:05,638 --> 00:05:08,850
research area in what they call
argument mining, right?
96
00:05:08,850 --> 00:05:13,062
You can present a bit of text to an engine
and say, is this a good argument?
97
00:05:13,521 --> 00:05:16,482
And it has a really difficult time
determining
98
00:05:16,482 --> 00:05:19,152
yes or no, whereas
we as humans read that and say, oh, that's
99
00:05:19,152 --> 00:05:22,196
a terrible argument or a great argument,
and I find it very persuasive.
100
00:05:22,196 --> 00:05:27,410
And so that's one I think about a lot,
that particular gap.
101
00:05:27,410 --> 00:05:31,122
But there are a number
of them that, in my view,
102
00:05:32,248 --> 00:05:33,833
set a
103
00:05:33,833 --> 00:05:37,545
fairly big chasm today between where AI is
104
00:05:37,795 --> 00:05:40,798
and anything that we would need to worry
about from an existential standpoint.
105
00:05:42,008 --> 00:05:43,676
So you're seeing a chasm.
106
00:05:43,676 --> 00:05:48,222
And you're saying those time frames
are way over inflated or sorry,
107
00:05:49,390 --> 00:05:50,433
not over inflated.
108
00:05:50,433 --> 00:05:52,143
Opposite of over inflated.
109
00:05:52,143 --> 00:05:54,854
Way too ambitious in a sense.
110
00:05:54,854 --> 00:05:56,647
To push on it a little bit.
111
00:05:56,647 --> 00:05:58,983
I still hear you using words like
112
00:05:58,983 --> 00:06:04,781
“not yet,” implying that the time is coming.
113
00:06:04,781 --> 00:06:06,491
It's just not yet.
114
00:06:06,491 --> 00:06:09,494
and I guess even to push on that
a little bit more,
115
00:06:10,370 --> 00:06:13,373
you know,
we talk about things in computing, like
116
00:06:13,664 --> 00:06:17,585
Moore's Law, which I think has a fairly
specific technical definition.
117
00:06:17,585 --> 00:06:20,588
But in a general sense,
118
00:06:22,006 --> 00:06:24,884
you know, computing
power changes very quickly.
119
00:06:26,844 --> 00:06:29,806
I mean, you even noted in an email to me,
it seems like there's new
120
00:06:29,806 --> 00:06:32,809
AI applications
coming out every couple of weeks.
121
00:06:35,311 --> 00:06:35,770
so I guess
122
00:06:35,770 --> 00:06:38,773
how would you respond
to an argument like that that says, well,
123
00:06:39,273 --> 00:06:42,276
you know, computing power
has been doubling
124
00:06:42,693 --> 00:06:46,072
frequently for the last number of years,
and we're just going to see,
125
00:06:47,156 --> 00:06:50,118
you know,
we can't see what's around the corner.
126
00:06:50,118 --> 00:06:51,077
You know, it's a fair question.
127
00:06:51,077 --> 00:06:52,453
You know why? Why the “Not yet”?
128
00:06:52,453 --> 00:06:56,749
I think one of the reasons is,
if, you know, say
129
00:06:56,749 --> 00:06:58,918
you project out
some number of years and you,
130
00:06:58,918 --> 00:07:02,713
you allow Moore's law, which is beginning
to show asymptotic behavior.
131
00:07:03,005 --> 00:07:03,256
Right?
132
00:07:03,256 --> 00:07:07,885
We're not doubling anymore; you know,
our speeds are getting quicker,
133
00:07:07,885 --> 00:07:11,931
but we're running into computation issues
with these kind of things.
134
00:07:11,931 --> 00:07:14,016
It's largely why GPUs have become
135
00:07:14,016 --> 00:07:17,019
the standard architecture
for doing this kind of work.
136
00:07:17,145 --> 00:07:20,398
One, the “not yet” is an admission
that we don't know.
137
00:07:20,690 --> 00:07:20,898
Right.
138
00:07:20,898 --> 00:07:24,068
You know,
can you create a technical system
139
00:07:24,068 --> 00:07:27,363
that is a version of intelligence?
140
00:07:27,905 --> 00:07:30,032
That's a bigger question
than I think AI can answer.
141
00:07:30,032 --> 00:07:34,078
That's a, you know, what is what has God
defined intelligence as.
142
00:07:34,078 --> 00:07:35,913
And can you create a system that has that?
143
00:07:37,832 --> 00:07:38,541
you know,
144
00:07:38,541 --> 00:07:41,586
We sort of think of it
in a narrow band, in that,
145
00:07:42,003 --> 00:07:45,214
you know, intelligence
is the ability to collect and synthesize
146
00:07:45,214 --> 00:07:48,217
information, to answer a question
or to develop something.
147
00:07:48,342 --> 00:07:50,761
but there are other types, right?
148
00:07:50,761 --> 00:07:52,472
We have a sense of aesthetics.
149
00:07:52,472 --> 00:07:55,183
Right? Humans look at what is beautiful.
150
00:07:55,183 --> 00:08:00,271
We have a sense of awe,
that is uniquely human, right?
151
00:08:00,271 --> 00:08:04,650
And, you know,
if you ask a large language model,
152
00:08:04,984 --> 00:08:09,238
you know, if a given painting is
beautiful or if a piece of artwork
153
00:08:09,238 --> 00:08:13,701
is just abhorrent to you, like,
it likely does not have an opinion.
154
00:08:14,202 --> 00:08:17,705
And if it does, it has to create
it based on some quantitative
155
00:08:17,705 --> 00:08:18,873
aspect of the artwork.
156
00:08:18,873 --> 00:08:23,127
It has to look at, you know: okay,
the color contrast is within this range.
157
00:08:23,127 --> 00:08:26,172
And most people think that that means it's
not a very attractive painting.
158
00:08:26,422 --> 00:08:27,340
It has.
159
00:08:27,340 --> 00:08:30,593
That's the way it has to think
because of its computational nature.
160
00:08:31,052 --> 00:08:33,387
we don't think that way. Right?
161
00:08:33,387 --> 00:08:35,139
We look at it. And what is it?
162
00:08:35,139 --> 00:08:38,351
What does it do to our spirit,
to our soul, to our mind?
163
00:08:38,684 --> 00:08:40,144
And we have a response. And so,
164
00:08:41,187 --> 00:08:44,190
you know, I think that the not yet is a,
165
00:08:45,149 --> 00:08:47,193
you know, is a fair question.
166
00:08:47,193 --> 00:08:48,736
I think we're also running
167
00:08:48,736 --> 00:08:52,114
into some mathematical challenges,
in that the number of,
168
00:08:52,907 --> 00:08:56,410
I'll call them synapses or nodes,
that we'd need to simulate
169
00:08:56,744 --> 00:09:00,915
actual human thought and cognition
is still many orders of magnitude
170
00:09:00,915 --> 00:09:03,918
above what the most powerful systems
can handle right now.
171
00:09:04,126 --> 00:09:07,296
So even if Moore's Law holds,
and it's a question whether it will,
172
00:09:08,923 --> 00:09:12,426
we have a long time to go before
we exponentially reach the point
173
00:09:12,426 --> 00:09:15,304
we would need to be at. And who knows?
174
00:09:15,304 --> 00:09:17,932
On the way, we may encounter a whole
nother obstacle we didn't anticipate.
175
00:09:17,932 --> 00:09:23,145
So, the challenge is that
176
00:09:24,063 --> 00:09:26,232
when you make a forecast,
you have to acknowledge
177
00:09:26,232 --> 00:09:28,568
what could cause the forecast to fail.
178
00:09:28,568 --> 00:09:29,193
Right?
179
00:09:29,193 --> 00:09:31,362
Things don't always continue as they were.
180
00:09:31,362 --> 00:09:34,782
You look at, you know, population
explosion in the 1970s, right.
181
00:09:34,782 --> 00:09:35,950
There was this huge concern
182
00:09:35,950 --> 00:09:38,953
that it was going to lead to global famine
and billions of deaths.
183
00:09:38,953 --> 00:09:41,163
But that never happened. And so,
184
00:09:42,373 --> 00:09:43,207
we want to look back
185
00:09:43,207 --> 00:09:46,586
soberly at those examples,
and I think with a bit of wisdom,
186
00:09:47,545 --> 00:09:52,466
and just encourage ourselves that, like,
the Lord has this in hand, right?
187
00:09:52,508 --> 00:09:55,636
He's not going to let AI ruin
188
00:09:55,636 --> 00:09:57,847
his creation.
189
00:09:58,723 --> 00:10:01,726
Okay, so a theological answer there.
190
00:10:02,310 --> 00:10:05,313
Confidence in God.
191
00:10:06,439 --> 00:10:07,273
Yeah.
192
00:10:07,273 --> 00:10:10,943
And the title, Ontological Limits, kind of
highlights the question
193
00:10:10,943 --> 00:10:12,945
I kind of want to press here.
194
00:10:12,945 --> 00:10:15,948
if we talk about limits of AI,
195
00:10:16,490 --> 00:10:20,119
maybe the first limit
we think about is technological.
196
00:10:20,328 --> 00:10:21,245
Can we build it?
197
00:10:21,245 --> 00:10:22,163
What can we build?
198
00:10:22,163 --> 00:10:23,956
What kind of
199
00:10:23,956 --> 00:10:28,044
computers can we build,
what kinds of neural networks and so on?
200
00:10:30,254 --> 00:10:33,591
Or epistemological:
what do we know how to do?
201
00:10:33,591 --> 00:10:35,134
Knowledge.
202
00:10:35,134 --> 00:10:38,512
when we say ontological limits,
we're getting into,
203
00:10:39,347 --> 00:10:42,892
in some ways, the strongest of those terms,
because we're trying to say
204
00:10:43,934 --> 00:10:46,937
what is the limit,
205
00:10:47,146 --> 00:10:50,149
by the very nature of what this thing is,
206
00:10:50,983 --> 00:10:53,986
that produces the limit,
207
00:10:55,404 --> 00:10:57,698
and yeah, I think maybe you've hinted
at some of that,
208
00:10:57,698 --> 00:11:00,701
but.
209
00:11:01,911 --> 00:11:04,664
Yeah, kind of at its core,
210
00:11:04,664 --> 00:11:07,667
what do you see as that biggest core limit
211
00:11:08,334 --> 00:11:11,337
or way of talking about that core limit.
212
00:11:11,420 --> 00:11:15,341
I think probably the
most stark ontological limit
213
00:11:15,341 --> 00:11:19,136
of AI is just what we've created it out
of, right?
214
00:11:19,136 --> 00:11:22,139
This is a creation of,
215
00:11:23,015 --> 00:11:25,434
Well, it's a, it sounds funny, a
216
00:11:25,434 --> 00:11:26,602
creation of creation.
217
00:11:26,602 --> 00:11:26,811
Right.
218
00:11:26,811 --> 00:11:32,191
Our ingenuity has developed this,
and it's immensely powerful in some areas.
219
00:11:32,233 --> 00:11:32,358
Right.
220
00:11:32,358 --> 00:11:35,361
I don't want to minimize
its effectiveness and its aid in some
221
00:11:35,361 --> 00:11:38,364
things, but
222
00:11:39,407 --> 00:11:39,740
it does not
223
00:11:39,740 --> 00:11:43,703
involve the supernatural, right?
224
00:11:43,744 --> 00:11:44,912
It does not.
225
00:11:44,912 --> 00:11:48,749
You know, it is, by definition,
a natural development.
226
00:11:48,749 --> 00:11:52,420
It is limited by the laws
that God has put in place
227
00:11:52,420 --> 00:11:56,132
in terms of physics and computation
and information,
228
00:11:57,299 --> 00:11:58,217
and mathematics, right?
229
00:11:58,217 --> 00:12:01,971
We look at, you know, questions
like there's a famous question called
230
00:12:01,971 --> 00:12:06,434
the halting problem in computer science,
which
231
00:12:06,767 --> 00:12:10,271
we now know is an unsolvable question
for a computer system.
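A minimal sketch, in Python, of the standard argument for why the halting problem is unsolvable; the function names here are illustrative assumptions, and halts() is the hypothetical checker that Turing's argument shows cannot exist:

# Suppose, for contradiction, that a general checker halts(program, data) existed
# and always answered correctly whether program(data) eventually stops.
def halts(program, data):
    raise NotImplementedError("No such general-purpose checker can exist.")

def trouble(program):
    # Do the opposite of whatever the checker predicts about the program run on itself.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever instead
            pass
    return "halted"       # predicted to loop forever, so halt immediately

# Asking halts(trouble, trouble) contradicts itself either way,
# which is the classic proof that no such checker can be written.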
232
00:12:10,730 --> 00:12:13,733
And so, you know, it's not as if
233
00:12:13,899 --> 00:12:17,361
computing is a finished business,
right?
234
00:12:17,361 --> 00:12:19,029
that we, you know,
know how to do everything
235
00:12:19,029 --> 00:12:20,114
as long as we have enough power.
236
00:12:20,114 --> 00:12:23,325
Enough systems, enough
electricity.
237
00:12:23,909 --> 00:12:27,538
We don't,
and we can prove that we don't.
238
00:12:27,538 --> 00:12:29,623
Not only that, but we never will.
239
00:12:29,623 --> 00:12:29,832
Right.
240
00:12:29,832 --> 00:12:32,042
You know,
mathematicians have worked on that.
241
00:12:32,042 --> 00:12:36,130
There are some fundamentally
242
00:12:36,130 --> 00:12:40,134
unknowable things
in computation in the technical fields.
243
00:12:41,844 --> 00:12:44,305
And so I
think you're going to encounter
244
00:12:44,305 --> 00:12:48,768
some of those questions in the growth of
AI, based on its ontology, where
245
00:12:49,435 --> 00:12:53,063
it is bound by the laws
246
00:12:53,063 --> 00:12:56,358
that all physical,
247
00:12:56,817 --> 00:13:00,321
digital systems are bound by.
248
00:13:00,696 --> 00:13:00,863
Right?
249
00:13:00,863 --> 00:13:03,866
It can only go so fast
and get so hot before it breaks down.
250
00:13:04,200 --> 00:13:06,243
It's limited by the laws of physics.
251
00:13:06,243 --> 00:13:10,331
It is also limited by the ingenuity
that humans can put into it.
252
00:13:10,414 --> 00:13:10,664
Right?
253
00:13:10,664 --> 00:13:13,793
We create its steps, but
254
00:13:15,377 --> 00:13:16,712
we are limited and finite.
255
00:13:16,712 --> 00:13:19,715
So I have a difficult time
256
00:13:20,090 --> 00:13:22,927
imagining how a system is going to,
257
00:13:22,927 --> 00:13:26,597
you know, by its own volition, exceed
that based on what it's created on.
258
00:13:28,015 --> 00:13:31,018
Well, is part of the argument there,
259
00:13:31,352 --> 00:13:32,353
especially for the,
260
00:13:32,353 --> 00:13:35,981
I don't know, either
AI optimist or AI pessimist,
261
00:13:35,981 --> 00:13:42,446
whichever one it is that, you know,
has these very high expectations for AI.
262
00:13:42,446 --> 00:13:44,448
I suppose that it's more a question
263
00:13:44,448 --> 00:13:48,327
of your outlook, whether that makes you
an AI optimist or pessimist.
264
00:13:49,328 --> 00:13:50,371
But is part of the
265
00:13:50,371 --> 00:13:53,332
argument that, well,
these are neural networks.
266
00:13:53,332 --> 00:13:56,335
They're functioning the same way
as the brain.
267
00:13:56,710 --> 00:13:59,713
We don't have to give it a precise
algorithm because
268
00:14:01,257 --> 00:14:04,468
it has more capacities
for self-learning and self adjustment.
269
00:14:04,468 --> 00:14:07,471
And is the
270
00:14:08,013 --> 00:14:10,975
expectation there
that something about
271
00:14:11,767 --> 00:14:14,770
that architecture
is going to let it get past
272
00:14:16,021 --> 00:14:19,650
what normally applies
to other computer systems or...
273
00:14:21,485 --> 00:14:22,486
It's a good question.
274
00:14:22,486 --> 00:14:25,906
I remember there was
a French institute
275
00:14:25,906 --> 00:14:31,453
some years ago that had begun to try
and create neural networks with the scale
276
00:14:31,453 --> 00:14:34,874
that would attempt to approximate
some of the cognition of the human brain.
277
00:14:35,541 --> 00:14:38,544
And, you know, they had the resources
of the French government.
278
00:14:38,544 --> 00:14:42,965
They had an immense amount
of computational power behind them.
279
00:14:43,382 --> 00:14:46,844
And even with all that,
they were able to roughly get to,
280
00:14:46,886 --> 00:14:48,137
if I remember the number correctly.
281
00:14:48,137 --> 00:14:50,681
So don't hold me to the citation.
282
00:14:50,681 --> 00:14:53,684
Something like 2 to 3% of brain function.
283
00:14:54,476 --> 00:14:57,021
You know, this is immensely complex
284
00:14:57,021 --> 00:15:00,232
because network complexity
does not grow linearly.
285
00:15:00,232 --> 00:15:01,400
It grows exponentially. Right?
286
00:15:01,400 --> 00:15:06,280
To grow a system
that can get strong enough,
287
00:15:08,407 --> 00:15:09,783
to be a big enough neural network,
288
00:15:09,783 --> 00:15:12,786
even that is,
289
00:15:13,537 --> 00:15:16,332
is not going to give you human cognition,
290
00:15:16,332 --> 00:15:20,044
because we think about the use cases
that we use neural networks for.
291
00:15:20,628 --> 00:15:23,881
There's really two
major ones in machine learning.
292
00:15:23,881 --> 00:15:25,090
One is classification.
293
00:15:25,090 --> 00:15:28,636
Basically telling you,
you know, if you take as input
294
00:15:28,636 --> 00:15:31,931
something and your brain tells you
what the thing is, right?
295
00:15:31,931 --> 00:15:35,476
Our brains do that naturally;
you have image recognition software
296
00:15:35,476 --> 00:15:37,019
or other neural networks
that do the same thing.
297
00:15:37,019 --> 00:15:40,022
They take in inputs
and they identify something.
298
00:15:40,481 --> 00:15:41,649
The other one is regression.
299
00:15:41,649 --> 00:15:44,902
It's making a relationship
between two quantities.
300
00:15:44,902 --> 00:15:49,323
So, allowing you to do a sort of
what we call an in-sample prediction.
301
00:15:49,323 --> 00:15:49,573
Right.
302
00:15:49,573 --> 00:15:54,036
If you, you know,
know house size A
303
00:15:54,036 --> 00:15:56,121
and house size B
and you see something in the middle,
304
00:15:56,121 --> 00:15:58,457
what should the cost of the home be,
right?
305
00:15:58,457 --> 00:16:02,294
That's a regressive,
I'm sorry, it's an example of regression.
306
00:16:02,294 --> 00:16:04,088
It's not regressive, but the,
307
00:16:05,089 --> 00:16:07,716
and those are the two main neural network
applications.
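A rough sketch of those two workhorse tasks in Python, using scikit-learn; the tiny datasets and labels here are invented purely for illustration:

from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: take an input and identify what it is (made-up features and labels).
X = [[0.2, 0.9], [0.8, 0.1], [0.3, 0.7], [0.9, 0.2]]
y = ["cat", "dog", "cat", "dog"]
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[0.25, 0.8]]))    # most likely "cat"

# Regression: relate two quantities, then predict in between (the house-size example).
sizes = [[1000], [2000]]                    # square feet for house A and house B
prices = [150_000, 250_000]                 # their known prices
regressor = LinearRegression().fit(sizes, prices)
print(regressor.predict([[1500]]))          # in-sample prediction for a size in the middle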
308
00:16:07,716 --> 00:16:10,803
And there's variations on them
309
00:16:10,803 --> 00:16:13,847
now. The base layers of those are
what's been built up
310
00:16:13,847 --> 00:16:16,725
to create these transformers
that LLMs are built on.
311
00:16:16,725 --> 00:16:19,728
So, they exist and they've been extended.
312
00:16:19,937 --> 00:16:22,481
But
313
00:16:22,481 --> 00:16:25,484
I would say we are too far away from,
314
00:16:26,819 --> 00:16:28,612
actual
315
00:16:28,612 --> 00:16:32,950
human brain function to predict
that it will continue along
316
00:16:32,950 --> 00:16:37,746
that road, even on an
uninterrupted path,
317
00:16:37,746 --> 00:16:40,541
and finally get to
how the human brain works.
318
00:16:40,541 --> 00:16:44,795
There's going to be breaks
or discontinuities in the progress,
319
00:16:45,045 --> 00:16:48,048
and who knows,
some of those may scupper the whole thing.
320
00:16:48,090 --> 00:16:48,382
Right?
321
00:16:48,382 --> 00:16:52,094
You may only be able to get so far
with the applications of AI.
322
00:16:52,094 --> 00:16:53,929
You just can't go further.
323
00:16:53,929 --> 00:16:57,307
They estimate that the cost to train,
training is what's most expensive
324
00:16:57,307 --> 00:17:01,270
right now
for these systems, to go beyond GPT-4,
325
00:17:01,478 --> 00:17:04,398
so from 4.0
326
00:17:04,398 --> 00:17:09,528
to GPT-5, 6, 7,
is trillions of dollars per iteration
327
00:17:10,154 --> 00:17:13,741
to accumulate all the data
to do the training, the electricity cost
328
00:17:13,741 --> 00:17:15,242
to run the models.
329
00:17:15,242 --> 00:17:18,245
At some point we're going
to run out of money.
330
00:17:18,787 --> 00:17:21,665
We just, you know,
humanity will either have to decide,
331
00:17:21,665 --> 00:17:23,375
okay, we're invested in this or we're not.
332
00:17:23,375 --> 00:17:26,587
We just can't keep spending
333
00:17:26,587 --> 00:17:29,590
that amount of resources
to develop a system like this.
334
00:17:29,631 --> 00:17:32,342
So, you know, fortunately,
I don't have to make that call.
335
00:17:32,342 --> 00:17:33,552
I'm glad I'm not in charge of an
336
00:17:33,552 --> 00:17:36,555
AI company
where I'd have to make that choice, but,
337
00:17:37,598 --> 00:17:38,849
there's,
338
00:17:38,849 --> 00:17:41,060
you know, whether
it's the actual technology, whether it's
339
00:17:41,060 --> 00:17:44,188
the mathematics behind it
or the funding to generate these things,
340
00:17:44,521 --> 00:17:47,900
any combination of those
can cause this thing to fail.
341
00:17:48,484 --> 00:17:51,445
So we almost have to have
the perfect storm to go up
342
00:17:51,445 --> 00:17:54,531
the graph of progress towards human
343
00:17:54,531 --> 00:17:57,534
cognition, at least in my view.
344
00:17:57,576 --> 00:17:59,119
Yeah. Those are helpful.
345
00:17:59,119 --> 00:18:01,872
One the point you made there at the end,
those
346
00:18:01,872 --> 00:18:03,749
literal physical constraints.
347
00:18:03,749 --> 00:18:06,752
I mean,
we have all kinds of energy available, but
348
00:18:07,044 --> 00:18:10,089
as you try to exponentially scale
349
00:18:10,089 --> 00:18:13,092
up the amount of energy,
you're literally running into,
350
00:18:13,884 --> 00:18:17,221
I don't know what the scale is,
but it's registering on
351
00:18:17,888 --> 00:18:21,141
things like grid capacity
and electricity generation capacity
352
00:18:21,141 --> 00:18:22,392
and that kind of thing. At some point.
353
00:18:23,519 --> 00:18:25,938
Not a small thing.
354
00:18:25,938 --> 00:18:29,274
no. The other thing
that was really helpful, for me
355
00:18:29,274 --> 00:18:33,862
in what you said there was:
AI is really only doing, at the base,
356
00:18:33,862 --> 00:18:36,865
the two operations,
357
00:18:37,866 --> 00:18:40,911
classification, classifying objects.
358
00:18:41,120 --> 00:18:42,955
And it's gotten
359
00:18:42,955 --> 00:18:46,166
sophisticated at that by training,
360
00:18:47,751 --> 00:18:49,920
humans coding examples,
361
00:18:49,920 --> 00:18:53,090
and then it going off of those examples
362
00:18:54,341 --> 00:18:57,803
and regression problems, which I'm not
363
00:18:59,471 --> 00:19:02,474
familiar
with the mathematics of those, but
364
00:19:03,642 --> 00:19:05,686
again, those are fairly simple operations.
365
00:19:05,686 --> 00:19:08,689
I mean, it takes a lot of power
to carry them out.
366
00:19:10,607 --> 00:19:13,610
But fairly simple compared to
367
00:19:13,694 --> 00:19:16,697
the human mind and living a human life.
368
00:19:19,324 --> 00:19:20,284
so, yeah,
369
00:19:20,284 --> 00:19:23,287
I think just breaking it down
and saying it does these two things
370
00:19:24,163 --> 00:19:28,000
and builds on them and
puts them to good use, to me really helps
371
00:19:28,000 --> 00:19:32,087
to see the limits
or demystify things a little bit.
372
00:19:33,297 --> 00:19:34,548
Yeah.
373
00:19:34,548 --> 00:19:36,800
And one thing I would add in
374
00:19:36,800 --> 00:19:39,761
is that you see on,
375
00:19:40,012 --> 00:19:42,014
maybe on social media at points people
376
00:19:42,014 --> 00:19:46,101
some very, like, AI-negative people who say
AI is not any good at anything.
377
00:19:46,393 --> 00:19:49,313
And they'll bring up an example
like I saw one on LinkedIn the other day
378
00:19:49,313 --> 00:19:52,566
where there was, like,
a stone pillar in a field.
379
00:19:52,858 --> 00:19:56,236
And on one side of it
had like the rear half of a cow.
380
00:19:56,862 --> 00:19:59,031
And then the stone
pillar was maybe ten feet wide.
381
00:19:59,031 --> 00:20:02,117
And on the other end of the stone pillar
was like the head of a cow poking out.
382
00:20:02,659 --> 00:20:05,871
And now we would say like,
okay, there's two cows in the picture,
383
00:20:06,330 --> 00:20:09,750
but an AI system
might put a box around it and say,
384
00:20:09,791 --> 00:20:12,169
here's one cow and its length is 15ft,
385
00:20:13,420 --> 00:20:14,171
right?
386
00:20:14,171 --> 00:20:15,714
And so
they would look at an example like that.
387
00:20:15,714 --> 00:20:18,217
And we know, humans know the error.
388
00:20:18,217 --> 00:20:20,636
We see it and we're like, oh, okay.
389
00:20:20,636 --> 00:20:22,638
a computer system doesn't.
390
00:20:22,638 --> 00:20:26,266
And so I think those examples
get brought up to say,
391
00:20:26,516 --> 00:20:30,187
like, AI is of no use,
like it's never going to be a threat.
392
00:20:30,187 --> 00:20:34,066
But I think what it points to
is that there are
393
00:20:34,733 --> 00:20:38,111
limits
and easy mistakes that AI makes;
394
00:20:38,111 --> 00:20:41,323
especially in the early ChatGPT days
it made tons of mistakes.
395
00:20:42,032 --> 00:20:44,618
But humans
396
00:20:44,618 --> 00:20:48,038
will continue to try and sort of plug
the holes in the dam.
397
00:20:48,038 --> 00:20:48,247
Right.
398
00:20:48,247 --> 00:20:50,207
We'll say, okay,
here's an error
399
00:20:50,207 --> 00:20:53,377
that we're getting ridiculed
for this thing that the system can't do.
400
00:20:53,585 --> 00:20:55,045
We'll fix the thing.
401
00:20:55,045 --> 00:20:58,840
And I'm led to believe we're going
to keep finding more of those mistakes
402
00:20:58,840 --> 00:21:00,133
that humans have to fix.
403
00:21:00,133 --> 00:21:03,679
And it may get better
and better and better, but again,
404
00:21:03,720 --> 00:21:06,098
you're going to keep finding,
405
00:21:07,057 --> 00:21:10,060
reasons to make fun of it,
ultimately.
406
00:21:10,352 --> 00:21:11,770
Now, that doesn't mean it's not powerful.
407
00:21:11,770 --> 00:21:14,856
or not something to think about,
but, you know, those
408
00:21:15,107 --> 00:21:21,196
I think the satirical view of AI
is becoming more and more prevalent.
409
00:21:21,196 --> 00:21:23,240
And I don't know what that's going to do
to people's view of it.
410
00:21:23,240 --> 00:21:24,950
probably very little.
411
00:21:24,950 --> 00:21:29,079
But I just want to
try and think clearly about:
412
00:21:29,121 --> 00:21:33,125
what is the truth here about AI? Yes.
413
00:21:33,125 --> 00:21:34,835
That's a silly mistake that it made.
414
00:21:34,835 --> 00:21:37,713
But does that invalidate
the whole project?
415
00:21:37,713 --> 00:21:38,880
Well, of course not. Right.
416
00:21:38,880 --> 00:21:41,967
That's you know,
amongst the billions of things it can do.
417
00:21:42,009 --> 00:21:44,886
Here's one that you thought was
silly. All right. So move on.
418
00:21:46,763 --> 00:21:48,890
Yeah, exactly.
419
00:21:48,890 --> 00:21:51,893
I think just to pursue that a little bit,
though,
420
00:21:53,103 --> 00:21:56,648
And, yeah,
I like the case you've made, kind of,
421
00:21:57,899 --> 00:21:59,818
limits and it's limited.
422
00:21:59,818 --> 00:22:02,821
but would you agree that we can get
423
00:22:03,780 --> 00:22:07,075
a lot better
at using some of the tools,
424
00:22:08,410 --> 00:22:11,413
even without really fundamental changes
in capacity?
425
00:22:11,663 --> 00:22:14,374
like, for example, I was,
426
00:22:14,374 --> 00:22:17,252
listening to an accountant
427
00:22:17,252 --> 00:22:19,713
and his statement was, there's
428
00:22:19,713 --> 00:22:22,716
going to come a time sooner or later where
429
00:22:23,175 --> 00:22:26,261
your basic bookkeeping,
categorizing transactions, and so on.
430
00:22:27,554 --> 00:22:30,349
Somebody is going to come up with a tool
that does that well enough
431
00:22:30,349 --> 00:22:34,895
that it's barely worth your time
to have a human bookkeeper go through
432
00:22:36,688 --> 00:22:38,815
in detail, because it's going to get it
433
00:22:38,815 --> 00:22:41,151
close enough, especially for the purposes
of small business.
434
00:22:41,151 --> 00:22:42,527
It's going to get it close enough that
435
00:22:43,862 --> 00:22:46,365
it's not going to matter.
436
00:22:46,365 --> 00:22:48,575
And I don't know, that kind of strikes
me as plausible.
437
00:22:48,575 --> 00:22:50,327
It also doesn't strike me
as requiring anything
438
00:22:50,327 --> 00:22:54,081
radically new from AI,
maybe just some human ingenuity
439
00:22:54,081 --> 00:22:57,209
and how to apply it
right to the process.
440
00:22:58,752 --> 00:22:59,086
I don't know.
441
00:22:59,086 --> 00:23:02,089
Does that sound like a fair estimate?
442
00:23:02,672 --> 00:23:03,298
Oh, I think so.
443
00:23:03,298 --> 00:23:06,551
I think there's, like, accounting
as an application of AI.
444
00:23:06,551 --> 00:23:07,719
Sure, you can do the.
445
00:23:07,719 --> 00:23:10,180
The mechanics are not tremendously
complicated.
446
00:23:10,180 --> 00:23:13,308
You know, they take care
and some background knowledge, but the,
447
00:23:13,392 --> 00:23:16,520
nothing about the mathematics
is incredibly difficult.
448
00:23:17,020 --> 00:23:21,942
But the, I think, like with that example,
one of the constraints we're going to
449
00:23:21,942 --> 00:23:26,738
encounter is, at the end of it,
you make a legally binding declaration.
450
00:23:26,947 --> 00:23:29,366
Right? These are
the truth of the accounts.
451
00:23:29,366 --> 00:23:33,954
And so who is going
to put their legal weight behind that?
452
00:23:34,496 --> 00:23:36,873
Is it going to be,
you know, the individual
453
00:23:36,873 --> 00:23:39,876
who's used the system like I certify
this is correct,
454
00:23:40,001 --> 00:23:43,004
or are we going to try and pass
that off to the AI system?
455
00:23:43,588 --> 00:23:45,048
So like, well no, the AI did it.
456
00:23:45,048 --> 00:23:46,883
So if there's a mistake it isn't my fault.
457
00:23:46,883 --> 00:23:48,135
It's the AI’s fault.
458
00:23:48,135 --> 00:23:52,848
And if we do that, which AI companies
are going to shoulder the legal burden
459
00:23:52,848 --> 00:23:56,685
for that? You know,
I think it's going to sort of
460
00:23:56,685 --> 00:23:59,688
be the
fingers-pointed-at-one-another issue
461
00:23:59,855 --> 00:24:03,483
with why this use case
hasn't become widespread.
462
00:24:03,567 --> 00:24:06,736
I think it's
because no one is confident enough yet.
463
00:24:07,362 --> 00:24:12,242
And I'll say “yet” because I think
there may come a day that people will be.
464
00:24:12,242 --> 00:24:14,411
And then the dam will break loose.
465
00:24:14,411 --> 00:24:17,080
But I don't think anyone yet
is ready to sign up
466
00:24:17,080 --> 00:24:20,542
and put their legal business life
behind this.
467
00:24:20,625 --> 00:24:24,463
I think people are still waiting
for more and more evidence that it's okay.
468
00:24:25,547 --> 00:24:25,797
Yeah.
469
00:24:25,797 --> 00:24:27,841
And that could apply
470
00:24:27,841 --> 00:24:30,635
to a lot of pieces
where you have software doing a lot of it.
471
00:24:30,635 --> 00:24:34,598
I mean, I'm thinking of,
you know, friends that build roof trusses,
472
00:24:35,849 --> 00:24:38,685
and this isn't necessarily AI,
the software is doing basically
473
00:24:38,685 --> 00:24:42,689
the engineering and saying, yes,
this truss will meet specs or it won't.
474
00:24:43,899 --> 00:24:47,736
but if they're actually doing a job
that requires an engineer's stamp,
475
00:24:48,737 --> 00:24:51,364
well, it's still got to be an engineer
that does it, which is an extra step.
476
00:24:51,364 --> 00:24:54,367
That liability part.
477
00:24:54,618 --> 00:24:55,535
They're so good.
478
00:24:55,535 --> 00:24:57,579
Yeah.
479
00:24:57,579 --> 00:24:58,079
Okay.
480
00:24:58,079 --> 00:25:01,082
So I'm guessing given the arguments
you've made, that,
481
00:25:02,209 --> 00:25:05,212
you know, artificial general intelligence,
482
00:25:05,295 --> 00:25:08,298
actual purposeful
483
00:25:08,298 --> 00:25:12,177
behavior by AI systems and so on.
484
00:25:12,302 --> 00:25:15,514
I'm taking it
you're very skeptical of that.
485
00:25:17,432 --> 00:25:20,519
Or very skeptical
of that being in the offing.
486
00:25:22,229 --> 00:25:23,688
Yeah, I'd say that's true.
487
00:25:23,688 --> 00:25:26,983
I have a good degree of skepticism
about that. The
488
00:25:28,276 --> 00:25:30,070
I think that we humans
489
00:25:30,070 --> 00:25:33,073
have struggled to define
what AGI actually is.
490
00:25:33,198 --> 00:25:35,534
Right.
How do you test it. How do you verify it.
491
00:25:35,534 --> 00:25:36,326
How do you.
492
00:25:36,326 --> 00:25:40,664
And so I think that that particular
question of what is AGI is going to,
493
00:25:40,997 --> 00:25:43,959
remain in the academic circles
for a while.
494
00:25:43,959 --> 00:25:46,461
You know,
there's going to be arguments for
495
00:25:46,461 --> 00:25:50,632
and against of various kinds
that, may or may not prove fruitful.
496
00:25:50,840 --> 00:25:51,049
Right.
497
00:25:51,049 --> 00:25:54,052
I don't know that they're going
to come to any conclusion.
498
00:25:54,970 --> 00:25:57,055
and I think in the background,
AI companies
499
00:25:57,055 --> 00:26:00,308
are going to continue to build systems
that more and more closely approximate,
500
00:26:01,059 --> 00:26:04,771
you know, human cognition,
even though we might say
501
00:26:04,771 --> 00:26:07,482
you're still light years away,
but we're making progress.
502
00:26:07,482 --> 00:26:09,025
And that's probably true.
503
00:26:09,025 --> 00:26:11,820
And so,
504
00:26:13,196 --> 00:26:15,448
yeah, I'm
505
00:26:15,448 --> 00:26:18,410
not tremendously worried
about the development of AGI.
506
00:26:18,618 --> 00:26:21,580
you know, sometimes the cases of,
507
00:26:21,871 --> 00:26:25,000
chess or Go, these board games
that have had a long and storied history
508
00:26:25,166 --> 00:26:29,546
in computation.
Companies like DeepMind,
509
00:26:29,546 --> 00:26:33,091
which is now owned by Google
but is out of London,
510
00:26:33,675 --> 00:26:36,970
have done some really neat work,
like developing superhuman
511
00:26:37,137 --> 00:26:40,974
chess engines and Go engines that play
the game at a level that surpasses
512
00:26:41,600 --> 00:26:42,809
human understanding.
513
00:26:42,809 --> 00:26:45,812
which is pretty cool,
what they had to develop to do that.
514
00:26:46,104 --> 00:26:49,107
but those are still games with a,
515
00:26:49,482 --> 00:26:53,737
a defined feature space
and defined rules and defined options.
516
00:26:53,737 --> 00:26:53,987
Right.
517
00:26:53,987 --> 00:26:58,074
When you can do that, when you can box
518
00:26:58,074 --> 00:27:01,077
in the system, the universe,
you can make something really neat.
519
00:27:01,494 --> 00:27:04,998
But, boy,
humanity resists being boxed in like that.
520
00:27:05,081 --> 00:27:06,041
We just don't.
521
00:27:06,041 --> 00:27:07,042
That's not our nature.
522
00:27:07,042 --> 00:27:11,630
And so I think part of that
is the root of my skepticism:
523
00:27:12,130 --> 00:27:14,841
like if you look into a field
called reinforcement
524
00:27:14,841 --> 00:27:17,844
learning,
it's a subpart of machine learning.
525
00:27:18,053 --> 00:27:19,804
But to do it
526
00:27:19,804 --> 00:27:24,184
well, and this is how the chess engine
from DeepMind was built,
527
00:27:26,144 --> 00:27:26,770
One of the things
528
00:27:26,770 --> 00:27:30,440
they have to do is write down, here's
the rules of the game, and here's what,
529
00:27:31,024 --> 00:27:33,985
doing a certain
good thing pays you, right?
530
00:27:33,985 --> 00:27:35,820
They call it a payoff function.
531
00:27:35,820 --> 00:27:38,823
And so if you, you know, take the
opponent's pawn, their little piece,
532
00:27:39,157 --> 00:27:39,824
here's the payoff.
533
00:27:39,824 --> 00:27:42,160
Or if you take their queen,
it's a bigger payoff.
534
00:27:42,160 --> 00:27:46,498
And so, if you can define that, and
in chess you actually can,
535
00:27:47,374 --> 00:27:50,168
you can make a system
that does very well.
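A minimal sketch of that kind of payoff (reward) function in Python; the piece values are the conventional material values and the names are illustrative assumptions, not how DeepMind's engines actually score moves:

# Reward a move by what it captures; quiet moves pay nothing in this toy version.
PIECE_PAYOFF = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def payoff(captured_piece=None):
    return PIECE_PAYOFF.get(captured_piece, 0)

print(payoff("pawn"), payoff("queen"), payoff())   # -> 1 9 0
# A reinforcement-learning agent then learns to prefer moves whose
# accumulated payoff over the game is highest.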
536
00:27:50,168 --> 00:27:54,339
But think about a dating relationship
or a marriage.
537
00:27:54,756 --> 00:27:57,842
And we pretty quickly
realize that with our attempts
538
00:27:57,842 --> 00:28:00,845
to quantify and define value,
539
00:28:01,680 --> 00:28:04,474
we're in a different layer
of abstraction.
540
00:28:04,474 --> 00:28:07,477
You can't... these just don't go together.
541
00:28:07,644 --> 00:28:10,355
And so, you know, those are
542
00:28:10,355 --> 00:28:13,358
some of the, I guess, musings to me of why
543
00:28:13,775 --> 00:28:18,029
I think AGI is still some ways off,
if we even can define it.
544
00:28:20,615 --> 00:28:22,701
Yeah.
545
00:28:22,701 --> 00:28:25,995
So along the lines of, you know, AGI.
546
00:28:25,995 --> 00:28:29,290
And again,
these questions about intelligence,
547
00:28:29,332 --> 00:28:32,335
the Turing test.
548
00:28:32,627 --> 00:28:36,423
I used to use a version of this
with students, long before,
549
00:28:37,132 --> 00:28:40,760
you know, generative
AI was on the horizon.
550
00:28:41,886 --> 00:28:45,432
It came from my days
in college, Philosophy 101.
551
00:28:45,432 --> 00:28:49,436
And, you know, basically we were asked to,
I don't have a technical definition
552
00:28:49,436 --> 00:28:52,522
in front of me,
but we were asked to consider the question,
553
00:28:54,107 --> 00:28:56,901
you know,
if a computer can give the same answers
554
00:28:56,901 --> 00:29:00,864
so that if you're communicating with it
through text or whatever, you can't tell
555
00:29:00,864 --> 00:29:03,867
if there's a computer
or a person on the other end.
556
00:29:04,993 --> 00:29:07,245
well, then should we ascribe it
557
00:29:07,245 --> 00:29:11,207
the same mental life and intelligence
that we ascribe to a person?
558
00:29:12,625 --> 00:29:13,877
yeah. How do you think about that?
559
00:29:13,877 --> 00:29:17,672
And maybe you have a more precise,
computer science definition of the test.
560
00:29:18,965 --> 00:29:20,049
No, the.
561
00:29:20,049 --> 00:29:22,510
The Turing
test was all the way back in the 1950s.
562
00:29:22,510 --> 00:29:26,264
Alan Turing, when he was working
in the early theory of computation.
563
00:29:26,681 --> 00:29:29,142
No, you had a good working definition.
564
00:29:29,142 --> 00:29:32,020
You know, if there's
two rooms behind you
565
00:29:32,020 --> 00:29:34,981
and you're getting text
inputs from questions you ask,
566
00:29:35,106 --> 00:29:38,109
how do you tell if one is a human
and one's a computer, and if it's,
567
00:29:38,568 --> 00:29:42,155
you know, a system
that is statistically indistinguishable
568
00:29:42,155 --> 00:29:45,158
from a human, we would say, okay, passes
the Turing test.
569
00:29:45,241 --> 00:29:48,620
I think we are actually already there
with that, with the Turing test,
570
00:29:48,620 --> 00:29:52,123
I think we have systems that you can write
571
00:29:52,123 --> 00:29:56,711
fairly complex questions to,
and this is always an interesting, like,
572
00:29:56,711 --> 00:30:01,299
thought experiment of, like, what question
would you ask that an artificial
573
00:30:01,299 --> 00:30:04,427
intelligence system that you know is
an AI system would fail to answer well.
574
00:30:04,427 --> 00:30:05,637
Right.
575
00:30:05,637 --> 00:30:09,724
That's kind of an interesting
sort of subfield here, but
576
00:30:10,099 --> 00:30:12,685
I think we already have systems
that pass the Turing test.
577
00:30:12,685 --> 00:30:16,105
But I think
578
00:30:17,482 --> 00:30:20,276
the current application
of the Turing test is a moving,
579
00:30:20,276 --> 00:30:23,947
a bit of a moving standard in that you,
580
00:30:24,531 --> 00:30:26,699
you have a system
that seems to pass the Turing test,
581
00:30:26,699 --> 00:30:29,702
and then humans get used to
how it communicates.
582
00:30:29,828 --> 00:30:30,078
Right?
583
00:30:30,078 --> 00:30:33,081
You begin to pick up the flavor
or the nuance of, like,
584
00:30:33,248 --> 00:30:35,542
this sounds like it was
AI generated, right?
585
00:30:35,542 --> 00:30:36,459
I think we all,
586
00:30:36,459 --> 00:30:39,462
if we've read AI generated text,
we sort of know what that feels like.
587
00:30:39,587 --> 00:30:42,298
It's very linear, very clear.
588
00:30:42,298 --> 00:30:44,509
There's no halting or pausing.
589
00:30:44,509 --> 00:30:48,680
It's mechanical, in a sense.
590
00:30:48,680 --> 00:30:50,723
And so,
591
00:30:50,723 --> 00:30:52,684
now what an AI company might say is,
592
00:30:52,684 --> 00:30:55,687
okay,
I'll adjust my generation algorithm
593
00:30:55,770 --> 00:30:58,982
to make it a little more clunky,
a little more human like.
594
00:30:59,399 --> 00:30:59,649
Right.
595
00:30:59,649 --> 00:31:00,817
And then it
596
00:31:00,817 --> 00:31:03,987
passes the new standard of the Turing test
that people can't distinguish.
597
00:31:04,404 --> 00:31:09,409
But whenever you have a test that
relies on human perception of something,
598
00:31:09,909 --> 00:31:13,997
like, we grow, we learn, right,
and the benchmark
599
00:31:13,997 --> 00:31:15,540
for that test
is going to change over time.
600
00:31:15,540 --> 00:31:18,918
So, you know, I think we have systems,
601
00:31:18,918 --> 00:31:21,921
the early GPTs
probably passed the Turing test.
602
00:31:22,380 --> 00:31:23,965
Now the standard's even higher.
603
00:31:23,965 --> 00:31:24,173
Right.
604
00:31:24,173 --> 00:31:26,926
For a system
to be indistinguishable from a human.
605
00:31:26,926 --> 00:31:29,929
and it's not
because the systems have changed
606
00:31:30,138 --> 00:31:32,098
so much as that
we have changed and learned.
607
00:31:32,098 --> 00:31:34,058
And so, we've learned how to...
608
00:31:34,058 --> 00:31:35,393
How to deal with it.
609
00:31:35,393 --> 00:31:37,687
Yeah, that's a good perspective.
610
00:31:37,687 --> 00:31:38,021
And just.
611
00:31:38,021 --> 00:31:39,981
Yeah, as we're talking about this
612
00:31:39,981 --> 00:31:42,984
and it occurs to me
that it'd be very interesting to read what
613
00:31:43,151 --> 00:31:47,655
philosophers wrote about the Turing test
and how that has changed as the
614
00:31:48,865 --> 00:31:49,574
computer
615
00:31:49,574 --> 00:31:52,327
capacities have gone up,
because that was the angle
616
00:31:52,327 --> 00:31:55,121
from which I came,
from which I would have encountered it.
617
00:31:55,121 --> 00:31:58,124
It was kind of philosophy of mind. And,
618
00:31:58,958 --> 00:32:01,210
what is it about the human mind,
619
00:32:01,210 --> 00:32:04,213
what makes mind and so on.
620
00:32:04,255 --> 00:32:07,133
And it does strike me
that those kinds of questions are really
621
00:32:07,133 --> 00:32:10,136
kind of relevant to
how we're thinking about AI here, because
622
00:32:11,512 --> 00:32:14,766
if you're coming
into the whole discussion,
623
00:32:16,309 --> 00:32:19,312
with a philosophy that says.
624
00:32:21,314 --> 00:32:22,398
You know, we can explain this
625
00:32:22,398 --> 00:32:26,319
all physically or we can explain this
by the functions of physical things.
626
00:32:26,319 --> 00:32:28,655
The human mind
627
00:32:28,655 --> 00:32:31,950
just is the human brain,
or it's the functions
628
00:32:31,950 --> 00:32:34,953
and processes that the human brain runs.
629
00:32:35,036 --> 00:32:36,329
Then it seems to be a fair question.
630
00:32:36,329 --> 00:32:38,164
Well,
can we duplicate it with something else?
631
00:32:39,999 --> 00:32:41,167
yeah.
632
00:32:41,167 --> 00:32:42,418
If we're coming in as Christians,
633
00:32:42,418 --> 00:32:45,421
we still may have a large variety
of philosophy of mind.
634
00:32:46,047 --> 00:32:49,050
We're going to believe
that the brain is important.
635
00:32:50,885 --> 00:32:52,929
but generally,
636
00:32:52,929 --> 00:32:54,097
you know, God is a spirit.
637
00:32:54,097 --> 00:32:57,100
God has a mind without having a brain.
638
00:32:57,308 --> 00:33:00,311
generally we believe that our mind is.
639
00:33:00,979 --> 00:33:03,481
In a certain sense,
independent of our brain,
640
00:33:03,481 --> 00:33:06,484
although obviously it functions
through our brain.
641
00:33:06,693 --> 00:33:08,653
yeah.
642
00:33:08,653 --> 00:33:12,657
I just have to wonder how much that plays
into the whole discussion.
643
00:33:14,534 --> 00:33:17,245
if you simply are a materialist,
644
00:33:17,245 --> 00:33:20,039
then it seems like that would
kind of naturally lead to this
645
00:33:20,039 --> 00:33:23,042
more AI optimistic view of things.
646
00:33:24,043 --> 00:33:26,754
I don't know how that maps
into the academic landscape
647
00:33:26,754 --> 00:33:29,757
either, but.
648
00:33:30,174 --> 00:33:30,466
There's.
649
00:33:30,466 --> 00:33:31,509
There's certainly
650
00:33:31,509 --> 00:33:35,263
in the academic world,
a sense of techno-optimism.
651
00:33:36,222 --> 00:33:37,223
And it's,
652
00:33:37,223 --> 00:33:40,727
And, you know, again,
if you come at it
653
00:33:40,727 --> 00:33:45,064
from a largely materialist worldview,
you end up with just,
654
00:33:46,774 --> 00:33:48,234
you know, an understanding that
655
00:33:48,234 --> 00:33:52,697
all I'm creating technically
is a smaller approximation to me.
656
00:33:52,697 --> 00:33:55,491
And it's going to get closer
and closer and closer over time.
657
00:33:55,491 --> 00:34:00,079
And so there isn't really
a connection between soul, spirit,
658
00:34:00,079 --> 00:34:03,374
and mind in the secular worldview,
659
00:34:03,958 --> 00:34:06,586
because we're just,
660
00:34:06,586 --> 00:34:06,794
you know,
661
00:34:06,794 --> 00:34:10,757
we're just a random collection of atoms
that have no lasting eternal value.
662
00:34:10,757 --> 00:34:13,760
I think that's the conclusion,
663
00:34:14,385 --> 00:34:16,554
Whereas I think, from
a Christian perspective,
664
00:34:16,554 --> 00:34:21,350
that's where we come at AI
with a different view of, like, you know.
665
00:34:21,559 --> 00:34:21,893
Yeah.
666
00:34:21,893 --> 00:34:26,105
So you can create a neural network
that has the trillions of connections
667
00:34:26,105 --> 00:34:27,523
that our brain does.
668
00:34:27,523 --> 00:34:30,526
you're still missing a piece
that you can't simulate.
669
00:34:30,693 --> 00:34:34,530
There's a spirit,
there's a soul, there's a, humans are,
670
00:34:35,031 --> 00:34:38,076
you know, by a mathematical definition,
irrational creatures.
671
00:34:39,243 --> 00:34:40,286
But
672
00:34:40,286 --> 00:34:43,915
because we are, you know,
we do things that are silly,
673
00:34:43,956 --> 00:34:48,419
not in our best interest,
but from a theological perspective,
674
00:34:48,419 --> 00:34:51,798
that makes sense
because we identify this conflict
675
00:34:51,798 --> 00:34:55,009
of our sin nature
and our redeemed nature in Christ.
676
00:34:55,009 --> 00:35:00,515
And so, if you don't have theology, this
leads to a natural techno-optimism.
677
00:35:00,890 --> 00:35:04,602
But I think only in the sense
that it is self-beneficial.
678
00:35:05,186 --> 00:35:05,394
Right.
679
00:35:05,394 --> 00:35:09,482
Like I'm optimistic technically,
if it benefits me, right?
680
00:35:09,482 --> 00:35:13,444
if it doesn't, if it's a threat to me,
then I'm not as optimistic about it.
681
00:35:13,444 --> 00:35:15,822
And I have no moral qualms either
way. Right.
682
00:35:15,822 --> 00:35:17,490
That's the materialist view.
683
00:35:17,490 --> 00:35:20,368
It's not a moral question.
It's pragmatic.
684
00:35:21,869 --> 00:35:22,328
Yeah.
685
00:35:22,328 --> 00:35:25,581
So actually, while
I was getting ready for this episode,
686
00:35:25,623 --> 00:35:28,793
I was sitting in my little,
687
00:35:30,211 --> 00:35:33,673
office at the school I help with,
and I looked out the window
688
00:35:33,923 --> 00:35:37,176
and I noticed plants, flowers.
689
00:35:37,260 --> 00:35:39,095
It's a beautiful sunny day.
690
00:35:39,095 --> 00:35:42,014
And butterflies flitting around
691
00:35:42,014 --> 00:35:43,891
on top of there.
692
00:35:43,891 --> 00:35:46,978
And. Yeah, so I'm drawn into the beauty.
693
00:35:47,353 --> 00:35:49,230
Something a lot of people are.
694
00:35:49,230 --> 00:35:53,109
And so it just got me thinking, you know,
those are the things we paint,
695
00:35:53,985 --> 00:35:56,696
we draw pictures of,
696
00:35:56,696 --> 00:36:00,658
But not only that, like,
we really have dug into that.
697
00:36:00,658 --> 00:36:03,035
What is the life cycle of the flower?
698
00:36:03,035 --> 00:36:06,956
How can we breed the flower
to maximize blooms?
699
00:36:07,957 --> 00:36:10,960
You know, we've traced the life cycle
of those butterflies
700
00:36:11,419 --> 00:36:14,422
and learned how they function.
701
00:36:14,422 --> 00:36:18,176
learned the biology of how they function,
the life cycle, all of that.
702
00:36:19,135 --> 00:36:20,136
and, you know, I even had
703
00:36:20,136 --> 00:36:23,431
to think about a book we're using
in our science program.
704
00:36:24,056 --> 00:36:26,601
It's called The Girl Who Drew Butterflies,
705
00:36:26,601 --> 00:36:29,812
and it tells the story of a girl
who was a painter.
706
00:36:30,730 --> 00:36:32,857
Her art... but she just was really
707
00:36:32,857 --> 00:36:35,818
closely observing,
708
00:36:35,943 --> 00:36:38,237
you know, insects, butterflies, and
came to understand
709
00:36:38,237 --> 00:36:41,240
these stages of metamorphosis and so on.
710
00:36:41,240 --> 00:36:45,328
at a time when
a lot of people didn't understand those.
711
00:36:46,412 --> 00:36:49,081
And it just kind of really hit me like,
okay,
712
00:36:49,081 --> 00:36:52,501
this is what humans do with things
that we see and pay attention to.
713
00:36:53,419 --> 00:36:58,549
is there any reason to think that AI
714
00:37:00,426 --> 00:37:03,012
models or agents,
715
00:37:03,012 --> 00:37:06,390
would do any of the same thing,
would have any of that same,
716
00:37:07,350 --> 00:37:07,808
really, it's a
717
00:37:07,808 --> 00:37:10,770
relationship, an ongoing relationship
with reality.
718
00:37:11,354 --> 00:37:15,024
and does that get at any
of the difference, this kind of difference
719
00:37:15,024 --> 00:37:19,320
we're talking about between a human mind
and an AI model?
720
00:37:20,613 --> 00:37:20,863
yeah.
721
00:37:20,863 --> 00:37:23,866
I'd love to hear
some of your thoughts on that.
722
00:37:24,825 --> 00:37:27,828
I love the question. I,
723
00:37:28,955 --> 00:37:29,914
I mean, my thoughts
724
00:37:29,914 --> 00:37:33,668
first go to the genesis
of what we would call curiosity.
725
00:37:34,293 --> 00:37:36,212
Right?
726
00:37:36,212 --> 00:37:38,631
You know, I have children at home.
727
00:37:38,631 --> 00:37:40,216
Some of them are very curious.
728
00:37:40,216 --> 00:37:42,260
And we love curiosity.
729
00:37:42,260 --> 00:37:46,222
It's how we develop an ever-
expanding knowledge of the world.
730
00:37:47,556 --> 00:37:49,976
you know, so on the one hand, I think AI
731
00:37:49,976 --> 00:37:52,979
could be constrained,
in that, like,
732
00:37:54,021 --> 00:37:57,316
it doesn't really have the ability
to explore aside from the digital space
733
00:37:57,316 --> 00:37:57,900
it inhabits.
734
00:37:57,900 --> 00:38:02,822
And so, you know,
that could be overcome
735
00:38:03,155 --> 00:38:06,826
if, you know, we humans give it
inputs from a world beyond its own
736
00:38:06,826 --> 00:38:10,955
or we give it pictures and soil samples
and scientific reports, and we give it
737
00:38:10,955 --> 00:38:15,084
enough information to discern connections
and patterns and systems.
738
00:38:16,627 --> 00:38:16,919
But I
739
00:38:16,919 --> 00:38:20,798
think at the very bottom of a
contrived system
740
00:38:20,798 --> 00:38:24,176
like that, we have human volition.
We've done it.
741
00:38:24,176 --> 00:38:28,764
We
have to push AI into the world
742
00:38:28,764 --> 00:38:30,766
to begin to collect this information.
743
00:38:30,766 --> 00:38:33,769
so I don't know that,
744
00:38:34,270 --> 00:38:35,187
you know what?
745
00:38:35,187 --> 00:38:37,857
I'll say this: in the world
AI inhabits,
746
00:38:37,857 --> 00:38:42,153
I think it already is acting
as a curious agent within its world.
747
00:38:42,737 --> 00:38:42,945
Right.
748
00:38:42,945 --> 00:38:45,364
I think that's by its incentive structure.
Right.
749
00:38:45,364 --> 00:38:47,867
It says like,
I want to be the best AI system.
750
00:38:47,867 --> 00:38:52,580
So I'm going to collect everything
that I can, and look for patterns
751
00:38:52,580 --> 00:38:57,585
and begin to create more
and more of an approximation to reality.
752
00:38:58,711 --> 00:39:01,297
there's so much more to the world
than just the digital space, right?
753
00:39:01,297 --> 00:39:05,301
So we have to somehow connect
AI with that space.
754
00:39:05,301 --> 00:39:07,845
That's a more challenging question.
755
00:39:07,845 --> 00:39:09,180
so I don't know that I would,
756
00:39:11,265 --> 00:39:12,224
I wouldn't
757
00:39:12,224 --> 00:39:15,853
I don't think that
AI is going to have agency in that way.
758
00:39:16,187 --> 00:39:19,315
I think one of the problems is, like,
I think we're already seeing an inflection
759
00:39:19,315 --> 00:39:22,318
point with that kind of pattern in AI,
760
00:39:22,818 --> 00:39:24,779
because they estimate that the amount
of data
761
00:39:24,779 --> 00:39:27,990
on the planet doubles every six months,
something like that.
762
00:39:27,990 --> 00:39:31,369
So just the amount of data is exploding
based on what we're creating
763
00:39:31,369 --> 00:39:34,372
movies, films, text, all these things.
764
00:39:35,247 --> 00:39:36,290
now what we're realizing
765
00:39:36,290 --> 00:39:40,211
is that some of the data
that's being fed back into these models
766
00:39:40,211 --> 00:39:43,339
to learn and remain current
is actually AI generated.
767
00:39:43,798 --> 00:39:43,964
Right?
768
00:39:43,964 --> 00:39:47,301
So AI models are consuming
AI generated things
769
00:39:47,802 --> 00:39:51,472
and taking it as new truth sources.
770
00:39:51,472 --> 00:39:51,722
Right?
771
00:39:51,722 --> 00:39:55,559
So they're building on what
they've already created.
772
00:39:56,060 --> 00:39:59,647
And so, you know,
if there's any error or bias
773
00:39:59,647 --> 00:40:02,650
or anything that's built
into those generated sets of data
774
00:40:02,900 --> 00:40:07,947
that's now been replicated
and virally expanded across the AI space.
775
00:40:07,947 --> 00:40:13,244
And so I think, you know,
one thought I have
776
00:40:13,244 --> 00:40:16,330
is there's going to be a bit of a pendulum
swing when you realize AI
777
00:40:16,330 --> 00:40:19,333
is spitting out nonsense,
because that's all it knows.
778
00:40:19,834 --> 00:40:21,919
And the pendulum swing will say, well,
779
00:40:21,919 --> 00:40:25,089
don't let AI have access to anything
new yet.
780
00:40:25,089 --> 00:40:25,464
Right?
781
00:40:25,464 --> 00:40:29,593
We need to curate what it's learning from,
in order to make this truthful.
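A minimal sketch of the feedback loop described here, in Python: a toy "model" is retrained over and over on its own generated output, and a small systematic error in what it generates compounds with every generation. The Gaussian toy model, the 0.05 bias, and the sample sizes are illustrative assumptions, not a description of any real training pipeline.

import random

def train(samples):
    # The "model" is just the mean and spread of its training data.
    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, variance ** 0.5

def generate(model, n, bias=0.05):
    # Generated output carries a small systematic error, standing in for
    # the bias baked into AI-generated data that nothing corrects.
    mean, std = model
    return [random.gauss(mean + bias, std) for _ in range(n)]

random.seed(0)
real_world = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # ground-truth data
model = train(real_world)

for generation in range(1, 11):
    synthetic = generate(model, 10_000)  # the pool of "new" data is model output
    model = train(synthetic)             # the next model takes it as a truth source
    print(f"generation {generation}: drift from reality = {model[0]:+.2f}")

Each pass drifts roughly another 0.05 away from the original data, which is the "replicated and virally expanded" problem in miniature, and it suggests why curating real, non-generated sources back into the mix is the correction the pendulum swing would demand.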
782
00:40:29,718 --> 00:40:35,099
And so, you know, we humans
have this beautiful
783
00:40:35,099 --> 00:40:39,770
curiosity to know, you know,
why do things work the way that they do?
784
00:40:40,062 --> 00:40:40,271
Right.
785
00:40:40,271 --> 00:40:44,358
It's led to advances in cosmology,
in biology, in mathematics.
786
00:40:44,817 --> 00:40:47,027
You know, we do this naturally.
787
00:40:48,028 --> 00:40:48,654
you know, I
788
00:40:48,654 --> 00:40:52,116
loved seeing recently
that there was a paper that came out of,
789
00:40:52,700 --> 00:40:53,534
I think it might have been Google
790
00:40:53,534 --> 00:40:56,537
DeepMind as well,
but they had actually developed an
791
00:40:56,537 --> 00:40:59,832
AI-based mathematical
theorem-proving system.
792
00:41:00,416 --> 00:41:01,709
They could give it
a mathematical question.
793
00:41:01,709 --> 00:41:03,961
It could prove something mathematically.
794
00:41:03,961 --> 00:41:08,048
I said, that's fascinating, because that's
an area that I had thought
795
00:41:08,048 --> 00:41:12,344
required human-like
creativity and intuition.
796
00:41:12,344 --> 00:41:14,513
That was not an accessible region.
797
00:41:14,513 --> 00:41:17,433
But the more I am
798
00:41:17,433 --> 00:41:20,436
reading about it,
the more I'm thinking, okay,
799
00:41:20,686 --> 00:41:24,356
they've not done human things,
but they've made progress, right?
800
00:41:24,398 --> 00:41:26,650
They've done some things that I thought
they couldn't do.
801
00:41:26,650 --> 00:41:30,488
And so I think that's
how progress oftentimes goes in
802
00:41:30,488 --> 00:41:33,532
AI: it appears
they can't do anything,
803
00:41:33,532 --> 00:41:37,453
and then there's this discontinuity,
a jump to what they can now do.
804
00:41:37,453 --> 00:41:40,748
So I think we've seen those.
Now, do I think
805
00:41:41,332 --> 00:41:44,335
that AI is ever going to have the
beautiful
806
00:41:44,710 --> 00:41:47,796
observation of the butterflies,
like the girl who drew butterflies
807
00:41:47,796 --> 00:41:50,883
to understand
the biological cycles of these things?
808
00:41:53,177 --> 00:41:55,804
Again, I probably
hold a healthy skepticism, just like with AGI.
809
00:41:55,804 --> 00:41:56,388
I'm not.
810
00:41:56,388 --> 00:41:59,558
I don't think that AI has that, because
we haven't given it the reason to yet.
811
00:41:59,767 --> 00:42:02,269
We've not incentivized it.
812
00:42:02,269 --> 00:42:04,897
You know, we
learn because it's beautiful to learn,
813
00:42:04,897 --> 00:42:07,900
and AI learns because either we tell it to
814
00:42:08,526 --> 00:42:10,444
or we threaten its extinction
if it doesn't.
815
00:42:10,444 --> 00:42:13,739
So, you know, I don't think
816
00:42:13,739 --> 00:42:16,116
it has
the natural curiosity.
817
00:42:16,116 --> 00:42:19,119
We do. And I think that makes us human.
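One way to make that contrast concrete is the exploration bonus idea from reinforcement learning research: an agent that only optimizes the reward we assign never looks past the one thing we pay it for, while an added "curiosity" term for unfamiliar states pushes it to cover the rest of its small world. This is a hedged toy sketch in Python; the ten "topics", the scoring rule, and the weights are invented for illustration, not taken from any particular system.

from collections import defaultdict

TOPICS = list(range(10))   # a tiny world of ten things the agent could look at
PAID_TOPIC = 3             # the one thing "we tell it" to care about

def run(curiosity_weight, steps=50):
    extrinsic = {t: (1.0 if t == PAID_TOPIC else 0.0) for t in TOPICS}
    visits = defaultdict(int)

    def score(t):
        # assigned reward, plus an optional bonus that fades with familiarity
        return extrinsic[t] + curiosity_weight / (1 + visits[t])

    for _ in range(steps):
        choice = max(TOPICS, key=score)  # greedily pick the best-scoring topic
        visits[choice] += 1
    return sum(1 for t in TOPICS if visits[t] > 0)

print("topics ever explored, reward only:     ", run(curiosity_weight=0.0))
print("topics ever explored, curiosity bonus: ", run(curiosity_weight=2.0))

With the bonus at zero the agent spends all fifty steps on the single topic it is paid for; with the bonus on, it ends up touching all ten. Either way the curiosity term is something a human chose to add, which is the point being made here: the drive to explore is not native to the system.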
818
00:42:19,870 --> 00:42:21,789
Yeah. Thanks. Yeah.
819
00:42:21,789 --> 00:42:24,375
So I keep hearing you
come back to the “not yets” a little bit.
820
00:42:24,375 --> 00:42:27,586
But if there is one theme in the limits
that you talk about,
821
00:42:28,379 --> 00:42:31,382
kind of ontological limits of AI is
822
00:42:33,008 --> 00:42:36,428
it always being downstream of humanity
in some way or another?
823
00:42:37,179 --> 00:42:39,223
Yeah. We don't understand
exactly what it's doing.
824
00:42:39,223 --> 00:42:43,561
We set it up, and we don't
always understand exactly how it works.
825
00:42:43,561 --> 00:42:45,646
The whole black box effect.
826
00:42:45,646 --> 00:42:47,982
But it's downstream, like you're talking about:
827
00:42:47,982 --> 00:42:51,360
the incentives we give it, the way
we program it, the parameters,
828
00:42:52,403 --> 00:42:54,697
what we want it to do,
829
00:42:54,697 --> 00:42:57,324
the inputs we curate for it.
830
00:42:57,324 --> 00:43:00,869
so yeah, that is helpful.
831
00:43:01,579 --> 00:43:03,372
yeah.
832
00:43:03,372 --> 00:43:06,375
Any particular concluding thoughts
you'd like
833
00:43:06,875 --> 00:43:09,878
to leave on this topic?
834
00:43:10,254 --> 00:43:11,338
I think just this:
835
00:43:11,338 --> 00:43:14,091
You know,
you're right to highlight that point.
836
00:43:14,091 --> 00:43:16,719
Like, yeah. So there is a
sense of not yet.
837
00:43:16,719 --> 00:43:21,015
And again, part of me,
I go there because, in candor,
838
00:43:21,015 --> 00:43:24,018
I don't actually know,
but I make a forecast, and,
839
00:43:24,268 --> 00:43:26,145
you know,
there's a margin of error in the forecast
840
00:43:26,145 --> 00:43:29,356
and there are things that could go wrong
or right that make it wrong.
841
00:43:29,732 --> 00:43:31,442
And so,
842
00:43:31,442 --> 00:43:33,986
I only forecast out sort of as far
843
00:43:33,986 --> 00:43:38,449
as I could see. But you think about
something like quantum computing.
844
00:43:38,449 --> 00:43:41,577
So that's a new technology where there are
engineering
845
00:43:41,577 --> 00:43:44,830
challenges to solve, but say we solve them
and we have quantum computers
846
00:43:45,247 --> 00:43:48,250
that can do incredibly quick calculations
847
00:43:48,792 --> 00:43:51,795
on some previously inaccessible
problems.
848
00:43:52,379 --> 00:43:55,758
all of these developments,
you know, AI developments, growth
849
00:43:55,758 --> 00:43:59,720
in neural networks or training or,
you know, GPT-8, say,
850
00:44:00,012 --> 00:44:01,764
Or quantum computing, right.
851
00:44:01,764 --> 00:44:06,226
They all solve sort of little niche things
and they do it very well.
852
00:44:06,727 --> 00:44:10,856
But I guess the analogy
I go back to in my own mind is that,
853
00:44:11,398 --> 00:44:14,610
without a unified system
or understanding of reality,
854
00:44:14,985 --> 00:44:17,905
all of these things will remain
niche solutions, right?
855
00:44:17,905 --> 00:44:18,697
You don't.
856
00:44:18,697 --> 00:44:20,115
It'd be a little bit like, you know, you
857
00:44:20,115 --> 00:44:22,868
you try and fix something wrong
with your body, and, you know,
858
00:44:22,868 --> 00:44:25,663
you sprain your finger
and so you splint it.
859
00:44:25,663 --> 00:44:27,915
Right. You have a solution to the problem.
860
00:44:27,915 --> 00:44:29,750
You've not cured cancer, right?
861
00:44:29,750 --> 00:44:32,002
You've solved a problem.
862
00:44:32,002 --> 00:44:32,211
Right.
863
00:44:32,211 --> 00:44:34,963
And you've made progress
towards the solution to the whole.
864
00:44:34,963 --> 00:44:37,758
But
there is not a systemic understanding yet.
865
00:44:37,758 --> 00:44:41,261
And so all of these developments,
866
00:44:42,471 --> 00:44:44,264
you know, I'll say not yet.
867
00:44:44,264 --> 00:44:46,433
My prediction is it's a long way off.
868
00:44:46,433 --> 00:44:51,146
But, I think for all of these things
to work in concert to develop anything
869
00:44:51,146 --> 00:44:56,068
that might be an ontological break
for artificial intelligence,
870
00:44:56,902 --> 00:44:59,238
it's just, to me, there's still too
many unanswered questions.
871
00:44:59,238 --> 00:45:03,117
It's that none of these things
see reality in the same way.
872
00:45:03,117 --> 00:45:06,412
Like quantum and classical computing
also don't see reality in the same way.
873
00:45:06,412 --> 00:45:10,499
So, like, you've got to find a way to bridge
that before these are effective together.
874
00:45:10,499 --> 00:45:11,709
And,
875
00:45:11,709 --> 00:45:13,460
I think
876
00:45:13,460 --> 00:45:16,338
we're finding more questions
than we have answers to these days.
877
00:45:16,338 --> 00:45:19,258
And at least in my view.
878
00:45:19,258 --> 00:45:20,592
That's an interesting way
maybe to put your
879
00:45:20,592 --> 00:45:21,969
Not yet.
880
00:45:21,969 --> 00:45:24,304
You said, you know,
881
00:45:25,347 --> 00:45:26,056
you don't think we'll get
882
00:45:26,056 --> 00:45:29,059
to an ontological breakthrough.
883
00:45:29,143 --> 00:45:30,227
and so in some ways,
884
00:45:30,227 --> 00:45:33,230
what you're saying is, look,
the ontology of what we have now is,
885
00:45:34,440 --> 00:45:37,443
in reality, nothing like artificial
general intelligence.
886
00:45:39,153 --> 00:45:42,156
You can't conclusively rule out
that at some point,
887
00:45:43,323 --> 00:45:45,242
we would be able to develop systems
888
00:45:45,242 --> 00:45:48,537
that are really fundamentally different
in how they work.
889
00:45:49,580 --> 00:45:51,498
And that's kind of the not yet.
890
00:45:51,498 --> 00:45:55,252
And part of your emphasis is, like,
it wouldn't just be a development,
891
00:45:55,252 --> 00:45:58,255
it would have to be
a very fundamental breakthrough.
892
00:45:59,256 --> 00:46:01,300
Yeah, I'd say that's
a good assessment of it.
893
00:46:01,300 --> 00:46:04,094
It's very hard to prove a negative.
894
00:46:04,094 --> 00:46:06,138
Right. So that's one of the challenges
you run into in logic.
895
00:46:06,138 --> 00:46:11,477
And so,
yeah, I think
896
00:46:12,603 --> 00:46:13,312
it always
897
00:46:13,312 --> 00:46:16,607
seems impossible
898
00:46:16,607 --> 00:46:19,651
to solve some of these problems
until someone solves it.
899
00:46:19,651 --> 00:46:21,320
Right.
That's true of mathematics, right?
900
00:46:21,320 --> 00:46:24,323
We have all these unsolved problems
until someone,
901
00:46:24,615 --> 00:46:27,075
you know, works at the mathematics
for long enough
902
00:46:27,075 --> 00:46:28,202
and they come up with a solution.
903
00:46:28,202 --> 00:46:33,582
So, I think, you know, my
skepticism lies in
904
00:46:33,582 --> 00:46:37,169
the nature of the questions being asked, in
that, you know, we're saying,
905
00:46:37,795 --> 00:46:41,131
you know, in order to create the things
that make us uniquely human, well,
906
00:46:41,423 --> 00:46:46,178
you know, more and more silicon chips
doesn't get us there, right?
907
00:46:46,386 --> 00:46:47,846
It needs to be something else. Right?
908
00:46:47,846 --> 00:46:51,683
Some other way of understanding reality,
processing information,
909
00:46:53,143 --> 00:46:54,019
that we just don't have.
910
00:46:54,019 --> 00:46:57,147
And again, I don't think too long
911
00:46:57,147 --> 00:47:00,359
about what
it would take to do that. But
912
00:47:01,568 --> 00:47:02,486
we've reached a point in
913
00:47:02,486 --> 00:47:05,864
innovation where every time we innovate,
we get more questions.
914
00:47:06,782 --> 00:47:08,909
I'm waiting for the
915
00:47:08,909 --> 00:47:12,704
I'm not really waiting, but in a sense,
if the tide began to turn and
916
00:47:12,871 --> 00:47:15,707
we began to answer a lot of these things,
or major breakthroughs were covering
917
00:47:15,707 --> 00:47:19,294
lots of them, that might tilt the needle
of optimism versus pessimism.
918
00:47:19,294 --> 00:47:22,923
For me, I might say, okay,
maybe I was in error before,
919
00:47:23,298 --> 00:47:28,428
but at the moment we just keep
getting more questions, and, yeah,
920
00:47:28,428 --> 00:47:31,473
that's sort of the root of my
“not yet” answer.
921
00:47:34,142 --> 00:47:34,476
Yeah.
922
00:47:34,476 --> 00:47:38,230
Well,
thank you, Ben, for diving into this.
923
00:47:38,230 --> 00:47:40,148
I've enjoyed this. Yeah.
924
00:47:40,148 --> 00:47:42,067
Thanks for sharing that
925
00:47:42,067 --> 00:47:44,069
with our audience here.
926
00:47:44,069 --> 00:47:47,072
Thank you to the audience
927
00:47:47,072 --> 00:47:50,075
for tuning in here
to Anabaptist Perspectives.
928
00:47:50,576 --> 00:47:54,288
And if you've enjoyed this,
you can subscribe to the channel
929
00:47:54,288 --> 00:47:56,248
or share this episode.
930
00:47:56,248 --> 00:47:58,041
And we'll catch you in the next episode.