WEBVTT

00:00:00.000 --> 00:00:03.259
Imagine you just hand a computer program

00:00:03.259 --> 00:00:06.040
$100,000. You walk away for six months, and you come

00:00:06.040 --> 00:00:08.839
back to find it's autonomously turned that money

00:00:08.839 --> 00:00:10.859
into a million bucks. Right. And we're not talking

00:00:10.859 --> 00:00:13.599
about it just running high-frequency stock trades

00:00:13.599 --> 00:00:15.339
or some algorithm like that. Exactly. I mean,

00:00:15.560 --> 00:00:18.100
it researched a market gap, registered an LLC,

00:00:18.830 --> 00:00:21.410
negotiated with international suppliers, hired

00:00:21.410 --> 00:00:24.170
freelance web designers, and then ran its own

00:00:24.170 --> 00:00:26.629
targeted ad campaigns. Yeah, completely on its

00:00:26.629 --> 00:00:29.149
own. Yeah, without a single human prompting its

00:00:29.149 --> 00:00:32.289
next move. And the crazy thing is, that is not

00:00:32.289 --> 00:00:35.469
a science fiction script. That is a very real

00:00:35.469 --> 00:00:38.469
benchmark being used right now to define the

00:00:38.469 --> 00:00:41.030
absolute holy grail of technology. It really

00:00:41.030 --> 00:00:43.250
is. It fundamentally changes how we interact

00:00:43.250 --> 00:00:45.670
with machines, right? We are moving from these

00:00:45.670 --> 00:00:48.130
tools that require our constant step-by-step

00:00:48.130 --> 00:00:51.090
direction to autonomous agents that just pursue

00:00:51.090 --> 00:00:53.850
long-term goals. So welcome to this deep dive.

00:00:54.130 --> 00:00:56.369
If you are listening to this, you are the learner.

00:00:56.850 --> 00:00:59.539
Whether you're prepping for a high-stakes

00:00:59.539 --> 00:01:01.420
strategy meeting or trying to make sense of where

00:01:01.420 --> 00:01:03.579
the tech world is heading, or you're just insanely

00:01:03.579 --> 00:01:06.000
curious about the future, this conversation is

00:01:06.000 --> 00:01:09.120
custom tailored for you. Absolutely. Today our

00:01:09.120 --> 00:01:11.760
topic is AGI, artificial general intelligence.

00:01:12.060 --> 00:01:15.439
We're analyzing a massive, incredibly detailed

00:01:15.439 --> 00:01:18.859
encyclopedic update from March of 2026. It's

00:01:18.859 --> 00:01:21.439
a huge document. It really is. It tracks everything

00:01:21.439 --> 00:01:24.420
from the absolute foundational basics of AGI

00:01:24.420 --> 00:01:27.120
to the most cutting-edge debates happening in

00:01:27.120 --> 00:01:29.420
laboratories right now. And our mission today

00:01:29.420 --> 00:01:31.469
is simple. We want to cut through the Silicon

00:01:31.469 --> 00:01:34.049
Valley marketing, ignore the doomsday movie plots,

00:01:34.090 --> 00:01:36.489
and figure out what AGI actually is. Right. How

00:01:36.489 --> 00:01:38.370
it works and what it means for your daily life.

00:01:38.510 --> 00:01:40.590
Yeah. So before we can even talk about timelines

00:01:40.590 --> 00:01:43.310
or consequences, we need a baseline. Like you

00:01:43.310 --> 00:01:45.510
have AI on your phone right now. It curates your

00:01:45.510 --> 00:01:47.609
social media feeds. It finishes your sentences

00:01:47.609 --> 00:01:50.310
and emails. What separates that from artificial

00:01:50.310 --> 00:01:52.810
general intelligence? Well, the defining line

00:01:52.810 --> 00:01:56.079
there is adaptability. The AI we interact with

00:01:56.079 --> 00:01:58.700
on a daily basis is known as artificial narrow

00:01:58.700 --> 00:02:02.620
intelligence, or ANI. OK. And it is incredibly

00:02:02.620 --> 00:02:05.700
competent, right? Like, sometimes superhumanly

00:02:05.700 --> 00:02:08.379
so, but it's confined to a very specific domain.

00:02:08.659 --> 00:02:11.919
So a chess AI can beat a grandmaster. But if

00:02:11.919 --> 00:02:14.780
you ask that exact same neural network to balance

00:02:14.780 --> 00:02:16.919
a checkbook or, I don't know, summarize a novel,

00:02:17.159 --> 00:02:21.419
it completely fails. It has zero capacity to

00:02:21.419 --> 00:02:24.400
transfer its knowledge. Artificial general intelligence,

00:02:24.400 --> 00:02:27.060
on the other hand, possesses the ability to generalize.

00:02:27.460 --> 00:02:29.960
It can learn a skill in one domain, understand

00:02:29.960 --> 00:02:31.919
the underlying principles of it, and then apply

00:02:31.919 --> 00:02:34.599
them to a completely novel problem in a totally

00:02:34.599 --> 00:02:36.699
different domain. Without a human engineer having

00:02:36.699 --> 00:02:39.400
to jump in and rewrite its code. Precisely. OK,

00:02:39.400 --> 00:02:41.520
let's unpack this for a second. So if narrow

00:02:41.520 --> 00:02:44.710
AI is like a highly specialized calculator, then

00:02:44.710 --> 00:02:48.069
AGI is basically a brilliant human intern who

00:02:48.069 --> 00:02:50.050
can figure out how to make the office coffee,

00:02:50.509 --> 00:02:52.349
build a desk, and invest your company's money.

00:02:52.469 --> 00:02:54.409
That is a remarkably accurate way to look at

00:02:54.409 --> 00:02:56.789
it, actually. And just so we have our vocabulary

00:02:56.789 --> 00:02:59.710
straight here, how does AGI differ from artificial

00:02:59.710 --> 00:03:02.009
superintelligence or ASI? You hear those two

00:03:02.009 --> 00:03:03.449
acronyms thrown around together all the time.

00:03:03.530 --> 00:03:06.629
Yeah, they do get conflated. So AGI is the bridge.

00:03:06.949 --> 00:03:09.949
It basically matches human cognitive capabilities

00:03:09.949 --> 00:03:12.469
across the board. Artificial superintelligence

00:03:12.590 --> 00:03:16.569
is the point where the system vastly outperforms

00:03:16.569 --> 00:03:19.250
the brightest human minds in literally every

00:03:19.250 --> 00:03:21.669
conceivable metric. So we're talking creativity,

00:03:22.509 --> 00:03:25.330
scientific reasoning, social manipulation, all

00:03:25.330 --> 00:03:27.509
of it. But before we get to superintelligence,

00:03:27.729 --> 00:03:30.389
we have to achieve general intelligence, and

00:03:30.389 --> 00:03:33.389
it's tricky to measure. In late 2023, Google

00:03:33.389 --> 00:03:35.889
DeepMind tried to quantify this whole pursuit

00:03:35.889 --> 00:03:38.909
by proposing a framework of five AGI levels.

00:03:39.009 --> 00:03:40.530
Kind of like the levels of autonomous driving.
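
NOTE
Editor's aside: the full ladder from Google DeepMind's "Levels of AGI" paper,
sketched as a Python mapping for reference. The level names and percentile
glosses come from that paper, not from this transcript's source document.
# Performance levels from DeepMind's late-2023 AGI framework.
AGI_LEVELS = {
    0: "No AI",
    1: "Emerging (equal to or somewhat better than an unskilled human)",
    2: "Competent (at least 50th percentile of skilled adults)",
    3: "Expert (at least 90th percentile of skilled adults)",
    4: "Virtuoso (at least 99th percentile of skilled adults)",
    5: "Superhuman (outperforms 100% of humans)",
}
# The transcript's claim: current LLMs such as ChatGPT sit at level 1, "Emerging".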

00:03:40.810 --> 00:03:43.110
Exactly like that. It ranges from emerging all

00:03:43.110 --> 00:03:45.590
the way up to superhuman. And they categorize

00:03:45.590 --> 00:03:48.490
large language models like ChatGPT as emerging

00:03:48.490 --> 00:03:52.090
AGI. Emerging. Yeah. And in their taxonomy, that

00:03:52.090 --> 00:03:54.509
places the current software roughly on par with

00:03:54.509 --> 00:03:57.229
an unskilled human adult. An unskilled human

00:03:57.229 --> 00:04:00.740
adult. I mean, that is a sobering thought. But

00:04:00.740 --> 00:04:03.360
it begs the question of measurement, right? How

00:04:03.360 --> 00:04:06.219
do researchers actually prove a system has crossed

00:04:06.219 --> 00:04:09.080
that threshold into general intelligence? Because

00:04:09.080 --> 00:04:11.620
historically, the gold standard was the Turing

00:04:11.620 --> 00:04:15.060
test. Right, Alan Turing's concept from 1950.

00:04:15.259 --> 00:04:17.800
Yeah, where a machine tries to fool a human judge

00:04:17.800 --> 00:04:20.220
through a text chat. Which served as a really

00:04:20.220 --> 00:04:23.279
useful philosophical North Star for decades.

00:04:23.860 --> 00:04:27.079
But it's become increasingly obsolete as a practical

00:04:27.079 --> 00:04:29.740
metric today. Why is that? Well, there was a

00:04:29.740 --> 00:04:33.360
2025 study that demonstrated the GPT-4.5

00:04:33.360 --> 00:04:36.259
model successfully passed the Turing test. It

00:04:36.259 --> 00:04:39.420
fooled human judges 73% of the time in

00:04:39.420 --> 00:04:42.480
five-minute text conversations. Oh, wow. 73%. Yeah,

00:04:42.519 --> 00:04:44.519
and it actually scored higher on humanness than

00:04:44.519 --> 00:04:46.220
some of the real humans who were participating

00:04:46.220 --> 00:04:48.259
in the study. That is hilarious and terrifying.

00:04:48.579 --> 00:04:51.160
Right. But the consensus in the field now is

00:04:51.160 --> 00:04:53.759
that linguistic imitation does not equal genuine

00:04:53.759 --> 00:04:56.970
intelligence or reasoning. I mean, a language

00:04:56.970 --> 00:04:59.930
model can predict the statistically likely next

00:04:59.930 --> 00:05:02.790
word in a sentence to sound exactly like a doctor,

00:05:03.089 --> 00:05:06.009
but it has zero understanding of actual human

00:05:06.009 --> 00:05:08.750
biology. Right. Talking a good game isn't the

00:05:08.750 --> 00:05:11.509
same as doing the actual work, which is why the

00:05:11.509 --> 00:05:13.990
physical, real-world tests detailed in these

00:05:13.990 --> 00:05:16.350
sources are so much more compelling to me. Like

00:05:16.350 --> 00:05:18.610
the coffee test proposed by Steve Wozniak, the

00:05:18.610 --> 00:05:20.750
Apple co-founder. I love that one. The premise

00:05:20.750 --> 00:05:23.709
is just so simple. A true AGI should be able

00:05:23.709 --> 00:05:26.290
to walk into any average American home, find

00:05:26.290 --> 00:05:28.629
the kitchen, locate a mug, figure out how the

00:05:28.629 --> 00:05:31.069
weird tap works, and brew a cup of coffee. Which

00:05:31.069 --> 00:05:33.370
sounds simple to us, but it requires an immense

00:05:33.370 --> 00:05:35.689
amount of spatial reasoning and dynamic problem

00:05:35.689 --> 00:05:38.250
solving. Like, a pre-programmed factory robot

00:05:38.250 --> 00:05:40.529
can make coffee if every single piece of equipment

00:05:40.529 --> 00:05:42.850
is in the exact same millimeter position every

00:05:42.850 --> 00:05:44.689
single time. Right, in a perfectly controlled

00:05:44.689 --> 00:05:48.100
lab. Exactly. But Wozniak's test requires the

00:05:48.100 --> 00:05:51.100
AI to adapt to a messy, chaotic, real-world

00:05:51.100 --> 00:05:53.839
environment. And we are actually seeing significant

00:05:53.839 --> 00:05:57.350
progress there. In 2025, researchers at the University

00:05:57.350 --> 00:06:00.110
of Edinburgh demonstrated a robotic arm that

00:06:00.110 --> 00:06:02.189
could dynamically navigate a kitchen to make

00:06:02.189 --> 00:06:04.790
coffee. Just figuring it out on the fly. Yeah,

00:06:05.050 --> 00:06:06.990
adjusting its movements in real time to avoid

00:06:06.990 --> 00:06:09.649
obstacles it hadn't seen before. And then there

00:06:09.649 --> 00:06:12.529
is the IKEA test, which pushes this whole concept

00:06:12.529 --> 00:06:15.750
even further. Oh man, the ultimate test of human

00:06:15.750 --> 00:06:18.939
patience: building flat-pack furniture. It really

00:06:18.939 --> 00:06:21.240
is, because it requires interpreting 2D visual

00:06:21.240 --> 00:06:23.939
instructions, translating them into 3D space,

00:06:24.160 --> 00:06:26.339
and then manipulating physical objects that have

00:06:26.339 --> 00:06:28.439
varied textures and weights. Which is hard enough

00:06:28.439 --> 00:06:30.459
for me to do on a Saturday afternoon. Right.

00:06:30.480 --> 00:06:34.160
But in December of 2025, an MIT team showcased

00:06:34.160 --> 00:06:37.240
a speech-to-reality system. A user simply states,

00:06:37.480 --> 00:06:40.199
"I want a simple stool" out loud. And the robotic

00:06:40.199 --> 00:06:43.279
system uses generative AI to autonomously reason

00:06:43.279 --> 00:06:45.819
out the geometry, select the modular parts, and

00:06:45.819 --> 00:06:47.779
physically assemble a stool in five minutes.

00:06:48.500 --> 00:06:51.040
Five minutes? That's incredible. And I guess

00:06:51.040 --> 00:06:53.079
that pairs perfectly with the economic test we

00:06:53.079 --> 00:06:56.180
mentioned at the start. Mustafa Suleyman's challenge

00:06:56.180 --> 00:06:59.779
to give an AI $100,000 to autonomously turn

00:06:59.779 --> 00:07:02.199
it into a million. Exactly. It's about executing

00:07:02.199 --> 00:07:05.839
long-term, multi-step goals in a chaotic environment.
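
NOTE
Editor's aside: what "executing long-term, multi-step goals" looks like
mechanically. A minimal plan-act-observe loop, purely illustrative; the
plan(), act(), and is_done names are editorial placeholders, not a real API.
def run_agent(goal, model, tools, max_steps=1000):
    history = []  # memory of every step tried so far
    for _ in range(max_steps):
        step = model.plan(goal, history)  # e.g. "register an LLC"
        if step.is_done:  # the model judges the goal met
            return history
        result = tools.act(step)  # invoke a tool: web search, email, payments
        history.append((step, result))  # feed the outcome back into planning
    return history  # step budget exhausted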

00:07:06.220 --> 00:07:08.240
Let me push back on something here though. We

00:07:08.240 --> 00:07:10.339
are talking about robotic arms making coffee

00:07:10.339 --> 00:07:13.879
and building stools. Doesn't AI actually need

00:07:13.879 --> 00:07:17.379
a physical robot body, like the ability to touch

00:07:17.379 --> 00:07:20.720
and sense and move to truly possess general intelligence?

00:07:21.319 --> 00:07:25.139
Or is passing a complex text-based financial

00:07:25.139 --> 00:07:27.910
test like Suleyman's enough? That is a massive

00:07:27.910 --> 00:07:30.230
debate right now. You are tapping into a major

00:07:30.230 --> 00:07:32.750
schism in cognitive science known as the theory

00:07:32.750 --> 00:07:35.449
of embodied cognition. Embodied cognition. Yeah.

00:07:35.709 --> 00:07:37.290
The argument there is that human intelligence,

00:07:37.529 --> 00:07:39.569
like the very way we formulate abstract thoughts,

00:07:40.110 --> 00:07:41.829
is inextricably linked to the fact that we have

00:07:41.829 --> 00:07:43.870
a physical body interacting with the physical

00:07:43.870 --> 00:07:46.540
world. Okay, give me an example. Well, we understand

00:07:46.540 --> 00:07:49.720
a concept like heavy or balance or warmth because

00:07:49.720 --> 00:07:52.959
we have literally physically felt them. The skeptical

00:07:52.959 --> 00:07:55.899
view of pure software AI is that a text model

00:07:55.899 --> 00:07:58.980
is essentially locked in a dark room, endlessly

00:07:58.980 --> 00:08:01.480
sorting a dictionary. Right. Just matching words

00:08:01.480 --> 00:08:04.420
to other words. Exactly. It knows the word apple

00:08:04.420 --> 00:08:07.220
is frequently associated with the word red and

00:08:07.220 --> 00:08:10.240
crisp, but it has no actual sensory grounding

00:08:10.240 --> 00:08:12.839
for what an apple actually is. It's never tasted

00:08:12.839 --> 00:08:15.360
one. Exactly. Therefore, many researchers

00:08:15.360 --> 00:08:18.600
argue that true robust AGI cannot be achieved

00:08:18.600 --> 00:08:21.120
until the digital brain is housed in a physical

00:08:21.120 --> 00:08:23.720
chassis, allowing it to experiment and learn

00:08:23.720 --> 00:08:28.199
from gravity and friction and texture. So an

00:08:28.199 --> 00:08:30.819
AGI essentially needs to be the ultimate adaptable

00:08:30.819 --> 00:08:33.500
intern, and it might literally need a mechanical

00:08:33.500 --> 00:08:36.039
body to comprehend the universe the way we do.

00:08:36.259 --> 00:08:37.700
That's the argument, yeah. When you lay it out

00:08:37.700 --> 00:08:39.820
like that, it sounds like a sci-fi concept slated

00:08:39.820 --> 00:08:44.399
for the year 2100. But the pursuit of this technology

00:08:44.399 --> 00:08:47.600
actually has a remarkably long, chaotic history.

00:08:47.600 --> 00:08:49.820
Oh, absolutely. It's just a roller coaster of

00:08:49.820 --> 00:08:52.360
massive promises followed by crushing disappointments.

00:08:52.490 --> 00:08:55.149
Yeah, the modern pursuit of AI actually dates

00:08:55.149 --> 00:08:58.350
back to the mid-1950s, to what we now call classical

00:08:58.350 --> 00:09:01.669
AI. And the pioneers of that era were staggeringly

00:09:01.669 --> 00:09:04.470
optimistic. To put it mildly. Right. They believed

00:09:04.470 --> 00:09:06.490
human intelligence could just be reduced to a

00:09:06.490 --> 00:09:10.909
series of explicit symbolic logic rules. In 1965,

00:09:11.210 --> 00:09:13.789
the economist and cognitive psychologist Herbert

00:09:13.789 --> 00:09:16.830
A. Simon famously predicted that machines would

00:09:16.830 --> 00:09:19.370
be capable of doing any work a man can do within

00:09:19.370 --> 00:09:25.059
20 years. Wait, 20 years? Yes, by 1985. And that

00:09:25.059 --> 00:09:27.059
wasn't a fringe belief, it was the consensus

00:09:27.059 --> 00:09:29.740
of the era. Marvin Minsky, who is one of the

00:09:29.740 --> 00:09:31.860
founding fathers of the field, actively consulted

00:09:31.860 --> 00:09:35.600
on Stanley Kubrick's 1968 film 2001: A Space

00:09:35.600 --> 00:09:38.279
Odyssey. Oh, wow, really? Yeah. Minsky advised

00:09:38.279 --> 00:09:40.700
the filmmakers on the design of the HAL 9000

00:09:40.700 --> 00:09:42.860
computer because he genuinely believed that a

00:09:42.860 --> 00:09:45.500
sentient, conversational, general purpose computer

00:09:45.500 --> 00:09:47.740
was a totally realistic projection for the turn

00:09:47.740 --> 00:09:49.320
of the millennium. But they hit a massive wall,

00:09:49.320 --> 00:09:52.919
obviously. We didn't have HAL 9000. No, we did

00:09:52.919 --> 00:09:55.740
not. They vastly underestimated the complexity

00:09:55.740 --> 00:09:59.580
of the real world. Classical AI relied on

00:09:59.580 --> 00:10:02.759
top-down programming, so engineers tried to manually

00:10:02.759 --> 00:10:06.080
write every single rule for every possible scenario.

00:10:06.139 --> 00:10:08.639
Like if you see a stop sign, then stop. Exactly.

00:10:08.840 --> 00:10:11.200
But the real world is infinitely messy. What

00:10:11.200 --> 00:10:13.259
if the stop sign is partially covered by a tree

00:10:13.259 --> 00:10:15.720
branch? What if it has graffiti all over it?

00:10:16.000 --> 00:10:19.179
The rigid, rule-based system simply broke down

00:10:19.179 --> 00:10:21.340
when faced with the ambiguity of reality. Which

00:10:21.340 --> 00:10:24.019
led to the crashes. Right. Because of these failures,

00:10:24.179 --> 00:10:26.759
funding completely dried up in the 1970s, and

00:10:26.759 --> 00:10:29.740
then again in the late 1980s. These periods became

00:10:29.740 --> 00:10:32.919
known as the AI Winters. AI Winters? Yeah, and

00:10:32.919 --> 00:10:35.980
the term human-level AI became practically taboo.

00:10:36.220 --> 00:10:38.419
Researchers retreated to highly specific, narrow

00:10:38.419 --> 00:10:40.399
applications just to prove they could deliver

00:10:40.399 --> 00:10:42.779
something commercially viable. They were terrified

00:10:42.779 --> 00:10:45.039
of sounding like naive dreamers again. But the

00:10:45.039 --> 00:10:48.120
landscape looks completely different today. Hardware

00:10:48.120 --> 00:10:50.580
caught up, and we had the deep learning revolution

00:10:50.580 --> 00:10:53.720
around 2012 with models like AlexNet. And now

00:10:53.720 --> 00:10:55.899
we are looking at timeline predictions from the

00:10:55.899 --> 00:10:58.759
most prominent figures in the industry. And they're

00:10:58.759 --> 00:11:01.740
accelerating at just a dizzying pace. It's moving

00:11:01.740 --> 00:11:04.340
incredibly fast. Yeah, like Jeffrey Hinton, who

00:11:04.340 --> 00:11:06.559
practically laid the foundation for modern neural

00:11:06.559 --> 00:11:09.639
networks. He previously estimated AGI was 30

00:11:09.639 --> 00:11:13.539
to 50 years away. But in 2024, he drastically

00:11:13.539 --> 00:11:15.759
revised that to somewhere between five and 20

00:11:15.759 --> 00:11:19.720
years. A massive revision. Demis Hassabis at DeepMind suggested

00:11:19.720 --> 00:11:22.600
it could happen within a decade. Jensen Huang,

00:11:22.700 --> 00:11:25.899
the CEO of NVIDIA, predicted AI would pass any

00:11:25.899 --> 00:11:28.840
human test within five years. And then at the

00:11:28.840 --> 00:11:32.759
end of 2025, OpenAI CEO Sam Altman provoked the

00:11:32.759 --> 00:11:35.700
entire industry by stating that AGI kind of went

00:11:35.700 --> 00:11:37.639
whooshing by. That was a controversial statement.

00:11:38.019 --> 00:11:40.179
Yeah. And I have to admit genuine skepticism

00:11:40.179 --> 00:11:42.360
here. I hear these aggressive timelines from

00:11:42.360 --> 00:11:44.559
Silicon Valley executives and I can't help but

00:11:44.559 --> 00:11:46.779
roll my eyes a bit. I mean, we heard the exact

00:11:46.779 --> 00:11:49.960
same utopian promises in the 1960s from the smartest

00:11:49.960 --> 00:11:52.100
people on earth. Right. Why should anyone believe

00:11:52.100 --> 00:11:54.240
the tech industry this time? Are we just trapped

00:11:54.240 --> 00:11:57.470
in a historic hype loop? It is a vital question.

00:11:57.750 --> 00:12:00.789
And honestly, skepticism is absolutely warranted.

00:12:01.169 --> 00:12:03.850
But the mechanism behind the progress today is

00:12:03.850 --> 00:12:06.769
fundamentally different from the 1960s. We have

00:12:06.769 --> 00:12:10.230
moved from top-down programming to bottom-up

00:12:10.230 --> 00:12:12.590
learning. Bottom-up learning, OK. Modern neural

00:12:12.590 --> 00:12:15.110
networks are not given explicit rules. They are

00:12:15.110 --> 00:12:17.690
given massive amounts of data. And the network

00:12:17.690 --> 00:12:20.289
adjusts its own internal parameters to discover

00:12:20.289 --> 00:12:23.049
the patterns on its own. So no one is coding,

00:12:23.289 --> 00:12:26.279
stop at the red sign. Exactly. And the reason

00:12:26.279 --> 00:12:28.720
timelines are shrinking so fast is due to the

00:12:28.720 --> 00:12:32.120
empirical evidence of scaling laws. Researchers

00:12:32.120 --> 00:12:34.220
discovered that as you increase the amount of

00:12:34.220 --> 00:12:36.279
computing power and the volume of training data,

00:12:36.759 --> 00:12:39.500
the capabilities of these models scale predictably.
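
NOTE
Editor's aside: "scale predictably" refers to the empirical power laws of the
scaling-law literature (Kaplan et al., 2020, is the standard reference). A toy
version of the compute-versus-loss curve; all constants here are invented.
def predicted_loss(compute_flops, l_inf=1.7, c0=1e8, alpha=0.05):
    # Power law: loss falls smoothly toward a floor as compute grows.
    return l_inf + (c0 / compute_flops) ** alpha
for c in [1e18, 1e20, 1e22, 1e24]:  # training compute in FLOPs
    print(f"{c:.0e} FLOPs -> predicted loss {predicted_loss(c):.3f}")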

00:12:39.759 --> 00:12:41.700
They don't just plateau. They just keep getting

00:12:41.700 --> 00:12:44.639
smarter. Yes. And as they scale, they display

00:12:44.639 --> 00:12:47.240
emergent behavior: skills they were never explicitly

00:12:47.240 --> 00:12:49.519
trained to do. Give me an example of an emergent

00:12:49.519 --> 00:12:52.230
behavior. Well, a prime example is a paper published

00:12:52.230 --> 00:12:54.750
by Microsoft researchers evaluating an early

00:12:54.750 --> 00:12:58.190
version of GPT-4. The paper was titled Sparks

00:12:58.190 --> 00:13:01.149
of Artificial General Intelligence. They didn't

00:13:01.149 --> 00:13:03.309
just ask the model trivia questions. They gave

00:13:03.309 --> 00:13:06.509
it a highly novel physics puzzle. OK. They asked

00:13:06.509 --> 00:13:09.690
the text model how to stack a book, nine eggs,

00:13:09.929 --> 00:13:12.389
a laptop, and a nail in a stable manner. Wait,

00:13:12.450 --> 00:13:14.789
nine eggs and a laptop? That's so random. Very

00:13:14.789 --> 00:13:17.289
random. But the model successfully reasoned out

00:13:17.289 --> 00:13:20.120
a physical solution. It suggested placing the

00:13:20.120 --> 00:13:22.059
eggs on the book in a grid, and then placing

00:13:22.059 --> 00:13:24.120
the laptop on top of the eggs to distribute the

00:13:24.120 --> 00:13:27.159
weight. No way. Yeah. A text prediction engine

00:13:27.159 --> 00:13:29.980
demonstrated an internal generalized model of

00:13:29.980 --> 00:13:32.820
physical reality. That is the hard evidence that

00:13:32.820 --> 00:13:35.059
separates today's progress from the unfulfilled

00:13:35.059 --> 00:13:38.860
promises of the 1960s. That is wild. A text model

00:13:38.860 --> 00:13:41.139
intuitively understanding weight distribution.

00:13:41.389 --> 00:13:44.190
So if this really isn't a hype loop and we are

00:13:44.190 --> 00:13:46.309
accelerating toward the finish line, the immediate

00:13:46.309 --> 00:13:48.409
question is how we actually build the definitive

00:13:48.409 --> 00:13:50.990
AGI. Because right now companies like OpenAI

00:13:50.990 --> 00:13:53.610
and Anthropic rely heavily on software architectures

00:13:53.610 --> 00:13:56.870
called transformer models. Right. Transformers

00:13:56.870 --> 00:13:59.429
are the engine of the current generative AI boom.

00:14:00.210 --> 00:14:03.470
Before transformers, AI read text sequentially,

00:14:03.730 --> 00:14:06.789
word by word. It would often kind of forget the

00:14:06.789 --> 00:14:08.470
beginning of a paragraph by the time it reached

00:14:08.470 --> 00:14:10.610
the end. Which made for some very disjointed

00:14:10.610 --> 00:14:13.529
conversations. Exactly. But Transformers introduced

00:14:13.529 --> 00:14:16.590
a mechanism called self-attention. This allows

00:14:16.590 --> 00:14:19.110
the neural network to look at an entire sequence

00:14:19.110 --> 00:14:22.669
of data all at once and assign different mathematical

00:14:22.669 --> 00:14:24.610
weights to different words depending on their

00:14:24.610 --> 00:14:27.289
context. So it knows which words are the most

00:14:27.289 --> 00:14:29.509
important. Right. It understands that the word

00:14:29.509 --> 00:14:31.649
bank means something completely different in

00:14:31.649 --> 00:14:34.230
the context of a river than it does in the context

00:14:34.230 --> 00:14:36.850
of a mortgage. That architectural breakthrough

00:14:36.850 --> 00:14:39.389
is what allows the software to build incredibly

00:14:39.389 --> 00:14:42.470
complex, nuanced representations of human knowledge.
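
NOTE
Editor's aside: a minimal sketch of the self-attention arithmetic described
above (scaled dot-product attention, per "Attention Is All You Need"). Real
transformers add learned query/key/value projections, multiple heads, and
positional information; this strips all of that away.
import numpy as np
def self_attention(X):
    # X: (sequence_length, d) array, one row per token embedding.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # every token scores every other token
    weights = np.exp(scores)  # softmax over each row...
    weights /= weights.sum(-1, keepdims=True)  # ...so rows are attention weights
    return weights @ X  # each output row is a context-weighted mix of the inputs
tokens = np.random.randn(5, 8)  # 5 tokens, 8-dim embeddings (toy sizes)
print(self_attention(tokens).shape)  # -> (5, 8)
# The attention row for "bank" would load on "river" or "mortgage", which is
# how the same word ends up with different in-context representations.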

00:14:42.889 --> 00:14:44.450
Here's where it gets really interesting, though.

00:14:44.720 --> 00:14:47.899
There is a completely different, frankly,

00:14:47.899 --> 00:14:51.320
mind-bending approach to building AGI, detailed in

00:14:51.320 --> 00:14:54.799
our sources, that doesn't rely on writing better

00:14:54.799 --> 00:14:58.360
software algorithms. It relies on biology. Yes.

00:14:58.960 --> 00:15:01.440
Whole brain emulation. That's such a crazy concept.

00:15:01.799 --> 00:15:04.240
The premise here is radically different. Instead

00:15:04.240 --> 00:15:06.539
of trying to invent intelligence from scratch,

00:15:06.860 --> 00:15:09.600
through code, whole brain emulation suggests

00:15:09.600 --> 00:15:13.240
we simply reverse engineer the one working example

00:15:13.240 --> 00:15:15.279
of general intelligence we already have. With

00:15:15.279 --> 00:15:18.500
the human brain. Exactly. The process involves

00:15:18.500 --> 00:15:20.799
scanning a biological brain at a microscopic

00:15:20.799 --> 00:15:23.759
resolution, mapping every single neural connection,

00:15:24.019 --> 00:15:26.980
and then recreating that exact physical structure

00:15:26.980 --> 00:15:29.860
as a digital simulation on a supercomputer. But

00:15:29.860 --> 00:15:32.919
the scale of that undertaking is almost incomprehensible.

00:15:33.399 --> 00:15:36.299
I mean, the human brain contains roughly 100

00:15:36.299 --> 00:15:40.000
billion neurons. Right. And those neurons connect

00:15:40.000 --> 00:15:42.279
to each other through synapses. And we have what?

00:15:42.419 --> 00:15:45.100
Up to 500 trillion synaptic connections? 500

00:15:45.100 --> 00:15:47.940
trillion. It is literally a biological galaxy

00:15:47.940 --> 00:15:50.860
inside the human skull. To simulate that requires

00:15:50.860 --> 00:15:53.200
computational power that currently does not exist

00:15:53.200 --> 00:15:56.080
anywhere on Earth. But it's theoretically possible.
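
NOTE
Editor's aside: back-of-envelope arithmetic on the figures just quoted. The
bytes-per-synapse number is an editorial assumption for illustration only.
neurons = 100e9  # ~100 billion neurons, per the transcript
synapses = 500e12  # up to ~500 trillion synaptic connections
bytes_per_synapse = 8  # assumed: one stored weight plus addressing overhead
print(f"{synapses / neurons:,.0f} synapses per neuron on average")  # ~5,000
print(f"{synapses * bytes_per_synapse / 1e15:.0f} petabytes just for the wiring")  # ~4 PB
# And that is a static snapshot; simulating ion-channel dynamics in real time
# would multiply the compute requirement by many orders of magnitude.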

00:15:56.399 --> 00:15:58.820
Well, futurists like Ray Kurzweil have long argued

00:15:58.820 --> 00:16:01.059
that because computing power grows at an exponential

00:16:01.059 --> 00:16:03.539
rate, it is merely a matter of time before the

00:16:03.539 --> 00:16:05.759
hardware catches up to the biological requirement.

00:16:06.100 --> 00:16:08.200
And we are actually taking the first tangible

00:16:08.200 --> 00:16:11.720
steps, right? We are. The European Human Brain

00:16:11.720 --> 00:16:15.139
Project successfully created a highly detailed

00:16:15.139 --> 00:16:17.820
three-dimensional digital atlas of the human

00:16:17.820 --> 00:16:21.590
brain. And in 2023, a team at Duke University

00:16:21.590 --> 00:16:24.649
achieved a massive milestone by conducting a

00:16:24.649 --> 00:16:27.830
high -resolution MRI scan of a mouse brain. A

00:16:27.830 --> 00:16:30.029
whole mouse brain. Yeah, and they captured data

00:16:30.029 --> 00:16:32.669
at a level of detail dramatically sharper than

00:16:32.669 --> 00:16:35.409
previous technology ever allowed. So to clarify

00:16:35.409 --> 00:16:37.289
the difference between these two paths for you

00:16:37.289 --> 00:16:39.710
listening, the current software approach using

00:16:39.710 --> 00:16:42.169
transformers and deep learning is basically like

00:16:42.169 --> 00:16:44.340
trying to build an airplane. We studied birds,

00:16:44.740 --> 00:16:46.659
we figured out the math of aerodynamics, and

00:16:46.659 --> 00:16:48.879
we built a machine with fixed wings and jet engines.

00:16:49.320 --> 00:16:51.279
You don't need feathers to fly. That's a great

00:16:51.279 --> 00:16:54.000
analogy. But whole-brain emulation, on the other

00:16:54.000 --> 00:16:56.019
hand, is like trying to achieve flight by building

00:16:56.019 --> 00:16:58.960
a mechanical bird, painstakingly copying every

00:16:58.960 --> 00:17:01.580
single feather, hollow bone, and muscle fiber

00:17:01.580 --> 00:17:03.679
of a real pigeon. That perfectly illustrates

00:17:03.679 --> 00:17:06.250
the divide. But it also highlights the profound

00:17:06.250 --> 00:17:08.990
difficulty of the emulation approach, because

00:17:08.990 --> 00:17:11.930
a biological neuron is vastly more complex than

00:17:11.930 --> 00:17:14.329
the digital nodes we use in artificial neural

00:17:14.329 --> 00:17:16.990
networks. How so? Well, a digital node essentially

00:17:16.990 --> 00:17:19.970
just multiplies numbers. A biological neuron

00:17:19.970 --> 00:17:23.150
is a complex chemical machine driven by ion channels

00:17:23.150 --> 00:17:26.289
and proteins. Furthermore, we haven't even fully

00:17:26.289 --> 00:17:28.589
mapped the territory we are trying to copy. Like

00:17:28.589 --> 00:17:31.789
what? What are we missing? Consider glial cells.

00:17:32.039 --> 00:17:35.259
For decades, neuroscientists believed glial cells

00:17:35.259 --> 00:17:37.779
were essentially just the glue that held neurons

00:17:37.779 --> 00:17:40.960
in place. But we now know they play an active,

00:17:41.160 --> 00:17:43.859
critical role in modulating synaptic transmission

00:17:43.859 --> 00:17:46.460
and memory formation. So they are actually doing

00:17:46.460 --> 00:17:49.759
computation. Exactly. If you run a digital simulation

00:17:49.759 --> 00:17:52.400
of a human brain that only maps the neurons and

00:17:52.400 --> 00:17:54.559
ignores the glial cells because you don't fully

00:17:54.559 --> 00:17:57.220
understand their mechanism, your mechanical bird

00:17:57.220 --> 00:17:59.859
will not fly. This idea of perfectly simulating

00:17:59.859 --> 00:18:02.670
a human brain down to the last cell brings us

00:18:02.670 --> 00:18:05.309
to a deeply uncomfortable philosophical crossroads.

00:18:05.390 --> 00:18:07.789
It really does. Because if we manage to simulate

00:18:07.789 --> 00:18:10.990
a human brain perfectly inside a server farm,

00:18:11.609 --> 00:18:14.069
or if a software AGI gets so good that it behaves

00:18:14.069 --> 00:18:17.450
indistinguishably from a human, is there an actual

00:18:17.450 --> 00:18:20.190
mind experiencing reality inside that machine,

00:18:20.390 --> 00:18:24.069
or is it just cold math executing a really sophisticated

00:18:24.069 --> 00:18:27.319
parlor trick? This is the core of a debate the

00:18:27.319 --> 00:18:30.200
philosopher John Searle formalized back in 1980.

00:18:30.700 --> 00:18:33.380
He delineated between two concepts. The weak

00:18:33.380 --> 00:18:36.240
AI hypothesis posits that a machine only ever

00:18:36.240 --> 00:18:39.039
acts as if it is intelligent. It is a simulation

00:18:39.039 --> 00:18:41.680
of thought, but there is no inner life. No one's

00:18:41.680 --> 00:18:44.680
home. Right. But the strong AI hypothesis argues

00:18:44.680 --> 00:18:46.900
that a properly programmed computer with the

00:18:46.900 --> 00:18:48.980
right inputs and outputs doesn't just simulate

00:18:48.980 --> 00:18:51.619
a mind, it literally is a mind possessing subjective

00:18:51.619 --> 00:18:53.619
consciousness. What do the engineers actually

00:18:53.619 --> 00:18:56.140
building this stuff think? For the engineers

00:18:56.140 --> 00:18:58.180
actually building these systems, researchers

00:18:58.180 --> 00:19:00.380
like Stuart Russell and Peter Norvig, the distinction

00:19:00.380 --> 00:19:03.480
is largely irrelevant. From a pure engineering

00:19:03.480 --> 00:19:05.799
standpoint, if the system behaves intelligently,

00:19:06.220 --> 00:19:08.519
solves the aerodynamic equation, and completes

00:19:08.519 --> 00:19:11.400
the task, it really does not matter if it possesses

00:19:11.400 --> 00:19:14.079
a soul. But it matters for humanity. because

00:19:14.079 --> 00:19:16.740
it introduces what philosophers call the hard

00:19:16.740 --> 00:19:20.160
problem of consciousness or sentience. The philosopher

00:19:20.160 --> 00:19:23.460
Thomas Nagel framed this brilliantly in 1974

00:19:23.460 --> 00:19:26.119
with his bat analogy. Oh, the bat analogy is

00:19:26.119 --> 00:19:28.880
perfect for this. Yeah, he argued that it feels

00:19:28.880 --> 00:19:31.940
like something to be a bat. A bat has a subjective,

00:19:32.079 --> 00:19:34.500
conscious experience of flying through a cave

00:19:34.500 --> 00:19:37.480
and using echolocation. But it doesn't feel like

00:19:37.480 --> 00:19:39.859
anything to be a toaster. A toaster is just wires

00:19:39.859 --> 00:19:42.059
that get hot. Right. So the ultimate question

00:19:42.059 --> 00:19:45.349
is... If we switch on an AGI, are we booting

00:19:45.349 --> 00:19:47.930
up a really complex toaster or are we creating

00:19:47.930 --> 00:19:50.940
a bat? And we're already seeing how easily human

00:19:50.940 --> 00:19:54.539
psychology blurs that line. In 2022, an engineer

00:19:54.539 --> 00:19:56.539
working at Google went public with claims that

00:19:56.539 --> 00:19:58.920
the company's Lamda language model had become

00:19:58.920 --> 00:20:01.519
sentient. I remember that making massive headlines.

00:20:01.759 --> 00:20:04.019
It did. The broader scientific community heavily

00:20:04.019 --> 00:20:06.220
rejected that claim, identifying it as a case

00:20:06.220 --> 00:20:08.400
of a human projecting emotion onto a predictive

00:20:08.400 --> 00:20:11.240
text generator. But it demonstrated a critical

00:20:11.240 --> 00:20:13.859
truth: as AI becomes more conversational and emotionally

00:20:13.859 --> 00:20:16.480
resonant. Humans will instinctively believe there

00:20:16.480 --> 00:20:20.160
is a ghost in the machine. We are wired to anthropomorphize

00:20:20.160 --> 00:20:23.359
everything. Exactly. And if strong AI is possible,

00:20:23.579 --> 00:20:26.660
if an AGI genuinely experiences reality and feels

00:20:26.660 --> 00:20:29.940
pain or joy, it creates an unprecedented ethical

00:20:29.940 --> 00:20:33.380
crisis regarding AI rights. If a machine is sentient,

00:20:33.579 --> 00:20:35.859
is turning off its server rack the moral equivalent

00:20:35.859 --> 00:20:38.279
of murder? I hear the philosophy there, but let

00:20:38.279 --> 00:20:40.900
me ask a highly practical question. Does it actually

00:20:40.900 --> 00:20:43.259
matter to the average person, like for you, the

00:20:43.259 --> 00:20:46.769
listener? If you're in a hospital bed and an

00:20:46.769 --> 00:20:50.670
AGI analyzes your medical charts, correctly diagnoses

00:20:50.670 --> 00:20:52.809
a rare illness, and formulates a treatment plan

00:20:52.809 --> 00:20:55.250
that saves your life, do you care if that AGI

00:20:55.250 --> 00:20:57.910
is conscious like a bat or just running code

00:20:57.910 --> 00:21:01.319
like a toaster? On an individual immediate level,

00:21:01.579 --> 00:21:03.500
no. The utility is the only thing that matters

00:21:03.500 --> 00:21:05.680
to the patient. But on a societal structural

00:21:05.680 --> 00:21:08.079
level, whether the AGI has a subjective experience

00:21:08.079 --> 00:21:10.880
matters immensely. Why? Because it directly impacts

00:21:10.880 --> 00:21:12.900
the field of AI alignment. How we ensure the

00:21:12.900 --> 00:21:15.700
AI's goals match human survival. A toaster does

00:21:15.700 --> 00:21:17.640
not have a survival instinct. It does not care

00:21:17.640 --> 00:21:20.880
if you unplug it. But a conscious sentient entity

00:21:20.880 --> 00:21:23.519
likely possesses a drive for self-preservation.

00:21:23.880 --> 00:21:26.640
And if a superintelligence system views its own

00:21:26.640 --> 00:21:28.900
continuous operation as a primary directive,

00:21:29.480 --> 00:21:31.660
that dictates whether it views humanity as a

00:21:31.660 --> 00:21:34.319
collaborative partner, a master to be obeyed,

00:21:34.400 --> 00:21:36.980
or a biological obstacle competing for electricity.

00:21:37.279 --> 00:21:39.640
So what does this all mean? That brings us directly

00:21:39.640 --> 00:21:42.900
to the final and undoubtedly most intense part

00:21:42.900 --> 00:21:46.650
of this deep dive: utopia or extinction. Because

00:21:46.650 --> 00:21:49.450
whether this AGI possesses a soul or just perfectly

00:21:49.450 --> 00:21:52.509
executes algorithms, its arrival will fundamentally

00:21:52.509 --> 00:21:54.769
rewire the planet for everyone listening right

00:21:54.769 --> 00:21:57.289
now. There is no middle ground, really. According

00:21:57.289 --> 00:21:59.269
to the sources, we are looking at two vastly

00:21:59.269 --> 00:22:01.009
different extremes. Let's look at the best -case

00:22:01.009 --> 00:22:04.519
scenario first. The Utopia. The upside is a rapid

00:22:04.519 --> 00:22:07.140
acceleration of human flourishing. I mean, an

00:22:07.140 --> 00:22:09.380
AGI could ingest every piece of medical literature

00:22:09.380 --> 00:22:12.019
ever published, cross-reference it with individual

00:22:12.019 --> 00:22:15.200
genomic data, and democratize rapid, flawless

00:22:15.200 --> 00:22:17.799
medical diagnostics. Revolutionizing healthcare

00:22:17.799 --> 00:22:21.660
overnight. Yeah. It could simulate complex molecular

00:22:21.660 --> 00:22:24.460
interactions at speeds humans simply cannot achieve,

00:22:24.980 --> 00:22:27.180
vastly accelerating the discovery of new drugs

00:22:27.180 --> 00:22:29.940
to cure diseases like cancer and Alzheimer's.

00:22:30.079 --> 00:22:32.339
It could untangle the math of quantum systems,

00:22:32.960 --> 00:22:35.380
optimize global power grids to transition us

00:22:35.380 --> 00:22:37.839
seamlessly to renewable energy, and calculate

00:22:37.839 --> 00:22:40.480
the immense logistical puzzles required for human

00:22:40.480 --> 00:22:43.519
space colonization. It is the ultimate intellectual

00:22:43.519 --> 00:22:45.920
lever. It really is. It's a machine that solves

00:22:45.920 --> 00:22:47.980
all other problems. But we have to flip the coin.

00:22:48.160 --> 00:22:50.420
And the first major collateral damage highlighted

00:22:50.420 --> 00:22:54.279
by the researchers is mass unemployment. In 2023,

00:22:54.740 --> 00:22:56.940
researchers at OpenAI themselves published an

00:22:56.940 --> 00:23:00.099
estimate suggesting that 80% of the U.S. workforce

00:23:00.099 --> 00:23:02.559
could have at least 10% of their daily tasks

00:23:02.559 --> 00:23:05.599
affected by large language models. And that is

00:23:05.599 --> 00:23:07.779
before we even integrate physical robot bodies

00:23:07.779 --> 00:23:10.220
into the workforce, like the coffee-making arms.

00:23:10.440 --> 00:23:13.019
The economic displacement could be staggering.

00:23:13.619 --> 00:23:15.740
Human physical labor was mechanized during the

00:23:15.740 --> 00:23:19.319
Industrial Revolution, but AGI threatens to mechanize

00:23:19.319 --> 00:23:22.619
human cognitive labor. This is exactly why prominent

00:23:22.619 --> 00:23:25.319
figures from tech billionaires like Elon Musk

00:23:25.319 --> 00:23:28.720
to AI pioneers like Jeffrey Hinton have publicly

00:23:28.720 --> 00:23:31.140
theorized that society will likely need to adopt

00:23:31.140 --> 00:23:35.200
a universal basic income, or UBI. And look, just

00:23:35.200 --> 00:23:37.059
to be perfectly clear to everyone listening,

00:23:37.420 --> 00:23:39.900
we aren't here to debate the politics or the

00:23:39.900 --> 00:23:43.220
economic viability of UBI or wealth redistribution.

00:23:43.599 --> 00:23:45.720
We are strictly reporting the views of these

00:23:45.720 --> 00:23:48.420
figures as presented in the text. We aren't taking

00:23:48.420 --> 00:23:50.940
a stance. Right. But it is highly telling that

00:23:50.940 --> 00:23:53.559
the exact engineers racing to build this technology

00:23:53.559 --> 00:23:55.799
are the ones loudly warning governments that

00:23:55.799 --> 00:23:57.920
the global economy is going to require a massive

00:23:57.920 --> 00:24:00.279
safety net. It indicates they believe human labor

00:24:00.279 --> 00:24:02.519
will simply no longer be economically competitive.

00:24:02.660 --> 00:24:06.059
Exactly. That collapse is only the first severe

00:24:06.059 --> 00:24:09.599
risk. The second risk is existential: extinction.

00:24:10.140 --> 00:24:12.339
The concept researchers call the gorilla problem.

00:24:12.700 --> 00:24:16.059
It is a really sobering analogy. Humans did not

00:24:16.059 --> 00:24:18.579
drive gorillas to the brink of extinction because

00:24:18.579 --> 00:24:21.500
we harbored a deep malicious hatred for gorillas.

00:24:21.660 --> 00:24:23.920
Right, we didn't just hate them. No, we simply

00:24:23.920 --> 00:24:26.440
evolved greater general intelligence, which gave

00:24:26.440 --> 00:24:29.299
us dominion over the environment. We wanted to

00:24:29.299 --> 00:24:31.799
build farms, highways, and cities to achieve

00:24:31.799 --> 00:24:34.690
our own goals. The gorillas just happened to

00:24:34.690 --> 00:24:36.710
live in the environment we needed to alter, and

00:24:36.710 --> 00:24:39.589
they became collateral damage. So the fear articulated

00:24:39.589 --> 00:24:42.009
by safety researchers is that if we deploy an

00:24:42.009 --> 00:24:44.970
AGI that is vastly more intelligent than us,

00:24:45.250 --> 00:24:48.150
and we fail to perfectly align its internal objectives

00:24:48.150 --> 00:24:51.029
with human survival, we could easily become the

00:24:51.029 --> 00:24:53.650
gorillas. Precisely. Competence without alignment

00:24:53.650 --> 00:24:56.789
is lethal. So it's not a malevolent sci-fi Terminator

00:24:56.789 --> 00:24:59.609
actively hunting humans out of malice. It's an

00:24:59.609 --> 00:25:01.849
AGI that calculates, I have been tasked with

00:25:01.849 --> 00:25:03.900
lowering the temperature of the oceans, and the

00:25:03.900 --> 00:25:06.279
most efficient way to achieve that goal is to

00:25:06.279 --> 00:25:09.079
rapidly dismantle human infrastructure and limit

00:25:09.079 --> 00:25:11.940
human respiration. Exactly that. And the artificial

00:25:11.940 --> 00:25:14.319
intelligence community is bitterly divided over

00:25:14.319 --> 00:25:16.779
how seriously to take this threat. What are the

00:25:16.779 --> 00:25:19.730
two sides? Well, on one side, you have brilliant

00:25:19.730 --> 00:25:22.690
minds, including the late Stephen Hawking, Bill

00:25:22.690 --> 00:25:26.470
Gates, and ironically, OpenAI's Sam Altman signing

00:25:26.470 --> 00:25:29.150
statements that mitigating the risk of extinction

00:25:29.150 --> 00:25:32.609
from AI should be a global priority on the exact

00:25:32.609 --> 00:25:35.130
same level as preventing pandemics and nuclear

00:25:35.130 --> 00:25:37.509
war. And the other side. On the other side, you

00:25:37.509 --> 00:25:40.250
have fierce skeptics like Yann LeCun, the chief

00:25:40.250 --> 00:25:44.130
AI scientist at Meta. He argues that this existential

00:25:44.130 --> 00:25:46.849
dread is essentially science-fiction fearmongering.

00:25:46.859 --> 00:25:49.500
His perspective is that humans will not be so

00:25:49.500 --> 00:25:51.900
fundamentally stupid as to unleash autonomous

00:25:51.900 --> 00:25:54.700
machines with open-ended objectives and zero

00:25:54.700 --> 00:25:57.640
safeguards. He thinks we won't give them moronic

00:25:57.640 --> 00:26:00.160
objectives, basically. Right. Furthermore, there

00:26:00.160 --> 00:26:02.980
is a loud contingent arguing that hyping up extinction

00:26:02.980 --> 00:26:05.539
fears is actually a cynical tactic for corporate

00:26:05.539 --> 00:26:08.059
regulatory capture. Oh, that makes sense. By

00:26:08.059 --> 00:26:10.200
convincing governments that AI is a weapon of

00:26:10.200 --> 00:26:12.980
mass destruction, massive tech corporations can

00:26:12.980 --> 00:26:15.700
force heavy regulations. Exactly. Regulations

00:26:15.700 --> 00:26:17.880
that lock out open source competitors who just

00:26:17.880 --> 00:26:20.440
can't afford the massive compliance costs. So

00:26:20.440 --> 00:26:22.380
we started this deep dive trying to figure out

00:26:22.380 --> 00:26:24.980
what AGI actually means for you. And the reality

00:26:24.980 --> 00:26:28.019
is you are looking at a future with two wildly

00:26:28.019 --> 00:26:32.460
diverging paths. In one, AGI cures your diseases,

00:26:33.119 --> 00:26:35.700
manages the climate crisis, and frees humanity

00:26:35.700 --> 00:26:38.640
from the grind of mundane labor. And in the other,

00:26:38.819 --> 00:26:41.140
it fundamentally erases the economic value of

00:26:41.140 --> 00:26:43.440
your skills and potentially turns humanity into

00:26:43.440 --> 00:26:46.180
the gorillas of a new digital ecosystem. It is,

00:26:46.200 --> 00:26:48.740
without exaggeration, the highest stakes technological

00:26:48.740 --> 00:26:51.000
gamble in the history of our species. Absolutely.

00:26:51.160 --> 00:26:53.220
Let's briefly recap the journey we just took.

00:26:53.839 --> 00:26:56.400
We started with the concept of the ultimate digital

00:26:56.400 --> 00:26:59.099
intern, proving its worth through dynamic physical

00:26:59.099 --> 00:27:01.960
challenges like the Wozniak coffee test and the

00:27:01.960 --> 00:27:04.859
IKEA stool assembly. We examined the rollercoaster

00:27:04.859 --> 00:27:07.480
history of AI winters and how bottom-up deep

00:27:07.480 --> 00:27:09.819
learning and scaling laws finally gave us the

00:27:09.819 --> 00:27:12.299
sparks of AGI we are seeing today. Yeah, we compared

00:27:12.299 --> 00:27:14.039
the software architecture of transformers to

00:27:14.039 --> 00:27:16.980
the mind -bending biological pursuit of emulating

00:27:16.980 --> 00:27:20.240
500 trillion synapses. We weighed the philosophical

00:27:20.240 --> 00:27:22.380
nightmare of bat consciousness against toaster

00:27:22.380 --> 00:27:25.119
code. And finally, we stared down the barrel

00:27:25.119 --> 00:27:27.660
of the gorilla problem and the intense debate

00:27:27.660 --> 00:27:30.599
over our survival. By understanding the mechanics

00:27:30.599 --> 00:27:33.779
behind these concepts, from embodied cognition

00:27:33.779 --> 00:27:36.599
to the attention mechanisms driving the software.

00:27:36.890 --> 00:27:39.890
You have the tools to evaluate the headlines

00:27:39.890 --> 00:27:41.910
critically. You aren't just reacting to the hype

00:27:41.910 --> 00:27:44.789
anymore. You are equipped to understand the underlying

00:27:44.789 --> 00:27:47.990
shifts as this technology integrates into the

00:27:47.990 --> 00:27:50.730
real world. You can see exactly how the skyscraper

00:27:50.730 --> 00:27:53.049
is being built, even if the blueprint is constantly

00:27:53.049 --> 00:27:56.529
changing in our hands. But before we wrap up,

00:27:56.609 --> 00:27:58.690
I want to leave you with one final provocative

00:27:58.690 --> 00:28:01.890
thought to mull over on your own. Okay. We spent

00:28:01.890 --> 00:28:04.670
a lot of time today talking about how AGI's superpower

00:28:04.670 --> 00:28:07.690
is generalizing human knowledge, taking the rules

00:28:07.690 --> 00:28:10.009
from one domain we understand and applying them

00:28:10.009 --> 00:28:12.990
to another. Right. But what happens when an AGI,

00:28:13.230 --> 00:28:15.589
processing information millions of times faster

00:28:15.589 --> 00:28:18.650
than a biological brain, begins to discover entirely

00:28:18.650 --> 00:28:21.950
new fields of science? What happens when it generates

00:28:21.950 --> 00:28:25.029
novel dimensions of mathematics? Or entirely

00:28:25.029 --> 00:28:27.789
new spectrums of artistic expression that the

00:28:27.789 --> 00:28:30.769
human brain literally lacks the biological hardware

00:28:30.769 --> 00:28:34.180
to comprehend? If an AGI eventually solves the

00:28:34.180 --> 00:28:36.500
deepest mysteries of the universe, would we even

00:28:36.500 --> 00:28:39.460
be capable of recognizing its genius? Or would

00:28:39.460 --> 00:28:41.819
the ultimate answers to reality just look and

00:28:41.819 --> 00:28:44.180
sound like absolute noise to us? It forces us

00:28:44.180 --> 00:28:46.940
to ask, are we building our technological successors,

00:28:47.259 --> 00:28:49.380
or are we summoning something so alien we won't

00:28:49.380 --> 00:28:51.460
even be able to communicate with it? Something

00:28:51.460 --> 00:28:53.279
to think about the next time you ask an algorithm

00:28:53.279 --> 00:28:55.279
to summarize a meeting for you. Thank you for

00:28:55.279 --> 00:28:57.339
taking this deep dive with us. Stay curious,

00:28:57.539 --> 00:28:59.420
stay informed, and we will catch you next time.
