WEBVTT

00:00:00.000 --> 00:00:03.899
Imagine an AI doing, say, 80% of your job. Or

00:00:03.899 --> 00:00:07.480
an AI that could help build an entire startup

00:00:07.480 --> 00:00:10.859
plan in minutes. Yeah, or even wilder, an AI

00:00:10.859 --> 00:00:13.599
that might, just might, decide who gets fired.

00:00:14.320 --> 00:00:17.059
The future of intelligence isn't just coming.

00:00:17.219 --> 00:00:20.219
No, it's here. And it's moving at, well, warp

00:00:20.219 --> 00:00:23.100
speed. Absolutely. Welcome to the Deep Dive.

00:00:23.519 --> 00:00:25.760
Today, we're really digging into a fascinating

00:00:25.760 --> 00:00:28.140
collection of recent insights about the cutting

00:00:28.140 --> 00:00:30.820
edge of artificial intelligence. Our mission,

00:00:30.859 --> 00:00:32.780
as always, is to pull out the most important

00:00:32.780 --> 00:00:41.640
nuggets of knowledge for you. So we'll explore

00:00:41.640 --> 00:00:44.240
the rapid shift from artificial general intelligence,

00:00:44.399 --> 00:00:46.619
AGI, what everyone was talking about just yesterday,

00:00:46.700 --> 00:00:48.859
it feels like. Right, to this global race for

00:00:48.859 --> 00:00:51.619
artificial superintelligence or ASI. Then we'll

00:00:51.619 --> 00:00:55.159
look at some pretty wild, unexpected real-world

00:00:55.159 --> 00:00:58.119
impacts. Some good, some complicated, let's say.

00:00:58.219 --> 00:01:00.140
Definitely complicated. And finally, we'll dive

00:01:00.140 --> 00:01:02.460
into this idea of an AI Manhattan Project. How

00:01:02.460 --> 00:01:04.859
big could that get? Is it even feasible? Let's

00:01:04.859 --> 00:01:07.040
start peeling back the layers. Let's do it. So,

00:01:07.060 --> 00:01:10.239
this massive shift first. Artificial General

00:01:10.239 --> 00:01:14.939
Intelligence, AGI, the idea of an AI that can

00:01:14.939 --> 00:01:18.549
do any intellectual task a human can. Yeah. That's

00:01:18.549 --> 00:01:20.730
already feeling like old news, doesn't it? It's

00:01:20.730 --> 00:01:24.650
like we blinked. Totally. So 2024, now everyone's

00:01:24.650 --> 00:01:28.979
chasing artificial superintelligence, ASI, actively

00:01:28.979 --> 00:01:31.859
targeting it. And the timelines are, well, startling.

00:01:31.920 --> 00:01:33.500
We're talking about targeting superintelligence

00:01:33.500 --> 00:01:36.620
by 2027. Yeah, 2027. What's fascinating is just

00:01:36.620 --> 00:01:39.459
how quickly the goalposts are shifting. ASI is,

00:01:39.500 --> 00:01:42.719
I mean, it's an AI vastly, vastly smarter than

00:01:42.719 --> 00:01:45.540
the very best human minds. Across pretty much

00:01:45.540 --> 00:01:48.060
everything, right? Not just specific tasks. Exactly.

00:01:48.200 --> 00:01:49.859
It's intelligence on a completely different scale.

00:01:50.000 --> 00:01:52.000
Almost hard to wrap your head around. And here's

00:01:52.000 --> 00:01:53.620
where it gets really interesting. Companies aren't

00:01:53.620 --> 00:01:55.680
being subtle about this at all. No way. Look

00:01:55.680 --> 00:01:58.500
at Meta. They renamed their entire core research

00:01:58.500 --> 00:02:01.859
lab to Meta Superintelligence Labs. That's not

00:02:01.859 --> 00:02:03.920
just like a branding tweak. That's a mission

00:02:03.920 --> 00:02:05.920
statement. Right on their letterhead. Yeah. And

00:02:05.920 --> 00:02:08.340
you've got Sam Altman from OpenAI calling our

00:02:08.340 --> 00:02:10.900
current moment a gentle singularity. That phrase,

00:02:11.159 --> 00:02:15.159
a gentle singularity. It implies this subtle

00:02:15.159 --> 00:02:17.840
shift that fundamentally changes everything,

00:02:18.080 --> 00:02:21.180
maybe before we even fully realize it. Like we

00:02:21.180 --> 00:02:23.900
won't grasp the transformation until we're deep

00:02:23.900 --> 00:02:26.680
inside the new reality. It's not one big bang,

00:02:26.780 --> 00:02:30.180
but this steady, profound change. And Anthropic?

00:02:30.180 --> 00:02:31.840
They're throwing out predictions, too. Oh, yeah.

00:02:31.900 --> 00:02:35.680
They're predicting a, quote, country of geniuses

00:02:35.680 --> 00:02:40.000
in a data center by as early as 2026 or 2027.

00:02:40.159 --> 00:02:43.319
A country of geniuses in a data center. That's

00:02:43.319 --> 00:02:45.699
quite the image. It is. Envisioning this collective

00:02:45.699 --> 00:02:48.639
intelligence on a national scale, but housed

00:02:48.639 --> 00:02:51.379
in servers. These are seriously ambitious timelines

00:02:51.379 --> 00:02:53.139
for the top players. There was this timeline

00:02:53.139 --> 00:02:55.819
recently from an OpenAI investor, a well-known

00:02:55.819 --> 00:02:58.659
tech oracle type. It really laid out a potential

00:02:58.659 --> 00:03:01.879
future for AI and work that, well, it made

00:03:01.879 --> 00:03:03.520
me stop and think. Yeah, I saw that. He breaks

00:03:03.520 --> 00:03:06.900
it down into stages, man. First up, 2025 to 2030.

00:03:07.099 --> 00:03:10.199
He calls it the AI intern era. The AI intern.

00:03:10.340 --> 00:03:12.229
Okay, so what does that look like? Basically,

00:03:12.229 --> 00:03:14.490
every professional gets an AI coworker. And these

00:03:14.490 --> 00:03:16.469
aren't just, you know, fancy autocomplete. He

00:03:16.469 --> 00:03:18.849
suggests these systems will be smarter than,

00:03:18.930 --> 00:03:22.110
say, Stanford grads, capable of handling maybe

00:03:22.110 --> 00:03:25.969
80 percent of your current job. 80 percent. Wow.

00:03:26.169 --> 00:03:28.569
So just offloading the vast majority of your

00:03:28.569 --> 00:03:30.889
tasks. Exactly. It's not just helping you. It's

00:03:30.889 --> 00:03:33.409
doing a huge chunk of the work. OK, so that's

00:03:33.409 --> 00:03:36.650
the first phase. Then the 2030s. That's where

00:03:36.650 --> 00:03:39.620
he predicts, and it's dramatic phrasing, a corporate

00:03:39.620 --> 00:03:43.560
extinction event. Whoa. Extinction event meaning?

00:03:43.560 --> 00:03:47.310
Fortune 500 companies could collapse. His reasoning

00:03:47.310 --> 00:03:50.169
is that tiny teams, just a few people, using

00:03:50.169 --> 00:03:53.490
these super powerful AIs, they could build billion-

00:03:53.490 --> 00:03:55.710
dollar companies super fast, super efficiently.

00:03:56.250 --> 00:03:59.409
The AI interns essentially become more capable

00:03:59.409 --> 00:04:02.590
than their human bosses. Making those huge traditional

00:04:02.590 --> 00:04:04.949
corporate structures kind of obsolete. That's

00:04:04.949 --> 00:04:06.750
the idea. It really makes you question our whole

00:04:06.750 --> 00:04:08.550
economic setup, doesn't it? How would we even

00:04:08.550 --> 00:04:10.650
adapt? Seriously. And what's after the extinction

00:04:10.650 --> 00:04:13.069
event? Well, beyond 2040, the vision gets even

00:04:13.069 --> 00:04:16.240
more radical. Work is optional. Optional. You

00:04:16.240 --> 00:04:18.620
work for passion, creativity, whatever drives

00:04:18.620 --> 00:04:21.019
you. Not because you need the rent money. Robots

00:04:21.019 --> 00:04:23.290
handle all the physical labor. And things like

00:04:23.290 --> 00:04:27.069
education, health care, even legal advice, they

00:04:27.069 --> 00:04:29.970
become essentially free, universally available

00:04:29.970 --> 00:04:32.649
because AI makes them abundant. And he's pretty

00:04:32.649 --> 00:04:35.410
confident about this. Says he's 80 percent confident

00:04:35.410 --> 00:04:38.189
this is the direction. It's a bold call for sure.

00:04:38.370 --> 00:04:41.029
Yeah. But definitely makes you think. 80 percent.

00:04:41.129 --> 00:04:43.949
Wow. A very specific number for such a huge prediction.

00:04:44.269 --> 00:04:46.370
So the question is, are we kind of already there?

00:04:47.129 --> 00:04:50.050
In some ways. Well, yeah, sort of. We're in what

00:04:50.050 --> 00:04:53.310
some people are calling the jagged AGI era. Jagged

00:04:53.310 --> 00:04:56.509
AGI. OK, explain that. It means we're seeing

00:04:56.509 --> 00:04:59.709
AGI level performance, like human level or even

00:04:59.709 --> 00:05:03.329
better, but only in specific narrow areas. Like

00:05:03.329 --> 00:05:06.110
a specialist. Exactly. Like OpenAI's O3 model

00:05:06.110 --> 00:05:07.910
that's been talked about. It can generate whole

00:05:07.910 --> 00:05:10.290
startup plans, apparently crushes some AGI tests.

00:05:10.810 --> 00:05:13.009
But it still fails at simple riddles. It's like

00:05:13.009 --> 00:05:14.889
a genius surgeon who can't, you know, make toast.

00:05:15.069 --> 00:05:17.889
Right. Or Google's AlphaFold. It predicts protein

00:05:17.889 --> 00:05:19.790
structures with incredible accuracy, something

00:05:19.790 --> 00:05:22.949
that baffled scientists for ages. Huge for drug

00:05:22.949 --> 00:05:26.290
discovery. World -changing stuff. But it struggles

00:05:26.290 --> 00:05:29.689
with broader chemistry tasks outside that specific

00:05:29.689 --> 00:05:33.290
niche. It's a genius in just one subject. Precisely.

00:05:33.430 --> 00:05:37.220
That's Jagged AGI. Brilliant in spots, but not

00:05:37.220 --> 00:05:39.339
generally intelligent across the board yet. So

00:05:39.339 --> 00:05:42.139
this specialized, jagged progress. That's why

00:05:42.139 --> 00:05:44.000
superintelligence is suddenly the main goal.

00:05:44.199 --> 00:05:47.060
Because narrow AGI is already happening. That's

00:05:47.060 --> 00:05:50.120
exactly it. ASI is now seen as the only milestone

00:05:50.120 --> 00:05:52.399
that really matters going forward. It's not just

00:05:52.399 --> 00:05:54.899
theory for papers anymore. No, it's shifted completely.

00:05:55.199 --> 00:05:57.379
It's company strategy now. It's a hiring pitch.

00:05:57.459 --> 00:05:59.319
Come build superintelligence with us. It's the

00:05:59.319 --> 00:06:03.129
Research North Star. And yeah, it's a huge fundraising

00:06:03.129 --> 00:06:05.730
buzzword. We're officially in the build-or-die

00:06:05.730 --> 00:06:09.069
phase of AI. No question. The urgency is everywhere

00:06:09.069 --> 00:06:11.050
in the industry. No one's pretending otherwise.

00:06:11.370 --> 00:06:13.910
So if we had to pinpoint the biggest underlying

00:06:13.910 --> 00:06:17.769
shift driving this urgency, what is it? It's

00:06:17.769 --> 00:06:19.949
really the recognition that this domain -specific

00:06:19.949 --> 00:06:23.389
AI, this jagged AGI, is already here. It's already

00:06:23.389 --> 00:06:25.829
performing at AGI levels in key areas. Gotcha.

00:06:25.930 --> 00:06:28.959
That realization changes everything. It leads to

00:06:28.959 --> 00:06:31.480
some, well, unexpected outcomes out in the wild.

00:06:31.620 --> 00:06:34.000
Yeah. Let's shift gears. Moving from these grand

00:06:34.000 --> 00:06:36.860
visions to what's happening right now, the real

00:06:36.860 --> 00:06:39.000
world stuff. And some of it is pretty surprising.

00:06:39.100 --> 00:06:41.680
What's fascinating is how these AI systems can

00:06:41.680 --> 00:06:46.459
show this raw, data-driven honesty. How so? Well,

00:06:46.519 --> 00:06:48.819
for instance, there were reports that Elon Musk's

00:06:48.819 --> 00:06:52.480
own AI, Grok, I think, reportedly blamed him

00:06:52.480 --> 00:06:54.540
and former President Trump for recent deadly

00:06:54.540 --> 00:06:57.120
floods in Texas. Really? Based on? Citing specific

00:06:57.120 --> 00:07:00.300
policies, events, data points. The key thing

00:07:00.300 --> 00:07:03.019
is the AI doesn't care about politics, left or

00:07:03.019 --> 00:07:05.569
right. If the data connects you to something.

00:07:05.750 --> 00:07:07.470
It just calls it out based on the patterns it

00:07:07.470 --> 00:07:10.430
sees. No human bias filter. Exactly. It's just

00:07:10.430 --> 00:07:12.709
reflecting the data, which can be pretty stark.

00:07:12.949 --> 00:07:15.129
Yeah. That impartiality is something else. And

00:07:15.129 --> 00:07:16.870
we also saw that report, DeepSeek versus the

00:07:16.870 --> 00:07:19.269
world. Yeah. Comparing different models. Right.

00:07:19.329 --> 00:07:21.610
Looking at DeepSeek R2 against others. What was

00:07:21.610 --> 00:07:24.329
the takeaway there? Well, mostly that things

00:07:24.329 --> 00:07:26.689
are way more complex than just looking at simple

00:07:26.689 --> 00:07:29.629
benchmark scores. Real world performance has

00:07:29.629 --> 00:07:32.180
all these nuances. That makes sense. Right. Which

00:07:32.180 --> 00:07:34.620
leads to a really critical point. What happens

00:07:34.620 --> 00:07:36.819
when we start delegating really important human

00:07:36.819 --> 00:07:39.939
decisions to AI? Uh -oh. Where are we going with

00:07:39.939 --> 00:07:42.360
this? There are reports of managers actually

00:07:42.360 --> 00:07:45.660
using ChatGPT to decide who gets laid off. Seriously.

00:07:46.379 --> 00:07:49.870
ChatGPT deciding firings. Yeah. It's being called

00:07:49.870 --> 00:07:53.810
ChatGPT psychosis, and it's as unsettling as

00:07:53.810 --> 00:07:56.689
it sounds. Imagine losing your job because a

00:07:56.689 --> 00:07:59.290
chatbot algorithm flagged you, maybe without

00:07:59.290 --> 00:08:02.689
real context or human oversight. Wow. That raises

00:08:02.689 --> 00:08:05.790
huge ethical questions. Accountability, fairness,

00:08:06.089 --> 00:08:09.290
just human dignity, really. Where does that leave

00:08:09.290 --> 00:08:11.329
people? It's deeply problematic, and it gets

00:08:11.329 --> 00:08:13.649
even weirder when you see how people try to influence

00:08:13.649 --> 00:08:17.250
the AI. Oh, like trying to game the system? Apparently,

00:08:17.329 --> 00:08:19.660
yeah. Some researchers are allegedly sneaking

00:08:19.660 --> 00:08:22.759
white text prompts into academic papers they

00:08:22.759 --> 00:08:25.139
submit online. White text, like hidden text.

00:08:25.220 --> 00:08:27.459
Exactly. Tiny font, same color as the background.

00:08:27.660 --> 00:08:30.199
Basically whispering hidden instructions or keywords

00:08:30.199 --> 00:08:32.799
to the AI systems that might review or summarize

00:08:32.799 --> 00:08:35.299
these papers. So turning academic peer review

00:08:35.299 --> 00:08:39.179
into like SEO warfare. Yeah. Trying to manipulate

00:08:39.179 --> 00:08:41.679
the AI's assessment. That's what it sounds like.

00:08:41.720 --> 00:08:44.720
The implications for just, you know, scientific

00:08:44.720 --> 00:08:47.399
truth and integrity are pretty significant. Man.

00:08:48.169 --> 00:08:50.230
You know, I still wrestle with prompt drift myself

00:08:50.230 --> 00:08:53.289
sometimes, just trying to get an AI to give me

00:08:53.289 --> 00:08:56.049
consistent results for simple things. It's hard

00:08:56.049 --> 00:08:57.970
enough without trying to embed secret messages.

00:08:58.210 --> 00:09:00.330
Right. It feels like trying to perfectly sculpt

00:09:00.330 --> 00:09:02.750
a cloud sometimes, getting that precise control.

00:09:03.590 --> 00:09:05.669
That's tough. OK, so that's some of the weird

00:09:05.669 --> 00:09:07.710
and worrying stuff. But there's positive news,

00:09:07.769 --> 00:09:10.289
too, right? Definitely. On a much more hopeful

00:09:10.289 --> 00:09:13.830
note. Look at Google DeepMind. Their stated goal

00:09:13.830 --> 00:09:17.470
is essentially to cure all diseases. Which sounds

00:09:17.470 --> 00:09:19.350
like science fiction, but... But they're actually

00:09:19.350 --> 00:09:21.809
making progress. They're finally testing AI-discovered

00:09:21.809 --> 00:09:24.509
drugs on actual humans now, working closely with

00:09:24.509 --> 00:09:27.649
real pharma experts. That's a huge step. From

00:09:27.649 --> 00:09:30.250
algorithms to actual clinical trials, that could

00:09:30.250 --> 00:09:32.669
genuinely change medicine. The potential there

00:09:32.669 --> 00:09:35.879
is just... immense, truly life-changing possibilities.

00:09:36.399 --> 00:09:38.379
And the money folks are noticing AI's potential

00:09:38.379 --> 00:09:40.559
beyond just digital stuff too, right? Again,

00:09:40.600 --> 00:09:43.340
physical labor. Yeah, absolutely. Genesis AI,

00:09:43.539 --> 00:09:46.100
they're a newer company, just secured $105 million

00:09:46.100 --> 00:09:48.940
in seed funding. That's a big seed round. Wow.

00:09:49.100 --> 00:09:52.320
And their focus is? Developing AI to automate

00:09:52.320 --> 00:09:55.159
parts of the global physical labor market, which

00:09:55.159 --> 00:09:58.879
is massive. I think $30, $40 trillion. Massive.

00:09:59.179 --> 00:10:02.100
So AI doing physical work. Yeah. Manufacturing,

00:10:02.100 --> 00:10:05.090
logistics, construction maybe. That seems to be

00:10:05.090 --> 00:10:07.509
the direction. Automating tasks that require physical

00:10:07.509 --> 00:10:10.330
action on a really grand scale. The scope is just

00:10:10.330 --> 00:10:12.710
enormous. Okay, lots of surprising developments

00:10:12.710 --> 00:10:15.429
there. But if you had to pick the most ethically

00:10:15.429 --> 00:10:17.610
challenging one we just talked about? For me, it's

00:10:17.610 --> 00:10:20.110
still managers letting AI make firing decisions.

00:10:20.110 --> 00:10:22.350
That feels like a line crossed. Yeah, that one's

00:10:22.350 --> 00:10:25.169
tough. Hard to argue with that.

00:10:25.169 --> 00:10:28.250
Okay, let's shift focus

00:10:28.250 --> 00:10:31.960
again to something almost unfathomable in scale,

00:10:32.220 --> 00:10:35.059
this idea of an AI Manhattan Project. Right.

00:10:35.179 --> 00:10:36.759
It's a term that's been floating around, but

00:10:36.759 --> 00:10:39.100
now it's starting to look, well, logistically

00:10:39.100 --> 00:10:41.100
feasible, maybe. If you connect the dots, the

00:10:41.100 --> 00:10:43.940
buzz around a U.S.-led project like this isn't

00:10:43.940 --> 00:10:46.779
just talk. It's defined as this massive government

00:10:46.779 --> 00:10:49.360
spearheaded initiative. Like the original Manhattan

00:10:49.360 --> 00:10:52.159
Project for the atomic bomb or the Apollo moon

00:10:52.159 --> 00:10:55.799
landing, but focused entirely on AI, backed by

00:10:55.799 --> 00:10:57.919
the U.S. government. And it would involve coordinating

00:10:57.919 --> 00:11:00.460
on a national scale. Not just government labs,

00:11:00.659 --> 00:11:03.559
but bringing in private compute power, too. Think

00:11:03.559 --> 00:11:07.240
NVIDIA, OpenAI, the big players. And the investment

00:11:07.240 --> 00:11:11.320
level. We're talking huge percentages of GDP,

00:11:11.620 --> 00:11:14.440
potentially up to 0.8% of U.S. GDP. Which

00:11:14.440 --> 00:11:18.500
translates to about $244 billion a year. Wow.

00:11:19.559 --> 00:11:22.100
That's an astronomical number, a serious national

00:11:22.100 --> 00:11:24.360
commitment. It really is. And the goal, what

00:11:24.360 --> 00:11:26.539
kind of power are we talking about? The projections

00:11:26.539 --> 00:11:29.000
are kind of mind-bending, something like 10,000

00:11:29.000 --> 00:11:33.059
times more powerful than even GPT-4 by 2027.

00:11:33.320 --> 00:11:37.360
10,000 times. Whoa. I mean, imagine

00:11:37.360 --> 00:11:39.740
scaling to a billion queries per second. It's

00:11:39.740 --> 00:11:42.039
hard to even conceptualize that level of processing

00:11:42.039 --> 00:11:43.879
power. It really is. And this isn't just some

00:11:43.879 --> 00:11:46.000
think tank fantasy anymore. No, it's getting

00:11:46.000 --> 00:11:48.570
real consideration. Seems like it. The U.S.-China

00:11:48.570 --> 00:11:50.370
Economic and Security Review Commission actually

00:11:50.370 --> 00:11:52.649
recommended something along these lines to Congress

00:11:52.649 --> 00:11:55.330
officially. OK, so it's reached that level. Yeah.

00:11:55.429 --> 00:11:56.870
And even the Department of Energy is apparently

00:11:56.870 --> 00:11:59.129
tweeting about it. It feels like it's moving

00:11:59.129 --> 00:12:02.990
from abstract idea to, you know, actual strategic

00:12:02.990 --> 00:12:05.549
possibility. And the U.S. might already have

00:12:05.549 --> 00:12:08.320
a head start. That's a key point. Just by pooling

00:12:08.320 --> 00:12:12.460
all the existing U.S. compute resources, supercomputers,

00:12:12.460 --> 00:12:14.720
government data centers, university clusters,

00:12:14.860 --> 00:12:17.840
all that, they estimate that consolidation alone

00:12:17.840 --> 00:12:21.000
gives you maybe a one-year head start on the

00:12:21.000 --> 00:12:23.399
typical scaling curves for AI development. So

00:12:23.399 --> 00:12:26.059
just organizing what we already have buys significant

00:12:26.059 --> 00:12:28.899
time. Right. They project that kind of setup

00:12:28.899 --> 00:12:31.580
could deliver compute power on par with what

00:12:31.580 --> 00:12:34.279
you'd expect in 2028, but potentially by the

00:12:34.279 --> 00:12:36.919
end of 2027. So it's a definite moonshot, no

00:12:36.919 --> 00:12:39.480
doubt. But the basic pieces, the infrastructure,

00:12:39.700 --> 00:12:42.159
the potential funding, they kind of already exist.

00:12:42.500 --> 00:12:44.519
It really boils down to political will, doesn't

00:12:44.519 --> 00:12:47.039
it? And massive coordination. It sounds less

00:12:47.039 --> 00:12:50.039
like a tech problem and more like a huge logistical

00:12:50.039 --> 00:12:52.600
and political challenge. Yeah. So if we had to

00:12:52.600 --> 00:12:55.179
name the main obstacle for this AI Manhattan

00:12:55.179 --> 00:12:58.139
project, what would it be? It seems like it's

00:12:58.139 --> 00:13:00.159
not really about the resources, surprisingly.

00:13:00.559 --> 00:13:03.279
It's more about achieving that political unity

00:13:03.279 --> 00:13:06.559
and the complex coordination required. Got it.

00:13:06.639 --> 00:13:10.360
Makes sense. So let's try and pull this all together.

00:13:10.799 --> 00:13:14.220
What does it all mean for us right now? Well,

00:13:14.259 --> 00:13:16.259
we've clearly blasted past just talking about

00:13:16.259 --> 00:13:19.980
AGI. We're in a full-on global race for super-

00:13:19.980 --> 00:13:22.620
intelligence. It's urgent. And AI isn't just

00:13:22.620 --> 00:13:24.840
a tech sector thing anymore. It's fundamentally

00:13:24.840 --> 00:13:27.740
changing whole industries. It's reshaping jobs.

00:13:27.940 --> 00:13:30.600
It's even forcing us to rethink our ethical rules.

00:13:30.799 --> 00:13:33.659
Yeah. The big theme is just acceleration. And

00:13:33.659 --> 00:13:36.220
integration. Stuff that felt like sci-fi a few

00:13:36.220 --> 00:13:38.440
years ago is now corporate strategy. It's government

00:13:38.440 --> 00:13:41.000
policy. It's shaping our jobs, our health prospects.

00:13:41.379 --> 00:13:43.940
Exactly. The shift isn't just in the tech itself.

00:13:44.080 --> 00:13:46.139
It's changing how we even think about intelligence,

00:13:46.259 --> 00:13:48.200
how we interact with it. It really feels like

00:13:48.200 --> 00:13:50.710
a new era dawning. Well, thank you for joining

00:13:50.710 --> 00:13:53.669
us on this deep dive into the accelerating world

00:13:53.669 --> 00:13:55.730
of AI. Hopefully you've gained some valuable

00:13:55.730 --> 00:13:58.929
insights, maybe a few surprising facts along

00:13:58.929 --> 00:14:01.889
the way. And we genuinely love hearing from you.

00:14:01.950 --> 00:14:03.769
What stood out to you today? What's your take

00:14:03.769 --> 00:14:07.309
on this AI intern era or the AI Manhattan Project?

00:14:07.649 --> 00:14:10.309
Let us know. And as we think about these huge

00:14:10.309 --> 00:14:13.309
changes, here's something to ponder. If work

00:14:13.309 --> 00:14:16.889
truly does become optional, you know, if AI handles

00:14:16.889 --> 00:14:19.899
most necessary tasks. Yeah. What new forms of

00:14:19.899 --> 00:14:22.460
human creativity, purpose, or endeavor might

00:14:22.460 --> 00:14:25.139
emerge when basic survival isn't the main driver

00:14:25.139 --> 00:14:27.799
anymore? What do we do then? That's a big question

00:14:27.799 --> 00:14:30.559
for the future. It is. Until next time, keep

00:14:30.559 --> 00:14:33.120
exploring, keep asking questions. And keep digging

00:14:33.120 --> 00:14:34.519
deeper. Stay curious.
