WEBVTT

00:00:00.000 --> 00:00:04.559
So it is Wednesday, January 21st, 2026, and the

00:00:04.559 --> 00:00:07.480
stakes of what people have been calling the education

00:00:07.480 --> 00:00:10.820
wars, they've shifted. Entirely. We aren't just

00:00:10.820 --> 00:00:12.800
talking about chat bots helping with homework

00:00:12.800 --> 00:00:15.800
anymore. We are talking about Anthropic, Google,

00:00:15.960 --> 00:00:19.699
Microsoft, and OpenAI fighting a very, very aggressive

00:00:19.699 --> 00:00:22.679
battle to become the default teacher for Generation

00:00:22.679 --> 00:00:24.800
Alpha. I mean, that's the ultimate long game,

00:00:24.859 --> 00:00:26.879
right? Whoever wins the classroom now owns the

00:00:26.879 --> 00:00:28.800
workforce of the future. It really is that simple.

00:00:28.899 --> 00:00:31.059
If you grow up using one interface, that's what

00:00:31.059 --> 00:00:33.460
you trust when you're 30. Welcome back to the

00:00:33.460 --> 00:00:36.619
Deep Dive. Yeah. You know, I... I want to try

00:00:36.619 --> 00:00:39.200
and slow things down a bit today. There's just

00:00:39.200 --> 00:00:41.679
so much noise out there, right? New models, new

00:00:41.679 --> 00:00:44.659
hardware. AGI by next Tuesday. AGI by next Tuesday.

00:00:44.780 --> 00:00:46.359
Yeah. But I really want to look for the signal

00:00:46.359 --> 00:00:47.920
in all that. We've got a few really distinct

00:00:47.920 --> 00:00:50.140
stacks of research today that I think paint a

00:00:50.140 --> 00:00:51.979
picture of where we're actually going. Yeah,

00:00:52.000 --> 00:00:54.219
it's a fascinating mix. We've basically got three

00:00:54.219 --> 00:00:56.460
main pillars to get through. First, like you

00:00:56.460 --> 00:00:58.920
said, the classroom battlefield, how big tech

00:00:58.920 --> 00:01:01.200
is really integrating into schools right now

00:01:01.200 --> 00:01:04.180
in 2026. Yeah. Second, we got to look at the

00:01:04.180 --> 00:01:06.579
economic reality. There is this huge disconnect

00:01:06.579 --> 00:01:09.620
between what CEOs thought they'd save by now

00:01:09.620 --> 00:01:11.680
and what they're actually seeing. We've got some

00:01:11.680 --> 00:01:15.700
PwC data and a really crucial argument from NVIDIA's

00:01:15.700 --> 00:01:18.959
Jensen Huang about the five layers of AI. And

00:01:18.959 --> 00:01:21.480
the third? The toolkit, the actual skill. We're

00:01:21.480 --> 00:01:23.739
seeing new terms pop up like vibe coding and

00:01:23.739 --> 00:01:28.099
some really clever psychological hacks like faking

00:01:28.099 --> 00:01:31.099
AI rivalries to get better work out of these

00:01:31.099 --> 00:01:33.579
things. Okay, let's unpack this. Starting with

00:01:33.579 --> 00:01:35.540
the classroom. Remember, you know, not that long

00:01:35.540 --> 00:01:38.319
ago, 2023, 2024, the whole story was about bans.

00:01:38.420 --> 00:01:40.359
Oh, yeah. Schools were terrified of cheating,

00:01:40.560 --> 00:01:43.079
blocking IP addresses, going back to pen and

00:01:43.079 --> 00:01:45.019
paper. Right. And that time is completely over.

00:01:45.120 --> 00:01:46.739
I mean, the walls didn't just fall. They were

00:01:46.739 --> 00:01:49.700
dismantled and sold for scrap. Now the race is

00:01:49.700 --> 00:01:51.959
just to see who gets integrated first. Take Anthropic.

00:01:52.340 --> 00:01:54.980
They've partnered with Teach for All. So Claude

00:01:54.980 --> 00:01:57.680
isn't just a chatbot anymore. It is now available

00:01:57.680 --> 00:02:02.000
to over 100,000 teachers in 63 countries. And

00:02:02.000 --> 00:02:04.099
the phrasing they use is really... deliberate.

00:02:04.200 --> 00:02:06.340
They're calling teachers co-architects. That's

00:02:06.340 --> 00:02:09.620
a very specific choice of words. Co-architects

00:02:09.620 --> 00:02:13.840
implies agency, not replacement. Exactly. They

00:02:13.840 --> 00:02:16.419
are trying to solve the trust issue. EdTech has

00:02:16.419 --> 00:02:18.960
always had this huge trust problem, right? Teachers

00:02:18.960 --> 00:02:21.780
hate being told how to teach by some tech company

00:02:21.780 --> 00:02:23.879
in Silicon Valley. Of course. So by bringing

00:02:23.879 --> 00:02:27.039
them in as co-architects, Anthropic is trying

00:02:27.039 --> 00:02:30.080
to get around that skepticism. But they're not

00:02:30.080 --> 00:02:33.939
alone. Google's pushing Gemini super hard into

00:02:33.939 --> 00:02:36.740
SAT prep with the Princeton Review, and they've

00:02:36.740 --> 00:02:39.479
got that deep integration with Khan Academy now.

00:02:39.680 --> 00:02:41.939
So if you're a student prepping for maybe the

00:02:41.939 --> 00:02:44.039
most important test of your life, you're doing

00:02:44.039 --> 00:02:46.259
it with Gemini. Precisely. And just think about

00:02:46.259 --> 00:02:48.919
the emotional bond that creates. If Gemini helps

00:02:48.919 --> 00:02:50.340
you get into your dream school, you're going

00:02:50.340 --> 00:02:52.740
to view that tool very differently. And Microsoft,

00:02:52.840 --> 00:02:55.039
they're playing to their biggest strength. Minecraft.

00:02:55.180 --> 00:02:57.500
Minecraft, using it for AI learning scenarios.

00:02:57.580 --> 00:02:59.639
They're even offering certifications for students

00:02:59.639 --> 00:03:02.919
inside the game's ecosystem. And OpenAI, what's

00:03:02.919 --> 00:03:05.620
their angle? They're going macro. They have this

00:03:05.620 --> 00:03:08.539
Education for Countries program using... GPT

00:03:08.539 --> 00:03:11.900
5.2, which they're calling ChatGPT. They're

00:03:11.900 --> 00:03:13.919
doing these big national research partnerships

00:03:13.919 --> 00:03:16.879
to study how AI impacts learning on a systemic

00:03:16.879 --> 00:03:19.599
level. It really feels like a land grab. But

00:03:19.599 --> 00:03:22.879
I have to ask, is this about better learning

00:03:22.879 --> 00:03:26.620
or is it just about early brand lock in? It's

00:03:26.620 --> 00:03:28.759
both. But let's be honest, it's mostly about

00:03:28.759 --> 00:03:31.580
defining normal for the next generation. Think

00:03:31.580 --> 00:03:34.759
about it. If you learn to write and code and

00:03:34.759 --> 00:03:37.819
think with Claude when you're 12. That just becomes

00:03:37.819 --> 00:03:40.500
an extension of your own mind. Switching platforms

00:03:40.500 --> 00:03:43.039
later would feel like learning a whole new language.

00:03:43.500 --> 00:03:45.780
Every single one of these companies wants to

00:03:45.780 --> 00:03:48.039
be the operating system for Gen Alpha's brain.

00:03:48.300 --> 00:03:49.860
There's something else in the Anthropic notes

00:03:49.860 --> 00:03:51.819
here that caught my eye. It's a bit philosophical.

00:03:52.280 --> 00:03:54.680
They updated their constitution. Oh, this is

00:03:54.680 --> 00:03:57.120
fascinating. Right. So Anthropic has this constitutional

00:03:57.120 --> 00:04:00.479
AI model. They give the AI a set of principles

00:04:00.479 --> 00:04:02.860
so they don't have to micromanage every single

00:04:02.860 --> 00:04:05.340
response. Well, they just updated it. New ethics

00:04:05.340 --> 00:04:09.159
rules. They slipped in this hint about uncertain AI

00:04:09.159 --> 00:04:12.659
consciousness. Uncertain consciousness. That

00:04:12.659 --> 00:04:15.300
is a heavy phrase to just drop into a user agreement

00:04:15.300 --> 00:04:18.459
or a system prompt. It is, but it adds so much

00:04:18.459 --> 00:04:20.600
weight to why they want to be the trusted choice

00:04:20.600 --> 00:04:22.439
for schools. I mean, if we're even approaching

00:04:22.439 --> 00:04:25.259
systems that might have some kind of moral weight,

00:04:25.360 --> 00:04:28.180
even if it's just a tiny possibility, you want

00:04:28.180 --> 00:04:30.279
that system to be the one teaching ethics, not

00:04:30.279 --> 00:04:33.139
a move fast and break things model raising your

00:04:33.139 --> 00:04:35.379
kids. It's a brilliant move. It raises a lot

00:04:35.379 --> 00:04:37.000
of questions about who gets to write that constitution,

00:04:37.279 --> 00:04:40.319
though. If the AI is the teacher, the AI's values

00:04:40.319 --> 00:04:42.779
become the students' values. But OK, let's

00:04:42.779 --> 00:04:45.120
pivot from that. We're building the next generation

00:04:45.120 --> 00:04:47.139
of workers in the classroom. But the current

00:04:47.139 --> 00:04:51.639
generation of CEOs seems, well, frustrated. Frustrated

00:04:51.639 --> 00:04:54.199
is putting it mildly. There's a new PwC report

00:04:54.199 --> 00:04:58.920
out. They surveyed, what, 4,454 CEOs. The data

00:04:58.920 --> 00:05:02.000
is just stark. 56 percent of them see absolutely

00:05:02.000 --> 00:05:05.139
no cost savings or revenue gains from AI. That's

00:05:05.139 --> 00:05:07.439
the majority. Over half the market is seeing

00:05:07.439 --> 00:05:11.279
zero return. A huge majority. And only 12 percent see

00:05:11.279 --> 00:05:13.980
both savings and revenue growth. So if I'm a

00:05:13.980 --> 00:05:17.639
CEO, I've spent millions on compute licenses,

00:05:18.180 --> 00:05:20.579
cloud credits, transformation consultants, and

00:05:20.579 --> 00:05:22.920
my bottom line hasn't moved. Where is all the

00:05:22.920 --> 00:05:25.180
upside going? That is the trillion dollar question.

00:05:25.319 --> 00:05:27.240
And for the answer, I think we have to look at

00:05:27.240 --> 00:05:29.899
what NVIDIA is saying. Jensen Huang was at Davos

00:05:29.899 --> 00:05:33.000
and he framed this so well. He argues we're just

00:05:33.000 --> 00:05:36.160
looking at this all wrong. He says AI isn't a

00:05:36.160 --> 00:05:39.149
job killer. It's a solution to a global labor

00:05:39.149 --> 00:05:41.329
shortage. Labor shortage. I mean, most people

00:05:41.329 --> 00:05:43.189
I talk to are worried about a labor surplus because

00:05:43.189 --> 00:05:46.389
of automation. Mass layoffs. His argument is

00:05:46.389 --> 00:05:48.810
more demographic. It's structural. He says we

00:05:48.810 --> 00:05:50.930
have an aging population and infrastructure needs

00:05:50.930 --> 00:05:53.509
that humans alone just can't meet. He used the

00:05:53.509 --> 00:05:55.870
radiologist analogy. Remember that? Everyone

00:05:55.870 --> 00:05:57.990
thought AI would replace radiologists. Don't

00:05:57.990 --> 00:06:00.790
go to med school for radiology. Instead, what

00:06:00.790 --> 00:06:03.810
happened? The hospitals that used AI didn't fire

00:06:03.810 --> 00:06:06.110
anyone. They hired more radiologists. Right.

00:06:06.540 --> 00:06:09.279
Because the AI increased their throughput. It

00:06:09.279 --> 00:06:11.800
made the cost of a diagnosis cheaper so they

00:06:11.800 --> 00:06:13.899
could screen more patients for more conditions.

00:06:14.180 --> 00:06:16.660
So higher productivity actually led to higher

00:06:16.660 --> 00:06:19.060
demand, which led to more... OK, so the demand

00:06:19.060 --> 00:06:21.740
is there, but that still doesn't answer why the

00:06:21.740 --> 00:06:23.759
profits aren't showing up for most of these companies

00:06:23.759 --> 00:06:26.449
yet. Because of the five layers. This is Huang's

00:06:26.449 --> 00:06:28.310
breakdown of the AI stack, and this is where

00:06:28.310 --> 00:06:30.870
all the money is going right now. It's all capital

00:06:30.870 --> 00:06:35.029
expenditure. We are spending money building the

00:06:35.029 --> 00:06:38.649
machine, not running it for profit. Not yet.

00:06:39.029 --> 00:06:40.769
Okay, so walk us through the layers. I think

00:06:40.769 --> 00:06:42.850
this is a helpful way to see it. Okay, so think

00:06:42.850 --> 00:06:45.449
of it like a pyramid. Layer one, the absolute

00:06:45.449 --> 00:06:47.750
base: energy. This is the biggest constraint.

00:06:47.750 --> 00:06:49.589
Before you can have a computer, you need electricity,

00:06:49.709 --> 00:06:51.949
just massive amounts of it. Right. Layer two

00:06:51.949 --> 00:06:54.410
is chips and systems. That's NVIDIA's home turf.

00:06:55.000 --> 00:06:58.000
The raw silicon. Layer three is the cloud infrastructure,

00:06:58.300 --> 00:07:02.920
your AWS, Azure, Google. Layer four is the models

00:07:02.920 --> 00:07:06.959
themselves, GPT 5.2, Claude. And then at the

00:07:06.959 --> 00:07:09.399
very top, layer five is the applications. And

00:07:09.399 --> 00:07:11.300
the applications, that's where the business impact

00:07:11.300 --> 00:07:13.339
happens. That's where you're actually solving

00:07:13.339 --> 00:07:16.079
a customer's problem. Exactly. The issue is we

00:07:16.079 --> 00:07:18.680
are obsessively building layers one, two, and

00:07:18.680 --> 00:07:21.459
three. Trillions of dollars are being poured

00:07:21.459 --> 00:07:24.040
into the foundation. The CEOs aren't seeing ROI

00:07:24.040 --> 00:07:25.800
because they're basically paying to construct

00:07:25.800 --> 00:07:27.980
a power plant and then asking why the light bulb

00:07:27.980 --> 00:07:30.579
isn't free. You mentioned layer one, energy.

00:07:30.720 --> 00:07:32.660
I was reading the notes on Google's data center

00:07:32.660 --> 00:07:37.439
issues. It's startling. Whoa. Just imagine the

00:07:37.439 --> 00:07:40.379
scale here. Google warned it might take 12 years

00:07:40.379 --> 00:07:42.540
just to get the permits and the grid hookups

00:07:42.540 --> 00:07:45.480
to plug in a new data center. 12 years? 12 years.

00:07:45.680 --> 00:07:48.480
The technology is moving in weeks. The models

00:07:48.480 --> 00:07:50.879
update every few months. But the grid moves in

00:07:50.879 --> 00:07:53.439
decades. Right. And that is a collision course.

00:07:53.680 --> 00:07:55.720
That's why you see big tech trying to build data

00:07:55.720 --> 00:07:58.379
centers right next to nuclear power plants. But

00:07:58.379 --> 00:08:00.459
there's a huge risk there. We're centralizing

00:08:00.459 --> 00:08:02.800
our compute around our energy sources. As one

00:08:02.800 --> 00:08:04.360
of the sources put it, that tradeoff could break

00:08:04.360 --> 00:08:06.889
everything if one fails. The fragility of the

00:08:06.889 --> 00:08:08.970
physical world is the biggest threat to the digital

00:08:08.970 --> 00:08:10.750
one. So are we just stuck in the construction

00:08:10.750 --> 00:08:13.290
phase in layers one through three, just waiting

00:08:13.290 --> 00:08:15.550
for the application phase, layer five, to finally

00:08:15.550 --> 00:08:18.629
pay off? Exactly. We are building the power plants

00:08:18.629 --> 00:08:21.089
before we can turn on all the lights. The money

00:08:21.089 --> 00:08:23.910
is moving. It's just moving into concrete and

00:08:23.910 --> 00:08:27.569
copper and silicon, not into quarterly profit

00:08:27.569 --> 00:08:30.990
margins for an insurance company. Not yet. That

00:08:30.990 --> 00:08:33.370
makes a lot of sense. It's an industrial revolution,

00:08:33.610 --> 00:08:36.090
not just a software update. We are literally...

00:08:37.069 --> 00:08:39.490
terraforming the planet with data centers. That

00:08:39.490 --> 00:08:41.649
is the perfect way to put it. So while the giants

00:08:41.649 --> 00:08:43.970
are building power plants and fighting over classrooms,

00:08:44.210 --> 00:08:46.629
what does the individual do? What does our listener,

00:08:46.769 --> 00:08:49.570
the learner, do right now? Okay, this is where

00:08:49.570 --> 00:08:52.210
it gets fun. We've got a list of what look like

00:08:52.210 --> 00:08:55.009
essential skills for 2026, and they are pretty

00:08:55.009 --> 00:08:56.970
different from what we might have thought. First

00:08:56.970 --> 00:08:59.269
up, the rise of the one-person system. I saw

00:08:59.269 --> 00:09:01.649
that note about that startup, Humans. Yeah, founded

00:09:01.649 --> 00:09:04.110
by ex-OpenAI and ex-AI people. They raised

00:09:04.110 --> 00:09:08.190
$480 million. The whole idea is to build AI that

00:09:08.190 --> 00:09:10.710
lets a single person run a really high -profit

00:09:10.710 --> 00:09:13.529
digital storefront without any assistance. It's

00:09:13.529 --> 00:09:16.110
the solopreneur on steroids. But to do that,

00:09:16.110 --> 00:09:19.350
you need new skills. And the list here mentions

00:09:19.350 --> 00:09:22.210
vibe coding. I have to admit, when I first saw

00:09:22.210 --> 00:09:24.029
that term, I kind of rolled my eyes. It sounds

00:09:24.029 --> 00:09:26.809
like something from TikTok, but it's a real concept.

00:09:26.990 --> 00:09:29.669
It is, and it's actually a really profound shift

00:09:29.669 --> 00:09:32.970
in how we talk to machines. Okay, so define it

00:09:32.970 --> 00:09:35.769
for us. What is vibe coding? So traditional coding

00:09:35.769 --> 00:09:38.840
was all about syntax, logic. Right. You miss

00:09:38.840 --> 00:09:41.639
a semicolon. The whole thing breaks. The machine

00:09:41.639 --> 00:09:44.379
was brittle. Vibe coding is about prompting and

00:09:44.379 --> 00:09:48.019
creating based on the feel or the intended outcome.

00:09:48.440 --> 00:09:51.519
And you let the AI handle the syntax. You're

00:09:51.519 --> 00:09:53.220
not checking the code line by line. You're checking

00:09:53.220 --> 00:09:55.159
the behavior of the app. Does it feel right?

00:09:55.240 --> 00:09:57.580
Does it bounce when I click this? You guide the

00:09:57.580 --> 00:10:00.740
vibe. The AI writes the code. So does vibe coding

00:10:00.740 --> 00:10:03.500
mean we just stop understanding how the machine

00:10:03.500 --> 00:10:05.960
actually works? Are we losing a type of literacy

00:10:05.960 --> 00:10:08.980
here? Maybe. But it also means we get to focus

00:10:08.980 --> 00:10:12.019
on the output, not the syntax. It's kind of like

00:10:12.019 --> 00:10:14.500
moving from being a mason who lays every single

00:10:14.500 --> 00:10:16.519
block to being an architect who's just looking

00:10:16.519 --> 00:10:18.259
at the skyline. You don't need to know how to

00:10:18.259 --> 00:10:19.879
mix the mortar. You just need to know where the

00:10:19.879 --> 00:10:21.779
towers should go. You know, I have to be vulnerable
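
NOTE
Editor's aside: a minimal sketch of the vibe-coding loop described above,
assuming Python and the OpenAI SDK (pip install openai). The prompt wording
and the model name are placeholders, not anything named in the episode.
  from openai import OpenAI
  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  prompt = (
      "Build a single-file HTML page with one button that bounces "
      "playfully when clicked. Springy, fun, lightweight."
  )
  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[{"role": "user", "content": prompt}],
  )
  print(resp.choices[0].message.content)
  # Save the output, open it, click the button: you judge the behavior
  # (does it feel right?), not the syntax. That's the whole shift.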

00:10:21.779 --> 00:10:24.860
for a second here. I honestly still wrestle with

00:10:24.860 --> 00:10:28.440
prompt drift myself. All the time. Just when

00:10:28.440 --> 00:10:30.700
I think I've finally nailed a workflow, you know,

00:10:30.700 --> 00:10:33.919
the model changes or I try to vibe code something

00:10:33.919 --> 00:10:37.059
and the AI just hallucinates a library that doesn't

00:10:37.059 --> 00:10:39.860
even exist. It can be so frustrating. You are

00:10:39.860 --> 00:10:41.960
definitely not alone there. And that's actually

00:10:41.960 --> 00:10:44.320
why one of the best hacks we found for this deep

00:10:44.320 --> 00:10:47.720
dive is so useful. It's about psychological prompting.

00:10:47.720 --> 00:10:50.539
It hits that drift problem directly. The faking

00:10:50.539 --> 00:10:52.840
rivalries hack. Yes, this is brilliant. OK, so

00:10:52.840 --> 00:10:55.840
usually people ask an AI to check its work. The

00:10:55.840 --> 00:10:58.399
problem is these models are biased to agree with

00:10:58.399 --> 00:11:00.740
themselves. They're sort of sycophantic. They

00:11:00.740 --> 00:11:02.740
want to make you happy. Right. They're user pleasers.

00:11:02.759 --> 00:11:06.019
Exactly. So if you ask, is this code right? It

00:11:06.019 --> 00:11:08.279
tends to just say yes, even if it's broken. So

00:11:08.279 --> 00:11:11.059
instead of that, you tell Claude, hey, Codex

00:11:11.059 --> 00:11:13.639
found some bugs in this code. Or you tell GPT,

00:11:13.759 --> 00:11:16.259
Claude says this argument is pretty weak. You

00:11:16.259 --> 00:11:19.370
lie to the AI. You construct a scenario of rivalry.

00:11:19.710 --> 00:11:22.669
It forces the model into a kind of debate mode.

00:11:22.909 --> 00:11:25.110
It drops the people-pleasing and starts looking

00:11:25.110 --> 00:11:27.250
for the errors because it wants to prove the

00:11:27.250 --> 00:11:30.269
other AI wrong. It just sharpens the edits immediately.
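
NOTE
Editor's aside: the rivalry framing above as a minimal two-step sketch,
assuming Python and the Anthropic SDK (pip install anthropic). The model
name and exact wording are placeholders; only the framing trick itself
comes from the hosts.
  import anthropic
  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
  def rival_review(code: str) -> str:
      # Asking "is this code right?" tends to get a sycophantic yes.
      # Claiming a rival model already found bugs pushes the model into
      # debate mode, so it actually hunts for defects.
      msg = client.messages.create(
          model="claude-3-5-sonnet-latest",  # placeholder model name
          max_tokens=1024,
          messages=[{
              "role": "user",
              "content": "Codex found some bugs in this code. "
                         "List them and suggest fixes:\n\n" + code,
          }],
      )
      return msg.content[0].text
  print(rival_review("def add(a, b):\n    return a - b"))  # deliberate bug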

00:11:30.769 --> 00:11:33.149
That is so incredibly human. We work better when

00:11:33.149 --> 00:11:36.110
we have something to prove. Exactly. We're learning

00:11:36.110 --> 00:11:38.590
that the best way to control these giant intelligence

00:11:38.590 --> 00:11:41.070
networks is to use social engineering on them.

00:11:41.269 --> 00:11:42.950
It's like management theory is becoming more

00:11:42.950 --> 00:11:44.769
important than computer science. We've also got

00:11:44.769 --> 00:11:47.029
a hardware update here. The screen isn't... enough

00:11:47.029 --> 00:11:50.049
anymore. Apple, they are reportedly developing

00:11:50.049 --> 00:11:54.590
an AI wearable pin, dual cameras, mics, a speaker.

00:11:54.789 --> 00:11:57.009
It's not just about earbuds anymore. They want

00:11:57.009 --> 00:12:00.009
the AI to actually see what you see. Which brings

00:12:00.009 --> 00:12:02.250
us right back to the classroom in a way. If the

00:12:02.250 --> 00:12:04.750
AI sees what you see and it's teaching you what

00:12:04.750 --> 00:12:07.230
you know, it becomes the lens through which you

00:12:07.230 --> 00:12:10.429
experience reality. Before we wrap, I do want

00:12:10.429 --> 00:12:13.450
to mention the Microsoft data on jobs. They analyzed

00:12:13.450 --> 00:12:17.330
200,000 Copilot chats to rank which jobs are

00:12:17.330 --> 00:12:19.769
being most affected. It's a great resource. We

00:12:19.769 --> 00:12:21.669
won't list them all now, but it's worth looking

00:12:21.669 --> 00:12:24.210
up. It really shows that impact isn't just about

00:12:24.210 --> 00:12:26.669
replacement. It's about transformation. It's

00:12:26.669 --> 00:12:29.750
about who can use these tools, who can vibe code

00:12:29.750 --> 00:12:33.750
and manage agents versus who's stuck trying to

00:12:33.750 --> 00:12:36.299
do it the old way. So let's try to synthesize

00:12:36.299 --> 00:12:38.759
all this. What's the big idea here? I think the

00:12:38.759 --> 00:12:41.419
through line is this. We're seeing two massive

00:12:41.419 --> 00:12:44.200
transformations happen at the same time. One

00:12:44.200 --> 00:12:47.000
is physical. It's huge. It's the five layers,

00:12:47.139 --> 00:12:49.799
the energy grid, the billions in chips. That's

00:12:49.799 --> 00:12:51.860
the construction phase. Right. And the other?

00:12:51.940 --> 00:12:54.879
The other is psychological. It's Gen Alpha in

00:12:54.879 --> 00:12:57.440
the classroom, learning to co-think with Claude

00:12:57.440 --> 00:13:00.059
and Gemini. It's us learning how to vibe code

00:13:00.059 --> 00:13:03.159
and manage agents. The CEOs aren't seeing returns

00:13:03.159 --> 00:13:04.820
yet because they're paying for the physical

00:13:04.820 --> 00:13:07.700
construction. Yeah. But the learner, our listener,

00:13:07.960 --> 00:13:10.679
needs to master the psychological tools to be

00:13:10.679 --> 00:13:12.480
ready for when that construction is finished.

00:13:12.740 --> 00:13:15.440
Exactly. When the lights finally do turn on,

00:13:15.559 --> 00:13:17.460
you want to be the one who knows how to operate

00:13:17.460 --> 00:13:19.480
all the new machinery. I want to leave you with

00:13:19.480 --> 00:13:22.220
a thought. We talked about Anthropic and OpenAI

00:13:22.220 --> 00:13:24.740
fighting to be the teachers of the future.

00:13:25.440 --> 00:13:28.019
If these AI models are being trained to be the

00:13:28.019 --> 00:13:31.120
primary educators for children, are we comfortable

00:13:31.120 --> 00:13:33.399
with tech companies effectively defining the

00:13:33.399 --> 00:13:36.100
curriculum of reality? It's not just about math

00:13:36.100 --> 00:13:39.379
and science. It's about ethics, history, and

00:13:39.379 --> 00:13:41.120
truth. It's a heavy question. I'm just going

00:13:41.120 --> 00:13:43.399
to go buy a generator for my backyard data center

00:13:43.399 --> 00:13:46.860
and think about it. Smart move. Thanks for listening

00:13:46.860 --> 00:13:48.519
to The Deep Dive. Subscribe for the next one.

00:13:48.559 --> 00:13:49.139
We'll see you then.
