WEBVTT

00:00:00.000 --> 00:00:02.660
Is the current boom in artificial intelligence

00:00:02.660 --> 00:00:06.219
like a financial bubble just waiting to pop,

00:00:06.379 --> 00:00:10.380
wiping out trillions like back in 2001? Or is

00:00:10.380 --> 00:00:11.919
it something really fundamentally different,

00:00:11.960 --> 00:00:15.099
almost like a self-perpetuating infinite money

00:00:15.099 --> 00:00:17.519
glitch? It's kind of wild to think about.

00:00:17.519 --> 00:00:20.809
[Two-second silence.] Yeah. Today, we're diving deep.

00:00:20.929 --> 00:00:24.609
We've got 12 essential lessons from 28 intense

00:00:24.609 --> 00:00:27.850
months right there in the AI trenches. We're

00:00:27.850 --> 00:00:30.230
going to unpack why the underlying tech is absolutely

00:00:30.230 --> 00:00:32.750
transformative, where the actual money is being

00:00:32.750 --> 00:00:34.990
made right now. And that critical skill you really

00:00:34.990 --> 00:00:38.350
need to survive the coming labor disruption.

00:00:38.469 --> 00:00:41.950
It's serious stuff. Welcome back, everyone. Yeah,

00:00:41.969 --> 00:00:43.689
welcome back to the deep dive. And look, this

00:00:43.689 --> 00:00:45.590
isn't just theory, right? We're looking at patterns

00:00:45.590 --> 00:00:47.710
drawn from people with actual skin in the game.

00:00:47.850 --> 00:00:50.600
Right, moving past those, you know, academic debates

00:00:50.600 --> 00:00:52.920
you usually hear. Exactly. Okay, so let's unpack

00:00:52.920 --> 00:00:55.780
this. We've got a roadmap, three major areas to

00:00:55.780 --> 00:00:59.799
explore today. First, the financial reality: why

00:00:59.799 --> 00:01:02.640
this feels more like a VC bubble, maybe not a

00:01:02.640 --> 00:01:05.299
tech bubble, and how NVIDIA ended up being this

00:01:05.299 --> 00:01:10.129
central, indispensable player. Okay. Then second,

00:01:10.209 --> 00:01:11.730
we'll shift to the technical change. We're going

00:01:11.730 --> 00:01:15.109
to look at why smaller, more focused models using

00:01:15.109 --> 00:01:17.950
things like reinforcement learning, why they're

00:01:17.950 --> 00:01:20.469
actually taking the lead now over those huge

00:01:20.469 --> 00:01:22.650
generic ones we saw dominate the last couple

00:01:22.650 --> 00:01:26.930
of years. And finally, the human cost. The very

00:01:26.930 --> 00:01:29.769
real job displacement we see coming and this

00:01:29.769 --> 00:01:31.790
kind of counterintuitive idea, the return to

00:01:31.790 --> 00:01:35.150
code. Let's jump in. Where do we start? The money.

00:01:35.290 --> 00:01:36.810
Let's start with the money. The big question.

00:01:37.340 --> 00:01:39.680
Bubble or not? Right, the bubble debate, it's constant.

00:01:39.680 --> 00:01:41.620
You know, there's this solid heuristic, this rule

00:01:41.620 --> 00:01:44.180
of thumb: if everyone is screaming bubble, it probably

00:01:44.180 --> 00:01:46.000
isn't going to be that sudden crash they're imagining.

00:01:46.000 --> 00:01:48.900
Yeah, it's different from, say, 2008 or the crypto

00:01:48.900 --> 00:01:51.420
thing in 2021, where there wasn't as much skepticism

00:01:51.420 --> 00:01:55.719
beforehand. Exactly. But I mean, if revenue growth

00:01:55.719 --> 00:01:59.299
is genuinely explosive... You mentioned Anthropic

00:01:59.299 --> 00:02:01.700
reporting, like, 10x year-over-year growth,

00:02:01.920 --> 00:02:04.879
NVIDIA smashing expectations quarter after quarter.

00:02:05.000 --> 00:02:06.739
Why are we even still calling this a bubble?

00:02:06.879 --> 00:02:10.240
Isn't the VC side just scrambling to catch up

00:02:10.240 --> 00:02:12.240
to something totally new? That's the tricky part,

00:02:12.360 --> 00:02:14.479
isn't it? It is. And honestly, the people calling

00:02:14.479 --> 00:02:16.719
bubble, they do have some strong short-term

00:02:16.719 --> 00:02:19.169
points. The investment levels are... Well, they're

00:02:19.169 --> 00:02:23.169
insane. Like $17 million seed rounds. Billions

00:02:23.169 --> 00:02:25.490
for companies with no actual product yet. Yeah.

00:02:25.569 --> 00:02:28.009
Billions for just talent, like getting Andrej

00:02:28.009 --> 00:02:31.169
Karpathy or Ilya Sutskever on board. It's nuts.

00:02:31.430 --> 00:02:33.870
Right. Hard to justify that unless the underlying

00:02:33.870 --> 00:02:36.580
tech was really, really real. Precisely. And

00:02:36.580 --> 00:02:38.719
it is. The core tech is delivering real value

00:02:38.719 --> 00:02:41.419
now. It's writing code, analyzing data, making

00:02:41.419 --> 00:02:43.539
people more productive. That's why it's not a

00:02:43.539 --> 00:02:46.560
technology bubble you see. OK. But the overfunded,

00:02:46.680 --> 00:02:49.360
often unprofitable startups built on top of it.

00:02:50.039 --> 00:02:51.780
Yeah, a lot of those are going to collapse. It's

00:02:51.780 --> 00:02:53.639
more like a VC bubble riding on a tech revolution.

00:02:53.639 --> 00:02:56.300
So like a correction is coming for those weak

00:02:56.300 --> 00:02:59.099
players. Definitely likely. Maybe a 10, even

00:02:59.099 --> 00:03:01.139
30 percent market correction when those companies

00:03:01.139 --> 00:03:05.360
fail. But the core tech. It sticks. Okay. That

00:03:05.360 --> 00:03:07.939
makes sense. And this brings us to that weird

00:03:07.939 --> 00:03:11.080
infinite money glitch thing, the loop between

00:03:11.080 --> 00:03:14.539
NVIDIA and the big AI labs. Ah, yes. Yeah. Strange

00:03:14.539 --> 00:03:17.060
loop. It's where the structural friction really

00:03:17.060 --> 00:03:18.939
lies. It's like the ultimate capital recycling

00:03:18.939 --> 00:03:21.520
machine. How does it work? Exactly. So NVIDIA

00:03:21.520 --> 00:03:24.039
acts like a VC, right? They pour huge capital,

00:03:24.139 --> 00:03:26.699
let's say $100 billion just as an example, into

00:03:26.699 --> 00:03:31.300
a lab like OpenAI. Okay. But OpenAI has to spend

00:03:31.300 --> 00:03:34.439
that money on compute. Which means... NVIDIA's

00:03:34.439 --> 00:03:36.900
GPUs. They're really high-margin graphics cards.

00:03:37.180 --> 00:03:39.400
Exactly. The money flows straight back to NVIDIA's

00:03:39.400 --> 00:03:41.460
revenue. Their stock price goes up. Which lets

00:03:41.460 --> 00:03:42.879
them make more investments and the loop just

00:03:42.879 --> 00:03:44.520
keeps going. It's brilliant. A self-feeding

00:03:44.520 --> 00:03:47.759
system. Genius, really. But it exposes this tough

00:03:47.759 --> 00:03:49.800
truth, doesn't it? Pretty much everyone else

00:03:49.800 --> 00:03:53.840
in the AI game, the labs, the startups, they're

00:03:53.840 --> 00:03:56.560
losing money. Burning millions, billions on training

00:03:56.560 --> 00:04:00.879
costs. Absolutely. Jensen Huang and NVIDIA. They're

00:04:00.879 --> 00:04:02.979
selling the picks and shovels in this gold rush.

00:04:03.120 --> 00:04:05.879
They've got the guaranteed massive profit margins.
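
The loop the hosts describe can be sketched with a few lines of arithmetic. This is a toy model with illustrative numbers only: the $100 billion is the example figure from the conversation, and the 75% gross margin is a rough stand-in for data-center GPU margins, not a sourced number. One thing the arithmetic makes visible: absent outside capital, each pass of the loop shrinks by the margin factor, so it is self-feeding but not literally infinite.

```python
def recycle(investment: float, gross_margin: float, rounds: int) -> list[float]:
    """Each round: NVIDIA invests in a lab, the lab spends it all on
    NVIDIA compute, and the resulting gross profit funds the next round."""
    flows = []
    cash = investment
    for _ in range(rounds):
        lab_compute_spend = cash            # lab buys GPUs with the investment
        profit = lab_compute_spend * gross_margin
        flows.append(profit)
        cash = profit                       # profit becomes the next investment
    return flows

print(recycle(100.0, 0.75, 3))  # each pass shrinks by the margin factor
```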

00:04:06.699 --> 00:04:08.879
Everyone else is just spending. And that must

00:04:08.879 --> 00:04:12.199
be why Google's pushing their TPUs so hard. Amazon's

00:04:12.199 --> 00:04:14.639
building custom silicon. Meta's designing their

00:04:14.639 --> 00:04:17.759
own chips. They have to. It's desperation. They

00:04:17.759 --> 00:04:20.860
need to break this dependency on NVIDIA to control

00:04:20.860 --> 00:04:23.360
their own destiny, frankly, just to survive long

00:04:23.360 --> 00:04:26.120
term. Okay, so if owning the hardware, the compute

00:04:26.120 --> 00:04:29.300
is the main battleground. What's the single biggest

00:04:29.300 --> 00:04:31.819
constraint holding back AI growth and revenue

00:04:31.819 --> 00:04:34.660
right now? If labs could just get more GPUs,

00:04:34.660 --> 00:04:37.259
could they make more money? Compute. It's the

00:04:37.259 --> 00:04:39.839
ultimate physical choke point. It limits how

00:04:39.839 --> 00:04:41.959
fast labs can expand and even meet the demand

00:04:41.959 --> 00:04:44.779
they already have. Labs like Anthropic and OpenAI

00:04:44.779 --> 00:04:48.040
could likely 2x or 3x their revenue today if

00:04:48.040 --> 00:04:50.639
they just had more GPUs. So compute is the constraint.

00:04:50.819 --> 00:04:53.180
Got it. The bottleneck limiting revenue and expansion.

00:04:53.660 --> 00:04:55.579
[Mid-roll sponsor read placeholder.] All right,

00:04:55.600 --> 00:04:57.779
we're back. So if hardware is the choke point,

00:04:57.920 --> 00:04:59.620
how are companies fighting back? It sounds like

00:04:59.620 --> 00:05:01.740
they're changing the definition of what good

00:05:01.740 --> 00:05:04.579
AI even means. That's right. Which brings us

00:05:04.579 --> 00:05:06.180
neatly into the technical shifts we're seeing.

00:05:06.300 --> 00:05:09.379
Yeah, the whole era of just building the biggest

00:05:09.379 --> 00:05:12.300
possible model. That's kind of ending, you know,

00:05:12.300 --> 00:05:14.779
like a lot of analysts found some recent releases

00:05:14.779 --> 00:05:17.360
disappointing. But they were intentionally smaller,

00:05:17.579 --> 00:05:20.120
weren't they? Smaller than previous versions.

00:05:20.279 --> 00:05:22.699
Exactly. It wasn't about ideology. It was driven

00:05:22.699 --> 00:05:25.420
by that cost barrier we talked about. Yeah. Size

00:05:25.420 --> 00:05:27.639
just started giving diminishing returns on performance

00:05:27.639 --> 00:05:30.660
compared to the crazy jump in compute cost. So

00:05:30.660 --> 00:05:33.259
efficiency and speed are the new game. That's

00:05:33.259 --> 00:05:35.459
the real frontier now, efficiency and speed.

00:05:35.740 --> 00:05:38.060
And the breakthrough that's really keeping the

00:05:38.060 --> 00:05:40.699
momentum going is reinforcement learning, or

00:05:40.699 --> 00:05:44.410
RL. OK, define RL for us quickly. Sure. It's

00:05:44.410 --> 00:05:47.170
basically when the AI learns by doing a task,

00:05:47.250 --> 00:05:48.850
like actually trying to write code or play a

00:05:48.850 --> 00:05:51.490
game, and then it gets rewards or penalties based

00:05:51.490 --> 00:05:53.529
on how well it did. So it's moving beyond just

00:05:53.529 --> 00:05:56.009
reading like all the text on the Internet. Right.

00:05:56.110 --> 00:05:58.449
It's learning from action and consequence. And

00:05:58.449 --> 00:06:01.050
what's fascinating is how fast the AIs are getting

00:06:01.050 --> 00:06:04.410
better at coding and math specifically. Why those

00:06:04.410 --> 00:06:08.160
deterministic tasks? Tasks with clear right or

00:06:08.160 --> 00:06:10.339
wrong answers? Why are they improving so much

00:06:10.339 --> 00:06:13.040
faster than, say, creative writing? Because the

00:06:13.040 --> 00:06:15.500
feedback is clean. If the code doesn't compile,

00:06:15.699 --> 00:06:18.269
boom, wrong path. If the math equation fails,

00:06:18.430 --> 00:06:21.910
nope, try again. The AI immediately knows to

00:06:21.910 --> 00:06:24.589
discard that way of thinking. Ah, I see. So it

00:06:24.589 --> 00:06:26.990
can generate tons of attempts. Exactly. Like,

00:06:27.050 --> 00:06:30.709
try a solution 500 different ways, find the one

00:06:30.709 --> 00:06:32.870
successful reasoning trace, the one path that

00:06:32.870 --> 00:06:35.709
worked, and then train itself just on that optimal

00:06:35.709 --> 00:06:37.790
path. It's an amazing synthetic data advantage.
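
The "try 500 ways, keep the winning trace" idea is essentially rejection sampling against a verifier. Here is a minimal sketch of that loop; the `sample_attempt` and `passes_tests` functions are hypothetical stand-ins for model sampling and a real compile-and-test harness, and the tiny candidate pool is purely for illustration.

```python
import random

def passes_tests(candidate: str) -> bool:
    """Stand-in verifier: in a real RL pipeline this would be the compiler
    plus a unit-test suite giving clean pass/fail feedback."""
    return candidate.endswith("return a + b")

def sample_attempt(rng: random.Random) -> str:
    """Stand-in for sampling one reasoning trace + program from a model."""
    body = rng.choice(["return a - b", "return a * b", "return a + b"])
    return f"def add(a, b): {body}"

def collect_winning_traces(n_attempts: int, seed: int = 0) -> list[str]:
    """Try many times, keep only attempts the verifier accepts; the
    survivors become fine-tuning data for the next training round."""
    rng = random.Random(seed)
    return [a for _ in range(n_attempts)
            if passes_tests(a := sample_attempt(rng))]

winners = collect_winning_traces(500)
print(f"{len(winners)}/500 traces kept for training")
```

The deterministic verifier is what makes this work: every kept trace is known-good, so the model trains only on successful reasoning paths.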

00:06:38.230 --> 00:06:40.709
Wow. It's pretty humbling, actually, seeing how

00:06:40.709 --> 00:06:42.829
fast those areas are moving. I still wrestle

00:06:42.829 --> 00:06:44.870
with prompt drift myself. What's prompt drift

00:06:44.870 --> 00:06:47.029
exactly? For listeners who haven't hit that wall

00:06:47.029 --> 00:06:48.550
yet. Oh, it's maddening. It's basically when

00:06:48.550 --> 00:06:51.230
you put in the exact same prompt, but the model

00:06:51.230 --> 00:06:53.329
gives you wildly different answers over time.

00:06:53.569 --> 00:06:56.129
Its internal behavior just changes. Makes reliable

00:06:56.129 --> 00:06:58.410
work kind of impossible, right? Totally. And

00:06:58.410 --> 00:07:01.589
that's much harder to fix for subjective things

00:07:01.589 --> 00:07:03.629
like writing compared to deterministic stuff

00:07:03.629 --> 00:07:07.370
like code. It's a real challenge. Okay, so that

00:07:07.370 --> 00:07:10.779
brings us to this flow zone problem. The focus

00:07:10.779 --> 00:07:14.579
now is on smaller, faster models because speed

00:07:14.579 --> 00:07:17.240
is just critical for actually getting work done,

00:07:17.339 --> 00:07:19.740
right? Like waiting five minutes for a code suggestion

00:07:19.740 --> 00:07:22.199
just breaks your focus. Absolutely breaks your

00:07:22.199 --> 00:07:25.459
flow state. Look at Anthropic's Haiku 4.5 model.

00:07:25.639 --> 00:07:28.439
People find it super useful, mainly because it's

00:07:28.439 --> 00:07:30.420
three times cheaper and more than twice as fast

00:07:30.420 --> 00:07:32.779
as their bigger Sonnet model. So good enough

00:07:32.779 --> 00:07:36.439
but quick is way better than perfect but slow.

00:07:36.779 --> 00:07:39.519
Vastly superior for most practical uses. Yeah.

00:07:39.920 --> 00:07:42.759
And there's this new strategy reflecting that

00:07:42.759 --> 00:07:45.660
idea, test time compute. Right. So instead of

00:07:45.660 --> 00:07:47.680
blowing the whole compute budget on training

00:07:47.680 --> 00:07:49.500
the absolute biggest model possible. What do

00:07:49.500 --> 00:07:51.620
they do? They train a, let's say, medium sized

00:07:51.620 --> 00:07:54.720
model. Then they save some of that compute budget

00:07:54.720 --> 00:07:57.240
to use during inference, like when you're actually

00:07:57.240 --> 00:07:59.759
using the model. They let the model think longer

00:07:59.759 --> 00:08:02.120
and reason more deeply before it gives you the

00:08:02.120 --> 00:08:03.980
answer. That's a great way to put it, like giving

00:08:03.980 --> 00:08:06.680
a student extra time on the final exam to double

00:08:06.680 --> 00:08:09.139
check the hard questions, even if they only studied

00:08:09.259 --> 00:08:11.519
a medium amount. Exactly like that. And this

00:08:11.519 --> 00:08:15.079
efficiency, it's unlocking this exponential growth

00:08:15.079 --> 00:08:17.740
in how long AI agents can work autonomously.

00:08:17.839 --> 00:08:20.720
How long are we talking? Well, six months ago,

00:08:20.779 --> 00:08:22.779
maybe 20 minutes was a good run for an autonomous

00:08:22.779 --> 00:08:26.300
agent. Today, we're seeing sustained work sessions

00:08:26.300 --> 00:08:30.199
like two to seven hours long. GPT-5 Codex apparently

00:08:30.199 --> 00:08:33.179
had some seven-hour autonomous sessions. Whoa.
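
Taking the conversation's rough figures at face value, ~20-minute runs six months ago and ~7-hour sessions now, you can back out an implied doubling time. This is a back-of-the-envelope extrapolation assuming smooth exponential growth, not a forecast, and the "two days" target is an arbitrary stand-in for "working for days."

```python
import math

# Rough figures from the conversation; purely illustrative.
then_minutes, now_minutes, elapsed_months = 20.0, 7 * 60.0, 6.0

growth = now_minutes / then_minutes                  # ~21x in six months
doubling_months = elapsed_months / math.log2(growth)

target_minutes = 2 * 24 * 60.0                       # hypothetical "days" target
months_to_target = doubling_months * math.log2(target_minutes / now_minutes)

print(f"doubling time: {doubling_months:.1f} months")
print(f"~2-day sessions in roughly {months_to_target:.1f} more months")
```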

00:08:33.960 --> 00:08:36.100
Imagine scaling that. That unlocks really complex

00:08:36.100 --> 00:08:38.480
stuff, right? Like huge code -based refactors

00:08:38.480 --> 00:08:41.240
or deep research projects. Totally. The agent

00:08:41.240 --> 00:08:44.139
can analyze, plan, run searches, synthesize the

00:08:44.139 --> 00:08:46.100
findings, and just repeat that whole loop for

00:08:46.100 --> 00:08:48.600
hours without a human stepping in. If that trajectory

00:08:48.600 --> 00:08:51.240
keeps going, we could have agents working reliably

00:08:51.240 --> 00:08:54.360
for days soon, automating tasks that take junior

00:08:54.360 --> 00:08:56.820
engineers weeks. It's heading that way. It really

00:08:56.820 --> 00:08:59.019
seems possible. This changes the game for complex

00:08:59.019 --> 00:09:01.470
automation. Okay, how does this new strategy

00:09:01.470 --> 00:09:03.710
using test time compute actually improve the

00:09:03.710 --> 00:09:06.590
results if the model itself isn't bigger? Because

00:09:06.590 --> 00:09:08.950
the smaller model gets dedicated time to reason

00:09:08.950 --> 00:09:11.370
during inference right when it's needed. It leads

00:09:11.370 --> 00:09:13.370
to better, more thoughtful answers than just

00:09:13.370 --> 00:09:15.590
a fast, reflexive response from a giant model.
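
A toy way to see why inference-time reasoning can beat sheer size: compare one pass of a stronger model against best-of-four sampling from a weaker one under the same notional budget. The per-attempt success rates below are invented for illustration, and best-of-k with independent attempts is only one simple form of test-time compute; real systems often lengthen a single reasoning chain instead.

```python
def p_solved(p_single: float, attempts: int) -> float:
    """Chance that at least one of `attempts` independent tries succeeds."""
    return 1.0 - (1.0 - p_single) ** attempts

# Same notional budget, two ways to spend it (made-up success rates):
big_once = p_solved(0.60, attempts=1)           # all compute in training
medium_best_of_4 = p_solved(0.35, attempts=4)   # cheaper model, 4 reasoning passes

print(f"big model, one pass:     {big_once:.2f}")
print(f"medium model, best of 4: {medium_best_of_4:.2f}")
```

With these toy numbers the medium model plus extra inference attempts comes out ahead, which is the intuition behind shifting budget from training to inference.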

00:09:15.830 --> 00:09:17.690
More thoughtful answers from smaller models.

00:09:17.870 --> 00:09:20.129
Got it. Efficiency is the new size. The technical

00:09:20.129 --> 00:09:22.549
shift is clear. Efficiency over brute force.

00:09:23.129 --> 00:09:26.149
Now let's talk about data. What really separates

00:09:26.149 --> 00:09:29.470
the winners long term? You mentioned XAI earlier.

00:09:29.769 --> 00:09:31.649
Yeah, let's validate that prediction about XAI.

00:09:31.929 --> 00:09:34.309
People kind of dismissed Elon Musk, but he always

00:09:34.309 --> 00:09:36.450
had foundational advantages that meant he was

00:09:36.450 --> 00:09:39.549
going to catch up. Unique data, capital, manufacturing

00:09:39.549 --> 00:09:43.009
skill. Right. XAI gets exclusive access to X,

00:09:43.169 --> 00:09:45.710
formerly Twitter, that real -time global conversation.

00:09:45.789 --> 00:09:49.549
That gives their AI, Grok, this unique edge on

00:09:49.549 --> 00:09:51.950
current events and slang that others just can't

00:09:51.950 --> 00:09:54.750
legally scrape, right? Correct. That's a powerful

00:09:54.750 --> 00:09:57.490
data moat. But the ultimate moat, it's physical.

00:09:57.820 --> 00:10:00.440
It's Tesla and the Optimus robots. How so? Tesla

00:10:00.440 --> 00:10:02.559
cars provide this constant stream of real-world

00:10:02.559 --> 00:10:05.159
video footage, amazing for training world models

00:10:05.159 --> 00:10:07.440
that understand physics and navigation. Yeah.

00:10:07.500 --> 00:10:10.059
But Optimus. Optimus will generate something

00:10:10.059 --> 00:10:13.779
totally unique. Physical, embodied data. Three-

00:10:13.779 --> 00:10:16.080
dimensional data from interacting with the world.

00:10:16.419 --> 00:10:18.720
Embodying the data? What does that physical experience

00:10:18.720 --> 00:10:21.759
give an AI that just reading text misses? Everything.

00:10:21.860 --> 00:10:24.970
Seriously. Think about a human child. A toddler

00:10:24.970 --> 00:10:27.950
is infinitely smarter than GPT-5 in some really

00:10:27.950 --> 00:10:30.710
fundamental ways. How? Because the child learns

00:10:30.710 --> 00:10:33.090
from physical feedback that isn't scraped from

00:10:33.090 --> 00:10:35.590
the Internet. They learn about gravity by dropping

00:10:35.590 --> 00:10:38.529
things, force by pushing things, object permanence,

00:10:38.529 --> 00:10:41.649
social cues, all by doing things in the real

00:10:41.649 --> 00:10:44.929
world. Ah, that embodied experience. That's what

00:10:44.929 --> 00:10:47.409
current AIs are fundamentally missing. Totally.

00:10:47.490 --> 00:10:49.450
It's a huge gap. Okay, but here's where things

00:10:49.450 --> 00:10:51.490
get really interesting, maybe counterintuitive

00:10:51.490 --> 00:10:54.519
again. Open source models seem to be catching

00:10:54.519 --> 00:10:57.580
up incredibly fast, even matching or beating

00:10:57.580 --> 00:11:00.000
the big closed systems. Yeah, this is blowing

00:11:00.000 --> 00:11:03.419
up quietly. Take a model like GLM 4.6 from Zhipu

00:11:03.419 --> 00:11:06.379
AI. It's open source and it's performing better

00:11:06.379 --> 00:11:09.139
than Anthropic's closed-source Claude Sonnet

00:11:09.139 --> 00:11:11.519
on a lot of important coding benchmarks. So the

00:11:11.519 --> 00:11:13.639
open source community is moving at lightning

00:11:13.639 --> 00:11:16.580
speed. Undeniably fast, yeah. But why isn't this

00:11:16.580 --> 00:11:19.539
like front page news? The incumbents, the big

00:11:19.539 --> 00:11:22.039
labs like OpenAI and Anthropic, they kind of

00:11:22.039 --> 00:11:23.639
ignore it, right? To protect their perceived

00:11:23.639 --> 00:11:25.879
advantage, their moat. But there's also a practical

00:11:25.879 --> 00:11:28.840
issue. What's that? Cloud infrastructure, AWS,

00:11:29.460 --> 00:11:34.159
Google Cloud, Azure. They're optimized to run

00:11:34.159 --> 00:11:37.379
the popular models efficiently. The GPT series, Claude.

00:11:37.379 --> 00:11:40.240
cloud. So these powerful new open source models

00:11:40.240 --> 00:11:42.519
seem slower just because the pipes aren't optimized

00:11:42.519 --> 00:11:44.840
for them yet. Exactly. It creates friction. Yeah.

00:11:44.919 --> 00:11:47.299
But the underlying quality of the models proves

00:11:47.299 --> 00:11:49.480
how fast the open source community is innovating.

00:11:49.659 --> 00:11:52.259
You know, Anthropic's valued at maybe $200 billion,

00:11:52.500 --> 00:11:55.100
but a lab like Zhipu is hitting comparable performance.

00:11:55.100 --> 00:11:58.789
It's validation. Wild. OK, now for the more sobering

00:11:58.789 --> 00:12:00.909
part, job displacement. Yeah, got to talk about

00:12:00.909 --> 00:12:02.990
it. Look, the goal here isn't just disrupting

00:12:02.990 --> 00:12:05.570
the, what, $300 billion software market. The

00:12:05.570 --> 00:12:08.029
aim is replacing big chunks of the $15 trillion

00:12:08.029 --> 00:12:11.389
global labor market. $15 trillion, wow. So the

00:12:11.389 --> 00:12:13.370
trend we're likely going to see accelerate by

00:12:13.370 --> 00:12:16.289
2026 is single-job replacement agents. Meaning?

00:12:16.509 --> 00:12:18.629
Think customer support reps, sales development

00:12:18.629 --> 00:12:20.549
reps, setting appointments, executive assistants.

00:12:20.870 --> 00:12:22.889
These roles are prime targets for automation

00:12:22.889 --> 00:12:25.990
by specialized AI agents. Which means huge efficiency

00:12:25.990 --> 00:12:30.929
gains for companies. Massive gains, but also significant

00:12:30.929 --> 00:12:34.730
social unrest. I mean, widespread protests related

00:12:34.730 --> 00:12:38.090
to AI job losses are genuinely predicted for

00:12:38.090 --> 00:12:41.629
2026. That's heavy. Which leads us, maybe surprisingly,

00:12:41.629 --> 00:12:44.929
to your prediction: the return to code. Learning

00:12:44.929 --> 00:12:47.309
computer science is going to be sexy again. Why?

00:12:47.309 --> 00:12:50.850
Because it gives you asymmetrical leverage. Massive

00:12:50.850 --> 00:12:53.149
leverage. Explain that. Okay. A non-technical

00:12:53.149 --> 00:12:55.669
person using some AI assistance. Maybe they get

00:12:55.669 --> 00:12:59.009
2x, 3x faster at their job. That's nice. Useful.

00:12:59.049 --> 00:13:01.460
Right. But a skilled programmer, someone who

00:13:01.460 --> 00:13:03.720
really understands systems, logic, architecture,

00:13:04.000 --> 00:13:06.500
they don't just get 3x faster. They become 100

00:13:06.500 --> 00:13:09.500
times more powerful. 100 times? How? Because

00:13:09.500 --> 00:13:11.399
you stop being just the coder writing lines.

00:13:11.539 --> 00:13:13.539
You become the architect. You start managing

00:13:13.539 --> 00:13:16.740
teams of AI agents. You leverage your deep technical

00:13:16.740 --> 00:13:19.120
know -how to orchestrate these tools at a scale

00:13:19.120 --> 00:13:21.279
nobody else can. You have to get to the cutting

00:13:21.279 --> 00:13:23.299
edge to really crush the competition. So you

00:13:23.299 --> 00:13:26.080
go from using AI to, like, help write an email.

00:13:26.259 --> 00:13:28.940
To using AI to refactor an entire million-line

00:13:28.940 --> 00:13:31.840
code base in an afternoon by coordinating five

00:13:31.840 --> 00:13:33.759
different specialized agent teams simultaneously.

00:13:34.220 --> 00:13:37.279
That's the 100x leverage. Wow. So what does this

00:13:37.279 --> 00:13:40.159
all mean for the average career then? If you're

00:13:40.159 --> 00:13:42.259
listening right now, what's the takeaway? The

00:13:42.259 --> 00:13:44.639
greatest advantage, the biggest upside, goes

00:13:44.639 --> 00:13:47.500
to those with the technical competence to truly

00:13:47.500 --> 00:13:50.179
leverage these new AI tools at a deep level.

00:13:50.340 --> 00:13:52.960
Technical competence is key. Okay.

00:13:52.960 --> 00:13:55.799
[Conclusion and final thought.] All right. Let's

00:13:55.799 --> 00:13:58.799
try to summarize the big ideas here. Three core

00:13:58.799 --> 00:14:02.899
lessons, maybe. First, AI is real transformation,

00:14:03.259 --> 00:14:05.580
not just a bubble, and it's backed by actual

00:14:05.580 --> 00:14:08.820
explosive revenue growth, even if the VC funding

00:14:08.820 --> 00:14:11.379
is bubbly. Right. And second, compute is that

00:14:11.379 --> 00:14:13.460
critical physical constraint, the bottleneck,

00:14:13.460 --> 00:14:16.279
which is fueling that kind of bizarre self-feeding

00:14:16.279 --> 00:14:18.909
money loop between the AI labs and NVIDIA. And

00:14:18.909 --> 00:14:21.190
third, the game has totally changed. That era

00:14:21.190 --> 00:14:23.990
of just copycat startups reselling API tokens

00:14:23.990 --> 00:14:26.669
at a loss, that's ending. It's over. The future

00:14:26.669 --> 00:14:28.950
really belongs to efficiency, specialized models,

00:14:29.070 --> 00:14:31.529
and that deep technical competence we just talked

00:14:31.529 --> 00:14:33.190
about. And it feels like the conversation has

00:14:33.190 --> 00:14:35.450
shifted too, right? Less focus on, you know,

00:14:35.450 --> 00:14:38.450
abstract AI doomerism, the paperclip maximizers.

00:14:38.710 --> 00:14:41.250
Yeah, thankfully. The focus has rightly shifted

00:14:41.250 --> 00:14:44.250
to the very real world problems. Bias in the

00:14:44.250 --> 00:14:47.230
models, market consolidation, and crucially,

00:14:47.269 --> 00:14:50.379
labor disruption. These are tangible issues we

00:14:50.379 --> 00:14:52.799
need to solve now. We've seen just how much capital,

00:14:52.860 --> 00:14:56.259
how much energy is pouring into building the

00:14:56.259 --> 00:14:59.360
physical infrastructure for AI, the data centers,

00:14:59.480 --> 00:15:01.679
the silicon foundries, the power grid upgrades.

00:15:02.000 --> 00:15:04.539
It's staggering. So maybe the final provocative

00:15:04.539 --> 00:15:06.960
thought for you, the listener, is this. It's

00:15:06.960 --> 00:15:09.100
about where you fit. You need to ask yourself,

00:15:09.259 --> 00:15:12.279
are your skills foundational? Are they essential,

00:15:12.440 --> 00:15:14.059
like the picks and shovels needed to build this

00:15:14.059 --> 00:15:17.769
new AI infrastructure? Or... Are your skills

00:15:17.769 --> 00:15:20.350
part of that repetitive lower leverage labor

00:15:20.350 --> 00:15:23.330
that is now, frankly, being systematically replaced?

00:15:23.649 --> 00:15:25.909
A heavy question, but a really necessary one

00:15:25.909 --> 00:15:28.429
to ponder. Where does your unique leverage lie

00:15:28.429 --> 00:15:30.629
in this new world? Yeah. Thank you for taking

00:15:30.629 --> 00:15:32.669
this deep dive with us today. Lots to think about.

00:15:32.750 --> 00:15:33.730
Always is. Talk soon.
