WEBVTT

00:00:00.000 --> 00:00:02.020
Okay, let's start with the concept. It's something

00:00:02.020 --> 00:00:05.080
that usually you'd find in a sci-fi novel, not

00:00:05.080 --> 00:00:07.780
a Tuesday news headline. Recursive self-improvement.

00:00:08.060 --> 00:00:11.560
It's the big one. The holy grail of AI, really.

00:00:11.699 --> 00:00:14.099
The idea that a system can look at its own code,

00:00:14.220 --> 00:00:16.940
find the problems, and just fix them. Rewrite

00:00:16.940 --> 00:00:18.920
itself to be better without a human stepping

00:00:18.920 --> 00:00:21.359
in. And we're not talking theory here. This is

00:00:21.359 --> 00:00:23.519
what broke this week with OpenAI's new model.

00:00:23.600 --> 00:00:26.120
It wasn't just writing code. It was being used

00:00:26.120 --> 00:00:29.620
to debug and evaluate itself, doing its own training.

00:00:29.760 --> 00:00:31.739
That's a huge shift. I mean, we're moving from

00:00:31.739 --> 00:00:34.840
an era of tools we use to one of agents that

00:00:34.840 --> 00:00:37.759
build themselves. And it's honestly, it's a little

00:00:37.759 --> 00:00:40.179
eerie. A little. It's kind of terrifyingly cool.

00:00:40.359 --> 00:00:42.359
Welcome back to the Deep Dive. Today is Thursday,

00:00:42.500 --> 00:00:45.899
February 5th, 2026. And I hope you're ready because

00:00:45.899 --> 00:00:48.700
the world feels different than it did just a

00:00:48.700 --> 00:00:50.299
week ago. Oh, it really does. The acceleration

00:00:50.299 --> 00:00:53.570
is just... It's noticeable now. So here's our

00:00:53.570 --> 00:00:55.469
roadmap. We have a lot to get through today. First,

00:00:55.570 --> 00:00:58.289
we're going to unpack just the absolute madness

00:00:58.289 --> 00:01:00.829
of this agentic arms race. Yeah. We had OpenAI's

00:01:00.829 --> 00:01:04.109
GPT-5.3 Codex and Anthropic's Claude Opus

00:01:04.109 --> 00:01:07.409
4.6 drop, what, minutes apart? It felt like a

00:01:07.409 --> 00:01:10.189
coordinated strike. It was wild. Totally. Then

00:01:10.189 --> 00:01:12.349
we'll look at how this all spilled out into the

00:01:12.349 --> 00:01:13.969
real world. I'm talking about the culture war

00:01:13.969 --> 00:01:16.730
at the Super Bowl and the stock market just freaking

00:01:16.730 --> 00:01:18.489
out. Yeah. And then we have to talk about the

00:01:18.489 --> 00:01:21.569
elephant in the room, Google. While everyone

00:01:21.569 --> 00:01:24.469
was watching these shiny new agents, Google has

00:01:24.469 --> 00:01:27.189
been scaling something that is, frankly, hard

00:01:27.189 --> 00:01:29.790
to even comprehend. The financial reality check

00:01:29.790 --> 00:01:32.150
there is pretty staggering. And finally, we'll

00:01:32.150 --> 00:01:35.090
wrap up with some new tools and this mindset we're

00:01:35.090 --> 00:01:37.769
calling the cognitive gym. Because if the AI

00:01:37.769 --> 00:01:40.290
is getting smarter, we really need to make sure

00:01:40.290 --> 00:01:42.829
we aren't getting dumber. That's maybe the most

00:01:42.829 --> 00:01:44.670
critical part of this whole conversation for

00:01:44.670 --> 00:01:47.030
me. Okay, so let's dive in. All right, let's

00:01:47.030 --> 00:01:49.450
start with what everyone's calling Thursday AI.

00:01:49.930 --> 00:01:52.150
It feels like every Thursday the world changes.

00:01:52.209 --> 00:01:54.950
But this one, this one was special. OpenAI and

00:01:54.950 --> 00:01:57.269
Anthropic head to head. Oh, it was pure theater.

00:01:57.450 --> 00:02:01.650
So first you have OpenAI dropping GPT-5.3 Codex.

00:02:01.689 --> 00:02:03.909
And, you know, for context, this is not just

00:02:03.909 --> 00:02:06.049
a slightly better chatbot. This is a model built

00:02:06.049 --> 00:02:08.689
to live inside your operating system. Right.

00:02:08.830 --> 00:02:11.930
It's powering a new macOS app. But I want to

00:02:11.930 --> 00:02:13.629
get at this agentic part. We use that word a lot.

00:02:13.689 --> 00:02:15.689
What does it actually look like when you use

00:02:15.689 --> 00:02:18.569
this thing? Okay, so the old way, and by old

00:02:18.569 --> 00:02:21.610
I mean 2025, was you'd ask the bot, write me

00:02:21.610 --> 00:02:24.349
a function. It spits out code. You copy it, you

00:02:24.349 --> 00:02:27.449
paste it, you see if it works. Codex 5.3 is

00:02:27.449 --> 00:02:29.830
different. It treats your entire computer like

00:02:29.830 --> 00:02:32.310
its sandbox. The release notes talked about

00:02:32.310 --> 00:02:34.669
it handling complex app development, even game

00:02:34.669 --> 00:02:37.430
development, over multiple days. Multiple days.

00:02:37.590 --> 00:02:39.210
That's the part I can't wrap my head around.

00:02:39.389 --> 00:02:41.849
How does an AI remember what it was doing on

00:02:41.849 --> 00:02:44.150
Tuesday when it wakes up on Thursday? That's

00:02:44.150 --> 00:02:46.479
the leap. It's about more than just a big context

00:02:46.479 --> 00:02:49.439
window. They're calling it episodic memory. The

00:02:49.439 --> 00:02:51.939
model keeps a log of its own decisions, of its

00:02:51.939 --> 00:02:54.560
own intent, so it knows why it built the database

00:02:54.560 --> 00:02:56.439
that way yesterday so it can build the front

00:02:56.439 --> 00:02:59.020
end to match it today. And the headline feature,

00:02:59.219 --> 00:03:01.840
the self-debugging, you said it helped build

00:03:01.840 --> 00:03:03.780
itself. How does that really work? Is it just

00:03:03.780 --> 00:03:06.300
like a spellcheck for code? It's way deeper.

00:03:06.800 --> 00:03:09.580
So in traditional training, a human has to label

00:03:09.580 --> 00:03:12.419
the data, right? A person says, this code is

00:03:12.419 --> 00:03:15.120
wrong. That's slow. With this, the model generates

00:03:15.120 --> 00:03:17.539
code, tries to compile it, reads the error message,

00:03:17.800 --> 00:03:20.719
figures out the logical flaw, and then rewrites

00:03:20.719 --> 00:03:23.020
the code and tries again. It's a closed loop.
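The loop just described, generate, run, read the error, regenerate, can be sketched in a few lines. To be clear, this is an illustrative toy, not OpenAI's actual training pipeline; `generate` is a hypothetical stand-in for a model call that receives the previous error message as context.

```python
import os
import subprocess
import sys
import tempfile
from typing import Callable, Optional

def run_code(source: str) -> Optional[str]:
    """Execute the candidate code; return its error output, or None on success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=10
        )
        return None if result.returncode == 0 else result.stderr
    finally:
        os.unlink(path)

def closed_loop(generate: Callable[[Optional[str]], str],
                max_attempts: int = 5) -> Optional[str]:
    """Generate -> run -> read error -> regenerate, with no human labels."""
    error = None
    for _ in range(max_attempts):
        source = generate(error)   # the model sees its own last failure
        error = run_code(source)
        if error is None:
            return source          # the code runs cleanly: the loop closes
    return None
```

With a real model behind `generate`, each iteration feeds the traceback back in as context, which is the "learning from its own failures" the hosts describe.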

00:03:23.219 --> 00:03:25.360
So it's learning from its own failures in real

00:03:25.360 --> 00:03:28.210
time. At machine speed. Exactly. It's why it's

00:03:28.210 --> 00:03:31.509
25% faster in execution. It's already failed

00:03:31.509 --> 00:03:33.750
a thousand times in a simulation before you even

00:03:33.750 --> 00:03:36.409
type your first prompt. Wow. Okay. So on one

00:03:36.409 --> 00:03:38.849
side, we've got OpenAI building the ultimate

00:03:38.849 --> 00:03:42.229
builder. But then Anthropic was not just sitting

00:03:42.229 --> 00:03:44.250
around. No. And this was the drama of it all.

00:03:44.550 --> 00:03:46.629
Anthropic pushed their launch back by 15 minutes.

00:03:46.810 --> 00:03:48.949
It really felt like they were waiting for OpenAI

00:03:48.949 --> 00:03:50.629
to hit publish just so they could steal the news

00:03:50.629 --> 00:03:53.750
cycle. And they dropped Claude Opus 4.6. And

00:03:53.750 --> 00:03:56.069
they're calling it vibe coding? I saw that all

00:03:56.069 --> 00:03:57.930
over X. What does that even mean? Is that just

00:03:57.930 --> 00:04:00.830
marketing? It's a fun name, but it actually describes

00:04:00.830 --> 00:04:03.030
a real thing. It's about coding with natural

00:04:03.030 --> 00:04:06.210
language intent, not strict syntax. So you don't

00:04:06.210 --> 00:04:08.729
tell it how to write the code. You tell it the

00:04:08.729 --> 00:04:11.550
vibe of the app. I want a retro dashboard that

00:04:11.550 --> 00:04:14.710
feels like a 90s hacker movie. And it just, it

00:04:14.710 --> 00:04:17.680
translates that feeling into code. So it's bridging

00:04:17.680 --> 00:04:20.160
the gap between sort of art school instructions

00:04:20.160 --> 00:04:23.300
and computer science execution. Precisely. And

00:04:23.300 --> 00:04:26.500
while Codex is digging deep into the OS, Anthropic

00:04:26.500 --> 00:04:30.459
is going wide on data. Opus 4.6 can pull info

00:04:30.459 --> 00:04:33.740
from huge document sets, run full financial analyses.

00:04:33.860 --> 00:04:36.100
I saw the stat here. It's number one on the finance

00:04:36.100 --> 00:04:39.199
agent benchmark. Beating OpenAI. Which is huge

00:04:39.199 --> 00:04:41.759
for the corporate world. While OpenAI is chasing

00:04:41.759 --> 00:04:44.100
developers, Anthropic is going right after the

00:04:44.100 --> 00:04:46.920
white-collar workflow. The analysts, the researchers,

00:04:47.259 --> 00:04:49.480
the people living in spreadsheets. So we have

00:04:49.480 --> 00:04:52.240
Codex building itself and Claude basically replacing

00:04:52.240 --> 00:04:55.680
the analyst. It makes me wonder, if the AI is

00:04:55.680 --> 00:04:59.139
doing all this, where do we even fit in? It's

00:04:59.139 --> 00:05:00.860
a multi-billion-dollar question, isn't it? No,

00:05:00.860 --> 00:05:03.160
really. If the model can fix its own errors,

00:05:03.300 --> 00:05:05.459
where does a human developer actually fit into

00:05:05.459 --> 00:05:08.000
that loop? I think we stop being writers, you

00:05:08.000 --> 00:05:10.379
know, writers of code, writers of reports. We

00:05:10.379 --> 00:05:12.839
become architects of intent. We define the what

00:05:12.839 --> 00:05:15.740
and the why. And the agents handle the how. You

00:05:15.740 --> 00:05:17.519
don't lay the bricks anymore. You're designing

00:05:17.519 --> 00:05:20.779
the cathedral. So if the AI debugs itself, the

00:05:20.779 --> 00:05:23.360
human moves from being a writer to an architect.

00:05:23.639 --> 00:05:26.860
Yeah. We stop typing syntax and we start designing

00:05:26.860 --> 00:05:30.959
outcomes. But this tech rivalry, it didn't just

00:05:30.959 --> 00:05:33.360
stay on servers. It went right to the biggest

00:05:33.360 --> 00:05:37.389
stage in America. The Super Bowl, Super Bowl

00:05:37.389 --> 00:05:40.009
LX, I usually watch for the game, but the commercials

00:05:40.009 --> 00:05:43.970
this year were aggressive. Aggressive is one

00:05:43.970 --> 00:05:46.750
word for it. It was a culture war in 30-second

00:05:46.750 --> 00:05:49.709
spots. Let's start with Anthropic's ad. It felt...

00:05:50.160 --> 00:05:52.860
Very pointed. It was a subtweet on national television.

00:05:53.160 --> 00:05:56.139
They ran this ad that subtly made fun of AI platforms

00:05:56.139 --> 00:05:59.079
cluttered with ads. It was a direct shot at OpenAI,

00:05:59.279 --> 00:06:01.139
who've been playing with sponsored results. And

00:06:01.139 --> 00:06:03.120
it definitely got a reaction. Oh, for sure. Sam

00:06:03.120 --> 00:06:05.339
Altman was on X, like, immediately. He called

00:06:05.339 --> 00:06:07.759
the ad clearly dishonest. You know the industry's

00:06:07.759 --> 00:06:09.540
getting serious when the CEOs are beefing during

00:06:09.540 --> 00:06:12.459
the halftime show. It felt very 90s Cola Wars.

00:06:12.980 --> 00:06:15.079
But then Google. Google took a totally different

00:06:15.079 --> 00:06:17.399
path. They didn't attack anybody. No, Google's

00:06:17.399 --> 00:06:21.740
Gemini ad was warm. It had bicycles, families,

00:06:22.040 --> 00:06:24.579
the Madden Bowl. They were just selling helpful

00:06:24.579 --> 00:06:27.360
AI to everyone. It was like they were saying,

00:06:27.480 --> 00:06:29.579
hey, let the startups fight. We're just here

00:06:29.579 --> 00:06:31.240
to help you find a cookie recipe. That's their

00:06:31.240 --> 00:06:34.920
entire play. Normalization. Make AI feel as safe

00:06:34.920 --> 00:06:37.639
and boring as a Google search. But while the

00:06:37.639 --> 00:06:40.100
ads were going on, the stock market was reacting

00:06:40.100 --> 00:06:43.120
to the actual tech. And that was a lot less warm

00:06:43.120 --> 00:06:45.100
and fuzzy. Yeah. Let's talk about the panic.

00:06:45.459 --> 00:06:49.379
Anthropic drops 11 Claude Cowork plugins. And

00:06:49.379 --> 00:06:52.220
Wall Street just loses its mind. Why did software

00:06:52.220 --> 00:06:54.839
stocks tank so hard? It's the implication of

00:06:54.839 --> 00:06:57.160
those plugins. The Cowork tools let the AI plug

00:06:57.160 --> 00:07:10.100
directly into your company. So it was a sell

00:07:10.100 --> 00:07:12.300
-off based on the idea that these AI agents could

00:07:12.300 --> 00:07:14.079
just kill the whole software as a service model.

00:07:14.300 --> 00:07:17.100
Exactly. Why rent a tool for a human when you

00:07:17.100 --> 00:07:20.100
can just rent an agent to do the job? But...

00:07:20.439 --> 00:07:22.759
You know, Jensen Huang from NVIDIA called the

00:07:22.759 --> 00:07:25.839
whole thing illogical. Well, of course he did.

00:07:26.040 --> 00:07:28.360
He sells the chips that run everything. He has

00:07:28.360 --> 00:07:31.759
a bias, for sure, but he's got a point. The market

00:07:31.759 --> 00:07:34.399
is reacting to the fear of being obsolete, not

00:07:34.399 --> 00:07:37.620
the immediate reality. But then, to counter that

00:07:37.620 --> 00:07:41.029
fear, OpenAI launched Frontier. Right, their

00:07:41.069 --> 00:07:42.810
enterprise platform. How does that calm things

00:07:42.810 --> 00:07:45.790
down? Well, it turns agents into coworkers that

00:07:45.790 --> 00:07:48.769
can use files and run code, but securely inside

00:07:48.769 --> 00:07:50.649
a corporate environment. They're trying to say,

00:07:50.750 --> 00:07:53.589
hey, we aren't destroying enterprise software. We're

00:07:53.589 --> 00:07:56.350
becoming the platform it runs on. It feels like

00:07:56.350 --> 00:07:58.870
we're in this limbo where the technology is moving

00:07:58.870 --> 00:08:00.970
way faster than the business models can adapt.

00:08:01.230 --> 00:08:03.529
That is a massive understatement. So with all

00:08:03.529 --> 00:08:06.389
this volatility, is the stock market panic justified

00:08:06.389 --> 00:08:09.750
or is it just noise? It's panic for sure, but

00:08:09.750 --> 00:08:12.449
it's a signal the SaaS era is ending. Speaking

00:08:12.449 --> 00:08:14.850
of ending eras, let's pivot. Let's talk about

00:08:14.850 --> 00:08:16.629
the giant that everyone keeps underestimating

00:08:16.629 --> 00:08:18.290
because they aren't always the coolest. Let's

00:08:18.290 --> 00:08:20.649
talk about Google. The empire strikes back. I'm

00:08:20.649 --> 00:08:23.350
serious. We get so caught up in the OpenAI and

00:08:23.350 --> 00:08:25.889
Anthropic drama. Yeah. But then you look at Google's

00:08:25.889 --> 00:08:29.069
Q4 earnings call. Yeah. And it's just brute force.

00:08:29.350 --> 00:08:31.970
It really is the difference between product elegance

00:08:31.970 --> 00:08:34.490
and just logistical dominance. Let's run the

00:08:34.490 --> 00:08:36.330
numbers because they're insane. Yeah, lay them

00:08:36.330 --> 00:08:41.509
on me. Okay. The Gemini app. 750 million monthly

00:08:41.509 --> 00:08:44.480
active users. That's more than TikTok in some

00:08:44.480 --> 00:08:47.320
places. 750 million. That's a lot of people asking

00:08:47.320 --> 00:08:49.840
how to bake bread. And it's not just that. Enterprise.

00:08:50.200 --> 00:08:54.580
8 million paid seats in four months. 95% of

00:08:54.580 --> 00:08:57.340
the top 20 SaaS companies are using it. Over

00:08:57.340 --> 00:09:01.379
5 billion interactions in Q4 alone. 5 billion.

00:09:01.500 --> 00:09:03.299
And there was that detail about how people are

00:09:03.299 --> 00:09:04.759
using it, right? It wasn't just text. Right.

00:09:04.840 --> 00:09:07.960
One in six AI Mode searches now use voice or

00:09:07.960 --> 00:09:10.240
image. That's a huge behavioral shift. Yeah.

00:09:10.320 --> 00:09:12.059
People are showing Google a picture of a broken

00:09:12.059 --> 00:09:14.659
pipe and asking, how do I fix this? And the infrastructure,

00:09:14.940 --> 00:09:17.419
I mean, OpenAI has Microsoft, but Google basically

00:09:17.419 --> 00:09:20.120
is the internet. That's their moat. Gemini 3

00:09:20.120 --> 00:09:22.679
Pro is the fastest adopted model in their history.

00:09:22.940 --> 00:09:25.960
Their new workbench, Antigravity, got 1.5

00:09:25.960 --> 00:09:28.580
million users in two months. But here's the number,

00:09:28.659 --> 00:09:30.840
the one that stops the conversation, capital

00:09:30.840 --> 00:09:33.360
expenditure. The money they're spending to build

00:09:33.360 --> 00:09:36.519
all this. Google's planned CapEx for 2026 is

00:09:36.519 --> 00:09:42.549
between 175 and 185 billion dollars. 185 billion.

00:09:42.690 --> 00:09:44.429
I can't even. What does that money even buy?

00:09:44.549 --> 00:09:46.990
It's not just servers. It's energy. It's physics.

00:09:47.190 --> 00:09:49.889
They're securing fusion power contracts, upgrading

00:09:49.889 --> 00:09:52.509
national power grids, building custom chips.

00:09:52.629 --> 00:09:54.909
This is nation building money. This is the Manhattan

00:09:54.909 --> 00:09:57.370
Project times 10. It puts everything in perspective.

00:09:57.690 --> 00:10:01.750
OpenAI feels like magic. Google feels like gravity,

00:10:01.909 --> 00:10:04.789
an unstoppable force. OpenAI has elegance. Google

00:10:04.789 --> 00:10:07.529
has distribution. Basically, infinite cash. So

00:10:07.529 --> 00:10:09.149
does product elegance even matter if you can

00:10:09.149 --> 00:10:12.289
spend $185 billion on infrastructure? Distribution

00:10:12.289 --> 00:10:15.370
beats elegance every time. History repeats itself.

00:10:15.690 --> 00:10:18.470
Okay, let's bring this back down to earth. We've

00:10:18.470 --> 00:10:20.990
talked about billions of dollars in global infrastructure,

00:10:21.250 --> 00:10:23.330
but because they're spending all that money,

00:10:23.450 --> 00:10:27.350
the cost for us, for the regular person to create

00:10:27.350 --> 00:10:29.830
something, is dropping to almost zero. That's

00:10:29.830 --> 00:10:31.429
the other side of the coin. It's a golden age

00:10:31.429 --> 00:10:34.220
for creative tools. Yeah. Despite the Titans

00:10:34.220 --> 00:10:36.059
fighting, there's some incredible stuff coming

00:10:36.059 --> 00:10:38.639
out for creators. Let's do a quick rapid fire

00:10:38.639 --> 00:10:42.259
on some new tools. First up, Kling 3.0. Huge

00:10:42.259 --> 00:10:45.559
for video. We're talking native 4K outputs, longer

00:10:45.559 --> 00:10:47.879
video times. You can basically make short films

00:10:47.879 --> 00:10:50.759
with this now. It's not just a four-second blurry

00:10:50.759 --> 00:10:53.320
clip anymore. Webflow. Big one for designers.

00:10:53.419 --> 00:10:55.720
You give it a prompt, it builds a multi-page

00:10:55.720 --> 00:10:58.649
production-ready website. Not a mock-up, a

00:10:58.649 --> 00:11:01.690
real site. Higgsfield Vibe Motion. Yeah, motion

00:11:01.690 --> 00:11:04.529
images from a single prompt. It gives you incredible

00:11:04.529 --> 00:11:07.809
control if you need a specific asset. And Superboard.

00:11:07.809 --> 00:11:10.789
This one's amazing. Connects over 600 data sources.

00:11:10.789 --> 00:11:13.009
It's like having a team of data analysts in your

00:11:13.009 --> 00:11:15.049
pocket. You connect your Stripe, YouTube, whatever,

00:11:15.049 --> 00:11:17.470
and just ask it questions across all of them.

00:11:17.470 --> 00:11:19.970
These tools are incredible, but they bring us

00:11:19.970 --> 00:11:23.309
to the, uh, the philosophical part of this, the

00:11:23.309 --> 00:11:25.610
thing we're calling the cognitive gym. I love

00:11:25.610 --> 00:11:28.110
this concept because there's a real danger here.

00:11:28.250 --> 00:11:30.950
The danger that we all just get, you know, lazy. Worse.

00:11:31.289 --> 00:11:34.570
That we get functionally dumber. If we outsource

00:11:34.570 --> 00:11:37.590
all our thinking. If we let Codex write the code

00:11:37.590 --> 00:11:40.309
and Claude do the research. What happens to our

00:11:40.309 --> 00:11:42.490
own brains? It's like using GPS all the time.

00:11:42.509 --> 00:11:44.830
Nobody knows how to navigate anymore. I have

00:11:44.830 --> 00:11:46.789
to admit something here. A bit of a vulnerable

00:11:46.789 --> 00:11:49.610
admission. I've felt this myself. I call it prompt

00:11:49.610 --> 00:11:52.549
drift. I've been letting the AI summarize articles

00:11:52.549 --> 00:11:55.210
for me instead of reading them. And last week,

00:11:55.230 --> 00:11:57.629
I realized I couldn't actually remember the details

00:11:57.629 --> 00:12:00.850
of something I'd supposedly learned. I retained

00:12:00.850 --> 00:12:02.629
less information because I did less of the work.

00:12:02.730 --> 00:12:04.889
That's the trap. Exactly. The friction is where

00:12:04.889 --> 00:12:07.269
the learning happens. The cognitive gym idea

00:12:07.269 --> 00:12:10.149
is the answer. You have to use the AI to increase

00:12:10.149 --> 00:12:12.409
the difficulty, not take it away. How does that

00:12:12.409 --> 00:12:14.269
even work? Give me an example. How do I bench

00:12:14.269 --> 00:12:16.990
press with an AI? Instead of asking the AI for

00:12:16.990 --> 00:12:20.190
the answer, you ask it to quiz you. Instead of

00:12:20.190 --> 00:12:22.570
asking it to write the draft, you write it, and

00:12:22.570 --> 00:12:25.149
then you ask the AI to completely tear your logic

00:12:25.149 --> 00:12:27.590
apart. You force it to debate you. You have to

00:12:27.590 --> 00:12:29.850
lift the weight yourself. So the AI becomes the

00:12:29.850 --> 00:12:32.409
spotter, not the weightlifter. Exactly. And we're

00:12:32.409 --> 00:12:34.730
seeing this trend with solo founders. They're

00:12:34.730 --> 00:12:37.970
using AI to do the work of 10 people, but they're

00:12:37.970 --> 00:12:40.250
still the one driving the vision. So how do we

00:12:40.250 --> 00:12:43.350
stop from becoming just passive consumers of

00:12:43.350 --> 00:12:46.509
AI output? You have to treat the AI as a sparring

00:12:46.509 --> 00:12:49.509
partner, not a butler. We'll be right back. After

00:12:49.509 --> 00:12:54.009
a quick word. And we're back. Okay, let's recap

00:12:54.009 --> 00:12:56.470
the big ideas from today. It's been a heavy week.

00:12:56.529 --> 00:12:59.129
A historic one, I think. First, the agent wars.

00:12:59.610 --> 00:13:02.389
Codex is literally building itself. Claude is

00:13:02.389 --> 00:13:04.389
aiming to replace entire white -collar workflows.

00:13:04.870 --> 00:13:08.070
We are moving well past chatbots. Second, the

00:13:08.070 --> 00:13:10.769
culture is splitting. The Super Bowl ads showed

00:13:10.769 --> 00:13:12.490
us the tension. The market showed us the fear.

00:13:12.649 --> 00:13:15.710
The entire SaaS model is under threat. Third,

00:13:15.850 --> 00:13:20.059
Google. The quiet giant. While we watch the startups,

00:13:20.320 --> 00:13:24.259
Google is spending $185 billion to build an infrastructure

00:13:24.259 --> 00:13:26.720
moat that might be impossible for anyone else

00:13:26.720 --> 00:13:29.909
to cross. And finally, the cognitive gym. The

00:13:29.909 --> 00:13:32.230
tools are getting incredibly powerful, which

00:13:32.230 --> 00:13:34.110
means we have to be more disciplined. We have

00:13:34.110 --> 00:13:36.409
to make sure we stay the architects of intent.

00:13:36.750 --> 00:13:39.490
We are absolutely moving from chatting with bots

00:13:39.490 --> 00:13:42.789
to managing swarms of agents. That's the new

00:13:42.789 --> 00:13:45.470
reality. If you want to try out Kling or Superboard

00:13:45.470 --> 00:13:47.590
or any of the tools we talked about, check the

00:13:47.590 --> 00:13:49.190
show notes. We've got links for you there. They're

00:13:49.190 --> 00:13:51.000
definitely worth experimenting with. I want to

00:13:51.000 --> 00:13:52.799
leave you with one final thought. We mentioned

00:13:52.799 --> 00:13:56.600
ElevenLabs raised over 400 million euros, more than

00:13:56.600 --> 00:13:59.120
all of Europe combined in AI funding recently.

00:13:59.559 --> 00:14:02.779
And Google is spending 185 billion. The scale

00:14:02.779 --> 00:14:05.700
of the numbers is just, it's astronomical. So

00:14:05.700 --> 00:14:07.919
if all that capital is concentrating so intensely,

00:14:08.139 --> 00:14:10.799
are we just watching the consolidation of human

00:14:10.799 --> 00:14:13.299
intelligence into three or four zip codes? And

00:14:13.299 --> 00:14:16.279
if that's true, what does that mean for your

00:14:16.279 --> 00:14:19.100
solo business in 2027? That's the question that

00:14:19.100 --> 00:14:21.009
keeps me up at night. Thanks for listening to

00:14:21.009 --> 00:14:22.690
the Deep Dive. We'll see you next time.
