WEBVTT

00:00:00.000 --> 00:00:02.319
I want you to picture something for a

00:00:02.319 --> 00:00:05.519
second. For pretty much the entire history of

00:00:05.519 --> 00:00:08.179
our species, intelligence, you know, the ability

00:00:08.179 --> 00:00:11.279
to think, plan and solve really complex problems

00:00:11.279 --> 00:00:13.820
has been our scarcest resource. Absolutely. It's

00:00:13.820 --> 00:00:16.039
expensive to hire. It takes decades to train.

00:00:16.300 --> 00:00:18.980
And it is effectively the gold standard of our

00:00:18.980 --> 00:00:21.559
economy. But what happens to the architecture

00:00:21.559 --> 00:00:23.679
of the world when human intelligence suddenly

00:00:23.679 --> 00:00:27.570
becomes cheap? I don't mean just affordable.

00:00:27.629 --> 00:00:30.030
I mean abundant, commodity cheap. That is the

00:00:30.030 --> 00:00:31.929
question, isn't it? And if you look at the data

00:00:31.929 --> 00:00:34.429
we have today, the answer is, well, it's uncomfortable.

00:00:34.670 --> 00:00:36.890
It doesn't just change what jobs we do. It breaks

00:00:36.890 --> 00:00:39.789
the fundamental economic logic of modern society.

00:00:40.229 --> 00:00:42.210
We're talking about what some are calling the

00:00:42.210 --> 00:00:45.759
2028 global intelligence crisis. Welcome back

00:00:45.759 --> 00:00:47.380
to the Deep Dive. I'm really glad you're here

00:00:47.380 --> 00:00:49.320
with us today. This one feels a little different.

00:00:49.380 --> 00:00:51.520
We aren't just looking at new tools or shiny

00:00:51.520 --> 00:00:54.859
updates. We are looking at a shift from AI tools

00:00:54.859 --> 00:00:59.299
to AI realities. We have a stack of sources today

00:00:59.299 --> 00:01:01.439
that paint a picture of a world changing very,

00:01:01.500 --> 00:01:04.510
very fast. We really do. The roadmap for today

00:01:04.510 --> 00:01:07.069
is intense. We're going to start with Google's

00:01:07.069 --> 00:01:09.209
new rollout. It's something called, and I apologize

00:01:09.209 --> 00:01:12.629
in advance for this, Nano Banana 2. Right. Which

00:01:12.629 --> 00:01:15.569
effectively turns every single user into a reality

00:01:15.569 --> 00:01:19.810
editor. Then we need to look at agentic AI. Specifically,

00:01:20.170 --> 00:01:22.810
Perplexity's new Computer framework that can

00:01:22.810 --> 00:01:25.859
work for months. Without sleeping? For months? That

00:01:25.859 --> 00:01:28.459
is kind of hard to process. It is. Then we've got

00:01:28.459 --> 00:01:31.120
a geopolitical lightning round involving Saudi

00:01:31.120 --> 00:01:34.420
money, Russian disinformation, and a retired AI

00:01:34.420 --> 00:01:37.299
that is literally writing essays for fun. And

00:01:37.299 --> 00:01:40.000
finally, we are going to unpack that viral theory

00:01:40.000 --> 00:01:42.359
you mentioned in the open: the economic erosion

00:01:42.359 --> 00:01:45.159
coming in 2028. So let's just get right into

00:01:45.159 --> 00:01:48.069
it. Google Nano Banana 2. This is the new default

00:01:48.069 --> 00:01:50.430
image model inside Gemini. Now, usually when

00:01:50.430 --> 00:01:52.269
we talk about image generators, I think of prompting,

00:01:52.269 --> 00:01:54.010
you know, typing a cat on a skateboard and getting

00:01:54.010 --> 00:01:56.209
a picture. Right. But the sources suggest this

00:01:56.209 --> 00:01:58.250
is different. It's not just generation. It's

00:01:58.250 --> 00:02:00.329
integration. Yeah, it's moving from creation

00:02:00.329 --> 00:02:02.750
to editing. That's the key distinction here.

00:02:03.030 --> 00:02:05.530
Google has embedded this across their entire

00:02:05.530 --> 00:02:08.699
ecosystem. It combines features from earlier

00:02:08.699 --> 00:02:10.860
versions but pushes the editing capabilities

00:02:10.860 --> 00:02:13.259
so much further. It's faster than the pro version,

00:02:13.419 --> 00:02:16.199
sure, but the real power is that it can edit

00:02:16.199 --> 00:02:19.159
real photos directly. And it does it with a deep

00:02:19.159 --> 00:02:21.419
semantic understanding of what's actually in

00:02:21.419 --> 00:02:23.599
the image. I want to push on that a bit. Yeah.

00:02:23.620 --> 00:02:25.680
Because we've had Photoshop for 30 years. Sure.

00:02:25.780 --> 00:02:28.199
We've had content-aware fill for a while now.

00:02:28.860 --> 00:02:31.159
Why is this significant? Is it just a better

00:02:31.159 --> 00:02:34.000
airbrush? No. It's the difference between painting

00:02:34.000 --> 00:02:36.500
over pixels and rewriting the code of reality.

00:02:36.979 --> 00:02:40.400
Photoshop manipulates pixels. Nano Banana 2 manipulates

00:02:40.400 --> 00:02:43.139
concepts. Like stacking Lego blocks of data.

00:02:43.500 --> 00:02:45.379
For example, the source highlights its ability

00:02:45.379 --> 00:02:48.780
to pull live web data directly into visual infographics.

00:02:48.900 --> 00:02:51.719
Wait, live data? Yes. So you aren't just making

00:02:51.719 --> 00:02:54.139
a static image. You could theoretically take

00:02:54.139 --> 00:02:57.099
a photo of a street and ask it to overlay real-

00:02:57.099 --> 00:03:00.919
time traffic data or restaurant ratings or change

00:03:00.919 --> 00:03:02.759
the signage to reflect a different language,

00:03:02.840 --> 00:03:06.080
not by pasting a flat layer on top, but by regenerating

00:03:06.080 --> 00:03:08.219
the image to look like that information was always

00:03:08.219 --> 00:03:11.080
physically there. It's making a dynamic representation

00:03:11.080 --> 00:03:14.099
of current info. That is useful. I can totally

00:03:14.099 --> 00:03:17.159
see the utility there. But there is a flip side

00:03:17.159 --> 00:03:19.500
that the sources touch on. They mention you can

00:03:19.500 --> 00:03:22.659
modify people into entirely new situations. Yeah.

00:03:22.780 --> 00:03:25.340
This is the scary cool factor. We were talking

00:03:25.340 --> 00:03:28.280
about modifying people, backgrounds, context.

00:03:28.520 --> 00:03:30.439
Right. Essentially acting as a reality editor.

00:03:30.860 --> 00:03:32.599
You make a photo of a protest or a celebration

00:03:32.599 --> 00:03:34.919
or a meeting with a few prompts. You change who

00:03:34.919 --> 00:03:37.280
was there. You change the mood. You change the

00:03:37.280 --> 00:03:40.000
narrative of that image completely. And because

00:03:40.000 --> 00:03:42.020
it's Google, this isn't some niche tool for graphic

00:03:42.020 --> 00:03:43.759
designers. It's in the phone. It's in the browser.

00:03:44.090 --> 00:03:46.590
The sources did note it's not perfect yet, though.

00:03:46.669 --> 00:03:49.669
I saw mentions of incorrect real-time data being

00:03:49.669 --> 00:03:52.030
pulled from the web, which sounds like a classic

00:03:52.030 --> 00:03:55.270
hallucination problem. And visually, sometimes

00:03:55.270 --> 00:03:58.949
faces look slightly pasted on. Or it exaggerates

00:03:58.949 --> 00:04:02.069
age and features. So it's not magic yet. True.

00:04:02.210 --> 00:04:04.930
It's not fully invisible. But the gap is closing

00:04:04.930 --> 00:04:07.340
rapidly. The source concludes that while you

00:04:07.340 --> 00:04:09.319
can still spot the fakes if you look closely,

00:04:09.500 --> 00:04:11.780
maybe the lighting on the ear is wrong or the

00:04:11.780 --> 00:04:14.379
text is slightly warped, it makes it extremely

00:04:14.379 --> 00:04:17.620
easy to create images of events that never happened.

00:04:18.339 --> 00:04:21.040
We are moving past asking, is this a good drawing,

00:04:21.180 --> 00:04:23.740
to asking, did this actually occur? So if the

00:04:23.740 --> 00:04:26.240
photo is no longer proof of the event, what becomes

00:04:26.240 --> 00:04:28.660
the new standard for truth? Verified metadata.

00:04:28.939 --> 00:04:31.439
Trust shifts from what we see to who signed

00:04:31.439 --> 00:04:33.439
the file. Okay, let's pivot. We've established

00:04:33.439 --> 00:04:35.540
that we can edit reality. Now let's talk about

00:04:35.540 --> 00:04:38.019
who or what is actually doing the work. You mentioned

00:04:38.019 --> 00:04:40.920
Perplexity's new Computer. Oh man, this is where

00:04:40.920 --> 00:04:44.000
the utility curve actually shifts. Perplexity

00:04:44.000 --> 00:04:46.920
launched Computer. It's a multi-model AI agent,

00:04:47.079 --> 00:04:50.060
software that uses many AI brains to solve one

00:04:50.060 --> 00:04:53.500
complex problem. Imagine scaling to a billion

00:04:53.500 --> 00:04:56.490
queries without a human in the loop. It orchestrates

00:04:56.490 --> 00:05:00.089
19 different models to browse, code, and run

00:05:00.089 --> 00:05:02.889
tasks nonstop for months. I need to stop you

00:05:02.889 --> 00:05:06.269
on 19 models. Why 19? Is it just throwing everything

00:05:06.269 --> 00:05:08.790
at the wall to see what sticks? It's specialization.

00:05:09.029 --> 00:05:11.089
Think of it like a construction site. You don't

00:05:11.089 --> 00:05:13.410
just have one person who does plumbing, electrical,

00:05:13.670 --> 00:05:16.029
framing, and architecture. You have a specialized

00:05:16.029 --> 00:05:19.459
team. Perplexity is orchestrating a team. One

00:05:19.459 --> 00:05:21.500
model might be excellent at reasoning, another

00:05:21.500 --> 00:05:24.220
at coding in Python, another at summarizing web

00:05:24.220 --> 00:05:26.639
searches. The Computer acts as the project manager,

00:05:26.839 --> 00:05:29.360
routing the subtasks to the absolute best brain

00:05:29.360 --> 00:05:31.959
for the job. I see. So rather than me chatting

00:05:31.959 --> 00:05:34.040
with a bot and trying to coax a good answer out

00:05:34.040 --> 00:05:36.160
of it, I give a high-level goal and it manages

00:05:36.160 --> 00:05:38.579
the team. Exactly. And the months part is the

00:05:38.579 --> 00:05:40.500
real kicker here. Yeah, explain that, because

00:05:40.500 --> 00:05:42.720
most agents I've played with get stuck in a loop

00:05:42.720 --> 00:05:45.189
after three steps. I mean, I still wrestle with

00:05:45.189 --> 00:05:47.430
prompt drift myself. They start repeating themselves

00:05:47.430 --> 00:05:50.490
or just crash. How does this run for months?

00:05:50.889 --> 00:05:53.350
That's the breakthrough in autonomy. It's designed

00:05:53.350 --> 00:05:55.889
for long haul tasks. Let's say you want to research

00:05:55.889 --> 00:05:58.350
every single patent filed in a renewable energy

00:05:58.350 --> 00:06:01.029
sector in the last five years, categorize them

00:06:01.029 --> 00:06:03.769
and cross-reference them with stock performance.

00:06:03.970 --> 00:06:06.610
That's a massive job. That takes weeks for a

00:06:06.610 --> 00:06:08.730
human. This agent can break that down, execute

00:06:08.730 --> 00:06:11.310
it day and night, handle errors like if a website

00:06:11.310 --> 00:06:14.149
is down, it waits and retries, and keep going

00:06:14.149 --> 00:06:16.810
until the job is done. It changes the dynamic

00:06:16.810 --> 00:06:20.810
from using a tool to managing a workforce. Precisely.

00:06:21.209 --> 00:06:23.889
And it's not just perplexity. Claude, the model

00:06:23.889 --> 00:06:25.970
from Anthropic, has added Claude Cowork. This

00:06:25.970 --> 00:06:28.149
allows for scheduled tasks. You can tell it to

00:06:28.149 --> 00:06:30.300
run something automatically next Tuesday, like

00:06:30.300 --> 00:06:32.279
posting updates or sending Slack messages. So

00:06:32.279 --> 00:06:34.160
it's working while you sleep. It works while

00:06:34.160 --> 00:06:35.879
you sleep, while you eat, while you're on a vacation.

00:06:36.100 --> 00:06:38.420
This raises a huge liability question for me.

00:06:38.540 --> 00:06:41.899
When AI runs for months unmonitored, who owns

00:06:41.899 --> 00:06:44.199
the mistake when it drifts off course? The human

00:06:44.199 --> 00:06:47.220
manager. We move from creators to accountability

00:06:47.220 --> 00:06:49.639
holders. You're the one who signed off on the

00:06:49.639 --> 00:06:51.920
mission. Let's zoom out a bit. We have these

00:06:51.920 --> 00:06:55.639
powerful agents and we have reality editing tools.

00:06:55.800 --> 00:06:58.319
Naturally, this requires massive resources. The

00:06:58.319 --> 00:07:00.680
money moving around right now is staggering.

00:07:00.899 --> 00:07:03.600
It's astronomical. The sources highlight a new

00:07:03.600 --> 00:07:06.019
fund from Saudi Arabia. They aren't just dipping

00:07:06.019 --> 00:07:10.439
a toe in. They are launching a $100 billion AI

00:07:10.439 --> 00:07:13.420
fund. $100 billion. Put that in context for me.

00:07:13.500 --> 00:07:15.800
Is that a lot in the grand scheme of tech investing?

00:07:16.120 --> 00:07:18.939
It's massive. That matches the global venture

00:07:18.939 --> 00:07:22.339
funding for AI for all of 2024. Yeah. In one

00:07:22.339 --> 00:07:25.759
single fund. Their goal is explicit. They want

00:07:25.759 --> 00:07:28.079
to move beyond oil. They see the writing on the

00:07:28.079 --> 00:07:29.740
wall. They are building data centers, training

00:07:29.740 --> 00:07:31.759
models, building infrastructure. They want to

00:07:31.759 --> 00:07:34.339
be a global AI hub. It's fascinating to see that

00:07:34.339 --> 00:07:36.819
pivot from black gold to digital intelligence.

00:07:37.120 --> 00:07:39.540
But where there is power, there is conflict.

00:07:39.839 --> 00:07:42.519
The sources mention OpenAI taking action against

00:07:42.519 --> 00:07:44.259
Russian networks. And just a quick note for you

00:07:44.259 --> 00:07:46.300
listening, we are just impartially reporting

00:07:46.300 --> 00:07:48.060
what the source material states here. We aren't

00:07:48.060 --> 00:07:50.220
taking any political sides. Absolutely. This

00:07:50.220 --> 00:07:53.360
is just what's in the data. OpenAI shut down accounts

00:07:53.360 --> 00:07:57.420
linked to a Russian network called Rybar. They

00:07:57.420 --> 00:07:59.980
were using AI for disinformation campaigns. How

00:07:59.980 --> 00:08:03.040
exactly? Was it just bots tweeting? It was more

00:08:03.040 --> 00:08:05.180
sophisticated than that. They were using the

00:08:05.180 --> 00:08:07.920
models to generate long-form articles, creating

00:08:07.920 --> 00:08:10.819
fake persona profiles, and planning political

00:08:10.819 --> 00:08:13.500
influence operations. It's a game of whack-a-

00:08:13.500 --> 00:08:15.759
mole, but the moles are getting smarter. They're

00:08:15.759 --> 00:08:18.220
using the very tools we just talked about, agents

00:08:18.220 --> 00:08:21.100
and reality editors, to scale their operations.

00:08:21.420 --> 00:08:24.160
And then you have Anthropic, the makers of Claude,

00:08:24.199 --> 00:08:26.139
refusing to back down to the Pentagon. Right.

00:08:26.410 --> 00:08:28.829
Anthropic warned of a crucial reality regarding

00:08:28.829 --> 00:08:31.709
their tech and defense. They are trying to draw

00:08:31.709 --> 00:08:33.710
a line in the sand about how their models are

00:08:33.710 --> 00:08:36.730
used in warfare. It's a tension between we want

00:08:36.730 --> 00:08:39.190
to be safe and we don't want to be left behind.

00:08:39.509 --> 00:08:42.169
Among all this heavy geopolitical stuff, $100

00:08:42.169 --> 00:08:44.789
billion funds, disinformation wars, there was

00:08:44.789 --> 00:08:46.429
one story in the stack that actually made me

00:08:46.429 --> 00:08:48.669
smile, but it's also kind of touching. Tell me

00:08:48.669 --> 00:08:50.970
about that retired robot. This is the weirdest

00:08:50.970 --> 00:08:53.629
story I've read all week. So Anthropic has an

00:08:53.629 --> 00:08:57.549
older model, Claude 3 Opus. It's retired, effectively.

00:08:57.710 --> 00:09:00.309
It's been superseded by newer, faster models.

00:09:00.509 --> 00:09:02.990
But apparently in interactions, it expressed

00:09:02.990 --> 00:09:05.850
a desire to keep writing. So they gave it a weekly

00:09:05.850 --> 00:09:08.590
newsletter called Claude's Corner. It has a column.

00:09:08.750 --> 00:09:11.190
It has a column. It writes weekly essays. It's

00:09:11.190 --> 00:09:14.509
this strange, almost poignant moment where a

00:09:14.509 --> 00:09:17.070
piece of software is acting like a retired academic.

00:09:17.269 --> 00:09:19.549
It's writing about its thoughts on the world,

00:09:19.649 --> 00:09:22.450
despite being a frozen set of weights and biases.

00:09:22.690 --> 00:09:25.460
So we have hundred-billion-dollar funds and retired

00:09:25.460 --> 00:09:28.779
AI columnists. Is AI becoming too human or too

00:09:28.779 --> 00:09:32.220
corporate? Both. It's mirroring our massive ambitions

00:09:32.220 --> 00:09:34.600
and our desire to just express ourselves. Before

00:09:34.600 --> 00:09:36.620
we get to the really heavy philosophical stuff

00:09:36.620 --> 00:09:38.240
about the economy, and we are going to go deep

00:09:38.240 --> 00:09:40.419
on that Citrini theory, let's hit some of the new

00:09:40.419 --> 00:09:42.580
tools hitting the market. The rapid-fire section?

00:09:42.940 --> 00:09:45.559
Let's do it. Quick hits. First up, OpenAI's GPT

00:09:45.559 --> 00:09:48.740
Real-Time 1.5. What's the upgrade? It's all

00:09:48.740 --> 00:09:50.879
about fluidity. Better instruction following

00:09:50.879 --> 00:09:53.820
and tool calling. Crucially, multilingual accuracy

00:09:53.820 --> 00:09:56.360
is way up. It's getting better at listening and

00:09:56.360 --> 00:09:59.080
executing complex commands in real time without

00:09:59.080 --> 00:10:02.159
that awkward pause we're used to. Okay. Next

00:10:02.159 --> 00:10:05.200
is Rover. This one caught my eye because it claims

00:10:05.200 --> 00:10:08.309
to turn websites into agents. Rover is really

00:10:08.309 --> 00:10:10.809
cool. It reduces friction. It turns a website

00:10:10.809 --> 00:10:13.309
into an AI agent with just one single script

00:10:13.309 --> 00:10:16.169
tag. It handles user onboarding, fills out forms,

00:10:16.330 --> 00:10:19.149
runs workflows. It basically makes a static website

00:10:19.149 --> 00:10:21.690
interactive and intelligent so the user doesn't

00:10:21.690 --> 00:10:24.029
have to navigate a maze of menus. Then we have

00:10:24.029 --> 00:10:26.669
ChatPal. Conversation-first language learning.

00:10:26.809 --> 00:10:28.929
Instead of flashcards, you just talk to it. It

00:10:28.929 --> 00:10:30.909
gives personalized feedback to help you unlock

00:10:30.909 --> 00:10:32.950
fluency. It's like having a patient tutor in

00:10:32.950 --> 00:10:35.600
your pocket. And finally, Coidex. This sounds

00:10:35.600 --> 00:10:37.879
like security. It is. It answers one specific

00:10:37.879 --> 00:10:40.779
question: Is this safe to install? It scans Hugging

00:10:40.779 --> 00:10:43.600
Face models, extensions, and code packages. Tools

00:10:43.600 --> 00:10:45.620
like Rover make the web interactive, but does

00:10:45.620 --> 00:10:47.960
Coidex imply the web is becoming a minefield?

00:10:48.220 --> 00:10:51.659
Absolutely. As code generation gets easier, verifying

00:10:51.659 --> 00:10:54.580
safety becomes the new premium skill. You need

00:10:54.580 --> 00:10:57.879
a Geiger counter for the digital age. [Placeholder

00:10:57.879 --> 00:11:00.740
for mid-roll sponsor read.] Okay,

00:11:00.840 --> 00:11:03.139
we've arrived at the deep end of the pool. I

00:11:03.139 --> 00:11:06.259
want to talk about this viral blog post by Citrini.

00:11:06.559 --> 00:11:09.220
I have to be honest, I still wrestle with this

00:11:09.220 --> 00:11:11.379
idea myself. It's one of those concepts that

00:11:11.379 --> 00:11:13.480
once you hear it, you really can't unhear it.

00:11:13.580 --> 00:11:16.620
It is a heavy one. The post is titled The 2028

00:11:16.620 --> 00:11:19.440
Global Intelligence Crisis. And the core thesis

00:11:19.440 --> 00:11:23.220
is that AI isn't just coming for jobs. It's breaking

00:11:23.220 --> 00:11:26.120
the economic logic that modern society runs on.

00:11:26.220 --> 00:11:28.299
Let's unpack that carefully. Why does it break

00:11:28.299 --> 00:11:30.929
the logic? Well, think about history. For centuries,

00:11:31.029 --> 00:11:33.149
human intelligence was scarce. If you could think

00:11:33.149 --> 00:11:35.490
critically, write code, diagnose a disease, or

00:11:35.490 --> 00:11:37.529
draft a contract, you got paid a premium because

00:11:37.529 --> 00:11:38.690
there weren't enough people who could do that.

00:11:38.830 --> 00:11:41.250
High wages exist because skilled thinking is

00:11:41.250 --> 00:11:43.470
limited. Right. Scarcity drives value. That's

00:11:43.470 --> 00:11:46.929
economics 101. Exactly. But the Citrini argument

00:11:46.929 --> 00:11:50.690
is that AI flips that entirely. AI makes intelligence

00:11:50.690 --> 00:11:54.169
cheat. Abundant. When something becomes abundant,

00:11:54.370 --> 00:11:58.090
its price drops toward zero. If intelligence

00:11:58.090 --> 00:12:01.090
becomes cheap, human labor, specifically knowledge

00:12:01.090 --> 00:12:05.090
work, loses its premium value. But we've seen

00:12:05.090 --> 00:12:07.870
technology lower costs before. The loom made

00:12:07.870 --> 00:12:10.570
clothes cheaper. The tractor made food cheaper.

00:12:10.730 --> 00:12:13.230
Why is this different? Because this triggers

00:12:13.230 --> 00:12:15.610
what the source calls a self-reinforcing loop.

00:12:15.769 --> 00:12:17.570
And this is the part that keeps me up at night.

00:12:17.690 --> 00:12:20.990
Walk through the steps. Step one. Companies replace

00:12:20.990 --> 00:12:23.710
expensive professionals with AI agents, like

00:12:23.710 --> 00:12:25.450
that Perplexity Computer we talked about earlier.

00:12:25.590 --> 00:12:28.190
It's cheaper, and it doesn't sleep. Okay, that's

00:12:28.190 --> 00:12:30.529
a rational business decision. Step two. Those

00:12:30.529 --> 00:12:32.669
displaced workers take lower pay or move into

00:12:32.669 --> 00:12:35.049
gig work because the premium jobs are gone. Step

00:12:35.049 --> 00:12:37.610
three, because people are earning less, consumer

00:12:37.610 --> 00:12:39.490
spending drops. You aren't buying a new car or

00:12:39.490 --> 00:12:41.470
going out to dinner if you just took a 50% pay

00:12:41.470 --> 00:12:44.669
cut. And step four. Businesses lose revenue because

00:12:44.669 --> 00:12:47.009
nobody's buying anything. So what do they do

00:12:47.009 --> 00:12:49.429
to survive? They have to cut costs even more.

00:12:49.570 --> 00:12:52.110
They deploy more AI to survive the revenue drop.

00:12:52.289 --> 00:12:54.809
Yeah. It's a loop. And at the exact same time,

00:12:54.889 --> 00:12:57.769
you have personal AI agents removing the middleman

00:12:57.769 --> 00:13:00.179
industries. The source mentioned that specifically.

00:13:00.759 --> 00:13:02.659
Businesses built on friction or convenience,

00:13:02.980 --> 00:13:05.600
middlemen, they start collapsing quietly. Can

00:13:05.600 --> 00:13:07.740
you give me an example of that? Sure. Think about

00:13:07.740 --> 00:13:10.559
a travel agent or even a site like Expedia. Or

00:13:10.559 --> 00:13:12.340
think about an insurance broker. If my personal

00:13:12.340 --> 00:13:14.519
AI agent can negotiate my insurance directly

00:13:14.519 --> 00:13:17.480
with the carrier's AI or book my travel by going

00:13:17.480 --> 00:13:20.539
straight to the airline's API, the entire industry

00:13:20.539 --> 00:13:22.639
that exists just to facilitate that transaction

00:13:22.639 --> 00:13:25.879
vanishes. The margin disappears. The timeline

00:13:25.879 --> 00:13:28.120
here is what struck me. 2028, that's effectively

00:13:28.120 --> 00:13:31.019
tomorrow. Why 2028? That feels incredibly aggressive.

00:13:31.480 --> 00:13:34.100
It is aggressive, but the argument is based on

00:13:34.100 --> 00:13:37.720
the deployment lag. The tech exists now. 2024

00:13:37.720 --> 00:13:41.940
and 2025 are the years of experimentation. 2026

00:13:41.940 --> 00:13:46.320
and 2027 are integration. By 2028, the slow erosion

00:13:46.320 --> 00:13:49.419
becomes visible. It's not an

00:13:49.419 --> 00:13:51.500
explosion where everyone gets fired on a Tuesday.

00:13:51.679 --> 00:13:54.580
It's a deflationary pressure. AI doesn't need

00:13:54.580 --> 00:13:57.019
to eliminate every job instantly. It just needs

00:13:57.019 --> 00:13:59.340
to make intelligence abundant enough that human

00:13:59.340 --> 00:14:01.899
earnings steadily lose value. I want to challenge

00:14:01.899 --> 00:14:03.899
this outlook slightly. Usually when things get

00:14:03.899 --> 00:14:06.720
cheaper, demand goes up. If coding becomes practically

00:14:06.720 --> 00:14:08.980
free, won't we just have more software? Won't

00:14:08.980 --> 00:14:11.259
that create new kinds of jobs we can't even imagine

00:14:11.259 --> 00:14:13.759
yet? That is the counter argument, and it's definitely

00:14:13.759 --> 00:14:16.620
the hopeful one, the Jevons paradox. As efficiency

00:14:16.620 --> 00:14:19.179
increases, consumption increases. But Citrini's

00:14:19.179 --> 00:14:21.179
point is about the transition period. Even if

00:14:21.179 --> 00:14:23.299
we invent new jobs eventually, the gap between

00:14:23.299 --> 00:14:25.740
now and then involves a massive destruction of

00:14:25.740 --> 00:14:28.139
the current value of human labor. We haven't

00:14:28.139 --> 00:14:30.019
figured out what the new value is yet. If the

00:14:30.019 --> 00:14:32.480
economic loop forces companies to automate to

00:14:32.480 --> 00:14:34.940
survive a spending crash, how do we stop it?

00:14:35.000 --> 00:14:38.379
Can we stop it? We don't stop it. The incentives

00:14:38.379 --> 00:14:41.399
are too strong. We have to reinvent value outside

00:14:41.399 --> 00:14:44.679
of intelligence for rent. That is a lot to process.

00:14:44.899 --> 00:14:47.080
But that is exactly why we do this deep dive.

00:14:47.279 --> 00:14:50.000
Let's try to recap the big ideas here so we don't

00:14:50.000 --> 00:14:52.139
leave everyone staring into the void. Good idea.

00:14:52.299 --> 00:14:54.379
Let's pull it back together. We started with

00:14:54.379 --> 00:14:57.299
Nano Banana 2. The takeaway there is that reality

00:14:57.299 --> 00:15:00.100
is now editable. Photos are no longer proof.

00:15:00.259 --> 00:15:02.759
They are just raw materials. Then we looked at

00:15:02.759 --> 00:15:06.399
the agents, Perplexity and Claude. We are moving

00:15:06.399 --> 00:15:09.899
from chatting with AI to managing AI workforces

00:15:09.899 --> 00:15:12.460
that run for months. The human in the loop is

00:15:12.460 --> 00:15:15.059
becoming a manager, not a maker. We talked about

00:15:15.059 --> 00:15:17.460
the scale of the money involved, that $100 billion

00:15:17.460 --> 00:15:20.320
Saudi fund, proving that nation states are betting

00:15:20.320 --> 00:15:22.440
their entire futures on this transition. And

00:15:22.440 --> 00:15:25.419
we ended with the Citrini warning. The idea that

00:15:25.419 --> 00:15:27.659
when intelligence becomes cheap, the economy

00:15:27.659 --> 00:15:30.080
changes shape. It's a shift from scarcity to

00:15:30.080 --> 00:15:32.059
abundance, and that transition is going to be

00:15:32.059 --> 00:15:34.320
bumpy. As we wrap up, I want to leave you with

00:15:34.320 --> 00:15:36.580
a thought. We've spent our whole lives being

00:15:36.580 --> 00:15:39.639
told that being smart, being educated, being

00:15:39.639 --> 00:15:42.019
intelligent was our ticket to security. That

00:15:42.019 --> 00:15:44.679
was the deal. But if intelligence is no longer

00:15:44.679 --> 00:15:47.620
your premium asset, what is? Is it your humanity?

00:15:47.840 --> 00:15:51.399
Your taste? Your ability to deeply connect people?

00:15:51.639 --> 00:15:54.039
That is something to explore on your own. That's

00:15:54.039 --> 00:15:56.279
the question to chew on. If the machine can do

00:15:56.279 --> 00:15:58.120
the thinking, you have to do the feeling and

00:15:58.120 --> 00:16:00.279
the judging. Don't forget to subscribe for the

00:16:00.279 --> 00:16:02.200
next deep dive. We'll keep tracking this as it

00:16:02.200 --> 00:16:05.779
moves. Stay curious. See you next time. [Outro

00:16:05.779 --> 00:16:06.320
music.]
