WEBVTT

00:00:00.000 --> 00:00:01.960
You know, if you've spent any real time working

00:00:01.960 --> 00:00:04.259
with these large language models, you know that

00:00:04.259 --> 00:00:06.780
feeling, that specific frustration. You feed

00:00:06.780 --> 00:00:09.560
it this huge document, maybe, I don't know, 500

00:00:09.560 --> 00:00:12.500
pages of tech specs, and you ask this complex

00:00:12.500 --> 00:00:15.970
question about one detail buried way deep inside.

00:00:16.269 --> 00:00:18.449
And the model just comes back with, sorry, I

00:00:18.449 --> 00:00:20.809
don't recall that. It's that classic context

00:00:20.809 --> 00:00:23.969
problem, isn't it? The AI just gets overloaded

00:00:23.969 --> 00:00:26.350
and its accuracy kind of drifts off. Exactly.

00:00:26.350 --> 00:00:29.730
Just loses the plot. But what if the AI wasn't

00:00:29.730 --> 00:00:32.409
just passively reading all those tokens? What

00:00:32.409 --> 00:00:35.609
if it could actively debug the document? Imagine

00:00:35.609 --> 00:00:38.289
a system that sort of peeks at different parts,

00:00:38.509 --> 00:00:41.689
asks itself questions, follow-ups, and jumps

00:00:41.689 --> 00:00:43.719
through the data like a... like a really thoughtful

00:00:43.719 --> 00:00:46.259
engineer would. That is exactly the kind of shift

00:00:46.259 --> 00:00:48.600
we're seeing in these sources we looked at. We're

00:00:48.600 --> 00:00:52.679
really diving into a new generation of AI intelligence

00:00:52.679 --> 00:00:54.780
here. It's moving beyond just predicting the

00:00:54.780 --> 00:00:57.219
next word. It's getting into genuine strategic

00:00:57.219 --> 00:01:00.570
reasoning. Welcome to the Deep Dive. Yeah, we've

00:01:00.570 --> 00:01:02.909
got a really fascinating stack of research, some

00:01:02.909 --> 00:01:05.269
current events here, too. Our mission today,

00:01:05.430 --> 00:01:07.390
it's pretty straightforward. We're going to explore

00:01:07.390 --> 00:01:10.650
how AI is getting better at thinking strategically,

00:01:10.810 --> 00:01:13.730
both internally within these massive data sets

00:01:13.730 --> 00:01:15.709
and externally when it's navigating the whole

00:01:15.709 --> 00:01:18.250
web. Yeah, and we've got a pretty clear road map

00:01:18.250 --> 00:01:20.030
laid out. First up, we're going to unpack these

00:01:20.030 --> 00:01:22.469
things called recursive language models, RLMs.

00:01:22.549 --> 00:01:25.310
MIT is using them to basically solve that long

00:01:25.310 --> 00:01:28.010
context blindness. Pretty cool stuff. Right.

00:01:28.519 --> 00:01:31.560
Then we'll hit the current landscape. Quick updates,

00:01:31.840 --> 00:01:33.920
Anthropic's got some new skills. There's this

00:01:33.920 --> 00:01:37.680
surprising flop of an AI pet. Oh, yeah, the Gen

00:01:37.680 --> 00:01:40.200
Z pet, right? Yeah, that one. And also how these

00:01:40.200 --> 00:01:42.620
conflicts with governments are starting to shape

00:01:42.620 --> 00:01:44.400
the market. And finally, we're going to dig into

00:01:44.400 --> 00:01:48.439
Apple's new search model, DeepMMSearch-R1. This

00:01:48.439 --> 00:01:50.980
thing is, well, it's basically a self-correcting

00:01:50.980 --> 00:01:53.099
system that learns to research almost like a

00:01:53.099 --> 00:01:55.079
human debugger. So let's jump into the science

00:01:55.079 --> 00:02:00.030
first. Let's do it. So first segment. MIT's breakthrough

00:02:00.030 --> 00:02:04.129
with recursive language models, RLMs, these seem,

00:02:04.269 --> 00:02:06.760
well, pretty explicitly designed to kill that

00:02:06.760 --> 00:02:09.120
long context problem we mentioned. They absolutely

00:02:09.120 --> 00:02:10.979
are. And what's really fascinating is how they

00:02:10.979 --> 00:02:13.539
do it. So an RLM, think of it like a system where

00:02:13.539 --> 00:02:16.539
the AI takes a giant task, breaks it down into

00:02:16.539 --> 00:02:19.360
smaller, more manageable pieces, and then it

00:02:19.360 --> 00:02:21.979
queries itself to find the answers to those smaller

00:02:21.979 --> 00:02:24.400
pieces. It's basically a model that asks itself

00:02:24.400 --> 00:02:26.280
questions. Okay, that immediately sounds way

00:02:26.280 --> 00:02:28.900
more strategic than just forcing it to read everything

00:02:28.900 --> 00:02:31.099
in one go. But let me push back just a little

00:02:31.099 --> 00:02:33.800
bit. Is this just a fancy new label for a really

00:02:33.800 --> 00:02:36.069
good agent system? Ah, that's a fair question.

00:02:36.430 --> 00:02:38.590
But the distinction, I think, is pretty crucial.

00:02:38.750 --> 00:02:41.009
Instead of trying to cram hundreds of thousands

00:02:41.009 --> 00:02:43.949
of tokens into a single prompt, which, as we

00:02:43.949 --> 00:02:47.569
said, leads to context rot, the RLM kind of adopts

00:02:47.569 --> 00:02:49.710
a developer's mindset. Okay, tell me more about

00:02:49.710 --> 00:02:52.090
that analogy, a developer's mindset. Yeah, imagine

00:02:52.090 --> 00:02:54.789
watching a programmer debugging a huge pile of

00:02:54.789 --> 00:02:56.729
code or data. They don't read every single line

00:02:56.729 --> 00:02:58.849
right. They jump around. So the mechanism is

00:02:58.849 --> 00:03:01.550
kind of elegant. The RLM peeks at chunks of the

00:03:01.550 --> 00:03:03.889
context. Then it does this sort of internal grep,

00:03:04.199 --> 00:03:06.699
searching for specific patterns or keywords,

00:03:06.919 --> 00:03:10.659
maybe like a user ID, user67144 or something,

00:03:10.840 --> 00:03:13.639
it splits the data based on what it finds. And

00:03:13.639 --> 00:03:15.800
then it recursively calls these subqueries to

00:03:15.800 --> 00:03:17.939
focus only on the little segment that's relevant

00:03:17.939 --> 00:03:20.819
for a final answer. Ah, okay. So it's dynamically

00:03:20.819 --> 00:03:23.500
building this chain of thought that's optimized

00:03:23.500 --> 00:03:25.740
for the data structure itself, not just following

00:03:25.740 --> 00:03:28.259
some pre-baked instruction list. Precisely. And

00:03:28.259 --> 00:03:30.280
that's why the performance gains are just wow.

00:03:30.479 --> 00:03:33.300
The sources pointed out that an RLM built on...

00:03:33.479 --> 00:03:36.159
this GPT-5 Mini actually beat the standard

00:03:36.159 --> 00:03:40.060
full-size GPT-5, beat it by 114% in accuracy

00:03:40.060 --> 00:03:43.120
on complex tasks. That's a huge lift, especially

00:03:43.120 --> 00:03:46.819
for a smaller model. 114%. That's a staggering

00:03:46.819 --> 00:03:49.280
number. And importantly, the sources also note

00:03:49.280 --> 00:03:52.199
it kept that accuracy even when the context ballooned

00:03:52.199 --> 00:03:55.460
to like a thousand documents. That's real world

00:03:55.460 --> 00:03:58.680
robustness. Exactly. That robustness is what

00:03:58.680 --> 00:04:00.939
matters if this stuff is actually going to get

00:04:00.939 --> 00:04:04.039
used widely. And going back to your question

00:04:04.039 --> 00:04:07.099
about agents, RLMs decide how to think. They

00:04:07.099 --> 00:04:08.960
figure out the strategy internally. You know,

00:04:08.960 --> 00:04:10.759
traditional agents, they just follow the fixed

00:04:10.759 --> 00:04:13.180
rules you give them up front. This is different.

00:04:13.460 --> 00:04:15.460
OK, so that difference, the internal strategic

00:04:15.460 --> 00:04:17.980
decision making, that feels like it fundamentally

00:04:17.980 --> 00:04:20.420
changes the potential for long form reasoning.

00:04:20.579 --> 00:04:23.019
It absolutely does. It lets the AI strategically

00:04:23.019 --> 00:04:25.560
manage all that information, basically avoiding

00:04:25.560 --> 00:04:28.230
getting overwhelmed. All right, so moving from

00:04:28.230 --> 00:04:31.329
those like lab breakthroughs to what's happening

00:04:31.329 --> 00:04:33.550
out in the market right now. We've got some interesting

00:04:33.550 --> 00:04:35.490
quick hits on the tools people are actually using

00:04:35.490 --> 00:04:38.170
day to day. Yeah, it's fascinating how quickly

00:04:38.170 --> 00:04:41.290
these base models are adding specialized skills

00:04:41.290 --> 00:04:44.500
that you know, actually save us real time. Definitely.

00:04:44.939 --> 00:04:47.680
Take Google NotebookLM. It can now handle arXiv

00:04:47.680 --> 00:04:49.420
papers. So it's kind of like having your own

00:04:49.420 --> 00:04:51.600
personal research professor for academic stuff.

00:04:51.920 --> 00:04:54.040
And Anthropic's back in the mix, letting users

00:04:54.040 --> 00:04:56.339
give Claude specific automation skills. That

00:04:56.339 --> 00:04:58.160
really boosts its usefulness for businesses,

00:04:58.220 --> 00:05:00.199
right? And there was a small tool update, but

00:05:00.199 --> 00:05:03.879
honestly, one I really needed. ChatGPT can now

00:05:03.879 --> 00:05:06.839
automatically manage your saved memories. You

00:05:06.839 --> 00:05:08.660
know, I still wrestle with prompt drift myself

00:05:08.660 --> 00:05:11.500
sometimes. So auto memory management sounds,

00:05:11.680 --> 00:05:14.519
frankly, pretty crucial. Yeah. You can go into

00:05:14.519 --> 00:05:16.660
settings now and prioritize which memories are

00:05:16.660 --> 00:05:18.620
more important. That's a big usability win for

00:05:18.620 --> 00:05:21.699
sure. But shifting gears a bit, let's talk about

00:05:21.699 --> 00:05:25.569
that Gen Z AI pet that just... kind of flopped.

00:05:25.569 --> 00:05:28.009
Oh, right. The stress relief companion. I remember

00:05:28.009 --> 00:05:29.930
the launch hype. It was all about being this

00:05:29.930 --> 00:05:33.009
nonjudgmental friend. Total flop. And the source

00:05:33.009 --> 00:05:34.750
material mentioned that psychologists basically

00:05:34.750 --> 00:05:36.750
called it. They predicted users would just find

00:05:36.750 --> 00:05:39.410
the interaction awkward, not genuinely soothing

00:05:39.410 --> 00:05:42.550
or comforting. Trying to engineer an emotional

00:05:42.550 --> 00:05:45.589
connection with an algorithm. It just felt hollow

00:05:45.589 --> 00:05:48.629
to people. It seems like what we're really looking

00:05:48.629 --> 00:05:52.079
for from AI is genuine utility. And maybe if

00:05:52.079 --> 00:05:54.399
a product tries to lean too hard into that emotional

00:05:54.399 --> 00:05:56.800
side, people just sense the, I don't know, the

00:05:56.800 --> 00:05:58.660
artifice really quickly. Interesting cultural

00:05:58.660 --> 00:06:01.629
read. It really is. But the mood shifts quick

00:06:01.629 --> 00:06:05.470
back to geopolitics. It's not just AI competing

00:06:05.470 --> 00:06:08.209
with AI anymore in terms of capability. The sources

00:06:08.209 --> 00:06:10.490
are really highlighting this AI versus governments

00:06:10.490 --> 00:06:12.589
dynamic now. Yeah, that conflict seems to be

00:06:12.589 --> 00:06:15.170
heating up fast. You had the White House clashing

00:06:15.170 --> 00:06:17.670
directly with Anthropic over AI regulation proposals.

00:06:17.990 --> 00:06:20.889
And some folks in D.C. apparently labeled the

00:06:20.889 --> 00:06:23.209
company's concerns as just fear mongering. Right.

00:06:23.310 --> 00:06:24.990
And this ties straight into the business angle,

00:06:25.050 --> 00:06:27.129
too. Because of these kinds of conflicts and

00:06:27.129 --> 00:06:29.769
the whole push for regulation, AI startups are

00:06:29.769 --> 00:06:32.230
really doubling down on controlling data quality.

00:06:32.410 --> 00:06:35.129
They're starting to see high quality vetted data

00:06:35.129 --> 00:06:38.709
as like the new AI goldmine. It's all about the

00:06:38.709 --> 00:06:41.250
data now. So why do you think this conflict between

00:06:41.250 --> 00:06:43.689
AI companies and government regulation is really

00:06:43.689 --> 00:06:46.589
intensifying right now? Well, the sheer power

00:06:46.589 --> 00:06:49.430
of this new AI to influence society, it just

00:06:49.870 --> 00:06:52.990
demands immediate and really careful governmental

00:06:52.990 --> 00:06:56.269
oversight. It's becoming unavoidable. Mid-roll

00:06:56.269 --> 00:06:59.529
sponsor read placeholder. Okay, let's pivot

00:06:59.529 --> 00:07:02.649
to some... practical, actionable strategies we

00:07:02.649 --> 00:07:04.670
pulled from the sources. We often get caught

00:07:04.670 --> 00:07:06.850
up talking about the huge compute needed for

00:07:06.850 --> 00:07:09.269
these giant models. Yeah. But sometimes just

00:07:09.269 --> 00:07:11.370
a really smart prompt can be the secret weapon.

00:07:11.529 --> 00:07:13.470
Oh, that's a massive understatement based on

00:07:13.470 --> 00:07:15.629
this data we saw. There was one novel prompting

00:07:15.629 --> 00:07:19.410
strategy that gave an LLM a crazy 200% performance

00:07:19.509 --> 00:07:22.529
lift. 200% just in contextual faithfulness, how

00:07:22.529 --> 00:07:24.790
well it stuck to the facts, all from structuring

00:07:24.790 --> 00:07:26.970
the question better. 200% is just incredible,

00:07:27.129 --> 00:07:29.329
especially because, as the source noted, it actually

00:07:29.329 --> 00:07:32.100
beat out... complex methods like supervised fine

00:07:32.100 --> 00:07:34.620
tuning, SFT, and direct preference optimization,

00:07:35.079 --> 00:07:37.720
DPO. Those usually take a ton of resources and

00:07:37.720 --> 00:07:40.259
complex model tuning. Yeah, it really reinforces

00:07:40.259 --> 00:07:43.800
that idea that how you use the tool matters just

00:07:43.800 --> 00:07:46.819
as much, maybe more sometimes, than the raw power

00:07:46.819 --> 00:07:50.079
of the underlying model. But strategy isn't only

00:07:50.079 --> 00:07:52.319
about prompts, right? This brings us to what

00:07:52.319 --> 00:07:54.959
one source called the silent killer of AI projects.

00:07:55.240 --> 00:07:58.500
The silent killer being? Misusing these incredibly

00:07:58.500 --> 00:08:01.759
powerful AI tools to just accelerate a bad idea.

00:08:01.939 --> 00:08:04.500
You might have the best AI, but if you haven't

00:08:04.500 --> 00:08:06.819
correctly defined the actual business problem

00:08:06.819 --> 00:08:09.000
you're trying to solve first, you're essentially

00:08:09.000 --> 00:08:12.300
just failing faster, maybe more expensively. Design

00:08:12.300 --> 00:08:14.759
thinking. That's presented as the real edge here:

00:08:14.759 --> 00:08:17.040
define the problem right. Right, and the sources

00:08:17.040 --> 00:08:19.500
really emphasize that most AI initiatives don't

00:08:19.500 --> 00:08:21.540
collapse because the tech fails. It's usually

00:08:21.540 --> 00:08:24.519
human factors, organizational friction. There was

00:08:24.519 --> 00:08:26.639
mention of needing a framework to handle those

00:08:26.639 --> 00:08:29.699
quote seven workplace personalities during an

00:08:29.699 --> 00:08:31.959
AI shift. You know, the skeptics, the over-enthusiasts,

00:08:31.959 --> 00:08:34.940
the teams working in silos. Exactly. It's the human

00:08:34.940 --> 00:08:37.000
resistance, the poor process definition, that actually

00:08:37.000 --> 00:08:39.120
stops the tech from delivering value. We're hitting

00:08:39.240 --> 00:08:41.919
people problems, not really coding problems anymore.

00:08:42.200 --> 00:08:44.440
We also saw a quick list of some new tools that

00:08:44.440 --> 00:08:46.919
kind of fit this practical problem solving theme.

00:08:47.100 --> 00:08:50.320
Things like AlphaXiv, which converts research papers

00:08:50.320 --> 00:08:53.220
into conversations. Yeah. And Reducto takes documents

00:08:53.220 --> 00:08:56.059
and spits out clean, structured data. Emergent,

00:08:56.059 --> 00:08:58.139
which turns text descriptions into actual working

00:08:58.139 --> 00:09:01.720
apps. And Supercut for auto editing long videos

00:09:01.720 --> 00:09:05.240
into short clips. The focus is clearly on productivity

00:09:05.240 --> 00:09:09.309
gains, making things easier. So, OK, beyond that

00:09:09.309 --> 00:09:12.669
huge prompt lift number, what's the core lesson

00:09:12.669 --> 00:09:15.029
here about using AI effectively? It's got to

00:09:15.029 --> 00:09:16.950
be: define the right business problem first,

00:09:17.029 --> 00:09:19.950
always, before you throw these powerful AI tools

00:09:19.950 --> 00:09:22.580
at it. Okay. Our final big discovery takes us

00:09:22.580 --> 00:09:25.299
back to strategic AI, but this time focused on

00:09:25.299 --> 00:09:28.000
how AI interacts with the outside world. Apple's

00:09:28.000 --> 00:09:30.679
DeepMMSearch-R1. And this is definitely not

00:09:30.679 --> 00:09:32.600
just another simple retrieval model. Right. This

00:09:32.600 --> 00:09:35.019
sounds like a genuinely multimodal LLM. It doesn't

00:09:35.019 --> 00:09:38.039
just search the web. It actually self-corrects

00:09:38.039 --> 00:09:40.080
its own approach in real time if the first results

00:09:40.080 --> 00:09:42.629
aren't good enough. That's exactly it. And its

00:09:42.629 --> 00:09:44.529
abilities are just fascinating because they show

00:09:44.529 --> 00:09:46.909
the kind of strategic thinking we usually associate

00:09:46.909 --> 00:09:50.190
with like highly skilled human researchers. It

00:09:50.190 --> 00:09:52.730
decides when it needs to search and crucially

00:09:52.730 --> 00:09:55.590
what it should search for. It issues actual strategic

00:09:55.590 --> 00:09:58.590
queries to the web. And it handles images smartly

00:09:58.590 --> 00:10:00.970
too, right? If you give it a picture, it apparently

00:10:00.970 --> 00:10:03.389
automatically crops it to zoom in on the important

00:10:03.389 --> 00:10:05.889
part before it searches. So it's prioritizing

00:10:05.889 --> 00:10:08.629
the visual context, not just throwing raw pixels

00:10:08.629 --> 00:10:10.830
at the problem. Yeah, but the self -correction

00:10:10.830 --> 00:10:13.070
loop, that's the real kicker here. It actually

00:10:13.070 --> 00:10:15.169
reflects on the answers it generates. If the

00:10:15.169 --> 00:10:17.789
first batch of web results look kind of weak

00:10:17.789 --> 00:10:20.990
or contradictory, it automatically rewrites its

00:10:20.990 --> 00:10:23.570
own query and searches again. It keeps iterating

00:10:23.570 --> 00:10:26.629
until it finds reliable sources. Wow. That level

00:10:26.629 --> 00:10:30.669
of strategic reflection and... iteration that

00:10:30.669 --> 00:10:33.110
seems incredibly powerful. So how did it actually

00:10:33.110 --> 00:10:35.429
perform compared to the methods we use now, like

00:10:35.429 --> 00:10:38.110
RAG? Well, it significantly outperformed all

00:10:38.110 --> 00:10:40.070
the open source search baselines they tested

00:10:40.070 --> 00:10:42.850
against. Scored something like 21 points higher

00:10:42.850 --> 00:10:45.450
than common RAG workflows. Okay, let's pause

00:10:45.450 --> 00:10:47.429
on RAG for just a second. For anyone listening,

00:10:47.929 --> 00:10:51.370
RAG is retrieval-augmented generation. Basically,

00:10:51.409 --> 00:10:54.230
the AI fetches external documents to add to its

00:10:54.230 --> 00:10:56.309
knowledge before answering. But you're saying

00:10:56.309 --> 00:10:59.399
RAG workflows often add noise. They often do,

00:10:59.519 --> 00:11:01.940
yeah. RAG can sometimes pull in documents that

00:11:01.940 --> 00:11:03.879
aren't truly relevant, maybe just because they

00:11:03.879 --> 00:11:07.080
share some keywords. DeepMMSearch-R1 seems to

00:11:07.080 --> 00:11:09.440
avoid that by being much more targeted and strategic

00:11:09.440 --> 00:11:12.320
in its search. And the sources also noted it

00:11:12.320 --> 00:11:15.320
nearly matched the performance of o3, despite

00:11:15.320 --> 00:11:18.059
running on a much smaller backbone model, Qwen

00:11:18.059 --> 00:11:21.809
2.5-VL 7B. That points to some serious efficiency

00:11:21.809 --> 00:11:25.529
gains. Whoa. Yeah, just imagine scaling a self

00:11:25.529 --> 00:11:27.730
-correcting system like that up to, say, a billion

00:11:27.730 --> 00:11:30.450
queries a day. The efficiency and accuracy improvements

00:11:30.450 --> 00:11:33.929
for any major search platform would be just massive.

00:11:34.269 --> 00:11:36.090
And here's the really crucial bit, the part that

00:11:36.090 --> 00:11:38.289
feels like a paradigm shift. Unlike those big

00:11:38.289 --> 00:11:40.789
retrieval models that need these enormous, constantly

00:11:40.789 --> 00:11:43.370
updated indexes, the system apparently doesn't

00:11:43.370 --> 00:11:45.769
need a huge internal data library. It just learns

00:11:45.769 --> 00:11:48.019
how to use the public web intelligently, like

00:11:48.019 --> 00:11:49.679
a really focused researcher who knows how to

00:11:49.679 --> 00:11:51.919
find things. So if this kind of model doesn't

00:11:51.919 --> 00:11:55.179
need that massive index, how does that change

00:11:55.179 --> 00:11:58.379
the future of AI search, do you think? Well,

00:11:58.419 --> 00:12:00.299
it seems like it shifts the whole focus away

00:12:00.299 --> 00:12:03.419
from just indexing ever more data towards teaching

00:12:03.419 --> 00:12:06.139
the AI how to navigate the web efficiently and

00:12:06.139 --> 00:12:08.539
strategically. It's about skill, not just storage.

00:12:08.700 --> 00:12:11.789
So we started this deep

00:12:11.789 --> 00:12:14.330
dive talking about AI's context blindness, that

00:12:14.330 --> 00:12:16.389
frustration of it forgetting things and long

00:12:16.389 --> 00:12:18.750
documents. And I think our sources today have

00:12:18.750 --> 00:12:22.149
really shown, pretty decisively, that AI is rapidly

00:12:22.149 --> 00:12:24.190
becoming much more strategically smart. It's not

00:12:24.190 --> 00:12:26.049
just about getting bigger anymore. Absolutely,

00:12:26.049 --> 00:12:29.450
we saw kind of two parallel trends tackling that

00:12:29.450 --> 00:12:31.470
original friction point. First, you've got better

00:12:31.470 --> 00:12:34.090
internal reasoning. That's the RLMs handling massive

00:12:34.090 --> 00:12:36.269
context by basically debugging themselves. Right,

00:12:36.269 --> 00:12:38.990
and second, better external interaction. That's

00:12:38.990 --> 00:12:41.629
the DeepMMSearch model navigating the whole

00:12:41.629 --> 00:12:43.669
web like a strategic debugger, correcting its

00:12:43.669 --> 00:12:47.080
own mistakes as it goes. Yeah. So our combined

00:12:47.080 --> 00:12:49.399
takeaway here feels like the main challenges,

00:12:49.460 --> 00:12:51.000
the friction points, they're actually shifting.

00:12:51.100 --> 00:12:53.600
They seem to be moving away from purely technical

00:12:53.600 --> 00:12:56.259
limits like context window size or model parameter

00:12:56.259 --> 00:12:59.259
count and moving more towards human limits. Things

00:12:59.259 --> 00:13:02.019
like poor problem definition or just organizational

00:13:02.019 --> 00:13:06.299
resistance to change. So, OK, here's maybe a

00:13:06.299 --> 00:13:08.480
final provocative thought for you, the listener,

00:13:08.559 --> 00:13:12.480
to consider. The clear emerging trend is AI that

00:13:12.480 --> 00:13:14.960
reasons more programmatically, more strategically.

00:13:15.610 --> 00:13:18.330
So if these LLMs can start self -correcting their

00:13:18.330 --> 00:13:20.289
own web queries, if they can debug their own

00:13:20.289 --> 00:13:22.690
context understanding, does that mean the next

00:13:22.690 --> 00:13:24.730
generation of large models might be entirely

00:13:24.730 --> 00:13:27.149
self -auditing? Makes you wonder, you know, how

00:13:27.149 --> 00:13:29.210
quickly human oversight might shift completely

00:13:29.210 --> 00:13:31.470
towards just high -level strategy and problem

00:13:31.470 --> 00:13:33.750
definition rather than getting bogged down in

00:13:33.750 --> 00:13:35.870
the execution details. Something to think about.

00:13:36.139 --> 00:13:37.700
Thank you for sharing your sources with us for

00:13:37.700 --> 00:13:39.440
this deep dive. We definitely encourage you to

00:13:39.440 --> 00:13:41.679
check out the links provided, especially on design

00:13:41.679 --> 00:13:44.240
thinking and navigating that human friction in

00:13:44.240 --> 00:13:47.279
AI adoption. Seems increasingly important. Outro

00:13:47.279 --> 00:13:47.779
music.
