WEBVTT

00:00:00.000 --> 00:00:02.299
I remember not that long ago trying to get an

00:00:02.299 --> 00:00:05.799
AI image and you'd wait, you know, maybe a minute

00:00:05.799 --> 00:00:09.240
just to get something weird. Yeah. Or janky or

00:00:09.240 --> 00:00:12.900
with misspelled text. You got it fast, but it

00:00:12.900 --> 00:00:16.100
was rarely precise. That entire tradeoff, that

00:00:16.100 --> 00:00:19.370
assumption. Yeah. It feels like it's been completely

00:00:19.370 --> 00:00:21.429
torn up in the last few months. Welcome back

00:00:21.429 --> 00:00:24.050
to the deep dive. You've brought us a really

00:00:24.050 --> 00:00:26.690
great set of sources today. And I think our mission

00:00:26.690 --> 00:00:28.829
is pretty essential. We're going to filter the

00:00:28.829 --> 00:00:31.050
noise to find the signal. We want to cut through

00:00:31.050 --> 00:00:34.329
all the general AI hype and show you what's actually

00:00:34.329 --> 00:00:37.259
being used right now. by people doing serious

00:00:37.259 --> 00:00:40.899
work. And our roadmap for this dive is, well,

00:00:41.000 --> 00:00:43.619
it's focused on three big shifts. First, we're

00:00:43.619 --> 00:00:45.340
going to look at the image generation arms race,

00:00:45.479 --> 00:00:47.700
specifically how these models are getting this

00:00:47.700 --> 00:00:51.960
surgical precision. Yeah. Then we need to decode

00:00:51.960 --> 00:00:54.119
some of the vocabulary. It gets confusing. We

00:00:54.119 --> 00:00:56.399
need to draw a clear line between what a tool

00:00:56.399 --> 00:00:58.979
is, what an LLM is, and this really critical

00:00:58.979 --> 00:01:01.979
idea of an agent. And finally, we're diving into

00:01:01.979 --> 00:01:04.890
a new study from Perplexity and Harvard. And

00:01:04.890 --> 00:01:07.530
it's fascinating because it kind of debunks the

00:01:07.530 --> 00:01:10.609
common myth about how people use AI agents. The

00:01:10.609 --> 00:01:13.090
robo-butler idea. Exactly. It's not about that

00:01:13.090 --> 00:01:15.370
at all. It's something much more powerful. So

00:01:15.370 --> 00:01:18.049
our goal for you is simple. Walk away from this

00:01:18.049 --> 00:01:20.870
with the core insights you need to, you know,

00:01:20.870 --> 00:01:23.510
move faster and smarter. Okay. Let's unpack this.

00:01:23.569 --> 00:01:25.310
Let's start on the creative side. So the biggest

00:01:25.310 --> 00:01:27.629
headline from our sources was definitely the

00:01:27.629 --> 00:01:32.049
launch of OpenAI's GPT Image 1.5. And this isn't

00:01:32.049 --> 00:01:34.430
just another upgrade. It feels more like a correction

00:01:34.430 --> 00:01:38.269
of all the past failures. For years, the big

00:01:38.269 --> 00:01:39.989
frustration was that you just couldn't trust

00:01:39.989 --> 00:01:43.549
the AI to respect your actual instructions. Exactly.

00:01:43.829 --> 00:01:45.629
I think about those early attempts, you know,

00:01:45.650 --> 00:01:47.269
where you'd ask it to change one little thing,

00:01:47.430 --> 00:01:49.909
like swap a blue car for a red one. And the whole

00:01:49.909 --> 00:01:52.189
image would just melt. Yeah. The background,

00:01:52.329 --> 00:01:54.530
the lighting, the driver's face. Yeah. It would

00:01:54.530 --> 00:01:56.750
all warp. You'd get these weird artifacts. It

00:01:56.750 --> 00:01:59.250
was a mess. It was a distortion nightmare. But

00:01:59.250 --> 00:02:02.170
1.5 introduces what they're calling high-control

00:02:02.170 --> 00:02:05.560
editing. So you can add, remove, or change things

00:02:05.560 --> 00:02:08.219
in the image with surgical precision without

00:02:08.219 --> 00:02:09.800
completely breaking the rest of the picture.

00:02:09.960 --> 00:02:12.319
And that's a huge step toward making assets you

00:02:12.319 --> 00:02:14.439
can actually use in production. And for me, the

00:02:14.439 --> 00:02:16.439
most mind-bending part is the real text support.

00:02:16.659 --> 00:02:19.240
We've all seen that weird, alien-looking gibberish

00:02:19.240 --> 00:02:23.740
text in AI images. Oh, yeah. Now, 1.5 can render

00:02:23.740 --> 00:02:27.379
dense, small fonts perfectly clear inside an

00:02:27.379 --> 00:02:29.800
image. That was always the ultimate test for

00:02:29.800 --> 00:02:32.479
precision. And it feels like they finally cracked

00:02:32.479 --> 00:02:35.340
it. And that capability, it ties right into reliable

00:02:35.340 --> 00:02:37.860
instruction following, which is the key feature

00:02:37.860 --> 00:02:40.479
that really changes the game for professional

00:02:40.479 --> 00:02:42.780
creative teams. Right. The sources have this

00:02:42.780 --> 00:02:46.319
incredible example asking the AI for a six by

00:02:46.319 --> 00:02:50.800
six grid of 36 totally unique items. And it places

00:02:50.800 --> 00:02:53.560
each object exactly in the right tile. That's

00:02:53.560 --> 00:02:56.259
a true whoa moment. It's not just about making

00:02:56.259 --> 00:02:58.360
a pretty picture anymore. It's about executing

00:02:58.360 --> 00:03:01.530
a complex creative brief flawlessly. On the first

00:03:01.530 --> 00:03:03.710
go. And it does it four times faster than the

00:03:03.710 --> 00:03:05.889
1.0 version. That's the real breakthrough, right?

00:03:05.949 --> 00:03:08.550
Speed plus precision. Absolutely. And we have

00:03:08.550 --> 00:03:10.949
to look at this strategically. This wasn't some

00:03:10.949 --> 00:03:13.689
casual release. It was a direct shot at Google,

00:03:13.750 --> 00:03:16.289
right? It was clearly rushed to market to counter

00:03:16.289 --> 00:03:19.789
Google's Nano Banana Pro model. We're seeing

00:03:19.789 --> 00:03:22.370
the market for image generation start to segment

00:03:22.370 --> 00:03:25.009
just like software did years ago. So what does

00:03:25.009 --> 00:03:27.349
that segmentation look like? Because for most

00:03:27.349 --> 00:03:29.889
people, they probably just see two really good

00:03:29.889 --> 00:03:32.389
image generators. Well, they're targeting different

00:03:32.389 --> 00:03:36.210
kinds of users. GPT Image 1.5 is all about speed

00:03:36.210 --> 00:03:39.229
and accessible control. It's for rapid iteration,

00:03:39.689 --> 00:03:42.509
for brainstorming, for the pro user who needs

00:03:42.509 --> 00:03:45.110
10 quick versions of something for a social post.

00:03:45.289 --> 00:03:48.210
So speed and accessibility. Exactly. The new

00:03:48.210 --> 00:03:50.530
images tab, the pre-built styles, it's all built

00:03:50.530 --> 00:03:53.389
for velocity. Nano Banana Pro, on the other hand,

00:03:53.389 --> 00:03:55.840
is slower. But it's specialized for the high-end

00:03:55.840 --> 00:03:58.280
production pipeline. Okay, so more for enterprise.

00:03:58.599 --> 00:04:01.780
Think big enterprise features, massive batch

00:04:01.780 --> 00:04:04.479
processing, the kind of absolute fidelity you

00:04:04.479 --> 00:04:07.199
need for a major brand campaign. It's quality

00:04:07.199 --> 00:04:12.759
over just pure speed. Whoa. Just imagine scaling

00:04:12.759 --> 00:04:15.460
that precise instruction following to a billion

00:04:15.460 --> 00:04:18.139
unique queries a day. If you can trust the AI

00:04:18.139 --> 00:04:21.800
to execute 36 unique commands perfectly every

00:04:21.800 --> 00:04:23.879
single time, that just fundamentally changes

00:04:23.879 --> 00:04:26.579
the job for graphic designers everywhere. So

00:04:26.579 --> 00:04:28.579
if I'm a user, what's the bottom line? How do

00:04:28.579 --> 00:04:31.100
I choose between 1.5 and Nano Banana? If you

00:04:31.100 --> 00:04:33.139
need speed and quick iteration, you go with 1.5.

00:04:33.139 --> 00:04:35.660
If it's enterprise workflows and production

00:04:35.660 --> 00:04:38.259
quality, you lean toward Nano. So we've covered

00:04:38.259 --> 00:04:40.360
the creation side, but all that speed and precision.

00:04:41.129 --> 00:04:43.370
It doesn't mean much if we can't even agree on

00:04:43.370 --> 00:04:45.269
what to call the things we're using. Right. The

00:04:45.269 --> 00:04:47.470
vocabulary is changing so fast and it just creates

00:04:47.470 --> 00:04:50.589
this information fatigue. It really does. People

00:04:50.589 --> 00:04:53.250
feel this massive pressure, this FOMO, like they're

00:04:53.250 --> 00:04:55.170
missing that one key update that's going to change

00:04:55.170 --> 00:04:57.410
everything. Yeah. But the answer isn't to read

00:04:57.410 --> 00:04:59.350
more. It's just to clarify the architecture.

00:04:59.509 --> 00:05:01.250
You only really need to get your head around

00:05:01.250 --> 00:05:03.449
three core categories when people talk about

00:05:03.449 --> 00:05:05.629
AI. Okay. Let's start with the simplest one,

00:05:05.730 --> 00:05:07.829
the most common one. That would be the AI tool.

00:05:08.220 --> 00:05:11.040
A tool is just an app made for one specific task.

00:05:11.220 --> 00:05:14.019
Like image generation. Or video editing or voice

00:05:14.019 --> 00:05:16.680
cloning. They're the single-function apps with

00:05:16.680 --> 00:05:18.920
a clean interface. Simple. Okay, next up are

00:05:18.920 --> 00:05:22.040
the engines that power everything. Exactly. The

00:05:22.040 --> 00:05:24.699
LLMs or large language models. What everyone

00:05:24.699 --> 00:05:26.720
just calls chatbots. These are the big predictive

00:05:26.720 --> 00:05:30.319
models. The brains of the operation. Like ChatGPT,

00:05:30.540 --> 00:05:35.810
Claude. Gemini, Grok. Most of the really powerful

00:05:35.810 --> 00:05:38.529
ones are closed source. The company keeps the

00:05:38.529 --> 00:05:40.569
code secret and they run them on their own servers.

00:05:40.910 --> 00:05:43.430
And what's the alternative to those huge closed

00:05:43.430 --> 00:05:46.209
source brains? That's the third category, open

00:05:46.209 --> 00:05:48.290
source models. You find these on platforms like

00:05:48.290 --> 00:05:50.810
Hugging Face. The code is free, it's public,

00:05:50.889 --> 00:05:53.610
and it's mostly used by coders who want to build

00:05:53.610 --> 00:05:56.329
their own custom apps or run AI on their own

00:05:56.329 --> 00:05:58.290
machines. So they can avoid big tech servers.

00:05:58.470 --> 00:06:00.589
Precisely. Now let's get to the term that's really

00:06:00.589 --> 00:06:02.430
exploding, the one that connects all this together,

00:06:02.629 --> 00:06:06.759
the AI agent. Right. An agent is basically a

00:06:06.759 --> 00:06:08.980
smart software layer you build on top of the

00:06:08.980 --> 00:06:12.019
LLM brain. It's a system that manages and runs

00:06:12.019 --> 00:06:14.240
a whole multi-step workflow for you. So the

00:06:14.240 --> 00:06:16.720
LLM is the brain. The tools are the hands and

00:06:16.720 --> 00:06:18.500
the agent is like the personal assistant running

00:06:18.500 --> 00:06:21.720
the whole show. Okay. So an agent is built to

00:06:21.720 --> 00:06:25.500
take complex step-by-step actions like replying

00:06:25.500 --> 00:06:28.360
to an email, pulling out the key details, making

00:06:28.360 --> 00:06:31.160
a PDF summary, and then scheduling a follow-up

00:06:31.160 --> 00:06:34.370
call. All in one automated flow without you having

00:06:34.370 --> 00:06:36.970
to do each step. It solves that friction point

00:06:36.970 --> 00:06:39.959
we've all felt. We call it prompt drift. You

00:06:39.959 --> 00:06:41.720
know, when you try to chain six different single-task

00:06:41.720 --> 00:06:44.259
tools together, the instructions start to

00:06:44.259 --> 00:06:47.639
get muddled. Oh, absolutely. I still wrestle

00:06:47.639 --> 00:06:49.600
with prompt drift myself. When you're trying

00:06:49.600 --> 00:06:51.379
to combine these different tools into one perfect

00:06:51.379 --> 00:06:53.639
workflow. It's hard. Yeah, it feels like you're

00:06:53.639 --> 00:06:56.620
trying to give complex errands to a very literal

00:06:56.620 --> 00:06:58.920
six-year-old. And that's why the agent concept

00:06:58.920 --> 00:07:01.060
is so critical. It handles the handoff between

00:07:01.060 --> 00:07:03.959
all those tasks. And for the user, just understanding

00:07:03.959 --> 00:07:07.019
this structure is the shortcut. FOMO is basically

00:07:07.019 --> 00:07:09.459
fake unless you're actually building the AI models

00:07:09.459 --> 00:07:11.800
yourself. For the rest of us, it's just about

00:07:11.800 --> 00:07:13.519
how the agent layer lets you automate things

00:07:13.519 --> 00:07:17.120
smarter. So why is the agent concept becoming

00:07:17.120 --> 00:07:20.279
so critical right now? Because agents automate

00:07:20.279 --> 00:07:23.420
entire workflows, and that saves way more time

00:07:23.420 --> 00:07:26.319
than single-task tools ever could. That focus

00:07:26.319 --> 00:07:29.379
on workflow automation. It leads us perfectly

00:07:29.379 --> 00:07:31.540
into our third segment, because this is about

00:07:31.540 --> 00:07:33.500
using real data to cut through the hype. Right.

00:07:33.949 --> 00:07:35.810
The popular story, you know, the one from sci-fi,

00:07:35.810 --> 00:07:39.790
is that the AI is the robo-butler. The assistant

00:07:39.790 --> 00:07:41.689
that books your flights, orders your groceries,

00:07:41.870 --> 00:07:44.069
handles all the little chores. And that is absolutely

00:07:44.069 --> 00:07:46.189
not what the data says is the highest value use.

00:07:46.310 --> 00:07:48.329
The Perplexity and Harvard study you shared

00:07:48.329 --> 00:07:50.449
is so interesting because it just debunks that

00:07:50.449 --> 00:07:53.069
whole myth. So what's the core finding? The core

00:07:53.069 --> 00:07:55.589
finding is that users are using AI agents to

00:07:55.589 --> 00:07:58.240
augment cognitive labor. To think better. To

00:07:58.240 --> 00:08:00.920
think better, not just to delegate chores. They're

00:08:00.920 --> 00:08:03.680
using these tools to expand their own intellectual

00:08:03.680 --> 00:08:07.100
bandwidth. And that really points to where the

00:08:07.100 --> 00:08:09.600
real productivity gains are happening. Can you

00:08:09.600 --> 00:08:11.060
give some examples? What kind of high-level

00:08:11.060 --> 00:08:13.860
tasks were they actually doing? Well, over half

00:08:13.860 --> 00:08:16.680
of the queries were for tasks that involved synthesizing

00:08:16.680 --> 00:08:19.699
huge amounts of complex information. Things like

00:08:19.699 --> 00:08:22.459
summarizing really long documents. Like pulling

00:08:22.459 --> 00:08:25.199
the core arguments from a 50-page report. Exactly.

00:08:25.199 --> 00:08:27.459
Or editing and structuring technical reports.

00:08:27.470 --> 00:08:30.350
managing complex research workflows, and getting

00:08:30.350 --> 00:08:32.970
high-level coursework help. So these are tasks

00:08:32.970 --> 00:08:36.149
that need judgment and integration. Yeah. Not

00:08:36.149 --> 00:08:39.049
just grabbing facts. They're using the agent

00:08:39.049 --> 00:08:42.149
as a cognitive accelerant. And the user demographics

00:08:42.149 --> 00:08:45.330
tracked with this perfectly. The main users weren't

00:08:45.330 --> 00:08:47.490
just everyday consumers. They're knowledge workers.

00:08:47.730 --> 00:08:50.690
Tech folks, marketing strategists, finance professionals,

00:08:51.230 --> 00:08:54.049
academics doing literature reviews, people who

00:08:54.049 --> 00:08:56.789
have to process information at an inhuman speed.

00:08:57.190 --> 00:08:58.629
And there was a pattern there, right? Yeah. A

00:08:58.629 --> 00:09:01.340
very clear predictor. A higher education level

00:09:01.340 --> 00:09:04.740
and a higher GDP correlated directly with more

00:09:04.740 --> 00:09:07.639
agent usage. And it suggests this kind of graduation

00:09:07.639 --> 00:09:11.019
process. What do you mean by that? Well, users

00:09:11.019 --> 00:09:13.120
might start with light stuff, like planning a

00:09:13.120 --> 00:09:15.500
trip. But as soon as they realize the agent can

00:09:15.500 --> 00:09:18.080
handle complex thought, they quickly shift to

00:09:18.080 --> 00:09:20.600
these deeper cognitive tasks. That makes sense.

00:09:20.740 --> 00:09:22.659
The more complex your daily work is, the more

00:09:22.659 --> 00:09:25.230
value you get from an agent right away. But,

00:09:25.269 --> 00:09:27.269
and this is important, we have to be critical

00:09:27.269 --> 00:09:30.649
of the data source. The study was based on Perplexity's

00:09:30.649 --> 00:09:33.070
users. Which is a research-focused platform.

00:09:33.389 --> 00:09:36.110
Right. So their user base is already skewed toward

00:09:36.110 --> 00:09:38.929
more academic and professional queries than,

00:09:39.009 --> 00:09:41.889
say, a general user of standard ChatGPT. So

00:09:41.889 --> 00:09:44.190
does this study accurately represent general

00:09:44.190 --> 00:09:46.799
user behavior? It might not represent the average

00:09:46.799 --> 00:09:49.519
consumer, but it clearly shows advanced cognitive

00:09:49.519 --> 00:09:52.360
tasks are already the highest value use for people

00:09:52.360 --> 00:09:54.779
actively using agents. All right, let's shift

00:09:54.779 --> 00:09:57.259
gears. Time for a rapid fire summary of some

00:09:57.259 --> 00:10:00.980
of the most impactful recent news and strategic

00:10:00.980 --> 00:10:03.340
shifts, the stuff you need to know. Let's start

00:10:03.340 --> 00:10:06.139
with something a bit more cultural. Let's talk

00:10:06.139 --> 00:10:09.000
about slop. Slop. It's officially been named

00:10:09.000 --> 00:10:12.830
the 2025 word of the year. And slop is the term

00:10:12.830 --> 00:10:16.870
for that low-quality, high-volume, AI-generated

00:10:16.870 --> 00:10:19.809
content that's just flooding everything. Search

00:10:19.809 --> 00:10:22.610
engines, social media feeds, even book markets.

00:10:22.809 --> 00:10:24.669
It's a real challenge if you're trying to find

00:10:24.669 --> 00:10:26.870
quality information. You just wade through noise.

00:10:27.169 --> 00:10:30.610
And on the tool side, ChatGPT just launched Skills

00:10:30.610 --> 00:10:34.129
for specific tasks. This is basically them mirroring

00:10:34.129 --> 00:10:36.450
the kind of targeted functionality that Claude

00:10:36.450 --> 00:10:38.570
users have had for a while. So you can tell the

00:10:38.570 --> 00:10:40.629
model what kind of tasks to optimize for. Right.

00:10:40.970 --> 00:10:44.110
Competition is driving feature parity. And speaking

00:10:44.110 --> 00:10:46.509
of competition, there's a big Google rumor to

00:10:46.509 --> 00:10:49.490
watch. What's that? Their DeepMind lead basically

00:10:49.490 --> 00:10:52.049
told people to go bookmark the Hugging Face page,

00:10:52.350 --> 00:10:54.830
strongly hinting that Gemma 4 is coming very

00:10:54.830 --> 00:10:57.129
soon. That could be a major open source release

00:10:57.129 --> 00:10:59.190
that shifts the whole landscape. And a great

00:10:59.190 --> 00:11:02.190
example of a new app using these models is DoorDash

00:11:02.190 --> 00:11:04.960
Zesty. Oh, I saw this. It's a social app for

00:11:04.960 --> 00:11:07.240
finding restaurants, but it uses these really

00:11:07.240 --> 00:11:10.279
specific natural language queries. Things like

00:11:10.279 --> 00:11:13.019
a low-key dinner for introverts with excellent

00:11:13.019 --> 00:11:15.779
lighting. That's a great practical use of an

00:11:15.779 --> 00:11:18.960
LLM to navigate real world data. OK, now let's

00:11:18.960 --> 00:11:20.740
talk about some serious infrastructure moves.

00:11:21.659 --> 00:11:26.320
NVIDIA made a quiet but huge acquisition. SchedMD.

00:11:26.519 --> 00:11:29.059
SchedMD. They're the company behind the Slurm

00:11:29.059 --> 00:11:31.779
workload manager, which is the open source scheduler

00:11:31.779 --> 00:11:33.899
that is absolutely critical for running massive

00:11:33.899 --> 00:11:36.220
AI data centers. That's a heavy acquisition.

00:11:36.580 --> 00:11:38.779
Resource scheduling, deciding what data gets

00:11:38.779 --> 00:11:41.559
processed on which chip and when. That's the

00:11:41.559 --> 00:11:44.620
plumbing of AI training. Yeah. It just strengthens

00:11:44.620 --> 00:11:47.299
NVIDIA's already insane control over the entire

00:11:47.299 --> 00:11:50.179
compute stack from the chip itself all the way

00:11:50.179 --> 00:11:52.419
up to the training software. It's about owning

00:11:52.419 --> 00:11:55.600
the operating system of the AI data center. And

00:11:55.600 --> 00:11:58.139
speaking of control tactics, look at this subtle

00:11:58.139 --> 00:12:00.399
move from OpenAI. What are they doing? They're

00:12:00.399 --> 00:12:03.179
now defaulting free users to the less capable

00:12:03.179 --> 00:12:06.700
GPT-5.2 Instant model. If you want the much

00:12:06.700 --> 00:12:08.279
better performance, you have to manually switch

00:12:08.279 --> 00:12:10.960
over. Ah, that's a classic platform tactic. Make

00:12:10.960 --> 00:12:13.080
the free tier a little less convenient to nudge

00:12:13.080 --> 00:12:16.379
people toward paid plans. And finally, two quick

00:12:16.379 --> 00:12:19.600
utility tools worth checking out. Google's CC.

00:12:20.159 --> 00:12:22.220
It gives you a personalized briefing every morning,

00:12:22.340 --> 00:12:24.600
pulling from your Gmail, calendar, and drive.

00:12:24.960 --> 00:12:27.860
It's the ultimate catch-me-up tool. And Okara.

00:12:27.980 --> 00:12:30.220
And Okara, which lets you chat with a whole bunch

00:12:30.220 --> 00:12:32.720
of different open-source models, Llama, Qwen,

00:12:32.759 --> 00:12:36.139
DeepSeek, all from one single app. So what's

00:12:36.139 --> 00:12:38.500
the biggest infrastructure implication of that

00:12:38.500 --> 00:12:41.000
NVIDIA acquisition? It really just strengthens

00:12:41.000 --> 00:12:43.379
NVIDIA's control over scheduling massive data

00:12:43.379 --> 00:12:45.980
center workloads. Okay, let's pull all this together.

00:12:46.159 --> 00:12:48.679
Let's synthesize the core insights from this

00:12:48.679 --> 00:12:50.899
deep dive for you. We covered a lot, but there

00:12:50.899 --> 00:12:53.600
are really three major takeaways to hold on to

00:12:53.600 --> 00:12:56.519
as things keep changing. Go for it. First, image

00:12:56.519 --> 00:12:58.600
models have hit a critical point of maturity.

00:12:58.940 --> 00:13:02.200
The kind of control and speed we see in GPT Image

00:13:02.200 --> 00:13:06.179
1.5 means these tools are no longer optional

00:13:06.179 --> 00:13:08.679
for any kind of rapid creative work. They've

00:13:08.679 --> 00:13:10.480
solved the precision problem. They've solved

00:13:10.480 --> 00:13:13.860
the precision problem. Second, the real value

00:13:13.860 --> 00:13:16.759
of AI agents isn't handling your chores. The

00:13:16.759 --> 00:13:19.320
Harvard study is pretty clear. Agents are being

00:13:19.320 --> 00:13:22.019
used to augment cognitive labor. They help you

00:13:22.019 --> 00:13:24.779
think, research, and synthesize information better

00:13:24.779 --> 00:13:29.799
and faster. And third, the key to navigating

00:13:29.799 --> 00:13:32.220
all this noise is just understanding the architecture.

00:13:32.600 --> 00:13:34.860
Knowing the difference between a single task

00:13:34.860 --> 00:13:37.879
tool, the LLM engine, and the workflow managing

00:13:37.879 --> 00:13:40.919
agent. That's the essential shortcut. And if

00:13:40.919 --> 00:13:43.779
we take that study's findings seriously. That

00:13:43.779 --> 00:13:47.000
agent usage links so strongly to higher education

00:13:47.000 --> 00:13:50.080
and higher GDP. That brings up a pretty provocative

00:13:50.080 --> 00:13:52.480
question for the future. Yeah. If these agents

00:13:52.480 --> 00:13:54.820
are primarily tools for enhancing high-level

00:13:54.820 --> 00:13:57.240
thought, for accelerating the work of the already

00:13:57.240 --> 00:14:00.960
well-educated. What changes in education or

00:14:00.960 --> 00:14:02.940
training or accessibility do we need to make

00:14:02.940 --> 00:14:05.679
sure their true power benefits society broadly

00:14:05.679 --> 00:14:07.879
and not just, you know, the top tier of knowledge

00:14:07.879 --> 00:14:09.620
workers? That's the challenge for the next five

00:14:09.620 --> 00:14:12.039
years. Keep exploring those edges of knowledge.

00:14:12.279 --> 00:14:14.080
Thank you for sharing your sources and for diving

00:14:14.080 --> 00:14:15.799
deep with us today. We'll talk to you next time.
