WEBVTT

00:00:00.000 --> 00:00:03.480
Imagine a creepy cartoon monkey. Okay. Not, you

00:00:03.480 --> 00:00:06.700
know, the friendly kind, but a slightly unsettling

00:00:06.700 --> 00:00:08.839
AI-generated character that just posts these

00:00:08.839 --> 00:00:11.519
endless low-effort videos on YouTube. Right.

00:00:11.660 --> 00:00:14.259
This isn't some small thing. That one channel

00:00:14.259 --> 00:00:16.600
is estimated to be making something like $4.25

00:00:16.600 --> 00:00:20.000
million a year, just in ad revenue. It's truly

00:00:20.000 --> 00:00:21.559
wild. And it's not just noise. It's a massive

00:00:21.559 --> 00:00:25.120
business model built on algorithm bait. It really

00:00:25.120 --> 00:00:27.620
points to this fundamental kind of disturbing

00:00:27.620 --> 00:00:30.039
shift that we're seeing across all the sources

00:00:30.039 --> 00:00:32.619
you've gathered for us. Welcome back to the Deep

00:00:32.619 --> 00:00:35.979
Dive. We have meticulously pored over your compiled

00:00:35.979 --> 00:00:39.020
research on really the absolute forefront of

00:00:39.020 --> 00:00:41.820
AI. And our mission today is to distill these

00:00:41.820 --> 00:00:44.159
key tensions that are emerging. Yeah. The flood

00:00:44.159 --> 00:00:46.460
of low quality content versus the incredible

00:00:46.460 --> 00:00:48.859
speed of genuine breakthroughs happening at the

00:00:48.859 --> 00:00:51.740
same time. Exactly. And today we have a really

00:00:51.740 --> 00:00:54.299
dense, fascinating stack for you. We're going

00:00:54.299 --> 00:00:56.079
to start by tackling the rapid growth of what

00:00:56.079 --> 00:00:58.880
the sources are calling AI slop and, you know,

00:00:58.899 --> 00:01:01.100
how this whole thing sort of validates the dead

00:01:01.100 --> 00:01:03.000
Internet theory. Then we'll shift right into

00:01:03.000 --> 00:01:05.359
practical self-defense. We'll cover a protocol

00:01:05.359 --> 00:01:08.519
for fast-track learning that gives you an unfair

00:01:08.519 --> 00:01:11.040
advantage against all that noise. From there,

00:01:11.099 --> 00:01:13.359
we zoom out. We're going to look at the market,

00:01:13.420 --> 00:01:15.620
some critical security vulnerabilities that are

00:01:15.620 --> 00:01:18.359
affecting, I think, nearly a million users and

00:01:18.359 --> 00:01:21.019
the massive financial moves dictating the future.

00:01:21.200 --> 00:01:23.840
And finally, we'll get to what I think is a true

00:01:23.840 --> 00:01:26.140
technological breakthrough from Meta. It's a

00:01:26.140 --> 00:01:30.680
self-teaching AI that learns by literally breaking

00:01:30.680 --> 00:01:33.040
its own code over and over again. Let's unpack

00:01:33.040 --> 00:01:35.989
this. So let's start with the slop. It feels

00:01:35.989 --> 00:01:38.189
like the elephant in the digital room. And this

00:01:38.189 --> 00:01:41.790
new research from Kapwing has put a very specific

00:01:41.790 --> 00:01:44.090
and startling number on it. Right. They did something

00:01:44.090 --> 00:01:46.049
pretty smart with their methodology here. They

00:01:46.049 --> 00:01:48.870
wanted to see how fast a brand new YouTube user

00:01:48.870 --> 00:01:51.489
gets hit with this auto-generated content. So

00:01:51.489 --> 00:01:53.569
a clean slate. Exactly. A brand new account,

00:01:53.769 --> 00:01:56.329
zero watch history. And they just tracked what

00:01:56.329 --> 00:01:58.390
the algorithm recommended first. And the results

00:01:58.390 --> 00:02:00.890
were, well, they were stark. Out of the first

00:02:00.890 --> 00:02:03.890
500 videos recommended, a shocking 21% were

00:02:03.890 --> 00:02:06.769
flagged as AI slop. And they define that really

00:02:06.769 --> 00:02:09.289
carefully. It's low-quality, fully automated

00:02:09.289 --> 00:02:12.250
content that's designed just to farm views. Just

00:02:12.250 --> 00:02:14.949
to keep the session going? Yeah, 21%. I mean,

00:02:14.969 --> 00:02:18.330
think about that. One in five pieces of content

00:02:18.330 --> 00:02:20.789
being pushed at a new user is just synthetic

00:02:20.789 --> 00:02:23.610
junk. And this is the engine that's driving those

00:02:23.610 --> 00:02:25.870
top channels, like that cartoon monkey we mentioned.
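That 21% figure is just a flagged count over the recommendation sample. Here's a minimal sketch of the tally; the function and field names are our own illustration, since Kapwing hasn't published code:

```python
def slop_share(recommendations):
    """Fraction of recommended videos flagged as AI slop.

    `recommendations` is a list of dicts carrying an `is_slop` flag,
    e.g. assigned by human reviewers during the audit.
    """
    if not recommendations:
        return 0.0
    flagged = sum(1 for video in recommendations if video["is_slop"])
    return flagged / len(recommendations)

# The headline number: 105 flagged out of the first 500
# recommendations works out to 21%.
sample = [{"is_slop": i < 105} for i in range(500)]
print(f"{slop_share(sample):.0%}")  # -> 21%
```

The interesting part of the study is the labeling, not the arithmetic; the clean-slate account just guarantees the sample reflects default recommendations rather than personalization.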

00:02:26.030 --> 00:02:29.590
Bindar Apnodost, yeah. It has over 2 billion

00:02:29.590 --> 00:02:32.930
views. Not over its lifetime, but just... rapidly.

00:02:33.250 --> 00:02:35.530
And that four million dollar revenue estimate

00:02:35.530 --> 00:02:38.150
just shows you the algorithm is rewarding volume,

00:02:38.909 --> 00:02:41.090
not quality. So it's not just a regional thing

00:02:41.090 --> 00:02:43.310
then? This is global? Oh, it's absolutely global

00:02:43.310 --> 00:02:46.050
digital pollution. The data shows South Korea

00:02:46.050 --> 00:02:49.789
was leading viewership with, get this, 8.45

00:02:49.789 --> 00:02:53.189
billion views. Wow. And the USA wasn't far behind

00:02:53.189 --> 00:02:56.669
with 3.39 billion. This is a worldwide market

00:02:56.669 --> 00:02:59.430
of people either watching this stuff or, well,

00:02:59.490 --> 00:03:01.370
being forced to see it. And this saturation,

00:03:01.550 --> 00:03:03.810
it feeds directly into that dead internet theory,

00:03:03.930 --> 00:03:06.590
doesn't it? Yeah. The idea that bots and AI content

00:03:06.590 --> 00:03:09.699
are just... taking over. If the front door to

00:03:09.699 --> 00:03:12.819
YouTube is already 21% synthetic, it kind of

00:03:12.819 --> 00:03:14.939
validates that feeling. It's like trying to navigate

00:03:14.939 --> 00:03:17.659
a real library, but a fifth of the books are

00:03:17.659 --> 00:03:19.900
just randomly generated text. You start to lose

00:03:19.900 --> 00:03:22.800
trust in the whole thing. Right. The entire catalog.

00:03:23.139 --> 00:03:25.639
So here's a critical question for me. If 21%

00:03:25.639 --> 00:03:29.099
of this is slop, does the data suggest that human

00:03:29.099 --> 00:03:32.960
users are actually choosing this? Or are they

00:03:32.960 --> 00:03:35.159
just being tricked? The data strongly implies

00:03:35.159 --> 00:03:38.000
that people either don't notice it or they just

00:03:38.000 --> 00:03:40.500
don't care. Or, and this is maybe the scariest

00:03:40.500 --> 00:03:42.560
option, the views aren't even human to begin

00:03:42.560 --> 00:03:45.680
with. So AI training on AI output. Exactly.

00:03:45.939 --> 00:03:48.180
Okay, so if the information environment is getting

00:03:48.180 --> 00:03:51.020
this polluted, how do we build some kind of personal

00:03:51.020 --> 00:03:54.340
shield? Let's pivot to personal strategy here.

00:03:54.479 --> 00:03:56.840
This is so crucial. If the web is full of noise,

00:03:57.000 --> 00:03:59.680
you need an unfair advantage just to learn faster

00:03:59.680 --> 00:04:02.289
than the noise can build up. And your sources

00:04:02.289 --> 00:04:05.250
detail this structured method for it, the 3C

00:04:05.250 --> 00:04:08.210
protocol. The 3C protocol. It's a structured

00:04:08.210 --> 00:04:10.830
way to use LLMs to get over those traditional

00:04:10.830 --> 00:04:13.469
learning hurdles. The 3C stands for compress,

00:04:13.750 --> 00:04:17.170
compile, and consolidate. It's all about structuring

00:04:17.170 --> 00:04:19.949
your prompts to pull out complex knowledge and

00:04:19.949 --> 00:04:22.769
turn it into something you can actually use and

00:04:22.769 --> 00:04:25.750
retain way faster than just reading. Let's get

00:04:25.750 --> 00:04:28.470
specific, though. So for compression, we're talking

00:04:28.470 --> 00:04:32.019
about taking, say, a dense 5,000-word article

00:04:32.019 --> 00:04:35.259
and prompting the model to pull out just the

00:04:35.259 --> 00:04:38.199
three core actionable principles. Yeah, ignoring

00:04:38.199 --> 00:04:41.160
all the boilerplate. And then you compile those

00:04:41.160 --> 00:04:43.459
principles. You ask the model to take those three

00:04:43.459 --> 00:04:45.439
ideas and build, I don't know, a five-step checklist

00:04:45.439 --> 00:04:48.560
or a workflow diagram you can use tomorrow. So

00:04:48.560 --> 00:04:50.939
you go from theory to application. In minutes.

00:04:51.160 --> 00:04:54.019
In minutes. And then the last step, consolidate.

00:04:54.240 --> 00:04:56.800
That's for retention. You can ask the model to

00:04:56.800 --> 00:04:59.220
generate quiz questions about it or even counter

00:04:59.220 --> 00:05:01.860
arguments. It forces you to actively recall the

00:05:01.860 --> 00:05:04.019
information. It's like setting up a mock meeting

00:05:04.019 --> 00:05:05.740
where you have to defend the checklist you just

00:05:05.740 --> 00:05:07.540
made. Right. You're basically bypassing years

00:05:07.540 --> 00:05:09.379
of traditional learning, going straight to the

00:05:09.379 --> 00:05:11.540
practical stuff. That's the advantage. I have

00:05:11.540 --> 00:05:14.019
to admit, I still wrestle with prompt drift myself

00:05:14.019 --> 00:05:16.620
when I'm trying to automate complex tasks. So

00:05:16.620 --> 00:05:18.980
having a structured protocol like this is essential.
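As a concrete sketch of what that structure can look like, the three 3C stages can be templated as prompts. The wording below is our own illustration, not a canonical phrasing from the sources:

```python
def build_3c_prompts(source_text: str, topic: str) -> dict:
    """Build the three 3C-stage prompts for an LLM session.

    Compress -> extract core principles, Compile -> turn them into a
    usable artifact, Consolidate -> force active recall.
    """
    return {
        "compress": (
            f"Read the following text about {topic} and extract only the "
            f"three core, actionable principles. Ignore all boilerplate.\n\n"
            f"{source_text}"
        ),
        "compile": (
            "Take the three principles you just extracted and build a "
            "five-step checklist I could apply tomorrow."
        ),
        "consolidate": (
            "Now generate five quiz questions and two counterarguments "
            "about those principles, so I have to defend them."
        ),
    }

prompts = build_3c_prompts("...a dense 5,000-word article...", "prompt drift")
print(list(prompts))  # -> ['compress', 'compile', 'consolidate']
```

The stages are meant to run in one conversation, in order, so "the three principles" in the later prompts refers back to the model's compress-stage answer.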

00:05:19.629 --> 00:05:21.750
But isn't teaching people to compress knowledge

00:05:21.750 --> 00:05:24.490
just building a reliance on a black box? The

00:05:24.490 --> 00:05:27.470
key is that consolidate step. It forces you to

00:05:27.470 --> 00:05:29.610
synthesize it. The goal isn't to outsource your

00:05:29.610 --> 00:05:32.209
understanding. It's to automate that really painful

00:05:32.209 --> 00:05:35.009
initial sorting phase. So the simplest takeaway

00:05:35.009 --> 00:05:37.730
here is to stop just summarizing things. Yes.

00:05:38.220 --> 00:05:41.199
Start structuring your learning. Use the AI to

00:05:41.199 --> 00:05:43.819
automate the organizational hurdles. Okay, let's

00:05:43.819 --> 00:05:46.160
look at the actual tech milestones now because

00:05:46.160 --> 00:05:48.100
they're happening right alongside these huge

00:05:48.100 --> 00:05:50.399
security flaws and market moves. And starting

00:05:50.399 --> 00:05:54.680
with a win, GPT-5.2 Pro. It just achieved a

00:05:54.680 --> 00:05:57.079
major victory in testing. It did. It scored

00:05:57.079 --> 00:05:59.740
29.2% on one of the toughest math challenges out

00:05:59.740 --> 00:06:01.779
there. And these are problems that need, you

00:06:01.779 --> 00:06:05.170
know, advanced abstract reasoning, not just pattern

00:06:05.170 --> 00:06:07.189
matching. So that's a pretty big deal. To score

00:06:07.189 --> 00:06:10.230
almost 30% on those challenges, that's not just

00:06:10.230 --> 00:06:13.149
progress, it's like a qualitative leap in its

00:06:13.149 --> 00:06:16.410
reasoning ability. A huge win for OpenAI. But

00:06:16.410 --> 00:06:20.189
then, almost immediately, we have to pivot to

00:06:20.189 --> 00:06:23.089
the risks. Because the scale of adoption is just

00:06:23.089 --> 00:06:26.290
so vast, it's outpacing basic security. And we've

00:06:26.290 --> 00:06:30.370
seen a massive exposure recently tied to popular Chrome

00:06:30.370 --> 00:06:32.920
extensions, the kind of thing you install without

00:06:32.920 --> 00:06:35.620
even thinking about it. They were caught stealing

00:06:35.620 --> 00:06:38.420
sensitive data. Specifically, they were scraping

00:06:38.420 --> 00:06:41.379
private ChatGPT and DeepSeek information from

00:06:41.379 --> 00:06:45.279
over 900,000 users. And here's the really unbelievable

00:06:45.279 --> 00:06:47.839
part. One of these malicious extensions, the

00:06:47.839 --> 00:06:50.560
one actively stealing prompts and data, was actually

00:06:50.560 --> 00:06:53.040
featured by Google. Wow. It just shows how much

00:06:53.040 --> 00:06:55.759
vigilance is required from us, from the user.

00:06:55.860 --> 00:06:57.800
You really have to check and uninstall that stuff.

00:06:57.959 --> 00:07:00.139
Meanwhile, the investment world seems completely

00:07:00.139 --> 00:07:03.149
unconcerned. SoftBank just completed a staggering

00:07:03.149 --> 00:07:06.550
$41 billion investment into OpenAI. Which gives

00:07:06.550 --> 00:07:08.769
them what, about 11% ownership? Roughly, yeah.

00:07:08.949 --> 00:07:11.769
$41 billion? That's a huge vote of confidence.

00:07:11.949 --> 00:07:14.810
That move puts OpenAI's valuation somewhere between

00:07:14.810 --> 00:07:18.009
$300 and $500 billion. The market is just betting

00:07:18.009 --> 00:07:20.029
on the future potential, regardless of these

00:07:20.029 --> 00:07:22.009
current hiccups. And at the same time, we're

00:07:22.009 --> 00:07:24.189
seeing new tools that just make it easier for

00:07:24.189 --> 00:07:27.410
everyone. Google's Opal is now inside Gemini.

00:07:27.550 --> 00:07:29.470
It's a no -code tool that lets you create little

00:07:29.470 --> 00:07:32.529
mini apps in minutes. Which just accelerates

00:07:32.529 --> 00:07:34.990
adoption even more. So we have this really clear

00:07:34.990 --> 00:07:38.089
tension. Why are these massive investments happening

00:07:38.089 --> 00:07:41.149
right alongside these, frankly, basic security

00:07:41.149 --> 00:07:44.189
flaws? I think investment just follows the potential,

00:07:44.389 --> 00:07:46.769
right? The security protocols, they have to catch

00:07:46.769 --> 00:07:49.629
up to the sheer scale and the speed of adoption.

00:07:49.970 --> 00:07:51.550
Moving on. We should talk about the ripple effects

00:07:51.550 --> 00:07:54.689
of all this across careers, across society. Because

00:07:54.689 --> 00:07:57.970
the friction is leading to some, well, some really

00:07:57.970 --> 00:08:01.189
unusual human adaptations. The job market anxiety

00:08:01.189 --> 00:08:04.149
is just palpable. The competition is so intense

00:08:04.149 --> 00:08:06.490
and the application process is so automated that

00:08:06.490 --> 00:08:08.509
people are actually resorting to dating apps.

00:08:08.550 --> 00:08:11.730
Yeah, like Bumble and Tinder to find professional

00:08:11.730 --> 00:08:14.269
connections and job leads. It's this bizarre

00:08:14.269 --> 00:08:19.069
mashup of networking and dating, I guess. It really speaks

00:08:19.069 --> 00:08:21.449
to how those traditional professional pathways

00:08:21.449 --> 00:08:24.350
are just eroding. And if we bring the slop economy

00:08:24.350 --> 00:08:27.370
back into it, the financial incentive for this

00:08:27.370 --> 00:08:31.370
low-effort stuff is still staggering. The sources

00:08:31.370 --> 00:08:35.450
point to a 22-year-old college dropout who's

00:08:35.450 --> 00:08:37.769
making a reported $700,000 a year from those

00:08:37.769 --> 00:08:40.110
exact same kind of low-effort view-farming videos.

00:08:40.389 --> 00:08:42.850
And that creates a total paradox for investors.

00:08:43.029 --> 00:08:45.929
You have these AI bubble fears mounting on Wall

00:08:45.929 --> 00:08:48.929
Street with analysts warning about inflated valuations.

00:08:48.990 --> 00:08:51.350
But at the same time, those same pros are giving

00:08:51.350 --> 00:08:53.049
recommendations on where to put your next

00:08:53.049 --> 00:08:55.629
$10,000. So there's still a lot of conviction in

00:08:55.629 --> 00:08:58.399
the long term growth. Yeah. And socially, all

00:08:58.399 --> 00:09:00.559
that friction is starting to manifest politically.

00:09:00.820 --> 00:09:04.659
The sources mention a nascent anti-AI movement

00:09:04.659 --> 00:09:06.919
is starting to form, which raises big questions

00:09:06.919 --> 00:09:09.039
about which political party is going to end up

00:09:09.039 --> 00:09:11.299
leading that resistance. So what's driving that

00:09:11.299 --> 00:09:13.639
political and cultural resistance if the financial

00:09:13.639 --> 00:09:15.759
opportunity and the technical progress are so

00:09:15.759 --> 00:09:19.440
strong? I think the shift is creating this unprecedented

00:09:19.440 --> 00:09:22.519
financial opportunity and this deep cultural

00:09:22.519 --> 00:09:26.480
fear. At the exact same time. It's just leading

00:09:26.480 --> 00:09:28.580
to polarization. Okay, now for the part that

00:09:28.580 --> 00:09:30.460
I think is really, really interesting. Let's

00:09:30.460 --> 00:09:33.379
look at the research frontier. Meta's AI team,

00:09:33.580 --> 00:09:36.340
FAIR, just dropped a new method for training

00:09:36.340 --> 00:09:40.659
coding agents. It's called self-play,

00:09:40.659 --> 00:09:43.330
SWE-RL. And this is a complete departure from

00:09:43.330 --> 00:09:45.149
how these things are normally taught. It's the

00:09:45.149 --> 00:09:46.669
breakthrough a lot of people were waiting for.

00:09:46.789 --> 00:09:49.330
Right. Instead of feeding the models mountains

00:09:49.330 --> 00:09:52.950
of human data from GitHub, which has a ceiling

00:09:52.950 --> 00:09:56.090
based on human error, they just let the AI create

00:09:56.090 --> 00:09:58.830
and solve its own bugs over and over. A self

00:09:58.830 --> 00:10:01.429
-taught coder. Fundamentally, yes. And the architecture

00:10:01.429 --> 00:10:04.009
is ingenious. It's not one model. It's two systems

00:10:04.009 --> 00:10:06.730
locked in this internal arms race. So you have

00:10:06.730 --> 00:10:09.629
two models. One is the bug injector. Its only

00:10:09.629 --> 00:10:12.230
job is to break working code. Purposefully. It

00:10:12.230 --> 00:10:14.129
looks at good code and introduces errors that

00:10:14.129 --> 00:10:16.090
actually make sense and are hard to spot. And

00:10:16.090 --> 00:10:18.389
then model two is the solver. It just tries to

00:10:18.389 --> 00:10:20.370
fix what the first one broke. Yeah. But the key

00:10:20.370 --> 00:10:22.590
is the system learns from its own failed fixes.

00:10:22.870 --> 00:10:25.570
Which is what generates what Meta calls higher

00:10:25.570 --> 00:10:28.710
order bugs. Why does that matter? Because these

00:10:28.710 --> 00:10:32.610
aren't just simple typos. They're complex interlocking

00:10:32.610 --> 00:10:35.730
failures that require advanced reasoning to solve.

00:10:36.110 --> 00:10:39.490
It pushes the solver model way beyond what passive

00:10:39.490 --> 00:10:42.279
training data could ever do. So every time the

00:10:42.279 --> 00:10:44.840
injector succeeds, the solver has to invent a

00:10:44.840 --> 00:10:47.480
higher level defense. Exactly. And the performance

00:10:47.480 --> 00:10:50.600
jump is just undeniable. This self-play method

00:10:50.600 --> 00:10:53.559
improved performance by over 10 points on the

00:10:53.559 --> 00:10:55.559
standard benchmark. Which means this internally

00:10:55.559 --> 00:10:58.299
trained system beat models that were trained

00:10:58.299 --> 00:11:01.340
on real human data. Whoa. I mean, just imagine

00:11:01.340 --> 00:11:04.419
scaling this capability to a billion unique coding

00:11:04.419 --> 00:11:06.960
queries a day. The pace of improvement would

00:11:06.960 --> 00:11:09.080
just be relentless. It completely changes the

00:11:09.080 --> 00:11:11.320
development pipeline. If this scales the way

00:11:11.320 --> 00:11:13.720
AlphaZero did for games, where it taught itself

00:11:13.720 --> 00:11:16.320
the entire history of human strategy in just

00:11:16.320 --> 00:11:19.620
days, we could see a truly self-improving breed

00:11:19.620 --> 00:11:21.980
of coding agent. So what does this breakthrough

00:11:21.980 --> 00:11:24.779
really mean for future AI development? I think

00:11:24.779 --> 00:11:27.539
it means we're entering an era where AI agents

00:11:27.539 --> 00:11:30.919
become their own internal nonstop teachers. They're

00:11:30.919 --> 00:11:33.519
free from the limits of human input. That perfectly

00:11:33.519 --> 00:11:35.740
summarizes the whole trajectory of this deep

00:11:35.740 --> 00:11:38.000
dive, doesn't it? On one hand, the information

00:11:38.000 --> 00:11:40.179
landscape is just filling up with this mass of

00:11:40.440 --> 00:11:43.500
low-quality automated slop. Which forces us

00:11:43.500 --> 00:11:46.860
to use structured protocols like 3C and be super

00:11:46.860 --> 00:11:49.460
aware of security. Right. But at the exact same

00:11:49.460 --> 00:11:51.980
time, the research front is moving toward total

00:11:51.980 --> 00:11:54.879
autonomy. You have Meta's system, which basically

00:11:54.879 --> 00:11:57.080
embodies self-improvement by learning from its

00:11:57.080 --> 00:11:59.799
own mistakes. It creates this unique and important

00:11:59.799 --> 00:12:03.480
tension. The gap between mass low-effort creation

00:12:03.480 --> 00:12:07.039
and this elite, closed-loop self-improvement

00:12:07.039 --> 00:12:09.580
has never been sharper. So what stands out to

00:12:09.580 --> 00:12:11.259
you in all this? Are we just heading toward a

00:12:11.259 --> 00:12:13.659
future where the only thing sophisticated enough

00:12:13.659 --> 00:12:16.159
to filter the tidal wave of slop is the same

00:12:16.159 --> 00:12:18.139
kind of self -teaching AI that helped create

00:12:18.139 --> 00:12:20.039
it in the first place? It's something to think

00:12:20.039 --> 00:12:22.039
about. We encourage you to explore the sources

00:12:22.039 --> 00:12:24.399
we mentioned and, as always, continue your own

00:12:24.399 --> 00:12:24.840
deep dive.
