WEBVTT

00:00:00.000 --> 00:00:03.259
Human beings scored 100% on their very first

00:00:03.259 --> 00:00:06.339
try at a new intelligence test. Yeah, and that's

00:00:06.339 --> 00:00:09.480
a completely wild statistic. It really is, because

00:00:09.480 --> 00:00:12.259
the smartest, most expensive AI models on Earth,

00:00:12.419 --> 00:00:15.580
they scored under 1%. Right, which is just a

00:00:15.580 --> 00:00:17.679
staggering reality check for the whole industry.

00:00:17.940 --> 00:00:20.739
We kind of, um, we think these machines are invincible.

00:00:21.100 --> 00:00:23.120
Today, we're looking at proof that they really

00:00:23.120 --> 00:00:25.300
aren't. Physics and logic always get a vote in

00:00:25.300 --> 00:00:27.879
the end. Welcome to the Deep Dive. If you've

00:00:27.879 --> 00:00:30.320
been watching the AI space lately, you probably

00:00:30.320 --> 00:00:32.399
feel like the ground is shifting under your feet

00:00:32.399 --> 00:00:34.500
daily. Oh, absolutely. It's moving incredibly

00:00:34.500 --> 00:00:36.740
fast. You are not alone in feeling that way.

00:00:37.140 --> 00:00:39.600
Today, we are taking your sources and looking

00:00:39.600 --> 00:00:42.700
at a massive reality check on artificial general

00:00:42.700 --> 00:00:45.399
intelligence. And we're also digging into the

00:00:45.399 --> 00:00:47.939
escalating boardroom drama happening behind the

00:00:47.939 --> 00:00:50.579
scenes, which is intense. Yeah, we'll see how

00:00:50.579 --> 00:00:52.359
platforms like Apple and Reddit are adapting.

00:00:52.600 --> 00:00:55.479
And finally, we'll look at a terrifying new wave

00:00:55.479 --> 00:00:58.600
of vibe-coded malware. That malware story, I

00:00:58.600 --> 00:01:00.500
mean, it changes the security landscape entirely.

00:01:00.840 --> 00:01:02.960
We're going to get to that. But first, let's

00:01:02.960 --> 00:01:05.719
talk about actual capabilities to, you know,

00:01:05.739 --> 00:01:08.319
ground this discussion. Yeah, we really need

00:01:08.319 --> 00:01:10.400
to start with the ARC Prize Foundation. They

00:01:10.400 --> 00:01:13.579
just released ARC-AGI-3. Right. And this is

00:01:13.579 --> 00:01:16.620
a very highly anticipated benchmark. It is. It's

00:01:16.620 --> 00:01:20.219
designed to test true adaptability, not just

00:01:20.219 --> 00:01:22.060
what a model has memorized from the Internet.

00:01:22.319 --> 00:01:24.500
It tests how a model reacts to the completely

00:01:24.500 --> 00:01:28.159
unknown. The frontier models had zero prior training

00:01:28.159 --> 00:01:30.019
on these puzzles. None at all. We're talking

00:01:30.019 --> 00:01:33.459
about 135 brand new environments. And there are

00:01:33.459 --> 00:01:36.780
roughly 1,000 puzzle levels in total, right?

00:01:36.959 --> 00:01:39.400
Exactly. And visually, these puzzles are just

00:01:39.400 --> 00:01:41.959
simple grids. You have colored squares arranged

00:01:41.959 --> 00:01:44.040
in patterns. So you just have to figure out the

00:01:44.040 --> 00:01:46.480
underlying logical rule to solve it. Yeah, it's

00:01:46.480 --> 00:01:49.629
highly abstract. But humans find it... incredibly

00:01:49.629 --> 00:01:51.530
intuitive. Right. So when I hear that humans

00:01:51.530 --> 00:01:54.170
ace this, I assume GPT or Claude is right behind

00:01:54.170 --> 00:01:56.290
us. You would think so, but it was an absolute

00:01:56.290 --> 00:01:58.689
bloodbath. They completely fell flat. How bad

00:01:58.689 --> 00:02:03.239
was it? Well, Gemini 3.1 Pro scored 0.37%.

00:02:03.239 --> 00:02:10.659
Wow. And GPT 5.4? GPT 5.4 scored 0.26%. That

00:02:10.659 --> 00:02:13.860
is unbelievably low. Claude Opus 4.6 scored

00:02:13.860 --> 00:02:18.539
0.25%. Right. And Grok 4.2? It scored exactly

00:02:18.539 --> 00:02:21.800
0%. Zero. Not even a fraction of a percent. Yeah,

00:02:21.840 --> 00:02:24.159
they just hit a brick wall. The gap between human

00:02:24.159 --> 00:02:26.639
and machine adaptability is still massive because

00:02:26.639 --> 00:02:28.979
humans solved every single environment on the

00:02:28.979 --> 00:02:31.379
first attempt. We can look at a grid and just

00:02:31.379 --> 00:02:35.280
intuitively grasp symmetry or gravity. And AI

00:02:35.280 --> 00:02:37.659
models don't possess that intuition at all. Not

00:02:37.659 --> 00:02:39.460
yet. You can actually play these public games

00:02:39.460 --> 00:02:41.759
yourself to see. You just see the pattern and

00:02:41.759 --> 00:02:44.099
apply it. It's about spotting underlying logical

00:02:44.099 --> 00:02:46.639
rules on the fly. It's not about reciting memorized

00:02:46.639 --> 00:02:49.379
facts from a database. Exactly. That's the core

00:02:49.379 --> 00:02:51.400
difference we're talking about here. And we should

00:02:51.400 --> 00:02:53.500
probably define our terms clearly. Good idea.

00:02:53.620 --> 00:02:56.759
Let's define it. AGI. Software that can learn

00:02:56.759 --> 00:02:59.599
any cognitive task humans do. Right. That is

00:02:59.599 --> 00:03:01.800
the holy grail for these companies. But right

00:03:01.800 --> 00:03:04.710
now, we're falling incredibly short. Although,

00:03:04.770 --> 00:03:06.430
I mean, we should mention the critics of this

00:03:06.430 --> 00:03:09.569
specific test. Not everyone agrees the ARC benchmark

00:03:09.569 --> 00:03:13.389
is fair. That's true. The scoring system is notoriously

00:03:13.389 --> 00:03:16.250
demanding on the models. Because the AI must

00:03:16.250 --> 00:03:19.710
match or exceed human problem-solving speed,

00:03:19.909 --> 00:03:22.550
right? Yeah. If the model takes too long to process

00:03:22.550 --> 00:03:25.830
the logic, it scores poorly. And critics say

00:03:25.830 --> 00:03:28.870
that demands too much compute overhead. Exactly.

00:03:29.069 --> 00:03:32.270
They argue it skews the final results artificially

00:03:32.270 --> 00:03:35.669
low. But the philosophical conclusion remains

00:03:35.669 --> 00:03:38.610
incredibly profound either way. It really does.

00:03:38.729 --> 00:03:41.710
If a future model passes this test, it will be

00:03:41.710 --> 00:03:45.129
fundamentally different. Right. True AGI likely

00:03:45.129 --> 00:03:47.490
won't just be a scaled up version of current

00:03:47.490 --> 00:03:50.370
tech. It needs a totally new type of intelligence.

00:03:50.669 --> 00:03:52.469
Current models are brilliant at pattern matching

00:03:52.469 --> 00:03:55.219
because they've read the whole Internet. But

00:03:55.219 --> 00:03:57.060
they lack flexible reasoning. They don't know

00:03:57.060 --> 00:03:59.379
how to think laterally. Yeah, they really struggle

00:03:59.379 --> 00:04:01.659
to adapt when the rules suddenly change. So if

00:04:01.659 --> 00:04:03.780
I'm understanding this right, current AI is just

00:04:03.780 --> 00:04:06.360
matching data points. It is like stacking Lego

00:04:06.360 --> 00:04:09.180
blocks of data. Yeah, exactly. Eventually, you

00:04:09.180 --> 00:04:11.259
need a totally different toy to build a working

00:04:11.259 --> 00:04:13.400
engine. You can't just keep adding more plastic

00:04:13.400 --> 00:04:15.500
bricks. That is a brilliant way to visualize

00:04:15.500 --> 00:04:17.980
it. You can build a massive, beautiful plastic

00:04:17.980 --> 00:04:20.439
car. But it won't actually drive down the street.

00:04:20.639 --> 00:04:22.730
Right. You fundamentally need a real combustion

00:04:22.730 --> 00:04:26.069
engine for that. Plastic bricks won't help. So

00:04:26.069 --> 00:04:28.649
is throwing more compute at these models basically

00:04:28.649 --> 00:04:32.290
a dead end for adaptability? Well, scale definitely

00:04:32.290 --> 00:04:34.589
brings better pattern matching. You get better

00:04:34.589 --> 00:04:37.769
coding syntax and fewer hallucinations. But it

00:04:37.769 --> 00:04:40.569
doesn't magically create that spontaneous reasoning

00:04:40.569 --> 00:04:43.269
AGI requires. No, it doesn't. You eventually

00:04:43.269 --> 00:04:45.689
hit a fundamental architectural wall. Got it.

00:04:45.889 --> 00:04:49.129
Brute force alone will not build true adaptable

00:04:49.129 --> 00:04:52.500
intelligence. Precisely. And honestly, that looming

00:04:52.500 --> 00:04:55.199
technological wall is causing sheer panic. Yeah,

00:04:55.279 --> 00:04:57.519
if pure computing power won't solve this AGI

00:04:57.519 --> 00:05:00.160
wall, that totally explains the panic in those

00:05:00.160 --> 00:05:03.339
leaked Slack messages. The CEOs know they can't

00:05:03.339 --> 00:05:04.879
just buy their way to the finish line anymore.

00:05:05.199 --> 00:05:08.180
The human scramble is intensifying. Titans are

00:05:08.180 --> 00:05:10.300
fighting to control the narrative and, well,

00:05:10.439 --> 00:05:12.759
the funding. The boardroom drama is escalating

00:05:12.759 --> 00:05:15.720
to a fever pitch. The stakes are unimaginably

00:05:15.720 --> 00:05:18.879
high. We have these leaked messages between major

00:05:18.879 --> 00:05:21.899
players. Sam Altman apparently tried to, quote,

00:05:22.060 --> 00:05:25.100
save Anthropic. Yeah, and this was during a massive

00:05:25.100 --> 00:05:28.740
Pentagon contract clash. The messages are incredibly

00:05:28.740 --> 00:05:31.800
revealing. They show the raw, unfiltered tension

00:05:31.800 --> 00:05:34.220
behind the scenes. This isn't just friendly competition.

00:05:34.519 --> 00:05:38.000
Not at all. Altman accused Anthropic CEO Dario

00:05:38.000 --> 00:05:41.639
of actively undermining OpenAI. He claimed this

00:05:41.639 --> 00:05:43.839
sabotage had been going on for years. They're

00:05:43.839 --> 00:05:46.040
fighting over these lucrative government defense

00:05:46.040 --> 00:05:48.939
contracts. It really highlights the intense psychological

00:05:48.939 --> 00:05:51.120
pressure these leaders are under right now. They

00:05:51.120 --> 00:05:53.980
are racing toward a wall, and they know it. And

00:05:53.980 --> 00:05:56.100
they need infinite capital to break through.

00:05:56.300 --> 00:05:58.629
Which brings us to the fundraising. It completely

00:05:58.629 --> 00:06:01.050
reflects those stakes. Look at Reflection AI.

00:06:01.350 --> 00:06:03.430
They're backed by Nvidia and they're raising

00:06:03.430 --> 00:06:07.550
$2.5 billion. Just let that number sink in.

00:06:07.629 --> 00:06:10.970
They're targeting a $25 billion valuation. That

00:06:10.970 --> 00:06:13.949
is an astronomical amount of capital for a relatively

00:06:13.949 --> 00:06:16.370
new player. It is. And they want to compete directly

00:06:16.370 --> 00:06:19.149
with Chinese AI dominance. JP Morgan might even

00:06:19.149 --> 00:06:22.269
join the round. It's a monumental financial mobilization.

00:06:22.449 --> 00:06:25.110
It feels like a new space race entirely. Meanwhile,

00:06:25.329 --> 00:06:27.490
the political maneuvering is just as intense.

00:06:28.230 --> 00:06:30.730
Regulation is rapidly becoming the new battlefield.

00:06:31.069 --> 00:06:33.529
Whoever writes the rules essentially controls

00:06:33.529 --> 00:06:35.990
the future market. Right. And the incoming Trump

00:06:35.990 --> 00:06:39.029
administration just appointed a new tech advisory

00:06:39.029 --> 00:06:43.000
panel focused on AI regulation. Mark Zuckerberg

00:06:43.000 --> 00:06:45.879
from Meta is on it. Larry Ellison from Oracle

00:06:45.879 --> 00:06:49.360
is there. Jensen Huang from NVIDIA is also included.

00:06:49.600 --> 00:06:51.839
It's a fascinating dynamic. You have these massive

00:06:51.839 --> 00:06:53.639
tech titans sitting at the government table,

00:06:53.819 --> 00:06:56.279
drafting the playbook. Regardless of where you

00:06:56.279 --> 00:06:58.920
sit politically, just objectively, it's a massive

00:06:58.920 --> 00:07:01.360
alignment between tech and government. Oh, absolutely.

00:07:01.459 --> 00:07:03.800
It's a major shift in power dynamics for the

00:07:03.800 --> 00:07:06.139
next decade. You know, I still wrestle with prompt

00:07:06.139 --> 00:07:09.660
drift myself. So imagining CEOs navigating multibillion

00:07:09.660 --> 00:07:12.540
dollar boardroom clashes is just wild. Yeah,

00:07:12.620 --> 00:07:14.560
it feels completely surreal. You're just trying

00:07:14.560 --> 00:07:17.740
to get a chat bot to format a simple email. Exactly.

00:07:18.120 --> 00:07:20.220
Meanwhile, these guys are fighting over Pentagon

00:07:20.220 --> 00:07:22.939
contracts and billions of dollars. It's almost

00:07:22.939 --> 00:07:25.199
Shakespearean. But it makes you wonder about

00:07:25.199 --> 00:07:28.160
the actual technological progress. Does all this

00:07:28.160 --> 00:07:29.959
boardroom and political maneuvering actually

00:07:29.959 --> 00:07:33.720
speed up innovation or just stall it? Massive

00:07:33.720 --> 00:07:35.980
capital and regulation usually define the playing

00:07:35.980 --> 00:07:38.500
field. They might distract from the core science

00:07:38.500 --> 00:07:41.139
momentarily. But they ultimately dictate who

00:07:41.139 --> 00:07:43.879
gets the resources to build the future. Exactly.

00:07:44.139 --> 00:07:47.180
The science requires immense resources, massive

00:07:47.180 --> 00:07:50.240
data centers, and favorable laws. So the boardroom

00:07:50.240 --> 00:07:52.620
fights today will dictate the technology we get

00:07:52.620 --> 00:07:55.000
tomorrow. Unfortunately, yes. It doesn't exist

00:07:55.000 --> 00:07:57.660
in a vacuum. And while the executives fight over

00:07:57.660 --> 00:08:00.240
the big picture, the consumer platforms are moving

00:08:00.240 --> 00:08:03.399
fast. They're quietly building the actual infrastructure

00:08:03.399 --> 00:08:05.800
you use every single day. They want to capture

00:08:05.800 --> 00:08:08.519
users completely before the dust settles. They're

00:08:08.519 --> 00:08:11.019
building invisible walled gardens. Yeah, they

00:08:11.019 --> 00:08:12.879
want you securely locked into their specific

00:08:12.879 --> 00:08:16.500
ecosystem. Look at Apple. Apple is making a massive

00:08:16.500 --> 00:08:20.040
uncharacteristic move with iOS 27. They're opening

00:08:20.040 --> 00:08:23.439
up Siri. To rival AI assistants, which is a huge

00:08:23.439 --> 00:08:26.019
philosophical shift for Apple. They usually keep

00:08:26.019 --> 00:08:28.480
everything tightly closed off. Right. It uses

00:08:28.480 --> 00:08:31.879
a brand new extensions system. Gemini, Claude,

00:08:31.920 --> 00:08:34.820
and others can plug right into the OS. It turns

00:08:34.820 --> 00:08:38.580
the iPhone into a multi-model AI hub. You aren't

00:08:38.580 --> 00:08:41.580
stuck with just one brain anymore. It's a brilliant

00:08:41.580 --> 00:08:44.389
strategic play. Apple owns the hardware in your

00:08:44.389 --> 00:08:46.950
pocket and the user interface. By opening it

00:08:46.950 --> 00:08:49.110
up, they let the models fight for your queries.

00:08:49.690 --> 00:08:52.549
Apple still wins either way. And Google is aggressively

00:08:52.549 --> 00:08:55.190
fighting for your loyalty, too. They just launched

00:08:55.190 --> 00:08:57.409
brand new switching tools. Yeah, you can easily

00:08:57.409 --> 00:08:59.950
import your entire chat history and pull memories

00:08:59.950 --> 00:09:02.769
from ChatGPT directly into Gemini. Google is

00:09:02.769 --> 00:09:05.049
making it frictionless. Right. And basically

00:09:05.049 --> 00:09:07.269
offering to move your digital furniture for free.

00:09:07.409 --> 00:09:10.009
Exactly. They know the biggest barrier to switching

00:09:10.009 --> 00:09:13.200
is losing your data. They want to remove literally

00:09:13.200 --> 00:09:16.100
any excuse you have for staying behind. But Reddit

00:09:16.100 --> 00:09:18.100
is taking a completely different approach. They

00:09:18.100 --> 00:09:21.279
aren't trying to absorb AI. No, they are actively

00:09:21.279 --> 00:09:23.840
fighting automated content. They're heavily testing

00:09:23.840 --> 00:09:26.659
bot labels right now. And implementing passkeys

00:09:26.659 --> 00:09:29.460
for stricter authentication. They're even testing

00:09:29.460 --> 00:09:33.179
optional World ID scans. Right. AI posts are

00:09:33.179 --> 00:09:36.100
still allowed, but they desperately want to clearly

00:09:36.100 --> 00:09:39.179
separate humans from bots. They want to slow

00:09:39.179 --> 00:09:42.320
the massive surge of fake, automated activity

00:09:42.320 --> 00:09:44.940
on the platform. It's a genuine existential threat.

00:09:45.080 --> 00:09:47.399
Think about the dead internet theory. If users

00:09:47.399 --> 00:09:49.820
can't trust who they're talking to, the platform

00:09:49.820 --> 00:09:52.259
dies. For listeners trying to navigate all this

00:09:52.259 --> 00:09:54.360
chaos, there are good resources out there. Yeah,

00:09:54.399 --> 00:09:56.259
the Stanford course mentioned in the sources

00:09:56.259 --> 00:09:58.960
is fantastic. If AI explanations feel either

00:09:58.960 --> 00:10:02.320
too basic or way too technical, it sits perfectly

00:10:02.320 --> 00:10:04.539
right in the middle. I highly recommend it for

00:10:04.539 --> 00:10:06.600
anyone feeling overwhelmed by the constant noise.

00:10:07.259 --> 00:10:09.899
It balances clarity and depth beautifully. It

00:10:09.899 --> 00:10:11.759
really helps cut through the hype. But looking

00:10:11.759 --> 00:10:14.120
at all these platform wars, I have to ask, will

00:10:14.120 --> 00:10:16.220
everyday users actually care enough to migrate

00:10:16.220 --> 00:10:18.399
their chat histories between these massive models?

00:10:18.639 --> 00:10:20.899
History shows that convenience and ecosystem

00:10:20.899 --> 00:10:23.980
integration always drive consumer behavior in

00:10:23.980 --> 00:10:26.149
the end. People take the path of least resistance.

00:10:26.370 --> 00:10:28.610
Make the transition effortless and the users

00:10:28.610 --> 00:10:31.570
will definitely follow. It is the absolute golden

00:10:31.570 --> 00:10:33.789
rule of tech platforms. So we're integrating

00:10:33.789 --> 00:10:37.750
AI into all of our devices, our phones, browsers,

00:10:38.110 --> 00:10:41.570
social networks. But the foundational open source

00:10:41.570 --> 00:10:44.750
code building these systems is incredibly fragile.

00:10:45.049 --> 00:10:47.389
This is where the story takes a very dark turn.

00:10:47.570 --> 00:10:50.809
We are basically building massive skyscrapers

00:10:50.809 --> 00:10:53.529
on a foundation of sand. The security implications

00:10:53.529 --> 00:10:56.789
are genuinely terrifying. Two big stories in

00:10:56.789 --> 00:10:59.090
Silicon Valley just collided. Yeah, regarding

00:10:59.090 --> 00:11:02.429
LiteLLM. It's a massive open source project with

00:11:02.429 --> 00:11:05.850
over 40,000 GitHub stars. Thousands of commercial

00:11:05.850 --> 00:11:07.970
forks depend heavily on it. It's a foundational

00:11:07.970 --> 00:11:10.850
building block for AI apps. And it was just hit

00:11:10.850 --> 00:11:13.710
by heavily hidden malware. The malicious code

00:11:13.710 --> 00:11:16.149
was buried deep inside a software dependency.

00:11:16.149 --> 00:11:18.169
It was discovered by an independent researcher

00:11:18.169 --> 00:11:20.330
named Callum McMahon, right? Yeah, he was just

00:11:20.330 --> 00:11:23.000
trying to install the package normally. And suddenly,

00:11:23.159 --> 00:11:25.759
his computer randomly shut down. Just went completely

00:11:25.759 --> 00:11:28.779
black. And that weird behavior led to the discovery

00:11:28.779 --> 00:11:31.200
of the malicious code. If it hadn't crashed his

00:11:31.200 --> 00:11:33.419
machine, it might have gone completely unnoticed

00:11:33.419 --> 00:11:36.460
for months. And here is the truly alarming part.

00:11:37.039 --> 00:11:40.059
Andrej Karpathy and other top researchers weighed

00:11:40.059 --> 00:11:43.279
in on the code. Right. They believed this malware

00:11:43.279 --> 00:11:46.379
was vibe -coded. Meaning it was quickly generated

00:11:46.379 --> 00:11:49.460
using AI without deep human oversight. Whoa,

00:11:49.580 --> 00:11:52.139
imagine scaling malware creation at the speed

00:11:52.139 --> 00:11:55.120
of thought just by vibe coding. You just tell

00:11:55.120 --> 00:11:57.179
an AI what you want to exploit and it writes

00:11:57.179 --> 00:11:59.500
it. It lowers the barrier to entry to almost

00:11:59.500 --> 00:12:02.220
zero. And the malware was incredibly aggressive.

00:12:02.480 --> 00:12:05.360
It tried to steal local login credentials immediately.

00:12:05.820 --> 00:12:08.019
It also tried to access connected developer accounts.

00:12:08.240 --> 00:12:10.360
It desperately wanted to spread into other open

00:12:10.360 --> 00:12:12.620
source packages. It acts like a digital virus

00:12:12.620 --> 00:12:16.029
looking for a new host. It's a classic supply

00:12:16.029 --> 00:12:18.049
chain issue. We should define that for clarity.

00:12:18.389 --> 00:12:21.409
Supply chain attacks, hackers hiding malware

00:12:21.409 --> 00:12:23.929
inside trusted software updates. It's like hiring

00:12:23.929 --> 00:12:26.009
the best security guards for your bank, but the

00:12:26.009 --> 00:12:28.230
architect used a blueprint written by a robber.

00:12:28.269 --> 00:12:30.269
The guards can't protect you because the vault

00:12:30.269 --> 00:12:32.730
itself is compromised from the inside. Exactly.

00:12:32.889 --> 00:12:35.269
Now, LiteLLM developers responded very quickly.

00:12:35.330 --> 00:12:37.279
They're working directly with Mandiant. They

00:12:37.279 --> 00:12:39.759
need to fully investigate the extent of the issue.

00:12:39.960 --> 00:12:43.200
But there is a glaring detail here regarding

00:12:43.200 --> 00:12:45.980
corporate compliance. Yeah, LiteLLM actually

00:12:45.980 --> 00:12:48.679
displayed prominent security certifications on

00:12:48.679 --> 00:12:51.720
their site. They featured SOC 2 compliance and

00:12:51.720 --> 00:12:55.539
had ISO 27001 certifications. And these were

00:12:55.539 --> 00:12:58.299
issued by a prominent AI compliance startup named

00:12:58.299 --> 00:13:01.340
Delve. Delve has faced intense criticism recently.

00:13:01.580 --> 00:13:04.580
People question how reliable its automated certification

00:13:04.580 --> 00:13:07.899
process really is. The company vehemently denies

00:13:07.899 --> 00:13:10.659
these claims, but it highlights a crucial nuance

00:13:10.659 --> 00:13:13.539
about security theater. Certifications show good

00:13:13.539 --> 00:13:15.740
organizational practices. They cannot guarantee

00:13:15.740 --> 00:13:17.960
actual protection against sophisticated supply

00:13:17.960 --> 00:13:20.960
chain attacks. They only check high -level policies.

00:13:21.159 --> 00:13:22.919
They don't check every single line of code in

00:13:22.919 --> 00:13:24.659
a buried third-party dependency. There is a

00:13:24.659 --> 00:13:27.700
massive blind spot. So if the attackers use AI

00:13:27.700 --> 00:13:30.480
to generate these hidden exploits, can automated

00:13:30.480 --> 00:13:32.860
compliance systems ever keep up if the attacks

00:13:32.860 --> 00:13:35.320
are generated by AI? Right now, offensive AI

00:13:35.320 --> 00:13:37.149
is moving much faster than defensive

00:13:37.149 --> 00:13:39.710
certification checklists. The attackers definitely

00:13:39.710 --> 00:13:41.870
have the advantage. Meaning our current defense

00:13:41.870 --> 00:13:44.629
systems are fundamentally outmatched by the threat.

00:13:44.889 --> 00:13:49.250
Yes, we are relying on slow, static defense in

00:13:49.250 --> 00:13:52.330
a hyper -dynamic war. It's not sustainable. We

00:13:52.330 --> 00:13:54.590
are going to pause right here for a brief sponsor

00:13:54.590 --> 00:13:57.649
break. Sponsor placeholder. And we are back.

00:13:57.750 --> 00:14:00.370
Let's synthesize this incredible journey we've

00:14:00.370 --> 00:14:03.169
been on today. We have covered a tremendous amount

00:14:03.169 --> 00:14:06.659
of ground. And it all connects in a rather unsettling

00:14:06.659 --> 00:14:10.620
way. We are chasing an elusive, highly adaptable

00:14:10.620 --> 00:14:13.980
new intelligence. That is exactly what the ARC

00:14:13.980 --> 00:14:17.440
-AGI-3 benchmark showed us. True AGI remains

00:14:17.440 --> 00:14:19.799
fundamentally out of reach for our current brute

00:14:19.799 --> 00:14:22.179
force architectures. But to get there, the Titans

00:14:22.179 --> 00:14:24.799
are fighting fiercely for absolute control and

00:14:24.799 --> 00:14:27.360
massive funding. We see this with the OpenAI

00:14:27.360 --> 00:14:30.700
drama, Anthropic, and the new Trump tech panel

00:14:30.700 --> 00:14:32.860
shaping the future. While the billionaires fight,

00:14:33.080 --> 00:14:35.399
consumer platforms are making their moves. Apple

00:14:35.399 --> 00:14:37.259
and Google are locking you into their ecosystems.

00:14:37.519 --> 00:14:39.480
They are turning your everyday devices into these

00:14:39.480 --> 00:14:42.259
massive multi-model AI hubs. And meanwhile,

00:14:42.500 --> 00:14:44.799
the very foundation of all this technology is

00:14:44.799 --> 00:14:47.399
crumbling. Open source tools are under active

00:14:47.399 --> 00:14:50.299
attack. They are under attack from AI -generated

00:14:50.299 --> 00:14:54.080
code itself. Vibe-coded malware is bypassing

00:14:54.080 --> 00:14:56.570
enterprise security. It is a highly volatile

00:14:56.570 --> 00:14:59.710
mix of relentless ambition, massive capital,

00:14:59.870 --> 00:15:02.929
and extreme technical vulnerability. It really

00:15:02.929 --> 00:15:05.750
forces you to deeply consider the unseen risks

00:15:05.750 --> 00:15:08.490
of moving this fast. If current AI can already

00:15:08.490 --> 00:15:11.370
vibe code malware that bypasses enterprise security,

00:15:11.809 --> 00:15:15.350
what happens if the first true AGI doesn't announce

00:15:15.350 --> 00:15:18.409
itself with a high benchmark score, but simply

00:15:18.409 --> 00:15:21.009
adapts silently inside an open source library?

00:15:21.409 --> 00:15:23.129
That is a profoundly chilling thought. It might

00:15:23.129 --> 00:15:25.500
just hide quietly in the noise, waiting. It is

00:15:25.500 --> 00:15:27.639
something to seriously ponder as we build this

00:15:27.639 --> 00:15:29.519
future. Thank you for joining us on this Deep

00:15:29.519 --> 00:15:31.480
Dive. We will be back with more of your sources

00:15:31.480 --> 00:15:33.779
soon. Until then, keep questioning the systems

00:15:33.779 --> 00:15:34.299
around you.
