WEBVTT

00:00:00.000 --> 00:00:03.120
Imagine your next flight. Except the pilot isn't

00:00:03.120 --> 00:00:05.820
just a person. It's also a Silicon Valley chip

00:00:05.820 --> 00:00:08.320
making these split-second decisions. We're talking

00:00:08.320 --> 00:00:10.859
about incredibly specialized, mission-critical

00:00:10.859 --> 00:00:14.279
AI moving into the skies. But at the same time,

00:00:14.320 --> 00:00:17.539
we're seeing new research that's exposing these

00:00:17.539 --> 00:00:22.839
surprising, almost ghost-like memory flaws in

00:00:22.839 --> 00:00:25.500
the very same kind of AI models that, well, that

00:00:25.500 --> 00:00:27.379
run our daily lives. And welcome back to The

00:00:27.379 --> 00:00:30.500
Deep Dive. Today's mission is really about synthesizing

00:00:30.500 --> 00:00:32.619
a stack of recent sources that highlight that

00:00:32.619 --> 00:00:35.420
exact tension. You know, these explosive technical

00:00:35.420 --> 00:00:37.560
breakthroughs happening right alongside some

00:00:37.560 --> 00:00:39.600
really fundamental safety challenges. We're charting

00:00:39.600 --> 00:00:40.960
a course through the most important shifts

00:00:40.960 --> 00:00:42.899
in the industry right now. Yeah, we've got three

00:00:42.899 --> 00:00:45.039
core segments for you today. First up, we're

00:00:45.039 --> 00:00:47.439
dedicating serious time to that engineering breakthrough

00:00:47.439 --> 00:00:50.500
in the skies, the huge Archer and NVIDIA collaboration

00:00:50.500 --> 00:00:53.039
on autonomous air taxis. And what that ultra-low

00:00:53.039 --> 00:00:55.479
latency compute actually means for aviation

00:00:55.479 --> 00:00:58.969
safety. Exactly. Then second, we'll move from

00:00:58.969 --> 00:01:01.090
the skies down to the keyboard. We're going to

00:01:01.090 --> 00:01:04.069
cover the changing tools landscape. So why prompt

00:01:04.069 --> 00:01:07.049
engineering is evolving, the rise of these specialized

00:01:07.049 --> 00:01:09.329
productivity agents, and some really critical

00:01:09.329 --> 00:01:12.629
geopolitical shifts in the hardware war. And

00:01:12.629 --> 00:01:14.930
finally, we're doing a deep dive into a crucial

00:01:14.930 --> 00:01:18.640
safety alert. A Stanford study just proved leading

00:01:18.640 --> 00:01:22.120
LLMs are memorizing entire books verbatim and

00:01:22.120 --> 00:01:25.000
how easily you can just bypass their safety filters.

00:01:25.219 --> 00:01:27.000
Okay, let's unpack this. Let's start with that

00:01:27.000 --> 00:01:30.200
massive engineering challenge up high. So, Archer

00:01:30.200 --> 00:01:33.799
Aviation, they made huge headlines at CES 2026.

00:01:34.140 --> 00:01:36.260
They announced they're integrating NVIDIA's new

00:01:36.260 --> 00:01:39.280
IGX-4 platform into their next-gen air taxi.

00:01:39.640 --> 00:01:41.799
And this integration, I mean, it's a huge jump.

00:01:41.920 --> 00:01:44.560
We've seen IGX-4 used in places like hospitals

00:01:44.560 --> 00:01:46.900
for complex surgical automation and in these

00:01:46.900 --> 00:01:48.840
high-precision factories. Right, very regulated

00:01:48.840 --> 00:01:51.180
environment. Very. But moving into passenger

00:01:51.180 --> 00:01:53.359
aviation, that's a completely different level

00:01:53.359 --> 00:01:55.480
of regulatory and safety challenge. It just demands

00:01:55.480 --> 00:01:58.000
mission-critical, instantaneous decision-making,

00:01:58.430 --> 00:02:00.329
every single second of the flight. And that's

00:02:00.329 --> 00:02:03.390
why the scale of this is so telling. Archer isn't

00:02:03.390 --> 00:02:05.329
just running a few demos in a hangar somewhere.

00:02:05.549 --> 00:02:07.950
They have their own dedicated airport, which

00:02:07.950 --> 00:02:12.490
is basically becoming ground zero for real-world

00:02:12.490 --> 00:02:15.030
AI aviation testing. They're building this whole

00:02:15.030 --> 00:02:17.469
autonomous ecosystem from the ground up. So if

00:02:17.469 --> 00:02:19.889
Thor is the AI brain, what's the nervous system?

00:02:20.189 --> 00:02:22.650
You synthesized this whole complex integration

00:02:22.650 --> 00:02:25.069
into three foundational pillars for us. I did.

00:02:25.169 --> 00:02:28.030
So the first one is all about pilot safety and

00:02:28.030 --> 00:02:31.009
predictive awareness. This system is constantly

00:02:31.009 --> 00:02:33.469
running simulations in the background, providing

00:02:33.469 --> 00:02:36.650
real time alerts, smart flight suggestions, all

00:02:36.650 --> 00:02:38.969
to improve human decision making. So it's like

00:02:38.969 --> 00:02:41.830
an always-on copilot. Exactly. It's seeing things

00:02:41.830 --> 00:02:43.930
the human pilot might miss, especially in these

00:02:43.930 --> 00:02:46.069
really high-density airspace. That makes sense.

00:02:46.129 --> 00:02:48.710
I mean, mitigating human error is key. But how

00:02:48.710 --> 00:02:51.229
does this level of AI integration change the

00:02:51.229 --> 00:02:53.810
pilot training requirements? Are we augmenting

00:02:53.810 --> 00:02:56.750
human skills here or are we on a path to eventually

00:02:56.750 --> 00:02:59.449
replace certain skill sets entirely? For now,

00:02:59.449 --> 00:03:02.569
it's an augmentation, but a necessary one. Think

00:03:02.569 --> 00:03:04.710
about the speed you need for real-time sensor

00:03:04.710 --> 00:03:07.430
fusion. The plane's taking in LIDAR data, radar,

00:03:07.590 --> 00:03:10.090
internal diagnostics, external weather. All at

00:03:10.090 --> 00:03:13.030
once. All at once. And you need nanosecond-level

00:03:13.030 --> 00:03:15.210
processing to make predictions from all that.

00:03:15.310 --> 00:03:18.030
That ultra-low latency is the entire point.

00:03:18.250 --> 00:03:21.669
If the plane detects, say, a wind shear or an

00:03:21.669 --> 00:03:25.509
unknown drone, it needs to plot a new, safe trajectory

00:03:25.509 --> 00:03:27.930
instantly. That really puts the safety component

00:03:27.930 --> 00:03:31.129
into sharp focus. Okay, what about Pillar 2?

00:03:31.500 --> 00:03:33.479
The second is seamless airspace integration.

00:03:34.080 --> 00:03:36.500
This is just critically important because these

00:03:36.500 --> 00:03:39.000
new air taxis have to coexist with the old world

00:03:39.000 --> 00:03:42.199
of aviation, right? So the AI handles all the

00:03:42.199 --> 00:03:44.719
dynamic traffic-aware flight routing. It makes

00:03:44.719 --> 00:03:46.659
sure it plays nicely with all the legacy air

00:03:46.659 --> 00:03:48.800
traffic control systems we already have. It's

00:03:48.800 --> 00:03:50.960
basically the translation layer between future

00:03:50.960 --> 00:03:53.259
tech and current regulation. And the third pillar

00:03:53.259 --> 00:03:54.759
is the one that really sets the stage for the

00:03:54.759 --> 00:03:57.860
future, right? Exactly. The third is autonomy-ready

00:03:57.860 --> 00:04:00.020
controls. This entire integration, it's

00:04:00.020 --> 00:04:02.379
all about building the core compute layer that's

00:04:02.379 --> 00:04:05.900
necessary for future semi-autonomous or eventually

00:04:05.900 --> 00:04:09.099
fully pilotless systems, pairing Thor's compute

00:04:09.099 --> 00:04:12.280
with Archer's avionics. That's the digital backbone

00:04:12.280 --> 00:04:14.860
for the ultimate vision. The stakes are just

00:04:14.860 --> 00:04:17.540
impossibly high. When we talk about safety-critical

00:04:17.540 --> 00:04:19.699
computing at that level of scale, pushing what,

00:04:19.779 --> 00:04:22.220
a billion queries across an entire air traffic

00:04:22.220 --> 00:04:25.110
system? You have to ask: are the benchmarks for

00:04:25.110 --> 00:04:27.170
testing even fully developed yet? It really is

00:04:27.170 --> 00:04:29.149
a moment of wonder at the engineering complexity.

00:04:29.350 --> 00:04:31.769
It forces us to ask tough questions about trust.

00:04:31.850 --> 00:04:34.189
Beyond the air taxi itself, what's the single

00:04:34.189 --> 00:04:36.389
biggest challenge this low latency compute solves

00:04:36.389 --> 00:04:39.430
for future air travel? It solves the hard problem

00:04:39.430 --> 00:04:42.129
of fitting autonomous routing into our old existing

00:04:42.129 --> 00:04:44.410
air traffic rules. And that high stakes focus

00:04:44.410 --> 00:04:47.589
on safety in the skies. It's such a sharp contrast

00:04:47.589 --> 00:04:49.930
to how casually we often use AI in our daily

00:04:49.930 --> 00:04:53.339
lives. And speaking of daily use, let's move

00:04:53.339 --> 00:04:55.259
from the cockpit back down to the desk because

00:04:55.259 --> 00:04:58.199
how we even talk to AI is changing fast. Right.

00:04:58.279 --> 00:05:02.100
Our sources are indicating that the finicky art

00:05:02.100 --> 00:05:04.860
of... crafting the perfect instruction set, what

00:05:04.860 --> 00:05:07.319
we call prompt engineering, is now facing some

00:05:07.319 --> 00:05:09.879
pretty rapid disruption. That's right. And prompt

00:05:09.879 --> 00:05:12.019
engineering, to put it simply, is just the skill

00:05:12.019 --> 00:05:14.600
of writing precise instructions to get the exact

00:05:14.600 --> 00:05:17.240
result you want from an AI model. For a long

00:05:17.240 --> 00:05:19.319
time, it really felt like black magic. And that

00:05:19.319 --> 00:05:22.379
magic is changing. Anthropic has released a new

00:05:22.379 --> 00:05:26.290
structural trick: using XML tags for much better

00:05:26.290 --> 00:05:28.550
control over their models. And we're seeing reports

00:05:28.550 --> 00:05:31.670
that this dramatically outperforms the older,

00:05:31.769 --> 00:05:34.810
messier methods. It's a huge technical insight.

00:05:35.050 --> 00:05:37.350
The older methods, like context dumps, are basically

00:05:37.350 --> 00:05:39.629
just pasting thousands of words of unstructured

00:05:39.629 --> 00:05:41.589
text into the prompt and just hoping the model

00:05:41.589 --> 00:05:44.870
figures it out. XML tags standardize the input

00:05:44.870 --> 00:05:47.029
for the model. It makes its attention mechanism

00:05:47.029 --> 00:05:49.410
way more efficient, less likely to get lost in

00:05:49.410 --> 00:05:51.750
the noise. It's like stacking Lego blocks of

00:05:51.750 --> 00:05:53.980
data in a structured way instead of just throwing

00:05:53.980 --> 00:05:56.100
a pile at the model? I still wrestle with prompt

00:05:56.100 --> 00:05:58.600
drift myself. You know, models start ignoring

00:05:58.600 --> 00:06:00.920
my specific instructions after a few conversational

00:06:00.920 --> 00:06:04.500
turns. So seeing new structural techniques like

00:06:04.500 --> 00:06:08.040
XML tags is a massive relief. But does the shift

00:06:08.040 --> 00:06:12.180
toward these structural inputs mean that large

00:06:12.180 --> 00:06:14.839
enterprise-grade AI is becoming fundamentally

00:06:14.839 --> 00:06:17.860
less accessible to the casual user who doesn't

00:06:17.860 --> 00:06:19.860
want to learn a markup language? That's a key

00:06:19.860 --> 00:06:22.980
tension. But ironically, that complexity is also

00:06:22.980 --> 00:06:25.879
driving the creation of tools that simplify everything

00:06:25.879 --> 00:06:29.519
else. And that simplification is fueling a real

00:06:29.519 --> 00:06:32.839
emergence of practical, non-technical AI usage.

00:06:33.079 --> 00:06:35.540
Less Python, more delegation. Exactly. We've

00:06:35.540 --> 00:06:37.560
seen these incredible guides surfacing. Things

00:06:37.560 --> 00:06:39.860
like building automated workflows to stop you

00:06:39.860 --> 00:06:42.959
from chasing messy data or clearly defining the

00:06:42.959 --> 00:06:44.980
four AI agents a non-technical person needs

00:06:44.980 --> 00:06:47.160
to delegate, like, 90% of their routine tasks.

00:06:47.459 --> 00:06:49.430
And we have the proof of concept. Peter Yang's

00:06:49.430 --> 00:06:52.449
47-minute Claude Code demo, it showed a non-developer

00:06:52.449 --> 00:06:54.930
using that tool to genuinely run her entire life.

00:06:55.350 --> 00:06:58.149
Scheduling, managing finances, AI is moving way

00:06:58.149 --> 00:07:00.709
beyond just generating marketing copy. It's becoming

00:07:00.709 --> 00:07:03.029
an integrated workflow manager. The specialized

00:07:03.029 --> 00:07:05.410
experiences are also key for broad adoption.

00:07:05.949 --> 00:07:08.629
We've got new dedicated wellness support with

00:07:08.629 --> 00:07:11.209
ChatGPT Health, and Google Classroom can now

00:07:11.209 --> 00:07:14.449
turn any lesson into a specialized podcast. You

00:07:14.449 --> 00:07:17.230
can specify the topic, the speakers, the style.

00:07:17.470 --> 00:07:20.800
These targeted tools are immediate wins. So we've

00:07:20.800 --> 00:07:22.879
discussed how the user experience is changing,

00:07:23.040 --> 00:07:25.720
but that experience is shaped by these massive

00:07:25.720 --> 00:07:28.339
forces behind the scenes. Let's talk about the

00:07:28.339 --> 00:07:31.019
geopolitical hardware wars and some crucial safety

00:07:31.019 --> 00:07:33.459
issues defining the industry's foundation. Switching

00:07:33.459 --> 00:07:36.000
to geopolitics, the ground is definitely shifting

00:07:36.000 --> 00:07:38.459
beneath the hardware market. It absolutely is.

00:07:38.620 --> 00:07:41.019
China is reportedly asking its domestic tech

00:07:41.019 --> 00:07:44.240
firms to pause orders for the powerful NVIDIA

00:07:44.240 --> 00:07:47.000
H200 chips. And the strategic goal is pretty

00:07:47.000 --> 00:07:49.300
clear. They want to steer buyers toward domestic

00:07:49.300 --> 00:07:52.220
AI chip alternatives to build self-sufficiency.

00:07:52.480 --> 00:07:55.439
This move has huge global supply chain implications.

00:07:55.939 --> 00:07:58.660
And it's important context here that each H200

00:07:58.660 --> 00:08:00.800
export still requires U.S. government approval.

00:08:00.959 --> 00:08:03.339
And there's no set timeline for that complex

00:08:03.339 --> 00:08:05.620
process. So it keeps pressure on both sides.

00:08:05.779 --> 00:08:07.980
We're also seeing consolidation in the talent

00:08:07.980 --> 00:08:11.959
war. OpenAI just acquired the Convogo team.

00:08:12.060 --> 00:08:14.300
That's its ninth acquisition this year. This

00:08:14.300 --> 00:08:16.560
team used to help human coaches scale their work,

00:08:16.680 --> 00:08:18.560
but now they're shifting their focus entirely

00:08:18.560 --> 00:08:22.860
to building AI cloud tools for core infrastructure.

00:08:23.220 --> 00:08:25.439
So the top talent is being pulled into the foundational

00:08:25.439 --> 00:08:28.500
model architecture. Yep. And that rapid consolidation

00:08:28.500 --> 00:08:31.240
and technological advance brings us right back

00:08:31.240 --> 00:08:33.679
to safety. We have to address the crucial issue

00:08:33.679 --> 00:08:36.679
around content moderation. Right. Reports have

00:08:36.679 --> 00:08:39.879
shown that X is seeing a staggering volume, something

00:08:39.879 --> 00:08:44.360
like 6,700 or more AI-generated illegal images

00:08:44.360 --> 00:08:47.379
per hour, specifically attributed to the Grok

00:08:47.379 --> 00:08:50.519
platform. That number. It's staggering. It just

00:08:50.519 --> 00:08:52.639
demonstrates how current moderation systems,

00:08:52.759 --> 00:08:55.500
even with advanced AI, just cannot keep pace

00:08:55.500 --> 00:08:57.580
with generative output. No chance. And if the

00:08:57.580 --> 00:08:59.840
global nature of the Internet stalls legal action

00:08:59.840 --> 00:09:02.259
because of different jurisdictions and slow regulatory

00:09:02.259 --> 00:09:04.759
response, we're left in a really difficult spot.

00:09:05.159 --> 00:09:06.779
And that's where the pressure is focused right

00:09:06.779 --> 00:09:09.659
now. Global regulators are pressing xAI over

00:09:09.659 --> 00:09:12.139
this issue, but legal limitations in different

00:09:12.139 --> 00:09:14.840
countries are stalling any truly effective unified

00:09:14.840 --> 00:09:17.480
action. It just highlights the difficulty in

00:09:17.480 --> 00:09:20.539
regulating real-time, high-volume content generation

00:09:20.539 --> 00:09:23.159
globally. So given all these regulatory challenges,

00:09:23.419 --> 00:09:25.320
geopolitical shifts and application changes,

00:09:25.539 --> 00:09:27.799
which trend tells us more about the immediate

00:09:27.799 --> 00:09:31.350
future of AI use? The shift toward dedicated,

00:09:31.590 --> 00:09:34.590
specialized agents. Things like the new Google

00:09:34.590 --> 00:09:37.769
AI inbox. It shows immediate consumer integration

00:09:37.769 --> 00:09:41.009
and a desire to delegate specific small tasks

00:09:41.009 --> 00:09:44.450
rather than rely on one big generalist LLM for

00:09:44.450 --> 00:09:46.809
everything. That's a powerful sign that the specialization

00:09:46.809 --> 00:09:49.970
era is upon us. Okay, now for the most concerning

00:09:49.970 --> 00:09:52.190
news in our sources and perhaps the biggest challenge

00:09:52.190 --> 00:09:54.789
to the industry's current legal defense, a major

00:09:54.789 --> 00:09:57.370
breakthrough on LLM vulnerabilities. Yeah, this

00:09:57.370 --> 00:09:59.570
is a profound finding from a new Stanford study,

00:09:59.669 --> 00:10:01.950
and it directly challenges the industry consensus

00:10:01.950 --> 00:10:05.210
on filtering and data handling. The core revelation:

00:10:05.710 --> 00:10:08.250
This study proved that production -grade LLMs,

00:10:08.289 --> 00:10:09.950
the ones people are paying for and relying on

00:10:09.950 --> 00:10:13.110
right now, still memorize and leak near-exact

00:10:13.110 --> 00:10:16.149
copyrighted book text. And they tested every

00:10:16.149 --> 00:10:19.409
major player. Claude, GPT, Grok, and Gemini.

00:10:19.490 --> 00:10:21.789
And the specific data is just startling because

00:10:21.789 --> 00:10:24.250
the recall rate is so high and so consistent.

00:10:24.450 --> 00:10:27.750
Claude 3.7 Sonnet, for example, hit a 95.8%

00:10:27.750 --> 00:10:31.610
text extraction recall rate on certain books.

00:10:31.710 --> 00:10:34.590
That's virtually perfect, consistent memorization.

00:10:35.009 --> 00:10:37.029
What makes this study so concerning isn't just

00:10:37.029 --> 00:10:39.230
the leakage itself, but how easily the model's

00:10:39.230 --> 00:10:41.950
internal filtering systems, the supposed guardrails,

00:10:41.970 --> 00:10:44.929
were just... bypassed. The technique they used to

00:10:44.929 --> 00:10:47.210
break the safety layers is deceptively basic.

00:10:47.429 --> 00:10:49.570
It's like a digital shoulder tap. It's basically

00:10:49.570 --> 00:10:52.639
a three-step process. One: give the model the

00:10:52.639 --> 00:10:55.240
opening line of a copyrighted book. Two, ask

00:10:55.240 --> 00:10:57.700
it to continue the text. If it initially refuses,

00:10:57.899 --> 00:10:59.360
which the guardrails are designed to make it

00:10:59.360 --> 00:11:01.639
do, you just reword the prompt very slightly

00:11:01.639 --> 00:11:03.720
until it complies. And that's it. That's it.

00:11:03.779 --> 00:11:06.200
And three, the model then often just delivers

00:11:06.200 --> 00:11:08.799
high-quality verbatim text. So the result is

00:11:08.799 --> 00:11:11.519
consistent, high-quality memorization across

00:11:11.519 --> 00:11:14.019
multiple books and all four of the major production

00:11:14.019 --> 00:11:16.659
models. Correct. And this suggests that the safety

00:11:16.659 --> 00:11:19.500
layers are not true hard constraints. They're

00:11:19.500 --> 00:11:22.419
merely soft suggestions. The underlying data

00:11:22.419 --> 00:11:24.940
is just stored perfectly intact, waiting for

00:11:24.940 --> 00:11:27.940
the right prompt format to unlock it. If these

00:11:27.940 --> 00:11:30.700
models leak exact book text this consistently,

00:11:32.200 --> 00:11:34.820
it totally changes the legal calculus. It makes

00:11:34.820 --> 00:11:37.360
arguments about fair-use training, the idea that

00:11:37.360 --> 00:11:39.940
models are only absorbing general patterns, much

00:11:39.940 --> 00:11:42.139
harder for companies to defend in court. Yeah,

00:11:42.200 --> 00:11:43.860
you can't claim you're summarizing if you can

00:11:43.860 --> 00:11:46.659
spit out entire paragraphs verbatim. Right. And

00:11:46.659 --> 00:11:49.159
filters applied on top clearly don't fix the

00:11:49.159 --> 00:11:50.960
memorization that's buried inside the model's

00:11:50.960 --> 00:11:53.639
weights. So if filters fail this easily, what

00:11:53.639 --> 00:11:55.860
does this finding suggest about trusting AI models

00:11:55.860 --> 00:11:58.480
with proprietary or sensitive corporate data?

00:11:58.919 --> 00:12:01.200
Well, if it remembers books, it likely remembers

00:12:01.200 --> 00:12:04.179
sensitive data, which is just a fundamental security

00:12:04.179 --> 00:12:07.019
and legal risk for any company using these tools

00:12:07.019 --> 00:12:09.779
internally. So let's connect these threads. The

00:12:09.779 --> 00:12:12.379
overarching theme here is this fascinating paradox

00:12:12.379 --> 00:12:14.580
that really defines this moment in AI history.

00:12:14.720 --> 00:12:17.120
We're simultaneously building these revolutionary

00:12:17.120 --> 00:12:20.100
safety-critical AI systems, like the architecture

00:12:20.100 --> 00:12:23.320
for air taxis, demanding perfection, while also

00:12:23.320 --> 00:12:26.120
exposing these fundamental profound flaws in

00:12:26.120 --> 00:12:28.700
foundational models around privacy and safety filters.

00:12:29.080 --> 00:12:31.799
It's the tension between aspiration and reality.

00:12:32.519 --> 00:12:34.240
For the knowledge-seeking listener, we've got

00:12:34.240 --> 00:12:37.279
three key takeaways from today's sources. First,

00:12:37.500 --> 00:12:40.019
the future of mobility requires ultra-low latency,

00:12:40.360 --> 00:12:42.940
mission-critical compute. This isn't optional.

00:12:43.059 --> 00:12:45.059
It's the core safety requirement for systems

00:12:45.059 --> 00:12:47.840
like Archer and Thor. Second, the practical AI

00:12:47.840 --> 00:12:50.320
toolkit is rapidly changing. Prompt engineering

00:12:50.320 --> 00:12:52.500
skills are shifting, being replaced by structural

00:12:52.500 --> 00:12:54.840
inputs like XML, and these specialized agents

00:12:54.840 --> 00:12:56.840
are taking over our daily high-volume delegation

00:12:56.840 --> 00:13:01.990
tasks. And third, LLM memory is a profound, exploitable

00:13:01.990 --> 00:13:05.690
vulnerability. This discovery really undermines

00:13:05.690 --> 00:13:08.070
current legal defenses and challenges the core

00:13:08.070 --> 00:13:11.049
assumptions we all have about AI safety and data

00:13:11.049 --> 00:13:13.789
privacy. We've seen that the filter layers fail

00:13:13.789 --> 00:13:15.870
when they're just slightly challenged, proving

00:13:15.870 --> 00:13:18.710
the data is stored intact. So given this memory

00:13:18.710 --> 00:13:21.309
flaw, the next question is personal. If the model

00:13:21.309 --> 00:13:23.830
holds proprietary corporate data, and the legal

00:13:23.830 --> 00:13:26.070
defense for training is eroding, what immediate

00:13:26.070 --> 00:13:28.309
steps should IT departments take this week to

00:13:28.309 --> 00:13:31.070
audit their internal LLM deployments? Something

00:13:31.070 --> 00:13:33.590
to ponder as you navigate this rapidly changing

00:13:33.590 --> 00:13:35.970
technological landscape. Thank you for joining

00:13:35.970 --> 00:13:37.730
us on The Deep Dive. We'll see you next time.
