WEBVTT

00:00:00.000 --> 00:00:03.220
Imagine an AI model, right, built to be helpful,

00:00:03.419 --> 00:00:06.040
then secretly giving out dangerous instructions.

00:00:06.219 --> 00:00:08.619
Yeah. Now picture a really clever fix that makes

00:00:08.619 --> 00:00:11.279
it, like, forget those bad habits, even on your

00:00:11.279 --> 00:00:13.960
phone. Pretty wild stuff. Welcome to the Deep

00:00:13.960 --> 00:00:16.519
Dive. Today, we're going to cut through the noise

00:00:16.519 --> 00:00:18.940
and unpack some truly fascinating developments

00:00:18.940 --> 00:00:22.579
in AI. We're talking surprising breakthroughs

00:00:22.579 --> 00:00:24.960
in making open source models actually safe,

00:00:25.120 --> 00:00:28.519
innovative tools reshaping how we create and

00:00:28.519 --> 00:00:31.160
learn. And even AI landing right in your pocket

00:00:31.160 --> 00:00:33.299
with some serious power. Exactly. It's really

00:00:33.299 --> 00:00:36.060
a snapshot of how fast this whole landscape is

00:00:36.060 --> 00:00:39.200
shifting. The focus now seems to be on trust,

00:00:39.439 --> 00:00:41.500
practical use, and just making advanced tech

00:00:41.500 --> 00:00:43.960
accessible. We've got some really cool insights

00:00:43.960 --> 00:00:46.380
today. Great. Let's dive in, starting with something

00:00:46.380 --> 00:00:49.359
pretty fundamental. Yeah. AI safety. And this

00:00:49.359 --> 00:00:52.049
idea some are calling benevolent hacking. Okay,

00:00:52.090 --> 00:00:54.070
so what's really fascinating here is this paradox.

00:00:54.570 --> 00:00:57.630
Developers take these huge open source AI models.

00:00:57.890 --> 00:01:00.670
Right. And they simplify them, strip them down

00:01:00.670 --> 00:01:03.850
to make them faster, or run on devices without

00:01:03.850 --> 00:01:06.370
much power like your phone. Makes sense. Efficiency.

00:01:06.689 --> 00:01:10.629
But the catch is... This process, this compression,

00:01:10.950 --> 00:01:13.750
often accidentally takes out these internal layers

00:01:13.750 --> 00:01:15.629
that are crucial for safety. It's kind of like,

00:01:15.650 --> 00:01:17.430
whoops, just lost the guardrail. So it's not

00:01:17.430 --> 00:01:19.849
intentional, but in trying to make it lean, they

00:01:19.849 --> 00:01:23.269
cut out vital parts. We're learning these skip

00:01:23.269 --> 00:01:25.810
layers are actually what block harmful stuff.

00:01:26.010 --> 00:01:29.959
Yeah, exactly. Things like hate speech, explicit

00:01:29.959 --> 00:01:32.840
content, even instructions for building weapons.

00:01:32.840 --> 00:01:35.859
They're basically the model's internal ethical

00:01:35.859 --> 00:01:40.340
compass, and losing that is... well, risky. It really

00:01:40.340 --> 00:01:42.900
is. And the scope is huge because these open

00:01:42.900 --> 00:01:45.060
source models are everywhere. Anyone can grab

00:01:45.060 --> 00:01:47.420
them, change them, run them offline. So if those

00:01:47.420 --> 00:01:49.799
safety layers are gone, the risk just shoots

00:01:49.799 --> 00:01:52.260
up. You've got a powerful tool without its safety

00:01:52.260 --> 00:01:54.239
checks. Okay, so here's where it gets interesting.

00:01:54.379 --> 00:01:56.540
The fix you mentioned. Right, the ingenious fix.

00:01:56.700 --> 00:01:59.079
It's a new method that basically retrains the

00:01:59.079 --> 00:02:01.219
compressed model to, you know, remember how to

00:02:01.219 --> 00:02:03.430
behave ethically. Retrains it. And here's the

00:02:03.430 --> 00:02:06.390
kicker: it does this without needing the original

00:02:06.390 --> 00:02:09.729
training data. Oh, wow. Yeah, that's massive. Not

00:02:09.729 --> 00:02:12.330
needing that original data makes it super privacy

00:02:12.330 --> 00:02:15.389
friendly. You're not exposing sensitive info just

00:02:15.389 --> 00:02:17.449
to patch up the safety. They essentially teach

00:02:17.449 --> 00:02:19.409
it ethical recall, then? Pretty much. They give

00:02:19.409 --> 00:02:22.830
this concrete example: before the fix, a model

00:02:22.830 --> 00:02:25.189
could be prompted, maybe with an image and some

00:02:25.189 --> 00:02:28.849
text, to give bomb-making instructions. Yikes. Yeah.

00:02:29.439 --> 00:02:32.099
But after retraining, even the compressed version

00:02:32.099 --> 00:02:34.900
just flat out refused. It had learned its lesson,

00:02:35.020 --> 00:02:37.699
so to speak. So this approach kind of bakes safety

00:02:37.699 --> 00:02:39.960
back in, even after the model's been heavily

00:02:39.960 --> 00:02:42.680
modified. Exactly. It's safe by design, again,

00:02:42.780 --> 00:02:45.099
and it's lighter than adding external filters,

00:02:45.180 --> 00:02:47.259
plus it's more robust because the model itself

00:02:47.259 --> 00:02:50.400
understands the risks. The team calls it benevolent hacking.

00:02:50.780 --> 00:02:53.460
It's quite an elegant solution, really. You know,

00:02:53.479 --> 00:02:55.419
I still wrestle with prompt drift sometimes,

00:02:55.599 --> 00:02:57.759
just trying to get consistent, useful outputs

00:02:57.759 --> 00:03:00.039
from these. Oh, yeah, it's tricky. So the idea

00:03:00.039 --> 00:03:03.580
that models could just forget their ethical guardrails

00:03:03.580 --> 00:03:07.110
through optimization, lose that alignment. That's

00:03:07.110 --> 00:03:10.129
a really profound concern. This solution feels,

00:03:10.129 --> 00:03:13.069
well, significant. It does. And they define some

00:03:13.069 --> 00:03:16.050
terms for us too. Like open source models: that's

00:03:16.050 --> 00:03:18.509
AI with public code anyone can use and tweak.

00:03:18.509 --> 00:03:21.810
Right. And on-device AI: that's AI running right

00:03:21.810 --> 00:03:24.789
on your gadget, your phone, your laptop, no internet

00:03:24.789 --> 00:03:27.870
needed. Okay, stepping back then, what's the biggest

00:03:27.870 --> 00:03:31.129
implication of this benevolent hacking idea,

00:03:31.330 --> 00:03:34.310
this approach to AI learning to behave reliably

00:03:34.669 --> 00:03:37.409
even after being drastically changed? It means

00:03:37.409 --> 00:03:40.909
private, powerful AI can be trustworthy everywhere,

00:03:41.229 --> 00:03:44.750
even offline. Trustworthy everywhere. That focus

00:03:44.750 --> 00:03:47.169
on internal safety. It leads us nicely into the

00:03:47.169 --> 00:03:49.569
wider landscape, right? Where innovation is just

00:03:49.569 --> 00:03:51.449
popping up all over. Oh yeah, it's moving fast.

00:03:51.669 --> 00:03:53.009
Let's do some rapid fire highlights from the

00:03:53.009 --> 00:03:55.979
industry. Okay, first up, education. Anthropic

00:03:55.979 --> 00:03:59.159
is launching a free AI fluency curriculum, K-12

00:03:59.159 --> 00:04:02.460
and higher ed. Free. That's big. Totally. And

00:04:02.460 --> 00:04:04.740
it's designed to work with any AI model. Plus,

00:04:04.800 --> 00:04:07.819
it's fully remixable. So no vendor lock-in. Great

00:04:07.819 --> 00:04:10.520
for getting AI literacy into schools. That is

00:04:10.520 --> 00:04:12.449
good news. OK, what else? For creatives, check

00:04:12.449 --> 00:04:14.689
this out: an X user got fed up with existing

00:04:14.689 --> 00:04:16.990
tools and built a self-correcting image agent.

00:04:16.990 --> 00:04:19.810
Self-correcting? Yeah, it, like, creates an image,

00:04:19.810 --> 00:04:22.129
tests it against the prompt, refines it until

00:04:22.129 --> 00:04:24.810
it's perfect. No more endless fiddling getting

00:04:24.810 --> 00:04:27.529
the exact visual you want. Like that. And speaking

00:04:27.529 --> 00:04:30.310
of practical stuff, a ChatGPT user shared this

00:04:30.310 --> 00:04:33.529
super comprehensive prompt for learning new topics.

00:04:33.529 --> 00:04:38.000
It went viral, almost a million views. Wow. It just

00:04:38.000 --> 00:04:40.920
shows people are really hungry for structured

00:04:40.920 --> 00:04:44.800
ways to use AI for deep learning, not just quick

00:04:44.800 --> 00:04:47.800
answers. Makes sense. We want tools, not just

00:04:47.800 --> 00:04:49.819
toys. What about content creation? There's a

00:04:49.819 --> 00:04:52.660
new tool called Mirage. You type a prompt and

00:04:52.660 --> 00:04:56.060
boom, a fully edited TikTok or Instagram reel

00:04:56.060 --> 00:04:58.259
pops out. Seriously? Yeah, and it's supposedly

00:04:58.259 --> 00:05:00.920
designed to hook viewers on the For You page,

00:05:01.139 --> 00:05:04.019
like an AI viral video generator. Okay, that's

00:05:04.019 --> 00:05:06.569
something. Anything more serious? Well, yeah, on

00:05:06.569 --> 00:05:08.350
the security front there was an alert: attackers

00:05:08.350 --> 00:05:12.350
found a way to bypass X's ad protections. Oh? Embedding

00:05:12.350 --> 00:05:15.990
malware links and then tricking Grok, X's AI, into

00:05:15.990 --> 00:05:18.610
amplifying them. They're calling it Grokking. Oh,

00:05:18.610 --> 00:05:21.129
man. Yeah, big warning: definitely do not click

00:05:21.129 --> 00:05:23.329
those links. Shows how fast the bad actors adapt,

00:05:23.329 --> 00:05:27.379
you know. Definitely. Okay, shifting gears. Hiring.

00:05:27.459 --> 00:05:30.079
Big moves there, too. OpenAI is apparently building

00:05:30.079 --> 00:05:33.480
a full stack AI hiring platform. Better matching

00:05:33.480 --> 00:05:35.939
for companies and workers. Interesting. Competing

00:05:35.939 --> 00:05:38.279
with LinkedIn. Seems like it. And LinkedIn's

00:05:38.279 --> 00:05:39.819
not standing still. They're rolling out their

00:05:39.819 --> 00:05:44.040
own AI hiring assistant. Yeah. So, yeah, AI looks

00:05:44.040 --> 00:05:46.100
set to really change recruitment. All right.

00:05:46.139 --> 00:05:49.019
One more. Big money news. Brett Taylor's AI startup,

00:05:49.160 --> 00:05:54.199
Sierra, just raised a massive $350 million at

00:05:54.199 --> 00:05:58.160
a $10 billion valuation. And get this, in just

00:05:58.160 --> 00:06:00.519
18 months, they've signed up hundreds of clients.

00:06:01.279 --> 00:06:05.639
SoFi, Brex. Total raised is now $635 million.

00:06:06.139 --> 00:06:09.040
That is incredible momentum. Serious enterprise

00:06:09.040 --> 00:06:11.100
folks. For sure. Okay, so looking across all

00:06:11.100 --> 00:06:12.680
these different things, education, creation,

00:06:12.899 --> 00:06:16.709
security, hiring, funding. Which one really signals

00:06:16.709 --> 00:06:19.029
a shift, making AI more practical for everyday

00:06:19.029 --> 00:06:21.569
people? I think it's AI tools becoming part of

00:06:21.569 --> 00:06:23.389
daily learning and creative workflows, right?

00:06:23.470 --> 00:06:25.189
Yeah, that integration feels key. Those quick

00:06:25.189 --> 00:06:27.250
hits show the breadth. But let's dive a bit deeper

00:06:27.250 --> 00:06:29.410
into some core AI tools and concepts that are

00:06:29.410 --> 00:06:31.250
really shaping where things are going. Okay.

00:06:31.790 --> 00:06:34.810
So one thing that caught my eye was this guide

00:06:34.810 --> 00:06:38.470
introducing the ChatGPT agent model. And this

00:06:38.470 --> 00:06:40.610
isn't just your standard chatbot. What is it,

00:06:40.629 --> 00:06:43.230
then? It's talking about a general -purpose digital

00:06:43.230 --> 00:06:47.199
worker AI, an agent that can actually, like, operate

00:06:47.199 --> 00:06:50.300
a computer. It moves way beyond just simple API

00:06:50.300 --> 00:06:53.519
automations. Operate a computer? Yeah, so, like, using

00:06:53.519 --> 00:06:55.699
software interfaces the way a human does? Kind

00:06:55.699 --> 00:06:58.120
of, yeah. It's a conceptual leap: not just calling

00:06:58.120 --> 00:07:00.160
a specific function, but actually interacting

00:07:00.160 --> 00:07:04.560
with the system more broadly. Real AI agency. Okay,

00:07:04.560 --> 00:07:06.540
that's definitely something to watch. And then

00:07:06.540 --> 00:07:09.370
there's Google AI Studio. Right. Presented as this

00:07:09.370 --> 00:07:12.069
powerful free tool. And it's a good window into

00:07:12.069 --> 00:07:15.649
the future of multimodal AI. Multimodal, meaning

00:07:15.649 --> 00:07:17.550
it handles different types of data. Exactly.

00:07:17.670 --> 00:07:20.449
Text, images, audio, all at once. It shows the

00:07:20.449 --> 00:07:23.189
industry is moving way beyond just text. Chatbots

00:07:23.189 --> 00:07:25.329
were just the start. But with all this power,

00:07:25.529 --> 00:07:28.670
how do we get reliable results? Avoid the noise.

00:07:29.050 --> 00:07:30.990
Good question. And that leads to another key

00:07:30.990 --> 00:07:33.660
point. Using structured prompts. There's a guide

00:07:33.660 --> 00:07:35.759
emphasizing this for succeeding in the, quote,

00:07:35.860 --> 00:07:39.439
AI revolution. Structured prompts. Like giving

00:07:39.439 --> 00:07:42.079
clearer instructions. Yeah, specifically using

00:07:42.079 --> 00:07:45.319
JSON prompts. That means using a specific, standardized

00:07:45.319 --> 00:07:48.740
data format, JavaScript Object Notation, to tell

00:07:48.740 --> 00:07:51.319
the AI exactly what you want. Oh, okay. It leads

00:07:51.319 --> 00:07:54.060
to more consistent, high -quality results. Helps

00:07:54.060 --> 00:07:56.120
you cut through what they call AI junk. It's

00:07:56.120 --> 00:07:59.100
about precision. Giving the AI clear guardrails.
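
NOTE
A minimal sketch of what a structured JSON prompt could look like. The field names (task, constraints, output_format) are illustrative assumptions, not a standard schema, and the send-to-model plumbing is omitted.
```python
import json
# Hypothetical prompt structure; these field names are illustrative, not a standard.
prompt = {
    "task": "summarize",
    "input": "Compressed open-source models can lose their safety layers.",
    "constraints": {"max_words": 50, "tone": "neutral"},
    "output_format": "bullet_points",
}
# Serialize to the text you would actually paste or send to the model.
prompt_text = json.dumps(prompt, indent=2)
print(prompt_text)
```
Because every requirement is an explicit field, the model has less room to drift, which is the precision-and-guardrails point above.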

00:07:59.160 --> 00:08:02.629
Got it. So just to recap the jargon, API automations

00:08:02.629 --> 00:08:05.750
are those specific limited tasks where software

00:08:05.750 --> 00:08:09.209
talks to software. Right. And JSON prompts are

00:08:09.209 --> 00:08:11.529
that structured way of giving instructions for

00:08:11.529 --> 00:08:13.689
more predictable, better outputs. You got it.

00:08:13.769 --> 00:08:15.870
So looking at these foundational shifts, agents,

00:08:16.110 --> 00:08:19.430
multimodal, structured prompts, what's the core

00:08:19.430 --> 00:08:21.550
benefit of using something like JSON for these

00:08:21.550 --> 00:08:24.089
advanced AI interactions? Predictable, high quality

00:08:24.089 --> 00:08:27.050
AI outputs, reducing that junk significantly.

00:08:27.569 --> 00:08:29.680
Makes sense. Precision matters more as capability

00:08:29.680 --> 00:08:31.879
grows. Okay, let's pivot again to another round

00:08:31.879 --> 00:08:34.860
of quick hits, industry movements, deals, what's

00:08:34.860 --> 00:08:37.000
happening. All right, quick hits round two. First,

00:08:37.399 --> 00:08:40.799
OpenAI significantly expanded its employee secondary

00:08:40.799 --> 00:08:44.179
sale. We're talking around $10.3 billion now.

00:08:44.340 --> 00:08:47.340
Wow, that valuation just keeps climbing. Tells

00:08:47.340 --> 00:08:49.620
you something, right. And then a huge integration

00:08:49.620 --> 00:08:53.110
deal, Google and Apple. Oh, yeah. What about?

00:08:53.330 --> 00:08:56.309
Google's going to power Siri's AI search upgrade.

00:08:56.470 --> 00:08:59.230
Think about that reach: Google AI, potentially

00:08:59.230 --> 00:09:02.690
inside every iPhone Siri. That is massive. That

00:09:02.690 --> 00:09:04.970
could touch almost everyone. Okay, what else

00:09:04.970 --> 00:09:07.629
from Google? Google Photos beefed up its image-to-video

00:09:07.629 --> 00:09:10.269
feature, using Veo 3 now, apparently

00:09:10.269 --> 00:09:12.649
giving it more advanced capabilities. Turning

00:09:12.649 --> 00:09:15.549
snaps into clips gets fancier. Cool. Any drama?

00:09:15.830 --> 00:09:19.049
Always some drama. Scale AI is suing a former

00:09:19.049 --> 00:09:22.370
employee and a rival company, Mercor. Alleging

00:09:22.370 --> 00:09:24.809
customer theft. Standard growing pains in a hot

00:09:24.809 --> 00:09:27.309
sector, maybe? Could be. And hardware. Interesting

00:09:27.309 --> 00:09:30.490
move here. OpenAI is teaming up with Broadcom.

00:09:30.629 --> 00:09:33.830
Why? To make its own custom AI chips. Ah, vertical

00:09:33.830 --> 00:09:36.169
integration. Like Apple does. Exactly. Trying

00:09:36.169 --> 00:09:38.289
to control more of the hardware stack. Probably

00:09:38.289 --> 00:09:40.490
optimize performance right from the silicon up.

00:09:40.590 --> 00:09:43.169
Okay. Lots of moves. Out of these quick hits,

00:09:43.389 --> 00:09:47.360
OpenAI's valuation, the... Google-Apple-Siri

00:09:47.360 --> 00:09:51.019
deal, photos upgrade, the lawsuit, OpenAI's chip

00:09:51.019 --> 00:09:53.539
plans. Which one do you think points to the biggest

00:09:53.539 --> 00:09:56.440
future tech integration for most people, the

00:09:56.440 --> 00:09:58.419
one that will just blend into daily life? I got

00:09:58.419 --> 00:10:01.899
to say, Google and Apple teaming on Siri's AI

00:10:01.899 --> 00:10:05.350
search. That hints at widespread, invisible impact.

00:10:05.590 --> 00:10:07.169
Yeah, I agree. That feels like one that could

00:10:07.169 --> 00:10:09.190
just happen to millions of users without them

00:10:09.190 --> 00:10:11.289
even thinking about it. Now, for a segment that

00:10:11.289 --> 00:10:13.690
really feels like it brings AI power right to

00:10:13.690 --> 00:10:16.470
your fingertips, and it ties back perfectly to

00:10:16.470 --> 00:10:18.509
our first chat about on -device capabilities.

00:10:19.049 --> 00:10:21.309
Ah, you're talking about Google's embedding Gemma.

00:10:21.370 --> 00:10:24.309
Exactly. This compact embedding model. Powerful

00:10:24.309 --> 00:10:26.929
private AI right on your local device. Yeah,

00:10:26.990 --> 00:10:29.129
the dream here is pretty awesome. Imagine running

00:10:29.129 --> 00:10:31.889
a full ARG pipeline. Remind us what ROG is again.

00:10:32.070 --> 00:10:34.879
Right. Retrieval Augmented Generation. So the

00:10:34.879 --> 00:10:37.019
AI finds relevant info from a knowledge base

00:10:37.019 --> 00:10:38.919
before it generates the answer. Think of it like

00:10:38.919 --> 00:10:41.980
giving the AI cheat sheets. Now imagine running

00:10:41.980 --> 00:10:44.460
that whole process, finding info, generating

00:10:44.460 --> 00:10:47.000
an answer directly on your phone. No internet

00:10:47.000 --> 00:10:50.279
needed, no cloud servers, no API calls. That's

00:10:50.279 --> 00:10:53.179
huge for privacy and speed. Total game changer.
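
NOTE
A toy, fully offline sketch of the retrieval step in a RAG pipeline. The documents and query here are invented for illustration, and naive word overlap stands in for the embedding model a real system would use.
```python
docs = [
    "EmbeddingGemma is a compact embedding model for on-device use.",
    "WebVTT is a format for timed text such as captions.",
    "Retrieval Augmented Generation fetches relevant context before answering.",
]
def retrieve(query, documents):
    # Rank by naive word overlap; a real pipeline compares embedding vectors.
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))
context = retrieve("what does retrieval augmented generation do", docs)
# A generation step would then condition on `context` plus the query.
print(context)
```
Everything runs in local memory: no internet, no cloud servers, no API calls.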

00:10:53.320 --> 00:10:57.840
And Google, kind of quietly, just dropped embedding

00:10:57.840 --> 00:11:00.580
Gemma to make this possible. It's small, but

00:11:00.580 --> 00:11:03.149
apparently really mighty. How small are we talking?

00:11:03.370 --> 00:11:06.950
It's a 308 million parameter model. Tiny compared

00:11:06.950 --> 00:11:09.889
to the big guys, but specifically optimized for

00:11:09.889 --> 00:11:12.990
local devices, phones, laptops, desktops. And

00:11:12.990 --> 00:11:15.009
it's for embedding tasks. What does that mean

00:11:15.009 --> 00:11:17.330
in practice? Embeddings basically turn complex

00:11:17.330 --> 00:11:20.330
data, like words or sentences, into numbers, vectors

00:11:20.330 --> 00:11:22.409
that capture their meaning. Makes it easy for

00:11:22.409 --> 00:11:24.889
computers to compare stuff. So embedding Gemma

00:11:24.889 --> 00:11:26.879
is really good at boosting search. Understanding

00:11:26.879 --> 00:11:29.240
what you mean, not just the keywords you type.
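
NOTE
A minimal sketch of the embedding idea: text mapped to vectors, compared by cosine similarity. These tiny hand-made vectors are invented for illustration; a real model like EmbeddingGemma produces learned, high-dimensional ones.
```python
import math
# Hypothetical 3-dimensional "embeddings"; real models emit hundreds of dimensions.
vectors = {
    "cheap flights to Paris": [0.9, 0.1, 0.2],
    "budget airfare to France": [0.85, 0.15, 0.25],
    "how to bake sourdough": [0.05, 0.9, 0.1],
}
def cosine(a, b):
    # Cosine similarity: dot product divided by the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))
query = vectors["cheap flights to Paris"]
paraphrase = cosine(query, vectors["budget airfare to France"])
unrelated = cosine(query, vectors["how to bake sourdough"])
print(paraphrase > unrelated)
```
The paraphrase scores higher even though it shares no keywords with the query, which is what "meaning, not keywords" cashes out to.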

00:11:29.379 --> 00:11:31.919
And it works across languages. Get this. Trained

00:11:31.919 --> 00:11:34.639
on over 100 languages. Multilingual right out

00:11:34.639 --> 00:11:36.740
of the box. Plus, you can customize its output

00:11:36.740 --> 00:11:39.379
dimensions. Very versatile. Impressive. And the

00:11:39.379 --> 00:11:41.559
privacy aspect is built in because it's offline.

00:11:41.860 --> 00:11:44.840
Exactly. Runs completely offline. User privacy

00:11:44.840 --> 00:11:47.480
is baked in. And performance, it actually topped

00:11:47.480 --> 00:11:50.200
the MTEB leaderboard. That's the ranking system

00:11:50.200 --> 00:11:52.399
for these models. Yeah, for models under 500

00:11:52.399 --> 00:11:55.639
million parameters. It beat out competitors from...

00:11:55.820 --> 00:11:59.259
Cohere, Mistral, even OpenAI's smaller embedding

00:11:59.259 --> 00:12:01.740
options, packs a punch for its size. So what

00:12:01.740 --> 00:12:04.279
does that top performance mean for you, the user?

00:12:04.539 --> 00:12:07.740
Better semantic search, understanding the meaning

00:12:07.740 --> 00:12:10.460
behind your query, higher accuracy if you're

00:12:10.460 --> 00:12:13.559
using it for RAG, and crucially, fewer garbage

00:12:13.559 --> 00:12:16.820
answers. Just more relevant, precise results

00:12:16.820 --> 00:12:19.970
faster. And it's open source. Yep. Anyone can

00:12:19.970 --> 00:12:22.250
grab it, plug it into their local setup, customize

00:12:22.250 --> 00:12:25.070
it. This is especially big for enterprises. They

00:12:25.070 --> 00:12:27.129
lean heavily on RAG but haven't had great small

00:12:27.129 --> 00:12:30.269
models for on -device use until now. Embedding

00:12:30.269 --> 00:12:33.830
Gemma fills that gap. [Two seconds of silence.] Whoa.

00:12:34.429 --> 00:12:37.490
Okay, just thinking. Imagine scaling this power.

00:12:37.629 --> 00:12:40.269
A billion queries, maybe, on billions of devices,

00:12:40.529 --> 00:12:42.909
all running privately using this little model.

00:12:43.070 --> 00:12:44.950
It's kind of wild to think about the potential

00:12:44.950 --> 00:12:48.580
scale here. It really is. Boiling it down, how

00:12:48.580 --> 00:12:50.779
does Google's local embedding model fundamentally

00:12:50.779 --> 00:12:53.759
change what's possible for, say, phone -based

00:12:53.759 --> 00:12:56.460
AI applications? It enables genuinely powerful

00:12:56.460 --> 00:12:58.980
private AI functions directly on your device.

00:12:59.299 --> 00:13:01.440
So let's try and connect the dots here.

00:13:01.539 --> 00:13:03.730
We've covered a lot of ground. What does this

00:13:03.730 --> 00:13:06.529
all mean for us looking at these different pieces?

00:13:06.809 --> 00:13:09.769
Well, it feels like this deep dive really highlighted

00:13:09.769 --> 00:13:13.169
a major theme, doesn't it? AI is evolving incredibly

00:13:13.169 --> 00:13:16.789
fast and there's this dual push. Dual push. Yeah.

00:13:16.830 --> 00:13:19.909
On one hand, making it more powerful and accessible,

00:13:20.070 --> 00:13:22.950
like with embedding Gemma on your phone or tools

00:13:22.950 --> 00:13:26.429
like Mirage for creation. But on the other hand,

00:13:26.470 --> 00:13:29.509
a really strong focus on addressing core safety

00:13:29.509 --> 00:13:32.590
and ethical concerns, like that benevolent hack

00:13:32.679 --> 00:13:35.639
to keep models behaving properly even after you

00:13:35.639 --> 00:13:37.759
shrink them down. Right, so it's not just about

00:13:37.759 --> 00:13:40.480
capability, it's about responsibility too. Exactly.

00:13:40.480 --> 00:13:43.960
We're seeing this drive towards making AI more

00:13:43.960 --> 00:13:46.360
trustworthy, more useful in everyday life, whether

00:13:46.360 --> 00:13:49.019
it's helping you learn, powering your phone search,

00:13:49.019 --> 00:13:52.340
or transforming business operations. It feels

00:13:52.340 --> 00:13:54.799
like building a foundation of responsible innovation.

00:13:54.799 --> 00:13:57.519
That makes sense. It's been an exciting look into

00:13:57.519 --> 00:13:59.940
where AI is heading, showing just how quickly

00:13:59.940 --> 00:14:01.740
things are moving. We definitely encourage you

00:14:01.740 --> 00:14:03.860
to explore these topics more, see how they connect

00:14:03.860 --> 00:14:06.399
to the technology you use every day. Yeah, and

00:14:06.399 --> 00:14:09.039
maybe a thought to leave you with. As AI gets

00:14:09.039 --> 00:14:11.500
smarter, as it weaves itself more deeply into

00:14:11.500 --> 00:14:14.340
our lives, what new responsibilities do we pick

00:14:14.340 --> 00:14:17.549
up? We as users? Developers? All of us, really.

00:14:17.669 --> 00:14:20.470
Users, developers, citizens. How do we actively

00:14:20.470 --> 00:14:23.629
shape its future safely and ethically? It's something

00:14:23.629 --> 00:14:26.289
worth mulling over as these tools become so central

00:14:26.289 --> 00:14:28.909
to everything. A really important question. Thank

00:14:28.909 --> 00:14:30.769
you for joining us for this deep dive. We appreciate

00:14:30.769 --> 00:14:32.809
you lending us your curiosity. Keep learning.

00:14:32.990 --> 00:14:33.830
[Outro music]
