WEBVTT

00:00:00.000 --> 00:00:02.919
Okay, so picture this. Google's newest robot

00:00:02.919 --> 00:00:06.120
checks the local weather online, it sees rain,

00:00:06.320 --> 00:00:10.339
and then it decides all by itself to grab an

00:00:10.339 --> 00:00:12.720
umbrella and pack it in a bag for a trip. Wow.

00:00:13.220 --> 00:00:15.679
That's, well, that's some pretty advanced decision

00:00:15.679 --> 00:00:18.679
making, right? Using live web data in real time.

00:00:18.879 --> 00:00:21.519
It really is a genuinely impressive step for,

00:00:21.519 --> 00:00:24.120
you know, physical machines. Maybe not a total

00:00:24.120 --> 00:00:26.879
revolution, but definitely a big stride towards

00:00:26.879 --> 00:00:28.820
general intelligence out there in the messy real

00:00:28.820 --> 00:00:31.480
world. Welcome to the Deep Dive. Today, we're

00:00:31.480 --> 00:00:34.479
looking at the snapshot of AI's, frankly, incredibly

00:00:34.479 --> 00:00:37.679
fast maturation. We're pulling out the key insights

00:00:37.679 --> 00:00:39.359
from the source material you've shared with us.

00:00:39.479 --> 00:00:41.020
Yeah, and these sources, they really show that

00:00:41.020 --> 00:00:44.280
AI isn't just improving the challenges, the risks,

00:00:44.359 --> 00:00:46.460
they're scaling up dramatically too. So we've

00:00:46.460 --> 00:00:48.399
broken this down into three main parts for you.

00:00:48.479 --> 00:00:50.500
First up, the tech breakthroughs. We're talking

00:00:50.500 --> 00:00:52.820
about the shift to genuinely general purpose

00:00:52.820 --> 00:00:58.100
robots. Think laundry and logic. Then we really

00:00:58.100 --> 00:00:59.880
have to get into the tougher stuff, the high

00:00:59.880 --> 00:01:03.420
stakes world of security and, well, legal battles.

00:01:03.460 --> 00:01:06.239
We're talking actual bio threats emerging and

00:01:06.239 --> 00:01:10.680
these huge multibillion dollar lawsuits over

00:01:10.680 --> 00:01:13.739
AI training data. And finally, we'll dig into

00:01:13.739 --> 00:01:15.540
this really interesting report on what. being

00:01:15.540 --> 00:01:18.500
called the developer paradox it's fascinating

00:01:18.500 --> 00:01:20.939
data showing how developers are now super dependent

00:01:20.939 --> 00:01:24.180
on ai tools tools they openly say they don't

00:01:24.180 --> 00:01:26.939
entirely trust right it's like this hidden fragility

00:01:26.939 --> 00:01:29.079
inside a tool everyone's using for productivity

00:01:29.079 --> 00:01:31.700
exactly okay let's get into it starting in the

00:01:31.700 --> 00:01:33.540
lab all right let's ground this in the physical

00:01:33.540 --> 00:01:36.579
world first google deep mind they just showed

00:01:36.579 --> 00:01:40.140
off gemini robotics 1 .5 and i have to say this

00:01:40.140 --> 00:01:42.180
one feels like it could be the chat gpt moment

00:01:42.180 --> 00:01:44.560
but you know for machines that have to actually

00:01:44.620 --> 00:01:47.799
do things outside a clean lab. Exactly. That's

00:01:47.799 --> 00:01:49.959
the perfect way to put it. The absolute key here

00:01:49.959 --> 00:01:53.299
is handling, quote, real world messiness. We're

00:01:53.299 --> 00:01:55.439
finally moving beyond those super controlled

00:01:55.439 --> 00:01:57.920
single task setups. Yeah. And the sources gave

00:01:57.920 --> 00:02:00.420
some really concrete examples, complex ones that

00:02:00.420 --> 00:02:02.640
show some deep planning going on. So what were

00:02:02.640 --> 00:02:05.219
those examples? Tell us about them. Well, OK,

00:02:05.340 --> 00:02:07.900
folding laundry. Sure, we've seen robots fold

00:02:07.900 --> 00:02:10.419
laundry, but this wasn't just folding. It was

00:02:10.419 --> 00:02:13.240
classifying items, sorting them into different

00:02:13.240 --> 00:02:16.840
baskets based on color type. That takes planning.

00:02:17.229 --> 00:02:19.569
Multi -step thinking. Right. Not just a simple

00:02:19.569 --> 00:02:21.669
repetitive motion. And it's not just using its

00:02:21.669 --> 00:02:24.310
own internal logic either. Yeah. The really crucial

00:02:24.310 --> 00:02:27.770
bit, I think, is its ability to tap into Google

00:02:27.770 --> 00:02:31.030
search like MidTask. MidTask. How did that work?

00:02:31.250 --> 00:02:33.629
So they demoed it sorting recycling. But get

00:02:33.629 --> 00:02:36.719
this. It was sorting based on the specific. often

00:02:36.719 --> 00:02:40.000
really complicated recycling rules for that particular

00:02:40.000 --> 00:02:42.780
city. Rules that pulled directly from the web

00:02:42.780 --> 00:02:45.020
just minutes before starting the task. Okay,

00:02:45.099 --> 00:02:47.180
that's different. That's autonomous adaptation.

00:02:47.360 --> 00:02:50.139
It's using outside information on the fly. Precisely.

00:02:50.300 --> 00:02:52.659
Robots aren't isolated islands anymore. They

00:02:52.659 --> 00:02:54.800
can learn and adapt from the world's information

00:02:54.800 --> 00:02:57.229
in real time. And what's really interesting is

00:02:57.229 --> 00:03:00.009
the tech upgrade that makes this possible. Because

00:03:00.009 --> 00:03:02.689
the older Gemini robotics models, they could

00:03:02.689 --> 00:03:05.650
basically do one thing, one time. Change the

00:03:05.650 --> 00:03:07.789
lighting, move the object a bit, and boom, you

00:03:07.789 --> 00:03:09.930
often had to retrain the whole system. Yeah,

00:03:09.949 --> 00:03:12.469
that was the bottleneck. Now we've shifted to

00:03:12.469 --> 00:03:16.310
multi -step planning, and this is key, reusable

00:03:16.310 --> 00:03:19.689
motion logic. Reusable motion logic. Think of

00:03:19.689 --> 00:03:23.150
it like having basic Lego blocks of motor skills,

00:03:23.349 --> 00:03:27.080
fundamental movements. And the AI can now quickly

00:03:27.080 --> 00:03:30.280
stack and reconfigure these blocks for totally

00:03:30.280 --> 00:03:32.939
new goals. That's what gives it that real world

00:03:32.939 --> 00:03:34.979
flexibility. Okay. That makes sense. And this

00:03:34.979 --> 00:03:36.659
is where it potentially gets really transformative

00:03:36.659 --> 00:03:39.280
for robotics, right? This idea of motion transfer.

00:03:39.460 --> 00:03:41.699
Absolutely critical. Motion transfer. Yeah. It's

00:03:41.699 --> 00:03:44.280
the ability to take skills learned on one specific

00:03:44.280 --> 00:03:47.819
robot, say a simple industrial arm, learning

00:03:47.819 --> 00:03:51.210
to flip a pancake. And instantly apply that complex

00:03:51.210 --> 00:03:53.349
skill, that knowledge of pancake flipping, to

00:03:53.349 --> 00:03:55.169
a completely different robot. Like a humanoid

00:03:55.169 --> 00:03:56.870
bot. Totally different body, different joints.

00:03:57.110 --> 00:03:59.569
So you skip potentially thousands of hours of

00:03:59.569 --> 00:04:02.530
retraining for the new robot form. You just transfer

00:04:02.530 --> 00:04:05.210
the concept of the skill. Exactly. The concept,

00:04:05.289 --> 00:04:08.250
the logic. Whoa. Just imagine scaling that up.

00:04:08.349 --> 00:04:10.830
The future proofs training efforts massively.

00:04:11.490 --> 00:04:14.710
You teach one machine a complex task, like operating

00:04:14.710 --> 00:04:18.259
a drill press. And suddenly, every robot. regardless

00:04:18.259 --> 00:04:20.420
of its shape or size, can potentially do it.

00:04:20.519 --> 00:04:22.819
That could drastically cut down the time to mass

00:04:22.819 --> 00:04:25.519
adoption. That's a huge deal. So if the tech

00:04:25.519 --> 00:04:27.379
is getting this good, what's the big remaining

00:04:27.379 --> 00:04:30.019
hurdle? What does Google themselves still admit

00:04:30.019 --> 00:04:33.220
is preventing, say, mass consumer adoption right

00:04:33.220 --> 00:04:35.860
now? It still comes down to the sheer unpredictability

00:04:35.860 --> 00:04:38.040
of the real world. Yeah. Handling all the messy,

00:04:38.139 --> 00:04:40.600
unexpected edge cases. Yeah. That remains the

00:04:40.600 --> 00:04:42.160
biggest challenge. That makes total sense. The

00:04:42.160 --> 00:04:44.480
real world is, well, it's messy. Okay, let's

00:04:44.480 --> 00:04:46.600
pivot now, pretty hard, to the risks and the

00:04:46.600 --> 00:04:48.740
market side of things. Because it feels like

00:04:48.740 --> 00:04:50.939
the security and legal landscape is moving even

00:04:50.939 --> 00:04:53.480
faster than the core technology. It's almost

00:04:53.480 --> 00:04:56.019
dizzying, isn't it? So on the feature side, you

00:04:56.019 --> 00:05:00.779
saw OpenAI drop chat GPT pulse. This thing basically

00:05:00.779 --> 00:05:04.100
uses AI to cook up personalized briefings for

00:05:04.100 --> 00:05:06.480
you while you sleep. Right. Delivers them as

00:05:06.480 --> 00:05:08.060
these little interactive cards when you wake

00:05:08.060 --> 00:05:10.339
up. It's definitely pushing AI towards being

00:05:10.339 --> 00:05:12.680
like a personal chief of staff. And Meta jumped

00:05:12.680 --> 00:05:15.060
in with something called Vibes, which sounds

00:05:15.060 --> 00:05:18.589
like. basically a TikTok feed, but just for AI

00:05:18.589 --> 00:05:20.970
generated videos. Pretty much. And it encourages

00:05:20.970 --> 00:05:23.750
users to remix and share this stuff, you know,

00:05:23.750 --> 00:05:26.149
trying to capture that viral creative energy

00:05:26.149 --> 00:05:28.850
for AI content. So while the big guys are adding

00:05:28.850 --> 00:05:31.250
features, the sources also mention Elon Musk's

00:05:31.250 --> 00:05:33.550
XAI making a kind of interesting market move.

00:05:33.730 --> 00:05:36.069
Yeah. Undercutting the competition. offering

00:05:36.069 --> 00:05:38.009
their Grok model to the U .S. government for

00:05:38.009 --> 00:05:40.829
just 42 cents. Now, okay, it's framed partly

00:05:40.829 --> 00:05:43.329
as a joke, but it's also a very clear signal,

00:05:43.410 --> 00:05:45.689
right? A challenge to the pricing and dominance

00:05:45.689 --> 00:05:48.410
of OpenAI and Tropic, the established players.

00:05:49.600 --> 00:05:51.579
But while these market games are playing out,

00:05:51.620 --> 00:05:53.540
the actual security risks in the infrastructure

00:05:53.540 --> 00:05:56.800
are really popping up. Salesforce, for instance,

00:05:57.100 --> 00:06:00.120
just had a patch, a critical AI bug. This one

00:06:00.120 --> 00:06:02.279
involves something called prompt injection. Right.

00:06:02.399 --> 00:06:04.980
Prompt injection. So for anyone listening who

00:06:04.980 --> 00:06:07.620
isn't deep in the security weeds, that's basically

00:06:07.620 --> 00:06:11.680
tricking the AI. A hacker feeds it text, not

00:06:11.680 --> 00:06:13.660
as a normal question, but as a hidden command.

00:06:14.730 --> 00:06:17.129
To make the AI do something it shouldn't, like

00:06:17.129 --> 00:06:20.730
bypass its own safety rules or access data. And

00:06:20.730 --> 00:06:22.750
in the Salesforce case, it let attackers potentially

00:06:22.750 --> 00:06:25.250
steal valuable customer relationship management,

00:06:25.470 --> 00:06:28.470
CRM, data. It really highlights how traditional

00:06:28.470 --> 00:06:31.069
security like firewalls can fail when the AI

00:06:31.069 --> 00:06:34.290
itself is the weak point. The model can be inherently

00:06:34.290 --> 00:06:37.009
leaky. It's a massive risk for companies. But

00:06:37.009 --> 00:06:39.870
maybe the most... unsettling thing in these sources

00:06:39.870 --> 00:06:42.170
was the bio -threat angle. Yeah, that stood out.

00:06:42.250 --> 00:06:44.670
A Stanford lab apparently used AI to successfully

00:06:44.670 --> 00:06:48.110
design new working viruses. And crucially, some

00:06:48.110 --> 00:06:50.189
of these AI -designed viruses were reportedly

00:06:50.189 --> 00:06:53.170
stronger, more virulent than the natural ones

00:06:53.170 --> 00:06:55.310
they were based on. The report framed this as

00:06:55.310 --> 00:06:57.970
an immediate critical threat, and it specifically

00:06:57.970 --> 00:07:00.769
mentioned that U .S. response systems just aren't

00:07:00.769 --> 00:07:03.449
prepared for AI created bio threats, the speed,

00:07:03.550 --> 00:07:06.269
the accessibility. It completely changes the

00:07:06.269 --> 00:07:08.970
game for pathogen creation. It absolutely demands

00:07:08.970 --> 00:07:12.769
urgent global attention and regulation that,

00:07:12.889 --> 00:07:15.980
frankly. doesn't seem to exist yet. Now, shifting

00:07:15.980 --> 00:07:18.279
gears slightly to the legal side, the financial

00:07:18.279 --> 00:07:20.720
costs are becoming astronomical. No kidding.

00:07:20.920 --> 00:07:24.160
Anthropic, a major AI player, got hit with this

00:07:24.160 --> 00:07:27.860
potential $1 .5 billion wake -up call from authors

00:07:27.860 --> 00:07:30.420
suing over copyright. A billion and a half dollars.

00:07:30.680 --> 00:07:33.360
Yeah. That figure just underscores the colossal

00:07:33.360 --> 00:07:35.920
legal liability tied up in training these large

00:07:35.920 --> 00:07:38.240
language models on potentially copyrighted material.

00:07:38.560 --> 00:07:41.000
That number, $1 .5 billion, it signals we are

00:07:41.000 --> 00:07:43.139
definitely past the move fast and break thing.

00:07:43.240 --> 00:07:46.000
phase for AI development. Companies now have

00:07:46.000 --> 00:07:48.019
to factor in potentially massive legal settlements

00:07:48.019 --> 00:07:51.180
just as a cost of doing business, or they need

00:07:51.180 --> 00:07:53.399
to fundamentally rethink how they source training

00:07:53.399 --> 00:07:57.019
data. Yet, despite all this risk, the money keeps

00:07:57.019 --> 00:08:00.220
flowing into certain areas, particularly secure

00:08:00.220 --> 00:08:04.879
enterprise AI, Cohere. which focuses specifically

00:08:04.879 --> 00:08:07.779
on that secure niche. Right. They just raised

00:08:07.779 --> 00:08:10.699
another $100 million, hitting a $7 billion valuation.

00:08:11.399 --> 00:08:13.600
They're clearly betting that businesses will

00:08:13.600 --> 00:08:16.319
pay a premium for AI solutions that address these

00:08:16.319 --> 00:08:18.759
security vulnerabilities and legal landmines.

00:08:18.860 --> 00:08:21.180
Makes sense. So thinking about those two major

00:08:21.180 --> 00:08:23.339
risks, we just discussed the critical software

00:08:23.339 --> 00:08:26.360
bug at Salesforce along with data theft and the

00:08:26.360 --> 00:08:28.660
engineered viruses coming out of Stanford. Yeah.

00:08:28.699 --> 00:08:30.379
Which one holds the greater immediate threat

00:08:30.379 --> 00:08:32.620
potential? Based on the sources, the possibility

00:08:32.620 --> 00:08:36.039
of AI -borne bio -threats demands the most urgent

00:08:36.039 --> 00:08:39.940
global attention right now. Okay, before we dive

00:08:39.940 --> 00:08:41.559
into that developer trust paradox, let's just

00:08:41.559 --> 00:08:43.519
take a quick pause. Placeholder for sponsor message.

00:08:43.960 --> 00:08:46.480
Welcome back to the Deep Dive. So we really need

00:08:46.480 --> 00:08:48.200
to unpack the data coming out of Google Cloud's

00:08:48.200 --> 00:08:50.559
latest DORA report, because it paints this picture

00:08:50.559 --> 00:08:53.220
of what we're calling the core paradox. AI is

00:08:53.220 --> 00:08:55.519
becoming absolutely essential business infrastructure.

00:08:55.820 --> 00:08:58.500
Yet a significant chunk of the users who rely

00:08:58.500 --> 00:09:01.360
on it daily admit they don't fully trust what

00:09:01.360 --> 00:09:04.340
it produces. The numbers really do seem contradictory

00:09:04.340 --> 00:09:09.129
at first glance. Usage is... Well, almost universal

00:09:09.129 --> 00:09:12.169
now in development. 90 % of developers are using

00:09:12.169 --> 00:09:16.009
AI co -pilots regularly. 90%. Yeah. And often

00:09:16.009 --> 00:09:19.070
spending around two hours per day working alongside

00:09:19.070 --> 00:09:22.330
these AI assistants. It's deeply embedded in

00:09:22.330 --> 00:09:24.970
the workflow. Foundational. But then you hit

00:09:24.970 --> 00:09:29.269
the trust gap, and it's pretty stark. 30%, nearly

00:09:29.269 --> 00:09:32.289
one in three of those same developers, said they

00:09:32.289 --> 00:09:35.730
trust the AI's output only a little, or worse.

00:09:36.190 --> 00:09:38.690
Not at all. Think about that. A third of the

00:09:38.690 --> 00:09:40.970
workforce relies heavily on software they fundamentally

00:09:40.970 --> 00:09:43.470
distrust. It's incredibly relatable, though,

00:09:43.490 --> 00:09:45.549
isn't it? I mean, we all kind of experience this.

00:09:45.590 --> 00:09:47.490
Oh, absolutely. I still wrestle with prompt drift

00:09:47.490 --> 00:09:50.250
myself regularly. And, you know, for listeners

00:09:50.250 --> 00:09:52.190
maybe not living in the code daily, prompt drift,

00:09:52.350 --> 00:09:54.830
it's that really frustrating thing where during

00:09:54.830 --> 00:09:57.509
a long chat with an AI, it starts to forget the

00:09:57.509 --> 00:09:59.750
original instructions. It loses the context,

00:09:59.970 --> 00:10:02.110
the rules you set up at the very beginning. And

00:10:02.110 --> 00:10:05.169
that lack of reliable consistency, that's a genuine

00:10:05.169 --> 00:10:07.490
vulnerability when you're relying on it for professional

00:10:07.490 --> 00:10:10.710
work. So given that level of distrust, that drift.

00:10:11.500 --> 00:10:14.279
Why hasn't it killed adoption? The practical

00:10:14.279 --> 00:10:17.019
upsides must just be overwhelming that feeling.

00:10:17.240 --> 00:10:20.059
They really are. The data shows the benefits

00:10:20.059 --> 00:10:22.799
clearly win out over the fear of, you know, the

00:10:22.799 --> 00:10:25.639
AI making stuff up or getting things wrong. 80

00:10:25.639 --> 00:10:28.440
% of developers reported clear productivity gains.

00:10:28.980 --> 00:10:32.519
Huge number. 80%. Wow. And 59 % said it actually

00:10:32.519 --> 00:10:34.940
improved the quality of their code. So the overall

00:10:34.940 --> 00:10:37.659
feeling seems to be a very pragmatic, okay, I

00:10:37.659 --> 00:10:40.080
don't fully trust this thing, but there's absolutely

00:10:40.080 --> 00:10:41.740
no way I'm going back to working without it.

00:10:41.879 --> 00:10:44.580
That really is the definition of essential infrastructure,

00:10:44.860 --> 00:10:47.440
isn't it? It's somewhat fragile, maybe unreliable

00:10:47.440 --> 00:10:49.820
at times, but it's become irreplaceable. The

00:10:49.820 --> 00:10:52.700
speed boost is worth the extra effort of constantly

00:10:52.700 --> 00:10:55.779
checking its work. Exactly. And because this...

00:10:57.129 --> 00:10:59.429
slightly fragile tool is now critical infrastructure,

00:10:59.769 --> 00:11:02.330
you see companies like Google responding. They're

00:11:02.330 --> 00:11:04.929
pushing to standardize how it's used. They released

00:11:04.929 --> 00:11:07.610
this thing called the DORA AI capabilities model.

00:11:07.830 --> 00:11:09.909
Right. The DORA model. What does that actually

00:11:09.909 --> 00:11:12.830
do? It's basically a framework. It lays out seven

00:11:12.830 --> 00:11:16.169
best practices for development teams using AI

00:11:16.169 --> 00:11:18.850
effectively. Things like how to test AI outputs

00:11:18.850 --> 00:11:21.490
responsibly, integrate security checks, ensure

00:11:21.490 --> 00:11:24.250
consistent deployment processes, stuff like that.

00:11:24.330 --> 00:11:26.850
So it's a move towards governance. Putting guard

00:11:26.850 --> 00:11:30.029
rails in place. Precisely. It signals that companies

00:11:30.029 --> 00:11:31.909
get it now. They realize they can't just leave

00:11:31.909 --> 00:11:33.690
it up to individual developers to figure out

00:11:33.690 --> 00:11:35.669
how to manage all these risks and inconsistencies

00:11:35.669 --> 00:11:38.330
on their own. They need common standards. OK,

00:11:38.409 --> 00:11:41.029
so thinking about that push for standards, why

00:11:41.029 --> 00:11:43.690
is standardizing AI use? With frameworks like

00:11:43.690 --> 00:11:46.029
DORA, why is that so immediately critical for

00:11:46.029 --> 00:11:48.169
big organizations right now? Because essential

00:11:48.169 --> 00:11:51.110
AI infrastructure requires unified best practices

00:11:51.110 --> 00:11:53.690
to manage risk and output consistency. OK, so

00:11:53.690 --> 00:11:55.850
let's pull back for a moment and just recap the

00:11:55.850 --> 00:11:58.990
big ideas from this deep dive. It feels like

00:11:58.990 --> 00:12:02.370
we landed on three key takeaways that really

00:12:02.370 --> 00:12:05.769
define this complex, fast -moving moment in AI's

00:12:05.769 --> 00:12:07.649
development. Yeah, I think so. First, on the

00:12:07.649 --> 00:12:09.909
robotics front, Gemini showed us this really

00:12:09.909 --> 00:12:12.289
sophisticated multi -step planning. Right. Using

00:12:12.289 --> 00:12:14.950
live web data. That's a huge leap, right? Towards

00:12:14.950 --> 00:12:17.610
robots being genuinely general -purpose tools

00:12:17.610 --> 00:12:19.990
actually integrated into the real world. Right.

00:12:20.070 --> 00:12:23.669
Laundry and logic achieved. Second, the stakes

00:12:23.669 --> 00:12:26.009
around security and law are just skyrocketing.

00:12:26.009 --> 00:12:28.190
We've got the emergence of... potentially AI

00:12:28.190 --> 00:12:31.789
design bio threats on one hand and these absolutely

00:12:31.789 --> 00:12:35.269
massive multibillion dollar legal fights over

00:12:35.269 --> 00:12:37.450
copyrighted training data on the other. The risks

00:12:37.450 --> 00:12:39.889
are immense now. And finally, that developer

00:12:39.889 --> 00:12:42.970
paradox. AI adoption isn't really being driven

00:12:42.970 --> 00:12:45.450
by blind faith or trust. It's driven by hard

00:12:45.450 --> 00:12:48.960
results. productivity games developers are using

00:12:48.960 --> 00:12:50.840
these tools because they make them faster more

00:12:50.840 --> 00:12:52.960
effective even if they have to constantly double

00:12:52.960 --> 00:12:55.720
check the output so the tech is shifting incredibly

00:12:55.720 --> 00:12:59.080
fast from being this novel experimental thing

00:12:59.080 --> 00:13:02.320
to being critical yeah but still kind of fragile

00:13:02.320 --> 00:13:05.559
infrastructure It's a really fascinating and

00:13:05.559 --> 00:13:08.779
maybe precarious balance point we're at. So building

00:13:08.779 --> 00:13:10.940
on that, that advanced planning we saw with Gemini,

00:13:11.019 --> 00:13:13.139
you know, the robot checking the weather, deciding

00:13:13.139 --> 00:13:15.919
autonomously to pack an umbrella and connecting

00:13:15.919 --> 00:13:18.720
that capability with the escalating threats we

00:13:18.720 --> 00:13:20.600
discussed, like bio threats and the huge legal

00:13:20.600 --> 00:13:23.679
risks. We wanted to leave you, the listener,

00:13:23.879 --> 00:13:26.440
with this thought to chew on. Considering how

00:13:26.440 --> 00:13:28.740
complex and autonomous these AI systems are becoming,

00:13:28.899 --> 00:13:31.340
what do you think will be the first genuinely

00:13:31.340 --> 00:13:34.580
helpful, but also ethically challenging autonomous

00:13:34.580 --> 00:13:37.519
decision AI makes for humanity in the next year

00:13:37.519 --> 00:13:40.340
or so. Yeah, something complex, something vital,

00:13:40.379 --> 00:13:41.860
maybe something that feels just a little bit

00:13:41.860 --> 00:13:44.580
scary involving real world consequences. Was

00:13:44.580 --> 00:13:47.259
that first big autonomous ethically tricky choice

00:13:47.259 --> 00:13:50.000
going to be? Something to ponder. Thank you for

00:13:50.000 --> 00:13:51.840
joining us for this deep dive into your sources

00:13:51.840 --> 00:13:54.559
today. As always, we encourage you to keep exploring

00:13:54.559 --> 00:13:55.980
these incredibly important topics.
