WEBVTT

00:00:00.000 --> 00:00:02.759
Have you ever asked an AI the same question twice

00:00:02.759 --> 00:00:05.639
with all the settings exactly the same and still

00:00:05.639 --> 00:00:07.679
gotten wildly different answers? Yeah. It's like

00:00:07.679 --> 00:00:09.220
walking into your favorite coffee shop ordering

00:00:09.220 --> 00:00:11.820
the exact same drink, you know? But it tastes

00:00:11.820 --> 00:00:13.580
completely different depending on how busy the

00:00:13.580 --> 00:00:16.039
barista is. Right. That kind of inconsistency,

00:00:16.059 --> 00:00:18.019
well, it can be incredibly frustrating, especially

00:00:18.019 --> 00:00:21.179
when you actually need... reliable outputs. Absolutely.

00:00:21.460 --> 00:00:23.640
That unpredictable quality has been a silent

00:00:23.640 --> 00:00:26.500
bug, really. A pervasive headache for AI models

00:00:26.500 --> 00:00:29.559
for years. It just fundamentally undermined trust

00:00:29.559 --> 00:00:33.820
and scientific rigor. Yeah. But what's truly

00:00:33.820 --> 00:00:36.380
exciting now is that a small, really focused

00:00:36.380 --> 00:00:40.039
team has finally delivered a major fix. And today,

00:00:40.159 --> 00:00:42.100
we're diving deep into what that breakthrough

00:00:42.100 --> 00:00:44.619
means for all of us. Welcome to the Deep Dive.

00:00:45.119 --> 00:00:47.420
Today, we're going to unpack some, well, truly

00:00:47.420 --> 00:00:50.039
incredible breakthroughs and also confront a

00:00:50.039 --> 00:00:52.500
few humbling realities shaping the world of artificial

00:00:52.500 --> 00:00:56.000
intelligence. Our mission here is really to distill

00:00:56.000 --> 00:00:58.020
these insights, giving you a shortcut to being

00:00:58.020 --> 00:01:01.700
well-informed. Yeah, we'll explore that surprising

00:01:01.700 --> 00:01:05.700
fix for an old, persistent AI problem. Then we'll

00:01:05.700 --> 00:01:08.200
shift gears a bit, touch on some exciting new

00:01:08.200 --> 00:01:11.480
tools pushing creative boundaries. Okay. We'll

00:01:11.480 --> 00:01:14.459
also discuss... major industry shifts, the immense

00:01:14.459 --> 00:01:16.859
financial investments pouring into the sector,

00:01:17.000 --> 00:01:19.599
and then look at what AI still can't quite do,

00:01:19.760 --> 00:01:22.780
especially in critical areas like, say, healthcare.

00:01:23.040 --> 00:01:26.579
Right. Get ready for some genuine aha moments,

00:01:26.840 --> 00:01:30.120
you know, things that might reframe how you think

00:01:30.120 --> 00:01:32.980
about AI's future. Okay, let's jump right into

00:01:32.980 --> 00:01:35.280
what might be the most significant recent development

00:01:35.280 --> 00:01:39.079
then. Solving the frustrating inconsistency of

00:01:39.079 --> 00:01:41.459
large language models. Yeah, the big one. For

00:01:41.459 --> 00:01:44.069
years. You could give an AI the same prompt,

00:01:44.209 --> 00:01:46.790
often with its temperature setting at zero, which

00:01:46.790 --> 00:01:48.909
means it should be at its most deterministic,

00:01:48.930 --> 00:01:51.310
right? Most predictable. Should be, yeah. And

00:01:51.310 --> 00:01:53.849
you'd still receive varied outputs. It just felt

00:01:53.849 --> 00:01:55.670
like rolling the dice sometimes. It was exactly

00:01:55.670 --> 00:01:57.750
like your coffee analogy. Even when you tell

00:01:57.750 --> 00:01:59.689
the AI, look, don't be creative. Just give me

00:01:59.689 --> 00:02:03.230
the most likely consistent answer. It still drifted.
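As a plain-language aside: "temperature zero" usually means greedy decoding, where the model always takes its single highest-probability token, so identical prompts should yield identical output. A toy Python sketch of that idea (a simplified illustration, not any vendor's actual decoder):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Pick a next-token index from raw model logits.

    At temperature 0 this collapses to greedy decoding (argmax),
    which is why identical prompts *should* yield identical output.
    """
    if temperature == 0.0:
        # Deterministic: always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    # Stochastic: sample in proportion to softmax-style weights.
    weights = [math.exp(l / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.5]
print(sample_token(logits, temperature=0.0))  # 0, every time
```

In this picture, temperature-0 drift is surprising precisely because the sampling step contributes no randomness; the variation has to come from somewhere else in the stack.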

00:02:03.409 --> 00:02:06.840
Researchers, developers, businesses. They've

00:02:06.840 --> 00:02:08.979
all been scratching their heads. Because if you

00:02:08.979 --> 00:02:13.020
can't reliably reproduce an AI's output, I mean,

00:02:13.020 --> 00:02:15.919
how can you build critical systems on it? Exactly.

00:02:16.039 --> 00:02:18.759
And the core of the problem, as this brilliant

00:02:18.759 --> 00:02:21.620
team recently discovered, wasn't necessarily

00:02:21.620 --> 00:02:24.719
inside the AI model itself. Right, not the model

00:02:24.719 --> 00:02:26.960
logic per se. But in the server environment,

00:02:27.240 --> 00:02:29.759
your output actually changed depending on how

00:02:29.759 --> 00:02:31.560
many other people were hitting the AI server

00:02:31.560 --> 00:02:34.259
at the exact same time. Huh. So it's like the

00:02:34.259 --> 00:02:36.080
system was getting distracted? Kind of, yeah.

00:02:36.259 --> 00:02:39.099
Its internal state subtly altered by all those

00:02:39.099 --> 00:02:41.819
concurrent requests, leading to these non-deterministic

00:02:41.819 --> 00:02:44.280
results. Wow. Okay, the implications of that

00:02:44.280 --> 00:02:46.500
discovery, they're enormous then. Totally. This

00:02:46.500 --> 00:02:49.020
irreproducibility messed with everything from

00:02:49.020 --> 00:02:51.199
scientific research where consistent results

00:02:51.199 --> 00:02:54.979
are just paramount. Absolutely. Business reliability

00:02:54.979 --> 00:02:57.460
for AI-powered apps, and even the efficiency

00:02:57.460 --> 00:02:59.879
of training new models. You'd optimize something,

00:03:00.099 --> 00:03:01.960
run a test, get a great result. And then you

00:03:01.960 --> 00:03:04.020
can't get it again. Exactly. You couldn't replicate

00:03:04.020 --> 00:03:06.419
it consistently. Huge problems for any serious

00:03:06.419 --> 00:03:08.280
application. So what's the fix? What did they

00:03:08.280 --> 00:03:11.500
do for this foundational challenge? Okay, so

00:03:11.500 --> 00:03:14.379
this team at Thinking Machines Lab, impressively

00:03:14.379 --> 00:03:18.240
led by ex-OpenAI CTO Mira Murati, along with

00:03:18.240 --> 00:03:19.840
a Meta researcher, developed something truly

00:03:19.840 --> 00:03:23.020
groundbreaking. It's called Batch Invariant Kernels.

00:03:23.099 --> 00:03:25.460
Batch Invariant Kernels. Okay, what does that

00:03:25.460 --> 00:03:28.340
mean in, like, plain English? Simply put, think

00:03:28.340 --> 00:03:31.000
of them as a new computational method. They ensure

00:03:31.000 --> 00:03:33.719
the AI's internal state for your specific query

00:03:33.719 --> 00:03:36.060
stays completely unaffected by what other users

00:03:36.060 --> 00:03:38.840
are doing simultaneously. Imagine having like

00:03:38.840 --> 00:03:41.240
a dedicated, perfectly soundproofed workspace

00:03:41.240 --> 00:03:45.539
just for your task within the AI. It guarantees

00:03:45.539 --> 00:03:48.400
the exact same outcome every single time, no

00:03:48.400 --> 00:03:50.580
matter how busy the main office gets. Got it.

00:03:50.659 --> 00:03:53.139
It's a foundational stability layer. It just

00:03:53.139 --> 00:03:55.979
eliminates a huge source of irreproducibility.
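The mechanism behind that "distraction" is worth a concrete look. Floating-point addition is not associative, so if server batching changes how a model's internal reductions are grouped, the bits of the result can drift. A minimal, self-contained illustration in Python (a toy analogy, not the team's actual GPU kernels):

```python
import math

# Floating-point addition is not associative: the grouping of a
# reduction changes the rounding, and batching can change the grouping.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)   # 0.6000000000000001
print(a + (b + c))   # 0.6

# Ten copies of 0.1 summed naively vs. with order-insensitive fsum:
naive = sum([0.1] * 10)
exact = math.fsum([0.1] * 10)
print(naive == exact)  # False: naive accumulation picks up rounding error

# A "batch-invariant" reduction, in this analogy, pins down one fixed
# grouping regardless of batch shape, so every run reproduces the same bits.
```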

00:03:56.590 --> 00:03:59.189
So the LLMs ignore the noise from other prompts

00:03:59.189 --> 00:04:01.270
being processed in the same batch? Honestly,

00:04:01.330 --> 00:04:04.250
what strikes me as truly remarkable here is that

00:04:04.250 --> 00:04:06.469
this kind of fundamental problem, right? Five

00:04:06.469 --> 00:04:10.069
years old, major players like OpenAI, Meta, Google,

00:04:10.250 --> 00:04:14.069
they seemingly hadn't solved it. Yeah, with all

00:04:14.069 --> 00:04:16.569
their resources. And then this relatively small

00:04:16.569 --> 00:04:19.490
team cracked it. That's... Genuinely inspiring,

00:04:19.730 --> 00:04:21.689
I think, for anyone working in tech. It really

00:04:21.689 --> 00:04:23.930
is. It shows that innovation doesn't always need

00:04:23.930 --> 00:04:28.050
an army of engineers. A focused, agile team with

00:04:28.050 --> 00:04:30.490
a fresh perspective can spot a blind spot that

00:04:30.490 --> 00:04:33.389
maybe larger organizations overlook because of

00:04:33.389 --> 00:04:35.370
existing infrastructure or just different priorities.

00:04:35.750 --> 00:04:39.550
Whoa. I mean, just imagine scaling this newfound

00:04:39.550 --> 00:04:43.459
consistency. Think about a billion queries. All

00:04:43.459 --> 00:04:46.620
reliable. A truly reliable AI could transform

00:04:46.620 --> 00:04:49.620
so many industries. This is a huge step for trust

00:04:49.620 --> 00:04:53.360
and broad adoption. This batch invariant kernels

00:04:53.360 --> 00:04:56.180
fix sounds monumental, clearly. But zooming out

00:04:56.180 --> 00:04:59.540
just a bit, why is this specific kind of AI consistency

00:04:59.540 --> 00:05:02.399
so utterly critical for the whole future trajectory

00:05:02.399 --> 00:05:05.620
of AI development? I mean, beyond just lab reproducibility.

00:05:05.800 --> 00:05:08.519
Well, consistent AI builds trust. That's huge.

00:05:08.740 --> 00:05:11.279
It enables scientific rigor and it makes business

00:05:11.279 --> 00:05:13.839
use cases truly dependable. Right. Dependability.

00:05:13.959 --> 00:05:16.259
That's key. So that foundational fix for consistency

00:05:16.259 --> 00:05:18.480
is a game changer, setting the stage for more

00:05:18.480 --> 00:05:21.019
reliable AI everywhere. But even as we tackle

00:05:21.019 --> 00:05:23.660
core stability, AI is already hurtling forward.

00:05:23.959 --> 00:05:26.360
Incredible new capabilities seem to pop up constantly.

00:05:26.800 --> 00:05:28.639
What's caught your eye recently as a sign of

00:05:28.639 --> 00:05:31.810
where AI is heading next? Well, Google Veo 3 is

00:05:31.810 --> 00:05:34.350
making massive waves right now. It's a text-to-

00:05:34.350 --> 00:05:37.149
video model, but it can turn any image into

00:05:37.149 --> 00:05:40.529
accurate, highly realistic, lip-synced talking

00:05:40.529 --> 00:05:44.350
videos. Any image. Wow. Yeah. Think viral storytelling

00:05:44.350 --> 00:05:47.029
on a whole new level. Imagine taking a static

00:05:47.029 --> 00:05:49.709
photo, maybe a historical figure, and having

00:05:49.709 --> 00:05:52.310
them deliver a compelling speech, perfectly

00:05:52.310 --> 00:05:54.949
lip-synced. Okay, that's pretty wild. For content

00:05:54.949 --> 00:05:57.589
creators. It dramatically cuts down production

00:05:57.589 --> 00:05:59.949
time for things like character animation or

00:05:59.949 --> 00:06:02.750
explainer videos. The fidelity is just astonishing.

00:06:02.930 --> 00:06:05.110
It really blurs the lines between a static image

00:06:05.110 --> 00:06:07.949
and a dynamic narrative. That opens up so many

00:06:07.949 --> 00:06:10.470
creative avenues and maybe some thought-provoking

00:06:10.470 --> 00:06:13.069
ones too. Okay. And on the more practical, maybe

00:06:13.069 --> 00:06:15.629
operational side, OpenAI just released full support

00:06:15.629 --> 00:06:19.069
for MCP in ChatGPT. Right, MCP. Let's unpack

00:06:19.069 --> 00:06:22.069
that. What exactly does the Model Context Protocol

00:06:22.069 --> 00:06:24.889
let users do beyond just searching for information?

00:06:25.960 --> 00:06:28.939
MCP, the Model Context Protocol. It fundamentally

00:06:28.939 --> 00:06:32.079
shifts ChatGPT from being just a conversational

00:06:32.079 --> 00:06:35.160
answer machine to more of an action engine. Instead

00:06:35.160 --> 00:06:37.459
of just telling you how to update a ticket in

00:06:37.459 --> 00:06:39.339
your project management software, for example.
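To make the "action engine" idea concrete, here is a hypothetical sketch of tool dispatch: the model names a registered tool plus arguments, and glue code executes it instead of just describing it. All names here (Tool, registry, update_ticket) are invented for illustration and are not the actual MCP API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[dict], str]  # the real side effect lives here

registry: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    registry[tool.name] = tool

def dispatch(intent: str, args: dict) -> str:
    """Route a model-chosen tool name to real code (the 'action' step)."""
    if intent not in registry:
        return f"No tool named {intent!r}; answer conversationally instead."
    return registry[intent].run(args)

# Hypothetical tool: updating a ticket in project-management software.
register(Tool("update_ticket", "Update a project ticket",
              lambda a: f"Ticket {a['id']} set to {a['status']}"))

print(dispatch("update_ticket", {"id": "PROJ-42", "status": "done"}))
# Ticket PROJ-42 set to done
```

The design point is the split of responsibility: the model only chooses a tool name and arguments; deterministic glue code performs the action, which keeps the side effects auditable.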

00:06:39.560 --> 00:06:42.730
Okay. MCP could actually... trigger that update

00:06:42.730 --> 00:06:45.110
itself or generate a detailed report or even

00:06:45.110 --> 00:06:48.050
initiate an email sequence all based on your

00:06:48.050 --> 00:06:51.129
natural language commands. It acts. It acts. It

00:06:51.129 --> 00:06:54.689
turns spoken or written intent into direct, actionable

00:06:54.689 --> 00:06:57.350
steps within other applications, linking different

00:06:57.350 --> 00:07:00.129
workflows together. It's moving from a conversational

00:07:00.129 --> 00:07:04.139
tool to an operational one. That's a big shift. And

00:07:04.139 --> 00:07:07.000
speaking of OpenAI, big news also for their structure,

00:07:07.199 --> 00:07:10.819
right? A non-binding deal with Microsoft letting

00:07:10.819 --> 00:07:12.980
them restructure as a for -profit company. Yeah,

00:07:13.019 --> 00:07:14.680
that's significant. So what does this all mean

00:07:14.680 --> 00:07:16.920
for their future direction and maybe the broader

00:07:16.920 --> 00:07:19.670
AI landscape? It really points towards a heavy

00:07:19.670 --> 00:07:22.009
push for commercial viability, you know, maybe

00:07:22.009 --> 00:07:25.050
even an IPO in the not-too-distant future. This

00:07:25.050 --> 00:07:28.089
move signals a significant acceleration towards

00:07:28.089 --> 00:07:31.709
productization and market dominance. And we're seeing

00:07:31.709 --> 00:07:34.699
similar financial trends elsewhere too. Oracle's

00:07:34.699 --> 00:07:37.199
Larry Ellison, for instance, briefly became the

00:07:37.199 --> 00:07:39.660
world's richest man. Right. I saw that. Thanks

00:07:39.660 --> 00:07:42.579
largely to an AI-driven surge in demand for

00:07:42.579 --> 00:07:45.600
cloud infrastructure. Then you've got Kupang,

00:07:45.800 --> 00:07:48.199
the South Korean e-commerce giant, investing

00:07:48.199 --> 00:07:52.019
about $54 million into a fund supporting 14 AI

00:07:52.019 --> 00:07:54.699
startups just in South Korea. Wow. The sheer

00:07:54.699 --> 00:07:56.920
amount of money pouring into this space right

00:07:56.920 --> 00:08:00.389
now. It's just staggering. So all this investment

00:08:00.389 --> 00:08:02.470
is clearly a huge vote of confidence. But, you

00:08:02.470 --> 00:08:05.050
know, for a deep dive, are there any potential

00:08:05.050 --> 00:08:07.230
downsides we should think about? Maybe bubbles,

00:08:07.350 --> 00:08:09.750
given this huge capital influx and rapid expansion?

00:08:10.240 --> 00:08:12.279
Well, yeah, rapid growth always carries risks.

00:08:12.360 --> 00:08:14.439
While the opportunities are vast, we are seeing

00:08:14.439 --> 00:08:16.620
some consolidation. And the valuations, well,

00:08:16.699 --> 00:08:18.480
in some cases, they might be running a bit ahead

00:08:18.480 --> 00:08:21.100
of actual revenue generation. Okay. It's definitely

00:08:21.100 --> 00:08:22.759
something to watch closely, especially in the

00:08:22.759 --> 00:08:25.560
startup ecosystem. But overall, AI is certainly

00:08:25.560 --> 00:08:28.519
moving strongly into video, operational automation,

00:08:28.759 --> 00:08:31.839
and becoming a major economic force. Okay, so

00:08:31.839 --> 00:08:33.980
we've covered some big picture shifts, groundbreaking

00:08:33.980 --> 00:08:38.129
fixes. Let's maybe pivot to some rapid fire updates.

00:08:38.289 --> 00:08:40.629
New tools you might want to know about just showing

00:08:40.629 --> 00:08:42.870
how quickly the landscape is evolving. Absolutely.

00:08:42.950 --> 00:08:45.649
Beyond the big platforms, there's just this constant

00:08:45.649 --> 00:08:49.169
stream of new, often niche AI tools emerging

00:08:49.169 --> 00:08:51.710
daily. Right. On the creative side, you've got

00:08:51.710 --> 00:08:54.049
tools like Quick Deep Thick, pushing boundaries

00:08:54.049 --> 00:08:57.450
with immediate face swapping. AI Figure Generator

00:08:57.450 --> 00:09:00.549
for turning 2D pictures into poseable 3D figures

00:09:00.549 --> 00:09:06.169
for artists, designers. Creation from simple

00:09:06.169 --> 00:09:09.269
text, images, or even audio makes content production

00:09:09.269 --> 00:09:12.370
accessible to way more people. And for businesses

00:09:12.370 --> 00:09:14.809
trying to navigate this explosion, there's ThirdEye.

00:09:14.889 --> 00:09:17.370
It helps brands stay discoverable by tracking

00:09:17.370 --> 00:09:19.750
all the various AIs and their capabilities. That's

00:09:19.750 --> 00:09:21.830
a fascinating new challenge, right? Brand management

00:09:21.830 --> 00:09:24.299
in the age of AI. Keeping track of the trackers,

00:09:24.340 --> 00:09:26.340
that's quite a lot of movement in creative and

00:09:26.340 --> 00:09:29.000
business apps. What about broader industry news?

00:09:29.320 --> 00:09:31.679
Well, Grammarly, a tool I think many of us use

00:09:31.679 --> 00:09:34.019
daily, is significantly expanding its reach.

00:09:34.139 --> 00:09:36.740
It now supports 19 languages beyond English.

00:09:36.960 --> 00:09:40.539
Wow, 19. That's a huge leap for global accessibility.

00:09:41.059 --> 00:09:44.759
It really is. And while Google's highly anticipated

00:09:44.759 --> 00:09:46.919
Gemini 3 isn't out this month, they've promised

00:09:46.919 --> 00:09:50.299
it's coming soon, after their current 2.5 Pro

00:09:50.299 --> 00:09:52.980
version. So anticipation is building there. Okay.

00:09:53.059 --> 00:09:55.159
Keeping an eye on that. Right. And the language

00:09:55.159 --> 00:09:57.799
expansion is a big one for global users. We also

00:09:57.799 --> 00:10:00.879
have Stability AI launching Stable Audio 2.5.

00:10:01.000 --> 00:10:03.740
This lets users create pretty impressive music

00:10:03.740 --> 00:10:06.279
tracks up to three minutes long now with unprecedented

00:10:06.279 --> 00:10:09.500
quality for AI music generation. Three minutes. Wow.

00:10:10.139 --> 00:10:12.240
In a really strategic development, both Alibaba

00:10:12.240 --> 00:10:15.360
and Baidu in China are beginning to use their

00:10:15.360 --> 00:10:18.320
own custom chips to train AI models. Ah, interesting.

00:10:18.700 --> 00:10:20.980
Moving away from NVIDIA, maybe. Potentially,

00:10:21.080 --> 00:10:23.539
yeah. It's a significant move, not just for cost

00:10:23.539 --> 00:10:25.460
efficiency, but definitely for national tech

00:10:25.460 --> 00:10:28.240
independence, reducing reliance on external semiconductor

00:10:28.240 --> 00:10:30.820
suppliers. Right. That makes sense. Finally,

00:10:30.879 --> 00:10:32.840
on the education front. Florida State University

00:10:32.840 --> 00:10:35.740
is joining the Google AI for Education Accelerator.

00:10:35.820 --> 00:10:38.960
Okay. So it's truly clear, isn't it? AI is integrating

00:10:38.960 --> 00:10:41.220
into every corner of our lives. Creative work,

00:10:41.279 --> 00:10:44.259
national infrastructure, even how we learn. Honestly,

00:10:44.320 --> 00:10:46.179
sometimes. I still wrestle with prompt drift

00:10:46.179 --> 00:10:48.340
myself, just trying to keep up with all these

00:10:48.340 --> 00:10:51.740
new features and tools. It's a lot to process

00:10:51.740 --> 00:10:53.299
and actually integrate into daily workflows.

00:10:53.539 --> 00:10:55.460
It really is a relentless pace of innovation,

00:10:55.720 --> 00:10:57.860
isn't it? What's the common thread you see weaving

00:10:57.860 --> 00:11:00.139
through all these diverse rapid-fire developments

00:11:00.139 --> 00:11:03.399
we just touched on? I guess AI is just integrating

00:11:03.399 --> 00:11:07.220
deeper into everyday tasks, creative work, and

00:11:07.220 --> 00:11:09.279
yeah, global tech infrastructure itself.

00:11:09.279 --> 00:11:12.840
Okay, let's shift gears now to

00:11:12.840 --> 00:11:15.519
a really important application of AI: healthcare.

00:11:16.009 --> 00:11:18.169
We're going to discuss a recent study focused

00:11:18.169 --> 00:11:22.409
specifically on AI's role in patient education.

00:11:22.730 --> 00:11:25.509
Yes. A recent study put two prominent large language

00:11:25.509 --> 00:11:29.029
models, ChatGPT-4o and DeepSeek-V3, to the test.

00:11:29.169 --> 00:11:31.710
Okay. They were given identical prompts to generate

00:11:31.710 --> 00:11:34.230
patient education guides for four different chronic

00:11:34.230 --> 00:11:37.090
diseases. The goal was really to see how well

00:11:37.090 --> 00:11:39.330
these AIs could produce clear, understandable,

00:11:39.570 --> 00:11:42.169
and reliable health info for the average person.

00:11:42.370 --> 00:11:44.789
And the results were quite illuminating, weren't

00:11:44.789 --> 00:11:47.220
they? Both models generated guides that typically

00:11:47.220 --> 00:11:49.759
landed at about a 9th-to-10th-grade reading level.

00:11:49.899 --> 00:11:52.100
Right, which is generally considered pretty accessible

00:11:52.100 --> 00:11:54.899
for adults. And they also scored over 80% on

00:11:54.899 --> 00:11:57.220
understandability, meaning the content was easy

00:11:57.220 --> 00:12:00.299
enough to grasp. And in terms of reliable quality,

00:12:00.539 --> 00:12:03.299
they scored an average of 47 out of 80 on the

00:12:03.299 --> 00:12:06.539
DISCERN scale, which, for those unfamiliar, is

00:12:06.539 --> 00:12:09.519
a widely recognized tool for assessing health

00:12:09.519 --> 00:12:11.899
info quality. Yeah, it's a standard measure.
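For reference, a "9th-to-10th-grade reading level" like the one mentioned above is typically estimated with a formula such as Flesch-Kincaid. A rough sketch, using a crude vowel-group syllable counter for illustration (real readability tools count syllables more carefully):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; real counters handle
    # silent e's and other exceptions.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

sample = ("Take your medication with food. Call your doctor if the "
          "symptoms get worse or you notice swelling.")
print(round(fk_grade(sample), 1))
```

Shorter sentences and shorter words drive the grade down, which is why patient-education material is usually written in that plain, direct style.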

00:12:12.120 --> 00:12:14.399
So a score in that range indicates the guides

00:12:14.399 --> 00:12:18.419
provided solid foundational information. Good

00:12:18.419 --> 00:12:20.840
for basic understanding. For basic health info,

00:12:20.980 --> 00:12:23.639
they'll certainly do the trick. DeepSeek-V3 did

00:12:23.639 --> 00:12:25.820
show a few standout moves that are worth noting,

00:12:25.919 --> 00:12:28.480
I think. It exhibited more original writing.

00:12:28.720 --> 00:12:31.580
Okay. Its Turnitin similarity score was 32.5%

00:12:31.580 --> 00:12:35.480
versus ChatGPT's 46%. Now that's quite telling.

00:12:35.659 --> 00:12:38.440
A lower similarity score suggests less boilerplate

00:12:38.440 --> 00:12:40.700
language, more unique phrasing. Which you'd want

00:12:40.700 --> 00:12:43.100
in patient education. Exactly. Something crucial

00:12:43.100 --> 00:12:45.320
for engaging materials that don't sound like

00:12:45.320 --> 00:12:47.720
they were just copied from a textbook. DeepSeek

00:12:47.720 --> 00:12:50.279
also offered more actionable content. It gave

00:12:50.279 --> 00:12:53.580
clearer next steps for patients 65% of the time

00:12:53.580 --> 00:12:57.159
compared to 50% for ChatGPT. That distinction,

00:12:57.460 --> 00:13:00.259
originality and actionability, really matters

00:13:00.259 --> 00:13:02.539
if you're trying to avoid generic language or

00:13:02.539 --> 00:13:04.480
give patients something they can actually do

00:13:04.480 --> 00:13:06.399
with the information. Right. But what does this

00:13:06.399 --> 00:13:09.279
study teach us about AI's capabilities when it

00:13:09.279 --> 00:13:13.340
comes to critical thinking, especially in a field

00:13:13.340 --> 00:13:15.690
as nuanced as medicine? Well, yeah, this is where

00:13:15.690 --> 00:13:18.289
both models kind of fell flat. When asked for

00:13:18.289 --> 00:13:20.889
more complex tasks like providing financial-style

00:13:20.889 --> 00:13:23.490
projections for health outcomes or evaluating

00:13:23.490 --> 00:13:26.669
nuanced personalized care strategies, they really

00:13:26.669 --> 00:13:29.309
struggled. They can articulate existing knowledge,

00:13:29.409 --> 00:13:33.120
structure it well. Sure. But they're not critical

00:13:33.120 --> 00:13:35.620
thinkers yet. They sound intelligent, but they

00:13:35.620 --> 00:13:38.700
don't reason the way a human expert does. And

00:13:38.700 --> 00:13:40.980
crucially, this study strongly reinforced that

00:13:40.980 --> 00:13:43.299
they cannot and really should not replace doctors

00:13:43.299 --> 00:13:45.919
or human educators for personalized advice. Not

00:13:45.919 --> 00:13:48.600
even close. Their role remains supportive, definitely

00:13:48.600 --> 00:13:51.419
not primary. This study offers a very clear picture

00:13:51.419 --> 00:13:54.620
then. What fundamentally does it teach us about

00:13:54.620 --> 00:13:58.639
AI's current and maybe future role in these complex

00:13:58.639 --> 00:14:01.129
human -centric fields like medicine? Well, it

00:14:01.129 --> 00:14:03.250
shows AI supports basic information delivery

00:14:03.250 --> 00:14:06.210
quite well now. But critical thinking and that

00:14:06.210 --> 00:14:09.250
nuanced care aspect, they absolutely remain human

00:14:09.250 --> 00:14:12.450
domains. What a deep dive indeed from fixing

00:14:12.450 --> 00:14:15.210
a fundamental AI unpredictability bug with those

00:14:15.210 --> 00:14:18.700
batch invariant kernels. I mean, that genuinely

00:14:18.700 --> 00:14:21.820
changes the game for reliability. Huge. To exploring

00:14:21.820 --> 00:14:24.539
fixes. To exploring new creative tools like Google Veo 3, and then

00:14:24.539 --> 00:14:27.960
finally understanding AI's very real humbling

00:14:27.960 --> 00:14:30.019
limitations in critical areas like healthcare.

00:14:30.200 --> 00:14:33.360
The pace of innovation is truly staggering. Absolutely.

00:14:33.399 --> 00:14:35.639
We've seen incredible breakthroughs showing how

00:14:35.639 --> 00:14:38.440
even small, focused teams can profoundly impact

00:14:38.440 --> 00:14:40.899
the future of AI. That's exciting. Yeah. But

00:14:40.899 --> 00:14:43.700
we also saw a clear, important reminder. While

00:14:43.700 --> 00:14:46.100
AI can process, synthesize, format information

00:14:46.100 --> 00:14:48.700
brilliantly, the capacity for genuine critical

00:14:48.700 --> 00:14:51.720
thinking, nuanced human insight, and truly personalized

00:14:51.720 --> 00:14:54.940
care, that remains uniquely ours. It's a powerful

00:14:54.940 --> 00:14:57.220
balance to understand. So as you continue to

00:14:57.220 --> 00:14:59.500
interact with and apply AI in your own world,

00:14:59.620 --> 00:15:01.600
maybe consider where its strengths truly lie,

00:15:01.679 --> 00:15:03.519
particularly its newfound consistency and where

00:15:03.519 --> 00:15:06.360
that invaluable human touch is absolutely essential.

00:15:06.840 --> 00:15:09.580
Yeah. Think about how we might leverage AI's

00:15:09.580 --> 00:15:12.460
reliable information delivery now that it's less

00:15:12.460 --> 00:15:15.779
like that unpredictable Starbucks, right? To

00:15:15.779 --> 00:15:19.039
free up human experts for what only they can

00:15:19.039 --> 00:15:21.360
truly do. The critical thinking. The empathy,

00:15:21.500 --> 00:15:24.500
the judgment, all the stuff AI simply hasn't

00:15:24.500 --> 00:15:26.299
mastered. Thank you for joining us for this deep

00:15:26.299 --> 00:15:28.059
dive. We hope you gained some valuable insights

00:15:28.059 --> 00:15:30.539
and maybe a clearer perspective on this rapidly

00:15:30.539 --> 00:15:32.700
evolving field. Keep exploring. Keep learning.
