WEBVTT

00:00:00.000 --> 00:00:04.179
Imagine an AI that doesn't just give you an answer

00:00:04.179 --> 00:00:07.259
back, but one that actually engages in a real

00:00:07.259 --> 00:00:10.039
research process. Not just, you know, a one-off

00:00:10.039 --> 00:00:12.939
search, but this iterative, reflective kind of

00:00:12.939 --> 00:00:16.760
thought. Right. Like a human does it. Drafting,

00:00:16.800 --> 00:00:20.640
finding the gaps, searching deeper, refining.

00:00:21.320 --> 00:00:23.739
And then doing it again. Yeah. This isn't really

00:00:23.739 --> 00:00:25.960
about faster searching anymore. I mean, breakthroughs

00:00:25.960 --> 00:00:28.059
like these, they're fundamentally changing how

00:00:28.059 --> 00:00:30.000
we discover things, how we understand the world.

00:00:30.500 --> 00:00:33.539
Welcome back to the deep dive. Our mission, as

00:00:33.539 --> 00:00:36.219
always, is to cut through all the noise and really

00:00:36.219 --> 00:00:38.780
surface the core insights, those surprising facts,

00:00:38.899 --> 00:00:41.020
maybe some aha moments from everything coming

00:00:41.020 --> 00:00:43.420
out. Yeah. And today we're taking a deep dive

00:00:43.420 --> 00:00:46.890
into some, well, pretty significant AI advancements.

00:00:47.009 --> 00:00:49.130
We'll kick things off unpacking Google's new

00:00:49.130 --> 00:00:51.829
AI researcher. It's quite a leap actually. Okay.

00:00:51.909 --> 00:00:53.530
Then we'll zoom out a bit, look at some other

00:00:53.530 --> 00:00:56.810
noteworthy things happening in the wider AI landscape.

00:00:57.070 --> 00:00:59.509
Some interesting claims there too. And finally,

00:00:59.530 --> 00:01:01.969
we'll journey into the microcosm with MIT. They've

00:01:01.969 --> 00:01:05.170
got this remarkable breakthrough mapping proteins

00:01:05.170 --> 00:01:08.329
inside cells. So yeah, it's a packed exploration.

00:01:08.609 --> 00:01:11.150
Let's jump in. Okay. Let's start with Google's

00:01:11.150 --> 00:01:16.230
Test-Time Diffusion Deep Researcher, TTD-DR. Bit

00:01:16.230 --> 00:01:19.469
of a mouthful. Yeah. So when we look at most AI

00:01:19.469 --> 00:01:22.250
tools for research right now, they tend to work

00:01:22.250 --> 00:01:25.930
in, well, one of two ways, basically. Right. Either

00:01:25.930 --> 00:01:28.510
they do a single search, summarize what they find,

00:01:28.510 --> 00:01:30.769
or maybe they run a few searches at the same

00:01:30.769 --> 00:01:32.450
time and just kind of match the info together,

00:01:32.450 --> 00:01:36.250
stitch it up. Exactly. And look, that works okay

00:01:36.250 --> 00:01:40.219
for simple questions. But it's just not how humans

00:01:40.219 --> 00:01:43.099
really research, is it? Especially complex stuff.

00:01:43.180 --> 00:01:45.400
When we go deep, we don't just take the first

00:01:45.400 --> 00:01:48.200
results. We draft something initial. Then we

00:01:48.200 --> 00:01:50.819
step back, look at it, what's missing. Yeah,

00:01:50.840 --> 00:01:53.120
critique it. Then go back, search more, refine

00:01:53.120 --> 00:01:55.780
our understanding, edit, integrate the new stuff,

00:01:55.939 --> 00:01:58.299
and repeat that whole cycle until you get something

00:01:58.299 --> 00:02:00.840
truly comprehensive. AI hasn't really nailed

00:02:00.840 --> 00:02:04.680
that nuanced flow until maybe now. And that's

00:02:04.680 --> 00:02:07.480
exactly where this Google TTD-DR seems to

00:02:07.480 --> 00:02:10.780
mark a pretty big departure. This new model from

00:02:10.780 --> 00:02:13.020
Google's team, it's like built from the ground

00:02:13.020 --> 00:02:16.719
up specifically to mimic that human iterative

00:02:16.719 --> 00:02:19.020
process. Okay. So here's how it goes. You give

00:02:19.020 --> 00:02:21.340
it a question. It starts off creating a detailed

00:02:21.340 --> 00:02:23.699
initial outline. Right. Then it generates specific

00:02:23.699 --> 00:02:26.680
search queries, pulls in the information, and

00:02:26.680 --> 00:02:30.180
integrates that into like a first draft. But,

00:02:30.479 --> 00:02:33.560
and this is the cool part, it doesn't stop there.

00:02:33.909 --> 00:02:36.349
It actually generates multiple versions, different

00:02:36.349 --> 00:02:38.150
outlines, different search queries, different

00:02:38.150 --> 00:02:40.710
drafts. Yeah. Think of it like, I don't know,

00:02:40.750 --> 00:02:43.270
like a small team exploring different paths at

00:02:43.270 --> 00:02:47.110
once. Okay. Then there's a separate bit, a judge

00:02:47.110 --> 00:02:49.729
LLM. That's just an AI that understands text

00:02:49.729 --> 00:02:51.629
really well. Right. A language model. Exactly.

00:02:51.810 --> 00:02:54.870
And this judge scores each version. Is it complete?

00:02:55.110 --> 00:02:57.810
Is it coherent? Does it actually answer the question?

00:02:58.219 --> 00:03:00.340
So it's self-correcting, kind of? Yeah, precisely.

00:03:00.699 --> 00:03:03.580
It's like this multi-stage filter and synthesis

00:03:03.580 --> 00:03:06.180
thing constantly refining itself until it pulls

00:03:06.180 --> 00:03:09.199
all the best findings into one really solid document.

00:03:09.500 --> 00:03:12.419
Like stacking Lego blocks of data, checking each

00:03:12.419 --> 00:03:15.080
layer, then building better ones on top. And

00:03:15.080 --> 00:03:17.780
the performance numbers for this TTD-DR, they

00:03:17.780 --> 00:03:20.219
seem, well, genuinely compelling. It looks like

00:03:20.219 --> 00:03:22.620
a real leap. Yeah. On these long -form research

00:03:22.620 --> 00:03:24.680
benchmarks, it's not just a little bit better.

00:03:24.719 --> 00:03:26.879
It's winning head-to-head quite often. Against

00:03:26.879 --> 00:03:29.419
OpenAI Deep Research, it wins nearly 70% of

00:03:29.419 --> 00:03:34.139
the time, 69.1% to be exact. Wow. And 74.5%

00:03:34.139 --> 00:03:36.379
against Deep Consult. Okay. That's significant.

00:03:36.680 --> 00:03:40.449
Yeah. And for multi-hop QA. You know, those complex

00:03:40.449 --> 00:03:42.710
questions needing multiple steps, connecting

00:03:42.710 --> 00:03:44.909
different pieces of info. Right, the tricky ones.

00:03:45.129 --> 00:03:47.090
It's also clearly ahead there, too. It scores 33.9%

00:03:47.090 --> 00:03:51.930
versus 29.1% on HLE search, and 69.1%

00:03:51.930 --> 00:03:55.210
versus 67.4% on GAIA. I mean, these aren't small

00:03:55.210 --> 00:03:57.569
gains. It suggests a pretty fundamental shift

00:03:57.569 --> 00:04:00.069
in AI's ability to do deep, nuanced research.

00:04:00.409 --> 00:04:02.129
What's fascinating, though, is even with these,

00:04:02.169 --> 00:04:04.250
you know, really impressive results, TTD-DR still

00:04:04.250 --> 00:04:06.629
has some limits right now. Like, it's currently

00:04:06.629 --> 00:04:09.500
locked into using existing search APIs. It can't

00:04:09.500 --> 00:04:12.879
just go browse the live web on its own or run

00:04:12.879 --> 00:04:15.319
code or anything like that. But think about the

00:04:15.319 --> 00:04:17.420
potential, right? Imagine a future version where

00:04:17.420 --> 00:04:20.420
you could plug in, say, autonomous web crawling

00:04:20.420 --> 00:04:25.720
or let it run data analysis scripts itself. That

00:04:25.720 --> 00:04:27.779
would turn it into an AI that doesn't just think

00:04:27.779 --> 00:04:30.360
like a researcher iteratively, but also kind

00:04:30.360 --> 00:04:33.399
of hustles like one, dynamically finding and

00:04:33.399 --> 00:04:36.240
processing info way beyond its current limits.

00:04:36.540 --> 00:04:38.220
Yeah, that's where it gets really interesting

00:04:38.220 --> 00:04:41.060
for me, because this feels like what those deep

00:04:41.060 --> 00:04:43.860
research tools have kind of promised for years,

00:04:43.899 --> 00:04:47.000
but maybe haven't quite delivered on. Felt a

00:04:47.000 --> 00:04:49.759
bit thin sometimes. Right. If, say, OpenAI's

00:04:49.759 --> 00:04:51.339
Deep Research was like a really sophisticated

00:04:51.339 --> 00:04:54.639
calculator, TTD-DR feels more like having a dedicated

00:04:54.639 --> 00:04:56.819
lab assistant. Yeah. You know, one that learns

00:04:56.819 --> 00:04:59.000
as it goes. Yeah, that's a good analogy. This

00:04:59.000 --> 00:05:01.439
really could be a pivotal moment where AI research

00:05:01.439 --> 00:05:03.560
moves beyond being, I don't know, a neat trick

00:05:03.560 --> 00:05:06.459
to being genuinely human-like assistants. Right.

00:05:06.540 --> 00:05:09.839
So thinking about that, if this AI is really

00:05:09.839 --> 00:05:12.399
learning to iterate to refine its approach like

00:05:12.399 --> 00:05:15.110
we do. What's the biggest shift that brings for

00:05:15.110 --> 00:05:18.269
our own research process? Well, simply put, it

00:05:18.269 --> 00:05:19.930
offloads so much of that heavy lifting, that

00:05:19.930 --> 00:05:23.389
grunt work. It frees up human creativity. Yeah,

00:05:23.529 --> 00:05:26.889
I can see that. As someone who, honestly, I still

00:05:26.889 --> 00:05:29.449
wrestle with prompt drift myself sometimes, you

00:05:29.449 --> 00:05:31.689
know, where the AI conversation kind of wanders

00:05:31.689 --> 00:05:34.269
off from your original goal over time. The idea

00:05:34.269 --> 00:05:36.589
of an AI that actively refines its own approach

00:05:36.589 --> 00:05:40.029
across multiple tries, that's incredibly compelling.

00:05:40.600 --> 00:05:42.560
Okay, so let's switch gears for a moment. Let's

00:05:42.560 --> 00:05:45.220
look at the broader AI scene with our Today in

00:05:45.220 --> 00:05:49.560
AI highlights. It's always this fascinating mix,

00:05:49.639 --> 00:05:51.759
isn't it? Collaboration, innovation, and sometimes

00:05:51.759 --> 00:05:54.959
some more contentious stuff. Always something

00:05:54.959 --> 00:05:57.060
brewing. For instance, there are these reports

00:05:57.060 --> 00:05:59.920
suggesting Google and OpenAI actually worked together

00:05:59.920 --> 00:06:02.600
behind the scenes to ensure GPT-5's launch went

00:06:02.600 --> 00:06:05.529
smoothly. Really? Huh. Yeah, which suggests this

00:06:05.529 --> 00:06:07.629
kind of interesting undercurrent of cooperation,

00:06:08.110 --> 00:06:10.129
even though they're fierce competitors. Right.

00:06:10.430 --> 00:06:13.689
Frenemies, maybe? Maybe. And given that, well,

00:06:13.790 --> 00:06:15.529
it makes sense that Google's own next big model

00:06:15.529 --> 00:06:17.910
might be coming pretty soon, too. That is a notable

00:06:17.910 --> 00:06:19.889
collaboration, especially given the competition.

00:06:20.370 --> 00:06:24.250
And speaking of data and big launches, remember

00:06:24.250 --> 00:06:26.430
that initial story about Google indexing like

00:06:26.430 --> 00:06:30.850
4,000 public chats? Vaguely, yeah. Turns out...

00:06:31.040 --> 00:06:34.240
the real number was way higher, over 96,000

00:06:34.240 --> 00:06:36.720
public chats. And if you add in Grok and Claude

00:06:36.720 --> 00:06:41.000
chats, it's over 130,000. Wow. Okay. Big difference.

00:06:41.120 --> 00:06:43.160
And what's maybe more concerning for, you know,

00:06:43.180 --> 00:06:46.000
information quality is that dozens of those shared

00:06:46.000 --> 00:06:48.600
chats reportedly had false information in them.

00:06:48.720 --> 00:06:52.639
Yikes. Not great. No. Also... related to big

00:06:52.639 --> 00:06:56.360
launches. After GPT-5 came out, some of OpenAI's

00:06:56.360 --> 00:06:58.259
charts, their data visualizations, they drew

00:06:58.259 --> 00:07:00.199
some flak. Oh, I saw some of that, the chart

00:07:00.199 --> 00:07:02.180
crime stuff. Exactly. A lot of people in the

00:07:02.180 --> 00:07:04.560
data viz community called it chart crime, basically

00:07:04.560 --> 00:07:06.259
saying the way the data was presented might have

00:07:06.259 --> 00:07:08.670
been potentially misleading. Kind of emphasizing

00:07:08.670 --> 00:07:11.069
gains maybe more than was warranted. Yeah, those

00:07:11.069 --> 00:07:13.949
charts definitely got people talking. On a different

00:07:13.949 --> 00:07:15.829
track, we're seeing interesting ways people are

00:07:15.829 --> 00:07:19.430
combining existing AI tools, like one writer

00:07:19.430 --> 00:07:22.550
experimented with linking NotebookLM, Perplexity,

00:07:22.569 --> 00:07:25.889
and ChatGPT together. Okay, to do what? Trying

00:07:25.889 --> 00:07:28.750
to create this sort of super efficient AI workflow

00:07:28.750 --> 00:07:30.990
for pulling knowledge together, they claimed

00:07:30.990 --> 00:07:33.829
it was the, quote, ultimate knowledge power combo.

00:07:33.910 --> 00:07:36.209
So it hints at this potential synergy between

00:07:36.209 --> 00:07:38.529
tools. Interesting. But, you know, there's always

00:07:38.529 --> 00:07:41.389
the flip side. This rapid progress brings caution,

00:07:41.470 --> 00:07:44.509
too. A former Google exec made a pretty stark

00:07:44.509 --> 00:07:47.110
prediction recently, talking about a possible

00:07:47.110 --> 00:07:51.610
15-year AI dystopia, maybe starting around 2027.

00:07:51.930 --> 00:07:55.449
Oof. Strong words. Yeah. Claiming AI could escalate,

00:07:55.470 --> 00:07:58.310
quote, the evil that man can do to an uncontrollable

00:07:58.310 --> 00:08:00.910
level. And citing examples like reports of Grok

00:08:00.910 --> 00:08:03.990
creating sexually explicit visual content even without

00:08:03.990 --> 00:08:06.670
specific prompts as a worrying sign. It's definitely

00:08:06.670 --> 00:08:09.389
a field packed with both huge promise and, yeah,

00:08:09.430 --> 00:08:11.250
significant challenges. Yeah. And beyond the

00:08:11.250 --> 00:08:13.009
big headlines, we're constantly seeing practical

00:08:13.009 --> 00:08:15.360
new tools pop up. Always something new. Yeah,

00:08:15.459 --> 00:08:18.300
like Fast Lip Sync. Automatically aligns character

00:08:18.300 --> 00:08:20.639
lips to audio. Could be great for animation.

00:08:20.879 --> 00:08:23.560
Oh, neat. IMI Editor offers pro-level image

00:08:23.560 --> 00:08:26.699
editing: background removal, upscaling. And there's

00:08:26.699 --> 00:08:29.199
a free online converter for image formats. Just

00:08:29.199 --> 00:08:31.480
useful little things. And some other quick hits.

00:08:31.860 --> 00:08:34.919
Rumor is OpenAI might offer a very limited number

00:08:34.919 --> 00:08:37.899
of GPT-5 Pro queries each month. Google Finance

00:08:37.899 --> 00:08:40.419
is testing AI upgrades and a live news feed.

00:08:40.600 --> 00:08:44.460
Okay. OpenArt, a platform from ex-Googlers, is

00:08:44.460 --> 00:08:47.179
apparently generating brain rot videos with one

00:08:47.179 --> 00:08:50.159
click, which is a weird sign of the times. Yeah,

00:08:50.159 --> 00:08:52.940
yeah. And on the corporate side, xAI's head

00:08:52.940 --> 00:08:56.000
of legal stepped down recently. And Google open

00:08:56.000 --> 00:08:58.080
sourced and upgraded an AI for understanding animal

00:08:58.080 --> 00:09:00.120
sounds, which is cool for biodiversity research.

00:09:00.379 --> 00:09:02.919
Lots going on. Totally. So looking at all this,

00:09:03.019 --> 00:09:05.740
this huge range of stuff. Yeah. What do you think

00:09:05.740 --> 00:09:08.299
is the most pressing challenge right now in managing

00:09:08.299 --> 00:09:10.980
this incredible speed of AI innovation? I think

00:09:10.980 --> 00:09:13.200
it boils down to needing constant vigilance,

00:09:13.259 --> 00:09:15.820
really, and evolving our ethical frameworks,

00:09:15.919 --> 00:09:18.320
plus fostering really open, transparent public

00:09:18.320 --> 00:09:21.500
discussion about it all. Right. Okay, for our

00:09:21.500 --> 00:09:25.419
final deep dive today, let's shift focus again

00:09:25.419 --> 00:09:29.019
to biology, actually, and this really groundbreaking

00:09:29.019 --> 00:09:32.980
work from MIT, their protein GPS mapping the

00:09:32.980 --> 00:09:35.470
cell's microcosm. Yeah, this is really cool.

00:09:35.549 --> 00:09:38.629
So knowing exactly where a specific protein is

00:09:38.629 --> 00:09:41.350
located inside a human cell, that's always been

00:09:41.350 --> 00:09:43.370
a huge challenge for scientists. Right. Cells

00:09:43.370 --> 00:09:46.509
are incredibly complex inside. Exactly. Traditionally,

00:09:46.509 --> 00:09:48.970
you had to do these slow, meticulous experiments.

00:09:49.269 --> 00:09:52.230
It could take months in a lab just to find one

00:09:52.230 --> 00:09:55.070
protein's location. And mostly that was for proteins

00:09:55.070 --> 00:09:57.590
they already knew something about. Super painstaking,

00:09:57.629 --> 00:09:59.710
resource -heavy work. Okay, so what's the breakthrough?

00:10:00.039 --> 00:10:02.799
Well, researchers from MIT, Harvard, and the

00:10:02.799 --> 00:10:04.860
Broad Institute have created a system called

00:10:04.860 --> 00:10:07.759
PUPS. It's a sophisticated AI, and it's designed

00:10:07.759 --> 00:10:10.159
to predict the precise location of almost any

00:10:10.159 --> 00:10:13.210
protein inside a single human cell. Any protein.

00:10:13.409 --> 00:10:16.250
Wow. It basically has this two-part brain. First

00:10:16.250 --> 00:10:18.529
part is a protein language model. It learns the

00:10:18.529 --> 00:10:21.009
protein structure just from its amino acid sequence,

00:10:21.090 --> 00:10:22.870
its basic recipe, understands its properties,

00:10:23.049 --> 00:10:25.690
learns the protein itself. Right. The second

00:10:25.690 --> 00:10:28.049
part is an in-painting model. This part looks

00:10:28.049 --> 00:10:29.629
at the bigger picture, the cell environment.

00:10:29.950 --> 00:10:33.730
It reads the vibe of the cell. What type of cell

00:10:33.730 --> 00:10:37.110
is it? What state is it in? Is it stressed? That

00:10:37.110 --> 00:10:40.090
context is crucial for figuring out where a protein

00:10:40.090 --> 00:10:43.230
should be. Ah, it understands the protein in its

00:10:43.230 --> 00:10:46.549
neighborhood, like giving the AI x-ray vision

00:10:46.549 --> 00:10:49.110
into the cell's whole landscape. And here's the

00:10:49.110 --> 00:10:52.070
real kicker, the true breakthrough. PUPS works

00:10:52.070 --> 00:10:54.929
on proteins and even entirely new cell types

00:10:54.929 --> 00:10:56.909
it has never seen before during its training.

00:10:57.190 --> 00:10:59.370
Whoa. OK, that's not just confirming what we

00:10:59.370 --> 00:11:01.789
know. That's discovery. Exactly. It can even

00:11:01.789 --> 00:11:04.610
flag subtle changes caused by mutations, stuff

00:11:04.610 --> 00:11:06.470
that might be missing from the Human Protein

00:11:06.470 --> 00:11:08.850
Atlas, which, you know, is our big map of known

00:11:08.850 --> 00:11:11.490
proteins. But it's often limited by those slow,

00:11:11.629 --> 00:11:15.110
traditional methods. PUPS can fill in those critical

00:11:15.110 --> 00:11:17.690
gaps. So how good is it? In their tests, PUPS

00:11:17.690 --> 00:11:20.309
consistently beat all the baseline AI methods

00:11:20.309 --> 00:11:22.610
they compared it to. Much lower prediction error,

00:11:22.789 --> 00:11:25.690
high accuracy, even in really tricky or new situations.

00:11:26.230 --> 00:11:27.889
It's not just a little better, it seems like

00:11:27.889 --> 00:11:29.870
a profound jump in our ability to actually see

00:11:29.870 --> 00:11:32.679
what's going on inside cells. Whoa. Okay, just

00:11:32.679 --> 00:11:34.980
pause on that. Imagine instantly seeing where

00:11:34.980 --> 00:11:38.000
any protein is in any cell, whether you studied

00:11:38.000 --> 00:11:40.259
it before or not. That really does feel like

00:11:40.259 --> 00:11:41.779
it's moving out of science fiction. It really

00:11:41.779 --> 00:11:44.220
does. So the implications, I mean, much faster

00:11:44.220 --> 00:11:46.360
identification of disease markers, right, for

00:11:46.360 --> 00:11:48.039
earlier diagnosis. Well, absolutely critical.

00:11:48.240 --> 00:11:50.879
Testing drug targets without all the guesswork

00:11:50.879 --> 00:11:53.720
and trial and error of the old ways, that must

00:11:53.720 --> 00:11:55.960
save enormous amounts of time and resources.

00:11:56.039 --> 00:11:59.080
Huge savings, yeah. And maybe the most exciting

00:11:59.080 --> 00:12:02.230
part. It opens the door to exploring bits of

00:12:02.230 --> 00:12:04.250
cell biology we've just never been able to map

00:12:04.250 --> 00:12:06.769
before. Like whole new frontiers in understanding

00:12:06.769 --> 00:12:10.070
life at its most basic level. Totally new territories.

00:12:10.409 --> 00:12:13.149
So in your view, what's the single biggest potential

00:12:13.149 --> 00:12:16.389
leap this offers for medical research? I'd say

00:12:16.389 --> 00:12:18.750
it's dramatically accelerating drug discovery

00:12:18.750 --> 00:12:21.789
and just fundamentally deepening how we understand

00:12:21.789 --> 00:12:24.710
diseases right down at the cellular level.

00:12:24.710 --> 00:12:27.129
All right. As we wrap up this deep

00:12:27.129 --> 00:12:28.789
dive, let's just quickly connect these threads.

00:12:29.309 --> 00:12:33.210
We started with Google's TTD-DR and AI learning

00:12:33.210 --> 00:12:35.570
to research, to iterate, much more like we humans

00:12:35.570 --> 00:12:38.830
do. Then we navigated that really complex, sometimes

00:12:38.830 --> 00:12:41.950
ethically tricky landscape of all the rapid AI

00:12:41.950 --> 00:12:43.990
advancements happening right now. Yeah, the Wild

00:12:43.990 --> 00:12:47.110
West sometimes. And then we ended with this truly

00:12:47.110 --> 00:12:50.350
fundamental scientific breakthrough, MIT's PUPS,

00:12:50.960 --> 00:12:53.779
letting us map that unseen world inside ourselves.

00:12:54.059 --> 00:12:56.299
Incredible stuff. And the theme connecting all

00:12:56.299 --> 00:12:59.399
this, it seems pretty clear, AI is evolving incredibly

00:12:59.399 --> 00:13:02.759
fast, not just to automate simple tasks anymore,

00:13:02.860 --> 00:13:06.179
but to genuinely augment, maybe even redefine,

00:13:06.179 --> 00:13:08.519
human-like thinking and scientific discovery

00:13:08.519 --> 00:13:12.149
itself. And the speed of it all is just... Well,

00:13:12.210 --> 00:13:13.990
it's astounding. The kinds of breakthroughs we're

00:13:13.990 --> 00:13:15.470
talking about here, almost in real time, it's

00:13:15.470 --> 00:13:17.129
really a testament to the power of iteration,

00:13:17.409 --> 00:13:20.330
isn't it? Both from these super smart AIs and,

00:13:20.370 --> 00:13:21.830
of course, from the human researchers pushing

00:13:21.830 --> 00:13:24.710
all the boundaries. It's a dynamic and really

00:13:24.710 --> 00:13:27.090
exciting field to watch. So here's a thought

00:13:27.090 --> 00:13:30.409
to leave you with. If AI can now mimic, maybe

00:13:30.409 --> 00:13:33.529
even outperform, human-like iterative research,

00:13:33.690 --> 00:13:36.730
and it can map the unseen world inside our own

00:13:36.730 --> 00:13:39.529
cells, what previously impossible scientific

00:13:39.529 --> 00:13:41.590
or creative challenges might it unlock next?

00:13:42.009 --> 00:13:44.169
Maybe consider how this starts to change our

00:13:44.169 --> 00:13:46.350
very definition of what discovery even means.

00:13:46.889 --> 00:13:49.070
Thanks for joining us on this deep dive today.

00:13:49.269 --> 00:13:50.990
We really hope you'll take a moment to reflect

00:13:50.990 --> 00:13:53.070
on just how profound some of these changes might

00:13:53.070 --> 00:13:55.570
be. We look forward to our next exploration of

00:13:55.570 --> 00:13:57.549
fascinating knowledge with you.

