WEBVTT

00:00:00.000 --> 00:00:03.580
Okay, so welcome back to the deep dive. Today's

00:00:03.580 --> 00:00:06.799
dive is, uh, well, we're really digging into

00:00:06.799 --> 00:00:09.160
some fascinating stuff from the stack of material

00:00:09.160 --> 00:00:11.140
we've been looking at: articles, some fresh research

00:00:11.140 --> 00:00:13.759
notes, you know, everything centered around what's

00:00:13.759 --> 00:00:17.260
bubbling up right now in AI. Our mission today

00:00:17.260 --> 00:00:19.559
for you listening is to just kind of cut through

00:00:19.559 --> 00:00:22.260
the noise, right? Get to the really good insights,

00:00:22.260 --> 00:00:25.379
the key developments, maybe, uh, some surprising

00:00:25.379 --> 00:00:27.460
things you might have missed. We're gonna unpack

00:00:27.460 --> 00:00:30.160
all this material together. Yeah, absolutely. It's

00:00:30.160 --> 00:00:34.170
uh it's easy to feel a bit swamped with how fast

00:00:34.170 --> 00:00:36.909
AI is moving. So this dive is about helping you

00:00:36.909 --> 00:00:39.189
connect the dots, understand the significance

00:00:39.189 --> 00:00:40.929
of these latest developments. You know, it's

00:00:40.929 --> 00:00:43.229
like we're looking at both the really practical,

00:00:43.369 --> 00:00:45.530
immediate applications and some of these big,

00:00:45.630 --> 00:00:47.810
almost foundational shifts happening underneath

00:00:47.810 --> 00:00:52.030
the surface. Exactly. So let's jump in. And I

00:00:52.030 --> 00:00:53.869
want to start with something that when we were

00:00:53.869 --> 00:00:56.509
going through the sources felt incredibly impactful,

00:00:56.570 --> 00:00:58.920
maybe a little mind-blowing, honestly. Okay.

00:00:59.000 --> 00:01:01.240
It's this project called Fraggle coming out of

00:01:01.240 --> 00:01:04.260
Singapore. Fraggle, right. The core idea here

00:01:04.260 --> 00:01:08.099
is detecting incredibly tiny traces of cancer

00:01:08.099 --> 00:01:11.780
using just a small blood sample. Like, picture

00:01:11.780 --> 00:01:14.700
this using AI to potentially spot cancer before

00:01:14.700 --> 00:01:16.719
you even have symptoms. I mean, that feels like

00:01:16.719 --> 00:01:18.980
sci-fi, right? It really does. But the material

00:01:18.980 --> 00:01:21.780
suggests it's getting real. What's really striking

00:01:21.780 --> 00:01:24.599
about Fraggle, based on what we read, is how

00:01:24.599 --> 00:01:26.829
it does it. It's looking for something called

00:01:26.829 --> 00:01:30.030
circulating tumor DNA or ctDNA. ctDNA, yeah.

00:01:30.209 --> 00:01:32.129
These are microscopic bits of DNA that break

00:01:32.129 --> 00:01:34.450
off from cancer cells and float around in your

00:01:34.450 --> 00:01:37.409
bloodstream. Okay. Now, the key detail the research

00:01:37.409 --> 00:01:40.049
highlighted is that these ctDNA fragments tend

00:01:40.049 --> 00:01:42.670
to be slightly different sizes than the healthy

00:01:42.670 --> 00:01:44.829
DNA fragments also floating around. Different

00:01:44.829 --> 00:01:47.930
sizes. Yeah. And Fraggle's AI is trained to spot

00:01:47.930 --> 00:01:50.010
that subtle size difference. It's using this

00:01:50.700 --> 00:01:53.859
quiet biological signature as its marker.

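NOTE
To make that fragment-size idea concrete, here is a toy sketch. It is not Fraggle's actual pipeline (the sources describe a trained AI on whole-genome sequencing data); the lengths of roughly 167 bp for healthy cell-free DNA versus shorter tumor-derived ctDNA are commonly cited figures, and everything else below is an invented illustration.

NOTE
```python
# Toy illustration: flag a simulated blood sample by mean cfDNA
# fragment length. All distributions and thresholds are invented.
import random
import statistics
def simulate_sample(tumor_fraction, n=2000, seed=0):
    # Healthy cell-free DNA clusters near ~167 bp; tumor-derived
    # ctDNA fragments tend to run shorter (modeled here as ~145 bp).
    rng = random.Random(seed)
    lengths = []
    for _ in range(n):
        if rng.random() < tumor_fraction:
            lengths.append(rng.gauss(145, 10))  # tumor-derived fragment
        else:
            lengths.append(rng.gauss(167, 10))  # healthy fragment
    return lengths
def looks_tumor_positive(lengths, threshold=164.0):
    # Crude stand-in for the trained model: a lower mean fragment
    # length is treated as the "quiet biological signature".
    return statistics.fmean(lengths) < threshold
print(looks_tumor_positive(simulate_sample(0.0)))  # healthy sample
print(looks_tumor_positive(simulate_sample(0.3)))  # sample with ctDNA
```

NOTE
A real classifier would use the whole length distribution rather than one mean, but the principle is the same: the size signature, not the tumor itself, carries the signal.
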
00:01:54.000 --> 00:01:55.939
Oh, OK. So it's not trying to find a whole tumor.

00:01:56.040 --> 00:01:58.319
It's looking for these molecular crumbs like

00:01:58.319 --> 00:02:00.719
tiny clues. Exactly. These tiny clues in the

00:02:00.719 --> 00:02:02.959
blood. Yeah. And the practical uses they outline

00:02:02.959 --> 00:02:04.739
in the sources, they're pretty significant, right?

00:02:04.819 --> 00:02:06.599
Like it could be that early warning system you

00:02:06.599 --> 00:02:09.560
mentioned, or... Or catching a cancer relapse way

00:02:09.560 --> 00:02:12.240
sooner. Right. Or even knowing quickly if a specific

00:02:12.240 --> 00:02:14.530
treatment isn't working. That kind of thing.

00:02:14.650 --> 00:02:17.050
Right. And the sources emphasize this isn't just

00:02:17.050 --> 00:02:19.930
theoretical anymore. They've trained the AI on

00:02:19.930 --> 00:02:22.909
data from whole genome sequencing, looking at

00:02:22.909 --> 00:02:26.090
DNA sizes from actual cancer patients versus

00:02:26.090 --> 00:02:28.729
healthy individuals. And it's already entered

00:02:28.729 --> 00:02:31.310
clinical trials with over 100 patients in Singapore

00:02:31.310 --> 00:02:33.909
who are currently undergoing cancer treatment.

00:02:34.229 --> 00:02:36.889
So it's, you know, moving through those necessary

00:02:36.889 --> 00:02:39.509
steps towards real world use. OK, but here's

00:02:39.509 --> 00:02:41.789
where my jaw kind of dropped. Digging into the

00:02:41.789 --> 00:02:45.240
cost part. We looked at what a typical ctDNA

00:02:45.240 --> 00:02:48.500
test costs today, and the sources put it anywhere

00:02:48.500 --> 00:02:52.500
up to, what, $780? Yeah, around that. Expensive.

00:02:52.500 --> 00:02:54.580
But Fraggle's method, it's coming in at just

00:02:54.580 --> 00:02:58.560
$39. $39! I mean, that's a wild difference. It

00:02:58.560 --> 00:03:00.900
really is. $39. And the implications of that

00:03:00.900 --> 00:03:03.259
massive cost reduction are profound, according

00:03:03.259 --> 00:03:05.580
to the material. Like what? Well, it suddenly

00:03:05.580 --> 00:03:07.810
makes frequent monitoring feasible. The sources

00:03:07.810 --> 00:03:10.050
even used the phrase "like a live feed" for tracking.

00:03:10.229 --> 00:03:13.689
Wow, a live feed. And this is huge. It could

00:03:13.689 --> 00:03:16.110
potentially make high quality cancer monitoring

00:03:16.110 --> 00:03:19.169
accessible globally, not just something limited

00:03:19.169 --> 00:03:21.949
to wealthy nations or specialized private hospitals.

00:03:22.050 --> 00:03:24.550
That's huge. And they designed it smartly, too,

00:03:24.629 --> 00:03:26.750
didn't they? The sources pointed out it was built

00:03:26.750 --> 00:03:29.530
with adoption in mind. It integrates seamlessly

00:03:29.530 --> 00:03:32.389
with standard DNA profiling labs already out

00:03:32.389 --> 00:03:34.569
there. You don't need new expensive machines

00:03:34.569 --> 00:03:36.990
or require massive retraining for technicians.

00:03:37.150 --> 00:03:39.789
Right. It's designed to work alongside existing

00:03:39.789 --> 00:03:43.789
tools, which makes rolling it out, well, much

00:03:43.789 --> 00:03:46.210
more realistic. Okay. Yeah, it's that focus on

00:03:46.210 --> 00:03:48.250
practical integration that turns it from an interesting

00:03:48.250 --> 00:03:51.110
lab project into something with real potential

00:03:51.110 --> 00:03:53.409
to change clinical practice. So what does it

00:03:53.409 --> 00:03:55.550
all mean then? What this all means is it points

00:03:55.550 --> 00:03:59.490
towards a future where maybe, just maybe, cancer

00:03:59.490 --> 00:04:01.949
detection could become as relatively routine

00:04:01.949 --> 00:04:04.430
as, say, getting your cholesterol checked during

00:04:04.430 --> 00:04:06.969
an annual physical. That would fundamentally

00:04:06.969 --> 00:04:09.610
shift the paradigm for how we manage the disease.

00:04:10.139 --> 00:04:13.000
Wow. Okay. So that's a super tangible, impactful

00:04:13.000 --> 00:04:16.120
application, obviously. Now let's pivot from

00:04:16.120 --> 00:04:18.240
something concrete in healthcare to something

00:04:18.240 --> 00:04:22.300
a bit more abstract, maybe, but potentially even

00:04:22.300 --> 00:04:24.899
more foundational for AI itself. We're going

00:04:24.899 --> 00:04:29.199
to talk about MIT's SEAL project. Ah, SEAL. And

00:04:29.199 --> 00:04:33.480
the hook here is, what if AI could learn... by

00:04:33.480 --> 00:04:36.779
literally teaching itself, like self-improvement

00:04:36.779 --> 00:04:39.980
on steroids. This is fascinating stuff from the

00:04:39.980 --> 00:04:42.439
research papers we reviewed. SEAL stands for

00:04:42.439 --> 00:04:45.699
self-adapting LLMs, large language models. Self

00:04:45.699 --> 00:04:47.920
-adapting LLMs, right. It's a framework where

00:04:47.920 --> 00:04:50.699
an LLM doesn't just process data fed to it, it

00:04:50.699 --> 00:04:53.300
trains itself. How does that even work? Like,

00:04:53.339 --> 00:04:55.379
what's the mechanism? Based on the descriptions,

00:04:55.600 --> 00:04:57.899
it involves the AI creating its own synthetic

00:04:57.899 --> 00:05:00.720
data to learn from. Its own data. Yeah. It updates

00:05:00.720 --> 00:05:02.660
its own internal instructions on how to learn

00:05:02.660 --> 00:05:05.120
more effectively. And it even makes its own adjustments,

00:05:05.180 --> 00:05:07.519
like tweaking its weights, the parameters within

00:05:07.519 --> 00:05:09.699
the model that determine how it processes information.

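NOTE
The loop just described can be sketched with a deliberately tiny stand-in. This is not MIT's actual SEAL algorithm, only an analogue of its shape: the "model" (here, a single weight) proposes its own synthetic batch plus a step size (the "self-edit"), applies the update, and keeps it only when a held-out score improves. All names and numbers are invented.

NOTE
```python
# Toy analogue of a self-adapting training loop (not the real SEAL).
import random
TARGET = 3.0  # ground truth the toy model tries to approximate
def heldout_loss(w):
    # Stand-in for the evaluation the model can query after an edit.
    return (w - TARGET) ** 2
def propose_self_edit(rng):
    # The model writes its own "revision notes": synthetic inputs
    # plus a learning rate. Real self-edits would be generated text.
    xs = [rng.uniform(-1.0, 1.0) for _ in range(8)]
    lr = rng.choice([0.01, 0.05, 0.2])
    return xs, lr
def apply_edit(w, xs, lr):
    # One gradient step on the synthetic batch for (w*x - TARGET*x)^2.
    grad = sum(2 * (w - TARGET) * x * x for x in xs) / len(xs)
    return w - lr * grad
def self_train(w=0.0, steps=200, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        xs, lr = propose_self_edit(rng)
        candidate = apply_edit(w, xs, lr)
        if heldout_loss(candidate) < heldout_loss(w):
            w = candidate  # keep edits that help, discard the rest
    return w
print(round(self_train(), 2))
```

NOTE
The keep-or-discard step is the part that echoes the reported setup: the model's own generated material is judged purely by whether it makes the model better.
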
00:05:10.160 --> 00:05:12.100
So it's like... It's essentially generating its

00:05:12.100 --> 00:05:14.399
own self-edits or revision notes, as you could

00:05:14.399 --> 00:05:17.740
think of it. Okay, that's kind of weirdly human

00:05:17.740 --> 00:05:20.079
-like, like writing your own study guide before

00:05:20.079 --> 00:05:23.060
a test. Exactly. And the results cited in the

00:05:23.060 --> 00:05:25.720
source material are pretty wild. They report

00:05:25.720 --> 00:05:28.839
it's already outperforming GPT-4.1 on some

00:05:28.839 --> 00:05:31.759
specific tasks. Already. Wow. And this is key.

00:05:31.879 --> 00:05:34.259
It learned more effectively from the data it

00:05:34.259 --> 00:05:36.519
generated itself, its own notes, than it did

00:05:36.519 --> 00:05:39.360
from material generated by GPT-4.1 for it to

00:05:39.360 --> 00:05:42.550
learn from. Whoa. Hold on. So the AI's own teaching

00:05:42.550 --> 00:05:45.670
method, its own notes, are better for it than

00:05:45.670 --> 00:05:48.209
being taught by another top AI model. That's

00:05:48.209 --> 00:05:50.189
what the results suggest. Yeah. And it raises

00:05:50.189 --> 00:05:51.970
this really interesting question. What does it

00:05:51.970 --> 00:05:54.269
mean if an AI system can figure out the best

00:05:54.269 --> 00:05:57.029
way for it to learn, tailoring its process to

00:05:57.029 --> 00:05:59.230
its own internal architecture or style? Like

00:05:59.230 --> 00:06:01.209
finding its own learning style. Exactly. Much

00:06:01.209 --> 00:06:02.850
like how some humans find their own revision

00:06:02.850 --> 00:06:04.889
notes more effective than just rereading a textbook.

00:06:05.089 --> 00:06:07.290
Yeah. Yeah. And they mentioned a dramatic improvement

00:06:07.290 --> 00:06:09.149
in something specific. Right. Like puzzle solving.

00:06:09.230 --> 00:06:11.000
I remember reading that. They did. The source

00:06:11.000 --> 00:06:13.300
highlighted a jump in certain puzzle-solving

00:06:13.300 --> 00:06:16.420
tasks. Using standard methods, the AI had a 0%

00:06:16.420 --> 00:06:20.079
success rate. Zero. Zero percent. Okay. But

00:06:20.079 --> 00:06:22.040
after going through the SEAL self-training process,

00:06:22.420 --> 00:06:28.079
it jumped to 72.5% success. 72.5. From zero.

00:06:28.180 --> 00:06:31.600
From zero. That's a huge, huge leap. A huge leap.

00:06:31.639 --> 00:06:33.779
Totally. And if we connect this to the bigger

00:06:33.779 --> 00:06:37.009
picture. SEAL, and similar frameworks they

00:06:37.009 --> 00:06:38.709
mentioned, like Sakana's work on the Darwin Gödel

00:06:38.709 --> 00:06:43.750
Machine, DGM. What they're exploring is LLMs that

00:06:43.750 --> 00:06:46.709
can potentially evolve continuously without needing

00:06:46.709 --> 00:06:49.569
to be retrained from scratch by humans every

00:06:49.569 --> 00:06:52.589
time. Continuously evolving. Yeah. This mechanism.

00:06:53.079 --> 00:06:54.860
This idea of self-improvement and continuous

00:06:54.860 --> 00:06:57.800
adaptation. It's central to a lot of the theoretical

00:06:57.800 --> 00:07:00.779
discussions around things like AGI or artificial

00:07:00.779 --> 00:07:03.560
general intelligence. And even speculation about

00:07:03.560 --> 00:07:05.480
super intelligence. It's not just about AI getting

00:07:05.480 --> 00:07:08.199
slightly smarter. It's about changing a fundamental

00:07:08.199 --> 00:07:10.560
mechanism by which it gets smarter, potentially

00:07:10.560 --> 00:07:13.259
in real time. OK, so circling back to you listening,

00:07:13.439 --> 00:07:16.430
what does all this mean? It means the AI landscape

00:07:16.430 --> 00:07:18.470
we're looking at isn't just about incremental

00:07:18.470 --> 00:07:21.949
updates to the tools we use. It's about AI potentially

00:07:21.949 --> 00:07:25.050
changing in really fundamental ways, involving

00:07:25.050 --> 00:07:27.370
its own capabilities and learning processes,

00:07:27.709 --> 00:07:29.949
maybe right before our eyes. It's a totally different

00:07:29.949 --> 00:07:32.069
dynamic. It's a different ballgame, really. Okay.

00:07:32.170 --> 00:07:35.370
So we've looked at a concrete application in

00:07:35.370 --> 00:07:38.569
healthcare, a foundational shift in how AI learns.

00:07:39.029 --> 00:07:41.029
Now let's kind of sweep up some of the other

00:07:41.029 --> 00:07:43.470
really interesting... nuggets from the material

00:07:43.470 --> 00:07:45.870
we reviewed. It's a mix of practical stuff, some

00:07:45.870 --> 00:07:49.509
industry gossip, and definitely some quirky things.

00:07:49.829 --> 00:07:52.269
Sounds good. Starting on the educational side,

00:07:52.470 --> 00:07:55.250
something we noted was Anthropic releasing a

00:07:55.250 --> 00:07:58.750
free 12-lesson course they call AI Fluency.

00:07:58.949 --> 00:08:00.970
AI Fluency, okay. The source points out it goes

00:08:00.970 --> 00:08:02.990
beyond just prompting tips, which is what a lot

00:08:02.990 --> 00:08:05.329
of courses focus on. This one actually involves

00:08:05.329 --> 00:08:08.430
planning and executing a real AI project yourself.

00:08:08.850 --> 00:08:11.050
Oh, hands-on. Yeah, hands-on, and you get a

00:08:11.050 --> 00:08:13.319
certificate. Could be a useful resource for anyone

00:08:13.319 --> 00:08:15.240
wanting to dive deeper themselves. Oh, that sounds

00:08:15.240 --> 00:08:16.620
cool. And then there's the stuff that just makes

00:08:16.620 --> 00:08:19.839
you laugh, like VZero's CAPTCHA contest. Right.

00:08:19.920 --> 00:08:21.819
VZero is running a contest for the most ridiculous

00:08:21.819 --> 00:08:24.360
CAPTCHA. Ridiculous how? The example the source

00:08:24.360 --> 00:08:28.019
gave was literally, are you human or are you

00:08:28.019 --> 00:08:30.660
dancer? Chuckles. Like, is that from a song?

00:08:30.819 --> 00:08:33.820
It is, yeah. The Killers, I think. But as a CAPTCHA,

00:08:33.919 --> 00:08:35.960
it's pretty out there. Totally unhinged. The

00:08:35.960 --> 00:08:39.299
winner gets $1,000 in credits, which feels about

00:08:39.299 --> 00:08:41.659
right for inducing that level of existential

00:08:41.659 --> 00:08:44.539
confusion. Chuckles, definitely. On the technical

00:08:44.539 --> 00:08:47.940
side, we saw a mention of a builder combining

00:08:47.940 --> 00:08:50.639
different cutting-edge models, Claude Code, plus

00:08:50.639 --> 00:08:54.299
OpenAI's o3 model, plus Gemini 2.5. Okay, stacking

00:08:54.299 --> 00:08:56.340
models. Yeah, all working together through something

00:08:56.340 --> 00:08:59.860
called MCP. Think of it like getting different

00:08:59.860 --> 00:09:02.460
AI superpowers to team up. Like the Avengers.

00:09:02.840 --> 00:09:04.740
The source described it exactly like the Avengers

00:09:04.740 --> 00:09:09.110
of AI models. Okay, MCP. What's MCP stand for?

00:09:09.190 --> 00:09:11.909
Like, is that a specific framework or? You know,

00:09:11.929 --> 00:09:13.769
the source didn't spell out the acronym, unfortunately,

00:09:13.970 --> 00:09:15.809
but the description implied it's a method or

00:09:15.809 --> 00:09:17.870
platform for getting diverse models to collaborate

00:09:17.870 --> 00:09:20.549
on tasks, sharing strengths. Got it. It just

00:09:20.549 --> 00:09:22.529
highlights this trend of people trying to orchestrate

00:09:22.529 --> 00:09:25.570
multiple powerful AIs rather than relying on

00:09:25.570 --> 00:09:29.309
just one. OK, got it. Like a meta layer for AIs.

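NOTE
The sources do not expand the MCP acronym, so what follows is only a generic sketch of the orchestration pattern being described: several model backends behind one small shared interface, with a router delegating each subtask to the backend whose strength fits. The client classes are stubs; no real model APIs are called, and the roster names are just labels borrowed from the discussion.

NOTE
```python
# Generic multi-model orchestration sketch. Every client is a stub;
# a real setup would wrap actual model APIs behind the same interface.
class StubClient:
    def __init__(self, name):
        self.name = name
    def run(self, task):
        # A real client would call out to the model; this just labels.
        return "[" + self.name + "] handled: " + task
ROSTER = {
    "code": StubClient("claude-code"),
    "reasoning": StubClient("o3"),
    "long-context": StubClient("gemini-2.5"),
}
def route(task, kind):
    # Delegate the subtask to the backend registered for that strength.
    return ROSTER[kind].run(task)
print(route("refactor the parser", "code"))
```

NOTE
The point is the shape, not the names: one orchestration layer, many specialized models, each doing the piece it is best at.
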
00:09:30.129 --> 00:09:33.289
OK, now here's where things get a little less

00:09:33.289 --> 00:09:36.149
fun, maybe. Based on some analysis we read, there's

00:09:36.149 --> 00:09:38.750
this idea gaining traction that ChatGPT and

00:09:38.750 --> 00:09:41.149
other early generative models might have polluted

00:09:41.149 --> 00:09:44.070
the Internet so badly that it's actually hindering

00:09:44.070 --> 00:09:46.970
future AI development. Yeah, this is a significant

00:09:46.970 --> 00:09:49.330
potential downside highlighted in the material.

00:09:49.549 --> 00:09:51.830
The concept is sometimes referred to as model

00:09:51.830 --> 00:09:54.389
collapse. Model collapse. Right. I've heard that

00:09:54.389 --> 00:09:57.720
term. The theory is that as AI models are increasingly

00:09:57.720 --> 00:10:00.779
trained on data from the Internet and the Internet

00:10:00.779 --> 00:10:02.980
is increasingly filled with text and images generated

00:10:02.980 --> 00:10:06.639
by earlier AI models, we start training new models

00:10:06.639 --> 00:10:09.980
on the output of old models rather than on truly

00:10:09.980 --> 00:10:13.820
human-generated data. It's like feeding photocopies

00:10:13.820 --> 00:10:15.659
to a copier over and over and eventually the

00:10:15.659 --> 00:10:18.039
copies just degrade. Exactly. That's a great

00:10:18.039 --> 00:10:20.659
analogy. You lose the richness, the nuance, the

00:10:20.659 --> 00:10:24.059
sheer originality of genuinely human expression

00:10:24.059 --> 00:10:26.960
and data. The sources noted that this AI-generated

00:10:26.960 --> 00:10:30.259
spam has potentially tainted a lot of modern

00:10:30.259 --> 00:10:34.000
Internet data. What's striking is they said explicitly

00:10:34.000 --> 00:10:37.299
that data generated before 2022 is now considered

00:10:37.299 --> 00:10:40.820
gold. Before 2022 is gold. Wow. Yeah, because

00:10:40.820 --> 00:10:43.539
it's much less likely to be this kind of AI-generated

00:10:43.539 --> 00:10:46.539
synthetic stuff. It's a consequence that, you

00:10:46.539 --> 00:10:48.759
know, maybe wasn't fully anticipated when these

00:10:48.759 --> 00:10:51.429
models first came out. That is a wild, unintended

00:10:51.429 --> 00:10:54.169
consequence. Like, the Internet just got less

00:10:54.169 --> 00:10:56.350
useful for training the very things it helped

00:10:56.350 --> 00:10:58.730
create. Potentially, yeah. And it raises big

00:10:58.730 --> 00:11:00.529
questions about where future training data will

00:11:00.529 --> 00:11:02.990
come from, right? Yes. Okay, speaking of models

00:11:02.990 --> 00:11:05.289
interacting, we saw some mentions of a little

00:11:05.289 --> 00:11:08.830
AI beef, didn't we? Oh, yes. Claude 4 apparently

00:11:08.830 --> 00:11:11.490
co -authored what one source described as a spicy

00:11:11.490 --> 00:11:14.029
takedown of a recent research paper put out by

00:11:14.029 --> 00:11:16.649
Apple. A takedown? Yeah, saying it, and I quote,

00:11:16.850 --> 00:11:20.320
kind of sucks. Chuckles. Seriously. AI models

00:11:20.320 --> 00:11:22.320
reviewing and criticizing each other's academic

00:11:22.320 --> 00:11:24.500
work? Now that's a whole new level of meta. It

00:11:24.500 --> 00:11:26.820
really is. It shows this emerging dynamic within

00:11:26.820 --> 00:11:30.419
the AI research community, where the models themselves

00:11:30.419 --> 00:11:32.919
are starting to participate in the academic discourse,

00:11:33.159 --> 00:11:36.259
even if prompted by humans, of course. It's fascinating

00:11:36.259 --> 00:11:38.399
to watch that unfold. Totally fascinating. And

00:11:38.399 --> 00:11:40.879
just briefly touching on the industry and business

00:11:40.879 --> 00:11:44.419
side from the sources, we read that Google apparently

00:11:44.419 --> 00:11:47.360
had to invent this role, a chief AI architect,

00:11:47.679 --> 00:11:50.200
because despite having incredible research models,

00:11:50.440 --> 00:11:53.039
they still seem to struggle turning that cutting

00:11:53.039 --> 00:11:56.159
edge stuff into usable, consumer ready products.

00:11:56.519 --> 00:11:58.840
Yeah, it's just a disconnect between their theoretical

00:11:58.919 --> 00:12:01.340
breakthroughs and their practical application

00:12:01.340 --> 00:12:03.820
pipeline, which is, you know, a common challenge.

00:12:04.000 --> 00:12:06.500
True. Meanwhile, big money is still flowing into

00:12:06.500 --> 00:12:09.019
the space. One venture studio was noted for raising

00:12:09.019 --> 00:12:12.539
$190 million, specifically targeting AI applications

00:12:12.539 --> 00:12:14.820
in healthcare and finance. Yeah, those remain

00:12:14.820 --> 00:12:17.200
huge areas for AI development and investment.

00:12:17.850 --> 00:12:19.850
Makes sense. And there were even mentions kind

00:12:19.850 --> 00:12:23.269
of whispers of potential cracks in the OpenAI

00:12:23.269 --> 00:12:25.669
Microsoft relationship. Yeah. Saw that, too.

00:12:25.730 --> 00:12:27.950
Always interesting dynamics between the big players.

00:12:28.090 --> 00:12:32.269
While OpenAI also reportedly bagged a $200 million

00:12:32.269 --> 00:12:35.509
deal with the U.S. military for war gaming.

00:12:35.750 --> 00:12:37.830
Right. So that just shows the diverse and sometimes

00:12:37.830 --> 00:12:40.850
conflicting arenas where AI is having an impact

00:12:40.850 --> 00:12:44.129
from defense to health care to corporate strategy.

00:12:44.470 --> 00:12:47.529
It's everywhere. It really is. And just quickly

00:12:47.529 --> 00:12:49.909
to wrap up this segment, the sources listed

00:12:49.909 --> 00:12:51.769
a whole bunch of these new, often specialized

00:12:51.769 --> 00:12:54.029
AI tools coming out, giving you a sense of the

00:12:54.029 --> 00:12:56.070
breadth of what's being built. Right, like Seedance

00:12:56.070 --> 00:12:58.830
Pro for generating videos from text. Instance

00:12:58.830 --> 00:13:01.190
for turning ideas into apps or games without

00:13:01.190 --> 00:13:03.570
coding. Fluidworks for personalized guidance.

00:13:03.970 --> 00:13:06.330
Wondrish for building no -code webpages easily.

00:13:06.649 --> 00:13:09.009
Scribe for automatically creating how -to guides

00:13:09.009 --> 00:13:11.840
from screen recordings. That sounds useful. Yeah,

00:13:11.879 --> 00:13:14.139
definitely. And even updates to existing tools,

00:13:14.279 --> 00:13:16.860
like ChatGPT Canvas now letting you export to

00:13:16.860 --> 00:13:20.019
PDF, or Google's video model Veo already having

00:13:20.019 --> 00:13:22.179
a short film premiere at Tribeca Festival. Yeah,

00:13:22.220 --> 00:13:25.019
stuff is just happening everywhere at all levels.

00:13:25.139 --> 00:13:27.080
It's hard to keep up. It really is. And just

00:13:27.080 --> 00:13:29.159
one last quick hit from the material, a little

00:13:29.159 --> 00:13:31.659
reminder that individual effort still matters.

00:13:32.279 --> 00:13:35.320
A story about a woman who won $10K in a machine

00:13:35.320 --> 00:13:38.539
learning competition in just one week. Nice.

00:13:38.759 --> 00:13:42.340
So it's not all huge companies and labs. Individuals

00:13:42.340 --> 00:13:44.639
are still making waves, too. Absolutely. Good

00:13:44.639 --> 00:13:47.419
reminder. Okay, wow. So we've covered a lot of

00:13:47.419 --> 00:13:49.779
ground in this deep dive. We started with that

00:13:49.779 --> 00:13:52.440
incredible, potentially revolutionary application

00:13:52.440 --> 00:13:55.740
of AI in healthcare with Fraggle. Yeah, the $39

00:13:55.740 --> 00:13:58.809
test. Right. Then we moved to the cutting edge of how AI

00:13:58.809 --> 00:14:02.990
learns and evolves with MIT's SEAL project. The

00:14:02.990 --> 00:14:06.009
self -teaching AI. Exactly. And then just got

00:14:06.009 --> 00:14:09.090
a rapid-fire snapshot of the incredibly diverse,

00:14:09.710 --> 00:14:12.070
sometimes weird and rapidly changing landscape

00:14:12.070 --> 00:14:14.570
with all those other points from model collapse

00:14:14.570 --> 00:14:17.809
and AI beef to new tools and industry shifts.

00:14:17.990 --> 00:14:19.690
Hopefully going through these sources together

00:14:19.690 --> 00:14:21.509
helps you see some of the underlying patterns,

00:14:21.710 --> 00:14:24.149
the key developments, and ultimately the potential

00:14:24.149 --> 00:14:26.450
impact on you. It's about getting that knowledge

00:14:26.450 --> 00:14:28.850
quickly, but still getting a real sense of the

00:14:28.850 --> 00:14:30.720
depth and breadth of what's happening. Right.

00:14:30.820 --> 00:14:33.700
This is just a snapshot, of course, but hopefully

00:14:33.700 --> 00:14:37.559
it gives you a much clearer picture of some of

00:14:37.559 --> 00:14:39.480
the most interesting and significant things happening

00:14:39.480 --> 00:14:42.440
in AI right now. And if we leave you with one

00:14:42.440 --> 00:14:44.860
final thought, based on everything we've discussed

00:14:44.860 --> 00:14:47.960
today, if AI systems are becoming increasingly

00:14:47.960 --> 00:14:51.340
capable of teaching themselves, adapting and

00:14:51.340 --> 00:14:53.980
evolving continuously without constant human

00:14:53.980 --> 00:14:56.590
intervention. What does that fundamentally change

00:14:56.590 --> 00:14:58.850
about our timeline for technological progress

00:14:58.850 --> 00:15:01.129
or even our definition of intelligence itself

00:15:01.129 --> 00:15:03.389
in the years to come? Yeah. Something to really

00:15:03.389 --> 00:15:05.990
mull over. Absolutely. Keep exploring. Keep learning.
