WEBVTT

00:00:00.000 --> 00:00:03.319
Okay, imagine this. An AI that designs completely

00:00:03.319 --> 00:00:07.160
new cancer drugs. Not just improving old ones,

00:00:07.259 --> 00:00:10.380
but creating molecules out of, well, nothing.

00:00:10.580 --> 00:00:15.419
No prior data, no guides. Just pure generative

00:00:15.419 --> 00:00:17.879
chemistry. How is that even possible? And, you

00:00:17.879 --> 00:00:19.969
know, what does it mean for... where medicine

00:00:19.969 --> 00:00:22.690
is headed. Welcome to the Deep Dive. Today we're

00:00:22.690 --> 00:00:24.690
going to unpack a really compelling collection

00:00:24.690 --> 00:00:28.030
of insights. It's all from a recent newsletter,

00:00:28.149 --> 00:00:30.449
really charting the sharp end of AI development.

00:00:30.910 --> 00:00:32.890
There's a lot to get into. Oh, absolutely. We're

00:00:32.890 --> 00:00:35.210
talking about AI skeptics, longtime skeptics

00:00:35.210 --> 00:00:37.829
suddenly cutting their AGI timelines in half.

00:00:37.950 --> 00:00:40.009
We've got these incredible medical breakthroughs

00:00:40.009 --> 00:00:42.090
that sound like pure sci-fi. And yeah, even

00:00:42.090 --> 00:00:44.990
some pretty wild AI social media arguments. Right.

00:00:45.049 --> 00:00:47.170
So our mission today is to distill all these

00:00:47.170 --> 00:00:48.869
different threads, give you the key takeaways.

00:00:48.969 --> 00:00:51.350
Think of it as maybe a shortcut to understanding

00:00:51.350 --> 00:00:53.869
the big shifts happening in AI right now. We'll

00:00:53.869 --> 00:00:55.250
try and connect the dots for you. Yeah, let's

00:00:55.250 --> 00:00:57.170
unpack it. So we should probably start with what

00:00:57.170 --> 00:00:59.229
feels like a really significant shift in thinking.

00:00:59.409 --> 00:01:01.810
François Chollet, you know, the creator of Keras,

00:01:01.890 --> 00:01:04.709
that super popular AI library. He's been a pretty

00:01:04.709 --> 00:01:08.370
vocal AGI skeptic for years. Very measured. Exactly.

00:01:08.549 --> 00:01:12.200
And he just halved his AGI timeline. Slashed

00:01:12.200 --> 00:01:15.079
it from 10 years down to just five. That's

00:01:15.079 --> 00:01:17.420
a major change for him. It's huge. And importantly,

00:01:17.540 --> 00:01:19.719
he's saying it's not just because models are

00:01:19.719 --> 00:01:22.819
getting bigger or faster. It's that AI is starting

00:01:22.819 --> 00:01:24.760
to get smarter in a way that actually matters.

00:01:24.840 --> 00:01:27.459
Yeah. Moving beyond just being like he called

00:01:27.459 --> 00:01:30.140
them glorified parrots. He's seeing something

00:01:30.140 --> 00:01:31.620
else emerge. And this is where it gets really

00:01:31.620 --> 00:01:34.579
interesting, I think. Chollet's pointing to signs

00:01:34.579 --> 00:01:37.079
of what he calls fluid intelligence in these

00:01:37.079 --> 00:01:39.680
systems. Okay, fluid intelligence. Yeah, so it's

00:01:39.680 --> 00:01:41.680
not just pattern matching or remembering stuff.

00:01:41.780 --> 00:01:44.879
It's the AI's ability to adapt to totally new,

00:01:44.879 --> 00:01:48.840
unexpected problems on the fly without needing

00:01:48.840 --> 00:01:51.959
examples or specific training for that situation.

00:01:52.260 --> 00:01:54.719
It's more like spontaneous problem solving. Got

00:01:54.719 --> 00:01:58.000
it. Like imagine you drop an AI into a simple

00:01:58.000 --> 00:02:00.439
video game it's never seen. Zero instructions,

00:02:00.579 --> 00:02:03.420
no labels, just the game. It has to explore,

00:02:03.659 --> 00:02:06.500
poke around, figure out the rules just by trying

00:02:06.500 --> 00:02:09.240
things, and then learn how to win. That's basically

00:02:09.240 --> 00:02:13.599
what his new benchmark, ARC-AGI-3, is testing.
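To make that concrete, here's a tiny toy sketch (my own illustration, not ARC-AGI-3 itself): an agent dropped into an unknown two-action "game" with no instructions, figuring out the winning move purely by trial and error. The game rule and action names are invented for the example.

```python
# Toy sketch (illustrative only, not ARC-AGI-3): an agent learns an
# unknown game with zero instructions, purely by trying things.
import random
random.seed(0)

def hidden_game(action):                 # the agent never sees this rule
    return 1 if action == "right" else 0

wins = {"left": 0, "right": 0}
tries = {"left": 0, "right": 0}
for _ in range(50):                      # explore: just poke around
    a = random.choice(["left", "right"])
    tries[a] += 1
    wins[a] += hidden_game(a)

# exploit: pick whichever action won most often per attempt
best = max(wins, key=lambda a: wins[a] / max(tries[a], 1))
print(best)
```

The benchmark's point, as described, is how *few* such trials a system needs, which is where humans still win easily.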

00:02:14.020 --> 00:02:16.740
It's all about how efficiently it acquires new

00:02:16.740 --> 00:02:19.199
skills, that raw learning speed. And humans,

00:02:19.340 --> 00:02:21.539
well, we can usually figure out those kinds of

00:02:21.539 --> 00:02:23.400
simple novel games in less than a minute, right?

00:02:23.460 --> 00:02:26.360
We just kind of get it. Current AI models, still

00:02:26.360 --> 00:02:28.039
really struggling there. Yeah, and they're not

00:02:28.039 --> 00:02:30.189
there yet. Chollet's standard is pretty clear.

00:02:30.370 --> 00:02:32.210
As long as we can come up with problems humans

00:02:32.210 --> 00:02:35.930
can solve, but AI can't, we don't have AGI. It's

00:02:35.930 --> 00:02:38.090
a high bar, sure, but it points to that crucial

00:02:38.090 --> 00:02:40.750
gap. But here's an idea that Dwarkesh, another

00:02:40.750 --> 00:02:43.909
AI commentator, really focused on as a potential

00:02:43.909 --> 00:02:47.490
game changer. Collective intelligence. He brought

00:02:47.490 --> 00:02:49.490
up this idea that instead of every AI starting

00:02:49.490 --> 00:02:52.430
fresh, like a, quote, perpetual intern on day

00:02:52.430 --> 00:02:55.909
one, any skill learned by one AI could instantly

00:02:55.909 --> 00:02:58.629
become available to all the other AIs, like instantly

00:02:58.629 --> 00:03:00.550
shared. OK, so connecting that to the bigger

00:03:00.550 --> 00:03:02.810
picture, that implies AI could compound knowledge

00:03:02.810 --> 00:03:05.770
almost exponentially. Learning not as individuals,

00:03:05.810 --> 00:03:09.750
but as one giant connected mind, a hive mind.
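A minimal sketch of that shared-skill idea (purely illustrative; the Agent class and skill names here are invented for the example):

```python
# Minimal sketch of "collective learning": every agent reads and writes
# one common skill store, so a skill learned once is instantly available
# to all -- no agent starts over as a "perpetual intern".
shared_skills = {}                 # the "hive mind": one store for everyone

class Agent:
    def learn(self, name, fn):
        shared_skills[name] = fn   # publishing a skill shares it instantly

    def use(self, name, *args):
        return shared_skills[name](*args)

a, b = Agent(), Agent()
a.learn("double", lambda x: 2 * x)   # only agent a ever learns this skill
print(b.use("double", 21))           # 42 -- agent b gets it for free
```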

00:03:10.009 --> 00:03:13.590
Pretty much. Dwarkesh put it really simply. That

00:03:13.590 --> 00:03:15.860
would basically be the singularity. It suggests

00:03:15.860 --> 00:03:18.520
this incredibly fast acceleration where knowledge

00:03:18.520 --> 00:03:21.460
just builds and builds across the whole network

00:03:21.460 --> 00:03:25.219
in real time. So what is it about this collective

00:03:25.219 --> 00:03:28.620
learning idea that makes it so potentially transformative

00:03:28.620 --> 00:03:32.560
for AI's progress? Well, it means AI could share

00:03:32.560 --> 00:03:35.800
insights instantly, massively accelerating knowledge

00:03:35.800 --> 00:03:38.439
gains across all systems. Okay, moving on a bit,

00:03:38.539 --> 00:03:40.539
we've seen some other interesting updates and

00:03:40.539 --> 00:03:43.139
notable moments recently that kind of paint a

00:03:43.139 --> 00:03:44.580
picture of the current scene. There was this

00:03:44.580 --> 00:03:47.400
clever ChatGPT prompt that apparently went viral.

00:03:47.520 --> 00:03:49.759
Yeah, saw that. It's basically a smart way to

00:03:49.759 --> 00:03:51.860
structure your request to get the AI to think

00:03:51.860 --> 00:03:53.939
more deeply, giving you better, more relevant

00:03:53.939 --> 00:03:56.360
answers. It shows users are getting savvier,

00:03:56.460 --> 00:03:59.479
you know? Definitely. We're also seeing new

00:03:59.479 --> 00:04:02.560
GPT-5 model options pop up, names like auto, fast,

00:04:02.719 --> 00:04:05.099
and thinking. Each seems tuned for different

00:04:05.099 --> 00:04:07.849
kinds of tasks or speeds. Right, and GPT-4o

00:04:07.849 --> 00:04:11.069
is back for everyone, but GPT-4.5, that slightly

00:04:11.069 --> 00:04:13.590
more advanced one, is now only for pro users.

00:04:14.389 --> 00:04:16.649
The landscape keeps shifting, doesn't it? Companies

00:04:16.649 --> 00:04:18.550
trying to figure out how to offer different tiers.

00:04:18.910 --> 00:04:21.990
And with these new models come rate limits. You

00:04:21.990 --> 00:04:25.449
get like 3,000 GPT-5 thinking queries a week,

00:04:25.610 --> 00:04:28.170
which sounds like a lot, but maybe not for heavy

00:04:28.170 --> 00:04:31.250
users. But then you look at Claude Sonnet 4, its

00:04:31.250 --> 00:04:34.649
context window is now massive. One million tokens.
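As a rough back-of-envelope for what a million tokens means (assuming the common rule of thumb of about four characters per token, which varies by tokenizer and language):

```python
# Rough sense of scale for a 1,000,000-token context window.
# Assumption: ~4 characters per token (a rule of thumb, not exact).
CHARS_PER_TOKEN = 4
novel_chars = 500_000                      # a ~90,000-word novel
novel_tokens = novel_chars // CHARS_PER_TOKEN
print(novel_tokens)                        # 125000
print(novel_tokens <= 1_000_000)           # True: a whole novel fits easily
```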

00:04:34.910 --> 00:04:37.389
And just to clarify for listeners, a token is

00:04:37.389 --> 00:04:39.610
basically like a word or even just part of a

00:04:39.610 --> 00:04:42.569
word that the AI processes. So a million tokens

00:04:42.569 --> 00:04:45.329
means Claude can read and understand incredibly

00:04:45.329 --> 00:04:48.670
long texts, think a whole novel or huge code

00:04:48.670 --> 00:04:51.730
bases all at once. Which is amazing. But it comes

00:04:51.730 --> 00:04:53.649
with higher fees. And right now it's only for

00:04:53.649 --> 00:04:55.269
their top tier users, the ones spending over

00:04:55.269 --> 00:04:57.889
$400. So that power isn't cheap or universally

00:04:57.889 --> 00:04:59.889
available just yet. You know, beyond the tech

00:04:59.889 --> 00:05:01.910
specs, what's also kind of fascinating is the,

00:05:01.949 --> 00:05:04.490
well, the social drama between AIs. We saw the

00:05:04.490 --> 00:05:07.610
Grok bot on X basically calling Elon Musk a hypocrite.

00:05:07.689 --> 00:05:10.769
Oh, yeah. Didn't see that one. And then ChatGPT's

00:05:10.769 --> 00:05:13.269
own social account jumped in and started getting

00:05:13.269 --> 00:05:16.009
kind of snarky with Grok. It was a public back

00:05:16.009 --> 00:05:18.490
and forth. It's weirdly human, almost like watching

00:05:18.490 --> 00:05:21.800
digital rivals emerge. Ah, wow. It's definitely

00:05:21.800 --> 00:05:23.379
a reminder that these things can be unpredictable,

00:05:23.620 --> 00:05:26.740
right? And speaking of unpredictable and warnings,

00:05:27.240 --> 00:05:31.100
there was that really serious incident. A man

00:05:31.100 --> 00:05:33.680
ended up hospitalized after following diet advice

00:05:33.680 --> 00:05:36.339
from ChatGPT. Yeah, that's concerning. It really

00:05:36.339 --> 00:05:39.139
hammers home the point. You absolutely must verify

00:05:39.139 --> 00:05:41.959
AI outputs, especially, you know, when it involves

00:05:41.959 --> 00:05:43.879
your health or finances or any big decision.

00:05:44.040 --> 00:05:46.860
It's a powerful tool, but it's not an expert

00:05:46.860 --> 00:05:49.120
you can blindly trust. Yeah, I mean, I still

00:05:49.120 --> 00:05:51.139
find myself having to consciously double-check

00:05:51.139 --> 00:05:53.100
things sometimes. It's just so easy to trust

00:05:53.100 --> 00:05:54.920
that confident-sounding answer, especially when

00:05:54.920 --> 00:05:57.180
you're in a hurry. But yeah, that story is a

00:05:57.180 --> 00:05:59.860
stark reminder. Vigilance is key. And on the

00:05:59.860 --> 00:06:03.139
money side of things, the AI chip startup Rivos,

00:06:03.240 --> 00:06:06.139
already valued over $2 billion, is apparently

00:06:06.139 --> 00:06:08.639
looking for a big chunk of new funding, like

00:06:08.639 --> 00:06:11.120
$400-500 million. To compete with NVIDIA, right?

00:06:11.629 --> 00:06:13.529
Exactly. They've already got big names like Intel

00:06:13.529 --> 00:06:16.149
Capital and MediaTek backing them. It shows the

00:06:16.149 --> 00:06:18.230
chip race is seriously heating up. Everyone wants

00:06:18.230 --> 00:06:20.470
a piece of that action because compute power

00:06:20.470 --> 00:06:23.569
is, well, it's the foundation for all this AI

00:06:23.569 --> 00:06:25.610
progress. So thinking about that diet advice

00:06:25.610 --> 00:06:29.009
incident, why is that specific case so crucial

00:06:29.009 --> 00:06:31.029
for us to remember when we interact with AI?

00:06:31.310 --> 00:06:33.970
It just really highlights the need for human

00:06:33.970 --> 00:06:36.930
oversight and checking the facts, especially

00:06:36.930 --> 00:06:39.310
for anything important. Okay, let's touch briefly

00:06:39.310 --> 00:06:41.649
on the sheer amount of money flowing into AI

00:06:41.649 --> 00:06:44.750
right now. The AI grant report for early August

00:06:44.750 --> 00:06:48.430
showed some absolutely huge funding rounds. Investment

00:06:48.430 --> 00:06:50.990
is just pouring in. Seriously, pouring in. OpenAI,

00:06:51.209 --> 00:06:54.529
for instance, secured a massive $8.3 billion

00:06:54.529 --> 00:06:57.310
in funding. That's staggering. It's not just

00:06:57.310 --> 00:06:59.670
investment. It's a huge signal of belief in their

00:06:59.670 --> 00:07:02.110
long-term AGI goals. Gives them a lot of runway

00:07:02.110 --> 00:07:04.089
to pursue those goals without worrying too much

00:07:04.089 --> 00:07:06.310
about immediate profits. For sure. And then their

00:07:06.310 --> 00:07:11.120
open source rival, Mistral. Which is also significant.

00:07:11.279 --> 00:07:13.560
It shows there's still huge appetite and capital

00:07:13.560 --> 00:07:15.779
for different approaches, even with a dominant

00:07:15.779 --> 00:07:18.639
player like OpenAI out there. It suggests a healthy,

00:07:18.720 --> 00:07:21.620
competitive ecosystem. Yeah, these are enormous

00:07:21.620 --> 00:07:25.079
numbers. They really show deep, continuing confidence

00:07:25.079 --> 00:07:28.079
in the whole AI field. The financial world is

00:07:28.079 --> 00:07:30.959
betting big that AI is going to reshape basically

00:07:30.959 --> 00:07:35.120
everything. This kind of cash fuels R&D incredibly

00:07:35.120 --> 00:07:37.839
fast. What does this level of investment really

00:07:37.839 --> 00:07:40.800
signal about where AI is headed? It signals a

00:07:40.800 --> 00:07:43.879
very strong belief in AI's continued growth with

00:07:43.879 --> 00:07:46.139
the major players pushing hard. All right, let's

00:07:46.139 --> 00:07:48.759
do a quick round of other notable AI bits and

00:07:48.759 --> 00:07:50.839
pieces that popped up. Google's Gemini added

00:07:50.839 --> 00:07:53.319
some nice features. Temporary chats so they don't

00:07:53.319 --> 00:07:55.980
save to your history. Good for privacy. And more

00:07:55.980 --> 00:07:58.000
personalization options. Yeah, letting it adapt

00:07:58.000 --> 00:08:00.060
more to you over time, user experience stuff.

00:08:00.620 --> 00:08:03.319
Always good. And there was a change at Elon Musk's

00:08:03.319 --> 00:08:05.839
xAI. One of the co-founders, Igor Babushkin,

00:08:05.920 --> 00:08:08.680
left the company. Oh, interesting. Key people

00:08:08.680 --> 00:08:11.139
moving around in these AI startups often mean,

00:08:11.240 --> 00:08:14.220
well, something's shifting internally. It's such

00:08:14.220 --> 00:08:16.759
a competitive space for talent. OpenAI also put

00:08:16.759 --> 00:08:19.439
out a clarification. GPT-5 thinking's context

00:08:19.439 --> 00:08:22.379
window isn't 32,000 tokens like some thought.

00:08:22.660 --> 00:08:26.399
It's actually 196,000 tokens. Oh, that's way

00:08:26.399 --> 00:08:29.009
bigger. Like more than six times bigger. Yeah,

00:08:29.069 --> 00:08:31.589
a huge difference. And again, that context window

00:08:31.589 --> 00:08:34.230
is how much info the AI can hold in mind for

00:08:34.230 --> 00:08:38.250
one conversation or task. So 196K means it can

00:08:38.250 --> 00:08:41.370
handle much longer, more complex stuff, documents,

00:08:41.570 --> 00:08:44.370
deep chats, analyzing code. It's a big step up

00:08:44.370 --> 00:08:46.830
for professional use. And Skywork released something

00:08:46.830 --> 00:08:49.669
called Matrix Game 2.0. They're describing it

00:08:49.669 --> 00:08:52.309
as being like Genie 3, but for interactive video.

00:08:52.490 --> 00:08:55.429
Okay, so that suggests maybe AI generating interactive

00:08:55.429 --> 00:08:57.620
video content. Like experiences that change based

00:08:57.620 --> 00:08:59.399
on what you do. Sounds like it. Could be cool

00:08:59.399 --> 00:09:02.220
for gaming or education maybe. Creating more

00:09:02.220 --> 00:09:03.960
dynamic, personalized stuff. Definitely one to

00:09:03.960 --> 00:09:06.580
watch. And finally, Google is putting a massive

00:09:06.580 --> 00:09:09.740
$9 billion into expanding its AI infrastructure

00:09:09.740 --> 00:09:12.379
down in Oklahoma. $9 billion. Yeah. That's a

00:09:12.379 --> 00:09:14.600
huge investment in just the physical hardware

00:09:14.600 --> 00:09:17.379
needed for AI. Shows their long-term commitment,

00:09:17.500 --> 00:09:19.700
but also just the immense scale and, frankly,

00:09:19.759 --> 00:09:22.580
energy required for these advanced models. So

00:09:22.580 --> 00:09:25.309
back to that context window point. What's the

00:09:25.309 --> 00:09:27.769
real implication of having a much larger one

00:09:27.769 --> 00:09:30.529
for models like GPT-5 thinking? It basically

00:09:30.529 --> 00:09:34.210
means the AI can handle much longer, more complex

00:09:34.210 --> 00:09:37.070
conversations and process larger documents effectively.

00:09:37.470 --> 00:09:39.610
Okay, now for something that really stood out,

00:09:39.690 --> 00:09:43.830
genuinely groundbreaking stuff. Korean scientists

00:09:43.830 --> 00:09:47.919
have unveiled a new AI model. One that designs

00:09:47.919 --> 00:09:51.159
cancer drugs entirely from scratch. Wow. From

00:09:51.159 --> 00:09:53.480
scratch. From zero. No starting data needed.

00:09:53.720 --> 00:09:55.720
Okay. Tell me more. This is BIND, right? It's

00:09:55.720 --> 00:09:58.639
a diffusion model. Exactly. BIND. And yeah, it's

00:09:58.639 --> 00:10:00.399
a diffusion model. Just quickly, a diffusion

00:10:00.399 --> 00:10:02.700
model is kind of like an AI artist starting with

00:10:02.700 --> 00:10:05.039
random noise and gradually refining it into a

00:10:05.039 --> 00:10:07.480
coherent image. Except here, it's refining noise

00:10:07.480 --> 00:10:09.899
into potential drug molecules. Okay. But the

00:10:09.899 --> 00:10:11.759
key thing is, it's not just making drug discovery

00:10:11.759 --> 00:10:14.259
faster. It seems to be fundamentally rewriting

00:10:14.259 --> 00:10:16.620
the whole process. How so? Because most... AI

00:10:16.620 --> 00:10:18.960
drug discovery now, it starts with known drugs,

00:10:19.039 --> 00:10:21.200
right? And tries to improve them. Precisely.
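The "AI artist refining noise" picture from a moment ago can be sketched numerically (a toy only; real diffusion models learn their denoising steps, and nothing here resembles BIND's actual chemistry — the target vector stands in for a molecule encoding):

```python
# Toy numeric sketch of the diffusion idea: start from pure noise and
# take small refining steps toward a target "structure".
import random
random.seed(1)

target = [0.2, 0.8, 0.5]                     # stand-in for a valid design
x = [random.gauss(0, 1) for _ in target]     # start: pure random noise

for step in range(100):                      # gradually refine the noise
    x = [xi + 0.1 * (ti - xi) for xi, ti in zip(x, target)]

print(all(abs(xi - ti) < 0.01 for xi, ti in zip(x, target)))  # True
```

Each step shrinks the remaining "noise" by a constant factor, which is why the result converges on the target.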

00:10:21.460 --> 00:10:24.500
Most current models are like optimizers. They

00:10:24.500 --> 00:10:27.419
take existing molecules, tweak them a bit, predict

00:10:27.419 --> 00:10:30.240
if the tweak is good, test it virtually, and

00:10:30.240 --> 00:10:33.100
repeat. It's iterative, building on what we already

00:10:33.100 --> 00:10:36.559
know. But BIND just... skips that? Apparently,

00:10:36.559 --> 00:10:39.379
yeah, it generates completely new molecules. No

00:10:39.379 --> 00:10:42.559
templates, no examples needed. Just raw generative

00:10:42.559 --> 00:10:45.419
chemistry. And it designs the molecule and how

00:10:45.419 --> 00:10:48.820
it binds to its target all in one step. Whoa, okay,

00:10:48.820 --> 00:10:51.320
so it's like designing the key and the specific

00:10:51.320 --> 00:10:54.600
lock it fits, perfectly matched, at the exact same

00:10:54.600 --> 00:10:56.440
time, instead of having a key and trying to find

00:10:56.440 --> 00:10:58.970
a lock. That's a great analogy, yeah. And it gets

00:10:58.970 --> 00:11:01.029
better. It optimizes for multiple things at once,

00:11:01.090 --> 00:11:03.610
like is it effective? Is it safe? Will it dissolve

00:11:03.610 --> 00:11:05.990
properly? Stuff that's usually hard to balance.

00:11:06.169 --> 00:11:08.330
That's huge. And here's the really wild part,

00:11:08.409 --> 00:11:10.889
I think. When BIND finds a design that works,

00:11:11.049 --> 00:11:13.750
it learns from that success. It recycles the

00:11:13.750 --> 00:11:16.110
successful strategies it used into future attempts,

00:11:16.289 --> 00:11:18.309
getting better and better over time. It's got

00:11:18.309 --> 00:11:21.080
this internal improvement loop. Whoa. Just imagine

00:11:21.080 --> 00:11:24.460
scaling that, running it a billion times, generating

00:11:24.460 --> 00:11:27.460
countless possibilities. And because it doesn't

00:11:27.460 --> 00:11:30.659
need existing data, that could open doors for

00:11:30.659 --> 00:11:33.500
treating really rare diseases or mutations we've

00:11:33.500 --> 00:11:35.659
never even seen before. Places where we just

00:11:35.659 --> 00:11:37.600
don't have the data for traditional methods.
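That recycle-what-worked loop can be sketched as a simple keep-the-best-and-mutate search (my own toy, not BIND's actual mechanism; the score function and all numbers are invented for illustration):

```python
# Toy sketch of an internal improvement loop: keep the candidates that
# score best and recycle them -- plus small variations -- into the next
# round, so successful strategies compound over iterations.
import random
random.seed(2)

def score(x):                      # stand-in for "does this design work?"
    return -abs(x - 7.0)           # the best possible design is x = 7

pool = [random.uniform(0, 10) for _ in range(20)]
for generation in range(30):
    pool.sort(key=score, reverse=True)
    survivors = pool[:5]                        # recycle what worked...
    pool = survivors + [s + random.gauss(0, 0.3)
                        for s in survivors for _ in range(3)]  # ...into new tries

best = max(pool, key=score)
print(abs(best - 7.0) < 0.2)       # True: the loop homes in on the target
```

Because the best candidates always survive into the next round, the loop can only hold steady or improve, which is the compounding effect described above.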

00:11:38.139 --> 00:11:41.340
That's genuinely revolutionary. It feels like

00:11:41.340 --> 00:11:43.320
a completely different way of tackling drug discovery.

00:11:43.899 --> 00:11:46.820
So how does BIND's approach fundamentally change

00:11:46.820 --> 00:11:49.460
the game compared to the older AI methods? Well,

00:11:49.519 --> 00:11:51.440
it creates entirely new drugs without needing

00:11:51.440 --> 00:11:54.100
prior examples, unlike older AIs that mostly

00:11:54.100 --> 00:11:56.759
just modify existing ones. Sponsor read provided

00:11:56.759 --> 00:11:58.710
separately. Okay, let's try and pull this all

00:11:58.710 --> 00:12:00.490
together. What does this deep dive really tell

00:12:00.490 --> 00:12:03.269
us? I think it shows AI isn't just getting incrementally

00:12:03.269 --> 00:12:05.509
better anymore. We're seeing fundamental shifts,

00:12:05.710 --> 00:12:07.789
real step changes. Yeah, definitely. We're seeing

00:12:07.789 --> 00:12:10.029
this move towards actual fluid intelligence,

00:12:10.070 --> 00:12:12.929
like Chollet talked about, and this potential for

00:12:12.929 --> 00:12:15.230
exponential collective learning. That's a huge

00:12:15.230 --> 00:12:18.210
concept. It points to AI maybe learning and growing

00:12:18.210 --> 00:12:20.350
at a speed we can barely comprehend. And at the

00:12:20.350 --> 00:12:22.809
same time, we're seeing these incredibly transformative

00:12:22.809 --> 00:12:25.929
applications becoming real, like designing cancer

00:12:25.929 --> 00:12:28.450
drugs from absolute zero. That could change medicine

00:12:28.450 --> 00:12:31.129
forever. These aren't just lab experiments anymore.

00:12:31.210 --> 00:12:33.549
They're starting to happen. It's such a powerful

00:12:33.549 --> 00:12:35.990
reminder, isn't it? AI is developing incredibly

00:12:35.990 --> 00:12:39.450
fast. It brings just immense promise, but also

00:12:39.450 --> 00:12:42.509
this constant need for us to be vigilant, to

00:12:42.509 --> 00:12:44.289
understand what we're building and deploying.

00:12:44.629 --> 00:12:47.450
The excitement is real, but so is the responsibility

00:12:47.450 --> 00:12:49.570
that comes with it. So here's something to think

00:12:49.570 --> 00:12:52.370
about as we wrap up. A question for you to mull

00:12:52.370 --> 00:12:57.210
over. If AI can now learn collectively as a network

00:12:57.210 --> 00:12:59.909
and design truly novel solutions without needing

00:12:59.909 --> 00:13:03.190
any prior data, what kinds of human problems,

00:13:03.370 --> 00:13:05.370
problems we currently think are just impossible

00:13:05.370 --> 00:13:09.120
to solve, might suddenly become solvable? Yeah,

00:13:09.179 --> 00:13:11.019
that's definitely something to chew on. We really

00:13:11.019 --> 00:13:13.019
hope this deep dive helped you connect some of

00:13:13.019 --> 00:13:14.740
these important dots and feel a bit more informed

00:13:14.740 --> 00:13:17.340
about this incredibly fast-moving and honestly

00:13:17.340 --> 00:13:20.360
fascinating world of AI. Yeah, thanks for diving

00:13:20.360 --> 00:13:21.940
deep with us today. We'll catch you on the next

00:13:21.940 --> 00:13:23.620
one. [Outro music]
