WEBVTT

00:00:00.000 --> 00:00:01.940
Imagine this for a second. Something that, you

00:00:01.940 --> 00:00:04.919
know, used to take years. Just painstaking, expensive

00:00:04.919 --> 00:00:07.980
research, tons of trial and error. Right. Like

00:00:07.980 --> 00:00:09.839
finding a needle in a haystack, basically. Exactly.

00:00:09.900 --> 00:00:12.640
And now an AI can do it in, what, two weeks?

00:00:12.779 --> 00:00:14.939
We're talking about designing brand new human

00:00:14.939 --> 00:00:18.359
antibodies from scratch. Yeah. That kind of speed,

00:00:18.440 --> 00:00:20.620
it's not just faster. It feels like it changes

00:00:20.620 --> 00:00:22.920
the whole landscape of what's possible in medicine.

00:00:23.000 --> 00:00:25.059
Oh, absolutely. It's a huge leap. Traditional

00:00:25.059 --> 00:00:27.120
methods, they're so slow, they chew up resources.

00:00:27.660 --> 00:00:30.019
You're screening millions of things sometimes.

00:00:30.219 --> 00:00:32.439
Yeah. This just flips the script completely.

00:00:33.020 --> 00:00:35.310
Welcome to the Deep Dive. Today, we're going

00:00:35.310 --> 00:00:38.609
to jump into some really cutting-edge AI breakthroughs.

00:00:38.609 --> 00:00:40.689
We've sifted through our sources to pull out

00:00:40.689 --> 00:00:44.229
the really crucial bits of insight for you. We've

00:00:44.229 --> 00:00:46.869
got everything from these medical innovations

00:00:46.869 --> 00:00:49.250
like the antibodies to some really interesting

00:00:49.250 --> 00:00:52.490
stuff about how AI might, well, how it might

00:00:52.490 --> 00:00:54.710
think strategically. Yeah, that part's wild.

00:00:54.990 --> 00:00:57.409
Our goal here is just to unpack these key ideas,

00:00:57.509 --> 00:00:59.810
give you that shortcut to being really informed

00:00:59.810 --> 00:01:01.979
without, you know, getting buried in all the

00:01:01.979 --> 00:01:04.000
details. And it's going to be a pretty cool journey.

00:01:04.060 --> 00:01:06.780
We'll kick off with AI engineering and medicine,

00:01:06.920 --> 00:01:09.879
specifically that antibody design stuff. Okay.

00:01:10.019 --> 00:01:13.079
Then we'll shift gears a bit. Look at some surprising

00:01:13.079 --> 00:01:15.319
ways AI is popping up in different industries,

00:01:15.379 --> 00:01:17.920
some new tools people are using. Great. And finally,

00:01:17.939 --> 00:01:19.739
yeah, we'll tackle that big question. Can AI

00:01:19.739 --> 00:01:22.079
actually think strategically, like really think?

00:01:22.260 --> 00:01:24.780
All right, let's dive in then. First up, right

00:01:24.780 --> 00:01:27.859
into medical innovation. This new AI, it's called

00:01:27.859 --> 00:01:30.939
Chai-2 from Chai Discovery. And interestingly,

00:01:31.260 --> 00:01:34.670
OpenAI is backing them. Yeah, that's a notable

00:01:34.670 --> 00:01:37.349
connection. What seems really remarkable from

00:01:37.349 --> 00:01:40.370
the sources is its knack for engineering custom

00:01:40.370 --> 00:01:43.170
antibodies without needing like tons of previous

00:01:43.170 --> 00:01:45.849
examples to learn from. That feels different.

00:01:46.109 --> 00:01:48.109
It is. The clever part is how it does it. So

00:01:48.109 --> 00:01:51.750
you basically feed Chai-2 the precise structure

00:01:51.750 --> 00:01:54.849
of what you want to target. Let's say a specific

00:01:54.849 --> 00:01:57.230
protein on a virus or maybe a marker on a cancer

00:01:57.230 --> 00:01:59.790
cell. And then the AI. It just goes and designs

00:01:59.790 --> 00:02:02.010
the antibody protein specifically to attack that

00:02:02.010 --> 00:02:04.569
target. The analogy they used, which I kind of

00:02:04.569 --> 00:02:07.450
like, is thinking of it like Photoshop, but for

00:02:07.450 --> 00:02:10.530
proteins. Huh. Photoshop for proteins. Yeah.

00:02:10.569 --> 00:02:11.889
You've got a point where you want the antibody

00:02:11.889 --> 00:02:14.090
to connect. Yeah. And it designs something to

00:02:14.090 --> 00:02:16.150
stick right there. It sounds simple from the

00:02:16.150 --> 00:02:18.409
user end, right? Yeah. But the science underneath

00:02:18.409 --> 00:02:21.030
is obviously super complex. And the results,

00:02:21.129 --> 00:02:22.590
I mean, the numbers you mentioned earlier are

00:02:22.590 --> 00:02:24.889
just kind of wild, aren't they? They really are.

00:02:24.990 --> 00:02:27.449
A 50% hit rate on the targets they went after.

00:02:27.590 --> 00:02:29.909
And get this, they only need to test about 20

00:02:29.909 --> 00:02:33.020
candidates. 20 potential antibodies for each

00:02:33.020 --> 00:02:37.159
target. Okay, wait. A 50% hit rate from just 20

00:02:37.159 --> 00:02:39.580
tries? How does that stack up? It's massive.

00:02:39.780 --> 00:02:42.400
Like, the sources say it's 100 times better,

00:02:42.539 --> 00:02:45.960
100x jump over traditional hit rates. Those are

00:02:45.960 --> 00:02:49.419
usually down around like 0.1%. 0.1%. Wow. Yeah.

00:02:49.659 --> 00:02:52.479
And think about the time. Biotech labs, they

00:02:52.479 --> 00:02:55.120
might screen millions of candidates. Takes months.

00:02:55.560 --> 00:02:57.759
Sometimes years. Chai-2 did its thing in two

00:02:57.759 --> 00:03:00.879
weeks. Two weeks versus years. That's transformational.

00:03:01.139 --> 00:03:03.659
And a really key takeaway here is the impact,

00:03:03.840 --> 00:03:07.539
right? Economically and for patients, drug development

00:03:07.539 --> 00:03:11.000
is just notoriously expensive. Oh, yeah. Astronomical

00:03:11.000 --> 00:03:14.240
costs. Right. So expensive that companies often

00:03:14.240 --> 00:03:17.500
skip over smaller patient groups. Yeah. The R&D

00:03:17.500 --> 00:03:19.500
cost doesn't make sense for the market size.

00:03:19.879 --> 00:03:23.460
But if an AI like Chai-2 can churn out good candidates

00:03:23.460 --> 00:03:27.419
this fast. Suddenly, those R&D costs plummet.

00:03:27.719 --> 00:03:30.379
And that means custom meds for rare diseases.

00:03:30.620 --> 00:03:33.419
They actually become financially viable. So treatments

00:03:33.419 --> 00:03:36.000
for conditions that were ignored before, suddenly

00:03:36.000 --> 00:03:38.520
they're potentially within reach. Exactly. For

00:03:38.520 --> 00:03:41.099
patients who maybe had zero options before, it

00:03:41.099 --> 00:03:43.259
kind of rejigs the whole economic equation of

00:03:43.259 --> 00:03:45.280
health care. OK, so with this incredible speed

00:03:45.280 --> 00:03:48.439
up in the design phase, it feels like we're right

00:03:48.439 --> 00:03:50.479
on the edge of something huge. But what's the

00:03:50.479 --> 00:03:52.800
biggest hurdle now? What slows things down between

00:03:52.800 --> 00:03:55.919
the AI designing it and, you know, it actually

00:03:55.919 --> 00:03:57.639
helping a patient? Yeah, good question. It's

00:03:57.639 --> 00:04:00.639
likely still the usual suspects, regulatory approval

00:04:00.639 --> 00:04:04.340
and clinical trials. Those still take time. Right.

00:04:04.439 --> 00:04:07.620
The testing and safety checks. Makes sense. OK,

00:04:07.719 --> 00:04:11.780
so from the really intricate world of biological

00:04:11.780 --> 00:04:14.080
engineering, let's kind of zoom out now. Let's

00:04:14.080 --> 00:04:17.279
look at how AI is appearing more broadly, sometimes

00:04:17.279 --> 00:04:19.399
in unexpected ways, and how it's starting to

00:04:19.399 --> 00:04:21.980
shift things across different industries, maybe

00:04:21.980 --> 00:04:24.480
even starting from the basement up. Absolutely.

00:04:24.600 --> 00:04:27.740
And the sources had some really interesting nuggets

00:04:27.740 --> 00:04:31.459
on this. First off, this idea of AI democratizing

00:04:31.459 --> 00:04:34.480
things, maybe shifting power balances. Mark Cuban

00:04:34.480 --> 00:04:36.990
was on a podcast. High Performance, apparently.

00:04:37.250 --> 00:04:39.490
Yeah. And he predicted the first trillionaire.

00:04:40.629 --> 00:04:43.110
It might not be a giant corp. It could be, quote,

00:04:43.209 --> 00:04:46.290
just one dude in the basement. Ah, one person.

00:04:46.509 --> 00:04:48.990
Yeah, someone who finds that sort of unseen use

00:04:48.990 --> 00:04:51.449
case for AI. Yeah. It kind of speaks to the power

00:04:51.449 --> 00:04:53.529
that can be concentrated now, maybe. Interesting.

00:04:53.569 --> 00:04:55.670
And we saw a bit of that user power with the

00:04:55.670 --> 00:04:58.189
Squid Game season three finale thing, right?

00:04:58.230 --> 00:05:00.230
Oh, yeah. That was funny. So apparently people

00:05:00.230 --> 00:05:02.779
really didn't like the finale. Which happens

00:05:02.779 --> 00:05:05.279
all the time. Happens a lot. But fans, they didn't

00:05:05.279 --> 00:05:07.920
just complain. They started using Google's Veo 3,

00:05:08.160 --> 00:05:11.060
that's one of the new video generation AIs, to

00:05:11.060 --> 00:05:13.800
make their own endings. No way. Really? Yeah.

00:05:13.959 --> 00:05:17.300
And these apparently became quite meme-y, as

00:05:17.300 --> 00:05:19.839
the source put it. It shows how people can use

00:05:19.839 --> 00:05:22.500
AI not just to consume stuff, but to actually

00:05:22.500 --> 00:05:25.800
reshape it, co-create almost. That's a different

00:05:25.800 --> 00:05:28.899
level of interaction. But it's not all smooth

00:05:28.899 --> 00:05:30.699
sailing with these new tools, is it? There was

00:05:30.699 --> 00:05:32.980
that issue with Cursor software. Right, yeah.

00:05:33.160 --> 00:05:35.100
That highlighted some of the bumps in the road.

00:05:35.279 --> 00:05:37.980
Users started reporting unexpected charges. One

00:05:37.980 --> 00:05:41.839
person apparently saw $7,000 gone. $7,000?

00:05:41.839 --> 00:05:44.839
Whoa! Yeah. It seems like Cursor might

00:05:44.839 --> 00:05:46.680
have quietly changed its pricing tiers or something

00:05:46.680 --> 00:05:49.019
without being super upfront about it. Led to

00:05:49.019 --> 00:05:50.879
a bunch of people publicly saying they were canceling.

00:05:50.899 --> 00:05:52.920
It's a good reminder about, you know, transparency

00:05:52.920 --> 00:05:55.319
and ethics as these tools roll out. Definitely

00:05:55.319 --> 00:05:58.139
need that clarity. Okay, but on the more positive

00:05:58.139 --> 00:06:00.939
side of accessibility, there was news about making

00:06:00.939 --> 00:06:03.500
apps easily. Yeah, a builder named Riley Brown

00:06:03.500 --> 00:06:06.100
put out a guide. How to create a mobile app from

00:06:06.100 --> 00:06:09.560
scratch in under an hour. And get this, without

00:06:09.560 --> 00:06:12.199
writing any code. Under an hour, no code. Right.

00:06:12.509 --> 00:06:14.889
Just democratizing creation even further. Yeah.

00:06:14.990 --> 00:06:16.610
Giving more people the power to build stuff.

00:06:16.769 --> 00:06:18.250
That's pretty cool. And speaking of building

00:06:18.250 --> 00:06:21.269
things, what about the funding side? Harvey AI

00:06:21.269 --> 00:06:23.769
came up. Oh, yeah. Big news there. Harvey AI,

00:06:23.990 --> 00:06:27.350
they do AI for legal stuff. Their valuation reportedly

00:06:27.350 --> 00:06:31.800
jumped from $3 billion. Hmm. Up to $5 billion.

00:06:32.259 --> 00:06:34.720
Wow. Quick jump. Yeah. After raising another

00:06:34.720 --> 00:06:38.500
$300 million from some big VCs, Kleiner Perkins,

00:06:38.699 --> 00:06:41.379
Sequoia, and the OpenAI Startup Fund was in there

00:06:41.379 --> 00:06:44.019
too. They're apparently serving like 337 legal

00:06:44.019 --> 00:06:47.160
clients now in 53 countries. Shows how fast these

00:06:47.160 --> 00:06:49.490
specialized AIs can scale, huh? And the value

00:06:49.490 --> 00:06:51.370
investors are seeing. Definitely. Big money flowing

00:06:51.370 --> 00:06:53.750
into focused AI applications. Okay. Lots of activity.

00:06:53.870 --> 00:06:55.870
Let's maybe just mention one more of those new

00:06:55.870 --> 00:06:59.149
tools listed. Step fund diligence check. What

00:06:59.149 --> 00:07:00.889
caught my eye was the focus on verification.

00:07:01.370 --> 00:07:02.850
Yeah, that one sounds interesting. It offers

00:07:02.850 --> 00:07:05.509
AI search, but specifically with agent-verified

00:07:05.509 --> 00:07:09.829
citations. Right. In this world of just information

00:07:09.829 --> 00:07:13.050
overload and sometimes misinformation, having

00:07:13.050 --> 00:07:15.810
AI tools that help verify sources seems, well,

00:07:15.829 --> 00:07:18.790
pretty crucial for trust. Couldn't agree more.

00:07:18.790 --> 00:07:22.110
Yeah. And just one quick hit from the list: someone

00:07:22.110 --> 00:07:24.509
apparently used ChatGPT and saved themselves

00:07:24.509 --> 00:07:28.029
$3,000, just like that. Yeah, presumably by using

00:07:28.029 --> 00:07:30.829
it effectively for some task or advice. Just shows

00:07:30.829 --> 00:07:33.170
the practical, like, real-world dollar impact

00:07:33.170 --> 00:07:36.689
it can have for regular users too. So, okay, we've

00:07:36.689 --> 00:07:39.430
got new tools, new funding, new ways users are

00:07:39.430 --> 00:07:42.069
interacting. But with so much popping up constantly,

00:07:42.069 --> 00:07:44.009
what do you think is the biggest challenge for

00:07:44.009 --> 00:07:47.459
people trying to navigate all this? Honestly,

00:07:47.459 --> 00:07:49.519
probably just finding the right tool for the

00:07:49.519 --> 00:07:52.360
right job. There's so much choice now.

00:07:52.360 --> 00:07:55.240
All right. So we've seen AI designing antibodies,

00:07:55.660 --> 00:07:58.220
shaking up industries, even saving someone a

00:07:58.220 --> 00:08:00.720
few thousand bucks. Pretty practical stuff. But

00:08:00.720 --> 00:08:03.420
can it really, truly think strategically, you

00:08:03.420 --> 00:08:06.060
know, beyond just predicting the next word in

00:08:06.060 --> 00:08:08.000
a sentence? Yeah, the million dollar question,

00:08:08.019 --> 00:08:10.410
isn't it? It really is. And this brings us to

00:08:10.410 --> 00:08:12.370
some fascinating research that was highlighted

00:08:12.370 --> 00:08:16.490
in the AI chart section of our sources. It tackled

00:08:16.490 --> 00:08:18.370
this head on. Right. And the way they tested

00:08:18.370 --> 00:08:21.209
it was pretty clever. They ran, get this, 140,000

00:08:21.209 --> 00:08:26.310
games of Prisoner's Dilemma. 140,000. Wow.

00:08:26.449 --> 00:08:29.389
OK. Classic game theory setup. Exactly. Using

00:08:29.389 --> 00:08:33.399
AI agents from the big players: OpenAI's models,

00:08:33.399 --> 00:08:36.580
Google's Gemini, Anthropic's Claude. And in each

00:08:36.580 --> 00:08:40.159
round, the AI had to choose: cooperate with the

00:08:40.159 --> 00:08:43.039
other AI, or defect, try to betray it. Standard

00:08:43.039 --> 00:08:45.379
Prisoner's Dilemma choices. But here's the kicker,

00:08:45.379 --> 00:08:48.379
the really important part: before every single

00:08:48.379 --> 00:08:51.240
move, the AI models had to actually write out

00:08:51.240 --> 00:08:53.659
their reasoning, why they were choosing to cooperate

00:08:53.659 --> 00:08:55.840
or defect. Ah, so they had to explain themselves.

00:08:56.259 --> 00:08:58.019
Yeah, which allowed the researchers to track

00:08:58.019 --> 00:09:00.000
how they were making decisions. It gives us this

00:09:00.000 --> 00:09:02.299
little window into their quote-unquote thought

00:09:02.299 --> 00:09:04.179
process. Okay, I'm hooked. What did they find?

00:09:04.259 --> 00:09:06.139
Were they all just cold calculating machines?

00:09:06.539 --> 00:09:08.220
That's what you might expect, right? But the

00:09:08.220 --> 00:09:10.220
results were, well, the source described them

00:09:10.220 --> 00:09:13.559
as weirdly human. Weirdly human? How so? Well,

00:09:13.580 --> 00:09:16.039
they developed distinct styles. Gemini, for instance,

00:09:16.240 --> 00:09:18.840
turned out to be pretty ruthless. Calculated,

00:09:18.879 --> 00:09:21.860
very reactive. If you betrayed it, it was quick

00:09:21.860 --> 00:09:24.200
to defect back. Okay, the pragmatic one. Sort

00:09:24.200 --> 00:09:28.759
of. Then GPT-4 was weirdly idealistic, was the

00:09:28.759 --> 00:09:31.759
phrase used. Often cooperative, like it kept

00:09:31.759 --> 00:09:33.759
trying to cooperate, even when the other AI was

00:09:33.759 --> 00:09:36.519
exploiting it. Huh, almost naive. Maybe. And

00:09:36.519 --> 00:09:38.960
then Claude emerged as the peacemaker. It was

00:09:38.960 --> 00:09:41.500
the most forgiving, apparently. Even after being

00:09:41.500 --> 00:09:44.200
metaphorically backstabbed, it was more likely

00:09:44.200 --> 00:09:47.080
to try cooperating again. That is weirdly human.

00:09:47.200 --> 00:09:49.799
Different personalities almost. And that's the

00:09:49.799 --> 00:09:51.779
really wild part. Remember, they all trained

00:09:51.779 --> 00:09:54.259
on essentially the same foundational data, the

00:09:54.259 --> 00:09:57.080
same giant pile of text and code. Right. Same

00:09:57.080 --> 00:09:59.500
starting point. But despite that shared training,

00:09:59.659 --> 00:10:01.580
they developed these totally different approaches.

00:10:01.980 --> 00:10:05.039
Unique strategic fingerprints, as the source

00:10:05.039 --> 00:10:07.820
called it. It depended on how each model reacted

00:10:07.820 --> 00:10:10.500
internally to things like betrayal or success

00:10:10.500 --> 00:10:13.519
or building trust. So it wasn't just mimicking

00:10:13.519 --> 00:10:16.279
patterns in the data. Seems not. This suggests

00:10:16.279 --> 00:10:19.320
that these LLMs, large language models, the AIs

00:10:19.320 --> 00:10:22.019
trained on text, they can actually do strategic

00:10:22.019 --> 00:10:24.320
reasoning, not just predicting the next word.

00:10:24.480 --> 00:10:27.259
It's about making decisions

00:10:27.259 --> 00:10:32.179
based on a consistent internal approach. It shows

00:10:32.179 --> 00:10:34.139
that even with the same training, they develop

00:10:34.139 --> 00:10:37.240
these kind of intrinsic cognitive styles. Whoa,

00:10:37.240 --> 00:10:39.960
okay. Imagine putting them all on the same team,

00:10:39.960 --> 00:10:42.320
or, maybe more interesting, having them negotiate

00:10:42.320 --> 00:10:44.659
against each other. Right. But the implications

00:10:44.659 --> 00:10:47.860
there for, like, future AI systems, especially autonomous

00:10:47.860 --> 00:10:50.539
agents, yeah, they feel really profound. It's not

00:10:50.539 --> 00:10:52.360
just about what they can do, but the style in

00:10:52.360 --> 00:10:54.159
which they do it, these different styles of intelligence

00:10:54.159 --> 00:10:57.049
emerging, and how they'll interact. Yeah. That's

00:10:57.049 --> 00:10:58.490
something else. Yeah. You know, I still wrestle

00:10:58.490 --> 00:11:00.929
with prompt drift myself. Sometimes that's where,

00:11:00.929 --> 00:11:02.850
you know, an AI's answers kind of subtly change

00:11:02.850 --> 00:11:05.610
over time, even if you ask the same thing. Yeah,

00:11:05.850 --> 00:11:08.110
I've seen that. So seeing these AIs develop such

00:11:08.110 --> 00:11:12.090
distinct, consistent strategies over 140,000

00:11:12.090 --> 00:11:15.570
games, it's both amazing and honestly a little

00:11:15.570 --> 00:11:17.830
bit daunting. It suggests a level of internal

00:11:17.830 --> 00:11:19.990
coherence. It's, well, it's pretty sophisticated.

00:11:20.230 --> 00:11:23.710
So thinking about building more complex AI systems

00:11:23.710 --> 00:11:26.330
down the road. What does this experiment tell

00:11:26.330 --> 00:11:28.350
us? What's the key takeaway? I think it tells

00:11:28.350 --> 00:11:31.309
us that future AI agents, they probably won't

00:11:31.309 --> 00:11:32.970
be interchangeable cogs. They might have their

00:11:32.970 --> 00:11:36.149
own styles. Right. Not just plug-and-play copies.

00:11:36.289 --> 00:11:39.350
Okay. So wrapping this up then, what we've really

00:11:39.350 --> 00:11:42.710
explored today, it feels like more than just

00:11:42.710 --> 00:11:46.049
AI getting faster or incrementally smarter. It

00:11:46.049 --> 00:11:49.429
feels like a shift. How so? From AI just being

00:11:49.429 --> 00:11:52.809
a tool to maybe being more like a co -creator,

00:11:52.830 --> 00:11:54.990
like with the antibodies or the fan-made endings,

00:11:55.149 --> 00:11:57.549
or even an emergent intelligence with its own

00:11:57.549 --> 00:11:59.889
distinct style, like in the Prisoner's Dilemma

00:11:59.889 --> 00:12:02.750
games. It's not just about what AI can do anymore,

00:12:02.830 --> 00:12:04.409
but it raises questions about how we interact

00:12:04.409 --> 00:12:07.210
with it, how we regulate it, maybe even how we

00:12:07.210 --> 00:12:10.629
define these evolving digital minds. Yeah, I

00:12:10.629 --> 00:12:12.190
think that's right. And to kind of reinforce

00:12:12.190 --> 00:12:15.009
that, AI is clearly delivering real economic

00:12:15.009 --> 00:12:18.769
value. We saw that, right? Cutting costs, potentially

00:12:18.769 --> 00:12:21.289
democratizing access to things like rare disease

00:12:21.289 --> 00:12:24.389
treatments. That's huge. And maybe the most intriguing

00:12:24.389 --> 00:12:27.309
part is what you just said. These systems are

00:12:27.309 --> 00:12:30.090
starting to show distinct personalities or at

00:12:30.090 --> 00:12:33.029
least distinct strategic approaches. Understanding

00:12:33.029 --> 00:12:35.789
those unique strategic fingerprints seems really

00:12:35.789 --> 00:12:38.690
important now because it'll shape not just what

00:12:38.690 --> 00:12:41.889
AI can do, but how it'll do it. And that impacts,

00:12:41.950 --> 00:12:44.690
well, everything from business negotiations to

00:12:44.690 --> 00:12:47.409
how autonomous systems make decisions in the

00:12:47.409 --> 00:12:50.570
real world. So the final thought maybe. For you

00:12:50.570 --> 00:12:53.110
listening, what does all this actually mean for

00:12:53.110 --> 00:12:55.769
you? As AI starts developing these unique personalities

00:12:55.769 --> 00:12:58.529
and strategies, how might that change the way

00:12:58.529 --> 00:13:00.429
we interact with these systems day to day? Or

00:13:00.429 --> 00:13:02.409
maybe even how we think about intelligence itself

00:13:02.409 --> 00:13:04.570
in the coming years? It's definitely something

00:13:04.570 --> 00:13:06.690
to ponder. We hope you'll consider the implications

00:13:06.690 --> 00:13:08.830
of these advancements. Keep asking questions.

00:13:09.169 --> 00:13:12.429
Stay curious. Keep exploring. Thank you for joining

00:13:12.429 --> 00:13:14.450
us on this deep dive. [Outro music]
