WEBVTT

00:00:00.000 --> 00:00:05.379
OK, so imagine you could suddenly just like look

00:00:05.379 --> 00:00:07.860
inside an AI's brain, you know, really trace

00:00:07.860 --> 00:00:11.019
how it's thinking step by step. Or maybe you

00:00:11.019 --> 00:00:13.320
caught that news about AI predicting hurricanes

00:00:13.320 --> 00:00:17.719
like super accurately. Or there's another one

00:00:17.719 --> 00:00:19.120
that somehow can't even tell you what year it

00:00:19.120 --> 00:00:23.339
is. Yeah, that contrast is pretty, pretty stark,

00:00:23.440 --> 00:00:25.320
isn't it? Between the really groundbreaking stuff

00:00:25.320 --> 00:00:29.199
and, well... kind of baffling. Totally. And honestly,

00:00:29.300 --> 00:00:30.920
that pretty much captures what we're going to

00:00:30.920 --> 00:00:33.259
dig into today. Yeah. Welcome to your deep dive.

00:00:33.259 --> 00:00:35.119
We've got the whole stack of sources you shared:

00:00:35.119 --> 00:00:38.259
some research papers, news bits, notes on new

00:00:38.259 --> 00:00:41.119
tools popping up. And it's all about the latest,

00:00:41.259 --> 00:00:44.520
most interesting things happening in AI. It's

00:00:44.520 --> 00:00:46.560
a really great snapshot, actually. It shows where

00:00:46.560 --> 00:00:49.939
the field is making these huge leaps and maybe

00:00:49.939 --> 00:00:51.479
also where it's still kind of finding its feet.

00:00:51.679 --> 00:00:54.600
Yeah, exactly. So our mission here is just to

00:00:54.600 --> 00:00:57.899
unpack all of this with you, pull out the most

00:00:57.899 --> 00:01:00.299
important bits, the surprising stuff, maybe some

00:01:00.299 --> 00:01:04.379
real aha moments, and sort of give you the shortcut

00:01:04.379 --> 00:01:06.120
to understanding what's actually going on in

00:01:06.120 --> 00:01:09.700
this super fast-moving world. This is your custom

00:01:09.700 --> 00:01:12.840
deep dive into your material. Think of us as

00:01:12.840 --> 00:01:15.620
guides, maybe, helping you navigate the info

00:01:15.620 --> 00:01:18.019
you brought to the table. Love that. Yeah. And

00:01:18.019 --> 00:01:19.939
right at the heart of these sources, there's

00:01:19.939 --> 00:01:22.420
this one piece that feels like, you know, the

00:01:22.420 --> 00:01:25.840
main event. It's about Anthropic and what they've

00:01:25.840 --> 00:01:28.140
just, well, they've open sourced something pretty

00:01:28.140 --> 00:01:30.180
big. A tool that people are describing as giving

00:01:30.180 --> 00:01:33.620
us X-ray vision into that AI black box everyone

00:01:33.620 --> 00:01:35.819
talks about. Which is huge. Yeah. Because for

00:01:35.819 --> 00:01:37.939
years, everyone's kind of said that box is basically

00:01:37.939 --> 00:01:40.799
sealed shut. So seeing a real move towards cracking

00:01:40.799 --> 00:01:43.890
it open. That's a big deal. It really is. We're

00:01:43.890 --> 00:01:45.769
definitely going to get into that. But your sources

00:01:45.769 --> 00:01:47.370
cover a bunch of other stuff too, right? It's

00:01:47.370 --> 00:01:49.510
not just the super technical breakthroughs. We've

00:01:49.510 --> 00:01:51.829
also got some of the more, let's say, quirky

00:01:51.829 --> 00:01:54.530
applications and just the general industry buzz

00:01:54.530 --> 00:01:56.530
you picked up on. Yeah. And what's fascinating,

00:01:56.750 --> 00:01:59.489
I think, is how these seemingly different pieces

00:01:59.489 --> 00:02:01.930
actually connect. They're not totally random.

00:02:01.969 --> 00:02:03.810
They kind of show these underlying trends, maybe

00:02:03.810 --> 00:02:06.810
some tensions, and really paint a picture of

00:02:06.810 --> 00:02:09.469
where AI is heading right now. Right. It's like

00:02:09.469 --> 00:02:12.490
different sides of the same, I don't know, rapidly

00:02:12.490 --> 00:02:14.750
changing crystal or something. Okay, let's start

00:02:14.750 --> 00:02:16.469
with that big one then, the Anthropic news.

00:02:16.530 --> 00:02:18.389
The source talks about this new tool they've

00:02:18.389 --> 00:02:20.129
open sourced and they call the main technique

00:02:20.129 --> 00:02:24.030
circuit tracing. Yes, circuit tracing. And they're

00:02:24.030 --> 00:02:26.129
using something they call attribution graphs

00:02:26.129 --> 00:02:29.210
to do it. This is basically their deep interpretability

00:02:29.210 --> 00:02:31.909
work, understanding how the model works. But

00:02:31.909 --> 00:02:34.110
now it's out there for others to use. Attribution

00:02:34.110 --> 00:02:36.250
graphs. Okay, so what does that actually show

00:02:36.250 --> 00:02:37.889
you? What are you looking at when you use it?

00:02:38.240 --> 00:02:41.159
Well, they're designed to show you the internal

00:02:41.159 --> 00:02:43.659
reasoning steps, right, that a language model

00:02:43.659 --> 00:02:46.120
takes. When you give it a prompt and it spits

00:02:46.120 --> 00:02:48.979
out an answer, these graphs visualize the path

00:02:48.979 --> 00:02:52.080
the information took, which bits of the model

00:02:52.080 --> 00:02:54.300
lit up, and how they actually contributed to

00:02:54.300 --> 00:02:58.020
that specific response. Okay. Wow. So it's not

00:02:58.020 --> 00:03:00.979
just seeing, like, input goes in, answer comes

00:03:00.979 --> 00:03:04.479
out. You're literally watching the AI sort of

00:03:04.479 --> 00:03:07.090
think its way through in a very structured

00:03:07.090 --> 00:03:10.110
way. Yeah, a graph-based representation. You can

00:03:10.110 --> 00:03:12.150
take a specific bit of the output and trace it

00:03:12.150 --> 00:03:14.270
right back through the model's layers. You see

00:03:14.270 --> 00:03:15.909
the sequence of calculations and connections

00:03:15.909 --> 00:03:18.270
that led right to it. And the source mentioned

00:03:18.270 --> 00:03:21.180
you can even like mess with it, edit things in

00:03:21.180 --> 00:03:23.099
the graph to see what happens. Precisely. Yeah,

00:03:23.159 --> 00:03:25.300
that's the experimental part. You can spot a

00:03:25.300 --> 00:03:28.159
specific internal feature or like a pathway the

00:03:28.159 --> 00:03:30.840
model used, and then you can tweak it or even

00:03:30.840 --> 00:03:33.560
switch it off and see the direct cause and effect

00:03:33.560 --> 00:03:35.639
on the final answer. It lets you investigate

00:03:35.639 --> 00:03:37.780
really deeply. And this isn't just for Anthropic's

00:03:37.780 --> 00:03:40.319
own models, is it? The source said it works with

00:03:40.319 --> 00:03:42.460
some big open models too. Yeah, that's right.

00:03:42.840 --> 00:03:45.080
They specifically mentioned support for models

00:03:45.080 --> 00:03:47.960
like Gemma and Llama. And importantly, they've

00:03:47.960 --> 00:03:50.520
put out demo notebooks and even some graphs they

00:03:50.520 --> 00:03:52.360
haven't fully figured out themselves yet, kind

00:03:52.360 --> 00:03:54.319
of inviting the whole research community to jump

00:03:54.319 --> 00:03:56.699
in and explore. See, this is where it gets...

00:03:57.180 --> 00:03:59.819
I think really significant because for years,

00:03:59.919 --> 00:04:02.939
the biggest knock against advanced AI, especially

00:04:02.939 --> 00:04:05.599
these huge language models, has been that black

00:04:05.599 --> 00:04:08.680
box issue. We train them. They do amazing things,

00:04:08.800 --> 00:04:11.680
but we don't really get why they make certain

00:04:11.680 --> 00:04:13.400
choices or how they get to their conclusions.
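
NOTE
A minimal, runnable sketch of the circuit-tracing idea described above: record
each internal feature's contribution to the output (the edges of a tiny
"attribution graph"), then ablate a feature and watch the causal effect on the
answer. This is hypothetical toy Python with made-up feature names and weights,
not Anthropic's actual tool or its API.
def run_model(prompt_strength, ablated=frozenset()):
    # Layer 1: two hypothetical internal features with made-up activations.
    features = {
        "feature_a": 0.0 if "feature_a" in ablated else 0.8 * prompt_strength,
        "feature_b": 0.0 if "feature_b" in ablated else 0.3 * prompt_strength,
    }
    # Layer 2: the output is a weighted sum; each per-feature term is one
    # edge of the attribution graph leading into the final answer.
    weights = {"feature_a": 1.5, "feature_b": -0.5}
    contributions = {n: weights[n] * act for n, act in features.items()}
    return sum(contributions.values()), contributions
baseline, attributions = run_model(1.0)
print("baseline output:", baseline)              # 1.05
print("per-feature attributions:", attributions)
ablated_out, _ = run_model(1.0, ablated={"feature_a"})
print("feature_a switched off:", ablated_out)    # direct cause and effect: -0.15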

00:04:13.659 --> 00:04:16.379
Exactly. And this release feels like a major

00:04:16.379 --> 00:04:19.399
step towards cracking that open. It's a very

00:04:19.399 --> 00:04:22.060
different vibe compared to labs that tend to

00:04:22.060 --> 00:04:24.560
keep these kinds of internal tools super secret,

00:04:25.069 --> 00:04:27.790
you know, proprietary. Anthropic seems to be

00:04:27.790 --> 00:04:30.189
making a statement here like transparency is

00:04:30.189 --> 00:04:33.810
key, which is probably also a competitive thing,

00:04:33.829 --> 00:04:36.110
too. It's a huge signal. And the source tosses

00:04:36.110 --> 00:04:39.930
out this really interesting idea: what if, down

00:04:39.930 --> 00:04:42.290
the road, we could actually edit the model's

00:04:42.290 --> 00:04:44.990
reasoning the same way we edit prompts now. That

00:04:44.990 --> 00:04:47.009
does raise a big question. If you can clearly

00:04:47.009 --> 00:04:49.870
trace, say, a flawed line of reasoning or see

00:04:49.870 --> 00:04:52.189
a really good one, could you potentially go right

00:04:52.189 --> 00:04:54.129
into the model's internal workings and correct

00:04:54.129 --> 00:04:56.689
or reinforce those patterns? This tool feels

00:04:56.689 --> 00:04:58.870
like a first step towards maybe having that kind

00:04:58.870 --> 00:05:02.110
of capability someday. Yeah, that's definitely

00:05:02.110 --> 00:05:05.329
something to think about. Okay, so shifting gears

00:05:05.329 --> 00:05:10.420
a bit. From seeing inside the AI mind to predicting

00:05:10.420 --> 00:05:13.300
hurricanes. Your sources also brought up NASA's

00:05:13.300 --> 00:05:17.180
GAIA model. Ah, yes, GAIA. This is a fantastic

00:05:17.180 --> 00:05:20.620
example of these big foundation models. The really

00:05:20.620 --> 00:05:23.959
flexible, powerful ones moving completely beyond

00:05:23.959 --> 00:05:27.060
just language and text. GAIA stands for Geospatial

00:05:27.060 --> 00:05:29.519
Artificial Intelligence for Atmospheres. Right,

00:05:29.600 --> 00:05:32.079
the atmosphere. So it's like an AI weather model...

00:05:33.300 --> 00:05:35.379
different somehow? Much, much more than just

00:05:35.379 --> 00:05:37.560
a standard weather model, really. It's a generative

00:05:37.560 --> 00:05:40.000
AI model built just for our planet's atmosphere.

00:05:40.180 --> 00:05:42.019
It was trained on something like 25 years of

00:05:42.019 --> 00:05:44.300
Earth data. It can do things like predict hurricanes.

00:05:44.519 --> 00:05:46.800
Yeah. But also spot wildfires, estimate rainfall,

00:05:47.100 --> 00:05:50.220
potentially all in real time. 25 years of data.

00:05:50.379 --> 00:05:52.079
Yeah. Wow. That sounds like a massive project.

00:05:52.180 --> 00:05:53.500
What makes this different from, you know, the

00:05:53.500 --> 00:05:55.240
super complex weather models we already have?

00:05:55.360 --> 00:05:57.839
A couple of key things based on the source. One

00:05:57.839 --> 00:06:00.439
is the detail and speed. It apparently has a

00:06:00.439 --> 00:06:02.819
four-kilometer spatial resolution, that's really

00:06:02.819 --> 00:06:05.740
fine-grained, and it updates every 30 minutes, which

00:06:05.740 --> 00:06:08.360
is incredibly detailed for real-time use over

00:06:08.360 --> 00:06:11.240
big areas. But there's also its generative power.

00:06:11.360 --> 00:06:14.139
It can intelligently like patch up big holes

00:06:14.139 --> 00:06:17.579
in satellite data or reconstruct full high-res

00:06:17.579 --> 00:06:20.279
weather maps, even if the input is noisy or incomplete.
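
NOTE
A crude, self-contained Python stand-in for the gap-filling idea just described:
reconstruct missing grid cells (None) from observed neighbors by iterative
averaging. A generative model like GAIA learns far richer atmospheric structure;
this toy only illustrates the shape of the task, and every name in it is made up.
def fill_gaps(grid, iterations=50):
    rows, cols = len(grid), len(grid[0])
    filled = [row[:] for row in grid]
    # Seed every missing cell with the mean of the observed values.
    observed = [v for row in grid for v in row if v is not None]
    mean = sum(observed) / len(observed)
    for r in range(rows):
        for c in range(cols):
            if filled[r][c] is None:
                filled[r][c] = mean
    # Relax each originally-missing cell toward its 4-neighbor average.
    for _ in range(iterations):
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] is None:
                    nbrs = [filled[rr][cc]
                            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                            if 0 <= rr < rows and 0 <= cc < cols]
                    filled[r][c] = sum(nbrs) / len(nbrs)
    return filled
swath = [[1.0, 2.0, 3.0],  # a tiny "satellite swath" with a hole in the middle
         [2.0, None, 4.0],
         [3.0, 4.0, 5.0]]
print(fill_gaps(swath))    # the hole is reconstructed as 3.0 from its neighbors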

00:06:20.970 --> 00:06:22.930
Whoa, that's amazing for places where you might

00:06:22.930 --> 00:06:24.889
not have perfect sensor coverage or, you know,

00:06:24.910 --> 00:06:27.250
if data drops out. Exactly. And it's designed

00:06:27.250 --> 00:06:29.689
to be, you know, a foundation, something other

00:06:29.689 --> 00:06:32.209
labs and organizations can build on top of. It

00:06:32.209 --> 00:06:34.670
really shows these powerful AI methods aren't

00:06:34.670 --> 00:06:36.829
just for writing code or text anymore. They're

00:06:36.829 --> 00:06:39.490
starting to tackle really complex physical systems.

00:06:39.670 --> 00:06:42.069
So, like, could cities or maybe disaster relief

00:06:42.069 --> 00:06:45.560
groups actually use something like this? The

00:06:45.560 --> 00:06:47.759
source suggested that within maybe the next couple

00:06:47.759 --> 00:06:50.779
of years, GAIA-like models could realistically

00:06:50.779 --> 00:06:53.360
be powering real-time dashboards. You know,

00:06:53.360 --> 00:06:55.040
for critical infrastructure, disaster response,

00:06:55.220 --> 00:06:57.019
it seems to be moving pretty fast from research

00:06:57.019 --> 00:06:59.500
to potential real-world, maybe even life-saving

00:06:59.500 --> 00:07:03.360
uses. Wow. Okay. So we have this absolutely groundbreaking

00:07:03.360 --> 00:07:07.079
stuff, seeing inside AI, predicting massive storms

00:07:07.079 --> 00:07:10.750
with incredible detail, and then... Your sources

00:07:10.750 --> 00:07:13.509
also had that little snippet about Google's AI

00:07:13.509 --> 00:07:15.970
assistant apparently telling people the current

00:07:15.970 --> 00:07:20.189
year is 2024. Yes, that little detail was interesting,

00:07:20.290 --> 00:07:22.269
wasn't it? It definitely highlights the current

00:07:22.269 --> 00:07:24.829
state of things: even while we're making

00:07:24.829 --> 00:07:28.550
these massive leaps like GAIA, really basic factual

00:07:28.550 --> 00:07:32.050
slip-ups can still happen. It is kind of humbling, right?

00:07:32.129 --> 00:07:34.610
Like any kid knows the year, but a cutting-edge

00:07:34.610 --> 00:07:37.490
AI occasionally gets it wrong. It certainly brings

00:07:37.490 --> 00:07:39.629
up that important point about reliability and,

00:07:39.670 --> 00:07:41.329
you know, how much we can really trust these

00:07:41.329 --> 00:07:43.850
tools for simple, checkable facts, especially

00:07:43.850 --> 00:07:45.930
when they might be baked into systems we depend

00:07:45.930 --> 00:07:48.290
on. The source even tied it back to some critics

00:07:48.290 --> 00:07:50.629
who worry about relying on AI for the kind of

00:07:50.629 --> 00:07:52.910
factual info we used to just get from the Web.

00:07:53.009 --> 00:07:55.410
Right. And speaking of Google, your sources also

00:07:55.410 --> 00:07:58.850
mentioned those kind of weirdly realistic, sometimes

00:07:58.850 --> 00:08:01.389
unsettling videos from Veo 3. People were sharing

00:08:01.389 --> 00:08:03.730
them showing impossible challenges or physics

00:08:03.730 --> 00:08:06.990
being broken. Yes, the sheer ability and realism

00:08:06.990 --> 00:08:10.449
in AI video generation is just advancing incredibly

00:08:10.449 --> 00:08:13.449
fast. Those examples really drive that home.

00:08:13.949 --> 00:08:15.529
And then there was Odyssey, which showed off

00:08:15.529 --> 00:08:18.009
something called an interactive video AI model.

00:08:18.189 --> 00:08:20.350
Right. They described it like stepping into a

00:08:20.350 --> 00:08:22.790
playable movie. That's a different angle, again,

00:08:22.829 --> 00:08:25.110
taking AI video creation beyond just making a

00:08:25.110 --> 00:08:28.029
fixed clip towards potentially creating dynamic,

00:08:28.029 --> 00:08:31.430
responsive visuals that you can actually influence

00:08:31.430 --> 00:08:34.950
somehow. That's a whole other thing for like

00:08:34.950 --> 00:08:36.909
content creation or maybe training simulations

00:08:36.909 --> 00:08:39.450
or something. And then you have tools like Opera's

00:08:39.450 --> 00:08:42.429
new browser, Neon. They're calling it an AI browser

00:08:42.429 --> 00:08:45.350
that can act on your behalf. Yeah, that sounds

00:08:45.350 --> 00:08:47.690
like more of an interface shift. An AI browser

00:08:47.690 --> 00:08:50.549
that can chat, sure, but also fill forms, maybe

00:08:50.549 --> 00:08:52.649
book trips. The source even suggested it could

00:08:52.649 --> 00:08:55.110
build simple apps for you. The browser itself

00:08:55.110 --> 00:08:57.409
becomes, you know, an AI assistant layered on

00:08:57.409 --> 00:08:59.090
top of everything. So instead of just finding

00:08:59.090 --> 00:09:01.049
the flight info, you could potentially just tell

00:09:01.049 --> 00:09:03.789
it, hey, book me the cheapest flight next Tuesday.

00:09:03.889 --> 00:09:05.809
Yeah. And it might just do it. That seems to be

00:09:05.809 --> 00:09:07.690
the direction they're looking at. Yeah. Your

00:09:07.690 --> 00:09:10.059
sources also caught some of that broader industry

00:09:10.059 --> 00:09:13.200
pulse, like Grammarly, the AI writing helper,

00:09:13.379 --> 00:09:15.879
getting a billion dollars in funding. That's

00:09:15.879 --> 00:09:18.320
huge. A billion dollars. Yeah, that is serious

00:09:18.320 --> 00:09:20.779
cash flowing into AI productivity tools. And

00:09:20.779 --> 00:09:23.139
some other quick hits: Perplexity having a tool

00:09:23.139 --> 00:09:25.419
to make spreadsheets from conversations, the

00:09:25.419 --> 00:09:27.860
Netflix co-founder joining Anthropic's board.

00:09:28.379 --> 00:09:30.679
It all just shows the whole ecosystem is buzzing,

00:09:30.799 --> 00:09:33.299
you know, evolving really fast, technically,

00:09:33.500 --> 00:09:35.679
commercially, even in terms of who's leading

00:09:35.679 --> 00:09:38.039
things. Oh, and there were mentions of layoffs

00:09:38.039 --> 00:09:40.799
at Business Insider, too, maybe hinting at wider

00:09:40.799 --> 00:09:43.720
economic stuff or how AI might start shaking

00:09:43.720 --> 00:09:45.620
up certain jobs. Yeah, there's definitely a lot

00:09:45.620 --> 00:09:47.320
going on on all these different fronts at the

00:09:47.320 --> 00:09:50.059
same time. It's kind of dizzying. Yeah. So, OK,

00:09:50.179 --> 00:09:52.539
we've covered quite a bit here from trying to

00:09:52.539 --> 00:09:55.200
peek inside the AI's mind with Anthropic's new

00:09:55.200 --> 00:09:57.940
tool to predicting massive weather events with

00:09:57.940 --> 00:10:02.919
NASA's GAIA model, to the surprising little glitch

00:10:02.919 --> 00:10:05.200
of a major AI forgetting what year it is and

00:10:05.200 --> 00:10:07.379
all these new ways AI is popping up in video

00:10:07.379 --> 00:10:10.000
and even how we browse the web. Yeah. And if

00:10:10.000 --> 00:10:13.200
we try to like connect the dots here, see the

00:10:13.200 --> 00:10:16.639
bigger picture. What this collection of news

00:10:16.639 --> 00:10:19.179
really signals is the incredible, almost kind

00:10:19.179 --> 00:10:22.240
of jarring speed of AI evolution right now. You

00:10:22.240 --> 00:10:24.620
see this fundamental push for transparency, for

00:10:24.620 --> 00:10:27.639
understanding with Anthropic. But that's happening

00:10:27.639 --> 00:10:29.580
at the same time as we're building these incredibly

00:10:29.580 --> 00:10:32.440
powerful foundation models for really complex

00:10:32.440 --> 00:10:34.639
physical systems like the atmosphere with GAIA.

00:10:34.879 --> 00:10:36.639
And then you have that really stark contrast,

00:10:36.779 --> 00:10:38.840
right, between that super cutting-edge stuff and

00:10:38.919 --> 00:10:42.149
the fact that, you know, some basic reliability

00:10:42.149 --> 00:10:44.809
things like an AI getting the year wrong are

00:10:44.809 --> 00:10:46.830
still cropping up. Exactly. It shows the tech

00:10:46.830 --> 00:10:49.509
is both incredibly advanced and still kind of

00:10:49.509 --> 00:10:53.100
immature. Or maybe just unpredictable in some

00:10:53.100 --> 00:10:56.039
ways. It also clearly demonstrates how fast AI's

00:10:56.039 --> 00:10:57.759
application space is expanding. I mean, it's

00:10:57.759 --> 00:10:59.600
not just about generating text or code anymore,

00:10:59.700 --> 00:11:02.460
not by a long shot. It's moving firmly into physical

00:11:02.460 --> 00:11:05.200
systems like GAIA and into creative media like

00:11:05.200 --> 00:11:07.600
Veo and Odyssey. And it's fundamentally changing

00:11:07.600 --> 00:11:10.039
how we interact with information, like with these

00:11:10.039 --> 00:11:13.440
new AI browsers. OK, so what does all this mean

00:11:13.440 --> 00:11:16.580
for you listening right now? Why does knowing

00:11:16.580 --> 00:11:20.019
about circuit tracing or a hurricane model actually

00:11:20.019 --> 00:11:22.480
matter? Well, I think understanding these specific

00:11:22.480 --> 00:11:24.659
points gives you a much better feel for where

00:11:24.659 --> 00:11:26.820
AI is actually heading, you know, beyond just

00:11:26.820 --> 00:11:29.019
the general hype you hear everywhere. Knowing

00:11:29.019 --> 00:11:30.679
about something like circuit tracing, for instance,

00:11:30.840 --> 00:11:33.399
it fundamentally changes that idea of the AI

00:11:33.399 --> 00:11:37.139
black box. Maybe it's becoming less opaque. And

00:11:37.139 --> 00:11:39.019
that's crucial if we want to build trust and

00:11:39.019 --> 00:11:41.710
make sure these things are safe. The GAIA model

00:11:41.710 --> 00:11:44.230
shows that AI isn't just changing how we use

00:11:44.230 --> 00:11:46.629
computers. It's starting to potentially change

00:11:46.629 --> 00:11:48.710
our understanding and ability to deal with the

00:11:48.710 --> 00:11:50.809
physical world around us in pretty significant

00:11:50.809 --> 00:11:54.370
ways. And seeing that mix, the incredible breakthroughs

00:11:54.370 --> 00:11:57.090
right alongside the basic errors, that gives

00:11:57.090 --> 00:11:59.309
you a more realistic view of where things stand

00:11:59.309 --> 00:12:01.769
today, where it's powerful and where the limitations

00:12:01.769 --> 00:12:04.149
still are. Right. It helps you maybe anticipate

00:12:04.149 --> 00:12:07.250
how AI might show up next in your own life or

00:12:07.250 --> 00:12:10.230
your work, or just out in the world, and what

00:12:10.230 --> 00:12:12.049
you can realistically expect from it, or maybe

00:12:12.049 --> 00:12:14.649
what you shouldn't expect from it just yet. Precisely,

00:12:14.649 --> 00:12:18.090
yeah. It's about being informed and grounded in

00:12:18.090 --> 00:12:20.210
what the actual research and the real-world

00:12:20.210 --> 00:12:22.730
applications are showing us. Yeah, that really

00:12:22.730 --> 00:12:25.149
is something to think about. Okay, so based on

00:12:25.149 --> 00:12:26.610
these sources you brought us, here's a final

00:12:26.610 --> 00:12:29.230
thought to kind of leave you with. We saw with

00:12:29.230 --> 00:12:31.820
Anthropic's tool... that we can now start to see

00:12:31.820 --> 00:12:34.879
AI reasoning, right? And they even hinted at

00:12:34.879 --> 00:12:37.200
maybe being able to edit that reasoning directly

00:12:37.200 --> 00:12:40.039
in the future. So what happens when you combine

00:12:40.039 --> 00:12:42.940
the ability to look exactly at why an AI made

00:12:42.940 --> 00:12:46.440
a certain decision with the potential power to

00:12:46.440 --> 00:12:48.940
directly go in and change its internal logic?

00:12:49.220 --> 00:12:51.879
What does that mean for AI safety, for how we

00:12:51.879 --> 00:12:53.740
control these systems, or maybe even for how

00:12:53.740 --> 00:12:55.820
we think about intelligence itself down the line?

00:12:56.639 --> 00:12:58.740
Yeah, definitely a lot to ponder as these capabilities

00:12:58.740 --> 00:13:01.720
keep developing so, so quickly. Indeed. Thanks

00:13:01.720 --> 00:13:03.259
again for bringing your sources for this deep

00:13:03.259 --> 00:13:03.500
dive.
