WEBVTT

00:00:00.000 --> 00:00:02.000
Are we really looking at an age of autonomous

00:00:02.000 --> 00:00:05.480
genius just around the corner? Or is the next

00:00:05.480 --> 00:00:08.199
decade actually going to be more about grinding

00:00:08.199 --> 00:00:10.099
through debugging? Yeah, that's the core tension,

00:00:10.259 --> 00:00:13.060
isn't it? I mean, since ChatGPT arrived, something

00:00:13.060 --> 00:00:16.140
like $14 trillion in valuation has flooded the

00:00:16.140 --> 00:00:18.480
tech market. Right. And that huge number, it's

00:00:18.480 --> 00:00:20.500
really built on this promise, this idea that

00:00:20.500 --> 00:00:23.420
AI agents are going to automate, well, almost

00:00:23.420 --> 00:00:26.739
everything. Exactly. But what if? What if that

00:00:26.739 --> 00:00:30.620
valuation is resting on tools that... maybe aren't

00:00:30.620 --> 00:00:33.020
quite ready. You know, tools that one of the

00:00:33.020 --> 00:00:35.880
sharpest minds in AI just called, pretty bluntly,

00:00:35.899 --> 00:00:38.600
slop. So today we're going to do a deep dive

00:00:38.600 --> 00:00:41.579
into that, into this recent reality check that

00:00:41.579 --> 00:00:44.100
seems to be hitting the AI world. We really want

00:00:44.100 --> 00:00:45.979
to focus on the technical honesty here. We'll

00:00:45.979 --> 00:00:48.460
kick off with Andrej Karpathy's critique, which

00:00:48.460 --> 00:00:51.060
honestly deflates a lot of that hype. Then we'll

00:00:51.060 --> 00:00:53.240
pivot to what's happening right now. Applications,

00:00:53.240 --> 00:00:56.119
the good, the bad, and maybe the downright scary, like

00:00:56.119 --> 00:00:57.979
those viral deepfakes popping up everywhere.

00:00:59.229 --> 00:01:01.609
Definitely that. But we won't just stay there.

00:01:01.729 --> 00:01:04.890
We're also going to cover a truly mind-bending

00:01:04.890 --> 00:01:07.530
scientific development. Think artificial neurons

00:01:07.530 --> 00:01:12.390
finally learning to whisper to actual brain cells.

00:01:12.890 --> 00:01:15.209
Okay, so our goal here is to give you a clear

00:01:15.209 --> 00:01:18.209
view. Where does AI really stand today? We want

00:01:18.209 --> 00:01:20.590
to help separate the, let's say, the overhyped

00:01:20.590 --> 00:01:23.489
promises from the real, tangible, scientific

00:01:23.489 --> 00:01:25.730
steps forward. Let's get into it and unpack this

00:01:25.730 --> 00:01:28.150
dichotomy. Okay, first stop has to be Andrej

00:01:28.150 --> 00:01:31.170
Karpathy. And, you know, he's no outsider throwing

00:01:31.170 --> 00:01:34.609
stones. Ex-OpenAI co-founder, led AI at Tesla.

00:01:34.709 --> 00:01:36.530
Right, he's deep inside the field. Yeah. And

00:01:36.530 --> 00:01:39.109
he basically stood up and told everyone pumping

00:01:39.109 --> 00:01:42.859
the "year of the AI agent" idea to maybe take

00:01:42.859 --> 00:01:45.159
a breath. Especially with that $14 trillion number

00:01:45.159 --> 00:01:47.260
hanging in the air, he thinks reliable, truly

00:01:47.260 --> 00:01:49.939
autonomous agents are, what, 10 years away? At

00:01:49.939 --> 00:01:51.420
least 10 years. That's his estimate. And yeah,

00:01:51.480 --> 00:01:53.560
10 years changes everything for strategies based

00:01:53.560 --> 00:01:55.599
on near-term automation. So what were his main

00:01:55.599 --> 00:01:57.560
points? What's the bottleneck? Well, he didn't

00:01:57.560 --> 00:02:00.840
mince words. Agentic coding, you know, the dream

00:02:00.840 --> 00:02:03.519
of AI running complex projects on its own. He

00:02:03.519 --> 00:02:06.120
called it slop. Slop. And reinforcement learning,

00:02:06.260 --> 00:02:09.259
RL, which is... Pretty crucial for teaching agents

00:02:09.259 --> 00:02:13.560
complex behaviors. His word was terrible. You

00:02:13.560 --> 00:02:16.199
mentioned prompt drift earlier when we were prepping.

00:02:16.259 --> 00:02:18.080
For listeners, maybe explain what that means

00:02:18.080 --> 00:02:21.360
when an AI is trying a long task. Sure. So prompt

00:02:21.360 --> 00:02:24.740
drift. It's like the AI starts forgetting the

00:02:24.740 --> 00:02:26.680
original goal or the constraints you gave it

00:02:26.680 --> 00:02:28.780
as it works through steps. I wrestle with it

00:02:28.780 --> 00:02:31.259
myself, honestly. Just yesterday, trying to get

00:02:31.259 --> 00:02:33.300
a system to plan a trip. Three parts, right?

00:02:33.379 --> 00:02:36.289
Yeah. By step two, it... totally forgot I said

00:02:36.289 --> 00:02:39.050
no airports. So that's why Karpathy calls it slop.
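
NOTE
[Editor's aside] Since this exchange explains prompt drift, here is a minimal
Python sketch of the failure mode, assuming an agent that only feeds the model
its most recent messages. call_model, WINDOW, and the trip strings are
hypothetical placeholders, not any real product's API.
def call_model(messages):
    # Hypothetical stand-in for an LLM call; reports what it can still "see".
    visible = any("no airports" in m for m in messages)
    return "step done (constraint visible: %s)" % visible
WINDOW = 2  # assumed context budget, in messages
history = ["Plan a 3-part trip. Hard constraint: no airports."]
for step in ["1: book transport", "2: book hotels", "3: plan activities"]:
    context = history[-WINDOW:] + [step]  # the early constraint can drop out here
    history.append(call_model(context))
print(history[1:])  # by the third step, "constraint visible" flips to False
One common mitigation is to pin the original instructions into every call, for
example context = [history[0]] + history[-WINDOW:] + [step], so the goal can
never scroll out of the model's view.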

00:02:39.370 --> 00:02:41.610
Yeah, I kind of get it. That unreliability is

00:02:41.610 --> 00:02:44.050
real. And then he said something even more

00:02:44.050 --> 00:02:46.229
counterintuitive, I thought. That perfect memory

00:02:46.229 --> 00:02:49.150
in AI is actually a bug. Why is remembering everything

00:02:49.150 --> 00:02:51.650
bad? It sounds weird, right? But think about

00:02:51.650 --> 00:02:54.650
it. We don't want AI to just memorize facts like

00:02:54.650 --> 00:02:56.710
a database. That's overfitting. We want it to

00:02:56.710 --> 00:02:58.830
generalize to learn the underlying patterns or

00:02:58.830 --> 00:03:01.610
rules so it can handle new situations it hasn't

00:03:01.610 --> 00:03:04.610
seen before. Perfect memory can make it brittle.
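
NOTE
[Editor's aside] A tiny Python contrast for the memorization versus
generalization point above. The rule y = 2x and both functions are arbitrary
illustrations, not from the episode.
train = {1: 2, 2: 4, 3: 6}   # training examples of the hidden rule y = 2x
def memorizer(x):
    return train[x]          # "perfect memory": a pure lookup table
def generalizer(x):
    return 2 * x             # the learned underlying rule
print(generalizer(50))       # 100: handles an input it never saw
try:
    memorizer(50)            # nothing stored for 50
except KeyError:
    print("the memorizer breaks on unseen input")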

00:03:04.939 --> 00:03:07.219
Okay, that makes sense. Learn the rules, not

00:03:07.219 --> 00:03:09.180
just the answers. That's some needed technical

00:03:09.180 --> 00:03:11.860
honesty. Of course, you know, Elon Musk immediately

00:03:11.860 --> 00:03:14.719
shot back, challenged Karpathy to a face-off

00:03:14.719 --> 00:03:17.900
with Grok 5, but that banter doesn't really address

00:03:17.900 --> 00:03:20.759
the core issue Karpathy raised, does it? The

00:03:20.759 --> 00:03:23.719
reliability gap. No, it doesn't. And that gap

00:03:23.719 --> 00:03:27.020
is the crux of it. Getting from, say, 99% accuracy

00:03:27.020 --> 00:03:31.080
to 99.9%, that last little bit can be just as

00:03:31.080 --> 00:03:33.560
hard, sometimes harder, than getting from 0%

00:03:33.560 --> 00:03:36.699
to 90%. Right, because that last 0.9% covers

00:03:36.699 --> 00:03:39.460
all the weird edge cases, the unexpected stuff.

00:03:39.599 --> 00:03:41.639
The first 90% is the more predictable part.
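
NOTE
[Editor's aside] A quick Python illustration of why that last decimal of
reliability matters: per-step success rates compound multiplicatively. The
50-step task length is an assumed example, not a figure from the episode.
for p in (0.90, 0.99, 0.999):    # per-step reliability
    print(p, "per step:", round(p ** 50, 3), "chance of a clean 50-step run")
The printed values are roughly 0.005, 0.605, and 0.951, so going from 99% to
99.9% per step turns a near coin flip into a dependable run.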

00:03:41.860 --> 00:03:43.780
Exactly. And for real-world automation, you

00:03:43.780 --> 00:03:46.460
need that super high reliability. You need to

00:03:46.460 --> 00:03:48.740
conquer those edge cases. So, probing question

00:03:48.740 --> 00:03:50.560
time. If agents are hitting this reliability

00:03:50.560 --> 00:03:53.520
wall, is the main problem the underlying AI architecture

00:03:53.520 --> 00:03:56.039
itself, or is it just about feeding them more

00:03:56.039 --> 00:03:58.199
and better data? Karpathy's point suggests it's

00:03:58.199 --> 00:04:00.139
more fundamental. The big challenge is reliably

00:04:00.139 --> 00:04:02.740
scaling that final accuracy point. It's about

00:04:02.740 --> 00:04:04.659
the architecture handling novelty correctly.

00:04:04.979 --> 00:04:07.340
Okay, so it's more than just data. Right. It's

00:04:07.340 --> 00:04:09.599
about dependable performance in unpredictable

00:04:09.599 --> 00:04:13.240
scenarios. OK, shifting gears a bit, let's talk

00:04:13.240 --> 00:04:15.800
about what specialized AI is doing effectively

00:04:15.800 --> 00:04:19.420
right now. Anthropic, for example, just launched

00:04:19.420 --> 00:04:22.980
Claude for Life Sciences. Ah, so really niching

00:04:22.980 --> 00:04:25.279
down, not just general chat anymore. Exactly.

00:04:25.360 --> 00:04:27.839
This isn't just a chatbot. It offers like full

00:04:27.839 --> 00:04:30.540
stack research support. It integrates with actual

00:04:30.540 --> 00:04:33.759
lab tools. That's deep specialization. Impressive.

00:04:33.899 --> 00:04:36.240
And on the creative side, we saw Anthropic's

00:04:36.240 --> 00:04:39.920
head of dev relations create this insane PDF

00:04:39.920 --> 00:04:42.459
flipbook, apparently entirely through code prompts.

00:04:42.620 --> 00:04:45.879
It shows these things can handle complex, structured,

00:04:45.939 --> 00:04:48.220
creative work. That's the positive side. But

00:04:48.220 --> 00:04:51.839
that same power... Well, it brings risks. The sources

00:04:51.839 --> 00:04:54.620
flagged this rise of viral AI slop. You mentioned

00:04:54.620 --> 00:04:56.980
it earlier. Videos like the one with dogs supposedly

00:04:56.980 --> 00:04:59.519
rescuing babies. Yeah, that one hit like 6.3

00:04:59.519 --> 00:05:01.740
million views. And the scary part is most people

00:05:01.740 --> 00:05:03.560
watching apparently couldn't tell it was fake.

00:05:04.319 --> 00:05:07.720
That ability to fool us is getting really good

00:05:07.720 --> 00:05:11.160
really fast. It changes how we have to approach

00:05:11.160 --> 00:05:13.879
online content. And speaking of caution, there's

00:05:13.879 --> 00:05:17.360
this privacy aspect too. Meta, Facebook's parent, has a

00:05:17.360 --> 00:05:20.319
new opt-in button. Oh? Yeah. It potentially

00:05:20.319 --> 00:05:22.959
lets their AI look at your unpublished photos.

00:05:23.180 --> 00:05:25.139
Okay, that definitely needs a big flashing warning

00:05:25.139 --> 00:05:28.000
sign. What's the risk there? Well, the risk is

00:05:28.000 --> 00:05:31.500
the AI starts learning things about you, connecting

00:05:31.500 --> 00:05:34.439
your face or your private life to data points

00:05:34.439 --> 00:05:36.699
you never, ever intended to make public. It just,

00:05:36.759 --> 00:05:38.500
you know, increases your digital vulnerability

00:05:38.500 --> 00:05:41.339
over time. Yeah. User awareness is key here.

00:05:41.699 --> 00:05:44.170
You need to know what you're opting into. Good

00:05:44.170 --> 00:05:46.209
warning. On the flip side, though, there are

00:05:46.209 --> 00:05:48.810
genuinely useful new tools. Perplexity added

00:05:48.810 --> 00:05:51.149
language learning features, flashcards, translations,

00:05:51.490 --> 00:05:54.029
feedback. Looks handy. Yeah, practical stuff.

00:05:54.230 --> 00:05:56.129
And in medicine, this company, OpenEvidence,

00:05:56.209 --> 00:05:58.810
think ChatGPT specifically for doctors. They

00:05:58.810 --> 00:06:01.769
just raised $200 million. Wow. And they're apparently

00:06:01.769 --> 00:06:04.029
handling 15 million consultations a month already.

00:06:04.170 --> 00:06:06.189
That's a significant scale in a critical field.

00:06:06.470 --> 00:06:09.250
Okay, another probing question. Maybe for me

00:06:09.250 --> 00:06:12.810
this time, from you. Given how easily this fake

00:06:12.810 --> 00:06:15.829
media goes viral, how do we even begin to adjust

00:06:15.829 --> 00:06:18.290
our online skepticism? That's a tough one. Yeah,

00:06:18.370 --> 00:06:20.569
I think we almost have to default to disbelief

00:06:20.569 --> 00:06:23.660
now. Assume virality doesn't mean truth. Maybe

00:06:23.660 --> 00:06:25.699
the opposite sometimes. All right, let's talk

00:06:25.699 --> 00:06:27.939
practical workflows. If you're running your own

00:06:27.939 --> 00:06:30.339
show, a solopreneur, the sources had some interesting

00:06:30.339 --> 00:06:32.819
ideas about building a kind of virtual team using

00:06:32.819 --> 00:06:35.680
AI tools. Yeah, moving beyond just single-use

00:06:35.680 --> 00:06:38.060
apps. And for the more advanced users, there's

00:06:38.060 --> 00:06:40.459
this concept they call "build your AI twin," basically

00:06:40.459 --> 00:06:43.759
a digital you for making videos. Okay, AI twin.

00:06:43.939 --> 00:06:46.899
The sources mentioned a three-tool method. Can

00:06:46.899 --> 00:06:49.439
you break down the principle behind that functionally?

00:06:49.579 --> 00:06:52.240
What are the Lego blocks, so to speak? Sure.

00:06:52.360 --> 00:06:54.939
It's like an assembly line. Tool one would handle

00:06:54.939 --> 00:06:57.779
your script, the logic, the words. Tool two is

00:06:57.779 --> 00:07:00.339
the visual engine, the part that generates the

00:07:00.339 --> 00:07:03.300
photorealistic video of you, syncing lip movements,

00:07:03.399 --> 00:07:06.680
expressions. Think deepfake tech, but you control

00:07:06.680 --> 00:07:09.439
it. Got it. And tool three handles cloning your

00:07:09.439 --> 00:07:11.550
voice, and maybe putting it all together for

00:07:11.550 --> 00:07:13.790
distribution. So yeah, stacking those blocks,

00:07:13.790 --> 00:07:16.930
script, visuals, and voice, to create an avatar

00:07:16.930 --> 00:07:19.730
that looks and sounds like you. Pretty wild automation

00:07:19.730 --> 00:07:21.790
potential there. Which brings us back to agents.
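
NOTE
[Editor's aside] The three-tool assembly line described above, sketched as
Python functions. Every name here is a hypothetical placeholder for "a tool
in this category", not a real product's API.
def write_script(topic):                  # tool 1: the words and the logic
    return "60-second script about " + topic
def render_avatar_video(script):          # tool 2: photoreal visuals, lip sync
    return {"frames": "...", "script": script}
def clone_voice_and_publish(video):       # tool 3: voice clone, final assembly
    return {"audio": "cloned-voice", **video}
clip = clone_voice_and_publish(render_avatar_video(write_script("AI news")))
print(clip["script"])                     # each stage's output feeds the next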

00:07:22.269 --> 00:07:24.589
ChatGPT agent mode was mentioned. The idea is

00:07:24.589 --> 00:07:26.389
you give it a research task and it comes back

00:07:26.389 --> 00:07:28.230
with a finished report. And this is where Karpathy's

00:07:28.230 --> 00:07:30.529
warnings echo, right? The sources gave

00:07:30.529 --> 00:07:32.370
four examples of how you could set this up now.

00:07:33.699 --> 00:07:35.939
But can we trust it automating complex knowledge work like that?

00:07:36.060 --> 00:07:38.339
Yeah. Well, the risk of the AI making stuff up

00:07:38.339 --> 00:07:40.759
or just getting it wrong is still high. Definitely.

00:07:40.920 --> 00:07:43.720
So thinking back to Karpathy's reliability warnings,

00:07:43.980 --> 00:07:46.199
if you are going to use something like agent

00:07:46.199 --> 00:07:48.699
mode for a serious project, what's the absolute

00:07:48.699 --> 00:07:51.660
number one safety measure? Constant human oversight,

00:07:51.879 --> 00:07:54.519
period. You absolutely have to review the output

00:07:54.519 --> 00:07:57.540
carefully. Don't trust, verify. Human in the

00:07:57.540 --> 00:08:00.379
loop is still essential. Okay. All right, let's

00:08:00.379 --> 00:08:02.470
do some quick hits. Rapid fire updates. Google

00:08:02.470 --> 00:08:05.069
put its Maps tool right into the Gemini API,

00:08:05.370 --> 00:08:08.529
so AI can now leverage real-world location data

00:08:08.529 --> 00:08:11.110
more easily. Interesting integration. Adobe launched

00:08:11.110 --> 00:08:13.889
a Foundry service that lets companies build their

00:08:13.889 --> 00:08:17.269
own custom gen AI models trained on their specific

00:08:17.269 --> 00:08:20.509
data. More bespoke AI. Makes sense. And market

00:08:20.509 --> 00:08:24.029
moves. Meta AI's app downloads reportedly jumped

00:08:24.029 --> 00:08:26.529
quite a bit after they rolled out that Vibes

00:08:26.529 --> 00:08:29.029
AI feed. Trying to get that engagement. And Anthropic

00:08:29.029 --> 00:08:31.389
made its coding assistant available as a web

00:08:31.389 --> 00:08:33.820
app, making it easier for people to access. Okay.

00:08:33.919 --> 00:08:36.019
And policy. There was something serious there,

00:08:36.039 --> 00:08:37.820
too. Yeah, a significant one. The Department

00:08:37.820 --> 00:08:40.720
of Homeland Security asked OpenAI to provide

00:08:40.720 --> 00:08:44.460
user data, basically unmasking users, in a child

00:08:44.460 --> 00:08:47.769
abuse investigation. Wow. That really highlights

00:08:47.769 --> 00:08:50.210
the friction, doesn't it, between privacy expectations

00:08:50.210 --> 00:08:53.169
and law enforcement demands on these huge platforms.

00:08:53.309 --> 00:08:55.350
Absolutely. A major tension point that's only

00:08:55.350 --> 00:08:57.289
going to grow. Okay. Let's completely shift gears

00:08:57.289 --> 00:09:00.289
now, away from software agents and policy debates,

00:09:00.470 --> 00:09:03.429
towards something truly fundamental, this UMass

00:09:03.429 --> 00:09:06.389
Amherst breakthrough, artificial neurons talking

00:09:06.389 --> 00:09:09.490
to brain cells. Yes. This is really cool science.

00:09:09.710 --> 00:09:13.009
And the key word you used before was whisper.

00:09:13.800 --> 00:09:16.519
Right, because previous attempts were more like

00:09:16.519 --> 00:09:20.159
shouting. Exactly. Older artificial neurons needed

00:09:20.159 --> 00:09:23.240
way more voltage, like 10 times more. And they

00:09:23.240 --> 00:09:27.039
used maybe 100 times more power just blasting

00:09:27.039 --> 00:09:29.730
the biological cells, relatively speaking. But

00:09:29.730 --> 00:09:32.750
this new one gets down to 0.1 volts. Around

00:09:32.750 --> 00:09:35.769
0.1 volts, yeah, which is roughly the same energy

00:09:35.769 --> 00:09:38.210
level biological neurons use to communicate with

00:09:38.210 --> 00:09:40.490
each other. That's the game changer. It's finally

00:09:40.490 --> 00:09:42.629
speaking the same language, electrically speaking.
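
NOTE
[Editor's aside] A back-of-envelope Python check that the two figures quoted
earlier (roughly 10 times the voltage, roughly 100 times the power) are
mutually consistent, assuming simple resistive dissipation where power scales
with the square of voltage (P = V^2 / R).
v_new, v_old = 0.1, 1.0      # volts: this device vs. the roughly 10x older designs
print((v_old / v_new) ** 2)  # 100.0, matching the ~100x power gap quoted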

00:09:42.809 --> 00:09:45.490
That efficiency difference is huge. How did they

00:09:45.490 --> 00:09:48.440
achieve that gentle touch? What's the tech? The

00:09:48.440 --> 00:09:50.600
magic ingredient seems to be these things called

00:09:50.600 --> 00:09:54.139
protein nanowires. Protein nanowires. Yeah, fascinating

00:09:54.139 --> 00:09:57.379
stuff. They're actually grown by bacteria. Specific

00:09:57.379 --> 00:10:00.019
kinds of bacteria produce these tiny wires that

00:10:00.019 --> 00:10:02.399
naturally conduct electricity. Bacteria are building

00:10:02.399 --> 00:10:04.820
components for neural interfaces. Kind of

00:10:04.820 --> 00:10:06.779
mind-blowing, right? Yeah. And these nanowires have

00:10:06.779 --> 00:10:09.440
three huge advantages for interfacing with biology.

00:10:09.720 --> 00:10:12.120
Okay, lay them out for us. First, they're stable

00:10:12.120 --> 00:10:14.419
in wet, salty environments, like, you know, inside

00:10:14.419 --> 00:10:17.470
the body. That's critical. Second, they operate at

00:10:17.470 --> 00:10:20.309
that ultra-low voltage, the 0.1 volts we mentioned,

00:10:20.309 --> 00:10:23.330
matching neurons. And third, because of that

00:10:23.330 --> 00:10:25.809
low voltage, they are incredibly energy efficient,

00:10:25.970 --> 00:10:28.970
like 100 times less power needed than the old

00:10:28.970 --> 00:10:31.590
approaches. Okay, hold on. Grown by bacteria,

00:10:31.830 --> 00:10:34.700
does that mean we could like... brew these things,

00:10:34.919 --> 00:10:37.200
is this potentially scalable in a way silicon

00:10:37.200 --> 00:10:40.000
isn't for biointerfaces? That's exactly the

00:10:40.000 --> 00:10:42.039
long-term potential people are excited about. This

00:10:42.039 --> 00:10:44.419
isn't just fiddly lab work. It's a pathway

00:10:44.419 --> 00:10:47.139
to maybe producing these components using biological

00:10:47.139 --> 00:10:50.919
processes. Scale, reproducibility, whoa. I mean,

00:10:50.940 --> 00:10:53.419
just imagine scaling that gentle, precise communication

00:10:53.419 --> 00:10:56.659
without frying the tissue. That's the real moment

00:10:56.659 --> 00:10:59.340
of wonder here. Wow, okay. So that biological

00:10:59.340 --> 00:11:02.080
compatibility, that gentle whisper. It means

00:11:02.080 --> 00:11:04.019
for the first time, an artificial neuron can

00:11:04.019 --> 00:11:06.440
have a real two -way conversation with a biological

00:11:06.440 --> 00:11:09.100
one, not just stimulate it crudely. Precisely.

00:11:09.100 --> 00:11:12.419
It's communication, not just activation. And

00:11:12.419 --> 00:11:14.980
that opens up entirely new possibilities for

00:11:14.980 --> 00:11:17.960
neural interfaces. Think wearables, maybe even

00:11:17.960 --> 00:11:20.159
implants someday, that can interact with our

00:11:20.159 --> 00:11:22.600
nervous system much more naturally, much more

00:11:22.600 --> 00:11:26.200
seamlessly. Huge potential for therapies. OK,

00:11:26.299 --> 00:11:29.220
final probing question on this. Does this breakthrough,

00:11:29.399 --> 00:11:32.399
this biocompatibility angle, does it suggest

00:11:32.399 --> 00:11:35.539
that maybe neuro AI, the hardware interface side,

00:11:35.720 --> 00:11:39.059
could actually leapfrog the pure software agents

00:11:39.059 --> 00:11:41.820
Karpathy was critiquing? It's a fascinating thought.

00:11:41.980 --> 00:11:44.220
This foundational work certainly suggests neuro

00:11:44.220 --> 00:11:46.639
interfaces have very strong potential. They might

00:11:46.639 --> 00:11:49.240
bypass some software reliability hurdles. Definitely

00:11:49.240 --> 00:11:51.279
something to watch. So what a journey today.

00:11:51.419 --> 00:11:53.299
We've really seen two sides of the coin. On one

00:11:53.299 --> 00:11:56.490
side, you've got this massive $14 trillion AI

00:11:56.490 --> 00:11:59.370
hype machine. Yeah, built on agentic coding that,

00:11:59.429 --> 00:12:01.769
according to key insiders, is still kind of struggling.

00:12:01.870 --> 00:12:04.610
Still maybe sloppy in places. Requiring that constant

00:12:04.610 --> 00:12:06.990
human oversight for the immediate future. We're

00:12:06.990 --> 00:12:09.309
in the debugging grind, maybe. But then on the

00:12:09.309 --> 00:12:11.769
other side, there's this genuinely revolutionary

00:12:11.769 --> 00:12:16.110
science. The 0.1-volt neuron whisperer. Foundational

00:12:16.110 --> 00:12:19.809
stuff. Totally different level. That UMass Amherst

00:12:19.809 --> 00:12:22.929
work points toward a future that might rely more

00:12:22.929 --> 00:12:26.570
on physical, deeply biocompatible interfaces,

00:12:26.850 --> 00:12:29.850
technology that talks with our biology, not just

00:12:29.850 --> 00:12:32.590
at our screens. It really makes you think. So

00:12:32.590 --> 00:12:34.009
here's a final thought, something for you, the

00:12:34.009 --> 00:12:36.350
listener, to chew on. If Karpathy's timeline

00:12:36.350 --> 00:12:39.500
is even close to right, if software AI faces

00:12:39.500 --> 00:12:42.080
a tough decade-long slog to get truly reliable,

00:12:42.379 --> 00:12:45.200
should the industry, the investment, maybe start

00:12:45.200 --> 00:12:47.740
shifting away from purely autonomous software

00:12:47.740 --> 00:12:50.759
dreams and more towards these biocompatible hardware

00:12:50.759 --> 00:12:53.620
breakthroughs like the whispering neuron? Is

00:12:53.620 --> 00:12:56.120
the real next frontier physical, not just digital?

00:12:56.240 --> 00:12:58.620
A really interesting question to ponder. Indeed.

00:12:58.700 --> 00:13:00.840
Well, thank you for joining us on this deep dive,

00:13:01.000 --> 00:13:03.360
navigating the hype and the breakthroughs. We

00:13:03.360 --> 00:13:05.039
definitely encourage you to dig into the source

00:13:05.039 --> 00:13:06.220
materials we touched on today.
