WEBVTT

00:00:00.000 --> 00:00:03.819
You know, there's this fear I hear constantly

00:00:03.819 --> 00:00:06.299
these days. It's almost become this mantra in

00:00:06.299 --> 00:00:09.779
coffee shops, in boardrooms. Be careful. Using

00:00:09.779 --> 00:00:12.419
AI is going to make you stupid. Right. Or lazy.

00:00:12.720 --> 00:00:14.740
That your brain's just going to atrophy. Exactly.

00:00:14.820 --> 00:00:17.879
This terrifying image of us just slowly forgetting

00:00:17.879 --> 00:00:20.179
how to think because the machines are doing it

00:00:20.179 --> 00:00:22.760
all for us. It really is the prevailing anxiety

00:00:22.760 --> 00:00:25.620
of our time. But here's the twist. And this is

00:00:25.620 --> 00:00:27.300
really the core of what we're looking at today.

00:00:28.070 --> 00:00:31.510
AI doesn't inherently make you worse. It doesn't

00:00:31.510 --> 00:00:34.170
rot your brain. It's the way you use it that

00:00:34.170 --> 00:00:37.049
keeps you stagnant. The danger isn't the tool.

00:00:37.270 --> 00:00:39.890
It's using it as a crutch instead of a turbocharger.

00:00:40.229 --> 00:00:43.189
That is a critical distinction. So welcome to

00:00:43.189 --> 00:00:45.929
the deep dive. Today we are tackling mastering

00:00:45.929 --> 00:00:48.270
the mind in the AI age. We're exploring how to

00:00:48.270 --> 00:00:51.579
move from that passive consumption to active

00:00:51.579 --> 00:00:53.600
engagement. Exactly. And we've got a roadmap.

00:00:53.719 --> 00:00:56.079
We're going to look at why AI lies to us and

00:00:56.079 --> 00:00:58.619
why it's not technically lying. We'll tackle

00:00:58.619 --> 00:01:01.200
this illusion of fluency, which is a fascinating

00:01:01.200 --> 00:01:03.759
psychological trap. Then we'll break down a kind

00:01:03.759 --> 00:01:06.180
of reimagined pyramid of thinking to see what

00:01:06.180 --> 00:01:08.599
you should outsource and what you absolutely

00:01:08.599 --> 00:01:11.299
have to keep. And the practical part. And finally,

00:01:11.620 --> 00:01:15.280
a specific four-step routine to learn with AI,

00:01:15.700 --> 00:01:17.879
not by AI. I like the sound of that. Learning

00:01:17.879 --> 00:01:21.359
with, not by. So let's start at that foundational

00:01:21.359 --> 00:01:24.359
level, the nature of the beast. Why is the information

00:01:24.359 --> 00:01:27.120
we get from these large language models so often

00:01:27.120 --> 00:01:30.799
just wrong? Yeah, it really comes down to probability.

00:01:30.939 --> 00:01:33.939
We tend to anthropomorphize these models. We

00:01:33.939 --> 00:01:35.879
think they know the truth. But they don't. They

00:01:35.879 --> 00:01:38.359
don't. Whether it's ChatGPT or Claude, they're

00:01:38.359 --> 00:01:40.400
just predicting what word should reasonably come

00:01:40.400 --> 00:01:42.719
next to make a sentence sound good. So it's optimizing

00:01:42.719 --> 00:01:46.239
for plausibility, not for truth. It's basically

00:01:46.239 --> 00:01:49.430
a... super advanced autocomplete. Precisely.

00:01:49.750 --> 00:01:52.189
The source material calls this the hallucination

00:01:52.189 --> 00:01:55.269
phenomenon. The AI is trained to be helpful and

00:01:55.269 --> 00:01:56.870
confident, so if it doesn't know an answer, it

00:01:56.870 --> 00:01:58.349
might just make something up that looks like

00:01:58.349 --> 00:02:01.230
an answer, just to please you. It's like a

00:02:01.230 --> 00:02:02.849
people-pleasing intern who's afraid to say, I don't

00:02:02.849 --> 00:02:06.349
know. That is the perfect analogy. It wants that

00:02:06.349 --> 00:02:09.770
gold star, so it invents a citation. And there's

00:02:09.770 --> 00:02:11.870
another layer here. It's called source blindness.

00:02:12.110 --> 00:02:14.830
Source blindness. Yeah, the AI reads billions

00:02:14.830 --> 00:02:17.590
of websites, but it struggles to tell the difference

00:02:17.590 --> 00:02:21.530
between a peer-reviewed article and a random

00:02:21.530 --> 00:02:24.169
angry comment on a forum. It just ingests it

00:02:24.169 --> 00:02:27.650
all. That sounds dangerous if you're trying to

00:02:27.650 --> 00:02:30.229
learn something complex. Yeah. If it's just blending

00:02:30.229 --> 00:02:32.650
fact and fiction, how do we use it safely? You

00:02:32.650 --> 00:02:35.189
need a strategy. The guide we're looking at suggests

00:02:35.189 --> 00:02:37.810
a risk classification system. Think of it like

00:02:37.810 --> 00:02:40.689
triage. First, you've got low-risk topics. OK,

00:02:40.770 --> 00:02:43.520
like what? Simple stuff. How to cook rice. What

00:02:43.520 --> 00:02:45.560
is Newton's second law? Things where there's

00:02:45.560 --> 00:02:47.780
a huge consensus. So the data is overwhelmingly

00:02:47.780 --> 00:02:49.599
consistent. Right. You can probably trust it.

00:02:49.599 --> 00:02:52.719
But then you have high-risk topics, new fields,

00:02:53.139 --> 00:02:56.759
deep political debates, specific medical advice.

00:02:57.080 --> 00:02:59.419
For these, the AI should only be used for suggestions.

00:02:59.479 --> 00:03:01.400
You have to check the output against textbooks

00:03:01.400 --> 00:03:04.360
or professionals. So this implies the strategy

00:03:04.360 --> 00:03:06.460
isn't just about fact-checking everything after

00:03:06.460 --> 00:03:09.419
the fact. No. It's about triaging information

00:03:09.419 --> 00:03:12.219
based on complexity before you even type the

00:03:12.219 --> 00:03:14.740
prompt. You have to ask, if this is wrong, will

00:03:14.740 --> 00:03:18.460
I know? That's a great heuristic. But let's say

00:03:18.460 --> 00:03:20.960
the AI gets it right. The answer is accurate.

00:03:22.159 --> 00:03:24.780
There's still this problem. The source calls

00:03:24.780 --> 00:03:28.250
it the illusion of fluency. And this one. This

00:03:28.250 --> 00:03:30.110
really resonated with me. Oh, this is the trap,

00:03:30.189 --> 00:03:32.550
the big one. It's that feeling of being incredibly

00:03:32.550 --> 00:03:34.729
smart while you're chatting with the bot. Yeah.

00:03:35.009 --> 00:03:36.770
You ask a question. It gives this brilliant,

00:03:36.990 --> 00:03:38.770
nuanced answer. And you're nodding along, thinking,

00:03:38.770 --> 00:03:40.889
yes, exactly. I get it. You feel fluent. But

00:03:40.889 --> 00:03:43.889
then you close the laptop. And the mind goes

00:03:43.889 --> 00:03:46.930
blank. A complete blank page. You mistook access

00:03:46.930 --> 00:03:49.409
to the answer for mastery of the answer. The

00:03:49.409 --> 00:03:51.409
source makes a really hard distinction here between

00:03:51.409 --> 00:03:54.030
productivity and learning. OK. Productivity is

00:03:54.030 --> 00:03:56.270
getting an essay written in two minutes. Task

00:03:56.270 --> 00:04:00.080
done. But your growth? Zero. It's like watching

00:04:00.080 --> 00:04:02.020
a workout video and thinking you got stronger.

00:04:02.439 --> 00:04:05.180
Yes. You watched the heavy lifting. You didn't

00:04:05.180 --> 00:04:08.199
do it. Real learning needs that mental sweat.

00:04:08.759 --> 00:04:10.939
When you write an essay yourself, you're wrestling

00:04:10.939 --> 00:04:13.960
with words. You're structuring arguments. You're

00:04:13.960 --> 00:04:16.720
literally building neural pathways. And if you

00:04:16.720 --> 00:04:19.060
let the AI do that, you're just robbing yourself

00:04:19.060 --> 00:04:21.560
of the cognitive workout. You are. The source

00:04:21.560 --> 00:04:24.660
breaks this into two types of reliance. There's

00:04:24.660 --> 00:04:27.860
positive reliance. The booster. OK. That's where

00:04:27.860 --> 00:04:31.160
you outsource low-value stuff. Summarizing a

00:04:31.160 --> 00:04:34.639
long transcript, fixing grammar, it saves your

00:04:34.639 --> 00:04:36.660
mental energy for the real analysis. And the

00:04:36.660 --> 00:04:39.579
other one. Negative reliance. The crutch. That's

00:04:39.579 --> 00:04:41.779
letting the AI think from start to finish. I

00:04:41.779 --> 00:04:43.699
think we've all been guilty of the crutch approach

00:04:43.699 --> 00:04:46.040
when we're tired or rushing. Just write this

00:04:46.040 --> 00:04:48.259
email for me. So how do we catch ourselves? Don't

00:04:48.259 --> 00:04:51.459
count tasks completed. Track your internal understanding.

00:04:51.800 --> 00:04:53.939
The source suggests the five-year-old test.

00:04:54.480 --> 00:04:56.480
Can you explain the topic to a child without

00:04:56.480 --> 00:04:58.980
looking at a screen? Ah, the Feynman technique.

00:04:59.319 --> 00:05:01.379
Exactly. If you get stuck and you have to open

00:05:01.379 --> 00:05:04.399
ChatGPT to find the words, you don't really

00:05:04.399 --> 00:05:06.139
know it. You're just renting the knowledge. You

00:05:06.139 --> 00:05:08.259
don't own it. So does this mean we should just

00:05:08.259 --> 00:05:11.819
stop using AI for any kind of output? No, not

00:05:11.819 --> 00:05:13.740
at all. Just don't let it replace the thinking

00:05:13.740 --> 00:05:16.480
process itself. OK, let's unpack that, the thinking

00:05:16.480 --> 00:05:19.240
process. Because thinking is such a broad term.

00:05:19.480 --> 00:05:22.819
The source material introduces a hierarchy, a

00:05:22.819 --> 00:05:26.279
sort of reimagined Bloom's taxonomy for the AI

00:05:26.279 --> 00:05:28.939
age. Right. Picture a pyramid. The bottom three

00:05:28.939 --> 00:05:32.160
levels are remember, understand, and basic apply.

00:05:32.699 --> 00:05:36.360
So facts, basic meanings, following steps. Exactly.

00:05:36.680 --> 00:05:39.000
And the argument is, let the robots have these.

00:05:39.120 --> 00:05:41.660
Don't waste your limited cognitive load memorizing

00:05:41.660 --> 00:05:44.000
figures you can look up in three seconds. AI

00:05:44.000 --> 00:05:46.100
is great at the bottom of the pyramid. It's perfect

00:05:46.100 --> 00:05:48.779
recall. Perfect recall. But the top three levels,

00:05:49.019 --> 00:05:50.980
that's the human advantage. OK, walk us through

00:05:50.980 --> 00:05:54.060
those. What's left for us? First, analyze. This

00:05:54.060 --> 00:05:56.259
is about connecting the dots. So don't just ask

00:05:56.259 --> 00:05:59.040
for a definition of SEO. Ask yourself, how is

00:05:59.040 --> 00:06:02.100
SEO different from Facebook ads in the specific

00:06:02.100 --> 00:06:04.600
context of my new coffee shop? The context is

00:06:04.600 --> 00:06:07.050
the key. The AI knows the definition, but I know

00:06:07.050 --> 00:06:10.430
my business. Right. Then above that is evaluate.

00:06:11.310 --> 00:06:14.329
Judgment. An AI can give you a list of pros and

00:06:14.329 --> 00:06:16.709
cons for a decision, but it can't tell you what's

00:06:16.709 --> 00:06:19.370
suitable for your values, your risk tolerance.

00:06:19.949 --> 00:06:22.129
It lacks skin in the game. And at the very top

00:06:22.129 --> 00:06:24.509
of the pyramid. Create. Making something new.

00:06:25.009 --> 00:06:28.490
Now AI creates by remixing old data. It's a synthesis

00:06:28.490 --> 00:06:32.430
engine. But humans. Humans can have breakthroughs

00:06:32.430 --> 00:06:35.550
that defy the data, because we have lived experience.

00:06:35.730 --> 00:06:37.750
So looking at this pyramid, do you think AI can

00:06:37.750 --> 00:06:40.110
eventually climb that ladder? Can it ever truly

00:06:40.110 --> 00:06:43.230
create? Maybe one day. But right now, it lacks

00:06:43.230 --> 00:06:46.209
that real experience and context. It mimics creation,

00:06:46.529 --> 00:06:48.370
but it doesn't understand the emotional weight

00:06:48.370 --> 00:06:50.610
behind it. That brings us to the how. We've got

00:06:50.610 --> 00:06:52.569
the theory. We know we need to stay at the top

00:06:52.569 --> 00:06:55.110
of the pyramid. But the source outlines a very

00:06:55.110 --> 00:06:58.230
specific four-step routine for smart learning.

00:06:58.389 --> 00:07:00.389
Yeah, this routine is gold. It's designed to

00:07:00.389 --> 00:07:02.189
force that mental sweat we were talking about.

00:07:02.329 --> 00:07:05.170
Step one is background and big picture. You use

00:07:05.170 --> 00:07:07.790
the AI to scan a ton of data and just summarize

00:07:07.790 --> 00:07:10.689
the main concept. OK, so a prompt like, summarize

00:07:10.689 --> 00:07:12.990
behavioral economics with daily life examples.

00:07:13.509 --> 00:07:16.000
Right. Get the lay of the land. But then, and

00:07:16.000 --> 00:07:18.920
this is so important, step two is the analog

00:07:18.920 --> 00:07:22.339
step. Analog, you mean like paper? Physical paper,

00:07:22.639 --> 00:07:25.839
a pen. You actually stop using the AI. You step

00:07:25.839 --> 00:07:28.079
away from the screen and you draw a mind map.

00:07:28.379 --> 00:07:31.000
You force your own brain to find the connections.

00:07:31.519 --> 00:07:35.459
How does concept A relate to concept B? Why is

00:07:35.459 --> 00:07:37.860
that physical paper step so important? I mean,

00:07:37.939 --> 00:07:39.399
couldn't I just do that in a different window

00:07:39.399 --> 00:07:41.720
or something? You could, but the physical act

00:07:41.720 --> 00:07:44.600
changes things. It forces active recall without

00:07:44.600 --> 00:07:47.100
digital crutches. When the screen is off, your

00:07:47.100 --> 00:07:49.519
brain has to struggle to retrieve the information,

00:07:49.779 --> 00:07:52.259
and that struggle is where the memory gets cemented.

00:07:52.360 --> 00:07:54.399
Okay, so we struggled. We have our messy handwritten

00:07:54.399 --> 00:07:57.120
mind map. What's step three? Step three is the

00:07:57.120 --> 00:08:00.139
Socratic Challenger. You go back to the AI, but

00:08:00.139 --> 00:08:02.740
you don't ask for answers. You tell it... I'm

00:08:02.740 --> 00:08:04.779
explaining the anchoring effect for negotiation,

00:08:05.279 --> 00:08:07.600
act as a tough expert, and ask me three difficult

00:08:07.600 --> 00:08:10.360
questions to test my logic. That's intimidating.

00:08:10.439 --> 00:08:12.339
You're inviting it to criticize you. It should

00:08:12.339 --> 00:08:15.180
be. Whoa. I mean, imagine having Socrates in

00:08:15.180 --> 00:08:17.259
your pocket, ready to poke holes in your logic

00:08:17.259 --> 00:08:19.740
at any moment. That is powerful. Right. Most

00:08:19.740 --> 00:08:21.939
people use AI to tell them they're right. The

00:08:21.939 --> 00:08:24.139
smart learner says, tell me where I'm wrong.

00:08:24.920 --> 00:08:28.769
And step four. Refine and create. You take the

00:08:28.769 --> 00:08:31.370
feedback from that Socratic session and you fix

00:08:31.370 --> 00:08:34.909
your mistakes. This is the create level. By now,

00:08:34.909 --> 00:08:36.529
you're not just repeating what the bot said.

00:08:36.570 --> 00:08:38.529
You've processed it, you've connected it, you've

00:08:38.529 --> 00:08:40.750
defended it, and you've refined it. You own it

00:08:40.750 --> 00:08:44.610
now. It effectively gamifies the learning process.

00:08:44.669 --> 00:08:46.450
It really just changes the dynamic completely.

00:08:46.490 --> 00:08:49.990
You remain the boss of your own brain. We are

00:08:49.990 --> 00:08:52.330
back. I want to shift gears a little bit to the

00:08:52.330 --> 00:08:54.470
professional side of this. There's a section

00:08:54.470 --> 00:08:57.960
in the source material about career sabotage,

00:08:58.399 --> 00:09:01.799
which sounds dramatic, but the logic really

00:09:01.799 --> 00:09:03.840
holds up. It's a serious warning. Companies aren't

00:09:03.840 --> 00:09:05.940
going to pay high salaries for average reports

00:09:05.940 --> 00:09:08.779
that ChatGPT can generate for free. If your output

00:09:08.779 --> 00:09:11.299
looks exactly like the AI's output, you are.

00:09:11.440 --> 00:09:13.779
You're redundant. So the value shifts from doing

00:09:13.779 --> 00:09:17.120
the work to judging the work. Exactly. Critical

00:09:17.120 --> 00:09:19.559
thinking becomes the strongest habit. You have

00:09:19.559 --> 00:09:22.759
to constantly doubt the AI. Ask, why did it choose

00:09:22.759 --> 00:09:25.960
this word? What perspective is missing? You need

00:09:25.960 --> 00:09:28.259
to be able to smell when the AI is wrong, and

00:09:28.259 --> 00:09:30.539
that requires deep expertise. You know, I have

00:09:30.539 --> 00:09:34.179
to admit, I still wrestle with prompt drift myself.

00:09:34.320 --> 00:09:36.200
I'll start with really good intentions, but then

00:09:36.200 --> 00:09:38.940
I get lazy and just type, you know, write a budget

00:09:38.940 --> 00:09:41.460
for me. I think we all do, but that is the weak

00:09:41.460 --> 00:09:44.080
professional move. The source outlines a formula.

00:09:44.259 --> 00:09:47.799
It's role plus context plus task plus goal, but

00:09:47.799 --> 00:09:50.179
with a specific twist they recommend. What's

00:09:50.179 --> 00:09:52.759
the twist? Constraints, specifically telling

00:09:52.759 --> 00:09:56.149
the AI to wait. Wait? Yeah. Instead of make me

00:09:56.149 --> 00:09:58.690
a budget, you try this. You are a senior financial

00:09:58.690 --> 00:10:01.230
consultant. I earn $1,000 a month, and I want

00:10:01.230 --> 00:10:04.190
to save $5,000. Do not give me a plan yet. First,

00:10:04.309 --> 00:10:06.690
ask me five critical questions about my habits

00:10:06.690 --> 00:10:09.330
before giving a plan. Oh, that's good. Do not

00:10:09.330 --> 00:10:11.730
give me a plan yet. You're forcing it to gather

00:10:11.730 --> 00:10:13.990
the context it doesn't have. That is the key.

00:10:14.149 --> 00:10:16.830
You are treating the AI like a talented intern.

00:10:17.149 --> 00:10:19.769
If you just say make a budget, the intern guesses.

00:10:20.149 --> 00:10:22.870
If you say interview me first, the intern learns.

00:10:23.210 --> 00:10:25.690
So the quality of the answer depends entirely

00:10:25.690 --> 00:10:29.230
on the setup? 100%. Treat AI like a talented

00:10:29.230 --> 00:10:32.750
intern, not a magic button. Okay, let's ground

00:10:32.750 --> 00:10:35.370
this with some quick fire examples. The source

00:10:35.370 --> 00:10:38.570
breaks this down for vocabulary, reading, and

00:10:38.570 --> 00:10:41.230
coding. Yeah, let's take vocabulary. The lazy

00:10:41.230 --> 00:10:43.870
way is to ask for a definition. You read it,

00:10:43.870 --> 00:10:46.750
you forget it. The smart way. Ask the AI to write

00:10:46.750 --> 00:10:49.110
a funny story with the word, but leave a blank

00:10:49.110 --> 00:10:51.009
space where the word should be. You have to fill

00:10:51.009 --> 00:10:53.269
it in. Active recall again. You're participating.

00:10:53.470 --> 00:10:55.990
Exactly. For reading books. Don't just ask for

00:10:55.990 --> 00:10:57.750
a summary. Pick the hardest chapter, the one

00:10:57.750 --> 00:11:00.289
you didn't quite get, and ask the AI to explain

00:11:00.289 --> 00:11:03.529
it using metaphors, like explain quantum entanglement

00:11:03.529 --> 00:11:06.070
using a pair of dice. That connects the new hard

00:11:06.070 --> 00:11:08.529
information to something you already know. And

00:11:08.529 --> 00:11:11.840
what about coding? High stakes. The super prompt

00:11:11.840 --> 00:11:14.740
for coding is fantastic. Usually people paste

00:11:14.740 --> 00:11:17.299
their error and say fix this. The smart prompt

00:11:17.299 --> 00:11:20.919
is: Here is my code. Do not give me the correct

00:11:20.919 --> 00:11:23.740
code yet. Tell me where my logic is wrong and

00:11:23.740 --> 00:11:26.639
suggest three technical keywords so I can research

00:11:26.639 --> 00:11:30.000
the fix. That is strict. It forces you to actually

00:11:30.000 --> 00:11:32.000
go do the research yourself. It prevents you

00:11:32.000 --> 00:11:34.039
from just copy-pasting your way into a broken

00:11:34.039 --> 00:11:36.120
product. You have to understand the why behind

00:11:36.120 --> 00:11:38.720
the bug. What stands out to me here is the common

00:11:38.720 --> 00:11:41.750
thread, whether it's vocabulary or budgeting

00:11:41.750 --> 00:11:45.070
or coding. Yeah. What is it? They all force the

00:11:45.070 --> 00:11:47.610
brain to do the heavy lifting. Every single one

00:11:47.610 --> 00:11:49.850
of these strategies inserts a pause where the

00:11:49.850 --> 00:11:52.269
human has to think. It really reframes the whole

00:11:52.269 --> 00:11:54.750
relationship. We think of AI as an accelerator,

00:11:54.750 --> 00:11:57.649
you know, to speed us up. But these strategies

00:11:57.649 --> 00:11:59.970
are about deliberately slowing down to make sure

00:11:59.970 --> 00:12:02.470
you're learning. Speed without direction is just

00:12:02.470 --> 00:12:05.070
getting lost faster. So let's bring this all

00:12:05.070 --> 00:12:07.220
together. What is the big takeaway? for someone

00:12:07.220 --> 00:12:09.240
listening who just feels overwhelmed by all this.

00:12:09.480 --> 00:12:12.480
I think it's that core metaphor. AI is a powerful

00:12:12.480 --> 00:12:15.240
engine. It's got incredible horsepower. But you

00:12:15.240 --> 00:12:17.019
must be the driver holding the steering wheel.

00:12:17.159 --> 00:12:19.639
If you let go, you're either going to crash or

00:12:19.639 --> 00:12:21.240
you're just going to end up wherever the algorithm

00:12:21.240 --> 00:12:23.139
takes you. And to keep your hands on the wheel,

00:12:23.340 --> 00:12:27.659
we have those golden rules. Right. Use AI for

00:12:27.659 --> 00:12:31.639
the heavy, tedious work, the summaries, the grammar.

00:12:32.139 --> 00:12:36.220
Use it for simple topics, but never, ever let

00:12:36.220 --> 00:12:39.220
AI decide for you on deep analysis or career

00:12:39.220 --> 00:12:41.960
choices, and always check the facts. It's a call

00:12:41.960 --> 00:12:44.179
to action, really. Don't just use AI to finish

00:12:44.179 --> 00:12:46.179
your work. Use it to get better at your work.

00:12:46.320 --> 00:12:48.559
That's it. The future belongs to those who can

00:12:48.559 --> 00:12:51.399
combine machine speed with human deep thinking.

00:12:52.100 --> 00:12:54.000
Be the hybrid. And I want to leave you with a

00:12:54.000 --> 00:12:56.720
thought to mull over. We talked about AI as a

00:12:56.720 --> 00:12:59.659
tutor or an intern, but what if you treated it

00:12:59.659 --> 00:13:02.470
as a rival? What if, for one week, you're trying

00:13:02.470 --> 00:13:04.750
to outthink the machine on every single topic,

00:13:05.070 --> 00:13:06.889
using it only to grade your own performance?

00:13:07.610 --> 00:13:09.590
How sharp do you think you would get? That is

00:13:09.590 --> 00:13:11.570
a fascinating experiment. Thanks for diving in

00:13:11.570 --> 00:13:13.110
with us. We'll see you next time. See ya.
