WEBVTT

00:00:00.000 --> 00:00:03.379
Welcome, everyone, to today's deep dive. I'm

00:00:03.379 --> 00:00:05.099
really excited about this one. Yeah, it's going

00:00:05.099 --> 00:00:07.200
to be a fun one. It really is. Because our mission

00:00:07.200 --> 00:00:11.039
today is to understand this mathematical magic

00:00:11.039 --> 00:00:13.339
trick. It's a trick that lets computers solve

00:00:13.339 --> 00:00:15.900
wildly complex problems without actually having

00:00:15.900 --> 00:00:19.059
to do impossibly complex math. Which I know sounds

00:00:19.059 --> 00:00:21.420
like a total contradiction. Right. And we are

00:00:21.420 --> 00:00:24.199
pulling our insights today primarily from the

00:00:24.199 --> 00:00:27.140
Wikipedia article on the kernel method in machine

00:00:27.140 --> 00:00:30.859
learning. I know the term kernel method might

00:00:30.859 --> 00:00:32.799
sound intimidating. It sounds like something

00:00:32.799 --> 00:00:34.920
buried in an advanced computer science textbook.

00:00:35.320 --> 00:00:37.759
It definitely does. But for you listening, this

00:00:37.759 --> 00:00:40.579
deep dive is really the ultimate shortcut to

00:00:40.579 --> 00:00:43.539
understanding how algorithms categorize the messy

00:00:43.539 --> 00:00:46.320
real world we live in. It's perfect for anyone

00:00:46.320 --> 00:00:49.000
who wants to grasp high-level AI concepts, but

00:00:49.000 --> 00:00:52.000
doesn't want to get a PhD in calculus just to

00:00:52.000 --> 00:00:54.219
understand it. Exactly. You don't need the PhD

00:00:54.219 --> 00:00:56.579
today. Good. Because I don't have one. Yeah.

00:00:56.719 --> 00:00:58.840
And to help us navigate this, I have our expert

00:00:58.840 --> 00:01:01.380
guide here to translate all that dense mathematics

00:01:01.380 --> 00:01:04.859
into some clear insights. I am ready. And I think

00:01:04.859 --> 00:01:07.120
to really appreciate the brilliance of this magic

00:01:07.120 --> 00:01:09.739
trick, we first need to define the fundamental

00:01:09.739 --> 00:01:12.180
problem that machine learning algorithms face

00:01:12.180 --> 00:01:15.159
when they categorize data. Right. So where do

00:01:15.159 --> 00:01:17.200
we start? We start with something called pattern

00:01:17.200 --> 00:01:20.299
analysis. Pattern analysis. OK. Yeah. It's the

00:01:20.299 --> 00:01:23.879
general task of finding clusters, rankings, principal

00:01:23.879 --> 00:01:26.959
components, and classifications in big data sets,

00:01:27.519 --> 00:01:30.459
like trying to find order in the chaos. So like

00:01:30.459 --> 00:01:33.019
figuring out if an email is spam or not. Exactly.

00:01:33.120 --> 00:01:36.500
Or if an image is a cat or a dog. And historically,

00:01:36.920 --> 00:01:39.959
to do this, engineers use what are called linear

00:01:39.959 --> 00:01:42.739
classifiers. OK, linear classifiers, meaning

00:01:42.739 --> 00:01:46.340
lines. Literally lines, yes. A linear classifier

00:01:46.340 --> 00:01:48.519
attempts to draw a straight boundary to separate

00:01:48.519 --> 00:01:51.150
different classes of data. OK. They're heavily

00:01:51.150 --> 00:01:53.370
used because they're mathematically simple, just

00:01:53.370 --> 00:01:56.450
basic addition and multiplication. But, and this

00:01:56.450 --> 00:01:59.170
is the big but, they struggle heavily with nonlinear

00:01:59.170 --> 00:02:01.129
problems. Okay, let's unpack this because I want

00:02:01.129 --> 00:02:03.489
to make sure you, the listener, can really visualize

00:02:03.489 --> 00:02:06.370
this. Imagine you have a bowl mixed with red

00:02:06.370 --> 00:02:08.830
and blue marbles. Okay, I'm picturing it. And

00:02:08.830 --> 00:02:11.430
a linear classifier is like trying to separate

00:02:11.430 --> 00:02:13.969
the red ones from the blue ones by sliding a

00:02:13.969 --> 00:02:16.629
single, flat, stiff piece of cardboard into the

00:02:16.629 --> 00:02:18.870
bowl. Right. Which works great if all the red

00:02:18.870 --> 00:02:20.330
ones are on the left and the blue ones are on

00:02:20.330 --> 00:02:22.889
the right. Exactly. The cardboard slides right

00:02:22.889 --> 00:02:26.189
down the middle. But what if the data is messy?

00:02:26.789 --> 00:02:29.509
What if the red marbles are in a clump in the

00:02:29.509 --> 00:02:32.430
very center, completely surrounded by a ring

00:02:32.430 --> 00:02:34.930
of blue ones? Then you have a serious problem.
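
To make the cardboard picture concrete, here is a minimal Python sketch (the marble coordinates and the particular line are invented for illustration). A linear classifier really is just sign(w · x + b), and no choice of w and b can cleanly split a center clump from the ring around it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "marbles": red points clumped at the center,
# blue points in a ring around them (2D coordinates).
red = rng.normal(0.0, 0.4, size=(50, 2))
theta = rng.uniform(0.0, 2 * np.pi, size=50)
blue = np.column_stack([3 * np.cos(theta), 3 * np.sin(theta)])
blue += rng.normal(0.0, 0.2, size=(50, 2))

# A linear classifier is just sign(w . x + b):
# basic addition and multiplication, nothing more.
def linear_classify(X, w, b):
    return np.sign(X @ w + b)

# However we tilt the "cardboard" (w, b), marbles of both
# colors land on each side, because no straight line can
# separate a ring from the clump it surrounds.
w, b = np.array([1.0, -0.5]), 0.2
for name, pts in [("red", red), ("blue", blue)]:
    side = linear_classify(pts, w, b)
    print(name, "split across sides:", np.sum(side > 0), "vs", np.sum(side <= 0))
```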

00:02:35.069 --> 00:02:37.370
Right, because a single flat plane can't separate

00:02:37.370 --> 00:02:39.289
them. You'd slice right through the blue ring

00:02:39.289 --> 00:02:42.770
to get to the red center. So my pushback here

00:02:42.770 --> 00:02:46.719
is: if the data is messy, why not just build a

00:02:46.719 --> 00:02:49.360
bendy, complex piece of cardboard? It's a totally

00:02:49.360 --> 00:02:51.780
logical question. Why not just draw a circle

00:02:51.780 --> 00:02:54.400
around the red marbles? Yeah. Why does it have

00:02:54.400 --> 00:02:57.000
to be a straight piece of cardboard? Well, the

00:02:57.000 --> 00:03:00.439
problem is that in raw representation, data usually

00:03:00.439 --> 00:03:02.900
has to be explicitly transformed to make sense

00:03:02.900 --> 00:03:05.409
of it. Engineers use what's called a feature

00:03:05.409 --> 00:03:08.490
map to push the data into a feature vector representation.

00:03:08.949 --> 00:03:10.770
Okay, that's a lot of jargon. What does that

00:03:10.770 --> 00:03:13.810
mean for our marbles? It means we are assigning

00:03:13.810 --> 00:03:16.889
brand new mathematical coordinates to every single

00:03:16.889 --> 00:03:19.750
marble to move them into a new space where a

00:03:19.750 --> 00:03:21.969
flat piece of cardboard will actually work. Oh

00:03:21.969 --> 00:03:24.409
wow, so we warp the space instead of bending

00:03:24.409 --> 00:03:27.150
the cardboard. Exactly, like adding a third dimension,

00:03:27.789 --> 00:03:31.039
a z-axis for height, based on how far a marble

00:03:31.039 --> 00:03:33.300
is from the center. Oh, I see. So the red marbles

00:03:33.300 --> 00:03:35.900
in the center stay low, but the blue ones on

00:03:35.900 --> 00:03:38.699
the outside get mapped up high. Right. The flat

00:03:38.699 --> 00:03:41.280
bowl becomes a 3D funnel. And then you can just

00:03:41.280 --> 00:03:43.719
slide your flat cardboard horizontally between

00:03:43.719 --> 00:03:46.000
the low red marbles and the high blue ones. That

00:03:46.000 --> 00:03:49.020
is brilliant. It is. But what's fascinating here

00:03:49.020 --> 00:03:53.000
is that explicitly doing this math is incredibly

00:03:53.000 --> 00:03:55.639
taxing. Taxing how? I mean, if you just have

00:03:55.639 --> 00:03:58.379
50 marbles and one new dimension, your laptop

00:03:58.379 --> 00:04:00.740
does it instantly. But what if you have millions

00:04:00.740 --> 00:04:03.060
of high-resolution images and you need to add

00:04:03.060 --> 00:04:05.479
thousands of new dimensions? Oh, right. Calculating

00:04:05.479 --> 00:04:07.780
all those new coordinates for every single data

00:04:07.780 --> 00:04:10.159
point. Across thousands of dimensions, exactly.
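
As a concrete illustration of that explicit transform, here is one possible feature map for the bowl example; the lift phi(x, y) = (x, y, x^2 + y^2) and the sample points are our own illustrative choices:

```python
import numpy as np

# One possible feature map: keep (x, y) and add a z-axis equal
# to squared distance from the center. This is our illustrative
# choice; many different lifts would do the job.
def feature_map(X):
    z = np.sum(X ** 2, axis=1, keepdims=True)   # x^2 + y^2
    return np.hstack([X, z])                    # (x, y, x^2 + y^2)

marbles = np.array([[0.1, 0.2],     # red, near the center
                    [2.9, 0.5]])    # blue, out on the ring
print(feature_map(marbles))
# The red marble gets a tiny z, the blue one a large z, so a
# horizontal plane z = c separates them in the lifted space.

# The cost problem: lifting n points into d new dimensions
# materializes an n-by-d array. With millions of images and
# thousands of dimensions, that is billions of numbers before
# any classifier even runs.
```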

00:04:10.439 --> 00:04:13.360
It requires astronomical computing power. The

00:04:13.360 --> 00:04:15.719
matrix of numbers gets so huge that the computer

00:04:15.719 --> 00:04:18.939
just runs out of memory and crashes. So explicitly

00:04:18.939 --> 00:04:21.199
building the bendy cardboard is just too much

00:04:21.199 --> 00:04:23.360
work. Way too much work. Which brings us to the

00:04:23.360 --> 00:04:25.970
workaround. Since explicitly transforming the

00:04:25.970 --> 00:04:29.410
data is too hard, the algorithms use this elegant

00:04:29.410 --> 00:04:32.629
mathematical workaround. Yes, the kernel trick.

00:04:32.850 --> 00:04:35.329
The kernel trick! I just, I love that name. It's

00:04:35.329 --> 00:04:37.689
great, right? The kernel trick is the hero here.

00:04:38.230 --> 00:04:41.170
It solves the nonlinear problem without all that

00:04:41.170 --> 00:04:43.410
heavy lifting. Okay, so how does the trick actually

00:04:43.410 --> 00:04:46.550
work? Well, a kernel is simply a similarity function

00:04:46.550 --> 00:04:49.920
over pairs of data points. And it computes this

00:04:49.920 --> 00:04:52.399
similarity using something called inner products.

00:04:52.699 --> 00:04:54.560
Inner products. Yeah, conceptually, an inner

00:04:54.560 --> 00:04:56.699
product is just a mathematical way to get a single

00:04:56.699 --> 00:04:59.899
number that represents how much two things overlap,

00:05:00.019 --> 00:05:02.019
how similar they are. OK, so just a similarity

00:05:02.019 --> 00:05:05.589
score. Exactly. And by using this trick, the

00:05:05.589 --> 00:05:08.670
algorithm can operate in a high dimensional implicit

00:05:08.670 --> 00:05:11.550
feature space. Wait, implicit? Implicit, that's

00:05:11.550 --> 00:05:14.110
the magic word. We operate in that complex space

00:05:14.110 --> 00:05:16.689
implicitly rather than explicitly. Here's where

00:05:16.689 --> 00:05:18.850
it gets really interesting because the source

00:05:18.850 --> 00:05:21.930
text mentions that this feature map can be infinite

00:05:21.930 --> 00:05:24.170
dimensional. It can, yeah. But wait, you're telling

00:05:24.170 --> 00:05:26.790
me that instead of drawing a complicated line

00:05:26.790 --> 00:05:30.589
in 3D, we are mapping data to infinite dimensions?

00:05:31.029 --> 00:05:34.750
How is that less work? That sounds like a computational

00:05:34.750 --> 00:05:36.990
nightmare. It completely sounds like a paradox.

00:05:37.290 --> 00:05:39.550
I mean, you just said adding a few thousand dimensions

00:05:39.550 --> 00:05:42.910
would crash a computer. I did. But here is the

00:05:42.910 --> 00:05:45.730
brilliance of the trick. We never actually compute

00:05:45.730 --> 00:05:48.769
the coordinates in that infinite space. We don't.

00:05:48.850 --> 00:05:51.959
No. We never go there. We only compute the inner

00:05:51.959 --> 00:05:55.379
products, the similarity between the images of

00:05:55.379 --> 00:05:58.240
all pairs of data in the feature space. Oh, wow. Think of it like

00:05:58.240 --> 00:06:00.800
comparing the recipe cards for two incredibly

00:06:00.800 --> 00:06:03.980
complex cakes instead of actually baking the

00:06:03.980 --> 00:06:06.720
infinite layers of both cakes to taste the difference.
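
A small sketch of the recipe-card idea using the degree-2 polynomial kernel (the numbers are invented): computing (x · y)^2 on the raw inputs gives exactly the same answer as explicitly mapping both points into feature space and taking the inner product there. The Gaussian (RBF) kernel, a standard example not named in the conversation, is the classic infinite-dimensional case:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# Explicit route ("bake the cake"): map both points into the
# degree-2 feature space, then take the inner product there.
def phi(v):
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

explicit = phi(x) @ phi(y)

# Kernel route ("compare the recipe cards"): one similarity
# score computed on the raw inputs. No feature coordinates.
kernel = (x @ y) ** 2

print(explicit, kernel)   # both 16.0: same answer, far less work

# The Gaussian (RBF) kernel plays the same game with a feature
# space that is genuinely infinite-dimensional, yet it still
# costs a single line to evaluate:
rbf = np.exp(-np.sum((x - y) ** 2) / 2.0)
print(rbf)
```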

00:06:06.860 --> 00:06:08.339
OK, that makes so much sense. We just compare

00:06:08.339 --> 00:06:10.759
the raw ingredients. Exactly. The kernel function

00:06:10.759 --> 00:06:13.519
spits out a similarity score, and mathematically,

00:06:13.930 --> 00:06:17.290
that score perfectly corresponds to what the

00:06:17.290 --> 00:06:19.350
inner product would be if we had actually baked

00:06:19.350 --> 00:06:21.470
the infinite cake. So we get the result without

00:06:21.470 --> 00:06:23.389
doing the work. Right. And there is a strict

00:06:23.389 --> 00:06:25.430
mathematical foundation that allows this. It's

00:06:25.430 --> 00:06:27.589
called the Representer Theorem. Representer Theorem.

00:06:27.629 --> 00:06:30.110
Got it. It states that even though the feature

00:06:30.110 --> 00:06:33.569
map is infinite dimensional, finding the optimal

00:06:33.569 --> 00:06:36.550
flat plane to separate the data only requires

00:06:36.550 --> 00:06:39.509
a finite-dimensional matrix built from user input. So

00:06:39.509 --> 00:06:44.250
it's massively computationally cheaper. The ultimate

00:06:44.250 --> 00:06:46.350
shortcut. Okay, my mind is a little blown, but

00:06:46.350 --> 00:06:48.709
let's look under the hood. Because now that we

00:06:48.709 --> 00:06:51.310
know the trick involves bypassing the complex

00:06:51.310 --> 00:06:53.889
coordinates in favor of just measuring similarity,

00:06:54.449 --> 00:06:57.310
how does the machine actually use this to predict

00:06:57.310 --> 00:06:59.529
something it has never seen before? Well, kernel

00:06:59.529 --> 00:07:02.029
methods are what we call instance-based learners.

00:07:02.290 --> 00:07:04.829
Instance-based learners, as opposed to? As opposed

00:07:04.829 --> 00:07:06.910
to learning a generalized rule and throwing the

00:07:06.910 --> 00:07:09.410
data away. Kernel methods actually remember the

00:07:09.410 --> 00:07:11.490
training examples. Oh, they just memorize them.

00:07:11.629 --> 00:07:13.680
Basically, yes. They remember them and assign

00:07:13.680 --> 00:07:16.980
them weights. So when a new unlabeled input arrives,

00:07:17.300 --> 00:07:20.000
say a new image, the kernel compares it to each

00:07:20.000 --> 00:07:22.459
of the memorized training inputs. OK. And it

00:07:22.459 --> 00:07:24.680
computes a similarity score for every single

00:07:24.680 --> 00:07:28.240
pair. Then the machine computes a weighted sum

00:07:28.240 --> 00:07:30.600
of all those similarities to predict whether

00:07:30.600 --> 00:07:33.100
the new input gets a positive or negative label.
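
That weighted sum is exactly the form the representer theorem from earlier guarantees: f(x) = sum_i alpha_i * y_i * k(x_i, x). A minimal sketch, with invented training points, invented weights, and an RBF kernel standing in as the similarity function:

```python
import numpy as np

# Memorized training examples, their labels (+1/-1), and learned
# weights alpha -- all values invented for illustration.
X_train = np.array([[0.0, 1.0], [1.0, 0.0], [3.0, 3.0]])
y_train = np.array([+1, +1, -1])
alpha   = np.array([0.5, 0.8, 0.6])

def rbf_kernel(a, b, gamma=0.5):
    """Similarity score between two points."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def predict(x_new):
    # Weighted sum of similarities to every memorized example.
    score = sum(a * y * rbf_kernel(x_i, x_new)
                for a, y, x_i in zip(alpha, y_train, X_train))
    return +1 if score > 0 else -1

print(predict(np.array([0.2, 0.8])))   # near the +1 examples -> +1
print(predict(np.array([2.8, 3.1])))   # near the -1 example  -> -1
```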

00:07:33.439 --> 00:07:36.420
So what does this all mean? Let me pitch an analogy

00:07:36.420 --> 00:07:39.240
to you, the listener. It's like you're trying

00:07:39.240 --> 00:07:41.259
to figure out if you'll like a new movie. Okay,

00:07:41.319 --> 00:07:43.560
I like where this is going. Instead of analyzing

00:07:43.560 --> 00:07:46.980
its script, its lighting, the director's history,

00:07:47.139 --> 00:07:49.620
which is like explicitly calculating its coordinates.

00:07:49.879 --> 00:07:53.069
Right. You just measure how similar this new

00:07:53.069 --> 00:07:55.829
movie is to your 10 favorite movies of all time,

00:07:56.329 --> 00:07:58.930
which are your training data. That is perfectly

00:07:58.930 --> 00:08:01.629
stated. You just sum up the similarity scores.

00:08:02.009 --> 00:08:04.370
If the total is positive, you'll like the movie.

00:08:04.490 --> 00:08:07.670
If it's negative, you skip it. Exactly. And the

00:08:07.670 --> 00:08:10.089
kernel function handles all that similarity measuring

00:08:10.089 --> 00:08:12.170
instantly. But wait, I have to push back here.

00:08:12.629 --> 00:08:15.509
Can we use any criteria for similarity? Can I

00:08:15.509 --> 00:08:17.250
just make up a random rule and say, you know,

00:08:17.370 --> 00:08:18.910
these are similar because they both have a red

00:08:18.910 --> 00:08:21.370
pixel? You could try, but the math would collapse.

00:08:21.529 --> 00:08:24.670
It would. Why? Because there are strict theoretical

00:08:24.670 --> 00:08:27.310
guardrails. Yeah. The main one is Mercer's theorem.

00:08:27.910 --> 00:08:30.370
Mercer's theorem. OK, what does that do? It dictates

00:08:30.370 --> 00:08:32.570
that the similarity function you choose must

00:08:32.570 --> 00:08:35.570
result in a Gram matrix or a kernel matrix that

00:08:35.570 --> 00:08:40.110
is positive semi-definite. Positive semi-definite.

00:08:40.110 --> 00:08:43.419
That sounds very dense. It is, yeah, but basically

00:08:43.419 --> 00:08:47.279
it's just the math's way of ensuring your similarity

00:08:47.279 --> 00:08:50.419
rules don't contradict the basic laws of geometry.
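
In code the guardrail is tangible: build the Gram matrix of pairwise similarities and confirm it has no negative eigenvalues (allowing for floating-point noise). A sketch with invented points and an RBF kernel, which does satisfy Mercer's condition:

```python
import numpy as np

# Invented data points.
X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [3.0, 1.0]])

def rbf_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Gram matrix: similarity of every training point to every other.
n = len(X)
K = np.array([[rbf_kernel(X[i], X[j]) for j in range(n)]
              for i in range(n)])

# Mercer's condition in practice: the Gram matrix of a valid
# kernel is positive semi-definite -- no negative eigenvalues
# (allowing tiny floating-point noise).
eigenvalues = np.linalg.eigvalsh(K)
print("smallest eigenvalue:", eigenvalues.min())
print("positive semi-definite:", bool(np.all(eigenvalues >= -1e-10)))
```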

00:08:50.570 --> 00:08:54.009
Like saying city A is close to city B and B is

00:08:54.009 --> 00:08:56.970
close to C, but A and C are on opposite sides

00:08:56.970 --> 00:08:59.789
of the universe. Exactly. It prevents those impossible

00:08:59.789 --> 00:09:02.350
geometric contradictions so the algorithm can

00:09:02.350 --> 00:09:04.909
reliably find that flat plane. OK, that makes

00:09:04.909 --> 00:09:06.850
sense. But the text pointed out a fascinating

00:09:06.850 --> 00:09:09.529
real-world quirk about Mercer's theorem, didn't

00:09:09.529 --> 00:09:12.350
it? It did. And this is where theory meets

00:06:12.350 --> 00:06:15.809
real-world coding. Empirically, engineers found that

00:09:15.809 --> 00:09:18.009
even if a function doesn't perfectly satisfy

00:09:18.009 --> 00:09:20.509
Mercer's condition, it can still work reasonably

00:09:20.509 --> 00:09:23.529
well. Wait, really? Even if the geometry is technically

00:09:23.529 --> 00:09:26.110
broken? Yeah. As long as it merely approximates

00:09:26.110 --> 00:09:28.590
the intuitive idea of similarity, the algorithm

00:09:28.590 --> 00:09:31.090
can often still find a useful pattern. That is

00:09:31.090 --> 00:09:33.970
wild. The math is strict, but reality is a bit

00:09:33.970 --> 00:09:36.029
more forgiving. Exactly. It's a great example

00:09:36.029 --> 00:09:38.539
of a real -world heuristic. So having unpacked

00:09:38.539 --> 00:09:40.500
all this brilliant math and its guardrails, let's

00:09:40.500 --> 00:09:42.580
look at the real-world impact, because this actually

00:09:42.580 --> 00:09:45.320
achieved a lot, right? Oh, it completely changed

00:09:45.320 --> 00:09:47.039
computer science. Tell me about the timeline

00:09:47.039 --> 00:09:49.779
here. So the underlying concept, the kernel perceptron,

00:09:49.840 --> 00:09:52.700
was actually described way back in the 1960s.
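
That 1960s idea fits in a few lines. A minimal sketch of a kernel perceptron, with invented data and kernel: instead of a weight vector, it keeps one weight per remembered training example and bumps that weight whenever the example is misclassified:

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=10):
    """Minimal kernel perceptron. y holds labels in {-1, +1};
    instead of a weight vector, the model learns one weight
    alpha[i] per remembered training example."""
    n = len(X)
    alpha = np.zeros(n)
    for _ in range(epochs):
        for j in range(n):
            score = sum(alpha[i] * y[i] * kernel(X[i], X[j])
                        for i in range(n))
            if np.sign(score) != y[j]:   # misclassified?
                alpha[j] += 1            # weight this example more heavily
    return alpha

# Invented data and a polynomial kernel, purely for illustration.
X = np.array([[0.0, 0.1], [0.2, 0.0], [2.0, 2.0], [2.2, 1.8]])
y = np.array([-1, -1, +1, +1])
print(kernel_perceptron(X, y, lambda a, b: (a @ b + 1) ** 2))
```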

00:09:52.700 --> 00:09:57.000
OK. But the true golden age hit in the 1990s

00:09:57.000 --> 00:09:59.879
with the support vector machine, or SVM. Right,

00:10:00.120 --> 00:10:02.559
the SVM. When researchers injected the kernel

00:10:02.559 --> 00:10:05.080
trick into the SVM architecture, the results

00:10:05.080 --> 00:10:08.019
were staggering. They became highly competitive

00:10:08.019 --> 00:10:10.700
with neural networks. Which are like the heavyweight

00:10:10.700 --> 00:10:14.179
champions of AI. Exactly. Yeah. And they specifically

00:10:14.179 --> 00:10:16.559
crushed it in handwriting recognition. Oh, right.

00:10:17.019 --> 00:10:19.379
Recognizing handwritten digits is super messy.

00:10:19.559 --> 00:10:22.279
Everyone draws a seven differently. Very messy.

00:10:22.759 --> 00:10:26.320
But SVMs with kernel functions could implicitly

00:10:26.320 --> 00:10:29.159
map those messy pixels and draw crystal clear

00:10:29.159 --> 00:10:31.460
boundaries between a sloppy seven and a sloppy

00:10:31.460 --> 00:10:33.759
one. And from there, the applications just exploded.
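
For a modern taste of the handwriting result, here is a sketch using the small digit set bundled with scikit-learn (assuming scikit-learn is installed); a toy stand-in for the 1990s benchmarks, not a reproduction of them:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small 8x8 handwritten-digit images bundled with scikit-learn.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An SVM with an RBF kernel: the kernel trick handles the
# implicit mapping; we never compute feature-space coordinates.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))   # typically ~0.99
```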

00:10:34.120 --> 00:10:38.299
The article lists geostatistics, 3D reconstruction.

00:10:38.539 --> 00:10:40.779
The huge one. And bioinformatics, which I find

00:10:40.779 --> 00:10:43.539
incredible, like analyzing genetic data. Oh,

00:10:43.580 --> 00:10:45.600
bioinformatics is a stellar example. Because

00:10:45.600 --> 00:10:47.860
DNA is just massive strings of letters, right?

00:10:47.879 --> 00:10:49.840
But you can't draw a line through billions of

00:10:49.840 --> 00:10:53.480
A, C, G, and T's. You can't. Yeah. But by using

00:10:53.480 --> 00:10:56.519
specialized string kernels, scientists can implicitly

00:10:56.519 --> 00:10:59.659
map those DNA sequences and isolate gene clusters

00:10:59.659 --> 00:11:02.360
with incredible precision. It's phenomenal.
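
A toy version of a string kernel is easy to sketch: a spectrum kernel scores two sequences by how many k-letter substrings they share. The DNA snippets below are invented:

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """Toy string kernel: the similarity of two sequences is the
    number of matching k-letter substrings (k-mers) they share,
    i.e. the inner product of their k-mer count vectors."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(count * ct[kmer] for kmer, count in cs.items())

# Invented DNA snippets: no line-drawing through letters needed,
# just a similarity score the algorithm can work with.
print(spectrum_kernel("ACGTACGT", "ACGTTTGT"))   # shares k-mers -> 4
print(spectrum_kernel("ACGTACGT", "GGGGCCCC"))   # nothing shared -> 0
```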

00:11:02.360 --> 00:11:07.000
But here, I spotted a catch in the source text. You

00:11:07.000 --> 00:11:10.120
found the fatal flaw. I did. The text says that

00:11:10.120 --> 00:11:12.139
this is slow to compute for data sets larger

00:11:12.139 --> 00:11:14.620
than a couple of thousand examples without parallel

00:11:14.620 --> 00:11:17.200
processing. It does say that. Wait. A couple

00:11:17.200 --> 00:11:21.480
of thousand. If this math is so elegant, why

00:11:21.480 --> 00:11:23.639
does it completely choke on just a few thousand

00:11:23.639 --> 00:11:27.419
data points? In the modern era of big data, a

00:11:27.419 --> 00:11:29.600
few thousand is basically nothing, right? If

00:11:29.600 --> 00:11:31.399
we connect this to the bigger picture, remember

00:11:31.399 --> 00:11:33.120
how we said it's an instance-based learner?

00:11:33.480 --> 00:11:36.289
Right. It remembers the training data. Exactly.

00:11:36.750 --> 00:11:38.990
So mathematically, the kernel function has to

00:11:38.990 --> 00:11:41.710
compute the similarity score between every new

00:11:41.710 --> 00:11:44.070
data point and every single piece of training

00:11:44.070 --> 00:11:47.049
data. Oh, no. And during training, it has to

00:11:47.049 --> 00:11:49.690
compare every training point against every other

00:11:49.690 --> 00:11:51.629
training point to build that Gram matrix. Oh,

00:11:51.629 --> 00:11:54.289
wow. I see where this is going. Yet the complexity

00:11:54.289 --> 00:11:58.379
scales quadratically or even cubically. If you

00:11:58.379 --> 00:12:01.240
have just 100,000 training examples, which is

00:12:01.240 --> 00:12:05.240
tiny today, that matrix requires computing 10

00:12:05.240 --> 00:12:08.080
billion inner products. 10 billion calculations

00:12:08.080 --> 00:12:11.720
for a tiny data set. Exactly. The Gram matrix

00:12:11.720 --> 00:12:14.320
just gets quadratically heavier. This is why

00:12:14.320 --> 00:12:16.320
parallel processing became a necessity. You have

00:12:16.320 --> 00:12:19.340
to split it across thousands of cores just to

00:12:19.340 --> 00:12:21.659
get an answer. Right. And it shows how tools

00:12:21.659 --> 00:12:24.159
have to fit their specific era and hardware.
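
The arithmetic behind that wall is worth seeing once; a back-of-the-envelope sketch, not a benchmark:

```python
# Back-of-the-envelope for the scaling wall described above.
n = 100_000                        # a modest training set today
pairs = n * n                      # every point against every other
print(f"{pairs:,} inner products")             # 10,000,000,000

# Even just storing the resulting Gram matrix as 8-byte floats:
print(f"~{pairs * 8 / 1e9:,.0f} GB of memory")   # ~80 GB
```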

00:12:24.440 --> 00:12:27.440
In the 90s, when datasets were smaller, kernel

00:12:27.440 --> 00:12:30.679
methods were king. Today, with petabytes of data,

00:12:31.019 --> 00:12:32.720
neural nets often take the lead because they

00:12:32.720 --> 00:12:34.539
scale better. Even if their internal math is

00:12:34.539 --> 00:12:37.299
less transparent. Exactly. Man, that is a fascinating

00:12:37.299 --> 00:12:40.799
evolution. So, to summarize the core aha moment

00:12:40.799 --> 00:12:43.779
for you listening. Kernel methods, and specifically

00:12:43.779 --> 00:12:46.299
the kernel trick, represent a real triumph of

00:12:46.299 --> 00:12:48.519
perspective. They really do. Instead of brute

00:12:48.519 --> 00:12:51.100
forcing a complex categorization problem in flat

00:12:51.100 --> 00:12:53.639
space, mathematicians found a way to jump into

00:12:53.639 --> 00:12:56.580
infinite dimensions using a shortcut based purely

00:12:56.580 --> 00:12:59.570
on similarity. And this raises an important question.

00:13:00.190 --> 00:13:02.710
Well, if the entire success of a kernel method

00:13:02.710 --> 00:13:06.049
relies on a human choosing the mathematical definition

00:13:06.049 --> 00:13:09.309
of similarity, the kernel function itself, what

00:13:09.309 --> 00:13:11.789
happens when human biases leak into that definition?

00:13:11.870 --> 00:13:14.330
Oh, wow. Right. We always assume math is entirely

00:13:14.330 --> 00:13:17.389
objective. But if we are the ones defining what

00:13:17.389 --> 00:13:20.070
makes two distinct points similar, are we just

00:13:20.070 --> 00:13:23.330
teaching the machine our own blind spots in infinite

00:13:23.330 --> 00:13:26.039
dimensions? That is a staggering thought. The

00:13:26.039 --> 00:13:28.139
math might be perfect, but the rules are still

00:13:28.139 --> 00:13:31.019
human. Exactly. Well, thank you so much for joining

00:13:31.019 --> 00:13:33.019
us on this deep dive. And for you listening,

00:13:33.299 --> 00:13:35.139
keep looking for those hidden shortcuts in the

00:13:35.139 --> 00:13:37.240
world around you. We will catch you on the next

00:13:37.240 --> 00:13:37.440
one.
