WEBVTT

00:00:00.000 --> 00:00:02.299
You know, when we look at the modern world around

00:00:02.299 --> 00:00:07.940
us, there is this profound illusion of absolute

00:00:07.940 --> 00:00:10.460
precision. Oh, absolutely. We like our reality

00:00:10.460 --> 00:00:13.460
to be crisp. Right. I mean, we think of skyscrapers

00:00:13.460 --> 00:00:15.500
engineered down to the microscopic millimeter

00:00:15.500 --> 00:00:19.160
or, you know, digital bank transfers moving exact

00:00:19.160 --> 00:00:21.879
pennies across fiber optic cables. We just expect

00:00:21.879 --> 00:00:24.199
things to be perfect. You punch a calculation

00:00:24.199 --> 00:00:26.460
into your phone and the screen just confidently

00:00:26.460 --> 00:00:29.739
says, here is the exact answer. Yeah, it's a

00:00:29.739 --> 00:00:32.200
very comforting illusion. We inherently trust

00:00:32.200 --> 00:00:36.799
the numbers because they feel binary. It's either

00:00:36.799 --> 00:00:39.259
right or it's wrong. But then you peek behind

00:00:39.259 --> 00:00:41.340
the curtain at the foundational math running

00:00:41.340 --> 00:00:43.280
our physical and digital lives and suddenly that

00:00:43.280 --> 00:00:46.380
pristine calculator starts glitching. You realize

00:00:46.380 --> 00:00:48.600
we're looking at a computational landscape that

00:00:48.600 --> 00:00:51.840
is honestly just an entire universe of "close

00:00:51.840 --> 00:00:53.920
enough." It really is. It's the mathematics of

00:00:53.920 --> 00:00:56.820
approximation. And navigating a world built on

00:00:56.820 --> 00:01:00.679
tiny necessary compromises, it's a lot more complicated

00:01:00.679 --> 00:01:03.140
than most people realize. Which is exactly why

00:01:03.140 --> 00:01:06.049
we're here. So, welcome to today's deep dive

00:01:06.049 --> 00:01:08.709
into the source material. We've got a fascinating

00:01:08.709 --> 00:01:11.170
stack of research today, custom-tailored, just

00:01:11.170 --> 00:01:13.750
for you. We do. Our mission today is to explore

00:01:13.750 --> 00:01:16.989
the anatomy of being wrong. Or, more specifically,

00:01:17.430 --> 00:01:20.390
how we mathematically quantify and control those

00:01:20.390 --> 00:01:23.269
tiny compromises in the systems you use every

00:01:23.269 --> 00:01:26.359
single day. Our grounding source for today's

00:01:26.359 --> 00:01:29.299
exploration is a highly detailed text on approximation

00:01:29.299 --> 00:01:33.019
error. And whether you are a scientist measuring

00:01:33.019 --> 00:01:36.000
volatile chemicals in a lab or a programmer building

00:01:36.000 --> 00:01:38.980
an app or just someone wondering why your GPS

00:01:38.980 --> 00:01:41.159
suddenly thinks you're driving into a lake. Right,

00:01:41.359 --> 00:01:44.750
which happens way too often. It does. But understanding

00:01:44.750 --> 00:01:47.370
the hidden mechanics of error is absolutely crucial

00:01:47.370 --> 00:01:49.730
for all of it. OK. Let's unpack this. Before

00:01:49.730 --> 00:01:51.909
we can understand how these tiny errors actually

00:01:51.909 --> 00:01:54.569
ruin algorithms or crash computer systems, we

00:01:54.569 --> 00:01:56.629
have to define the basic language of a mistake.

00:01:56.849 --> 00:01:59.290
Right. According to the source, there are two

00:01:59.290 --> 00:02:01.549
primary ways we measure a discrepancy between

00:02:01.549 --> 00:02:04.950
an exact quote unquote true value and our approximation

00:02:04.950 --> 00:02:07.010
of it. Yeah. And what's fascinating here is that

00:02:07.010 --> 00:02:09.629
the language of mathematics gives us very specific

00:02:09.629 --> 00:02:12.740
tools to bound our mistakes. So the first tool

00:02:12.740 --> 00:02:16.659
is what we call absolute error. OK. This denotes

00:02:16.659 --> 00:02:19.699
the direct numerical magnitude of the discrepancy.

00:02:20.199 --> 00:02:22.060
It doesn't care about the scale of the universe.

00:02:22.580 --> 00:02:24.719
It just cares about the raw distance between

00:02:24.719 --> 00:02:27.259
the truth and the guess. So it's just the straight

00:02:27.259 --> 00:02:30.039
up difference. Exactly. Formally, if you have

00:02:30.039 --> 00:02:33.520
a true value and an approximated value, the absolute

00:02:33.520 --> 00:02:36.460
error is bounded by a positive value, which is

00:02:36.460 --> 00:02:38.580
typically represented by the Greek letter epsilon.

00:02:38.800 --> 00:02:40.960
Meaning if I measure something, the absolute

00:02:40.960 --> 00:02:43.800
error is just how far off my measurement is,

00:02:43.960 --> 00:02:46.280
plain and simple. And whether I overestimated

00:02:46.280 --> 00:02:48.719
or underestimated doesn't really matter, right?

00:02:49.139 --> 00:02:51.960
Which is why the math uses absolute value bars.

00:02:52.340 --> 00:02:54.060
Precisely. It's just the magnitude of the gap.

00:02:54.280 --> 00:02:56.680
The text has a great simple example for this.

00:02:57.159 --> 00:02:59.849
Imagine you are measuring a piece of paper. The

00:02:59.849 --> 00:03:02.770
actual true length of this paper is exactly

00:03:02.770 --> 00:03:07.050
4.53 centimeters, but you're using like a standard

00:03:07.050 --> 00:03:09.930
plastic ruler from a school desk that only lets

00:03:09.930 --> 00:03:12.610
you estimate to the nearest tenth of a centimeter.

00:03:13.050 --> 00:03:15.650
So you write down a recorded measurement of

00:03:15.650 --> 00:03:17.830
4.5 centimeters. Right, because that's the best

00:03:17.830 --> 00:03:20.590
your tool can do. Exactly. So your absolute error

00:03:20.590 --> 00:03:24.169
there is exactly 0.03 centimeters. And that

00:03:24.169 --> 00:03:26.490
seems tiny, right? I mean, three hundredths of

00:03:26.490 --> 00:03:28.610
a centimeter is barely a speck of dust. Yeah.
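
The ruler example just discussed can be sketched in a few lines of Python (the function name here is our own, not from the source):

```python
# A minimal sketch of the ruler example: absolute error is just the
# magnitude of the gap between the true value and the recorded guess.
def absolute_error(true_value: float, approx: float) -> float:
    return abs(true_value - approx)

paper_true = 4.53  # true length of the paper, in centimeters
paper_read = 4.5   # what the school ruler lets you record

print(round(absolute_error(paper_true, paper_read), 2))  # the 0.03 cm gap
```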

00:03:28.560 --> 00:03:31.120
It's nothing. But relying solely on absolute

00:03:31.120 --> 00:03:33.400
error leaves out a critical piece of the puzzle,

00:03:33.840 --> 00:03:36.020
which brings us to the second measurement, and

00:03:36.020 --> 00:03:38.919
that's relative error. Okay, so how does that

00:03:38.919 --> 00:03:41.680
work? Relative error provides a scaled measure.

00:03:42.080 --> 00:03:44.599
It takes that absolute error and considers it

00:03:44.599 --> 00:03:47.719
in proportion to the exact data value. Mathematically,

00:03:48.180 --> 00:03:50.639
you divide the absolute error by the magnitude

00:03:50.639 --> 00:03:52.639
of the true value. Oh, and if you multiply that

00:03:52.639 --> 00:03:55.259
by 100, you get your standard percent error,

00:03:55.419 --> 00:03:57.479
right? Like most of us remember from high school

00:03:57.479 --> 00:03:59.860
chemistry. Assuming the true value isn't zero,

00:04:00.259 --> 00:04:03.060
yes. And the source brilliantly illustrates why

00:04:03.060 --> 00:04:05.500
relative error is often the much more important

00:04:05.500 --> 00:04:07.780
metric. I love the comparison they use for this.

00:04:07.860 --> 00:04:10.900
So let's say you are approximating the number

00:04:10.900 --> 00:04:13.460
1,000 and you make an absolute error of three.

00:04:13.620 --> 00:04:15.949
OK, you're off by three units. Right. That gives

00:04:15.949 --> 00:04:19.750
you a relative error of 0.3%. But what if you

00:04:19.750 --> 00:04:23.110
are approximating 1 million with that exact same

00:04:23.110 --> 00:04:26.649
absolute error of 3? Well, in that case, your

00:04:26.649 --> 00:04:31.589
relative error drops to a mere 0.0003%. The

00:04:31.589 --> 00:04:34.050
absolute error is the exact same raw mistake

00:04:34.050 --> 00:04:37.060
in both scenarios. You are off by 3. Yeah. But

00:04:37.060 --> 00:04:39.180
the relative error provides the context, like

00:04:39.180 --> 00:04:41.660
being off by three when you're counting a thousand

00:04:41.660 --> 00:04:43.839
dollars out of a cash register, that's a noticeable

00:04:43.839 --> 00:04:45.759
problem. For sure. Your boss is going to notice

00:04:45.759 --> 00:04:48.339
that. Exactly. But being off by three when you're

00:04:48.339 --> 00:04:50.540
counting a million dollars, that is basically

00:04:50.540 --> 00:04:52.600
a rounding error that no one is going to lose

00:04:52.600 --> 00:04:55.939
sleep over. So relative error is the

00:04:55.939 --> 00:04:58.120
context-dependent assessment of how much that mistake

00:04:58.120 --> 00:05:00.529
actually hurts us. If we connect this to the

00:05:00.529 --> 00:05:02.990
bigger picture, this contextual understanding

00:05:02.990 --> 00:05:06.829
is vital, but it also introduces some massive

00:05:06.829 --> 00:05:09.050
mathematical traps if you apply it to the wrong

00:05:09.050 --> 00:05:12.149
kind of measurement scale. Ooh, I am always fascinated

00:05:12.149 --> 00:05:13.930
by mathematical traps. What is the first one?

00:05:14.189 --> 00:05:17.069
The first one is pretty simple. Division by zero.
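
The definition and the division-by-zero trap both fit in a short sketch (hypothetical helper name, with the 1,000-versus-1,000,000 example from earlier):

```python
# A sketch of relative error with the division-by-zero trap guarded.
# The same absolute error of 3 shrinks dramatically at a larger scale.
def relative_error(true_value: float, approx: float) -> float:
    if true_value == 0:
        raise ValueError("relative error is undefined for a true value of zero")
    return abs(true_value - approx) / abs(true_value)

print(relative_error(1_000, 1_003) * 100)          # roughly 0.3 percent
print(relative_error(1_000_000, 1_000_003) * 100)  # roughly 0.0003 percent
```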

00:05:17.910 --> 00:05:20.649
Relative error becomes mathematically undefined

00:05:20.649 --> 00:05:23.069
if the true value is zero. Right, because you

00:05:23.069 --> 00:05:25.310
can't divide your absolute error by zero. It

00:05:25.310 --> 00:05:27.730
just breaks the arithmetic. Exactly. But the

00:05:27.730 --> 00:05:31.430
second caveat is much more insidious. Relative

00:05:31.430 --> 00:05:34.230
error only truly works and is only consistently

00:05:34.230 --> 00:05:36.470
interpretable if the measurements are performed

00:05:36.470 --> 00:05:39.569
on a ratio scale. Hold on, a ratio scale. I know

00:05:39.569 --> 00:05:41.149
there are different types of measurement scales.

00:05:41.290 --> 00:05:44.850
A ratio scale means a scale that has a true

00:05:44.850 --> 00:05:47.389
non-arbitrary zero point, right? Like a zero that

00:05:47.389 --> 00:05:50.050
signifies the complete physical absence of the

00:05:50.050 --> 00:05:52.509
thing you're measuring. That is spot on. Think

00:05:52.509 --> 00:05:54.970
about temperature, which is a classic trap detailed

00:05:54.970 --> 00:05:57.170
in our source. Let's look at the Celsius scale.

00:05:57.290 --> 00:06:00.209
OK. Zero degrees Celsius doesn't mean no temperature

00:06:00.209 --> 00:06:03.790
or no heat energy. It is literally just the arbitrary

00:06:03.790 --> 00:06:06.029
freezing point of water. So it's an interval

00:06:06.029 --> 00:06:08.769
scale, not a ratio scale. Right. It's just a

00:06:08.769 --> 00:06:11.189
convenient benchmark. Exactly. So let's say the

00:06:11.189 --> 00:06:13.129
true temperature in a room is 2 degrees Celsius,

00:06:13.550 --> 00:06:16.509
but your thermometer approximates it as 3 degrees

00:06:16.509 --> 00:06:19.029
Celsius. OK. So your absolute error is 1 degree

00:06:19.029 --> 00:06:22.230
Celsius. Yes. Which means your relative error

00:06:22.230 --> 00:06:24.949
is 1 divided by the true value of 2. That's

00:06:24.949 --> 00:06:29.290
0.5, or a 50% relative error. A 50% error sounds

00:06:29.290 --> 00:06:31.350
catastrophic for a scientific measurement, like

00:06:31.350 --> 00:06:33.149
you completely botched the experiment. It really

00:06:33.149 --> 00:06:35.430
does. But let's take that exact same physical

00:06:35.430 --> 00:06:37.149
room, that exact same physical temperature, and

00:06:37.149 --> 00:06:39.569
that exact same physical mistake and measure

00:06:39.569 --> 00:06:41.430
it on the Kelvin scale. OK, because the Kelvin

00:06:41.430 --> 00:06:45.230
scale is a true ratio scale. Exactly. Zero Kelvin

00:06:45.230 --> 00:06:48.649
is absolute zero, the complete theoretical absence

00:06:48.649 --> 00:06:51.709
of thermal energy. So what is 2 degrees Celsius

00:06:51.709 --> 00:06:54.720
in Kelvin? I am definitely not doing that mental

00:06:54.720 --> 00:06:58.420
math. What is it? It translates to 275.15 Kelvin.

00:07:00.490 --> 00:07:02.470
Because the increments are the exact same size,

00:07:02.970 --> 00:07:05.870
an absolute error of 1 degree Celsius is exactly

00:07:05.870 --> 00:07:07.810
equivalent to an absolute error of 1 Kelvin.
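
The Celsius-versus-Kelvin trap can be checked numerically (a sketch of the source's example; the helper name is ours):

```python
# The same one-degree mistake on two scales. Celsius has an arbitrary
# zero, so the relative error computed on it is misleading.
def relative_error(true_value, approx):
    return abs(true_value - approx) / abs(true_value)

true_c, read_c = 2.0, 3.0                          # interval scale (Celsius)
true_k, read_k = true_c + 273.15, read_c + 273.15  # ratio scale (Kelvin)

print(relative_error(true_c, read_c) * 100)  # 50 percent
print(relative_error(true_k, read_k) * 100)  # about 0.36 percent
```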

00:07:07.949 --> 00:07:09.569
Right, the physical distance on the thermometer

00:07:09.569 --> 00:07:12.329
didn't change. Exactly. So now we divide your

00:07:12.329 --> 00:07:15.250
1 Kelvin absolute error by the true value of

00:07:15.250 --> 00:07:19.589
275.15 Kelvin. OK, 1 divided by 275, that's

00:07:19.589 --> 00:07:23.470
going to be tiny. It's roughly 0.00363, which

00:07:23.470 --> 00:07:26.589
is about a 0.36% relative error. Yeah, you

00:07:26.589 --> 00:07:29.029
go from a 50% error to a fraction of a percent,

00:07:29.180 --> 00:07:32.819
describing the exact same physical reality just

00:07:32.819 --> 00:07:36.680
by changing the scale. That is wild. So if you

00:07:36.680 --> 00:07:39.500
use relative error on a scale without a true

00:07:39.500 --> 00:07:42.920
zero, the percentages are essentially just meaningless

00:07:42.920 --> 00:07:45.560
noise. Completely meaningless. And the text points

00:07:45.560 --> 00:07:48.399
out another quirk about how arithmetic operations

00:07:48.399 --> 00:07:50.980
interact with these two types of errors, which

00:07:50.980 --> 00:07:53.319
feels almost counterintuitive until you really

00:07:53.319 --> 00:07:55.120
break it down. Oh, you mean the multiplication

00:07:55.120 --> 00:07:57.639
versus addition quirk? Yeah. Let's say you have

00:07:57.639 --> 00:08:00.000
your true value and your approximation, and you

00:08:00.000 --> 00:08:02.759
multiply both of them by a constant number. Well,

00:08:03.019 --> 00:08:05.800
when you multiply by a constant, the absolute

00:08:05.800 --> 00:08:08.600
error changes directly. Like, if you multiply

00:08:08.600 --> 00:08:11.100
everything by 10, your absolute error becomes

00:08:11.100 --> 00:08:14.139
10 times larger. Because the raw distance between

00:08:14.139 --> 00:08:16.959
the numbers just scales up. Exactly. But your

00:08:16.959 --> 00:08:19.279
relative error stays completely identical. Right.

00:08:19.420 --> 00:08:21.420
Because the constant cancels out in the ratio,

00:08:21.660 --> 00:08:23.300
you multiply the top of the fraction and the

00:08:23.300 --> 00:08:25.259
bottom of the fraction by 10, so the proportion

00:08:25.259 --> 00:08:28.360
remains exactly the same. But if you add a

00:08:28.360 --> 00:08:31.160
non-zero constant to both the true value and the

00:08:31.160 --> 00:08:34.539
approximated value, the reverse happens. Absolute

00:08:34.539 --> 00:08:37.039
error is completely insensitive to addition.

00:08:37.519 --> 00:08:39.879
Let me make sure I follow. So if the true value

00:08:39.879 --> 00:08:42.919
is 10 and the guess is 8, the absolute error

00:08:42.919 --> 00:08:46.039
is 2. Right. And if you add 100 to both, the

00:08:46.039 --> 00:08:49.940
true value is 110, the guess is 108. The absolute

00:08:49.940 --> 00:08:53.340
error is still simply 2. Exactly. The raw gap

00:08:53.340 --> 00:08:55.940
didn't change. Yeah. But the relative error gets

00:08:55.940 --> 00:08:57.740
completely warped. Oh, I see. Originally it was

00:08:57.740 --> 00:09:01.179
2 over 10, which is a 20% error. Yep. Now it's

00:09:01.179 --> 00:09:05.860
2 over 110, which is barely a 1.8% error. Exactly.
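
The multiplication-versus-addition quirk just walked through can be sketched directly (helper names are ours):

```python
# A constant multiplier scales absolute error but leaves relative error
# unchanged; a constant offset does exactly the reverse.
def abs_err(t, a):
    return abs(t - a)

def rel_err(t, a):
    return abs(t - a) / abs(t)

t, a = 10.0, 8.0
assert abs_err(t, a) == 2.0 and rel_err(t, a) == 0.2  # the 20 percent baseline

assert abs_err(10 * t, 10 * a) == 20.0  # absolute error scales up tenfold
assert rel_err(10 * t, 10 * a) == 0.2   # relative error is untouched

assert abs_err(t + 100, a + 100) == 2.0  # absolute error is untouched
print(rel_err(t + 100, a + 100) * 100)   # deflated to about 1.8 percent
```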

00:09:06.159 --> 00:09:08.399
Just by adding a constant, you've artificially

00:09:08.399 --> 00:09:10.840
deflated your relative error, creating this false

00:09:10.840 --> 00:09:13.019
sense of accuracy without actually improving

00:09:13.019 --> 00:09:14.860
your measurement at all. That feels like a trick

00:09:14.860 --> 00:09:17.259
someone could use to manipulate data. Oh, it

00:09:17.259 --> 00:09:20.019
absolutely is, which raises an important question.

00:09:20.460 --> 00:09:22.440
What happens when we move from theoretical math

00:09:22.440 --> 00:09:25.100
on a chalkboard to the physical tools and digital

00:09:25.100 --> 00:09:27.659
machines we use every day? Right. How do

00:09:27.659 --> 00:09:29.899
real-world instruments actually handle these errors?

00:09:30.240 --> 00:09:32.830
The source dives into this, particularly looking

00:09:32.830 --> 00:09:35.230
at instruments like analog voltmeters, pressure

00:09:35.230 --> 00:09:38.970
gauges, and thermometers. Manufacturers of these

00:09:38.970 --> 00:09:41.190
indicating measurement instruments frequently

00:09:41.190 --> 00:09:43.909
guarantee their accuracy not as a percentage

00:09:43.909 --> 00:09:46.009
of the actual reading you're looking at, but

00:09:46.009 --> 00:09:48.370
as a percentage of the instrument's full-scale

00:09:48.370 --> 00:09:50.870
reading capability. These are known as limiting

00:09:50.870 --> 00:09:53.590
errors, right? Or guarantee errors. Exactly.
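
The full-scale guarantee just named can be sketched with a hypothetical instrument (the function name and the gauge numbers here are illustrative, not from the source):

```python
# An instrument whose accuracy is guaranteed as a percentage of its
# full-scale reading (a limiting or guarantee error): the worst-case
# absolute error is fixed, so relative error blows up at small readings.
def worst_case_relative_error(reading, full_scale, guarantee_pct):
    max_abs_err = full_scale * guarantee_pct / 100  # fixed, scale-based bound
    return max_abs_err / reading * 100              # as a percent of the reading

FULL_SCALE = 100.0  # say, a 0-100 unit gauge accurate to 1% of full scale
for reading in (100.0, 50.0, 10.0, 2.0):
    # The same fixed absolute bound hurts more and more at the low end.
    print(reading, worst_case_relative_error(reading, FULL_SCALE, 1.0))
```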

00:09:53.909 --> 00:09:55.870
And they create a very dangerous dynamic for

00:09:55.870 --> 00:09:59.059
the user. It implies that the maximum possible

00:09:59.059 --> 00:10:01.399
absolute error is fixed based on the top of the

00:10:01.399 --> 00:10:04.840
scale. Meaning the relative error can spike dramatically

00:10:04.840 --> 00:10:07.440
when you are measuring small values. Precisely.

00:10:07.799 --> 00:10:10.299
The text uses a laboratory beaker to illustrate

00:10:10.299 --> 00:10:12.639
this point. Yes. The beaker example is perfect.

00:10:13.240 --> 00:10:15.059
If you have a beaker that holds a maximum of

00:10:15.059 --> 00:10:17.639
6 milliliters and you measure out 5 milliliters

00:10:17.639 --> 00:10:20.519
when the true volume is actually 6, your percent

00:10:20.519 --> 00:10:24.039
error is roughly 16.7 percent. Okay. Not great,

00:10:24.100 --> 00:10:26.960
but manageable. But if the true volume is only

00:10:26.960 --> 00:10:29.940
one milliliter and the instrument, maintaining

00:10:29.940 --> 00:10:32.639
that same full-scale absolute error guarantee,

00:10:32.639 --> 00:10:35.539
reads two milliliters. Your relative error is

00:10:35.539 --> 00:10:39.039
now 100%. Exactly. Using an instrument at the

00:10:39.039 --> 00:10:42.460
very bottom of its scale is a recipe for disastrously

00:10:42.460 --> 00:10:44.940
high relative errors. Because the physical margin

00:10:44.940 --> 00:10:47.659
of error on the glass doesn't shrink just because

00:10:47.659 --> 00:10:50.220
you're measuring less liquid. Exactly. And I

00:10:50.220 --> 00:10:52.500
imagine this physical limitation translates directly

00:10:52.500 --> 00:10:55.360
into the digital realm too. It absolutely does.

00:10:55.659 --> 00:10:58.419
Digital systems and computers cannot represent

00:10:58.419 --> 00:11:01.559
all real numbers perfectly. I mean, they have

00:11:01.559 --> 00:11:04.980
finite memory. Right. You can't store an infinitely

00:11:04.980 --> 00:11:08.480
repeating decimal in a machine with fixed RAM.

00:11:08.879 --> 00:11:11.860
This leads to unavoidable truncation or rounding

00:11:11.860 --> 00:11:14.279
errors in almost every single calculation. We

00:11:14.279 --> 00:11:16.500
call that machine precision, right? The computer

00:11:16.500 --> 00:11:18.360
literally just runs out of space to hold the

00:11:18.360 --> 00:11:20.240
numbers and has to mathematically chop the end

00:11:20.240 --> 00:11:22.860
off. Yes, and this introduces a crucial concept

00:11:22.860 --> 00:11:25.720
in numerical analysis called numerical stability.

00:11:25.919 --> 00:11:27.759
Okay, I remember this from the text. Numerical

00:11:27.759 --> 00:11:30.419
stability measures how much a specific algorithm

00:11:30.419 --> 00:11:33.419
allows those tiny initial rounding errors to

00:11:33.419 --> 00:11:35.899
propagate and amplify into substantial errors

00:11:35.899 --> 00:11:38.860
in the final output. So a numerically stable

00:11:38.860 --> 00:11:42.100
algorithm is robust. It suppresses the noise,

00:11:42.139 --> 00:11:44.700
like if the input is slightly chopped off or

00:11:44.700 --> 00:11:47.320
malformed. And the output is still pretty close

00:11:47.320 --> 00:11:49.600
to the truth. Exactly. It dampens the error.
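
The stable-versus-unstable contrast can be made concrete with a classic illustration (this sqrt example is our own, not from the source):

```python
import math

# Subtracting two nearly equal square roots: the naive form cancels
# catastrophically for large x; an algebraically identical rewrite
# keeps its accuracy.
def naive(x):
    return math.sqrt(x + 1) - math.sqrt(x)

def stable(x):
    # Conjugate trick: sqrt(x+1) - sqrt(x) == 1 / (sqrt(x+1) + sqrt(x))
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e16
print(naive(x))   # prints 0.0: every meaningful digit was lost
print(stable(x))  # about 5e-9, close to the true answer
```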

00:11:49.879 --> 00:11:53.399
But a numerically unstable algorithm exhibits

00:11:53.399 --> 00:11:55.820
dramatic error growth. It's like the butterfly

00:11:55.820 --> 00:11:58.399
effect of mathematics. Oh, wow. A microscopic

00:11:58.399 --> 00:12:00.960
change in the input, just a tiny approximation

00:12:00.960 --> 00:12:03.940
error, can cascade through the algorithm's internal

00:12:03.940 --> 00:12:06.340
steps and render the final results completely

00:12:06.340 --> 00:12:08.740
unreliable. So what does this all mean for computer

00:12:08.740 --> 00:12:11.799
science? If algorithms are inherently saddled

00:12:11.799 --> 00:12:13.980
with these rounding errors from the very first

00:12:13.980 --> 00:12:16.860
nanosecond, how do programmers mathematically

00:12:16.860 --> 00:12:19.759
guarantee that an algorithm will spit out reasonably

00:12:19.759 --> 00:12:23.539
accurate approximation before the heat death

00:12:23.539 --> 00:12:25.519
of the universe. That brings us to the realm

00:12:25.519 --> 00:12:28.639
of computational complexity theory and polynomial

00:12:28.639 --> 00:12:31.399
time approximation. For an algorithm to be practically

00:12:31.399 --> 00:12:34.399
useful, it needs to be able to compute an approximation

00:12:34.399 --> 00:12:37.860
within a specific time limit. This time duration

00:12:37.860 --> 00:12:41.039
must be polynomial, meaning it scales at a reasonable,

00:12:41.259 --> 00:12:44.220
manageable rate in relation to two things. The

00:12:44.220 --> 00:12:46.620
size of the input data and the encoding size

00:12:46.620 --> 00:12:48.899
of the error bound. Whoa, hold on. Before we

00:12:48.899 --> 00:12:50.960
lose the listener and me, let's back up. What

00:12:50.960 --> 00:12:53.940
does encoding size actually mean in plain English?

00:12:54.500 --> 00:12:56.960
Because the source throws out terms like big

00:12:56.960 --> 00:13:00.240
O of log one over epsilon bits, and that sounds

00:13:00.240 --> 00:13:02.559
incredibly dense. It sounds super intimidating,

00:13:02.580 --> 00:13:04.600
but it's actually just about computer memory.

00:13:05.549 --> 00:13:07.889
Encoding size is simply how many bits of memory

00:13:07.889 --> 00:13:09.850
it takes to tell the computer how precise we

00:13:09.850 --> 00:13:12.970
need it to be. The tighter you want the absolute

00:13:12.970 --> 00:13:15.309
error, meaning the smaller your epsilon is, the

00:13:15.309 --> 00:13:17.529
more bits you need to encode that tiny, tiny

00:13:17.529 --> 00:13:20.070
fraction, and the exponentially longer the algorithm

00:13:20.070 --> 00:13:22.450
takes to run. That makes perfect sense. We say

00:13:22.450 --> 00:13:25.289
a value is polynomially computable with absolute

00:13:25.289 --> 00:13:27.850
error if we can algorithmically find an approximation

00:13:27.850 --> 00:13:30.350
within that reasonable time limit for any specified

00:13:30.350 --> 00:13:33.129
maximum error. And the exact same concept applies

00:13:33.129 --> 00:13:35.820
to relative error. Here's a question that tripped

00:13:35.820 --> 00:13:37.940
me up while reading the text. Let's hear it.

00:13:38.220 --> 00:13:41.419
If a computer has an algorithm that can successfully

00:13:41.419 --> 00:13:45.220
solve for a relative error in polynomial time,

00:13:45.840 --> 00:13:48.019
does that automatically mean it can use that

00:13:48.019 --> 00:13:50.840
same efficiency to solve for an absolute error?

00:13:50.980 --> 00:13:52.860
Like, are they mathematically interchangeable?

00:13:52.940 --> 00:13:55.580
It is a brilliant question, and the answer is

00:13:55.580 --> 00:13:58.559
a strict one-way street. The source provides

00:13:58.559 --> 00:14:01.860
a fascinating proof sketch for this. OK. Polynomial

00:14:01.860 --> 00:14:05.240
computability with relative error implies polynomial

00:14:05.240 --> 00:14:08.000
computability with absolute error, but not the

00:14:08.000 --> 00:14:09.879
other way around. Okay, I need a visual for this.

00:14:10.399 --> 00:14:12.759
How does solving the relative side guarantee

00:14:12.759 --> 00:14:15.659
the absolute result? Let's use a metaphor. Imagine

00:14:15.659 --> 00:14:17.779
you are throwing darts at a wall in the dark,

00:14:18.200 --> 00:14:20.759
trying to hit a microscopic bullseye. Okay, I'm

00:14:20.759 --> 00:14:23.929
with you. That bullseye is your target absolute

00:14:23.929 --> 00:14:26.629
error, your epsilon. You start by throwing a

00:14:26.629 --> 00:14:29.009
special relative error dart. You basically tell

00:14:29.009 --> 00:14:31.330
the algorithm to just find an approximation with

00:14:31.330 --> 00:14:34.370
a massive loose relative error of one half, or

00:14:34.370 --> 00:14:36.889
50%. OK, so you're just asking the computer to

00:14:36.889 --> 00:14:38.929
get you into the right ballpark. You aren't asking

00:14:38.929 --> 00:14:43.200
for extreme precision yet. Precisely. The algorithm

00:14:43.200 --> 00:14:45.799
spits out a rough rational number approximation.

00:14:46.080 --> 00:14:48.299
Let's call it dart number one. Right. Because

00:14:48.299 --> 00:14:51.559
we know this dart landed within a 50 % relative

00:14:51.559 --> 00:14:55.139
error of the true bullseye, a mathematical principle

00:14:55.139 --> 00:14:58.059
called the reverse triangle inequality allows

00:14:58.059 --> 00:15:00.840
us to deduce a crucial boundary. Okay, what does

00:15:00.840 --> 00:15:03.700
that boundary tell us? It tells us that the absolute

00:15:03.700 --> 00:15:06.659
magnitude of our unknown bullseye cannot possibly

00:15:06.659 --> 00:15:09.059
be larger than twice the magnitude of where dart

00:15:09.059 --> 00:15:11.639
number one landed. I see. Even though we don't

00:15:11.639 --> 00:15:14.490
know exactly where the true value is, our rough

00:15:14.490 --> 00:15:17.629
sloppy guess just allowed us to draw a firm circle

00:15:17.629 --> 00:15:20.440
on the wall. We now have a mathematical ceiling

00:15:20.440 --> 00:15:23.059
on how big the true value could possibly be.

00:15:23.399 --> 00:15:26.220
Yes. And because that first throw was so loose,

00:15:26.679 --> 00:15:29.179
the computer calculated it incredibly fast. Right.

00:15:29.220 --> 00:15:31.179
It didn't take a million years. Exactly. Now

00:15:31.179 --> 00:15:33.820
we throw dart number two. We invoke the exact

00:15:33.820 --> 00:15:36.019
same relative error algorithm, but this time

00:15:36.019 --> 00:15:38.919
we give it a much tighter specific target. We

00:15:38.919 --> 00:15:41.159
set our new target based on our desired absolute

00:15:41.159 --> 00:15:44.019
error divided by that ceiling we just drew. Oh,

00:15:44.039 --> 00:15:46.830
that is so clever. You use the rough boundary

00:15:46.830 --> 00:15:49.289
from the first throw to calibrate the precision

00:15:49.289 --> 00:15:52.549
of the second throw. Exactly. And because you

00:15:52.549 --> 00:15:55.129
proved the true value is trapped inside that

00:15:55.129 --> 00:15:58.629
boundary, the math cancels out perfectly. The

00:15:58.629 --> 00:16:00.950
final distance between the true value and your

00:16:00.950 --> 00:16:03.509
second dart is mathematically guaranteed to be

00:16:03.509 --> 00:16:06.769
less than your absolute error target. You basically

00:16:06.769 --> 00:16:09.570
used a relative error tool to perfectly solve

00:16:09.570 --> 00:16:12.710
for an absolute error bound. It is an incredibly

00:16:12.710 --> 00:16:16.450
elegant workaround, but as I said, it is an asymmetrical

00:16:16.450 --> 00:16:19.049
relationship. Absolute does not imply relative.
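
The two-dart reduction described above can be sketched as a short Python routine, with a simulated oracle standing in for a real polynomial-time relative-error algorithm (all names and the hidden value are illustrative):

```python
# Given an oracle achieving any requested relative error, two calls
# yield an absolute-error guarantee.
def approx_with_absolute_error(rel_oracle, eps):
    # Dart one: a loose throw with a 50% relative-error target.
    y1 = rel_oracle(0.5)
    # Reverse triangle inequality: |true value| <= 2 * |y1|, a ceiling.
    ceiling = 2 * abs(y1)
    if ceiling == 0:
        return 0.0  # only possible if the true value is exactly 0
    # Dart two: calibrate the relative target against that ceiling,
    # so the final absolute error is at most eps.
    return rel_oracle(eps / ceiling)

# Hypothetical hidden true value; this oracle errs by half its allowance.
v = 137.03599
oracle = lambda rel: v * (1 + rel / 2)

assert abs(approx_with_absolute_error(oracle, 1e-6) - v) <= 1e-6
```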

00:16:19.710 --> 00:16:22.049
If you only have a dart that computes absolute

00:16:22.049 --> 00:16:25.159
error, you cannot guarantee you can compute relative

00:16:25.159 --> 00:16:27.279
error efficiently. Why not? Why doesn't the trick

00:16:27.279 --> 00:16:30.340
work in reverse? Because relative error is a

00:16:30.340 --> 00:16:33.259
proportion based on the true value. If you don't

00:16:33.259 --> 00:16:34.960
know the true value and you don't have a floor

00:16:34.960 --> 00:16:37.860
beneath it, your absolute error algorithm might

00:16:37.860 --> 00:16:39.879
be hunting for a proportion of a number that

00:16:39.879 --> 00:16:42.559
is infinitely approaching zero. Ah, and the calculation

00:16:42.559 --> 00:16:45.279
would just take forever. Exactly. There is one

00:16:45.279 --> 00:16:47.679
significant exception, though. You can do it

00:16:47.679 --> 00:16:49.960
if you can compute a positive lower bound on

00:16:49.960 --> 00:16:52.460
the magnitude of the true value, meaning you

00:16:52.460 --> 00:16:54.799
have to be able to mathematically prove that

00:16:54.799 --> 00:16:57.100
the true value is strictly greater than some

00:16:57.100 --> 00:16:59.340
positive number. Right. If you know the floor

00:16:59.340 --> 00:17:01.419
is solid, you can reverse the math we just did.

00:17:01.740 --> 00:17:03.600
But if you don't have that floor, the absolute

00:17:03.600 --> 00:17:06.039
algorithm is flying blind when it comes to relative

00:17:06.039 --> 00:17:10.140
proportions. This profound asymmetry brings up

00:17:10.140 --> 00:17:12.640
a special class of algorithms the source mentions,

00:17:13.039 --> 00:17:15.579
the fully polynomial time approximation scheme,

00:17:16.240 --> 00:17:20.670
or FPTAS. I noticed this. For an FPTAS, the time

00:17:20.670 --> 00:17:22.589
complexity doesn't just scale with the logarithm

00:17:22.589 --> 00:17:25.769
of the bits. It scales polynomially with the

00:17:25.769 --> 00:17:28.650
reciprocal of the relative error itself. Yes.

00:17:28.710 --> 00:17:32.250
And the text points out that this specific dependence

00:17:32.250 --> 00:17:34.809
is the defining characteristic that makes an

00:17:34.809 --> 00:17:38.470
FPTAS uniquely powerful compared to weaker approximation

00:17:38.470 --> 00:17:40.490
schemes. And if we really want to blow this wide

00:17:40.490 --> 00:17:42.410
open, we have to recognize that everything we've

00:17:42.410 --> 00:17:45.190
discussed so far, rulers, thermometers, absolute

00:17:45.190 --> 00:17:47.910
value brackets, dartboards, has been entirely

00:17:47.910 --> 00:17:50.869
focused on scalar numbers. Single one-dimensional

00:17:50.869 --> 00:17:54.230
values. Exactly. But the real world and the software

00:17:54.230 --> 00:17:57.390
that runs it is rarely one -dimensional. The

00:17:57.390 --> 00:18:00.650
final climax of the text generalizes all of these

00:18:00.650 --> 00:18:03.029
definitions into higher dimensions. Right. We

00:18:03.029 --> 00:18:06.369
move from single variables to massive n-dimensional

00:18:06.369 --> 00:18:09.250
vectors, matrices, and normed vector spaces.

00:18:09.829 --> 00:18:12.450
When you are quantifying the distance between

00:18:12.450 --> 00:18:15.650
a true complex matrix and an approximated matrix,

00:18:16.029 --> 00:18:19.069
you can't just slap simple absolute value brackets

00:18:19.069 --> 00:18:21.130
on it anymore. Yeah, this is all well and good

00:18:21.130 --> 00:18:23.589
for measuring a single straight line, but what

00:18:23.589 --> 00:18:26.150
happens when an AI is trying to compress an image

00:18:26.150 --> 00:18:29.490
with millions of pixels simultaneously? How do

00:18:29.490 --> 00:18:31.869
you measure the error of a million different

00:18:31.869 --> 00:18:34.230
points at once? The source says we have to replace

00:18:34.230 --> 00:18:37.349
absolute value with vector norms. Yes, and we

00:18:37.349 --> 00:18:39.250
have several different types of norms depending

00:18:39.250 --> 00:18:41.509
on what kind of error we care about. Okay, walk

00:18:41.509 --> 00:18:43.950
me through them. The text lists the L1 norm,

00:18:43.950 --> 00:18:46.690
which is just the sum of the absolute values of the components.

00:18:46.700 --> 00:18:50.039
Then there's the L2 norm, also known as the Euclidean

00:18:50.039 --> 00:18:52.220
norm, which measures the straight line distance

00:18:52.220 --> 00:18:54.339
through the multidimensional space. And the source

00:18:54.339 --> 00:18:57.000
specifically highlights the Frobenius norm, which

00:18:57.000 --> 00:18:59.660
is used heavily in image processing. Yeah, let's

00:18:59.660 --> 00:19:02.240
stick with the image compression example. When

00:19:02.240 --> 00:19:05.019
you take a giant high-resolution original image,

00:19:05.099 --> 00:19:08.200
which is mathematically just a massive matrix

00:19:08.200 --> 00:19:11.500
of pixel values, and you compress it into a small

00:19:11.500 --> 00:19:15.119
JPEG file, you are creating an approximation.

00:19:15.390 --> 00:19:17.950
So how does the norm measure the error there?

00:19:18.609 --> 00:19:21.690
The Frobenius norm acts like a mathematical blanket

00:19:21.690 --> 00:19:24.950
thrown over the entire image. It calculates the

00:19:24.950 --> 00:19:27.109
square root of the sum of the absolute squares

00:19:27.109 --> 00:19:29.430
of all the differences. Oh, I see. Essentially,

00:19:29.470 --> 00:19:31.890
it gives you an average measure of the overall

00:19:31.890 --> 00:19:34.529
multi-dimensional error across the entire matrix.

00:19:34.930 --> 00:19:37.250
It tells you if the JPEG generally looks like

00:19:37.250 --> 00:19:39.329
the original. But then there's the L infinity

00:19:39.329 --> 00:19:41.529
norm, which works completely differently. OK,

00:19:41.650 --> 00:19:44.029
how so? Instead of a blanket, the L infinity

00:19:44.029 --> 00:19:48.309
norm acts like a highly sensitive alarm system.

00:19:48.529 --> 00:19:50.250
Like it's looking for the worst case scenario.

00:19:50.589 --> 00:19:52.329
Precisely. It doesn't care about the average.

00:19:52.450 --> 00:19:55.190
It scans the entire matrix and looks exclusively

00:19:55.190 --> 00:19:57.809
for the single largest absolute difference. Wow.

00:19:58.250 --> 00:20:00.450
It finds the one pixel that is the most wrong

00:20:00.450 --> 00:20:03.369
and defines the error of the entire matrix based

00:20:03.369 --> 00:20:06.059
on that single worst case scenario. That is fascinating.
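The norms just described can be computed directly. Here is a minimal pure-Python sketch; the 2x2 "image" matrices and the helper name norm_errors are invented for illustration:

```python
import math

def norm_errors(original, approx):
    """Measure the error between two equal-sized matrices
    (e.g. an original image and its compressed approximation)
    using the three norms discussed above."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(original, approx)
             for a, b in zip(row_a, row_b)]
    l1 = sum(diffs)                                   # total accumulated error
    frobenius = math.sqrt(sum(d * d for d in diffs))  # "blanket" overall error
    l_inf = max(diffs)                                # single worst-case entry
    return l1, frobenius, l_inf

# A hypothetical tiny "image" and a lossy approximation of it,
# where one pixel is badly wrong.
original = [[100, 102], [98, 255]]
approx   = [[101, 102], [98, 245]]

l1, fro, linf = norm_errors(original, approx)
print(l1)    # 11: sum of all per-pixel differences
print(linf)  # 10: dominated entirely by the worst pixel
```

Note how the L1 total and the L-infinity worst case tell different stories about the very same approximation.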

00:20:06.200 --> 00:20:08.279
Having these different tools allows computer

00:20:08.279 --> 00:20:11.319
scientists to define absolute and relative error

00:20:11.319 --> 00:20:14.500
across an entire landscape of data simultaneously,

00:20:14.920 --> 00:20:16.700
whether they care about the average performance

00:20:16.700 --> 00:20:19.160
or, you know, protecting against the worst case

00:20:19.160 --> 00:20:21.559
outlier. Which is fundamental to modern statistical

00:20:21.559 --> 00:20:24.099
modeling, artificial intelligence, and machine

00:20:24.099 --> 00:20:26.119
learning. So let's bring this all together for

00:20:26.119 --> 00:20:28.839
you. We started this deep dive looking at a simple

00:20:28.839 --> 00:20:31.539
plastic ruler, measuring a piece of paper to

00:20:31.539 --> 00:20:34.410
the nearest millimeter. We unpacked the profound

00:20:34.410 --> 00:20:37.930
difference between the raw physical mistake of

00:20:37.930 --> 00:20:41.369
absolute error and the crucial proportionate

00:20:41.369 --> 00:20:43.849
context of relative error. We navigated the traps

00:20:43.849 --> 00:20:46.589
of interval scales, exploring how measuring temperature

00:20:46.589 --> 00:20:49.710
in Celsius instead of Kelvin can warp a tiny

00:20:49.710 --> 00:20:52.930
fraction of a percent error into a 50% disaster.
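That Celsius-versus-Kelvin warping is easy to reproduce in a few lines. A quick sketch, where the 1-degree reading is an invented value chosen to make the effect obvious:

```python
# Interval-scale trap: the same half-degree measurement error yields
# wildly different relative errors depending on the scale's zero point.
# (The 1 degree C reading is illustrative, not from the source.)
true_c, measured_c = 1.0, 1.5
abs_error = abs(measured_c - true_c)              # 0.5 degrees either way

rel_error_celsius = abs_error / abs(true_c)       # 0.5 / 1
rel_error_kelvin = abs_error / (true_c + 273.15)  # 0.5 / 274.15

print(f"{rel_error_celsius:.1%}")  # 50.0%
print(f"{rel_error_kelvin:.2%}")   # 0.18%
```

The absolute error never changes; only the denominator does, which is exactly why the choice of scale can turn a fraction of a percent into a 50% disaster.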

00:20:53.029 --> 00:20:55.849
We saw how the physical tools we rely on, from

00:20:55.849 --> 00:20:58.450
analog car speedometers to laboratory beakers,

00:20:58.990 --> 00:21:01.049
inherently introduce limiting errors that become

00:21:01.049 --> 00:21:03.049
incredibly dangerous at the bottom of their

00:21:03.049 --> 00:21:05.670
scales. And finally, we followed the math into

00:21:05.670 --> 00:21:08.490
the digital realm, exploring how algorithms use

00:21:08.490 --> 00:21:11.490
polynomial time boundaries and vector norms to

00:21:11.490 --> 00:21:14.569
guarantee that their multi-dimensional approximations

00:21:14.569 --> 00:21:17.549
won't spiral completely out of control. Which

00:21:17.549 --> 00:21:19.869
leaves us with one final provocative thought.

00:21:19.930 --> 00:21:22.230
I like the sound of that. It's grounded entirely

00:21:22.230 --> 00:21:25.970
in the text's brief but chilling mention of numerical

00:21:25.970 --> 00:21:29.130
stability. As we noted, numerically unstable

00:21:29.130 --> 00:21:31.569
algorithms may exhibit dramatic error growth

00:21:31.569 --> 00:21:35.289
from incredibly small input changes. That mathematical

00:21:35.289 --> 00:21:37.750
butterfly effect we talked about? Exactly. Now

00:21:37.750 --> 00:21:40.869
consider the vast and unimaginably complex digital

00:21:40.869 --> 00:21:42.970
algorithms running our modern world right now.

00:21:42.990 --> 00:21:45.829
Okay. They dictate global financial markets,

00:21:46.390 --> 00:21:48.789
trading millions of times a second based on predictive

00:21:48.789 --> 00:21:52.359
matrices. They optimize the flight paths of thousands

00:21:52.359 --> 00:21:54.940
of aircraft currently in the sky. They balance

00:21:54.940 --> 00:21:58.039
the real-time load of power grids across entire

00:21:58.039 --> 00:22:01.019
continents. And all of them are relying on approximations.
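That floating-point butterfly effect can be shown in a few lines. A made-up illustration (the formula choice is ours, not the text's) of catastrophic cancellation, one classic source of such instability:

```python
# How a numerically unstable formula amplifies floating-point rounding
# error: subtracting two nearly equal numbers destroys the significant
# digits (catastrophic cancellation).
import math

x = 1e-8

# Naive formula for (1 - cos x) / x**2; the true value is ~0.5.
naive = (1 - math.cos(x)) / x**2

# Algebraically identical, numerically stable rewrite using the
# half-angle identity 1 - cos x = 2 * sin(x/2)**2.
stable = (math.sin(x / 2) / (x / 2)) ** 2 / 2

print(naive)   # 0.0 -- the true answer was swallowed by rounding
print(stable)  # ~0.5
```

Both expressions are mathematically identical, yet the naive one returns an answer that is 100% wrong from a microscopic, unavoidable rounding step.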

00:22:01.180 --> 00:22:03.140
All of them are utilizing floating point math,

00:22:03.400 --> 00:22:05.619
matrices, and bounded errors. So the question

00:22:05.619 --> 00:22:08.960
to ponder is this. How many catastrophic cascading

00:22:08.960 --> 00:22:11.880
failures in our world today didn't start with

00:22:11.880 --> 00:22:15.940
a massive obvious mistake? Oh man! How many systemic

00:22:15.940 --> 00:22:19.039
crashes, flash crashes in the stock market, or

00:22:19.039 --> 00:22:22.859
inexplicable regional grid failures began simply

00:22:22.859 --> 00:22:25.980
with a microscopic, mathematically unavoidable

00:22:25.980 --> 00:22:29.380
approximation error? An error that an unstable

00:22:29.380 --> 00:22:32.559
algorithm quietly caught in the dark and relentlessly

00:22:32.559 --> 00:22:35.019
amplified until the illusion of precision shattered

00:22:35.019 --> 00:22:36.819
entirely. It definitely makes you look at the

00:22:36.819 --> 00:22:38.400
calculator on your phone a little differently,

00:22:38.599 --> 00:22:40.259
doesn't it? Thank you for joining us on this

00:22:40.259 --> 00:22:42.660
deep dive into the source material. Keep questioning

00:22:42.660 --> 00:22:44.480
the numbers, keep exploring, and we'll catch

00:22:44.480 --> 00:22:44.960
you next time.
