WEBVTT

00:00:00.000 --> 00:00:04.759
Right now, inside your head, there are 86 billion

00:00:04.759 --> 00:00:08.119
tiny wet cells just shouting at each other in

00:00:08.119 --> 00:00:10.759
the dark. Yeah, it's a completely chaotic electrochemical

00:00:10.759 --> 00:00:14.160
storm. Right. And somehow that exact same biological

00:00:14.160 --> 00:00:16.100
chaos, I mean, that foundational architecture

00:00:16.100 --> 00:00:18.960
of the human mind is currently being used to

00:00:18.960 --> 00:00:22.420
like drive cars and beat chess grandmasters and

00:00:22.420 --> 00:00:25.399
write computer code. It is arguably the most

00:00:25.399 --> 00:00:28.239
profound crossover between biology and computer

00:00:28.239 --> 00:00:30.309
science in human history. We basically looked

00:00:30.309 --> 00:00:32.530
inside ourselves, figured out the wiring, and

00:00:32.530 --> 00:00:35.130
then decided to just recreate it in silicon.

00:00:35.570 --> 00:00:38.289
Which brings us to today's custom-tailored deep

00:00:38.289 --> 00:00:41.090
dive designed specifically for you. We're working

00:00:41.090 --> 00:00:43.729
from a really comprehensive Wikipedia article

00:00:43.729 --> 00:00:45.850
on neural network. It's a massive topic. Oh,

00:00:45.950 --> 00:00:49.000
massive. And our mission today is simple. We

00:00:49.000 --> 00:00:51.420
want to cut through the heavy, intimidating jargon

00:00:51.420 --> 00:00:54.140
of artificial intelligence and discover the fascinating

00:00:54.140 --> 00:00:57.259
aha moments that link the squishy biology of the

00:00:57.259 --> 00:01:00.140
human brain to the cutting edge AI shaping our

00:01:00.140 --> 00:01:02.280
world today. Because the reality is the mechanics

00:01:02.280 --> 00:01:04.480
behind all this aren't magic. No, not at all.

00:01:04.719 --> 00:01:07.700
They're logical. They're traceable. And once

00:01:07.700 --> 00:01:10.680
you understand the underlying mechanism, the

00:01:10.680 --> 00:01:12.859
whole landscape of modern technology just makes

00:01:12.859 --> 00:01:15.230
a lot more sense. OK, let's unpack this. Because

00:01:15.230 --> 00:01:17.090
we hear the term neural network thrown around

00:01:17.090 --> 00:01:19.530
constantly, right? It's the ultimate tech buzzword.

00:01:19.950 --> 00:01:23.569
But what are we actually talking about at a foundational

00:01:23.569 --> 00:01:28.170
level? So at its absolute most basic, a neural

00:01:28.170 --> 00:01:31.129
network is really just a group of interconnected

00:01:31.129 --> 00:01:34.170
units. We call them neurons. And they just send

00:01:34.170 --> 00:01:36.349
signals to one another. That's it. That's it.

00:01:36.450 --> 00:01:39.640
That is the foundational concept. Now, those

00:01:39.640 --> 00:01:41.840
units can be actual biological cells, like the

00:01:41.840 --> 00:01:44.239
ones in your brain right now, or they can be

00:01:44.239 --> 00:01:46.819
purely mathematical models sitting inside a computer

00:01:46.819 --> 00:01:49.489
program. But the architecture is exactly the

00:01:49.489 --> 00:01:51.390
same. You have units, connections, and signals.

00:01:51.430 --> 00:01:53.590
And there's a massive paradox right at the center

00:01:53.590 --> 00:01:56.189
of this that really stood out in the source material.

00:01:56.629 --> 00:01:59.930
Because an individual neuron on its own is incredibly

00:01:59.930 --> 00:02:02.030
simple. Well, very simple. It basically just

00:02:02.030 --> 00:02:04.469
takes in a signal, reaches a threshold, and passes

00:02:04.469 --> 00:02:06.730
a signal along. I mean, it doesn't possess intelligence.
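
A minimal sketch of that single-neuron behavior in Python (every number here is invented purely for illustration): the unit sums its incoming signals, compares the total to a threshold, and either fires or stays quiet.

```python
# One neuron on its own is just a switch: it sums incoming signals,
# compares the total to a threshold, and passes a signal along (or not).
# All values below are invented for illustration.

def neuron(inputs, threshold=1.0):
    """Return 1 if the summed input reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

print(neuron([0.4, 0.3]))        # 0.7 < 1.0  -> 0, stays quiet
print(neuron([0.4, 0.3, 0.5]))   # 1.2 >= 1.0 -> 1, fires
```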

00:02:07.250 --> 00:02:09.310
But you put a bunch of them together in a network.

00:02:09.490 --> 00:02:11.189
And that's the power of emergence right there.

00:02:11.449 --> 00:02:13.870
A single neuron is just a biological switch.

00:02:14.189 --> 00:02:16.909
But when you link millions of these switches

00:02:16.909 --> 00:02:20.449
together, they can perform astoundingly complex

00:02:20.449 --> 00:02:23.949
tasks. It's crazy. But to really grasp how the

00:02:23.949 --> 00:02:26.610
artificial versions pull this off, we have to

00:02:26.610 --> 00:02:28.189
understand the original blueprint. We have to

00:02:28.189 --> 00:02:30.729
look at the biology first. Right, because before

00:02:30.729 --> 00:02:32.590
neural networks lived in computers, they lived

00:02:32.590 --> 00:02:35.430
inside us. And they still do. A biological neural

00:02:35.430 --> 00:02:38.860
network is a very real physical structure. It's

00:02:38.860 --> 00:02:41.520
a population of biological nerve cells, the neurons,

00:02:42.120 --> 00:02:43.900
chemically connected to each other by structures

00:02:43.900 --> 00:02:46.719
called synapses. And the sheer density of this

00:02:46.719 --> 00:02:49.479
web, it's just hard to overstate. Yeah, a single

00:02:49.479 --> 00:02:51.819
neuron in your brain might be connected to hundreds

00:02:51.819 --> 00:02:54.180
of thousands of other neurons through these synapses.

00:02:54.300 --> 00:02:58.009
It's a phenomenally dense, tangled web. And across

00:02:58.009 --> 00:03:01.210
that web, they are constantly firing electrochemical

00:03:01.210 --> 00:03:04.449
signals, which the source calls action potentials,

00:03:04.550 --> 00:03:07.349
to their connected neighbors. Right. But a common

00:03:07.349 --> 00:03:09.449
misconception is that this is just a blind chain

00:03:09.449 --> 00:03:12.310
reaction, like a row of dominoes falling over.

00:03:12.389 --> 00:03:14.810
Like one hits the next, hits the next. Exactly.

00:03:14.870 --> 00:03:17.550
But it's far more dynamic than that. A biological

00:03:17.550 --> 00:03:20.669
neuron generally serves one of two primary roles

00:03:20.669 --> 00:03:23.590
when it sends a signal. It's either excitatory

00:03:23.590 --> 00:03:26.759
or it's inhibitory. Wait, if an individual neuron

00:03:26.759 --> 00:03:29.539
is simple, how do we actually get complex thoughts

00:03:29.539 --> 00:03:33.319
from that? Is it like a stadium wave? A stadium

00:03:33.319 --> 00:03:36.020
wave? Yeah, or like a massive game of telephone

00:03:36.020 --> 00:03:38.020
where some people are shouting to keep the message

00:03:38.020 --> 00:03:40.800
going and others are trying to quiet it down?

00:03:41.139 --> 00:03:43.460
Actually, that analogy works perfectly with the

00:03:43.460 --> 00:03:45.919
source material. It's exactly like that. The

00:03:45.919 --> 00:03:48.639
excitatory role is the people shouting, right?

00:03:48.639 --> 00:03:50.699
Yeah. Amplifying and propagating the signals.

00:03:51.120 --> 00:03:53.500
They push the receiving neuron closer to its

00:03:53.500 --> 00:03:55.900
firing threshold. OK, so the inhibitory role.

00:03:55.960 --> 00:03:57.639
Those are the people trying to quiet it down.

00:03:57.979 --> 00:04:00.500
They actively suppress the receiving neuron,

00:04:00.919 --> 00:04:03.300
making it less likely to fire. So the receiving

00:04:03.300 --> 00:04:05.599
neuron is just sitting there tallying up all

00:04:05.599 --> 00:04:07.159
the shouting and all the shushing it's getting

00:04:07.159 --> 00:04:10.219
from its thousands of neighbors. Right. And all

00:04:10.219 --> 00:04:13.469
complex thought. All memory, all brain activity

00:04:13.469 --> 00:04:16.170
is simply the result of this constant balancing

00:04:16.170 --> 00:04:19.649
act between excitatory and inhibitory signals

00:04:19.649 --> 00:04:22.730
across massive populations of cells. And it scales

00:04:22.730 --> 00:04:25.009
up too. The source mentions you start with small

00:04:25.009 --> 00:04:27.970
groups called neural circuits. Yes, very small

00:04:27.970 --> 00:04:29.870
local connections. And then you link those up

00:04:29.870 --> 00:04:32.670
into large scale brain networks. And eventually

00:04:32.670 --> 00:04:34.889
those networks have to actually interact with

00:04:34.889 --> 00:04:37.689
the physical world, right? They do. The signals

00:04:37.689 --> 00:04:40.889
generated in the brain travel out through the

00:04:40.889 --> 00:04:43.410
nervous system, eventually crossing what are

00:04:43.410 --> 00:04:46.189
called neuromuscular junctions. Neuromuscular

00:04:46.189 --> 00:04:47.889
junctions. Basically where the nerve meets the

00:04:47.889 --> 00:04:50.430
muscle. The final signal tells the muscle cells

00:04:50.430 --> 00:04:53.139
to contract, resulting in physical motion. So

00:04:53.139 --> 00:04:55.339
for you listening right now, just think about

00:04:55.339 --> 00:04:58.060
that. Every single time you've ever moved a muscle

00:04:58.060 --> 00:05:01.100
or taken a breath or formed a thought, this exact

00:05:01.100 --> 00:05:03.920
microscopic chemical communication network is

00:05:03.920 --> 00:05:07.120
what made it happen. Millions of excitatory and

00:05:07.120 --> 00:05:09.579
inhibitory signals tallied up just so you could

00:05:09.579 --> 00:05:11.660
reach out and grab your phone. It's the biological

00:05:11.660 --> 00:05:14.410
engine of everything you do. It is completely

00:05:14.410 --> 00:05:17.529
mind-boggling. But knowing how the human brain

00:05:17.529 --> 00:05:21.050
works brings up a huge question. How do we cross

00:05:21.050 --> 00:05:23.269
the gap? Right, from biology to machines. Yeah,

00:05:23.310 --> 00:05:25.910
from biological cells to mathematical models

00:05:25.910 --> 00:05:28.930
in machines. Because honestly, I always assumed

00:05:28.930 --> 00:05:31.949
neural networks were a 21st century Silicon Valley

00:05:31.949 --> 00:05:34.329
invention. You know, a couple of engineers in

00:05:34.329 --> 00:05:37.550
a garage cracking a new algorithm. That is such

00:05:37.550 --> 00:05:40.350
a common assumption, but the historical bridge

00:05:40.350 --> 00:05:43.089
is much longer. and honestly much older than

00:05:43.089 --> 00:05:45.430
most people realize. We actually have to go all

00:05:45.430 --> 00:05:47.550
the way back to the 19th century. Right, the

00:05:47.550 --> 00:05:52.949
1800s. Yes, the 1800s. In 1873, a thinker named

00:05:52.949 --> 00:05:56.069
Alexander Bain, and then later the psychologist

00:05:56.069 --> 00:05:59.029
William James in 1890, they independently proposed

00:05:59.029 --> 00:06:02.420
the foundational theory. Wow. Yeah. Long before

00:06:02.420 --> 00:06:04.920
we had the tools to map the brain on a microscopic

00:06:04.920 --> 00:06:07.500
level, they theorized that human thought must

00:06:07.500 --> 00:06:09.920
emerge from the complex interactions among massive

00:06:09.920 --> 00:06:12.480
networks of neurons. So the conceptual framework,

00:06:12.620 --> 00:06:14.379
the whole idea that thought is a network effect

00:06:14.379 --> 00:06:16.300
was there before we even really had light bulbs?

00:06:16.639 --> 00:06:19.379
Precisely. And then, if you fast forward to the

00:06:19.379 --> 00:06:23.100
1930s, a movement called connectionism started

00:06:23.100 --> 00:06:26.439
taking hold. Researchers began actively trying

00:06:26.439 --> 00:06:29.860
to use rudimentary artificial networks to model

00:06:29.860 --> 00:06:32.319
biological ones. But there had to be a breakthrough

00:06:32.319 --> 00:06:35.399
on how they actually learn, right? Yes. The monumental

00:06:35.399 --> 00:06:38.079
breakthrough, the moment we figured out the mechanism

00:06:38.079 --> 00:06:41.079
of how a network actually learns over time, came

00:06:41.079 --> 00:06:44.540
in 1949 from a psychologist named Donald Hebb.

00:06:45.040 --> 00:06:47.180
Ah, Hebbian learning. I saw this in the notes.

00:06:47.560 --> 00:06:50.420
Break down the actual mechanism for us. How does

00:06:50.420 --> 00:06:53.459
a web of cells learn a new trick? So Hebbian

00:06:53.459 --> 00:06:55.779
learning operates on a surprisingly intuitive

00:06:55.779 --> 00:06:59.449
idea. It's often summarized as "cells that fire

00:06:59.449 --> 00:07:01.350
together, wire together." Fire together, wire

00:07:01.350 --> 00:07:04.149
together, catchy. Right. Hebb realized that neural

00:07:04.149 --> 00:07:07.009
networks learn over time by physically strengthening

00:07:07.009 --> 00:07:09.870
a synapse, the connection point, every time a

00:07:09.870 --> 00:07:11.990
signal successfully travels along it. So it's

00:07:11.990 --> 00:07:14.610
like walking through a dense field of tall grass.

00:07:15.209 --> 00:07:16.850
The first time you walk through, it's really

00:07:16.850 --> 00:07:18.569
difficult. There's a lot of resistance. But if

00:07:18.569 --> 00:07:21.089
you walk that exact same path every single day,

00:07:21.430 --> 00:07:23.639
you eventually trample the grass down. You pack

00:07:23.639 --> 00:07:26.180
the dirt, it becomes a clear trail, and the next

00:07:26.180 --> 00:07:28.959
time you need to cross, that path offers the

00:07:28.959 --> 00:07:31.480
path of least resistance. What's fascinating

00:07:31.480 --> 00:07:34.040
here is how accurately that physical metaphor

00:07:34.040 --> 00:07:36.939
translates to the early psychological theories,

00:07:37.519 --> 00:07:40.339
which then directly paved the way for hardware.
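
A rough sketch of Hebb's rule in Python (the learning rate and firing pattern are invented, and real synaptic plasticity is far more nuanced): every time the sending and receiving cells fire together, the connection between them gets a little stronger.

```python
# Hebbian learning sketch: "cells that fire together, wire together."
# Strengthen a synapse each time the pre- and post-synaptic cells
# are active at the same time. Values are invented for illustration.

weight = 0.1           # starting synaptic strength
learning_rate = 0.5    # how quickly the "path through the grass" wears in

firing = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]  # (pre, post) activity pairs

for pre, post in firing:
    weight += learning_rate * pre * post  # grows only when both cells fire

print(weight)  # 0.1 + 3 * 0.5 = 1.6 -- a well-worn trail
```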

00:07:40.980 --> 00:07:43.600
Because if you understand the biological rules

00:07:43.600 --> 00:07:46.639
of learning, you can try to build a machine that

00:07:46.639 --> 00:07:49.500
plays by those same rules. Exactly. And around

00:07:49.500 --> 00:07:52.439
the same time, we were getting proof of how complex

00:07:52.439 --> 00:07:57.040
biological processing could get. In 1956, Gunnar Svaetichin

00:07:57.040 --> 00:07:59.339
discovered the function of second-order retinal

00:07:59.339 --> 00:08:01.540
cells. Okay, second-order retinal cells. What

00:08:01.540 --> 00:08:03.860
does that mean for the network? Basically, he

00:08:03.860 --> 00:08:06.379
proved that biological networks process information

00:08:06.379 --> 00:08:09.220
in distinct layers. The eye doesn't just send

00:08:09.220 --> 00:08:11.759
raw light straight to the brain. There are intermediate

00:08:11.759 --> 00:08:14.279
layers of cells that process and refine the image

00:08:14.279 --> 00:08:16.579
before passing it along. Layered processing.

00:08:16.839 --> 00:08:18.240
Okay, keep a pin in that because that becomes

00:08:18.240 --> 00:08:20.720
huge for the math later. So the biologists are

00:08:20.720 --> 00:08:22.620
mapping this out. When does the computer science

00:08:22.620 --> 00:08:24.959
actually kick in? That brings us to the first

00:08:24.959 --> 00:08:30.060
machine. In 1943, a neurophysiologist named Warren

00:08:30.060 --> 00:08:33.389
McCulloch and a logician named Walter Pitts teamed

00:08:33.389 --> 00:08:35.990
up to invent the McCulloch-Pitts neuron. Well, the first

00:08:35.990 --> 00:08:38.389
mathematical model of an artificial neural network.

00:08:38.549 --> 00:08:40.129
They didn't build a physical machine yet, right?

00:08:40.129 --> 00:08:42.590
No, just a mathematical proof showing how a simplified

00:08:42.590 --> 00:08:44.789
artificial neuron could act as a logic gate.
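
As one concrete illustration (the weights and threshold below are just one choice that happens to work), here is how a McCulloch-Pitts-style unit can realize an AND gate in Python:

```python
# A McCulloch-Pitts-style neuron acting as an AND logic gate:
# it fires only when both binary inputs are on. The weights and
# threshold are one choice among many that realize AND.

def unit(x1, x2, w1=1, w2=1, threshold=2):
    return 1 if (w1 * x1 + w2 * x2) >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", unit(a, b))  # only (1, 1) reaches the threshold
```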

00:08:44.870 --> 00:08:49.230
OK. But then in 1957, Frank Rosenblatt took that

00:08:49.230 --> 00:08:51.990
math and actually built a physical perceptron

00:08:51.990 --> 00:08:54.529
in hardware. He actually wired up a physical

00:08:54.529 --> 00:08:57.559
learning machine in the 50s? He did, using motors

00:08:57.559 --> 00:09:00.259
and photocells. It was the first true translation

00:09:00.259 --> 00:09:02.779
from biology to artificial hardware. That is

00:09:02.779 --> 00:09:05.240
an incredible bridge. We go from 19th century

00:09:05.240 --> 00:09:07.899
philosophy to biological observation to mathematical

00:09:07.899 --> 00:09:12.059
proofs to a physical machine in 1957. But obviously

00:09:12.059 --> 00:09:14.679
those early physical machines evolved, right?

00:09:14.720 --> 00:09:17.980
We moved entirely into software. We did. Today,

00:09:18.379 --> 00:09:20.539
artificial neural networks are almost universally

00:09:20.539 --> 00:09:23.340
implemented in software to approximate what we

00:09:23.340 --> 00:09:25.820
call nonlinear functions. Here's where it gets

00:09:25.820 --> 00:09:28.279
really interesting, because I want to visualize

00:09:28.279 --> 00:09:31.759
this modern architecture. If the brain is a tangled

00:09:31.759 --> 00:09:34.919
web of cells, the artificial version sounds much

00:09:34.919 --> 00:09:37.940
more structured, almost like a strict corporate

00:09:37.940 --> 00:09:40.700
hierarchy. That is a very helpful way to frame

00:09:40.700 --> 00:09:43.740
it. Let's build that out. In an artificial network,

00:09:44.299 --> 00:09:47.700
the mathematical neurons are arranged into very

00:09:47.700 --> 00:09:51.789
specific sequential layers. Right. So at the

00:09:51.789 --> 00:09:54.049
very front, you have the input layer. These are

00:09:54.049 --> 00:09:55.929
the frontline workers. They just take in the

00:09:55.929 --> 00:09:58.850
raw data, like the pixels of a photograph, and

00:09:58.850 --> 00:10:01.690
pass it up the chain. Exactly. Then that data

00:10:01.690 --> 00:10:04.029
moves into the hidden layers. This is your middle

00:10:04.029 --> 00:10:06.509
management. The hidden layers. Right. And finally,

00:10:06.509 --> 00:10:08.269
at the very end of the line, you have the output

00:10:08.269 --> 00:10:10.710
layer. The executives who take all the processed

00:10:10.710 --> 00:10:12.929
reports from middle management and make the final

00:10:12.929 --> 00:10:15.289
decision. Like, this is a picture of a cat. OK.
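
A toy version of that hierarchy in Python with numpy (the layer sizes, weights, and input are all invented; a real image network is vastly larger): raw data enters the input layer, flows through a hidden layer, and the output layer emits the final score.

```python
import numpy as np

# Toy input -> hidden -> output hierarchy. Sizes and weights are
# invented for illustration; real networks are vastly larger.

rng = np.random.default_rng(0)

x = rng.random(4)              # input layer: 4 raw values (e.g., pixels)
W_hidden = rng.random((3, 4))  # hidden layer: 3 "middle managers"
W_output = rng.random((1, 3))  # output layer: 1 final decision-maker

hidden = np.tanh(W_hidden @ x)       # middle management processes the data
output = np.tanh(W_output @ hidden)  # the "executive" makes the call

print(output)  # a single score, e.g., "how cat-like is this picture?"
```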

00:10:15.350 --> 00:10:18.490
So information flows sequentially: input, hidden,

00:10:18.700 --> 00:10:21.679
output. But let's look at the actual math happening

00:10:21.679 --> 00:10:24.200
inside one of those hidden layer neurons. It's

00:10:24.200 --> 00:10:26.360
not a chemical pulse anymore. It's just numbers.

00:10:26.759 --> 00:10:29.940
Yes. The signal input to each neuron is what

00:10:29.940 --> 00:10:32.740
we call a linear combination of the outputs from

00:10:32.740 --> 00:10:35.039
the previous layer. Okay, stop right there. A

00:10:35.039 --> 00:10:37.500
linear combination. That sounds incredibly dense.

00:10:37.659 --> 00:10:41.379
It sounds worse than it is. Basically every connection

00:10:41.379 --> 00:10:44.059
between two artificial neurons has a specific

00:10:44.059 --> 00:10:46.879
weight assigned to it. Just a number acting as

00:10:46.879 --> 00:10:50.059
a multiplier. The receiving neuron takes every

00:10:50.059 --> 00:10:52.639
incoming number, multiplies it by its specific

00:10:52.639 --> 00:10:54.779
connection weight, and adds them all together.

00:10:55.059 --> 00:10:57.500
Okay, so it's mathematically tallying the votes,

00:10:57.580 --> 00:10:59.899
just like the excitatory and inhibitory signals

00:10:59.899 --> 00:11:02.820
in biology. Exactly. And once it has that total

00:11:02.820 --> 00:11:05.980
sum, it calculates the final output via an activation

00:11:05.980 --> 00:11:08.320
function. It's just a rule book that says, based

00:11:08.320 --> 00:11:10.480
on this sum, here's the exact number I'm passing

00:11:10.480 --> 00:11:12.779
to the next layer. OK, so we have layers of neurons

00:11:12.779 --> 00:11:15.059
multiplying inputs by weights, summing them up,

00:11:15.080 --> 00:11:16.919
and passing them through activation functions.
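
Spelled out for a single hidden neuron in plain Python (all numbers invented for illustration): multiply each incoming value by its connection weight, add them up, then run the total through an activation function to get the number passed on.

```python
import math

# Inside one hidden neuron: a linear combination of the previous
# layer's outputs, then an activation function. Numbers are invented.

incoming = [0.5, -1.2, 0.8]   # outputs arriving from the previous layer
weights  = [0.9,  0.3, -0.5]  # one weight (multiplier) per connection

total = sum(x * w for x, w in zip(incoming, weights))  # tally the "votes"

out = 1 / (1 + math.exp(-total))  # sigmoid rule book: squash into (0, 1)
print(total, out)                 # 'out' is what gets sent to the next layer
```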

00:11:17.460 --> 00:11:19.460
But how does this corporate hierarchy actually

00:11:19.460 --> 00:11:22.799
learn anything? The source uses terms like empirical

00:11:22.799 --> 00:11:25.820
risk minimization and backpropagation to modify

00:11:25.820 --> 00:11:29.159
those connection weights during training. That

00:11:29.159 --> 00:11:30.960
sounds... I mean, can we just think of this as

00:11:30.960 --> 00:11:33.360
high-tech trial and error? High-tech trial

00:11:33.360 --> 00:11:35.759
and error is exactly what it is. If we connect

00:11:35.759 --> 00:11:38.399
this to the bigger picture, adjusting these weights

00:11:38.399 --> 00:11:41.679
to fit a pre-existing data set is exactly how

00:11:41.679 --> 00:11:44.460
the network learns. So if it guesses wrong...

00:11:44.220 --> 00:11:47.759
It figures out how wrong it was and sends a signal

00:11:47.759 --> 00:11:51.019
backwards through the network. Backpropagation.

00:11:51.460 --> 00:11:54.059
Right. Adjusting every single weight slightly

00:11:54.059 --> 00:11:56.399
so it makes a better guess next time. It repeats

00:11:56.399 --> 00:11:59.159
this millions of times until the empirical risk,

00:11:59.379 --> 00:12:02.009
the error, is minimized. And that's what training

00:12:02.009 --> 00:12:04.490
in AI is. That's the entirety of it. And by the

00:12:04.490 --> 00:12:07.009
way, officially, a deep neural network is just

00:12:07.009 --> 00:12:09.370
a network with more than three layers. So an

00:12:09.370 --> 00:12:11.929
input layer, an output layer, at least two hidden

00:12:11.929 --> 00:12:14.289
layers. That's what deep learning means. Just

00:12:14.289 --> 00:12:16.750
two or more hidden layers. That's it. Wow.
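
A bare-bones sketch of that trial-and-error loop in Python (one weight, a handful of made-up data points, and a hand-derived gradient; real backpropagation applies the same idea across millions of weights at once):

```python
# Empirical risk minimization in miniature: guess, measure the error
# over the data set, nudge the weight to shrink it, repeat.
# Data points and learning rate are invented for illustration.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (input, target), roughly y = 2x
w = 0.0    # initial guess for the single weight
lr = 0.05  # learning rate: how big each nudge is

for step in range(200):
    # gradient of the mean squared error (the "empirical risk")
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the backwards correction, shrunk to one weight

print(w)  # settles near 2.0, the weight that minimizes the error
```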

00:12:16.950 --> 00:12:19.049
OK, so now that we understand both the biological

00:12:19.049 --> 00:12:21.649
and the artificial, there's a glaring point of

00:12:21.649 --> 00:12:23.809
divergence that we have to talk about. Because

00:12:23.809 --> 00:12:25.830
if artificial neural networks were originally

00:12:25.830 --> 00:12:29.059
designed to mimic our biology, why does the source

00:12:29.059 --> 00:12:31.080
say they are increasingly different from their

00:12:31.080 --> 00:12:33.379
biological counterparts today? It comes down

00:12:33.379 --> 00:12:36.399
to efficiency. While biology was the initial

00:12:36.399 --> 00:12:39.480
inspiration, machine learning applications required

00:12:39.480 --> 00:12:42.779
different approaches. The goal shifted from perfectly

00:12:42.779 --> 00:12:45.820
modeling a human brain to just solving specific

00:12:45.820 --> 00:12:48.299
artificial intelligence problems. Like, biological

00:12:48.299 --> 00:12:50.580
brains are amazing, but they have constraints,

00:12:50.779 --> 00:12:53.360
right? Exactly. Human brains run on very little

00:12:53.360 --> 00:12:55.480
power and are limited by chemical diffusion.

00:12:56.140 --> 00:12:58.580
Computers don't have those limits. So engineers

00:12:58.580 --> 00:13:01.139
stop trying to build a perfectly accurate biological

00:13:01.139 --> 00:13:04.539
replica and just optimize the math for data processing.

00:13:04.879 --> 00:13:06.700
And it's a good thing they did because that shift

00:13:06.700 --> 00:13:09.539
is what made this technology so incredibly useful

00:13:09.539 --> 00:13:12.679
in our daily lives. The real-world applications

00:13:12.679 --> 00:13:15.460
mentioned in the source are everywhere. Oh, absolutely

00:13:15.460 --> 00:13:17.659
everywhere. Artificial neural networks are the

00:13:17.659 --> 00:13:19.860
backbone of modern predictive modeling. They

00:13:19.860 --> 00:13:22.419
power adaptive control systems, like in manufacturing

00:13:22.419 --> 00:13:25.399
robots or autonomous vehicles. And pattern recognition,

00:13:25.700 --> 00:13:27.620
right? Like facial recognition to unlock your

00:13:27.620 --> 00:13:30.120
phone, or handwriting recognition. Yes. They're

00:13:30.120 --> 00:13:32.320
also the engines behind general game-playing

00:13:32.320 --> 00:13:35.279
AIs that beat human champions. And of course,

00:13:35.600 --> 00:13:38.179
generative AI. Right. The big one right now.

00:13:38.620 --> 00:13:40.759
So what does this all mean for you, the listener?

00:13:41.100 --> 00:13:44.460
Well, it means that when you use a generative

00:13:44.460 --> 00:13:47.840
AI to write an email, or the facial recognition

00:13:47.840 --> 00:13:50.440
to unlock your phone, you are interacting with

00:13:50.440 --> 00:13:53.840
the direct descendant of an 1873 theory about

00:13:53.840 --> 00:13:56.279
human thought, filtered through complex math

00:13:56.279 --> 00:13:58.600
layers. It's an incredible evolution. It really

00:13:58.600 --> 00:14:01.340
is. So just to recap our journey today, we started

00:14:01.340 --> 00:14:04.659
with the biological synapses and excitatory signals

00:14:04.659 --> 00:14:06.980
happening in our heads right now. Then we crossed

00:14:06.980 --> 00:14:09.340
the historical bridge, looking at Hebbian learning

00:14:09.340 --> 00:14:12.320
and the perceptron. And finally, we dove into

00:14:12.320 --> 00:14:15.379
the hidden layers of modern software-based deep

00:14:15.379 --> 00:14:17.399
neural networks. And this raises an important

00:14:17.399 --> 00:14:19.399
question, I think. Yeah. I want to leave you

00:14:19.399 --> 00:14:21.740
with a final provocative thought to mull over.

00:14:22.159 --> 00:14:24.120
The source material notes that artificial neural

00:14:24.120 --> 00:14:26.620
networks started by copying biology, but eventually

00:14:26.620 --> 00:14:28.580
diverged to become better at machine learning.

00:14:29.139 --> 00:14:31.419
If artificial intelligence continues to evolve

00:14:31.419 --> 00:14:34.059
and tackle even more complex human-like tasks

00:14:34.059 --> 00:14:36.759
in the future, will we need to start making these

00:14:36.759 --> 00:14:39.120
mathematical networks look more like biological

00:14:39.120 --> 00:14:41.259
brains again? Right, with all that messy chemical

00:14:41.259 --> 00:14:44.279
complexity. Exactly. Or will they just evolve

00:14:44.279 --> 00:14:47.340
into a completely alien structure that we can

00:14:47.340 --> 00:14:50.179
no longer even comprehend? It's a fascinating

00:14:50.179 --> 00:14:52.360
thing to consider as the technology advances.

00:14:52.580 --> 00:14:54.700
It really is. Well, thank you so much for sharing

00:14:54.700 --> 00:14:56.759
your sources and coming along on this deep dive

00:14:56.759 --> 00:14:59.139
with us. Keep learning and stay curious.
