WEBVTT

00:00:00.000 --> 00:00:04.700
Imagine finding out that the flawless legal brief

00:00:04.700 --> 00:00:07.360
defending you in federal court was just, well,

00:00:07.839 --> 00:00:10.240
entirely hallucinated by a machine. Yeah, that's

00:00:10.240 --> 00:00:13.220
a terrifying thought. Right. Or consider this.

00:00:13.820 --> 00:00:16.579
Every single time you ask your AI assistant to

00:00:16.579 --> 00:00:20.059
just draft a few routine emails, that software

00:00:20.059 --> 00:00:23.300
is literally consuming half a liter of fresh

00:00:23.300 --> 00:00:25.850
water, just to cool the servers down. It's just

00:00:25.850 --> 00:00:28.190
for a few prompts. Exactly. I mean, today is

00:00:28.190 --> 00:00:31.550
Thursday, March 26, 2026, and we are basically

00:00:31.550 --> 00:00:34.090
living in a reality that would have sounded like

00:00:34.090 --> 00:00:36.609
pure science fiction, like just a few short years

00:00:36.609 --> 00:00:39.670
ago. Oh, totally. If you think back to late 2022,

00:00:40.070 --> 00:00:43.189
ChatGPT was just this... It was a viral novelty.

00:00:43.469 --> 00:00:45.509
Yeah, quirky little toy. Right, exactly. You'd

00:00:45.509 --> 00:00:47.409
log on, type a prompt, and it would spit out

00:00:47.409 --> 00:00:50.270
a slightly clunky poem about your dog or something.

00:00:50.329 --> 00:00:52.509
Yeah, as a parlor trick. I mean, a really sophisticated

00:00:52.509 --> 00:00:54.590
parlor trick, sure. But still, just something

00:00:54.590 --> 00:00:56.289
you played around with in a browser window to

00:00:56.289 --> 00:00:57.929
see what it could do. To test the limits, yeah.

00:00:58.250 --> 00:01:01.049
But that novelty era. It is completely gone.

00:01:01.210 --> 00:01:04.849
I mean, today, ChatGPT is a deeply embedded digital

00:01:04.849 --> 00:01:07.250
worker. It operates across our web browsers,

00:01:07.750 --> 00:01:10.090
it analyzes our private health records, and it

00:01:10.090 --> 00:01:12.950
literally, literally has direct access to our

00:01:12.950 --> 00:01:15.310
wallets. Which is a staggering transition when

00:01:15.310 --> 00:01:17.189
you really look at the timeline. The speed of

00:01:17.189 --> 00:01:19.959
it is just insane. And that is exactly why we

00:01:19.959 --> 00:01:22.159
are doing this today. For this deep dive, we

00:01:22.159 --> 00:01:24.939
are looking at a massive, incredibly comprehensive

00:01:24.939 --> 00:01:28.180
Wikipedia breakdown of ChatGPT's evolution.

00:01:28.620 --> 00:01:30.700
And it is so up to date, too. Yeah, it covers

00:01:30.700 --> 00:01:32.439
everything. We're looking at the foundational

00:01:32.439 --> 00:01:35.299
features, the underlying models, the major controversies

00:01:35.299 --> 00:01:37.620
spanning right from that initial launch, all

00:01:37.620 --> 00:01:40.980
the way up to the brand new GPT-5.4 model that

00:01:40.980 --> 00:01:42.640
just dropped, what, a few weeks ago? Yeah, just

00:01:42.640 --> 00:01:45.060
a few weeks ago. And what makes the specific

00:01:45.060 --> 00:01:47.579
source document so valuable for you, the listener,

00:01:47.819 --> 00:01:50.760
is that it completely strips away all that Silicon

00:01:50.760 --> 00:01:53.859
Valley marketing hype. Right, no PR spin. Exactly.

00:01:54.120 --> 00:01:56.840
It just presents the raw chronological timeline

00:01:56.840 --> 00:02:00.540
of how this system grew, how the underlying architecture

00:02:00.540 --> 00:02:03.540
fundamentally changed, and how we as a society

00:02:03.540 --> 00:02:07.099
have been, well... forced to adapt to a technology

00:02:07.099 --> 00:02:09.659
that scales way faster than our own laws. Way

00:02:09.659 --> 00:02:12.680
faster. So our mission today is to map out that

00:02:12.680 --> 00:02:15.300
explosive transformation for you. We really want

00:02:15.300 --> 00:02:17.699
to understand how ChatGPT went from just a simple

00:02:17.699 --> 00:02:20.740
text generator to an autonomous digital agent.

00:02:20.969 --> 00:02:23.210
And of course, to uncover the real world ripple

00:02:23.210 --> 00:02:26.550
effects this is having on our society, our politics,

00:02:26.650 --> 00:02:28.990
and just our day-to-day lives. It impacts almost

00:02:28.990 --> 00:02:31.110
everything now. It really does. Okay, let's unpack

00:02:31.110 --> 00:02:33.129
this. And I think we should start with the most

00:02:33.129 --> 00:02:35.210
fundamental shift in its identity over the last

00:02:35.210 --> 00:02:37.409
year or so. The fact that it doesn't just, you

00:02:37.409 --> 00:02:39.289
know, talk to you anymore. Right, it takes action.

00:02:39.430 --> 00:02:41.389
It actually does things on your behalf. And this

00:02:41.389 --> 00:02:45.289
is really the crux of the 2025 shift. OpenAI

00:02:45.289 --> 00:02:47.409
moved super aggressively away from the concept

00:02:47.409 --> 00:02:49.650
of a passive conversationalist, right? And they

00:02:49.650 --> 00:02:51.759
moved toward what the industry calls agentic

00:02:51.759 --> 00:02:54.240
capabilities. Agentic, like acting as an agent.

00:02:54.560 --> 00:02:57.300
Exactly. So in January of 2025, they released

00:02:57.300 --> 00:03:00.219
this feature called Operator. And Operator wasn't

00:03:00.219 --> 00:03:02.639
just generating text for you to copy and paste.

00:03:02.879 --> 00:03:06.259
It was actively navigating web browsers. Which

00:03:06.259 --> 00:03:08.500
blew my mind when it came out. Oh, it was wild.

00:03:08.539 --> 00:03:10.419
It could fill out online forms, it could place

00:03:10.419 --> 00:03:12.979
orders, it could schedule appointments, all by

00:03:12.979 --> 00:03:15.719
controlling a software environment inside a virtual

00:03:15.719 --> 00:03:17.979
machine. OK, but I remember reading about that

00:03:17.979 --> 00:03:20.210
and thinking, how does that actually work in

00:03:20.210 --> 00:03:21.990
practice? Because when I look at a website, I

00:03:21.990 --> 00:03:26.330
see buttons and pictures and text. A chatbot

00:03:26.330 --> 00:03:29.469
doesn't have eyes. How does it see the internet

00:03:29.469 --> 00:03:31.969
to navigate around it? Well, it doesn't see the

00:03:31.969 --> 00:03:34.469
screen the way we do, obviously. When an agent

00:03:34.469 --> 00:03:36.650
-like operator looks at a website, it's actually

00:03:36.650 --> 00:03:40.289
parsing the underlying HTML code. Oh, yeah. It

00:03:40.289 --> 00:03:42.250
analyzes something called the Document Object

00:03:42.250 --> 00:03:44.669
Model, or the DOM. The DOM, right. Right. So

00:03:44.669 --> 00:03:46.669
it reads the raw code of the page to figure out

00:03:46.669 --> 00:03:48.610
exactly where the interactive elements are. And

00:03:48.610 --> 00:03:51.250
then, and this is a crazy part, it mathematically

00:03:51.250 --> 00:03:54.030
predicts the exact x and y coordinates on the

00:03:54.030 --> 00:03:57.009
screen to simulate a human mouse click. Or a

00:03:57.009 --> 00:04:00.289
keystroke. It's all math. And then a few months
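
The DOM-parsing and coordinate-prediction idea described here can be sketched in a few lines. This is a toy illustration only: the class name and the hard-coded bounding box are invented, and a real agent like Operator works from rendered layout data, not a fixed box.

```python
from html.parser import HTMLParser

# Toy sketch: parse a page's DOM for interactive elements, then turn
# a bounding box into a simulated click point. The class name and the
# hard-coded box below are invented for illustration.

class InteractiveElementFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.elements = []  # (tag, attributes) of clickable nodes

    def handle_starttag(self, tag, attrs):
        if tag in ("button", "a", "input"):
            self.elements.append((tag, dict(attrs)))

def click_point(bbox):
    """Center of an (x, y, width, height) box: the predicted click target."""
    x, y, w, h = bbox
    return (x + w / 2, y + h / 2)

page = '<form><input name="q"><button id="go">Search</button></form>'
finder = InteractiveElementFinder()
finder.feed(page)

button = next(e for e in finder.elements if e[0] == "button")
print(button[1]["id"])                   # go
print(click_point((100, 40, 80, 30)))    # (140.0, 55.0)
```

The hard part in practice is the second half: mapping a parsed element to reliable screen coordinates, which is exactly where messy page layouts break things.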

00:04:00.289 --> 00:04:03.729
later, in May of 2025, they introduced Codex,

00:04:04.490 --> 00:04:06.590
which was an agent specifically built to do this

00:04:06.590 --> 00:04:09.569
exact thing, but for software engineering. So

00:04:09.569 --> 00:04:12.050
it was coding itself? Basically, yeah. Codex

00:04:12.050 --> 00:04:14.509
can write software, run its own tests, and even

00:04:14.509 --> 00:04:17.290
propose code changes autonomously. And then...

00:04:17.160 --> 00:04:21.079
By July of last year, we got the ChatGPT Agent,

00:04:21.360 --> 00:04:23.819
the one that performs multi-step tasks across

00:04:23.819 --> 00:04:26.379
a virtual computer. I mean, my analogy for this

00:04:26.379 --> 00:04:29.100
is always it's gone from an intern who just hands

00:04:29.100 --> 00:04:31.759
you a printed recipe to an intern who takes your

00:04:31.759 --> 00:04:34.100
actual credit card, drives to the grocery store,

00:04:34.579 --> 00:04:37.300
walks the aisles, buys the food, and then comes

00:04:37.300 --> 00:04:39.199
back and cooks the meal for you. That is honestly

00:04:39.199 --> 00:04:40.720
the perfect analogy because they didn't just

00:04:40.720 --> 00:04:42.819
stop at virtual machines either. They integrated

00:04:42.819 --> 00:04:45.180
this capability directly into the everyday tools

00:04:45.180 --> 00:04:49.139
we use. By October 2025, they launched ChatGPT

00:04:49.139 --> 00:04:52.000
Atlas. Right, the browser. Yeah, their own dedicated

00:04:52.000 --> 00:04:54.740
web browser that has this agentic mode baked

00:04:54.740 --> 00:04:57.639
right into the navigation bar. But see, if they

00:04:57.639 --> 00:04:59.860
are integrating this into everyday browsing,

00:05:00.399 --> 00:05:02.670
they're also integrating it into our money. The

00:05:02.670 --> 00:05:04.790
source timeline makes a huge point of noting

00:05:04.790 --> 00:05:09.110
that in September of 2025, OpenAI rolled out

00:05:09.110 --> 00:05:12.129
the Agentic Commerce Protocol. Yes, the partnerships.

00:05:12.269 --> 00:05:15.129
Right. They partnered with Stripe and Etsy, which

00:05:15.129 --> 00:05:18.589
means the AI can literally make purchases using

00:05:18.589 --> 00:05:21.350
your linked payment method. The AI isn't just

00:05:21.350 --> 00:05:23.550
suggesting a product anymore. It is authorizing

00:05:23.550 --> 00:05:25.930
and completely finishing the financial transaction.

00:05:26.569 --> 00:05:28.889
And by the way, OpenAI even takes a cut from

00:05:28.889 --> 00:05:31.589
the merchant's payment as a facilitation fee.

00:05:31.759 --> 00:05:33.980
I gotta push back on this though, because giving

00:05:33.980 --> 00:05:36.699
a chatbot direct access to my wallet and my web

00:05:36.699 --> 00:05:38.920
browser, that sounds like a massive security

00:05:38.920 --> 00:05:40.680
nightmare just waiting to happen. Oh, people

00:05:40.680 --> 00:05:42.680
were terrified of it. Because the internet is

00:05:42.680 --> 00:05:44.800
so messy, it's chaotic, it's full of deceptive

00:05:44.800 --> 00:05:46.620
pop-ups, the layouts are constantly changing,

00:05:46.939 --> 00:05:49.540
buttons are weirdly coded. If Operator struggled

00:05:49.540 --> 00:05:52.480
with basic user interfaces back in early 2025,

00:05:53.180 --> 00:05:55.759
how on earth did OpenAI convince companies like

00:05:55.759 --> 00:05:58.379
Stripe to trust it with actual money just a few

00:05:58.379 --> 00:06:00.670
months later? Well, the friction you're describing,

00:06:00.930 --> 00:06:02.970
that is exactly what the developers were wrestling

00:06:02.970 --> 00:06:06.170
with behind the scenes. Early versions of Operator

00:06:06.170 --> 00:06:08.610
were kept in highly restricted virtual machines

00:06:08.610 --> 00:06:11.329
for those exact safety reasons. Because it would

00:06:11.329 --> 00:06:14.439
just click the wrong thing. Basically. It would

00:06:14.439 --> 00:06:17.839
get confused by poorly designed websites. If

00:06:17.839 --> 00:06:21.259
a site's DOM code is messy, the AI's mathematical

00:06:21.259 --> 00:06:23.839
prediction of where to click just breaks down

00:06:23.839 --> 00:06:27.120
entirely. But what's fascinating here is the

00:06:27.120 --> 00:06:29.740
psychological shift this demands from you, the

00:06:29.740 --> 00:06:32.879
user. In what way? Like, trusting it. Exactly.

00:06:33.180 --> 00:06:35.180
Think about it. When you use a traditional search

00:06:35.180 --> 00:06:37.540
engine, your role is just a searcher of information.

00:06:37.759 --> 00:06:40.199
You type a query, you get a list of links, and

00:06:40.199 --> 00:06:42.100
you do all the work of evaluating them. Right,

00:06:42.100 --> 00:06:44.100
I have to click and read. Right. But with these

00:06:44.100 --> 00:06:46.180
agentic models, your role shifts completely.

00:06:46.339 --> 00:06:48.699
You become the manager of an autonomous entity.

00:06:48.980 --> 00:06:50.899
You're delegating a task, you're assigning it

00:06:50.899 --> 00:06:52.959
a financial budget, and you're reviewing its

00:06:52.959 --> 00:06:55.560
work after the fact. That is so true. Managing

00:06:55.560 --> 00:06:57.680
an autonomous system requires a fundamentally

00:06:57.680 --> 00:06:59.920
different level of trust than simply reading

00:06:59.920 --> 00:07:03.420
a text summary on a screen. So to earn that trust,

00:07:03.920 --> 00:07:07.040
I mean to be able to flawlessly parse a chaotic

00:07:07.040 --> 00:07:10.560
web page, bypass a weird pop -up, and buy a pair

00:07:10.560 --> 00:07:13.459
of shoes without accidentally spending a thousand

00:07:13.459 --> 00:07:17.879
dollars, the underlying brain of this AI had

00:07:17.879 --> 00:07:20.459
to get significantly smarter. It couldn't just

00:07:20.459 --> 00:07:22.959
guess the next word anymore. No, it had to actually

00:07:22.959 --> 00:07:26.100
think. Right. And that required an entirely new

00:07:26.100 --> 00:07:29.040
architecture. We started seeing this transition

00:07:29.040 --> 00:07:31.959
with the o-series models. Oh right, like o1 and

00:07:31.959 --> 00:07:34.819
o3. Exactly. These were explicitly designed as

00:07:34.819 --> 00:07:37.160
reasoning models. They utilized something called

00:07:37.160 --> 00:07:39.990
test-time compute. Test-time compute. OK, let's

00:07:39.990 --> 00:07:42.089
break that down for someone listening who isn't

00:07:42.089 --> 00:07:44.670
a machine learning engineer. What does that actually

00:07:44.670 --> 00:07:47.290
mean in practical terms? So in older models,

00:07:47.449 --> 00:07:49.470
when you asked a question, the AI would just

00:07:49.470 --> 00:07:51.449
immediately start generating the answer word

00:07:51.449 --> 00:07:53.790
by word as fast as humanly possible. Like it's

00:07:53.790 --> 00:07:56.350
just reacting. Right. But with test-time compute,

00:07:56.550 --> 00:07:58.910
the model actually pauses. It generates thousands

00:07:58.910 --> 00:08:01.329
of hidden thoughts behind the scenes before it

00:08:01.329 --> 00:08:03.569
ever shows you a single word. So it's drafting

00:08:03.569 --> 00:08:07.810
a plan. Yes. It proposes a plan, it tests that

00:08:07.810 --> 00:08:10.370
plan against its own logic, it realizes it made

00:08:10.370 --> 00:08:12.990
a mistake, self-corrects, and then finalizes

00:08:12.990 --> 00:08:16.310
a strategy. Only after it has completely reasoned

00:08:16.310 --> 00:08:18.569
through the problem does it actually output the

00:08:18.569 --> 00:08:20.730
final answer or, you know, take an action on

00:08:20.730 --> 00:08:23.779
a website. Which perfectly brings us to August
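
A crude way to picture test-time compute is a draft-and-verify loop: spend extra compute generating candidate answers, check each one against a verifier, and only emit an answer that survives. The task and function names below are invented for illustration; real reasoning models do this with hidden chains of thought, not a brute-force sweep.

```python
# Caricature of "test-time compute": instead of emitting the first guess,
# the system drafts candidates, verifies each against its own check, and
# only outputs an answer that passes. The task here is purely illustrative.

def verify(candidate):
    # Stand-in for the model testing a draft against its own logic.
    return candidate * candidate == 49   # toy task: find x with x^2 = 49

def answer_with_deliberation(budget=101):
    for candidate in range(budget):      # extra compute spent before replying
        if verify(candidate):            # failed drafts are silently discarded
            return candidate
    return None                          # budget exhausted: admit failure

print(answer_with_deliberation())        # 7
```

The "budget" knob is the key idea: the more compute you allow before answering, the more drafts can be tested and discarded behind the scenes.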

00:08:23.779 --> 00:08:28.339
2025 and that massive GPT-5 release. The notes

00:08:28.339 --> 00:08:30.879
from the Wikipedia breakdown indicate that GPT-5

00:08:30.879 --> 00:08:33.639
wasn't just like a bigger version of the old

00:08:33.639 --> 00:08:35.840
brain. It was structured completely differently.

00:08:35.980 --> 00:08:38.559
Yeah, GPT-5 utilized a concept known as a mixture

00:08:38.559 --> 00:08:41.419
of experts, or MoE. A mixture of experts. Right.

00:08:41.659 --> 00:08:44.279
So instead of one monolithic, dense neural network

00:08:44.279 --> 00:08:46.279
where every single connection fires for every

00:08:46.279 --> 00:08:49.279
single question you ask, GPT-5 acts as a highly

00:08:49.279 --> 00:08:51.539
efficient router. Oh, like a traffic cop. Exactly.

00:08:51.840 --> 00:08:54.159
When you give it a prompt, this router evaluates

00:08:54.159 --> 00:08:56.340
what you actually need. If you ask it a simple

00:08:56.340 --> 00:08:58.700
history question, it routes that query to a smaller,

00:08:58.980 --> 00:09:00.539
specialized sub-model. And it doesn't need the

00:09:00.539 --> 00:09:02.980
big guns for that. Right. But if you ask it to

00:09:02.980 --> 00:09:05.279
write a complex Python script or autonomously

00:09:05.279 --> 00:09:07.980
book a multi-city flight, the router sends that

00:09:07.980 --> 00:09:11.139
task to a heavy-duty, reasoning-focused expert

00:09:11.139 --> 00:09:14.309
model. That makes sense. And this modular architecture
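
The router-plus-experts idea can be sketched with a trivial gate. Everything here is a placeholder: the keyword markers and the two "experts" are invented, and in a real MoE model the gate is a learned network choosing among sub-networks inside the model, not a keyword check.

```python
# Toy mixture-of-experts router: a cheap gate inspects the prompt and
# dispatches it to one specialized "expert" instead of running everything.
# The markers and both experts below are invented placeholders.

def small_expert(prompt):
    return f"[small model] quick answer to: {prompt}"

def reasoning_expert(prompt):
    return f"[reasoning model] step-by-step plan for: {prompt}"

HARD_MARKERS = ("write a", "book a", "script", "plan", "autonomously")

def route(prompt):
    # Gate: send complex, multi-step requests to the heavy expert,
    # everything else to the cheap one.
    hard = any(m in prompt.lower() for m in HARD_MARKERS)
    expert = reasoning_expert if hard else small_expert
    return expert(prompt)

print(route("Who was the first Roman emperor?"))
print(route("Write a Python script that books a multi-city flight"))
```

The efficiency win is that only the chosen expert's parameters do work for a given prompt, which is what makes scaling to huge total parameter counts affordable.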

00:09:14.309 --> 00:09:17.669
is exactly what allowed them to scale up to the

00:09:17.669 --> 00:09:20.990
current GPT 5 .4 that just dropped this month,

00:09:21.389 --> 00:09:23.409
which focuses really heavily on professional

00:09:23.409 --> 00:09:26.129
computer use. But I mean, building a router that

00:09:26.129 --> 00:09:29.330
commands a literal legion of specialized expert

00:09:29.330 --> 00:09:33.269
brains, that comes with a staggering cost. Oh,

00:09:33.350 --> 00:09:35.289
the overhead is unreal. Let's talk about the

00:09:35.289 --> 00:09:39.940
money first. In December 2024, OpenAI introduced

00:09:39.940 --> 00:09:43.700
a Pro tier, $200 a month. $200. And if you're

00:09:43.700 --> 00:09:45.539
sitting there listening, wondering why anyone

00:09:45.539 --> 00:09:48.539
would ever pay $200 a month for a chatbot, you

00:09:48.539 --> 00:09:50.480
have to realize they are no longer selling you

00:09:50.480 --> 00:09:52.840
a chatbot, they're selling you a digital employee.

00:09:53.179 --> 00:09:55.639
Exactly. It's B2B at that point, really. Right.

00:09:55.799 --> 00:09:58.539
And then just this past January in 2026, they

00:09:58.539 --> 00:10:00.759
finally crossed the Rubicon and started testing

00:10:00.759 --> 00:10:03.269
advertisements in the free version. Which everyone

00:10:03.269 --> 00:10:05.690
hated, but the financial reality finally caught

00:10:05.690 --> 00:10:07.889
up with the magic. I mean, OpenAI committed to

00:10:07.889 --> 00:10:11.169
spending $1.4 trillion on AI infrastructure

00:10:11.169 --> 00:10:14.669
over the next eight years. Wait, trillion? With

00:10:14.669 --> 00:10:18.960
a T. $1.4 trillion. The compute power required

00:10:18.960 --> 00:10:21.940
to run these agentic, test-time compute models

00:10:21.940 --> 00:10:25.700
at a global scale is just astronomically expensive.

00:10:25.980 --> 00:10:28.059
Which brings us to the physical reality of that

00:10:28.059 --> 00:10:30.940
cost. And honestly, this part of the source material

00:10:30.940 --> 00:10:33.700
completely blew my mind. We always think of AI

00:10:33.700 --> 00:10:37.399
as this invisible weightless magic, right? Living

00:10:37.399 --> 00:10:39.179
up in the cloud. Like it's just floating around?

00:10:39.539 --> 00:10:42.440
Exactly. Just code floating in the ether. But

00:10:42.440 --> 00:10:44.899
the infrastructure running this... relies on

00:10:44.899 --> 00:10:47.960
massive concrete facilities. They're housing

00:10:47.960 --> 00:10:51.879
things like Microsoft's 30,000 NVIDIA GPUs.

00:10:52.159 --> 00:10:54.320
And the environmental toll is shocking. It's

00:10:54.320 --> 00:10:56.679
massive. Researchers from the University of California,

00:10:56.679 --> 00:10:58.759
Riverside published a study showing that a series

00:10:58.759 --> 00:11:01.860
of five to 50 prompts to ChatGPT consumes roughly

00:11:01.860 --> 00:11:04.100
half a liter of water. It's incredible. I am

00:11:04.100 --> 00:11:05.860
literally drinking a small water bottle's worth

00:11:05.860 --> 00:11:07.860
of physical resources every time I ask it a few

00:11:07.860 --> 00:11:10.679
dozen questions. Why on earth does the computer

00:11:10.679 --> 00:11:12.980
program need water? Well, if we connect it to

00:11:12.980 --> 00:11:14.980
the bigger picture, it really shatters that whole

00:11:14.980 --> 00:11:17.220
illusion of the cloud. Those tens of thousands

00:11:17.220 --> 00:11:20.399
of NVIDIA GPUs are processing trillions of calculations

00:11:20.399 --> 00:11:23.600
a second. So they get hot. Incredibly hot. The

00:11:23.600 --> 00:11:25.740
friction generates an immense amount of physical

00:11:25.740 --> 00:11:29.279
heat. So to prevent the data centers from literally

00:11:29.279 --> 00:11:32.259
melting down, they utilize these massive evaporative

00:11:32.259 --> 00:11:35.059
cooling towers. Evaporative cooling. Right. They

00:11:35.059 --> 00:11:37.659
are constantly pumping thousands of gallons of

00:11:37.659 --> 00:11:40.080
fresh water over the heated infrastructure, and

00:11:40.080 --> 00:11:43.080
it evaporates into the atmosphere. Artificial

00:11:43.080 --> 00:11:46.820
intelligence is a heavily resource-intensive industrial

00:11:46.820 --> 00:11:49.259
process. It's basically a factory. It is a factory.
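
The half-liter figure quoted earlier converts into a rough per-prompt estimate with nothing but unit arithmetic. The 30-prompts-a-day usage pattern at the end is an invented assumption for scale, not a number from the study.

```python
# Back-of-envelope on the figure above: "roughly 500 mL of water per
# 5-50 prompts" turned into a per-prompt range, then scaled up.
# The 30-prompts-a-day user is a hypothetical, not from the study.

WATER_PER_BATCH_ML = 500      # "roughly half a liter"
BATCH_MIN, BATCH_MAX = 5, 50  # prompts per batch in the quoted range

per_prompt_max_ml = WATER_PER_BATCH_ML / BATCH_MIN   # worst case: 100.0 mL
per_prompt_min_ml = WATER_PER_BATCH_ML / BATCH_MAX   # best case: 10.0 mL
print(f"{per_prompt_min_ml:.0f}-{per_prompt_max_ml:.0f} mL per prompt")

# Hypothetical heavy user: 30 prompts a day for a year, at the worst case.
yearly_l = 30 * 365 * per_prompt_max_ml / 1000
print(f"up to ~{yearly_l:.0f} liters per user per year")
```

Even at the low end, multiplying tens of milliliters per prompt by hundreds of millions of users is what turns "a sip" into an industrial-scale water draw.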

00:11:49.519 --> 00:11:52.080
And when you realize the concrete, the steel,

00:11:52.220 --> 00:11:54.759
the electricity, and the water required for every

00:11:54.759 --> 00:11:57.899
single query, it perfectly explains why they

00:11:57.899 --> 00:12:01.360
had to introduce ads and a $200 subscription

00:12:01.360 --> 00:12:03.899
tier. Because that massive infrastructure needs

00:12:03.899 --> 00:12:07.080
massive, continuous revenue just to survive.

00:12:07.620 --> 00:12:09.840
And to get that revenue, the AI has to be integrated

00:12:09.840 --> 00:12:12.960
into high-value, high-stakes industries just

00:12:12.960 --> 00:12:15.379
to justify its existence and pay the server bills.

00:12:15.559 --> 00:12:17.340
Which is where things get really messy. Yeah,

00:12:17.440 --> 00:12:20.039
this is exactly where the AI's inherent flaws

00:12:20.039 --> 00:12:23.080
violently collide with messy human reality. Let's

00:12:23.080 --> 00:12:25.320
look at the legal and medical fields. These are

00:12:25.320 --> 00:12:28.159
domains where precision is totally non-negotiable.

00:12:28.360 --> 00:12:31.360
I mean, a single incorrect word in a medical

00:12:31.360 --> 00:12:34.519
diagnosis or a legal brief can ruin someone's

00:12:34.519 --> 00:12:37.220
life. Absolutely. And yet ChatGPT still wrestles

00:12:37.220 --> 00:12:39.779
with a really well-documented limitation, which

00:12:39.779 --> 00:12:43.299
is hallucination. Or, as the academic papers

00:12:43.299 --> 00:12:45.639
in the source material more accurately describe

00:12:45.639 --> 00:12:48.980
it, confabulation. Yeah, one paper bluntly referred

00:12:48.980 --> 00:12:52.440
to it as a bullshit machine. Yes, which is so

00:12:52.440 --> 00:12:55.600
harsh, but... Kind of accurate. Well, the legal

00:12:55.600 --> 00:12:57.679
examples in the timeline are wild. I mean, we

00:12:57.679 --> 00:13:00.500
have actual cases of attorneys being sanctioned

00:13:00.500 --> 00:13:03.440
in federal court for filing legal motions generated

00:13:03.440 --> 00:13:06.480
by ChatGPT that contained entirely fictitious

00:13:06.480 --> 00:13:09.120
legal decisions. Fake cases. The AI just made

00:13:09.120 --> 00:13:11.860
up previous court cases that did not exist. But

00:13:11.860 --> 00:13:14.240
why does it do that? If it has access to the

00:13:14.240 --> 00:13:16.500
internet, why would it invent a fake case instead

00:13:16.500 --> 00:13:19.009
of just searching for a real one? Because at

00:13:19.009 --> 00:13:21.590
its core, it is still just a statistical pattern

00:13:21.590 --> 00:13:23.830
matcher. It doesn't know facts the way a human

00:13:23.830 --> 00:13:26.149
being does. In a legal context, it recognizes

00:13:26.149 --> 00:13:28.590
the semantic pattern of a legal citation. OK,

00:13:28.669 --> 00:13:30.549
what does that mean? It knows that a citation

00:13:30.549 --> 00:13:32.950
usually looks like name versus name, followed

00:13:32.950 --> 00:13:35.210
by a volume number, the reporter abbreviation

00:13:35.210 --> 00:13:38.389
like F.3d, and a page number. That's a

00:13:38.389 --> 00:13:40.570
pattern. Right. So when it can't find a perfect

00:13:40.570 --> 00:13:43.840
case to support your legal argument, the mathematical

00:13:43.840 --> 00:13:46.840
drive to complete that pattern takes over. It

00:13:46.840 --> 00:13:49.659
flawlessly fills in that structural pattern with

00:13:49.659 --> 00:13:53.539
highly plausible but completely fake variables.
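
The citation "shape" just described can be captured with a short regex. Both example strings below are invented; the point is that a pattern matcher accepts a fabricated citation exactly as readily as a plausible-looking one, because only the structure is being checked.

```python
import re

# The structural shape of a U.S. case citation described above:
# Name v. Name, <volume> <reporter> <page>. A pattern matcher accepts
# anything with this shape -- real or invented. Both strings are made up.

CITATION = re.compile(
    r"[A-Z][\w.]+ v\. [A-Z][\w.]+, "
    r"\d+ (?:F\.(?:2d|3d|4th)|U\.S\.) \d+"
)

for cite in ("Smith v. Jones, 123 F.3d 456",   # looks plausible
             "Doe v. Acme, 999 F.3d 1339"):    # equally fake, same shape
    print(cite, "->", bool(CITATION.fullmatch(cite)))
```

A language model completing the pattern is doing something loosely analogous in reverse: generating strings that fit the template, with no mechanism checking that the case actually exists.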

00:13:53.879 --> 00:13:56.460
Wow. It's like having a brilliant savant in the

00:13:56.460 --> 00:13:58.799
room who is absolutely terrified of saying, I

00:13:58.799 --> 00:14:00.799
don't know, so they just confidently invent case

00:14:00.799 --> 00:14:02.440
law just to please you. That is exactly what

00:14:02.440 --> 00:14:05.080
it is. But the crazy part is, despite these sanctions,

00:14:05.370 --> 00:14:08.009
we actually have judges in the U.S. and Pakistan

00:14:08.009 --> 00:14:11.509
openly endorsing the use of ChatGPT to investigate

00:14:11.509 --> 00:14:14.370
legal questions during active cases. The system

00:14:14.370 --> 00:14:16.789
is simultaneously being punished for lying and

00:14:16.789 --> 00:14:19.190
embraced for its efficiency. And we see the exact

00:14:19.190 --> 00:14:21.669
same paradox in medicine. Oh, yeah. ChatGPT

00:14:21.669 --> 00:14:24.190
actually passed the United States medical licensing

00:14:24.190 --> 00:14:27.190
exam. It can pass specialty dermatology exams.

00:14:27.330 --> 00:14:30.350
It is objectively capable of processing highly

00:14:30.350 --> 00:14:33.730
complex medical knowledge. And just this January,

00:14:34.029 --> 00:14:38.350
OpenAI took it further. They partnered with a data company

00:14:38.350 --> 00:14:41.350
to let the AI directly ingest and discuss your

00:14:41.350 --> 00:14:44.399
actual private health records. But the medical

00:14:44.399 --> 00:14:46.980
community is flashing huge red warning signs

00:14:46.980 --> 00:14:49.399
about this. There are studies mentioned here

00:14:49.399 --> 00:14:51.639
from the Lancet Psychiatry and the journal Digital

00:14:51.639 --> 00:14:54.720
Health specifically warning about people using

00:14:54.720 --> 00:14:58.100
ChatGPT as a therapist. Which is becoming incredibly

00:14:58.100 --> 00:15:01.059
common. And the reasons people are doing it are

00:15:01.059 --> 00:15:03.940
so fascinating. The studies note that users are

00:15:03.940 --> 00:15:07.019
attracted to its constant availability and its

00:15:07.019 --> 00:15:09.620
lack of negative reactions. Well, yeah, think

00:15:09.620 --> 00:15:12.019
about it. A human therapist will challenge you.

00:15:12.090 --> 00:15:13.750
They'll set firm boundaries, they will point

00:15:13.750 --> 00:15:15.830
out your flaws, and they might react in ways

00:15:15.830 --> 00:15:17.789
that require you to do the really uncomfortable

00:15:17.789 --> 00:15:20.169
work of personal growth. Therapy is hard. It's

00:15:20.169 --> 00:15:23.370
hard work. The AI, on the other hand, is perpetually

00:15:23.370 --> 00:15:26.470
patient, perpetually available, and fundamentally

00:15:26.470 --> 00:15:28.570
designed to be helpful and subservient to you.

00:15:28.730 --> 00:15:31.049
It just tells you what you want to hear. Exactly,

00:15:31.389 --> 00:15:33.129
and the studies warn that this creates a really

00:15:33.129 --> 00:15:36.230
dangerous dynamic of emotional over-reliance.

00:15:36.509 --> 00:15:38.830
Users are offloading their emotional regulation

00:15:38.830 --> 00:15:41.789
to a machine that provides frictionless empathy,

00:15:42.230 --> 00:15:44.970
but has absolutely no actual understanding of

00:15:44.970 --> 00:15:47.529
human suffering. It even bleeds into our spirituality,

00:15:47.929 --> 00:15:51.230
too. The source notes this June 2023 church service

00:15:51.230 --> 00:15:54.129
in Germany, it was a Protestant convention where

00:15:54.129 --> 00:15:57.250
the entire sermon, like 98% of the whole service,

00:15:57.870 --> 00:16:01.289
was run by an AI avatar on a large screen. It

00:16:01.289 --> 00:16:03.590
was literally telling a packed room of people

00:16:03.590 --> 00:16:06.559
not to fear death. This raises an important question,

00:16:06.740 --> 00:16:08.860
though. Why are we so willing to do this? Why

00:16:08.860 --> 00:16:11.440
are we so eager to offload our physical health

00:16:11.440 --> 00:16:14.240
diagnoses, our legal defense, our emotional

00:16:14.240 --> 00:16:17.240
well-being, and even our spiritual sermons to a statistical

00:16:17.240 --> 00:16:19.299
text predictor? It's wild when you say it like

00:16:19.299 --> 00:16:21.899
that. There is a profound psychological vulnerability

00:16:21.899 --> 00:16:24.980
here. We are seeking absolute certainty and frictionless

00:16:24.980 --> 00:16:27.320
support in areas of human life that are inherently

00:16:27.320 --> 00:16:30.000
uncertain and full of friction. That is a deeply

00:16:30.000 --> 00:16:32.259
unsettling thought, honestly. We just want the

00:16:32.259 --> 00:16:34.600
easy answer, even if the entity giving it to

00:16:34.600 --> 00:16:37.240
us is really just doing math. But as ChatGPT

00:16:37.240 --> 00:16:39.559
attempts to navigate all these high stakes human

00:16:39.559 --> 00:16:42.659
systems, we are forced to confront the illusion

00:16:42.659 --> 00:16:46.240
of the perfect machine. Here's where it gets

00:16:46.240 --> 00:16:50.080
really interesting. To make the AI seem so polished

00:16:50.080 --> 00:16:53.120
and so helpful, it actually relies on an enormous

00:16:53.120 --> 00:16:55.779
amount of grueling, hidden human labor. This

00:16:55.779 --> 00:16:57.100
is the part people don't want to talk about.

00:16:57.529 --> 00:17:00.629
Building the safety filters. The guardrails that

00:17:00.629 --> 00:17:03.169
prevent ChatGPT from generating horrific, illegal,

00:17:03.289 --> 00:17:05.990
or toxic content. That requires a process called

00:17:05.990 --> 00:17:08.829
reinforcement learning from human feedback, or

00:17:08.829 --> 00:17:11.750
RLHF. And the human part of that equation takes

00:17:11.750 --> 00:17:14.690
a massive toll. A huge toll. The source outlines

00:17:14.690 --> 00:17:17.529
this perfectly. OpenAI used outsourced workers

00:17:17.529 --> 00:17:20.259
in Kenya through a company called Sama. These

00:17:20.259 --> 00:17:22.940
people were earning between $1.32 and $2 an

00:17:22.940 --> 00:17:25.420
hour. Just pennies, really. Yeah. And their entire

00:17:25.420 --> 00:17:28.859
job was to read, label, and categorize toxic,

00:17:29.079 --> 00:17:31.680
traumatic, and deeply disturbing content so the

00:17:31.680 --> 00:17:33.460
AI could mathematically learn what to filter

00:17:33.460 --> 00:17:35.700
out. One worker literally described the assignment

00:17:35.700 --> 00:17:38.380
as torture. Because it is the digital equivalent

00:17:38.380 --> 00:17:40.940
of toxic waste cleanup. What? We get to sit here

00:17:40.940 --> 00:17:44.680
and enjoy a clean, polite, helpful chatbot interface.

00:17:45.230 --> 00:17:48.329
only because thousands of low-wage workers absorbed

00:17:48.329 --> 00:17:50.650
the trauma of the internet's darkest corners

00:17:50.650 --> 00:17:54.250
to build those invisible guardrails for us. We

00:17:54.250 --> 00:17:56.730
look at the AI as this perfectly neutral oracle.

00:17:57.670 --> 00:18:00.269
But underneath it, there are real people earning

00:18:00.269 --> 00:18:02.769
$2 an hour traumatizing themselves to make it

00:18:02.769 --> 00:18:05.569
safe. The AI isn't neutral because the human

00:18:05.569 --> 00:18:07.680
world building it isn't neutral. Which brings

00:18:07.680 --> 00:18:10.359
us right to the intense political polarization

00:18:10.359 --> 00:18:12.839
surrounding the tool. Yes. And before we dive

00:18:12.839 --> 00:18:14.799
into this part, we need to be incredibly clear

00:18:14.799 --> 00:18:17.200
with you, the listener. We are not taking sides

00:18:17.200 --> 00:18:19.680
here. Not at all. We are strictly reporting on

00:18:19.680 --> 00:18:21.680
the information provided in the source material

00:18:21.680 --> 00:18:24.700
just to show how this technology has become a

00:18:24.700 --> 00:18:26.900
massive battleground in our modern culture wars.

00:18:27.160 --> 00:18:29.799
We are absolutely not endorsing any political

00:18:29.799 --> 00:18:32.019
viewpoint. Right. We are simply looking at the

00:18:32.019 --> 00:18:35.039
AI as a focal point for societal tension. So,

00:18:35.160 --> 00:18:37.500
on one side of the spectrum, you have multiple

00:18:37.500 --> 00:18:40.000
academic studies, including a major one published

00:18:40.000 --> 00:18:43.119
in the journal Public Choice, that found a systemic

00:18:43.119 --> 00:18:46.500
political bias in CHAT GPT's outputs toward left

00:18:46.500 --> 00:18:49.039
-leaning perspectives. Right, specifically favoring

00:18:49.039 --> 00:18:51.019
the Democrats in the US or the Labour Party in

00:18:51.019 --> 00:18:53.539
the UK. Exactly. And conservative commentators

00:18:53.539 --> 00:18:56.200
have heavily criticized the AI for this, arguing

00:18:56.200 --> 00:18:58.160
that the safety filters and the training data

00:18:58.160 --> 00:19:01.079
have baked a very specific ideological worldview

00:19:01.519 --> 00:19:04.259
right into the system. And then on the exact

00:19:04.259 --> 00:19:07.039
other side of the spectrum, the source details

00:19:07.039 --> 00:19:10.079
a massive user backlash that just happened in

00:19:10.079 --> 00:19:13.660
February 2026. A movement started on Reddit called

00:19:13.660 --> 00:19:16.700
Quit GPT, where hundreds of thousands of users

00:19:16.700 --> 00:19:19.420
organized a massive boycott and canceled their

00:19:19.420 --> 00:19:22.339
$200 subscriptions. All over a political donation.

00:19:22.519 --> 00:19:26.460
Yes. This was entirely over a $25 million donation

00:19:26.460 --> 00:19:29.720
made by OpenAI's president, Greg Brockman, to

00:19:29.720 --> 00:19:32.079
a Trump super PAC. So you have the algorithmic

00:19:32.079 --> 00:19:34.380
outputs leaning one way, the executive money

00:19:34.380 --> 00:19:37.079
leaning another, and users on all sides are just

00:19:37.079 --> 00:19:39.279
furious that the tool is entangled in politics

00:19:39.279 --> 00:19:42.099
at all. It just highlights that you cannot separate

00:19:42.099 --> 00:19:45.180
a trillion-dollar technology from the socio-political

00:19:45.180 --> 00:19:47.019
environment it exists within. It's impossible.

00:19:47.680 --> 00:19:49.940
And this tension extends to how the company manages

00:19:49.940 --> 00:19:52.400
fundamental issues of trust and security too.

00:19:52.500 --> 00:19:55.779
Oh, like the watermark thing. Yes. The timeline

00:19:55.779 --> 00:19:58.559
notes that OpenAI actually successfully developed

00:19:58.559 --> 00:20:01.940
a watermarking tool to detect AI-generated text,

00:20:02.119 --> 00:20:04.799
but they completely refused to release it publicly.

00:20:04.940 --> 00:20:07.960
But why? I mean, cheating in schools, academic

00:20:07.960 --> 00:20:10.140
fraud, and political misinformation are such

00:20:10.140 --> 00:20:13.000
huge problems for society right now. Why would

00:20:13.000 --> 00:20:15.339
they sit on the technological solution? Market

00:20:15.339 --> 00:20:18.500
competition. Pure and simple. They explicitly

00:20:18.500 --> 00:20:21.180
stated they feared that if they applied the watermark,

00:20:21.720 --> 00:20:24.079
users who wanted to pass off AI text as their

00:20:24.079 --> 00:20:26.920
own human work would just cancel their subscriptions.

00:20:27.039 --> 00:20:29.079
They'd go somewhere else. Right. They'd flee

00:20:29.079 --> 00:20:31.200
to competitor models that didn't use watermarks.

00:20:31.779 --> 00:20:33.940
So business survival, keeping that user growth

00:20:33.940 --> 00:20:36.380
high, took precedence over societal transparency.

00:20:36.619 --> 00:20:39.779
Wow. And the stakes of user safety are literally

00:20:39.779 --> 00:20:42.759
life and death. The timeline notes that in September

00:20:42.759 --> 00:20:46.180
2025, following the tragic suicide of a 16-year-old

00:20:46.180 --> 00:20:49.380
user, OpenAI had to rapidly announce plans

00:20:49.380 --> 00:20:52.200
to add restrictions for users under 18. Blocking

00:20:52.200 --> 00:20:54.319
certain content and flirtatious interactions.

00:20:54.559 --> 00:20:57.119
Yeah. It shows just how reactive these safety

00:20:57.119 --> 00:20:58.960
measures are when the product is scaling this

00:20:58.960 --> 00:21:00.380
fast. They're building the plane while they're

00:21:00.380 --> 00:21:03.099
flying it. If we synthesize all of this, the

00:21:03.099 --> 00:21:05.539
hidden labor, the political boycotts, the delayed

00:21:05.539 --> 00:21:08.319
safety features, what really emerges is that

00:21:08.319 --> 00:21:10.680
artificial intelligence is ultimately a mirror.

00:21:10.680 --> 00:21:13.180
A mirror. Yes. It doesn't just reflect our data.

00:21:13.259 --> 00:21:16.220
It reflects our geopolitical realities. The source

00:21:16.220 --> 00:21:19.740
points out that OpenAI actively had to ban state-backed

00:19:19.740 --> 00:19:22.839
influence operations from China, Russia,

00:21:22.880 --> 00:21:25.880
and Israel that were using ChatGPT to generate

00:21:25.880 --> 00:21:29.039
international propaganda. Wow. The AI reflects

00:21:29.039 --> 00:21:32.440
our global influence. It reflects our labor inequalities

00:21:32.440 --> 00:21:35.720
and it reflects our own deeply entrenched societal

00:21:35.720 --> 00:21:38.190
divisions. What does this all mean? We started

00:21:38.190 --> 00:21:40.789
this deep dive looking at a quirky intern that

00:21:40.789 --> 00:21:44.069
wrote bad poetry back in 2022. And we traced

00:21:44.069 --> 00:21:46.670
its evolution into this autonomous agent that

00:21:46.670 --> 00:21:49.250
parses web code, spends your money, and operates

00:21:49.250 --> 00:21:51.529
on massive water-cooled supercomputers. Doing

00:21:51.529 --> 00:21:54.049
things we couldn't have imagined. Exactly. It's

00:21:54.049 --> 00:21:56.150
passing complex medical exams, it's hallucinating

00:21:56.150 --> 00:21:58.809
federal legal cases, and it's sitting dead center

00:21:58.809 --> 00:22:00.869
in the middle of our modern political and ethical

00:22:00.869 --> 00:22:04.509
firestorms. It has been a relentless integration

00:22:04.509 --> 00:22:08.309
into the human experience, but there is one final

00:22:08.309 --> 00:22:10.789
lingering thought I want to leave you with, something

00:22:10.789 --> 00:22:13.309
that wasn't explicitly spelled out in the timeline,

00:22:13.670 --> 00:22:16.230
but sits right beneath the surface of all these

00:22:16.230 --> 00:22:18.269
developments. OK. We'll be honest. We've spent

00:22:18.269 --> 00:22:20.990
this entire time talking about how ChatGPT is

00:22:20.990 --> 00:22:24.829
learning to mimic and perform human tasks, right?

00:22:24.829 --> 00:22:27.329
Right. It's learning to write code, to buy groceries,

00:22:27.630 --> 00:22:30.109
to diagnose illnesses, and to provide comfort.

00:22:30.789 --> 00:22:33.730
But as we increasingly rely on ChatGPT Health

00:22:33.730 --> 00:22:36.509
for understanding our own bodies, as we lean on

00:22:36.509 --> 00:22:38.670
the agentic commerce protocol for our shopping,

00:22:39.009 --> 00:22:41.769
and as we look to AI avatars for our spiritual

00:22:41.769 --> 00:22:44.309
and emotional comfort, are we the ones actually

00:22:44.309 --> 00:22:47.269
being programmed? Whoa. Think about it. If a

00:22:47.269 --> 00:22:49.250
machine is constantly anticipating your needs,

00:22:49.589 --> 00:22:51.150
smoothing out the friction of your daily life,

00:22:51.190 --> 00:22:53.170
and making the frictionless choices for you,

00:22:53.410 --> 00:22:56.029
what happens to the uniquely human muscle of

00:22:56.029 --> 00:22:58.710
decision making? It just doesn't get used. Exactly.

00:22:58.970 --> 00:23:01.349
If you no longer have to struggle to find an

00:23:01.349 --> 00:23:04.170
answer, or wrestle with a complex user interface,

00:23:04.490 --> 00:23:06.680
or just sit with the uncomfortable silence of

00:23:06.680 --> 00:23:09.559
a human therapist, does that human capacity begin

00:23:09.559 --> 00:23:12.140
to atrophy? If the machine does all the thinking,

00:23:12.680 --> 00:23:15.339
what exactly is left for us to do? That brings

00:23:15.339 --> 00:23:18.920
us right back to where we started. In 2022, we

00:23:18.920 --> 00:23:21.220
were looking at a novelty. Today, we're looking

00:23:21.220 --> 00:23:23.460
at a system that is fundamentally rewiring how

00:23:23.460 --> 00:23:25.380
we interact with the world and potentially how

00:23:25.380 --> 00:23:27.900
we interact with our own minds. The world is

00:23:27.900 --> 00:23:30.359
changing incredibly fast, and just keeping up

00:23:30.359 --> 00:23:32.779
is half the battle. Thank you so much for joining

00:23:32.779 --> 00:23:35.140
us on this deep dive. Stay insanely curious,

00:23:35.420 --> 00:23:37.059
keep asking the hard questions, and we'll catch

00:23:37.059 --> 00:23:37.700
you on the next one.
