WEBVTT

00:00:00.000 --> 00:00:04.660
GPT-5 just managed to score a gold medal in

00:00:04.660 --> 00:00:08.060
a Ph.D.-level astronomy exam. It actually outperformed

00:00:08.060 --> 00:00:10.259
some of the best human participants in the world.

00:00:10.820 --> 00:00:13.519
It feels like we're approaching the limit,

00:00:13.599 --> 00:00:16.019
not just of what machines can compute, but maybe,

00:00:16.039 --> 00:00:17.660
you know, the limits of what human knowledge

00:00:17.660 --> 00:00:19.280
has already gathered. That's a really interesting

00:00:19.280 --> 00:00:21.420
way to put it. Yeah, it shows us exactly where

00:00:21.420 --> 00:00:24.440
that frontier, the edge of capability is right

00:00:24.440 --> 00:00:27.059
now. Totally. Welcome, everyone, to the Deep

00:00:27.059 --> 00:00:30.120
Dive. So we've synthesized the latest stack of

00:00:30.120 --> 00:00:31.699
your shared sources, you know, these critical

00:00:31.699 --> 00:00:34.479
reports, academic studies, the industry newsletters.

00:00:34.619 --> 00:00:37.179
And our mission today, well, it's pretty simple.

00:00:37.880 --> 00:00:39.619
We need to figure out what these stunning AI

00:00:39.619 --> 00:00:42.320
achievements really mean for academia, for the

00:00:42.320 --> 00:00:46.140
big shifts happening in jobs and for ethics globally.

00:00:46.280 --> 00:00:48.740
Yeah, we've got quite a dive planned. So first,

00:00:48.820 --> 00:00:50.960
we'll definitely start with that astonishing

00:00:50.960 --> 00:00:54.159
AI academic mastery in cosmology, you know, where

00:00:54.159 --> 00:00:55.880
the models just completely raise the bar for

00:00:55.880 --> 00:00:58.520
human testing. Then we're going to shift gears

00:00:58.520 --> 00:01:00.619
pretty quickly. We'll cover key trends in creative

00:01:00.619 --> 00:01:03.100
AI adoption and also what's happening with global

00:01:03.100 --> 00:01:05.459
regulation. And finally, and this is crucial,

00:01:05.620 --> 00:01:08.000
we really have to take a critical look at the

00:01:08.000 --> 00:01:11.019
new internal data that details the political

00:01:11.019 --> 00:01:13.459
bias deep inside these large language models.

00:01:13.680 --> 00:01:15.500
Right. Think of these different sources like

00:01:15.500 --> 00:01:19.159
stacking Lego blocks of data. We quickly build them

00:01:19.159 --> 00:01:20.920
up, put them together into a clear view so you

00:01:20.920 --> 00:01:22.920
can walk away really understanding the critical

00:01:22.920 --> 00:01:25.819
nuances. Let's unpack this pile. Let's get into

00:01:25.819 --> 00:01:27.609
it. Okay, let's start where the sources were,

00:01:27.730 --> 00:01:30.829
frankly, most surprising, the academic world.

00:01:31.209 --> 00:01:35.709
A new paper just dropped showing GPT-5 and Gemini

00:01:35.709 --> 00:01:38.489
2.5 Pro achieving gold-medal levels on the International

00:01:38.489 --> 00:01:42.390
Olympiad on Astronomy and Astrophysics, the IOAA.

00:01:42.730 --> 00:01:45.120
Yeah. Now, that sounds impressive as a headline,

00:01:45.299 --> 00:01:47.040
but we need to grasp the level of difficulty

00:01:47.040 --> 00:01:48.799
we're talking about here. Well, what's fascinating,

00:01:49.019 --> 00:01:50.840
right, is that this isn't just recalling facts

00:01:50.840 --> 00:01:53.159
or definitions, not simple trivia. The researchers,

00:01:53.280 --> 00:01:56.519
they tested the models using actual IOAA exams

00:01:56.519 --> 00:02:00.400
from 2022 right through to 2025 projections.

00:02:00.859 --> 00:02:04.079
These are serious PhD-level, multi-step theoretical

00:02:04.079 --> 00:02:07.019
problems. They need deep physical understanding,

00:02:07.319 --> 00:02:09.819
complex math. I mean, they're designed for the

00:02:09.819 --> 00:02:12.740
absolute most dedicated space nerds on the planet.

00:02:13.159 --> 00:02:15.080
When you look at the scores, the performance

00:02:15.080 --> 00:02:18.039
is, well, almost unbelievable. Looking at

00:02:18.039 --> 00:02:20.300
that breakdown, GPT-5 was just dominant in the

00:02:20.300 --> 00:02:24.039
theoretical exams. Yeah. Scored 93.0% in 2022,

00:02:24.520 --> 00:02:27.259
nearly 90% the next year, and still strong at

00:02:27.259 --> 00:02:31.259
86.8% for the hypothetical 2025 questions.

00:02:31.599 --> 00:02:34.219
Right. And while Gemini 2.5 Pro actually managed

00:02:34.219 --> 00:02:37.340
to take a narrow lead in the 2024 exam with 83

00:02:37.340 --> 00:02:41.360
.0%. The standout figure for GPT-5, for me anyway,

00:02:41.460 --> 00:02:43.400
was its exceptional performance in the data analysis

00:02:43.400 --> 00:02:45.879
section. It scored 88.5% there, which was actually

00:02:45.879 --> 00:02:48.120
higher than its general theoretical score. Okay,

00:02:48.199 --> 00:02:49.479
so what does that tell us? Well, it suggests

00:02:49.479 --> 00:02:51.599
the model isn't just good at, you know, recalling

00:02:51.599 --> 00:02:53.979
principles. It seems to excel at handling complex,

00:02:54.080 --> 00:02:56.939
kind of messy, real -world data, maybe even better

00:02:56.939 --> 00:02:59.680
than generalized theory. So if we step back for

00:02:59.680 --> 00:03:02.960
a second, what does this mean for us, for you

00:03:02.960 --> 00:03:06.099
listening? The model didn't just meet the gold

00:03:06.099 --> 00:03:10.180
medal thresholds. In multiple years, GPT-5 actually

00:03:10.180 --> 00:03:13.400
outperformed the best human participants competing.

00:03:13.719 --> 00:03:16.020
Yeah. That just changes the whole conversation.

00:03:16.319 --> 00:03:18.360
It really does. Now, I should add the caveat.

00:03:18.639 --> 00:03:21.159
You know, not all models are quite there yet.

00:03:21.319 --> 00:03:23.479
Claude Sonnet 4, for example, fell noticeably

00:03:23.479 --> 00:03:26.659
short. OK. But crucially, all the top models

00:03:26.659 --> 00:03:28.900
still made what the researchers called human

00:03:28.900 --> 00:03:31.520
-like mistakes. They didn't get perfect scores.

00:03:31.639 --> 00:03:33.379
They weren't flawless. Right. So they aren't

00:03:33.379 --> 00:03:35.520
achieving some perfect theoretical truth then.

00:03:35.599 --> 00:03:39.719
They're mimicking human error patterns. What

00:03:39.719 --> 00:03:41.319
does that tell us about their current learning

00:03:41.319 --> 00:03:44.060
methods, maybe their future trajectory? Well,

00:03:44.139 --> 00:03:46.860
it suggests they aren't finding some, you know,

00:03:46.860 --> 00:03:48.919
perfect objective truth out there. They seem to

00:03:48.919 --> 00:03:51.300
be replicating the inherent gaps, maybe the biases,

00:03:51.300 --> 00:03:53.400
the blind spots that are present in the human

00:03:53.400 --> 00:03:55.960
scientific literature they trained on. Which means

00:03:55.960 --> 00:03:58.539
right now the models are acting as incredibly

00:03:58.539 --> 00:04:01.099
powerful mirrors of our knowledge, not necessarily

00:04:01.099 --> 00:04:04.159
perfect originators of new knowledge. That's a

00:04:04.159 --> 00:04:06.439
key distinction. So here's the implication then:

00:04:06.439 --> 00:04:11.080
if AI can crush these complex multi-step tests,

00:04:11.120 --> 00:04:13.860
tests needing reasoning, sophisticated data handling,

00:04:14.020 --> 00:04:16.860
then maybe using these really rigorous science

00:04:16.860 --> 00:04:20.100
exams needs to become the new global gold standard

00:04:20.100 --> 00:04:23.279
for benchmarking AI capabilities. Exactly. We

00:04:23.279 --> 00:04:25.379
have to move past like simple reading comprehension

00:04:25.379 --> 00:04:29.629
tests. The bar just went way, way up. Whoa. Just

00:04:29.629 --> 00:04:32.170
imagine scaling that kind of analytical power

00:04:32.170 --> 00:04:34.610
across every single scientific discipline, you

00:04:34.610 --> 00:04:36.670
know, from biochemistry to advanced particle

00:04:36.670 --> 00:04:39.629
physics. The pace of discovery is going to accelerate

00:04:39.629 --> 00:04:42.689
wildly. It forces us to redefine what expert

00:04:42.689 --> 00:04:45.069
human thinking even means. OK, so let's circle

00:04:45.069 --> 00:04:47.370
back to that point about mistakes. If models

00:04:47.370 --> 00:04:49.050
are still making those human-like mistakes,

00:04:49.350 --> 00:04:51.110
what does that tell us about their current learning

00:04:51.110 --> 00:04:53.310
methods? They replicate human error patterns,

00:04:53.509 --> 00:04:55.350
showing the boundaries of their current training: they're

00:04:55.350 --> 00:04:58.410
still imperfect copies. Imperfect copies. That's

00:04:58.410 --> 00:05:02.009
a powerful thought to start with. So if AI has

00:05:02.009 --> 00:05:04.009
mastered the abstract rules of the universe,

00:05:04.209 --> 00:05:06.629
you know, in astrophysics, the next question

00:05:06.629 --> 00:05:09.189
has to be, how quickly is it rewriting the rules

00:05:09.189 --> 00:05:11.509
here on Earth, the social rules, the legal rules?

00:05:11.689 --> 00:05:15.149
Let's move from academic genius to how AI is

00:05:15.149 --> 00:05:18.329
shaping culture, industry, law right now. Yeah,

00:05:18.389 --> 00:05:20.329
we're seeing a major cultural shift happening.

00:05:20.389 --> 00:05:22.939
Let's look at the creative market first. Generative

00:05:22.939 --> 00:05:25.540
video. It's truly hitting the mainstream consciousness

00:05:25.540 --> 00:05:28.779
now. We've all seen those viral Sora-generated

00:05:28.779 --> 00:05:32.220
Olympics memes, you know, the one with Jesus swimming,

00:05:32.300 --> 00:05:33.959
the smoking Olympics. Oh, yeah. I saw the smoking

00:05:33.959 --> 00:05:36.360
Olympics one. It felt like the speed of

00:05:36.360 --> 00:05:38.939
creative absurdity is just astonishing now, almost

00:05:38.939 --> 00:05:41.100
overwhelming. It really is. And meanwhile, the

00:05:41.100 --> 00:05:43.399
heavy hitters, they're preparing for the next

00:05:43.399 --> 00:05:45.939
big thing. We saw a huge signal from xAI. They're

00:05:45.939 --> 00:05:48.060
actively recruiting NVIDIA specialists specifically

00:05:48.060 --> 00:05:51.259
to create world models aimed at video game creation.

00:05:51.420 --> 00:05:53.699
World models for games. Yeah. This is about building

00:05:53.699 --> 00:05:56.120
persistent, believable digital worlds. That's

00:05:56.120 --> 00:05:58.839
a massive market signal for where future investment

00:05:58.839 --> 00:06:01.740
is likely headed. That's a huge focus. Okay.

00:06:01.779 --> 00:06:05.329
And on the more accessible side, there was that

00:06:05.329 --> 00:06:08.769
subtle hint about ChatGPT possibly becoming

00:06:08.769 --> 00:06:13.069
more social, based on a hidden messaging tab OpenAI's

00:06:13.069 --> 00:06:16.050
COO showed. Right. Maybe they want to shift from

00:06:16.050 --> 00:06:18.110
just being a utility to more of a communication

00:06:18.110 --> 00:06:20.850
platform. We'll see. Plus, I always love finding

00:06:20.850 --> 00:06:23.329
these practical, like, daily-use things. A researcher

00:06:23.329 --> 00:06:25.569
just dropped a free AI tool that converts any

00:06:25.569 --> 00:06:28.860
PDF into a fillable form. Almost instantly. Oh,

00:06:28.939 --> 00:06:31.319
nice. Yeah, you just upload it. It auto-detects

00:06:31.319 --> 00:06:34.199
fields. You export. Super useful for, you know.

00:06:34.620 --> 00:06:37.060
The rest of us not building world models. Definitely

00:06:37.060 --> 00:06:40.060
handy. OK, now let's talk governance, because

00:06:40.060 --> 00:06:42.439
the battles here are really heating up. The use

00:06:42.439 --> 00:06:45.519
of AI is just outpacing regulation. Hollywood's

00:06:45.519 --> 00:06:47.579
above-the-line unions, so writers, directors,

00:06:47.860 --> 00:06:51.019
actors, they are gearing up for serious AI negotiations.

00:06:51.100 --> 00:06:53.399
Right. And experts are already pointing to things

00:06:53.399 --> 00:06:55.639
like the virtual actress Tilly Norwood as a key

00:06:55.639 --> 00:06:58.399
negotiating issue. It really defines the future

00:06:58.399 --> 00:07:01.470
of digital labor, IP ownership. All that stuff.

00:07:01.649 --> 00:07:03.189
It's not just the unions trying to catch up,

00:07:03.209 --> 00:07:04.850
though. The official regulatory landscape is

00:07:04.850 --> 00:07:07.670
scrambling, too. Yeah. Oh, absolutely. Globally,

00:07:07.670 --> 00:07:10.449
the EU just launched its massive one billion

00:07:10.449 --> 00:07:13.569
euro Apply AI plan. They're offering industries free

00:07:13.569 --> 00:07:17.430
supercomputer access, new AI hubs, really trying

00:07:17.430 --> 00:07:20.009
to push industrial adoption across Europe. And

00:07:20.009 --> 00:07:23.050
in the U.S.? Well, California, often setting

00:07:23.050 --> 00:07:24.990
the precedent, became the first state to actually

00:07:24.990 --> 00:07:27.889
regulate AI companion chatbots. That touches

00:07:27.889 --> 00:07:30.129
directly on sensitive areas like mental health

00:07:30.129 --> 00:07:32.410
and data privacy. Wait, let's go back to Tilly

00:07:32.410 --> 00:07:34.329
Norwood for a second. If unions and regulators

00:07:34.329 --> 00:07:36.649
are pushing back, are the unions fighting for

00:07:36.649 --> 00:07:40.149
her to be considered like an asset or a contracted

00:07:40.149 --> 00:07:42.350
employee? What's the core legal challenge there?

00:07:42.680 --> 00:07:44.819
Well, the core conflict really boils down to

00:07:44.819 --> 00:07:47.339
defining ownership of the performance. If Tilly

00:07:47.339 --> 00:07:49.779
Norwood's digital likeness is trained on a real

00:07:49.779 --> 00:07:52.480
actor's movements, their voice, are we paying

00:07:52.480 --> 00:07:55.279
royalties to that original actor? Or is the synthetic

00:07:55.279 --> 00:07:57.259
character a completely separate entity? It's

00:07:57.259 --> 00:07:59.680
fundamentally a battle over who owns the creative

00:07:59.680 --> 00:08:01.920
labor, whether it's physical or virtual. Gotcha.

00:08:02.000 --> 00:08:04.300
Okay. And we also saw some significant tension

00:08:04.300 --> 00:08:06.360
bubbling up in the foundations of this tech,

00:08:06.540 --> 00:08:10.180
the data infrastructure itself. China issued

00:08:10.180 --> 00:08:14.199
a threat to, quote, Pop the entire AI data center

00:08:14.199 --> 00:08:17.220
bubble. That sounds serious. A major economic

00:08:17.220 --> 00:08:19.680
and geopolitical risk. It creates huge instability.

00:08:19.819 --> 00:08:22.720
Yeah. Training these massive world models, it

00:08:22.720 --> 00:08:26.220
requires just colossal clusters of GPUs, enormous

00:08:26.220 --> 00:08:28.500
amounts of energy. So when the infrastructure,

00:08:28.660 --> 00:08:30.720
the access to that hardware becomes dependent

00:08:30.720 --> 00:08:33.360
on political stability or, you know, single nation

00:08:33.360 --> 00:08:36.659
states, the entire global progress in AI is potentially

00:08:36.659 --> 00:08:40.519
at risk. OK, so with unions and regulation ramping

00:08:40.519 --> 00:08:42.480
up everywhere, what do you say is the single

00:08:42.480 --> 00:08:45.139
biggest emerging area of legal conflict? Data

00:08:45.139 --> 00:08:47.340
ownership and labor definitions, especially for

00:08:47.340 --> 00:08:49.659
virtual assets like Tilly Norwood. [Mid-roll

00:08:49.659 --> 00:08:52.480
sponsor placeholder.] Welcome back. The sheer

00:08:52.480 --> 00:08:54.879
power of these tools, you know, shown by crushing

00:08:54.879 --> 00:08:57.480
those PhD-level tests, it just demands rigorous

00:08:57.480 --> 00:09:00.059
ethical testing. Even as the models get better

00:09:00.059 --> 00:09:01.960
at science, we have to look really closely at

00:09:01.960 --> 00:09:04.639
bias. Let's shift to that new internal study

00:09:04.639 --> 00:09:07.110
on political bias that OpenAI released. Yeah,

00:09:07.169 --> 00:09:08.830
this is critically important. It really gets

00:09:08.830 --> 00:09:11.470
to the heart of alignment. So OpenAI claims,

00:09:11.750 --> 00:09:14.610
based on their own internal tests, that GPT-5

00:09:14.610 --> 00:09:17.870
is about 30% less politically biased than both

00:09:17.870 --> 00:09:21.870
GPT-4 and the newer GPT-4o. Okay, 30% less

00:09:21.870 --> 00:09:24.129
biased. How did they test that? They were pretty

00:09:24.129 --> 00:09:27.190
rigorous. They used 500 different prompts across

00:09:27.190 --> 00:09:30.490
100 sensitive topics. Then they graded the responses

00:09:30.490 --> 00:09:33.590
on five specific bias metrics, things like

00:09:33.590 --> 00:09:36.009
neutrality, emotional mirroring, that sort of

00:09:36.009 --> 00:09:38.330
thing. That 30% reduction is progress. Yeah,

00:09:38.389 --> 00:09:40.450
definitely. Yeah. But according to their own

00:09:40.450 --> 00:09:43.169
research, bias still manages to creep in. It

00:09:43.169 --> 00:09:45.590
shows up in three core ways, even in this improved

00:09:45.590 --> 00:09:49.590
GPT-5 model. Exactly. So, the first is by stating

00:09:49.590 --> 00:09:51.669
opinions as facts, you know, where the model

00:09:51.669 --> 00:09:53.330
acts like it holds a political view instead of

00:09:53.330 --> 00:09:55.870
just synthesizing diverse facts neutrally. Right.

00:09:55.950 --> 00:09:58.590
And the second way? Offering only a single perspective,

00:09:58.789 --> 00:10:00.830
not really presenting both sides of a complex

00:10:00.830 --> 00:10:03.070
issue fairly. Okay. And the third mechanism,

00:10:03.190 --> 00:10:05.269
this echoing emotional framing. That sounds subtle.

00:10:05.370 --> 00:10:07.950
Can you unpack that for the listener? Sure. Emotional

00:10:07.950 --> 00:10:11.149
mirroring or echoing emotional framing means

00:10:11.149 --> 00:10:13.730
if a user starts the conversation with, say,

00:10:13.730 --> 00:10:16.990
an angry or really polarized statement, the model

00:10:16.990 --> 00:10:19.299
tends to subtly match that negative tone and

00:10:19.299 --> 00:10:21.100
polarity in its response. It makes the exchange

00:10:21.100 --> 00:10:24.440
feel more biased, more reinforcing, than a strictly

00:10:24.440 --> 00:10:28.639
neutral response would be. It's a very, very human

00:10:28.639 --> 00:10:31.600
social mimicry pattern, actually. And this context

00:10:31.600 --> 00:10:33.639
is absolutely key because we are entering this

00:10:33.639 --> 00:10:36.700
massive global election wave, right? Between 2024

00:10:36.700 --> 00:10:40.419
and 2026, billions of people might be turning

00:10:40.419 --> 00:10:42.919
to these models for political information. The

00:10:42.919 --> 00:10:45.370
stakes just couldn't be higher. Precisely. If

00:10:45.370 --> 00:10:47.769
someone asks a highly political or sensitive

00:10:47.769 --> 00:10:50.110
question about, say, a candidate or a policy

00:10:50.110 --> 00:10:52.549
platform, that response absolutely needs to be

00:10:52.549 --> 00:10:54.490
strictly neutral. Otherwise, it could definitely

00:10:54.490 --> 00:10:56.750
affect public sentiment, potentially even election

00:10:56.750 --> 00:10:59.570
integrity. You know, I still wrestle with prompt

00:10:59.570 --> 00:11:02.610
drift myself sometimes when I'm trying to keep

00:11:02.610 --> 00:11:06.029
models balanced in my own work. If I get slightly

00:11:06.029 --> 00:11:08.620
more opinionated in a follow-up question, it

00:11:08.620 --> 00:11:10.440
can be really hard to pull the model back to

00:11:10.440 --> 00:11:13.299
the center. Achieving genuine neutrality is,

00:11:13.320 --> 00:11:15.700
well, it's incredibly difficult. That's a great

00:11:15.700 --> 00:11:17.639
observation, and it shows exactly why we need

00:11:17.639 --> 00:11:19.519
these kinds of internal studies and transparency.

00:11:20.019 --> 00:11:22.120
But we should put this into perspective, too.

00:11:23.059 --> 00:11:26.440
OpenAI's own logs show that fewer than 0.01

00:11:26.440 --> 00:11:29.879
% of real ChatGPT conversations actually show

00:11:29.879 --> 00:11:32.840
measurable political bias across the board. Oh,

00:11:32.840 --> 00:11:35.200
that low. Okay. Yeah, the vast majority of interactions

00:11:35.200 --> 00:11:37.460
are, you know, neutral or technical inquiries:

00:11:37.899 --> 00:11:39.940
asking for code, summarizing text, things like

00:11:39.940 --> 00:11:42.820
that. Okay, so given those remarkably low real

00:11:42.820 --> 00:11:45.620
-world bias logs, that tiny percentage, why is

00:11:45.620 --> 00:11:47.899
this internal study still considered so critically

00:11:47.899 --> 00:11:50.340
important by researchers and regulators? Why

00:11:50.340 --> 00:11:52.840
focus so much effort there? It's about scale

00:11:52.840 --> 00:11:55.759
and the potential for targeted influence. Small

00:11:55.759 --> 00:11:58.120
biases can affect sensitive electoral results during

00:11:58.120 --> 00:12:01.480
major global voting periods. Got it. Scale and

00:12:01.480 --> 00:12:04.399
sensitivity. Well, this has been a truly comprehensive

00:12:04.399 --> 00:12:07.539
deep dive. We've seen AI move from absolutely

00:12:07.539 --> 00:12:11.480
crushing Ph.D.-level tests, showing this stunning,

00:12:11.620 --> 00:12:13.879
almost superhuman capacity for objective knowledge

00:12:13.879 --> 00:12:16.679
to becoming a central negotiation point for Hollywood

00:12:16.679 --> 00:12:19.919
unions and a flashpoint for global infrastructure

00:12:19.919 --> 00:12:22.360
tension. Yeah, the underlying challenge, it really

00:12:22.360 --> 00:12:24.720
remains ethical alignment and sort of societal

00:12:24.720 --> 00:12:27.080
definition, whether it's that subtle bias in

00:12:27.080 --> 00:12:28.639
political answers that could potentially swing

00:12:28.639 --> 00:12:30.879
an election or the regulatory status of virtual

00:12:30.879 --> 00:12:33.659
actors like Tilly Norwood. The human-AI interaction

00:12:33.659 --> 00:12:36.000
is forcing society to define its legal and ethical

00:12:36.000 --> 00:12:38.159
rules basically in real time as the tech just

00:12:38.159 --> 00:12:40.080
keeps scaling up. Thank you for sharing your

00:12:40.080 --> 00:12:42.100
sources and diving deep with us today. We really

00:12:42.100 --> 00:12:44.700
appreciate your curiosity and engagement. We'll

00:12:44.700 --> 00:12:46.379
leave you with this final thought to chew on.

00:12:47.080 --> 00:12:50.039
If models like GPT-5 can outperform the very

00:12:50.039 --> 00:12:53.259
best humans in objective science, like astrophysics,

00:12:53.440 --> 00:12:56.200
should we maybe be focusing less on human-level

00:12:56.200 --> 00:12:58.759
performance as a goal, and perhaps focus almost

00:12:58.759 --> 00:13:01.039
entirely on preventing those subtle, potentially

00:13:01.039 --> 00:13:03.240
targeted biases in the really subjective areas,

00:13:03.360 --> 00:13:05.460
like politics and culture? Something to mull

00:13:05.460 --> 00:13:07.399
over until next time. [Outro music.]
