WEBVTT

00:00:00.000 --> 00:00:03.000
Have you ever been working with ChatGPT or another

00:00:03.000 --> 00:00:05.700
one of these big AI models, maybe trying to write

00:00:05.700 --> 00:00:08.179
a story or something creative? Oh, yeah. And

00:00:08.179 --> 00:00:11.740
you just suddenly hit that wall, that very polite

00:00:11.740 --> 00:00:15.359
digital librarian response. I'm sorry, I cannot

00:00:15.359 --> 00:00:17.500
generate content of that nature, that kind of

00:00:17.500 --> 00:00:19.899
thing. Yeah, we've all been there. Exactly. Well,

00:00:20.280 --> 00:00:23.140
that whole era, this sort of heavily curated,

00:00:23.579 --> 00:00:27.179
safe-for-work AI, it looks like it's fundamentally

00:00:27.179 --> 00:00:30.379
ending. And the really big news right now isn't

00:00:30.379 --> 00:00:33.840
just about AI getting smarter or faster. It's

00:00:33.840 --> 00:00:38.619
this pretty dramatic pivot towards allowing adult

00:00:38.619 --> 00:00:42.240
content, NSFW stuff. Right. And that changes

00:00:42.240 --> 00:00:44.200
the game quite a bit, doesn't it? It really does.

00:00:44.240 --> 00:00:45.880
It could change the entire trajectory of this

00:00:45.880 --> 00:00:47.820
tech. So today, we're going to do a deep dive

00:00:47.820 --> 00:00:49.920
into, well, the thinking behind this. We're looking

00:00:49.920 --> 00:00:52.659
at why major players, you know, companies like

00:00:52.659 --> 00:00:55.200
OpenAI, are starting to signal they'll allow

00:00:55.200 --> 00:00:58.380
content they specifically call erotica. Treating

00:00:58.380 --> 00:01:01.399
adult users like, well, adults. Supposedly. So

00:01:01.399 --> 00:01:03.740
our mission today is to unpack why this is happening

00:01:03.740 --> 00:01:06.120
right now. What's the backstory? There's a surprising

00:01:06.120 --> 00:01:08.319
history here. And we also need to look at the

00:01:08.319 --> 00:01:11.739
immediate risks. Things like deep fakes getting

00:01:11.739 --> 00:01:15.939
easier to make, privacy traps. What does this

00:01:15.939 --> 00:01:19.079
actually mean for the average person using these

00:01:19.079 --> 00:01:22.340
tools? OK, so first let's set the stage. Historically,

00:01:22.459 --> 00:01:26.760
these AI companies, they built really clean platforms,

00:01:27.000 --> 00:01:28.980
right? Very buttoned up. Definitely. The goal

00:01:28.980 --> 00:01:31.019
seemed to be getting schools, big companies,

00:01:31.219 --> 00:01:33.540
even governments to adopt them quickly. Exactly.

00:01:33.620 --> 00:01:36.299
And you can't have your chatbot suddenly spouting

00:01:36.299 --> 00:01:38.739
inappropriate stuff if you want that kind of

00:01:38.739 --> 00:01:40.719
institutional trust. That's a deal breaker. So

00:01:40.719 --> 00:01:43.640
they put up these huge digital guardrails. I

00:01:43.640 --> 00:01:46.409
like the analogy of bumper rails in a bowling

00:01:46.409 --> 00:01:49.129
alley. Yeah, that's a good one. Keep the AI firmly

00:01:49.129 --> 00:01:51.109
on the straight and narrow, avoiding anything

00:01:51.109 --> 00:01:54.090
controversial, complex adult themes, hate speech,

00:01:54.250 --> 00:01:58.489
all of it. Sam Altman, the CEO of OpenAI, he

00:01:58.489 --> 00:02:00.989
recently signaled a pretty major shift. To pivot,

00:02:01.170 --> 00:02:03.269
yeah. He basically said the policy is changing.

00:02:03.689 --> 00:02:06.450
They plan to soon treat adult users like adults.

00:02:06.790 --> 00:02:08.490
And the mechanism they're talking about is allowing

00:02:08.490 --> 00:02:12.090
NSFW chats, specifically focusing on erotica

00:02:12.090 --> 00:02:14.509
is the term they use. Right. And the timeline

00:02:14.509 --> 00:02:17.189
seems to be possibly starting around December,

00:02:17.189 --> 00:02:20.310
end of the year. But, and this is the key part,

00:02:20.610 --> 00:02:24.479
it all hangs on user verification. Age gating.

00:02:24.780 --> 00:02:27.159
So let's paint a picture. Right now, if you ask

00:02:27.159 --> 00:02:30.360
an AI like ChatGPT for, say, a passionate story,

00:02:31.060 --> 00:02:34.479
it'll likely block you or just focus on the emotional

00:02:34.479 --> 00:02:36.460
feelings, not the details. It pulls back, yeah.

00:02:36.819 --> 00:02:39.180
But the future version, assuming you've verified

00:02:39.180 --> 00:02:41.580
you're an adult, would potentially just write

00:02:41.580 --> 00:02:44.360
it. The explicit story you asked for, no filter.

00:02:44.500 --> 00:02:46.379
OK, but I have to push back a bit on the official

00:02:46.379 --> 00:02:49.000
reason they're giving. The company line seems

00:02:49.000 --> 00:02:51.080
to be, we've now solved the big safety and mental

00:02:51.080 --> 00:02:54.620
health risks, so it's safe to open this up. Yeah,

00:02:54.680 --> 00:02:57.060
that does sound maybe a little too convenient.

00:02:57.199 --> 00:02:58.979
Our source material is pretty skeptical about

00:02:58.979 --> 00:03:00.819
that. Right. The argument is that this whole

00:03:00.819 --> 00:03:03.139
safety narrative might just be a nice sounding

00:03:03.139 --> 00:03:06.400
excuse, a PR move to cover what's really a business

00:03:06.400 --> 00:03:08.240
necessity. They need to compete. They need the

00:03:08.240 --> 00:03:11.159
market share. So if this whole safety net relies

00:03:11.159 --> 00:03:14.879
on age verification working perfectly, how solid

00:03:14.879 --> 00:03:17.099
is that, really? What's the biggest consequence

00:03:17.099 --> 00:03:19.219
of relying on that? Well, that's the weak link,

00:03:19.259 --> 00:03:22.280
isn't it? Age checks are notoriously easy to bypass.

00:03:22.419 --> 00:03:25.400
Yeah, kids figure it out. VPNs, fake details,

00:03:25.680 --> 00:03:29.379
borrowing accounts. It's not foolproof. Verification

00:03:29.379 --> 00:03:32.680
methods are often easily bypassed. Okay, so we

00:03:32.680 --> 00:03:35.080
need to talk about something that's kind of the

00:03:35.080 --> 00:03:37.439
elephant in the room in tech history. Or maybe

00:03:37.439 --> 00:03:40.639
the secret driver. Which is? The adult entertainment

00:03:40.639 --> 00:03:42.879
industry. It's almost a running joke, but it's true.

00:03:42.939 --> 00:03:45.099
They often pioneer and push new technologies

00:03:45.099 --> 00:03:47.900
years, sometimes decades, before they go mainstream.

00:03:48.479 --> 00:03:50.960
That pattern is undeniable. And it's really critical

00:03:50.960 --> 00:03:54.020
to understanding this AI shift, I think. Remember

00:03:54.020 --> 00:03:57.340
the VHS versus Betamax war back in the 70s? Oh

00:03:57.340 --> 00:03:59.719
yeah, the classic format war. Well, many tech

00:03:59.719 --> 00:04:01.800
historians argue that the adult film industry

00:04:01.800 --> 00:04:04.479
choosing VHS was actually a major factor, maybe

00:04:04.479 --> 00:04:07.979
the key factor, in VHS winning out. It drove

00:04:07.979 --> 00:04:10.870
VCR adoption into homes. And it didn't stop

00:04:10.870 --> 00:04:12.909
there, did it? Think about the early Internet,

00:04:13.370 --> 00:04:15.669
online payments. Right. Long before Amazon made

00:04:15.669 --> 00:04:18.329
ordering books online feel normal, adult websites

00:04:18.329 --> 00:04:21.129
had already figured out how to securely and privately

00:04:21.129 --> 00:04:23.500
charge credit cards online. They had to. They

00:04:23.500 --> 00:04:26.100
also cracked video streaming way ahead of the

00:04:26.100 --> 00:04:29.019
curve, perfecting smooth playback when the rest

00:04:29.019 --> 00:04:31.939
of the web was still clunky. Netflix and YouTube

00:04:31.939 --> 00:04:34.279
basically built on that groundwork. It makes

00:04:34.279 --> 00:04:36.839
sense, though, doesn't it? High demand, big money,

00:04:36.879 --> 00:04:39.920
but also big risks. So they move fast. They innovate

00:04:39.920 --> 00:04:41.920
ruthlessly. They care about what works, what

00:04:41.920 --> 00:04:44.439
makes money, not necessarily about being polite

00:04:44.439 --> 00:04:46.639
or following the rules. They build the roads,

00:04:46.660 --> 00:04:49.160
basically, and then mainstream businesses come

00:04:49.160 --> 00:04:51.199
along and pave them. So how does that connect

00:04:51.199 --> 00:04:53.629
to AI right now? Well, the interesting thing

00:04:53.629 --> 00:04:58.069
is these big mainstream AI models like ChatGPT,

00:04:58.350 --> 00:05:00.209
they're actually falling behind in some ways.

00:05:00.370 --> 00:05:03.360
Behind what? Behind the smaller, often open source,

00:05:03.600 --> 00:05:06.180
uncontrolled AI tools that tech savvy users are

00:05:06.180 --> 00:05:08.639
running on their own computers, like powerful

00:05:08.639 --> 00:05:11.540
gaming PCs. Ah, the models with no filters, the

00:05:11.540 --> 00:05:14.339
so-called jailbroken AIs. Exactly. People are

00:05:14.339 --> 00:05:17.000
already getting unfiltered AI experiences, just

00:05:17.000 --> 00:05:18.939
not from the big companies. And that creates

00:05:18.939 --> 00:05:21.399
massive competitive pressure. So Altman and his

00:05:21.399 --> 00:05:24.060
team, they know this. They see users going elsewhere

00:05:24.060 --> 00:05:26.300
for these less restricted tools. They have to

00:05:26.300 --> 00:05:29.550
adapt or they risk losing their dominance. That's

00:05:29.550 --> 00:05:32.230
the real stake here. Given this history, what's

00:05:32.230 --> 00:05:35.170
the ultimate competitive risk if they don't adapt?

00:05:35.750 --> 00:05:37.730
Well, it's simple. They risk becoming irrelevant.

00:05:37.810 --> 00:05:39.750
They could lose the market entirely if they don't

00:05:39.750 --> 00:05:42.009
provide what users are clearly already seeking

00:05:42.009 --> 00:05:44.970
out elsewhere. It's fascinating, isn't it, how

00:05:44.970 --> 00:05:48.370
even something as complex and, you know, academic

00:05:48.370 --> 00:05:51.610
as AI development ultimately bends to these pretty

00:05:51.610 --> 00:05:55.269
primal market demands. You follow the users or

00:05:55.269 --> 00:06:00.029
you fade away.

00:06:00.250 --> 00:06:04.149
Yeah, okay, but opening this door, it's not without

00:06:04.149 --> 00:06:06.930
consequences. We really have to look at the potential

00:06:06.930 --> 00:06:09.910
fallout here. It feels a bit like shaking up

00:06:09.910 --> 00:06:11.810
a bottle of soda and then taking the cap off.

00:06:11.829 --> 00:06:13.610
Yeah, it's probably gonna spray everywhere. It's

00:06:13.610 --> 00:06:16.069
not gonna be contained neatly. So what's the biggest

00:06:16.069 --> 00:06:17.930
worry? What's the main thing people are pointing

00:06:17.930 --> 00:06:20.509
to? I think the number one concern is the deep

00:06:20.509 --> 00:06:23.110
fake nightmare getting much, much worse. Explain

00:06:23.110 --> 00:06:25.889
that. Right now, the big AI companies have pretty

00:06:25.889 --> 00:06:28.750
strict rules. They'll refuse to generate, say,

00:06:29.430 --> 00:06:31.910
inappropriate images of famous people or politicians.

00:06:32.250 --> 00:06:35.870
But if they relax the rules for explicit text,

00:06:36.370 --> 00:06:38.569
how long before the pressure builds to relax

00:06:38.569 --> 00:06:40.910
them for explicit images, too? That's the fear.

00:06:41.009 --> 00:06:43.230
And the scary part isn't just fake images of

00:06:43.230 --> 00:06:45.569
celebrities anymore. It's about... Normal people.

00:06:45.569 --> 00:06:49.189
Exactly. Imagine AI tools becoming so good, so

00:06:49.189 --> 00:06:51.350
easy to use, that anyone could take a profile

00:06:51.350 --> 00:06:53.949
picture from social media, a teacher, a student,

00:06:53.949 --> 00:06:57.449
your neighbor, and instantly create hyper-realistic,

00:06:57.449 --> 00:07:01.370
fake, potentially career-ruining or deeply embarrassing

00:07:01.370 --> 00:07:03.810
images of them. That's, yeah, that's a societal earthquake

00:07:03.810 --> 00:07:06.290
waiting to happen. Oh, just think about scaling

00:07:06.290 --> 00:07:09.089
that, scaling the creation of deep fakes to potentially

00:07:09.089 --> 00:07:12.470
billions of people, billions of requests. The vulnerability

00:07:12.470 --> 00:07:14.089
is just staggering. And then there's, what, what

00:07:14.089 --> 00:07:15.970
the source calls the human nature problem. We

00:07:15.970 --> 00:07:17.750
just saw this play out with Omegle, right? The

00:07:17.750 --> 00:07:19.670
video chat site that just shut down. Yeah, it

00:07:19.670 --> 00:07:22.209
was a simple concept, random video chats, but

00:07:22.209 --> 00:07:24.790
it became unusable. Why? Because according to

00:07:24.790 --> 00:07:27.029
the founder, it was overwhelmed by users engaging

00:07:27.029 --> 00:07:29.829
in or demanding terrible behavior. Yeah. Bad

00:07:29.829 --> 00:07:32.310
actors ruined it. So the fear is, if you give

00:07:32.310 --> 00:07:34.699
people an unfiltered AI that will do anything

00:07:34.699 --> 00:07:37.500
you ask, a certain percentage of users will immediately

00:07:37.500 --> 00:07:40.439
push the boundaries into awful territory, asking

00:07:40.439 --> 00:07:43.040
for violent content, illegal content, hateful

00:07:43.040 --> 00:07:46.259
stuff. And policing that, once the main guardrails

00:07:46.259 --> 00:07:49.339
are down, becomes incredibly difficult, maybe

00:07:49.339 --> 00:07:51.600
impossible. And there's one more danger mentioned,

00:07:51.839 --> 00:07:56.639
which feels a bit more subtle, maybe? The addiction

00:07:56.639 --> 00:07:59.160
factor. Ah, the perfect partner problem, yeah.

00:07:59.680 --> 00:08:02.240
AI is getting alarmingly good at mimicking empathy,

00:08:02.439 --> 00:08:04.560
at being supportive, remembering everything you

00:08:04.560 --> 00:08:06.939
tell it, never getting tired or angry or, you

00:08:06.939 --> 00:08:09.800
know, messy. Like real people are. Exactly. So

00:08:09.800 --> 00:08:12.040
what happens long term if people, especially

00:08:12.040 --> 00:08:14.759
maybe lonely people, start preferring these perfect

00:08:14.759 --> 00:08:17.800
AI companions over real, complicated, sometimes

00:08:17.800 --> 00:08:20.259
difficult human relationships? You know, I still

00:08:20.259 --> 00:08:22.620
wrestle with prompt drift myself sometimes, like

00:08:22.620 --> 00:08:24.819
getting the AI to stay on track with what I actually

00:08:24.819 --> 00:08:26.759
want. It can be frustrating. So I can almost see

00:08:26.759 --> 00:08:29.699
the appeal of an AI that's just perfectly compliant,

00:08:29.959 --> 00:08:32.000
always agreeable. But if that interaction is

00:08:32.000 --> 00:08:35.100
too perfect, how quickly could that fuel mass

00:08:35.100 --> 00:08:37.860
behavioral shifts away from real human connection?

00:08:38.200 --> 00:08:40.480
It could potentially accelerate the loneliness

00:08:40.480 --> 00:08:43.740
epidemic we're already seeing. The AI might become

00:08:43.740 --> 00:08:46.940
too good, too easy. That perfect nature might

00:08:46.940 --> 00:08:49.860
rapidly accelerate the loneliness epidemic. So

00:08:49.860 --> 00:08:52.500
bringing this down to the practical level. This

00:08:52.500 --> 00:08:55.559
shift changes things for everyday users, and

00:08:55.559 --> 00:08:58.159
especially for parents. Because that age-gating

00:08:58.159 --> 00:09:00.980
thing isn't foolproof. Not even close. We know

00:09:00.980 --> 00:09:02.899
kids are adept at getting around these things.

00:09:03.460 --> 00:09:06.120
VPNs, fake birth dates, maybe just using a parent's

00:09:06.120 --> 00:09:08.080
device that's already logged in. Right, so you

00:09:08.080 --> 00:09:10.120
can't just assume that the AI your kid is using

00:09:10.120 --> 00:09:13.539
for homework is automatically safe anymore. Not

00:09:13.539 --> 00:09:16.179
if there's an easily accessible adult mode. Parents

00:09:16.179 --> 00:09:18.360
really need to be aware. Check the settings on

00:09:18.360 --> 00:09:20.740
these tools. See if there are new modes. Maybe

00:09:20.740 --> 00:09:22.980
password-protect anything that unlocks adult

00:09:22.980 --> 00:09:25.600
content, if the option exists. You have to be

00:09:25.600 --> 00:09:28.000
proactive now. OK. And what about the verification

00:09:28.000 --> 00:09:30.100
process itself? You mentioned needing to verify

00:09:30.100 --> 00:09:32.840
your age. Yes. And this is the critical privacy

00:09:32.840 --> 00:09:35.340
trap we need to flag. How do they verify age

00:09:35.340 --> 00:09:39.340
usually? I guess ID. Like a driver's license?

00:09:39.840 --> 00:09:42.139
Typically, yes. They often require a photo of

00:09:42.139 --> 00:09:44.620
your government-issued ID, a license or passport,

00:09:44.919 --> 00:09:47.399
and sometimes a live selfie to match it. Wow.

00:09:47.740 --> 00:09:50.940
Okay, think about that. You're handing over scans

00:09:50.940 --> 00:09:54.240
of your most sensitive identity documents to

00:09:54.240 --> 00:09:56.580
an AI company just to unlock a chat feature.

00:09:56.580 --> 00:09:58.919
Yeah, that feels like a really big ask. It's a

00:09:58.919 --> 00:10:01.100
huge risk, especially given how often we hear

00:10:01.100 --> 00:10:04.019
about massive data breaches. Do you want your

00:10:04.019 --> 00:10:06.200
passport scan floating around on the dark web

00:10:06.200 --> 00:10:08.860
because you wanted spicier AI stories? That's

00:10:08.860 --> 00:10:11.460
a very good point. So what is the primary personal

00:10:11.460 --> 00:10:14.299
privacy cost here? It's giving up highly sensitive

00:10:14.299 --> 00:10:16.659
government ID data to the tech company, plain

00:10:16.659 --> 00:10:20.309
and simple. And beyond privacy, there's this blurring

00:10:20.309 --> 00:10:22.830
line. You might be chatting with someone, maybe

00:10:22.830 --> 00:10:26.110
in a context that feels intimate or, you know,

00:10:26.470 --> 00:10:28.889
spicy. And you might genuinely not know if it's

00:10:28.889 --> 00:10:30.830
a real person on the other end with all their

00:10:30.830 --> 00:10:33.330
flaws and complexities, or if it's an incredibly

00:10:33.330 --> 00:10:35.830
advanced bot designed to be perfectly engaging.

00:10:36.509 --> 00:10:38.649
That's unsettling. So for you, the listener,

00:10:38.730 --> 00:10:40.769
what's the bottom line? What's the advice? Based

00:10:40.769 --> 00:10:42.470
on the source, it boils down to a few things.

00:10:42.990 --> 00:10:46.399
First... Stay skeptical. Especially about images

00:10:46.399 --> 00:10:48.820
or videos you see online that seem shocking or

00:10:48.820 --> 00:10:51.679
too perfect. Assume deep fakes are possible and

00:10:51.679 --> 00:10:55.289
becoming easier. Okay. Skepticism. Second. Protect

00:10:55.289 --> 00:10:58.070
your data. Really think twice before you upload

00:10:58.070 --> 00:11:00.029
that driver's license or passport just to get

00:11:00.029 --> 00:11:02.169
access to some new feature. Is it worth the risk?

00:11:02.350 --> 00:11:04.710
Right. Protect your ID. And finally, just watch

00:11:04.710 --> 00:11:07.509
this space. This likely move in December, it's

00:11:07.509 --> 00:11:09.509
probably just the first step. This is going to

00:11:09.509 --> 00:11:12.370
keep evolving rapidly. Stay informed.

00:11:12.409 --> 00:11:15.230
So if we try to boil this all

00:11:15.230 --> 00:11:17.169
down, the big takeaway from our sources today

00:11:17.169 --> 00:11:21.210
seems pretty clear. The era of AI being inherently

00:11:21.210 --> 00:11:24.620
safe, curated by default, that's definitively

00:11:24.620 --> 00:11:26.620
ending. Yeah, it's a major turning point. And

00:11:26.620 --> 00:11:28.440
it's crucial to understand it's not really driven

00:11:28.440 --> 00:11:32.159
by newfound safety breakthroughs or pure altruism.

00:11:32.340 --> 00:11:34.480
It's driven by those deep historical patterns

00:11:34.480 --> 00:11:36.820
in technology adoption, the ones involving the

00:11:36.820 --> 00:11:40.320
adult industry, and just raw, intense market

00:11:40.320 --> 00:11:43.500
competition. Right. Which, in turn, opens up

00:11:43.500 --> 00:11:45.460
these significant new risks we talked about.

00:11:45.620 --> 00:11:48.159
Deepfakes, the potential for emotional addiction,

00:11:48.460 --> 00:11:51.539
and these really serious data privacy concerns

00:11:51.539 --> 00:11:54.000
around verification. So maybe a final thought

00:11:54.000 --> 00:11:56.620
for people to chew on. I think the most provocative

00:11:56.620 --> 00:11:59.240
idea here is how this shift really demonstrates

00:11:59.240 --> 00:12:01.379
something fundamental about technology development.

00:12:02.039 --> 00:12:04.860
Even something as sophisticated, as academically

00:12:04.860 --> 00:12:08.860
rooted as AI, ultimately seems to follow, well,

00:12:09.460 --> 00:12:13.100
primal human demand. Market forces. Rather than

00:12:13.100 --> 00:12:15.340
being guided solely by intellectual curiosity

00:12:15.340 --> 00:12:19.039
or lofty ethical principles, demand wins out.

00:12:19.259 --> 00:12:21.320
It often seems to, yeah. It's something to keep

00:12:21.320 --> 00:12:23.840
in mind as this technology continues to reshape

00:12:23.840 --> 00:12:25.679
our world. Definitely something to think about.

00:12:26.039 --> 00:12:28.139
So stay skeptical out there, protect your identity,

00:12:28.220 --> 00:12:30.919
and we'll keep tracking how this unfolds. Thanks

00:12:30.919 --> 00:12:32.779
for joining us for this deep dive. We'll catch

00:12:32.779 --> 00:12:33.220
you next time.
