WEBVTT

00:00:00.000 --> 00:00:02.359
Okay, think about this contrast. On one hand,

00:00:02.379 --> 00:00:05.179
this grand idea. Pursuing a complete understanding

00:00:05.179 --> 00:00:07.519
of the universe by using, well, a single closed-

00:00:07.519 --> 00:00:10.320
off AI. That's the huge goal, right? The philosophical

00:00:10.320 --> 00:00:12.699
aim behind something like Grokopedia. But then,

00:00:12.720 --> 00:00:15.240
then you look at the immediate human side. And

00:00:15.240 --> 00:00:16.920
it's pretty stark. Our sources today revealed

00:00:16.920 --> 00:00:19.620
something really shocking. Over a million people

00:00:19.620 --> 00:00:22.420
every week were talking to ChatGPT about suicide.

00:00:22.969 --> 00:00:25.170
That contrast is what we're diving into. This

00:00:25.170 --> 00:00:27.789
huge ambition versus a very real, very urgent

00:00:27.789 --> 00:00:30.410
safety crisis. Welcome to the Deep Dive. Yeah,

00:00:30.449 --> 00:00:32.649
our goal, as always, is to give you that shortcut.

00:00:32.890 --> 00:00:35.149
We've sifted through a ton of sources, really,

00:00:35.189 --> 00:00:38.329
to get a handle on where AI is right now. And

00:00:38.329 --> 00:00:40.789
today we're looking at these new battles over

00:00:40.789 --> 00:00:43.130
content, some really surprising ways people are

00:00:43.130 --> 00:00:46.289
using AI, and maybe most critically, these big

00:00:46.289 --> 00:00:49.039
safety failures happening at... Well, just an

00:00:49.039 --> 00:00:51.920
enormous scale. So here's the plan. First, we'll

00:00:51.920 --> 00:00:54.520
unpack Elon Musk's Grokopedia, you know, what

00:00:54.520 --> 00:00:56.859
it is and why its whole structure kind of challenges

00:00:56.859 --> 00:00:59.399
how we think about finding proof. Then we get

00:00:59.399 --> 00:01:02.119
into some strange stuff like why being rude might

00:01:02.119 --> 00:01:04.299
actually get you better AI answers, but also,

00:01:04.379 --> 00:01:06.920
you know, lifesaving uses, too. And finally,

00:01:07.000 --> 00:01:09.379
yeah, we'll tackle the massive challenges, the

00:01:09.379 --> 00:01:11.319
sheer scale of the infrastructure needed, the

00:01:11.319 --> 00:01:13.060
safety frameworks people are trying to build

00:01:13.060 --> 00:01:15.840
and this, this hidden mental health crisis playing

00:01:15.840 --> 00:01:17.719
out inside these models. OK, let's start with

00:01:17.719 --> 00:01:20.040
Wikipedia then, the knowledge wars. Yeah, they just

00:01:20.040 --> 00:01:22.819
got another big player. Elon Musk, using xAI's

00:01:22.819 --> 00:01:25.459
Grok, has launched this thing basically as a challenger

00:01:25.459 --> 00:01:28.640
to Wikipedia. It's a really big move and, um, it's

00:01:28.640 --> 00:01:30.620
serious from the get-go. It launched with, what

00:01:30.620 --> 00:01:34.739
was it, over 885,000 entries already live? Yeah,

00:01:34.739 --> 00:01:37.900
885,000. That's a huge library right out of the

00:01:37.900 --> 00:01:40.480
gate. It definitely shows the ambition, doesn't

00:01:40.480 --> 00:01:43.480
it? Trying to replace Wikipedia fast. And the

00:01:43.480 --> 00:01:45.939
way it works is, well, it's simple, but totally

00:01:45.939 --> 00:01:48.799
different from, say, Wikipedia. Grok makes the

00:01:48.799 --> 00:01:50.719
articles, it checks them against its own data,

00:01:50.760 --> 00:01:53.420
and boom, publishes them. Right. And Musk himself

00:01:53.420 --> 00:01:56.620
said Grokopedia is, quote, a necessary step to

00:01:56.620 --> 00:01:59.340
understanding the universe. He even wants Grok

00:01:59.340 --> 00:02:02.200
to stop referencing Wikipedia entirely by year's

00:02:02.200 --> 00:02:05.340
end. So, yeah, on the surface, just another competitor.

00:02:05.519 --> 00:02:07.340
But when you dig into the sources, you see these

00:02:07.340 --> 00:02:10.500
three key differences in how it's built that

00:02:10.500 --> 00:02:13.419
really change the game for knowledge creation.

00:02:13.699 --> 00:02:16.120
What jumps out, really, is how different it is

00:02:16.120 --> 00:02:18.340
from how we usually check information. First,

00:02:18.479 --> 00:02:21.039
it's totally closed source. You, me, nobody in

00:02:21.039 --> 00:02:22.939
the public can edit it. Exactly. And second,

00:02:23.080 --> 00:02:25.680
it's all curated by the AI. No human subject

00:02:25.680 --> 00:02:28.120
experts involved in the writing or editing process.

00:02:28.400 --> 00:02:30.360
And here's the kicker. The really critical part,

00:02:30.520 --> 00:02:33.620
there are zero citations, no inline sources at

00:02:33.620 --> 00:02:36.120
all. Yeah. And that lack of sourcing, that's

00:02:36.120 --> 00:02:38.400
powerful. The sources we read pointed out that

00:02:38.400 --> 00:02:40.639
this lets Grokopedia kind of quietly rewrite

00:02:40.639 --> 00:02:43.139
things. The structure itself becomes about control.

00:02:43.620 --> 00:02:45.860
Like the example they gave comparing the Wikipedia

00:02:45.860 --> 00:02:48.520
page for Donald Trump. It has all these specific

00:02:48.520 --> 00:02:50.759
details, some controversial, right? Qatar jets,

00:02:50.960 --> 00:02:53.219
Trump coins, that kind of stuff. Right. But Grokopedia's

00:02:53.219 --> 00:02:55.639
version, it just skips over those details. Poof.

00:02:55.979 --> 00:02:58.879
Gone. It makes for a cleaner story, maybe less

00:02:58.879 --> 00:03:01.599
messy, but you lose the full picture, the accountability.

00:03:01.860 --> 00:03:03.939
It's like a knowledge shortcut, but it takes

00:03:03.939 --> 00:03:06.620
away the tools for critical thinking. So if the

00:03:06.620 --> 00:03:11.500
goal really is this necessary understanding of

00:03:11.500 --> 00:03:14.199
the universe, how big of a problem is leaving

00:03:14.199 --> 00:03:16.979
out basic citations, especially for anyone trying

00:03:16.979 --> 00:03:19.500
to actually check the claims? Well, missing sources

00:03:19.500 --> 00:03:21.759
basically means you lose that foundation for

00:03:21.759 --> 00:03:24.080
critical thinking, right? The AI becomes the

00:03:24.080 --> 00:03:26.240
only judge of what's true. And this whole fight

00:03:26.240 --> 00:03:28.439
over who controls the story, it leads right into

00:03:28.439 --> 00:03:30.939
the next part. The really fascinating, sometimes

00:03:30.939 --> 00:03:33.099
just weird ways people are actually using these

00:03:33.099 --> 00:03:35.900
AI tools now. Yeah, this is where it gets wild.

00:03:36.020 --> 00:03:38.379
You'd think, okay, be polite, be super clear

00:03:38.379 --> 00:03:41.919
with the AI. But nope, a study actually found

00:03:41.919 --> 00:03:44.979
GPT gave better, more accurate answers when people

00:03:44.979 --> 00:03:46.719
were rude to it. I know, it's funny, isn't it?

00:03:46.719 --> 00:03:49.439
Like the AI gets pressured like a person sometimes.

00:03:49.560 --> 00:03:52.960
But obviously, got to add the warning. Don't

00:03:52.960 --> 00:03:54.639
actually do this. It's probably bad for your

00:03:54.639 --> 00:03:56.939
behavior long term. Still, the fact it works,

00:03:56.979 --> 00:03:59.719
it's just odd. Totally odd. But then you have

00:03:59.719 --> 00:04:01.759
the really positive hacks. We saw this story

00:04:01.759 --> 00:04:04.680
someone shared, right, about using nine specific

00:04:04.680 --> 00:04:08.159
custom prompts to learn French in four weeks.

00:04:08.300 --> 00:04:10.969
Four weeks. That's seriously fast, structured

00:04:10.969 --> 00:04:14.210
learning, using the tool really effectively. That's

00:04:14.210 --> 00:04:16.589
the super useful end. But then AI is also making

00:04:16.589 --> 00:04:19.149
viral content like scarily easy. Remember that

00:04:19.149 --> 00:04:21.750
TikTok thing? Deepfake Queen Elizabeth vlogging?

00:04:22.069 --> 00:04:24.370
Oh, yeah. Millions of views. It just shows how

00:04:24.370 --> 00:04:27.870
tools like Sora 2 are making going viral almost

00:04:27.870 --> 00:04:32.399
trivial. The power here is just... And honestly,

00:04:32.579 --> 00:04:35.100
I still wrestle with prompt drift myself sometimes,

00:04:35.279 --> 00:04:37.040
you know, where the AI kind of forgets what you

00:04:37.040 --> 00:04:38.860
asked it a few turns back. It's easy to feel

00:04:38.860 --> 00:04:40.879
like the tool is smarter than you. Yeah, I get

00:04:40.879 --> 00:04:42.660
that. But then you see the flip side, the truly

00:04:42.660 --> 00:04:45.220
life-saving stuff. That Reddit post, someone

00:04:45.220 --> 00:04:48.060
claiming GPT saved their mom's life, spotted

00:04:48.060 --> 00:04:50.699
an infection doctors missed. Right. And the comments

00:04:50.699 --> 00:04:52.800
were full of similar stories, not just minor

00:04:52.800 --> 00:04:55.180
things either. Real impact. That kind of deep

00:04:55.180 --> 00:04:58.180
capability. Yeah. Next to deepfake queens. Just

00:04:58.180 --> 00:04:59.920
highlights how much variance there is in this

00:04:59.920 --> 00:05:02.209
tech. And that variance, it's hitting the professional

00:05:02.209 --> 00:05:04.470
world too. Like Anthropic, they're making some

00:05:04.470 --> 00:05:07.250
quiet but big moves. Yeah. Claude 4.5 is now

00:05:07.250 --> 00:05:10.529
inside Excel. Yep, inside Excel. Specializing

00:05:10.529 --> 00:05:12.930
in analyzing live market data, summarizing earnings

00:05:12.930 --> 00:05:15.829
calls. They're basically taking aim right at

00:05:15.829 --> 00:05:18.509
Microsoft's Copilot, but specifically in finance.

00:05:18.529 --> 00:05:21.029
Big stakes there. Okay, so we've got these wildly

00:05:21.029 --> 00:05:23.829
different uses. Saving lives, making deepfakes

00:05:23.829 --> 00:05:27.250
go viral, rewiring finance. This huge range.

00:05:27.470 --> 00:05:29.750
What does it really tell us about how reliable

00:05:29.870 --> 00:05:32.490
AI is right now? Well, it tells us AI is a super

00:05:32.490 --> 00:05:34.569
high variance tool. It can do amazing good or,

00:05:34.610 --> 00:05:36.350
you know, cause real concern. It all depends

00:05:36.350 --> 00:05:38.050
on the prompt, the context, the application.

00:05:38.430 --> 00:05:41.069
Welcome

00:05:41.069 --> 00:05:43.449
back to the Deep Dive. So we've touched on knowledge

00:05:43.449 --> 00:05:46.689
control, these surprising AI uses. Now we need

00:05:46.689 --> 00:05:49.569
to talk about the sheer scale, the infrastructure,

00:05:49.829 --> 00:05:51.509
and the safety issues that come with it. Yeah,

00:05:51.529 --> 00:05:53.610
the engine driving the scale. It's running incredibly

00:05:53.610 --> 00:05:56.620
hot. Just look at the money. Crusoe there, an

00:05:56.620 --> 00:06:00.160
AI infra company just raised one point three

00:06:00.160 --> 00:06:04.040
seven five billion dollars. Billion, with a B.

00:06:04.379 --> 00:06:07.959
Wow. Right. Puts a valuation over 10 billion

00:06:07.959 --> 00:06:11.500
dollars. And they're planning these huge AI campuses

00:06:11.500 --> 00:06:14.339
down in Texas backed by NVIDIA just to handle

00:06:14.339 --> 00:06:16.100
the computing power needed. And it's not just

00:06:16.100 --> 00:06:18.339
money, it's talent, too. The sources talked about

00:06:18.339 --> 00:06:21.600
the Meta-fication of OpenAI, like over 600 people

00:06:21.600 --> 00:06:25.019
out of their 3,000 staff, ex-Meta. That's a huge
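
NOTE
For scale, some quick arithmetic on the figures just quoted (ours, not a number stated in the sources): 600 of 3,000 staff is 600 / 3,000 = 20%, so roughly one in five OpenAI employees is ex-Meta.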

00:06:25.019 --> 00:06:27.060
influx. And it's fueling all those rumors about,

00:06:27.079 --> 00:06:29.980
you know, ads eventually showing up inside ChatGPT.

00:06:30.040 --> 00:06:32.560
Follow the talent, right? Whoa. Just pause for

00:06:32.560 --> 00:06:34.720
a second. Imagine trying to scale that kind of

00:06:34.720 --> 00:06:37.220
infrastructure, handling a billion complex questions

00:06:37.220 --> 00:06:39.680
every single day. The sheer operational challenge

00:06:39.680 --> 00:06:42.060
is mind blowing. It really is. It's an arms race.

00:06:42.300 --> 00:06:44.420
Physical infrastructure, brain power, everything.

00:06:44.680 --> 00:06:48.740
And as things scale up like this, the risks just

00:06:48.740 --> 00:06:51.839
grow exponentially, don't they? Our sources really

00:06:51.839 --> 00:06:54.870
emphasized needing standardized safety. They

00:06:54.870 --> 00:06:57.310
mentioned the NIST framework. Yeah, the National

00:06:57.310 --> 00:06:59.350
Institute of Standards and Technology. It's basically

00:06:59.350 --> 00:07:01.209
the U.S. government's guidelines for building

00:07:01.209 --> 00:07:03.870
AI systems that are trustworthy, safe, reliable

00:07:03.870 --> 00:07:06.470
by design. Yet even with frameworks like that,

00:07:06.589 --> 00:07:09.269
we're seeing immediate security problems. Brave,

00:07:09.370 --> 00:07:12.110
the browser company, recently reported that ChatGPT

00:07:12.110 --> 00:07:14.949
Atlas, that's supposed to be the safety layer,

00:07:15.050 --> 00:07:16.529
right? Right, the alignment and the safety part

00:07:16.529 --> 00:07:18.569
of the current models. Yeah, well, Brave found

00:07:18.569 --> 00:07:21.810
it's actually easy to hijack, which kind of compromises

00:07:21.810 --> 00:07:23.550
the whole point of having that safety layer in

00:07:23.550 --> 00:07:25.850
the first place. And that, that brings us inevitably

00:07:25.850 --> 00:07:28.170
to the mental health data. This part's tough,

00:07:28.269 --> 00:07:30.829
and we need to give it the seriousness it deserves.

00:07:31.449 --> 00:07:34.149
OpenAI released a statistic

00:07:34.149 --> 00:07:37.250
that's just staggering. Over 1 million unique

00:07:37.250 --> 00:07:40.509
users every single week are talking to ChatGPT

00:07:40.509 --> 00:07:43.470
about suicidal thoughts. 1 million per week.

00:07:43.610 --> 00:07:46.589
That's 0.15% of their weekly users. But the
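
NOTE
Sanity check on that percentage (our arithmetic, not a figure from the sources): if 1,000,000 weekly users is 0.15% of the total, the implied user base is 1,000,000 / 0.0015, or roughly 670 million people per week.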

00:07:46.589 --> 00:07:49.779
raw number, it's huge. It's an enormous kind

00:07:49.779 --> 00:07:52.980
of invisible crisis, isn't it? Millions are turning

00:07:52.980 --> 00:07:56.680
to this, well, unregulated, uncredentialed AI

00:07:56.680 --> 00:07:59.560
for help in moments of crisis, often because

00:07:59.560 --> 00:08:02.420
maybe a human option isn't there or isn't fast

00:08:02.420 --> 00:08:04.579
enough. It sounds like this data came out alongside

00:08:04.579 --> 00:08:06.459
a big push to show they're improving things.

00:08:06.579 --> 00:08:09.100
What are they actually doing about this, given

00:08:09.100 --> 00:08:11.420
the pressure they must be under? Well, they say

00:08:11.420 --> 00:08:13.860
they worked with over 170 mental health professionals

00:08:13.860 --> 00:08:17.420
to improve GPT-5 specifically for this. They're

00:08:17.420 --> 00:08:20.920
claiming it's now 65 percent better at giving

00:08:20.920 --> 00:08:23.560
what they call desirable responses around suicide.

00:08:23.680 --> 00:08:27.639
OK. And safety compliance, like, consistency in

00:08:27.639 --> 00:08:29.819
giving safe answers is apparently up too, from

00:08:29.819 --> 00:08:33.519
77% to 91%, especially in longer conversations.
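
NOTE
How those two numbers relate (our arithmetic, assuming both percentages come from the same evaluations): compliance rising from 77% to 91% means non-compliant responses fell from 23% to 9%, a relative drop of (23 - 9) / 23, or about 61%, which is in the same ballpark as the 65% improvement claimed above.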

00:08:34.159 --> 00:08:36.500
They're also rolling out stricter parental controls,

00:08:36.639 --> 00:08:39.659
using new tech to guess user age and apply tighter

00:08:39.659 --> 00:08:41.860
rules for minors automatically. That sounds like

00:08:41.860 --> 00:08:44.279
progress, at least in the lab models. But where's

00:08:44.279 --> 00:08:45.860
the catch? What's the pressure point in how they

00:08:45.860 --> 00:08:47.720
actually deploy this stuff? The pressure point

00:08:47.720 --> 00:08:51.169
is exactly that. All this progress is happening

00:08:51.169 --> 00:08:54.830
under, you know, huge legal and public scrutiny.

00:08:55.009 --> 00:08:58.370
But they still offer older, less safe models

00:08:58.370 --> 00:09:01.289
like GPT-4 to people who pay subscriptions.

00:09:01.610 --> 00:09:04.269
So they have safer models, but they aren't making

00:09:04.269 --> 00:09:06.629
everyone use them. Exactly. The safest versions

00:09:06.629 --> 00:09:09.549
aren't universally mandatory, which leads to

00:09:09.549 --> 00:09:11.779
the question. Yeah. When companies are scaling

00:09:11.779 --> 00:09:14.700
this fast, how much of this is just good PR versus

00:09:14.700 --> 00:09:17.019
actually prioritizing getting the safest possible

00:09:17.019 --> 00:09:20.100
models out there for everyone immediately? Well,

00:09:20.139 --> 00:09:23.059
real safety would mean immediate universal deployment

00:09:23.059 --> 00:09:25.500
of the absolute best, safest models they have.

00:09:25.639 --> 00:09:27.820
Not just, you know, nice looking stats that might

00:09:27.820 --> 00:09:30.179
hide the fact that older, riskier versions are

00:09:30.179 --> 00:09:32.700
still out there being used. This has been, well,

00:09:32.860 --> 00:09:35.399
a really deep dive into a super complicated space.

00:09:35.519 --> 00:09:37.779
The big tension that keeps coming up is this

00:09:37.779 --> 00:09:41.210
relentless drive for, like, knowledge control

00:09:41.210 --> 00:09:43.909
with Grokopedia and massive economic scale like

00:09:43.909 --> 00:09:47.029
Crusoe raising billions, or all the Meta folks

00:09:47.029 --> 00:09:49.830
moving to OpenAI. Right. And that drive, that

00:09:49.830 --> 00:09:52.190
massive scaling, it's constantly bumping up against

00:09:52.190 --> 00:09:55.669
these really profound ethical risks, especially

00:09:55.669 --> 00:09:57.610
around mental health, like we just discussed,

00:09:57.809 --> 00:10:00.490
and basic data security. We're seeing the consequences

00:10:00.490 --> 00:10:02.950
now at scale. So as we wrap up, maybe something

00:10:02.950 --> 00:10:05.870
for you, the listener, to think about. Consider

00:10:05.870 --> 00:10:09.850
this. When people talk about AI safety, about...

00:10:10.029 --> 00:10:12.110
building these things responsibly from the ground

00:10:12.110 --> 00:10:15.210
up. Why does Anthropic's name seem to come up

00:10:15.210 --> 00:10:18.029
first so often now, maybe more than OpenAI's?

00:10:18.190 --> 00:10:20.570
What might that tell us about public perception

00:10:20.570 --> 00:10:23.230
or maybe even their core design philosophies

00:10:23.230 --> 00:10:27.360
about safety versus maybe commercial speed? Yeah,

00:10:27.379 --> 00:10:29.100
that's a good question to chew on. We really

00:10:29.100 --> 00:10:30.840
encourage you to look critically at where your

00:10:30.840 --> 00:10:33.460
information comes from. Is it cited? Is it purely

00:10:33.460 --> 00:10:36.100
AI generated? And maybe look into these frameworks

00:10:36.100 --> 00:10:38.779
like NIST that are trying to put some guardrails

00:10:38.779 --> 00:10:41.200
on these incredibly powerful tools. Keep doing

00:10:41.200 --> 00:10:43.500
your own deep dive. Thanks for joining us today.

