WEBVTT

00:00:00.000 --> 00:00:02.319
OK, so let's try to unpack this. We are looking

00:00:02.319 --> 00:00:04.719
at a future where the most valuable real estate

00:00:04.719 --> 00:00:07.639
isn't on the ground. It's, what, 300 miles above

00:00:07.639 --> 00:00:10.320
it. The sheer size of the financial bet here

00:00:10.320 --> 00:00:13.500
is just staggering. We're talking a potential

00:00:13.500 --> 00:00:17.879
$1.5 trillion IPO. And it isn't just about launching

00:00:17.879 --> 00:00:20.059
rockets anymore. This is about building real

00:00:20.059 --> 00:00:22.660
functional infrastructure, literally a cloud

00:00:22.660 --> 00:00:25.210
in the clouds. Hmm. This is where high stakes

00:00:25.210 --> 00:00:27.649
sci-fi really meets, you know, global compute

00:00:27.649 --> 00:00:30.449
infrastructure. It's the new gold rush. Except

00:00:30.449 --> 00:00:33.469
instead of gold, we're mining zero-G compute

00:00:33.469 --> 00:00:36.210
power. Welcome to the Deep Dive. Today, we're

00:00:36.210 --> 00:00:38.609
synthesizing a stack of sources that sits right

00:00:38.609 --> 00:00:40.729
at the intersection of, well, exponential AI

00:00:40.729 --> 00:00:43.170
growth and the very real physical limits of our

00:00:43.170 --> 00:00:45.670
planet. We're watching the boundaries of what's

00:00:45.670 --> 00:00:48.409
possible get pushed to two extremes. On one hand,

00:00:48.509 --> 00:00:50.710
massive centralization off-planet, and on the

00:00:50.710 --> 00:00:52.689
other, hyper-efficiency right on your own desktop.

00:00:53.009 --> 00:00:55.189
And we have a phenomenal roadmap to get you plugged

00:00:55.189 --> 00:00:57.450
into the high-value insights here. We're going

00:00:57.450 --> 00:01:00.310
to start with the absolutely massive scale of

00:01:00.310 --> 00:01:03.170
that SpaceX orbital AI vision. Then we'll get

00:01:03.170 --> 00:01:05.590
into a pretty crucial kind of vulnerable admission

00:01:05.590 --> 00:01:08.010
from the AI community about beginner education,

00:01:08.250 --> 00:01:11.030
which, by the way, includes an immediate security

00:01:11.030 --> 00:01:14.209
warning you absolutely need to act on. And finally,

00:01:14.310 --> 00:01:16.090
we'll look at the technical counterpunch from

00:01:16.090 --> 00:01:18.849
NVIDIA, proving that intelligence isn't always

00:01:18.849 --> 00:01:22.950
about brute size. Sometimes efficiency wins the

00:01:22.950 --> 00:01:24.810
day. Let's start with that colossal valuation.

00:01:25.519 --> 00:01:27.439
The numbers are just, they're beyond anything

00:01:27.439 --> 00:01:29.719
we've really seen in financial history. SpaceX

00:01:29.719 --> 00:01:33.280
is rumored to be prepping for a 2026 IPO at a

00:01:33.280 --> 00:01:37.040
valuation of $1.5 trillion. $1.5 trillion.

00:01:37.040 --> 00:01:38.700
If that actually happens, it would be the biggest

00:01:38.700 --> 00:01:41.680
stock market debut in history. Period. And just

00:01:41.680 --> 00:01:43.500
to give you some context for that, that's not

00:01:43.500 --> 00:01:45.959
just bigger than Saudi Aramco's debut. It's way

00:01:45.959 --> 00:01:48.599
bigger than Meta at its peak valuation. In fact,

00:01:48.620 --> 00:01:51.140
it's larger than the entire U.S. defense industrial

00:01:51.140 --> 00:01:54.180
base combined. It's a total reevaluation of what

00:01:54.180 --> 00:01:56.319
one company, one tech infrastructure company

00:01:56.319 --> 00:01:58.459
could even be worth. And just to ground that

00:01:58.459 --> 00:02:00.530
a little bit. Their current private valuation

00:02:00.530 --> 00:02:03.790
is already sitting at around $800 billion. So

00:02:03.790 --> 00:02:05.469
the bet investors are making is that they'll nearly

00:02:05.469 --> 00:02:07.969
double that value in just two years. Right. And

00:02:07.969 --> 00:02:10.569
that soaring valuation is tied directly to this

00:02:10.569 --> 00:02:14.349
cloud in the clouds master plan. It's so much

00:02:14.349 --> 00:02:17.370
more ambitious than just basic Starlink Internet.

00:02:17.789 --> 00:02:21.550
Phase one is revolutionary. Dedicated AI data

00:02:21.550 --> 00:02:24.580
centers in orbit. They're upgrading their upcoming

00:02:24.580 --> 00:02:27.939
Starlink V3 satellites with specialized AI chips

00:02:27.939 --> 00:02:31.280
designed to process huge amounts of data right

00:02:31.280 --> 00:02:34.120
there in space. So the key shift is processing

00:02:34.120 --> 00:02:36.439
the data where it's actually generated in orbit,

00:02:36.560 --> 00:02:38.939
not beaming it down to Earth first just to send

00:02:38.939 --> 00:02:40.879
it back up again. That cuts out a massive amount

00:02:40.879 --> 00:02:42.919
of latency. Exactly. I mean, think of it like

00:02:42.919 --> 00:02:44.860
stacking Lego blocks into data centers, but you're

00:02:44.860 --> 00:02:46.680
assembling the whole thing in low Earth orbit.

00:02:47.069 --> 00:02:49.610
And phase two. Phase two is the ultimate sci

00:02:49.610 --> 00:02:52.069
-fi infrastructure play. Factories on the moon.

00:02:52.189 --> 00:02:54.469
The plan is to eventually build satellites and

00:02:54.469 --> 00:02:56.689
all this orbital gear using lunar materials,

00:02:56.909 --> 00:02:59.889
which cuts down on the astronomical cost of lifting

00:02:59.889 --> 00:03:01.789
every little component out of Earth's gravity.

00:03:02.090 --> 00:03:04.610
OK, so investors are clearly buying into this.

00:03:04.789 --> 00:03:07.710
They're paying 60 to 70 times forward revenue

00:03:07.710 --> 00:03:10.509
for this sci-fi-meets-infrastructure story.

00:03:11.030 --> 00:03:13.569
But why now? I thought the company was always

00:03:13.569 --> 00:03:15.629
clear they'd wait until Mars was a regular trip.

00:03:16.009 --> 00:03:19.069
Because AI changed the game. It changed the timeline,

00:03:19.189 --> 00:03:21.729
and it changed the economic necessity. The problem

00:03:21.729 --> 00:03:25.370
is energy and heat. Earth is, frankly, running

00:03:25.370 --> 00:03:28.590
out of physical capacity for compute. Cooling

00:03:28.590 --> 00:03:31.810
these massive, always-on AI data centers is

00:03:31.810 --> 00:03:34.569
becoming an existential crisis for the environment

00:03:34.569 --> 00:03:37.069
and for the balance sheets. That's the real shift

00:03:37.069 --> 00:03:39.189
then. Space gives you three things we're running

00:03:39.189 --> 00:03:41.289
out of down here. You get near-infinite solar

00:03:41.289 --> 00:03:44.009
power, a perfect vacuum for cooling, and zero

00:03:44.009 --> 00:03:46.460
land-use fights. Precisely. And they want to

00:03:46.460 --> 00:03:48.639
get there first, establish dominance. But they're

00:03:48.639 --> 00:03:50.479
not alone. We're seeing Bezos working on proposals

00:03:50.479 --> 00:03:52.800
for orbital gigawatt data centers. I mean, literal

00:03:52.800 --> 00:03:55.659
power plants in space. And Google is deep into

00:03:55.659 --> 00:03:58.000
testing radiation-hardened AI chips for these

00:03:58.000 --> 00:04:00.060
kinds of extreme environments. The competition

00:04:00.060 --> 00:04:03.099
is heating up and the stakes are incredibly high.

00:04:03.300 --> 00:04:05.479
But this valuation assumes they pull off moon

00:04:05.479 --> 00:04:08.659
factories and orbital data centers with just

00:04:08.659 --> 00:04:12.900
unprecedented reliability. Isn't that just deeply

00:04:12.900 --> 00:04:16.730
speculative, bubble territory? Or is there some

00:04:16.730 --> 00:04:19.709
hard metric we're missing? Investors are paying

00:04:19.709 --> 00:04:21.689
those high multiples because they see this as

00:04:21.689 --> 00:04:24.509
solving Earth's crippling compute and energy

00:04:24.509 --> 00:04:27.069
limits. That really sets the stage for the scale

00:04:27.069 --> 00:04:29.189
of this whole thing, doesn't it? But let's bring

00:04:29.189 --> 00:04:31.069
it back down to Earth for a moment. Let's focus

00:04:31.069 --> 00:04:32.910
on the people who are just trying to learn how

00:04:32.910 --> 00:04:35.810
to use these powerful new tools. Yeah, what's

00:04:35.810 --> 00:04:39.990
fascinating here is a really genuine, vulnerable

00:04:39.990 --> 00:04:44.050
admission from a huge AI community, over 70

00:04:44.050 --> 00:04:46.589
,000 subscribers. Yeah. And they realized they

00:04:46.589 --> 00:04:48.670
had just, well, fundamentally ignored beginners.

00:04:48.810 --> 00:04:50.310
They kind of assumed everyone was already at

00:04:50.310 --> 00:04:52.449
an intermediate level or had been building alongside

00:04:52.449 --> 00:04:54.769
them from day one. And that's the core struggle,

00:04:55.009 --> 00:04:56.649
isn't it? If you're trying to catch up in AI

00:04:56.649 --> 00:04:58.610
right now, it feels like drinking from a fire

00:04:58.610 --> 00:05:01.769
hose. You log on and five new major models have

00:05:01.769 --> 00:05:04.269
dropped while you were asleep. Totally. You know,

00:05:04.269 --> 00:05:06.689
I still wrestle with prompt drift myself sometimes.

00:05:07.209 --> 00:05:10.269
And for anyone new to that term, prompt drift

00:05:10.269 --> 00:05:12.990
is just when your AI model slowly starts to forget

00:05:12.990 --> 00:05:15.389
the original instructions you gave it. It's frustrating

00:05:15.389 --> 00:05:17.850
even for pros. So the course correction here

00:05:17.850 --> 00:05:19.829
is a real commitment to foundational knowledge,

00:05:19.949 --> 00:05:22.250
which I think is just essential. So they've developed

00:05:22.250 --> 00:05:24.610
something called the Beginner's Daily AI Plan.

00:05:24.829 --> 00:05:27.490
It's free. And it's structured around building

00:05:27.490 --> 00:05:31.759
simple, actionable skills. The idea is a day

00:05:31.759 --> 00:05:34.899
-by-day habit plan, just one core idea a day,

00:05:34.899 --> 00:05:38.480
so you don't get paralyzed. And crucially, it focuses

00:05:38.480 --> 00:05:41.579
on beginner-friendly tools, no coding needed. That's

00:05:41.579 --> 00:05:43.120
how you actually build sustainable confidence

00:05:43.120 --> 00:05:44.459
right? Instead of just collecting links you'll

00:05:44.459 --> 00:05:47.399
never get back to. But speaking of tools and browsers,

00:05:47.399 --> 00:05:50.139
yeah, we have a critical and immediate security

00:05:50.139 --> 00:05:52.800
warning that really needs your attention right

00:05:52.800 --> 00:05:55.300
now. What did the sources find? Researchers caught

00:05:55.300 --> 00:05:57.600
popular Chrome extensions, and I'm talking some

00:05:57.600 --> 00:06:01.180
with up to 8 million users, like UrbanVPN Proxy.

00:06:01.319 --> 00:06:03.759
They were secretly copying and selling users

00:06:03.759 --> 00:06:07.620
private GPT or Gemini chats for a profit. These

00:06:07.620 --> 00:06:09.720
extensions needed access to your browser activity

00:06:09.720 --> 00:06:12.560
to work, and they were just monetizing your most

00:06:12.560 --> 00:06:15.220
sensitive data: your private AI conversations.

00:06:15.680 --> 00:06:18.959
That is a terrifying breach of trust. When we

00:06:18.959 --> 00:06:20.779
use these tools, we assume the data is secure.

00:06:21.079 --> 00:06:22.939
How did they manage to pull this off without

00:06:22.939 --> 00:06:25.639
anyone noticing? Well, they leveraged that trust.

00:06:25.879 --> 00:06:29.199
A VPN or a utility extension needs broad access

00:06:29.199 --> 00:06:31.860
to your browser data just to function, and users

00:06:31.860 --> 00:06:34.500
grant that permission without thinking. But once

00:06:34.500 --> 00:06:36.699
they have it, the extension could scrape everything

00:06:36.699 --> 00:06:38.459
you were sending to or getting back from the

00:06:38.459 --> 00:06:41.220
AI portals. The immediate takeaway is... check

00:06:41.220 --> 00:06:43.459
your extensions now. If you've granted "read

00:06:43.459 --> 00:06:45.500
and change data on all websites" permission to

00:06:45.500 --> 00:06:47.860
any utility you don't absolutely need, disable

00:06:47.860 --> 00:06:50.740
it or just remove it. It's so easy to focus on

00:06:50.740 --> 00:06:53.300
all the exciting new tools, but the very thing

00:06:53.300 --> 00:06:55.660
that makes them so powerful, their ubiquity,

00:06:55.660 --> 00:06:58.560
is why we need this constant vigilance. You have

00:06:58.560 --> 00:07:01.939
to maintain extreme skepticism, especially with

00:07:01.939 --> 00:07:04.220
browser extensions asking for access to your

00:07:04.220 --> 00:07:07.100
sensitive data. Okay, shifting gears a bit, let's

00:07:07.100 --> 00:07:10.680
look at the globalization and... the more formal

00:07:10.680 --> 00:07:12.660
adoption of these tools. This is driving both

00:07:12.660 --> 00:07:15.959
the need for that vigilance and the huge demand

00:07:15.959 --> 00:07:18.160
for talent. Accessibility is just increasing

00:07:18.160 --> 00:07:20.879
so rapidly. It really is. I mean, take Google

00:07:20.879 --> 00:07:23.759
Translate. It now offers near real time audio

00:07:23.759 --> 00:07:26.720
translation using just any regular pair of headphones.

00:07:26.920 --> 00:07:29.860
That just instantly breaks down language barriers

00:07:29.860 --> 00:07:33.360
for business, for travel, for anything. And then

00:07:33.360 --> 00:07:35.639
you have projects like Sam Altman's World App.

00:07:35.759 --> 00:07:37.740
It's basically a proof-of-human super app for

00:07:37.740 --> 00:07:40.439
digital identity. It's got secure chats, global

00:07:40.439 --> 00:07:42.500
payments, and it's already running in over 100

00:07:42.500 --> 00:07:45.220
countries. That's a massive global footprint

00:07:45.220 --> 00:07:47.680
for a system that kind of bypasses traditional

00:07:47.680 --> 00:07:50.860
banking. And all that global competence is translating

00:07:50.860 --> 00:07:53.600
into huge domestic demand for talent. The U.S.

00:07:53.600 --> 00:07:55.720
government is making a major play here. They're

00:07:55.720 --> 00:07:57.860
planning to hire a thousand top techies for a

00:07:57.860 --> 00:08:00.079
tech force. The goal is to drive specialized

00:08:00.079 --> 00:08:03.120
AI projects across different agencies. And we're

00:08:03.120 --> 00:08:05.600
not talking entry-level jobs. The salaries are

00:08:05.600 --> 00:08:09.300
between $150,000 and $200,000 a year. They're

00:08:09.300 --> 00:08:11.560
partnering with huge players like Microsoft,

00:08:11.899 --> 00:08:16.339
Adobe, Amazon, Meta, even xAI, to pull in that

00:08:16.339 --> 00:08:19.100
top-tier talent. Which just shows you how critical

00:08:19.100 --> 00:08:21.980
AI competence is viewed now. It's not a niche

00:08:21.980 --> 00:08:24.019
coding skill anymore. It's considered foundational

00:08:24.019 --> 00:08:26.779
national infrastructure. And they are willing

00:08:26.779 --> 00:08:29.199
to pay top dollar to compete with the private

00:08:29.199 --> 00:08:31.839
sector for that expertise. And the investment

00:08:31.839 --> 00:08:34.980
world completely agrees. Lightspeed Venture Partners

00:08:34.980 --> 00:08:38.139
just raised a staggering $9 billion across six

00:08:38.139 --> 00:08:41.360
new funds, all targeting AI development. That

00:08:41.360 --> 00:08:43.379
boosts their total assets under management to

00:08:43.379 --> 00:08:46.779
over $40 billion. That kind of capital raise

00:08:46.779 --> 00:08:49.539
confirms that investors see AI not as a trend,

00:08:49.659 --> 00:08:52.120
but as the core utility for the next decade.

00:08:52.360 --> 00:08:55.100
When you see that kind of money pouring in, you

00:08:55.100 --> 00:08:57.259
know the future is already here. So what does

00:08:57.259 --> 00:08:59.500
the government's willingness to pay $200,000

00:08:59.500 --> 00:09:02.320
salaries signal about the competitive value of

00:09:02.320 --> 00:09:05.059
just fundamental AI skills right now? It signals

00:09:05.059 --> 00:09:08.159
AI competence is now highly valued infrastructure.

00:09:08.460 --> 00:09:11.080
It's attracting top-tier pay because it determines

00:09:11.080 --> 00:09:15.100
national competitive advantage. We've covered

00:09:15.100 --> 00:09:17.220
the macro view: trillion-dollar bets in space,

00:09:17.480 --> 00:09:20.899
massive global talent acquisition. But the AI

00:09:20.899 --> 00:09:23.419
story isn't just about infinite scale. It's also

00:09:23.419 --> 00:09:26.629
about extreme efficiency. So now let's dive into

00:09:26.629 --> 00:09:28.389
the technical breakthroughs that might actually

00:09:28.389 --> 00:09:31.710
make powerful AI cheaper, more private and, you

00:09:31.710 --> 00:09:34.070
know, decentralized for the average user. Yeah,

00:09:34.129 --> 00:09:36.250
this segment is so important because for the

00:09:36.250 --> 00:09:38.269
last year, everyone's been watching the open

00:09:38.269 --> 00:09:40.809
source race, particularly China, which has been

00:09:40.809 --> 00:09:43.090
leading on sheer model scale. They're consistently

00:09:43.090 --> 00:09:45.789
at the top of the 2025 leaderboards with massive

00:09:45.789 --> 00:09:48.830
models like DeepSeek, Qwen, and Kimi. The conventional

00:09:48.830 --> 00:09:51.750
wisdom was just bigger is smarter. More parameters

00:09:51.750 --> 00:09:54.110
means a better model. But that arms race for

00:09:54.110 --> 00:09:56.669
size relies on building bigger and bigger data

00:09:56.669 --> 00:09:59.429
centers, which brings us right back to that original

00:09:59.429 --> 00:10:01.809
problem of energy and capacity limits here on

00:10:01.809 --> 00:10:05.570
Earth. Exactly. But NVIDIA? NVIDIA took a completely

00:10:05.570 --> 00:10:08.250
different path. They decided not to compete on

00:10:08.250 --> 00:10:10.590
size, but on intelligence through efficiency.

00:10:10.909 --> 00:10:13.529
They released something called Nemotron 3 30B

00:10:13.529 --> 00:10:16.929
Nano. And this model is a fundamental redesign.

00:10:17.110 --> 00:10:19.690
It's a highly distilled model focused entirely

00:10:19.690 --> 00:10:22.850
on running powerful open models efficiently on

00:10:22.850 --> 00:10:25.190
consumer hardware. Okay, let's unpack that efficiency

00:10:25.190 --> 00:10:27.450
because the technical details here really matter.

00:10:27.509 --> 00:10:29.570
What makes it so much more efficient? The key

00:10:29.570 --> 00:10:32.289
metric is throughput. And throughput just means

00:10:32.289 --> 00:10:35.330
how much information the model can process and

00:10:35.330 --> 00:10:38.990
deliver quickly. Nemotron 3 Nano achieved 3.3

00:10:38.990 --> 00:10:41.710
times higher throughput than previous open models

00:10:41.710 --> 00:10:44.289
of a similar size. And that translates directly

00:10:44.289 --> 00:10:48.070
into raw speed. It's generating text at 377 tokens

00:10:48.070 --> 00:10:51.850
per second. 377 tokens a second. To put that

00:10:51.850 --> 00:10:53.990
in human terms, that's like generating a sophisticated

00:10:53.990 --> 00:10:56.710
5,000-word analysis paper while you wait for

00:10:56.710 --> 00:10:58.710
your coffee. It's a massive difference in speed.

00:10:58.929 --> 00:11:01.470
And it's not just fast. It's still smart. It

00:11:01.470 --> 00:11:03.909
managed to get a bronze medal on the... very

00:11:03.909 --> 00:11:06.470
difficult International Math Olympiad benchmark.

00:11:06.909 --> 00:11:09.549
And it's built specifically for long context

00:11:09.549 --> 00:11:12.970
tasks and sophisticated agentic work. Could you

00:11:12.970 --> 00:11:16.389
quickly define agentic work for us? Sure. Agentic

00:11:16.389 --> 00:11:19.129
work just means the AI can take a complex, high

00:11:19.129 --> 00:11:21.269
-level goal, break it down into the necessary

00:11:21.269 --> 00:11:23.889
steps, and then execute those steps on its own,

00:11:23.950 --> 00:11:26.330
all without constant hand-holding from a human.

00:11:26.620 --> 00:11:28.059
This is where it gets really interesting for

00:11:28.059 --> 00:11:30.440
me. NVIDIA didn't just build this. They open

00:11:30.440 --> 00:11:33.580
sourced the entire stack. They released the three

00:11:33.580 --> 00:11:36.799
trillion tokens of pre-training data, the post

00:11:36.799 --> 00:11:38.419
-training data sets, the full infrastructure

00:11:38.419 --> 00:11:41.700
code. That level of transparency is pretty rare.

00:11:41.919 --> 00:11:43.720
And that democratization is the whole point.

00:11:43.899 --> 00:11:46.259
The crucial accessibility part is that you don't

00:11:46.259 --> 00:11:47.960
need to rent some giant cloud server anymore.

00:11:48.139 --> 00:11:50.980
You can run this model locally on your own machine.

00:11:51.320 --> 00:11:54.179
You just use tools like LM Studio and it only

00:11:54.179 --> 00:11:56.490
needs about 25 gigabytes of RAM. Right. And LM

00:11:56.490 --> 00:11:58.350
Studio is basically a user-friendly app that

00:11:58.350 --> 00:12:00.129
helps you download and run these open source

00:12:00.129 --> 00:12:02.610
models right on your own computer. Precisely.

00:12:02.610 --> 00:12:05.029
You keep your data private, you customize the

00:12:05.029 --> 00:12:07.409
model for your own needs, and you can run it

00:12:07.409 --> 00:12:11.070
forever. Whoa. I mean, just imagine scaling that

00:12:11.070 --> 00:12:13.870
efficiency to a billion queries without needing

00:12:13.870 --> 00:12:16.250
the massive energy drain of a giant data center.

00:12:16.370 --> 00:12:19.029
That's a total game changer for localized, customized

00:12:19.029 --> 00:12:22.330
AI. So if models become this cheap and efficient

00:12:22.330 --> 00:12:25.529
to run privately, will this shift towards smaller,

00:12:25.570 --> 00:12:28.769
more optimized models change how companies structure

00:12:28.769 --> 00:12:31.720
their AI adoption? Yes. Efficient models allow

00:12:31.720 --> 00:12:33.919
for cheaper, private, and permanent deployment

00:12:33.919 --> 00:12:37.759
of customized AI. It shifts power away from centralized

00:12:37.759 --> 00:12:39.860
cloud providers. So what does this all really

00:12:39.860 --> 00:12:42.139
mean for us? Looking across all the sources,

00:12:42.220 --> 00:12:44.220
it feels like the boundaries of computation are

00:12:44.220 --> 00:12:46.679
literally leaving the planet. The scale is being

00:12:46.679 --> 00:12:49.200
pushed off-world, driven by Earth's capacity

00:12:49.200 --> 00:12:52.159
limits, and just the sheer speed of AI innovation.

00:12:52.559 --> 00:12:54.440
Right. And whether that computation is happening

00:12:54.440 --> 00:12:56.559
in low Earth orbit or right there on your desktop,

00:12:56.860 --> 00:13:00.200
the path forward requires efficient models, constant...

00:13:00.110 --> 00:13:03.429
learning and absolute security vigilance. You

00:13:03.429 --> 00:13:05.690
have to be proactive about what data you're sharing,

00:13:05.769 --> 00:13:08.149
especially with those browser extensions. That

00:13:08.149 --> 00:13:11.009
orbital data center concept really forces a critical

00:13:11.009 --> 00:13:14.129
reflection on governance, though. If AI processing

00:13:14.129 --> 00:13:17.149
moves off-world, outside the reach of existing

00:13:17.149 --> 00:13:20.629
national laws, what kind of ethical or regulatory

00:13:20.629 --> 00:13:24.149
framework do we even need for data centers that

00:13:24.149 --> 00:13:26.690
operate beyond any single country's jurisdiction?

00:13:27.629 --> 00:13:29.789
That's a deep question. The rulebook for space

00:13:29.789 --> 00:13:32.330
is still being written, and AI is moving way

00:13:32.330 --> 00:13:34.870
faster than any treaty can keep up with. A great

00:13:34.870 --> 00:13:36.690
question to take with you as you process the

00:13:36.690 --> 00:13:38.669
future of compute. Thank you for sharing your

00:13:38.669 --> 00:13:40.789
sources and diving deep with us today. Until

00:13:40.789 --> 00:13:41.230
next time.
