WEBVTT

00:00:00.000 --> 00:00:03.259
The contrast in the AI world this week is, well,

00:00:03.319 --> 00:00:05.740
it's staggering. Yeah. On one side, you have

00:00:05.740 --> 00:00:08.900
these large language models suddenly cutting

00:00:08.900 --> 00:00:11.800
their operational costs in half, just like that.

00:00:11.839 --> 00:00:14.000
Right. And at the exact same time, those same

00:00:14.000 --> 00:00:16.379
models, now more efficient, are helping solve

00:00:16.379 --> 00:00:19.820
conceptual problems, problems that stumped, you

00:00:19.820 --> 00:00:22.219
know, the world's leading quantum computing minds.

00:00:22.399 --> 00:00:25.059
It's kind of wild. It really is. Welcome to the

00:00:25.059 --> 00:00:27.519
Deep Dive. You've shared a stack of sources this

00:00:27.519 --> 00:00:30.399
week, and they really prove AI is getting radically

00:00:30.399 --> 00:00:34.520
smarter and much more economically viable, both

00:00:34.520 --> 00:00:36.619
at once. This feels like the moment the ceiling

00:00:36.619 --> 00:00:39.460
gets raised, you know, on both capacity and capability.

00:00:40.020 --> 00:00:42.280
Our mission today is pretty clear, then. We're

00:00:42.280 --> 00:00:44.420
going to unpack DeepSeek's cost-cutting secret,

00:00:44.640 --> 00:00:47.020
this sparse attention thing, and figure out why

00:00:47.020 --> 00:00:49.840
it actually matters for, say, API prices. Okay.

00:00:50.280 --> 00:00:51.840
Then we're diving into the conceptual frontier.

00:00:51.960 --> 00:00:53.939
We'll look at how GPT-5 actually provided a

00:00:53.939 --> 00:00:56.000
genuine breakthrough in quantum mechanics. That's

00:00:56.000 --> 00:00:58.640
the Aaronson story, right? Fascinating stuff.

00:00:58.960 --> 00:01:01.880
Exactly. And finally, we'll hit those critical

00:01:01.880 --> 00:01:03.920
shifts happening across the industry. Copyright

00:01:03.920 --> 00:01:07.200
fights, talent moving around, lots going on.

00:01:07.359 --> 00:01:09.079
All right, let's do it. Where do we start? The

00:01:09.079 --> 00:01:10.700
efficiency angle. Yeah, let's start with the

00:01:10.700 --> 00:01:13.480
money problem. It's fundamental to scaling AI,

00:01:13.599 --> 00:01:16.000
isn't it? Training, running these huge models.

00:01:16.819 --> 00:01:19.430
It's hitting a wall. Totally. And in the U.S.,

00:01:19.430 --> 00:01:21.810
the major labs, you know, they often just throw

00:01:21.810 --> 00:01:24.409
more hardware at it, more NVIDIA GPUs, especially

00:01:24.409 --> 00:01:28.049
when dealing with long prompts, long conversations.

00:01:28.209 --> 00:01:30.329
That's the brute force method. Yeah. Yeah. More

00:01:30.329 --> 00:01:33.170
compute, more cost. Simple as that. Pretty much.

00:01:33.469 --> 00:01:36.450
But DeepSeek, being based in China, operates

00:01:36.450 --> 00:01:39.230
under different... let's say, resource constraints.

00:01:39.450 --> 00:01:42.209
They were kind of forced to find a smarter way

00:01:42.209 --> 00:01:44.310
to scale up. A different path. And they found

00:01:44.310 --> 00:01:46.650
a massive one. They just dropped this model,

00:01:46.810 --> 00:01:50.709
DeepSeek-V3.2-Exp, and the headline here is huge.

00:01:50.930 --> 00:01:55.290
It cuts their operational costs by 50%. 50% without

00:01:55.290 --> 00:01:57.709
losing performance. That's what they claim. No

00:01:57.709 --> 00:01:59.790
loss in quality compared to their previous models,

00:01:59.930 --> 00:02:02.569
which is, frankly, enormous news for the whole

00:02:02.569 --> 00:02:06.290
field. Okay, that is genuinely massive. So what's

00:02:06.290 --> 00:02:08.310
the technical trick? What lever do they pull

00:02:08.310 --> 00:02:11.449
for that kind of efficiency gain? The secret,

00:02:11.610 --> 00:02:13.590
or maybe the not-so-secret-anymore secret,

00:02:13.770 --> 00:02:16.129
is sparse attention. Sparse attention, okay.

00:02:16.210 --> 00:02:18.729
Yeah. And to get why it's such a big deal, you've

00:02:18.729 --> 00:02:20.810
got to remember how the original LLM architecture

00:02:20.810 --> 00:02:23.710
works, that 2017 transformer model. Right, the

00:02:23.710 --> 00:02:26.120
foundation for most of this stuff. Exactly. It

00:02:26.120 --> 00:02:29.319
uses this process where basically every single

00:02:29.319 --> 00:02:31.800
word looks at every other single word in the

00:02:31.800 --> 00:02:34.199
whole sequence. It compares everything to everything.

00:02:34.460 --> 00:02:36.819
Like that analogy of Lego blocks. Yeah. Every

00:02:36.819 --> 00:02:38.699
block has to talk to every other block to figure

00:02:38.699 --> 00:02:40.360
out the structure. That's a good way to put it.

00:02:40.379 --> 00:02:42.879
And it works fine for short bits of text. But

00:02:42.879 --> 00:02:45.819
imagine a document with, say, 10,000 words.

00:02:46.060 --> 00:02:48.860
Uh -oh. Yeah, that comparison process just explodes.

00:02:49.080 --> 00:02:52.039
It hits what's called quadratic scaling. Basically,

00:02:52.060 --> 00:02:54.360
if you double the length of the input, the computation

00:02:54.360 --> 00:02:58.099
cost goes up by four times. It gets quadratically

00:02:58.099 --> 00:03:00.800
slower and way more expensive as your inputs

00:03:00.800 --> 00:03:03.699
get longer. Which explains why trying to summarize

00:03:03.699 --> 00:03:07.080
a really long document with AI can feel sluggish

00:03:07.080 --> 00:03:09.819
and why it often costs more credits or tokens.

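The doubling-quadruples arithmetic described above can be shown with a toy count. This is purely illustrative, comparing pairwise-comparison counts, not any lab's real profiler numbers, and the sparse `top_k` figure is an invented example value.

```python
# Toy illustration of quadratic scaling in dense self-attention:
# every token is compared against every other token, so the number
# of pairwise comparisons grows with the square of sequence length.

def dense_attention_comparisons(num_tokens: int) -> int:
    """Count token-to-token comparisons in full (dense) attention."""
    return num_tokens * num_tokens

def sparse_attention_comparisons(num_tokens: int, top_k: int = 64) -> int:
    """Hypothetical sparse variant: each token only looks at top_k keys,
    so the cost grows linearly with length instead of quadratically."""
    return num_tokens * top_k

# Doubling the input from 1,000 to 2,000 tokens:
print(dense_attention_comparisons(2_000) // dense_attention_comparisons(1_000))
# -> 4: dense cost quadruples, exactly the "double input, four times the work" problem
print(sparse_attention_comparisons(2_000) // sparse_attention_comparisons(1_000))
# -> 2: a sparse scheme's cost merely doubles
```

That gap between 4x and 2x per doubling is the whole economic argument for sparse attention on long documents.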
00:03:09.979 --> 00:03:13.159
That quadratic thing is the bottleneck. It's

00:03:13.159 --> 00:03:15.340
the economic and technical bottleneck. Absolutely.

00:03:15.340 --> 00:03:18.599
Now, sparse attention, which DeepSeek is using

00:03:18.599 --> 00:03:21.199
here, it completely changes that process. How so?

00:03:21.199 --> 00:03:23.400
Well, instead of comparing every word to every

00:03:23.400 --> 00:03:26.599
other word, it's smarter. It's an LLM process that

00:03:26.599 --> 00:03:29.039
basically picks out only the keywords that actually

00:03:29.039 --> 00:03:31.120
matter for the comparison at that moment. So it

00:03:31.120 --> 00:03:33.620
skips the noise? Yeah, focuses on the relevant

00:03:33.620 --> 00:03:36.759
connections. Precisely. It intelligently ignores

00:03:36.759 --> 00:03:39.409
the irrelevant stuff. And how did they engineer

00:03:39.409 --> 00:03:41.770
that? How do you make the model know which words

00:03:41.770 --> 00:03:44.990
to skip? They built this tiny specialized system.

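The transcript doesn't give DeepSeek's actual algorithm, so here is a generic, illustrative top-k sparse-attention sketch in plain Python: a cheap scoring pass picks the few keys worth attending to, and everything else is skipped. All function names here are invented for illustration.

```python
import math

def cheap_score(query, key):
    """Cheap relevance score between two token vectors (a dot product).
    Stand-in for a small learned scorer; purely illustrative."""
    return sum(q * k for q, k in zip(query, key))

def sparse_attention(queries, keys, values, top_k=2):
    """Illustrative top-k sparse attention: each query attends only to its
    top_k highest-scoring keys instead of all of them. This sketches the
    general idea, not DeepSeek's published implementation."""
    outputs = []
    for q in queries:
        # 1. Cheap scoring pass: rank every key, keep only the top_k.
        scores = [(cheap_score(q, k), i) for i, k in enumerate(keys)]
        kept = sorted(scores, reverse=True)[:top_k]
        # 2. Softmax over the kept scores only; skipped keys cost nothing.
        m = max(s for s, _ in kept)
        exps = [(math.exp(s - m), i) for s, i in kept]
        total = sum(e for e, _ in exps)
        # 3. Weighted sum of only the kept value vectors.
        out = [0.0] * len(values[0])
        for e, i in exps:
            w = e / total
            out = [o + w * v for o, v in zip(out, values[i])]
        outputs.append(out)
    return outputs

# Each of the 4 tokens attends to only top_k=2 keys instead of all 4.
toks = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
result = sparse_attention(toks, toks, toks, top_k=2)
print(len(result), len(result[0]))  # 4 2
```

The design point is the split: a cheap scorer decides *which* comparisons happen, so the expensive attention math runs over top_k items per token rather than all of them.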
00:03:45.030 --> 00:03:46.530
They call it the lightning indexer. Lightning

00:03:46.530 --> 00:03:49.150
indexer, okay. This little indexer helps the

00:03:49.150 --> 00:03:51.530
main model prioritize which connections are important,

00:03:51.650 --> 00:03:53.430
which words need to talk to which other words,

00:03:53.550 --> 00:03:56.050
and that shifts the scaling away from that awful

00:03:56.050 --> 00:03:58.550
quadratic. Towards something more manageable, like

00:03:58.550 --> 00:04:02.009
linear. Closer to linear, yeah. Much, much more

00:04:02.009 --> 00:04:03.810
manageable. And that's what suddenly makes really

00:04:03.810 --> 00:04:06.210
long context economically viable. And they published

00:04:06.210 --> 00:04:08.750
benchmarks showing it holds up like performance

00:04:08.750 --> 00:04:11.550
is still good compared to their old dense model

00:04:11.550 --> 00:04:14.990
V3.1-Terminus. Yep. They claim no performance

00:04:14.990 --> 00:04:18.970
drop, but half the compute cost. OK, but you

00:04:18.970 --> 00:04:20.709
mentioned the original Transformers from 2017.

00:04:21.069 --> 00:04:23.759
Is sparse attention brand new science? That's

00:04:23.759 --> 00:04:26.319
the interesting part. Not really. OpenAI was

00:04:26.319 --> 00:04:28.759
actually pioneering sparse transformers back

00:04:28.759 --> 00:04:32.019
in 2019. And Google released something similar

00:04:32.019 --> 00:04:34.600
called Reformer in 2020. So the idea has been

00:04:34.600 --> 00:04:37.160
around for years. Exactly. But DeepSeek seems

00:04:37.160 --> 00:04:40.379
to be the first major lab to really openly publish

00:04:40.379 --> 00:04:42.819
their specific implementation, the results, the

00:04:42.819 --> 00:04:45.259
cost savings. They kind of put it all out there,

00:04:45.319 --> 00:04:47.399
made the tech public in a way. And the impact

00:04:47.399 --> 00:04:50.170
is already happening. Oh, yeah. API prices for

00:04:50.170 --> 00:04:52.269
handling long inputs, they've already been slashed

00:04:52.269 --> 00:04:55.370
up to 50% in some cases. And this matters for

00:04:55.370 --> 00:04:57.810
you, the listener, because systems like, say,

00:04:57.870 --> 00:05:00.949
ChatGPT, they often still reprocess all the previous

00:05:00.949 --> 00:05:02.850
words in your conversation every time you add

00:05:02.850 --> 00:05:05.149
something new. Right, which is why those chat

00:05:05.149 --> 00:05:07.009
sessions can feel like they're slowing down the

00:05:07.009 --> 00:05:09.329
longer they go on. It's that silent tax on interaction

00:05:09.329 --> 00:05:12.170
length. Sparse attention helps fix that. Okay,

00:05:12.230 --> 00:05:14.569
so here's the probing question then. If this

00:05:14.569 --> 00:05:16.870
tech is so good at cutting costs, and it's not

00:05:16.870 --> 00:05:20.589
exactly brand new, Why has OpenAI been so quiet

00:05:20.589 --> 00:05:23.610
about using it, or if they're using it, in GPT-4

00:05:23.610 --> 00:05:26.189
or GPT-5? That's a really good question.

00:05:26.230 --> 00:05:29.509
And it hints at maybe some economic friction,

00:05:29.629 --> 00:05:32.470
you could say. While the tech is proven, fully

00:05:32.470 --> 00:05:35.310
adopting it might mean rethinking or even partially

00:05:35.310 --> 00:05:38.310
abandoning massive investments already made in

00:05:38.310 --> 00:05:40.889
GPU clusters designed for the old, dense way

00:05:40.889 --> 00:05:43.009
of doing things. So yeah, efficiency remains

00:05:43.009 --> 00:05:45.149
this kind of hidden scaling constraint for all

00:05:45.149 --> 00:05:47.129
the big players. Okay, so we've tackled the money

00:05:47.129 --> 00:05:49.490
problem, the efficiency leap. But what about

00:05:49.490 --> 00:05:52.149
the actual brainpower, the capability side? Right.

00:05:52.230 --> 00:05:54.449
Let's talk about Scott Aaronson, the quantum computing

00:05:54.449 --> 00:05:56.790
legend. We're shifting gears now from saving

00:05:56.790 --> 00:06:00.069
money to actual conceptual breakthroughs. Yeah,

00:06:00.110 --> 00:06:01.589
this is where it gets really, really interesting

00:06:01.589 --> 00:06:03.990
for me. Aaronson was deep into this notoriously

00:06:03.990 --> 00:06:06.970
tricky quantum proof. It concerns something called

00:06:06.970 --> 00:06:12.569
QMA. QMA. OK, for those of us not deep in complexity

00:06:12.569 --> 00:06:16.100
theory. What is that in simple terms? Simple

00:06:16.100 --> 00:06:19.339
terms. OK, think of it like this. You know, NP

00:06:19.339 --> 00:06:21.879
problems, problems where if someone gives you

00:06:21.879 --> 00:06:23.920
a solution, you can check it quickly on a regular

00:06:23.920 --> 00:06:27.240
computer. QMA or quantum Merlin Arthur is basically

00:06:27.240 --> 00:06:30.040
the quantum version of that. It deals with proofs

00:06:30.040 --> 00:06:31.939
that need a quantum computer to verify quickly.

00:06:32.139 --> 00:06:35.180
Got it. Quantum NP. Kind of. Kind of. Yeah. And

00:06:35.180 --> 00:06:38.769
Aaronson was stuck on a specific part. He was

00:06:38.769 --> 00:06:40.769
trying to prove that a new method he found for

00:06:40.769 --> 00:06:42.949
amplifying the certainty of these QMA proofs

00:06:42.949 --> 00:06:45.910
was truly optimal. He couldn't quite nail down

00:06:45.910 --> 00:06:48.310
the perfect mathematical verifier function. So

00:06:48.310 --> 00:06:50.230
a technical roadblock in his own mathematical

00:06:50.230 --> 00:06:54.069
work. A human stuck point. Exactly. And so he

00:06:54.069 --> 00:06:57.029
turned to GPT-5 Thinking for help. Just ask

00:06:57.029 --> 00:06:59.670
the AI. Yep. And what happened next is pretty

00:06:59.670 --> 00:07:01.769
remarkable, especially given who Aaronson is.

00:07:01.930 --> 00:07:04.209
Apparently, the model gave a couple of unhelpful

00:07:04.209 --> 00:07:06.930
ideas at first. Okay. Typical AI sometimes. Right.

00:07:07.009 --> 00:07:09.470
But Aaronson gave it some course correction, nudged

00:07:09.470 --> 00:07:12.389
it a bit, and then GPT-5 suggested a specific

00:07:12.389 --> 00:07:14.350
mathematical function. And this function? Mm-hmm.

00:07:14.350 --> 00:07:17.649
It worked. It broke his mental block completely.

00:07:17.829 --> 00:07:20.230
And the key thing here is Aaronson's reaction,

00:07:20.350 --> 00:07:22.389
right? What did he say about it? He called the

00:07:22.389 --> 00:07:25.769
suggestion non-obvious and genuinely useful.

00:07:25.970 --> 00:07:28.529
Not obvious. Yeah. And he even added this quote,

00:07:28.610 --> 00:07:30.930
which I think just says it all. If a grad student

00:07:30.930 --> 00:07:33.009
had given it to me, I'd have called it clever.

00:07:33.269 --> 00:07:36.939
Wow. OK, clever. Yeah. From Scott Aaronson about

00:07:36.939 --> 00:07:39.560
an AI suggestion on quantum complexity theory.

00:07:39.740 --> 00:07:42.180
Right. That means the AI didn't just compute

00:07:42.180 --> 00:07:44.560
something he asked for. It didn't just run through

00:07:44.560 --> 00:07:48.279
known patterns. It actually found a novel step,

00:07:48.399 --> 00:07:51.420
an elegant shortcut, maybe something a top human

00:07:51.420 --> 00:07:53.379
mind working on the problem had missed. Whoa.

00:07:53.839 --> 00:07:57.240
I mean, just imagine that AI actually contributing

00:07:57.240 --> 00:08:00.040
a genuinely clever step to advanced scientific

00:08:00.040 --> 00:08:02.990
theory. That feels... different. It feels very

00:08:02.990 --> 00:08:04.269
different. This is probably the first really

00:08:04.269 --> 00:08:06.149
clear example we've seen of this kind of thing,

00:08:06.189 --> 00:08:07.550
but you can bet it's going to be the first of

00:08:07.550 --> 00:08:11.370
thousands. It points towards true AI co-creation

00:08:11.370 --> 00:08:14.449
in science. So does a story like this

00:08:14.449 --> 00:08:17.569
confirm that AI is moving beyond just simulation,

00:08:17.889 --> 00:08:21.129
beyond pattern matching, towards genuine co-creation

00:08:21.129 --> 00:08:24.029
in really high level research? I think it strongly

00:08:24.029 --> 00:08:26.649
suggests that, yeah, AI is becoming an invaluable

00:08:26.649 --> 00:08:29.829
partner for generating novel, non-obvious research

00:08:29.829 --> 00:08:33.129
insights. Okay, let's pivot then. Broader industry

00:08:33.129 --> 00:08:37.370
shift. Application adoption. Maybe some friction

00:08:37.370 --> 00:08:40.870
points. Quickfire. Let's do it. First up, institutional

00:08:40.870 --> 00:08:43.330
adoption seems to be accelerating like crazy.

00:08:43.570 --> 00:08:47.370
USC just gave full access to ChatGPT to all students,

00:08:47.509 --> 00:08:50.870
staff, faculty. Big deal. Apparently $1.5 million

00:08:50.870 --> 00:08:54.159
for one year. That's scale. That is huge scale.

00:08:54.340 --> 00:08:57.000
And while adoption like that speeds up, the legal

00:08:57.000 --> 00:08:59.460
fights are also heating up. No surprise there,

00:08:59.539 --> 00:09:02.500
really, especially around content. OpenAI launched

00:09:02.500 --> 00:09:05.080
Sora 2. Yeah, the video app. Yeah. Makes those

00:09:05.080 --> 00:09:07.299
short, like 10-second clips. They look pretty

00:09:07.299 --> 00:09:09.440
amazing. They do. But here's the controversy,

00:09:09.539 --> 00:09:11.659
the big friction point. Sources are saying it

00:09:11.659 --> 00:09:14.360
uses copyrighted material unless the owners actively

00:09:14.360 --> 00:09:17.419
opt out. Ah, the opt-out model. That flips the

00:09:17.419 --> 00:09:19.480
burden entirely, doesn't it? Completely. It puts

00:09:19.480 --> 00:09:21.820
the massive job of protecting content onto the

00:09:21.820 --> 00:09:24.080
creators, not the AI company scraping the data.

00:09:24.159 --> 00:09:26.340
You're kind of presumed in unless you fight to

00:09:26.340 --> 00:09:28.519
get out. Guilty until proven innocent, almost.

00:09:29.259 --> 00:09:32.419
You can see it that way. And it forces big media

00:09:32.419 --> 00:09:34.980
companies to constantly police what the AI is

00:09:34.980 --> 00:09:37.519
learning from. We already saw Disney apparently

00:09:37.519 --> 00:09:40.059
bail on that kind of arrangement. Yeah, I saw

00:09:40.059 --> 00:09:42.519
that. This feels like it's going to define copyright

00:09:42.519 --> 00:09:45.340
law for the next decade. I think so, too. OK,

00:09:45.500 --> 00:09:48.559
shifting from consumption to creation tools.

00:09:49.019 --> 00:09:52.000
Anthropic put out an important paper making a

00:09:52.000 --> 00:09:54.919
distinction between context engineering and prompt

00:09:54.919 --> 00:09:57.419
engineering. Okay. That sounds useful. We hear

00:09:57.419 --> 00:09:59.840
prompt engineering all the time, but context

00:09:59.840 --> 00:10:01.639
engineering, what's the difference they're highlighting?

00:10:01.860 --> 00:10:05.059
Good question. Because honestly, I still wrestle

00:10:05.059 --> 00:10:07.610
with prompt drift myself sometimes. Yeah, me

00:10:07.610 --> 00:10:11.049
too. You spend ages crafting this perfect persona

00:10:11.049 --> 00:10:14.409
or instruction set for the AI. And then like

00:10:14.409 --> 00:10:16.090
two turns later, it's completely forgotten and

00:10:16.090 --> 00:10:18.750
gone off track. Is context engineering meant

00:10:18.750 --> 00:10:21.230
to fix that? That's basically the pain point

00:10:21.230 --> 00:10:23.009
it addresses, yeah. Yeah. So prompt engineering

00:10:23.009 --> 00:10:25.889
is just what you ask the model, the actual question

00:10:25.889 --> 00:10:28.409
or command. Context engineering is about building

00:10:28.409 --> 00:10:30.970
the stuff around the prompt before you even ask.

00:10:31.389 --> 00:10:33.350
It's like setting up the instruction manual,

00:10:33.470 --> 00:10:37.389
the environment, the guardrails for the AI. Ensuring

00:10:37.389 --> 00:10:40.509
the model really understands its role, its constraints,

00:10:40.730 --> 00:10:43.429
the background, before it even starts thinking

00:10:43.429 --> 00:10:45.470
about your specific question. So it's like setting

00:10:45.470 --> 00:10:48.190
the stage properly, not just delivering the lines.

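The stage-versus-lines distinction above can be pictured in a few lines of code. This is a hedged, hypothetical sketch: none of these function names come from Anthropic's paper; they only illustrate separating a stable, pre-built context from the per-turn prompt.

```python
# Hypothetical sketch of context engineering vs. prompt engineering.
# All names here are invented for illustration.

def build_context(role: str, constraints: list[str], background: str) -> str:
    """Context engineering: assemble the stable 'stage' once --
    who the model is, what rules it follows, what it already knows."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"You are {role}.\nRules:\n{rules}\nBackground:\n{background}"

def build_request(context: str, prompt: str) -> str:
    """Prompt engineering is just the final line: the actual question,
    asked inside the pre-built context."""
    return f"{context}\n\nUser question: {prompt}"

# The context is built once and reused across turns, so the role and
# constraints don't drift away as the conversation grows.
ctx = build_context(
    role="a careful technical editor",
    constraints=["cite sources", "flag uncertainty"],
    background="The user is revising a podcast transcript.",
)
print(build_request(ctx, "Tighten this paragraph."))
```

The point of the split is that only the last line changes per turn; everything establishing the model's role and guardrails is engineered once, up front.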
00:10:48.389 --> 00:10:51.309
Exactly. And Anthropic argues pretty convincingly

00:10:51.309 --> 00:10:54.090
that it leads to much more consistent, reliable

00:10:54.090 --> 00:10:59.350
AI behavior. Less drift. Makes sense. Okay, what

00:10:59.350 --> 00:11:02.029
else? Infrastructure. Yeah, quick one. Google

00:11:02.029 --> 00:11:05.009
Drive is now using AI, apparently, to spot ransomware

00:11:05.009 --> 00:11:07.509
attacks and help users quickly restore files

00:11:07.509 --> 00:11:09.950
that got scrambled. Oh, that's practical. Security

00:11:09.950 --> 00:11:12.330
becoming a core AI application. Good to see.

00:11:12.529 --> 00:11:14.370
Definitely. And then there's the talent shift.

00:11:14.590 --> 00:11:16.590
This was noteworthy. Sources mentioned about

00:11:16.590 --> 00:11:19.450
20 top AI researchers have left the big established

00:11:19.450 --> 00:11:23.889
labs, OpenAI, Google, Meta. 20? Wow, where'd

00:11:23.889 --> 00:11:26.200
they go? To start a new company together. A new

00:11:26.200 --> 00:11:29.100
AI venture. That is a massive brain drain from

00:11:29.100 --> 00:11:32.100
the incumbents. 20 top people. Yeah. It shows

00:11:32.100 --> 00:11:34.120
incredible confidence in the market for specialized

00:11:34.120 --> 00:11:36.740
new ventures, right? Especially considering how

00:11:36.740 --> 00:11:39.080
insanely expensive it is to start a frontier

00:11:39.080 --> 00:11:42.299
AI lab from scratch these days. For sure. Funding

00:11:42.299 --> 00:11:44.659
spotlight: Assort Health. They secured $76 million

00:11:44.659 --> 00:11:47.159
for their voice AI platform. The scale numbers

00:11:47.159 --> 00:11:50.059
were impressive: 14 languages, 42 million

00:11:47.159 --> 00:11:50.059
patients handled, eightfold revenue growth. Shows health

00:11:52.980 --> 00:11:55.960
AI is scaling fast. Voice AI and healthcare,

00:11:56.120 --> 00:11:59.759
big area. And finally, tool update. Right. Cursor,

00:11:59.759 --> 00:12:02.379
the AI-first code editor, now supports controlling

00:12:02.379 --> 00:12:04.399
your browser, grabbing screenshots, debugging

00:12:04.399 --> 00:12:06.980
client-side issues, all integrated with Claude

00:12:06.980 --> 00:12:09.580
Sonnet 4.5, apparently, which is making the

00:12:09.580 --> 00:12:11.740
developer workflow smoother. Interesting. Tools

00:12:11.740 --> 00:12:14.159
getting more integrated. So that talent exodus.

00:12:14.399 --> 00:12:16.519
Yeah. It does raise a question, doesn't it? What

00:12:16.519 --> 00:12:18.919
does this movement of top talent away from the

00:12:18.919 --> 00:12:21.759
big labs really suggest about where future AI

00:12:21.759 --> 00:12:23.980
innovation might be heading? Yeah, that's a good

00:12:23.980 --> 00:12:25.899
point. I guess it suggests that frontier innovation

00:12:25.899 --> 00:12:29.500
is increasingly driven by these focused, specialized

00:12:29.500 --> 00:12:32.620
startups, maybe. More agility, perhaps. Seems

00:12:32.620 --> 00:12:34.700
plausible. Yeah. Okay, let's try and recap the

00:12:34.700 --> 00:12:37.639
big ideas from this deep dive. Sounds good. So

00:12:37.639 --> 00:12:40.720
we saw two really massive threads changing the

00:12:40.720 --> 00:12:44.389
AI landscape just this week, it feels like. First,

00:12:44.389 --> 00:12:47.029
the whole economic reality shifted. DeepSeek

00:12:47.029 --> 00:12:49.070
basically proved you could potentially halve your

00:12:49.070 --> 00:12:52.009
operational costs using these sparse attention

00:12:52.009 --> 00:12:54.669
techniques, like their lightning indexer. It tackles

00:12:54.669 --> 00:12:56.830
that critical quadratic scaling problem. Right,

00:12:56.830 --> 00:12:59.250
making AI cheaper and feasible for much larger

00:12:59.250 --> 00:13:02.690
tasks. And second, the conceptual ceiling got pushed

00:13:02.690 --> 00:13:06.470
way higher. Yeah, the Aaronson story. GPT-5 providing

00:13:06.470 --> 00:13:09.490
that genuinely clever, non-obvious step for

00:13:09.490 --> 00:13:12.129
a quantum proof. That confirms we're really moving

00:13:12.129 --> 00:13:15.070
into an era of AI co-authorship, even in really

00:13:15.070 --> 00:13:17.509
hard science. Definitely. And then layered on

00:13:17.509 --> 00:13:19.720
top of that, you have all these... critical industry

00:13:19.720 --> 00:13:21.940
shifts happening fast. Yeah, the rapid adoption,

00:13:22.059 --> 00:13:24.600
like at USC, the looming copyright battles, especially

00:13:24.600 --> 00:13:28.039
over that Sora 2 opt-out model. Right, and that

00:13:28.039 --> 00:13:30.620
talent fragmentation, top researchers leaving

00:13:30.620 --> 00:13:33.679
big labs for nimbler startups. So the whole AI

00:13:33.679 --> 00:13:35.820
landscape, it feels like it's simultaneously

00:13:35.820 --> 00:13:38.240
maturing, becoming more efficient, more stable,

00:13:38.320 --> 00:13:41.620
and accelerating, getting dramatically more capable,

00:13:41.919 --> 00:13:44.340
tackling harder problems. It's kind of doing

00:13:44.340 --> 00:13:46.559
both at once. Yeah, that's a good way to put

00:13:46.559 --> 00:13:49.419
it, maturing and accelerating. So for you listening,

00:13:49.580 --> 00:13:51.919
maybe explore one of those quick hits we mentioned.

00:13:52.019 --> 00:13:54.679
Good idea. You can look into the details of Anthropic's

00:13:54.679 --> 00:13:56.820
context engineering paper, see if it helps your

00:13:56.820 --> 00:14:00.159
own AI interactions. Or maybe dig into the implications

00:14:00.159 --> 00:14:03.419
of that Sora 2 opt-out model for copyright and

00:14:03.419 --> 00:14:06.559
creative work. Yeah, lots to dig into. But here's

00:14:06.559 --> 00:14:08.600
the final, maybe provocative thought we want

00:14:08.600 --> 00:14:12.120
to leave you with. Okay. If AI can demonstrably

00:14:12.120 --> 00:14:15.120
help break conceptual roadblocks in something

00:14:15.120 --> 00:14:19.110
as complex as quantum theory today. What previously

00:14:19.110 --> 00:14:21.870
intractable scientific problem, maybe in medicine

00:14:21.870 --> 00:14:25.210
or material science or fundamental physics, what's

00:14:25.210 --> 00:14:27.809
it going to tackle next year? That is something

00:14:27.809 --> 00:14:29.909
to think about. Where does this capability actually

00:14:29.909 --> 00:14:31.870
lead us? A big question. Well, thank you for

00:14:31.870 --> 00:14:33.870
sharing your sources with us for this deep dive.

00:14:34.090 --> 00:14:34.769
Always fascinating.
