WEBVTT

00:00:00.000 --> 00:00:01.940
Okay, so just imagine this for a second. You're

00:00:01.940 --> 00:00:06.019
running a pharma startup, right? And you're using

00:00:06.019 --> 00:00:08.699
one of these next-gen AI models, let's say,

00:00:08.699 --> 00:00:10.839
I don't know, something way past GPT-5. Right.

00:00:11.019 --> 00:00:14.199
And this thing, it's not just, you know, drafting

00:00:14.199 --> 00:00:16.399
your emails. It's running millions of simulations.

00:00:16.519 --> 00:00:19.460
It hypothesizes a new chemical compound, and

00:00:19.460 --> 00:00:22.280
it basically discovers a cure for a rare disease.

00:00:22.640 --> 00:00:26.199
Wow. You stand to make billions, but then...

00:00:26.800 --> 00:00:29.940
You get this tap on the shoulder. The AI company

00:00:29.940 --> 00:00:32.640
doesn't want your $20 a month subscription fee.

00:00:32.759 --> 00:00:35.320
They want a percentage. A cut of the cure. Cut

00:00:35.320 --> 00:00:38.140
of the cure. They want royalties. Because in

00:00:38.140 --> 00:00:40.579
their view, without them, you're just a scientist

00:00:40.579 --> 00:00:42.920
with a hunch. And that's the paradigm shift we're

00:00:42.920 --> 00:00:44.880
really waking up to this week. It creates such

00:00:44.880 --> 00:00:47.100
a wild precedent, doesn't it? I mean, we are

00:00:47.100 --> 00:00:50.520
moving from AI as a tool, like a really, really

00:00:50.520 --> 00:00:53.679
smart calculator, to AI as a partner. A stakeholder.

00:00:53.960 --> 00:00:55.960
A stakeholder. And if your stakeholders are digital,

00:00:56.179 --> 00:00:58.880
the economics of everything are about to get

00:00:58.880 --> 00:01:01.899
very weird. Welcome to the Deep Dive. It is Sunday,

00:01:02.119 --> 00:01:05.900
January 25th, 2026. Today, we're unpacking a

00:01:05.900 --> 00:01:07.739
stack of sources that suggest the rules of the

00:01:07.739 --> 00:01:10.859
game are changing and fast. We're not just talking

00:01:10.859 --> 00:01:12.819
about better chatbots anymore. Not at all. We

00:01:12.819 --> 00:01:16.299
are looking at a fundamental shift in business

00:01:16.299 --> 00:01:19.760
models, a huge leap in how AI perceives physical

00:01:19.760 --> 00:01:23.359
reality, and honestly, a bit of a crisis when

00:01:23.359 --> 00:01:25.939
it comes to trusting what these things even tell

00:01:25.939 --> 00:01:27.879
us. We have a lot to get through. We're going

00:01:27.879 --> 00:01:30.340
to start with that OpenAI profit-sharing news,

00:01:30.519 --> 00:01:32.280
which I think is probably the biggest economic

00:01:32.280 --> 00:01:34.780
story in tech so far this year. Yeah, for sure.

00:01:34.920 --> 00:01:36.640
Then we'll look at what people are calling the

00:01:36.640 --> 00:01:39.659
GPT-2 moment for world models, basically AI

00:01:39.659 --> 00:01:41.900
that can build physics aware realities from scratch.

00:01:42.159 --> 00:01:45.379
Which is just mind blowing tech. It is. But we

00:01:45.379 --> 00:01:47.959
also have to look at the messy side of it. We've

00:01:47.959 --> 00:01:50.739
got reports of Google's AI prioritizing YouTube

00:01:50.739 --> 00:01:54.019
over, say, the Mayo Clinic for medical advice.

00:01:54.200 --> 00:01:56.569
Yikes. And some internal drama between

00:01:56.569 --> 00:01:58.609
DeepMind and OpenAI that's starting to spill out

00:01:58.609 --> 00:02:00.969
into the public. And then finally, we'll ground all

00:02:00.969 --> 00:02:03.590
of it with some practical tools. Google has

00:02:03.590 --> 00:02:05.329
a new breakdown of prompt engineering that's

00:02:05.329 --> 00:02:07.689
actually useful, and Apple is, believe it or

00:02:07.689 --> 00:02:10.960
not, finally getting serious. So let's start there.

00:02:11.000 --> 00:02:13.280
Let's unpack this OpenAI news. We're seeing

00:02:13.280 --> 00:02:16.840
reports surface about a major strategy shift,

00:02:16.860 --> 00:02:19.120
specifically coming from some comments by Sarah

00:02:19.120 --> 00:02:22.560
Friar, the CFO. What exactly is being proposed

00:02:22.560 --> 00:02:25.139
here? So the gist is that the flat fee model,

00:02:25.280 --> 00:02:27.979
the whole pay 20 bucks a month and use the bot,

00:02:28.180 --> 00:02:31.960
is becoming kind of outdated for high-level enterprise

00:02:31.960 --> 00:02:35.360
use. OpenAI is exploring a licensing model.

00:02:36.019 --> 00:02:38.520
And the argument, their argument, is that modern

00:02:38.520 --> 00:02:39.939
models, especially the ones they have in the

00:02:39.939 --> 00:02:42.400
lab right now, they aren't just summarizing PDFs

00:02:42.400 --> 00:02:45.560
anymore. They're acting like an AI researcher

00:02:45.560 --> 00:02:47.840
that never sleeps. That phrase, that really stood

00:02:47.840 --> 00:02:49.460
out to me in the notes, an AI researcher that

00:02:49.460 --> 00:02:53.360
never sleeps. It implies a level of agency that

00:02:53.360 --> 00:02:55.979
we haven't really given to software before. Exactly.

00:02:56.120 --> 00:02:57.419
I mean, think about what these models are doing

00:02:57.419 --> 00:03:00.780
in 2026. They're running millions of hypothesis

00:03:00.780 --> 00:03:03.900
tests across decades of research papers. They're

00:03:03.900 --> 00:03:06.219
stitching together data from simulations, from

00:03:06.219 --> 00:03:08.919
real world experiments, and they're proposing

00:03:08.919 --> 00:03:11.080
experiments that actually work. We're seeing

00:03:11.080 --> 00:03:13.699
this in biotech, in energy, in materials science.

00:03:14.000 --> 00:03:16.240
So the logic is, if the AI is doing the real

00:03:16.240 --> 00:03:18.659
heavy lifting of the invention, the creator of

00:03:18.659 --> 00:03:21.400
that AI deserves a slice of the pie. Right. If

00:03:21.400 --> 00:03:23.639
you hire a world-class scientist and she invents

00:03:23.639 --> 00:03:25.840
a new type of battery, she usually gets a bonus.

00:03:26.379 --> 00:03:28.460
Maybe some stock options, right? Sure. OpenAI

00:03:28.460 --> 00:03:30.439
is basically saying, we provided the digital

00:03:30.439 --> 00:03:32.580
scientist. We want in on the downstream success.

00:03:33.159 --> 00:03:36.060
It's treating the model as a collaborator, not

00:03:36.060 --> 00:03:38.460
just a utility you rent. But it changes the definition

00:03:38.460 --> 00:03:40.759
of collaboration, doesn't it? I mean, usually

00:03:40.759 --> 00:03:43.300
collaboration implies two humans sharing risk,

00:03:43.400 --> 00:03:45.620
sharing effort. Here, one party is a software

00:03:45.620 --> 00:03:48.680
vendor. But I suppose if the software is providing

00:03:48.680 --> 00:03:52.000
genuine cognitive labor. Cognitive labor. That's

00:03:52.000 --> 00:03:54.699
the key word. If a model suggests a financial

00:03:54.699 --> 00:03:57.379
strategy that makes a hedge fund an extra $100

00:03:57.379 --> 00:04:01.080
million, OpenAI looks at that and says, that

00:04:01.080 --> 00:04:04.360
wasn't just a tool like Excel. That was insight.

00:04:04.780 --> 00:04:07.639
But the complexities here seem massive. Well,

00:04:07.659 --> 00:04:10.060
they are. Who owns the IP? How do you even track

00:04:10.060 --> 00:04:13.719
value? That's what I'm wondering. If I use, say,

00:04:13.780 --> 00:04:16.399
five different AI tools, one for coding, one

00:04:16.399 --> 00:04:18.860
for biology, one for data analysis, and I have

00:04:18.860 --> 00:04:21.529
a breakthrough, who gets the check? Do they all

00:04:21.529 --> 00:04:23.529
split it? It sounds like a legal nightmare. Oh,

00:04:23.550 --> 00:04:26.730
absolutely. It is incredibly messy. But the signal

00:04:26.730 --> 00:04:28.930
is what's important here, and it's crystal clear.

00:04:29.529 --> 00:04:32.670
OpenAI sees itself as a stakeholder in your breakthroughs.

00:04:33.029 --> 00:04:35.629
They are betting their models will be so integral

00:04:35.629 --> 00:04:38.350
to the next wave of scientific discovery, and

00:04:38.350 --> 00:04:39.889
they don't want to leave that value on the table

00:04:39.889 --> 00:04:42.350
for $20 a month. It's a bold move. It speaks

00:04:42.350 --> 00:04:44.930
to a level of confidence in their tech that's...

00:04:45.439 --> 00:04:47.560
Well, it's almost arrogant, but maybe it's justified

00:04:47.560 --> 00:04:49.439
if the results are there. It really is. It's

00:04:49.439 --> 00:04:51.139
them saying, we know we're going to help you

00:04:51.139 --> 00:04:53.519
win, so we want our cut. So here's the question

00:04:53.519 --> 00:04:57.720
then. Does taking a cut of the win incentivize

00:04:57.720 --> 00:05:00.100
OpenAI to build drastically better models for

00:05:00.100 --> 00:05:02.800
us? Or does it just create this massive barrier

00:05:02.800 --> 00:05:05.300
where companies stop using them to protect their

00:05:05.300 --> 00:05:08.360
own innovation? It aligns incentives, but might

00:05:08.360 --> 00:05:10.779
scare off companies protecting their IP. Okay,

00:05:10.819 --> 00:05:12.959
let's pivot from the business side to the tech

00:05:12.959 --> 00:05:15.389
itself. While the business guys are figuring

00:05:15.389 --> 00:05:17.730
out how to charge for this intelligence, the

00:05:17.730 --> 00:05:19.949
engineers are figuring out how to make that intelligence

00:05:19.949 --> 00:05:22.389
understand the physical world. Yes. We're hearing

00:05:22.389 --> 00:05:25.029
a lot about world models. This is the fun stuff.

00:05:25.089 --> 00:05:27.689
We're seeing what some experts are calling the

00:05:27.689 --> 00:05:30.810
GPT-2 moment for world models. Okay. So for

00:05:30.810 --> 00:05:32.829
listeners who might not remember 2019 all that

00:05:32.829 --> 00:05:35.670
clearly, just remind us why GPT-2 is the benchmark

00:05:35.670 --> 00:05:38.329
here. Right. So GPT-2 wasn't the model that

00:05:38.329 --> 00:05:40.389
changed the world; that was really GPT-3 and

00:05:40.389 --> 00:05:43.420
4. But GPT-2 was the first time that text generation

00:05:43.420 --> 00:05:46.319
felt stable enough to build on. It wasn't perfect,

00:05:46.459 --> 00:05:49.220
but it worked. We've just hit that same milestone.

00:05:49.579 --> 00:05:52.839
But for generating video and 3D worlds. And we

00:05:52.839 --> 00:05:54.980
have two specific players mentioned here driving

00:05:54.980 --> 00:05:58.019
this, Odyssey and World Labs. Right. Let's start

00:05:58.019 --> 00:06:01.519
with Odyssey 2 Pro. This is a new API that generates

00:06:01.519 --> 00:06:05.019
interactive, physics-aware video. And I really

00:06:05.019 --> 00:06:07.579
want to stress physics-aware. Okay. You aren't

00:06:07.579 --> 00:06:09.600
just generating a video file that plays from

00:06:09.600 --> 00:06:12.079
start to finish. You're generating a simulation.

00:06:12.579 --> 00:06:15.600
So what happens if I type in, say, a laughing

00:06:15.600 --> 00:06:18.160
baby? Okay, so it generates the scene almost

00:06:18.160 --> 00:06:20.639
instantly. We're talking 720p at about 22 frames

00:06:20.639 --> 00:06:23.779
per second. But here's the kicker. The model

00:06:23.779 --> 00:06:27.139
is actively predicting the next frame like a

00:06:27.139 --> 00:06:29.060
physics engine. Wow. It understands gravity,

00:06:29.259 --> 00:06:32.089
light, how things move. And you can interact

00:06:32.089 --> 00:06:33.930
with it in real time. It's not a movie. It's

00:06:33.930 --> 00:06:36.970
a little sandbox. That distinction feels crucial.

00:06:37.110 --> 00:06:39.629
It's moving from just capturing reality to actively

00:06:39.629 --> 00:06:42.029
simulating it. And World Labs is doing something

00:06:42.029 --> 00:06:45.029
similar, but in 3D space. Exactly. World Labs

00:06:45.029 --> 00:06:47.350
just dropped their marble model. You can upload

00:06:47.350 --> 00:06:49.509
a single photo or even just describe a scene.

00:06:49.810 --> 00:06:51.970
And in about five minutes, it gives you a fully

00:06:51.970 --> 00:06:54.610
navigable 3D world. Five minutes? Five minutes.

00:06:54.629 --> 00:06:56.189
It figures out the layout, the lighting, the

00:06:56.189 --> 00:06:58.889
depth. It uses this tech called Gaussian Splats

00:06:58.889 --> 00:07:02.009
to... render it all. Gaussian Splats. It sounds

00:07:02.009 --> 00:07:04.209
like a messy painting technique, but I know it's

00:07:04.209 --> 00:07:06.550
a big deal technically. It's surprisingly elegant.

00:07:06.550 --> 00:07:09.509
The simple version is it's a way of representing

00:07:09.509 --> 00:07:14.509
3D scenes as these sort of fuzzy blobs, the splats,

00:07:14.509 --> 00:07:17.370
that all blend together to look solid. Right, and

00:07:17.370 --> 00:07:19.730
it's way faster to render than traditional polygons.

00:07:19.730 --> 00:07:23.370
So you go from a flat 2D picture of a living

00:07:23.370 --> 00:07:26.180
room to literally walking around inside that

00:07:26.180 --> 00:07:27.959
living room in five minutes. That just feels

00:07:27.959 --> 00:07:30.860
like a massive leap for industries like gaming

00:07:30.860 --> 00:07:33.160
or architecture. You don't need to code the wall.

00:07:33.259 --> 00:07:36.839
You just dream the wall. Whoa. And imagine scaling

00:07:36.839 --> 00:07:39.040
that. Imagine just saying, I want a city that

00:07:39.040 --> 00:07:41.639
looks like 1920s Paris, but with neon lights,

00:07:42.110 --> 00:07:44.509
and then having a walkable simulation in minutes.

00:07:44.769 --> 00:07:47.829
We're moving from generating a picture to stepping

00:07:47.829 --> 00:07:49.310
into the simulation. It's a holodeck. It's a

00:07:49.310 --> 00:07:50.930
holodeck just on your screen. It's incredible.

00:07:51.050 --> 00:07:53.649
But it also blurs the line between creation and

00:07:53.649 --> 00:07:56.129
hallucination. If we can generate physics-aware

00:07:56.129 --> 00:07:58.949
reality on the fly, what happens to the whole

00:07:58.949 --> 00:08:01.889
concept of filming a movie or coding a game?

00:08:01.990 --> 00:08:04.569
Traditional production dies. We move from capturing

00:08:04.569 --> 00:08:07.269
reality to hallucinating consistent realities.

00:08:08.120 --> 00:08:11.459
That idea, hallucinating realities, it brings

00:08:11.459 --> 00:08:13.600
us to a more grounding and maybe more concerning

00:08:13.600 --> 00:08:16.699
topic. We have to talk about trust because while

00:08:16.699 --> 00:08:18.879
these models are busy building worlds, they're

00:08:18.879 --> 00:08:20.740
also answering our questions about our health,

00:08:20.819 --> 00:08:24.339
about history. And the latest reports are mixed,

00:08:24.480 --> 00:08:27.120
to say the least. Mixed is a very polite way

00:08:27.120 --> 00:08:28.939
to put it. Yeah. We've got a bit of a crisis

00:08:28.939 --> 00:08:31.240
of trust brewing here. There's a new study out

00:08:31.240 --> 00:08:34.259
about Google's AI overviews, you know, the summaries

00:08:34.259 --> 00:08:36.639
at the top of a search. Right. It turns out,

00:08:36.889 --> 00:08:39.730
When it comes to health information, the AI is

00:08:39.730 --> 00:08:42.730
citing YouTube videos more often than actual

00:08:42.730 --> 00:08:45.389
medical sites. Yeah. More than the Mayo Clinic.

00:08:45.549 --> 00:08:48.070
That is deeply unsettling. I mean, YouTube is

00:08:48.070 --> 00:08:50.870
fantastic if you need to fix a leaky sink. But

00:08:50.870 --> 00:08:53.289
for medical advice, it's a complete minefield

00:08:53.289 --> 00:08:57.190
of anecdotes and unverified claims. Right. And

00:08:57.190 --> 00:08:59.230
the studies show that even reputable medical

00:08:59.230 --> 00:09:01.690
sources were ranking lower than video content.

00:09:02.049 --> 00:09:04.669
It looks like the algorithm is prioritizing engagement,

00:09:05.370 --> 00:09:07.610
what people click on over actual medical accuracy.

00:09:07.990 --> 00:09:10.350
So it's the Internet's popularity contest problem,

00:09:10.509 --> 00:09:12.149
but now it's being presented as an authoritative

00:09:12.149 --> 00:09:15.350
answer by an AI. Precisely. And this isn't just

00:09:15.350 --> 00:09:17.110
a Google problem. We're seeing tension between

00:09:17.110 --> 00:09:21.190
the big labs, too. DeepMind's CEO publicly called

00:09:21.190 --> 00:09:23.929
out OpenAI recently. Yeah, that was spicy. What

00:09:23.929 --> 00:09:25.909
happened there? He basically questioned them

00:09:25.950 --> 00:09:29.789
rushing ads into ChatGPT. His point was, you

00:09:29.789 --> 00:09:32.090
know, how can you claim to be a trusted assistant

00:09:32.090 --> 00:09:34.730
if you're also trying to sell me something? It's

00:09:34.730 --> 00:09:37.110
a conflict of interest. If I ask my assistant

00:09:37.110 --> 00:09:39.750
for the best running shoe, I want the best shoe,

00:09:39.850 --> 00:09:41.730
not the one that paid for a placement. That makes

00:09:41.730 --> 00:09:44.529
sense. And then there's this issue of models

00:09:44.529 --> 00:09:46.289
learning from other models. We saw that with

00:09:46.289 --> 00:09:48.970
GPT-5.2, right? Oh, the Grokopedia incident.

00:09:49.090 --> 00:09:52.730
Yes. So reportedly, GPT-5.2 started referencing

00:09:52.730 --> 00:09:55.769
Elon Musk's Grokopedia, which is the data source

00:09:55.769 --> 00:09:57.950
for his Grok model. Okay. And the problem with

00:09:57.950 --> 00:10:00.389
that is? The problem is Grok has a very specific,

00:10:00.490 --> 00:10:03.049
let's call it an edgy worldview. So all of a

00:10:03.049 --> 00:10:06.190
sudden, GPT-5.2 starts giving these weird, controversial

00:10:06.190 --> 00:10:09.210
takes on really sensitive topics like AIDS or

00:10:09.210 --> 00:10:16.309
slavery. It's like a contagion effect. One model

00:10:16.309 --> 00:10:18.450
hallucinates or carries a bias, and the next

00:10:18.450 --> 00:10:21.429
model comes along, scrapes that output, and treats

00:10:21.429 --> 00:10:24.509
it as fact. Exactly. It's a huge data contamination

00:10:24.509 --> 00:10:27.710
problem. If the internet is just flooding with

00:10:27.710 --> 00:10:30.870
AI -generated content, then new models are just

00:10:30.870 --> 00:10:33.889
training on old models' outputs. It's like making

00:10:33.889 --> 00:10:36.590
a photocopy of a photocopy. The image just...

00:10:36.779 --> 00:10:39.539
degrades over time. So if major models are citing

00:10:39.539 --> 00:10:41.899
YouTube and each other's hallucinations, are

00:10:41.899 --> 00:10:44.240
we just entering a feedback loop of misinformation?

00:10:44.799 --> 00:10:47.960
Yes. It's a digital echo chamber degrading the

00:10:47.960 --> 00:10:49.899
quality of truth. We're going to take a brief

00:10:49.899 --> 00:10:51.720
pause here. All right. We've talked about the

00:10:51.720 --> 00:10:54.759
high level economics, the futuristic tech. Let's

00:10:54.759 --> 00:10:56.720
try to bring this back down to earth a bit. What

00:10:56.720 --> 00:10:58.820
can people listening actually use this week?

00:10:58.860 --> 00:11:01.379
We've got some updates from Google, Apple and

00:11:01.379 --> 00:11:03.820
a few interesting startups. OK, let's start with

00:11:03.820 --> 00:11:07.700
learning. Google has this six-hour prompt engineering

00:11:07.700 --> 00:11:10.200
course, which I know sounds exhausting. Yeah,

00:11:10.220 --> 00:11:12.039
that's a commitment. But a newsletter author

00:11:12.039 --> 00:11:15.100
we follow distilled it down beautifully. And

00:11:15.100 --> 00:11:17.440
the core value isn't just a list of magic words.

00:11:17.779 --> 00:11:20.580
It's about understanding these five key principles

00:11:20.580 --> 00:11:23.379
that help you start to think like the model thinks.

00:11:23.840 --> 00:11:26.460
Thinking like the model. That really does seem

00:11:26.460 --> 00:11:29.139
to be the critical skill for 2026. It's not about

00:11:29.139 --> 00:11:31.419
memorizing commands. It's about understanding

00:11:31.419 --> 00:11:35.000
the logic flow. Exactly. It's about context,

00:11:35.059 --> 00:11:38.320
constraints, iteration. And speaking of iteration,

00:11:38.620 --> 00:11:41.340
I have to make a vulnerable admission here. Go

00:11:41.340 --> 00:11:43.740
for it. I still wrestle with prompt drift myself.

00:11:44.159 --> 00:11:46.200
You know, you get a great result and then you

00:11:46.200 --> 00:11:48.440
change one tiny word and suddenly the AI just

00:11:48.440 --> 00:11:50.440
goes completely off the rails. Oh, it happens

00:11:50.440 --> 00:11:52.480
to everyone. You think you finally mastered it

00:11:52.480 --> 00:11:54.639
and then the model decides that concise means

00:11:54.639 --> 00:11:58.120
be rude. Precisely. So these courses are actually

00:11:58.120 --> 00:12:00.799
pretty vital. But for people who just want the

00:12:00.799 --> 00:12:03.259
tech to work better, Apple is finally catching

00:12:03.259 --> 00:12:05.330
up. We're hearing that the Apple intelligence

00:12:05.330 --> 00:12:07.789
updates coming in February are pretty significant.

00:12:08.110 --> 00:12:10.350
Siri will finally be able to read on-screen

00:12:10.350 --> 00:12:12.309
content. That seems like such a basic feature.

00:12:12.389 --> 00:12:15.230
It's amazing that's been missing. It's huge for

00:12:15.230 --> 00:12:17.070
context. If you're looking at an email and you

00:12:17.070 --> 00:12:19.289
say, remind me about this, Siri will finally

00:12:19.289 --> 00:12:22.570
know what this is. And there's a bigger, more

00:12:22.570 --> 00:12:25.370
chatbot -style update planned for June. We also

00:12:25.370 --> 00:12:27.570
saw a big acquisition in the business world.

00:12:27.809 --> 00:12:30.470
Yelp bought a startup called Hatch for, what,

00:12:30.470 --> 00:12:33.730
$270 million? Yeah, this is a smart move. Hatch

00:12:33.730 --> 00:12:37.230
is an AI for service businesses. So think plumbers,

00:12:37.350 --> 00:12:39.470
movers, that sort of thing. It helps them

00:13:39.470 --> 00:13:41.450
auto-reply to customers. Yelp is basically saying,

00:12:41.549 --> 00:12:44.110
we don't just want to list your business. We

00:12:44.110 --> 00:12:46.090
want to handle your frontline customer service.

00:12:46.309 --> 00:12:49.110
It's automation for the less glamorous but essential

00:12:49.110 --> 00:12:52.210
parts of the economy. Totally. And any smaller

00:12:52.210 --> 00:12:54.779
tools that caught your eye this week? Two, really

00:12:54.779 --> 00:12:57.379
quickly. One is called Pliable. It's an AI-native

00:12:57.379 --> 00:12:59.940
analytics platform. It helps you see the forest

00:12:59.940 --> 00:13:02.600
for the trees with your data, but without needing

00:13:02.600 --> 00:13:05.059
a data science degree. The other one is for creators.

00:13:05.340 --> 00:13:08.460
Thumbfy.est. Thumbfast? Yeah. You drop in a

00:13:08.460 --> 00:13:12.080
face, describe a vision, and it generates a YouTube

00:13:12.080 --> 00:13:14.480
thumbnail instantly. You can grab inspiration

00:13:14.480 --> 00:13:16.299
from thumbnails that are already out there and

00:13:16.299 --> 00:13:18.539
just iterate until it's perfect. It's pure, simple

00:13:18.539 --> 00:13:21.960
utility. It's fascinating. We have AI negotiating

00:13:21.960 --> 00:13:25.240
million-dollar drug discoveries on one end and

00:13:25.240 --> 00:13:27.600
AI making YouTube thumbnails on the other.

00:13:27.759 --> 00:13:30.139
The range is just staggering. That's the ecosystem

00:13:30.139 --> 00:13:32.980
right now. Yeah. From the sublime all the way

00:13:32.980 --> 00:13:35.159
to the clickbait. With tools automating everything

00:13:35.159 --> 00:13:38.179
from thumbnails to customer service, what is

00:13:38.179 --> 00:13:40.899
the remaining role for human intuition? Curating

00:13:40.899 --> 00:13:43.399
the output and defining the creative vision.

00:13:43.600 --> 00:13:45.279
Let's try to pull all of this together. We've

00:13:45.279 --> 00:13:47.580
covered a lot of ground today. We have. And I

00:13:47.580 --> 00:13:49.200
think there really is a through line here. We're

00:13:49.200 --> 00:13:52.440
seeing the industry mature, maybe. Maybe it's

00:13:52.440 --> 00:13:54.879
losing some of its innocence. That's a good way

00:13:54.879 --> 00:13:57.870
to put it. Economically, we're moving from simple

00:13:57.870 --> 00:14:00.750
subscriptions to this complex idea of value sharing.

00:14:01.090 --> 00:14:04.269
OpenAI wanting royalties is a sign they believe

00:14:04.269 --> 00:14:06.570
their product isn't just software anymore, it's

00:14:06.570 --> 00:14:08.769
a co-founder. And technologically, we're graduating

00:14:08.769 --> 00:14:11.509
from static images to these physics-aware worlds.

00:14:11.769 --> 00:14:15.470
That GPT-2 moment for world models means we

00:14:15.470 --> 00:14:18.350
are right on the cusp of generated reality being

00:14:18.350 --> 00:14:22.429
standard. But culturally, we are really

00:14:22.429 --> 00:14:25.049
struggling with this messy middle. The trust

00:14:25.049 --> 00:14:27.990
issues at Google, the data contamination with

00:14:27.990 --> 00:14:30.850
Grokopedia. It shows that as these models get

00:14:30.850 --> 00:14:33.190
more powerful, they also get more dangerous if

00:14:33.190 --> 00:14:35.730
we aren't careful about what we feed them. Right.

00:14:35.850 --> 00:14:38.490
The models are becoming less like tools and more

00:14:38.490 --> 00:14:42.629
like partners. Like any partner, sometimes they're

00:14:42.629 --> 00:14:44.789
brilliant, like when they discover a new material,

00:14:44.929 --> 00:14:47.690
and sometimes they're just completely unreliable,

00:14:47.690 --> 00:14:50.190
citing a random YouTuber for medical advice.

00:14:50.409 --> 00:14:52.490
It forces us to be sharper, I think. We can't

00:14:52.490 --> 00:14:54.610
just passively consume the output anymore, we

00:14:54.610 --> 00:14:57.360
have to audit it. Absolutely. The human in the

00:14:57.360 --> 00:14:59.360
loop is more important than ever, even if that

00:14:59.360 --> 00:15:01.679
loop is getting faster and more automated. If

00:15:01.679 --> 00:15:03.399
you want to dive a little deeper on any of this,

00:15:03.519 --> 00:15:06.059
I'd really recommend checking out that distillation

00:15:06.059 --> 00:15:07.879
of the Google prompt course. It's a low stakes

00:15:07.879 --> 00:15:09.899
way to just sharpen your skills. Yeah. And if

00:15:09.899 --> 00:15:12.120
you have access, play around with Odyssey or

00:15:12.120 --> 00:15:15.139
World Labs. Seeing a static image turn into a

00:15:15.139 --> 00:15:17.360
3D world is... it's something you really have to

00:15:17.360 --> 00:15:19.649
experience to understand. It's a trip. Highly

00:15:19.649 --> 00:15:21.649
recommended. I want to leave you with one final

00:15:21.649 --> 00:15:24.730
thought today. We talked about OpenAI wanting

00:15:24.730 --> 00:15:28.669
a cut of your profits because their AI helped

00:15:28.669 --> 00:15:31.590
you think. It's an interesting argument, but

00:15:31.590 --> 00:15:34.330
it raises the flip side of that coin. If OpenAI

00:15:34.330 --> 00:15:36.789
wants a share of the win because their model

00:15:36.789 --> 00:15:39.570
was a partner, does that mean they're also liable

00:15:39.570 --> 00:15:42.710
if their AI helps you fail? If the researcher

00:15:42.710 --> 00:15:45.370
that never sleeps hallucinates a strategy that

00:15:45.370 --> 00:15:47.679
bankrupts your company, do they share in that

00:15:47.679 --> 00:15:50.340
loss? That is the billion dollar question, isn't

00:15:50.340 --> 00:15:51.960
it? Thanks for listening. We'll see you in the

00:15:51.960 --> 00:15:53.379
deep end next time. Bye for now.
