WEBVTT

00:00:00.000 --> 00:00:02.640
So for the last, what, 18 months, we've just

00:00:02.640 --> 00:00:05.019
been buried in these staggering numbers about

00:00:05.019 --> 00:00:07.580
AI adoption. Oh, yeah, nonstop. But this new

00:00:07.580 --> 00:00:11.640
OpenAI enterprise report, it just dropped this

00:00:11.640 --> 00:00:15.019
huge, almost paradoxical insight. Right. There's

00:00:15.019 --> 00:00:18.260
this immense gap opening up. A huge chunk of

00:00:18.260 --> 00:00:20.699
companies haven't even touched the basic AI features,

00:00:20.899 --> 00:00:24.300
even while the top tier is just, you know, exploding.

00:00:24.579 --> 00:00:27.160
It's the great AI divide. Yeah. And you really

00:00:27.160 --> 00:00:29.690
need to look at the metrics here, because the

00:00:29.690 --> 00:00:32.789
financial stakes are just enormous. We all know

00:00:32.789 --> 00:00:35.909
the 800 million weekly ChatGPT users, but in

00:00:35.909 --> 00:00:38.590
the corporate world, the difference between these

00:00:38.590 --> 00:00:40.850
frontier firms and everyone else is already creating

00:00:40.850 --> 00:00:43.229
this huge divergence in shareholder returns.

00:00:43.929 --> 00:00:46.530
Welcome back to the Deep Dive. Our mission today,

00:00:46.609 --> 00:00:48.770
it's built entirely from the sources you sent

00:00:48.770 --> 00:00:50.789
us, really trying to cut through the noise and

00:00:50.789 --> 00:00:52.490
give you the most critical, actionable stuff.

00:00:52.750 --> 00:00:54.469
And we have a pretty clear roadmap for this.

00:00:54.549 --> 00:00:56.469
First, we're going to dissect this great AI divide.

00:00:56.630 --> 00:00:58.630
Look at the user habits that are translating

00:00:58.630 --> 00:01:02.630
directly into real financial gains. Second, we'll

00:01:02.630 --> 00:01:05.629
scan the landscape for 2026. So, monetization

00:01:05.629 --> 00:01:08.689
trends, hardware shifts, and this crazy accelerating

00:01:08.689 --> 00:01:12.769
model race. And finally, we have this just ultimate

00:01:12.769 --> 00:01:16.790
underdog story. How a tiny six-person team just

00:01:16.790 --> 00:01:20.590
smashed a critical AGI benchmark. Yeah. And they

00:01:20.590 --> 00:01:24.310
proved that smart system design is maybe more

00:01:24.310 --> 00:01:27.189
valuable now than just raw computational muscle.

00:01:27.489 --> 00:01:30.120
Okay, let's unpack this. When you just look at

00:01:30.120 --> 00:01:32.379
the raw growth metrics from this report, the

00:01:32.379 --> 00:01:35.599
numbers, they almost feel unreal. Right. Enterprise

00:01:35.599 --> 00:01:38.799
usage is up nine times year over year. Nine times.

00:01:38.879 --> 00:01:41.120
And the number of weekly messages inside these

00:01:41.120 --> 00:01:43.480
companies, that's up eight times. This isn't

00:01:43.480 --> 00:01:46.000
some slow rollout. No, it's an absolute stampede,

00:01:46.019 --> 00:01:47.319
at least for the companies who've bought in.

00:01:47.420 --> 00:01:49.540
But that's just the volume side. What's really,

00:01:49.640 --> 00:01:52.239
I think, insightful here is the quality of the

00:01:52.239 --> 00:01:55.159
usage. The quality, how so? Reasoning token usage.

00:01:55.459 --> 00:01:57.560
Yeah. Which is basically the model's internal

00:01:57.560 --> 00:02:00.040
thought process, right? Its ability to do complex,

00:02:00.180 --> 00:02:03.019
multi-step problem-solving. That's up a staggering

00:02:03.019 --> 00:02:06.620
320 times since last November. 320. I mean, that's

00:02:06.620 --> 00:02:08.319
a number you have to sit with for a second. Yeah.

00:02:08.379 --> 00:02:11.000
It suggests businesses are not just using AI

00:02:11.000 --> 00:02:13.729
for, you know, drafting a quick email anymore.

00:02:13.930 --> 00:02:16.550
Not at all. They're giving it deep, complex,

00:02:16.569 --> 00:02:19.449
strategic work. They're running analysis that

00:02:19.449 --> 00:02:22.150
needs the AI to connect a bunch of different

00:02:22.150 --> 00:02:24.830
dots and follow this long chain of logic. Right.

00:02:24.990 --> 00:02:28.430
And this kind of deep reasoning. It only really

00:02:28.430 --> 00:02:31.389
became commercially viable like 16 months ago.

00:02:31.550 --> 00:02:34.189
So this explosion is a very recent and I think

00:02:34.189 --> 00:02:37.069
profound shift. And that shift is leading to

00:02:37.069 --> 00:02:41.069
a productivity payoff that seems undeniably real.

00:02:41.449 --> 00:02:44.409
It is. The average user saves, what, 40 to 60

00:02:44.409 --> 00:02:46.930
minutes a day, which is great. Yeah, that's significant.

00:02:47.250 --> 00:02:49.229
But the heavy users, the ones who are really

00:02:49.229 --> 00:02:52.110
embedding it in every single workflow, they're

00:02:52.110 --> 00:02:54.189
saving 10 or more hours a week. That's a full

00:02:54.189 --> 00:02:56.169
extra workday. It's like buying a full extra

00:02:56.169 --> 00:02:58.349
workday without a new hire. And on top of that,

00:02:58.469 --> 00:03:00.669
75% of workers now say they're doing things

00:03:00.669 --> 00:03:02.949
they just couldn't do before, like complex coding,

00:03:03.129 --> 00:03:05.650
data modeling, building custom automations. It's

00:03:05.650 --> 00:03:07.650
letting non-technical people jump straight into

00:03:07.650 --> 00:03:11.050
the technical deep end. I have to admit, I still

00:03:11.050 --> 00:03:13.270
wrestle with prompt drift myself sometimes, you

00:03:13.270 --> 00:03:15.710
know, finding just the right conversational path

00:03:15.710 --> 00:03:18.990
to get the result I want. But seeing people who

00:03:18.990 --> 00:03:22.319
a year ago wouldn't touch a line of code. And

00:03:22.319 --> 00:03:24.500
now they're building actual functional applications.

00:03:25.099 --> 00:03:27.539
It's astounding. It's just flattening that technical

00:03:27.539 --> 00:03:29.780
skill curve so dramatically. Yeah, but here's

00:03:29.780 --> 00:03:31.159
where it gets really interesting, right? Yeah.

00:03:31.439 --> 00:03:34.699
How do those frontier users, the ones driving

00:03:34.699 --> 00:03:37.699
all this, how do they actually do it? It seems

00:03:37.699 --> 00:03:41.400
to come down to deliberate, high-volume, and what

00:03:41.400 --> 00:03:43.460
you call high-leverage interaction. Right. They

00:03:43.460 --> 00:03:46.500
send six times more messages per employee than

00:03:46.500 --> 00:03:49.280
the average company. Six times. But here's the

00:03:49.280 --> 00:03:52.400
critical metric. The top analysts in these firms,

00:03:52.520 --> 00:03:55.400
they're using AI tooling 16 times more often

00:03:55.400 --> 00:03:58.419
than their peers. 16. That intensity just creates

00:03:58.419 --> 00:04:01.199
this feedback loop that the non-adopters, they

00:04:01.199 --> 00:04:02.840
just can't compete with it. And that intensity

00:04:02.840 --> 00:04:05.699
maps directly to the bottom line. The firms operating

00:04:05.699 --> 00:04:08.740
at this level are seeing 1.7 times faster revenue

00:04:08.740 --> 00:04:12.310
growth. But the kicker is the shareholder return,

00:04:12.569 --> 00:04:16.050
3.6 times higher. 3.6. So if you're a company

00:04:16.050 --> 00:04:19.569
that hasn't even touched basic AI yet, the data

00:04:19.569 --> 00:04:21.790
is screaming that you are already way behind

00:04:21.790 --> 00:04:24.550
the curve. So this raises a big question for

00:04:24.550 --> 00:04:26.949
me. We see this data and it correlates financial

00:04:26.949 --> 00:04:31.329
return with this high volume usage. Is this adoption

00:04:31.329 --> 00:04:33.629
gap we're seeing fundamentally a failure of management,

00:04:33.810 --> 00:04:36.410
a failure to encourage that kind of... deliberate

00:04:36.410 --> 00:04:39.269
interaction? The data certainly suggests management

00:04:39.269 --> 00:04:42.550
is key. Financial returns track directly with

00:04:42.550 --> 00:04:45.850
that deliberate, high-leverage AI usage, so leadership

00:04:45.850 --> 00:04:48.350
has to be a factor. So while the big players are

00:04:48.350 --> 00:04:50.329
winning today with sheer usage, we've got to look

00:04:50.329 --> 00:04:52.350
at what they're prepping for tomorrow. Let's, uh,

00:04:52.350 --> 00:04:54.930
scan the immediate landscape. The pace is just

00:04:55.290 --> 00:04:57.509
It's a racing heartbeat. We're seeing utility

00:04:57.509 --> 00:05:00.329
tools just explode. Like, developers shared 25

00:05:00.329 --> 00:05:02.350
pre-built Claude skills. They're kind of like

00:05:02.350 --> 00:05:04.569
custom GPTs that are just copy-paste. Which

00:05:04.569 --> 00:05:06.769
massively simplifies things for teams. Totally.

00:05:06.829 --> 00:05:08.850
It simplifies rapid deployment, and people are

00:05:08.850 --> 00:05:11.009
hunting for these high-value shortcuts. Right,

00:05:11.110 --> 00:05:14.110
like that thread with 15,000 bookmarks. Yeah.

00:05:14.189 --> 00:05:16.490
The one that claims six prompts are better than

00:05:16.490 --> 00:05:19.050
Duolingo for learning a language. Exactly. People

00:05:19.050 --> 00:05:21.949
want to shortcut expertise instantly. Meanwhile...

00:05:22.160 --> 00:05:24.000
You know, the big model heavyweights are accelerating.

00:05:24.180 --> 00:05:26.540
There are these reports that OpenAI's code red

00:05:26.540 --> 00:05:29.319
might have led to them panic-releasing

00:05:29.319 --> 00:05:32.500
GPT-5.2 like this week. And Elon's always teasing

00:05:32.500 --> 00:05:35.519
Grok 4.20. Of course. And the investment side,

00:05:35.660 --> 00:05:37.699
it really shows where the long term money is

00:05:37.699 --> 00:05:41.819
going. Physical AI. Embodiment. Yeah. SoftBank

00:05:41.819 --> 00:05:44.220
and NVIDIA are reportedly about to put over a

00:05:44.220 --> 00:05:47.279
billion dollars into Skild AI. They build the

00:05:47.279 --> 00:05:49.459
robot brains, right? They build robot brains

00:05:49.459 --> 00:05:52.569
using human-like AI. And that company's valuation?

00:05:53.509 --> 00:05:55.889
It tripled in less than two years. The big money

00:05:55.889 --> 00:05:57.850
thinks the next frontier is physical. So looking

00:05:57.850 --> 00:06:01.029
out a bit, to 2026, Microsoft has outlined these

00:06:01.029 --> 00:06:03.149
seven big trends. And two of them really jump

00:06:03.149 --> 00:06:06.050
out. First, monetization. Google finally confirmed

00:06:06.050 --> 00:06:08.930
it. Ads are coming to Gemini in 2026. The end

00:06:08.930 --> 00:06:11.129
of the ad-free era. It was always going to happen.

00:06:11.290 --> 00:06:13.550
It was inevitable. But that's a huge psychological

00:06:13.550 --> 00:06:16.410
shift, isn't it? For enterprise users, once ads

00:06:16.410 --> 00:06:18.689
are in the ecosystem, the whole trust model changes.

00:06:18.949 --> 00:06:21.050
For sure. Are you really going to build your

00:06:21.050 --> 00:06:23.490
core workflows on a tool where the attention

00:06:23.490 --> 00:06:26.290
economy is now a feature? It creates a friction

00:06:26.290 --> 00:06:28.790
point, for sure. And the second major trend is

00:06:28.790 --> 00:06:30.870
hardware integration. We're seeing this massive

00:06:30.870 --> 00:06:34.230
shift from screen-based chat to something more

00:06:34.230 --> 00:06:39.490
ambient. Google's new AI glasses, also coming

00:06:39.490 --> 00:06:42.389
in 2026 with Gemini built right in. Yeah, that's

00:06:42.389 --> 00:06:44.870
a direct shot at Meta's Ray-Bans. It signals

00:06:44.870 --> 00:06:47.410
that AI is about to become deeply integrated

00:06:47.410 --> 00:06:49.970
into how we actually perceive the world. So what

00:06:49.970 --> 00:06:51.389
really stands out to you in all this? I mean,

00:06:51.410 --> 00:06:54.129
are we really moving past AI as just a chat box,

00:06:54.230 --> 00:06:57.269
a tool we open, to AI as integrated hardware

00:06:57.269 --> 00:07:00.199
that's just... always on. Yes, I think the focus

00:07:00.199 --> 00:07:02.959
is absolutely shifting to deeply integrated AI

00:07:02.959 --> 00:07:05.259
and hardware, making the technology ambient and

00:07:05.259 --> 00:07:07.300
constant.

00:07:07.660 --> 00:07:10.519
All right, let's pivot to what might be the breakthrough

00:07:10.519 --> 00:07:13.319
that really shocked the AI world this week. We're

00:07:13.319 --> 00:07:15.339
talking about an incredible underdog story. Oh,

00:07:15.379 --> 00:07:18.660
this is great. A six-person startup called Poetiq.

00:07:19.019 --> 00:07:22.740
They just beat giants like Google DeepMind and

00:07:22.740 --> 00:07:25.180
OpenAI on one of the toughest reasoning benchmarks

00:07:25.180 --> 00:07:29.600
out there. The ARC-AGI-2. And this is a huge

00:07:29.600 --> 00:07:32.379
deal, a huge win, for a couple reasons. First,

00:07:32.579 --> 00:07:36.019
the ARC-AGI-2 is a genuinely difficult test.

00:07:36.180 --> 00:07:38.879
It's not about spitting back facts. It's about

00:07:38.879 --> 00:07:41.720
abstract visual patterns, fluid reasoning. You

00:07:41.720 --> 00:07:43.500
have to actually understand structure and logic.

00:07:43.680 --> 00:07:46.079
Exactly. And Poetiq was the first system ever

00:07:46.079 --> 00:07:50.040
to cross 50% accuracy. They hit 54%. And here's

00:07:50.040 --> 00:07:52.319
where it gets really wild. Poetiq didn't do what

00:07:52.319 --> 00:07:54.620
everyone else does. They didn't sink billions

00:07:54.620 --> 00:07:57.079
into training some giant new model from scratch.

00:07:57.300 --> 00:08:00.660
No, what's so fascinating is they just used Google's

00:08:00.660 --> 00:08:04.019
own Gemini 3 Pro as their base engine. They rented

00:08:04.019 --> 00:08:05.759
it. They essentially rented the foundational

00:08:05.759 --> 00:08:07.899
capability from their biggest competitor. So

00:08:07.899 --> 00:08:10.379
how does a tiny team win the race when they're

00:08:10.379 --> 00:08:12.459
using the very engine built by the people they're

00:08:12.459 --> 00:08:14.899
racing against? They built a smart, self-improving

00:08:14.899 --> 00:08:17.399
system on top of it. A refinement layer. Okay,

00:08:17.420 --> 00:08:19.420
so break that down. A refinement layer. Think

00:08:19.420 --> 00:08:22.509
of it like this. The foundational model, Gemini,

00:08:22.689 --> 00:08:25.569
it's like a massive, powerful jet engine that

00:08:25.569 --> 00:08:27.829
you can just rent. Okay. The refinement layer

00:08:27.829 --> 00:08:30.810
is the brilliant, custom-built flight computer

00:08:30.810 --> 00:08:33.370
that makes sure that engine is always running

00:08:33.370 --> 00:08:36.110
at peak performance, maximizing efficiency for

00:08:36.110 --> 00:08:38.389
whatever specific job you give it. That helps

00:08:38.389 --> 00:08:40.789
a lot. So it's not about brute force power. It's

00:08:40.789 --> 00:08:43.750
about elegant, high-leverage engineering. Precisely.

00:08:43.750 --> 00:08:47.139
And this layer has... basically, three jobs that

00:08:47.139 --> 00:08:49.259
let them win. First, it's a smart traffic cop.

00:08:49.399 --> 00:08:52.419
It picks the right base model for the task. Second,

00:08:52.659 --> 00:08:55.200
it guides and improves the output. It's almost

00:08:55.200 --> 00:08:57.019
like it's teaching the rented engine in real

00:08:57.019 --> 00:08:59.860
time until the answer is good enough. And third,

00:09:00.120 --> 00:09:02.620
it is a self-auditing mechanism to verify the

00:09:02.620 --> 00:09:04.840
quality before it ever spits out the final result.

00:09:05.039 --> 00:09:06.539
And I'm guessing this method is a lot cheaper.

00:09:06.720 --> 00:09:08.899
Oh, significantly cheaper. Yeah. And way more

00:09:08.899 --> 00:09:11.870
flexible. Poetiq's approach cost about $30 per

00:09:11.870 --> 00:09:15.389
task. Google's Deep Think, which only hit 45% accuracy,

00:09:15.710 --> 00:09:19.289
cost $77 per task. And that efficiency, that

00:09:19.289 --> 00:09:21.909
changes the whole economic model. Their system

00:09:21.909 --> 00:09:25.090
can adapt to any new base model from any company

00:09:25.090 --> 00:09:28.049
in a few hours. No costly retraining. And it's

00:09:28.049 --> 00:09:31.960
open source. Whoa. I mean, imagine scaling that

00:09:31.960 --> 00:09:34.480
refinement layer to a billion queries. Right.

00:09:34.799 --> 00:09:36.419
That completely changes the economics. You're

00:09:36.419 --> 00:09:39.360
not bottlenecked by the insane cost of building

00:09:39.360 --> 00:09:41.639
the foundational model anymore. The value just

00:09:41.639 --> 00:09:44.000
shifts. So what does this all mean then? Does

00:09:44.000 --> 00:09:47.299
this small team success show that smart system

00:09:47.299 --> 00:09:51.250
architecture, that elegant design? Is that the

00:09:51.250 --> 00:09:54.789
new moat now? Is it displacing raw data and compute

00:09:54.789 --> 00:09:57.350
as the main barrier to entry? I think this paradigm

00:09:57.350 --> 00:09:59.529
shift proves that smart systems, these refinement

00:09:59.529 --> 00:10:01.990
layers, they can outperform raw training scale

00:10:01.990 --> 00:10:04.529
and compute costs. It's a huge win for efficiency

00:10:04.529 --> 00:10:06.850
and innovation over just pure capital. You know,

00:10:06.929 --> 00:10:08.990
this whole deep dive, it really boils down to

00:10:08.990 --> 00:10:11.629
two core takeaways for you, the listener. First,

00:10:11.809 --> 00:10:13.870
that massive productivity and financial gap we

00:10:13.870 --> 00:10:16.289
talked about. It's driven by intense, deliberate

00:10:16.289 --> 00:10:18.809
usage. You have to adopt that frontier mindset.

00:10:19.149 --> 00:10:21.639
Using AI tooling 16 times more than your peers.

00:10:21.820 --> 00:10:26.039
To see that 3.6 times shareholder return, it's

00:10:26.039 --> 00:10:30.620
a choice. And second, the race for AGI. It might

00:10:30.620 --> 00:10:32.940
now be won by the smart architectural layers.

00:10:33.159 --> 00:10:36.620
That six-person startup proved that system design

00:10:36.620 --> 00:10:39.620
and refinement can be more powerful than just

00:10:39.620 --> 00:10:42.000
having the biggest training budget. The value

00:10:42.000 --> 00:10:44.539
is shifting from the size of the engine to the

00:10:44.539 --> 00:10:46.580
brilliance of the engineering team that's customizing

00:10:46.580 --> 00:10:48.799
it. Yeah. So we'd encourage you to adopt that

00:10:48.799 --> 00:10:51.179
frontier mindset. Just look at your own knowledge

00:10:51.179 --> 00:10:53.340
gathering. How can you increase the leverage

00:10:53.340 --> 00:10:56.039
of your tools? How can you add your own refinement

00:10:56.039 --> 00:10:58.440
layer to how you approach problems? And here's

00:10:58.440 --> 00:11:00.659
a final thought. If an open-source refinement

00:11:00.659 --> 00:11:03.120
layer can outperform these proprietary foundational

00:11:03.120 --> 00:11:06.000
models on the world's hardest AGI benchmarks,

00:11:07.079 --> 00:11:09.519
What does that imply for the security and the

00:11:09.519 --> 00:11:12.399
future market value of those incredibly expensive

00:11:12.399 --> 00:11:15.779
proprietary models? Something to mull over. We'll

00:11:15.779 --> 00:11:17.340
catch you next time. Thanks for joining us for

00:11:17.340 --> 00:11:18.960
this deep dive. [Outro music]
