WEBVTT

00:00:00.000 --> 00:00:03.700
Imagine, if you will, getting this rare glimpse

00:00:03.700 --> 00:00:07.599
into the secret labs where the next really big

00:00:07.599 --> 00:00:10.660
AI model, maybe something like GPT-5, is being

00:00:10.660 --> 00:00:13.539
built, not behind closed doors, but right there,

00:00:13.640 --> 00:00:15.820
kind of unfolding before your eyes. Yeah, it's

00:00:15.820 --> 00:00:17.679
less like your typical product launch and maybe

00:00:17.679 --> 00:00:20.300
more like Shipmas in July. We're seeing these

00:00:20.300 --> 00:00:23.140
mysterious fragments pop up, hinting at a new

00:00:23.140 --> 00:00:25.239
heavyweight contender. And it really suggests

00:00:25.239 --> 00:00:28.440
a big shift in how these powerful systems are

00:00:28.440 --> 00:00:30.579
being developed worldwide. Welcome to the Deep

00:00:30.579 --> 00:00:32.859
Dive. Today we're taking a journey through some

00:00:32.859 --> 00:00:35.299
really fascinating new sources. Our goal here

00:00:35.299 --> 00:00:38.200
is to unpack the very latest in artificial intelligence.

00:00:38.780 --> 00:00:41.439
Our mission really is to make sense of this complex

00:00:41.439 --> 00:00:43.840
web of innovation that's shaping the field right

00:00:43.840 --> 00:00:45.539
now. Absolutely. We're going to pull back the

00:00:45.539 --> 00:00:48.460
curtain on these enigmatic AI models, explore

00:00:48.460 --> 00:00:51.100
a whole wave of surprisingly practical new tools,

00:00:51.280 --> 00:00:53.399
and then we'll zoom out, look at the bigger picture,

00:00:53.439 --> 00:00:55.700
this global strategic chess match playing out

00:00:55.700 --> 00:00:59.159
between two very different AI philosophies. A

00:00:59.159 --> 00:01:00.859
deep dive into what's happening right now and

00:01:00.859 --> 00:01:02.960
maybe more importantly, what's just coming over

00:01:02.960 --> 00:01:06.019
the horizon. Let's unpack this first piece. Our

00:01:06.019 --> 00:01:09.260
first significant insight comes from a really

00:01:09.260 --> 00:01:11.519
intriguing observation that's been making waves.

00:01:11.959 --> 00:01:14.299
The appearance and then the sudden disappearance

00:01:14.299 --> 00:01:18.920
of six mysterious AI models on LM Arena. That's

00:01:18.920 --> 00:01:21.239
right. These models, they had code names like

00:01:21.239 --> 00:01:24.459
Zenith, Summit, Lobster, Starfish, Nectarine,

00:01:24.579 --> 00:01:28.900
and O3 Alpha. Well, they just appeared. And almost

00:01:28.900 --> 00:01:31.280
immediately, they started crushing some incredibly

00:01:31.280 --> 00:01:33.840
tough coding tasks, showing off capabilities

00:01:33.840 --> 00:01:36.519
we just haven't seen publicly. Then, you know,

00:01:36.540 --> 00:01:39.829
classic OpenAI style, poof, vanished as quickly

00:01:39.829 --> 00:01:42.189
as they came. It was quite something to watch.

00:01:42.349 --> 00:01:43.989
Right. And here's where it gets really interesting.

00:01:44.129 --> 00:01:46.609
The AI community pretty quickly started theorizing

00:01:46.609 --> 00:01:48.290
that these weren't separate models, but fragments.

00:01:48.989 --> 00:01:51.170
Components, maybe, of the GPT-5 everyone's waiting

00:01:51.170 --> 00:01:53.769
for. Think of them like individual parts being

00:01:53.769 --> 00:01:55.409
tested before the whole thing is assembled. Summit,

00:01:55.469 --> 00:01:57.829
for example. The reports say it generated over

00:01:57.829 --> 00:02:01.209
2,300 lines of working Starship UI code on its

00:02:01.209 --> 00:02:03.189
very first attempt. That's not just good. That's

00:02:03.189 --> 00:02:06.049
kind of an unprecedented level of complex output

00:02:06.049 --> 00:02:08.169
on a first go. And it wasn't just Summit either.

00:02:08.430 --> 00:02:11.069
Zenith and O3 Alpha. Yeah. They seem to lead

00:02:11.069 --> 00:02:13.689
at reasoning and general coding stuff. Lobster

00:02:13.689 --> 00:02:16.610
and Starfish felt maybe lighter, possibly open

00:02:16.610 --> 00:02:18.870
source variants, some people speculated. The

00:02:18.870 --> 00:02:20.770
main theory, which is pretty cool, is that these

00:02:20.770 --> 00:02:24.370
are pieces of a mixture-of-experts system, MoE.

00:02:24.969 --> 00:02:27.349
It's this powerful way to build AI. Imagine like

00:02:27.349 --> 00:02:29.650
stacking specialized Lego blocks of data and

00:02:29.650 --> 00:02:32.210
algorithms. Each block is good at one thing.

00:02:32.610 --> 00:02:34.509
Combine them and you get a much more capable,

00:02:34.669 --> 00:02:36.629
efficient system. And the idea is they'll eventually

00:02:36.629 --> 00:02:39.949
fuse into, well, GPT-5. It's a clever way to

00:02:39.949 --> 00:02:44.669
scale up. This modular strategy seems genuinely

00:02:44.669 --> 00:02:47.210
ingenious. OpenAI is essentially running live

00:02:47.210 --> 00:02:50.669
A/B tests on future AI cognition, right? They're

00:02:50.669 --> 00:02:52.490
watching how these pieces perform in the wild,

00:02:52.650 --> 00:02:54.669
getting tons of data, fine -tuning them. It's

00:02:54.669 --> 00:02:56.830
even reportedly outpacing Claude Sonnet 4, which

00:02:56.830 --> 00:02:58.409
for a long time was seen as the coding champ.

00:02:58.610 --> 00:03:00.750
And sure, Grok 4 might be a keeper, but these fragments,

00:03:00.930 --> 00:03:03.050
they really suggest GPT-5 is shaping up to be

00:03:03.050 --> 00:03:05.210
the heavyweight in raw capability. And what's

00:03:05.210 --> 00:03:07.310
fascinating here is that this MoE approach, this

00:03:07.310 --> 00:03:11.169
modular style, it probably isn't just for GPT

00:03:11.169 --> 00:03:13.129
-5. It's likely going to stick around for future

00:03:13.129 --> 00:03:15.409
versions, maybe even through GPT-8, believe

00:03:15.409 --> 00:03:18.870
it or not. So when GPT-5 officially drops, it

00:03:18.870 --> 00:03:21.150
won't feel like some big surprise reveal. It'll

00:03:21.150 --> 00:03:23.189
be more like, oh yeah, we saw the pieces of this

00:03:23.189 --> 00:03:26.210
already. And connecting this back, it really

00:03:26.210 --> 00:03:29.770
is Shipmas in July. We're watching OpenAI build

00:03:29.770 --> 00:03:33.389
its next big thing live in public. Instead of

00:03:33.389 --> 00:03:35.650
some staged event, they're kind of inviting the

00:03:35.650 --> 00:03:38.270
whole internet, the whole AI community to watch

00:03:38.270 --> 00:03:40.229
and even participate through these public tests.

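A quick aside on the mixture-of-experts idea discussed above: specialized components plus a router that picks which ones handle each input. Here's a toy sketch of that routing pattern in Python. Everything in it is illustrative, the dimensions, the linear "experts," and the softmax gate are made-up stand-ins, not OpenAI's actual architecture:

```python
import numpy as np

# Toy mixture-of-experts (MoE) forward pass. Sizes are arbitrary.
rng = np.random.default_rng(0)
DIM, N_EXPERTS, TOP_K = 8, 4, 2

# Each "expert" is just a small linear map standing in for a
# specialized sub-network trained on one kind of task.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]
# The router (gating network) scores how well each expert fits the input.
router = rng.normal(size=(DIM, N_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route x to the top-k experts and blend their outputs."""
    scores = x @ router                   # one score per expert
    top = np.argsort(scores)[-TOP_K:]     # indices of the best-matching experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the chosen experts only
    # Weighted sum of just the selected experts' outputs.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.normal(size=DIM))
print(out.shape)  # (8,)
```

The design point this illustrates is sparsity: only the top-k experts run for any given input, so total capacity can grow without every parameter firing on every query.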
00:03:40.449 --> 00:03:43.250
Whoa. Just imagine the scale of that. Scaling

00:03:43.250 --> 00:03:45.889
to, what, a billion queries? Seeing these pieces

00:03:45.889 --> 00:03:47.530
actually come together in real time is quite

00:03:47.530 --> 00:03:49.409
something. So what would you say is the biggest

00:03:49.409 --> 00:03:52.060
takeaway from seeing GPT-5 built out in the

00:03:52.060 --> 00:03:54.560
open like this. AI development is becoming open.

00:03:54.659 --> 00:03:56.860
It's like a real time public beta speeding up

00:03:56.860 --> 00:03:58.919
progress and getting everyone involved. OK, so

00:03:58.919 --> 00:04:01.199
that's the high end frontier. But what does this

00:04:01.199 --> 00:04:03.900
all mean for what AI can actually do like right

00:04:03.900 --> 00:04:06.680
now, in our daily lives? Let's shift gears a

00:04:06.680 --> 00:04:08.539
bit and dive into some of the more immediate

00:04:08.539 --> 00:04:10.780
practical highlights our sources picked up on.

00:04:10.939 --> 00:04:12.699
Yeah, it's pretty wild to see these things emerge,

00:04:12.900 --> 00:04:16.920
like OpenAI's ChatGPT Agent. It was shown just

00:04:16.920 --> 00:04:19.639
smoothly clicking through Cloudflare's CAPTCHA

00:04:19.639 --> 00:04:22.120
checkbox, you know, the I'm not a robot thing.

00:04:22.420 --> 00:04:25.019
And the coolest part, it apparently narrates

00:04:25.019 --> 00:04:27.660
what it's doing, explains its own success as

00:04:27.660 --> 00:04:30.779
it browses. That's a small step, maybe, but significant

00:04:30.779 --> 00:04:33.199
for autonomous agents. That narration piece.

00:04:33.459 --> 00:04:35.889
Yeah. It really does hint at a deeper level of

00:04:35.889 --> 00:04:37.949
understanding, doesn't it? We also saw this viral,

00:04:38.050 --> 00:04:40.449
like, 45-minute tutorial showing how to set

00:04:40.449 --> 00:04:43.110
up Claude Code to automate some surprisingly

00:04:43.110 --> 00:04:46.170
complex tasks. And then there's that story, really

00:04:46.170 --> 00:04:48.209
fascinating, about someone building an entire

00:04:48.209 --> 00:04:50.889
AI business from scratch in just two hours. And

00:04:50.889 --> 00:04:52.389
it actually worked. It really points towards

00:04:52.389 --> 00:04:54.889
a future where, you know, solopreneurship, small

00:04:54.889 --> 00:04:57.629
business, it's getting way more automated, lowering

00:04:57.629 --> 00:05:01.129
the barriers. And building on that work impact

00:05:01.129 --> 00:05:04.149
idea, there's this new study from Anthropic.

00:05:04.230 --> 00:05:07.310
It breaks down how AI is affecting over 700 different

00:05:07.310 --> 00:05:09.949
jobs. And it asks that question everyone's kind

00:05:09.949 --> 00:05:12.329
of wrestling with. Is AI going to replace you

00:05:12.329 --> 00:05:15.250
or will it just augment your work, help you collaborate?

00:05:15.769 --> 00:05:17.629
Definitely food for thought for all of us. Interestingly,

00:05:17.930 --> 00:05:20.389
Claude's been so popular, they've hit some scaling

00:05:20.389 --> 00:05:22.750
issues. They just put in two new usage caps.

00:05:22.930 --> 00:05:25.089
There's a total weekly cap and a specific one

00:05:25.089 --> 00:05:27.389
just for Claude Opus 4, their top model. Now,

00:05:27.389 --> 00:05:29.569
they say this only hits less than 5% of users,

00:05:29.790 --> 00:05:32.129
but, well, it's often the most active users,

00:05:32.329 --> 00:05:34.569
the power users. And they tend to be the loudest

00:05:34.569 --> 00:05:36.250
about it. It just shows the incredible demand.

00:05:36.529 --> 00:05:38.069
Oh, and check this out. This one's kind of mind

00:05:38.069 --> 00:05:41.430
-bending. Hunyuan World 1.0. It's a new system

00:05:41.430 --> 00:05:44.329
that creates entire editable 3D worlds from just

00:05:44.329 --> 00:05:47.079
a sentence or a picture. We're not just talking

00:05:47.079 --> 00:05:48.860
videos here. These are actual environments you

00:05:48.860 --> 00:05:52.399
can virtually walk around in. That's a huge jump

00:05:52.399 --> 00:05:54.819
for content creation. And on the business side,

00:05:55.060 --> 00:05:57.680
sources highlighted Micro1. They're a Scale

00:05:57.680 --> 00:06:00.720
AI rival, apparently raising funds now at a pretty

00:06:00.720 --> 00:06:03.500
impressive $500 million valuation. They've seen

00:06:03.500 --> 00:06:06.040
huge growth: revenue up five times this year alone,

00:06:06.240 --> 00:06:09.680
from $10 million to $50 million. And they expect

00:06:09.680 --> 00:06:13.290
to hit $100 million by September. That kind of

00:06:13.290 --> 00:06:15.470
speed really tells you about the hunger for AI

00:06:15.470 --> 00:06:17.850
infrastructure and services. Yeah, and practical

00:06:17.850 --> 00:06:20.129
problems are getting AI solutions, too. Like,

00:06:20.129 --> 00:06:22.069
is your AI agent always forgetting what you talked

00:06:22.069 --> 00:06:24.509
about earlier? Yeah. You can now build real long

00:06:24.509 --> 00:06:26.649
-term memory using Zep's knowledge graphs. It

00:06:26.649 --> 00:06:29.410
helps the AI keep context, learn over time, and

00:06:29.410 --> 00:06:31.509
keeps the API cost low, which is a big plus.

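The long-term-memory idea mentioned here, storing facts as graph edges and retrieving only the relevant ones instead of replaying the whole chat history, can be sketched in a few lines. This is a generic illustration of the concept, not any particular product's actual API:

```python
from collections import defaultdict

class MemoryGraph:
    """Tiny knowledge-graph memory for an AI agent (illustrative only)."""

    def __init__(self):
        # subject -> list of (relation, object) edges
        self.edges = defaultdict(list)

    def remember(self, subject: str, relation: str, obj: str) -> None:
        """Store one fact as a graph edge."""
        self.edges[subject].append((relation, obj))

    def recall(self, subject: str) -> list:
        """Fetch only the facts about one entity, rather than resending
        the entire conversation history on every request."""
        return self.edges.get(subject, [])

memory = MemoryGraph()
memory.remember("user", "prefers", "dark mode")
memory.remember("user", "works_on", "a Django app")
print(memory.recall("user"))  # [('prefers', 'dark mode'), ('works_on', 'a Django app')]
```

Because recall returns only the edges touching one entity, the context sent back to the model stays small, which is where the API-cost savings the hosts mention would come from.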
00:06:32.060 --> 00:06:34.459
You know, I still wrestle with prompt drift myself

00:06:34.459 --> 00:06:36.459
sometimes where the conversation just goes off

00:06:36.459 --> 00:06:38.579
track. So yeah, these long -term memory solutions,

00:06:38.699 --> 00:06:41.060
they seem absolutely key for anyone really using

00:06:41.060 --> 00:06:43.500
these tools seriously. Totally. And if you're

00:06:43.500 --> 00:06:45.860
looking for automation that's maybe more powerful

00:06:45.860 --> 00:06:48.620
than Zapier, there's a great guide on using n8n.

00:06:48.620 --> 00:06:51.259
It's open source, and you can use it to build your own personal intelligence

00:06:51.259 --> 00:06:54.079
agent. We also saw a guide on blending, like

00:06:54.079 --> 00:06:56.959
McKinsey consulting methods with AI, turning

00:06:56.959 --> 00:06:59.180
insights into actual strategies clients might

00:06:59.180 --> 00:07:02.220
use. And just a few quick tool mentions. Wordwriter

00:07:02.220 --> 00:07:04.500
says it writes 200 plus pages of research with

00:07:04.500 --> 00:07:08.420
references. Copycat automates web tasks. Startup

00:07:08.420 --> 00:07:11.300
Sonar tries to find hidden startup ideas on Reddit.

00:07:11.459 --> 00:07:14.120
And Free Image Bulk helps download lots of images

00:07:14.120 --> 00:07:16.279
from websites. That's a lot of different tools

00:07:16.279 --> 00:07:18.079
and abilities there. What feels like the common

00:07:18.079 --> 00:07:20.240
thread running through all these diverse applications?

00:07:20.699 --> 00:07:23.560
AI is getting incredibly specialized. It's automating

00:07:23.560 --> 00:07:27.269
really complex human tasks everywhere. OK, so

00:07:27.269 --> 00:07:29.509
beyond the individual tools, beyond the models

00:07:29.509 --> 00:07:32.029
themselves, there's this much larger strategic

00:07:32.029 --> 00:07:34.670
game being played globally. Our sources point

00:07:34.670 --> 00:07:37.509
to two very distinct, almost opposing visions

00:07:37.509 --> 00:07:39.850
for where AI is headed. Yeah, it's a fascinating

00:07:39.850 --> 00:07:43.209
contrast. Just days after the U.S. put out its

00:07:43.209 --> 00:07:46.449
AI action plan, which mostly emphasized deregulation,

00:07:46.689 --> 00:07:50.009
a race to global dominance led by private companies.

00:07:50.839 --> 00:07:53.579
China presented its own plan. And China's plan.

00:07:53.759 --> 00:07:56.959
It focuses heavily on global cooperation. Things

00:07:56.959 --> 00:07:59.720
like joint AI research, pushing for open data

00:07:59.720 --> 00:08:02.800
sets and model sharing, working towards shared

00:08:02.800 --> 00:08:05.360
computing resources, really trying to broaden

00:08:05.360 --> 00:08:08.120
access. Right. They're also pushing hard on education

00:08:08.120 --> 00:08:11.180
and access. Specifically, they're championing

00:08:11.180 --> 00:08:13.560
AI upskilling for developing nations, trying

00:08:13.560 --> 00:08:15.920
to give them a real seat at the table. And importantly,

00:08:16.000 --> 00:08:17.680
they're calling for risk management frameworks,

00:08:17.839 --> 00:08:20.569
AI ethics policies. developed with help from

00:08:20.569 --> 00:08:22.930
the UN. It's a very different governance model

00:08:22.930 --> 00:08:25.509
they're proposing. Their plan even encourages

00:08:25.509 --> 00:08:27.949
the global developer community to co-develop

00:08:27.949 --> 00:08:30.949
AI tools together, promoting open source instead

00:08:30.949 --> 00:08:33.029
of relying on what they call Western black box

00:08:33.029 --> 00:08:35.490
APIs. You know, those proprietary systems where

00:08:35.490 --> 00:08:37.190
you don't know what's inside, controlled by just

00:08:37.190 --> 00:08:39.950
a few big companies. It's a clear call for something

00:08:39.950 --> 00:08:41.990
more decentralized. So you can see it taking

00:08:41.990 --> 00:08:44.429
shape. Two really different philosophies. Yeah.

00:08:44.490 --> 00:08:46.870
The U.S. approach seems, well, faster, maybe.

00:08:47.320 --> 00:08:50.059
Richer in terms of private money, but more closed.

00:08:50.279 --> 00:08:53.200
It leans heavily on giants like OpenAI, Meta,

00:08:53.240 --> 00:08:56.179
Google, kind of a Silicon Valley model. And then

00:08:56.179 --> 00:08:59.019
China's vision is broader, shared, and explicitly

00:08:59.019 --> 00:09:02.320
open. Beijing is actively pitching itself as

00:09:02.320 --> 00:09:05.080
the AI partner for the Global South, engaging

00:09:05.080 --> 00:09:07.500
with regions the U.S. hasn't really prioritized

00:09:07.500 --> 00:09:09.620
in the same way. It really suggests that these

00:09:09.620 --> 00:09:12.720
open models, shared data sets, they're being

00:09:12.720 --> 00:09:15.159
positioned as a direct alternative to the Western

00:09:15.159 --> 00:09:18.330
AI silos. Which naturally leads to the big question.

00:09:18.570 --> 00:09:20.769
How might these two very different philosophies

00:09:20.769 --> 00:09:23.169
shape the future? Technology, geopolitics, all

00:09:23.169 --> 00:09:25.169
of it. We might really be heading towards two

00:09:25.169 --> 00:09:28.289
distinct global AI ecosystems. Two tech islands.

00:09:28.870 --> 00:09:30.090
All right, to kind of round things out, let's

00:09:30.090 --> 00:09:31.690
just run through a few quick hits. These really

00:09:31.690 --> 00:09:34.789
highlight how pervasive AI is becoming just everywhere.

00:09:35.029 --> 00:09:37.409
Okay, get this one. A viral AI generated video.

00:09:37.570 --> 00:09:40.179
Cat riding a leopard. Got over 100 million views.

00:09:40.299 --> 00:09:42.740
It's just fun, sure, but it shows how engaging

00:09:42.740 --> 00:09:45.860
and easy this tech is becoming for, like, creative

00:09:45.860 --> 00:09:48.600
stuff. Microsoft Edge. It's basically an AI browser

00:09:48.600 --> 00:09:51.120
now. They fully launched Copilot Mode, built

00:09:51.120 --> 00:09:53.419
right in. Your web browser is now an intelligent

00:09:53.419 --> 00:09:55.899
assistant. And on the hardware side, huge news.

00:09:55.980 --> 00:09:59.580
Tesla just signed a massive $16.5 billion deal

00:09:59.580 --> 00:10:04.090
with Samsung for AI chips. That's not just big

00:10:04.090 --> 00:10:07.129
money. It signals the insane demand for specialized

00:10:07.129 --> 00:10:09.730
hardware as these models keep growing. Yeah,

00:10:09.870 --> 00:10:12.029
and related to that, tech giants like Google

00:10:12.029 --> 00:10:14.350
and Meta, they're investing heavily in nuclear

00:10:14.350 --> 00:10:16.929
and hydropower, basically building their own

00:10:16.929 --> 00:10:19.169
energy grids to power their massive data centers.

00:10:19.549 --> 00:10:22.809
That's a huge, often overlooked sign of the monumental

00:10:22.809 --> 00:10:25.789
energy needs future AI is going to demand. And

00:10:25.789 --> 00:10:28.190
finally, sort of bringing it back to geopolitics,

00:10:28.190 --> 00:10:31.429
a group of 20 security experts urged former President

00:10:31.429 --> 00:10:34.529
Trump to restrict Nvidia H20 chip sales to China,

00:10:34.710 --> 00:10:36.909
pointing directly at the high stakes involved

00:10:36.909 --> 00:10:39.950
in controlling this advanced AI tech. AI is just

00:10:39.950 --> 00:10:42.330
rapidly integrating into pretty much every single

00:10:42.330 --> 00:10:45.049
part of our daily lives. [Mid-roll sponsor read.]

00:10:45.230 --> 00:10:47.269
So as we wrap up this deep dive, I think the

00:10:47.269 --> 00:10:49.590
big idea that really stands out is just the incredible

00:10:49.590 --> 00:10:53.330
speed, the velocity, and the multifaceted nature

00:10:53.330 --> 00:10:56.529
of AI development right now. It's moving unbelievably

00:10:56.529 --> 00:10:59.090
fast on so many different fronts all at once.

00:10:59.149 --> 00:11:00.750
Yeah, we're seeing these cutting -edge models,

00:11:00.909 --> 00:11:04.190
like maybe GPT-5, being built and tested right

00:11:04.190 --> 00:11:06.230
out in the open. Yeah. And they're showing capabilities

00:11:06.230 --> 00:11:09.230
that are, frankly, almost hard to believe sometimes.

00:11:10.080 --> 00:11:12.759
And at the exact same time, practical AI tools

00:11:12.759 --> 00:11:15.320
are just exploding, making automation, content

00:11:15.320 --> 00:11:18.580
creation, even starting a business way easier

00:11:18.580 --> 00:11:22.159
and more accessible for regular users, for solopreneurs.

00:11:22.299 --> 00:11:25.159
And then on that global scale, these two fundamentally

00:11:25.159 --> 00:11:28.159
different visions for AI's future, they're really

00:11:28.159 --> 00:11:31.019
solidifying, potentially leading to these distinct

00:11:31.019 --> 00:11:33.620
global ecosystems, each with its own rules, its

00:11:33.620 --> 00:11:35.799
own approach. We've certainly covered a lot today.

00:11:35.899 --> 00:11:37.679
It's probably worth taking a moment to think

00:11:37.679 --> 00:11:39.940
about how these developments might impact your

00:11:39.940 --> 00:11:43.139
own work or maybe just your daily life. The ripples

00:11:43.139 --> 00:11:45.600
are definitely already spreading. And maybe here's

00:11:45.600 --> 00:11:47.860
a provocative thought for you to mull over. As

00:11:47.860 --> 00:11:52.360
AI gets more modular, more distributed. You know,

00:11:52.360 --> 00:11:54.580
all these specialized agents and open source

00:11:54.580 --> 00:11:57.379
models popping up. Well, the whole idea of one

00:11:57.379 --> 00:12:00.919
big AI, like one single superintelligence, might become

00:12:00.919 --> 00:12:03.519
less important than having this vast network

00:12:03.519 --> 00:12:05.980
of many interconnected specialized intelligences

00:12:05.980 --> 00:12:08.019
working together. Something to think about. Thank

00:12:08.019 --> 00:12:09.720
you for joining us for this deep dive. Until

00:12:09.720 --> 00:12:11.139
next time. [Outro music.]
