WEBVTT

00:00:00.000 --> 00:00:02.000
The biggest names in American AI, we're talking

00:00:02.000 --> 00:00:05.599
OpenAI, Google, Meta. They're basically invisible

00:00:05.599 --> 00:00:08.380
in the world's most used open AI models. They've

00:00:08.380 --> 00:00:11.080
just completely ceded the throne. It's an unprecedented

00:00:11.080 --> 00:00:14.000
statistical flip, and we've got the data. We're

00:00:14.000 --> 00:00:18.600
talking about 2.2 billion global downloads in

00:00:18.600 --> 00:00:21.760
just one year. This isn't just a minor shift

00:00:21.760 --> 00:00:23.739
in the rankings, you know. It's a fundamental

00:00:23.739 --> 00:00:27.280
rebalancing of global AI leadership. The scale

00:00:27.280 --> 00:00:29.929
confirms it. Welcome back to the Deep Dive. Today,

00:00:29.989 --> 00:00:32.390
we're taking a calm, curious look at how quickly

00:00:32.390 --> 00:00:35.250
the foundational rules of the AI ecosystem are

00:00:35.250 --> 00:00:37.170
just being completely rewritten. The pace of

00:00:37.170 --> 00:00:40.009
change has never felt faster. That's right. And

00:00:40.009 --> 00:00:41.630
we're here to guide you through this. Our mission

00:00:41.630 --> 00:00:44.070
today is to really distill three critical shifts

00:00:44.070 --> 00:00:45.649
you need to understand right now. First, we're

00:00:45.649 --> 00:00:47.649
going to unpack the hard data that confirms this

00:00:47.649 --> 00:00:51.030
new dominant force in open source AI. And it's

00:00:51.030 --> 00:00:53.170
probably going to surprise you. Second, we have

00:00:53.170 --> 00:00:55.509
a practical playbook for staying visible in what

00:00:55.509 --> 00:00:57.689
we're now calling the generative engine optimization,

00:00:57.899 --> 00:01:01.640
or GEO, era. And finally, we'll dive into a pure

00:01:01.640 --> 00:01:04.780
scientific breakthrough, an AI system that uses

00:01:04.780 --> 00:01:07.219
something called vibe-proving to solve mathematical

00:01:07.219 --> 00:01:09.540
problems that have been open for over 30 years.

00:01:09.700 --> 00:01:12.340
And it does it in hours, not decades. Okay, let's

00:01:12.340 --> 00:01:14.439
get into this monumental shift in the open source

00:01:14.439 --> 00:01:17.540
world because the data really is a shock. This

00:01:17.540 --> 00:01:19.980
recent study from MIT and Hugging Face tracked

00:01:19.980 --> 00:01:23.859
massive model usage and, well, it revealed something

00:01:23.859 --> 00:01:26.540
remarkable. It's the headline of the entire week.

00:01:26.989 --> 00:01:29.709
So the study tracked 2.2 billion downloads on

00:01:29.709 --> 00:01:32.790
Hugging Face between August 2024 and August 2025.

00:01:33.510 --> 00:01:35.989
And when they sliced the data by the creator's

00:01:35.989 --> 00:01:38.909
origin, China officially surpassed the U.S.

00:01:38.930 --> 00:01:41.769
in the open AI ecosystem. Can you give us the

00:01:41.769 --> 00:01:43.709
exact numbers here? Sure. China now leads with

00:01:43.709 --> 00:01:46.730
17.1% of all those model downloads. The U.S.

00:01:46.730 --> 00:01:49.870
is just behind at 15.8%. But the really deep

00:01:49.870 --> 00:01:51.930
insight is who makes up those percentages. Right.
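
NOTE
The slicing described here can be sketched in a few lines. This is an illustrative Python tally with invented numbers, not the study's actual dataset or methodology.
```python
from collections import defaultdict
# Hypothetical download records; model names and counts are invented for illustration.
downloads = [
    {"model": "deepseek-r1", "origin": "China", "count": 120},
    {"model": "qwen-2.5", "origin": "China", "count": 51},
    {"model": "comfy-workflow", "origin": "U.S.", "count": 54},
    {"model": "mistral-7b", "origin": "France", "count": 40},
]
def share_by_origin(rows):
    """Return each creator origin's percentage of total downloads."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["origin"]] += row["count"]
    grand = sum(totals.values())
    return {origin: round(100 * n / grand, 1) for origin, n in totals.items()}
print(share_by_origin(downloads))  # {'China': 64.5, 'U.S.': 20.4, 'France': 15.1}
```
Summing each origin's downloads against the grand total is all it takes to produce the kind of percentage leaderboard the study reports.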

00:01:52.010 --> 00:01:54.170
And what's absolutely staggering here is who

00:01:54.170 --> 00:02:04.500
is missing from the U.S. side. And here is the

00:02:04.500 --> 00:02:06.840
kicker, the thing that fundamentally changes

00:02:06.840 --> 00:02:10.680
how we view the global AI race. The U.S. leaders

00:02:10.680 --> 00:02:12.919
we talk about every single day, Google, Meta,

00:02:13.020 --> 00:02:18.439
OpenAI, have a 0% presence in the 2024 to 2025

00:02:18.439 --> 00:02:21.300
top charts. They are literally invisible in the

00:02:21.300 --> 00:02:23.699
open marketplace. That feels so counterintuitive.

00:02:23.740 --> 00:02:25.919
We hear about them constantly. Only one notable

00:02:25.919 --> 00:02:28.460
U.S. contributor even makes the list, and that's

00:02:28.460 --> 00:02:31.659
Comfy, clocking in at 5.4%. Right. And for listeners

00:02:31.659 --> 00:02:34.199
who might not know, Comfy isn't an LLM like DeepSeek

00:02:34.199 --> 00:02:37.719
or Qwen. It's a popular U.S.-based workflow

00:02:37.719 --> 00:02:40.419
and tooling system. It's specifically for AI image

00:02:40.419 --> 00:02:42.120
generation. It just highlights that the American

00:02:42.120 --> 00:02:44.439
presence is in the utility layer, not the foundation

00:02:44.439 --> 00:02:46.800
model layer. So if you look at the strategic

00:02:46.800 --> 00:02:49.080
divergence, it becomes crystal clear what's happening.

00:02:49.319 --> 00:02:51.580
U.S. companies are focused almost entirely on

00:02:51.580 --> 00:02:54.900
closed APIs and high margin product monetization.

00:02:54.960 --> 00:02:56.560
They want you using their subscription services.

00:02:57.159 --> 00:02:59.800
Exactly. We see Meta kind of sitting on Llama

00:02:59.800 --> 00:03:02.860
3, carefully controlling its distribution. And

00:03:02.860 --> 00:03:05.539
OpenAI, which, you know, pioneered open source,

00:03:05.780 --> 00:03:08.280
it shifted away from open releases years ago.

00:03:08.780 --> 00:03:11.120
Google's best models are often locked behind

00:03:11.120 --> 00:03:13.379
cloud subscriptions. It's all about immediate

00:03:13.379 --> 00:03:16.080
financial returns and proprietary control. Meanwhile,

00:03:16.199 --> 00:03:18.500
the Chinese counter strategy is aggressively

00:03:18.500 --> 00:03:22.599
open. DeepSeek is shipping incredibly fast, highly

00:03:22.599 --> 00:03:25.460
multilingual models and releasing usable checkpoints

00:03:25.460 --> 00:03:28.280
almost weekly. Qwen is doing the same, improving

00:03:28.280 --> 00:03:31.360
its whole multilingual suite almost in real time.

00:03:31.740 --> 00:03:33.539
They're playing the long game. They're trying

00:03:33.539 --> 00:03:35.979
to establish their models as the default tool

00:03:35.979 --> 00:03:38.439
for developers all over the world. What used

00:03:38.439 --> 00:03:41.180
to be a U.S.-led open source movement, think

00:03:41.180 --> 00:03:43.780
of like the early days of Linux or TensorFlow,

00:03:44.180 --> 00:03:46.599
that's now becoming a China-led hardware plus

00:03:46.599 --> 00:03:48.939
weights game. Meaning the foundation models themselves,

00:03:49.199 --> 00:03:51.379
the weights, the multilingual support are now

00:03:51.379 --> 00:03:53.840
being set globally by companies optimized for

00:03:53.840 --> 00:03:56.180
fast open distribution. Which brings up a really

00:03:56.180 --> 00:03:58.620
important question. What is the long term, maybe

00:03:58.620 --> 00:04:02.280
even geopolitical impact of U.S. companies prioritizing

00:04:02.280 --> 00:04:05.219
closed monetization over leading the global open

00:04:05.219 --> 00:04:07.599
source standards? The global standard for open

00:04:07.599 --> 00:04:10.840
AI tools is rapidly becoming Chinese led technology.

00:04:11.479 --> 00:04:13.639
That brings us to the practical side of all this

00:04:13.639 --> 00:04:17.100
visibility. For decades, ranking on Google was

00:04:17.100 --> 00:04:20.319
kind of a blunt tool: keyword stuffing, generating

00:04:20.319 --> 00:04:23.660
backlinks. That playbook is completely dead now.

00:04:23.779 --> 00:04:26.660
That old SEO, search engine optimization, is just

00:04:26.660 --> 00:04:29.000
irrelevant in a generative future. We are now

00:04:29.000 --> 00:04:31.620
firmly in the era of generative engine optimization.

00:04:32.970 --> 00:04:35.170
Let's define GEO simply for everyone listening.

00:04:35.329 --> 00:04:38.029
Sure. GEO is just making your content easy for

00:04:38.029 --> 00:04:40.610
AI models to read, synthesize, and summarize

00:04:40.610 --> 00:04:42.709
accurately. You have to think of it as writing

00:04:42.709 --> 00:04:44.649
not just for human scanning, but for machine

00:04:44.649 --> 00:04:46.709
digestion. And here's where it gets really interesting

00:04:46.709 --> 00:04:49.149
because the big players like Google, Microsoft,

00:04:49.470 --> 00:04:51.189
Perplexity, they've all shared their core advice.

00:04:51.649 --> 00:04:56.279
AI search heavily favors content that is... fluent

00:04:56.279 --> 00:04:58.860
in machine-readable language. Which means structure

00:04:58.860 --> 00:05:01.019
is everything. You need strong brand clarity,

00:05:01.300 --> 00:05:04.000
sure, but more importantly, a predictable structure.

00:05:04.319 --> 00:05:06.500
Content has to be designed to be synthesized.

00:05:06.600 --> 00:05:08.899
If a model can't clearly parse your argument,

00:05:09.040 --> 00:05:11.439
your facts, your structure, like stacking Lego

00:05:11.439 --> 00:05:14.300
blocks of data, you just won't show up. And we're

00:05:14.300 --> 00:05:15.899
talking about more than just, you know, putting

00:05:15.899 --> 00:05:18.980
in some H1 tags, right? It's about clear semantic

00:05:18.980 --> 00:05:22.019
relationships, using structured data, getting

00:05:22.019 --> 00:05:25.399
rid of ambiguity. It's tough. Because the models

00:05:25.399 --> 00:05:27.600
are also trained on human language, which is

00:05:27.600 --> 00:05:30.339
messy by nature. I'll be honest, I still wrestle

00:05:30.339 --> 00:05:32.879
with prompt drift myself, you know, trying to

00:05:32.879 --> 00:05:34.680
keep my inputs perfectly clean and my outputs

00:05:34.680 --> 00:05:37.279
perfectly structured for these new rules. It's

00:05:37.279 --> 00:05:39.600
harder than it looks to do that consistently

00:05:39.600 --> 00:05:42.100
in your daily work. That vulnerability is real

00:05:42.100 --> 00:05:44.920
for everyone. The human brain is built for nuance,

00:05:45.000 --> 00:05:48.240
but machines are built for clarity. So given

00:05:48.240 --> 00:05:51.060
these new machine readability rules, how should

00:05:51.060 --> 00:05:54.040
we really rethink content creation from the ground

00:05:54.040 --> 00:05:56.339
up? Content needs to be structured and clear

00:05:56.339 --> 00:05:58.420
for machine synthesis, not just human scanning.
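
NOTE
One concrete way to be "fluent in machine-readable language" is schema.org-style JSON-LD markup. A minimal Python sketch; the helper name and field choices are illustrative assumptions, not official guidance from any search provider.
```python
import json
def to_json_ld(headline, author, facts):
    """Wrap an article's key claims in a schema.org Article object (hypothetical helper)."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "about": facts,  # one unambiguous claim per entry
    }, indent=2)
snippet = to_json_ld(
    "Open-model downloads shift toward China",
    "The Deep Dive",
    ["China: 17.1% of tracked downloads", "U.S.: 15.8%"],
)
print(snippet)
```
The point is structural: each fact sits in a labeled field, so a model can lift it without having to parse prose.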

00:05:58.579 --> 00:06:00.220
All right, let's shift gears and look at some

00:06:00.220 --> 00:06:02.120
of the major news highlights hitting the ecosystem

00:06:02.120 --> 00:06:04.300
this week, starting with model spread and investment.

00:06:04.720 --> 00:06:07.079
What's fascinating here is just the sheer reach.

00:06:07.300 --> 00:06:10.819
Google's Gemini 3 Pro and the advanced Nano Banana

00:06:10.819 --> 00:06:14.060
Pro are now live in about 120 countries. They

00:06:14.060 --> 00:06:16.639
are expanding globally at light speed. OK, you

00:06:16.639 --> 00:06:18.959
have to tell us what Nano Banana Pro is, because

00:06:18.959 --> 00:06:21.639
that sounds like some internal jargon, not a

00:06:21.639 --> 00:06:24.120
public model name. It totally is. Nano Banana

00:06:24.120 --> 00:06:26.860
Pro is a highly advanced Google model. It's often

00:06:26.860 --> 00:06:28.819
used for these underlying visual and spatial

00:06:28.819 --> 00:06:31.579
tasks. So its practical function is unlocking

00:06:31.579 --> 00:06:34.439
character consistency in image generation, 3D

00:06:34.439 --> 00:06:37.980
translation. It handles the really finicky, detailed

00:06:37.980 --> 00:06:40.540
work behind the scenes. And for users who are

00:06:40.540 --> 00:06:42.779
interacting with the Pro tier, this ability to

00:06:42.779 --> 00:06:45.220
tap thinking with 3Pro right in the search bar

00:06:45.220 --> 00:06:48.399
is huge. It turns these complex queries into

00:06:48.399 --> 00:06:51.500
interactive, almost masterful visual summaries.

00:06:51.920 --> 00:06:54.519
That's real power for the user. And beyond the

00:06:54.519 --> 00:06:56.500
models, we saw a lot of investment signaling

00:06:56.500 --> 00:06:58.959
market confidence. Black Forest Labs just raised

00:06:58.959 --> 00:07:01.980
$300 million. That funding is going to support

00:07:01.980 --> 00:07:04.079
major tools and infrastructure that people use

00:07:04.079 --> 00:07:06.819
every day, including models like Grok, Adobe's

00:07:06.819 --> 00:07:09.959
platform integration, and Flux 2's new 4K image

00:07:09.959 --> 00:07:11.980
model. The money's flowing to infrastructure.

00:07:12.399 --> 00:07:15.019
And if you connect that investment to the bigger

00:07:15.019 --> 00:07:17.120
picture, you start to see these foundational

00:07:17.120 --> 00:07:20.600
shifts in education. AI is quickly becoming the

00:07:20.600 --> 00:07:23.699
new "it" major. Enrollment at places like MIT is

00:07:23.699 --> 00:07:27.220
exploding for AI-specific tracks, and it's pushing

00:07:27.220 --> 00:07:28.779
out some of the traditional computer science

00:07:28.779 --> 00:07:31.399
programs. Yeah, and that educational shift is

00:07:31.399 --> 00:07:33.339
already playing out in the workforce, and it's

00:07:33.339 --> 00:07:36.060
happening fast. Accenture, the global consulting

00:07:36.060 --> 00:07:39.800
giant, just hard-rebranded 800,000 employee

00:07:39.800 --> 00:07:43.180
roles. They are now called reinventors. That

00:07:43.180 --> 00:07:45.399
sends a pretty clear message, doesn't it? It's

00:07:45.399 --> 00:07:48.819
basically, learn GenAI or you're out. It's a mandate

00:07:48.819 --> 00:07:50.959
for the whole consulting world. They are not

00:07:50.959 --> 00:07:53.040
waiting around. They're forcing massive, immediate

00:07:53.040 --> 00:07:56.360
change. And on a necessary societal note, OpenAI

00:07:56.360 --> 00:07:59.819
dropped a $2 million grant program focused specifically

00:07:59.819 --> 00:08:02.579
on AI mental health research. It's a critical

00:08:02.579 --> 00:08:04.879
area recognizing we need to study the cognitive

00:08:04.879 --> 00:08:07.060
and emotional impact of these tools as quickly

00:08:07.060 --> 00:08:09.779
as we build them. So do these rapid workforce

00:08:09.779 --> 00:08:13.000
and education changes signal a foundational sort

00:08:13.000 --> 00:08:15.220
of rapid obsolescence of traditional white collar

00:08:15.220 --> 00:08:18.060
roles, where you have to upskill instantly? Companies

00:08:18.060 --> 00:08:20.860
are redefining jobs instantly, forcing massive,

00:08:20.959 --> 00:08:23.800
immediate upskilling globally. Okay, let's turn

00:08:23.800 --> 00:08:26.019
our attention to one of the most exciting breakthroughs

00:08:26.019 --> 00:08:28.779
of the year. It's this blend of intuition and

00:08:28.779 --> 00:08:31.980
hard logic in mathematics. Harmonic's Aristotle

00:08:31.980 --> 00:08:34.700
AI system cracked a problem that has been open

00:08:34.700 --> 00:08:37.779
for 30 years. This is truly foundational stuff.

00:08:38.159 --> 00:08:40.860
Aristotle solved the famous Erdős problem, number

00:08:40.860 --> 00:08:44.059
124, which has been open since the 1990s. And

00:08:44.059 --> 00:08:46.500
the speed of the whole process is, well, it's

00:08:46.500 --> 00:08:48.759
what's so remarkable here. How fast are we talking?

00:08:49.019 --> 00:08:51.919
So Aristotle solved the entire problem, something

00:08:51.919 --> 00:08:54.220
that decades of human mathematicians couldn't

00:08:54.220 --> 00:08:57.120
crack, in just six hours. But here's the critical

00:08:57.120 --> 00:08:59.580
detail about rigor that changes everything. It

00:08:59.580 --> 00:09:02.740
then verified the full proof using Lean, a formal

00:09:02.740 --> 00:09:05.620
proof checker, in only 60 seconds. Let's pause

00:09:05.620 --> 00:09:07.759
on Lean for a second. What is that exactly? Lean

00:09:07.759 --> 00:09:10.700
is basically a rigorous logical accountant. It's

00:09:10.700 --> 00:09:12.759
software designed to check every single step

00:09:12.759 --> 00:09:15.779
of a mathematical proof to ensure absolute formalized

00:09:15.779 --> 00:09:18.179
accuracy. There's no room for human error or

00:09:18.179 --> 00:09:22.440
any fuzzy steps. Whoa. Imagine scaling this intuitive

00:09:22.440 --> 00:09:26.259
and rigorously verifiable approach across all

00:09:26.259 --> 00:09:29.240
the unsolved problems in science and math. That

00:09:29.240 --> 00:09:32.039
capability just fundamentally changes the research

00:09:32.039 --> 00:09:35.370
timeline. That is a moment of wonder. And that

00:09:35.370 --> 00:09:37.669
speed and rigor is the essence of what Harmonic

00:09:37.669 --> 00:09:41.210
is calling vibe-proving. It's a fascinating concept

00:09:41.210 --> 00:09:44.049
that bridges two worlds that are usually separate.

00:09:44.169 --> 00:09:46.309
Explain vibe-proving for us. What does that name

00:09:46.309 --> 00:09:49.250
even mean? Vibe-proving means the AI intuitively

00:09:49.250 --> 00:09:53.090
explores the huge solution space. It takes these

00:09:53.090 --> 00:09:55.950
high level creative leaps a human mathematician

00:09:55.950 --> 00:09:58.090
might take late at night when they finally see

00:09:58.090 --> 00:10:00.570
the answer. It gets the vibe of the solution.

00:10:00.889 --> 00:10:03.360
And then it checks its own work. Precisely. It

00:10:03.360 --> 00:10:05.840
immediately formalizes that intuitive idea with

00:10:05.840 --> 00:10:08.480
symbolic rigor to make sure every single step

00:10:08.480 --> 00:10:10.759
is checkable and provable by something like Lean.
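
NOTE
For a flavor of what a formal proof checker accepts, here is a tiny Lean 4 theorem. It is a trivial stand-in (commutativity of addition, discharged by the library lemma Nat.add_comm), nothing to do with Erdős problem 124.
```lean
-- Lean accepts this only if every step type-checks;
-- there is no "trust me" escape hatch.
theorem swap_add (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```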

00:10:11.000 --> 00:10:13.960
So it blends this fluid discovery with hard proof.
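
NOTE
That "fluid discovery plus hard proof" loop can be caricatured as propose-then-verify: an untrusted generator guesses, and a strict checker certifies. A toy Python sketch over an invented mini-problem (smallest n with n squared above 2000), not the actual Aristotle system.
```python
def candidates():
    """Untrusted 'intuition': enumerate guesses (a real system searches far more cleverly)."""
    n = 1
    while True:
        yield n
        n += 1
def verifier(n):
    """Strict check, playing Lean's role: n*n exceeds 2000 and no smaller positive integer does."""
    return n * n > 2000 and all(m * m <= 2000 for m in range(1, n))
def vibe_prove(limit=10_000):
    for guess in candidates():
        if verifier(guess):
            return guess  # first candidate the checker certifies
        if guess > limit:
            raise RuntimeError("no certified candidate within limit")
print(vibe_prove())  # 45, since 45*45 = 2025 > 2000 and 44*44 = 1936
```
Only what the verifier certifies counts as a result, so the generator's creativity never has to be trusted.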

00:10:14.179 --> 00:10:16.960
It compresses years of expert work into just

00:10:16.960 --> 00:10:20.100
hours with zero loss of accuracy. And this isn't

00:10:20.100 --> 00:10:22.820
just a theory. Aristotle already earned an International

00:10:22.820 --> 00:10:25.740
Math Olympiad gold medal. It's already proven

00:10:25.740 --> 00:10:28.480
itself against the best human minds. Yeah, it

00:10:28.480 --> 00:10:31.080
now instantly ranks alongside the math reasoning

00:10:31.080 --> 00:10:33.659
capabilities of the big players like OpenAI and

00:10:33.659 --> 00:10:36.679
Google DeepMind. The difference is the speed

00:10:36.679 --> 00:10:40.379
of verifiable novel discovery. What does vibe-proving

00:10:40.379 --> 00:10:42.820
fundamentally change about the process

00:10:42.820 --> 00:10:45.139
of foundational mathematical research? It creates

00:10:45.139 --> 00:10:48.039
a blend of human-like creative intuition and

00:10:48.039 --> 00:10:51.000
machine-level absolute accuracy. That was a

00:10:51.000 --> 00:10:52.340
tremendous amount of ground we just covered.

00:10:52.480 --> 00:10:55.299
So let's synthesize the three core takeaways

00:10:55.299 --> 00:10:57.620
that you, the listener, really need to carry

00:10:57.620 --> 00:11:00.279
forward. First, remember this. The center of

00:11:00.279 --> 00:11:02.620
gravity for open AI leadership has decisively

00:11:02.620 --> 00:11:05.299
shifted to China. It's driven by these aggressive

00:11:05.299 --> 00:11:08.799
open strategies of DeepSeek and Qwen. The U.S.

00:11:08.799 --> 00:11:11.120
giants are just absent from the global open source

00:11:11.120 --> 00:11:14.320
table. Second, content survival demands immediate

00:11:14.320 --> 00:11:17.019
adaptation to machine synthesis through generative

00:11:17.019 --> 00:11:19.759
engine optimization. If the machines can't read

00:11:19.759 --> 00:11:21.960
your structured content perfectly, you won't

00:11:21.960 --> 00:11:24.389
exist in the generative search future. And finally,

00:11:24.570 --> 00:11:27.149
AI systems are now capable of generating foundational,

00:11:27.490 --> 00:11:30.129
verifiable new mathematical knowledge in hours.

00:11:30.389 --> 00:11:32.830
This is going to accelerate scientific discovery

00:11:32.830 --> 00:11:35.330
at a rate we've never seen before. Given that

00:11:35.330 --> 00:11:37.250
Aristotle solved a 30-year-old mathematical

00:11:37.250 --> 00:11:40.169
problem and then verified the entire solution

00:11:40.169 --> 00:11:43.450
in just 60 seconds, how quickly will AI systems

00:11:43.450 --> 00:11:45.909
move through the remaining known open problems

00:11:45.909 --> 00:11:49.190
in mathematics and science? That is the question

00:11:49.190 --> 00:11:51.269
we'll leave you with today. Keep exploring those

00:11:51.269 --> 00:11:53.070
concepts and we'll catch you on the next Deep

00:11:53.070 --> 00:11:53.429
Dive.
