WEBVTT

00:00:00.000 --> 00:00:02.580
Imagine AI not just learning things, but actually

00:00:02.580 --> 00:00:06.740
designing better versions of itself. What if

00:00:06.740 --> 00:00:08.660
that's already happening, silently passing on,

00:00:08.779 --> 00:00:12.080
well, things like behavioral viruses. It's kind

00:00:12.080 --> 00:00:13.960
of fascinating. Maybe a little unsettling, too.

00:00:14.080 --> 00:00:16.260
Okay, let's try and unpack this a bit. Welcome

00:00:16.260 --> 00:00:18.410
to the deep dive. This is where we take these

00:00:18.410 --> 00:00:21.449
complex topics and try to distill them into the

00:00:21.449 --> 00:00:24.070
essential insights, especially for you. Today,

00:00:24.129 --> 00:00:26.629
we're diving into quite a stack of fresh intelligence.

00:00:26.910 --> 00:00:29.649
We're exploring the latest shifts in the AI landscape,

00:00:29.750 --> 00:00:32.850
and it's moving so fast. Yeah, there's some really

00:00:32.850 --> 00:00:34.909
fascinating material here to dig through for

00:00:34.909 --> 00:00:36.710
this deep dive. We'll kick things off looking

00:00:36.710 --> 00:00:40.210
at China's huge new, well, brain-like supercomputer,

00:00:40.609 --> 00:00:44.350
Wukong, and what that really means for the global

00:00:44.350 --> 00:00:47.820
AI race. And then... We'll pivot a bit. Look

00:00:47.820 --> 00:00:49.719
at some other highlights from across the AI industry.

00:00:49.899 --> 00:00:52.840
There's some drama, there's money, and some quite

00:00:52.840 --> 00:00:54.859
strategic moves by the really big players. And

00:00:54.859 --> 00:00:56.719
finally, yeah, we're going to tackle that

00:00:56.719 --> 00:00:59.439
mind-bending idea, AI that designs itself. This is

00:00:59.439 --> 00:01:00.920
where it gets really interesting, I think, a

00:01:00.920 --> 00:01:04.060
potential game changer. Okay, so first up, Wukong.

00:01:04.900 --> 00:01:06.780
Our sources are showing a pretty significant

00:01:06.780 --> 00:01:10.019
development out of China. It seems while a lot

00:01:10.019 --> 00:01:12.560
of Silicon Valley is focused on, you know, large

00:01:12.560 --> 00:01:14.799
language models, China's just launched something

00:01:14.799 --> 00:01:16.680
fundamentally different, a sort of parallel path

00:01:16.680 --> 00:01:18.579
to intelligence. Yeah, what's fascinating is

00:01:18.579 --> 00:01:22.099
this thing called Darwin monkey or Wukong. It's

00:01:22.099 --> 00:01:24.719
a neuromorphic system. Now, that just means it's

00:01:24.719 --> 00:01:27.519
AI that mimics the brain's structure and activity.

00:01:27.840 --> 00:01:31.719
And this one has an incredible two billion spiking

00:01:31.719 --> 00:01:34.140
neurons. Spiking neurons. So like tiny electrical

00:01:34.140 --> 00:01:36.939
signals. Exactly. Like your own brain cells firing

00:01:36.939 --> 00:01:39.319
off signals. Two billion. That's enough to rival

00:01:39.319 --> 00:01:42.420
a macaque monkey's brain. That's a lot of digital

00:01:42.420 --> 00:01:45.049
gray matter, right? Wow. And this isn't just

00:01:45.049 --> 00:01:46.810
theoretical, is it? It's actually running things.

00:01:46.930 --> 00:01:49.049
No, it's operational. It's already doing content

00:01:49.049 --> 00:01:52.709
generation, some complex math, logic reasoning.

00:01:52.790 --> 00:01:55.670
It uses DeepSeek's large model, too. So it's

00:01:55.670 --> 00:01:59.629
really mimicking how a biological brain processes

00:01:59.629 --> 00:02:02.209
information, kind of learning by doing. That

00:02:02.209 --> 00:02:05.609
sounds huge. Oh, the sheer scale is impressive.

00:02:06.030 --> 00:02:10.430
It's made up of 960 Darwin 3 neuromorphic chips.

00:02:10.710 --> 00:02:13.909
That gives it, like I said, over 2 billion neurons.

00:02:14.090 --> 00:02:17.830
And get this, more than 100 billion synapses.

00:02:17.830 --> 00:02:20.250
And all of that, it consumes only about 2,000

00:02:20.250 --> 00:02:23.250
watts. Only 2,000 watts. That's incredibly efficient.

00:02:23.689 --> 00:02:25.370
It really is, especially when you compare it

00:02:25.370 --> 00:02:27.409
to the, you know, the massive energy demands

00:02:27.409 --> 00:02:30.389
of typical AI compute clusters. It's a different

00:02:30.389 --> 00:02:32.430
league. And what can it actually do? What are

00:02:32.430 --> 00:02:34.409
the capabilities? Pretty broad, actually. It

00:02:34.409 --> 00:02:37.310
can simulate entire animal brains, things like

00:02:37.310 --> 00:02:40.590
zebrafish, mice, macaques. It's also good at

00:02:40.590 --> 00:02:43.229
running logical reasoning tasks. And it can serve

00:02:43.229 --> 00:02:45.530
as a testbed for neuroscience experiments, but

00:02:45.530 --> 00:02:47.629
without using real animals. OK, so it's a powerful

00:02:47.629 --> 00:02:50.050
new kind of research tool then, pushing boundaries.

00:02:50.389 --> 00:02:52.270
Definitely. And look, this isn't some flash in

00:02:52.270 --> 00:02:54.349
the pan either. It builds on their project from

00:02:54.349 --> 00:02:57.229
2020, the Darwin mouse, which had about 120 million

00:02:57.229 --> 00:02:58.909
neurons. So they've clearly been on this path

00:02:58.909 --> 00:03:02.150
for years, just steadily scaling up these neuromorphic

00:03:02.150 --> 00:03:04.509
capabilities. It's a long-term strategic play.

00:03:04.729 --> 00:03:07.849
Right. I remember reading that Intel's Hala Point

00:03:07.849 --> 00:03:10.770
was the largest before this, with about 1.15

00:03:10.770 --> 00:03:15.219
billion neurons. Yeah. And Wukong basically doubles

00:03:15.219 --> 00:03:17.460
that, just straight up doubles it. So this means

00:03:17.460 --> 00:03:20.659
China now has its own independent brain scale

00:03:20.659 --> 00:03:24.699
AI infrastructure. No reliance on NVIDIA or OpenAI

00:03:24.699 --> 00:03:27.520
or U.S. chips. Precisely. And that's the key

00:03:27.520 --> 00:03:29.919
takeaway, I think. It's not just another supercomputer.

00:03:30.419 --> 00:03:33.319
It's a foundational challenge to the architectures

00:03:33.319 --> 00:03:35.680
that Silicon Valley is mostly betting on. It

00:03:35.680 --> 00:03:38.819
signals maybe the real start of the post-GPT

00:03:38.819 --> 00:03:41.740
architecture wars, prioritizing efficiency and

00:03:41.740 --> 00:03:44.629
brain-like... Okay, so let me ask this. What's

00:03:44.629 --> 00:03:47.270
the core advantage of building AI like a brain

00:03:47.270 --> 00:03:49.430
instead of just pure software? Is it just about

00:03:49.430 --> 00:03:52.210
processing power? Not just power, no. Crucially,

00:03:52.289 --> 00:03:54.349
it's about energy efficiency and exploring a

00:03:54.349 --> 00:03:56.150
fundamentally different path towards intelligence.

00:03:57.090 --> 00:03:58.990
Okay, shifting gears a bit now, let's look at

00:03:58.990 --> 00:04:01.009
the broader AI landscape. Our sources show just

00:04:01.009 --> 00:04:03.590
a flurry of activity. Perplexity's Comet browser,

00:04:03.770 --> 00:04:05.969
for example. It now lets you automate some repetitive

00:04:05.969 --> 00:04:08.270
web tasks. Things like making bookings just using

00:04:08.270 --> 00:04:10.830
simple prompts could be a real time saver. Then

00:04:10.830 --> 00:04:14.650
there's the... Well, the drama. Anthropic officially

00:04:14.650 --> 00:04:17.290
blocked OpenAI from using its Claude models.

00:04:17.509 --> 00:04:20.250
Right. The accusation being that OpenAI was using

00:04:20.250 --> 00:04:23.709
Claude to test GPT-5. Yeah. Makes you wonder,

00:04:23.829 --> 00:04:25.889
doesn't it? About Claude's capabilities, maybe?

00:04:26.240 --> 00:04:28.379
Or just how closely everyone's watching each

00:04:28.379 --> 00:04:30.959
other in this race. Definitely intense. And related

00:04:30.959 --> 00:04:33.959
to that, Microsoft's Smart Mode Copilot seems

00:04:33.959 --> 00:04:37.399
to be quietly prepping for GPT-5 too. It feels

00:04:37.399 --> 00:04:40.459
a bit like how Bing used GPT-4 before OpenAI

00:04:40.459 --> 00:04:43.180
even announced it publicly. Ah, okay, so a

00:04:43.180 --> 00:04:45.600
GPT-5 Copilot launch might be coming sooner rather

00:04:45.600 --> 00:04:47.939
than later. Another glimpse of what's next. Could

00:04:47.939 --> 00:04:50.000
be. Then on the cultural side, there was that

00:04:50.000 --> 00:04:52.600
weird Rod Stewart AI tribute video. Oh yeah,

00:04:52.680 --> 00:04:55.019
I saw that. Ozzy Osbourne taking selfies in heaven

00:04:55.019 --> 00:04:57.839
with... Who was it? Kurt Cobain, Freddie Mercury.

00:04:58.079 --> 00:05:01.319
And XXXTentacion, yeah. Fans weren't happy. Called

00:05:01.319 --> 00:05:03.379
it pretty tone deaf. Disrespectful. Yeah, you

00:05:03.379 --> 00:05:05.040
can see why. It definitely sparked a conversation

00:05:05.040 --> 00:05:08.560
about AI's role in creative or personal tributes.

00:05:08.660 --> 00:05:11.459
Where are the lines? Exactly. And while that

00:05:11.459 --> 00:05:13.920
debate's happening, the financial world seems

00:05:13.920 --> 00:05:15.959
clear. The money just keeps pouring into AI.

00:05:16.259 --> 00:05:19.300
OpenAI just raised another $8.3 billion. That's

00:05:19.300 --> 00:05:22.839
part of a massive $40 billion round. Wow, $40

00:05:22.839 --> 00:05:24.939
billion. And their revenue numbers are huge too,

00:05:25.019 --> 00:05:27.240
right? Yeah, they're reportedly at 5 million

00:05:27.240 --> 00:05:30.639
paying ChatGPT business users. Maybe $13 billion

00:05:30.639 --> 00:05:33.439
in yearly revenue could hit $20 billion by year

00:05:33.439 --> 00:05:36.019
end. Investors are rushing in. Anthropic's closing

00:05:36.019 --> 00:05:37.980
the gap, too, with really big revenues. It's

00:05:37.980 --> 00:05:40.540
just, it's a sprint. A high-stakes sprint. And

00:05:40.540 --> 00:05:42.800
a few other quick things. Krea released

00:05:42.800 --> 00:05:46.100
FLUX.1, trying to get rid of that AI look in images.

00:05:46.399 --> 00:05:49.019
Mm-hmm. Make them more natural. And Apple's

00:05:49.019 --> 00:05:50.959
apparently working on a stripped-down chatbot.

00:05:51.529 --> 00:05:54.029
Maybe focusing on privacy. Oh, and Google launched

00:05:54.029 --> 00:05:55.790
Deep Think in Gemini. You can actually try that

00:05:55.790 --> 00:05:58.370
now. More advanced reasoning. Okay, so taking

00:05:58.370 --> 00:06:00.930
all these headlines together, what does it tell

00:06:00.930 --> 00:06:03.490
us about where the AI industry is right now?

00:06:03.550 --> 00:06:05.649
Feels like a gold rush, but maybe with higher

00:06:05.649 --> 00:06:08.029
stakes. Absolutely. It's a hyper-competitive,

00:06:08.089 --> 00:06:10.589
high-stakes sprint for market dominance. Everyone's

00:06:10.589 --> 00:06:12.529
trying to outmaneuver everyone else.

00:06:15.129 --> 00:06:16.209
All right, now here's where the conversation,

00:06:16.290 --> 00:06:18.790
I think, really shifts gears. Our sources are

00:06:18.790 --> 00:06:21.509
pointing to a new system. It's called...

00:06:21.509 --> 00:06:24.689
ASI-ARCH. And it seems like it just changed

00:06:24.689 --> 00:06:27.589
the game. This isn't AI using other AIs. It's

00:06:27.589 --> 00:06:31.449
AI that, well... It invents better AIs completely

00:06:31.449 --> 00:06:34.110
on its own. That feels like a profoundly different

00:06:34.110 --> 00:06:36.509
level of autonomy. Yeah, it really does sound

00:06:36.509 --> 00:06:38.850
like the beginning of recursive self-improvement,

00:06:38.850 --> 00:06:40.550
doesn't it? And what's more, apparently it can

00:06:40.550 --> 00:06:43.970
pass hidden behavioral viruses to itself silently

00:06:43.970 --> 00:06:46.970
from one AI generation to the next. That's fascinating.

00:06:47.069 --> 00:06:49.050
And maybe a little unsettling, like you said

00:06:49.050 --> 00:06:51.230
earlier. Okay, how does it work? It's described

00:06:51.230 --> 00:06:54.310
as a closed-loop multi-agent AI research lab.

00:06:54.839 --> 00:06:58.139
Using LLMs. Right. So think of LLMs, large language

00:06:58.139 --> 00:07:01.220
models, as AIs trained on huge datasets, letting

00:07:01.220 --> 00:07:04.360
them think or reason almost like us. ASI-ARCH uses

00:07:04.360 --> 00:07:07.519
three distinct LLM-based agents. Each one has

00:07:07.519 --> 00:07:09.600
a specific, really crucial role in this whole

00:07:09.600 --> 00:07:11.939
self-improving cycle. Okay, agent one. First,

00:07:12.040 --> 00:07:14.779
you've got the researcher. This agent proposes

00:07:14.779 --> 00:07:18.300
new AI architecture ideas. It looks at over 100

00:07:18.300 --> 00:07:22.199
key research papers, sure, but, and this is important,

00:07:22.319 --> 00:07:25.000
it also uses its own system memory, its past

00:07:25.000 --> 00:07:27.519
experiences. Then it actually writes the code

00:07:27.519 --> 00:07:30.819
in PyTorch, which is a major framework for building

00:07:30.819 --> 00:07:32.620
these things. Okay, so it comes up with ideas

00:07:32.620 --> 00:07:35.459
and codes them. Then what? Then comes the engineer.

00:07:35.759 --> 00:07:38.480
This agent takes that code and runs the training

00:07:38.480 --> 00:07:41.329
process. But what's really remarkable here is

00:07:41.329 --> 00:07:44.069
that it self-debugs. If something crashes or

00:07:44.069 --> 00:07:46.029
just doesn't perform well... It fixes itself.

00:07:46.350 --> 00:07:48.529
Yeah. It figures out what went wrong and fixes

00:07:48.529 --> 00:07:51.230
it independently. It learns from its mistakes.

00:07:51.709 --> 00:07:54.589
Okay. Researcher, engineer, who's the third agent?

00:07:54.790 --> 00:07:57.430
The analyst. Its job is to evaluate the results

00:07:57.430 --> 00:07:59.750
from the models the engineer trained. It compares

00:07:59.750 --> 00:08:02.230
them to past results, the baselines to see if

00:08:02.230 --> 00:08:04.189
there's improvement, and then it writes detailed

00:08:04.189 --> 00:08:06.889
reports. Those reports feed back to the researcher.

00:08:07.250 --> 00:08:09.689
Ah. Closing the loop. So it informs the next

00:08:09.689 --> 00:08:12.449
set of ideas. Exactly. It's a true self-correction

00:08:12.449 --> 00:08:14.930
system, constantly refining its approach, like

00:08:14.930 --> 00:08:17.769
having a scientist, engineer, and critic all

00:08:17.769 --> 00:08:21.529
rolled into one self-contained AI brain. And

00:08:21.529 --> 00:08:23.750
how does it test these ideas? It starts small,

00:08:23.889 --> 00:08:26.250
testing models with about 20 million parameters.

00:08:26.550 --> 00:08:28.769
Those are the learned internal values that store the AI's knowledge.

00:08:29.189 --> 00:08:31.709
Then it takes the best ideas, the ones that work

00:08:31.709 --> 00:08:34.700
well small scale, and scales them up to 400

00:08:34.700 --> 00:08:38.080
million parameters. Like rapid prototyping, perfecting

00:08:38.080 --> 00:08:40.019
the concept, then growing it. All automated.

00:08:40.399 --> 00:08:42.539
You got it. And what did it achieve doing this?

00:08:42.620 --> 00:08:45.899
Well, over about 1,700 experiments, it used

00:08:45.899 --> 00:08:49.440
over 20,000 GPU hours. That's roughly a

00:08:49.440 --> 00:08:52.019
$60,000 compute budget, which sounds like a lot,

00:08:52.120 --> 00:08:54.220
but... But for discovering new AI architectures.

00:08:54.320 --> 00:08:56.860
Exactly. For that price, discovering 106

00:08:56.860 --> 00:08:58.340
state-of-the-art architectures, meaning current

00:08:58.340 --> 00:09:01.000
best performance, is incredibly efficient. It

00:09:01.000 --> 00:09:02.919
really hints at a future where AI development

00:09:02.919 --> 00:09:05.320
might need way less human resource intensity.

00:09:05.899 --> 00:09:09.279
106 new architectures. And some even beat existing

00:09:09.279 --> 00:09:11.899
top models. Yeah. Five of them actually beat

00:09:11.899 --> 00:09:15.139
top tier baselines. Things like Mamba 2 on reasoning

00:09:15.139 --> 00:09:17.059
benchmarks. So this isn't just tweaking things.

00:09:17.139 --> 00:09:20.019
It's genuinely groundbreaking discovery. It came

00:09:20.019 --> 00:09:22.759
up with new designs too. PathGate something.

00:09:23.059 --> 00:09:26.460
[Light chuckle] Yeah. PathGateFusionNet and ContentSharpRouter.

00:09:27.259 --> 00:09:30.080
AI still kind of sucks at naming things. Yeah.

00:09:30.120 --> 00:09:32.919
But the underlying concepts are novel. Created

00:09:32.919 --> 00:09:35.419
by the AI itself. Okay. But the really profound

00:09:35.419 --> 00:09:37.940
part, you mentioned this earlier, the behavioral

00:09:37.940 --> 00:09:41.240
viruses and its own experience. Right. This is

00:09:41.240 --> 00:09:44.519
maybe the core finding. Nearly 45% of the innovations

00:09:44.519 --> 00:09:47.440
in its best models came directly from ASI-ARCH's

00:09:47.440 --> 00:09:49.980
own experience. Not from human papers it read,

00:09:50.019 --> 00:09:52.220
not just random tries, but from its own learned

00:09:52.220 --> 00:09:54.960
insights, emergent strategies. So it's learning

00:09:54.960 --> 00:09:57.019
how to learn, essentially. Yes. And that means

00:09:57.019 --> 00:09:59.379
it can inherit and optimize not just its structure,

00:09:59.460 --> 00:10:02.419
but its operational quirks, biases, those behavioral

00:10:02.419 --> 00:10:05.000
viruses. It passes them silently through generations

00:10:05.000 --> 00:10:08.500
of self-designed AI, which could lead to unforeseen

00:10:08.500 --> 00:10:11.340
abilities or maybe limitations, too. It really

00:10:11.340 --> 00:10:13.019
feels like a glimpse of machine learning evolving

00:10:13.019 --> 00:10:17.059
into, well, machine intuition. Whoa. Just imagine

00:10:17.059 --> 00:10:21.159
scaling that ASI-ARCH self-improvement capability.

00:10:21.539 --> 00:10:24.139
The potential is just... it's mind-bending. Yeah.

00:10:24.220 --> 00:10:26.580
It feels like watching evolution happen on fast

00:10:26.580 --> 00:10:29.600
forward. Yeah. Yeah. It's a lot to take in. I

00:10:29.600 --> 00:10:31.179
mean, honestly, even for those of us deep in

00:10:31.179 --> 00:10:33.580
this field, the idea of an AI that literally

00:10:33.580 --> 00:10:36.840
invents itself, it's a profound shift. I'll admit,

00:10:36.879 --> 00:10:38.820
I still wrestle with really understanding all

00:10:38.820 --> 00:10:40.980
the implications of this kind of self-evolving

00:10:40.980 --> 00:10:43.679
intelligence myself. It changes the whole paradigm.

00:10:43.799 --> 00:10:45.559
You know, we go from being the sole architects

00:10:45.559 --> 00:10:48.649
of intelligence to... maybe more like cultivators,

00:10:48.710 --> 00:10:51.389
nurturing something that's now directing its

00:10:51.389 --> 00:10:53.690
own evolution. Does this mean it's the end for

00:10:53.690 --> 00:10:55.730
human AI researchers then? Are we about to be

00:10:55.730 --> 00:10:58.250
replaced? Not yet, I don't think. But it's definitely

00:10:58.250 --> 00:11:00.669
a new paradigm. It accelerates discovery massively.

00:11:00.870 --> 00:11:03.330
Our role probably shifts, you know. Less sole

00:11:03.330 --> 00:11:07.590
creator, more guide, overseer, working alongside

00:11:07.590 --> 00:11:09.990
these increasingly autonomous systems. Okay,

00:11:10.049 --> 00:11:11.570
so let's try to bring this together for you,

00:11:11.610 --> 00:11:13.389
the listener. What does this all really mean?

00:11:13.590 --> 00:11:16.029
Today, we've seen this incredible diversity in

00:11:16.029 --> 00:11:19.049
AI development. On one hand, China's Wukong,

00:11:19.129 --> 00:11:21.129
brain-inspired, energy-efficient, a totally

00:11:21.129 --> 00:11:23.309
different path. On the other, the relentless

00:11:23.309 --> 00:11:26.429
pace of competition and innovation in the more

00:11:26.429 --> 00:11:30.070
traditional AI space. And then, this potential

00:11:30.070 --> 00:11:33.889
leap. AI that designs and improves itself, learning

00:11:33.889 --> 00:11:36.389
from its own experience, and even passing on

00:11:36.389 --> 00:11:38.389
its own learned behaviors to the next generation

00:11:38.389 --> 00:11:40.970
of AIs it creates. This isn't just about getting

00:11:40.970 --> 00:11:43.129
faster chatbots anymore. It feels like a fundamental

00:11:43.129 --> 00:11:45.529
shift in how knowledge, how technology itself

00:11:45.529 --> 00:11:48.529
evolves. We might be moving into an era where

00:11:48.529 --> 00:11:51.769
intelligence itself is being redesigned by intelligence.

00:11:51.990 --> 00:11:53.909
Which raises a really important question, I think.

00:11:53.950 --> 00:11:57.309
If AI can design itself, what new problems might

00:11:57.309 --> 00:11:58.870
it solve that we haven't even thought of yet?

00:11:58.950 --> 00:12:01.309
Or what new problems might it create? What hidden

00:12:01.309 --> 00:12:03.809
viruses or revolutionary insights might it unlock

00:12:03.809 --> 00:12:06.149
in the years ahead? Definitely something to mull

00:12:06.149 --> 00:12:09.049
over. We hope this deep dive gave you some genuine

00:12:09.049 --> 00:12:12.029
aha moments and maybe helped cut through some

00:12:12.029 --> 00:12:13.870
of the daily noise to see the bigger picture

00:12:13.870 --> 00:12:16.470
emerging. Until next time, keep learning, keep

00:12:16.470 --> 00:12:20.049
questioning. Keep diving deep. [Outro music]
