WEBVTT

00:00:00.000 --> 00:00:03.620
Is the AI boom of today a bubble? A big one,

00:00:03.700 --> 00:00:06.599
maybe. Potentially even bigger, perhaps riskier,

00:00:06.700 --> 00:00:10.119
than the dot-com frenzy back in the 90s. [Two

00:00:10.119 --> 00:00:12.679
-second silence.] That's quite a thought, isn't

00:00:12.679 --> 00:00:14.580
it? Especially when you think how much AI is

00:00:14.580 --> 00:00:17.420
already part of things. Welcome to the Deep Dive.

00:00:17.519 --> 00:00:18.920
Today, we're going to pull back the curtain a

00:00:18.920 --> 00:00:21.519
bit on some really big recent insights into this

00:00:21.519 --> 00:00:24.300
AI landscape. It's moving so fast. Yeah, it really

00:00:24.300 --> 00:00:27.579
is exponentially fast, it feels like. Almost

00:00:27.579 --> 00:00:29.359
impossible to keep track of everything. Right.

00:00:29.519 --> 00:00:32.140
Absolutely. So our mission today, really, is

00:00:32.140 --> 00:00:34.039
to try and cut through some of that noise for

00:00:34.039 --> 00:00:36.140
you. We're going to take a journey, look at a

00:00:36.140 --> 00:00:39.159
few key things. First up: the big money question.

00:00:39.159 --> 00:00:41.840
Is all this AI hype sustainable, really? Then

00:00:41.840 --> 00:00:43.920
we'll switch gears, look at some fascinating

00:00:43.920 --> 00:00:46.880
actual applications, things happening, you know,

00:00:46.880 --> 00:00:50.539
on the ground, industry moves, stuff away from

00:00:50.539 --> 00:00:52.859
just the stock market headlines. And then finally,

00:00:52.979 --> 00:00:54.759
we're going to wrap up with something kind of

00:00:54.759 --> 00:00:57.619
mind bending, an idea that might really change

00:00:57.619 --> 00:01:00.140
how we think about AI itself. What if all these

00:01:00.140 --> 00:01:03.119
different AI models, what if they're all... Basically

00:01:03.119 --> 00:01:05.640
learning the same thing, converging on some kind

00:01:05.640 --> 00:01:07.859
of, I don't know, universal understanding. Exactly.

00:01:07.879 --> 00:01:09.939
It's a wild concept. Makes you stop and think.

00:01:10.140 --> 00:01:12.480
OK, so let's start there with that big claim

00:01:12.480 --> 00:01:14.579
from the top. We've been looking at what Torsten

00:01:14.579 --> 00:01:16.939
Sløk, he's Apollo's chief economist, has been

00:01:16.939 --> 00:01:18.900
saying. And he doesn't really pull any punches.

00:01:19.000 --> 00:01:23.280
He says today's AI boom is more dangerously inflated

00:01:23.280 --> 00:01:25.760
than the 90s tech bubble. Wow. And this time

00:01:25.760 --> 00:01:29.239
he argues there's just much more at stake. Yeah.

00:01:29.340 --> 00:01:31.260
And what's really striking is the data he points

00:01:31.260 --> 00:01:34.500
to. He looked at the top 10 companies in the

00:01:34.500 --> 00:01:38.060
S&P 500. You know the ones: NVIDIA, Meta, Microsoft,

00:01:38.480 --> 00:01:40.879
Google, Amazon. His internal numbers are saying

00:01:40.879 --> 00:01:42.900
they're more overvalued right now than the biggest

00:01:42.900 --> 00:01:45.079
tech names were back then, like at the absolute

00:01:45.079 --> 00:01:47.939
peak of the late 90s. That is a really strong

00:01:47.939 --> 00:01:51.319
statement. What's behind that specifically? Well,

00:01:51.379 --> 00:01:54.120
it's their price-to-earnings ratios. P/E ratios.

00:01:54.459 --> 00:01:57.540
He calls them dot-com-busting levels. And for

00:01:57.540 --> 00:02:00.340
anyone maybe not tracking markets day to day,

00:02:00.400 --> 00:02:02.819
the P/E ratio is basically how much investors

00:02:02.819 --> 00:02:05.439
are willing to pay for, you know, a dollar of

00:02:05.439 --> 00:02:07.519
the company's profit. Right. High P/E means

00:02:07.519 --> 00:02:09.840
high expectations for the future. Exactly. But

00:02:09.840 --> 00:02:13.599
dot-com-busting levels, that kind of suggests expectations

00:02:13.599 --> 00:02:16.219
might be, let's say, a bit out of whack with

00:02:16.219 --> 00:02:19.280
reality or potential reality. And what's different

00:02:19.280 --> 00:02:21.539
this time? If he's right, if it is a bubble.
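
NOTE
Editor's aside: a tiny worked example of the price-to-earnings ratio the hosts just defined. The numbers are made up for illustration; they are not Apollo's or any company's actual figures.
# P/E = share price / earnings per share (annual profit per share).
share_price = 150.00        # dollars per share (hypothetical)
earnings_per_share = 3.00   # dollars of annual profit per share (hypothetical)
pe_ratio = share_price / earnings_per_share
print(f"P/E = {pe_ratio:.1f}")  # 50.0: investors pay $50 for each $1 of yearly profit
# A long-run market average P/E is often cited around 15-20, so a P/E of 50
# implies very high growth expectations already priced in.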

00:02:22.150 --> 00:02:24.469
What makes it different from the 90s one? The

00:02:24.469 --> 00:02:26.370
big thing, and this is what makes it kind of

00:02:26.370 --> 00:02:29.129
concerning, is the concentration risk. It's extreme,

00:02:29.330 --> 00:02:32.469
almost 40%. Think about that. 40% of the entire

00:02:32.469 --> 00:02:35.330
S&P 500 is tied up in just those 10 companies.
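
NOTE
Editor's aside: a minimal sketch of the concentration point above. Only the roughly-40%-in-ten-names figure comes from the conversation; the hypothetical 30% drawdown is the editor's illustration, not a forecast.
# If ~40% of an index sits in 10 stocks, an "index fund" is largely a bet on them.
top10_weight = 0.40          # share of the index in the top 10 names (from the discussion)
other_weight = 1.0 - top10_weight
top10_return = -0.30         # suppose the top 10 fall 30% (hypothetical)
other_return = 0.00          # and everything else is flat (hypothetical)
index_return = top10_weight * top10_return + other_weight * other_return
print(f"Index return: {index_return:.1%}")  # -12.0%, driven entirely by ten stocks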

00:02:35.530 --> 00:02:37.930
Wow. So if you're just buying a broad market

00:02:37.930 --> 00:02:40.590
index fund thinking you're diversified, you're

00:02:40.590 --> 00:02:43.169
not really. You're making a massive bet on AI

00:02:43.169 --> 00:02:45.550
hype continuing. So if those few stumble. The

00:02:45.550 --> 00:02:47.830
whole market feels it, much more severely potentially

00:02:47.830 --> 00:02:50.469
than back in the 90s. It really makes you ask,

00:02:50.509 --> 00:02:52.550
you know, how much of this value is real, tangible

00:02:52.550 --> 00:02:55.169
stuff, and how much is, well, just hype, belief.

00:02:55.719 --> 00:02:57.639
It's just staggering amounts of money moving

00:02:57.639 --> 00:03:00.180
around. And we see these huge financial plays

00:03:00.180 --> 00:03:03.000
that seem to, well, feed that bubble perception.

00:03:03.620 --> 00:03:06.400
NVIDIA, for example, reportedly planning to spend

00:03:06.400 --> 00:03:10.860
$500 billion on AI factories. $500 billion. It's

00:03:10.860 --> 00:03:13.379
hard to even picture. Right. And then Meta. Yeah.

00:03:14.120 --> 00:03:16.879
Supposedly offering like $100 million signing

00:03:16.879 --> 00:03:20.479
bonuses for top AI people. $100 million. For

00:03:20.479 --> 00:03:22.620
one person. It's a talent war out there, for

00:03:22.620 --> 00:03:25.219
sure. And it keeps going. Amazon maybe putting

00:03:25.219 --> 00:03:27.900
another $8 billion into Anthropic. And Meta,

00:03:28.020 --> 00:03:31.060
remember they bought ScaleAI for, what, $14 billion?

00:03:31.460 --> 00:03:33.580
Yeah. And then ScaleAI pretty soon after laid off 200

00:03:33.580 --> 00:03:36.139
people. It makes you wonder about the strategy

00:03:36.139 --> 00:03:38.620
sometimes. Is it real growth or just grabbing

00:03:38.620 --> 00:03:40.840
land in a gold rush? You know, if you look at

00:03:40.840 --> 00:03:43.020
the market sentiment overall, it gets even clearer

00:03:43.020 --> 00:03:45.800
why people are using words like frothy. There's

00:03:45.800 --> 00:03:49.479
this index, the BUZZ NextGen AI Sentiment Index. It

00:03:49.479 --> 00:03:52.060
tracks AI hype with retail traders, basically.

00:03:52.360 --> 00:03:55.780
It's apparently up 45% recently, trading

00:03:55.780 --> 00:03:58.419
29% above its 200-day moving average. Which

00:03:58.419 --> 00:04:00.639
is pretty high. Yeah, the 200-day average is

00:04:00.639 --> 00:04:02.979
a key thing traders watch for long -term trends.

00:04:03.479 --> 00:04:07.360
Being that far above it, it signals extreme optimism.
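
NOTE
Editor's aside: a hedged sketch of the 200-day moving average check the hosts mention. The prices below are a synthetic random walk, not the BUZZ index; real use would substitute actual daily closes.
import random
random.seed(0)
prices = [100.0]
for _ in range(399):
    prices.append(prices[-1] * (1 + random.gauss(0.001, 0.01)))  # fake daily closes
ma_200 = sum(prices[-200:]) / 200        # simple 200-day moving average
latest = prices[-1]
pct_above = (latest / ma_200 - 1) * 100
print(f"Latest close is {pct_above:.1f}% above the 200-day moving average")
# Traders read a large positive gap (the index discussed was ~29% above) as stretched optimism.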

00:04:07.819 --> 00:04:10.520
Almost like you said, a fever pitch. Right. BT's

00:04:10.520 --> 00:04:12.419
analyst just came out and said it's looking very

00:04:12.419 --> 00:04:14.780
frothy. That's the direct quote. Very frothy.

00:04:14.900 --> 00:04:16.740
It really does feel like everyone's jumping on

00:04:16.740 --> 00:04:18.660
the same train and it's just going faster and

00:04:18.660 --> 00:04:21.240
faster. So let's connect this back. What does

00:04:21.240 --> 00:04:24.660
this mean for you listening right now? If this

00:04:24.660 --> 00:04:28.480
bubble or frothiness or whatever, if it pops,

00:04:28.620 --> 00:04:30.860
well, the good news probably is it won't kill

00:04:30.860 --> 00:04:33.790
AI itself, the tech. Yeah, just like the dot

00:04:33.790 --> 00:04:35.850
-com crash didn't kill the internet. Right, exactly.

00:04:36.410 --> 00:04:38.689
The underlying tech is definitely moving forward.

00:04:38.730 --> 00:04:41.670
That innovation is real. But that doesn't mean

00:04:41.670 --> 00:04:43.290
NVIDIA should be priced like it's going to run

00:04:43.290 --> 00:04:45.810
the whole global economy. Fair point. So the

00:04:45.810 --> 00:04:48.230
fallout from a big correction could be pretty

00:04:48.230 --> 00:04:50.529
rough. It could hit general market funds, maybe

00:04:50.529 --> 00:04:53.269
even hurt funding for new AI startups trying

00:04:53.269 --> 00:04:56.569
to get off the ground. So thinking about right

00:04:56.569 --> 00:04:59.490
now this market, what's the biggest immediate

00:04:59.490 --> 00:05:01.810
risk for an investor, would you say? I'd say

00:05:01.810 --> 00:05:04.170
overexposure, being too heavily invested in just

00:05:04.170 --> 00:05:06.750
a few of these really volatile, highly valued

00:05:06.750 --> 00:05:10.850
companies. If expectations slip, losses could

00:05:10.850 --> 00:05:15.149
be significant. Okay. Let's pivot then. Away

00:05:15.149 --> 00:05:17.649
from the finance side, the potential frothiness,

00:05:17.829 --> 00:05:19.769
let's look at what's actually happening on the

00:05:19.769 --> 00:05:22.350
ground. Because beyond all the market talk, there

00:05:22.350 --> 00:05:26.110
are some really interesting real world uses and

00:05:26.110 --> 00:05:29.009
industry moves going on. Yeah, absolutely. And

00:05:29.009 --> 00:05:31.189
what's kind of wild is how fast some of these

00:05:31.189 --> 00:05:33.850
tools are scaling up. Look at Perplexity, the

00:05:33.850 --> 00:05:36.509
AI answer engine. They partnered with Airtel over

00:05:36.509 --> 00:05:39.470
in India, offered free Perplexity Pro for a year

00:05:39.470 --> 00:05:44.029
to all 360 million of Airtel's customers. 360 million.

00:05:44.170 --> 00:05:46.949
Yep. And boom, Perplexity shot straight to number

00:05:46.949 --> 00:05:49.339
one on the app charts in India. Overnight, practically.

00:05:49.339 --> 00:05:52.240
That's a whole new level of user acquisition.

00:05:52.300 --> 00:05:54.379
It really shows a new model for getting AI out

00:05:54.379 --> 00:05:56.579
there globally super fast. Incredible reach.

00:05:56.680 --> 00:05:58.000
And it shows people really want these tools.

00:05:58.180 --> 00:06:00.480
But speaking of services, there's this interesting

00:06:00.480 --> 00:06:03.720
point Mert Davici made that maybe AI won't replace

00:06:03.720 --> 00:06:05.779
a lot of services after all. Oh, yeah. Why is

00:06:05.779 --> 00:06:08.819
that? His argument is basically customers want

00:06:08.819 --> 00:06:12.019
to pay software prices for software, not salary

00:06:12.019 --> 00:06:15.139
prices. Ah, OK. Makes sense. So if an AI service

00:06:15.139 --> 00:06:18.699
costs as much as hiring a person. Maybe people

00:06:18.699 --> 00:06:20.879
just stick with the person. Yeah. Yeah. Who wants

00:06:20.879 --> 00:06:23.819
to pay a robot a full salary for like writing

00:06:23.819 --> 00:06:25.779
emails? There's definitely a psychological thing

00:06:25.779 --> 00:06:28.459
there. Definitely. And then on the voice side,

00:06:28.620 --> 00:06:32.000
Hume AI, their EVI 2 model was already pretty

00:06:32.000 --> 00:06:34.699
cool, making AI voices sound unique, kind of

00:06:34.699 --> 00:06:38.779
emotional. Now they've got EVI 3 out, supposedly

00:06:38.779 --> 00:06:40.980
captures even more personality, more nuance.

00:06:41.240 --> 00:06:42.939
So what can you do with that? Well, think about

00:06:42.939 --> 00:06:45.500
building really natural language coaches or podcast

00:06:45.500 --> 00:06:48.120
hosts that sound totally real or even like compassionate

00:06:48.120 --> 00:06:50.759
AI companions. It's about making AI interaction

00:06:50.759 --> 00:06:54.519
feel much more human. And here's something I

00:06:54.519 --> 00:06:56.060
think a lot of our listeners will find really

00:06:56.060 --> 00:06:57.839
interesting. An OpenAI researcher who actually

00:06:57.839 --> 00:07:00.540
just joined Meta gave this 30-minute masterclass

00:07:00.540 --> 00:07:03.579
online about how to create wealth in the AI economy.

00:07:03.740 --> 00:07:05.620
Oh, really? Yeah. And it apparently got like

00:07:05.620 --> 00:07:08.180
at least a million views super quickly. Wow.

00:07:08.379 --> 00:07:10.480
People are clearly hungry for that practical

00:07:10.480 --> 00:07:12.920
side. How do I actually use this stuff? Exactly.

00:07:13.120 --> 00:07:15.139
It shows AI isn't just abstract tech anymore.

00:07:15.220 --> 00:07:18.819
It's affecting careers, business models right

00:07:18.819 --> 00:07:21.250
now. And for practical tools that kind of do

00:07:21.250 --> 00:07:23.730
things for you, there's Perplexity Comet. People

00:07:23.730 --> 00:07:26.769
are calling it scary good. No. Imagine an AI

00:07:26.769 --> 00:07:30.230
assistant that hooks into your apps: email, calendar,

00:07:30.449 --> 00:07:32.949
project tools. Okay. And just does stuff for

00:07:32.949 --> 00:07:36.509
you autonomously. They highlighted like 10 powerful

00:07:36.509 --> 00:07:39.329
use cases, summarizing meetings, drafting complex

00:07:39.329 --> 00:07:41.720
emails. That sounds like it. A proper assistant.

00:07:42.180 --> 00:07:44.480
Finally. Feels like a real step towards AI that

00:07:44.480 --> 00:07:46.779
doesn't just respond, but takes initiative. That

00:07:46.779 --> 00:07:49.279
could be a huge productivity boost. And speaking

00:07:49.279 --> 00:07:51.339
of things taking off, another funding story that's

00:07:51.339 --> 00:07:54.680
just explosive. Lovable, the Swedish AI app builder.

00:07:54.839 --> 00:07:57.620
Lovable, yeah. Heard about them. They hit $75

00:07:57.620 --> 00:08:01.220
million in annual recurring revenue. In seven

00:08:01.220 --> 00:08:04.079
months. Seven months, seriously. Yeah, 2.3 million

00:08:04.079 --> 00:08:07.259
users, 180,000 paying customers. And they just

00:08:07.259 --> 00:08:10.800
raised a $200 million Series A at a $1.8 billion

00:08:10.800 --> 00:08:13.040
valuation only eight months after they launched.
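
NOTE
Editor's aside: a quick back-of-the-envelope check on the Lovable figures just quoted ($75M ARR, 2.3M users, 180,000 paying customers). This simply divides the stated numbers; it is not company-reported pricing.
arr = 75_000_000     # annual recurring revenue, dollars (as quoted)
users = 2_300_000    # total users (as quoted)
paying = 180_000     # paying customers (as quoted)
print(f"Paid conversion: {paying / users:.1%}")           # about 7.8% of users pay
print(f"ARR per paying customer: ${arr / paying:,.0f}")   # about $417 per year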

00:08:13.259 --> 00:08:15.860
Okay, that's insane growth. Just shows if you

00:08:15.860 --> 00:08:17.439
build something AI -powered that people actually

00:08:17.439 --> 00:08:20.660
need. It can scale incredibly fast. So looking

00:08:20.660 --> 00:08:23.079
at all these different applications, Perplexity,

00:08:23.100 --> 00:08:27.220
Hume, Lovable, what's the common thread? What

00:08:27.220 --> 00:08:29.639
stands out about how these tools are actually

00:08:29.639 --> 00:08:31.680
finding their place? I think it's that they're

00:08:31.680 --> 00:08:34.480
solving specific high -value problems. They create

00:08:34.480 --> 00:08:37.360
real value for users, and because of that, they

00:08:37.360 --> 00:08:40.110
can scale with just... Unbelievable speed. Right.

00:08:40.149 --> 00:08:41.909
Let's shift again. Just a few quick hits now.

00:08:42.009 --> 00:08:44.669
Kind of snapshots of how AI is evolving day by

00:08:44.669 --> 00:08:46.789
day almost. Lots of little interesting bits.

00:08:46.970 --> 00:08:50.029
Okay. Rapid fire. We saw this piece on 10 examples

00:08:50.029 --> 00:08:53.029
of how brutal robot testing made them more robust.

00:08:53.909 --> 00:08:56.610
It's kind of cool looking at the extreme engineering

00:08:56.610 --> 00:08:58.970
needed to make physical robots tough enough for

00:08:58.970 --> 00:09:01.549
the real world. It's not all just software. Right.

00:09:01.610 --> 00:09:05.250
The hardware side. And Elon Musk's xAI, they're

00:09:05.250 --> 00:09:07.970
launching something called Baby Grok, a chatbot

00:09:07.970 --> 00:09:10.769
for kids learning. Huh. AI for kids. That's interesting.

00:09:10.909 --> 00:09:13.210
Shows how it's moving into education and sensitive

00:09:13.210 --> 00:09:15.750
areas. Raises questions about safety, too. Definitely.

00:09:15.789 --> 00:09:19.029
And for users, anyone trying to get better results

00:09:19.029 --> 00:09:21.669
from AI, Anthropic put out some advice on writing

00:09:21.669 --> 00:09:23.759
effective prompts. Well, that's useful. Yeah.
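
NOTE
Editor's aside: the episode doesn't quote Anthropic's actual guidance, so this is a generic, hedged illustration of one widely recommended idea, stating role, task, constraints, and output format explicitly to reduce the "prompt drift" mentioned next. The wording and variable names are the editor's, not Anthropic's.
# A generic structured-prompt template; no particular API is assumed.
prompt_template = (
    "You are an assistant that summarizes meeting notes.\n"
    "Task: summarize the notes below in exactly 3 bullet points.\n"
    "Constraints: each bullet under 20 words; add nothing that is not in the notes.\n"
    "Output format: a plain bulleted list and nothing else.\n"
    "Notes:\n{notes}"
)
print(prompt_template.format(notes="Team agreed to ship v2 Friday; QA needs two more days."))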

00:09:23.860 --> 00:09:25.799
I mean, I still wrestle with prompt drift myself

00:09:25.799 --> 00:09:28.179
sometimes. You start asking for one thing and

00:09:28.179 --> 00:09:29.960
the AI kind of wanders off. Totally know what

00:09:29.960 --> 00:09:31.620
you mean. So yeah, any tips on prompting are

00:09:31.620 --> 00:09:34.840
gold. And another practical thing, DuckDuckGo,

00:09:34.960 --> 00:09:38.080
the search engine, they added a feature to let

00:09:38.080 --> 00:09:40.889
you block AI-generated images. Do you want to

00:09:40.889 --> 00:09:43.110
filter those out? Oh, interesting. Giving users

00:09:43.110 --> 00:09:46.610
control over authenticity. And then on the policy

00:09:46.610 --> 00:09:49.350
front, kind of big news. Meta decided not to

00:09:49.350 --> 00:09:52.789
sign the EU's AI code of practice. After reviewing

00:09:52.789 --> 00:09:55.529
it carefully, they said no. OK, why is that significant?

00:09:55.830 --> 00:09:58.429
Well, it signals a potential split, right? Between

00:09:58.429 --> 00:10:01.549
how major tech companies and regulators like

00:10:01.549 --> 00:10:04.529
the EU see AI governance shaping up could mean

00:10:04.529 --> 00:10:06.710
different rules in different places. Raises questions

00:10:06.710 --> 00:10:09.149
about how global companies navigate that. OK,

00:10:09.210 --> 00:10:11.549
so just from those few quick examples. Right.

00:10:11.929 --> 00:10:15.370
Robots, kids chat bots, prompting, image blocking,

00:10:15.570 --> 00:10:18.830
EU codes. What's the main takeaway? I'd say AI

00:10:18.830 --> 00:10:20.950
is getting really specialized really fast. And

00:10:20.950 --> 00:10:24.230
that's creating totally new needs for users and

00:10:24.230 --> 00:10:26.190
definitely raising some tricky new regulatory

00:10:26.190 --> 00:10:28.710
challenges. [Mid-roll sponsor placeholder.]

00:10:29.070 --> 00:10:31.950
Okay. Let's unpack this next part. Because here's

00:10:31.950 --> 00:10:34.330
where it gets, I think, really interesting. The

00:10:34.330 --> 00:10:37.350
big idea we sort of hinted at earlier. Imagine

00:10:37.350 --> 00:10:43.840
this. What if all AI models, all the different

00:10:43.840 --> 00:10:46.120
ones, different companies, different data, what

00:10:46.120 --> 00:10:48.799
if they're all fundamentally learning the same

00:10:48.799 --> 00:10:51.659
core thing? Like they're all uncovering pieces

00:10:51.659 --> 00:10:54.620
of the same universal puzzle about how the world

00:10:54.620 --> 00:10:57.059
works. Exactly. That's the fascinating premise

00:10:57.059 --> 00:10:59.620
here. The idea is as these models get bigger,

00:10:59.700 --> 00:11:03.039
get more complex, get smarter, essentially, they

00:11:03.039 --> 00:11:05.299
start learning the same basic relationships about

00:11:05.299 --> 00:11:08.000
the world. And they represent those relationships

00:11:08.000 --> 00:11:10.120
inside themselves in ways that are surprisingly

00:11:10.120 --> 00:11:12.059
similar. Even if they were trained differently.

00:11:12.240 --> 00:11:14.279
Yeah. It kind of suggests there might be one

00:11:14.279 --> 00:11:16.679
ideal sort of universal way to understand and

00:11:16.679 --> 00:11:19.019
represent knowledge. And maybe all these big,

00:11:19.080 --> 00:11:21.440
powerful models are just converging on it. Like

00:11:21.440 --> 00:11:23.299
they're all accidentally tuning into the same,

00:11:23.340 --> 00:11:25.460
I don't know, cosmic frequency of understanding.

00:11:25.840 --> 00:11:28.419
And this isn't just like a philosophical idea

00:11:28.419 --> 00:11:31.440
someone had. No, no. MIT actually put out research

00:11:31.440 --> 00:11:35.179
formalizing this in 2024. They showed that bigger

00:11:35.179 --> 00:11:37.990
models, both for vision, like understanding images,

00:11:38.129 --> 00:11:41.789
and for language, they learn features internally

00:11:41.789 --> 00:11:44.929
that are strikingly similar. So basically, the

00:11:44.929 --> 00:11:47.350
smarter they get, the more their internal maps

00:11:47.350 --> 00:11:49.190
of the world start to look alike. That's the

00:11:49.190 --> 00:11:51.049
gist of it, yeah. It's like they're all finding

00:11:51.049 --> 00:11:53.269
the same fundamental patterns in reality, no

00:11:53.269 --> 00:11:55.009
matter what specific data they started with.
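
NOTE
Editor's aside: researchers often quantify "internal maps looking alike" by comparing two models' feature matrices for the same inputs; linear centered kernel alignment (CKA) is one standard metric. This is a generic sketch on synthetic features, not the MIT paper's code.
import numpy as np
def linear_cka(x, y):
    # x, y: (samples, features) representations of the same inputs from two models.
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    hsic = ((x.T @ y) ** 2).sum()   # squared Frobenius norm of the cross-covariance
    return float(hsic / (np.linalg.norm(x.T @ x) * np.linalg.norm(y.T @ y)))
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(1000, 64))              # stand-in for model A's features
q, _ = np.linalg.qr(rng.normal(size=(64, 64)))     # random rotation of the same structure
feats_b = feats_a @ q                              # "model B": same geometry, different basis
print(f"CKA(A, B)     = {linear_cka(feats_a, feats_b):.2f}")   # about 1.0: representations match
print(f"CKA(A, noise) = {linear_cka(feats_a, rng.normal(size=(1000, 48))):.2f}")  # much lower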

00:11:55.090 --> 00:11:58.850
Okay, wow. Now, the really wild part of this

00:11:58.850 --> 00:12:01.830
research you mentioned, trying to turn the AI's

00:12:01.830 --> 00:12:04.850
internal thought, that abstract number stuff,

00:12:04.970 --> 00:12:07.210
the vector, back into the exact words that went

00:12:07.210 --> 00:12:09.350
in. Yeah, that's the really hard part. Like trying

00:12:09.350 --> 00:12:12.370
to rebuild a complex blueprint just by, I don't

00:12:12.370 --> 00:12:14.210
know, feeling the vibes in the finished building.

00:12:14.309 --> 00:12:16.330
It's super difficult. So how on earth did they

00:12:16.330 --> 00:12:19.169
manage that? Well, they used this method, iterative

00:12:19.169 --> 00:12:21.710
refinement, and millions, literally millions

00:12:21.710 --> 00:12:24.090
of queries, basically asking the AI again and

00:12:24.090 --> 00:12:26.330
again, is this closer? How about this? Refining

00:12:26.330 --> 00:12:28.289
the output until it matched the original input.
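
NOTE
Editor's aside: the inversion method is only described at a high level ("is this closer?"), so here is a toy, hedged sketch of iterative refinement. It inverts a simple bag-of-characters "embedding" by guess-and-check; real inverters work on dense neural embeddings with far smarter proposal steps, and this toy recovers a string with matching character counts, not guaranteed exact wording.
import random, string
random.seed(1)
ALPHABET = string.ascii_lowercase + " "
def embed(text):
    return [text.count(ch) for ch in ALPHABET]      # stand-in for a model's internal vector
def distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))
target_vec = embed("the hidden thought")            # all the inverter ever sees
guess = "".join(random.choice(ALPHABET) for _ in range(sum(target_vec)))  # length is recoverable from the counts
best = distance(embed(guess), target_vec)
for _ in range(20000):                              # "millions of queries" in the real setting
    i = random.randrange(len(guess))
    candidate = guess[:i] + random.choice(ALPHABET) + guess[i + 1:]
    d = distance(embed(candidate), target_vec)
    if d <= best:                                   # keep edits that move the embedding closer
        guess, best = candidate, d
print(guess, best)                                  # typically ends at distance 0: same character counts as the target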

00:12:28.570 --> 00:12:31.429
And did it work? One team hit 94% accuracy.

00:12:32.059 --> 00:12:36.500
On long sentences. 94%. Yeah. Think about that.

00:12:36.799 --> 00:12:39.740
They could take the AI's abstract internal state,

00:12:39.899 --> 00:12:42.519
its thought, and translate it back into perfectly

00:12:42.519 --> 00:12:45.080
readable human language. That's like getting

00:12:45.080 --> 00:12:47.740
a peek inside its head. It really is. A remarkable

00:12:47.740 --> 00:12:51.000
glimpse into the AI's sort of cognitive process.

00:12:51.320 --> 00:12:54.899
Okay. So if this is true, if all big models are

00:12:54.899 --> 00:12:57.159
essentially starting to speak the same internal

00:12:57.159 --> 00:13:00.159
language, what does that actually mean? What

00:13:00.159 --> 00:13:02.580
are the implications practically? Philosophically.

00:13:02.620 --> 00:13:05.740
Oh, the implications are huge. Potentially revolutionary,

00:13:06.059 --> 00:13:08.659
honestly. If we can really understand this universal

00:13:08.659 --> 00:13:12.139
internal language, we could build universal inverters,

00:13:12.200 --> 00:13:15.679
tools to actually understand why an AI gives

00:13:15.679 --> 00:13:18.519
a certain output, not just what the output is.

00:13:18.639 --> 00:13:20.740
Which would be massive for trust, for debugging.

00:13:21.019 --> 00:13:23.779
Exactly. For finding hidden biases, for making

00:13:23.779 --> 00:13:26.379
AI more transparent. Huge. And it could mean

00:13:26.379 --> 00:13:29.519
we can translate between closed models. Making

00:13:29.519 --> 00:13:31.360
different AIs from different companies, maybe

00:13:31.360 --> 00:13:33.259
built totally differently, talk to

00:13:33.259 --> 00:13:36.379
each other seamlessly. So like a Microsoft AI

00:13:36.379 --> 00:13:38.679
could understand an OpenAI model, even if they

00:13:38.679 --> 00:13:40.679
weren't designed to connect. Potentially, yeah.

00:13:41.419 --> 00:13:43.600
Imagine the possibilities for combining different

00:13:43.600 --> 00:13:46.940
AI strengths. Interoperability could just explode.
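
NOTE
Editor's aside: a hedged sketch of the "translate between models" idea, under the simplifying assumption that two embedding spaces are related by a linear map fit from paired examples. The arrays are synthetic stand-ins; real cross-model alignment is an open research problem.
import numpy as np
rng = np.random.default_rng(42)
n_pairs, dim_a, dim_b = 500, 64, 32
emb_a = rng.normal(size=(n_pairs, dim_a))            # "model A" embeddings of 500 shared texts
hidden_map = rng.normal(size=(dim_a, dim_b))
emb_b = emb_a @ hidden_map + 0.01 * rng.normal(size=(n_pairs, dim_b))  # "model B" embeddings of the same texts
w, *_ = np.linalg.lstsq(emb_a, emb_b, rcond=None)    # fit the A-to-B translator by least squares
new_a = rng.normal(size=(1, dim_a))                  # a fresh model-A embedding
predicted_b = new_a @ w                              # its predicted position in model B's space
error = float(np.abs(predicted_b - new_a @ hidden_map).mean())
print(f"Mean abs error on an unseen point: {error:.4f}")  # small: the learned map transfers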

00:13:47.340 --> 00:13:49.700
OK, that alone is huge. But you mentioned going

00:13:49.700 --> 00:13:52.919
even further. Right. So if there's this shared

00:13:52.919 --> 00:13:56.090
structure of understanding. Maybe it could help

00:13:56.090 --> 00:13:58.490
us decode things we don't understand, like ancient

00:13:58.490 --> 00:14:01.190
human texts. Think about Linear A, the Minoan

00:14:01.190 --> 00:14:03.529
script. We still can't fully read it after decades

00:14:03.529 --> 00:14:06.590
of trying. What if an AI, seeing these universal

00:14:06.590 --> 00:14:10.330
patterns, could crack it? Whoa. An AI unlocking

00:14:10.330 --> 00:14:13.250
secrets of human history. That's humbling. Isn't

00:14:13.250 --> 00:14:15.549
it? And then, here's the really mind -bending

00:14:15.549 --> 00:14:17.009
one, the one that makes you question everything.

00:14:17.269 --> 00:14:19.950
What if, what if we could talk to whales? Talk

00:14:19.950 --> 00:14:23.059
to whales. Yeah. There's already Project CETI

00:14:23.059 --> 00:14:25.720
using AI to try and decode whale communication.

00:14:26.039 --> 00:14:28.559
If there is some kind of shared conceptual space,

00:14:28.700 --> 00:14:31.220
this universal language, could AI bridge the

00:14:31.220 --> 00:14:33.159
gap? Could we find shared semantic structures

00:14:33.159 --> 00:14:35.480
between human thought and whale thought? Whoa.

00:14:35.799 --> 00:14:37.899
Right. Imagine scaling that. A billion queries.

00:14:38.399 --> 00:14:40.580
Maybe it decodes ancient languages. Maybe it

00:14:40.580 --> 00:14:42.179
lets us communicate with another intelligent

00:14:42.179 --> 00:14:45.740
species on Earth. Wow. It makes you rethink what

00:14:45.740 --> 00:14:48.159
intelligence even is. Artificial, biological.

00:14:48.539 --> 00:14:52.220
Yeah. Yeah, breathtaking. Does this convergence

00:14:52.220 --> 00:14:55.759
idea, does it mean AI might one day actually

00:14:55.759 --> 00:14:58.940
understand us, not just process our words, but

00:14:58.940 --> 00:15:01.620
get the concepts, the feelings behind them? It

00:15:01.620 --> 00:15:04.320
certainly suggests a path towards that, a shared

00:15:04.320 --> 00:15:07.399
conceptual space, a deep structure. Yeah. It

00:15:07.399 --> 00:15:10.100
feels like it could bridge human and AI thought

00:15:10.100 --> 00:15:12.840
in a way that goes beyond just processing, maybe

00:15:12.840 --> 00:15:16.100
towards real comprehension. So, OK, we've covered

00:15:16.100 --> 00:15:18.860
a lot. We navigated the choppy financial waters

00:15:18.860 --> 00:15:21.629
of the AI market. The frothiness. Right. We've

00:15:21.629 --> 00:15:23.669
seen all these diverse real world applications

00:15:23.669 --> 00:15:25.769
popping up everywhere, scaling incredibly fast.

00:15:26.009 --> 00:15:27.870
And then we peered into this possible future

00:15:27.870 --> 00:15:30.549
where AI models might actually share a universal

00:15:30.549 --> 00:15:32.990
language. It's quite the deep dive from economics

00:15:32.990 --> 00:15:35.750
all the way to, well, philosophy almost. Yeah.

00:15:35.789 --> 00:15:37.370
And I think the key tension is clear, right?

00:15:37.389 --> 00:15:39.049
Despite all the market craziness, the volatility,

00:15:39.269 --> 00:15:41.269
the hype, the underlying technology is advancing,

00:15:41.549 --> 00:15:43.950
seriously advancing. And it keeps revealing these

00:15:43.950 --> 00:15:46.649
profound, almost philosophical questions about

00:15:46.649 --> 00:15:48.990
intelligence, about reality, maybe even about

00:15:48.990 --> 00:15:51.720
ourselves. So as you go about your day today,

00:15:51.860 --> 00:15:53.980
maybe just take a moment to think about this.

00:15:54.299 --> 00:15:58.000
If AI models are converging on some single ideal

00:15:58.000 --> 00:16:01.639
way to understand the world, what does that tell

00:16:01.639 --> 00:16:04.100
us about reality itself? Is there some inherent

00:16:04.100 --> 00:16:07.100
structure just waiting there to be discovered

00:16:07.100 --> 00:16:09.600
by any intelligence smart enough, whether it's

00:16:09.600 --> 00:16:11.480
made of silicon or carbon? And the follow -on

00:16:11.480 --> 00:16:13.700
question, could this shared internal language

00:16:13.700 --> 00:16:16.159
of AI actually lead us to discover some universal

00:16:16.159 --> 00:16:19.169
truths? Truths about intelligence itself, both

00:16:19.169 --> 00:16:21.690
the artificial kind we're building and the biological

00:16:21.690 --> 00:16:23.990
kind we possess. It's definitely a question worth

00:16:23.990 --> 00:16:26.370
pondering as we all move into this future. We

00:16:26.370 --> 00:16:28.250
really hope you feel a little more informed after

00:16:28.250 --> 00:16:30.750
this and maybe, maybe a lot more curious. Thanks

00:16:30.750 --> 00:16:33.129
so much for joining us on the Deep Dive.

00:16:33.129 --> 00:16:33.330
[Outro music.]
