WEBVTT

00:00:00.000 --> 00:00:03.540
Imagine a world where your next ChatGPT subscription

00:00:03.540 --> 00:00:07.019
doesn't just come as software, but with a robot

00:00:07.019 --> 00:00:11.980
in the box. Or where AI completely rewrites human

00:00:11.980 --> 00:00:15.419
knowledge itself, then trains on its own refined

00:00:15.419 --> 00:00:18.640
version. Wow. Sounds like science fiction, doesn't

00:00:18.640 --> 00:00:20.760
it? Yeah. But, well, parts of it are already

00:00:20.760 --> 00:00:23.140
starting to happen. Welcome to the Deep Dive.

00:00:23.160 --> 00:00:25.620
Today, we're really plunging into this fascinating

00:00:25.620 --> 00:00:28.750
landscape where AI is evolving. I mean, just

00:00:28.750 --> 00:00:31.230
at an almost unbelievable pace. It's moving far

00:00:31.230 --> 00:00:33.890
beyond simple chatbots now. We've gathered a

00:00:33.890 --> 00:00:35.729
bunch of sources, mainly looking at a recent

00:00:35.729 --> 00:00:37.750
AI Fire newsletter that was just packed with

00:00:37.750 --> 00:00:40.189
crucial nuggets. Our goal is to pull out the

00:00:40.189 --> 00:00:43.570
most important insights for you. Our mission,

00:00:43.649 --> 00:00:45.549
as always, is to give you that shortcut, a shortcut

00:00:45.549 --> 00:00:48.250
to being truly well-informed. So we'll explore

00:00:48.250 --> 00:00:51.829
the frankly bold visions of Sam Altman and Elon

00:00:51.829 --> 00:00:54.469
Musk. Yeah. We'll deep dive into OpenAI's pretty

00:00:54.469 --> 00:00:57.649
ambitious master plan. Consider how AI is fundamentally

00:00:57.649 --> 00:00:59.850
shifting the job market. Which is a big one.

00:00:59.969 --> 00:01:02.530
It is. And then, something truly surprising,

00:01:02.670 --> 00:01:05.010
we'll unpack a new AI architecture that uses,

00:01:05.170 --> 00:01:08.569
get this, 100 times less compute than giants

00:01:08.569 --> 00:01:11.569
like Llama. 100 times less? It's going to be an

00:01:11.569 --> 00:01:13.469
insightful journey, I think. You know, what we're

00:01:13.469 --> 00:01:16.909
witnessing with AI right now, it... Almost feels

00:01:16.909 --> 00:01:19.370
like just the warm up act. Exactly. Just a taste

00:01:19.370 --> 00:01:21.730
of what's really coming. Yeah. Sam Altman, the

00:01:21.730 --> 00:01:24.530
CEO of OpenAI, he's been dropping some really

00:01:24.530 --> 00:01:27.530
intriguing hints about this next phase. He's

00:01:27.530 --> 00:01:30.189
talking about a future where ChatGPT isn't just,

00:01:30.230 --> 00:01:32.390
you know, an interface on your screen. He hinted

00:01:32.390 --> 00:01:34.680
recently, didn't he, about subscriptions that

00:01:34.680 --> 00:01:37.420
might actually ship with physical robots. He

00:01:37.420 --> 00:01:40.700
did. That's a bold vision, right? It's not just

00:01:40.700 --> 00:01:44.200
software anymore. It's AI having a physical presence

00:01:44.200 --> 00:01:47.180
in our lives. That feels like a significant leap.

00:01:47.319 --> 00:01:49.579
It really does. And then you pivot to Elon Musk.

00:01:49.980 --> 00:01:53.140
His idea for Grok, you know, his xAI chatbot,

00:01:53.280 --> 00:01:56.500
it's even more radical. He's talking about deleting

00:01:56.500 --> 00:02:00.019
and rewriting humanity's entire knowledge base.

00:02:00.540 --> 00:02:03.840
And then training Grok on its own refined version

00:02:03.840 --> 00:02:07.480
to purge what he calls garbage. It's kind of

00:02:07.480 --> 00:02:09.259
wild when you think about it, purging garbage.

00:02:09.439 --> 00:02:12.439
The sheer scale of these visions, though, from

00:02:12.439 --> 00:02:14.500
both of them, it's immense. They're pushing us

00:02:14.500 --> 00:02:17.599
beyond just thinking of AI as tools. Right. And

00:02:17.599 --> 00:02:19.680
more into these like pervasive, almost fundamental

00:02:19.680 --> 00:02:22.259
shifts in how we deal with information, even

00:02:22.259 --> 00:02:24.879
the physical world. It totally redefines what

00:02:24.879 --> 00:02:28.710
ubiquitous AI could even mean. These ideas from

00:02:28.710 --> 00:02:31.289
Altman and Musk, they're so huge, they almost

00:02:31.289 --> 00:02:34.169
sound like pure sci-fi. Yeah. Is there a realistic

00:02:34.169 --> 00:02:37.229
path to any of this, do you think? I don't think

00:02:37.229 --> 00:02:39.330
it's just sci-fi. It's really about relentlessly

00:02:39.330 --> 00:02:42.550
pushing boundaries and scale. Right, pushing

00:02:42.550 --> 00:02:44.289
boundaries and scale. Okay. Yeah. So speaking

00:02:44.289 --> 00:02:47.490
of ambitious, let's unpack OpenAI's master plan.

00:02:47.629 --> 00:02:49.789
Yeah. Altman recently pulled back the curtain

00:02:49.789 --> 00:02:52.069
a bit on how they went from like literally eight

00:02:52.069 --> 00:02:54.430
people around a whiteboard. Eight people? To

00:02:54.430 --> 00:02:56.110
one of the biggest sites on the internet. Yeah.

00:02:56.349 --> 00:02:59.569
He talks about this AGI or bust conviction. And

00:02:59.569 --> 00:03:02.250
their talent hack, as he put it, was frankly

00:03:02.250 --> 00:03:05.550
brilliant. By aiming for that truly one-of-one

00:03:05.550 --> 00:03:08.770
mission achieving AGI, they basically vacuum

00:03:08.770 --> 00:03:11.889
sealed the smartest 1% of people. People who

00:03:11.889 --> 00:03:14.210
genuinely cared about that single, incredibly

00:03:14.210 --> 00:03:17.310
difficult goal. That's how you go from... zero

00:03:17.310 --> 00:03:21.370
revenue for years to just rocketing past every

00:03:21.370 --> 00:03:24.430
consumer app launch record ever with ChatGPT.

00:03:24.590 --> 00:03:27.729
It's an almost like cult -like focus on the mission.

00:03:27.930 --> 00:03:30.210
What's fascinating now is what Altman calls the

00:03:30.210 --> 00:03:33.930
product overhang. He says models like GPT-4o

00:03:33.930 --> 00:03:36.669
mini and o3 are actually far more capable than

00:03:36.669 --> 00:03:39.789
what today's products expose. API prices are

00:03:39.789 --> 00:03:42.590
collapsing fast and he hinted a powerful open

00:03:42.590 --> 00:03:45.240
source model is coming soon, said one better

00:03:45.240 --> 00:03:46.979
than you're hoping for. Okay. So this product

00:03:46.979 --> 00:03:49.979
overhang, it basically means the AI models themselves

00:03:49.979 --> 00:03:53.039
are much smarter than what we, the users, currently

00:03:53.039 --> 00:03:55.460
get to experience through the apps. Yeah. It's

00:03:55.460 --> 00:03:57.080
like having a supercar but only being allowed

00:03:57.080 --> 00:03:58.919
to drive it in a school zone. That's a great

00:03:58.919 --> 00:04:00.819
analogy. Yeah. Seriously. It's like we're stuck

00:04:00.819 --> 00:04:03.460
in the 1960s of AI product design or something.

00:04:03.520 --> 00:04:06.240
We're stacking these incredibly powerful Lego

00:04:06.240 --> 00:04:08.639
blocks of data, right, these foundational models.

00:04:08.840 --> 00:04:11.860
But we haven't truly built the castle yet. You

00:04:11.860 --> 00:04:14.199
know, the full transformative applications. There's

00:04:14.199 --> 00:04:17.019
just so much untapped potential sitting there

00:04:17.019 --> 00:04:20.680
waiting for the right builders. He also pinpointed

00:04:20.680 --> 00:04:24.160
memory as his favorite 2024 feature. The vision

00:04:24.160 --> 00:04:27.240
here is an always-on AI agent that truly learns

00:04:27.240 --> 00:04:29.800
you. Learns you. It hooks into all your data,

00:04:29.860 --> 00:04:33.120
supposedly, and can proactively act on your behalf

00:04:33.120 --> 00:04:35.949
without spamming you with notifications. Imagine

00:04:35.949 --> 00:04:38.870
an AI that genuinely anticipates your needs.

00:04:39.329 --> 00:04:41.829
That feels like a game changer for personal productivity.

00:04:42.149 --> 00:04:43.810
And here's where it gets really interesting,

00:04:43.889 --> 00:04:46.550
I think. He said hardware is on the table. Mentioned

00:04:46.550 --> 00:04:48.970
a new device coming, co-designed with Jony Ive,

00:04:49.050 --> 00:04:51.389
specifically engineered to make the interface

00:04:51.389 --> 00:04:54.389
just melt away. Melt away. Yeah. The goal is

00:04:54.389 --> 00:04:56.370
for the AI to become seamless, almost invisible

00:04:56.370 --> 00:04:58.910
in your life. No more apps, no more screens necessarily,

00:04:59.170 --> 00:05:02.610
just assistance, pure help. Which brings us right

00:05:02.610 --> 00:05:04.529
back to that initial moment of wonder, doesn't

00:05:04.529 --> 00:05:07.319
it? Long term, Altman genuinely seems to believe

00:05:07.319 --> 00:05:10.240
the high-tier ChatGPT subscription might actually

00:05:10.240 --> 00:05:13.160
ship with a humanoid robot in the box. Imagine

00:05:13.160 --> 00:05:16.220
that a physical AI companion delivered right

00:05:16.220 --> 00:05:18.959
to your doorstep. I mean, that shifts the entire

00:05:18.959 --> 00:05:21.879
paradigm of human computer interaction. It absolutely

00:05:21.879 --> 00:05:24.120
does. But, you know, there are significant limiting

00:05:24.120 --> 00:05:26.220
factors here. Altman is very clear about this.

00:05:26.279 --> 00:05:29.160
Yeah. Compute and energy are the new oil. He

00:05:29.160 --> 00:05:32.470
basically said that. OpenAI is sprinting to

00:05:32.470 --> 00:05:34.410
build the largest, most expensive infrastructure

00:05:34.410 --> 00:05:38.230
in the world. They openly wish our local devices,

00:05:38.430 --> 00:05:40.709
you know, our phones and laptops, would shoulder

00:05:40.709 --> 00:05:44.350
half the workload. The real bottleneck soon won't

00:05:44.350 --> 00:05:46.810
be the algorithms. It's going to be energy. He

00:05:46.810 --> 00:05:49.230
even floated the idea of parking data centers

00:05:49.230 --> 00:05:52.970
in orbit. Orbit. OK. So why does all this matter

00:05:52.970 --> 00:05:56.290
for you listening right now? The ground is shaking,

00:05:56.550 --> 00:05:58.269
as Altman puts it. I mean, startups especially

00:05:58.269 --> 00:06:01.449
can win big, but they need to stop just cloning

00:06:01.449 --> 00:06:04.350
ChatGPT. Right. They need to find those greenfield

00:06:04.350 --> 00:06:07.069
opportunities, those untouched spaces to really

00:06:07.069 --> 00:06:09.649
differentiate themselves. And defensibility in

00:06:09.649 --> 00:06:12.509
this new era. It's not just about who has access

00:06:12.509 --> 00:06:15.189
to the best model anymore. It's memory. It's

00:06:15.189 --> 00:06:18.029
deep data integrations. And maybe most importantly,

00:06:18.189 --> 00:06:21.250
user trust. Trust. Yeah. Those are the new moats,

00:06:21.310 --> 00:06:23.350
you know, the things that will protect businesses.

00:06:23.730 --> 00:06:25.730
It's not just about building a better model.

00:06:25.810 --> 00:06:27.949
It's about building a trusted relationship with

00:06:27.949 --> 00:06:30.670
the user. It paints this picture of what he calls

00:06:30.670 --> 00:06:33.899
one-person leverage. In Altman's 10-year view,

00:06:34.220 --> 00:06:37.579
a tiny team armed with reasoning agents and abundant

00:06:37.579 --> 00:06:41.720
energy could create an unimaginable superintelligence

00:06:41.720 --> 00:06:45.000
right at human fingertips. Wow. It's not just

00:06:45.000 --> 00:06:48.220
efficiency. It's fundamentally shifting what

00:06:48.220 --> 00:06:50.100
a small group of people can actually achieve.

00:06:50.379 --> 00:06:54.250
So we're talking humanoid robots. Seamless AI

00:06:54.250 --> 00:06:56.589
agents. It sounds incredible. But what's the

00:06:56.589 --> 00:06:58.569
practical limiter here? What's really holding

00:06:58.569 --> 00:07:00.529
us back from having these kinds of companions

00:07:00.529 --> 00:07:03.189
today? Pure compute and energy scale. That's

00:07:03.189 --> 00:07:05.649
the bottom line. Compute and energy. Okay. So

00:07:05.649 --> 00:07:08.769
from OpenAI's internal master plan, let's maybe

00:07:08.769 --> 00:07:11.069
zoom out a bit. Let's see how these visions resonate

00:07:11.069 --> 00:07:14.189
or maybe clash with other major players and thinkers

00:07:14.189 --> 00:07:16.689
in the AI space. Pulling more insights from the

00:07:16.689 --> 00:07:18.730
newsletter. Right. You mentioned Elon Musk's

00:07:18.730 --> 00:07:21.410
desire for Grok to rewrite the entire corpus

00:07:21.410 --> 00:07:23.800
of human knowledge and... purge far too much

00:07:23.800 --> 00:07:27.779
garbage. That's such a powerful, almost godlike

00:07:27.779 --> 00:07:30.180
ambition, isn't it? But that phrase, purging

00:07:30.180 --> 00:07:32.939
garbage, I mean, who defines what's garbage?

00:07:33.139 --> 00:07:35.060
Exactly. That's the question. It opens up a whole

00:07:35.060 --> 00:07:38.139
can of worms about bias, about what gets included

00:07:38.139 --> 00:07:41.500
or excluded. It's a total double-edged sword.

00:07:42.019 --> 00:07:45.079
On one hand, yeah, the idea of a truly curated,

00:07:45.319 --> 00:07:48.220
maybe unbiased knowledge base sounds appealing.

00:07:48.600 --> 00:07:52.240
But on the other... The potential for one entity

00:07:52.240 --> 00:07:55.759
to dictate truth? That's profoundly unsettling.

00:07:55.800 --> 00:07:57.920
It's a fascinating, almost Orwellian thought

00:07:57.920 --> 00:07:59.759
experiment, really. Then you have someone like

00:07:59.759 --> 00:08:02.259
Reid Hoffman. His view is quite different. He

00:08:02.259 --> 00:08:05.699
says AI will transform jobs, yeah, but not cause

00:08:05.699 --> 00:08:08.079
some white-collar bloodbath. Okay, that's more

00:08:08.079 --> 00:08:10.740
optimistic. Much more. He sees it as person plus

00:08:10.740 --> 00:08:13.620
AI, like an Excel for everything, augmenting

00:08:13.620 --> 00:08:15.480
human capabilities rather than just replacing

00:08:15.480 --> 00:08:17.560
them outright. You know, I still wrestle with

00:08:17.560 --> 00:08:19.620
prompt drift myself sometimes, so I completely

00:08:19.620 --> 00:08:22.759
get that person plus AI concept. It's not about

00:08:22.759 --> 00:08:24.620
the machines doing everything for us. It's about

00:08:24.620 --> 00:08:26.980
a partnership where AI helps us be more efficient,

00:08:27.060 --> 00:08:29.439
maybe more capable, just taking friction out

00:08:29.439 --> 00:08:31.569
of our workflows. But not everyone shares that

00:08:31.569 --> 00:08:34.610
level of optimism. Comic legend Paul Pope, for

00:08:34.610 --> 00:08:37.950
example, he isn't losing sleep over AI plagiarism.

00:08:38.330 --> 00:08:41.210
His worries are more fundamental. Like what?

00:08:41.370 --> 00:08:44.029
Killer robots and mass surveillance. A very different,

00:08:44.110 --> 00:08:46.789
maybe more existential kind of fear brewing there.

00:08:46.970 --> 00:08:50.389
Huh. Yeah. Different focus entirely. Meanwhile,

00:08:50.690 --> 00:08:52.850
you see companies like Amazon consolidating things.

00:08:53.049 --> 00:08:55.700
They're bundling QuickSight, Q Business, and Q

00:08:55.700 --> 00:08:58.659
Apps into a single Q Business Suite workspace.

00:08:59.100 --> 00:09:02.519
Right. So the Q Business Suite is basically Amazon's

00:09:02.519 --> 00:09:05.519
combined toolkit for managing data and automating

00:09:05.519 --> 00:09:07.580
workflows from one place. Exactly. It's all about

00:09:07.580 --> 00:09:10.259
streamlining business operations, making it easier

00:09:10.259 --> 00:09:13.120
for companies to actually adopt and use AI effectively.

00:09:13.460 --> 00:09:15.679
And there is that fascinating anecdote from Geoffrey

00:09:15.679 --> 00:09:17.840
Hinton, you know, often called the godfather

00:09:17.840 --> 00:09:20.679
of AI. Oh, yeah. He says Google throttled their

00:09:20.679 --> 00:09:22.480
early chatbots because they were worried about

00:09:22.480 --> 00:09:25.259
reputational risk. Understandable for Google.

00:09:25.440 --> 00:09:28.659
But OpenAI back then had nothing to lose, so

00:09:28.659 --> 00:09:30.860
they just pushed ahead. And apparently when Hinton

00:09:30.860 --> 00:09:33.600
was asked about Sam Altman's moral compass, he

00:09:33.600 --> 00:09:35.899
just shrugged and said, we'll see. We'll see.

00:09:35.980 --> 00:09:38.820
Wow. That's very telling, isn't it? It really

00:09:38.820 --> 00:09:42.159
is. And you also had Mistral AI CEO Arthur Mensch

00:09:42.159 --> 00:09:44.539
raising a really good point about de-skilling.

00:09:44.820 --> 00:09:47.700
De-skilling, meaning AI reducing the need for

00:09:47.700 --> 00:09:50.529
human skills. Yeah, precisely. He argues that's

00:09:50.529 --> 00:09:52.570
the real danger, maybe more than just white collar

00:09:52.570 --> 00:09:55.649
job losses. His concern is we need to keep humans

00:09:55.649 --> 00:09:58.549
in the loop. Otherwise, we risk getting lazy

00:09:58.549 --> 00:10:00.889
minds or losing critical faculties over time.

00:10:01.370 --> 00:10:03.750
That's a concern that resonates, I think. Absolutely.

00:10:04.110 --> 00:10:07.190
So given all these different perspectives from

00:10:07.190 --> 00:10:11.629
Altman's robots to Hoffman's optimism to Hinton's

00:10:11.629 --> 00:10:14.450
caution, what's the immediate impact? How does

00:10:14.450 --> 00:10:16.570
this affect how people work like today or tomorrow?

00:10:17.080 --> 00:10:19.500
I think the consensus leans towards jobs will

00:10:19.500 --> 00:10:22.360
transform, not vanish. Think person plus AI.

00:10:22.600 --> 00:10:25.480
Person plus AI. Okay. Now let's shift gears a

00:10:25.480 --> 00:10:27.879
bit and talk about the bleeding edge. New tools

00:10:27.879 --> 00:10:29.940
and especially these architectural breakthroughs

00:10:29.940 --> 00:10:31.659
that are pushing boundaries beyond just making

00:10:31.659 --> 00:10:33.580
bigger and bigger models. Yeah, this is exciting

00:10:33.580 --> 00:10:35.980
stuff. The newsletter highlighted tools like

00:10:35.980 --> 00:10:38.399
FetchEye, bringing professional AI capabilities

00:10:38.399 --> 00:10:42.200
right onto any Mac app, and Julia's AI Notebooks,

00:10:42.279 --> 00:10:44.940
letting you basically chat with complex data

00:10:44.940 --> 00:10:47.799
to whip up expert insights. These are practical

00:10:47.799 --> 00:10:50.139
applications hitting the market now. Right, tangible

00:10:50.139 --> 00:10:52.399
tools. But then we get to the really deep technical

00:10:52.399 --> 00:10:57.250
dive, Meta's new AUNet. This is, well... It's

00:10:57.250 --> 00:10:59.490
potentially revolutionary. It sounds like a huge

00:10:59.490 --> 00:11:01.509
deal because it's showing incredible efficiency.

00:11:01.809 --> 00:11:04.309
The headline was something like, byte-level

00:11:04.309 --> 00:11:08.149
AUNet chases Llama on 1% of the compute. Exactly.

00:11:08.149 --> 00:11:10.950
So AUNet, to put it simply, it's Meta's new AI

00:11:10.950 --> 00:11:13.830
model architecture. It's designed for incredibly

00:11:13.830 --> 00:11:16.529
efficient language processing. It's a fundamentally

00:11:16.529 --> 00:11:18.769
different way of building these models. Okay,

00:11:18.789 --> 00:11:20.679
different how? Why does it matter so much? Well,

00:11:20.740 --> 00:11:22.799
first, it learns its own way to process language

00:11:22.799 --> 00:11:26.000
on the fly. That means no fixed BPE vocab is

00:11:26.000 --> 00:11:29.059
needed. And a BPE vocab is? That's the standard

00:11:29.059 --> 00:11:31.679
way AI models usually break text down into smaller

00:11:31.679 --> 00:11:33.879
pieces, like words or parts of words, so they

00:11:33.879 --> 00:11:36.159
can understand it. AUNet doesn't need that predefined

00:11:36.159 --> 00:11:38.960
dictionary. Okay. And the second thing? Second,

00:11:39.120 --> 00:11:42.179
it performs like... neck and neck with traditional

00:11:42.179 --> 00:11:45.259
BPE transformers, the standard model type, but

00:11:45.259 --> 00:11:49.159
it burns about 100 times less FLOPs than a massive

00:11:49.159 --> 00:11:54.019
model like Llama 3.1 8B. 100 times, wow. And FLOPs

00:11:54.019 --> 00:11:57.100
are just... Think of FLOPs as the sheer amount

00:11:57.100 --> 00:12:00.080
of computational muscle an AI model needs to

00:12:00.080 --> 00:12:04.240
flex to do its job. So way, way less muscle needed

00:12:04.240 --> 00:12:07.039
here. That is astounding. Efficiency over just

00:12:07.039 --> 00:12:09.840
brute force. It really hints at a coming wave

00:12:09.840 --> 00:12:12.539
of... architectural gains in AI, doesn't it?

00:12:12.559 --> 00:12:14.159
Exactly. Not just throwing bigger and bigger

00:12:14.159 --> 00:12:16.659
GPU clusters at the problem, consuming more energy.

00:12:16.799 --> 00:12:18.879
We're actually getting smarter about how we build

00:12:18.879 --> 00:12:21.320
these models. Smarter, not just bigger. It suggests

00:12:21.320 --> 00:12:23.960
a future where really powerful AI might run on

00:12:23.960 --> 00:12:26.340
much less hardware, maybe even locally more often.

00:12:26.559 --> 00:12:29.740
And there's this cross-lingual freebie, they

00:12:29.740 --> 00:12:31.960
called it, because it works at the byte level,

00:12:32.080 --> 00:12:35.220
like the raw letters and symbols, low-resource

00:12:35.220 --> 00:12:37.240
languages get a performance bump automatically.

00:12:37.519 --> 00:12:39.720
Languages that don't have much digital data available

00:12:39.980 --> 00:12:42.039
for training. Ah, so it helps level the playing

00:12:42.039 --> 00:12:43.720
field for different languages without needing

00:12:43.720 --> 00:12:46.200
tons of extra data. Pretty much. Which is huge

00:12:46.200 --> 00:12:48.679
for global accessibility, for making AI truly

00:12:48.679 --> 00:12:50.720
useful for everyone, not just those speaking

00:12:50.720 --> 00:12:52.919
dominant languages. The big takeaway here seems

00:12:52.919 --> 00:12:55.500
to be that the tokenizer might be the next thing

00:12:55.500 --> 00:12:58.000
to disappear. Yeah. That tokenizer, the part

00:12:58.000 --> 00:13:00.139
of the AI that converts words into numbers for

00:13:00.139 --> 00:13:02.860
the model, this new approach could make that

00:13:02.860 --> 00:13:06.440
complex, often problematic, pre-processing step

00:13:06.440 --> 00:13:10.519
just obsolete. Imagine AI just seeing raw data,

00:13:10.700 --> 00:13:13.320
letters, symbols, everything. Could this new

00:13:13.320 --> 00:13:15.879
architecture, this AUNet, fundamentally change

00:13:15.879 --> 00:13:18.620
how we build AI from the ground up? Is it that

00:13:18.620 --> 00:13:21.179
significant? Yes. I think it really hints at

00:13:21.179 --> 00:13:23.919
a new era of efficiency and much more dynamic

00:13:23.919 --> 00:13:26.039
language handling. Efficiency and dynamic handling.

00:13:26.139 --> 00:13:29.120
Got it. Okay,

00:13:29.179 --> 00:13:31.179
let's just take a moment to recap the core themes

00:13:31.179 --> 00:13:33.639
from this Deep Dive. We've explored these radical,

00:13:33.879 --> 00:13:36.860
almost sci-fi visions for AI's future, coming

00:13:36.860 --> 00:13:40.139
from leaders like Sam Altman and Elon Musk. Robots

00:13:40.139 --> 00:13:42.759
and knowledge rewriting. Exactly. We saw OpenAI's

00:13:42.759 --> 00:13:46.960
strategic push towards pervasive AI agents and

00:13:46.960 --> 00:13:49.379
even physical hardware like robots, hinting at

00:13:49.379 --> 00:13:51.879
a whole new era of interaction. Then we broadened

00:13:51.879 --> 00:13:54.460
the view. Looking at the ongoing transformation

00:13:54.460 --> 00:13:57.600
of jobs and skills, weighing those different

00:13:57.600 --> 00:14:00.740
perspectives, optimism versus, you know, some

00:14:00.740 --> 00:14:02.559
real concerns about de -skilling or control.

00:14:02.820 --> 00:14:05.019
Right. And finally, we just touched on those

00:14:05.019 --> 00:14:08.080
incredibly exciting new architectural breakthroughs

00:14:08.080 --> 00:14:10.779
like Meta's AUNet that are making AI potentially

00:14:10.779 --> 00:14:13.220
far, far more efficient. Yeah, the efficiency

00:14:13.220 --> 00:14:15.899
angle is key. It really feels like AI is moving

00:14:15.899 --> 00:14:18.820
rapidly beyond its warm-up act phase now. We're

00:14:18.820 --> 00:14:20.559
heading into something much more integrated,

00:14:20.740 --> 00:14:24.480
more powerful and... maybe surprisingly, sometimes

00:14:24.480 --> 00:14:26.779
achieved with less brute force. It's not just

00:14:26.779 --> 00:14:28.860
about bigger models anymore, it seems. It's about

00:14:28.860 --> 00:14:31.779
smarter models, smarter architectures, and how

00:14:31.779 --> 00:14:33.740
they're all going to reshape our daily lives.

00:14:33.980 --> 00:14:36.879
So as you go about your day, maybe consider how

00:14:36.879 --> 00:14:38.919
these shifts might impact your own work or even

00:14:38.919 --> 00:14:41.419
just your daily life. Are you ready for an AI

00:14:41.419 --> 00:14:44.710
that learns you, that acts proactively? And here's

00:14:44.710 --> 00:14:46.350
maybe a final thought for you to mull over, building

00:14:46.350 --> 00:14:49.710
on that AUNet idea. Consider how a byte -level

00:14:49.710 --> 00:14:52.590
understanding of language could change AI's ability

00:14:52.590 --> 00:14:56.009
to truly understand diverse data. Think about

00:14:56.009 --> 00:14:59.690
it. Code, emoji, maybe even nuanced inflections

00:14:59.690 --> 00:15:02.590
in voice someday, all without needing humans

00:15:02.590 --> 00:15:06.149
to pre-process it perfectly first. What new,

00:15:06.309 --> 00:15:08.710
maybe entirely unexpected applications could

00:15:08.710 --> 00:15:11.190
that unlock for all of us? It's a fascinating

00:15:11.190 --> 00:15:13.330
future unfolding right in front of us. Definitely

00:15:13.330 --> 00:15:15.710
is. Thank you for joining us on this deep dive

00:15:15.710 --> 00:15:17.870
today. Yeah, thanks for listening. If you found

00:15:17.870 --> 00:15:19.509
this insightful, maybe share it with someone

00:15:19.509 --> 00:15:21.090
else who's trying to keep up with everything

00:15:21.090 --> 00:15:22.970
happening in AI. We'll see you next time.
