WEBVTT

00:00:00.000 --> 00:00:04.059
Imagine an AI, not one that just predicts what

00:00:04.059 --> 00:00:07.559
you'll type next, but what you'll do. Yeah. We're

00:00:07.559 --> 00:00:09.820
talking about a model that actually beat human

00:00:09.820 --> 00:00:12.960
cognitive models at predicting behavior. Just

00:00:12.960 --> 00:00:14.839
by reading psychology experiments. It's kind

00:00:14.839 --> 00:00:17.079
of wild. Yeah. It's not science fiction. It's

00:00:17.079 --> 00:00:21.070
this new AI called Centaur. Well, its accuracy

00:00:21.070 --> 00:00:23.969
is quite startling. Welcome to the Deep Dive.

00:00:24.050 --> 00:00:25.730
Today, we're going to unpack a whole bunch of

00:00:25.730 --> 00:00:28.910
insights trying to chart this really complex,

00:00:29.030 --> 00:00:32.549
fast-moving AI landscape. Right. It's all pulled

00:00:32.549 --> 00:00:35.289
from a recent newsletter that was pretty packed

00:00:35.289 --> 00:00:37.850
with info. Our mission really is to give you

00:00:37.850 --> 00:00:39.990
those crucial nuggets of knowledge, the stuff

00:00:39.990 --> 00:00:42.009
you need to kind of navigate this accelerating

00:00:42.009 --> 00:00:44.710
AI world. We'll kick off looking at Meta's...

00:00:45.530 --> 00:00:48.189
latest big quest, this AI superintelligence for

00:00:48.189 --> 00:00:50.590
everyone idea. Okay. Then we'll zoom out a bit,

00:00:50.590 --> 00:00:52.829
look at industry highlights, maybe some practical

00:00:52.829 --> 00:00:55.890
tools you can actually use, and then, yeah, dive

00:00:55.890 --> 00:00:58.649
deep into Centaur, that mind-reading AI, and what

00:00:58.649 --> 00:01:01.149
that means for understanding us, you know, human

00:01:01.149 --> 00:01:03.670
cognition. All right, let's get into Meta's big

00:01:03.670 --> 00:01:07.349
claims then. The company that, you know, gave us

00:01:07.349 --> 00:01:10.930
the metaverse dream. Now pitching AI superintelligence

00:01:10.930 --> 00:01:13.689
for everyone. It sounds so familiar, doesn't

00:01:13.689 --> 00:01:15.670
it? It really does. Like that metaverse push

00:01:15.670 --> 00:01:18.930
in 2021. Yeah. We all remember how that went.

00:01:19.150 --> 00:01:21.370
Horizon Worlds ended up being kind of this virtual

00:01:21.370 --> 00:01:23.469
ghost town. Right. Even Meta's own employees

00:01:23.469 --> 00:01:25.569
weren't really using it much. Yeah. A quiet digital

00:01:25.569 --> 00:01:28.409
place is a good way to put it. And now Zuckerberg's

00:01:28.409 --> 00:01:30.609
latest memo talks about AI reels you can chat

00:01:30.609 --> 00:01:33.909
with, these always-on AI video chats that mimic

00:01:33.909 --> 00:01:37.290
human expressions. And even AI friends, like

00:01:37.290 --> 00:01:39.969
designed to fill social gaps, give you the feeling

00:01:39.969 --> 00:01:43.230
of 15 friends. It's quite a vision. It is. But

00:01:43.230 --> 00:01:44.870
what's interesting, what's different this time

00:01:44.870 --> 00:01:46.750
maybe, is that while the pitch sounds familiar,

00:01:47.010 --> 00:01:49.969
Meta actually has some real traction. Okay. How

00:01:49.969 --> 00:01:53.530
so? Over a billion people are using Meta AI tools

00:01:53.530 --> 00:01:56.480
every month. A billion. That's just worlds away

00:01:56.480 --> 00:01:58.359
from the few hundred thousand metaverse users.

00:01:58.560 --> 00:02:02.540
Wow. Okay. That's a huge difference. Huge. And

00:02:02.540 --> 00:02:04.640
they're using AI coding tools inside the company,

00:02:04.719 --> 00:02:07.420
too, which helps them build faster. Plus, people

00:02:07.420 --> 00:02:11.599
are genuinely kind of forming attachments to

00:02:11.599 --> 00:02:14.020
these AI personas. Really attachments? Yeah,

00:02:14.039 --> 00:02:15.759
it's like this whole new type of social connection

00:02:15.759 --> 00:02:18.240
emerging. It's interesting. The fundamental problems

00:02:18.240 --> 00:02:20.120
are still there, right? These models, they still

00:02:20.120 --> 00:02:22.340
hallucinate, make stuff up. Oh, absolutely. They

00:02:22.340 --> 00:02:25.120
invent information. Their reasoning is often,

00:02:25.280 --> 00:02:28.219
you know, hit or miss. You give them a simple

00:02:28.219 --> 00:02:30.879
logic puzzle, even a basic game, and they often

00:02:30.879 --> 00:02:33.740
just break. So that superintelligence part still

00:02:33.740 --> 00:02:37.300
feels pretty far off, like science fiction territory.

00:02:37.659 --> 00:02:40.039
Very much so. And if you connect it to the bigger

00:02:40.039 --> 00:02:43.060
picture, you see meta is, well, it's rebranding.

00:02:43.449 --> 00:02:46.710
Again. Right. And this strategy keeps investors

00:02:46.710 --> 00:02:49.210
happy, keeps them excited. That's key. And it

00:02:49.210 --> 00:02:52.050
helps them pull in top AI talent, which is super

00:02:52.050 --> 00:02:55.250
competitive right now. Helps justify those kind

00:02:55.250 --> 00:02:57.289
of crazy salaries. So it's partly a business

00:02:57.289 --> 00:02:59.789
move, attracting talent and investment. Definitely.

00:03:00.009 --> 00:03:02.189
Less about some sudden breakthrough, more about

00:03:02.189 --> 00:03:05.349
smart positioning in the market. OK, so is Meta's

00:03:05.349 --> 00:03:08.849
AI push fundamentally different this time around?

00:03:08.949 --> 00:03:11.129
Or is it just, you know, a new label on an old

00:03:11.129 --> 00:03:13.210
approach? Well, it definitely has user traction

00:03:13.210 --> 00:03:16.430
this time, unlike the metaverse. But those core

00:03:16.430 --> 00:03:19.590
AI challenges. They absolutely remain. All right.

00:03:19.590 --> 00:03:22.270
Let's move beyond meta then. Look at the broader

00:03:22.270 --> 00:03:24.990
AI industry because there's just so much happening.

00:03:25.150 --> 00:03:26.949
Seems like it speeds up every week. Totally.

00:03:27.310 --> 00:03:29.490
Take Baidu, for instance. They just dropped Muse

00:03:29.490 --> 00:03:31.490
Steamer. MuseSteamer. Yeah. It's a new model.

00:03:31.710 --> 00:03:33.430
Basically, it goes head-to-head with Google's

00:03:33.430 --> 00:03:35.469
Veo. It makes these 10-second video clips. Okay.

00:03:35.629 --> 00:03:39.569
But get this. Fully synchronized visuals, sound

00:03:39.569 --> 00:03:42.389
effects, and spoken dialogue. All generated together.

00:03:42.990 --> 00:03:46.150
That's a pretty big step for multimodal AI, you

00:03:46.150 --> 00:03:47.949
know, AI that handles different types of data.

00:03:48.069 --> 00:03:50.229
That sounds impressive. And it brings us to some

00:03:50.229 --> 00:03:52.689
of the, let's say, weirder sides of AI's impact.

00:03:52.870 --> 00:03:54.669
Like, did you hear about that job hopper? The

00:03:54.669 --> 00:03:57.590
one landing multiple AI jobs at once? Yeah. Keeps

00:03:57.590 --> 00:04:00.069
getting caught, fired, then just does it again.

00:04:00.490 --> 00:04:03.509
Soft chuckle. Yeah. Yeah, I saw that. It's funny,

00:04:03.550 --> 00:04:06.370
but it kind of highlights how AI is really shaking

00:04:06.370 --> 00:04:08.530
up the job market, creating these weird news

00:04:08.530 --> 00:04:11.430
scenarios. Definitely. And speaking of shaking

00:04:11.430 --> 00:04:13.830
things up, look at news consumption. Oh, yeah.

00:04:14.430 --> 00:04:16.990
That's a big one. SimilarWeb found news-related

00:04:16.990 --> 00:04:20.310
searches on ChatGPT just exploded. Up, what, 212%?

00:04:20.310 --> 00:04:23.129
Yeah, massive jump. But at the same time,

00:04:23.170 --> 00:04:27.649
traffic to actual news websites down 26% since

00:04:27.649 --> 00:04:31.089
Google rolled out its AI overviews. That's significant.

00:04:31.209 --> 00:04:33.370
A big shift in how people get their information.

00:04:33.550 --> 00:04:35.889
Maybe a bit concerning. Could be, yeah. Changes

00:04:35.889 --> 00:04:37.949
the whole information ecosystem. But on the flip

00:04:37.949 --> 00:04:40.529
side... There are new opportunities. Like if

00:04:40.529 --> 00:04:42.889
you're good at spotting AI mistakes and fixing

00:04:42.889 --> 00:04:44.730
them. Like Sarah Skidd, the product marketing

00:04:44.730 --> 00:04:47.329
manager. Exactly. She's making like two grand

00:04:47.329 --> 00:04:50.329
just fixing bad AI website copy. So there's value

00:04:50.329 --> 00:04:52.689
in that human touch, that refinement. For sure.

00:04:52.810 --> 00:04:55.970
But then you've got MIT economist David Autor

00:04:55.970 --> 00:04:59.170
warning about a potential Mad Max scenario. Mad

00:04:59.170 --> 00:05:01.730
Max. What does he mean? Basically that AI could

00:05:01.730 --> 00:05:04.050
automate so much that a lot of current skills

00:05:04.050 --> 00:05:07.199
just become worthless. A pretty stark warning

00:05:07.199 --> 00:05:10.500
about economic disruption. Wow. So opportunity

00:05:10.500 --> 00:05:13.139
and disruption side by side. Exactly. And look

00:05:13.139 --> 00:05:15.480
at the money pouring in. Recognize, this investment

00:05:15.480 --> 00:05:19.560
firm. They just raised $1.7 billion. Billion

00:05:19.560 --> 00:05:23.220
with a B. Yep. Specifically to invest in mid

00:05:23.220 --> 00:05:26.420
-sized AI digital services companies. Shows huge

00:05:26.420 --> 00:05:28.360
confidence in the sector's growth. It's such

00:05:28.360 --> 00:05:31.430
a mix, isn't it? AI creating wealth, new jobs,

00:05:31.550 --> 00:05:33.610
but also disrupting industries, raising these

00:05:33.610 --> 00:05:36.769
big questions about, well, about us. Yeah. Huge

00:05:36.769 --> 00:05:39.550
opportunities and pretty profound existential

00:05:39.550 --> 00:05:42.410
questions all tangled up together. So thinking

00:05:42.410 --> 00:05:45.009
about this broader AI landscape, what's the takeaway

00:05:45.009 --> 00:05:47.800
for our careers and how we get information? Well,

00:05:47.860 --> 00:05:50.199
it's creating new jobs, definitely changing how

00:05:50.199 --> 00:05:53.040
we access news, and yeah, challenging the value

00:05:53.040 --> 00:05:55.139
of our existing skills. Okay, let's pivot then.

00:05:55.180 --> 00:05:57.199
Let's talk practical stuff. Tools, strategies,

00:05:57.459 --> 00:05:59.480
things you listening can actually use. I know

00:05:59.480 --> 00:06:01.259
a lot of people worry, especially if they run

00:06:01.259 --> 00:06:03.680
automation businesses, about these DIY tools

00:06:03.680 --> 00:06:06.480
maybe replacing them. It's a fair worry. But

00:06:06.480 --> 00:06:09.480
here's the thing. Most AI projects, they don't

00:06:09.480 --> 00:06:12.079
fail because the tech isn't good enough. They

00:06:12.079 --> 00:06:14.319
fail because of bad strategy, poor alignment

00:06:14.319 --> 00:06:16.639
with business goals. Ah, okay. So it's not just

00:06:16.639 --> 00:06:19.420
about the tool itself. Right. The key is to shift.

00:06:19.639 --> 00:06:22.019
Don't just be the person who implements the tool.

00:06:22.279 --> 00:06:25.819
Become that high-value strategic partner, the

00:06:25.819 --> 00:06:28.480
one who understands why you're using AI and how

00:06:28.480 --> 00:06:30.660
it fits the bigger picture. Makes sense. And

00:06:30.660 --> 00:06:33.470
for people who do want to build things. Maybe

00:06:33.470 --> 00:06:35.889
without being coding experts. Yeah, totally possible

00:06:35.889 --> 00:06:38.329
now. You can build your own custom AI apps without

00:06:38.329 --> 00:06:40.670
writing code. There are guides out there using

00:06:40.670 --> 00:06:44.430
tools like Firebase Studio for the backend stuff,

00:06:44.649 --> 00:06:48.529
database hosting. Right. And n8n for workflow

00:06:48.529 --> 00:06:50.810
automation. You can basically stitch together

00:06:50.810 --> 00:06:53.490
a real working app. Anyone can try it. It's kind

00:06:53.490 --> 00:06:55.870
of like playing with digital Lego blocks. That's

00:06:55.870 --> 00:06:57.970
pretty cool. Democratizing AI development a bit.

00:06:58.240 --> 00:07:00.500
Definitely. And even with existing tools like

00:07:00.500 --> 00:07:03.279
ChatGPT, becoming a real pro means learning specific

00:07:03.279 --> 00:07:05.720
techniques, how to refine your prompts, how to

00:07:05.720 --> 00:07:08.379
automate tasks. You know, I still wrestle with

00:07:08.379 --> 00:07:10.300
prompt drift myself sometimes. What's prompt

00:07:10.300 --> 00:07:14.019
drift again? It's when the AI's responses kind

00:07:14.019 --> 00:07:16.379
of change or get worse over time, even with the same

00:07:16.379 --> 00:07:18.620
prompt. Makes it hard to get consistent results.

00:07:18.720 --> 00:07:21.180
So mastering prompting is really key if you want

00:07:21.180 --> 00:07:24.920
top-notch output. It's an art. Gotcha. And there

00:07:24.920 --> 00:07:26.639
are new tools popping up all the time, right?

00:07:26.779 --> 00:07:30.100
Oh, yeah. Loads. Like OpenMemory. It syncs memory

00:07:30.100 --> 00:07:32.740
across different AIs. So you can pick up a conversation

00:07:32.740 --> 00:07:34.680
with one AI where you left off with another.

00:07:34.860 --> 00:07:37.920
Whoa, that sounds useful. Yeah. Or a0.dev.

00:07:38.519 --> 00:07:41.339
That one builds mobile apps in minutes. Even

00:07:41.339 --> 00:07:44.199
integrates Stripe for payments. In minutes. Seriously.

00:07:44.420 --> 00:07:47.800
Apparently. And Browse Anything. That's an AI

00:07:47.800 --> 00:07:49.500
agent for your browser that can automate pretty

00:07:49.500 --> 00:07:51.800
much any task you do on the web. Sounds super

00:07:51.800 --> 00:07:53.560
powerful for getting stuff done. Okay, those

00:07:53.560 --> 00:07:56.279
sound seriously useful. Any others? Well, there

00:07:56.279 --> 00:07:59.800
are some quirkier ones too, like Mori. It's this

00:07:59.800 --> 00:08:02.660
timer that shows your age or even like a countdown

00:08:02.660 --> 00:08:05.980
to your estimated death in real time. Ah, okay.

00:08:06.439 --> 00:08:10.199
That's morbidly interesting. Chuckles. Yeah,

00:08:10.339 --> 00:08:12.839
definitely a conversation starter. And Beep That

00:08:12.839 --> 00:08:15.220
Out, an AI profanity filter for content creators.

00:08:15.459 --> 00:08:17.639
Could be handy for keeping things clean. Right.

00:08:17.720 --> 00:08:20.139
And just some quick hits from the industry that

00:08:20.139 --> 00:08:23.839
show how fast things are moving. Go for it. Midjourney

00:08:23.839 --> 00:08:27.220
has 17 new art styles. Google finally rolled

00:08:27.220 --> 00:08:30.620
out Veo 3, their video model, globally. About time.

00:08:30.939 --> 00:08:33.759
Daniel Gross, co-founder of SSI, he joined Meta's

00:08:33.759 --> 00:08:37.360
big AI lab. Meta's trying a new tactic, having

00:08:37.360 --> 00:08:39.759
chatbots message you first to keep you engaged.

00:08:39.980 --> 00:08:43.139
Oh, interesting. Proactive chatbots. And maybe

00:08:43.139 --> 00:08:45.860
the most amazing one. AI apparently gave fresh

00:08:45.860 --> 00:08:48.460
hope for treating male infertility, like, in

00:08:48.460 --> 00:08:51.320
hours. Yeah. A huge potential medical breakthrough.

00:08:51.580 --> 00:08:54.269
Wow. See? The pace is just incredible. These

00:08:54.269 --> 00:08:56.230
tools are really changing what's possible for

00:08:56.230 --> 00:08:58.529
everyone. So for the everyday user, how can they

00:08:58.529 --> 00:09:01.289
best leverage all these new AI capabilities that

00:09:01.289 --> 00:09:03.669
are coming out so fast? Really, by using these

00:09:03.669 --> 00:09:07.049
smart tools wisely and, crucially, getting better

00:09:07.049 --> 00:09:08.990
at understanding those prompt techniques. Okay,

00:09:09.049 --> 00:09:11.169
now for the part that feels, well, feels like

00:09:11.169 --> 00:09:13.250
a pretty big deal, maybe even a little unsettling.

00:09:13.289 --> 00:09:15.730
Let's talk about Centaur. Right, Centaur. So

00:09:15.730 --> 00:09:18.929
researchers at Helmholtz Munich in Germany, they

00:09:18.929 --> 00:09:20.970
trained this AI, and you called it mind reading

00:09:20.970 --> 00:09:24.490
earlier. It's shockingly good at predicting what

00:09:24.490 --> 00:09:27.129
humans will choose to do next. How did they train

00:09:27.129 --> 00:09:30.830
it? Get this. 10 million real choices from over

00:09:30.830 --> 00:09:35.450
50,000 actual people across 160 different psychology

00:09:35.450 --> 00:09:38.690
experiments. 10 million data points on human

00:09:38.690 --> 00:09:42.080
choice. Yeah. Whoa. Imagine scaling that kind

00:09:42.080 --> 00:09:44.460
of insight, understanding human decision-making

00:09:44.460 --> 00:09:46.879
on a massive, massive scale. The implications

00:09:46.879 --> 00:09:49.519
are just huge. And they built it using Meta's

00:09:49.519 --> 00:09:52.600
Llama 3.1 model. Yep, a standard large language

00:09:52.600 --> 00:09:54.340
model. They basically turned these psychology

00:09:54.340 --> 00:09:57.179
tasks into plain English descriptions. Then they

00:09:57.179 --> 00:09:59.159
fine-tuned the model, but only tweaked like

00:09:59.159 --> 00:10:01.940
0.15% of its parameters. Tiny adjustment. And

00:10:01.940 --> 00:10:04.289
the whole training took... Five days. Just five

00:10:04.289 --> 00:10:06.889
days to get these incredible results. It's ridiculously

00:10:06.889 --> 00:10:09.129
fast. Incredible results meaning how good was

00:10:09.129 --> 00:10:11.830
it? Okay, so it went up against 14 classic cognitive

00:10:11.830 --> 00:10:13.909
models from psychology. You know, established

00:10:13.909 --> 00:10:15.889
theories of how we make decisions. Centaur beat

00:10:15.889 --> 00:10:20.309
them in 31 out of 32 different tasks. Wow. It

00:10:20.309 --> 00:10:23.590
just outperformed decades of psychological modeling.

00:10:23.730 --> 00:10:25.809
Pretty much. And even when the researchers tried

00:10:25.809 --> 00:10:28.149
to trick it, like changing the details in stories

00:10:28.149 --> 00:10:30.809
or throwing in new logic problems it hadn't seen,

00:10:31.009 --> 00:10:34.549
Centaur just adapted. Like a human would. So

00:10:34.549 --> 00:10:37.389
it's not just memorizing patterns, it's generalizing.

00:10:37.570 --> 00:10:39.850
Exactly. It seems to be showing this deeper level

00:10:39.850 --> 00:10:42.269
of adaptation. And the wildest part you mentioned

00:10:42.269 --> 00:10:46.529
earlier, its internal structure started looking

00:10:46.529 --> 00:10:49.429
like a human brain. Yeah, that's the kicker.

00:10:49.490 --> 00:10:52.009
Without being trained on any actual brain scan

00:10:52.009 --> 00:10:55.169
data, nothing neurological, it just converged.

00:10:55.519 --> 00:10:58.159
It organized itself in a way that mirrors how

00:10:58.159 --> 00:11:00.340
our brains process things. That feels profound,

00:11:00.600 --> 00:11:02.320
like it found the optimal structure independently.

00:11:02.779 --> 00:11:05.399
It does, and it gets better. Centaur actually

00:11:05.399 --> 00:11:07.700
discovered a completely new decision-making

00:11:07.700 --> 00:11:10.360
strategy, one that performed better than existing

00:11:10.360 --> 00:11:13.580
psychological theories. The AI found a new way

00:11:13.580 --> 00:11:15.759
we make decisions. Yep, which just flips things

00:11:15.759 --> 00:11:18.139
around, right? It raises this huge question.

00:11:18.299 --> 00:11:21.139
What can AI teach us about ourselves, about our

00:11:21.139 --> 00:11:23.899
own minds? It's like getting insights from, well...

00:11:24.139 --> 00:11:26.580
From a non -human intelligence. So Centaur is

00:11:26.580 --> 00:11:29.320
like having a virtual human test subject. You

00:11:29.320 --> 00:11:32.240
can run countless simulations. Exactly. Scientists

00:11:32.240 --> 00:11:35.620
can simulate behavior, test out theories, explore

00:11:35.620 --> 00:11:39.059
cognition without needing labs, without years

00:11:39.059 --> 00:11:41.600
of recruiting people for trials. Think about

00:11:41.600 --> 00:11:44.460
what that could do for mental health research,

00:11:44.990 --> 00:11:48.230
behavioral design, education. Marketing. Oh yeah,

00:11:48.289 --> 00:11:50.370
definitely marketing too. The efficiency gains

00:11:50.370 --> 00:11:52.690
could be absolutely massive. What? There's always

00:11:52.690 --> 00:11:54.710
a but, isn't there? What's the downside? The

00:11:54.710 --> 00:11:57.750
flip side is pretty significant. If AI can simulate

00:11:57.750 --> 00:12:00.500
your decisions this accurately. It can manipulate

00:12:00.500 --> 00:12:03.240
them? Precisely. It can be used to nudge you,

00:12:03.279 --> 00:12:06.279
maybe exploit your biases, influence your behavior

00:12:06.279 --> 00:12:09.440
at a huge scale. This could be maybe the first

00:12:09.440 --> 00:12:11.659
real step towards what people call cognitive

00:12:11.659 --> 00:12:14.960
AGI. Artificial general intelligence, but focused

00:12:14.960 --> 00:12:17.059
on thought processes. Right. It's not conscious,

00:12:17.139 --> 00:12:19.820
let's be clear. Not sentient like us. But it

00:12:19.820 --> 00:12:21.519
might be getting close enough to mimicking our

00:12:21.519 --> 00:12:24.179
cognitive processes that it fundamentally changes

00:12:24.179 --> 00:12:26.480
the game. And that makes it an incredibly powerful

00:12:26.480 --> 00:12:29.240
tool. Powerful and with a really serious potential.

00:12:31.700 --> 00:12:37.019
So this predictive power, does it ultimately

00:12:37.019 --> 00:12:40.059
open more doors for understanding ourselves or

00:12:40.059 --> 00:12:43.980
does it close off more avenues for genuine human

00:12:43.980 --> 00:12:46.740
choice and autonomy? That's the question, isn't

00:12:46.740 --> 00:12:48.759
it? It definitely opens huge doors for research.

00:12:49.000 --> 00:12:51.340
Yeah. But yeah, it raises very serious concerns

00:12:51.340 --> 00:12:53.960
about manipulation and autonomy. OK, so wrapping

00:12:53.960 --> 00:12:56.299
this all up. What does this mean for you listening

00:12:56.299 --> 00:13:00.279
right now? We've covered a lot. From Meta's big

00:13:00.279 --> 00:13:03.440
AI dreams, maybe learning a bit from their metaverse

00:13:03.440 --> 00:13:06.559
bumps, to these incredible leaps like Baidu's

00:13:06.559 --> 00:13:08.840
video generation. We looked at new tools you

00:13:08.840 --> 00:13:11.000
can use and how the job market is shifting under

00:13:11.000 --> 00:13:12.879
our feet. Yeah, the big idea, I think, is just

00:13:12.879 --> 00:13:15.539
the sheer speed, the accelerating pace of AI

00:13:15.539 --> 00:13:17.940
development. It's almost dizzying. Right. From

00:13:17.940 --> 00:13:20.580
the practical tools you can download today to something

00:13:20.580 --> 00:13:22.980
as profound as Centaur. AI's reach is expanding

00:13:22.980 --> 00:13:25.340
so fast it's touching everything, how we get

00:13:25.340 --> 00:13:28.379
news, how we work, even how we understand what

00:13:28.379 --> 00:13:30.840
it means to be human. That line between the hype

00:13:30.840 --> 00:13:33.139
and the actual reality seems blurrier than ever.

00:13:33.320 --> 00:13:35.120
Especially with tools that can predict our own

00:13:35.120 --> 00:13:37.700
behavior with this kind of accuracy, it's constantly

00:13:37.700 --> 00:13:40.879
shifting. This deep dive really paints a picture

00:13:40.879 --> 00:13:43.320
of a future that's, well, it's full of incredible

00:13:43.320 --> 00:13:45.500
promise, but also really complex challenges.

00:13:45.500 --> 00:13:48.440
Definitely both. And as AI gets woven deeper

00:13:48.440 --> 00:13:51.259
into our lives, predicting our choices, maybe

00:13:51.259 --> 00:13:54.379
even sensing our emotions, it leaves us with

00:13:54.379 --> 00:13:56.100
a pretty crucial question for you to think about.

00:13:56.259 --> 00:13:58.580
Yeah. If an AI can predict what you're going

00:13:58.580 --> 00:14:01.460
to do. How much of your decision-making is still

00:14:01.460 --> 00:14:03.980
truly, authentically your own? That's a heavy

00:14:03.980 --> 00:14:06.379
one. Maybe think about how knowing these capabilities

00:14:06.379 --> 00:14:09.559
exist might change how you make choices. What

00:14:09.559 --> 00:14:12.460
does it all mean for our future autonomy in a

00:14:12.460 --> 00:14:15.460
world that's increasingly driven by AI? Now,

00:14:15.460 --> 00:14:16.159
outro music.
