WEBVTT

00:00:00.000 --> 00:00:01.860
When we first started hearing about generative

00:00:01.860 --> 00:00:04.679
AI everywhere, the story was pretty clear, wasn't

00:00:04.679 --> 00:00:08.699
it? This amazing new tool for, well, for corporations.

00:00:08.880 --> 00:00:11.380
Yeah, absolutely. We all pictured, you know,

00:00:11.560 --> 00:00:13.699
businesses getting huge productivity boosts,

00:00:13.800 --> 00:00:16.660
coders, marketers, analysts, making everything

00:00:16.660 --> 00:00:19.140
faster. Using it as a purely professional thing.

00:00:19.709 --> 00:00:22.370
But that picture, it's almost completely flipped

00:00:22.370 --> 00:00:24.469
now. We're diving into this really fascinating

00:00:24.469 --> 00:00:27.789
analysis today. They looked at how 700 million

00:00:27.789 --> 00:00:30.429
people are actually using ChatGPT in the real

00:00:30.429 --> 00:00:33.109
world. And the results are? Well, they're pretty

00:00:33.109 --> 00:00:36.170
surprising. Yeah, profound is a good word. It

00:00:36.170 --> 00:00:38.429
turns out, for almost three-quarters of users,

00:00:38.770 --> 00:00:40.990
AI isn't really that corporate tool. It's more

00:00:40.990 --> 00:00:43.810
like a personal friend or a tutor, maybe even

00:00:43.810 --> 00:00:46.009
a travel agent sometimes. Welcome to the deep

00:00:46.009 --> 00:00:48.820
dive. This study gives us this incredible window

00:00:48.820 --> 00:00:51.600
into what 700 million people are doing with AI.

00:00:52.340 --> 00:00:55.280
And it shows it's changing not just what we get

00:00:55.280 --> 00:00:57.759
done, but really how we do things, how we learn,

00:00:58.060 --> 00:01:00.359
how we edit our own writing. Even how we ask

00:01:00.359 --> 00:01:02.600
for advice on really personal stuff, like relationships.

00:01:03.079 --> 00:01:05.780
So our mission today is to unpack these real

00:01:05.780 --> 00:01:08.099
habits. We want to look at the shifts, especially

00:01:08.099 --> 00:01:12.120
in professional skills, and... Maybe touch on

00:01:12.120 --> 00:01:14.680
some of the risks. There's this one big ethical

00:01:14.680 --> 00:01:17.260
flag they raised called decision laundering that

00:01:17.260 --> 00:01:19.379
we definitely need to talk about. Okay, let's

00:01:19.379 --> 00:01:21.040
get into it. Where do we start? That personal

00:01:21.040 --> 00:01:23.840
takeover seems huge. It really is. So finding

00:01:23.840 --> 00:01:28.099
number one, the split. Only about 27% of all

00:01:28.099 --> 00:01:30.280
the chat interactions they analyzed were actually

00:01:30.280 --> 00:01:33.640
for work. Just over a quarter, wow. Which means...

00:01:33.310 --> 00:01:36.530
The vast majority, 73%, is all personal stuff.

00:01:36.569 --> 00:01:38.689
People exploring hobbies, learning new things,

00:01:38.829 --> 00:01:41.129
managing their daily lives. It's become this

00:01:41.129 --> 00:01:44.349
kind of ultimate personal assistant. And it happens

00:01:44.349 --> 00:01:46.549
so fast. As the study mentioned, just a year ago,

00:01:46.810 --> 00:01:49.629
personal use was around 53%. Right. From barely

00:01:49.629 --> 00:01:52.409
half to almost three quarters in just 12 months.

00:01:52.609 --> 00:01:55.390
That's a really rapid, fundamental shift in how

00:01:55.390 --> 00:01:57.510
we're integrating this tech into our lives. So

00:01:57.510 --> 00:02:00.310
why? Why are we suddenly outsourcing so much

00:02:00.310 --> 00:02:04.629
of our curiosity and daily admin to an AI? Well,

00:02:04.790 --> 00:02:07.450
a few reasons probably. The AI feels like this

00:02:07.450 --> 00:02:09.330
incredibly knowledgeable friend who's always

00:02:09.330 --> 00:02:14.090
there, 3 a.m., no problem. And crucially, there's

00:02:14.090 --> 00:02:17.050
no judgment. Ah, that's a big one. Yeah. If you

00:02:17.050 --> 00:02:20.509
feel kind of silly asking a coworker, what exactly

00:02:20.509 --> 00:02:23.110
is inflation again? Or, you know, how do I fix

00:02:23.110 --> 00:02:26.930
this leaky faucet? Uh, the AI just answers. No

00:02:26.930 --> 00:02:29.030
eye rolling. It takes away that social friction,

00:02:29.250 --> 00:02:31.370
that little bit of anxiety about asking something

00:02:31.370 --> 00:02:33.409
basic, and I guess the corporate side plays into

00:02:33.409 --> 00:02:35.900
it too. Companies being nervous about data leaks.

00:02:36.240 --> 00:02:38.020
Definitely. There's still a lot of hesitation

00:02:38.020 --> 00:02:40.680
about employees pasting sensitive company info

00:02:40.680 --> 00:02:43.860
into these public tools. So that naturally pushes

00:02:43.860 --> 00:02:46.080
some usage towards personal accounts, personal

00:02:46.080 --> 00:02:49.099
questions at home. OK, so what are people actually

00:02:49.099 --> 00:02:52.020
doing in that big 73% slice? What are the main

00:02:52.020 --> 00:02:54.599
personal uses? The top three are pretty foundational.

00:02:54.780 --> 00:02:57.460
First is just seeking information. But people

00:02:57.460 --> 00:02:59.520
want direct answers, not just a page of links

00:02:59.520 --> 00:03:01.539
like Google gives you. Right, like, give me a

00:03:01.539 --> 00:03:03.360
quick healthy recipe that doesn't use chicken,

00:03:03.340 --> 00:03:06.819
not here are 10 recipe blogs. Exactly. Second

00:03:06.819 --> 00:03:10.659
is writing, helping with emails to friends, social

00:03:10.659 --> 00:03:12.860
media captions, maybe even writing a little poem

00:03:12.860 --> 00:03:15.439
or something, polishing personal communication.

00:03:16.180 --> 00:03:18.900
And third is practical guidance, just straightforward

00:03:18.900 --> 00:03:21.919
how-to questions. How do I plan a three-day

00:03:21.919 --> 00:03:24.719
trip to Paris? Or how do I change a bike tire?

00:03:24.840 --> 00:03:26.719
That kind of thing. It really feels like we're

00:03:26.719 --> 00:03:30.300
offloading that initial drudgery, the basic research

00:03:30.300 --> 00:03:32.740
phase of figuring things out. Yeah, which brings

00:03:32.740 --> 00:03:35.520
up a really interesting point How does this massive

00:03:35.520 --> 00:03:38.639
shift towards personal AI use change how we

00:03:38.639 --> 00:03:41.520
even think about productivity day-to-day. That's

00:03:41.520 --> 00:03:44.120
a good question, I guess. I mean, this productivity

00:03:44.120 --> 00:03:46.360
isn't just about work output anymore. It's also

00:03:46.360 --> 00:03:48.460
about learning things faster in your personal

00:03:48.460 --> 00:03:51.300
life. We're just handling those mundane life tasks

00:03:51.300 --> 00:03:53.080
more efficiently. So you have more brain space

00:03:53.080 --> 00:03:55.340
for other things Yeah, basically freeing up time

00:03:55.340 --> 00:03:57.719
by handling the small stuff faster. Okay, so

00:03:57.719 --> 00:04:00.949
let's pivot back then. What about the 27% that

00:04:00.949 --> 00:04:03.530
is for work? What's happening inside that professional

00:04:03.530 --> 00:04:06.449
bubble? Right. So within that 27%, the biggest

00:04:06.449 --> 00:04:09.770
chunk, maybe unsurprisingly, about 40% of all

00:04:09.770 --> 00:04:11.770
work-related tasks have something to do with

00:04:11.770 --> 00:04:14.650
writing. Makes sense. Lots of emails, reports,

00:04:14.870 --> 00:04:17.009
memos. But here's the really interesting twist.

00:04:17.410 --> 00:04:21.120
When people use AI for writing at work, two-thirds

00:04:21.120 --> 00:04:24.860
of the time, about 66%, they're asking it to

00:04:24.860 --> 00:04:27.279
edit or change text they've already written themselves.

00:04:27.920 --> 00:04:30.060
So they're not just saying, write me a report

00:04:30.060 --> 00:04:33.060
on Q3 sales. Exactly. They're not asking for

00:04:33.060 --> 00:04:35.759
new stuff from a blank slate nearly as often.

00:04:36.079 --> 00:04:38.180
It seems people have caught on pretty quickly

00:04:38.180 --> 00:04:41.959
to what some call AI slop. AI slop? OK, define

00:04:41.959 --> 00:04:44.879
that. It's that, you know, generic, robotic-sounding

00:04:44.879 --> 00:04:47.360
text the AI spits out if you ask it to write

00:04:47.360 --> 00:04:49.639
something complex from scratch. It often lacks

00:04:49.639 --> 00:04:52.459
real context, sounds bland, and it usually needs

00:04:52.459 --> 00:04:54.399
a ton of editing anyway. Right. It doesn't sound

00:04:54.399 --> 00:04:56.860
like you or understand the nuances of your specific

00:04:56.860 --> 00:04:59.120
situation. Precisely. So the smarter approach,

00:04:59.199 --> 00:05:01.680
the one people seem to be adopting, is you write

00:05:01.680 --> 00:05:03.839
the first draft, you put your own knowledge,

00:05:03.959 --> 00:05:06.060
your tone, your context in there. You anchor

00:05:06.060 --> 00:05:08.259
it. Yeah, exactly. And then you ask the AI to

00:05:08.259 --> 00:05:10.240
act like an editor, clean it up, make it more

00:05:10.240 --> 00:05:11.759
concise, check the grammar, maybe make it sound

00:05:11.759 --> 00:05:14.079
more professional. You provide the core, the

00:05:14.079 --> 00:05:16.319
AI provides the polish. They give some examples

00:05:16.319 --> 00:05:18.589
in the study, right? Like, for a sick email?

00:05:18.670 --> 00:05:20.529
Yeah, a good one. Instead of just write a sick

00:05:20.529 --> 00:05:22.949
email, the user drafts something simple like,

00:05:23.370 --> 00:05:25.810
hey boss, woke up feeling rough, can't come in,

00:05:25.829 --> 00:05:28.189
will check email later. Then they feed that

00:05:28.189 --> 00:05:30.810
to the AI and say, make this sound more professional

00:05:30.810 --> 00:05:32.870
and formal. Okay, that makes a lot of sense.

00:05:33.490 --> 00:05:36.589
It's using the AI's strength, refining language,

00:05:36.589 --> 00:05:38.889
without relying on it for the core message or

00:05:38.889 --> 00:05:43.029
context. I have to say, I still wrestle with prompt

00:05:43.029 --> 00:05:46.360
drift myself sometimes, you know. Trying to get

00:05:46.360 --> 00:05:48.980
the AI to generate exactly what I want from scratch.

00:05:49.399 --> 00:05:51.720
It can be tough to keep it on track. Using it

00:05:51.720 --> 00:05:53.439
as an editor after I've written the main points

00:05:53.439 --> 00:05:56.540
feels, well, way more effective usually. Yeah,

00:05:56.540 --> 00:05:58.920
it avoids that whole battle of trying to steer

00:05:58.920 --> 00:06:01.579
its massive generalization engine. What else

00:06:01.579 --> 00:06:03.519
are people doing for work? Well, summarizing

00:06:03.519 --> 00:06:07.300
is big. Taking a long report, say 20 pages, and

00:06:07.300 --> 00:06:09.439
asking for the key bullet points, that's super

00:06:09.439 --> 00:06:11.560
useful. Oh yeah, definitely. Brainstorming, too.

00:06:11.920 --> 00:06:14.339
Like, give me 10 creative ideas for a new coffee

00:06:14.339 --> 00:06:16.649
shop. Getting that initial list to react to.

00:06:16.930 --> 00:06:18.350
What about programming? I thought that would

00:06:18.350 --> 00:06:20.649
be higher. Surprisingly low, actually. Only about

00:06:20.649 --> 00:06:23.829
4.2% in these general chats. The thinking is

00:06:23.829 --> 00:06:25.990
that coders are probably using more specialized

00:06:25.990 --> 00:06:28.569
tools, like GitHub Copilot, that are built right

00:06:28.569 --> 00:06:31.389
into their workflow. Ah, OK. That makes sense.

00:06:31.509 --> 00:06:33.910
Different tools for different jobs. So if the

00:06:33.910 --> 00:06:36.629
key professional skill is becoming editing and

00:06:36.629 --> 00:06:40.209
refining AI output rather than just pure creation,

00:06:41.129 --> 00:06:43.829
what does that mean for human creativity, especially

00:06:43.829 --> 00:06:46.629
in that first crucial drafting stage? It suggests

00:06:46.629 --> 00:06:49.290
the human role is really shifting. Maybe less

00:06:49.290 --> 00:06:52.110
about being the initial author from zero and

00:06:52.110 --> 00:06:54.569
more about being the final judge, the curator,

00:06:54.970 --> 00:06:56.889
the one who adds the essential context and makes

00:06:56.889 --> 00:06:58.750
the final call. Okay, let's talk about where

00:06:58.750 --> 00:07:01.810
the lines get blurry between tool and something

00:07:01.810 --> 00:07:04.029
else. The study mentioned teaching and learning.

00:07:04.230 --> 00:07:06.810
Yeah, about 10 % of usage falls into that category.

00:07:07.189 --> 00:07:09.089
And there's a good side and a worrying side here.

00:07:09.209 --> 00:07:11.649
Okay, the good. The good part is the AI can be

00:07:11.649 --> 00:07:14.110
an incredibly patient tutor. You can ask it to

00:07:14.110 --> 00:07:16.230
explain something complex like photosynthesis

00:07:16.230 --> 00:07:18.730
over and over in simpler terms, like you're 10

00:07:18.730 --> 00:07:21.589
years old, and it won't get annoyed. That's pretty

00:07:21.589 --> 00:07:24.290
powerful for self-learning. Definitely. But

00:07:24.290 --> 00:07:26.589
the worrying part, and this is especially true

00:07:26.589 --> 00:07:29.550
for students, is the hallucination problem. Right.

00:07:29.769 --> 00:07:31.829
We should quickly define that. AI hallucination

00:07:31.829 --> 00:07:34.209
is basically when the AI makes up confident-sounding

00:07:34.209 --> 00:07:36.790
stuff that is factually wrong. Exactly. Like

00:07:36.790 --> 00:07:39.310
that funny example they used, the AI inventing

00:07:39.310 --> 00:07:42.009
a fictional John Avocado as the inventor of avocado

00:07:42.009 --> 00:07:44.870
toast back in the 50s. It sounds plausible, but

00:07:44.870 --> 00:07:47.009
it's completely made up. And if students just

00:07:47.009 --> 00:07:49.439
accept that without checking... They risk learning

00:07:49.439 --> 00:07:52.100
things that just aren't true. It trains a habit

00:07:52.100 --> 00:07:54.100
of acceptance rather than critical thinking,

00:07:54.319 --> 00:07:57.360
which is... not great. Yeah, definitely not.

00:07:57.720 --> 00:07:59.660
What about the more social uses? This is where

00:07:59.660 --> 00:08:02.500
it gets even blurrier. About 2% of messages

00:08:02.500 --> 00:08:05.180
were people asking for relationship advice. Wow.

00:08:05.519 --> 00:08:09.449
Asking AI about... Boyfriend problems, family

00:08:09.449 --> 00:08:11.990
issues. Yeah, things like that. And the why is

00:08:11.990 --> 00:08:14.089
probably similar to asking basic questions. It's

00:08:14.089 --> 00:08:15.850
always available and there's zero judgment. You

00:08:15.850 --> 00:08:17.410
don't have to worry about burdening a friend

00:08:17.410 --> 00:08:20.230
or feeling embarrassed. But the risk there seems

00:08:20.230 --> 00:08:23.850
really high. Enormous. AI doesn't have feelings.

00:08:24.009 --> 00:08:26.610
It doesn't understand love or grief or jealousy.

00:08:26.970 --> 00:08:29.990
It just recognizes patterns in data about how

00:08:29.990 --> 00:08:31.990
humans talk about those things. So the advice

00:08:31.990 --> 00:08:35.500
is going to be generic, based on averages, not

00:08:35.500 --> 00:08:38.500
on your specific nuanced human situation. Exactly.

00:08:38.799 --> 00:08:41.379
It lacks that essential human context and, frankly,

00:08:41.659 --> 00:08:44.340
any real moral compass. Relying on it for deep

00:08:44.340 --> 00:08:47.120
emotional guidance feels, well, pretty concerning.

00:08:47.419 --> 00:08:49.679
And there was even small talk. Another 2% was

00:08:49.679 --> 00:08:52.259
just basic small talk. Hi, how are you? Tell

00:08:52.259 --> 00:08:55.110
me a joke. Things like that. Which on one level

00:08:55.110 --> 00:08:57.889
is harmless, but it does suggest that for some

00:08:57.889 --> 00:09:00.090
people the line between tool and companion is

00:09:00.090 --> 00:09:02.389
basically gone. It could even point to, you know,

00:09:02.389 --> 00:09:05.509
loneliness replacing potentially tricky human

00:09:05.509 --> 00:09:08.669
chats with easy, predictable AI ones. And this

00:09:08.669 --> 00:09:10.490
all ties into a bigger trend they saw, right?

00:09:10.610 --> 00:09:12.870
Moving from doing to asking. Yeah, that was a

00:09:12.870 --> 00:09:15.690
key shift. Asking the AI to actively do a task

00:09:15.690 --> 00:09:18.850
made up about 35% of usage, but asking it for

00:09:18.850 --> 00:09:21.809
information, advice, or opinions, asking or consulting

00:09:21.809 --> 00:09:24.289
that was over half, almost 52%. So we want the

00:09:24.289 --> 00:09:26.690
AI to help us think, help us decide, more than

00:09:26.690 --> 00:09:29.110
we want it to just perform tasks for us. Seems

00:09:29.110 --> 00:09:31.779
that way. Which leads to another question. If

00:09:31.779 --> 00:09:34.820
AI is always there, this patient, non-judgmental

00:09:34.820 --> 00:09:38.639
listener, does that constant availability maybe

00:09:38.639 --> 00:09:42.159
reduce our own ability or willingness to engage

00:09:42.159 --> 00:09:45.279
in those messy, sometimes difficult but ultimately

00:09:45.279 --> 00:09:47.620
deeper human connections? It feels like it could,

00:09:47.820 --> 00:09:51.299
yeah. It risks making us less reliant on the

00:09:51.299 --> 00:09:54.179
hard work of real, nuanced human interaction,

00:09:54.440 --> 00:09:56.500
which is where real growth happens. Okay, let's

00:09:56.500 --> 00:09:58.860
talk about skill levels. The study had some really

00:09:58.860 --> 00:10:01.899
interesting insights into how AI affects beginners

00:10:01.899 --> 00:10:05.490
versus experts. It's almost like a total inversion.

00:10:05.690 --> 00:10:07.950
How so? For beginners, people who are novices

00:10:07.950 --> 00:10:10.289
at something, AI is amazing. It can help you

00:10:10.289 --> 00:10:13.350
leap from knowing basically nothing, 0%, up to

00:10:13.350 --> 00:10:15.970
maybe 80% proficiency really fast. Like the

00:10:15.970 --> 00:10:17.889
coding example. If you've never written HTML,

00:10:18.049 --> 00:10:20.169
you can ask for a simple website for your, I

00:10:20.169 --> 00:10:22.190
don't know, local bakery. And boom, you get a

00:10:22.190 --> 00:10:24.870
decent functional starting point. It dramatically

00:10:24.870 --> 00:10:27.309
lowers the barrier to entry for learning practical

00:10:27.309 --> 00:10:30.110
new skills. It makes starting less intimidating.

00:10:30.330 --> 00:10:33.279
But for experts, it's different. Completely different

00:10:33.279 --> 00:10:35.960
story. Experts usually already know that basic

00:10:35.960 --> 00:10:39.899
80%. They operate in that final, tricky 20%

00:10:39.899 --> 00:10:43.080
where nuance, deep experience, and subtle judgment

00:10:43.080 --> 00:10:46.059
matter most. And that's where AI struggles. Often,

00:10:46.179 --> 00:10:48.960
yeah. This leads to what the study called knowledge

00:10:48.960 --> 00:10:51.960
dilution. The AI might get the basics right,

00:10:52.039 --> 00:10:54.360
but it messes up the critical details that define

00:10:54.360 --> 00:10:56.960
true expertise. Like the Hemingway example they

00:10:56.960 --> 00:10:59.559
mentioned, an AI might mimic the short sentences.

00:10:59.759 --> 00:11:01.559
Right, it gets the superficial style points,

00:11:01.879 --> 00:11:04.360
but it misses the underlying tone, the subtext,

00:11:04.480 --> 00:11:07.179
the why behind Hemingway's choices. So the expert

00:11:07.179 --> 00:11:09.440
ends up spending their valuable time just correcting

00:11:09.440 --> 00:11:13.039
the AI's generic, slightly off output. So the

00:11:13.039 --> 00:11:15.379
AI helps the novice more than the expert in a

00:11:15.379 --> 00:11:17.899
way. The expert still has to provide that crucial

00:11:17.899 --> 00:11:20.840
last 20 % of human insight. Exactly. The real

00:11:20.840 --> 00:11:23.200
value add is still human at the highest levels.

00:11:23.580 --> 00:11:25.899
And thinking about who's using this, the study

00:11:25.899 --> 00:11:27.799
found that half of all messages are coming from

00:11:27.799 --> 00:11:31.120
Gen Z. Half? Wow. I mean, imagine scaling that

00:11:31.120 --> 00:11:32.940
learning curve, or maybe that dependency curve,

00:11:33.399 --> 00:11:36.080
to potentially billions of queries globally from

00:11:36.080 --> 00:11:38.620
just that generation. That's huge. It is. And

00:11:38.620 --> 00:11:41.139
it drives that worry we talked about. Are young

00:11:41.139 --> 00:11:43.220
people becoming too dependent? Are they using

00:11:43.220 --> 00:11:45.899
it to bypass the struggle of learning, asking

00:11:45.899 --> 00:11:48.919
for the whole essay instead of using it smartly

00:11:48.919 --> 00:11:51.600
for an outline or to check their arguments? It's

00:11:51.600 --> 00:11:53.440
the difference between cheating yourself and

00:11:53.440 --> 00:11:55.720
augmenting yourself. Right. It always comes down

00:11:55.720 --> 00:11:58.200
to how you use the tool. But the temptation to

00:11:58.200 --> 00:12:00.899
take the easy route is definitely there. On a

00:12:00.899 --> 00:12:02.600
positive note, though, they mentioned the gender

00:12:02.600 --> 00:12:06.340
gap is closing. Yeah, that's great news for accessibility.

00:12:06.799 --> 00:12:09.879
It's almost even now, about 48% male users.

00:12:10.240 --> 00:12:12.679
And they did see some slight tendency differences

00:12:12.679 --> 00:12:15.559
in how people use it. Like what? Female users

00:12:15.559 --> 00:12:17.860
tended to lean a bit more towards help with writing

00:12:17.860 --> 00:12:20.419
and practical guidance, like planning or organizing

00:12:20.419 --> 00:12:23.000
things. Male users leaned a bit more towards

00:12:23.000 --> 00:12:25.559
technical help, coding, and straightforward information

00:12:25.559 --> 00:12:28.419
seeking. Just tendencies, though, not hard rules,

00:12:28.480 --> 00:12:30.679
obviously. Interesting. So this brings up a big

00:12:30.679 --> 00:12:34.059
question for the future, then. If AI gets really

00:12:34.059 --> 00:12:37.120
good at handling that first 80% of almost any

00:12:37.120 --> 00:12:39.799
task, giving everyone a solid starting point,

00:12:41.159 --> 00:12:43.159
how do future experts actually develop those

00:12:43.159 --> 00:12:45.059
foundational skills, the stuff you learned in

00:12:45.059 --> 00:12:47.139
that initial struggle? That's the million dollar

00:12:47.139 --> 00:12:49.590
question, isn't it? It suggests the future of

00:12:49.590 --> 00:12:52.190
developing expertise might be less about mastering

00:12:52.190 --> 00:12:55.149
the basics from scratch and more about learning

00:12:55.149 --> 00:12:58.370
how to rigorously question, validate, and ultimately

00:12:58.370 --> 00:13:01.470
perfect the AI's initial 80% output. OK, let's

00:13:01.470 --> 00:13:03.850
try to bring this all together. We've seen AI

00:13:03.850 --> 00:13:06.789
shift dramatically towards personal use, redefining

00:13:06.789 --> 00:13:09.250
productivity around learning and life admin.

00:13:10.440 --> 00:13:12.740
Professionally, the key skill seems to be evolving

00:13:12.740 --> 00:13:16.100
towards editing, refining, judging AI output,

00:13:16.519 --> 00:13:18.799
rather than just raw creation from zero. Right.

00:13:19.120 --> 00:13:21.840
And AI literacy, knowing how to prompt well,

00:13:22.159 --> 00:13:24.340
critically evaluate the answers, spot the hallucinations,

00:13:24.379 --> 00:13:26.480
that's becoming absolutely essential for everyone.

00:13:26.620 --> 00:13:29.399
But you mentioned a final major risk they highlighted.

00:13:29.639 --> 00:13:31.500
Decision laundering. Yeah, and this one feels

00:13:31.500 --> 00:13:34.100
like the most significant ethical red flag: decision

00:13:34.100 --> 00:13:36.179
laundering. Okay. What exactly is that? It's

00:13:36.179 --> 00:13:39.019
when someone, usually in a position of power, uses

00:13:39.019 --> 00:13:42.139
an AI to help make a really difficult decision

00:13:42.139 --> 00:13:45.860
maybe one with serious moral weight, right, or

00:13:45.860 --> 00:13:47.539
impacting people's lives, like who to lay off

00:13:47.539 --> 00:13:50.000
based on some performance data. Okay, and then

00:13:50.000 --> 00:13:52.460
they essentially blame the AI for the outcome

00:13:52.460 --> 00:13:54.620
to avoid taking personal responsibility They

00:13:54.620 --> 00:13:56.940
say, well, the algorithm crunched the numbers, and

00:13:56.940 --> 00:13:59.799
this is what it decided. They launder their difficult

00:13:59.799 --> 00:14:02.240
choice through the perceived objectivity of the

00:14:02.240 --> 00:14:06.000
machine. Wow, that's... Yeah, that's really problematic.

00:14:06.139 --> 00:14:09.299
It's deeply concerning because AI, no matter how

00:14:09.299 --> 00:14:12.740
smart it seems, has no actual morals. It has no

00:14:12.740 --> 00:14:15.620
empathy, no understanding of human context or

00:14:15.620 --> 00:14:18.720
consequences beyond the data it was trained on.

00:14:18.720 --> 00:14:22.059
So offloading those fundamentally human judgments,

00:14:22.059 --> 00:14:24.460
the ones that require courage and ethical consideration

00:14:24.960 --> 00:14:27.220
onto an algorithm. It's basically abdicating

00:14:27.220 --> 00:14:29.740
our responsibility. Humans have to own the final

00:14:29.740 --> 00:14:32.240
decision, especially the hard ones. We can use

00:14:32.240 --> 00:14:34.840
AI as a tool to inform us, but the judgment call,

00:14:34.860 --> 00:14:37.580
that has to remain human. This whole study really

00:14:37.580 --> 00:14:40.600
paints a picture of AI becoming less of a simple

00:14:40.600 --> 00:14:44.179
tool and more of a constant advisor, right? Deeply

00:14:44.179 --> 00:14:46.159
embedded in our personal lives, helping us think,

00:14:46.279 --> 00:14:48.580
helping us decide. We're becoming editors of

00:14:48.580 --> 00:14:50.779
choices, not just text. Yeah, we're outsourcing

00:14:50.779 --> 00:14:53.629
the cognitive friction, maybe. So maybe the final

00:14:53.629 --> 00:14:56.389
thought to leave people with is this. If we rely

00:14:56.389 --> 00:14:59.610
on AI so much as an advisor, if our main role

00:14:59.610 --> 00:15:02.789
becomes editing the choices it presents, are

00:15:02.789 --> 00:15:05.929
we, over time, subtly losing our own ability

00:15:05.929 --> 00:15:09.029
to generate truly original ideas, or maybe more

00:15:09.029 --> 00:15:11.169
importantly, to develop and trust our own moral

00:15:11.169 --> 00:15:13.330
conviction when things get tough? That's definitely

00:15:13.330 --> 00:15:15.090
something worth thinking about as we all keep

00:15:15.090 --> 00:15:17.629
using these incredibly powerful tools every day.

00:15:17.730 --> 00:15:19.629
Thank you for diving deep into this fascinating

00:15:19.629 --> 00:15:21.570
study with us today.
