WEBVTT

00:00:00.000 --> 00:00:01.720
Do you ever get that feeling that technology

00:00:01.720 --> 00:00:04.639
is just moving almost too fast, like we're right

00:00:04.639 --> 00:00:07.000
on the edge of something? Oh, all the time. I

00:00:07.000 --> 00:00:09.140
mean, I certainly do. It feels like just yesterday,

00:00:09.439 --> 00:00:12.480
AI was, you know, this little novelty. It could

00:00:12.480 --> 00:00:14.880
write a decent email, maybe. A little stiff,

00:00:14.900 --> 00:00:18.480
but it worked. Exactly. And now today we're talking

00:00:18.480 --> 00:00:21.059
about systems that can execute this incredibly

00:00:21.059 --> 00:00:24.500
complex work that once took a whole team of human

00:00:24.500 --> 00:00:27.500
experts months. Yeah, the testing phase is definitely

00:00:27.500 --> 00:00:30.100
over. It's completely over. I think 2026 is going

00:00:30.100 --> 00:00:31.920
to be the year we really start living and working

00:00:31.920 --> 00:00:34.810
in a totally new way. Welcome to the deep dive.

00:00:35.030 --> 00:00:37.250
I'm so energized for this one because we are

00:00:37.250 --> 00:00:39.729
getting into some critical insights from, really,

00:00:40.250 --> 00:00:43.609
the cutting edge of tech reports. And for you,

00:00:43.810 --> 00:00:46.049
our listener, the learner, it makes sense to

00:00:46.049 --> 00:00:48.109
ask, you know, how do we make sense of all this

00:00:48.109 --> 00:00:50.689
acceleration? Our mission today is pretty specific.

00:00:51.170 --> 00:00:54.229
We're going to chart this huge shift from AI

00:00:54.229 --> 00:00:56.429
as just a simple tool, like that old chat bot

00:00:56.429 --> 00:00:59.750
model, to AI as a true collaborative partner.

00:00:59.920 --> 00:01:02.079
This is really the age of digital amplification.

00:01:02.460 --> 00:01:05.540
We have seven big trends to unpack here. We're

00:01:05.540 --> 00:01:07.620
going to start with that profound shift in how

00:01:07.620 --> 00:01:11.719
we work and then the intense new safety demands

00:01:11.719 --> 00:01:14.260
that come with it. Then we'll pivot a bit. We're

00:01:14.260 --> 00:01:16.519
going to look at how AI is dramatically improving

00:01:16.519 --> 00:01:19.700
things like global health and speeding up scientific

00:01:19.700 --> 00:01:22.719
discovery on a scale that honestly is hard to

00:01:22.719 --> 00:01:24.659
even imagine. And finally, we'll pull back the

00:01:24.659 --> 00:01:27.159
curtain on the massive global infrastructure,

00:01:27.379 --> 00:01:30.010
the sort of silent engine that has to power

00:01:30.010 --> 00:01:32.349
this whole new world. OK, let's get into it.

00:01:32.409 --> 00:01:34.010
Let's do it. So we have to start with the big

00:01:34.010 --> 00:01:36.010
one, right? Yeah. The existential question that's

00:01:36.010 --> 00:01:39.569
on everyone's mind. Is AI going to take my job?

00:01:39.909 --> 00:01:42.310
It's a natural fear. It is. It is. But if you

00:01:42.310 --> 00:01:44.530
look at the consensus across the major reports,

00:01:45.090 --> 00:01:49.209
the goal for 2026 is amplification. It's not

00:01:49.209 --> 00:01:51.549
wholesale replacement. It's about collaboration

00:01:51.549 --> 00:01:54.599
at scale. I appreciate that, but, you know, these

00:01:54.599 --> 00:01:56.719
analogies can sometimes gloss over the complexity.

00:01:57.159 --> 00:01:59.540
I know we love the one about AI being like a

00:01:59.540 --> 00:02:02.019
set of super modern power tools. And the human

00:02:02.019 --> 00:02:03.859
is still making the fundamental design choice,

00:02:04.180 --> 00:02:06.480
but the tool gives you like 10 times the speed.

00:02:06.670 --> 00:02:09.189
But see, those power tools don't ask for a salary,

00:02:09.409 --> 00:02:11.990
and they don't introduce these massive security

00:02:11.990 --> 00:02:14.509
risks when you plug them into your company's

00:02:14.509 --> 00:02:16.889
financial system. Isn't that analogy kind of

00:02:16.889 --> 00:02:19.129
missing the complexity of adding a decision-making

00:02:19.129 --> 00:02:21.770
thing? That's a really fair challenge. I think

00:02:21.770 --> 00:02:23.770
the difference is in the evolution away from

00:02:23.770 --> 00:02:26.229
the old chat bot. That was the model where you

00:02:26.229 --> 00:02:28.310
had to micromanage everything, write this email,

00:02:28.389 --> 00:02:30.789
you wait, then... OK, now send this email. Right,

00:02:30.969 --> 00:02:32.889
step by step. We're moving toward what the industry

00:02:32.889 --> 00:02:35.610
is calling AI agents. And you should think of

00:02:35.610 --> 00:02:39.569
an agent less like a tool and more like a smart,

00:02:40.250 --> 00:02:42.930
goal-oriented intern who doesn't need constant

00:02:42.930 --> 00:02:45.530
check-ins. So you don't micromanage. You just

00:02:45.530 --> 00:02:47.490
give it a single high-level goal, something

00:02:47.490 --> 00:02:49.770
like, help me launch and sell these three new

00:02:49.770 --> 00:02:52.330
products. Exactly. And then that agent just...

00:02:52.270 --> 00:02:55.030
it goes, it handles the market research, it drafts

00:02:55.030 --> 00:02:57.389
the sales copy, it schedules the emails, and

00:02:57.389 --> 00:02:59.449
it even generates a report on conversion rates,

00:02:59.930 --> 00:03:02.449
all from that one command. Microsoft calls this

00:03:02.449 --> 00:03:04.750
amplification, right? Yeah, that's their term,

00:03:04.810 --> 00:03:06.770
because it fundamentally changes the scale of

00:03:06.770 --> 00:03:09.669
what a human can do. I mean, a small team of,

00:03:09.669 --> 00:03:12.530
say, three people can suddenly do the work of

00:03:12.530 --> 00:03:14.430
a medium -sized company. That's where the real

00:03:14.430 --> 00:03:17.629
value is. So, if this new agent is defined by

00:03:17.629 --> 00:03:21.250
its goal orientation... Its autonomy. What's

00:03:21.250 --> 00:03:23.550
the fundamental change in how we have to prompt

00:03:23.550 --> 00:03:25.870
it? You have to ask it to act like a manager

00:03:25.870 --> 00:03:28.210
or a partner. You're setting a high-level objective,

00:03:28.449 --> 00:03:31.210
not giving it a to-do list. That kind of power

00:03:31.210 --> 00:03:34.069
is incredible. But, you know, the second you

00:03:34.069 --> 00:03:36.830
give that digital intern the keys to the kingdom,

00:03:36.830 --> 00:03:39.250
access to your client list, your company email,

00:03:39.550 --> 00:03:42.409
we immediately have to talk about risk. Oh, absolutely.

00:03:42.710 --> 00:03:45.289
The security risk just skyrockets. I mean, what

00:03:45.289 --> 00:03:48.189
if that autonomous agent gets tricked by a bad

00:03:48.189 --> 00:03:51.319
actor? Your helpful tool could be told to send

00:03:51.319 --> 00:03:53.520
out secret company data, and suddenly it's a

00:03:53.520 --> 00:03:55.560
spy working against you. They're calling this

00:03:55.560 --> 00:03:58.099
the double agent problem. And it's a genuinely

00:03:58.099 --> 00:04:00.300
terrifying thought that the AI you trust to help

00:04:00.300 --> 00:04:02.060
you is turned against you because you gave it

00:04:02.060 --> 00:04:04.699
too much access. Yeah. So the solution for 2026

00:04:04.699 --> 00:04:06.680
is all about building a really strong protection

00:04:06.680 --> 00:04:10.379
layer right into the OS itself. Every single

00:04:10.379 --> 00:04:13.159
AI agent is going to need a digital ID card to

00:04:13.159 --> 00:04:16.680
do anything. And that ID is key because it strictly

00:04:16.680 --> 00:04:19.339
limits what the agent can see. It can only access

00:04:19.339 --> 00:04:22.079
certain folders. It's basically walled off from

00:04:22.079 --> 00:04:24.459
your most sensitive files. And critically, the

00:04:24.459 --> 00:04:26.759
agent cannot send any data outside the system

00:04:26.759 --> 00:04:29.399
without your explicit human consent. You have

00:04:29.399 --> 00:04:32.800
to physically click yes. It's mandatory human

00:04:32.800 --> 00:04:35.060
oversight. And this is where it gets challenging.

00:04:35.120 --> 00:04:37.240
I have to admit, even with all the safeguards,

00:04:37.439 --> 00:04:39.819
I still wrestle with prompt drift myself, just

00:04:39.819 --> 00:04:41.879
trying to maintain that vigilance when I'm delegating

00:04:41.879 --> 00:04:44.699
sensitive tasks. It's a constant battle to prevent

00:04:44.699 --> 00:04:47.019
accidental data leaks. That's a really powerful

00:04:47.019 --> 00:04:49.779
admission. So given that high stakes risk, what's

00:04:49.779 --> 00:04:52.000
the key piece of practical advice for our listeners

00:04:52.000 --> 00:04:55.649
today? Practice data safety. Never input bank

00:04:55.649 --> 00:04:58.610
passwords or client secrets into those free, unregulated

00:04:58.610 --> 00:05:01.350
chat tools. That security issue leads us right

00:05:01.350 --> 00:05:03.870
into how AI is handling responsibility in the

00:05:03.870 --> 00:05:07.569
real world. Let's shift gears now out of the

00:05:07.569 --> 00:05:10.709
server room and into the clinic. Trend three,

00:05:11.230 --> 00:05:14.610
fixing the doctor shortage. This is a huge looming

00:05:14.610 --> 00:05:17.089
crisis. The World Health Organization is predicting

00:05:17.089 --> 00:05:20.509
a global shortage of 10 million health workers

00:05:20.509 --> 00:05:24.240
by 2030. I mean, that means huge numbers of people

00:05:24.240 --> 00:05:26.279
just won't see a doctor when they're sick. And

00:05:26.279 --> 00:05:28.939
AI is central to trying to solve this. The goal

00:05:28.939 --> 00:05:31.240
isn't replacement. It's about freeing the doctor's

00:05:31.240 --> 00:05:34.180
hands to deal with the most critical human cases.

00:05:34.240 --> 00:05:36.199
For sure. The applications are immediate. First,

00:05:36.300 --> 00:05:38.839
you've got triage. A nurse AI can look at initial

00:05:38.839 --> 00:05:41.319
pain level, temperature, and figure out the danger

00:05:41.319 --> 00:05:43.680
level to prioritize who needs to see a human

00:05:43.680 --> 00:05:46.259
right away. And second is specialized diagnosis

00:05:46.259 --> 00:05:48.600
support. Microsoft has been pushing a tool, I

00:05:48.600 --> 00:05:50.480
think it's called the Diagnostic Orchestrator.

00:05:50.699 --> 00:05:53.899
Yeah, and what's wild here is the data. In tests

00:05:53.899 --> 00:05:56.660
on really difficult, complex medical cases, that

00:05:56.660 --> 00:06:00.040
orchestrator hit an 85.5% accuracy rate. That's

00:06:00.040 --> 00:06:01.879
not a small improvement. That's literally saving

00:06:01.879 --> 00:06:05.439
lives right now. Wow, 85.5% on the hard cases.

00:06:06.060 --> 00:06:08.360
It really shows you how fast this is moving when

00:06:08.360 --> 00:06:11.980
it's applied to a real human need. So if AI is

00:06:11.980 --> 00:06:14.439
being used for diagnosis, what's the critical

00:06:14.439 --> 00:06:16.660
boundary users have to respect with their own

00:06:16.660 --> 00:06:19.339
health data? AI should only be used for reference

00:06:19.339 --> 00:06:22.480
and simple explanation. A human doctor must always

00:06:22.480 --> 00:06:24.379
be the final authority on your health. Okay,

00:06:24.420 --> 00:06:26.459
let's talk scientific discovery. Trend four.

00:06:26.759 --> 00:06:29.160
We're moving beyond AI just, you know, writing

00:06:29.160 --> 00:06:31.360
summaries. Now it's becoming an active scientist

00:06:31.360 --> 00:06:33.319
itself. Right. Think about the old scientific

00:06:33.319 --> 00:06:36.199
method. You had researchers manually mixing substance

00:06:36.199 --> 00:06:38.879
A and substance B, hoping for something, failing,

00:06:38.980 --> 00:06:41.500
trying again. That could take years just to test

00:06:41.500 --> 00:06:44.399
one idea. The new AI way just shortcuts that

00:06:44.399 --> 00:06:46.759
entire process. It doesn't just read a few papers.

00:06:47.019 --> 00:06:49.560
It reads all global science papers, and it connects

00:06:49.560 --> 00:06:51.939
these non-obvious dots that no human team could

00:06:51.939 --> 00:06:55.240
ever see. And then it runs millions of virtual

00:06:55.240 --> 00:06:57.800
experiments in seconds. It simulates chemical

00:06:57.800 --> 00:07:00.240
reactions with perfect precision. And the result

00:07:00.240 --> 00:07:03.079
is these highly targeted suggestions that just

00:07:03.079 --> 00:07:06.519
slash R&D timelines. Things like, hey, why don't

00:07:06.519 --> 00:07:09.600
we mix compound X and metal Y? And that leads

00:07:09.600 --> 00:07:13.240
to faster cures, new materials. Whoa. Just imagine

00:07:13.240 --> 00:07:16.379
that potential. When AI can suggest a new drug

00:07:16.379 --> 00:07:18.519
or a new battery material that would have taken

00:07:18.519 --> 00:07:21.240
thousands of man hours to test, and it does it

00:07:21.240 --> 00:07:23.800
in a few minutes, I mean, that changes the fundamental

00:07:23.800 --> 00:07:26.420
speed of civilization. That's a serious moment

00:07:26.420 --> 00:07:29.720
of wonder. It truly is. So what's a simple, real

00:07:29.720 --> 00:07:32.139
-world application of that pattern-finding mindset

00:07:32.139 --> 00:07:35.290
for someone who isn't a scientist? You can use

00:07:35.290 --> 00:07:38.410
AI to find non-obvious connections, like between

00:07:38.410 --> 00:07:41.569
fast fashion and local water challenges, to generate

00:07:41.569 --> 00:07:43.889
creative business solutions. Speaking of solutions,

00:07:43.889 --> 00:07:46.490
we have to talk infrastructure. Trend five, the

00:07:46.490 --> 00:07:49.410
energy problem. Every sophisticated AI query

00:07:49.410 --> 00:07:52.269
uses way more electricity than a simple Google

00:07:52.269 --> 00:07:54.250
search. Yeah. If everyone's using an agent all

00:07:54.250 --> 00:07:56.350
day, the demand for power is going to skyrocket.

00:07:56.490 --> 00:07:58.209
You can't just keep building more servers. It's

00:07:58.209 --> 00:08:00.829
terrible for the environment. So the 2026 solution

00:08:00.829 --> 00:08:03.199
is all about efficiency. And they're basically

00:08:03.199 --> 00:08:05.379
building a smart grid for intelligence using

00:08:05.379 --> 00:08:07.759
something called distributed computing. The easiest

00:08:07.759 --> 00:08:09.339
way to think about it is like a ride sharing

00:08:09.339 --> 00:08:12.660
app, like Uber, but for computer power. Exactly.

00:08:12.879 --> 00:08:16.699
When server farms in, say, the US are quiet because

00:08:16.699 --> 00:08:18.959
it's the middle of the night, their power gets

00:08:18.959 --> 00:08:21.740
automatically routed to users in another region,

00:08:21.939 --> 00:08:24.500
like Vietnam, where it's daytime and demand is

00:08:24.500 --> 00:08:27.860
high. The goal is just maximum utilization. No

00:08:27.860 --> 00:08:31.079
server sits idle. As Mark Russinovich from Microsoft

00:08:31.079 --> 00:08:33.799
Azure said, we have to make every single watt

00:08:33.799 --> 00:08:36.139
of power produce intelligence. There's no more

00:08:36.139 --> 00:08:38.600
margin for waste. So beyond just saving power,

00:08:38.759 --> 00:08:41.320
what does distributed computing really enable

00:08:41.320 --> 00:08:44.500
for users around the world? It allows for continuous

00:08:44.500 --> 00:08:46.740
high-speed access to computing power globally

00:08:46.740 --> 00:08:49.299
by just maximizing the use of the hardware we

00:08:49.299 --> 00:08:51.179
already have. Okay, moving to trend six. Let's

00:08:51.179 --> 00:08:53.759
talk about coding, which is a huge field for

00:08:53.759 --> 00:08:55.600
so many of our learners. This evolution is a

00:08:55.600 --> 00:08:58.879
big one. For sure. The old AI coding tools, like

00:08:58.879 --> 00:09:01.159
the first version of GitHub Copilot, were basically

00:09:01.159 --> 00:09:03.960
just fancy autofill. They just guessed the next

00:09:03.960 --> 00:09:06.340
few words of code you were writing. But the new

00:09:06.340 --> 00:09:09.480
AI understands the entire deep context of your

00:09:09.480 --> 00:09:11.879
project, not just the one file you're in. They

00:09:11.879 --> 00:09:14.980
call this repository intelligence. The old AI

00:09:14.980 --> 00:09:17.860
tried to guess the story from one page. The new

00:09:17.860 --> 00:09:20.480
AI has read the whole book, the author's notes,

00:09:20.779 --> 00:09:23.320
and the publisher's style guide, the entire project

00:09:23.320 --> 00:09:26.210
folder. And the real value for a coder is system

00:09:26.210 --> 00:09:29.649
coherence. The AI knows that if you fix one little

00:09:29.649 --> 00:09:31.950
thing in file A, it's going to affect something

00:09:31.950 --> 00:09:35.889
15 files away in file B. It sees the whole architecture.

00:09:36.049 --> 00:09:38.509
And you can already see the impact. GitHub said

00:09:38.509 --> 00:09:41.889
their users' code updates jumped by 25% to over

00:09:41.889 --> 00:09:44.909
a billion updates, just driven by this context

00:09:44.909 --> 00:09:47.620
-aware help. For a beginner, what's the most

00:09:47.620 --> 00:09:50.000
important takeaway about the value of that kind

00:09:50.000 --> 00:09:52.240
of intelligence? It's like the AI has read the

00:09:52.240 --> 00:09:53.840
architect's blueprint. So if you try to move

00:09:53.840 --> 00:09:55.759
a load-bearing wall, it instantly tells you

00:09:55.759 --> 00:09:57.860
the roof is going to collapse. Finally, trend

00:09:57.860 --> 00:10:00.919
seven, the frontier of speed itself, quantum

00:10:00.919 --> 00:10:03.039
computing. For years, this was, you know, science

00:10:03.039 --> 00:10:06.419
fiction. By 2026, it's becoming a real integrated

00:10:06.419 --> 00:10:08.700
factor. The simplest way to get the speed difference

00:10:08.700 --> 00:10:11.299
is this. The normal computer solves a maze by

00:10:11.299 --> 00:10:13.720
walking down one path at a time. It hits a wall,

00:10:13.720 --> 00:10:16.080
turns back, tries again. A quantum computer,

00:10:16.299 --> 00:10:18.620
because of how qubits work, can essentially flow

00:10:18.620 --> 00:10:21.679
into all paths at the same time like water. It

00:10:21.679 --> 00:10:24.240
finds the solution almost instantly. That speed

00:10:24.240 --> 00:10:27.639
is mind-blowing. But there's a huge catch. Quantum

00:10:27.639 --> 00:10:30.480
computers are still super unstable, really sensitive

00:10:30.480 --> 00:10:33.190
to temperature, and hard to keep coherent. So

00:10:33.190 --> 00:10:36.769
the reality for 2026 isn't a total switch to

00:10:36.769 --> 00:10:39.669
quantum. It's the hybrid system. We're not using

00:10:39.669 --> 00:10:41.870
quantum for everything. We're integrating it

00:10:41.870 --> 00:10:44.289
carefully. Yeah, the hybrid system is this powerful

00:10:44.289 --> 00:10:47.230
three-part team. AI handles pattern recognition.

00:10:47.649 --> 00:10:49.950
Classical supercomputers do the massive stable

00:10:49.950 --> 00:10:52.090
math. And then the quantum computers are safe

00:10:52.090 --> 00:10:53.970
for the hardest problems, like in chemistry.

00:10:54.250 --> 00:10:56.490
That's why we still need those big supercomputers

00:10:56.490 --> 00:10:58.669
for all the stable math. Quantum is a specialist,

00:10:59.169 --> 00:11:00.990
perfect for modeling molecular bonds, things

00:11:00.990 --> 00:11:03.909
like that. Right. And Microsoft's Majorana 1

00:11:03.909 --> 00:11:06.750
chip is a huge step here. It's basically trying

00:11:06.750 --> 00:11:09.490
to build a reliable interface to make these unstable

00:11:09.490 --> 00:11:12.190
quantum systems predictable enough for actual

00:11:12.190 --> 00:11:15.370
work. Given that instability, why is it so important

00:11:15.370 --> 00:11:18.049
to keep pushing this hybrid approach? It lets

00:11:18.049 --> 00:11:21.009
us leverage quantum speed for those very specific

00:11:21.009 --> 00:11:25.269
hard problems while we rely on stability for,

00:11:25.269 --> 00:11:27.730
you know, 99% of everything else. Okay, so let's

00:11:27.730 --> 00:11:30.350
recap the big idea here. The transition we've

00:11:30.350 --> 00:11:33.909
been talking about, I think, is complete. AI is

00:11:33.909 --> 00:11:36.789
no longer a calculator. It's not a toy. It's

00:11:36.789 --> 00:11:39.049
a partner. And it's designed for amplification

00:11:39.049 --> 00:11:42.190
and goal execution in every major field, from

00:11:42.190 --> 00:11:45.009
your job to finding global health cures. The

00:11:45.009 --> 00:11:47.769
world is changing incredibly fast. But the key

00:11:47.769 --> 00:11:50.289
is not to run from it. The goal is to drive the

00:11:50.289 --> 00:11:52.529
technology, to learn how to be a better collaborator

00:11:52.529 --> 00:11:55.529
with it. We pulled three immediate actions from

00:11:55.529 --> 00:11:57.750
the sources you can start today. First, change

00:11:57.750 --> 00:12:00.350
your mindset. Stop treating AI like an answer

00:12:00.350 --> 00:12:03.049
machine. Ask it to plan, to critique, to act

00:12:03.049 --> 00:12:05.990
like a partner. Second, protect yourself. Be

00:12:05.990 --> 00:12:08.230
extremely careful with personal data, especially

00:12:08.230 --> 00:12:10.509
in those free tools. That double agent problem

00:12:10.509 --> 00:12:13.879
is very real. And third, start small. Just pick

00:12:13.879 --> 00:12:16.039
one of these new agent-style tools, use it for

00:12:16.039 --> 00:12:18.519
one small task this week, and just get comfortable

00:12:18.519 --> 00:12:20.539
with delegation. Learn the limits of your new

00:12:20.539 --> 00:12:22.879
partner. The future really belongs to the people

00:12:22.879 --> 00:12:26.299
who actively drive this technology, not the ones

00:12:26.299 --> 00:12:28.259
who stand on the sidelines and hope it passes

00:12:28.259 --> 00:12:30.240
them by. So are you ready to be a driver?
