WEBVTT

00:00:00.000 --> 00:00:02.180
Welcome back to the Deep Dive. We're here to

00:00:02.180 --> 00:00:04.240
cut through the noise and get you insights that,

00:00:04.240 --> 00:00:06.639
well, really matter. And today we're tackling

00:00:06.639 --> 00:00:08.580
something that's definitely buzzing under the

00:00:08.580 --> 00:00:10.699
surface. Yeah, it's that feeling in offices and

00:00:10.699 --> 00:00:13.000
maybe even in your home office right now. That

00:00:13.000 --> 00:00:16.059
quiet, nagging question. The one everyone's asking,

00:00:16.379 --> 00:00:19.399
even if subconsciously: is my job, is my role,

00:00:19.699 --> 00:00:23.640
secure? Especially with AI advancing so, so quickly.

00:00:23.800 --> 00:00:27.160
It's a genuine anxiety. And our sources today

00:00:27.160 --> 00:00:29.620
really hit this head on. The key thing, the big

00:00:29.620 --> 00:00:33.159
insight. It's that the real fight isn't about

00:00:33.159 --> 00:00:36.539
trying to be faster than AI. That's a losing

00:00:36.539 --> 00:00:39.079
game for most knowledge workers. Right. That's

00:00:39.079 --> 00:00:41.119
the trap, isn't it? This idea that we just need

00:00:41.119 --> 00:00:43.439
to be more efficient, quicker to somehow out-machine

00:00:43.439 --> 00:00:46.439
the machine. Exactly. But the sources

00:00:46.439 --> 00:00:48.789
we're diving into today are pretty clear. That

00:00:48.789 --> 00:00:51.390
is just not a winning strategy. You're never

00:00:51.390 --> 00:00:54.030
gonna beat AI at pure computation or storing

00:00:54.030 --> 00:00:57.210
facts or just following rules flawlessly. It's

00:00:57.210 --> 00:00:59.210
like trying to race a car on foot. You're competing

00:00:59.210 --> 00:01:01.570
on their turf, playing their game. Yeah, bringing

00:01:01.570 --> 00:01:04.090
a knife to a, well, a laser fight, as you said

00:01:04.090 --> 00:01:07.170
earlier. It's destined to fail. So why is that?

00:01:07.310 --> 00:01:10.170
Why is it such a losing battle? Well, it all

00:01:10.170 --> 00:01:13.090
comes down to core strengths. AI's power lies

00:01:13.090 --> 00:01:16.269
in processing massive data sets, running complex

00:01:16.269 --> 00:01:20.069
algorithms perfectly. Speed. Things humans just

00:01:20.069 --> 00:01:22.150
aren't built for at that scale. It's not about

00:01:22.150 --> 00:01:24.290
effort, then. It's about fundamental design.

00:01:24.549 --> 00:01:27.209
Precisely. It's a subtle, but really profound

00:01:27.209 --> 00:01:29.810
difference in how we operate versus how they

00:01:29.810 --> 00:01:33.049
operate. OK, so if we can't win on speed and

00:01:33.049 --> 00:01:36.129
efficiency, what's left? What's our advantage?

00:01:36.290 --> 00:01:39.370
What can humans do that machines fundamentally

00:01:39.370 --> 00:01:41.390
struggle with? Ah, now that's where it gets really

00:01:41.390 --> 00:01:43.370
interesting. Our source material today, drawing

00:01:43.370 --> 00:01:46.250
from discussions around six books to outwit AI,

00:01:46.870 --> 00:01:49.989
points to a uniquely human superpower. A superpower.

00:01:50.010 --> 00:01:52.010
I like the sound of that. It's interdisciplinary

00:01:52.010 --> 00:01:54.469
thinking. Interdisciplinary thinking. Okay, unpack

00:01:54.469 --> 00:01:56.370
that a bit. Think about connecting ideas from

00:01:56.370 --> 00:01:58.890
totally different fields, like using insights

00:01:58.890 --> 00:02:01.209
from psychology to solve a tricky business problem,

00:02:01.689 --> 00:02:03.930
or applying lessons from how nature builds things

00:02:03.930 --> 00:02:06.700
(biomimicry) to design better systems. So it's

00:02:06.700 --> 00:02:09.379
about bridging gaps, seeing connections others

00:02:09.379 --> 00:02:13.939
miss. Exactly that. This ability to link distant

00:02:13.939 --> 00:02:17.340
concepts, to see patterns across domains, that's

00:02:17.340 --> 00:02:19.460
incredibly hard for machines. They're usually

00:02:19.460 --> 00:02:22.039
trained on specific data sets, highly specialized.

00:02:22.159 --> 00:02:24.639
They stay in their lane, so to speak. Right.

00:02:24.819 --> 00:02:27.620
They lack that intuitive feel for the wider world,

00:02:27.780 --> 00:02:30.599
that ability to make creative leaps between different

00:02:30.599 --> 00:02:33.120
areas of knowledge. OK, so the mission for us

00:02:33.120 --> 00:02:37.150
today, for you listening, is to figure out: how

00:02:37.150 --> 00:02:40.610
do we actually train this human skill? How do

00:02:40.610 --> 00:02:43.009
we learn to think in ways machines just can't

00:02:43.009 --> 00:02:45.389
copy? Think of the sources we're using, these

00:02:45.389 --> 00:02:48.669
six books, as tools for building a kind of mental

00:02:48.669 --> 00:02:51.330
gymnasium. A cognitive gymnasium. Okay, yeah,

00:02:51.349 --> 00:02:54.250
each book, each thinking model, strengthens different

00:02:54.250 --> 00:02:56.389
cognitive muscles. They aren't just abstract

00:02:56.389 --> 00:02:58.669
theories, they're practical frameworks. They

00:02:58.669 --> 00:03:01.069
help you build a new cognitive operating system,

00:03:01.430 --> 00:03:03.830
one designed for connection-making and deep

00:03:03.900 --> 00:03:06.020
human thinking. Alright, I'm ready for the workout.

00:03:06.180 --> 00:03:07.719
Where do we start? Let's jump into the first

00:03:07.719 --> 00:03:10.360
one, systems thinking. Ugh, systems thinking.

00:03:10.800 --> 00:03:13.199
Okay, I think I know this feeling. You fix one

00:03:13.199 --> 00:03:14.919
problem right and then suddenly two new ones

00:03:14.919 --> 00:03:17.740
pop up somewhere else. Precisely, the classic

00:03:17.740 --> 00:03:20.340
whack-a-mole scenario. We're often trained

00:03:20.340 --> 00:03:23.199
to look at problems in neat little boxes. Like

00:03:23.199 --> 00:03:26.219
you boost sales figures, but then customer satisfaction

00:03:26.219 --> 00:03:29.680
takes a dive. Or you streamline a process. And

00:03:29.680 --> 00:03:32.800
suddenly team morale just plummets. It happens

00:03:32.800 --> 00:03:36.219
because we often treat symptoms without seeing

00:03:36.219 --> 00:03:38.439
the underlying connections. The ripple effects.

00:03:38.680 --> 00:03:41.340
Exactly. The feedback loops, everything's connected.

00:03:41.759 --> 00:03:44.199
Our solutions fail when they ignore these connections.

00:03:45.060 --> 00:03:47.719
Now, AI is great at optimizing within clear boundaries.

00:03:47.900 --> 00:03:50.379
But throw in unpredictable system interactions.

00:03:50.500 --> 00:03:52.979
And it struggles. Yeah. They can't easily anticipate

00:03:52.979 --> 00:03:55.879
those unintended consequences. Donella Meadows

00:03:55.879 --> 00:03:58.580
in Thinking in Systems teaches us to see those

00:03:58.580 --> 00:04:00.780
invisible threads. So it's not just solving problems,

00:04:00.860 --> 00:04:03.539
but understanding why they happen and why solutions

00:04:03.539 --> 00:04:06.039
sometimes backfire. Right. And the real game

00:04:06.039 --> 00:04:09.460
changer she introduces is the idea of leverage

00:04:09.460 --> 00:04:11.319
points. Leverage points. Okay, what are those?

00:04:11.539 --> 00:04:13.759
They're places in a system where a small shift,

00:04:14.060 --> 00:04:16.720
a small change, can create a really big impact.

00:04:17.240 --> 00:04:19.680
She even ranks them. Ranks them from weak to

00:04:19.680 --> 00:04:22.000
strong. Yeah. The weakest is often just changing

00:04:22.000 --> 00:04:24.459
numbers, parameters, like budgets or targets.

00:04:24.740 --> 00:04:28.040
The strongest? Changing the paradigm, the fundamental

00:04:28.040 --> 00:04:31.339
beliefs or goals of the system itself. Wow. That

00:04:31.339 --> 00:04:33.420
sounds incredibly powerful, like finding the

00:04:33.420 --> 00:04:35.699
master switch. It really can be. Think about,

00:04:35.759 --> 00:04:38.550
say, city traffic. The usual response is what?

00:04:38.790 --> 00:04:41.370
Build more roads. Yeah, seems logical. But that's

00:04:41.370 --> 00:04:43.490
often a low leverage action. Meadows calls it

00:04:43.490 --> 00:04:45.930
changing a parameter. It often leads to induced

00:04:45.930 --> 00:04:49.009
demand. The new roads just fill up. Congestion

00:04:49.009 --> 00:04:51.269
returns. Okay, so what's the high leverage approach?

00:04:51.670 --> 00:04:54.589
A systems thinker questions the paradigm. Is

00:04:54.589 --> 00:04:57.529
driving a personal car the only or best way to

00:04:57.529 --> 00:04:59.569
get around? Then you look at higher leverage

00:04:59.569 --> 00:05:02.709
points. Investing massively in great public transport.

00:05:02.990 --> 00:05:05.089
Designing walkable cities. Promoting remote work.

00:05:05.339 --> 00:05:08.079
Changing the goal or the mindset of the system.

00:05:08.220 --> 00:05:09.699
That's a completely different way of looking

00:05:09.699 --> 00:05:12.360
at it. Okay, so how can you, listening right

00:05:12.360 --> 00:05:16.420
now, apply this? Try a mini systems audit. Pick

00:05:16.420 --> 00:05:19.319
a recurring problem at work. A bottleneck, maybe,

00:05:19.399 --> 00:05:21.800
or a difficult team dynamic. Okay. First, just

00:05:21.800 --> 00:05:25.259
map it out. Who's involved, what processes, what

00:05:25.259 --> 00:05:28.899
tools or policies. Then, and this is key, draw

00:05:28.899 --> 00:05:30.939
the connections. How does one thing influence

00:05:30.939 --> 00:05:33.540
another? Like, maybe constant time pressure leads

00:05:33.540 --> 00:05:36.079
to rushed, lower quality work. Right. And that

00:05:36.079 --> 00:05:38.240
lower quality work maybe leads to more meetings

00:05:38.240 --> 00:05:40.740
to fix things. Which eats up time, leading to

00:05:40.740 --> 00:05:43.680
less deep work. Which increases the time pressure

00:05:43.680 --> 00:05:46.800
again. A vicious cycle. Exactly. A feedback loop.

00:05:46.970 --> 00:05:49.230
Once you see that loop, you can look for leverage

00:05:49.230 --> 00:05:51.490
points. Instead of just saying less meetings,

00:05:51.930 --> 00:05:53.990
maybe the leverage point is a rule change, like

00:05:53.990 --> 00:05:56.449
every meeting needs a clear agenda and objective

00:05:56.449 --> 00:05:58.949
set beforehand. Or changing a goal, like shifting

00:05:58.949 --> 00:06:02.389
focus from just task completion speed to, say,

00:06:02.870 --> 00:06:05.750
long-term value creation. Precisely. It's about

00:06:05.750 --> 00:06:07.889
finding where a small nudge can redirect the

00:06:07.889 --> 00:06:09.829
whole system. I actually did this with my morning

00:06:09.829 --> 00:06:12.310
routine once. I realized hitting snooze wasn't more

00:06:12.310 --> 00:06:14.850
rest, it was just more stress. Seeing the system

00:06:14.850 --> 00:06:18.379
helps. Okay, so systems thinking helps us map

00:06:18.379 --> 00:06:21.579
the territory, but the territory is often foggy,

00:06:21.779 --> 00:06:24.540
uncertain, right? How do we make good decisions

00:06:24.540 --> 00:06:27.660
then? Ah, that brings us perfectly to the next

00:06:27.660 --> 00:06:31.360
cognitive muscle, probabilistic thinking. This

00:06:31.360 --> 00:06:34.019
draws heavily from Annie Duke's Thinking in Bets.

00:06:34.259 --> 00:06:36.579
Annie Duke, the poker player. The very same.

00:06:37.019 --> 00:06:39.899
And poker teaches you something crucial. The

00:06:39.899 --> 00:06:42.629
world isn't black and white. It's shades of gray.

00:06:42.810 --> 00:06:45.649
It's probabilities. Our brains crave certainty,

00:06:46.170 --> 00:06:48.790
but reality rarely delivers. We want to know

00:06:48.790 --> 00:06:51.990
if a decision was right or wrong. Exactly. But

00:06:51.990 --> 00:06:53.829
Duke argues that's often the wrong question.

00:06:54.269 --> 00:06:56.769
She learned the hard way at the poker table that

00:06:56.769 --> 00:06:58.730
a good outcome doesn't automatically mean it

00:06:58.730 --> 00:07:00.509
was a good decision. And a bad outcome doesn't

00:07:00.509 --> 00:07:02.670
mean the decision was bad either. Precisely.

00:07:02.689 --> 00:07:04.470
You have to separate the quality of the decision

00:07:04.470 --> 00:07:07.470
process from the quality of the result. AI can

00:07:07.470 --> 00:07:10.050
crunch probabilities, sure, but it struggles

00:07:10.050 --> 00:07:12.750
with the ambiguity, the context, the hidden information

00:07:12.750 --> 00:07:16.569
of the real world. So it's less, was I right, and

00:07:16.569 --> 00:07:18.990
more, was my process sound, given what I knew

00:07:18.990 --> 00:07:21.949
at the time? Duke calls the trap of judging decisions

00:07:21.949 --> 00:07:24.529
solely by outcomes resulting. Resulting? Okay,

00:07:24.670 --> 00:07:26.889
give us an example. Imagine your team decides,

00:07:27.329 --> 00:07:29.810
based on solid data and research at that moment,

00:07:30.290 --> 00:07:33.300
to cut a product feature. Six months later, oops,

00:07:33.439 --> 00:07:35.500
a competitor launches something similar, and

00:07:35.500 --> 00:07:38.019
it's a huge hit. The immediate reaction is, oh

00:07:38.019 --> 00:07:41.139
no, we messed up. Terrible decision. That's resulting.

00:07:41.939 --> 00:07:45.300
The probabilistic thinker asks, OK, hold on.

00:07:45.579 --> 00:07:48.100
Given the information we had back then, was cutting

00:07:48.100 --> 00:07:52.060
the feature a reasonable bet? Maybe it was. Perhaps

00:07:52.060 --> 00:07:54.839
the market shifted unexpectedly or the competitor

00:07:54.839 --> 00:07:57.120
had different information. So the lesson isn't

00:07:57.120 --> 00:08:00.629
necessarily regret, but... Improvement. Exactly.

00:08:00.730 --> 00:08:02.750
How can we improve our information gathering?

00:08:03.189 --> 00:08:05.089
How can we refine our decision-making process

00:08:05.089 --> 00:08:07.170
for the next bet, regardless of how this one

00:08:07.170 --> 00:08:08.910
turned out? Okay, practical application time.

00:08:08.930 --> 00:08:11.269
How do we practice this? Conduct a decision debrief.

00:08:11.269 --> 00:08:14.089
After a big project, a key decision, win or lose,

00:08:14.269 --> 00:08:16.610
get the team together, or even just reflect yourself.

00:08:16.670 --> 00:08:18.689
What questions do you ask? What did we actually

00:08:18.689 --> 00:08:20.709
know when we made the call? What were the unknowns?

00:08:21.110 --> 00:08:23.110
What alternatives did we seriously consider?

00:08:23.329 --> 00:08:25.949
Was the discussion open? Were different viewpoints

00:08:25.949 --> 00:08:28.360
welcomed, even encouraged? And crucially, What

00:08:28.360 --> 00:08:31.279
can we learn to make better bets next time? It's

00:08:31.279 --> 00:08:33.720
not about assigning blame for the past. It's

00:08:33.720 --> 00:08:35.860
about improving judgment for the future. All

00:08:35.860 --> 00:08:37.379
right. We've mapped the system. We're learning

00:08:37.379 --> 00:08:40.139
to navigate uncertainty. Now let's talk about

00:08:40.139 --> 00:08:42.399
getting to the actual root of problems. This

00:08:42.399 --> 00:08:45.460
feels like logic 101, but maybe it's trickier

00:08:45.460 --> 00:08:47.840
than it looks. It often is. We're drawing here

00:08:47.840 --> 00:08:50.659
from Russell Ackoff and The Art of Problem Solving.

00:08:51.639 --> 00:08:54.539
Logic seems basic, but it's surprising how often

00:08:54.539 --> 00:08:59.620
even very smart people make logical errors. Confusing

00:08:59.620 --> 00:09:02.639
correlation with causation is a classic. Jumping

00:09:02.639 --> 00:09:05.120
to conclusions. Assuming things without checking.

00:09:05.779 --> 00:09:08.460
Right. Ackoff's big idea was that many, maybe

00:09:08.460 --> 00:09:11.059
most, workplace issues aren't really technical

00:09:11.059 --> 00:09:13.879
problems. They're problems of logic or definition

00:09:13.879 --> 00:09:16.679
in disguise. Logic or definition. Interesting.

00:09:16.919 --> 00:09:19.500
And his really provocative idea is that you shouldn't

00:09:19.500 --> 00:09:21.840
just aim to solve problems, you should aim to

00:09:21.840 --> 00:09:23.840
dissolve them. Dissolve them? Like, make them

00:09:23.840 --> 00:09:26.200
vanish? Essentially, yes. He argued that many

00:09:26.200 --> 00:09:28.340
problems only exist because of how we frame them

00:09:28.340 --> 00:09:30.220
in the first place. Change the frame, change

00:09:30.220 --> 00:09:32.039
the definition, and the problem itself might

00:09:32.039 --> 00:09:37.080
just disappear. Wow. Can AI do that? Step back

00:09:37.080 --> 00:09:39.379
and question the frame. That's precisely what

00:09:39.379 --> 00:09:42.159
it struggles with. AI is brilliant at solving

00:09:42.159 --> 00:09:44.860
clearly defined problems within given constraints.

00:09:45.240 --> 00:09:48.070
But asking, hold on. Are we even working on the

00:09:48.070 --> 00:09:51.009
right problem here? That's a deeply human step.

00:09:51.149 --> 00:09:53.409
OK, I need an example of dissolving a problem.

00:09:53.509 --> 00:09:55.490
Think about the common management question. How

00:09:55.490 --> 00:09:57.610
do we motivate our employees? Yeah, seems like

00:09:57.610 --> 00:10:00.190
a standard problem to solve. But notice the assumption

00:10:00.190 --> 00:10:03.429
baked in, that employees lack motivation and

00:10:03.429 --> 00:10:06.210
need it externally supplied. Ackoff would flip

00:10:06.210 --> 00:10:08.990
this. He'd dissolve the problem by reframing

00:10:08.990 --> 00:10:12.610
it. Oh. He'd ask, What is our system, our policies,

00:10:12.769 --> 00:10:15.769
our processes, our culture doing that demotivates

00:10:15.769 --> 00:10:18.990
people who are likely already motivated? Ah,

00:10:19.169 --> 00:10:21.149
that completely shifts the focus. It's not about

00:10:21.149 --> 00:10:23.269
fixing the people. It's about fixing the environment

00:10:23.269 --> 00:10:24.990
around them. Exactly. Suddenly you're looking

00:10:24.990 --> 00:10:27.549
at bureaucracy, bad management, unclear goals,

00:10:27.850 --> 00:10:30.210
lack of autonomy, entirely different avenues

00:10:30.210 --> 00:10:32.289
for action. That's powerful. So for the listener,

00:10:32.370 --> 00:10:35.669
how can they practice this reframing? Try Ackoff's

00:10:35.669 --> 00:10:38.429
five reframes exercise. Take a problem you're

00:10:38.429 --> 00:10:41.460
wrestling with. Let's say it's sales for Product

00:10:41.460 --> 00:10:43.879
X are declining. Okay, standard business problem.

00:10:44.419 --> 00:10:47.000
Now rewrite that problem statement in five different

00:10:47.000 --> 00:10:49.779
ways, each forcing a different perspective. Like

00:10:49.779 --> 00:10:51.879
how? You could frame it from the customer's view.

00:10:52.539 --> 00:10:55.460
How are the needs Product X used to meet changing?

00:10:56.059 --> 00:10:58.919
Or the competitor's view? What alternatives are

00:10:58.919 --> 00:11:02.340
customers choosing instead and why? Or the system

00:11:02.340 --> 00:11:05.279
view? What internal factors might be impacting

00:11:05.279 --> 00:11:08.840
Product X's performance? Or the value view? Is Product

00:11:08.840 --> 00:11:11.940
X's core value still relevant? Or even the inverse?

00:11:12.399 --> 00:11:14.440
How can we accelerate the decline of product

00:11:14.440 --> 00:11:17.399
X to make space for something better? Each one

00:11:17.399 --> 00:11:19.620
opens up totally different potential solutions,

00:11:19.840 --> 00:11:21.940
doesn't it? Completely. It breaks you out of

00:11:21.940 --> 00:11:23.840
tunnel vision and forces you to question the

00:11:23.840 --> 00:11:25.860
initial framing of the problem itself. Okay,

00:11:25.940 --> 00:11:28.379
this is fascinating. We're going deep. Now, let's

00:11:28.379 --> 00:11:30.240
broaden out. You mentioned interdisciplinary

00:11:30.240 --> 00:11:32.960
thinking earlier. This next one, broad thinking

00:11:32.960 --> 00:11:35.620
from David Epstein's Range, seems right up that

00:11:35.620 --> 00:11:37.960
alley. It absolutely is. And it pushes back against

00:11:37.960 --> 00:11:40.379
some really common advice, doesn't it? The whole

00:11:40.379 --> 00:11:44.580
niche down, specialize, find your lane mantra.

00:11:45.080 --> 00:11:48.419
Epstein argues that might actually be... Counterproductive.

00:11:48.820 --> 00:11:51.820
In many situations, yes. Especially in complex,

00:11:52.059 --> 00:11:54.460
unpredictable fields, which is, let's face it,

00:11:54.620 --> 00:11:57.519
most fields today. His research suggests that

00:11:57.519 --> 00:11:59.980
generalists, people with range, often outperform

00:11:59.980 --> 00:12:02.399
narrow specialists when faced with novel problems.

00:12:02.639 --> 00:12:04.639
But AI is the ultimate specialist, isn't it?

00:12:04.799 --> 00:12:07.080
It knows everything about its narrow domain.

00:12:07.340 --> 00:12:10.049
Precisely. And that's its limitation. It lacks

00:12:10.049 --> 00:12:12.730
that broad feel for the world, the intuition

00:12:12.730 --> 00:12:15.690
that comes from varied experiences. Breakthroughs

00:12:15.690 --> 00:12:18.070
often come from what Epstein calls cognitive

00:12:18.070 --> 00:12:21.090
bees. Cognitive bees. I like that. Explain. People

00:12:21.090 --> 00:12:22.970
who flit between different fields, different

00:12:22.970 --> 00:12:25.169
disciplines, picking up ideas here, pollinating

00:12:25.169 --> 00:12:28.200
concepts there, they create novel hybrids. Combinations

00:12:28.200 --> 00:12:30.120
that someone stuck in a single silo would never

00:12:30.120 --> 00:12:32.740
conceive of. So deep specialization might make

00:12:32.740 --> 00:12:35.299
you efficient at known tasks, but range makes

00:12:35.299 --> 00:12:37.639
you adaptable and innovative for unknown challenges.

00:12:37.840 --> 00:12:39.519
That's a great way to put it. So practically

00:12:39.519 --> 00:12:41.919
speaking, how do we cultivate this range? How

00:12:41.919 --> 00:12:45.620
do we become cognitive bees? Start actively collecting

00:12:45.620 --> 00:12:48.179
mental models, solutions, and ways of thinking

00:12:48.179 --> 00:12:50.779
from fields completely unrelated to your own.

00:12:51.200 --> 00:12:53.200
Be curious. Give me some examples. How could

00:12:53.200 --> 00:12:55.279
that work? OK, say you're struggling with company

00:12:55.279 --> 00:12:58.179
culture. Don't just read management books. How

00:12:58.179 --> 00:13:01.100
do marine biologists think about complex, evolving

00:13:01.100 --> 00:13:03.299
ecosystems? Maybe there are metaphors there.

00:13:03.480 --> 00:13:06.220
Interesting. Or maybe leading an innovation team

00:13:06.220 --> 00:13:08.659
under pressure. Look at how Michelin-starred

00:13:08.659 --> 00:13:11.539
chefs manage intense creativity and execution

00:13:11.539 --> 00:13:14.860
in a chaotic kitchen environment. Or designing

00:13:14.860 --> 00:13:17.720
better customer experiences. How do video game

00:13:17.720 --> 00:13:20.419
designers keep players hooked and engaged for

00:13:20.419 --> 00:13:23.059
hundreds of hours? So you deliberately look for

00:13:23.059 --> 00:13:25.600
analogies and frameworks in unexpected places.

00:13:25.820 --> 00:13:29.279
Yes. Maybe commit, say, 20% of your reading

00:13:29.279 --> 00:13:32.320
or learning time to exploring areas totally outside

00:13:32.320 --> 00:13:34.960
your professional domain. It builds a unique

00:13:34.960 --> 00:13:37.580
mental toolkit. The connections might not be

00:13:37.580 --> 00:13:39.879
immediate, but when you face a truly new challenge,

00:13:40.240 --> 00:13:42.159
you'll have this diverse set of tools that no

00:13:42.159 --> 00:13:44.759
specialized AI can match. All right, range gives

00:13:44.759 --> 00:13:47.580
us diverse inputs. Now, how do we generate actual

00:13:47.580 --> 00:13:49.460
ideas from that? Let's talk design thinking,

00:13:49.679 --> 00:13:51.740
drawing from Idea Flow. All right, because it's

00:13:51.740 --> 00:13:54.639
easy to fall into traps here, too. One common

00:13:54.639 --> 00:13:56.899
trap is solving problems backward. Backward,

00:13:56.940 --> 00:13:58.820
what do you mean? Starting with a solution you

00:13:58.820 --> 00:14:00.980
already know, or like maybe a piece of technology

00:14:00.980 --> 00:14:04.080
or a familiar process, and then looking for a

00:14:04.080 --> 00:14:06.639
problem it can solve, or... Or just jumping on

00:14:06.639 --> 00:14:08.679
the very first solution that comes to mind for

00:14:08.679 --> 00:14:11.620
a problem. Exactly. We converge too quickly.

00:14:11.950 --> 00:14:15.029
Idea Flow emphasizes a more systematic process,

00:14:15.269 --> 00:14:17.490
drawing from design thinking principles. It's

00:14:17.490 --> 00:14:19.450
about deliberately expanding the possibilities.

00:14:20.009 --> 00:14:22.809
First, the divergence phase. Brainstorming widely,

00:14:23.009 --> 00:14:25.070
no judgment. Yes, generating lots of options,

00:14:25.210 --> 00:14:28.429
even wild ones. Then, you systematically narrow

00:14:28.429 --> 00:14:31.549
down and refine the convergence phase. AI can

00:14:31.549 --> 00:14:34.309
generate variations on a theme, but it struggles

00:14:34.309 --> 00:14:36.850
with that initial, wide -open exploration and

00:14:36.850 --> 00:14:39.250
questioning the fundamental need. It won't easily

00:14:39.250 --> 00:14:42.080
ask, do we even need an idea here? So how do

00:14:42.080 --> 00:14:44.779
we ensure we diverge properly? One powerful tool

00:14:44.779 --> 00:14:48.159
they highlight is the How Might We or HMW framework.

00:14:49.059 --> 00:14:51.440
It's a way to phrase questions that inherently

00:14:51.440 --> 00:14:54.200
opens up possibilities. Okay, let's use an example.

00:14:54.820 --> 00:14:57.779
Say a local bookstore struggling against online

00:14:57.779 --> 00:15:01.379
giants. The obvious, maybe backward, solution is

00:15:01.529 --> 00:15:04.230
offer discounts. Right. A race to the bottom

00:15:04.250 --> 00:15:07.669
they probably can't win. But using HMW questions

00:15:07.669 --> 00:15:11.049
reframes the challenge. How might we? What? How

00:15:11.049 --> 00:15:13.970
might we turn the bookstore into the community's

00:15:13.970 --> 00:15:16.850
third living room? How might we create a book

00:15:16.850 --> 00:15:19.409
discovery experience that an algorithm simply

00:15:19.409 --> 00:15:22.549
can't replicate? How might we become a hub for

00:15:22.549 --> 00:15:25.289
local cultural events and book clubs? Ah, see,

00:15:25.289 --> 00:15:27.409
each of those questions points towards completely

00:15:27.409 --> 00:15:30.090
different types of solutions. Community, experience,

00:15:30.389 --> 00:15:33.659
events, not just price. Exactly. They open up

00:15:33.659 --> 00:15:35.980
the solution space dramatically, focusing on

00:15:35.980 --> 00:15:38.039
human connection and experience, areas where

00:15:38.039 --> 00:15:40.419
the physical store has an advantage. So the practical

00:15:40.419 --> 00:15:42.320
application for you listening? Take a problem

00:15:42.320 --> 00:15:44.639
you're stuck on. Instead of jumping to solutions,

00:15:44.799 --> 00:15:46.980
try generating five or ten different how-might-we

00:15:46.980 --> 00:15:49.220
-we questions about it. See how it shifts your

00:15:49.220 --> 00:15:51.539
perspective and what new avenues it opens up.

00:15:51.679 --> 00:15:53.820
It forces you to think wider before you narrow

00:15:53.820 --> 00:15:56.500
down. Okay, we've covered a lot of ground. Mapping

00:15:56.500 --> 00:15:59.820
systems, betting under uncertainty, reframing

00:15:59.820 --> 00:16:02.820
problems, thinking broadly, generating ideas.

00:16:03.259 --> 00:16:05.899
What's our final workout? We end with maybe the

00:16:05.899 --> 00:16:08.360
most fundamental and perhaps challenging mode.

00:16:09.120 --> 00:16:12.440
First principles thinking. This is heavily inspired

00:16:12.440 --> 00:16:15.860
by Peter Thiel's Zero to One. Zero to one, meaning

00:16:15.860 --> 00:16:18.740
creating something entirely new, not just improving

00:16:18.740 --> 00:16:22.740
what exists. Exactly. Most people, most companies

00:16:22.740 --> 00:16:25.320
operate in the one to n space. They take something

00:16:25.320 --> 00:16:27.080
that already exists and make it incrementally

00:16:27.080 --> 00:16:29.580
better. That's reasoning by analogy. Doing what

00:16:29.580 --> 00:16:31.539
others are doing, but maybe slightly faster or

00:16:31.539 --> 00:16:33.980
cheaper? Right. But truly disruptive innovation,

00:16:34.279 --> 00:16:36.840
the zero to one leaps, often come from reasoning

00:16:36.840 --> 00:16:39.120
from first principles. And AI struggles with

00:16:39.120 --> 00:16:42.899
this. Massively. AI is, in its essence, an analogy

00:16:42.899 --> 00:16:45.220
machine. It learns from vast amounts of existing

00:16:45.220 --> 00:16:47.740
data and examples. It's brilliant at finding

00:16:47.740 --> 00:16:50.639
patterns in what is. But it can't easily tear

00:16:50.639 --> 00:16:52.940
down existing assumptions and build something

00:16:52.940 --> 00:16:55.200
completely new from the ground up because its

00:16:55.200 --> 00:16:58.139
entire foundation is based on precedent. So what

00:16:58.139 --> 00:17:00.039
does thinking from first principles actually

00:17:00.039 --> 00:17:02.899
involve? It means breaking a problem down to

00:17:02.899 --> 00:17:05.839
its absolute most fundamental truths. The things

00:17:05.839 --> 00:17:08.420
you know are true, like the laws of physics or

00:17:08.420 --> 00:17:11.259
basic human needs. You strip away all the assumptions,

00:17:11.460 --> 00:17:13.259
all the conventions, all the way things are usually

00:17:13.259 --> 00:17:16.119
done. And then you build back up from only those

00:17:16.119 --> 00:17:19.039
fundamental truths. Precisely. Ignoring how it's

00:17:19.039 --> 00:17:21.480
done now. Let's take an example. Improving employee

00:17:21.480 --> 00:17:24.690
training. Okay. The typical approach is, how

00:17:24.690 --> 00:17:27.549
can we make our workshops better? Or should we

00:17:27.549 --> 00:17:30.650
buy a new online course platform? That's reasoning

00:17:30.650 --> 00:17:33.910
by analogy, improving existing models. First

00:17:33.910 --> 00:17:35.990
principles thinking starts differently. Ask,

00:17:36.589 --> 00:17:39.690
what is the fundamental truth here? Employees

00:17:39.690 --> 00:17:42.309
need certain skills to do their jobs well, and

00:17:42.309 --> 00:17:45.150
maybe, learning is most effective when it's relevant

00:17:45.150 --> 00:17:47.609
and applied quickly. Good. Now, what's the current

00:17:47.609 --> 00:17:49.650
assumption? That workshops or online courses

00:17:49.650 --> 00:17:52.450
are the best or only way to deliver that skill

00:17:52.450 --> 00:17:55.029
transfer. Okay, now ignore that assumption. Based

00:17:55.029 --> 00:17:57.329
only on the fundamental truths needing skills,

00:17:57.609 --> 00:18:00.390
learning by doing, how would you design the absolute

00:18:00.390 --> 00:18:03.690
best way to transfer skills from scratch? Maybe

00:18:03.690 --> 00:18:06.109
it's not a course at all. Maybe it's a structured

00:18:06.109 --> 00:18:10.109
mentorship program. Or guided real-world projects

00:18:10.109 --> 00:18:13.509
with immediate feedback. Or an internal platform

00:18:13.509 --> 00:18:16.569
where experts share knowledge directly as needed.

00:18:17.439 --> 00:18:21.460
See? You've just opened the door to potentially

00:18:21.460 --> 00:18:24.660
far superior solutions simply by questioning

00:18:24.660 --> 00:18:27.220
the ingrained assumption and building up from

00:18:27.220 --> 00:18:29.859
bedrock truths. That's a powerful mental reset.

00:18:30.160 --> 00:18:31.940
So the application for listeners. Take a big

00:18:31.940 --> 00:18:34.319
challenge you're facing. Really grill yourself.

00:18:34.660 --> 00:18:37.299
What are the absolute undeniable truths here?

00:18:37.359 --> 00:18:39.000
What are the assumptions I'm making just because

00:18:39.000 --> 00:18:41.539
that's how it's always been done? And crucially,

00:18:41.720 --> 00:18:44.200
How would I rebuild this from scratch based only

00:18:44.200 --> 00:18:46.660
on those truths? Wow. Okay. That's six powerful

00:18:46.660 --> 00:18:50.420
ways of thinking: systems, bets, logic, range, design,

00:18:50.420 --> 00:18:52.859
first principles. So what happens when you put

00:18:52.859 --> 00:18:54.680
them all together? What's the big picture? The

00:18:54.680 --> 00:18:56.480
big picture is that these aren't just isolated

00:18:56.480 --> 00:18:58.380
tools. They work together, they reinforce each

00:18:58.380 --> 00:19:00.720
other. Think of it as installing a new, more powerful

00:19:00.720 --> 00:19:03.720
cognitive operating system. A human OS upgrade

00:19:03.720 --> 00:19:06.759
for the AI age? Something like that. Their real power

00:19:06.759 --> 00:19:08.460
comes from their synergy. How do they connect?

00:19:08.819 --> 00:19:11.269
Well, think about it. Systems thinking gives you

00:19:11.269 --> 00:19:13.950
the map. It shows you the complex landscape of

00:19:13.950 --> 00:19:16.049
the problem, all the interconnections. Okay,

00:19:16.049 --> 00:19:18.890
I see the map. Probabilistic thinking is your

00:19:18.890 --> 00:19:21.470
compass for navigating the inherent uncertainty

00:19:21.470 --> 00:19:24.529
on that map. It helps you make smarter bets when

00:19:24.529 --> 00:19:27.690
the path isn't clear. Got my map and compass.

00:19:28.029 --> 00:19:30.569
Ackoff's logic ensures you're not just wandering,

00:19:30.930 --> 00:19:33.450
but actually trying to get to the right destination

00:19:33.450 --> 00:19:36.130
by dissolving misleading problems and asking

00:19:36.130 --> 00:19:37.890
the right questions. Making sure I'm heading

00:19:37.890 --> 00:19:40.650
the right way. Range gives you potential shortcuts

00:19:40.650 --> 00:19:43.170
and alternative routes by letting you borrow

00:19:43.170 --> 00:19:46.410
ideas and solutions from distant, unexpected

00:19:46.410 --> 00:19:49.950
places on the map. Finding clever pathways. Design

00:19:49.950 --> 00:19:52.470
thinking is like your sketchbook. It helps you

00:19:52.470 --> 00:19:55.150
rapidly generate and explore dozens of potential

00:19:55.150 --> 00:19:57.509
routes before committing to one. Exploring the

00:19:57.509 --> 00:19:59.849
options. And first principles thinking. That's

00:19:59.849 --> 00:20:02.150
the ultimate power tool. It gives you the ability

00:20:02.150 --> 00:20:05.640
to say, this whole map is wrong, tear it up,

00:20:06.000 --> 00:20:08.019
and draw a fundamentally new and better one from

00:20:08.019 --> 00:20:11.839
scratch. Map, compass, destination check, shortcuts,

00:20:12.240 --> 00:20:14.480
route sketching, and the power to redraw the

00:20:14.480 --> 00:20:17.599
map. That's quite a toolkit. When you start combining

00:20:17.599 --> 00:20:20.259
these modes of thought, you develop a way of

00:20:20.259 --> 00:20:23.200
seeing and solving problems that is deeply human

00:20:23.200 --> 00:20:25.660
and incredibly difficult for any current AI to

00:20:25.660 --> 00:20:28.380
replicate. That's the edge. That's how you become

00:20:28.380 --> 00:20:31.819
indispensable. So

00:20:31.819 --> 00:20:33.980
the message here seems pretty clear. I think

00:20:33.980 --> 00:20:36.140
so. The future really does belong to minds that

00:20:36.140 --> 00:20:39.180
can think in ways machines can't. If your value

00:20:39.180 --> 00:20:42.420
is primarily in speed, efficiency, or just executing

00:20:42.420 --> 00:20:45.500
known procedures, well, AI is getting very good

00:20:45.500 --> 00:20:47.799
at that. Those roles are likely at risk. But

00:20:47.799 --> 00:20:51.059
AI cannot easily replicate the ability to connect

00:20:51.059 --> 00:20:54.200
disparate ideas, to navigate profound uncertainty

00:20:54.200 --> 00:20:57.220
with good judgment, to reframe and dissolve complex

00:20:57.220 --> 00:20:59.900
problems, or to imagine something truly new.

00:21:00.039 --> 00:21:02.339
So our job isn't to become more like machines.

00:21:02.539 --> 00:21:04.559
No. It's to become more human in our thinking,

00:21:04.740 --> 00:21:07.019
to lean into these cognitive strengths that are

00:21:07.019 --> 00:21:09.660
uniquely ours. And the great thing is we've outlined

00:21:09.660 --> 00:21:12.180
a clear path to start doing that based on these

00:21:12.180 --> 00:21:14.470
sources. Where should someone begin? Don't try

00:21:14.470 --> 00:21:17.430
to master all six at once. That's overwhelming.

00:21:17.789 --> 00:21:20.650
Pick the one thinking model, the one book that

00:21:20.650 --> 00:21:23.529
resonates most with you right now, or that addresses

00:21:23.529 --> 00:21:25.930
what you feel is your biggest current weakness.

00:21:26.410 --> 00:21:28.569
Just to recap the titles for everyone, we talked

00:21:28.569 --> 00:21:31.089
about Thinking in Systems by Donella Meadows.

00:21:31.289 --> 00:21:34.539
Thinking in Bets by Annie Duke. The Art of Problem

00:21:34.539 --> 00:21:37.599
Solving by Russell Ackoff. Range by David Epstein.

00:21:38.039 --> 00:21:40.880
Idea Flow by Jeremy Utley and Perry Klebahn. And

00:21:40.880 --> 00:21:43.000
Zero to One by Peter Thiel and Blake Masters.

00:21:43.200 --> 00:21:46.059
Pick one, dive in, and start exercising that

00:21:46.059 --> 00:21:49.460
particular cognitive muscle. Exactly. This deep

00:21:49.460 --> 00:21:51.640
dive wasn't just about understanding the challenge.

00:21:51.960 --> 00:21:54.740
It was about giving you practical tools, a way

00:21:54.740 --> 00:21:57.460
forward, a way to not just survive, but actually

00:21:57.460 --> 00:22:00.380
thrive by leveraging your unique human intelligence

00:22:00.380 --> 00:22:03.390
in the age of AI. Don't just listen. Start

00:22:03.390 --> 00:22:05.630
thinking differently. Start practicing. Your

00:22:05.630 --> 00:22:07.029
future thinking starts now.
