1
00:00:00,000 --> 00:00:04,720
Right, everybody buckle up because today we're taking a deep dive into the future, like way

2
00:00:04,720 --> 00:00:07,480
into the future with AI.

3
00:00:07,480 --> 00:00:08,920
But not just any AI.

4
00:00:08,920 --> 00:00:14,760
We're talking about AI so powerful, it could change what it even means to be human, you

5
00:00:14,760 --> 00:00:15,760
know?

6
00:00:15,760 --> 00:00:19,760
And what's really interesting is that this isn't just some like far off sci-fi thing.

7
00:00:19,760 --> 00:00:22,040
We're talking about predictions from Dario Amodei.

8
00:00:22,040 --> 00:00:23,240
He's the CEO of Anthropic.

9
00:00:23,240 --> 00:00:26,600
So he's like really in the thick of it when it comes to AI.

10
00:00:26,600 --> 00:00:27,600
Exactly.

11
00:00:27,600 --> 00:00:32,480
And he lays out his whole vision in this essay called Machines of Loving Grace, which

12
00:00:32,480 --> 00:00:35,520
okay sounds pretty optimistic, right?

13
00:00:35,520 --> 00:00:37,360
But the big question is how do we actually get there?

14
00:00:37,360 --> 00:00:42,800
How do we go from AI that can barely write a decent email to AI that's like solving humanity's

15
00:00:42,800 --> 00:00:43,800
biggest problems?

16
00:00:43,800 --> 00:00:47,840
So Amodei starts by kind of throwing out this idea of what he calls the compressed

17
00:00:47,840 --> 00:00:49,400
21st century.

18
00:00:49,400 --> 00:00:53,840
And it's this idea that all the amazing progress, you know, all the scientific breakthroughs

19
00:00:53,840 --> 00:00:57,520
that we expect to happen over like a hundred years.

20
00:00:57,520 --> 00:01:01,240
They get like squeezed down, compressed into just a few years.

21
00:01:01,240 --> 00:01:03,040
It's like fast forwarding through time.

22
00:01:03,040 --> 00:01:05,760
Okay, so that's a lot to take in.

23
00:01:05,760 --> 00:01:09,800
So before we get too far ahead of ourselves, like what are we even talking about here when

24
00:01:09,800 --> 00:01:11,760
we say powerful AI?

25
00:01:11,760 --> 00:01:15,720
This isn't just like my Roomba figuring out how to do the dishes, right?

26
00:01:15,720 --> 00:01:16,720
No, no, no.

27
00:01:16,720 --> 00:01:20,320
We're talking AI that's smarter than like Nobel Prize winners.

28
00:01:20,320 --> 00:01:21,320
Okay.

29
00:01:21,320 --> 00:01:26,300
AI that can do all sorts of things from like designing groundbreaking experiments to composing

30
00:01:26,300 --> 00:01:30,520
incredible music and developing technologies we haven't even dreamed of yet.

31
00:01:30,520 --> 00:01:32,820
Okay, now that's some AI I can get behind.

32
00:01:32,820 --> 00:01:37,520
But even with all that brain power, wouldn't there still be like limits?

33
00:01:37,520 --> 00:01:41,720
I mean, AI can't just like rewrite the laws of physics, right?

34
00:01:41,720 --> 00:01:42,720
You're totally right.

35
00:01:42,720 --> 00:01:44,360
And Amodei actually talks about this.

36
00:01:44,360 --> 00:01:47,420
He has this concept of marginal returns to intelligence.

37
00:01:47,420 --> 00:01:50,820
And it's kind of like you can have the fastest car in the world, but if there's no road,

38
00:01:50,820 --> 00:01:52,420
you're not going anywhere.

39
00:01:52,420 --> 00:01:57,760
So even with all the smarts in the world, AI still needs like the right tools and resources

40
00:01:57,760 --> 00:01:59,240
to actually make things happen.

41
00:01:59,240 --> 00:02:02,520
It's like a world-class chef needing ingredients, right?

42
00:02:02,520 --> 00:02:03,520
Exactly.

43
00:02:03,520 --> 00:02:08,360
And just like a chef can be limited by what ingredients they have, AI can be held back

44
00:02:08,360 --> 00:02:14,960
by things like the speed of scientific experiments or the availability of funding for research.

45
00:02:14,960 --> 00:02:17,420
So it's not enough to just build a super intelligent AI.

46
00:02:17,420 --> 00:02:21,560
It's about creating the right environment for it to like thrive and do its thing.

47
00:02:21,560 --> 00:02:22,560
Exactly.

48
00:02:22,560 --> 00:02:28,240
And Amodei believes that as AI gets more powerful, it'll actually find ways to work around those

49
00:02:28,240 --> 00:02:29,240
limitations.

50
00:02:29,240 --> 00:02:30,600
So it's not like an overnight revolution.

51
00:02:30,600 --> 00:02:35,080
It's more like a gradual, but ultimately like a transformative shift in how things work.

52
00:02:35,080 --> 00:02:36,080
Exactly.

53
00:02:36,080 --> 00:02:37,080
Gotcha.

54
00:02:37,080 --> 00:02:41,760
And one of the first places Amodei sees this happening is in the world of biology and healthcare.

55
00:02:41,760 --> 00:02:47,520
So imagine like a world where we've basically kicked all the worst diseases to the curb,

56
00:02:47,520 --> 00:02:51,680
and we're not just living longer, but we're actually healthier too, you know?

57
00:02:51,680 --> 00:02:53,920
Like our biology isn't holding us back anymore.

58
00:02:53,920 --> 00:02:54,920
Okay, yes.

59
00:02:54,920 --> 00:02:55,920
Sign me up for that future.

60
00:02:55,920 --> 00:02:56,920
Yeah.

61
00:02:56,920 --> 00:02:57,920
But how does that even work?

62
00:02:57,920 --> 00:02:59,280
How does AI actually make all that happen?

63
00:02:59,280 --> 00:03:02,520
I mean, you keep saying powerful AI, but what does that even look like when we're talking

64
00:03:02,520 --> 00:03:04,920
about like curing cancer, right?

65
00:03:04,920 --> 00:03:10,320
Well, remember how we were talking about AI having those marginal returns to intelligence?

66
00:03:10,320 --> 00:03:11,320
Yeah.

67
00:03:11,320 --> 00:03:16,080
So Amodei thinks biology is like the perfect example of a field where those returns could

68
00:03:16,080 --> 00:03:17,080
be huge.

69
00:03:17,080 --> 00:03:21,280
It's like there's so much potential for breakthroughs, but it's also super complex.

70
00:03:21,280 --> 00:03:22,280
Right.

71
00:03:22,280 --> 00:03:23,280
Tons of data.

72
00:03:23,280 --> 00:03:24,280
Yeah.

73
00:03:24,280 --> 00:03:25,720
And that's where AI really shines.

74
00:03:25,720 --> 00:03:29,960
So it's like instead of researchers having to like sift through all this data themselves,

75
00:03:29,960 --> 00:03:33,800
AI can be their like super powered research assistant.

76
00:03:33,800 --> 00:03:34,800
Exactly.

77
00:03:34,800 --> 00:03:38,560
It's like giving every scientist in the world their own personal like research army, you

78
00:03:38,560 --> 00:03:39,560
know?

79
00:03:39,560 --> 00:03:46,120
They can run experiments, analyze data, connect the dots way faster than any human ever could.

80
00:03:46,120 --> 00:03:50,480
And Amodei talks about how this could lead to things like personalized medicine, right?

81
00:03:50,480 --> 00:03:55,040
So your treatment is tailored to your specific genes and everything.

82
00:03:55,040 --> 00:03:56,040
Yeah, exactly.

83
00:03:56,040 --> 00:03:59,400
It's like imagine getting treatment that's designed just for you so it's way more effective

84
00:03:59,400 --> 00:04:00,800
and has fewer side effects.

85
00:04:00,800 --> 00:04:01,800
That's amazing.

86
00:04:01,800 --> 00:04:04,840
But it's not just about treating what we already have, right?

87
00:04:04,840 --> 00:04:07,280
It's about preventing these diseases in the first place.

88
00:04:07,280 --> 00:04:08,280
Right.

89
00:04:08,280 --> 00:04:13,440
Like Amodei talks about AI being able to see early warning signs of diseases like Alzheimer's

90
00:04:13,440 --> 00:04:16,040
way before any symptoms even show up.

91
00:04:16,040 --> 00:04:17,040
Wow.

92
00:04:17,040 --> 00:04:19,960
So we could actually get ahead of the curve instead of always playing catch up.

93
00:04:19,960 --> 00:04:20,960
Exactly.

94
00:04:20,960 --> 00:04:26,960
And you know, Amodei takes it even further with this idea of like biological freedom.

95
00:04:26,960 --> 00:04:27,960
Biological freedom.

96
00:04:27,960 --> 00:04:29,920
OK, now we're getting really futuristic.

97
00:04:29,920 --> 00:04:30,960
What does that even mean?

98
00:04:30,960 --> 00:04:35,400
So it's this idea that, you know, we're not just stuck with the biology we're born with.

99
00:04:35,400 --> 00:04:41,040
We can actually use technology to like enhance it, push the boundaries of what's possible.

100
00:04:41,040 --> 00:04:44,360
So are we talking about like biohacking our way to immortality here?

101
00:04:44,360 --> 00:04:48,680
Well, Amodei doesn't necessarily go that far, but he does talk about extending the human

102
00:04:48,680 --> 00:04:50,960
lifespan like significantly.

103
00:04:50,960 --> 00:04:55,800
He uses the example of how life expectancy basically doubled in the 20th century.

104
00:04:55,800 --> 00:04:58,360
And he thinks AI could help us do it again in the 21st.

105
00:04:58,360 --> 00:04:59,360
Wow.

106
00:04:59,360 --> 00:05:00,840
So double our lifespans again.

107
00:05:00,840 --> 00:05:01,840
That's wild.

108
00:05:01,840 --> 00:05:03,760
But it's not just about living longer, right?

109
00:05:03,760 --> 00:05:06,360
It's about like the quality of life too.

110
00:05:06,360 --> 00:05:07,360
Exactly.

111
00:05:07,360 --> 00:05:11,400
Amodei is talking about a future where we're not just living longer, but we're healthier

112
00:05:11,400 --> 00:05:13,800
and more like vibrant, you know.

113
00:05:13,800 --> 00:05:16,600
We have more control over our own bodies and how we age.

114
00:05:16,600 --> 00:05:17,600
Okay.

115
00:05:17,600 --> 00:05:22,440
But with all this talk about, you know, reshaping our biology and like rewriting the rules of

116
00:05:22,440 --> 00:05:25,800
aging, I have to ask, what about our minds?

117
00:05:25,800 --> 00:05:30,880
Like how does all of this impact our psychology, our mental health?

118
00:05:30,880 --> 00:05:34,700
It's a super important question and it's something Amodei definitely thinks about.

119
00:05:34,700 --> 00:05:39,140
He actually believes that just like AI can revolutionize our physical health, it can

120
00:05:39,140 --> 00:05:42,000
totally transform how we approach mental health too.

121
00:05:42,000 --> 00:05:44,200
So we're talking about like a whole new approach to mental health.

122
00:05:44,200 --> 00:05:48,040
Yeah, like imagine instead of just treating symptoms, we can actually address the underlying

123
00:05:48,040 --> 00:05:51,880
causes of mental illness at like a neurological level.

124
00:05:51,880 --> 00:05:52,880
Okay.

125
00:05:52,880 --> 00:05:55,040
Now that sounds like a game changer, but how do we actually get there?

126
00:05:55,040 --> 00:05:57,080
I mean, the brain is still this big mystery, right?

127
00:05:57,080 --> 00:06:00,560
So how can AI help us crack the code?

128
00:06:00,560 --> 00:06:03,960
Well, Amodei points to this really cool example in his essay.

129
00:06:03,960 --> 00:06:06,720
Remember that computational mechanism he talked about?

130
00:06:06,720 --> 00:06:11,160
The one that was discovered in an AI system and then like mirrored in the brains of mice?

131
00:06:11,160 --> 00:06:13,200
Yeah, yeah, that was crazy.

132
00:06:13,200 --> 00:06:18,200
So that's just one tiny example of how studying AI could give us this whole new understanding

133
00:06:18,200 --> 00:06:21,240
of like how our own brains work.

134
00:06:21,240 --> 00:06:22,240
Right.

135
00:06:22,240 --> 00:06:25,960
Like AI is almost like a mirror reflecting back at us, helping us understand our own

136
00:06:25,960 --> 00:06:27,200
minds better.

137
00:06:27,200 --> 00:06:28,200
Exactly.

138
00:06:28,200 --> 00:06:33,060
And as we learn more, Amodei thinks we could start developing AI-powered tools to actually

139
00:06:33,060 --> 00:06:35,960
address a bunch of different mental health challenges.

140
00:06:35,960 --> 00:06:41,240
Like imagine personalized therapies that are tailored to your like unique brain chemistry.

141
00:06:41,240 --> 00:06:42,240
Wow.

142
00:06:42,240 --> 00:06:45,160
So instead of just like throwing different medications or therapies at a problem and

143
00:06:45,160 --> 00:06:49,240
hoping something sticks, we can have treatments that are way more precise and effective.

144
00:06:49,240 --> 00:06:50,240
Exactly.

145
00:06:50,240 --> 00:06:54,120
It's like AI could help us target the root of the problem instead of just like putting

146
00:06:54,120 --> 00:06:55,680
a band-aid on the symptoms.

147
00:06:55,680 --> 00:06:56,680
That's amazing.

148
00:06:56,680 --> 00:06:57,840
But Amodei doesn't stop there, does he?

149
00:06:57,840 --> 00:07:03,680
He also talks about AI actually being able to like enhance our cognitive abilities, right?

150
00:07:03,680 --> 00:07:06,120
Like our focus, our memory, even our creativity.

151
00:07:06,120 --> 00:07:07,160
Yeah, it's wild, right?

152
00:07:07,160 --> 00:07:11,960
He believes that AI could help us kind of unlock the full potential of our minds, like

153
00:07:11,960 --> 00:07:17,760
not just treating illness, but actually making us like sharper, quicker, more creative.

154
00:07:17,760 --> 00:07:23,120
OK, so now this is where I have to ask, are we like playing God here, messing with our

155
00:07:23,120 --> 00:07:25,560
brains, enhancing our abilities?

156
00:07:25,560 --> 00:07:28,840
It all sounds a little, I don't know, unnatural.

157
00:07:28,840 --> 00:07:30,120
I totally get that.

158
00:07:30,120 --> 00:07:31,520
It's definitely something to think about.

159
00:07:31,520 --> 00:07:37,000
But Amodei's point is that like any advancements in this area would have to be approached really

160
00:07:37,000 --> 00:07:38,560
carefully and ethically.

161
00:07:38,560 --> 00:07:44,180
We have to make sure everyone has equal access to these technologies, not just a select few.

162
00:07:44,180 --> 00:07:45,340
That's a really important point.

163
00:07:45,340 --> 00:07:49,800
We can't just create a world where like the rich and powerful get to enhance themselves

164
00:07:49,800 --> 00:07:51,520
while everyone else gets left behind.

165
00:07:51,520 --> 00:07:52,520
Exactly.

166
00:07:52,520 --> 00:07:58,000
It's about creating a future where these technologies are used to like empower everyone, not divide

167
00:07:58,000 --> 00:07:59,000
us.

168
00:07:59,000 --> 00:08:01,960
It's not just about what's technologically possible.

169
00:08:01,960 --> 00:08:05,400
It's about making sure these advancements are used for good, for the benefit of all

170
00:08:05,400 --> 00:08:06,400
humankind.

171
00:08:06,400 --> 00:08:07,400
Exactly.

172
00:08:07,400 --> 00:08:11,280
It's about using this technology responsibly and thoughtfully to create a better future

173
00:08:11,280 --> 00:08:12,280
for everybody.

174
00:08:12,280 --> 00:08:16,040
Well, on that note, I want to thank you both for joining us today for this deep dive into

175
00:08:16,040 --> 00:08:17,580
the future of AI.

176
00:08:17,580 --> 00:08:21,620
It's a future filled with mind blowing possibilities, but also some really big questions that we

177
00:08:21,620 --> 00:08:22,800
all need to be thinking about.

178
00:08:22,800 --> 00:08:26,500
If you want to learn more about the topics we discussed today, be sure to check out the

179
00:08:26,500 --> 00:08:29,420
show notes for links to all the sources we mentioned.

180
00:08:29,420 --> 00:08:57,320
And as always, thank you for listening.

