WEBVTT

00:00:00.000 --> 00:00:02.439
Okay, so let's try to unpack this. There's this

00:00:02.439 --> 00:00:06.860
strange contradiction happening in AI right now.

00:00:06.980 --> 00:00:09.560
A really strange one. On one hand, you've got

00:00:09.560 --> 00:00:14.359
the biggest rivals. We're talking OpenAI, Anthropic,

00:00:14.380 --> 00:00:17.579
Block, all making peace to standardize their future.

00:00:17.699 --> 00:00:20.449
Right. Building the serious infrastructure. And

00:00:20.449 --> 00:00:22.989
then at the same time, you have Sam Altman, their

00:00:22.989 --> 00:00:26.089
leader on late night TV. And he's admitting he

00:00:26.089 --> 00:00:29.309
uses his, you know, genius-level AI to figure

00:00:29.309 --> 00:00:31.829
out what his baby's poop means. It's just, it's

00:00:31.829 --> 00:00:33.869
high-stakes standardization meeting this deeply

00:00:33.869 --> 00:00:37.049
mundane, very human need. Exactly. Welcome back

00:00:37.049 --> 00:00:40.030
to The Deep Dive. Our mission, as always, is

00:00:40.030 --> 00:00:42.049
to take that whole complex stack of information

00:00:42.049 --> 00:00:44.770
and find the clear currents, you know, signals

00:00:44.770 --> 00:00:46.829
guiding the industry. And today those currents

00:00:46.829 --> 00:00:48.810
are pulling in two totally opposite directions.

00:00:49.710 --> 00:00:51.789
Consolidation on one side. And humanization on

00:00:51.789 --> 00:00:53.729
the other. We're looking at a field that's maturing

00:00:53.729 --> 00:00:57.109
incredibly fast, but maybe not always so gracefully.

00:00:57.289 --> 00:00:59.810
So our roadmap. We're going to start with that

00:00:59.810 --> 00:01:01.909
big, complex move to standardize what they're

00:01:01.909 --> 00:01:04.510
calling agentic AI. These are the systems designed

00:01:04.510 --> 00:01:06.730
to connect and actually act on their own. Right.

00:01:07.079 --> 00:01:10.000
Then we'll pivot to how you can navigate all

00:01:10.000 --> 00:01:12.260
this change. We'll talk about the real professional

00:01:12.260 --> 00:01:14.900
value of getting quick validation, like with

00:01:14.900 --> 00:01:17.140
the new Google Gemini certificate. And finally,

00:01:17.219 --> 00:01:19.420
we're going to get into the really clever PR

00:01:19.420 --> 00:01:23.200
move behind giving AGI a more human face with

00:01:23.200 --> 00:01:26.879
Sam Altman's parenting stories as like the key

00:01:26.879 --> 00:01:30.420
exhibit. OK, before we jump in, let's just nail

00:01:30.420 --> 00:01:33.000
down that piece of jargon. When we say agentic

00:01:33.000 --> 00:01:36.400
AI, we're just describing models that use protocols

00:01:36.400 --> 00:01:38.799
to connect with and use other software and data

00:01:38.799 --> 00:01:41.180
tools. They're the AIs that do things, not just

00:01:41.180 --> 00:01:44.280
talk. Exactly. They execute tasks. So segment

00:01:44.280 --> 00:01:47.439
one, the standardization wars. This is a big

00:01:47.439 --> 00:01:49.519
one. It's all centered on this new Agentic AI

00:01:49.519 --> 00:01:52.519
Foundation, the AAIF. And it's under the Linux

00:01:52.519 --> 00:01:55.609
Foundation. And the members, OpenAI, Anthropic,

00:01:55.829 --> 00:01:58.370
Block. I mean, that's a surprising trio given

00:01:58.370 --> 00:02:00.469
how much they compete. It really is. To me, this

00:02:00.469 --> 00:02:02.609
is just them admitting reality. They know they

00:02:02.609 --> 00:02:05.310
can't scale the business of autonomous AI if

00:02:05.310 --> 00:02:07.450
everyone's building on, you know, a different

00:02:07.450 --> 00:02:10.349
shaky foundation. Fragmentation is a liability

00:02:10.349 --> 00:02:12.650
now. A huge one. And they didn't just join. They

00:02:12.650 --> 00:02:15.090
immediately donated these foundational tools

00:02:15.090 --> 00:02:17.689
to get it all started. So what are they contributing?

00:02:17.930 --> 00:02:20.550
What are these core technical pieces that are

00:02:20.550 --> 00:02:23.250
going to form the new standard? OK, so Anthropic

00:02:23.250 --> 00:02:25.569
contributed something called the Model Context

00:02:25.569 --> 00:02:30.129
Protocol or MCP. Right. This is so crucial. It's

00:02:30.129 --> 00:02:32.569
basically the universal language standard. It

00:02:32.569 --> 00:02:35.909
lets AI models negotiate context and access all

00:02:35.909 --> 00:02:38.669
these external tools and data sources. It handles

00:02:38.669 --> 00:02:41.250
the boring stuff like state management and authentication

00:02:41.250 --> 00:02:45.210
in a shared way. So it's not just a translator,

00:02:45.310 --> 00:02:47.370
is it? It's more like a shared set of rules for

00:02:47.370 --> 00:02:49.370
trust and access between different models and

00:02:49.370 --> 00:02:51.860
other software. Precisely. It just gets rid of

00:02:51.860 --> 00:02:53.599
so much developer friction.
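
NOTE
For anyone who wants to see MCP made concrete: below is a minimal sketch of an MCP server using the official Python SDK (pip install mcp). The server name and the lookup_weather tool are hypothetical, invented for illustration; only the SDK pieces (FastMCP, the tool decorator, run) come from the library.
  # Minimal MCP server: exposes one callable tool to any MCP client.
  from mcp.server.fastmcp import FastMCP
  # Create a named server; clients negotiate capabilities with it.
  mcp = FastMCP("demo-weather")
  @mcp.tool()
  def lookup_weather(city: str) -> str:
      """Return a canned forecast for the given city (illustrative)."""
      return f"Forecast for {city}: sunny, 22 C"
  if __name__ == "__main__":
      # Runs over the stdio transport by default, so an agent can attach.
      mcp.run()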

00:02:53.599 --> 00:02:56.280
Then you've got Block, Jack Dorsey's company. They open source

00:02:56.280 --> 00:02:58.780
something called Goose. And you can think of

00:02:58.780 --> 00:03:01.659
this as the essential starter pack for any developer

00:03:01.659 --> 00:03:04.400
building agents. It just cuts out months of basic

00:03:04.400 --> 00:03:06.740
setup work.
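
NOTE
For a rough feel of that starter pack: Block's goose ships as a command-line agent. The two subcommands below exist in current releases, but treat this as an illustrative sketch, since the CLI evolves quickly.
  goose configure   # pick an LLM provider and enable extensions (tools)
  goose session     # start an interactive agent session in your project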

00:03:06.740 --> 00:03:10.680
And OpenAI's contribution, AGENTS.md. It's the simplest, but maybe the most strategically

00:03:10.680 --> 00:03:14.080
clever part of this. Oh, it's very smart. It's

00:03:14.080 --> 00:03:16.199
just a plain text file that lives inside your

00:03:16.199 --> 00:03:19.659
code repository. And its only job is to clearly

00:03:19.659 --> 00:03:23.259
define how any AI coding tool is allowed to behave

00:03:23.259 --> 00:03:26.460
inside that specific project. It's a way to manage

00:03:26.460 --> 00:03:29.719
trust by, like, sandboxing the model's actions.
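
NOTE
To make that concrete, here is a hypothetical AGENTS.md. The format is deliberately free-form instructions that coding agents read before acting, so these sections are purely illustrative:
  # AGENTS.md
  ## Build and test
  - Run the test suite before proposing any change.
  ## Boundaries
  - Do not modify files under infra/ and never commit secrets.
  - Ask before adding new dependencies.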

00:03:30.139 --> 00:03:33.469
The thing is, the significance here... goes way

00:03:33.469 --> 00:03:35.729
beyond just these three companies because the

00:03:35.729 --> 00:03:37.330
Linux Foundation is driving it. And they have

00:03:37.330 --> 00:03:40.349
a history here. A huge history. They turn messy,

00:03:40.550 --> 00:03:44.069
fragmented ideas into core global infrastructure.

00:03:44.490 --> 00:03:46.689
Well, think about Kubernetes. It started as this

00:03:46.689 --> 00:03:49.930
super complex, messy thing for container orchestration.

00:03:50.169 --> 00:03:52.550
The Linux Foundation gave it that neutral home,

00:03:52.710 --> 00:03:54.849
and now Kubernetes basically runs most of the

00:03:54.849 --> 00:03:58.509
world's modern cloud. And this AAIF thing? It feels

00:03:58.509 --> 00:04:02.409
like the Kubernetes moment for agentic AI. That

00:04:02.409 --> 00:04:05.009
makes the stakes very clear. But let's push on

00:04:05.009 --> 00:04:06.550
the competitive side of this for a second. Okay.

00:04:06.629 --> 00:04:08.930
It's great for developers, great for interoperability.

00:04:09.110 --> 00:04:12.030
But isn't this also a strategic move for these

00:04:12.030 --> 00:04:14.490
three big players to kind of control the infrastructure?

00:04:14.789 --> 00:04:17.459
Well, that's always the risk, right? They could potentially

00:04:17.459 --> 00:04:20.180
lock out smaller competitors who weren't at the

00:04:20.180 --> 00:04:21.839
table when the rules were written. That's the

00:04:21.839 --> 00:04:23.839
inherent risk of standardizing early. You're

00:04:23.839 --> 00:04:25.699
right. You build the roads before everyone else

00:04:25.699 --> 00:04:28.480
has a car. But the immediate driver here really

00:04:28.480 --> 00:04:31.019
is business growth. We know that the AI coding

00:04:31.019 --> 00:04:34.079
market just exploded this year. We saw the figures.

00:04:34.259 --> 00:04:37.639
Yeah. AI coding spend went from, what, $550 million

00:04:37.639 --> 00:04:42.189
to $4 billion in one year. That is just explosive

00:04:42.189 --> 00:04:44.930
growth. And when you have that much money flying

00:04:44.930 --> 00:04:48.050
around, fragmentation, you know, where a tool

00:04:48.050 --> 00:04:50.709
only works with Claude or only in VS Code or only

00:04:50.709 --> 00:04:52.810
in the cloud, that becomes a massive barrier.

00:04:53.029 --> 00:04:55.170
You need standardization because that $4 billion

00:04:55.170 --> 00:04:57.649
is only going to grow if the tools are reliable

00:04:57.649 --> 00:04:59.689
and they work across all the different software

00:04:59.689 --> 00:05:02.769
stacks. So if we strip away the money in the

00:05:02.769 --> 00:05:06.250
competition for a moment, what's the one major

00:05:06.250 --> 00:05:09.839
underlying... technical pain point that this

00:05:09.839 --> 00:05:12.319
whole open source effort is trying to fix? It

00:05:12.319 --> 00:05:14.560
fixes the challenge of different AI tools and

00:05:14.560 --> 00:05:17.279
models needing a reliable shared language to

00:05:17.279 --> 00:05:20.079
connect securely across any software stack. That's

00:05:20.079 --> 00:05:23.800
the core of it. Okay, so from the complex job

00:05:23.800 --> 00:05:27.279
of standardizing how we build AI, we pivot. We

00:05:27.279 --> 00:05:29.980
pivot to standardizing how we learn about it

00:05:29.980 --> 00:05:32.180
with this new credential from Google. I love

00:05:32.180 --> 00:05:34.459
this story because it's so actionable for anyone

00:05:34.459 --> 00:05:38.000
listening. Google AI Education just launched

00:05:38.000 --> 00:05:40.660
the Gemini Educator Certificate. And the key

00:05:40.660 --> 00:05:42.860
detail. The key detail is that it's completely

00:05:42.860 --> 00:05:47.180
free until December 31st, 2025. Wow. After that,

00:05:47.279 --> 00:05:50.920
it's $25. That's a really generous window, a

00:05:50.920 --> 00:05:52.980
low-friction way for people to validate their

00:05:52.980 --> 00:05:55.339
skills. So what's the actual value proposition

00:05:55.339 --> 00:05:58.009
here for someone learning? I think it's got three

00:05:58.009 --> 00:06:00.310
tiers of benefit. First, just the practical utility.

00:06:00.470 --> 00:06:02.569
You actually gain mastery of the Gemini platform

00:06:02.569 --> 00:06:04.310
and that's becoming a really essential tool.

00:06:04.490 --> 00:06:07.110
Second, the profile boost. It's a Google credential

00:06:07.110 --> 00:06:09.410
that just carries some weight on a resume, you

00:06:09.410 --> 00:06:11.410
know. And maybe the most important part, in a

00:06:11.410 --> 00:06:13.769
really crowded job market, it signals something

00:06:13.769 --> 00:06:18.180
critical to employers. Yes. This is it. It signals

00:06:18.180 --> 00:06:21.360
proactive learning. If you have a pool of candidates

00:06:21.360 --> 00:06:24.000
who are all equally skilled, the one who took

00:06:24.000 --> 00:06:26.459
the initiative to learn and validate their knowledge

00:06:26.459 --> 00:06:29.100
on these new tools, they just gave themselves

00:06:29.100 --> 00:06:32.379
an edge. It shows curiosity. You know, I'll admit

00:06:32.379 --> 00:06:35.360
I still wrestle with prompt drift myself sometimes,

00:06:35.519 --> 00:06:37.720
finding the best way to talk to these things

00:06:37.720 --> 00:06:40.160
as they evolve. We all do. So I genuinely appreciate

00:06:40.160 --> 00:06:44.240
these low-stakes ways to learn a new platform

00:06:44.240 --> 00:06:47.259
without, you know, the pressure of a huge tuition

00:06:47.259 --> 00:06:49.560
bill. The only barrier to entry here is time

00:06:49.560 --> 00:06:52.040
and curiosity. That's a great democratizing move.

00:06:52.259 --> 00:06:54.939
So for anyone interested, what are the key logistical

00:06:54.939 --> 00:06:56.920
things they need to know about taking it? Okay,

00:06:56.980 --> 00:06:59.600
watch the clock. You get 120 minutes for the

00:06:59.600 --> 00:07:01.660
exam, but people who've done well say it can

00:07:01.660 --> 00:07:04.389
be done in about 30. But the big warning. The

00:07:04.389 --> 00:07:07.310
retake policy. The retake policy. If you fail

00:07:07.310 --> 00:07:09.430
that first time, you have to wait eight days

00:07:09.430 --> 00:07:11.750
before you can try again. Given that restriction

00:07:11.750 --> 00:07:13.750
and the professional benefit we're talking about,

00:07:13.889 --> 00:07:16.430
should people be prioritizing a really high score?

00:07:16.649 --> 00:07:19.110
Or is it more about just showing you're curious

00:07:19.110 --> 00:07:22.470
and proactive by getting it done? Focus on signaling

00:07:22.470 --> 00:07:24.649
curiosity and showing you're actively engaging

00:07:24.649 --> 00:07:29.079
with new and evolving AI tools. We're back. We've

00:07:29.079 --> 00:07:31.779
covered the technical standardization that requires

00:07:31.779 --> 00:07:34.379
engineering collaboration. Now we're shifting

00:07:34.379 --> 00:07:37.620
to this powerful current of AI humanization that

00:07:37.620 --> 00:07:40.259
requires public relatability. And this is where

00:07:40.259 --> 00:07:43.240
Sam Altman, the guy building AGI, makes his debut

00:07:43.240 --> 00:07:45.560
on The Tonight Show with Jimmy Fallon. And the topic

00:07:45.560 --> 00:07:48.459
wasn't, you know, the next LLM benchmark or the

00:07:48.459 --> 00:07:50.319
future of superintelligence. No, it was diapers

00:07:50.319 --> 00:07:54.800
and pizza. He framed ChatGPT as his parenting

00:07:54.800 --> 00:07:57.560
sidekick for his newborn. So the man leading

00:07:57.560 --> 00:07:59.579
one of the highest stakes tech projects in human

00:07:59.579 --> 00:08:02.459
history is using it for these deeply mundane,

00:08:02.680 --> 00:08:05.000
stressful parenting moments. The stories he told

00:08:05.000 --> 00:08:07.279
were perfect PR. He talked about asking why his

00:08:07.279 --> 00:08:09.439
baby was laughing while throwing pizza on the

00:08:09.439 --> 00:08:11.620
floor. I mean, that is just a genuinely human,

00:08:11.720 --> 00:08:14.120
chaotic moment. Or the other one, the universal

00:08:14.120 --> 00:08:17.040
but kind of gross question. Asking his genius

00:08:17.040 --> 00:08:20.019
-level AI about the color of baby poop. Right.

00:08:20.199 --> 00:08:23.300
He even admitted he feels kind of bad asking

00:08:23.300 --> 00:08:25.980
this super advanced system such dumb questions.

00:08:26.180 --> 00:08:28.879
And that little admission of vulnerability, that's

00:08:28.879 --> 00:08:30.959
the whole strategy. That's the core of it. The

00:08:30.959 --> 00:08:34.419
punchline is... It works. It's deeply relatable.

00:08:34.659 --> 00:08:37.120
It's incredibly effective, especially when you

00:08:37.120 --> 00:08:39.620
remember this is a company that has spent years

00:08:39.620 --> 00:08:43.059
in these serious, sometimes fearful public debates

00:08:43.059 --> 00:08:46.700
about existential risk and model races with Google.

00:08:47.049 --> 00:08:49.610
And now the CEO is talking about nursery stress.

00:08:49.690 --> 00:08:53.789
This is absolutely a clever, soft PR move. The

00:08:53.789 --> 00:08:56.190
goal is to shift the public's focus away from

00:08:56.190 --> 00:08:58.830
that scary theoretical future and bring it right

00:08:58.830 --> 00:09:01.710
back to immediate, friendly, practical use. They're

00:09:01.710 --> 00:09:04.610
working to make ChatGPT feel like a non-intimidating

00:09:04.610 --> 00:09:06.970
tool again. And the choice of Fallon's show for

00:09:06.970 --> 00:09:09.970
distribution is just genius. It completely bypasses

00:09:09.970 --> 00:09:11.809
the tech press and goes straight to millions

00:09:11.809 --> 00:09:14.029
of everyday people. The tech community doesn't

00:09:14.029 --> 00:09:16.610
need to be convinced. No, but millions of late

00:09:16.610 --> 00:09:19.490
night viewers just met this technology in the

00:09:19.490 --> 00:09:22.529
lowest stakes way possible. It normalizes it

00:09:22.529 --> 00:09:25.009
through humor. It suggests, hey, the CEO uses

00:09:25.009 --> 00:09:26.889
it to check on his kid. It can't be that scary.

00:09:27.129 --> 00:09:31.179
So does this PR shift suggest that public fear,

00:09:31.259 --> 00:09:34.080
or maybe just intimidation about AI, was a bigger

00:09:34.080 --> 00:09:36.080
challenge for the company than the actual technological

00:09:36.080 --> 00:09:39.080
capability? Yes, it suggests establishing broad

00:09:39.080 --> 00:09:42.460
public relatability is now a primary goal, maybe

00:09:42.460 --> 00:09:45.340
even equal to scaling the tech itself. Okay,

00:09:45.440 --> 00:09:48.240
let's wrap our deep dive with a rapid fire segment.

00:09:48.360 --> 00:09:51.320
Some interesting industry insights, tools, and

00:09:51.320 --> 00:09:54.440
predictions. Quick hits. Let's start with a really

00:09:54.440 --> 00:09:57.240
powerful new tool for anyone doing serious research.

00:09:57.320 --> 00:09:59.879
It's a Chrome extension called the Slopivator.

00:10:00.240 --> 00:10:02.399
The name alone just perfectly captures the need.

00:10:02.539 --> 00:10:04.539
It really does. It filters out AI-generated

00:10:04.539 --> 00:10:07.139
content and only shows you search results that

00:10:07.139 --> 00:10:09.440
were published before ChatGPT launched.
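
NOTE
You can approximate this without the extension. ChatGPT launched on November 30, 2022, so Google's date operator gets close (an illustrative query, not the extension's actual mechanism):
  transformer scaling laws before:2022-11-30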

00:10:09.440 --> 00:10:12.159
So it's a dedicated tool for finding pure research before

00:10:12.159 --> 00:10:14.740
the floodgates of synthetic text opened. Which

00:10:14.740 --> 00:10:17.259
says a lot about data integrity now, right? It's

00:10:17.259 --> 00:10:19.480
fascinating. We need a tool to deliberately look

00:10:19.480 --> 00:10:22.059
backwards just to trust what we're reading. Okay,

00:10:22.100 --> 00:10:25.110
what's next? On the topic of getting better outputs,

00:10:25.470 --> 00:10:28.190
an OpenAI co-founder gave some great advice.

00:10:28.590 --> 00:10:31.870
A simple instruction: stop using the phrase

00:10:32.090 --> 00:10:35.070
"what do you think?" Exactly. Just eliminate that

00:10:35.070 --> 00:10:37.490
request for an AI's opinion. And you will immediately

00:10:37.490 --> 00:10:41.129
get higher quality, less biased, more objective

00:10:41.129 --> 00:10:43.830
outputs. That's just actionable advice for anyone

00:10:43.830 --> 00:10:46.309
using these tools professionally.
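
NOTE
An illustrative before-and-after, our wording rather than the co-founder's:
  Before: "Here's my migration plan. What do you think?"
  After: "List the three biggest risks in this plan and one mitigation for each."
The rewrite gives the model a concrete, checkable task instead of inviting agreeable filler.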

00:10:46.549 --> 00:10:48.190
And in hardware, a lot of money is flowing toward efficiency.

00:10:48.899 --> 00:10:52.840
Unconventional AI just raised $475 million. And

00:10:52.840 --> 00:10:56.019
that huge funding round is specifically for creating

00:10:56.019 --> 00:10:58.759
brain-inspired chips. Chips designed to be more

00:10:58.759 --> 00:11:01.320
energy efficient, more eco-friendly. Which proves

00:11:01.320 --> 00:11:03.360
efficiency is now a multi-million-dollar concern,

00:11:03.480 --> 00:11:05.759
not just a philosophical one. Right. But here's

00:11:05.759 --> 00:11:07.879
the prediction that really stopped me. Time's

00:11:07.879 --> 00:11:11.440
2025 Person of the Year. The betting favorite?

00:11:11.720 --> 00:11:15.419
AI itself, at 40%, ahead of actual people

00:11:15.419 --> 00:11:18.710
like Jensen Huang and Sam Altman. Whoa. Just

00:11:18.710 --> 00:11:21.629
stop for a second. Imagine scaling the very concept

00:11:21.629 --> 00:11:24.710
of personhood to an algorithm. That reflects

00:11:24.710 --> 00:11:27.610
how pervasive this tech has become in our collective

00:11:27.610 --> 00:11:29.950
imagination. It's a genuine moment of wonder

00:11:29.950 --> 00:11:32.309
at the speed of this change. So with all these

00:11:32.309 --> 00:11:36.450
quick hits, from filtering data to venture deals,

00:11:36.450 --> 00:11:39.029
for the average professional, what's the single

00:11:39.029 --> 00:11:42.559
most practical insight here? The actionable strategy is

00:11:42.559 --> 00:11:44.919
to refine your prompts by eliminating vague,

00:11:45.139 --> 00:11:48.379
opinion-seeking questions. Precise input guarantees

00:11:48.379 --> 00:11:51.120
better output. We covered a huge amount of ground

00:11:51.120 --> 00:11:53.840
today. We really did. From rival companies agreeing

00:11:53.840 --> 00:11:57.320
on foundational rules to a CEO talking about

00:11:57.320 --> 00:11:59.539
his personal life. And to recap the big idea.

00:12:00.029 --> 00:12:02.649
The world of AI is simultaneously consolidating

00:12:02.649 --> 00:12:05.330
its infrastructure. That AAIF alliance proves

00:12:05.330 --> 00:12:07.690
it. And it's dissolving its intimidating public

00:12:07.690 --> 00:12:10.830
image through this very calculated PR. The core

00:12:10.830 --> 00:12:13.149
takeaway here is that the tools for both building

00:12:13.149 --> 00:12:16.070
AI and for using AI are becoming so much more

00:12:16.070 --> 00:12:18.470
accessible. But that accessibility only helps

00:12:18.470 --> 00:12:20.750
you if you stay curious and proactive. Which

00:12:20.750 --> 00:12:23.029
brings us right back to that free Gemini certificate.

00:12:23.450 --> 00:12:25.110
Yeah, if you're curious, go check it out. It's

00:12:25.110 --> 00:12:27.470
a really low-friction way to show you're engaged

00:12:27.470 --> 00:12:30.080
before that deadline is up. And before we sign

00:12:30.080 --> 00:12:32.679
off, we want to leave you with one final provocative

00:12:32.679 --> 00:12:35.559
thought from this week's news. A Google VP of

00:12:35.559 --> 00:12:37.899
ads recently denied the rumor about integrating

00:12:37.899 --> 00:12:40.779
ads deep inside their personalized AI models.

00:12:41.360 --> 00:12:44.500
But the key phrase they used was, no current

00:12:44.500 --> 00:12:47.080
plans. We encourage you to think about what that

00:12:47.080 --> 00:12:50.299
very deliberate, nuanced phrasing implies about

00:12:50.299 --> 00:12:53.039
the inevitable future of advertising. What happens

00:12:53.039 --> 00:12:55.820
when our trusted, personalized study buddy also

00:12:55.820 --> 00:12:57.940
becomes an incredibly well-informed salesman?

00:12:58.190 --> 00:13:00.389
Keep digging into your own source material. We'll

00:13:00.389 --> 00:13:01.129
catch you on the next one.
