WEBVTT

00:00:00.000 --> 00:00:02.620
Welcome to the Deep Dive. Let's start with a

00:00:02.620 --> 00:00:05.019
claim that honestly sounds almost unbelievable,

00:00:05.280 --> 00:00:07.580
but it seems to be reshaping how work gets done.

00:00:08.320 --> 00:00:10.939
Imagine building a really complex AI automation,

00:00:11.240 --> 00:00:13.179
say something with, I don't know, 25 different

00:00:13.179 --> 00:00:15.160
steps, different tools talking to each other.

00:00:15.699 --> 00:00:17.660
Historically, that was, well, that was an expert's

00:00:17.660 --> 00:00:20.699
job, measured in hours, maybe even days. And

00:00:20.699 --> 00:00:23.420
now, yeah, now we're hearing that the entire

00:00:23.420 --> 00:00:25.519
structure, the whole thing mapped out, connected,

00:00:25.660 --> 00:00:28.579
partly set up, can be generated in about 90 seconds.

00:00:28.640 --> 00:00:30.600
That's the leap we're diving into today. It's

00:00:30.600 --> 00:00:32.759
kind of staggering. We are digging into sources

00:00:32.759 --> 00:00:35.780
all about n8n's new AI workflow builder.

00:00:36.020 --> 00:00:37.899
And this isn't just, you know, a minor update;

00:00:38.079 --> 00:00:40.799
the claim is a 10x shift in how fast you can

00:00:40.799 --> 00:00:43.039
build things, opening up this whole agent era

00:00:43.039 --> 00:00:45.759
to pretty much everyone. Right. Our mission here

00:00:45.759 --> 00:00:48.340
is to unpack how this tool basically tears down

00:00:48.340 --> 00:00:50.600
the old barriers. You know, automation used to

00:00:50.600 --> 00:00:53.320
be for coders, for specialists. The idea now

00:00:53.320 --> 00:00:57.280
is it's for, well, for you, for anyone. So we'll

00:00:57.280 --> 00:00:59.100
look at that speed increase, the different ways

00:00:59.100 --> 00:01:01.539
this AI builder actually works, and walk through

00:01:01.539 --> 00:01:04.379
three examples, simple to seriously complex.

00:01:04.840 --> 00:01:07.280
And crucially, what this means for the economics

00:01:07.280 --> 00:01:11.239
of your time and your work. Okay, let's get into

00:01:11.239 --> 00:01:14.439
it. Thinking back to how automation used to be

00:01:14.439 --> 00:01:16.900
built, it wasn't just hard, it felt like there

00:01:16.900 --> 00:01:18.950
was so much friction. Like the system wanted

00:01:18.950 --> 00:01:21.109
you to be an expert. Oh, absolutely. Even for

00:01:21.109 --> 00:01:23.750
someone like me who lives in these tools, a simple

00:01:23.750 --> 00:01:27.109
three-step thing, 10, 15 minutes, easy. If you

00:01:27.109 --> 00:01:28.810
were just starting out, forget it. Half an hour

00:01:28.810 --> 00:01:30.870
minimum and probably feeling pretty defeated

00:01:30.870 --> 00:01:32.650
afterwards. Yeah, defeated is a good word. The

00:01:32.650 --> 00:01:34.750
friction points were just everywhere. You had

00:01:34.750 --> 00:01:37.129
to manually look through hundreds of these nodes,

00:01:37.269 --> 00:01:39.310
these little tools, just to pick the right one.

00:01:40.269 --> 00:01:42.209
Setting up credentials. Yeah. Especially for

00:01:42.209 --> 00:01:44.129
Google stuff. That was like this legendary headache.

00:01:44.269 --> 00:01:46.510
It honestly felt like you needed a PhD sometimes

00:01:46.510 --> 00:01:49.090
just to connect two apps. Right. And the time

00:01:49.090 --> 00:01:52.370
spent debugging. Oh man, that often took longer

00:01:52.370 --> 00:01:54.849
than actually building it. But that whole mess

00:01:54.849 --> 00:01:57.569
just collapses with this AI co-pilot approach.

00:01:57.969 --> 00:02:01.890
Simple stuff? One, two minutes. Complex? Five

00:02:01.890 --> 00:02:05.230
to ten. And the extreme case, that one-shot build?

00:02:05.230 --> 00:02:07.310
Yeah, under two minutes. So the whole interaction

00:02:07.310 --> 00:02:09.650
changes. You just, you describe what you want in

00:02:09.650 --> 00:02:12.189
plain English. Pretty much. You give it the prompt,

00:02:12.310 --> 00:02:15.490
the AI figures out the right nodes, connects

00:02:15.490 --> 00:02:18.310
them logically, and here's the kicker, it intelligently

00:02:18.310 --> 00:02:20.569
grabs your saved credentials if you have them.

00:02:20.710 --> 00:02:22.990
And it even puts in some basic settings. It's

00:02:22.990 --> 00:02:25.009
basically building the skeleton for you. That

00:02:25.009 --> 00:02:26.650
doesn't sound like just making things faster.

00:02:26.750 --> 00:02:28.530
That sounds like completely removing the wall

00:02:28.530 --> 00:02:30.889
that kept most people out. It really is a paradigm

00:02:30.889 --> 00:02:33.430
shift. It is. So if the setup gets that simple,

00:02:33.550 --> 00:02:36.050
what's left for the user to actually do? The

00:02:36.050 --> 00:02:38.789
user gets to focus on the strategy, the why, and

00:02:38.789 --> 00:02:40.629
then checking the final details, maybe that last

00:02:40.629 --> 00:02:43.169
10 percent. Okay, so it's important to understand

00:02:43.169 --> 00:02:46.430
this AI workflow builder isn't just like a chatbot

00:02:46.430 --> 00:02:49.270
giving advice. It's described as an active agent.

00:02:49.270 --> 00:02:52.430
It has access to all the n8n documentation, knows

00:02:52.430 --> 00:02:55.889
about the 500-plus tools, or nodes, and the best

00:02:55.889 --> 00:02:58.509
ways to use them. Right, nodes. Think of them like

00:02:58.509 --> 00:03:01.509
specialized Lego blocks for your automation. Each

00:03:01.509 --> 00:03:04.750
one does a specific job: gets data, processes it,

00:03:04.750 --> 00:03:07.120
sends it somewhere else. And the sources highlight

00:03:07.120 --> 00:03:10.800
two really distinct modes for this AI. Knowing

00:03:10.800 --> 00:03:13.599
the difference seems key. Let's start with ask

00:03:13.599 --> 00:03:16.280
mode. Ask mode is your helper. It's like having

00:03:16.280 --> 00:03:18.419
a consultant sitting next to you. It can explain

00:03:18.419 --> 00:03:20.759
how a node works, suggest ways to build something,

00:03:20.919 --> 00:03:23.020
answer your questions. But, and this is crucial,

00:03:23.120 --> 00:03:24.819
it won't actually change anything in your workflow.

00:03:24.939 --> 00:03:27.060
It's read-only, basically. Good for planning.

00:03:27.199 --> 00:03:29.060
Okay, planner mode. Then there's build mode.

00:03:29.199 --> 00:03:31.300
That sounds like where the action happens. That's

00:03:31.300 --> 00:03:33.729
the agent doing the work. It adds the nodes,

00:03:33.909 --> 00:03:36.409
connects the wires, fills in the settings, even

00:03:36.409 --> 00:03:38.590
handles credentials, all based on your prompt.

00:03:38.870 --> 00:03:41.229
This is how you get those super fast builds.

00:03:41.770 --> 00:03:45.189
But there's a catch. A pretty significant one

00:03:45.189 --> 00:03:48.889
mentioned. A 1,000-character limit for the prompt.

00:03:49.129 --> 00:03:52.900
1,000 characters. Hmm. That does feel restrictive.
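That cap is easy to check before you ever hit submit. Here's a minimal Python sketch; the `PROMPT_LIMIT` constant matches the limit discussed here, but the greedy splitting rule is purely illustrative, not anything n8n itself provides:

```python
# n8n's build-mode prompts are described as capped at 1,000 characters.
# This sketch checks a spec against that cap and, if needed, packs a list
# of step descriptions into several under-limit prompts (illustrative only).

PROMPT_LIMIT = 1000

def fits_limit(prompt: str) -> bool:
    """True if the prompt can be sent to build mode in one shot."""
    return len(prompt) <= PROMPT_LIMIT

def split_spec(steps: list[str]) -> list[str]:
    """Greedily pack step descriptions into prompts under the limit.

    Assumes no single step exceeds the limit on its own.
    """
    chunks, current = [], ""
    for step in steps:
        candidate = (current + " " + step).strip()
        if len(candidate) <= PROMPT_LIMIT:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = step
    if current:
        chunks.append(current)
    return chunks
```

Splitting like this is also what sets up the node-by-node strategy discussed later: each chunk becomes its own small build instruction.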

00:03:53.060 --> 00:03:55.479
Doesn't that immediately rule out a lot of complex,

00:03:55.740 --> 00:03:58.740
real-world business processes that need detailed

00:03:58.740 --> 00:04:00.860
instructions? Yeah, that's the challenge they

00:04:00.860 --> 00:04:03.599
flagged. Detailed automations often need way

00:04:03.599 --> 00:04:06.099
more than 1,000 characters to describe accurately,

00:04:06.419 --> 00:04:08.680
forcing you to be super concise or figure out

00:04:08.680 --> 00:04:11.180
another way. So why is that 1,000-character

00:04:11.180 --> 00:04:13.319
limit such a big deal for more advanced users?

00:04:13.599 --> 00:04:15.520
Real-world automations often need descriptions

00:04:15.520 --> 00:04:18.199
much longer than that constraint allows. All

00:04:18.199 --> 00:04:19.779
right, let's look at the first case study, Agent

00:04:19.779 --> 00:04:23.019
1. Pretty straightforward. Monitor an email inbox,

00:04:23.300 --> 00:04:26.220
use AI to figure out a response, and draft that

00:04:26.220 --> 00:04:28.279
reply. Yeah, the kind of thing that used to be

00:04:28.279 --> 00:04:31.439
10, maybe 15 minutes of clicking around. In the

00:04:31.439 --> 00:04:33.500
test described, they gave it a single prompt.

00:04:33.680 --> 00:04:36.560
The AI picked the right Gmail trigger, something

00:04:36.560 --> 00:04:39.079
called an AI agent node, and the Gmail send node,

00:04:39.220 --> 00:04:41.899
hooked them up, popped in the credentials, took

00:04:41.899 --> 00:04:44.740
maybe 30, 40 seconds. Hold on. The AI agent node.

00:04:44.819 --> 00:04:47.439
Can you clarify what that specific Lego block

00:04:47.439 --> 00:04:50.500
does? Good catch. Yeah. So if a node is a tool,

00:04:50.660 --> 00:04:53.519
the AI agent node is basically a bridge. It lets

00:04:53.519 --> 00:04:55.480
your workflow talk directly to a large language

00:04:55.480 --> 00:04:58.560
model. Think ChatGPT, Claude, models like that.

00:04:58.660 --> 00:05:01.060
So you can use it for tasks like summarizing

00:05:01.060 --> 00:05:03.860
text, writing drafts, analyzing sentiment, that

00:05:03.860 --> 00:05:05.300
kind of thing, right within your automation.
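In plain Python terms, the role that node plays, taking workflow data in, handing it to a language model, and passing the result downstream, looks roughly like this. The function names and the stubbed model are illustrative stand-ins, not n8n's actual API:

```python
from typing import Callable

# Sketch of what the AI agent "Lego block" does inside a workflow: it bridges
# ordinary workflow data to a large language model. stub_llm stands in for a
# real model call (e.g. to ChatGPT or Claude).

def stub_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; just echoes a canned draft."""
    return "DRAFT: " + prompt.splitlines()[0]

def ai_agent_node(item: dict, llm: Callable[[str], str]) -> dict:
    """Take an incoming email item, ask the model for a reply, pass it on."""
    prompt = f"Draft a short, polite reply to this email:\n{item['body']}"
    item["draft_reply"] = llm(prompt)
    return item  # downstream nodes (e.g. a send-email node) consume this

email = {"from": "alice@example.com", "body": "Can we move our call to 3pm?"}
result = ai_agent_node(email, stub_llm)
```

In the real tool you configure this node in the UI rather than writing code; the point is just that it's a pass-through bridge, data in, model call, enriched data out.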

00:05:05.600 --> 00:05:07.899
Got it. Makes sense. So 40 seconds for the build.

00:05:08.259 --> 00:05:10.639
But the sources said there were still minor tweaks

00:05:10.639 --> 00:05:14.019
needed, like fixing how data flowed into one

00:05:14.019 --> 00:05:16.600
field. Even with that, the total time was apparently

00:05:16.600 --> 00:05:18.899
only around three minutes. Which is still a huge

00:05:18.899 --> 00:05:20.980
jump in productivity. And that speed really hinges

00:05:20.980 --> 00:05:22.600
on getting rid of those old friction points.

00:05:22.860 --> 00:05:26.360
The biggest one they mention? Fixing Google authentication.

00:05:26.939 --> 00:05:29.120
Oh, I remember hearing about that. The old way

00:05:29.120 --> 00:05:32.040
sounded like torture. Going into the Google Cloud

00:05:32.040 --> 00:05:35.120
console, creating OAuth stuff, client IDs, secrets.

00:05:35.379 --> 00:05:37.699
The sources literally called it the number one

00:05:37.699 --> 00:05:41.060
reason people just gave up. Exactly. Now, it's

00:05:41.060 --> 00:05:43.589
just sign in with Google. One click. Apparently,

00:05:43.649 --> 00:05:45.589
they had to go through a pretty expensive Google

00:05:45.589 --> 00:05:48.009
audit to make that happen. It really shows they're

00:05:48.009 --> 00:05:50.529
serious about making it accessible. And alongside

00:05:50.529 --> 00:05:52.930
that, the credential management is smarter, too.

00:05:53.069 --> 00:05:55.629
The AI figures out which saved account to use.

00:05:55.790 --> 00:05:58.550
Yeah, it suggests the right one based on the

00:05:58.550 --> 00:06:01.189
node. Little things, but they add up. So how

00:06:01.189 --> 00:06:03.850
big a deal is fixing that Google authentication

00:06:03.850 --> 00:06:06.470
for getting more people to use this? It removes

00:06:06.470 --> 00:06:09.339
the single biggest hurdle. Massively democratizing

00:06:09.339 --> 00:06:12.480
access for non-technical users. Okay, let's

00:06:12.480 --> 00:06:15.600
ramp up the complexity. Agent 2, advanced calendar

00:06:15.600 --> 00:06:18.100
analysis. This thing needed to run daily, look

00:06:18.100 --> 00:06:20.339
at calendar events, figure out productivity metrics,

00:06:20.560 --> 00:06:23.939
create a custom HTML report. And then email it.

00:06:24.000 --> 00:06:26.199
Yeah, that's definitely more involved. Manually,

00:06:26.240 --> 00:06:28.720
you're easily looking at 30 to 60 minutes, maybe

00:06:28.720 --> 00:06:31.279
more if you hit snags. And right away, you hit

00:06:31.279 --> 00:06:33.279
that thousand-character wall. The goal is just

00:06:33.279 --> 00:06:35.699
too detailed. So they couldn't just type it all

00:06:35.699 --> 00:06:37.839
in. They had to use this hybrid strategy. Explain

00:06:37.839 --> 00:06:40.500
that. Right. The clever workaround was using

00:06:40.500 --> 00:06:42.639
another AI. First, they mentioned Claude Sonnet

00:06:42.639 --> 00:06:45.759
4.5 to break down the big complex goal into

00:06:45.759 --> 00:06:48.199
tiny step-by-step instructions like step one,

00:06:48.360 --> 00:06:51.379
add a calendar trigger. Step two, add a code

00:06:51.379 --> 00:06:53.839
node. Then they fed those small instructions

00:06:53.839 --> 00:06:57.860
one by one into the n8n AI builder. Okay, that

00:06:57.860 --> 00:07:00.160
is clever, using one AI to manage the other. It

00:07:00.160 --> 00:07:02.800
is. And honestly, here's a bit of a confession:

00:07:02.800 --> 00:07:06.000
even doing this stuff all the time, I still wrestle

00:07:06.000 --> 00:07:08.660
with prompt drift myself. You give an AI a really

00:07:08.660 --> 00:07:11.600
long, complex prompt, sometimes it just loses the

00:07:11.600 --> 00:07:14.040
plot, you know, makes weird mistakes halfway through

00:07:14.040 --> 00:07:17.139
that cascade into, well, into chaos. Sounds like

00:07:17.139 --> 00:07:19.120
a nightmare to debug. It's what we call debugging

00:07:19.120 --> 00:07:21.819
hell in the notes. Seriously. Trying to find

00:07:21.819 --> 00:07:25.000
the one error in a huge 25 node workflow that

00:07:25.000 --> 00:07:28.040
the AI built all at once. No thanks. This node

00:07:28.040 --> 00:07:30.360
by node method, even if it feels slower initially,

00:07:30.560 --> 00:07:33.019
it prevents that massive headache. It's kind

00:07:33.019 --> 00:07:34.680
of admitting that sometimes breaking the problem

00:07:34.680 --> 00:07:37.579
down yourself or using a specialized AI for planning

00:07:37.579 --> 00:07:39.879
is just better. So building it piece by piece,

00:07:39.939 --> 00:07:42.300
node by node, it keeps the AI focused on one

00:07:42.300 --> 00:07:44.860
thing at a time and it lets the human check each

00:07:44.860 --> 00:07:47.279
step. Exactly. Quality control at each stage.
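The loop they're describing, one small instruction at a time, with a check after each, can be sketched like this. `build_step` and `approve` are stand-ins for the AI builder and the human review, not real n8n calls:

```python
# Sketch of the node-by-node strategy: a planner has already broken the goal
# into small instructions, and each one is sent to the builder separately,
# with a quality check between steps.

def build_step(instruction: str) -> dict:
    """Stand-in for the AI builder acting on one small instruction."""
    return {"instruction": instruction, "status": "built"}

def approve(node: dict) -> bool:
    """Stand-in for the human quality check at each stage."""
    return node["status"] == "built"

def build_node_by_node(instructions: list[str]) -> list[dict]:
    workflow = []
    for instruction in instructions:
        node = build_step(instruction)
        if not approve(node):
            # Fail fast: the error is isolated to this one node, instead of
            # being buried somewhere inside a 25-node one-shot build.
            raise RuntimeError(f"step failed: {instruction}")
        workflow.append(node)
    return workflow

plan = ["add a calendar trigger", "add a code node", "add an email node"]
workflow = build_node_by_node(plan)
```

The design point is the early exit: a failure surfaces at the exact step that caused it, which is what makes troubleshooting so much faster than debugging a fully generated workflow after the fact.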

00:07:47.399 --> 00:07:49.259
The results they reported were pretty amazing.

00:07:49.399 --> 00:07:51.980
For this quite complex agent, the total build

00:07:51.980 --> 00:07:53.899
time was about 15 minutes, but that was while

00:07:53.899 --> 00:07:55.779
they were explaining it on video. So realistically,

00:07:56.180 --> 00:07:58.779
probably five to 10 minutes normally. Compare

00:07:58.779 --> 00:08:01.899
that to 45, maybe 90 minutes the old way. It

00:08:01.899 --> 00:08:04.819
really shows the power of combining the n8n AI for

00:08:04.819 --> 00:08:07.360
the structure and maybe a different AI for the

00:08:07.360 --> 00:08:09.660
really nuanced bits like code generation. So

00:08:09.660 --> 00:08:12.579
besides using an external AI for planning, what's

00:08:12.579 --> 00:08:14.459
the main advantage of building node by node?

00:08:14.540 --> 00:08:16.920
It isolates potential configuration issues to

00:08:16.920 --> 00:08:18.899
single components, making troubleshooting far

00:08:18.899 --> 00:08:22.000
easier and faster.

00:08:22.000 --> 00:08:24.660
Which

00:08:24.660 --> 00:08:27.680
brings us to the final test. Agent 3. This one

00:08:27.680 --> 00:08:30.680
sounds wild. A complex, long to short form content

00:08:30.680 --> 00:08:33.500
factory. Fetch a video, transcribe it, analyze

00:08:33.500 --> 00:08:35.799
it for viral bits, chop it into multiple short

00:08:35.799 --> 00:08:38.799
clips, make thumbnails, A/B test titles, publish

00:08:38.799 --> 00:08:41.279
everything. We're talking 25, maybe more nodes

00:08:41.279 --> 00:08:43.679
needed here. Yeah, historically. That's not hours.

00:08:43.740 --> 00:08:45.759
That's potentially days of frustrating work,

00:08:45.840 --> 00:08:48.960
even for an expert. So for this test, they basically

00:08:48.960 --> 00:08:50.720
threw a Hail Mary. They took the description,

00:08:50.940 --> 00:08:53.340
chopped it down ruthlessly to fit exactly 1,000

00:08:53.340 --> 00:08:55.740
characters, and fed the whole monster prompt

00:08:55.740 --> 00:08:58.799
to the AI builder at once. The ultimate one-shot

00:08:58.799 --> 00:09:01.600
test. Oh, wow. Okay, pushing the limits. So what

00:09:01.600 --> 00:09:03.580
happened? Did it just crash and burn? This is

00:09:03.580 --> 00:09:06.679
the part that, whoa, seriously, imagine scaling

00:09:06.679 --> 00:09:09.840
this kind of capability. The n8n AI built the

00:09:09.840 --> 00:09:13.019
entire massive 25-node workflow, the structure,

00:09:13.179 --> 00:09:16.110
the triggers, the logic, the connections. In

00:09:16.110 --> 00:09:18.549
about 90 seconds. That's just an incredible display

00:09:18.549 --> 00:09:20.570
of technical leverage right there. 90 seconds

00:09:20.570 --> 00:09:23.389
for a 25-step structure. That is genuinely hard

00:09:23.389 --> 00:09:25.669
to get your head around. But let's be clear on

00:09:25.669 --> 00:09:27.429
what it didn't do. The sensitive stuff, right?

00:09:27.490 --> 00:09:30.549
Like actually getting API keys or logging into

00:09:30.549 --> 00:09:32.750
external accounts. Correct. It couldn't do that,

00:09:32.830 --> 00:09:34.870
which makes sense security -wise, but it built

00:09:34.870 --> 00:09:38.610
maybe 80, 90% of the whole thing. And it even

00:09:38.610 --> 00:09:40.889
gave specific instructions on what manual steps

00:09:40.889 --> 00:09:42.830
were left. Like, okay, now you need to go here

00:09:42.830 --> 00:09:46.049
and authenticate this account. It perfectly highlights

00:09:46.049 --> 00:09:48.929
the emerging model. Humans provide the strategy,

00:09:49.090 --> 00:09:51.629
the taste, the critical security bits like authentication.

00:09:52.129 --> 00:09:55.529
The AI provides the raw speed and does the heavy

00:09:55.529 --> 00:09:58.070
lifting of execution. And that speed, that ease,

00:09:58.389 --> 00:10:02.360
it does more than just save time. It changes

00:10:02.360 --> 00:10:05.019
the economics, doesn't it? Specifically, the

00:10:05.019 --> 00:10:07.460
threshold for when automation even makes sense.

00:10:07.700 --> 00:10:09.860
Oh, fundamentally. Think about the old way you'd

00:10:09.860 --> 00:10:11.700
decide whether to automate something. You'd think,

00:10:11.779 --> 00:10:14.100
okay, this annoying task wastes maybe two minutes

00:10:14.100 --> 00:10:16.200
a week. But building the automation will take

00:10:16.200 --> 00:10:18.500
me two hours. Not worth it. So you just kept

00:10:18.500 --> 00:10:20.659
doing the annoying task manually. Yeah. Forever.

00:10:20.820 --> 00:10:23.000
Exactly. But the new calculation is completely

00:10:23.000 --> 00:10:26.179
different. It's, okay, this task wastes two minutes,

00:10:26.200 --> 00:10:28.379
maybe even just two minutes a month. But building

00:10:28.379 --> 00:10:30.850
the automation will take me... Two minutes. Yeah,

00:10:30.889 --> 00:10:33.590
absolutely worth it. The economic barrier to

00:10:33.590 --> 00:10:35.750
automating just plummeted. It basically went

00:10:35.750 --> 00:10:37.950
to zero. It's now worthwhile to automate even

00:10:37.950 --> 00:10:40.730
the tiniest little time-wasting inefficiencies.
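That cost-benefit flip is easy to make concrete. A tiny sketch of the calculation; the one-year horizon is an assumption for illustration, since the discussion only contrasts build time against minutes saved:

```python
# Back-of-the-envelope version of the automation break-even calculation:
# automating pays off when the build time is smaller than the time the
# task would otherwise waste over some horizon.

def worth_automating(build_minutes: float,
                     minutes_saved_per_week: float,
                     horizon_weeks: int = 52) -> bool:
    """True if the build pays for itself within the horizon."""
    return build_minutes < minutes_saved_per_week * horizon_weeks

# Old way: a 2-hour build for a task wasting 2 minutes a week
old = worth_automating(build_minutes=120, minutes_saved_per_week=2)
# New way: a 2-minute build for the same task
new = worth_automating(build_minutes=2, minutes_saved_per_week=2)
```

With a one-year horizon, the 2-hour build loses (120 minutes of building versus 104 minutes saved) while the 2-minute build wins overwhelmingly, which is exactly the shift being described.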

00:10:41.070 --> 00:10:44.190
That shift in cost-benefit seems huge. It creates

00:10:44.190 --> 00:10:46.509
a potential gap, doesn't it? Definitely. So what's

00:10:46.509 --> 00:10:48.629
the competitive implication for professionals

00:10:48.629 --> 00:10:51.690
who decide not to embrace this? They will struggle

00:10:51.690 --> 00:10:54.610
to compete with those who deploy an army of fast,

00:10:54.850 --> 00:10:58.659
always -on AI agents daily. Hashtags, tag, tag,

00:10:58.679 --> 00:11:01.379
So if we boil it all down, the

00:11:01.379 --> 00:11:03.240
core idea is pretty straightforward. Building

00:11:03.240 --> 00:11:05.940
these AI agents, these automations, just got

00:11:05.940 --> 00:11:08.159
something like 10 times easier. And that's massively

00:11:08.159 --> 00:11:10.679
speeding up the arrival of this agent era we

00:11:10.679 --> 00:11:12.940
keep hearing about. It puts real power into the

00:11:12.940 --> 00:11:16.019
hands of, you know, solo entrepreneurs, non-technical

00:11:16.019 --> 00:11:18.039
folks, people who were previously locked out.

00:11:18.200 --> 00:11:21.580
Yeah, the winning formula seems clear. It's human

00:11:21.580 --> 00:11:25.779
expertise. The strategy, the judgment, the final

00:11:25.779 --> 00:11:28.460
checks combined with AI execution for that incredible

00:11:28.460 --> 00:11:31.700
speed and scale, that combination just multiplies

00:11:31.700 --> 00:11:34.720
value. Technology is the ultimate lever, really.

00:11:35.200 --> 00:11:37.879
So if you're listening

00:11:37.879 --> 00:11:40.220
and thinking, OK, I want this leverage, the practical

00:11:40.220 --> 00:11:42.500
advice from the sources seems to be don't overthink

00:11:42.500 --> 00:11:44.539
it. Just start. The biggest hurdle is inertia.

00:11:45.070 --> 00:11:47.009
Yeah, focus on building those first one or two

00:11:47.009 --> 00:11:48.850
agents. Just push through that initial learning

00:11:48.850 --> 00:11:51.190
curve. Once you do that, you've kind of conquered

00:11:51.190 --> 00:11:53.470
90% of the friction. Keep it simple at first.

00:11:53.649 --> 00:11:55.590
Use that node-by-node method if things get

00:11:55.590 --> 00:11:58.210
complex. And always, always remember, your job

00:11:58.210 --> 00:12:01.149
is that crucial last 10%. Reviewing, refining,

00:12:01.210 --> 00:12:03.370
making sure the AI actually did what you intended.

00:12:03.610 --> 00:12:05.409
Okay, let's leave you with a final thought, pulled

00:12:05.409 --> 00:12:07.190
from the conclusion of the material we looked

00:12:07.190 --> 00:12:10.070
at. The gap between people who embrace building

00:12:10.070 --> 00:12:12.690
and deploying these kinds of fast AI agents and

00:12:12.690 --> 00:12:15.240
those who don't. That gap is likely going to

00:12:15.240 --> 00:12:18.620
widen fast, possibly becoming impossible to bridge.

00:12:19.139 --> 00:12:21.039
So the question isn't if you should automate

00:12:21.039 --> 00:12:23.039
that two-minute inefficiency, but how quickly

00:12:23.039 --> 00:12:25.259
you're going to decide to do it. [Outro music]
