WEBVTT

00:00:00.000 --> 00:00:02.660
When you dedicate time to building a complex

00:00:02.660 --> 00:00:06.519
custom AI tool, say a specialized data scraper,

00:00:06.660 --> 00:00:09.820
it is, well, it's going to have flaws. It always

00:00:09.820 --> 00:00:12.820
breaks down. But the real shift here is how we

00:00:12.820 --> 00:00:15.519
approach failure. The dilemma is how do you fix

00:00:15.519 --> 00:00:19.079
messy data, map outputs and debug logic without

00:00:19.079 --> 00:00:21.960
writing a single line of code? And the core insight

00:00:21.960 --> 00:00:24.079
from the sources we looked at is that debugging

00:00:24.079 --> 00:00:27.460
is now simply a matter of conversation. You just

00:00:27.460 --> 00:00:29.760
describe the error to the AI, and it updates

00:00:29.760 --> 00:00:32.640
its own logic. Welcome to the Deep Dive. If part

00:00:32.640 --> 00:00:35.039
one was about getting a working prototype, this

00:00:35.039 --> 00:00:37.179
is part two. We're taking that idea and turning

00:00:37.179 --> 00:00:39.560
it into a polished, production-ready tool that

00:00:39.560 --> 00:00:41.600
can handle real-world complexity. Our mission

00:00:41.600 --> 00:00:44.539
today is to really analyze the five key troubleshooting

00:00:44.539 --> 00:00:46.320
steps from the source material. We're going to

00:00:46.320 --> 00:00:48.119
look at how conversational prompts in AI Studio,

00:00:48.340 --> 00:00:51.000
plus some workflow refinement in n8n, can eliminate

00:00:51.000 --> 00:00:53.079
traditional debugging. Yeah, moving from prototype

00:00:53.079 --> 00:00:55.600
to reliable automation. Let's get into it. So

00:00:55.600 --> 00:00:57.299
in the first build, we confirmed the raw signal

00:00:57.299 --> 00:01:00.140
was received by n8n. That part worked. But the

00:01:00.140 --> 00:01:03.619
data itself is messy. It's just a big block of

00:01:03.619 --> 00:01:05.700
text. Right. And now the challenge is organization.

00:01:06.599 --> 00:01:08.980
How do you make sure specific data points land

00:01:08.980 --> 00:01:11.180
in the right spreadsheet columns? This is the

00:01:11.180 --> 00:01:13.739
first big hurdle. It is. This is challenge one.

00:01:14.260 --> 00:01:17.519
Mapping the webhook data to your sheets. When

00:01:17.519 --> 00:01:22.239
that raw payload arrives, it's just this undifferentiated

00:01:22.239 --> 00:01:25.489
mass of information. We need to define exactly

00:01:25.489 --> 00:01:28.290
which piece goes where. So if I have a column

00:01:28.290 --> 00:01:30.469
named date in my Google Sheet, I need to tell

00:01:30.469 --> 00:01:32.829
the system precisely which incoming field should

00:01:32.829 --> 00:01:35.049
map to it. Precisely. You're inside the n8n

00:01:35.049 --> 00:01:37.930
Google Sheets node. And on the left, you see

00:01:37.930 --> 00:01:40.370
all the raw input fields, maybe 20 of them, like

00:01:40.370 --> 00:01:42.969
generated date or company name. And on the right

00:01:42.969 --> 00:01:45.489
are your spreadsheet columns. That sounds incredibly

00:01:45.489 --> 00:01:48.109
tedious if there are 20 or more fields. Is it

00:01:48.109 --> 00:01:50.170
a drag and drop thing for every single one? It

00:01:50.170 --> 00:01:52.329
is. It's a one-time setup that needs that manual

00:01:52.329 --> 00:01:55.159
alignment. You drag, say, search city from the

00:01:55.159 --> 00:01:57.719
input, and you map it to the city column on the

00:01:57.719 --> 00:02:00.299
output side. So if the initial data is flowing

00:02:00.299 --> 00:02:03.200
correctly, what's that critical step to ensure

00:02:03.200 --> 00:02:05.640
it lands in the right place? It's all about field

00:02:05.640 --> 00:02:09.080
mapping. That ensures the raw inputs correctly

00:02:09.080 --> 00:02:11.939
align with the spreadsheet columns. Okay, so

00:02:11.939 --> 00:02:13.780
that alignment is the first step toward production.

00:02:13.960 --> 00:02:16.539
You're forcing the unstructured AI output to

00:02:16.539 --> 00:02:18.639
fit into your structured database. You have to

00:02:18.639 --> 00:02:20.479
make the data talk to the database. Exactly.
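
NOTE
The mapping step described here can be sketched in a few lines of Python. This is a hypothetical illustration of what the n8n Google Sheets node's drag-and-drop mapping accomplishes; the field and column names are assumptions, not the actual schema from the build.

```python
# Hypothetical sketch of the field mapping: each raw webhook field
# on the left is assigned to exactly one spreadsheet column on the right.
FIELD_MAP = {
    "generated_date": "Date",
    "company_name": "Company",
    "search_city": "City",
}

def map_lead(raw: dict) -> dict:
    # Keep only the mapped fields, renamed to their column headers.
    return {column: raw.get(field, "") for field, column in FIELD_MAP.items()}

row = map_lead({
    "generated_date": "2024-05-01",
    "company_name": "Acme Co",
    "search_city": "Berlin",
    "unmapped_field": "ignored",  # fields without a mapping never reach the sheet
})
```

In n8n itself this is done visually, dragging inputs onto columns, rather than written as code.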

00:02:20.500 --> 00:02:21.939
This is where things get really interesting.

00:02:22.699 --> 00:02:25.759
Once that mapping is done, we immediately hit

00:02:25.759 --> 00:02:29.379
a failure. Challenge two, missing social media

00:02:29.379 --> 00:02:32.680
data. The AI just didn't do a perfect scrape.

00:02:32.879 --> 00:02:36.080
LinkedIn and Facebook fields were empty. A perfect

00:02:36.080 --> 00:02:38.280
example of what developers call prompt drift.

00:02:38.860 --> 00:02:41.419
Prompt drift. Yeah, it's when the AI kind of

00:02:41.419 --> 00:02:44.520
forgets or deviates from a critical instruction

00:02:44.520 --> 00:02:46.900
that was buried in that original setup. And I'll

00:02:46.900 --> 00:02:48.879
admit, I still wrestle with prompt drift myself.

00:02:49.060 --> 00:02:51.860
I often forget the small details in my own instructions

00:02:51.860 --> 00:02:54.060
that can throw an entire search off. We all do.

00:02:54.199 --> 00:02:57.020
So how do we fix it? Not with code. You go right

00:02:57.020 --> 00:02:59.719
back to the AI Studio chat interface. The user

00:02:59.719 --> 00:03:01.719
just described the problem in plain English.

00:03:01.939 --> 00:03:05.099
What was the actual fix prompt? It was conversational,

00:03:05.120 --> 00:03:08.520
but really direct. Something like, great. I love

00:03:08.520 --> 00:03:10.800
it. But we don't have all the data. We are missing

00:03:10.800 --> 00:03:13.500
social media. Please make the AI agent better.

00:03:13.580 --> 00:03:15.699
Also use Google search and Google Maps properly.

00:03:15.919 --> 00:03:19.270
But wait a minute. Telling an AI to... use Google

00:03:19.270 --> 00:03:22.129
Maps properly, isn't that just replacing complex

00:03:22.129 --> 00:03:25.330
code with, you know, a different kind of vague

00:03:25.330 --> 00:03:27.750
instruction? That's the key difference. You aren't

00:03:27.750 --> 00:03:30.389
coding the search path yourself. You're asking

00:03:30.389 --> 00:03:33.849
the AI to refine its own internal logic. The

00:03:33.849 --> 00:03:36.849
AI translates that human instruction into a more

00:03:36.849 --> 00:03:39.710
accurate search sequence. And it worked. It instantly

00:03:39.710 --> 00:03:41.750
started populating those missing fields. OK,

00:03:41.830 --> 00:03:44.430
that is powerful. Now, challenge three, the timing

00:03:44.430 --> 00:03:47.270
issue. The app was sending a webhook call for

00:03:47.270 --> 00:03:50.370
every single lead, which is just inefficient,

00:03:50.650 --> 00:03:52.409
terrible for performance. Oh, definitely bad

00:03:52.409 --> 00:03:55.469
practice. And the fix, again, was a simple conversational

00:03:55.469 --> 00:03:58.129
prompt. Help me make a fix. When the data will

00:03:58.129 --> 00:04:00.250
be sent to the webhook, I want all of it to be

00:04:00.250 --> 00:04:03.050
sent in one single batch, not one by one. And

00:04:03.050 --> 00:04:05.569
just that one sentence changed the entire structure

00:04:05.569 --> 00:04:09.310
of the AI's output. Yep. The AI understood, and

00:04:09.310 --> 00:04:11.750
it restructured its output into a single clean

00:04:11.750 --> 00:04:14.599
payload. It was an array containing all the scraped

00:04:14.599 --> 00:04:17.319
leads delivered at once. So how do you efficiently

00:04:17.319 --> 00:04:20.579
correct a faulty data retrieval process? You

00:04:20.579 --> 00:04:23.540
just refine the prompt in chat. That compels

00:04:23.540 --> 00:04:25.980
the AI to improve its internal search logic.
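
NOTE
The batching fix described above changes the payload shape roughly like this. A sketch only; the key names are assumptions, not the app's actual output schema.

```python
# Before the fix: one webhook POST per lead (inefficient).
# After the fix: every scraped lead delivered at once in one JSON array.
leads = [
    {"company": "Acme Co", "city": "Berlin"},
    {"company": "Globex", "city": "Munich"},
]

def build_batched_payload(scraped: list[dict]) -> dict:
    # One clean payload means one webhook call instead of one call per lead.
    return {"leads": scraped, "count": len(scraped)}

payload = build_batched_payload(leads)
```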

00:04:26.220 --> 00:04:28.540
So now we have this high-quality batch of data

00:04:28.540 --> 00:04:30.540
arriving cleanly, but that creates challenge

00:04:30.540 --> 00:04:33.660
four. The data is now one big array. How does

00:04:33.660 --> 00:04:36.660
n8n split that up into individual rows for Google

00:04:36.660 --> 00:04:38.879
Sheets? Right. The sheet needs five separate

00:04:38.879 --> 00:04:41.480
records, not one giant chunk of text. Exactly.
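
NOTE
What that splitting step has to do can be sketched as follows. The payload shape is an assumption for illustration.

```python
# Sketch of split-out behavior: one bundled batch in, one item per lead out,
# so a write step that handles one row at a time can run once per lead.
batch = {"leads": [
    {"company": "Acme Co"},
    {"company": "Globex"},
    {"company": "Initech"},
]}

def split_out(payload: dict, field: str = "leads") -> list[dict]:
    # Each element of the array becomes its own workflow item.
    return list(payload[field])

rows = split_out(batch)  # three separate items -> three sheet rows
```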

00:04:41.480 --> 00:04:43.540
This is where you restructure the workflow. The

00:04:43.540 --> 00:04:46.620
solution is adding a dedicated step, a split

00:04:46.620 --> 00:04:49.310
-out node, right after the webhook in n8n.

00:04:49.490 --> 00:04:52.110
A split-out node. I love how literal that is.

00:04:52.209 --> 00:04:54.850
It is. It takes that bundle batch of, say, five

00:04:54.850 --> 00:04:57.610
leads and splits them into five distinct sequential

00:04:57.610 --> 00:04:59.930
items. You can kind of think of it like stacking

00:04:59.930 --> 00:05:02.009
Lego blocks of data. But does adding another

00:05:02.009 --> 00:05:04.389
node slow things down? Is there a hidden cost

00:05:04.389 --> 00:05:07.189
here? It adds negligible processing time, but

00:05:07.189 --> 00:05:09.350
it's absolutely essential. The Google Sheets

00:05:09.350 --> 00:05:11.589
node can only write one row at a time, so the

00:05:11.589 --> 00:05:13.550
split-out node makes sure it runs five times,

00:05:13.550 --> 00:05:16.110
once for each lead. It makes the whole thing

00:05:16.110 --> 00:05:19.319
modular. Okay, so moving to challenge five, adding

00:05:19.319 --> 00:05:22.459
advanced features. Can we do more than just fix

00:05:22.459 --> 00:05:25.300
bugs with conversation? Absolutely. The source

00:05:25.300 --> 00:05:27.740
shows adding a lead count selector. The prompt

00:05:27.740 --> 00:05:30.860
was just add another form input, lead count.

00:05:31.040 --> 00:05:33.759
I want to be able to select between 10, 20, 30,

00:05:33.860 --> 00:05:37.160
up to 100 leads to be scraped. So the user just

00:05:37.160 --> 00:05:39.360
described the UI they wanted and the interface

00:05:39.360 --> 00:05:41.759
and the logic updated on its own. Instantly.

00:05:42.120 --> 00:05:44.199
AI Studio added the drop-down menu with

00:05:44.199 --> 00:05:46.779
the exact options specified. It shows you can

00:05:46.779 --> 00:05:48.959
iterate on front-end design, features, and back

00:05:48.959 --> 00:05:51.839
-end logic all conversationally. So how do you

00:05:51.839 --> 00:05:54.339
handle a bundled payload of multiple leads when

00:05:54.339 --> 00:05:56.600
updating a spreadsheet? A split-out node is

00:05:56.600 --> 00:05:59.060
required to process the batch into separate actionable

00:05:59.060 --> 00:06:02.019
items. This all defines a really powerful new

00:06:02.019 --> 00:06:05.000
iteration pattern, doesn't it? It's test, identify,

00:06:05.339 --> 00:06:07.600
describe the fix in plain English, the AI updates,

00:06:07.899 --> 00:06:09.939
and then you test again. That conversational

00:06:09.939 --> 00:06:12.790
refinement is the central idea. It replaces that

00:06:12.790 --> 00:06:15.829
long, painful debug cycle with just immediate

00:06:15.829 --> 00:06:19.670
iteration. And this AI Studio plus n8n stack

00:06:19.670 --> 00:06:22.870
is a universal framework. The lead scraper is

00:06:22.870 --> 00:06:24.569
just one example. Let's talk about those other

00:06:24.569 --> 00:06:26.910
applications. If I'm not scraping leads, what

00:06:26.910 --> 00:06:28.810
else could I build with this? Okay, think about

00:06:28.810 --> 00:06:31.470
a content creation suite. The AI app generates

00:06:31.470 --> 00:06:34.850
a draft blog post, maybe three social media snippets,

00:06:34.870 --> 00:06:38.110
and a prompt for a graphic. And n8n handles the

00:06:38.110 --> 00:06:41.470
distribution. Exactly. n8n posts the draft to

00:06:41.470 --> 00:06:43.990
WordPress, schedules the snippets in Buffer,

00:06:43.990 --> 00:06:46.910
and saves the assets to Google Drive. The AI

00:06:46.910 --> 00:06:49.870
creates. n8n distributes. What about for internal

00:06:49.870 --> 00:06:51.810
stuff like customer service? You could build

00:06:51.810 --> 00:06:54.370
a customer support analyzer. The AI app monitors

00:06:54.370 --> 00:06:56.689
tickets, extracts the sentiment, suggests a reply.

00:06:57.129 --> 00:07:00.930
Then the n8n side updates your CRM. If a ticket

00:07:00.930 --> 00:07:04.160
says urgent, n8n sends a Slack alert and changes

00:07:04.160 --> 00:07:06.519
its status to critical. It connects the intelligence

00:07:06.519 --> 00:07:09.240
to the actual operations? It does. So is this

00:07:09.240 --> 00:07:11.560
iterative process limited to data scraping alone?

00:07:11.899 --> 00:07:14.040
It sounds like no. No, this pattern functions

00:07:14.040 --> 00:07:16.180
as a universal framework for building any custom

00:07:16.180 --> 00:07:18.920
automation tool. Now that the app works, where

00:07:18.920 --> 00:07:21.680
does it live? The sources mention some surprisingly

00:07:21.680 --> 00:07:24.540
accessible deployment options. You've got three

00:07:24.540 --> 00:07:27.560
clear paths. First is just personal use, keeping

00:07:27.560 --> 00:07:30.399
it in your AI studio account. Second is team

00:07:30.399 --> 00:07:33.839
sharing. You just share the app's URL with your

00:07:33.839 --> 00:07:35.839
colleagues. And the third option is properly

00:07:35.839 --> 00:07:38.800
public. Yeah, public deployment. This is where

00:07:38.800 --> 00:07:41.019
you deploy to Google Cloud with a custom domain.

00:07:41.379 --> 00:07:43.339
People are actually launching these as standalone

00:07:43.339 --> 00:07:45.899
web apps, even getting them to rank in search

00:07:45.899 --> 00:07:48.319
engines without ever hiring a developer for the

00:07:48.319 --> 00:07:50.439
backend. Okay, let's get into some pro tips.

00:07:50.759 --> 00:07:52.920
Even if we're just chatting with an AI, what's

00:07:52.920 --> 00:07:55.600
the one habit we have to maintain? Prompt engineering

00:07:55.600 --> 00:07:58.959
still matters. The AI is a powerful tool, but

00:07:58.959 --> 00:08:01.220
you have to be a specific director. Don't just

00:08:01.220 --> 00:08:03.759
say send the data. Say send the data as a single

00:08:03.759 --> 00:08:06.680
JSON array. And what about testing? Test incrementally.

00:08:07.079 --> 00:08:09.500
Do not write a five -page prompt trying to define

00:08:09.500 --> 00:08:11.279
everything at once. That's just guaranteed to

00:08:11.279 --> 00:08:14.319
fail. Start simple. Verify the input form works,

00:08:14.399 --> 00:08:16.980
then add the webhook. Test that. Build and test

00:08:16.980 --> 00:08:19.120
one feature before adding the next. The sources

00:08:19.120 --> 00:08:21.600
also mention using the chat for small refinements.

00:08:21.860 --> 00:08:24.600
Absolutely. The chat is for quick UI tweaks like

00:08:24.600 --> 00:08:27.279
make that button blue. Yeah. And your safety

00:08:27.279 --> 00:08:31.600
net, tip four, is the n8n executions panel. Can

00:08:31.600 --> 00:08:33.899
you explain what that panel actually shows us?

00:08:34.080 --> 00:08:36.580
It is your debugging truth. It's where you can

00:08:36.580 --> 00:08:39.100
visually see the raw data moving between each

00:08:39.100 --> 00:08:41.720
step so you can pinpoint the exact spot where

00:08:41.720 --> 00:08:44.120
things failed. So if I want to avoid massive

00:08:44.120 --> 00:08:46.720
failures, what is the best building technique?

00:08:47.370 --> 00:08:49.870
Build and test incrementally, verifying one small

00:08:49.870 --> 00:08:52.129
feature before adding the next. We have to address

00:08:52.129 --> 00:08:54.710
the cost. Building custom tools is usually expensive.

00:08:55.090 --> 00:08:58.210
But this whole picture seems surprisingly affordable.

00:08:58.649 --> 00:09:01.070
The free tiers are phenomenal. Google AI Studio

00:09:01.070 --> 00:09:04.529
gives you a generous free API quota. And n8n Cloud

00:09:04.529 --> 00:09:07.629
offers, get this, 20,000 workflow executions

00:09:07.629 --> 00:09:10.710
every month for free. 20,000 executions a month.

00:09:10.830 --> 00:09:12.950
That's a staggering amount of automation for

00:09:12.950 --> 00:09:16.879
$0. Whoa. I mean, imagine scaling a system

00:09:16.879 --> 00:09:20.019
entirely on free tiers for all your initial testing

00:09:20.019 --> 00:09:23.320
and even moderate personal use. It really democratizes

00:09:23.320 --> 00:09:25.460
this for everyone. So when do you actually start

00:09:25.460 --> 00:09:28.990
paying? When you hit high volume. Exceeding Google's

00:09:28.990 --> 00:09:32.570
API quotas or needing more than those 20,000

00:09:32.570 --> 00:09:36.590
executions on n8n, their plans start around $20

00:09:36.590 --> 00:09:39.370
a month. And of course, hosting on Google Cloud

00:09:39.370 --> 00:09:41.570
if you deploy publicly. Let's talk about the

00:09:41.570 --> 00:09:43.950
pitfalls. Besides over-prompting, what other

00:09:43.950 --> 00:09:46.309
traps did the sources highlight? Skipping the

00:09:46.309 --> 00:09:48.649
data test. That's a fatal error. You have to

00:09:48.649 --> 00:09:50.490
test that webhook connection first to make sure

00:09:50.490 --> 00:09:53.009
data is actually arriving. And another one is

00:09:53.009 --> 00:09:56.429
ignoring error handling. Tell the AI what to

00:09:56.429 --> 00:09:59.049
show if a search fails, like no leads found.

00:09:59.269 --> 00:10:01.509
Try different keywords. And there was a practical

00:10:01.509 --> 00:10:03.830
tip about the destination, Google Sheets itself.

00:10:04.169 --> 00:10:06.970
Yes. Keep your Google Sheets simple. Data connectors

00:10:06.970 --> 00:10:09.570
in general get really confused by complex formatting,

00:10:09.809 --> 00:10:12.870
merged cells, frozen rows. Clean data needs a

00:10:12.870 --> 00:10:15.250
clean destination. So besides being specific

00:10:15.250 --> 00:10:18.190
in prompts, what helps ensure n8n runs smoothly?

00:10:18.490 --> 00:10:21.149
Keep your Google Sheets clean. Avoid complex

00:10:21.149 --> 00:10:23.309
formatting that could confuse the data connector.
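
NOTE
The error-handling tip from a moment ago, telling the AI what to show when a search fails, amounts to logic like this. A sketch with an assumed message string, not code from the actual build.

```python
def format_results(leads: list[dict]) -> str:
    # Graceful fallback instead of a blank screen when the scrape finds nothing.
    if not leads:
        return "No leads found. Try different keywords."
    return f"Found {len(leads)} lead(s)."
```

In practice you would describe this behavior to the AI in chat rather than write it yourself.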

00:10:23.549 --> 00:10:25.629
So we've established the live build wasn't perfect.

00:10:25.870 --> 00:10:28.129
It had stumbles, data mapping errors, timing

00:10:28.129 --> 00:10:32.690
issues. And that's the best news. The key takeaway

00:10:32.690 --> 00:10:35.389
is that every single one of those problems was

00:10:35.389 --> 00:10:38.250
solved purely through conversation. No debugging

00:10:38.250 --> 00:10:41.350
Python, no hunting for syntax errors. The power

00:10:41.350 --> 00:10:43.860
is in that loop: see the problem, describe the

00:10:43.860 --> 00:10:45.799
fix in English, and the AI handles the rest.

00:10:46.019 --> 00:10:48.820
It's just dramatically faster. This stack really

00:10:48.820 --> 00:10:51.139
is a fundamentally different, more powerful way

00:10:51.139 --> 00:10:53.980
to build. You don't need expensive SaaS subscriptions

00:10:53.980 --> 00:10:56.500
or to compromise on features anymore. You just

00:10:56.500 --> 00:10:58.519
need a clear idea and the language to define

00:10:58.519 --> 00:11:01.600
it. A functional, powerful lead scraper was built

00:11:01.600 --> 00:11:04.639
live, including all these fixes in under 30 minutes.

00:11:04.879 --> 00:11:07.480
The true power here is that conversational iteration.

00:11:08.190 --> 00:11:11.250
It transforms complex bugs into simple chat requests,

00:11:11.450 --> 00:11:13.549
and it makes sophisticated automation accessible

00:11:13.549 --> 00:11:16.129
to anyone. So the sources leave us with a thought.

00:11:16.289 --> 00:11:18.350
What will you build in your next 28 minutes?

00:11:18.730 --> 00:11:20.809
Considering this flexibility, a content suite,

00:11:21.070 --> 00:11:23.509
a support analyzer, what is the most complex

00:11:23.509 --> 00:11:25.769
business workflow that could realistically be

00:11:25.769 --> 00:11:28.690
managed entirely by just talking to an AI agent?

00:11:28.970 --> 00:11:30.990
Yeah, think about that one problem you've been

00:11:30.990 --> 00:11:33.409
putting off because it seemed too complex to

00:11:33.409 --> 00:11:36.710
code. Maybe now it's just a conversation away

00:11:36.710 --> 00:11:39.440
from being solved. Thanks for diving deep with

00:11:39.440 --> 00:11:40.500
us. We'll see you next time.
