WEBVTT

00:00:00.000 --> 00:00:02.720
Welcome, curious minds, to another deep dive.

00:00:03.299 --> 00:00:04.860
Have you ever found yourself using a powerful

00:00:04.860 --> 00:00:07.839
tool, feeling pretty productive, but you get

00:00:07.839 --> 00:00:09.740
that nagging thought, is there like a secret

00:00:09.740 --> 00:00:12.560
advanced mode I'm missing? Something that could

00:00:12.560 --> 00:00:16.260
unlock even more. Today, we're basically on a

00:00:16.260 --> 00:00:18.820
mission to uncover exactly that kind of hidden

00:00:18.820 --> 00:00:21.410
power within n8n, you know, the incredible workflow

00:00:21.410 --> 00:00:23.489
automation tool. You're probably already using

00:00:23.489 --> 00:00:25.449
their AI nodes for all sorts of things, sentiment

00:00:25.449 --> 00:00:28.010
analysis, content generation, finding them indispensable,

00:00:28.210 --> 00:00:29.910
right? But what if I told you there's a hidden

00:00:29.910 --> 00:00:32.509
gem, maybe just a single node, that can utterly

00:00:32.509 --> 00:00:35.270
transform how you approach AI automation? You're

00:00:35.270 --> 00:00:38.590
spot on. Many n8n users, they know nodes,

00:00:38.950 --> 00:00:41.469
like... the standard AI agent. And yeah, they're

00:00:41.469 --> 00:00:43.710
great for getting started quickly. But what often

00:00:43.710 --> 00:00:46.509
goes unnoticed, kind of behind the scenes, is

00:00:46.509 --> 00:00:49.170
that these are often simplified interfaces. They're

00:00:49.170 --> 00:00:52.130
built on top of a much more robust underlying

00:00:52.130 --> 00:00:56.310
technology: the LangChain framework. So our

00:00:56.310 --> 00:00:58.189
deep dive today isn't just about finding this

00:00:58.189 --> 00:01:01.609
powerful LangChain Code node within n8n. It's

00:01:01.609 --> 00:01:03.509
about understanding why it's there, what makes

00:01:03.509 --> 00:01:06.290
it so incredibly effective, and crucially, how

00:01:06.290 --> 00:01:08.609
it can give you a really significant competitive

00:01:08.609 --> 00:01:11.049
advantage. We're talking about building highly

00:01:11.049 --> 00:01:13.950
intelligent, precisely tailored AI agents that

00:01:13.950 --> 00:01:16.209
move far beyond the limits of those pre-built

00:01:16.209 --> 00:01:18.489
solutions. This is where the real customization

00:01:18.489 --> 00:01:20.930
starts. Okay, let's untack this then. So we're

00:01:20.930 --> 00:01:22.969
talking about a hidden powerful node built on

00:01:22.969 --> 00:01:24.819
LangChain. Before we get to the node itself,

00:01:24.859 --> 00:01:27.060
maybe we should back up. What exactly is LangChain?

00:01:27.140 --> 00:01:29.040
Why should you care about this framework? Right.

00:01:29.359 --> 00:01:31.680
Well, LangChain isn't just another tool you download.

00:01:32.000 --> 00:01:34.959
It's more like a comprehensive framework. It's

00:01:34.959 --> 00:01:37.640
engineered specifically for developing applications

00:01:37.640 --> 00:01:40.819
powered by large language models, LLMs. Think

00:01:40.819 --> 00:01:43.180
of it this way. Standard LLMs can think, right?

00:01:43.519 --> 00:01:45.939
But LangChain gives them like hands and eyes.

00:01:46.140 --> 00:01:48.420
It's the framework that lets these powerful AIs

00:01:48.420 --> 00:01:51.379
not just generate text, but actually act in the

00:01:51.379 --> 00:01:54.560
world, querying databases, making web searches,

00:01:54.840 --> 00:01:57.659
even controlling IoT devices, potentially. You

00:01:57.659 --> 00:01:59.840
know, leading tech companies, Replit, Klarna,

00:01:59.900 --> 00:02:01.560
they use it to build their own sophisticated

00:02:01.560 --> 00:02:04.439
AI assistants and co-pilots. So, it's serious

00:02:04.439 --> 00:02:06.900
stuff. Ah, okay. So, it's handling all the complex

00:02:06.900 --> 00:02:09.020
plumbing, letting us focus purely on the intelligence

00:02:09.020 --> 00:02:12.550
side. Sounds like a dream, really. Does that

00:02:12.550 --> 00:02:14.909
abstraction ever cause issues, or is it mostly

00:02:14.909 --> 00:02:17.789
just flexibility? It's overwhelmingly about flexibility,

00:02:17.909 --> 00:02:19.969
yeah. And that's what's really fascinating here.

00:02:20.189 --> 00:02:22.229
LangChain is incredibly versatile. You can work

00:02:22.229 --> 00:02:23.909
with loads of different language models. You

00:02:23.909 --> 00:02:26.789
can switch them, combine models from OpenAI,

00:02:27.150 --> 00:02:30.169
Anthropic, Google AI, you name it, all within

00:02:30.169 --> 00:02:33.830
the same workflow. Plus, it provides really sophisticated

00:02:33.830 --> 00:02:36.909
memory management. That's absolutely key for

00:02:36.909 --> 00:02:39.710
agents to maintain context across conversations,

00:02:40.009 --> 00:02:42.729
build seamless natural dialogues. Right, so it

00:02:42.729 --> 00:02:46.229
remembers things. Exactly. Imagine an agent that

00:02:46.229 --> 00:02:49.449
actually recalls your past interactions. And

00:02:49.449 --> 00:02:51.669
agents can also execute custom code snippets,

00:02:52.069 --> 00:02:54.669
follow complex conditional logic, make autonomous

00:02:54.669 --> 00:02:56.969
decisions. And for those really tricky scenarios,

00:02:57.069 --> 00:03:00.069
you can build these chains and graphs of logic,

00:03:00.469 --> 00:03:03.379
creating incredibly complex multi-stage workflows

00:03:03.379 --> 00:03:06.080
where, you know, the output of one step becomes

00:03:06.080 --> 00:03:08.240
the input for the next, like a domino effect

00:03:08.240 --> 00:03:10.780
of intelligence. That's a huge difference. Are
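
The "domino effect" just described can be sketched in plain JavaScript. This is a toy illustration, not LangChain's actual API: ordinary functions stand in for the LLM calls and parsers a real chain would compose, and all the names are made up.

```javascript
// A chain: each step's output becomes the next step's input.
const runChain = (steps, input) =>
  steps.reduce((acc, step) => step(acc), input);

// Hypothetical stand-ins for chain components (prompt parsing,
// generation, post-processing).
const extractTopic = (text) => text.replace("Tell me about ", "");
const draftSummary = (topic) => `Summary of ${topic}`;
const addDisclaimer = (summary) => `${summary} (auto-generated)`;

const result = runChain(
  [extractTopic, draftSummary, addDisclaimer],
  "Tell me about LangChain"
);
// result: "Summary of LangChain (auto-generated)"
```

The point is the shape, not the functions: a real LangChain chain wires prompts, models, and output parsers into exactly this kind of sequential composition.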

00:03:10.780 --> 00:03:13.180
you saying that many of the n8n AI nodes we

00:03:13.180 --> 00:03:15.219
might already be using, the ones that look simple

00:03:15.219 --> 00:03:17.199
on the surface, they're actually just a friendly

00:03:17.199 --> 00:03:19.759
front for this super powerful LangChain framework?

00:03:20.280 --> 00:03:22.400
Exactly. Yeah, this is where it gets really interesting,

00:03:22.400 --> 00:03:24.960
I think. If you ever peek under the hood, like

00:03:24.960 --> 00:03:27.419
if you look at the underlying JSON configuration

00:03:27.419 --> 00:03:30.500
that defines how n8n workflows are built, you'll

00:03:30.500 --> 00:03:33.460
actually see the standard AI agent node identified

00:03:33.460 --> 00:03:38.759
as n8n-nodes-langchain.agent. No way. Yeah. It's
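
For reference, an exported n8n workflow's JSON contains a fragment along these lines. This is heavily abridged and the field values are illustrative; note that in recent n8n versions the type string carries the `@n8n/` package prefix:

```json
{
  "nodes": [
    {
      "name": "AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "parameters": {}
    }
  ]
}
```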

00:03:38.759 --> 00:03:41.139
like finding the blueprint, you know? It reveals

00:03:41.139 --> 00:03:43.960
the advanced engineering underneath. So when

00:03:43.960 --> 00:03:46.840
you're dragging, dropping, configuring that AI

00:03:46.840 --> 00:03:50.020
agent, you are, in fact, interacting with a simplified

00:03:50.020 --> 00:03:52.400
interface for LangChain. So it's a bit of a wizard

00:03:52.400 --> 00:03:54.300
-behind-the-curtain situation then. Yeah. And

00:03:54.300 --> 00:03:55.979
it's not just the AI agent node you're saying.

00:03:56.159 --> 00:03:58.740
No, definitely not. It holds true for a whole

00:03:58.740 --> 00:04:01.300
range of other AI nodes you might use every day,

00:04:01.719 --> 00:04:04.759
the basic LLM chain, information extractor, Q

00:04:04.759 --> 00:04:07.560
&A, sentiment analysis, summarization. Yeah,

00:04:07.680 --> 00:04:09.460
they're all essentially simplified interfaces.

00:04:09.680 --> 00:04:11.819
Now, this simplification is incredibly convenient

00:04:11.819 --> 00:04:13.979
for rapid development, getting started quickly,

00:04:14.439 --> 00:04:17.189
but it comes with a trade-off: limited customization.

00:04:17.870 --> 00:04:19.970
By finding and using the LangChain Code node

00:04:19.970 --> 00:04:22.050
directly, you can just bypass these limitations

00:04:22.050 --> 00:04:24.709
completely and harness the full unbridled power

00:04:24.709 --> 00:04:26.970
of the LangChain framework itself. Fascinating.

00:04:27.310 --> 00:04:30.829
Okay, so if this LangChain Code node is so powerful,

00:04:31.410 --> 00:04:34.089
why isn't it front and center? Is it really a

00:04:34.089 --> 00:04:38.850
secret or just less obvious? For those of us

00:04:38.850 --> 00:04:42.069
wanting to unlock this power, where do we even

00:04:42.069 --> 00:04:44.230
find it? Well, it's intentionally a bit tucked

00:04:44.230 --> 00:04:46.209
away, almost like a secret level for people who

00:04:46.209 --> 00:04:47.970
know where to look. It's not, you know, right

00:04:47.970 --> 00:04:50.490
there when you open the AI section. It guides

00:04:50.490 --> 00:04:52.949
new users to the simpler, more abstract nodes

00:04:52.949 --> 00:04:55.290
first, which makes sense. But once you know, it's

00:04:55.290 --> 00:04:56.629
pretty straightforward, you'll find it within

00:04:56.629 --> 00:04:59.029
the AI category in the nodes panel. Usually you

00:04:59.029 --> 00:05:01.050
got to scroll down to other AI nodes and then

00:05:01.050 --> 00:05:04.610
click on miscellaneous. Yeah, the placement itself

00:05:04.610 --> 00:05:07.149
kind of hints at its advanced, maybe more behind

00:05:07.149 --> 00:05:08.930
the scenes capabilities. Right. And when you

00:05:08.930 --> 00:05:11.699
pull it onto your canvas, it looks, well... Deceptively

00:05:11.699 --> 00:05:14.579
simple, just a blank box, no obvious inputs or

00:05:14.579 --> 00:05:17.759
outputs. So how do you even start setting up

00:05:17.759 --> 00:05:21.259
this powerful yet empty node? Ah, this is where

00:05:21.259 --> 00:05:23.519
its power really lies, because it forces you

00:05:23.519 --> 00:05:26.019
to be explicit about what you need. It's not

00:05:26.019 --> 00:05:29.259
pre-configured. You add inputs and outputs specifically

00:05:29.259 --> 00:05:31.220
for your agent's requirements. So you'll add

00:05:31.220 --> 00:05:33.319
a main input, right, to connect your trigger

00:05:33.319 --> 00:05:36.800
or data source, and a main output to pass results

00:05:36.800 --> 00:05:39.740
along. Crucially, though, you'll also add specific

00:05:39.740 --> 00:05:41.560
connections for language model. That's where

00:05:41.560 --> 00:05:43.899
you link up OpenAI, Anthropic, whatever you're

00:05:43.899 --> 00:05:46.660
using. Got it. And optionally, memory for context,

00:05:46.740 --> 00:05:49.420
maybe connecting Simple Memory or even PostgreSQL,

00:05:49.560 --> 00:05:52.939
and Tool, which is vital for agents needing external

00:05:52.939 --> 00:05:55.500
tools like HTTP requests or... database nodes.

00:05:55.920 --> 00:05:57.500
It's like building your agent from the ground

00:05:57.500 --> 00:06:00.100
up piece by piece. OK, so you custom wire all

00:06:00.100 --> 00:06:02.680
its connections. But then what? The real core,

00:06:02.819 --> 00:06:05.540
you said, is this add code and execute section

00:06:05.540 --> 00:06:08.360
where you write custom JavaScript. Now, for some

00:06:08.360 --> 00:06:10.639
listeners, that might sound a bit intimidating,

00:06:11.240 --> 00:06:13.519
writing code. It's true. It does grant you complete

00:06:13.519 --> 00:06:16.040
control. And yes, it requires code. But here's

00:06:16.040 --> 00:06:18.259
a fantastic trick, almost like a cheat code itself.

00:06:18.279 --> 00:06:20.980
You can use other AI models like Claude or ChatGPT

00:06:20.980 --> 00:06:23.319
to help you write the LangChain code. Yeah.

00:06:23.449 --> 00:06:25.750
Just describe what you want your agent to do,

00:06:25.750 --> 00:06:28.750
you know, in plain English, and the AI can generate

00:06:28.750 --> 00:06:31.329
the starter code for you. Then you copy paste

00:06:31.329 --> 00:06:33.850
and refine it inside the LangChain Code node.

00:06:33.970 --> 00:06:37.209
It lowers the barrier quite a bit. And this code

00:06:37.209 --> 00:06:39.730
section, this is where the magic really happens.

00:06:40.069 --> 00:06:43.129
It lets you define truly custom workflows, create

00:06:43.129 --> 00:06:46.790
complex conditional logic, decision trees, implement

00:06:46.790 --> 00:06:50.310
loops. You can even build teams of agents that

00:06:50.310 --> 00:06:52.569
collaborate on a task. Teams of agents. Yeah.

00:06:52.759 --> 00:06:55.160
And you can switch between different LLMs for

00:06:55.160 --> 00:06:57.100
different parts of a task, you know, use the

00:06:57.100 --> 00:06:59.420
best model for the job, and even create more

00:06:59.420 --> 00:07:01.300
autonomous agents that can plan their own steps

00:07:01.300 --> 00:07:03.639
and adapt. That's just incredible control. Yeah.
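
The model-switching idea can be sketched with plain JavaScript. The two "models" here are hypothetical stand-in functions, not real LLM clients; in the LangChain Code node they would be the chat models you wired into the node's inputs.

```javascript
// Hypothetical stand-ins: a cheap, fast model for classification
// and a stronger model for long-form writing.
const models = {
  fast: (prompt) => `fast:${prompt}`,
  strong: (prompt) => `strong:${prompt}`,
};

// Per-step routing: choose the model based on the kind of work.
// This is the sort of conditional logic the standard AI Agent
// node's fixed configuration can't express.
function routeStep(step) {
  const model = step.kind === "classify" ? models.fast : models.strong;
  return model(step.prompt);
}

const outputs = [
  { kind: "classify", prompt: "Is this spam?" },
  { kind: "write", prompt: "Draft the reply" },
].map(routeStep);
// outputs: ["fast:Is this spam?", "strong:Draft the reply"]
```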

00:07:03.660 --> 00:07:05.459
I mean, it's totally clear why this is a hidden

00:07:05.459 --> 00:07:08.740
gem. To really grasp the leap in power here,

00:07:09.079 --> 00:07:10.839
let's think about the components of an AI agent.

00:07:11.399 --> 00:07:14.480
There's input, language model, memory, tools,

00:07:14.720 --> 00:07:16.819
instructions, logic, and output, right? Seven

00:07:16.819 --> 00:07:19.420
things. With the standard AI agent node, if I

00:07:19.420 --> 00:07:21.199
remember right, you can configure maybe five

00:07:21.199 --> 00:07:23.319
of those, but the logic part is fixed. Am

00:07:23.319 --> 00:07:24.759
I right in thinking the LangChain Code node

00:07:24.759 --> 00:07:27.019
just blows that wide open? You're absolutely

00:07:27.019 --> 00:07:29.420
right. That fixed logic is the key limitation

00:07:29.420 --> 00:07:32.670
of the standard nodes. The LangChain Code node,

00:07:32.689 --> 00:07:35.509
however, gives you full explicit control over

00:07:35.509 --> 00:07:38.209
all seven components. It really paves the way

00:07:38.209 --> 00:07:42.250
for exponentially more sophisticated, nuanced,

00:07:42.350 --> 00:07:44.550
and powerful agents. It's the difference between

00:07:44.550 --> 00:07:47.370
using a preset template and, well, designing

00:07:47.370 --> 00:07:49.509
your own architecture from scratch. OK, so let's

00:07:49.509 --> 00:07:51.769
be really clear about the trade -offs. For someone

00:07:51.769 --> 00:07:53.889
just starting out maybe with simpler automation

00:07:53.889 --> 00:07:57.129
needs, is the standard AI agent node still a

00:07:57.129 --> 00:07:59.009
good choice? Or should everyone just jump straight

00:07:59.009 --> 00:08:01.410
to this code node? Oh, absolutely. The standard

00:08:01.410 --> 00:08:04.350
AI agent node still has its place, for sure.

00:08:04.769 --> 00:08:07.470
It's incredibly easy to use, offers super quick

00:08:07.470 --> 00:08:10.430
setup, and it's perfect for straightforward logic.

00:08:11.250 --> 00:08:13.050
If your needs are simple, or you need a fast

00:08:13.050 --> 00:08:15.269
prototype, or maybe you just prefer a no-code

00:08:15.269 --> 00:08:17.810
approach, it's definitely your go -to. But yeah,

00:08:18.209 --> 00:08:20.029
customization is limited. You're generally stuck

00:08:20.029 --> 00:08:22.629
with one model per node, and it only offers pretty

00:08:22.629 --> 00:08:25.629
basic agent autonomy. Complex workflows often

00:08:25.629 --> 00:08:27.470
mean stringing together lots of these nodes,

00:08:27.470 --> 00:08:30.480
which can get messy. The LangChain Code node, on the other

00:08:30.480 --> 00:08:32.659
hand, yes, it's more complex to set up. It takes

00:08:32.659 --> 00:08:34.419
more development time because, like we said, it

00:08:34.419 --> 00:08:37.899
involves code. But the benefits are huge: unlimited

00:08:37.899 --> 00:08:40.320
customization, orchestrating multiple models in

00:08:40.320 --> 00:08:43.220
one node, advanced agent autonomy with real planning

00:08:43.220 --> 00:08:46.100
and reflection, handling entire complex workflows

00:08:46.100 --> 00:08:49.379
in one spot, plus highly customizable error handling

00:08:49.379 --> 00:08:51.580
and debugging. This is where you get the true

00:08:51.580 --> 00:08:54.259
power, the real efficiency, and, frankly, a more

00:08:54.259 --> 00:08:56.519
future-proof design. Right. And connecting this

00:08:56.519 --> 00:08:58.700
to the bigger picture, you mentioned the underlying

00:08:58.480 --> 00:09:00.500
LangChain structure allows for advanced stuff

00:09:00.500 --> 00:09:02.600
like LangSmith integration, that's LangChain's

00:09:02.600 --> 00:09:05.279
own monitoring platform, right? What can that

00:09:05.279 --> 00:09:07.409
actually tell you? Yeah, LangSmith gives you

00:09:07.409 --> 00:09:10.029
incredible visibility. You can track crucial

00:09:10.029 --> 00:09:13.110
metrics like token usage, those little bits of

00:09:13.110 --> 00:09:15.549
text the LLM processes, which directly hits your

00:09:15.549 --> 00:09:18.169
costs. Ah, important. Very important. And monitor

00:09:18.169 --> 00:09:20.909
response times. It provides these detailed logs

00:09:20.909 --> 00:09:23.129
of the agent's whole reasoning process. So you

00:09:23.129 --> 00:09:25.730
can see its internal thought process, step-by

00:09:25.730 --> 00:09:29.070
-step execution of tool calls. It makes it much easier

00:09:29.070 --> 00:09:31.769
to spot errors or performance bottlenecks. It's

00:09:31.769 --> 00:09:34.529
basically like an X-ray for your AI agent. Just

00:09:34.529 --> 00:09:37.000
a key note here: LangSmith integration currently

00:09:37.000 --> 00:09:40.340
only works with self-hosted n8n instances. So

00:09:40.340 --> 00:09:42.100
that's something to keep in mind. That level

00:09:42.100 --> 00:09:44.399
of insight sounds invaluable, especially for

00:09:44.399 --> 00:09:46.879
debugging complex agents. How does this whole

00:09:46.879 --> 00:09:49.139
LangChain approach in n8n stack up against other

00:09:49.139 --> 00:09:51.580
things people might know, like say the OpenAI

00:09:51.580 --> 00:09:54.259
Assistants API? That's a good question. Both

00:09:54.259 --> 00:09:57.019
approaches can work fine within n8n, and they

00:09:57.019 --> 00:09:59.830
each have their strengths. LangChain, its big

00:09:59.830 --> 00:10:02.190
plus, is model flexibility. You can use pretty

00:10:02.190 --> 00:10:04.870
much any LLM provider, so you avoid vendor lock

00:10:04.870 --> 00:10:07.690
-in. You get full control over every aspect of

00:10:07.690 --> 00:10:10.070
the agent's behavior. There's a self-hosting

00:10:10.070 --> 00:10:11.950
option, which is great for data sovereignty,

00:10:12.250 --> 00:10:14.970
broader tool integration generally, and you benefit

00:10:14.970 --> 00:10:18.169
from that big open source community. The OpenAI

00:10:18.169 --> 00:10:20.669
Assistants API, on the other hand, it really

00:10:20.669 --> 00:10:23.870
prioritizes simplicity. Less code involved. It's

00:10:23.870 --> 00:10:26.610
obviously optimized for OpenAI models, includes

00:10:26.610 --> 00:10:28.970
some handy built-in features, like file handling

00:10:28.970 --> 00:10:31.470
and code execution. And it's a managed service,

00:10:31.570 --> 00:10:34.570
so less operational headache for you. But I'd

00:10:34.570 --> 00:10:37.610
say for most n8n use cases where you really need

00:10:37.610 --> 00:10:39.889
that ultimate control and flexibility, LangChain

00:10:39.889 --> 00:10:42.070
usually comes out on top. OK, that makes sense.

00:10:42.370 --> 00:10:45.210
So where does this advanced power become truly

00:10:45.210 --> 00:10:47.169
indispensable? Can you give us some specific,

00:10:47.210 --> 00:10:49.129
maybe real -world examples of things you could

00:10:49.129 --> 00:10:50.889
build with these sophisticated Langchain agents

00:10:50.889 --> 00:10:52.950
that you just couldn't do with a standard node?

00:10:53.100 --> 00:10:57.820
Absolutely. Imagine building multi-agent systems,

00:10:58.399 --> 00:11:01.399
like actual teams of specialized agents collaborating.

00:11:01.799 --> 00:11:04.299
Maybe a research agent gathers info, passes it

00:11:04.299 --> 00:11:06.500
to an analysis agent, which then hands off to

00:11:06.500 --> 00:11:08.440
a writing agent to synthesize the results, all

00:11:08.440 --> 00:11:11.340
working together autonomously. Or you can implement
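
That research-analysis-writing hand-off can be sketched as a toy pipeline. Each "agent" below is a made-up stand-in function; a real LangChain version would back each one with its own LLM, prompt, and tools.

```javascript
// Three specialized "agents", each a hypothetical stand-in.
const researchAgent = (topic) => ({
  topic,
  facts: [`${topic} fact A`, `${topic} fact B`], // pretend gathered info
});
const analysisAgent = ({ topic, facts }) => ({
  topic,
  keyFact: facts[0], // pretend analysis picks the key finding
});
const writingAgent = ({ topic, keyFact }) =>
  `Report on ${topic}: ${keyFact}`;

// The hand-off: each agent's output is the next agent's input.
const report = writingAgent(analysisAgent(researchAgent("n8n")));
// report: "Report on n8n: n8n fact A"
```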

00:11:11.340 --> 00:11:14.240
dynamic reasoning, where an agent plans its actions,

00:11:14.519 --> 00:11:16.419
reflects on the outcome, and then adapts its

00:11:16.419 --> 00:11:18.440
approach on the fly. Like, it tries one tool,

00:11:18.539 --> 00:11:20.759
realizes, hmm, that's not working well, and then

00:11:20.759 --> 00:11:22.419
autonomously tries a different tool or method.
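
That try-then-fall-back behavior boils down to a loop like this sketch. The tools and their results are invented for illustration, and "reflection" is reduced to a simple usability check on each result.

```javascript
// Try each tool in order; fall back when a result is unusable.
function runWithFallback(tools, query) {
  for (const tool of tools) {
    const result = tool.run(query);
    // "Reflection" reduced to: did this tool produce anything usable?
    if (result !== null) return { tool: tool.name, result };
  }
  return { tool: null, result: null };
}

const tools = [
  { name: "webSearch", run: () => null },            // pretend it found nothing
  { name: "database", run: (q) => `rows for ${q}` }, // fallback succeeds
];

const outcome = runWithFallback(tools, "orders");
// outcome: { tool: "database", result: "rows for orders" }
```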

00:11:22.379 --> 00:11:24.980
This lets you build really complex workflows

00:11:24.980 --> 00:11:27.600
with sophisticated decision trees or iterative

00:11:27.600 --> 00:11:29.940
processing where agents keep refining results

00:11:29.940 --> 00:11:32.559
until they meet some criteria. And that model

00:11:32.559 --> 00:11:34.720
switching, we talked about using different LLMs

00:11:34.720 --> 00:11:36.480
for different parts of a task based on their

00:11:36.480 --> 00:11:38.919
strengths. You can even build in custom safety

00:11:38.919 --> 00:11:41.539
checks for sensitive operations. Just think of

00:11:41.539 --> 00:11:43.919
a simple but powerful research agent you could

00:11:43.919 --> 00:11:46.360
build with this node. It takes a topic you give

00:11:46.360 --> 00:11:49.019
it, uses a connected search tool to find info

00:11:49.019 --> 00:11:52.120
online, then uses an LLM to analyze and summarize

00:11:52.120 --> 00:11:54.559
the key points. That alone shows how the LangChain

00:11:54.559 --> 00:11:56.820
Code node lets you implement multi-step

00:11:56.820 --> 00:11:59.379
intelligent logic that interacts with external

00:11:59.379 --> 00:12:01.519
tools. That's something that's either incredibly

00:12:01.519 --> 00:12:04.080
hard or just plain impossible with only the standard

00:12:04.080 --> 00:12:06.799
AI agent node. OK, so the big question then,
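
The research agent just described is, at its core, a two-step pipeline. In this sketch the search tool and the summarizer are hypothetical stand-ins for a real search integration and an LLM call.

```javascript
// Hypothetical stand-ins for a search tool and an LLM summarizer.
const searchTool = (topic) => [
  `${topic} is a framework`,
  `${topic} powers agents`,
];
const summarize = (snippets) => `Key points: ${snippets.join("; ")}`;

// The agent: gather info with a tool, then analyze and summarize it.
function runResearch(topic) {
  const snippets = searchTool(topic); // step 1: gather info
  return summarize(snippets);        // step 2: analyze and summarize
}

const summary = runResearch("LangChain");
// "Key points: LangChain is a framework; LangChain powers agents"
```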

00:12:07.080 --> 00:12:09.039
for you listening, the learner, after hearing

00:12:09.039 --> 00:12:12.240
all this, should I actually invest the time?

00:12:12.379 --> 00:12:14.580
to learn how to use this LangChain Code node?

00:12:14.879 --> 00:12:17.000
When does it really become essential? Yeah, that's

00:12:17.000 --> 00:12:19.419
the key question. And the answer, honestly, it

00:12:19.419 --> 00:12:22.279
depends on your specific needs and maybe your

00:12:22.279 --> 00:12:24.639
ambitions. You should absolutely stick with the

00:12:24.639 --> 00:12:28.360
standard AI agent node if you need a quick, simple

00:12:28.360 --> 00:12:31.179
solution. If your agent's logic is pretty straightforward,

00:12:31.480 --> 00:12:33.440
if you're not yet comfortable with coding or

00:12:33.440 --> 00:12:36.960
don't want to be, if you only need one AI model,

00:12:37.000 --> 00:12:39.320
and if your workflow doesn't need complex multi

00:12:39.320 --> 00:12:41.519
-branch decisions, it's excellent for all those

00:12:41.519 --> 00:12:44.279
scenarios. No question. However, you should definitely

00:12:44.279 --> 00:12:46.500
dive into the LangChain Code node if you find

00:12:46.500 --> 00:12:48.779
yourself craving advanced customization and full

00:12:48.779 --> 00:12:51.220
control. If your agent needs to make complex,

00:12:51.220 --> 00:12:53.659
adaptive decisions with lots of different paths,

00:12:53.879 --> 00:12:56.440
if you want to seamlessly use and orchestrate

00:12:56.440 --> 00:12:58.659
multiple AI models together, if you're building

00:12:58.659 --> 00:13:00.940
something that needs to be really scalable and

00:13:00.940 --> 00:13:03.299
robust, and crucially, if you are willing to

00:13:03.299 --> 00:13:05.639
invest a bit of time in learning a more powerful,

00:13:06.720 --> 00:13:09.200
more developer-centric solution. Looking ahead,

00:13:09.220 --> 00:13:11.110
understanding frameworks like LangChain is likely

00:13:11.110 --> 00:13:13.250
to become an increasingly valuable skill. AI

00:13:13.250 --> 00:13:15.250
is getting deeper into everything, right? The

00:13:15.250 --> 00:13:18.529
ability to build these sophisticated, truly autonomous

00:13:18.529 --> 00:13:22.320
systems that can reason, plan, adapt. That's

00:13:22.320 --> 00:13:24.240
going to be a significant competitive advantage

00:13:24.240 --> 00:13:26.299
in the years to come, I think. Yeah, this deep

00:13:26.299 --> 00:13:29.240
dive has really pulled back the curtain on a

00:13:29.240 --> 00:13:31.779
hidden layer of power within n8n, hasn't it?

00:13:32.080 --> 00:13:34.320
From that friendly drag-and-drop surface down

00:13:34.320 --> 00:13:36.480
to the powerful LangChain framework underneath.

00:13:36.960 --> 00:13:38.860
We've seen how you can move from simple automation

00:13:38.860 --> 00:13:42.039
to building truly custom intelligent AI agents

00:13:42.039 --> 00:13:45.519
tailored exactly to what you need. The possibilities

00:13:45.519 --> 00:13:48.590
this LangChain Code node opens up. They're just

00:13:48.590 --> 00:13:50.950
not there with the standard nodes. It gives you

00:13:50.950 --> 00:13:53.409
unparalleled flexibility and control over your

00:13:53.409 --> 00:13:56.429
AI workflows. Exactly. And as AI keeps evolving

00:13:56.429 --> 00:13:58.870
so rapidly, the ability to build these custom

00:13:58.870 --> 00:14:00.629
sophisticated agents will just become more and

00:14:00.629 --> 00:14:03.210
more vital. By understanding and using the LangChain

00:14:03.210 --> 00:14:05.169
Code node, you're not just learning an

00:14:05.169 --> 00:14:07.090
n8n feature, you're developing skills that are

00:14:07.090 --> 00:14:10.269
valuable across the entire AI landscape, which

00:14:10.269 --> 00:14:12.029
raises an important question for you to think

00:14:12.029 --> 00:14:15.230
about. What kind of truly sophisticated adaptive

00:14:15.230 --> 00:14:17.370
AI agent could you build if you had that complete

00:14:17.370 --> 00:14:20.169
control over its logic and its tools? My advice,

00:14:20.509 --> 00:14:23.070
start small. Maybe even just peek under the hood,

00:14:23.309 --> 00:14:25.190
look at that JSON for the standard nodes like

00:14:25.190 --> 00:14:27.710
you mentioned, see how they work, and then gradually

00:14:27.710 --> 00:14:29.809
work your way up to building your own custom

00:14:29.809 --> 00:14:31.990
agents. The journey might be a bit challenging,

00:14:32.190 --> 00:14:34.690
sure, but the results could be incredibly rewarding.

00:14:34.909 --> 00:14:38.169
Yeah, definitely. As next steps, we really encourage

00:14:38.169 --> 00:14:41.590
you experiment. Set up a test workflow with the

00:14:41.590 --> 00:14:44.009
LangChain Code node. Try building a simple agent

00:14:44.009 --> 00:14:46.309
that uses, say, two different language models

00:14:46.309 --> 00:14:49.570
or calls an external API tool. Dive into the

00:14:49.570 --> 00:14:51.769
official LangChain documentation. Seriously,

00:14:51.929 --> 00:14:54.590
it's a gold mine of information. And if you do

00:14:54.590 --> 00:14:57.470
have a self-hosted n8n instance, really consider

00:14:57.470 --> 00:14:59.710
integrating LangSmith for that amazing monitoring

00:14:59.710 --> 00:15:02.009
and debugging capability. Oh, and don't forget

00:15:02.009 --> 00:15:04.149
to join the n8n community, share what you build,

00:15:04.330 --> 00:15:06.110
ask questions, learn from others. It's a great

00:15:06.110 --> 00:15:08.360
resource. Thank you so much for joining us on

00:15:08.360 --> 00:15:10.500
this deep dive into the hidden AI power of

00:15:10.500 --> 00:15:13.759
n8n. Until next time, keep exploring, keep learning,

00:15:14.120 --> 00:15:15.120
and keep diving deep.
