WEBVTT

00:00:00.000 --> 00:00:02.500
Imagine this. You send a quick message, maybe

00:00:02.500 --> 00:00:05.740
on Telegram, maybe Slack, and just seconds later,

00:00:05.879 --> 00:00:09.699
you get back a really detailed professional technical

00:00:09.699 --> 00:00:12.779
stock analysis. Yeah, like full breakdown, MACD,

00:00:12.779 --> 00:00:16.179
support, resistance lines, even the actual

00:00:16.179 --> 00:00:18.559
chart image right there on your phone. Exactly.

00:00:18.679 --> 00:00:19.879
It sounds like something you'd need, I don't

00:00:19.879 --> 00:00:22.199
know, expensive financial software for. Right.

00:00:22.579 --> 00:00:25.559
But here's the kicker. This whole thing, this

00:00:25.559 --> 00:00:27.579
advanced agent system, it was built in just a

00:00:27.579 --> 00:00:31.059
few hours. And get this completely no code. Wow.

00:00:31.300 --> 00:00:33.399
Yeah, it's not just some simple chatbot fetching

00:00:33.399 --> 00:00:37.240
a price. It's a proper AI agent. It's even running

00:00:37.240 --> 00:00:39.920
a pretty sophisticated dual model strategy. Welcome

00:00:39.920 --> 00:00:42.759
to the deep dive. So today we're going to open

00:00:42.759 --> 00:00:44.719
up the hood on that exact system, the personal

00:00:44.719 --> 00:00:47.780
AI stock analyst. Our mission really is to get

00:00:47.780 --> 00:00:49.899
our heads around the architecture, this two-part,

00:00:50.350 --> 00:00:52.530
two-AI-model setup that makes it all tick.

00:00:52.729 --> 00:00:54.770
Yeah, we'll map out what it can actually do,

00:00:54.829 --> 00:00:56.810
you know, memory comparisons, that kind of stuff.

00:00:56.909 --> 00:00:59.630
Then we'll really dig into that core idea, the

00:00:59.630 --> 00:01:03.590
brain versus the specialist separation. And crucially,

00:01:03.670 --> 00:01:06.349
you'll see why they picked two totally different

00:01:06.349 --> 00:01:09.010
AI models. You know, one that's great for reasoning,

00:01:09.129 --> 00:01:11.769
the thinking part, and another one completely

00:01:11.769 --> 00:01:14.549
separate that's amazing at visual stuff, like

00:01:14.549 --> 00:01:16.530
reading those charts. All right, let's dive in.

00:01:16.609 --> 00:01:19.799
Let's do it. Okay, so let's unpack what this system

00:01:19.799 --> 00:01:23.120
actually does day to day. It seems to handle three

00:01:23.120 --> 00:01:25.700
main things that make it feel like a proper agent,

00:01:25.700 --> 00:01:28.840
not just, you know, a simple script. Right. The first

00:01:28.840 --> 00:01:31.060
one's pretty straightforward: single stock analysis.

00:01:31.060 --> 00:01:35.129
You feed it a ticker symbol like, uh, NASDAQ:GOOGL

00:01:35.129 --> 00:01:37.170
or whatever. Exactly. And you get back a full

00:01:37.170 --> 00:01:40.329
professional style report. Trend health, key

00:01:40.329 --> 00:01:43.689
price levels, MACD analysis, volume. Yeah.

00:01:43.750 --> 00:01:45.730
The whole nine yards. That's kind of the baseline.

00:01:45.829 --> 00:01:47.730
Yeah. Yeah. But the second function, comparative

00:01:47.730 --> 00:01:50.370
analysis, that's where this split architecture

00:01:50.370 --> 00:01:53.170
really starts to shine, I think. How so? Well,

00:01:53.209 --> 00:01:55.989
you can ask it to compare two stocks, say Alphabet

00:01:55.989 --> 00:01:57.969
versus Microsoft. Yeah. Like in the example,

00:01:58.109 --> 00:02:00.250
it doesn't just give you two separate reports.

00:02:00.879 --> 00:02:03.500
It runs the entire technical breakdown for both,

00:02:03.659 --> 00:02:05.920
generates the charts for each one, and then,

00:02:05.980 --> 00:02:09.120
this is the cool part, it performs this higher

00:02:09.120 --> 00:02:11.900
level reasoning to give you a final summary comparing

00:02:11.900 --> 00:02:14.879
them side by side. Ah, so it synthesizes the

00:02:14.879 --> 00:02:17.599
information. It doesn't just fetch it. Precisely.

00:02:17.840 --> 00:02:20.379
And that leads to the third really key function,

00:02:20.560 --> 00:02:22.719
smart memory. Right. The conversational aspect.

00:02:22.979 --> 00:02:25.319
Yeah. So if you just ask for that alphabet analysis,

00:02:25.680 --> 00:02:27.340
you can immediately follow up with something

00:02:27.340 --> 00:02:29.500
simple like, OK, now compare that to Microsoft.

00:02:29.500 --> 00:02:32.520
And it knows what that refers to. Exactly. It

00:02:32.520 --> 00:02:34.419
remembers the last, say, five messages in the

00:02:34.419 --> 00:02:36.740
conversation. So you don't have to repeat the

00:02:36.740 --> 00:02:41.000
ticker. You don't waste time or API credits reanalyzing

00:02:41.000 --> 00:02:43.139
the first stock. It just picks up where you left

00:02:43.139 --> 00:02:45.180
off. That makes sense. And I guess that complexity

00:02:45.180 --> 00:02:47.539
needing memory, needing to synthesize comparisons,

00:02:47.919 --> 00:02:50.960
that's what drives this dual architecture idea.

00:02:51.240 --> 00:02:53.340
You got it. Separating the manager from the doer.

00:02:53.699 --> 00:02:57.400
So part one is the AI agent brain. Think of it

00:02:57.400 --> 00:03:00.039
like the CEO. This is the main workflow, probably

00:03:00.039 --> 00:03:03.439
built in something like n8n. It handles the

00:03:03.439 --> 00:03:05.560
chat, keeps track of the conversation history,

00:03:05.719 --> 00:03:08.219
that memory, decides when it needs help from a

00:03:08.219 --> 00:03:10.699
specialist, and then takes the specialist output

00:03:10.699 --> 00:03:13.620
and formats the nice final report for you. And

00:03:13.620 --> 00:03:16.759
part two is? Part two, the technical analysis

00:03:16.759 --> 00:03:20.120
tool. The specialist. This is a totally separate,

00:03:20.219 --> 00:03:23.199
smaller workflow. A sub -workflow. Okay, a sub

00:03:23.199 --> 00:03:25.419
-workflow. Let's define that quickly. Sure. A

00:03:25.419 --> 00:03:27.599
sub workflow is basically like a dedicated mini

00:03:27.599 --> 00:03:29.800
program the main system can call up whenever

00:03:29.800 --> 00:03:32.120
it needs a specific task done. Like calling a

00:03:32.120 --> 00:03:34.979
consultant for a specific job. Exactly like that.

00:03:35.080 --> 00:03:37.719
This specialist one, it does one thing. Perfectly.

00:03:37.840 --> 00:03:40.379
Takes a stock ticker, calls an outside service

00:03:40.379 --> 00:03:42.860
and API to generate the chart image, and then

00:03:42.860 --> 00:03:45.000
does the raw technical analysis on that chart.

00:03:45.159 --> 00:03:48.039
OK, but I have to ask, doesn't splitting it into

00:03:48.039 --> 00:03:50.800
a CEO and a specialist make things more complex,

00:03:51.020 --> 00:03:53.759
like harder to build and manage, especially in

00:03:53.759 --> 00:03:56.500
a no code setup? Why not just cram it all into

00:03:56.500 --> 00:03:59.360
one big workflow? That's a great question. It

00:03:59.360 --> 00:04:01.419
seems counterintuitive, right? But actually,

00:04:01.520 --> 00:04:03.500
it makes things way less complex in the long

00:04:03.500 --> 00:04:06.219
run. Really? By separating them. The main CEO

00:04:06.219 --> 00:04:08.500
system stays really clean and focused just on

00:04:08.500 --> 00:04:11.099
managing the conversation and logic. It's modular.

00:04:11.460 --> 00:04:13.840
So let's say tomorrow you want to add a tool

00:04:13.840 --> 00:04:17.420
that analyzes, I don't know, options data. You

00:04:17.420 --> 00:04:20.019
just build a new separate specialist sub -workflow

00:04:20.019 --> 00:04:22.860
for options, tell the CEO about it, and plug

00:04:22.860 --> 00:04:25.439
it in. The core brain doesn't get cluttered.

00:04:25.459 --> 00:04:27.660
It makes the whole system much easier to manage

00:04:27.660 --> 00:04:30.750
and scale up with new tools later. Ah, I see.

00:04:30.850 --> 00:04:33.670
So that separation actually promotes clarity

00:04:33.670 --> 00:04:36.389
and makes it easier to add more capabilities

00:04:36.389 --> 00:04:39.509
down the road. Precisely. Scalable clarity. That's

00:04:39.509 --> 00:04:41.689
the goal. Okay, so we've got the CEO of the brain.

00:04:41.889 --> 00:04:44.730
Now, which AI model do you actually pick for

00:04:44.730 --> 00:04:46.889
that role? It used to be, you know, you'd just

00:04:46.889 --> 00:04:48.730
go with the biggest name or whatever was hyped

00:04:48.730 --> 00:04:50.629
that week. Right, whatever was trending on Twitter.

00:04:50.910 --> 00:04:52.750
Exactly. Yeah. But for something like financial

00:04:52.750 --> 00:04:54.990
analysis, you need proof it actually performs

00:04:54.990 --> 00:04:57.220
under pressure. The source material mentions

00:04:57.220 --> 00:05:00.579
this platform, nof1.ai, that runs something

00:05:00.579 --> 00:05:03.420
called the Alpha Arena. The Alpha Arena. What

00:05:03.420 --> 00:05:05.579
is that exactly? Sounds intense. It kind of is.

00:05:05.720 --> 00:05:08.779
It's basically this live high stakes competition

00:05:08.779 --> 00:05:11.699
for different AI models. They compete in this

00:05:11.699 --> 00:05:14.980
real time streaming environment designed to simulate,

00:05:15.079 --> 00:05:18.220
you know, dynamic market conditions. So it tests

00:05:18.220 --> 00:05:21.100
how well they reason and how reliable they are

00:05:21.100 --> 00:05:22.860
when things are constantly changing, like in

00:05:22.860 --> 00:05:25.529
actual trading. You got it. It's about battle

00:05:25.529 --> 00:05:27.990
-tested performance, not just benchmark scores

00:05:27.990 --> 00:05:30.750
or, you know, how fast it spits out tokens. Okay,

00:05:30.810 --> 00:05:33.649
so based on that kind of testing, which model

00:05:33.649 --> 00:05:36.490
did they choose for the brain? The data apparently

00:05:36.490 --> 00:05:41.189
pointed towards using DeepSeek V3.1 Chat. That

00:05:41.189 --> 00:05:43.589
became the core reasoning engine, the CEO brain.

00:05:43.870 --> 00:05:45.889
DeepSeek. Okay. And how do you actually connect

00:05:45.889 --> 00:05:48.029
to it? They used OpenRouter. Right, OpenRouter.

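As a concrete sketch: OpenRouter exposes an OpenAI-compatible endpoint, so a request for the brain model can be assembled like this in Python. The API key is a placeholder and the DeepSeek model slug is an assumption — check OpenRouter's live model list for the exact identifier.

```python
import json

# Hedged sketch of a chat call routed through OpenRouter: one key,
# many models, OpenAI-compatible payload. The model slug below is an
# assumption -- look up the exact DeepSeek identifier on OpenRouter.
def build_openrouter_request(api_key: str, model: str, messages: list) -> dict:
    return {
        "url": "https://openrouter.ai/api/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "messages": messages}),
    }

req = build_openrouter_request(
    "YOUR_OPENROUTER_KEY",                    # placeholder key
    "deepseek/deepseek-chat",                 # assumed model slug
    [{"role": "user", "content": "Analyze NASDAQ:GOOGL"}],
)
```

Swapping models later is then a one-line change to the slug, which is the flexibility the hosts are pointing at.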
00:05:48.069 --> 00:05:50.329
Let's quickly define that too. Yeah, simple one.

00:05:50.629 --> 00:05:52.310
OpenRouter is just an aggregator that lets you

00:05:52.310 --> 00:05:54.769
access lots of different AI models using just

00:05:54.769 --> 00:05:57.670
one API key. Super flexible. Gives you options

00:05:57.670 --> 00:06:00.389
if you want to swap models later. Exactly. Now,

00:06:00.509 --> 00:06:02.850
critical for this brain, remember that smart

00:06:02.850 --> 00:06:04.449
memory we talked about? Yeah, remembering the

00:06:04.449 --> 00:06:07.600
last five messages. Right. The absolute crucial

00:06:07.600 --> 00:06:10.100
setup step there, and this is the bit people

00:06:10.100 --> 00:06:12.399
apparently mess up all the time, is you must

00:06:12.399 --> 00:06:15.819
tie that memory to the specific chat ID coming

00:06:15.819 --> 00:06:18.399
from the Telegram trigger. Ah, okay. So the memory

00:06:18.399 --> 00:06:21.300
is unique to each user conversation. What happens

00:06:21.300 --> 00:06:23.459
if you forget that link? What's the failure mode?

00:06:23.680 --> 00:06:27.180
Oh, it's instant chaos. Utter chaos. Imagine

00:06:27.180 --> 00:06:30.459
you're asking about Apple, right? And someone

00:06:30.459 --> 00:06:33.120
else in a totally separate chat just asked about

00:06:33.120 --> 00:06:36.240
Tesla. Uh -oh. Yeah. Suddenly you might start

00:06:36.240 --> 00:06:38.199
getting the Tesla analysis popping up in your

00:06:38.199 --> 00:06:40.740
Apple chat. The conversation memory gets completely

00:06:40.740 --> 00:06:43.019
crossed. It just obliterates the user experience.

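To make that crossed-memory failure concrete, here's a minimal Python sketch — purely illustrative, not n8n's actual internals: memory is a per-chat_id window of the last five messages, and every reply is addressed by the same chat_id.

```python
from collections import defaultdict, deque

# Illustrative sketch of keying everything by the Telegram chat ID:
# one 5-message window per chat_id for memory (the window size the
# hosts mention), and the same chat_id on every outgoing reply.
class ChatMemory:
    def __init__(self, window: int = 5):
        self._store = defaultdict(lambda: deque(maxlen=window))

    def add(self, chat_id: int, role: str, text: str) -> None:
        self._store[chat_id].append({"role": role, "content": text})

    def history(self, chat_id: int) -> list:
        # Only this chat's messages are ever returned -- no crossover.
        return list(self._store[chat_id])

def build_reply(chat_id: int, text: str) -> dict:
    # Addressed by the same chat_id, so the reply can only land in
    # the conversation it came from (Telegram sendMessage shape).
    return {"method": "sendMessage", "chat_id": chat_id, "text": text}

memory = ChatMemory()
memory.add(111, "user", "Analyze Alphabet")
memory.add(222, "user", "Analyze Tesla")   # someone else's chat
memory.add(111, "user", "Now compare that to Microsoft")
reply = build_reply(111, "Here is the Alphabet vs. Microsoft comparison...")
```

Drop the chat_id key and both chats share one deque — exactly the Tesla-analysis-in-your-Apple-chat chaos described above.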
00:06:43.259 --> 00:06:45.259
You have to link it to that unique chat ID. Got

00:06:45.259 --> 00:06:47.420
it. Okay. So you've picked the model based on

00:06:47.420 --> 00:06:49.060
performance. You've set up the memory correctly.

00:06:49.319 --> 00:06:51.720
Right. What's next? Defining what the agent should

00:06:51.720 --> 00:06:53.620
actually do. Exactly. You need to give it its

00:06:53.620 --> 00:06:55.899
job description, its personality, its rules,

00:06:56.060 --> 00:06:58.600
and that's done via the system prompt. The prompt.

00:06:58.779 --> 00:07:01.180
Okay. And the sources mention this needs to be

00:07:01.180 --> 00:07:03.160
really structured, not just a paragraph of text.

00:07:03.659 --> 00:07:07.220
Absolutely. That structure is like your defense

00:07:07.220 --> 00:07:11.120
against prompt drift, where the AI starts forgetting

00:07:11.120 --> 00:07:13.480
its instructions over time or gets sidetracked.

00:07:13.519 --> 00:07:15.160
Makes sense. So what's the structure look like?

00:07:15.339 --> 00:07:17.939
It starts with an overview, basically telling

00:07:17.939 --> 00:07:20.540
the AI who it is, like you are an expert technical

00:07:20.540 --> 00:07:23.680
stock analyst. Then comes context listing the

00:07:23.680 --> 00:07:26.250
tools it has access to. like our chart specialist,

00:07:26.470 --> 00:07:29.250
and maybe what data sources it can use. After

00:07:29.250 --> 00:07:31.649
that, step -by -step instructions on how to handle

00:07:31.649 --> 00:07:34.189
requests. Okay, overview, context, instructions,

00:07:34.410 --> 00:07:37.709
what else? Then, importantly, detailed tool descriptions.

00:07:38.069 --> 00:07:40.769
So, explicitly naming the get chart tool and

00:07:40.769 --> 00:07:43.730
maybe explaining what it does. And finally, SOPs

00:07:43.730 --> 00:07:46.139
as standard operating procedures. This is where

00:07:46.139 --> 00:07:48.220
you repeat the absolute most critical rules,

00:07:48.339 --> 00:07:50.899
like the big one. Do not give explicit financial

00:07:50.899 --> 00:07:53.120
advice. You hammer that home. That structured

00:07:53.120 --> 00:07:55.879
approach seems vital. Oh, it is, honestly. I

00:07:55.879 --> 00:07:57.839
still wrestle with prompt drift myself sometimes.

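The five-part prompt structure just described could be assembled roughly like this. Only the section names come from the discussion — the wording inside each section is invented for illustration.

```python
# Hypothetical sketch of the structured system prompt: overview,
# context, instructions, tool descriptions, and SOPs. Section bodies
# here are illustrative placeholders, not the builder's actual text.
SECTIONS = {
    "Overview": "You are an expert technical stock analyst.",
    "Context": "You have access to the get chart tool for generating "
               "and analyzing stock charts.",
    "Instructions": "1. Extract the ticker(s) from the request. "
                    "2. Call the tool once per ticker. 3. Summarize.",
    "Tool Descriptions": "get chart(ticker) -> chart URL plus "
                         "candlestick/MACD/volume analysis.",
    "SOPs": "Do not give explicit financial advice. Always report key "
            "support and resistance levels.",
}

def build_system_prompt(sections: dict) -> str:
    # One clearly delimited block per section helps resist prompt drift.
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

prompt = build_system_prompt(SECTIONS)
```

The point of the delimited sections is that the critical rules (the SOPs) stay visibly separate instead of dissolving into one long paragraph.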
00:07:58.100 --> 00:07:59.720
Keeping things tightly structured like this,

00:07:59.759 --> 00:08:01.899
especially with those SOPs, is pretty essential

00:08:01.899 --> 00:08:04.180
for keeping the agent reliable and focused query

00:08:04.180 --> 00:08:07.339
after query. So let's connect that back. If we're

00:08:07.339 --> 00:08:10.459
building this scalable no -code system, Why is

00:08:10.459 --> 00:08:11.800
going through the trouble of using something

00:08:11.800 --> 00:08:14.480
like the Alpha Arena to pick maybe a less common

00:08:14.480 --> 00:08:18.240
model like DeepSeek V3.1 so important? Why not

00:08:18.240 --> 00:08:21.560
just default to, say, GPT -4 or Claude, which

00:08:21.560 --> 00:08:23.839
everyone knows? Yeah, good question. It comes

00:08:23.839 --> 00:08:25.660
back to that performance under pressure idea.

00:08:26.040 --> 00:08:28.680
The Alpha Arena helps select the model that's

00:08:28.680 --> 00:08:31.699
proven to be most reliable and accurate specifically

00:08:31.699 --> 00:08:35.340
for this kind of dynamic reasoning task. In finance,

00:08:35.440 --> 00:08:37.220
you know, that reliability, that accuracy when

00:08:37.220 --> 00:08:39.379
things get volatile, that trumps familiarity

00:08:39.379 --> 00:08:41.659
every single time. You want the best tool for

00:08:41.659 --> 00:08:43.960
that specific brain function. OK, so the CEO

00:08:43.960 --> 00:08:46.019
brain running on DeepSeek decides it needs a

00:08:46.019 --> 00:08:48.500
chart and some analysis for, say, Apple. How

00:08:48.500 --> 00:08:50.840
does it actually hand off that specific task

00:08:50.840 --> 00:08:53.679
to the specialist tool, that sub workflow? Right.

00:08:53.740 --> 00:08:55.259
So the brain basically calls the specialist

00:08:55.370 --> 00:08:58.850
tool using its unique URL. That call triggers

00:08:58.850 --> 00:09:01.429
the sub-workflow to start running. Like dialing

00:09:01.429 --> 00:09:04.190
its direct number? Kind of, yeah. And the absolute

00:09:04.190 --> 00:09:07.730
key detail here for setup is the naming, the name

00:09:07.730 --> 00:09:10.169
you give the sub-workflow in your no-code tool.

00:09:10.649 --> 00:09:13.210
Let's say you call it get chart. That name has

00:09:13.210 --> 00:09:16.330
to exactly match the tool name you defined back

00:09:16.330 --> 00:09:18.730
in the brain system prompt. Exactly match, like

00:09:18.730 --> 00:09:20.850
capitalization and everything. Everything. Case

00:09:20.850 --> 00:09:23.549
sensitive, spaces, underscores, whatever. If

00:09:23.549 --> 00:09:26.309
it's off by even one character, the brain basically

00:09:26.309 --> 00:09:28.509
says, I don't know what tool you're talking about.

00:09:28.610 --> 00:09:30.610
And the whole process breaks down. Connection

00:09:30.610 --> 00:09:33.110
failed. Okay, meticulous naming is critical.

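The reason the match has to be exact is that tool dispatch is typically a literal string lookup. A tiny illustration (hypothetical tool name, not the actual n8n mechanism):

```python
# Why the tool name in the system prompt must exactly match the
# sub-workflow name: dispatch is a literal, case-sensitive string
# lookup, so "get chart" vs "Get Chart" is a dead end.
TOOLS = {"Get Chart": lambda ticker: f"analysis for {ticker}"}

def call_tool(name: str, ticker: str) -> str:
    if name not in TOOLS:                 # case, spaces, underscores all count
        return f"Unknown tool: {name!r}"
    return TOOLS[name](ticker)

ok = call_tool("Get Chart", "NASDAQ:AAPL")
bad = call_tool("get chart", "NASDAQ:AAPL")  # off by capitalization only
```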
00:09:33.409 --> 00:09:36.090
So the specialist gets the call. Its first job

00:09:36.090 --> 00:09:38.250
is getting the actual chart image, right? It

00:09:38.250 --> 00:09:42.009
uses that free service, the Chart-IMG API. Yep, that's

00:09:42.009 --> 00:09:44.049
the one. And the sources pointed out a really

00:09:44.049 --> 00:09:46.049
neat efficiency trick here for setting up that

00:09:46.049 --> 00:09:48.330
API call in n8n. Ah, yeah, they called it the

00:09:48.330 --> 00:09:51.789
pro gamer move, using the cURL import method. Uh

00:09:51.789 --> 00:09:54.450
-huh, yeah, that's the one. And it's genuinely

00:09:54.450 --> 00:09:56.710
a massive time saver. So what does that trick

00:09:56.710 --> 00:09:59.450
actually do for the person? Building this. How

00:09:59.450 --> 00:10:01.769
does it simplify things? Well, instead of manually

00:10:01.769 --> 00:10:03.929
setting up every single parameter for the API

00:10:03.929 --> 00:10:06.649
call, you know, the headers, the request body,

00:10:06.789 --> 00:10:09.149
the authentication method, which can be super

00:10:09.149 --> 00:10:11.309
tedious and error prone. Right. Fiddling with

00:10:11.309 --> 00:10:13.450
JSON and stuff. Exactly. You just find the example

00:10:13.450 --> 00:10:16.389
cURL command in the Chart-IMG API documentation.

00:10:16.769 --> 00:10:19.250
It's usually just a snippet of text. You copy

00:10:19.250 --> 00:10:21.570
it and you paste it into this specific import

00:10:21.570 --> 00:10:24.850
function in the n8n HTTP Request node. And then?

00:10:25.450 --> 00:10:27.830
Bam. And it automatically configures the entire

00:10:27.830 --> 00:10:29.710
node for you. So that's all the headers, parameters,

00:10:29.889 --> 00:10:32.750
everything based on that cURL command. It literally

00:10:32.750 --> 00:10:35.889
turns what could be hours of debugging into like

00:10:35.889 --> 00:10:39.309
10 seconds of copy paste. Wow. Okay. That is

00:10:39.309 --> 00:10:42.409
a huge shortcut. So when configuring that API

00:10:42.409 --> 00:10:45.250
request to chart IMG, what are the key settings?

00:10:45.509 --> 00:10:47.909
Two main things mentioned. First, using storage

00:10:47.909 --> 00:10:50.929
in the API's URL path. That tells Chart-IMG

00:10:50.929 --> 00:10:53.110
to generate the chart and give you back a public

00:10:53.110 --> 00:10:55.389
URL link to the image file, which is what we

00:10:55.389 --> 00:10:57.210
need. Okay, so you get a link to the picture.

00:10:57.429 --> 00:10:59.690
Yep. And second, you need to dynamically insert

00:10:59.690 --> 00:11:01.889
the stock ticker the user actually asked for.

00:11:02.090 --> 00:11:04.570
Use an n8n expression for that, something like

00:11:04.570 --> 00:11:07.230
{{ $json.query }}. That pulls the ticker, say

00:11:07.230 --> 00:11:10.210
NASDAQ:GOOGL, from the data the brain sent

00:11:10.210 --> 00:11:13.129
over and sticks it into the API call. Got it.

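Outside of n8n, that same request could be sketched like this. The exact endpoint path, header name, and body fields are assumptions modeled on Chart-IMG's documented examples — the two details that do come from the discussion are "storage" in the path (to get back a public image URL) and the dynamically inserted ticker.

```python
# Hedged sketch of the Chart-IMG chart-generation call. Endpoint path
# and parameter names are assumptions -- verify against the Chart-IMG
# API docs before using.
def build_chart_request(ticker: str, api_key: str) -> dict:
    return {
        # "storage" asks the service to host the image and return a URL
        "url": "https://api.chart-img.com/v2/tradingview/advanced-chart/storage",
        "headers": {"x-api-key": api_key, "Content-Type": "application/json"},
        # the ticker is inserted dynamically, per user request
        "body": {"symbol": ticker, "interval": "1D",
                 "studies": [{"name": "MACD"}]},
    }

req = build_chart_request("NASDAQ:GOOGL", "YOUR_CHART_IMG_KEY")
```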
00:11:13.169 --> 00:11:15.629
So the chart gets generated. We get a URL back.

00:11:16.110 --> 00:11:18.929
What's step three? Step three is actually downloading

00:11:18.929 --> 00:11:21.909
that image. You need the image file, not just

00:11:21.909 --> 00:11:24.730
the link, to feed to the Vision AI later. So

00:11:24.730 --> 00:11:27.190
that requires a second HTTP request node. Another

00:11:27.190 --> 00:11:29.690
API call. Yeah, this one just takes the URL from

00:11:29.690 --> 00:11:33.029
step two and fetches the actual image data. And

00:11:33.029 --> 00:11:35.009
the crucial setting here is you have to set the

00:11:35.009 --> 00:11:38.179
response format to binary. Binary. Okay, what

00:11:38.179 --> 00:11:40.399
exactly is binary data here? Think of binary

00:11:40.399 --> 00:11:43.240
data as the raw file format for the image itself,

00:11:43.360 --> 00:11:45.820
the actual pixels and colors that an AI vision

00:11:45.820 --> 00:11:48.279
model can understand. Not text. Right. If you

00:11:48.279 --> 00:11:50.299
forget to set it to binary, the node just downloads

00:11:50.299 --> 00:11:52.679
the image link as text, and when you pass that

00:11:52.679 --> 00:11:55.259
text to the vision AI, well, it can't see text.

00:11:55.340 --> 00:11:57.759
It needs the picture. Analysis fails instantly.

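A quick way to see the binary-versus-text distinction: real image bytes carry a file signature, while the URL string does not. A small sketch (urllib stands in for n8n's HTTP Request node set to binary):

```python
# The vision model needs raw image bytes, not the link as text.
# A real PNG file starts with this 8-byte magic signature.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def looks_like_png(data: bytes) -> bool:
    return data[:8] == PNG_MAGIC

def fetch_image(url: str) -> bytes:
    # Equivalent of the second HTTP Request node with response format
    # set to binary: read() returns the raw bytes of the image.
    from urllib.request import urlopen
    with urlopen(url) as resp:
        return resp.read()

# Passing the link itself instead of downloaded bytes fails the check:
assert not looks_like_png(b"https://example.com/chart.png")
```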
00:11:58.250 --> 00:12:01.269
Okay, binary format is key for the image download.

00:12:01.549 --> 00:12:04.490
And that brings us to step four, the actual specialized

00:12:04.490 --> 00:12:08.470
analysis using AI vision. And this is where that

00:12:08.470 --> 00:12:10.629
dual AI strategy really comes into play, right?

00:12:10.710 --> 00:12:13.169
Exactly. Because even though our brain is DeepSeek,

00:12:13.169 --> 00:12:15.649
for this specific task, analyzing the visual

00:12:15.649 --> 00:12:19.289
chart, we switch models, we use GPT-4o. Ah,

00:12:19.350 --> 00:12:22.149
interesting. Why the switch? Because, well...

00:12:22.350 --> 00:12:24.269
Currently, GPT-4o is generally considered the

00:12:24.269 --> 00:12:26.529
best model out there, specifically for visual

00:12:26.529 --> 00:12:29.029
analysis, for understanding images. So you're

00:12:29.029 --> 00:12:32.029
using the best tool for each distinct job, DeepSeek

00:12:32.029 --> 00:12:34.809
for reasoning and managing, GPT-4o for seeing.

00:12:35.289 --> 00:12:37.450
Precisely. We're essentially paying a bit more

00:12:37.450 --> 00:12:39.429
maybe for that specific vision step, but we're

00:12:39.429 --> 00:12:41.470
outsourcing that really critical high -value

00:12:41.470 --> 00:12:43.669
task, interpreting the candlestick patterns,

00:12:43.830 --> 00:12:46.830
the MACD lines, the volume bars visually to the

00:12:46.830 --> 00:12:49.029
absolute best specialist model available for

00:12:49.029 --> 00:12:51.100
that job. That makes a lot of sense. Pick the

00:12:51.100 --> 00:12:53.860
expert for the specific task. It really demonstrates

00:12:53.860 --> 00:12:58.139
the power of this modular approach. Whoa. Just

00:12:58.139 --> 00:13:00.600
imagine scaling this kind of architecture, like

00:13:00.600 --> 00:13:04.500
across a huge company, using the absolute best

00:13:04.500 --> 00:13:07.759
AI model for every single tiny step in a really

00:13:07.759 --> 00:13:10.370
complex business workflow. That feels like the

00:13:10.370 --> 00:13:12.009
future of building AI agents, doesn't it? It

00:13:12.009 --> 00:13:14.049
really does. So when you feed that downloaded

00:13:14.049 --> 00:13:16.909
chart image to GPT-4o, what do you ask it to

00:13:16.909 --> 00:13:18.889
look for? The prompt must be pretty specific.

00:13:19.029 --> 00:13:20.649
Oh, yeah. Super specific. You don't just say

00:13:20.649 --> 00:13:22.950
analyze this chart. You instruct it to perform

00:13:22.950 --> 00:13:26.409
distinct types of analysis. Yeah. Detailed candlestick

00:13:26.409 --> 00:13:29.490
pattern analysis, MACD analysis, like is it crossing

00:13:29.490 --> 00:13:32.429
over? Volume analysis, are there spikes? And

00:13:32.429 --> 00:13:35.309
crucially, to identify and map the key support

00:13:35.309 --> 00:13:37.610
and resistance levels directly onto what it's seeing in

00:13:37.610 --> 00:13:39.710
the chart. So you're forcing it to act like a

00:13:39.710 --> 00:13:41.629
real technical analyst, looking for those specific

00:13:41.629 --> 00:13:43.529
indicators, not just describing the picture.

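The vision step described above maps onto the standard chat-completions image format: the downloaded bytes go in as a base64 data URL next to the text instructions. The instruction wording below paraphrases the discussion; the payload shape follows the usual image_url convention.

```python
import base64

# Sketch of the vision request: GPT-4o accepts images as base64 data
# URLs alongside a text prompt. The instructions force specific
# analyses rather than a generic "describe this chart".
def build_vision_request(image_bytes: bytes) -> dict:
    b64 = base64.b64encode(image_bytes).decode()
    instructions = (
        "Perform: 1) detailed candlestick pattern analysis, "
        "2) MACD analysis (crossovers, divergence), "
        "3) volume analysis (spikes), and "
        "4) map key support and resistance levels visible on the chart."
    )
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": instructions},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

req = build_vision_request(b"\x89PNG\r\n\x1a\nfake-pixel-data")
```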
00:13:43.710 --> 00:13:46.110
Exactly. You want a professional readout, actionable

00:13:46.110 --> 00:13:48.960
insights derived from the visual data. So drilling

00:13:48.960 --> 00:13:50.840
down on that, what is the biggest advantage then

00:13:50.840 --> 00:13:53.320
of using an AI vision model to look at the chart

00:13:53.320 --> 00:13:56.480
image compared to just feeding a regular text

00:13:56.480 --> 00:13:59.840
-based AI model the raw price and volume data

00:13:59.840 --> 00:14:02.620
in tables? Great question. It's about pattern

00:14:02.620 --> 00:14:05.000
recognition, really. The vision model can instantly

00:14:05.000 --> 00:14:08.700
see complex visual patterns like a bullish engulfing

00:14:08.700 --> 00:14:11.240
candlestick pattern or the specific shape of

00:14:11.240 --> 00:14:13.759
a MACD divergence or a sudden volume spike

00:14:13.759 --> 00:14:16.299
coinciding with a price move. It can recognize

00:14:16.299 --> 00:14:18.899
these holistic visual signals. much faster and

00:14:18.899 --> 00:14:21.700
often more reliably than a text model trying

00:14:21.700 --> 00:14:24.200
to infer those same patterns just from raw numbers

00:14:24.200 --> 00:14:26.460
in a table. It literally sees the picture the

00:14:26.460 --> 00:14:28.820
way a human analyst would. So we put it all together.

00:14:29.389 --> 00:14:31.730
User sends a message on Telegram. The brain,

00:14:31.889 --> 00:14:33.529
DeepSeek, figures out what's needed, maybe

00:14:33.529 --> 00:14:35.649
calls a specialist sub -workflow. The specialist,

00:14:35.850 --> 00:14:39.070
using chart IMG API, generates the chart image

00:14:39.070 --> 00:14:42.289
URL, then downloads the binary image data. Right.

00:14:42.350 --> 00:14:45.129
Then that image is sent to the vision AI, GPT

00:14:45.129 --> 00:14:48.250
-4o, with specific analysis instructions. GPT

00:14:48.250 --> 00:14:50.990
-4o sends back the detailed text analysis. Which

00:14:50.990 --> 00:14:53.110
goes back to the specialist, then back up to

00:14:53.110 --> 00:14:56.389
the brain. The brain takes that analysis, maybe

00:14:56.389 --> 00:14:58.649
combines it with conversation history or other

00:14:58.649 --> 00:15:01.149
analysis if it was a comparison request. And

00:15:01.149 --> 00:15:03.669
finally formats it all into a nice message and

00:15:03.669 --> 00:15:06.149
sends it back to the user on Telegram. Exactly.

00:15:06.149 --> 00:15:08.950
And the amazing thing is that whole round trip.

00:15:09.500 --> 00:15:11.639
Apparently it takes only about 10 to 20 seconds.

00:15:11.879 --> 00:15:14.700
Wow, that's remarkably fast for that much processing

00:15:14.700 --> 00:15:17.259
and multiple AI calls. It really is. And that

00:15:17.259 --> 00:15:19.700
architecture really proves its worth with that

00:15:19.700 --> 00:15:21.700
multi -stock comparison feature we talked about

00:15:21.700 --> 00:15:23.840
earlier. Right, comparing Alphabet and Microsoft.

00:15:24.240 --> 00:15:26.120
Yeah. The brain is smart enough to recognize

00:15:26.120 --> 00:15:29.440
this needs two separate analyses. So it calls

00:15:29.440 --> 00:15:32.580
the specialist tool twice, once for each stock.

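The comparison flow reduces to a simple fan-out-then-synthesize pattern. A minimal sketch, with `analyze_chart` standing in for the real specialist sub-workflow call and plain string-joining standing in for the brain's reasoning:

```python
# Fan-out / synthesize sketch of the comparison feature: one specialist
# call per ticker, then a single combined summary. analyze_chart is a
# stand-in for the real sub-workflow invocation.
def analyze_chart(ticker: str) -> str:
    return f"[technical readout for {ticker}]"

def compare(tickers: list) -> str:
    reports = {t: analyze_chart(t) for t in tickers}   # one call per stock
    bullets = "\n".join(f"- {t}: {r}" for t, r in reports.items())
    return f"Comparative summary:\n{bullets}"

summary = compare(["NASDAQ:GOOGL", "NASDAQ:MSFT"])
```

In the real system the final join is itself an LLM step — DeepSeek reasoning over the two readouts — but the orchestration shape is the same.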
00:15:32.860 --> 00:15:35.159
Gets back two sets of chart analysis texts from

00:15:35.159 --> 00:15:39.559
GPT-4o. And then... The brain performs that high

00:15:39.559 --> 00:15:42.779
level reasoning itself using DeepSeek to synthesize

00:15:42.779 --> 00:15:44.919
those two reports into that final comparative

00:15:44.919 --> 00:15:47.639
summary for the user. It's a real orchestration

00:15:47.639 --> 00:15:49.940
feat, managing that flow. Definitely sounds like

00:15:49.940 --> 00:15:52.360
it. Now, are there any limitations or things

00:15:52.360 --> 00:15:55.179
to watch out for with this specific build? Yeah,

00:15:55.240 --> 00:15:57.480
a couple of key ones mentioned. First, that free

00:15:57.480 --> 00:16:00.740
chart IMG API they used. It's limited. It only

00:16:00.740 --> 00:16:03.340
works for NASDAQ -listed securities. Ah, okay.

00:16:03.440 --> 00:16:06.440
So if you ask for a stock on the London Stock

00:16:06.440 --> 00:16:08.740
Exchange or something. The sub -workflow will

00:16:08.740 --> 00:16:11.620
probably just fail, hopefully gracefully, but

00:16:11.620 --> 00:16:14.259
you won't get a chart or analysis back. That's

00:16:14.259 --> 00:16:17.399
a limitation of that specific free tool. Good

00:16:17.399 --> 00:16:19.700
to know. What about common mistakes people make

00:16:19.700 --> 00:16:21.559
when trying to build something like this? Two

00:16:21.559 --> 00:16:23.950
big ones were highlighted. And they're easy traps

00:16:23.950 --> 00:16:26.970
to fall into. First, we mentioned it, but it's

00:16:26.970 --> 00:16:30.509
worth repeating. The tool name. The name you

00:16:30.509 --> 00:16:32.850
give your specialist workflow, like GetChart

00:16:32.850 --> 00:16:35.769
in your no -code platform, must perfectly, exactly

00:16:35.769 --> 00:16:38.110
match the tool name you wrote in the brain system

00:16:38.110 --> 00:16:40.669
prompt. Case -sensitive spaces, the whole deal.

00:16:40.850 --> 00:16:43.110
Got it. Get it wrong, and the brain can't call

00:16:43.110 --> 00:16:45.559
the specialist. Dead end. Yeah. The second big

00:16:45.559 --> 00:16:47.740
one, and we also touched on this, is ensuring

00:16:47.740 --> 00:16:50.860
the chat ID variable. From Telegram. Yes, making

00:16:50.860 --> 00:16:52.960
absolute sure it's correctly mapped not just

00:16:52.960 --> 00:16:54.980
to the memory node, but also to all the Telegram

00:16:54.980 --> 00:16:57.740
send nodes that reply back to the user. Why all

00:16:57.740 --> 00:17:00.059
the send nodes too? Because if you only link

00:17:00.059 --> 00:17:03.179
it to memory, but not the reply node, the agent

00:17:03.179 --> 00:17:05.279
might remember the right conversation, but it

00:17:05.279 --> 00:17:07.380
could still accidentally send the reply to the

00:17:07.380 --> 00:17:10.299
wrong user's chat window. You need that chat

00:17:10.299 --> 00:17:13.160
ID guiding both memory and replies to keep everything

00:17:13.160 --> 00:17:16.019
strictly tied to the right conversation. Missing

00:17:16.019 --> 00:17:17.900
that is how you get those frustrating crossed

00:17:17.900 --> 00:17:21.220
messages. Okay, so map the chat ID everywhere

00:17:21.220 --> 00:17:24.299
it relates to a specific user interaction, memory,

00:17:24.519 --> 00:17:27.359
and sending messages back. Bingo. Nail those

00:17:27.359 --> 00:17:29.579
two things, exact tool names and consistent chat

00:17:29.579 --> 00:17:33.740
ID mapping, and you avoid probably... 80 % of

00:17:33.740 --> 00:17:35.579
the common frustrations. And all this efficiency,

00:17:35.839 --> 00:17:38.420
this complex orchestration, it really brings

00:17:38.420 --> 00:17:40.660
us back to the power of no -code advantage, doesn't

00:17:40.660 --> 00:17:42.839
it? Absolutely. I mean, think about it. This

00:17:42.839 --> 00:17:45.599
system connects multiple different APIs, manages

00:17:45.599 --> 00:17:48.119
conversational state with memory, strategically

00:17:48.119 --> 00:17:50.559
swaps between different best -in -class AI models

00:17:50.559 --> 00:17:53.000
for different tasks. And delivers results in

00:17:53.000 --> 00:17:55.880
seconds. Yeah. And it was all built. Visually,

00:17:55.900 --> 00:17:58.960
in maybe a few hours, on a platform like n8n,

00:17:58.960 --> 00:18:01.559
a traditional software developer trying to code

00:18:01.559 --> 00:18:03.920
this from scratch, setting up the servers, the

00:18:03.920 --> 00:18:06.420
API integrations, the state management, the model

00:18:06.420 --> 00:18:09.180
switching logic, that could easily be a multi

00:18:09.180 --> 00:18:11.539
-week, maybe even multi -month project. It really

00:18:11.539 --> 00:18:13.500
lowers the barrier to creating sophisticated

00:18:13.500 --> 00:18:16.619
AI tools. Fundamentally. It democratizes the

00:18:16.619 --> 00:18:19.519
ability to build these complex, specialized AI

00:18:19.519 --> 00:18:22.069
agents. That's the real power here.

00:18:22.069 --> 00:18:24.069
So reflecting on this

00:18:24.069 --> 00:18:25.930
whole project, it feels like it demonstrates

00:18:25.930 --> 00:18:28.650
three really critical ideas for building advanced

00:18:28.650 --> 00:18:32.250
AI systems now. First, that separation of concerns.

00:18:32.690 --> 00:18:34.950
Using that brain specialist architecture, the

00:18:34.950 --> 00:18:37.250
CEO and the expert consultant model, it just

00:18:37.250 --> 00:18:39.670
makes systems cleaner, more modular and much

00:18:39.670 --> 00:18:42.109
easier to scale later on. Yeah, definitely. Second

00:18:42.109 --> 00:18:44.660
big takeaway, AI vision is a game changer. The

00:18:44.660 --> 00:18:47.039
ability for AI to actually analyze images like

00:18:47.039 --> 00:18:49.460
these stock charts unlocks totally new possibilities.

00:18:49.660 --> 00:18:51.940
It lets the agent see the market in a way text

00:18:51.940 --> 00:18:55.000
alone can't capture. It moves beyond just processing

00:18:55.000 --> 00:18:57.500
numbers. And third, it highlights that tools

00:18:57.500 --> 00:19:00.460
make the agent. An AI agent isn't just defined

00:19:00.460 --> 00:19:03.039
by its conversational ability anymore. It's defined

00:19:03.039 --> 00:19:05.839
by the specialized tools it can connect to and

00:19:05.839 --> 00:19:08.480
use effectively to perform complex multi -step

00:19:08.480 --> 00:19:10.720
actions out in the world. That's so true. And,

00:19:10.779 --> 00:19:12.799
you know, this specific brain plus specialist.

00:19:13.740 --> 00:19:15.759
tool architecture. It's not just for stocks.

00:19:15.940 --> 00:19:18.799
You can adapt this exact model for countless

00:19:18.799 --> 00:19:21.410
other specialized uses. Like what? Oh, I don't

00:19:21.410 --> 00:19:23.630
know. Maybe a financial advisor tool that pulls

00:19:23.630 --> 00:19:26.410
specific compliance documents or a trading education

00:19:26.410 --> 00:19:28.490
bot that can analyze user submitted practice

00:19:28.490 --> 00:19:31.849
trades or even hyper -focused content creation

00:19:31.849 --> 00:19:34.789
tools for any niche industry where you have specialized

00:19:34.789 --> 00:19:37.849
data sources or analysis steps. The pattern is

00:19:37.849 --> 00:19:40.369
reusable. That's a really powerful idea. So maybe

00:19:40.369 --> 00:19:42.390
the final thought for everyone listening is this.

00:19:42.509 --> 00:19:45.250
If you can build an expert technical stock analyst

00:19:45.250 --> 00:19:48.259
with no code. What other specialized tools, unique

00:19:48.259 --> 00:19:51.859
data sources, or expert processes exist in your

00:19:51.859 --> 00:19:54.079
field? Yeah, what could you connect an AI agent

00:19:54.079 --> 00:19:56.500
brain to? Because the future here seems to be

00:19:56.500 --> 00:19:58.960
less about just building a better chatbot and

00:19:58.960 --> 00:20:01.480
more about building better connections, orchestrating

00:20:01.480 --> 00:20:04.359
these specialized tools to achieve complex, valuable

00:20:04.359 --> 00:20:07.599
tasks. It's about connection and action, not

00:20:07.599 --> 00:20:08.259
just conversation.
