WEBVTT

00:00:00.000 --> 00:00:01.919
Okay, so I want you to just imagine something

00:00:01.919 --> 00:00:03.859
for a second. Right. Imagine your automation

00:00:03.859 --> 00:00:07.280
workflows weren't just these silent scripts running

00:00:07.280 --> 00:00:08.900
in the background. Right, not just invisible

00:00:08.900 --> 00:00:11.839
code doing its thing. Exactly. What if they were

00:00:11.839 --> 00:00:14.179
conversational teammates? Teammates you could

00:00:14.179 --> 00:00:16.730
actually ask questions to in real time. Like

00:00:16.730 --> 00:00:18.910
a coworker who, you know, already knows all your

00:00:18.910 --> 00:00:21.629
data and can get things done instantly. This

00:00:21.629 --> 00:00:23.670
is really the fundamental shift that arrived

00:00:23.670 --> 00:00:27.449
in late 2025 with n8n's Chat Hub. It moves

00:00:27.449 --> 00:00:29.710
automation from that old set and forget model

00:00:29.710 --> 00:00:34.170
to live steering. Okay, let's unpack this. We're

00:00:34.170 --> 00:00:37.729
diving into Max's breakdown of the n8n

00:00:37.729 --> 00:00:40.009
Chat Hub. Yeah. And it's not just about a chat

00:00:40.009 --> 00:00:43.289
interface. It's really about how this tool becomes

00:00:43.289 --> 00:00:46.270
a unified command center for delegating tasks

00:00:46.270 --> 00:00:48.670
and managing a bunch of different AI models.

00:00:48.789 --> 00:00:50.969
All in one place. Our mission today is to understand

00:00:50.969 --> 00:00:53.750
the three core features, see why this really

00:00:53.750 --> 00:00:56.469
changes complex automation, and then get into

00:00:56.469 --> 00:00:58.670
the practical steps for setting up agents that

00:00:58.670 --> 00:01:01.189
can analyze data and support customers. So what

00:01:01.189 --> 00:01:03.109
does this all mean for you? It means your system's

00:01:03.109 --> 00:01:04.890
just got a voice and we're going to learn how

00:01:04.890 --> 00:01:07.920
to talk back. And when the Chat Hub was released

00:01:07.920 --> 00:01:12.939
on December 15th, 2025, it was way more

00:01:12.939 --> 00:01:15.140
than just a chat window. It really became this

00:01:15.140 --> 00:01:18.040
centralized AI command center. Right. For talking

00:01:18.040 --> 00:01:21.019
to your entire infrastructure, not just one AI.

00:01:21.260 --> 00:01:23.280
Exactly. So the simplest way to think about it

00:01:23.280 --> 00:01:27.689
is this. It's like having GPT-5, Claude, and

00:01:27.689 --> 00:01:30.750
your own custom AI agents all in one spot. And

00:01:30.750 --> 00:01:33.609
crucially, they're already connected to all your

00:01:33.609 --> 00:01:36.489
automation tools, your data. That's the key part.

00:01:36.890 --> 00:01:38.670
That connectivity is the technical difference,

00:01:38.829 --> 00:01:40.590
isn't it? It is. It's what separates it from

00:01:40.590 --> 00:01:42.969
just, you know, using ChatGPT directly. A direct

00:01:42.969 --> 00:01:45.909
chat model is great for thinking, for brainstorming.

00:01:45.969 --> 00:01:47.790
But it's sealed off from the world. Totally.

00:01:47.909 --> 00:01:50.450
It can write an email, but it can't send the

00:01:50.450 --> 00:01:52.709
email from your account. It can't query your

00:01:52.709 --> 00:01:55.049
live database. It definitely can't run a multi

00:01:55.049 --> 00:01:57.590
-step workflow. Before this, automation always

00:01:57.590 --> 00:01:59.590
felt like a black box. You build something, hit

00:01:59.590 --> 00:02:02.500
run, and just hope for the best. And if it failed

00:02:02.500 --> 00:02:04.900
three steps in, you wouldn't know until later

00:02:04.900 --> 00:02:07.700
when you're digging through logs. It was so passive.

00:02:07.939 --> 00:02:10.620
And that's the critical insight here, the paradigm

00:02:10.620 --> 00:02:13.199
shift. It's a move to what they call collaborative

00:02:13.199 --> 00:02:15.759
automation. You're not just sending a request

00:02:15.759 --> 00:02:18.340
into the void anymore. You're treating the workflow

00:02:18.340 --> 00:02:20.919
like an active teammate. You're asking it in

00:02:20.919 --> 00:02:23.780
plain English. Hey, what's happening right now?

00:02:23.860 --> 00:02:26.580
Why did that last step fail? What data do I need

00:02:26.580 --> 00:02:29.960
next? It's that live steering capability. Think

00:02:29.960 --> 00:02:32.120
about it. If your daily sales report workflow

00:02:32.120 --> 00:02:35.500
fails on step four. The CRM credential node,

00:02:35.659 --> 00:02:38.099
always. Right. Instead of digging through logs,

00:02:38.219 --> 00:02:40.319
you just ask the hub, why did step four fail?

00:02:40.719 --> 00:02:43.139
And it replies instantly, credential expired.

00:02:43.789 --> 00:02:46.090
Pause the process. Please update the key. It's

00:02:46.090 --> 00:02:48.069
the difference between sending an email and getting

00:02:48.069 --> 00:02:49.810
an instant reply from someone sitting right next

00:02:49.810 --> 00:02:51.610
to you. It just removes all the guesswork. So

00:02:51.610 --> 00:02:53.949
connecting this to the bigger picture then, why

00:02:53.949 --> 00:02:56.150
is collaborative automation so much better than

00:02:56.150 --> 00:02:58.909
the old set and forget way, especially with complex

00:02:58.909 --> 00:03:01.849
data? It allows for live steering and debugging

00:03:01.849 --> 00:03:04.889
of systems, turning passive scripts into queried

00:03:04.889 --> 00:03:07.210
resources. Okay, so if the concept of talking

00:03:07.210 --> 00:03:10.069
to your systems is clear, what did they actually

00:03:10.069 --> 00:03:13.909
build to make this happen? Let's get into the

00:03:13.909 --> 00:03:17.009
three core features. The first one is the multi

00:03:17.009 --> 00:03:20.090
-model chat interface. This is a huge efficiency

00:03:20.090 --> 00:03:22.210
boost. I mean, it solves that problem of trying

00:03:22.210 --> 00:03:24.949
to force one AI model to do everything badly.

00:03:25.449 --> 00:03:27.889
This interface lets you switch between large

00:03:27.889 --> 00:03:29.689
language models right in the middle of a thread.

00:03:29.870 --> 00:03:31.849
You don't lose your context. You don't have to

00:03:31.849 --> 00:03:33.909
start over. It's like having a small specialized

00:03:33.909 --> 00:03:37.300
team on call. You can start with GPT-5 for reasoning.

00:03:37.400 --> 00:03:39.139
Yeah, for the big picture stuff. But then you

00:03:39.139 --> 00:03:41.020
realize you need to analyze a huge document,

00:03:41.099 --> 00:03:43.360
like 50,000 words, so you just... switch over

00:03:43.360 --> 00:03:45.840
to Claude, which is known for handling that huge

00:03:45.840 --> 00:03:48.120
context. Or you need to check a stock price right

00:03:48.120 --> 00:03:50.379
now, so you flip to Gemini or a model with a

00:03:50.379 --> 00:03:52.759
live web search tool, you get real-time info.
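
NOTE: The per-task model switching described above can be pictured as a tiny routing table. This is a toy sketch only; the model names and the routing mechanism are illustrative, not the actual Chat Hub API.

```python
# Toy sketch of per-task model routing, as described above.
# The model names and the task mapping are illustrative --
# this is not the actual Chat Hub API.
ROUTES = {
    "reasoning": "gpt-5",        # big-picture planning
    "long_document": "claude",   # large context windows
    "live_data": "gemini",       # paired with a web-search tool
}

def pick_model(task: str) -> str:
    """Route a task to the model that suits it, with a default."""
    return ROUTES.get(task, ROUTES["reasoning"])

print(pick_model("long_document"))  # claude
```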

00:03:52.939 --> 00:03:54.879
No more knowledge cut-offs. What's fascinating

00:03:54.879 --> 00:03:57.400
here is the ability to use these specialized

00:03:57.400 --> 00:04:00.580
models, but how does ensuring consistency across

00:04:00.580 --> 00:04:03.759
those multi -model interactions affect the outcome?

00:04:04.000 --> 00:04:06.780
You use each model where it shines, leading to

00:04:06.780 --> 00:04:10.479
higher quality and more reliable analysis. Okay,

00:04:10.539 --> 00:04:13.300
so the second feature is custom personal agents.

00:04:13.939 --> 00:04:17.480
Think of these as the lightweight players on

00:04:17.480 --> 00:04:20.220
the team. They're simple agents with custom instructions

00:04:20.220 --> 00:04:23.800
and very specific limited tool access. And you

00:04:23.800 --> 00:04:26.300
can create them without building a whole complex

00:04:26.300 --> 00:04:28.459
workflow. So for someone just starting, this

00:04:28.459 --> 00:04:30.740
is probably the easiest way in. Absolutely. You

00:04:30.740 --> 00:04:32.600
could make a content editor agent. The instruction

00:04:32.600 --> 00:04:35.699
is just, always use Claude for consistency. And

00:04:35.699 --> 00:04:38.639
that's it. Simple. But the real power, the heavy

00:04:38.639 --> 00:04:41.339
machinery, that's the third feature. workflow

00:04:41.339 --> 00:04:44.079
agents. This is the game changer. This is where

00:04:44.079 --> 00:04:47.420
you connect your big, complex n8n workflows directly

00:04:47.420 --> 00:04:49.939
to the chat using the new chat trigger node.
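
NOTE: The chat-trigger pattern boils down to: a message arrives, the workflow runs, and the reply lands back in the same thread. A rough sketch, with every function name and reply string invented for illustration; this is not n8n code.

```python
# Hypothetical sketch of a chat-triggered workflow: the chat
# message is the trigger, the workflow does the work, and the
# result is posted back into the thread. All names are invented.
def run_workflow(message: str) -> str:
    # Stand-in for the real workflow: query a database, analyze, etc.
    if "sales" in message.lower():
        return "Sales analysis complete."
    return "Workflow finished."

def on_chat_message(message: str) -> str:
    """Chat trigger: run the workflow, reply in the thread."""
    result = run_workflow(message)
    return f"[agent] {result}"

print(on_chat_message("Analyze last month's sales data"))
# [agent] Sales analysis complete.
```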

00:04:50.120 --> 00:04:52.600
So you're merging conversation with actual execution.

00:04:52.899 --> 00:04:55.259
That's it. Instead of manually triggering a data

00:04:55.259 --> 00:04:57.540
analysis workflow, you just chat with it. You

00:04:57.540 --> 00:05:00.120
say, analyze last month's sales data. And the

00:05:00.120 --> 00:05:01.800
workflow just runs in the background, it connects

00:05:01.800 --> 00:05:03.800
to the database, does the analysis. And then

00:05:03.800 --> 00:05:05.480
replies to you right there in the chat window,

00:05:05.680 --> 00:05:08.720
ready for your next question. Whoa. Imagine

00:05:08.720 --> 00:05:12.079
scaling this capability to manage a billion queries

00:05:12.079 --> 00:05:15.019
across diverse model architectures. That's a

00:05:15.019 --> 00:05:19.040
serious leap forward. So moving into

00:05:19.040 --> 00:05:21.279
implementation, there are a few really crucial

00:05:21.279 --> 00:05:24.060
details you have to get right for this to actually

00:05:24.060 --> 00:05:28.259
work. You must use the newest version of the

00:05:28.259 --> 00:05:30.579
chat trigger node. That is the number one pitfall

00:05:30.579 --> 00:05:32.759
we see. People are using the old legacy node.

00:05:32.980 --> 00:05:34.860
It looks similar, right? Very similar. But the

00:05:34.860 --> 00:05:37.360
new one is what handles that back and forth conversational

00:05:37.360 --> 00:05:39.300
flow. If you're stuck, check that node version

00:05:39.300 --> 00:05:42.199
first. Okay. And the other key thing is enabling

00:05:42.199 --> 00:05:45.439
streaming on the AI agent node. Jargon alert.

00:05:45.759 --> 00:05:48.360
What's streaming? Streaming just means the AI's

00:05:48.360 --> 00:05:51.000
answer appears in real time, word by word, like

00:05:51.000 --> 00:05:53.259
someone typing. Not a big block of text after

00:05:53.259 --> 00:05:55.149
a long wait. And that's important because it

00:05:55.149 --> 00:05:57.350
feels more like a conversation. It feels collaborative.

00:05:57.709 --> 00:05:59.810
Exactly. It's about that feeling of interaction.
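
NOTE: Streaming can be pictured as a generator yielding tokens one at a time, instead of one blocking return at the end. A minimal stand-in sketch; real streaming arrives over a network connection rather than from a local string.

```python
# Minimal illustration of streaming vs. a single blocking reply:
# tokens are yielded as they become available, so the answer
# appears word by word instead of all at once.
def stream_reply(text: str):
    for token in text.split():
        yield token

for token in stream_reply("The credential on step four has expired."):
    print(token, end=" ", flush=True)
print()
```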

00:06:00.129 --> 00:06:02.610
And what about knowledge cutoffs? I mean, most

00:06:02.610 --> 00:06:04.810
models don't know what happened last week. Right.

00:06:04.949 --> 00:06:06.949
So you have to connect external tools. You can

00:06:06.949 --> 00:06:09.589
plug in something like Jina AI for real-time

00:06:09.589 --> 00:06:12.589
web search or connect to your own knowledge bases.

00:06:12.949 --> 00:06:15.209
It keeps the agent current. So now for the power

00:06:15.209 --> 00:06:19.079
move. Turning an existing complex workflow into

00:06:19.079 --> 00:06:22.800
an agent your team can talk to. It's surprisingly

00:06:22.800 --> 00:06:24.860
simple. You just go into the chat trigger and

00:06:24.860 --> 00:06:27.660
you turn on one setting: make available in n8n

00:06:27.660 --> 00:06:29.620
chat. And you have to give it a clear, descriptive

00:06:29.620 --> 00:06:32.720
name, not workflow three. Please don't. Something

00:06:32.720 --> 00:06:35.180
like sales data analyzer. So your team knows

00:06:35.180 --> 00:06:37.600
exactly what it does. You know, I have to admit,

00:06:37.740 --> 00:06:40.639
I still wrestle with prompt drift myself sometimes.

00:06:40.939 --> 00:06:43.060
Oh, everyone does. Especially when you're dealing

00:06:43.060 --> 00:06:45.779
with these little setup details. Like just remembering

00:06:45.779 --> 00:06:48.480
to replace that old chat trigger node. It sounds

00:06:48.480 --> 00:06:51.120
so obvious. But it's an easy mistake to make

00:06:51.120 --> 00:06:52.899
when you're moving fast and just trying to get

00:06:52.899 --> 00:06:54.819
it working. It really happens to everyone. But

00:06:54.819 --> 00:06:57.000
when you get those details right, the use cases

00:06:57.000 --> 00:07:00.360
are just, they're transformative. Like an interactive

00:07:00.360 --> 00:07:03.120
customer support bot. Perfect example. Your support

00:07:03.120 --> 00:07:06.240
team can just ask, check status for order, hashtag

00:07:06.240 --> 00:07:09.829
12345. And they don't have to log into three

00:07:09.829 --> 00:07:11.790
different systems to get that one answer. Right.

00:07:11.850 --> 00:07:15.290
It pulls from the CRM, inventory, shipping, all

00:07:15.290 --> 00:07:17.310
at once. The psychological win there is huge.
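
NOTE: The "one question, three systems" lookup above can be sketched as a fan-out-and-merge. All data, system names, and functions here are invented stand-ins for the real CRM, inventory, and shipping integrations.

```python
# Hypothetical sketch of the order-lookup agent: fan out to
# CRM, inventory, and shipping stubs and merge the answers
# into one reply. Every value below is invented.
def crm_lookup(order_id):
    return {"customer": "A. Example", "status": "paid"}

def inventory_lookup(order_id):
    return {"in_stock": True}

def shipping_lookup(order_id):
    return {"carrier": "ACME", "eta_days": 2}

def check_order(order_id: str) -> dict:
    """One question in chat, three systems queried behind it."""
    report = {"order": order_id}
    for source in (crm_lookup, inventory_lookup, shipping_lookup):
        report.update(source(order_id))
    return report

print(check_order("12345"))
```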

00:07:17.569 --> 00:07:20.149
Or the data analysis assistant. Instead of pulling

00:07:20.149 --> 00:07:22.930
raw data, you just query the database in chat.

00:07:23.189 --> 00:07:25.709
What were our top 10 products last quarter? And

00:07:25.709 --> 00:07:28.689
you get the answer already summarized. So given

00:07:28.689 --> 00:07:30.769
that the workflow agent is the heavy machinery

00:07:30.769 --> 00:07:33.829
here, what's the fastest win a learner can get

00:07:33.829 --> 00:07:35.810
when they're starting out with a custom agent?

00:07:36.199 --> 00:07:38.680
Focus on building one workflow agent that reliably

00:07:38.680 --> 00:07:41.860
answers a single painful question your team asks

00:07:41.860 --> 00:07:45.480
every day. Solve one problem perfectly, then

00:07:45.480 --> 00:07:47.920
you can scale. Now, when we talk about rolling

00:07:47.920 --> 00:07:50.959
this out to a bigger team, especially in a company,

00:07:51.199 --> 00:07:54.399
control and security are everything. And this

00:07:54.399 --> 00:07:56.339
is where the chat user role comes in. This is

00:07:56.339 --> 00:07:59.139
a vital guardrail. It's designed for non-technical

00:07:59.139 --> 00:08:01.660
people, your sales team, your marketers, so they

00:08:01.660 --> 00:08:03.699
can use this power safely. So what do they see?

00:08:04.220 --> 00:08:06.339
They only see the chat screen and the agents

00:08:06.339 --> 00:08:08.259
you've chosen to expose to them. They can't see

00:08:08.259 --> 00:08:10.160
the workflows. They can't see the credentials.
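
NOTE: The chat user guardrail amounts to simple role-based filtering: that role only ever sees the agents an admin has exposed. A sketch with invented agent names and role labels, not the actual n8n permission model.

```python
# Sketch of the "chat user" guardrail: this role sees only the
# agents exposed to it, never workflows or credentials.
# Agent names and role labels are illustrative.
EXPOSED_TO_CHAT_USERS = {"Customer Order Lookup", "Sales Data Analyzer"}

def visible_agents(role: str, all_agents: set) -> set:
    if role == "chat_user":
        return all_agents & EXPOSED_TO_CHAT_USERS
    return all_agents  # admins and builders see everything

all_agents = {"Customer Order Lookup", "Sales Data Analyzer",
              "Internal Payroll Sync"}
print(visible_agents("chat_user", all_agents))
```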

00:08:10.459 --> 00:08:12.660
And can't break the engine. Exactly. It's a perfect

00:08:12.660 --> 00:08:15.120
separation of duties. They get the power without

00:08:15.120 --> 00:08:17.319
the risk. We should probably touch on limitations,

00:08:17.540 --> 00:08:19.860
too. Yes. Remember that personal agents are the

00:08:19.860 --> 00:08:22.000
lightweight ones. They're great for quick text

00:08:22.000 --> 00:08:24.480
tasks, but they can't do the heavy lifting like

00:08:24.480 --> 00:08:26.500
reading through a huge knowledge base. For that,

00:08:26.639 --> 00:08:28.699
you need a RAG, right? Retrieval Augmented Generation.
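
NOTE: The vector-store step in RAG boils down to: score stored document chunks against the question, retrieve the best match, and hand it to the model. A toy sketch using word overlap in place of real embeddings, with invented data.

```python
# Toy stand-in for the vector-store step in RAG. Real setups use
# embeddings and a vector index; word overlap is used here only
# to keep the sketch self-contained. The chunks are invented.
def score(question: str, chunk: str) -> int:
    q = set(question.lower().split())
    return len(q & set(chunk.lower().split()))

def retrieve(question: str, chunks: list) -> str:
    """Return the chunk most relevant to the question."""
    return max(chunks, key=lambda c: score(question, c))

chunks = [
    "Refunds are processed within 5 business days.",
    "Shipping is free on orders over 50 dollars.",
]
print(retrieve("How long do refunds take?", chunks))
```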

00:08:29.120 --> 00:08:31.660
Correct. And for proper RAG, you must

00:08:31.660 --> 00:08:35.230
use a workflow agent. Why is that? Because RAG

00:08:35.230 --> 00:08:38.289
needs a vector store node to handle all the indexing

00:08:38.289 --> 00:08:41.090
and retrieval of that big document. A personal

00:08:41.090 --> 00:08:43.490
agent is just too lightweight to manage that

00:08:43.490 --> 00:08:46.029
complexity. So to make sure these are reliable,

00:08:46.210 --> 00:08:48.870
you really have to treat the rollout like a product

00:08:48.870 --> 00:08:51.529
launch. It helps to think of it like you're hiring

00:08:51.529 --> 00:08:54.399
a new team. You need to give them clear names,

00:08:54.539 --> 00:08:57.820
not Agent 1. Call it Customer Order Lookup. And

00:08:57.820 --> 00:08:59.919
write a good description that answers, what does

00:08:59.919 --> 00:09:02.299
this do and when should I use it? And then the

00:09:02.299 --> 00:09:04.820
system prompts. That's like the employee handbook.

00:09:05.000 --> 00:09:07.440
That's where you set the rules, the tone, the

00:09:07.440 --> 00:09:10.840
limits. Things like, always verify customer identity

00:09:10.840 --> 00:09:13.700
before sharing order info. It makes the agent

00:09:13.700 --> 00:09:16.080
dependable. And finally, you have to test it.
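
NOTE: The rollout advice above — clear name, a description answering "what does this do and when should I use it," and a rule-setting system prompt — can be collected in one spec. This config shape is illustrative, not an n8n format.

```python
# Illustrative agent "spec" collecting the rollout advice above.
# This is not an n8n config format; the fields just mirror the
# name / description / system-prompt checklist from the episode.
AGENT_SPEC = {
    "name": "Customer Order Lookup",
    "description": (
        "Looks up order status across CRM, inventory, and shipping. "
        "Use it when a customer asks where their order is."
    ),
    "system_prompt": (
        "Always verify customer identity before sharing order info. "
        "Keep replies short and factual."
    ),
}

def is_well_formed(spec: dict) -> bool:
    """Cheap pre-launch check: no placeholder names, all fields set."""
    return bool(spec["name"].strip()) and spec["name"].lower() not in {
        "agent 1", "workflow three"
    } and all(spec.values())

print(is_well_formed(AGENT_SPEC))  # True
```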

00:09:16.100 --> 00:09:17.919
You have to try and break it before you share

00:09:17.919 --> 00:09:21.009
it. Absolutely. Run edge cases. Ask confusing

00:09:21.009 --> 00:09:23.509
questions. If it can handle the stress test,

00:09:23.750 --> 00:09:26.110
it's ready for the team. Okay, so a quick recap

00:09:26.110 --> 00:09:28.990
of the comparison. Direct chat tools like ChatGPT

00:09:28.990 --> 00:09:32.169
are great for thinking. Brainstorming, drafting.

00:09:32.629 --> 00:09:35.330
And fully custom chatbots are incredibly powerful.

00:09:35.590 --> 00:09:39.049
But a massive engineering project to build and maintain.

00:09:39.389 --> 00:09:41.450
So Chat Hub sits right in that sweet spot in the

00:09:41.450 --> 00:09:44.629
middle. It's where your thinking turns into direct

00:09:44.629 --> 00:09:47.490
automated action. It gives you that custom behavior

00:09:47.490 --> 00:09:50.250
without the insane cost of building a whole system

00:09:50.250 --> 00:09:52.809
from scratch. So if security and cost control

00:09:52.809 --> 00:09:56.049
are paramount, which administrative control should

00:09:56.049 --> 00:09:59.070
an admin prioritize when setting up Chat Hub for

00:09:59.070 --> 00:10:01.840
a big team? They should focus on the chat user

00:10:01.840 --> 00:10:04.960
role, restrict credential management, and enable

00:10:04.960 --> 00:10:07.960
or disable specific, potentially expensive AI

00:10:07.960 --> 00:10:10.639
models. The essential insight we found here is

00:10:10.639 --> 00:10:12.519
that Chat Hub is really more than just a feature.

00:10:12.519 --> 00:10:16.139
It's a fundamental paradigm shift toward conversational

00:10:16.139 --> 00:10:18.820
automation. It just removes the guesswork. You

00:10:18.820 --> 00:10:21.700
can debug in plain language. You can guide complex

00:10:21.700 --> 00:10:24.580
processes in real time. You stop waiting for

00:10:24.580 --> 00:10:26.440
an output and start engaging with the pipeline.

00:10:26.960 --> 00:10:29.539
The teams that are going to move fastest are

00:10:29.539 --> 00:10:31.779
the ones that upgrade from sending tickets to

00:10:31.779 --> 00:10:34.399
having a live analyst, a system they can talk

00:10:34.399 --> 00:10:36.639
to and make decisions with in the moment. And

00:10:36.639 --> 00:10:38.340
here's where it gets really interesting for me.

00:10:38.740 --> 00:10:42.399
This shift means automation is no longer this

00:10:42.399 --> 00:10:44.460
invisible engine in the dark. It's an active

00:10:44.460 --> 00:10:46.799
teammate. It's an active teammate ready for you

00:10:46.799 --> 00:10:50.120
to delegate to. For anyone focused on fast, high

00:10:50.120 --> 00:10:53.000
-quality knowledge, this tool means you can now

00:10:53.000 --> 00:10:55.820
interrogate your data pipelines. You can question

00:10:55.820 --> 00:10:57.940
your workflows. You're not just waiting for the

00:10:57.940 --> 00:11:00.120
final report anymore. You're an active participant.

00:11:00.399 --> 00:11:02.460
An active participant in your own business logic.

00:11:02.659 --> 00:11:04.279
So if we accept that the future of automation

00:11:04.279 --> 00:11:06.659
is conversational, the final thought for you

00:11:06.659 --> 00:11:10.299
to consider is this. If your workflows could

00:11:10.299 --> 00:11:12.980
actually talk to each other, if the data analyst

00:11:12.980 --> 00:11:15.639
agent and the content generator agent could hold

00:11:15.639 --> 00:11:19.419
a meeting, what's the first task they would solve

00:11:19.419 --> 00:11:21.259
together? Go build something amazing.
