WEBVTT

00:00:00.000 --> 00:00:02.120
You know that immediate mental hurdle when you

00:00:02.120 --> 00:00:04.639
start thinking about an AI agent project? It's

00:00:04.639 --> 00:00:06.980
almost always complexity. Your mind just leaps

00:00:06.980 --> 00:00:11.019
to... perfect prompts, multi-agent setups, rock-

00:00:11.019 --> 00:00:13.699
solid security. Right. And these ideas, they're

00:00:13.699 --> 00:00:15.900
important down the line, sure, but they kill,

00:00:15.919 --> 00:00:18.280
what, 90% of projects before they even get off

00:00:18.280 --> 00:00:20.660
the ground? It's total analysis paralysis. It

00:00:20.660 --> 00:00:22.140
feels like you should be thinking about that

00:00:22.140 --> 00:00:25.199
stuff like it's best practice. But the reality,

00:00:25.399 --> 00:00:28.160
the sort of liberating truth here, is that the

00:00:28.160 --> 00:00:30.399
90% you actually need to build something that

00:00:30.399 --> 00:00:32.359
works, something you can ship. You can probably

00:00:32.359 --> 00:00:35.500
grasp that in, like, under an hour. So welcome

00:00:35.500 --> 00:00:38.969
to the deep dive. Our goal here is to guide you

00:00:38.969 --> 00:00:41.409
through what really matters with AI agents quickly

00:00:41.409 --> 00:00:44.070
without you getting lost in all the noise. Yeah,

00:00:44.109 --> 00:00:46.070
we want to cut right through that perfectionism

00:00:46.070 --> 00:00:49.210
trap. Isolate those practical, ship-it-today basics.

00:00:49.549 --> 00:00:53.289
Exactly. So today we're going to define the four

00:00:53.289 --> 00:00:56.789
core parts of any agent. Lay out a simple three-

00:00:56.789 --> 00:00:59.250
step launch sequence and cover the essential

00:00:59.250 --> 00:01:02.570
lightweight ways to handle tools, security, and

00:01:02.570 --> 00:01:05.090
actually deploying this thing. Okay, let's dive

00:01:05.090 --> 00:01:07.310
in and try to make this feel a bit less daunting.

00:01:07.709 --> 00:01:11.590
So segment one is this perfectionism trap. I

00:01:11.590 --> 00:01:13.670
think we've all felt this. You get an idea for

00:01:13.670 --> 00:01:16.090
an agent and immediately you're picturing how,

00:01:16.269 --> 00:01:19.319
I don't know, Google or OpenAI would build it,

00:01:19.340 --> 00:01:21.719
this massive, intricate system. And that's the

00:01:21.719 --> 00:01:23.480
moment, right? That's usually where the idea

00:01:23.480 --> 00:01:26.420
just stalls. Yeah. Because it's this engineering

00:01:26.420 --> 00:01:28.340
exercise for a scale you don't even have yet.

00:01:28.480 --> 00:01:31.500
Yeah. But the pattern for success, what we actually

00:01:31.500 --> 00:01:33.840
see working, it's developers starting really

00:01:33.840 --> 00:01:36.480
simple. Shipping fast. Shipping fast. And then

00:01:36.480 --> 00:01:39.579
iterating based on what real users are telling

00:01:39.579 --> 00:01:42.420
them, not based on some abstract theory of perfection.

00:01:42.700 --> 00:01:46.140
So the core message is really resist that temptation.

00:01:46.400 --> 00:01:49.140
Focus on the practical 90%, just what you need

00:01:49.140 --> 00:01:51.760
to launch something basic that works. What do

00:01:51.760 --> 00:01:54.980
you think is the main psychological block there?

00:01:55.019 --> 00:01:57.840
What stops people shipping? It's overcomplicating

00:01:57.840 --> 00:02:00.299
that foundation. That's what kills projects before

00:02:00.299 --> 00:02:03.200
they start. Simplicity, especially early on,

00:02:03.299 --> 00:02:05.680
that's actually a feature, not a bug. Okay, so

00:02:05.680 --> 00:02:08.219
let's talk anatomy. Before touching any code,

00:02:08.419 --> 00:02:10.360
what are the absolute essential parts? You're

00:02:10.360 --> 00:02:12.759
saying there are just four. Just four core components,

00:02:12.900 --> 00:02:14.539
yeah. Yeah. Doesn't matter how fancy it gets

00:02:14.539 --> 00:02:16.759
later, these are the building blocks. First up,

00:02:16.879 --> 00:02:20.759
tools. Think of these as the agent's hands. Okay.

00:02:20.919 --> 00:02:23.180
They're functions, letting the agent, you know,

00:02:23.199 --> 00:02:25.680
interact with the outside world, search the web,

00:02:25.860 --> 00:02:28.099
query a database, send an email. Right. Without

00:02:28.099 --> 00:02:29.919
tools, it's just talking. It can't do anything.
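Concretely, a tool is just a function plus a description the model can read to decide when to call it. A minimal sketch, with illustrative names and a schema shape loosely modeled on what most LLM APIs expect (not any specific framework's API):

```python
import ast
import operator

# Supported binary operations for the toy calculator tool.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> str:
    """Safely evaluate basic arithmetic (no eval, so no code injection)."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expression, mode="eval").body))

# The schema-style description sent to the model alongside the function:
calculator_tool = {
    "name": "calculator",
    "description": "Evaluate an arithmetic expression, e.g. '12 * 7'.",
    "parameters": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}
```

The description is the part the LLM actually "sees"; the function itself only runs when the model asks for it.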

00:02:30.039 --> 00:02:32.400
Exactly. It's just a chatbot otherwise. Number

00:02:32.400 --> 00:02:36.129
two, the large language model, the LLM. That's

00:02:36.129 --> 00:02:39.889
the brain. Your GPT, Claude, Gemini. Right. The

00:02:39.889 --> 00:02:42.469
reasoning engine, it looks at the user's request,

00:02:42.710 --> 00:02:44.669
looks at its instructions, and decides, okay,

00:02:44.770 --> 00:02:47.490
which tool do I need for this? Then number three

00:02:47.490 --> 00:02:50.110
is the system prompt, the instruction manual.

00:02:50.430 --> 00:02:51.870
Yeah, that's a good way to put it. It's the high-

00:02:51.870 --> 00:02:54.449
level programming. Defines the agent's personality.

00:02:54.689 --> 00:02:58.689
Is it helpful? Snarky? Sets its goals, its rules

00:02:58.689 --> 00:03:00.610
of engagement. And the fourth piece is memory

00:03:00.610 --> 00:03:04.520
systems, the context. Crucial. This covers short-

00:03:04.520 --> 00:03:06.699
term memory, basically, the current conversation

00:03:06.699 --> 00:03:09.860
history, and potentially long-term memory, like

00:03:09.860 --> 00:03:12.360
remembering user preferences from last week or

00:03:12.360 --> 00:03:15.159
important facts it learned. And that's it. Everything

00:03:15.159 --> 00:03:17.139
else, deployment, monitoring, that's all just

00:03:17.139 --> 00:03:20.360
refinement. It's refinement, not the core foundation.

00:03:20.639 --> 00:03:24.180
The LLM is the brain, sure, but which component

00:03:24.180 --> 00:03:26.319
actually lets it do things in the real world?

00:03:26.639 --> 00:03:29.159
That has to be the tools. They're the hands letting

00:03:29.159 --> 00:03:32.020
it interact with APIs, databases, whatever it

00:03:32.020 --> 00:03:34.599
needs. Okay, so how do we actually launch this?

00:03:34.919 --> 00:03:38.879
The hello world for agents. It's just three steps.

00:03:39.139 --> 00:03:42.439
Step one. Pick an LLM for prototyping. Just pick

00:03:42.439 --> 00:03:44.539
something cheap and fast. Don't agonize over

00:03:44.539 --> 00:03:47.039
it. Claude Haiku is great for this. Okay, cheap

00:03:47.039 --> 00:03:49.560
and fast. Step two. Write a basic, clear system

00:03:49.560 --> 00:03:51.759
prompt. Just the essentials. You'll refine it

00:03:51.759 --> 00:03:53.400
later once you see how people actually use it.

00:03:53.460 --> 00:03:55.960
And step three. Add one tool. Or just one to

00:03:55.960 --> 00:03:58.340
start, maybe a simple web search or even just

00:03:58.340 --> 00:04:00.280
a calculator. And if you do those three things,

00:04:00.280 --> 00:04:02.580
you have a working agent. It can take an instruction,

00:04:02.580 --> 00:04:06.699
reason about it, and perform an action. And honestly,

00:04:06.699 --> 00:04:10.580
this can be, like, less than 50 lines of code. It's

00:04:10.580 --> 00:04:13.250
really not complex to start. Okay, you mentioned

00:04:13.250 --> 00:04:15.110
setting up the LLM connection. You talked about

00:04:15.110 --> 00:04:17.389
OpenRouter. Why is that important early on? Ah,

00:04:17.389 --> 00:04:19.750
yeah, good question. OpenRouter basically gives

00:04:19.750 --> 00:04:23.290
you one API key that works for almost all the

00:04:23.290 --> 00:04:28.550
major LLMs, GPT, Claude, Gemini, Mistral, you name

00:04:28.550 --> 00:04:31.230
it. So switching is easy. Super easy. You change

00:04:31.230 --> 00:04:33.269
like one line of text in your config file, and

00:04:33.269 --> 00:04:35.470
boom, you're using a different model. No vendor

00:04:35.470 --> 00:04:38.069
lock-in, easy testing. It's a smart move from

00:04:38.069 --> 00:04:40.620
day one. You know, I have to admit, I still sometimes

00:04:40.620 --> 00:04:43.199
wrestle with the temptation to just let the LLM

00:04:43.199 --> 00:04:45.800
handle things like math. Skip writing a calculator

00:04:45.800 --> 00:04:47.540
tool. Oh, totally. It feels like it should be

00:04:47.540 --> 00:04:49.759
smart enough, right? Yeah. That's a classic trap.

00:04:50.259 --> 00:04:52.540
LLMs are fundamentally token prediction machines.

00:04:52.899 --> 00:04:56.420
They predict the next likely word or token. That

00:04:56.420 --> 00:04:58.740
makes them surprisingly bad at precise arithmetic,

00:04:58.959 --> 00:05:01.399
like really bad sometimes. So adding a simple

00:05:01.399 --> 00:05:03.660
calculator tool isn't just a nice-to-have,

00:05:03.800 --> 00:05:06.740
it's often necessary. It compensates for a core

00:05:06.740 --> 00:05:10.680
limitation of the LLM itself. Why is adding a

00:05:10.680 --> 00:05:13.879
tool for something basic like arithmetic so crucial

00:05:13.879 --> 00:05:17.220
then? Because LLMs are just inherently weak at

00:05:17.220 --> 00:05:19.779
precise math. They need tools to make up for

00:05:19.779 --> 00:05:21.980
that fundamental gap. Okay, let's nail down some

00:05:21.980 --> 00:05:26.029
specifics. LLM choices. For prototyping, like

00:05:26.029 --> 00:05:28.870
I said, I really like Claude Haiku 4.5. It's

00:05:28.870 --> 00:05:31.509
super cheap, incredibly fast, great for just

00:05:31.509 --> 00:05:33.310
getting things working. And then when you think

00:05:33.310 --> 00:05:35.689
about maybe moving towards production. Claude

00:05:35.689 --> 00:05:37.889
Sonnet 4.5 seems to be hitting a sweet spot

00:05:37.889 --> 00:05:39.990
right now. Good balance of intelligence, speed,

00:05:40.129 --> 00:05:42.550
and cost. But again, the real point here is use

00:05:42.550 --> 00:05:44.709
something like OpenRouter. Right. So switching

00:05:44.709 --> 00:05:47.810
between Haiku, Sonnet, maybe even GPT-4 is just

00:05:47.810 --> 00:05:50.180
changing that one line. Don't get locked in because

00:05:50.180 --> 00:05:52.819
of hype. Test what works for your use case. Makes

00:05:52.819 --> 00:05:55.040
sense. And the system prompt, that instruction

00:05:55.040 --> 00:05:57.519
manual. You mentioned a template to avoid staring

00:05:57.519 --> 00:05:59.800
at a blank page. Yeah, a simple five-section

00:05:59.800 --> 00:06:01.420
template really helps structure your thinking.

00:06:01.579 --> 00:06:04.240
What are the core sections? First three are key

00:06:04.240 --> 00:06:08.220
to start. One, persona and goals, just two or

00:06:08.220 --> 00:06:11.519
three sentences. Two, tool instructions, how

00:06:11.519 --> 00:06:14.459
and when to use the tools you've given it. Three,

00:06:15.000 --> 00:06:17.639
output format and tone. How detailed should it be?

00:06:17.720 --> 00:06:18.939
That sort of thing. And the other two sections.

00:06:19.040 --> 00:06:21.040
You said ignore them at first. Right. Section

00:06:21.040 --> 00:06:24.519
four is for examples, really only needed for complex,

00:06:24.779 --> 00:06:27.720
multi-step tool workflows. And section five

00:06:27.720 --> 00:06:29.639
is miscellaneous instructions. This is like the

00:06:29.639 --> 00:06:32.579
fix-it section. So why start with just the first

00:06:32.579 --> 00:06:35.139
three and ignore the others initially? Because

00:06:35.139 --> 00:06:37.079
you should only fill in that miscellaneous section

00:06:37.079 --> 00:06:39.740
based on observed behavior. Let your agent mess

00:06:39.740 --> 00:06:42.120
up in testing. If it tries scheduling meetings

00:06:42.120 --> 00:06:44.870
at 3 a.m., then you add a rule: Never schedule

00:06:44.870 --> 00:06:48.230
meetings before 9 a.m. Use real feedback. Don't

00:06:48.230 --> 00:06:50.550
guess about edge cases up front. Start simple.
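Sketched in Python, the template is plain string assembly. Section titles are paraphrased from the description above, and the sample content is invented:

```python
def build_system_prompt(persona, tool_instructions, output_format,
                        examples="", miscellaneous=""):
    """Assemble the five-section template; empty sections are dropped,
    so you can start with just the first three."""
    sections = [
        ("Persona & goals", persona),
        ("Tool instructions", tool_instructions),
        ("Output format & tone", output_format),
        ("Examples", examples),
        ("Miscellaneous rules", miscellaneous),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections if body)

prompt = build_system_prompt(
    persona="You are a scheduling assistant for a small team.",
    tool_instructions="Check the calendar tool before proposing meeting times.",
    output_format="Reply in two sentences or fewer.",
    # Added later, only after observing the 3 a.m. failure in testing:
    miscellaneous="Never schedule meetings before 9 a.m.",
)
```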

00:06:51.269 --> 00:06:54.189
Okay. Tool strategy. You mentioned a limit. Yeah.

00:06:54.230 --> 00:06:55.910
A general rule of thumb is try to keep it under

00:06:55.910 --> 00:06:58.389
10 tools per agent. Once you go beyond that,

00:06:58.449 --> 00:07:01.149
the LLM tends to get confused. The performance

00:07:01.149 --> 00:07:03.290
drops. It starts picking the wrong tool. It just

00:07:03.290 --> 00:07:05.610
gets overwhelmed. Only 10. I've seen people build

00:07:05.610 --> 00:07:08.029
agents with like dozens of tiny little function

00:07:08.029 --> 00:07:10.629
calls. Why the limit? It's really about the LLM's

00:07:10.629 --> 00:07:13.540
cognitive load and the context window. Every

00:07:13.540 --> 00:07:15.420
tool you add, you have to describe it in the

00:07:15.420 --> 00:07:18.800
prompt. That eats up precious token space. And

00:07:18.800 --> 00:07:21.300
the more tools it has to choose from, the more

00:07:21.300 --> 00:07:23.899
time and tokens it spends just deciding which

00:07:23.899 --> 00:07:26.519
one to use. Often incorrectly if there are too

00:07:26.519 --> 00:07:29.160
many similar options. Keep it focused. And if

00:07:29.160 --> 00:07:31.500
you could only build out one core capability

00:07:31.500 --> 00:07:35.540
at first? RAG. No question. Retrieval-Augmented

00:07:35.540 --> 00:07:37.959
Generation. That's giving the agent access to

00:07:37.959 --> 00:07:40.120
search private documents, right? Internal knowledge

00:07:40.120 --> 00:07:42.399
bases. Exactly. And the data we're seeing suggests

00:07:42.399 --> 00:07:45.180
something like 80% or more of the real business

00:07:45.180 --> 00:07:47.680
value from agents comes from this capability.

00:07:48.259 --> 00:07:50.279
Letting it use your company's internal data,

00:07:50.399 --> 00:07:52.939
customer history, technical docs. Master RAG

00:07:52.939 --> 00:07:55.680
first. Master RAG. And you unlock the most valuable

00:07:55.680 --> 00:07:58.139
use cases right out of the gate. Wow. Yeah, I

00:07:58.139 --> 00:08:00.259
can see that. Imagine just asking it to pull

00:08:00.259 --> 00:08:03.000
key risks from the last thousand customer contracts

00:08:03.000 --> 00:08:05.860
instantly. Yeah. Yeah, that's huge for a business.
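The retrieve-then-answer shape behind RAG fits in a few lines. This toy sketch scores documents by word overlap purely so it runs anywhere; a production system would use embeddings and a vector store, and the sample documents here are invented:

```python
import re

# Stand-in internal knowledge base.
DOCS = [
    "Contract A: termination requires 90 days written notice.",
    "Contract B: liability is capped at the annual fee.",
    "Office policy: standups happen at 9:30 every morning.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=2):
    # Score each document by overlap with the query, keep the top k.
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)),
                  reverse=True)[:k]

def answer_with_context(query, llm_call):
    # Hand only the retrieved snippets to the model, not the whole corpus.
    context = "\n".join(retrieve(query, DOCS))
    return llm_call(f"Answer using only this context:\n{context}\n\nQ: {query}")
```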

00:08:06.019 --> 00:08:08.100
Totally transformative. Now, security. Don't

00:08:08.100 --> 00:08:10.680
panic. Just basic hygiene. First rule. Never,

00:08:10.759 --> 00:08:14.199
ever hard-code API keys in your code. Use environment

00:08:14.199 --> 00:08:16.339
variables. Standard practice. Standard practice.
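That rule is a one-line environment lookup in Python; the variable name here is just an example, use whatever your provider expects:

```python
import os

def load_api_key(name: str = "OPENROUTER_API_KEY") -> str:
    """Fetch the key from the environment; fail loudly if it's missing."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"Set {name} before starting the agent.")
    return key
```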

00:08:16.819 --> 00:08:20.180
Then look at Guardrails AI. It's an open source

00:08:20.180 --> 00:08:22.560
Python tool. It lets you basically wrap your

00:08:22.560 --> 00:08:24.899
agent. Wrap it? What does that do? It gives you

00:08:24.899 --> 00:08:27.490
input protection. It can block things like prompt

00:08:27.490 --> 00:08:30.889
injection attacks or filter out PII, you know,

00:08:30.930 --> 00:08:33.570
personally identifiable information like names,

00:08:33.710 --> 00:08:36.529
addresses. Okay, so it cleans the input. And

00:08:36.529 --> 00:08:38.419
it filters the output, too. It can check for

00:08:38.419 --> 00:08:40.799
factual consistency, make sure the agent isn't

00:08:40.799 --> 00:08:43.080
leaking sensitive data. It's like a basic safety

00:08:43.080 --> 00:08:45.740
net, lets you ship things internally with a lot

00:08:45.740 --> 00:08:48.019
more confidence. And you also mentioned Snyk

00:08:48.019 --> 00:08:51.460
for vulnerabilities. Yeah, there are prepackaged

00:08:51.460 --> 00:08:54.259
tool collections, called MCP servers, that

00:08:54.259 --> 00:08:56.340
bundle security checks. Using something like

00:08:56.340 --> 00:08:58.379
Snyk can automate vulnerability detection during

00:08:58.379 --> 00:09:00.519
development. Just good practice. Yeah. So if

00:09:00.519 --> 00:09:02.720
you're focusing on just one tool capability for

00:09:02.720 --> 00:09:05.139
that first agent, what's the priority? It's got

00:09:05.139 --> 00:09:07.779
to be RAG. Because that ability to tap into

00:09:07.779 --> 00:09:10.240
private company data, that's where you'll find

00:09:10.240 --> 00:09:12.620
the most immediate business value. Okay, last

00:09:12.620 --> 00:09:15.539
lap. Optimization and actually shipping this

00:09:15.539 --> 00:09:18.679
thing. Cost is a big one. Context window costs.

00:09:18.980 --> 00:09:22.220
Right. You pay per token, both input and output.

00:09:22.740 --> 00:09:25.320
So keep your system prompts concise. Keep your

00:09:25.320 --> 00:09:27.779
tool descriptions tight. They get sent with every

00:09:27.779 --> 00:09:30.580
single call to the LLM. Don't be verbose there.

00:09:30.740 --> 00:09:32.710
And for memory. For the conversation history.

00:09:32.990 --> 00:09:35.149
You absolutely need that sliding window memory

00:09:35.149 --> 00:09:37.029
trick we talked about. Don't send the entire

00:09:37.029 --> 00:09:39.830
chat history back every time, especially if it's

00:09:39.830 --> 00:09:42.710
a long conversation. Just the last, say, 10 or

00:09:42.710 --> 00:09:46.230
20 messages. Exactly. Use that simple list slicing

00:09:46.230 --> 00:09:49.830
like conversation[-10:] in Python. It dramatically

00:09:49.830 --> 00:09:52.809
cuts down your token costs. What about things

00:09:52.809 --> 00:09:56.019
the agent needs to remember long term? Like user

00:09:56.019 --> 00:09:59.159
preferences or facts it learned weeks ago. For

00:09:59.159 --> 00:10:01.440
that, you'd look at something like Mem0. It's

00:10:01.440 --> 00:10:03.639
an open source tool specifically for persistent

00:10:03.639 --> 00:10:07.120
memory. It uses RAG principles itself to store

00:10:07.120 --> 00:10:09.600
facts efficiently and only retrieve the relevant

00:10:09.600 --> 00:10:11.779
ones for the current query. So it doesn't stuff

00:10:11.779 --> 00:10:14.259
the main context window with old info. Right.

00:10:14.500 --> 00:10:17.320
Avoids those unnecessary token costs for stuff

00:10:17.320 --> 00:10:19.580
that's not immediately needed. Then there's observability,

00:10:19.700 --> 00:10:22.029
seeing what's going on inside. Yeah, you really

00:10:22.029 --> 00:10:24.350
need this for debugging and just understanding

00:10:24.350 --> 00:10:27.309
costs. Langfuse is a great, easy-to-integrate

00:10:27.309 --> 00:10:29.570
option. Gives you a dashboard. What does it track?

00:10:29.730 --> 00:10:31.850
Tracks the whole execution flow step by step.

00:10:32.070 --> 00:10:34.870
Shows you token usage per step, latency, which

00:10:34.870 --> 00:10:37.830
system prompt was used. Yeah. Invaluable when

00:10:37.830 --> 00:10:39.350
your agent does something weird and you need

00:10:39.350 --> 00:10:41.990
to figure out why. And finally, deployment, getting

00:10:41.990 --> 00:10:44.289
it running. Think Docker native right from the

00:10:44.289 --> 00:10:46.789
start. Yeah. Build your agent inside a Docker

00:10:46.789 --> 00:10:49.730
container. Makes it super portable. Is it heavy?

00:10:50.240 --> 00:10:52.820
Does it need big servers? That's the surprising

00:10:52.820 --> 00:10:55.379
part. AI agents are usually really lightweight.

00:10:55.799 --> 00:10:58.320
All the heavy computation, the LLM inference

00:10:58.320 --> 00:11:01.139
that happens on OpenAI's or Anthropic's servers.

00:11:01.500 --> 00:11:04.220
Oh, okay. Your code is mostly just managing API

00:11:04.220 --> 00:11:07.100
calls and maybe running a simple tool. So it

00:11:07.100 --> 00:11:09.039
can run on a small, cheap server. A Docker container,

00:11:09.259 --> 00:11:11.460
maybe with a basic web front end like Streamlit

00:11:11.460 --> 00:11:14.100
if it's a chatbot. Or just a serverless function

00:11:14.100 --> 00:11:16.279
like AWS Lambda if it runs in the background.
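A minimal Dockerfile for that kind of lightweight agent could look like this; the file names agent.py and requirements.txt are assumptions about your project layout:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# API keys arrive as environment variables at run time, never baked in.
CMD ["python", "agent.py"]
```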

00:11:16.460 --> 00:11:19.710
Keep it simple. So token costs, they add up fast.

00:11:20.090 --> 00:11:22.070
What's the absolute simplest, most effective

00:11:22.070 --> 00:11:25.169
way to manage memory for a long chat? Use that

00:11:25.169 --> 00:11:28.269
sliding window. Send only the last 10 or 20 messages.
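The sliding window really is just a list slice, shown here with illustrative message dicts:

```python
def sliding_window(conversation, n=20):
    """Keep only the n most recent messages to send back to the model."""
    return conversation[-n:]

# Illustrative history of 100 message dicts:
history = [{"role": "user", "content": f"message {i}"} for i in range(100)]
recent = sliding_window(history, 20)  # only these go in the next API call
```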

00:11:28.590 --> 00:11:31.289
That saves huge amounts on token costs compared

00:11:31.289 --> 00:11:33.909
to sending the full history every time. Okay,

00:11:33.970 --> 00:11:36.370
let's recap the big idea here. The 90-10 rule.

00:11:36.809 --> 00:11:39.570
What's the 90% that really matters? The stuff

00:11:39.570 --> 00:11:41.960
you should do now. All right. Pick an LLM. Haiku

00:11:41.960 --> 00:11:44.559
is great for testing. Write a basic system prompt,

00:11:44.679 --> 00:11:47.120
just those first three template sections. Add

00:11:47.120 --> 00:11:50.200
one to three tools. Really focus on RAG for accessing

00:11:50.200 --> 00:11:52.279
internal data. That's usually the highest value

00:11:52.279 --> 00:11:55.080
starting point. Add basic security with Guardrails

00:11:55.139 --> 00:11:58.600
AI: input and output protection. Add simple observability,

00:11:58.759 --> 00:12:01.080
something like Langfuse. Control that short-term

00:12:01.080 --> 00:12:03.820
memory cost using the sliding window trick. And

00:12:03.820 --> 00:12:05.860
build it all with Docker in mind from day one.

00:12:06.059 --> 00:12:08.740
Easy deployment later. And critically, what's

00:12:08.740 --> 00:12:11.200
the 10 % you should actively ignore at the start?

00:12:11.480 --> 00:12:14.620
Ignore the complex multi-agent systems. Ignore

00:12:14.620 --> 00:12:16.980
crafting 5,000-word perfect system prompts.

00:12:17.399 --> 00:12:19.460
Ignore Kubernetes orchestration for your first

00:12:19.460 --> 00:12:21.919
simple agent. Ignore building massive custom

00:12:21.919 --> 00:12:24.480
evaluation test suites before you even have user

00:12:24.480 --> 00:12:27.200
feedback. Right. The agents that actually ship.

00:12:27.399 --> 00:12:30.120
They shipped because the builders resisted that

00:12:30.120 --> 00:12:32.740
urge to over-engineer everything up front. Exactly.

00:12:32.779 --> 00:12:35.000
Focus on the foundations, get it out there, get

00:12:35.000 --> 00:12:38.480
real feedback, then iterate. So the call to action

00:12:38.480 --> 00:12:41.100
is pretty clear. Find that 50-line code example

00:12:41.100 --> 00:12:43.480
mentioned in the source material, or just start

00:12:43.480 --> 00:12:45.639
fresh with those core components. Build something

00:12:45.639 --> 00:12:48.220
simple, like today. Yeah, don't wait for it to

00:12:48.220 --> 00:12:50.460
be perfect. You don't fail by shipping something

00:12:50.460 --> 00:12:53.240
simple or slightly flawed. You fail by letting

00:12:53.240 --> 00:12:55.539
perfectionism stop you from learning. And you

00:12:55.539 --> 00:12:58.149
only really learn from real-world usage. So

00:12:58.149 --> 00:13:00.309
a final thought to leave you with. What simple

00:13:00.309 --> 00:13:02.909
problem, maybe even an annoying little task you

00:13:02.909 --> 00:13:05.289
do every day, could your first basic agent solve

00:13:05.289 --> 00:13:08.389
right now? Outro music fades in.
