WEBVTT

00:00:00.000 --> 00:00:02.100
We've all been there, haven't we? You're deep

00:00:02.100 --> 00:00:03.879
in your code, you've got your AI helper running,

00:00:04.740 --> 00:00:07.280
and you ask it something. Maybe refactor a component,

00:00:07.679 --> 00:00:11.019
debug a little function, and it just completely

00:00:11.019 --> 00:00:12.759
misses the point. Oh yeah. It uses your old code,

00:00:12.939 --> 00:00:16.260
or it's like, it's got digital amnesia. Forgets

00:00:16.260 --> 00:00:17.760
everything you just told it about the project

00:00:17.760 --> 00:00:20.859
structure, like five minutes ago. Digital amnesia,

00:00:20.859 --> 00:00:24.120
I like that. It's spot on, really. You're in

00:00:24.120 --> 00:00:26.940
the zone, things are clicking, and then the AI

00:00:26.940 --> 00:00:30.359
just... loses the plot entirely, super frustrating.

00:00:30.519 --> 00:00:34.460
Exactly, and that frustration, that constant

00:00:34.460 --> 00:00:36.740
need to re-explain things, that's really what

00:00:36.740 --> 00:00:39.119
we're diving into today. Our goal is basically

00:00:39.119 --> 00:00:41.880
mastering AI context so we can all be more productive,

00:00:42.359 --> 00:00:46.060
because the core issue is that lack of... continuous

00:00:46.060 --> 00:00:48.679
context. These tools often feel like a blank

00:00:48.679 --> 00:00:50.600
slate every time you prompt them. Which wastes

00:00:50.600 --> 00:00:53.060
so much time. It really does. And it's a known

00:00:53.060 --> 00:00:54.780
problem, right? People are thinking about this.

00:00:54.840 --> 00:00:58.039
There's even this kind of futuristic idea floating

00:00:58.039 --> 00:01:00.899
around called MCP servers, Model Context Protocol

00:01:00.899 --> 00:01:04.260
servers. The idea is these servers would let

00:01:04.260 --> 00:01:07.579
AIs automatically tap into a project's full context.

00:01:07.939 --> 00:01:11.200
Like, really understand. Think of it like a permanent

00:01:11.200 --> 00:01:13.560
memory for the AI. Deep understanding. Yeah,

00:01:13.780 --> 00:01:17.079
and it's important to say, while these dedicated

00:01:17.079 --> 00:01:19.420
MCP servers themselves might still be mostly

00:01:19.420 --> 00:01:21.659
concepts, maybe just around the corner, the ideas

00:01:21.659 --> 00:01:24.340
driving them, those are incredibly valuable right

00:01:24.340 --> 00:01:27.140
now. They give us a sort of roadmap for how we

00:01:27.140 --> 00:01:30.859
can... today, teach our current AIs, ChatGPT,

00:01:30.959 --> 00:01:33.040
Gemini, Claude, whatever you use, teach them

00:01:33.040 --> 00:01:36.099
to grasp our projects on a much deeper level.

00:01:36.299 --> 00:01:38.359
Exactly. So that's our mission for this deep

00:01:38.359 --> 00:01:41.079
dive. We're going to unpack 10 practical techniques,

00:01:41.200 --> 00:01:43.280
all based on those MCP ideas, things you can

00:01:43.280 --> 00:01:45.840
actually use now, to give your AI a better memory,

00:01:45.980 --> 00:01:48.099
basically, a clearer picture of your project,

00:01:48.299 --> 00:01:50.379
which should really boost your productivity,

00:01:50.540 --> 00:01:52.579
get the AI working with you, not against you.

00:01:52.840 --> 00:01:56.000
So think of this as just a calm, curious look

00:01:56.000 --> 00:01:59.549
at how we can make AI truly work for us, moving

00:01:59.549 --> 00:02:02.269
beyond those simple one-off questions towards

00:02:02.269 --> 00:02:04.409
something more like an informed partnership.

00:02:05.150 --> 00:02:08.210
Let's dig in. OK, so first up, something really

00:02:08.210 --> 00:02:12.069
fundamental, the project structure itself. This

00:02:12.069 --> 00:02:14.669
first idea comes from the concept of a file system

00:02:14.669 --> 00:02:17.710
MCP. Imagine an AI server that could just read

00:02:17.710 --> 00:02:19.509
and understand your whole project directory.

00:02:19.870 --> 00:02:22.210
Know where every file is automatically. Exactly.

00:02:22.409 --> 00:02:24.650
So how do we do that now, practically speaking?

00:02:24.770 --> 00:02:26.430
Well, it's actually pretty straightforward. Before

00:02:26.430 --> 00:02:28.629
you ask the AI to do something involving files,

00:02:29.009 --> 00:02:31.169
give it the structure first. You can just run

00:02:31.169 --> 00:02:34.689
tree if you're on Windows or ls -R on

00:02:34.689 --> 00:02:37.430
Mac or Linux. Copy that output. Paste it right

00:02:37.430 --> 00:02:39.310
into your prompt. It's like, here are the blueprints.
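Concretely, the capture step might look like this on Mac or Linux. This is a sketch using an illustrative throwaway project; find is a portable stand-in when tree isn't installed:

```shell
# Illustrative sketch: build a tiny project, then capture its file
# structure to paste into a prompt. All paths here are made up.
mkdir -p demo/src/components demo/src/pages
touch demo/src/components/Button.tsx demo/src/pages/Checkout.tsx
# 'tree demo' gives prettier output; 'find' works everywhere:
find demo -type f | sort
```

The sorted file list is exactly the kind of "blueprint" you'd paste above your actual question.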

00:02:39.509 --> 00:02:41.310
Gotcha. So you'd say something like? Yeah, you'd

00:02:41.310 --> 00:02:43.430
say, here's my project's directory structure.

00:02:43.530 --> 00:02:46.090
Then paste the tree output. And then follow up

00:02:46.090 --> 00:02:49.090
with, OK, I just moved Button.tsx from src/components

00:02:49.090 --> 00:02:51.409
over to src/design-system/atoms.

00:02:52.069 --> 00:02:54.310
Based on the structure I gave you, find all the

00:02:54.310 --> 00:02:57.569
files in src/pages using this Button and

00:02:57.569 --> 00:03:00.490
rewrite their import paths. Ah, OK. You're giving

00:03:00.490 --> 00:03:04.169
it a concrete map. No guesswork. Precisely. So

00:03:04.169 --> 00:03:06.870
why is knowing that file location so crucial

00:03:06.870 --> 00:03:09.710
for the AI? Well, without that map, it's just

00:03:09.710 --> 00:03:12.509
guessing, right? Especially in a big project,

00:03:12.530 --> 00:03:15.129
or one that changes a lot. Those guesses about

00:03:15.129 --> 00:03:17.930
where files are, that leads to wrong suggestions,

00:03:18.090 --> 00:03:21.490
broken code, just misunderstanding what you even

00:03:21.490 --> 00:03:23.909
want. Giving it the structure avoids all that

00:03:23.909 --> 00:03:26.610
hassle. So basically, it prevents AI from making

00:03:26.610 --> 00:03:28.949
assumptions about file paths. That's the core

00:03:28.949 --> 00:03:30.990
of it. OK, so that's where things are. What about

00:03:30.990 --> 00:03:33.460
how they got there? The history. Good point.

00:03:33.819 --> 00:03:36.419
That leads us straight into Git context. This

00:03:36.419 --> 00:03:39.280
comes from the Git MCP idea, an AI that knows

00:03:39.280 --> 00:03:42.080
your commit history, branches, recent changes,

00:03:42.439 --> 00:03:44.800
the whole story of the code. Like it's been pair

00:03:44.800 --> 00:03:46.479
programming with you the whole time. Kinda, yeah.

00:03:46.539 --> 00:03:48.520
So the practical step here is running commands

00:03:48.520 --> 00:03:52.039
like git log or git diff locally, and then just

00:03:52.039 --> 00:03:54.259
copying that output into the prompt. Exactly.
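The commands in question might look like this. The throwaway repo and commit messages are illustrative so the sketch runs anywhere:

```shell
# Illustrative sketch: the kind of git output you'd copy into a prompt,
# demonstrated on a throwaway repo. Messages and names are made up.
git init -q ctx-demo && cd ctx-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feat: add login form"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: validate email field"
# One-line summaries to paste into the prompt; between releases you'd
# use a range like: git log v1.2.0..HEAD --oneline
git log --oneline
```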

00:03:54.340 --> 00:03:57.599
You give the AI a clear snapshot of the project's

00:03:57.599 --> 00:03:59.699
evolution. OK, so an example might be... You

00:03:59.699 --> 00:04:01.939
can say, here's the commit history between tag

00:04:01.939 --> 00:04:04.659
v1.2.0 and the current main branch. Then paste

00:04:04.659 --> 00:04:08.379
the output of, say, git log v1.2.0..HEAD --oneline.

00:04:08.520 --> 00:04:11.680
Then ask, based only on these commits, please

00:04:11.680 --> 00:04:14.879
draft some concise release notes for v1.3.0.

00:04:15.240 --> 00:04:17.339
Categorize them into new features, bug fixes,

00:04:17.639 --> 00:04:19.740
and improvements. Nice. You're giving it the

00:04:19.740 --> 00:04:22.139
actual changelog, not just bits of code. So how

00:04:22.139 --> 00:04:24.759
does this historical context make the AI a better

00:04:24.759 --> 00:04:27.060
collaborator? Well, it shifts it from just generating

00:04:27.060 --> 00:04:31.500
code to being more like a knowledgeable teammate.

00:04:31.620 --> 00:04:34.360
OK. It sees the why behind changes, the problem

00:04:34.360 --> 00:04:36.160
solved, the direction you're heading. Right.

00:04:36.220 --> 00:04:38.300
So the code it suggests, or the summaries it

00:04:38.300 --> 00:04:40.100
writes, they're just much more relevant because

00:04:40.100 --> 00:04:42.519
it gets the intent. It lets AI understand project

00:04:42.519 --> 00:04:44.860
evolution and recent changes. Yeah, which is

00:04:44.860 --> 00:04:47.040
crucial. Which leads nicely into the next idea,

00:04:47.300 --> 00:04:50.540
creating a kind of memory bank. Ah, yes. The

00:04:50.540 --> 00:04:53.620
Memory Bank, the concept, the Memory Bank MCP,

00:04:53.779 --> 00:04:56.680
was this server that holds all your project decisions,

00:04:56.819 --> 00:04:59.019
your coding conventions, architectural rules,

00:04:59.019 --> 00:05:01.180
and it remembers them across sessions. The

00:05:01.180 --> 00:05:03.360
dream, right? No more repeating yourself. Definitely

00:05:03.360 --> 00:05:06.459
the dream. So how do we mimic that today? You

00:05:06.459 --> 00:05:09.319
can use features like custom instructions if

00:05:09.319 --> 00:05:12.420
your AI tool supports them. Or, and this works

00:05:12.420 --> 00:05:14.920
anywhere, just start every conversation with

00:05:14.920 --> 00:05:18.240
a really clear system prompt. Okay. Lay down

00:05:18.240 --> 00:05:21.490
the law from the start. Establish the project's

00:05:21.490 --> 00:05:24.230
DNA for the AI. So a system prompt might look

00:05:24.230 --> 00:05:26.350
like. Yeah, like right at the top. System prompt.

00:05:27.110 --> 00:05:29.069
Okay, for this whole chat, stick to these project

00:05:29.069 --> 00:05:32.689
rules. One, language is TypeScript. Two, use

00:05:32.689 --> 00:05:36.209
Axios for all API calls. Three, React components

00:05:36.209 --> 00:05:39.040
are function components with hooks. Clear rules.

00:05:39.360 --> 00:05:41.079
Now, help me create a user profile component.
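One lightweight way to mimic that memory bank is to keep the rules in a single constant and prepend them to every prompt you send. A minimal Python sketch, with the rule text taken from the example above and the function name purely illustrative:

```python
# Hypothetical sketch: store the project "memory bank" once and prepend
# it to every task, so each session starts from the same rules.
PROJECT_RULES = """\
For this whole chat, stick to these project rules:
1. Language is TypeScript.
2. Use Axios for all API calls.
3. React components are function components with hooks."""

def build_prompt(task: str) -> str:
    """Combine the standing project rules with a one-off task."""
    return f"{PROJECT_RULES}\n\nTask: {task}"

print(build_prompt("Help me create a user profile component."))
```

However you send prompts, the point is that the rules live in one place instead of being retyped from memory each time.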

00:05:41.600 --> 00:05:43.600
I have to admit, I still wrestle with prompt

00:05:43.600 --> 00:05:46.220
drift myself sometimes. Keeping the AI consistently

00:05:46.220 --> 00:05:49.319
on track can be genuinely tough. Well, for sure,

00:05:49.319 --> 00:05:52.180
it happens. But this memory bank approach, setting

00:05:52.180 --> 00:05:54.939
it up, what's the biggest benefit? Does it outweigh

00:05:54.939 --> 00:05:56.899
that initial effort? That's a fair question.

00:05:57.000 --> 00:05:58.560
It does take a little setup. But think about

00:05:58.560 --> 00:06:01.639
how much time you spend correcting the AI later,

00:06:01.779 --> 00:06:04.660
right? Re-explaining conventions, fixing code

00:06:04.660 --> 00:06:06.879
that doesn't fit the pattern. That drift adds

00:06:06.879 --> 00:06:09.980
up. So this memory bank approach, it front loads

00:06:09.980 --> 00:06:13.300
that effort, but it pays off over time with consistency.

00:06:13.519 --> 00:06:15.959
It ensures consistent AI behavior across all

00:06:15.959 --> 00:06:18.439
interactions. Exactly. Consistency is key. And

00:06:18.439 --> 00:06:21.540
speaking of keys, knowing how the pieces connect.

00:06:21.759 --> 00:06:24.220
Absolutely. Which brings us to analyzing code

00:06:24.220 --> 00:06:27.060
relationships. This is inspired by the knowledge

00:06:27.060 --> 00:06:30.339
graph memory idea, an AI server that automatically

00:06:30.339 --> 00:06:32.720
maps out all the dependencies. Which functions

00:06:32.720 --> 00:06:35.800
call which? Which components use others? Like

00:06:35.800 --> 00:06:38.040
a big interconnected web. Wow, that would be

00:06:38.040 --> 00:06:39.920
powerful. How do we approximate that? We can

00:06:39.920 --> 00:06:42.699
use our IDEs, actually, features like find all

00:06:42.699 --> 00:06:45.540
references or go to definition. Ah, okay. Use

00:06:45.540 --> 00:06:48.079
the tools we already have. Right. Get that dependency

00:06:48.079 --> 00:06:50.839
info and then feed it to the AI. So you could

00:06:50.839 --> 00:06:53.800
say... You might prompt it like this. Okay, I'm

00:06:53.800 --> 00:06:56.660
refactoring the calculatePrice function in utils/calculations.js.

00:07:00.240 --> 00:07:05.139
My IDE shows it's used in components/cart.js,

00:07:05.139 --> 00:07:08.339
pages/checkout.js, and orderService.js. Based on that, what are the potential risks

00:07:08.339 --> 00:07:11.139
here, and what steps should I take to avoid breaking

00:07:11.139 --> 00:07:13.519
things during the refactor? Okay, so you're giving

00:07:13.519 --> 00:07:16.399
it the dependency map. Why is understanding these

00:07:16.399 --> 00:07:18.620
links so important for the AI? Because it lets

00:07:18.620 --> 00:07:21.399
the AI predict the ripple effects. If it knows

00:07:21.399 --> 00:07:23.459
everywhere that calculatePrice function is used,

00:07:23.759 --> 00:07:25.600
it can warn you before you break something downstream.
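If you want that same usage list outside an IDE, a grep sketch can approximate "find all references". The files and contents here are illustrative:

```shell
# Illustrative sketch: approximate the IDE's "find all references" from
# the command line, to gather usage context for a prompt.
mkdir -p refs-demo/components refs-demo/pages
echo "import { calculatePrice } from '../utils/calculations';" \
  > refs-demo/components/cart.js
echo "console.log('no pricing here');" > refs-demo/pages/about.js
# Every file that references the function you're about to refactor:
grep -rl "calculatePrice" refs-demo
```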

00:07:25.920 --> 00:07:28.180
Oh, yeah. It can suggest safer ways to refactor.

00:07:28.319 --> 00:07:31.220
It shifts it from just coding to being more preventative.

00:07:31.720 --> 00:07:34.420
It helps AI foresee impact and avoid breaking

00:07:34.420 --> 00:07:36.379
things. Which is always good, less debugging.

00:07:36.620 --> 00:07:38.920
What about external info, like docs? Yeah, good

00:07:38.920 --> 00:07:41.420
question. Technique five is about retrieving

00:07:41.420 --> 00:07:44.910
web and API documentation. This comes from the

00:07:44.910 --> 00:07:48.310
Fetch and Context7 MCP ideas. The vision was

00:07:48.310 --> 00:07:50.329
an AI that could just pull the latest docs for

00:07:50.329 --> 00:07:53.149
any library, automatically. Instant up-to-date

00:07:53.149 --> 00:07:55.430
info. Okay, so for now, it's on us. For now,

00:07:55.529 --> 00:07:58.350
yeah. Instead of hoping the AI finds the right

00:07:58.350 --> 00:08:01.110
docs, or worse, makes something up. Hallucinates.

00:08:01.230 --> 00:08:03.230
Exactly. You go to the official docs yourself,

00:08:03.769 --> 00:08:05.449
copy the relevant bit, could be text, could be

00:08:05.449 --> 00:08:07.750
a code example, and paste that directly into

00:08:07.750 --> 00:08:09.529
your prompt. You're curating the info. Okay,

00:08:09.550 --> 00:08:12.620
so like... I'm using Chart.js version 4.4.

00:08:12.699 --> 00:08:15.220
Here's the official docs snippet for donut charts.
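For illustration, the pasted snippet might look roughly like this. It's a sketch from memory of the Chart.js v4 config shape (Chart.js spells it "doughnut"), so verify against the official documentation before relying on it:

```javascript
// Hypothetical sketch of a Chart.js v4 doughnut config of the kind
// you'd paste from the docs. Shape from memory -- verify before use.
const config = {
  type: 'doughnut',
  data: {
    labels: ['Sales', 'Marketing', 'Dev'],
    datasets: [{ data: [60, 25, 15] }],
  },
};
// In the browser you'd then render it with: new Chart(ctx, config);
console.log(config.data.datasets[0].data.reduce((a, b) => a + b, 0)); // prints 100
```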

00:08:16.199 --> 00:08:18.980
Then you paste the snippet. Right. And then based

00:08:18.980 --> 00:08:21.500
on this specific documentation, write the code

00:08:21.500 --> 00:08:25.100
for a donut chart showing this data. Sales 60,

00:08:25.360 --> 00:08:28.680
marketing 25, dev 15. You're giving it the ground

00:08:28.680 --> 00:08:31.360
truth. How does providing that specific curated

00:08:31.360 --> 00:08:34.379
info prevent those AI hallucinations? Well, it

00:08:34.379 --> 00:08:36.399
directly counters them. You're providing concrete

00:08:36.399 --> 00:08:39.100
facts from a trusted source. The AI doesn't have

00:08:39.100 --> 00:08:41.519
to guess or rely on possibly outdated training

00:08:41.519 --> 00:08:43.240
data. Right. It has the authoritative answer

00:08:43.240 --> 00:08:45.600
right there. Dramatically boosts accuracy and

00:08:45.600 --> 00:08:48.759
reliability. It provides concrete factual knowledge

00:08:48.759 --> 00:08:51.100
for the AI. Makes sense. Less time debugging

00:08:51.100 --> 00:08:54.200
the AI's errors, more time for creativity. Exactly.

00:08:54.960 --> 00:08:58.659
OK, next up, leveraging intelligent search. This

00:08:58.659 --> 00:09:01.720
stems from the Tavily MCP concept, a specialized

00:09:01.720 --> 00:09:04.379
search engine for developers that understood code

00:09:04.570 --> 00:09:06.789
problems. That sounds amazing. So how do we get

00:09:06.789 --> 00:09:09.009
close to that now? If you're using an AI with

00:09:09.009 --> 00:09:13.509
web browsing built in, like Gemini or the latest

00:09:13.509 --> 00:09:16.730
ChatGPT, you can guide its search. Don't just

00:09:16.730 --> 00:09:19.049
ask a generic question. Tell it how to search.

00:09:19.529 --> 00:09:22.190
Instruct it to search, synthesize, and pull answers

00:09:22.190 --> 00:09:24.529
from reliable sources only. Oh, OK. Like giving

00:09:24.529 --> 00:09:26.250
it research instructions. Pretty much. So you

00:09:26.250 --> 00:09:28.580
could say, I'm getting a CORS pre-flight

00:09:28.580 --> 00:09:31.620
request did not succeed error with AWS Lambda

00:09:31.620 --> 00:09:34.779
and API Gateway. Please search the web, specifically

00:09:34.779 --> 00:09:37.820
looking for official AWS documentation and reputable

00:09:37.820 --> 00:09:40.259
tech articles on configuring CORS for this setup.

00:09:40.720 --> 00:09:42.580
Then summarize the top three common solutions

00:09:42.580 --> 00:09:45.110
you find. OK, so it's targeted problem solving.

00:09:45.330 --> 00:09:46.830
What's the key difference between doing that

00:09:46.830 --> 00:09:49.370
and just Googling it myself? The synthesis part

00:09:49.370 --> 00:09:52.409
is key. The AI isn't just giving you links. Right.

00:09:52.549 --> 00:09:54.950
It's reading multiple sources, figuring out the

00:09:54.950 --> 00:09:57.250
common patterns, the most frequent fixes, and

00:09:57.250 --> 00:09:59.529
presenting a concise summary. Distilling it down.

00:09:59.590 --> 00:10:02.509
Yeah. It's a focused, solution-driven search,

00:10:02.590 --> 00:10:04.970
not just information retrieval. It's a focused,

00:10:05.190 --> 00:10:07.889
synthesis-driven search, specifically for solutions.

00:10:07.970 --> 00:10:09.710
That could save a lot of time sifting through

00:10:09.710 --> 00:10:12.490
search results. Definitely. OK, number seven.

00:10:13.210 --> 00:10:16.450
Connecting personal notes. Inspired by Obsidian

00:10:16.450 --> 00:10:20.470
MCP, the idea of an AI plugging directly into

00:10:20.470 --> 00:10:23.009
your personal knowledge base. Your notes, your

00:10:23.009 --> 00:10:25.529
requirements. Your brain, basically. Sort of,

00:10:25.610 --> 00:10:27.509
yeah. How do we do that, practically? You just

00:10:27.509 --> 00:10:30.019
copy and paste. Open your notes app, Obsidian,

00:10:30.200 --> 00:10:32.759
Notion, plain text, whatever. Find the notes

00:10:32.759 --> 00:10:34.620
relevant to what you're working on, copy them,

00:10:34.720 --> 00:10:37.120
paste them into the prompt. Give the AI a peek

00:10:37.120 --> 00:10:39.279
into your specific thoughts. Exactly. So you

00:10:39.279 --> 00:10:41.340
could write, here are my notes on the login page

00:10:41.340 --> 00:10:44.080
requirements, and paste your bullet points, email

00:10:44.080 --> 00:10:47.779
password fields, standard email validation, password

00:10:47.779 --> 00:10:51.360
minimum eight chars, Google sign-in button, then ask.

00:10:52.340 --> 00:10:55.000
Based only on these requirements, generate the

00:10:55.000 --> 00:10:58.299
basic HTML and CSS for this form. How does giving

00:10:58.299 --> 00:11:00.080
it these personal notes, which aren't really

00:11:00.080 --> 00:11:02.159
code, how does that personalize the AI's help?

00:11:02.580 --> 00:11:05.919
It makes the output truly specific to your project,

00:11:06.100 --> 00:11:09.279
your thinking. Often, the little details, the

00:11:09.279 --> 00:11:11.700
nuances, the specific constraints, they only

00:11:11.700 --> 00:11:15.059
live in those notes. Giving the AI that context

00:11:15.059 --> 00:11:17.200
helps to generate something that actually matches

00:11:17.200 --> 00:11:20.399
your vision, not just a generic template. AI

00:11:20.399 --> 00:11:23.460
gains insight into your unique project specifics

00:11:23.460 --> 00:11:25.580
and thoughts. That's really powerful for getting

00:11:25.580 --> 00:11:28.659
tailored results. Now, what about really complex

00:11:28.659 --> 00:11:31.340
tasks? Good transition. Let's talk about applying

00:11:31.340 --> 00:11:33.789
sequential thinking. This comes from the sequential

00:11:33.789 --> 00:11:37.809
thinking MCP idea, an AI that automatically breaks

00:11:37.809 --> 00:11:41.350
down big problems into logical steps, like how

00:11:41.350 --> 00:11:43.669
an experienced developer thinks. OK. How do we

00:11:43.669 --> 00:11:46.009
encourage that? This one's actually super easy

00:11:46.009 --> 00:11:48.429
to apply and really effective. You just explicitly

00:11:48.429 --> 00:11:51.710
ask the AI to think step by step or break down

00:11:51.710 --> 00:11:53.350
the problem first. Just tell it to slow down

00:11:53.350 --> 00:11:55.509
and show its work. Pretty much. It prompts it

00:11:55.509 --> 00:11:57.649
to use a more structured approach and actually

00:11:57.649 --> 00:11:59.929
explain its reasoning. So an example. You could

00:11:59.929 --> 00:12:02.190
say, I need to build a shopping cart feature.

00:12:02.350 --> 00:12:05.389
Please think step-by-step and outline a detailed

00:12:05.389 --> 00:12:08.970
plan. Include, one, the data structure for the

00:12:08.970 --> 00:12:13.889
cart. Two, key API endpoints needed. Add, remove,

00:12:14.070 --> 00:12:18.009
update. Three, the main client-side logic. Ah,

00:12:18.169 --> 00:12:21.549
okay. Forces it to be methodical. Does asking

00:12:21.549 --> 00:12:23.870
it to think step-by-step actually make the

00:12:23.870 --> 00:12:26.730
AI, like, smarter? or is something else going

00:12:26.730 --> 00:12:29.169
on? It's less about making it intrinsically smarter

00:12:29.169 --> 00:12:31.330
and more about forcing it to show its process

00:12:31.330 --> 00:12:33.990
and follow a logical path. Okay. It reduces the

00:12:33.990 --> 00:12:35.850
chance of it jumping to conclusions or skipping

00:12:35.850 --> 00:12:38.190
crucial steps. Right. It guides the AI towards

00:12:38.190 --> 00:12:40.470
outputs that are more structured, more reliable,

00:12:40.870 --> 00:12:43.070
easier for us to understand and verify. Less

00:12:43.070 --> 00:12:45.570
rework. It guides the AI toward more structured,

00:12:45.570 --> 00:12:47.909
reliable outputs. And more trustworthy code.
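As a sketch of what step one of that shopping-cart plan might produce, here's a hypothetical cart data structure in TypeScript. All names are illustrative, not from the transcript:

```typescript
// Hypothetical sketch: the data structure a step-by-step plan might
// start from, plus the first bit of step three's client-side logic.
interface CartItem {
  productId: string;
  unitPrice: number; // in cents, to avoid float rounding issues
  quantity: number;
}

interface Cart {
  items: CartItem[];
}

// A cart total is a natural first piece of client-side logic:
function cartTotal(cart: Cart): number {
  return cart.items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
}

const cart: Cart = {
  items: [{ productId: "sku-1", unitPrice: 500, quantity: 2 }],
};
console.log(cartTotal(cart)); // prints 1000
```

Asking the AI to outline structures like this before writing endpoints makes its later code much easier to verify.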

00:12:48.090 --> 00:12:49.990
Makes sense. Okay. Technique 9 deals with something

00:12:49.990 --> 00:12:52.450
critical. Handling environment variables securely.

00:12:52.759 --> 00:12:55.659
based on the EnvVault MCP concept, an AI server

00:12:55.659 --> 00:12:57.639
that helps access secrets and environment variables

00:12:57.639 --> 00:12:59.620
without ever exposing them. Super important.

00:12:59.779 --> 00:13:01.980
And the rule today is absolutely critical, right?

00:13:02.240 --> 00:13:06.799
Paramount. You never, ever paste API keys, passwords,

00:13:07.100 --> 00:13:10.500
any secrets directly into an AI prompt. Full

00:13:10.500 --> 00:13:13.399
stop. Never, ever. So what do we do? You instruct

00:13:13.399 --> 00:13:16.309
the AI. to write code that retrieves secrets

00:13:16.309 --> 00:13:18.750
from environment variables using standard methods,

00:13:19.250 --> 00:13:23.830
like process.env in Node or os.getenv in Python.

00:13:24.129 --> 00:13:27.090
OK. Tell it how to get the secret securely. Exactly.

00:13:27.269 --> 00:13:29.450
So a prompt might be, write a Python function

00:13:29.450 --> 00:13:32.850
to call the OpenAI API. Do not put the API key

00:13:32.850 --> 00:13:35.110
in the code. Read it from an environment variable

00:13:35.110 --> 00:13:37.909
called OPENAI_API_KEY. If that variable isn't set,

00:13:37.990 --> 00:13:40.090
the function should raise an error. Perfect.
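That prompt might yield something like this minimal Python sketch; the helper's name is illustrative:

```python
# Hypothetical sketch matching the prompt above: read the key from the
# environment instead of hard-coding it anywhere.
import os

def get_openai_key() -> str:
    """Fetch the API key from the environment; fail loudly if missing."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key
```

The actual API call would then use the returned value in a request header; the secret never appears in your source code or your prompt.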

00:13:40.289 --> 00:13:42.850
Keeps the secret safe, out of the code, out of

00:13:42.850 --> 00:13:46.009
the prompt. Ooh. Imagine securely managing secrets

00:13:46.009 --> 00:13:48.470
for like a billion AI queries automatically,

00:13:48.769 --> 00:13:51.190
never risking exposure. That scale is mind-boggling.

00:13:51.269 --> 00:13:53.529
It really is. But even on our scale, why is the

00:13:53.529 --> 00:13:56.269
security aspect just so vital right now? It's

00:13:56.269 --> 00:13:58.769
just fundamental secure coding practice. Hard

00:13:58.769 --> 00:14:00.769
coding secrets or accidentally pasting them into

00:14:00.769 --> 00:14:03.370
a prompt, that's how major breaches happen. Compromised

00:14:03.370 --> 00:14:06.730
accounts, stolen data. So teaching the AI and

00:14:06.730 --> 00:14:09.720
ourselves to use environment variables. properly

00:14:09.720 --> 00:14:12.200
protects that sensitive info. It prevents those

00:14:12.200 --> 00:14:14.600
potentially catastrophic, really costly mistakes.

00:14:15.039 --> 00:14:17.159
It's about building secure habits. It protects

00:14:17.159 --> 00:14:19.860
sensitive information, preventing costly data

00:14:19.860 --> 00:14:21.919
breaches. Absolutely priceless in the long run.

00:14:22.039 --> 00:14:24.509
OK, one more. Last one, number 10. Integrating

00:14:24.509 --> 00:14:28.250
API contracts. Inspired by the API Spec MCP, an AI

00:14:28.250 --> 00:14:30.750
that could read, say, an OpenAPI or Swagger file

00:14:30.750 --> 00:14:33.590
to know exactly how an API endpoint works. The

00:14:33.590 --> 00:14:36.210
precise structure, requests, responses. Exactly.

00:14:36.230 --> 00:14:38.250
So practically, we copy the relevant part of

00:14:38.250 --> 00:14:40.950
our spec file. That's the idea. Find the specific

00:14:40.950 --> 00:14:43.929
endpoint definition in your openapi.json or

00:14:43.929 --> 00:14:47.389
swagger.yaml. Copy that whole definition, the

00:14:47.389 --> 00:14:49.990
YAML or JSON block. Paste it directly into the

00:14:49.990 --> 00:14:52.610
prompt. Give the AI the undeniable contract for

00:14:52.610 --> 00:14:54.980
that endpoint. OK, so, like, I need a JavaScript

00:14:54.980 --> 00:14:57.559
function. Here's the OpenAPI 3.0 spec for the

00:14:57.559 --> 00:14:59.659
users/{userId} endpoint. Then paste the spec block.

00:14:59.759 --> 00:15:02.960
Right. Then, based strictly on this spec, write

00:15:02.960 --> 00:15:05.179
a fetch function called getUserById that takes

00:15:05.179 --> 00:15:07.960
userId. Make sure it handles the 200 success

00:15:07.960 --> 00:15:11.019
response and the 404 error case correctly, as

00:15:11.019 --> 00:15:14.720
defined. Got it. How does giving it that explicit

00:15:14.720 --> 00:15:18.799
API contract make the interaction more precise?
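A hedged sketch of what that function might look like, assuming a spec where GET /users/{userId} returns 200 with a user object or 404 when absent. The injectable fetchImpl parameter is an addition for testability, not something from the transcript:

```javascript
// Hypothetical sketch of getUserById against an assumed spec:
// GET /users/{userId} -> 200 with a user JSON body, or 404 if missing.
// fetchImpl is injectable so the function can run without a network.
async function getUserById(userId, fetchImpl = fetch) {
  const res = await fetchImpl(`/users/${encodeURIComponent(userId)}`);
  if (res.status === 200) return res.json();
  if (res.status === 404) return null; // user not found, per the spec
  throw new Error(`Unexpected status ${res.status}`);
}
```

Because the spec names both status codes, the AI (or you) can handle each branch explicitly instead of guessing.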

00:15:18.960 --> 00:15:21.039
It just removes all the ambiguity. The AI doesn't

00:15:21.039 --> 00:15:23.639
have to guess about parameters or what the response

00:15:23.639 --> 00:15:26.259
looks like or which status codes mean what. It

00:15:26.259 --> 00:15:28.580
has the blueprint. Exactly. The code it generates

00:15:28.580 --> 00:15:30.980
for talking to your API will be perfectly aligned

00:15:30.980 --> 00:15:34.100
with your backend. Fewer integration bugs, smoother

00:15:34.100 --> 00:15:37.919
development. It gives AI an exact blueprint for

00:15:37.919 --> 00:15:41.570
API communication. So that's 10 techniques, 10

00:15:41.570 --> 00:15:44.250
ways to give our AI better context, all inspired

00:15:44.250 --> 00:15:48.389
by those future-facing MCP server ideas. And

00:15:48.389 --> 00:15:51.490
the big idea, the main takeaway here, is pretty

00:15:51.490 --> 00:15:53.490
simple, really. You don't need to wait for those

00:15:53.490 --> 00:15:55.549
futuristic servers. You can effectively be the

00:15:55.549 --> 00:15:58.019
MCP server for your AI, right now. That's it,

00:15:58.059 --> 00:16:00.539
exactly. The power isn't in waiting for new tools,

00:16:00.639 --> 00:16:03.019
it's in how we use the tools we have. Adopting

00:16:03.019 --> 00:16:05.120
that context-first mindset every time you write

00:16:05.120 --> 00:16:06.919
a prompt, it changes everything. And when you

00:16:06.919 --> 00:16:09.240
start doing this, providing file structure, git

00:16:09.240 --> 00:16:11.879
history, that memory bank system prompt, the

00:16:11.879 --> 00:16:14.399
API specs, you will see a difference, a really

00:16:14.399 --> 00:16:16.539
noticeable difference. The quality of the AI's

00:16:16.539 --> 00:16:20.460
output goes way up, and your productivity. It

00:16:20.460 --> 00:16:22.419
really does get a significant boost. It's about

00:16:22.419 --> 00:16:25.159
having a richer, more informed, and just more

00:16:25.159 --> 00:16:27.960
reliable conversation with these tools. And maybe

00:16:27.960 --> 00:16:30.200
a final thought to leave you with. What if, by

00:16:30.200 --> 00:16:32.539
giving AI this deeper context, we're not just

00:16:32.539 --> 00:16:35.740
making the AI more useful, but we're also subtly

00:16:35.740 --> 00:16:38.240
changing how we approach problem solving, how

00:16:38.240 --> 00:16:39.720
we structure our own thoughts about the code?

00:16:40.080 --> 00:16:43.279
That's deep. But yeah, maybe. I'd say just try

00:16:43.279 --> 00:16:45.000
one or two of these techniques. Next time you're

00:16:45.000 --> 00:16:46.960
working with an AI, pick one, give it a shot,

00:16:47.159 --> 00:16:48.740
see how it feels, see what happens. Thank you

00:16:48.740 --> 00:16:50.940
for joining us for this deep dive on mastering

00:16:50.940 --> 00:16:53.379
AI context. We really hope it helps you build

00:16:53.379 --> 00:16:55.940
smarter, build faster, and maybe with a little

00:16:55.940 --> 00:16:56.559
less frustration.
