WEBVTT

00:00:00.000 --> 00:00:02.819
So if you've been using the latest AI models,

00:00:02.940 --> 00:00:05.799
maybe ChatGPT-5, you might have noticed something

00:00:05.799 --> 00:00:08.400
a bit weird. Yeah, like you know you're using

00:00:08.400 --> 00:00:10.480
this incredibly powerful tool, right? State of

00:00:10.480 --> 00:00:13.539
the art. Exactly. But the results, they can be

00:00:13.539 --> 00:00:16.059
all over the place. Sometimes amazing, sometimes

00:00:16.059 --> 00:00:19.300
not so much. Right. You get this brilliant piece

00:00:19.300 --> 00:00:22.300
of analysis one minute, and then next the answer

00:00:22.300 --> 00:00:26.260
feels kind of lazy, almost worse than the older

00:00:26.260 --> 00:00:28.940
model sometimes. It's confusing, isn't it? You're

00:00:28.940 --> 00:00:31.339
paying for the premium model expecting top quality

00:00:31.339 --> 00:00:35.200
every time, but it dips. So why is this super

00:00:35.200 --> 00:00:38.509
smart AI acting, well, kind of inconsistent? It's

00:00:38.509 --> 00:00:40.689
a fair question, and it feels backwards. But

00:00:40.689 --> 00:00:43.149
the issue isn't really the AI model itself. It

00:00:43.149 --> 00:00:46.229
is powerful. OK. The problem comes from a couple

00:00:46.229 --> 00:00:48.689
of big changes OpenAI made in how the whole system

00:00:48.689 --> 00:00:50.149
worked. Basically, the way we used to write prompts,

00:00:50.390 --> 00:00:53.229
it's tripping up this new setup. Oh, OK. So our

00:00:53.229 --> 00:00:55.170
old habits are the problem. Let's dig into that,

00:00:55.189 --> 00:00:57.109
then. Yeah. The goal today is really to give

00:00:57.109 --> 00:00:59.710
you, the listener, a clear strategy to get back

00:00:59.710 --> 00:01:02.609
to those consistently high quality results. Definitely.

00:01:03.250 --> 00:01:05.189
We're going to unpack these two main changes.

00:01:05.430 --> 00:01:07.849
First, this invisible router thing, and also

00:01:07.849 --> 00:01:11.129
how the AI now follows instructions very strictly.

00:01:11.409 --> 00:01:14.969
And then we'll jump into five specific tips.

00:01:15.409 --> 00:01:17.930
We'll start simple with easy nudges and build

00:01:17.930 --> 00:01:19.810
up to something we call the perfection loop.

00:01:20.609 --> 00:01:23.469
It's a bit more effort, but wow, the results.

00:01:23.609 --> 00:01:25.469
All right, let's get into it. Segment one. Yeah.

00:01:25.950 --> 00:01:28.829
Why the old ways are failing. This invisible

00:01:28.829 --> 00:01:31.260
router. What's that about? I remember you used

00:01:31.260 --> 00:01:33.219
to be able to pick different models. You did,

00:01:33.359 --> 00:01:36.120
yeah. You could choose GPT-5, maybe a thinking

00:01:36.120 --> 00:01:38.260
version. You felt like you had control. Right.

00:01:38.439 --> 00:01:41.739
Now the interface seems simpler. But behind the

00:01:41.739 --> 00:01:43.719
curtain, when you send your prompt, it hits this

00:01:43.719 --> 00:01:47.480
invisible router first. Think of it like an automatic

00:01:47.480 --> 00:01:49.760
traffic cop for your request. A traffic cop?

00:01:50.140 --> 00:01:52.420
But if I'm paying for the best model, why does

00:01:52.420 --> 00:01:54.120
it need routing? Shouldn't it just go to the

00:01:54.120 --> 00:01:56.280
best one? That's the logical thought, yeah. But

00:01:56.280 --> 00:01:59.180
here's the catch, and where the user frustration

00:01:59.180 --> 00:02:01.579
often comes in, those really powerful models,

00:02:01.659 --> 00:02:03.920
they take more computing power, they cost more

00:02:03.920 --> 00:02:07.719
for OpenAI to run. Ah, so the router is maybe

00:02:07.719 --> 00:02:11.699
optimizing for cost. You got it. It's often engineered

00:02:11.699 --> 00:02:13.879
to find the most efficient path, which usually

00:02:13.879 --> 00:02:16.500
means the fastest and cheapest AI model available

00:02:16.500 --> 00:02:19.449
that might be able to answer the query. So it's

00:02:19.449 --> 00:02:22.689
defaulting to good enough to save resources,

00:02:23.310 --> 00:02:26.370
even if I wanted great. Precisely. And that good

00:02:26.370 --> 00:02:29.469
enough model gives you those, well, less impressive

00:02:29.469 --> 00:02:32.370
inconsistent answers sometimes. That's the hidden

00:02:32.370 --> 00:02:34.949
mechanical reason for the quality dips. OK, that

00:02:34.949 --> 00:02:37.150
makes a lot of sense, actually. So that's the

00:02:37.150 --> 00:02:39.810
router. What about the other big shift, the training?

00:02:40.009 --> 00:02:41.830
Right, the second piece is how it's trained.

00:02:42.270 --> 00:02:45.210
ChatGPT-5 got a lot of training focused on

00:02:45.210 --> 00:02:49.229
serving AI agents. These are programs, basically,

00:02:49.389 --> 00:02:51.490
that need instructions followed perfectly. No

00:02:51.490 --> 00:02:53.590
mistakes, no guessing. OK, so it got really good

00:02:53.590 --> 00:02:56.289
at following orders exactly. Super good. But

00:02:56.289 --> 00:02:58.770
the flip side for a regular user like you or

00:02:58.770 --> 00:03:01.669
me, if your prompt is a bit vague or you leave

00:03:01.669 --> 00:03:04.349
things out, it's much worse now at trying to

00:03:04.349 --> 00:03:05.830
figure out what you meant. It won't fill in the

00:03:05.830 --> 00:03:07.870
gaps like the older ones might have. Nope. It

00:03:07.870 --> 00:03:10.150
sticks rigidly to what you typed. No implied

00:03:10.150 --> 00:03:13.060
instructions. That sounds demanding. It can be.

00:03:13.280 --> 00:03:15.099
And honestly, I still mess this up sometimes.

00:03:15.180 --> 00:03:17.599
You know, I rush a prompt, make it too simple.

00:03:17.800 --> 00:03:20.800
Happens to the best of us. Right. Just last week,

00:03:20.879 --> 00:03:23.039
I asked for a quick summary and it gave me this

00:03:23.039 --> 00:03:25.560
long definition of terms I obviously already

00:03:25.560 --> 00:03:27.900
knew. [Slight chuckle.] Took me a few tries to

00:03:27.900 --> 00:03:29.860
get it right. It catches you out if you're not

00:03:29.860 --> 00:03:32.900
precise. OK, so the router pushes towards cheapness

00:03:32.900 --> 00:03:36.639
and the AI demands exact instructions. So the

00:03:36.639 --> 00:03:39.500
big question then, how do we make the router

00:03:39.500 --> 00:03:42.500
pick the smart, expensive model when we need it?

00:03:42.620 --> 00:03:44.659
Ah, well, that's where our specific phrasing

00:03:44.659 --> 00:03:46.860
comes in. We have to signal it. Leading us nicely

00:03:46.860 --> 00:03:50.740
into the quick wins. Tip one. Tip one. Use router

00:03:50.740 --> 00:03:53.000
nudge phrases. This is probably the easiest thing

00:03:53.000 --> 00:03:56.080
you can do. Seriously low effort. OK. You just

00:03:56.080 --> 00:03:57.979
add a specific little phrase right at the end

00:03:57.979 --> 00:03:59.699
of your prompt. It's like a little flag for the

00:03:59.699 --> 00:04:02.560
router that says, hey, stop. This one needs actual

00:04:02.560 --> 00:04:04.860
thinking power. What are these magic words then?

00:04:05.180 --> 00:04:07.080
What tells the router to wake up the big guns?

00:04:07.439 --> 00:04:10.360
The ones that work consistently are phrases like,

00:04:10.800 --> 00:04:13.300
think carefully about this, or think deeply about

00:04:13.300 --> 00:04:15.419
this. Sometimes think hard about this works,

00:04:15.759 --> 00:04:18.980
too. Hmm. OK. Simple enough. Yeah. Using one

00:04:18.980 --> 00:04:21.360
of those basically forces the router to send

00:04:21.360 --> 00:04:23.720
your request to a more capable model, avoiding

00:04:23.720 --> 00:04:26.300
that quick, cheap default route we talked about.
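As a minimal sketch, appending one of these nudge phrases could be automated; the phrases are the ones quoted above, while the helper name is our own illustration:

```python
# Router-nudge phrases from the episode; appending one signals the router
# to send the request to a more capable model instead of the cheap default.
NUDGE_PHRASES = (
    "Think carefully about this.",
    "Think deeply about this.",
    "Think hard about this.",
)

def with_router_nudge(prompt: str, phrase: str = NUDGE_PHRASES[1]) -> str:
    """Append a nudge phrase as the final line of the prompt."""
    return f"{prompt.rstrip()}\n\n{phrase}"

nudged = with_router_nudge("Outline the pros and cons of relocating our office.")
```

The phrase goes at the end, as a plain command, with no emotional framing.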

00:04:26.480 --> 00:04:28.639
I can see that being useful. Like if I'm asking

00:04:28.639 --> 00:04:32.449
it to, say, outline pros and cons for a big decision,

00:04:32.810 --> 00:04:35.329
adding, think deeply about this, would probably

00:04:35.329 --> 00:04:38.050
get me a much better, more nuanced answer than

00:04:38.050 --> 00:04:40.290
just asking the question straight. Exactly. You

00:04:40.290 --> 00:04:41.990
should definitely use it for anything important.

00:04:42.189 --> 00:04:44.889
Business plans, you know, data analysis, drafting

00:04:44.889 --> 00:04:47.889
a critical email, things where quality really

00:04:47.889 --> 00:04:51.250
matters. Oh, and a quick heads up, don't use

00:04:51.250 --> 00:04:53.569
emotional language. Saying things like, this

00:04:53.569 --> 00:04:55.490
is really important to me doesn't seem to work.

00:04:55.730 --> 00:04:58.069
Right. It's a machine. It needs commands, not

00:04:58.069 --> 00:05:01.079
feelings. Precisely. Keep it clear. Keep it command

00:05:01.079 --> 00:05:03.779
-like. OK, so that's nudging the router. What

00:05:03.779 --> 00:05:06.060
about the output itself? Tip 2 is about length,

00:05:06.060 --> 00:05:08.199
right? Yeah, controlling the output length. Because

00:05:08.199 --> 00:05:10.879
the router also kind of defaults to shorter answers,

00:05:11.060 --> 00:05:13.040
again, to save processing time. So we need to

00:05:13.040 --> 00:05:15.120
be explicit about how much text we actually want.

00:05:15.240 --> 00:05:17.439
Makes sense. So for short stuff, like just the

00:05:17.439 --> 00:05:19.740
key points. Right. For a short output, you'd

00:05:19.740 --> 00:05:22.040
say something like, summarize the main points

00:05:22.040 --> 00:05:25.100
in under 100 words. Perfect for maybe a quick

00:05:25.100 --> 00:05:28.120
project update email or, you know, a tweet. Okay,

00:05:28.319 --> 00:05:31.199
And if I need a bit more, like the main points

00:05:31.199 --> 00:05:33.920
plus some background? That's your medium output.

00:05:33.920 --> 00:05:37.339
Try asking it to explain this topic in about

00:05:37.339 --> 00:05:40.019
three to five short paragraphs. That works well

00:05:40.019 --> 00:05:42.040
for explaining something like why your website's

00:05:42.040 --> 00:05:44.120
click-through rate dropped, maybe to your team.

00:05:44.120 --> 00:05:47.199
Gives context. And then the big stuff: full reports,

00:05:47.199 --> 00:05:50.800
articles, detailed full output. Be specific: provide

00:05:50.800 --> 00:05:53.519
a detailed and full analysis of around 600 to 800

00:05:53.519 --> 00:05:56.420
words, or even more, like write a comprehensive

00:05:56.420 --> 00:05:59.220
guide of about 1,200 words. That gets you the

00:05:59.220 --> 00:06:01.620
full document. Numbers seem key here. Specific

00:06:01.620 --> 00:06:03.879
word counts or paragraph counts. Absolutely.
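These length commands can live in a small lookup, a rough sketch using the phrasings from above (the dictionary and helper are our own naming):

```python
# Explicit length commands, since the router tends to default to short answers.
LENGTH_COMMANDS = {
    "short": "Summarize the main points in under 100 words.",
    "medium": "Explain this topic in about three to five short paragraphs.",
    "long": "Provide a detailed and full analysis of around 600 to 800 words.",
}

def with_length(prompt: str, size: str) -> str:
    """Append the length command matching the desired output size."""
    return f"{prompt.rstrip()}\n\n{LENGTH_COMMANDS[size]}"

update = with_length("Describe why our click-through rate dropped.", "medium")
```

A text expander achieves the same thing interactively; this is the scripted equivalent.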

00:06:04.199 --> 00:06:07.319
And a little pro tip here, use a text expander

00:06:07.319 --> 00:06:09.819
app. You can save these length commands like

00:06:09.819 --> 00:06:12.519
summarize under 100 words and just type a short

00:06:12.519 --> 00:06:15.300
code like S100 to insert the whole phrase. Saves

00:06:15.300 --> 00:06:18.350
a ton of time. Nice. Okay, so we can nudge the

00:06:18.350 --> 00:06:20.550
router and control the length, but what about

00:06:20.550 --> 00:06:22.610
improving the actual prompt before we send it?

00:06:22.689 --> 00:06:24.649
How do we make the prompt itself better? Great

00:06:24.649 --> 00:06:27.670
question. That leads us to using a meta prompt.

00:06:28.269 --> 00:06:30.750
We basically get the AI to help us write a better

00:06:30.750 --> 00:06:33.209
prompt for itself. Okay, tip three, the meta

00:06:33.209 --> 00:06:36.189
prompt optimizer. You're saying we can use the

00:06:36.189 --> 00:06:39.329
AI to improve our instructions for the AI. Exactly.

00:06:39.490 --> 00:06:41.850
It sounds a bit circular, maybe, but it works

00:06:41.850 --> 00:06:44.209
incredibly well. Most people don't know OpenAI

00:06:44.209 --> 00:06:46.870
has internal tools for this, but we can kind

00:06:46.870 --> 00:06:49.310
of replicate it with a specific type of prompt.

00:06:49.470 --> 00:06:52.889
A meta prompt. Yeah. Which is just a prompt about

00:06:52.889 --> 00:06:55.589
prompting. Yep. You're telling the AI to act

00:06:55.589 --> 00:06:58.709
like a prompt expert and fix your request. So

00:06:58.709 --> 00:07:01.649
wait, we're asking the AI to critique our prompting

00:07:01.649 --> 00:07:04.910
skills. Is that really necessary for, like, just

00:07:04.910 --> 00:07:07.569
asking for a blog post idea? Well, maybe not

00:07:07.569 --> 00:07:10.129
for every single query. But remember how strict

00:07:10.129 --> 00:07:12.170
it is now? Write the exact instruction following.

00:07:12.329 --> 00:07:14.430
By having it analyze your prompt first, you force

00:07:14.430 --> 00:07:17.290
it to clarify everything. You guarantee the instructions

00:07:17.290 --> 00:07:19.569
it finally acts on are super clear and detailed.

00:07:19.910 --> 00:07:21.970
The command is usually something like, you are

00:07:21.970 --> 00:07:24.509
an expert prompt engineer, analyze the weak points

00:07:24.509 --> 00:07:26.589
of my original prompt below, and then rewrite

00:07:26.589 --> 00:07:28.769
it to be much clearer and more effective. Hmm,

00:07:29.009 --> 00:07:32.430
okay. I can see how that would force clarity.

00:07:32.790 --> 00:07:35.170
Do you have an example? Sure, think about a vague

00:07:35.170 --> 00:07:38.449
request. Write a blog post about working from

00:07:38.449 --> 00:07:40.949
home effectively. Pretty standard prompt. Right.

00:07:41.310 --> 00:07:43.250
But the meta-prompt process might turn that

00:07:43.250 --> 00:07:45.949
into something much richer. The rewritten prompt

00:07:45.949 --> 00:07:48.689
would likely define the AI's role, like you are

00:07:48.689 --> 00:07:51.810
a productivity expert. Specify the target audience,

00:07:52.129 --> 00:07:54.490
remote workers new to the concept. Outline the

00:07:54.490 --> 00:07:57.790
required structure. Intro, three main tips with

00:07:57.790 --> 00:08:01.230
examples, conclusion, and set the tone, encouraging

00:08:01.230 --> 00:08:04.449
and practical. Wow, okay. That's way more specific.
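A minimal sketch of that meta-prompt wrapper, using the command phrasing quoted above (the template and function names are our own):

```python
# A reusable meta-prompt: ask the model to critique and rewrite the
# original prompt before anything is acted on.
META_PROMPT = (
    "You are an expert prompt engineer. Analyze the weak points of my "
    "original prompt below, and then rewrite it to be much clearer and "
    "more effective.\n\n"
    "Original prompt:\n{original}"
)

def build_meta_prompt(original: str) -> str:
    """Wrap a vague prompt in the expert-prompt-engineer instruction."""
    return META_PROMPT.format(original=original)

meta = build_meta_prompt("Write a blog post about working from home effectively.")
```

You would send `meta` first, then use the rewritten prompt it returns for the real request.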

00:08:04.610 --> 00:08:06.610
The difference in output quality must be huge.

00:08:06.810 --> 00:08:09.230
Night and day. That detailed prompt just gives

00:08:09.230 --> 00:08:11.269
the AI so much more to work with accurately.

00:08:11.649 --> 00:08:13.670
And if you start doing this meta-prompting regularly,

00:08:13.889 --> 00:08:15.529
I guess your prompts naturally start looking

00:08:15.529 --> 00:08:18.149
more structured. Which brings us to tip four.

00:08:18.490 --> 00:08:21.930
the XML sandwich. Exactly. You'll notice those

00:08:21.930 --> 00:08:24.110
improved prompts often use these angle brackets

00:08:24.110 --> 00:08:27.769
like <tag> or <context>. These look like XML

00:08:27.769 --> 00:08:30.589
tags. OpenAI actually recommends this structure

00:08:30.589 --> 00:08:33.309
internally. It helps organize complex instructions.

00:08:33.830 --> 00:08:35.590
I like the analogy you used before thinking of

00:08:35.590 --> 00:08:38.350
them like clearly labeled boxes. Instead of just

00:08:38.350 --> 00:08:40.309
a messy paragraph of instructions, you're giving

00:08:40.309 --> 00:08:43.149
it distinct blocks of information. Audience,

00:08:43.309 --> 00:08:46.509
tone, goal. Precisely. It breaks it down logically.

00:08:47.070 --> 00:08:49.909
The AI processes structured information really

00:08:49.909 --> 00:08:52.139
well. Let's take an example. maybe asking for

00:08:52.139 --> 00:08:55.100
a study plan. Right. A bad prompt is just, make

00:08:55.100 --> 00:08:58.360
me an IELTS study plan, I need band 7. Vague.

00:08:58.980 --> 00:09:01.779
A good structured prompt would use tags: <current_level>band

00:09:01.779 --> 00:09:04.340
6</current_level>, <target_score>band

00:09:04.679 --> 00:09:08.000
7.5</target_score>, <weak_areas>speaking fluency,

00:09:08.559 --> 00:09:11.259
complex grammar</weak_areas>, <study_time>10 hours

00:09:11.259 --> 00:09:13.960
/week</study_time>. Much clearer. Each piece of

00:09:13.960 --> 00:09:16.279
info is neatly packaged. And the result you get

00:09:16.279 --> 00:09:18.639
back reflects that clarity. Instead of generic

00:09:18.639 --> 00:09:20.919
advice, you'll likely get a detailed table, maybe

00:09:20.919 --> 00:09:23.259
in markdown format, with specific activities

00:09:23.259 --> 00:09:26.179
scheduled like practice part two cue cards, 30

00:09:26.179 --> 00:09:29.399
minutes, or shadow native speaker audio, 20 minutes.
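Assembling that kind of tagged prompt is easy to script; here is a rough sketch (the `xml_sandwich` helper and its keyword-to-tag convention are our own illustration):

```python
def xml_sandwich(task: str, **fields: str) -> str:
    """Wrap each labeled field in its own XML-style tag, then append the task."""
    blocks = "\n".join(f"<{name}>{value}</{name}>" for name, value in fields.items())
    return f"{blocks}\n\n{task}"

plan_prompt = xml_sandwich(
    "Create a weekly IELTS study plan as a markdown table.",
    current_level="band 6",
    target_score="band 7.5",
    weak_areas="speaking fluency, complex grammar",
    study_time="10 hours/week",
)
```

Each piece of information lands in its own clearly labeled box, exactly the "labeled boxes" idea from the analogy.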

00:09:29.580 --> 00:09:31.779
It becomes actionable. Yeah, that structure is

00:09:31.779 --> 00:09:34.100
powerful, especially I imagine if you're setting

00:09:34.100 --> 00:09:36.320
up custom instructions in chat GPT or building

00:09:36.320 --> 00:09:38.980
your own custom GPTs. Definitely essential for

00:09:38.980 --> 00:09:41.720
those. OK, so we've covered nudging, length,

00:09:42.659 --> 00:09:44.639
optimizing the prompt itself with meta prompts

00:09:44.639 --> 00:09:47.080
and structuring it with XML. This is all about

00:09:47.019 --> 00:09:50.679
crafting the input. But how do we kind of guarantee

00:09:50.679 --> 00:09:53.240
the output quality before we even see it? Ah,

00:09:53.240 --> 00:09:56.500
that's the final piece. We embed quality control

00:09:56.500 --> 00:09:59.799
inside the prompt itself. We make the AI check

00:09:59.799 --> 00:10:02.500
its own work. Interesting. Let's take a quick

00:10:02.500 --> 00:10:05.179
break and then dive into that final tip. Sounds

00:10:05.179 --> 00:10:08.240
good. [Mid-roll sponsor read.] We'll be right

00:10:08.240 --> 00:10:11.259
back after the break. All right, we're back.

00:10:11.440 --> 00:10:14.220
We were just talking about making the AI guarantee

00:10:14.220 --> 00:10:17.240
its own quality. What's the final tip? Tip five,

00:10:17.580 --> 00:10:19.639
the perfection loop. Now this one takes a bit

00:10:19.639 --> 00:10:20.980
more thought when you're writing the prompt,

00:10:21.039 --> 00:10:23.659
but the payoff is potentially huge. Okay, the

00:10:23.659 --> 00:10:25.720
perfection loop. How does it work? Instead of

00:10:25.720 --> 00:10:27.759
just asking for the output and hoping it's good,

00:10:27.980 --> 00:10:30.659
you first instruct the AI to define what a perfect

00:10:30.659 --> 00:10:32.879
result would even look like for this specific

00:10:32.879 --> 00:10:35.299
request. So you make it create its own checklist

00:10:35.299 --> 00:10:38.740
first, based on my goals. That's clever. Exactly.

00:10:38.899 --> 00:10:41.539
You tell it, okay, first... Figure out the criteria

00:10:41.539 --> 00:10:44.299
for a perfect answer here. Maybe that's uniqueness,

00:10:44.740 --> 00:10:47.679
clarity, engaging tone, fits the brand voice,

00:10:47.799 --> 00:10:50.679
whatever is relevant. Then you tell it, draft

00:10:50.679 --> 00:10:53.519
an answer. Then use your own checklist to grade

00:10:53.519 --> 00:10:56.279
that draft. Keep refining it internally until

00:10:56.279 --> 00:10:59.340
it scores a 10 out of 10. Only then show me the

00:10:59.340 --> 00:11:01.519
final result. Whoa. So it does the drafting,

00:11:01.840 --> 00:11:04.019
the critiquing, the editing all internally before

00:11:04.019 --> 00:11:06.820
I even see anything. Yep. You essentially push

00:11:06.820 --> 00:11:09.840
the quality-control step back onto the AI before

00:11:09.840 --> 00:11:11.559
it delivers. You only get the polished version.
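As a sketch, the perfection-loop instruction is just another block appended to the prompt; the wording below paraphrases the steps described above, and the helper name is our own:

```python
# The perfection loop: the model defines its own checklist, drafts,
# grades, and refines internally before showing anything.
PERFECTION_LOOP = (
    "First, define a checklist of criteria for a perfect answer to this "
    "request. Then draft an answer, grade the draft against your own "
    "checklist, and keep refining it internally until it scores 10/10. "
    "Only then show me the final result."
)

def with_perfection_loop(prompt: str) -> str:
    """Append the self-grading loop instruction to a prompt."""
    return f"{prompt.rstrip()}\n\n{PERFECTION_LOOP}"

strategy_prompt = with_perfection_loop(
    "Generate a Q4 content strategy for a B2B SaaS company."
)
```

The drafting, critiquing, and editing all happen inside one model call; you only ever see the polished version.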

00:11:11.740 --> 00:11:14.039
That's kind of amazing. So, for example, if I

00:11:14.039 --> 00:11:16.279
needed a content strategy. You could tell it.

00:11:16.659 --> 00:11:18.759
First, define the key elements of a successful

00:11:18.759 --> 00:11:22.120
Q4 content strategy for a B2B SaaS company. Make

00:11:22.120 --> 00:11:25.340
a checklist. Then, generate the strategy, review

00:11:25.340 --> 00:11:27.620
it against your checklist, revise it until it's

00:11:27.620 --> 00:11:30.419
perfect, and then show me the final 10/10 strategy.

00:11:30.799 --> 00:11:32.639
It assesses itself on things like scalability,

00:11:32.759 --> 00:11:37.029
tone, alignment with goals. Or, say, for a YouTube

00:11:37.029 --> 00:11:40.570
script. Create a checklist for an engaging tutorial

00:11:40.570 --> 00:11:44.210
script. Hook, clear steps, visual cues, call

00:11:44.210 --> 00:11:47.149
to action, friendly tone. Write the script, grade

00:11:47.149 --> 00:11:50.929
it, rewrite it until perfect, then output. Wow,

00:11:51.190 --> 00:11:53.269
just thinking about that. Imagine scaling that

00:11:53.269 --> 00:11:56.090
kind of internal self -correction across, I don't

00:11:56.090 --> 00:11:58.230
know, millions of tasks. It's not just prompting

00:11:58.230 --> 00:12:00.480
anymore, that's like... managing an automated

00:12:00.480 --> 00:12:03.080
quality process. It really shifts the dynamic,

00:12:03.279 --> 00:12:05.000
doesn't it? It's best for those really complex,

00:12:05.200 --> 00:12:07.379
high -stakes tasks. You know, writing entire

00:12:07.379 --> 00:12:09.360
business plans, generating code that needs to

00:12:09.360 --> 00:12:12.440
work first time, crafting really long, important

00:12:12.440 --> 00:12:14.720
documents. Okay, so that's the fifth tip, the

00:12:14.720 --> 00:12:17.679
perfection loop. Now, the big takeaway here seems

00:12:17.679 --> 00:12:19.700
to be that these tips aren't really isolated

00:12:19.700 --> 00:12:22.159
tricks. Not at all. The real power comes when

00:12:22.159 --> 00:12:24.120
you start combining them. You layer them together

00:12:24.120 --> 00:12:26.259
to create what we sometimes call a super prompt.

00:12:26.399 --> 00:12:28.500
Right. Can we walk through building one, maybe

00:12:28.500 --> 00:12:30.399
for that project proposal example you mentioned,

00:12:30.779 --> 00:12:32.799
a fashion retail app? Sure. Perfect example.

00:12:33.120 --> 00:12:35.019
So first we'd start with structure. Tip four,

00:12:35.159 --> 00:12:37.860
we use those XML-style tags. Okay, like <company_context>

00:12:37.860 --> 00:12:39.559
...</company_context>, <problem_statement>...

00:12:39.679 --> 00:12:42.200
</problem_statement>, and then maybe tags for each

00:12:42.200 --> 00:12:44.639
required section of the proposal. Section one

00:12:44.639 --> 00:12:46.679
introduction, section two: solution, et cetera,

00:12:47.100 --> 00:12:49.700
laying it all out clearly. Exactly. Then we add

00:12:49.700 --> 00:12:53.480
tip two. Length control. We specify this proposal

00:12:53.480 --> 00:12:56.059
should be detailed and complete, around 1,000

00:12:56.059 --> 00:12:58.860
to 1,200 words. Give it a clear target. Got it.

00:12:59.120 --> 00:13:01.379
Structure, then length. What's next? Now we bring

00:13:01.379 --> 00:13:04.379
in the big one. Tip five, the perfection loop.

00:13:04.820 --> 00:13:07.679
We'd add instructions like, before writing, define

00:13:07.679 --> 00:13:09.899
an internal checklist for what makes a project

00:13:09.899 --> 00:13:12.559
proposal highly persuasive and likely to be approved.

00:13:13.200 --> 00:13:15.799
Grade your draft against this checklist. Refine

00:13:15.799 --> 00:13:18.019
it until it achieves a 10/10 internal score,

00:13:18.320 --> 00:13:21.039
then present the final version. Okay, so structure,

00:13:21.220 --> 00:13:23.919
length, internal quality control. Are we missing

00:13:23.919 --> 00:13:26.340
anything? Just the final touch. Tip one, the

00:13:26.340 --> 00:13:29.059
router nudge. Right at the very end of the entire

00:13:29.059 --> 00:13:31.620
prompt we add, think very carefully about this.
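That stacking order can be sketched in code; everything here mirrors the walkthrough above, with the function name, tag convention, and example values being our own illustration:

```python
def super_prompt(task: str, length_cmd: str, **fields: str) -> str:
    """Stack structure (tip 4), length (tip 2), the perfection loop (tip 5),
    and a router nudge (tip 1) into a single prompt."""
    structure = "\n".join(f"<{k}>{v}</{k}>" for k, v in fields.items())
    loop = (
        "Before writing, define an internal checklist for what makes this "
        "highly persuasive. Grade your draft against it and refine until "
        "it scores 10/10, then present the final version."
    )
    nudge = "Think very carefully about this."
    return "\n\n".join([structure, task, length_cmd, loop, nudge])

proposal = super_prompt(
    "Write a project proposal for the app described above.",
    "The proposal should be detailed and complete, around 1,200 words.",
    company_context="fashion retail startup",
    problem_statement="customers abandon carts on mobile",
)
```

The nudge phrase deliberately comes last, so it is the final signal the router sees.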

00:13:31.860 --> 00:13:33.820
Ah, the signal to use the powerful model. Yep,

00:13:33.940 --> 00:13:35.840
so you see how it all stacks together. Structure

00:13:35.840 --> 00:13:37.879
provides clarity, length sets boundaries, the

00:13:37.879 --> 00:13:39.620
perfection loop ensures quality, and the nudge

00:13:39.620 --> 00:13:41.539
phrase makes sure the right brain is doing the

00:13:41.539 --> 00:13:45.279
work. That combination feels really robust. It

00:13:45.279 --> 00:13:47.659
addresses both the router issue and the strict

00:13:47.659 --> 00:13:50.480
instruction following. It gives the AI everything

00:13:50.480 --> 00:13:53.700
it needs to succeed based on its new rules. OK,

00:13:53.840 --> 00:13:56.080
so wrapping this all up, what's the main thing

00:13:56.080 --> 00:13:57.740
people should take away from this deep dive?

00:13:58.299 --> 00:14:00.299
I think the core idea is pretty straightforward.

00:14:01.039 --> 00:14:04.759
The era of just typing vague one-sentence prompts

00:14:04.759 --> 00:14:07.559
and hoping for the best? That's kind of over,

00:14:07.620 --> 00:14:09.700
at least for consistently high quality with models

00:14:09.700 --> 00:14:13.320
like GPT-5. Yes, the AI is more powerful. But

00:14:13.320 --> 00:14:15.960
that power comes with these new conditions, the

00:14:15.960 --> 00:14:18.799
invisible router, the need for absolute clarity.

00:14:19.080 --> 00:14:21.460
So our approach has to adapt. It's less about

00:14:21.460 --> 00:14:23.860
needing fancier tech and more about us being

00:14:23.860 --> 00:14:26.340
better communicators. More structured in how we

00:14:26.340 --> 00:14:29.259
ask. Exactly. Clear instructions, better organization.

00:14:29.419 --> 00:14:31.600
You can start small, you know, just add those

00:14:31.600 --> 00:14:33.539
router nudges and specify the length. That alone

00:14:33.539 --> 00:14:35.700
makes a difference. Right. Easy wins first. Then

00:14:35.700 --> 00:14:37.879
as you get comfortable, start playing with the

00:14:37.879 --> 00:14:40.500
meta prompts, use the XML structure for complex

00:14:40.500 --> 00:14:43.220
requests, and try out that perfection loop for

00:14:43.220 --> 00:14:45.779
really crucial tasks. It feels like these techniques

00:14:45.779 --> 00:14:48.360
aren't just about getting a better answer from

00:14:48.360 --> 00:14:50.960
the AI. They're also about building better processes.

00:14:51.240 --> 00:14:53.539
for working with the AI. That's a great way to

00:14:53.539 --> 00:14:56.019
put it. It's about setting up an effective partnership.

00:14:56.740 --> 00:14:59.360
You define the standards very clearly, and the

00:14:59.360 --> 00:15:03.059
AI has the power to meet them precisely. So here's

00:15:03.059 --> 00:15:04.799
a final thought to leave our listeners with,

00:15:05.159 --> 00:15:07.360
something to mull over. We've talked about how

00:15:07.360 --> 00:15:10.840
this AI is trained to follow exact commands perfectly

00:15:10.840 --> 00:15:14.539
now. Super literally. What do you think happens

00:15:14.539 --> 00:15:17.120
if you give it two instructions that are perfectly

00:15:17.120 --> 00:15:20.679
clear, perfectly structured, but they directly

00:15:20.679 --> 00:15:23.179
conflict with each other. What does forcing that

00:15:23.179 --> 00:15:26.399
kind of logical paradox make the system prioritize?

00:15:26.899 --> 00:15:29.080
Something to maybe experiment with. Ooh, that's

00:15:29.080 --> 00:15:32.340
a fascinating question. What breaks first? Definitely

00:15:32.340 --> 00:15:34.840
something to try. But for now, maybe just try

00:15:34.840 --> 00:15:36.860
combining those first couple of tips, the nudge

00:15:36.860 --> 00:15:38.759
and the length control, see what happens. Good

00:15:38.759 --> 00:15:40.200
starting point. Thanks for breaking all this

00:15:40.200 --> 00:15:42.320
down. My pleasure. Always fun talking about this

00:15:42.320 --> 00:15:42.460
stuff.
