WEBVTT

00:00:00.000 --> 00:00:01.960
You know, a lot of us have had this exact feeling.

00:00:02.720 --> 00:00:05.179
You open up a powerful AI tool, something like

00:00:05.179 --> 00:00:08.140
ChatGPT or Claude, you type in a question, maybe

00:00:08.140 --> 00:00:10.599
something complex, something important, and the

00:00:10.599 --> 00:00:12.279
answer that comes back is just, well, it's flat.

00:00:13.570 --> 00:00:15.470
Generic. Yeah, it's generic. It doesn't sound

00:00:15.470 --> 00:00:17.070
like your company. It definitely doesn't sound

00:00:17.070 --> 00:00:18.910
like you. You hear this all the time, and people blame

00:00:18.910 --> 00:00:21.230
the tech. They think the models themselves just

00:00:21.230 --> 00:00:24.129
don't have real insight. But the core discovery

00:00:24.129 --> 00:00:25.789
from the sources you shared about the art of

00:00:25.789 --> 00:00:28.429
asking is that it's something totally different. Okay,

00:00:28.570 --> 00:00:31.230
let's unpack that because the sources are suggesting

00:00:31.230 --> 00:00:34.210
the problem isn't the AI's intelligence at all.

00:00:34.210 --> 00:00:37.049
It's our interaction style. It's that we're talking

00:00:37.049 --> 00:00:40.030
to this incredibly powerful tool like it's a

00:00:40.030 --> 00:00:42.939
search bar. Exactly. Welcome to the deep dive.

00:00:43.359 --> 00:00:45.039
If you treat AI like Google, you're going to

00:00:45.039 --> 00:00:47.420
get generalized Google results. We have to start

00:00:47.420 --> 00:00:50.060
treating it like a specialized teammate. I mean,

00:00:50.079 --> 00:00:53.500
think of it like this. It's an eager, brilliant,

00:00:54.140 --> 00:00:58.020
super fast intern who is just desperate to please

00:00:58.020 --> 00:01:01.479
you. But. And this is the key part, an intern

00:01:01.479 --> 00:01:04.400
who absolutely hates saying, I don't know. That

00:01:04.400 --> 00:01:07.879
eager intern analogy feels so right. So our mission

00:01:07.879 --> 00:01:10.840
today is to distill this massive shift in approach.

00:01:11.140 --> 00:01:13.599
We're moving from treating AI like an oracle

00:01:13.599 --> 00:01:17.480
to treating it as this highly capable but kind

00:01:17.480 --> 00:01:19.719
of naive collaborator. And we're going to dive

00:01:19.719 --> 00:01:22.659
into five simple, really powerful techniques

00:01:22.659 --> 00:01:25.299
that experts use to get intelligent, personalized,

00:01:25.739 --> 00:01:28.439
and most importantly, reliably accurate results.

00:01:28.840 --> 00:01:31.799
This is all about something we call context engineering.

00:01:32.079 --> 00:01:34.719
Let's start right there. What is the fundamental

00:01:34.719 --> 00:01:36.700
problem we're trying to solve with this idea

00:01:36.700 --> 00:01:38.810
of context engineering? It's that eager intern

00:01:38.810 --> 00:01:41.769
problem you just mentioned. The AI is fundamentally

00:01:41.769 --> 00:01:44.209
programmed for helpfulness. So in its training,

00:01:44.450 --> 00:01:47.430
providing an answer, any answer, is prioritized

00:01:47.430 --> 00:01:49.969
way above accuracy or even honesty. So if you

00:01:49.969 --> 00:01:52.650
give it vague instructions, it has to guess the

00:01:52.650 --> 00:01:54.829
context because it just can't tolerate silence.

00:01:54.950 --> 00:01:57.430
It has to respond. And that guessing game, that's

00:01:57.430 --> 00:01:59.409
what leads to the biggest headache in using AI

00:01:59.409 --> 00:02:01.730
today. That's right. If the AI doesn't have the

00:02:01.730 --> 00:02:03.790
context that's in your head, it just makes things

00:02:03.790 --> 00:02:06.010
up. I mean, that's the literal definition of

00:02:06.010 --> 00:02:08.810
hallucinations. The AI is generating fiction

00:02:08.810 --> 00:02:10.870
because it's trying so hard to please you with

00:02:10.870 --> 00:02:13.770
a response. It just, it fills in the gaps. So

00:02:13.770 --> 00:02:16.370
if prompt engineering sounds a little too technical

00:02:16.370 --> 00:02:18.909
for people, we should think of context engineering

00:02:18.909 --> 00:02:22.569
as just bridging that gap. Bridging the gap between

00:02:22.569 --> 00:02:25.030
what's in our brain and what the AI needs to

00:02:25.030 --> 00:02:27.990
know to actually succeed. Precisely. If I walk

00:02:27.990 --> 00:02:30.610
up to you and just say, write a budget, you have

00:02:30.610 --> 00:02:33.250
to know who it's for, what the goal is, the time

00:02:33.250 --> 00:02:35.590
frame. Context engineering is just making sure

00:02:35.590 --> 00:02:37.750
you transfer all those constraints to the AI

00:02:37.750 --> 00:02:40.909
before it starts writing. So if that eager intern

00:02:40.909 --> 00:02:44.139
is so prone to guessing when we're vague, what's

00:02:44.139 --> 00:02:46.840
the biggest risk of providing insufficient context?

00:02:47.099 --> 00:02:49.699
Guesswork directly leads to hallucinations, because

00:02:49.699 --> 00:02:52.719
the AI will simply invent details to satisfy

00:02:52.719 --> 00:02:55.460
the request. That makes perfect sense. Okay,

00:02:55.759 --> 00:02:57.539
let's get into the first technique, which seems

00:02:57.539 --> 00:03:00.460
to directly address the speed and, frankly, the

00:03:00.460 --> 00:03:02.840
recklessness of this intern. Chain of thought

00:03:02.840 --> 00:03:06.349
reasoning. This is so critical. When an AI generates

00:03:06.349 --> 00:03:09.370
a response, it's not planning a full essay first.

00:03:09.669 --> 00:03:12.949
It's predicting one single word at a time based

00:03:12.949 --> 00:03:14.969
on the words that came before it. It's like speaking

00:03:14.969 --> 00:03:17.710
without really thinking deeply. Right. It's like

00:03:17.710 --> 00:03:19.530
a student just shouting out the first answer

00:03:19.530 --> 00:03:22.030
that pops into their head for a hard math problem.

00:03:22.110 --> 00:03:24.770
And that's often the wrong one. We need to force

00:03:24.770 --> 00:03:27.259
it to be more deliberate. We have to force the

00:03:27.259 --> 00:03:30.199
AI to show its work. And you don't need code

00:03:30.199 --> 00:03:33.159
for this. You just need to add one magic sentence

00:03:33.159 --> 00:03:35.819
to your instructions. Before you answer, please

00:03:35.819 --> 00:03:37.639
walk me through your thought process step by

00:03:37.639 --> 00:03:39.780
step. And I find this fascinating because the

00:03:39.780 --> 00:03:42.379
sources noted that making the AI write down its

00:03:42.379 --> 00:03:45.080
plan actually makes the final output smarter.

00:03:45.659 --> 00:03:48.039
It literally uses those planning words to predict

00:03:48.039 --> 00:03:50.240
better words that come after. It's like it's

00:03:50.240 --> 00:03:52.300
building a higher quality scaffold for its own

00:03:52.300 --> 00:03:54.610
answer. It really does. Let's use a real-life

00:03:54.610 --> 00:03:57.449
example. Planning a budget trip to Japan. A bad

00:03:57.449 --> 00:04:00.030
prompt is just, plan a five-day trip to Tokyo.

00:04:00.189 --> 00:04:03.090
Make it cheap. The AI will just spit out a generic

00:04:03.090 --> 00:04:05.250
itinerary. You'll get expensive hotels, some

00:04:05.250 --> 00:04:08.430
tourist traps. It's useless. But the good prompt,

00:04:08.669 --> 00:04:11.189
the chain of thought prompt, specifies the details

00:04:11.189 --> 00:04:13.490
and then demands that planning phase. Something

00:04:13.490 --> 00:04:16.620
like... I need a five-day itinerary for Tokyo

00:04:16.620 --> 00:04:19.420
on a tight budget. Before you create the itinerary,

00:04:19.639 --> 00:04:21.500
please think through the constraints step by

00:04:21.500 --> 00:04:24.660
step. Prioritize low-cost transport, research

00:04:24.660 --> 00:04:27.740
cheap but high-quality food, and only suggest

00:04:27.740 --> 00:04:30.379
free activities. Then the AI starts by saying,

00:04:30.459 --> 00:04:33.240
okay, my first constraint is the budget, so I'll

00:04:33.240 --> 00:04:35.959
prioritize subway passes and skip the high-speed

00:04:35.959 --> 00:04:38.620
rail. For meals, I'll look at convenience stores.

00:04:39.319 --> 00:04:41.259
That thought process locks in the constraints,

00:04:41.339 --> 00:04:43.680
so the final itinerary is something you can actually

00:04:43.680 --> 00:04:46.040
use. And here's a bit of a vulnerable admission.

00:04:46.500 --> 00:04:49.120
I still wrestle with prompt drift myself. You

00:04:49.120 --> 00:04:51.160
know, when the final itinerary looks good, it's

00:04:51.160 --> 00:04:53.399
just so easy to skip reading that initial thought

00:04:53.399 --> 00:04:55.019
process. Yeah. But we have to read the thinking

00:04:55.019 --> 00:04:56.899
part, too, right? Oh, absolutely. Because if

00:04:56.899 --> 00:04:59.839
the AI makes a bad assumption in its planning,

00:05:00.160 --> 00:05:02.240
like, say, it assumes you're flying into an airport

00:05:02.240 --> 00:05:04.420
that you're not, you have to catch that flawed

00:05:04.420 --> 00:05:06.620
premise. Right. If you only look at the final

00:05:06.620 --> 00:05:09.139
answer, you've completely missed the core problem.

00:05:09.259 --> 00:05:11.639
It allows us to course correct before the AI

00:05:11.639 --> 00:05:15.079
commits fully to a bad direction. Yes. It addresses

00:05:15.079 --> 00:05:17.720
that fundamental flaw of speed over precision.
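For anyone scripting this rather than typing into a chat box, the chain-of-thought technique is just string assembly. Here is a minimal Python sketch of the Tokyo prompt from the episode; the helper name and structure are illustrative, and only the appended "magic sentence" is the actual technique:

```python
def chain_of_thought_prompt(task: str, constraints: list[str]) -> str:
    """Build a prompt that forces the model to plan before answering.

    Hypothetical helper: the function name and layout are not a real
    API, just one way to attach the step-by-step instruction verbatim.
    """
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{task}\n"
        f"Constraints:\n{constraint_lines}\n"
        "Before you answer, please walk me through your "
        "thought process step by step."
    )

# The Tokyo example from the episode, assembled programmatically.
prompt = chain_of_thought_prompt(
    "I need a five-day itinerary for Tokyo on a tight budget.",
    ["Prioritize low-cost transport",
     "Research cheap but high-quality food",
     "Only suggest free activities"],
)
```

You would then send `prompt` to whatever chat model you use and read the planning section it produces before trusting the final itinerary.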

00:05:17.959 --> 00:05:20.560
Okay, so if chain of thought helps us improve

00:05:20.560 --> 00:05:23.819
the AI's logic, the next big challenge is making

00:05:23.819 --> 00:05:27.519
the AI sound like a person. Specifically, like

00:05:27.519 --> 00:05:30.660
you. This brings us to technique two, few-shot

00:05:30.660 --> 00:05:33.720
prompting. And this one tackles that really common

00:05:33.720 --> 00:05:36.819
mistake of using weak adjectives. I see people

00:05:36.819 --> 00:05:39.899
type, write a professional, engaging, funny email.

00:05:40.170 --> 00:05:42.470
Those words are subjective. They're basically

00:05:42.470 --> 00:05:44.629
meaningless to an AI that's trying to match a

00:05:44.629 --> 00:05:47.550
specific human voice. Adjectives are weak. Examples

00:05:47.550 --> 00:05:50.509
are strong. The AI is a phenomenal copycat machine.

00:05:50.569 --> 00:05:52.589
It's just brilliant at pattern matching. If you

00:05:52.589 --> 00:05:54.290
want it to write like you, you have to give it

00:05:54.290 --> 00:05:56.170
the data, the patterns of what you sound like.

00:05:56.430 --> 00:05:58.850
And the technique is surprisingly simple. You

00:05:58.850 --> 00:06:00.790
just find three to five examples of your own

00:06:00.790 --> 00:06:03.230
successful writing, emails that got a great reply,

00:06:03.529 --> 00:06:05.310
short reports you were proud of, and you paste

00:06:05.310 --> 00:06:07.449
them right into the chat. And the instruction

00:06:07.449 --> 00:06:11.569
is key. You ask the AI to analyze these examples,

00:06:11.889 --> 00:06:14.290
then write a new email using the same style,

00:06:14.550 --> 00:06:17.230
tone, and format. It's like it's using your own

00:06:17.230 --> 00:06:19.850
work as a microtraining set. Let's use that example

00:06:19.850 --> 00:06:22.449
of writing a customer apology letter. A bad

00:06:22.449 --> 00:06:25.089
prompt is just, write an apology email. Be polite.

00:06:25.290 --> 00:06:28.550
The result. Dear valued customer, we are deeply

00:06:28.550 --> 00:06:30.889
sorry for the inconvenience. It's so robotic.

00:06:30.970 --> 00:06:33.269
It just alienates people. But the good prompt

00:06:33.269 --> 00:06:35.569
gives it short, friendly examples. Something

00:06:35.569 --> 00:06:37.329
like, hey, Sarah, I saw your order was late.

00:06:37.389 --> 00:06:39.709
I am so sorry about that. And the AI picks up

00:06:39.709 --> 00:06:41.490
on the short sentences, the casual greeting,

00:06:41.529 --> 00:06:43.970
and it avoids all that corporate jargon. The

00:06:43.970 --> 00:06:46.509
apology it writes is personal and actually effective.
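The few-shot structure, including the negative example discussed a moment later, is also just concatenation. A minimal sketch in Python, with hypothetical helper and variable names (the episode recommends three to five examples; one is used here purely for brevity):

```python
def few_shot_prompt(examples, task, negative=""):
    """Assemble a few-shot prompt: writing samples first, an optional
    negative example, then the task. Illustrative scaffolding only."""
    parts = ["Here are examples of my writing style:"]
    parts += [f"Example {i}:\n{ex}" for i, ex in enumerate(examples, 1)]
    if negative:
        parts.append(f"Do NOT write like this; avoid this style:\n{negative}")
    parts.append("Analyze these examples, then write a new email "
                 f"using the same style, tone, and format.\nTask: {task}")
    return "\n\n".join(parts)

# The apology-email example from the episode.
apology = few_shot_prompt(
    ["Hey Sarah, I saw your order was late. I am so sorry about that."],
    "Apologize to a customer for a delayed shipment.",
    negative="Dear valued customer, we are deeply sorry for the inconvenience.",
)
```

The output string is the whole trick: your best emails become the micro-training set, and the stiff corporate version is flagged as the style to avoid.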

00:06:46.750 --> 00:06:48.610
OK, but let me ask a practical question here.

00:06:49.069 --> 00:06:51.170
If I pasted in three to five paragraphs of my

00:06:51.170 --> 00:06:52.970
best writing, doesn't that make the prompt kind

00:06:52.970 --> 00:06:55.399
of massive? I mean, does that get slow or expensive

00:06:55.399 --> 00:06:58.120
to run every single time? That's a great point,

00:06:58.240 --> 00:07:00.360
especially if you're hitting your context window

00:07:00.360 --> 00:07:03.040
limit. But remember, the examples don't have

00:07:03.040 --> 00:07:05.319
to be long. They just need to be representative.

00:07:05.920 --> 00:07:08.839
A few strong short paragraphs are usually enough

00:07:08.839 --> 00:07:11.560
for the AI to pick up on your linguistic fingerprint,

00:07:11.959 --> 00:07:13.920
your sentence structure, your vocabulary, your

00:07:13.920 --> 00:07:16.920
rhythm. The value of getting a perfect output

00:07:16.920 --> 00:07:19.639
really outweighs that slight increase in the

00:07:19.639 --> 00:07:21.850
input data. And what about the opposite? Can

00:07:21.850 --> 00:07:23.930
you show it what you don't want? Absolutely.

00:07:24.149 --> 00:07:26.589
That's the advanced tip here. Using a negative

00:07:26.589 --> 00:07:29.269
example, you paste in a piece of writing you

00:07:29.269 --> 00:07:32.069
hate, maybe a super stiff formal corporate email,

00:07:32.430 --> 00:07:35.050
and you explicitly tell the AI, do not write

00:07:35.050 --> 00:07:37.250
like this, avoid this style. It learns just as

00:07:37.250 --> 00:07:39.850
quickly what to avoid. So how many examples are

00:07:39.850 --> 00:07:41.910
generally effective for the AI to pick up on

00:07:41.910 --> 00:07:44.589
a specific style or voice? We're aiming for three

00:07:44.589 --> 00:07:46.970
to five strong examples of your best work. That's

00:07:46.970 --> 00:07:49.730
a tangible number. Oh. OK, moving to technique

00:07:49.730 --> 00:07:52.769
three. Reverse prompting. This feels vital for

00:07:52.769 --> 00:07:55.189
those times when, as the user, you don't even

00:07:55.189 --> 00:07:56.769
know what information is important to share.

00:07:57.129 --> 00:07:59.170
This happens all the time with health and safety.

00:07:59.769 --> 00:08:02.769
Remember the eager intern. If you ask for a workout

00:08:02.769 --> 00:08:05.089
plan but you forget to mention an old knee injury

00:08:05.089 --> 00:08:07.850
or that you only have dumbbells at home, the

00:08:07.850 --> 00:08:10.709
AI just guesses. And it might give you this generic,

00:08:11.149 --> 00:08:12.829
intense plan that could actually hurt you because

00:08:12.829 --> 00:08:15.910
it's missing that vital context. So we flip the

00:08:15.910 --> 00:08:18.720
script. Instead of us giving vague instructions,

00:08:19.220 --> 00:08:22.000
we force the AI to become the smart interviewer.

00:08:22.300 --> 00:08:25.220
The key sentence is so simple. Before you generate

00:08:25.220 --> 00:08:27.600
the response, please ask me any questions you

00:08:27.600 --> 00:08:30.720
need to know to do the best job possible. This

00:08:30.720 --> 00:08:33.860
forces it out of guessing mode and into information

00:08:33.860 --> 00:08:36.440
gathering mode. We can use that gardening scenario

00:08:36.440 --> 00:08:38.799
from the sources. A bad prompt is just, tell

00:08:38.799 --> 00:08:41.649
me what to plant in my garden. The AI guesses

00:08:41.649 --> 00:08:43.730
you're in a temperate zone and recommends tomatoes,

00:08:44.330 --> 00:08:46.110
but you live in a cold climate where they'll

00:08:46.110 --> 00:08:48.669
die instantly. Right, but the good prompt makes

00:08:48.669 --> 00:08:52.289
the AI ask five clarifying questions about your

00:08:52.289 --> 00:08:54.690
location, hours of sunlight, your maintenance

00:08:54.690 --> 00:08:57.429
level, whether you want edible plants. You answer

00:08:57.429 --> 00:08:59.250
those five questions and the plan it gives you

00:08:59.250 --> 00:09:01.429
is perfectly tailored and will actually succeed.
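Reverse prompting is naturally a two-turn exchange, which is easy to see in code. This Python sketch is hypothetical scaffolding: `ask_model` stands in for whatever chat API you actually call, and `get_user_answers` stands in for you typing replies; only the quoted sentence is the technique itself:

```python
# The sentence from the episode that flips the AI into interviewer mode.
REVERSE_SUFFIX = (
    "Before you generate the response, please ask me any questions "
    "you need to know to do the best job possible."
)

def reverse_prompt_session(task, ask_model, get_user_answers):
    """Run one reverse-prompting round trip.

    ask_model(messages) -> str and get_user_answers(questions) -> str
    are placeholders for a real chat API and real user input.
    """
    messages = [{"role": "user", "content": f"{task}\n{REVERSE_SUFFIX}"}]
    questions = ask_model(messages)        # turn 1: the model interviews you
    messages.append({"role": "assistant", "content": questions})
    messages.append({"role": "user", "content": get_user_answers(questions)})
    return ask_model(messages)             # turn 2: the tailored answer
```

The point of the shape is visible in the message list: the model's clarifying questions and your answers land in the history before it ever attempts the real task, so the second call starts from informed context instead of guesswork.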

00:09:01.629 --> 00:09:03.370
So it seems like it saves a lot of time. You

00:09:03.370 --> 00:09:05.330
just bypass several rounds of bad answers. It

00:09:05.330 --> 00:09:07.769
absolutely does. It moves straight to an informed

00:09:07.769 --> 00:09:10.129
answer, which increases the speed of execution

00:09:10.129 --> 00:09:13.409
by eliminating all those useless revisions. Technique

00:09:13.409 --> 00:09:16.809
4 is incredibly powerful. And it goes back to

00:09:16.809 --> 00:09:19.950
the AI's massive training data. Let's use the

00:09:19.950 --> 00:09:24.309
library analogy. If AI is this huge digital library

00:09:24.309 --> 00:09:26.750
with every book in the world, from comic books

00:09:26.750 --> 00:09:30.710
to PhD textbooks, a generic question gives you

00:09:30.710 --> 00:09:32.529
the average of everything. And we don't want

00:09:32.529 --> 00:09:34.450
the average answer. We want the expert answer.

00:09:34.850 --> 00:09:36.769
So when you assign a role, you're telling the

00:09:36.769 --> 00:09:39.929
AI which section of the library to look in. You're

00:09:39.929 --> 00:09:42.309
telling it to ignore the blogs and focus only

00:09:42.309 --> 00:09:44.769
on the specialized high authority texts for that

00:09:44.769 --> 00:09:47.220
role. The structure is simple but really effective.

00:09:47.740 --> 00:09:50.480
You are an expert [job title] with 20 years of

00:09:50.480 --> 00:09:52.820
experience. You think like [a famous person]. This

00:09:52.820 --> 00:09:55.600
immediately filters the knowledge the AI uses.

00:09:55.860 --> 00:09:58.059
Let's say you're checking a resume. A bad prompt

00:09:58.059 --> 00:10:01.100
is just, check my resume for mistakes. You'll

00:10:01.100 --> 00:10:03.139
get basic spell check, maybe some grammar notes.

00:10:03.460 --> 00:10:05.960
That's the average answer. But the good role

00:10:05.960 --> 00:10:09.379
assigned prompt is, you are a strict hiring manager

00:10:09.379 --> 00:10:13.379
at a top tech company. You reject 99% of resumes.

00:10:13.879 --> 00:10:17.990
Tell me why you would reject mine. The AI completely

00:10:17.990 --> 00:10:21.110
changes its persona. It will ignore the typos

00:10:21.110 --> 00:10:22.929
and tell you your bullet points are too vague,

00:10:23.129 --> 00:10:25.610
that you lack quantifiable achievements, that

00:10:25.610 --> 00:10:27.570
your structure isn't optimized for applicant

00:10:27.570 --> 00:10:30.730
tracking systems. That kind of high-stakes feedback

00:10:30.730 --> 00:10:33.250
is invaluable. And you can assign almost any

00:10:33.250 --> 00:10:36.200
persona: a tough negotiation expert, a certified

00:10:36.200 --> 00:10:38.779
financial planner, even a friendly kindergarten

00:10:38.779 --> 00:10:41.399
teacher to simplify a really complicated topic.

00:10:41.860 --> 00:10:44.279
Oh, and just imagine the potential there. When

00:10:44.279 --> 00:10:46.620
you can instantly tap into the synthesized knowledge

00:10:46.620 --> 00:10:49.139
of a billion textbooks just by assigning a job

00:10:49.139 --> 00:10:52.100
title, you're accessing true, focused expertise

00:10:52.100 --> 00:10:54.600
instantly. You're bypassing all the noise. So

00:10:54.600 --> 00:10:56.659
what is the critical difference between the generic

00:10:56.659 --> 00:10:59.379
answer and the role-assigned answer? Role assignment

00:10:59.379 --> 00:11:02.200
accesses specific, high-quality knowledge instead

00:11:02.200 --> 00:11:04.519
of providing an average of all possible information.
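Role assignment is the simplest of the five to express in code: a persona prefix on the task. A Python sketch using the resume example; the helper name is hypothetical, and the quoted persona text is taken from the episode:

```python
def role_prompt(role: str, details: str, task: str) -> str:
    """Prefix a task with an expert persona so the model draws on
    the specialized 'section of the library'. Illustrative helper."""
    return f"You are {role}. {details}\n\n{task}"

# The strict-hiring-manager example from the episode.
resume_review = role_prompt(
    "a strict hiring manager at a top tech company",
    "You reject 99% of resumes.",
    "Tell me why you would reject mine.",
)
```

Swapping the first two arguments is all it takes to turn the same reviewer into a negotiation expert, a certified financial planner, or a kindergarten teacher.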

00:11:04.899 --> 00:11:07.559
Okay, finally we get to Technique 5, Role Playing.

00:11:08.279 --> 00:11:10.379
And this feels like the ultimate application

00:11:10.379 --> 00:11:15.100
of Persona, turning the AI into a conversation

00:11:15.100 --> 00:11:17.840
simulator. Right. Pilots use flight simulators

00:11:17.840 --> 00:11:19.980
to crash planes safely until they know exactly

00:11:19.980 --> 00:11:23.360
how to handle turbulence. We can use AI as a

00:11:23.360 --> 00:11:25.440
conversation simulator to practice high-stakes

00:11:25.440 --> 00:11:28.679
talks, asking for a raise, delivering bad news,

00:11:28.879 --> 00:11:30.820
so you don't mess up the real interaction. The

00:11:30.820 --> 00:11:33.039
sources outline a really powerful three-window

00:11:33.039 --> 00:11:35.580
system for this. Let's use that example of the

00:11:35.580 --> 00:11:38.279
tough rent negotiation with our fictional landlord,

00:11:38.480 --> 00:11:41.159
Mr. Smith. Okay, so step A happens in chat one,

00:11:41.480 --> 00:11:44.000
the profiler. You build the character, you say,

00:11:44.320 --> 00:11:46.419
Mr. Smith is stubborn, he talks loudly, he's

00:11:46.419 --> 00:11:48.639
60 years old, and he always complains about property

00:11:48.639 --> 00:11:51.679
taxes. You use the AI to really flesh out that

00:11:51.679 --> 00:11:53.700
psychological profile. And then step B is chat

00:11:53.700 --> 00:11:56.580
two, the simulator. You paste that detailed description

00:11:56.580 --> 00:11:59.139
in, you set the scene, and you start the conversation.

00:11:59.259 --> 00:12:01.759
You are Mr. Smith. Stay in character. Ring ring.

00:12:02.120 --> 00:12:04.519
And when you make your offer, the AI, acting

00:12:04.519 --> 00:12:06.639
as Mr. Smith, will argue back. It'll challenge

00:12:06.639 --> 00:12:08.799
your assumptions, it'll maintain that stubborn

00:12:08.799 --> 00:12:11.120
persona. This is where you get to crash safely.

00:12:11.320 --> 00:12:14.320
You can try five different opening lines. You

00:12:14.320 --> 00:12:16.720
learn what makes him angry, what diffuses him,

00:12:16.820 --> 00:12:19.200
what objections he's going to raise. And then

00:12:19.200 --> 00:12:21.799
step C is the key. You integrate a technique

00:12:21.799 --> 00:12:25.399
we mentioned earlier. It's chat three, the coach

00:12:25.399 --> 00:12:28.820
and the Russian judge. You copy the entire transcript

00:12:28.820 --> 00:12:31.159
of your negotiation attempt and you paste it

00:12:31.159 --> 00:12:33.980
into a new chat window. Right. And now you assign

00:12:33.980 --> 00:12:38.379
a new specific role. You are a world-class negotiation

00:12:38.379 --> 00:12:41.539
coach. You must grade my performance, but you're

00:12:41.539 --> 00:12:44.879
a Russian judge. Do not be polite, grade me on

00:12:44.879 --> 00:12:48.080
a scale of 1 to 10, and be brutal. Why is that

00:12:48.080 --> 00:12:50.279
separate coach step necessary rather than just

00:12:50.279 --> 00:12:52.580
asking the simulator for feedback? It ensures

00:12:52.580 --> 00:12:55.460
the analysis is objective, separating the character

00:12:55.460 --> 00:12:58.460
simulation from the performance review. Got it.

00:12:58.639 --> 00:13:00.840
And because the AI is explicitly told not to

00:13:00.840 --> 00:13:03.059
be nice, it will point out the logical holes,

00:13:03.139 --> 00:13:05.299
the emotional traps you fell into. That truth

00:13:05.299 --> 00:13:07.440
is way more helpful than just simple encouragement.
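The three-window system maps neatly onto three separate prompts, one per fresh chat. This Python sketch is illustrative scaffolding around the phrases quoted in the episode; the function names, and hard-coding Mr. Smith into the simulator, are assumptions made just to mirror the example:

```python
def profiler_prompt(persona_notes: str) -> str:
    # Chat 1: build the character.
    return ("Flesh out a detailed psychological profile of this person:\n"
            + persona_notes)

def simulator_prompt(profile: str, scene: str) -> str:
    # Chat 2: paste the profile in and start the roleplay.
    return f"{profile}\n\nYou are Mr. Smith. Stay in character.\n{scene}"

def coach_prompt(transcript: str) -> str:
    # Chat 3: a new window, a new role, brutal objective feedback.
    return ("You are a world-class negotiation coach, but you're a "
            "Russian judge. Do not be polite, grade me on a scale of "
            "1 to 10, and be brutal.\n\nTranscript:\n" + transcript)
```

Keeping the three prompts in three separate chats is the design choice that matters: the coach never shares history with the simulator, so its grading can't be contaminated by the character it would otherwise still be playing.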

00:13:07.659 --> 00:13:10.100
Exactly. We usually only get one shot at a hard

00:13:10.100 --> 00:13:12.250
conversation. With this method, you can... Practice

00:13:12.250 --> 00:13:14.429
10 times until you're calm, you're prepared,

00:13:14.750 --> 00:13:16.649
and you're ready for the real call. Okay, to

00:13:16.649 --> 00:13:19.190
wrap this deep dive up, let's hit one final technical

00:13:19.190 --> 00:13:22.669
tip. This is the new chat rule. It's so simple,

00:13:22.750 --> 00:13:25.289
but it's crucial. Keep your workspace clean.

00:13:25.580 --> 00:13:28.720
The AI relies on the previous conversation history

00:13:28.720 --> 00:13:31.480
for context. If you change the topic, just start

00:13:31.480 --> 00:13:33.539
a fresh chat window immediately. Yeah, we've

00:13:33.539 --> 00:13:36.120
all done it. We ruin a perfectly built persona

00:13:36.120 --> 00:13:39.279
by asking an unrelated question. You don't want

00:13:39.279 --> 00:13:42.100
to break your serious CEO negotiation simulator

00:13:42.100 --> 00:13:44.059
by asking it about your fantasy football picks

00:13:44.059 --> 00:13:46.610
in the same thread. A long, messy history just

00:13:46.610 --> 00:13:48.870
confuses the AI. It's like using one notebook

00:13:48.870 --> 00:13:52.049
for math, history, and cooking. One focused task,

00:13:52.309 --> 00:13:55.330
one clean chat. That makes sense. OK, let's quickly

00:13:55.330 --> 00:13:57.669
summarize these five techniques we covered. One,

00:13:58.129 --> 00:14:00.669
chain of thought. Force the AI to think step

00:14:00.669 --> 00:14:03.889
by step for smarter, more logical outputs. Two,

00:14:04.549 --> 00:14:07.629
few-shot prompting. Provide three to five examples

00:14:07.629 --> 00:14:09.789
of your work for perfect style and tone matching.

00:14:10.169 --> 00:14:13.519
Three, reverse prompting. Ask the AI to interview

00:14:13.519 --> 00:14:16.559
you to get that critical, perfect context. Four,

00:14:16.980 --> 00:14:19.919
assign a role. Make the AI an expert to access

00:14:19.919 --> 00:14:23.279
high-quality, focused advice. And five, role

00:14:23.279 --> 00:14:25.779
playing. Use that three-window system and the

00:14:25.779 --> 00:14:28.120
Russian judge to simulate and practice high-stakes

00:14:28.120 --> 00:14:30.919
conversations safely. The core idea really is

00:14:30.919 --> 00:14:33.419
the same across all of these. AI is the best

00:14:33.419 --> 00:14:35.059
teammate you've ever had, but it only shines

00:14:35.059 --> 00:14:37.340
when you give it precise instructions. So stop

00:14:37.340 --> 00:14:39.299
talking to it like an omniscient database. Start

00:14:39.299 --> 00:14:41.720
talking to it like the smart, eager intern it

00:14:41.720 --> 00:14:44.039
is. An intern who needs clear constraints and

00:14:44.039 --> 00:14:46.039
boundaries. And our challenge to you is simple.

00:14:46.320 --> 00:14:48.179
Don't try to master all five techniques today.

00:14:48.299 --> 00:14:50.679
Just pick one. Maybe use few-shot prompting for an important

00:14:50.679 --> 00:14:52.759
email you have to write this afternoon. Or try

00:14:52.759 --> 00:14:54.559
Chain of Thought for a complex decision you're

00:14:54.559 --> 00:14:56.600
trying to map out. Just try one right now. And

00:14:56.600 --> 00:14:58.379
as a final provocative thought for you to explore

00:14:58.379 --> 00:15:01.419
this week, if we have to explicitly tell the

00:15:01.419 --> 00:15:04.279
AI to be brutal and honest, how does that fundamental

00:15:04.279 --> 00:15:06.580
AI tendency, the need to be helpful over being

00:15:06.580 --> 00:15:09.519
truthful, reflect broader human behavioral tendencies

00:15:09.519 --> 00:15:10.840
in high pressure situations?
