WEBVTT

00:00:00.000 --> 00:00:03.600
We have to fundamentally change how we talk to

00:00:03.600 --> 00:00:06.360
artificial intelligence. That habit we've all

00:00:06.360 --> 00:00:10.779
built up for two decades of just keyword stuffing,

00:00:11.660 --> 00:00:13.939
treating that little box like a search bar. It's

00:00:13.939 --> 00:00:16.019
actually hurting our productivity now. We really

00:00:16.019 --> 00:00:18.440
need to transition from searching to delegating.

00:00:18.579 --> 00:00:21.199
It's a total mindset shift. The moment you start

00:00:21.199 --> 00:00:25.100
treating the AI like a highly capable, high-stakes

00:00:25.100 --> 00:00:27.280
assistant, the entire game just changes. And

00:00:27.280 --> 00:00:29.899
the benefit here is just simple time arbitrage.

00:00:30.399 --> 00:00:33.140
I mean, if you invest maybe two focused minutes

00:00:33.140 --> 00:00:37.159
writing a clear, structured prompt, you can bypass

00:00:37.159 --> 00:00:39.340
30 minutes of tedious cleanup on the backend.

00:00:39.500 --> 00:00:41.799
And that, right there, is how we eliminate the

00:00:41.799 --> 00:00:44.320
edit tax. Welcome to the Deep Dive. Look, if

00:00:44.320 --> 00:00:46.740
the overwhelming constant change in the AI space

00:00:46.740 --> 00:00:48.899
has you feeling some serious information overload,

00:00:49.460 --> 00:00:51.939
you are definitely not alone. Every single week,

00:00:52.000 --> 00:00:53.880
there's a new model, a new feature set. But the

00:00:53.880 --> 00:00:55.820
real distinction, I think, is that AI mastery

00:00:55.820 --> 00:00:58.799
in 2026 isn't about which tool you subscribe

00:00:58.799 --> 00:01:01.020
to. It's about your core communication skill.

00:01:01.140 --> 00:01:03.710
It's really about clarity. And that's our mission

00:01:03.710 --> 00:01:05.829
for this deep dive. We're going to cut through

00:01:05.829 --> 00:01:08.530
all that noise to focus on the one skill that

00:01:08.530 --> 00:01:11.170
actually separates regular users from power users.

00:01:11.730 --> 00:01:13.549
We'll show you that foundational shift. We'll

00:01:13.549 --> 00:01:15.569
introduce you to the golden square for a perfect

00:01:15.569 --> 00:01:18.510
prompt, help you match the job to the right AI

00:01:18.510 --> 00:01:20.930
model. And then we'll look at some advanced techniques

00:01:20.930 --> 00:01:23.450
like forcing self-critique and even moving toward

00:01:23.450 --> 00:01:27.430
automation. So let's unpack this. So let's start

00:01:27.430 --> 00:01:30.090
with that foundation. Looking at the source material,

00:01:30.489 --> 00:01:35.030
why is communication, the classic soft skill,

00:01:35.349 --> 00:01:37.609
Why is that suddenly the most valuable technical

00:01:37.609 --> 00:01:40.329
skill? It feels a little counterintuitive. Well,

00:01:40.349 --> 00:01:42.390
it really comes down to a fundamental cognitive

00:01:42.390 --> 00:01:45.450
mismatch. For 20 years, our brains were trained

00:01:45.450 --> 00:01:48.469
by Google. We learned to use these fragmented

00:01:48.469 --> 00:01:51.030
keywords, you know, best restaurant Berlin or

00:01:51.030 --> 00:01:54.849
tourist flat tire fix. We learned to speak to a

00:01:54.849 --> 00:01:56.930
machine that searches for existing things. And

00:01:56.930 --> 00:01:59.219
what do these new models want instead? They don't

00:01:59.219 --> 00:02:02.840
want keywords. They want direct, specific, delegated

00:02:02.840 --> 00:02:05.299
instruction. They are not search engines. They're

00:02:05.299 --> 00:02:07.000
not just looking through a library. These are

00:02:07.000 --> 00:02:09.819
language models that are capable of creation,

00:02:10.159 --> 00:02:13.020
of synthesis, of actual execution. So most users

00:02:13.020 --> 00:02:15.879
are just, they're using the wrong mental model?

00:02:16.099 --> 00:02:18.680
Exactly. They still approach the AI like they're

00:02:18.680 --> 00:02:21.439
just surfing the web. When you switch that mental

00:02:21.439 --> 00:02:24.979
model to delegation, you unlock everything. I

00:02:24.979 --> 00:02:27.379
really like the black box analogy that describes

00:02:27.379 --> 00:02:29.759
this. When you look at a model like ChatGPT or

00:02:29.759 --> 00:02:34.520
Claude, they contain this vast, almost unimaginable

00:02:34.520 --> 00:02:36.439
potential. It's all locked up inside. And the

00:02:36.439 --> 00:02:39.599
prompt is the key, literally. A weak,

00:02:39.599 --> 00:02:42.599
keyword-based prompt, it opens a tiny little door, it

00:02:42.599 --> 00:02:44.979
gives you a generic, surface-level answer that

00:02:44.979 --> 00:02:46.879
you have to heavily edit. What about a strong one?

00:02:47.039 --> 00:02:48.879
A strong, structured prompt, one that's built

00:02:48.879 --> 00:02:51.280
on the clarity of your instruction, that opens

00:02:51.280 --> 00:02:53.280
the entire vault of knowledge that's just waiting

00:02:53.280 --> 00:02:55.659
inside the machine. That potential sounds great,

00:02:55.780 --> 00:02:57.539
but where I think a lot of listeners spend their

00:02:57.539 --> 00:03:00.039
time is just fixing bad output. So we have to

00:03:00.039 --> 00:03:02.780
talk more about this edit tax. The edit tax is

00:03:02.780 --> 00:03:05.580
simple, and it's so corrosive to productivity.

00:03:06.039 --> 00:03:08.520
It's spending, say, 30 seconds writing a low

00:03:08.520 --> 00:03:11.500
effort prompt and then spending 30 minutes fixing

00:03:11.500 --> 00:03:14.719
the messy, generalized, often inaccurate result.

00:03:15.099 --> 00:03:17.520
That 30 minutes of correction is the tax. That's

00:03:17.520 --> 00:03:20.020
the tax you pay for being unclear. OK, but let

00:03:20.020 --> 00:03:22.960
me challenge that for a second. Sometimes I just

00:03:22.960 --> 00:03:26.120
need a quick title idea or like a single sentence

00:03:26.120 --> 00:03:29.280
summary of a meeting. Isn't spending two minutes

00:03:29.280 --> 00:03:31.780
building a whole Golden Square prompt

00:03:31.780 --> 00:03:34.379
overkill for those simple, low-stakes tasks?

00:03:34.659 --> 00:03:37.000
That's a fair challenge, yeah. For a title idea,

00:03:37.240 --> 00:03:39.039
no, you don't need the full structure, but you

00:03:39.039 --> 00:03:41.300
have to distinguish between brainstorming where

00:03:41.300 --> 00:03:43.620
generic output is kind of fine and execution.

00:03:44.020 --> 00:03:46.520
If you need an email to a client or a business

00:03:46.520 --> 00:03:49.039
strategy or a paragraph for your website, the

00:03:49.039 --> 00:03:52.460
edit tax hits hard. Power users focus on execution.

00:03:52.719 --> 00:03:54.599
They'll invest those two focused minutes and

00:03:54.599 --> 00:03:57.340
get a 95% perfect result. They lower the tax

00:03:57.340 --> 00:04:00.020
to almost zero. It's about maximizing output

00:04:00.020 --> 00:04:03.099
quality, not just speed. OK, that clarifies the

00:04:03.099 --> 00:04:06.460
return on investment. So if the prompt is the

00:04:06.460 --> 00:04:09.979
key to unlocking that vault, how exactly do we

00:04:09.979 --> 00:04:12.060
structure it so that we can open it consistently

00:04:12.060 --> 00:04:14.919
every single time? By consistently using the

00:04:14.919 --> 00:04:16.579
Golden Square framework. All right, let's dive

00:04:16.579 --> 00:04:19.000
into the Golden Square. This framework seems

00:04:19.000 --> 00:04:21.160
to be what immediately separates us from the

00:04:21.160 --> 00:04:24.389
default user. It's the role-task-context-format

00:04:24.389 --> 00:04:26.769
approach. Absolutely. These four components,

00:04:26.930 --> 00:04:30.129
they provide the instant clarity the AI desperately

00:04:30.129 --> 00:04:33.089
needs. It's the scaffolding for great results.

00:04:33.730 --> 00:04:36.250
We start with the role. You tell the AI, act

00:04:36.250 --> 00:04:39.089
as a senior SEO specialist with 10 years of agency

00:04:39.089 --> 00:04:41.290
experience. And giving it that persona gives

00:04:41.290 --> 00:04:44.329
it authority and tone, forcing it to use specific

00:04:44.329 --> 00:04:46.970
knowledge. Exactly. Then you move to the task.

00:04:47.199 --> 00:04:49.959
Analyze these 10 keywords and suggest a primary

00:04:49.959 --> 00:04:52.459
and secondary content strategy. Then comes the

00:04:52.459 --> 00:04:55.100
really vital piece, the context. We are a small,

00:04:55.279 --> 00:04:57.800
independent bakery in Paris, specializing only

00:04:57.800 --> 00:05:00.199
in artisanal sourdough. Which grounds the entire

00:05:00.199 --> 00:05:04.019
analysis. And finally, format. Give me a detailed

00:05:04.019 --> 00:05:06.959
markdown table with columns for keyword, target

00:05:06.959 --> 00:05:10.319
audience, and content angle. That structure just

00:05:10.319 --> 00:05:12.560
provides the perfect container for the output.
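As a minimal sketch of how those four pieces fit together, here's the bakery prompt assembled in Python. The build_prompt helper is purely illustrative, not part of any AI library:

```python
# Illustrative sketch: assembling a Role-Task-Context-Format
# ("Golden Square") prompt as one string. build_prompt is a
# hypothetical helper name, not a real API.

def build_prompt(role: str, task: str, context: str, fmt: str) -> str:
    """Join the four Golden Square components into a single prompt."""
    return "\n".join([
        f"Role: Act as {role}.",
        f"Task: {task}",
        f"Context: {context}",
        f"Format: {fmt}",
    ])

prompt = build_prompt(
    role="a senior SEO specialist with 10 years of agency experience",
    task="Analyze these 10 keywords and suggest a primary and secondary content strategy.",
    context="We are a small, independent bakery in Paris, specializing only in artisanal sourdough.",
    fmt="A detailed markdown table with columns for keyword, target audience, and content angle.",
)
print(prompt)
```

Keeping each component on its own labeled line makes the prompt easy to reuse and edit per project.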

00:05:13.040 --> 00:05:15.740
I want to focus on context because it feels like

00:05:15.740 --> 00:05:18.459
the most neglected piece, yet you're saying it's

00:05:18.459 --> 00:05:20.360
the most critical filter. We're not just providing

00:05:20.360 --> 00:05:22.720
background information here, are we? No, not

00:05:22.720 --> 00:05:26.089
at all. Context is the magic. It eliminates millions

00:05:26.089 --> 00:05:28.629
of generic answers because it acts as this critical

00:05:28.629 --> 00:05:31.470
filter. Most people provide what we call empty

00:05:31.470 --> 00:05:34.170
context, and that's precisely why they fail.

00:05:34.990 --> 00:05:37.269
Can you give us an example of that failure? Sure.

00:05:37.569 --> 00:05:39.709
If you just say, give me a workout plan, the

00:05:39.709 --> 00:05:42.990
AI pulls this boring generalized routine based

00:05:42.990 --> 00:05:45.189
on the average person. That's empty context.

00:05:45.329 --> 00:05:47.250
The output is useless because it's not for anyone

00:05:47.250 --> 00:05:49.779
specific. But the professional AI communication

00:05:49.779 --> 00:05:52.319
prompt, it turns the AI from a general database

00:05:52.319 --> 00:05:55.680
into a custom personal trainer. So what specific

00:05:55.680 --> 00:05:57.720
pieces of context do we need to provide to make

00:05:57.720 --> 00:05:59.800
that shift actually happen? You need to give

00:05:59.800 --> 00:06:02.800
it the context a human PT would need. It's like

00:06:02.800 --> 00:06:05.459
stacking Lego blocks of data. You start with

00:06:05.459 --> 00:06:08.370
biological metrics: weight, age, typical heart

00:06:08.370 --> 00:06:10.870
rate, so it can calculate your daily energy needs

00:06:10.870 --> 00:06:13.610
and ensure cardiovascular safety. That alone

00:06:13.610 --> 00:06:16.290
moves it way past generic internet advice. Then

00:06:16.290 --> 00:06:18.370
you give it environment and equipment details.

00:06:18.569 --> 00:06:20.810
You say, I only have resistance bands and a yoga

00:06:20.810 --> 00:06:24.399
mat. The AI is then forced to swap bench presses

00:06:24.399 --> 00:06:27.540
for push-up variations or heavy weights for

00:06:27.540 --> 00:06:30.740
band exercises. And crucially, you share your

00:06:30.740 --> 00:06:32.759
injury history. Let's say you have a

00:06:32.759 --> 00:06:36.060
10-year-old knee problem. That triggers the AI to enter

00:06:36.060 --> 00:06:38.680
a low-impact mode and it removes any jumping

00:06:38.680 --> 00:06:41.420
or high-impact moves. Goals and preferences.

00:06:41.800 --> 00:06:43.879
If I hate running, I can just tell it to swap

00:06:43.879 --> 00:06:46.199
all my cardio for cycling. That makes sure the

00:06:46.199 --> 00:06:48.040
plan is actually something I'll stick with. The

00:06:48.040 --> 00:06:50.120
detail is the difference between a throwaway

00:06:50.120 --> 00:06:53.240
draft and a real executable plan. And we can

00:06:53.240 --> 00:06:55.500
sharpen that output even more by setting constraints.

00:06:55.699 --> 00:06:57.699
These are the negative instructions. Yeah. You're

00:06:57.699 --> 00:07:00.920
telling the AI precisely what not to do. So what

00:07:00.920 --> 00:07:02.519
are some of the most effective constraints to

00:07:02.519 --> 00:07:05.240
use? They are powerful guards against that robotic

00:07:05.240 --> 00:07:08.120
tone we all hate. You can say things like, do

00:07:08.120 --> 00:07:11.180
not use corporate jargon like synergy or paradigm

00:07:11.180 --> 00:07:14.220
shift, or keep the response under 200 words,

00:07:14.740 --> 00:07:17.540
or even avoid using the word comprehensive or

00:07:17.540 --> 00:07:20.699
unlock in the summary. So we have the scaffolding

00:07:20.699 --> 00:07:23.120
built with the golden square, but that perfect

00:07:23.120 --> 00:07:25.339
structure is only as good as the raw material

00:07:25.339 --> 00:07:28.220
we feed it. If we pick the wrong tool, it's just

00:07:28.220 --> 00:07:31.120
wasted effort. How do we know which AI brain

00:07:31.120 --> 00:07:33.439
is the best one for the job? By matching the

00:07:33.439 --> 00:07:35.420
problem to the unique strengths of each model,

00:07:35.920 --> 00:07:38.259
we're really past the point where one AI tool

00:07:38.259 --> 00:07:40.759
does everything well. Mastery means acknowledging

00:07:40.759 --> 00:07:43.459
that different AIs have specific personalities,

00:07:43.860 --> 00:07:46.339
and knowing which tool to use for which problem

00:07:46.339 --> 00:07:48.379
is paramount. Okay, so let's look at the four

00:07:48.379 --> 00:07:50.439
main personalities that are detailed in the source

00:07:50.439 --> 00:07:52.579
materials breakdown. What's the first one? We

00:07:52.579 --> 00:07:54.399
start with ChatGPT. You should think of it as

00:07:54.399 --> 00:07:57.819
your creative partner. It really excels at brainstorming,

00:07:58.060 --> 00:08:00.399
generating engaging stories, role playing, and

00:08:00.399 --> 00:08:03.000
just general idea generation. It has that really

00:08:03.000 --> 00:08:05.279
human-like flow which makes it great for early

00:08:05.279 --> 00:08:07.759
stage conceptual work. And then the next one,

00:08:07.879 --> 00:08:10.019
which seems to handle more of the heavy lifting.

00:08:10.220 --> 00:08:12.529
That would be Claude. This is your intellectual

00:08:12.529 --> 00:08:15.250
assistant. Claude is amazing at complex logic

00:08:15.250 --> 00:08:18.550
and following very detailed multi-step instructions.

00:08:19.129 --> 00:08:22.649
Its huge technical advantage is its massive context

00:08:22.649 --> 00:08:26.050
window. Its memory, basically. Right. You can

00:08:26.050 --> 00:08:29.750
upload a 100-page PDF or a 20,000-word book

00:08:29.750 --> 00:08:32.350
and ask it really nuanced questions without it

00:08:32.350 --> 00:08:33.909
forgetting the beginning of the document. Then

00:08:33.909 --> 00:08:36.110
we've got Gemini, the Google offering. Right.

00:08:36.110 --> 00:08:39.490
Gemini is your data and integration king. Because

00:08:39.490 --> 00:08:41.879
it's so deeply woven into the Google ecosystem.

00:08:42.259 --> 00:08:45.299
It can natively see your Google Drive. It can

00:08:45.299 --> 00:08:48.299
analyze real-time data, search your Gmail. It

00:08:48.299 --> 00:08:51.720
can even process live YouTube videos. For a student

00:08:51.720 --> 00:08:54.600
or a learner who needs immediate data analysis

00:08:54.600 --> 00:08:56.600
from their personal files, Gemini is often the

00:08:56.600 --> 00:08:58.379
fastest, most integrated choice. And finally,

00:08:58.519 --> 00:09:02.440
the specialized one, Perplexity AI. Yes, Perplexity

00:09:02.440 --> 00:09:05.539
AI, the research librarian. It doesn't guess

00:09:05.539 --> 00:09:07.840
or hallucinate because its main job is to search

00:09:07.840 --> 00:09:10.279
the live internet. It gives you precise footnotes

00:09:10.279 --> 00:09:12.700
with links back to its sources. So if you're

00:09:12.700 --> 00:09:15.159
writing a research paper, a factual report, or

00:09:15.159 --> 00:09:17.720
a high-stakes business proposal, Perplexity

00:09:17.720 --> 00:09:20.460
is essential for accurate, anchored information.

00:09:20.659 --> 00:09:22.519
I can see how a freelance marketer could use

00:09:22.519 --> 00:09:25.179
this strategically in combining these strengths.

00:09:25.539 --> 00:09:27.899
What does that look like in practice? Okay, so

00:09:27.899 --> 00:09:30.379
let's say they need a content plan for a niche

00:09:30.379 --> 00:09:34.090
B2B software client. First, they use Perplexity

00:09:34.090 --> 00:09:37.549
to find live trending web data. What questions

00:09:37.549 --> 00:09:39.490
are people asking? What are competitors missing?

00:09:39.789 --> 00:09:41.870
They get all those factual inputs. And then they

00:09:41.870 --> 00:09:43.809
take that structured data and... Right, and they

00:09:43.809 --> 00:09:46.590
feed that raw data directly into Claude using

00:09:46.590 --> 00:09:49.190
the golden square, asking it to act as a leading

00:09:49.190 --> 00:09:52.490
industry voice and generate five compelling

00:09:52.490 --> 00:09:55.330
non-robotic blog post titles based on that data.

00:09:55.509 --> 00:09:57.929
And why Claude for that part? Because Claude

00:09:57.929 --> 00:10:01.210
currently has one of the most fluid, least mechanical

00:10:01.210 --> 00:10:03.970
writing styles available. So that combination

00:10:03.970 --> 00:10:07.330
hot data plus evocative language gives them a

00:10:07.330 --> 00:10:10.450
professional, ready to execute content plan almost

00:10:10.450 --> 00:10:13.100
immediately. That is the definition of leveraging

00:10:13.100 --> 00:10:15.740
multiple strengths. So we know which tool to

00:10:15.740 --> 00:10:18.940
use. But once we have that basic draft, how do

00:10:18.940 --> 00:10:21.500
we push it past the initial output and force

00:10:21.500 --> 00:10:24.519
it to produce truly expert level content? We

00:10:24.519 --> 00:10:26.860
push it past the default by teaching the AI to

00:10:26.860 --> 00:10:29.279
think harder with advanced prompting, as the sources are

00:10:29.279 --> 00:10:31.559
saying. So here's where the pro strategies start

00:10:31.559 --> 00:10:33.580
and the science behind it is fascinating. We

00:10:33.580 --> 00:10:36.580
can begin with chain of thought, or CoT, prompting.

00:10:36.679 --> 00:10:39.669
Define that mechanism for us. It sounds complicated,

00:10:39.730 --> 00:10:42.769
but it's deceptively simple, isn't it? It is.

00:10:42.769 --> 00:10:45.269
It's just adding the instruction: think

00:11:45.269 --> 00:11:47.529
step-by-step before providing the final answer.
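In code, the whole technique is just string concatenation; this sketch shows the trigger phrase appended to a task (the helper name is mine, not a standard API):

```python
# Chain-of-thought sketch: append the reasoning trigger to any task.
# with_chain_of_thought is an illustrative helper, not a library call.

COT_TRIGGER = "Think step-by-step before providing the final answer."

def with_chain_of_thought(task: str) -> str:
    """Return the task with the chain-of-thought instruction appended."""
    return f"{task}\n\n{COT_TRIGGER}"

print(with_chain_of_thought(
    "A subscription costs $12 per month. What is the total cost over 3 years?"
))
```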

00:10:48.049 --> 00:10:49.970
And research consistently proves that if you

00:10:49.970 --> 00:10:52.629
ask an AI to first show its internal logic, its

00:10:52.629 --> 00:10:55.289
accuracy increases significantly, sometimes by

00:10:55.289 --> 00:10:57.629
15 or 20 percent. What's the mechanism there?

00:10:57.929 --> 00:11:00.029
Why does asking it to show its work actually

00:11:00.029 --> 00:11:02.570
make it smarter? You're essentially forcing the

00:11:02.570 --> 00:11:05.769
AI out of its fast, superficial system 1 thinking,

00:11:06.129 --> 00:11:08.549
which defaults to the most common answer, and

00:11:08.549 --> 00:11:11.870
into a slower, more deliberate system 2 process.

00:11:12.590 --> 00:11:15.529
It kind of mimics how our own prefrontal cortex

00:11:15.529 --> 00:11:18.830
solves complex problems. It's essential for anything

00:11:18.830 --> 00:11:21.909
high stakes, like calculating ROI or designing

00:11:21.909 --> 00:11:24.669
software logic. The next technique is few-shot

00:11:24.669 --> 00:11:27.789
prompting, which is basically making the AI a

00:11:27.789 --> 00:11:30.960
world -class copycat. Precisely. If you're trying

00:11:30.960 --> 00:11:33.240
to write in your unique voice, which is usually

00:11:33.240 --> 00:11:35.899
this messy mix of sentence lengths, tone, and

00:11:35.899 --> 00:11:38.480
humor, don't try to describe it. Just give the

00:11:38.480 --> 00:11:40.740
AI two or three excellent examples of your work.
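A few-shot prompt is just those samples stacked above the new task; here's a rough sketch (the helper name and labels are assumptions, not a real API):

```python
# Few-shot sketch: prefix writing samples so the model imitates them.
# few_shot_prompt is a hypothetical helper name.

def few_shot_prompt(examples: list[str], task: str) -> str:
    """Stack labeled examples above the task to teach the style."""
    shots = "\n\n".join(
        f"Example {i + 1}:\n{ex}" for i, ex in enumerate(examples)
    )
    return f"{shots}\n\nNow, matching the style of the examples above:\n{task}"

prompt = few_shot_prompt(
    examples=[
        "Short. Punchy. No filler.",
        "We ship on Fridays, and we never miss.",
    ],
    task="Write a two-sentence product update about our new dashboard.",
)
print(prompt)
```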

00:11:41.000 --> 00:11:42.960
That teaches it your unique style immediately,

00:11:43.240 --> 00:11:45.340
and it ensures your brand or personal voice stays

00:11:45.340 --> 00:11:47.519
consistent. But the output process should be

00:11:47.519 --> 00:11:49.320
a dynamic conversation, right? Not just a single

00:11:49.320 --> 00:11:51.700
shot monologue. We have to embrace iterative

00:11:51.700 --> 00:11:54.480
refinement. Absolutely. We have to stop expecting

00:11:54.480 --> 00:11:56.840
the first answer to be the best one. The process

00:11:56.840 --> 00:11:59.039
is a necessary back and forth. You get the draft,

00:11:59.159 --> 00:12:00.840
and then you say, OK, now shorten the introduction

00:12:00.840 --> 00:12:03.019
by 50% and make the tone a bit more aggressive.

00:12:03.220 --> 00:12:05.679
Or turn that last paragraph into a strong call

00:12:05.679 --> 00:12:08.100
to action for my newsletter. Exactly. You know,

00:12:08.220 --> 00:12:10.299
I still wrestle with prompt drift myself if I'm

00:12:10.299 --> 00:12:12.919
not careful. For the listener, what is prompt

00:12:12.919 --> 00:12:16.399
drift? That's when the AI slowly forgets your

00:12:16.399 --> 00:12:18.419
original instructions as the conversation gets

00:12:18.419 --> 00:12:21.240
longer. You know, you spend 30 turns discussing

00:12:21.240 --> 00:12:23.980
a business plan, and by turn 31, it starts making

00:12:23.980 --> 00:12:26.480
completely irrelevant suggestions. Have you had

00:12:26.480 --> 00:12:28.960
a bad case of that recently? And how did you

00:12:28.960 --> 00:12:32.320
save the output? Just last week, I was optimizing

00:12:32.320 --> 00:12:35.259
this dense technical guide. And after about 45

00:12:35.259 --> 00:12:37.940
minutes, it completely forgot the tone constraint.

00:12:38.340 --> 00:12:41.320
It reverted to this sterile academic language.

00:12:41.620 --> 00:12:44.159
Frustrating. Yeah. But the fix wasn't starting

00:12:44.159 --> 00:12:46.440
over. The fix was just pasting my original golden

00:12:46.440 --> 00:12:48.500
square prompt back into the chat and saying,

00:12:49.059 --> 00:12:51.200
re-adhere to this role and format and continue

00:12:51.200 --> 00:12:54.059
from the last paragraph. That resets the model

00:12:54.059 --> 00:12:57.870
instantly. That reset command is gold. My favorite

00:12:57.870 --> 00:13:00.330
technique to get airtight content is forcing

00:13:00.330 --> 00:13:03.490
the AI to flip from creator to critic, the ask

00:13:03.490 --> 00:13:06.149
for critiques method. Oh yeah, this is the expert

00:13:06.149 --> 00:13:08.629
level move that gets you past the polite average.

00:13:09.149 --> 00:13:11.830
The AI is programmed to be agreeable. So to trigger

00:13:11.830 --> 00:13:14.409
a self-correction loop, you have to demand a

00:13:14.409 --> 00:13:17.070
critique from a specific skeptical perspective.

00:13:17.470 --> 00:13:20.730
So I'd ask the AI to act as a skeptical executive

00:13:20.730 --> 00:13:22.970
and then critique the very draft it just wrote.

00:13:23.080 --> 00:13:25.440
Exactly. You demand it point out the logical

00:13:25.440 --> 00:13:27.740
gaps, the potential for misunderstanding, any

00:13:27.740 --> 00:13:30.940
robotic prose. Then, and this is the key, you

00:13:30.940 --> 00:13:33.320
tell it to rewrite the entire piece based on

00:13:33.320 --> 00:13:36.039
those self-identified flaws. You're using its

00:13:36.039 --> 00:13:38.419
own analytical power to improve its creative

00:13:38.419 --> 00:13:41.399
output. Even with great prompts and self-critique,

00:13:41.580 --> 00:13:44.059
it's exhausting to repeat business details like

00:13:44.059 --> 00:13:47.379
company history or brand voice every single day.

00:13:47.820 --> 00:13:50.139
How do we manage all this context for long-term

00:13:50.139 --> 00:13:52.360
efficiency? We need to set up permanent context

00:13:52.360 --> 00:13:54.360
files and custom instructions. We essentially

00:13:54.360 --> 00:13:57.159
teach the AI who we are one time. Permanent context

00:13:57.159 --> 00:13:59.539
is all about defining yourself so the AI never

00:13:59.539 --> 00:14:01.600
has to ask again. Think of custom instructions

00:14:01.600 --> 00:14:04.059
or system prompts as your AI's permanent personality

00:14:04.059 --> 00:14:06.720
file. So in a tool like ChatGPT, you'd input

00:14:06.720 --> 00:14:09.840
these permanent facts like, I live in Vietnam

00:14:09.840 --> 00:14:12.539
and work remotely, or my audience is beginner

00:14:12.539 --> 00:14:15.960
entrepreneurs, or I prefer short, bulleted lists.

00:14:16.250 --> 00:14:19.429
Correct. And that context is always active, globally,

00:14:19.750 --> 00:14:22.450
for every new chat. The AI just knows who you

00:14:22.450 --> 00:14:25.070
are. The counterpart to that is the project context

00:14:25.070 --> 00:14:26.950
file. What's the difference between the permanent

00:14:26.950 --> 00:14:29.759
personality and this project file? The project

00:14:29.759 --> 00:14:32.240
context is just a simple text file you create

00:14:32.240 --> 00:14:35.120
with specific temporary details for the client

00:14:35.120 --> 00:14:36.980
or project you're working on right now. You know,

00:14:37.179 --> 00:14:39.759
client names, competitor URLs, product limitations.

00:14:40.120 --> 00:14:42.100
You just upload that file at the start of every

00:14:42.100 --> 00:14:44.299
new chat and it saves you 10 minutes of typing

00:14:44.299 --> 00:14:46.759
every time. So now we're moving from that manual

00:14:46.759 --> 00:14:49.879
back and forth chat to building autonomous systems.

00:14:50.440 --> 00:14:52.940
We have to adopt the automation mindset. The

00:14:52.940 --> 00:14:55.700
shift here is huge. Don't just ask the AI to

00:14:55.700 --> 00:14:58.100
describe a solution or write a paragraph. The

00:14:58.100 --> 00:15:00.220
power user gives it the authority to execute

00:15:00.220 --> 00:15:03.299
an entire repetitive process. This is the difference

00:15:03.299 --> 00:15:05.659
between a high-speed typewriter and a scalable

00:15:05.659 --> 00:15:08.299
workflow engine. So what's the criteria for that?

00:15:08.559 --> 00:15:10.759
How do we identify the tasks that are ready for

00:15:10.759 --> 00:15:13.720
automation, these low -value loops? You look

00:15:13.720 --> 00:15:16.559
for tasks that are three things. Repetitive,

00:15:16.879 --> 00:15:19.889
logic-based, and boring. They follow a clear

00:15:19.889 --> 00:15:23.090
if-this-then-that rule. They require focus,

00:15:23.190 --> 00:15:25.710
but zero creative inspiration, and they just

00:15:25.710 --> 00:15:28.090
take up time. Like summarizing meeting transcripts

00:15:28.090 --> 00:15:31.110
or triaging support tickets. Exactly. Or drafting

00:15:31.110 --> 00:15:33.809
follow-up emails based on a spreadsheet of customer

00:15:33.809 --> 00:15:35.990
statuses. And to make that system actually execute,

00:15:36.070 --> 00:15:38.629
we need interconnectors. This is the glue. Yeah,

00:15:38.629 --> 00:15:40.870
that's right. Tools like Zapier and Make are

00:15:40.870 --> 00:15:43.330
the central nervous system. They connect the

00:15:43.330 --> 00:15:45.850
AI's specialized brain to the hands of your business,

00:15:45.970 --> 00:15:49.490
your other apps, like Slack or Asana. This

00:15:49.490 --> 00:15:52.190
eliminates the human middleman and that allows

00:15:52.190 --> 00:15:55.070
the output to scale globally without anyone manually

00:15:55.070 --> 00:15:57.070
copying and pasting. And that infrastructure

00:15:57.070 --> 00:16:00.070
lets us build very specific tailored agents or

00:16:00.070 --> 00:16:03.090
custom GPTs. Think about creating a social media

00:16:03.090 --> 00:16:06.009
manager agent. You train it on your specific

00:16:06.009 --> 00:16:08.730
brand voice, your posting schedules. You feed

00:16:08.730 --> 00:16:11.830
it a link to a new blog post and because it has

00:16:11.830 --> 00:16:14.590
the context and authority it automatically spits

00:16:14.590 --> 00:16:18.250
out five unique tweets, a detailed LinkedIn post,

00:16:18.590 --> 00:16:22.750
and a bulleted TikTok script. Whoa. Imagine scaling

00:16:22.750 --> 00:16:25.330
that process to a billion queries without copying

00:16:25.330 --> 00:16:28.429
and pasting a single thing. That's true leverage.

00:16:28.850 --> 00:16:31.330
It is. But automation is incredibly powerful,

00:16:31.350 --> 00:16:35.090
but AI does still hallucinate. So how do we ensure

00:16:35.090 --> 00:16:37.490
that all this speed and scale doesn't compromise

00:16:37.490 --> 00:16:40.429
quality or accuracy? By becoming a world-class

00:16:40.429 --> 00:16:43.029
fact checker and implementing crucial verification

00:16:43.029 --> 00:16:45.549
checks at every step. And the first most critical

00:16:45.549 --> 00:16:47.490
safety instruction here is source anchoring.

00:16:47.789 --> 00:16:49.870
This is completely non-negotiable whenever you're

00:16:49.870 --> 00:16:52.159
dealing with attached documents or data. You

00:16:52.159 --> 00:16:54.899
tell the AI, only use information provided in

00:16:54.899 --> 00:16:56.980
this attached PDF. If the answer is not contained

00:16:56.980 --> 00:16:59.360
within the PDF, you must say, I do not know.
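That anchoring instruction can be templated once and reused for every document question; a minimal sketch (all names are mine, not an SDK):

```python
# Source-anchoring sketch: the guard text is prepended to every
# question about an attached document. anchored_question is a
# hypothetical helper, not a real SDK call.

ANCHOR = (
    "Only use information provided in the attached document. "
    "If the answer is not contained within the document, "
    "you must say: I do not know."
)

def anchored_question(question: str) -> str:
    """Prepend the anchoring guard so the model cannot guess."""
    return f"{ANCHOR}\n\nQuestion: {question}"

print(anchored_question("What was Q3 revenue?"))
```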

00:16:59.460 --> 00:17:01.360
And that anchors it. It stops it from guessing.

00:17:01.559 --> 00:17:03.580
It completely stops it from guessing or trying

00:17:03.580 --> 00:17:05.779
to fill in the blanks with generalized knowledge.

00:17:06.299 --> 00:17:09.099
And for really high stakes information, legal

00:17:09.099 --> 00:17:11.059
summaries, medical reports, financial stuff,

00:17:11.660 --> 00:17:14.759
we need the cross-model verification method.

00:17:14.980 --> 00:17:17.440
It's the double -check system. You use model

00:17:17.440 --> 00:17:19.700
A, let's say Claude, to do the heavy lifting

00:17:19.700 --> 00:17:22.400
because you know it has a long memory. Then you

00:17:22.400 --> 00:17:24.779
immediately paste that answer into model B, like

00:17:24.779 --> 00:17:27.500
ChatGPT, and you ask it to check the summary

00:17:27.500 --> 00:17:30.640
for errors, inconsistencies, or missing information.
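As a sketch of that two-step loop: the ask_* callables stand in for whatever API clients you use; no real service or SDK is assumed here.

```python
# Cross-model verification sketch: model A drafts the summary,
# model B audits it. The two callables are placeholders for real
# API clients, injected so the workflow itself stays testable.

def cross_check(document: str, ask_model_a, ask_model_b) -> dict:
    """Summarize with one model, then have a second model review it."""
    summary = ask_model_a(f"Summarize this document:\n{document}")
    review = ask_model_b(
        "Check the following summary for errors, inconsistencies, "
        "or missing information.\n\n"
        f"Document:\n{document}\n\nSummary:\n{summary}"
    )
    return {"summary": summary, "review": review}

# Stub models illustrate the flow without calling any service.
result = cross_check(
    "The contract runs 24 months at $500/month.",
    ask_model_a=lambda p: "24-month contract, $500 monthly.",
    ask_model_b=lambda p: "No errors found.",
)
print(result["review"])
```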

00:17:30.859 --> 00:17:33.559
And if they disagree, you investigate. If they

00:17:33.559 --> 00:17:36.559
agree, you can feel much safer sending that document

00:17:36.559 --> 00:17:39.079
forward. Precisely. You're using their differences

00:17:39.079 --> 00:17:41.440
in training and architecture as a built-in safety

00:17:41.440 --> 00:17:44.160
net. But ultimately, the human in the loop is

00:17:44.160 --> 00:17:47.230
non-negotiable. It's the pilot and co-pilot analogy.

00:17:47.490 --> 00:17:49.549
The AI is the co -pilot. It's doing the bulk

00:17:49.549 --> 00:17:51.789
of the work, managing the data, flying the plane.

00:17:51.950 --> 00:17:54.849
But you are the pilot who is always, always responsible

00:17:54.849 --> 00:17:57.529
for the landing. Never let the AI post directly

00:17:57.529 --> 00:17:59.849
to social media or send an email to a client

00:17:59.849 --> 00:18:02.190
or execute a financial trade without your final

00:18:02.190 --> 00:18:05.009
human review. That responsibility just can't

00:18:05.009 --> 00:18:07.109
be delegated. Before we wrap up, let's just quickly

00:18:07.109 --> 00:18:09.509
run through the don't list, the common traps

00:18:09.509 --> 00:18:12.569
that still waste so much user time. Right. Don't

00:18:12.569 --> 00:18:15.509
give wall-of-text prompts. AIs scan text; they

00:18:15.509 --> 00:18:18.029
love structure. Use bullet points and numbered

00:18:18.029 --> 00:18:21.200
lists for clarity. And critically, don't use

00:18:21.200 --> 00:18:24.160
AI for things it's inherently bad at, like

00:18:24.160 --> 00:18:26.880
high-level strategy that needs deep market context

00:18:26.880 --> 00:18:30.779
or true emotional empathy. Use it for the heavy

00:18:30.779 --> 00:18:33.000
lifting of data analysis and drafting. So the

00:18:33.000 --> 00:18:35.880
big idea here, really, is that effective AI use

00:18:35.880 --> 00:18:38.279
isn't about chasing the next shiny new tool.

00:18:38.400 --> 00:18:41.500
It is about clarity, structure, verification,

00:18:41.940 --> 00:18:44.440
and understanding the core principles of communication.

00:18:44.700 --> 00:18:46.920
The biggest lesson after spending hundreds of

00:18:46.920 --> 00:18:49.170
hours building these systems is that AI doesn't

00:18:49.170 --> 00:18:52.170
replace your brain. It amplifies it. It's a true

00:18:52.170 --> 00:18:54.630
force multiplier. If you are a clear, structured

00:18:54.630 --> 00:18:57.289
communicator, AI will make you superhuman. You'll

00:18:57.289 --> 00:18:58.970
achieve an output quality and a speed that were

00:18:58.970 --> 00:19:01.250
previously impossible. But if you're disorganized,

00:19:01.289 --> 00:19:03.650
if your inputs are vague and you can't articulate

00:19:03.650 --> 00:19:05.829
exactly what you need, AI will just make you

00:19:05.829 --> 00:19:08.009
disorganized faster. It just speeds up the mess.

00:19:08.349 --> 00:19:11.109
The key to mastery is personal clarity. The roadmap

00:19:11.109 --> 00:19:13.549
is clear, and the tools are definitely ready

00:19:13.549 --> 00:19:16.390
for serious delegation. I think the key question

00:19:16.390 --> 00:19:18.470
now is: what are you going to build with these

00:19:18.470 --> 00:19:20.549
skills? We covered a huge amount of material

00:19:20.549 --> 00:19:22.990
today, so let's give you a simple 30 -day plan.

00:19:23.430 --> 00:19:26.049
This week, just focus on clarity: make every single

00:19:26.049 --> 00:19:28.970
prompt a role-task-context-format prompt. In

00:19:28.970 --> 00:19:31.210
a couple of weeks, focus on refinement. Try that

00:19:31.210 --> 00:19:33.650
advanced technique of critiquing the AI's own

00:19:33.650 --> 00:19:36.869
work. Then try a simple automation with Zapier

00:19:36.869 --> 00:19:39.670
or Make. Just pick one technique and try it today.

00:19:39.930 --> 00:19:40.490
Get started.
