WEBVTT

00:00:00.000 --> 00:00:05.580
We all fear it. That sudden jarring case of AI

00:00:05.580 --> 00:00:07.799
amnesia. Oh, it is the absolute worst. You switch

00:00:07.799 --> 00:00:10.240
to a new tool. And suddenly your brand new assistant

00:00:10.240 --> 00:00:13.039
has zero clue who you are. Right. It's like you

00:00:13.039 --> 00:00:15.300
never even existed. It doesn't actually have

00:00:15.300 --> 00:00:17.300
to be this way. You can transplant a digital

00:00:17.300 --> 00:00:20.399
brain in under five minutes. Welcome to our deep

00:00:20.399 --> 00:00:22.839
dive. Yeah, I am really excited about this one.

00:00:22.980 --> 00:00:25.140
I am glad you're joining us today. We are looking

00:00:25.140 --> 00:00:28.300
at a highly specific migration guide. We are

00:00:28.300 --> 00:00:31.920
dissecting the complete move from ChatGPT over

00:00:31.920 --> 00:00:34.090
to Claude. And we are going to cover everything

00:00:34.090 --> 00:00:36.890
you need to make this seamless. We will explore

00:00:36.890 --> 00:00:39.350
why everyone is making this jump right now. We'll

00:00:39.350 --> 00:00:41.270
show you how to properly prep your data. You

00:00:41.270 --> 00:00:44.170
will get the exact master export prompt you need.

00:00:44.289 --> 00:00:46.369
We'll also explain how Claude's memory actually

00:00:46.369 --> 00:00:48.869
works under the hood. And crucially, we will

00:00:48.869 --> 00:00:51.810
highlight the major traps you absolutely must

00:00:51.810 --> 00:00:53.890
avoid. Yeah, those traps catch a lot of people.

00:00:54.030 --> 00:00:57.270
Let's unpack this massive shift in the AI landscape.

00:00:57.570 --> 00:01:00.609
Because for a long time, ChatGPT was essentially

00:01:00.609 --> 00:01:03.740
the only real option in town. It really was.

00:01:04.120 --> 00:01:06.219
If you wanted a smart digital assistant, you

00:01:06.219 --> 00:01:08.540
went there. You built your entire workflow around

00:01:08.540 --> 00:01:10.840
it. Exactly. It was the absolute gold standard

00:01:10.840 --> 00:01:13.439
for everyone. But things have fundamentally changed

00:01:13.439 --> 00:01:16.219
recently. Very much so. Power users are finding

00:01:16.219 --> 00:01:19.040
a new favorite. They are migrating their daily

00:01:19.040 --> 00:01:21.579
operations over to Claude. Which is built by

00:01:21.579 --> 00:01:24.420
a company called Anthropic. Right. And the most

00:01:24.420 --> 00:01:27.000
common feedback I hear, it just feels more like

00:01:27.000 --> 00:01:30.180
a person. That's the primary draw right now.

00:01:30.349 --> 00:01:33.590
It feels significantly less like a robot. The

00:01:33.590 --> 00:01:36.810
Claude 3.5 Sonnet model is particularly incredible.

00:01:36.890 --> 00:01:39.629
Oh, it's amazing. These newer models follow highly

00:01:39.629 --> 00:01:42.430
complex multi-step instructions beautifully.

00:01:42.689 --> 00:01:45.090
But more importantly, they write in such a natural

00:01:45.090 --> 00:01:47.909
tone. Yeah, a very human-like tone. ChatGPT

00:01:47.909 --> 00:01:51.109
is still fantastic for many specific tasks.

00:01:51.290 --> 00:01:54.569
Absolutely. It is excellent for strict mathematical

00:01:54.569 --> 00:01:57.790
logic. It is wonderful for debugging highly complex

00:01:57.790 --> 00:02:01.709
code. It generates images seamlessly with DALL-E.

00:02:01.890 --> 00:02:04.549
And, of course, it has that incredible advanced

00:02:04.549 --> 00:02:07.030
voice mode. Right, and it remains a total powerhouse

00:02:07.030 --> 00:02:09.650
for those things. But Claude is consistently

00:02:09.650 --> 00:02:12.270
winning the creative work category. It just sounds

00:02:12.270 --> 00:02:15.889
so much less AI-ish. It rarely uses those telltale

00:02:15.889 --> 00:02:20.110
words, right, like delve? Yes, or leverage, or

00:02:20.110 --> 00:02:23.530
testament. You almost never see those. That vocabulary

00:02:23.530 --> 00:02:26.210
difference is surprisingly huge for your daily

00:02:26.210 --> 00:02:28.430
workflow. It saves you so much editing time.

00:02:28.680 --> 00:02:31.139
But we also have to talk about the context window.

00:02:31.659 --> 00:02:34.889
Oh, yeah. That is a critical architectural difference.

00:02:35.229 --> 00:02:37.349
The context window is essentially its short-term

00:02:37.349 --> 00:02:40.169
working memory. It's how much text the AI can

00:02:40.169 --> 00:02:42.949
actively hold at once. Claude handles up to 200,000

00:02:42.949 --> 00:02:45.550
tokens easily. Which is just a massive amount

00:02:45.550 --> 00:02:47.870
of information. It completely changes how you

00:02:47.870 --> 00:02:50.210
work with long documents. You are no longer just

00:02:50.210 --> 00:02:52.389
feeding it tiny snippets. Right. You can drop

00:02:52.389 --> 00:02:55.250
in an entire code base. Or you can upload three

00:02:55.250 --> 00:02:58.090
different annual financial reports. And it synthesizes

00:02:58.090 --> 00:03:00.129
all of it perfectly without losing the plot.

00:03:00.349 --> 00:03:02.729
Then there is a transformative feature called

00:03:02.729 --> 00:03:04.930
Artifacts. Artifacts essentially gives Claude

00:03:04.930 --> 00:03:07.449
a dedicated workspace. Right next to your chat

00:03:07.449 --> 00:03:09.770
window. Developers are absolutely loving it.

00:03:10.009 --> 00:03:12.250
If you ask it to code a Python dashboard, it

00:03:12.250 --> 00:03:15.189
doesn't just spit out text. No, it actually renders

00:03:15.189 --> 00:03:17.789
a functional interactive preview. It builds it

00:03:17.789 --> 00:03:20.150
right on the side of your screen. It is a huge

00:03:20.150 --> 00:03:23.870
win for creators. But here is the profound friction

00:03:23.870 --> 00:03:26.310
for you. The switching cost. The switching cost

00:03:26.310 --> 00:03:31.379
is very real. If you have used ChatGPT for a

00:03:31.379 --> 00:03:34.879
year, it deeply knows you. It knows your job

00:03:34.879 --> 00:03:37.479
title and your family dynamics. It knows your

00:03:37.479 --> 00:03:40.699
absolute favorite coding language. It knows exactly

00:03:40.699 --> 00:03:43.020
how you like your emails to sound. That is what

00:03:43.020 --> 00:03:45.479
we call its internal memory. It's a collection

00:03:45.479 --> 00:03:48.000
of small facts it picked up over time. The fear

00:03:48.000 --> 00:03:50.240
of losing this memory keeps people completely

00:03:50.240 --> 00:03:52.620
stuck. It really does. It was like moving into

00:03:52.620 --> 00:03:55.560
a beautiful, spacious new house. But leaving

00:03:55.560 --> 00:03:57.919
all your customized furniture behind. Exactly.

00:03:58.039 --> 00:04:00.419
You have to start decorating from scratch. That's

00:04:00.419 --> 00:04:03.479
exhausting. Luckily, Anthropic recognized this

00:04:03.479 --> 00:04:06.000
massive hurdle recently. They built a dedicated

00:04:06.000 --> 00:04:08.659
tool to help bring those memories over. The digital

00:04:08.659 --> 00:04:11.099
move is now easier than ever. You do not lose

00:04:11.099 --> 00:04:13.460
your custom instructions or your saved memories.

00:04:13.879 --> 00:04:15.780
No, you can keep all of it. So before we get

00:04:15.780 --> 00:04:17.980
into the migration itself, I have a probing question.

00:04:18.519 --> 00:04:21.240
Why should someone stick with ChatGPT instead

00:04:21.240 --> 00:04:23.920
of moving? Well, you should stay if you rely

00:04:23.920 --> 00:04:27.279
heavily on voice mode or daily image generation.

00:04:27.860 --> 00:04:32.779
That makes perfect sense. Yeah,

00:04:32.800 --> 00:04:34.860
it's a clear dividing line. Let's move into the

00:04:34.860 --> 00:04:37.160
preparation phase. This is where the real work

00:04:37.160 --> 00:04:40.339
begins. That fear of losing your digital setup

00:04:40.339 --> 00:04:43.180
makes people hesitate. But preparing the data

00:04:43.180 --> 00:04:45.480
is surprisingly straightforward. You just have

00:04:45.480 --> 00:04:47.829
to know which switches to flip. And this is where

00:04:47.829 --> 00:04:50.129
people make their first huge mistake. If you

00:04:50.129 --> 00:04:53.370
just open a chat and ask for an export, it fails.

00:04:53.670 --> 00:04:56.189
Completely fails. You might only get 20% of

00:04:56.189 --> 00:04:58.649
your data. You miss almost all of your nuanced

00:04:58.649 --> 00:05:00.829
information. You really have to force the system

00:05:00.829 --> 00:05:02.970
to look deeper. I have to offer a vulnerable

00:05:02.970 --> 00:05:06.459
admission here. Oh. I still wrestle with remembering

00:05:06.459 --> 00:05:09.339
to check my settings before exporting. I have

00:05:09.339 --> 00:05:12.160
definitely lost project contexts that way. Oh,

00:05:12.259 --> 00:05:13.959
man. Yeah, it happens to the absolute best of

00:05:13.959 --> 00:05:16.699
us. It is incredibly frustrating to realize your

00:05:16.699 --> 00:05:19.779
data is gone because you rushed. You have to

00:05:19.779 --> 00:05:23.040
explicitly force ChatGPT to look into every

00:05:23.040 --> 00:05:25.540
corner. You need to open its entire brain before

00:05:25.540 --> 00:05:28.740
you extract anything. So instead of just clicking

00:05:28.740 --> 00:05:31.819
wildly. Be strategic. When you dive into your

00:05:31.819 --> 00:05:34.779
backend settings, look for personalization preferences.

00:05:35.339 --> 00:05:37.980
You need to ensure the AI has permission to view

00:05:37.980 --> 00:05:40.959
its own memories. You will see three main toggles

00:05:40.959 --> 00:05:43.259
in that backend menu. Make sure they're all turned

00:05:43.259 --> 00:05:45.420
on. The first one is simply called memory. This

00:05:45.420 --> 00:05:47.300
is the database of facts about you. The second

00:05:47.300 --> 00:05:50.279
toggle is personalization. That helps the AI

00:05:50.279 --> 00:05:52.839
adapt to your specific writing style. The third

00:05:52.839 --> 00:05:55.639
crucial one is chat history and training. Now,

00:05:55.860 --> 00:05:58.060
you might normally keep this turned off for privacy.

00:05:58.259 --> 00:06:00.800
Many power users do. But you need it turned on

00:06:00.800 --> 00:06:03.040
briefly right now. It allows the system to pull

00:06:03.040 --> 00:06:05.439
actively from your complete history. Next, we

00:06:05.439 --> 00:06:07.920
have to talk about the specific AI model you

00:06:07.920 --> 00:06:11.560
use. Yes. ChatGPT offers standard models like

00:06:11.560 --> 00:06:14.540
GPT-4o and newer thinking models. For this export,

00:06:14.839 --> 00:06:16.910
you absolutely must use a thinking model. You

00:06:16.910 --> 00:06:20.629
should use something like o1 or o3-mini. This

00:06:20.629 --> 00:06:23.149
is the absolute secret sauce for a successful

00:06:23.149 --> 00:06:26.110
transfer. Standard models are just too lazy for

00:06:26.110 --> 00:06:29.069
this kind of exhaustive task. If you ask a standard

00:06:29.069 --> 00:06:31.750
model for your memories, it rushes. It might

00:06:31.750 --> 00:06:34.129
give you the last five superficial things it

00:06:34.129 --> 00:06:36.550
learned. But a thinking model actually takes

00:06:36.550 --> 00:06:39.430
its time. It iterates. It thinks much longer.

00:06:39.949 --> 00:06:42.490
It searches its hidden database thoroughly. In

00:06:42.490 --> 00:06:45.189
extensive testing, a thinking model pulled three

00:06:45.189 --> 00:06:48.000
times more data. It grabs everything you actually

00:06:48.000 --> 00:06:50.939
need for a complete profile. I've seen standard

00:06:50.939 --> 00:06:54.899
models just return garbage data. So why avoid

00:06:54.899 --> 00:06:58.139
standard models for this specific export? Standard

00:06:58.139 --> 00:07:01.079
models are lazy. Thinking models search the internal

00:07:01.079 --> 00:07:03.060
database much more thoroughly.

00:07:03.060 --> 00:07:07.800
Got it. So now

00:07:07.800 --> 00:07:09.939
your settings are fully prepared. You have the

00:07:09.939 --> 00:07:12.699
right model selected. But you need a very specific

00:07:12.699 --> 00:07:15.019
structured prompt. You cannot just casually say,

00:07:15.139 --> 00:07:17.699
tell me what you know about me. That will just

00:07:17.699 --> 00:07:20.339
give you a messy paragraph. You need a prompt

00:07:20.339 --> 00:07:23.199
that forces a highly clean output. You want a

00:07:23.199 --> 00:07:26.100
perfect, easily transferable format. Claude actually

00:07:26.100 --> 00:07:28.920
provides a basic prompt in their documentation.

00:07:29.160 --> 00:07:31.699
But the guide we are analyzing tweaked it heavily.

00:07:32.000 --> 00:07:34.259
The tweaked version is much better for a power

00:07:34.259 --> 00:07:37.300
user's move. The real secret here is using markdown

00:07:37.300 --> 00:07:40.459
code blocks. Exactly. You tell the AI to format

00:07:40.459 --> 00:07:43.240
everything inside one single code block. This

00:07:43.240 --> 00:07:45.120
keeps the plain text from getting messy when

00:07:45.120 --> 00:07:47.790
you copy it. It strips out all the weird formatting.

00:07:47.889 --> 00:07:50.250
And it makes a convenient copy button magically

00:07:50.250 --> 00:07:53.350
appear. I love the sheer elegance of using a

00:07:53.350 --> 00:07:56.269
code block for plain text. It is such a clever,

00:07:56.470 --> 00:07:58.829
simple hack. It prevents all those rich text

00:07:58.829 --> 00:08:01.089
formatting nightmares. It really is brilliant.

00:08:01.329 --> 00:08:04.329
But the actual content of the prompt is even

00:08:04.329 --> 00:08:06.750
more important. You paste this master prompt

00:08:06.750 --> 00:08:09.689
into a new chat. Using that thinking model. You

00:08:09.689 --> 00:08:12.439
explicitly state... I am moving my workflow to

00:08:12.439 --> 00:08:15.540
another service. You command it to export every

00:08:15.540 --> 00:08:18.339
single bit of personalized data, every stored

00:08:18.339 --> 00:08:21.060
memory, every custom instruction, every piece

00:08:21.060 --> 00:08:24.480
of deep context it has about your job. You demand

00:08:24.480 --> 00:08:27.220
the data in a highly specific format. You want

00:08:27.220 --> 00:08:30.069
the category, the date, and the specific memory

00:08:30.069 --> 00:08:32.250
itself. You ask it to include your preferred

00:08:32.250 --> 00:08:35.730
name and location. Your exact job title and your

00:08:35.730 --> 00:08:37.870
overarching work goals. Your preferred coding

00:08:37.870 --> 00:08:40.570
languages and specific frameworks. You even ask

00:08:40.570 --> 00:08:43.669
for your nuanced writing style preferences. Things

00:08:43.669 --> 00:08:47.590
like always be concise or use dry humor. And

00:08:47.590 --> 00:08:50.590
any of your strict always do X or never do Y

00:08:50.590 --> 00:08:53.549
rules. And here is the absolute most critical

00:08:53.549 --> 00:08:56.870
instruction of all. You tell it, do not summarize

00:08:56.870 --> 00:08:59.309
anything. Right. Do not group these memories

00:08:59.309 --> 00:09:01.929
into general themes. I want the raw verbatim

00:09:01.929 --> 00:09:04.610
data exactly as it was saved. This preserves

00:09:04.610 --> 00:09:07.370
your entirely unique working style. If the AI

00:09:07.370 --> 00:09:10.330
summarizes your preferences, you lose that nuance.

00:09:10.490 --> 00:09:13.350
All that precious hard-earned context just vanishes.

00:09:13.549 --> 00:09:16.690
So why must we demand raw verbatim data instead

00:09:16.690 --> 00:09:19.049
of summaries? Summaries strip away the nuanced,

00:09:19.049 --> 00:09:21.190
specific ways you prefer to work and write.

00:09:21.529 --> 00:09:26.340
Perfect. That brings

00:09:26.340 --> 00:09:30.059
us to the actual import phase. All right.

00:09:30.059 --> 00:09:32.519
Now you have your master export completed. You

00:09:32.519 --> 00:09:35.899
have a massive clean block of text from ChatGPT.

00:09:35.899 --> 00:09:38.860
It is time to head over to Claude and complete

00:09:38.860 --> 00:09:41.759
the brain transplant. Anthropic knew power users

00:09:41.759 --> 00:09:44.639
were switching platforms in droves. So they built

00:09:44.639 --> 00:09:47.940
a highly specific tool just for this. It makes

00:09:47.940 --> 00:09:50.320
the platform competition very friendly for the

00:09:50.320 --> 00:09:53.139
user. You go into Claude's interface. You dive

00:09:53.139 --> 00:09:55.399
into your account settings. Under the capabilities

00:09:55.399 --> 00:09:57.620
section, look for the memory configuration.

00:09:57.860 --> 00:09:59.820
You will see a dedicated button waiting right

00:09:59.820 --> 00:10:03.120
there. It explicitly says import memory from

00:10:03.120 --> 00:10:05.860
other AI providers. You just click start import.

00:10:06.000 --> 00:10:08.279
A clean text box will appear on your screen.

00:10:08.519 --> 00:10:11.139
This is where you paste that big perfectly formatted

00:10:11.139 --> 00:10:13.740
code block. You paste the raw text and simply

00:10:13.740 --> 00:10:15.940
click add to memory. This is where the underlying

00:10:15.940 --> 00:10:19.600
technology gets really fascinating. Claude

00:10:19.600 --> 00:10:22.259
does not just save this as a dumb static note.

00:10:22.460 --> 00:10:25.259
No, it actively digests and learns the information.

00:10:25.460 --> 00:10:28.899
If your imported text says, I prefer functional

00:10:28.899 --> 00:10:31.580
programming in Python, Claude genuinely understands

00:10:31.580 --> 00:10:34.899
that concept. It knows it should default to functional

00:10:34.899 --> 00:10:37.700
Python in the future. It stores those specific

00:10:37.700 --> 00:10:41.059
details as active behavioral memories. That's

00:10:41.059 --> 00:10:43.860
exactly why the move feels so magical. The AI

00:10:43.860 --> 00:10:46.960
effectively learns your entire professional past

00:10:46.960 --> 00:10:50.840
in seconds. After you click save, you absolutely

00:10:50.840 --> 00:10:54.340
must verify the import. Start a brand new blank

00:10:54.340 --> 00:10:57.019
chat. Ask it a very simple question. Based on

00:10:57.019 --> 00:11:00.019
what you just learned, what is my job? If it

00:11:00.019 --> 00:11:03.100
answers correctly, your digital migration was

00:11:03.100 --> 00:11:05.799
a success. But you need to deeply understand

00:11:05.799 --> 00:11:08.500
how their underlying brains differ. Yeah, the

00:11:08.500 --> 00:11:10.960
memory architectures are quite different. ChatGPT

00:11:10.960 --> 00:11:13.600
saves a new memory the exact moment you say it.

00:11:13.820 --> 00:11:15.980
It is an instant database, right? Claude is a

00:11:15.980 --> 00:11:17.919
bit more architectural and thoughtful about it.

00:11:18.220 --> 00:11:20.840
Claude processes your ongoing conversations quite

00:11:20.840 --> 00:11:23.620
differently. It updates its deeper long-term

00:11:23.620 --> 00:11:26.740
memory every 24 hours. Usually this heavy

00:11:26.740 --> 00:11:28.659
processing happens overnight. So if you tell

00:11:28.659 --> 00:11:31.929
Claude a brand new fact today... It waits. It

00:11:31.929 --> 00:11:33.710
perfectly remembers it for your current chat

00:11:33.710 --> 00:11:35.549
session. But it might not show up in the global

00:11:35.549 --> 00:11:38.330
memory list until tomorrow. Claude is also heavily

00:11:38.330 --> 00:11:41.389
tuned as a professional work assistant. It prioritizes

00:11:41.389 --> 00:11:44.269
your professional tools and complex project goals.

00:11:44.549 --> 00:11:47.870
It might actively overlook personal trivia. Like

00:11:47.870 --> 00:11:50.470
your cat's name or your favorite movie. You can

00:11:50.470 --> 00:11:52.730
always add those personal details manually later.

00:11:52.990 --> 00:11:55.309
It focuses relentlessly on the productivity side

00:11:55.309 --> 00:11:57.769
of things. So people often get confused about

00:11:57.769 --> 00:12:00.429
the timing. Why might Claude forget your name

00:12:00.429 --> 00:12:03.750
in a brand new chat today? Claude processes and

00:12:03.750 --> 00:12:06.129
officially updates its deeper memory overnight,

00:12:06.549 --> 00:12:09.309
not instantly.

00:12:12.610 --> 00:12:14.309
That is a great distinction. It really helps

00:12:14.309 --> 00:12:17.049
manage expectations. Let's transition to handling

00:12:17.049 --> 00:12:20.350
highly specialized workspaces. If you are a pro

00:12:20.350 --> 00:12:23.789
user on either platform, you likely use projects.

00:12:24.009 --> 00:12:26.950
Both platforms charge about $20 a month for their

00:12:26.950 --> 00:12:29.610
premium tiers. Projects are incredibly useful

00:12:29.610 --> 00:12:32.210
for compartmentalizing your life. They are completely

00:12:32.210 --> 00:12:35.009
isolated spaces for different parts of your workflow.

00:12:35.370 --> 00:12:37.629
You might have a dedicated fitness project and

00:12:37.629 --> 00:12:40.370
a separate coding project. Moving these isolated

00:12:40.370 --> 00:12:43.549
projects requires a bit more intentional effort.

00:12:43.769 --> 00:12:47.289
You cannot just execute one big bulk export for

00:12:47.289 --> 00:12:49.129
everything. You have to go into each project

00:12:49.129 --> 00:12:51.730
completely individually. It is definitely manual

00:12:51.730 --> 00:12:55.009
labor. but it is vital for maintaining context.

00:12:55.470 --> 00:12:58.409
You open your specific project over in ChatGPT.

00:12:58.490 --> 00:13:01.269
You use that exact same master export prompt

00:13:01.269 --> 00:13:04.330
we discussed. Because you are inside that specific

00:13:04.330 --> 00:13:07.529
walled garden, ChatGPT looks around locally.

00:13:07.690 --> 00:13:09.929
It pulls the unique project instructions and

00:13:09.929 --> 00:13:12.850
the context of uploaded files. You meticulously

00:13:12.850 --> 00:13:15.769
copy that highly specific data. Then you head

00:13:15.769 --> 00:13:18.370
over to Claude to manually rebuild it. In Claude,

00:13:18.409 --> 00:13:20.649
you create a brand new project. You give it the

00:13:20.649 --> 00:13:23.460
exact same name for continuity. In the new project

00:13:23.460 --> 00:13:26.139
instructions area, you paste your freshly exported

00:13:26.139 --> 00:13:28.679
data. And please, do not forget to re-upload

00:13:28.679 --> 00:13:30.440
your core knowledge files. Put your reference

00:13:30.440 --> 00:13:33.419
PDFs and your complex spreadsheets back in. Comparing

00:13:33.419 --> 00:13:36.259
the two directly, Claude's version of projects

00:13:36.259 --> 00:13:39.340
is just more powerful. This brings us right back

00:13:39.340 --> 00:13:43.039
to that incredible artifacts feature. Whoa! Imagine

00:13:43.039 --> 00:13:45.179
it building a working dashboard right inside

00:13:45.179 --> 00:13:47.960
the chat from your uploaded files.

00:13:48.379 --> 00:13:50.919
It is incredibly powerful to watch that happen

00:13:50.919 --> 00:13:53.580
in real time. It fundamentally transforms how

00:13:53.580 --> 00:13:56.039
you interact with stagnant data. It takes the

00:13:56.039 --> 00:13:59.039
concept of a workspace to a completely new level.

00:13:59.240 --> 00:14:01.860
It's not just chat anymore. It's active software

00:14:01.860 --> 00:14:04.779
generation. This step usually trips up users

00:14:04.779 --> 00:14:08.000
with massive setups. Can you bulk export all

00:14:08.000 --> 00:14:10.519
your specialized workspaces at once? No. You

00:14:10.519 --> 00:14:13.100
must export and rebuild each project individually

00:14:13.100 --> 00:14:15.980
to maintain the context.

00:14:18.600 --> 00:14:21.289
It takes time. But it is worth it.

00:14:21.470 --> 00:14:23.610
We need to clearly recap the common traps here.

00:14:23.789 --> 00:14:25.710
I want to ensure you avoid these frustrating

00:14:25.710 --> 00:14:28.370
pitfalls completely. There are three big mistakes

00:14:28.370 --> 00:14:30.970
that people constantly make. We see these exact

00:14:30.970 --> 00:14:34.190
same unforced errors over and over. The very

00:14:34.190 --> 00:14:37.029
first mistake is using the wrong AI model for

00:14:37.029 --> 00:14:39.490
the export. People often use a fast, lazy model

00:14:39.490 --> 00:14:42.090
like GPT-4o mini, just because it's the default.

00:14:42.429 --> 00:14:44.669
If you do that, you get a pathetically tiny list.

00:14:44.889 --> 00:14:46.769
It might only give you five highly recent memories.

00:14:47.049 --> 00:14:50.509
You absolutely must use the thinking model to

00:14:50.509 --> 00:14:53.649
extract your full history. You want the complete

00:14:53.649 --> 00:14:57.029
rich list of 50 or more distinct memories. The

00:14:57.029 --> 00:15:00.029
second major mistake is not respecting Claude's

00:15:00.029 --> 00:15:02.990
24-hour sync cycle. People paste their massive

00:15:02.990 --> 00:15:06.210
memory blocks into Claude and start testing immediately.

00:15:06.309 --> 00:15:08.549
They start a brand new chat 10 minutes later

00:15:08.549 --> 00:15:11.629
expecting a miracle. And then they get incredibly

00:15:11.629 --> 00:15:14.110
frustrated when it fails. Claude simply says,

00:15:14.110 --> 00:15:16.029
I don't know who you are. We have to give the

00:15:16.029 --> 00:15:18.649
system a few hours. The backend architecture

00:15:18.649 --> 00:15:22.009
needs time to properly index your data. The third

00:15:22.009 --> 00:15:24.830
critical mistake is forgetting the custom instructions

00:15:24.830 --> 00:15:28.450
box. ChatGPT has a highly specific area buried

00:15:28.450 --> 00:15:31.200
in the settings. It asks, "How would you like

00:15:31.200 --> 00:15:34.080
ChatGPT to respond?" People constantly forget

00:15:34.080 --> 00:15:37.019
to copy that specific text separately. That hidden

00:15:37.019 --> 00:15:39.320
box is arguably the most important part of the

00:15:39.320 --> 00:15:42.059
transition. It is the core persona setting. It is

00:15:42.059 --> 00:15:44.039
what makes Claude actually feel like your trusted

00:15:44.039 --> 00:15:45.960
old assistant. You have to grab every single

00:15:45.960 --> 00:15:48.700
piece of the puzzle. Absolutely. You cannot leave

00:15:48.700 --> 00:15:51.279
the most important pieces behind. Just to hammer

00:15:51.279 --> 00:15:54.500
this point home, what happens if you use a lazy

00:15:54.500 --> 00:15:57.409
model for the export? You will only get about

00:15:57.409 --> 00:15:59.909
five recent memories instead of your complete

00:15:59.909 --> 00:16:01.970
history.

00:16:01.970 --> 00:16:05.090
Don't

00:16:05.090 --> 00:16:07.190
do it. We have covered a tremendous amount of

00:16:07.190 --> 00:16:09.970
ground today. Let's distill the core philosophy

00:16:09.970 --> 00:16:13.029
behind all of this technical maneuvering.

00:16:13.549 --> 00:16:15.789
Ultimately, you own your preferences and your

00:16:15.789 --> 00:16:18.429
digital history. You are never locked into one

00:16:18.429 --> 00:16:21.399
specific rigid tool. That is the most empowering

00:16:21.399 --> 00:16:24.440
part of this whole technological shift. You are

00:16:24.440 --> 00:16:27.460
finally in control of your own AI persona. By

00:16:27.460 --> 00:16:30.019
pulling your data out meticulously, you take

00:16:30.019 --> 00:16:33.200
absolute control. You force the AI to work precisely

00:16:33.200 --> 00:16:35.980
for you. It is like stacking Lego blocks of data

00:16:35.980 --> 00:16:38.259
exactly how you want them. And remember this

00:16:38.259 --> 00:16:40.899
exact same logic works perfectly in reverse.

00:16:41.100 --> 00:16:42.940
If you try Claude for a month and absolutely

00:16:42.940 --> 00:16:45.200
hate it, you can move back. You can transplant

00:16:45.200 --> 00:16:47.960
Claude's newly acquired memories right back into

00:16:47.960 --> 00:16:51.000
ChatGPT. You are completely free to roam.

00:16:51.340 --> 00:16:53.340
I want to challenge you directly right now. Use

00:16:53.340 --> 00:16:55.799
the import tool today. Set up just one single

00:16:55.799 --> 00:16:59.059
project in Claude. Give it a strict, unbiased,

00:16:59.299 --> 00:17:01.700
one-week trial run. See if the coding logic

00:17:01.700 --> 00:17:03.519
and the natural writing actually feel better.

00:17:03.779 --> 00:17:06.400
I want to leave you with one final, fascinating

00:17:06.400 --> 00:17:09.099
thought. If transferring an AI's entire memory

00:17:09.099 --> 00:17:12.200
is this simple right now, what happens in a few

00:17:12.200 --> 00:17:14.640
years? Imagine when you can seamlessly merge

00:17:14.640 --> 00:17:17.259
the memories of three or four different specialized

00:17:17.259 --> 00:17:21.180
AIs into one ultimate hyper-personalized super

00:17:21.180 --> 00:17:24.019
assistant. That is a profound future to

00:17:24.019 --> 00:17:26.119
think about. Thank you for joining us on this

00:17:26.119 --> 00:17:28.799
deep dive today. Be well.
