WEBVTT

00:00:00.000 --> 00:00:01.840
Imagine buying a state-of-the-art professional

00:00:01.840 --> 00:00:04.400
kitchen today. Oh, yeah. You spend an absolute

00:00:04.400 --> 00:00:07.059
fortune on the equipment, and then you only use

00:00:07.059 --> 00:00:09.400
it to toast bread. Right, which sounds completely

00:00:09.400 --> 00:00:11.960
ridiculous out loud. It really does. But honestly,

00:00:12.160 --> 00:00:15.380
that is exactly how we use AI today. Welcome

00:00:15.380 --> 00:00:18.600
to our deep dive. Glad to be here. We are unpacking

00:00:18.600 --> 00:00:21.260
a really fascinating article by Max Anne today.

00:00:21.359 --> 00:00:24.140
It covers a major shift happening in the industry

00:00:24.140 --> 00:00:26.839
right now. A massive shift, yeah. We are moving

00:00:26.839 --> 00:00:30.760
away from simple AI chat interfaces completely.

00:00:31.199 --> 00:00:34.259
We are heading toward a fully autonomous AI workforce

00:00:34.259 --> 00:00:36.619
instead. It is a totally different way of working.

00:00:36.759 --> 00:00:39.539
And we will explore four specific open source

00:00:39.539 --> 00:00:42.479
blueprints to get there. We are looking at Superpowers,

00:00:42.719 --> 00:00:45.799
G-Stack, Hermes Agent, and Paperclip. Okay,

00:00:45.859 --> 00:00:47.679
let's unpack this. We really need to understand

00:00:47.679 --> 00:00:50.659
the baseline problem first. A regular AI assistant

00:00:50.659 --> 00:00:54.539
is purely reactive by its very nature. You ask

00:00:54.539 --> 00:00:57.039
a direct question and you get an immediate answer.

00:00:57.200 --> 00:00:59.740
Which works perfectly for simple, highly isolated

00:00:59.740 --> 00:01:02.299
daily tasks. You just want a quick recipe or

00:01:02.299 --> 00:01:05.379
maybe a code snippet. It is fast. It is undeniably

00:01:05.379 --> 00:01:09.000
very useful for those tiny micro tasks. Yeah,

00:01:09.000 --> 00:01:12.180
but real meaningful work is rarely that incredibly

00:01:12.180 --> 00:01:14.840
simple. Building a software feature requires

00:01:14.840 --> 00:01:18.459
intense planning and rigorous testing. Exactly.

00:01:18.700 --> 00:01:21.620
It requires maintaining memory across many different

00:01:21.620 --> 00:01:23.719
sequential work sessions. And this is exactly

00:01:23.719 --> 00:01:26.780
where single chat AI falls apart completely.

00:01:27.439 --> 00:01:30.599
The entire illusion of intelligence just breaks

00:01:30.599 --> 00:01:32.900
down under structural pressure. I always compare

00:01:32.900 --> 00:01:35.439
it to a talented but forgetful intern. Oh, that

00:01:35.439 --> 00:01:37.599
is a great way to frame it. They might be brilliant

00:01:37.599 --> 00:01:40.439
at writing a single block of code. But they lack

00:01:40.439 --> 00:01:42.879
the architectural scaffolding to manage their

00:01:42.879 --> 00:01:46.040
own memory. Right. They need exact, precise instructions

00:01:46.040 --> 00:01:49.060
for every single subsequent step. If we connect

00:01:49.060 --> 00:01:51.560
this to the bigger picture, the next massive

00:01:51.560 --> 00:01:54.340
productivity jump is a completely different paradigm

00:01:54.340 --> 00:01:57.349
entirely. It is not about building one marginally

00:01:57.349 --> 00:02:00.209
smarter foundational model. It is about coordinating

00:02:00.209 --> 00:02:02.909
multiple capabilities with a very rigid structure.

00:02:03.170 --> 00:02:05.450
I mean, I get that it forgets things over time,

00:02:05.569 --> 00:02:08.430
but why does the single chat model break down

00:02:08.430 --> 00:02:11.270
so quickly on big projects? Well, it comes down

00:02:11.270 --> 00:02:14.169
to how context windows actually operate mechanically.
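The mechanics being described reduce to a toy sketch. This is purely illustrative (real models work over tokens, not whole messages), but it shows how a fixed window drops the project goal:

```python
# Toy sketch of why a fixed context window "forgets" project goals.
# Illustrative only: real models truncate token streams, not messages.

MAX_TURNS = 3  # pretend the model can only "see" the last 3 messages

def visible_context(history, max_turns=MAX_TURNS):
    """Return only the messages the model would actually condition on."""
    return history[-max_turns:]

history = [
    "GOAL: build a billing service with audit logging",
    "Here is the database schema...",
    "Now write the invoice endpoint...",
    "Fix the date parsing bug...",
]

# The original project goal has already scrolled out of view.
print(visible_context(history))
```

The prediction step only ever sees what `visible_context` returns, which is why the model reacts to the last prompt rather than the overall plan.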

00:02:14.229 --> 00:02:17.479
Okay. An isolated chat window. just cannot hold

00:02:17.479 --> 00:02:20.240
overarching project goals. It predicts the next

00:02:20.240 --> 00:02:22.719
word based on the immediate previous text. So

00:02:22.719 --> 00:02:25.680
it just reacts blindly to the very last prompt

00:02:25.680 --> 00:02:28.979
you typed. Exactly. It has no true long-term

00:02:28.979 --> 00:02:31.740
plan. Right. Complex work needs memory and multi

00:02:31.740 --> 00:02:34.479
-step processes, not just instant answers.

00:02:34.800 --> 00:02:37.219
We need a fundamental shift in our daily approach.

00:02:37.560 --> 00:02:40.060
We really need to build guardrails around how

00:02:40.060 --> 00:02:42.479
the model operates. Which brings us to fixing

00:02:42.479 --> 00:02:45.550
the sloppy code generation first. But how do

00:02:45.550 --> 00:02:48.110
we actually force that discipline onto an LLM?

00:02:48.229 --> 00:02:51.090
This brings us directly to a project called Superpowers.

00:02:51.289 --> 00:02:53.949
Right. It was created by a very clever developer

00:02:53.949 --> 00:02:57.669
who goes by Obra. Superpowers adds real software

00:02:57.669 --> 00:03:00.349
engineering discipline directly to Claude Code.

00:03:00.569 --> 00:03:02.870
I still wrestle with prompt drift myself when

00:03:02.870 --> 00:03:05.750
asking AI for code. Yeah, it happens

00:03:05.750 --> 00:03:07.590
constantly. It just kind of wanders off the main

00:03:07.590 --> 00:03:09.969
architectural path entirely. We all do. That

00:03:09.969 --> 00:03:11.810
is the fundamental nature of the probabilistic

00:03:11.810 --> 00:03:14.879
beast. AI coding agents are incredibly fast right

00:03:14.879 --> 00:03:17.939
now, but that raw speed often brings hidden,

00:03:18.060 --> 00:03:21.000
deeply frustrating structural errors. Because

00:03:21.000 --> 00:03:23.680
they skip crucial structural tests just to save

00:03:23.680 --> 00:03:26.159
a little time. Exactly. They introduce weird,

00:03:26.319 --> 00:03:29.360
convoluted shortcuts that inevitably break things

00:03:29.360 --> 00:03:31.580
later on. They produce code that works perfectly

00:03:31.580 --> 00:03:34.219
fine for today, but it becomes totally impossible

00:03:34.219 --> 00:03:37.460
for a human to maintain by tomorrow. But Superpowers

00:03:37.460 --> 00:03:40.259
actually forces the AI to slow down completely.

00:03:40.539 --> 00:03:43.879
It really does. It demands a totally clean, isolated

00:03:43.879 --> 00:03:47.219
workspace before writing anything. It uses Git

00:03:47.219 --> 00:03:49.319
worktrees to keep the main environment safe.

00:03:49.500 --> 00:03:51.680
Which are isolated folders for safely testing

00:03:51.680 --> 00:03:54.000
different code changes. Right. It forces the

00:03:54.000 --> 00:03:56.599
AI to brainstorm before writing actual logic.
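The isolation step could be scripted along these lines. The helper names here are hypothetical, not Superpowers' actual code; only the underlying `git worktree add -b` command is real Git:

```python
# Sketch of how an agent harness might set up an isolated Git worktree
# before letting the AI touch any code. Helper names are made up;
# the `git worktree add -b <branch> <path>` command itself is real.
import subprocess

def new_worktree_cmd(path, branch):
    """Build the git command that creates an isolated worktree on a new branch."""
    return ["git", "worktree", "add", "-b", branch, path]

def create_isolated_workspace(path, branch, run=subprocess.run):
    # The main checkout stays untouched; the agent works only inside `path`.
    return run(new_worktree_cmd(path, branch), check=True)

print(new_worktree_cmd("../feature-sandbox", "ai/feature-x"))
```

If the AI's branch turns out to be a dead end, the sandbox folder is simply removed and the main environment was never at risk.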

00:03:56.840 --> 00:03:59.539
Then it writes a highly detailed step-by-step

00:03:59.539 --> 00:04:02.340
implementation plan. The documentation explains

00:04:02.340 --> 00:04:05.800
this core goal in a very direct way. The plan

00:04:05.930 --> 00:04:08.289
must be incredibly detailed and logically robust.

00:04:08.590 --> 00:04:11.449
Even an inexperienced human engineer could follow

00:04:11.449 --> 00:04:13.449
it correctly. It essentially removes all the

00:04:13.449 --> 00:04:16.269
dangerous, hallucinatory guesswork. It enforces

00:04:16.269 --> 00:04:19.149
a strict test-driven development cycle on the

00:04:19.149 --> 00:04:21.819
model. Right. Writing automated tests before

00:04:21.819 --> 00:04:24.560
writing the actual software code. Exactly. Humans

00:04:24.560 --> 00:04:26.480
honestly hate doing test-driven development

00:04:26.480 --> 00:04:28.939
because it feels slow. Oh, absolutely. But an

00:04:28.939 --> 00:04:31.439
AI doesn't have the capacity to feel bored. It

00:04:31.439 --> 00:04:33.939
will happily write a failing test and then fix

00:04:33.939 --> 00:04:36.399
it. And then it explicitly requests a thorough

00:04:36.399 --> 00:04:39.379
code review. It forces the branch to finish properly

00:04:39.379 --> 00:04:41.779
and cleanly. Right. It prevents the model from

00:04:41.779 --> 00:04:44.519
just abandoning a half-finished thought. I get

00:04:44.519 --> 00:04:47.319
that slowing it down sounds good in theory, but...

00:04:47.680 --> 00:04:49.579
aren't we just stripping away its main speed

00:04:49.579 --> 00:04:53.060
advantage? Why is slowing the AI down actually

00:04:53.060 --> 00:04:56.500
an upgrade here? Because raw speed often introduces

00:04:56.500 --> 00:04:59.379
hidden vulnerabilities and sloppy logical shortcuts.

00:04:59.579 --> 00:05:02.459
Oh, I see. Slowing down forces the AI to formally

00:05:02.459 --> 00:05:05.540
verify its own logic first. It makes the final

00:05:05.540 --> 00:05:07.920
output actually trustworthy and structurally

00:05:07.920 --> 00:05:10.079
maintainable. Got it. Forcing a pause prevents

00:05:10.079 --> 00:05:13.060
sloppy, hard-to-maintain code later on.
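The test-first rhythm discussed here can be sketched with Python's built-in unittest. The `slugify` function is an invented example; the point is only the ordering — the test exists and fails before the implementation does:

```python
# Minimal red-green cycle: write the failing test first, then the code.
# `slugify` is a made-up example function, not from any of the projects.
import unittest

# Step 2: the implementation, written only after the test below existed.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Step 1: written first; it fails (red) until slugify() is implemented,
# then passes (green), proving the behavior rather than assuming it.
class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Run with: python -m unittest <this_file>
```

An agent following this loop cannot claim a feature is done until the test it wrote in step 1 actually passes.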

00:05:13.600 --> 00:05:16.579
So, we have highly disciplined coding mechanics

00:05:16.579 --> 00:05:19.560
established now. The model is writing clean,

00:05:19.819 --> 00:05:22.889
tested, and verifiable software code. But perfectly

00:05:22.889 --> 00:05:25.870
written code for a terrible idea is still terrible.

00:05:26.149 --> 00:05:29.089
How do we evaluate the actual ideas being coded?

00:05:29.370 --> 00:05:31.670
Well, we have to give the AI different hats to

00:05:31.670 --> 00:05:33.730
wear. And this is where G-Stack enters the conversation

00:05:33.730 --> 00:05:36.410
perfectly. It addresses the exact blind spot

00:05:36.410 --> 00:05:38.350
you just mentioned. Yeah, it was created by Y

00:05:38.350 --> 00:05:41.180
Combinator president Garry Tan. G-Stack gives the

00:05:41.180 --> 00:05:44.399
AI a roster of highly specific roles. You are

00:05:44.399 --> 00:05:47.420
building a one-person startup team, essentially.

00:05:47.680 --> 00:05:50.480
Your AI gets a bunch of different highly specialized

00:05:50.480 --> 00:05:53.839
job titles. We're talking CEO, engineering manager,

00:05:54.100 --> 00:05:57.120
and lead designer. It also acts as a QA lead

00:05:57.120 --> 00:05:59.660
and security officer. It follows a very strict

00:05:59.660 --> 00:06:03.120
and methodical sprint flow. Think, plan, build,

00:06:03.399 --> 00:06:07.240
review, test, ship, and finally reflect. What's

00:06:07.240 --> 00:06:09.180
fascinating here is the underlying mathematics

00:06:09.180 --> 00:06:12.250
of the prompt. When you assign a specific persona,

00:06:12.490 --> 00:06:15.389
you shift token probabilities. A general agent

00:06:15.389 --> 00:06:18.209
gives loose, generalized, and very broad responses.
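The role mechanism boils down to wrapping the same task in different system prompts. A minimal sketch, assuming a standard messages-list format; the persona wording and task are invented, and no API call is made:

```python
# Sketch of role-based prompting: one task, different system personas.
# Persona text and task are illustrative assumptions; the point is that
# the system message steers which tokens the model makes likely.

PERSONAS = {
    "security_officer": (
        "You are a security officer. Review code strictly for "
        "injection flaws and unsafe input handling."
    ),
    "designer": (
        "You are a lead designer. Review strictly for user "
        "experience and interaction flow."
    ),
}

def build_messages(role, task):
    """Wrap a single task in a role-specific system prompt."""
    return [
        {"role": "system", "content": PERSONAS[role]},
        {"role": "user", "content": task},
    ]

task = "Review this login form handler."
for role in PERSONAS:
    print(role, "->", build_messages(role, task)[0]["content"][:40])
```

The user message is identical each time; only the system persona changes, which is what shifts the probability distribution over responses.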

00:06:18.430 --> 00:06:21.209
A role-based setup forces highly focused, specific

00:06:21.209 --> 00:06:24.029
domain decisions. Because each role looks at

00:06:24.029 --> 00:06:25.889
the exact same problem from a different angle.

00:06:26.149 --> 00:06:28.430
The security officer agent strictly looks for

00:06:28.430 --> 00:06:31.430
potential injection flaws. The designer agent

00:06:31.430 --> 00:06:33.790
only cares about user experience and interaction

00:06:33.790 --> 00:06:37.310
flow. It stops the AI from rushing into an immediate

00:06:37.310 --> 00:06:40.600
generic implementation. It forces it to pause

00:06:40.600 --> 00:06:43.240
and evaluate from multiple perspectives. Inside

00:06:43.240 --> 00:06:46.100
Claude Code, you access this through simple slash

00:06:46.100 --> 00:06:49.120
commands. You just type slash G-Stack or slash

00:06:49.120 --> 00:06:52.279
office hours to trigger the roles. It guides

00:06:52.279 --> 00:06:55.120
the AI through deeply structured analytical steps.
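Slash-command dispatch is, at its core, a lookup from a command string to a structured workflow prompt. A generic sketch, not Claude Code's internals; the command strings and workflow text approximate what the hosts describe:

```python
# Generic sketch of slash-command dispatch: a typed command selects a
# canned, structured workflow prompt. Command names and prompt text are
# approximations, not the tool's actual registry.

WORKFLOWS = {
    "/gstack": (
        "Run the full sprint flow: think, plan, build, "
        "review, test, ship, reflect."
    ),
    "/office-hours": "Act as an advisor panel and critique the current plan.",
}

def dispatch(command):
    """Map a slash command to its workflow prompt, or flag it as unknown."""
    return WORKFLOWS.get(command, f"Unknown command: {command}")

print(dispatch("/gstack"))
```

The value of the table is that the AI never improvises its process: typing the command always triggers the same structured sequence of analytical steps.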

00:06:55.300 --> 00:06:57.879
It prevents the model from trying to be everything

00:06:57.879 --> 00:07:00.529
at once. I have to offer a massive warning right

00:07:00.529 --> 00:07:02.790
here, though. Do not ask it to build an entire

00:07:02.790 --> 00:07:05.430
app on day one. Well, definitely not. It is exactly

00:07:05.430 --> 00:07:07.910
like hiring a brand new project manager. You

00:07:07.910 --> 00:07:09.949
cannot demand a shipped product on their first

00:07:09.949 --> 00:07:13.290
afternoon. That usually leads to incredibly messy

00:07:13.290 --> 00:07:16.310
and logically broken results. Yeah. The context

00:07:16.310 --> 00:07:18.509
window gets overwhelmed by too many competing

00:07:18.509 --> 00:07:22.029
priorities. So start small. Evaluate a tiny market

00:07:22.029 --> 00:07:24.470
opportunity or reframe a basic product idea.

00:07:24.709 --> 00:07:27.410
Get a design review or run a simple debugging

00:07:27.410 --> 00:07:30.759
workflow. Does assigning a fake CEO title actually

00:07:30.759 --> 00:07:33.680
change the underlying code the AI writes? Yes,

00:07:33.800 --> 00:07:36.160
because it radically changes the mathematical

00:07:36.160 --> 00:07:39.759
context of the prompt. Okay. The AI evaluates

00:07:39.759 --> 00:07:42.180
the coding problem through a completely different

00:07:42.180 --> 00:07:45.459
semantic lens. It prioritizes different logical

00:07:45.459 --> 00:07:48.939
metrics based on that specific assigned persona.

00:07:49.240 --> 00:07:52.199
Makes sense. Different roles force the AI to

00:07:52.199 --> 00:07:55.100
catch its own blind spots. We have disciplined

00:07:55.100 --> 00:07:57.709
coding. And we have structured, specialized roles.

00:07:57.949 --> 00:08:00.790
But there is still a massive missing piece to

00:08:00.790 --> 00:08:03.329
this puzzle. Right. We

00:08:03.329 --> 00:08:04.850
need to talk about what happens when you close

00:08:04.850 --> 00:08:07.430
your laptop. Usually the AI just forgets everything

00:08:07.430 --> 00:08:09.730
you just did together. The context window wipes

00:08:09.730 --> 00:08:12.189
clean and you start from absolute zero. Let's

00:08:12.189 --> 00:08:14.529
fix that. This brings us to a project called

00:08:14.529 --> 00:08:17.170
Hermes Agent. It was built by the brilliant team

00:08:17.170 --> 00:08:19.730
over at Nous Research. Hermes is a completely

00:08:19.730 --> 00:08:23.170
self-improving AI agent framework. Most AI tools

00:08:23.170 --> 00:08:25.500
start totally fresh every single time you

00:08:25.500 --> 00:08:28.040
open them. Hermes is trying to be the exact opposite

00:08:28.040 --> 00:08:30.220
of that temporary paradigm. It creates specific

00:08:30.220 --> 00:08:32.860
skills from its past interactions with you. Yeah,

00:08:32.919 --> 00:08:34.519
it searches through all your past conversations

00:08:34.519 --> 00:08:37.480
seamlessly using vector databases. Which builds

00:08:37.480 --> 00:08:40.299
a much better long-term picture of how you actually

00:08:40.299 --> 00:08:43.679
work. It features a truly unified messaging gateway

00:08:43.679 --> 00:08:45.940
as well. You can connect it directly to Telegram

00:08:45.940 --> 00:08:50.019
or your company Slack. You can use Discord, WhatsApp,

00:08:50.240 --> 00:08:52.929
Signal, or the standard command line, so you do

00:08:52.929 --> 00:08:55.389
not have to constantly switch between five different

00:08:55.389 --> 00:08:58.330
browser windows. You interact with the exact same

00:08:58.330 --> 00:09:01.759
intelligent agent everywhere you go. Oh, imagine

00:09:01.759 --> 00:09:04.279
it scaling to remember your specific workflow

00:09:04.279 --> 00:09:07.500
across every single app you use. It changes the

00:09:07.500 --> 00:09:09.240
entire fundamental relationship you have with

00:09:09.240 --> 00:09:12.039
the machine. Yeah, you are no longer asking highly

00:09:12.039 --> 00:09:15.019
isolated, randomly generated questions. You are

00:09:15.019 --> 00:09:17.480
developing a persistent system that organically

00:09:17.480 --> 00:09:19.779
evolves alongside you. But we need to talk about

00:09:19.779 --> 00:09:22.299
the actual onboarding reality here. We have to

00:09:22.299 --> 00:09:24.759
issue a quick, very serious security warning first.

00:09:24.759 --> 00:09:27.240
Definitely. It is always worth reviewing open

00:09:27.240 --> 00:09:29.860
source installation scripts very carefully. You must do

00:09:29.860 --> 00:09:31.940
this before piping them directly into your local

00:09:31.940 --> 00:09:34.200
shell. Taking a moment to inspect the script

00:09:34.200 --> 00:09:37.340
reduces unnecessary local risk. Because you are

00:09:37.340 --> 00:09:39.860
giving an autonomous agent access to your file

00:09:39.860 --> 00:09:42.480
system? And you really need to be incredibly

00:09:42.480 --> 00:09:46.000
patient with this specific tool. Do not expect

00:09:46.000 --> 00:09:49.299
magically strong personalized results on the

00:09:49.299 --> 00:09:52.399
very first day. It behaves way more like onboarding

00:09:52.399 --> 00:09:54.720
a real human teammate. You have to correct it

00:09:54.720 --> 00:09:56.860
when it makes a logical mistake. It is absolutely

00:09:56.860 --> 00:09:58.840
not an instant feature toggle you just casually

00:09:58.840 --> 00:10:01.039
flip. People always expect these memory tools

00:10:01.039 --> 00:10:04.200
to be instantly telepathic. How long does it

00:10:04.200 --> 00:10:06.919
realistically take for this self-improving loop

00:10:06.919 --> 00:10:10.059
to actually feel useful? It usually takes several

00:10:10.059 --> 00:10:13.299
weeks of consistent daily interaction to calibrate.

00:10:13.320 --> 00:10:16.179
Wow, weeks. Yeah. The system needs enough varied

00:10:16.179 --> 00:10:18.700
data to understand your specific workflow patterns.
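The vector-database recall described earlier comes down to scoring past snippets against a new query and surfacing the best match. A dependency-free sketch, with simple word overlap standing in for real embedding similarity; Hermes' actual storage layer will differ:

```python
# Toy memory recall: score stored conversation snippets against a new
# query. Real systems embed text as vectors and use cosine similarity
# in a vector database; shared-word counting stands in for that here.

def score(query, memory):
    """Count lowercase words shared by the query and a stored memory."""
    q, m = set(query.lower().split()), set(memory.lower().split())
    return len(q & m)

def recall(query, memories):
    """Return the stored memory most similar to the query."""
    return max(memories, key=lambda m: score(query, m))

memories = [
    "User deploys with Docker and prefers tabs over spaces",
    "User's weekly report is due every Friday morning",
    "User is learning Rust on weekends",
]
print(recall("draft my Friday report", memories))
```

This is also why calibration takes weeks rather than days: with only two snippets stored, nearly every query retrieves something irrelevant.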

00:10:18.879 --> 00:10:21.899
It cannot learn your deep, nuanced habits from

00:10:21.899 --> 00:10:24.759
just two short conversations. So it's true onboarding.

00:10:24.759 --> 00:10:27.259
It builds rich context through real use over

00:10:27.259 --> 00:10:30.200
time. So we have disciplined role-playing

00:10:30.200 --> 00:10:32.740
agents equipped with persistent cross-platform

00:10:32.740 --> 00:10:35.179
memory. Let's put them all in one digital building

00:10:35.179 --> 00:10:37.100
and track the budget. We're talking about Paperclip

00:10:37.100 --> 00:10:39.059
now. Right. It is easily the most experimental

00:10:39.059 --> 00:10:41.559
and fragile project on this list. It orchestrates

00:10:41.559 --> 00:10:44.080
multiple specialized agents as if they were a

00:10:44.080 --> 00:10:51.179
real company. Yeah. It gives you an actual visual dashboard

00:10:51.179 --> 00:10:56.399
for everything. Exactly. Inside this interface,

00:10:56.659 --> 00:10:59.899
agents take on major corporate roles. You have

00:10:59.899 --> 00:11:04.179
a CEO, a CMO, and a CTO actively working together.

00:11:04.179 --> 00:11:06.879
It includes a full organizational chart and a

00:11:06.879 --> 00:11:09.360
functional ticketing system. It has strict governance

00:11:09.360 --> 00:11:12.399
controls and granular financial budget tracking.

00:11:12.399 --> 00:11:16.139
You can see exactly how much each specific generative

00:11:16.139 --> 00:11:19.259
task actually costs, which is crucial. Here's where

00:11:19.259 --> 00:11:21.700
it gets really interesting. I look at all this

00:11:21.700 --> 00:11:24.539
corporate scaffolding and I have to ask: Is this

00:11:24.539 --> 00:11:27.559
just corporate process theater applied to AI?

00:11:27.899 --> 00:11:30.980
That is a very fair and deeply necessary critical

00:11:30.980 --> 00:11:33.240
question. I have to give the honest warning straight

00:11:33.240 --> 00:11:35.539
from the source text. Okay. This is absolutely

00:11:35.539 --> 00:11:37.960
not a magic money machine at all. You will not

00:11:37.960 --> 00:11:40.179
wake up to sudden unexpected surprise profits.

00:11:40.639 --> 00:11:43.960
The project is still incredibly rough and highly

00:11:43.960 --> 00:11:46.799
experimental software. It is meant for those

00:11:46.799 --> 00:11:50.000
genuinely curious about multi-agent orchestration

00:11:50.000 --> 00:11:53.330
at scale. Its true purpose is high-level coordination,

00:11:53.669 --> 00:11:55.929
not just blind, reckless automation. Because

00:11:55.929 --> 00:11:59.509
as soon as you use multiple agents, massive structural

00:11:59.509 --> 00:12:02.549
problems appear, responsibilities blur quickly,

00:12:02.870 --> 00:12:05.809
and agents get stuck in infinite conversational

00:12:05.809 --> 00:12:08.269
loops. And when they get stuck in loops, API

00:12:08.269 --> 00:12:11.210
costs quietly skyrocket. Oh, they will happily...

00:12:11.690 --> 00:12:13.950
burn through your OpenAI credits arguing with

00:12:13.950 --> 00:12:16.169
each other. Paperclip attempts to bring visible

00:12:16.169 --> 00:12:18.909
structure to that massive underlying complexity.

00:12:19.409 --> 00:12:22.009
It places everything in one single, unified,

00:12:22.110 --> 00:12:25.110
and highly trackable interface. Why do we even

00:12:25.110 --> 00:12:27.809
need a whole dashboard just to manage AI agents?

00:12:27.850 --> 00:12:30.110
Because multiple agents working simultaneously

00:12:30.110 --> 00:12:33.509
create massive, unreadable chaos very quickly.

00:12:33.669 --> 00:12:36.320
Right. Goals drift rapidly without central oversight

00:12:36.320 --> 00:12:39.500
and very clear financial boundaries. A dashboard

00:12:39.500 --> 00:12:42.320
makes the invisible chaotic work of agents visible

00:12:42.320 --> 00:12:45.240
and trackable. Exactly. It brings visible coordination

00:12:45.240 --> 00:12:49.399
and budget tracking to multi-agent chaos.
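The budget tracking just described can be reduced to a per-agent spend ledger with a hard ceiling. A sketch of the idea only; the agent names, token counts, and per-token rate are all invented, not Paperclip's real numbers:

```python
# Toy per-agent spend ledger: the core of a multi-agent budget view.
# The price constant is a placeholder, not any real provider's rate.
PRICE_PER_1K_TOKENS = 0.01  # hypothetical dollars per 1,000 tokens

class Ledger:
    def __init__(self, budget):
        self.budget = budget
        self.spend = {}  # agent name -> dollars spent so far

    def record(self, agent, tokens):
        """Charge one generative task to the agent that ran it."""
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS
        self.spend[agent] = self.spend.get(agent, 0.0) + cost

    def total(self):
        return sum(self.spend.values())

    def over_budget(self):
        # The signal a dashboard would turn into a hard stop.
        return self.total() > self.budget

ledger = Ledger(budget=5.00)
ledger.record("CEO", 120_000)  # two agents stuck arguing in a loop
ledger.record("CTO", 480_000)
print(round(ledger.total(), 2), ledger.over_budget())
```

Without the `over_budget` check there is nothing to interrupt two looping agents, which is exactly how credits quietly burn down overnight.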

00:12:49.559 --> 00:12:51.620
We should probably synthesize the ultimate pattern

00:12:51.620 --> 00:12:53.879
behind all these different tools. Let us step

00:12:53.879 --> 00:12:57.179
back and look at the actual big picture. We just

00:12:57.179 --> 00:12:59.879
covered four radically different open source

00:12:59.879 --> 00:13:02.500
software projects today. But they all share the

00:13:02.500 --> 00:13:05.240
exact same underlying philosophical approach.

00:13:05.419 --> 00:13:08.580
Every single one makes the exact same major technological

00:13:08.580 --> 00:13:11.519
bet. The future of AI is not a race for the smartest

00:13:11.519 --> 00:13:14.200
individual model. Right. It is about building

00:13:14.200 --> 00:13:16.820
a much better structure around how models work.

00:13:17.179 --> 00:13:19.879
G -Stack gives your general models incredibly

00:13:19.879 --> 00:13:23.039
clear, structurally defined roles. Hermes gives

00:13:23.039 --> 00:13:25.539
them deep, persistent, and highly accessible

00:13:25.539 --> 00:13:28.460
cross-platform memory. Superpowers gives them

00:13:28.460 --> 00:13:30.899
reliable, highly disciplined software engineering

00:13:30.899 --> 00:13:33.379
process. And Paperclip gives them a massive,

00:13:33.460 --> 00:13:35.740
visible organizational management structure.

00:13:36.059 --> 00:13:38.419
The industry focus is shifting away from raw,

00:13:38.600 --> 00:13:41.179
generalized benchmark intelligence. It is moving

00:13:41.179 --> 00:13:43.700
toward consistent, highly predictable behavior

00:13:43.700 --> 00:13:46.600
in real -world use. That shift might feel slightly

00:13:46.600 --> 00:13:48.879
less dramatic than a viral benchmark screenshot,

00:13:49.240 --> 00:13:52.159
but it is way more important for anyone actually

00:13:52.159 --> 00:13:54.080
building real things. We should probably help

00:13:54.080 --> 00:13:55.779
people figure out where to actually start. I

00:13:55.779 --> 00:13:57.700
want to explicitly lay out the recommended order

00:13:57.700 --> 00:13:59.940
of operations here. Definitely. I do not want

00:13:59.940 --> 00:14:02.000
you to lose your entire weekend to confusing

00:14:02.000 --> 00:14:05.700
documentation. Do not try to install all four

00:14:05.700 --> 00:14:08.580
on a Saturday afternoon. That is a truly great

00:14:08.580 --> 00:14:11.480
way to end up with absolutely nothing useful.

00:14:11.799 --> 00:14:13.740
First, you need to start with the Superpowers

00:14:13.740 --> 00:14:16.419
framework. Do this if you already use Claude

00:14:16.419 --> 00:14:19.419
Code on a daily basis. It is the absolute fastest

00:14:19.419 --> 00:14:22.720
way to see immediate practical coding value.

00:14:23.039 --> 00:14:25.120
The improvement to your daily development process

00:14:25.120 --> 00:14:28.379
is stark and visible right away. Second, you

00:14:28.379 --> 00:14:30.720
should try implementing the G-Stack role system.

00:14:31.100 --> 00:14:33.779
Add those structured, specialized roles to your

00:14:33.779 --> 00:14:36.820
newly disciplined coding process. It pairs naturally

00:14:36.820 --> 00:14:39.220
with Superpowers once you understand the basic

00:14:39.220 --> 00:14:42.139
operational layers. Third, move on to the Hermes

00:14:42.139 --> 00:14:44.419
agent persistent framework. That gives you the

00:14:44.419 --> 00:14:46.639
powerful cross-platform memory you really need.

00:14:46.799 --> 00:14:48.899
It is absolutely perfect if you want an agent

00:14:48.899 --> 00:14:51.080
that runs scheduled background tasks. Finally,

00:14:51.159 --> 00:14:53.019
you can carefully explore the Paperclip dashboard.

00:14:53.419 --> 00:14:56.120
Only do this if you are genuinely ready for experimental

00:14:56.120 --> 00:14:59.700
multi-agent orchestration. It is the most powerful

00:14:59.700 --> 00:15:02.220
conceptual framework, but definitely the roughest

00:15:02.220 --> 00:15:04.519
tool. Yeah, save it for last. Following this

00:15:04.519 --> 00:15:06.980
specific order helps you see practical value

00:15:06.980 --> 00:15:10.139
very quickly. You gradually understand exactly

00:15:10.139 --> 00:15:13.100
how each structural layer contributes to the

00:15:13.100 --> 00:15:15.419
whole. The future probably won't look like one

00:15:15.419 --> 00:15:18.179
singular genius chat assistant. No, it will look

00:15:18.179 --> 00:15:21.120
exactly like a small orchestra of highly specialized

00:15:21.120 --> 00:15:24.320
systems. Each has a very clear role and a rigidly

00:15:24.320 --> 00:15:26.659
defined process. They will smoothly hand off

00:15:26.659 --> 00:15:29.460
complex work to the next digital player. The

00:15:29.460 --> 00:15:31.940
massive operational advantage goes to those who

00:15:31.940 --> 00:15:34.919
design these workflows early. If AI is rapidly

00:15:34.919 --> 00:15:37.080
becoming the orchestra of the modern workforce,

00:15:37.480 --> 00:15:39.740
what skills do you need to start developing today

00:15:39.740 --> 00:15:42.399
to become the conductor? That is a perfect, profound

00:15:42.399 --> 00:15:44.539
question to leave them with today. Thanks for

00:15:44.539 --> 00:15:47.200
joining us on this deep dive. Stay curious.
