WEBVTT

00:00:00.000 --> 00:00:01.580
You know, if you've ever tried to build a RAG

00:00:01.580 --> 00:00:04.780
agent for like a real world application, you

00:00:04.780 --> 00:00:07.580
know the pain. It's just this massive technical

00:00:07.580 --> 00:00:10.500
headache that can stop a project cold. Oh, it's

00:00:10.500 --> 00:00:12.599
brutal. The infrastructure alone. You're trying

00:00:12.599 --> 00:00:14.900
to figure out document chunking, getting the

00:00:14.900 --> 00:00:17.039
right vector embeddings, and then you have to

00:00:17.039 --> 00:00:19.339
stand up and manage a whole vector database.

00:00:20.280 --> 00:00:22.760
Pinecone, Milvus, whatever. It's a full time

00:00:22.760 --> 00:00:25.379
engineering job. Exactly. And the costs just

00:00:25.379 --> 00:00:27.300
start climbing right away. Yeah. But what if

00:00:27.300 --> 00:00:30.079
there was a way to just... Skip 90% of that.

00:00:30.160 --> 00:00:31.719
Well, that's what we're talking about. Here's

00:00:31.719 --> 00:00:33.719
the headline that should make you just stop.

00:00:33.920 --> 00:00:37.780
We're talking about indexing a 121-page PDF, a

00:00:37.780 --> 00:00:40.799
huge knowledge base, for less than two cents.

00:00:41.780 --> 00:00:44.420
Welcome back to the Deep Dive. Today, our whole

00:00:44.420 --> 00:00:47.280
mission is to unpack Google's new Gemini File

00:00:47.280 --> 00:00:50.259
Search API because it looks like it completely

00:00:50.259 --> 00:00:52.560
automates the hardest parts of retrieval augmented

00:00:52.560 --> 00:00:55.310
generation. And you can do it with a simple

00:00:55.310 --> 00:00:57.590
no-code tool. Yeah. And let's just make sure we're

00:00:57.590 --> 00:01:00.189
on the same page with RAG. Retrieval-augmented

00:01:00.189 --> 00:01:02.750
generation just means using your own documents

00:01:02.750 --> 00:01:05.090
to ground an LLM to make sure its answers are

00:01:05.090 --> 00:01:08.310
based on some kind of truth, not just its training

00:01:08.310 --> 00:01:10.670
data. That grounding is absolutely everything.

00:01:10.909 --> 00:01:13.450
So here's what we're going to do. We'll walk

00:01:13.450 --> 00:01:15.950
through the insane cost savings. Yeah. Then the

00:01:15.950 --> 00:01:19.109
super simple four-step workflow. And then we

00:01:19.109 --> 00:01:21.790
have to talk about the limitations. Because,

00:01:21.810 --> 00:01:23.709
you know, if it sounds this good, there's got

00:01:23.709 --> 00:01:25.829
to be some catches. Okay, let's get into it.

00:01:25.890 --> 00:01:28.209
Let's see just how simple this really is. So

00:01:28.209 --> 00:01:31.090
the traditional way of doing RAG, it really

00:01:31.090 --> 00:01:33.569
was a gauntlet. It wasn't just, you know, pointing

00:01:33.569 --> 00:01:36.849
an LLM at a file. You had like... A dozen different

00:01:36.849 --> 00:01:39.370
things you had to build and then maintain. Absolutely.

00:01:39.590 --> 00:01:41.390
You had to worry about all the different file

00:01:41.390 --> 00:01:44.329
types, how to ingest them, adding metadata, and

00:01:44.329 --> 00:01:47.170
then the chunking. Oh, the chunking. Recursive

00:01:47.170 --> 00:01:49.530
character splitting, running every single one

00:01:49.530 --> 00:01:51.010
of those little chunks through an embeddings

00:01:51.010 --> 00:01:53.230
model. And only then, after all that, could you

00:01:53.230 --> 00:01:56.090
even put it in the database. Right. It was so

00:01:56.090 --> 00:01:58.989
intense. That's just the definition of high friction.

00:02:00.010 --> 00:02:03.090
So how does this Gemini file search solution...

00:02:03.290 --> 00:02:05.569
get around all that. What's Google actually doing

00:02:05.569 --> 00:02:07.769
under the hood? It just simplifies the whole

00:02:07.769 --> 00:02:09.449
thing from the developer side. You just upload

00:02:09.449 --> 00:02:11.469
the file. That's it. Google takes care of the

00:02:11.469 --> 00:02:13.789
chunking. They generate their own embeddings

00:02:13.789 --> 00:02:16.789
and they handle the storage. That entire pipeline

00:02:16.789 --> 00:02:20.050
is managed for you. So the big idea is you don't

00:02:20.050 --> 00:02:22.069
have to build your own search system. You don't

00:02:22.069 --> 00:02:24.590
have to set up a vector database or worry about

00:02:24.590 --> 00:02:27.289
keeping things in sync. You're just using their

00:02:27.289 --> 00:02:29.580
pipeline. Exactly. And what's really interesting

00:02:29.580 --> 00:02:32.800
is that their chunking is probably way better

00:02:32.800 --> 00:02:35.120
than what most of us would build. It's not just

00:02:35.120 --> 00:02:38.539
split every 500 characters. It understands the

00:02:38.539 --> 00:02:41.240
document's flow and structure. That's a great

00:02:41.240 --> 00:02:43.659
point. And just to be clear for everyone, when

00:02:43.659 --> 00:02:45.500
we say embeddings, we're just talking about turning

00:02:45.500 --> 00:02:48.439
words into numbers, right? Into a mathematical

00:02:48.439 --> 00:02:50.460
format so a computer can search them incredibly

00:02:50.460 --> 00:02:52.759
fast. That's it. It's just turning language into

00:02:52.759 --> 00:02:54.819
math for quick comparisons. So if you had to
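That "turning language into math" idea can be sketched in a few lines of toy Python. The vectors below are invented purely for illustration; a real embedding model produces hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Directional similarity between two vectors, from -1.0 to 1.0."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Pretend embeddings for two document chunks (made-up numbers).
chunks = {
    "rules for a broken club": [0.9, 0.1, 0.0],
    "data center revenue": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "what if my club breaks?"

# "Search" is just finding the chunk whose vector points the same way.
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
print(best)  # the broken-club chunk wins
```

That comparison is the whole trick: once text is a vector, nearest-neighbor math replaces keyword matching.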

00:02:54.819 --> 00:02:57.919
pick just one thing. What's the single biggest

00:02:57.919 --> 00:03:01.500
piece of complexity that this new Gemini method

00:03:01.500 --> 00:03:04.740
just gets rid of? It's that whole chain of manual

00:03:04.740 --> 00:03:06.939
document splitting, generating the embeddings,

00:03:07.020 --> 00:03:09.060
and then managing all the separate search infrastructure

00:03:09.060 --> 00:03:11.120
to glue it all together. Okay, let's talk about

00:03:11.120 --> 00:03:12.740
the money, because this is where the story gets

00:03:12.740 --> 00:03:16.280
kind of wild. You said you could index a huge

00:03:16.280 --> 00:03:19.080
document for pennies. How does that pricing actually

00:03:19.080 --> 00:03:21.639
work? The main thing is that you really only

00:03:21.639 --> 00:03:23.699
get charged for that first step, the upload,

00:03:23.879 --> 00:03:26.849
the indexing, and the cost is just... Tiny. It's

00:03:26.849 --> 00:03:30.189
15 cents per 1 million tokens. Let's put that

00:03:30.189 --> 00:03:33.069
in real terms again. That 121-page PDF was,

00:03:33.210 --> 00:03:36.449
what, about 95,000 tokens? Yeah. So the math

00:03:36.449 --> 00:03:38.789
on that really is less than 2 cents. It costs

00:03:38.789 --> 00:03:41.009
basically nothing to load your knowledge. It's
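The figures quoted here (15 cents per million tokens, roughly 95,000 tokens for the PDF) make that "less than 2 cents" easy to verify:

```python
# Back-of-the-envelope check of the one-time indexing cost quoted above.
PRICE_PER_MILLION_TOKENS = 0.15  # dollars, charged once at indexing time
document_tokens = 95_000         # roughly the 121-page PDF

cost = document_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"${cost:.4f}")  # a little over a cent -- "basically nothing"
```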

00:03:41.009 --> 00:03:43.629
incredibly cheap to get the data in. And here's

00:03:43.629 --> 00:03:45.930
maybe the biggest deal right now. Storage is

00:03:45.930 --> 00:03:49.770
free. Totally free. You are not paying by the

00:03:49.770 --> 00:03:52.469
gigabyte for all those vectors to just sit there

00:03:52.469 --> 00:03:54.810
on Google servers. Okay, but what about querying?

00:03:55.469 --> 00:03:57.530
I have this agent running all day. Am I going

00:03:57.530 --> 00:04:00.270
to get slammed with retrieval fees? No, not for

00:04:00.270 --> 00:04:02.969
the retrieval itself. You pay the normal rate

00:04:02.969 --> 00:04:06.610
for using the LLM, you know, for Gemini 2.5

00:04:06.610 --> 00:04:08.969
Flash to generate the answer. But the actual

00:04:08.969 --> 00:04:11.469
cost of pulling the data from your store is,

00:04:11.550 --> 00:04:14.389
for now, absorbed. I saw the cost comparison

00:04:14.389 --> 00:04:17.000
table. Yeah. And it's a little bit shocking.

00:04:17.120 --> 00:04:19.660
Right. Let's say you have 100 gigs of data and

00:04:19.660 --> 00:04:21.959
you run a million queries in a month. Right.

00:04:22.040 --> 00:04:24.740
For a traditional setup with something like Pinecone

00:04:24.740 --> 00:04:26.740
plus all the compute you'd need, you're looking

00:04:26.740 --> 00:04:28.639
at hundreds, maybe even thousands of dollars

00:04:28.639 --> 00:04:32.100
a month easily. Whoa. Yeah. And with Gemini File

00:04:32.100 --> 00:04:34.680
Search, that whole first month, including the

00:04:34.680 --> 00:04:37.540
one-time indexing fee for all that data, is

00:04:37.540 --> 00:04:41.100
about $47. Wait, $47? Yeah. For that kind of

00:04:41.100 --> 00:04:43.279
volume? Yeah. That's the moment of wonder right

00:04:43.279 --> 00:04:45.060
there. I mean, imagine. Imagine what you could

00:04:45.060 --> 00:04:47.120
build, what you could experiment with if your

00:04:47.120 --> 00:04:49.339
entire knowledge-base infrastructure costs

00:04:49.339 --> 00:04:52.379
less than a pizza. It just opens up powerful RAG

00:04:52.379 --> 00:04:54.980
to everyone. It's a huge democratization. It's

00:04:54.980 --> 00:04:57.699
moving away from spending big capital on infrastructure

00:04:57.699 --> 00:05:00.879
to just a simple operational cost. So beyond

00:05:00.879 --> 00:05:03.199
that initial tiny fee, what's the main takeaway

00:05:03.199 --> 00:05:05.680
on cost for someone just starting out? The fact

00:05:05.680 --> 00:05:08.290
that storage is currently free. That removes

00:05:08.290 --> 00:05:10.870
the single biggest recurring cost that you always

00:05:10.870 --> 00:05:13.990
have with traditional vector databases. Okay, so

00:05:13.990 --> 00:05:16.189
the money part is a no-brainer. Let's get practical.

00:05:16.189 --> 00:05:19.370
Let's talk about building this thing in n8n, which

00:05:19.370 --> 00:05:22.889
is basically a tool for visualizing API calls.

00:05:22.889 --> 00:05:25.709
Right. And you only need four of them. Four simple

00:05:25.709 --> 00:05:29.149
HTTP Request nodes. It's like stacking Lego blocks.

00:05:29.149 --> 00:05:31.629
It's really that linear. Okay, walk us through

00:05:31.629 --> 00:05:33.930
them. What are those four steps doing? Step one

00:05:33.930 --> 00:05:37.699
is create store. Think of this as just making

00:05:37.699 --> 00:05:41.240
a folder. You're creating a permanent named index

00:05:41.240 --> 00:05:43.519
on Google's side where your documents are going

00:05:43.519 --> 00:05:45.180
to live. Got it. Then you have to get the actual

00:05:45.180 --> 00:05:48.259
file up there. Exactly. Step two is upload file.

00:05:48.779 --> 00:05:51.379
But this is just a temporary step. The file is

00:05:51.379 --> 00:05:53.519
in the Google Cloud environment, but it's not

00:05:53.519 --> 00:05:55.540
connected to your store yet. It's just sitting

00:05:55.540 --> 00:05:57.420
there. So you have to link the file to the folder.

00:05:57.639 --> 00:06:00.829
That is step three. Move file to store. This

00:06:00.829 --> 00:06:02.889
is the magic step. This is what actually kicks

00:06:02.889 --> 00:06:05.189
off the indexing and makes the file a permanent

00:06:05.189 --> 00:06:07.050
part of your knowledge base. And then finally,

00:06:07.129 --> 00:06:10.250
you can ask it a question. Step four, query the

00:06:10.250 --> 00:06:13.370
store. This is the request you send to the Gemini

00:06:13.370 --> 00:06:16.509
model, and you tell it, hey, use this specific

00:06:16.509 --> 00:06:18.529
knowledge base to help you answer the user's

00:06:18.529 --> 00:06:21.189
question. Okay, a quick but important detour

00:06:21.189 --> 00:06:24.850
on setup. Getting the security right. The original

00:06:24.850 --> 00:06:27.350
notes mention some confusion in Google's documentation.

00:06:28.079 --> 00:06:30.420
So what's the right way to do authentication

00:06:30.420 --> 00:06:34.379
in n8n? The best way is to not paste your API

00:06:34.379 --> 00:06:37.300
key in every single node. That's messy and insecure.

00:06:37.899 --> 00:06:40.839
Instead, you use n8n's generic credential type.

00:06:41.040 --> 00:06:43.379
Right. And you use the query auth option specifically.

00:06:43.639 --> 00:06:45.839
Correct. You just tell it the parameter is called

00:06:45.839 --> 00:06:48.920
key and you paste your Gemini API key in there

00:06:48.920 --> 00:06:51.000
once. Then all four of your nodes can just reference

00:06:51.000 --> 00:06:53.120
that saved credential. It keeps everything clean

00:06:53.120 --> 00:06:55.319
and secure. And why is getting that authentication

00:06:55.319 --> 00:06:57.540
right so important, even if you're just building

00:06:57.540 --> 00:06:59.839
a quick prototype? Because we should always be

00:06:59.839 --> 00:07:01.720
building securely from the start. It just prevents

00:07:01.720 --> 00:07:03.959
you from accidentally exposing your key in a

00:07:03.959 --> 00:07:05.579
bunch of different places. All right, let's talk

00:07:05.579 --> 00:07:07.360
about actually running this. You said step three,

00:07:07.439 --> 00:07:09.939
moving the file into the store, is the critical

00:07:09.939 --> 00:07:12.439
one. What happens if you forget to do that? The

00:07:12.439 --> 00:07:14.879
file just stays temporary, just floating out

00:07:14.879 --> 00:07:16.579
there in the cloud, and it'll get deleted after

00:07:16.579 --> 00:07:18.779
a little while. It never gets indexed, so your

00:07:18.779 --> 00:07:20.720
agent can't see it. You have to make that link

00:07:20.720 --> 00:07:23.519
to the store. Okay, so once it's linked... and

00:07:23.519 --> 00:07:26.879
we're ready to query it in step four, what does

00:07:26.879 --> 00:07:30.120
that API call look like? So we're using the Gemini

00:07:30.120 --> 00:07:32.920
2.5 Flash model, and the most important part

00:07:32.920 --> 00:07:35.660
is in the JSON you send. You have to include a

00:07:35.660 --> 00:07:38.019
search config parameter and paste in the unique

00:07:38.019 --> 00:07:41.079
store name ID that you got from step one. That's

00:07:41.079 --> 00:07:43.819
how you tell the LLM exactly where to look. So

00:07:43.819 --> 00:07:46.550
if you get that ID wrong? The agent just defaults

00:07:46.550 --> 00:07:48.509
to its general knowledge. It doesn't use your

00:07:48.509 --> 00:07:50.610
documents at all. Precisely. It's the whole key.
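Those four nodes map to four plain HTTP requests. As a rough sketch in Python: the endpoint paths and JSON field names below are illustrative guesses pieced together from the steps described here, not verified API shapes, so check Google's File Search documentation before relying on them. Nothing is sent over the network; each tuple is roughly what one n8n HTTP Request node would hold.

```python
BASE = "https://generativelanguage.googleapis.com/v1beta"
API_KEY = "YOUR_GEMINI_API_KEY"  # in n8n, saved once as a query-auth credential named "key"

def build_requests(store_display_name, uploaded_file_id, store_id, question):
    """Return (method, url, json_body) tuples for steps 1, 3, and 4.

    Step 2 (upload file) is omitted here: it sends raw file bytes, not
    JSON, and only yields the temporary `uploaded_file_id` used in step 3.
    """
    return [
        # Step 1: create store -- the permanent named "folder" for documents.
        ("POST", f"{BASE}/fileSearchStores?key={API_KEY}",
         {"displayName": store_display_name}),
        # Step 3: move file to store -- the step that actually kicks off indexing.
        ("POST", f"{BASE}/{store_id}:importFile?key={API_KEY}",
         {"fileName": uploaded_file_id}),
        # Step 4: query the store -- the search config points the model at it.
        ("POST", f"{BASE}/models/gemini-2.5-flash:generateContent?key={API_KEY}",
         {"contents": [{"parts": [{"text": question}]}],
          "tools": [{"fileSearch": {"fileSearchStoreNames": [store_id]}}]}),
    ]

steps = build_requests("golf-rules", "files/abc123", "fileSearchStores/xyz789",
                       "What happens if my club breaks mid-round?")
```

Note how the API key rides along as a query parameter on every call, which is exactly why saving it once as an n8n credential beats pasting it into each node.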

00:07:50.930 --> 00:07:52.829
And then, of course, you need good prompt engineering.

00:07:53.009 --> 00:07:55.389
We used a clear instruction. You are a helpful

00:07:55.389 --> 00:07:57.889
AI agent. Use your knowledge base tool for

00:07:57.889 --> 00:08:00.329
truth. Cite your sources. And there was that

00:08:00.329 --> 00:08:03.389
weirdly specific rule you found. No punctuation,

00:08:03.430 --> 00:08:06.930
quotation marks, or new lines. Why that? Huh,

00:08:07.110 --> 00:08:09.529
yeah, that's just a practical little hack. It's

00:08:09.529 --> 00:08:12.189
to stop the underlying API from throwing a JSON

00:08:12.189 --> 00:08:14.810
error when it tries to parse the data coming back. It just

00:08:14.810 --> 00:08:16.790
ensures the data transfer is clean. A strange

00:08:16.790 --> 00:08:19.170
quirk, but it's necessary for stability right

00:08:19.170 --> 00:08:22.670
now. So let's get to the results. You threw three

00:08:22.670 --> 00:08:25.449
really different documents at it. The official

00:08:25.449 --> 00:08:28.529
rules of golf, an NVIDIA press release, and an

00:08:28.529 --> 00:08:31.399
Apple 10K filing. How did it do? It did extremely

00:08:31.399 --> 00:08:34.100
well. For instance, I asked the golf PDF, what

00:08:34.100 --> 00:08:36.100
happens if your club breaks during the middle

00:08:36.100 --> 00:08:38.419
of the round? It came back with a perfect cited

00:08:38.419 --> 00:08:41.179
answer about how you can continue using it or

00:08:41.179 --> 00:08:43.799
have it repaired legally. And what about across

00:08:43.799 --> 00:08:46.299
multiple documents? Could it find a specific

00:08:46.299 --> 00:08:49.039
number from the NVIDIA file? Yep. Asked for the

00:08:49.039 --> 00:08:52.580
Q1 2025 fiscal summary. It correctly pulled out

00:08:52.580 --> 00:08:55.120
the $26 billion in total revenue and the $22

00:08:55.120 --> 00:08:57.639
billion in data center revenue. And it cited

00:08:57.639 --> 00:08:59.360
the press release correctly. And the overall

00:08:59.360 --> 00:09:02.220
score, after 10 really tough questions across

00:09:02.220 --> 00:09:05.879
almost 200 pages of documents, was a 4.5 out

00:09:05.879 --> 00:09:08.940
of 5 for correctness. That's amazing for a setup

00:09:08.940 --> 00:09:10.840
that took minutes. What's so impressive about

00:09:10.840 --> 00:09:13.860
that 4.5 out of 5 score is that it got there

00:09:13.860 --> 00:09:16.700
with basically zero complexity, zero fine-tuning,

00:09:16.740 --> 00:09:18.799
and zero maintenance from the person building

00:09:18.799 --> 00:09:21.100
it. So we know it's simple, we know it's cheap,

00:09:21.179 --> 00:09:23.600
we know it's accurate. But now we have to be

00:09:23.600 --> 00:09:27.080
realistic. This isn't magic. Where does it fall

00:09:27.080 --> 00:09:29.710
short? What are the limitations? Okay. Limitation

00:09:29.710 --> 00:09:32.889
number one is a big one for any real application,

00:09:33.269 --> 00:09:35.690
data management. Right now, Google doesn't have

00:09:35.690 --> 00:09:38.110
any kind of version control for the files in

00:09:38.110 --> 00:09:40.710
your store. So if I have my Q1 report in there

00:09:40.710 --> 00:09:43.649
and then I upload the Q2 report, now I just have

00:09:43.649 --> 00:09:45.970
two of them. You have two of them. And the store

00:09:45.970 --> 00:09:48.690
gets cluttered with old conflicting data. Your

00:09:48.690 --> 00:09:51.009
agent might pull from the wrong one. So the only

00:09:51.009 --> 00:09:53.909
solution right now is manual. You have to track

00:09:53.909 --> 00:09:55.750
your own versions and remember to delete the

00:09:55.750 --> 00:09:58.230
old file before you upload the new one. I appreciate
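That delete-then-upload routine is simple enough to encode so you never forget it. A minimal sketch, with the store modeled as a plain dict and the real API's delete and upload calls stubbed out:

```python
# Manual versioning workaround: before adding a new quarterly report,
# find and drop any older file with the same logical name, so the store
# never holds two conflicting versions at once.
def replace_document(store, logical_name, new_version, new_content):
    """Delete any older version of `logical_name`, then add the new one."""
    stale = [key for key in store if key[0] == logical_name]
    for key in stale:
        del store[key]                      # stand-in for the API's delete call
    store[(logical_name, new_version)] = new_content  # stand-in for upload + import

store = {("quarterly-report", "Q1"): "Q1 numbers..."}
replace_document(store, "quarterly-report", "Q2", "Q2 numbers...")
print(list(store))  # only the Q2 version remains
```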

00:09:58.230 --> 00:10:00.190
you sharing that vulnerability. It feels like

00:10:00.190 --> 00:10:02.649
no matter how advanced the tools get, we're always

00:10:02.649 --> 00:10:05.669
stuck with data hygiene problems. Oh, yeah. I

00:10:05.669 --> 00:10:08.529
still wrestle with prompt drift and messy data

00:10:08.529 --> 00:10:10.970
pipelines in my own complex projects. It's just

00:10:10.970 --> 00:10:13.110
a constant frustrating challenge in this field.

00:10:13.289 --> 00:10:15.570
And what about the quality of the documents themselves?

00:10:16.169 --> 00:10:19.559
Garbage in, garbage out. That rule absolutely

00:10:19.559 --> 00:10:23.019
still applies. Gemini has OCR, which is great,

00:10:23.220 --> 00:10:25.820
but it's not a miracle worker. If you upload

00:10:25.820 --> 00:10:28.500
a blurry, poorly formatted PDF, you're going

00:10:28.500 --> 00:10:30.600
to get bad answers. You still have to do the

00:10:30.600 --> 00:10:33.860
cleanup. Okay, and limitation number three. This

00:10:33.860 --> 00:10:35.779
one's about what it's actually good at. Right.

00:10:35.980 --> 00:10:38.799
This system is fantastic at finding a needle

00:10:38.799 --> 00:10:42.220
in a haystack. A specific fact, a number, a rule.

00:10:42.580 --> 00:10:45.440
It fails completely when you ask it for a holistic

00:10:45.440 --> 00:10:47.980
summary or to understand the whole document.

00:10:48.179 --> 00:10:51.639
So you couldn't ask it to, say, summarize a

00:10:51.639 --> 00:10:54.100
500-page book. Exactly. We saw this in our tests.

00:10:54.200 --> 00:10:56.340
I asked it, how many total rules are in the golf

00:10:56.340 --> 00:10:59.519
PDF? And it answered five. It couldn't see the

00:10:59.519 --> 00:11:01.220
whole document to count them all. It just found

00:11:01.220 --> 00:11:03.139
the five nearest chunks that mentioned the word

00:11:03.139 --> 00:11:05.820
rule. And the last one, which is maybe the most

00:11:05.820 --> 00:11:08.529
important for businesses. You have to remember

00:11:08.529 --> 00:11:10.990
your documents are on Google servers, so you

00:11:10.990 --> 00:11:12.570
have to be really careful about what you upload.

00:11:12.769 --> 00:11:16.389
No sensitive PII, personally identifiable information,

00:11:16.789 --> 00:11:19.730
and no top secret company data. And you need

00:11:19.730 --> 00:11:22.389
to think about compliance. Absolutely. GDPR,

00:11:22.389 --> 00:11:26.009
HIPAA. If you have really strict data sovereignty

00:11:26.009 --> 00:11:29.149
or security needs, you might still need to build

00:11:29.149 --> 00:11:31.490
your own on-premise solution. This might not

00:11:31.490 --> 00:11:34.899
be for you. So just to be crystal clear. If your

00:11:34.899 --> 00:11:37.360
goal is to summarize a huge document in its entirety,

00:11:37.600 --> 00:11:41.080
should you use this? No. Its architecture is

00:11:41.080 --> 00:11:43.480
built for finding facts inside chunks, which

00:11:43.480 --> 00:11:45.659
limits the kind of holistic understanding you

00:11:45.659 --> 00:11:48.340
need for a good summary. OK, so let's pull all

00:11:48.340 --> 00:11:50.820
this together. The big takeaway here seems to

00:11:50.820 --> 00:11:53.840
be that Gemini File Search just massively lowers

00:11:53.840 --> 00:11:56.919
the barrier to entry for RAG. It's a huge leap

00:11:56.919 --> 00:11:59.679
in simplicity and cost effectiveness. The verdict

00:11:59.679 --> 00:12:01.500
is pretty clear. You can set it up in 30 minutes.

00:12:01.639 --> 00:12:04.080
It's basically free for most normal use cases,

00:12:04.200 --> 00:12:07.059
and it delivers really high accuracy. That

00:12:07.059 --> 00:12:10.789
4.5 out of 5 is no joke. Four simple API calls

00:12:10.789 --> 00:12:13.230
are automating what used to be weeks of painful

00:12:13.230 --> 00:12:15.169
infrastructure work. It's pretty incredible.

00:12:15.330 --> 00:12:17.110
So who should be using this right now? I'd say

00:12:17.110 --> 00:12:20.190
developers who are prototyping RAG ideas, small

00:12:20.190 --> 00:12:22.549
businesses that need a simple internal Q&A bot,

00:12:22.710 --> 00:12:24.870
or maybe content creators trying to organize

00:12:24.870 --> 00:12:27.009
a huge library of their own work. And who should

00:12:27.009 --> 00:12:29.620
maybe hold off for now? Big companies with really

00:12:29.620 --> 00:12:32.379
strict compliance rules, anyone who needs total

00:12:32.379 --> 00:12:35.200
control over their data, or use cases that are

00:12:35.200 --> 00:12:38.059
all about deep, full document summarization instead

00:12:38.059 --> 00:12:40.559
of fact -finding. It really lets you change your

00:12:40.559 --> 00:12:43.519
focus. You can stop worrying about the RAG infrastructure

00:12:43.519 --> 00:12:46.220
and just start building a valuable agent. Which

00:12:46.220 --> 00:12:48.460
leads to a really interesting thought. If this

00:12:48.460 --> 00:12:52.000
is the trend... If RAG just becomes a cheap built

00:12:52.000 --> 00:12:55.100
-in feature of these models, what does that mean

00:12:55.100 --> 00:12:58.080
for the future of, say, a specialized vector

00:12:58.080 --> 00:13:00.860
database engineer? Is that entire job going to

00:13:00.860 --> 00:13:03.179
change? That is a fascinating question to think

00:13:03.179 --> 00:13:05.220
about. It really is. All right. Go out and build

00:13:05.220 --> 00:13:08.279
value, not infrastructure. We hope this deep

00:13:08.279 --> 00:13:10.179
dive gave you the clarity you were looking for.

00:13:10.299 --> 00:13:11.059
Until next time.
