WEBVTT

00:00:00.000 --> 00:00:02.859
OK, let's just let's unpack this. For years,

00:00:02.940 --> 00:00:06.320
there have been these two seemingly impossible

00:00:06.320 --> 00:00:10.160
problems haunting AI image generators. The first

00:00:10.160 --> 00:00:13.839
one, trying to get them to write complex, readable

00:00:13.839 --> 00:00:16.960
text inside a visual. It was always just a garbled

00:00:16.960 --> 00:00:19.420
mess like digital spaghetti. Right. And that

00:00:19.420 --> 00:00:21.539
was the industry failure point. But the second

00:00:21.539 --> 00:00:24.559
problem was, you know, just as critical for businesses

00:00:24.559 --> 00:00:27.100
and storytellers. Character consistency. Exactly.

00:00:27.420 --> 00:00:30.190
Generating the same mascot across dozens of

00:00:30.190 --> 00:00:33.530
scenes without it changing its face or its proportions.

00:00:34.369 --> 00:00:36.890
Google's new system, Nano Banana Pro, it just

00:00:36.890 --> 00:00:39.950
changes the rules entirely by solving both. Welcome

00:00:39.950 --> 00:00:42.829
back to the Deep Dive. Today we are tearing into

00:00:42.829 --> 00:00:44.850
the technical specs of what a lot of reviewers

00:00:44.850 --> 00:00:47.250
are calling a generational leap in generative

00:00:47.250 --> 00:00:49.210
AI. Yeah. We're talking about the Nano Banana

00:00:49.210 --> 00:00:51.909
Pro model, which is powered by the incredible

00:00:51.909 --> 00:00:54.890
reasoning capabilities of Gemini 3. We've sourced

00:00:54.890 --> 00:00:57.070
a brand new technical review to guide us through

00:00:57.070 --> 00:00:59.189
this. And our mission today is pretty simple,

00:00:59.229 --> 00:01:01.869
but it's also a challenge. We really want to

00:01:01.869 --> 00:01:04.329
understand the foundational mechanisms here.

00:01:04.689 --> 00:01:08.359
How does this system move past... just basic

00:01:08.359 --> 00:01:10.120
pattern matching, you know, making things look

00:01:10.120 --> 00:01:13.760
good to a genuine conceptual understanding that

00:01:13.760 --> 00:01:16.359
allows for real accuracy. So we'll cover the

00:01:16.359 --> 00:01:19.060
core tech that enables this. We'll dive deep

00:01:19.060 --> 00:01:22.280
into the revolution this creates for, say, infographics

00:01:22.280 --> 00:01:24.519
and typography. And then we'll look at the elegant

00:01:24.519 --> 00:01:27.140
solution for character and brand consistency.

00:01:27.340 --> 00:01:30.719
And critically, we will examine the honest limitations,

00:01:30.920 --> 00:01:34.079
what it still can't do. Yeah. So let's start

00:01:34.079 --> 00:01:36.180
with the leap itself. This isn't just a bigger

00:01:36.180 --> 00:01:38.459
version running faster; it's a totally different

00:01:38.459 --> 00:01:41.120
approach. The claim is that this is a generational

00:01:41.120 --> 00:01:44.379
shift, not just an update. Why? It really just boils

00:01:44.379 --> 00:01:47.420
down to process control. Before, you'd hand a model

00:01:47.420 --> 00:01:49.560
a prompt and it would just immediately start

00:01:49.560 --> 00:01:51.620
generating pixels. Right, the rendering. The whole

00:01:51.620 --> 00:01:53.340
system is focused on the look of the prompt. Yeah.

00:01:53.739 --> 00:01:56.780
But Nano Banana Pro, it introduces an advanced

00:01:56.780 --> 00:01:59.620
large language model, Gemini 3, as a reasoning

00:01:59.620 --> 00:02:01.599
engine. So it's at the beginning of the workflow.

00:02:01.840 --> 00:02:04.780
Right at the start. The model thinks first. Okay,

00:02:04.819 --> 00:02:07.540
so if I type a prompt, what is Gemini 3 actually

00:02:07.540 --> 00:02:11.340
doing before any pixels even start to form? It's

00:02:11.340 --> 00:02:14.099
like an internal architect. It translates your

00:02:14.099 --> 00:02:18.659
concept into rigid structural constraints. It

00:02:18.659 --> 00:02:20.939
breaks down the prompt semantically. So if you

00:02:20.939 --> 00:02:23.180
ask for an image comparing two historical events,

00:02:23.379 --> 00:02:26.000
it doesn't just look for pictures of those events.

00:02:26.080 --> 00:02:28.840
It reasons. It reasons. It asks, what facts are

00:02:28.840 --> 00:02:30.860
needed here? What's the best format to communicate

00:02:30.860 --> 00:02:33.599
this? And crucially, that reasoning step is tied

00:02:33.599 --> 00:02:36.120
to real-time verification, isn't it? Exactly.

00:02:36.319 --> 00:02:38.840
Gemini 3 can use web search to gather data in

00:02:38.840 --> 00:02:41.620
real time or verify facts that are needed for

00:02:41.620 --> 00:02:44.449
the image. Only after that whole reasoning and

00:02:44.449 --> 00:02:46.870
verification phase does it build a detailed plan.

00:02:47.050 --> 00:02:49.090
A constraint map. A constraint map, yeah, one that

00:02:49.090 --> 00:02:52.069
Nano Banana Pro, the image generator part, has

00:02:52.069 --> 00:02:54.090
to obey. It's like a digital architect planning

00:02:54.090 --> 00:02:56.550
the precise load-bearing structure before laying

00:02:56.550 --> 00:02:59.669
a single visual brick. Reviewers even saw a thinking

00:02:59.669 --> 00:03:02.169
drop-down feature that let them audit the process.

00:03:02.490 --> 00:03:04.849
You could literally watch Gemini 3 tracing its

00:03:04.849 --> 00:03:06.990
history and verifying facts before the image

00:03:06.990 --> 00:03:08.930
was even rendered. It's the difference between

00:03:08.930 --> 00:03:11.530
asking an artist to paint a historical scene

00:03:11.530 --> 00:03:14.919
from memory. And giving that artist a few hours

00:03:14.919 --> 00:03:17.400
to research the exact clothing, the setting,

00:03:17.500 --> 00:03:19.960
the factual context before they even pick up

00:03:19.960 --> 00:03:23.599
a brush. So how does that thinking-first step

00:03:23.599 --> 00:03:27.240
translate to practical, factual accuracy in the

00:03:27.240 --> 00:03:29.639
finished image. It uses web search to verify

00:03:29.639 --> 00:03:33.060
facts, ensuring accuracy before rendering the

00:03:33.060 --> 00:03:35.240
visual. That deep planning leads us straight

00:03:35.240 --> 00:03:38.300
to the text revolution because text was the industry's

00:03:38.300 --> 00:03:41.259
big, big bottleneck. Oh, yeah. When we talk about

00:03:41.259 --> 00:03:43.800
text failing, we mean the AI just saw letters

00:03:43.800 --> 00:03:47.159
as visual textures, as abstract squiggles, not

00:03:47.159 --> 00:03:49.539
as symbols with meaning. The severity of that

00:03:49.539 --> 00:03:51.319
problem was crippling. I mean, you could spend

00:03:51.319 --> 00:03:53.500
10 minutes crafting the perfect prompt and

00:03:53.500 --> 00:03:56.120
the AI would still spell basic words wrong. All

00:03:56.120 --> 00:03:58.800
the time. Nano Banana Pro has achieved one-shot

00:03:58.800 --> 00:04:01.580
perfect results. The first generation is the

00:04:01.580 --> 00:04:03.560
perfect result, even with complex paragraphs

00:04:03.560 --> 00:04:06.979
of text. I have to admit, I still wrestle with

00:04:06.979 --> 00:04:09.560
prompt drift myself, especially when I'm just

00:04:09.560 --> 00:04:11.400
trying to get a simple sign right, trying to

00:04:11.400 --> 00:04:14.099
generate a product label or a book cover with

00:04:14.099 --> 00:04:16.819
accurate titles. It was just a prompt credit

00:04:16.819 --> 00:04:19.160
graveyard before this. And this is where that

00:04:19.160 --> 00:04:21.680
reasoning model really shines. It treats the

00:04:21.680 --> 00:04:25.279
text not as a visual thing, but as a conceptual

00:04:25.279 --> 00:04:28.420
requirement dictated by Gemini 3's initial plan.

00:04:28.600 --> 00:04:30.660
Right. I mean, look at the specific tests they

00:04:30.660 --> 00:04:33.480
ran. They created this highly technical, clean,

00:04:33.639 --> 00:04:37.660
medical-style infographic breaking down REM

00:04:37.660 --> 00:04:40.000
versus deep sleep. And it had perfectly readable

00:04:40.000 --> 00:04:42.279
labels, correct terminology, none of that AI

00:04:42.279 --> 00:04:45.139
weirdness, which is so crucial for any kind of

00:04:45.139 --> 00:04:47.279
technical content. Then they pushed the factual

00:04:47.279 --> 00:04:50.199
boundary. They asked the system to research the

00:04:50.199 --> 00:04:52.860
top five budget coffee machines and to accurately

00:04:52.860 --> 00:04:55.180
pull pros, cons, and real ratings from different

00:04:55.180 --> 00:04:57.279
sources. And lay it out in a chart. And lay it

00:04:57.279 --> 00:04:59.480
out in a neat comparison chart. That requires

00:04:59.480 --> 00:05:02.399
data ingestion, reasoning, and then very precise

00:05:02.399 --> 00:05:04.980
layout. That shifts the tool from being just

00:05:04.980 --> 00:05:07.319
a purely aesthetic engine into a serious design

00:05:07.319 --> 00:05:09.600
and data business assistant all in one process.

00:05:09.860 --> 00:05:12.360
The ultimate stress test, though, for typography

00:05:12.360 --> 00:05:14.560
and layout had to be that comic encyclopedia

00:05:14.560 --> 00:05:18.319
page. That test is just... absurdly difficult

00:05:18.319 --> 00:05:21.319
for a generative model. It demands placing large

00:05:21.319 --> 00:05:24.439
blocks of verbatim text nested in speech bubbles.

00:05:24.600 --> 00:05:27.759
It needs dynamic formatting, dramatic typography,

00:05:27.939 --> 00:05:30.920
color-coded sections, and precise power-level

00:05:30.920 --> 00:05:34.699
stats, all in one complex layout. That's a challenge

00:05:34.699 --> 00:05:37.120
that needs a semantic understanding of the text

00:05:37.120 --> 00:05:40.060
blocks, not just placing a few letters. The system

00:05:40.060 --> 00:05:42.379
had to handle paragraph fitting, font changes,

00:05:42.639 --> 00:05:45.209
stat boxes, and keep it all accurate. So beyond

00:05:45.209 --> 00:05:47.110
simple captions, what was the most difficult

00:05:47.110 --> 00:05:50.170
text formatting test it handled? The model flawlessly

00:05:50.170 --> 00:05:52.930
formatted a dense, multi-section comic book

00:05:52.930 --> 00:05:55.850
encyclopedia page. Wow. Now we move to the second

00:05:55.850 --> 00:05:59.589
major solution, consistency, or the lack of it.

00:05:59.709 --> 00:06:02.250
Character drift was the real commercial and narrative

00:06:02.250 --> 00:06:04.610
barrier for all the previous AI systems. You

00:06:04.610 --> 00:06:07.129
could generate a beautiful mascot one time, but

00:06:07.129 --> 00:06:09.290
the second time, the line weight on its ear was

00:06:09.290 --> 00:06:12.050
slightly off, or its proportions changed in a

00:06:12.050 --> 00:06:15.170
subtle way. And that's a disaster for any agency

00:06:15.170 --> 00:06:18.230
or brand manager trying to scale content. You

00:06:18.230 --> 00:06:21.110
can't build brand trust if your mascot's face

00:06:21.110 --> 00:06:24.370
changes in every single ad. No. So Nano Banana

00:06:24.370 --> 00:06:27.449
Pro addresses this using a technique the reviewer

00:06:27.449 --> 00:06:30.490
called the Pro Workflow. Okay, let's detail that

00:06:30.490 --> 00:06:32.329
workflow. This is really critical for anyone

00:06:32.329 --> 00:06:34.689
who's involved in brand identity. It relies on

00:06:34.689 --> 00:06:37.990
using Gemini 3's analytical power. So step one,

00:06:38.110 --> 00:06:41.209
you ask Gemini to analyze an existing asset,

00:06:41.449 --> 00:06:44.790
a logo, a previous character render, and then

00:06:44.790 --> 00:06:46.930
formalize the brand guidelines for that. The

00:06:46.930 --> 00:06:49.750
vibe, the colors. The vibe, the specific colors,

00:06:49.870 --> 00:06:52.189
hex codes even, and the typography constraints.

00:06:52.569 --> 00:06:55.170
So we're using the LLM to codify the style. What's

00:06:55.170 --> 00:06:57.470
step two? Well, since those guidelines can be

00:06:57.470 --> 00:07:00.290
long and text prompts have limits, you convert

00:07:00.290 --> 00:07:02.569
them into specific visual assets. So you're basically

00:07:02.569 --> 00:07:04.509
just taking screenshots of the style guide. I

00:07:04.509 --> 00:07:06.569
see. Step three, you upload those screenshots

00:07:06.569 --> 00:07:08.970
as dedicated reference images to Nano Banana

00:07:08.970 --> 00:07:11.740
Pro. This process gets around the limits of text-only

00:07:11.740 --> 00:07:13.519
prompts, and it locks in the aesthetic

00:07:13.519 --> 00:07:15.759
for every new asset you generate. That means

00:07:15.759 --> 00:07:18.500
the model is treating the brand style as a single,

00:07:18.660 --> 00:07:21.480
immutable visual token instead of just a list

00:07:21.480 --> 00:07:23.540
of suggestions. Exactly. And the character tests

00:07:23.540 --> 00:07:25.620
proved it. Right. They tested the mascot across

00:07:25.620 --> 00:07:28.240
wildly different scenarios. Holding a latte,

00:07:28.519 --> 00:07:31.220
driving a scooter, and the mascot was perfectly

00:07:31.220 --> 00:07:33.899
reproducible. Identical line weight, identical

00:07:33.899 --> 00:07:36.600
proportions, no matter what the action was. They

00:07:36.600 --> 00:07:39.500
also ran an even more demanding test. The emotion

00:07:39.500 --> 00:07:42.800
panel test. This generated a six-panel sheet

00:07:42.800 --> 00:07:46.879
of emotions (cheerful, sleepy, annoyed) where

00:07:46.879 --> 00:07:48.740
the character had to shift its expression without

00:07:48.740 --> 00:07:51.139
any structural distortion. And the fidelity was

00:07:51.139 --> 00:07:53.639
unwavering. It was. And maybe the most revealing

00:07:53.639 --> 00:07:57.180
was the storyboard camera test. Shifting perspective,

00:07:57.839 --> 00:08:00.379
like from a mid shot to a full-body shot, that

00:08:00.379 --> 00:08:03.379
usually causes tiny details to drift, a facial

00:08:03.379 --> 00:08:05.759
structure, an accessory. But here the reviewer

00:08:05.759 --> 00:08:07.819
found that every small detail was preserved,

00:08:08.000 --> 00:08:11.160
even across dramatic perspective shifts. So what's

00:08:11.160 --> 00:08:13.439
the secret to ensuring the brand style stays

00:08:13.439 --> 00:08:16.240
locked when generating new assets? Uploading

00:08:16.240 --> 00:08:18.600
converted brand guideline screenshots as dedicated

00:08:18.600 --> 00:08:22.199
reference images is the best technique.

00:08:22.199 --> 00:08:26.560
Okay, shifting focus a little.

00:08:26.639 --> 00:08:29.600
The model's performance in both consistency and

00:08:29.600 --> 00:08:32.379
text, it points to its deeper and I think most

00:08:32.379 --> 00:08:35.980
exciting power. Genuine conceptual understanding.

00:08:36.509 --> 00:08:40.090
It understands ideas and relationships, not just,

00:08:40.090 --> 00:08:41.889
you know, collections of pixels. That conceptual

00:08:41.889 --> 00:08:44.289
power can sound a bit abstract, but the practical

00:08:44.289 --> 00:08:46.129
examples are really stunning. Take the reverse

00:08:46.129 --> 00:08:48.830
engineering or recipe test. A reviewer uploaded

00:08:48.830 --> 00:08:52.009
a photo of a finished kind of complex steak dish.

00:08:52.230 --> 00:08:54.990
Right. Then they asked the AI for a photo of

00:08:54.990 --> 00:08:57.090
all the ingredients labeled with their quantities.

00:08:57.389 --> 00:08:59.830
And the result was that the model correctly identified

00:08:59.830 --> 00:09:02.610
and visualized the meat, the butter, the garlic,

00:09:02.730 --> 00:09:04.769
the heavy cream, everything necessary for that

00:09:04.769 --> 00:09:07.970
dish. That required understanding the visual inputs

00:09:07.970 --> 00:09:10.190
and the underlying chemistry that created the

00:09:10.190 --> 00:09:12.649
meal. It's not just matching steak to a database

00:09:12.649 --> 00:09:15.169
of ingredients. It's inferring the whole process.

00:09:15.409 --> 00:09:17.549
Then there was the geographic intelligence test,

00:09:17.830 --> 00:09:20.809
zooming in on Vatican City and maintaining accurate

00:09:20.809 --> 00:09:23.610
spatial relationships. The position of the trees,

00:09:23.850 --> 00:09:27.970
the obelisk, even at a 67x zoom. Which suggests

00:09:27.970 --> 00:09:31.309
the model is not relying on flat 2D approximations.

00:09:31.409 --> 00:09:34.070
It seems to maintain some kind of internal synthetic

00:09:34.070 --> 00:09:37.240
coordinate system, projecting constraints onto

00:09:37.240 --> 00:09:39.159
what's essentially a 3D map of the location.

00:09:39.440 --> 00:09:41.600
And the final piece of evidence for that conceptual

00:09:41.600 --> 00:09:44.299
linkage was the translation accuracy test. It

00:09:44.299 --> 00:09:46.899
coherently translated English text on a cereal

00:09:46.899 --> 00:09:50.039
box into French. No made-up words. No made-up

00:09:50.039 --> 00:09:51.980
words. It showed genuine language processing

00:09:51.980 --> 00:09:55.679
tied directly into the visual output. Whoa.

00:09:55.679 --> 00:09:58.639
Just imagine scaling that conceptual

00:09:58.639 --> 00:10:01.299
understanding, that ability to reason and translate

00:10:01.299 --> 00:10:04.820
ideas into structural maps to a billion queries

00:10:04.820 --> 00:10:07.679
a day, handling things far beyond just images,

00:10:07.779 --> 00:10:10.659
integrating data streams across media. That is

00:10:10.659 --> 00:10:12.820
the actual generational leap. That conceptual

00:10:12.820 --> 00:10:15.440
power is undeniable, but the reviewer did offer

00:10:15.440 --> 00:10:17.480
an honest assessment of limitations. It is not

00:10:17.480 --> 00:10:19.220
infallible yet, and this is important for users

00:10:19.220 --> 00:10:21.460
to understand. Right. The biggest frustration

00:10:21.460 --> 00:10:24.740
was around specific geometric instruction, specifically

00:10:24.740 --> 00:10:28.399
pose control. If the reviewer asked the model

00:10:28.399 --> 00:10:31.139
to make a character adopt a very specific, detailed

00:10:31.139 --> 00:10:34.620
pose from a reference drawing, say a complex

00:10:34.620 --> 00:10:37.480
martial arts stance, Nano Banana Pro consistently

00:10:37.480 --> 00:10:40.230
ignored it. And it just substituted its own pose

00:10:40.230 --> 00:10:42.990
instead. Why? I mean, if it understands complex

00:10:42.990 --> 00:10:45.049
concepts and spatial relationships, why does

00:10:45.049 --> 00:10:47.110
it struggle with specific input like a reference

00:10:47.110 --> 00:10:50.070
pose? It ignores pose reference drawings, preferring

00:10:50.070 --> 00:10:51.970
to generate its own character positions instead.

00:10:52.169 --> 00:10:54.309
It seems to prioritize the character's identity

00:10:54.309 --> 00:10:57.669
and its action, what the character is doing over

00:10:57.669 --> 00:11:00.730
the precise form, the exact geometry of the pose.

00:11:00.970 --> 00:11:03.230
Right. And there are minor flaws, too. Small

00:11:03.230 --> 00:11:05.850
text on products, tiny text or fine print on

00:11:05.850 --> 00:11:08.850
product labels or, say, watch faces, often fails

00:11:08.850 --> 00:11:10.889
to render crisply when you zoom in. Even though

00:11:10.889 --> 00:11:13.889
the large logos are perfect. Perfect. And finally,

00:11:14.009 --> 00:11:17.250
the reality of any AI rollout. Inconsistent community

00:11:17.250 --> 00:11:20.389
results. Variation is expected. There are constant

00:11:20.389 --> 00:11:22.350
model updates. There's randomness in the generation.

00:11:22.809 --> 00:11:25.149
Your results might vary a bit from the peak examples.

00:11:25.529 --> 00:11:28.190
After analyzing all the evidence, though, the

00:11:28.190 --> 00:11:30.750
verdict is pretty compelling. It's best in class.

00:11:31.070 --> 00:11:34.350
Nano Banana Pro absolutely blows all other image

00:11:34.350 --> 00:11:37.269
generation models out of the water for versatility,

00:11:37.490 --> 00:11:40.629
accuracy, and just practical application. We

00:11:40.629 --> 00:11:42.350
can still look at the competitive landscape.

00:11:42.710 --> 00:11:45.289
Midjourney is still incredibly strong for raw,

00:11:45.429 --> 00:11:49.029
pure, aesthetic beauty. It can maybe create a

00:11:49.029 --> 00:11:52.450
subjectively prettier image, but it's still fundamentally

00:11:52.450 --> 00:11:56.110
weak on text and consistency. Nano Banana Pro takes

00:11:56.110 --> 00:11:58.269
the crown because it delivers both high quality

00:11:58.269 --> 00:12:01.049
and high utility, especially for commercial and

00:12:01.049 --> 00:12:02.929
technical use cases. So what does this actually

00:12:02.929 --> 00:12:05.190
mean for you, the listener, in your day-to-day

00:12:05.190 --> 00:12:07.350
work? Well, for marketers, it means generating

00:12:07.350 --> 00:12:09.830
campaign-ready assets that automatically adhere

00:12:09.830 --> 00:12:14.220
to brand guidelines in a single prompt. No constant

00:12:14.220 --> 00:12:16.799
human oversight. For educators, you can now describe

00:12:16.799 --> 00:12:20.000
a complex concept like photosynthesis and get

00:12:20.000 --> 00:12:22.200
a publication-ready, accurate visual explanation

00:12:22.200 --> 00:12:25.559
or a labeled chart instantly. Yeah. And for professional

00:12:25.559 --> 00:12:27.659
creators, you finally have consistent characters

00:12:27.659 --> 00:12:30.139
across unlimited scenarios and camera angles.

00:12:30.340 --> 00:12:32.679
It unlocks some serious world-building potential.

00:12:33.230 --> 00:12:34.850
So we should probably just reiterate the best

00:12:34.850 --> 00:12:38.830
practices here. Use Gemini 3 to formalize and

00:12:38.830 --> 00:12:40.789
create those guidelines. Right. Use screenshots

00:12:40.789 --> 00:12:42.870
of those guidelines as your visual references

00:12:42.870 --> 00:12:45.649
to lock in the style. And always use Nano Banana

00:12:45.649 --> 00:12:48.649
Pro to generate text from scratch. That is its

00:12:48.649 --> 00:12:52.129
strongest use case. But manually verify any highly

00:12:52.129 --> 00:12:55.570
specialized technical text just to be safe. The

00:12:55.570 --> 00:12:57.649
accessibility message here is just so powerful.

00:12:57.909 --> 00:12:59.889
These aren't capabilities that are promised for

00:12:59.889 --> 00:13:01.929
the future. They are proven and available today.

00:13:02.669 --> 00:13:05.070
It represents years of research paying off in

00:13:05.070 --> 00:13:07.830
a tangible, practical leap. The fact that this

00:13:07.830 --> 00:13:10.509
system solved consistency and complex text at

00:13:10.509 --> 00:13:13.149
the same time is proof that the technology has

00:13:13.149 --> 00:13:15.210
moved beyond treating the world as just pixels.

00:13:15.590 --> 00:13:18.309
This conceptual understanding moves past simple

00:13:18.309 --> 00:13:20.590
pattern matching. It's moving toward genuine

00:13:20.590 --> 00:13:23.049
comprehension of intent and structure. The tools

00:13:23.049 --> 00:13:25.470
exist. The capabilities are proven. The only

00:13:25.470 --> 00:13:27.909
question left is... What will you create now

00:13:27.909 --> 00:13:29.690
that this level of fidelity and accessibility

00:13:29.690 --> 00:13:32.230
is available to everyone?
