WEBVTT

00:00:00.000 --> 00:00:02.220
If you look at how software creation has changed,

00:00:02.359 --> 00:00:04.660
I mean, even in just the last few years, it's

00:00:04.660 --> 00:00:07.940
already dramatic. But I don't think anything

00:00:07.940 --> 00:00:10.560
really prepared me for the actual demonstrated

00:00:10.560 --> 00:00:14.080
reality that you could build a functional real-

00:00:14.080 --> 00:00:18.120
time 3D ray tracing simulator just by typing

00:00:18.120 --> 00:00:20.719
a single paragraph of text. Welcome to the deep

00:00:20.719 --> 00:00:25.399
dive. That ability to turn conversation, or what

00:00:25.399 --> 00:00:28.059
some people are calling vibe, into a complex

00:00:28.059 --> 00:00:31.199
tool. That's the new reality check, isn't it?

00:00:31.239 --> 00:00:33.920
It really is. And today we are cracking open

00:00:33.920 --> 00:00:36.719
the documentation, the analysis of 12 different

00:00:36.719 --> 00:00:39.100
applications that were built completely from

00:00:39.100 --> 00:00:41.299
scratch using just natural language prompts.

00:00:41.380 --> 00:00:43.600
Full-stack apps in minutes. Exactly. Our mission

00:00:43.600 --> 00:00:45.539
is pretty straightforward. We need to pull out

00:00:45.539 --> 00:00:48.399
the most important lessons here about rapid prototyping,

00:00:48.500 --> 00:00:52.820
handling extreme complexity, and, critically,

00:00:52.820 --> 00:00:54.979
understanding the real-world limits. We're moving

00:00:54.979 --> 00:00:57.399
past the hype. Right into the details. The short

00:00:57.399 --> 00:01:02.500
version, the TL;DR, is this: Gemini 3.0 automates

00:01:02.500 --> 00:01:06.579
these really specialized tasks that before required

00:01:06.579 --> 00:01:09.400
such deep knowledge of coding frameworks. The

00:01:09.400 --> 00:01:13.180
real value is just... it's drastically lowering

00:01:13.180 --> 00:01:16.519
the barrier to entry for complex logic. And here's

00:01:16.519 --> 00:01:18.280
your immediate actionable tip from the source

00:01:18.280 --> 00:01:20.340
material. If you want to try and replicate this:

00:01:20.920 --> 00:01:24.540
Always, always request standalone HTML files

00:01:24.540 --> 00:01:27.239
in your prompt. Why is that? Because that guarantees

00:01:27.239 --> 00:01:29.560
the app is immediately testable. It's portable,

00:01:29.799 --> 00:01:32.420
ready to share or, you know, ready to debug.

00:01:32.680 --> 00:01:34.640
That's such a critical constraint to put on the

00:01:34.640 --> 00:01:37.629
AI. Because even just a year ago, if you suggested

00:01:37.629 --> 00:01:39.750
we'd be creating a physics engine or a custom

00:01:39.750 --> 00:01:41.670
board game just by typing. Yeah, you'd have been

00:01:41.670 --> 00:01:43.269
laughed out of the room. It would have been dismissed

00:01:43.269 --> 00:01:45.370
as science fiction. But now that shortcut is

00:01:45.370 --> 00:01:47.769
undeniably real. The tests we reviewed, they

00:01:47.769 --> 00:01:49.769
showed some shocking successes, you know, like

00:01:49.769 --> 00:01:51.969
those physics demos. But we're also going to

00:01:51.969 --> 00:01:54.109
look honestly at the limits. For sure. Especially

00:01:54.109 --> 00:01:57.409
around browser lag and some of the visual artifacts.

00:01:57.790 --> 00:02:00.290
It's an honest look at where vibe coding is today.

00:02:00.549 --> 00:02:02.489
Okay, let's start with procedural generation.

00:02:03.200 --> 00:02:05.260
This is essentially code writing code. We're

00:02:05.260 --> 00:02:07.519
talking geometry and a little bit of fun with

00:02:07.519 --> 00:02:09.500
this thing called the voxel creatures generator.

00:02:09.800 --> 00:02:12.400
Right. It started with a really simple high level

00:02:12.400 --> 00:02:15.639
prompt just asking for procedurally generated

00:02:15.639 --> 00:02:18.840
voxel creatures and biomechanical vehicles. And

00:02:18.840 --> 00:02:21.360
what's immediately fascinating here is that the

00:02:21.360 --> 00:02:23.939
AI didn't just spit out static images. It

00:02:23.939 --> 00:02:27.060
generated interactive physics. You can create

00:02:27.060 --> 00:02:28.860
a creature and then you click a button to scrap

00:02:28.860 --> 00:02:32.340
it and you watch the pieces just shatter, with

00:02:32.340 --> 00:02:35.500
really realistic, tumbling physics and shadows.

00:02:35.520 --> 00:02:37.800
It's like digital Lego blocks. And the system

00:02:37.800 --> 00:02:40.159
retains conversational memory, which is key.

00:02:40.620 --> 00:02:44.180
The source asked for a specific item, a retro

00:02:44.180 --> 00:02:47.270
Polaroid camera. And Gemini just generated a

00:02:47.270 --> 00:02:50.409
fully recognizable voxel camera in seconds. So

00:02:50.409 --> 00:02:52.830
it's handling both random generation and specific

00:02:52.830 --> 00:02:54.930
instruction at the same time. Yeah, inside a

00:02:54.930 --> 00:02:57.250
3D space. It's pretty amazing. We also saw the

00:02:57.250 --> 00:03:00.050
process of converting a flat 2D image. I think

00:03:00.050 --> 00:03:01.969
a Michael Jackson photo was used in the test.

00:03:02.150 --> 00:03:04.729
That's the one. And turning it into 3D voxel

00:03:04.729 --> 00:03:07.650
assets. So the AI has to analyze the composition,

00:03:07.949 --> 00:03:10.969
identify elements, figure out depth, and then

00:03:10.969 --> 00:03:14.979
output three distinct 3D scenes: setup, mid-event,

00:03:14.979 --> 00:03:18.120
and aftermath. It's complex scene interpretation.

00:03:18.120 --> 00:03:21.240
So what was the quality like? Well, the assessment

00:03:21.240 --> 00:03:23.740
was interesting. The results were described as

00:03:23.740 --> 00:03:26.860
simplistic, but actually really cool. They weren't

00:03:26.860 --> 00:03:29.560
photorealistic, but they absolutely captured the

00:03:29.560 --> 00:03:31.639
essence of the image and its movement. Okay, so

00:03:31.639 --> 00:03:33.419
let's unpack this a bit. When we look at that

00:03:33.419 --> 00:03:35.759
voxel generator, the source noted a small sort

00:03:35.759 --> 00:03:38.520
of rough edge. The app keeps the same color palette

00:03:38.520 --> 00:03:40.759
from the last design until you do a hard refresh.

00:03:41.319 --> 00:03:43.460
What does that color palette stickiness tell

00:03:43.460 --> 00:03:45.840
us about where these generated apps are right

00:03:45.840 --> 00:03:48.240
now? It tells us that generated apps often need

00:03:48.240 --> 00:03:50.419
iterative refinement. The foundation is there,

00:03:50.479 --> 00:03:52.659
but state management still needs a human review.

00:03:52.879 --> 00:03:55.530
And now we have to move into the... uh, the really

00:03:55.530 --> 00:03:58.110
deep technical stuff. We asked the AI to handle

00:03:58.110 --> 00:04:00.590
a task that is just notoriously difficult and

00:04:00.590 --> 00:04:03.050
computationally expensive: real-time ray tracing.

00:04:03.050 --> 00:04:06.789
And ray tracing, for anyone unfamiliar, is the

00:04:06.789 --> 00:04:09.270
rendering technique that simulates exactly how

00:04:09.270 --> 00:04:11.930
light bounces through a scene. It's what gives

00:04:11.930 --> 00:04:13.949
you those hyper-realistic reflections. Usually

00:04:13.949 --> 00:04:16.329
requires high-end gaming hardware. Oh yeah, and

00:04:16.329 --> 00:04:19.509
historically weeks of highly specialized manual

00:04:19.509 --> 00:04:22.149
coding to set up the geometry and the physics.

00:04:22.920 --> 00:04:25.779
The prompt was intense. It specifically requested

00:04:25.779 --> 00:04:30.160
a 3D ray tracing app in a mirror maze using PBR

00:04:30.160 --> 00:04:32.699
materials and high-end frameworks like Three.js.

00:04:32.779 --> 00:04:34.759
Okay, let's quickly define that jargon. PBR.

00:04:34.959 --> 00:04:37.360
That's physically based rendering. Right. So

00:04:37.360 --> 00:04:39.699
digital materials that react accurately to light,

00:04:39.779 --> 00:04:43.899
like real metal or rough plastic. And Three.js is

00:04:43.899 --> 00:04:46.540
just the standard library for creating 3D graphics

00:04:46.540 --> 00:04:49.360
in a web browser. The AI had to seamlessly integrate

00:04:49.360 --> 00:04:51.500
all of that. And what it delivered was, well,

00:04:51.579 --> 00:04:53.959
a fully navigable 3D environment. You have these

00:04:53.959 --> 00:04:56.220
floating metallic pieces where the reflections

00:04:56.220 --> 00:04:58.800
update dynamically. Light bounces correctly between

00:04:58.800 --> 00:05:01.160
three different light sources. That is a massive

00:05:01.160 --> 00:05:03.720
computational load. Achieved via conversation.

00:05:04.060 --> 00:05:06.339
Now, the source was honest. There were visible

00:05:06.339 --> 00:05:08.519
artifacts, some alignment issues, a bit

00:05:08.519 --> 00:05:11.180
of visual glitching. But for something generated

00:05:11.180 --> 00:05:15.379
conversationally in minutes, the quality is truly

00:05:15.379 --> 00:05:17.339
remarkable. I mean, that's the moment of wonder

00:05:17.339 --> 00:05:19.720
right there. I genuinely did a double take when

00:05:19.720 --> 00:05:21.720
I saw the metallic reflections. It looks like

00:05:21.720 --> 00:05:25.000
a high-end tech demo from maybe five years ago,

00:05:25.160 --> 00:05:27.319
but it was generated from a paragraph of text.

00:05:27.639 --> 00:05:30.939
That's a huge leap. It is. Alongside this, we

00:05:30.939 --> 00:05:33.540
saw the dot matrix image converter. You upload

00:05:33.540 --> 00:05:35.920
a photo and the app turns it into thousands of

00:05:35.920 --> 00:05:37.980
individual dots. And the interactivity is key

00:05:37.980 --> 00:05:40.480
there. It is. You can apply effects like data

00:05:40.480 --> 00:05:42.560
glitch or implosion and you watch the dots move

00:05:42.560 --> 00:05:44.740
around in 3D space and then perfectly reassemble.

00:05:45.389 --> 00:05:48.529
But if the output is visually glitchy, as the

00:05:48.529 --> 00:05:51.790
source said, how much time does that really save

00:05:51.790 --> 00:05:55.170
a developer? Is vibe coding just creating complex

00:05:55.170 --> 00:05:58.069
but, you know, broken starting points? Well,

00:05:58.110 --> 00:06:00.569
it showed AI can handle the advanced real-time

00:06:00.569 --> 00:06:03.529
rendering and physics. That complex structure, which

00:06:03.529 --> 00:06:05.769
used to take weeks to set up, is now instantly

00:06:05.769 --> 00:06:08.990
accessible. Next up, we tested complex multi-

00:06:08.990 --> 00:06:12.069
body simulations. This is where you really stress

00:06:12.069 --> 00:06:14.170
the math capabilities of the generated code.

00:06:14.730 --> 00:06:17.129
We started with the particle collider simulator.

00:06:17.470 --> 00:06:19.709
Right. The goal was an educational visualization.

00:06:20.250 --> 00:06:23.930
A particle collider and a tokamak fusion reactor.

00:06:24.310 --> 00:06:27.269
And Gemini created two separate simulators showing

00:06:27.269 --> 00:06:30.370
particle acceleration, interaction, and detailed

00:06:30.370 --> 00:06:33.790
collision paths. The physics here has to be mathematically

00:06:33.790 --> 00:06:37.000
sound for it to be accurate. But this is where

00:06:37.000 --> 00:06:38.920
we hit the performance reality check. The source

00:06:38.920 --> 00:06:40.560
material really focused on this. It said the

00:06:40.560 --> 00:06:43.800
simulation runs very slowly in my browser. But

00:06:43.800 --> 00:06:45.180
that's because it's doing a lot of calculation.

00:06:45.540 --> 00:06:47.639
That performance lag is so important. The source

00:06:47.639 --> 00:06:51.000
noted that expecting a flawlessly optimized production-

00:06:51.000 --> 00:06:53.899
ready app right away is the big mistake users

00:06:53.899 --> 00:06:56.540
make. I still wrestle with this myself, actually.

00:06:56.560 --> 00:06:58.699
You see this incredible functional output, the

00:06:58.699 --> 00:07:00.899
whole collider visualized, and you kind of forget

00:07:00.899 --> 00:07:03.819
that optimization is still a manual, time-consuming

00:07:03.819 --> 00:07:06.660
step. It's easy to... expect immediate perfection.

00:07:07.100 --> 00:07:09.579
Totally. And the interactive n-body gravity

00:07:09.579 --> 00:07:12.459
simulator pushed this idea of complexity even

00:07:12.459 --> 00:07:15.420
further. This challenges the AI to accurately

00:07:15.420 --> 00:07:18.560
simulate Newtonian gravity with multiple bodies

00:07:18.560 --> 00:07:20.879
at once. Meaning every planet is affecting every

00:07:20.879 --> 00:07:23.180
other planet all the time. Constantly. And what

00:07:23.180 --> 00:07:25.120
you can do with it is incredible. You can add

00:07:25.120 --> 00:07:28.899
planets, moons, change their mass, and the simulation

00:07:28.899 --> 00:07:31.360
just immediately recalculates all the gravitational

00:07:31.360 --> 00:07:34.740
forces. Which is how you get those emergent chaotic...

00:07:34.860 --> 00:07:37.600
behaviors like orbital decay or a gravitational

00:07:37.600 --> 00:07:40.500
slingshot. And the AI responded perfectly to

00:07:40.500 --> 00:07:43.560
requests for cosmic events. You could instantly

00:07:43.560 --> 00:07:46.339
add a rogue star, a black hole, or even trigger

00:07:46.339 --> 00:07:48.620
a supernova. And it all affected the physics

00:07:48.620 --> 00:07:50.560
in real time. It proved the underlying engine

00:07:50.560 --> 00:07:53.769
was robust, not just a visual trick. So when

00:07:53.769 --> 00:07:56.269
dealing with these high computation tasks, like

00:07:56.269 --> 00:07:59.509
simulating 10 bodies interacting, is that lag

00:07:59.509 --> 00:08:02.389
a failure or is it the ultimate proof of concept?

00:08:02.589 --> 00:08:04.889
It's proof of concept. The complex foundation

00:08:04.889 --> 00:08:07.810
is sound, but performance optimization has to

00:08:07.810 --> 00:08:10.129
follow the AI generation, not come before it.

00:08:10.410 --> 00:08:11.949
All right. So moving away from visual physics,

00:08:12.170 --> 00:08:14.310
we looked at how this generative capability applies

00:08:14.310 --> 00:08:17.290
to pure data analysis and real world utility.

00:08:18.050 --> 00:08:21.600
The AI bubble research storyboard is... It's a

00:08:21.600 --> 00:08:24.160
fascinating case of macroeconomics meeting interactive

00:08:24.160 --> 00:08:27.339
visualization. Yeah, this one was cool. The prompt

00:08:27.339 --> 00:08:30.779
basically told the AI to act as a macroeconomist,

00:08:30.819 --> 00:08:33.779
gather historical data on financial bubbles like

00:08:33.779 --> 00:08:36.440
the dot-com era, the Great Depression, and then

00:08:36.440 --> 00:08:39.559
build an animated public dashboard to analyze

00:08:39.559 --> 00:08:42.789
the current AI market. And what it created was

00:08:42.789 --> 00:08:45.750
an interactive macroeconomic analysis tool. It

00:08:45.750 --> 00:08:49.129
had current AI market data, I think CapEx projections

00:08:49.129 --> 00:08:53.110
of $405 billion in 2025. Yeah, and Minsky cycle

00:08:53.110 --> 00:08:55.590
visualizations. Let's clarify that. CapEx, capital

00:08:55.590 --> 00:08:58.029
expenditure. That's the money corporations are

00:08:58.029 --> 00:08:59.889
spending on physical assets, right? Exactly.

00:08:59.889 --> 00:09:02.090
Like infrastructure and chips. That number is

00:09:02.090 --> 00:09:04.169
a critical proxy for massive corporate confidence.

00:09:04.620 --> 00:09:06.840
And the Minsky cycle just explains the five standard

00:09:06.840 --> 00:09:09.059
stages of a speculative bubble, from displacement

00:09:09.059 --> 00:09:11.580
all the way to panic. So the AI didn't just plot

00:09:11.580 --> 00:09:14.320
data. It applied complex financial theory. It

00:09:14.320 --> 00:09:17.100
did. And the highlight was the bubble simulator

00:09:17.100 --> 00:09:20.279
game that was built into the dashboard. Users

00:09:20.279 --> 00:09:22.600
could make business decisions, raise funding,

00:09:22.840 --> 00:09:25.720
hire people, and the chart would update tracking

00:09:25.720 --> 00:09:28.679
risk until the demo day bubble popped. And the

00:09:28.679 --> 00:09:31.120
tool even provided its own strategic assessment.

00:09:31.299 --> 00:09:33.659
It did. It said something like: productive bubble,

00:09:33.679 --> 00:09:37.360
infrastructure survives, speculators die, banks

00:09:37.360 --> 00:09:40.500
safe, stocks at risk. That's predictive application,

00:09:40.740 --> 00:09:43.039
not just visualization. So moving from macro

00:09:43.039 --> 00:09:45.860
trends to micro analysis, we have the basketball

00:09:45.860 --> 00:09:48.779
shot analyzer. This is a total game changer for

00:09:48.779 --> 00:09:50.759
coaching and skill development. The workflow

00:09:50.759 --> 00:09:52.919
seems simple. You just upload a short clip of

00:09:52.919 --> 00:09:55.320
a jump shot and the system processes it frame

00:09:55.320 --> 00:09:57.639
by frame. Right, using computer vision. And the

00:09:57.639 --> 00:10:00.440
technical advantage is just so clear. It delivers

00:10:00.440 --> 00:10:03.440
this granular biomechanics breakdown: joint angles,

00:10:03.559 --> 00:10:06.019
wrist snap, foot placement. It analyzes the entire

00:10:06.019 --> 00:10:08.759
kinetic chain and gives you a shot score and

00:10:08.759 --> 00:10:11.480
even a pro comparison mode with skeleton overlays.

00:10:11.580 --> 00:10:13.700
So how does that economic bubble game really

00:10:13.700 --> 00:10:15.639
show the difference between just raw analysis

00:10:15.639 --> 00:10:18.700
and genuine application? It turns static data

00:10:18.700 --> 00:10:21.440
and historical frameworks into a dynamic educational

00:10:21.440 --> 00:10:24.059
and, most importantly, a playable scenario for

00:10:24.059 --> 00:10:26.919
the user to learn from. Our final segment looks

00:10:26.919 --> 00:10:29.539
at the AI's ability to model large-scale systems

00:10:29.539 --> 00:10:32.480
and integrate real-time inputs. We're starting

00:10:32.480 --> 00:10:34.840
with the Earth simulator with weather control.

00:10:34.840 --> 00:10:38.740
The prompt asked for a perfect 3D Earth globe,

00:10:38.740 --> 00:10:42.259
realistic interactive cloud layers, high-quality

00:10:42.259 --> 00:10:46.039
rendering. And it delivered a rotatable 3D Earth

00:10:46.039 --> 00:10:48.730
where you can actually adjust environmental parameters,

00:10:48.970 --> 00:10:51.710
CO2 levels, temperature. The satellite feature

00:10:51.710 --> 00:10:53.730
was particularly impressive, I thought. It really

00:10:53.730 --> 00:10:56.169
was. The system calculated and displayed the

00:10:56.169 --> 00:10:58.649
correct orbital paths for communication and weather

00:10:58.649 --> 00:11:00.970
satellites, all moving in their actual trajectories.

00:11:01.090 --> 00:11:03.389
Then that's hard. That requires accurate geospatial

00:11:03.389 --> 00:11:05.389
calculation with real -time physics. It's not

00:11:05.389 --> 00:11:07.370
just a visual. And you could use natural language

00:11:07.370 --> 00:11:09.909
commands. You can type trigger a Category 5 hurricane,

00:11:10.070 --> 00:11:12.350
and the system updates the visual state. Amazing.

00:11:12.590 --> 00:11:15.110
Then we have the hand-tracking fluid dynamics

00:11:15.110 --> 00:11:18.149
simulator. This one is simpler. It just tracks

00:11:18.149 --> 00:11:20.409
your hands with a webcam to create these swirling

00:11:20.409 --> 00:11:23.940
fluid dynamic effects. The appeal there is just demonstrating

00:11:23.940 --> 00:11:26.419
real-time computer vision integration with

00:11:26.419 --> 00:11:29.080
generative effects. It's the basis for future

00:11:29.080 --> 00:11:32.159
gesture-based interfaces. And finally, the power

00:11:32.159 --> 00:11:35.360
of 10 animator. This tested the scaling challenge:

00:11:36.250 --> 00:11:38.970
generating an animation zooming all the way from

00:11:38.970 --> 00:11:42.350
a cosmic scale down to an atomic scale. And Gemini

00:11:42.350 --> 00:11:44.529
did this by coordinating with another model,

00:11:44.690 --> 00:11:47.769
NanoBanana, to blend the imagery across those

00:11:47.769 --> 00:11:50.169
different magnitudes. And that ability to coordinate

00:11:50.169 --> 00:11:53.570
specialized subtasks, telling one AI to handle

00:11:53.570 --> 00:11:55.809
image generation and another to handle the animation,

00:11:56.070 --> 00:11:58.509
that's the next huge leap. Though we do have

00:11:58.509 --> 00:12:00.909
to acknowledge the honest check, the content

00:12:00.909 --> 00:12:03.879
accuracy. It varied. There was an odd, unexplained

00:12:03.879 --> 00:12:06.740
hand artifact that appeared at the cellular level.

00:12:06.879 --> 00:12:09.419
Right. So the concept of AI directing other AI

00:12:09.419 --> 00:12:12.100
worked, but the final polish still needed a human

00:12:12.100 --> 00:12:15.000
eye. So what's the overarching lesson from combining

00:12:15.000 --> 00:12:17.919
vision, physics, and image generation across

00:12:17.919 --> 00:12:21.080
all these different scales? That the AI can successfully

00:12:21.080 --> 00:12:23.740
coordinate specialized subtasks and separate

00:12:23.740 --> 00:12:26.980
models to achieve these really complex multimodal

00:12:26.980 --> 00:12:30.120
results on a grand scale. When you look back

00:12:30.120 --> 00:12:33.539
at all 12 of these applications, I mean, from

00:12:33.539 --> 00:12:35.500
the Monopoly board generator, which we didn't

00:12:35.500 --> 00:12:37.580
even have time to get into, to that high-end

00:12:37.580 --> 00:12:40.419
ray tracing demo, the impressive part isn't any

00:12:40.419 --> 00:12:43.360
single one of them. No. It's the sheer totality

00:12:43.360 --> 00:12:47.009
of it, the diversity and the complexity. 12 functional

00:12:47.009 --> 00:12:49.149
applications built through conversation in a

00:12:49.149 --> 00:12:51.090
matter of minutes. That amount of work would

00:12:51.090 --> 00:12:53.330
have required significant development time and

00:12:53.330 --> 00:12:55.450
specialized expertise just a few months ago.

00:12:55.590 --> 00:12:57.710
Multiple teams, probably. The ultimate unlock

00:12:57.710 --> 00:12:59.950
here isn't just speed. It's the democratization

00:12:59.950 --> 00:13:03.110
of high-end tooling. The code is still complex,

00:13:03.309 --> 00:13:05.870
but the barrier to accessing that complexity

00:13:05.870 --> 00:13:08.429
has been, you know, just vaporized by conversation.

00:13:09.129 --> 00:13:12.129
So the real question then isn't whether AI can

00:13:12.129 --> 00:13:14.830
build perfect applications today. It's whether

00:13:14.830 --> 00:13:17.889
AI can make building useful, structurally sound

00:13:17.889 --> 00:13:21.009
things accessible to significantly more people.

00:13:21.190 --> 00:13:23.970
And based on these results, the answer is, it's

00:13:23.970 --> 00:13:27.309
an undeniable yes. So if the AI handles the complexity,

00:13:27.649 --> 00:13:30.409
does the value of the human developer now shift

00:13:30.409 --> 00:13:33.629
entirely to optimization? Or does this finally

00:13:33.629 --> 00:13:35.649
mean we can move beyond the screen and start

00:13:35.649 --> 00:13:37.649
building truly custom interfaces, applications

00:13:37.649 --> 00:13:39.809
that just weren't worth the coding effort before?

00:13:40.190 --> 00:13:42.850
The tools are here today. What kind of complex

00:13:42.850 --> 00:13:45.529
logic or creative idea that you previously thought

00:13:45.529 --> 00:13:47.950
was impossible to code are you now ready to prototype?

00:13:48.490 --> 00:13:50.909
That's something fascinating to consider as we

00:13:50.909 --> 00:13:51.990
wrap up this deep dive.
