WEBVTT

00:00:00.000 --> 00:00:04.519
What if an AI decided to stop chasing pure speed

00:00:04.519 --> 00:00:07.879
and instead started focusing on deep, slow, and

00:00:07.879 --> 00:00:10.830
just careful reasoning? That fundamental shift

00:00:10.830 --> 00:00:13.210
is here. I mean, it's officially here. It's what

00:00:13.210 --> 00:00:16.149
Google is calling their new thinking model. And

00:00:16.149 --> 00:00:19.550
crucially, it's paired with this massive infrastructural

00:00:19.550 --> 00:00:23.050
upgrade. We're talking a 1 million token context

00:00:23.050 --> 00:00:26.289
window. This is not just another small step.

00:00:26.410 --> 00:00:29.989
It really changes the core architecture of how

00:00:29.989 --> 00:00:33.130
we can handle huge amounts of data. Welcome to

00:00:33.130 --> 00:00:35.880
the Deep Dive. Today, we're doing a really critical

00:00:35.880 --> 00:00:40.200
analysis of Google's new Gemini 3 Pro. And based

00:00:40.200 --> 00:00:43.060
on some pretty extensive real-life testing we've

00:00:43.060 --> 00:00:45.759
done across a lot of different uses from finance

00:00:45.759 --> 00:00:49.100
to visual puzzles, the hype seems, well, it seems

00:00:49.100 --> 00:00:51.240
genuinely warranted. It does. It feels like a

00:00:51.240 --> 00:00:53.640
very serious contender in the high-end AI space

00:00:53.640 --> 00:00:56.439
now, maybe an unavoidable one. So our mission

00:00:56.439 --> 00:00:58.420
for this deep dive is to really distill what

00:00:58.420 --> 00:01:00.579
makes this model unique. We're going to unpack

00:01:00.579 --> 00:01:02.840
the philosophy behind that thinking model and

00:01:02.840 --> 00:01:05.200
explain what that super memory means in a practical

00:01:05.200 --> 00:01:07.560
sense. And then get into the test results. Exactly.

00:01:07.620 --> 00:01:09.560
We're talking about building interactive dashboards,

00:01:09.659 --> 00:01:11.900
solving some tricky visual puzzles, and even

00:01:11.900 --> 00:01:16.079
its surprising ability to analyze silent video

00:01:16.079 --> 00:01:18.500
footage. OK, so let's unpack that core concept

00:01:18.500 --> 00:01:22.480
first. Gemini 3 Pro. It's the first in this new

00:01:22.480 --> 00:01:25.579
Gemini 3 series, and Google is deliberately calling

00:01:25.579 --> 00:01:29.239
it a thinking model. Right. That sounds great.

00:01:29.400 --> 00:01:31.439
But what does that actually mean for someone

00:01:31.439 --> 00:01:34.819
using it day to day? It's a huge shift away from

00:01:34.819 --> 00:01:36.920
latency minimization, you know, that obsession

00:01:36.920 --> 00:01:39.409
with giving you an answer instantly. OK. This

00:01:39.409 --> 00:01:42.469
model is built to pause. When you ask it something

00:01:42.469 --> 00:01:45.170
complex, it doesn't just spit back the first

00:01:45.170 --> 00:01:48.370
thing it finds. It engages in what you could

00:01:48.370 --> 00:01:51.129
call an internal reasoning chain. So less like

00:01:51.129 --> 00:01:53.090
a search engine and more like someone actually

00:01:53.090 --> 00:01:55.290
thinking a problem through. Exactly. Think of

00:01:55.290 --> 00:01:57.549
it like someone carefully working through a multi-

00:01:57.549 --> 00:02:00.010
step math problem instead of just recalling

00:02:00.010 --> 00:02:02.590
an answer. So those extra, what, 10 to 20 seconds

00:02:02.590 --> 00:02:04.870
it takes to respond? That's deliberate. That's

00:02:04.870 --> 00:02:07.959
not wasted time. That's the key. During that

00:02:07.959 --> 00:02:10.120
delay, it's essentially running premortems on

00:02:10.120 --> 00:02:12.539
its own logic. It's checking for consistency,

00:02:13.039 --> 00:02:15.240
making sure step five still aligns with the rules

00:02:15.240 --> 00:02:18.080
you set back in step one. And that rigor is what

00:02:18.080 --> 00:02:20.020
leads to the higher quality. Precisely. It's

00:02:20.020 --> 00:02:22.879
why the final outputs are just so much more accurate.

00:02:23.159 --> 00:02:25.560
And the performance data seems to back this up.

00:02:25.699 --> 00:02:27.919
I mean, we're seeing benchmarks where Gemini

00:02:27.919 --> 00:02:31.439
3 Pro beat its competitors. Like Claude and the

00:02:31.439 --> 00:02:33.919
latest GPT models. By the biggest performance

00:02:33.919 --> 00:02:35.740
gap ever recorded in these kinds of head-to-

00:02:35.740 --> 00:02:38.120
head tests. Right, but reasoning is only half

00:02:38.120 --> 00:02:40.259
the picture. You can have the best reasoning

00:02:40.259 --> 00:02:42.740
in the world, but if you forget what the initial

00:02:42.740 --> 00:02:45.879
request was, it's useless. Which brings us to

00:02:45.879 --> 00:02:49.629
the memory. The context window. The other massive

00:02:49.629 --> 00:02:51.849
technical leap here. Okay, so let's talk numbers.

00:02:52.009 --> 00:02:54.110
Remind us what a context window is in a practical

00:02:54.110 --> 00:02:57.090
sense. It's basically the AI's short-term memory

00:02:57.090 --> 00:02:58.909
during your conversation. It's all the info it

00:02:58.909 --> 00:03:01.129
can hold in its head at once. And most models

00:03:01.129 --> 00:03:04.550
today are around, what, 256,000 tokens? Yeah,

00:03:04.710 --> 00:03:06.509
around there, which is already pretty impressive.

00:03:06.969 --> 00:03:10.080
And a token is... you know, roughly a word or

00:03:10.080 --> 00:03:13.300
part of a word. But Gemini 3 Pro is quadrupling

00:03:13.300 --> 00:03:16.819
that. It's quadrupling it up to a full one million

00:03:16.819 --> 00:03:19.099
tokens. So you can think of the super memory

00:03:19.099 --> 00:03:22.360
like upgrading the AI's RAM by four times. So

00:03:22.360 --> 00:03:25.300
the practical benefit for you listening is what?

00:03:25.360 --> 00:03:28.960
It means the AI can digest truly huge documents.
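As a back-of-envelope check on what fits in that window, here is a minimal sketch. It assumes the common rough heuristic of about four characters per token for English text; real tokenizers vary, so treat these as ballpark numbers only.

```python
# Rough sketch: will a document fit in a context window?
# Assumes ~4 characters per token (a heuristic, not a real tokenizer).

CHARS_PER_TOKEN = 4  # rough heuristic for English text

def estimated_tokens(text: str) -> int:
    """Ballpark token count for a piece of English text."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_window(text: str, window_tokens: int = 1_000_000) -> bool:
    """True if the text's estimated token count fits in the window."""
    return estimated_tokens(text) <= window_tokens

# A 300-page report at roughly 3,000 characters per page:
report = "x" * (300 * 3_000)              # ~900,000 characters
print(estimated_tokens(report))           # ~225,000 tokens
print(fits_in_window(report))             # True: well inside 1M tokens
print(fits_in_window(report, window_tokens=256_000))  # True, but close
print(fits_in_window("y" * 5_000_000))    # False: ~1.25M tokens
```

By this estimate, a 300-page report uses only about a quarter of the million-token window, while it would nearly fill a 256,000-token one.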

00:03:29.020 --> 00:03:31.580
We're talking thick legal contracts, 300-page

00:03:31.580 --> 00:03:34.439
reports, even entire books without forgetting

00:03:34.439 --> 00:03:36.780
the details from page one. So it just reduces

00:03:36.780 --> 00:03:39.400
that management overhead. You don't have

00:03:39.400 --> 00:03:41.520
to keep reminding it of things. That's it. It

00:03:41.520 --> 00:03:43.560
just manages the complexity better. It gets rid

00:03:43.560 --> 00:03:45.939
of that frustrating moment where an AI gets three

00:03:45.939 --> 00:03:47.680
quarters through a project and starts making

00:03:47.680 --> 00:03:50.240
things up because its memory just ran out. The

00:03:50.240 --> 00:03:52.939
best news for getting started, it's free. It's

00:03:52.939 --> 00:03:54.819
accessible right now. Yep. Just go to

00:03:54.819 --> 00:03:57.699
gemini.google.com and it's the default model. But

00:03:57.699 --> 00:03:59.900
you have an important tip here. A very important

00:03:59.900 --> 00:04:02.240
tip. To make sure you're actually using this

00:04:02.240 --> 00:04:04.840
high-quality thinking model, you have to go

00:04:04.840 --> 00:04:07.639
into the settings and find the specific option

00:04:07.639 --> 00:04:10.280
to turn it on. Otherwise, it might default to

00:04:10.280 --> 00:04:13.060
a faster, less thorough mode. Exactly. And for

00:04:13.060 --> 00:04:15.560
developers listening, it's already integrated

00:04:15.560 --> 00:04:19.920
into Google AI Studio and Vertex AI. So, if the

00:04:19.920 --> 00:04:23.379
model is designed to pause... Is that extra quality

00:04:23.379 --> 00:04:27.459
really worth the annoyance of waiting? For complex

00:04:27.459 --> 00:04:31.259
tasks, the quality leap is undeniable. That waiting

00:04:31.259 --> 00:04:34.160
period is where the magic happens. Speaking of

00:04:34.160 --> 00:04:35.920
complexity, let's look at the first real-life

00:04:35.920 --> 00:04:38.779
test, building interactive dashboards. We gave

00:04:38.779 --> 00:04:41.579
it a deliberately practical challenge. We asked

00:04:41.579 --> 00:04:44.120
it to create a full financial calculator for

00:04:44.120 --> 00:04:47.000
a multi-unit rental property. Not just math,

00:04:47.310 --> 00:04:50.009
but a structured, interactive tool. Right. So

00:04:50.009 --> 00:04:52.410
the prompt had a lot of variables. Purchase price,

00:04:52.589 --> 00:04:55.290
loan rate, down payment. Income, costs, all the

00:04:55.290 --> 00:04:57.529
usual stuff. But the real challenge we threw

00:04:57.529 --> 00:04:59.790
in was an interactive slider for the vacancy

00:04:59.790 --> 00:05:01.870
rate. OK, so the user could drag it from, what,

00:05:01.930 --> 00:05:04.629
0% to 30%? Exactly. And the result was genuinely

00:05:04.629 --> 00:05:06.930
impressive. It built a full functional dashboard

00:05:06.930 --> 00:05:08.970
right there in the chat. So as you move the slider.

00:05:09.089 --> 00:05:11.610
The profit numbers instantly update. You can

00:05:11.610 --> 00:05:14.009
see your profit shrink or even watch the property

00:05:14.009 --> 00:05:17.290
start losing money in real time. And it also

00:05:17.290 --> 00:05:20.730
very smartly included a Generate Report button

00:05:20.730 --> 00:05:23.470
that gave you a written summary based on wherever

00:05:23.470 --> 00:05:26.089
the slider was set. That is a huge time saver

00:05:26.089 --> 00:05:28.310
for just checking if an idea is feasible. It

00:05:28.310 --> 00:05:30.750
is. And a big pro tip here is you can share these

00:05:30.750 --> 00:05:33.709
tools just by sending the Gemini link so you

00:05:33.709 --> 00:05:35.709
can collaborate on the model with someone else.
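The cash-flow math behind a dashboard like that can be sketched in a few lines. The function names, the 30-year amortization formula, and the sample figures below are our own illustration of the idea, not the code the model actually generated.

```python
# Sketch of the cash-flow math behind a rental-property dashboard.
# All names and numbers here are illustrative assumptions.

def monthly_mortgage(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard amortized monthly payment on a fixed-rate loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

def monthly_cash_flow(purchase_price: float, down_payment: float,
                      annual_rate: float, gross_rent: float,
                      operating_costs: float, vacancy_rate: float) -> float:
    """Net monthly cash flow; vacancy_rate is the 'slider', 0.0 to 0.30."""
    loan = purchase_price - down_payment
    effective_rent = gross_rent * (1 - vacancy_rate)
    return effective_rent - operating_costs - monthly_mortgage(loan, annual_rate)

# Dragging the vacancy slider from 0% to 30%:
for vacancy in (0.0, 0.10, 0.20, 0.30):
    cf = monthly_cash_flow(purchase_price=500_000, down_payment=100_000,
                           annual_rate=0.06, gross_rent=4_000,
                           operating_costs=1_200, vacancy_rate=vacancy)
    print(f"vacancy {vacancy:.0%}: cash flow ${cf:,.0f}/mo")
```

With these sample numbers, the property is cash-flow positive at 0% vacancy and flips to losing money by the time the slider reaches 30%, which is exactly the behavior the hosts describe watching in real time.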

00:05:35.790 --> 00:05:38.629
OK, so from complex math to pure writing, the

00:05:38.629 --> 00:05:41.740
next test was about nuance, right? Yeah, passing

00:05:41.740 --> 00:05:44.519
subtle, human-imposed constraints. The challenge

00:05:44.519 --> 00:05:49.860
was to write a 700-word SEO article on AI in

00:05:49.860 --> 00:05:52.240
marketing. For a non-technical audience, using

00:05:52.240 --> 00:05:55.000
web search for current info. And here's the tricky

00:05:55.000 --> 00:05:58.180
part. It was forbidden from using em dashes. A

00:05:58.180 --> 00:06:00.920
style rule that even human writers mess up all

00:06:00.920 --> 00:06:02.939
the time. And how did it do? The writing was

00:06:02.939 --> 00:06:05.899
excellent, very natural, very helpful, and most

00:06:05.899 --> 00:06:09.029
importantly, it worked. No em dashes. It showed

00:06:09.029 --> 00:06:11.569
a really high level of obedience to a subtle

00:06:11.569 --> 00:06:13.769
style guide. But there was a small issue. There

00:06:13.769 --> 00:06:16.949
was. And here's a vulnerable admission for me.

00:06:17.370 --> 00:06:19.990
I still wrestle with prompt drift myself, and

00:06:19.990 --> 00:06:22.209
even this top tier model had a little bit of

00:06:22.209 --> 00:06:24.129
it. What happened? It missed the word count.

00:06:24.689 --> 00:06:27.990
The goal was exactly 700 words, and it produced

00:06:27.990 --> 00:06:31.670
785. And for anyone who hasn't run into it, what

00:06:31.670 --> 00:06:34.339
is prompt drift? It's when the model deep in

00:06:34.339 --> 00:06:36.800
a conversation starts to prioritize the quality

00:06:36.800 --> 00:06:39.220
of what it's creating over your original structural

00:06:39.220 --> 00:06:41.519
rules. So it wanted to finish the thought properly,

00:06:41.720 --> 00:06:43.720
even if it meant going over the word count. That's

00:06:43.720 --> 00:06:45.759
my guess. It's also just a reminder that these

00:06:45.759 --> 00:06:48.660
AIs count in tokens, not exact words. So there's

00:06:48.660 --> 00:06:51.220
always a bit of ambiguity there. These dashboards

00:06:51.220 --> 00:06:54.639
seem great for initial modeling, but how reliable

00:06:54.639 --> 00:06:57.899
are they for serious financial work? They're

00:06:57.899 --> 00:07:01.209
outstanding for that initial analysis. For precision,

00:07:01.610 --> 00:07:03.250
always double-check the logic, but the structure

00:07:03.250 --> 00:07:06.329
is very reliable. Okay, moving beyond text and

00:07:06.329 --> 00:07:09.290
numbers into the area where this model is really

00:07:09.290 --> 00:07:12.990
expected to shine. Multimodal power. Right, analyzing

00:07:12.990 --> 00:07:16.389
images and video. So test three was about visual

00:07:16.389 --> 00:07:19.470
reasoning. We gave it two classic spatial puzzles.

00:07:20.110 --> 00:07:22.949
The first was a photo of stacked colored cubes

00:07:22.949 --> 00:07:25.430
and we asked it to count the total, including

00:07:25.430 --> 00:07:27.899
the ones it couldn't see. A puzzle that famously

00:07:27.899 --> 00:07:31.040
trips up older AIs. They just can't infer what

00:07:31.040 --> 00:07:33.560
isn't directly visible. But Gemini 3 Pro got

00:07:33.560 --> 00:07:36.319
it. Yep. It took about 10 seconds, using that

00:07:36.319 --> 00:07:38.300
thinking model, and then it showed its work.

00:07:38.779 --> 00:07:40.639
It explained how it counted the hidden blocks

00:07:40.639 --> 00:07:42.939
needed for support, and it gave the correct total.
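One way to formalize that hidden-block reasoning: model the stack as a top-down height map, where every cube above ground level needs a supporting cube beneath it, so the total count is just the sum of all column heights, visible or not. The grid below is a made-up example, not the actual puzzle from the test.

```python
# Counting cubes in a stack, including hidden supporting ones.
# The stack is modeled as a height map: how many cubes tall each
# grid position is. Since every raised cube must be supported from
# below, the total is the sum of all column heights.

def total_cubes(height_map: list[list[int]]) -> int:
    """Count all cubes in a stack, seen and hidden alike."""
    return sum(sum(row) for row in height_map)

# A 3x3 stepped stack: back row 3 high, middle 2, front 1.
# From the front you only see the faces; the supports are inferred.
stack = [
    [3, 3, 3],  # back row
    [2, 2, 2],  # middle row
    [1, 1, 1],  # front row
]
print(total_cubes(stack))  # 18
```

The interesting part of the puzzle is inferring the height map itself from a single photo; once you have it, the counting step is trivial arithmetic like this.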

00:07:43.220 --> 00:07:45.720
And the second puzzle? It was identifying the top-

00:07:45.720 --> 00:07:48.660
down view of a pyramid based on subtle color

00:07:48.660 --> 00:07:51.079
patterns from different photos. It got that one

00:07:51.079 --> 00:07:53.839
right too, explaining its reasoning based on

00:07:53.839 --> 00:07:56.060
how the colors had to line up. That level of

00:07:56.060 --> 00:07:59.259
spatial analysis is a huge deal. It's not just

00:07:59.259 --> 00:08:01.740
recognizing an object. It's not. And this is

00:08:01.740 --> 00:08:04.519
critical for fields like architecture, engineering,

00:08:04.920 --> 00:08:07.220
or even medical image analysis, where you need

00:08:07.220 --> 00:08:09.459
to understand what lies beneath the surface.

00:08:09.639 --> 00:08:11.399
The next test was maybe even more impressive.

00:08:11.629 --> 00:08:14.689
Video analysis, but with no sound. This is a

00:08:14.689 --> 00:08:16.689
breakthrough capability I think people are sleeping

00:08:16.689 --> 00:08:20.009
on. I uploaded a silent five-minute screen recording

00:08:20.009 --> 00:08:22.490
of me just working. And you just asked it, what

00:08:22.490 --> 00:08:25.430
am I doing? Pretty much. What am I doing and

00:08:25.430 --> 00:08:28.589
what info can you see? And the result was? Startlingly

00:08:28.589 --> 00:08:31.449
detailed. It spotted a tiny pop-up notification

00:08:31.449 --> 00:08:34.610
that was only on screen for a second. It read

00:08:34.610 --> 00:08:38.250
my name from the user interface. And it accurately

00:08:38.250 --> 00:08:40.529
described what I was doing, saying I was changing

00:08:40.529 --> 00:08:43.450
parameters in a financial calculator tool. It

00:08:43.450 --> 00:08:45.769
understood the purpose of my actions from visuals

00:08:45.769 --> 00:08:49.009
alone. Whoa. Yeah. Just imagine scaling that.

00:08:49.269 --> 00:08:51.470
Imagine scaling that kind of video understanding

00:08:51.470 --> 00:08:55.730
to a billion security camera queries. Oh. Or

00:08:55.730 --> 00:08:58.149
analyzing sports footage. Exactly. We did another

00:08:58.149 --> 00:09:00.389
test with a pickleball clip, and it worked perfectly

00:09:00.389 --> 00:09:02.809
there, too. It opens up entirely new workflows.

00:09:02.990 --> 00:09:05.210
And the final test in this segment was building

00:09:05.210 --> 00:09:08.940
a simple app. A game? It did. It created a playable

00:09:08.940 --> 00:09:11.460
game called Neon Swarm, kind of like Galaga,

00:09:11.659 --> 00:09:13.980
using the system's Canvas tool. It had keyboard

00:09:13.980 --> 00:09:17.120
controls, score tracking, the works. So crucial

00:09:17.120 --> 00:09:19.279
tip for anyone listening who wants to try making

00:09:19.279 --> 00:09:21.320
something visual like that. Always, always make

00:09:21.320 --> 00:09:23.220
sure you've activated the Canvas tool. That's

00:09:23.220 --> 00:09:24.940
the environment where it can actually execute

00:09:24.940 --> 00:09:27.399
and build these interactive things. Okay. And

00:09:27.399 --> 00:09:29.399
you had a quick note on debugging these projects.

00:09:29.639 --> 00:09:31.860
I do. If you go back and forth with the AI,

00:09:32.250 --> 00:09:35.870
more than, say, 10 times, the canvas can sometimes

00:09:35.870 --> 00:09:38.629
get confused and things start to break. So what's

00:09:38.629 --> 00:09:41.750
the fix? Honestly, it's usually faster to just

00:09:41.750 --> 00:09:44.190
start a new chat with a clean prompt than to

00:09:44.190 --> 00:09:46.990
try and fix the old one. So if the model is so

00:09:46.990 --> 00:09:50.009
capable, what's the biggest limiting factor right

00:09:50.009 --> 00:09:52.529
now when you're trying to create these complex

00:09:52.529 --> 00:09:55.909
visual tools? It's that the memory of the tool's

00:09:55.909 --> 00:09:58.269
visual state can sometimes break after too many

00:09:58.269 --> 00:10:01.250
back and forth refinements. A fresh chat usually

00:10:01.250 --> 00:10:04.049
solves it. Let's talk about daily workflow. We've

00:10:04.049 --> 00:10:05.870
confirmed the power of the thinking model, but

00:10:05.870 --> 00:10:08.929
it takes 10 to 20 seconds. In a fast-paced job,

00:10:09.210 --> 00:10:11.990
isn't that delay a deal-breaker? It's a fair

00:10:11.990 --> 00:10:14.509
question. That wait time does feel a little annoying

00:10:14.509 --> 00:10:16.789
at first. We're all conditioned for instant AI

00:10:16.789 --> 00:10:18.950
replies. But the trade-off is that the quality

00:10:18.950 --> 00:10:21.090
is so much higher that you spend less time revising

00:10:21.090 --> 00:10:23.669
and fact-checking. For high-stakes work like

00:10:23.669 --> 00:10:26.429
drafting a contract summary, that depth is absolutely

00:10:26.429 --> 00:10:29.190
worth the wait. You use fast tools for fast work,

00:10:29.320 --> 00:10:31.639
and this for deliberate work. For those ready

00:10:31.639 --> 00:10:34.179
to adopt it, you mentioned two essential settings

00:10:34.179 --> 00:10:37.500
to turn on right away. Yes. First is personal

00:10:37.500 --> 00:10:41.399
context. Turn this on. It lets Gemini learn your

00:10:41.399 --> 00:10:45.220
style, your tone, your jargon from past chats.

00:10:45.879 --> 00:10:48.340
So it becomes more personalized over time. Exactly.

00:10:48.539 --> 00:10:50.799
It'll start writing sales emails in your formal

00:10:50.799 --> 00:10:53.019
tone automatically, for example. And the second

00:10:53.019 --> 00:10:56.259
must-have setting. Custom instructions. This

00:10:56.259 --> 00:10:58.259
is where you set permanent rules for every single

00:10:58.259 --> 00:11:01.190
chat. Things like always avoid complex jargon

00:11:01.190 --> 00:11:04.470
or frame all advice around website speed. And

00:11:04.470 --> 00:11:06.350
you're saying these stick much better in this

00:11:06.350 --> 00:11:08.309
new version. Far more reliably than they did

00:11:08.309 --> 00:11:11.450
before. Yes. OK, what about the AI mode in Google

00:11:11.450 --> 00:11:14.080
search? How did that perform against traditional

00:11:14.080 --> 00:11:16.419
search? I ran a comparison looking for a hotel

00:11:16.419 --> 00:11:19.559
in a specific price range. Regular Google search

00:11:19.559 --> 00:11:23.220
was perfect. Fast, precise, clean links. The

00:11:23.220 --> 00:11:25.600
AI mode was inconsistent. It would sometimes

00:11:25.600 --> 00:11:28.360
show conflicting information. How so? The little

00:11:28.360 --> 00:11:30.940
AI-generated summary might quote one price, but

00:11:30.940 --> 00:11:32.820
the actual link it provided showed a price that

00:11:32.820 --> 00:11:35.679
was 50% higher. It was hallucinating details

00:11:35.679 --> 00:11:37.720
to make the summary sound good. So the problem

00:11:37.720 --> 00:11:40.659
isn't speed, it's reliability for real-time

00:11:40.659 --> 00:11:44.399
facts. Exactly. For simple factual queries, regular

00:11:44.399 --> 00:11:46.700
Google search is still much more reliable right

00:11:46.700 --> 00:11:48.500
now. And we have to talk about the future here.

00:11:49.159 --> 00:11:51.500
Agent mode. This is the big one. This is the

00:11:51.500 --> 00:11:54.139
idea that signals the shift from a chat tool

00:11:54.139 --> 00:11:57.149
to a digital colleague. So what does it do? It

00:11:57.149 --> 00:12:00.169
can autonomously do things on the web for you.

00:12:00.269 --> 00:12:02.549
It's not just giving you information, it's acting

00:12:02.549 --> 00:12:04.669
on it. Like booking a reservation or filling

00:12:04.669 --> 00:12:08.409
out forms. Exactly. It moves the AI from being

00:12:08.409 --> 00:12:11.070
a passive answer generator to an active doer

00:12:11.070 --> 00:12:13.909
in your workflow. Given those search inconsistencies,

00:12:14.110 --> 00:12:16.990
should users rely on any AI, including this one,

00:12:17.409 --> 00:12:20.759
for precise real-time data? Not yet for simple

00:12:20.759 --> 00:12:23.000
facts. Regular search is still king for that.

00:12:23.200 --> 00:12:25.419
But for complex analysis of data you provide,

00:12:25.700 --> 00:12:28.120
the thinking model is far superior. So let's

00:12:28.120 --> 00:12:30.220
bring it all together. Where does Gemini 3 Pro

00:12:30.220 --> 00:12:32.860
really shine in a professional workflow? I'd

00:12:32.860 --> 00:12:35.580
say three clear areas. First, creating structured

00:12:35.580 --> 00:12:38.399
things like interactive tools. Second, deep reasoning

00:12:38.399 --> 00:12:41.399
for complex problems. And third, that advanced

00:12:41.399 --> 00:12:43.679
multimodal analysis. And the context window.

00:12:43.899 --> 00:12:46.820
Honestly, the 1 million token context window

00:12:46.820 --> 00:12:49.419
alone is a compelling reason to try it, especially

00:12:49.419 --> 00:12:51.700
if you work with long documents. And compared

00:12:51.700 --> 00:12:54.549
to the competition, where does it fit? Well,

00:12:54.750 --> 00:12:57.409
ChatGPT still has the edge on third-party integrations.

00:12:57.610 --> 00:13:00.970
The app ecosystem is huge. Claude is still fantastic,

00:13:01.149 --> 00:13:03.610
especially for high -level coding. But Gemini

00:13:03.610 --> 00:13:06.990
3 Pro is a serious contender. A very serious

00:13:06.990 --> 00:13:09.110
one. And if you're already deep in the Google

00:13:09.110 --> 00:13:13.110
ecosystem, Gmail, Docs, Workspace, the integration

00:13:13.110 --> 00:13:15.850
is so seamless, it makes it a really powerful

00:13:15.850 --> 00:13:18.029
and easy choice. So what's your final advice

00:13:18.029 --> 00:13:21.169
for our listeners today? If you're totally happy

00:13:21.169 --> 00:13:23.250
with your current tools for simple things, there's

00:13:23.250 --> 00:13:25.789
no need to rush. But you should absolutely try

00:13:25.789 --> 00:13:28.769
Gemini 3 Pro for three specific tasks. Which

00:13:28.769 --> 00:13:31.730
are? Interactive tool creation, any kind of complex

00:13:31.730 --> 00:13:34.769
analysis that requires deep reasoning, and processing

00:13:34.769 --> 00:13:37.789
any long document that's over, say, 100 pages.

00:13:38.009 --> 00:13:40.389
So the core features, the reasoning, the quality,

00:13:41.190 --> 00:13:43.889
they're top-notch. They are, despite some minor

00:13:43.889 --> 00:13:46.649
issues like the word count thing. We really encourage

00:13:46.649 --> 00:13:49.049
you to try them all yourself. It's at

00:13:49.049 --> 00:13:52.269
gemini.google.com. And remember to go into those settings

00:13:52.269 --> 00:13:55.509
and turn on the thinking model option to really

00:13:55.509 --> 00:13:58.169
test it on your own complex work. And building

00:13:58.169 --> 00:14:01.769
on that idea of agent mode, just imagine an AI

00:14:01.769 --> 00:14:04.009
not just answering your questions, but automatically

00:14:04.009 --> 00:14:06.389
completing the next three steps of your workflow.

00:14:06.649 --> 00:14:09.330
Scheduling, drafting, data entry. All of it,

00:14:09.509 --> 00:14:11.590
autonomously. What kind of fundamental shift

00:14:11.590 --> 00:14:14.409
happens to our idea of work when the chat tool

00:14:14.409 --> 00:14:17.129
becomes a true digital colleague? Thank you for

00:14:17.129 --> 00:14:18.350
joining us for the deep dive.
