WEBVTT

00:00:00.000 --> 00:00:02.120
You know, when you look across the current AI

00:00:02.120 --> 00:00:05.120
landscape, the story has almost always been bigger

00:00:05.120 --> 00:00:07.400
is better. Right. We're kind of constantly waiting

00:00:07.400 --> 00:00:10.720
for the next massive foundational model. But

00:00:10.720 --> 00:00:13.519
what happens when that narrative just flips entirely?

00:00:14.179 --> 00:00:16.800
What if a lightweight model, something like 25

00:00:16.800 --> 00:00:19.739
times cheaper to run than the big guys, actually...

00:00:20.170 --> 00:00:23.410
beats them decisively on key tasks? It just turns

00:00:23.410 --> 00:00:25.170
everything upside down, doesn't it? That whole

00:00:25.170 --> 00:00:28.030
paradox of speed versus scale, performance versus

00:00:28.030 --> 00:00:30.370
price, that's exactly what we're going to crack

00:00:30.370 --> 00:00:33.570
open today. Welcome back to the deep dive. We've

00:00:33.570 --> 00:00:36.369
sifted through a pile of the latest, most disruptive

00:00:36.369 --> 00:00:38.590
AI intelligence, and we're here to give you the

00:00:38.590 --> 00:00:40.810
fastest way to get, you know, truly up to speed.

00:00:40.950 --> 00:00:42.850
Yeah, our mission is really to filter out all

00:00:42.850 --> 00:00:44.890
that noise. We'll start with that shocking underdog

00:00:44.890 --> 00:00:48.090
model, Grok 4 Fast, and how it's kind of rewriting

00:00:48.090 --> 00:00:50.429
the economics of deploying this stuff. And then

00:00:50.429 --> 00:00:52.149
we'll pivot to the strategic side, things like

00:00:52.149 --> 00:00:54.770
major universities adopting ChatGPT and some

00:00:54.770 --> 00:00:57.770
pretty crucial policy shifts that, frankly, demand

00:00:57.770 --> 00:01:00.250
immediate action from you, especially about your

00:01:00.250 --> 00:01:03.159
data. And finally, we hit what feels like the

00:01:03.159 --> 00:01:05.299
ultimate reality check for the whole industry

00:01:05.299 --> 00:01:08.280
right now. We dive into this new gold standard

00:01:08.280 --> 00:01:11.280
for testing AI coding agents, a really tough

00:01:11.280 --> 00:01:14.980
benchmark called SWE-Bench Pro. And the results

00:01:14.980 --> 00:01:18.120
are, well, pretty sobering. Reminds us how far

00:01:18.120 --> 00:01:20.959
AI still has to go on its own. So let's start

00:01:20.959 --> 00:01:23.480
the deep dive. Okay, so xAI just dropped Grok

00:01:23.480 --> 00:01:26.599
4 Fast. Now, they position it as this lightweight,

00:01:26.859 --> 00:01:29.040
cost-efficient follow-up to their main Grok

00:01:29.040 --> 00:01:32.060
4 model. But honestly, it's way more than just

00:01:32.060 --> 00:01:34.560
a footnote. It immediately blasted into the global

00:01:34.560 --> 00:01:37.019
top 10 on the benchmarks. That's a huge statement

00:01:37.019 --> 00:01:38.819
right out of the gate. The efficiency numbers

00:01:38.819 --> 00:01:40.700
are just staggering. Our sources are confirming

00:01:40.700 --> 00:01:43.340
it runs roughly 25 times cheaper than competitors

00:01:43.340 --> 00:01:46.480
like Gemini 2.5 Pro. This isn't just tweaking.

00:01:46.500 --> 00:01:48.599
It's fundamental. It's using way fewer computational

00:01:48.599 --> 00:01:50.780
resources. Like, for example, it only needed

00:01:50.780 --> 00:01:53.620
about 61 million tokens for its standard benchmark

00:01:53.620 --> 00:01:56.579
runs. The full Grok 4? That used 120 million.
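
NOTE
A quick back-of-envelope on those figures. Only the token counts come from the discussion; the dollar amount is an illustrative assumption, not real pricing.
```python
# Rough arithmetic behind the efficiency claims quoted above. The token
# counts are from the discussion; the dollar cost is an assumed example.
grok4_tokens = 120_000_000       # full Grok 4, standard benchmark runs
grok4_fast_tokens = 61_000_000   # Grok 4 Fast on the same runs
reduction = 1 - grok4_fast_tokens / grok4_tokens
print(f"Token reduction: {reduction:.0%}")   # roughly half the tokens
rival_cost = 100.00              # assumed cost of a rival's workload, in dollars
fast_cost = rival_cost / 25      # the "25 times cheaper" claim applied directly
print(f"${rival_cost:.2f} vs ${fast_cost:.2f}")
```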

00:01:56.819 --> 00:02:00.340
Wow. And that huge drop in tokens means less

00:02:00.340 --> 00:02:02.480
processing, right, which translates directly

00:02:02.480 --> 00:02:05.640
into lower costs and much faster answers. This

00:02:05.640 --> 00:02:07.760
is the kind of model that makes AI suddenly seem

00:02:07.760 --> 00:02:10.199
viable for, you know, every small team, every

00:02:10.199 --> 00:02:12.479
startup with an idea. But the real story, I think,

00:02:12.500 --> 00:02:14.539
is the performance it keeps, despite being so

00:02:14.539 --> 00:02:17.419
lightweight. This thing is, well, it's a powerhouse.

00:02:17.520 --> 00:02:19.419
It shot straight to number one in web search

00:02:19.419 --> 00:02:21.719
performance on the LMArena search arena. That's

00:02:21.719 --> 00:02:24.000
a really competitive space. Yeah, that's impressive

00:02:24.000 --> 00:02:25.979
for sure. But here's where the tech side gets

00:02:25.979 --> 00:02:28.770
really interesting. It beat both the older Grok

00:02:28.770 --> 00:02:32.629
4 and GPT-5 on those complex LiveCodeBench

00:02:32.629 --> 00:02:35.430
tests. And look at the math scores, too. It's

00:02:35.430 --> 00:02:38.009
competing where AI usually, you know, struggles

00:02:38.009 --> 00:02:40.030
with actual reasoning, not just crunching numbers.

00:02:40.189 --> 00:02:45.580
Getting 92% on AIME 2025, 93.3% on HMMT. For

00:02:45.580 --> 00:02:47.860
people who don't know, AIME and HMMT are tough

00:02:47.860 --> 00:02:50.379
math competitions. They test real problem solving,

00:02:50.520 --> 00:02:53.039
not just calculation. Scoring that high? That

00:02:53.039 --> 00:02:55.219
means serious reasoning ability. Right. That

00:02:55.219 --> 00:02:57.699
level of consistency, across search, coding, and complex math,

00:02:57.699 --> 00:02:59.740
is kind of astonishing for a model that's 25

00:02:59.740 --> 00:03:02.009
times cheaper to run. And the way they achieve

00:03:02.009 --> 00:03:04.270
this efficiency is fascinating. They built this

00:03:04.270 --> 00:03:06.849
unified architecture, right? It runs in two modes.

00:03:07.050 --> 00:03:10.590
A thinking mode for the hard stuff, complex reasoning,

00:03:10.830 --> 00:03:14.189
and a fast mode for quick, simple answers. But

00:03:14.189 --> 00:03:16.770
it's smarter than just two gears. It seems like

00:03:16.770 --> 00:03:20.060
it uses tokens dynamically. The system only kicks

00:03:20.060 --> 00:03:22.560
into that expensive thinking mode if its confidence

00:03:22.560 --> 00:03:25.419
dips below some threshold, so it doesn't waste

00:03:25.419 --> 00:03:28.139
processing power on easy questions. That's the
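
NOTE
That confidence-gated routing can be sketched in a few lines. Everything here is a hypothetical illustration: the threshold value and the length-based estimate_confidence() heuristic are stand-ins, not xAI's actual mechanism.
```python
# Hypothetical sketch of confidence-gated mode routing, as described above.
# The threshold and the confidence heuristic are illustrative stand-ins only.
THINKING_THRESHOLD = 0.75
def estimate_confidence(prompt: str) -> float:
    # Stand-in heuristic; a real system would use a learned estimator.
    return 0.9 if len(prompt) < 40 else 0.5
def route(prompt: str) -> str:
    """Easy queries get the cheap fast mode; hard ones get thinking mode."""
    if estimate_confidence(prompt) >= THINKING_THRESHOLD:
        return "fast"      # quick, low-token answer
    return "thinking"      # expensive multi-step reasoning
```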

00:03:28.139 --> 00:03:30.800
really clever part, yeah. This is a direct shot

00:03:30.800 --> 00:03:33.460
at the OpenAI pricing wall, basically. It's offering

00:03:33.460 --> 00:03:36.430
Claude- or Gemini-level power, but at a price

00:03:36.430 --> 00:03:38.729
that makes advanced AI suddenly accessible for

00:03:38.729 --> 00:03:41.449
like indie developers. It really shifts the economic

00:03:41.449 --> 00:03:43.389
barrier for anyone wanting to build something

00:03:43.389 --> 00:03:45.930
new. So, bottom line on the economics: cheaper,

00:03:45.930 --> 00:03:48.189
faster models make advanced AI accessible for

00:03:48.189 --> 00:03:50.860
small teams and new products. Definitely. OK,

00:03:50.919 --> 00:03:53.099
let's pivot a bit from deployment economics to

00:03:53.099 --> 00:03:56.039
the bigger strategic picture and policy shifts.

00:03:56.259 --> 00:03:58.460
We're seeing mass adoption now in institutions.

00:03:58.900 --> 00:04:01.379
Look at Oxford University, first in the UK to

00:04:01.379 --> 00:04:03.900
give all students and staff free access to ChatGPT

00:04:03.900 --> 00:04:06.960
Edu. That's huge. Yeah, because institutional

00:04:06.960 --> 00:04:09.439
adoption makes it normal. Right. Not just some

00:04:09.439 --> 00:04:11.860
tool for students to cheat, but like an actual

00:04:11.860 --> 00:04:14.800
part of professional work and education. It's

00:04:14.800 --> 00:04:17.240
a big stamp of approval. And on the really practical

00:04:17.240 --> 00:04:20.069
personal level, good prompting is becoming a

00:04:20.069 --> 00:04:22.709
real skill, almost like a financial skill. We

00:04:22.709 --> 00:04:24.829
saw this amazing story. A traveler used just

00:04:24.829 --> 00:04:28.470
eight specific prompts, basically turned ChatGPT

00:04:28.470 --> 00:04:32.490
into their travel agent, cut an $879 flight down

00:04:32.490 --> 00:04:37.069
to $299. No reward points involved. See? Prompt

00:04:37.069 --> 00:04:39.430
engineering isn't just a buzzword anymore. It's

00:04:39.430 --> 00:04:42.029
a real -world money saver. Ask the right questions,

00:04:42.189 --> 00:04:45.139
save hundreds of dollars. Simple as that. That

00:04:45.139 --> 00:04:46.980
said, with all this AI getting integrated everywhere,

00:04:47.160 --> 00:04:49.259
we absolutely have to talk privacy. Data sharing

00:04:49.259 --> 00:04:52.240
is about to change pretty significantly for millions

00:04:52.240 --> 00:04:54.360
of people. LinkedIn is going to start sharing

00:04:54.360 --> 00:04:56.899
user profiles and activity data directly with Microsoft.

00:04:56.959 --> 00:04:59.120
That data is for AI training and ad targeting.

00:04:59.279 --> 00:05:02.000
And it starts November 3rd. And here's the critical

00:05:02.000 --> 00:05:04.300
part for you listening right now. The default

00:05:04.300 --> 00:05:08.529
setting is on. It's opt-out, not opt-in. So

00:05:08.529 --> 00:05:10.550
if you care about your professional data feeding

00:05:10.550 --> 00:05:13.310
into these big AI systems, you have to actively

00:05:13.310 --> 00:05:16.209
go into your settings and turn it off. Like today.

00:05:17.350 --> 00:05:19.410
Meanwhile, the money keeps pouring into the hardware

00:05:19.410 --> 00:05:22.310
side. We noted that $10 billion U.S. government

00:05:22.310 --> 00:05:24.970
allocation, it's specifically to make sure America

00:05:24.970 --> 00:05:27.949
stays dominant in chip manufacturing, a really

00:05:27.949 --> 00:05:30.290
strategic investment for the future AI supply

00:05:30.290 --> 00:05:33.569
chain. And the VC money. It's flowing deep and

00:05:33.569 --> 00:05:35.730
focused. Glilot Capital just raised half a billion

00:05:35.730 --> 00:05:39.389
dollars, $500 million, specifically for AI and

00:05:39.389 --> 00:05:42.490
cybersecurity startups based in Israel. The big

00:05:42.490 --> 00:05:44.449
money is laser focused on those two areas right

00:05:44.449 --> 00:05:46.829
now. So drilling down on these shifts, what's

00:05:46.829 --> 00:05:49.069
the single biggest action a user needs to take

00:05:49.069 --> 00:05:51.470
right now? Users must opt out of LinkedIn's new

00:05:51.470 --> 00:05:53.709
default settings if they care about data sharing.

00:05:54.079 --> 00:05:56.100
No question. Let's talk about a common problem

00:05:56.100 --> 00:05:58.879
in this AI gold rush right now. It's this issue

00:05:58.879 --> 00:06:01.060
of vibe coding. You know, developers rushing

00:06:01.060 --> 00:06:03.019
to build something just because the tech is cool

00:06:03.019 --> 00:06:05.100
without checking if anyone actually needs it

00:06:05.100 --> 00:06:07.500
first. Yeah, we got to move past building AI

00:06:07.500 --> 00:06:10.279
just because the demo looks slick, right? That's

00:06:10.279 --> 00:06:12.620
why our sources are really emphasizing a more

00:06:12.620 --> 00:06:15.199
structured approach. Exactly. They lay out this

00:06:15.639 --> 00:06:18.500
three-stage framework for successful AI building.

00:06:18.779 --> 00:06:21.079
And it stresses that the absolute first step

00:06:21.079 --> 00:06:23.959
has to be user-first research. Validate the

00:06:23.959 --> 00:06:26.779
product idea. Don't even start the MVP until

00:06:26.779 --> 00:06:29.620
you know the problem is real and, you know, people

00:06:29.620 --> 00:06:31.740
will actually pay for a solution. Well, while

00:06:31.740 --> 00:06:33.360
you're building, remember the whole distribution

00:06:33.360 --> 00:06:36.459
game is changing, mostly thanks to Google's big

00:06:36.459 --> 00:06:40.259
AI push. The old search model is shifting. Businesses

00:06:40.259 --> 00:06:42.180
now have to figure out how to deal with conversational

00:06:42.180 --> 00:06:45.100
ads and these external AI search tools that

00:06:45.100 --> 00:06:47.139
just bypass Google's traditional results page

00:06:47.139 --> 00:06:49.540
entirely. Right, but that shift also creates

00:06:49.540 --> 00:06:51.560
new openings. There's a great practical guide

00:06:51.560 --> 00:06:53.740
we saw detailing like seven different ways people

00:06:53.740 --> 00:06:56.000
are making money right now using free AI tools.

00:06:56.079 --> 00:06:58.680
It covers everything from creating unique digital

00:06:58.680 --> 00:07:01.060
products to offering super specialized freelance

00:07:01.060 --> 00:07:03.000
services. And for the creative folks out there,

00:07:03.040 --> 00:07:05.899
we're seeing some really cool specific uses for

00:07:05.899 --> 00:07:09.329
AI image generation, especially in video. Think

00:07:09.329 --> 00:07:11.110
about systems that can keep a character looking

00:07:11.110 --> 00:07:14.350
consistent across tons of animated frames. Or

00:07:14.350 --> 00:07:17.069
projects animating famous old paintings for,

00:07:17.189 --> 00:07:19.709
like educational videos. It's getting surprisingly

00:07:19.709 --> 00:07:22.129
detailed. So if that framework is the guide,

00:07:22.410 --> 00:07:25.610
what's the absolute crucial first step for building

00:07:25.610 --> 00:07:28.529
a successful AI product now? The crucial first

00:07:28.529 --> 00:07:31.089
stage is user-first research to validate the

00:07:31.089 --> 00:07:33.939
product idea. Period. All right. Let's hit some

00:07:33.939 --> 00:07:35.699
quick industry shifts. We're seeing this push

00:07:35.699 --> 00:07:39.319
towards decentralization. Meta's letting

00:07:39.319 --> 00:07:41.939
outside developers build apps for its smart glasses

00:07:41.939 --> 00:07:44.199
now. That opens up a whole new playground for

00:07:44.199 --> 00:07:46.939
wearable AI stuff. And pressure on platforms

00:07:46.939 --> 00:07:49.300
is everywhere. You know, X, formerly Twitter, is

00:07:49.300 --> 00:07:51.839
reportedly moving towards a highly customizable

00:07:51.839 --> 00:07:54.879
AI-driven algorithm for the main feed. It feels

00:07:54.879 --> 00:07:56.779
like personalized AI feeds are just becoming

00:07:56.779 --> 00:07:59.379
the expected standard. And all these new AIs,

00:07:59.480 --> 00:08:01.620
the big ones, the fast ones like Grok we talked

00:08:01.620 --> 00:08:03.730
about. They're putting serious competitive heat

00:08:03.730 --> 00:08:05.709
on the big tech companies. Their whole strategy

00:08:05.709 --> 00:08:07.990
has to adapt faster than ever. And honestly,

00:08:08.250 --> 00:08:11.209
just trying to keep up with this constant rapid

00:08:11.209 --> 00:08:13.709
change is exhausting, even for those of us watching

00:08:13.709 --> 00:08:16.589
it closely every day. Yeah, I'll admit it. I

00:08:16.589 --> 00:08:18.750
still wrestle with prompt drift myself. Yeah.

00:08:19.069 --> 00:08:21.269
You know, when these algorithms just shift under

00:08:21.269 --> 00:08:22.930
your feet unexpectedly, you think you've nailed

00:08:22.930 --> 00:08:25.750
a perfect prompt one day, and the next, the output's

00:08:25.750 --> 00:08:27.509
totally different. You're constantly having to

00:08:27.509 --> 00:08:32.059
relearn the tools. And that pace? Well, it brings

00:08:32.059 --> 00:08:34.379
risks we just can't ignore. We've seen some really

00:08:34.379 --> 00:08:37.539
serious misuse fears surface lately. Reports

00:08:37.539 --> 00:08:41.059
of chatbots generating child abuse images, for

00:08:41.059 --> 00:08:43.500
instance, that sparked huge safety concerns,

00:08:43.600 --> 00:08:46.019
rightly so. The industry has got to manage this

00:08:46.019 --> 00:08:48.600
dark side along with the innovation. And finally,

00:08:48.700 --> 00:08:51.200
there's this sense that the sheer speed of the

00:08:51.200 --> 00:08:53.799
gold rush is creating tunnel vision at the very

00:08:53.799 --> 00:08:56.940
top. Sources are suggesting CEOs are just so

00:08:56.940 --> 00:08:59.480
incredibly bullish on AI succeeding and the immediate

00:08:59.480 --> 00:09:01.860
money it brings that they're reportedly ignoring

00:09:01.860 --> 00:09:03.399
pretty much everything else in their business

00:09:03.399 --> 00:09:05.299
plans. It's like an all-consuming single focus.

00:09:05.559 --> 00:09:08.159
So what does that CEO bullishness, that intense

00:09:08.159 --> 00:09:10.360
focus, tell us about where the market's priorities

00:09:10.360 --> 00:09:13.539
are right now? The market focus is singularly

00:09:13.539 --> 00:09:16.500
locked onto AI, overshadowing other traditional

00:09:16.500 --> 00:09:20.370
risks. It's AI or nothing, it seems. Okay. This

00:09:20.370 --> 00:09:22.830
brings us to maybe the most important reality

00:09:22.830 --> 00:09:25.509
check we have today. For a while now, we've been

00:09:25.509 --> 00:09:27.870
looking at these AI benchmarks showing models

00:09:27.870 --> 00:09:31.629
getting like 70, 80, 90 % on coding tasks. And

00:09:31.629 --> 00:09:33.370
it kind of creates this false impression that

00:09:33.370 --> 00:09:36.129
AI is almost ready to just replace human engineers.

00:09:36.350 --> 00:09:39.549
That perception is dangerous. And this new benchmark,

00:09:40.529 --> 00:09:43.370
SWE-Bench Pro totally changes the math on that.

00:09:43.590 --> 00:09:46.590
So SWE-Bench Pro is being called the new gold

00:09:46.590 --> 00:09:48.769
standard. What makes it different is it doesn't

00:09:48.769 --> 00:09:50.830
just check if the model spits out code that looks

00:09:50.830 --> 00:09:53.149
right. It actually verifies if the model solves

00:09:53.149 --> 00:09:55.809
complex real world coding problems. And if it

00:09:55.809 --> 00:09:58.190
applies the fixes correctly within existing code

00:09:58.190 --> 00:10:00.710
bases, it really raises the bar for saying something

00:10:00.710 --> 00:10:02.649
actually works. And when they tested the top
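
NOTE
That pass criterion can be sketched as a tiny harness. The apply_patch and run_tests callables stand in for real git-apply and test-runner calls; this illustrates the idea, not SWE-Bench Pro's actual code.
```python
# Sketch of a SWE-Bench-Pro-style pass criterion: a model's fix counts only
# if its patch applies to the real codebase AND the verification tests pass.
# apply_patch / run_tests are hypothetical stand-ins for git + test-runner calls.
def task_solved(apply_patch, run_tests) -> bool:
    if not apply_patch():   # e.g. applying the model's patch to the repo
        return False        # a patch that won't even apply is an automatic fail
    return run_tests()      # e.g. running the task's verification test suite
def resolve_rate(results: list) -> float:
    """Benchmark score: the fraction of tasks solved end to end."""
    return sum(results) / len(results)
```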

00:10:02.649 --> 00:10:05.309
models against this tougher standard, whoa, the

00:10:05.309 --> 00:10:07.250
gap between what we thought they could do and

00:10:07.250 --> 00:10:09.190
what they actually do in the real world became

00:10:09.389 --> 00:10:12.149
painfully obvious. It was, frankly, a bloodbath

00:10:12.149 --> 00:10:14.309
compared to the old benchmarks. Get these numbers.

00:10:14.350 --> 00:10:16.590
The top models, the ones we use every day, scored

00:10:16.590 --> 00:10:19.809
way lower than anyone thought. OpenAI's GPT-5,

00:10:19.990 --> 00:10:23.590
only 23.3% on the public tasks. Claude Opus

00:10:23.610 --> 00:10:27.529
4.1? 23.1%. And GPT-4, which everyone was

00:10:27.529 --> 00:10:30.909
raving about? It dropped to just 4.9% on those

00:10:30.909 --> 00:10:34.440
public tasks. 4.9%. Oof. And when they tested

00:10:34.440 --> 00:10:36.240
on private commercial code, which is usually

00:10:36.240 --> 00:10:38.500
more complex, more context-dependent, GPT-5

00:10:38.500 --> 00:10:42.580
dropped even further, down to 14.9%. These scores

00:10:42.580 --> 00:10:44.620
just lay bare the massive difference between

00:10:44.620 --> 00:10:46.480
being a helpful coding assistant, which they

00:10:46.480 --> 00:10:49.360
are, and being a true autonomous coding agent

00:10:49.360 --> 00:10:51.779
that can handle production work. We're not there

00:10:51.779 --> 00:10:54.559
yet. The findings are just undeniable, really. Yes,

00:10:54.620 --> 00:10:56.240
the top models still do better than the rest.

00:10:56.320 --> 00:10:58.399
There's consistency there. But that performance

00:10:58.399 --> 00:11:01.090
gap between the AI dream and production reality,

00:11:01.470 --> 00:11:04.049
it's now clear as day. SWE-Bench Pro basically

00:11:04.049 --> 00:11:06.470
gives us the map. It shows exactly how far these

00:11:06.470 --> 00:11:08.470
agents still need to travel before they can reliably

00:11:08.470 --> 00:11:10.470
handle professional software development. So

00:11:10.470 --> 00:11:13.029
the immediate practical takeaway for any company

00:11:13.029 --> 00:11:14.850
out there thinking about, you know, letting their

00:11:14.850 --> 00:11:18.090
developers go: don't fire your engineers. AI

00:11:18.090 --> 00:11:20.450
still needs significant development before it

00:11:20.450 --> 00:11:22.570
can tackle complex production code on its own.

00:11:22.929 --> 00:11:26.629
Simple as that. So it's

00:11:26.629 --> 00:11:29.370
been a deep dive into these two sort of opposing

00:11:29.370 --> 00:11:31.990
but equally vital realities today, hasn't it?

00:11:32.029 --> 00:11:34.350
On one side, you've got this radical efficiency

00:11:34.350 --> 00:11:37.909
leap with Grok 4 Fast. It shows that small, cheap,

00:11:38.029 --> 00:11:40.769
smart systems can absolutely beat the big, expensive

00:11:40.769 --> 00:11:42.789
ones sometimes. Right. And on the other side,

00:11:42.809 --> 00:11:46.110
you have the necessary sobering reality check

00:11:46.110 --> 00:11:49.470
from SWE-Bench Pro, proving that the AI coding

00:11:49.470 --> 00:11:52.149
tools we rely on, they're still failing most

00:11:52.149 --> 00:11:54.639
of the time on the really complex, verifiable

00:11:54.639 --> 00:11:56.720
stuff needed for professional work. The future

00:11:56.720 --> 00:11:58.700
isn't just going to be about who has the biggest

00:11:58.700 --> 00:12:01.100
model. It's about mastering the economics of

00:12:01.100 --> 00:12:04.080
deployment and using targeted intelligence smartly.

00:12:04.139 --> 00:12:07.919
Whoa. Just imagine scaling that Grok 4 Fast efficiency,

00:12:08.200 --> 00:12:10.440
right? Where you're paying maybe 25 times less

00:12:10.440 --> 00:12:13.529
for each query. Imagine scaling that to handle

00:12:13.529 --> 00:12:16.850
like a billion complex user interactions every

00:12:16.850 --> 00:12:19.750
single day. That kind of shift in profit and

00:12:19.750 --> 00:12:22.450
accessibility, it completely changes the business

00:12:22.450 --> 00:12:25.059
model for AI products. Yeah, I think the real

00:12:25.059 --> 00:12:27.460
competitive edge in the next year or so won't

00:12:27.460 --> 00:12:30.460
be just having the biggest model. It'll be mastering

00:12:30.460 --> 00:12:32.779
the economics of deploying the lightest, most

00:12:32.779 --> 00:12:35.899
targeted models that can still deliver top performance

00:12:35.899 --> 00:12:38.179
where it really matters. So maybe your homework

00:12:38.179 --> 00:12:40.500
from this deep dive is twofold. First, seriously,

00:12:40.700 --> 00:12:43.179
go check your LinkedIn settings right now. Make

00:12:43.179 --> 00:12:44.940
sure your data preferences are where you want

00:12:44.940 --> 00:12:47.200
them. And second, start thinking about those

00:12:47.200 --> 00:12:49.879
efficiency gains, that idea of delivering premium

00:12:49.879 --> 00:12:52.759
results at indie-level cost. That feels like

00:12:52.759 --> 00:12:55.230
the blueprint for the next wave of successful AI

00:12:55.230 --> 00:12:57.409
products. Until next time, keep digging into the

00:12:57.409 --> 00:12:57.610
details.
