WEBVTT

00:00:00.000 --> 00:00:02.500
You ever get that feeling, that little bit of

00:00:02.500 --> 00:00:04.919
friction when you open ChatGPT to write something,

00:00:05.360 --> 00:00:07.559
but then you pause and you think, maybe Claude

00:00:07.559 --> 00:00:10.000
would structure this better, or maybe Perplexity

00:00:10.000 --> 00:00:12.140
for the search part. Yeah, that feeling. It's

00:00:12.140 --> 00:00:14.419
like playing AI ping-pong all day long. Exactly.

00:00:14.539 --> 00:00:17.579
Switching tabs, comparing outputs, just trying

00:00:17.579 --> 00:00:20.640
to get the best possible result for like one

00:00:20.640 --> 00:00:23.339
single task. It gets exhausting. Well, that friction,

00:00:23.539 --> 00:00:25.699
that's precisely the problem Genspark is setting

00:00:25.699 --> 00:00:28.000
out to solve. It's pitched as this all-in-one

00:00:28.000 --> 00:00:31.329
AI that, you know, does the switching for you.

00:00:31.329 --> 00:00:34.130
Yeah. Automatically. Interesting. So welcome,

00:00:34.229 --> 00:00:35.990
everyone, to the Deep Dive. Today we're digging

00:00:35.990 --> 00:00:38.829
into a pretty comprehensive guide on Genspark,

00:00:38.909 --> 00:00:42.689
and we're really focusing on its core idea, this

00:00:42.689 --> 00:00:44.990
multi-agent system. Right. Think of it like

00:00:44.990 --> 00:00:46.750
your own personal assistant. You give it one

00:00:46.750 --> 00:00:48.850
prompt, one question, and it kind of secretly

00:00:48.850 --> 00:00:51.210
goes and asks three experts, in this case, three

00:00:51.210 --> 00:00:54.289
top AI models, and then it synthesizes the best

00:00:54.289 --> 00:00:57.630
bits into one answer for you. So for you listening,

00:00:57.909 --> 00:00:59.390
especially if you're trying to learn fast and

00:00:59.390 --> 00:01:01.350
stay informed without just drowning in tabs and

00:01:01.350 --> 00:01:03.729
tools, our mission today is simple. We need to

00:01:03.729 --> 00:01:06.689
figure out, is Genspark actually the shortcut

00:01:06.689 --> 00:01:11.049
it claims to be? Does it deliver? Our roadmap

00:01:11.049 --> 00:01:12.989
today is pretty straightforward. We're gonna

00:01:12.989 --> 00:01:16.109
dive deep into that multi-agent chat feature,

00:01:16.109 --> 00:01:18.450
which sounds like the main event. Definitely. Then

00:01:18.450 --> 00:01:21.129
we'll explore its creative tools, images, video,

00:01:21.129 --> 00:01:23.329
that kind of stuff. We'll look at the productivity

00:01:23.329 --> 00:01:26.129
side, like generating slides and sheets. And finally,

00:01:26.129 --> 00:01:29.629
there's this almost sci-fi feature, the AI calling

00:01:29.629 --> 00:01:31.870
agent. Yes. Yeah, that was wild. But the whole

00:01:31.870 --> 00:01:33.870
package the pitch is really compelling because

00:01:33.870 --> 00:01:36.650
of the value, right? It tries to do everything: chat,

00:01:36.650 --> 00:01:39.450
video, data analysis, even making phone calls

00:01:39.450 --> 00:01:42.269
for around 20 bucks a month. $20. Yeah. So straight

00:01:42.269 --> 00:01:44.829
away, it's positioning itself as like a way cheaper

00:01:44.829 --> 00:01:47.450
option than subscribing to maybe four or five

00:01:47.450 --> 00:01:50.409
specialized expensive AI tools separately. OK.

00:01:50.450 --> 00:01:52.370
But the real innovation here, it seems, isn't

00:01:52.370 --> 00:01:54.790
just bundling. It's how it uses those tools.

00:01:54.790 --> 00:01:56.750
Exactly. Instead of you manually copying your

00:01:56.750 --> 00:02:00.010
prompt into GPT, then Claude, then Gemini, Genspark

00:02:00.010 --> 00:02:02.349
takes your one prompt and routes it to all three

00:02:02.349 --> 00:02:05.129
models behind the scenes. Simultaneously. And

00:02:05.129 --> 00:02:07.659
then this is the key part. It analyzes, or selects,

00:02:07.659 --> 00:02:09.699
or maybe combines the results to give you just

00:02:09.699 --> 00:02:12.599
one, hopefully better response. That seems to

00:02:12.599 --> 00:02:15.620
be the core time saver. That's the promise, yeah.
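
To make that fan-out-and-synthesize pattern concrete, here is a minimal Python sketch. The three model functions are hypothetical stand-ins (a real client would call each provider's API), and the selection step is a toy placeholder, not Genspark's actual cohesion logic:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the three back-end models; in a real
# system each would be an API call to a different provider.
def ask_gpt(prompt):    return {"model": "gpt", "text": f"GPT answer to: {prompt}"}
def ask_claude(prompt): return {"model": "claude", "text": f"Claude answer to: {prompt}"}
def ask_gemini(prompt): return {"model": "gemini", "text": f"Gemini answer to: {prompt}"}

MODELS = [ask_gpt, ask_claude, ask_gemini]

def fan_out(prompt):
    """Send the single prompt to every model at the same time."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        return list(pool.map(lambda fn: fn(prompt), MODELS))

def synthesize(responses):
    """Toy 'cohesion layer': just pick the longest candidate answer.
    A real system would score, compare, and possibly merge them."""
    return max(responses, key=lambda r: len(r["text"]))

answers = fan_out("Analyze competitor EcoBottle")
best = synthesize(answers)
```

The point of the sketch is the shape of the workflow: one prompt in, three parallel calls, one answer out.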

00:02:16.000 --> 00:02:18.580
Reducing that mental fatigue of constantly comparing.

00:02:18.699 --> 00:02:21.219
So let's really unpack that core engine first.

00:02:21.439 --> 00:02:25.400
The multi-agent chat workflow. Because the frustration

00:02:25.400 --> 00:02:27.620
it aims to solve, I mean, it's real. You spend

00:02:27.620 --> 00:02:30.699
time crafting this perfect prompt for, say, ChatGPT.

00:02:30.699 --> 00:02:33.759
Only to find the output structure is maybe

00:02:33.759 --> 00:02:35.560
a bit weak. So then you take that same prompt

00:02:35.560 --> 00:02:37.830
over to Claude. Maybe it's a little better, but

00:02:37.830 --> 00:02:39.389
you still kind of feel you should check Gemini

00:02:39.389 --> 00:02:41.870
too, just in case. Yeah, that whole dance. Genspark

00:02:41.870 --> 00:02:44.770
basically says, stop dancing. One prompt

00:02:44.770 --> 00:02:47.509
goes out and it hits GPT-5, Claude Sonnet 4,

00:02:47.949 --> 00:02:51.750
and Gemini 2.5 Flash all at once. And then comes

00:02:51.750 --> 00:02:54.889
the smart step, as the source called it, the

00:02:54.889 --> 00:02:57.990
cohesion layer. It looks at all three outputs

00:02:57.990 --> 00:03:01.430
and decides how to combine them. Or maybe just

00:03:01.430 --> 00:03:03.569
pick the best one overall. It's leveraging the

00:03:03.569 --> 00:03:06.270
unique strengths of each model, supposedly. Yeah,

00:03:06.270 --> 00:03:09.069
and it's also efficient with your tokens. You

00:03:09.069 --> 00:03:11.949
know, tokens are basically the currency AI models

00:03:11.949 --> 00:03:14.169
use like the words they process, input and output.

00:03:14.210 --> 00:03:17.009
Right, they charge based on usage. Exactly. So

00:03:17.009 --> 00:03:19.930
because you only prompt once with Genspark, you

00:03:19.930 --> 00:03:21.969
only spend those tokens once, not three times

00:03:21.969 --> 00:03:24.469
for the same result. Saves cost and time. OK,

00:03:24.469 --> 00:03:26.770
let's make this concrete. The source had a really

00:03:26.770 --> 00:03:29.699
complex example, a case study. We asked Genspark

00:03:29.699 --> 00:03:31.919
to play the role of a senior marketing expert.

00:03:32.020 --> 00:03:34.159
Okay. And the task was to analyze a competitor

00:03:34.159 --> 00:03:36.719
called EcoBottle. This wasn't just like summarize

00:03:36.719 --> 00:03:39.259
their website. No, this was a heavy lift. It

00:03:39.259 --> 00:03:41.659
was actually a four part task rolled into one

00:03:41.659 --> 00:03:43.840
prompt. Right. It had to do one, the role play

00:03:43.840 --> 00:03:47.319
itself. Two, a full SWOT analysis, strengths,

00:03:47.620 --> 00:03:51.210
weaknesses, opportunities, threats. Three, come

00:03:51.210 --> 00:03:53.430
up with three creative counter-marketing strategies,

00:03:53.729 --> 00:03:57.370
and four, actually write content specifically,

00:03:57.969 --> 00:04:00.289
a five-tweet thread based on those strategies.

00:04:00.949 --> 00:04:03.849
Okay, that's a massive, multi-layered request

00:04:03.849 --> 00:04:06.569
for any AI. Hold on a sec, though. This idea

00:04:06.569 --> 00:04:09.849
of combining outputs from three different AIs,

00:04:09.849 --> 00:04:13.629
it sounds potentially, well, messy. How does

00:04:13.629 --> 00:04:16.110
Genspark make sure you don't just get a Frankenstein

00:04:16.110 --> 00:04:18.170
answer that kind of loses the point? What did

00:04:18.170 --> 00:04:20.350
the source say about that cohesion part? That's

00:04:20.350 --> 00:04:22.350
a fair question. The idea isn't just mashing

00:04:22.350 --> 00:04:24.449
text together. It's about extracting the best

00:04:24.449 --> 00:04:26.990
component from each model suited to that specific

00:04:26.990 --> 00:04:29.290
part of the task. OK. So for that EcoBottle

00:04:29.290 --> 00:04:32.850
example, it probably pulled the really structured

00:04:32.850 --> 00:04:35.230
logical SWOT analysis maybe from Claude because

00:04:35.230 --> 00:04:36.910
Claude's good at reasoning. Right. Then maybe

00:04:36.910 --> 00:04:39.410
it grabbed the most out-there creative marketing

00:04:39.410 --> 00:04:42.620
ideas from Gemini. And then it might use GPT-5,

00:04:42.620 --> 00:04:45.060
which is often great at writing, to actually

00:04:45.060 --> 00:04:47.519
polish the language and stitch the whole thing

00:04:47.519 --> 00:04:50.160
together into a cohesive professional response.
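
That specialist-team idea can be sketched as per-subtask selection. The scores below are invented purely for illustration (a real cohesion layer would rate candidates with a judge model or heuristics), but the selection logic is the point:

```python
# Made-up quality scores per model per subtask, for illustration only.
candidates = {
    "claude": {"swot": 0.9, "ideas": 0.6, "polish": 0.7},
    "gemini": {"swot": 0.7, "ideas": 0.9, "polish": 0.6},
    "gpt":    {"swot": 0.6, "ideas": 0.7, "polish": 0.9},
}

def pick_specialists(scores_by_model):
    """For each subtask, choose the model with the highest score."""
    subtasks = next(iter(scores_by_model.values())).keys()
    return {
        task: max(scores_by_model, key=lambda m: scores_by_model[m][task])
        for task in subtasks
    }

assignment = pick_specialists(candidates)
print(assignment)  # {'swot': 'claude', 'ideas': 'gemini', 'polish': 'gpt'}
```

Each part of the answer comes from whichever model scored best on that part, rather than one model doing everything.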

00:04:50.740 --> 00:04:53.220
Ah, okay. So it's like having a specialist team

00:04:53.220 --> 00:04:55.019
working on the different parts of the problem,

00:04:55.040 --> 00:04:56.959
not just three generalists shouting answers.

00:04:57.000 --> 00:04:59.000
That makes more sense. Exactly. And looking at

00:04:59.000 --> 00:05:01.480
the results from that example, the level of detail

00:05:01.480 --> 00:05:04.100
was impressive. Like, the AI noted the competitor,

00:05:04.439 --> 00:05:07.319
EcoBottle, is tapping into this huge eco-friendly

00:05:07.319 --> 00:05:10.060
bottle market. It even pulled a specific stat.

00:05:10.170 --> 00:05:15.370
projected to hit $18.49 billion by 2032. Oh!

00:05:15.990 --> 00:05:18.910
That specific data point alone adds a ton of

00:05:18.910 --> 00:05:21.069
value to the market analysis section. Absolutely.

00:05:21.230 --> 00:05:23.230
And those counter strategies it came up with?

00:05:23.430 --> 00:05:25.470
Some were genuinely clever, like one called the

00:05:25.470 --> 00:05:28.069
Lifetime Impact Dashboard. What was that? Instead

00:05:28.069 --> 00:05:30.870
of just making vague sustainability claims, it

00:05:30.870 --> 00:05:33.250
proposed giving customers a real-time dashboard

00:05:33.250 --> 00:05:35.490
showing the positive environmental impact of

00:05:35.490 --> 00:05:38.170
using their bottle over time. Kind of gamified

00:05:38.170 --> 00:05:40.189
sustainability. That's smart. And the other one

00:05:40.189 --> 00:05:42.170
you mentioned, strategy two. Yeah, that tackled

00:05:42.170 --> 00:05:44.709
the circular economy idea head on. It was called

00:05:44.709 --> 00:05:48.389
the Trade In and Transform program. The AI suggested

00:05:48.389 --> 00:05:51.009
accepting any brand's old bottle, not just their

00:05:51.009 --> 00:05:53.870
own, for recycling or repurposing when a customer

00:05:53.870 --> 00:05:56.480
buys a new one. That's really smart competitive

00:05:56.480 --> 00:05:59.420
positioning. Okay, so beyond just efficiency,

00:05:59.879 --> 00:06:02.600
how does Genspark ensure that synthesis, that

00:06:02.600 --> 00:06:05.139
combining of the three models, actually improves

00:06:05.139 --> 00:06:08.060
the overall quality of the answer? Well, like

00:06:08.060 --> 00:06:10.120
we discussed, it leverages the unique strengths

00:06:10.120 --> 00:06:12.259
of each model simultaneously for different parts

00:06:12.259 --> 00:06:14.540
of the task. Okay, got it. Let's shift gears

00:06:14.540 --> 00:06:17.639
now to the creative side. The multi -agent tools

00:06:17.639 --> 00:06:20.600
for images and video, is the process similar

00:06:20.600 --> 00:06:23.060
there? Pretty much, yeah. Same principle. Instead

00:06:23.060 --> 00:06:25.160
of relying on just one image generator, it runs

00:06:25.160 --> 00:06:27.279
your prompt through several heavy hitters. They

00:06:27.279 --> 00:06:29.439
mentioned NanoBanana, and Seedream by ByteDance. That's

00:06:29.439 --> 00:06:31.879
from the TikTok parent company. Right. And GPT

00:06:31.879 --> 00:06:34.500
Image, which uses DALL·E technology. So you

00:06:34.500 --> 00:06:36.540
get results influenced by different training

00:06:36.540 --> 00:06:38.990
data and styles. And the example prompt they

00:06:38.990 --> 00:06:41.709
used was really specific, wasn't it? That Hanoi

00:06:41.709 --> 00:06:44.250
sidewalk cafe scene. Oh yeah, super detailed.

00:06:44.589 --> 00:06:47.129
They wanted photorealistic, but also a specific

00:06:47.129 --> 00:06:51.910
16:9 aspect ratio. Visual details like condensation

00:06:51.910 --> 00:06:54.410
on the iced coffee glass, raindrops on the window

00:06:54.410 --> 00:06:59.470
pane. And even a specific mood. Quiet, cinematic,

00:06:59.750 --> 00:07:02.949
nostalgic. That level of control is what pros

00:07:02.949 --> 00:07:05.310
need, definitely. And the guide gave a good tip.

00:07:05.579 --> 00:07:07.420
Always try to use a reference image if you have

00:07:07.420 --> 00:07:10.060
one. Or, if you just have a vague idea, use their

00:07:10.060 --> 00:07:11.959
auto prompt feature to flesh it out into a more

00:07:11.959 --> 00:07:14.439
detailed description for the AI. Good advice.

00:07:14.889 --> 00:07:17.449
Now on to video. The key insight here was great

00:07:17.449 --> 00:07:19.069
AI video starts with a great starting image.

00:07:19.250 --> 00:07:21.110
You can't just skip that composition step. Makes

00:07:21.110 --> 00:07:23.250
sense. Garbage in, garbage out applies to the

00:07:23.250 --> 00:07:25.550
starting frame, too. Right. And for video generation,

00:07:25.709 --> 00:07:29.170
it uses models like Veo 3, Seedance Lite, Pixverse

00:07:29.170 --> 00:07:31.290
V5, again, pulling from different specialized

00:07:31.290 --> 00:07:33.189
engines. And the prompt for the video was layered

00:07:33.189 --> 00:07:35.009
on top of the image prompt. Exactly. They took

00:07:35.009 --> 00:07:37.389
that perfect Hanoi coffee image and then commanded

00:07:37.389 --> 00:07:39.529
multiple precise layers of motion. Things like

00:07:39.529 --> 00:07:42.350
a slow camera zoom, a dolly zoom effect, raindrops

00:07:42.350 --> 00:07:45.750
slowly sliding down the window, and those blurry

00:07:45.750 --> 00:07:48.089
neon lights in the background. They wanted them

00:07:48.089 --> 00:07:50.629
to gently flicker. Wow, that's intricate. Yeah.

00:07:50.910 --> 00:07:53.290
And they even added a specific negative constraint.

00:07:53.949 --> 00:07:56.269
Absolutely no steam or smoke. Because remember,

00:07:56.389 --> 00:07:58.750
it was an iced coffee, right? Details matter.

00:07:59.069 --> 00:08:02.370
Whoa. I mean, just imagine scaling that level

00:08:02.370 --> 00:08:05.810
of precise, multi-layered motion control, doing

00:08:05.810 --> 00:08:08.389
that across hundreds of marketing assets every

00:08:08.389 --> 00:08:10.959
single day. That's... That's a huge capability

00:08:10.959 --> 00:08:13.079
boost right there. It really is. But let's play

00:08:13.079 --> 00:08:15.500
devil's advocate again. Is the trade-off in

00:08:15.500 --> 00:08:17.779
quality acceptable? I mean, integrating image,

00:08:17.879 --> 00:08:20.699
video, chat all in one place is convenient. But

00:08:20.699 --> 00:08:23.420
are these the absolute best-in-class models

00:08:23.420 --> 00:08:26.300
for each specific task or just good models? That's

00:08:26.300 --> 00:08:28.699
where it depends. For rapid content creation? For

00:08:28.699 --> 00:08:31.259
budget-conscious teams? Small businesses? Yeah, probably.

00:08:31.600 --> 00:08:34.159
The output they showed was solid. It was robust.

00:08:34.500 --> 00:08:36.360
Definitely good enough for quick prototyping,

00:08:36.379 --> 00:08:38.980
social media posts, internal stuff. Maybe not

00:08:38.980 --> 00:08:41.559
Hollywood VFX, but very usable. So it's solid

00:08:41.559 --> 00:08:43.960
and good enough for fast content creation needs.

00:08:44.159 --> 00:08:46.940
Pretty much, yeah. OK. Let's transition, then,

00:08:46.940 --> 00:08:49.460
from the creative tools to pure productivity,

00:08:50.039 --> 00:08:53.840
starting with AI Slides. Right. The utility here

00:08:53.840 --> 00:08:56.759
seems pretty clear: taking an idea, maybe even

00:08:56.759 --> 00:08:58.879
just a structured outline, and quickly turning

00:08:58.879 --> 00:09:01.539
it into a presentation slide deck. Apparently

00:09:01.539 --> 00:09:03.299
it's also helpful just for clarifying your own

00:09:03.299 --> 00:09:05.720
thinking. Yeah, forcing structure onto an idea.

00:09:05.909 --> 00:09:08.169
We tested this with another detailed prompt,

00:09:08.690 --> 00:09:11.470
creating a 15-slide internal training deck for

00:09:11.470 --> 00:09:14.409
new B2B sales hires. OK. And critically, we didn't

00:09:14.409 --> 00:09:16.450
just say make a sales training. We gave it a

00:09:16.450 --> 00:09:19.149
very structured nine-part outline, everything

00:09:19.149 --> 00:09:22.049
from what is B2B and understanding the sales

00:09:22.049 --> 00:09:25.389
funnel through to using CRM software and ending

00:09:25.389 --> 00:09:27.690
with a quick quiz. And that structure is key.

00:09:27.950 --> 00:09:30.409
It turns the AI from being a guesser, where results

00:09:30.409 --> 00:09:32.549
can be kind of hit or miss, into more of an efficient

00:09:32.549 --> 00:09:34.110
worker filling in the blanks you've already defined.
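
That blanks-filling pattern is easy to show in code. Here is a small Python sketch of building one structured prompt from a fixed outline; the section names echo the example above, and the exact wording is invented for illustration:

```python
# Outline sections drawn from the B2B sales training example; the
# prompt template itself is a hypothetical sketch, not Genspark's.
sections = [
    "What is B2B?",
    "Understanding the sales funnel",
    "Using CRM software",
    "Quick quiz",
]

def build_slides_prompt(topic, sections, n_slides=15):
    """Combine a topic and a fixed outline into one structured prompt,
    so the model fills in defined blanks instead of guessing structure."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(sections, 1))
    return (
        f"Create a {n_slides}-slide internal training deck on: {topic}.\n"
        f"Follow this outline exactly:\n{numbered}"
    )

prompt = build_slides_prompt("B2B sales for new hires", sections)
print(prompt)
```

The structure lives in your code (or your notes), and the model's job shrinks to filling each numbered slot.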

00:09:34.350 --> 00:09:36.830
Exactly, but there was a limitation noted here,

00:09:37.070 --> 00:09:39.970
too, right? Yeah, the feedback was the structure

00:09:39.970 --> 00:09:42.009
and content of the slides were functional, often

00:09:42.009 --> 00:09:45.220
pretty good, but the design, not so much. The

00:09:45.220 --> 00:09:47.519
slides weren't beautiful. They'd likely need

00:09:47.519 --> 00:09:49.700
polishing afterwards in something like PowerPoint

00:09:49.700 --> 00:09:52.440
or Keynote. OK, so great for a first draft of

00:09:52.440 --> 00:09:54.559
the content and flow, but expect to do design

00:09:54.559 --> 00:09:56.620
work. That's an important caveat. Definitely.

00:09:56.860 --> 00:09:59.200
AI Sheets. Doing market research in minutes.

00:09:59.980 --> 00:10:01.620
The source mentioned something interesting here,

00:10:01.679 --> 00:10:03.679
that a lot of users are kind of scared of complex

00:10:03.679 --> 00:10:06.179
spreadsheets. Yeah, that spreadsheet intimidation

00:10:06.179 --> 00:10:08.740
is real. So this feature tries to bridge that

00:10:08.740 --> 00:10:12.210
gap. The test prompt was pretty ambitious: asking

00:10:12.210 --> 00:10:14.929
the AI to research and gather competitor data,

00:10:15.029 --> 00:10:17.950
putting it directly into a spreadsheet. OK. The

00:10:17.950 --> 00:10:20.409
specific market was mid-range smartphones in

00:10:20.409 --> 00:10:24.490
Vietnam, looking ahead to 2025 data. Highly specific.

00:10:24.549 --> 00:10:26.090
And you could specify the columns you wanted.

00:10:26.230 --> 00:10:28.590
Yep. We asked for columns like brand, popular

00:10:28.590 --> 00:10:31.070
model in that category, the price range, specifically

00:10:31.070 --> 00:10:34.549
5 to 8 million Vietnamese dong, key specs, estimated

00:10:34.549 --> 00:10:36.710
market share, their marketing slogan, and user

00:10:36.710 --> 00:10:39.889
ratings. Wow. So you basically get a mini market

00:10:39.889 --> 00:10:42.470
research report generated into a usable table

00:10:42.470 --> 00:10:45.909
in, like, under a minute. That's the idea. And the

00:10:45.909 --> 00:10:48.409
utility doesn't stop there. You can then apparently

00:10:48.409 --> 00:10:50.529
ask Genspark follow-up questions, like turn

00:10:50.529 --> 00:10:52.690
this data into charts or even use it for some

00:10:52.690 --> 00:10:54.309
basic financial planning based on the market

00:10:54.309 --> 00:10:56.850
numbers. Okay, that's powerful. But given the

00:10:56.850 --> 00:11:00.549
complexity there, synthesizing research data into

00:11:00.549 --> 00:11:03.950
specific columns. Mm-hmm. How critical is

00:11:03.950 --> 00:11:07.409
fact-checking on the data that AI Sheets generates?

00:11:07.960 --> 00:11:09.860
especially with market share estimates and things

00:11:09.860 --> 00:11:12.019
like that. Yeah, the advice was clear. Always

00:11:12.019 --> 00:11:14.080
fact check important data, especially if you're

00:11:14.080 --> 00:11:15.740
going to present it publicly or make decisions

00:11:15.740 --> 00:11:17.779
based on it, treat it as a powerful starting

00:11:17.779 --> 00:11:20.360
point. So always fact check important content

00:11:20.360 --> 00:11:23.039
before public presentation. Got it. All right.

00:11:23.100 --> 00:11:25.179
Final main segment. This is about automation

00:11:25.179 --> 00:11:28.240
and connecting to the real world, starting with

00:11:28.240 --> 00:11:31.679
something called MCP. MCP: Model Context Protocol.

00:11:31.840 --> 00:11:33.159
The guide explained it pretty well. Think of

00:11:33.159 --> 00:11:37.179
them like secure digital bridges. Or maybe like

00:11:37.179 --> 00:11:39.440
stacking Lego blocks, but with data from different

00:11:39.440 --> 00:11:42.759
apps. They let the AI, if you give it permission,

00:11:43.399 --> 00:11:46.860
access external software you use. Right. And

00:11:46.860 --> 00:11:49.059
GenSpark apparently offers connections to hundreds

00:11:49.059 --> 00:11:51.440
of these. Things like Gmail, Google Calendar,

00:11:52.019 --> 00:11:55.519
Notion, even X, formerly Twitter. Yes, 631 connections

00:11:55.519 --> 00:11:57.960
were mentioned. It's a lot. So we gave it a task

00:11:57.960 --> 00:12:00.320
that used these connections, a cross-checking

00:12:00.320 --> 00:12:03.129
task. Scan my Gmail for the last seven days and

00:12:03.129 --> 00:12:05.309
my Google Calendar for the next seven days. Okay.

00:12:05.370 --> 00:12:07.629
And find any potential scheduling conflicts or

00:12:07.629 --> 00:12:09.970
maybe urgent tasks mentioned in emails that I

00:12:09.970 --> 00:12:11.929
might have missed putting on the calendar. Whoa,

00:12:12.049 --> 00:12:14.149
okay, that is a true personal assistant task.

00:12:14.169 --> 00:12:15.809
Yeah. Finding those things that fall through

00:12:15.809 --> 00:12:17.809
the cracks. Yeah. Like an email mentioning a

00:12:17.809 --> 00:12:19.649
deadline that isn't actually blocked out on your

00:12:19.649 --> 00:12:22.090
calendar yet. Or maybe spotting a double-booked

00:12:22.090 --> 00:12:24.389
meeting where you only accepted one invite. The

00:12:24.389 --> 00:12:26.970
potential time management benefit there is huge.
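
The core of that cross-check is simple set logic. Here is a Python sketch with hand-made data standing in for what the Gmail and Calendar connectors would return; the field names and dates are invented for illustration, and the real MCP exchange is not shown:

```python
from datetime import date

# Hypothetical data in place of real connector output.
emails = [
    {"subject": "Q3 report", "deadline": date(2025, 7, 10)},
    {"subject": "Vendor demo", "deadline": date(2025, 7, 12)},
]
calendar = [
    {"title": "Q3 report due", "day": date(2025, 7, 10)},
]

def missing_from_calendar(emails, calendar):
    """Flag email deadlines that have no calendar entry on that day."""
    booked_days = {event["day"] for event in calendar}
    return [e["subject"] for e in emails if e["deadline"] not in booked_days]

print(missing_from_calendar(emails, calendar))  # ['Vendor demo']
```

The hard part in practice is the secure data access, not the comparison; once both sources are readable, catching the gaps is a one-liner.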

00:12:27.350 --> 00:12:30.690
Massive. But, you know, I'll admit I still wrestle

00:12:30.690 --> 00:12:32.669
with prompt drift myself sometimes, trying to

00:12:32.669 --> 00:12:35.309
get the AI to do exactly what I want. And connecting

00:12:35.309 --> 00:12:39.389
my private data, my actual email, my calendar,

00:12:39.389 --> 00:12:42.379
it still feels inherently risky, obviously. Yeah.

00:12:42.500 --> 00:12:44.759
But you can see the potential, right? If you

00:12:44.759 --> 00:12:47.340
can get past that initial hurdle of trust, the

00:12:47.340 --> 00:12:50.039
assistant capability is just enormous. I totally

00:12:50.039 --> 00:12:52.860
get that hesitation. Which brings us nicely to

00:12:52.860 --> 00:12:55.919
maybe the most out-there feature. Right. The

00:12:55.919 --> 00:12:58.519
AI calling agent. Right. This thing actually

00:12:58.519 --> 00:13:01.820
makes phone calls to real people. Yeah. It makes

00:13:01.820 --> 00:13:04.399
the call, holds a conversation, apparently a

00:13:04.399 --> 00:13:06.740
pretty natural sounding one, and then it transcribes

00:13:06.740 --> 00:13:08.580
the key results back to you in the chat. Okay,

00:13:08.740 --> 00:13:10.360
the example task for this was pretty involved

00:13:10.360 --> 00:13:12.639
too, a multi -step negotiation. What was it?

00:13:12.700 --> 00:13:15.299
The prompt was basically, call this specific

00:13:15.299 --> 00:13:17.960
electronics store. Ask if they have a Sony Bravia

00:13:17.960 --> 00:13:21.519
X90L 55-inch TV in stock. Tell them a competitor

00:13:21.519 --> 00:13:24.539
is selling it for $2,500. Request a price match.

00:13:25.159 --> 00:13:27.559
And then critically ask exactly what documentation

00:13:27.559 --> 00:13:29.720
or proof you need to bring into the store to

00:13:29.720 --> 00:13:31.919
actually get that matched price. OK, that's not

00:13:31.919 --> 00:13:33.879
just asking a simple question. That's navigating

00:13:33.879 --> 00:13:35.940
a whole interaction, handling potential objections,

00:13:36.159 --> 00:13:39.059
asking clarifying questions. Exactly. And the

00:13:39.059 --> 00:13:41.440
AI supposedly handles that entire thread. It

00:13:41.440 --> 00:13:43.500
overcomes the initial let-me-check hurdle, gets to

00:13:41.440 --> 00:13:43.500
the negotiation part, and then reports back: OK,

00:13:45.740 --> 00:13:48.000
they agreed to $2,500, and you need to bring

00:13:48.000 --> 00:13:50.820
a printout of the competitor's ad. Managing that

00:13:50.820 --> 00:13:53.279
kind of multi-step, real-world communication

00:13:53.279 --> 00:13:55.980
completely autonomously, that's pretty sci-fi,

00:13:56.019 --> 00:13:58.740
like you said. It really is. But linking back

00:13:58.740 --> 00:14:02.220
to the MCPs and security, what's the single most

00:14:02.220 --> 00:14:04.639
crucial piece of advice for users thinking about

00:14:04.639 --> 00:14:07.220
connecting sensitive data like email or calendars

00:14:07.220 --> 00:14:10.200
to something like Genspark? Based on the guide,

00:14:10.360 --> 00:14:12.639
it's straightforward. Only connect tools and

00:14:12.639 --> 00:14:15.320
data sources that you absolutely trust and where

00:14:15.320 --> 00:14:17.840
you understand the security implications. Start

00:14:17.840 --> 00:14:20.779
small, maybe. So only connect tools you absolutely

00:14:20.779 --> 00:14:24.600
trust to maintain security. Makes sense. Okay,

00:14:24.639 --> 00:14:26.240
let's wrap this up. Bring it all together. The

00:14:26.240 --> 00:14:28.580
final verdict, the value proposition. What's

00:14:28.580 --> 00:14:30.919
the takeaway? Well, the best feature, hands down,

00:14:31.179 --> 00:14:33.200
seems to be that multi-agent workflow for chat:

00:14:33.440 --> 00:14:35.840
giving just one prompt and getting back this

00:14:35.840 --> 00:14:38.480
synthesized best-of-breed answer from GPT-5,

00:14:38.659 --> 00:14:41.000
Claude and Gemini simultaneously. That saves

00:14:41.000 --> 00:14:44.000
a massive amount of time and frankly mental energy.

00:14:44.279 --> 00:14:46.500
It directly tackles that core pain point we started

00:14:46.500 --> 00:14:48.899
with, the AI ping-pong. Exactly. And then there's

00:14:48.899 --> 00:14:51.320
the value, 20 bucks a month for that advanced

00:14:51.320 --> 00:14:53.899
chat plus the image generation, the video tools,

00:14:54.100 --> 00:14:56.299
the slides, the sheets, the calling agent, and

00:14:56.299 --> 00:14:58.649
all those hundreds of MCP connections. Yeah,

00:14:58.649 --> 00:15:01.090
you really can't get that breadth of functionality

00:15:01.090 --> 00:15:03.350
without stacking up several different subscriptions,

00:15:03.789 --> 00:15:05.990
which would definitely cost way more than $20.

00:15:06.289 --> 00:15:09.190
For sure. Now, we did note limitations, right?

00:15:09.649 --> 00:15:12.889
The calling agent, while cool, apparently needs

00:15:12.889 --> 00:15:15.110
some refinement. There can be slight delays in

00:15:15.110 --> 00:15:17.889
the conversation flow still. OK. And those AI

00:15:17.889 --> 00:15:21.250
slides, functional structure, good content start.

00:15:21.809 --> 00:15:23.889
But yeah, you'll need to polish the look and

00:15:23.889 --> 00:15:26.590
feel in another tool. So knowing all that, who

00:15:26.590 --> 00:15:29.330
is this really for? Who gets the most value here?

00:15:29.529 --> 00:15:32.330
I think it's ideal for content creators who need

00:15:32.330 --> 00:15:34.269
to generate different types of stuff quickly.

00:15:34.850 --> 00:15:37.250
Entrepreneurs, maybe small teams who are juggling

00:15:37.250 --> 00:15:40.769
lots of tasks on a tight budget. And fundamentally,

00:15:40.909 --> 00:15:42.769
it's for anyone who's just plain tired of having

00:15:42.769 --> 00:15:45.830
15 browser tabs open. Copying and pasting prompts

00:15:45.830 --> 00:15:47.990
between different AI models all day. Yeah, the

00:15:47.990 --> 00:15:50.990
tab switchers. So the big idea recap. The main

00:15:50.990 --> 00:15:54.009
takeaway is that Genspark really consolidates

00:15:54.269 --> 00:15:56.669
the utility of many different AI models into

00:15:56.669 --> 00:16:01.090
one place. It aims to provide good or solid capability

00:16:01.090 --> 00:16:03.850
across a whole range of tasks, which should reduce

00:16:03.850 --> 00:16:06.190
the complexity and the cognitive load for the

00:16:06.190 --> 00:16:10.289
user. It's about practical application. And if

00:16:10.289 --> 00:16:12.549
you, listening, decide you want to try it out,

00:16:12.809 --> 00:16:15.809
the advice was spend some time on day one or

00:16:15.809 --> 00:16:18.759
two setting up your profile context, defining

00:16:18.759 --> 00:16:21.960
your preferred style, your goals. Personalize

00:16:21.960 --> 00:16:24.179
it. And then maybe use those detailed structured

00:16:24.179 --> 00:16:26.120
prompts we talked about today, like that Hanoi

00:16:26.120 --> 00:16:28.700
coffee prompt or the B2B sales training outline.

00:16:28.879 --> 00:16:31.299
Use those as templates to get the best results

00:16:31.299 --> 00:16:33.700
early on. Good starting points. So final thought,

00:16:33.840 --> 00:16:35.799
then. We just talked about an AI that can potentially

00:16:35.799 --> 00:16:38.740
handle a multi-step price negotiation over the

00:16:38.740 --> 00:16:41.179
phone with a real person. If it can do that today,

00:16:41.689 --> 00:16:44.750
what's the actual limit for AI assistants accessing

00:16:44.750 --> 00:16:47.429
and managing our real-world tasks maybe next

00:16:47.429 --> 00:16:49.250
year or the year after? Where does this capability

00:16:49.250 --> 00:16:51.450
lead? That's the big question, isn't it? Something

00:16:51.450 --> 00:16:52.529
to definitely keep an eye on.
