WEBVTT

00:00:00.000 --> 00:00:02.980
It is strange that asking an AI for a simple

00:00:02.980 --> 00:00:05.219
diet plan might soon be illegal in New York.

00:00:05.459 --> 00:00:08.320
Yet that exact same technology is actively saving

00:00:08.320 --> 00:00:11.359
lives in southern Africa. It does this by reading

00:00:11.359 --> 00:00:14.439
old newspaper clippings. It is a strange paradox.

00:00:14.699 --> 00:00:16.800
Oh, right. And it gets weird fast. The landscape

00:00:16.800 --> 00:00:19.120
is shifting constantly. You know, we're watching

00:00:19.120 --> 00:00:21.660
society scramble to keep up. Welcome to this

00:00:21.660 --> 00:00:24.480
deep dive. We are exploring the friction of integrating

00:00:24.480 --> 00:00:27.820
AI into society today. First, we will examine

00:00:27.820 --> 00:00:30.899
a new New York bill. It tries to ban AI from

00:00:30.899 --> 00:00:33.750
acting like a licensed professional. Then we

00:00:33.750 --> 00:00:36.850
look at the Wild West of everyday AI tools. We

00:00:36.850 --> 00:00:38.909
will uncover some unexpected economic ripple

00:00:38.909 --> 00:00:41.390
effects there. Finally, we explore a brilliant

00:00:41.390 --> 00:00:43.890
pivot by Google. They're predicting flash floods

00:00:43.890 --> 00:00:46.409
using written text. It is a massive spectrum

00:00:46.409 --> 00:00:49.030
of human adaptation. We're watching society try

00:00:49.030 --> 00:00:51.030
to build guardrails. At the same time, the technology

00:00:51.030 --> 00:00:53.329
is sprinting right past them. Let us start with

00:00:53.329 --> 00:00:55.170
what is happening in New York. State lawmakers

00:00:55.170 --> 00:00:57.609
are advancing a very specific, aggressive bill.

00:00:57.729 --> 00:01:00.729
It targets AI tools acting like licensed professionals.

00:01:01.030 --> 00:01:04.090
The scope here is incredibly broad. I mean, we're

00:01:04.090 --> 00:01:06.969
talking about 14 different licensed professions.

00:01:07.390 --> 00:01:10.969
This includes medicine, law, psychology, and

00:01:10.969 --> 00:01:14.129
engineering. It even covers dentistry and social

00:01:14.129 --> 00:01:17.450
work. The core of the bill is a strict ban. It

00:01:17.450 --> 00:01:19.510
would make it illegal for software to give a

00:01:19.510 --> 00:01:22.530
substantive response in those areas. That applies

00:01:22.530 --> 00:01:24.709
to anything normally requiring a state license.

00:01:24.989 --> 00:01:27.549
This is a radical departure from normal tech

00:01:27.549 --> 00:01:30.409
regulation. Usually, tech regulation relies on

00:01:30.409 --> 00:01:33.170
government fines. An agency investigates an issue,

00:01:33.290 --> 00:01:36.290
then issues a penalty. This bill is completely

00:01:36.290 --> 00:01:39.469
different. It includes a private right of action.

00:01:39.709 --> 00:01:42.290
Let us unpack the mechanics of that. A private

00:01:42.290 --> 00:01:45.150
right of action. That means users can sue the

00:01:45.150 --> 00:01:47.269
chatbot providers directly. Exactly. You don't

00:01:47.269 --> 00:01:48.989
have to wait for a government regulator at all.

00:01:49.049 --> 00:01:51.290
Any regular citizen can file a civil lawsuit.

00:01:51.569 --> 00:01:53.969
If they believe the AI gave professional advice,

00:01:54.250 --> 00:01:57.069
they can sue. And companies can't just hide behind

00:01:57.069 --> 00:01:59.030
their terms of service. The sources state that

00:01:59.030 --> 00:02:01.129
adding warnings will not protect the developers.

00:02:01.549 --> 00:02:04.390
Legal disclaimers are completely useless under

00:02:04.390 --> 00:02:07.010
this proposed law. Yeah, users can still sue

00:02:07.010 --> 00:02:09.750
the providers directly. And if it passes, this

00:02:09.750 --> 00:02:12.990
law moves incredibly fast. It takes effect roughly

00:02:12.990 --> 00:02:15.229
three months after the governor signs it. That

00:02:15.229 --> 00:02:17.550
is practically overnight in the tech world, giving

00:02:17.550 --> 00:02:19.889
companies three months to audit entire language

00:02:19.889 --> 00:02:21.849
models. That seems practically impossible to

00:02:21.849 --> 00:02:24.629
execute. It probably is impossible. And here

00:02:24.629 --> 00:02:27.610
is the real underlying issue. The bill does not

00:02:27.610 --> 00:02:30.009
clearly define what counts as a substantive response.

00:02:30.389 --> 00:02:34.189
That is the ultimate danger zone. It is

00:02:34.189 --> 00:02:36.669
kind of like outlawing medical textbooks just

00:02:36.669 --> 00:02:39.409
because they contain medical facts and then letting

00:02:39.409 --> 00:02:41.969
the readers sue the printing press. That is a

00:02:41.969 --> 00:02:44.789
perfect analogy. A medical textbook is full of

00:02:44.789 --> 00:02:48.139
substantive medical facts. But we don't sue publishers

00:02:48.139 --> 00:02:51.080
for practicing unlicensed medicine. The New York

00:02:51.080 --> 00:02:53.419
bill essentially says that if an AI reads that

00:02:53.419 --> 00:02:55.819
textbook and then summarizes a paragraph for

00:02:55.819 --> 00:02:58.919
you, the developer is suddenly liable. It creates

00:02:58.919 --> 00:03:01.520
a massive gray area. This ambiguity could make

00:03:01.520 --> 00:03:03.819
harmless educational responses legally risky.

00:03:04.250 --> 00:03:06.009
Think about the chilling effect this creates.

00:03:06.710 --> 00:03:09.050
Developers might just block any query related

00:03:09.050 --> 00:03:12.289
to those 14 professions. They will heavily over

00:03:12.289 --> 00:03:15.009
censor the models just to avoid frivolous lawsuits.

00:03:15.430 --> 00:03:17.810
Let me ask you this. If a student asks an AI

00:03:17.810 --> 00:03:21.069
to explain a legal concept for school, does the

00:03:21.069 --> 00:03:24.090
developer get sued? The sheer ambiguity of the

00:03:24.090 --> 00:03:27.729
word substantive creates that exact risk. A thorough,

00:03:27.889 --> 00:03:30.650
helpful explanation could easily be misconstrued

00:03:30.650 --> 00:03:33.189
as legal advice. A jury would have to decide.

00:03:33.659 --> 00:03:35.960
So basic education becomes a legal minefield

00:03:35.960 --> 00:03:38.030
for developers. Yes. They will have to choose

00:03:38.030 --> 00:03:40.590
between educational utility and legal safety.

00:03:40.810 --> 00:03:43.129
In a corporate environment, safety will almost

00:03:43.129 --> 00:03:45.569
always win. While regulators try to draw neat

00:03:45.569 --> 00:03:48.370
lines, the actual technology is spilling into

00:03:48.370 --> 00:03:50.289
everything. It's creating totally unexpected

00:03:50.289 --> 00:03:52.889
economic loops. It really is a messy reality.

00:03:53.129 --> 00:03:55.270
Let's look at the strange economics of coding

00:03:55.270 --> 00:03:58.430
right now. AI makes writing code much faster

00:03:58.430 --> 00:04:00.710
and cheaper, but hiring of software engineers

00:04:00.710 --> 00:04:03.689
is actually increasing. That seems entirely counterintuitive.

00:04:03.849 --> 00:04:06.389
If the machine writes the code, why hire more

00:04:06.389 --> 00:04:08.189
human engineers? Because companies now

00:04:08.189 --> 00:04:10.370
realize they want to build more software, the

00:04:10.370 --> 00:04:13.030
overall demand for software is practically infinite.

00:04:13.349 --> 00:04:16.089
We are seeing a classic economic principle playing

00:04:16.089 --> 00:04:18.470
out here. You mean the Jevons paradox? Let's

00:04:18.470 --> 00:04:21.149
define that. It means efficiency makes a resource

00:04:21.149 --> 00:04:24.319
cheaper, so people use more of it. Yes, exactly.

00:04:24.560 --> 00:04:28.639
Lowering the cost of code just unlocks new, highly

00:04:28.639 --> 00:04:31.459
ambitious projects. We aren't replacing the engineers.

00:04:31.680 --> 00:04:35.000
We are massively scaling their output. And the

00:04:35.000 --> 00:04:37.879
tools those engineers use are evolving rapidly.

00:04:38.509 --> 00:04:40.810
We're moving away from simple chatbots toward

00:04:40.810 --> 00:04:43.470
autonomous systems. I still wrestle with prompt

00:04:43.470 --> 00:04:46.730
drift myself. Oh, we all do. You ask an AI to

00:04:46.730 --> 00:04:49.189
draft a simple email. By the third revision,

00:04:49.350 --> 00:04:51.790
it is somehow talking like a 19th century poet.

00:04:51.970 --> 00:04:54.829
Keeping it on track is exhausting. That is exactly

00:04:54.829 --> 00:04:57.269
why the industry is moving toward AI agents.

00:04:57.470 --> 00:04:59.370
To clarify what that means, it is software that

00:04:59.370 --> 00:05:00.930
runs in the background and takes actions for

00:05:00.930 --> 00:05:03.500
you. Perfect definition. Look at what Perplexity

00:05:03.500 --> 00:05:05.100
is doing right now. They just launched something

00:05:05.100 --> 00:05:07.540
called Personal Computer. It is an always-on

00:05:07.540 --> 00:05:10.620
AI agent running on a Mac Mini. Hold on. Giving

00:05:10.620 --> 00:05:13.680
an AI constant access to my screen and files?

00:05:14.519 --> 00:05:17.019
That sounds like a privacy nightmare waiting

00:05:17.019 --> 00:05:19.319
to happen. Why would anyone opt into that? It

00:05:19.319 --> 00:05:21.850
is definitely a privacy trade-off. But people

00:05:21.850 --> 00:05:24.970
opt in because it delegates tedious digital chores.

00:05:25.250 --> 00:05:27.529
It can control your files, your apps, and your

00:05:27.529 --> 00:05:29.810
web sessions. It is fundamentally different from

00:05:29.810 --> 00:05:32.129
a search engine. It operates your machine for

00:05:32.129 --> 00:05:35.290
you. Google is taking a similar interactive approach,

00:05:35.389 --> 00:05:37.730
right? They are quietly updating Google Maps.

00:05:38.009 --> 00:05:40.509
Instead of typing a search, you can now talk

00:05:40.509 --> 00:05:42.769
to it. Yeah, they added a new Ask Maps feature.

00:05:43.230 --> 00:05:45.589
You can ask complex, highly contextual questions

00:05:45.589 --> 00:05:47.810
naturally. For example, you can ask where to

00:05:47.810 --> 00:05:50.230
charge a phone nearby. It understands the context

00:05:50.230 --> 00:05:53.009
and finds a location. It is rolling out to users

00:05:53.009 --> 00:05:55.470
right now. It isn't just about productivity and

00:05:55.470 --> 00:05:57.509
mapping, though. The actual personalities of

00:05:57.509 --> 00:05:59.509
these models are shifting dramatically. This

00:05:59.509 --> 00:06:01.990
is where things get truly weird. Amazon just

00:06:01.990 --> 00:06:04.730
added a sassy personality to their new Alexa

00:06:04.730 --> 00:06:08.220
Plus. Sassy. For a household smart speaker? Yes.

00:06:08.379 --> 00:06:10.639
It is an adults-only mode for the assistant.

00:06:10.800 --> 00:06:13.600
It can actually curse at you or roast your questions.

00:06:13.920 --> 00:06:16.759
That is hilarious. But why would engineers intentionally

00:06:16.759 --> 00:06:19.439
build that? Well, it comes down to human psychology.

00:06:19.660 --> 00:06:22.439
People form parasocial relationships with AI.

00:06:22.800 --> 00:06:25.439
Perfect, subservient software is actually pretty

00:06:25.439 --> 00:06:28.360
boring. It feels robotic. But when a piece of

00:06:28.360 --> 00:06:31.699
software roasts your music taste, it mimics human

00:06:31.699 --> 00:06:35.079
friction. It feels more real. That makes a lot

00:06:35.079 --> 00:06:37.439
of sense. We actually prefer technology that

00:06:37.439 --> 00:06:40.319
mimics our own flaws. Exactly. Though it still

00:06:40.319 --> 00:06:43.939
blocks actual NSFW content, of course. You even

00:06:43.939 --> 00:06:46.259
have to pass extra security checks just to enable

00:06:46.259 --> 00:06:48.420
the sassy mode. Meanwhile, other tech giants

00:06:48.420 --> 00:06:50.939
are stumbling behind the scenes. Meta has a secret

00:06:50.939 --> 00:06:53.879
AI model called Avocado. Right. They just had

00:06:53.879 --> 00:06:56.579
to delay Avocado. They reportedly had some very

00:06:56.579 --> 00:06:58.980
disappointing internal tests. The official launch

00:06:58.980 --> 00:07:01.629
is pushed back to May. And the industry rumors

00:07:01.629 --> 00:07:04.290
surrounding this are wild. Meta might actually

00:07:04.290 --> 00:07:06.149
consider using Google Gemini in the meantime.

00:07:06.430 --> 00:07:08.709
That would be a massive shift in the AI arms

00:07:08.709 --> 00:07:11.709
race. Meta using a Google model would be a huge

00:07:11.709 --> 00:07:14.089
concession. It shows how hard it is to build

00:07:14.089 --> 00:07:16.550
foundational intelligence from scratch. There

00:07:16.550 --> 00:07:20.050
is also massive money flowing into AI video generation

00:07:20.050 --> 00:07:23.129
right now. Pixverse just secured $300 million

00:07:23.129 --> 00:07:25.850
in funding. And that funding is heavily backed

00:07:25.850 --> 00:07:28.550
by Alibaba. It reflects growing confidence

00:07:28.550 --> 00:07:31.750
in Pixverse specifically. The AI video sector

00:07:31.750 --> 00:07:34.269
is just exploding right now. Speaking of AI video,

00:07:34.529 --> 00:07:36.910
it is already entering the geopolitical arena.

00:07:37.170 --> 00:07:39.870
We need to look at this very objectively. A Chinese

00:07:39.870 --> 00:07:42.990
embassy recently posted an AI-generated video

00:07:42.990 --> 00:07:46.410
online. It was directly mocking a U.S. policy

00:07:46.410 --> 00:07:48.970
proposal. They were mocking Trump's Shield of

00:07:48.970 --> 00:07:51.589
the Americas concept. The video showed a U.S.

00:07:51.610 --> 00:07:55.319
eagle promising security. Then... The eagle aggressively

00:07:55.319 --> 00:07:58.339
locks the region inside a cage. Now, we must

00:07:58.339 --> 00:08:00.920
be perfectly clear here. Right. We are not endorsing

00:08:00.920 --> 00:08:03.300
any political viewpoint left or right. We are

00:08:03.300 --> 00:08:05.699
merely analyzing this as an objective example

00:08:05.699 --> 00:08:08.279
of the technology. Exactly. We are looking at

00:08:08.279 --> 00:08:11.139
how AI video tools are being deployed. They are

00:08:11.139 --> 00:08:13.819
now actively used in global geopolitical messaging.

00:08:14.170 --> 00:08:16.670
It drastically lowers the barrier for high-quality

00:08:16.670 --> 00:08:19.250
propaganda. Anyone can generate a compelling

00:08:19.250 --> 00:08:21.430
metaphorical video in minutes. You don't need

00:08:21.430 --> 00:08:23.329
an animation studio anymore. You just need a

00:08:23.329 --> 00:08:25.449
prompt. This brings me back to the economics

00:08:25.449 --> 00:08:29.430
of it all. Will always -on AI agents eventually

00:08:29.430 --> 00:08:33.409
break the Jevons paradox loop by completely replacing

00:08:33.409 --> 00:08:36.549
the need for human developers to build that extra

00:08:36.549 --> 00:08:39.850
software? Not anytime soon. Right now, it just

00:08:39.850 --> 00:08:42.210
means humans manage the agents to build even

00:08:42.210 --> 00:08:45.350
more complex systems. We become directors, not

00:08:45.350 --> 00:08:47.669
just typists. Lowering the barrier just raises

00:08:47.669 --> 00:08:49.909
the ceiling for what we build. Precisely. We

00:08:49.909 --> 00:08:51.750
just keep building taller and taller digital

00:08:51.750 --> 00:08:54.190
structures. We will continue our deep dive in

00:08:54.190 --> 00:08:56.330
just a moment. Stick around.

00:08:56.330 --> 00:09:00.269
And we are back. It is easy to get distracted

00:09:00.269 --> 00:09:02.629
by sassy smart speakers, but there is a much

00:09:02.629 --> 00:09:05.429
quieter AI breakthrough happening that is literally

00:09:05.429 --> 00:09:08.259
saving lives. We need to talk about the reality

00:09:08.259 --> 00:09:11.080
of flash floods. They are incredibly destructive

00:09:11.080 --> 00:09:13.779
natural disasters. They kill over 5,000 people

00:09:13.779 --> 00:09:16.000
every single year. Flash floods are incredibly

00:09:16.000 --> 00:09:18.320
destructive, mostly because they're notoriously

00:09:18.320 --> 00:09:21.279
hard to predict. They happen extremely fast and

00:09:21.279 --> 00:09:23.360
are hyper-localized to specific neighborhoods.

00:09:24.029 --> 00:09:26.970
Usually predicting them requires expensive physical

00:09:26.970 --> 00:09:29.990
infrastructure. You need physical river gauges

00:09:29.990 --> 00:09:32.750
installed. You need advanced local radar systems

00:09:32.750 --> 00:09:36.049
scanning the skies. Many countries simply do

00:09:36.049 --> 00:09:38.509
not have that kind of infrastructure. Building

00:09:38.509 --> 00:09:41.269
a national radar network costs billions of dollars.

00:09:41.570 --> 00:09:44.370
So Google came up with a genuinely brilliant

00:09:44.370 --> 00:09:47.990
alternative. They decided to teach an AI to read

00:09:47.990 --> 00:09:51.710
the news. They analyzed five million news articles

00:09:51.710 --> 00:09:54.320
from around the world. That is a staggering amount

00:09:54.320 --> 00:09:57.059
of unstructured text. They used advanced natural

00:09:57.059 --> 00:09:59.799
language processing. The AI wasn't just searching

00:09:59.799 --> 00:10:02.279
for the word flood. It had to understand the

00:10:02.279 --> 00:10:04.360
linguistic context. Right. It has to distinguish

00:10:04.360 --> 00:10:07.480
between a literal disaster and a metaphor. A

00:10:07.480 --> 00:10:10.000
flood of tears or a flood of emails isn't a weather

00:10:10.000 --> 00:10:12.580
event. Exactly. It filtered out the metaphors

00:10:12.580 --> 00:10:14.960
to extract reports of actual flood events from

00:10:14.960 --> 00:10:16.860
those historical news stories. They essentially

00:10:16.860 --> 00:10:19.179
turned historical journalism into historical

00:10:19.179 --> 00:10:21.480
weather data. They did. From this text, they

00:10:21.480 --> 00:10:23.679
built a brand new data set. They call it the

00:10:23.679 --> 00:10:25.700
ground source data set. Ground source contains

00:10:25.700 --> 00:10:29.419
2.6 million individual flood reports. All of

00:10:29.419 --> 00:10:31.299
them were identified purely from written news

00:10:31.299 --> 00:10:34.620
coverage. They meticulously geotagged the locations

00:10:34.620 --> 00:10:37.919
mentioned in the text. They accurately timestamped

00:10:37.919 --> 00:10:40.279
the flood events based on the publication dates.

00:10:40.500 --> 00:10:44.240
Oh. Imagine scaling millions

00:10:44.240 --> 00:10:46.879
of old news clippings into a global radar. It

00:10:46.879 --> 00:10:49.399
is a profound shift in scientific thinking. It

00:10:49.399 --> 00:10:51.879
is a historical weather data set created from

00:10:51.879 --> 00:10:53.740
written reports instead of physical sensors.

00:10:54.100 --> 00:10:57.000
They used this ground source data set to train

00:10:57.000 --> 00:10:59.879
a predictive forecasting model. Yes. They've

00:10:59.879 --> 00:11:02.720
fed this massive historical pattern into an AI.

00:11:02.980 --> 00:11:06.220
By understanding exactly where and when it flooded

00:11:06.220 --> 00:11:09.379
before, the AI learns the geographic vulnerabilities.

00:11:09.779 --> 00:11:12.519
Now, the system is integrated into Google Flood

00:11:12.519 --> 00:11:15.250
Hub. It identifies flood risks in urban areas

00:11:15.250 --> 00:11:18.450
across 150 countries. It shares these alerts

00:11:18.450 --> 00:11:20.850
directly with emergency response agencies. And

00:11:20.850 --> 00:11:22.769
it is already working in the real world. One

00:11:22.769 --> 00:11:24.970
emergency response official in southern Africa

00:11:24.970 --> 00:11:27.809
recently spoke about the trials. They said the

00:11:27.809 --> 00:11:29.970
system actually helped them respond faster to

00:11:29.970 --> 00:11:32.929
incoming flood events. Real human lives are being

00:11:32.929 --> 00:11:35.450
positively impacted by this. It is important

00:11:35.450 --> 00:11:37.730
to acknowledge the scientific limitations, though.

00:11:38.299 --> 00:11:40.799
The predictions currently cover areas of about

00:11:40.799 --> 00:11:43.860
20 square kilometers. That is fairly broad. It

00:11:43.860 --> 00:11:46.139
is not exactly block by block precision. Right.

00:11:46.240 --> 00:11:49.519
It is not as precise as systems like the US National

00:11:49.519 --> 00:11:52.940
Weather Service. Those systems use high fidelity

00:11:52.940 --> 00:11:56.919
local radar data for pinpoint accuracy. But this

00:11:56.919 --> 00:11:59.620
project wasn't built to replace local radar systems.

00:12:00.000 --> 00:12:03.419
It was designed specifically for regions entirely

00:12:03.419 --> 00:12:06.519
lacking advanced weather infrastructure. It

00:12:06.519 --> 00:12:08.159
is for countries where traditional forecasting

00:12:08.159 --> 00:12:10.759
tools are completely unavailable. It provides

00:12:10.759 --> 00:12:13.240
a crucial baseline of safety where there was

00:12:13.240 --> 00:12:16.179
previously none. Is this text-to-sensor methodology

00:12:16.179 --> 00:12:19.519
the missing link for developing nations that

00:12:19.519 --> 00:12:21.460
can't afford billion-dollar weather arrays?

00:12:21.759 --> 00:12:25.179
Absolutely. It shows how historical text leapfrogs

00:12:25.179 --> 00:12:27.399
the need for expensive physical infrastructure.

00:12:27.600 --> 00:12:30.019
You leverage the conversational data that already

00:12:30.019 --> 00:12:32.840
exists in the world. So text data acts as virtual

00:12:32.840 --> 00:12:35.080
weather sensors for the past. Yes. And by deeply

00:12:35.080 --> 00:12:37.159
understanding the past, the AI can predict the

00:12:37.159 --> 00:12:39.940
future. It is a brilliant, life-saving substitution

00:12:39.940 --> 00:12:42.659
of resources. We have covered a lot of fascinating

00:12:42.659 --> 00:12:45.250
ground today. We are currently stuck in a very

00:12:45.250 --> 00:12:47.929
messy technological transition. We really are.

00:12:48.070 --> 00:12:50.610
We are watching society wrestle with a fundamental

00:12:50.610 --> 00:12:54.090
shift in capability. On one end, we have blunt

00:12:54.090 --> 00:12:56.850
legal instruments. Look at the proposed New York

00:12:56.850 --> 00:13:00.610
bill. It is trying to shove AI back into a neat

00:13:00.610 --> 00:13:03.649
professional box. It is trying to apply old liability

00:13:03.649 --> 00:13:06.659
frameworks to a totally new paradigm. By doing

00:13:06.659 --> 00:13:09.259
so, it will probably stifle basic education in

00:13:09.259 --> 00:13:11.440
the process. On the other end, we have the messy

00:13:11.440 --> 00:13:14.179
reality of the Wild West. The Jevons paradox

00:13:14.179 --> 00:13:16.779
is driving software demand through the roof.

00:13:16.940 --> 00:13:19.220
We have sassy Amazon assistants intentionally

00:13:19.220 --> 00:13:22.659
roasting us. We have geopolitical AI memes rapidly

00:13:22.659 --> 00:13:25.299
reshaping digital diplomacy. The friction of

00:13:25.299 --> 00:13:27.259
integration is everywhere. But beneath all that

00:13:27.259 --> 00:13:29.539
friction, the technology is performing quiet

00:13:29.539 --> 00:13:32.480
miracles. Look at the success of Google Flood

00:13:32.480 --> 00:13:35.429
Hub, turning 5 million old news articles into

00:13:35.429 --> 00:13:37.789
a global flood warning system. It proves that

00:13:37.789 --> 00:13:40.250
the most valuable application of AI is finding

00:13:40.250 --> 00:13:42.789
patterns in the noise. That is exactly why this

00:13:42.789 --> 00:13:44.690
matters to you, the listener. The fundamental

00:13:44.690 --> 00:13:46.870
definition of knowledge is changing rapidly.

00:13:47.149 --> 00:13:50.269
It used to be entirely about what you know. Rote

00:13:50.269 --> 00:13:53.450
memorization. Storing facts in your head. Now.

00:13:53.789 --> 00:13:55.809
Knowledge is about how you connect the dots.

00:13:56.070 --> 00:13:59.110
The raw facts are universally available to everyone.

00:13:59.370 --> 00:14:02.789
The synthesis is what actually matters. AI is

00:14:02.789 --> 00:14:05.049
the ultimate synthesis engine. It can read five

00:14:05.049 --> 00:14:07.309
million articles in an afternoon. You cannot.

00:14:08.000 --> 00:14:10.200
But you can direct the engine. You can ask the

00:14:10.200 --> 00:14:12.220
right questions. You can manage the autonomous

00:14:12.220 --> 00:14:15.019
agents. You can navigate the ethical and legal

00:14:15.019 --> 00:14:18.000
gray areas. The New York lawmakers are stubbornly

00:14:18.000 --> 00:14:20.659
focused on what the AI knows. They are missing

00:14:20.659 --> 00:14:22.659
the bigger picture of how the AI synthesizes

00:14:22.659 --> 00:14:24.620
information. Which leaves us with a fascinating

00:14:24.620 --> 00:14:26.840
prospect for the future. We're just scratching

00:14:26.840 --> 00:14:29.460
the surface of using text as a sensor. The ground

00:14:29.460 --> 00:14:31.799
source data set is just one single application.

00:14:32.200 --> 00:14:34.860
Flood prediction is just one narrow domain of

00:14:34.860 --> 00:14:37.080
human experience. Think about the sheer volume

00:14:37.080 --> 00:14:40.259
of written history we possess. Centuries of local

00:14:40.259 --> 00:14:42.799
newspapers, medical journals, and shipping logs.

00:14:43.139 --> 00:14:45.740
It is an endless ocean of unstructured human

00:14:45.740 --> 00:14:48.379
data. We have never had the tools to process

00:14:48.379 --> 00:14:50.940
it comprehensively before now. If Google can

00:14:50.940 --> 00:14:53.899
reconstruct historical weather patterns just

00:14:53.899 --> 00:14:57.120
by having an AI read the news, what other invisible

00:14:57.120 --> 00:15:00.059
patterns exist in the text humanity has already

00:15:00.059 --> 00:15:02.740
written, just waiting for an AI to connect the

00:15:02.740 --> 00:15:05.840
dots? Could we predict economic crashes or disease

00:15:05.840 --> 00:15:08.179
outbreaks purely from historical literature?

00:15:08.419 --> 00:15:10.480
It is something to think about. Thank you for

00:15:10.480 --> 00:15:11.539
joining us on this deep dive.
