WEBVTT

00:00:03.720 --> 00:00:06.240
Welcome to the Azure Security Podcast, where

00:00:06.240 --> 00:00:08.759
we discuss topics relating to security, privacy,

00:00:09.039 --> 00:00:11.480
reliability, and compliance on the Microsoft

00:00:11.480 --> 00:00:15.820
Cloud Platform. Hey everybody, welcome to episode

00:00:15.820 --> 00:00:19.199
113. This week it's myself, Michael, with Sarah

00:00:19.199 --> 00:00:23.500
and Mark. Gladys is away this week. And our guest

00:00:23.500 --> 00:00:25.820
this week is Craig Nelson, who's here to talk

00:00:25.820 --> 00:00:27.829
to us about the Microsoft Red Team. But before

00:00:27.829 --> 00:00:30.230
we get to our guest, let's take a little lap

00:00:30.230 --> 00:00:32.530
around the news. I'll kick things off. I've got

00:00:32.530 --> 00:00:35.289
just a couple of items. The first one is in public

00:00:35.289 --> 00:00:37.450
preview. We now have, in Azure Virtual Network

00:00:37.450 --> 00:00:40.929
Manager, high-scale private endpoints. There's

00:00:40.929 --> 00:00:42.750
this thing called connected groups that allows

00:00:42.750 --> 00:00:46.350
you to basically cluster together private endpoint

00:00:46.350 --> 00:00:49.130
information, just making it considerably easier

00:00:49.130 --> 00:00:52.130
to manage. If you've ever tried managing private

00:00:52.130 --> 00:00:55.009
endpoints one by one, it can be a little bit

00:00:55.009 --> 00:00:59.009
difficult. So this allows you to basically glom

00:00:59.009 --> 00:01:02.450
together 20,000 private endpoints into one connected

00:01:02.450 --> 00:01:05.569
group and manage them that way, which is absolutely

00:01:05.569 --> 00:01:08.609
magnificent. Next one is one of my favorite topics.

00:01:08.829 --> 00:01:10.730
You really need to make sure you guys got this

00:01:10.730 --> 00:01:12.750
all in place, because this is not the only

00:01:12.750 --> 00:01:15.349
product that's coming down with this, but Entra

00:01:15.349 --> 00:01:18.689
Domain Services is switching over to TLS 1.2

00:01:18.689 --> 00:01:21.980
and above, and completely deprecating 1.0 and

00:01:21.980 --> 00:01:26.200
1.1 on August 31st this year. That is not...

00:01:26.379 --> 00:01:28.560
a long way away. So you need to make sure that

00:01:28.560 --> 00:01:30.439
all your clients that are connecting to those

00:01:30.439 --> 00:01:33.680
services are enabled for TLS 1.2. To be frank,

00:01:33.799 --> 00:01:35.620
I think I've only ever come across one

00:01:35.620 --> 00:01:38.780
item where a customer had a problem and it was

00:01:38.780 --> 00:01:42.180
some really old mobile platform with some funky

00:01:42.180 --> 00:01:46.280
Java or Kotlin app that had some really old library.
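If you want to check a client's behavior here, Python's standard `ssl` module shows the pattern to aim for, and the pinning mistake to avoid. This is a generic sketch, not specific to Entra Domain Services or any particular client:

```python
import ssl

# Hedged sketch: the safe pattern is to set a *floor* on the TLS
# version and let the runtime negotiate anything newer (1.3 and beyond).
def make_client_context():
    ctx = ssl.create_default_context()             # verified defaults
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # a floor, not a pin
    return ctx

# The anti-pattern: hard-coding one exact version. A client built this
# way can never negotiate TLS 1.3 and breaks the day the server drops 1.2.
def make_pinned_context_do_not_do_this():
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2   # pinned: 1.2 only
    return ctx
```

The first context keeps working across future TLS versions; the second is exactly the hardcoding mistake that strands clients when a protocol version is retired.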

00:01:46.519 --> 00:01:48.540
The other mistake I've seen people make is when

00:01:48.540 --> 00:01:50.579
they actually hard code the TLS requirements

00:01:50.579 --> 00:01:53.939
into the client. So it can't connect to 1.2

00:01:53.939 --> 00:01:58.760
or 1.3. Don't go doing that. Okay, so me next. Couple

00:01:58.760 --> 00:02:01.260
of things. Well, one's a big one, but first up,

00:02:01.359 --> 00:02:04.620
we've got in Azure Container Registry public

00:02:04.620 --> 00:02:08.020
preview of continuous patching, which is lovely

00:02:08.020 --> 00:02:10.240
because it means we can patch things without

00:02:10.240 --> 00:02:13.900
having to rebuild containers. So go and have

00:02:13.900 --> 00:02:16.639
a look at that if you are using containers, which

00:02:16.639 --> 00:02:19.680
no doubt somebody is, or most people are in their

00:02:19.680 --> 00:02:22.979
environments nowadays. So my other bit of news

00:02:22.979 --> 00:02:26.550
is around MCP. Now, MCP, at the time of recording

00:02:26.550 --> 00:02:30.330
this, has absolutely exploded in the sort of

00:02:30.330 --> 00:02:32.750
last six weeks, I'd say. If you haven't heard

00:02:32.750 --> 00:02:36.969
of it, MCP is the Model Context Protocol. It's

00:02:36.969 --> 00:02:40.629
open source and being driven by Anthropic. It

00:02:40.629 --> 00:02:43.590
allows agents and applications to discover and

00:02:43.590 --> 00:02:45.930
invoke tools in a standardized way, so you can

00:02:45.930 --> 00:02:49.379
think of it maybe a little bit like a USB standard.
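To make that concrete, here is a hedged sketch of what MCP messages look like on the wire: the protocol is built on JSON-RPC 2.0, and (per the spec as it stands, which is still changing) a client discovers tools with a `tools/list` request and invokes one with `tools/call`. The `get_weather` tool and its arguments below are made up purely for illustration:

```python
import json

# Serialize a JSON-RPC 2.0 request of the kind an MCP client sends.
def make_request(request_id, method, params=None):
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. Ask the server which tools it offers.
list_req = make_request(1, "tools/list")

# 2. Invoke a (hypothetical) tool by name, with structured arguments.
call_req = make_request(2, "tools/call", {
    "name": "get_weather",                # hypothetical tool name
    "arguments": {"city": "Seattle"},     # hypothetical arguments
})
```

The standardized discover-then-invoke shape is the whole point of the "USB standard" analogy: any client that speaks these messages can use any server's tools.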

00:02:49.979 --> 00:02:54.719
It's a client-server model. It's very new. The

00:02:54.719 --> 00:02:57.439
spec is changing regularly. And of course, there

00:02:57.439 --> 00:03:02.270
are some security risks around any new technology

00:03:02.270 --> 00:03:04.689
that we introduce. So I wrote a couple of blog

00:03:04.689 --> 00:03:08.389
posts on it. One is a general overview. Another

00:03:08.389 --> 00:03:12.050
one is about a particular type of attack that

00:03:12.050 --> 00:03:14.430
some researchers have found against MCP. It is

00:03:14.430 --> 00:03:18.069
actually just indirect prompt injection. So go

00:03:18.069 --> 00:03:21.210
and look at that. Also, we'll be definitely hearing

00:03:21.210 --> 00:03:24.629
more about MCP. And I understand it was actually

00:03:24.629 --> 00:03:28.090
mentioned at Blue Hat India yesterday. Also,

00:03:28.189 --> 00:03:32.159
on that same thread, we've also gone all in:

00:03:32.159 --> 00:03:34.500
we've announced that we're going to support the

00:03:34.500 --> 00:03:37.000
agent-to-agent protocol in our products. Now, the

00:03:37.000 --> 00:03:39.719
agent-to-agent protocol is similar to MCP: it's open

00:03:39.719 --> 00:03:42.039
source again, it's being driven by Google, and

00:03:42.039 --> 00:03:45.199
it is a standardized way that agents

00:03:45.199 --> 00:03:47.900
can talk between themselves, rather than connect

00:03:47.900 --> 00:03:50.120
to resources. So that's the difference between

00:03:50.120 --> 00:03:53.259
the two. It's all very new and changing very quickly,

00:03:53.259 --> 00:03:56.900
so go and keep an eye on that. But certainly, if

00:03:56.900 --> 00:04:00.759
your devs have started wanting to use MCP, that's

00:04:00.759 --> 00:04:02.800
something that you want to go read up on and

00:04:02.800 --> 00:04:08.360
educate yourself about ASAP. Cool. So in my area,

00:04:08.479 --> 00:04:13.780
big thing is the MCRA, the Microsoft Cybersecurity

00:04:13.780 --> 00:04:16.980
Reference Architecture, has been released. The

00:04:16.980 --> 00:04:19.959
April 2025 edition is the current version; it

00:04:20.079 --> 00:04:24.699
replaces the December 2023 edition. It's mostly the

00:04:24.699 --> 00:04:26.959
same kind of format, no real changes like

00:04:26.959 --> 00:04:29.139
the previous revisions where we went from kind

00:04:29.139 --> 00:04:31.339
of a single slide to a bunch of complex slides

00:04:31.339 --> 00:04:35.079
or to a full narrative kind of landing why end

00:04:35.079 --> 00:04:36.579
-to-end security is important, which were kind

00:04:36.579 --> 00:04:38.759
of previous things. In this case, it's sort of

00:04:38.759 --> 00:04:41.019
an enhancement of the current structure and format.

00:04:41.860 --> 00:04:44.160
And one of the big, big things that we did there

00:04:44.160 --> 00:04:47.000
was some of the work that I and others have been

00:04:47.000 --> 00:04:49.920
doing with the open group to define roles and

00:04:49.920 --> 00:04:52.240
capabilities and all sorts of things in security

00:04:52.240 --> 00:04:55.420
that have been either partially defined or not

00:04:55.420 --> 00:04:58.680
defined well or just not defined at all. So some

00:04:58.680 --> 00:05:01.519
of that work made it into the MCRA. So lots of

00:05:01.519 --> 00:05:04.199
goodness there. I've been continuing

00:05:04.199 --> 00:05:06.319
to do a lot of work in that space. And it's

00:05:06.319 --> 00:05:07.959
just really interesting when you lay out all

00:05:07.959 --> 00:05:09.560
the different roles and responsibilities from

00:05:09.560 --> 00:05:14.180
board members to CEOs to lawyers and finance

00:05:14.180 --> 00:05:16.959
people and people in the security operations

00:05:16.959 --> 00:05:20.240
or sec ops or SOC and IT engineers. And when

00:05:20.240 --> 00:05:22.600
you bring all those things together and connect

00:05:22.600 --> 00:05:25.579
them and list them out and say, what is the thing

00:05:25.579 --> 00:05:27.839
you need to do? Kind of that old Office Space,

00:05:27.959 --> 00:05:30.019
what would you say you do here? It's just really

00:05:30.019 --> 00:05:33.240
interesting to pull all that together and

00:05:33.240 --> 00:05:35.259
kind of see how things actually work and should

00:05:35.259 --> 00:05:37.600
work. It's like seeing the human system as a

00:05:37.600 --> 00:05:41.079
system, like a technical system. And so it's

00:05:41.079 --> 00:05:43.839
been a lot of fun, a lot of challenges, and a

00:05:43.839 --> 00:05:46.000
lot of... well, what really is the difference

00:05:46.000 --> 00:05:48.279
between a chief digital officer and a chief technology

00:05:48.279 --> 00:05:51.240
officer and a chief architect and a CIO? And

00:05:51.240 --> 00:05:53.399
so there's all sorts of interesting things that

00:05:53.399 --> 00:05:55.699
come up from that. But we're publishing as much

00:05:55.699 --> 00:05:57.620
as we learn as we can. I'm going to include a

00:05:57.620 --> 00:05:59.680
few links in the show notes of the stuff that

00:05:59.680 --> 00:06:01.519
isn't quite in the standards yet, but we're looking

00:06:01.519 --> 00:06:03.500
to get feedback on. But yeah, that's the big

00:06:03.500 --> 00:06:06.060
one is the MCRA. So we'll drop the link for that

00:06:06.060 --> 00:06:07.819
out there. All right. So as I mentioned at the

00:06:07.819 --> 00:06:12.379
top of the podcast, our guest this week is Craig

00:06:12.379 --> 00:06:14.519
Nelson. Craig, welcome to the podcast. We'd like you

00:06:14.519 --> 00:06:16.800
to take a moment and introduce yourself to our

00:06:16.800 --> 00:06:19.379
listeners. All right. Thank you. I'm a longtime

00:06:19.379 --> 00:06:21.459
listener of this podcast, so I really appreciate

00:06:21.459 --> 00:06:24.160
the chance to be here with you today. I've been

00:06:24.160 --> 00:06:26.420
with Microsoft for about 18 years, and I started

00:06:26.420 --> 00:06:28.319
out as one of the first engineers focused on

00:06:28.319 --> 00:06:30.779
securing Microsoft's cloud. And back then, the

00:06:30.779 --> 00:06:32.819
cloud was very small and the problems were very

00:06:32.819 --> 00:06:34.980
different. And I've grown up with cloud and I've

00:06:34.980 --> 00:06:38.040
had the privilege of being part of the evolution

00:06:38.040 --> 00:06:41.100
of both the technology and the threats that we

00:06:41.100 --> 00:06:45.220
see. So today I'm the VP of Microsoft's Red Team,

00:06:45.339 --> 00:06:48.379
and it's absolutely an awesome job because Microsoft

00:06:48.379 --> 00:06:50.519
takes Red Team incredibly seriously, and it's

00:06:50.519 --> 00:06:55.060
not just a checkbox. We work directly with engineers

00:06:55.060 --> 00:06:59.939
who shape security strategy, build secure architecture,

00:07:00.079 --> 00:07:04.399
and shape the investment decisions across the

00:07:04.399 --> 00:07:07.660
company. So as you can imagine, Microsoft is

00:07:07.660 --> 00:07:09.579
constantly targeted by some of the world's most

00:07:09.579 --> 00:07:11.920
advanced threat actors, who may actually be

00:07:11.920 --> 00:07:13.399
listening to this podcast. So I have to be very

00:07:13.399 --> 00:07:15.519
careful about what I share. But what I can say

00:07:15.519 --> 00:07:18.480
is that our red team is structured to view Microsoft

00:07:18.480 --> 00:07:21.000
in the same way that real attackers do. Because

00:07:21.000 --> 00:07:24.100
threat actors do not respect organizational boundaries.

00:07:24.180 --> 00:07:26.439
So neither do we. We look at Microsoft end to

00:07:26.439 --> 00:07:30.680
end. The red team is a centralized group of engineers

00:07:30.680 --> 00:07:33.819
who run breach operations across Microsoft infrastructure.

00:07:34.399 --> 00:07:38.579
That is the core of the team. And that's backed

00:07:38.579 --> 00:07:41.939
by specialists who also perform deep technical

00:07:41.939 --> 00:07:44.319
research and exploitation so we can gain what

00:07:44.319 --> 00:07:46.500
we call high value attack positions. We have

00:07:46.500 --> 00:07:48.360
an intelligence team that built a security graph

00:07:48.360 --> 00:07:50.839
so we can understand attack paths and risks and

00:07:50.839 --> 00:07:52.819
perform things like center of gravity analysis

00:07:52.819 --> 00:07:55.459
and an engineering team that builds the tools

00:07:55.459 --> 00:07:58.939
and AI that we need to scale our work. But in

00:07:58.939 --> 00:08:02.060
many ways, we operate like a nation state level

00:08:02.060 --> 00:08:05.600
offensive team. But there is one major critical

00:08:05.600 --> 00:08:07.959
difference. Our mission is actually entirely

00:08:07.959 --> 00:08:10.100
defensive. We don't attack anything outside of

00:08:10.100 --> 00:08:12.639
Microsoft. We never touch customer data. And

00:08:12.639 --> 00:08:15.160
our goal is to proactively find and address vulnerabilities

00:08:15.160 --> 00:08:18.120
so we can help protect Microsoft and our customers.

00:08:18.600 --> 00:08:21.240
So let's just start with probably the most fundamental

00:08:21.240 --> 00:08:24.980
of questions. Can you explain the role of the

00:08:24.980 --> 00:08:27.399
Microsoft Red Team and just what is Red Teaming

00:08:27.399 --> 00:08:30.579
in general? First, let me frame how Microsoft's

00:08:30.579 --> 00:08:32.379
organization works so you can understand where

00:08:32.750 --> 00:08:35.090
Red Team and I sit. At Microsoft, we operate

00:08:35.090 --> 00:08:37.070
with a distributed security governance model.

00:08:37.350 --> 00:08:40.549
Our CISO sets the overall direction, and beneath

00:08:40.549 --> 00:08:43.029
them are deputy CISOs, each responsible for core

00:08:43.029 --> 00:08:46.330
security decisions across the different divisions

00:08:46.330 --> 00:08:49.330
of the company. Some of the deputy CISOs are

00:08:49.330 --> 00:08:51.470
leaders in our engineering orgs, such as Mark

00:08:51.470 --> 00:08:53.690
Russinovich, who is the deputy CISO of Azure. Other

00:08:53.690 --> 00:08:56.190
deputy CISOs span critical areas, such as Ann

00:08:56.190 --> 00:08:57.909
Johnson being responsible for our customers.

00:08:58.409 --> 00:09:00.570
I bring them up because both Mark and Ann are

00:09:00.570 --> 00:09:02.629
very well known in security circles and in the

00:09:02.629 --> 00:09:04.889
podcast community. They're great examples to

00:09:04.889 --> 00:09:07.409
call out. Now, you know, Red Team reports to

00:09:07.409 --> 00:09:09.190
the CISO and the work that the Red Team does

00:09:09.190 --> 00:09:12.289
spans the entire company. And the findings often

00:09:12.289 --> 00:09:14.929
influence multiple deputy CISOs. Now, you have

00:09:14.929 --> 00:09:16.649
to imagine you're one of those deputy CISOs and

00:09:16.649 --> 00:09:19.129
your job is to understand the risks of the systems

00:09:19.129 --> 00:09:22.629
and how they behave against attackers in the

00:09:22.629 --> 00:09:25.429
real world. So you have that problem to worry

00:09:25.429 --> 00:09:28.169
about. Meanwhile, Microsoft is also in a very

00:09:28.169 --> 00:09:30.649
transformational phase where AI is changing the

00:09:30.649 --> 00:09:32.889
way people work and how systems are built. And

00:09:32.889 --> 00:09:34.450
that means there's a lot of code being written

00:09:34.450 --> 00:09:37.769
by AI. Products are incorporating AI very quickly.

00:09:38.049 --> 00:09:41.710
And the internal work of how we and the industry

00:09:41.710 --> 00:09:44.309
is going to use AI for productivity is just going

00:09:44.309 --> 00:09:46.470
so fast. So there are so many things happening

00:09:46.470 --> 00:09:49.090
at this moment in technology. It can either be

00:09:49.090 --> 00:09:51.620
a blessing or a curse. And that's where red teaming

00:09:51.620 --> 00:09:54.320
comes in. Our mission is to help those security

00:09:54.320 --> 00:09:56.700
leaders, our deputy CISOs and CISOs truly understand

00:09:56.700 --> 00:09:59.559
how the systems stand up under adversarial pressure.

00:09:59.759 --> 00:10:01.960
And I hope your listeners who are thinking about

00:10:01.960 --> 00:10:04.200
forming a red team can benefit from this and

00:10:04.200 --> 00:10:07.000
ensure that their red team and the folks doing

00:10:07.000 --> 00:10:10.009
this work are organizationally positioned in

00:10:10.009 --> 00:10:12.750
a way that the risks that are surfaced are appropriate

00:10:12.750 --> 00:10:14.929
to their security governance model and how decisions

00:10:14.929 --> 00:10:16.629
are made. Red teaming isn't just about breaking

00:10:16.629 --> 00:10:18.929
in. It's about emulating a range of threat actors

00:10:18.929 --> 00:10:21.610
from low effort, opportunistic attackers to highly

00:10:21.610 --> 00:10:24.690
resourced nation state level adversaries. And

00:10:24.690 --> 00:10:27.190
we look at how all the things come together in

00:10:27.190 --> 00:10:29.669
responding to that technology, people, process

00:10:29.669 --> 00:10:32.679
when they're under real world stress. Red teaming

00:10:32.679 --> 00:10:34.440
is all about challenging groupthink and

00:10:34.440 --> 00:10:36.600
status quo assumptions. And big organizations

00:10:36.600 --> 00:10:39.480
can quickly fall into patterns of thinking because

00:10:39.480 --> 00:10:41.440
they think that their checklists are complete.

00:10:41.600 --> 00:10:44.559
And red team exists to test those assumptions

00:10:44.559 --> 00:10:48.539
and expose the gaps that matter the most. So

00:10:48.539 --> 00:10:52.259
Craig, what are the key objectives of a red team

00:10:52.259 --> 00:10:56.480
exercise? Is it learning or... I assume maybe

00:10:56.480 --> 00:10:59.019
there's probably a few things. Yeah, so Red Team

00:10:59.019 --> 00:11:01.320
is about forcing the evolution of very complex

00:11:01.320 --> 00:11:03.279
systems. You can imagine how large the Microsoft

00:11:03.279 --> 00:11:06.740
estate is and all the complexity that's tied

00:11:06.740 --> 00:11:10.840
into building global services that fulfill so

00:11:10.840 --> 00:11:13.600
many workloads that our customers use. We want

00:11:13.600 --> 00:11:16.279
to force that evolution, and that makes us learn

00:11:16.279 --> 00:11:19.759
how do these systems respond to real lawful good

00:11:19.759 --> 00:11:22.200
adversaries, that is the Red Team, as one of

00:11:22.200 --> 00:11:24.889
the core objectives. So we want to understand

00:11:24.889 --> 00:11:27.450
how a motivated attacker would move through the

00:11:27.450 --> 00:11:29.250
environment, how they would get that initial

00:11:29.250 --> 00:11:31.750
foothold, and then what the effectiveness of

00:11:31.750 --> 00:11:34.970
the defenses are. And then with that, when we

00:11:34.970 --> 00:11:37.330
see the red team having success, we know where

00:11:37.330 --> 00:11:40.450
to invest to better harden the system, improve

00:11:40.450 --> 00:11:44.389
detections, as well as response efforts. That

00:11:44.389 --> 00:11:46.690
really boils down to mapping out real attack

00:11:46.690 --> 00:11:49.389
paths across identity systems, network edges,

00:11:49.509 --> 00:11:53.059
and cloud boundaries. And we put a lot of time

00:11:53.059 --> 00:11:56.059
into making sure that we understand how detection

00:11:56.059 --> 00:11:58.840
and response works in the scenario where things

00:11:58.840 --> 00:12:01.700
fail. So, for example, in the industry, we talk

00:12:01.700 --> 00:12:03.539
about assume breach a lot. You have to assume

00:12:03.539 --> 00:12:06.440
that an attacker will always get in. The question

00:12:06.440 --> 00:12:09.399
is, how long can they stay in? We want to give

00:12:09.399 --> 00:12:13.419
our engineering teams a feedback loop as an adversary.

00:12:13.960 --> 00:12:16.440
So they can understand that. And then we are

00:12:16.440 --> 00:12:18.820
testing them to see if they can take the right

00:12:18.820 --> 00:12:23.059
steps to pursue the attackers and then what they

00:12:23.059 --> 00:12:27.539
do to minimize their success. At the end of the

00:12:27.539 --> 00:12:29.919
day, back to your learning point, this is all

00:12:29.919 --> 00:12:32.799
about shifting the mindset from hardening individual

00:12:32.799 --> 00:12:35.460
systems to protecting end-to-end breach paths.

00:12:35.720 --> 00:12:37.919
Thinking about how credentials, secrets, how

00:12:37.919 --> 00:12:40.360
information moves, where it's stored, what policies

00:12:40.360 --> 00:12:42.980
are applied. So you can really constrain what

00:12:42.980 --> 00:12:46.320
an attacker can do once they compromise the system.
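As a toy illustration of that end-to-end breach-path idea (not any real Microsoft tooling), you can model assets as a directed graph where an edge means "control of A gives reach to B" and search for the shortest path an attacker could take. All asset names here are hypothetical:

```python
from collections import deque

# Breadth-first search over an asset graph: returns the shortest breach
# path from start to target as a list of nodes, or None if unreachable.
def shortest_attack_path(graph, start, target):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical environment: each asset lists what it can reach.
assets = {
    "internet":         ["vpn-gateway", "web-frontend"],
    "web-frontend":     ["app-server"],
    "app-server":       ["secrets-store"],
    "vpn-gateway":      ["corp-workstation"],
    "corp-workstation": ["domain-controller"],
    "secrets-store":    ["domain-controller"],
}

path = shortest_attack_path(assets, "internet", "domain-controller")
```

Cutting any edge on the returned path (say, removing the workstation's standing access to the domain controller) forces the attacker onto a longer route, which is exactly the "make them navigate more terrain" idea.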

00:12:46.720 --> 00:12:50.279
Can you describe a little bit more about how

00:12:50.279 --> 00:12:54.879
red teaming enhances the overall security posture

00:12:54.879 --> 00:12:56.940
and any kind of examples that you're able to

00:12:56.940 --> 00:12:58.840
share, completely recognize there's limits on

00:12:58.840 --> 00:13:01.779
what you can share? So that's a fantastic question.

00:13:01.919 --> 00:13:03.759
To really answer it, we need to acknowledge that

00:13:03.759 --> 00:13:06.200
humans need to have a threat materialize and

00:13:06.200 --> 00:13:08.500
be affected by it to truly understand it. So

00:13:08.500 --> 00:13:10.299
in security, it's easy to get stuck in hypothetical

00:13:10.299 --> 00:13:13.399
scenarios that really do nothing because you're

00:13:13.399 --> 00:13:16.320
overwhelmed. And in today's complex systems, there

00:13:16.320 --> 00:13:19.039
is an endless list of things that could go wrong,

00:13:19.100 --> 00:13:20.919
from denial of service attacks that can disrupt

00:13:20.919 --> 00:13:23.220
a company to a full-scale compromise with serious

00:13:23.220 --> 00:13:26.220
financial and reputational consequences. So if

00:13:26.220 --> 00:13:28.279
you're curious as to how bad things can get, ask

00:13:28.279 --> 00:13:30.940
ChatGPT about attacks that I remember from the

00:13:30.940 --> 00:13:33.799
past. There's one in 2014 from the Lazarus Group.

00:13:34.500 --> 00:13:37.200
Stuxnet is also infamous. Search for NotPetya.

00:13:37.690 --> 00:13:39.669
And you'll see that these weren't just

00:13:39.669 --> 00:13:41.590
one-off events. They revealed a pattern,

00:13:41.590 --> 00:13:44.850
often what we will see in the future, and exposed

00:13:44.850 --> 00:13:47.370
techniques that start with hygiene and get

00:13:47.370 --> 00:13:50.190
quite sophisticated. This forms the

00:13:50.190 --> 00:13:53.230
foundation of how modern cyber attacks work and,

00:13:53.230 --> 00:13:56.679
practically, even cyber warfare. Most organizations

00:13:56.679 --> 00:13:58.720
aren't worried about nation state conflict, and

00:13:58.720 --> 00:14:01.399
that's good. They're worried about ransomware,

00:14:01.500 --> 00:14:03.740
service outages, data leaks that can impact the

00:14:03.740 --> 00:14:05.700
trust of their customers. And that's where red

00:14:05.700 --> 00:14:07.440
teaming comes in. It can help you figure out

00:14:07.440 --> 00:14:10.220
which of those risks actually matter to you in

00:14:10.220 --> 00:14:12.679
your environments, within your systems, as well

00:14:12.679 --> 00:14:16.299
with how the people respond. At its core, red

00:14:16.299 --> 00:14:18.500
teaming is about running a controlled test before

00:14:18.500 --> 00:14:20.480
a real attacker does. And it gives you that insight

00:14:20.480 --> 00:14:22.919
into how your defenses hold up under pressure,

00:14:22.980 --> 00:14:25.360
and just as importantly, how people respond.

00:14:25.929 --> 00:14:29.570
And it's what happens after a breach that matters

00:14:29.570 --> 00:14:31.409
the most. You're sitting down with those that

00:14:31.409 --> 00:14:33.389
were impacted and the engineers that may have

00:14:33.389 --> 00:14:35.350
created the problem, analyzing what broke down

00:14:35.350 --> 00:14:38.509
and then figuring out how to improve. And that's

00:14:38.509 --> 00:14:41.289
what drives the improvements to security posture.

00:14:41.789 --> 00:14:44.070
So if you're listening and you haven't done red

00:14:44.070 --> 00:14:45.809
teaming yet, whether you're a big company or

00:14:45.809 --> 00:14:47.809
a small one, I strongly encourage you to learn

00:14:47.809 --> 00:14:49.960
more. You don't need to have a full-time red

00:14:49.960 --> 00:14:52.100
team to start. And even a small internal exercise

00:14:52.100 --> 00:14:54.779
can be hugely valuable. It's a great way to grow

00:14:54.779 --> 00:14:57.679
your security culture. Red teaming is designed

00:14:57.679 --> 00:14:59.879
to challenge assumptions, spark creativity, and

00:14:59.879 --> 00:15:01.580
it's actually a lot of fun. And it gives your

00:15:01.580 --> 00:15:04.200
security professionals and engineers a new lens

00:15:04.200 --> 00:15:06.500
on their work and helps the broader organization

00:15:06.500 --> 00:15:10.080
see security for what it really is, a very dynamic,

00:15:10.139 --> 00:15:12.620
evolving challenge and not just a checklist.

00:15:13.360 --> 00:15:15.120
Just thinking about red teaming in general. I mean,

00:15:15.120 --> 00:15:17.379
are there any ethical considerations that come

00:15:17.379 --> 00:15:21.240
into play or does the end justify the means?

00:15:21.379 --> 00:15:23.940
How does that all work out? Yeah, the first thing

00:15:23.940 --> 00:15:25.480
I want to point out is as I talk through this,

00:15:25.620 --> 00:15:28.059
we are focused on protecting Microsoft. We don't

00:15:28.059 --> 00:15:31.159
red team customer tenants or systems. It is to

00:15:31.159 --> 00:15:33.460
protect customer data that's sitting on top of

00:15:33.460 --> 00:15:37.299
Microsoft's infrastructure. So ethics are non

00:15:37.299 --> 00:15:40.480
-negotiable. So every red team operation is governed

00:15:40.480 --> 00:15:43.019
by a very formal set of rules of engagement that

00:15:43.019 --> 00:15:45.559
covers scope, safety, impact, and who's going

00:15:45.559 --> 00:15:47.580
to be notified and when. And clearly, you don't

00:15:47.580 --> 00:15:50.740
want to disrupt any business and cross into data

00:15:50.740 --> 00:15:53.399
that you're not authorized to touch. That is

00:15:53.399 --> 00:15:55.259
really, really important. You have to be very

00:15:55.259 --> 00:15:57.000
intentional about how you conduct red teaming

00:15:57.000 --> 00:15:59.519
for your listeners that are listening to this

00:15:59.519 --> 00:16:01.519
and they want to form a red team or they want

00:16:01.519 --> 00:16:03.580
to have some kind of red team initiatives, even

00:16:03.580 --> 00:16:05.840
with folks that don't do red teaming as a full

00:16:05.840 --> 00:16:08.460
-time job. That's great, but it is really important

00:16:08.460 --> 00:16:11.059
that you craft what we call rules of engagement.

00:16:11.480 --> 00:16:13.600
So Craig, you've already touched on this, but

00:16:13.600 --> 00:16:16.659
how do you define the rules of engagement for

00:16:16.659 --> 00:16:20.370
a red team operation? Yeah, so the rules of engagement

00:16:20.370 --> 00:16:23.169
are kind of like the constitution of an op. We

00:16:23.169 --> 00:16:25.450
work with our legal teams, engineering teams,

00:16:25.570 --> 00:16:27.529
and leadership to define exactly what's in scope,

00:16:27.629 --> 00:16:29.950
what's out of bounds, and when we'll communicate

00:16:29.950 --> 00:16:32.590
before, during, and after the exercise. So just

00:16:32.590 --> 00:16:34.710
very clear bullet points so everyone is on the

00:16:34.710 --> 00:16:38.350
same page. Now, in terms of when we talk about

00:16:38.350 --> 00:16:41.269
what's in scope, it's really important that your

00:16:41.269 --> 00:16:44.169
red team has a very broad scope. So the rules

00:16:44.169 --> 00:16:46.750
of engagement are to set expectations

00:16:47.080 --> 00:16:49.019
that the scope is large because the more you

00:16:49.019 --> 00:16:51.179
constrain, the fewer breach paths you're going

00:16:51.179 --> 00:16:53.519
to find. And then finally, in the rules of engagement,

00:16:53.559 --> 00:16:56.419
it covers safety mechanisms. So one important

00:16:56.419 --> 00:17:00.399
one is called deconfliction. So if alerts go off

00:17:00.399 --> 00:17:02.899
and they're seen by the security operations center,

00:17:03.019 --> 00:17:06.099
they have a very clear path that they can understand

00:17:06.099 --> 00:17:09.200
whether or not it's the red team or potentially

00:17:09.200 --> 00:17:11.619
a real threat actor. So we want to make sure

00:17:11.619 --> 00:17:15.000
that they understand that if an alert goes off,

00:17:15.019 --> 00:17:17.380
they just... You don't have the idea that just

00:17:17.380 --> 00:17:19.740
attribute to red team and then perhaps no action

00:17:19.740 --> 00:17:23.299
is taken. In the best case, everyone in the organization

00:17:23.299 --> 00:17:26.279
responds to the red team as if it were a real

00:17:26.279 --> 00:17:28.559
threat actor, because that's where the bulk of

00:17:28.559 --> 00:17:30.440
learnings come from. When you talk about like

00:17:30.440 --> 00:17:33.539
red teaming, it strikes me as a fairly broad

00:17:33.539 --> 00:17:36.160
set of skills. Can you kind of talk about like

00:17:36.160 --> 00:17:39.200
the skill set of red teaming and like just getting

00:17:39.200 --> 00:17:41.319
started and then what you kind of master and

00:17:41.319 --> 00:17:44.049
fine tune as you go? You definitely need a deep

00:17:44.049 --> 00:17:46.910
mix of technical skills, adversarial creativity,

00:17:47.230 --> 00:17:50.349
and the ability to clearly communicate. And we

00:17:50.349 --> 00:17:52.589
have people that are experts in Windows internals,

00:17:52.589 --> 00:17:55.829
identity systems, protocols, network exploitation,

00:17:56.490 --> 00:17:59.410
cloud architecture. It's really where those skills

00:17:59.410 --> 00:18:02.849
come together that you find the most creative,

00:18:02.849 --> 00:18:06.210
novel breach paths through a system. But what

00:18:06.210 --> 00:18:08.809
makes someone great on a red team is not

00:18:08.809 --> 00:18:10.779
just their technical depth. It's really their

00:18:10.779 --> 00:18:13.200
ability to think like an attacker, explore the

00:18:13.200 --> 00:18:17.000
edges, have that instinct of how things were

00:18:17.000 --> 00:18:19.839
designed to work, and then figure out how to

00:18:19.839 --> 00:18:23.740
kind of move around those edges and target conditions

00:18:23.740 --> 00:18:26.619
or dependencies that may have not been thought

00:18:26.619 --> 00:18:30.200
about in the system design. One of the things

00:18:30.200 --> 00:18:32.720
I really appreciate is that intuition where folks

00:18:32.720 --> 00:18:35.220
understand how to chain together little subtle

00:18:35.220 --> 00:18:39.220
misconfigurations. And then tie that to a breach

00:18:39.220 --> 00:18:42.380
path and then explain to an organization why

00:18:42.380 --> 00:18:44.759
it matters, why these things have to be fixed

00:18:44.759 --> 00:18:47.539
and why they should invest time to get those

00:18:47.539 --> 00:18:50.559
misconfigurations or past architectural decisions

00:18:50.559 --> 00:18:54.839
changed to make the attackers have to navigate

00:18:54.839 --> 00:18:57.779
more terrain and increase the cost to breach.

00:18:58.079 --> 00:19:00.759
Or just, of course, just make the breach infeasible.

00:19:00.900 --> 00:19:02.440
You brought up an interesting point there when

00:19:02.440 --> 00:19:04.119
you said... The people on the red team have to

00:19:04.119 --> 00:19:06.680
think like an attacker. It's interesting. I hear

00:19:06.680 --> 00:19:09.539
that term all the time. It's like, oh, you know,

00:19:09.619 --> 00:19:11.660
we'd make software so much better if more people

00:19:11.660 --> 00:19:13.759
just thought like an attacker. The problem is

00:19:13.759 --> 00:19:16.319
unless you are one, you can't think like one.

00:19:16.440 --> 00:19:19.380
You really, really can't. However, the people,

00:19:19.480 --> 00:19:21.720
all the people that I know on the red team are

00:19:21.720 --> 00:19:23.880
definitely world-class attackers. They don't

00:19:23.880 --> 00:19:27.359
just think like an attacker. They truly are attackers.

00:19:27.680 --> 00:19:32.339
And so they have that certain mentality. So what

00:19:32.339 --> 00:19:35.160
are the tools and techniques that are commonly

00:19:35.160 --> 00:19:37.299
used by the red team, or at least as much as

00:19:37.299 --> 00:19:40.339
you can talk about? And what's the role of AI

00:19:40.339 --> 00:19:44.019
in here, both from attack and defense? So our

00:19:44.019 --> 00:19:47.160
toolkit spans open source, proprietary, and

00:19:47.160 --> 00:19:49.680
system-native tools, whatever best enables the mission.

00:19:50.259 --> 00:19:52.519
In the reconnaissance phase, we use open source

00:19:52.519 --> 00:19:54.640
tools from GitHub, right? That's what real attackers

00:19:54.640 --> 00:19:57.180
use. This is what we use. And we use these tools

00:19:57.180 --> 00:20:00.269
to gather open source intelligence, enumerate

00:20:00.269 --> 00:20:04.569
networks, and map cloud attack surfaces. For

00:20:04.569 --> 00:20:07.730
command and control, frameworks like Sliver from

00:20:07.730 --> 00:20:10.450
Bishop Fox, again, available on GitHub, and custom

00:20:10.450 --> 00:20:12.829
infrastructure gives us the flexibility and stealth

00:20:12.829 --> 00:20:17.230
that we need to emulate real threat actors. But

00:20:17.230 --> 00:20:20.309
just as often, I want to reinforce that it's

00:20:20.309 --> 00:20:23.400
important that your red team is just living

00:20:23.400 --> 00:20:25.759
off the land, using tools that are native and

00:20:25.759 --> 00:20:27.619
already present in the environment and operating

00:20:27.619 --> 00:20:30.839
system. So this is PowerShell, WMI, CertUtil,

00:20:30.980 --> 00:20:33.579
and classic Unix tools like socat remain

00:20:33.579 --> 00:20:36.740
powerful decades later. I used socat 25 years

00:20:36.740 --> 00:20:39.440
ago. I still use it today. But the landscape

00:20:39.440 --> 00:20:42.579
is shifting, and AI is rapidly becoming central

00:20:42.579 --> 00:20:46.039
to red teaming. So we use it to generate code,

00:20:46.180 --> 00:20:48.420
automate repetitive tasks, and speed up complex

00:20:48.420 --> 00:20:51.819
operations. Personally, I vibe code a lot of

00:20:51.819 --> 00:20:54.440
stuff. I build very fast, functional scripts

00:20:54.440 --> 00:20:57.460
that stitch together APIs, analyze data, invoke

00:20:57.460 --> 00:21:00.660
tools on the fly. And the way I look at it right now,

00:21:00.660 --> 00:21:02.619
it's less about perfection. It's more about

00:21:02.619 --> 00:21:06.099
acceleration. But the next critical skill set,

00:21:06.160 --> 00:21:08.099
if I were advising someone getting into red teaming

00:21:08.099 --> 00:21:12.200
now, is really learning how to vibe code to connect

00:21:12.200 --> 00:21:15.920
AIs together and their data streams, build lightweight

00:21:15.920 --> 00:21:18.440
orchestration across their environments, really

00:21:18.440 --> 00:21:23.980
centered on AI and AI platforms. Over time, this

00:21:23.980 --> 00:21:25.900
is going to evolve into systems that take

00:21:25.900 --> 00:21:29.740
automated action, such as triaging logs, testing

00:21:29.740 --> 00:21:34.039
APIs, and remediating threats based upon learned

00:21:34.039 --> 00:21:38.640
patterns. Now, this is not going to replace humans.

00:21:38.819 --> 00:21:40.599
I look at it as you want to supercharge red team

00:21:40.599 --> 00:21:42.880
engineers. It's going to allow engineers that

00:21:42.880 --> 00:21:45.960
are doing red teaming to move faster, test deeper,

00:21:46.180 --> 00:21:48.279
and simulate what real adversaries are doing

00:21:48.880 --> 00:21:51.680
themselves, because that data will be enhanced

00:21:51.680 --> 00:21:54.640
through threat intelligence. And real attackers

00:21:54.640 --> 00:21:57.220
are going to use the exact same approaches. So

00:21:57.220 --> 00:21:59.980
looking ahead, I think that the next obvious

00:21:59.980 --> 00:22:03.640
step is that all of the common tools that we

00:22:03.640 --> 00:22:09.480
use as a red team will be invocable via AI, and

00:22:09.480 --> 00:22:12.799
they're going to be plugged into multi-agent

00:22:12.799 --> 00:22:15.819
AI infrastructure. These agents communicating

00:22:15.819 --> 00:22:18.819
over MCP, I know Sarah highlighted that earlier

00:22:18.819 --> 00:22:22.759
in the news, are going to coordinate what tasks

00:22:22.759 --> 00:22:25.259
need to be done, such as source code analysis.

00:22:25.680 --> 00:22:29.059
They will be invoked through a common orchestration

00:22:29.059 --> 00:22:32.859
AI platform that basically says, hey, here's

00:22:32.859 --> 00:22:35.680
these three tasks that I need to do. Invoke these

00:22:35.680 --> 00:22:38.799
tasks across these tools. And that data will

00:22:38.799 --> 00:22:41.240
be brought back for that AI orchestrator to interpret

00:22:41.240 --> 00:22:44.259
and then take the appropriate action across

00:22:44.259 --> 00:22:47.259
domains, and they'll do so at speed and scale.
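The orchestration pattern Craig describes — a coordinating AI invoking tools as skills and interpreting the results — can be sketched in a few lines. This is a hypothetical illustration only: the tool names, findings, and registry shape are placeholders, not real products or MCP servers.

```python
# Hypothetical sketch: a central orchestrator fans tasks out to "tool"
# agents and collects the results for interpretation. Tool names and
# findings here are placeholders, not real products or MCP servers.
from typing import Callable, Dict, List

TOOLS: Dict[str, Callable[[str], dict]] = {
    "source_code_analysis": lambda target: {
        "tool": "source_code_analysis", "target": target,
        "findings": ["TODO comment near an auth check"]},
    "port_scan": lambda target: {
        "tool": "port_scan", "target": target,
        "findings": ["443/tcp open"]},
}

def orchestrate(tasks: List[str], target: str) -> List[dict]:
    """Invoke each requested skill and gather results for the orchestrator."""
    results = []
    for task in tasks:
        tool = TOOLS.get(task)
        if tool is None:
            results.append({"tool": task, "target": target, "error": "no such skill"})
        else:
            results.append(tool(target))
    return results

if __name__ == "__main__":
    for result in orchestrate(["source_code_analysis", "port_scan"], "app.example.test"):
        print(result)
```

In a real deployment each entry in the registry would be an MCP server or wrapped scanner rather than a lambda; the point is the fan-out/collect shape, not the tools themselves.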

00:22:47.440 --> 00:22:49.079
So I think there's going to be a big transformation

00:22:49.079 --> 00:22:52.579
in the products that we use today in the security

00:22:52.579 --> 00:22:55.779
world. So that's where we're going. Tools are

00:22:55.779 --> 00:22:57.460
going to become skills. AI is going to become

00:22:57.460 --> 00:23:00.920
the fabric. And red teams that are using this

00:23:00.920 --> 00:23:04.220
automation are going to be able to stay on the

00:23:04.220 --> 00:23:07.809
leading edge to help ensure that defenders don't

00:23:07.809 --> 00:23:11.950
fall behind. So, Craig, how do you measure the

00:23:11.950 --> 00:23:14.470
success of a red team engagement when you're

00:23:14.470 --> 00:23:17.150
all finished? Yeah, so it's not just about how

00:23:17.150 --> 00:23:18.829
many systems are compromised. It's about what

00:23:18.829 --> 00:23:22.450
the organization learns and what changes as a

00:23:22.450 --> 00:23:25.349
result of the red team operations. We look at

00:23:25.349 --> 00:23:28.910
how quickly actions were detected, how quickly

00:23:28.910 --> 00:23:32.150
and accurately the defenders responded, whether

00:23:32.150 --> 00:23:34.809
or not critical issues were prioritized, and

00:23:34.809 --> 00:23:36.849
whether or not there's lasting improvement over

00:23:36.849 --> 00:23:40.450
time. So a successful red team engagement must

00:23:40.450 --> 00:23:44.190
lead to real architectural and, in many cases, cultural

00:23:44.190 --> 00:23:48.950
change about how systems are built and protected

00:23:48.950 --> 00:23:52.650
over the long run. I just have to say, I love

00:23:52.650 --> 00:23:54.710
that answer. And thank you for saying that, because

00:23:54.710 --> 00:23:57.710
I see way too many people celebrating that we

00:23:57.710 --> 00:24:00.759
got in and we shamed you guys. Okay, so why are

00:24:00.759 --> 00:24:03.019
we paying you? You're supposed to help make us

00:24:03.019 --> 00:24:09.880
secure. So would you be able to share maybe like

00:24:09.880 --> 00:24:12.740
a real-world example of an engagement that led

00:24:12.740 --> 00:24:15.000
to some of those improvements? Is that something

00:24:15.000 --> 00:24:17.920
that you'd be able to do? Well, I can't share

00:24:17.920 --> 00:24:20.279
specifics from the internal operations, but I

00:24:20.279 --> 00:24:22.339
can give you some ideas of the type of things

00:24:22.339 --> 00:24:25.519
that we find and some of the patterns that we've

00:24:25.519 --> 00:24:30.140
been able to fix at scale. I'll start with number

00:24:30.140 --> 00:24:32.640
one, overprivileged identities and applications.

00:24:33.039 --> 00:24:35.920
What we tend to find a lot is identities,

00:24:36.099 --> 00:24:38.839
either human or service principals, with very

00:24:38.839 --> 00:24:41.799
broad assignments like Owner, Contributor, or User

00:24:41.799 --> 00:24:43.920
Access Administrator. When you see stuff like

00:24:43.920 --> 00:24:46.460
that, that tends to mean that there's too much

00:24:46.460 --> 00:24:48.480
privilege being assigned just within the culture

00:24:48.480 --> 00:24:50.880
of the organization or within the specific applications.

00:24:51.690 --> 00:24:53.970
Applications are just granted far more permissions

00:24:53.970 --> 00:24:57.549
than they need. And OAuth applications aren't

00:24:57.549 --> 00:25:00.630
reviewed and monitored, so they often retain

00:25:00.630 --> 00:25:03.309
access kind of long after their useful life.

00:25:03.710 --> 00:25:06.250
So this matters because once an attacker gains

00:25:06.250 --> 00:25:09.009
a foothold, that overprivilege, if they can get

00:25:09.009 --> 00:25:11.269
those credentials, can turn a small breach into

00:25:11.269 --> 00:25:15.650
something much larger as they get access to those

00:25:15.650 --> 00:25:18.029
identities and those credentials and laterally

00:25:18.029 --> 00:25:21.289
move. So what to do there? The guidance always

00:25:21.289 --> 00:25:24.390
goes back to the core of enforcing least privilege,

00:25:24.390 --> 00:25:27.210
and then looking at enterprise applications more

00:25:27.210 --> 00:25:29.890
specifically, and at service principals with permissions

00:25:29.890 --> 00:25:32.210
like Application.ReadWrite.All and Directory.ReadWrite.All,

00:25:32.210 --> 00:25:35.529
and other very broadly scoped grants.

00:25:35.809 --> 00:25:38.230
That's number one. That's the number one finding.
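This "number one finding" lends itself to simple automated triage. The sketch below flags identities holding broad subscription-level roles; the field names (`roleDefinitionName`, `principalName`, `scope`) are assumed to mirror the JSON that `az role assignment list --all -o json` emits, so treat the input shape as an assumption rather than a documented contract.

```python
# Illustrative triage of over-privileged identities: flag any assignment of
# a broad role, worst first. The dict fields are assumed to mirror Azure
# CLI role-assignment JSON output; adjust to your real data source.
BROAD_ROLES = {"Owner", "User Access Administrator", "Contributor"}

def flag_broad_assignments(assignments: list) -> list:
    """Return assignments whose role is in BROAD_ROLES, most severe first."""
    severity = {"Owner": 0, "User Access Administrator": 1, "Contributor": 2}
    flagged = [a for a in assignments if a.get("roleDefinitionName") in BROAD_ROLES]
    return sorted(flagged, key=lambda a: severity[a["roleDefinitionName"]])

# Hypothetical sample data in the assumed shape.
sample = [
    {"principalName": "build-sp", "roleDefinitionName": "Contributor", "scope": "/subscriptions/…"},
    {"principalName": "reader-sp", "roleDefinitionName": "Reader", "scope": "/subscriptions/…"},
    {"principalName": "ops-admin", "roleDefinitionName": "Owner", "scope": "/subscriptions/…"},
]

if __name__ == "__main__":
    for a in flag_broad_assignments(sample):
        print(a["principalName"], "->", a["roleDefinitionName"])
```

A real review would also weigh scope (subscription vs. resource group) and whether the principal is a service principal with no interactive owner.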

00:25:38.349 --> 00:25:41.369
After that, the second is inadequate credential

00:25:41.369 --> 00:25:44.450
and token isolation. So this is where you tend

00:25:44.450 --> 00:25:47.150
to find shared credentials in DevOps pipelines

00:25:47.150 --> 00:25:49.490
and scripts and automation accounts and in source

00:25:49.490 --> 00:25:52.650
code. And then long-lived tokens, such

00:25:52.650 --> 00:25:55.269
as a connection string or SAS tokens that

00:25:55.269 --> 00:25:58.910
can be used to access resources that an application

00:25:58.910 --> 00:26:01.349
may take as a dependency, such as a storage account.

00:26:01.549 --> 00:26:04.049
That matters a lot because those enable what

00:26:04.049 --> 00:26:06.569
I call silent privilege escalation. Some of these

00:26:06.569 --> 00:26:09.480
things are really hard to detect, but... There's

00:26:09.480 --> 00:26:12.319
a lot of transitive risk that can be seen just

00:26:12.319 --> 00:26:15.140
simply from one system having access to another

00:26:15.140 --> 00:26:18.079
system via a credential that the attacker is

00:26:18.079 --> 00:26:21.319
using via this lateral movement techniques. Another

00:26:21.319 --> 00:26:24.299
area would be we just see flat network architecture

00:26:24.299 --> 00:26:27.119
and just overall weak network isolation. You

00:26:27.119 --> 00:26:28.579
know, one of the hardest things to get past from

00:26:28.579 --> 00:26:31.240
a red team perspective is isolation, primarily

00:26:31.240 --> 00:26:33.779
network isolation. You know, we tend to find

00:26:33.779 --> 00:26:36.519
that VNets that may span multiple zones of trust

00:26:36.519 --> 00:26:39.539
or hub-and-spoke models, where there is just

00:26:39.539 --> 00:26:41.839
a central point of trust from which an attacker can pivot broadly

00:26:41.839 --> 00:26:46.180
within a network or a VNet, and then misconfigured

00:26:46.180 --> 00:26:49.460
network security groups and overly permissive

00:26:49.460 --> 00:26:52.359
access and data flows. This is a difficult thing

00:26:52.359 --> 00:26:55.259
to get a hold of. It was 20 years ago. It still

00:26:55.259 --> 00:26:57.400
is now, just because applications are designed

00:26:57.400 --> 00:27:00.099
to want to connect to each other. And you really

00:27:00.099 --> 00:27:02.339
have to understand your network and your flows

00:27:02.339 --> 00:27:05.839
and make sure that if there's a problem, the

00:27:05.839 --> 00:27:08.640
default behavior isn't to open rules to allow

00:27:08.640 --> 00:27:11.279
applications to talk to each other. Because that

00:27:11.279 --> 00:27:13.700
might not be corrected. And those are the type

00:27:13.700 --> 00:27:15.880
of things that red teamers find. It kind of reminds

00:27:15.880 --> 00:27:18.240
me of the old fail-safe principle in engineering

00:27:18.240 --> 00:27:21.660
that you don't want the dam to fail open where

00:27:21.660 --> 00:27:24.980
it's going to flood the town, right? Yeah, absolutely.
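The fail-open point can be made concrete with a small audit sketch: flag inbound allow-rules that are open to any source on any port. The rule shape below loosely follows Azure NSG security rules, but it is an illustration rather than the real API schema.

```python
# Illustrative audit of firewall/NSG-style rules: an inbound Allow with a
# wildcard source and wildcard destination port is the "dam failing open".
# Field names loosely mirror Azure NSG security rules; treat as assumed.
def overly_permissive(rules: list) -> list:
    """Names of inbound allow-rules with wildcard source and destination port."""
    return [
        r["name"]
        for r in rules
        if r.get("direction") == "Inbound"
        and r.get("access") == "Allow"
        and r.get("sourceAddressPrefix") in ("*", "Internet")
        and r.get("destinationPortRange") == "*"
    ]

# Hypothetical rule set in the assumed shape.
rules = [
    {"name": "allow-any-any", "direction": "Inbound", "access": "Allow",
     "sourceAddressPrefix": "*", "destinationPortRange": "*"},
    {"name": "allow-https", "direction": "Inbound", "access": "Allow",
     "sourceAddressPrefix": "*", "destinationPortRange": "443"},
    {"name": "deny-all", "direction": "Inbound", "access": "Deny",
     "sourceAddressPrefix": "*", "destinationPortRange": "*"},
]

if __name__ == "__main__":
    print(overly_permissive(rules))
```

The design goal Craig describes is the inverse default: deny everything, then justify each allow against a known data flow.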

00:27:25.220 --> 00:27:27.480
Absolutely. And to that point, weak application

00:27:27.480 --> 00:27:29.819
boundaries and these assumptions, I think those

00:27:29.819 --> 00:27:32.940
boundaries are that dam. And you have to really

00:27:32.940 --> 00:27:36.220
understand your microservices and APIs that do

00:27:36.220 --> 00:27:39.140
assume trust across tiers, sometimes without

00:27:39.140 --> 00:27:41.759
authorization enforcement. That's a great analogy

00:27:41.759 --> 00:27:44.400
that there are all these dams between the systems.

00:27:44.460 --> 00:27:47.420
But at the end of the day, all those dams have

00:27:47.420 --> 00:27:50.000
some point of failure. Then you have to understand

00:27:50.000 --> 00:27:53.119
kind of what it is and what the consequence is

00:27:53.119 --> 00:27:57.710
if it's defeated. Craig, so how do red teams

00:27:57.710 --> 00:28:00.329
adapt to emerging threats? We know things change

00:28:00.329 --> 00:28:03.170
all the time and different attack techniques.

00:28:03.390 --> 00:28:05.829
And obviously I'm going to throw in, what about

00:28:05.829 --> 00:28:09.150
AI? Yeah, red teams have to kind of live in the

00:28:09.150 --> 00:28:12.089
future a bit and understand what's coming down

00:28:12.089 --> 00:28:14.529
the pipe. Because AI is very powerful, defenders

00:28:14.529 --> 00:28:15.950
are clearly going to use it. And we see that

00:28:15.950 --> 00:28:18.250
today. And I just got back from RSA where there's

00:28:18.250 --> 00:28:21.569
a lot of innovation in the space where the AI

00:28:21.569 --> 00:28:23.630
is being used to sort through large volumes of

00:28:23.630 --> 00:28:26.400
data, drive automation within the SOC, and drive

00:28:26.400 --> 00:28:29.240
consistency across the environment. So because

00:28:29.240 --> 00:28:31.079
it's going to get harder, at that point, the

00:28:31.079 --> 00:28:33.539
red team is going to have to use AI as well.

00:28:33.660 --> 00:28:37.140
And we know that real attackers are using and

00:28:37.140 --> 00:28:39.579
getting better with AI. Right now, some of them

00:28:39.579 --> 00:28:41.920
might be confined to crafting better spear phishing

00:28:41.920 --> 00:28:45.339
emails. But over time, it's going to look at

00:28:45.339 --> 00:28:48.299
things like automating recon, monitoring a target.

00:28:48.440 --> 00:28:52.839
So for example, use generative AI to vibe code

00:28:53.359 --> 00:28:56.240
a script in Python to watch a particular endpoint

00:28:56.240 --> 00:28:59.279
over a long period of time to detect subtle changes

00:28:59.279 --> 00:29:03.519
in the attack surface, which can then raise an

00:29:03.519 --> 00:29:06.119
alert for the red team to move in. Certainly
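That endpoint-watching idea really is a few lines of Python. This sketch fingerprints a response body and flags changes between polls; the URL and interval are placeholders, and hashing the raw body is a deliberately crude "did anything change?" signal, not a full attack-surface diff.

```python
# Crude attack-surface watcher: poll an endpoint, fingerprint the body,
# and flag when it changes. URL and interval are hypothetical placeholders.
import hashlib
import time
import urllib.request
from typing import Optional

TARGET = "https://app.example.test/api/health"  # hypothetical endpoint

def fingerprint(body: bytes) -> str:
    """Stable short fingerprint of a response body."""
    return hashlib.sha256(body).hexdigest()[:16]

def changed(previous: Optional[str], current: str) -> bool:
    """True once we have a baseline and the fingerprint has moved."""
    return previous is not None and previous != current

def watch(url: str = TARGET, interval: int = 3600) -> None:
    """Poll forever, printing an alert whenever the surface shifts."""
    last = None
    while True:
        body = urllib.request.urlopen(url).read()
        current = fingerprint(body)
        if changed(last, current):
            print(f"[!] surface change on {url}: {last} -> {current}")
        last = current
        time.sleep(interval)

if __name__ == "__main__":
    watch()
```

A less naive version would normalize volatile fields (timestamps, request IDs) before hashing, so only genuine surface changes raise the alert.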

00:29:06.119 --> 00:29:09.460
analyzing source code at scale and kind of looking

00:29:09.460 --> 00:29:13.299
through source code with a new lens of where

00:29:13.299 --> 00:29:16.619
do vulnerabilities exist, not only in the semantics

00:29:16.619 --> 00:29:19.140
and the code flow, but also just how the human

00:29:19.140 --> 00:29:21.609
works, right? I look at... source code that has

00:29:21.609 --> 00:29:24.609
TODO comments or code that is just overly complex,

00:29:24.890 --> 00:29:29.470
sometimes being passed for exploitation. I think

00:29:29.470 --> 00:29:33.450
the other dimension of AI for both a red team

00:29:33.450 --> 00:29:35.589
and a defender perspective is looking at code

00:29:35.589 --> 00:29:39.109
to understand what is being detected or where

00:29:39.109 --> 00:29:42.710
there are gaps in where telemetry is being generated.

00:29:42.990 --> 00:29:46.369
And then those are areas that you want to target,

00:29:46.470 --> 00:29:48.690
certainly from a defender perspective: looking

00:29:48.690 --> 00:29:50.630
at code, and if there's a particular event, for

00:29:50.630 --> 00:29:53.250
example, for authentication, if that is not logged,

00:29:53.430 --> 00:29:55.869
that is something that should be fixed by the

00:29:55.869 --> 00:29:58.730
defenders, but also targeted by the red team.

00:29:58.950 --> 00:30:01.509
And in summary, we're doing a lot of the same

00:30:01.509 --> 00:30:04.250
things, just much faster, using AI to find the

00:30:04.250 --> 00:30:07.529
weak links and model attacker behavior more realistically

00:30:07.529 --> 00:30:10.630
to understand what threat actors are looking

00:30:10.630 --> 00:30:15.170
for, and then using that to drive quicker change

00:30:15.170 --> 00:30:17.819
within the environment, such as implementing

00:30:17.819 --> 00:30:20.980
more precise detection rules and understanding

00:30:20.980 --> 00:30:24.180
where there are vulnerabilities that previously

00:30:24.180 --> 00:30:27.220
haven't been easy to find. One of the questions

00:30:27.220 --> 00:30:30.759
that we always like to ask is, what's a day in

00:30:30.759 --> 00:30:33.210
the life of a red teamer? I mean, is it... always

00:30:33.210 --> 00:30:35.509
on the keyboard? Is it in conference rooms and

00:30:35.509 --> 00:30:39.809
meetings? What is it like to just kind of do

00:30:39.809 --> 00:30:42.069
this day over day? Well, I'll start by saying

00:30:42.069 --> 00:30:44.349
that at this stage in my career, I'm not the

00:30:44.349 --> 00:30:46.230
one actively breaching the systems every day.

00:30:46.349 --> 00:30:48.890
I spend most of my time working within Microsoft's

00:30:48.890 --> 00:30:50.829
security governance structure and understanding

00:30:50.829 --> 00:30:53.589
what our CISOs and deputy CISOs need to make

00:30:53.589 --> 00:30:56.089
informed security decisions, as well as our engineering

00:30:56.089 --> 00:30:57.930
leaders. They have to get the results that they

00:30:57.930 --> 00:31:00.180
need to be successful. But I'll share what I

00:31:00.180 --> 00:31:02.200
see every day from the engineers that work on

00:31:02.200 --> 00:31:04.279
my team, because that's where the action is.

00:31:04.400 --> 00:31:07.160
And it certainly makes for a better podcast. Roughly

00:31:07.160 --> 00:31:09.079
half of their time is spent in deep technical

00:31:09.079 --> 00:31:11.500
work, hands-on keyboard. And that means building

00:31:11.500 --> 00:31:13.500
tools, writing scripts, reverse engineering,

00:31:13.759 --> 00:31:16.299
doing reconnaissance, exploiting misconfigurations,

00:31:16.480 --> 00:31:19.380
and navigating Microsoft's vast infrastructure

00:31:19.380 --> 00:31:22.299
as if they were real adversaries. So today, it's

00:31:22.299 --> 00:31:24.960
also about using AI to investigate things such

00:31:24.960 --> 00:31:26.819
as understanding the details of an API attack

00:31:26.819 --> 00:31:28.819
surface without having to page through tons of

00:31:28.819 --> 00:31:31.539
websites and documentation. I am in awe every

00:31:31.539 --> 00:31:33.559
day of their technical depth, creative thinking,

00:31:33.779 --> 00:31:37.359
and the persistence that these engineers have

00:31:37.359 --> 00:31:40.539
to execute on a day-by-day basis so they can

00:31:40.539 --> 00:31:42.940
chain all the research and tooling together seamlessly

00:31:42.940 --> 00:31:45.920
to turn small weaknesses into meaningful breach

00:31:45.920 --> 00:31:47.920
paths. The other half of the job might surprise

00:31:47.920 --> 00:31:49.880
people. It's all about influence. It's focused

00:31:49.880 --> 00:31:52.720
on communication, which means writing, writing

00:31:52.720 --> 00:31:55.480
down the findings, presenting to engineers, engaging

00:31:55.480 --> 00:31:57.759
with product teams and building long term relationships

00:31:57.759 --> 00:32:00.859
across the company. So a red teamer can't just

00:32:00.859 --> 00:32:03.339
find a flaw. They have to explain it, persuade

00:32:03.339 --> 00:32:05.359
others why it matters, and drive real change

00:32:05.359 --> 00:32:08.839
and verify that that change is effective. So

00:32:08.839 --> 00:32:11.740
technical excellence is essential, but so is

00:32:11.740 --> 00:32:14.750
empathy, clarity and collaboration. To summarize,

00:32:14.910 --> 00:32:17.009
being a red teamer is part hacker, part diplomat,

00:32:17.009 --> 00:32:20.609
and part detective. There's a video that I often

00:32:20.609 --> 00:32:22.569
refer people to. It's about a decade old now,

00:32:22.670 --> 00:32:24.170
and it's from a gentleman by the name of Rob

00:32:24.170 --> 00:32:26.250
Joyce, who at the time ran NSA's Tailored Access Operations.

00:32:26.710 --> 00:32:30.130
He gave a talk at USENIX Enigma that's famous in red

00:32:30.130 --> 00:32:32.829
team circles and available on YouTube if you

00:32:32.829 --> 00:32:35.170
want to check it out. One of the things that

00:32:35.170 --> 00:32:38.690
he says in this video has stuck with me throughout

00:32:38.690 --> 00:32:41.329
my career, and that is, you know the technologies

00:32:41.329 --> 00:32:43.970
that you intend to use in your network. We know

00:32:43.970 --> 00:32:45.890
the technologies that are actually in use in

00:32:45.890 --> 00:32:48.869
your network. And today, that quote is even more

00:32:48.869 --> 00:32:50.950
relevant. And it's such a part of the day in

00:32:50.950 --> 00:32:52.769
the life of the red teamer. Because with the

00:32:52.769 --> 00:32:55.450
explosion of cloud services and APIs and open

00:32:55.450 --> 00:32:58.029
source libraries and microservices, organizations

00:32:58.029 --> 00:33:00.910
are assembling these super powerful systems with

00:33:00.910 --> 00:33:02.769
countless dependencies. And red teamers have

00:33:02.769 --> 00:33:05.170
to understand how those systems behave under

00:33:05.170 --> 00:33:07.710
pressure and not just how they're designed on

00:33:07.710 --> 00:33:10.579
paper. So a day in the life is never routine. It's

00:33:10.579 --> 00:33:13.099
really chasing down real world edge cases and

00:33:13.099 --> 00:33:15.880
helping the organization learn from them to become

00:33:15.880 --> 00:33:19.160
more resilient. So we always ask, I guess, something

00:33:19.160 --> 00:33:22.720
else at the end of the episodes, which is if

00:33:22.720 --> 00:33:25.160
you wanted to leave our listeners with one final

00:33:25.160 --> 00:33:28.500
thought, what would it be? I'd say, you know,

00:33:28.500 --> 00:33:31.220
attackers are moving fast, and AI is

00:33:31.220 --> 00:33:33.019
going to speed them up. But you always have to

00:33:33.019 --> 00:33:35.400
remember as a defender that you control the terrain

00:33:35.400 --> 00:33:38.109
that the attackers have to operate in. You can

00:33:38.109 --> 00:33:39.650
slow them down, you can expose them, you can

00:33:39.650 --> 00:33:42.250
shut them out, but only if you're designed for

00:33:42.250 --> 00:33:44.250
that and you understand how the attacker works

00:33:44.250 --> 00:33:48.950
so you can define and implement the right terrain

00:33:48.950 --> 00:33:51.569
for what you're trying to accomplish. You own

00:33:51.569 --> 00:33:53.720
the terrain. All right, let's bring this episode

00:33:53.720 --> 00:33:55.900
to an end. Craig, thank you so much for joining

00:33:55.900 --> 00:33:57.500
us this week. I know you're busy. So we

00:33:57.500 --> 00:33:59.359
really appreciate you taking the time to spend

00:33:59.359 --> 00:34:01.779
time with us. And to all our listeners out there,

00:34:01.859 --> 00:34:04.579
stay safe, and we'll see you next time. Thanks

00:34:04.579 --> 00:34:06.519
for listening to the Azure Security Podcast.

00:34:06.980 --> 00:34:09.900
You can find show notes and other resources at

00:34:09.900 --> 00:34:14.500
our website, azsecuritypodcast.net. If you have

00:34:14.500 --> 00:34:17.699
any questions, please find us on Twitter at AzureSecPod.

00:34:18.590 --> 00:34:22.289
Background music is from ccmixter.com and licensed

00:34:22.289 --> 00:34:24.250
under the Creative Commons License.
