1
00:00:00,000 --> 00:00:06,200
Welcome to the Azure Security Podcast,

2
00:00:06,200 --> 00:00:09,360
where we discuss topics relating to security, privacy,

3
00:00:09,360 --> 00:00:13,640
reliability, and compliance on the Microsoft Cloud Platform.

4
00:00:13,640 --> 00:00:17,160
Hey, everybody. Welcome to episode number 30.

5
00:00:17,160 --> 00:00:20,600
This week, it's myself, Gladys, and Sarah.

6
00:00:20,600 --> 00:00:23,040
Mark is just absolutely slammed,

7
00:00:23,040 --> 00:00:25,320
so we'll have to listen to his news next week.

8
00:00:25,320 --> 00:00:27,400
We also have a guest. We have Pete Bryant.

9
00:00:27,400 --> 00:00:29,280
He's a senior software engineer in

10
00:00:29,280 --> 00:00:31,360
the Microsoft Threat Intelligence Center,

11
00:00:31,360 --> 00:00:35,000
and he's here to talk about everything you need to know about MSTIC.

12
00:00:35,000 --> 00:00:36,520
But before we get on to Pete,

13
00:00:36,520 --> 00:00:38,040
let's take a look at the news.

14
00:00:38,040 --> 00:00:40,000
Gladys, why don't you kick things off?

15
00:00:40,000 --> 00:00:44,960
Yeah, I wanted to mention that Michael Howard,

16
00:00:44,960 --> 00:00:48,320
David Sanchez Rodriguez, Javier Soriano,

17
00:00:48,320 --> 00:00:52,160
Marcelo de Olio, and myself have recorded

18
00:00:52,160 --> 00:00:55,360
the first Azure Security Podcast in Spanish.

19
00:00:55,360 --> 00:00:56,800
We have published it.

20
00:00:56,800 --> 00:01:00,000
Currently, we are recording on a monthly basis,

21
00:01:00,000 --> 00:01:03,560
but we expect to record more often,

22
00:01:03,560 --> 00:01:05,880
depending on the outreach.

23
00:01:05,880 --> 00:01:09,720
Thank you, Michael, for helping us get all this set up.

24
00:01:09,720 --> 00:01:11,600
It takes a little bit of learning,

25
00:01:11,600 --> 00:01:13,960
but we are doing it now.

26
00:01:13,960 --> 00:01:17,040
It has been well-received based on

27
00:01:17,040 --> 00:01:20,080
Twitter and LinkedIn comments that we're seeing.

28
00:01:20,080 --> 00:01:22,000
At the end of this month,

29
00:01:22,000 --> 00:01:25,080
we will be interviewing Roberto Rodriguez

30
00:01:25,080 --> 00:01:27,440
about his SimuLand creation,

31
00:01:27,440 --> 00:01:29,800
so be on the lookout for those.

32
00:01:29,800 --> 00:01:32,640
From the Cloud Capability perspective,

33
00:01:32,640 --> 00:01:37,040
I'm really excited about a conditional access filtering

34
00:01:37,040 --> 00:01:41,440
that has been added in preview for Azure AD.

35
00:01:41,440 --> 00:01:47,160
Basically, this gives the ability to filter on devices as a condition.

36
00:01:47,160 --> 00:01:49,760
For example, one can restrict access to

37
00:01:49,760 --> 00:01:51,680
a privileged access workstation or

38
00:01:51,680 --> 00:01:55,040
a secure access workstation.

39
00:01:55,040 --> 00:02:00,680
Sometimes in our documentation these are referred to as PAW or SAW.

40
00:02:00,680 --> 00:02:05,680
For those of you not familiar with what PAW or SAW are,

41
00:02:05,680 --> 00:02:09,600
basically, they are computers that are really hardened,

42
00:02:09,600 --> 00:02:11,480
with limited applications.

43
00:02:11,480 --> 00:02:17,040
We recommend not doing email or regular web browsing on them.

44
00:02:17,040 --> 00:02:20,520
It's only used for administration.

45
00:02:20,520 --> 00:02:24,120
You are able to connect to Cloud Administration

46
00:02:24,120 --> 00:02:27,960
or on-prem applications for administration.

47
00:02:27,960 --> 00:02:30,080
To configure this, basically,

48
00:02:30,080 --> 00:02:32,160
all you have to do is go to

49
00:02:32,160 --> 00:02:38,360
Azure AD conditional access under the authentication context.
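
For readers who want to see what such a device-filter condition looks like programmatically, here is a minimal sketch of a Conditional Access policy body of the kind sent to the Microsoft Graph conditional access API. The role ID placeholder, the extension attribute used, and the "PAW" tag are illustrative assumptions, not a prescribed configuration.

```python
# Sketch of a Conditional Access policy body with a device filter, of the
# kind sent to Microsoft Graph (POST /identity/conditionalAccess/policies).
# The role ID placeholder, extension attribute, and "PAW" tag are
# illustrative assumptions, not a recommended configuration.
import json

policy = {
    "displayName": "Require PAW for privileged roles",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {"includeRoles": ["<privileged-role-template-id>"]},
        "applications": {"includeApplications": ["All"]},
        "devices": {
            "deviceFilter": {
                # Exclude devices tagged as PAWs from the policy, so the
                # block control below applies to every other device.
                "mode": "exclude",
                "rule": 'device.extensionAttribute1 -eq "PAW"',
            }
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

print(json.dumps(policy, indent=2))
```

Running the policy in report-only mode first, as sketched here, is the usual way to confirm admins are not locked out before enforcing it.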

50
00:02:38,360 --> 00:02:41,920
The last thing that I wanted to talk about is,

51
00:02:41,920 --> 00:02:46,440
two years ago, Microsoft launched Windows Virtual Desktop.

52
00:02:46,440 --> 00:02:47,800
With the pandemic,

53
00:02:47,800 --> 00:02:50,560
Microsoft has seen the need to support

54
00:02:50,560 --> 00:02:55,720
an evolving set of remote and hybrid work scenarios.

55
00:02:55,720 --> 00:02:59,000
To support this broader vision,

56
00:02:59,000 --> 00:03:02,080
we are rebranding

57
00:03:02,080 --> 00:03:05,520
Windows Virtual Desktop to Azure Virtual Desktop.

58
00:03:05,520 --> 00:03:08,560
You're going to start seeing a lot of documentation

59
00:03:08,560 --> 00:03:14,200
referring to Windows Virtual Desktop as Azure Virtual Desktop.

60
00:03:14,200 --> 00:03:17,480
Cool. Some of the news I have,

61
00:03:17,480 --> 00:03:19,520
I'll start with Azure Backup.

62
00:03:19,520 --> 00:03:23,920
Anyone using the Azure Backup service and

63
00:03:23,920 --> 00:03:25,760
any resources that are using

64
00:03:25,760 --> 00:03:29,240
the Microsoft Azure Recovery Services, or MARS, agent,

65
00:03:29,240 --> 00:03:32,520
you need to be using TLS 1.2 or above.

66
00:03:32,520 --> 00:03:37,360
We will stop supporting TLS 1.0 and 1.1

67
00:03:37,360 --> 00:03:41,200
as of the 1st of September, 2021.

68
00:03:41,200 --> 00:03:43,360
As of the day we're recording this,

69
00:03:43,360 --> 00:03:45,760
that is June, July, August, September,

70
00:03:45,760 --> 00:03:49,000
that's three and a half months away.

71
00:03:49,000 --> 00:03:54,320
I know that in production IT terms,

72
00:03:54,320 --> 00:03:56,200
that's not necessarily a long time

73
00:03:56,200 --> 00:03:58,640
if you have to go through a change board and stuff.

74
00:03:58,640 --> 00:04:02,320
Definitely get onto that if you're using that.
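
For the MARS agent itself, the fix is an OS-level (SChannel) setting rather than code, but the principle of refusing legacy TLS can be sketched in a few lines. This Python snippet is only an illustration of the client-side idea, not part of Azure Backup.

```python
# Illustration only: pin a client to TLS 1.2 or above, mirroring the
# requirement. Connections that can only negotiate TLS 1.0/1.1 will
# simply fail the handshake.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0 and 1.1

print(ctx.minimum_version.name)  # TLSv1_2
```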

75
00:04:02,320 --> 00:04:06,520
Secondly, let's talk about my favorite baby, Azure Sentinel.

76
00:04:06,520 --> 00:04:08,040
Just a quick one this time,

77
00:04:08,040 --> 00:04:10,200
I will actually not talk about it too much,

78
00:04:10,200 --> 00:04:13,080
but we have made some great changes to the pricing of

79
00:04:13,080 --> 00:04:15,720
Sentinel and this is pretty cool because it means

80
00:04:15,720 --> 00:04:17,680
it should be cheaper for folks.

81
00:04:17,680 --> 00:04:20,520
Now, when I say cheaper, I'm not saying we suddenly drop the price,

82
00:04:20,520 --> 00:04:22,440
but a couple of things to know.

83
00:04:22,440 --> 00:04:25,480
Your capacity reservations are now called

84
00:04:25,480 --> 00:04:28,600
commitment tiers because we like to change names.

85
00:04:28,600 --> 00:04:31,280
But with the commitment tiers,

86
00:04:31,280 --> 00:04:34,440
we now have higher commitment tiers.

87
00:04:34,440 --> 00:04:36,120
If you're familiar with them,

88
00:04:36,120 --> 00:04:41,920
you'll know that we went from 100 gig a day up to 500 gig a day.

89
00:04:41,920 --> 00:04:44,520
Now we are also doing one terabyte a day,

90
00:04:44,520 --> 00:04:46,800
two terabytes a day, five terabytes a day.

91
00:04:46,800 --> 00:04:49,880
So you can actually just configure that commitment tier in

92
00:04:49,880 --> 00:04:55,120
the UI without having to talk to a Microsoft person.

93
00:04:55,120 --> 00:04:58,120
The other thing that's really,

94
00:04:58,120 --> 00:05:00,880
really cool is the way that we bill for

95
00:05:00,880 --> 00:05:03,280
data ingestion over the commitment tier.

96
00:05:03,280 --> 00:05:05,400
So,

97
00:05:05,400 --> 00:05:07,800
what used to happen was if you were on,

98
00:05:07,800 --> 00:05:12,800
say, a 100 gig a day commitment tier and you went over 100 gigabytes a day,

99
00:05:12,800 --> 00:05:16,760
you would pay the pay as you go rate for Azure Sentinel.

100
00:05:16,760 --> 00:05:20,280
Now what you'll do if you go over your commitment tier,

101
00:05:20,280 --> 00:05:23,240
you will just pay the effective rate.

102
00:05:23,240 --> 00:05:26,080
That's because each commitment tier has a discount.

103
00:05:26,080 --> 00:05:28,320
You can tell I'm a tech person and not

104
00:05:28,320 --> 00:05:31,360
a salesperson because I am appalling at explaining this.

105
00:05:31,360 --> 00:05:33,200
But basically it means it's cheaper.

106
00:05:33,200 --> 00:05:35,960
We'll put the link in the show notes, go check it out.

107
00:05:35,960 --> 00:05:39,440
It is, I think, a big improvement because previously,

108
00:05:39,440 --> 00:05:41,400
if you went over your commitment tier,

109
00:05:41,400 --> 00:05:44,880
you would get charged for that overage quite a bit more.

110
00:05:44,880 --> 00:05:47,760
So it's going to be cheaper now, which is lovely.
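
To make the billing change concrete, here is a toy calculation with invented prices (the real prices are on the Azure pricing page): overage above a commitment tier is now billed at the tier's discounted effective rate rather than the pay-as-you-go rate.

```python
# Toy arithmetic with invented prices; only the shape of the change is
# real. New model: overage billed at the tier's effective rate.
# Old model: overage billed at the higher pay-as-you-go rate.
PAYG_PER_GB = 2.00          # hypothetical pay-as-you-go rate, $/GB
TIER_GB = 100               # hypothetical 100 GB/day commitment tier
TIER_PRICE_PER_DAY = 150.0  # hypothetical discounted daily price

def daily_cost_new(ingested_gb: float) -> float:
    """New model: overage billed at the tier's effective rate."""
    effective_rate = TIER_PRICE_PER_DAY / TIER_GB  # $1.50/GB here
    overage = max(0.0, ingested_gb - TIER_GB)
    return TIER_PRICE_PER_DAY + overage * effective_rate

def daily_cost_old(ingested_gb: float) -> float:
    """Old model: overage billed at pay-as-you-go."""
    overage = max(0.0, ingested_gb - TIER_GB)
    return TIER_PRICE_PER_DAY + overage * PAYG_PER_GB

# Ingesting 120 GB on a 100 GB tier: cheaper under the new model.
print(daily_cost_new(120), daily_cost_old(120))  # 180.0 190.0
```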

111
00:05:47,760 --> 00:05:50,320
Then we've got for Azure Security Center,

112
00:05:50,320 --> 00:05:52,600
couple of things to talk about there.

113
00:05:52,600 --> 00:05:56,200
It's got some new recommendations for

114
00:05:56,200 --> 00:05:58,080
hardening Kubernetes clusters.

115
00:05:58,080 --> 00:05:59,800
So if you're using Kubernetes,

116
00:05:59,800 --> 00:06:01,600
we're going to have some more hygiene recommendations,

117
00:06:01,600 --> 00:06:02,800
which is great.

118
00:06:02,800 --> 00:06:05,520
There's going to be some new recommendations to

119
00:06:05,520 --> 00:06:08,440
enable trusted launch capabilities.

120
00:06:08,440 --> 00:06:11,400
Then on to GA, because

121
00:06:11,400 --> 00:06:13,480
Azure Security Center is pretty much

122
00:06:13,480 --> 00:06:15,520
as busy as Azure Sentinel, I reckon.

123
00:06:15,520 --> 00:06:18,320
It's things that have gone GA that you may have already seen.

124
00:06:18,320 --> 00:06:20,880
We've got Azure Defender for DNS and

125
00:06:20,880 --> 00:06:24,080
Azure Defender for Resource Manager are now GA.

126
00:06:24,080 --> 00:06:28,640
Azure Defender for open-source relational databases is GA.

127
00:06:28,640 --> 00:06:32,760
We've got some new alerts in Defender for Resource Manager,

128
00:06:32,760 --> 00:06:38,320
and the SQL data classification recommendation

129
00:06:38,320 --> 00:06:41,480
severity has changed, and that's all GA too.

130
00:06:41,480 --> 00:06:43,960
Then the last thing that I wanted to talk about again,

131
00:06:43,960 --> 00:06:46,120
Security Center, this is public preview,

132
00:06:46,120 --> 00:06:47,240
but this is very cool.

133
00:06:47,240 --> 00:06:48,640
So it gets a separate mention.

134
00:06:48,640 --> 00:06:54,040
It's that Azure Security Center now integrates with GitHub Actions.

135
00:06:54,040 --> 00:06:56,680
So if you're using GitHub Actions,

136
00:06:56,680 --> 00:07:01,280
they are a way of doing automation within your GitHub repo.

137
00:07:01,280 --> 00:07:03,480
I have had some experience with them.

138
00:07:03,480 --> 00:07:05,360
I'm a bit of a GitHub noob,

139
00:07:05,360 --> 00:07:07,720
but I have had some experience trying to post

140
00:07:07,720 --> 00:07:10,000
automated messages in the Sentinel repo.

141
00:07:10,000 --> 00:07:13,080
So I've done a tiny bit with this.

142
00:07:13,080 --> 00:07:17,240
It's very cool because what it means is that you can

143
00:07:17,240 --> 00:07:22,080
incorporate security and compliance into your CI/CD pipeline,

144
00:07:22,080 --> 00:07:26,120
and it will help developers identify issues faster.

145
00:07:26,120 --> 00:07:28,600
So definitely go check that out if you're using

146
00:07:28,600 --> 00:07:31,480
a GitHub repo for your code.

147
00:07:31,480 --> 00:07:34,240
Over to you, Michael. That's all my news.

148
00:07:34,240 --> 00:07:37,200
One of the first items that I have is the fact that we now

149
00:07:37,200 --> 00:07:42,280
have SecDevOps practice support in GitHub and Azure.

150
00:07:42,280 --> 00:07:48,480
So for example, if you're using GitHub as your main pipeline,

151
00:07:48,480 --> 00:07:52,360
then we can actually use tooling that we have now.

152
00:07:52,360 --> 00:07:55,440
For example, with Azure Security Center in combination with containers,

153
00:07:55,440 --> 00:07:59,120
we can provide that end-to-end collaborative view and tooling to

154
00:07:59,120 --> 00:08:02,160
help you secure the products that come out of your pipelines.

155
00:08:02,160 --> 00:08:03,600
That's really great to see.

156
00:08:03,600 --> 00:08:07,760
The next one is the general availability of

157
00:08:07,760 --> 00:08:10,960
key rotation and expiration policies for Azure Storage.

158
00:08:10,960 --> 00:08:12,680
So before I get stuck in,

159
00:08:12,680 --> 00:08:14,320
I need to explain what the keys are here.

160
00:08:14,320 --> 00:08:15,400
These are not encryption keys,

161
00:08:15,400 --> 00:08:16,760
these are not cryptographic keys.

162
00:08:16,760 --> 00:08:20,560
This is the keys that are used as essentially the access token that you use to

163
00:08:20,560 --> 00:08:22,680
access a storage account.

164
00:08:22,680 --> 00:08:25,400
If you're not familiar, there are two major ways of

165
00:08:25,400 --> 00:08:32,240
accessing storage accounts: either through such a token or by using Azure AD identities.

166
00:08:32,240 --> 00:08:34,240
On the last podcast,

167
00:08:34,240 --> 00:08:39,920
we talked about the new ability to use policy to disable the use of keys.

168
00:08:39,920 --> 00:08:43,920
So only using AAD accounts, that's the data plane.

169
00:08:43,920 --> 00:08:47,360
Well, if you need to use access keys,

170
00:08:47,360 --> 00:08:50,760
for example, shared access tokens,

171
00:08:50,760 --> 00:08:54,080
then sometimes you may want to rotate those on a regular basis.

172
00:08:54,080 --> 00:08:56,320
Well, now you can put policy in place to

173
00:08:56,320 --> 00:09:02,640
require rotation and expiration policies for those access keys.

174
00:09:02,640 --> 00:09:05,080
So some people still want to use access keys,

175
00:09:05,080 --> 00:09:06,320
I totally understand that,

176
00:09:06,320 --> 00:09:11,440
but this is just giving you more control over making sure that those things are rotated on a regular basis.
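
The check behind such an expiration policy is simple date arithmetic. Below is a hedged sketch: the real storage-account setting is keyPolicy.keyExpirationPeriodInDays, while the function, variable names, and dates here are invented for illustration.

```python
# Hedged sketch of an expiration-policy check: given when a storage
# account key was last rotated and the configured expiration period,
# decide whether the key is overdue for rotation. Dates are invented.
from datetime import datetime, timedelta, timezone
from typing import Optional

def key_needs_rotation(last_rotated: datetime,
                       expiration_days: int,
                       now: Optional[datetime] = None) -> bool:
    now = now or datetime.now(timezone.utc)
    return now - last_rotated > timedelta(days=expiration_days)

# Rotated on Jan 1 with a 90-day expiration period: overdue by May 1.
last_rotated = datetime(2021, 1, 1, tzinfo=timezone.utc)
check_date = datetime(2021, 5, 1, tzinfo=timezone.utc)
print(key_needs_rotation(last_rotated, 90, now=check_date))  # True
```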

177
00:09:11,440 --> 00:09:13,960
The next one is we now have the ability,

178
00:09:13,960 --> 00:09:15,560
this is in public preview,

179
00:09:15,560 --> 00:09:21,120
to have identity-based connections in Azure Functions using triggers,

180
00:09:21,120 --> 00:09:24,040
like triggers on various services.

181
00:09:24,040 --> 00:09:29,840
This applies right now to Azure Blob, Azure Queue, Event Hubs, Service Bus, and Event Grid.

182
00:09:29,840 --> 00:09:35,320
Basically, what it does is it now lets you leverage an identity instead of

183
00:09:35,320 --> 00:09:39,120
a connection string when these services are talking to each other.

184
00:09:39,120 --> 00:09:40,960
As you're probably all well aware,

185
00:09:40,960 --> 00:09:45,720
storing a secret is always a painfully difficult thing to do.

186
00:09:45,720 --> 00:09:48,720
More importantly, if it's compromised,

187
00:09:48,720 --> 00:09:53,640
then the attacker now can impersonate that particular service.

188
00:09:53,640 --> 00:09:58,560
So this gets rid of that problem by using managed identities.

189
00:09:58,560 --> 00:10:00,440
So if you've set this in place,

190
00:10:00,440 --> 00:10:02,240
you can have two services talking to each other just using

191
00:10:02,240 --> 00:10:05,160
managed identities to authenticate against each other.
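
As a rough sketch of what an identity-based connection looks like in practice: instead of a connection string, the function app gets only endpoint coordinates, and the managed identity does the authenticating. Setting names follow the "connection__property" pattern; the storage account and Service Bus namespace names below are invented.

```python
# Rough sketch of identity-based connection settings for a function app:
# only endpoint coordinates, no secret to store or leak. The account and
# namespace names are invented for illustration.
app_settings = {
    # Storage binding: account name only, no account key.
    "AzureWebJobsStorage__accountName": "mystorageacct",
    # Service Bus trigger: namespace only, no SAS connection string.
    "ServiceBusConnection__fullyQualifiedNamespace":
        "mynamespace.servicebus.windows.net",
}

# Contrast with a classic connection string, which embeds a shared key:
# "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=<secret>"
print(sorted(app_settings))
```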

192
00:10:05,160 --> 00:10:06,800
The last one I have,

193
00:10:06,800 --> 00:10:10,800
which I was going to talk about last week,

194
00:10:10,800 --> 00:10:12,240
but I totally forgot about it,

195
00:10:12,240 --> 00:10:18,480
was Cosmos DB now has support for client-side encryption using Always Encrypted.

196
00:10:18,480 --> 00:10:22,200
Always Encrypted is a technology that first came out in SQL Server.

197
00:10:22,200 --> 00:10:24,560
It's essentially client-side encryption.

198
00:10:24,560 --> 00:10:28,880
So the keys are actually maintained by the clients.

199
00:10:28,880 --> 00:10:30,800
SQL Server doesn't know about them.

200
00:10:30,800 --> 00:10:31,920
In this particular example,

201
00:10:31,920 --> 00:10:33,320
Cosmos DB doesn't know about them.

202
00:10:33,320 --> 00:10:35,720
They're maintained completely at the client.

203
00:10:35,720 --> 00:10:40,040
There are certain kinds of data and certain kinds of configurations that will allow you to do

204
00:10:40,040 --> 00:10:43,800
queries over that data even though it's encrypted.

205
00:10:43,800 --> 00:10:47,400
This is the beauty of Always Encrypted.

206
00:10:47,400 --> 00:10:52,280
So that technology is now available in Cosmos DB in preview.

207
00:10:52,280 --> 00:10:56,200
We talked a couple of months ago now about the SDK that's available.

208
00:10:56,200 --> 00:10:57,320
It's up on GitHub.

209
00:10:57,320 --> 00:10:59,560
It's essentially the same code.

210
00:10:59,560 --> 00:11:01,440
So if you're using Cosmos DB,

211
00:11:01,440 --> 00:11:06,760
and you want some incredibly robust cryptographic control at the data plane,

212
00:11:06,760 --> 00:11:08,880
then this is certainly worth looking at.
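
To make that "queries over encrypted data" idea concrete, here is a toy illustration. This is NOT the real Always Encrypted algorithm: it just shows why deterministic sealing enables equality queries, using an HMAC tag as a stand-in for the deterministic ciphertext. All data and names are invented.

```python
# Toy illustration, NOT the real Always Encrypted algorithm: with
# deterministic encryption, equal plaintexts yield equal ciphertexts, so
# the server can evaluate equality predicates over data it cannot read.
# An HMAC tag stands in here for the deterministic ciphertext.
import hashlib
import hmac

CLIENT_KEY = b"held only by the client; the server never sees this"

def seal(value: str) -> str:
    """Client-side: deterministic tag the server stores and compares."""
    return hmac.new(CLIENT_KEY, value.encode(), hashlib.sha256).hexdigest()

# The "server" stores only sealed values.
server_rows = [{"id": 1, "ssn": seal("123-45-6789")},
               {"id": 2, "ssn": seal("987-65-4321")}]

# To query, the client seals the search term; the server matches tags
# without learning any plaintext.
needle = seal("123-45-6789")
matches = [row["id"] for row in server_rows if row["ssn"] == needle]
print(matches)  # [1]
```

The trade-off this sketch also hints at: deterministic schemes leak equality of values, which is exactly why randomized encryption (which supports no queries) is preferred for low-entropy columns.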

213
00:11:08,880 --> 00:11:15,040
So with that, let's change tack and turn our attention to our guest.

214
00:11:15,040 --> 00:11:16,600
This week, we have Pete Bryant.

215
00:11:16,600 --> 00:11:21,200
As I mentioned, he's a senior software engineer in the Microsoft Threat Intelligence Center,

216
00:11:21,200 --> 00:11:22,880
otherwise known as MSTIC.

217
00:11:22,880 --> 00:11:25,280
Pete, welcome to the podcast.

218
00:11:25,280 --> 00:11:30,360
Would you like to spend a couple of moments explaining how long you've been at Microsoft and what you do?

219
00:11:30,360 --> 00:11:31,800
Thanks, Michael.

220
00:11:31,800 --> 00:11:33,680
Yeah, thanks for having me on.

221
00:11:33,680 --> 00:11:38,960
So officially I work as a software engineer at MSTIC,

222
00:11:38,960 --> 00:11:44,240
but really I'm more of a security analyst or researcher.

223
00:11:44,240 --> 00:11:47,920
I've worked for MSTIC for a couple of years now,

224
00:11:47,920 --> 00:11:54,680
and I've been at Microsoft for getting on for five years now in a variety of roles.

225
00:11:54,680 --> 00:12:00,600
I actually started off as a security engineer at Skype back in the day,

226
00:12:00,600 --> 00:12:06,280
and then I've done some kind of customer-facing roles before moving into MSTIC.

227
00:12:06,280 --> 00:12:12,880
But my background really has always been the kind of defensive side of cybersecurity.

228
00:12:12,880 --> 00:12:16,040
So SOCs, incident response, that sort of work.

229
00:12:16,040 --> 00:12:16,640
Very cool.

230
00:12:16,640 --> 00:12:20,240
So I mean, I have to ask this pretty straight up question.

231
00:12:20,240 --> 00:12:25,680
What is MSTIC and what is the role of MSTIC within Microsoft and how does it relate to our customers?

232
00:12:25,680 --> 00:12:29,480
So MSTIC has a number of different missions,

233
00:12:29,480 --> 00:12:33,920
as the name suggests, Threat Intelligence is a core one of those.

234
00:12:33,920 --> 00:12:41,440
So what that involves is investigating and tracking the more sophisticated threat actors

235
00:12:41,440 --> 00:12:45,200
that are targeting Microsoft and Microsoft customers.

236
00:12:45,200 --> 00:12:51,080
These are typically nation-backed groups or advanced e-crime actors.

237
00:12:51,080 --> 00:12:55,960
And you might have heard of some of these groups that we track when we talk about them publicly.

238
00:12:55,960 --> 00:13:02,680
We name them after periodic elements, so things like Strontium, Gold, Nobelium,

239
00:13:02,680 --> 00:13:08,680
these are all kind of names of threat actor groups that we track as part of the core Threat Intelligence mission.

240
00:13:08,680 --> 00:13:15,200
And the objective of that mission is to feed into both Microsoft's defender teams,

241
00:13:15,200 --> 00:13:19,320
so the teams that protect Microsoft as an organization,

242
00:13:19,320 --> 00:13:22,760
but also out to our customers through our security products.

243
00:13:22,760 --> 00:13:30,120
So the intelligence that we gather as part of the TI mission feeds into all of the products

244
00:13:30,120 --> 00:13:33,240
that our customers use kind of day in, day out.

245
00:13:33,240 --> 00:13:38,240
But it's not just that kind of Threat Intelligence mission that MSTIC does.

246
00:13:38,240 --> 00:13:41,920
We also have a number of other engineering and R&D roles.

247
00:13:41,920 --> 00:13:49,600
So we spend a lot of time and effort researching new attack techniques and new defensive techniques

248
00:13:49,600 --> 00:13:56,200
and feeding them again into the product groups, into the product ecosystem that Microsoft has.

249
00:13:56,200 --> 00:14:00,880
And some of that is providing domain expertise to other groups.

250
00:14:00,880 --> 00:14:07,160
Some of it is providing kind of core engineering platforms that actually do some of this detection as well.

251
00:14:07,160 --> 00:14:10,760
We also try and engage with the community more broadly.

252
00:14:10,760 --> 00:14:14,720
So as part of the Threat Intelligence mission, we have a lot of industry partners

253
00:14:14,720 --> 00:14:20,560
who we work very closely with on threat actor tracking and information sharing and so on.

254
00:14:20,560 --> 00:14:25,560
But we also try and share out through open source projects and openly in the community.

255
00:14:25,560 --> 00:14:32,720
So one of the kind of big open source projects we have is MSTICPy, which is one I work on.

256
00:14:32,720 --> 00:14:37,320
But there are others as well; in the news section, Gladys mentioned SimuLand,

257
00:14:37,320 --> 00:14:41,600
which was created by Roberto Rodriguez, who's my colleague at MSTIC.

258
00:14:41,600 --> 00:14:48,600
And there's a number of other areas where we're just trying to contribute back to improve the security ecosystem

259
00:14:48,600 --> 00:14:51,600
for Microsoft customers, but also just more generally.

260
00:14:51,600 --> 00:14:55,200
Can you explain a little bit about MSTICPy?

261
00:14:55,200 --> 00:14:59,720
Sure. So MSTICPy is kind of very much my baby.

262
00:14:59,720 --> 00:15:05,600
I could talk about it all day because it's something I've worked on for the last couple of years now.

263
00:15:05,600 --> 00:15:13,520
And what it is is a set of Python tools to support threat intelligence analysts and threat hunters.

264
00:15:13,520 --> 00:15:19,400
Most of it is derived from expertise and experience in-house in MSTIC.

265
00:15:19,400 --> 00:15:26,640
And we actually have a very similar tool set internally that has a different name

266
00:15:26,640 --> 00:15:32,440
and it's kind of geared towards our specific internal processes, but it has a lot of the same capabilities.

267
00:15:32,440 --> 00:15:40,480
And the idea really is to provide an easy and simple way for security analysts and threat hunters to use Python

268
00:15:40,480 --> 00:15:46,440
and primarily Jupyter notebooks to conduct this investigation work.

269
00:15:46,440 --> 00:15:51,480
So it has tools to help you kind of collect data, analyze data, visualize data,

270
00:15:51,480 --> 00:15:57,920
and really improve your workflow speed and capabilities

271
00:15:57,920 --> 00:16:03,920
based off this experience we have within Microsoft and specifically within MSTIC.

272
00:16:03,920 --> 00:16:09,920
So one of the big benefits of doing this in Python and creating a Python-based tool is that

273
00:16:09,920 --> 00:16:16,080
it also opens up integration with the wider Python ecosystem.

274
00:16:16,080 --> 00:16:20,600
All of the other capabilities people have built out there, partly for security,

275
00:16:20,600 --> 00:16:22,400
but more really for other projects.

276
00:16:22,400 --> 00:16:28,040
So if you think about the data science and ML community, they're heavily invested in Python

277
00:16:28,040 --> 00:16:33,080
and there are loads of great Python tools out there, such as scikit-learn,

278
00:16:33,080 --> 00:16:36,440
that make ML a lot easier to conduct.

279
00:16:36,440 --> 00:16:38,360
And they're all written in Python.

280
00:16:38,360 --> 00:16:44,000
So having a security tool set written in Python as well means we can kind of integrate those two sides

281
00:16:44,000 --> 00:16:47,720
of the Python ecosystem for the defenders as well.

282
00:16:47,720 --> 00:16:53,200
I'm always fascinated by the kind of the human element of tools like this and people's journey,

283
00:16:53,200 --> 00:16:56,120
you know, as things change, as technology shifts and so on.

284
00:16:56,120 --> 00:17:00,920
And obviously one of the biggest changes over the last decade has been the use of AI and ML.

285
00:17:00,920 --> 00:17:07,800
A couple of weeks ago, actually, at the end of April, we had Sharon Shah on who does the AI and ML

286
00:17:07,800 --> 00:17:09,320
for Azure Sentinel.

287
00:17:09,320 --> 00:17:13,200
And she briefly talked about her journey as a security professional,

288
00:17:13,200 --> 00:17:16,360
sort of through the AI and machine learning landscape.

289
00:17:16,360 --> 00:17:20,640
Could you share with our listeners sort of your kind of journey, like what things you've learned along

290
00:17:20,640 --> 00:17:23,440
the way as a security person learning AI and ML?

291
00:17:23,440 --> 00:17:28,600
A big thing with security, particularly from a threat hunting, threat intelligence perspective,

292
00:17:28,600 --> 00:17:31,080
is it's really just a data problem.

293
00:17:31,080 --> 00:17:36,200
You need to collect your data, format the data and then find the interesting things in it.

294
00:17:36,200 --> 00:17:42,120
And that's fundamentally not that different from what data scientists do.

295
00:17:42,120 --> 00:17:47,400
And having kind of talked to other data scientists and particularly working at Microsoft,

296
00:17:47,400 --> 00:17:50,640
where we've got teams of great data scientists doing other things,

297
00:17:50,640 --> 00:17:58,520
being able to collaborate with them has shown me kind of how powerful ML and AI capabilities can be

298
00:17:58,520 --> 00:18:03,000
for threat hunting, even when they're potentially quite basic,

299
00:18:03,000 --> 00:18:07,600
or at least capabilities that a data scientist would see as basic.

300
00:18:07,600 --> 00:18:11,320
And Python just makes them so accessible.

301
00:18:11,320 --> 00:18:18,160
It's provided me a way of really easily learning and leveraging some capabilities that really help.

302
00:18:18,160 --> 00:18:21,520
If you think about kind of what our data scientists do internally,

303
00:18:21,520 --> 00:18:30,200
they spend a lot of time creating really cool, very granular data models and machine learning algorithms

304
00:18:30,200 --> 00:18:35,680
that help kind of find specific events in a whole stack of data.

305
00:18:35,680 --> 00:18:40,240
But for me, what I can do is kind of take some of their learnings at basic level

306
00:18:40,240 --> 00:18:45,760
and apply it in threat hunting to do things that don't have to be anywhere near as sophisticated to be valuable.

307
00:18:45,760 --> 00:18:50,120
So if I've got a big set of data, if I can create an ML model,

308
00:18:50,120 --> 00:18:56,080
a simple ML model using some of the pre-built capabilities in something like scikit-learn,

309
00:18:56,080 --> 00:19:02,200
just to cut that data set down to 10% of what it was originally, that's a huge help to me.
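
As a stdlib-only stand-in for the idea Pete describes: score events by how unusual they are and keep only the outliers, shrinking the pile an analyst has to read. A real hunt would reach for something like scikit-learn; the per-host counts here are invented.

```python
# Stdlib-only sketch of ML-style data reduction for threat hunting:
# keep only values whose z-score marks them as unusual. The counts are
# invented; a real hunt would use richer features and a real model.
from statistics import mean, stdev

def keep_outliers(values, z_threshold=2.0):
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > z_threshold]

# 98 ordinary per-host logon counts plus two spikes.
logons_per_host = [10, 11, 9, 12, 10] * 19 + [11, 10, 10, 250, 300]
suspicious = keep_outliers(logons_per_host)
print(suspicious)  # only the two spikes survive
```

Even this crude filter cuts 100 rows down to 2, which is the "10% of what it was originally" effect Pete is after, just taken further.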

310
00:19:02,200 --> 00:19:09,480
And so having those capabilities and those tools easily available to me as a security person

311
00:19:09,480 --> 00:19:14,760
through Python and just a few lines of code is really powerful.

312
00:19:14,760 --> 00:19:20,440
And it means that I can learn a lot about ML as I go.

313
00:19:20,440 --> 00:19:22,040
I'm far from an expert.

314
00:19:22,040 --> 00:19:27,800
We work pretty closely with some data science experts.

315
00:19:27,800 --> 00:19:32,120
And to be honest, a lot of the maths they talk about goes over my head.

316
00:19:32,120 --> 00:19:39,440
But I can understand enough and leverage enough to make it useful to me as a threat hunter.

317
00:19:39,440 --> 00:19:43,920
Cool. So Pete, I get asked and I'm going to let you talk about it rather than me,

318
00:19:43,920 --> 00:19:50,400
but I get asked, why do I need Sentinel if MSTICPy can do all these things?

319
00:19:50,400 --> 00:19:52,640
Like how do they work together?

320
00:19:52,640 --> 00:19:55,600
Do they complement each other, etc.

321
00:19:55,600 --> 00:20:01,600
Because I know not everyone is clear on how those two might coexist.

322
00:20:01,600 --> 00:20:06,480
I guess the first thing to say is MSTICPy is built to work with Sentinel,

323
00:20:06,480 --> 00:20:09,200
but it's not Sentinel specific.

324
00:20:09,200 --> 00:20:12,400
It has capabilities to work with other data sources.

325
00:20:12,400 --> 00:20:16,080
It hooks up with things like Splunk, hooks up with Microsoft Defender,

326
00:20:16,080 --> 00:20:19,760
it hooks up with local data you might have.

327
00:20:19,760 --> 00:20:24,000
But it's also not a replacement for any of those tools really.

328
00:20:24,000 --> 00:20:32,640
It's focused more on the less structured parts of the security process.

329
00:20:32,640 --> 00:20:34,800
So you're not triaging alerts necessarily.

330
00:20:34,800 --> 00:20:43,360
It's more the experimentation that comes with threat hunting or a particularly complex investigation.

331
00:20:43,360 --> 00:20:48,960
One of the advantages it has is that it takes the power of something like KQL,

332
00:20:48,960 --> 00:20:53,840
which we have in Sentinel, and just opens up to the, again,

333
00:20:53,840 --> 00:20:56,640
pretty much anything you could think of doing in Python.

334
00:20:57,520 --> 00:21:00,080
If you think about where MSTICPy is sitting,

335
00:21:00,080 --> 00:21:04,240
it's probably not going to be something that every security analyst is going to use.

336
00:21:04,240 --> 00:21:09,600
It is definitely one of the more advanced capabilities in a tool set.

337
00:21:09,600 --> 00:21:14,800
But it allows you to do things that you maybe could do in Sentinel,

338
00:21:14,800 --> 00:21:16,800
but wouldn't necessarily want to do.

339
00:21:16,800 --> 00:21:19,760
And I think one of the really powerful things about it is its ability

340
00:21:19,760 --> 00:21:22,160
to connect to multiple different datasets at once.

341
00:21:22,720 --> 00:21:25,840
You can use it and pull data from Sentinel as your starting point,

342
00:21:25,840 --> 00:21:30,720
but then also pull data from other locations and analyze it together

343
00:21:30,720 --> 00:21:34,080
without having to then kind of ingest all that data into Sentinel,

344
00:21:34,960 --> 00:21:39,120
and the kind of storage elements and engineering side that comes with that.

345
00:21:39,120 --> 00:21:43,680
Really, it's kind of your extension out of Sentinel into other elements.

346
00:21:44,400 --> 00:21:48,160
And really, it's kind of the world's your oyster once you're in MSTICPy,

347
00:21:48,160 --> 00:21:53,520
because you're not really constrained by kind of UI or features at that point.

348
00:21:54,320 --> 00:21:58,960
The general sort of view I get is Sentinel is this big tool,

349
00:21:58,960 --> 00:22:02,000
whereas MSTICPy may be more applicable for some people

350
00:22:02,000 --> 00:22:04,160
who want to have some more programmatic access

351
00:22:04,160 --> 00:22:08,000
and fiddling around with different settings and so on

352
00:22:08,000 --> 00:22:10,080
just to get certain types of data back.

353
00:22:10,080 --> 00:22:13,920
It seems more program-y rather than being an infrastructure tool.

354
00:22:13,920 --> 00:22:16,000
Is that a fair comment or not?

355
00:22:16,720 --> 00:22:18,080
Yeah, absolutely.

356
00:22:18,080 --> 00:22:24,320
And it's also not a tool that's really built for structure, if that makes sense.

357
00:22:24,320 --> 00:22:29,520
It doesn't have integration with like a ticketing system or a nice process queue

358
00:22:29,520 --> 00:22:33,040
that you get with the incident experience in Sentinel, say.

359
00:22:33,040 --> 00:22:36,560
So there are scenarios where you definitely

360
00:22:36,560 --> 00:22:39,120
need that structure, such as triaging alerts,

361
00:22:39,120 --> 00:22:41,920
that you're just not going to have in MSTICPy,

362
00:22:41,920 --> 00:22:44,080
because that's not really what it's intended for.

363
00:22:44,080 --> 00:22:48,560
So can you give an example of how MSTICPy has been used in the wild

364
00:22:48,560 --> 00:22:50,800
and perhaps saved some of our customers some time?

365
00:22:50,800 --> 00:22:57,280
Sure. So along with MSTICPy, we've also created a number of Jupyter notebooks

366
00:22:57,280 --> 00:23:07,600
that go with Sentinel and use MSTICPy to allow people to do specific things.

367
00:23:07,600 --> 00:23:10,560
And one that we created last year that I think is a good example

368
00:23:10,560 --> 00:23:13,840
is one that was looking at COVID-19 themed threats.

369
00:23:13,840 --> 00:23:18,400
And we wrote this back in, I think, March or April last year,

370
00:23:18,400 --> 00:23:23,840
when we were seeing a huge volume of COVID-themed phishing attacks

371
00:23:23,840 --> 00:23:28,560
and other kind of influence type operations.

372
00:23:29,600 --> 00:23:34,640
And rather than just release a feed of IOCs that we were seeing, which would have

373
00:23:34,640 --> 00:23:38,800
grown on an exponential basis, it made a lot more sense to create a notebook

374
00:23:39,440 --> 00:23:43,440
using MSTICPy that allowed people to analyze the data themselves.

375
00:23:43,440 --> 00:23:48,960
So the notebook collects various data sets, primarily from Sentinel,

376
00:23:48,960 --> 00:23:56,640
and then looks in them for COVID-themed elements, domain names, document names, things like that.

377
00:23:56,640 --> 00:24:01,280
And then uses a number of the features of MSTICPy to help highlight

378
00:24:02,000 --> 00:24:06,000
which of those might be something worth investigating a bit further.

379
00:24:06,000 --> 00:24:10,880
So we can look them up in threat intelligence feeds, we can get details on domains

380
00:24:10,880 --> 00:24:14,480
and when they were registered, are they something that was just kind of set up 10 minutes ago

381
00:24:14,480 --> 00:24:16,480
or has this been around for a couple of years?

382
00:24:17,680 --> 00:24:19,040
What's the reputation of this?
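The registration-age check described here can be sketched as a small, hedged example: a domain registered days ago is more suspicious than one registered years ago. The WHOIS records, domain names, and 30-day threshold below are all invented for illustration.

```python
from datetime import date

# Invented WHOIS creation dates for two example domains.
whois_records = {
    "covid-relief-login.example": date(2020, 4, 1),
    "example.org": date(2010, 1, 15),
}

def is_newly_registered(domain, today, max_age_days=30):
    """Flag domains whose WHOIS creation date is within max_age_days."""
    created = whois_records.get(domain)
    if created is None:
        return True  # no record at all is itself worth a look
    return (today - created).days <= max_age_days

today = date(2020, 4, 10)
flagged = [d for d in whois_records if is_newly_registered(d, today)]
print(flagged)  # only the domain registered nine days earlier
```

In the real notebook this kind of signal would come from an external lookup service rather than a hard-coded table; the sketch only shows the shape of the decision.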

383
00:24:19,680 --> 00:24:25,200
Again, just kind of allowing us to take that core data that you've got in Sentinel

384
00:24:25,200 --> 00:24:30,960
and enhance it with all of these external data feeds to help you kind of drill in and investigate

385
00:24:30,960 --> 00:24:33,120
this data. And I think that's really important, right?

386
00:24:33,120 --> 00:24:38,000
I mean, you've got this massive amount of data and you're essentially using MSTICPy and its

387
00:24:38,000 --> 00:24:44,960
AI and ML to whittle it down to some smaller data set that has a higher likelihood

388
00:24:44,960 --> 00:24:46,800
of being real attack data.

389
00:24:47,600 --> 00:24:54,320
Absolutely. And you can't necessarily get it down to something that has zero false positives.

390
00:24:54,320 --> 00:24:58,480
But as I was saying before, as a threat hunter, you don't need to do that necessarily.

391
00:24:58,480 --> 00:25:03,440
If you can just cut it down to a manageable level that you can go and investigate a bit further,

392
00:25:03,440 --> 00:25:04,400
that's a huge win.
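That "cut it down to a manageable level" step might look something like this toy scoring pass. The events, signals, weights, and threshold are all invented for illustration; the point is the shape of the filter, not the specific rules.

```python
# Invented events with a few cheap, precomputed signals on each.
events = [
    {"id": 1, "domain": "covid19-update.example", "new_domain": True,  "ti_hit": False},
    {"id": 2, "domain": "intranet.example",       "new_domain": False, "ti_hit": False},
    {"id": 3, "domain": "covid-pay.example",      "new_domain": True,  "ti_hit": True},
]

def score(event):
    s = 0
    if "covid" in event["domain"]:
        s += 1  # themed lure keyword
    if event["new_domain"]:
        s += 1  # recently registered
    if event["ti_hit"]:
        s += 2  # known-bad in a threat intelligence feed
    return s

# Not zero false positives, just a manageable list for a human to triage.
worth_a_look = sorted(
    (e for e in events if score(e) >= 2),
    key=score, reverse=True,
)
print([e["id"] for e in worth_a_look])  # [3, 1]
```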

393
00:25:04,400 --> 00:25:11,520
From the looks of it, there are a lot of lessons that can be gathered from the work that

394
00:25:11,520 --> 00:25:16,640
MSTIC does. Does MSTIC publish data on attacks or threat actors?

395
00:25:18,480 --> 00:25:27,360
Yes. So we do fairly regularly now. We publish things publicly and also via the Microsoft

396
00:25:27,360 --> 00:25:33,920
security tooling. So you'll see our public blogs. We've recently posted ones about the

397
00:25:33,920 --> 00:25:41,520
groups Nobelium and Hafnium. And these go into detail about the techniques and the malware that

398
00:25:41,520 --> 00:25:47,680
we've seen those threat actors use. But it's not just those public elements. We also have

399
00:25:47,680 --> 00:25:55,280
detections and reports available via things like Defender for Endpoint. So if you are targeted

400
00:25:55,280 --> 00:26:00,960
by one of these groups, you'll get access to reporting about those groups through the portal

401
00:26:00,960 --> 00:26:06,800
where you can learn a bit more about them, what their history is, what their typical targeting

402
00:26:06,800 --> 00:26:13,840
pattern is, TTPs that might be associated with them. And we also extend this out to other

403
00:26:13,840 --> 00:26:19,040
customers we have. So where we're seeing the groups or the threat actors that we're tracking,

404
00:26:19,040 --> 00:26:23,760
where we see them targeting customers specifically, we let those customers know

405
00:26:24,320 --> 00:26:30,320
what we've seen, when we've seen it, and help them respond to that. So we're using the

406
00:26:30,320 --> 00:26:37,760
intelligence that we're gathering to feed into customers and the community as much as we can

407
00:26:37,760 --> 00:26:45,200
through this. And we're always looking to step this up. The blog cadence has increased over the

408
00:26:45,200 --> 00:26:50,880
last year. We're also starting to share threat intelligence data sets more often than we have

409
00:26:50,880 --> 00:26:57,040
done in the past. And really, we're just trying to be as open as possible and publish as much as

410
00:26:57,040 --> 00:27:05,280
we can to help people defend themselves against these threat actors, but also allow other threat

411
00:27:05,280 --> 00:27:12,240
intelligence organizations and companies to build upon the work we've done and expand it using their

412
00:27:12,240 --> 00:27:17,680
own visibility. We've seen some great recent examples of that where we published a blog about a

413
00:27:17,680 --> 00:27:23,040
threat actor. And another threat intelligence organization has been able to take the information

414
00:27:23,040 --> 00:27:28,560
there and build and expand and produce their own blogs with some new information that's based on

415
00:27:28,560 --> 00:27:35,520
the unique visibility that they might have. So there's a real advantage to everyone in

416
00:27:36,560 --> 00:27:44,000
this kind of public sharing of the work that we do. So that leads into a good question about

417
00:27:44,000 --> 00:27:54,080
Nobelium. What do we learn from Nobelium? Oh, I mean, we learn a lot from Nobelium and we're

418
00:27:54,080 --> 00:28:04,320
still learning from them. I think the last six to eight months have been a continual learning process

419
00:28:04,320 --> 00:28:11,920
from this threat group. There's so many things that they've done that have been kind of interesting

420
00:28:11,920 --> 00:28:21,360
and maybe not completely new, but used in a way that we haven't seen on a scale before that has

421
00:28:21,360 --> 00:28:30,160
allowed us to focus our research, but also kind of develop new investigative areas. So the way

422
00:28:30,160 --> 00:28:37,600
that they focused and pivoted from on-premises identity up into the cloud, this wasn't necessarily

423
00:28:37,600 --> 00:28:44,400
a completely new technique. The stealing of ADFS key material and minting of SAML tokens had been

424
00:28:44,400 --> 00:28:49,440
documented by researchers before and had been used by attackers before, but not on the same kind

425
00:28:49,440 --> 00:28:56,080
of scale or sophistication that we saw with Nobelium. Recently, we published a blog about

426
00:28:56,080 --> 00:29:01,760
the phishing campaign Nobelium's been running for the last few months and the techniques they used

427
00:29:01,760 --> 00:29:09,040
there. Again, the techniques weren't completely new or novel, but the way they went about doing this,

428
00:29:09,040 --> 00:29:16,080
the TTPs they used and the methodical approach they took is something that we're learning from.

429
00:29:16,080 --> 00:29:21,600
And again, when you're tracking these very sophisticated threat actors, you learn a lot

430
00:29:21,600 --> 00:29:30,800
just from the way that they approach these attacks, how they spend a long time developing and testing

431
00:29:30,800 --> 00:29:36,160
capabilities before launching attacks. I think if you look at the supply chain attack Nobelium

432
00:29:36,160 --> 00:29:43,840
launched, they spent years on this basically, getting access, persisting it, tweaking it, testing

433
00:29:43,840 --> 00:29:50,640
it, and then finally exploiting it for their end goals. So that kind of timeline and persistence

434
00:29:50,640 --> 00:29:58,640
is super valuable for a defender to learn from and gives us a whole wide range of data points

435
00:29:58,640 --> 00:30:05,760
that we can use to improve our own tracking of threat actors, but also the defenses that go into

436
00:30:05,760 --> 00:30:12,240
our products. What about Hafnium? Have we learned anything there? Any particular patterns?

437
00:30:13,760 --> 00:30:19,920
So Hafnium was a really interesting one. And I think one of the great things about

438
00:30:20,560 --> 00:30:26,800
that was the cross-company approach we took. So Hafnium was a threat actor that we had been

439
00:30:26,800 --> 00:30:33,920
tracking who we saw exploiting Exchange vulnerabilities, and specifically a number of Exchange zero-days

440
00:30:33,920 --> 00:30:42,400
that were disclosed in March this year. So about the time that we started seeing them exploit

441
00:30:42,400 --> 00:30:46,880
these capabilities and understanding what was going on, other parts of Microsoft were also

442
00:30:46,880 --> 00:30:55,120
focusing in on this. External security researchers had reported some of these

443
00:30:55,120 --> 00:31:01,040
exploits and vulnerabilities to the Microsoft Security Response Center. And this meant that we

444
00:31:01,040 --> 00:31:07,040
could team up with them and the Exchange group to take that information we had from researchers,

445
00:31:07,600 --> 00:31:14,480
the threat actor information that we had seen in MSTIC, and the research we'd done internally

446
00:31:14,480 --> 00:31:21,360
to create a really good response to that, allowing us to have kind of a comprehensive

447
00:31:21,360 --> 00:31:27,360
patching and protection capability for customers, as well as having detection and threat hunting

448
00:31:28,240 --> 00:31:32,560
resources for people to go see if they've been impacted. And that was a really good example,

449
00:31:32,560 --> 00:31:37,920
I think, of the threat intelligence mission that MSTIC has, enriching and enhancing the

450
00:31:38,560 --> 00:31:41,600
security work that goes on across all of Microsoft really.

451
00:31:42,160 --> 00:31:49,680
I will have to ask about ransomware. There's a lot of talk about it lately,

452
00:31:49,680 --> 00:31:59,280
and I have heard customers asking why their antivirus couldn't catch it. Can you provide some

453
00:31:59,280 --> 00:32:04,000
learnings from there and the overall ransomware ecosystem?

454
00:32:06,000 --> 00:32:11,760
Yeah, so obviously ransomware is a big problem for the whole industry at the moment. And it's

455
00:32:11,760 --> 00:32:17,920
certainly one that MSTIC is focusing on. You might have seen the recent reports

456
00:32:17,920 --> 00:32:25,120
about the FBI responding to the attackers who targeted the Colonial Pipeline here in the US.

457
00:32:25,920 --> 00:32:31,760
And the FBI called out in their press conference the other day the support that they'd had from

458
00:32:31,760 --> 00:32:38,880
MSTIC. So it shows the kind of work we're doing across not just Microsoft, but the wider

459
00:32:38,880 --> 00:32:44,880
community to help respond to ransomware and impose some cost on these ransomware actors.

460
00:32:44,880 --> 00:32:53,120
But really, what is interesting about ransomware is the way it's often depicted in the news versus

461
00:32:53,120 --> 00:33:01,120
the personas behind it. So ransomware is often seen as a malware problem and it's reported

462
00:33:01,840 --> 00:33:08,400
as a malware problem. So again, if we look at Colonial Pipeline, the report was about

463
00:33:08,400 --> 00:33:15,440
DarkSide. And really, DarkSide is a type of ransomware. It's the software that's involved in it.

464
00:33:16,240 --> 00:33:23,120
But behind DarkSide, there's a whole number of personas and actors that we can track and look

465
00:33:23,120 --> 00:33:29,360
at here. Generally, with these ransomware groups, it breaks down into kind of three parts. You've

466
00:33:29,360 --> 00:33:36,160
got the people who create the ransomware who do the coding. You've got the sellers effectively.

467
00:33:36,160 --> 00:33:42,640
So these are the people that advertise the things on the dark web that provide people access for a

468
00:33:42,640 --> 00:33:49,600
fee and kind of maintain the infrastructure behind it. And then you've got the operators at the end.

469
00:33:49,600 --> 00:33:56,640
So these are the people who buy access to the ransomware platform and then will deploy it at

470
00:33:56,640 --> 00:34:02,880
a victim and be that kind of initial interaction point with the victim demanding the ransom.

471
00:34:02,880 --> 00:34:07,680
So you've got all these different elements that you need to contend with here and there's a number

472
00:34:07,680 --> 00:34:13,920
of places where you can learn and track from them. And I think part of the problem we have

473
00:34:13,920 --> 00:34:20,000
with ransomware is, we see it as a malware problem because we talk about it as malware. Whereas

474
00:34:20,000 --> 00:34:26,000
really, you've got to think of it much more broadly than that. You've got to have an approach that

475
00:34:26,000 --> 00:34:32,960
thinks about it a lot more holistically, particularly for these really big intrusions. It's not just

476
00:34:32,960 --> 00:34:39,280
a case of someone attaching a ransomware payload to an email and sending it so it gets deployed.

477
00:34:39,280 --> 00:34:48,240
These operators who are compromising organizations are acting like a sophisticated threat actor

478
00:34:48,240 --> 00:34:54,960
and gaining access through an initial kind of compromise point, pivoting around, gaining

479
00:34:54,960 --> 00:35:01,840
domain dominance and then deploying ransomware at the end. So you need to, yes, think about blocking

480
00:35:01,840 --> 00:35:08,400
the malware aspect of ransomware. But really, if you get to that point, that's very late in the

481
00:35:08,400 --> 00:35:13,920
ransomware kill chain. You need to be looking way earlier at that kind of like initial access, lateral

482
00:35:13,920 --> 00:35:20,000
movement, how are you going to detect and stop that? Because that's really the stage you need to be

483
00:35:20,000 --> 00:35:25,040
doing it rather than just trying to block the ransomware executing on the endpoint.
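A rough sketch of what "looking way earlier" in the kill chain could mean in practice: instead of waiting for a payload on an endpoint, flag an account that suddenly fans out across many hosts, a crude lateral-movement signal. The log lines and the threshold below are invented for illustration, not a real detection rule.

```python
from collections import defaultdict

# Invented (account, host) logon pairs, as might come from auth logs.
logons = [
    ("svc-backup", "host-01"), ("svc-backup", "host-02"),
    ("svc-backup", "host-03"), ("svc-backup", "host-04"),
    ("alice", "host-01"),
]

# Count distinct hosts each account has touched.
hosts_per_account = defaultdict(set)
for account, host in logons:
    hosts_per_account[account].add(host)

THRESHOLD = 3  # would need tuning against a real environment's baseline
suspicious = [a for a, h in hosts_per_account.items() if len(h) > THRESHOLD]
print(suspicious)  # ['svc-backup']
```

A real detection would baseline per account and per time window; the sketch only shows why early-stage movement is detectable at all.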

484
00:35:27,680 --> 00:35:32,480
So when we started the interview, you mentioned that MSTIC is also involved in research and

485
00:35:32,480 --> 00:35:37,200
development. Could you give us a brief overview of what the research and development looks like?

486
00:35:38,080 --> 00:35:45,200
Yeah. So R&D in security is one of those kind of never-ending problems: you've always got to

487
00:35:45,200 --> 00:35:50,640
kind of keep up with the latest attacks. You've always got to develop detections for the latest

488
00:35:50,640 --> 00:35:59,600
TTPs that have come out or the latest piece of malware. And obviously, doing that kind of churn

489
00:35:59,600 --> 00:36:06,160
is part of what we do. But we're also looking at how we can leverage the community to kind of take

490
00:36:06,160 --> 00:36:12,320
our R&D to the next level. And one of the things that we're looking at and my colleague Roberto

491
00:36:12,320 --> 00:36:19,760
is really championing is this idea of how can we engage with the security community, particularly

492
00:36:19,760 --> 00:36:28,320
the research and offensive security community, to build our R&D in at their development stage.

493
00:36:28,320 --> 00:36:35,120
So if you think about kind of red team tools quite often what happens is these great offensive

494
00:36:35,120 --> 00:36:40,960
security researchers will develop a great new technique, build it into a tool to help them

495
00:36:40,960 --> 00:36:49,760
and other red teams, and release it. It will then get abused by a malicious actor who will

496
00:36:49,760 --> 00:36:55,680
compromise a bunch of innocent people using it. The defensive security team will then be kind of

497
00:36:55,680 --> 00:37:02,640
focused on it and develop detections and allow people to find it. Well, really, we kind of want

498
00:37:02,640 --> 00:37:09,760
to skip that whole innocent-people-getting-compromised piece and see if we can work

499
00:37:09,760 --> 00:37:16,960
with those offensive security researchers early on to help them develop the detection capabilities

500
00:37:16,960 --> 00:37:24,640
as they're developing the offensive tooling so that when it's released defenders are already there

501
00:37:24,640 --> 00:37:31,840
rather than having to wait. And that's kind of a big project for us and it's kind of a big bit

502
00:37:31,840 --> 00:37:37,440
of work for the community because it typically hasn't been something the community has done so well

503
00:37:37,440 --> 00:37:46,960
that kind of red versus blue element and coming together. But also the R&D side on the defender's

504
00:37:46,960 --> 00:37:53,440
perspective has been limited by things like a lack of good tooling and also just a lack of time.

505
00:37:53,440 --> 00:38:01,920
Like SOC teams have a number of objectives and it can often be quite time-sensitive. So

506
00:38:01,920 --> 00:38:08,080
kind of making the time for this defensive research can be hard. So we're trying to do things like

507
00:38:08,080 --> 00:38:14,640
developing frameworks and tool sets that can help these defenders do that research. So SimuLand,

508
00:38:14,640 --> 00:38:24,400
which was mentioned in the news section, is one that Roberto has created to help with this. But also

509
00:38:24,400 --> 00:38:30,640
just the work we're trying to do with these researchers to get ahead of the game effectively

510
00:38:30,640 --> 00:38:36,880
on the defensive side of that research and through Roberto's persistence and the work that he's done

511
00:38:36,880 --> 00:38:42,560
building the community we're having some great success there in allowing us to kind of work with

512
00:38:42,560 --> 00:38:49,120
these researchers to make sure that we understand their new techniques and capabilities before

513
00:38:49,120 --> 00:38:55,680
they're public, and also make sure we've got those protections and defences built into our products

514
00:38:55,680 --> 00:39:01,360
before they're public. And for me that's probably going to be a real game changer not in the next

515
00:39:01,360 --> 00:39:05,600
week or two but in the next year or so. If we can kind of really get that process working

516
00:39:06,240 --> 00:39:09,840
I think we're going to make a tangible difference to the security landscape.

517
00:39:10,400 --> 00:39:14,000
There's a question we ask all our guests at the very end, and that is: if you had one final

518
00:39:14,000 --> 00:39:17,440
thought, just one thing for our listeners to really hang on to, what would it be?

519
00:39:17,440 --> 00:39:27,200
I think a key thing to focus on is keeping perspective on security threats. As threat

520
00:39:27,200 --> 00:39:32,240
intelligence practitioners we quite often like focusing on the really sophisticated elements

521
00:39:32,240 --> 00:39:37,280
and talking about them. So a good example would be with Nobelium: all the focus was on their

522
00:39:37,840 --> 00:39:42,800
long-running supply chain compromise but really that was just kind of part of what they did.

523
00:39:42,800 --> 00:39:51,840
A lot of the other elements of their attack were the techniques and processes that are well known

524
00:39:51,840 --> 00:39:58,560
to defenders and really by keeping that in perspective and focusing on the security basics

525
00:39:58,560 --> 00:40:03,600
people can really do a great job at protecting themselves even from the most advanced threat

526
00:40:03,600 --> 00:40:10,240
actors out there. Things like just making sure you've got MFA enabled, things like restricting

527
00:40:10,240 --> 00:40:16,240
what you're exposing to the internet. They make a huge difference, again not just against the kind of

528
00:40:16,240 --> 00:40:21,120
commodity threats but against these really advanced actors as well, and it might not stop them

529
00:40:21,120 --> 00:40:26,800
because they're going to be persistent and find other ways in, but what it does is, A, imposes a bigger

530
00:40:26,800 --> 00:40:33,120
cost on them and, B, gives defenders way more opportunity to detect them. You'll find that even

531
00:40:33,120 --> 00:40:40,640
advanced groups will be quite noisy particularly if you force them to kind of circumvent good

532
00:40:40,640 --> 00:40:46,560
security controls. So really what I'd say is focus on making sure you've got the basics in place

533
00:40:47,440 --> 00:40:53,600
regardless of who the threat actor in your threat model is. So with that, let's bring the podcast

534
00:40:53,600 --> 00:40:58,400
to an end. Pete, thanks so much for joining us this week. I know we all really appreciate it. We also

535
00:40:58,400 --> 00:41:04,320
learned a great deal, and hopefully our listeners learned a great deal as well. And to all you

536
00:41:04,320 --> 00:41:08,160
listeners out there, thank you so much for joining us this week. Take care of yourselves and we'll see

537
00:41:08,160 --> 00:41:13,680
you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other

538
00:41:13,680 --> 00:41:20,960
resources at our website, azsecuritypodcast.net. If you have any questions, please find us on

539
00:41:20,960 --> 00:41:35,280
Twitter at azuresetpod. Background music is from ccmixter.com and licensed under a Creative Commons license.

