1
00:00:00,000 --> 00:00:09,600
Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy,

2
00:00:09,600 --> 00:00:13,080
reliability and compliance on the Microsoft Cloud Platform.

3
00:00:13,080 --> 00:00:15,280
Hey, everybody.

4
00:00:15,280 --> 00:00:17,160
Welcome to Episode 94.

5
00:00:17,160 --> 00:00:19,760
This week, it's myself, Michael, with Sarah and Mark.

6
00:00:19,760 --> 00:00:25,280
This week, our guest is Ryan Munch, who's here to talk to us about Copilot for Security.

7
00:00:25,280 --> 00:00:28,120
But before we get to our guest, let's take a little wrap around the news.

8
00:00:28,120 --> 00:00:29,920
Sarah, why don't you kick things off?

9
00:00:29,920 --> 00:00:34,480
So a couple of things that I love to start with, bit of containers.

10
00:00:34,480 --> 00:00:39,960
So now, in public preview, we've got support for Key Vault certificates in Azure

11
00:00:39,960 --> 00:00:41,420
Container Apps.

12
00:00:41,420 --> 00:00:42,420
So that's nice.

13
00:00:42,420 --> 00:00:47,640
You can get your own TLS or SSL certificates in Container Apps and you can use Key Vault

14
00:00:47,640 --> 00:00:49,400
to store them.

15
00:00:49,400 --> 00:00:54,160
And for GA, we've now got free managed certificates on Azure Container Apps.

16
00:00:54,160 --> 00:00:56,400
So again, a nice free certificate.

17
00:00:56,400 --> 00:01:00,520
We love free things and of course, certificates are very important.

18
00:01:00,520 --> 00:01:01,520
Another couple of things.

19
00:01:01,520 --> 00:01:04,920
Now, this is, of course, what everyone's talking about at the moment.

20
00:01:04,920 --> 00:01:07,760
But a couple of things on the responsible AI side.

21
00:01:07,760 --> 00:01:11,300
Now, some people might be like, oh, responsible AI.

22
00:01:11,300 --> 00:01:12,300
That's not security.

23
00:01:12,300 --> 00:01:16,520
But in fact, if you start looking at that stuff and I'll put it in the show notes as

24
00:01:16,520 --> 00:01:20,180
well, I recently released a very, very short blog post on this.

25
00:01:20,180 --> 00:01:24,720
But responsible AI and security, you basically can't separate them.

26
00:01:24,720 --> 00:01:30,440
And so we've also released a couple of things on the responsible AI security side.

27
00:01:30,440 --> 00:01:32,440
One is Prompt Shields.

28
00:01:32,440 --> 00:01:38,640
So that will help check your large language model inputs for user prompt and document

29
00:01:38,640 --> 00:01:39,640
attacks.

30
00:01:39,640 --> 00:01:41,240
It's built into Azure AI Studio.

31
00:01:41,240 --> 00:01:43,360
So you just have to turn it on.

32
00:01:43,360 --> 00:01:45,840
And we've also done one for groundedness detection.

33
00:01:45,840 --> 00:01:51,760
So groundedness is with an LLM, a large language model.

34
00:01:51,760 --> 00:01:56,160
I'm going to explain this terribly, but essentially it means that the grounding is when the response

35
00:01:56,160 --> 00:02:00,480
that the model gives you is actually grounded in the material that it's been trained on

36
00:02:00,480 --> 00:02:03,120
and the material that's been provided to it.

37
00:02:03,120 --> 00:02:07,180
And again, what it essentially means is it gives you better answers.

38
00:02:07,180 --> 00:02:11,200
These are things that, if people give clever prompts, can be got around.

39
00:02:11,200 --> 00:02:14,180
So having more tools to stop that is important.

40
00:02:14,180 --> 00:02:18,560
So go and have a look at that if you are using AI stuff.

41
00:02:18,560 --> 00:02:22,800
And then the last couple of things, because I like to give a shout out for events.

42
00:02:22,800 --> 00:02:27,040
I know we talked about Build, which, at the time we're recording this, is coming up in a couple

43
00:02:27,040 --> 00:02:28,120
of months.

44
00:02:28,120 --> 00:02:34,040
But also, we have for those of you who are security research type people, we have our

45
00:02:34,040 --> 00:02:35,520
Blue Hat Conference.

46
00:02:35,520 --> 00:02:37,360
And there's actually two coming up.

47
00:02:37,360 --> 00:02:42,240
So there's Blue Hat India, which is in the middle of May.

48
00:02:42,240 --> 00:02:46,800
And the call for papers is closed, but you can apply to attend Blue Hat India, which

49
00:02:46,800 --> 00:02:53,440
is in Hyderabad and Blue Hat Israel, which is a couple of days, I think it's the week

50
00:02:53,440 --> 00:02:58,720
after Blue Hat India, that still has its call for papers open.

51
00:02:58,720 --> 00:03:02,920
So at least at the time we're recording this for I think another week or so.

52
00:03:02,920 --> 00:03:08,600
So it's the first time they've done Blue Hat in India, but Blue Hat Israel has been around

53
00:03:08,600 --> 00:03:09,600
a long time.

54
00:03:09,600 --> 00:03:14,440
I have not sadly been able to go to it yet, but I'm told it's really good.

55
00:03:14,440 --> 00:03:19,360
So if you're into your security research, you should go and check that out.

56
00:03:19,360 --> 00:03:23,680
And because India and Israel are not that far apart, you can basically go from one to

57
00:03:23,680 --> 00:03:24,680
another.

58
00:03:24,680 --> 00:03:29,640
And I'm very sad, not that I've planned this out in my head, but unfortunately, I will

59
00:03:29,640 --> 00:03:32,720
be at Build this year, so I won't get to go.

60
00:03:32,720 --> 00:03:37,040
But if you are interested in that, you should go check it out and I'll put the links in

61
00:03:37,040 --> 00:03:38,360
the show notes.

62
00:03:38,360 --> 00:03:41,080
And Michael, that's my news over to you.

63
00:03:41,080 --> 00:03:42,680
I can't believe you said SSL.

64
00:03:42,680 --> 00:03:46,640
You know, SSL has been deprecated for decades, right?

65
00:03:46,640 --> 00:03:52,600
Full disclosure, I was reading, and everyone can go and look it up in the show notes, I

66
00:03:52,600 --> 00:03:55,200
am reading the Azure news.

67
00:03:55,200 --> 00:03:58,320
I did read TLS slash SSL.

68
00:03:58,320 --> 00:04:00,600
I 100% agree with you, Michael.

69
00:04:00,600 --> 00:04:02,720
Yeah, we should get rid of that.

70
00:04:02,720 --> 00:04:03,720
We should.

71
00:04:03,720 --> 00:04:07,160
That's something on my bucket list: to get rid of references to SSL.

72
00:04:07,160 --> 00:04:12,600
Anyway, on to the news. So we've just added this new function to Azure SQL Database, which

73
00:04:12,600 --> 00:04:15,560
is advanced notifications for planned maintenance.

74
00:04:15,560 --> 00:04:20,680
It basically gives you more flexibility on when maintenance may occur on your Azure SQL

75
00:04:20,680 --> 00:04:21,680
databases.

76
00:04:21,680 --> 00:04:23,280
This is something you can sign up for in the portal.

77
00:04:23,280 --> 00:04:27,640
It's well worth looking at. It just gives you more control over when

78
00:04:27,640 --> 00:04:30,000
the maintenance occurs on your instances.

79
00:04:30,000 --> 00:04:34,040
Next one is we have a new thing called Database Watcher for Azure SQL.

80
00:04:34,040 --> 00:04:38,720
This is like a big dashboard for all your Azure SQL databases and managed instances

81
00:04:38,720 --> 00:04:41,000
that are running inside Azure.

82
00:04:41,000 --> 00:04:45,000
Essentially, it's just a way of monitoring everything without having to deploy any kind

83
00:04:45,000 --> 00:04:46,000
of agents whatsoever.

84
00:04:46,000 --> 00:04:47,560
The data is already there.

85
00:04:47,560 --> 00:04:52,320
We're just basically presenting it in a more concise dashboard so you can see everything

86
00:04:52,320 --> 00:04:56,560
that's going on with all your database instances.

87
00:04:56,560 --> 00:05:01,680
Next one is SQL Server Management Studio version 20 came out just recently and there's a big

88
00:05:01,680 --> 00:05:06,720
change that we made in the UI around using TLS.

89
00:05:06,720 --> 00:05:11,040
Basically the tool is now quite strict when it comes to using TLS and requires TLS.

90
00:05:11,040 --> 00:05:16,560
It also is the first version of SSMS to support TLS 1.3 because it's switched over from using

91
00:05:16,560 --> 00:05:22,680
System.Data.SqlClient over to Microsoft.Data.SqlClient, which supports TLS 1.3.

92
00:05:22,680 --> 00:05:27,720
But I can almost guarantee there will be some people who have, let's just say a couple of

93
00:05:27,720 --> 00:05:31,760
headaches in the user interface when it comes to connecting to their SQL instances.

94
00:05:31,760 --> 00:05:36,200
So a colleague of mine, Aaron, has written a blog post on basically how to navigate

95
00:05:36,200 --> 00:05:38,160
any errors that you may come across.

96
00:05:38,160 --> 00:05:41,440
They're all in the name of security and they're actually really good changes, but people just

97
00:05:41,440 --> 00:05:43,480
have to get used to it.

98
00:05:43,480 --> 00:05:47,800
Next one is in Azure, we have Public Preview of Change Actor.

99
00:05:47,800 --> 00:05:53,680
The way I look at this is it's a way of bubbling up a lot of information in the activity feed

100
00:05:53,680 --> 00:05:56,600
and adding a little bit more intelligence to the results to make it easier to find out

101
00:05:56,600 --> 00:05:58,840
who changed what, when, why, and where from.

102
00:05:58,840 --> 00:06:02,880
It just makes it a lot easier to do that kind of data spelunking as opposed to doing it

103
00:06:02,880 --> 00:06:04,200
all by yourself.

104
00:06:04,200 --> 00:06:08,880
So a big fan of anything that sort of helps people get to the bottom of why things changed.

105
00:06:08,880 --> 00:06:11,680
And the last thing is, I'm just going to touch on this really, really fast, only because

106
00:06:11,680 --> 00:06:18,640
it literally was released a few hours ago: we have announced Copilot for Azure SQL

107
00:06:18,640 --> 00:06:21,600
Database in private preview right now.

108
00:06:21,600 --> 00:06:25,240
I've played around with it for a little bit and worked on some of the security for it.

109
00:06:25,240 --> 00:06:27,320
Fantastic product. The only reason I brought this up is just because we're talking

110
00:06:27,320 --> 00:06:32,520
about Copilot today and I just think it would be a pertinent topic to discuss.

111
00:06:32,520 --> 00:06:38,720
So with the news out of the way, let's turn our attention to our guest.

112
00:06:38,720 --> 00:06:42,280
As I mentioned before, our guest this week is Ryan Munch, who's here to talk to us about

113
00:06:42,280 --> 00:06:44,080
Copilot for Security.

114
00:06:44,080 --> 00:06:45,640
Ryan, welcome to the podcast.

115
00:06:45,640 --> 00:06:49,120
We'd like you to take a moment and introduce yourself to our listeners.

116
00:06:49,120 --> 00:06:51,480
Well, first off, thanks for having me.

117
00:06:51,480 --> 00:06:53,200
Excited to be here.

118
00:06:53,200 --> 00:06:58,560
I am a principal technical specialist at Microsoft and I've been focused on Copilot for Security

119
00:06:58,560 --> 00:07:02,560
for, well, a little over a year as one of the first people to get access.

120
00:07:02,560 --> 00:07:07,100
I've been tinkering and toiling with it, but also like helping to push it along and advocate

121
00:07:07,100 --> 00:07:09,680
for the solution along the way.

122
00:07:09,680 --> 00:07:14,960
My background stems from prior cybersecurity work and threat intelligence and really internet

123
00:07:14,960 --> 00:07:17,360
telemetry and reconnaissance.

124
00:07:17,360 --> 00:07:21,680
Prior to that, I had more of a traditional background in DevOps and Exchange email management

125
00:07:21,680 --> 00:07:25,040
and more of the IT side of the house.

126
00:07:25,040 --> 00:07:30,200
But where I think it's been really great for me, it's been foundational for the goal of

127
00:07:30,200 --> 00:07:35,200
what Copilot for Security is and that is to bring together a myriad of different backgrounds

128
00:07:35,200 --> 00:07:40,000
and areas of expertise to complement what you may or may not be good at.

129
00:07:40,000 --> 00:07:44,000
So Ryan, let's start at the beginning.

130
00:07:44,000 --> 00:07:49,560
Now when we release this, of course, Copilot for Security will have just gone GA or generally

131
00:07:49,560 --> 00:07:52,760
available so everyone can go and have a go with it.

132
00:07:52,760 --> 00:07:57,280
But what is Copilot for Security?

133
00:07:57,280 --> 00:07:58,280
Why have we done it?

134
00:07:58,280 --> 00:08:03,720
Because I know some people will think we've done a lot of Copilots recently and I think

135
00:08:03,720 --> 00:08:05,560
some folks get confused.

136
00:08:05,560 --> 00:08:11,240
Yeah, I think the terrible joke I like to make here in the United States is that we're basically

137
00:08:11,240 --> 00:08:17,640
the Baskin-Robbins of AI here at Microsoft; we have 31 different flavours of Copilot.

138
00:08:17,640 --> 00:08:22,840
But where we are fundamentally different for Copilot for Security is, well, the approach

139
00:08:22,840 --> 00:08:25,640
we took in the inception of the design.

140
00:08:25,640 --> 00:08:30,880
If you look at 99%, really even probably more than that, it's like an infinitesimally small

141
00:08:30,880 --> 00:08:37,480
decimal at this point, of AI systems in the world, generative AI systems, they are designed

142
00:08:37,480 --> 00:08:43,560
really and predicated upon doing one thing, finding content and generating new content.

143
00:08:43,560 --> 00:08:45,120
And there's tons of applications for that.

144
00:08:45,120 --> 00:08:47,560
It can do a lot of incredible things.

145
00:08:47,560 --> 00:08:52,320
But what sets Copilot for Security apart is when the team went out to build this, they

146
00:08:52,320 --> 00:08:59,000
took two giant steps back, looked across the security ecosystem and asked themselves, well,

147
00:08:59,000 --> 00:09:02,800
what should we solve and what could we do with AI and security?

148
00:09:02,800 --> 00:09:04,280
And I think you all are aware of this.

149
00:09:04,280 --> 00:09:08,240
You're incredibly brilliant people and listening to you all just even talk about the news,

150
00:09:08,240 --> 00:09:11,560
like it's inspiring as well as intimidating.

151
00:09:11,560 --> 00:09:13,680
Security is a really difficult thing.

152
00:09:13,680 --> 00:09:15,000
Doesn't matter where you operate.

153
00:09:15,000 --> 00:09:20,240
You operate in silos with all these different specializations that are hard to replicate

154
00:09:20,240 --> 00:09:22,360
and bring to anyone else.

155
00:09:22,360 --> 00:09:28,820
And so, what they recognize is that due to all this fragmentation, AI has this possibility

156
00:09:28,820 --> 00:09:34,840
of spanning across all of that, collapsing the fragmentation but also upleveling all

157
00:09:34,840 --> 00:09:40,840
the abilities of anyone that uses AI in security to collectively operate with security in any

158
00:09:40,840 --> 00:09:41,840
context.

159
00:09:41,840 --> 00:09:46,460
Doesn't matter if you come from a database background or maybe even decided to join security

160
00:09:46,460 --> 00:09:47,500
from HR.

161
00:09:47,500 --> 00:09:52,840
If you need a security context, you can ask a simple question and get back a profound

162
00:09:52,840 --> 00:09:53,840
result.

163
00:09:53,840 --> 00:09:58,720
And that has been built into the core architecture and it's why it's not only so unique and

164
00:09:58,720 --> 00:10:06,000
different, but in a lot of ways, we're leading and we are set apart from what you'll see

165
00:10:06,000 --> 00:10:10,360
inside of other AI solutions and across the market in general.

166
00:10:10,360 --> 00:10:16,240
So, in the news, I mentioned that we've just released a private preview of Copilot for

167
00:10:16,240 --> 00:10:18,280
Azure SQL DB.

168
00:10:18,280 --> 00:10:21,560
So how is this different to other copilots?

169
00:10:21,560 --> 00:10:27,540
As you mentioned, the Baskin Robbins quote, we have copilots for absolutely everything.

170
00:10:27,540 --> 00:10:29,600
So how is this different from other copilots?

171
00:10:29,600 --> 00:10:33,880
Ultimately, what is the job of Copilot for Security?

172
00:10:33,880 --> 00:10:34,880
Yeah.

173
00:10:34,880 --> 00:10:38,840
That's one of my favorite questions to answer and speak to.

174
00:10:38,840 --> 00:10:43,800
And candidly, it was really hard to articulate for a long time.

175
00:10:43,800 --> 00:10:46,480
And then about a month ago, we finally got some help.

176
00:10:46,480 --> 00:10:48,180
It's actually, there's this great paper.

177
00:10:48,180 --> 00:10:53,200
It's written by the Berkeley Artificial Intelligence Research lab, or BAIR.

178
00:10:53,200 --> 00:10:55,320
It's like a digital bear if you go to the website.

179
00:10:55,320 --> 00:10:59,600
And the paper is "The Shift from Models to Compound AI Systems."

180
00:10:59,600 --> 00:11:03,720
If you look at what copilot for security does and how it's built and how we have to go out

181
00:11:03,720 --> 00:11:08,920
and solve the problem of fragmentation and complexity, you can't do that with a monolithic

182
00:11:08,920 --> 00:11:09,920
model.

183
00:11:09,920 --> 00:11:15,000
In fact, and this is what I'll say is the headlining quote from the article or the research

184
00:11:15,000 --> 00:11:21,220
paper: state-of-the-art AI results are increasingly obtained by compound systems with multiple

185
00:11:21,220 --> 00:11:25,200
components and not just monolithic models.

186
00:11:25,200 --> 00:11:30,480
So in security, if you were to go out and train an LLM, and let's say you do exactly

187
00:11:30,480 --> 00:11:35,160
what OpenAI does: use Microsoft's supercomputer, which is the fifth largest in the world, train

188
00:11:35,160 --> 00:11:39,880
it against trillions of parameters and have it spin away for months, if not longer to

189
00:11:39,880 --> 00:11:42,120
come out and arrive with a new model.

190
00:11:42,120 --> 00:11:46,000
Well, the moment you get a new vulnerability or a new system that you need to incorporate

191
00:11:46,000 --> 00:11:52,240
into a security context, that really expensive, laborious training run is now immediately out

192
00:11:52,240 --> 00:11:53,240
of date.

193
00:11:53,240 --> 00:11:56,320
So it doesn't work well in security, just like it wouldn't work if you were to apply

194
00:11:56,320 --> 00:12:02,480
a model to healthcare or any other similarly specialized and highly fragmented environment.

195
00:12:02,480 --> 00:12:06,360
So what we've done, and this is really where it separates it from the other AI systems

196
00:12:06,360 --> 00:12:12,480
across, well, other copilot systems across Microsoft is that we work with this compound

197
00:12:12,480 --> 00:12:18,480
AI system predicated upon orchestration and a plugin architecture, as well as a number

198
00:12:18,480 --> 00:12:22,680
of different grounding mechanisms to ensure that we anchor in truth.

199
00:12:22,680 --> 00:12:29,080
So for example, inside of security, you typically work with two or three sources of threat intelligence.

200
00:12:29,080 --> 00:12:33,880
We provide MDTI, Microsoft Defender Threat Intelligence for free as a grounding mechanism

201
00:12:33,880 --> 00:12:38,520
to help ensure that we anchor on a source of truth, but also because we believe fundamentally

202
00:12:38,520 --> 00:12:42,440
that threat intelligence should be baked into everything.

203
00:12:42,440 --> 00:12:46,520
As an organization, you will typically have two or three other sources so that you have

204
00:12:46,520 --> 00:12:53,200
some level of due diligence to confirm your intelligence and confirm anything that you're

205
00:12:53,200 --> 00:12:54,200
assessing.

206
00:12:54,200 --> 00:12:57,960
So in connecting to those other systems, you wouldn't train a model against them, but you

207
00:12:57,960 --> 00:13:04,480
can use a plugin that would connect into that API, understand where to get an indicator,

208
00:13:04,480 --> 00:13:09,400
provide a threat intelligence summary back, and collapse it all by using the model itself.

209
00:13:09,400 --> 00:13:13,860
So it's kind of a combination of the best of both worlds in that we can use the powers

210
00:13:13,860 --> 00:13:18,840
of generative AI, but still supercharge it with this extensible architecture that comes

211
00:13:18,840 --> 00:13:21,600
inherent to building anything in Azure.

212
00:13:21,600 --> 00:13:23,160
So I got a question.

213
00:13:23,160 --> 00:13:28,620
So say someone's got Microsoft Defender, a couple of different technologies, or maybe

214
00:13:28,620 --> 00:13:35,440
all of them: Microsoft Sentinel, Purview, Intune, etc.

215
00:13:35,440 --> 00:13:42,980
What does security, excuse me, Copilot for Security, what does that do for me?

216
00:13:42,980 --> 00:13:47,440
What does it add and what does it change that isn't already available in the existing products

217
00:13:47,440 --> 00:13:49,280
and technologies?

218
00:13:49,280 --> 00:13:50,280
Great question.

219
00:13:50,280 --> 00:13:53,040
I'm going to break this down in a few different ways.

220
00:13:53,040 --> 00:13:58,400
The first thing I like to think about is, back to the core design, what we recognized

221
00:13:58,400 --> 00:14:04,040
pretty early on about what Copilot needs to do is that it has to operate in the capacity and

222
00:14:04,040 --> 00:14:06,360
concept of workflows.

223
00:14:06,360 --> 00:14:09,760
Workflows can begin and start anywhere or exist at any state.

224
00:14:09,760 --> 00:14:13,260
No matter the case, we have to be able to interact with it and augment it.

225
00:14:13,260 --> 00:14:17,760
So when we started off with some of our very first customers, I'm talking about the first

226
00:14:17,760 --> 00:14:22,560
ten into the platform, one of the things they quickly realized is that, well, when they

227
00:14:22,560 --> 00:14:26,640
conduct incident response, that actually starts in something like ServiceNow.

228
00:14:26,640 --> 00:14:31,720
And so they went back to us and said, hey, we are Defender customers and Sentinel customers

229
00:14:31,720 --> 00:14:35,840
through and through, but we still send this over to ServiceNow to track our incidents.

230
00:14:35,840 --> 00:14:40,820
If we don't have a ServiceNow plugin, this doesn't do us a whole lot of good or doesn't

231
00:14:40,820 --> 00:14:46,320
provide us a lot of advantage in our existing ecosystem and how we tie everything

232
00:14:46,320 --> 00:14:47,600
together.

233
00:14:47,600 --> 00:14:52,400
And so one of the things that we've recognized is that even if you spend all of your day

234
00:14:52,400 --> 00:14:58,800
inside of Defender or Purview or whatever the solution is, you can go out and use whatever

235
00:14:58,800 --> 00:15:03,800
you've done inside of those systems and cross-reference it with something outside of the Microsoft

236
00:15:03,800 --> 00:15:04,800
security ecosystem.

237
00:15:04,800 --> 00:15:09,440
And that's not really the limit; you can even go outside of the security ecosystem itself.

238
00:15:09,440 --> 00:15:11,960
Maybe you go over to a different IT system.

239
00:15:11,960 --> 00:15:15,400
One of my favorite things to always bring up is a great example.

240
00:15:15,400 --> 00:15:19,680
I used to work with a bunch of ex-CISA people here at Microsoft and they would talk about

241
00:15:19,680 --> 00:15:25,280
one of the first things they would do post-breach is they would analyze who was attacked

242
00:15:25,280 --> 00:15:30,080
and then compare that against an HR system and discern similarities between them such that

243
00:15:30,080 --> 00:15:34,000
they can understand what the motive was for an attack.

244
00:15:34,000 --> 00:15:39,200
And that is something you can't really discern in Defender or Purview or otherwise, but you

245
00:15:39,200 --> 00:15:43,520
do need to go to a secondary HR system to figure that out.

246
00:15:43,520 --> 00:15:47,000
Now, so that kind of like tackles like the one part of it, which is like the general

247
00:15:47,000 --> 00:15:52,320
need of like all these different multimodal workflows that can go in any number of different

248
00:15:52,320 --> 00:15:53,580
directions.

249
00:15:53,580 --> 00:15:58,080
The other part of it is just how can we drive efficiency or how can we introduce net new

250
00:15:58,080 --> 00:16:02,320
competencies that were maybe difficult and isolated from before.

251
00:16:02,320 --> 00:16:06,720
And so when I think of that, there are two examples that immediately come to mind.

252
00:16:06,720 --> 00:16:12,520
About probably six months ago in our preview, we introduced the concept of script analysis.

253
00:16:12,520 --> 00:16:15,640
And when we talked to some customers, they didn't even do that.

254
00:16:15,640 --> 00:16:21,060
They did not try to understand how a script would execute, how code would maliciously

255
00:16:21,060 --> 00:16:23,120
go out and do something in a system.

256
00:16:23,120 --> 00:16:27,800
They would either pass that off to a partner or maybe call in a contractor for those special

257
00:16:27,800 --> 00:16:29,000
cases.

258
00:16:29,000 --> 00:16:35,000
But now out of the gate, inside of Defender, they now have an ability to analyze a script

259
00:16:35,000 --> 00:16:39,080
and have it broken down in a way that anyone can understand it.

260
00:16:39,080 --> 00:16:45,280
And so what used to be like this highly specialized, uniquely reserved skill set for the most capable

261
00:16:45,280 --> 00:16:49,700
people inside of a security organization now could be something that even the most junior

262
00:16:49,700 --> 00:16:51,660
analysts could pick up.

263
00:16:51,660 --> 00:16:55,840
And the other one that I like to point out, and I think this is also helpful, is that

264
00:16:55,840 --> 00:17:00,720
when you look at some of the more complex attacks, you have to go through and analyze

265
00:17:00,720 --> 00:17:06,880
large pieces of information or multiple different sources or alerts or otherwise.

266
00:17:06,880 --> 00:17:11,520
And there are inherent advantages by being able to collapse all that into a summary or

267
00:17:11,520 --> 00:17:16,560
into the most profound elements that would lead you in a direction to then take in more

268
00:17:16,560 --> 00:17:19,280
directed and informed action.

269
00:17:19,280 --> 00:17:23,020
And that's where we've seen impact for people that like to operate in those systems.

270
00:17:23,020 --> 00:17:27,120
And that's why we built what is called an embedded experience inside of Defender,

271
00:17:27,120 --> 00:17:30,680
Purview, Entra, and Intune.

272
00:17:30,680 --> 00:17:34,160
And there's more on the way that helps with exactly that.

273
00:17:34,160 --> 00:17:37,840
And it's one of the things that our customers love the most about where Copilot for Security

274
00:17:37,840 --> 00:17:43,960
is going and how it will be there with them no matter where their workflow operates.

275
00:17:43,960 --> 00:17:50,760
So Ryan, obviously there are many, many scenarios, because that's the point, where

276
00:17:50,760 --> 00:17:54,860
customers could use Copilot for Security.

277
00:17:54,860 --> 00:18:01,400
But could you walk us through a typical scenario that you've seen folks use it for?

278
00:18:01,400 --> 00:18:05,800
I'll build upon the script analysis portion, because that's usually critical to what is

279
00:18:05,800 --> 00:18:08,720
an incident response process.

280
00:18:08,720 --> 00:18:14,200
And what Copilot will do, or how they interact with Copilot in those cases, is it's either

281
00:18:14,200 --> 00:18:20,660
with them to kick off that initial analysis, so understanding alerts, understanding constituent

282
00:18:20,660 --> 00:18:26,360
elements of that incident or that incident response process, such as maybe a script analysis,

283
00:18:26,360 --> 00:18:33,920
a file analysis, or even looking at things like user risk or device risk attached to

284
00:18:33,920 --> 00:18:35,660
the incident itself.

285
00:18:35,660 --> 00:18:40,820
So then the natural triage process then becomes something that's informed by artificial intelligence,

286
00:18:40,820 --> 00:18:45,680
and it's something that becomes more efficient and more approachable for everyone.

287
00:18:45,680 --> 00:18:50,360
There are other scenarios we do as well, some really great ones. One thing that I really

289
00:18:55,240 --> 00:18:56,240
like to talk about is the way that it's now infusing threat intelligence into everything

289
00:18:55,240 --> 00:18:56,240
naturally.

290
00:18:56,240 --> 00:19:00,840
A lot of people, if you were to go out and ask them, how do you go and make a profile

291
00:19:00,840 --> 00:19:06,080
for a threat actor or how do you understand the impact of a threat actor in an active

292
00:19:06,080 --> 00:19:07,080
incident?

293
00:19:07,080 --> 00:19:11,200
And usually that is something that people have to learn from that moment and take it

294
00:19:11,200 --> 00:19:16,400
forward to inform the context of all the elements of their incident or all of the things that

295
00:19:16,400 --> 00:19:21,480
need to come next in that active response.

296
00:19:21,480 --> 00:19:27,360
Chances are you can do things like type Manatee Tempest into a prompt bar, and from that,

297
00:19:27,360 --> 00:19:32,900
that's all you need to then get the entire profile, recommendations, or even considerations

298
00:19:32,900 --> 00:19:39,360
of risk that you would take forward and use to respond to a breach, respond to an incident,

299
00:19:39,360 --> 00:19:47,640
and reduce any further risk or mitigate any additional actions taken by the threat actor.

300
00:19:47,640 --> 00:19:53,720
So those are some immediate stories that come to mind, and there's a lot more on the way.

301
00:19:53,720 --> 00:19:56,920
And I think some of the ones that excite me the most and what we're starting to see now

302
00:19:56,920 --> 00:20:02,560
with some features we roll out is now how we're starting to impact IT operators and

303
00:20:02,560 --> 00:20:08,000
how they are looking at things like comparing device configurations, understanding how device

304
00:20:08,000 --> 00:20:12,720
configurations then can have a security or even a threat intel context, and starting

305
00:20:12,720 --> 00:20:17,900
to fuse together what have been two different sides of the house and having them talk again

306
00:20:17,900 --> 00:20:20,160
and figure things out collectively.

307
00:20:20,160 --> 00:20:29,440
I know data security comes up a lot around AI, so tell me how Copilot for Security

308
00:20:29,440 --> 00:20:33,000
and data security kind of intersect.

309
00:20:33,000 --> 00:20:36,840
This is, you're right, it's something that usually will come up almost immediately in

310
00:20:36,840 --> 00:20:41,680
any conversation with anyone looking to bring AI into their organization.

311
00:20:41,680 --> 00:20:45,040
And there's a couple different facets that are considered.

312
00:20:45,040 --> 00:20:49,360
First and foremost, are they exposing their organization to any intellectual property

313
00:20:49,360 --> 00:20:55,320
violations, meaning based on how the AI was trained or what data it sources for responses,

314
00:20:55,320 --> 00:20:59,320
is it using something that doesn't belong to the customer or it doesn't belong to Microsoft?

315
00:20:59,320 --> 00:21:06,280
And I've even had conversations with different leaders across a number of very large entities

316
00:21:06,280 --> 00:21:10,800
that have stated or at least did state, we will not bring in AI until we can figure out

317
00:21:10,800 --> 00:21:13,960
the intellectual property component of it.

318
00:21:13,960 --> 00:21:18,600
Now the second part of what you've brought forward in terms of the question, when it

319
00:21:18,600 --> 00:21:23,520
comes to data security, how should I start to think about it in relation to AI and in

320
00:21:23,520 --> 00:21:26,560
relation to copilot for security?

321
00:21:26,560 --> 00:21:31,600
Part of it, and what I like to usually explain first, is how copilot for security operates

322
00:21:31,600 --> 00:21:33,280
a little bit differently than most AI.

323
00:21:33,280 --> 00:21:38,320
So first and foremost, we're not working from a capacity of training off of what customers

324
00:21:38,320 --> 00:21:43,920
are doing to prompt within a system, meaning we are respecting that data residency.

325
00:21:43,920 --> 00:21:49,660
We are respecting the privacy that customers should have to operate in an AI environment.

326
00:21:49,660 --> 00:21:55,280
And that's part of what I would call table stakes to have an enterprise AI solution.

327
00:21:55,280 --> 00:22:02,160
Next, when it comes to what we are using to allow copilot to operate, this is where the

328
00:22:02,160 --> 00:22:07,000
architectural decisions we made very early on for the problem we are trying to solve

329
00:22:07,000 --> 00:22:13,360
by extension put us in a place that is much different and in some ways helps even better

330
00:22:13,360 --> 00:22:19,000
establish an AI security story than what other solutions can talk about.

331
00:22:19,000 --> 00:22:22,400
So copilot is not going to do something like side load your data.

332
00:22:22,400 --> 00:22:27,040
In fact, with a lot of different AI systems, one of the things they'll do is take

333
00:22:27,040 --> 00:22:31,000
anything that they want to do with their AI solution, ask you to load your data into a

334
00:22:31,000 --> 00:22:36,080
vector database, and from that they create a series of embeddings to understand

335
00:22:36,080 --> 00:22:37,080
that data.

336
00:22:37,080 --> 00:22:40,680
So really, it's almost like back to the SQL conversation before.

337
00:22:40,680 --> 00:22:43,880
It's like just a series of references into a database.

338
00:22:43,880 --> 00:22:49,560
That is not quite the same as what copilot does: figure out the best plugin

339
00:22:49,560 --> 00:22:56,440
to respond to a user, select a skill, go out and then access that system on behalf of the

340
00:22:56,440 --> 00:23:02,080
user's authentication and permissions to that system, and then reason over what's there

341
00:23:02,080 --> 00:23:04,600
and only return the necessary results.

342
00:23:04,600 --> 00:23:09,800
So there's nothing stored from that system that is necessary to make copilot work

343
00:23:09,800 --> 00:23:10,800
at the onset.

344
00:23:10,800 --> 00:23:12,040
We're not loading all of your Sentinel data.

345
00:23:12,040 --> 00:23:15,560
We're not loading all of your Defender data or ServiceNow or otherwise.

346
00:23:15,560 --> 00:23:17,840
We're reasoning over it in place.
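
The reason-over-in-place pattern described here can be sketched roughly as follows. This is an illustrative sketch only, not the actual Copilot for Security implementation: the names (`Plugin`, `orchestrate`) are hypothetical, and a real orchestrator selects skills with an LLM rather than the keyword match used here.

```python
from dataclasses import dataclass, field

@dataclass
class Plugin:
    """A connector to an external system (Sentinel, Defender, ServiceNow, ...)."""
    name: str
    # skill name -> callable(user_token) that queries the source system live
    skills: dict = field(default_factory=dict)

def orchestrate(prompt: str, plugins: list, user_token: str) -> str:
    """Pick the best-matching skill and reason over the data in place.

    Nothing is sideloaded into a vector store: each skill calls the source
    system with the user's own credentials, so results can never exceed
    that user's existing permissions, and only necessary results return.
    """
    for plugin in plugins:
        for skill_name, skill in plugin.skills.items():
            if skill_name.lower() in prompt.lower():
                return skill(user_token)  # runs as the user, not as the service
    return "No matching skill found."
```

For example, a hypothetical `Plugin("Sentinel", {"summarize incident": ...})` would be invoked by the prompt "Please summarize incident 42", with the skill's callable doing the live query.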

347
00:23:17,840 --> 00:23:21,640
Now there are things we do to introduce knowledge bases, and I think that probably gets into

348
00:23:21,640 --> 00:23:29,440
the next part of the conversation when it comes to AI, and that is with a system that

349
00:23:29,440 --> 00:23:36,560
introduces an ability for anyone to ask simple questions and get profound results or information

350
00:23:36,560 --> 00:23:42,560
back, that starts to naturally expose more things than they probably had thought to access

351
00:23:42,560 --> 00:23:47,640
or even tried to access before because the limitations of, let's say, maybe a query language

352
00:23:47,640 --> 00:23:52,760
or whatever interface prevented them from getting to that data, despite the fact they

353
00:23:52,760 --> 00:23:56,240
maybe should have never had access to it.

354
00:23:56,240 --> 00:24:00,360
When it comes to data security and what you should think about on your

355
00:24:00,360 --> 00:24:06,640
organizational AI journey, part of that has to incorporate looking at user permissions,

356
00:24:06,640 --> 00:24:11,960
seeing what they do have access to, where that data is exposed, and if they do access

357
00:24:11,960 --> 00:24:14,840
it, what would the consequences be?

358
00:24:14,840 --> 00:24:19,500
How much of a consideration it becomes depends on the AI systems you have.

359
00:24:19,500 --> 00:24:21,400
Does it train off of your data?

360
00:24:21,400 --> 00:24:23,840
Does it sideload your data?

361
00:24:23,840 --> 00:24:27,760
What data is necessary to make those AI systems function?

362
00:24:27,760 --> 00:24:32,240
Those would all bring about a certain degree of stringency needed to then

363
00:24:32,240 --> 00:24:37,520
understand what controls and what protections you should have in place.

364
00:24:37,520 --> 00:24:42,320
Generally speaking, where we are with Copilot, what I recommend to all customers is: figure

365
00:24:42,320 --> 00:24:47,760
out your data story first, figure out your user permissions, and we'll respect that.

366
00:24:47,760 --> 00:24:51,880
That's part of the system design to reinforce what you should already have, which we'll call

367
00:24:51,880 --> 00:24:56,620
it sound data security principles.

368
00:24:56,620 --> 00:25:03,280
One thing that took my interest was the extensibility aspects of this.

369
00:25:03,280 --> 00:25:04,960
You mentioned the word plugins before.

370
00:25:04,960 --> 00:25:10,440
What's the story there from a developer perspective?

371
00:25:10,440 --> 00:25:14,440
Customers can get really excited about what we have coming and what is available for Copilot

372
00:25:14,440 --> 00:25:16,600
for security.

373
00:25:16,600 --> 00:25:22,600
We have carried forward architecture from OpenAI, where they introduced what are called OpenAI

374
00:25:22,600 --> 00:25:23,800
plugins.

375
00:25:23,800 --> 00:25:25,720
Each of those plugins declares a manifest.

376
00:25:25,720 --> 00:25:31,120
The manifest becomes a mechanism to connect to a system, a database, or in some cases,

377
00:25:31,120 --> 00:25:34,040
even just redefine how data is classified.

378
00:25:34,040 --> 00:25:36,800
In Copilot, we support three different types of plugins.

379
00:25:36,800 --> 00:25:41,600
The most common one I expect customers to use will be just an API plugin.

380
00:25:41,600 --> 00:25:45,400
We've created a new standard with Copilot for Security that creates a manifest that

381
00:25:45,400 --> 00:25:48,220
allows you to do more beyond what you could with OpenAI.

382
00:25:48,220 --> 00:25:51,940
You can do things like use skills to invoke sub-skills.

383
00:25:51,940 --> 00:25:55,280
A skill is a mechanism to understand something out of a system.

384
00:25:55,280 --> 00:26:00,100
For example, one of the skills we have for Defender is summarize an incident.

385
00:26:00,100 --> 00:26:06,400
Other skills we have for ServiceNow under that plugin are find incidents, summarize

386
00:26:06,400 --> 00:26:14,800
incidents, and write a summary of a workflow back to a ServiceNow incident as a comment.

387
00:26:14,800 --> 00:26:17,920
Those are all different things that can be invoked, and they all reflect different things

388
00:26:17,920 --> 00:26:20,960
you could build with an AI plugin.

389
00:26:20,960 --> 00:26:26,160
What makes ours unique is that we have the possibility of using a skill to invoke a secondary

390
00:26:26,160 --> 00:26:27,220
skill.

391
00:26:27,220 --> 00:26:33,220
We also have the possibility of adding descriptions to those skills and providing some feedback

392
00:26:33,220 --> 00:26:37,320
to the users to allow them to put in different parameters against them.

393
00:26:37,320 --> 00:26:42,400
What this does, at the core of our AI orchestration engine, is allow us to be more effective

394
00:26:42,400 --> 00:26:47,880
with how we select what to respond with and how to respond to a user to best enable them

395
00:26:47,880 --> 00:26:52,840
to be successful in their prompting experience.
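
As a rough illustration of the skills and sub-skills idea, a plugin manifest might look something like the sketch below. This is a hypothetical structure for illustration only: it is not the actual Copilot for Security manifest schema, and every field name here is invented.

```python
# Hypothetical manifest sketch (NOT the real Copilot for Security schema).
# A plugin declares skills; each skill carries a description the orchestrator
# can reason over, parameters users can supply, and optional sub-skill links.
manifest = {
    "name": "ServiceNowPlugin",
    "skills": [
        {
            "name": "FindIncidents",
            "description": "Search ServiceNow for incidents matching a filter.",
            "parameters": {"filter": "string"},
        },
        {
            "name": "SummarizeWorkflow",
            "description": "Summarize a workflow and post it back as a comment.",
            "parameters": {"incident_id": "string"},
            # A skill can invoke a secondary skill as part of its execution.
            "invokes": ["FindIncidents"],
        },
    ],
}
```

The descriptions and parameter declarations are what would let an orchestrator decide which skill best answers a given prompt.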

396
00:26:52,840 --> 00:26:57,600
Beyond the API plugins, we then have KQL plugins.

397
00:26:57,600 --> 00:27:00,620
These are really great for all of the reasons you'd expect.

398
00:27:00,620 --> 00:27:05,840
Most customers that work with Microsoft products, they have libraries and tons of different

399
00:27:05,840 --> 00:27:07,440
KQL queries.

400
00:27:07,440 --> 00:27:12,620
You can take those, build them as a plugin, and then allow them to be something that

401
00:27:12,620 --> 00:27:18,540
copilot and the orchestrator can use as a mechanism to respond to a user prompt.

402
00:27:18,540 --> 00:27:22,240
The third type, and this one's novel, something net new in the age of AI, is what

403
00:27:22,240 --> 00:27:25,120
we call our GPT plugins.

404
00:27:25,120 --> 00:27:30,020
GPT plugins are ways that we define and label data.

405
00:27:30,020 --> 00:27:35,520
For example, the one that I talk about to customers first is just the concept of defanging

406
00:27:35,520 --> 00:27:41,160
URLs, rendering them inert so that no one can accidentally click one and go

407
00:27:41,160 --> 00:27:44,840
to some website that's going to do all kinds of bad things, maybe just even cause you to

408
00:27:44,840 --> 00:27:51,080
invest in a foreign government.

409
00:27:51,080 --> 00:27:56,080
The process of defanging a URL takes the URL and renders it inert by adding in all these

410
00:27:56,080 --> 00:27:58,040
extra characters.

411
00:27:58,040 --> 00:28:00,160
That's not a definition that a large language model would have.

412
00:28:00,160 --> 00:28:05,800
It's not a definition that copilot for security has, but we can define that in text in the

413
00:28:05,800 --> 00:28:07,960
GPT plugin manifest.

414
00:28:07,960 --> 00:28:12,600
By providing that definition, then copilot for security knows that any time you say something

415
00:28:12,600 --> 00:28:18,720
like, I need to defang this URL or this indicator, I need to render it inert, or I need to make

416
00:28:18,720 --> 00:28:21,320
sure that it's safe so that no one clicks on it.

417
00:28:21,320 --> 00:28:25,660
Well, through that GPT skill, it then has that understanding and can provide that as

418
00:28:25,660 --> 00:28:29,480
a prompting mechanism for anyone using copilot.
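
Defanging itself is simple string surgery. A minimal sketch in Python (the exact substitutions vary by convention, but `hxxp` schemes and bracketed dots are common in threat-intel practice):

```python
def defang(url: str) -> str:
    """Render a URL inert so it can't be clicked or auto-linked.

    Breaks the scheme (http -> hxxp) and wraps dots in brackets,
    following a common threat-intel convention.
    """
    return (url.replace("https://", "hxxps://")
               .replace("http://", "hxxp://")
               .replace(".", "[.]"))
```

For example, `defang("https://evil.example.com")` returns `"hxxps://evil[.]example[.]com"`.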

419
00:28:29,480 --> 00:28:36,680
We've talked about how we handle data, but of course, customers and people using copilot

420
00:28:36,680 --> 00:28:44,160
for security are going to want to bring their own data in so it can give them some insights,

421
00:28:44,160 --> 00:28:47,680
so how does that actually work then?

422
00:28:47,680 --> 00:28:49,160
Is it the plugins?

423
00:28:49,160 --> 00:28:51,160
Do you upload things?

424
00:28:51,160 --> 00:28:54,120
Well, you're absolutely spot on.

425
00:28:54,120 --> 00:28:59,040
How we think about customer data or even working with data belonging to customers is through

426
00:28:59,040 --> 00:29:01,400
the concept of what we call sources.

427
00:29:01,400 --> 00:29:02,840
You talked about plugins.

428
00:29:02,840 --> 00:29:04,280
That is absolutely a source.

429
00:29:04,280 --> 00:29:09,120
They can make a plugin connected to some data store or a specific system they have in their

430
00:29:09,120 --> 00:29:13,600
organization and then that becomes something that copilot can use in reference.

431
00:29:13,600 --> 00:29:17,740
The other concept we have of a source is what we call a knowledge base.

432
00:29:17,740 --> 00:29:21,740
In a knowledge base, we just released two different pieces of functionality.

433
00:29:21,740 --> 00:29:23,520
We have a third on the way.

434
00:29:23,520 --> 00:29:29,720
The first two entail what is called file upload, which is exactly what it sounds like.

435
00:29:29,720 --> 00:29:35,560
Upload a document into copilot for security and then it can use that in reference to responding

436
00:29:35,560 --> 00:29:38,640
to you in a prompt response process.

437
00:29:38,640 --> 00:29:42,560
Before I get into the other one, it probably makes sense for me to explain what those documents

438
00:29:42,560 --> 00:29:44,520
are useful for.

439
00:29:44,520 --> 00:29:51,600
In any type of security situation or even any type of IT situation, you'll have standard

440
00:29:51,600 --> 00:29:53,840
practices and procedures.

441
00:29:53,840 --> 00:29:58,480
Maybe you have a standard template you use when you write an incident report.

442
00:29:58,480 --> 00:30:04,480
Maybe you have a template to use when you issue a takedown request against a site that's

443
00:30:04,480 --> 00:30:08,660
impersonating your organization.

444
00:30:08,660 --> 00:30:14,520
Any number of different files could represent something specific to your organization that

445
00:30:14,520 --> 00:30:20,900
is pertinent to a workflow, aligning it with what your company would expect.

446
00:30:20,900 --> 00:30:22,760
That's what knowledge bases achieve.

447
00:30:22,760 --> 00:30:26,320
File uploads is one mechanism which we provide that.

448
00:30:26,320 --> 00:30:30,240
The second is Azure AI Search.

449
00:30:30,240 --> 00:30:33,960
Through that, what we do is we create an index and we can put all of your files that you

450
00:30:33,960 --> 00:30:39,360
would like to have operate in the context of copilot for security; they then become searchable

451
00:30:39,360 --> 00:30:44,400
within that semantic index that copilot can use to infuse that context inside of your

452
00:30:44,400 --> 00:30:47,200
workflow inside of a session.
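
The indexing idea can be sketched as below. Note this is an illustrative stand-in: the real service builds a semantic, embedding-based index, whereas this sketch ranks documents by simple word overlap, and the function names are invented.

```python
def build_index(docs: dict) -> dict:
    """docs: {name: text}.  A real semantic index stores vector embeddings;
    here each document is reduced to its set of lowercase words instead."""
    return {name: set(text.lower().split()) for name, text in docs.items()}

def retrieve(index: dict, query: str, top_k: int = 1) -> list:
    """Return the top_k document names most relevant to the query, whose
    content can then be infused into the prompt's context in a session."""
    terms = set(query.lower().split())
    return sorted(index, key=lambda name: len(index[name] & terms),
                  reverse=True)[:top_k]
```

With a report template and a takedown template indexed, a prompt like "write an incident report" would retrieve the report template so its format can shape the response.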

453
00:30:47,200 --> 00:30:51,480
That is helpful in and of itself for things like what we talked about: writing

454
00:30:51,480 --> 00:30:56,440
a report but making sure you write that report in the format your company expects or taking

455
00:30:56,440 --> 00:31:02,520
all of the IOCs such as registrar information from a malicious domain and putting that in

456
00:31:02,520 --> 00:31:07,240
the email template you would use and then exporting that and having that email ready

457
00:31:07,240 --> 00:31:08,280
to go.

458
00:31:08,280 --> 00:31:12,400
It drives a lot of efficiency and aligns copilot with your organization.

459
00:31:12,400 --> 00:31:16,680
The third, and this is a little bit of future functionality: on

460
00:31:16,680 --> 00:31:21,160
the horizon we will eventually introduce the concept of documentation sources.

461
00:31:21,160 --> 00:31:27,400
For example, MS Learn could be a documentation source where if we need to know how do I go

462
00:31:27,400 --> 00:31:31,280
out and configure Microsoft Sentinel, well, we could then have a location in which we

463
00:31:31,280 --> 00:31:37,040
pull in information from Microsoft Learn and you can start to pair that with the individual

464
00:31:37,040 --> 00:31:42,600
prompt responses and get a more informed or specific tailoring of information against

465
00:31:42,600 --> 00:31:44,640
those setup policies and procedures.

466
00:31:44,640 --> 00:31:50,920
Ryan, so copilot for security has just gone GA, so now everyone can go out and be let

467
00:31:50,920 --> 00:31:55,440
loose with it. But what would be a good way for people to get started? Because of course

468
00:31:55,440 --> 00:31:59,000
there's so many things you could do with this.

469
00:31:59,000 --> 00:32:02,360
What's a nice baby-steps way into using the product?

470
00:32:02,360 --> 00:32:03,960
Yeah, great question.

471
00:32:03,960 --> 00:32:08,880
Where I would start to align people is first, we've talked about copilot a lot today and

472
00:32:08,880 --> 00:32:12,920
that's piqued your interest and the next thing I'd encourage you to do is go out and look

473
00:32:12,920 --> 00:32:18,400
at some of the videos and publications we have from webinars and learning series and

474
00:32:18,400 --> 00:32:23,000
kind of get that next-level understanding of the functionality that we provide today

475
00:32:23,000 --> 00:32:27,640
in copilot because there is a core functionality that will be there out of the box and then

476
00:32:27,640 --> 00:32:31,760
there of course is what will be down the road and what we'll add and then the final element

477
00:32:31,760 --> 00:32:35,600
is what we talked about the custom plugins of how you can extend it yourself because

478
00:32:35,600 --> 00:32:39,960
at the end of the day it's all about using copilot for security to align against your

479
00:32:39,960 --> 00:32:40,960
workflow.

480
00:32:40,960 --> 00:32:46,480
So, once you get a good understanding of that, if you go to Microsoft Learn there is a series

481
00:32:46,480 --> 00:32:52,000
of documents, an entire documentation section, that will take you through the steps of spinning

482
00:32:52,000 --> 00:32:58,160
up your own copilot for security instance, getting users into it and starting to connect

483
00:32:58,160 --> 00:33:03,320
it to all of your different sources to give you the best copilot prompting experience

484
00:33:03,320 --> 00:33:08,320
and the great thing about this is it is incredibly approachable to get this

485
00:33:08,320 --> 00:33:11,320
moving and get prompting in the same day.

486
00:33:11,320 --> 00:33:15,880
All of the customers we've onboarded to this point are prompting within

487
00:33:15,880 --> 00:33:18,600
the same day of activating copilot for security.

488
00:33:18,600 --> 00:33:21,400
It's probably time to start bringing this thing to a close.

489
00:33:21,400 --> 00:33:27,360
Ryan, so one question we always ask our guests is if you had just one small final thought

490
00:33:27,360 --> 00:33:30,280
to leave our listeners with, what would it be?

491
00:33:30,280 --> 00:33:37,160
The final thing I'll leave with our listeners is to think about how they've experienced

492
00:33:37,160 --> 00:33:41,320
working with computers today and how they need to start to think about working with

493
00:33:41,320 --> 00:33:43,240
computers in the future.

494
00:33:43,240 --> 00:33:47,160
Traditionally, if you've worked with a computer, you've maybe written a script, you have a

495
00:33:47,160 --> 00:33:51,360
discrete input, you get a discrete output, but now working with computers is going to

496
00:33:51,360 --> 00:33:56,960
become a conversation where you can ask anything and receive any set of information back.

497
00:33:56,960 --> 00:34:02,520
In a lot of ways, you will want to trust it and look at what is presented to you, but

498
00:34:02,520 --> 00:34:06,800
as we've learned from our long and extensive history in security, there should always be

499
00:34:06,800 --> 00:34:10,080
an element of trust but verify.

500
00:34:10,080 --> 00:34:14,160
As you start to work with AI systems, what I would challenge you to think about and consider

501
00:34:14,160 --> 00:34:18,360
is how are you seeing the AI system working?

502
00:34:18,360 --> 00:34:23,640
How do you know what information is being sourced and cited?

503
00:34:23,640 --> 00:34:28,680
Finally, how are you putting that into action in a responsible way?

504
00:34:28,680 --> 00:34:33,320
Just like any conversation such as we are having today, at any point in time, you have

505
00:34:33,320 --> 00:34:37,320
the option to say, Ryan, you know what, that doesn't sound right, I'm going to call you

506
00:34:37,320 --> 00:34:42,000
on that or I've had enough of you, Ryan, and I'm done with this conversation.

507
00:34:42,000 --> 00:34:46,280
You should start to think of treating AI systems in the same way where it continues to be an

508
00:34:46,280 --> 00:34:51,320
extension of trust and you should always ensure that the AI is meeting your trust throughout

509
00:34:51,320 --> 00:34:54,200
the entirety of the conversation.

510
00:34:54,200 --> 00:34:57,840
Just so everyone knows, we will have links to everything that Ryan just mentioned in

511
00:34:57,840 --> 00:34:58,840
the show notes.

512
00:34:58,840 --> 00:35:02,200
Again, Ryan, thank you so much for joining us this week.

513
00:35:02,200 --> 00:35:03,920
This is a really exciting product.

514
00:35:03,920 --> 00:35:07,360
I think it's great to see and I think we'll learn a heck of a lot more as people start

515
00:35:07,360 --> 00:35:12,560
to use it more about the capabilities that this kind of AI brings to the table.

516
00:35:12,560 --> 00:35:15,520
Again, thank you so much for joining us this week.

517
00:35:15,520 --> 00:35:18,120
To all our listeners out there, we hope you found this useful.

518
00:35:18,120 --> 00:35:20,440
Go ahead and kick the tires on Copilot for Security.

519
00:35:20,440 --> 00:35:23,880
While you're doing that, stay safe and we'll see you next time.

520
00:35:23,880 --> 00:35:27,040
Thanks for listening to the Azure Security Podcast.

521
00:35:27,040 --> 00:35:33,840
You can find show notes and other resources at our website azsecuritypodcast.net.

522
00:35:33,840 --> 00:35:38,720
If you have any questions, please find us on Twitter at AzureSecPod.

523
00:35:38,720 --> 00:36:02,600
Background music is from ccmixter.com and licensed under the Creative Commons license.

