Clarke Rodgers (00:10):
When you have those private meetings with customer CEOs, effectively your peers, what are they asking you? What are they talking to you about in terms of security, privacy, compliance, and the regulatory regime we see out there? Can you give us a little peek into those conversations?
Adam Selipsky (00:28):
Those are really important conversations, which a lot of CEOs really do care about, and these topics resonate with many of them, as they should. I'd point to a few things, one of which is that generative AI, of course, is on everybody's mind. We get a lot of questions around "How do I think about security in a generative AI world?" and "Things are moving so quickly" and "What types of applications or technologies should I be using?" and "How do I know they're secure, and how do I think about being secure inside of my company as well?" And the first part of the answer is,
"You should expect from generative AI exactly the same level of security that you expect from any other service that you consume."
Somehow there's been this schism where people talk about enterprise security for all these services over here, and then, "Oh, now let's talk about generative AI." It was actually quite astounding to me how some of the first generative AI chatbots, or consumer-grade assistants, came out with essentially no security model. The data literally went out over the internet, and any improvements to the model would be shared by everybody using it. That's why so many CIOs, CISOs, and CEOs banned some of these assistants from their companies for a good amount of time.
► Listen to the podcast: Data Trust: The Most Essential Ingredient for AI Innovation
But it kind of amazes me because I think about going to a security-minded CEO or a CIO or a CISO and saying, "Hey, I've got this amazing new database service. There's nothing like it. You're going to love it. I really think you should adopt it. By the way, it's got no security model attached to it, but don't worry about it because I'll come around with v2 and it'll be secure then." I mean, I'd get thrown out on my you-know-what!
Clarke Rodgers (02:20):
Sure.
Adam Selipsky (02:21):
At least I hope I would; I would deserve to. And so I think other companies in this space, for reasons I can't explain, have taken a different approach to security and somehow deemed it less important. We're very predictable here: our generative AI services, like Amazon Bedrock, a managed service for operating foundation models, are no more and no less secure than any other AWS service.
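(For readers who want to see what "the same security model" means in practice, here is a minimal sketch of invoking a foundation model through Amazon Bedrock with the AWS SDK for Python; the region, model ID, and prompt are illustrative assumptions, not details from the interview. The point is that the request is signed with the caller's AWS credentials and authorized by IAM, exactly like a call to any other AWS service.)

```python
# Minimal illustrative sketch: calling a foundation model through Amazon
# Bedrock with boto3. The request is signed with standard AWS credentials
# and authorized by IAM, the same security model as any other AWS API call.
# The region, model ID, and prompt below are assumptions for the example.
import json

import boto3

# Same credential chain and SigV4 request signing as any other AWS client.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [
            {"role": "user",
             "content": "Summarize our data-handling policy in two sentences."}
        ],
    }),
)

# The response body is a stream; decode the JSON payload and print the text.
print(json.loads(response["body"].read())["content"][0]["text"])
```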
So that's the first conversation around generative AI. Then there are other topics as well, such as "How do I get a security mindset into my company?" I think that gets back to culture. It gets back to some of the things you and I discussed today around top-down leadership and senior leaders sending signals that this matters, and that the bar, the standards, are incredibly high. I often counsel my peers that a lot of it is about insisting on the highest standards. People need to see how high the standards are in security, and that you have no tolerance for anything except those highest standards.