Amazon.com, Inc. (AMZN) Goldman Sachs Communacopia + Technology Conference (Transcript)


Amazon.com, Inc. (NASDAQ: AMZN) Goldman Sachs Communacopia + Technology Conference

September 9, 2024 4:05 PM ET

Company Participants

Matt Garman - CEO of Amazon Web Services

Conference Call Participants

Eric Sheridan - Goldman Sachs

Eric Sheridan All right.

So all right, I think we're going to get going in the interest of time. I know people are still moving around, but it's my pleasure to introduce Matt Garman, the CEO of AWS, representing AWS as part of amazon.com today.



First, I'm going to read the safe harbor, and then Matt and I are going to get into a conversation. During the conversation today, Matt will make forward-looking statements in addressing the questions, and factors that could cause actual results to differ materially are described in AWS' periodic SEC filings, which are available, including on their website, www.aws.com.

So Matt, thanks for being part of the conference this year. Matt Garman Thanks for having me.

Eric Sheridan Okay. So why don't we start with you sharing your background and the journey you've been on to the point where you became the CEO of AWS in the not too distant past. I think it's sort of an interesting story and journey to tell.

Matt Garman Sure. Yes. So it's been 3 months now since I took over the CEO job, but I've been at AWS for 18 years.

And actually, my first interaction with Amazon was when I was in business school. In 2005, I did my summer internship for Andy Jassy, and he was doing an internal startup inside of Amazon, which was AWS. And so that was my intern project.

And then I came back as the first product manager for AWS. And then that's what I've been doing for the last 18 years. And I started out leading engineering teams, built many of the core services and helped on things like EC2 and networking and compute and storage and a bunch of our core kind of AWS building blocks.

Then 4 years ago, I switched roles and actually went away from product and engineering and led sales and marketing globally, and then took over the job about 3 months ago. Question-and-Answer Session Q - Eric Sheridan Okay. So I think one of the biggest investor debates we have all the time is where we are in terms of broad cloud computing adoption.

So can you level-set your worldview on that and how you see AWS' position in the broader cloud computing landscape? Matt Garman Yes. Look, we're about 18 years into the cloud now, and so it's a pretty well-established technology. And yet, the vast, vast, vast majority of workloads have yet to move to the cloud.

And so I think you all probably have lots of estimates as to how many workloads are still on-premises or have moved to the cloud, and you probably spend more time thinking about that than I do. But I think you're hard-pressed to find somebody who thinks more than 10% to 20% of the workloads out there have moved. And so what that means is there is a massive set of workloads that are still running in on-prem data centers.

And you all at Goldman are excellent AWS customers of ours, thank you. And you still run lots of data centers yourselves, and there's still lots of stuff to go. And so that is just the nature of the industry that we're in.

There are a lot of these workloads, whether it's because they're running on mainframes, or because you have assets that haven't been fully amortized, or you just haven't fully moved things, or it's a technology -- think about telco infrastructure that's out there, RAN sites and things like that -- that hasn't yet been cloud-enabled and is still kind of traditional infrastructure, at least for the most part. And so the vast majority of workloads haven't moved yet, and we're still at the very early stages of that. We're spending a lot of time helping customers because, that said, for most customers, if you could really give them an easy button -- one of those buttons you just push and it happens magically --

Most people would move those workloads in a heartbeat. And so we're really helping customers understand how they can move more quickly, how they can get their workloads into the cloud, because the agility you gain, the ability to adopt new technologies much more quickly and take advantage of all the new technologies out there, is so much easier when you're running in the cloud than if you're having to buy your own gear and run it in your own data centers. Because it turns out if you buy a server and you stick it in your data center, you're on that server for the next 5 years.

You have no flexibility to take advantage of new technologies or new capabilities, and particularly in the world of -- most of that is operating in the cloud today. And so much of that is also pushing people to move to the cloud more quickly. Eric Sheridan Okay.

Before we get to AI, maybe let's build on that last answer. In your view, what are the key differentiators that allow AWS to keep winning new customers and growing revenue share with existing customers when you look at the landscape right now? So sort of the differentiation point? Matt Garman Yes.

Look, I view it as -- it's how we approach our customers out there, and it is really no different from when we first started the business 18 years ago. And when we think about it, we think our differentiation is a couple of things. Number one is we listen to our customers and we build what our customers ask us for.

And when you talk to almost any customer of any type, whether it's a startup, whether it's a large enterprise, whether it's a government customer, across the board, the most important things that they are looking for are outstanding operational excellence and world-class security, and then a partner who's going to be very focused on them and help them get through problems. And then folks say, "Great. If you have that baseline, and those are the most important things that you focus on, and I can trust my business to you, then I'm interested in how you're helping me innovate more rapidly, how you're building new technologies and how you're really leaning forward."

And so that's how we approach customers today. And it's -- whether you're a small customer or the very largest customer, we say security is first. It's not bolted on after the fact.

It's not, "Because I've had a bunch of security issues, now I guess I have to focus on it." Security and operational excellence -- those are, from the very first days that we started AWS, how we focus.

And then we just focus on customers, and we listen to our customers. We really listen to where the problems are, where the technologies and things are that are not working today. Where are the pain points you have out there today? And how can we help go innovate to help you move faster and help every single one of our customers just focus on the things that make them interesting and unique, as opposed to what we call undifferentiated heavy lifting, which is pieces of the technology stack that really don't differentiate your company, as opposed to the IP and things you build on top of it. And that's been how we approach customers from day 1, and I think that it often resonates with customers, and they love that that's how we go win their business -- not because we have onerous licensing terms, and not because they feel like they have to use us, but because we're the best solution to help them move their businesses forward.

And that is how we've grown the business today. It's why we see the business accelerating from where we are. Even though we're already at a $105 billion run rate, we still see the growth accelerating and are quite bullish about where the future lies. Eric Sheridan So just maybe 1 follow-up there.

Can you isolate any products and services that you're prioritizing that you believe are driving potential positive customer outcomes or driving innovation and adoption across AWS right now? Matt Garman Yes. I think if you look even at the base layers, where we think about compute and storage and databases, AWS has been innovating for the last decade at a level that others haven't. And so if you think back 10 years ago, we went on a path to start innovating on our own custom silicon.

We started on a path to innovate at the very base layers of hypervisors and virtualization and networks and data centers and power infrastructure and supply chain. And across the board, these are not necessarily glamorous things, but they're very differentiating. It means that we have a very differentiated security posture from anyone else's.

It means we have a differentiated cost structure from anyone else's. It means that we have custom-made processors where we can deliver outsized performance and better price-performance gains than anyone else. And then we think about how do we continue to build on top of that.

And I think for a long time, many of the other competitors were much more focused on how they protect their legacy business as opposed to innovating. And so when you think about the database world, as an example, we leaned into open source from the very beginning because we didn't want people locked into our products with proprietary licensing. We wanted people -- we wanted to have a scalable, well-run, excellent operating database for customers to be able to use.

And so we were free to innovate on a number of different levels, whether it's a NoSQL database, whether it's a cloud purpose-built database like Aurora. And that's true if you go across the board, across our sets of products, we really lean into how do we build great products for our customers. And so that mentality has allowed us to differentiate ourselves.

And if you look across the board, we have the absolute best compute layer with Graviton, with Trainium, with Intel, with NVIDIA, with AMD -- all fantastic partners -- because we really focus on how that compute stays available and how it has great performance characteristics, and that is differentiating versus everyone else. You look at the network layer, it's true there. If you look at the storage layer, S3 was the very first service that we launched, and we have continued to invest heavily to improve performance, reduce costs and continue to scale out with the world.

And that's true across almost every single product that you look at, whether it's analytics, whether it's monitoring, whether it's compute, storage, et cetera, and then, of course, AI services -- and I'm not trying to jump ahead to your question. But it's all across the board. We relentlessly focus on innovating for customers and listening to what they need.

And so when customers tell us they have a new problem where they're not seeing their needs met out there in the market, we listen. And I think a lot of companies will tell you that they listen to their customers, and then they don't, or they don't actually internalize that. And I don't know that it's really a secret of Amazon's, because I think we're quite open about doing it, but it's actually quite hard to do in practice, and it's one of the things that I think we do quite well that really differentiates us.

And that differentiation, I think, is at that core level. Because anyone can point to point-in-time features that are different from anyone else's, but it really is that core underlying sense of listening to our customers and continuous innovation, built on top of that layer of security and operational excellence, that really makes a difference. And that's why enterprises often will stay with us.

They may even try other clouds, and they'll often come back, and they'll continue to grow. And I think that's what we've built the business on and where we continue to see success. Eric Sheridan Okay.

Really clear. I do want to turn to generative AI. I think probably the biggest debate at the conference this year and recently with investors is where generative AI is going over the longer term.

So can you lay out your vision for how generative AI capabilities will be adopted and utilized by customers across the infrastructure, model and application layers, and how you think about the market opportunities around those different layers of computing? Matt Garman Yes. Look, I think you heard Lisa talking about it a little bit right before. I am incredibly excited about this technology.

It is a technology that over time is going to completely change almost every single industry that all of us focus on and think about and work on every single day to some level. And I really think that. And it's every single industry.

And it's not just -- I think, in some ways, the early splashes of generative AI, like a cool chatbot that can write you a haiku, miss the actual value that you're going to get. And early on, a lot of the value that companies are getting is efficiency gains, which are fantastic, but they're early, right? There are things like -- we have a product called Connect, which is a cloud call center, and it's by far the most popular contact center out there in the cloud. And having AI throughout that makes customers much, much more efficient, helps them lower cost.

It means they don't have to have as many agents. They can help their customers more rapidly, fantastic. But I think that is scratching the surface of where the real value is going to be over time.

And as I talk to customers out there, as they get deeper and deeper into thinking about the core of their business -- and this turns out to be very industry-specific -- you're really unlocking capabilities that I think have never been possible before. And that's a hard thing to really get their heads around, things that were never possible before. But you talk to a pharmaceutical company that's using AI to actually invent new proteins and discover new proteins and new molecules that may be able to help cure cancer or cure other diseases and things like that.

And at a rate that's tens of thousands or hundreds of thousands of times more than a person sitting there with a computer trying to guess what the next protein could look like to solve a particular disease. That is just a fundamentally different capability than ever existed before. And it has massive implications for health care.

But you kind of go on down the list. You can think about financial markets that are using generative AI to do fraud detection, and NASDAQ is doing a bunch of this, where they look at some of their market analysis and use AI models to find fraud that they weren't able to detect just a year or 2 ago. And so that has fundamental improvements in how they're able to run their business. You think about -- here's a good example we launched recently: Central Japan Railway is launching a new bullet train, okay? It's going to go upwards of 300 miles an hour, so twice as fast as the current-generation bullet train, which is pretty unbelievable.

It's a little scary when you see trains moving that fast. And so what they do, though, is those trains -- both the rails, the electronics and the actual cars of the trains -- have a ton of sensors, and they ingest all of that sensor data through IoT into AWS. And then using SageMaker, they built AI models to predict where they're going to have maintenance issues -- they can detect little, small changes in how things are operating.

They can actually proactively predict weeks in advance where they might see components fail. And then using generative AI, they actually pull from a bunch of different data sources and give the technician advice as to how they can go address that. And the person can go out there and quickly address any sort of issues proactively so that the trains keep running.

So something as traditional as a train, albeit a really fast, cool bullet train, can be completely redone and made possible by some of the generative AI technologies. And again, I think we're just scratching the surface. We could probably stay here for the next hour and talk through really cool use cases, some of which are possible today, some of which are hinted at today and require the technology to continue to advance, but that's where it's going.
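To make that sensor-to-prediction pipeline a little more concrete, here is a minimal sketch of the scoring step -- calling a SageMaker inference endpoint with boto3 once sensor data has been ingested. The endpoint name and payload fields are hypothetical placeholders, not details the railway disclosed.

    import json
    import boto3

    # Hypothetical endpoint and payload fields, for illustration only.
    runtime = boto3.client("sagemaker-runtime", region_name="ap-northeast-1")

    sensor_reading = {"car_id": "N700S-12", "axle_vibration_hz": 41.7, "bearing_temp_c": 68.2}

    response = runtime.invoke_endpoint(
        EndpointName="rail-maintenance-predictor",  # assumed endpoint name
        ContentType="application/json",
        Body=json.dumps(sensor_reading),
    )

    prediction = json.loads(response["Body"].read())
    print(prediction)  # e.g. {"component": "axle_bearing", "failure_risk_30d": 0.82}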

Eric Sheridan Okay. With this pivot towards generative AI, how does this change your go-to-market strategy with respect to the customer? And how does Amazon Bedrock factor into your broader AI strategy? Matt Garman Well, there's a couple of things. One is, I think, I'll say on the -- there's a few ways that I would answer this.

Number one is, if you rewind to about 18 to 24 months ago, before generative AI became front and center in what a lot of industries were thinking about, customers were very focused on cost reduction, and they were actually thinking there was going to be a recession and really thinking about how to reduce their costs. And so we spent a lot of time actually with customers helping them reduce their bills, whether by moving to the cloud to save on CapEx or other spend like that, or even reducing their cloud bill so that they could actually afford to do new projects. As customers shifted to generative AI, it shifted a lot of that focus, and a lot of customers are now rethinking how do I innovate? Because if I don't innovate, I'll be left behind, and everyone else is going to get way ahead of me.

So that was number one. And it's been -- and so we're helping customers think through how do they get real value. And again, not just how do they put a chatbot on their website, so they can tell their board that they have a generative AI strategy, but real actual enterprise value that they get from actually reinventing and reimagining how their industry operates, number one.

So really helping customers there. And part of that is also moving out of just IT and thinking about how we talk to CEOs as they think about their strategy, how we talk to line-of-business owners who are really thinking about the core of that business. Because if you go back to the pharmaceutical example, it's not the CIO that's worrying about how you think about protein exploration or protein discovery, it's the actual scientists and the folks that are in there making new drugs that are thinking about that.

And so you have to change some of your go-to-market just to be a little bit more industry-focused and a little bit more line-of-business focused. Because the more you can be really in there with the customers, thinking about how this technology can really change the actual industry as opposed to just more efficiently running their back-office IT operations -- both of those are equally important, by the way, and super important. But when you think about generative AI, it's oftentimes that line-of-business customer and the industry-specific customer that you really have to get in with to understand.

Eric Sheridan Okay. How should we think about the levels of capital expenditure and investments needed for AWS to achieve its generative AI goals? To what extent does infrastructure need to be re-architected for a Gen AI world? Matt Garman Yes. Well, I think overall, on the spectrum from software to hardware, AWS is a capital-intensive business.

And so that is the nature of the business that we operate, right? We invest in data centers, we invest in servers, we invest in network, and we invest in that global infrastructure so that our customers don't necessarily have to. And so, as the business continues to grow, there are necessarily capital expenditures to grow data centers, to add power, to add servers. And so that part is just a part of the business that we operate in.

And one of the things that I'm quite proud of is that over the last 18 years, Amazon has built this kind of learned expertise, if you will, in supply chain from thinking about our retail world. And so we apply that to technology. And we think very carefully about that longer-term supply chain and when we are going to need power and when we are going to need data centers and when we are going to need servers.

And so that part, I think, we've learned over the last couple of decades: how to manage that demand and how to think about having enough compute power for customers so that when they want to grow and they need their capacity, it's available for them, but we don't have too much so that we unnecessarily spend ahead of demand. And so the ramp in generative AI adds to that pressure, and I think it adds to the opportunity for us, too. But we're pretty disciplined in how we do that.

And we think that we have a pretty good model for balancing some of those expenditures along with revenue growth to capture some of that opportunity for the business. One of the things that I think we have a benefit on is that we have been investing for more than a decade in custom infrastructure, which means that we own more of that cost. And so I'll use 1 example.

I think it was 15 years ago that we started building our own network devices. And so instead of having to rely on third-party supply chains, instead of having to rely on third-party vendors for load balancers or networking gear, we build them ourselves, and we build them out of normal compute boxes and software on top of servers and build our own systems that way. And then we went into building custom chips, and we built custom chips for our own virtualization technology, which we call Nitro.

And that means that we don't have to go buy those from third parties, which allows us to lower our cost. Long ago, many of the folks in the industry, and in AI in particular, really leaned into InfiniBand, because they thought that was the best-performing network that you could get -- which is true if you're going to run a small cluster that you're going to custom-configure. We saw long ago that if you really want to run at scale, and you have to run at these really large scales and operate them in a really efficient way, Ethernet was going to be a much better path over the long term.

And so we've invested in high-performance Ethernet for the last decade-plus for HPC systems. And now we have Ethernet networking for building large training clusters for AI that will often outperform InfiniBand on an absolute performance basis, at a much lower cost, with much more efficiency, much more operability and much better uptime. And so those are some of the investments that we've made along the way that allow us to lower some of those capital expenditures for us and grow more efficiently than we maybe otherwise would have.

Eric Sheridan Okay. Interesting. Can you discuss AWS' strategy around silicon partnerships and building your own custom chips for AI alongside those partnerships? Matt Garman Yes.

Look, I think a lot of times people enjoy a narrative where they say, how are you possibly doing your own chips when you have other partners who have their own chips? And it turns out customers like choice. And we've believed that from the earliest days of AWS.

And so we firmly believe that AWS is the absolute best place to run Intel, to run AMD, to run NVIDIA processors, and we think that we can offer some differentiated capabilities by offering our own processors as well. And so we started out -- actually, we started out with our own internal chips, which are the Nitro chips that ran our whole virtualization layer and moved all the virtualization off of the core compute into a dedicated side processor. From there, we kind of built up this expertise, and we launched our very first processor chip called Graviton, and that has been a wild success.

It's a general-purpose processor based on ARM, and we're at Graviton4 now. And Graviton4 absolutely outperforms the best other x86 processors at a 20% lower price. And so many of our customers get 40% to 50% price-performance gains while also using less power and improving their carbon footprint using Graviton.
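As a rough back-of-the-envelope on how those two numbers compound, assume a modest performance edge on top of the 20% lower price (the exact edge varies by workload, so the ratio below is an assumption, not a published benchmark):

    # Assumed numbers: ~15% more throughput at a 20% lower price.
    relative_performance = 1.15
    relative_price = 0.80

    price_performance_gain = relative_performance / relative_price - 1
    print(f"{price_performance_gain:.0%}")  # 44%, in the 40% to 50% range cited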

And it's because we control that whole process. We know where it's going to run; we don't have to build these processors to run in a general-purpose environment. They're going to run exactly in our server, exactly in our data center, exactly with our networking stack, and so we can optimize that just for our customers. And the customers are, of course, going to run a huge variety of workloads on it.

But the actual hardware environment that it runs in is exactly just AWS, and we can optimize like crazy around that, plus we have a very good team that's building the chips. Then about 5 years ago, we saw the opportunity to innovate in AI processors as well. And by the way, obviously, I'm not sharing any secrets here: NVIDIA makes a very, very, very good processor, it's quite popular, and it has done quite well.

And AWS is the best place to run NVIDIA-based GPU workloads. NVIDIA themselves, in fact -- and I understand you're going to have Jensen here in a couple of days, you can ask them about it -- we're partnering super closely to build a giant AI infrastructure for them to build their own models and to run their own test cases inside of AWS, because they realize that we have the best operating environment and the best performance in order to run their own servers. And so we have a great partnership together, and we work really well together.

And we think that there are some use cases where our own custom processors can help customers save money. The very first one we launched was called Inferentia. And it was very focused on inference.

And I can use our own company as an example: Alexa moved all of its inference to Inferentia and saved 70% versus doing it on a standard GPU part. And so not all workloads will work better on our own processors, but we feel very bullish about the opportunity there.

Trainium is the newest chip that we have out, which is very focused on large-scale training clusters for these AI models. And we feel really -- we preannounced Trainium2, which is going to be coming out at the end of this year. We feel incredibly excited about that platform.

We think that we have the opportunity to really aggressively lower cost for customers while increasing performance. And so super excited about that platform. And I think, look, there's going to be a breadth of processor options for customers for a long time, and we think more choice is better for our customers.

Eric Sheridan Okay. Clear. Maybe just coming back to the competitive question I asked before.

How do you view AWS' competitive positioning, specifically in generative AI, if you were to look at the application layer compared to the infrastructure and model layers? Matt Garman You mean like our own applications? Or -- here's how I think we think about the application layer generally. If you think about the stack, the technology stack, if you will, at the very lowest layers of the stack, we're going to be building compute and storage and databases and data centers. And at that layer of the stack, there are going to be very few players -- hyperscale clouds -- that are going to be able to go build something like that. And we think by far, AWS is the best at doing that, and we're the largest at doing that.

And then you move up a layer and you think about the services that we have built on top of that -- some of the higher-level services, maybe something like an Aurora database or a Redshift analytics cluster or things like that. And then there's the very top layer. But that services layer is still kind of in the infrastructure space, and there are more competitors there.

And our view is we want customers to run on the very best of those products that are available. And so somebody like Databricks or Snowflake and Redshift are all kind of great options that customers use, and many of them run on the AWS infrastructure, but customers pick and choose depending on their use case, and there are going to be more of those options out there -- many of them we offer and many of them our partners offer. Then you go to that application layer.

There are, I don't know, tens of thousands, hundreds of thousands of startups -- there's a new startup every day that's building a cool thing at the application layer. And so AWS will have a few of those, I think. And I mentioned contact centers earlier.

I think we have the most popular and the fastest-growing cloud contact center in Connect, and that's arguably at the application layer. It's an area that we thought we had expertise in, and we went into that and we've done quite well, and customers really enjoy using it, and we have AI infused into it, and it's growing very well. But we also have a huge number of partners, and whether it's Salesforce or ServiceNow or Workday or a startup that was just funded yesterday building on the application layer, there are going to be tens of thousands of these applications.

And so I think that we will have many successful ones, and I'm super excited about Amazon Q, which is our conversational assistant that helps both developers and enterprises get more value out of their data and really be more efficient in how they go about working. And we're seeing tremendous upside and tremendous growth of enterprises starting to adopt that technology. But I think we'll just be one of many thousands that are going to be successful at that layer, and that's part of how we operate.

Eric Sheridan Okay. You talked earlier about security being a differentiator in the space. Can you give us a little bit of color on to what degree you're building security solutions that are meant for your own network and are sort of internal-facing, versus building security solutions for customers? And what some of your priorities might be, both looking internally and externally, around the security landscape? Matt Garman Yes.

The -- look, our priority number one -- and to answer your question, we do both, but priority number one is the security of our infrastructure. That is something that our customers can't do, our partners can't do.

We have to own that. And so that is something where we spend an enormous amount of time and have from the very beginning. And so we think about -- for example, and this is what I used to do in my old job, like a decade ago, we built a custom hypervisor layer, which means that there is no operator access to a compute instance running in AWS.

And so if you're running an EC2 instance -- with a normal hypervisor, there's a layer where the operator can come in and kind of manage the VMs and do things like that, and it's how a lot of systems interact with your various virtual machines. We built a hypervisor that doesn't have that. The only way you interact with and create VMs is through APIs.

And so there is no way for a human to go and log into a machine and be on that box that your VMs are running on. That is a very different security posture than anyone else runs, because we thought of this from the ground up as we built the infrastructure for AWS.
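As an illustration of that API-only model, creating an instance happens entirely through calls like the sketch below; the AMI ID and key pair name are placeholders, not real values.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Instances are created and managed only through API calls like this one;
    # the AMI ID and key pair are placeholders to substitute with your own.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m7g.large",  # a Graviton-based instance type
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",
    )

    print("Launched", response["Instances"][0]["InstanceId"])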

We thought about security from the very beginning, and we continue to invest enormous amounts of effort and time and resources in securing that infrastructure layer and our services layer. And it turns out it's one of those things you can't bolt on after the fact. And I think some of our other friends are learning that the hard way, and they're trying to figure out how to go do that.

But we were fortunate enough that the team that built AWS, in particular, was thinking about that from the very beginning, and it's always been a priority for us. Now that said, we also see security in the cloud as a shared responsibility model. And so at the application layer, the customer is responsible for securing their application, right? We're responsible for securing the infrastructure, and they have to secure their application.

And so customers can lose keys, customers can leave database ports open, customers can have bad security practices. And so we also build services to help customers really understand how to go manage that, how to monitor for that, and we have teams that will help customers with best practices around that. There's also a rich partner ecosystem, and this is where partners can come in and help customers secure applications.

And so whether it's folks like CrowdStrike or Wiz or Palo Alto, et cetera, there are a number of security companies out there that build great applications and great capabilities to help customers secure their own applications. We also have services that we offer on that front, too, that I think are quite good, but there's going to be a wide variety of things that customers use on that front. But that underlying security is where we spend the vast majority of our time.
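On the customer side of that shared responsibility model, a simple audit like the sketch below can flag the open-database-port misconfiguration mentioned above; the list of ports treated as sensitive is an illustrative assumption.

    import boto3

    ec2 = boto3.client("ec2")

    # Ports treated as sensitive here are an illustrative assumption.
    DB_PORTS = {3306, 5432, 1433, 27017}

    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if open_to_world and rule.get("FromPort") in DB_PORTS:
                print(f"{group['GroupId']} exposes port {rule['FromPort']} to the internet")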

Eric Sheridan Maybe just 1 quick follow-up. Are there any areas of focus that you think it's critical that AWS builds toward for the longer term around the security infrastructure layer? Matt Garman Yes. I think, look, there's a bunch, right? One of the things about security is you have to keep running, because the bad guys don't stop either, unfortunately.

And so whether you're thinking about quantum-safe encryption, or whether you're thinking about -- I think, as promising as the technology is, generative AI also opens up a pretty large surface area of security risk that you have to think about, particularly how you might leak data or other things like that from various models. And so I think we continue to have to think about how we push the security boundary on those fronts.

And I think you just have to -- you're seeing more and more sophisticated attacks. And so I think we are always thinking multiple years out as to how we do reactive and proactive security measures to try to identify where we see patterns, where we see bad guys, where we see different things, and then we go try to build ahead of that. Eric Sheridan Okay.

We only have a few minutes left. So let me end on 1 sort of bigger-picture question. Looking ahead over the next 12 to 18 months, how would you frame up the key priorities and milestones you'd like AWS to achieve? And are there any other emerging themes that you think we, as investors, should be paying attention to across the broader computing landscape? Matt Garman Yes.

There's a lot, I don't know how much longer you have. But I think a few things that I'm excited about and priorities for me. Number one is we have that baseline of rock solid infrastructure.

I am also excited about us really accelerating the pace of innovation and simplification that we can offer for our customers out there. I think that today, there is such a dizzying array of things that they can pick from that we can be a little bit more prescriptive, putting on that customer lens and saying, how can I help customers simplify some of that decision-making and focus on what's most important for their business? And so I think there's a lot of innovation that happens there.

There are a lot of things that we can continue to take on for customers to simplify their lives so they can really focus on their own business. And look, I firmly believe that as we move into this AI world, most customers are not going to become experts in AI. Most customers are not going to go build their own models, most customers are not going to spend $1 trillion building some sort of foundational model, and most customers are going to want to get the benefits out of that model.

And so the benefits, though, are going to be very closely tied to your unique IP and the differentiating data you have for your own enterprise and your own workflows and your own customers. And so really thinking about helping customers get the value out of that data from the technology in a relatively easy way is one of the things that I'm most excited about: helping customers get their data out of data silos and into a cloud world where it's available to be used by some of these models, so you can actually get value out of it, while also protecting that data so that they keep that unique IP that's going to be most important to that end customer. I think there is a ton of opportunity there to really help customers build more value for their enterprises.

And if we can make it easier for them to innovate and help on the analytics side and the AI side, and make that accessible to everyone while they get their data into a cloud world, I think that's how you really see the acceleration of real enterprise value that comes from these technologies, and it's what I'm most excited about probably over the next 12 to 24 months. Eric Sheridan Okay. With that, I think we're going to leave it there.

Matt, thank you so much for being part of the conference. Matt Garman Thank you. Eric Sheridan Please join me in thanking Matt.
