Alphabet Inc. (NASDAQ:GOOG) Goldman Sachs 2024 Communacopia and Technology Conference September 10, 2024 4:05 PM ET
Company Participants
Thomas Kurian – CEO, Google Cloud
Conference Call Participants
Eric Sheridan – Goldman Sachs
Eric Sheridan
All right. I know people are still finding their seats, but in the interest of time, we'll get started on our next session. I'm going to start with a safe harbor, then introduce Thomas Kurian, who will join me up on stage, walk through a slide presentation, and then we'll do some Q&A.
Some of the statements that Mr. Kurian may make today could be considered forward-looking. These statements involve a number of risks and uncertainties that could cause actual results to differ materially. Please refer to Alphabet’s Form 10-K and 10-Q, including the risk factors discussed therein. Any forward-looking statements that Mr. Kurian makes are based on assumptions as of today, and Alphabet undertakes no obligation to update them.
Thomas Kurian joined Google Cloud as CEO in November of 2018. He has deep experience in enterprise software; prior to joining Google Cloud, he spent 22 years at Oracle, where most recently he was President of Product Development. Thomas, welcome to the stage.
Thomas Kurian
Thank you, Eric, for that warm introduction. We at Google Cloud are the organization that takes the innovation Google is making in infrastructure, data and digital platforms, cybersecurity and AI, and brings it to enterprises, small companies, startups and governments. We do that through a global infrastructure: 40 regions and 121 zones connected around the world.
On that infrastructure, we’ve built world leading AI training and inferencing systems, a collection of leading frontier AI models, not just from Google, but also partners like Anthropic and many others, and a suite of tools. We make that available to customers in five important product offerings.
Infrastructure for them to build their own foundational models, or to tune and adapt existing foundational models, and to modernize their IT systems; development tools to connect AI models into enterprise systems so they can build agents and applications; data analytics to connect any kind of data to any kind of model, as well as to use AI to speed up analysis and make it easier for people to do.
Cybersecurity: integrating AI into a security operations workbench to help people prevent, defend against, respond to, and resolve cyber threats materially faster. And then obviously, we've integrated AI deeply into our applications portfolio, Google Workspace, and we're also introducing new applications powered by AI.
Starting with AI infrastructure. For 11 years now, Google has built world-leading AI systems. We are in our sixth generation of accelerator technology, and we offer customers a choice of our accelerators as well as NVIDIA's. They're assembled into super high scale systems that offer extremely good performance: 3 times competitors for training, 2.5 times competitors on a cost-performance basis for inference. And as you get larger and larger clusters, reliability becomes a bigger issue, because with a larger number of machines you end up with failures and need to restart your model training. There are many advances we've made there.
As an example, in addition to the software on top of these systems, we've introduced water cooling. We now offer close to 1 gigawatt of water cooling in production; that's 70 times the number two. We've seen huge growth as a result in our AI infrastructure: 150 times more compute instances can be connected to a single storage volume, which means you get super dense training clusters. We've seen 10 times growth year-over-year in people using our systems for training.
Here are examples of customers. 90% of AI unicorns run on our cloud for training and inference. 60% of all AI funded start-ups ever use our infrastructure for training or inference. We're also seeing traditional companies now building both high-performance and generative AI models on our cloud. An example is Ford Motor Company. They're using our deep learning technology to build wind tunnel and virtual wind tunnel simulations, replacing a traditional approach called computational fluid dynamics.
Midjourney, a leader in AI foundation models, trains on our TPUs and serves on NVIDIA GPUs, both in our cloud. It's an example of how having that diverse portfolio allows them to choose the best combination across the platform. In addition to having a foundational model, you need to connect it to your enterprise systems.
We offer a foundational platform for people to do that. There are a number of differentiators; I'll touch on three important ones. First, we offer an open platform. In addition to Google's frontier models, we offer leading open-source as well as third-party models: Anthropic, Cohere, AI21 Labs, Runway and many others. This allows enterprises to choose a standard platform, because they get the choice of models.
We also have built AI into our products for many years, and as a result we offer advanced capability in this platform: additional services like grounding to improve the accuracy of answers, and capabilities like high-fidelity adaptation and distillation, which, for example, take a large model and shrink it down. All of these additional services we offer customers through an end-to-end platform. You can use that to connect to any cloud, any SaaS application or any enterprise system.
We monetize compute instances on a consumption basis. We monetize our AI developer platform by pricing on a token basis, but people also pay for the additional services: grounding, distillation, fine-tuning, adaptation and so on. There are many developers using our platform, over 2 million of them. As people migrate from proof-of-concept to production, we see a ramp in usage.
An example: last week, we spoke with Accenture. We're working with them in many of the Fortune 500 companies, and 45% of those projects have now gone live. As a result, we see a ramp in API requests. Here are examples of people using our developer platform, just a couple of them. Bayer in health care is using our generative AI tools to power search, document creation, and image analysis to help radiographers, for example, with tumor detection and comparison.
Similarly, Samsung used our distillation tools to build a specific version of Gemini and Imagen to power the Galaxy S24 smartphones, which are now in hundreds of millions of hands. Models need to be connected to data. We offer two important things with our data platform. First, the ability to connect any data on any cloud to a model with super low latency. So structured, unstructured, and semi-structured data can be connected to any model with super low latency.
Secondly, for those people who want to do analysis, we've introduced a capability we call a data agent. It helps you do all the tasks you need for analysis, but using a conversational interface. It helps you migrate data, stage it, aggregate it, visualize it; it even builds you charts and spreadsheets. We monetize this in two ways. First, because we've made it easier for people, it drives a lot more query volume on our analytical platform, BigQuery.
Secondly, because we've opened up analysis beyond the domain of people who know SQL, Python, etc., it also drives a lot more end-user subscription growth, because we can sell more seats in an organization. We see growth in our analytical platform. BigQuery now runs 80% more ML operations than just six months ago. It's also being used not just to process structured data; there's been a 13 times growth in multimodal data, because of the capabilities of the Gemini model to process multimodal data.
Many customers are using it. Two examples. UPS Capital runs millions of calculations in real time on streaming data, analyzing package movements and locations to detect whether a package is going to be delivered in an area of risk. They also run calculations to detect, for example, if somebody is doing fraudulent things like stealing packages.
Second, Hiscox, one of the largest syndicates in Lloyd's of London, introduced the first AI-enhanced lead underwriting model. When they insure property risk, it used to take them months to calculate the entire portfolio, and a single property took three days. A single property now takes a few seconds, and what used to take months now takes a matter of days.
What are we doing in cybersecurity? We started with a foundation that's super secure. If you look at our reliability, we have about a quarter of the downtime of the other players. And in cyber, as measured by Cisco and others, we have a really secure cloud, with half the number of vulnerabilities of other clouds.
So we started with a very strong foundation. We then applied tools that we have built to help organizations prevent, detect and respond to cybersecurity threats. How do we do that? From our Mandiant acquisition, from the millions of endpoints that run Chrome, and from our broad network, we collect and summarize threat intelligence from around the world, which is fed into an AI model to help you prioritize the threats you're likely to face.
It then compares that with the configuration of your existing systems to see where you're likely to be attacked. It then helps you generate the remediation; we call that the runbook. It writes the audit file for your audit submission, and it helps you validate that you fixed the issue. That cycle speeds up people's ability to identify, remediate, and respond to threats.
We're seeing growth because we've helped people speed up how they detect and respond to threats using our tools. We've seen a 4x increase in customer adoption, 3 times the volume of data ingested, and an 8 times increase in threat hunting. We monetize this based on the volume of data we're processing and the number of threat hunts or queries happening on the system.
Many customers use this platform. Two examples. Fiserv is running it, powered by our AI capabilities, to speed up how they summarize threats, find answers, detect, validate and respond. And Apex Fintech, a fintech company, wanted to speed up how quickly they can run threat hunting rules. They're using our AI system to write extremely complex threat detection processes; what took many hours now takes a few seconds.
Finally, we have a broad applications portfolio. We've integrated AI into Google Workspace to improve productivity; I'll talk about that in the question-and-answer session. But we're also introducing new applications. One example is applications to help people with customer experience and customer service. Think of it this way: you can go on the web or a mobile app, call a call center, or be at a retail point of sale, and a digital agent can assist you in searching for information and finding answers to questions, using either chat or voice.
Our differentiation is in several important capabilities. Number one, multi-channel: we can handle all four channels I talked about, web, mobile, point-of-sale, and call center, in a single system. Secondly, multimodal. For instance, you call the telephone company to say, I'd like to trade in my phone. It can send you a chat: please upload an image of your phone. Your screen is cracked; here's how much I can reimburse you. That's a multimodal conversation: voice, text, and image. We have that capability because our foundational model Gemini can process multimodal information.
Third, if you work in call centers, there are times when you need to follow a strict order of questions. For example, if you're in a bank, you have to verify identity; you need to be able to guarantee you asked a set of questions to verify identity for KYC. At the same time, you may ask the bank: tell me the best mortgage offering for me. Can you compare it across the different products you offer?
The first requires deterministic control; the second requires something called generative control. We're the only ones that allow you to do both. Lastly, imagine you call the bank and ask a question about your bank balance. You don't need that answer to be right 90% of the time; you need it to be right 100% of the time. We have a technique to answer with 100% accuracy. All of this is driving growth in our customer experience platform.
Here, we monetize based on the value we save users: either the costs we're displacing or the reach expansion we're giving their agents. We've seen growth across all the dimensions: the adoption of digital agents, the volume of traffic going to those agents, and so on. Examples of customers using our customer experience platform: if you call Verizon, you're talking to our chat system and our call system, with a 60% containment rate and a high rate of call deflection.
If you drive a General Motors vehicle and you hit OnStar, you're talking to our conversational AI system. So across this portfolio, we've integrated AI. We monetize it in different ways, and we're focused on three important things: winning new customers, winning more projects within each customer, and upselling new products. Winning more customers: for example, Radisson. We helped them use our generative AI technology to speed up marketing content creation by 2x. They then brought that online and, most importantly, saw a 20% lift because the ads were really tailored. That's step one.
We then often go to other parts of the portfolio. For example, if we're working with a retailer that has automated the content creation process, we can help them with retail commerce and conversational shopping, a second project. And then, to tailor the ads to different customer segments, we'll sell them our analytical platform, BigQuery, for segmentation and personalization. So that's the process we go through to win customers.
We've trained our go-to-market organization. We continue to focus on thoughtful geographical expansion of our field teams. We're teaching them to be specialists. Many of these solutions are not bought in the IT organization; they are bought by the Head of Customer Service, the Head of Commerce, the Head of Finance, and so you have to learn to sell outside of the IT organization.
So we've taught them how to specialize by industry as well as how to specialize from a solution-selling point of view. We've taught them a business value methodology, so they can easily measure what productivity benefits we're going to give, how much reach customers are going to get, and what cost savings they can generate.
And finally, we're not a big consulting shop. We're not a services organization. So we work with a broad partner ecosystem. Because we don't conflict with the partner ecosystem, we've invested in them: in technology, commercial incentives, training and certifications, as well as go-to-market incentives.
Just to give you an example, if you look at the leading ISV and SaaS vendors, 75% of them are building with us, using our AI tools. Among leading system integrators, Accenture has doubled the number of people certified on our AI systems in just the last year. All of this, winning new customers, driving new projects, expanding and upselling products, has driven growth for us: 5 times revenue growth in five years. We are now the fourth largest enterprise software company in the world on a standalone basis.
While we grew top line, we also had great discipline in managing our costs. We’ve invested thoughtfully in engineering. We’ve invested thoughtfully in a go-to-market organization and we delivered operating income growth at the same time. We continue to see strong business momentum. We’re capturing customers faster. We’re doing larger deals. Customers have increased their adoption of our products by 30% in just the last year.
We have very strong support from the ecosystem. And we are being patient with AI. Because of the many different parts of the portfolio in which we have integrated AI, and the many different ways in which we're monetizing it, we think it will help us continue to accelerate our presence in the market.
So with that, Eric, happy to take questions.
Question-and-Answer Session
Q – Eric Sheridan
Okay. First of all — thank you, Thomas. Okay. Why don’t we jump right into it. Maybe start with a big picture question. A lot has evolved over the last couple of years in terms of both the industry, the public cloud space and even within Google Cloud. To level set, can you give us your world view of where we sit in cloud adoption and usage and how you see Google Cloud’s position evolve competitively?
Thomas Kurian
Geographically, we’re seeing growth in many, many markets. For example, in Asia and Latin America, many organizations are going straight to the cloud rather than starting on premise and then lifting to the cloud. Industry-wise, they were early movers, for example, retailers, communication service providers. Now we’re seeing many other industries, utilities, mining, natural resources, a number of them moving. So we see that happening.
We also see that the — historically, all cloud computing projects were controlled in the IT department. Increasingly, the adoption is being driven by business initiatives. For example, the Head of Private Wealth Management will say, I want to use data and AI to streamline how my organization does research. And so those projects increasingly are being driven not just in IT, but by business buyers.
We are still very early. If you count all the machines in data centers today versus how many are being consumed in the cloud, we're still early. We have a strong presence, and we've obviously grown a lot. When I started, most people told me that we didn't have a chance. We're now the fourth largest enterprise software company.
Eric Sheridan
Understood. How do you frame Google Cloud’s differentiated offering to win new business and grow wallet share? Maybe you could talk a little bit about some of the products and the services that are aimed at tackling some of the themes you highlighted in your presentation.
Thomas Kurian
We've always focused our product strategy customer-in. When we started, we introduced a concept in 2019 called multi-cloud. What was that meant to be? It basically said you should be able to use our technology in concert with other cloud providers and still standardize on our solution. For instance, we introduced a database that runs across on-premise environments and all of the major clouds; that's driven companies like Workday to standardize on it, because they can use it no matter where their application is running. That's one example. It broadens our total addressable market because it allows us to play in all clouds, so that's number one.
Second, we also said we need to offer more than just compute and infrastructure; we need to go up the stack and offer solutions in databases, analytics, cyber, etc. In those areas, we've also taken a different approach. Our analytical system allows people to analyze data across multiple clouds without moving the data and copying it to Google, so you get a single place to run your analysis no matter where your data sits. Albertsons is an example. They have migrated to our analytical systems, while many of their applications are on other clouds. Again, it allows us to win more customers and win more projects.
Third, we wanted to upsell products and services. So when you look at our strength in data and analytical processing, now we bring generative AI along with it. It allows people to do analysis using generative AI much more quickly and efficiently. One of the largest telecommunications companies in the United States is using our analytical system along with our Vertex AI platform and Gemini to run very important, but simple calculations.
Take as an example: take all the calls coming into my call center, which are recorded for quality purposes, summarize the calls, and compare them with the data from my billing system to tell me if the calls coming in are because of complaints about bills. The first part is processed using Gemini; the second part is analyzed in BigQuery. And because we offer that combination, it's another one of the strategic advantages we offer.
Eric Sheridan
Okay. I want to turn next to go-to-market. Are there differences competitively of how you’re taking your products to market and how you work with channel partners today, and how has that evolved over the last 12 to 18 months?
Thomas Kurian
Great question. Fundamentally, we've been very disciplined in how we go to market. There are two important things. First of all, the way you sell an AI solution is not the same as selling a compute server on feeds and speeds. So we've done three important things. First, we've organized our go-to-market around key buyers. For example, when we sell cybersecurity, we're selling it to the CISO of a company.
When we sell customer service, we're typically selling to the head of customer service or the head of e-commerce. So we specialize around buyers. We've taught our teams to sell a solution; a solution is about the value proposition: how much cost and productivity advantage can it give me. That's a different sales methodology than selling infrastructure. To do that effectively, though, you can't specialize around everything. You have to be disciplined in how you specialize for specific buyers and product offerings while getting global scale in our front line, and we do that very well.
And lastly, early on, we made a decision that we're not going to offer a huge number of services from our professional services arm. We have a very focused team; they work on our largest accounts. But that allows us to partner very well with independent software vendors, because we're not competing with them in core apps. It also helps us work very well with system integrators.
Eric Sheridan
Okay. You talked a lot about AI in your presentation. Can you lay out your vision of how generative AI capabilities will be adopted and utilized by customers across the infrastructure, model and application layers and how you view the relative sizing of those opportunities?
Thomas Kurian
So we offer, as I said, five different capabilities. Broad-brush, you can think of it this way: we offer infrastructure for people to build their own models or to tune existing models. Second, we help people build their own agents with our AI developer platform. We have people building insurance agents, research analysis agents, and customer service agents of their own.
We've also provided packaged agents: a data agent, a cybersecurity agent, a collaboration agent for helping you write things, and increasingly we are specializing them by industry. For example, an insurance agent is different than a nursing agent. All three monetize in different ways. Just like the internet, first people lay the infrastructure, and over time monetization moves up. That diversification allows us to capture value in many ways, and over time, I think you'll see similar things.
Eric Sheridan
Okay. Maybe I’ll turn to Vertex AI next. How are you seeing customers using Google Cloud to build scaled applications in a generative AI world and what types of models are customers gravitating to across various use cases?
Thomas Kurian
We see a huge range of models. Some people are using large models because they want very complex reasoning. Others, like Samsung, which I mentioned, are focused on super task-specific models. We allow people to get that range of capability in the platform, and we give them tools to tailor the model for their specific need.
In addition to that, though, you need other services to be able to use a model, and what Vertex gives you is that breadth of services. For example, if I'm answering a question, can you validate the answer, either against the Internet, against my own data set, or, for example, in financial services, against Moody's, Bloomberg, Thomson Reuters, and other high-quality data sources? That's grounding. We make it available as a service, so you don't have to do it yourself.
We offer technologies, for example, to shrink the model size, so that you can say, I took a general model and tuned it just for my needs. Samsung is an example of a customer doing that. It reduces cost, improves latency, and shrinks down the response time for these applications. Most importantly, we've connected all these services together.
So for example, if you don't know how to ground, you can just delegate to a service we call adaptive grounding. Our systems will say: we got a response, and this one doesn't need to be grounded, but that other answer does. That saves people a lot of cost, because they're always worried: will I have to pay to ground the answer every time I send a request? All of this is built into Vertex. As a result, we've seen huge growth with it, as I pointed out earlier.
Eric Sheridan
Okay. Maybe turning to custom silicon. Can you discuss Alphabet's strategy around silicon partnerships versus building your own custom chips for AI workloads?
Thomas Kurian
We've always worked on high scale compute systems for AI. We partner closely with NVIDIA, and we're also in discussions with many others. As an example, we offer an integrated high scale system, and there are a number of pieces to that system. People often get focused on the chips, but it's not just the chips. It's, for example, how much high-bandwidth memory you have: if you're training a dense model, you need more; if you're training a sparse model, you don't need as much.
What optical switching do you have to interconnect a very large cluster with low latency? What's your cooling technology? And then, as you move up the stack: can I program in a high-level framework like PyTorch? Can I use a framework like JAX, a compiler framework that will compile that down, to CUDA if you're going to a GPU, and differently if you're going to a TPU? That entire stack is what we offer customers, and that's why many leading companies use us for high scale training, but also in [Technical Difficulty].
Eric Sheridan
Okay. Can you give us an update on Workspace and the other applications in your portfolio, and how you see them in the broader generative AI offerings for clients?
Thomas Kurian
So from a productivity point of view, we see three types of adoption patterns. With Workspace, our communication and collaboration portfolio, we have many organizations adopting it for their professional workforce: to help them write, to take notes in meetings, to create content. For example, we have many of them using it to create marketing content and to translate marketing content into many languages. We've introduced a new product called Google Vids, which allows you to create short videos for training people and, for example, for all-hands meetings. So there's a lot of that for the full professional workforce, and we see people like Randstad, Woolworths, etc., adopting it.
Second, there are very specific parts of organizations that are super high value. For example, in a hospital company, nurses are the critical path, because nurses determine how many hospital beds you can have; they control the revenue of the organization. So we work with nursing staff, for example, to do live hand-offs of patients. It saves about 6 hours in a 24-hour day. And one of the leading hospitals said at a conference today that they estimate, when rolled out, it will save them $250 million.
Insurance: handling claims. We're working with the largest health insurance company in Germany. They have a huge volume of claims coming in; on average, for a claim, they need to read 800 policy documents to determine whether the claim is valid or not. They use our technology, and it helps take 23 to 30 minutes down to 3 seconds. So productivity in these specific places is extremely high value. And then we also see productivity in certain roles that are scarce.
When I say scarce: cyber is one example. There are not enough cyber analysts to go around, so introducing AI capability there scales productivity in a very different way. So we're working on all three dimensions: the professional workforce, frontline or first-line workers, as well as scarce teams. And there's a lot of thought we put into finding the roles that are scarce, because they monetize faster.
Eric Sheridan
Okay. Maybe I’ll end on one big picture one. When you think about the opportunities you laid out today in your presentation and in our conversation, what are the primary areas of investment that you’re calling out when you’re thinking about aligning investments against the goals and milestones that you and the team want to accomplish with Google Cloud?
Thomas Kurian
I mean, we've shown a track record for many years now of being very thoughtful in how we make investments. They broadly span: what do we need to do in engineering to broaden and deepen our product portfolio; what do we do with our go-to-market organization to expand it and build specialization; and what do we need to do in our data centers to expand our geographical footprint as well as grow our infrastructure.
We've also had very close relationships with partners; we incent partners and have made investments in the channel. All of that is balanced. We do a lot of systematic planning. We look over multiple years at where technology is going, and we make thoughtful decisions based on all of that.
Eric Sheridan
Okay. Thomas, I always appreciate the opportunity to chat. Please join me in thanking Thomas Kurian for being part of the conference.