Micron Technology, Inc. (NASDAQ:MU) J.P. Morgan’s 52nd Annual Global Technology, Media and Communications Conference May 21, 2024 8:10 AM ET
Company Participants
Mark Murphy – EVP & CFO
Manish Bhatia – EVP of Global Operations
Conference Call Participants
Harlan Sur – JPMorgan
Harlan Sur
Good morning, and welcome to the second day of JPMorgan’s 52nd Annual Technology, Media and Communications Conference. My name is Harlan Sur. I’m the semiconductor and semiconductor capital equipment analyst for the firm.
Very pleased to have Mark Murphy, Chief Financial Officer; and Manish Bhatia, EVP of Global Operations at Micron Technology, here with us this morning. Gentlemen, thank you for joining us. I’m going to go ahead and kick off the Q&A, and we’ll have some time for audience questions as well. So again, thanks for joining.
As we closed out the March earnings season, we heard from most of your peers. All of them echoed views similar to your earnings call back in March, right? Tight industry supply; demand trends expected to continue into next year; customer inventories of memory at normalized levels; better second-half seasonal demand trends; strong new drivers of growth like AI, which is obviously pulling demand for things like HBM and data center SSDs; and finally, a posture of continued supply-side and CapEx discipline, right?
So, now that we’re coming full circle here at the end of your team’s quarter, any differences relative to your prior views? And more importantly, any changes on expectations for continued strong pricing improvements in this calendar year, record revenue outlook and significant profitability outlook for 2025?
Mark Murphy
Okay. Thank you, Harlan, and thank you, everyone, for joining us today. I’ll start with safe harbor, actually. We’ll be making forward-looking statements. Those statements have risks and uncertainties associated with them. I refer you to our risk factors disclosed in our public filings.
So, Harlan, your question. No new comments on the market environment or trajectory of the business from what we’ve said recently in earnings. We continue to see a tight supply-demand balance on the leading edge. We expect prices to continue to increase through 2024. And we still see data center growth improving in the back half of the year.
No update on guidance for the May quarter or on the indications we provided for the August quarter, including gross margin in the low 30s-plus percent area. We now see FY ’24 CapEx up some, to about $8 billion. That’s versus a prior view of $7.5 billion to $8 billion. And that’s as we intersect the HBM opportunities ahead of us.
Consistent with our prior view, we see positive free cash flow for the May and August quarters. If we look further out, we project record revenue in our FY ’25 and significantly improved profitability year-over-year. Micron is in an excellent position across technology, products and manufacturing to benefit from this upturn and the anticipated growth ahead of us.
Question-and-Answer Session
Q – Harlan Sur
I appreciate that. And one of the things that did occur during the quarter was obviously the earthquake in Taiwan, an unfortunate event. The Micron team has a fairly large footprint of DRAM manufacturing operations there. And I know the team did put out a press release or an 8-K around the potential impact; I think you said a mid-single-digit type of quarterly impact to your overall DRAM supply. So, it doesn’t sound like that’s changed.
Manish, since we have you here: post mortem, how did the team recover from the earthquake? And are things back on track and back to normal?
Manish Bhatia
Sure. Thanks, Harlan. Thanks for having us here. Yes, as you referenced, there was a quake that hit Taiwan in early April. It was the largest earthquake there in the last 25 years, so it was a substantial event. And our teams did extremely well, rallying team members not just from within Taiwan, but from other sites in Asia and the U.S., to come in and help restore operations.
We did have, as we put out in the statement, up to about a mid-single-digit percent of one quarter’s DRAM output lost through a combination of some wafers that had to be scrapped, some lost production as we were restarting equipment, and then some slightly lower yields on product that was already in process. But all in all, I think the Micron teams responded extremely well. And as you note, we haven’t had to make any further comments since then. And as Mark noted, no further comments relative to our guidance either.
Harlan Sur
Perfect. No, I appreciate that. From an end market perspective, back in March, the team highlighted that data center customers were still working through normalization of some excess inventories. But you had anticipated data center inventory normalization would occur in the first half of this calendar year, and obviously you have line of sight to that. I mean, have inventories already normalized in the data center market? And how do you see inventories in general in the traditional PC, smartphone and embedded markets?
Mark Murphy
Yes, data center inventories are normalizing midyear here as we expected. And as I mentioned in my opening comments, expect growth to pick up in the back half of the year. On PC, smartphones, we did see some OEMs opportunistically build some inventories in anticipation of return to unit growth and AI-driven content growth. But then more broadly, inventories are in good shape.
Harlan Sur
From a supply perspective, our global team just updated our memory industry supply-demand model. From a bit demand perspective, we’re looking for DRAM bit demand growth in the sort of mid-teens range this year, NAND bit demand growing in the low-20% range, and overall bit supply below the bit demand levels for both DRAM and NAND. And we’re modeling industry DRAM wafer capacity up about 10%, right, primarily for HBM, and NAND wafer capacity flattish on the continued supply discipline of the industry. How does this compare to Micron’s industry demand and supply outlook, and Micron’s supply dynamics within this?
Manish Bhatia
Sure. So, we continue to view the DRAM industry long-term growth rate at a mid-teens bit growth rate, and that’s inclusive of HBM, which I’ll talk a little bit more about, and NAND in the low-20% range. With regard to production capacity, I think we’ve highlighted before that through the downturn we just experienced, Micron deployed a capital-efficient methodology to transition to newer technology nodes while reducing wafer capacity. So basically, reusing more of the equipment for newer technology, taking wafer starts down in both DRAM and NAND by low double-digit percentages. And that’s where we are relative to our peak output capability back in 2022. So, from a wafer capacity standpoint, we’re actually down. And we believe this is a phenomenon that the entire industry went through to manage CapEx during the downturn.
So, as we look ahead at the HBM opportunity, I mentioned that the mid-teens CAGR includes HBM, which, as we’ve discussed before, has larger die sizes and consumes more wafer capacity per bit of output. So, keep in mind that as HBM grows in mix, the effective bit growth rate for the industry will be higher, and both technology transitions to generate more bit growth as well as greenfield wafer capacity additions will be needed as we move forward. And we see HBM growing as part of the industry mix, and that will be true for everyone, not just for Micron.
Harlan Sur
On the topic of HBM: the team started shipping production bits of your 8-high HBM3E solution this quarter. Can you just give us an update? How has the ramp proceeded? Is the team hitting yield and manufacturability milestones? And we know you are shipping into NVIDIA’s H200 GPU platform, which is ramping now. Are you also supporting the Blackwell platform? And have you diversified your wins to non-NVIDIA XPU compute platforms?
Manish Bhatia
So yes, that’s right, Harlan. We mentioned on our call in March that we had begun production shipments, and that ramp has started. We also mentioned on that call that we expect several hundred million dollars’ worth of revenue in the second half of fiscal ’24, which would be the May quarter and the August quarter, and we’re on track for that. So far, so good; it’s going well. In terms of breadth, we can’t comment too much on other customers. But we are focused with that one partner right now and off to a good start.
Harlan Sur
And in addition to targeting several hundred million dollars of HBM revenues this fiscal year, the team also anticipated HBM revenues this quarter being accretive to both your DRAM and overall gross margins. Is the team hitting that milestone?
Manish Bhatia
Yes, we’re on track for HBM. The cost of HBM is higher, but the pricing has more than offset that, and the margins are immediately accretive to our DRAM gross margins, even starting in the first quarter of production. And we expect them to continue to be so as we grow the business. As mentioned, we expect several hundred million dollars in fiscal ’24. And in fiscal ’25, we expect HBM to be a multibillion-dollar business for us.
Harlan Sur
So it’s clear that industry bit supply will be tight for the foreseeable future. Your customers want predictability, right? Especially when the demand profile is growing strongly, like HBM for accelerated compute and AI applications. I know last we spoke, the Micron team was sold out on HBM for this year, right? You’ve locked in volume and pricing agreements for this year.
At the time of the earnings call, you were almost locked up on volume and pricing for next year. So, is the 2025 output all locked up now? And what about the non-HBM part of your business? Are you locking in LTA agreements in these segments as well? As I assume customers here also want some predictability in a tight supply environment.
Mark Murphy
Yes. So, as you say, we are mostly allocated for next year on HBM. And in addition to the volumes being allocated, the pricing is mostly set as well. As Manish just mentioned, we expect several hundred million of revenue in HBM this fiscal ’24. And then next year, fiscal ’25, we expect multibillion revenue in HBM. Now on non-HBM, no changes there on pricing dynamics, but the HBM supply requirements are tightening the market more broadly in DRAM.
Manish Bhatia
It’s important to note, just on the LTAs for next year, calendar year ’25, that there’s a unique dynamic with the high-bandwidth memory market, and with Micron’s high-bandwidth memory product being so superior to others. Not only are we locking in LTAs for supply for calendar year ’25 already, but those LTAs actually include pricing as well. It just demonstrates the strength of the product that Micron has developed, as well as the demand strength for these platforms for high-bandwidth memory for the future.
Harlan Sur
As we think about high-bandwidth memory, which obviously wasn’t a big driver even 18 to 24 months ago for the industry. And you layer on top of that the plethora of new entrants, all of these new AI compute platforms: NVIDIA H200, B200, all of the hyperscalers announcing new ASIC programs, right? And all of these next-gen architectures are driving higher HBM DRAM content.
You put all of this together, and from 2023, extrapolating through the next couple of years, our team forecasts an HBM bit shipment CAGR of about 160%. So, ’23 to ’25, a 160% bit shipment CAGR. The exit run rate in ’25, we’re forecasting, is about a 70% bit demand growth profile. What’s the Micron team’s view on the mid- to long-term bit demand CAGR for HBM?
Mark Murphy
Yes. So, maybe I’ll start, and Manish can build on it. On HBM, as far as industry share, it was between 1% and 2% of bits last year, and probably around 4% this year, or somewhat higher. As we look out the next few years, the share of bits in the industry will grow. We expect our HBM bits to grow at over a 50% CAGR over the next several years, or over the near term. And keep in mind, there’s this trade ratio of 3:1. So, the wafer capacity consumed for HBM is triple that rate.
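[A rough sketch of the trade-ratio arithmetic described above. The ~4% bit share and 3:1 ratio come from the discussion; the function and its name are illustrative assumptions, not Micron's model.]

```python
# Illustrative only: how a 3:1 wafer trade ratio amplifies HBM's
# share of wafer capacity relative to its share of industry bits.

def hbm_capacity_share(hbm_bit_share: float, trade_ratio: float = 3.0) -> float:
    """Fraction of wafer capacity consumed by HBM, given its share of
    industry bits and the wafer-capacity-per-bit ratio vs. standard DRAM."""
    hbm_capacity = hbm_bit_share * trade_ratio        # HBM wafers, in standard-DRAM-equivalent units
    non_hbm_capacity = 1.0 - hbm_bit_share            # remaining bits at a 1:1 ratio
    return hbm_capacity / (hbm_capacity + non_hbm_capacity)

# At roughly 4% of industry bits and a 3:1 trade ratio, HBM consumes
# about 11% of wafer capacity (0.12 / 1.08).
share = hbm_capacity_share(0.04)
```

The same mechanism explains the comment that HBM growth effectively raises the industry's capacity needs: each HBM bit displaces roughly three standard bits' worth of wafer starts.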
Harlan Sur
So, the 50% CAGR on HBM bits, is that from — let’s say from this year going forward, is that the way we should think about it? Maybe for the next few years?
Mark Murphy
In the near term.
Harlan Sur
In the near term. Okay, perfect. I’m going to move over to a different topic. But before that, I do want to open it up for questions. [Operator Instructions]. No questions? Okay.
As we think about your road map on HBM, the team started sampling your 12-high HBM3E back in March. Has the team begun qualifications of the 12-high solution? And what does the timing of an adoption and shipment curve look like for 12-high?
Manish Bhatia
So, 12-high is a unique product. It’s a 12-high stacked HBM product versus the current standard 8-high product that we’re in production on today. So, it represents 50% more DRAM content in the same number of sockets. It’s actually a significant demand growth driver for the industry and for Micron.
And you’re right, Harlan, we did start sampling that product earlier this quarter, and we are working with our customer to bring it into production. We expect it to really be a 2025 revenue driver, continuing to increase in penetration throughout calendar year ’25 and becoming an increasing part of the mix of the overall HBM market. All of these AI applications are so hungry for memory, and the best way for them to get more memory per GPU into the system is to go to this higher-density stacked memory. So, you’ll see that the higher-end applications will end up having deeper penetration of 12-high as we go through calendar year ’25.
Harlan Sur
As you’ve benchmarked your 12-high solution, obviously your competitors have competing solutions as well. One of the big differentiators on 8-high was better performance, 30% better power efficiency relative to competing solutions. Do you see this performance and power advantage continuing?
Manish Bhatia
Absolutely. I mean, the power performance comes from the base HBM die, which is based on our 1 beta process technology, which we believe is the best in the industry and well ahead of our competitors, and that allows us to deliver on those metrics. Along with using a similar packaging architecture, we have some novel IP in that packaging structure which we’re very confident is giving us some of those performance benefits as well. And so, we absolutely see our advantage continuing in HBM3E. It will be a mix of both 8-high and 12-high, and we expect to be very well positioned on both throughout the HBM3E lifetime.
Harlan Sur
And as the HBM technology matures, as you look at your road maps, do you guys anticipate a cost-down pace that is similar to your mainstream DRAM products, i.e., which is sort of mid- to high single-digit percentage sort of annualized cost-down reductions over a longer period of time for HBM?
Manish Bhatia
Well, I think that as we look forward, the mix of the HBM in the market will determine a little bit what the cost-down capabilities will be. Certainly, the underlying technology, we would expect to be able to have strong and competitive cost reductions as we transition more of our capacity to 1 beta, as we move to our 1 gamma node and others.
But the HBM will have a few different factors. One will be product mix, how much is 8 high, how much is 12 high? As we transition to HBM4, the performance parameters around that may drive different die size considerations as well, and certainly eventually HBM4E also. So, I think the cost-down part of HBM is still going to be determined a lot by the performance requirements of each generation as well as the mix of HBM.
But I think what’s really important about HBM is that it’s a value-based product. The fact that it’s higher cost today but immediately margin accretive, and that we project it to be margin accretive throughout its lifetime, means it’s really a high-value product. The performance and power benefits and the density that we’re able to provide are what our customers are hungry for, and what all these systems require to deliver optimal accuracy improvements through generative AI algorithms. And so, we’re really focused on delivering maximum value through power, performance and density across all the HBM products we’re developing.
Harlan Sur
Perfect. I’m going to start talking about some of the different end markets. Do we have any questions from the audience? Okay. From an end market perspective, there’s been a lot of debate about general purpose server demand this year, right? Continued concerns on AI spending crowding out general purpose compute spending. Within your view of mid- to high-single-digit growth in server shipments this year, are you forecasting general purpose, traditional server shipments growing? And the team did mention seeing a pickup in demand late in the prior quarter on traditional servers. Did that demand continue into this quarter? And is it cloud hyperscaler-driven, enterprise-driven, or just broad-based?
Mark Murphy
Towards the end of the quarter, we saw some signs of recovery on traditional servers. And CPU vendors have talked about a second-half recovery. So, as we’ve talked about several times, we see the market returning to growth there, and broad-based.
Harlan Sur
In PCs and smartphones, there is optimism on AI-driven semi content gains per system, which will, of course, drive higher memory requirements. PC and smartphone CPU vendors are bringing NPU-enabled CPU architectures to market, right? It will be a big part of the mix looking into next year. I assume you are getting some visibility on AI CPU mix for next year for PCs and smartphones. What does that mix look like, AI CPU versus non-AI? But more importantly, what are you seeing on applications and software development that’s actually going to take advantage of these integrated AI capabilities?
Mark Murphy
I mean, we’re seeing a lot of activity on the edge, with the market trying to work out a lot of factors: the size of algorithms, what sits on the cloud versus the device, dealing with battery life and all that. But we are seeing it, and we’ve announced some products that offer, for example in smartphones, more predictive features, more personalized experiences, more seamless interactions. So certainly, AI is going to play a positive role in helping drive smartphone and PC growth, especially in the refresh cycle.
We’ve said there are two dimensions here. One is the amount of AI content in an AI-rich device. We’ve said that in smartphones, we see 50% to 100% more DRAM, and in PCs, 40% to 80% more DRAM in an AI-rich device. And then there’s penetration: how quickly will markets adopt AI-rich features? We’re seeing it heading into flagships, and reports of one in three smartphones in ’25 being AI-enabled, so to speak. And then in PCs, probably over half by ’27. So, it’s happening, and it’s positive.
Manish Bhatia
I think, as Mark mentioned, what’s really exciting is this AI demand-driven cycle versus some of the other cycles the industry has had. I mean, when you think about the Internet and PC era in the late ’90s and early 2000s, the smartphone era in the late ’00s and early 2010s, even cloud in the late 2010s. Each of those was basically a new segment or a new application coming on to complement the existing segments that were already there, and that opened up new vectors of demand for memory and storage.
But with AI, it’s not just about the data center, obviously. Every single one of the applications we’re talking about is impacted and has a positive content growth story for DRAM and also NAND flash: smartphones and PCs on the edge; automotive, as we have increasing autonomous features built into automobiles, and even EVs with richer content and more AI-driven capability being introduced; and of course the data center as well.
So, AI is not just a single new demand driver, as other vectors have been over the last 20 years of the industry. It has a halo effect across all the different end markets we serve. And that’s why we’re hopeful that this cycle could be more durable.
Harlan Sur
Yes. And we did get a question from one of the web listeners that sort of tacks on to that: how much of an opportunity is your specialized LPDDR5X for data center applications? And is it exclusive to Micron?
Manish Bhatia
Well, I think we’ve talked about one engagement with a particular customer where we do have some unique features that we’ve developed for them. But broadly, I think we are starting to see more of these sort of supercomputer data center architectures that have combinations of high-bandwidth memory, high-capacity DDR5 modules, and LP high-density packages, all combined in these systems to provide memory for various different workloads, each optimized for different capabilities.
Obviously, LP, low power, has the benefit of providing low-power and high-density memory to all of these architectures, and that’s one of the reasons you’re starting to see it adopted more. And we will see it adopted more in data center architectures going forward.
Harlan Sur
Switching over to NAND. In addition to the improving industry demand and supply trends, I think one of the big Micron-specific inflections has been the strong increase in your enterprise and data center SSD share, right? In calendar ’23, Micron actually moved to being the third-largest enterprise SSD supplier globally. You’ve always had strong share in SATA-based data center SSDs; in fact, you exited last year with a record-high 25% share, the number 2 player in that market.
But on the newer and higher-volume PCIe-based data center SSDs, your share has actually gone from 3% to 4% in calendar ’21 to 10% to 12% exiting last year, with number 4 global share in PCIe for the full year. So, what has the team done to improve the competitiveness and performance of your enterprise SSD portfolio?
Manish Bhatia
Well, thanks for noting that. And it really is a tremendous execution story. I think it starts with the strategy that we outlined many, many years ago and have updated progress on since: focusing on high-value solutions.
So, we actually organized our SSD development teams around the goal of deeper enterprise and data center SSD penetration. We knew that was a high-value area. And as a relatively small player in terms of NAND bit supply, we knew we wanted to direct those bits to the highest-value homes.
Of course, for a couple of generations now, first our 176-layer and now our 232-layer, we have reached industry leadership in terms of NAND capability. So, we’re able to leverage that capability, combined with some specific actions we took on the SSD development side: developing our own custom controllers for PCIe and enhancing our firmware capability.
And specifically, instead of having enterprise and data center applications trail in terms of when we productize them relative to when the 232-layer node was introduced, we actually pulled that forward and targeted data center much earlier in the node’s life, faster than we had ever targeted before. That means the technology and manufacturing teams on the NAND side and the development teams on the SSD side were working hand-in-hand much earlier to target some of these terrific applications.
One product that I think we talked about on our call that’s getting great adoption is our 6500 high-capacity data center SSD, with up to 30 terabytes for the data center, which has been part of the story around our gaining share. So, it’s really about our strategy, and then execution to that strategy at all levels: the technology development, the ASIC controller, the firmware, and our collaboration with customers on specific applications where our NAND flash really shines.
Harlan Sur
Well, the good news is that the step-up in competitiveness, both technology as well as product, like you mentioned, the 30-terabyte high-capacity drive, means the team is bringing a very competitive set of products to the market right when there’s been a little bit of a lag, right?
The DRAM pull has always been strong from accelerated compute and AI, but it feels like SSD has been a little bit of a lag. But I think, as we’ve heard from you and some of your peers, as time progresses and we see more deployments of XPU-based server clusters focused on AI, whether it’s training or deployment and inferencing, the demand pull now for eSSD is really starting to show itself, right?
And so, the team bringing a competitive set of products and intercepting that AI curve as it starts to ramp is very timely. But do you get a sense that a lot of the demand you’re seeing for enterprise SSD is being driven by accelerated compute and AI applications?
Mark Murphy
Yes, we are. We’re seeing new data centers that are a very heavy mix of SSD, if not all SSD. And I think this whole discussion highlights the unique advantage that Micron has. I mean, we are hitting our stride on technology leadership, products and manufacturing, as Manish mentioned, on 1 beta and 232.
But our products are hitting the market at the right time, where the workloads for AI activities, the system requirements and the power constraints in the market are driving folks to differentiated products. And as you heard here, on some of the key products, HBM, low-power DRAM, and SSDs in the case of NAND, Micron is doing exceptionally well because of these macro themes.
Harlan Sur
SSD is a great example of the team bringing new technologies and performance-driven products to the market at the right time. From a core technology perspective, the move to EUV on your 1 gamma DRAM process is another example of that, right? You’re on track to start ramping 1 gamma EUV DRAM in calendar ’25. You were actually very smart in proving out and driving manufacturability using EUV on your 1 alpha and 1 beta processes. How are the current manufacturability metrics looking for 1 gamma? Does the team expect it can continue to drive a mid- to high-single-digit cost-down curve over time, even with EUV as part of the flow?
Manish Bhatia
Yes. So, we’re on track, as you mentioned, for 1 gamma in calendar 2025, and that will be our introduction of EUV. Everything is on track for that. We’ve demonstrated our utilization of EUV on both our 1 alpha and now our 1 beta technology as well. It’s not required for either of those. But in order to learn how to work with the systems, as well as the other elements of the solution that come together, the new reticles, the new resists and the tool, we have had significant production with EUV tools for the last year. So, we feel pretty good about our ability to deploy it in production on 1 gamma. That’s not going to be a challenge.
And I think the other thing that we feel really good about is the tools that we have purchased, we were — because of our decision to delay introduction and extend our multi-patterning immersion capabilities further, we were able to get better-performing tools from ASML.
So, the tools that we have, versus earlier generations, have better specs, better performance and better maintainability; they have higher uptime and are more serviceable. For all those reasons, we’re confident that our EUV insertion point is the right time, and that we’ll be successful with our 1 gamma ramp next year.
Harlan Sur
Shifting to the cross-cycle financial model. You laid out at your last Analyst Day a revenue CAGR target of high-single-digit percentage growth; operating margin, 30%; EBITDA margin, low-50%; CapEx intensity, mid-30s; and free cash flow margins greater than 10%. This was before the AI-driven memory demand dynamic, right? Have the incremental demand drivers and the potential to drive higher value-added products changed how the team thinks about its through-cycle revenue growth and profitability metrics?
Mark Murphy
There’s been no change to the model. But to your point, we’re very excited about the position we’re in: our advanced technology leadership, our product portfolio, how we’re operating in manufacturing, and our ability to take advantage of the market recovery here and the growth opportunity in front of us.
Harlan Sur
Great. Mark, Manish, thank you for participating today, and look forward to monitoring the progress and execution of the team this year. Thank you.
Manish Bhatia
Thanks, Harlan.
Mark Murphy
Thank you, Harlan.