Broadcom Inc. (NASDAQ:AVGO) Q2 2024 Earnings Conference Call June 12, 2024 5:00 PM ET
Company Participants
Ji Yoo - Head, IR
Hock Tan - President and CEO
Kirsten Spears - CFO
Charlie Kawwas - President, Semiconductor Solutions Group
Conference Call Participants
Vivek Arya - Bank of America Securities
Ross Seymore - Deutsche Bank
Stacy Rasgon - Bernstein
Harlan Sur - JPMorgan
Ben Reitzes - Melius Research
Toshiya Hari - Goldman Sachs
Blayne Curtis - Jefferies
Timothy Arcuri - UBS
Thomas O'Malley - Barclays
Karl Ackerman - BNP Paribas
CJ Muse - Cantor Fitzgerald
William Stein - Truist Securities
Operator
Welcome to Broadcom Inc. Second Quarter Fiscal Year 2024 Financial
Results Conference Call. At this time, for opening remarks and introductions, I
would like to turn the call over to Ji Yoo, Head of Investor Relations of
Broadcom Inc.
Ji Yoo
Thank you, Operator, and good afternoon, everyone. Joining me on today's
call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer
and Charlie Kawwas, President, Semiconductor Solutions Group.
Broadcom distributed a press release and financial tables after the market
closed, describing our financial performance for the second quarter of fiscal
year 2024. If you did not receive a copy, you may obtain the information from
the Investor section of Broadcom's website at Broadcom.com.
This conference call is being webcast live and an audio replay of the call can
be accessed for 1 year through the Investor section of Broadcom's website.
During the prepared comments, Hock and Kirsten will be providing details of
our second quarter fiscal year 2024 results, guidance for our fiscal year 2024,
as well as commentary regarding the business environment. We'll take
questions after the end of our prepared comments.
Please refer to our press release today and our recent filings with the SEC
for information on the specific risk factors that could cause our actual results
to differ materially from the forward-looking statements made on this call. In
addition to US GAAP reporting, Broadcom reports certain financial measures
on a non-GAAP basis. A reconciliation between GAAP and non-GAAP
measures is included in the tables attached to today's press release.
Comments made during today's call will primarily refer to our non-GAAP
financial results. I'll now turn the call over to Hock.
Hock Tan
Thank you, Ji. And thank you everyone for joining today. In our fiscal Q2 2024
results -- consolidated net revenue was $12.5 billion, up 43% year-on-year as
revenue included a full quarter of contribution from VMware. But if we exclude
VMware, consolidated revenue was up 12% year-on-year. And this 12%
organic growth in revenue was largely driven by AI revenue, which stepped up
280% year-on-year to $3.1 billion, more than offsetting continued cyclical
weakness in semiconductor revenue from enterprises and telcos.
Let me now give you more color on our two reporting segments. Beginning
with software. In Q2 infrastructure software segment revenue of $5.3 billion
was up 175% year-on-year and included $2.7 billion in revenue contribution
from VMware, up from $2.1 billion in the prior quarter. The integration of
VMware is going very well. Since we acquired VMware, we have modernized
the product SKUs from over 8,000 disparate SKUs to four core product
offerings and simplified the go-to-market flow, eliminating a huge amount of
channel conflicts.
We are making good progress in transitioning all VMware products to a
subscription licensing model. And since closing the deal, we have actually
signed up close to 3,000 of our largest 10,000 customers to enable them to
build a self-service virtual private cloud on-prem. Each of these customers typically signs up to a multi-year contract, which we normalize into an annual measure known as Annualized Booking Value, or ABV. This metric, ABV for
VMware products, accelerated from $1.2 billion in Q1 to $1.9 billion in Q2.
For reference, for the consolidated Broadcom software portfolio, ABV grew
from $1.9 billion in Q1 to $2.8 billion over the same period in Q2. Meanwhile,
we have integrated SG&A across the entire platform and eliminated
redundant functions. Year-to-date, we have incurred about $2 billion of
restructuring and integration costs and drove our spending run rate at
VMware to $1.6 billion this quarter, from what used to be $2.3 billion per
quarter pre-acquisition.
We expect spending will continue to decline towards a $1.3 billion run rate
exiting Q4, better than our previous $1.4 billion plan, and will likely stabilize at
$1.2 billion post-integration. VMware revenue in Q1 was $2.1 billion, grew to
$2.7 billion in Q2, and will accelerate towards a $4 billion per quarter run rate.
We therefore expect operating margins for VMware to begin to converge
towards that of classic Broadcom software by fiscal 2025.
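For illustration of the ABV normalization Hock describes above, here is a minimal Python sketch. The contract figures are hypothetical (per-contract terms are not disclosed on the call); only the mechanic of dividing a multi-year contract's total value by its term is shown.

```python
# Illustrative only: hypothetical contract values, not figures disclosed on the call.
def annualized_booking_value(total_contract_value: float, term_years: float) -> float:
    """Normalize a multi-year contract's total value into a per-year booking figure."""
    return total_contract_value / term_years

# Example: a hypothetical $30M, 3-year subscription contract contributes $10M of ABV.
print(annualized_booking_value(30_000_000, 3))  # 10000000.0
```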
Turning to semiconductors, let me give you more color by end markets.
Networking. Q2 revenue of $3.8 billion grew 44% year-on-year, representing
53% of semiconductor revenue. This was again driven by strong demand from
hyperscalers for both AI networking and custom accelerators. It's interesting
to note that as AI data center clusters continue to deploy, our revenue mix has
been shifting towards an increasing proportion of networking.
We doubled the number of switches we sold year-on-year, particularly the Tomahawk 5 and Jericho3, which we deployed successfully in close collaboration with partners like Arista Networks, Dell, Juniper, and Supermicro. Additionally, we also doubled our shipments of PCI Express switches and NICs in the AI
backend fabric. We're leading the rapid transition of optical interconnects in AI
data centers to 800 gigabit bandwidth, which is driving accelerated growth for
our DSPs, optical lasers, and PIN diodes. And we are not standing still.
Together with these same partners, we are developing the next generation
switches, DSP, and optics that will drive the ecosystem towards 1.6 terabit
connectivity to scale out larger AI accelerated clusters.
Talking of AI accelerators, you may know our hyperscale customers are
accelerating their investments to scale up the performance of these clusters.
And to that end, we have just been awarded the next generation custom AI
accelerators for these hyperscale customers of ours. Networking these AI
accelerators is very challenging, but the technology does exist today in Broadcom, with the deepest and broadest understanding of what it takes for complex, large workloads to be scaled out in an AI fabric. Proof in
point, seven of the largest eight AI clusters in deployment today use
Broadcom Ethernet solutions.
Next year, we expect all mega-scale GPU deployments to be on Ethernet. We
expect the strength in AI to continue, and because of that, we now expect
networking revenue to grow 40% year-on-year compared to our prior
guidance of over 35% growth. Moving to wireless. Q2 wireless revenue of
$1.6 billion grew 2% year-on-year, was seasonally down 19% quarter-on-
quarter and represents 22% of semiconductor revenue.
And in fiscal '24, helped by content increases, we reiterate our previous
guidance for wireless revenue to be essentially flat year-on-year. This trend is
wholly consistent with our continued engagement with our North American
customer, which is deep, strategic, and multiyear and represents all of our
wireless business. Next, our Q2 server storage connectivity revenue was
$824 million or 11% of semiconductor revenue, down 27% year-on-year. We
believe though, Q2 was the bottom in server storage. And based on updated
demand forecast and bookings, we expect a modest recovery in the second
half of the year. And accordingly, we forecast fiscal '24 server storage
revenue to decline around the 20% range year-on-year.
Moving on to broadband. Q2 revenue declined 39% year-on-year to $730
million and represented 10% of semiconductor revenue. Broadband remains
weak on the continued pause in telco and service provider spending. We
expect broadband to bottom in the second half of the year with a recovery in
2025. Accordingly, we are revising our outlook for fiscal '24 broadband
revenue to be down high 30s year-on-year from our prior guidance for a
decline of just over 30% year-on-year.
Finally, Q2 industrial rev -- resale of $234 million declined 10% year-on-year.
And for fiscal '24, we now expect industrial resale to be down double-digit
percentage year-on-year compared to our prior guidance for high single-digit
decline.
So to sum it all up, here's what we are seeing. For fiscal '24, we expect
revenue from AI to be much stronger at over $11 billion. Non-AI
semiconductor revenue has bottomed in Q2 and is likely to recover modestly
for the second half of fiscal '24.
On infrastructure software, we're making very strong progress in integrating
VMware and accelerating its growth. Pulling all these three key factors
together, we are raising our fiscal '24 revenue guidance to $51 billion. And
with that, let me turn the call over to Kirsten.
Kirsten Spears
Thank you, Hock. Let me now provide additional detail on our Q2 financial
performance, which included a full quarter of contribution from VMware.
Consolidated revenue was $12.5 billion for the quarter, up 43% from a year
ago. Excluding the contribution from VMware, Q2 revenue increased 12%
year-on-year. Gross margins were 76.2% of revenue in the quarter. Operating
expenses were $2.4 billion and R&D was $1.5 billion, both up year-on-year
primarily due to the consolidation of VMware.
Q2 operating income was $7.1 billion and was up 32% from a year ago with
operating margin at 57% of revenue. Excluding transition costs, operating
profit of $7.4 billion was up 36% from a year ago, with operating margin of
59% of revenue. Adjusted EBITDA was $7.4 billion or 60% of revenue. This
figure excludes $149 million of depreciation. Now a review of the P&L for our
two segments, starting with semiconductors. Revenue for our semiconductor
solutions segment was $7.2 billion and represented 58% of total revenue in
the quarter. This was up 6% year-on-year.
Gross margins for our semiconductor solutions segment were approximately
67%, down 370 basis points year-on-year, driven primarily by a higher mix of
custom AI accelerators. Operating expenses increased 4% year-on-year to
$868 million on increased investment in R&D, resulting in semiconductor
operating margins of 55%.
Now moving on to infrastructure software. Revenue for infrastructure software
was $5.3 billion, up 170% year-on-year, primarily due to the contribution of
VMware and represented 42% of revenue. Gross margin for infrastructure
software were 88% in the quarter, and operating expenses were $1.5 billion in
the quarter, resulting in infrastructure software operating margin of 60%.
Excluding transition costs, operating margin was 64%.
Now moving on to cash flow. Free cash flow in the quarter was $4.4 billion
and represented 36% of revenues. Excluding cash used for restructuring and
integration of $830 million, free cash flows of $5.3 billion were up 18% year-on
-year and represented 42% of revenue. Free cash flow as a percentage of
revenue has declined from 2023 due to higher cash interest expense from
debt related to the VMware acquisition and higher cash taxes due to a higher
mix of US income and the delay in the reenactment of Section 174.
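As a rough cross-check of the free cash flow arithmetic above, a minimal sketch using the rounded amounts stated on the call (because the inputs are rounded to one decimal, the first ratio prints about 35% versus the quoted 36%):

```python
# Rounded figures as stated on the call, in billions of dollars.
revenue = 12.5
fcf_reported = 4.4                 # free cash flow in the quarter
restructuring_cash = 0.83          # cash used for restructuring and integration

fcf_ex_restructuring = fcf_reported + restructuring_cash   # ~5.2, quoted as $5.3 billion
print(f"{fcf_reported / revenue:.0%}")           # ~35% of revenue (quoted as 36%)
print(f"{fcf_ex_restructuring / revenue:.0%}")   # ~42% of revenue
```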
We spent $132 million on capital expenditures. Days sales outstanding were
40 days in the second quarter, consistent with 41 days in the first quarter. We
ended the second quarter with inventory of $1.8 billion down 4% sequentially.
We continue to remain disciplined on how we manage inventory across our
ecosystem. We ended the second quarter with $9.8 billion of cash and $74
billion of gross debt. The weighted average coupon rate and years to maturity
of our $48 billion in fixed rate debt is 3.5% and 8.2 years respectively.
The weighted average coupon rate and years to maturity of our $28 billion in
floating rate debt is 6.6% and 2.8 years, respectively. During the quarter, we
repaid $2 billion of our floating rate debt, and we intend to maintain this
quarterly repayment of debt throughout fiscal 2024. Turning to capital
allocation. In the quarter, we paid stockholders $2.4 billion of cash dividends
based on a quarterly common stock cash dividend of $5.25 per share.
In Q2, non-GAAP diluted share count was 492 million as the 54 million shares
issued for the VMware acquisition were fully weighted in the second quarter.
We paid $1.5 billion withholding taxes due on vesting of employee equity,
resulting in the elimination of 1.2 million AVGO shares. Today, we are
announcing a 10-for-1 forward stock split of Broadcom's common stock to
make ownership of Broadcom stock more accessible to investors and to
employees.
Our stockholders of record after the close of market on July 11, 2024, will
receive an additional nine shares of common stock after the close of market
on July 12, with trading on a split-adjusted basis expected to commence at
market open on July 15, 2024. In Q3, reflecting a post-split basis, we expect
share count to be approximately 4.92 billion shares.
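For illustration, the split mechanics described above work out as in this minimal sketch (using the Q2 non-GAAP diluted share count stated on the call; the Q3 post-split guidance is roughly that count times ten):

```python
# 10-for-1 forward split: each share held becomes ten, i.e. nine additional shares per share.
split_ratio = 10
q2_diluted_shares = 492_000_000          # Q2 non-GAAP diluted share count, pre-split

additional_shares_per_share = split_ratio - 1              # 9
approx_post_split_count = q2_diluted_shares * split_ratio  # 4,920,000,000 ~ 4.92 billion
print(additional_shares_per_share, approx_post_split_count)
```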
Now on to guidance. We are raising our guidance for fiscal year 2024
consolidated revenue to $51 billion and adjusted EBITDA to 61% of revenue. For
modeling purposes, please keep in mind that GAAP net income and cash
flows in fiscal year 2024 are impacted by restructuring and integration-related
cash costs due to the VMware acquisition. That concludes my prepared
remarks. Operator, please open up the call for questions.
Question-and-Answer Session
Operator
Thank you. [Operator Instructions] And our first question will come from the
line of Vivek Arya with Bank of America. Your line is open.
Vivek Arya
Thanks for taking my question. Hock, I would appreciate your perspective on
the emerging competition between Broadcom and NVIDIA across both
Accelerators and Ethernet switching. So on the Accelerator side, they are going to launch their Blackwell product to many of the same customers where you have a very large position in custom compute. So I'm curious how you think customers are going to make that allocation decision, and just broadly what the visibility is.
And then I think Part B of that is as they launch their Spectrum-X Ethernet
switch, do you think that poses increasing competition for Broadcom and the
Ethernet switching side in AI for next year? Thank you.
Hock Tan
Very interesting question, Vivek. On AI accelerators, I think we are operating on a different -- to start with, a different scale, much as a different model. The GPU, which is the AI accelerator of choice in a merchant environment, is something that is extremely powerful as a model. It's something that NVIDIA operates in, in a very, very effective manner.
We don't even think about competing against them in that space, not in the
least. That's where they're very good at and we know where we stand with
respect to that. Now what we do for very selected, or selective, hyperscalers is, if they have the scale and the skills to try to create silicon solutions, which are AI accelerators to do particular, very complex AI workloads, we are happy to use our IP portfolio to create those custom ASIC AI accelerators. So I do not see them as truly competing against each other. And far be it for me to say I'm trying to position myself to be a competitor on basically GPUs in this market. We're not. We are not a competitor to them. We don't try to be, either.
Now on networking, maybe that's different. But again, they may be approaching it from a different angle. We are, as I indicated all along, very deep in Ethernet; we've been doing Ethernet networking for over 25 years. And we've gone through a lot of market
transitions, and we have captured a lot of market transitions from cloud-scale
networking to routing and now AI. So it is a natural extension for us to go into
AI. We also recognize that, being the merchant AI compute engine of choice in the ecosystem, which is GPUs, they are trying to create a platform that is probably end-to-end, very integrated.
We take the approach that we don't do those GPUs, but we enable the GPUs
to work very well. So if anything else, we supplement and hopefully
complement those GPUs with customers who are building bigger and bigger
GPU clusters.
Vivek Arya
Thank you.
Operator
One moment for our next question, and that will come from the line of Ross
Seymore with Deutsche Bank. Your line is open.
Ross Seymore
Hi guys. Thanks for taking my question. I want to stick on the AI theme, Hock.
The strong growth that you had in the quarter, the 280% year-over-year, could
you delineate a little bit between if that's the compute offload side versus the
connectivity side? And then as you think about the growth for the full year,
how are those split in that realm as well? Are they kind of going hand-in-
hand? Or is one side growing significantly faster than the other, especially
with the I guess, you said the next-generation accelerators are now going to
be Broadcom as well?
Hock Tan
Well, to answer your question on the mix, you are right. It's something we don't really predict very well, nor understand completely except in hindsight. Because it's tied, to some extent, to the cadence of deployment: when they put in the AI accelerators versus when they put in the infrastructure that ties it together, the networking. And we don't really quite understand it 100%. All we know is it used to be 80% accelerators, 20% networking. It's now running closer to two-thirds accelerators, one-third networking, and we'll probably head towards 60%-40% by the close of the year.
Ross Seymore
Thank you.
Operator
Thank you. One moment for our next question. And that will come from the
line of Stacy Rasgon with Bernstein. Your line is open.
Stacy Rasgon
Hi, guys. Thanks for taking my question. I wanted to ask about the $11 billion
AI guide. You'd be at $11.6 billion even if you didn't grow AI from the current
level in the second half. And it feels to me like you're not suggesting that. It
feels to me like you think you could be [guided] (ph). So why wouldn't that AI
number be a lot more than $11.6 billion? It feels like it ought to be. Or am I
missing something?
Hock Tan
Because I guided just over $11 billion, Stacy. It could be what you think it is. It's
-- quarterly shipments get sometimes very lumpy. And it depends on rate of
deployment, depends on a lot of things. So you may be right. You may
estimate it better than I do, but the general trajectory is getting better.
Stacy Rasgon
Okay. So I guess again, how do I -- are you just suggesting that, that more
than $11 billion is sort of like the worst it could be because that would just be
flat at the current levels, but you're also suggesting that things are getting
better into the back half?
Hock Tan
Correct.
Stacy Rasgon
Okay. So I guess we just take that, that's a very -- if I'm reading it wrong,
that's just a very conservative number?
Hock Tan
That's the best forecast I have at this point, Stacy.
Stacy Rasgon
All right. Okay, Hock, thank you. I appreciate it.
Hock Tan
Thank you.
Operator
One moment for our next question, and that will come from the line of Harlan
Sur with JPMorgan. Your line is open.
Harlan Sur
Yeah, good afternoon. Thanks for taking my question. Hock, on cloud and AI
networking silicon, good to see that the networking mix is steadily increasing.
Like clockwork, the Broadcom team has been driving a consistent two year
cadence, right of new product introductions, Trident, Tomahawk, Jericho
family of switching and routing products for the past seven generations. You
layer on top of that your GPU -- TPU customers are accelerating their
cadence of new product introductions and deployments of their products.
So is this also driving faster adoption curve for your latest Tomahawk and
Jericho products? And then maybe just as importantly, like clockwork, it has been two years since your Tomahawk 5 product introduction, which, if I look back historically, means you have silicon and are getting ready to introduce your next-generation three-nanometer Tomahawk 6 products, which would, I think, put you two years to three years ahead of your competitors. Can you just give us an update there?
Hock Tan
Harlan, you're pretty insightful there. Yes, we launched Tomahawk 5 in 2023.
So you're right, late 2025 is about the time we should be coming out with Tomahawk 6, which is the 100-terabit switch, yes.
Harlan Sur
And is the -- is this acceleration of cadence by your GPU and TPU partners, is
that also what's kind of driving the strong growth in the networking products?
Hock Tan
Well, you know what, sometimes you have to let things take their time. But it's a two-year cadence, so we're right on. Late 2023 was when we shipped out Tomahawk 5, and adoption, you're correct, with AI it has been tremendous, because it ties in with the need for very large bandwidth in the networking, in the fabric for AI clusters, AI data centers. But regardless, we have always targeted Tomahawk 6 to be out two years after that, which should put it into late '25.
Harlan Sur
Okay, thank you Hock.
Operator
Thank you. One moment for our next question, and that will come from the
line of Ben Reitzes with Melius. Your line is open.
Ben Reitzes
Hi, thanks a lot. And congrats on the quarter and guide. Hock, I wanted to talk
a little bit more about VMware. Just wanted to clarify if it is indeed going better
than expectations. And how would you characterize the customer willingness
to move to subscription? And also just a little more color on Cloud Foundation.
You've cut the price there, and are you seeing that beat expectations? Thanks
a lot.
Hock Tan
Thanks, and thanks for your kind regards on the quarter. But it's -- as far as
VMware is concerned, we're making good progress. The journey is not over
by any means, but it is very much to expectation. Moving to
subscription, well, in VMware we are very slow compared to, I mean a lot of
other guys, Microsoft, Salesforce, Oracle, who have already been pretty much
in subscription. So VMware is late in that process. But we're trying to make up
for it by offering it, and offering it in a very, very compelling manner, because subscription is the right thing to do, right?
It's a situation where you put out your product offering, and you update it,
patch it, but update it feature-wise, everything as capabilities on a continual
basis, almost like getting your news on an ongoing basis, subscription online
versus getting it in a printed manner once a week. That's how I compare
perpetual to subscription. So it is very interesting for a lot of people who want to get on. And so, to no surprise, they are getting on very well. The big
selling point we have as I indicated, is the fact that we are not just trying to
keep customers kind of stuck on just server or compute virtualization.
That's a great product, great technology, but it's been out for 20 years. Based
on what we are offering now at a very compelling, very attractive price point, the whole software stack uses vSphere and its basic, fundamental technology to virtualize networking, storage, operation and management across the entire data center and create this self-service private cloud.
And thanks for saying it, you're right, and we have priced it down to the point
where it is comparable with just compute virtualization. So yes, that is getting
a lot of interest, a lot of attention from the customers we have signed up, who would like the ability to deploy their own private cloud on-prem. As a nice complement, maybe even an alternative or hybrid to
public clouds, that's the selling point, and we are getting a lot of interest from
our customers in doing that.
Ben Reitzes
Great. And it's on track for $4 billion by the fourth quarter still, which is
reiterated?
Hock Tan
Well, I didn't give a specific time frame, did I? But it's on track as we see this
process growing towards a $4 billion quarter.
Ben Reitzes
Okay, thanks a lot Hock.
Hock Tan
Thanks.
Operator
Thank you. One moment for our next question, and that will come from the
line of Toshiya Hari with Goldman Sachs. Your line is open.
Toshiya Hari
Hi, thank you so much for taking my question. I guess kind of a follow-up to
the previous question on your software business. Hock, you seem to have
pretty good visibility into hitting that $4 billion run rate over the medium term,
perhaps. You also talked about your operating margin in that business
converging to classic Broadcom levels. I know the integration is not done and
you're still kind of in debt paydown mode. But how should we think about your
growth strategy beyond VMware? Do you think you have enough drivers, both
on the semiconductor side and the software side to continue to drive growth
or is M&A still an option beyond VMware? Thank you.
Hock Tan
Interesting question. And you're right. As I indicated in my remarks, even excluding the contribution from VMware this past quarter, where we have AI helping us but non-AI semiconductors sort of bottoming out, we're able to show 12% organic growth year-on-year. So I almost have to ask, do we need to rush to buy another company? The answer is no. But all options are
always open because we are trying to create the best value for our
shareholders who have entrusted us with the capital to do that.
So I would not discount that alternative because our strategy, our long-term
model has always been to grow through a combination of acquisition, but also
on the assets we acquire to really improve, invest, and operate them better to
show organic growth as well. But again, organic growth often enough is
determined very much by how fast your market would grow. So we do look
towards acquisitions now and then.
Toshiya Hari
All right. Thank you.
Operator
Thank you. One moment for our next question, and that will come from the
line of Blayne Curtis with Jefferies. Your line is open.
Blayne Curtis
Hi, thanks for taking my question. I wanted to ask you Hock, on the
networking business kind of ex AI. Obviously, I think there's an inventory
correction the whole industry is seeing. But just kind of curious, I don't think
you mentioned that it was at a bottom. So just the perspective, I think it's
down about [60%] (ph) year-over-year. Is that business finding a bottom? I
know you said the overall semi business -- non-AI -- should see a recovery. Are you expecting any there? Any perspective on just customer inventory levels in that segment?
Hock Tan
We see it behaving. I didn't particularly call it out, obviously because more
than anything else, I kind of link it very much to server storage, non-AI that is.
And we called server storage as at a bottom in Q2, and we called it to recover modestly in the second half of the year. We see the same thing in networking, which
is a combination of enterprise networking, as well as the hyperscalers who run
their traditional workloads on those, though it's hard to figure out sometimes.
But it is. So we see the same trajectory as we are calling out on server
storage.
Blayne Curtis
Okay, thank you.
Operator
Thank you. One moment for our next question, and that will come from the
line of Timothy Arcuri with UBS. Your line is open. Mr. Arcuri, your line is
open.
Timothy Arcuri
Hi, sorry. Thanks. Hock, is there a way to sort of map GPU demand back to
your AI networking opportunity? I think I've heard you say in the past that if
you spent $10 billion on GPU compute, you need to spend another $10 billion
on other [infrastructure] (ph), most of which is networking. So I'm just kind of
wondering if when you see these big GPU numbers, is there sort of a rule of
thumb that you use to map it back to what the opportunity will be for you?
Thanks.
Hock Tan
There is, but it's so complex, I stopped creating such a model, Tim. I've said it. But there is, because one would say that for every -- you would almost say, for every $1 billion you spend on GPUs, you probably would spend a certain amount on networking. And if we include the optical interconnects as part of it, though we are not totally in that market except for the components like DSPs, lasers, and PIN diodes that go into those high-bandwidth optical interconnects -- but if you just take optical interconnects in totality, switching, all the networking components that attach themselves to clustering a bunch of GPUs, you probably would say that about 25% of the value of the GPU goes to networking.
Now not entirely all of it is my available market. I don't do the optical connects,
but I do the few components I talked about in it. But roughly, the simple way to
look at it is probably about 25%, maybe 30% of all these infrastructure
components is kind of attached to the GPU value point itself. But having said that, we are never that precise, and deployment is never timed the same way.
So you may see the deployment of GPU or purchase of GPU much earlier.
And the networking comes later or sometimes less the other way around,
which is why you're seeing the mix going on within my AI revenue mix. But
typically, you run towards that range over time.
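As a rough illustration of the rule of thumb Hock sketches here (about 25%, maybe 30%, of GPU value attaching to networking and related infrastructure over time), a minimal sketch with a hypothetical spend figure:

```python
# Hypothetical illustration of the ~25-30% attach-rate rule of thumb described above.
def networking_attach_estimate(gpu_spend: float, attach_rate: float = 0.25) -> float:
    """Estimate networking/infrastructure spend attached to a given GPU spend."""
    return gpu_spend * attach_rate

gpu_spend = 10e9  # a hypothetical $10 billion of GPU purchases
low = networking_attach_estimate(gpu_spend, 0.25)
high = networking_attach_estimate(gpu_spend, 0.30)
print(f"Attached networking spend: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B (timing can lag or lead the GPUs)")
```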
Timothy Arcuri
Perfect Hock, thank you so much.
Operator
Thank you. One moment for our next question, and that will come from the
line of Thomas O'Malley with Barclays. Your line is open.
Thomas O’Malley
Hi, guys. Thanks for taking my question. And nice results. My question is in regards to the custom AI ASIC business. Hock, you've had a long run here of a very successful business, particularly with one customer. If you look in the market
today, you have a new entrant who's playing with different customers. And I
know that you said historically, that's not really a direct competitor to you. But
could you talk about what differentiates you from the new entrant in the
market as of late? And then there's been profitability questions around the
sustainability of gross margins longer term. Can you talk about if you see any
increased competition? And if there's really areas that you would deem more
or less defensible in your profile today? And if you would see kind of that
additional entrant maybe attack any of those in the future?
Hock Tan
Let me take the second part first, which is our AI -- custom AI accelerator
business. It is a very profitable business, and let me put it to scale -- let's examine it from a model point of view. I mean, each of these AI accelerators is no different from a GPU. The way these large language models get run on these accelerators, no one single accelerator, as you know, can run these big large language models. You need multiples of them no matter how powerful those accelerators are.
But also, the way the models are run, there is a lot of memory access, a lot of memory requirements. So each of these accelerators comes with a large amount of cache memory, as you call it, what you guys probably now know as HBM,
high-bandwidth memory specialized for AI accelerators or GPUs. So we're
supplying both in our custom business.
And on the logic side of it, where the compute function is done on the chips, the margins there are no different than the margins in most any semiconductor silicon chip business. But you attach to it a huge amount of memory, and that memory comes from a third party. There are a few memory makers who make this specialized part. We don't do margin stacking on that part. So almost by basic math, that will dilute the margin of these AI accelerators when you sell them with memory, which we do. It does push revenue somewhat higher, but it dilutes the margin.
But regardless, the spend, the R&D, the OpEx that goes to support this, as a percent of the revenue, which is a higher revenue, is that much less. So on an
operating margin level, this is easily as profitable, if not more profitable, given
the scale that each of those custom AI accelerators can go up to. It's even
better than our normal operating margin scale. So that's the return on
investment that attracts and keeps us going at this game. And this is more
than a game. It is a very difficult business. And to answer your first question,
there is only one Broadcom, period.
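To make the margin arithmetic in Hock's answer concrete, here is a minimal sketch with entirely hypothetical numbers. It shows the two effects he describes: third-party HBM passed through with no margin stacking dilutes the blended gross-margin percentage, while the supporting R&D/OpEx becomes a smaller percentage of the now-larger revenue base.

```python
# Entirely hypothetical numbers, for illustration of the dilution effect only.
logic_revenue, logic_cost = 100.0, 35.0   # accelerator logic at a typical silicon gross margin (~65%)
hbm_revenue, hbm_cost = 40.0, 40.0        # third-party HBM passed through with no margin stacking
opex = 20.0                               # R&D / OpEx supporting the program

revenue_with_memory = logic_revenue + hbm_revenue
gm_logic_only = (logic_revenue - logic_cost) / logic_revenue           # 65% gross margin on logic alone
gm_with_memory = (logic_revenue - logic_cost) / revenue_with_memory    # ~46%: diluted by pass-through HBM
opex_pct_logic_only = opex / logic_revenue                             # 20% of revenue
opex_pct_with_memory = opex / revenue_with_memory                      # ~14%: fixed spend on a larger base
print(f"{gm_logic_only:.0%} {gm_with_memory:.0%} {opex_pct_logic_only:.0%} {opex_pct_with_memory:.0%}")
```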
Thomas O'Malley
Thanks Hock.
Operator
Thank you. One moment for our next question, and that will come from the
line of Karl Ackerman with BNP. Your line is open.
Karl Ackerman
Hi, thank you. Good afternoon. Hock, your networking switch portfolio with
Tomahawk and Jericho chipsets allows hyperscalers to build AI clusters using
either a switch-scheduled or endpoint-scheduled network. And that, of course
is unique among competitors. But as hyperscalers seek to deploy their own
unique AI clusters, are you seeing a growing mix of white-box networking
switch deployments? I ask because, while your custom silicon business continues to broaden, it would be helpful to better understand the growing mix of your $11 billion AI compute and networking portfolio combined this year. Thank you.
Hock Tan
Let me have Charlie address this question. He's the expert.
Charlie Kawwas
Yes. Thank you, Hock. So two quick things on this. One is the – you are
exactly right that the portfolio we have is quite unique in providing that
flexibility. And by the way, this is exactly why Hock, in his statements earlier
on, mentioned that seven out of the top eight hyperscalers use our portfolio.
And they use it specifically because it provides that flexibility. So whether you
have an architecture that's based on an endpoint and you want to actually
build your platform that way or you want that switching to happen in the fabric
itself, that's why we have the full end-to-end portfolio. So that actually has
been a proven differentiator for us.
And then on top of that, we've been working, as you know, to provide a complete network operating system that's open, on top of that, using SONiC and SAI, which has been deployed in many of the hyperscalers. And so the
combination of the portfolio plus the stack really differentiates the solution that
we can offer to these hyperscalers. And if they decide to build their own NICs, their own accelerators that are custom, or use standard products, whether from Broadcom or others, that platform, that portfolio of infrastructure switching gives you that full flexibility.
Karl Ackerman
Thank you.
Operator
Thank you. One moment for our next question, and that will come from the
line of C.J. Muse with Cantor Fitzgerald. Your line is open.
CJ Muse
Yeah. Good afternoon. Thank you for taking my question. I was hoping to ask
two part software question. So excluding VMware, your Brocade, CA, and
Symantec business now running $500 million higher for the last two quarters.
So curious, is that the new sustainable run rate or were there onetime events
in both January and April that we should be considering?
And then the second question is as you think about VMware Cloud
Foundation adoption, are you seeing any sort of crowding out of spending, like other software guys are seeing as they repurpose their budgets to AI? Or is that business so much less discretionary that it's just not an impact to you? Thanks
so much.
Hock Tan
Well, on the second one, I don't know about any crowding out, to be honest.
It's not. What we are offering, obviously, is something that they would like to be able to do themselves, which is they're already spending on building their own on-prem data centers. And the typical approach people take, which a lot of enterprises have taken historically and continue to take today, and most people do, is they have best of breed.
What I mean is they create a data center where compute is a separate category, the best compute there is, and they often enough use vSphere for compute virtualization to improve productivity, but best of breed there. And best of breed on networking, and best of breed on storage, with a common management and operations layer, which very often is also VMware, we realize. And what we're trying to say is this mixed bag, what they see -- this mixed-bag, best-of-breed data center, very heterogeneous -- is not a highly resilient data center.
I mean, you have a mixed bag. So when it goes down, where do you quickly find the root cause? Everybody is pointing fingers at the other. So you've got a problem: not very resilient and not necessarily secure, between bare metal on one side and software on the other side.
So it's natural thinking on the part of many CIOs we talk to, to say, hey, I want to create one common platform as opposed to just best-of-breed of each. So that gets us into that. So if it is a greenfield, that's not bad; they start from scratch. If it's a brownfield, that means they have existing data centers they're trying to upgrade, and sometimes that's more challenging for us to get adopted.
So I'm not sure there's a crowding out here. There's some competition,
obviously, on greenfield, where they can spend their budget on an entire
platform versus best-of-breed. But on the existing data center where you're
trying to upgrade, that's a trickier thing to do. And it cuts the other way as well
for us. So that's how I see it. So in that sense, the best answer is I don't think we're seeing any level of crowding out that is significant enough for me to mention.
In terms of the revenue mix, no, Brocade is having a great, great year so far and is still chugging along. But will that sustain? Hell no, you know that.
Brocade goes through cycles like most enterprise purchases. So we're
enjoying it while it lasts.
CJ Muse
Thank you.
Hock Tan
Thanks.
Operator
Thank you. And we do have time for one final question, and that will come
from the line of William Stein with Truist Securities. Your line is open.
William Stein
Great. Thanks for squeezing me in. Hock, congrats on the yet another great
quarter and a strong outlook in AI. I also want to ask about something you
mentioned with VMware. In your prepared remarks, you highlighted that
you've eliminated a tremendous amount of channel conflict. I'm hoping you
can linger on this a little bit and clarify maybe what you did. And specifically
also what you did in the heritage Broadcom software business, where I think
historically, you've shied away from the channel. And there was an idea that
perhaps you'd reintroduce those products to the channel through a more
unified approach using VMware's channel partners or resources. So any sort
of clarification here, I think, would be helpful.
Hock Tan
Yes, thank you. That's a great question. Yes, VMware taught me a few things.
They have 300,000 customers, 300,000. That's pretty amazing. And we look
at it. I know under CA, we took a position that let's pick an A-list strategic guy
and focus on it. I can't do that in VMware. I approached it differently. And I started to learn the value of a very strong bunch of partners they have, which is a network of distributors and something like 15,000 VARs, value-added resellers, supported by these distributors.
So we have doubled down and invested in this reseller network in a big way for VMware. It's a great move, I think, but we're only six months into the game. And we are seeing a lot more velocity out of it. Now these resellers, having said that, tend
to be very focused on a very long tail of their 300,000 customers. The largest
10,000 customers of VMware are large enterprises who tend to -- they are
very large enterprises, the largest banks, the largest health care companies.
And their view is I want very bespoke service, support, engineering solutions
from us. So we've created a direct approach, supplemented with the VAR of
choice where they need to. But on the long tail of 300,000 customers, they get a lot of services from the resellers, the value-added resellers, in their own way. So we now strengthen that whole network of resellers so that they can go direct, managed and supported financially by the distributors.
And we don't try to challenge those guys. At the end of the day, the customer chooses where they would like to be supported. So we kind of simplified this, together with the number of SKUs they have. In the past, unlike what we're trying to do here, everybody was a partner. I mean, you're talking a full range of partners. And whoever made the biggest deal got the lowest -- the partner that made the biggest deal got the biggest discount, the lowest price. And they were out there basically kind of creating a lot of channel chaos and conflict in the marketplace.
Here, we don't. The customers, as I am aware, can take it direct from VMware through our direct sales force, or they can easily move to the resellers and get it that way. And as a third alternative, which we offer if they choose not to -- they want to run their applications on VMware and they want to run them efficiently on a full stack -- they have the choice now of going to a hosted environment managed by a network of managed service providers, which we set up globally, that will invest in and operate the infrastructure. And these enterprise customers just run their workloads in it and get it as a service, basically VMware as a service. That's a third alternative, and we are careful to make it very distinct and differentiated for our end-use customers. All three are available; it's how they choose to consume our technology.
William Stein
Great. Thank you.
Operator
Thank you. I would now like to hand the call over to Ji Yoo, Head of Investor
Relations, for any closing remarks.
Ji Yoo
Thank you, Cherie. Broadcom currently plans to report its earnings for the
third quarter of fiscal '24 after close of market on Thursday, September 5,
2024. A public webcast of Broadcom's earnings conference call will follow at
2:00 p.m. Pacific Time. That will conclude our earnings call today. Thank you
all for joining. Operator, you may end the call.
Operator
Thank you all for participating. This concludes today's program. You may now
disconnect.
",,,2024-07-25 21:01:29.337394