World War Fee

Meta’s AI ambitions are going to cost more than expected thanks to increased competition and — who could have seen this coming? — the Trump administration’s obsession with tariffs, which is driving up the price tag of key components.
This revelation came during Zuckercorp’s Q1 earnings call this week, when CFO Susan Li warned investors that its infrastructure investments could end up costing Meta as much as $72 billion in 2025.
“We anticipate our full year 2025 capital expenditures, including principal payments on finance leases, will be in the range of $64 billion to $72 billion,” she said on Wednesday’s call. In January, the Llama model maker had pegged them at $60 billion to $65 billion. That’s a potential $7 billion jump when comparing the top ends of the two ranges.
“The higher cost we expect to incur for infrastructure hardware this year really comes from suppliers who source from countries around the world,” Li told analysts. “There’s just a lot of uncertainty around this given the ongoing trade discussions.” Meta, she added, is working to diversify its supply chains to mitigate some of these costs.
But it’s not just the tariffs. Li’s statements suggest that the company formerly known as Facebook isn’t just paying more for components destined for its AI datacenters, but is deploying more of them.
“This updated outlook reflects additional datacenter investments to support our AI efforts as well as an increase in the expected cost of infrastructure hardware. The majority of our CapEx in 2025 will continue to be directed to our core business,” she said.
The Register reached out to Meta for clarification on its AI spending costs this year; we’ll let you know if we hear anything back.
Meta has previously committed to bringing roughly a gigawatt of new compute capacity online and expects to have more than 1.3 million GPUs training and serving models by the end of the year.
If Meta is in fact looking to deploy even more compute this year, increased competition in the open model arena is no doubt weighing on the decision.
Not long ago, Meta held a nearly uncontested position, producing some of the most advanced open-weight models on the market. However, over the past year, Meta has faced a new wave of competition from rival model builders, most notably in China and Europe. And with OpenAI’s Sam Altman promising on X to release open-weight models as well, Zuck and Co could face even more competition before long.
Despite this, Meta continues to push its AI strategy forward, launching new models with built-in safety features, a standalone Meta AI app to compete with those from OpenAI and Anthropic, and a new API service running at least in some capacity on Groq and Cerebras accelerators.
Zuckerberg also hopes to pull ahead by letting AI build better AI, boasting that within 18 months the majority of the code for the Llama project will be written by LLMs rather than humans.
Meanwhile, work to bring additional capacity online is already underway. Back in December, Meta began construction of a 2.2-gigawatt AI supercomputing cluster in Richland Parish, Louisiana. The facility, Zuckerberg boasted at the time, is so large that it would cover a significant portion of Manhattan Island. It’ll take about five years to complete.
As we reported earlier this week, Meta may attempt to soften the blow of these costs by de-prioritizing its Reality Labs division, which continues to weigh heavily on its profits. Over the past five years, the division has bled more than $60 billion. ®
PS: The Lmarena benchmark has come under fire for allowing Big Tech, including Meta and Google, to test private variants of models before making results public. The project has defended its policies in a public statement.