After landing blockbuster deals with Microsoft and Meta worth over $20 billion combined, Nebius Group has positioned itself as the go-to infrastructure provider for tech giants desperate for GPU capacity.
The Amsterdam-based company reported that demand was so strong that the Meta contract had to be capped at the capacity Nebius had available.
But with the AI infrastructure market projected to reach nearly $750 billion by 2029, the question isn’t whether Nebius will sign more mega-deals—it’s who’s next in line.
The Selection Criteria
Nebius and its larger rival CoreWeave have seen strong demand this year as insatiable AI appetite has left even the biggest cloud companies, such as Microsoft and Amazon, with capacity constraints.
This supply crunch creates a perfect storm for Nebius to capture customers who share specific characteristics:
- High urgency: Companies racing to train next-generation AI models or facing immediate capacity shortages
- Deep pockets: Ability to commit billions in multi-year infrastructure contracts
- Strategic gaps: Organizations that either can’t build fast enough internally or need to supplement existing capacity
- Competitive pressure: Players in heated AI races who can’t afford to fall behind
Based on current market dynamics, capital deployment patterns, and strategic positioning, here are the five companies most likely to become Nebius’s next major customers.
1. OpenAI: The Most Obvious Choice
Likelihood: 85%
OpenAI’s 2024 training costs were expected to reach $3 billion, with inference costs at another $4 billion, and the company’s infrastructure needs are accelerating dramatically. Google has become OpenAI’s latest computing supplier, Microsoft has turned to CoreWeave for additional AI computing power, and OpenAI has signed its own direct agreement with CoreWeave worth billions of dollars.
Why They Need Nebius:
- In October, OpenAI’s CEO, Sam Altman, admitted that the company was not releasing products as often as it wished because it was facing “a lot of limitations” on its computing capacity
- Over the past two months, OpenAI has made four deals that could lead to the construction of 30 gigawatts of additional data center capacity
- Despite massive investments from Microsoft, capacity remains insufficient for their ambitious roadmap
The Deal Structure: Expect a $5-8 billion multi-year commitment, potentially structured similarly to the Microsoft deal—capacity reserved exclusively for OpenAI’s training and inference workloads. The announcement could come within the next 6-9 months as OpenAI scales GPT-5 development.
2. Anthropic: The Cautious Competitor
Likelihood: 75%
Anthropic’s total compute costs are estimated at around $2 billion, and most of that spending was likely for training, given that its API revenue should carry positive gross margins. Anthropic has announced plans to spend $50 billion on a U.S. artificial intelligence infrastructure build-out, starting with custom data centers in Texas and New York.
Why Nebius Makes Sense:
- Amazon has opened a dedicated data center campus for Anthropic on 1,200 acres in Indiana, while Anthropic has also expanded its compute deal with Google by tens of billions of dollars
- Despite these partnerships, Anthropic’s rapid enterprise growth (serving over 300,000 businesses) demands additional capacity
- Internal projections showed Anthropic expects to break even by 2028, well ahead of OpenAI, which projects $74 billion in operating losses that same year; this fiscal discipline makes Anthropic an attractive partner
The Deal Structure: A more measured $2-4 billion deal focusing on near-term capacity (12-24 months) while Anthropic builds out its own infrastructure. This would serve as bridge capacity, allowing Anthropic to maintain competitive velocity without over-committing capital.
3. xAI: Elon’s Insatiable Appetite
Likelihood: 70%
Elon Musk’s artificial intelligence startup xAI has raised $6 billion in new equity financing as it seeks to expand its supercomputer facility to house at least 1 million graphics processing units. At the Greater Memphis Chamber’s Annual Chairman’s Luncheon, xAI indicated that it plans to expand its Colossus site to incorporate “a minimum of one million GPUs”.
Why They’re Prime Candidates:
- xAI reportedly used 20,000 H100s to train Grok 2 and projected that up to 100,000 H100s would be used for Grok 3
- Grok 2 was reportedly trained on cloud infrastructure from Oracle, with Oracle inking a deal for OpenAI to soak up any GPU capacity not used by xAI
- Musk’s aggressive timeline and competitive intensity with OpenAI create urgency that Nebius can capitalize on
The Deal Structure: A $3-6 billion commitment with flexible scaling provisions. Given xAI’s rapid build-out in Memphis, they need immediate capacity while their own infrastructure scales. Nebius could provide 6-12 month burst capacity for Grok 3 and beyond.
4. Amazon Web Services: The Surprising Dark Horse
Likelihood: 60%
This pick may seem counterintuitive—AWS builds its own infrastructure—but the capacity crunch changes everything. In July, Google reported that “demand is so high for Google’s cloud services that it now amounts to a $106 billion backlog”. AWS faces similar constraints.
Why AWS Might Turn to Nebius:
- Amazon’s CFO said much of Amazon’s spending was going to the company’s custom AI chip, Trainium, along with other technology infrastructure, but custom chip production takes 2+ years
- AWS needs Nvidia GPU capacity NOW to serve enterprise customers and compete with Azure/Google Cloud
- Microsoft’s approach to data center capacity is a hybrid of building and leasing, while Meta, Google, and Amazon Web Services lean more toward building their own; that posture could shift
The Deal Structure: A smaller but strategically important $1-2 billion deal focused on specific geographic markets where AWS has capacity constraints. This would be positioned as “strategic partnership” rather than admission of infrastructure gaps.
5. Oracle: The Enterprise AI Infrastructure Play
Likelihood: 55%
OpenAI announced the $500 billion Stargate project in January and has since been executing on it, including a $300 billion deal with Oracle that adds 4.5 gigawatts of AI data center capacity. But Oracle is also building capacity for its own cloud customers.
Why Oracle Fits:
- Oracle serves as infrastructure for multiple AI labs but faces the same supply constraints as everyone else
- Analysts expect Amazon’s capex to grow 41% this year to $117 billion, slowing from 57% growth in 2024; Oracle faces similar build-out timelines
- Oracle could use Nebius capacity to accelerate time-to-market for enterprise AI customers
The Deal Structure: A unique wholesale arrangement: $1.5-3 billion where Oracle essentially resells Nebius capacity to its enterprise customers, particularly in regions where Oracle is building out new data centers. This would be positioned as “Oracle Cloud powered by Nebius infrastructure.”
The Wild Cards
Tesla: With autonomous driving requiring massive compute, Tesla could surprise everyone with a $2-3 billion deal focused on training and simulation workloads.
Saudi Arabia/UAE AI Initiatives: Reports suggest that the bulk of xAI’s $6 billion raise, roughly $5 billion, would come from sovereign funds in the Middle East. Sovereign wealth funds are aggressively building AI capabilities and may want dedicated infrastructure.
Mistral or Cohere: European AI champions could leverage Nebius’s European presence for sovereignty-compliant infrastructure.
The Timing Game
Nebius Group reported Q3 ’25 revenue of $146 million, up 355% YoY, but management said that it sold out of its available capacity in the third quarter. This creates a fascinating dynamic: Nebius must carefully sequence new deals as capacity comes online.
Nebius has 220 megawatts of connected power to data centers currently and is working to expand. New capacity deployments will likely determine the timing and size of the next major announcements.
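For a rough sense of what those figures imply, the back-of-envelope Python sketch below works out the prior-year quarter implied by 355% growth and an approximate ceiling on GPU count at 220 megawatts; the per-GPU power draw is an illustrative assumption, not a number Nebius has reported.

```python
# Back-of-envelope sketch of Nebius's reported scale.
# The per-GPU power draw is an illustrative assumption (GPU + host + cooling),
# not a figure disclosed by the company.

q3_2025_revenue_musd = 146.0   # reported Q3 '25 revenue, in $ millions
yoy_growth = 3.55              # reported 355% year-over-year growth

# Implied prior-year quarter: 146 / (1 + 3.55) ≈ $32M
implied_q3_2024_musd = q3_2025_revenue_musd / (1 + yoy_growth)

connected_power_mw = 220       # reported connected data center power
assumed_kw_per_gpu = 1.4       # assumption: facility power per deployed GPU

# Rough ceiling on how many GPUs the current footprint could power
approx_gpu_ceiling = connected_power_mw * 1_000 / assumed_kw_per_gpu

print(f"Implied Q3 '24 revenue: ~${implied_q3_2024_musd:.0f}M")
print(f"Approximate GPU ceiling at 220 MW: ~{approx_gpu_ceiling:,.0f}")
```

Even under that generous assumption, the current footprint supports on the order of 150,000 GPUs, a fraction of the million-GPU clusters a customer like xAI is planning, which is why the sequencing of new capacity matters so much.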
Expected Timeline:
- Q4 2025: Announcement of 1-2 new strategic customers as Vineland, NJ capacity ramps
- H1 2026: Potential Amazon or Oracle deal as enterprise demand accelerates
- H2 2026: Wild card announcement (potentially Middle East sovereign wealth or Tesla)
The Bottom Line
The compute bottleneck for AI is driving major tech companies to double down on AI infrastructure spending to scale revenue.
If OpenAI or Anthropic had double their current inference compute, their revenue could nearly double within a month, thanks to the ability to serve more users and improve product quality.
This creates a seller’s market for infrastructure providers like Nebius. The question isn’t whether these companies need more capacity—it’s whether Nebius can build fast enough to capture the opportunity.
Nebius itself needs more capital to provide GPU services to Microsoft: the company said it would raise $2 billion in debt and float more shares to fund the Microsoft deal.
The most likely scenario: Nebius announces 2-3 major deals worth $8-15 billion combined within the next 12 months, with OpenAI leading the pack. The real risk isn’t finding customers—it’s execution.
Can Nebius deliver on its ambitious buildout while maintaining the quality and reliability that attracted Microsoft and Meta in the first place?
That’s the $20 billion question investors are asking. And it’s why, despite 355% revenue growth, the stock remains under pressure. In the AI infrastructure gold rush, even triple-digit growth might not be enough.