I’ve just read the transcript on Seeking Alpha and listened to the call with Nebius at the UBS conference today. Marc Boroditsky (CRO) and Neil Doshi (VP and head of IR) were the speakers.
Here are some of the main points from this 30-minute interview:
- Nebius was founded by ex-Yandex leadership and started with a large, highly experienced engineering team of roughly 1,000 engineers.
- Nebius is positioning itself as a full hyperscaler rather than a simple GPU-rental provider.
- End-to-end engineering spans data center construction, custom racks and servers, cooling systems, and power delivery.
- There is a strong cultural focus on cost optimization, efficiency, and margin scalability.
- Customer support is staffed by AI engineers, providing unusually deep technical assistance around the clock.
- The platform includes a full cloud software layer: virtualization, managed Kubernetes, and a robust control plane.
- Nebius aims to enable seamless training, optimization, and inference on a single vertically integrated stack.
- Increasing investment in vertical-specific tooling (e.g., healthcare, life sciences, physical AI).
- Strategy includes both proprietary capabilities and a broad integrated partner ecosystem.
- No defined limit on how far up the software and tooling stack Nebius plans to expand.
- Customer strategy targets foundation-model builders, AI-native startups, and traditional enterprises.
- Management expects long-term demand to be dominated by enterprise adoption.
- Early enterprise traction is appearing in areas such as finance and high-value vertical use cases.
- Microsoft has signed a large, multiyear agreement for a massive Nebius cluster deployment. Microsoft can use Nebius capacity for Copilot growth, internal model training, and other compute-heavy initiatives.
- Nebius reports a sharp acceleration in compute demand across all segments.
- The company is seeing far more demand than supply, with pipeline growth significantly exceeding available capacity.
- AI startups are exceptionally well-funded and consuming large volumes of compute early.
- Many scaling AI companies are shifting from relying on foundation models to building their own specialized models.
- Software vendors are redesigning products to become fully AI-first.
- Inference workloads are expanding extremely rapidly and are outpacing industry-wide infrastructure build-out.
- Nebius argues that the industry is not overbuilding; supply is still lagging true demand. “The demand is there. I mean we’re seeing it in our pipeline. We’re experiencing it with our customers. We’re watching customers doubling every 6 to 8 weeks in some cases. We’re watching the uptake take place.”
- Infrastructure footprint expanding globally from a single initial site.
- Key bottleneck is the physical complexity and speed of constructing data center capacity, not chips alone.
- Nebius uses a parallelized, agile approach to building infrastructure rather than a standard cookie-cutter model.
- The company has raised substantial capital and maintains a healthy balance sheet.
- Additional liquidity can be unlocked from stakes in Avride, ClickHouse, and Toloka.
- Future funding strategies include asset-backed financing, corporate debt, and opportunistic equity.
- The core business has recently turned EBITDA profitable, with the full group nearing breakeven EBITDA.
- Nebius targets strong medium-term EBIT margins driven by scale, premium cloud revenue, operating leverage, and in-house hardware design efficiencies.
- The company maintains conservative financial practices, including stricter depreciation schedules than peers (four years versus six at CoreWeave).
For those of us who follow Nebius closely, there was not much we didn’t already know - but it is always good to hear these things afresh. Perhaps the biggest takeaway for me was that Nebius said demand for compute is multiples higher than the capacity it can currently bring online, and that the market is still underbuilt, not overbuilt.
We are still in the early innings.
Jonathan