
Nebius Launches Blackwell Ultra Cloud Platform With Real-Time GPU Visibility Tools

MarketDash Editorial Team
2 hours ago
Nebius Group unveils AI Cloud 3.1, pairing Nvidia's next-generation Blackwell Ultra infrastructure with capacity management tools designed for enterprise-scale AI deployment and making it the first European cloud provider to run both HGX B300 and GB300 NVL72 systems in production.

Nebius Group NV (NBIS) is betting that enterprises moving AI workloads into production need more than just raw computing power. They need to actually see what they're getting.

On Wednesday, the company launched Nebius AI Cloud 3.1, rolling out Nvidia Corp.'s (NVDA) next-generation Blackwell Ultra compute infrastructure alongside tools designed to give customers transparency into GPU capacity and resource management. The update builds on Nebius AI Cloud Aether with infrastructure enhancements targeting large-scale AI training and inference in live production environments.

Nebius is deploying Nvidia Blackwell Ultra infrastructure globally, with Nvidia HGX B300 and GB300 NVL72 systems already running in customer environments. The company says it's now the first cloud provider in Europe operating both platforms in production, and the first worldwide to run production GB300 NVL72 systems on 800 Gbps Nvidia Quantum-X800 InfiniBand.

That interconnect doubles throughput for distributed workloads. Combined with hardware-accelerated networking and improved storage caching, Nebius says the setup helps eliminate the infrastructure bottlenecks that slow down AI workflows. The release also builds on the company's leading results in the MLPerf Training v5.1 benchmarks.

Capacity Management Gets Transparent

Here's the interesting part: as customers shift from AI experimentation to scaled deployment, Nebius says demand is rising for real-time GPU visibility, predictable resource allocation, and governance tools that work across multiple teams.

Version 3.1 addresses that with Capacity Blocks and a real-time Capacity Dashboard, giving customers visibility into reserved GPU capacity and availability across all regions. Project-level quotas and new lifecycle object storage rules add granular control over resource allocation and costs. Translation: you can finally see what you're paying for and where your GPUs are actually going.
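Nebius hasn't shared code-level details of the new lifecycle rules in this announcement, but as a rough sketch, if the object storage exposes an S3-compatible API, a cost-control rule of this kind would typically be applied along the following lines. The endpoint, bucket, and prefix below are illustrative placeholders, not documented Nebius values.

```python
import boto3

# Illustrative sketch only: the endpoint URL, bucket, and prefix are
# placeholders, not values documented by Nebius.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.invalid",  # placeholder endpoint
)

s3.put_bucket_lifecycle_configuration(
    Bucket="training-checkpoints",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-stale-checkpoints",
                "Filter": {"Prefix": "checkpoints/"},
                "Status": "Enabled",
                # Delete objects under checkpoints/ 30 days after creation so
                # abandoned training artifacts stop accruing storage charges.
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```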

Developer Tools and Security Upgrades

Nebius AI Cloud 3.1 expands its ecosystem with native Dstack integration and simplified access to Nvidia BioNeMo NIM microservices, including Boltz2, Evo-2, GenMol and MolMIM. The company says customers won't need NGC keys or Nvidia AI Enterprise licenses to access these tools.
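The announcement stays at the feature level, but NIM microservices are typically packaged as containerized HTTP services, so a deployed instance can be probed roughly as in the minimal sketch below. The base URL is a hypothetical placeholder, and the readiness route follows the common NIM container convention rather than anything Nebius-specific; inference routes and payloads vary by model, so they are omitted.

```python
import requests

# Hypothetical base URL for a BioNeMo NIM microservice running on the
# platform; this is a placeholder, not a documented Nebius address.
BASE_URL = "https://bionemo-nim.example.invalid"

# NIM containers commonly expose a readiness probe at /v1/health/ready.
# Treat the route as an assumption, not a Nebius-specific guarantee.
resp = requests.get(f"{BASE_URL}/v1/health/ready", timeout=10)
print("microservice ready:", resp.status_code == 200)
```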

Additional enhancements include improved Slurm-based orchestration with Managed Soperator, FOCUS-compliant billing exports, and console user-experience updates.
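FOCUS is the FinOps Foundation's open billing-data specification, so exports in that format can be analyzed with ordinary data tooling. The snippet below is a generic sketch assuming a CSV export with the standard FOCUS columns BilledCost and ServiceName; the file name is a placeholder, and the column usage reflects the FOCUS spec rather than a Nebius-specific schema.

```python
import pandas as pd

# Placeholder file name; a real export would come from the Nebius console
# in FOCUS-compliant form.
focus = pd.read_csv("nebius_focus_export.csv")

# BilledCost and ServiceName are standard FOCUS columns, so a per-service
# cost rollup needs no provider-specific parsing.
by_service = (
    focus.groupby("ServiceName")["BilledCost"]
    .sum()
    .sort_values(ascending=False)
)
print(by_service.head(10))
```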

On the security side, the release strengthens Aether's enterprise foundation with object storage data-plane audit logs for HIPAA-compliant configurations, per-object access controls, VPC security groups, and enhanced identity management with Microsoft Entra ID integration and granular service roles.
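The per-object access controls are likewise described only at a feature level. On an S3-compatible API, object-granular permissions are conventionally set through object ACLs, as in the hedged sketch below; the bucket and key are placeholders, and the call illustrates the general pattern rather than Nebius's documented interface.

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.invalid",  # placeholder endpoint
)

# Restrict one object to its owner while leaving the rest of the bucket's
# policy untouched; bucket and key names are illustrative placeholders.
s3.put_object_acl(
    Bucket="clinical-datasets",
    Key="phi/2025/records.parquet",
    ACL="private",
)
```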

Strong Stock Performance, Mixed Financial Results

Nebius stock has gained over 184% year-to-date, fueled by surging demand for AI computing power in data centers. The company secured major hyperscale agreements with Meta Platforms Inc. (META) and Microsoft Corp. (MSFT).

But the financial picture is more complicated. The company posted third-quarter revenue of $146.1 million, missing Wall Street estimates of $153.7 million. It issued a full-year revenue outlook of $500 million to $550 million, below the $578 million analyst consensus estimate.

NBIS Price Action: Nebius Group shares were down 5.82% at $76.23 at the time of publication on Wednesday.
