
CoreWeave Raises $221 Million in Series B Funding to Expand Specialized Cloud Infrastructure

CoreWeave, a specialized cloud provider for large-scale GPU-accelerated workloads, has announced a $221 million Series B funding round to expand its cloud infrastructure for compute-intensive workloads, including artificial intelligence (AI) and machine learning (ML), visual effects and rendering, batch processing, and pixel streaming.

The funding round was led by Magnetar Capital, with contributions from NVIDIA, Nat Friedman, and Daniel Gross. The latest funding will be used to meet the growing demand for generative AI technology and expand CoreWeave’s data center infrastructure, bringing the total number of North American-based data centers to five.

CoreWeave is strategically positioned to offer purpose-built, customized solutions that can outperform larger, more generalized cloud providers. According to the company, its cloud solutions are up to 35 times faster and 80% less expensive than those of the large, generalized public clouds. The new capital will also support the company’s expansion of U.S.-based data centers with the opening of two new centers this year.

CoreWeave’s CEO and co-founder Michael Intrator stated, “CoreWeave is uniquely positioned to power the seemingly overnight boom in AI technology with our ability to innovate and iterate more quickly than the hyperscalers.” Intrator added, “Magnetar’s strong, continued partnership and financial support as lead investor in this Series B round ensures we can maintain that momentum without skipping a beat. Additionally, we’re thrilled to expand our collaboration with the team at NVIDIA.”

NVIDIA has been a strong partner to CoreWeave, and the company’s recent release of the NVIDIA H100 Tensor Core GPU and NVIDIA HGX H100 platform has helped to power CoreWeave’s HGX H100 clusters, which are currently serving clients such as Anlatan, the creators of NovelAI. In addition to HGX H100, CoreWeave offers more than 11 NVIDIA GPU SKUs, interconnected with the NVIDIA Quantum InfiniBand in-network computing platform, available to clients on demand and via reserved-instance contracts.

Manuvir Das, Vice President of Enterprise Computing at NVIDIA, said, “CoreWeave’s strategy of delivering accelerated computing infrastructure for generative AI, large language models, and AI factories will help bring the highest-performance, most energy-efficient computing platform to every industry.” Ernie Rogers, Magnetar’s Chief Operating Officer, added, “CoreWeave’s innovative, agile, and customizable product offering is well-situated to service this demand and the company is consequently experiencing explosive growth to support it.”

Founded in 2017, CoreWeave is a specialized cloud provider that delivers GPU compute resources at massive scale on top of the industry’s fastest and most flexible infrastructure. The company’s solutions for compute-intensive use cases are designed to help customers accelerate their workloads and bring AI technology to every industry.
