The round follows the company’s $250m Series C in September 2025, bringing total funding to $850m and valuing the company at approximately $2.34bn.
With $650m raised in the past six months – over 75% of its total capital to date – Rebellions is entering a new phase of growth focused on US market expansion, scaled production of its Rebel100 platform and preparation for a future IPO.
“AI is now measured by its ability to operate in the real world – at scale, under power constraints, and with clear economic return,” says Sunghyun Park, co-founder and CEO of Rebellions.
At the core of this strategy is Rebellions’ software-centric approach. The company has built a cloud-native AI stack and serving platform for production-scale deployment, underpinned by Kubernetes and designed to work natively with leading open source software, including vLLM, PyTorch, Triton, Hugging Face and OpenShift. The platform delivers high-performance distributed inference, broad model support and a consistent deployment experience.
This architecture reflects a clear belief that AI infrastructure will be defined by open ecosystems that abstract hardware complexity. By aligning with open source standards from the outset, Rebellions enables developers to deploy across diverse models and environments without proprietary lock-in. Combined with a mature software stack that has been tested and validated by end users, this positions the company to support heterogeneous AI infrastructure at scale.
Additionally, Rebellions’ RebelRack and RebelPOD – both available today – extend the offering beyond silicon and software into fully deployable, vertically integrated AI infrastructure. The RebelRack delivers a production-ready unit of inference compute, while the RebelPOD integrates multiple racks into a scalable cluster designed for large-scale AI deployment.
Together, these solutions represent a shift towards delivering complete, modular AI infrastructure that can be deployed, replicated and scaled across datacentre environments. Built on the chiplet-based Rebel100 NPU, the platform is optimised at the system level for performance-per-watt and cost efficiency, enabling organisations to operate AI workloads within real-world power and infrastructure constraints.
Electronics Weekly