Tech titans co-found open programmable infrastructure project

Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA and Red Hat are among a growing list of technology companies co-founding The Linux Foundation’s Open Programmable Infrastructure (OPI) Project.

OPI aims to foster a community-driven, standards-based open ecosystem for next-generation architectures and frameworks built on data processing unit (DPU) and infrastructure processing unit (IPU) technologies.

The project is designed to simplify the network, storage and security APIs within applications, enabling more portable and higher-performance applications in the cloud and data centre across DevOps, SecOps and NetOps.

“When new technologies emerge, there is so much opportunity for both technical and business innovation but barriers often include a lack of open standards and a thriving community to support them,” said Mike Dolan, Senior Vice President of Projects at The Linux Foundation.

“DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and datacenter, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies,” he added.

DPUs and IPUs are increasingly being used to support high-speed network capabilities and packet processing for applications such as 5G, AI/ML, Web3, and crypto because of their flexibility in managing resources across networking, compute, security, and storage domains.

Instead of servers being the unit of infrastructure for the cloud, edge or data centre, operators can now create pools of disaggregated networking, compute and storage resources, supported by DPUs, IPUs, GPUs and CPUs, to meet their customers’ application workload and scaling requirements.

OPI will help establish and nurture an open and creative software ecosystem for DPU- and IPU-based infrastructures. As more DPUs and IPUs are offered by various vendors, the OPI Project seeks to help define the architecture and frameworks for DPU and IPU software stacks that can be applied to any vendor’s hardware. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects such as DPDK, SPDK, OvS and P4.
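For readers unfamiliar with those projects, the listing below is a minimal, illustrative C sketch of the style of data-plane code they provide, using only standard DPDK calls (rte_eal_init, rte_eth_dev_count_avail, rte_eal_cleanup): it starts DPDK’s runtime and counts the Ethernet ports it can drive. It is not an OPI or IPDK API, just an example of the existing building blocks OPI intends to build on; it would typically be built against libdpdk (for example via pkg-config).

    /* Minimal DPDK sketch: initialise the Environment Abstraction Layer (EAL)
     * and enumerate the Ethernet devices it can see. Illustrative only; this
     * shows the kind of data-plane code OPI builds on, not an OPI/IPDK API. */
    #include <stdio.h>
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses DPDK's own command-line options (cores,
         * memory, devices) and sets up hugepage-backed memory. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL initialisation failed\n");
            return 1;
        }

        /* Count the ports DPDK has bound, e.g. a DPU's uplink or host-facing
         * representors exposed as Ethernet devices. */
        uint16_t nb_ports = rte_eth_dev_count_avail();
        printf("DPDK sees %u usable Ethernet port(s)\n", nb_ports);

        /* Release EAL resources before exiting. */
        rte_eal_cleanup();
        return 0;
    }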

“The emerging DPU market is a golden opportunity to reimagine how infrastructure services can be deployed and managed. With collective collaboration across many vendors representing both the silicon devices and the entire DPU software stack, an ecosystem is emerging that will provide a low friction customer experience and achieve portability of services across a DPU enabled infrastructure layer of next generation data centers, private clouds, and edge deployments,” said Geng Lin, EVP and Chief Technology Officer of F5.

Contributions to OPI

Contributions will include the Infrastructure Programmer Development Kit (IPDK), which is now an official sub-project of OPI. IPDK is an open source framework of drivers and APIs for infrastructure offload and management that can run on a CPU, IPU, DPU or switch.

NVIDIA DOCA, an open source software development framework for NVIDIA’s BlueField DPU, will be contributed to OPI to help developers create applications that can be offloaded, accelerated, and isolated across DPUs, IPUs, and other hardware platforms.

DOCA includes drivers, libraries, services, documentation, sample applications and management tools to speed up and simplify application development and improve performance. It provides flexibility and portability for BlueField applications written using accelerated drivers or low-level libraries such as DPDK, SPDK, Open vSwitch or OpenSSL.
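To illustrate what “low-level libraries” means in practice, the short, self-contained C sketch below computes a SHA-256 digest with OpenSSL’s standard EVP API. Crypto operations of this kind are among the workloads a DPU framework can offload and accelerate, but the code itself is ordinary host-side OpenSSL (built with -lcrypto) and is not DOCA-specific.

    /* Illustrative only: plain OpenSSL EVP code computing a SHA-256 digest.
     * Crypto primitives like this are the kind of low-level library work a
     * DPU framework aims to offload; nothing here is DOCA-specific. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    int main(void)
    {
        const unsigned char msg[] = "packet payload to authenticate";
        unsigned char digest[EVP_MAX_MD_SIZE];
        unsigned int digest_len = 0;

        /* Standard EVP digest sequence: create a context, select SHA-256,
         * feed data, then finalise. */
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        if (!ctx)
            return 1;
        if (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) != 1 ||
            EVP_DigestUpdate(ctx, msg, strlen((const char *)msg)) != 1 ||
            EVP_DigestFinal_ex(ctx, digest, &digest_len) != 1) {
            EVP_MD_CTX_free(ctx);
            return 1;
        }
        EVP_MD_CTX_free(ctx);

        /* Print the digest as hex. */
        for (unsigned int i = 0; i < digest_len; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }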

As part of OPI, developers will be able to create a common programming layer to support many of these open drivers and libraries with DPU acceleration.

“The fundamental architecture of data centres is evolving to meet the demands of private and hyperscale clouds and AI, which require extreme performance enabled by DPUs such as the NVIDIA BlueField and open frameworks such as NVIDIA DOCA. These will support OPI to provide BlueField users with extreme acceleration, enabled by common, multi-vendor management and applications,” said Kevin Deierling, Vice President of Networking at NVIDIA.