The Tale of Two Container Networking Standards: CNM v. CNI

Coming out of last month's DockerCon conference, a number of interesting trends are taking shape on the container networking front. SDxCentral reported on the push to standardize on one of the competing container networking models, CNI (see Related Reading below). Standardizing networking and orchestration in the container space will help take the technology mainstream and build a healthy ecosystem of technology providers. It's a good time, then, to provide a little history and context for these standardization efforts.

Containers are changing the way applications are developed and how they connect to the network. Most of the talk around containers focuses on orchestration, because that's the touchpoint for application developers. Networking, however, becomes critical for production deployments: the network needs to be automated, scalable, and secure to support next-generation hybrid clouds and microservices-based application architectures.

For those coming from traditional networking backgrounds, the new way of doing things can be confusing. There are several options for networking containers, and standardization efforts have begun around the various approaches. Before we look at the different standards, let's compare network interfaces for containers versus virtual machines.

Virtual machines simulate hardware and include virtual network interface cards (NICs) that connect to the physical NIC. Containers, on the other hand, are just processes, managed by a container runtime, that share the same host kernel. A container can therefore be attached to the same network interface and network namespace as the host (e.g., eth0), or it can be given its own network namespace with an internal virtual network interface and then connected to the external world in various ways.
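One way to see this difference in practice: every Linux process exposes its network namespace as a symlink under /proc. The small Go sketch below (Go being the language most of this ecosystem is written in) prints that namespace identity; run it on the host and again inside a container, and matching output means the container shares the host's network namespace, while different output means it has its own.

```go
// netnsid.go: print the identity of the network namespace this process runs in.
// Comparing the output on the host with the output inside a container shows
// whether the container shares the host's network namespace or has its own.
// Linux-only; a minimal sketch, not a production tool.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// /proc/self/ns/net is a symlink such as "net:[4026531993]"; the number
	// uniquely identifies the network namespace on this kernel.
	link, err := os.Readlink("/proc/self/ns/net")
	if err != nil {
		log.Fatalf("could not read network namespace: %v", err)
	}
	fmt.Println(link)
}
```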

Initial container networking designs concerned themselves only with how to wire up containers running on a single host and make them reachable from the network. In 'host' mode, containers run in the host's network namespace and use the host's IP address. To be reachable from outside the host, a container claims a port from the host's port space, which means you have to manage which ports containers use, since they are all sharing the same port space.
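As a concrete illustration of that shared port space, consider the minimal Go HTTP server below. Run in host mode (for example with docker run --network host), it binds directly to port 8080 on the host, and a second container doing the same would fail with "address already in use". The port number is just an arbitrary example.

```go
// hostport.go: a minimal HTTP server used to illustrate host-mode networking.
// In host mode it binds directly into the host's port space, so only one
// container on the machine can hold port 8080 at a time.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a host-mode container")
	})
	// A second instance on the same host fails here with "address already in use".
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```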

The 'bridge' mode offers an improvement over 'host' mode. In 'bridge' mode, containers get IP addresses from one or more private networks and are placed in their own network namespaces. Because each container is in its own namespace, it has its own port space and doesn't have to worry about port conflicts. But the containers are still exposed outside the host using the host's IP address, which requires NAT (network address translation) to map host IP:host port to private IP:private port. These NAT rules are implemented with Linux iptables, which limits the scale and performance of the solution. So you are really trading one set of design trade-offs for another.
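To picture the translation step, the sketch below prints the kind of iptables DNAT rule a bridge-mode port mapping relies on, for a hypothetical mapping of host port 8080 to a container at 172.17.0.2:80. It only prints the rule; it does not program the kernel, and the exact chains a given runtime uses may differ.

```go
// natrule.go: print a representative iptables DNAT rule for a bridge-mode port
// mapping (host port 8080 -> container 172.17.0.2:80). Illustrative only: it
// shows the shape of the translation and does not install anything.
package main

import "fmt"

func main() {
	hostPort := 8080
	containerIP := "172.17.0.2" // hypothetical address from the private bridge network
	containerPort := 80

	fmt.Printf("iptables -t nat -A PREROUTING -p tcp --dport %d -j DNAT --to-destination %s:%d\n",
		hostPort, containerIP, containerPort)
}
```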

Also, these solutions don't address the problem of multi-host networking. As multi-host networking became a real need for containers, the industry started looking at different solutions. Recognizing that every network tends to have its own unique policy requirements, container projects favored a model in which networking is decoupled from the container runtime, which also greatly improves application mobility. In this model, networking is handled by a 'plugin' or 'driver' that manages the network interfaces and how containers are connected to the network; the plugin also assigns IP addresses to the containers' network interfaces. For this model to succeed, there needs to be a well-defined interface, or API, between the container runtime and the network plugins.
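To give a feel for what such a contract looks like, here is an illustrative Go interface. The names are invented for this sketch; the real CNM and CNI APIs, described next, differ in their details and transport, but the basic shape (attach a container to a network, detach it, hand back the assigned address) is the same.

```go
// Package netplugin sketches a hypothetical runtime-to-network-plugin contract.
// The interface and type names are invented for explanation and are not the
// actual CNM or CNI definitions.
package netplugin

import "net"

// Attachment describes what the plugin set up for a container.
type Attachment struct {
	InterfaceName string // e.g. "eth0" inside the container's network namespace
	IP            net.IP // address assigned by the plugin (or its IPAM backend)
}

// NetworkPlugin is what a container runtime would call at the relevant
// points in a container's lifecycle.
type NetworkPlugin interface {
	// Attach connects the container (identified by its network namespace
	// path) to the named network and returns the resulting interface and IP.
	Attach(containerID, netnsPath, networkName string) (*Attachment, error)

	// Detach removes the container from the network and releases its address.
	Detach(containerID, networkName string) error
}
```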

Comparing the two standards

To paraphrase an old joke: the nice thing about container networking standards is that there are so many to choose from. Docker, the company behind the Docker container runtime, came up with the Container Network Model (CNM). Around the same time, CoreOS, the company behind the rkt container runtime, came up with the Container Network Interface (CNI).

Kubernetes, a popular container orchestrator initially conceived at Google but now supported by a large and growing open source community, has supported network plugins since its earliest releases. Because Kubernetes wanted to leverage the broader open source community, it didn't want to create yet another standard, and since Docker is a popular container runtime, Kubernetes first looked at whether it could use CNM. For the reasons detailed in the Kubernetes blog post linked below, however, it decided to go with CNI for its network plugins. The primary technical objection to CNM was that it was still seen as something designed with the Docker container runtime in mind and hard to decouple from it. From a political standpoint, the Kubernetes developers felt that Docker wasn't ready to accommodate the changes that would be needed to make CNM more freestanding.

After this decision by Kubernetes, several other large open source projects adopted CNI for their container runtimes: the Cloud Foundry PaaS for its Garden runtime, and the Mesos cluster manager for its Mesos Containerizer.

Container Network Model (CNM)

CNM defines interfaces for both IPAM plugins and network plugins. The IPAM plugin APIs are used to create/delete address pools and allocate/deallocate container IP addresses, while the network plugin APIs are used to create/delete networks and add/remove containers from them. A single plugin can implement both sets of APIs, or separate plugins can be used for each. However, since the container runtime decides when the IPAM and network plugins are invoked, and the IPAM and network-attach phases are disjoint, it can be awkward for network plugins to coordinate their work. CNM also requires a distributed key-value store, such as Consul, to store the network configuration.

Docker's libnetwork is the library that provides Docker's implementation of CNM, and third-party plugins can be used in place of the built-in Docker drivers.
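As a rough illustration of how such a third-party plugin hooks in: libnetwork drives remote plugins by POSTing JSON to per-operation HTTP endpoints, typically over a Unix socket. The skeleton below registers handlers whose paths follow that remote-driver naming convention, but every handler is an empty placeholder, so treat it as a shape sketch rather than a working driver.

```go
// cnmdriver.go: skeleton of a CNM remote driver. libnetwork calls remote
// plugins over HTTP with one endpoint per operation; the paths below follow
// that convention, but the handlers here do nothing useful.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// reply sends an empty JSON object, the minimal "success" response.
func reply(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode(map[string]interface{}{})
}

func main() {
	mux := http.NewServeMux()

	// Network driver operations: create/delete networks, attach/detach containers.
	mux.HandleFunc("/NetworkDriver.CreateNetwork", reply)
	mux.HandleFunc("/NetworkDriver.DeleteNetwork", reply)
	mux.HandleFunc("/NetworkDriver.CreateEndpoint", reply)
	mux.HandleFunc("/NetworkDriver.DeleteEndpoint", reply)
	mux.HandleFunc("/NetworkDriver.Join", reply)
	mux.HandleFunc("/NetworkDriver.Leave", reply)

	// IPAM driver operations: manage address pools and per-container addresses.
	mux.HandleFunc("/IpamDriver.RequestPool", reply)
	mux.HandleFunc("/IpamDriver.ReleasePool", reply)
	mux.HandleFunc("/IpamDriver.RequestAddress", reply)
	mux.HandleFunc("/IpamDriver.ReleaseAddress", reply)

	// A real plugin would normally listen on a Unix socket under
	// /run/docker/plugins/; a TCP port keeps this sketch self-contained.
	log.Fatal(http.ListenAndServe(":9000", mux))
}
```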

Container Network Interface (CNI)

CNI exposes a simple set of interfaces for adding a container to, and removing it from, a network. CNI expects the network configuration to be expressed in JSON, which can be stored in a file. Unlike CNM, CNI doesn't require a distributed key-value store such as etcd or Consul. The CNI plugin is expected to assign the IP address to the container's network interface. The latest version of the CNI spec allows an IPAM plugin to be defined, but it is the CNI plugin's responsibility to call the IPAM plugin at the right time. So while CNI also allows for separate network and IPAM plugins, the network driver retains control over when the IPAM plugin is invoked.
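To make the JSON-configuration point concrete, here is a small sketch that emits the kind of minimal network configuration a runtime would hand to a CNI plugin. The "bridge" plugin and "host-local" IPAM referenced below are the CNI project's reference plugins; the network name, bridge name, and subnet are arbitrary example values.

```go
// cniconf.go: emit a minimal CNI network configuration as JSON. The structure
// (cniVersion, name, type, ipam) follows the CNI spec; the concrete values
// here are examples only.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type IPAM struct {
	Type   string `json:"type"`   // which IPAM plugin the network plugin should call
	Subnet string `json:"subnet"` // pool the IPAM plugin allocates container IPs from
}

type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"` // the network plugin binary to invoke
	Bridge     string `json:"bridge,omitempty"`
	IPAM       IPAM   `json:"ipam"`
}

func main() {
	conf := NetConf{
		CNIVersion: "0.3.1",
		Name:       "example-net",
		Type:       "bridge",
		Bridge:     "cni0",
		IPAM:       IPAM{Type: "host-local", Subnet: "10.22.0.0/16"},
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// A runtime typically reads such a file from /etc/cni/net.d/ and passes it
	// to the plugin when adding or deleting a container.
	fmt.Println(string(out))
}
```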

Conclusion

There are several networking plugins that implement one or both of CNI and CNM, such as the ones from Calico, Contrail (Juniper), Contiv, Nuage Networks, and Weave. You can generally expect these plugins to provide IP address management, an IP per container (or per Pod in the case of Kubernetes), and multi-host connectivity.

One area addressed by neither CNM nor CNI is network policy. Some of the network plugins may also implement network policies. Kubernetes, for example, has a beta Network Policy API, so some of the network plugins for Kubernetes may implement both CNI and the Kubernetes Network Policy API.

While the container networking standards address the networking requirements of containers, it's still the case that many application services will continue to run in virtual machines or on bare-metal servers. Technologies like overlay networks can help keep containers from becoming the next infrastructure silo.

CNI has now been adopted by several open source projects, such as Kubernetes, Mesos, and Cloud Foundry. It has also been accepted into the Cloud Native Computing Foundation (CNCF). As adoption of containers and cloud-native technologies accelerates, the CNCF has said it is looking to push CNI as an industry standard. Since the CNCF is backed by a large number of companies in this space, it's very likely that CNI will become the de facto standard for container networking.

Related Reading:

The Container Networking Landscape, CNI versus CNM: https://thenewstack.io/container-networking-landscape-cni-coreos-cnm-docker/

Why Google won’t support CNM: https://thenewstack.io/google-wont-support-dockers-container-network-model/

Why Kubernetes doesn’t use libnetwork: https://blog.kubernetes.io/2016/01/why-Kubernetes-doesnt-use-libnetwork.html

More on Kubernetes and Docker networking: https://www.quora.com/If-Kubernetes-doesnt-support-Dockers-libnetwork-or-CNM-how-does-Docker-networking-work-with-Kubernetes

The Quest for Container Networking Interoperability: https://www.sdxcentral.com/articles/analysis/quest-container-networking-interoperability/2016/11/
