Calico and containers are flip sides of the same coin

When we launched Project Calico about a year ago, our main focus was on providing a better way to network virtual machines in OpenStack environments.  But over the last six months or so, the Calico team has been putting a great deal of effort into the Linux container space, and Calico is now emerging as one of the front-runners in container networking.

Containers are hot right now, and the best-known name in containers – Docker – is positively incandescent.  Calico is basking in the reflected glow from Docker as we demonstrate just how effective a solution Calico is for container networking.

In this post, I draw a parallel between the evolution of compute virtualization from VMs to containers, and the evolution of network virtualization from L2-centric, overlay-based approaches to L3-centric, policy-rich approaches – like Calico.

Containers and virtual machines both provide a means to run many independent workloads on the same host, but they go about this in very different ways.  With a VM, the hypervisor presents the guest application with an emulation of a complete physical server.  You treat a VM just like you would a physical machine – you install an operating system on it (as a “guest OS”), and then you run your application workload over that guest operating system.

Like VMs, containers provide isolation between applications running on the same host.  But rather than doing this by emulating a multiplicity of physical machines, containers use built-in Linux kernel features (e.g. namespaces) to provide the isolation.  Container-based workloads can be supported directly by the host OS, so they have a much smaller memory footprint and they start much more quickly.
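
To make that concrete, here is a minimal sketch – assuming Linux and root privileges, and not tied to any particular container runtime – of launching a shell into its own UTS, PID, and network namespaces.  The kernel isn't emulating anything; it is simply partitioning what the child process can see.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in fresh UTS, PID, and network namespaces.
	// These clone flags are the same primitives that container
	// runtimes such as Docker build on.  Linux-only; requires root.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWPID | // own process tree
			syscall.CLONE_NEWNET, // own network stack
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, `echo $$` prints 1 and `ip link` shows only a loopback interface – an isolated environment created in milliseconds, with no guest OS anywhere in sight.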

Why are containers so hot?  I think it’s for two main reasons:

  • Containers are well suited to running web-scale apps that embody a microservices architecture, which is particularly topical right now.
  • Container implementations such as Docker place a huge emphasis on portability – an app developed and tested in a container on a laptop should “just work” when deployed in the cloud, and this supports the DevOps model very nicely.

I think of containers as a kind of second-generation compute virtualization.  Virtual machines make sense if you want to take legacy apps that were designed for some specific OS on bare metal, and move them over to the cloud.  But if you are writing new apps, you'll most likely be writing them for Linux – in which case, why bother with the overhead of a VM with its own guest OS?  From an app development and operations standpoint, all you really care about is that you can run many workloads on each host, and that workloads are properly isolated from one another.  So why not use a Linux-based platform that supports isolation between workloads in a much more lightweight way?  That’s what containers do.

We can think of Calico in very much the same terms: as second-generation network virtualization.

Traditional network virtualization is all based on the notion that applications expect to connect to an Ethernet segment which provides Layer 2 connectivity.  In a legacy data center, workloads run on bare metal servers connected by Ethernet segments.  Workloads that are connected to the same Ethernet segment can all communicate directly with one another.  To talk to workloads on another Ethernet segment, you typically go through a router or a firewall, where some network isolation can be applied if desired.  When you move to a virtualized environment, the path of least resistance would appear to be simply to virtualize that entire collection of networking concepts – which is exactly the thinking behind all of the overlay-based solutions for the data center fabric.
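
The cost of that thinking shows up in every packet.  Here is the arithmetic as a small sketch, assuming VXLAN as the encapsulation – one common choice; other overlay schemes differ only in detail:

```go
package main

import "fmt"

// Header bytes that precede a workload's IP packet when an emulated
// Ethernet segment is carried over an IP fabric via VXLAN.  These are
// the standard header sizes; VXLAN is used purely as an illustration.
const (
	outerEthernet = 14 // Ethernet header on the physical fabric
	outerIPv4     = 20 // outer IPv4 header addressing the tunnel endpoints
	outerUDP      = 8  // UDP header carrying the tunnel
	vxlan         = 8  // VXLAN header identifying the virtual network
	innerEthernet = 14 // the emulated segment's own Ethernet header
)

func main() {
	overlay := outerEthernet + outerIPv4 + outerUDP + vxlan + innerEthernet
	routed := outerEthernet // a routed fabric carries the IP packet directly
	fmt.Printf("overlay: %d header bytes per packet; routed: %d\n",
		overlay, routed)
	// Prints "overlay: 64 header bytes per packet; routed: 14" – and the
	// overlay also needs per-host tunnel state and shrinks the effective MTU.
}
```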

But new apps are all written to talk IP.  A typical app comprises multiple workloads – e.g. front end, business logic, database – which need to talk to each other using TCP or UDP on specific ports.  Some workloads may also need IP connectivity to the external world, again on specific ports.  So why not just connect all these workloads directly to an IP fabric, and specify precisely what connectivity you require among the workloads, and from the workloads to the external world?  That’s exactly what Calico does.  It’s far simpler than connecting workloads via an emulated Ethernet segment implemented as a mesh of tunnels carrying encapsulated Ethernet frames over an IP fabric, which is essentially what first-generation network virtualization solutions do.  And by locking down connectivity between workloads to exactly what the workloads need, Calico is also a good deal more secure.
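
As a sketch of what “specify precisely what connectivity you require” boils down to on a Linux host, the following assumes a hypothetical three-tier app with illustrative workload addresses, and programs plain iptables allow rules plus a default-deny.  In a real deployment Calico generates and manages the equivalent enforcement automatically from your policy; this hand-rolled version just shows how simple the underlying model is.

```go
package main

import (
	"fmt"
	"os/exec"
)

// flow is one explicitly allowed connection: source workload,
// destination workload, and TCP destination port.
type flow struct {
	src, dst string
	port     int
}

func main() {
	// Illustrative three-tier app; the addresses are assumptions made
	// up for this example, not anything Calico itself assigns.
	allowed := []flow{
		{"10.65.0.2", "10.65.0.3", 8080}, // front end -> business logic
		{"10.65.0.3", "10.65.0.4", 5432}, // business logic -> database
	}
	// Install one ACCEPT rule per allowed flow (run as root).
	for _, f := range allowed {
		args := []string{"-A", "FORWARD",
			"-s", f.src, "-d", f.dst,
			"-p", "tcp", "--dport", fmt.Sprint(f.port),
			"-j", "ACCEPT"}
		if out, err := exec.Command("iptables", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("iptables: %v: %s", err, out))
		}
	}
	// Anything not explicitly allowed is dropped.
	if err := exec.Command("iptables", "-P", "FORWARD", "DROP").Run(); err != nil {
		panic(err)
	}
}
```

No tunnels, no encapsulation, no emulated segment – just routes to the workloads and ACLs between them.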

Containers and Calico offer the most logical answers to the question “what do apps actually need from the cloud infrastructure?” in the compute and network domains respectively.  Just as most apps don’t need an emulation of a physical machine in order to execute safely and securely in the cloud, so most apps don’t need an emulation of an Ethernet segment in order to connect safely and securely in the cloud.

In pointing out how closely aligned Calico is with the Linux container movement, I don’t mean to imply in any way that Calico is less well suited to networking workloads running in virtual machines.  The Calico team continues to invest heavily in the OpenStack space, and most of the early production deployments of Calico are likely to be networking VMs in OpenStack.  But I do want to make the point that the world is increasingly embracing containers as the best way to run new applications in the cloud because they are simpler and more efficient than VMs, and we expect the world to embrace Calico for networking new applications in the cloud for precisely the same reasons.
