Seamless OpenStack and Docker networking using Apache Brooklyn and Project Calico

One of the key features of Calico is that it is simple and universal – you can use it to network any virtualized environment, whether with VMs or containers. One of the key features of Apache Brooklyn is its ability to orchestrate any combination and kind of workload through simple but powerful blueprints.

So what would you get if you put the two together? Seamless networking and automation of VM and container workloads – allowing you to build applications using a combination of VMs and containers, mixing and matching depending on what best suits each workload type. Brooklyn takes care of the automation, while Calico takes care of the networking.

So much for the theory – how does this all work in practice? Over the past couple of weeks I’ve been working with Andrew Kennedy and Csaba Palfi at Cloudsoft to configure a Brooklyn / Clocker deployment to manage applications spread across OpenStack and Docker. This blog post describes how it works, and how you can do this yourself.

First we built a simple 5 host deployment. We used 2 of the hosts to run OpenStack VM workloads and the other 3 hosts to run Docker containers. We used the standard Calico OpenStack plugin for the OpenStack workloads and the standard calico-docker code for the Docker workloads. Brooklyn was responsible for orchestrating the workloads across all of the hosts (using Clocker to control the Docker workloads).

Everything works as you’d expect. You can create VMs and containers through OpenStack and Docker as normal, create and manage arbitrary OpenStack security groups, and assign the VMs and containers to any of those security groups. When you do so, the correct routes get created and propagated through BGP to allow connectivity, while the security group rules are applied to both VMs and containers as normal (and yes, adding a container to a security group does allow it to match on rules that filter on security group).

The summary? A few extra steps in the Calico setup (see below for what to do manually), but it all just worked. Nobody likes manual steps, and that’s where Clocker and Brooklyn come in, allowing the automated install of extra Docker hosts, and of applications and security profiles spanning both VMs and containers. Andrew Kennedy will be showing off this demo at the Cloudsoft booth at DockerCon. Project Calico will also have a booth; come and talk to us there to find out more.

The rest of this blog discusses the Calico networking in more detail, but we encourage you to also check out the Brooklyn and Clocker projects that made the higher level automation possible (and expect to see a cross-link here to their blog post on this topic for more details coming soon).

How did we set up Calico for this deployment?

First we used the same etcd cluster for all the Calico nodes. For simplicity we also configured Calico to use the same BGP AS number on all the hosts for routing purposes.

The security groups were configured by Brooklyn through the standard OpenStack interfaces. (You could equally use the Horizon GUI, the command line, or the OpenStack REST API.) As soon as a security group is created it appears as a Calico profile, available for all types of workload to use. OpenStack VMs were assigned to profiles using Calico’s OpenStack plugin; containers were assigned to profiles using the standard calicoctl tool – all coordinated by Brooklyn and Clocker.
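As a concrete sketch of the manual equivalent, here is what creating a security group and seeing it appear as a Calico profile might look like. The group name is made up, and the commands are printed by a dry-run helper rather than executed, so the sequence can be reviewed before pointing it at a real deployment:

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it, so the
# sequence can be reviewed before running against a real deployment.
run() { echo "+ $*"; }

SG_NAME="app-tier"   # hypothetical security group name

# Create a security group and allow inbound HTTP (standard neutron CLI;
# Horizon or the REST API would work equally well).
run neutron security-group-create "$SG_NAME"
run neutron security-group-rule-create --direction ingress \
    --protocol tcp --port-range-min 80 --port-range-max 80 "$SG_NAME"

# On any Docker host, the new group then shows up as a Calico profile,
# listed under its OpenStack UUID.
run calicoctl profile show
```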

Here’s what you have to do (with Calico) in more detail.

  • Install a Calico OpenStack deployment following the instructions. In a normal OpenStack cluster the OpenStack database contains all the data, so putting etcd on a RAM disk makes sense (the etcd data can be rebuilt if it is lost); here, though, it’s best to use a proper etcd cluster – if you had to reboot and lost all your Calico data, it would be annoying to rebuild it all! After installing the OpenStack deployment, test that you can create and manage VMs as normal.
  • Install the Docker hosts, installing etcd on each of them as a proxy to the etcd cluster you are using for OpenStack. Before you do anything with calicoctl on each of these hosts, you need to set the interface prefix correctly, as follows: etcdctl set /calico/v1/host/<docker host>/config/InterfacePrefix cali. OpenStack hosts expect interfaces starting with “tap” while Docker hosts expect interfaces starting with “cali” – and this way you’ll be able to mix the two in the same deployment.
  • For each Docker host, download calicoctl and configure the node (calicoctl node).
  • Create the IP address pools (calicoctl pool add) on one of your Docker hosts as documented here. You should use separate IP pools for OpenStack and Docker (so that OpenStack does not select IPs that are in the Docker range), but provision both of them using calicoctl so that they are included in the BIRD BGP configuration.
  • Now, configure BGP for both Docker and OpenStack hosts. On the Docker hosts, BIRD configuration is automatically applied through etcd and confd – but that requires that the IP of each OpenStack compute host is stored in etcd, so set them up as follows: etcdctl set /calico/v1/host/<openstack host>/bird_ip <IP address of host>
  • Once you’ve done that, you’ll need to write the BIRD configuration in /etc/bird/bird.conf on the OpenStack compute hosts, and restart BIRD. A good trick is to dump the config on one of the Docker hosts and use that as the basis for the OpenStack configuration, to reduce the amount of manual editing required: docker exec calico-node cat /config/bird.cfg
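Pulling the steps above together, the whole sequence might look roughly like this. Host names, IP addresses, and pool ranges are placeholders, and the commands are printed by a dry-run helper rather than executed, so you can review them before running against your own cluster:

```shell
#!/bin/sh
# Dry-run helper: print each command rather than executing it.
run() { echo "+ $*"; }

DOCKER_HOST="docker-host-1"      # placeholder: a Docker host's name
OPENSTACK_HOST="os-compute-1"    # placeholder: an OpenStack compute host
OPENSTACK_IP="10.240.0.11"       # placeholder: that host's BIRD/BGP IP

# Docker hosts create interfaces named "cali*" (OpenStack uses "tap*"),
# so record the prefix before doing anything else with calicoctl.
run etcdctl set "/calico/v1/host/${DOCKER_HOST}/config/InterfacePrefix" cali

# Start the Calico node service on the Docker host.
run calicoctl node

# Separate IP pools for Docker and OpenStack workloads (illustrative
# ranges), both provisioned via calicoctl so BIRD knows about both.
run calicoctl pool add 192.168.0.0/16
run calicoctl pool add 10.65.0.0/16

# Tell confd/BIRD on the Docker hosts where each OpenStack host lives.
run etcdctl set "/calico/v1/host/${OPENSTACK_HOST}/bird_ip" "$OPENSTACK_IP"

# Dump a working BIRD config from a Docker host, to use as the basis
# for /etc/bird/bird.conf on the OpenStack compute hosts.
run docker exec calico-node cat /config/bird.cfg
```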

OK, so now it’s all set up – how do you test that it works?

  1. Create some security groups in OpenStack using your preferred method (the Horizon GUI is usually the easiest if you haven’t used OpenStack much before, but you can also use the command line or REST API). You can configure whatever rules you like here, including rules that reference other security groups.
  2. Create some VMs in one or more of those security groups, using your favourite OpenStack interface.
  3. Run calicoctl profile show on a Docker host – you’ll see a list of profile IDs, each of which is the UUID of an OpenStack security group.
  4. Create containers using calicoctl on one or more Docker hosts as usual, and use calicoctl member add to assign them to the security groups.
  5. Check that routes are set up correctly, by running ip route show on both OpenStack compute hosts and Docker hosts. If the routes to the containers and VMs show up on the owning host but nowhere else, you have probably made a mistake configuring BIRD.
  6. Finally, test connectivity itself, by ensuring that VMs and containers can contact one another if and only if their security groups allow the traffic.
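The checks on the Docker side can be sketched roughly as follows. The container name is a placeholder, the profile UUID placeholder is left as-is (take it from the calicoctl profile show output), the profile … member add form of the command is assumed, and commands are again printed for review rather than executed:

```shell
#!/bin/sh
# Dry-run helper: print each command rather than executing it.
run() { echo "+ $*"; }

PROFILE_UUID="<security group UUID>"   # from "calicoctl profile show"
CONTAINER="web-1"                      # placeholder container name

# List the Calico profiles derived from OpenStack security groups.
run calicoctl profile show

# Assign a container to a security group's profile.
run calicoctl profile "$PROFILE_UUID" member add "$CONTAINER"

# Verify that routes to VMs and containers have propagated over BGP.
run ip route show
```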

It’s worth mentioning that this is still a work in progress and there are still some rough edges in the tools when doing this. For example, some calicoctl commands are a little confused by the OpenStack data they find in a mixed deployment: you can list profiles derived from OpenStack security groups using calicoctl, but you cannot use it to edit or view their contents; calicoctl shownodes does not work; and calicoctl node stop only works with the force option. None of these is a significant issue in practice, and we’ll be cleaning this up in future (at the same time as making the process above a little less manual).

I'm a core developer on Project Calico, based in Edinburgh.