Calico has redefined the way to build out data center networks, using a pure Layer 3 approach that is simpler, higher-scaling, better-performing and more efficient than the standard approach of using overlay networks.

Through environment-specific plug-ins, Calico integrates seamlessly with cloud orchestration systems such as OpenStack and Docker to provide networking between local and geographically distributed workloads.

Removing the packet encapsulation associated with the standard Layer 2 solution simplifies diagnostics, reduces transport overhead and improves performance.

The Calico approach of using a pure IP network combined with BGP for route distribution allows internet scaling for virtual networks.

Difficulties with traditional overlay networks

Traditional virtual infrastructures have offered a LAN-like (Layer 2) experience to users configuring multiple workloads. This may require several layers of virtual LANs, bridging and tunnelling to make Layer 2 networking work across multiple physical hosts. This presents a range of problems:

Scale challenges above a few hundred servers / thousands of workloads.
Difficult to troubleshoot due to packet encapsulation.
On/off-ramp device (or virtual router hop) required to access non-virtualized devices.
Every node in the network is state-heavy (e.g. VLANs, tunnels).
Virtual NAT device required to connect a workload to a public “floating IP”.
High availability / load balancing across links requires additional LB function and/or app-specific logic.
Geographically distributed data centers require inter-DC tunnels.
CCNA or equivalent required to understand end-to-end networking.

The Calico solution

Calico provides a pure L3 fabric solution for interconnecting Virtual Machines or Linux Containers (“workloads”). Instead of a vSwitch, Calico employs a vRouter function in each compute node. The vRouter leverages the existing L3 forwarding capabilities of the Linux kernel, which are configured by a local agent (“Felix”) that programs the L3 Forwarding Information Base with details of IP addresses assigned to the workloads hosted in that compute node. Felix also programs Access Control Lists in each compute node to enforce whatever security policy may be needed, for example to provide isolation between tenants. The vRouter function makes use of Border Gateway Protocol to advertise (typically) /32 or /128 routes to workloads hosted in each compute node. The BGP stack in each compute node is connected to a virtualised BGP Route Reflector to provide scalability of the control plane.
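The effect of Felix programming per-workload routes into the host's forwarding table can be sketched in miniature. This is a hypothetical illustration, not Calico code: the interface names and addresses are invented, and the kernel's real FIB is of course far more sophisticated, but the lookup behaviour — longest-prefix match over /32 workload routes plus a default route into the fabric — is the same idea.

```python
import ipaddress

# Hypothetical host FIB: per-workload /32 routes (as Felix would program)
# plus a default route toward the physical IP fabric.
fib = {
    ipaddress.ip_network("10.65.0.2/32"): "tapabc123",  # local workload
    ipaddress.ip_network("10.65.0.3/32"): "tapdef456",  # local workload
    ipaddress.ip_network("0.0.0.0/0"):    "eth0",       # rest of the fabric
}

def lookup(dst: str) -> str:
    """Longest-prefix match, as the kernel's L3 forwarding performs it."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in fib if addr in net),
               key=lambda net: net.prefixlen)
    return fib[best]

print(lookup("10.65.0.2"))  # tapabc123 – delivered straight to the workload
print(lookup("8.8.8.8"))    # eth0 – forwarded into the IP fabric
```

A packet for a local workload is delivered directly over that workload's interface; anything else is simply routed toward the fabric — no encapsulation step in either case.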

Calico connects each workload via the vRouter directly to the infrastructure network. There is no overlay, no tunnelling, no VRF tables. There is also no requirement for NAT in a Calico network, since any workload can be assigned a public IP address which is directly reachable from the Internet – provided you configure a security policy to allow this.

Scale to millions of workloads with minimal CPU and network overhead
What is happening is “obvious” – traceroute, ping, etc., work as expected; routing and ACL rules tell you everything you need to know
Path from workload to non-virtualized device is just a route
Physical fabric is state-light (standard IP forwarding only)
External connectivity is achieved by assigning a public IP
Equal Cost Multi-Path (ECMP) and Anycast just work, enabling scalable resilience and full utilization of physical links
Traffic between data centers is natively L3 routed
Basic IP networking knowledge only required
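The ECMP point above deserves a concrete sketch. Because every workload route is an ordinary IP route, a destination reachable over several equal-cost fabric links can be spread across them by flow hashing, with no extra load-balancer function. The following is an illustrative model, not Calico code — the link names are invented — showing the standard technique of hashing the 5-tuple so each flow sticks to one path (avoiding reordering) while different flows use all links:

```python
import hashlib

# Hypothetical equal-cost next hops for the same route.
next_hops = ["spine-1", "spine-2", "spine-3"]

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow's 5-tuple onto one of the equal-cost links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

# The same flow always takes the same link; different flows spread out.
assert pick_next_hop("10.65.0.2", "10.65.1.9", 4321, 80) == \
       pick_next_hop("10.65.0.2", "10.65.1.9", 4321, 80)
```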

Connectivity through IP routing

In Calico, IP packets to or from a workload are routed and firewalled by the Linux routing table and iptables infrastructure on the workload’s host. For a workload that is sending packets, Calico ensures that the host is always returned as the next-hop MAC address, regardless of any routing the workload itself might configure. For packets addressed to a workload, the last IP hop is the one from the destination workload’s host to the workload itself.


An environment-specific plugin (e.g. a Neutron plugin for OpenStack) notifies a Calico agent when it is asked to provision connectivity for a particular workload. The Calico agent adds direct routes to each workload, via a TAP (or veth, etc.) interface, to the host routing table. A BGP client running on the host notices those routes and distributes them – perhaps via a route reflector – to BGP clients running on other hosts, and hence the indirect routes appear also.
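The propagation step above can be modelled in a few lines. This is a deliberately simplified sketch with invented host names, not a BGP implementation: each host announces its local /32 workload routes, the route reflector fans them out, and every host ends up with its direct routes plus indirect routes to all remote workloads.

```python
# Hypothetical per-host local workload routes (what each host's BGP
# client would announce to the route reflector).
local_routes = {
    "host-a": {"10.65.0.2/32", "10.65.0.3/32"},
    "host-b": {"10.65.1.9/32"},
}

def reflect(local_routes):
    """Each host's resulting table: own routes direct, peers' routes indirect."""
    ribs = {}
    for host, own in local_routes.items():
        rib = {prefix: "direct" for prefix in own}
        for peer, routes in local_routes.items():
            if peer == host:
                continue
            for prefix in routes:
                rib[prefix] = f"via {peer}"  # learned over BGP
        ribs[host] = rib
    return ribs

ribs = reflect(local_routes)
print(ribs["host-a"]["10.65.1.9/32"])  # via host-b
```

When a workload is created or destroyed, only its own /32 announcement changes — the rest of the routing state is untouched, which is what keeps convergence fast.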

Scalability, performance and simplicity

Calico simplifies the network topology, removing multiple layers of encapsulation and de-encapsulation. This is key to understanding how Calico can perform and scale so well. In particular:

  • Smaller packet sizes reduce the likelihood of packet fragmentation.
  • Fewer CPU cycles are spent on encapsulation and de-encapsulation.
  • Packets are easier to interpret, and therefore easier to troubleshoot.

When a workload is created, moved or destroyed, the speed at which the routing information propagates is also a factor in scalability.

Calico offers outstanding scalability because it’s architected on the exact same principles as the Internet, using Border Gateway Protocol as the control plane.  With well-known implementations of BGP able comfortably to handle millions of distinct routes, Calico should scale to meet the network needs of the largest imaginable data centers.  And because Calico connects virtual machines directly via IP, it scales beyond the data center and natively supports cloud connectivity across any geographic distribution.


In principle, routing in a Calico network allows any workload in a data center to communicate with any other – but in general an operator will want to restrict that; for example, to isolate customer A’s workloads from those of customer B. Therefore Calico also programs iptables on each host, specifying the IP addresses (and optionally ports, etc.) that each workload is allowed to send to or receive from. This programming is ‘bookended’: traffic between workloads X and Y is firewalled by both X’s host and Y’s host. This helps keep unwanted traffic off the data center’s core network, and acts as a secondary defence in case a rogue workload manages to compromise its local host.

Field-hardened traffic separation

Applications running within a given tenant’s virtual network are completely separated from other tenants’ traffic.
Traffic separation is enforced by the Linux networking subsystem’s access control list (ACL) processing — code that has been field-hardened and proven in massive scale production environments for many years.

Fine-grained policy rules

While all virtual networking solutions implement per-tenant traffic separation, most do not support finer-grained security policy (without the addition of a separate firewall service). Through its use of Linux’s native ACL processing, Calico can be extended to allow a wide array of security rules – anything that can be defined in terms of a networking ACL can be implemented in Calico.

This rich policy engine would allow an administrator to, for example, give a tenant a VM which is only allowed external access to a specific set of IP addresses and over a specific TCP port – in effect providing basic firewall functionality within the core implementation of the virtual networking layer.
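That example policy can be sketched concretely. The addresses and port below are hypothetical, and real enforcement happens in the Linux ACL machinery rather than in Python — this just expresses what the generated rule for such a workload would say: egress permitted only to a specific set of external addresses, and only over one TCP port.

```python
import ipaddress

# Hypothetical per-workload egress policy: this tenant's VM may reach only
# these external addresses, and only on TCP port 443.
allowed_dsts = {ipaddress.ip_address("203.0.113.10"),
                ipaddress.ip_address("203.0.113.11")}
allowed_port = 443

def egress_permitted(dst, dport, proto="tcp"):
    """The check the generated ACL rule would express for this workload."""
    return (proto == "tcp"
            and dport == allowed_port
            and ipaddress.ip_address(dst) in allowed_dsts)

print(egress_permitted("203.0.113.10", 443))  # True
print(egress_permitted("198.51.100.7", 443))  # False – address not on the list
print(egress_permitted("203.0.113.10", 80))   # False – wrong port
```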

Security through simplicity

Calico’s use of flat IP networking principles, without multiple levels of address translation and payload encapsulation, results in a simple “WYSIWYG” networking model where every packet on the network clearly identifies who it is from, and where it is going. Network administrators can therefore easily identify any sources of rogue traffic – whereas complex (and often proprietary) tunneling protocols create layers of obfuscation that can be hard to analyze and debug.

Open standards and Open Source

Calico itself is open source and built on open standards. We build on other open source projects and actively push enhancements and fixes back to those communities:

  • Linux kernel – layer 3 forwarding and ACL enforcement
  • BIRD – BGP stack
  • Platform plug-in APIs (e.g. Neutron for OpenStack)