Securing Kubernetes Nodes with Calico Automatic Host Endpoints

Kubernetes is described as “a portable, extensible, open-source platform for managing containerized workloads and services”. Working within that description, there’s a strong focus on the workloads that are contained within pods, and how they communicate with one another. One of the mechanisms Kubernetes offers for securing workloads is Network Policy. Network Policies provide a declarative way to configure which network connections are allowed to pods, whether that is pod-to-pod communication within a cluster or connections to or from outside of the cluster.
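For example, a standard Kubernetes NetworkPolicy restricting ingress to a set of pods might look like the following (the `app: db` and `app: api` labels and the port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: default
spec:
  # Select the pods this policy protects.
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  # Only pods labeled app=api may connect, and only on port 5432.
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 5432
```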

In general, Network Policies can only be applied to pods within a cluster. However, securing pod workloads is just one piece of the larger Kubernetes security puzzle. It is equally important to secure the nodes that host those workloads.

It’s a well-known good practice to use some kind of perimeter firewall to allow or block external traffic to the cluster as a whole using static, coarse-grained, IP-based rules. But to protect your nodes from compromises inside the perimeter, you need finer-grained microsegmentation that can dynamically secure each node, allowing or blocking traffic to other nodes depending on their role, and securing the nodes against traffic from the workloads they host. As the nodes and pods in a cluster are dynamic, this host-level microsegmentation also needs to be dynamic, in the same way that Network Policy enforcement for pods is dynamic. Without this additional layer of microsegmentation, an attack starting from a compromised node, or from a compromised pod with an insufficiently strict Network Policy, can spread across the cluster unchallenged.

Calico is well known as both a Container Networking Interface (CNI) and as a Network Policy provider. The Calico CNI provides network connectivity between workloads in the cluster, while the Calico Network Policy engine applies ingress and egress access rules for pods. Internally, Calico represents each pod as a “workload endpoint”. Building upon this framework, Calico also provides an abstraction for host network interfaces: Host Endpoints. Until now, using Host Endpoints required users to manually create and manage them. Once created, users can write network policies that target those endpoints, just like pods.
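A manually created host endpoint naming a specific interface might look like the following (the node name, interface, label, and IP are illustrative):

```yaml
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0
  labels:
    role: k8s-worker
spec:
  # The Kubernetes node this endpoint belongs to.
  node: node1
  # Name a specific interface, or use '*' to cover all interfaces.
  interfaceName: eth0
  expectedIPs:
  - 192.168.2.11
```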

With the 3.14 release, Calico now has the option to automatically create and manage the host endpoints in your Kubernetes cluster. By default the feature is turned off, but when enabled, every node in your Kubernetes cluster is assigned a host endpoint that applies a set of default rules to every network interface on that host. In the rest of this blog, we’ll cover how Calico Host Endpoints work, how you can enable automatic host endpoints for each of your nodes, and how to apply Network Policy to create dynamic microsegmentation that protects your Kubernetes nodes.

How Calico Host Endpoints Work

Host Endpoints are a Calico resource type that can be managed and inspected using the calicoctl command-line tool, just like any other Calico resource. Host Endpoints come in two forms: one that names a specific interface on a host, and a wildcard form that covers all interfaces on a host. Each endpoint includes a set of labels and associated profiles that Calico uses to apply policy to the interfaces specified by the endpoint.

When Calico’s automatic host endpoints feature is enabled, Calico will automatically create and manage a wildcard host endpoint for each Kubernetes node. Each host endpoint inherits the labels of its Kubernetes node, and has an attached “default allow” profile that mirrors the behavior of Kubernetes Network Policy for pods: all ingress traffic is allowed until one or more Network Policies applying ingress rules to the host endpoint are defined, at which point only the traffic specified in those policies is allowed. The same applies to egress rules.
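Conceptually, that default-allow profile is equivalent to a Calico Profile that permits all traffic in both directions; a sketch for illustration (not necessarily the exact resource Calico creates):

```yaml
apiVersion: projectcalico.org/v3
kind: Profile
metadata:
  name: projectcalico-default-allow
spec:
  # Allow all traffic in both directions; a Network Policy that
  # selects the endpoint takes precedence over this profile.
  ingress:
  - action: Allow
  egress:
  - action: Allow
```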

In addition, Calico always allows traffic that matches its failsafe rules, which are designed to prevent loss of connectivity to a host node if you accidentally apply a Network Policy that is too strict. These rules are configurable, but by default include inbound and outbound rules for SSH, DNS, DHCP, BGP, and etcd.
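The failsafe ports can be adjusted through Felix configuration; a sketch of overriding the inbound list (the specific ports shown are illustrative, and replacing the list removes the other defaults):

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # Traffic matching these ports is always allowed inbound,
  # regardless of any Network Policy that is applied.
  failsafeInboundHostPorts:
  - protocol: tcp
    port: 22    # SSH
  - protocol: udp
    port: 68    # DHCP client
  - protocol: tcp
    port: 179   # BGP
```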

Example: Enabling Host Endpoint Protection

In this short tutorial, we will be working with a basic cluster that has one master node and two worker nodes (note: host endpoint protection is not yet available for clusters running the eBPF dataplane tech preview). Since we will be using Calico-specific features, we assume that calicoctl is installed and configured for your datastore. To begin, get a list of all of the nodes in the cluster:

~$ calicoctl get nodes -owide
NAME      ASN       IPV4              IPV6   
master    (64512)   192.168.2.10/24          
node1     (64512)   192.168.2.11/24          
node2     (64512)   192.168.2.12/24   

Automatic host endpoints are enabled using Calico’s KubeControllerConfiguration. You can get this resource with calicoctl:

~$ calicoctl get kubecontrollersconfiguration default -o yaml
apiVersion: projectcalico.org/v3
kind: KubeControllersConfiguration
metadata:
  creationTimestamp: "2020-05-14T23:19:23Z"
  name: default
  resourceVersion: "738864"
  uid: 7ef01338-124b-402f-85f0-699bad5529c4
spec:
  controllers:
    namespace:
      reconcilerPeriod: 5m0s
    node:
      reconcilerPeriod: 5m0s
      syncLabels: Enabled
    policy:
      reconcilerPeriod: 5m0s
    serviceAccount:
      reconcilerPeriod: 5m0s
    workloadEndpoint:
      reconcilerPeriod: 5m0s
  etcdV3CompactionPeriod: 10m0s
  healthChecks: Enabled
  logSeverityScreen: Info
status:
  environmentVars:
    DATASTORE_TYPE: kubernetes
    ENABLED_CONTROLLERS: node
  runningConfig:
    controllers:
      node:
        hostEndpoint:
          autoCreate: Disabled
        syncLabels: Disabled
    etcdV3CompactionPeriod: 10m0s
    healthChecks: Enabled
    logSeverityScreen: Info

Reviewing the status, you can see that host endpoint auto-creation is disabled. You can verify that no host endpoints exist:

~$ calicoctl get hostendpoints
NAME   NODE   

Enable automatic host endpoints by toggling the “autoCreate” parameter:

~$ calicoctl patch kubecontrollersconfiguration default \
--patch='{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'

Calico will now automatically create host endpoints for all of the nodes in your cluster. You can verify this by listing the host endpoints again:

~$ calicoctl get hostendpoints
NAME               NODE      
master-auto-hep    master   
node1-auto-hep     node1     
node2-auto-hep     node2

You can get even more information about the host endpoint on the master node:

~$ calicoctl get hostendpoint master-auto-hep -o yaml
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  creationTimestamp: "2020-05-19T00:53:26Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: master
    kubernetes.io/os: linux
    node-role.kubernetes.io/master: ""
    projectcalico.org/created-by: calico-kube-controllers
  name: master-auto-hep
  resourceVersion: "741196"
  uid: 434ede74-1eb4-49d5-864d-4bad0604076d
spec:
  expectedIPs:
  - 192.168.2.10
  interfaceName: '*'
  node: master
  profiles:
  - projectcalico-default-allow

You can see that the host endpoint uses a wildcard interface name, which means network policies applied to this endpoint will cover every network interface on the node. As previously mentioned, the failsafe rules ensure that traffic required to maintain node connectivity is always allowed, even if you apply a policy that is too strict.

It’s useful at this point to label all of the nodes in our Kubernetes cluster, since that gives us a selector we can use to apply policies to all of the Kubernetes hosts.

~$ kubectl label nodes --all kubernetes-host=
node/master labeled
node/node1 labeled
node/node2 labeled

Depending on the installer you used to create your cluster, your master node(s) may already be labeled with `node-role.kubernetes.io/master`. If not, add that label too (replacing <master> with the actual node name, and repeating for all of the master nodes).

~$ kubectl label nodes <master> node-role.kubernetes.io/master=
node/master labeled

With our nodes labeled, we can apply some policies that secure the nodes while retaining access to the Kubernetes API. The first applies rules specific to the Kubernetes masters.

~$ calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: ingress-k8s-masters
spec:
  selector: has(node-role.kubernetes.io/master)
  # This rule allows ingress to the Kubernetes API server.
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      # kube API server
      - 6443
  # This rule allows all traffic to localhost.
  - action: Allow
    destination:
      nets:
      - 127.0.0.1/32
  # This rule allows coredns to forward DNS requests back through
  # the master node in the case where your gateway is colocated
  # with the Kubernetes master.
  - action: Allow
    protocol: UDP
    destination:
      ports:
      - 53
  # This rule is required in multi-master clusters where etcd pods are colocated with the masters.
  # Allow the etcd pods on the masters to communicate with each other. 2380 is the etcd peer port.
  # This rule also allows the masters to access the kubelet API on other masters (including itself).
  - action: Allow
    protocol: TCP
    source:
      selector: has(node-role.kubernetes.io/master)
    destination:
      ports:
      - 2380
      - 10250
EOF

We can apply similar policies to secure the worker nodes. Start by labeling the nodes, then apply a policy:

~$ kubectl get node -l '!node-role.kubernetes.io/master' -o custom-columns=NAME:.metadata.name | tail -n +2 | xargs -I{} kubectl label node {} kubernetes-worker=

~$ calicoctl apply -f - << EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: ingress-k8s-workers
spec:
  selector: has(kubernetes-worker)
  # Allow all traffic to localhost.
  ingress:
  - action: Allow
    destination:
      nets:
      - 127.0.0.1/32
  # Allow only the masters access to the nodes kubelet API.
  - action: Allow
    protocol: TCP
    source:
      selector: has(node-role.kubernetes.io/master)
    destination:
      ports:
      - 10250
  # This rule allows coredns to forward DNS requests back through
  # the worker node in the case where your gateway is colocated
  # with the Kubernetes master.
  - action: Allow
    protocol: UDP
    destination:
      ports:
      - 53
EOF

With the policy applied to all of the nodes in the cluster, traffic to the cluster should be locked down. Let’s test this out by starting up a basic Python server on the Kubernetes master.

~$ mkdir http; cd http; python3 -m http.server 8888
Serving HTTP on 0.0.0.0 port 8888 (http://0.0.0.0:8888/) ...

Try to connect to it from an external client that’s connected to your network.

external: ~$ curl http://192.168.2.10:8888 --connect-timeout 3
curl: (28) Connection timed out after 3003 milliseconds

Now try to connect to it from a Kubernetes pod in your cluster, with similar results:

~$ kubectl run -i --tty busybox --image=busybox --rm -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -O - -T 3 http://192.168.2.10:8888
Connecting to 192.168.2.10:8888 (192.168.2.10:8888)
wget: download timed out
/ # exit

The application of the ingress-k8s-masters policy shut down all other non-failsafe connections to the master node. We can write a new policy to open up access to this service:

~$ calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: global-access
spec:
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 8888
EOF

Trying our busybox and external connections again:

~$ kubectl run -i --tty busybox --image=busybox --rm -- sh 
If you don't see a command prompt, try pressing enter.
/ # wget -O - -T 3 http://192.168.2.10:8888
Connecting to 192.168.2.10:8888 (192.168.2.10:8888)
192.168.2.12 - - [21/May/2020 00:26:29] "GET / HTTP/1.1" 200 -
                                                              writing to stdout
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
...

Similarly, from the external client:

external: ~$ curl http://192.168.2.10:8888 --connect-timeout 3 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
...

Delete the previous policy, and replace it with one that only allows connections from nodes in our cluster.

~$ calicoctl delete gnp global-access
Successfully deleted 1 'GlobalNetworkPolicy' resource(s)

~$ calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: internal-access
spec:
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: has(kubernetes-host)
    destination:
      ports:
      - 8888
EOF

Checking from the external node:

external: ~$ curl http://192.168.2.10:8888 --connect-timeout 3
curl: (28) Connection timed out after 3004 milliseconds

Checking from worker node1:

node1: ~$ curl http://192.168.2.10:8888 --connect-timeout 3 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
...

Conclusion

Calico Host Endpoints are another security tool that you can use to secure your Kubernetes clusters. This example gave you a brief tour of how to enable automatic host endpoints in the 3.14 release, then apply some basic policy rules to secure the host endpoints while allowing different levels of access to a basic service running on the cluster. Whenever you are securing a cluster, it’s important to take a complete view of your entire cluster, from the workloads managed by Kubernetes to the hosts themselves. You can learn more about host endpoints at the Project Calico docs site.

Project Calico is supported by a community of developers and users. You can join the community and connect with other users on the Project Calico Slack. Asking questions on the community Discourse is a great way to get help with issues you might encounter. You can meet the developers and other community members by joining us for the monthly community meetings. Follow @projectcalico on Twitter for up-to-date information about new blog posts, videos, community meetings, and releases.



Chris Hoge

Chris is a Developer Advocate for Project Calico. Prior to joining Tigera he was a Technical Program Manager at the OpenStack Foundation, where he helped launch an interoperability program and coordinated cross-community efforts between the OpenStack and Kubernetes communities. He holds an MS in Applied Mathematics from the University of Colorado.
