Standing up a Calico-powered Kubernetes cluster using kops

This guest blog post is by Eric Son who is a user group researcher for Project Calico. Eric is currently a third-year student studying Computer Science and Business at the University of British Columbia. He volunteers with a software engineering club called Launch Pad at UBC and is interning at Samsung as an IoT Software Engineer for the summer.


Setting up Kubernetes comes with a lot of complexity including many different ways to launch a cluster. One of the easiest ways to go about doing this is through a Managed Kubernetes Service like EKS, GKE, etc. While these options are an easy way to launch and manage Kubernetes they have the potential to lock you in to particular platforms, or may not be extensible enough. Fortunately, there are different tools that you can leverage to launch a cluster by yourself without having to do everything completely manually. In this blog I’ll walk you through how to launch a Kubernetes cluster with Calico networking using kops as an example.

Calico and kops

I am going to assume that you already have an understanding of what Kubernetes is, but you may not have heard about Calico or kops.

Calico is an “open-source networking and network security solution for containers, virtual machines, and native host-based workloads.” It is a networking solution for Kubernetes that allows you to create network policies, secure traffic to containers/hosts, and is built for scale.

Kops (short for Kubernetes Operations) is a tool that allows you to easily create, destroy, and manage highly available, production-grade Kubernetes clusters through the command line.

Setting up your kops work environment

Kops currently only supports AWS, with other cloud providers in beta testing. So today we will go with AWS to launch our cluster. First, you need to set up a host to use as your work environment (if you don’t already have a suitable work environment set up).

For this blog I’m going to use Ubuntu. You can follow a quick tutorial here to set up an Ubuntu server on AWS. I am personally using Ubuntu 18.04 on a t2.small instance, which meets the minimum hardware requirements for my kops work environment.

To set up the work environment we need to:

  1. Install kubectl command line tool
  2. Install kops command line tool
  3. Install AWS command line tool
  4. Create an AWS IAM user for kops to use

Installing kubectl

First, we have to install kubectl, the command line tool that lets you control Kubernetes clusters. 

On your kops work environment host, run this command to download the latest stable kubectl binary:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Once downloaded, you can make your kubectl binary executable and then move it somewhere in your PATH:

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

After having done this correctly, you can run this command to test that kubectl is working properly:

kubectl version --client

At the time of this writing my kubectl version response is:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Installing kops

In a similar fashion, we can install kops:

  1. Download the kops binary
  2. Make it executable 
  3. Move the binary to a location within your PATH

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops

To make sure it’s working correctly you can run the command: 

kops version

Installing / Configuring AWS Command Line tools

To prepare our AWS account for kops, we first have to install the AWS CLI tools.

First, run this command to get the correct zip file:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

I didn’t have the unzip tool already installed on Ubuntu 18.04, so I had to install it by running:

sudo apt install unzip

Next, we want to unzip the archive by running:

unzip awscliv2.zip

Finally, we execute the installation script by running this command:

sudo ./aws/install

To check that the AWS CLI tools installed properly you can run:

aws --version

Now we have to configure the AWS CLI with our account credentials so that it has access to our AWS account. Run the command:

aws configure

and it will prompt you to fill in the following fields with your account credentials. If you need help debugging this step or locating your account credentials take a look here.

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Creating a separate IAM user for kops to use

We can now create a separate IAM user called kops with the correct credentials and permissions that will allow the kops tool to do its work. Take note of the output of the final command below (create-access-key). We will need it in the next step!

aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
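The create-access-key call prints a JSON document containing the new key pair. As a sketch of what to look for (using the well-known example credentials from the AWS documentation rather than real ones), you can pull the two fields out with grep and cut:

```shell
# Sample of the JSON that `aws iam create-access-key` prints; the values
# are the example credentials from the AWS documentation, not real keys.
SAMPLE='{"AccessKey":{"UserName":"kops","AccessKeyId": "AKIAIOSFODNN7EXAMPLE","SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY","Status":"Active"}}'

# Extract the two fields we need to note down for the next step.
ACCESS_KEY_ID=$(echo "$SAMPLE" | grep -o '"AccessKeyId": *"[^"]*"' | cut -d '"' -f 4)
SECRET_ACCESS_KEY=$(echo "$SAMPLE" | grep -o '"SecretAccessKey": *"[^"]*"' | cut -d '"' -f 4)
echo "$ACCESS_KEY_ID"
```

In practice you would pipe the live command output through the same grep/cut filters instead of a saved sample.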

Now that we have a dedicated IAM user for kops, let’s configure the AWS CLI again so that it uses this dedicated IAM user instead of your original one. You should record the SecretAccessKey and AccessKeyId from the returned JSON output, and then use them below:

# configure the aws client to use your new IAM user
aws configure           # Use your new access and secret key here
aws iam list-users      # you should see a list of all your IAM users here

# Because "aws configure" doesn't export these vars for kops to use, we export them now
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

Congrats! We are finished setting up our kops work environment.

Setting up your state store bucket

We will now create an s3 bucket that kops will store the cluster state in. Run the command

aws s3 mb s3://<bucket-name>

Note: I picked the bucket name to match my hosted zone (the domain name of the Kubernetes cluster). Here is an informative excerpt from the kops documentation that explains that kops uses DNS discovery for the Kubernetes API server:

“kops uses DNS for discovery, both inside the cluster and outside, so that you can reach the kubernetes API server from clients.
kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will no longer get your clusters confused, you can share clusters with your colleagues unambiguously, and you can reach them without relying on remembering an IP address.”

We will be using a private DNS solution for our cluster, so we can pick any valid DNS name (for example, something like mycluster.example.com) because we won’t be registering the name anywhere. If you want a public cluster instead, you can follow the DNS steps listed in the kops documentation.

This will create the bucket and now we will export a variable that kops uses to point toward this bucket. 

export KOPS_STATE_STORE=s3://<bucket-name>

Note: An additional step here might be to place these variable exports into your ~/.bashrc file so you don’t have to run the export commands every time you open a new shell.
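That extra step could look like the sketch below, assuming your state store bucket is named <bucket-name> (a placeholder, not a real bucket):

```shell
# Persist the kops environment variables so every new shell picks them up.
# <bucket-name> is a placeholder -- substitute your own state store bucket.
cat >> ~/.bashrc <<'EOF'
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
export KOPS_STATE_STORE=s3://<bucket-name>
EOF
```

Open a new shell, or run `source ~/.bashrc`, for the change to take effect.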

Standing up your Calico + Kubernetes Cluster

All we need to do now is run a command that tells kops to stand up a cluster with Calico enabled for networking, substituting your chosen cluster name for <cluster-name>:

kops create cluster --zones=us-east-1c --networking calico --dns private <cluster-name>

After about ten minutes your cluster should be stood up, and you can use this command to verify that everything is working as expected:

kops validate cluster

Calico Network Policies in Action

We can now bring up containers and test out some Calico Network Policies. For now, we can run these commands to bring up an nginx container with port 80 exposed and a busybox container in a separate namespace.

kubectl create ns demo
kubectl create deployment --namespace=demo nginx --image=nginx
kubectl expose --namespace=demo deployment nginx --port=80

This next command will bring up a busybox pod and open up a shell to it.

Note: If you don’t see the following policies take effect, try closing the busybox shell and opening it back up again.

kubectl run --namespace=demo busybox --rm -ti --image busybox /bin/sh

Now when you run wget from inside the busybox pod, you should be able to load the HTML served by the nginx container:

wget -q --timeout=5 nginx -O -

Next, let’s try bringing up a Calico Network Policy to secure our workloads. First, we are going to deny all ingress traffic between everything in our demo namespace. From the kops work environment node run the command:

calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  selector: all()
  types:
  - Ingress
EOF

Now, from your busybox pod, running the same command will time out:

wget -q --timeout=5 nginx -O -

Try adding this policy to allow ingress traffic from the busybox to nginx.

calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-busybox-nginx
  namespace: demo
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  # allow traffic from the busybox pod to nginx
  - action: Allow
    source:
      selector: run == 'busybox'
EOF

There are so many powerful things you can accomplish with Calico Network Policies that let you adopt best practices and secure your workloads. Learn more about them here.
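As one more illustration (a sketch of my own, not part of the original walkthrough; the policy name and rule are hypothetical), you could lock down egress from the demo namespace so that pods may only send DNS queries:

```shell
# Hypothetical example: allow only DNS traffic (port 53) out of the demo
# namespace. Once an egress policy selects these pods, all other outbound
# traffic is denied by default.
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-dns-egress-only
  namespace: demo
spec:
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    protocol: UDP
    destination:
      ports: [53]
  - action: Allow
    protocol: TCP
    destination:
      ports: [53]
EOF
```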


To tear down the kops cluster, you can run this command in your work environment node:

kops delete cluster <cluster-name> --yes


There are many different ways to go about setting up your Kubernetes cluster. Through kops I was able to stand up a highly available, production ready cluster with ease. I hope this walkthrough removes friction for you and gives you the confidence to try it out for yourself!
