Securing the “Hipster Shop” with Calico Network Policies

Today’s blog post is by Edward Chang. Edward is a user group researcher for Project Calico. Before joining Tigera, Edward volunteered as a software developer on one of the competitive robotics design teams at UBC. This summer, Edward is working as a Software Development Intern. He is currently a third-year Computer Science student at the University of British Columbia.

As containerization and microservice architectures become the industry standard, developers need a way to ensure that unwanted and possibly malicious traffic does not find its way into their applications. One standard tool in the Kubernetes toolkit for enforcing traffic control between microservices is Network Policies. This blog will demonstrate how simple it is to write Kubernetes Network Policies to secure a complex application. As you’re following along, you may want to refer to this page, which outlines the basics of Network Policies, particularly the sections on “Ingress and Egress” and the paragraph under “How To”.

Creating Network Policies for an Existing Application

Google’s microservices demo (aka Hipster Shop) offers a simulation of an online boutique that has all the functionality of an online shop, such as item recommendations, a cart system, and even checkout and payment. It’s a perfect application for seeing network policies in action. Not only is it an excellent representation of a complex, containerized application, but the project also comes with a handy service architecture diagram, which acts as a useful guide for the network policies that we need to write. Ideally, network policies would be written for each service as it was developed, but this is still a useful example of securing an application after it has been deployed.

Before you Begin

First, you need to install and deploy the microservices demo by following the instructions provided on the application’s GitHub page. It is recommended that you explore the application and its functionality, while looking at the service architecture diagram, to gain a better understanding of what the individual microservices do. More importantly, having the application running gives you the power to progressively test your network policies along the way. It is far less tedious to verify that a network policy is correct as you write it than to discover a problem at the end of the process and have to dig through all the network policies to find the root cause.
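One simple way to do this progressive testing is to launch a throwaway pod and attempt a TCP connection to a service. This is a sketch; the pod name is arbitrary, and any image with a netcat binary will do:

```shell
# Run a disposable busybox pod and check whether shippingservice accepts
# TCP connections on port 50051. The pod is removed when the command exits.
kubectl run conncheck --rm -it --restart=Never --image=busybox -- \
  nc -zv -w 2 shippingservice 50051
```

Before any policies are in place, the connection should succeed. Once a policy selects the shipping service, expect this probe to be refused unless the probe pod carries a label the policy allows.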

Before you begin writing your first Network Policies, make sure you understand these two concepts, which are explained in the page linked above. First, if one or more Network Policies apply to a pod, then only traffic specified by those network policies will be allowed. This will be important later on in determining at which steps we should be testing our application. Second is the concept of Ingress vs. Egress. Looking at the service architecture diagram, arrows pointing into a specific pod represent Ingress traffic going into that pod, and arrows pointing out of a pod represent Egress traffic. This will be important to how we structure our Network Policies.
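To see the first concept concretely, here is a minimal default-deny policy (an illustration, not part of this walkthrough): its empty podSelector selects every pod in the namespace, and because it specifies no allowed ingress, all ingress traffic to those pods is dropped.

```yaml
# Illustrative only: selects all pods in "default" and allows no ingress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

The policies we write below work the same way on a smaller scale: as soon as a policy selects a pod, only the traffic the policies explicitly allow can reach it.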

Network Policies in Action

Let’s start at the bottom left side of the service architecture diagram, where there is an arrow pointing from the Checkout Service into the Shipping Service. From the perspective of the shipping service, this is Ingress traffic from the checkout service. To specify these pods in our network policy, we will take advantage of the labels attached to the pods.

~$ kubectl get pods --show-labels

NAME                                     READY   STATUS    RESTARTS   AGE     LABELS
adservice-6b867fbddf-68qb4               1/1     Running   0          3m38s   app=adservice,pod-template-hash=6b867fbddf
cartservice-7db8c58b54-6z5xq             1/1     Running   0          3m39s   app=cartservice,pod-template-hash=7db8c58b54
checkoutservice-849bcddcc5-kbcjd         1/1     Running   0          3m40s   app=checkoutservice,pod-template-hash=849bcddcc5
currencyservice-7bf5c56c-5dxzg           1/1     Running   0          3m39s   app=currencyservice,pod-template-hash=7bf5c56c
emailservice-c645fb94f-npztm             1/1     Running   0          3m40s   app=emailservice,pod-template-hash=c645fb94f
frontend-86db5d6bc9-m9n9k                1/1     Running   0          3m39s   app=frontend,pod-template-hash=86db5d6bc9
loadgenerator-8bcc5fd79-k4wcp            1/1     Running   3          3m39s   app=loadgenerator,pod-template-hash=8bcc5fd79
paymentservice-6bb9b8977c-mrxzg          1/1     Running   0          3m39s   app=paymentservice,pod-template-hash=6bb9b8977c
productcatalogservice-5bf4567b87-4fhnb   1/1     Running   0          3m39s   app=productcatalogservice,pod-template-hash=5bf4567b87
recommendationservice-6868459695-cjtrf   1/1     Running   0          3m40s   app=recommendationservice,pod-template-hash=6868459695
redis-cart-7887db6db-9tzkr               1/1     Running   0          3m39s   app=redis-cart,pod-template-hash=7887db6db
shippingservice-85dbf46687-r4gzm         1/1     Running   0          3m39s   app=shippingservice,pod-template-hash=85dbf46687

This command returns all the pods along with their labels, which we will use to select pods under the matchLabels selector. As shown in the output, the checkout service and shipping service have the labels app=checkoutservice and app=shippingservice, respectively. Next, we need the port of entry for ingress traffic coming into the shipping service.

~$ kubectl get service

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
adservice               ClusterIP      10.100.106.162   <none>         9555/TCP       7m6s
cartservice             ClusterIP      10.100.174.141   <none>         7070/TCP       7m6s
checkoutservice         ClusterIP      10.100.29.54     <none>         5050/TCP       7m6s
currencyservice         ClusterIP      10.100.72.178    <none>         7000/TCP       7m6s
emailservice            ClusterIP      10.100.134.224   <none>         5000/TCP       7m6s
frontend                ClusterIP      10.100.232.121   <none>         80/TCP         7m6s
frontend-external       LoadBalancer   10.100.7.149     a8c7769...     80:30316/TCP   7m6s
kubernetes              ClusterIP      10.100.0.1       <none>         443/TCP        17d
paymentservice          ClusterIP      10.100.47.139    <none>         50051/TCP      7m6s
productcatalogservice   ClusterIP      10.100.228.123   <none>         3550/TCP       7m6s
recommendationservice   ClusterIP      10.100.6.165     <none>         8080/TCP       7m6s
redis-cart              ClusterIP      10.100.181.232   <none>         6379/TCP       7m6s
shippingservice         ClusterIP      10.100.234.110   <none>         50051/TCP      7m6s

This command returns all the services in the current namespace. As shown in the output, the shipping service listens on port 50051. Putting all this together, you can finally formulate a Network Policy.

~$ kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: shipping-allow-checkout
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: shippingservice
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: checkoutservice
    ports:
      - port: 50051
EOF

You have now applied your first Network Policy! However, if you go to the cart page, you may notice that it now fails to load. This is happening because of one of the important concepts mentioned above: “if one or more Network Policies apply to a pod, then only traffic specified by these network policies will be allowed.” Looking back at the shipping service pod in the service architecture diagram, there is also an incoming arrow from the frontend pod. That traffic is now blocked, because the shipping service only accepts traffic from the checkout service. Therefore, we will write one more Network Policy to completely secure the shipping service.

~$ kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: shipping-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: shippingservice
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
      - port: 50051
EOF

Upon refreshing the page, the application should be back up and running. It is safest to also restart the frontend pod to see your Network Policy in action: existing network connections are not affected by new policies, so you need to restart the pod to break those connections and let the policy take effect. You have now secured the shipping service pod! To continue securing the application, repeat this process for every arrow in the diagram. Remember to progressively test the application as you write each Network Policy, and expect the application to throw errors if you have not yet written all the Network Policies a specific pod needs, for the reason mentioned above.
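One way to restart the frontend (assuming the demo’s default Deployment names) is a rolling restart, after which the replacement pod must establish fresh, policy-checked connections:

```shell
# Recreate the frontend pods so that existing connections are dropped and
# new ones are evaluated against the current network policies.
kubectl rollout restart deployment/frontend
kubectl rollout status deployment/frontend
```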

A complete collection of ingress policies for this application can be found here.
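As one more worked example, following the diagram’s arrow from the checkout service into the payment service, the same pattern (using the payment service’s port 50051 from the service listing above) yields something along these lines:

```yaml
# Allow only the checkout service to reach the payment service on port 50051.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-allow-checkout
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: paymentservice
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: checkoutservice
    ports:
      - port: 50051
```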

