Distributed load testing on Kubernetes with Locust


This article shows how to run distributed load testing on Kubernetes with Locust, an open source load testing tool that lets you describe user behavior in Python code and scale out to simulate millions of concurrent users.

Introduction

As more and more users rely on online services, even a minor performance glitch can have a significant impact on user experience and business reputation. This is where load testing tools like Locust come into play, allowing us to simulate thousands, or even millions, of users accessing our applications concurrently.

In this article, we will dive deep into the world of distributed load testing on Kubernetes using Locust. We will explore the benefits of this approach, discuss the key components involved, and guide you through the process of setting up your own load testing environment. By the end, you’ll have the knowledge and tools to conduct comprehensive load tests on your Kubernetes-hosted applications, helping you proactively identify and address performance bottlenecks before they impact your users.


What is Locust?

Locust is an open source load testing tool written in Python. It allows you to write test scenarios in Python code to simulate user behavior on your website or API. Locust is distributed and scalable, meaning that you can scale up to millions of concurrent users.

Prerequisites

Before we get started, you’ll need to have the following prerequisites in place:

  1. A running Kubernetes cluster with at least 2 nodes.

  2. kubectl installed and configured to connect to your cluster.

  3. Helm

Architecture overview

We will be using the following architecture for our load testing environment:

[Figure: Locust master/worker architecture on Kubernetes]

Installing Locust on Kubernetes

To install Locust on Kubernetes, we will use the deliveryhero Helm chart. Let’s first add the deliveryhero Helm repository:

helm repo add deliveryhero "https://charts.deliveryhero.io/"
helm repo update deliveryhero

Create a locustfile named locustfile.py:

from locust import HttpUser, task, between

# Book data shared by all simulated users
books_data = [
    {'title': 'the lord of the rings', 'author': 'J. R. R. Tolkien', 'price': 10.99},
    {'title': 'A Game of Thrones', 'author': 'George R. R. Martin', 'price': 9.99},
    {'title': 'The Name of the Wind', 'author': 'Patrick Rothfuss', 'price': 8.99},
    {'title': 'The Way of Kings', 'author': 'Brandon Sanderson', 'price': 7.99},
    {'title': 'The Lies of Locke Lamora', 'author': 'Scott Lynch', 'price': 6.99},
    {'title': 'The Final Empire', 'author': 'Brandon Sanderson', 'price': 5.99},
    {'title': 'The Eye of the World', 'author': 'Robert Jordan', 'price': 4.99},
    {'title': 'The Blade Itself', 'author': 'Joe Abercrombie', 'price': 3.99},
    {'title': 'The Fellowship of the Ring', 'author': 'J. R. R. Tolkien', 'price': 2.99},
    {'title': 'The Black Prism', 'author': 'Brent Weeks', 'price': 1.99},
]

class User(HttpUser):
    wait_time = between(1, 2)  # wait between 1 and 2 seconds between tasks

    @task(4)  # weight 4: picked twice as often as purchase, four times as often as add_to_cart
    def get_books(self):
        self.client.get("/v1/books")

    @task(2)  # weight 2
    def purchase(self):
        for book in books_data:
            self.client.post("/v1/pay", json=book)

    @task(1)  # weight 1
    def add_to_cart(self):
        for book in books_data:
            self.client.post("/v1/cart", json=book)
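Locust picks each task with probability proportional to its weight, so the 4/2/1 weights above translate directly into task probabilities. A quick sanity check:

```python
# Task weights from the locustfile above
weights = {"get_books": 4, "purchase": 2, "add_to_cart": 1}
total = sum(weights.values())

# Probability of each task being picked on a given iteration
probs = {name: w / total for name, w in weights.items()}
print(probs)  # get_books ~0.571, purchase ~0.286, add_to_cart ~0.143
```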

Install the chart:

Let’s first create a namespace for our load testing environment:

kubectl create namespace locust

Then create a configmap for our locustfile in that namespace — the chart looks the configmap up in the release namespace:

kubectl create configmap locust-config --from-file=locustfile.py --namespace locust

And then install the chart in the namespace locust:

helm install locust deliveryhero/locust --set loadtest.name=locust-demo \
--set loadtest.locust_locustfile_configmap=locust-config \
--set loadtest.locust_locustfile=locustfile.py \
--set worker.hpa.enabled=true \
--set worker.hpa.minReplicas=3 \
--set loadtest.locust_host=http://<ip>:<port> \
--namespace locust

The above command creates a Locust master and three workers. The workers are autoscaled based on CPU utilization. The target host is the <ip>:<port> of the application we want to load test.
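Equivalently, the same settings can be kept in a values file instead of passing them as --set flags (a sketch — the keys mirror the flags used in the command above):

```yaml
loadtest:
  name: locust-demo
  locust_locustfile_configmap: locust-config
  locust_locustfile: locustfile.py
  locust_host: http://<ip>:<port>
worker:
  hpa:
    enabled: true
    minReplicas: 3
```

It would then be installed with helm install locust deliveryhero/locust -f values.yaml --namespace locust, which makes the configuration easier to version-control.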

Accessing the Locust UI

To access the Locust UI, we need to port forward the locust master service:

1
kubectl port-forward svc/locust-demo-master 8089:8089 --namespace locust

Then we can access the UI at http://localhost:8089. If the master service is instead exposed externally, browse to http://<locust_ip>:8089, where <locust_ip> is the external IP of the Locust master service.

We should see something like this: [Figure: the Locust web UI]

Overview of our Python app

We will be using a simple Python app to exercise our load testing environment. The app is a REST API that lets us list books, add a book to the cart, and purchase a book. It is written in Python using the FastAPI framework. Here’s the code of our app:

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
import logging

app = FastAPI()
origins = ["*"]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

logging.basicConfig(level=logging.INFO)

# book data including title, author, price
books_data = [
    {'title': 'the lord of the rings', 'author': 'J. R. R. Tolkien', 'price': 10.99},
    {'title': 'A Game of Thrones', 'author': 'George R. R. Martin', 'price': 9.99},
    {'title': 'The Name of the Wind', 'author': 'Patrick Rothfuss', 'price': 8.99},
    {'title': 'The Way of Kings', 'author': 'Brandon Sanderson', 'price': 7.99},
    {'title': 'The Lies of Locke Lamora', 'author': 'Scott Lynch', 'price': 6.99},
    {'title': 'The Final Empire', 'author': 'Brandon Sanderson', 'price': 5.99},
    {'title': 'The Eye of the World', 'author': 'Robert Jordan', 'price': 4.99},
    {'title': 'The Blade Itself', 'author': 'Joe Abercrombie', 'price': 3.99},
    {'title': 'The Fellowship of the Ring', 'author': 'J. R. R. Tolkien', 'price': 2.99},
    {'title': 'The Black Prism', 'author': 'Brent Weeks', 'price': 1.99},
]

@app.post("/v1/pay", status_code=201)
async def payment(d: dict):
    return {"message": "Purchase completed successfully."}

@app.get("/v1/books")
async def books():
    return books_data

# add to cart
@app.post("/v1/cart", status_code=201)
async def cart(d: dict):
    return {"message": "Book added to cart successfully."}

The app is then containerized and the image is pushed to a container registry.

Our app is running on port 1097
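As a sketch of the containerization step — assuming the app above lives in main.py and is served with uvicorn (both assumptions; adjust to your project layout):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
RUN pip install --no-cache-dir fastapi uvicorn
COPY main.py .
# Serve the FastAPI app on port 1097, the port the article assumes
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "1097"]
```

The image can then be built, pushed to your registry, and referenced from a Kubernetes Deployment.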

Starting the load testing

Now that the app is deployed to Kubernetes, we can start the load testing. We will use the Locust UI to launch the tests. Since we are running Kubernetes locally, the Locust UI is available at http://localhost:8089. Make sure you have port-forwarded the Locust master service as explained above.

If you are running Kubernetes on a cloud provider, you can access the Locust UI using the external IP of the Locust master service.

[Figure: the Locust start screen with number of users, spawn rate, and host]

In this example, we are simulating a load of 100 concurrent users. The spawn rate is 1, which means one new user is started every second until the limit of 100 users is reached. The load test will run for 10 minutes.
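A quick back-of-the-envelope check of that configuration — with a spawn rate of 1 user/second, the ramp-up alone takes 100 seconds, leaving the rest of the 10 minutes at full load:

```python
users = 100           # target concurrent users
spawn_rate = 1        # users started per second
duration_s = 10 * 60  # total test duration: 10 minutes

ramp_up_s = users / spawn_rate     # time until all users are running
steady_s = duration_s - ramp_up_s  # time spent at the full 100 users
print(ramp_up_s, steady_s)  # 100.0 500.0
```

Results gathered during ramp-up reflect a lower load than the target, so it is worth keeping the test long enough to collect plenty of steady-state samples.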

Analyzing the results

After some time, we can see the results of our load test: [Figure: Locust charts — total requests per second and response times]

The two charts above are the most important ones. The first shows the number of requests per second; the second shows response times over the course of the test. We can see that response times climb as the request rate increases, meaning our app's performance degrades as the number of concurrent users grows.
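When reading the response-time chart, percentiles are more informative than averages, because a few slow requests can hide behind a healthy-looking mean. A sketch of how the median and 95th percentile are computed from raw samples (the numbers below are hypothetical, not from this test):

```python
# Hypothetical response-time samples in milliseconds
samples = sorted([120, 95, 110, 300, 105, 98, 450, 130, 101, 99])

def percentile(data, p):
    """Nearest-rank percentile of a sorted list (0 < p <= 100)."""
    k = max(0, int(round(p / 100 * len(data))) - 1)
    return data[k]

print(percentile(samples, 50))  # median: typical user experience
print(percentile(samples, 95))  # 95th percentile: dominated by the slow outliers
```

Here the mean is pulled up by the two outliers, while the median stays close to the typical request — which is why Locust reports percentile columns in its statistics table.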

Here are some tips to consider in order to make our app more efficient and scalable:

  • Right-size your pods: Make sure that your pods have enough CPU and memory to handle the load.
  • Use the Horizontal Pod Autoscaler: HPA allows you to scale your pods based on CPU utilization. You can also use custom metrics to scale on other signals, such as memory utilization.
  • Use the cluster autoscaler: The cluster autoscaler scales your cluster based on the number of pending pods. This is useful when many pods are pending due to insufficient resources.
  • Use lightweight containers: Prefer slim base images such as Alpine Linux over Ubuntu or CentOS. This reduces the memory footprint of your containers and makes them more efficient.
  • Use caching: Caching reduces the number of requests to your database, improving the performance of your app and reducing the load on your database.
  • Use readiness and liveness probes: These ensure that your pods are healthy and ready to serve traffic, reducing the number of failed requests.
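As an illustration of the HPA tip above, here is a minimal autoscaling/v2 manifest (a sketch — the Deployment name locust-demo-app is hypothetical; substitute your own):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-demo-app   # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With this in place, Kubernetes adds replicas whenever average CPU utilization across the pods exceeds 70%, which is exactly the behavior a ramping Locust test will exercise.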

Cleaning up

When you finish the tests, clean up the environment by deleting the Locust master and worker pods. Since we used Helm to install Locust, we uninstall it with Helm:

helm uninstall locust --namespace locust

This deletes the Locust master and worker pods along with their services. Note that helm uninstall does not remove the locust namespace or the locust-config configmap; delete them separately — kubectl delete namespace locust removes the namespace and everything in it.

Conclusion

In this blog post, we have seen how to run distributed load testing on Kubernetes using Locust. We have also seen how to analyze the results of our load tests and how to make our app more efficient and scalable. I hope you found this article useful. If you have any questions or comments, please feel free to leave a comment below. Thank you for reading!