Performance Tuning a WSO2 Identity Server Kubernetes Deployment

Buddhima Udaranga
Oct 24, 2021

Recently I wrote a blog describing the steps to deploy WSO2 Identity Server as a Kubernetes deployment. [1] That deployment covers the functional requirements of an Identity Server setup, but it does not address the non-functional requirements. In this blog, I explain how to achieve one of the major non-functional requirements: the performance of the deployed WSO2 Identity Server.

Poor performance of the solution may lead to several issues, such as:

  • Damaged customer relations
  • Loss of income
  • Business failures during peak hours
  • Reduced competitiveness

Because of this, although performance is not one of the first things people look for, it is one of the most essential aspects of Identity and Access Management.

In this blog, I will describe the factors we have identified that impact performance, the methodologies we used to identify those issues, and the fixes we applied to mitigate them. This blog can be used as a guideline for tuning the performance of a WSO2 Identity Server deployment.

First, let’s identify the factors that can become a bottleneck in achieving the performance we need.

  1. Pod Memory
  2. Pod CPU
  3. VM Size
  4. Logging
  5. DB Connection Pool
  6. DB CPU
  7. Scaling

Pod Memory

WSO2 Identity Server loads tenant-wise data into memory when a request arrives for a particular tenant, so memory consumption grows with the number of active tenants.

When specifying memory usage in a Kubernetes deployment, one of the main things you need to consider is the memory request and limit. The request specifies the minimum memory the application must have to operate. Although it is described as a minimum, this value should be enough for the application even when it is operating at the desired load level. If a sudden load spike happens, Kubernetes can handle it in two ways:

  1. If memory is available on the node in the AKS cluster, the pod’s memory usage can grow from the requested amount up to the limit.
  2. If memory is available in the AKS cluster and a Horizontal Pod Autoscaler is configured and in place, new pods can be spawned to handle the extra load.

Which of these happens first depends on your Horizontal Pod Autoscaler configuration and the amount of load applied. But if your application takes a long time to start and become ready, it is better to rely on option 1 rather than option 2.
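As a minimal sketch of how these values are expressed, the request and limit go in the container spec of the WSO2 IS Deployment. The image tag and resource sizes below are illustrative assumptions, not recommendations; tune them against your own load tests.

# A fragment of the Deployment's pod spec; all values are placeholders.
containers:
  - name: wso2is
    image: wso2/wso2is:5.11.0   # example image tag; use the image you actually deploy
    resources:
      requests:
        memory: "2Gi"   # should cover the desired steady-state load
        cpu: "1000m"
      limits:
        memory: "3Gi"   # headroom for sudden load spikes
        cpu: "2000m"

With settings like these, a pod can burst from 2Gi up to 3Gi of memory during a spike (option 1) before the autoscaler discussed under Scaling needs to add another pod (option 2).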

To continuously monitor the JVM memory metrics, we have introduced a feature that logs the JVM memory details to stdout. It can be enabled by adding the following configuration to deployment.toml:

[carbon_memory_logger]
enable = true
interval = 2

This will print a log entry similar to the following:

[2021-04-07 10:26:31,489] [] INFO {org.wso2.carbon.healthcheck.api.core.JavaMemoryUsageLogger} - JVM Memory Usage: Heap Used: 253M, Heap Committed: 696M, Heap Max: 932M, Non Heap Used: 223M, Non Heap Committed: 246M, Non Heap Max: 0M

You can push these logs to any log analytics tool, such as Azure Log Analytics. From there, we were able to monitor the real-time JVM-level memory usage of each container, with each pod shown as a separate series.

Pod CPU

In Kubernetes, we can configure the CPU request and limit for a pod just as we did for memory (see the resources snippet above). CPU utilization can be analyzed from log analytics in the same way as memory utilization.

Normally, what we expect to see is the CPU load divided roughly equally among the pods, with none of them reaching its limit. An erroneous scenario is one where a single pod keeps hitting its CPU limit.

Pods can over-utilize CPU when the deployment does not scale with CPU usage. We will discuss scaling later in this blog.

VM Size

If you are using a managed Kubernetes cluster, you need to decide on the instance (node) type when initializing the cluster. The instance type should be selected based on the number of pods you plan to run and the amount of resources you allocate to each pod. For example, if each WSO2 IS pod requests 1 CPU and 2 GiB of memory and you plan to run two pods per node, each node needs at least 2 CPUs and 4 GiB of memory, plus headroom for system daemons and any other workloads.

Logging

Logging can continuously consume a considerable amount of CPU, so it is better to print only the minimum required logs to stdout. By default, Identity Server writes its logs to files. When deploying on Kubernetes, it is better to change these file appenders to console appenders: if logs are stored in files, they are lost when the pod is killed and are hard to access, whereas logs written to stdout can be published to a log analytics agent [9] and stored there. Writing logs to a file can also degrade performance.

# CARBON_CONSOLE is set to be a ConsoleAppender using a PatternLayout.
appender.CARBON_CONSOLE.type = Console
appender.CARBON_CONSOLE.name = CARBON_CONSOLE
appender.CARBON_CONSOLE.layout.type = PatternLayout
appender.CARBON_CONSOLE.layout.pattern = [%X{Correlation-ID}] %m%n
appender.CARBON_CONSOLE.filter.threshold.type = ThresholdFilter
appender.CARBON_CONSOLE.filter.threshold.level = DEBUG

An appender such as the one above can be used to direct logs to stdout.

DB Connection Pool

Database connection pooling is a method of keeping database connections open so that they can be reused by subsequent requests. [2]

WSO2 Identity Server has a data source for each of its databases. By default, there are three data sources:

  1. Identity
  2. Shared
  3. User

To get a better idea of these databases, please refer to [3].

Each of these data sources has a separate connection pool. The size of each connection pool depends on two things:

  1. Number of concurrent requests that IS receives at a given time

One IS pod can handle 250 threads concurrently; this is configured in catalina-server.xml. [4] This means IS can handle 250 requests concurrently, and those 250 requests can create more than 250 DB queries against each of the above databases. If a data source does not have enough connections for those requests, they are queued, causing a performance lag. Hence it is better to have at least around 300 connections in the pool. This can be configured using the maxActive parameter [5]; a sample configuration is shown after this list.

  2. Resource limits of the database

Although we need a certain number of data source connections, we cannot achieve that if the database’s own resource limits are lower. Cloud vendors normally publish these limits in their documentation. [6]
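As a rough sketch of where this is set, the pool options live alongside each data source definition in deployment.toml. The hostname, credentials, and pool values below are placeholders assuming a MySQL identity database; adjust them to your environment and your database’s limits.

# Illustrative identity data source with pool options in deployment.toml.
[database.identity_db]
type = "mysql"
hostname = "identity-db.example.com"   # placeholder host
name = "WSO2_IDENTITY_DB"
username = "wso2user"                  # placeholder credentials
password = "wso2password"
port = "3306"

[database.identity_db.pool_options]
maxActive = "300"    # keep above the expected number of concurrent DB queries
maxWait = "60000"    # milliseconds to wait for a free connection
testOnBorrow = true

A similar pool_options block can be added for the shared and user data sources if they handle comparable load.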

DB CPU

Normally, the most heavily utilized database in an Identity Server deployment is the identity database. Of course, this utilization depends heavily on the load the IS pods are handling and on the type of database you are using. But when allocating resources, it is better to give the identity database at least twice the CPU of the other databases. Since the identity database carries so much load, it is also better to divide that load into two databases, one for session data and one for identity data. This functionality will be supported from WSO2 IS 5.12.0.

Scaling

Scaling can be done in two ways.

  • Horizontal scaling — Adding more pods or machines.

This can easily be achieved through a Horizontal Pod Autoscaler. [7]

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Resource","resource":{"name":"memory","targetAverageUtilization": xx }}]'
  name: wso2is-hpa
spec:
  maxReplicas: 6
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wso2is
  targetCPUUtilizationPercentage: yy

Here, the target average memory and CPU utilization (xx and yy) depend on your load and the resources allocated to each pod. Rather than picking a value blindly, start with an estimate and then tune it by load testing your deployment.

  • Vertical scaling — Allocating more resources to existing pods.

We cannot follow this approach manually very often. But at the Kubernetes level, we can set the memory and CPU requests and limits as explained above. [8] will help when determining these values.

I hope this guide helps you when deploying Identity Server on Kubernetes and tuning its performance. Please let me know your thoughts and comments.

[1]. Explaining Simple WSO2 Identity Server Kubernetes Deployment

[2]. https://stackoverflow.com/questions/4041114/what-is-database-pooling

[3]. https://is.docs.wso2.com/en/latest/setup/working-with-databases/

[4]. https://docs.wso2.com/display/Carbon440/Configuring+catalina-server.xml

[5]. https://is.docs.wso2.com/en/latest/setup/changing-to-mysql/

[6]. https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-dtu-single-databases#basic-service-tier

[7]. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

[8]. https://is.docs.wso2.com/en/latest/setup/installation-prerequisites/

[9]. https://aws.amazon.com/blogs/containers/how-to-capture-application-logs-when-using-amazon-eks-on-aws-fargate/
