Affinity In Nginx Ingress Controller-2

Buddhima Udaranga
Dec 10, 2022


In an earlier blog we discussed the basics of session affinity and how to configure it. In this blog we will focus on the internals of the Nginx ingress controller.

Going through this blog you will learn what happens to a request that reaches the ingress controller: which methods are executed internally and how session affinity is maintained.

How the Nginx controller generates nginx.conf

If you have worked with Nginx you should be familiar with its configuration file, nginx.conf. Have you ever wondered what that configuration file looks like inside the Nginx ingress controller?

With the Nginx ingress controller we provide configuration through Ingress YAML files. For example, you would create something like the following.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: demo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80

When we apply this kind of configuration through the Kubernetes API, it is picked up by the Nginx ingress controller, which reloads Nginx and regenerates its configuration file.

In that configuration file (nginx.conf), a block like the one below is created for the Ingress resource we provided above.

## start server demo.com
server {
    server_name demo.com;

    listen 80 ;
    listen [::]:80 ;
    listen 443 ssl http2 ;
    listen [::]:443 ssl http2 ;

    set $proxy_upstream_name "-";

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    location ~* "/" {

        set $namespace "default";
        set $ingress_name "hello-world-ingress";
        set $service_name "hello";
        set $service_port "80";
        set $location_path "/";

        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }

        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
        # other authentication method such as basic auth or external auth useless - all requests will be allowed.
        #access_by_lua_block {
        #}

        header_filter_by_lua_block {
            lua_ingress.header()
            plugins.run()
        }

        body_filter_by_lua_block {
        }

        log_by_lua_block {
            balancer.log()

            monitor.call()

            plugins.run()
        }

        port_in_redirect off;

        set $balancer_ewma_score -1;
        set $proxy_upstream_name "hello";
        set $proxy_host $proxy_upstream_name;
        set $pass_access_scheme $scheme;

        set $pass_server_port $server_port;

        set $best_http_host $http_host;
        set $pass_port $pass_server_port;

        set $proxy_alternative_upstream_name "";

        client_max_body_size 1m;

        proxy_set_header Host $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;

        proxy_set_header Connection $connection_upgrade;

        proxy_set_header X-Request-ID $req_id;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_set_header X-Forwarded-For $remote_addr;

        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;

        proxy_set_header X-Scheme $pass_access_scheme;

        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";

        # Custom headers to proxied server

        proxy_connect_timeout 5s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;

        proxy_buffering off;
        proxy_buffer_size 4k;
        proxy_buffers 4 4k;

        proxy_max_temp_file_size 1024m;

        proxy_request_buffering on;
        proxy_http_version 1.1;

        proxy_cookie_domain off;
        proxy_cookie_path off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0;
        proxy_next_upstream_tries 3;

        proxy_set_header X-Forwarded-Server $host;

        proxy_pass https://upstream_balancer;

        proxy_redirect off;
    }
}
## end server demo.com

Here you can see that the host in the Ingress YAML has turned into a server block in nginx.conf, the path has become a location, and the namespace, service name and service port are set as variables inside that location.

Request processing inside the Nginx controller

Let's assume our hello application is now hosted behind the Nginx controller with the above configuration, and someone makes a request like the one below.

curl -k -v https://demo.com/

Here Nginx uses the generated nginx.conf to process the request. Since the host is demo.com, the request is directed to the server block whose server_name is demo.com. Then, since the path is /, the request is matched to the location / block.

All the configuration attached to the matched server and location is applied to the request, and the request is finally handed to the proxy_pass directive, which points to the upstream_balancer upstream.

upstream upstream_balancer {

    server 0.0.0.1; # placeholder

    balancer_by_lua_block {
        balancer.balance()
    }

    keepalive 320;

    keepalive_timeout 60s;
    keepalive_requests 10000;
}

The upstream_balancer block is shown above. On every request it calls balancer.balance(), which in turn invokes the get_balancer() function in balancer.lua [1] and returns a balancer object.

local function get_balancer()
  if ngx.ctx.balancer then
    return ngx.ctx.balancer
  end

  local backend_name = ngx.var.proxy_upstream_name
  -- debug log added while tracing the flow
  ngx.log(ngx.STDERR, "####################### backend name " .. backend_name)

  local balancer = balancers[backend_name]
  if not balancer then
    return nil
  end
  -- debug log added while tracing the flow
  ngx.log(ngx.STDERR, "####################### balancer " .. balancer.name)

  if route_to_alternative_balancer(balancer) then
    local alternative_backend_name = balancer.alternative_backends[1]
    ngx.var.proxy_alternative_upstream_name = alternative_backend_name

    balancer = balancers[alternative_backend_name]
  end

  ngx.ctx.balancer = balancer
  ngx.log(ngx.STDERR, "####################### balancer2 " .. balancer.name)
  return balancer
end

Here the balancer object is selected based on the upstream name defined in nginx.conf (the proxy_upstream_name variable). The Lua balancer then hands the chosen peer over to Nginx's own balancer, which is responsible for the rest of the routing.
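
To make that hand-off concrete, here is a simplified sketch (not the verbatim upstream code) of what the balance() entry point in balancer.lua roughly does: it reuses get_balancer() from above, asks the chosen balancer object for a peer, and passes that peer to Nginx through the lua-resty-core ngx.balancer API.

local ngx_balancer = require("ngx.balancer")

-- Simplified sketch of the balance() entry point; the real code in
-- balancer.lua differs in details (retries, error formatting, etc.).
local function balance()
  local balancer = get_balancer()
  if not balancer then
    return
  end

  -- ask the concrete implementation (round robin, sticky, ...) for a peer,
  -- e.g. "10.244.0.12:8080"
  local peer = balancer:balance()
  if not peer then
    ngx.log(ngx.WARN, "no peer was returned, balancer: " .. balancer.name)
    return
  end

  -- hand the chosen endpoint over to Nginx's balancer phase
  local host, port = peer:match("^(.+):(%d+)$")
  local ok, err = ngx_balancer.set_current_peer(host, tonumber(port))
  if not ok then
    ngx.log(ngx.ERR, "error while setting current upstream peer: " .. tostring(err))
  end
end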

Class Diagram for Balancer

It is easier to explain the balancer implementation in an object-oriented manner. Each balancer has a constructor and a balance method.

The sticky balancer and the round-robin balancer each have their own implementation of the balance method, and both extend the base Balancer. In this blog we will focus on the sticky implementation, since round robin was discussed in the earlier blog and is fairly obvious. A simplified sketch of this structure is shown below.
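
As a purely illustrative sketch (the names below are simplified; the real implementations live under rootfs/etc/nginx/lua/balancer/ in the ingress-nginx repository), the structure looks roughly like this in Lua:

-- Illustrative sketch of the balancer "class hierarchy"; names are simplified.

-- Base "class": built from a backend, keeps its endpoints.
local Balancer = {}
Balancer.__index = Balancer

function Balancer.new(backend)
  return setmetatable({
    name = backend.name,
    endpoints = backend.endpoints, -- list of "ip:port" strings
  }, Balancer)
end

function Balancer:balance()
  error("balance() must be implemented by a concrete balancer")
end

-- Round-robin "subclass": overrides balance() to rotate over the endpoints.
local RoundRobin = setmetatable({}, { __index = Balancer })
RoundRobin.__index = RoundRobin

function RoundRobin.new(backend)
  local self = Balancer.new(backend)
  self.index = 0
  return setmetatable(self, RoundRobin)
end

function RoundRobin:balance()
  self.index = self.index % #self.endpoints + 1
  return self.endpoints[self.index]
end

-- The sticky "subclass" follows the same constructor/balance pattern, but its
-- balance() first consults the affinity cookie, as shown in the next snippet.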

The sticky balancer object has its own implementation of the balance method. Let's see what it does.

From here onwards it gets really interesting, so I hope you will enjoy it!

function _M.balance(self)
  local upstream_from_cookie

  local key = self:get_cookie()
  if key then
    upstream_from_cookie = self.instance:find(key)
  end

  local last_failure = self.get_last_failure()
  local should_pick_new_upstream = last_failure ~= nil and
    self.cookie_session_affinity.change_on_failure or upstream_from_cookie == nil

  if not should_pick_new_upstream then
    return upstream_from_cookie
  end

  local new_upstream
  new_upstream, key = self:pick_new_upstream(get_failed_upstreams())
  if not new_upstream then
    ngx.log(ngx.WARN, string.format("failed to get new upstream; using upstream %s", new_upstream))
  elseif should_set_cookie(self) then
    self:set_cookie(key)
  end

  return new_upstream
end

Basically, what it does is pick and return an upstream. If you have read the earlier blog, I explained there that the Nginx controller keeps the pod IP addresses as upstreams, so what this balancer really does is pick one of those IP addresses based on stickiness.

If an upstream has failed, the balancer will pick a new one. As described in the previous blog, stickiness is defined based on a cookie, and this cookie carries a key (there is an example in the previous blog).

The Nginx controller maintains a mapping between this cookie key and the upstream IP, roughly like the sketch below.
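
As a hedged illustration of that map (the real sticky balancer builds self.instance from the backend's endpoints; the IP addresses and key below are made up), the lookup is essentially a consistent-hash query keyed by the cookie value:

local resty_chash = require("resty.chash")

-- hypothetical endpoints of the "hello" pods, weight 1 each
local nodes = {
  ["10.244.0.12:8080"] = 1,
  ["10.244.0.13:8080"] = 1,
}

-- this plays the role of self.instance in the balance() code above
local instance = resty_chash:new(nodes)

-- the value stored in the affinity cookie is the lookup key, so the same
-- cookie always maps to the same endpoint (as long as that endpoint is alive)
local endpoint = instance:find("a5c3b2e1d4") -- e.g. "10.244.0.12:8080"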

Let's go through it step by step now.

  1. First the balancer checks whether a key (cookie) is available.
  2. If it is, the key is looked up in the map to get the upstream IP.
  3. If that upstream has a failure recorded, for example because the pod was killed, the balancer decides to pick a new upstream.
  4. When picking the new upstream, it generates a new key and cookie.
  5. The new upstream is then stored against the key, replacing the earlier value.

So this is basically how the Nginx controller maintains session affinity on a Kubernetes platform based on an affinity cookie.
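
The cookie handling behind steps 1, 4 and 5 is done with lua-resty-cookie through the get_cookie/set_cookie helpers in sticky.lua. A rough, hedged sketch of what those helpers boil down to (INGRESSCOOKIE is only the default cookie name and the attributes depend on your annotations):

local resty_cookie = require("resty.cookie")

local COOKIE_NAME = "INGRESSCOOKIE" -- default; configurable via the session-cookie-name annotation

-- step 1: read the affinity key sent by the client, if any
local function get_affinity_key()
  local cookie, err = resty_cookie:new()
  if not cookie then
    return nil, err
  end
  return cookie:get(COOKIE_NAME)
end

-- steps 4 and 5: after a new upstream has been picked, send the new key back
-- so the client keeps hitting the same endpoint on subsequent requests
local function set_affinity_key(key)
  local cookie = resty_cookie:new()
  local ok, err = cookie:set({
    key = COOKIE_NAME,
    value = key,
    path = "/",
  })
  if not ok then
    ngx.log(ngx.WARN, "failed to set affinity cookie: " .. tostring(err))
  end
end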

I had to go through all this code because I tried to customize it for a partitioning requirement. I don't think there are many other resources that describe this flow at this level of detail. Hope you enjoyed it as well!

[1]. https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/lua/balancer.lua#L273
