This article is not about how Nginx Ingress can be deployed to EKS. It is about the externalTrafficPolicy used by the Nginx Service. It also applies to any Kubernetes cluster deployed with aws as a cloud-provider.
Requirements
Below is the set of requirements for an EKS cluster with high traffic, having more than 20 nodes, most of them spot instances:
- have all the EKS nodes able to route traffic to Nginx Ingress, using externalTrafficPolicy: Cluster
- in case any Nginx container becomes unhealthy, the EKS node should continue to serve traffic without any problems
- get the correct client IP addresses
- keep the Nginx Ingress Deployment slim and scale it up automatically when needed using an HPA
The issue
With the current recommendations from the Nginx and Kubernetes documentation, this is hard to achieve.
The recommendation for Nginx Ingress with an NLB, at least at the time this article was written, was to use externalTrafficPolicy: Local. This means that only the nodes running an Nginx Ingress container are able to communicate with the NLB; all the other nodes will be in an unhealthy state. If a container becomes unhealthy, the NLB health policy applies: the node will continue to receive traffic until the Target Group marks it as unhealthy. externalTrafficPolicy is set to Local in order to preserve the client IP address when Proxy Protocol is not enabled. If there is an intermediary node between the NLB and Nginx Ingress, the ingress will see the node IP as the client IP, which becomes a problem for whitelisting, among other things.
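For reference, below is a minimal sketch of a Service using the Local policy, as the recommendations describe (the name, selector, and ports are illustrative, not taken from any official manifest):

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx              # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # only nodes running an Nginx pod pass the NLB health checks
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443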
Everything looks kind of ok up to now with the policy set to Local, but only if the infrastructure is static and the nodes never change, which rarely happens these days, especially in the Cloud.
The recommendation from Kubernetes is to deploy the ingress as a DaemonSet or as a Deployment with pod anti-affinity, as sketched below.
Regardless of the deployment type (DaemonSet or Deployment), there is a high risk of Nginx Ingress containers becoming unhealthy/unresponsive, or of the rollout process terminating them. Issues will also arise during EKS upgrades, as the nodes need to be terminated.
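For completeness, here is a minimal sketch of the anti-affinity part of such a Deployment (the labels and image tag are assumptions for illustration):

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ingress-nginx-controller             # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname   # at most one Nginx pod per node
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ingress-nginx
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v0.35.0   # assumed image/tag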
There are also other issues:
- why use a DaemonSet when we all know how much traffic can be handled by only 2 Nginx pods?
- a dedicated node group could be created only for Nginx, but that does not solve the problem when containers crash: the node will become unavailable and the requests will still fail.
The patch
All of the above issues happen only because externalTrafficPolicy is set to Local. Setting externalTrafficPolicy to Cluster and enabling Proxy Protocol for the NLB resolves the problem, and the following things will work:
- the NLB will be able to forward the actual IP of the client to Nginx Ingress, even if the traffic goes via an intermediary node.
- all Kubernetes nodes will be able to serve traffic, as they will all be healthy under the Target Groups.
- Nginx Ingress can be rolled out as a Deployment which scales up automatically based on traffic, using an HPA configuration like the sketch below.
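A minimal HPA sketch for such a Deployment could look like this (the target name, replica counts, and CPU threshold are assumptions, not values from the original setup):

---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v1
metadata:
  name: ingress-nginx-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller   # assumed Deployment name
  minReplicas: 2                     # keeps the Deployment slim
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70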
The AWS NLB has support for Proxy Protocol V2. Nginx also has support for this type of protocol. The only missing part is how to enable the NLB Proxy Protocol when the NLB is created. For this task there are a few options:
- after the Nginx deployment, manually enable Proxy Protocol V2 for the NLB. Not great, but it will do the job; the option will remain checked for the NLB lifetime.
- use a third party plugin like aws-nlb-helper-operator, which will enable the Proxy Protocol for you as part of the deployment
- another option is to use the aws-load-balancer-controller and annotate the Nginx Service with its target group attributes annotation (see the sketch after this list).
- don’t use an NLB, stick with ELB/ALB
- use the NLB as per community recommendations
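For the aws-load-balancer-controller option, the annotation in question looks roughly like this (a sketch; verify the annotation against the controller version you run):

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # ask the controller to enable Proxy Protocol V2 on the Target Groups it manages
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true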
Example
The example below uses the aws-nlb-helper-operator. The required Nginx Ingress version is >= v0.35.0. The Service selector and ports shown below are illustrative; adjust them to your deployment.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
data:
  enable-real-ip: "true"
  hsts-include-subdomains: "false"
  large-client-header-buffers: "1024 256k"
  proxy-body-size: "100m"
  proxy-buffer-size: "512k"
  proxy-connect-timeout: "120"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
  real-ip-header: "proxy_protocol"
  server-name-hash-bucket-size: "128"
  server-tokens: "false"
  ssl-redirect: "false"
  use-gzip: "false"
  use-proxy-protocol: "true" # this is the Proxy Protocol part
  server-snippet: |
    # redirect plain HTTP (port 80 on the NLB, as reported by Proxy Protocol) to HTTPS
    if ( $proxy_protocol_server_port = 80 ) {
      return 308 https://$host$request_uri;
    }
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-nlb
  annotations:
    aws-nlb-helper.3scale.net/loadbalanacer-termination-protection: "true"
    aws-nlb-helper.3scale.net/enable-targetgroups-proxy-protocol: "true"
    aws-nlb-helper.3scale.net/targetgroups-deregisration-delay: "60"
    # below option will not do anything as of the time when the article was created
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # below option will not do anything as of the time when the article was created
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  externalTrafficPolicy: Cluster # not required as this is the default
  type: LoadBalancer                         # assumed: required for the NLB to be created
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed pod labels, adjust to your deployment
  ports:
    - name: http                             # assumed ports, adjust to your setup
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
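Once applied, the aws-nlb-helper-operator should set the proxy_protocol_v2.enabled attribute to true on the NLB Target Groups. This can be confirmed with aws elbv2 describe-target-group-attributes or in the AWS console, and the real client IP should start showing up in the Nginx access logs.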