I'm looking to better understand load balancing concepts in a "self-hosted" Kubernetes environment.
[cluster setup architecture image]
I have an external load balancer, let's label it `LB`. This is just a VM running NGINX. I then have 2 master nodes for my control plane and 3 worker nodes for my data plane, where 2 of the 3 worker nodes run the NGINX ingress controller. I want to route inbound traffic to different services via `LB`.
How should I think about routing in this case? Ideally I want to automate this setup 100%.
- Would the NGINX config on `LB` point to all the worker nodes running the ingress controller to handle routing TCP traffic?
- Would I add additional configuration on `LB` to also point `cluster.example.com` to the IP addresses of my master nodes?
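For what it's worth, here is a rough sketch of what I imagine the `LB` config could look like, covering both cases above: a `stream` block doing TCP pass-through to the API server on the masters, and an `http` block forwarding web traffic to the worker nodes running the ingress controller. All IPs and ports below are placeholders for my actual node addresses, not anything authoritative:

```nginx
# Hypothetical /etc/nginx/nginx.conf on the LB VM.
# Addresses are made up; substitute your real master/worker node IPs.

# TCP pass-through for the Kubernetes API server (kubectl etc.)
stream {
    upstream kube_apiserver {
        server 10.0.0.11:6443;   # master-1
        server 10.0.0.12:6443;   # master-2
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}

# HTTP traffic forwarded to the workers running the ingress controller
http {
    upstream ingress_nodes {
        server 10.0.0.21:80;     # worker-1 (ingress controller)
        server 10.0.0.22:80;     # worker-2 (ingress controller)
    }
    server {
        listen 80;
        location / {
            proxy_pass http://ingress_nodes;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

The ingress controller would then do the host/path-based routing to individual services; `LB` only needs to know which nodes run it.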
I was researching MetalLB, but it seems more suitable for those who have some sort of pool of unallocated IPv4 addresses leased to them. In my case, I'm just running a VM on a cloud provider that assigns a single public IPv4 address to my `LB`. To add on, I think what confused me was how the automatic config updates happen. The best I can come up with is using HashiCorp Consul to sync Kubernetes services/nodes into Consul, and then writing a custom template to auto-update the NGINX config on the `LB` node.
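To make the Consul idea concrete, this is roughly what I have in mind with consul-template running on the `LB` node. The service name, file paths, and the assumption that something (e.g. a catalog sync) registers the ingress nodes in Consul are all hypothetical:

```nginx
# consul-template config on the LB node (hypothetical paths):
#
# template {
#   source      = "/etc/nginx/templates/upstreams.conf.ctmpl"
#   destination = "/etc/nginx/conf.d/upstreams.conf"
#   command     = "nginx -s reload"
# }

# /etc/nginx/templates/upstreams.conf.ctmpl
# Renders one `server` line per healthy instance of the (assumed)
# "ingress-nginx" service registered in Consul.
upstream ingress_nodes {
{{ range service "ingress-nginx" }}
    server {{ .Address }}:{{ .Port }};
{{ end }}
}
```

When nodes join or leave, consul-template would re-render the file and reload NGINX, which is the automation piece I'm unsure about.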
Goals summary
I want:
- The external load balancer to be able to hit the master nodes for things like `kubectl`
- The ability to use the external load balancer to route requests to my different services when using the ingress controller
- I want this to be easily automated