
I'm looking to better understand load balancing concepts in a "self-hosted" Kubernetes environment.

[cluster setup architecture diagram]

I have an external load balancer, let's label it LB. This is just a VM running NGINX. I then have 2 master nodes for my control plane and 3 worker nodes for my data plane, where 2 of the 3 worker nodes run the NGINX ingress controller. I want to route incoming traffic to different services via the LB.

How should I think about routing in this case? Ideally I want to automate this setup 100%.

  • Would the NGINX config on the LB point to all the worker nodes running the ingress controller to handle routing TCP traffic?
  • Would I add additional configuration to the LB to also point cluster.example.com to the IP addresses of my master nodes? (A rough sketch of what such a config could look like follows this list.)
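
For reference, a minimal sketch of what the NGINX config on the LB could look like, assuming TCP passthrough via the stream module. The IP addresses and the ingress NodePorts (30080/30443) are placeholders, not values from my setup:

    # /etc/nginx/nginx.conf on the LB VM (TCP passthrough via the stream module)
    stream {
        # Kubernetes API servers on the master nodes (kubectl, cluster.example.com:6443)
        upstream kube_apiserver {
            server 10.0.0.1:6443;
            server 10.0.0.2:6443;
        }

        # Ingress controller NodePorts on the worker nodes running ingress-nginx
        upstream ingress_http {
            server 10.0.0.11:30080;
            server 10.0.0.12:30080;
        }
        upstream ingress_https {
            server 10.0.0.11:30443;
            server 10.0.0.12:30443;
        }

        server { listen 6443; proxy_pass kube_apiserver; }
        server { listen 80;   proxy_pass ingress_http; }
        server { listen 443;  proxy_pass ingress_https; }
    }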

I was researching MetalLB, but it seems more suitable for those who have some sort of pool of unallocated, leased IPv4 addresses. In my case, I'm just running a VM on a cloud provider that is assigned a single public IPv4 address for my LB. To add to that, I think what confused me was how the automatic config updates happen. The best I can come up with is using HashiCorp Consul to sync Kubernetes services/nodes into Consul and then writing a custom template to auto-update the NGINX config on the LB node.
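
If the Consul route is what you end up with, a minimal consul-template sketch could look like the following. The service name ingress-nginx and the file paths are placeholders; they assume the ingress nodes have been registered in Consul under that name:

    # /etc/nginx/ingress-upstream.conf.ctmpl - rendered by consul-template on the LB node
    upstream ingress_http {
    {{ range service "ingress-nginx" }}
        server {{ .Address }}:{{ .Port }};
    {{ end }}
    }

    # Run consul-template so NGINX reloads whenever the Consul catalog changes:
    # consul-template \
    #   -template "/etc/nginx/ingress-upstream.conf.ctmpl:/etc/nginx/conf.d/ingress-upstream.conf:nginx -s reload"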

Goals summary

I want:

  • The external load balancer to be able to reach the master nodes for things like kubectl
  • The ability to use the external load balancer to route requests to my different services via the ingress controller
  • This setup to be easily automated

1 Answer


I have a self-hosted LB working. I wound up going with MetalLB. The address pool doesn't need to be unallocated, leased public IPv4 addresses; it can be a private IPv4 range on the network your Kubernetes nodes sit on. I did this in the office on a private 192.168.1.x network and defined the IP address pool via CIDR as 192.168.2.0/24. The complexity that MetalLB sorts out is the communication between the LB address and the ingress controller (i.e. the NGINX ingress controller, as opposed to the ingress admission pods that run on every worker node). This distinction is important because you don't have as many ingress controllers as worker nodes. MetalLB can use layer 2 mode, which is what I used. It can also use BGP, but that is more complex as it requires a router, and I haven't pursued it yet.
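
For illustration, a minimal sketch of the layer 2 setup described above, assuming a MetalLB version that uses the IPAddressPool/L2Advertisement CRDs (older releases used a ConfigMap instead). The pool and advertisement names are placeholders; the 192.168.2.0/24 range follows the example above and should be adjusted to your network:

    # metallb-pool.yaml - IP address pool and layer 2 advertisement for MetalLB
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: office-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.2.0/24
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: office-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - office-pool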
