# Load Balancer Stack
The cluster exposes services to the home network through a combination of Traefik, KubeVIP, and Cilium. See Network Architecture for the full dataplane overview; this page focuses on the operational details for the load-balancing components.
## Building Blocks
- Traefik acts as the ingress controller and terminates TLS for HTTP(S) workloads.
- KubeVIP provides virtual IP addresses for both the Kubernetes API and `LoadBalancer` Services.
- Cilium supplies the pod network, kube-proxy-free service routing, and transparent encryption, but does not announce external VIPs in this setup.
## Control-Plane Virtual IP
The DaemonSet in `apps/kubevip-ha/` keeps the Kubernetes API reachable at `https://192.168.1.230:6443`. Nodes labelled as control-plane members compete for leadership; the winner advertises the VIP on interface `eno1` using ARP.
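The VIP, interface, and leader-election settings are all carried as environment variables on the kube-vip container. The excerpt below is a minimal sketch using kube-vip's variable names, not a copy of the committed manifest; `apps/kubevip-ha/k8s.kubevip.yaml` remains the source of truth.

```yaml
# Illustrative excerpt only; the committed DaemonSet manifest is authoritative.
env:
  - name: address              # control-plane VIP held by the elected leader
    value: "192.168.1.230"
  - name: vip_interface        # interface the VIP is bound to and announced from
    value: "eno1"
  - name: vip_arp              # announce the VIP with gratuitous ARP
    value: "true"
  - name: cp_enable            # load-balance the Kubernetes API
    value: "true"
  - name: vip_leaderelection   # only one node holds the VIP at a time
    value: "true"
```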
To rotate the static manifest:
- Pull the desired `ghcr.io/kube-vip/kube-vip` image.
- Generate a new DaemonSet manifest with the `manifest daemonset` helper.
- Copy the output into `apps/kubevip-ha/k8s.kubevip.yaml`, trimming the `creationTimestamp` and `status` fields.
Example using containerd:
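The tag shown is only an example; substitute the release you want, and keep the flags aligned with the interface and VIP used above.

```bash
# Pull the desired kube-vip image (tag is illustrative).
ctr image pull ghcr.io/kube-vip/kube-vip:v0.8.0

# Run the manifest helper inside the image to emit a DaemonSet for this cluster,
# then review the output before copying it into apps/kubevip-ha/k8s.kubevip.yaml.
ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v0.8.0 vip \
  /kube-vip manifest daemonset \
    --interface eno1 \
    --address 192.168.1.230 \
    --inCluster \
    --controlplane \
    --services \
    --arp \
    --leaderElection > kube-vip-daemonset.yaml
```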
GitHub releases: https://github.com/kube-vip/kube-vip/releases
## Service Load Balancers
The deployment in `apps/kubevip-cloud-controller/` installs the kube-vip cloud controller manager. It watches every Service of type `LoadBalancer` and assigns a VIP from the range defined in the generated `kubevip` ConfigMap (`cidr-global=192.168.1.231-192.168.1.239` by default). Because Cilium L2 announcements are disabled, the kube-vip DaemonSet advertises the VIP from whichever node currently hosts the matching backend pod, so the address "follows" the pod; Cilium's L2 announcements currently lack this behaviour.
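For reference, a minimal sketch of what that ConfigMap looks like with the default pool; the generated manifest in the repo is authoritative:

```yaml
# Illustrative only; the generated kubevip ConfigMap is the source of truth.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-global: 192.168.1.231-192.168.1.239   # pool the cloud controller allocates VIPs from
```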
When you create a new external-facing service (for example Traefik or kube-prometheus-stack's Alertmanager):
- Set the Service `type: LoadBalancer`.
- Optional: pin a specific address by setting `spec.loadBalancerIP` to an IP inside the configured range (see the sketch after this list).
- Watch the Service until the `EXTERNAL-IP` field displays a value from the pool.
- Configure DNS or clients to reach the assigned address.
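A minimal sketch of such a Service, using a hypothetical `example-app` backend and an address from the pool above:

```yaml
# Hypothetical example; adjust name, selector, ports, and address to your workload.
apiVersion: v1
kind: Service
metadata:
  name: example-app
  namespace: default
spec:
  type: LoadBalancer             # the kube-vip cloud controller allocates a VIP for this
  loadBalancerIP: 192.168.1.235  # optional: pin an address inside the configured range
  selector:
    app: example-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
```

Once the controller assigns the address, `kubectl get svc example-app` shows it under `EXTERNAL-IP`.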
Although KubeVIP owns the VIP, the per-pod data plane is still provided by Cilium in hybrid DSR mode, so backend pods see the original client IP and respond directly, improving throughput.
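The Cilium side of this behaviour is driven by its Helm values. The sketch below uses the upstream chart's value names and is not a copy of the repo's values file, which remains authoritative:

```yaml
# Illustrative Cilium Helm values; the repo's own values file is the source of truth.
kubeProxyReplacement: true   # kube-proxy-free service routing
loadBalancer:
  mode: hybrid               # DSR where possible, SNAT as a fallback
l2announcements:
  enabled: false             # external VIP announcement is left to kube-vip
encryption:
  enabled: true              # transparent encryption; type (WireGuard or IPsec) per cluster config
```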
## Updating the Cloud Controller
The deployment references `ghcr.io/kube-vip/kube-vip-cloud-provider:v0.0.12`. To upgrade:
- Update the image tag in `apps/kubevip-cloud-controller/k8s.kube-vip-cloud-controller.yaml` (see the sketch after this list).
- Apply the same change to any overlays in `apps/overlays/kubevip-cloud-controller/`.
- Commit and push; Argo CD will roll out the new deployment.
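The change itself is a one-line image bump. A sketch, assuming the container is named `kube-vip-cloud-provider` and with `v0.0.13` standing in for whichever release you target:

```diff
 # apps/kubevip-cloud-controller/k8s.kube-vip-cloud-controller.yaml
       containers:
         - name: kube-vip-cloud-provider
-          image: ghcr.io/kube-vip/kube-vip-cloud-provider:v0.0.12
+          image: ghcr.io/kube-vip/kube-vip-cloud-provider:v0.0.13
```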
## Troubleshooting Checklist
- Verify the control-plane DaemonSet is running on each leader-capable node: `kubectl -n kube-system get pods -l app.kubernetes.io/name=kube-vip-ds`
- Confirm Services receive VIP assignments: `kubectl -n kube-system logs deploy/kube-vip-cloud-provider`
- Verify the configured CIDR range matches your intended pool: `kubectl -n kube-system get configmap kubevip -o yaml`
- Check which node currently owns the VIP: `ip addr show dev eno1 | rg 192.168.1.23` on a node
- Confirm the service map contains the expected frontend (VIP) and backend pods: `cilium service list`
If ARP announcements stop, restart the kube-vip DaemonSet pod on the current leader and verify that no other host on the network (for example one holding a stale DHCP lease) is claiming the same IP.
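A sketch of both checks, assuming the DaemonSet pods carry the `app.kubernetes.io/name=kube-vip-ds` label used above and that `arping` is installed (substitute the current leader's node name):

```bash
# Restart the kube-vip pod on the current leader; the DaemonSet recreates it.
kubectl -n kube-system delete pod \
  -l app.kubernetes.io/name=kube-vip-ds \
  --field-selector spec.nodeName=<leader-node>

# From a node (or any LAN host, adjusting the interface name), see which MAC
# answers for the VIP; replies from more than one MAC indicate a conflict.
arping -I eno1 -c 3 192.168.1.230
```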