David Young

For the last few months, I’ve been working on converting my Docker Swarm “private cloud” design in the Geek’s Cookbook into a Kubernetes design compatible with public cloud providers, but still friendly to self-hosted apps.

One of the most challenging elements was managing inbound connections into the cluster. I needed many inbound ports (currently 35: mining pools, MQTT, UniFi, etc.), and providing ingress via my cloud provider’s load balancer was cost-prohibitive.

I ended up home-brewing an HAProxy solution on an external VM with a webhook, so that my containers could “phone home” from the Kubernetes cluster and report the public IP of the node they were running on. This let me use NodePort to expose all the ports I wanted (at no cost) in the cluster, on hosts with unpredictable IP addresses, while serving each port on the fixed public IP of my HAProxy VM.
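The “phone home” pattern could be sketched roughly as follows. This is a minimal illustration, not my actual implementation: the webhook URL, the payload field names, and the environment variable names are all assumptions. The node’s IP would be injected into the pod via the Kubernetes downward API (`fieldRef: status.hostIP`), and the script would POST it to the webhook on the HAProxy VM so the VM can update its backend for that service.

```python
import json
import os
import urllib.request

def build_payload(service: str, node_ip: str, node_port: int) -> bytes:
    """JSON body describing where this service is currently reachable.
    The field names here are illustrative, not a real schema."""
    return json.dumps(
        {"service": service, "ip": node_ip, "port": node_port}
    ).encode()

def phone_home(webhook_url: str, payload: bytes) -> None:
    """POST the payload to the (hypothetical) webhook on the HAProxy VM."""
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)

# Run as a pod startup hook; only fires when the env vars are present.
if __name__ == "__main__" and "WEBHOOK_URL" in os.environ:
    payload = build_payload(
        service=os.environ.get("SERVICE_NAME", "mqtt"),
        node_ip=os.environ["NODE_IP"],        # from downward API: status.hostIP
        node_port=int(os.environ["NODE_PORT"]),
    )
    phone_home(os.environ["WEBHOOK_URL"], payload)
```

On the HAProxy side, the webhook handler would rewrite the relevant `server` line in the backend and reload HAProxy, so traffic to the VM’s fixed IP always lands on whichever node currently runs the service.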

This solution also bypassed the annoying fact that NodePort services are, by default, restricted to the 30000–32767 port range (i.e., no 443 for you!)
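For illustration, here’s what such a NodePort service might look like (names and port numbers are hypothetical). The `nodePort` must fall in the default 30000–32767 range, so HAProxy on the external VM listens on the “real” port (1883) and forwards to `<node-ip>:31883`:

```yaml
# Illustrative Service manifest, not from the actual design.
apiVersion: v1
kind: Service
metadata:
  name: mqtt
spec:
  type: NodePort
  selector:
    app: mqtt
  ports:
    - name: mqtt
      port: 1883        # in-cluster service port
      targetPort: 1883  # container port
      nodePort: 31883   # must be within 30000-32767 by default
```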

The entire design, covering why I like Kubernetes, setting up a basic Kubernetes cluster on DigitalOcean, configuring a “poor man’s loadbalancer”, automated PV snapshots for backup, and Traefik for ingress HTTPS termination, can be found here.

Here’s a highly technical and accurate diagram: