Virtualization strategies for production

When your infrastructure is small this works, but what usually happens is that people install the Docker versions of nginx, HAProxy or Traefik on 2 or 3 different worker nodes and give them access to ports 80 and 443 locally. Docker (like any OCI runtime) then handles DNS resolution between containers internally. You configure the reverse proxy to listen on a vhost (basically a domain) and point that domain at the container's name. Something like

listen git-repo:80
redirect git-repo:443

listen git-repo:443
tls certs
https://git-repo reverse-proxy -> http://docker-name-gitlab

Obviously the above is pseudo-config; I don't remember the exact syntax, but that's the gist of it.
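For reference, a minimal sketch of what that pseudo-config might look like as actual nginx config. The server name and the `docker-name-gitlab` upstream are placeholders carried over from above, and the cert paths are assumptions:

```nginx
# Redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name git-repo.example.com;
    return 301 https://$host$request_uri;
}

# TLS termination + reverse proxy to the GitLab container by name
server {
    listen 443 ssl;
    server_name git-repo.example.com;

    ssl_certificate     /etc/nginx/certs/git-repo.crt;
    ssl_certificate_key /etc/nginx/certs/git-repo.key;

    location / {
        # "docker-name-gitlab" resolves via Docker's internal DNS,
        # as long as this nginx container is on the same network
        proxy_pass http://docker-name-gitlab;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```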

This way you can't even reach the backend services without going through the reverse proxy first, which is pretty secure. The same applies when splitting containers into multiple pods: GitLab in one pod would have its configuration point at the container name of Postgres, which gets resolved internally by Docker / k8s. Inside the container network they talk on their normal ports (or at least whatever ports their conf files tell them to listen on).
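As a sketch of that internal name resolution, assuming a Docker Compose setup (the service names `gitlab` and `postgres` and the password are placeholders I'm inventing here):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder secret
    # no "ports:" entry: Postgres is only reachable on the internal network

  gitlab:
    image: gitlab/gitlab-ce
    environment:
      # GitLab reaches the database by service name; Docker's embedded
      # DNS resolves "postgres" to the container's internal IP
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['db_host'] = 'postgres'
        gitlab_rails['db_port'] = 5432
    # again no published ports: traffic has to come through the reverse proxy
```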

And this only works with containerized reverse proxies, because a normal host-level install will not be able to resolve the internal container hostnames.
