Using Docker to self-host applications is an absolutely amazing thing when you stop to think about how much power you get from such a great tool. As we all know, with great power comes great responsibility. For a couple of years now, I've been showing you one way of setting up Docker containers with port mapping. We then reverse-proxy our traffic through something like NGinX Proxy Manager, giving us access to our application(s) through the web with a nice domain or sub-domain, along with the benefit of CA-signed SSL certificates issued by Let's Encrypt.
It's a fine system, but there is a better and more secure way to do it. You've been telling me this for years, and for whatever reason, it just never clicked with me until now. So, here I am, thankful to all of you who've told me over and over how to do this...thankful for your patience, and your persistence.
Making Our Containers Talk through Internal Docker Networks
The first thing to realize is that we can create our own networks inside of Docker. I've known this for a while, but never pieced it all together until now. Once we create that network and assign applications to it, the only piece left is to assign NGinX Proxy Manager (or Traefik, or Caddy, or HAProxy, or whatever reverse proxy you prefer) to that network as well.
This is where the magic happens. Once these are all part of the same network, we can simply proxy our traffic without having to expose host ports for the containers. We can also use the container name to route the traffic, instead of the container IP. When it finally clicked with me, it was like angels singing in the background.
Creating a Docker Network
We can do this through the command line, and it's quite easy, but I have to say using a tool like Portainer is much nicer. So, if you don't have Portainer, I highly recommend taking a look at it. You get a great GUI for managing your Docker, Docker Swarm, and Kubernetes setups, and it provides tons of capabilities.
For now, though, we'll use the CLI (command line interface).
docker network create -d bridge <network-name-you-want>
The above command creates a user-defined bridge network: a software bridge on the Docker host that lets the containers attached to it communicate with one another. We want a bridge in most cases. You can, however, create an overlay network as well, but you'd use that for applications that span multiple Docker hosts, or instances. For now, bridge is what we need.
Next, we need to attach (connect) each of our Docker containers to the network we just created. This is where the secret sauce comes into play.
docker network connect <network-name-you-want> <container name>
In the above command, you type the network name you gave the network in the "create" command, and then the name of the container you are adding to the network. If you don't know the name of the container(s), you can see a list of running containers, including their names, with the command:
docker ps
Once you have your proxy (in my case I use NGinX Proxy Manager) and the other containers on the host connected to the same network (not the default bridge network, mind you; Docker only provides name-based DNS resolution on user-defined networks), you can set up your proxies using the container name.
Additionally, if you previously used port mapping to allow access to certain applications via the proxy, you can recreate those containers without the mappings and simply point your proxy at the container's internal web application port.
For instance, I set up Matomo, which exposes port 80 internally for its web server.
In NGinX Proxy Manager, I changed my Matomo setup as follows:
IP / Hostname: matomo <– name of my Matomo container
Port: 80 <– port number of the internal web server for Matomo
Click Save, and voila! Matomo is still up, running, and reachable via my reverse proxy. Now, you might be asking, "What's the difference?"
In the previous setup, with a host port mapped to port 80, I could go to Matomo via the host IP and the mapped host port. That completely worked around my reverse proxy and the SSL encryption I wanted on the system.
Now, I have to use the reverse proxy address, and I can force SSL! Encrypted access only! Excellent!
You can just repeat this for each container you have. Now, use this setup in conjunction with the Docker and Firewalls video from last week, and you'll be running in a much safer, more secure way!