You’re probably already aware of this, but if you run Docker on Linux and use ufw or firewalld, it will bypass all your firewall rules. It doesn’t matter what your defaults are or how strict you are about opening ports; Docker has free rein to send and receive from the host as it pleases.
If you are good at manipulating iptables there is a way around this, but it also affects outgoing traffic and could interfere with the bridge. Unless you’re a pointy head with a fetish for iptables, this is a world of pain, so it isn’t really a solution.
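For reference, the hook Docker itself documents for this is the DOCKER-USER iptables chain, which is evaluated before Docker’s own forwarding rules. A rough, untested sketch (the interface and subnet are placeholders):

```sh
# Drop anything arriving on the external interface that isn't from a
# trusted subnet, before Docker's own rules get a chance to accept it.
# "eth0" and "192.168.1.0/24" are placeholders -- adjust for your setup.
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```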
There is a tool called ufw-docker that mitigates this by manipulating iptables for you. I was happy with this as a solution and it used to work well on my rig, but for some unknown reason it’s no longer working and Docker is back to doing its own thing.
Am I missing an obvious solution here?
It seems odd for a popular tool like Docker, one that’s also used in the enterprise, not to have a pain-free way around this.
I’ve read the article you pointed to. What is written there and what you wrote here are absolutely different things. Docker does integrate with firewalld and creates a zone. Have you tried configuring filters for that zone? Ufw is just too dumb because it is suited for workstations that do not forward packets at all, so it cannot be integrated with docker by design.
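If you haven’t, something like this is where I’d start (untested sketch; assumes Docker 20.10+ with the firewalld backend, and the subnet is a placeholder):

```sh
# See what Docker actually put into its firewalld zone
firewall-cmd --zone=docker --list-all

# Example filter: only accept traffic into that zone from a trusted subnet
# (192.168.1.0/24 is a placeholder)
firewall-cmd --permanent --zone=docker \
  --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" accept'
firewall-cmd --reload
```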
Docker by default will bind exposed ports to all IPs, but you can override this by setting an IP on the port exposed so that a local-only server is only accessible on 127.0.0.1
I do this with things that should go down my VPN only
https://docs.docker.com/reference/compose-file/services/#ports
you can override this by setting an IP on the port exposed so that a local-only server is only accessible on 127.0.0.1
Also, if the Docker container only has to be accessed from another Docker container, you don’t need to expose a port at all. Docker containers can reach other Docker containers in the same compose stack by hostname.
That might do the trick. Would you mind giving an example?
Something like this. This is a compose.yml that only allows connections from localhost on port 8080 to reach port 80 in the container.
```yaml
services:
  webapp:
    image: nginx:latest
    container_name: local_nginx
    ports:
      - "127.0.0.1:8080:80"
```

Ahh. Then route it through the firewall/pass it to a reverse proxy?
Well if your reverse proxy is also inside a container, you don’t need to expose the port at all. As long as the containers are in the same docker network then they can communicate.
If your reverse proxy is not inside a docker container, then yes this method would work to prevent clients from connecting to a docker container.
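A minimal sketch of that all-in-containers setup (image names are just examples); compose puts both services on one default network, so the proxy can reach the app by its service name:

```yaml
services:
  proxy:
    image: nginx:latest
    ports:
      - "443:443"   # the only port published on the host
    # proxy config points at http://webapp:80 via compose's built-in DNS

  webapp:
    image: nginx:latest
    # no ports: section at all -- unreachable from outside the docker network
```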
Thanks, given me something to think about.
Of course; feel free to DM me if you have questions.
This is a common setup. Have a firewall block all traffic. Use docker to punch a hole through the firewall and expose only 443 to the reverse proxy. Now any container can be routed through the reverse proxy as long as the container is on the same docker network.
If you define no network, the containers are put into a default bridge network; use docker inspect to see the container IPs.
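For example (the container name is a placeholder):

```sh
# Print the IP address(es) of a container on each network it's attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' my_container
```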
Here is an example of how to define a custom docker network called “proxy_net” and statically set each container ip.
```yaml
networks:
  proxy_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16

services:
  app1:
    image: nginx:latest
    container_name: app1
    networks:
      proxy_net:
        ipv4_address: 172.28.0.10
    ports:
      - "8080:80"

  whoami:
    image: containous/whoami:latest
    container_name: whoami
    networks:
      proxy_net:
        ipv4_address: 172.28.0.11
```

Notice how “whoami” is not exposed at all. The nginx container can now serve the whoami container with the proper config, pointing at 172.28.0.11.
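For anyone wondering what that “proper config” might look like, here’s a minimal sketch of the nginx side (the server_name is a placeholder):

```nginx
# Reverse-proxy sketch for the app1 nginx container.
# whoami listens on port 80 inside proxy_net.
server {
    listen 80;
    server_name whoami.example.com;  # placeholder hostname

    location / {
        proxy_pass http://172.28.0.11:80;
        proxy_set_header Host $host;
    }
}
```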
Instead of 8080:8080 port mapping you do 127.0.0.1:8080:8080
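Same idea on the CLI, if you’re not using compose (sketch; image and name are just examples):

```sh
# Publish container port 80 on the loopback interface only
docker run -d --name local_nginx -p 127.0.0.1:8080:80 nginx:latest
```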
Yeah, leaving unwanted ports open is a configuration problem. A firewall just gives you the opportunity to fuck up twice.
I just host everything on bare metal and use systemd to lock down/containerize things as necessary, even adding my own custom drop-ins for software that ships its own systemd service file. systemd is way more powerful than people often realize.
When you say you’re using systemd to lock down/containerize things as necessary, can you explain what you mean?
I don’t know what the commenter you replied to is talking about, but systemd has its own firewalling and sandboxing capabilities. They probably mean that they don’t use docker for deployment of services at all.
Here is a blogpost about systemd’s firewall capabilities: https://www.ctrl.blog/entry/systemd-application-firewall.html
Here is a blogpost about systemd’s sandboxing: https://www.redhat.com/en/blog/mastering-systemd
Here is the archwiki’s docs about drop in units: https://wiki.archlinux.org/title/Systemd#Drop-in_files
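As a concrete sketch of what a hardening drop-in can look like (the service name is hypothetical; IPAddressDeny/IPAddressAllow need systemd 235+ with eBPF support):

```ini
# /etc/systemd/system/myapp.service.d/hardening.conf  (hypothetical service)
[Service]
# Per-service firewall: drop all traffic except loopback
IPAddressDeny=any
IPAddressAllow=localhost

# Sandboxing
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
NoNewPrivileges=true
```

Then a `systemctl daemon-reload` and a restart of the service picks it up.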
I can understand why someone would like this, but this seems like a lot to learn and configure, whereas podman/docker deny most capabilities and network permissions by default.
In an enterprise setting, you shouldn’t trust the server firewall. You lock that down with your network equipment.
Edit: sorry, I failed to read the whole post 🤦♂️. I don’t have a good answer for you. When I used docker in my homelab, I exposed services using labels and a traefik container similar to this: https://docs.docker.com/guides/traefik/#using-traefik-with-docker
That doesn’t protect you from accidentally exposing ports, but it helps make it more obvious when it happens.
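For what it’s worth, the label-based setup looked roughly like this (untested sketch from memory; the hostname is a placeholder, see the linked guide for the full version):

```yaml
services:
  traefik:
    image: traefik:v3.1
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"   # the only published port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami
    # no ports: section; traefik routes to it by label
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
```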
In an enterprise setting, you shouldn’t trust the server firewall. You lock that down with your network equipment.
I thought someone might say this, but it doesn’t seem very zero-trust?
Ideally you’d still want the host to be as secure as humanly possible?
Yes, but having both in place can help mitigate lateral movement risk.
I use podman, which doesn’t suffer from that problem
+1 for Podman. I’ve found rootful Podman Quadlets to be a very nice alternative to Docker Compose, especially if you’re using systemd anyway for timers, services, etc.
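For anyone curious, a rootful Quadlet is just a unit-like file that Podman turns into a systemd service; a minimal sketch (image and names are examples):

```ini
# /etc/containers/systemd/webapp.container  (rootful Quadlet)
[Unit]
Description=Web app behind the host firewall

[Container]
Image=docker.io/library/nginx:latest
ContainerName=webapp
# Bind to loopback only, same trick as with compose
PublishPort=127.0.0.1:8080:80

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, Quadlet generates `webapp.service` and you start it like any other unit.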
If you are good at manipulating iptables there is a way around this
Modern systems shouldn’t be using iptables any more.
this is the second time I’ve seen a post like this.
docker has always been like this. if it’s news to you then you must be new to docker.
if you’re using the built-in firewall to secure your system on your wan, you’re doing it wrong. get a physical firewall. if you’re doing it to secure your lan then you just need to put in some proper routes and let your hardware firewall sort it out with some vlans.
don’t rely on firewalld or iptables for anything.
What if you rent a bare metal server in a data center? Or rent a VPS from a basic provider that expects you to do your own firewalling? Or run your home lab docker host on the same vlan as other less trusted hosts?
It would be nice if there was a reliable way to run a firewall on the same host that’s running docker.
You may say these are obscure use cases and that they are Wrong and Bad. Maybe you’re right, but personally I think it’s an unfortunate gap in expected functionality, if for no other reason than defense-in-depth.
What if you rent a bare metal server in a data center?
any msp will work with your security requirements for a cost. if you can’t afford it, then you shouldn’t be using an msp.
Or rent a VPS from a basic provider that expects you to do your own firewalling?
find a better msp. if a vendor you’re paying tells you to fuck off with your requirements for a secure system, they are telling you that you don’t matter to them and their only goal is to take your money.
Or run your home lab docker host on the same vlan as other less trusted hosts?
don’t? IDK what to tell you if you understand what a vlan is and still refuse to set one up properly to segment your network securely.
It would be nice if there was a reliable way to run a firewall on the same host that’s running docker.
don’t confuse reliable with convenient. iptables and firewalld are not reliable, but they are certainly convenient.
You may say these are obscure use cases and that they are Wrong and Bad. Maybe you’re right, but personally I think it’s an unfortunate gap in expected functionality, if for no other reason than defense-in-depth.
poor network architecture is no excuse. do it the proper way or you’re going to get your shit exposed one day.
I’ve had similar issues using the CSF firewall. They just pushed out updates that apparently support Docker a little better, but I still have to fight with it to get it working. I don’t know if that will fix your problem, but give it a try.
I would vote for the firewalld integration.
I use podman instead, though I’m honestly not certain this “fixes” the problem you described. I assume it does, purely because of the no-root point.
Agreeing with the other poster: using network tools, rather than relying on the server itself, is the professional fix.
Podman explicitly supports firewalls and does not bypass them like docker does, no matter whether you’re using root mode or not. So IMHO that is the more professional solution.






