I’m a retired Unix admin; it was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home, even though I have a decent understanding of how it works: after I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.
It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.
I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?
You don’t actually have to care about defining IPs, CPU/RAM reservations, etc. Your docker-compose file just defines the applications you want and a port mapping or two, and that’s it.
Example:
```yaml
---
version: "2.1"
services:
  adguardhome-sync:
    image: lscr.io/linuxserver/adguardhome-sync:latest
    container_name: adguardhome-sync
    environment:
      - CONFIGFILE=/config/adguardhome-sync.yaml
    volumes:
      - /path/to/my/configs/adguardhome-sync:/config
    ports:
      - 8080:8080
    restart: unless-stopped
```
That’s it, you run
docker-compose up
and the container starts, reads your config from your config folder, and exposes port 8080 to the rest of your network.

Oh… But that means I need another server with a reverse proxy to actually reach it by domain/IP? Luckily Caddy already runs fine 😊
Thanks man!
Most people set up a reverse proxy, yes, but it’s not strictly necessary. You could certainly change the port mapping to
443:8080
and expose the application port directly that way (Docker port mappings are host:container), but then you’d obviously have to jump through some extra hoops for certificates, etc.

Caddy is a great solution (and there’s even a container image for it 😉)
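For what it’s worth, the Caddy side is only a couple of lines once the container is up. A minimal sketch, assuming the compose file above; `adguard.example.com` is a placeholder for whatever hostname you actually use:

```
adguard.example.com {
	# Forward requests to the container's published port on this host
	reverse_proxy 127.0.0.1:8080
}
```

Caddy obtains and renews the certificate for that name automatically, so there’s no manual certificate juggling for this one.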
Lol… nah, I somehow prefer at least Caddy non-containerized. With many domains and ports, I don’t think that would work great in a container with the certificates (which I also need to manually copy regularly to some apps). But what do I know 😁
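(For anyone reading along who does want to try Caddy containerized: a minimal compose sketch using the official `caddy` image could look like the one below. The host paths are placeholders; the `/data` mount is where Caddy keeps its certificates, so they persist across container restarts and remain readable from the host.)

```yaml
services:
  caddy:
    image: caddy:latest
    container_name: caddy
    ports:
      - 80:80     # HTTP, needed for ACME challenges and redirects
      - 443:443   # HTTPS
    volumes:
      - /path/to/Caddyfile:/etc/caddy/Caddyfile  # your site definitions
      - /path/to/caddy-data:/data                # certificates live here
    restart: unless-stopped
```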