You’ve never had to run migrations that lock tables or rebuild an index in two decades?
The official Postgres Docker image is geared towards a single database per instance for several reasons - security through isolation, and the ability to easily run different versions on the same server, chief among them. The performance overhead is negligible.
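As a minimal sketch of that second point (service names, versions, and passwords here are purely illustrative), two isolated Postgres majors can live side by side in one compose file:

```yaml
# Hypothetical example: two isolated Postgres instances on one host,
# each pinned to its own major version.
services:
  app_db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a secret in practice
    volumes:
      - app_db_data:/var/lib/postgresql/data
  legacy_db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    volumes:
      - legacy_db_data:/var/lib/postgresql/data

volumes:
  app_db_data:
  legacy_db_data:
```

Doing the same with two native Postgres majors means juggling ports, data directories, and init scripts by hand.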
And as for devs not supporting custom installation methods, I’m more inclined to think it’s a lack of time: they can’t support every individual native setup, so they only respond to tickets about their official one (which is also why Docker exists in the first place).
I used to be a sysadmin in 2002/3, and let me tell you - Docker makes all that menial, boring work go away and services just work. Which is what I want, instead of messing with php.ini extensions or custom iptables rules.
Not really. The docker-compose file has services in it, and they’re separate from each other. If I want to update sonarr but not jellyfin (or its DB service), I can - see the sketch below.
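Roughly what that looks like (a sketch only - the image names and tags are illustrative, pin whatever you actually run):

```yaml
# Sketch of independent services in one compose file.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest   # tracks latest; updated on demand
    restart: unless-stopped
  jellyfin:
    image: jellyfin/jellyfin:10.9.11           # pinned; unaffected by sonarr updates
    restart: unless-stopped
```

Then something like `docker compose pull sonarr && docker compose up -d sonarr` recreates only the sonarr container; jellyfin keeps running untouched.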
It’s kinda weird to see the Docker scepticism around here. I run 40ish services on my server, all with a single docker-compose YAML file. It just works.
Manually tweaking every project I run seems impossibly time-draining in comparison. I don’t care about the underlying mechanics, I just want shit to work.
I keep trying it every couple of years to see if it works better, but nah. Even with MySQL/PG + Redis, it’s still slow and clunky. Maybe in 2026.
Yes, it would cause downtime for the one being migrated - right? Or does that not count as downtime?