I have a server with a bunch of services just as Docker containers. I see that Proxmox is popular among the self-hosting community. I was wondering why?
I understand that running things in a VM provides better security than running them in a container. But is the difference so important given the relatively low risk that an exploit happens inside a container that leads to doing damage to the host machine?
There’s also obviously the additional overhead of using Proxmox. It wouldn’t be an issue for me, as I should have enough resources to, say, replace all my Docker containers with VMs. I’m more wondering whether the security difference is really that massive, or if there is another reason I’m missing for why people use Proxmox.
Or am I misunderstanding how people use Proxmox? I was assuming people would use it like how you use Docker, i.e. different services get their own VM/container. If you have a different kind of setup I’d be interested in hearing it.
Edit: I would appreciate it if people stopped being pedantic and actually read the post. Obviously I am aware that you can run containers in VMs, or containers on bare metal alongside VMs. That’s not what the question is and you know it.
Proxmox or Docker?
It’s not mutually exclusive? I have a 3-node Proxmox config on which I have 3 VMs running as Kubernetes nodes, to which I deploy containers. I also have some VMs set up for things which either don’t work well as containers or which I simply don’t want as containers (e.g. a couple of Windows VMs for doing Windows things). Also, Home Assistant runs in a VM since it was just easier to do USB passthrough that way.
I understand that running things in a VM provides better security than running them in a container.
Not sure what you mean by this - containers are typically easier to secure as they’re minimalist. But I doubt anyone is using VMs because they think they’re more secure.
I use Proxmox because I am a tinkerer, and VMs help me tinker without worrying about making major mistakes that might brick my server. If I want to try something new, I just spin up a test VM and try it out; the rest of my stacks are safe, and if I muck up the test VM I’m tinkering with, I just delete it and start again.
I started with KVM/QEMU, which Proxmox is based on, with the virt-manager front end. It can do all the same things, but it can be installed on most distros, which will let you get your feet wet with VMs without having to format a drive and install Proxmox.
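For a rough idea of what that looks like, here’s a minimal sketch for a Debian/Ubuntu host; the package names, ISO path, sizing and OS variant are placeholders to adjust:

```bash
# install KVM/QEMU, libvirt and the virt-manager GUI (Debian/Ubuntu package names)
sudo apt install qemu-kvm libvirt-daemon-system virtinst virt-manager

# create a throwaway test VM from an installer ISO (path and sizes are examples)
sudo virt-install \
  --name test-vm \
  --memory 2048 --vcpus 2 \
  --disk size=20 \
  --cdrom ~/isos/debian-12.iso \
  --os-variant debian12   # adjust to match whatever ISO you use
```

If you muck it up, `virsh destroy test-vm && virsh undefine test-vm --remove-all-storage` and start again, which is basically the same tinkering loop Proxmox gives you.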
My vote is Podman with an immutable distro, like OpenSUSE MicroOS or Fedora Silverblue. Here are my reasons:
- rolling base, with very minimal footprint, so you don’t need to worry about upgrades
- podman runs proper rootless containers, so you get better security vs docker, whose daemon runs as root by default (breaking out does less damage if you manage permissions properly; see the sketch right after this list)
- deploying a new service (or moving a service) just means copying configs and running, no concerns about what the host has
- there’s nothing special about the host, so if MicroOS or Silverblue are abandoned, just copy the configs and data to a new host
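To make the rootless point concrete, here’s a minimal sketch; the image (uptime-kuma), port, and data path are just examples I picked, and the :Z volume label is for SELinux hosts like MicroOS/Silverblue:

```bash
# run a service rootless as a normal user (image, name, port and path are examples)
mkdir -p ~/containers/uptime-kuma
podman run -d --name uptime-kuma \
  -p 3001:3001 \
  -v ~/containers/uptime-kuma:/app/data:Z \
  docker.io/louislam/uptime-kuma:1

# let user services keep running after you log out (needed for rootless autostart)
loginctl enable-linger "$USER"
```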
It’s a little more work to set up, but once things are running, it’s drama free. And I think that’s the best thing to optimize for, keeping things boring is a good thing.
“I run an immutable distro, BTW”
I keep landing back on Proxmox. My primary use is running the Home Assistant OS VM, which is quite fantastic there. I also have NFS sharing set up on the Proxmox server so I can share storage between my machines and my home Linux boxes. I’m on Proxmox 8 though, not 9. Proxmox 9 on Debian 13, at least when I tried it, is really locked down for running Docker directly on the host (the Proxmox machine). With Proxmox 8 I can still install Docker and run my containers there, then use Portainer to manage them sometimes, though rarely nowadays. You can also do it the “correct way”, as some would call it, by setting up a VM or LXC in Proxmox to host Docker containers. I do that with one subset of containers, but not all.
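For reference, the NFS sharing on the Proxmox host is just stock Debian NFS underneath; a rough sketch, where the export path, subnet and IP are placeholders for your own:

```bash
# on the Proxmox host: install the NFS server and export a directory
# (the path and subnet below are placeholders)
apt install nfs-kernel-server
echo '/tank/share 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# on a client machine (needs nfs-common on Debian-based clients)
mkdir -p /mnt/share
mount -t nfs 192.168.1.5:/tank/share /mnt/share
```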
Another option you may want to consider is XCP-ng, which is another hypervisor and IMHO ran Home Assistant a tad faster for me, but it will not let you mount existing drives without erasing them (I can’t do that with my disks). Additionally, it seems to be built on an out-of-date CentOS base which is no longer updated. (My notes on this are from a year ago when I tried it, and I think some of it has changed, but for storage: https://docs.xcp-ng.org/storage/) You can see what’s going on there.
Most people will say to host TrueNAS or something like that in a VM via Proxmox, but honestly, it isn’t too difficult to set up shares on the host itself with a tool like Cockpit to manage them. I’ve played with most of these setups and recently tried going with a Debian 12 install on bare metal with the Home Assistant VM running on top, which worked, but I had more crashes with that server and the VM never autostarted in spite of being told to do so. I honestly didn’t stick around long though, so YMMV if you go that route.
There’s no need to choose one over the other. I host all my podman containers in a Proxmox VM.
I just do one Docker container per LXC. All the convenience of compose, plus those sweet Proxmox snapshots.
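If you haven’t used those snapshots from the CLI, the workflow is roughly this; the container ID and snapshot name are just examples:

```bash
# snapshot LXC 105 before touching anything (ID and name are examples)
pct snapshot 105 pre-update

# roll back if the change goes sideways
pct rollback 105 pre-update

# list the snapshots that exist for that container
pct listsnapshot 105
```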
Podman and Proxmox are your answer. Between them they cover virtually everything you will ever need. No reason to choose one or the other; it’s just a matter of how you configure your setup.
VMs are managed by you. You’re responsible for dealing with prerequisites, updates, security.
Docker is a dev stating “works on my machine” and giving you a copy of their machine.
You can run docker within proxmox, and doing so gives you the ability to run containers in addition to VMs.
There are advantages and disadvantages to both.
I switched to Docker ages ago and don’t regret it. The other benefit aside from the “works on my machine” is that usually it’s very easy to back up with minimal bloat, especially for projects that don’t document what you should be backing up.
I can, and have, switched hosts at a moment’s notice and only had to mess with DNS updates.
Although I’ve been procrastinating switching to rootless Docker.
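In case it nudges anyone, the rootless Docker switch is roughly the following sketch; it assumes Docker’s own apt repo is already configured (that’s where docker-ce-rootless-extras comes from), and details vary by distro:

```bash
# install the rootless prerequisites, then run the setup tool as your normal user
sudo apt install uidmap docker-ce-rootless-extras
dockerd-rootless-setuptool.sh install

# run the daemon as a user service and point the CLI at its socket
systemctl --user enable --now docker
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
```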
The only thing I run on a VM right now is Home Assistant. But I do that with Cockpit and KVM/virsh.
Run a Proxmox VM with Docker services. ZFS snapshots and backups via PBS (Proxmox Backup Server).
I found proxmox and docker to be fairly incompatible, and went through many iterations of different things to make it work well. Docker in VMs, Docker in LXC, Docker on the host (which felt redundant as hell). Proxmox is an amazing hypervisor, but then I realized I didn’t really need a hypervisor since I was mostly running containers.
My recommendations:
- No need for VMs: just run Debian and run containers on it.
- Some VMs, mostly containers, 1 host: run Proxmox, and create a VM in Proxmox for your container workloads.
- Some VMs, mostly containers, >1 host, easy mode: same as above, but make one host Debian and the other one Proxmox.
- Some VMs, mostly containers, >1 host, hard mode but worth it after 2 years: use Kubernetes; I use k3s (rough bootstrap sketch below). Some nodes are just Debian with k3s on them, others are running in VMs on Proxmox using the extra compute available. This has a massive learning curve though; it took me well into a year to finally get it to a state I like, but I’ll never go back.
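For scale, the k3s bootstrap itself is tiny; a minimal sketch, where the server IP and token are placeholders:

```bash
# on the first node: install k3s as the server/control plane
curl -sfL https://get.k3s.io | sh -

# on the server: print the join token for the agents
sudo cat /var/lib/rancher/k3s/server/node-token

# on each additional node (plain Debian box or a Proxmox VM): join as an agent
# 192.168.1.10 and <token> are placeholders for your server's IP and token
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<token> sh -
```

The learning curve is everything that comes after that first step, not the install itself.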
Same here. I used proxmox for 8 years and have recently dumped it in favour of a couple of incus machines running OCI and LXC containers.
Much lighter, much faster, and to be honest, more straightforward when it comes to storage abstraction, which I think proxmox does in a very… convoluted way.
What did you find to be incompatible between proxmox and docker? I get that it’s essentially an extra layer of complexity if all you’re doing is running docker containers, but I don’t see how that makes them incompatible.
Docker in LXC can be a pain, especially around backups, as the overlay2 storage doesn’t really jibe with the way Proxmox does backups. And forget about running Docker in an unprivileged LXC.
Running in a VM is perfectly fine though; not sure what issues anyone has there. Granted, I ran on big beefy servers with 24 cores and tons of RAM.
It was nice to be able to move my services between machines using a live migration while doing updates, though again you have to be set up for that. My entire network was managed with twin OPNsense routers as VMs in Proxmox; they handled their own failover, so I could just shut down one at a time to run updates, even to Proxmox itself, and when it came back up I could work on the other one. But I wanted to learn all that and have zero downtime so the wife wouldn’t get mad every time I botched something (which, especially in the beginning, was often).
If you don’t have the money or time and just have one server box with a normal amount of RAM and disk, Proxmox is probably overkill unless you want to experiment with VMs or Linux containers. It’s an awesome product and I will sing its praises all day, but if you just want some Docker containers you can make a far simpler setup; although I will say that the “overhead” is way less than you might think. It’s just more complicated (not hard, there’s just more going on than vanilla Debian or something).
That thing about docker being so badly behaved in unprivileged containers seems to be a proxmox problem, not an LXC problem, as I’ve discovered running LXC in a non-proxmox environment.
That’s unfortunate. I know they do change some things, both for security hardening and for the convenience of the platform; it’s a double-edged sword, apparently.
Proxmox and Docker serve different purposes. Proxmox is a hypervisor, while Docker handles containerized services. There is a little bit of crossover when it comes to containers (Proxmox can host LXCs, kinda sorta a little bit similar to Docker containers), but that’s really the only commonality.
If you want to run multiple services and have a playground to mess around with and learn things, Proxmox is what you want. Spin up a VM (or 2, or 3) for Docker, and run your Docker services in those. You still have the ability to dick around with other things in Proxmox without having to worry about fucking up everything else on the physical machine.
Proxmox, or even just lazy old KVM with a GUI, for anything that needs to be deployed manually in a VM (Home Assistant, a Windows VM, etc.). Otherwise you can just spin up whatever service you want to run manually in an LXC container or on a bare-metal host, with the right security settings via systemd and SELinux if you want to be extra careful.
Docker/Podman (the superior one lol) is just an automated deployment system in container form (like Ansible). It’s great for automated deployment without having to manually configure the installation process and worry about upgrades, changes, etc. You can even easily create your own images on the fly just for the purpose of running a single service inside a container.
The Proxmox equivalent would be using Terraform/OpenTofu to deploy VMs to do the same thing. It’s possible, but just not that common, because of the reduced overhead of containers and the well-supported deployment images available for docker/podman specifically.
Generally speaking, I’ve seen Proxmox used more in lab environments where you want to emulate something like a complete network of machines, whereas docker/podman has become the de facto server deployment platform.
You’re just much more likely to find software with a published docker container and default docker compose script than the same thing in Terraform or even K8s/K3s.
I use both. I have one VM on Proxmox for all of my Docker containers, a separate VM as a reverse proxy, and a third VM to handle OIDC for apps (because I don’t want it failing along with all of my other things). I also use Proxmox to run GUI-based apps in a VM when I want them running on something other than my laptop.
I think you got plenty of great answers already. Security is not a real concern for a home user. It’s not even for the enterprises. Container escapes are extremely rare and Kubernetes is used very widely among some of the largest companies in the world running thousands of containers.
I think in general people start out in VMs and advance to containers. If you are already using containers stick with it, otherwise you are taking a step back.
Now, why might you want to run Proxmox? I do it because I wanted Windows, Linux, and Jellyfin with hardware decoding on one server.
I think in general people start out in VMs and advance to containers. If you are already using containers stick with it, otherwise you are taking a step back.
Interesting perspective—I had thought that running an entire VM would be more difficult, but I’ve never used virtualisation for server stuff, only ever used VMs with a GUI VM manager on my personal computer. Thanks for the input.
VMs on a server are great fun, and there are some use cases where you’d absolutely need them (as the parent said, running Windows on a Linux server, etc.). I virtualized my whole-network router using OPNsense, which is BSD-based.
If you aren’t into spending time (and, eventually, money) on a setup that does “everything”, you don’t need Proxmox.
But it’s fucking AWESOME for tinkering. I think the question to ask yourself is, do you want a homelab, or do you want to just set-it-and-forget-it?
If you want services to be there without spending time on it, keep it simple. If you want the power that added complexity brings, and you have the means (time/energy/maybe money for upgrades etc) then by all means take the leap. It’s fun as hell, if you’re into it.
I’ve worked in enterprise software for the better part of a decade, and if there were real security concerns about container escapes, containers wouldn’t be so widely used.
There is Incus too. I use it to run a home-assistant VM, since it’s simpler that way. I also ran some containers with it.
I use both. I have some things I want in VMs and others in containers. I run a VM to run containers in podman alongside my “normal” VMs.
Proxmox has its own ability to run containers but I was more familiar with docker/podman.