Clips are 1080p, and total storage for the entire VM is 128 GB, not 1 TB. Total disk usage for the HAOS VM does not exceed 64 GB with clips retained for 10 days.
I have 4 Ethernet cameras feeding into Frigate inside HAOS. HAOS is running in a Proxmox VM with 4 cores, 4 GB RAM, 128 GB storage, and an M.2 Coral TPU passed through.
The host machine is a Lenovo M910q with an i7-6700T processor (pulls about 35 W), 32 GB RAM, and a 1 TB NVMe drive.
Frigate is set to retain clips for 5 days, after which they are deleted. I have a Samba Backup job that runs every night and retains 10 days of backups.
With this setup, disk space never exceeds 50%, and CPU usage never exceeds 35%.
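On the retention side, it’s just a couple of lines in the Frigate config. In recent Frigate versions it lives under record.retain, roughly like this (the values are examples and the exact layout depends on your Frigate version, so check the docs):

```yaml
# Sketch of the relevant Frigate config section (version-dependent)
record:
  enabled: true
  retain:
    days: 5      # delete recordings after 5 days
    mode: all    # keep continuous recordings, not just motion/events
```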
You can also add tags that are searchable.
I use several separate small servers in a Proxmox cluster. You can get a used Dell or HP SFF PC from eBay for cheap (example). The ones I am using all came with Intel T-series processors that run at 35 W.
You install Proxmox like any other OS (it’s basically Debian), then you can create VMs (or LXCs) to run whatever services you want.
If you have existing drives in a media server, you can pass those drives through to a VM pretty easily, or any PCI device, or even the entire PCI controller.
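For the PCI route, once IOMMU/VT-d is enabled in the BIOS and on the kernel command line, handing a whole controller to a VM is basically one command. A rough sketch (the VM ID and PCI address are placeholders):

```bash
# Find the PCI address of the SATA/HBA controller, e.g. 01:00.0
lspci -nn | grep -i sata

# Attach that device to VM 100; the drives on it come along with it
qm set 100 -hostpci0 01:00.0
```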
They also only pull 75 W, which is an added bonus.
You may want to check out Craft Computing’s YT channel - he did a few episodes (Piped link) in his Cloud Gaming series on these cards.
Nvidia Tesla P4. Under $100 for a new one on eBay. Comes with a low profile bracket.
If you’re running Proxmox, you can even get the official vGPU drivers running so you can split the card between multiple VMs.
Is there a window in the room the closet is in? I’ve got a similar setup with a server rack in a closet (no ventilation, though). I recently purchased an in-window Midea AC that can be controlled by Home Assistant.
I have an automation that kicks on the AC when the temperature in the closet rises above a certain threshold and shuts it off when it drops back below. I just leave the closet door open by about a foot, and that seems to be sufficient.
It’s probably worth noting that I’m running pretty efficient hardware (35 W i7s and a 75 W Tesla P4), so it doesn’t get super hot, even under heavy load.
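For what it’s worth, the automation itself is just a pair of numeric_state triggers. A minimal sketch, with made-up entity IDs and thresholds:

```yaml
# Home Assistant automations: closet AC on above one threshold, off below another
# (entity IDs and values are examples only)
automation:
  - alias: "Closet AC on when warm"
    trigger:
      - platform: numeric_state
        entity_id: sensor.closet_temperature
        above: 27
    action:
      - service: climate.turn_on
        target:
          entity_id: climate.closet_ac
  - alias: "Closet AC off when cool"
    trigger:
      - platform: numeric_state
        entity_id: sensor.closet_temperature
        below: 24
    action:
      - service: climate.turn_off
        target:
          entity_id: climate.closet_ac
```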
I’ve been daily driving a Debian 11 Proxmox VM running on an HP ProDesk Elite SFF with an i7-6700T and an ancient Nvidia GeForce GT 730 passed through.
I access it via ThinLinc running on a Dell Wyse 5070 Extended thin client. Works really well, even video isn’t bad, but it’s not for gaming.
For gaming, I’m working on setting up a Nobara VM with an Nvidia Tesla P4 passed through.
This is the correct answer.
Run an *arr stack somewhere on your network, install Jellyfin on the server and the Jellyfin app on the Shield and you’re golden, no need for subscriptions.
Desktops and PCs are just OS name and version. The Proxmox cluster is Ankh-Morpork (from Discworld) and the nodes are Ankh-Morpork street names: Treacle Mine, Pseudopolis Yard, Attic Bee, etc.
Just an FYI to OP: if you’re looking to run Docker containers, you should know that Proxmox specifically does NOT support running Docker in an LXC, as there is a very good chance that stuff will break when you upgrade. You should really only run Docker containers in VMs with Proxmox.
Just for completeness’ sake - we don’t recommend running Docker inside of a container (precisely because it causes issues upon upgrades of kernel, LXC, and storage packages) - I would install Docker inside of a QEMU VM, as this has fewer interactions with the host system and is known to run far more stably.
As far as I’m aware, everything in Proxmox is open source.
I think some people get annoyed by the Red Hat style paid support model, though. There is a separate repo for paying customers, but the non-subscription repo is just fine, and the official forums are a great place to get support, including from Proxmox employees.
I haven’t done it myself, but I have looked into the process in the past. I believe you do it just like passing any drive through to any Proxmox VM.
It’s fairly simple - you can either pass the entire drive itself through to the VM, or if you have a controller card the drive is attached to, you can pass that entire PCIe device through to the VM and the drive will just “come with it”.
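For the whole-drive option, it’s one qm set per disk; use the stable /dev/disk/by-id path so it survives reboots. Rough sketch (the VM ID and disk ID are placeholders):

```bash
# List stable disk identifiers
ls -l /dev/disk/by-id/

# Hand the disk to VM 100 as an extra SCSI device
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL
```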
I would say it’s at the “bottom” of the stack - Debian is the base layer, then Proxmox, then your VMs.
Clustering just lets the different nodes share resources (with more options, like storage replication, if you’re using ZFS) and allows management of all nodes in the cluster from the same GUI.
Another vote for Proxmox.
Backups: Proxmox Backup Server (yes, it can run in a Proxmox VM) is pretty great. You can use something like Duplicati to back up the PBS datastore to B2.
Performance: You can use ZFS in Proxmox, or not. ZFS gets you things like snapshots and raidz, but you will want to make sure you have a good amount of RAM available and that you leave about 20% of available drive space free. This is a good resource on ZFS in Proxmox.
Performance-wise, I have clusters with drives running ZFS and EXT4, and I haven’t really noticed much of a difference. But I’m also running low-powered SFF servers, so I’m not doing anything that requires a lot of heavy duty work.
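On the RAM and free-space points: zpool list shows how full the pool is, and if the host is RAM-constrained you can cap the ARC so it doesn’t compete with your VMs. Example values only, adjust to your hardware:

```bash
# Check pool capacity (rule of thumb: keep roughly 20% free)
zpool list

# Cap the ZFS ARC at 4 GiB (as root), then rebuild the initramfs
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```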
Yes, but you’ll need to do all the extra stuff required to get a domain name working (DNS records, certificates, and so on).
Lemmy-Easy-Deploy is awesome. And yes, Oracle will burn you eventually, and you won’t be able to access your instance to run one of the migration scripts to move elsewhere; everything will be permanently gone.
Check out Hetzner instead. They have reasonable prices and data centers in both the EU and the US, and they recently started offering ARM servers as well.
You can do this in most browsers on Android as well, but the option is called “Add to home screen”.
Yes, that’s correct.