

If you’re running it via Docker Compose, it’s trivial to upgrade, and there are no breaking changes. Pull, down, up, you’re done.
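For anyone who hasn’t done it before, the whole upgrade is roughly this (run from wherever your compose file lives; flags may vary with your setup):

    cd /path/to/your/compose/project   # wherever docker-compose.yml lives
    docker compose pull                # fetch the new images
    docker compose down                # stop and remove the old containers
    docker compose up -d               # recreate them from the pulled images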
Frigate is pretty good, too. I’ve only been running it for a few months but I’m very happy with it.
Though, technically that leaves you more at risk of ransomware or something that overwrites your data.
I rsync as well, but use snapshotting on the remote drives. So, a bad rsync would suck but shouldn’t really result in data loss. Ransomware on my local+remote server would of course be very bad…
I do something similar — I have a raspberry pi and a HD, with daily rsync and snapshots (monthly retained indefinitely, weekly retained for a month, daily retained for a week). It’s at family’s house, connected to my home via WireGuard via a VPS. Tailscale (or anything really) would also work here.
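For anyone curious about the plumbing: the offsite pi’s WireGuard config is conceptually something like the below. Addresses, keys, and hostname are made up, and the VPS end needs IP forwarding enabled so it can relay between the pi and home.

    # /etc/wireguard/wg0.conf on the offsite pi (illustrative values only)
    [Interface]
    Address = 10.0.0.3/24
    PrivateKey = <pi-private-key>

    [Peer]
    # the VPS, which relays traffic to the home network
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.0.0.0/24          # reach home (e.g. 10.0.0.2) via the VPS
    PersistentKeepalive = 25          # keep the tunnel up through NAT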
It’s a great setup! Just have some watchdog reboot if it can’t talk to home (a simple cronjob with ping -c1 home.lan || reboot, or similar).
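Spelled out as a cron entry, that watchdog is roughly the following (the interval and file path are just an example):

    # /etc/cron.d/home-watchdog -- reboot if home hasn't answered a ping
    */15 * * * * root ping -c1 -W5 home.lan >/dev/null 2>&1 || /sbin/reboot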
Even our “slow” 35Mbps upload speed is way more than enough for incremental rsyncs of my Immich library. The initial sync was done in person, though.
I got one from goHardDrive on eBay (link). It was cheap enough, looks flawless, and knock on wood has been working fine.
Googling around, the brand gets…mixed reviews. My use case is such that if this drive fails it’s not a big deal.
I’ve honestly never understood people who feel the need to “replace” Spotify. … Spotify has never made sense for my use-case.
I don’t know how to say this, but…you have extremely uncommon use-cases:
…during those times, my phone is either fully turned off (so I’ll use an MP3 player), or it’s in Airplane Mode.
Many people listen to music on stereos and don’t necessarily want a device plugged in, so
I just download the music I like to my device and listen to it via VLC.
either doesn’t work or is substantially less convenient than e.g. casting from a phone.
Not hating on your setup at all, but it’s very niche, in my experience.
ZigBee router thing:
I’ve been happy with the SMLIGHT SLZB-06M. You can easily flash firmware, and it has PoE which was important for me. I believe it also supports Thread, but I haven’t tried this yet (and I’m not sure if it supports it at the same time as Zigbee).
Zigbee smart plugs from Third Reality have been pretty solid in my experience, and they report power usage.
For circuit breaker level monitoring, I have an Emporia Vue2. I have it running esphome, completely local — unfortunately this requires some simple soldering and flashing, so it’s not turnkey. But it’s been rock solid ever since flashing it. (Process is well documented online.)
I’ve had decent luck with cheap wifi Matter bulbs, but provisioning them is finicky, and sometimes they just crap out and need to be power cycled; Zigbee bulbs (e.g., Ikea) have generally been reliable, though sometimes I’ve had difficulty pairing them initially. After power cycling a Matter WiFi bulb, it takes a while for it to respond to Home Assistant; Zigbee bulbs generally respond as soon as you power them on.
I have a wired smart light switch from TP-Link/Kasa (KS205), and it’s been completely hassle free (and totally local — Matter over wifi). The Kasa smart switch dongles I have work flawlessly but need proprietary pairing, and I’m afraid to update firmware in case they lose local support.
Good luck! Fun adventure :)
I think a lot of companies view their free plan as recruiting/advertising — if you use TailScale personally and have a great experience then you’ll bring in business by advocating for it at work.
Of course it could go either way, and I don’t rely on TailScale (it’s my “backup” VPN to my home network)… we’ll see, I guess.
Hopefully you can publish in an open-access journal — if not it would be great if you could share an arXiv preprint :)
Physics is like sex: sure, it may give some practical results, but that’s not why we do it.
— Richard P. Feynman
I think the same is true for a lot of folks and self hosting. Sure, having data in our own hands is great, and yes avoiding vendor lock-in is nice. But at the end of the day, it’s nice to have computers seem “fun” again.
At least, that’s my perspective.
Whatever you decide for your laptop, I’m a proponent of a barebones off-site setup if you’re trying for 3-2-1 backup or similar.
I use a raspberry pi 3 with a single HD (ZFS) retaining some number of daily/weekly/monthly snapshots. Daily rsync, everything over WireGuard+VPS (TailScale would work too).
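The nightly job amounts to something like this sketch (dataset names, paths, and retention counts are made up; pushing instead of pulling works just as well, and a tool like sanoid or zfs-auto-snapshot can handle the rotation for you):

    #!/bin/sh
    # Nightly pull + snapshot on the pi (illustrative names/paths)
    set -e

    # Pull changes from the laptop over the WireGuard link
    rsync -aH --delete laptop.lan:/home/ /backup/laptop/home/

    # Date-stamped snapshot of the backup dataset
    zfs snapshot backup/laptop@daily-"$(date +%F)"

    # Keep only the last 7 daily snapshots (weeklies/monthlies handled the same way)
    zfs list -H -t snapshot -o name -s creation -d 1 backup/laptop \
      | grep '@daily-' | head -n -7 \
      | xargs -r -n1 zfs destroy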
Same — rsync to a pi 3 with a (single) ZFS drive at family’s house. Retain some daily/weekly/monthly snapshots.
I have a (free) VPS with static IPv4 which is how I connect everything.
Both the VPS and the remote site have limited network speed (I think 50Mbps for VPS), so the initial sync was done sneakernet (well…“airplane net”). Nightly rsync is no problem bandwidth-wise, and is mostly just any new videos I’ve uploaded to my local Immich instance.
Fail2ban config can get fairly involved in my experience. I’m probably not doing it the right way, as I wrote a bunch of web server ban rules — anyone trying to access wpadmin gets banned, for instance (I don’t use WordPress, and if I did, it wouldn’t be accessible from my public facing reverse proxy).
I just skimmed my nginx logs and looked for anything funky and put that in a ban rule, basically.
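For anyone wanting a starting point, a rule of that flavor looks roughly like this (the filter name, regex, and paths are illustrative, not my actual config):

    # /etc/fail2ban/filter.d/nginx-probes.conf (illustrative)
    [Definition]
    failregex = ^<HOST> .* "(GET|POST) /(wpadmin|wp-admin|wp-login\.php|xmlrpc\.php)

    # /etc/fail2ban/jail.d/nginx-probes.local
    [nginx-probes]
    enabled  = true
    filter   = nginx-probes
    port     = http,https
    logpath  = /var/log/nginx/access.log
    maxretry = 1
    bantime  = 1d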
proxmox nudes
No judgement here, you just keep doing what makes you happy.
It’s mostly so that I can have SSL handled by nginx (and not per-service), and also for ease of hosting multiple services accessible via subdomains. So every service is its own subdomain.
Additionally, my internal network (as in, my physical LAN) does not have any port forwarding enabled — everything is over WireGuard to my VPS.
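The per-service pattern is basically one small nginx server block per subdomain, something like this (hostnames, ports, and cert paths are placeholders):

    server {
        listen 443 ssl;
        server_name immich.example.com;   # one subdomain per service

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location / {
            proxy_pass http://10.0.0.2:2283;   # service reached over the WireGuard link
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }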
My method:
VPS with reverse proxy to my public facing services. This holds SSL certs, and communicates with home network through WireGuard link configured on my router.
Local computer with reverse proxy for all services. This also has SSL certs, and handles the same services as the VPS, so I can have local/LAN speeds. Additionally, it serves as a reverse proxy for all my private services, such as my router/switches/access point config pages, Jellyfin, etc.
No complaints, it mostly just works. I also have my router override DNS entries for my FQDN to resolve locally, so I use the same URL for accessing public services on my LAN.
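If your router runs dnsmasq (OpenWrt, Pi-hole, etc.), that override is a one-liner (domain and IP are placeholders):

    # answer every query under the domain with the local reverse proxy's LAN IP
    address=/example.com/192.168.1.10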
Getting TLS certs will be complicated
I just use Let’s Encrypt with a wildcard domain — same certs for public and private facing domains. I’m sure this isn’t best practice, but it’s mostly just for me so I’m not too worried :)
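Wildcards need the DNS-01 challenge, so with certbot it’s something along these lines (the Cloudflare plugin is just an example; use whichever plugin matches your DNS provider):

    certbot certonly \
      --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d 'example.com' -d '*.example.com'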
Yeah I don’t expose Jellyfin over the Internet, so it doesn’t matter for me, and wouldn’t work at all over WAN (unless VPN’d to home network).
Also, it’s all reverse proxied, and there’s nothing preventing having two Jellyfin hostnames, e.g., jf-local.mydomain.com and jf-public.mydomain.com.
Another fun trick you can play is to put a private IP in your public DNS records. This is useful for Jellyfin on a Chromecast, for instance: the Chromecast uses 8.8.8.8 for DNS lookups (ignoring your router’s settings), so it needs a fully qualified domain name that resolves publicly. But it has no problem connecting to a local address, as long as that address comes from the public record.
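In zone-file terms it’s just an ordinary A record whose answer happens to be a LAN address (name and IP made up). It resolves from anywhere, which is why the 8.8.8.8 lookup works, but only devices on the LAN can actually reach it:

    jellyfin.example.com.   300   IN   A   192.168.1.10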
On low-end CPUs you can max out the CPU before maxing out the network. If you want to get fancy, you can use rsync over an unencrypted remote shell like rsh, but I would only do this if the computers were directly connected to each other by one Ethernet cable.
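For the record, that looks like the below (hosts and paths are examples); rsync’s daemon mode is another way to skip the encrypted shell entirely.

    # rsync over rsh instead of ssh; only sane on a direct, trusted link
    rsync -a -e rsh /data/ backuphost:/backup/data/

    # alternative: rsync's own daemon protocol, no remote shell at all
    rsync -a /data/ rsync://backuphost/backup/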