Backups need to be reliable and I just can’t rely on a community of volunteers or the availability of family to help.
So yeah I pay for S3 and/or a VPS. I consider it one of the few things worth it to pay a larger hosting company for.
I intentionally do not host my own git repos mostly because I need them to be available when my environment is having problems.
I do make use of local runners for CI/CD, which is nice, but Git is one of the few things I need to not have to worry about.
Do you have any links or guides that you found helpful? A friend wanted to try this out but basically gave up when he realized he’d need an Nvidia GPU.
I’ve been testing Ollama in Docker/WSL with the idea that if I like it I’ll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model it has to load the whole thing into VRAM. I use the 8 GB models, so it takes 20-40 seconds to load the model, and then each response is really fast after that and the GPU hit is pretty small. After five minutes (I think that’s the default) it will unload the model to free up VRAM.
Basically this means you either need to wait a bit for the model to warm up or extend that timeout so it stays warm longer. It also means I can’t really use my GPU for anything else while the LLM is loaded.
I haven’t tracked power usage, but besides the VRAM requirements it doesn’t seem too resource-intensive, though maybe I just haven’t done anything complex enough yet.
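If you want it to stay warm longer, Ollama reads a keep-alive setting from the environment. A minimal sketch, assuming the stock ollama/ollama Docker image and the OLLAMA_KEEP_ALIVE variable (double-check the docs for your setup):

docker run -d --gpus=all \
-e OLLAMA_KEEP_ALIVE=1h \
-v ollama:/root/.ollama \
-p 11434:11434 \
ollama/ollama

The --gpus flag needs the NVIDIA Container Toolkit installed on the host.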
DuckDNS is great… but they have had some pretty major outages recently. No complaints, I know it’s an extremely valuable free service but it’s worth mentioning.
Cloudflare has an API for easy dynamic DNS. I use oznu/docker-cloudflare-ddns to manage this, it’s super easy:
docker run \
-e API_KEY=xxxxxxx \
-e ZONE=example.com \
-e SUBDOMAIN=subdomain \
oznu/cloudflare-ddns
Then I just make a CNAME for each of my public-facing services to point to ‘subdomain.example.com’ and use a reverse proxy to get incoming traffic to the right service.
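The reverse proxy config ends up being tiny. A sketch of what the routing looks like with Caddy, with made-up service names and ports (swap in whatever proxy you already run):

photos.example.com {
    reverse_proxy 127.0.0.1:2342
}
notes.example.com {
    reverse_proxy 127.0.0.1:8443
}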
That’s been my problem. It’s overpriced for just a single camera considering I already manage a big storage pool that my other services can use. But do I want to lock myself into buying other Ubiquiti IP cams down the road?
Don’t the Ubiquiti doorbells require a ‘Dream Machine’ storage appliance for recording video? I didn’t think there was a way to use your own storage anymore, which has been my main hesitation in getting one.
Are you using S3 for storage or block storage? S3 is pretty cheap but I’m wondering if CloudFront would still help me with the load on the EC2 instance when federation traffic is slamming it.
Are you using CloudFront?
I switched from docker compose to pure Ansible for deploying my containers. Makes managing config and starting containers across multiple hosts super easy. I considered virtualizing too but decided it didn’t offer me enough advantages. If I ever have an issue with the host OS I just reinstall using a preseed file and then rerun my playbooks and it’s ready to go.
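For anyone curious what that looks like, here’s a minimal sketch of the kind of task I mean, using the community.docker.docker_container module; the role, service name, image, and ports are made up, not my actual setup:

# roles/whoami/tasks/main.yml (hypothetical role, just to show the shape)
- name: Start whoami container
  community.docker.docker_container:
    name: whoami
    image: traefik/whoami:latest
    state: started
    restart_policy: unless-stopped
    ports:
      - "8080:80"

Point the playbook at a group of hosts and the same role runs everywhere.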
I started using Checkmk recently after it was mentioned here and I really like it. I’d used Zabbix a bit but was annoyed at how much work it took to get it to do what I want. Checkmk was a lot better right out of the box.
This is the right answer. A better backup strategy is an actual backup strategy. Snapshots, drive mirroring, rsync copies, etc. aren’t really backups.
This exactly. If you already have Pis they are still great. Back when they were $35 it was a pretty good value proposition with none of the power or space requirements of a full-size x86 PC. But for $80-$100 it’s really only worth it if you actually need something small, or if you plan to use the GPIO pins for a project.
If you’re just hosting software, a several-year-old used desktop will outperform it significantly and cost about the same.
I really like Kopia. I back up my containers and workstations with it and replicate to S3 nightly. It’s great.
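If it helps anyone, the S3 side is just a couple of commands. A rough sketch with placeholder bucket, credentials, and paths (kopia can also sync a local repository to S3 instead, depending on how you set it up):

# one-time: create a repository backed by S3
kopia repository create s3 --bucket=my-backups --access-key=XXX --secret-access-key=YYY

# nightly (cron or a systemd timer): snapshot the paths you care about
kopia snapshot create /srv/containers /home/me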
I’ve had a lot of good luck with Syncthing. If you’re just syncing files locally you can disable NAT traversal.
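For reference, those knobs live under Settings > Connections in the GUI, or under options in config.xml. A sketch of what a LAN-only setup roughly looks like (check your own config for the exact surrounding structure):

<options>
    <globalAnnounceEnabled>false</globalAnnounceEnabled>
    <localAnnounceEnabled>true</localAnnounceEnabled>
    <relaysEnabled>false</relaysEnabled>
    <natEnabled>false</natEnabled>
</options>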
In my opinion trying to set up a highly available fault tolerant homelab adds a large amount of unnecessary complexity without an equivalent benefit. It’s good to have redundancy for essential services like DNS, but otherwise I think it’s better to focus on a robust backup and restore process so that if anything goes wrong you can just restore from a backup or start containers on another node.
I configure and deploy all my applications with Ansible roles. It can programmatically create config files, pass secrets, build or start containers, cycle containers automatically after config changes, basically everything you could need.
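A rough sketch of the config-plus-cycle pattern, with a hypothetical app name; the template, paths, and handler names are placeholders rather than anything I actually run:

# tasks
- name: Render app config
  ansible.builtin.template:
    src: app.conf.j2
    dest: /opt/app/app.conf
    mode: "0640"
  notify: Restart app container

# handlers
- name: Restart app container
  community.docker.docker_container:
    name: app
    image: example/app:latest
    state: started
    restart: true

The handler only fires when the template actually changes, so containers get cycled exactly when their config does.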
Sure, it would be neat if services could fail over automatically, but things only ever tend to break when I’m making changes anyway.
Once I got to the point where I was running a ton of containers, I’d occasionally hit issues where a maintainer wouldn’t resolve them fast enough for my liking, so I started building more containers myself, which was a lot easier than I’d anticipated.
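If you’ve never done it, a basic image is often just a few lines. A made-up example for a small Python app, assuming a requirements.txt and a main.py:

# hypothetical minimal Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]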
Same here. I love DuckDNS but after the third DNS outage taking down all my services I migrated to Cloudflare and haven’t had a single problem since.