Oh I think I tried at one point, and when the guide started talking about inventory, playbooks and hosts in the first step it broke me a little xd
Got any decent guides on how to do it? I guess a docker compose file can do most of the work there; I'm just not sure about volume backups and other dependencies in the OS.
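For reference, the kind of thing I'm imagining is roughly this (service name, port and paths are just placeholder examples, not anything I actually run yet):

```yaml
# docker-compose.yml - rough sketch, names and paths are placeholders
services:
  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - jellyfin_config:/config   # named volume managed by docker
      - ./media:/media:ro         # bind mount of a normal host folder

volumes:
  jellyfin_config:
```

The part I'm unsure about is the right way to back up that named volume, versus just bind-mounting everything to plain folders and backing those up like any other files.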
Hmm, I bought a used laptop that I wanted to tinker with Linux and docker services on, but I kinda wanted to keep the NAS as a separate device to avoid the "all eggs in one basket" situation (also I can't really connect that many hard drives to it unless I buy some separately powered USB disk hubs or something, if those even exist and are any good?)
However I do see the merit in your suggestion, considering some of the replies here are driving me into temptation to get a $500 NAS, and that's even without the drives… that's practically more than my desktop is worth atm.
Could be a regional thing, but Synology HDDs are around 30% more expensive than 'normal' WD/Seagate/Toshiba drives from what I'm seeing at first glance. Maybe they make up for it in quality and longevity, but afaik HDDs are pretty durable if they're maintained well, and I imagine having them in RAID1 should be a good enough safety measure?
Considering the price of the DiskStation itself, it's all quickly adding up to the price of a standalone PC, so I'm trying to keep it simple since it's for a relatively low-performance environment.
Sorry, with 'docker drives' I meant 'docker volumes or bind mounts'. I don't have a lot of experience with it yet, so I'm not sure if I'm going to run into problems by mapping them directly to a NAS, or if I should keep local copies of the data and then rsync / syncthing them to the NAS. I've heard you can theoretically even run docker on the NAS itself, but I'm not sure if that's a good idea in terms of its longevity or performance.
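In case it helps clarify what I mean: mapping a volume straight to the NAS over NFS would apparently look something like this in compose (the IP and share path are made up, and /volume1/... is just a typical Synology-style path):

```yaml
# sketch: a compose volume backed by an NFS share on the NAS
# (address and export path are placeholders)
services:
  nextcloud:
    image: nextcloud
    volumes:
      - nextcloud_data:/var/www/html

volumes:
  nextcloud_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"
      device: ":/volume1/docker/nextcloud"
```

The alternative I had in mind was just bind mounts on the local disk plus a nightly rsync or syncthing job towards the NAS.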
Is the list of “approved HDDs” just a marketing/support thing or does it actually affect performance?
Thanks for the answers! The DS2xx series looks like something I could start with. The DS223 is a bit cheaper and has 3 USB ports, so that could be useful. I'd guess I don't need to focus on performance since it's mostly just personal data storage and not some intensive professional work.
Logseq
having everything laid out in a few yaml files that I can tear down and rebuild on a whim
Oh absolutely, but for me docker compose already does that. Kubernetes might be a good learning exercise but I don’t think I need load balancing for 1 user, me, on the home network 😅
What’s the benefit of kubernetes over docker for a home server setup?
I always thought you're supposed to buy similar drives so the performance is better for some reason (I guess the same logic as when picking RAM?), but this thread is changing my mind; I guess it doesn't matter after all 👀
I'm not sure yet what Ansible does that a simple Docker Compose setup doesn't, but I will look into it more!
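From what I can tell so far, the pitch is that Ansible sets up the host itself and not just the containers, so a playbook would wrap around the compose file rather than replace it. Something like this sketch, if I understand it right (the host name and paths are placeholders, and package names vary by distro):

```yaml
# playbook.yml - untested sketch of Ansible wrapping a compose setup
- hosts: homeserver              # placeholder name from the inventory file
  become: true
  tasks:
    - name: Install docker and friends (package names vary by distro)
      ansible.builtin.apt:
        name: [docker.io, docker-compose-v2, rsync]
        state: present

    - name: Create a folder for the stack
      ansible.builtin.file:
        path: /opt/stack
        state: directory

    - name: Copy the compose file to the server
      ansible.builtin.copy:
        src: docker-compose.yml
        dest: /opt/stack/docker-compose.yml

    - name: Bring the stack up
      ansible.builtin.command: docker compose up -d
      args:
        chdir: /opt/stack
```

Whether that's worth it over just keeping the compose file in a git repo and running it by hand is what I still need to figure out.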
My real backup test will come soon, I think. For now I'm moving from Windows to docker, but eventually I want to get an older laptop, put Linux on it, and move everything to docker on that instead and pretend it's a server. The less "critical" stuff I have on my main PC, the less I'm going to cry when I inevitably have to reinstall the OS or replace the drives.
Ahh, so the best docker practice is to always keep data in outside volumes and back those up separately; seems kinda obvious in retrospect. What about mounting them directly on the NAS (or even running docker from the NAS)? For a local network the performance is probably good enough, and that way I wouldn't have to schedule regular syncs and transfers between "local" device storage and the NAS. Dunno if it would have a negative effect on drive longevity compared to just running a daily backup.
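Just to sketch what I mean by backing up an outside volume separately (reusing the made-up volume name from my earlier example):

```yaml
# sketch: one-off compose service that tars a named volume into a folder
# that then gets synced to the NAS (volume and folder names are placeholders)
services:
  volume-backup:
    image: alpine
    volumes:
      - jellyfin_config:/data:ro
      - ./backups:/backup
    command: tar czf /backup/jellyfin_config.tar.gz -C /data .

volumes:
  jellyfin_config:
    external: true   # an existing volume from another stack
                     # (the real name may carry a compose project prefix)
```

If bind-mounting straight to the NAS works fine over a local network, that whole sync step would go away, which is the appeal.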
What makes it so useful? Is it just remote access if you’re away from your pc, or what do you use it for?
Are there any FOSS alternatives to docker? It sounds like a good thing in practice, but yeah, they seem to have too much power atm.
Really? I’ve seen threads with people claiming to run dozens of services on it. What do you recommend instead, just any rpi OS and installing them like I would on regular linux?
I feel old, I don’t understand 90% of words in this thread lol.
I just have Kodi on LibreELEC with a Jellyfin plugin on my rpi4, and even that struggled with overheating at times. So I run most stuff on my PC instead. I'm tempted to try Portainer to get some experience with docker tho.
I see, thanks. I'm still stuck in the mentality that "the more parts there are, the easier it is for something to break" :P. Or that it'd affect the speed in some way.
Any specific reason why?
Isn't it more likely that paths are used to reference resources like images, rather than a db foreign key?
Damn, that's extensive. How long did it take to set it all up, and how much work is it to maintain continually?
Does Fluent Reader count? Doesn’t have an amazing interface but it’s free and simple to use.