Personally, I’d really like if it could have different users on its management interface, with their own file shares.
It’s understandable why they don’t bother, but I would like to share my NAS without running several instances.
I’ll second the people here pointing out that you are better off allowing calls from your family during “Do Not Disturb” than trying to set things up so they don’t call you during that time. Your phone almost certainly has a setting for “favorite contacts” or something like it.
It has better configuration orthogonality :)
Yeah, that’s not a good reason.
It’s much easier to authorize a key once than to type your password for every interaction.
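For example, the client-side setup is two commands (a rough sketch; user@your-server is a placeholder):

    # generate a key pair and push the public key into the server's authorized_keys
    ssh-keygen -t ed25519
    ssh-copy-id user@your-server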
This is the internet. If you poke the bear, somebody will come up with a completely reasonable use case for password authentication that happened once, somewhere in the world.
If you don’t have any good reason not to, always set your SSH server to only authenticate with keys.
Anything else is irrelevant.
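On the server, that boils down to something like this in sshd_config (a sketch; check your distro’s defaults and keep a session open before locking yourself out):

    # /etc/ssh/sshd_config: accept keys only, refuse passwords
    PubkeyAuthentication yes
    PasswordAuthentication no
    # older OpenSSH calls this option ChallengeResponseAuthentication
    KbdInteractiveAuthentication no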
Oh, sure, the bloat in your images costs resources on the host.
There is the option of sharing things. But, obviously, that conflicts a bit with keeping your environments isolated.
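One concrete form of sharing, sketched with made-up names: images built from the same base only store that base layer once on the host.

    # Both Dockerfiles start from the same base, so that layer exists once on disk.
    # Dockerfile.app-a
    FROM debian:bookworm-slim
    COPY app-a/ /opt/app-a/

    # Dockerfile.app-b
    FROM debian:bookworm-slim
    COPY app-b/ /opt/app-b/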
FileZilla relied on a distribution channel that turned untrustworthy a while ago.
Since then, they have migrated the project. But somebody who doesn’t know what they are doing can’t be sure they’re getting a good version of it.
Just about this part:
Or you might set up an sFTP service to accept a GUI connection from a client like FileZilla.
FileZilla has been a troublemaker for decades (not because of the software itself, but because the OP won’t get it right), and sFTP requires an extra service.
I’d recommend he get WinSCP or another scp client.
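For reference, the same kind of transfer from the command line, over the SSH service that is already running (the file name, user, and paths are made up):

    # copy a local file to the server over SSH; no extra daemon needed
    scp ./backup.tar.gz user@your-server:/srv/backups/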
Do not run databases in Docker unless you know really well what you are doing.
It’s completely possible to run them correctly in Docker. But it’s far from trivial, and if you need to ask this, it means that you probably won’t be able to.
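To give an idea of the bare minimum involved, a sketch with the official postgres image and a named volume so the data survives the container (the names and password are placeholders; backups, upgrades, and tuning are still entirely on you):

    docker volume create pgdata
    docker run -d --name db \
      -e POSTGRES_PASSWORD=change-me \
      -v pgdata:/var/lib/postgresql/data \
      postgres:16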
Hetzner. But it looks like the problem is created by the pair (hoster, ISP), and neither of them has a problem by itself.
I get the throughput I bought from my ISP. But latency to my VPS is 260ms.
To check whether your problem is caused by excessive memory usage forcing constant swapping. If it is, turning swap off will cause some process to be killed instead of slowing the whole computer down.
Have you tried turning your swap off?
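Roughly (a sketch; needs root, and only as a test, since the OOM killer will act if RAM really is short):

    # disable all swap, then watch memory usage
    sudo swapoff -a
    free -h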
It’s popular because, unlike NC, Syncthing works.
Nextcloud’s main use is file synchronization. If you take that away, you will almost certainly end up using different software for the other features, because NC does them badly.
Nobody has said “syncthing” in this thread yet, so that will be me.
Why does this bot set the text color? And why does it do that without setting the background at the same time?
Try to run something that requires php7 and something else that requires php8 on the same web server; or python 2 and python 3.
You actually can, but it’s not pretty.
(The thing about a declarative setup isn’t much of a difference, you can do it for any popular Linux distro.)
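With containers, side by side is the normal case. A rough sketch using the official php-fpm images (the directory and container names are made up):

    # each app gets the PHP it needs; one nginx/apache in front proxies to either FPM
    docker run -d --name legacy-app -v "$PWD/legacy:/var/www/html" php:7.4-fpm
    docker run -d --name modern-app -v "$PWD/modern:/var/www/html" php:8.2-fpm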
The answer is: get a minimal Linux image, add nginx or apache, and put your content in the relevant place. (Basically, your third option.)
Do not worry about the future of nginx. Changing the web server in that image is the easiest thing in the world.
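For a static site, that image is about two lines (a sketch; ./site is a placeholder for wherever your content lives):

    FROM nginx:alpine
    # the official image serves whatever is in this directory by default
    COPY ./site/ /usr/share/nginx/html/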
I stopped using it because it has an extremely complex protocol, with a lot of overhead that grows with the number of files, and it is incredibly sensitive to latency.
When I stopped syncing directories because they would take days to upload, and started compressing them so they would finish in 10 minutes, I decided it had to go. (Oh, and it’s extremely sensitive to network problems too.)
Hum, no. The last thing I need in the world is a piece of non-working, hard-to-maintain software.
I’d write something myself before trying Nextcloud again.