TS is a lot easier to set up than WG and does not require a publicly accessible IP address nor any open ports whatsoever. It’s not really comparable to setting up WG yourself, especially w.r.t. security.
Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.
I help maintain Nixpkgs.
https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)
It’s a central server (that you could actually self-host publicly if you wanted to) whose purpose is to facilitate P2P connections between your devices.
If you were outside your home network and wanted to connect to your server from your laptop, both devices would be connected to the TS server independently. When attempting to send IP packets between the devices, the initiating device (i.e. your laptop) would establish a direct WireGuard tunnel to the receiving device. This process is managed by the individual devices, while the central TS service merely facilitates the communication needed to establish this connection.
If you’re worried about that, I can recommend a service like Tailscale which does not require permanently open ports to the outside world, offering quite a bit more security than an exposed traditional VPN server.
Yes, yes they will. If you’re the sole user, they’d identify you from your behaviour anyways.
I don’t think an internet proxy will help very much w.r.t. privacy, but it will make you a lot more susceptible to being blocked.
I do like the idea of using USB drives for storage, though…
I wholeheartedly don’t.
They are quite solid, but be aware that the web UI is dog slow and the menus are weirdly designed.
Well that depends on how you define malware ;)
That is just a specific type of drive failure and only certain software RAID solutions are able to even detect corruption through the use of checksums. Typical “dumb” RAID will happily pass on corrupted data returned by the drives.
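As a rough illustration of the difference, here’s a hedged Python sketch of what a checksumming layer (ZFS/Btrfs-style) does on every read; the function names and storage format are made up for illustration:

```python
# Sketch: a checksumming storage layer detects silent corruption
# that "dumb" RAID would happily pass through to the application.
import hashlib

def write_block(data: bytes) -> dict:
    # Store the data together with a checksum computed at write time.
    return {"data": bytearray(data), "sum": hashlib.sha256(data).digest()}

def read_block(block: dict) -> bytes:
    # Recompute the checksum on every read; a mismatch means the
    # drive returned something other than what was written.
    if hashlib.sha256(bytes(block["data"])).digest() != block["sum"]:
        raise IOError("checksum mismatch: drive returned corrupted data")
    return bytes(block["data"])

block = write_block(b"important data")
block["data"][0] ^= 0x01  # simulate a silent bit flip on disk
try:
    read_block(block)
except IOError as e:
    print(e)  # a plain RAID mirror would instead return the bad data
```

A real filesystem additionally uses the checksum to decide which copy in a mirror is the good one and repairs the bad copy from it.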
RAID only serves to prevent downtime due to drive failure. If your system has very high uptime requirements and a drive just dropping out must not affect the availability of your system, that’s where you use RAID.
If you want to preserve data, however, there are much greater hazards than drive failure: ransomware, user error, machine failure (PSU blows up), and facility failure (basement flooded) are all similarly likely. RAID protects against exactly none of those.
Proper backups do provide at least decent mitigation against most of these hazards in addition to failure of any one drive.
If you love your data, you make backups of it.
With a handful of modern drives (<~10) and a restore time of 1 week, you can expect storage uptime of >99.68%. If you don’t need more than that, you don’t need RAID. I’d also argue that if you do indeed need more than that, you probably also need redundancy in components other than the drives (i.e. redundant computers), at which point the benefit of RAID in any one of those computers diminishes.
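Back-of-envelope sketch of where a number like that comes from. The inputs are my assumptions, not hard data: ~1.6% annualised failure rate per drive, independent failures, and one week to restore from backup after a failure:

```python
# Expected storage downtime without RAID, treating each drive
# failure as one restore window per event.
n_drives = 10
afr = 0.016           # annualised failure rate per drive (assumed)
restore_days = 7      # time to restore from backup (assumed)

# Expected fraction of the year spent restoring:
expected_downtime = n_drives * afr * (restore_days / 365)
uptime = 1 - expected_downtime
print(f"expected uptime: {uptime:.2%}")  # ~99.69%
```

Change the assumed failure rate or restore time and the result shifts, but it stays in the "two nines and change" range for any sane inputs.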
Without any cold hard data, this isn’t worth discussing.
The problem is that it’s not just 15W; I merely used that as an example of how even just two “low power” devices can cause an effect that you can measure in euros rather than cents.
Yes. Low power draws add up. 5W here 10W there and you’re already looking at >3€ per month.
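The arithmetic behind that, with the electricity price being my assumption (~0.30 €/kWh, roughly a German household rate):

```python
# Monthly cost of two "low power" devices running 24/7.
watts = 5 + 10              # combined continuous draw
price_per_kwh = 0.30        # assumed €/kWh
hours_per_month = 24 * 30

kwh_per_month = watts * hours_per_month / 1000   # 10.8 kWh
cost = kwh_per_month * price_per_kwh
print(f"{cost:.2f} € per month")                 # 3.24 € per month
```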
You probably could. Though I don’t see the point in powering a home server over PoE.
A random SBC in the closet? WAP? Sure. Not a home server though.
Yeah, I think that’s what he’s thinking too. (The bread, that is, not the other brown one.)
If you’re using containers for everything anyways, the distro you use doesn’t much matter.
If Ubuntu works for you and switching away would mean significant effort, I see no reason to switch outside of curiosity.
The operating system is explicitly not virtualised with containers.
What you’ve described is closer to paravirtualisation where it’s still a separate operating system in the guest but the hardware doesn’t pretend to be physical anymore and is explicitly a software interface.
Do you have a media center and/or server already? It’s a bit overkill for the former but would be well suited as the latter with its dedicated GPU that your NAS might not have/you may not want to have in your NAS.
Glad I could save you some money :)
> NixOS packages only work with NixOS system. They’re harder to setup than just copying a docker-compose file over and they do use container technology.
It’s interesting how none of that is true.
Packages from Nixpkgs work on practically any Linux kernel.
Whether NixOS modules are easier to set up and maintain than unsustainably copying docker-compose files is subjective.
Neither Nixpkgs nor NixOS use container technology for their core functionality.
NixOS has the nixos-container framework to optionally run NixOS inside of containerised environments (systemd-nspawn), but that’s rather niche actually. Nixpkgs does make use of bubblewrap for a small set of stubborn packages, but it’s also not at all core to how it works.
Totally beside the point though; even if you don’t think NixOS is simpler, that still doesn’t mean containers are the only means by which you could achieve “easy” deployments.
> Also without containers you don’t solve the biggest problems such as incompatible database versions between multiple services.
Ah, so you have indeed not even done the bare minimum of research into what Nix/NixOS are before you dismissed it. Nice going there.
> as robust in terms of configurations
Docker compose is about the opposite of a robust configuration system.
This simply wasn’t researched properly: it does use LibreOffice. It’s the branded version from Collabora, who are one of the main contributors to LO.