Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)

  • 5 Posts
  • 160 Comments
Joined 4 years ago
Cake day: June 25th, 2020



  • It’s a central server (that you could actually self-host publicly if you wanted to) whose purpose is to facilitate P2P connections between your devices.

    If you were outside your home network and wanted to connect to your server from your laptop, both devices would be connected to the TS server independently. When attempting to send IP packets between the devices, the initiating device (i.e. your laptop) would establish a direct WireGuard tunnel to the receiving device. This process is managed by the individual devices; the central TS service merely relays the information they need in order to establish that connection (sketched below).
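
    To make that concrete, here’s a toy sketch in Python. This is not Tailscale’s actual protocol or code; the class, device names, keys and endpoints are all made up, purely to illustrate that the central server only brokers connection details while traffic flows directly between the peers.

    ```python
    class CoordinationServer:
        """Toy stand-in for the central TS server: it never carries traffic,
        it only stores and hands out each device's connection details."""

        def __init__(self):
            self.devices = {}  # device name -> (WireGuard public key, public endpoint)

        def register(self, name, pubkey, endpoint):
            # Every device checks in with the coordination server on its own.
            self.devices[name] = (pubkey, endpoint)

        def lookup(self, name):
            # A device asks the server how to reach a peer...
            return self.devices[name]


    server = CoordinationServer()
    server.register("home-server", pubkey="PUBKEY_A", endpoint="203.0.113.7:41641")
    server.register("laptop", pubkey="PUBKEY_B", endpoint="198.51.100.2:51820")

    # ...and then opens the WireGuard tunnel directly to that endpoint.
    # Actual packet payloads never pass through the coordination server.
    peer_key, peer_endpoint = server.lookup("home-server")
    print(f"laptop -> direct tunnel to {peer_endpoint} (peer key {peer_key})")
    ```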







  • That is just one specific type of drive failure, and only certain software RAID solutions can even detect corruption, through the use of checksums. Typical “dumb” RAID will happily pass on corrupted data returned by the drives.

    RAID only serves to prevent downtime due to drive failure. If your system has very high uptime requirements and a drive just dropping out must not affect the availability of your system, that’s where you use RAID.

    If you want to preserve data, however, there are much greater hazards than drive failure: ransomware, user error, machine failure (PSU blows up), and facility failure (basement flooded) are all similarly likely. RAID protects against exactly none of those.

    Proper backups do provide at least decent mitigation against most of these hazards in addition to failure of any one drive.

    If you love your data, you make backups of it.

    With a handful of modern drives (<~10) and a restore time of 1 week, you can expect storage uptime of >99.68% (rough math sketched below). If you don’t need more than that, you don’t need RAID. I’d also argue that if you do indeed need more than that, you probably also need higher uptime in components other than the drives, i.e. redundant computers, at which point the benefit of RAID inside any one of those computers diminishes.
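
    Rough back-of-the-envelope behind that figure, as a small Python sketch (the per-drive annual failure rate here is an assumed ballpark, not a measured number):

    ```python
    # Expected storage downtime for a small, RAID-less setup restored from backups.
    drives = 10
    afr_per_drive = 0.015            # assumed annual failure rate per drive (ballpark)
    restore_hours = 7 * 24           # one week to restore from backups
    hours_per_year = 365 * 24

    expected_failures_per_year = drives * afr_per_drive
    expected_downtime_hours = expected_failures_per_year * restore_hours
    uptime = 1 - expected_downtime_hours / hours_per_year

    print(f"expected downtime: {expected_downtime_hours:.1f} h/year")   # ~25 h
    print(f"expected uptime:   {uptime:.2%}")                           # ~99.7%, above 99.68%
    ```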












  • “NixOS packages only work with NixOS system. They’re harder to setup than just copying a docker-compose file over and they do use container technology.”

    It’s interesting how none of that is true.

    Nixpkgs works on practically any Linux kernel.

    Whether NixOS modules are easier to set up and maintain than unsustainably copying docker-compose files is subjective.

    Neither Nixpkgs nor NixOS uses container technology for their core functionality.
    NixOS has the nixos-container framework to optionally run NixOS inside containerised environments (systemd-nspawn), but that’s rather niche actually. Nixpkgs does make use of bubblewrap for a small set of stubborn packages, but it’s also not at all core to how it works.

    Totally beside the point though; even if you don’t think NixOS is simpler, that still doesn’t mean containers are the only possible means by which you could achieve “easy” deployments.

    “Also without containers you don’t solve the biggest problems such as incompatible database versions between multiple services.”

    Ah, so you have indeed not even done the bare minimum of research into what Nix/NixOS are before you dismissed it. Nice going there.

    “as robust in terms of configurations”

    Docker compose is about the opposite of a robust configuration system.