  • Immutable NixOS. My entire server deployment, from partitioning to config, is stored in Git on all my machines.

    Every time I boot, all runtime changes are “wiped”, which is really just BTRFS subvolume swapping.

    Persistence is possible, but I’m forced to handle it explicitly; otherwise it gets wiped on boot.
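
    Roughly, the boot-time swap looks like this (the device and subvolume names here are just illustrative, not my actual config):

      # Illustrative "wipe on boot": replace the root subvolume with a
      # pristine blank snapshot. /dev/mapper/pool, root, and root-blank
      # are placeholder names.
      mount -o subvol=/ /dev/mapper/pool /mnt
      btrfs subvolume delete /mnt/root                     # discard last boot's runtime state
      btrfs subvolume snapshot /mnt/root-blank /mnt/root   # restore the pristine root
      umount /mnt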

    I use LVM mirrored volumes for local redundancy.

    My persisted volumes are backed up automatically to Backblaze B2 using rclone. I don’t back up everything; stuff I can download again is skipped, for example. I don’t currently have anything that requires putting a process into a “maintenance mode”, like a database getting corrupted if I back up while it’s being written to. When I did, I’d either script a graceful shutdown of the process or use its export functionality if it supported one.
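
    The backup job itself is just an rclone sync with a filter file; something like this (the remote name and filter file are placeholders):

      # Illustrative nightly backup: push persisted data to a B2 crypt remote,
      # skipping anything that can simply be downloaded again.
      rclone sync /persist b2crypt:server-backup \
        --filter-from /etc/rclone/backup-filters.txt

      # /etc/rclone/backup-filters.txt (paths relative to /persist):
      #   - media/**     # re-downloadable, skip
      #   + **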


  • I haven’t tested on Windows, but this is my Linux-to-Linux setup using rclone, which the docs say works on Windows.

    Server

    • LUKS
    • LVM
    • Volume group with a mishmash of drives in a mirror configuration
    • Cache volume with SSD
    • BTRFS with snapshots (or ZFS or any other snapshotting FS)
    • (optional) Rclone local “remote” with crypt, if you want encryption at rest and the ability to decrypt files on the server. You can skip this and do client-side only if you don’t want the decryption key on the server.
    • SFTP (or any other self-hosted protocol from https://rclone.org/docs/)

    Client

    • Rclone config with SFTP (or your chosen protocol)
    • (optional) Rclone config with crypt
    • Rclone mount with VFS.

    I use this setup for my local files and a similar setup for my Backblaze B2 off-site backups.
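
    For a concrete picture, the client side ends up looking roughly like this (the host, paths, and remote names are made up for illustration):

      # ~/.config/rclone/rclone.conf -- illustrative values only
      [homeserver]
      type = sftp
      host = nas.example.lan
      user = me
      key_file = ~/.ssh/id_ed25519

      [homeserver-crypt]
      type = crypt
      remote = homeserver:/srv/data
      password = ***   # generated via `rclone config` / `rclone obscure`

      # Then mount the decrypted view locally with VFS caching:
      rclone mount homeserver-crypt: ~/mnt/data --vfs-cache-mode writes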

    The VFS implementation has been pretty good. You can also sync manually. I don’t fully trust their bisync, though.

    I can access everything through Android using https://github.com/newhinton/Round-Sync. Not great for photos, though: last I tested a year ago, thumbnails wouldn’t load without pulling the whole file.


  • One method depends on your storage provider. Rsync may be able to do incremental snapshots, but I haven’t looked into it because my storage provider handles that.

    Sometimes a separate tool like rsnapshot (though probably not rsnapshot itself, as I don’t think its hard links interact well with rsync) is used to manage snapshots locally, which are then rsynced.

    On to storage providers, or backends. I use Backblaze B2 configured to never delete. When a file changes, the new version is uploaded and the old version is renamed with a timestamp and hidden. Rclone has tools to recover the old file versions or delete the history. Again, it only uploads the changed files, so these aren’t full snapshots.
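
    If you go the rclone + B2 route, recovering or purging those old versions looks roughly like this (the bucket and paths are placeholders):

      # List files, including the timestamped old versions B2 keeps around:
      rclone ls b2:my-bucket/backups --b2-versions

      # Pull back one old version; the -vYYYY-MM-DD... suffix comes from the listing:
      rclone copyto "b2:my-bucket/backups/notes.txt-v2024-01-02-030405-000" ./notes.txt --b2-versions

      # Or permanently delete all hidden/old versions:
      rclone cleanup b2:my-bucket/backups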



  • Important stuff (about 150 GB) is synced to all my machines and a Backblaze B2 bucket.

    I have a rented seedbox for those low-seeder torrents.

    The stuff I can download again lives only on a mirrored LVM pool with an lvmcache. I don’t have any redundancy for my monerod data, which is on an NVMe.
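
    For reference, a mirrored pool with an SSD cache is only a few LVM commands; something like this (the volume group and device names are invented):

      # Illustrative only -- VG and device names are placeholders.
      lvcreate --type raid1 -m1 -L 4T -n pool vg0                # mirrored data volume
      lvcreate -L 256G -n poolcache vg0 /dev/nvme0n1             # cache volume on the SSD
      lvconvert --type cache --cachevol vg0/poolcache vg0/pool   # attach as lvmcache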

    I’m moving towards an immutable OS with 30 days of snapshots. While not the main reason, it does push you toward better sync habits.




  • You can try installing Avahi on the RPi (it may come with the default image). It will advertise .local over mDNS/DNS-SD. I believe Avahi will advertise on link-local addresses even if there is no default route to the internet.

    Your system may resolve the domain automatically if it’s able to pick up the mDNS records, letting you SSH in. It’s been a couple of years since I’ve done this, so I could be forgetting a nuanced detail, but I vaguely remember it being plug and play when the RPi didn’t need internet.
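
    On a stock Debian-based image it’s roughly this (package and host names assumed):

      # On the Pi (Debian/Raspberry Pi OS package names assumed):
      sudo apt install avahi-daemon
      sudo systemctl enable --now avahi-daemon

      # From another machine on the same link, the default hostname should resolve:
      ping raspberrypi.local
      ssh pi@raspberrypi.local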


  • My NAS is an mATX mobo with an i5, 64 GB of RAM, 8 hard drives, 3 NVMe drives, and an Arc GPU for video transcoding.

    The hard drives are all mirrored. One NVMe runs NixOS, which is easy enough to redeploy if the drive dies. One NVMe is a cache on top of the hard drives. The last NVMe I use for temporary fast storage, like Jellyfin transcoding.

    It’s more of a combo NAS/server, as I run most of my self-hosted apps on it (Tor node, Monero node, Jellyfin, *arr stack, etc.).




  • Jellyfin recommends not using SBCs. I was in the same boat as you a month ago. I started on an RPi: it works fine for raw playback (no transcoding), but performance is poor if you do any scrubbing or try to watch something while new content is being processed. I got a mini PC; it was better, but it’s basically a laptop chipset, so still not the best experience. I had other things I wanted to do with my self-hosted setup, so I decided to just bite the bullet and make a proper build: 12th-gen i5, Intel Arc GPU, 4+8 SATA ports with a PCI card, 3x NVMe, 10x HDD/SSD case. Can’t speak to the performance yet. I’m learning Ansible to automate managing it, including installing the OS.

    I would stay away from NAS systems like QNAP or Synology. They tend not to be much better than an SBC.

    For the budget constraints I would just echo getting the cheapest desktop-class PC you can get your hands on in a suitable form factor.

    https://jellyfin.org/docs/general/administration/hardware-acceleration/#hardware-acceleration-on-docker-linux

    While hardware acceleration is supported on Raspberry Pi hardware, it is recommended that Jellyfin NOT be hosted on Raspberry Pis or other SBCs. Many hardware acceleration features are not supported and will fallback to software. In addition, they are generally too slow to provide a good experience when transcoding is needed. Please consider getting a more powerful system to host Jellyfin.
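
    If you do land on desktop hardware with an Intel GPU, the Docker side of hardware acceleration mostly comes down to passing /dev/dri through to the container; a rough sketch (the device path, port, and host paths are common defaults, not gospel):

      # Sketch: Jellyfin in Docker with the Intel GPU passed through for
      # VA-API transcoding. Paths and ports are typical defaults.
      docker run -d --name jellyfin \
        --device /dev/dri/renderD128:/dev/dri/renderD128 \
        -p 8096:8096 \
        -v /srv/jellyfin/config:/config \
        -v /srv/media:/media \
        jellyfin/jellyfin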




  • Not sure what your environment is. I can tell you what I do on Linux/Android.

    I use Backblaze B2 for my cloud storage.

    I use rclone to create two encrypted “remotes”: one on my local file system and one for B2. Rclone supports a bunch of cloud providers, so you don’t have to use B2.

    I mount the encrypted local file system and use whatever app (e.g., Paperless) to access the files as if it were any other directory.

    When I’m done, I unmount it and sync it to the encrypted B2 remote.
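
    Day to day, that’s just a mount, work, unmount, sync loop; roughly (the remote names localcrypt and b2crypt are whatever you called them in rclone config):

      # Mount the local encrypted remote; the VFS write cache lets apps write normally.
      rclone mount localcrypt: ~/docs --vfs-cache-mode writes &

      # ... use Paperless or any other app against ~/docs ...

      # Unmount, then push changes to the encrypted B2 remote.
      fusermount -u ~/docs
      rclone sync localcrypt: b2crypt: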

    I use Round Sync on Android, which is rclone with a mobile GUI, to access the same files. It also works great for backing up my phone.

    For Docker access to the mount point, either run the Docker daemon as your current user, enable root access to rclone’s FUSE mounts, or, my preferred option, remount (with root access) a scoped directory for that Docker container using something like bindfs.
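
    The bindfs remount is a one-liner; for example (the paths and 1000:1000 ownership mapping are illustrative):

      # Re-expose one subdirectory of the rclone mount for a single container,
      # with fixed ownership. Requires the rclone mount to allow root access
      # (e.g. --allow-other). Paths and UIDs are placeholders.
      sudo bindfs -u 1000 -g 1000 ~/mnt/docs/paperless /srv/containers/paperless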

    Just be aware that if you use the VFS cache (needed for seek or append), the cache is stored decrypted in your home folder. I’ve been meaning to look into locking it down with AppArmor or something.


  • Round Sync, with whatever remote (backend provider) you want that’s supported by the underlying rclone library. They have self-hosted remote options like FTP or similar. If you want off-site storage and privacy, create a crypt remote pointed at your off-site backend remote; it acts as a wrapper to do end-to-end encryption.

    Round Sync isn’t in any app store, so I’d set a watcher for releases on GitHub. You can set up scheduled backups or restores in Round Sync to keep things moving between your phone and the backend remote; it’s relatively set-and-forget for one-way syncing.

    It has some nuances with bidirectional syncing that can result in data loss if you’re not careful. With your workflow, I recommend something like a daily copy job from your phone to the server, which never deletes files. When you curate your photos via a tool on the backend remote (only do this when all the photos on your phone are on the backup server), you can then do a manual sync from the backend remote to your phone to make the two match exactly.

    Copy just copies files, or updates them to newer versions, from target 1 to target 2.

    Sync does the same, but also deletes any files on target 2 that are not on target 1. It’s very easy to delete files if you haven’t carefully planned out your workflow. Test first.
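
    In rclone terms (the remote names are placeholders):

      # Copy: add/update files on the destination, never delete anything there.
      rclone copy phone:DCIM server:photos

      # Sync: make the destination exactly match the source, deleting extras.
      # Always try --dry-run first -- this is the one that can eat files.
      rclone sync server:photos phone:DCIM --dry-run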

    I use Backblaze B2 for my off-site remote, which with the default settings only hides files and doesn’t delete them. You can manage those with the rclone CLI on a desktop to clean up hidden files later, or set them to delete after X days or so in the B2 lifecycle settings on the Backblaze website.
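
    Cleaning up the hidden files later is a single command (the bucket name is a placeholder):

      # Permanently delete B2's hidden/old file versions under this path:
      rclone cleanup b2:my-photos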

    It’s no Google Photos or Dropbox, but it works well enough for me without giving up privacy. It also decouples syncing from curating photos, giving some additional freedom for a custom workflow.

    I personally just have a daily copy job on my phone, from the phone to a crypt B2 remote, and a cron job on my self-hosted server to copy from B2 to the server. Once a year I might clean things up on my server, do a manual sync to B2, then another to my phone. Sometime later I’ll go clean up the hidden (deleted) files in B2.

    That said, I care more about backups than bidirectional syncing, so your mileage may vary with this solution for your use case.





  • You don’t have to use their proxy. My gateway router uses the Cloudflare API to update the IP, and I just use self-signed certificates. The A record resolves to my gateway, not some Cloudflare server.
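
    The DNS update itself is a single call to Cloudflare’s v4 API; roughly (the zone/record IDs, token, and hostname are placeholders):

      # Sketch of a dynamic-DNS style update against the Cloudflare v4 API.
      # $ZONE_ID, $RECORD_ID, $CF_API_TOKEN, and the hostname are placeholders.
      curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
        -H "Authorization: Bearer $CF_API_TOKEN" \
        -H "Content-Type: application/json" \
        --data '{"type":"A","name":"home.example.com","content":"'"$(curl -s https://ifconfig.me)"'"}'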

    They also do a lot of work in the privacy space; Encrypted Client Hello, which protects the SNI, came from them.

    If you use any company for TLS termination, they can MITM you (e.g., AWS Certificate Manager).