  • You might want to look up SMR vs CMR, and why it matters for NASes. The gist is that cheaper drives are SMR, which mostly work fine but can time out during certain operations, like a ZFS rebuild after a drive failure.

    Sorry, I don’t remember the details, just the conclusion that it’s safer to stay away from SMR for any kind of software RAID (a quick way to pull your drive models for cross-checking is sketched after this comment).

    EDIT: also, there was the SMR scandal a few years ago where WD quietly changed their higher-capacity WD Red (“NAS”) drives to SMR without mentioning it anywhere in the specs. Obviously a lot of people were not happy to find that their “NAS”-branded hard drives were made with a technology that was not suitable for NAS workloads. From memory I think it was discovered when someone investigated why their ZFS rebuild kept failing on their new drive.
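    For anyone who wants to check their own drives, here’s a minimal sketch (assuming smartmontools is installed with JSON output support and the script runs with enough privileges; the field names can vary by drive type) that just collects drive model names so they can be cross-checked against the manufacturers’ published CMR/SMR tables:

```python
# Sketch: collect drive models with smartctl so they can be cross-checked
# against the manufacturers' CMR/SMR lists. Assumes smartmontools >= 7.0
# (for --json output) and sufficient privileges to query the drives.
import json
import subprocess

def scan_devices():
    # `smartctl --scan --json` lists the devices smartctl can talk to.
    out = subprocess.run(
        ["smartctl", "--scan", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [d["name"] for d in json.loads(out).get("devices", [])]

def model_of(device):
    # `smartctl -i --json` prints identity info; model_name is the field
    # to look up in the CMR/SMR tables.
    out = subprocess.run(
        ["smartctl", "-i", "--json", device],
        capture_output=True, text=True,
    ).stdout
    return json.loads(out).get("model_name", "unknown")

if __name__ == "__main__":
    for device in scan_devices():
        print(device, model_of(device))
```

    As far as I know, drive-managed SMR usually isn’t exposed in the drive’s identity data itself, which is why cross-checking the model number against the vendor lists is the usual approach.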



  • This sounds like a FOSS utopian future :)

    There are a few projects that have started down this path with single-click deployable apps; you could even say HomeAssistant OS does this to some extent by managing the services for you.

    I believe one of the biggest hurdles for a “self hosting appliance” is resilience to hardware failure. No one wants to lose decades of family photos or legal documents due to an SSD going bad, or the cat spilling water on their “hosting box”. So automated, reliable off-site backups and recovery procedures for both data and configs are key (a rough sketch of such a backup job follows this comment).

    Databox from BBC / Nottingham University is also a very interesting concept worth looking into:

    A platform for managing secure access to data and enabling authorised third parties to provide the owner authenticated control and accountability.
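    On the off-site backup point, here’s a rough sketch of the kind of scheduled job such an appliance could run, assuming restic and an already-initialised remote repository; the paths, retention numbers, and the RESTIC_* environment variables are placeholders for whatever the appliance would manage:

```python
# Sketch: a scheduled off-site backup of data and configs using restic.
# Assumes restic is installed and RESTIC_REPOSITORY / RESTIC_PASSWORD
# already point at an initialised remote repository; the paths and
# retention numbers below are placeholders.
import subprocess

BACKUP_PATHS = ["/srv/photos", "/srv/documents", "/etc/selfhost-configs"]

def restic(*args):
    subprocess.run(["restic", *args], check=True)

def run_backup():
    # New snapshot goes off-site; restic de-duplicates between runs.
    restic("backup", *BACKUP_PATHS)
    # Verify repository integrity so silent corruption doesn't go unnoticed.
    restic("check")
    # Keep a bounded history so the remote repository doesn't grow forever.
    restic("forget", "--keep-daily", "7", "--keep-weekly", "4",
           "--keep-monthly", "12", "--prune")

if __name__ == "__main__":
    run_backup()
```

    An appliance would also want to exercise restores periodically (restic has a restore subcommand for that), since an untested backup is only half a backup.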



  • Like others said, it’s mostly just practice.

    What helps is to align the (short) ends and hold them flat between your index finger and thumb. Use your free hand to get them in order. Once they’re in order, keep holding them still between the index finger and thumb of one hand, then use your free hand to slot on the connector.

    Edit: also, bending them back and forth a bit will soften them up and keep them from curling in all sorts of directions. It also weakens them, so don’t overdo it (this mostly only works for solid-core cable, the type meant for permanent installations like inside walls).



  • For the RPi, the two major causes of issues (in my experience) are low-spec power supplies and low-spec SD cards.

    Power supplies drop voltage when the load gets too high, which is especially pronounced with high-power USB devices like external hard drives (the sketch after this comment shows one way to check whether this has happened).

    SD cards tend to get worn out or give write errors after enough writes. Class 10 SD cards are recommended for both speed and longevity. And ideally, try to avoid write-intensive stuff on the SD card.
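    If you suspect the power supply, a quick check is decoding the get_throttled flags. A minimal sketch, assuming vcgencmd is available (it ships with Raspberry Pi OS) and using the documented bit meanings:

```python
# Sketch: decode `vcgencmd get_throttled` to see whether the Pi has ever
# hit under-voltage or throttling. Assumes vcgencmd is present (it ships
# with Raspberry Pi OS); the bit meanings are the documented ones.
import subprocess

FLAGS = {
    0: "under-voltage detected right now",
    1: "ARM frequency capped right now",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred since boot",
    17: "ARM frequency capping has occurred since boot",
    18: "throttling has occurred since boot",
    19: "soft temperature limit has occurred since boot",
}

def power_problems():
    # Output looks like "throttled=0x50005"; the hex value is a bit field.
    out = subprocess.run(
        ["vcgencmd", "get_throttled"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    value = int(out.split("=")[1], 16)
    return [msg for bit, msg in FLAGS.items() if value & (1 << bit)]

if __name__ == "__main__":
    problems = power_problems()
    print("\n".join(problems) if problems else "no power problems reported")
```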



  • Proxmox Backup Server: incremental, de-duplicated image backups of the whole VM, with the possibility of individual file restore. It’s like magic.

    For the legacy bare-metal system I have rsnapshot backups of the data folder (I set it up ages ago and never changed it; a rough sketch of the hardlink-rotation idea behind it is after this comment).

    An nginx LXC container has a single static backup of the container, with the nginx config file stored in a git repo.
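    For anyone curious what rsnapshot is doing under the hood, here’s a minimal sketch of the same idea: rotating snapshot directories where unchanged files are hardlinked via rsync’s --link-dest. The paths and retention count are placeholders, and real rsnapshot is configured through /etc/rsnapshot.conf rather than a script like this:

```python
# Sketch: rsnapshot-style rotating snapshots with rsync --link-dest, so
# files unchanged since the previous snapshot are hardlinked, not copied.
# Paths and retention are placeholders; rsnapshot itself is configured
# through /etc/rsnapshot.conf, not a script like this.
import shutil
import subprocess
from pathlib import Path

SOURCE = "/srv/data/"              # trailing slash: copy contents, not the dir
DEST = Path("/backup/snapshots")
KEEP = 7                           # number of snapshots to keep

def rotate_and_snapshot():
    DEST.mkdir(parents=True, exist_ok=True)
    # Drop the oldest snapshot, then shift the rest up by one.
    oldest = DEST / f"daily.{KEEP - 1}"
    if oldest.exists():
        shutil.rmtree(oldest)
    for i in range(KEEP - 2, -1, -1):
        current = DEST / f"daily.{i}"
        if current.exists():
            current.rename(DEST / f"daily.{i + 1}")
    # Take the new snapshot; unchanged files are hardlinked from daily.1.
    cmd = ["rsync", "-a", "--delete", SOURCE, str(DEST / "daily.0")]
    previous = DEST / "daily.1"
    if previous.exists():
        cmd.insert(2, f"--link-dest={previous}")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    rotate_and_snapshot()
```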



  • Why not both?

    Like many others here, I went with Proxmox as the base host. But most of my services are Docker containers, running in a “dockerVM” on top of Proxmox.

    Having Proxmox as the base is just so flexible, which is very handy for a homelab.

    • For instance, I set up a VM with Wireguard back when Wireguard had only just been merged into the mainline kernel, without affecting the other VMs
    • You can have separate VMs for Docker testing and Docker production
    • You can run multiple VMs for multiple Kubernetes hosts, to try it out and get your feet wet without affecting the “production” containers
    • If you get additional servers, you can just migrate those Kubernetes VMs
    • You can run a Windows VM should you need one, and BSD (and thus pfSense/OPNsense or TrueNAS)
    • You can run a full graphical environment if you want
    • Proxmox has easy setup for firewalls for each VM
    • I have a VM running a legacy bare-metal system (from the same server now running Proxmox) that I’ve been slowly decommissioning piece by piece