I have a 56 TB local Unraid NAS that is parity protected against single drive failure. While I think parity recovery of a single failed drive covers data loss 95% of the time, I’m always concerned about two drives failing at once, or a site- or system-wide disaster that takes out the whole NAS.

For other larger local hosters who are smarter and more prepared, what do you do? Do you sync it off site? How do you deal with cost and bandwidth needs if so? What other backup strategies do you use?

(Sorry if this standard scenario has been discussed - searching didn’t turn up anything.)

  • unit327@lemmy.zip · 7 hours ago

    I use the AWS S3 Deep Archive storage class, at $0.001 per GB per month. But your upload bandwidth really matters here: I only back up a subset of the most important things this way, otherwise it would take months just to complete a single upload. Using rclone sync instead of re-uploading everything each time helps, but you still have to get that first full upload done somehow…
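    A hypothetical sketch of the kind of rclone invocation described above, driven from Python. The remote name (`s3remote`), bucket, and source path are made-up placeholders, not the commenter's actual setup; `--s3-storage-class DEEP_ARCHIVE` is rclone's per-object flag for the Deep Archive class.

```python
import subprocess

def build_rclone_sync(src: str, dest: str) -> list[str]:
    """Build an rclone sync command that writes objects directly
    in the S3 Glacier Deep Archive storage class."""
    return [
        "rclone", "sync", src, dest,
        # Per-object storage class; avoids needing a lifecycle rule on the bucket.
        "--s3-storage-class", "DEEP_ARCHIVE",
    ]

# Placeholder paths/remote for illustration only.
cmd = build_rclone_sync("/mnt/user/important", "s3remote:my-backup-bucket/important")
# subprocess.run(cmd, check=True)  # uncomment to actually run
```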

    I have a complicated system where:

    • borgmatic backups happen daily, locally
    • those backups are stored on a btrfs subvolume
    • a Python script makes a read-only snapshot of that volume once a week
    • the snapshot is synced to S3 using rclone with --checksum --no-update-modtime
    • once the upload is complete, the btrfs snapshot is deleted
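    The weekly steps above could be sketched roughly like this in Python (the subvolume path, snapshot location, and remote name are all assumptions for illustration; the actual script will differ):

```python
import subprocess
from datetime import date

# Assumed layout, not the commenter's real paths:
SUBVOL = "/mnt/backups"                          # btrfs subvolume holding borgmatic archives
SNAP = f"/mnt/snapshots/backups-{date.today()}"  # read-only weekly snapshot
REMOTE = "s3remote:my-backup-bucket/backups"     # hypothetical rclone remote

def weekly_upload_commands() -> list[list[str]]:
    """The three commands run in order: snapshot, sync, delete."""
    return [
        # -r makes the snapshot read-only, so it can't change mid-upload
        ["btrfs", "subvolume", "snapshot", "-r", SUBVOL, SNAP],
        # compare by checksum, and don't touch remote modtimes (avoids
        # pointless metadata writes against archive-class objects)
        ["rclone", "sync", SNAP, REMOTE, "--checksum", "--no-update-modtime"],
        ["btrfs", "subvolume", "delete", SNAP],
    ]

def run_weekly_upload() -> None:
    for cmd in weekly_upload_commands():
        subprocess.run(cmd, check=True)  # stop the pipeline if any step fails
```

    Snapshotting first means rclone uploads a frozen, consistent view of the backup directory even if borgmatic runs again during the (long) upload.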

    I’ve also set up encryption in rclone so that all the data is encrypted and unreadable by AWS.
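    In rclone this is done with a "crypt" remote layered over the S3 remote. A hedged rclone.conf sketch (remote and bucket names are made up; the obscured password values come from `rclone config` or `rclone obscure`):

```ini
[s3remote]
type = s3
provider = AWS
env_auth = true
region = us-east-1

[s3crypt]
type = crypt
remote = s3remote:my-backup-bucket/backups
filename_encryption = standard
password = <obscured-password-from-rclone-config>
password2 = <obscured-salt-from-rclone-config>
```

    Syncing to `s3crypt:` instead of `s3remote:` then encrypts both file contents and file names client-side before anything leaves the machine.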