I’ve been self-hosting for a while. I run around 15 distinct services in Docker containers, all on a single machine with a medium-sized disk. It’s a small form factor machine, and I recently had to add space, so I’ve attached an external USB storage device.

It feels clunky.

At what point does a performant SAN/NAS make more sense than local storage? When did you make the jump?

  • Jul (they/she)@piefed.blahaj.zone · 7 hours ago

    I have always used small, cheap devices for the servers and NAS storage. I actually had a NAS for a long time to back up my personal devices, even before self-hosting. The NAS uses cheaper, slower drives, but larger ones and more of them, in a RAID configuration for redundancy, because I’ve had too many drives die on me over the years. The servers then mount the data over NFS and cache it locally on their faster drives, which can be much smaller since they are more expensive per GB.
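    As a sketch of the caching setup described above: the NFS mount option that enables local caching through the kernel’s FS-Cache is `fsc`, which requires the cachefilesd daemon running on the client. The hostname and paths below are placeholders, not from the comment.

```shell
# /etc/fstab entry on a compute node (hostname and paths are examples):
# the "fsc" option asks the NFS client to cache file data on a local
# disk via FS-Cache; cachefilesd must be installed and running.
nas:/export/data  /mnt/data  nfs  defaults,fsc,_netdev  0  0
```

    After `mount /mnt/data`, repeated reads of the same files are served from the local cache rather than the wire.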

  • CmdrShepard49@sh.itjust.works · 10 hours ago

    My NAS is in the same case as my server (it’s just a desktop case with tons of drive slots). I’m not a fan of pre-built NAS boxes, as they’re insanely expensive and can’t be expanded without buying a whole other NAS. Plus, now I don’t have to worry about network issues making my files unavailable, nor about tons of traffic over the LAN for reads/writes.

  • non_burglar@lemmy.world · 13 hours ago

    I won’t comment on the when of it, you have a lot of good answers here already.

    But I’d like to add that “performant” is a much lower bar than you think. Gigabit has been fine for my nearly 25 containers using NFS as their storage, video streaming included.
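    For the curious, a Docker host can consume NFS storage like that directly as a named volume. This is a hedged sketch with a placeholder hostname and export path, not the commenter’s actual config:

```shell
# Create a Docker volume backed by an NFS export
# (addr and device below are placeholders):
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=nas.lan,nfsvers=4,rw \
  --opt device=:/export/media \
  media

# Any container can then use it like a local volume:
docker run -d --name jellyfin -v media:/media jellyfin/jellyfin
```

    The `local` driver mounts the export on demand when a container using the volume starts, so there’s nothing to add to /etc/fstab.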

  • frongt@lemmy.zip · 19 hours ago

    When you want to disconnect compute from storage.

    If you have more than one compute node you can reboot one without taking any services down. Or you can add more nodes and they share the storage.

    I have two Proxmox servers, but I don’t run separate storage. Small services use ZFS and replication. My big media server VM stays put and has to be shut down if I reboot its host.

  • observantTrapezium@lemmy.ca · 18 hours ago

    When you run out of local storage…

    If you have a single node, external USB storage is 100% fine. Even if you have more machines, if you don’t actually need a massive amount of storage, you can share that external drive as NFS.
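    Sharing that external drive over NFS is only a few lines. The mount point and subnet below are assumptions; adjust to your network:

```shell
# /etc/exports on the machine the USB drive is attached to:
/mnt/usb  192.168.1.0/24(rw,sync,no_subtree_check)

# Re-read the export table and verify what is being served:
sudo exportfs -ra
showmount -e localhost
```

    Clients on that subnet can then mount it with `mount -t nfs <host>:/mnt/usb /mnt/usb`.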

  • stoy@lemmy.zip · 16 hours ago

    For me it was when I noticed that some of my photos had been affected by bitrot.

    I started building my NAS last year and got caught out by rising HDD prices; I still need two more 8TB drives to get my 32TB RAIDZ2 pool going.

    I was going to use TrueNAS, but now that they have turned their back on open source, I am not so sure anymore.
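    For reference, 32TB usable of RAIDZ2 with 8TB drives works out to six disks (four data plus two parity). A minimal sketch, with placeholder device IDs:

```shell
# Six 8TB drives in RAIDZ2: any two can fail without data loss.
# Use stable /dev/disk/by-id paths rather than /dev/sdX names.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Scheduled scrubs are what actually detect (and repair) bitrot:
zpool scrub tank
```

    Checksums alone only flag corruption on read; it’s the regular scrub that walks every block and rewrites bad copies from parity.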

    • aichan@piefed.blahaj.zone · 1 hour ago

      Went from TrueNAS to OpenMediaVault and we are quite happy atm. Very reliable, decent documentation, easy to use. We were choosing between this and Proxmox when we switched, but ultimately chose OMV for its ease. It’s been at least a year, and with ~100 containers running and some friends asking for VMs it would have been useful to be on Proxmox, but it’s not a biggie. However, be sure of your use case and try to imagine future needs, just to be prepared and avoid more migrations ;)

  • hexagonwin@lemmy.today · 16 hours ago

    When you can’t add more local storage, or when networked storage gets cheaper.

    Networked storage is slower and eats up network bandwidth, so it’s not really a great option at a small scale.

  • Decronym@lemmy.decronym.xyzB · edited · 53 minutes ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    NAS   Network-Attached Storage
    NFS   Network File System, a Unix-based file-sharing protocol known for performance and efficiency
    NVMe  Non-Volatile Memory Express interface for mass storage
    RAID  Redundant Array of Independent Disks for mass storage
    SAN   Storage Area Network
    SSD   Solid State Drive mass storage

    6 acronyms in this thread; the most compressed thread commented on today has 14 acronyms.

    [Thread #246 for this comm, first seen 18th Apr 2026, 19:00]

  • egg82@lemmy.world · 14 hours ago

    It’s all about tradeoffs and maximizing the useful qualities of each.

    NVMe storage is extremely fast, but expensive, and it wears quickly. For a homelab, those drives are usually not easily accessible or replaceable without powering the system off. Internal SSDs are similar, with the caveat that they’re more likely to be hot-swappable on more server-grade equipment (even older equipment, which many homelabs will have). HDDs are obviously slower but have higher capacity and wear less quickly. SAS drives will have higher DWPD and more speed for roughly the same (used) cost, but you need to make sure the backplane you’re using supports them.

    External USBs are much cheaper and higher capacity, depending on what you get, but are usually limited to USB-C or even USB3 speeds. Additionally, they can be disconnected physically or via software.

    A SAN or vSAN requires either special equipment and cables or a dedicated high-speed (10Gbit+) network to function well. There is various free software that can build a vSAN-like setup for you, such as Ceph. A “proper” vSAN will be marginally slower than an internal drive array but usually still plenty fast for “big data”, which is what it’s good for: big chunks of data that don’t require the world’s fastest drive access speeds. Note that, while unlikely if set up properly, this storage can also be disconnected both physically and via software. That is usually recoverable more quickly than with USB, since common vSAN software will work around the failure.
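    As an illustration of the “free software vSAN” route, bootstrapping a Ceph cluster with cephadm looks roughly like this. The monitor IP is a placeholder, and a real cluster wants three or more nodes before it’s useful:

```shell
# Bootstrap the first node of a Ceph cluster:
cephadm bootstrap --mon-ip 10.0.0.10

# After joining more hosts, turn every empty disk into an OSD:
ceph orch apply osd --all-available-devices
```

    From there, block devices (RBD) or a CephFS filesystem can be carved out of the pool for VMs and containers.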

    For my homelab, I use NAS storage for data that’s large, “infinitely” growing, and doesn’t need the extremely fast access a database would require, and vSAN for most other operations. I should keep local storage or use an actual SAN fabric of some kind, but homelabs aren’t professional datacenters.

    • iamthetot@piefed.ca · 13 hours ago

      Honestly this is the simple answer. My partner and I needed to access the same files often enough that we started using a NAS before I even got into self hosting. In fact, really, the NAS was my gateway drug haha.