• 1 Post
  • 43 Comments
Joined 1 year ago
Cake day: June 6th, 2023

  • It’s all about where the packages and services are installed

    No. Your packages and services could be on a network share on the other side of the world, but where they are run is what matters here. Processes are always loaded into, and run from, main memory.

    “Running on bare metal” refers to whether the CPU a process runs on is real hardware or is being emulated/virtualized (e.g., via Intel VT-x).

    A VM uses virtualization to run a guest OS, and its processes run within that OS, so neither is running on bare metal. A container, on the other hand, runs directly on whatever your host OS is running on: if your host is on bare metal, then the container is too, because no hardware is being emulated or virtualized. (A rough way to check this yourself on Linux is sketched below.)

    Here’s an article explaining the difference in more detail if needed.
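
    If you want to check for yourself, below is a rough Python sketch of the usual Linux heuristics: a CPU exposed by a hypervisor advertises a “hypervisor” flag in /proc/cpuinfo, and containers tend to leave traces like /.dockerenv or container names in /proc/1/cgroup. It’s best-effort only (tools like systemd-detect-virt do this far more thoroughly), and it assumes a Linux host.

    ```python
    # Best-effort heuristics for "am I on bare metal?" on Linux.
    # Not definitive: hypervisors can hide the CPU flag, and container
    # detection via /.dockerenv or /proc/1/cgroup is a convention, not a rule.
    from pathlib import Path


    def looks_virtualized() -> bool:
        """CPUs exposed by a hypervisor usually advertise the 'hypervisor' flag."""
        try:
            return "hypervisor" in Path("/proc/cpuinfo").read_text()
        except OSError:
            return False


    def looks_containerized() -> bool:
        """Docker drops /.dockerenv; cgroup paths often name docker/lxc/kubepods."""
        if Path("/.dockerenv").exists():
            return True
        try:
            cgroup = Path("/proc/1/cgroup").read_text()
        except OSError:
            return False
        return any(k in cgroup for k in ("docker", "lxc", "kubepods", "containerd"))


    if __name__ == "__main__":
        if looks_virtualized():
            print("CPU is exposed by a hypervisor (VM) -- not bare metal")
        elif looks_containerized():
            print("Container on the host kernel -- bare metal if the host is")
        else:
            print("No virtualization detected -- likely bare metal")
    ```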


  • As the other person said, I don’t think the SSD knows about partitions or makes any assumptions based on partitioning; it just knows whether you’ve written data to a certain location, and it could be smart enough to track how often you’re writing to that location. So if you keep writing data to a single location, it could decide to remap that logical address to different physical memory so that you don’t wear it out.

    I say “could” because it really depends on the vendor. This is where one brand might invest the engineering time in firmware that extends the life of the drive, while another might cheap out and sell you a drive that will die sooner.

    It’s also worth noting that drives have an unreported pool of “spare sectors” that they can use if they detect one has gone bad. I don’t know if you can see the total remaining spare sectors, but it typically scales with the size of the drive. You can at least see how many bad sectors have been reallocated using S.M.A.R.T. (a quick sketch of reading that value is below).
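
    In case it’s useful, here’s a rough sketch of pulling that reallocated-sector count out of smartctl. It assumes smartmontools is installed and a SATA/ATA drive that reports the classic attribute table (NVMe output looks different), and that the raw value sits in the last column, which isn’t guaranteed for every drive or firmware.

    ```python
    # Sketch: read the Reallocated_Sector_Ct raw value via smartctl -A.
    # Requires smartmontools and (usually) root. Device path is an example.
    import re
    import subprocess


    def reallocated_sectors(device: str) -> int | None:
        """Return the raw Reallocated_Sector_Ct value, or None if not reported."""
        out = subprocess.run(
            ["smartctl", "-A", device],
            capture_output=True, text=True, check=False,
        ).stdout
        for line in out.splitlines():
            if "Reallocated_Sector_Ct" in line:
                # The raw value is normally the last column of the attribute row.
                match = re.search(r"(\d+)\s*$", line)
                return int(match.group(1)) if match else None
        return None


    if __name__ == "__main__":
        print("Reallocated sectors on /dev/sda:", reallocated_sectors("/dev/sda"))
    ```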



  • Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.

    Concepts like files, FATs, and copy-on-write are filesystem-specific. I believe that even if a filesystem were to write to the same location repeatedly in order to degrade an SSD on purpose, the firmware would shift its block mapping around under the hood to spread out the wear. If the SSD detects that a block is producing errors (failed ECC checks), it will mark it as bad and map in a spare block. To the filesystem, there’s still perfectly good storage at that address, albeit with a potential one-off read error. (A toy model of that remapping idea is sketched below.)

    Larger SSDs just give the firmware more spare blocks to pull from.
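
    Here’s a toy model of that remapping idea, purely to illustrate the logical-to-physical mapping; real flash translation layers also handle free/used tracking, garbage collection, and so on, and none of this is visible to the filesystem.

    ```python
    # Toy flash translation layer: the filesystem keeps hammering one logical
    # block, but each write lands on the least-worn physical block instead.
    class ToyFTL:
        def __init__(self, physical_blocks: int):
            self.erase_counts = [0] * physical_blocks   # wear per physical block
            self.mapping: dict[int, int] = {}           # logical -> physical

        def write(self, logical_block: int, data: bytes) -> int:
            # Pick the least-worn physical block and remap the logical address.
            target = min(range(len(self.erase_counts)),
                         key=self.erase_counts.__getitem__)
            self.erase_counts[target] += 1
            self.mapping[logical_block] = target
            # (A real FTL would also program `data` into flash and erase later.)
            return target


    ftl = ToyFTL(physical_blocks=4)
    for _ in range(8):
        print("logical block 0 now lives in physical block", ftl.write(0, b"x"))
    ```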


  • Assume your hard drives will fail. Any time I get a new NAS drive, I do a burn-in test (a simple badblocks run; it can take a few days depending on the size of the drive, but you can run multiple drives in parallel, as sketched below) to get them past the first ledge of the bathtub curve, and then I put them in a RAIDZ2 pool and assume each one will fail some day.

    Therefore, it’s not about buying the best drives so they never fail, because they will fail. It’s about buying the most cost-effective drive for your purpose (price vs. average lifespan vs. size). For that part, definitely refer to the Backblaze report someone else linked.
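
    For reference, here’s a rough sketch of that parallel burn-in, assuming badblocks (from e2fsprogs), root access, and example device names you would swap for your own. It’s a destructive write test, so only point it at drives with nothing on them.

    ```python
    # Run destructive badblocks write tests on several new drives in parallel.
    # WARNING: -w wipes the drive. Device names below are examples only.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # hypothetical new NAS drives


    def burn_in(device: str) -> int:
        # -w: destructive write test, -s: show progress, -v: verbose.
        # Very large drives may need an explicit block size, e.g. "-b", "4096".
        return subprocess.run(["badblocks", "-wsv", device]).returncode


    with ThreadPoolExecutor(max_workers=len(DRIVES)) as pool:
        for device, rc in zip(DRIVES, pool.map(burn_in, DRIVES)):
            # Exit code 0 means the run completed; any bad blocks found are
            # reported by badblocks itself as it runs.
            print(device, "exit code", rc)
    ```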


  • I’ve been using TrueNAS with a nightly sync to Backblaze for years and I like it.

    It used to be called FreeNAS and ran on FreeBSD. Now the BSD-based version is called TrueNAS Core, and the newer Linux-based version is called TrueNAS Scale.

    I would go with TrueNAS Scale if I were starting fresh today. You probably won’t use the “jail”-style app functionality immediately, but it’s super handy, and down the line, if you start playing with it, you’ll run into fewer compatibility issues running Linux vs. BSD.