I’m a little teapot 🫖

  • 0 Posts
  • 37 Comments
Joined 9 months ago
Cake day: September 27th, 2023




  • Depends on the SSD; the one I linked is fine for casual home server use. You’re unlikely to see enough of a write workload for endurance to be an issue. That’s an enterprise drive, btw; it certainly wasn’t cheap when it was brand new, and I doubt running a couple of VMs will wear it out quickly. (I’ve had a few of those in service at home for 3-4 years with no problems.)

    Consumer drives have more issues; their write endurance is considerably lower than that of most enterprise parts. You can blow through a cheap consumer SSD’s endurance in mere months with a hypervisor workload, so I’d strongly recommend using enterprise drives where possible.

    It’s always worth taking a look at drive datasheets when you’re considering them, and comparing the warranty lifespan to your expected usage too. The drive linked above has an expected endurance of about 2PB (~3 DWPD, or 2TB/day, over 3 years) so you shouldn’t have any problems there. See https://www.sandisk.com/content/dam/sandisk-main/en_us/assets/resources/enterprise/data-sheets/cloudspeed-eco-genII-sata-ssd-datasheet.pdf
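
    If you want to sanity-check datasheet numbers like those yourself, the arithmetic is simple. A rough sketch (the 960GB capacity below is an assumption for illustration; the DWPD figure scales inversely with whichever capacity you buy):

    ```python
    # Back-of-the-envelope endurance math using the figures above:
    # ~2 PB of rated writes spread over a 3-year warranty.
    capacity_tb = 0.96          # assumed 960 GB model, for illustration
    warranty_days = 3 * 365     # 3-year warranty window
    endurance_pb = 2.0          # rated total writes, in PB

    tb_per_day = endurance_pb * 1000 / warranty_days   # ~1.8 TB/day
    dwpd = tb_per_day / capacity_tb                    # ~1.9 DWPD at 960 GB

    print(f"~{tb_per_day:.1f} TB/day, ~{dwpd:.1f} DWPD")
    ```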

    Older-gen retired or old-stock parts are basically the only way I buy home server storage now; the value for the money is tremendous, and most drives are lightly used at most.

    Edit: some select consumer SSDs can work fairly well with ZFS too, but they tend to be higher-endurance parts with more baked-in overprovisioning. It was popular to use Samsung 850 or 860 Pros for a while due to their tremendous endurance (the 512GB 850s often had an endurance lifespan of 10PB+ before failure, thanks to good old high-endurance MLC flash), but it’s a lot safer to just buy retired enterprise parts now that they’re available cheaply. There are some gotchas that come with using high-endurance consumer drives, like poor sync write performance due to the lack of PLP (power-loss protection), but you’ll still see far better performance than an HDD.




  • If I had to guess, there was a code change in the PVE kernel or in their integrated ZFS module that led to a performance regression for your use case. I don’t really have any feedback there; PVE ships a modified version of an older kernel (6.2?), so something could have been backported into that tree that caused the regression. Same deal with ZFS: whichever version the PVE folks are shipping could have introduced a regression as well.

    Your best bet is to raise an issue with the PVE folks after identifying which kernel version introduced the regression. You’ll want to do a binary search between the current kernel and the last known-good version to pin down exactly when the issue started; then you can open an issue describing the regression.
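
    The bisection itself is just a binary search over the ordered list of kernel builds. A minimal sketch (the build strings and the `shows_regression` test are hypothetical stand-ins; on PVE the “test” step is installing a kernel, rebooting, and rerunning your benchmark by hand):

    ```python
    # Bisect an ordered list of kernel builds to find the first bad one.
    # Assumes builds[0] is known good and the newest build is known bad.
    def first_bad_build(builds, shows_regression):
        lo, hi = 1, len(builds) - 1    # first bad build lies in [lo, hi]
        while lo < hi:
            mid = (lo + hi) // 2
            if shows_regression(builds[mid]):
                hi = mid               # regression present: look earlier
            else:
                lo = mid + 1           # still fast: look later
        return builds[lo]

    # Hypothetical version strings, purely for illustration:
    # first_bad_build(["6.2.16-3-pve", "6.2.16-6-pve", "6.2.16-10-pve"], run_bench)
    ```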

    Or just throw a cheap SSD at the problem and move on; that’s what I’d do here. Something like this should outlast the machine you put it in.

    Edit: the Samsung 863a also pops up cheaply from time to time; it has good endurance and PLP. Basically, search fleaBay for SATA drives with capacities of 400/480GB, 800/960GB, 1.6T/1.92T, or 3.2T/3.84T and check their datasheets for endurance info and PLP capability. Anything in the 400/800/1600/3200GB sequence is a model with more overprovisioning and higher endurance (usually referred to as “mixed use”). Those often have 3 DWPD or 5 DWPD ratings and are a safe bet if you have a write-heavy workload.
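
    To make the overprovisioning point concrete: the lower-capacity models in a family usually carry the same raw NAND as their larger siblings, with the difference held back for wear leveling. A rough sketch (the ~512GB raw figure is an assumption; actual raw capacity varies by model):

    ```python
    # Same raw NAND, different usable capacity -> different overprovisioning.
    raw_gb = 512                      # assumed raw NAND for both models
    for usable_gb in (480, 400):
        op = (raw_gb - usable_gb) / raw_gb
        print(f"{usable_gb} GB usable -> ~{op:.0%} overprovisioned")
    # 480 GB usable -> ~6% overprovisioned
    # 400 GB usable -> ~22% overprovisioned
    ```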





  • distcc, maybe Gluster. Run a Docker Swarm setup on PVE or something.

    Models like those are a little hard to exploit well because of the limited network bandwidth between them. Other mini PC models that have a PCIe slot are fun because you can jam high-speed networking into them along with NVMe, then do rapid failover between machines with very little impact when one goes offline.

    If you do want to bump the bandwidth per machine, you might be able to repurpose the WLAN M.2 slot for a 2.5GbE port, but you’ll likely have to hang the module out the back through a serial-port opening or something. Aquantia USB modules work well too; those can provide 5GbE fairly stably.

    Edit: Oh, you’re talking about the larger desktop EliteDesk G1, not the USFF tiny machines. Yeah, you can jam whatever half-height cards into those you want - go wild.


  • Bus issues, usually. Having a disk (or four) drop out of a ZFS pool regularly isn’t a good time.

    If you can find a combination of enclosure, driver/firmware, and USB port that gives you a reliable connection to the drive, then USB is just another storage bus. It’s generally not recommended because that combination (enclosure, chipset, firmware, driver, port) varies so much from situation to situation, but if you know how to address the pitfalls it can usually work fine.




  • I’m not sure what you’re shopping for with AES-NI, but I can strongly recommend the HP t730 and t740 thin clients if you’re trying to build a budget home firewall machine. Both support AES-NI (but obviously not Xeon QAT), and the t730 is cheap on eBay. Drop in whatever NIC and an SSD and you’re off to the races with OPNsense. The t740 is performant enough to run OPNsense on Proxmox if that’s your thing; you’ll have plenty of spare processing time to do something else on the machine beyond routing/firewalling a 1-2Gb home connection.
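
    If you want to confirm AES-NI on hardware you already have before committing to it, a quick check on Linux looks something like this (it reads the CPU flag list; “aes” indicates hardware AES instructions):

    ```python
    # Check /proc/cpuinfo for the 'aes' CPU flag (Linux x86 only).
    with open("/proc/cpuinfo") as f:
        flags = next(line for line in f if line.startswith("flags")).split()
    print("AES-NI supported" if "aes" in flags else "no AES-NI")
    ```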




  • I use mostly Samsung, SK hynix, Micron, and SanDisk. For bulk storage it doesn’t really matter which of those you pick, but for fast storage you’ll want to be sure the drive offers PLP.

    Go hit up fleaBay and see what’s available in the way of enterprise drives in the size you need, then google the model numbers and check out the datasheets. Once you know what each drive is capable of, you can decide which to buy. I usually try to buy 3 DWPD models for VM storage and 1.3 DWPD models for bulk; you might prefer to focus on IOPS over endurance, depending on your application.

    Edit: for a VM host pool you’re primarily going to be concerned with IOPS, endurance, and having PLP for better ZFS performance. For bulk storage you can skimp on specs to some extent. I prefer to use cheaper drives like the SanDisk CloudSpeed Eco line for a bulk storage pool, and whatever high-IOPS, high-endurance drives I can find cheap for my VM host pool. When you split your pools you can do things like use mirror vdevs for VM performance and raidz-whatever for bulk storage.
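
    As a rough illustration of that mirror-vs-raidz tradeoff, here’s the usable-capacity math for a hypothetical six-drive pool (drive count and size are assumptions; ZFS slop/metadata overhead ignored):

    ```python
    # Usable capacity: striped 2-way mirrors vs a single raidz2 vdev.
    n, size_tb = 6, 1.92            # hypothetical: six 1.92 TB drives

    mirrors = (n // 2) * size_tb    # three 2-way mirrors: best IOPS
    raidz2 = (n - 2) * size_tb      # one raidz2 vdev: best capacity

    print(f"mirror vdevs: {mirrors:.2f} TB usable")   # 5.76 TB
    print(f"raidz2:       {raidz2:.2f} TB usable")    # 7.68 TB
    ```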

    How many drives are you looking to use, what are they for, what interfaces do you have available on the machine (SAS backplane, SATA, any number of available NVMe hookups of some flavor, etc.), what pool topology are you trying to use, and what workload do you want to jenga-tower on top of all of the above? With more info, people can give you more specific recommendations. (Edit: and while I’m at it, what sort of machine are you running this on? Processor and amount of RAM would be useful.)