

  • Used/refurb SAS drives aren’t that expensive. Can someone with a better memory than me please link to that site for second-hand server components?

    The reason SAS drives are usually more expensive isn’t that the tech itself costs more (it’s largely just a different kind of interface), but that “enterprise grade” hardware goes through a few additional QA steps, such as running a break-in cycle at the factory to weed out defective units.

    While a server such as the one you described is a bit power-hungry, it’s not that bad. Plus, if you wanna get into servers long term, it could serve as a useful way to get used to the hardware involved.
    Server hardware is at its core not that different from consumer hardware, but it does often come with some nice and useful additions, such as:

    • Botswana drive bays (I tried to write “hotswap”, but autocorrect is probably correct.)
    • IPMI/iDRAC or equivalent for headless management
    • Dual PSUs
    • Rack mount capability
    • Easy maintenance access to most hardware
    • A ridiculous number of sensors with automated warnings.

    RAID is entirely optional. I seem to be the only one in here who actually likes hardware RAID; software RAID is more popular in the self-hosting community. Whether to use it depends on your use case, though. If you wanna live without it, use JBOD mode and access each drive normally. Alternatively, pool as many disks as you want into RAID6 and you get one large storage device with built-in redundancy. RAID arrays can be managed either from the BIOS or from the OS using tools such as storcli.
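    The exact commands vary by controller; as a rough sketch (assuming a Broadcom/LSI controller seen as /c0 and an enclosure ID of 252, both made up for the example), creating a RAID6 volume with storcli looks roughly like this:

```shell
# List controllers and the drives attached to them, to find the
# controller (/cX) and enclosure:slot IDs on your system
storcli show
storcli /c0 show

# Pool six drives into one RAID6 virtual drive
storcli /c0 add vd type=raid6 drives=252:0-5

# Check the state of the resulting virtual drive
storcli /c0 /vall show
```

    IDs and even the binary name (storcli vs storcli64) differ between systems, so check your controller’s documentation first.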


  • Some VLAN-related nuggets that you may find useful for your post/blog:

    • 99% of the time when people refer to VLANs, they’re talking about 802.1Q (tag-based VLANs). There are others (such as port-based), so it’s up to you whether you want to cover those as well.
    • The word “Trunk” can mean different things, depending on the vendor. In the Cisco world, it means a line/port carrying multiple VLANs. With many other vendors, such as Aruba/HPE, it refers to link aggregation, which isn’t necessarily relevant to VLANs.
    • A lot of hardware still uses VLANs even if none have been configured. For example, defaulting all switch ports to an Access tag of 1 makes it behave like a dumb switch. This can cause issues later if you’re configuring VLANs elsewhere.
    • Anything non-VLAN-aware connected to a VLAN-enabled switch will have to be connected to a port with a default VLAN tag. This is usually referred to as an “Access port” or an “Untagged port”.
    • “How do I configure the switch to allow units on VLAN 123 to talk to VLAN 321?”. You don’t. Connect both VLANs to a router, which will route between them. Either connect the router to both VLANs individually and skip the tagging on the router, or run a single trunk between the switch and the router which carries both VLANs. The latter requires you to configure VLANs on your router accordingly.
    • It might make sense in many cases to reuse the subnet number as the VLAN tag, e.g. VLAN 123 for 192.168.123.0/24. Makes it easier to keep track.
    • A PC can implement VLANs on its network port, allowing you to connect to a trunk port and access several VLANs with one cable.
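    The last two points can be sketched on a Linux box (assuming the trunk arrives on eth0 and carries VLANs 123 and 321; interface names and addresses are made up for the example):

```shell
# Create one virtual interface per VLAN on top of the trunk port
# (requires root and the 8021q kernel module)
ip link add link eth0 name eth0.123 type vlan id 123
ip link add link eth0 name eth0.321 type vlan id 321

# Give each VLAN an address in its own subnet and bring it up
# (VLAN ID 321 doesn't fit in an octet, so that subnet can't match)
ip addr add 192.168.123.1/24 dev eth0.123
ip addr add 192.168.32.1/24 dev eth0.321
ip link set eth0.123 up
ip link set eth0.321 up

# Enable forwarding and this machine now routes between the two VLANs
sysctl -w net.ipv4.ip_forward=1
```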

    Source: VLANs have been an integral part of my career for 20ish years.


  • If it works on Mint, it’ll most likely work on Debian, with the caveat that Debian involves a lot more CLI and a lot less handholding. Depending on your setup, Debian might be a better choice for you, as Mint is desktop-oriented.

    But don’t fix something that already works. If there are no issues with your Mint setup, I’d say keep it. Next time you set up a server, you can go for Debian instead.

    Source: I use both extensively. Mint on desktops, Debian on headless stuff.




  • I use BeeGFS at work for the redundancy and clustering aspect. 1.8PB of storage with 100% redundancy.

    While it supports a lot and CAN be quite involved, a very basic setup is in fact pretty simple:

    • A filesystem on a machine is a storage target.
    • A machine with storage targets is a storage node (beegfs-storage).
    • A management server (beegfs-mgmtd) ties these together into a filesystem.
    • Any machine running beegfs-client can mount this filesystem.
    • One machine needs to run beegfs-meta for the metadata. It doesn’t require much.
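
    As a rough sketch of those steps (assuming BeeGFS 7.x packages are already installed, with a hypothetical management host named node01), a minimal cluster can be wired up with the bundled setup scripts:

```shell
# On the management host (node01): initialize and start the management service
/opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs/mgmtd
systemctl start beegfs-mgmtd

# On the metadata host: point the metadata service at the management host
/opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs/meta -s 1 -m node01
systemctl start beegfs-meta

# On each storage node: register a storage target (-s node ID, -i target ID)
/opt/beegfs/sbin/beegfs-setup-storage -p /mnt/target101 -s 1 -i 101 -m node01
systemctl start beegfs-storage

# On any client: mount the filesystem (defaults to /mnt/beegfs)
/opt/beegfs/sbin/beegfs-setup-client -m node01
systemctl start beegfs-helperd beegfs-client
```

    Paths, IDs, and hostnames above are examples only, and newer BeeGFS releases have reworked the management daemon, so check the docs for your version.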