
  • 0 Posts
  • 69 Comments
Joined 1 year ago
Cake day: November 3rd, 2024






  • I’d say that a good starting point would be the smallest setup that serves a useful purpose. This is usually some sort of network storage, and it sounds like this might be a good starting point for you as well. From there you can add on and refine your setup however you see fit, provided your hardware is up to it.

    Speaking of hardware, while it’s certainly possible to go all out with a rack-mounted, purpose-built 19" 4U server full of disks, the truth is that “any” machine will do. Servers generally don’t require much (depending on use case, of course), and you can get away with a second-hand regular desktop machine. The only caveat is that for your (perceived) use cases, you might want the ability to add a bunch of disks. For now, just go for a simple setup with as many disks as you see fit; you can expand with a JBOD cabinet later.

    Tying this storage together depends on your tastes, but it generally comes down to two schools of thought, both of which are valid:

    • Hardware RAID. I think I’m one of the few fans of this, as it does offer some advantages over software RAID. I suspect that the ones who are against hardware RAID and call it unreliable have not been using proper RAID controllers. Proper RAID controllers with write cache are expensive, though.
    • Software RAID. As above, except it’s done via software instead (duh), hence the name. There are many ways to approach this, but personally I like ZFS - Set up multiple disks as a storage pool, and add more drives as needed. This works really well with JBOD cabinets. The downside to ZFS is that it can be quite hungry when it comes to RAM. Either way, keep in mind that RAID, software or hardware, is not a backup.
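    For the ZFS route, a minimal pool setup looks something like this (the pool name `tank` and the device names are placeholders - adjust for your actual disks, and note that mirrors are just one of several layouts):

```shell
# Create a pool from two mirrored disks (device names are examples)
zpool create tank mirror /dev/sda /dev/sdb

# Later, grow the pool by adding another mirrored pair, e.g. from a JBOD
zpool add tank mirror /dev/sdc /dev/sdd

# Check pool health and capacity
zpool status tank
zpool list tank
```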

    Source: Hardware RAID at work, software RAID at home.

    Now that we’ve got storage addressed, let’s look at specific services. The most basic use case is something like an NFS/SMB share that you can mount remotely. This allows you to archive a lot of the stuff you don’t need live. Just keep in mind, an archive is not a backup!
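    As a sketch of that basic use case, assuming an NFS server at 192.168.1.10 exporting /srv/archive (addresses and paths are made up):

```shell
# One-off mount of the NFS export onto a local mount point
sudo mount -t nfs 192.168.1.10:/srv/archive /mnt/archive

# Or make it persistent with a line in /etc/fstab:
# 192.168.1.10:/srv/archive  /mnt/archive  nfs  defaults,_netdev  0  0
```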

    And just to be clear: an archive is mainly a means of offloading chunks of data you don’t need accessible 100% of the time - for example older/completed projects, etc. An archive is well suited for storing on a large NAS, as you’ll still have access to it when needed, without spending disk space on it on your daily driver. But an archive is not a backup, I cannot state this enough!

    So, backups… well, this depends on how valuable your data is. A rule of thumb in a perfect world involves three copies: one online, one offline, and one offsite. This should keep your data safe in any reasonable contingency scenario. Which of these you implement, and how, is entirely up to you - it all comes down to a cost/benefit equation. Sometimes following the rule of thumb simply isn’t viable, for example if you have data in the petabytes. Ask me how I know.
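    A bare-bones version of the offsite copy can be as simple as a nightly rsync to a remote box over SSH (hostname and paths here are made up):

```shell
# Mirror the archive to an offsite machine; -a preserves permissions/times,
# -H preserves hardlinks, --delete propagates removals - so pair this with
# snapshots or versioning on the receiving end, or a deleted file is gone
# from both copies
rsync -aH --delete /mnt/archive/ backup@offsite.example.com:/backups/archive/
```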

    But, to circle back to your immediate need, it sounds like you can start with something simple. Your storage requirement is pretty small, and adding some sort of hosting on top of that is pretty trivial. So I’d say that, as a starting point, any PC will do - just add a couple of hard drives to make sure you have enough for the foreseeable future.


  • Back in the day I used Nagios to get an overview of large systems, and it made it very obvious if something wasn’t working and where. But that was 20 years ago, I’m sure there are more modern approaches.

    Come to think of it, at work we have grafana running, but I’m not sure exactly what scope it’s operating under.








  • neidu3@sh.itjust.works to Selfhosted@lemmy.world - “Tonight 😬” (edited, 26 days ago)

    STORY TIME!

    Once upon a time, I worked an offshore rotation. So while I was home, I didn’t have much better to do than to hang out with my friend and his coworkers. They all worked for the local branch of a huge international corporation that shall remain anonymous, so I will only refer to the corporation by their initials: IBM.

    This local branch dealt with banking systems, handling large clients in Europe, ensuring that their systems ran the way they should. And to make sure said banks could have their stuff sorted when a problem arose, there was always someone on call.

    Well, it sucks being the guy on call when the one who’s the perfect guy to fix it is off, and in the spirit of solidarity, they did the only thing reasonable: Went to a local pub, and placed the on-call phone on the table, so if it rang, the expertise to get it sorted quickly was present.

    I usually joined them, and more than once did I go for a piss, passing someone with their phone on their shoulder with a laptop in a bathroom sink, trying to sort out banking issues after having had waaay too many drinks.





  • Used/refurb SAS drives aren’t that expensive. Can someone with a better memory than mine please link to that site for second-hand server components?

    The reason SAS drives are usually more expensive isn’t that the tech itself costs more (it’s largely just a different kind of interface), but rather that “enterprise grade” hardware goes through a few additional QA steps, such as running a break-in cycle at the factory to weed out defective units.

    While a server such as the one you described is slightly power hungry, it’s not that bad. Plus, if you wanna get into servers long term, it could serve as a useful way to get used to the hardware involved.
    Server hardware is at its core not that different from consumer hardware, but it does often come with some nice and useful additions, such as:

    • Botswana drive bays (I tried to write “hotswap”, but autocorrect is probably correct)
    • IPMI/iDRAC or equivalent for headless management
    • Dual PSUs
    • Rack mount capability
    • Easy maintenance access to most hardware
    • A ridiculous amount of sensors with automated warnings.
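    To give an idea of the headless-management point above, the BMC can be poked from any Linux box with ipmitool (the BMC address and credentials below are placeholders):

```shell
# Query sensor readings and power state on a remote BMC over the network
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret sdr list
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power status

# Power-cycle a hung machine without touching the front panel
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power cycle
```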

    RAID is entirely optional. I seem to be the only one in here who actually likes hardware RAID, as software RAID is more popular in the self-hosting community. Whether you use it depends on your use case. If you want to live without, use JBOD mode and access each drive normally. Alternatively, pool as many disks as you want into RAID6 and you have one large storage device with built-in redundancy. RAIDs can be managed either from the BIOS, or from the OS using tools such as storcli.
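    For the hardware-RAID path, storcli commands look roughly like this (the controller, enclosure, and slot numbers are examples - check the `show` output for your own topology):

```shell
# Show controller 0 with its drives and existing virtual drives
storcli /c0 show

# Create a RAID6 virtual drive from eight drives in enclosure 32, slots 0-7
storcli /c0 add vd type=raid6 drives=32:0-7

# Check the state of all virtual drives (init/rebuild progress etc.)
storcli /c0/vall show
```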


  • Some VLAN-related nuggets that you may find useful for your post/blog:

    • 99% of the time when people refer to VLANs, they’re talking about 802.1Q (tag-based VLANs). There are others (such as port-based), so it’s up to you whether you want to cover those as well.
    • The word “trunk” can mean different things depending on vendor. In the Cisco world, it means a line/port carrying multiple VLANs. With many other vendors, such as Aruba/HPE, it refers to link aggregation, which isn’t necessarily relevant to VLANs.
    • A lot of hardware still uses VLANs even if none have been configured. For example, defaulting all switch ports to an untagged/access VLAN of 1 makes the switch behave like a dumb switch. This can cause issues later if you’re configuring VLANs elsewhere.
    • Anything non-VLAN-aware connected to a VLAN-enabled switch will have to be connected to a port with a default VLAN tag. This is usually referred to as an “access port” or an “untagged port”.
    • “How do I configure the switch to allow units on VLAN 123 to talk to VLAN 321?”. You don’t. Connect both VLANs to a router which will route between them. Either connect the router to both VLANs individually and skip the tagging on the router, or you can run a single trunk between the switch and the router which carries both VLANs. The latter requires you to configure VLANs on your router accordingly.
    • In many cases it makes sense to match the VLAN ID to the subnet, e.g. VLAN 123 for 192.168.123.0/24. Makes it easier to keep track of.
    • A PC can implement VLANs on its network port, allowing you to connect to a trunk port and access several VLANs with one cable.
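    That last bullet, on Linux, is just a couple of ip commands (interface name, VLAN ID, and address are examples):

```shell
# Create a tagged subinterface for VLAN 123 on eth0, give it an address,
# and bring it up - the physical port must be on a trunk/tagged switch port
ip link add link eth0 name eth0.123 type vlan id 123
ip addr add 192.168.123.2/24 dev eth0.123
ip link set eth0.123 up
```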

    Source: VLANs have been an integral part of my career for 20ish years.


  • If it works on Mint, it’ll most likely work on Debian, with the caveat that Debian involves a lot more CLI and a lot less handholding. Depending on your setup, Debian might be a better choice for you, as Mint is desktop-oriented.

    But don’t fix something that already works. If there are no issues with your Mint setup, I’d say keep it. Next time you set up a server, you can go for Debian instead.

    Source: I use both extensively. Mint on desktop, Debian on headless stuff.