

I don’t have experience with hosting lemmy specifically, but from what I hear it doesn’t require much other than being a bit RAM-hungry. Add some swap space, use the instance primarily for yourself, and you should be good.
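Adding swap is quick if you haven't done it before; a minimal sketch, assuming a Debian-ish system and an arbitrarily chosen 2 GB size:

```
# Run as root. The 2G size is an arbitrary example -- adjust to taste.
fallocate -l 2G /swapfile
chmod 600 /swapfile        # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

Check the result afterwards with `free -h`.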
Oh no, you!


Hosting does not attract CSAM on its own. Anonymous uploads do. Only host services that you find useful yourself, maybe share them with friends, and that’s a reasonably safe start.


Does sshd count?
Beyond the “default” stuff, I always seem to end up with a setup that involves linux + apache + mod_perl + postgresql for various purposes. And by the way, that’s the only proper LAMP stack in my book, and I will die on this hill.


Normally it doesn’t matter. The only restrictions are on who can buy that country’s domains to begin with (some countries limit that), and what sort of content is allowed on such domains. Other than that, it’s OK.


I’d say that a good starting point would be the smallest setup that serves a useful purpose. This is usually some sort of network storage, and it sounds like this might be a good starting point for you as well. Then you can add on and refine your setup however you see fit, provided your hardware is up to it.
Speaking of hardware, while it’s certainly possible to go all out with a rack-mounted, purpose-built 19" 4U server full of disks, the truth is that “any” machine will do. Servers generally don’t require much (depending on use case, of course), and you can get away with a 2nd hand regular desktop machine. The only caveat here is that for your (perceived) use cases, you might want the ability to add a bunch of disks, so for now, just go for a simple setup with as many disks as you see fit, and then you can expand with a JBOD cabinet later.
Tying this storage together depends on your tastes, but it generally comes down to two schools of thought, both of which are valid: hardware RAID, where a dedicated controller handles the redundancy, and software RAID, where the OS does.
Source: Hardware RAID at work, software RAID at home.
Now that we’ve got storage addressed, let’s look at specific services. The most basic use case is something like an NFS/SMB share that you can mount remotely. This allows you to archive a lot of the stuff you don’t need live. Just keep in mind, an archive is not a backup!
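As a rough sketch of such a share (paths, hostname, and subnet below are all made-up examples), an NFS export on the NAS plus a client-side mount looks something like:

```
# On the NAS, in /etc/exports (then run `exportfs -ra`):
/srv/archive  192.168.1.0/24(rw,sync,no_subtree_check)

# On the client, a one-off mount:
mount -t nfs nas.local:/srv/archive /mnt/archive

# Or permanently, in the client's /etc/fstab:
nas.local:/srv/archive  /mnt/archive  nfs  defaults,_netdev  0  0
```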
And just to be clear: An archive is mainly a manner of offloading chunks of data you don’t need accessible 100% of the time. For example older/completed projects, etc. An archive is well suited for storing on a large NAS, as you’ll still have access to it if needed, but it’s not something you need to spend disk space on on your daily driver. But an archive is not a backup, I cannot state this enough!
So, backups… well, this depends on how valuable your data is. A rule of thumb in a perfect world involves three copies: one online, one offline, and one offsite. This should keep your data safe in any reasonable contingency scenario. Which of these you implement, and how, is entirely up to you. It all comes down to a cost/benefit equation. Sometimes keeping to the rule of thumb is simply not viable, e.g. if you have data in the petabytes. Ask me how I know.
But, to circle back on your immediate need, it sounds like you can start with something simple. Your storage requirement is pretty small, and adding some sort of hosting on top of that is pretty trivial. So I’d say that, as a starting point, any PC will do - just add a couple of hard drives to make sure you have enough for the foreseeable future.
Back in the day I used Nagios to get an overview of large systems, and it made it very obvious if something wasn’t working and where. But that was 20 years ago, I’m sure there are more modern approaches.
Come to think of it, at work we have grafana running, but I’m not sure exactly what scope it’s operating under.
Barring any hardware issues or external factors, will it run for 10000 years? Any logs not properly rotated? Any other outputs accumulating and eventually filling up a filesystem?
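For the log-rotation part, a minimal logrotate drop-in is usually all it takes (the service name and paths here are hypothetical):

```
# /etc/logrotate.d/myservice -- "myservice" is a made-up example
/var/log/myservice/*.log {
    weekly
    rotate 8          # keep two months of weekly logs
    compress
    delaycompress
    missingok
    notifempty
}
```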
Sounds more like what you need is a combination of a VPN and RDP. Have your machines connect out to somewhere via whichever VPN protocol you prefer, and then access them over RDP (or whatever remote desktop protocol you like).
I’m old and crusty, so I mostly use openvpn, but wireguard will probably do as well.
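For reference, a WireGuard client config is pretty compact; something along these lines, where every key, address, and hostname is a placeholder:

```
# /etc/wireguard/wg0.conf -- bring up with `wg-quick up wg0`
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820   # placeholder hostname and port
AllowedIPs = 10.8.0.0/24           # route only VPN-internal traffic
PersistentKeepalive = 25           # keeps NAT mappings alive
```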


Debian on home servers, CentOS on work servers, and Mint on desktops


In this day and age, shouldn’t Huntarr be replaced by Gatherarr? You know, sustainability and all…


A jumpbox. Set up a VPS somewhere, have some remote hands at home set up a VPN client to connect to the VPS, and then you connect to the VPS as well.
Alternatively, is it possible that your ISP can remote config your router and set up the port forwarding again for you?
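If SSH access is all you need, the jumpbox pattern can also be expressed directly in ~/.ssh/config (host names and addresses below are made up):

```
# ~/.ssh/config
Host jumpbox
    HostName vps.example.com   # the rented VPS
    User me

Host homebox
    HostName 10.8.0.10         # home machine's VPN-internal address
    User me
    ProxyJump jumpbox          # hop via the VPS automatically
```

After that, `ssh homebox` transparently routes through the VPS.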
You know what’s never happened? Someone coming home way too late after too much to drink, stumbling into the kitchen going “I could really go for an apple right now!”
STORY TIME!
Once upon a time, I worked an offshore rotation. So while I was home, I didn’t have much better to do than to hang out with my friend and his coworkers. They all worked for the local branch of a huge international corporation that shall remain anonymous, so I will only refer to the corporation by their initials: IBM.
This local branch dealt with banking systems, handling large clients in Europe, ensuring that their systems ran the way they should. And to make sure said banks could have their stuff sorted when a problem arose, there was always someone on call.
Well, it sucks being the guy on call when the one who’s the perfect guy to fix it is off, and in the spirit of solidarity, they did the only thing reasonable: Went to a local pub, and placed the on-call phone on the table, so if it rang, the expertise to get it sorted quickly was present.
I usually joined them, and more than once did I go for a piss, passing someone with their phone on their shoulder with a laptop in a bathroom sink, trying to sort out banking issues after having had waaay too many drinks.
When I come home from a humid bar crawl, I always crave greasy food and a messy kernel upgrade


Running arbitrary text from the internet through an interpreter… what could possibly go wrong.
I need to set up a website with
fork while 1
…Just so I can (try to) convince people to
curl | perl
it
…rhyme intended.


If you’re going for software RAID, I recommend taking it a step further and going for ZFS: set up correctly, you get all the advantages of RAID6 while remaining very flexible.
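The RAID6-equivalent layout in ZFS is raidz2; a sketch (device names below are placeholders - in practice prefer /dev/disk/by-id paths so the pool survives drives being re-enumerated):

```
# Six-disk raidz2 pool: any two disks can fail, like RAID6.
zpool create tank raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zfs create tank/archive    # datasets instead of partitions
zpool status tank          # verify layout and health
```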


Used/refurb SAS drives aren’t that expensive. Can someone with better memory than I please link to that site for second hand server components?
The reason why SAS drives are usually more expensive isn’t that the tech itself costs more (it’s largely just a different kind of interface), but that “enterprise grade” hardware goes through a few additional QA steps, such as a break-in cycle at the factory to weed out defective units.
While a server such as the one you described is slightly power hungry, it’s not that bad. Plus, if you wanna get into servers long term, it could serve as a useful way to get used to the hardware involved.
Server hardware is at its core not that different from consumer hardware, but it does often come with some nice and useful additions, such as:
RAID is entirely optional. I seem to be the only one in here who actually likes hardware RAID, as software RAID is more popular in the self hosting community. Whether you use it at all depends on your use case, though. If you wanna live without it, use JBOD mode and access each drive normally. Alternatively, pool as many disks as you want into RAID6 and you have one large storage device with built-in redundancy. RAIDs can be managed either from the BIOS, or from the OS using tools such as storcli.
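The storcli route looks roughly like this - note that storcli syntax varies between versions and the controller/enclosure/slot IDs below are pure placeholders, so treat this as a direction rather than exact commands:

```
storcli /c0 show                 # controller overview
storcli /c0/eall/sall show       # list physical drives
# Create a RAID6 virtual drive from slots 0-7 in enclosure 252 (example IDs):
storcli /c0 add vd type=raid6 drives=252:0-7
```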


Some VLAN-related nuggets that you may find useful for your post/blog:
Source: VLANs have been an integral part of my career for 20ish years.
If it works on Mint, it’ll most likely work on Debian, with the caveat that Debian is a lot more CLI and a lot less handholding. Depending on your setup, Debian might be a better choice for you, as Mint is desktop-oriented.
But don’t fix something that already works. If there are no issues with your Mint setup, I’d say keep it. Next time you set up a server, you can go for Debian instead.
Source: I use both extensively. Mint on desktop, Debian on headless stuff.
Same. Got some leftover Fortinet from work that I’m using. Could be better, but my Fortigate 101E works miles better than my ISP default router. All I had to do was assign upstream wan to VLAN 10 and spoof the MAC address.
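For anyone doing the same on a plain Linux box instead of a Fortigate, the equivalent moves (interface name and MAC address are placeholders) look like:

```
# Tag WAN traffic as VLAN 10 and spoof the MAC the ISP expects.
ip link add link eth0 name eth0.10 type vlan id 10
ip link set dev eth0.10 address 00:11:22:33:44:55
ip link set dev eth0.10 up
```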