iptables -I APPEALS -j DROP
“I hereby sentence you to two years on your own VLAN with no gateway”
Jabra still exists, yes. I’m still using Jabra, although I’m using a pair that I bought after I thought one earbud was gone forever. I still use the older ones, the Jabra Elite 4, but only with my PC, as their battery took a hit after those 6 months at sea. I currently main the Jabra Elite 7 Active (or something like that), and I quite like them. I’ve noticed that the cover doesn’t stay very attached after a few proper cleans, but nothing a drop of glue doesn’t fix. What I really like about the ones I currently use is that they’re supposedly built to withstand sweat while training. I don’t work out, but it would seem that those who do sweat A LOT, as I can wear mine while showering without any issues.
As for resilvering, the RAIDs are each only a small fraction of the complete storage cluster. I don’t remember the exact sizes, but each RAID volume is 12 drives of 10TB each. Each machine has three of these volumes, and four machines in total contribute all of their RAID volumes to the storage cluster, for 1.2PB of redundant storage (although I’m tempted to drop the BeeGFS redundancy, as we could use the extra space, and it’s usually fairly hassle-free to swap in a new server and move the drives over).
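For what it’s worth, the arithmetic (assuming RAID6 with two parity drives per volume, which is what the rest of the cluster uses) works out to roughly:
(12 drives - 2 parity) * 10TB = 100TB usable per volume
100TB * 3 volumes * 4 machines = 1.2PB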
EDIT: I just realized that I have a Jabra conference call speaker attached to the laptop I’m currently typing on. I mostly use it for Discord while playing Project Zomboid with my friends, though. I run audio output elsewhere, as the Jabra is mono only.
Story time!
In this one production cluster at work (1.2PB across four machines, 36 drives per machine), everything was RAID6, except ONE single volume on one of the machines that was incorrectly set up as RAID5. It wasn’t that worrisome, as the data was also stored with redundancy across the machines in the storage cluster itself (a nice feature of BeeGFS), but it annoyed the fuck out of me for the longest time.
There was some other minor deferred maintenance as well which necessitated a complete wipe, but there was no real opportunity to do this and rebuild that particular RAID volume properly until last spring, before the system was shipped off to Singapore to be mobilized for a survey. I planned on getting it done before the system was shipped, so I backed up what little remained after almost clearing it all out, nuked the cluster, disassembled the RAID5, and then started setting everything up from scratch. Piece of cake, right?
shit
That’s when I learned how much time it actually takes to rebuild a volume of 12 disks, 10TB each. I let it run as long as I could before it had to be packed up. After half a year of slow shipping it finally arrived on the other side of the planet, so I booked my plane ticket and showed up a week before anyone else just so I could connect power and continue the reraiding before the rest of the crew arrived. Basically pushing a few buttons, followed by a week of sitting at various cafes drinking beer. Once the reraid was done, reclustering took less than an hour, and restoring the folder structure backup was a few hours on top of that. Not the worst work trip I’ve had, except for some unexpected and unrelated hardware failures, but that’s a story for another day.
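For anyone curious what the reraid actually involves: it’s mostly just kicking off the array creation and then waiting out the initial sync. A rough sketch with Linux software RAID (mdadm), with made-up device names, and not necessarily exactly what we ran:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sd[b-m]
# create the 12-disk RAID6 array; the initial sync is the part that takes days
mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]
# check progress now and then
cat /proc/mdstat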
Fun fact: While preparing the system for shipment here in Europe, I lost one of my Jabra Bluetooth buds. I searched fucking everywhere for hours, but gave up on finding it. I found it half a year later in Singapore, on top of the server rack, surprised it hadn’t even rolled off. It really speaks to how little these huge container ships roll.
Seconding this. For starters, when tempted to go for RAID5, go for RAID6 instead. I’ve had a drive fail in RAID5, and then had a second drive fail during the increased I/O of rebuilding onto the replacement.
And yes, setting up RAID wipes the drives. Is the data private? If not, a friendly datahoarder might help you out with temporary storage.
The issue with diagnosing memory problems is that there’s usually no memory left available to log the problem when it actually happens.
I’ve found that the easiest approach is to set up a file as additional swap space and swapon it, then see if the problem disappears, either partially or fully.
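A minimal sketch, assuming a 4G swap file at /swapfile (size and path are arbitrary):
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# confirm it's active, then keep an eye on whether it actually gets used
swapon --show
free -h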
Personally I’d just upgrade to RAIDZ2, and add as many disks to that as reasonably practical. To be honest, I fail to see any downsides to using four disks for this other than the storage inefficiency.
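If it helps, a four-disk RAIDZ2 pool is basically a one-liner (sketch only; pool name and device paths are placeholders, and you’d normally use /dev/disk/by-id paths):
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zpool status tank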
rsync?
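Something along these lines, with placeholder paths, and a dry run first to see what it would do:
rsync -avh --progress --dry-run /data/ user@backuphost:/backup/data/
rsync -avh --progress /data/ user@backuphost:/backup/data/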
You’ll probably be fine with Hetzner. If not, you can cancel whenever.
I have moved over to in-house hosting, but I exclusively used Hetzner for years, a lot of it via the auction.
If you don’t need something very specific, the auction is a great way to spin up something cheaply.
I’m not that picky in terms of routers, as long as it is rack-mounted. I happen to use a Fortigate 101E that was no longer needed at work.
While it does support VLANs, I don’t do that on the router, as Fortigate can be a bit of a pain in the ass when it comes to VLAN tagging. Instead I have dedicated ports for the various networks I serve, each of which connects to the same switch.
On the switch, each of those uplinks is configured as an access (untagged) port for the VLAN it represents. The remaining ports can then be tagged as I please. A few extra patch cables, but only dealing with VLAN tags on the Aruba makes it so much better.
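Roughly what that looks like on the switch side, in ProCurve/ArubaOS-Switch style syntax with made-up VLAN IDs and port numbers (not my actual config), where ports 1 and 2 are the dedicated uplinks from the Fortigate and port 24 is a tagged trunk:
vlan 10
   name "lan"
   untagged 1
   tagged 24
   exit
vlan 20
   name "iot"
   untagged 2
   tagged 24
   exit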
As for PoE, that’s best done on a switch. My Aruba powers all of my access points this way.
I’m not very well versed in Docker, but this sounds like a config issue. The behavior seems similar to the “root squash” found in NFS and many other services.
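For comparison, the classic NFS version of the idea is the root_squash export option, which remaps requests from a remote root user to an unprivileged one (illustrative export line, not your setup):
/srv/share 192.168.1.0/24(rw,root_squash)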
Not necessarily. It can be, but it all depends on which nodes you get when you connect. If I end up on slow nodes I usually just reconnect, and it’s fine.