Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 131 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • I get about 350-400 Mbps both ways, which AFAIK is about what my UniFi AC-Lite tops out at since it’s WiFi 5 with only 2 antennas and 80 MHz channels at most. I get about 200-250 on my phone (1+8T), which I think is single stream.

    Everything indicates to me that’s about as good as it can get with the hardware I have. Signal is solid, latency is solid.

    You’ll need 802.11ax, more MIMO streams, and/or 160/320 MHz channels to get higher speeds.
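
    For reference, the rough math (assuming standard 802.11ac link rates, so treat it as a ballpark): 2 streams at 80 MHz is about 2 × 433 ≈ 866 Mbps of link rate, and real-world throughput tends to land around half of that, which matches the 350-400. A single stream is ~433 Mbps of link rate, hence the 200-250 on the phone.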






  • If you want FRP, why not just install FRP? From the looks of it, it even has a LuCI app to control it.

    OpenWRT page showing the availability of FRP as an app

    NGINX is also available, at a mere 1kb in size for the slim version; the full version is available too, as is HAProxy. Those will have you more than covered, and they support SSL.

    Looks like there’s also acme.sh support, with a matching LuCI app that can handle your SSL certificate situation as well.
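
    If it helps, installing those is typically just opkg; something like this (package names from memory, double-check against the package list):

      opkg update
      opkg install frpc luci-app-frpc     # FRP client + LuCI page (frps/luci-app-frps for the server side)
      opkg install nginx-ssl              # or: opkg install haproxy
      opkg install acme luci-app-acme     # acme.sh integration for certificates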


  • The concern for the specific disk technology is usually around the use case. For example, surveillance drives are expected to be written to continuously 24/7, but not at crazy high speeds, and maybe with slow seek times or whatever. Gaming drives I would assume are disposable and just good value for storage size, since you can just redownload your Steam games. A NAS drive will be a little more expensive because it’s assumed to be for backups and data storage.

    That said, in all cases, if you use them with proper redundancy like RAIDZ or RAID1 (bleh), it’s kind of whatever: you just replace them as they die. They’ll all do the job, just not with quite the same performance profile.

    Things you can check are seek times/latency, throughput for both sequential and random access, and estimated lifespan.

    I keep hearing good things about decommissioned HGST enterprise drives on eBay, they’re really cheap.
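
    For checking drives (especially used/decommissioned ones), smartctl covers most of it; a quick sketch, with /dev/sdX as a placeholder:

      smartctl -a /dev/sdX        # full report: look at Power_On_Hours, Reallocated_Sector_Ct, Current_Pending_Sector
      smartctl -t long /dev/sdX   # start a long self-test, then check the result later with -a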



  • It could be a disk slowly failing but not throwing errors yet. Some drives really do their best to hide that they’re failing, so I’d take even a passing SMART test with a grain of salt.

    I would start by making sure you have good recent backups ASAP.

    You can test the drive’s performance by shutting down all the VMs and using tools like fio to do some disk benchmarking. It could also be a VM causing it: on an HDD in particular, random reads and writes from VMs can make seek latency shoot way up. It could be as simple as a service logging warnings due to junk incoming traffic, or an update that added some more info logs, etc.
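
    Something like this should show whether random I/O latency is sane once the VMs are stopped (flags from memory, path and size are placeholders):

      fio --name=vmtest --filename=/path/on/that/disk/fio.test --size=4G \
          --rw=randrw --bs=4k --iodepth=16 --direct=1 \
          --runtime=60 --time_based --group_reporting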


  • There’s always the command escape hatch. Ultimately the roles you’ll use will probably do the same, and even a plugin would too: all the ZFS tooling eventually shells out to the zfs/zpool commands, and it’s probably the same with btrfs. Those are just very complex filesystems; it would be unreliable to reimplement them in Python.

    We use tools to solve problems, not to make things harder for no reason. That’s why command/shell actions exist: sometimes it’s just better to go that way.

    You can always make your own plugin for it, but you’d still just be writing extra code that eventually shells out to the same commands and parses their output.
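
    In the end, whether it’s a role, a module or an ansible.builtin.command task, what actually runs on the box is something like this (pool and disk names are made up):

      zpool create -f tank mirror /dev/sda /dev/sdb   # what any ZFS "plugin" ultimately shells out to
      zfs set compression=lz4 tank
      zpool status -x                                 # and this is the kind of output it parses for state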



  • Very minimal. Mostly just run updates every now and then and fix what breaks, which is relatively rare. The Docker stacks in particular are quite painless.

    Couple websites, Lemmy, Matrix, a whole email stack, DNS, IRC bouncer, NextCloud, WireGuard, Jitsi, a Minecraft server and I believe that’s about it?

    I’m a DevOps engineer at work, managing 2k+ VMs, and I can more than keep up with them. I’d say it varies more with experience and how it’s set up than with how much you manage. When you use Ansible and Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers, it matters very little since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set up most of that stuff 10 years ago and it’s still happily humming along just fine.


  • You probably need the server to do relatively aggressive keepalives to keep the connection alive. You go through CGNAT, so if the server doesn’t talk over the VPN for, say, 30 seconds, the NAT may drop the mapping and now it’s gone. WireGuard doesn’t send any packets unless it’s actively talking to the other peer, so you need to enable keepalives so it sends something often enough that the mapping doesn’t drop, and if it does, the tunnel comes back up quickly (see the sketch below).

    Also make sure, if you don’t NAT the VPN, that everything has a route back to the VPN. If 192.168.1.34 (main location) talks to 192.168.2.69 (remote location) over a VPN on 192.168.3.0/24, without NAT, both ends need to know to route it through the VPN network. Your PiVPN probably does NAT, so it works one way but not the other. A traceroute from both ends should give you some insight.

    That should absolutely work otherwise.
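
    Concretely, on the peer that sits behind the CGNAT that looks something like this (interface and key are placeholders):

      # either in that peer's [Peer] section of the WireGuard config:
      #   PersistentKeepalive = 25
      # or at runtime:
      wg set wg0 peer <other-side-public-key> persistent-keepalive 25

      # and for the routing part, from both ends:
      ip route get 192.168.2.69   # should go via the VPN interface, not the default gateway
      traceroute 192.168.2.69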


  • For the backup scenario in particular, it makes sense to pipe them through right to the destination. Like, tar -zcv somefiles | ssh $homeserver dd of=backup.tar.gz, or mysqldump | gzip -c | ssh $homeserver dd of=backup.sql.gz. Since it’s basically a download from your home server’s perspective it should be pretty fast, and you don’t need temporary space at all on the VPS.

    File caching might be a little tricky. You might be best off self-hosting some kind of object storage and putting Varnish/NGINX/dedicated caching proxy software in front of it on your VPS, so it can cache the responses but will ultimately forward to the home server over the VPN when it doesn’t have something cached.

    If you use NextCloud for your photos and videos and stuff, it can use object storage instead of local filesystem, so it would work with that kind of setup.
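
    Written out a bit more fully, the backup pipes could look like this (user, host and paths are placeholders, and it assumes a backups/ directory already exists on the home server):

      tar -zcf - /var/www /etc | ssh backup@homeserver "dd of=backups/vps-files-$(date +%F).tar.gz"
      mysqldump --all-databases | gzip -c | ssh backup@homeserver "dd of=backups/vps-db-$(date +%F).sql.gz"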




  • Depends what it does.

    Let’s say you run a Reddit/Twitter/YouTube proxy. Yeah, the services ultimately still get your server’s IP, but you just appear to be coming from some datacenter somewhere, so while they can know it’s your traffic, they can’t track you on the client-side frontend and see that you were at home (and where your home is), then you went on mobile data, then ended up on a guest WiFi, then at some corporate place. The server is obfuscating all of that. And you control the server, so your server isn’t tracking anything.

    The key to those services being more private is actually to have more people using them. Let’s say you now have 10 people using your Invidious instance. It’ll fudge your watch pattern a fair bit, but also any watched video could be from any of the 10 users. If they don’t detect that, they’ve built a completely bogus profile that’s the combination of all 10 users.

    You can always add an extra layer and make it go through a VPN or Tor, but if you care that much you should already always be on a VPN anyway. But it does have the convenience that you can use it privately even without a VPN.


    A concrete example: I run my own Lemmy server. It’s extremely public, and yet I find it more private than Reddit would be. By having my own server, all of my client-side actions are between me and my server. Reddit on the other hand can absolutely log and see every interaction I have with their site, especially now that they’ve killed third-party apps. It knows every thread I open, it can track a lot of my attention. It knows if I’m skimming through comments or actually reading, everything. In contrast, the fediverse doesn’t know what I actually read: my server collects everything regardless. On the other hand, all my data including votes is totally public, so I gain privacy in one way but lose some the other way.

    Privacy is a tradeoff. Sometimes you’re willing to give away some information to protect other information.


    For selfhosting as a whole, sure, some things are just frontends and don’t give you much, like an Invidious instance, but others can be really good. With NextCloud for example, I know my files are entirely in my control and I get a similar experience to using Google Drive: I can browse my stuff from anywhere and access my files. I have my own email, so nobody can look at my emails and serve me ads based on what newsletters I get.

    It doesn’t have to be perfect: if it’s an improvement and gets you into selfhosting more stuff down the line, it’s worth it.


  • Max-P@lemmy.max-p.me to Selfhosted@lemmy.world · Lemmy API? · 3 months ago

    https://join-lemmy.org/docs/contributors/04-api.html

    Lemmy is the API, it’s always there. The web UI is just a client like any other and makes use of the Lemmy API. So you can just call the API to register an account, reset a password, log in, everything. You don’t need to register tokens or apps; you just log into your account, get a session token and you’re good to go!

    That makes it easy to discover the API as well, since you can just open your browser’s devtools and inspect the network requests. It’s the same API, so you can just go ahead and make the same calls in your code. No second-class clients for Lemmy; they all use the same public API.

    Plus of course it also implements the ActivityPub APIs for federation, which also don’t require registration or anything special.
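
    For example with curl, against the v3 API (instance URL and credentials are placeholders; newer Lemmy versions take the JWT as a bearer token):

      curl -s -X POST https://lemmy.example.com/api/v3/user/login \
           -H 'Content-Type: application/json' \
           -d '{"username_or_email": "someuser", "password": "hunter2"}'
      # returns {"jwt": "..."}; use that token on subsequent requests:
      curl -s -H "Authorization: Bearer $JWT" \
           'https://lemmy.example.com/api/v3/post/list?sort=New&limit=5'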


  • Do you have spare drives to test? Can be really small or mismatched, it’s just for testing.

    The idea is as follows: make the exact same RAID with the old controller on test drives, then put them in the target controller with hopefully the same settings and see if it’s happy. Make sure to have some large files with known checksums on it, just to test if the data is correct and not corrupted in subtle ways.

    If it works, then it should work with the real drives. If it doesn’t, good luck.

    Also, RAID 1 with 6 drives doesn’t really make sense. RAID 1 would be mirrors, and if your data had 6 copies I think you’d care way too much about it to even consider doing this. It’s probably RAID 5/6/10, which adds parity and striping to the mix and significantly increases the chances of incompatibility.
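
    For the “known checksums” part, something as simple as this works (paths are just examples):

      # on the old controller, before the swap:
      dd if=/dev/urandom of=/mnt/raid/bigfile bs=1M count=2048
      sha256sum /mnt/raid/bigfile > /root/raid-checksums.txt
      # after moving the test drives to the new controller:
      sha256sum -c /root/raid-checksums.txt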