I’m the administrator of kbin.life, a general purpose/tech orientated kbin instance.

  • 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • I think people’s experience with PLE (powerline Ethernet) will always be subjective. In the old flat we were in, where I actually needed it, it would drop the connection all the time; it was unusable.

    But I’ve had them run totally fine in other places. Noisy power supplies that aren’t even in your place can cause problems. Any kind of impulse noise (bad contacts on an old style thermostat for example) and all kinds of other things can and will interfere with it.

    Wifi is always a compromise too. But, I guess if wiring direct is not an option, the OP needs to choose their compromise.




  • Well, I run an NTP stratum 1 server handling 2800 requests a second on average (3.6Mbit/s total average traffic), plus a Flightradar24 reporting station and some other rarely used services.

    The fan only comes on during boot; I’ve never heard it during normal operation. The load average sits around 0.3-0.5, most of that from FR24. Chrony usually takes <5% of a single core.

    It’s pretty capable.
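
    In case anyone wants to sanity-check the numbers, here’s a rough back-of-the-envelope sketch in Python. The ~76 bytes per packet is my assumption (48-byte NTPv4 payload plus UDP and IPv4 headers); link-layer overhead would push it a little higher.

    ```python
    # Rough sanity check of the quoted figures (my assumptions, not exact).
    NTP_PACKET_BYTES = 48 + 8 + 20      # NTPv4 payload + UDP header + IPv4 header
    REQUESTS_PER_SECOND = 2800          # average rate quoted above

    # Each request is one packet in and one packet out.
    packets_per_second = REQUESTS_PER_SECOND * 2
    bits_per_second = packets_per_second * NTP_PACKET_BYTES * 8

    print(f"~{bits_per_second / 1e6:.1f} Mbit/s")   # ~3.4 Mbit/s, near the quoted 3.6
    ```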



  • But isn’t that the point? You pay a low fee for inconvenient access to storage in the hope that you never need it. If you have a drive failure, you’d likely want to restore everything, in which case the bulk restore pricing isn’t terrible, and the other option is losing your data.

    I guess the question of whether this is a service for you is how often you expect a NAS (that likely has redundancy) to fail, be stolen, be destroyed, etc. I would expect that to happen less often than once every 5 years. If the price to store 12TB for 5 years and then restore 12TB at the end of it is less than the storage on other providers, then that’s a win, right? The bigger thing to consider is whether you’re happy to wait for the data to become available, but for a backup of data you want back and can afford to wait for, it’s probably still good value. Using the 12TB example:

    Backblaze, simple cost: $6 × 12TB = $72/month, which over a 5-year period would be $4320. Depending on how the upload goes, fees on the number of operations during backup and restore might push that up a bit, but not by any noticeable amount, I think.

    For Amazon Glacier I priced up (I think correctly; their pricing is overly complicated) two modes: flexible access and deep archive. The latter is probably suitable for a NAS backup, although of course you can only really add to it, not easily remove or adjust files, so over time your total stored would likely exceed the amount you actually want to keep. Some complex “diff” techniques could probably be utilised here to minimise this waste.

    Deep archive
    12288 PUT requests @ $0.05 each = $614.40
    Storage: 12288 GB @ $12.17/month × 60 months = $729.91
    12288 GET requests @ $0.0004 each = $4.92
    Retrieval: 12288 GB @ $0.0025/GB = $30.72 (if bulk retrieval is possible)
    Retrieval: 12288 GB @ $0.02/GB = $245.76 (if bulk is not possible)

    Total: $1379.95 / $1594.99

    Flexible
    12288 PUT requests @ $0.03 each = $368.64
    Storage: 12288 GB @ $44.24/month × 60 months = $2654.21
    12288 GET requests @ $0.0004 each = $4.92
    Retrieval: 12288 GB @ $0.01/GB = $122.88

    Total: $3150.65

    In my mind, if you just want to push large files from a high-capacity NAS somewhere they can be restored on some rainy day in the future, deep archive can work for you. I do wonder, though, if they’re storing this stuff offline on tape or something similar, how they bring back all your data at once. But that seems to me to be their problem and not the user’s.

    Do let me know if I got any of the above wrong. This is just based on the tables on the S3 pricing site.
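
    For what it’s worth, here’s the same arithmetic as a quick Python script, using the per-unit rates above (which I haven’t re-checked against the current Backblaze/AWS price lists, so treat them as assumptions):

    ```python
    # Rough cost comparison for backing up 12 TB (12288 GB) over 5 years, using
    # the per-unit rates assumed in the comment above (not re-checked against
    # current Backblaze/AWS price lists).

    SIZE_GB = 12 * 1024      # 12 TB as 12288 GB
    MONTHS = 5 * 12          # 5-year period
    FILES = SIZE_GB          # assume one object per GB, so one PUT/GET each

    def backblaze() -> float:
        # Simple per-GB storage price ($6/TB/month), ignoring operation fees.
        return 6 / 1024 * SIZE_GB * MONTHS              # = $4320

    def glacier_deep_archive(bulk_retrieval: bool) -> float:
        puts = FILES * 0.05                              # PUT requests
        storage = SIZE_GB * 0.00099 * MONTHS             # ~$12.17/month
        gets = FILES * 0.0004                            # GET requests
        retrieval = SIZE_GB * (0.0025 if bulk_retrieval else 0.02)
        return puts + storage + gets + retrieval         # ~$1380 / ~$1595

    def glacier_flexible() -> float:
        puts = FILES * 0.03
        storage = SIZE_GB * 0.0036 * MONTHS              # ~$44.24/month
        gets = FILES * 0.0004
        retrieval = SIZE_GB * 0.01
        return puts + storage + gets + retrieval         # ~$3151

    for name, cost in [("Backblaze", backblaze()),
                       ("Deep archive (bulk restore)", glacier_deep_archive(True)),
                       ("Deep archive (standard restore)", glacier_deep_archive(False)),
                       ("Flexible", glacier_flexible())]:
        print(f"{name:32s} ${cost:,.2f}")
    ```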


  • So there are three problems you’re very likely to encounter.

    1. Most providers now almost certainly filter their egress so that only source addresses from netblocks under their control can leave (to prevent IP spoofing), so it’s likely the packets would never make it out at all.

    2. If they did make it out, the return path would be over the normal Internet route and not via the VPN; only the sent packets would go via the VPN host.

    3. If the client is behind NAT, the router will not recognise the response packets as belonging to an open connection and will drop them.
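
    As a toy illustration of point 3 (not real conntrack code, just the gist): a stateful router only lets inbound packets through if they match a connection it has already seen going out.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        src_ip: str
        src_port: int
        dst_ip: str
        dst_port: int

    class StatefulRouter:
        def __init__(self):
            self.connections = set()   # flows seen leaving via this router

        def outbound(self, flow):
            # Outgoing packet: remember the flow so replies are allowed back in.
            self.connections.add(flow)

        def inbound(self, flow):
            # A reply is accepted only if it is the exact reverse of a tracked flow.
            reverse = Flow(flow.dst_ip, flow.dst_port, flow.src_ip, flow.src_port)
            return reverse in self.connections

    router = StatefulRouter()

    # The client sent its packet out via the VPN, so this router never saw it.
    # The server's reply comes back over the normal Internet path instead:
    reply = Flow("203.0.113.10", 443, "192.168.1.50", 50000)
    print(router.inbound(reply))   # False -> the router drops the reply
    ```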

    I’m really not sure what your intention is.





  • Well, mobile data is very different. With fibre optic you can generally keep provisioning more cables, and a single cable already carries a huge amount.

    Radio has an absolute efficiency limit for a given signal bandwidth (the Shannon limit), and we’re pretty damn close to that now (rough worked example below).

    5G uses wider bandwidth channels, with more cells packed closer together, and uses things like beamforming. But there’s still always going to be an upper limit that is considerably lower than fibre.

    This is likely why they want to discourage 5G from becoming a full alternative to wired: there’s just not the capacity to do it on the same scale.
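
    To put a rough number on that limit: the Shannon–Hartley theorem caps a radio channel at C = B·log2(1 + S/N). The figures below are my own illustrative assumptions (a 100 MHz channel at a couple of SNRs), but they show why a cell tops out around a gigabit per second per spatial stream, shared by everyone on it, while a single fibre can carry orders of magnitude more.

    ```python
    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Illustrative assumptions: a 100 MHz 5G channel at 20 dB and 30 dB SNR.
    for snr_db in (20, 30):
        capacity = shannon_capacity_bps(100e6, snr_db)
        print(f"100 MHz @ {snr_db} dB SNR -> {capacity / 1e9:.2f} Gbit/s per spatial stream")
    ```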




  • I think this is right, but you’d need to do one of two things to pull it off. First off, if you’re doing it just for web traffic, having the nginx proxy put the original IP in a header (e.g. X-Forwarded-For) and unpacking it on the other side is the smart move. Otherwise:

    1. Route all your traffic on your side via the VPN, and have the routing on the VPN side forward the packets to the intranet IP on your side rather than doing DNAT on them.

    2. If you want to route normal traffic over your normal link, you could do it with source routing on the router (rough sketch below). You would need two subnets: one for your normal Internet traffic and one for the VPN traffic. Set up source routing so that packets from the VPN IP addresses go via the VPN and the rest are NATed the normal way; then, the same as before, the VPN host in the cloud forwards (rather than NATs) to your side of the VPN.

    In both cases SNAT should be done on the cloud side.

    It’s a fiddly setup just to get the IP addresses, though.
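
    A rough sketch of what option 2 looks like on the router, expressed as the iproute2 commands driven from Python. The subnet, routing table number, gateway and interface names are all assumptions; adapt them to your setup (and it needs root to actually apply):

    ```python
    import subprocess

    VPN_SUBNET = "192.168.2.0/24"   # hosts whose traffic should exit via the VPN
    VPN_TABLE = "100"               # dedicated routing table for that subnet
    VPN_GATEWAY = "10.8.0.1"        # far end of the VPN tunnel
    VPN_IFACE = "wg0"               # tunnel interface (WireGuard here, could be anything)

    commands = [
        # Packets sourced from the VPN subnet consult table 100 instead of the main table.
        ["ip", "rule", "add", "from", VPN_SUBNET, "table", VPN_TABLE],
        # Table 100's only default route points down the tunnel.
        ["ip", "route", "add", "default", "via", VPN_GATEWAY, "dev", VPN_IFACE,
         "table", VPN_TABLE],
    ]

    for cmd in commands:
        print("would run:", " ".join(cmd))
        # subprocess.run(cmd, check=True)   # uncomment on the actual router
    ```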


  • You don’t need to use NAT on IPv6. Most routers are based on Linux, and there you have conntrack.

    With that, you can configure outgoing-only connections by default, just like NAT, and poke holes in the firewall for the specific ports you want.

    Also, Windows (and I think Linux) uses IPv6 privacy extensions by default. That means that while you can assign a fixed address and run services on it, the OS will use random addresses within your (usually) /64 allocation for outgoing connections, so people can’t identify you and try to connect back to your IP with a port scanner etc. (toy example below).

    All the benefits of NAT with none of the drawbacks.
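
    A toy illustration of the privacy-extensions idea (not the exact algorithm the OS uses, just the gist): pick a random 64-bit interface identifier inside the /64 for outgoing connections, separate from the stable address your services sit on. The prefix below is the IPv6 documentation prefix.

    ```python
    import ipaddress
    import secrets

    def random_temporary_address(prefix):
        net = ipaddress.IPv6Network(prefix)
        assert net.prefixlen == 64, "expects a /64 allocation"
        # Random 64-bit interface identifier appended to the routing prefix.
        return net[secrets.randbits(64)]

    for _ in range(3):
        print(random_temporary_address("2001:db8:abcd:1234::/64"))
    ```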


  • Well actually, if the popular communities weren’t concentrated on the larger instances and were instead spread out, it would be less of a problem, I think. But yes, at the peak of things I was averaging around 5 hits a second on incoming federation messages from lemmy.world alone.

    I don’t think a separately run relay is the answer. I think perhaps the larger instances should run a separate server for outgoing federation messages, and perhaps redirect incoming federation messages to it too, so as to separate federation from the UI (if they don’t already, of course). That could go a long way towards making them take longer to overwhelm.


  • Well, more specifically, it protects against one specific form of data loss: hardware failure. A good practice, if you’re able, is to have both RAID and an offsite/cloud backup solution.

    But if you don’t, don’t feel terrible. When the OVH datacentre had a fire, I lost my server there, but so did a lot of businesses. You’d be amazed at how many had no backup and were demanding that OVH somehow pry their smouldering drives from what remained of the datacentre wing and salvage all the data.

    If you care about your data, you want a backup that is off-site. Cloud backup is quite inexpensive these days.