Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
https://d.sb/
Mastodon: @dan@d.sb

  • 5 Posts
  • 495 Comments
Joined 3 years ago
Cake day: June 14th, 2023

  • Use a page caching plugin that writes HTML files to disk. I don’t do a lot with WordPress any more, but my preferred one was WP Super Cache. Then, you need to configure Nginx to serve pages directly from disk if they exist. By doing this, page loads don’t need to hit PHP and you effectively get the same performance as if it were a static site.

    See how you go with just that, with no other changes. You shouldn’t need FastCGI caching. If you can get most page loads hitting static HTML files, you likely won’t need any other optimizations.

    One issue you’ll hit is server-generated dynamic content on the page. You’ll need to use JavaScript to load any dynamic bits instead, so they aren’t baked into the cached HTML. Normal article editing is fine, as WordPress will automatically clear the related caches on publish.

    For the server, make sure it’s located near the region where the majority of your users are located. For 200k monthly hits, I doubt you’d need a machine as powerful as the Hetzner one you mentioned. What are you using currently?
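    As a sketch of the Nginx side (the cache path here assumes WP Super Cache's default directory layout; adjust it to your plugin's settings):

```nginx
# Inside the server block: serve pre-generated HTML from WP Super Cache
# if it exists on disk, otherwise fall through to PHP.
set $cache_file /wp-content/cache/supercache/$http_host/$request_uri/index.html;

location / {
    try_files $cache_file $uri $uri/ /index.php?$args;
}
```

    A full config would also skip the cache file for POST requests and logged-in users (via a cookie check), since those need fresh pages from PHP.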


  • If your current setup works well for you, there’s no reason to change it.

    You could try Debian in a VM (virtual machine) if you want to. If you’re running a desktop environment, GNOME Boxes makes it pretty easy to create VMs. It works even if you don’t use GNOME.

    If you want to run it as a headless server (no screen plugged in to it), I’d install Proxmox on the system, and use VMs or LXC containers for everything. Proxmox gives you a web UI to manage VMs and containers.



  • Blue Iris is by far the most capable NVR, but it’s Windows-only so you’d need a Windows or Windows Server VM. For a basic setup, Frigate is more than sufficient.

    I’d say try Frigate on your ThinkCentre and see how well it runs. I wouldn’t buy new hardware prematurely.

    Do I understand that I could then share the iGPU between Jellyfin and Docker/Frigate?

    I’m not sure about containers like LXC, but generally you need SR-IOV or GVT-g support to share a GPU across multiple VMs. I think your CPU supports GVT-g, so you should be able to find a guide on setting it up.
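    For reference, GVT-g setup usually looks roughly like this (the kernel parameters are standard, but the mdev type name depends on your iGPU generation, so check a guide for your exact CPU):

```shell
# /etc/default/grub — enable the IOMMU and GVT-g in the i915 driver
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"

# After update-grub and a reboot, load the mediated-device modules:
modprobe -a kvmgt vfio-iommu-type1 vfio-mdev

# Create a virtual GPU instance (type name varies by iGPU generation):
echo "$(uuidgen)" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```

    The created mdev device can then be passed to a VM as a virtual GPU.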





  • Oops, I didn’t know about the SX line, and didn’t know they had auction servers with large amounts of disk space. Thanks! I’m not familiar with all of Hetzner’s products.

    For pure file storage (i.e. you’re only using SFTP, Borgbackup, restic, NFS, Samba, etc.) I still think the storage boxes are a good deal, as you don’t have to worry about server maintenance (since it’s a shared environment). I’m not sure if it supports encryption though, which is probably where a dedicated server would be useful.



  • SQLite is underrated. I’ve used it for high traffic systems with no issues. If your system has a large number of readers and a small number of writers, it performs very well. It’s not as good for high-concurrency write-heavy use cases, but that’s not common (most apps read far more than they write).

    My use case was a DB that was created during the build process, then read on every page load.
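    That pattern is easy to sketch with Python's built-in sqlite3 module (the schema and paths here are made up for illustration): build the database once, then open it read-only for every page load.

```python
import os
import sqlite3
import tempfile

# "Build step": create the database once.
path = os.path.join(tempfile.mkdtemp(), "site.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")  # WAL lets readers run while a writer commits
con.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY, body TEXT)")
con.executemany(
    "INSERT INTO pages VALUES (?, ?)",
    [("home", "<h1>Home</h1>"), ("about", "<h1>About</h1>")],
)
con.commit()
con.close()

# "Page load": open read-only; any number of these can run concurrently.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
body = ro.execute("SELECT body FROM pages WHERE slug = ?", ("home",)).fetchone()[0]
print(body)  # <h1>Home</h1>
```

    Opening with `mode=ro` also guards against a stray write from the read path.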


  • MariaDB is not always a drop-in replacement. There are several features MySQL has that MariaDB doesn’t, especially around the optimizer: for some types of queries, MySQL will produce a more efficient execution plan than MariaDB. MariaDB is also missing MySQL’s native JSON type (in MariaDB, JSON is just an alias for LONGTEXT, whereas MySQL stores it in an optimized binary format and lets you index individual fields in JSON objects to make filtering on them more efficient).

    MariaDB and MySQL are both fine. Even though MySQL doesn’t receive as much development any more, it doesn’t really need it. It works fine. If you want a better database system, switch to PostgreSQL, not MariaDB.
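    For what it's worth, the JSON field indexing in MySQL works via generated columns (the table and field names here are hypothetical):

```sql
-- MySQL 5.7+: extract a JSON field into a generated column and index it
CREATE TABLE events (
    id INT AUTO_INCREMENT PRIMARY KEY,
    doc JSON,
    user_id INT GENERATED ALWAYS AS (doc->"$.user_id"),
    INDEX idx_user_id (user_id)
);

-- A query like  SELECT * FROM events WHERE user_id = 42;
-- can now use the index instead of parsing every document.
```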


  • AWS Glacier would be about $200/mo, PLUS bandwidth transfer charges, which would be something like $500. R2 would be about $750/mo.

    50TB on Hetzner storage boxes would be $116/month, with unlimited traffic. It’d have to be split across three storage boxes though, since 20TB is the max per box: 10TB is $24/month and 20TB is $46/month, so 20 + 20 + 10 TB comes to $46 + $46 + $24 = $116.

    They’re only available in Germany and Finland, but data transfer from elsewhere in the world would still be faster than AWS Glacier.

    Another option with Hetzner is a dedicated server. Unfortunately the max storage they let you add is 2 x 22TB SATA HDDs, which would only let you store 22TB of stuff (assuming RAID1), for over double the cost of a 20TB storage box.


  • Both of those documents agree with me? Red Hat are just using the terms “client” and “server” to make it easier for people to understand, but they explicitly say that all hosts are “peers”:

    Note that all hosts that participate in a WireGuard VPN are peers. This documentation uses the terms client to describe hosts that establish a connection and server to describe the host with the fixed hostname or IP address that the clients connect to and, optionally, route all traffic through this server.

    Everything else is a client of that server because they can’t independently do much else in this configuration.

    All you need to do is add an extra peer to the WireGuard config on any one of the “clients”; then it’s no longer just a client, and it can connect directly to that peer without going through the “server”.
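    A sketch of what that looks like in a “client’s” wg0.conf (the keys, endpoints, and addresses are all placeholders):

```ini
[Interface]
PrivateKey = <this peer's private key>
Address = 10.0.0.2/24

# The "server": has a fixed endpoint, optionally routes traffic for everyone
[Peer]
PublicKey = <server's public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.1/32

# Another "client", now reachable directly without going through the server
[Peer]
PublicKey = <other peer's public key>
Endpoint = other-peer.example.com:51820
AllowedIPs = 10.0.0.3/32
```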


  • There’s no such thing as a client or server with WireGuard. All systems with WireGuard installed are “nodes”. WireGuard is peer-to-peer, not client-server.

    You can configure nftables rules to route through a particular node, but that doesn’t really make it a server. You could configure all nodes to allow routing traffic through them if you wanted to.

    If you run WireGuard on every device, you can configure a mesh VPN where every device can reach every other device directly, without routing through an intermediary node. This is essentially what Tailscale does.





  • you can override this by setting an IP on the port exposed so that a local-only server is only accessible on 127.0.0.1

    Also, if the Docker container only has to be accessed from another Docker container, you don’t need to expose a port at all. Docker containers can reach other Docker containers in the same compose stack by hostname.
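    A docker-compose sketch of both points (the service names and images are hypothetical):

```yaml
services:
  app:
    image: myapp:latest
    ports:
      - "127.0.0.1:8080:8080"  # bound to loopback; not reachable from other machines
  db:
    image: postgres:16
    # no ports: section at all — "app" can still reach this container as
    # db:5432 over the compose network, but nothing outside the stack can
```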



  • why is a tower defense game listed under Automation?

    and two of the most popular automation programs are missing (n8n and Node-RED).

    who on earth needs customer live chat and a lot of business-scale website analytics, webshop systems and CRM and ERP in their homelab??

    Maybe not in a homelab, but plenty of people self-host these. I’m setting up customer live chat (Chatwoot) and invoicing and accounting (Bigcapital) for my wife, for example. I self-host website analytics (Plausible) and bug tracking (used to be Sentry, but it got too complex to host, so now I’m trying Bugsink and GlitchTip) for my personal sites/projects, too.