• 0 Posts
  • 71 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • I was in the same place as you a few years ago - I liked swarm, and was a bit intimidated by kubernetes - so I’d encourage you to take a stab at kubernetes. Everything you like about swarm, kubernetes does better, and tools like k3s make it super simple to get set up. There _is_ a learning curve, but I’d say it’s worth it. Swarm is more or less a dead-end tech at this point, and there are a lot more resources about kubernetes out there.


  • They are, but I think the question was more “does the increased speed of an SSD make a practical difference in user experience for immich specifically”

    I suspect that the biggest difference would be running the Postgres DB on an SSD, where the fast random access is going to make queries significantly faster (unless you have enough RAM that Postgres can keep the entire DB in memory, where it makes less of a difference).

    Putting the actual image storage on SSD might improve latency slightly, but your hard drive is probably already faster than your internet connection so unless you’ve got lots of concurrent users or other things accessing the hard drive a bunch it’ll probably be fast enough.

    These are all reckons without data to back them up, so maybe do some testing.
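    If you did want to do that testing, here’s a minimal Python sketch for timing random block reads on a given disk - everything in it is illustrative (my names, nothing Immich- or Postgres-specific), and note the page-cache caveat in the comments:

    ```python
    import os
    import random
    import tempfile
    import time


    def read_latency(path: str, reads: int = 200, block: int = 8192) -> float:
        """Average latency of random block reads from a file, in seconds."""
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            start = time.perf_counter()
            for _ in range(reads):
                f.seek(random.randrange(0, size - block))
                f.read(block)
        return (time.perf_counter() - start) / reads


    # Create a scratch file on the disk you want to probe. 4 MiB is enough
    # to demo the mechanism, but it will sit in the OS page cache - use a
    # file much larger than RAM (or drop caches) to measure the disk itself.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(4 * 1024 * 1024))
        scratch = tmp.name

    print(f"avg random 8 KiB read: {read_latency(scratch) * 1e6:.1f} µs")
    os.remove(scratch)
    ```

    Run it once with the scratch file on the HDD and once on the SSD; the gap is the random-access advantage the DB would benefit from, while serving the images themselves is mostly sequential and internet-bound anyway.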








  • As in, hardware RAID is a terrible idea and should never be used. Ever.

    With hardware RAID, you are moving your single point of failure from your drives to your RAID controller - and when the controller fails, and they fail more often than you would expect, you are fucked: your data is gone, nice try, play again some time. In theory you could swap the controller out, but in practice it’s a coin flip whether that will actually work, unless you can find exactly the same model controller with exactly the same firmware manufactured on the same production line while the moon was in the same phase - and even then your odds are still only 2 in 3.

    Do yourself a favour: look at an external disk shelf/DAS/drive enclosure that connects over SAS, and do RAID in software. Hardware RAID made sense when CPUs were hewn from granite and had clock rates measured in tens of megahertz, so offloading things to dedicated silicon made things faster, but that hasn’t been the case this century.




  • I’d considered doing something similar at some point but couldn’t quite figure out what the likely behaviour was if the workers lost connection back to the control plane. I guess containers keep running, but does kubelet restart failed containers without a controller to tell it to do so? Obviously connections to pods on other machines will fail if there is no connectivity between machines, but I’m also guessing connections between pods on the same machine will be an issue if the machine can’t reach coredns?



  • I’ve started a similar process to yours and am moving domains as they come up for renewal, with a slightly different technical approach:

    • I’m using AWS Route 53 as my registrar. They aren’t the cheapest, but still work out at about half the price of Gandi and one of my key requirements was to be able to use Terraform to configure DS records for DNSSEC and NS records in the parent zone
    • I run an authoritative nameserver on an OCI free tier VM using PowerDNS, and replicate the zones to https://ns-global.zone/ for redundancy. I’m investigating setting up another authoritative server on a different cloud provider in case OCI yank the free tier or something
    • I use https://migadu.com/ for email

    I have one .nz domain which I’ll need to find a different registrar for, cos for some reason Route 53 doesn’t support .nz domains, but otherwise the move is going pretty smoothly. Kinda sad where Gandi has gone - I opened a support ticket to ask how they can justify being twice the price of their competitors and got a non-answer.
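    For context, the registrar-side half of that setup can be sketched in Terraform roughly like this - a sketch only, assuming the AWS provider’s `aws_route53domains_registered_domain` resource, with the domain and nameserver hostnames as placeholders:

    ```hcl
    # Sketch: manage registrar-level settings (NS delegation in the parent
    # zone, transfer lock, etc.) for a domain registered through Route 53.
    # All names below are placeholders, not real infrastructure.
    resource "aws_route53domains_registered_domain" "example" {
      domain_name = "example.com"

      # Point the TLD's delegation at self-hosted authoritative servers
      # instead of Route 53's hosted zones.
      name_server {
        name = "ns1.example-powerdns-host.net"
      }

      name_server {
        name = "ns.example-secondary.zone"
      }

      transfer_lock = true
    }
    ```

    The DS record for DNSSEC is likewise set through the Route 53 Domains API rather than in a hosted zone, which is what makes the “DS and NS in the parent zone via Terraform” requirement workable here.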




    • An HP ML350p w/ 2x HT 8 core xeons (forget the model number) and 256GB DDR3 running Ubuntu and K3s as the primary application host
    • A pair of Raspberry Pis (one 3, one 4) as anycast DNS resolvers
    • A random minipc I got for free from work running VyOS as my border router
    • A Brocade ICX 6610-48p as core switch

    Hardware is total overkill. Software-wise, everything is running in containers, deployed into kubernetes using helmfile, Jenkins and gitea.




    • There have been some technical decisions over the last few years that I don’t think fit my needs terribly well; chief among these is the push for Snaps - they are a proprietary distribution format that adds significant overhead without any real benefit, and Canonical has been pushing more and more functionality into Snap
    • I previously chose Ubuntu over Debian because I needed more up-to-date versions of things like Python and PHP; with Docker this isn’t really a concern any more, so the slower, more conservative approach Debian takes isn’t as big of an issue