
  • You can use the wildcard domain

    Yeah the problem was more that this machine is running on a network where I don’t really control the DNS. That is to say, there’s a shitty ISP router with DHCP and automatic dynamic DNS baked in, but no way to add additional manual entries for vhosts.

    I thought about screwing with the /etc/hosts file to get around it, but what I ended up doing instead was installing a Pi-hole docker container for DNS (something I had been contemplating anyway), pointing it at the router's DNS so every local DNS name still resolves, and then adding manual entries for the vhosts.
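
    Roughly what that looks like, in case it helps anyone (the IPs, hostnames and timezone below are made up; PIHOLE_DNS_ points Pi-hole's upstream at the router):

    # upstream DNS set to the ISP router so existing local names keep resolving
    docker run -d --name pihole \
      -p 53:53/tcp -p 53:53/udp -p 8080:80 \
      -e TZ="Europe/Brussels" \
      -e PIHOLE_DNS_="192.168.1.1" \
      -v pihole-data:/etc/pihole \
      pihole/pihole

    # manual entries for the vhosts go under Local DNS Records
    # (stored in /etc/pihole/custom.list), e.g.:
    #   192.168.1.50  app1.lan
    #   192.168.1.50  app2.lan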

    Another issue I didn’t really want to deal with was regenerating the TLS certificate for the nginx server to make it valid for every vhost, but I just bit the bullet on that.
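
    For anyone facing the same thing: with OpenSSL 1.1.1+ a single self-signed cert can cover all the vhosts via subjectAltName. Something like this (hostnames are placeholders):

    # one cert valid for the server name and every vhost
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout server.key -out server.crt \
      -subj "/CN=server.lan" \
      -addext "subjectAltName=DNS:server.lan,DNS:app1.lan,DNS:app2.lan"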


  • I was afraid it was going to come down to that. I have been looking into configuration options for the apps, but they’re third-party Node.js apps and I know jack shit about Node.js, so I’ve had no luck with it so far.

    Going with vhosts anyway (despite the pain it will create on this setup) seems to be the preferred way forward, then.

    Thanks for the insight, and for confirming what I already suspected.
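
    For reference, by vhosts I mean plain name-based server blocks in nginx, one per app (the hostnames, ports and cert paths below are hypothetical):

    server {
        listen 443 ssl;
        server_name app1.lan;
        ssl_certificate     /etc/nginx/server.crt;
        ssl_certificate_key /etc/nginx/server.key;

        location / {
            proxy_pass http://127.0.0.1:3000;   # app1's node process
            proxy_set_header Host $host;
        }
    }

    # ...plus an equivalent server block per app (app2.lan -> 3001, and so on)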



  • Hmm no, that’s not really it… that’s more about making sure you don’t pass URLs starting with /app1/ onwards to the application, which would not be aware of that subpath.

    I think I need something that intercepts the content being served to the client, and inserts /app1/ into all hardcoded absolute paths.

    For example, let’s say on app1’s root I have an index.html that contains:

    ...
    src="/static/image.jpg"
    ...
    

    It should be dynamically served as:

    ...
    src="/app1/static/image.jpg"
    ...
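
    In case someone wants to attempt it anyway: nginx’s sub_filter module can do this kind of on-the-fly rewriting of response bodies, though it’s fragile (it won’t catch paths assembled in JavaScript, for instance). A minimal sketch, with a made-up port and filter patterns:

    location /app1/ {
        proxy_pass http://127.0.0.1:3000/;      # trailing slash strips the /app1/ prefix
        proxy_set_header Accept-Encoding "";    # sub_filter can't rewrite compressed bodies
        sub_filter_types text/html text/css application/javascript;
        sub_filter 'src="/' 'src="/app1/';
        sub_filter 'href="/' 'href="/app1/';
        sub_filter_once off;                    # rewrite every occurrence, not just the first
    }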
    
  • As a general rule, you should always keep in mind that you’re not really looking for a backup solution but rather a restore solution. So think about what you would like to be able to restore, and how you would accomplish that.

    For my own use, for example, I see very little value in backing up docker containers themselves. They’re supposed to be ephemeral and easily recreated from build scripts, so I don’t use docker save or anything; I just make sure the build code is safely tucked away in a git repository, which is itself backed up of course. In fact, I have a weekly job that tears down and rebuilds all my containers, so my build code is tested and my containers are always up-to-date.
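
    That weekly job is nothing fancy, basically something like this per compose project (the path is made up):

    #!/bin/sh
    # e.g. dropped in /etc/cron.weekly/ -- tear down, pull fresh base images,
    # rebuild from the build code, and start everything again
    cd /opt/stacks/myapp || exit 1
    docker compose down
    docker compose pull
    docker compose up -d --build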

    The actual data is in the volumes, so it just lives on a filesystem somewhere, and I make sure to have a filesystem backup of that. For data that’s in use and may cause consistency issues, there are several solutions:

    • docker stop your containers, take a simple filesystem backup, then docker start them again (sketched after this list).
    • Take an LVM-level snapshot of the filesystem where your volumes live, and back up the snapshot.
    • The same but with a btrfs snapshot (I have no experience with this; all my servers just use ext4).
    • If it’s something like a database, you can often export with database-specific tools that ensure consistency (e.g. pg_dump, mongodump, mysqldump, …), and then back up the resulting dump file.
    • Most virtualization software has functionality that lets you take snapshots of whole virtual disk images.
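
    The stop/backup/start option is as simple as it sounds; a sketch, with placeholder container, volume and path names:

    #!/bin/sh
    # stop -> back up the volume's data -> start again
    docker stop myapp
    tar -czf /backups/myapp-$(date +%F).tar.gz \
        -C /var/lib/docker/volumes/myapp-data/_data .
    docker start myapp

    # for a database, dump with the DB's own tool instead, e.g.:
    #   docker exec mydb pg_dump -U postgres mydb > /backups/mydb-$(date +%F).sql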

    As for the OS itself, I guess it depends on how much configuration and tweaking you have done to it and how easy it would be to recreate the whole thing. In case of a complete disaster, I intend to just spin up a new VM, reinstall docker, restore my volumes, and then build and spin up my containers. Nevertheless, I still do a full filesystem backup of / and /home as well. I don’t intend to use that to recover from a complete disaster, but it can be useful for recovering specific files after accidental deletions.