> and using DDNS
As in, running software to update your DNS records automatically based on your current system IP. Great for dynamic IPs, or just moving location.
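A minimal sketch of that kind of updater, assuming a Cloudflare-managed zone; the API token, zone ID, record ID, and hostname are placeholders for your own values, and any what's-my-IP service would do:

```shell
#!/bin/sh
# Sketch of a DDNS updater: fetch the current public IP, compare it to the
# last IP we pushed, and update the DNS record only when it has changed.
# CF_TOKEN, ZONE_ID, RECORD_ID, and home.example.com are placeholders.

CACHE="${CACHE:-$HOME/.last_ip}"

# Returns success (0) when $1 differs from the cached IP, failure otherwise.
ip_changed() {
    [ ! -f "$CACHE" ] || [ "$(cat "$CACHE")" != "$1" ]
}

# Push the new address to Cloudflare's DNS records API, then cache it.
update_record() {
    ip="$1"
    curl -s -X PATCH \
        "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
        -H "Authorization: Bearer $CF_TOKEN" \
        -H "Content-Type: application/json" \
        --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$ip\"}" \
    && echo "$ip" > "$CACHE"
}

main() {
    ip="$(curl -s https://api.ipify.org)"   # any what's-my-IP service works
    if ip_changed "$ip"; then
        update_record "$ip"
    fi
}

# Run from cron, e.g. every 5 minutes:  */5 * * * * /usr/local/bin/ddns.sh run
if [ "${1:-}" = "run" ]; then
    main
fi
```

Only calling the API on an actual change keeps you well clear of rate limits, and the cache file doubles as a record of the last IP you published.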
🇨🇦
Sure, Cloudflare provides other security benefits, but that's not what OP was talking about. They just wanted/liked the plug-and-play aspect, which doesn't need Cloudflare.
Those ‘benefits’ are also really not necessary for the vast majority of self hosters. What are you hosting, from your home, that garners that kind of attention?
The only things I host from home are private services for myself or a very limited group, which, as far as 'attacks' go, just get the occasional script kiddie looking for exposed endpoints. Nothing that needs mitigation.
Unless you are behind CGNAT, you would have had the same plug-and-play experience by using your own router instead of the ISP-supplied one, and using DDNS.
At least, I did.
I have one more thought for you:
If downtime is your concern, you could always use a mixed approach. Run a daily backup system like I described, somewhat haphazardly, with everything still running. Then once a month at 4am or whatever, perform a more comprehensive backup, looping through each Docker project and shutting it down before running the backup and bringing it all online again.
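A rough sketch of that monthly pass, assuming each Docker project lives in its own directory with a compose file; the projects root and the borg repo path are placeholders:

```shell
#!/bin/sh
# Sketch: loop over compose projects, stop each one, back up its directory
# with borg, then bring it back up. /srv/docker and the repo are placeholders.

DOCKER_ROOT="${DOCKER_ROOT:-/srv/docker}"
REPO="${REPO:-/mnt/backup/borg-repo}"

# Stop one project, archive its directory, restart it.
backup_project() {
    dir="$1"
    name="$(basename "$dir")"
    ( cd "$dir" && docker compose down )
    borg create --stats "$REPO::monthly-$name-{now}" "$dir"
    ( cd "$dir" && docker compose up -d )
}

# One pass over every project directory under DOCKER_ROOT.
monthly_backup() {
    for dir in "$DOCKER_ROOT"/*/; do
        [ -d "$dir" ] && backup_project "$dir"
    done
}

# Schedule via cron, e.g.:  0 4 1 * * /usr/local/bin/monthly-backup.sh run
if [ "${1:-}" = "run" ]; then
    monthly_backup
fi
```

Stopping each project individually (rather than everything at once) keeps per-service downtime to roughly the length of its own backup.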
I set up borg around 4 months ago using option 1. I've messed around with it a bit, restoring a few backups, and haven't run into any issues with corrupt/broken databases.
I just used the example script provided by borg, modified to include my Docker data and to write info to a log file instead of the console.
Daily at midnight, a new backup of around 427 GB of data is taken. At the moment that takes 2-15 minutes to complete, depending on how much data has changed since yesterday, though the initial backup was closer to 45 minutes. Then old backups are trimmed: backups less than 24 hours old are kept, along with 7 dailies, 3 weeklies, and 6 monthlies. Anything outside that scope gets deleted.
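That retention policy maps directly onto borg prune's keep flags; wrapped in a function here, with a placeholder repo path:

```shell
# Retention as described above: keep everything <24h old, plus 7 daily,
# 3 weekly, and 6 monthly archives; anything else is deleted.
# /mnt/backup/borg-repo is a placeholder for your own repository.
prune_old_backups() {
    borg prune --list \
        --keep-within 24H \
        --keep-daily 7 \
        --keep-weekly 3 \
        --keep-monthly 6 \
        /mnt/backup/borg-repo
}
```

Running this right after the nightly `borg create` keeps the repository from growing without bound.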
With borg's compression and de-duplication, the 15 backups I have so far (5.75 TB of data) currently take up 255.74 GB of space. 10/10 would recommend on that aspect alone.
/edit, one note: I'm not backing up Docker volumes directly, though you could just fine. Anything I want backed up lives in a regular folder that's then bind-mounted into a Docker container (including things like paperless-ngx's database).
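As an illustration of that bind-mount pattern (host paths are examples, and the container paths are paperless-ngx's defaults as I recall):

```shell
# Bind-mount regular host folders into the container instead of using
# named volumes; backing up /srv/paperless then captures everything,
# database included. Host paths here are examples.
docker run -d \
    -v /srv/paperless/data:/usr/src/paperless/data \
    -v /srv/paperless/media:/usr/src/paperless/media \
    ghcr.io/paperless-ngx/paperless-ngx
```

The same thing in a compose file is just a `volumes:` entry per mount; either way the data lives in a plain directory your backup script can see.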
Dirty secrets about you.
After reading this thread and a few other similar ones, I tried out BorgBackup and have been massively impressed with its efficiency.
Data that hasn't changed, is stored under a different location, or is otherwise identical to something already in the backup repository (whether in the backup currently being created or in any historical backup) isn't replicated. Only the information required to link that existing data to its doppelgangers is stored.
The original set of data I've got being backed up is around 270 GB, and I currently have 13 backups of it. Raw, that's 3.78 TB of data. After compression alone (zlib), that's down to 1.56 TB. But the incredible bit is after de-duplication (the process described in the paragraph above): the raw data stored on disk for all 13 of those backups is 67.9 GB.
I can mount any one of those 13 backups to the filesystem, or just extract any of that 3.78 TB of files directly, all from a backup repository of just 67.9 GB.
Linux?
I just use sshfs to mount SSH shares and move files between them like any other folder.
Same with Samba shares (Windows).
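For anyone unfamiliar, a sketch of the sshfs workflow; the host and paths are examples:

```shell
# Mount a remote directory over SSH, use it like any local folder, unmount.
mkdir -p ~/mnt/server
sshfs user@server.lan:/srv/data ~/mnt/server

cp ~/mnt/server/some-file ~/Documents/   # plain filesystem operations

fusermount -u ~/mnt/server               # unmount when done (Linux/FUSE)
```

No daemon or config on the server side beyond a working SSH login, which is what makes it so convenient.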
Configure Ethernet with fixed IPs, and configure Wi-Fi to use your phone hotspot.
Then you can use one to troubleshoot the other as needed.
Your normal setup would be wired between the Pi and laptop, with the laptop connected to local Wi-Fi for internet.
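With NetworkManager, that setup is a couple of nmcli commands; the connection name, addresses, SSID, and password here are all examples:

```shell
# Static IP on the wired link between the Pi and the laptop
# (run the equivalent on both ends, with different addresses).
nmcli con mod "Wired connection 1" \
    ipv4.method manual \
    ipv4.addresses 192.168.50.2/24

# Separate Wi-Fi profile pointing at the phone hotspot, for when the
# wired side needs troubleshooting.
nmcli dev wifi connect "PhoneHotspot" password "example-pass"
```

Because the two links are independent, losing one never locks you out of the box entirely.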
True, the browser extension can be rather annoying. I tend to do any edits through either the Android app or the web page.
Interesting, that I was not aware of. I’ve never run into a scenario where I’ve had to add/edit while offline.
When using Vaultwarden, however, you can add/edit while offline as long as the client can still reach the server (i.e. they are within the same LAN, or are the same machine). You'd still be fine to add/edit while your home WAN is out, for example; just not on the go.
Plus there’s the no-internet package mentioned in that link, but it’s limited to the desktop application.
Bitwarden is (primarily) a single DB synced between devices via a server. A copy is kept locally on each device you sign into.
Changes made to an offline copy will sync to the server and your other devices once back online (with the most recent change to each individual item winning if there are conflicting changes across several devices).
/edit: the local copy is for offline access to your passwords. Edits must be made with a connection to the server your account resides on, be that Bitwarden's or your own.
If you host your own sync server via Vaultwarden, you can easily maintain multiple databases (called vaults), either with multiple accounts or with a single account and the Organizations feature (which lets you create vaults separate from your main one and share them with multiple accounts). You can do this with regular Bitwarden as well, but you have to pay for the privilege.
Using Vaultwarden also gives you all the paid features of Bitwarden for free (as it's self-hosted instead of using the public servers).
I’ve been incredibly happy with it after setting it up ~3 months ago. Worth looking into.
Jesus, you can run more than one piece of software on each bit of hardware…
Why spread out across 12-13 machines? Seems like a huge waste of power, and a whole bunch of extra stuff to maintain.
It's a reverse proxy in front of your services. That's fundamental to how a reverse proxy functions. Just like your own reverse proxy.
Emby, Jellyfin, and Plex will all detect connection speed, adjust quality settings, and transcode the media to playback without buffering.
I wouldn't recommend Plex. They've been steadily moving away from self-hosted private media servers and towards just serving commercial content to you.
I myself run Emby, as I'm rather fond of their development team and their attitude towards privacy. It does require payment for 'Emby Premiere' (i.e. the installable client apps and transcoding features), but single-payment lifetime licenses are available as well as monthly plans.
Jellyfin is a popular open-source option, built on a fork of Emby's older open-source code from before they went closed source.
Either would work for you.
Tbh, laziness and lack of need.
I’ll probably reconsider once renewal comes around, but that’s ~4 years away. Until then, as long as things continue functioning: meh. Doesn’t really make a difference.
Idk, but it seems really stupid.
Having not actually looked into it at all:
I'm wondering if they have an API for updating records, rather than traditional DDNS. Not the same thing, AFAIK.
Either way, I'm already using Cloudflare as a nameserver, so this shouldn't matter as much as I thought.
I’m an idiot.
I already do this. The swap to Squarespace won't actually affect me.
🤦
Oh fuck.
I just remembered I use Cloudflare as my name servers; Google (well, Squarespace now) only handles the registration.
I probably don’t have to do anything then.
Kinda feel like a moron now…
Drink less paranoia smoothie…
I’ve been self-hosting for almost a decade now; never bothered with any of the giants. Just a domain pointed at me, and an open port or two. Never had an issue.
Don’t expose anything you don’t share with others; monitor the things you do expose with tools like fail2ban. VPN into the LAN for access to everything else.
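For the fail2ban piece, a minimal jail.local along those lines; the sshd jail ships with fail2ban, and the thresholds here are illustrative, not recommendations:

```ini
# /etc/fail2ban/jail.local -- ban hosts that repeatedly fail authentication.
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
```

Most services you'd expose (reverse proxies included) have ready-made filters you can enable the same way, so the script-kiddie scans get dropped automatically.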