Ultimately, do whatever you think you’ll be able to keep up with.
The best documentation system is useless if you keep putting it off because it’s too much work.
It can be in git even if you’re not doing ‘config as code’ or ‘infrastructure as code’ yet/ever.
Even just a text file with notes in markdown is better than nothing. It can usually be rendered, tracked, and versioned.
You can also add relevant files as needed.
Even if your setup isn’t fully automated CI/CD magic, a copy of that one important file you just modified can be committed alongside the notes.
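A minimal sketch of what that can look like (paths and file names here are just illustrative):

mkdir -p ~/homelab-docs/configs && cd ~/homelab-docs
git init
echo "# Homelab notes" > notes.md
cp /etc/nginx/nginx.conf configs/   # snapshot of that one important file
git add -A && git commit -m "notes + nginx config snapshot"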
Are you trying to recover data here?
Seems like you didn’t use it and (maybe?) don’t have any data to lose?
Yea I’ve been using nextcloud for a while and it’s fine.
I remember when I used owncloud before nextcloud was even a thing and the upgrade experience was absolute shit.
These days it’s just fine.
What’s nice is that it provides a similar level of protection to using a VPN with PKI, but just for that specific subdomain. While a VPN would have to be connected manually before use (or all the time), this is built-in.
The odds of someone breaking through the mTLS and breaking through that application’s security at the same time are much smaller than either separately.
If you don’t have a valid cert, you’re dropped by the reverse proxy before anything even gets passed to the server behind it.
I’m a big fan of it.
Not really, although now that I have certs for those anyway, maybe I should.
More like I’m using some services that I want to always work, whether I’m on the LAN or on the go.
Opening home automation or 3d printers to the Internet is unwise to say the least.
mTLS in the reverse proxy for those allows me to have more security without having to establish a VPN first.
I’m just doing mutual TLS to authenticate clients, which is what I use the private CA for.
I could use the private CA for the server certs too, instead of Let’s Encrypt, and trust it on my devices, but Let’s Encrypt is easy enough and useful for the things I expose publicly. mTLS avoids needing a VPN for the more sensitive services.
I run a private CA for client SSL.
For traditional server SSL I just use Let’s Encrypt; I already have a domain (less than $10 a year) for my public-facing stuff and just use a subdomain of it for my homelab.
I have a container with openssl for the private CA and for generating user certs, as well as for renewing the Let’s Encrypt ones. I just use openssl without anything fancy.
The output folder is mounted read-write only in that one container.
Other containers that need certs get just the relevant subfolders mounted read-only.
All these containers run on the same server, so I don’t have to copy anything around; the containers don’t even need connectivity between them, the certs are just mounted where needed.
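For reference, the openssl side is roughly this (file names and subjects here are made up, nothing fancy):

# one-time: create the private CA
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 \
    -subj "/CN=Homelab Client CA" -out ca.crt

# per client: key, CSR, and a cert signed by the private CA
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=my-laptop" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out client.crt

# bundle for importing on phones/laptops (prompts for an export password)
openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12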
I configure nginx with plain text config files.
No clue how or where that is in your setup, but presumably wherever you configure the proxy_pass and server names.
in nginx:
server {
    ...
    location / {
        ...
        proxy_pass https://redacted.......;
        proxy_pass_request_headers on;
        proxy_pass_header Set-Cookie;
        proxy_set_header Host $host;
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        ...
    }
}
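The mTLS part lives in the same server block; something like this (the cert paths are placeholders for wherever yours actually live):

server {
    listen 443 ssl;
    server_name redacted.......;

    # regular server cert from Let's Encrypt
    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    # mTLS: require a client cert signed by the private CA;
    # anything without one is dropped before reaching the backend
    ssl_client_certificate /path/to/private-ca.crt;
    ssl_verify_client on;

    location / {
        ...
    }
}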
I think there was a trusted-proxy setting in ownCloud itself that needed to be set too, or maybe I’m thinking of another service.
You need to forward the real IP from nginx.
I’ll upload an example when I get off work
Split Horizon DNS is the most seamless user experience.
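If you run Pi-hole (or anything dnsmasq-based), it’s one line per name; the name and address below are made up:

# inside the LAN this name resolves to the local box;
# public DNS points the same name at the WAN IP
address=/cloud.example.com/192.168.1.10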
Yea I’ve been running “core” in docker-compose and not the “supervised” or whatever that’s called.
It’s been pretty flawless tbh.
It’s running in docker-compose in a VM in proxmox.
At first, it was mostly because I wanted to avoid their implementation of DNS, which was breaking my split-horizon DNS.
Honestly, once you figure out docker-compose, it’s much easier to manage than the supervised add-on thing, although the learning curve is different.
Just the fact that your add-ons don’t need to go down when you upgrade hass makes this much easier.
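For reference, a minimal compose file for “core” looks roughly like this (the stable tag and host networking are the commonly documented setup; adjust to taste):

services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./config:/config
      - /etc/localtime:/etc/localtime:ro
    network_mode: host    # discovery of LAN devices needs host networking
    restart: unless-stopped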
I could technically run non-hass-related containers on that Docker host, but the other important stuff is already in LXC containers in Proxmox.
Not everything works in containers, so having the option to spin up a VM is neat.
I’m also using PCI passthrough so my home theater/gaming VM has access to the GPU and I need a VM for that.
Even if they only want to use k8s or Docker for now, having the option to create a VM is really convenient.
FWIW, my ISP router didn’t allow custom DNS, but it allows disabling DHCP altogether.
I just run DHCP in pihole too, which works fine.
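Under the hood Pi-hole’s DHCP server is just dnsmasq, so the equivalent config is short (addresses here are examples for a typical /24):

dhcp-range=192.168.1.100,192.168.1.200,24h
dhcp-option=option:router,192.168.1.1
dhcp-option=option:dns-server,192.168.1.2   # hand out the Pi-hole as DNS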
My guess was gonna be a broadcast storm caused by a loop, like a device that’s connected to 2 different ports. That or a rogue DHCP server.
Either way, taking a capture in Wireshark would have helped.
If these are managed switches, configuring BPDU guard on LAN access ports (not on trunk ports to other switches!) would prevent a device from forming a loop.
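Something like this on Cisco-flavored gear (syntax varies by vendor):

! shut the port down if a BPDU (i.e. another switch / a loop) shows up
interface GigabitEthernet0/1
 description access port for end devices
 spanning-tree portfast
 spanning-tree bpduguard enable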
Haven’t had to use port forwarding for gaming in like 30 or so years, so I just looked up Nintendo’s website…
LMAO, no thanks, that’s not happening.
For your question, you could likely route everything through a tunnel and manage the port forwarding on the other end of the tunnel.
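As a sketch, with a cheap VPS terminating a WireGuard tunnel (interface wg0, home peer at 10.0.0.2; the addresses and port are made up):

# forward inbound game traffic down the tunnel to the home peer
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 45000 \
    -j DNAT --to-destination 10.0.0.2
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE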