

Just put everything that doesn’t have OIDC behind forward auth. OIDC is overrated for selfhosting.


You’re arguing two different points here. “A VPN can act as a proxy” and “A VPN that only acts as a proxy is no longer a VPN”. I agree with the former and disagree with the latter.
A “real” host-to-network VPN could be used as a proxy simply by setting your default route through it, just as a simple host-to-host VPN could be NOT a proxy by only allowing internal IPs over the link. Would the latter example stop being a VPN if you add a default route going from one host to the other?
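To make that concrete, here’s a sketch of two hypothetical WireGuard peer entries (the endpoint, key, and addresses are made up) that use the exact same tunnel and differ only in which routes go through it:

```ini
# Host-to-host peer used only for internal traffic -- clearly "just a VPN":
[Peer]
PublicKey = <peer-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.2/32          # route only the peer's tunnel IP over the link

# The very same peer used as a "proxy" -- nothing changed but the routes:
[Peer]
PublicKey = <peer-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0      # default route: send all traffic over the VPN
```

The protocol, encryption, and interface are identical in both cases; only the `AllowedIPs` (and hence the routing table) differ.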


Fundamentally, a host-to-host VPN is still a VPN. It creates an encapsulated L2/L3 link between two points over another network. The number of hosts on either end doesn’t change that. Each end still has its own interface address, subnet, etcetera. You could use the exact same VPN config for both a host-to-host and a host-to-site VPN simply by making one of the hosts a router.
I see your point about advocating for other methods where appropriate (although personally I prefer VPNs) but I think that gatekeeping the word “VPN” is silly.


“It has effectively the same function as a proxy” isn’t the same thing as “it’s not actually a VPN”.
One could argue you’re not really using the tech to its fullest advantage, but the underlying tech is still a VPN. It’s just a VPN that’s being used as a proxy. You’re still using the same VPN protocols that could be used in production for conventional site-to-site or host-to-network VPN configurations.
Regardless, you’re the one who brought up commercial VPNs; when using OpenVPN to create a tunnel between a VPS and home server(s), it seems like it’s being used exactly to “create private communication between multiple clients”. Even by your definition that should be a VPN, right?


VPN and proxy server refer to different things. There’s lots of marketing BS around VPNs, but that doesn’t make the term itself BS; the two are different, and the distinction is relevant when you’re talking about networking.
If there’s a port you want accessible from the host/other containers but not beyond the host, consider using the expose directive instead of ports. As an added bonus, you don’t need to come up with arbitrary ports to assign on the host for every container with a shared port.
IMO it’s more intuitive to connect to a service via container_name:443 instead of localhost:8443
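As a sketch (a hypothetical compose file; service and image names are placeholders):

```yaml
services:
  app:
    image: nginx:alpine   # placeholder image
    expose:
      - "443"   # reachable as app:443 from other containers on the same
                # network, but never published on a host port
  client:
    image: alpine         # placeholder; can reach app:443 directly,
                          # no localhost:8443 mapping needed
```

With `ports: - "8443:443"` instead, the service would also be bound on the host and reachable from outside it, which is exactly what you often don’t want for backend containers.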


The UX just isn’t there for MPV. Jellyfin isn’t always ideal but it gives an interface roughly on par with a streaming service. Why should I replace that with a tool like MPV? I don’t need keyboard controls, I watch from my couch. It seems like all downsides to me.


I don’t see how? Normal HTTP/TLS validation would still apply so you’d need port forwarding. You can’t host anything on the CGNAT IP so you can’t pass validation and they won’t issue you a cert.


CGNAT is for IPv4; the IPv6 network is separate. But if you have IPv6 connectivity on both ends, setting up WG is the same as with IPv4.
Only giving a /64 breaks stuff, but some ISPs do it anyway. With only a /64 you can’t subnet your network at all.
I really doubt it. We could give everyone on Earth their own /48 with less than 1% of the IPv6 address space.
Giving a /48 is spec, but a lot of ISPs are too stingy :/
Going to other planets would require a total re-architecting of our communications infrastructure anyway. The distances involved are so great that a shared internet isn’t really viable. Even Mars would have up to 22 minutes of one-way latency at its farthest. So I don’t think it makes sense to plan our current internet around potential future space colonization.
Even so, IPv6 is truly massive. We could give a /64 to every square centimeter of the Earth’s surface and still have IPs to spare. Frankly, I think the protocol itself will be obsolete before we run out.
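The back-of-the-envelope numbers above check out; here’s a quick sketch (the population and surface-area figures are rough assumptions):

```python
# IPv6 addresses are 128 bits, so prefix counts are simple powers of two.
total_48s = 2 ** 48          # number of /48 prefixes in the whole address space
total_64s = 2 ** 64          # number of /64 prefixes

world_population = 8e9       # rough 2020s estimate (assumption)
fraction_used = world_population / total_48s
print(f"a /48 per person uses {fraction_used:.5%} of all /48s")   # well under 1%

earth_surface_cm2 = 5.1e18   # ~510 million km^2 of surface, in cm^2 (assumption)
print(f"/64s per cm^2 of Earth: {total_64s / earth_surface_cm2:.1f}")  # > 1
```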


All of your temporary privacy addresses will be coming out of the same subnet, so it’s clear they all belong to the same people.
Ultimately, the privacy extensions just bring IPv6’s privacy back in line with IPv4’s. Without them, every single device gets a stable IPv6 address derived from its MAC address, whereas on most consumer IPv4 networks every device shares a single IP.


Be that as it may, the Plex official guide for setting up “remote streaming” walks you through port forwarding. That implies that when they say remote streaming, they mean port forwarding by default. I then had to go digging to find mention of the Relay service which seems to be a fallback. (Apparently it isn’t even supported by all clients)
Surely if they meant they’d start charging for Relays they’d mention that explicitly, and not use the term “remote streaming”?


It’s the confusing mess of subscriptions and seemingly locking basic functionality behind a paywall that’s skeevy, not paying for software itself. I have happily paid for software before and would again. Plex has never appealed to me though, and they’re certainly doing nothing to make themselves more appealing.


Do you have a source for this claim that the new pricing scheme only applies to the Plex Relays? As far as I can tell it applies to anything they consider “remote access”, regardless of whether it goes through their servers or not.


It seems deeply opposed to the spirit of selfhosting to have to pay for the privilege of accessing one’s own server. If the software itself cost money, that would be one thing, but this whole monetization scheme is skeevy.


It seems like multiple things are being conflated here and I’m not sure what the reality is because I’ve never used Plex.
Some people claim this has something to do with Plex needing to pay for NAT traversal infrastructure. Okay, that seems sort of silly but at least there’s the excuse that their servers are involved in the streaming somehow.
But their wording is very broad, just calling it “remote streaming.” That led me to this article on the Plex support website, which walks people through setting up port forwarding in order to enable “remote streaming”! So that excuse doesn’t really seem to hold water. What exactly is being paid for here then? How do they define what “local streaming” is?
I definitely feel the lab burnout, but I feel like Docker is kind of the solution for me… I know how Docker works, it’s pretty much set and forget, and ideally it’s totally reproducible. Docker Compose files are pretty much self-documenting.
Random GUI apps end up being waaaay harder to maintain because I have to remember “how do I get to the settings? How did I have this configured? What port was this even on? How do I back up these settings?” rather than just keeping a couple of text config files in a git repo. It’s also much easier to revert to a working version if I try to update a docker container and fail, or get tired of trying to fix it.