Do you have a firewall? Packet inspection in particular can wreak havoc on speeds.
Tell them to move to a YubiKey or similar hardware key, which is far more secure than any password policy will ever be and vastly more user friendly. The only downside is the intense shame if you manage to lose it.
The key should stick with the user and thus not be stored with the computer when not in use. The key isn’t harmless of course, but abusing it takes very deliberate targeting and advance knowledge of what it unlocks and how it can be used. It’s also easy to revoke remotely. If you’re extra special paranoid you could of course store the key locked away at a separate site, if you want nuclear-codes levels of security.
I think a VPS and moving to self-hosted NetBird would be the simplest solution for you. $5 per month gives you a range of options, and you can go even lower with things like yearly subscriptions. That way you get around the subdomain issue, you get a proper tunnel, and you can proxy whatever traffic you want into your home.
As for the control scheme for your home automation, you’ll need to come up with something that fits you, but I strongly advise against letting users into Home Assistant. You could build a simple web interface that interacts with HA via its API, though going through Node-RED is super simple if building against the API yourself seems daunting (see the sketch below).
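If you do go the API route, here’s a minimal sketch of what a backend call to HA’s REST API looks like; the URL, token, and entity_id are placeholders you’d swap for your own install:

```python
# Hedged sketch: toggling a light through Home Assistant's REST API.
# URL, token, and entity_id below are placeholders, not real values.
import requests

HA_URL = "http://homeassistant.local:8123"  # assumption: default HA port
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under your HA user profile

resp = requests.post(
    f"{HA_URL}/api/services/light/turn_on",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "light.living_room"},  # placeholder entity
    timeout=10,
)
resp.raise_for_status()
```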
If an RPi 4 is what you’ve got and that’s it, then I guess you’re kinda stuck for the time being. Home Assistant is quite lightweight if you’re not doing something crazy, so it runs well on even an RPi 3; the same goes for NAS software for home use, which also works fine on a 3. If SBCs are your style, my recommendation is to set up an alert on whatever second-hand sites operate in your area and pick up a cheap one, so you can separate things and make the setup simpler.
That’s one part of it, but the other is that there’s no proper way to ensure you won’t cause issues down the line and it makes the configuration unclean and harder to maintain.
It also makes your setup dependent on seemingly unrelated things, like the certificate for the domain, which is some completely different application’s problem but will break your Home Assistant setup all the same. That kind of dependency can be a nightmare to troubleshoot in some instances, especially when it comes to stuff like authentication. Try doing SSO towards two different applications running on different subpaths of the same domain…
I feel like I can’t grasp your use case; pretty much all your complaints seem… odd. To me at least.
First, the subdomain. I think HA is completely right here: proxying on a subpath is basically an anti-pattern that just makes things worse for you, and it’s almost always a bad idea (with very few exceptions).
As for your tunnel, I don’t know how you’ve set it up and I haven’t used Tailscale, but them only allowing one domain sounds like a very arbitrary limit. Is it something that costs money to add? I use NetBird, which I self-host on my VPS, and from there I tunnel into my much beefier home setup.
Then Docker in HAOS. The proper way to run HA, I feel, is for sure HAOS, in its own VM or on dedicated hardware. This is because you will likely need to attach additional hardware, like a USB stick providing support for more protocols such as Zigbee or Matter. HAOS really isn’t a good platform for running all your self-hosted stuff, and was never intended to be. Running Plex in HA, for instance, is just a plain bad idea, even if it can be done. As such, the need for an external drive seems strange as well; if you need to interact with storage you should set up a NAS and share it over Samba (a minimal share sketch below). All this to say: HA should be one VM/device, and your Docker environment another VM.
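For reference, a Samba share definition in smb.conf can be as small as this; the share name, path, and user are placeholders for whatever your NAS exposes:

```
# Hedged sketch of a minimal smb.conf share; adapt names to your setup
[media]
    path = /srv/media
    valid users = youruser
    read only = no
```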
As for authentication, there are 10k-plus contributors to Home Assistant yearly, but very few bother to make authentication more streamlined. I would’ve loved native OpenID/OAuth2 support, but there are ways to get it with custom components, and in the end I quite strongly feel that if the end users of your smart-home setup (i.e. the wife and kids) need to log in to Home Assistant, then you’ve probably got more work to do. Remote controls that interact with HA handle the vast majority of manual interaction, and I’ve dabbled with self-hosted voice interfaces for the more complex operations.
Sorry if this came across as a bit on the nose; that’s not my intention. I just suspect you’re making things harder for yourself and maybe have a strange idea of how to self-host in general?
Well, as someone also self-hosting email, I agree with his solutions, but the picture he paints of how bad it is feels a bit exaggerated to me. But then again I host for myself and my family; I suspect it gets a bit different when you have many users and send hundreds of mails per day.
The only one I’ve had trouble with is Microsoft; they’re the strictest, and you need to get some support from them to make it work reliably. Google has an automated service.
So no ads, sure, but then you need a commandment about paying for what you consume. Otherwise, if we all followed the commandments, we’d be out of content right quick, since you couldn’t make a living producing it.
Yeah exactly, if they have decent APIs or you can scissor out the content via iframes or something. Not really a web developer so I probably ain’t makin’ sense.
Doesn’t sound like its own “product” to be honest. I’d probably look at an alternative presentation layer that can present what’s in Jellyfin and also act as the presentation layer for the top solutions for books, comics, etc. If nothing like that exists, I think there are people who would be interested in a unified media presenter. It doesn’t even need to actually play the media, just link to it.
Yeah, I was just confused about the direction/flow he was asking for. He clarified, and his use case is fully solvable. Just not something I’ve personally dabbled in, since he wants it for non-HTTP traffic.
Well, that’s just a normal reverse proxy then. In my setup I use Caddy to send traffic through the NetBird-managed WireGuard tunnel to my home machine that runs Jellyfin, but to any outside observer it looks like it’s my VPS that is serving Jellyfin.
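The VPS side of that is roughly the Caddyfile below; the domain and tunnel IP are placeholders (8096 being Jellyfin’s default port), not my actual values:

```
# Rough sketch: VPS terminates TLS and forwards over the WireGuard
# tunnel to the home machine's tunnel IP (placeholder address)
jellyfin.example.com {
    reverse_proxy 100.100.0.2:8096
}
```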
You want to group by IP in Grafana, and you’re not dealing with HTTP traffic? Why not group on data or metadata in what is being sent, which is the common approach?
If you can fool the Internet into believing that traffic coming from the VPS has the source IP of your home machine, what stops you from assuming someone else’s IP to bypass an IP whitelist?
Also, if you expect return communication, that would have to go to your VPS, which has faked the IP of your home machine. That technique would be very powerful for creating man-in-the-middle attacks, i.e. intercepting traffic intended for someone else and manipulating it without leaving a trace.
An IP, by virtue of how the protocol works, needs to be a unique identifier for a machine. There are techniques, like CGNAT, that allow multiple machines to share an IP, but it really works (in simplified terms) like a proxy: it breaks the direct connection and limits you to specific ports. It’s also layered on top of the IP protocol and requires specific support, and either way it’s the endpoint, in your case the VPS, that will be the presenting IP.
Preserve the source IP, you say? Why?
The thing is, if you could do so (without circumventing the standards), that would imply an IP isn’t actually a unique identifier, which it needs to be. It would also mean circumventing whitelists/blacklists would be trivial (it’s not hard by any means, but it has some specific requirements).
The correct way to do this, even if there might be some hack to get the actual source IP through, is to put the source in an ‘X-Forwarded-For’ header.
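On the receiving end that’s just reading a header; a minimal sketch in Python (stdlib only, purely illustrative):

```python
# Sketch: a backend behind a reverse proxy reading the original client
# IP from X-Forwarded-For. Only trust this header if your proxy sets it.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        proxy_ip = self.client_address[0]                    # TCP peer = the proxy
        client_ip = self.headers.get("X-Forwarded-For", "")  # original source, set by proxy
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"proxy={proxy_ip} client={client_ip}\n".encode())

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```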
As for ready-made solutions, I use NetBird, which has open-source clients for Windows, Linux and Android that I use without issues; it’s perfectly self-hostable and easy to integrate with your own IdP.
No, the scenario a VM protects against is the T110’s motherboard/CPU/PSU/etc. crapping out: instead of having to restore from off-site, I can move the drives into another enclosure, map them to the VM the same way, and start it up. Instead of having to wait for new hardware, I can have the fileserver up and running again in 30 minutes, and it’s just as easy to move it onto the new server once I’ve sourced one.
And in this scenario we’re only running the fileserver on the T110, but we still virtualize it with Proxmox, because then we can easily move it to new hardware without having to rebuild or migrate anything. As long as we don’t fuck up the drive order or anything like that, we’re fine (if we do, we’re royally fucked).
Yes, but in the post they also stated what they were working with in terms of hardware. I really dislike giving the advice “buy more stuff”, because not everyone can afford to, and self-hosting often comes from a frugal place.
Still, you’re absolutely not wrong, and I see value in both our opinions being featured here; this discussion we’re having is a good thing.
Circling back to the VM thing though: even with dedicated hardware, if I were using an old server as a NAS I still would’ve virtualized it with Proxmox, if for no other reason than that it gives me mobility and an easier path to restoration if the hardware, like the motherboard, breaks.
Still, your advice to buy a used server is good and absolutely what the OP should do if they want a proper setup and have the funds.
Sure, I’m not saying it’s optimal; optimal will always be dedicated hardware and redundancy in every layer. But my point is that you gain very little for quite the investment by breaking the fileserver out to dedicated hardware. It’s not just CPU and RAM you need; it’s also SATA headers and an enclosure. Most people self-hosting have one or more SBCs, and if you have more than one SBC then yeah, the fileserver should be dedicated. The other common setup is an old gaming/office PC converted to server use, and in that case Proxmoxing the whole server and running the NAS as a VM makes the most sense, instead of buying more hardware for very little gain.
There are absolutely no issues whatsoever with passing hardware directly through to a VM. And virtualized is good, because we don’t want to “waste” a whole machine on just a file server. Sure, dedicated NAS hardware has some upsides in terms of ease of use, but you also pay an, imo, ridiculous premium for that ease. I run my OMV NAS as a VM on 2 cores and 8 GB of RAM (with four hard drives), but you can make do perfectly fine on 1 core and 2 GB of RAM if you want, as long as you don’t have too many devices attached or run too many IOPS-intensive tasks.
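For what it’s worth, on Proxmox passing a whole disk through to a VM is a one-liner; in this sketch the VM ID, bus slot, and disk ID are placeholders for your own values:

```
# Attach a physical disk to VM 100 via its stable /dev/disk/by-id path
# (placeholder serial; scsi1 assumed free on that VM)
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLESERIAL
```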
Well, the good part there is that you can build everything for internal use and then add external access and security later. While VLAN segmentation and an overall secure/zero-trust architecture are of course great, they’re very much overkill for a self-hosted environment if there isn’t an additional purpose, like learning for work, or you find it fun. The important thing really is the shell protection: that nothing gets in. All the other stuff is there to limit potential damage if someone does get in (and in the corporate world it’s not “if”, it’s “when”, because with hundreds of users you always have people being sloppy with their passwords, MFA, devices, etc.). That’s where secure architecture is important, not in the homelab.
Ah, right, read too fast it seems! That still leaves the possibility of software firewalls, though any OOTB ones wouldn’t be doing packet inspection.