Are the links correct? @anoyongbot
Run iperf internally to see if your bottleneck is the switch/AP or the firewall. I set up a J1900 pfSense box for my sister's family a while back to do QoS (gamer bois in the house) and it had no problem staying at 500 Mbps. No IDS or other extras.
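Roughly what I mean, as a sketch (assumes iperf3 on both ends and a made-up server address):

```python
# Minimal sketch: run an iperf3 test inside the LAN and report throughput.
# Assumes iperf3 is installed here and "iperf3 -s" is already running on
# 192.168.1.10 (placeholder address) on the other side of the firewall.
import json
import subprocess

result = subprocess.run(
    ["iperf3", "-c", "192.168.1.10", "-t", "10", "-J"],  # -J = JSON output
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"Throughput: {bps / 1e6:.0f} Mbit/s")
```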
Haven't built any OPNsense/pfSense boxes in a while, but I always use Intel server NICs. They used to have way better support than other cards on BSD.
Yeah, but if your house burns down, copies on different HDDs won't matter much. An offsite copy, like cloud, will.
Basically why I feel more comfortable with LXC than Docker for my home lab services. It feels more like a VM to manage.
We run a good mix of Docker, VMs and bare metal at work; no containers are auto-updated.
Stick to strong keys and keep it on port 22 for ease of use.
No - SSH is very easy to secure, while an exposed web service is very hard to secure. There's no real difference in security between key-only SSH (passwords disabled) and, for example, WireGuard.
Lolwut? Someone downvotes you for that?
Yeah - industrial computers are the way. I'd want something that can run at 60 °C and is water/dust proof. How to keep 20 TB on a floating humidifier? I'm not sure about that one, but swapping drives often is probably a good idea.
Do you ride salt or fresh water?
A reverse proxy is used to expose services that don't run on directly exposed hosts. It doesn't add security by itself, but it keeps you from multiplying attack vectors by exposing every host.
They usually provide load balancing too, also not a security feature.
Edit: in other words, what he's saying is true, and on par with "RAID isn't backup".
All reverse proxies I have used do rudimentary DDoS protection: rate limiting. Enough to keep your local script kiddie at bay, but not advanced attacks.
You can protect your SSH instance with rate limiting too, but you'll likely do that in the firewall rather than the proxy.
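For what it's worth, the rate limiting itself is usually just a token bucket. A toy Python sketch of the idea (purely illustrative - firewalls and proxies do this for you natively):

```python
import time

class TokenBucket:
    """Toy token bucket: allow at most `rate` requests/sec with bursts up to `burst`."""
    def __init__(self, rate: float, burst: int):
        self.rate = rate               # tokens added per second
        self.capacity = burst          # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # over the limit: drop or delay

# e.g. allow ~5 new connections per second from one source, bursting to 10
limiter = TokenBucket(rate=5, burst=10)
print(limiter.allow())
```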
What does your trace give? You're setting up a recursive resolver, so make sure your settings actually allow recursion.
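If you want to check it from a client, a quick dnspython sketch (the resolver IP is a placeholder) shows whether it answers recursive queries for outside names:

```python
# Quick sanity check with dnspython (pip install dnspython): point straight at
# the new resolver and ask for an external name.
import dns.resolver

r = dns.resolver.Resolver(configure=False)   # ignore /etc/resolv.conf
r.nameservers = ["192.168.1.53"]             # placeholder: your recursive resolver
answer = r.resolve("example.com", "A")
for rr in answer:
    print(rr.to_text())
```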
IMO, venturing out into the unknown with fringe hardware/software is a hobby in itself. It's my second hobby besides self-hosting. Since it's more about experimenting than stability and ease of use, it doesn't mix well with self-hosting, so I keep the two separate.
I still don't understand Broadcom's move, except for short-term profits. All the kids used to use it, and now they're on Proxmox.
I work in the public sector and we're transitioning away from VMware now, as the people we recruit know Proxmox and not VMware.
Just like Adobe lets the kids get away with pirating - because that builds a following - VMware used to give away single-seat licenses.
I don't care about internet points, and I've given up hope for Lemmy as a platform. There are too many communities compared to people, so everyone is spread too thin.
Reddit had soul back then. It was fresh, new, different. Lemmy is just a bleak copy of Reddit, missing quality content and people.
That's the main difference between Lemmy and early Reddit. Reddit had good info from knowledgeable people, and moderation. Here it seems most are 8-year-olds with zero knowledge talking shite, voting to "prove their point" - like downvoting your reply.
You can also try running Proxmox VE on top of Debian: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
If you can, separate compute and storage. Run whatever hypervisor you like - XCP-ng is also good. Any NAS will do.
Could also be Docker network config. By default Docker builds the container's DNS settings from the host's /etc/resolv.conf.
You can also supply a DNS server on the docker run command (--dns) or in your compose file if you're using Compose - rough sketch further down.
As a last resort, you can put the server name and IP in the container's /etc/hosts file if the IP is static. But that's gone once you recreate the container.
Or maybe the container you use has an environment variable for DNS.
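For the CLI/compose route, the same thing through the Docker SDK for Python looks roughly like this (image and resolver are just examples):

```python
# Rough sketch using the Docker SDK for Python (pip install docker).
# The `dns` option maps to --dns on docker run, or `dns:` in a compose file.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine:3.20",
    ["nslookup", "example.com"],
    dns=["1.1.1.1"],          # explicit resolver for this container
    remove=True,              # clean up the container afterwards
)
print(output.decode())
```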
Either create a cert group and give that group read permission on the certs, or add a deploy hook that copies the cert+key to your service's folder on renewal and changes owner/group to whatever the service needs (sketch below).
Note: the "live" folder only contains symlinks into the "archive" folder.
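A deploy-hook sketch in Python - paths, user and group are placeholders, but certbot does export RENEWED_LINEAGE to deploy hooks:

```python
#!/usr/bin/env python3
# Sketch of a certbot deploy hook (e.g. dropped into
# /etc/letsencrypt/renewal-hooks/deploy/). Certbot sets RENEWED_LINEAGE to the
# live directory of the cert that was just renewed. Destination path, user and
# group below are placeholders for your own service.
import os
import shutil

lineage = os.environ["RENEWED_LINEAGE"]   # e.g. /etc/letsencrypt/live/example.com
dest = "/opt/myservice/certs"             # hypothetical service cert folder

os.makedirs(dest, exist_ok=True)
for name in ("fullchain.pem", "privkey.pem"):
    target = os.path.join(dest, name)
    shutil.copyfile(os.path.join(lineage, name), target)   # follows the live/ symlinks
    shutil.chown(target, user="myservice", group="myservice")
    os.chmod(target, 0o640)               # key readable only by the service
```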