My setup consists of one local server that basically hosts Jellyfin and an arr stack. I only access this server locally with PC, TV and phone, however I might set up Wireguard-based remote access in the future.
Should I use a reverse proxy like Caddy so I can access the different containers with a local domain name like jellyfin.myserver.local?
I am also interested in hosting AdGuard Home, but how would this work together with Caddy? Won’t they both conflict as a DNS server?
I appreciate any possible advice on these topics.
Thank you.
I like the workflow of having a DNS record on my network for *.mydomain.com pointing to Nginx Proxy Manager, and just needing to plug in a subdomain, IP, and port whenever I spin up something new for super easy SSL. All you need is one let’s encrypt wildcard cert for your domain and you’re all set.
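For reference, on a dnsmasq-based DNS server (Pi-hole, many home routers) that wildcard record is a single line; the domain and IP below are placeholders rather than anyone’s actual setup:

```
# e.g. /etc/dnsmasq.d/99-local.conf
# Resolve mydomain.com and every *.mydomain.com subdomain to the box running Nginx Proxy Manager
address=/mydomain.com/192.168.1.10
```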
This is exactly how I have mine set up and I really like it.
I’ve got an internal and external domain with a wildcard cert so if it’s a local only service I can easily create a newservice.localurl.com, and if it’s external I can just as easily set up newservice.externalurl.com
Can you show us how you configured the internal part?
I can. I’ll report back with details tomorrow when I have time.
Subscribe
Just posted my setup
So, this took way longer than I thought it would, mostly because I needed the time to sit down and actually type this up.
Full credit, I followed the instructions in this video from Wolfgang’s Channel
Prerequisites (this is based on my setup; the API key requirement will vary based on your domain registrar/service):
- Docker & Docker Compose
- NGINX Proxy Manager running via Docker
- A registered domain to use for your LAN
- An API key from your domain registrar/service
I’m running NGINX Proxy Manager using this `docker-compose.yml`, which I got straight from the NGINX Proxy Manager website:

```yaml
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```
I’ve got my domain managed by Cloudflare (yes, I know they’re evil, what company isn’t?), so these instructions will show setup using that, but NGINX Proxy Manager supports a whole bunch of domain services for the DNS-01 challenge.
With all prerequisites in place, here are the steps:
- Log in to your NGINX Proxy Manager (you can access the service and log in at port 81 of the machine hosting it)
- In the top menu, click the SSL Certificates tab
- Click the Add SSL Certificate button
- Choose Let’s Encrypt for the certificate type
- In the Add Let’s Encrypt Certificate dialog, input the following
- Email Address for Let’s Encrypt: Any valid email address you’d like to use
- Toggle the Use a DNS Challenge option on (when you toggle this on, a new set of options will appear)
- DNS Provider: Choose yours. I chose Cloudflare
- Credentials File Content: Delete the prepopulated dummy API key and paste in your actual API key
- Propagation Seconds: I put in 120 to give it two minutes. You can try leaving it blank, but if the DNS records haven’t propagated, you may get an error (I did when I tried leaving it blank during setup).
- Toggle on the I Agree to the Let’s Encrypt Terms of Service option
- Click Save
Once you get a success message, you can start creating proxies with NGINX Proxy Manager for your internal domain. To do that you will need the IP address and port you are forwarding the domain to for your LAN service. If you are using Docker containers, you’ll need the Docker IP, which you can get from the command line with:
```
ip addr show | grep docker0
```

You should get an IP address like `172.17.0.1`. Otherwise you’ll just need the IP address of the machine you’re running the service on.
To set up a proxy redirect:
- In NGINX Proxy Manager click the Hosts tab/button and then choose Proxy Hosts.
- Towards the upper right click the Add Proxy Host button
- In the New Proxy Host dialog box, input the following:
- Domain Names: input the domain address (subdomain or TLD) you wish to use for the service, for example `homepage.abcde.com`, then press Enter to confirm the domain
- Scheme: leave set to http
- Forward Hostname/IP: Input either the host machine ip, or the docker ip
- Forward Port: Input the appropriate port for the service
- Cache Assets: Toggle on
- Block Common Exploits: Toggle on
- Websockets Support: Toggle on if the service needs websockets
- Click the SSL tab of the New Proxy Host dialog box to set up the ssl certificate
- In the SSL tab, input the following:
- Click the None under SSL Certificate and select your local domain + wildcard subdomain certificate
- Toggle on the Force SSL, HTTP/2 Support, HSTS Enabled, and HSTS Subdomains options
- Click Save
Once the save is complete you should be able to enter the new domain for your LAN services and get a secure connection.*
*Bear in mind some services require you to specify a valid domain for the service within the config/settings. Double check any services you may be running for this if you plan to use a reverse proxy with them.
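Once a proxy host is saved, a quick sanity check from another machine on the LAN might look like this (using the example domain from the steps above):

```
# With the wildcard cert in place this should return headers over HTTPS
# without a certificate warning (no need for curl -k)
curl -I https://homepage.abcde.com
```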
Haven’t forgotten. Just haven’t had time. I’ll get a write up ASAP
A reverse proxy makes setup a lot easier and more versatile, and can manage SSL certs for you.
Don’t use jellyfin.server.local
.local is reserved for mDNS, which doesn’t support more than one dot. (Though it may still sometimes work.)
In any case, to make that work you need either a DNS server on your network or something like duckdns (which supports wildcard entries).
For people wanting a very versatile setup, follow this video:
Apps that are accessed outside the network (jellyfin) are jellyfin.domain.com
Apps that are internal only (vaultwarden) or via wireguard as extra security: vaultwarden.local.domain.com
Add on Authentik to get single sign on. Apps like sonarr that don’t have good security can be put behind a proxy auth and also only accessed locally or over wireguard.
Apps that have OAuth integration (Seafile etc.) get single sign-on as well at seafile.domain.com (make this external so you can do share links with others; same for Immich etc.).
With this setup you will be super versatile and can expand to any apps you could ever want in the future.
Specifically, use home.arpa, if you must use a private domain.
Whatever floats your boat. If you don’t need it, you don’t need it. I have some services exposed to the outside on the standard port, and I need a reverse proxy to make that possible. It also does the HTTPS with Let’s Encrypt certificates. It’s a bit more comfortable managing them all in the reverse proxy. But I also have some web interfaces of other, less important software that are fine running on some IP on port 5102, and I don’t bother configuring something to change that. I don’t think there is a “should” unless you need to encrypt the traffic or expose that service to somewhere. And it’s also not wrong to do it.
It’s nice not to deal with HTTPS warnings etc. and, as you said, it’s more convenient to access by domain name rather than remembering port numbers. You should technically be able to achieve the latter another way by using Docker and configuring it to assign a real LAN IP to each service (a macvlan network, presumably), then setting each service to use port 80 externally. But that’s probably as much work as just setting up a reverse proxy.
And if you’re concerned about exposing ports, you can use DNS challenge which doesn’t require opening port 80 on your router.
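For the “real IP per service” idea above, a minimal sketch with a Docker macvlan network might look like the following; the interface name, subnet, and addresses are assumptions for illustration, not a recommendation over a reverse proxy:

```
# Create a macvlan network attached to the LAN interface (eth0 here)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan_net

# Give the container its own LAN address so its web UI can sit on the default ports
# (note: by default the Docker host itself can't reach macvlan containers directly)
docker run -d --name jellyfin --network lan_net --ip 192.168.1.50 jellyfin/jellyfin
```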
I would say: if it’s local only, no. But the moment you open it up to the web, yes. Nginx Proxy Manager is also very good for this.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| DNS | Domain Name Service/System |
| HTTP | Hypertext Transfer Protocol, the Web |
| HTTPS | HTTP over SSL |
| IP | Internet Protocol |
| LXC | Linux Containers |
| NAS | Network-Attached Storage |
| PiHole | Network-wide ad-blocker (DNS sinkhole) |
| SSL | Secure Sockets Layer, for transparent encryption |
| nginx | Popular HTTP server |
9 acronyms in this thread; the most compressed thread commented on today has 11 acronyms.
[Thread #854 for this sub, first seen 6th Jul 2024, 22:45]
Good bot.
Personal preference.
Unless something has changed, Caddy isn’t a DNS server. It’s a web server and reverse proxy. If you might expose something to the public internet, you will want it behind the reverse proxy.
If you want to access local network services (a private VPN counts) via a domain name, all you need is a DNS server and for your clients to be set up to query that DNS server. I use PiHole for this. From what I understand AdGuard may be similar to PiHole, but I’ve never looked at it.
One thing to be wary of: aside from special-use names like home.arpa, there are no reserved private network domains. Depending on how you set things up, your local network DNS queries may go out onto the public internet. It’s best to go ahead and register a domain name that you want to use so that you can control its routing if that happens. They can be had for as cheap as $11 USD each.
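With Pi-hole, those per-host records end up as plain hosts-file-style lines, added either through the web UI (Local DNS Records) or in a file like the one below (path may vary by Pi-hole version); the IPs and names are made up:

```
# /etc/pihole/custom.list (format: <IP> <hostname>)
192.168.1.10 jellyfin.home.example.com
192.168.1.10 sonarr.home.example.com
```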
If you’re not hosting any publicly available services, then no, a reverse proxy would be unnecessary. You can just set static records in your DNS server that tell it which internal hostname goes with which IP, and it will relay that info to any device on your local network that requests it. Even with a Wireguard connection, you can tell it to use the DNS server from your local network.
Reverse proxies aren’t DNS servers.
The DNS server will be configured to know that your domain, e.g., example.com or *.example.com, is a particular IP, and when someone navigates to that URL it tells them the IP, which they then send a request to.
The reverse proxy runs on that IP; it intercepts and analyzes the request. This can be as simple as transparently forwarding jellyfin.example.com to the specific IP (could even be an internal IP address on the same machine - I use Traefik to expose Docker network IPs that aren’t exposed at the host level) and port, but they can also inspect and rewrite headers and other request properties and they can have different logic depending on the various values.
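As a rough sketch of that Traefik approach (the service name, domain, entrypoint name, and port here are illustrative assumptions, and a Traefik container watching the Docker socket is assumed to already exist), labels on the service are enough, with no ports published on the host:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    # no "ports:" section; Traefik reaches the container over their shared Docker network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"
      - "traefik.http.routers.jellyfin.entrypoints=websecure"   # assumed HTTPS entrypoint name
      - "traefik.http.routers.jellyfin.tls=true"
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
```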
Your router is likely handling the .local “domain” resolution and that’s what you’ll need to be concerned with when configuring AdGuard.
It entirely depends on how you want your homelab to work. I use a reverse proxy to set up subdomains for my publicly facing services because I find it easier and cleaner to assign a subdomain to each service, and I also like having HTTPS managed at a single point of entry to the rest of the services. You’d have to decide what you want out of your homelab, and find and set up the services that yield the outcome that you want.
Somewhat related rant: I recently tried to set up a reverse proxy on a Synology NAS, my god was it convoluted.
I’m curious what made it that complicated. Was the Synology OS (DSM they call it right?) fighting you along every step or something? As far as I know it’s a custom Linux OS but I have no idea what it’s based on, or if it’s even based on a specific distribution… I could definitely see it being a challenge depending on the answers haha.
I don’t know what he is talking about; this can be easily done from the DSM UI, and you don’t even need to worry about the certs expiring as it auto-renews them.
I run Caddy; it has a few services exposed on HTTPS, and I also use it with AdGuard.
AdGuard does the DNS rewrite and Caddy does the port map for internal services, e.g.:
Proxmox:
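A Caddyfile entry for that could look roughly like this; the hostname and IP are placeholders, and the skip-verify is there because Proxmox’s web UI listens on 8006 with a self-signed certificate:

```
proxmox.mydomain.com {
    # Forward to the Proxmox host; accept its self-signed certificate
    reverse_proxy https://192.168.1.5:8006 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}
```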
I can then have all my VMs/LXCs/Docker containers, with god knows what port numbers, pointed to in Caddy.
For sure.
At some point, your services could easily warrant it. If you learn it early, it makes it much easier to organize your services and share them with others if you decide to.
Also, if you do decide to use a domain name, you may not be able to use it from inside your network. If you use AdGuard, you can use a DNS rewrite so that, when you’re on your network, the domain points directly to your server.
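For example, with a rewrite for *.example.com pointing at a hypothetical server IP of 192.168.1.10 (added under Filters > DNS rewrites in AdGuard Home), any subdomain should resolve locally:

```
# 192.168.1.20 stands in for the AdGuard Home instance's address
nslookup jellyfin.example.com 192.168.1.20
# Expected answer: 192.168.1.10, but only when resolving through AdGuard
```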
Also, personally, I use nginx, and I’m more than happy to give you advice on setup or reverse proxies in general.
I didn’t for the longest time, but now I use Traefik for this. It can automatically add services (i.e. containers) to its routing list, so the overhead is low. Since I also run OpenWrt on my router, I set up *.localhost to point to 127.0.0.1 so I don’t have to remember which ports I’m using for which service (e.g. jellyfin.localhost). You can also set up DNS entries using something like PiHole.