  • I made a typo in my original question: I was afraid of taking the services offline, not online.

    Gotcha, that makes more sense.

    If you try to run the reverse proxy on the same server and port that an existing service is already using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use, and likewise if you forward the same outbound port from your router. But IME those issues will mostly just stop the new service from starting - you’d have to stop the existing services or restart your machine for the new service to have a chance to grab the ports while they’re unused. Otherwise I can’t think of any issues.
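
    If you want to double-check before starting the proxy, something like this will show what’s already listening on the relevant ports (assuming a Linux host with ss available; adjust the ports to your setup):

    ```bash
    # List listening TCP sockets and the processes that own them,
    # filtered to ports 80 and 443. Root is needed to see process names.
    sudo ss -tlnp | grep -E ':(80|443)\s'
    ```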


  • I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and cause me various headaches that I’m not really in the headspace for at the moment.

    If you don’t configure your other services in the reverse proxy, then you have nothing to worry about. I don’t know of any proxy that auto-discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
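
    For illustration, that opt-in looks roughly like this in a compose file (service name and hostname are made up; this is a sketch, not a drop-in config):

    ```yaml
    # Traefik only routes to this container because of the explicit labels
    # and the shared network - nothing is exposed by default.
    services:
      whoami:
        image: traefik/whoami
        networks:
          - traefik                # must be the same Docker network Traefik is on
        labels:
          - "traefik.enable=true"  # the opt-in label
          - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"

    networks:
      traefik:
        external: true             # created/owned by the Traefik stack
    ```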

    Are you running this on your local network? If so, then unless you forward a port on your router to the port your reverse proxy is serving from, it’ll only be accessible from the local network. That means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port and confirming that it’s working as expected before forwarding the port.
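
    A quick way to run that test from another machine on the LAN (IP and hostname are placeholders):

    ```bash
    # Send a request straight to the server's LAN IP, overriding DNS so the
    # reverse proxy still sees the hostname it expects.
    curl --resolve jellyfin.example.com:80:192.168.1.50 http://jellyfin.example.com/
    ```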


  • I don’t know that a newer drive cloner will necessarily be faster. Personally, if I’d successfully used the one I already have and wasn’t concerned about it having been damaged (mainly due to heat or moisture) then I would use it instead. If it might be damaged or had given me issues, I’d get a new one.

    After replacing all of the drives, you’ll need to tell the NAS to use their full capacity. From reading an answer to this post, it looks like you’ll need to select “Change RAID Mode,” keep RAID 1 selected, keep the same disks, and then on the next screen move the slider to use the drives’ full capacities.


  • upper capacity

    There may be an upper limit, but on Amazon there is a 72 TB version that would have to come with at least 18 TB drives. If 18 TB is fine, 20 TB is also probably fine, but I couldn’t find any reports by people saying they’d loaded 20 TB drives into theirs without issue.

    procedure

    You could also clone them yourself (see the dd sketch after the list below), but you’d want to put the NAS into read-only mode or take it offline first.

    I think cloning drives is generally faster than rebuilding them in RAID, as well as easier on the drives, but my personal experience with RAID is very limited.

    Basically, what I’d do is:

    1. Take the NAS offline or make it read-only.
    2. Pull drive 0 from the array.
    3. Clone it.
    4. Replace drive 0 with your clone.
    5. Pull drive 2 (from the other mirrored pair) from the array.
    6. Clone it.
    7. Replace drive 2 with your clone.
    8. Clone drive 0 again, then replace drive 1 with your clone.
    9. Clone drive 2 again, then replace drive 3 with your clone.
    10. Put the NAS back online or make it read-write again.
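
    For the cloning steps themselves, if you’d rather not use a hardware dock, dd from a live Linux environment works (device names below are placeholders - verify them with lsblk first, since swapping if= and of= destroys data):

    ```bash
    # Byte-for-byte clone of the old drive (/dev/sdX) onto the new one (/dev/sdY).
    # Both names are examples - double-check with `lsblk` before running.
    sudo dd if=/dev/sdX of=/dev/sdY bs=64M status=progress conv=noerror,sync
    ```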

    In terms of timing… I have a Sabrent offline cloning hub (about $50 on Amazon), and it copies data at about 60 MB/s, meaning it’d take about 9 hours per clone. StarTech makes a similar device ($96 on Amazon) that allegedly clones data at 466 MB/s (28 GB per minute), meaning each clone would take about 2.5 hours… but people report it being just as slow as the Sabrent.

    Also, if you bought two offline cloning devices, you could do steps 2-4 and 5-7 simultaneously, and do the same again with steps 8 and 9.

    I’m not sure how long it would take RAID to rebuild a pulled drive, but my understanding is that it’s going to be fastest with RAID 1. And if you don’t want to make the NAS read-only while you clone the drives, rebuilding is probably your only option, anyway.




  • What exactly are you trusting a cert provider with and what are the security implications?

    End users trust the cert provider. The cert provider has a process that they use to determine if they can trust you.

    What attack vectors do you open yourself up to when trusting a certificate authority with your websites’ certificates?

    You’re not really trusting them with your certificates. You don’t give them your private key or anything like that, and the certs are visible to anyone navigating to your website.

    Your new vulnerabilities are basically limited to the things you do for them - any changes you make to your domain’s DNS config, anything you host for validation, etc. - and only matter if one of those changes introduces a vulnerability of its own. You also open a new phishing attack vector, where someone might contact you, posing as the certificate authority, and ask you to make a change that would introduce a vulnerability.
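
    As a concrete example of “what you do for them”: with ACME (the protocol Let’s Encrypt uses), validation means briefly serving a challenge the CA gives you. A minimal sketch with certbot (domain is a placeholder):

    ```bash
    # certbot temporarily binds port 80 and answers the CA's HTTP-01
    # challenge itself, proving you control the domain.
    sudo certbot certonly --standalone -d example.com
    ```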

    In what way could it benefit security and/or privacy to utilize a paid service?

    For most use cases, as far as I know, it doesn’t.

    Let’s Encrypt doesn’t offer EV or OV certificates, which you may need for your use case, though these are mostly relevant at the enterprise level. Maybe you have a storefront and want an EV cert?

    Let’s Encrypt also only offers community support, so if you set something up wrong, you could end up less secure.

    Other CAs may offer services that enhance privacy and security as well, like scanning your site to confirm your config is sound… but the core offering isn’t really going to be different (aside from Let’s Encrypt’s intentionally short renewal periods), and theoretically you could get those same services from a different vendor.






  • Reverse proxies aren’t DNS servers.

    The DNS server will be configured to know that your domain, e.g., example.com or *.example.com, resolves to a particular IP; when someone navigates to that domain, the DNS server hands them the IP, and they then send their request to it.

    The reverse proxy runs on that IP and intercepts and analyzes incoming requests. That can be as simple as transparently forwarding jellyfin.example.com to a specific IP and port (which could even be an internal IP address on the same machine - I use Traefik to expose Docker network IPs that aren’t exposed at the host level), but reverse proxies can also inspect and rewrite headers and other request properties, applying different logic depending on those values.
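
    A minimal sketch of that forwarding with Traefik labels in a compose file (hostname and network name are illustrative):

    ```yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        networks:
          - traefik   # internal Docker network shared with Traefik; no host ports needed
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"
          - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"  # Jellyfin's internal port

    networks:
      traefik:
        external: true
    ```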

    Your router is likely handling the .local “domain” resolution, and that’s what you’ll need to be concerned with when configuring AdGuard.


  • If you use that docker compose file, I recommend you comment out the build section and uncomment the image section in the lemmy service.

    I also recommend you use a reverse proxy and Docker networks rather than exposing the postgres instance on port 5433, but if you aren’t familiar with Docker networks, you can leave it as is for now. If you’re running locally and don’t open that port in your router’s firewall, it’s a non-issue unless there’s an attacker on your LAN. But given that you’re not gaining anything from exposing it (unless you regularly need to connect to the DB directly - as a one-off you could temporarily add the port mapping), it doesn’t make sense to increase your attack surface for no benefit.
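
    Roughly, that means dropping the ports entry on postgres and letting the containers talk over a compose network instead - a sketch, not the actual Lemmy compose file:

    ```yaml
    services:
      lemmy:
        image: dessalines/lemmy   # illustrative; pin the version you actually run
        networks:
          - lemmyinternal

      postgres:
        image: postgres:15-alpine
        # No `ports:` mapping - postgres is reachable only from containers
        # on the same Docker network, not from the host's LAN.
        networks:
          - lemmyinternal

    networks:
      lemmyinternal:
    ```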


    I haven’t personally used any of these, but looking them over, Tipi looks the most encouraging to me, followed by Yunohost - largely because of the variety of apps available, but also because Tipi looks like it lets you customize the configuration much more. FreedomBox doesn’t seem to list the apps in its catalog at all, and its site seems basically useless, so I ruled it out on that basis alone.



  • I am trying to avoid having an open port 22

    If you’re working locally, you don’t need an open port.

    If you’re on a different machine but on the same network, you don’t need to expose port 22 via your router’s firewall. If you use key-based auth and disable password-based auth, then this is even safer.
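
    Setting that up takes two commands plus one config change (username and IP are placeholders):

    ```bash
    # Generate a key pair locally, then install the public key on the server.
    ssh-keygen -t ed25519
    ssh-copy-id user@192.168.1.50

    # On the server, set `PasswordAuthentication no` in /etc/ssh/sshd_config,
    # then reload sshd (the service may be named `ssh` on Debian/Ubuntu):
    sudo systemctl reload sshd
    ```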

    If you want access remotely, then you still don’t have to expose port 22 as long as you have a VPN set up.

    That said, you don’t need to use a terminal to manage your docker containers. I use Portainer to manage all but my core containers - Traefik, Authelia, and Portainer itself - which are all part of a single docker compose file. Portainer stacks accept docker compose files so adding and configuring applications is straightforward.

    I’ve configured around 50 apps on my server using Docker Compose with Portainer but have only needed to modify the Dockerfile itself once, and that was because I was trying to do something that the original maintainer didn’t support.

    Now, if you’re satisfied with what’s available and with how much you can configure it without using Docker, then it’s fine to avoid it. I’m just trying to say that it’s pretty straightforward if you focus on understanding the important parts, mainly:

    • docker compose
    • docker networks
    • docker volumes
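
    A minimal compose file exercises all three (image and names are arbitrary examples):

    ```yaml
    services:
      app:
        image: nginx:alpine                # docker compose: declarative service definition
        networks:
          - backend                        # docker networks: control who can talk to whom
        volumes:
          - appdata:/usr/share/nginx/html  # docker volumes: data that outlives the container

    networks:
      backend:

    volumes:
      appdata:
    ```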

    If you decide to go that route, I recommend TechnoTim’s tutorials on YouTube. I personally found them helpful, at least.


  • I’m not addressing anything Gitea has specifically done here (I’m not informed enough on the topic to have an educated opinion yet), but just this specific part of your comment:

    And they also demand a CLA from contributors now, which is directly against the idea of FOSS.

    Proprietary software is antithetical to FOSS, but CLAs themselves are not, and were endorsed by RMS as far back as 2002:

    In contrast, I think it is acceptable to … release under the GPL, but sell alternative licenses permitting proprietary extensions to their code. My understanding is that all the code they release is available as free software, which means they do not develop any proprietary software; that’s why their practice is acceptable. The FSF will never do that - we believe our terms should be the same for everyone, and we want to use the GPL to give others an incentive to develop additional free software. But what they do is much better than developing proprietary software.

    If contributors allow an entity to relicense their contributions, that enables the entity to write proprietary software that includes those contributions. One way to ensure they have that freedom is to require contributors to sign a CLA that allows relicensing, so clearly CLAs can enable behavior antithetical to FOSS… but they can also enable FOSS development by generating another revenue stream. And many CLAs don’t allow relicensing (e.g., Apache’s).

    Many FOSS companies require contributors to sign CLAs. For example, the FSF has required them since 2005 at least, and its CLA allows relicensing. They explain why, but that explanation doesn’t touch on why license reassignment is necessary.

    Even if a repo requires contributors to sign a CLA, nobody’s four freedoms are violated, and nobody who modifies such software is forced to sign a CLA when they share their changes with the community - they can share their changes in their own repo, submit them to a fork that doesn’t require a CLA, or only share the code with users who purchase the software from them. All they have to do is adhere to the license the project is under.

    The big issue with CLAs is that they’re asymmetrical (as opposed to DCOs, which serve a similar purpose). That’s understandably controversial, but it’s not inherently a FOSS issue.

    Some of the same arguments against the SSPL (which is not considered FOSS because it is so copyleft that it’s impractical) could similarly be made in favor of CLAs. Not in favor of signing them as a developer, mind you, but in favor of considering projects that use them to be aligned with FOSS principles.


  • I’ve never used Radicale, but I just looked it up, and the homepage talks about enabling authentication. It also supports auth via reverse proxy headers, which is great for anyone who wants to use Authelia, KeyCloak, or another similar solution. By contrast, as far as I can tell, Baikal doesn’t support reverse proxy auth, though it does seem to let you set up auth through the web interface.
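
    If I’m reading Radicale’s docs right, the reverse-proxy variant is configured something like this (a sketch - check the current docs, and make sure Radicale is only reachable through the proxy before trusting the header):

    ```ini
    [auth]
    # Take the username from the X-Remote-User header set by the reverse proxy.
    type = http_x_remote_user
    ```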