• pulsewidth@lemmy.world · 16 hours ago

    It’s not as egregious as you think. The ‘Everyone’ group means every Synology user account - not everyone on the network who can talk to the NAS; they’d still need both a Synology account and shared-folder permissions. Any Synology user trying to access those files would still need read and write access to the share (e.g. via SMB/CIFS in a file explorer, app-level access through Synology File Manager, or granted SSH access via a terminal) to actually read, write, or modify them.

    I know it’s a bit confusing, but it’s correct. Docker often causes confusion with file permissions. There are file-level permissions (what this article covers) and share-level permissions, and you need both to access folders and files via mapped drives/SMB.

    This setting just ensures that Docker containers - which can run under a variety of user names, depending on how you configure Docker and the container - don’t have trouble accessing files you expect them to reach. As Synology says, the default Docker folder permission gives the ‘everyone’ group read-only access. That should let most Docker container configs at least run; if you then run into issues writing or modifying files, that’s a clue you’ve missed some file-permission configuration, and the only reason the container runs at all is that the default ‘everyone’ permission is saving your butt.
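    To make that concrete, here’s a minimal, hypothetical compose sketch (the service name, image, UID/GID, and paths are all invented for illustration). The `user:` line decides which account the container’s file access is checked against, so the mounted folder needs file-level permissions for that user - otherwise the container is limping along on the default ‘everyone’ read-only permission:

    ```yaml
    services:
      myapp:                         # hypothetical service name
        image: example/myapp:latest  # placeholder image
        # Processes in the container run as this UID:GID; file-level
        # permissions on the mounted folder are checked against it.
        user: "1028:100"             # example UID:GID of a dedicated NAS user
        volumes:
          # This host path needs file-level read/write for 1028:100.
          # Share-level permissions only gate SMB / File Station access.
          - /volume1/docker/myapp:/config
    ```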

    • anamethatisnt@sopuli.xyz · edited · 14 hours ago

      The main thing I see you avoiding by locking the Docker images down to a separate low-permission user - one that can only access what it really needs - is the case where someone successfully attacks a project and you get infected with some shit when your Synology pulls image:latest.
      As an example, it could limit the traversal of ransomware that successfully breaks out of the container but ends up with no permissions outside it (see the sketch below).
      Even with that user separation, though, I would probably purge the whole NAS and set up again from my backup, for my own peace of mind.

      edit: updated “low user” to “low permission user” - amazing how the brain can fill in words for you when reading your own text.
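      As a hypothetical illustration of that lockdown (nothing here is from the thread - the names, IDs, and paths are invented), a compose file can pin the container to a low-permission user and strip everything it doesn’t need, so an escape lands in an account with almost nothing to touch:

      ```yaml
      services:
        fetcher:                          # hypothetical service
          image: example/fetcher:latest   # placeholder image
          user: "1030:100"                # dedicated low-permission user
          read_only: true                 # root filesystem is immutable
          cap_drop: [ALL]                 # drop all Linux capabilities
          security_opt:
            - no-new-privileges:true      # block setuid privilege escalation
          volumes:
            - /volume1/docker/fetcher/data:/data          # only writable path
            - /volume1/docker/fetcher/config:/config:ro   # config read-only
      ```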

      • The Stoned Hacker@lemmy.world · 52 minutes ago

        I do this for my containers. I have a completely domain-managed network, so my docker/podman host mounts an NFS share that contains all the data volumes for my services. Each one has read permissions only for the service account that runs it (and has nogroup). Each OCI container mounts its data volume(s) from its respective directory, as well as a Kerberos user TGT and credentials cache. Each OCI container runs as the service account, which uses the Kerberized credentials to access the mounted data volumes (this is necessary), and thus I achieve separation.

        Even if a threat actor were to compromise a service, they would still be locked down to that service account and only able to access/modify the data of that service. That would still be very bad for services like Keycloak, but for other trivial services it almost guarantees more than adequate segregation. This does fall apart a little with the recent copyfail and dirtyfrag exploits, which allow for easy privilege escalation, but I don’t disable root squash, so the data volumes on the NFS share are still service_account:nogroup even when accessed as root. An attacker could still go through and use the TGTs that are stored for each service account to access the data, but at that point I’m dealing with a dedicated threat actor. Defending against someone explicitly seeking to compromise me is a different situation altogether, and it still requires initial access through a vulnerable application sitting behind an SSL termination proxy and an NGFW with IPS capabilities.
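        This isn’t the commenter’s actual config, but the per-service pattern could look roughly like this compose sketch. The image, IDs, and paths are assumptions; KRB5CCNAME is the standard variable for pointing Kerberos libraries at a credentials cache:

        ```yaml
        services:
          app:                                # hypothetical service
            image: example/app:latest         # placeholder image
            user: "2001:65534"                # service account UID, nogroup GID
            environment:
              KRB5CCNAME: FILE:/krb5/ccache   # use the mounted credentials cache
            volumes:
              # Per-service directory on the NFS share (host mounts it with sec=krb5).
              - /mnt/nfs/app-data:/data
              # Pre-obtained TGT / credentials cache for the service account.
              - /etc/krb5/app.ccache:/krb5/ccache:ro
        ```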

  • irate944@piefed.social · 12 hours ago

    I think it was just an example (a poor one, true).

    In my case, I just need to give my admin account access for them to work - not the default admin account, to be clear.

  • anamethatisnt@sopuli.xyz · 16 hours ago

    I mean, unless specified otherwise, most containers managed through Synology’s Container Manager will run as root. With that said, if you want to secure things, there are guides.

    An alternative path would be to set up a specific Docker user and use docker compose to run images as that user (see the sketch after these links):
    https://drfrankenstein.co.uk/step-2-setting-up-a-restricted-docker-user-and-obtaining-ids/

    Jellyfin example
    https://drfrankenstein.co.uk/jellyfin-in-container-manager-on-a-synology-nas-hardware-transcoding/

    From there you could go further and use the guides above to create one user per docker image, giving each different permissions depending on need.
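    As a sketch of what that looks like in practice (the IDs below are placeholders - substitute whatever `id` reports for the restricted user you created with the first guide), the compose file pins the container to that user:

    ```yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        # Run as the restricted Docker user from the first guide;
        # replace the placeholder IDs with the output of `id <your-docker-user>`.
        user: "1027:100"
        volumes:
          - /volume1/docker/jellyfin/config:/config
          - /volume1/media:/media:ro    # media mounted read-only
    ```

    Note that images built on the linuxserver.io base take PUID/PGID environment variables instead of the `user:` key, which is the style the guides above use.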

  • Decronym@lemmy.decronym.xyz [bot] · edited · 42 minutes ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters | More Letters
    --- | ---
    NAS | Network-Attached Storage
    NFS | Network File System, a Unix-based file-sharing protocol known for performance and efficiency
    SMB | Server Message Block protocol for file and printer sharing; Windows-native
    SSH | Secure Shell for remote terminal access
    SSL | Secure Sockets Layer, for transparent encryption

    5 acronyms in this thread; the most compressed thread commented on today has 22 acronyms.

    [Thread #296 for this comm, first seen 16th May 2026, 07:50]

  • non_burglar@lemmy.world · 16 hours ago

    That seems to be what Synology is suggesting, and you’re right, this wouldn’t be the best configuration if security is the goal.