  • fmstrat@lemmy.nowsci.com to Selfhosted@lemmy.world · Docker or podman?

    Agreed. Honestly, I use Docker like snap these days. Need a specific version of Node?

    alias node='docker run --rm -ti -v "${PWD}:${PWD}" -w "${PWD}" node:16-alpine'
    
    alias npm='docker run --rm -ti -v "${PWD}:${PWD}" -w "${PWD}" node:16-alpine npm'
    
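    For example, with the aliases loaded, commands run in the current directory execute inside a throwaway container (the project path below is just for illustration):

    cd ~/projects/my-app   # hypothetical project directory
    npm install            # runs inside a disposable node:16-alpine container
    node --version         # reports the container's Node version (16.x)
    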

    I do this with pretty much every CLI tool that isn’t trivial to install.








  • Good suggestions at the bottom.

    There are several indicators that could have been used to discover the attack from day 1:

    • All issued SSL/TLS certificates are subject to Certificate Transparency. It is worth configuring Certificate Transparency monitoring, such as Cert Spotter (source on GitHub), which will notify you by email of new certificates issued for your domain names.

    • Limit validation methods and pin the exact ACME account that is allowed to issue certificates, using Certification Authority Authorization (CAA) Record Extensions for Account URI and Automatic Certificate Management Environment (ACME) Method Binding (RFC 8657). This prevents certificates for your domain from being issued through other certificate authorities, ACME accounts, or validation methods (see the example record below).
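
    As a sketch, a CAA record pinning issuance to a single ACME account and the DNS-01 method could look like the following. The account ID here is made up, and the exact accounturi format depends on your CA (this one follows Let’s Encrypt’s pattern):

    ; hypothetical ACME account ID, replace with your own account URI
    example.com.  IN  CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1234567890; validationmethods=dns-01"
    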





  • I’ll try to answer the specific question here about importing data and sandboxing. You wouldn’t have to sandbox, but it’s a good idea. If we think of a Docker container as an “encapsulated version of the host”, then let’s say you have:

    • Service A running on your cloud
      • Requires apt-get install -y this that and the other to run
      • Uses data in /data/my-stuff
    • Service B running on your cloud
      • Requires apt-get install -y other stuff to run
      • Uses data in /data/my-other-stuff

    In the cloud, Service A’s data can be accessed by Service B, which increases the attack surface for a leak. In Docker, you could move all your data from the cloud to your server:

    # On the cloud host
    cd /
    tar cvfz data.tgz data
    # Copy data.tgz to the local server (e.g. with scp) into /tmp
    # On the local server
    mkdir -p /local/server
    cd /local/server
    tar xvfz /tmp/data.tgz
    # Now you have /local/server/data as a copy
    

    Your Dockerfile for Service A would be something like:

    FROM ubuntu
    RUN apt-get update && apt-get install -y this that and the other
    RUN whatever to install Service A
    CMD whatever to run
    

    Your Dockerfile for Service B would be something like:

    FROM ubuntu
    RUN apt-get update && apt-get install -y other stuff
    RUN whatever to install Service B
    CMD whatever to run
    
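    The compose files below reference those images by name, so you would build them first. Assuming each Dockerfile sits in its own directory (the paths here are hypothetical):

    docker build -t service-a ./service-a
    docker build -t service-b ./service-b
    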

    This makes two unique “systems”. Now, in your docker-compose.yml, you could have:

    version: '3.8'
    
    services:
      
      service-a:
        image: service-a
        volumes:
          - /local/server/data:/data
    
      service-b:
        image: service-b
        volumes:
          - /local/server/data:/data
    

    This would make everything look just like the cloud, since /local/server/data would be bind-mounted to /data in both containers (services). The proper way would be to isolate each service’s data:

    version: '3.8'
    
    services:
      
      service-a:
        image: service-a
        volumes:
          - /local/server/data/my-stuff:/data/my-stuff
    
      service-b:
        image: service-b
        volumes:
          - /local/server/data/my-other-stuff:/data/my-other-stuff
    

    This way each service only has access to the data it needs.
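
    As a quick sanity check (assuming the isolated compose file above is in the current directory and both services stay running; older installs use docker-compose instead of docker compose):

    docker compose up -d
    docker compose exec service-a ls /data   # should only show my-stuff
    docker compose exec service-b ls /data   # should only show my-other-stuff
    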

    I hand typed this, so forgive any errors, but hope it helps.