What I have:

  • two Docker containers (nginx, php) and a shared directory /var/www

    version: '2'
    services:
      web:
        image: nginx
        ports:
          - "80:80"
        links:
          - app
        volumes:
          - /var/www:/var/www/html
      app:
        image: php:7-fpm
        volumes:
          - /var/www:/var/www/html
  • I add projects locally to /var/www, owned by myuser:myuser with ug+rwx rights (see the sketch just below)
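  A minimal sketch of that host-side setup, using the path and the myuser account from the question:

     # run on the host, before the directory is mounted into the containers
     sudo chown -R myuser:myuser /var/www
     sudo chmod -R ug+rwx /var/www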

Problem:

  • each container uses its own internal user (nginx, www-data)
  • everything else inside the container runs as root
  • constant "permission denied" problems

What I need:

Get rid of the problems above without reconfiguring every container to run as a single user (root).

Additional:

  • Without Docker, with a large zoo of services, I solved this problem with the SUID bit and sometimes by adding users to a shared group. With Docker this method "does not work", since a container knows nothing about the host's users and groups
  • There is an idea that Docker can run containers (services) as a user of my choosing, so that, without digging into the containers' internals, they would edit files (volumes) as a user of the host machine (see the sketch after this list)
  • At the moment the problem is not solved in a good way; see the official discussion on GitHub
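  A minimal sketch of the "run as my user" idea: docker run has a --user flag that sets the UID:GID the container's main process runs as, and Compose version 2 files support the same thing through a per-service user: key. Assuming the host account's IDs should own the files:

     # run php-fpm under the invoking host user's UID:GID, so files it
     # creates on the bind-mounted volume belong to that host user
     # (php-fpm may log warnings about being unable to switch pool users
     # when it is not started as root)
     docker run --user "$(id -u):$(id -g)" \
       -v /var/www:/var/www/html \
       php:7-fpm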

    2 Answers

    each container uses its own internal user (nginx, www-data)

    You do not need that inside a container. With one process per container you simply do not need extra users: there are no resources to segregate, and you are not afraid of a user getting into system files, because a broken container is recreated in a second.

    There is a very large philosophy behind containers, one facet of which is well captured by the concept of a twelve-factor application. When an application is written correctly, it does not use the file system at all: logs go to stdout or a log collector, files are streamed to file storage, data goes to a database. If you have problems with access rights, then for some reason you are writing something to disk, and you are doing it entirely in vain. Granted, moving to a twelve-factor application requires considerable effort, but the point stands: you simply do not need to write anything to disk, have several users, or run several processes.
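    As a concrete illustration of the logs-to-stdout point: the official nginx image already symlinks its log files to the container's standard streams, so logs are read with docker logs rather than written to a mounted disk:

     # the official nginx image's Dockerfile does roughly this:
     #   ln -sf /dev/stdout /var/log/nginx/access.log
     #   ln -sf /dev/stderr /var/log/nginx/error.log
     docker logs web   # substitute the actual container name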

    • Everything is fine on my side: the nginx service is given a directory to process, and it processes it. What I don't like is that the service performs unnecessary actions on that directory, changing file properties. The question is really about the manager (the docker program): how do I make containers work with a volume without changing file properties, so that a container (service) processes incoming data as the user who uses the container? - duhon

    You can use chmod.

    chmod is a program for changing the permissions of files and directories.

    The chmod command has the following syntax:

     chmod [options] mode[,mode] file1 [file2 ...] 

    For example, here are a few common numeric modes:

    744 (-rwxr--r--): everyone can read; the owner can also edit and run

    755 (-rwxr-xr-x): everyone can read and run; the owner can also edit

    777 (-rwxrwxrwx): everyone can read, edit, and run
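    A quick usage sketch with the directory from the question:

     # numeric form: owner rwx, group and others r-x, recursively
     chmod -R 755 /var/www
     # symbolic form: add group write without touching the other bits
     chmod -R g+w /var/www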

    • And how do I use this command in the case I described? :) Any container can create files with different rights on behalf of another user; I can't run chmod -R 777 every minute. - duhon
    • Then the problem has to be solved differently. Since the two containers need to use each other's files, a good solution would be a third container that the first two access through some kind of API; that way you won't be tilting at windmills. Alternatively, you can use Samba, FTP, or WebDAV (a rough sketch follows below). - Alex
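    A rough sketch of that suggestion; the image name some/webdav-server is a placeholder for whatever file-serving container you choose, not a specific recommendation:

     # keep the files in a named volume owned by a third container
     docker volume create www-data
     # hypothetical container exposing the volume over WebDAV
     docker run -d --name files -v www-data:/srv/files some/webdav-server
     # nginx and php would then reach the files over the network
     # (WebDAV, FTP, or Samba) instead of sharing a host directory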