We implemented deployment through GitLab CI and docker-compose. During a deploy, the docker-compose.yml file, Dockerfiles, etc. are copied to the remote server to build the "auxiliary" containers such as nginx and mysql.

Everything works as it should. Two things bother me: downtime and "garbage" docker images (the ones shown with <none> in the TAG column of docker images output).

Here is the part of the .gitlab-ci.yml file responsible for deploying to the remote server:

    .template-secure-copy: &secure-copy
      stage: deploy
      image: covex/alpine-git:1.0
      before_script:
        - eval $(ssh-agent -s)
        - ssh-add <(echo "$SSH_PRIVATE_KEY")
      script:
        - ssh -p 22 $DEPLOY_USER@$DEPLOY_HOST 'set -e ; rm -rf '"$DEPLOY_DIRECTORY"'_tmp ; mkdir -p '"$DEPLOY_DIRECTORY"'_tmp'
        - scp -P 22 -r build/* ''"$DEPLOY_USER"'@'"$DEPLOY_HOST"':'"$DEPLOY_DIRECTORY"'_tmp' # */ <-- in the original this line is not commented out =)
        - ssh -p 22 $DEPLOY_USER@$DEPLOY_HOST 'set -e ; cd '"$DEPLOY_DIRECTORY"'_tmp ; docker login -u gitlab-ci-token -p '"$CI_JOB_TOKEN"' '"$CI_REGISTRY"' ; docker-compose pull ; if [ -d '"$DEPLOY_DIRECTORY"' ]; then cd '"$DEPLOY_DIRECTORY"' && docker-compose down --rmi local && rm -rf '"$DEPLOY_DIRECTORY"'; fi ; cp -r '"$DEPLOY_DIRECTORY"'_tmp '"$DEPLOY_DIRECTORY"' ; cd '"$DEPLOY_DIRECTORY"' ; docker-compose up -d --remove-orphans ; docker-compose exec -T php phing app-deploy -Dsymfony.env=prod ; rm -rf '"$DEPLOY_DIRECTORY"'_tmp'
      tags:
        - executor-docker

Downtime is now 1-3 minutes: it begins at docker-compose down ... and lasts until the end of the script. I want to reduce it.

And I could not figure out at all how to keep the "junk" images from appearing in the first place. I know about docker image prune, but I would rather not litter than clean up afterwards.
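For reference, the <none> entries are "dangling" image layers left behind when a tag is reassigned to a freshly built or pulled image. They can at least be removed non-interactively, for example as the last step of the deploy script; a minimal sketch using standard docker CLI commands:

    # Remove only dangling images (the ones shown as <none> in `docker images`);
    # tagged images and images in use by containers are left alone.
    docker image prune --force

    # Equivalent on older engines that lack `docker image prune`:
    docker rmi $(docker images --quiet --filter "dangling=true")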

UPD1:

The docker-compose.yml is generated by the following job:

    .template-docker-compose: &docker-compose
      stage: build
      image: covex/docker-compose:1.0
      script:
        - for name in `env | awk -F= '{if($1 ~ /'"$ENV_SUFFIX"'$/) print $1}'`; do eval 'export '`echo $name|awk -F''"$ENV_SUFFIX"'$' '{print $1}'`'='$"$name"''; done
        - mkdir build
        - docker-compose -f docker-compose-deploy.yml config > build/docker-compose.yml
        - sed -i 's/\/builds\/'"$CI_PROJECT_NAMESPACE"'\/'"$CI_PROJECT_NAME"'/\./g' build/docker-compose.yml
        - cp -R docker build
      artifacts:
        untracked: true
        name: "$CI_COMMIT_REF_NAME"
        paths:
          - build/
      tags:
        - executor-docker
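The first script line is cryptic: it re-exports every environment variable whose name ends in $ENV_SUFFIX under the same name without the suffix, so that docker-compose config can substitute it into docker-compose-deploy.yml. A cleaner equivalent sketch (the variable server_name_PROD is illustrative, not taken from the project):

    # Suppose ENV_SUFFIX=_PROD and GitLab provides server_name_PROD=project-dev1.ru
    for name in $(env | awk -F= '{ if ($1 ~ /'"$ENV_SUFFIX"'$/) print $1 }'); do
      short=${name%$ENV_SUFFIX}            # server_name_PROD -> server_name
      eval "export $short=\"\$$name\""     # export server_name="$server_name_PROD"
    done
    # Now `docker-compose -f docker-compose-deploy.yml config` can substitute
    # ${server_name} and similar placeholders.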

As a result of this procedure, we get the following docker-compose.yml:

    networks:
      nw_external:
        external:
          name: graynetwork
      nw_internal: {}
    services:
      mysql:
        build:
          context: ./docker/mysql
        environment:
          MYSQL_DATABASE: project
          MYSQL_PASSWORD: project
          MYSQL_ROOT_PASSWORD: root
          MYSQL_USER: project
        expose:
          - '3306'
        networks:
          nw_internal: null
        restart: always
        volumes:
          - database:/var/lib/mysql:rw
      nginx:
        build:
          args:
            app_php: app
            server_name: project-dev1.ru
          context: ./docker/nginx
        depends_on:
          php:
            condition: service_started
        networks:
          nw_external:
            ipv4_address: 192.168.10.13
          nw_internal: null
        ports:
          - 80/tcp
        restart: always
        volumes_from:
          - service:php:ro
      php:
        depends_on:
          mysql:
            condition: service_healthy
        environment:
          ENV_database_host: mysql
          ENV_database_name: project
          ENV_database_password: project
          ENV_database_port: '3306'
          ENV_database_user: project
          ENV_mailer_from: andrey@mindubaev.ru
          ENV_mailer_host: 127.0.0.1
          ENV_mailer_password: 'null'
          ENV_mailer_transport: smtp
          ENV_mailer_user: 'null'
          ENV_secret: ThisTokenIsNotSoSecretChangeIt
        expose:
          - '9000'
        image: gitlab.site.ru:5005/dev1-projects/symfony:master
        networks:
          nw_internal: null
        restart: always
        volumes:
          - /composer/vendor
          - /srv
    version: '2.1'
    volumes:
      database: {}

Dockerfile for the nginx service:

    FROM nginx:alpine

    ARG server_name=docker.local
    ARG app_php=app_dev

    COPY ./default.conf /etc/nginx/conf.d/default.conf

    RUN sed -i 's/@SERVER_NAME@/'"$server_name"'/g' /etc/nginx/conf.d/default.conf \
     && sed -i 's/@APP@/'"$app_php"'/g' /etc/nginx/conf.d/default.conf

Dockerfile for the mysql service:

    FROM mysql:5.7

    HEALTHCHECK CMD mysqladmin ping --silent
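As a side note, the same check could be declared directly in docker-compose.yml instead of building a custom mysql image - compose file format 2.1 supports a healthcheck key. A sketch (the interval/timeout/retries values are arbitrary):

    mysql:
      image: mysql:5.7
      healthcheck:
        test: ["CMD", "mysqladmin", "ping", "--silent"]
        interval: 10s
        timeout: 5s
        retries: 5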
  • Do you use this for production deployment? - Mikhail Vaysman
  • @MikhailVaysman, well, actually it has not reached production yet, because it is not really finished. Once it is done, that will be the first time. - Andrei Mindubayev
  • @MikhailVaysman but it turned out that there is no difference between a production and a test site - deployment happens the same way. - Andrei Mindubayev
  • I like the question, but I do not know the answer. If there is no good answer within a day, write to me and I will put a bounty on the question. - Nick Volynkin
  • As promised, the bounty is open. - Nick Volynkin

3 answers

You need at least two containers for zero downtime. The process is then (well described here: github.com/vincetse/docker-compose-zero-downtime-deployment): stop one old container, start one new one, and so on.

But I went through all of this literally a couple of weeks ago, and docker swarm is VERY SIMPLE: it takes a couple of commands, while the recipe at the link is several times more complicated. For your solution you would have to set up a balancer yourself, whereas in docker swarm everything is already in place - I'll say it again, VERY SIMPLE.

Go straight to docker swarm. It is configured very quickly and gives zero downtime out of the box.

docker service by itself is a great tool, and docker stack is simply a bomb (it brings up all the services from a docker-compose-style file in one go).
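For reference, the "couple of commands" amount to roughly the following sketch (the stack name myapp is illustrative, and docker stack deploy requires a version '3' compose file):

    # One-time: turn the host into a single-node swarm
    docker swarm init

    # Deploy (or update) the whole stack from a compose file
    docker stack deploy -c docker-compose.yml myapp

    # Rolling update of a single service to a new image
    docker service update --image gitlab.site.ru:5005/dev1-projects/symfony:master myapp_php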

  • Unfortunately, we will not be able to switch to docker swarm, and we will not be able to use docker stack either - organizationally it is too complicated. First let's move to plain docker. - Andrei Mindubayev
  • @alexes this already looks like a good answer. Maybe write a short instruction right here? Links are unreliable, especially since the instruction is in English - it could be translated. - Nick Volynkin
  • There is an official guide: docs.docker.com/get-started. You do not need to "switch to" swarm - it now ships out of the box with docker-ce. - vp_arth
  • @NickVolynkin I have posted my solution. I needed time to test and polish everything. Plus, I still hope there will be an answer about docker swarm =) Although there would probably still be downtime for updating the database structure. - Andrei Mindubaev

Following @alexes's advice, I significantly reduced the downtime when rolling out changes to the remote server! Downtime now lasts around 5 seconds. More precisely, 5 seconds is how long it takes to restart the nginx container and check for updates to the nginx and mysql images. And since this operation is by and large optional, the downtime can be avoided altogether.

At the moment, only the docker-compose.yml file is copied to the remote server; every service in this file uses a ready-made image (image:) instead of build:. These images are prepared before the application is deployed to the server.

Instead of one service with the PHP code, two absolutely identical ones are now used: php and spare.

Also, preparing the Symfony application cache (cache:warmup) and the static files of the bundles (assets:install) is now done while the application image is being built, as sketched below.
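A sketch of what the image build stage might run (assuming a Symfony 3-style bin/console layout; the base image and paths are illustrative, not taken from the project):

    FROM php:7.1-fpm
    COPY . /srv
    WORKDIR /srv
    # Bake the production cache and bundle assets into the image,
    # so containers start ready to serve instead of warming up at runtime
    RUN php bin/console cache:warmup --env=prod --no-debug \
     && php bin/console assets:install web --env=prod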

As a result, the deployment procedure now looks like this (the full command sequence is shown right after the list):

  1. Pull the new images for the entire application.
  2. Update and restart the spare container.
  3. Update the static files.
  4. Apply the changes to the database structure.
  5. Update and restart the php container.
  6. Restart the nginx container.
    # 1. Pull the new images (and start any missing containers on first run)
    docker-compose pull
    docker-compose up -d --no-recreate
    # 2. Update and restart the spare container
    docker-compose up -d --force-recreate --no-deps spare
    # 3. Update the static files
    docker-compose exec -T spare sh -c "cd /srv && rm -rf b/* && cp -a web/. b/ && rm -rf a/* && cp -a web/. a/"
    # 4. Apply the changes to the database structure
    docker-compose exec -T spare phing storage-prepare database-deploy
    # 5. Update and restart the php container
    docker-compose up -d --force-recreate --no-deps php
    # 6. Restart the nginx container
    docker-compose stop nginx
    docker-compose up -d nginx

This procedure can be run both for initialization and for updating the application.

The nginx configuration is shown below. Both containers are included in the upstream; when one of them becomes unavailable during an update, nginx itself temporarily excludes it from the upstream. The construct try_files /a$uri /b$uri /web$uri /app.php$is_args$args; makes it possible to serve the correct static files during the deployment.

    upstream backend {
        server php:9000 fail_timeout=5s;
        server spare:9000 fail_timeout=5s;
    }

    server {
        listen 80 default_server;
        root /srv/web;
        server_name site.ru;
        charset utf-8;

        location / {
            root /srv;
            try_files /a$uri /b$uri /web$uri /app.php$is_args$args;
        }

        location ~ ^/app\.php(/|$) {
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_pass backend;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /srv/web$fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT /srv/web;
            internal;
        }

        location /upload/ {
            root /srv/storage;
        }

        location ~ \.php$ {
            return 404;
        }

        sendfile off;
        client_max_body_size 100m;
        error_log /var/log/nginx/error.log error;
    }

New docker-compose.yml

    networks:
      nw_external:
        external:
          name: graynetwork
      nw_internal: {}
    services:
      mysql:
        environment:
          MYSQL_DATABASE: project
          MYSQL_PASSWORD: project
          MYSQL_ROOT_PASSWORD: root
          MYSQL_USER: project
        expose:
          - '3306'
        image: covex/mysql:5.7
        networks:
          nw_internal: null
        restart: always
        volumes:
          - database:/var/lib/mysql:rw
      nginx:
        depends_on:
          mysql:
            condition: service_healthy
        image: gitlab.site.ru:5005/dev1-projects/symfony-workflow2/nginx:master
        networks:
          nw_external:
            ipv4_address: 192.168.10.13
          nw_internal: null
        ports:
          - 80/tcp
        restart: always
        volumes:
          - assets:/srv/a:ro
          - assets:/srv/b:ro
          - assets:/srv/storage:ro
      php:
        environment:
          ENV_database_host: mysql
          ENV_database_mysql_version: '5.7'
          ENV_database_name: project
          ENV_database_password: project
          ENV_database_port: '3306'
          ENV_database_user: project
          ENV_mailer_from: andrey@mindubaev.ru
          ENV_mailer_host: 127.0.0.1
          ENV_mailer_password: 'null'
          ENV_mailer_transport: smtp
          ENV_mailer_user: 'null'
          ENV_secret: ThisTokenIsNotSoSecretChangeIt
        image: gitlab.site.ru:5005/dev1-projects/symfony-workflow2:master
        networks:
          nw_internal: null
        restart: always
        volumes:
          - assets:/srv/a:rw
          - assets:/srv/b:rw
          - assets:/srv/storage:rw
      spare:
        environment:
          ENV_database_host: mysql
          ENV_database_mysql_version: '5.7'
          ENV_database_name: project
          ENV_database_password: project
          ENV_database_port: '3306'
          ENV_database_user: project
          ENV_mailer_from: andrey@mindubaev.ru
          ENV_mailer_host: 127.0.0.1
          ENV_mailer_password: 'null'
          ENV_mailer_transport: smtp
          ENV_mailer_user: 'null'
          ENV_secret: ThisTokenIsNotSoSecretChangeIt
        image: gitlab.site.ru:5005/dev1-projects/symfony-workflow2:master
        networks:
          nw_internal: null
        restart: always
        volumes:
          - assets:/srv/a:rw
          - assets:/srv/b:rw
          - assets:/srv/storage:rw
    version: '2.1'
    volumes:
      assets: {}
      database: {}
  • Do not change the nginx config on the fly. Make an upstream with several servers on different ports and set a small timeout. You should always have at least one live php container able to answer nginx while the others are being updated. Name the containers php1, php2, php3 and update them one by one... the first one stops and gets updated, and meanwhile nginx notices that something is wrong with it and temporarily throws it out of the upstream. - alexes
  • If you can run 2 containers without straining yourself, always do it, plus a balancer on top. Containers sometimes crash and take a minute or so to restart... you do not need that. - alexes
  • @alexes I tried putting two server entries in the upstream and marking the "spare" server as backup, but at some point nginx began refusing to work and kept reloading because the php container did not exist (I cannot give details - it was late at night =). Plus there is the problem that the updated container can only work with the updated database structure, and the old one only with the old. So I can neither keep the old container on the new database structure nor launch the updated container on the old one. Plus nginx and mysql need to be updated too. - Andrey Mindubaev
  • Why update nginx and mysql? Their versions change maybe once a year, that's it. No need to mark the second php as backup. Run two php containers permanently on different ports and update them one by one. - alexes
  • It is very rare for an old container to break because of migrations. And if it does break in some extremely rare situation, that is not scary... - alexes

I use this command, with no downtime:

 docker-compose up -d --no-deps --build <service> 

Example:

 docker-compose up -d --no-deps --build nginx