There is a web site built on Django. On the production server I run it with a uWSGI / Nginx bundle; for local development I use virtualenv and Django's dev server.

I have already asked a few questions on SO:

  1. Distributing Docker images

  2. Running Docker images on the production server

  3. Launching a site from a Docker image vs. launching with traditional tools (uWSGI, Nginx, Apache)

Now a couple more have come up.

What to include in the Docker image?

We have a production version and a development version. As I understand it, what gets distributed through repositories is the production version.

Does it make sense to create a docker image for the development version (in my case, the project code plus the virtualenv environment)? Or does a developer only need the repository code to start development?

Where should the docker image of the production version be built?

On the production server the project runs on the bundle mentioned above - uwsgi / nginx / packer with production settings. Do I have to build the image on the production server?

  • 2
    There should be no difference between production and non-production. That's the whole point. Or rather, there is no such separation at all; the only distinction is the software version. - Mikhail Vaysman
  • 1
    The same goes for testing. Your testers will be very happy to test exactly the version that will go to production. - Nick Volynkin
  • 2
    To avoid mistakes, the image is built by a robot. It takes data from strictly defined places and builds the image according to a strictly defined scenario. It is advisable to pin the versions of external dependencies too. - Mikhail Vaysman
  • 1
    @while1pass you can disable serving static files in Django and put Nginx in front of Django to serve the statics. - Mikhail Vaysman
  • 1
    For inspiration, you can watch a demonstration of how docker image builds are automated in GitLab: youtube.com/watch?v=m0nYHPue5RU - Nick Volynkin

1 answer

There should be no concept of a production and a non-production version of the image. Developers, testers, operators and everyone else should use the same version of the image.

The development process can be arranged in completely different ways, but if a docker image is used as an external dependency (for example, a server developed by another team), it should be taken from the same source as for all other needs - testing, operation, and so on.

How and where are docker images created?

The process of making an image must be fully automated in order to avoid mistakes and make the process repeatable.

Roughly, the image build process looks like this (I omit some steps):

  1. The developer writes the code and tests it locally, including the image build.
  2. The developer pushes the code to the version control system.
  3. The robot builds the project and creates deployment artifacts (minifies everything, bundles it into a common package, etc.).
    1. Optional step - the artifacts are placed in storage.
  4. The robot builds the docker image.
    1. Optional step - the robot signs the image.
  5. The robot uploads the image to the registry.

If there are no special requirements, you can use any Continuous Integration server - Jenkins , TeamCity , Bamboo , etc. - to drive the build process. They all have the appropriate plugins, or you can write simple shell scripts and build the images with the standard docker build .
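The CI steps above can be sketched as a small shell script. Everything here is an assumption for illustration: the registry host, project name and revision are hypothetical, and the actual docker commands are shown as comments since they only make sense on a CI server with Docker installed.

```shell
#!/bin/sh
# Sketch of what the CI "robot" might run after the tests pass.
# The registry host, project name and revision below are hypothetical.
REGISTRY="registry.example.com"
PROJECT="mysite"
REVISION="a1b2c3d"   # on a real CI server: $(git rev-parse --short HEAD)

# Tag every image with the VCS revision so each build stays traceable.
IMAGE="$REGISTRY/$PROJECT:$REVISION"
echo "building $IMAGE"

# The actual build and upload steps (run on the CI server):
#   docker build -t "$IMAGE" .
#   docker push "$IMAGE"
```

Tagging with the revision (rather than only `latest`) is what lets testers and operations pull exactly the build that was verified.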

What to include in the docker-image?

It is difficult to give a definitive answer here, since much depends on the type of image and on personal preferences. I will describe how I would do it for a server on Django .

I have only worked a little with Django , so correct me if I say something that doesn't match reality.

If the project is just starting and the service load is small, I would put everything (except the database) into one image. I.e. the image would contain:

  • a fixed version of Python, installed system-wide (without virtualenv , etc.)
  • nginx / Apache of fixed versions with the necessary settings
  • your application itself, taken as a deployment artifact and unpacked inside the image
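A rough sketch of such a single image might look like the Dockerfile below. Every name here is an assumption, not something from the question: the base image, the paths, the config file names and the uWSGI invocation are all illustrative.

```dockerfile
# Sketch of a single-image Django setup; all names are illustrative.
# A fixed base image pins the Python version.
FROM python:3.12-slim

# Install nginx from the base distribution.
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
# The application is copied in as a ready deployment artifact.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# nginx in front, uWSGI behind it (config files ship with the artifact).
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["sh", "-c", "nginx && uwsgi --ini uwsgi.ini"]
```

In a real setup a process supervisor (or separate containers) would be preferable to starting two daemons from one CMD, but this shows the idea of one self-contained image.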

The database lives either in a separate image or on a separate server without containerization. If the database goes into a separate image, it is important to persist its data to a disk partition external to the container; otherwise the data will be lost when the container is removed or recreated.
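Persisting the data can look like this - a sketch only, where the volume name and the PostgreSQL image are my examples, not something from the answer:

```shell
# Keep the data on a named volume that outlives the container.
docker volume create pgdata
docker run -d --name db \
    -v pgdata:/var/lib/postgresql/data \
    postgres:16
```

With the data on the `pgdata` volume, the `db` container can be removed and recreated freely without losing the database.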

If the load is significant and there are a lot of static pages, I would split it into several images:

  1. An image with the static pages
    • nginx / apache
    • the static part of your application
  2. An image with the dynamic part
    • Python
    • the dynamic part of your application ( django )
  3. A load-balancer image (optional)
    • nginx / HAProxy / Varnish / etc.
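One common way to wire such a split together is a docker-compose file; a sketch under the assumption of the three images above, where every image name is hypothetical:

```yaml
# Hypothetical docker-compose.yml wiring the three images together.
services:
  static:
    image: registry.example.com/mysite-static:latest    # nginx + static files
  app:
    image: registry.example.com/mysite-django:latest    # Python + django
  balancer:
    image: registry.example.com/mysite-balancer:latest  # nginx / HAProxy
    ports:
      - "80:80"
    depends_on:
      - static
      - app
```

Only the balancer exposes a port; it routes requests to the static and dynamic containers over the internal network.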

If the project is very large, the static and dynamic parts may well be developed by different teams, and in that case each team will be responsible for preparing the Dockerfile for its part of the project.

  • Let me ask in advance: what tools are there for automatic image creation? For now I will build the first image manually, but I need to know this for the future - while1pass
  • Marked as the accepted answer; your answers are very helpful. Waiting for the continuation) - while1pass
  • I beg your pardon - I got distracted by Sherlock :) I have added it now. - Mikhail Vaysman
  • What a twist with the "East Wind") - while1pass
  • "The database lives either in a separate image ..." So if I put the database in the image, as I understand it, that would not be a critical mistake? - while1pass