There should be no such thing as separate production and non-production versions of an image: developers, testers, operators, and everyone else should use the same version of the image.
The development process can be organized in many different ways, but if a Docker image is used as an external dependency (for example, a server developed by another team), it should be taken from the same source that is used for everything else - testing, operation, and so on.
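For illustration, a minimal sketch of this rule in practice; the registry address, image name, tag, and digest below are placeholders, not anything from a real project:

```sh
# Everyone -- developers, CI, testers, production -- pulls from the
# same shared registry (hypothetical address and image name):
docker pull registry.example.com/billing:1.4.2

# Pinning by digest instead of a tag guarantees the byte-identical
# image everywhere (the digest is a placeholder):
docker pull registry.example.com/billing@sha256:<digest>
```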
How, where, and by whom are Docker images created?
The image build process must be fully automated, both to avoid mistakes and to make the process repeatable.
Roughly, building an image looks like this (I omit some steps):
- The developer writes the code and tests it locally, including building the image.
- The developer pushes the code to the version control system.
- A build robot builds the project and creates deployment artifacts (minifies everything, bundles it into a single package, etc.).
- Optional step: the artifacts are placed in an artifact repository.
- The robot builds the Docker image.
- Optional step: the robot signs the image.
- The robot pushes the image to the registry.
If there are no special requirements, any Continuous Integration server (Jenkins, TeamCity, Bamboo, etc.) can drive the build process. They all have suitable plugins, or you can write simple shell scripts and build images with the standard docker build.
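A minimal sketch of such a script, assuming a hypothetical registry (registry.example.com), image name (myapp), and project build script (build.sh):

```sh
#!/bin/sh
set -e

# Tag the image with the current commit so every build is traceable
VERSION=$(git rev-parse --short HEAD)
IMAGE=registry.example.com/myapp:$VERSION

# 1. Build the project and produce the deployment artifacts
./build.sh

# 2. Build the Docker image from those artifacts
docker build -t "$IMAGE" .

# 3. Optional: sign the image; with Docker Content Trust enabled,
#    docker push signs automatically
# export DOCKER_CONTENT_TRUST=1

# 4. Push the image to the shared registry
docker push "$IMAGE"
```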
What to include in the Docker image?
It is hard to give a definitive answer to this question, since much depends on the type of image and on personal preference. I will describe how I would do it for a server built on Django.
I have only worked a little with Django, so correct me if I say something that does not match reality.
If the project is just starting and the load on the service is small, I would put everything (except the database) into one image; a Dockerfile sketch follows the list. That is, the image would contain:
- a pinned Python version installed system-wide (without virtualenv, etc.)
- a pinned version of nginx or Apache with the necessary configuration
- your application itself, taken as a deployment artifact and unpacked inside the image
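A rough sketch of such a Dockerfile, assuming the application arrives as a prebuilt artifact (app.tar.gz) with its own requirements.txt, nginx.conf, and start script; all names and versions are placeholders:

```dockerfile
# Pinned Python installed system-wide via a pinned base image
FROM python:3.11-slim

# Pinned nginx with the necessary configuration
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
COPY nginx.conf /etc/nginx/nginx.conf

# The application itself, taken as a ready-made deployment artifact
COPY app.tar.gz /opt/
RUN mkdir -p /opt/app \
    && tar -xzf /opt/app.tar.gz -C /opt/app \
    && pip install --no-cache-dir -r /opt/app/requirements.txt

EXPOSE 80
# start.sh is assumed to launch nginx and the Django application server
CMD ["/opt/app/start.sh"]
```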
The database lives either in a separate image or on a separate server without containerization. If the database runs in its own container, it is important to store its data on a disk volume external to the container; otherwise the data will be lost when the container is removed or recreated.
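For example, a sketch with PostgreSQL that keeps the data directory on a named volume so it survives container removal (the volume and container names are placeholders):

```sh
docker volume create pgdata

# /var/lib/postgresql/data is PostgreSQL's data directory; mounting a
# named volume there keeps the data outside the container itself
docker run -d --name db \
    -v pgdata:/var/lib/postgresql/data \
    postgres:16
```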
If the load is significant and there are many static pages, I would split things into several images (a sketch of wiring them together follows the list):
- An image with the static pages:
  - nginx / Apache
  - the static part of your application
- An image with the dynamic part:
  - Python
  - the dynamic part of your application (Django)
- A balancer image (optional):
  - nginx / HAProxy / Varnish / etc.
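A sketch of how these containers might be wired together on a user-defined Docker network; the image names are placeholders:

```sh
docker network create app-net

docker run -d --name static  --network app-net myapp-static
docker run -d --name dynamic --network app-net myapp-django

# Only the balancer is published to the host; it proxies to "static"
# and "dynamic" by container name via Docker's embedded DNS
docker run -d --name balancer --network app-net -p 80:80 myapp-balancer
```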
If the project is very large, the static and dynamic parts may be developed by different teams, in which case each team is responsible for preparing the Dockerfile for its part of the project.