Docker is, roughly speaking, a simplified virtual machine. A VM for one or two programs. A VM that starts in a split second. A VM that promises: if you provide a special small text Dockerfile (a config file), then any admin who knows Docker will be able to prepare the right environment and run your software.
Why is all this necessary? Recall the classic situation: a programmer writes some software and says "it works on my laptop." And the admins (sysops/devops) cannot deploy it to the production server and make it work, because there is no good description of how to do it. Their usual reply: "fine, let's put your laptop in the rack."
But if the programmer provides a Dockerfile... then everything gets much simpler.
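As a sketch of what such a file looks like, here is a minimal hypothetical Dockerfile for a Python app (the file names `app.py` and `requirements.txt`, and the port, are assumptions made up for this example):

```dockerfile
# Start from a ready-made image with a pinned Python version
FROM python:3.11-slim

WORKDIR /app

# Install the exact library versions the programmer pinned
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY app.py .

# Declare the port the app listens on
EXPOSE 8000

CMD ["python", "app.py"]
```

Any admin can then build and run it with `docker build -t myapp .` and `docker run -p 8000:8000 myapp`, without knowing anything else about the app's environment.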
Docker also provides all sorts of goodies "out of the box." For example, you can allot an application a certain share of CPU and memory, open specific ports, or install a specific version of a library (programmers love to say that their software only works with a narrow set of library versions).
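The resource and port parts of that are just flags on `docker run`; the values and the nginx image below are made-up examples:

```shell
# Cap the container at one CPU core and 512 MB of RAM,
# and map host port 8080 to the container's port 80.
docker run --cpus=1 --memory=512m -p 8080:80 nginx
```

The library-version part lives in the image itself: the Dockerfile pins whatever versions the app needs, so every container built from that image gets them.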
You can also move applications running in Docker from machine to machine without any problems.
What is an image? It is a set of files prepared by someone, ready for use in Docker. It can be a whole Linux, or it can be a specially assembled Python that can be layered on top of the desired Linux image. That is, if the admin needs to update the kernel, he takes the needed image with the new kernel, layers the ready-made image with the programs on top, and voila, everything works (unless compatibility got broken along the way).
A container is the ready-to-use product. It can be started and stopped. (An image by itself won't run; you have to create a container from it and start that.)
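The image/container distinction shows up directly in the CLI. A rough sketch of the lifecycle (the container name `job` is made up; these commands need a running Docker daemon):

```shell
docker pull python:3.11-slim                           # fetch an image: just files, nothing runs yet
docker create --name job python:3.11-slim python -V    # make a container from the image
docker start job                                       # now something actually runs
docker stop job                                        # stop the container; the image is untouched
docker run python:3.11-slim python -V                  # run = create + start in one step
```

One image can spawn any number of containers, which is why the two concepts are kept separate.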
In fact, Docker is not a virtual machine; it is a tool for managing various virtualization mechanisms. For example, it has driven LXC (Linux Containers), which is essentially an isolation mechanism built into the Linux kernel.
UPD
A "VM" here means something like VirtualBox or VMware. But the same kind of "virtual servers" can be built on top of Docker.
An isolated environment is the ability to make an application think it is running on a server all by itself. Suppose you want to run ten web servers, and every one of them has port 80 hard-coded with no way to change it. Docker will let you run all ten, each in its own container, exposed to the outside world on different ports. In one more container you launch a load balancer that spreads incoming requests from port 80 across them. And on top of that you can bolt on docker-compose, which can manage a whole pack of Docker containers.
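A sketch of that setup as a hypothetical docker-compose.yml, here with two web servers instead of ten (the image name `myapp` and the resource numbers are assumptions; nginx stands in for the balancer):

```yaml
services:
  web1:
    image: myapp        # port 80 hard-coded inside the container
    cpus: "0.5"         # the resource limits mentioned earlier, as an example
    mem_limit: 256m
  web2:
    image: myapp        # same image, second isolated copy
  balancer:
    image: nginx        # spreads requests across web1 and web2
    ports:
      - "80:80"         # the only port exposed to the outside world
```

`docker compose up` then starts the whole pack at once. Note that nginx would still need its own config file telling it to proxy to web1 and web2; that part is omitted here.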
As for Windows: no, you can't start it in Docker yet, but that will probably come with time. If Microsoft wants to, they can make an image, and then everything will take literally a couple of clicks (well, not clicks, a few lines in a Dockerfile). Moreover, they say Microsoft is preparing for exactly this: they have released a GUI-less Windows Server, they are integrating Linux into Windows... we'll see.