
Valid SSL domain names for local Docker containers



Using Docker in the development process has long since become the de facto standard. Starting an application with all of its dependencies with a single command is becoming a more and more familiar routine. If an application exposes a web interface or some HTTP API, the front container usually forwards its port to a host port that is unique among the other applications you are developing in parallel, and by knocking on that port we can interact with the application inside the container.
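
Something along these lines, for instance (the image names here are purely illustrative):

    # Hypothetical example: every application occupies its own host port,
    # and you have to keep track of who owns which one
    $ docker run -d -p 8080:80 team-a/frontend   # reachable at http://localhost:8080
    $ docker run -d -p 8081:80 team-b/api        # reachable at http://localhost:8081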


And this works fine until you have a whole zoo of applications: switching between them starts to cause inconvenience, because you have to remember the scheme and the port for each one, and record somewhere which ports you once allocated to which application, so that collisions do not creep in over time.


Then you also want to check how things work over https, and you have to either use your own root certificate or always run curl --insecure ..., and once different teams are working on the applications, the number of workarounds starts to grow exponentially.


Having run into this problem yet again, the thought "Enough is enough!" flashed through my head, and the result of a couple of days off spent on it was a service that solves the problem at the root, which is described below. For the impatient, as tradition dictates, here is the link.


A reverse proxy will save the world


Ideally, we need some domain zone whose sub-domains always resolve to localhost (127.0.0.1). A quick search turned up domains like *.localho.st , *.lvh.me , *.vcap.me and others, but how do you attach a valid SSL certificate to them? After fiddling with my own root certificate I managed to get curl to run without errors, but not all browsers accepted it correctly and kept throwing errors. Besides, I really did not want to "mess around" with SSL myself.
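
For the record, that "own root certificate" route looks roughly like this (a sketch assuming OpenSSL 1.1.1+; the resulting certificate still has to be imported into every browser and OS trust store by hand, which is exactly the hassle I wanted to avoid):

    # Self-signed wildcard certificate for one of those zones (illustrative only;
    # browsers will not trust it out of the box)
    $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=*.lvh.me" \
        -addext "subjectAltName=DNS:*.lvh.me,DNS:lvh.me" \
        -keyout lvh.me.key -out lvh.me.crt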


"Well, let's go on the other side!" - and immediately a domain was acquired with the name localhost.tools , delegated to CloudFlare, the required resolution was configured (all sub-domains resolve 127.0.0.1 ):


    $ dig foo.localhost.tools | grep -v '^;\|^$'
    foo.localhost.tools.    190    IN    A    127.0.0.1

After that, certbot was launched in a container: given a CloudFlare API key as input, it confirms ownership of the domain via DNS records and produces a valid SSL certificate as output:


    $ docker run \
        --entrypoint="" \
        -v "$(pwd)/cf-config.conf:/cf-credentials:ro" \
        -v "$(pwd)/cert:/out:rw" \
        -v "/etc/passwd:/etc/passwd:ro" \
        -v "/etc/group:/etc/group:ro" \
        certbot/dns-cloudflare:latest sh -c \
          "certbot certonly \
            --dns-cloudflare \
            --dns-cloudflare-credentials '/cf-credentials' \
            -d '*.localhost.tools' \
            --non-interactive \
            --agree-tos \
            --email '$CF_EMAIL' \
            --server 'https://acme-v02.api.letsencrypt.org/directory' \
          && cp -f /etc/letsencrypt/live/localhost.tools/* /out \
          && chown '$(id -u):$(id -g)' /out/*"

The ./cf-config.conf file contains the CloudFlare credentials (see the certbot documentation for details), and $CF_EMAIL is an environment variable holding your email address.
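
For reference, the credentials file for the certbot dns-cloudflare plugin is a small INI file along these lines (the values are placeholders; check the plugin documentation for the exact option names supported by your certbot version):

    # ./cf-config.conf - credentials for the certbot dns-cloudflare plugin (placeholder values)
    dns_cloudflare_email   = your@email.tld
    dns_cloudflare_api_key = 0123456789abcdef0123456789abcdef01234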

OK, now we have a valid SSL certificate (albeit for only 3 months, and only for single-level subdomains). What remains is to somehow proxy every request that arrives at localhost to the desired container.


And here Traefik comes to our aid (spoiler: it is wonderful). Run it locally with the docker socket mounted into its container as a volume, and it can proxy requests to whichever container carries the required docker label. So no extra configuration is needed on the client side beyond setting the desired label on the container (and the docker network, although when running without docker-compose even that is not strictly required, though highly desirable) that we want to reach by domain name with valid SSL!
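
With docker-compose the same thing boils down to a couple of labels on the service. A minimal sketch (the service name and hostname here are illustrative, the label syntax matches the Traefik v1 style used in the quick-start below, and the proxy network is assumed to already exist):

    # docker-compose.yml - illustrative sketch only
    version: '3'

    services:
      app:
        image: nginx:latest
        networks: [localhost-tools-network]
        labels:
          - "traefik.frontend.rule=Host:my-app.localhost.tools"
          - "traefik.port=80"

    networks:
      localhost-tools-network:
        external: true  # created separately, see the quick-start below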


Having gone all this way, a docker container with this very pre-configured Traefik and the wildcard SSL certificate saw the light of day (and yes, it is public).


An SSL private key in a public container?


Yes, but I do not think this is a problem, since the certificate covers a domain zone that always resolves to localhost. A MitM attack in this case does not make much sense in principle.


What to do when the certificate expires?


Just pull a fresh image and restart the container. The project has CI configured that automatically (once a month, for now) renews the certificate and publishes a fresh image.
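
In practice that amounts to something like this (assuming the container was started as in the quick-start below, i.e. with --rm and the name localhost.tools):

    # Pull the freshly published image and restart the proxy
    $ docker pull tarampampam/localhost
    $ docker stop localhost.tools   # started with --rm, so the old container removes itself
    $ docker run -d --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        --network localhost-tools-network \
        --name localhost.tools \
        -p 80:80 -p 443:443 \
        tarampampam/localhost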


I want to try!


Nothing could be easier. First of all, make sure that local ports 80 and 443 are free, and execute:


    # Create a docker network for our reverse proxy
    $ docker network create localhost-tools-network

    # Start the reverse proxy itself
    $ docker run -d --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        --network localhost-tools-network \
        --name localhost.tools \
        -p 80:80 -p 443:443 \
        tarampampam/localhost

    # Start nginx, telling it to respond at "my-nginx.localhost.tools"
    $ docker run -d --rm \
        --network localhost-tools-network \
        --label "traefik.frontend.rule=Host:my-nginx.localhost.tools" \
        --label "traefik.port=80" \
        nginx:latest

And now we can test:


    $ curl -sS http://my-nginx.localhost.tools | grep Welcome
    <title>Welcome to nginx!</title>
    <h1>Welcome to nginx!</h1>

    $ curl -sS https://my-nginx.localhost.tools | grep Welcome
    <title>Welcome to nginx!</title>
    <h1>Welcome to nginx!</h1>

As you can see - it works :)
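
If you want to double-check that the certificate really is a valid Let's Encrypt wildcard rather than something self-signed, you can peek at it with openssl (an optional sanity check, not part of the original walkthrough):

    $ echo | openssl s_client -connect my-nginx.localhost.tools:443 \
        -servername my-nginx.localhost.tools 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates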


Where does the documentation live?


Everything, as you can probably guess, lives at https://localhost.tools . Moreover, the landing page is responsive, knows how to check whether the reverse proxy daemon is running locally, and shows the list of running containers available for interaction (if there are any).


How much does it cost?


Nothing at all. Having built this thing for myself and my team, I came to realize that it could be useful to other developers and ops folks as well. Besides, only the domain name costs money; everything else is used free of charge.


P.S. The service is still in beta, so if you spot any shortcomings, typos and the like, just drop me a private message. The Programming and Website Development hubs were chosen because this approach is likely to be most useful in exactly those fields.


Source: https://habr.com/ru/post/439806/