
Nomad: an alternative orchestrator on the desktop

Currently, container orchestration is associated primarily with Kubernetes. But it is not the only possible choice: there are alternative orchestration tools as well, such as Nomad from HashiCorp (well known as the developer of the Vagrant virtualization tool).

Mastering orchestration tools is usually difficult, since not everyone has access to an infrastructure of several servers with root access. That is why my previous post, "Deploying Kubernetes on the desktop in a few minutes with MicroK8s", described how to bring up a Kubernetes environment on the desktop using a Django web application as an example. Initially I planned to go on and describe deploying a database in the MicroK8s environment, but then I thought it would be more interesting to continue with Nomad, an orchestration tool that is just as convenient. I will not attempt even a brief comparison of the different orchestration systems here. The only thing I will note, for those who are in doubt, is that Nomad is even easier to install than MicroK8s: all you have to do is copy two executable files (nomad and consul) from the developer's site.

So, as I said, first you need to download nomad and consul , which are shipped as ready-made binaries for all major operating systems. Root access is not needed to run them, so everything can be placed in the home directory and run as an unprivileged user.
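
For example, on Linux this comes down to roughly the following (the version numbers are only an illustration of what was current at the time of writing; take the latest ones from releases.hashicorp.com):

 wget https://releases.hashicorp.com/nomad/0.8.7/nomad_0.8.7_linux_amd64.zip
 wget https://releases.hashicorp.com/consul/1.4.2/consul_1.4.2_linux_amd64.zip
 unzip nomad_0.8.7_linux_amd64.zip -d ~/bin
 unzip consul_1.4.2_linux_amd64.zip -d ~/bin
 export PATH=$PATH:~/bin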

And, of course, you should already have Docker installed, since you are going to run Docker containers. By the way, Nomad can run not only containers but also ordinary executable files, a capability we will take advantage of shortly.

So, first you need to create a Nomad configuration file. Nomad can run in server mode, in client mode, or in both modes at once (not recommended for production). To do this, put a server section, a client section, or both into the configuration file:

 bind_addr = "127.0.0.1"
 data_dir  = "/tmp/nomad"
 advertise {
   http = "127.0.0.1"
   rpc  = "127.0.0.1"
   serf = "127.0.0.1"
 }
 server {
   enabled          = true
   bootstrap_expect = 1
 }
 client {
   enabled = true
   options = {
     "driver.raw_exec.enable" = "1"
   }
 }
 consul {
   address = "127.0.0.1:8500"
 }


The agent is launched with the nomad command, passing the path to the configuration file you just created:

 nomad agent --config nomad/nomad.conf 


The last section of the configuration sets the address at which Consul will be available. Consul, too, can run in server mode, in client mode, or in both at once:

 consul agent -server -client 127.0.0.1 -advertise 127.0.0.1 -data-dir /tmp/consul -ui -bootstrap 
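
Once both agents are running, a quick way to check from the command line that they see each other is, for example:

 nomad server members
 nomad node status
 consul members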


After executing these commands, you can open the Nomad UI at http://localhost:4646 and the Consul UI at http://localhost:8500 in the browser.

Next, create a Dockerfile for the Django image. It differs from the Dockerfile in the previous post only by the line that allows access to Django from any host:

 FROM python:3-slim
 LABEL maintainer="apapacy@gmail.com"
 WORKDIR /app
 COPY requirements.txt .
 RUN pip install -r requirements.txt
 RUN django-admin startproject mysite /app \
     && echo "\nALLOWED_HOSTS = ['*']\n" >> /app/mysite/settings.py
 EXPOSE 8000
 STOPSIGNAL SIGINT
 ENTRYPOINT ["python", "manage.py"]
 CMD ["runserver", "0.0.0.0:8000"]
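
Note that the image expects a requirements.txt next to the Dockerfile, just as in the previous post. If you are reproducing the example from scratch, a minimal file that simply lists Django is enough:

 Django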


Build the image:

 docker build django/ -t apapacy/tut-django:1.0.1 
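
Before handing the image over to Nomad, it does not hurt to sanity-check it directly with Docker, for example:

 docker run --rm -p 8000:8000 apapacy/tut-django:1.0.1

and open http://localhost:8000 to make sure the Django welcome page comes up.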


Next, create a job that will run the required number of replicas of the Django container (nomad/django.conf):

 job "django-job" { datacenters = ["dc1"] type = "service" group "django-group" { count = 3 restart { attempts = 2 interval = "30m" delay = "15s" mode = "fail" } ephemeral_disk { size = 300 } task "django-job" { driver = "docker" config { image = "apapacy/tut-django:1.0.1" port_map { lb = 8000 } } resources { network { mbits = 10 port "lb" {} } } service { name = "django" tags = ["urlprefix-/"] port = "lb" check { name = "alive" type = "http" path = "/" interval = "10s" timeout = "2s" } } } } } 


Most of the parameters in this configuration are self-explanatory from their names. The only line worth explaining is port "lb" {} , which means that the port will be assigned dynamically by the environment (it can also be set statically).
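
For comparison, a static assignment would look roughly like this (8000 here is just an example value):

 port "lb" {
   static = 8000
 }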

The job is started by the command:

 nomad job run nomad/django.conf 
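
The same status information is also available from the command line, for example:

 nomad job status django-job
 nomad alloc status <alloc_id>

where <alloc_id> is one of the allocation IDs listed in the output of the first command.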


Now, through the Nomad UI (http://localhost:4646), you can see the status of the django-job job, and through the Consul UI (http://localhost:8500) the status of the django service, including the IP address and port on which each replica of the django service is running. The services are reachable at these addresses, but only from within the Nomad network; they are not accessible from the outside. There are several ways to publish them externally, for example via haproxy, but the easiest is to use yet another tool from HashiCorp (the third one after Nomad and Consul): Fabio.

You will not even need to download it yourself: that can be left to Nomad, which, as I mentioned at the beginning of the post, can run not only Docker containers but also arbitrary executable files. To do this, create another job (nomad/fabio.conf):

 job "fabio-job" { datacenters = ["dc1"] type = "system" update { stagger = "60s" max_parallel = 1 } group "fabio-group" { count = 1 task "fabio-task" { driver = "raw_exec" artifact { source = "https://github.com/fabiolb/fabio/releases/download/v1.5.4/fabio-1.5.4-go1.9.2-linux_amd64" } config { command = "fabio-1.5.4-go1.9.2-linux_amd64" } resources { cpu = 100 # 500 MHz memory = 128 # 256MB network { mbits = 10 port "lb" { static = 9999 } port "admin" { static = 9998 } } } } } } 


This job uses driver = "raw_exec" . Not all drivers are enabled by default, which is why the Nomad configuration shown earlier explicitly enables this one:

 client {
   enabled = true
   options = {
     "driver.raw_exec.enable" = "1"
   }
 }


By the way, newer versions of Nomad change the syntax for loading plugins and drivers, so this part of the configuration will eventually need to be updated.
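
For reference, in the newer releases the same option is expressed through a plugin block; at the time of writing the equivalent of the option above looked roughly like this:

 plugin "raw_exec" {
   config {
     enabled = true
   }
 }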

The job is started by the command:

 nomad job run nomad/fabio.conf 


After that, the Fabio UI is available in the browser at http://localhost:9998, and the django service is published at http://localhost:9999.
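
You can also check the routing from the command line; Fabio forwards requests matching the urlprefix-/ tag to one of the Django replicas:

 curl -i http://localhost:9999/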

The configuration files shown in this post can be found in the github.com/apapacy/microk8s-tut repository.

apapacy@gmail.com
20th of February 2019

Source: https://habr.com/ru/post/440956/