I have a Python server that uses RabbitMQ. After some time (about a month), the process eats up all the memory and the queues stop. Here is the process in top:

 29431 rabbitmq 20 0 4330792 1,992g 4596 S 12,6 12,7 792:49.90 beam.smp 

After killing the process, everything goes back to normal.

Judging by beam.smp, this is the Erlang virtual machine running. But why it does not free the memory is unclear.

  • Have you checked the number of connections and open channels at the moment when Rabbit hits the critical memory level (for example with rabbitmqctl, see the sketch after this list)? - Firepro
  • No, but the memory consumption is growing before our eyes. A week ago the VIRT value was about 2,000,000; now it is 4,330,792. - faoxis
  • Is the management UI enabled? - etki
  • @Etki yes, it is enabled. - faoxis
  • @faoxis does it show any anomalies, e.g. queues with a large backlog of messages? - etki
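(For reference, the connection and channel counts can also be checked from the command line; a minimal sketch, assuming a standard installation with rabbitmqctl available:)

 # count open connections and channels; the totals include
 # the header/footer lines that rabbitmqctl prints itself
 sudo rabbitmqctl list_connections | wc -l
 sudo rabbitmqctl list_channels | wc -l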

1 answer

I figured out the issue. The point is that RabbitMQ is written in Erlang, and in Erlang the virtual machine's garbage collector uses old and not very efficient algorithms: it starts reclaiming memory only when memory runs out. However, there is a way to tell it explicitly when to kick in.

Thus, the Erlang virtual machine can be reached through RabbitMQ's settings. To do this, create (if it does not exist) the file /etc/rabbitmq/rabbitmq.config (Debian, Ubuntu). In it you can specify either a fraction of total memory or an absolute amount at which the cleanup should start. Keep in mind that it is not resident RAM that is counted, but virtual memory (!). I set it to 20% of total virtual memory like this:

 [{rabbit, [{vm_memory_high_watermark, 0.2}]}]. 
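The same key also accepts an absolute value instead of a fraction. A minimal sketch, assuming the amount is given in bytes (recent RabbitMQ versions also accept unit strings such as "1GiB"):

 %% trigger at an absolute limit of 1 GiB (1073741824 bytes)
 [{rabbit, [{vm_memory_high_watermark, {absolute, 1073741824}}]}].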

My problem was that the default value is 0.4, which turned out to be too much for a busy production server.

You can verify that the configured virtual-memory limit is in effect with the command sudo rabbitmqctl status ; look for the vm_memory_limit parameter in the output.
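A minimal sketch of that check; note that the exact output format differs between RabbitMQ versions (older ones print raw Erlang terms):

 # show the configured watermark and the computed byte limit
 sudo rabbitmqctl status | grep vm_memory
 # older versions print lines such as:
 #   {vm_memory_high_watermark,0.2},
 #   {vm_memory_limit,858993459},   (0.2 of a 4 GiB host, for example)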