Hello everyone, I have a problem that I just can't solve! Can you help me? Here is the essence of it:

|| We have, let's say, 40 servers with, for example, 30-40 cores each. Each core processes incoming requests. There can be about a thousand requests per second.

? The question: once all cores together have processed 1,000,000 requests, how do we temporarily stop processing on all cores?

! Important information:

1) We do not know the exact number of servers.

2) We do not know the exact number of cores on each server.

3) We must stop work on all cores exactly after 1,000,000 requests. That is, request number 1,000,001 must not be processed.

4) The native Node.js cluster module is used (a minimal sketch of such a setup is shown below).
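For reference, a minimal sketch of the kind of per-core worker setup the native cluster module gives (the port and request handler are just illustrative, not part of the actual system); each worker only sees its own requests, which is exactly the problem:

    'use strict';
    const cluster = require('cluster');
    const os = require('os');
    const http = require('http');

    if (cluster.isMaster) {
        // fork one worker per core
        for (let i = 0; i < os.cpus().length; i++) {
            cluster.fork();
        }
    } else {
        // each worker serves requests independently; no single worker
        // knows the cluster-wide (let alone fleet-wide) request count
        http.createServer((req, res) => {
            res.end('ok');
        }).listen(8000); // port 8000 is an arbitrary example
    }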

    1 Answer

    If I understand correctly, there are 40 separate servers with 30-40 cores each, each running a cluster, and all the traffic flows through a load balancer.

    Then the idea is this: every incoming request first passes an "authentication" check, and once the limit is exceeded, we stop forwarding requests to the NodeJS servers.

    Here is an example using Nginx's http_auth_request_module:

        http {
            # the load balancer - our 40 servers go here
            upstream backend {
                server nodejs1.site.com;
                server nodejs2.site.com;
                server nodejs3.site.com;
                # ...
            }

            server {
                # requests come in here
                location /nodejs/ {
                    # if the auth subrequest fails, the request goes no further
                    auth_request /auth;
                    # proxy to the NodeJS backend
                    proxy_pass http://backend;
                }

                # the check is performed here
                location = /auth {
                    internal;
                    proxy_pass http://auth.site.com;
                    proxy_pass_request_body off;
                    proxy_set_header Content-Length "";
                    # ...
                }
            }
        }

    All that remains is to run a service at http://auth.site.com that counts up to exactly one million; since the counter lives in a single process, it aggregates requests from all servers and cores:

        'use strict';

        const express = require('express'),
              app = express();

        let numRequests = 0,
            // the limit:
            requestsLimit = 1000000;

        app.all(/.*/, function(req, res) {
            if (numRequests >= requestsLimit) {
                res.status(403).send('Limit is over!');
                return;
            }
            numRequests++;
            res.send(`ok: ${numRequests}`);
        });

        app.listen(80);

    This script can also be run on the same server as Nginx; just change its port (or leave it unchanged if you add the appropriate Nginx configuration).
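    For example, assuming the counter script is started with app.listen(3000) on the same machine as Nginx (the port is just an illustrative choice), the /auth location could point at it locally instead of at http://auth.site.com:

        # illustrative sketch: send the auth subrequest to the local counter service
        location = /auth {
            internal;
            proxy_pass http://127.0.0.1:3000;   # port chosen when starting the Express script
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
        }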