Filtering allowed and forbidden addresses at the application level seems wrong to me, and I want to hand this work off to iptables.

Question: iptables consumes CPU; could it happen that, as the number of requests grows, the server goes down because of the iptables rules? Is there any ballpark data on how many requests a given piece of hardware can handle? Or will the channel bandwidth run out first, so the requests simply never reach the server?

  • In the iptables ban list, built up via fail2ban, there are about two thousand addresses. Runs just fine. CentOS 7, nginx, 2 GB of memory. - KAGG Design
  • @KAGGDesign but ipset was invented for a reason, wasn't it? By the way, are you sure your fail2ban doesn't use it? - Pavel Mayorov
  • @PavelMayorov yes, I've read that ipset is faster. But fail2ban writes to iptables: I see that in iptables -L, while ipset list comes back empty. - KAGG Design
  • @PavelMayorov Oh, look what I found: lawsonry.com/2015/01/… I'll try it now; I've long wanted to switch to ipset (a config sketch follows these comments). - KAGG Design
  • Ran a counting script: 4100 addresses in the block list at the moment. - KAGG Design
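
For reference, a minimal sketch of switching fail2ban's ban action over to ipset. The action name iptables-ipset-proto6 is the one shipped with recent fail2ban versions, but check /etc/fail2ban/action.d/ for what your install actually provides:

    # append to /etc/fail2ban/jail.local and restart the service
    printf '[DEFAULT]\nbanaction = iptables-ipset-proto6\n' >> /etc/fail2ban/jail.local
    systemctl restart fail2ban
    # verify: bans should now land in a set instead of individual rules
    ipset list
    iptables -S | grep match-set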

2 Answers

You don't need to worry much about load as long as no single iptables chain holds much more than 1000 rules. The kernel walks every chain a packet lands in strictly sequentially, so traversing a large number of rules starts to slow things down. If you expect significantly more than 1000 rules, devise a scheme for splitting them into several chains, each of which is entered only for a certain type of packet or a certain IP range. For example, split the rules by subnet:

    iptables -N RULE_10
    iptables -A RULE_10 -s 10.0.0.1 -j DROP
    iptables -A RULE_10 -s 10.0.10.41 -j DROP
    ...
    iptables -N RULE_192
    iptables -A RULE_192 -s 192.168.0.1 -j DROP
    iptables -A RULE_192 -s 192.168.50.48 -j DROP
    ...
    iptables -A INPUT -s 10.0.0.0/8 -j RULE_10
    iptables -A INPUT -s 192.0.0.0/8 -j RULE_192

Using a similar scheme (though a more complicated one, because the rules were distributed unevenly across subnets), I have comfortably handled 35,000 rules. That setup uses a 3-level chain hierarchy: the top and second levels hold about 30 rules each, and the third-level chains average 50-200 rules.
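
For illustration, a minimal sketch of how such a split could be generated automatically from a flat list of addresses. The file name blocked.txt and the RULE_ chain prefix are assumptions, not from the answer:

    #!/bin/bash
    # Group blocked IPs into one chain per first octet. INPUT then holds
    # at most ~256 dispatch rules, and each packet only walks one short chain.
    while read -r ip; do
        octet=${ip%%.*}               # first octet of the address
        chain="RULE_$octet"
        # create the chain and its INPUT dispatch rule on first use
        if ! iptables -n -L "$chain" >/dev/null 2>&1; then
            iptables -N "$chain"
            iptables -A INPUT -s "$octet.0.0.0/8" -j "$chain"
        fi
        iptables -A "$chain" -s "$ip" -j DROP
    done < blocked.txt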

But I only had to resort to such a complex scheme because that machine ran a rather old kernel that I didn't want to replace, which ruled out a much better solution:

the iptables ipset module. It consists of two parts: the match module for iptables itself and the separate ipset management utility. The idea is that when we have many rules of the same shape, filtering by IP address or, say, by port number, we put the values into a set with that utility, and in iptables itself literally one rule is added:

    ipset create mailBlock hash:ip >/dev/null 2>&1
    iptables -I INPUT -m set --match-set mailBlock src -p tcp --dport 25 -j DROP

A set can hold a very large number of IP addresses; the kernel looks them up not by linear enumeration but via a hash table, which is very fast.
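
A short sketch of working with the set itself; mailBlock is the set created above, while the addresses and the tmpBlock set are placeholders for illustration:

    # add and remove entries; the single iptables rule never changes
    ipset add mailBlock 203.0.113.7
    ipset del mailBlock 203.0.113.7
    # a set can carry a default timeout, so entries expire on their own
    ipset create tmpBlock hash:ip timeout 1800
    ipset add tmpBlock 198.51.100.23    # dropped from the set after 30 minutes
    # inspect and persist
    ipset list mailBlock
    ipset save > /etc/ipset.conf        # later: ipset restore < /etc/ipset.conf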

  • The answer is super, thanks. A question: what are those called, and where can I read about behavioral rules (I've heard they can be done), for example: if more than 10 requests per second came from one IP, block all requests from it for 30 minutes? - marrk2
  • @marrk2 That is done by the hashlimit and connlimit modules; google those keywords (a sketch follows these comments). Or, in hard cases where the packet count alone is not a good indicator, by hand/with scripts. Up in the comments they wrote that they use fail2ban for this (it works off the Apache logs). - Mike
  • @Mike it's not just Apache: I have nginx along with fail2ban. - KAGG Design
  • @KAGGDesign, Mike, and not only nginx. In fact, you can monitor any log file, and you can act on more than just the netfilter tables. - aleksandr barakin
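
To make that concrete, a minimal sketch of the "10 requests per second, block for 30 minutes" idea from the comment, combining hashlimit with an ipset timeout; the set name, port, and thresholds are illustrative:

    # set with a 30-minute default timeout: entries expire on their own
    ipset create http_abusers hash:ip timeout 1800
    # drop anything already in the set as early as possible
    iptables -I INPUT -m set --match-set http_abusers src -j DROP
    # once a source exceeds 10 packets/second to port 80, add it to the set
    # (note: hashlimit counts packets, not HTTP requests)
    iptables -A INPUT -p tcp --dport 80 \
        -m hashlimit --hashlimit-above 10/sec --hashlimit-mode srcip \
        --hashlimit-name http_rate \
        -j SET --add-set http_abusers src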

A general question, so the answer is also general.

Read the article from CloudFlare about dropping 10,000,000 packets per second.

A loose translation of the key points:

Test setup

We will show some numbers to illustrate our method. We use one of our Intel servers with a 10 Gbps network card. The hardware details are not that important, because we want to show the limits imposed by the OS, not by the hardware.

Description of the testing process:

  • we send a large volume of small UDP packets (reaching 14 Mpps)
  • this traffic is directed at a single CPU on the target server
  • we measure the number of packets filtered (dropped) by the kernel

Result: see the chart in the original article.
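
For context, one of the netfilter-level variants the article benchmarks is a plain DROP placed as early as possible; a sketch of that idea (the port number is illustrative, check the article for the exact rules used):

    # a DROP in raw/PREROUTING runs before connection tracking, so it is
    # noticeably cheaper than the same DROP in the filter table's INPUT chain
    iptables -t raw -I PREROUTING -p udp --dport 1234 -j DROP
    # the rule's own packet counter doubles as a drop-rate meter
    iptables -t raw -nvL PREROUTING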

Source