The Help page for QNetworkAccessManager contains the following note in its Detailed Description section:

Note: QNetworkAccessManager queues the requests it receives. The number of requests executed in parallel depends on the protocol. Currently, for the HTTP protocol on desktop platforms, 6 requests are executed in parallel for one host/port combination.

Does this mean that if the host/port combinations differ, say when sending requests to two different addresses, the requests will be executed sequentially?


QNetworkRequest has an interesting attribute: HttpPipeliningAllowedAttribute. Is this attribute related to the parallel request-sending mechanism, for example under the condition that the requests go to a single host/port combination?


QNetworkAccessManager has a method for pre-connecting to the server on a specified port:

 void QNetworkAccessManager::connectToHost(const QString &hostName, quint16 port = 80)

As I understand it, this can partly or even completely offset the time spent on resolving the domain name and on the so-called TCP handshake. What number of such connections would be most effective? Should that number correspond to the number of distinct host/port combinations, or to the number of channels (6 of them) that QNetworkAccessManager opens when connecting to a single host/port combination?
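For illustration, a minimal pre-connect sketch (the host name and URL are placeholders, and a running Qt event loop and network access are assumed, so this is a usage sketch rather than a verified program):

```cpp
// Pre-opening the TCP connection hides DNS resolution and the TCP
// handshake from the latency of the first real request.
#include <QCoreApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    // Open the connection ahead of time; "example.com" is a placeholder host.
    manager.connectToHost(QStringLiteral("example.com"), 80);

    // Later, the first GET can reuse the already-established connection.
    QNetworkRequest request(QUrl(QStringLiteral("http://example.com/data")));
    QNetworkReply *reply = manager.get(request);

    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        // ... process reply->readAll() here ...
        reply->deleteLater();
        app.quit();
    });

    return app.exec();
}
```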

  • You have very specific questions; I would ask them on the Qt forum, there is a better chance of getting an answer there. Even then, though, I have my doubts. - ixSci

1 answer

Does this mean that if the host/port combinations differ, say when sending requests to two different addresses, the requests will be executed sequentially?

You mean to say "in parallel", not "sequentially"?

The condition should be understood as follows: no more than 6 requests at a time per server connection (host:port). Yes, Apache/Nginx can listen on two ports simultaneously, and then it is effectively 12 requests, but that is very hard to discover from the client side.

In the case of different hosts, requests do run "in parallel" (I have personally checked this on several projects). I have not checked with different ports, but I think that works too. However, when the hosts are virtual (and sit on one IP), it seems the limit of 6 requests still applies. My results here are too contradictory, though, so this question needs separate investigation to figure out whether "host" means the domain name or the IP.
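The queueing behaviour described above can be sketched as follows (the URLs are placeholders; a Qt event loop and network access are assumed). All ten GETs are handed to the manager at once, and the manager itself decides how many run in parallel per host:port, queueing the rest:

```cpp
#include <QCoreApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QUrl>
#include <QString>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    // Issue ten requests to one host; QNetworkAccessManager caps the
    // number of simultaneous connections per host:port (6 for HTTP on
    // desktop) and keeps the remaining requests in its internal queue.
    int pending = 10;
    for (int i = 0; i < 10; ++i) {
        const QUrl url(QStringLiteral("http://example.com/item/%1").arg(i));
        QNetworkReply *reply = manager.get(QNetworkRequest(url));
        QObject::connect(reply, &QNetworkReply::finished, [&, reply]() {
            reply->deleteLater();
            if (--pending == 0)
                app.quit();   // quit once every reply has finished
        });
    }
    return app.exec();
}
```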

HttpPipeliningAllowedAttribute

This attribute allows several requests to be sent at once without waiting for the answers. Naturally, the server must support HTTP/1.1. That is, many requests are sent and then we wait for the answers, but all within a single connection. Requests to different hosts cannot use this feature. The option pays off especially when the requests are small, independent and numerous, and the network and their processing are terribly slow.
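A sketch of opting a request in to pipelining (the function name and URL are my own illustration, not from Qt; the attribute is only a hint, and Qt falls back to normal behaviour if the server does not cooperate):

```cpp
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QUrl>

// Hypothetical helper: issue a GET with HTTP/1.1 pipelining allowed.
QNetworkReply *pipelinedGet(QNetworkAccessManager &manager, const QUrl &url)
{
    QNetworkRequest request(url);
    // Hint that this request may be pipelined on an existing connection.
    request.setAttribute(QNetworkRequest::HttpPipeliningAllowedAttribute, true);
    return manager.get(request);
}

// Usage: several such requests to the same host can share one connection,
// being sent back-to-back before the first answer arrives.
</
```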

As I understand it, this can partly or even completely offset the time spent on resolving a domain name and the so-called TCP handshake?

Yes, maybe. Or maybe not :)

What number of such connections would be most effective?

Here everything depends on the specific case. It is possible that there will be no win at all. There is a win when you need to execute a request very quickly: for example, a scrolling list that loads another chunk of data, or exchange trading where you must react very fast and a 0.3-second delay can cost a fortune. In such cases a preconnect makes sense.

And if you just need to perform one or two requests, it makes no difference whether you preconnect or not, because the first request will do all of that work anyway.

As always, benchmarks help in such cases. I know of one case where a person built something like a crawler bot and was asked how he tunes the configuration for maximum performance. After much questioning, it turned out that he first runs a special test script that launches the program with various settings and picks the best one. Yes, that script sometimes ran for several days, but afterwards it squeezed every last drop of performance out of the servers.

  • Thanks. The first sub-question is a little confusing. Six requests are executed in parallel when sent to the same host/port. But at the same time, as I understand it, there is no limit on parallel execution if the target hosts of the requests differ. Does it then turn out that the limit of 6 parallel requests is in fact a limitation of the server side? - alexis031182
  • Simply put, making more than 6 parallel requests to one server makes no sense in most cases; the server is unlikely to handle more. But if our goal is a DDoS, then that is a different forum :-) - KoVadim
  • Not quite. The question about 6 requests arose because such a limit exists at all, and it can be reformulated as: "Why exactly 6, and not 5 or 7, or some other number?". If "the server is unlikely to handle more" parallel requests from the same connection, then it turns out this is a deliberate, artificial restriction on the server side for safety reasons, since the same parallel requests coming from different client hosts can be orders of magnitude more numerous. Thank you very much. - alexis031182
  • Why it is exactly 6, I do not know. But I seem to have seen the same figure somewhere in the nginx documentation. Even if we allowed not 6 but 100 requests, it would not help at all (in most cases). The requests would simply pile up in a queue on the server instead of on the client, and the answers would still arrive at the same speed (or perhaps even slower, since the server's memory would be clogged with requests). - KoVadim