Hello. I'm interested in how to correctly implement a multithreaded web server in C++. The problem is this: I used to think that for a server to be multithreaded it was enough to create a new thread for each client right after accepting its connection, using CreateThread, and so on for every client.
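
For reference, this is roughly the thread-per-connection model I mean; a minimal sketch using POSIX sockets and std::thread as a portable stand-in for WinSock + CreateThread, with handle_client as a made-up handler:

    // Thread-per-connection sketch: one new thread per accepted client.
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    void handle_client(int client_fd) {
        // ... read the request, write a response ...
        close(client_fd);
    }

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(8080);
        bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(listener, SOMAXCONN);

        for (;;) {
            int client_fd = accept(listener, nullptr, nullptr);
            if (client_fd < 0) continue;
            // A fresh, detached thread for every client.
            std::thread(handle_client, client_fd).detach();
        }
    }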

But later I learned that there is no point in creating more threads than there are logical processors in the system, because switching between threads has overhead and the program will run slower.

So what should I do? On the one hand, every Habr article and blog says it is enough to create a thread with CreateThread. On the other hand, various gurus write that 100–200 simultaneous threads seriously hurt performance on an ordinary computer.

    2 Answers

    In general, you have it mostly right, but I would shift the emphasis. A machine really can be loaded down by two hundred threads, but that does not mean they necessarily will load it, or that this design will load it more than the alternatives - any service has a limit beyond which it turns into a pumpkin. Essentially you have three main options: synchronous request processing, which will be very slow; multithreaded request processing; and finally asynchronous processing in one form or another, under which I lump all the event-loop models and the rest. The last option is the most performant but also very complex, and it is needed only in monsters like nginx - for your purposes plain multithreaded processing is enough.

    Inside your server you will almost certainly hit blocking waits more than once - moments when a thread waits for a reply from an external server (a database, say) or from the file system. During those waits the OS hands control to other threads, and you only win from that. The one real problem the multithreaded scheme introduces is the moment when incoming requests start to exceed what the server can handle, and the freshly spawned threads begin to steal CPU time from each other, making performance worse. And that, finally, is the thing I wrote this whole answer for.

    You should not call CreateThread() per request. Creating a thread for every request is bad for two reasons: first, the overhead of spawning a new thread; second, the lack of any control over the number of threads, which is exactly what would solve the overload problem described above. You need a thread pool of fixed size (see the sketch below): the pool's maximum capacity is how many simultaneous requests the server is prepared to process, and newly arrived requests have to wait for a worker thread to free up, which is what keeps the service from turning into a pumpkin. The pool size can be anywhere from the number of processors to several hundred - it is hard to recommend a specific number, but in my current project raising the limit in an embedded server from twenty to five hundred threads gave roughly a tenfold gain (more precisely, tenfold better use of resources; and the figure of five hundred was pulled out of thin air and probably ought to be reduced).
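
    A minimal sketch of such a fixed-size pool, assuming standard C++11 threads and a plain job queue; the names ThreadPool and submit are mine, not any particular library's:

        // Fixed-size thread pool: worker_count threads service a shared job queue.
        #include <condition_variable>
        #include <cstddef>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        class ThreadPool {
        public:
            explicit ThreadPool(std::size_t worker_count) {
                for (std::size_t i = 0; i < worker_count; ++i)
                    workers_.emplace_back([this] { run(); });
            }

            ~ThreadPool() {
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    stopping_ = true;
                }
                cv_.notify_all();
                for (auto& w : workers_) w.join();
            }

            // Queue a job; it sits here until a worker thread becomes free.
            void submit(std::function<void()> job) {
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    jobs_.push(std::move(job));
                }
                cv_.notify_one();
            }

        private:
            void run() {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> lock(mutex_);
                        cv_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                        if (stopping_ && jobs_.empty()) return;
                        job = std::move(jobs_.front());
                        jobs_.pop();
                    }
                    job();  // any blocking waits inside the job are parked by the OS
                }
            }

            std::vector<std::thread> workers_;
            std::queue<std::function<void()>> jobs_;
            std::mutex mutex_;
            std::condition_variable cv_;
            bool stopping_ = false;
        };

    In the accept loop you would then call something like pool.submit([client_fd] { handle_client(client_fd); });, and the pool size becomes the hard upper limit on concurrent work that the next paragraph recommends.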

    In general, do not worry too much about the model - it is more than workable. Just do not forget to cap the service from above.

      Understand the specifics of multithreading correctly: if every thread is doing intensive computation, then obviously you should not run more threads than there are logical processors in the system.

      However, since in a service the threads mostly wait on either the disk or the network, there is no better approach than handing that waiting off to the operating system: create the required number of threads and manage the events in them properly.
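
      To make that concrete, here is a sketch of the kind of I/O-bound handler such a thread would run; while it is blocked in recv() or send(), the OS parks the thread and it consumes no CPU, which is why many more threads than cores is fine here (handle_connection and the hard-coded reply are illustrative, not from the answer):

          // I/O-bound handler: the thread spends most of its life asleep in the kernel.
          #include <sys/socket.h>
          #include <sys/types.h>
          #include <unistd.h>
          #include <string>

          void handle_connection(int client_fd) {
              char buf[4096];
              // The thread sleeps inside recv() until data arrives; no CPU is used.
              ssize_t n = recv(client_fd, buf, sizeof(buf), 0);
              if (n > 0) {
                  std::string reply = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
                  send(client_fd, reply.data(), reply.size(), 0);  // may block again
              }
              close(client_fd);
          }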