That won't help. Suppose you have a good API that takes 0.05 s per request on average (including all the overhead of sending the request, processing it, and delivering the response to your machine). Then 19000 / 200 × 0.05 s ≈ 4.75 s, just under 5 seconds of real time. Even with a faster and more stable API responding in 0.01 s, that is still about one second of real time.
It doesn't matter what you do or how you do it on your side if 99% of the time is spent just waiting for the result of an API call.
If the API owner doesn't mind and doesn't start tightening the rate limits (hundreds of requests per second from a single script is quite a lot), you can use curl_multi and execute the API requests in parallel. In the unhappier case where the API is not HTTP-based, write your own client on top of non-blocking sockets. Either way there is a good chance of hitting the API's usage limits, or simply dumping a flurry of requests on an external system that is not ready for such a load.
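A minimal curl_multi sketch of what "execute requests in parallel" could look like; the endpoint URL, the list of ids, and the batch size of 50 are illustrative assumptions, not details from the original question:

```php
<?php
// Hypothetical work items and a batch size kept well under the provider's rate limit.
$ids = range(1, 19000);
$batchSize = 50;
$results = [];

foreach (array_chunk($ids, $batchSize) as $chunk) {
    $mh = curl_multi_init();
    $handles = [];

    foreach ($chunk as $id) {
        $ch = curl_init('https://api.example.com/items/' . $id); // assumed URL
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_multi_add_handle($mh, $ch);
        $handles[$id] = $ch;
    }

    // Run all handles in the batch concurrently.
    do {
        $status = curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh); // wait for socket activity instead of busy-looping
        }
    } while ($running && $status === CURLM_OK);

    // Collect responses and clean up.
    foreach ($handles as $id => $ch) {
        $results[$id] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
}
```

Batching like this also gives you a natural knob for staying under whatever request-per-second limit the API owner enforces.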
A friendlier option is to show the user a progress bar and fill it as the requests are processed. That is, make it clear in the interface right away that the task takes time and that the user has not been forgotten: their task is being worked on.
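A rough sketch of how the script could report that progress; the progress.json file name and the idea of having the page poll it are my assumptions, not something from the original answer:

```php
<?php
// After each batch, record how far we've gotten so the front end can poll
// this file and advance the progress bar.
function reportProgress(int $done, int $total): void
{
    file_put_contents(
        __DIR__ . '/progress.json',
        json_encode(['done' => $done, 'total' => $total]),
        LOCK_EX
    );
}

$ids   = range(1, 19000); // hypothetical work items, as in the sketch above
$total = count($ids);
$done  = 0;

foreach (array_chunk($ids, 50) as $chunk) {
    // ... execute the batch of API requests here (e.g. via curl_multi) ...
    $done += count($chunk);
    reportProgress($done, $total);
}
```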
And the best option, of course, would be to convince the owners of the external system to add the corresponding capability on their end.