I need to execute a certain function once every 20 ms after it last ran (for example). To do this, I record a timestamp right after it executes:

    functime = std::chrono::high_resolution_clock::now();

Then some more work is done (I checked: it takes a few microseconds). Accordingly, before the next call of this function, I pause as follows:

    if (std::chrono::milliseconds(20) > (std::chrono::high_resolution_clock::now() - functime)) {
        std::this_thread::sleep_for(std::chrono::milliseconds(20) -
                                    (std::chrono::high_resolution_clock::now() - functime));
    }

But when I check, it turns out that the function runs every 0.031201 seconds, not 0.020000. Why is that?

P.S. Before I made some recent changes to this program's code (changes, in general, unrelated to this part), everything worked fine. I work in VS2013.

UPDATE1 The function being called sends RTP packets, which must go out every 20 ms. Several "clients" are connected to the program and themselves send the same kind of RTP packets (the data is received in separate threads, one per client). Sending was supposed to happen in their parent thread, so as not to load the CPU with polling for data availability. Accordingly, the incoming packets are processed before sending, and then they are sent back to all clients, also once every 20 ms (so a pause is needed only before sending to the first client; it also suspends the next round of data processing for 20 ms, which is required for normal operation). And although the data processing takes less than a microsecond, I now get a delay of 31.2 ms instead of 20 ms.

UPDATE2 I found out that the computer is being dumb. In a test project:

    int main() {
        using namespace std::chrono;
        for (int i = 0; i < 100; ++i) {
            steady_clock::time_point t1 = steady_clock::now();
            std::this_thread::sleep_for(std::chrono::milliseconds(20));
            steady_clock::time_point t2 = steady_clock::now();
            duration<double> time_span = duration_cast<duration<double>>(t2 - t1);
            std::cout << " " << time_span.count();
        }
        system("pause");
        return 0;
    }

It prints values around 0.031, sometimes 0.032... Is there any way to do the same, but without std::this_thread::sleep_for?

I tried replacing it with boost::this_thread::sleep_for(boost::chrono::milliseconds(20)); the result is random, in the range of 15 to 32 ms.

Perversions of the following form (with various implementation methods: steady_clock, etc.):

    int main() {
        auto a = GetTickCount();
        while ((GetTickCount() - a) < 20) { ; }
        cout << GetTickCount() - a;
        system("pause");
        return 0;
    }

also give a result of 31-32 ms...

Setting real-time priority only increased latency.

  • That is, put simply, you need 20 ms between function calls? In general, it is very hard to judge where the extra 11 ms comes from; it depends strongly on the context. Maybe you have 30 more heavy threads running there. And in any case, Windows is not an RTOS and does not guarantee accurate scheduling. - Cerbo
  • Check the specs; as I recall, sleep_for guarantees a sleep time of at least the specified amount. More is allowed. - Kromster
  • @Cerbo Yes, I have a few more threads spinning, but they just read data from a socket and write it to a buffer (a variable of type uint8_t *). They hardly interfere, considering that the delay used to be exactly 20 milliseconds. - Dmitry
  • @Dmitry Try playing with the priorities. - Cerbo
  • Actually, I don't remember where, but I read that such things need a real-time system, and that the language itself guarantees nothing: your code runs whenever the scheduler decides to hand it control... so I agree with Cerbo's advice: try raising the priority as much as possible. - Harry

5 answers

There is a suspicion that Sleep and GetTickCount are not accurate enough to measure such small time intervals. As an example:

    #include <cstdio>
    #include <inttypes.h>
    #include <windows.h>

    int main() {
        LARGE_INTEGER Freq, Time, Current;
        QueryPerformanceFrequency(&Freq);
        int64_t Delay = (int64_t)(Freq.QuadPart * 0.020); // ticks in 20 ms
        for (int i = 0; i < 20; i++) {
            DWORD a = GetTickCount();
            QueryPerformanceCounter(&Time);
            do {
                QueryPerformanceCounter(&Current);
            } while (Current.QuadPart - Time.QuadPart < Delay);
            printf("GetTickCount() - %lums, QueryPerformanceCounter() - %fms\n",
                   GetTickCount() - a,
                   (Current.QuadPart - Time.QuadPart) * 1000.0 / Freq.QuadPart);
        }
        system("pause");
        return 0;
    }

You can clearly see how the values of GetTickCount() jump around while QueryPerformanceCounter() stays steady:

    GetTickCount() - 15ms, QueryPerformanceCounter() - 20.000000ms
    GetTickCount() - 32ms, QueryPerformanceCounter() - 20.000000ms
    GetTickCount() - 15ms, QueryPerformanceCounter() - 20.000000ms
    GetTickCount() - 31ms, QueryPerformanceCounter() - 20.000000ms
    GetTickCount() - 16ms, QueryPerformanceCounter() - 20.000000ms
    GetTickCount() - 31ms, QueryPerformanceCounter() - 20.000000ms
    GetTickCount() - 16ms, QueryPerformanceCounter() - 20.000000ms
    GetTickCount() - 31ms, QueryPerformanceCounter() - 20.000000ms
    ...

Added:

Further examination of the issue showed that Sleep() is actually relatively accurate (at least on my machine), and the inaccuracy lies precisely in how the time intervals are measured:

    #include <algorithm>
    #include <cstdio>
    #include <windows.h>

    int main() {
        LARGE_INTEGER Freq;
        LARGE_INTEGER Times[501];
        QueryPerformanceFrequency(&Freq);
        printf("Timer frequency: %lldHz, Resolution: %fus\n",
               Freq.QuadPart, (1E6 / Freq.QuadPart));
        QueryPerformanceCounter(&Times[0]);
        for (int i = 1; i <= 500; i++) {
            Sleep(20);
            QueryPerformanceCounter(&Times[i]);
        }
        int maxDiff = 0;
        int minDiff = 0x7fffffff;
        for (int i = 1; i <= 500; i++) {
            minDiff = std::min(minDiff, int(Times[i].QuadPart - Times[i-1].QuadPart));
            maxDiff = std::max(maxDiff, int(Times[i].QuadPart - Times[i-1].QuadPart));
        }
        printf("Sleep(20) accuracy is %f - %f ms\n",
               (minDiff * 1000) * 1.0 / Freq.QuadPart,
               (maxDiff * 1000) * 1.0 / Freq.QuadPart);
        system("pause");
        return 0;
    }

    Timer frequency: 2929414Hz, Resolution: 0.341365us
    Sleep(20) accuracy is 19.218178 - 20.040868 ms
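
For comparison, here is a minimal portable sketch of the same measurement using std::chrono::steady_clock instead of QueryPerformanceCounter (my addition, not from the original answer; the numbers will of course differ per machine):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        using namespace std::chrono;
        double minMs = 1e9, maxMs = 0.0;
        auto prev = steady_clock::now();
        for (int i = 0; i < 100; ++i) {
            std::this_thread::sleep_for(milliseconds(20));
            auto now = steady_clock::now();
            // elapsed time since the previous wakeup, in milliseconds
            double ms = duration<double, std::milli>(now - prev).count();
            minMs = std::min(minMs, ms);
            maxMs = std::max(maxMs, ms);
            prev = now;
        }
        printf("sleep_for(20ms) accuracy is %f - %f ms\n", minMs, maxMs);
        return 0;
    }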

You call now() twice, which is not entirely fair. Try this:

    auto functime = std::chrono::high_resolution_clock::now();
    // ...
    auto delay = std::chrono::high_resolution_clock::now() - functime;
    if (std::chrono::milliseconds(20) > delay) {
        std::this_thread::sleep_for(std::chrono::milliseconds(20) - delay);
    }

As for scheduling, I advise you to tune the thread priorities so that your thread gets a higher priority than the others. I also strongly recommend not to allocate or release resources (memory, handles, etc.) in the time-critical sections, since those operations take rather unpredictable amounts of time.
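
For illustration, a minimal sketch of what raising the thread priority could look like on Windows (my addition; the function name is made up):

    #include <windows.h>

    // Raise the current thread's priority so the scheduler wakes it
    // closer to the requested time. Values above NORMAL shorten the
    // wakeup latency, but can starve other threads in the process.
    void raise_sender_priority() {
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    }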

  • It turned out the problem was in this_thread::sleep_for itself, so this did not help. - Dmitry
  • Nothing has changed... - Dmitry

As already written above, the sleep functions such as std::this_thread::sleep_for do not guarantee accuracy; almost all of them only promise "no less than the specified time". But that is only half the trouble: suppose you do manage to squeeze the sends out at exactly the specified interval. There is still the network, which is hard to influence.

Several years ago I solved a similar problem, and the solution was the following. The original code was the same as yours: send a packet, calculate how long to "sleep", send the next packet. To this I added code that counted how many packets had been sent in the last few seconds (a simple class with a ring buffer was written to speed this up, so insertion into it is O(1); a sketch of the idea is given at the end of this answer). When sending, I looked at how many packets had actually gone out recently. If not enough packets had been sent, one or two extra iterations were done, and the "sleep" time was also decreased slightly. If, on the contrary, too many had been sent, one packet was skipped. The main thing is not to change the rate drastically, otherwise the system will constantly oscillate between modes or get stuck in one of them. At first I had a bug: when a client lagged badly, the sleep time dropped to zero and dozens of packets were "recovered" at once. It was fixed by simply disconnecting such clients.

The question arises: what about the client? The clients keep a buffer and put the packets there, then play them back at the speed they need. If music is playing, a buffer of 10 packets (0.2 seconds) does not hurt at all. For speech (VoIP), 0.1 seconds can already be noticeable by ear. But some are not scared even by a whole second of delay; the main thing is that this delay does not float around too much.

This method works very well even on bad networks; the main thing is to learn how to drop packets correctly. You cannot just drop a packet: on the receiving side there will most likely be a "click".

By the way, instead of sleep and similar functions I used the fact that my clients are served via poll, and simply adjusted the socket timeout. A plus: no extra thread is needed. A minus: this thread ended up doing a lot more work. But it worked.

And yes, I did not invent this algorithm; this is how many streaming services work.
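
A minimal sketch of the rate-counting idea described above (my interpretation; the class and method names are made up, and a deque stands in for the original ring buffer to keep the sketch short):

    #include <chrono>
    #include <cstddef>
    #include <deque>

    // Counts packets sent within a sliding time window.
    class SendRateCounter {
        std::deque<std::chrono::steady_clock::time_point> sent_;
        std::chrono::seconds window_;
    public:
        explicit SendRateCounter(std::chrono::seconds window) : window_(window) {}

        void onPacketSent() { sent_.push_back(std::chrono::steady_clock::now()); }

        std::size_t packetsInWindow() {
            auto cutoff = std::chrono::steady_clock::now() - window_;
            while (!sent_.empty() && sent_.front() < cutoff)
                sent_.pop_front();
            return sent_.size();
        }
    };

    // Usage idea: with a 20 ms interval we expect ~50 packets per second.
    // If packetsInWindow() is below the target, do one extra send and/or
    // shorten the sleep a little; if above, skip one packet. Corrections
    // must stay small, otherwise the rate oscillates.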

You need to decide what exactly you want.

Either a delay of 20 ms between the moment the function finishes and the moment it starts next time, or a run every 20 ms.

In the first case, over a large number of iterations extra seconds will inevitably accumulate.

In the second case (it can be called a "scheduled call"), you need to schedule the launch time (in practice, of course, by calculating the delay value), starting from the time of the first function call and the iteration number.

That is, for a planned interval between calls equal to plan_delay, the delay before the n-th call is delay = t_start + n * plan_delay - t_now;
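
A minimal sketch of such a scheduled call using std::this_thread::sleep_until with absolute deadlines (my illustration of the formula above, not code from the question):

    #include <chrono>
    #include <thread>

    void run_every_20ms() {
        using namespace std::chrono;
        const auto plan_delay = milliseconds(20); // planned interval
        const auto t_start = steady_clock::now();
        for (int n = 1; n <= 1000; ++n) {
            // ... do the work (e.g. send the packets) here ...
            // Wake up at t_start + n * plan_delay: each deadline is
            // computed from the first call, so errors do not accumulate.
            std::this_thread::sleep_until(t_start + n * plan_delay);
        }
    }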

The problem was in the system timer, or rather in the resolution of the hardware timer. Sometimes it fell back to the default value of 15.6 ms, which produced the scatter.

Solution: added these as the first lines in main:

    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS); // optional, but mine is a real-time application
    timeBeginPeriod(1); // set the timer resolution to 1 ms
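
For completeness, a self-contained sketch of this fix (my addition): timeBeginPeriod lives in winmm.lib, and each timeBeginPeriod call should be paired with timeEndPeriod:

    #include <windows.h>
    #include <mmsystem.h>                 // timeBeginPeriod / timeEndPeriod
    #pragma comment(lib, "winmm.lib")     // both live in winmm

    int main() {
        SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS); // optional
        timeBeginPeriod(1);  // request 1 ms system timer resolution

        // ... the main loop with Sleep/sleep_for goes here ...

        timeEndPeriod(1);    // restore the previous resolution on exit
        return 0;
    }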