While writing a program I ran into a question I could not find an answer to. I measure the execution time of a single function, with the same parameters, in nanoseconds, and I get a different result every time (in Java):

     -- 0,000087036
     -- 0,000084647
     -- 0,000082940
     -- 0,000101713
     -- 0,000085330
     -- 0,000087377
     -- 0,000082598
     -- 0,000085330
     -- 0,000081916
     -- 0,000084306
     -- 0,000081916
     -- 0,000082940
     -- 0,000083622
     -- 0,000083964
     -- 0,000081916
     -- 0,000081575
     -- 0,000095227
     -- 0,000081916
     -- 0,000087036
     -- 0,000081575
    Average value -- 0,000085244

I tried the same thing in C#, and the values differ there as well. It may be obvious that the timings cannot coincide at such fine resolution, but I would like to know the specific reason. How is this explained, and what does it depend on? Java code:

    public class MainFloat {
        final static int RAND_MAX = 32767;

        public static void main(String[] args) {
            int[] check = new int[] {1000, 5000, 10000, 20000, 30000, 40000, 50000,
                    60000, 70000, 80000, 90000, 100000, 200000, 300000, 400000,
                    500000, 600000, 700000, 800000, 900000, 1000000, 2500000,
                    5000000, 10000000, 15000000, 20000000};
            double counter = 0;
            double timeSpent = 0;
            for (int i = 0; i < check.length; i++) {
                int N = check[i];
                float[] a = new float[N];
                float[] b = new float[N];
                float[] c = new float[N];
                float[] f = new float[N];
                float[] x = new float[N];
                // Generate a random, diagonally dominant tridiagonal system.
                for (int h = 1; h <= N - 1; h++) {
                    do {
                        a[h] = (float) ((h == 1) ? 0 : Math.random() * RAND_MAX);
                        b[h] = (float) ((h == N - 1) ? 0 : Math.random() * RAND_MAX);
                        c[h] = (float) (Math.random() * RAND_MAX);
                    } while (Math.abs(c[h]) < Math.abs(a[h]) + Math.abs(b[h]));
                    f[h] = (float) (Math.random() * RAND_MAX);
                }
                counter = 0;
                for (int j = 0; j < 20; j++) {
                    // nanoTime() returns long; keeping it as long avoids precision loss.
                    long startTime = System.nanoTime();
                    Progonka(check[i] - 1, a, b, c, f, x);
                    timeSpent = System.nanoTime() - startTime;
                    System.out.printf(" -- %8.9f%n", timeSpent / 1_000_000_000);
                    counter += timeSpent / 1_000_000_000;
                }
                counter /= 20;
                System.out.printf("%8.9f%n", counter);
            }
            System.out.printf("END");
        }

        // Thomas algorithm ("progonka") for a tridiagonal system: the forward
        // sweep computes the alfa/beta coefficients, back substitution recovers x.
        static void Progonka(int np, float[] ap, float[] bp, float[] cp, float[] fp, float[] xp) {
            float[] alfa = new float[np + 1];
            float[] beta = new float[np + 1];
            for (int ip = 1; ip < np; ip++) {
                alfa[ip + 1] = bp[ip] / (cp[ip] - ap[ip] * alfa[ip]);
                beta[ip + 1] = (fp[ip] + ap[ip] * beta[ip]) / (cp[ip] - ap[ip] * alfa[ip]);
            }
            // Denominator uses ap[np] * alfa[np], consistent with the forward sweep.
            xp[np] = (fp[np] + ap[np] * beta[np]) / (cp[np] - ap[np] * alfa[np]);
            for (int ip = np - 1; ip >= 1; ip--) {
                xp[ip] = alfa[ip + 1] * xp[ip + 1] + beta[ip + 1];
            }
        }
    }
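Part of the jitter is the timer itself: System.nanoTime() has platform-dependent resolution and nonzero call overhead, so even two back-to-back calls rarely return the same delta. Here is a minimal sketch (not part of the original program) that estimates the smallest observable nanoTime() step on a given machine:

    public class TimerResolution {
        public static void main(String[] args) {
            // Smallest positive gap between two consecutive nanoTime() calls.
            // On typical desktop JVMs this lands in the tens of nanoseconds,
            // which is already irreducible noise for single-call timings.
            long minGap = Long.MAX_VALUE;
            for (int i = 0; i < 1_000_000; i++) {
                long t0 = System.nanoTime();
                long t1 = System.nanoTime();
                long gap = t1 - t0;
                if (gap > 0 && gap < minGap) {
                    minGap = gap;
                }
            }
            System.out.println("Smallest nanoTime() step observed: " + minGap + " ns");
        }
    }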

  • I think the question is stupid, because the CPU load differs from one moment to the next. That alone gives you different execution times, especially at this resolution. - Dred
  • As already said: the concurrent environment the process runs in, the virtual machine's "warm-up" (JIT) optimizations, and all sorts of caches. - free_ze
  • I disagree that the question is stupid; in my opinion it is a very useful one. Performance measurement in general is a hard topic: a method's speed can depend on many factors. For example, the first run of a method in .NET can be very slow, since the method is being compiled at that moment. After that, runs are more or less similar, but you should not expect them to match to the nanosecond, since, as already mentioned, something may be running in parallel (the GC, for example, or other processes) and loading the system (see the sketch after these comments). - tym32167
  • Thank you all very much) I got the answer I needed. - Systems
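To compensate for the warm-up effect the commenters describe, a common pattern is to run the method many times before timing it (so the JIT has already compiled it) and then report the minimum of many measured runs rather than a single sample. Below is a minimal sketch of that idea; bestOfRuns is a hypothetical helper written for illustration, and a dedicated harness such as JMH handles this (plus dead-code elimination and other pitfalls) far more rigorously:

    // Hypothetical helper: warm up, then time 'task' repeatedly and keep the best run.
    // Usage with the code from the question, roughly:
    //   bestOfRuns(() -> Progonka(N - 1, a, b, c, f, x), 1_000, 20)
    static double bestOfRuns(Runnable task, int warmupRuns, int measuredRuns) {
        // Warm-up: give the JIT a chance to compile and optimize the hot path
        // before any measurement is recorded.
        for (int i = 0; i < warmupRuns; i++) {
            task.run();
        }
        // Measurement: the minimum is the sample least disturbed by GC pauses,
        // OS scheduling, and other background activity.
        long best = Long.MAX_VALUE;
        for (int i = 0; i < measuredRuns; i++) {
            long start = System.nanoTime();
            task.run();
            long elapsed = System.nanoTime() - start;
            if (elapsed < best) {
                best = elapsed;
            }
        }
        return best / 1e9; // seconds
    }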
