I mean this construction:

    var length = arr.length;
    for (var i = 0; i < length; i++) { }

For me the difference is several-fold:

    arr = [];
    for (var i = 0; i < 100000000; i++) {
        arr[i] = i;
    }

    var b = Date.now();
    for (var i = 0; i < arr.length; i++) { }
    console.log(Date.now() - b);

    b = Date.now();
    var length = arr.length;
    for (var i = 0; i < length; i++) { }
    console.log(Date.now() - b);

https://jsfiddle.net/q1tLcyap/
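A minimal sketch of a slightly more careful measurement, assuming a browser environment (`bench` is an ad-hoc helper name, not from any library): `performance.now()` gives sub-millisecond resolution, and averaging several runs smooths out one-off noise such as GC pauses:

    // Ad-hoc benchmarking helper (illustrative, not a rigorous harness)
    function bench(label, fn, runs = 5) {
        let total = 0;
        for (let r = 0; r < runs; r++) {
            const start = performance.now();
            fn();
            total += performance.now() - start;
        }
        console.log(label, (total / runs).toFixed(2), 'ms (avg)');
    }

    const data = new Array(10000000).fill(0);

    bench('uncached', () => {
        for (let i = 0; i < data.length; i++) { }
    });

    bench('cached', () => {
        const length = data.length;
        for (let i = 0; i < length; i++) { }
    });

Note that empty loop bodies invite the optimizer to remove the loop entirely (a comment below makes this point), so numbers from sketches like this should be taken with a grain of salt.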


3 Answers

If the body of the loop is not empty, any possible performance gain will amount to single-digit percentages at most.

And if the body of the loop works with the DOM, you will not notice any gain at any array length.

Don't skimp on matches. Write code that is easier to read.
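If readability is the goal, modern iteration idioms avoid manual index bookkeeping altogether; a small sketch using only standard ES2015 features:

    const items = ['a', 'b', 'c'];

    // for...of: no counter, no length to cache or mis-cache
    for (const item of items) {
        console.log(item);
    }

    // forEach: the callback also receives the index when needed
    items.forEach((item, i) => console.log(i, item));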

  • The question is probably about the internals of the JS engine implementation: is the array's length computed every time, or is it stored in an internal "variable" that changes only when elements are added/removed? (See the sketch after this comment list.) - Sergiks
  • Even though that is how it is in a real situation, either I "cheated" somewhere in this test with made-up conditions, or arr.length really does work 3-4 times faster than var length . Fiddle - Regent
  • @Regent: Well, with the naive optimizer in the current version of Chrome, when the length is not cached, loop-pattern recognition kicks in and the loop gets optimized away. - VladD
  • @Regent so it seems you need to fill the array first... and then measure... jsfiddle.net/6wm45b33/1 - Alexey Shimansky
  • For me the difference is several-fold: jsfiddle.net/q1tLcyap - user208916
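On the internals question raised in the comments: at the language level, the `length` of a plain array is an own data property that the engine keeps up to date when elements are added or removed, not something recomputed by counting on every read. A small sketch of the observable behavior (standard JavaScript, nothing engine-specific):

    const arr = [1, 2, 3];

    // length is an own, writable, non-enumerable data property
    console.log(Object.getOwnPropertyDescriptor(arr, 'length'));
    // { value: 3, writable: true, enumerable: false, configurable: false }

    arr.push(4);             // updates the stored length as a side effect
    console.log(arr.length); // 4

    arr.length = 2;          // assigning to length truncates the array
    console.log(arr);        // [1, 2]

Whether a particular engine additionally hoists the `length` read out of a hot loop is a separate optimization question, discussed further down.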

Funny results

Test code:

    let arr = [];
    for (var e = 0; e < 1000; e++) arr.push(e);

    function run(count) {
        let f = [], s = [];
        count = parseInt(count) || 0;

        // Variant 1: length read from the array on every iteration
        for (let e = 0; e < count; e++) {
            let start = Date.now();
            for (let i = 0; i < arr.length; i++) null;
            f.push(Date.now() - start);
        }

        // Variant 2: length cached in a local variable
        for (let e = 0; e < count; e++) {
            let start = Date.now(), length = arr.length;
            for (var i = 0; i < length; i++) null;
            s.push(Date.now() - start);
        }

        return { first: f, second: s };
    }

    [1000, 10000, 100000, 1000000].forEach(item => {
        let tmp = run(item);
        console.info(`Array size: ${arr.length}\nNumber of iterations: ${item}\n\nWithout caching (average): ${tmp.first.reduce((a, e) => a += e) / tmp.first.length}\nWith caching (average): ${tmp.second.reduce((a, e) => a += e) / tmp.second.length}`);
    });

Chrome 51:

[screenshot: timings without vs. with length caching in Chrome 51]

Firefox 47:

[screenshot: timings without vs. with length caching in Firefox 47]

As you can see, for Firefox caching only pays off for small structures: the larger the array, the smaller the benefit (it even dips into negative territory).
Chrome shows the same pattern, but apparently caches more effectively.


Conclusion:
Optimization is a tricky thing. The gain seems to be there, and yet it can come out as a net loss :)
First of all, write human-readable code!
The man-hours saved on figuring out "how does this work?" will outweigh a 4-millisecond optimization.
Only after that think about optimizing access to the length and so on.
Big tasks like this need to be broken down into small pieces.

  • If you run these measurements independently several times, you will find that the times differ from run to run and the winner sometimes changes drastically. - avp 2:49 pm
  • I ran it several times under roughly the same load (Photoshop, Storm, a bunch of tabs, GTA5 and odds and ends); the results were about the same, nothing changed dramatically, ±2 ms. I don't know how to do a professional, exhaustive test, but this one is not bad either, it seems. - user207618 2:55 pm
  • You are not measuring what you think you are measuring. - Arnial
  • @Arnial, maybe. Though I do at least "measure". - user207618
  • In both cases, the length lookup will end up hoisted outside the loop. There is an article from a V8 developer on this topic, and you can also check for yourself what the browser will actually do (see the sketch after this comment list). - Arnial
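One way to check for yourself, as suggested above, without guessing at engine internals: iterate over an array-like object whose `length` is a counting getter. A getter may have side effects, so the engine cannot legally hoist it out of the loop, which makes every read visible (an illustrative sketch; real arrays behave differently):

    let reads = 0;
    const arrayLike = {
        0: 'a', 1: 'b', 2: 'c',
        get length() {
            reads++;  // count every .length access
            return 3;
        }
    };

    for (let i = 0; i < arrayLike.length; i++) { }
    console.log(reads); // 4: one read per loop-condition check

    reads = 0;
    const cached = arrayLike.length; // read once, up front
    for (let i = 0; i < cached; i++) { }
    console.log(reads); // 1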

There used to be a test comparing all the iteration variants, called foreach-vs-loop; it is no longer available, but you can search for it by images. In short: there is no difference; with caching it even came out slightly slower, which was surprising, and the reverse for turned out faster than the direct one. P.S.: you can find a saved copy, for example via Yandex; it is certainly not a complete test, but it covers the question you are interested in. P.S.: the speed of assignment in the loop declaration is surprising; I would even say it does not merely surprise, it astonishes.
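For reference, the "reverse for" mentioned above is usually written as below; `length` is read exactly once, in the initializer, so it is cached implicitly (a standard-JavaScript sketch):

    const arr = ['a', 'b', 'c'];

    // The condition is the truthiness of i after the post-decrement,
    // so the body sees i = 2, 1, 0 and the loop stops at 0.
    for (let i = arr.length; i--; ) {
        console.log(arr[i]); // visits 'c', 'b', 'a'
    }

The obvious trade-off is that it iterates back to front, which only works when order does not matter.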

  • The for loop (cached) is not much faster :P - user208916
  • Statistical error really comes into play here; in practice all the engines cache this anyway. And by the way, the reverse for has now stopped being faster; apparently V8 has been optimized :) - pnp2000
  • No, well, the reverse for is already brute force :D - user208916