Funny results
Test code:
let arr = [];
for (let e = 0; e < 1000; e++) arr.push(e); // build a 1000-element test array

function run(count) {
    let f = [], s = [];
    count = parseInt(count) || 0;
    // first series: read arr.length on every iteration (no caching)
    for (let e = 0; e < count; e++) {
        let start = Date.now();
        for (let i = 0; i < arr.length; i++) null;
        f.push(Date.now() - start);
    }
    // second series: cache the length in a local variable
    for (let e = 0; e < count; e++) {
        let start = Date.now(), length = arr.length;
        for (let i = 0; i < length; i++) null;
        s.push(Date.now() - start);
    }
    return { first: f, second: s };
}

[1000, 10000, 100000, 1000000].forEach(item => {
    let tmp = run(item);
    console.info(`Array size: ${arr.length}\n` +
        `Number of iterations: ${item}\n\n` +
        `Without caching (average): ${tmp.first.reduce((a, e) => a + e) / tmp.first.length}\n` +
        `With caching (average): ${tmp.second.reduce((a, e) => a + e) / tmp.second.length}`);
});
Chrome 51:
(screenshot of the console output)

Firefox 47:
(screenshot of the console output)
As you can see, Firefox only profits from caching on small workloads: the bigger the run, the smaller the gain, and it can even turn into a loss.
Chrome shows the same pattern, but apparently handles the caching more efficiently.
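One caveat about the numbers: Date.now() only has millisecond resolution, and an empty loop body is an easy target for a JIT to optimize away, so the averages can be misleading. If you want to repeat the measurement with finer timing, here is a minimal sketch using performance.now(); this is only an illustration under the assumption of a browser (or any environment exposing the Performance API), not the test the screenshots were produced with.

// Same idea as above, but timed with performance.now() for sub-millisecond resolution.
const data = Array.from({ length: 1000 }, (_, i) => i);

function measure(iterations, cached) {
    const start = performance.now();
    for (let e = 0; e < iterations; e++) {
        if (cached) {
            const length = data.length;                  // length read once per pass
            for (let i = 0; i < length; i++) null;
        } else {
            for (let i = 0; i < data.length; i++) null;  // length read every iteration
        }
    }
    return performance.now() - start;
}

[1000, 10000, 100000, 1000000].forEach(iterations => {
    console.info(`iterations: ${iterations}`);
    console.info(`  without caching: ${measure(iterations, false).toFixed(3)} ms`);
    console.info(`  with caching:    ${measure(iterations, true).toFixed(3)} ms`);
});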
Conclusion:
Optimization is a tricky thing: it seems to be there, and yet it can turn out to be nothing but a loss :)
First of all, write human-readable code!
The man-hours saved on not having to puzzle out "how does this work?" outweigh a 4-millisecond optimization.
Only then should you think about optimizing access to length and the like (see the sketch after the conclusion).
And big things like this are best broken down into small pieces.
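For completeness, a small sketch of the readable iteration idioms next to the hand-cached length. None of this comes from the test above; it only illustrates the "readable first" point, and the variable names are made up for the example.

const items = ['a', 'b', 'c'];

// Readable options: let the language do the iteration.
for (const item of items) {
    console.log(item);
}
items.forEach(item => console.log(item));

// The micro-optimized classic: cache the length in the loop header.
// Only worth reaching for after the readable version proves too slow.
for (let i = 0, len = items.length; i < len; i++) {
    console.log(items[i]);
}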