I will try to explain.
In any programming language, everyone tries to make sure the program uses as little memory as possible and that there are no leaks. Especially in C++, you have to allocate and free memory correctly.
Each data type occupies a certain amount of memory in bytes.
I will write my examples in C#.
Please explain to me why programmers, knowing, for example, that a loop will run over an array of 10 characters and that this length will be constant, still write
for (int i = 0; i < someVar.Length; i++) { // do something } Why an int counter? Why not sbyte or byte? Yes, maybe within this one scope it makes little difference, but there are a lot of such blocks throughout the code.
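For example, here is the kind of replacement I have in mind (just a sketch; someVar is a placeholder name, and the second loop compiles because the byte counter is implicitly widened to int for the comparison with Length):

char[] someVar = new char[10];

// The usual way, with an int counter:
for (int i = 0; i < someVar.Length; i++) { /* do something */ }

// What I am asking about: the counter would fit into a byte,
// since this array never has more than 10 elements.
for (byte i = 0; i < someVar.Length; i++) { /* do something */ }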
Similarly with arrays. Knowing exactly the length (Correction, since answers have already been posted but the question came out a little confused: not the length, of course, but the maximum values, for example, the numbers will be from 1 to 100), people still often declare
int[] arr = new int[8]; instead of
ushort[] arr = new ushort[8]; or
byte[] arr = new byte[8]; This array will occupy memory until it is destroyed or the program finishes its work.
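For example, if I know in advance that the values will only ever be from 1 to 100, a byte could hold them just as well (a sketch; the name values is made up for illustration):

// byte covers 0..255, so numbers from 1 to 100 fit easily, and the element
// storage is 8 bytes instead of 32 (ignoring the array object's own overhead).
byte[] values = new byte[8];
values[0] = 100;     // fine, within the byte range
// values[1] = 300;  // would not compile: 300 does not fit into a byte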
Hence the question: why does everyone do this? Is there something I don't know yet, or is it just programmer laziness?
1) for (int i = 0; i < someVar.Length; i++) {...} - Length is itself an int, so why should I use a smaller data type for the counter? And what if tomorrow there are suddenly more elements than I expected? Then change it in a bunch of places? Do I need that?
2) "Yes, maybe within this one scope it makes little difference, but there are a lot of such blocks throughout the code" - this is not the place to optimize.
3) ushort[] arr = new ushort[8]; - doesn't that change the meaning of the original array? You had an int array, and now it is a ushort array. The point here is not the array's length but its values. - BOPOH
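A small sketch of what points 1 and 3 mean in practice (the array names small, a and b are made up for illustration):

byte[] small = new byte[300];

// Point 1: Length is an int. A byte counter only works while the array stays
// short; with 300 elements this loop never terminates under the default
// unchecked arithmetic, because i wraps from 255 back to 0 before it can
// reach small.Length (in a checked context it would throw instead).
for (byte i = 0; i < small.Length; i++) { /* ... */ }

// Point 3: changing the element type changes what the array can hold.
int[] a = { 40000, -5 };       // fine for int
// ushort[] b = { 40000, -5 }; // does not compile: -5 is outside 0..65535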