As I understand it, the integer type usually takes 4 bytes, including unsigned integer. But there are cases where, depending on the bitness of the processor, an integer can take either 2 bytes or 8. I don't understand how the bitness affects the size of the integer type and what it changes for the developer, other than that the type can hold a wider (or narrower) range of numbers?

UPD

What does it mean to port a program from x32 to x64? Roughly speaking, does it mean using x64 types? If I understand correctly, is the reason a program written for x64 will not run on x32 the extra memory taken up by the data types? Why, then, does Visual Studio let us choose which bitness to build for? How does this affect the size of the program itself and the sizes that the ordinary data types will take?

  • integer type in which programming language? - Grundy
  • @Grundy for example in C++ - raviga

2 answers

There is a well-known table of correspondence between the sizes of the fundamental types and the data model used (sizes in bits):

    Data model   int   long   pointer   Typical platforms
    LP32         16    32     32        16-bit Windows
    ILP32        32    32     32        32-bit Windows, Linux, macOS
    LLP64        32    32     64        64-bit Windows
    LP64         32    64     64        64-bit Linux, macOS

The data model is generally not tied directly to the hardware (the processor's bitness); it depends on the operating system being used (if there is one). Obviously, a 64-bit OS cannot run on a 32-bit processor, but the reverse is entirely possible.
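
To check which data model a particular toolchain uses, it is enough to print the sizes of the fundamental types. A minimal sketch (the sizes in the comments are what an LP64 platform, such as 64-bit Linux, would typically report):

    #include <iostream>

    int main() {
        std::cout << "int:       " << sizeof(int)       << '\n'   // 4
                  << "long:      " << sizeof(long)      << '\n'   // 8 on LP64, 4 on LLP64 and ILP32
                  << "long long: " << sizeof(long long) << '\n'   // 8
                  << "pointer:   " << sizeof(void*)     << '\n';  // 8 on 64-bit, 4 on 32-bit
    }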

The size of the fundamental types in a specific implementation of the compiler (i.e., for a specific platform) is chosen so as to ensure optimal performance.

In addition to increasing the range of values of the fundamental types, a larger bit width allows more memory to be addressed. For example, with 32 bits you can address no more than 4 GB, while with 64 bits you can already use 16 EB.
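
The arithmetic behind those limits: an N-bit address can distinguish 2^N bytes.

    2^32 bytes = 4 294 967 296 bytes                = 4 GiB
    2^64 bytes = 18 446 744 073 709 551 616 bytes   = 16 EiB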

When people talk about porting a program from x32 to x64, they mean, first of all, making sure that the program works in a 64-bit environment, i.e. that nothing breaks because of the change in type sizes. Problems of this kind often arise when the initial development relied on a fixed size of the fundamental types, for example, on the pointer always being 4 bytes. When, in a 64-bit build, it "suddenly" becomes twice as large, the program simply starts to crash.
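
A minimal sketch of that kind of bug: code that stores a pointer in a 32-bit integer happens to work on x32, where sizeof(void*) == 4, but in a 64-bit build the upper half of the address is silently cut off:

    #include <cstdint>

    int main() {
        int value = 42;
        int* p = &value;
        // Fine on x32, but on x64 the 8-byte pointer is truncated to 4 bytes:
        std::uint32_t stored = reinterpret_cast<std::uintptr_t>(p);
        int* back = reinterpret_cast<int*>(stored); // may point anywhere on x64
        return *back;                               // potential crash in a 64-bit build
    }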

Therefore, to keep a program portable between x32 and x64, and to guarantee a specific bit width of calculations where it is needed, it is worth relying on the types from <cstdint>.
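
A short sketch of that advice: the fixed-width types keep their size on every platform, and std::uintptr_t (also from <cstdint>, provided by all mainstream implementations) is guaranteed to survive a round trip with a pointer:

    #include <cstdint>

    static_assert(sizeof(std::int32_t) == 4, "4 bytes on x32 and x64 alike");
    static_assert(sizeof(std::int64_t) == 8, "8 bytes everywhere");
    static_assert(sizeof(std::uintptr_t) >= sizeof(void*), "wide enough for a pointer");

    int main() {
        int value = 42;
        // The portable way to carry a pointer through an integer:
        std::uintptr_t stored = reinterpret_cast<std::uintptr_t>(&value);
        int* back = reinterpret_cast<int*>(stored); // round-trips correctly on x32 and x64
        return *back;
    }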

    To understand this, it is better to write a couple of programs in assembler, for example, a simple calculator.

    The main reason for the different sizes of the int type usually lies in the word size of the processor architecture (i.e. the size of its registers), and also in the compiler itself.

    Usually the correspondence looks like this:

    Architecture   int
    8-bit          16 bits
    16-bit         16 bits
    32-bit         32 bits
    64-bit         32 bits

    The need for 64-bit operating systems, and as a result 64-bit processor architectures, arose from the growth of memory sizes. The point is that a 32-bit OS can address only 4 GB of RAM, because 2^32 = 4294967296 bytes.

    x86 processor registers:

     AL,  BL,  CL,  DL  - 8 bits
     AX,  BX,  CX,  DX  - 16 bits
     EAX, EBX, ECX, EDX - 32 bits
    • About 7 years ago at university we were taught some assembly, but it somehow passed me by. A register is something like temporary data storage: we can write something there using some kind of key and then use the same key to get it back? So it turns out an 8-byte integer is used only when there is an obvious need for it? - raviga
    • Everything is somewhat more complicated: a register is memory located directly inside the processor. An int can be 8 bytes, but only with considerable extra effort on your part, and there are many pitfalls, for example alignment (see the sketch below) - Komdosh
    • there is also packing - Komdosh
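
    To illustrate the alignment and packing mentioned in the last two comments, a minimal sketch (the sizes in the comments are what MSVC, GCC, and Clang typically produce on x64):

        #include <cstdint>
        #include <cstdio>

        // Natural alignment: the compiler inserts 7 bytes of padding after `c`
        // so that the 8-byte `n` starts on an 8-byte boundary.
        struct Padded {
            char c;
            std::int64_t n;
        };

        // Packing (a compiler-specific pragma supported by MSVC, GCC and Clang)
        // removes the padding, at the cost of unaligned access to `n`.
        #pragma pack(push, 1)
        struct Packed {
            char c;
            std::int64_t n;
        };
        #pragma pack(pop)

        int main() {
            std::printf("Padded: size %zu, align %zu\n", sizeof(Padded), alignof(Padded)); // 16, 8
            std::printf("Packed: size %zu, align %zu\n", sizeof(Packed), alignof(Packed)); //  9, 1
        }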