My question is: why did Microsoft create its own aliases in WinAPI for type definitions already available in the language? For example, CHAR == char , DOUBLE == double , LPCSTR == const char* , and so on.

  • Most likely, for consistency with the adopted naming scheme: all structure names in WinAPI are written in capital letters (for example, LVITEM ). Accordingly, so that function signatures would look consistent — DOUBLE MyFunctionName(HANDLE handle, INT something) rather than double MyFunctionName(HANDLE handle, int something) — at some point it was decided to typedef all the standard types as well. - Costantino Rupert
  • This is done not only in the Windows API. Almost all large C/C++ projects create their own system of primitive types through a pile of macros and typedefs. Partly for the reasons given by @Kitty, partly to guarantee the size and behavior of a data type and thereby abstract it away from the compiler and processor architecture (see the answer by @KoVadim). Moreover, such a technique for integers is part of the standard and is implemented in the stdint.h header file. See, for example, a lengthy treatment of this in Boost: boost.org/doc/libs/1_53_0/boost/cstdint.hpp - igumnov
  • Macros? Where did you see such macros at Microsoft? These types have always been typedef names, not macros. - AnT

2 answers

Everything is very simple. The standard types come with too few guarantees. A plain char may be either signed or unsigned, and a double can have a different length:

The type double provides at least as much precision as float , and the type long double provides at least as much precision as double .

That is, the exact length in bytes is not known in general. Yes, it is known for a specific compiler implementation, but in general it is not.

Now imagine a situation where there are many, many lines of code, and one day the compiler developers announce that char will now be unsigned (this situation would once have seemed far-fetched to me, but now that ARM and MIPS processors are entering our lives and 128-bit platforms are around the corner, perhaps it is not). What can you do then? Recheck tons of code? Guess why it works on one platform and not on another?

Therefore, they define their own types with guaranteed behavior and with names that are clear (easy to search for and to read) and not overly long (compare CHAR and signed char ).

The situation with types like LPCSTR is even more interesting. I remember, some time ago, when I was just beginning to study C and C++, I needed to rewrite a small piece of code that worked with three-dimensional arrays (for the department where I studied). Passing by pointer, when you have three stars in a row, is thoroughly confusing. But if you declare a type typedef int*** matrix; , then passing by reference becomes much simpler ( void func(matrix * data); ), and you no longer need to think about how many asterisks to put. With a well-chosen set of type aliases, productivity increases noticeably.

This applies to almost any API : OpenGL and Direct3D likewise define their own types. Even the standard stddef.h / cstddef header provides types such as size_t , ptrdiff_t , etc.

Here is a simple example: suppose your project uses pointer arithmetic on a 32-bit platform and sometimes stores a pointer in an int . Then you want to move the project to a 64-bit platform, and int may no longer be able to hold a pointer, because sizeof(void *) differs from sizeof(int) there. Now you have to replace that int with a 64-bit type everywhere, whereas if you had used a pointer-sized type such as intptr_t (or ptrdiff_t for pointer differences) from the start, no extra work would be needed.

WinAPI has analogous types: INT_PTR / UINT_PTR , LONG_PTR / ULONG_PTR , DWORD_PTR ...

The same goes for the types with guaranteed widths, such as UINT8 (1 byte), UINT16 (2 bytes), UINT32 (4 bytes), UINT64 (8 bytes)...

P.S. Even this, however, is not a panacea: sizeof(int) == sizeof(__int32) can be true on one platform and false on another.