Why does this happen? Code:

unsigned short x = 0 - 1;
unsigned short y = 0 - 1;
unsigned long z = x * y;
cout << hex << x << " \t" << y << " \t" << z << " \t" << x * y << endl;

Result:

 ffff ffff fffffffffffe0001 fffe0001 
  • And what is wrong? - skegg
  • I was wondering why the last two results are different: fffffffffffe0001 and fffe0001. - DUP

2 Answers

@DUP "I was wondering why the last two results are different? Fffffffffffe0001 and fffe0001."

This is on a platform where long is 64 bits. In x * y the operands are promoted by the compiler to int (32 bits), so the product is a negative 32-bit value; when that int is converted to the 64-bit long, the sign bit is extended into the upper bits, giving fffffffffffe0001. On a platform where long is 32 bits, the last two results would be equal (fffe0001).
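
A minimal sketch that makes the intermediate step visible (assuming a typical 64-bit platform with 32-bit int and 64-bit long; the names p, sign_extended and zero_extended are mine, not from the question):

#include <iostream>

int main() {
    unsigned short x = 0xffff, y = 0xffff;

    // x and y are promoted to int, so the product is computed as a 32-bit int.
    // 0xffff * 0xffff does not fit in a signed int (formally this is signed
    // overflow); in practice it wraps to 0xfffe0001, i.e. -131071.
    int p = x * y;

    unsigned long sign_extended = p;                            // fffffffffffe0001 with 64-bit long
    unsigned long zero_extended = static_cast<unsigned int>(p); // fffe0001

    std::cout << std::hex << sign_extended << "\n" << zero_extended << "\n";
}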

In general, be careful with calculations whose operands differ in width and signedness. It is better to assign the operands to intermediate variables of an explicit type before the calculation; there will be fewer surprises.
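
For example, a sketch of this "widen first" approach; the intermediate names wx and wy and the choice of unsigned long are my own assumptions about what such code could look like:

#include <iostream>
using namespace std;

int main() {
    unsigned short x = 0 - 1;
    unsigned short y = 0 - 1;

    // Widen the operands explicitly before multiplying: the arithmetic is then
    // done in unsigned long, so there is no int promotion and no sign extension.
    unsigned long wx = x;
    unsigned long wy = y;
    unsigned long z  = wx * wy;

    cout << hex << z << endl;   // fffe0001 regardless of whether long is 32- or 64-bit
}

A single cast on one operand, such as (unsigned long)x * y, has the same effect.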

    Quirks of type conversion. When you multiply x * y, the operands are promoted to int and you get 0xfffe0001, which as a signed int is negative. When this expression is assigned to the unsigned long z, it is converted to the 64-bit type with sign extension, which gives fffffffffffe0001. If you print the product itself, you get fffe0001 in hexadecimal (-131071 in decimal). Something like that.
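
    For reference, a small sketch showing both faces of that same product, in hexadecimal and in decimal (assuming the usual 32-bit int):

    #include <iostream>
    using namespace std;

    int main() {
        unsigned short x = 0 - 1;   // 0xffff after the unsigned wrap-around
        unsigned short y = 0 - 1;

        // After promotion to int the product is a negative signed value.
        cout << hex << x * y << endl;   // fffe0001
        cout << dec << x * y << endl;   // -131071, the same value in decimal
    }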