On an HP xv9300 Opteron workstation with a fully patched RH WS3, the following program:
----------------------------------------------
#include <stdio.h>

int main()
{
    unsigned char buffer[4] ;
    buffer[0] = buffer[1] = buffer[2] = buffer[3] = 0x82 ;

    unsigned long L ;
    L = buffer[3] + ( buffer[2] << 8 ) + ( buffer[1] << 16 ) + ( buffer[0] << 24 ) ;

    printf( "L = 0x%08x\n", L ) ;
    printf( "L / 2 = 0x%08x\n", L / 2 ) ;
    printf( "L >> 1 = 0x%08x\n", L >> 1 ) ;
    printf( "\n" ) ;

    L = 0x81818181 ;
    printf( "L = 0x%08x\n", L ) ;
    printf( "L / 2 = 0x%08x\n", L / 2 ) ;
    printf( "L >> 1 = 0x%08x\n", L >> 1 ) ;

    return 0 ;
}
-------------------------------------------------------------------------
when compiled with g++, gives the following result:
------------------------------------------------------------------
L = 0x82828282
L / 2 = 0xc1414141
L >> 1 = 0xc1414141

L = 0x81818181
L / 2 = 0x40c0c0c0
L >> 1 = 0x40c0c0c0
-------------------------------------------------------------------
The first set of results is clearly wrong; the math has been done with signed arithmetic.
I just noticed that the second part of my program should have had L = 0x82828282, which would then make the bug in the first part more obvious. One for me, one for gcc...
There is nothing wrong with what GCC does (and the testcase is invalid). printf is a vararg function, so the type of each argument you pass must match the corresponding format specifier: either %08lx with L (resp. L / 2, L >> 1), or %08x with (unsigned) L, etc. If you use the former, you will see more clearly what GCC is doing in the first case:

L = 0xffffffff82828282
L / 2 = 0x7fffffffc1414141
L >> 1 = 0x7fffffffc1414141

which is expected: under the ISO C and ISO C++ rules, the expression ( buffer[0] << 24 ) has type int (unsigned char is promoted to (signed) int), so (((unsigned char) 0x82) << 24) is -2113929216 on two's-complement targets where int is 32-bit, and when converted to unsigned long it becomes 0xffffffff82000000.
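For illustration only (this is not from the original report), a minimal sketch of one way to rewrite the testcase so that both problems go away: cast each byte to unsigned long before shifting, so the arithmetic is done in an unsigned type, and print with %08lx so the specifier matches the argument. It assumes an LP64 target such as the Opteron/RH WS3 setup above.
------------------------------------------------------------------
#include <stdio.h>

int main()
{
    unsigned char buffer[4] ;
    buffer[0] = buffer[1] = buffer[2] = buffer[3] = 0x82 ;

    /* Cast each byte to unsigned long before shifting, so the shifts
       happen in an unsigned type and no sign extension can occur.  */
    unsigned long L = (unsigned long) buffer[3]
                    + ( (unsigned long) buffer[2] << 8 )
                    + ( (unsigned long) buffer[1] << 16 )
                    + ( (unsigned long) buffer[0] << 24 ) ;

    /* %lx matches the unsigned long arguments.  */
    printf( "L = 0x%08lx\n", L ) ;
    printf( "L / 2 = 0x%08lx\n", L / 2 ) ;
    printf( "L >> 1 = 0x%08lx\n", L >> 1 ) ;

    return 0 ;
}
------------------------------------------------------------------
With those changes the first set of results comes out as L = 0x82828282, L / 2 = 0x41414141, L >> 1 = 0x41414141, which is presumably what was expected.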