
Signed Integer Conversions II

C allows a larger-sized integer to be assigned to a smaller-sized integer type without a cast!

This can lead to loss of precision (i.e., a different value) if the larger-sized integer doesn't 'fit' in the smaller size.

How does the conversion work? It simply drops the most significant (leading) bytes. For example:

char c1;  int x1 = 0x00000041; //   65
char c2;  int x2 = 0xFFFFFF81; // -127
char c3;  int x3 = 0x00000081; //  129
char c4;  int x4 = 0xFFFFFF41; // -191

c1 = x1; // c1 is now 0x41 or   65; int 65 'fits' in 8 bits (char)
c2 = x2; // c2 is now 0x81 or -127; int -127 'fits' in 8 bits (char)
c3 = x3; // c3 is now 0x81 or -127; int 129 does not 'fit' in 8 bits
c4 = x4; // c4 is now 0x41 or   65; int -191 does not 'fit' in 8 bits
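
To watch the truncation happen, here is a minimal complete program (a sketch assuming char is signed and 8 bits wide and int is 32 bits, as on most common platforms):

#include <stdio.h>

int main(void) {
    char c1, c2, c3, c4;
    int x1 = 0x00000041; //   65
    int x2 = 0xFFFFFF81; // -127
    int x3 = 0x00000081; //  129
    int x4 = 0xFFFFFF41; // -191

    c1 = x1;  c2 = x2;  c3 = x3;  c4 = x4;

    // Each char is sign-extended back to int for printing;
    // a typical machine prints: 65 -127 -127 65
    printf("%d %d %d %d\n", c1, c2, c3, c4);
    return 0;
}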
   

A 32-bit int value will fit in 8 bits if (and only if)

 bits 31:8 are the same as bit 7      

0xFFFFFF81 = 1111 1111 1111 1111 1111 1111 1000 0001 (yes)

0xFFFFFF41 = 1111 1111 1111 1111 1111 1111 0100 0001 (no)

0xFFFF4F81 = 1111 1111 1111 1111 0100 1111 1000 0001 (no)
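
This condition can be checked with a shift. The sketch below uses a hypothetical helper, fits_in_8_bits, and assumes a 32-bit int with an arithmetic (sign-extending) right shift on signed values, which is what mainstream compilers provide:

// Returns 1 if the int value x would survive truncation to 8 bits.
// x >> 7 smears bit 7 across bits 31:7, so the result is 0 when
// bits 31:7 are all zero and -1 (all ones) when they are all one.
int fits_in_8_bits(int x) {
    return (x >> 7) == 0 || (x >> 7) == -1;
}

For the examples above: 0xFFFFFF81 >> 7 is -1 (fits), while 0xFFFFFF41 >> 7 is -2 and 0xFFFF4F81 >> 7 is -353 (neither fits).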

   

An int value in x will 'fit' in a char ch if x still has its original value after these assignments:

    ch = x;
    x = ch;
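
That round trip translates directly into a test function (a sketch using a hypothetical helper, fits_in_char; note that both the truncation in ch = x and whether plain char is signed at all are implementation-defined, which is exactly why code like this deserves care):

// Returns 1 if the int value x survives a round trip through char.
int fits_in_char(int x) {
    char ch = x;          // the high bytes may be dropped here
    return (int)ch == x;  // equal only if nothing was lost
}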
   

