This is probably considered "common sense", but I'm posting it anyway in case someone can benefit from it. I might have gotten the idea from Frank's code posted a while back; I don't remember how he implemented his endian conversion routines.
Here's the code:
program Test_Endian;

const
  ByteOrder = $01020304;

{ Copy the 32-bit constant into four separate bytes and test their order. }
function LittleEndianCPU: Boolean;
var
  mByteOrder : Cardinal(32);
  mByte      : array[ 1..4 ] of Cardinal(8);
begin
  mByteOrder := ByteOrder;
  Move(mByteOrder, mByte, 4);
  { On a little-endian CPU the least significant byte comes first. }
  LittleEndianCPU := (mByte[1] = 4) and (mByte[2] = 3) and
                     (mByte[3] = 2) and (mByte[4] = 1);
end;

function BigEndianCPU: Boolean;
var
  mByteOrder : Cardinal(32);
  mByte      : array[ 1..4 ] of Cardinal(8);
begin
  mByteOrder := ByteOrder;
  Move(mByteOrder, mByte, 4);
  { On a big-endian CPU the most significant byte comes first. }
  BigEndianCPU := (mByte[1] = 1) and (mByte[2] = 2) and
                  (mByte[3] = 3) and (mByte[4] = 4);
end;

begin
  WriteLn( 'Little Endian CPU present -> ', LittleEndianCPU );
  WriteLn( '   Big Endian CPU present -> ', BigEndianCPU );
end.
The above code should compile; I translated it from the BP code. The only difference is in the type declarations: mByteOrder must always be 32 bits, and mByte must always be an array of four 8-bit elements, hence the use of Cardinal(x). The way it works is simple, and the code does not depend on any compiler-specific syntax (except for the type declarations), so it should keep compiling correctly in the long run. AFAIK, constants are stored in the native byte order of the target architecture, so by comparing the long value byte by byte, the byte order, and hence the endianness, can be determined. I have tested this code on a Macintosh (the only big-endian machine I currently have access to), and have of course tested it on a PC. The results displayed are correct on both architectures.
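Once you know the byte order, the usual companion is a byte-swap routine for data read from files or the network. Here's a minimal sketch of a 32-bit swap (just an illustration, not Frank's routines; the name SwapLong is made up, and it assumes Borland-style shl/shr and bitwise operators):

function SwapLong( Value : Cardinal(32) ): Cardinal(32);
begin
  { Reverse the byte order: $AABBCCDD becomes $DDCCBBAA }
  SwapLong := ((Value and $000000FF) shl 24) or
              ((Value and $0000FF00) shl  8) or
              ((Value and $00FF0000) shr  8) or
              ((Value and $FF000000) shr 24);
end;

You would only call it when LittleEndianCPU/BigEndianCPU says the data's byte order differs from the host's.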
See ya! Orlando Llanes
"Meine Damen und Herren, Elvis hat soeben das Gebaeude verlassen!"
"Look out fo' flyeeng feet" O__/ a010111t@bc.seflin.org /|____. O <__. /> / \ ____________|_________ http://ourworld.compuserve.com/homepages/Monkey414
The above code should compile, I translated it from the BP code. ... The results displayed are correct in both architectures.
This is very nice, but it requires a runtime test (and a check of the big-endian flag *each* time).
Maybe it is smarter to let the above program write its result to an include file.
The makefile would then always compile and run that program prior to the actual compilation.
That way one would have the flexibility AND a compile-time decision between little and big endian?
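A rough sketch of what I mean (the file name endian.inc, the define names, and the Borland-style Assign/Rewrite file handling are all just assumptions; the detection function is Orlando's from above):

program GenEndian;

{ Writes an include file recording the byte order at build time.
  The makefile runs this before compiling the real program. }

const
  ByteOrder = $01020304;

function LittleEndianCPU: Boolean;
var
  mByteOrder : Cardinal(32);
  mByte      : array[ 1..4 ] of Cardinal(8);
begin
  mByteOrder := ByteOrder;
  Move(mByteOrder, mByte, 4);
  LittleEndianCPU := (mByte[1] = 4) and (mByte[2] = 3) and
                     (mByte[3] = 2) and (mByte[4] = 1);
end;

var
  F : Text;
begin
  Assign(F, 'endian.inc');
  Rewrite(F);
  if LittleEndianCPU then
    WriteLn(F, '{$define ENDIAN_LITTLE}')
  else
    WriteLn(F, '{$define ENDIAN_BIG}');
  Close(F);
end.

The real sources would then {$include endian.inc} and branch with {$ifdef ENDIAN_BIG} where byte order matters, so the test happens once per build instead of at every run.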
Marco van de Voort (MarcoV@Stack.nl) http://www.stack.nl/~marcov/xtdlib.htm
On Wed, 5 Apr 2000, Marco van de Voort wrote:
This is very nice, but it requires a runtime test (and a check of ... between little and big endian?
True, I never thought of it that way :)
See ya! Orlando Llanes
"Meine Damen und Herren, Elvis hat soeben das Gebaeude verlassen!"
"Look out fo' flyeeng feet" O__/ a010111t@bc.seflin.org /|____. O <__. /> / \ ____________|_________ http://ourworld.compuserve.com/homepages/Monkey414