Ernst-Ludwig Bohne wrote:
On Mon, 27 Feb 2006, Adriaan van Os wrote:
[G5:gpc/testgpc/adriaan] adriaan% cat testsubrange.p

program testsubrange;

type
  int16 = integer attribute( size = 16);
  int32 = integer attribute( size = 32);
  point = record x, y: real end;

var
  i: int16;

procedure P( size: int32);
begin
  writeln( 'size = ', size)
end;

begin
  i:= 3658;
  P( i * SizeOf( point));
end.
[G5:gpc/testgpc/adriaan] adriaan% gp testsubrange.p
[G5:gpc/testgpc/adriaan] adriaan% ./testsubrange
size = -7008
Any comments ? Regards,
Adriaan van Os
Simplifying your program I observe the same problem:
PROGRAM testsubrange;
var
  i: ShortInt;                   {signed 16 bit integer}
  j: SizeType;                   {unsigned 32 bit word}
  point: record x, y: real end; {takes 16 bytes}
begin
  i := 3658;
  j := 16;
  writeln ('result1 = ', i*j);             {58528, OK}
  writeln ('result2 = ', i*SizeOf(point)); {-7008, wrong}
end.
My suspicion is that, for calculating the second expression, the compiler uses the wrong operand size (16? bit) instead of 32 (the function SizeOf returns a value of SizeType). BTW: -7008 + 2^16 = 58528
Basically speaking, gpc performs operations at the precision of the more precise argument. When computing `i*j', gpc notes that `j' is more precise (usually `SizeType' has 32-bit or better precision) and uses its precision to perform the multiplication. In the second case (`i*SizeOf(point)'), gpc notes that `SizeOf(point)' is a constant, that this constant fits into 16 bits, and so uses only 16-bit precision for the multiplication.
Also, ATM gpc has no runtime overflow checking, so one silently gets an incorrect result.
You may ask if gpc is "correct". That is a somewhat tricky question. Namely, gpc applies rules, and AFAICS it applies exactly the rules that were intended by the programmers coding them (I think Frank changed the rules to the current ones). So the real question is whether the rules are good ("correct"). Now, it would be nice to have rules which always give the mathematically correct result (in other words, use precision big enough to avoid any possibility of overflow). But this is impossible in current gpc: we have a maximal precision and cannot go beyond it. Another possibility is to give correct results when possible and use maximal precision otherwise. However, the maximal precision is really "double precision": gpc can perform arithmetic at twice the normal machine precision. This is nice but expensive: such operations may be significantly slower than normal ones. ATM gpc is rather dumb when predicting the needed precision, so even if the arguments are small, gpc may think that maximal precision is needed. And even maximal precision may still give you wrong results.
So there is a rather nasty compromise between correctness and speed. I usually go for correctness. However, here the speed penalty may be very significant (about 3 times when done right, but possibly as high as 20 if maximal precision is slow). So having optional overflow checking looks more attractive: one can test with overflow checking on and then release with checking off.
One could also try to invent some compromise rules, for example trying to use the "most precise fast arithmetic", but doing this right is tricky. There are also issues of compatibility (spurious overflows in bitwise operations...).
BTW: I do not plan to work on this in the near future. Still, if some consensus appears, we may implement it when the time comes.