Waldek Hebisch wrote:
Frank Heckenbach wrote:
Waldek Hebisch wrote:
The manual promises that `Integer' is the same size as C `int'. AFAIK on most 64-bit machines `int' is 32 bit. The consequences of C brain damage are rather unpleasant:
-- all use of 64 bit integers becomes non-standard
-- string length is limited to 32 bits
I'm not sure if the standard requires it to be limited to `Integer' (of course, that's a bit difficult to decide, since the standard doesn't have any bigger types). But even if we could change it (without changing `Integer'), I don't think it would be worth it, since it's only a small part of the problem.
-- spurious overflows/truncations since precision by default is limited to 32 bits
Of course only if one relies on `Integer' being 64 bits, or `Integer' being able to hold a `Pointer' value. Both are things one shouldn't do anyway.
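For illustration, this is the kind of assumption meant -- a sketch of the misuse, not a recommendation (storing a `Pointer' value in an `Integer' is the same kind of mistake):

  program BadAssumption;
  var
    FileSize: Integer;  { implicitly assumes Integer is 64 bit }
  begin
    { Fine where Integer is 64 bit, an overflow where it is only 32 bit. }
    FileSize := 5000000000;
    WriteLn (FileSize)
  end.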
look at:
const c = 1000*1000*1000*1000; { More readable than 1000000000000 }
the compiler reports an arithmetic overflow (I have fixed that example, but still have to check that my fix did not cause bad side effects).
I see. (Of course, this still relies on there being a 64 bit type or similar, but IMHO that's a reasonable assumption in general, although the 8 and 16 bit platforms may have problems, or do they emulate 64 bit operations?)
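One possible way out -- just a sketch, not necessarily the fix Waldek made, and assuming GPC's `LongInt' extension is 64 bit and that a value type cast is accepted in a constant expression:

  const c = LongInt (1000) * 1000 * 1000 * 1000;  { evaluated in the wider type }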
And if we fall into the trap inside the compiler, I bet that many programs will be (are) affected -- according to the standard `integer' is the biggest and the only integer type that adjusts to the machine.
The standard doesn't say anything about "adjusts to the machine". The standard only defines one integer type (apart from subranges), but as in many other cases we provide extensions (which are disabled in standard mode).
So for example `integer' is the natural choice as index type for "unlimited" arrays (as the type of a schema discriminant).
Yes, that's a problem.
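For illustration, the usual Extended Pascal idiom -- with `Integer' as the discriminant type the array size is capped at `MaxInt' (a minimal sketch):

  program SchemaDemo;
  type
    Vector (n: Integer) = array [1 .. n] of Real;
  var
    v: Vector (10);
    i: Integer;
  begin
    for i := 1 to v.n do
      v[i] := i;
    WriteLn (v[v.n])
  end.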
But I agree that it's not nice -- especially if 32 bit operations are less efficient, but also otherwise.
Note that C mandates wraparound for unsigned types. Pascal has an "affine" point of view, so a Pascal backend is allowed to use higher precision if that is faster (but now we get what the C folks give us: wraparound).
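A small sketch of the difference (assuming the `Cardinal' and `High' extensions):

  program WrapDemo;
  var
    c: Cardinal;
  begin
    c := High (Cardinal);
    { C requires unsigned arithmetic to wrap to 0 here; under Pascal's
      "affine" view this is simply an overflow, so the backend would be
      free to use higher precision, or to report it. }
    c := c + 1;
    WriteLn (c)
  end.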
That's another bug (missing overflow checking). As long as we declare `MaxInt' to be the maximum value of the type `Integer' (not necessarily of any integer type provided as an extension) we're at least conformant here (even though we may add unnecessary restrictions, it's not actually incorrect).
As I said, I see the main problems in C interfaces (which so far rely on `Integer = int', while most other places really should not rely on exact type sizes etc.). So *if* we change it, we should only change one type, something like `Integer = "undefined" in C', and `CInteger' (or whatever) = `int', so C interfaces would only have to s/Integer/CInteger/g. (And similar for `Cardinal'/`Word', even if nonstandard, but so we keep the fact that `Integer' and `Cardinal' have the same size.)
Of course, we'd also need a transition period here -- add `CInteger' now, let everyone change their interfaces (using GPC-version conditionals to avoid breaking compatibility with older GPCs), sometime change `Integer'. Unpleasant indeed ...
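A sketch of how a C interface could look during such a transition -- the `CInteger' spelling and the release number are hypothetical at this point, of course:

  program CIntDemo;

  {$if __GPC_RELEASE__ < 20050101}
  type
    CInteger = Integer;  { fallback for older GPCs, where Integer still matches C `int' }
  {$endif}

  { C prototype: int abs (int); }
  function CAbs (i: CInteger): CInteger; external name 'abs';

  begin
    WriteLn (CAbs (-42))
  end.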
Let me try to estimate the impact. AFAICS the candidates for change are:
m68k 16 --> 32
and 64 bit targets:
alpha, s390x, ia64, ppc64, pa64, sparc64, mips64, sh, amd64
It seems that at the moment alpha and ia64 are really using 64-bit mode, but I have read that development on other platforms is mostly 32 bit. I know that Mandrake included a 64 bit version of gpc-20030830, and quite likely other distributions will follow (or already have). On the other hand the Mandrake version had serious bugs, and nobody complained, so the actual usage is likely to be quite small.
Similarly, the m68k version was buggy (and we learned about the bugs only thanks to Debian builds).
Also, a bunch of bugs that I fixed on AMD64 were generic 64 bit bugs, so I think that at the moment usage on 64 bit machines is quite small.
I would say that now is the last chance to do the change. In a few months actual use on 64 bit machines is likely to grow considerably -- mostly due to AMD64 (and the Intel clone :), but also ppc64 is on the mass market (in the new Power Macs).
While other targets would also be forced to change type names, since the types would remain the same the change should be painless for them.
I see it a bit differently. I don't like to use a "wrong" type (i.e., `Integer' for `int' when it's actually `long', just because it happens to match on most platforms). In fact, to me it means that such instances will be harder for people to find (most have no access to a 64 bit platform).
But I agree that if we change it, we should do it quickly. Some arguments for and against the change have been given, but I'm still undecided (or rather, I personally wouldn't mind either way).
If we change it, we must agree on exactly which changes -- as I said, I propose `CInteger', `CCardinal', `CWord'(?) as equivalent to `[unsigned] int', and `Integer', `Cardinal', `Word' to be equivalent to `[unsigned] long int', at least on some platforms (Waldek seems to know better which ones). This would keep the impact on C interfaces minimal AFAICS.
Frank