cbfalconer@yahoo.com wrote:
Quoting "Prof. Harley Flanders" harley@umich.edu:
It is also time for wider integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest-precision integer type.
This plethora of names is totally unnecessary. Pascal simply has a maxint value, which specifies the largest integer the implementation supports. When an integral type is declared, it normally has a range, and that range determines the integer size used.
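For example, a minimal sketch of the classic Pascal approach (the type names here are made up for illustration):

  type
    Small = 0..255;      { compiler is free to choose a suitable representation }
    Big   = 0..maxint;   { maxint is the largest value the implementation supports }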
This sort of foolishness is typical of the C-based attitude of Borland.
Sorry, I have to disagree. When efficiency is taken into account, there are at least two problems:
- The maximum range supported by a language is often larger than what the hardware directly supports (this is true of GPC in particular), e.g. a 64-bit range on 32-bit platforms.
64-bit operations there are less efficient, since each one has to be emulated with several CPU instructions.
So you don't want all "Integer" operations to be done in 64 bits.
- Types smaller than the natural word size are also often less efficient (AFAIK not so much on x86, with its 8 and 16 bit heritage, but more so on other processors; ISTR that Crays natively support(ed) only 64-bit operations, so smaller types had to be emulated with a read/mask/shift/write sequence). So you don't want them used every time you specify a small subrange.
OTOH, sometimes you want or need the smaller types, e.g. in large arrays. That's a space/time tradeoff which is in general better left to the programmer, who understands the context; see the sketch after this list. (Even the same program could be used both ways, e.g. with larger types on small data sets for speed, and smaller types on large data sets to save space.)
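Here is a minimal sketch of that tradeoff in standard Pascal (type and variable names are made up; "packed" is only a request, which the compiler may ignore):

  type
    Sample = 0..255;
  var
    { small data set: unpacked, so the compiler can use fast word-sized cells }
    FastData  : array [1..100] of Sample;
    { large data set: packed, asking the compiler to minimize storage,
      possibly at the cost of extra mask/shift instructions per access }
    DenseData : packed array [1..1000000] of Sample;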
Therefore, specifying the range of an integer type and specifying its size are two distinct things.
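Something like the following illustrates the distinction, if I remember GPC's "attribute (Size = ...)" extension correctly (treat this as a sketch, not gospel):

  type
    { range only: the representation is left to the compiler }
    Percentage = 0..100;
    { explicit size, independent of any particular range (GPC extension) }
    Int64Type  = Integer attribute (Size = 64);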
Frank