To Frank H. and all Pascal re-writers:
One of the main points, if not the main point, of keeping Pascal updated is to extend its usefulness.
I want to address one area: scientific programming. It is what I do; I have done it in Pascal for very many years, and continue doing so in Delphi. I even wrote a book, "Scientific Pascal", in 2 editions.
So here is my wish list:
Add types
Complex Rational
coded in assembler.
All the standard operations and functions should apply to these types:
Complex: +, -, *, /, ^ or **, Sqr, Sqrt, Sin, Cos, ..., Exp, Ln
Rational: +, -, *, /, Sqr, LowestTerms,...
Even 40-year old FORTRAN has complex types. Obviously Complex is needed for numerical analysis, differential equations, electrical engineering, etc.
A rational type would be very useful for number theory.
Add higher precision floats (20 bytes, 24 bytes, user-chosen precision), again with all the standard operations and functions implemented in assembler.
Borland introduced type Extended (10-byte floats), possibly in Turbo Pascal 4 or 5. This was an enormous improvement over the Real type (6 bytes), and made full use of the 80x87 mathematical coprocessor.
Extended has remained the highest precision f.p. type in Borland Pascal and probably all other Pascals for 15-20 years, while computer speed and memory have improved by orders of magnitude. It is surely time for more accurate real types, coded in assembler.
The current real types (in Delphi) are Single (4 bytes), Real48 (6 bytes), Double (8 bytes), Extended (10 bytes), [and Comp and Currency].
Synonyms are in order, skipping the factor of 8: Real4, Real6, Real8, Real10, etc. for the hopefully new ones.
It is also time for more accurate integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest accuracy integer type.
Delphi wisely added the names (and unsigned types)
Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64.
(My preference again would be to drop the factor 8:
Int1, Int2, Int4, Int8, UInt1, UInt2, UInt4, UInt8.)
I would love to see Int16, Int32, Int64 and Int128, and their unsigned counterparts.
Add exponentiation, with the symbol ^ or ** overloaded for the special cases of the exponent: positive or negative integer [ (-0.02)^(-3) = -1.25E5 ], rational with odd denominator [ (-8)^(5/3) = -32 ]. FORTRAN and even BASIC have had exponentiation forever.
Allow genuine operator overloading, such as
operator "+" (const X, Y: MyType): MyType;
operator "*" (const X: Integer; const Y: IntegerVector): IntegerVector;
Delphi has made a mess of this so far, with overloading only possible with class functions and records. GNU Pascal and Free Pascal do much better.
It should be possible to overload all the standard operators:
+ - * / ^ = <> <= >= > <
There should be an abstract class TRing which can be specialized to TField, etc.
TRing should have a Ring Element that can be specialized.
So one could define TMatrix, with elements in TRing, and define matrix multiplication A*B and Det (determinant) without specifying the elements in advance, and specialize them to any ring, such as the rings of rationals, of polynomials with complex coefficients, etc.
This is what OOP, polymorphism, abstraction, inheritance should be about.
Add Print Methods for TMemo, TStringGrid, TImage, and maybe a few other components.
Obviously if I compute data and store it in a table or memo, or create a graph, I want to print it. Currently this requires searching the WWW for someone's code. One of the advantages of Unicode (see below): one can mix alphabets in tables. For instance, a table of number-theoretic functions will have Greek headings: Omega, Phi, omega, etc.
Any modern Pascal should use Unicode, with all character and string procedures modified accordingly. This is a big advance Delphi started a year or so ago.
Add some more fonts that include Greek letters and many math symbols, and which can be easily added to code (which should be WYSIWYG). Fonts for scientific use should be non-justified, so that "9", "1" and "." are the same width in displays of numbers. As far as I can tell, Courier New (OEM CharSet) is the only satisfactory font at present.
Harley Flanders
Prof. Harley Flanders wrote:
To Frank H. and all Pascal re-writers:
One of the main points, if not the main point, of keeping Pascal updated is to extend its usefulness.
I want to address one area: scientific programming. It is what I do; I have done it in Pascal for very many years, and continue doing so in Delphi. I even wrote a book, "Scientific Pascal", in 2 editions.
So here is my wish list:
Thanks for your contribution. You basically (though unintentionally ;-) echo my wish list. Several of your wishlist features can be implemented in plain Pascal code using templates and automatic con-/destructors (see below), which are exactly my two most wanted features. Together with function overloading and inline routines across modules (see below), actually all of your wishlist features can be implemented in plain Pascal. So it would be a good job, perhaps for yourself or for others who'd like to contribute, and would not require work on the compiler itself.
The GMP library, for which GPC bindings have existed since 1999 or so, has three main types which exactly match three of your wishlist items (unlimited integer, rational and real types). The main problem is that the operations are defined as procedures and functions, not operators. I wrote operator bindings for GPC as good as possible (see http://fjf.gnu.de/gmpop.inc) -- but that's a big "as possible". They're still not very comfortable to use, see the long comment at the start. The main underlying issue is the lack of automatic con- and destructors.
Add types
Complex
Exists already.
Rational
See GMP's "mpq_t" if you want unlimited length of the numerator and denominator. Or if you want a limited size (but faster processing) just implement a record with two Integer or LongInt fields.
coded in assembler.
Disagree. Instead improve inline routine support. (GPC supports inline routines, but not in unit/module interfaces, which limits their usefulness -- it's one of the reasons why gmpop.inc is an include file, not a unit.)
Supporting inline routines across modules has been on GPC's wish list for a long time, and it would still be an important feature. The main difficulty in implementing it stems from the fact that interface (GPI) files would then need to contain arbitrary Pascal code, whereas currently they contain only declarations. It's not that difficult to implement (compared to other features), and with a newly written GPC with clean data structures, I'd almost say it would be trivial.
Inline routines, written in Pascal(!), would generally produce better code than hand-written assembler, because the optimizer can act on them. E.g., if you do a Complex multiplication and only use its real part, the optimizer can completely eliminate the operations for the imaginary part.
As a side note, C++ has a complex type in STL which is a template. So you can not only get complex types of different precision (e.g., GPC currently has 3 precisions for real types, but only one for Complex), but also e.g. complex integers (which can be useful).
All the standard operations and functions should apply to these types:
Complex: +, -, *, /, ^ or **, Sqr, Sqrt, Sin, Cos, ..., Exp, Ln
They do.
Rational: +, -, *, /, Sqr, LowestTerms,...
They are available as GMP routines for mpq_t, see above. For a self-made rational type, they can be easily implemented (student homework level work ;-).
Add higher precision floats (20 bytes, 24 bytes, user-chosen precision) again with all the standard operations and functions
See GMP's "mpf_t".
Borland introduced type Extended (10-byte floats), possibly in Turbo Pascal 4 or 5. This was an enormous improvement over the Real type (6 bytes), and made full use of the 80x87 mathematical coprocessor.
Extended has remained the highest precision f.p. type in Borland Pascal and probably all other Pascals for 15-20 years, while computer speed and memory have improved by orders of magnitude.
But hardware architecture hasn't. The 4, 8 and 10 byte floats are still the only ones supported directly in hardware on the x86. (For other CPUs that may have a 16 byte float, GCC and therefore GPC probably supports it -- can't check right now.)
I'm no expert on these matters, but I suppose software-implemented floats are significantly slower, even with the best algorithms. Of course, there's a need for them; that's why GMP exists.
The current real types (in Delphi) are Single (4 bytes), Real48 (6 bytes), Double (8 bytes), Extended (10 bytes), [and Comp and Currency].
GPC has ShortReal aka Single (4 bytes), Real aka Double (8 bytes), LongReal aka Extended (10 bytes) and Comp aka LongInt (8 byte signed integer). (Sizes apply to x86, might vary on other CPUs.)
Real48 is, I suppose, the software floating point type compatible to old TP versions. GPC has conversion functions (RealToBPReal and BPRealToReal) in its System unit, but no operations on these types. Of course, operators can be implemented (right now!) using operator overloading; for functions (SqRt etc.) function overloading will be needed (which is also not that big a feature to add, compared to other things, and also one I'd consider rather high priority).
I suppose the same applies to a software-implemented Currency type (though I don't know the specifics of this type in Delphi, and for currency values I myself prefer to use an integer type in the smallest unit, e.g. cents for "normal" tasks, maybe 1/100 cents or so for banking).
It is also time for more accurate integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest accuracy integer type.
Delphi wisely added the names (and unsigned types)
Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64.
You can define them in GPC as follows:
type UInt16 = Cardinal attribute (size = 16);
(My preference again would be to drop the factor 8:
Int1, Int2, Int4, Int8, UInt1, UInt2, UInt4, UInt8.)
If you like, you can define them under these names (though IMHO it would add to confusion when communicating with other programmers, since bitwise count seems far more widespread).
I would love to see Int16, Int32, Int64 and Int128, and their unsigned counterparts.
For larger types, look at GMP's "mpz_t".
Add exponentiation, with the symbol ^ or ** overloaded for the special cases of the exponent: positive or negative integer [ (-0.02)^(-3) = -1.25E5 ],
WriteLn ((-0.02) pow (-3))
-1.250000000000000e+05
Note: "pow" is integer exponentiation, distinguished from "**", in the same way that integer division "div" is distinguished from "/".
rational with odd denominator [ (-8)^(5/3) = -32 ].
Does not work yet, but can (already!) be implemented with operator overloading (with "**" instead of "^").
A problem, though, is that "5/3" yields a real value by Pascal standards, but you apparently want a Rational here. Even operator overloading wouldn't help, since having two operators take the same arguments, but return different results, would be ambiguous. So you'd need some kind of explicit notation, e.g.:
(-8) ^ Rational (5, 3)
FORTRAN and even BASIC have had exponentiation forever.
So has Extended Pascal.
Allow genuine operator overloading, such as
operator "+" (const X, Y: MyType): MyType;
operator "*" (const X: Integer; const Y: IntegerVector): IntegerVector;
It's allowed.
It should be possible to overload all the standard operators:
+ - * / ^ = <> <= >= > <
All these (and more) can be overloaded (again with "**" rather than "^" for exponentiation; "^" is pointer dereference in Pascal, which cannot be overloaded yet, but maybe should -- it has useful applications in C++).
There should be an abstract class TRing which can be specialized to TField, etc.
TRing should have a Ring Element that can be specialized.
You can write them right now (in any of the object models you prefer).
So one could define TMatrix, with elements in TRing, and define matrix multiplication A*B and Det (determinant) without specifying the elements in advance, and specialize them to any ring, such as the rings of rationals, of polynomials with complex coefficients, etc.
This is what OOP, polymorphism, abstraction, inheritance should be about.
As I see this, OOP covers this case insufficiently. Of course, you can declare TRing and various descendant classes, and then make a TMatrix containing TRing elements. But polymorphism then allows you to mix and match (or rather mismatch) different rings that are not meant to be compatible, i.e. literally adding apples and oranges.
A better solution is templates. To explain what I mean (since obviously a lot of confusion abounds) and take away some black magic from templates, I'm attaching a demo (in C++) that implements such a matrix template, with multiplication as the only operation (others can be added, of course), with a straightforward implementation. Even if you're not familiar with C++ syntax, you should be able to read most of it. Note the comments.
The multiplication operator can be implemented either as a method or stand-alone. I show both, selected by the METHOD_OPERATOR conditional. Both versions look almost identical, except that the method only needs to declare one argument, as "this" ("Self" in Pascal) is implicit, as usual. So it's a matter of taste which one you prefer.
The declaration of constant matrices in the main function is a bit clumsy (but such things usually occur only in demo programs, anyway). They could be improved by writing different constructors.
Note that this example actually doesn't require a specific TRing class. It works on any type that supports assignment ("="), "+" and "*" operators, which includes all built-in numeric types, but also user-defined rings.
(More advanced, polymorphism can be used on top of that, if you want to implement different, but compatible rings, which would then need to be descendant from a common ancestor. But you still control what's compatible with what, essentially by declaring the element-wise "+" and "*" operators to accept precisely those combinations that make sense to you.)
Add Print Methods for TMemo, TStringGrid, TImage, and maybe a few other components.
What's that? I don't know about these types. If they're part of some Delphi compatibility unit, the methods have to be added there, obviously. Just do it or ask the maintainer (The Chief?).
Any modern Pascal should use unicode, with all character and string procedures modified accordingly. This is a big advance Delphi started a year or so ago
But without breaking backward-compatibility. C++'s solution is to define the string type as a template of its base type, so there can be strings of plain "char", "wide chars", UTF-16, UTF-32 or whatever you like. In GPC with templates, we could do the same.
(Note: Of course, you can process UTF-8 strings with existing Pascal strings in a limited way, e.g. Length gives the length in bytes, not in characters. For simple applications, this is sufficient; the above applies to cases where it isn't.)
Add some more fonts that include Greek letters and many math symbols, and
That's more the job of a distributor than compiler writers. I've used many such fonts in Debian Linux (which therefore must be free), many of them in TTF format, so they should be installable under Windows easily.
which can be easily added to code (which should be WYSIWYG).
If your editor supports UTF-8, you can (already now) just put them in string literals (and the editor would presumably display them as is, i.e. WYSIWYG). Since UTF-8 preserves the ASCII range, they're perfectly compatible with otherwise ASCII Pascal code, and GPC will accept them. (Of course, the processing of them, whether I/O or conversion to "wide chars", is not currently available, see above, but if/when it will be, such literals will work.)
Fonts for scientific use should be non-justified, so that "9", "1" and "." are the same width in displays of numbers.
As far as I can tell, Courier New (OEM CharSet) is the only satisfactory font at present.
There are various monospace fonts, the rest is a matter of taste.
Frank
Hi,
On 8/3/10, Frank Heckenbach ih8mj@fjf.gnu.de wrote:
coded in assembler.
Disagree. Instead improve inline routine support.
Inline routines, written in Pascal(!), would generally produce better code than hand-written assembler, because the optimizer can act on them.
GPC has never (that I know of) been meant to be x86-only, so assembly is shunned. (And I like assembly!) Not to bring this up too many times, but the only loss here now is due to bigger size, which ideally would be handled by the linker. Or maybe there really should be an "x86 task force" for GPC to whip up some smaller / faster bits since it really is a popular architecture these days. However, speed optimizations are hard, so my personal interest would just be to shrink it.
EDIT: Pure assembly is harder to maintain, ask Virtual Pascal!
Extended has remained the highest precision f.p. type in Borland Pascal and probably all other Pascals for 15-20 years, while computer speed and memory have improved by orders of magnitude.
But hardware architecture hasn't. The 4, 8 and 10 byte floats are still the only ones supported directly in hardware on the x86. (For other CPUs that may have a 16 byte float, GCC and therefore GPC probably supports it -- can't check right now.)
FPU, MMX, 3dnow!, SSE, AVX ... which to support? I think most people would (probably incorrectly) say that FPU/MMX is deprecated. Gah, I hate modern computing sometimes, always complicating things, never making it easier.
It is also time for more accurate integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest accuracy integer type.
So you want similar to "long long long int"?? Actually, GPC by default makes "longint" 64-bit! Which in rare cases can be confusing. ;-)
FORTRAN and even BASIC have had exponentiation forever.
So has Extended Pascal.
You can emulate it with ISO 7185, something like (I think) this: exp(ln(a)*b)
Any modern Pascal should use unicode, with all character and string procedures modified accordingly. This is a big advance Delphi started a year or so ago
Took them long enough! (Didn't Plan 9 invent it in 1993?) No, seriously, do your apps really need it? I find it often overhyped as the great fix, but it's hard, and most people really don't use it. It's just more complications. Not that I hate it or anything, but ....
Rugxulo wrote:
On 8/3/10, Frank Heckenbach ih8mj@fjf.gnu.de wrote:
coded in assembler.
Disagree. Instead improve inline routine support.
Inline routines, written in Pascal(!), would generally produce better code than hand-written assembler, because the optimizer can act on them.
GPC has never (that I know of) been meant to be x86-only, so assembly is shunned.
That's also true. (Though it's acceptable when used as an alternative to otherwise portable code, e.g. the GMP library contains assembler optimizations for various platforms.)
In this case, however, it's a no-brainer, since a simple function such as Complex multiplication couldn't be written more efficiently by hand in assembler than what the compiler normally produces, and it takes away optimization opportunities.
Or maybe there really should be an "x86 task force" for GPC to whip up some smaller / faster bits since it really is a popular architecture these days.
I'd suggest they concentrate on the backend (whether GCC, LLVM, ...) then, since this would benefit all languages.
However, speed optimizations are hard, so my personal interest would just be to shrink it.
Actually understanding speed optimization on modern architectures is hard. Many things that were faster on older processors are now slower, due to caching, superscalar execution, branch prediction, etc. It's really hard to evaluate even simple assembler code WRT performance -- furthermore it depends on circumstances, such as whether the program is CPU, memory or I/O bound. We're long past the point where humans could produce more efficient code than automatic tools (i.e., compilers). (Which doesn't mean that all compilers are optimal; there's certainly room for improvement, but again it's hard, and there are diminishing returns ...)
Extended has remained the highest precision f.p. type in Borland Pascal and probably all other Pascals for 15-20 years, while computer speed and memory have improved by orders of magnitude.
But hardware architecture hasn't. The 4, 8 and 10 byte floats are still the only ones supported directly in hardware on the x86. (For other CPUs that may have a 16 byte float, GCC and therefore GPC probably supports it -- can't check right now.)
FPU, MMX, 3dnow!, SSE, AVX ... which to support? I think most people would (probably incorrectly) say that FPU/MMX is deprecated. Gah, I hate modern computing sometimes, always complicating things, never making it easier.
Sure. BTW, AFAIK the GCC backend doesn't support any of these (yet?); don't know about LLVM. So in this case you're on your own. An additional problem is that, e.g., FPU and MMX are mutually exclusive (switching is expensive and destroys the state), so a compiler couldn't simply use both without coordination with other parts of the program. The convention is, of course, that FPU is generally used, and if one wants to use MMX, one has to make sure no floating point code is used at the same time, and do the switch back explicitly.
It is also time for more accurate integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest accuracy integer type.
So you want similar to "long long long int"?? Actually, GPC by default makes "longint" 64-bit! Which in rare cases can be confusing. ;-)
Why confusing? It's one power of two larger than "Integer".
FORTRAN and even BASIC have had exponentiation forever.
So has Extended Pascal.
You can emulate it with ISO 7185, something like (I think) this: exp(ln(a)*b)
You can emulate, or let's say implement, all of his wishlist features. The desired compiler features would make it more comfortable (templates, function overloading and especially automatic con-/destructors) or efficient (cross-module inlining).
Frank
Hi,
On 8/4/10, Frank Heckenbach ih8mj@fjf.gnu.de wrote:
However, speed optimizations are hard, so my personal interest would just be to shrink it.
Actually understanding speed optimization on modern architectures is hard. Many things that were faster on older processors are now slower. Caching, superscalar execution, branch prediction, etc. It's really hard to evaluate even simple assembler code WRT performance
It's a mess, even for GCC, I know they're suffering trying to target so much. It's vastly different from GCC 2.7.2.3, when 486 was the best it could do (386 + alignment). And compile times suffer from that extra complexity. I just wish GCC -O0 was equally as fast, but it's not. :-(
FPU, MMX, 3dnow!, SSE, AVX ... which to support? I think most people would (probably incorrectly) say that FPU/MMX is deprecated. Gah, I hate modern computing sometimes, always complicating things, never making it easier.
Sure. BTW, AFAIK the GCC backend doesn't support any of these (yet?), don't know about LLVM.
-ftree-vectorize is supported, but I'm not sure how well it works overall. And GCC has always (I think?) assumed an FPU is present (real or emulated). AVX isn't out yet, I think, and is yet another ball of wax. (SSE is implemented even by AMD, but Intel never bothered with 3Dnow!, so that's less useful. But even my now-dead AMD laptop supported through SSE3.) I blindly assume GCC on AMD64 does something with SSE2, but who knows.
In short, not sure it's worth officially supporting any of this in a compiler. And yet this is the exact area where hand-written assembly is still direly needed. Personally I find it too complex (and boring), but it does speed up stuff sometimes.
So in this case you're on your own. An additional problem is, e.g. FPU and MMX are mutually exclusive (switching is expensive and destroys the state), so a compiler couldn't simply use both without coordination with other parts of the problem.
Yes, and SSE rectified that but required explicit OS support to FXSAVE everything.
So you want similar to "long long long int"?? Actually, GPC by default makes "longint" 64-bit! Which in rare cases can be confusing. ;-)
Why confusing? It's one power of two larger than "Integer".
Only confusing for extreme portability. I think FPC and VPC default to 32-bit for it. (And yes, I know about _BP_UNPORTABLE_TYPES_ or whatever.) In other words, my Befunge "benchmark" counts down from -1 to MAXINT, and it takes much longer (!) when that is 64-bit. ;-)
Rugxulo wrote:
In short, not sure it's worth officially supporting any of this in a compiler. And yet this is the exact area where hand-written assembly is still direly needed. Personally I find it too complex (and boring), but it does speed up stuff sometimes.
Yes, but only in special cases (e.g., parallel processing) which are already hard to express in "classic" languages, i.e. the compiler would first have to "reverse-engineer" what the programmer writes (such as for-loops).
Only confusing for extreme portability. I think FPC and VPC default to 32-bit for it. (And yes, I know about _BP_UNPORTABLE_TYPES_ or whatever.) In other words, my Befunge "benchmark" counts down from -1 to MAXINT, and it takes much longer (!) when that is 64-bit. ;-)
Even worse, by the time your counter finishes, current processors may have 128 or 256 bit registers. Good luck with the next iteration. ;-)
Frank
To Prof. Harley Flanders:
Nearly everything you said about new types and operator overloading is already supported in Free Pascal:
http://www.freepascal.org/docs-html/ref/refch12.html
If you need new types, just implement them yourself using an array of byte or a record and operator overloading. Then code all the operations for your new type, including operations involving other types. You can also code the operations in assembler if you like. Something like:
type TMaxPowerFloat = array[0..19] of Byte;

operator + (r: Real; z1: TMaxPowerFloat) z: TMaxPowerFloat;
begin
  { Implement summing a Real with a TMaxPowerFloat }
end;

And so on.
I would love to see Int16, Int32, Int64 and Int128, and their unsigned counterparts.
You can implement yourself again, with operator overloading.
Allow genuine operator overloading, such as operator "+" (const X, Y: MyType): MyType; operator "*" (const X: Integer; const Y: IntegerVector): IntegerVector;
It should be possible to overload all the standard operators: + - * / ^ = <> <= >= > <
Already supported in FPC
So one could define TMatrix, with elements in TRing, and define matrix multiplication A*B and Det (determinant) without specifying the elements in advance, and specialize them to any ring, such as the rings of rationals, of polynomials with complex coefficients, etc.
Already supported in FPC through generics, which are very similar to C++ templates:
http://www.freepascal.org/docs-html/ref/refch8.html
http://wiki.freepascal.org/Generics
type
  generic TList<_T> = class(TObject)
    type public
      TCompareFunc = function(const Item1, Item2: _T): Integer;
    var public
      data: _T;
    procedure Add(item: _T);
    procedure Sort(compare: TCompareFunc);
  end;
Add Print Methods for TMemo, TStringGrid, TImage, and maybe a few other components.
Just send over a unit to Lazarus which has code to print all the standard components and we will add it.
I'm not sure if a Print method directly in the classes will work because the printer support is in a separate package.
Any modern Pascal should use unicode, with all character and string procedures modified accordingly. This is a big advance Delphi started a year or so ago
The Lazarus Component Library has full support for Unicode since 2008, 1 year before Delphi.
Please take a look at the following Windows CE Russian application made in Lazarus:
http://wiki.lazarus.freepascal.org/germesorders
Quoting "Prof. Harley Flanders" harley@umich.edu:
It is also time for more accurate integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest accuracy integer type.
This plethora of names is totally unnecessary. Pascal simply has a maxint value, which specifies the accuracy available. When an integral type is specified, it normally has a range. That range is used to specify the integer size used.
This sort of foolishness is typical of the C based attitude of Borland.
Am 04.08.2010 15:18, schrieb cbfalconer@yahoo.com:
Quoting "Prof. Harley Flanders" harley@umich.edu:
It is also time for more accurate integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest accuracy integer type.
This plethora of names is totally unnecessary. Pascal simply has a maxint value, which specifies the accuracy available.
For performance reasons, one might not want MaxInt to be a signed 64-bit value on 32-bit systems, nor the Integer type to grow to 64 bits.
When an integral type is specified, it normally has a range. That range is used to specify the integer size used.
Int64 is not a "complete" ordinal type in the Pascal sense.
cbfalconer@yahoo.com wrote:
Quoting "Prof. Harley Flanders" harley@umich.edu:
It is also time for more accurate integers. Borland introduced Int64 10-20 years ago, and it has remained the standard highest accuracy integer type.
This plethora of names is totally unnecessary. Pascal simply has a maxint value, which specifies the accuracy available. When an integral type is specified, it normally has a range. That range is used to specify the integer size used.
This sort of foolishness is typical of the C based attitude of Borland.
Sorry, I have to disagree. When efficiency is taken into account, there are at least two problems:
- The maximum range supported by a language is often (and in particular also in GPC) larger than what the hardware directly supports, e.g. a 64 bit range on 32 bit platforms.
Doing 64 bit operations is less efficient since they have to be emulated with several CPU instructions.
So you don't want all "Integer" operations to be done in 64 bits.
- Types smaller than the natural word size are also often less efficient (AFAIK, not so much on x86 with its 8 and 16 bit heritage, but more so on other processors; ISTR Crays support(ed) only 64 bit operations naturally, smaller types would have to be emulated (read, mask, shift, write)), so you don't want to have them used every time you specify a small subrange.
OTOH, sometimes you want or need to use them, e.g. in larger arrays. That's a space/time tradeoff that in general is better left to the programmer who understands the context. (Even the same program could be used in both ways, e.g. using larger types for small data sets for speed, and using smaller types for large data sets for space constraints.)
Therefore, specifying the range and the size of an integer type are two distinct things.
Frank