Frank Heckenbach wrote:
Jonas Maebe wrote:
What's taking so long currently, Adriaan, is probably the GPI imports. That's independent of the target, i.e. a Pascal to C++ converter would have to do it just the same, so its complexity is independent of how the output is structured. As I said, this is a separate issue which would be easier to tackle if the compiler was written in a high-level language (such as C++ or Pascal with templates). It's easier to find and experiment with efficient data structures (e.g., hash tables, trees) when they're readily available than when you have to manually implement them each and every time like in C (for the current GPC) and also in Pascal so far.
I agree that it is a separate issue and that it is easier to tackle in a future compiler. So, I am moving this to a new thread. We need not discuss it any further now, but I am still following up to answer your questions.
The problem is not so much the speed of GPI loading as such, but the fact that unit-recompilation is of order-2 (in GPC) instead of order-1 (as in FPC).
Imagine a program P that uses unit1 .. unitN, where each unit K uses unit1 .. unitK-1. Currently, a compile of program P with GPC
- triggers a compilation of unit1 and writes a unit1.gpi
- triggers a compilation of unit2, which uses unit1, so loads unit1.gpi and writes unit2.gpi
- triggers a compilation of unit3, which uses unit1 and unit2, so loads unit1.gpi and unit2.gpi and writes unit3.gpi
- etcetera
- triggers a compilation of unitK, which uses unit1 .. unitK-1, so loads unit1.gpi .. unitK-1.gpi and writes unitK.gpi
- etcetera
So, unit1 is written once and loaded N-1 times, unit2 is written once and loaded N-2 times, etcetera; unitK is written once and loaded N-K times, etcetera. Therefore, N .gpi files are written and (N-1) + (N-2) + ... + (N-N) = N * (N-1)/2 .gpi files are loaded.
In other words, compilation times increase quadratically with the number of used units. Improving .gpi load times doesn't help much (a constant-factor speedup only grows the number of units you can handle in a given time by its square root) and the process will still be slow when N is large. The only real solution is to make compilation a linear process, where already-loaded .gpi files are not loaded again. My understanding is that this is difficult to accomplish with the current GPC back-end.
The same problem exists with C/C++ header files, and the common solution there is to use include guards: http://en.wikipedia.org/wiki/Include_guard.
BTW, is it the actual Mac OS X interfaces that are so huge, or your
It is the actual Mac OS X interfaces that are so huge.
wrappers? I remember you had a long list of string functions for string types of various kinds and lengths. Using templates, they'd shrink drastically.
That was the other problem, not related to the above issue. We are using operator overloading to mimic UCSD-Pascal strings. But, unfortunately, that triggers another quadratic performance issue in the compiler: http://www2.gnu-pascal.de/crystal/gpc/en/mail12897.html.
Regards,
Adriaan van Os