Well, I promised that I'd write up a brief account of what I really meant. Unfortunately I'm going to have to let it slip a little as I am extremely busy. Hope this rushed effort makes it clearer, not more confusing! Looking at it briefly, if anything it's redundant as all this seems to have already been covered, but all the same... I did promise more examples, which I have to admit I'm out of time for... maybe another week.
Anyway, on with the show.
A. Layout.
Basically identical to BP units:
unit DemoUnit;
interface uses whatever;
<export declaration part> end.
implementation uses whatever, somemore;
<definition part> end.
The difference lies in the way the elements are used, not the syntax.
Because of this, I suspect if this were to be implemented, a compiler switch such as the one Chief suggested would be appropriate. No new keywords, Frank! ;-)
There are no different types of uses clauses; they always do the same thing: import stuff. The difference is what happens once the stuff is imported.
The implementation doesn't import the interface, it only checks against the interface's declaration part. So it has to import its own stuff for itself; more on that in a mo'.
B. Independence of action of the interface and implementation sections
Firstly, let's clarify the flow of information of interface and implementation sections as this seems to have caused confusion amongst some people. Conventional units effectively work such that the declarations in the interface are imported into the implementation section of the same unit.
Here, the interface exports all of its contents to other units. Its declarations are *checked* against those of the implementation section. The implementation section has its own definitions; it does not import those from the interface for its use. As a consequence, it does not import the uses present in the interface section. Right?
You *could* have the declarations in the interface imported as in BP units, but I believe it works best - for these types of units - if the two are more-or-less independent, bar the checking of the declarations. If you do have the implementation import the stuff from the interface, you now are asking for trouble with the uses clauses.
Each section separately imports (uses) any other units that they need. This isn't checked across the interface/implementation boundary; only the declaration part is.
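As a sketch, the rules above might look like this in practice (entirely hypothetical: the unit names LowLevel and Internals and the identifiers Buffer and Process are invented for illustration, and no current compiler implements these semantics):

```pascal
unit Demo;

interface

uses LowLevel;        { needed to declare Buffer below; re-exported to users of Demo }

procedure DoWork (var Buf: Buffer);

end.

implementation

uses LowLevel,        { imported again for itself - nothing flows in from the interface's uses }
     Internals;       { private helper unit; never visible to users of Demo }

procedure DoWork (var Buf: Buffer);   { checked against the interface's declaration }
begin
  Process (Buf)       { from Internals }
end;

end.
```

Note the uses clause appears in both sections; the only thing crossing the boundary is the check of DoWork's heading against the interface's declaration.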
*Everything* in the interface is exported, including any imported units (via uses). One rule, no exceptions. The interface section is a container saying that "these things ought to be exported".
Everything in the implementation section is private.
An interface is free to import (and hence re-export) any unit it feels relevant. It could for example import a related unit which is not needed in the implementation of the current unit, but which is useful for the users of the current unit in some way.
The interface section will, of course, have to import the units needed to satisfy the declaration part of the interface. This has the effect that users of the current unit get these dependencies "for free". This can work well, provided that developers of lower-level units don't place a lot of things into the interfaces that really aren't needed to be there - which they ought not to be doing anyway. Put in another way: the current unit's interface looks after the "required definitions" on behalf of any users of the current unit by exporting those that are needed. (To do this with more control - a good thing I think - you really need the ability to import only subsets of an interface.) We've already discussed how this can shield higher-level units from re-organisation of code in lower-level units.
The implementation is free to import units it needs that are not needed to satisfy the interface declaration part or needed by the user of the current unit. It could, for example, import units which are effectively private to the implementation of the current unit, such as one we decided to bundle all system-dependent elements into or a unit with some obscure internal manipulations which are necessary for the implementation but really ought to be hidden from higher-level units.
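Both situations can be shown in one (again entirely hypothetical) sketch — every unit and type name here is invented:

```pascal
unit Shapes;

interface

uses Geometry;   { Point is declared in Geometry; users of Shapes get it "for free" }

type Circle = record
  Centre: Point;
  Radius: Real
end;

end.

implementation

uses Geometry,
     SysDeps;    { all system-dependent bits bundled here, hidden from users }

end.

{ A user of Shapes can now write: }

unit Drawing;

interface

uses Shapes;     { brings in Circle *and* Point - no need to uses Geometry here }

procedure Draw (var C: Circle; At: Point);

end.
```

Drawing never sees SysDeps, and a later re-organisation of Geometry behind Shapes' interface need not touch Drawing at all.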
Because of the way things work, there will almost always be a uses in both the interface and the implementation, unlike in BP units where you can use only one uses in the interface section and rely on the implementation section picking this up. Having the uses in the interface imported by the implementation imposes restrictions on the interface, and if you left things that way the idea of re-exporting interfaces of imported units would be awkward. I think this is where some people's early complaints of "re-exporting everything" come from (i.e. they are thinking of a single uses in the interface section importing everything needed for both the interface declarations and the implementation. I wouldn't want all the stuff for the implementation section re-exported either, and with the two uses you don't have to.). You really need the interface and implementation sections to be more independent, and you need a uses clause in both (so that they are independent) to make this work well.
You could mimic the equivalent of the BP uses situation, by allowing the result of a uses clause outside both the interface and implementation sections to be 'read-only' to both sections, but personally I think this could be confusing - ? (You might have problems if the interface and implementation are placed in two separate files - haven't got time to think about this right now.) I'm really not sure what to do about the possibility of a uses clause outside of both sections, but it strikes me as being inconsistent with the rest of the approach.
While you could introduce an export clause of some sort, one isn't really necessary under this scheme and it seems easier on the programmer to just remember that everything in the interface is exported, including what is brought in through uses. It's a simple enough rule. And no new syntactic elements, Frank! (But perhaps some of the behaviour differences are annoying instead?)
BTW, I don't have experience with use of several interfaces for one module as Frank mentioned in an earlier post (he says this is possible in EP). If anyone has any comments on this, I'd be happy to hear them.
Not recommended in general, but on rare occasion useful - you can allow users to optionally relax the type checking across the interface/implementation barrier (comparing the implementation's definitions to the declaration part of the interface) to allow explicit recasting of types for the rare occasions this is useful. This isn't possible if the interface is imported into the implementation section. Not exactly a feature to rest a debate on, though ;-) The independence is the key thing; this is just a minor aside.
C. Circular references.
Skating out on thin ice here...
If a unit's interface has already been imported, a re-importing of it is redundant (unless you allow importing only parts of an interface, but let's keep this simple for now). This ought to be able to break the deadlock -- ? Ditto for the initialisation code.
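A sketch of how the rule might break a cycle (speculative, like the paragraph above; TA and TB are invented names):

```pascal
unit A;
interface
uses B;               { importing A's interface pulls in B ... }
type TA = record Next: ^TB end;
end.
implementation
end.

unit B;
interface
uses A;               { ... which would pull in A again; since A's interface
                        is already (being) imported, this re-import is
                        treated as redundant and the cycle terminates }
type TB = record Next: ^TA end;
end.
implementation
end.
```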
I have to get back to work, so this is where I stop. Sorry... or perhaps you're grateful?! ;-)
Cheers,
Grant
Grant Jacobs wrote:
Well, I promised that I'd write up a brief account of what I really meant. Unfortunately I'm going to have to let it slip a little as I am extremely busy. Hope this rushed effort makes it clearer, not more confusing! Looking at it briefly, if anything it's redundant as all this seems to have already been covered, but all the same... I did promise more examples, which I have to admit I'm out of time for... maybe another week.
Anyway, on with the show.
Well, you've convinced me that the models are quite different. But even after reading your description, I must say that I prefer both the UCSD/BP model for its simplicity (see below) and the EP model for its explicit ways (import and export are separated, and you can import and (re-)export whatever you like, at the cost of some more typing).
Therefore I'm afraid I don't think I will implement the other model, at least not in my free time. If you really need it, you could hire me to implement it -- I'd expect it to take a few weeks. Or you can try it yourself ...
I must also say that I'm a little angry at the inventors of these models (UCSD and Borland on the one side and Apple on the other, I suppose). Not only did they all depart from the standards efforts (I know EP wasn't finished at that time; but I also know that BP retreated from the committee long before that, and even ignored the already existing classic Pascal standard; not sure about Apple's behaviour), but they implemented two models with the same syntax which are different, but not at first sight. Hardly anything could be more confusing.
A. Layout.
Basically identical to BP units:
[...]
The difference lies in the way the elements are used, not the syntax.
Because of this, I suspect if this were to be implemented, a compiler switch such as the one Chief suggested would be appropriate. No new keywords, Frank! ;-)
That's exactly the problem -- the same syntax for a different thing. A compiler switch for such a purpose is also not in line with the existing ones which should not modify the behaviour "substantially" (i.e., there are some to enable/disable some features, checks, or change the behaviour in details such as default output field width, but as you described, the two unit models are fundamentally different, and a given source can possibly compile with both settings and yield different results which a user of the unit will notice; quite confusing -- just like your initial confusion when you found out about the difference the hard way ...).
(OK, well, macros are an exception. So even if your unit model was implemented as, say, `unit with export Foo' (to write some silly combination of keywords), a switch `"-Dunit=unit with export"' (macro definition) would effectively make it possible to write `unit Foo' ...)
B. Independence of action of the interface and implementation sections
Firstly, let's clarify the flow of information of interface and implementation sections as this seems to have caused confusion amongst some people. Conventional units effectively work such that the declarations in the interface are imported into the implementation section of the same unit.
Here, the interface exports all of its contents to other units. Its declarations are *checked* against those of the implementation section.
I suppose you mean only routines here. Are they only checked, or do they behave like `forward' declarations, i.e. can you do:
unit Foo;
interface
procedure Foo;
implementation
procedure Bar; begin Foo end;
procedure Foo; begin end;
end.
For types, constants and variables there is nothing to check, since their declarations are not repeated in the implementation ... or are they? -- Well, I think I still don't quite understand. What about the following (rather common) case:
unit Foo;
interface
type t = record [...] end;
procedure Foo (var p: t);
implementation
procedure Foo (var p:t); var Temp: t; begin end;
end.
Does this work? If so, does the implementation get t from the interface, or is it only available within the definition of Foo?
Or do you have to declare t again in the implementation (or import it explicitly)?
In any case, I see some difficulties when trying to implement it. First, one would have to write a mechanism to only get the routine declarations (depending on the answers above, either as forward declarations, or in a special state that doesn't exist currently: must be checked, but cannot be called yet).
Secondly, a compiled interface (GPI) must distinguish between here-declared and re-exported routines (only the former should get to the implementation in the way described IIUIC).
In case type definitions etc. have to be repeated, that adds more overhead -- not only for the programmer (IMHO), also for the compiler. Currently there's no code to check type definitions for equality (because that's not needed in regular Pascal; only type compatibility matters, and two distinct structures are never compatible, even if they look the same). So this code would have to be written from scratch. Same for variables and constants ...
[...]
Everything in the implementation section is private.
At least here it agrees with BP units. ;-)
The interface section will, of course, have to import the units needed to satisfy the declaration part of the interface. This has the effect that users of the current unit get these dependencies "for free". This can work well, provided that developers of lower-level units don't place a lot of things into the interfaces that really aren't needed to be there - which they ought not to be doing anyway.
I slightly disagree here -- e.g., I like to have some mid-lower level units for certain topics (e.g., string utils, file utils, ...). If a higher unit that needs something from them in its interface (e.g., a type from the lower units used in a parameter of its own routines) re-exports everything from it, this sounds to me like you ask your friend for a book, and he brings his toolbox because he needed a screwdriver to open a box in which he'd kept the book. ;-)
Put in another way: the current unit's interface looks after the "required definitions" on behalf of any users of the current unit by exporting those that are needed.
But IIUYC, it re-exports everything, not only the required definitions, doesn't it?
(To do this with more control - a good thing I think - you really need the ability to import only subsets of an interface.)
I certainly agree here. But as I said, this ability exists already.
Because of the way things work, there will almost always be a uses in both the interface and the implementation, unlike in BP units where you can use only one uses in the interface section and rely on the implementation section picking this up. Having the uses in the interface imported by the implementation imposes restrictions on the interface, and if you left things that way the idea of re-exporting interfaces of imported units would be awkward.
I don't think so. BP does allow `uses' in the implementation part (in case that wasn't clear), so I don't think in the BP model you need to use any more units in the interface part than in the Apple model. (Though you *can* move the imports required by the implementation there as well in the BP model, but you don't have to.)
Not recommended in general, but on rare occasion useful - you can allow users to optionally relax the type checking across the interface/implementation barrier (comparing the implementation's definitions to the declaration part of the interface) to allow explicit recasting of types for the rare occasions this is useful.
Strong objection here! :-)
Explicit type casting within a routine declaration is possible, in BP style (see the other thread). That's not too nice, but at least it's explicit, and someone who uses it can be supposed to know what they're doing. And they see the type cast right there where the code is that uses it.
"Casting" a routine type (i.e., the parameter form) is quite a different beast. IMHO that's exactly not an explicit, but rather a quite implicit cast (the types change magically on their way into the routine). I've commented on this WRT `univ' in procedural types.
(Now something like this is possible in GPC with the (ab)use of `external' and linker names, but it's certainly not recommended.)
And "relaxing the type checking" even sounds like doing it globally (not only for routines that are specially marked for "uncertain argument types"). This would be another order of magnitude worse since it would affect all routines in the interface. But maybe I'm misunderstanding you here.
C. Circular references.
Skating out on thin ice here...
If a unit's interface has already been imported, a re-importing of it is redundant (unless you allow importing only parts of an interface, but let's keep this simple for now). This ought to be able to break the deadlock -- ?
Which deadlock? There is no deadlock in either BP or EP circular references.
Frank
Frank Heckenbach wrote:
I must also say that I'm a little angry at the inventors of these models (UCSD and Borland on the one side and Apple on the other, I suppose).
Apple ?? That must be a misunderstanding.
Macintosh Pascal compilers have their roots in Apple Pascal for the Apple II, an implementation of UCSD Pascal, way back in 1980. The unit model on the Macintosh is actually quite close to UCSD Pascal.
It is true that Think Pascal for Macintosh introduced "uses propagation" as a special feature for practical reasons: the massiveness of the Macintosh Toolbox. In each unit, you typically had to refer to forty or more other units. The feature was copied in CodeWarrior Pascal as an option, but not in Apple MPW Pascal (as far as I recall).
Anyway, the model described by Grant Jacobs is not an Apple model.
Regards,
Adriaan van Os
Adriaan van Os wrote:
Frank Heckenbach wrote:
I must also say that I'm a little angry at the inventors of these models (UCSD and Borland on the one side and Apple on the other, I suppose).
Apple ?? That must be a misunderstanding.
Macintosh Pascal compilers have their roots in Apple Pascal for the Apple II, an implementation of UCSD Pascal, way back in 1980. The unit model on the Macintosh is actually quite close to UCSD Pascal.
It is true that Think Pascal for Macintosh introduced "uses propagation" as a special feature for practical reasons: the massiveness of the Macintosh Toolbox. In each unit, you typically had to refer to forty or more other units.
Not to restart that discussion, but that's just the case where I'd prefer an explicit "encapsulation" (something like (what I think of) "libraries").
The feature was copied in CodeWarrior Pascal as an option, but not in Apple MPW Pascal (as far as I recall).
Anyway, the model described by Grant Jacobs is not an Apple model.
Grant spoke of "MW Pascal". Isn't this the same as Apple MPW Pascal?
I might be a little confused about the Pascal compilers by Apple and/or for Mac, since I've never used one of them myself.
Frank
At 9:18 PM +0100 11/3/03, Frank Heckenbach wrote:
Adriaan van Os wrote:
It is true that Think Pascal for Macintosh introduced "uses propagation" as a special feature for practical reasons: the massiveness of the Macintosh Toolbox. In each unit, you typically had to refer to forty or more other units.
Can anyone tell me what the differences are between my model and Think Pascal? I'm being lazy: I do have an old copy of Think Pascal lying around - not sure if it'll run under Classic on OS X... one day I'll check this out...
Grant spoke of "MW Pascal". Isn't this the same as Apple MPW Pascal?
Nope. CW = MW (MW is the company that makes CW).
Grant
Frank Heckenbach wrote:
Grant spoke of "MW Pascal". Isn't this the same as Apple MPW Pascal?
No, "MW Pascal" stands for the MetroWerks CodeWarrior Pascal compiler (see below).
I might be a little confused about the Pascal compilers by Apple and/or for Mac, since I've never used one of them myself.
Yes, it is confusing, so I have listed the Pascal compilers for Macintosh that I know of:
1. Lisa Pascal (by Apple Computer) for the Lisa (the predecessor of the Macintosh). I don't know much about it.
2. Turbo Pascal (by Borland), only a version 1.0. An excellent compiler, but for some reason the product was discontinued.
3. TML Pascal (by TML Systems, headed by Tom Leonard). I have only worked with their TML Pascal compiler for the Apple IIgs, which had too many bugs. The company disappeared.
4. Lightspeed Pascal, later named THINK Pascal (by Symantec). Still very popular, though some hate the editor. It produces code for 680x0 processors only.
5. MetroWerks Pascal (by MetroWerks), their pre-CodeWarrior standalone Pascal compiler, a sibling of their Modula compiler (both rather unknown).
6. Apple MPW Pascal (by Apple Computer) for the Macintosh Programmer's Workshop. MPW has a command line interface and hosts compilers for several languages. Interesting sidebar: there has been MPW support in GCC and the GCC source still has MPW-related parts (although the targets don't seem to build). MPW has been replaced in Mac OS X by "Project Builder", inherited from NeXT.
7. Language Systems Pascal (by Language Systems) for the Macintosh Programmer's Workshop. A real farce. They obtained the source of Apple MPW Pascal for the 680x0 processor, changed a few runtime details and named it LSPascal. Then, they started working on a port to the PowerPC processor. You had to pay 500 US dollars (or so) to participate in the beta program. Yes, they produced 70 beta versions, all totally unusable. The company was sold and the compiler disappeared into nowhere. An unbelievable story.
8. CodeWarrior Pascal (by MetroWerks), often named "MW Pascal". A Pascal compiler for the 680x0 processor and the only one for the PowerPC processor. Later also a cross compiler for and on Wintel. The CodeWarrior IDE hosts compilers for several platforms and languages and it has a well documented plug-in architecture. MetroWerks saved Apple Computer with CodeWarrior when the PowerPC was released, as Apple failed to deliver PowerPC compilers. MetroWerks was later acquired by Motorola. The Pascal compiler has recently been discontinued.
Regards,
Adriaan van Os
Adriaan van Os wrote:
I might be a little confused about the Pascal compilers by Apple and/or for Mac, since I've never used one of them myself.
Yes, it is confusing, so I have listed the Pascal compilers for Macintosh that I know of:
- Turbo Pascal (by Borland), only a version 1.0. An excellent
compiler,
Oh, I must correct myself, I actually used that at school for a few weeks (it was even 3.0 IIRC), before they switched to PCs ...
but for some reason the product was discontinued.
I think the reason is easy: Borland added many PC specific features in the later versions, many of which are hard or impossible to make portable. (I know because I did that in GPC, as far as possible.) To support another platform would have meant they'd have to make quite a different and incompatible compiler. (Or to design the features properly in the first place, but it seems that wasn't an option for them ...)
- MetroWerks Pascal (by MetroWerks), their pre-CodeWarrior standalone
Pascal compiler, a sibling of their Modula compiler (both rather unknown).
- Apple MPW Pascal (by Apple Computer) for the Macintosh Programmer's
Workshop.
So "MWP" vs. "MPW", that's what was a little confusing ...
Frank
In move.pas, the Move et al are defined as:
procedure MoveLeft (const Source; var Dest; Count: SizeType); asmname '_p_MoveLeft';
procedure MoveRight (const Source; var Dest; Count: SizeType); asmname '_p_MoveRight';
procedure Move (const Source; var Dest; Count: SizeType); asmname '_p_Move';
Given the comment in the documentation that const could be passed either as value or reference, should the first parameter be "protected var"? I presume since they are type-less, they can't be copied, so they have to be passed by reference, so it would always be technically OK to leave it as const, but "protected var" would seem more sensible...?
Is that correct? Peter.
Peter N Lewis wrote:
In move.pas, the Move et al are defined as:
procedure MoveLeft (const Source; var Dest; Count: SizeType); asmname '_p_MoveLeft';
procedure MoveRight (const Source; var Dest; Count: SizeType); asmname '_p_MoveRight';
procedure Move (const Source; var Dest; Count: SizeType); asmname '_p_Move';
Given the comment in the documentation that const could be passed either as value or reference, should the first parameter be "protected var"? I presume since they are type-less, they can't be copied, so they have to be passed by reference, so it would always be technically OK to leave it as const, but "protected var" would seem more sensible...?
As you say, for untyped parameters, it makes no actual difference. One reason for `const' would be that untyped parameters are a BP extension, and BP only knows `const', so it would be "consistent".
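For comparison, the two spellings under discussion side by side (a sketch only; `protected var` is the Extended Pascal form and `const` the BP form, and for an untyped parameter both mean read-only pass-by-reference):

```pascal
{ BP-style spelling, as currently in move.pas: }
procedure Move (const Source; var Dest; Count: SizeType); asmname '_p_Move';

{ EP-style spelling, making the read-only, by-reference intent explicit: }
procedure Move (protected var Source; var Dest; Count: SizeType); asmname '_p_Move';
```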
But I don't care too much. If you prefer it to be changed, just send me a patch.
Frank