Beautiful, nex3. It works exactly the way I had been thinking about it. But I'm not sure that making the table live in Arc is the best idea... it relies on a table with a particular name, which seems slightly odd to me. Nevertheless, excellent--including the move of everything but functions into arc.arc. Exactly how it should work. Thank you.
Now, if I could only understand defset, I'd be all set... :)
I put the table in Arc because I was trying to make it work as closely as possible to stuff like defset and help. Although it is a little weird to have the core code relying on stuff going on in Arc-land, I think that's going to be necessary to give Arc as much power as possible.
Also, the core code only relies on call being defined at the moment it actually has to resolve an object in functional position. Doing something like (= call nil) will only fail when you actually try to use a non-function object as a function.
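For instance (untested; the exact error text is whatever the core raises):

arc> (= call nil)        ; clobber the dispatch table
nil
arc> (+ 1 2)             ; ordinary function calls never consult call
3
arc> ((annotate 'foo (table)) 'x)  ; a tagged object in functional position does
Error: ...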
That's a good point. And I do understand when it will fail; it just seemed odd. But you're right about the power; it also reduces the number of axioms.
Think of it this way: the ideal Arc would be written entirely in Arc, except that some parts are written in Scheme for performance reasons or to allow things to actually run.
I've been thinking that the Scheme code (ac.scm) should be split in two parts: an "axiomatic" part that defines car, cdr, set, annotate, etc.; and a "library" part. The "library" part would define OS stuff like current-gc-milliseconds, open-socket, probably atomic-invoke, etc. There would probably be a "math" library that defines exp, sqrt, and all the missing math functions.
This structure would make it clear what parts of the language are really axioms, and what parts are in Scheme for performance, convenience, or because they are OS things that require low-level hooks.
Here "scheme-fn" is a macro that returns an Arc function that calls the named Scheme function, but still expects to be passed Scheme values and returns a Scheme value. Then various functions such as "scheme-istrue" can be used to convert Arc values to Scheme values and back again.
That should be doable with a new primitive and a modification to ar-apply in ac.scm. We could define a primitive called, say, behave (or rather, a much better name that I can't think of), working roughly like this.
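A sketch of the idea (behave's name and calling convention are guesses on my part):

(behave 'table
  (fn (t key (o default))
    ; look the key up by walking the table with maptable, instead
    ; of relying on the core's built-in handling of tables
    (let result default
      (maptable (fn (k v) (when (is k key) (= result v))) t)
      result)))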
(This is all untested.) In fact, we could even move everything but functions (and macros?) out of ar-apply and define them all in arc.arc, removing the (ar-tagged? fn) check, but this would break on the release of a new arcn.tar.
The main reason my settable-fn works the way it does is so that attachments are orthogonal to annotations. An object might be tagged, or it might be attached, or it might be both tagged and attached.
This means that I could have defined a function, tagged it as 'table, and provided a '= attachment for a setter function and a 'keys attachment for redef'ed versions of 'maptable and 'keys. This way, I don't have to modify, say, 'each, which will simply see that it's a 'table and pass it to 'maptable, without ever realizing that it isn't a hash table.
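Roughly like this (illustrative only; settable-fn.arc's actual add-attachment argument order may differ):

(= backing (table))
(= faketab
   (annotate 'table
     (add-attachment '= (fn (v k) (= (backing k) v))
       (add-attachment 'keys (fn () (keys backing))
         (fn (k) (backing k))))))
; 'each sees (type faketab) => table and never notices
; that it isn't a real hash table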
With this system, everything that takes an attachment must be a settable-fn (or whatever). This means that I need to modify about a dozen existing macros so that they will work with my "table" -- so that they will also check for a 'keys attachment -- and I would probably want to do the same for any new macros. Sure, I could redef 'type, but this gets more complicated (and potentially risky).
So no, I don't agree with this mod, because I think attachments should be orthogonal to annotations. I don't want an object-with-attachments to have its own annotation, I want the user to specify his or her own annotation for the object. In any case, since the mod has been pushed on arc-wiki, I'll have to work around it, possibly having to redef 'type.
I'm not sure I understand... defcall doesn't really have much at all to do with annotate. It certainly doesn't change the way annotate works. It just allows you to make user-defined types work like functions.
In any case, it seems un-Lispy to me in general to redefine behavior by attaching things to objects. I'd rather see us redefining and tweaking the verbs rather than adding information to the nouns. This is also, I think, how PG envisioned annotate and friends working. From http://www.paulgraham.com/ilc03.html :
"If you want to overload existing operators to do the right thing when given your new type, you don't need anything new in the core. As long as you have lexical scope, you can just wrap a new definition of the operator around the old one. So if you want to modify print to display objects of your new type foo in a special way, you write something like this:
I don't know, maybe I'm being short-sighted and attaching information to objects is really necessary. But I'm not seeing a case right now where it wouldn't be just as easy (or easier, with stuff like defcall) to define something like an attached-object type and use that.
defcall specifies how an 'annotate'd object will work in function position. This means that it dispatches on the claimed type of the object, not on the real type. So (at least in your original version) a type masquerading as another type will be difficult to implement:
arc> (= test (annotate 'table (fn (x) x)))
#3(tagged table #<procedure>)
arc> test!x
Error: "Can't get reference"
I've since modified it so that if it's a tagged type, 'ref will perform apply on its representation. This means that currently, (= (call 'type) ref) will cause an object typed as 'type to dispatch as if it were its representation, while using (defcall 'type ...) will dispatch based on its type (meaning it can't be faked).
I don't think types should masquerade as other types, by which I mean they shouldn't annotate themselves with other types' symbols. I think the proper way to act like another type is to behave like that type, not to annotate yourself with that type.
The thing is, if we expect (annotate 'table (fn (x) x)) to act just like a table out of the box, we have a lot of work to do. Every table axiom has to check for an annotation and recurse if one exists.
This may not seem so bad for tables, but consider: if ((annotate 'table (fn (x) x)) 'foo) works, shouldn't ((annotate 'cons (fn (x) x)) 1)? What about (+ (annotate 'num (fn (x) x)) 2)? What does that even mean?
It seems to me that the easiest and most consistent way to mimic another type is to annotate with a new type, redef functions like keys, and use defcall to make the object work in functional position.
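For example, something like this (a sketch; I'm assuming defcall's handler receives the representation first -- check the patch for the real convention):

(= pseudo (annotate 'pseudotable (fn (k) (case k a 1 b 2))))
(defcall pseudotable (f k) (f k))  ; act table-like in functional position
(let orig keys
  (def keys (x)
    (if (isa x 'pseudotable)
        '(a b)
        (orig x))))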
I urge you to check out settable-fn2.arc if you haven't already - I re-implement get- and add-attachment using this style of annotation, and it comes out quite nicely. Rather than annotating the attached functions with their types, I add a 'type attachment which overrides isa. It appears to work fine with file-table.arc, too.
From my point of view, attachments should be orthogonal to types. Basically, an attachment is any piece of information you want to attach to an object, and lives and dies with that object. That an attachment is used to overload 'keys or '= is just a use of the attachment concept.
For example, we might want to build a reader which keeps track of line numbers. The reader's output is still 'cons and 'sym, etc., but with an attachment. Each 'cons cell has a 'linenumber attachment which we can use. For example, a macro whose syntax has been violated would be able to report the line number where this violation occurs. This is useful if the macro is used often enough and there is a need to locate the line number of the error, or if its syntax is like CL 'loop and you expect it to span several lines.
In all cases, the cons cell produced by this hypothetical reader is a cons cell. Its representation is a cons cell and is only a cons cell. However, we can extract additional data from it. After it passes through the evaluator and is discarded as trash, its attached data can be thrown away.
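If attachments really were orthogonal like this, using that reader's output might look like (untested, and the add-attachment/get-attachment argument order is assumed):

(= form (add-attachment 'linenumber 42 (list 'def 'foo)))
(type form)                        ; => cons -- still an ordinary cons
(get-attachment 'linenumber form)  ; => 42, available for error reports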
In any case file-table.arc only cares that settable functions work, and settable functions only care that attachments work. Whether we make attachments orthogonal to types, or separate stuff-with-attachments as types may not really matter so much anyway. This is Lisp, after all.
I agree that attachments should be orthogonal to types - that's why I added the "isa" overloading to settable-fn2. But I don't think annotations should be orthogonal.
The thing is, there's no way we'll be able to add arbitrary attachments to any object in Arc and have it continue to behave just as if there were no attachments. We'd need either to modify the core to give each object a Python/Ruby/Javascript/etc.-style implicit table, which I don't think PG is likely to be very fond of (and which I don't think is a good idea besides); or to accept that there will be some cases where we won't be able to get attachments without a few compromises.
"Now this is the noble truth of the origin of suffering: it is this attachment which leads to renewed existence, accompanied by delight and lust, seeking delight here and there, that is, attachment to sensual pleasures, attachment to existence, attachment to extermination."
Therefore... buddha pg, please enlighten us and deliver us from attachment! ^^
Also, I just pushed a settable-fn2.arc that uses annotations how I envisioned them being used (it's also pure Arc). Hopefully that should make it more clear what I'm thinking.
No; as nex3 observed, it's supposed to add it (cf. http://www.paulgraham.com/ilc03.html). Why? Because this is more general. Right now, we can define reptag to do what you want:
(def reptag (typ obj)
  (annotate typ (rep obj)))
If we just had reptag, we couldn't define annotate.
Also, annotate obeys two useful identities:
(type (annotate x y)) --> x
(rep (annotate x y)) --> y
However, reptag does not:
def --> #3(tagged mac #<procedure>)
(rep (annotate 'fn def)) --> #3(tagged mac #<procedure>)
(rep (reptag 'fn def)) --> #<procedure>
Because of the type-replacing behaviour, that identity does not hold for tagged objects. I consider that a strike against it as well.
I think it's pretty clear that he intends option 1, as that's how it actually works and, as absz pointed out, it's a strict superset of the functionality of option 2 (and has nicer properties, too).
I can't blame him. Cutting bloat in the language core is clearly a goal. Nothing should go into the official Arc release unless it has proven its value in real code (which is basically News.YC at the moment).
Sure, he has to keep control over things. The point is that a few bug fixes (the mkdir problem, for example), simple conveniences (see arc.sh), and even trivial optimisations (arc<, to name one) available in Anarki would be of interest even for the official release. I'm not talking about the experimental stuff (infix numeric notation, vectors, standalone exes, maybe docstrings, experimental module systems, ...).
But I think the few fixes and conveniences should really be taken into consideration. I can't believe none of them are of interest.
They probably are of interest, but let's not forget he's running a company in his spare time, and 3 releases in about 3 weeks is a pretty good work rate.
Of course, as someone running Arc in Windows, I'd love it if a bit more stuff worked out of the box (e.g. the blog, which didn't work in Arc1 IIRC). That's why I'm probably going to switch to developing on Anarki and then testing it on vanilla Arc afterwards.
Note that I'm not criticizing Paul's attitude here. 3 releases in less than a month is much more than I expected. He didn't release early, but at least he releases often :) I just meant that a few things deserve a little more consideration, at least in the next few weeks/months?
(mac delay body
  `(annotate 'promise (fn () ,@body)))

(def force (promise)
  ((rep promise)))
This was lightly adapted from http://cadrlife.blogspot.com/2008/02/lazy-lists-in-arc.html which I found on this forum -- read it for a more complete version with various supporting functions, etc. It's not primitive lazy evaluation, but it should work for many of the same things that delay/force work for.
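For example (untested; note that without memoization the body re-runs on every force):

arc> (= p (delay (do (prn "computing") (* 6 7))))
#3(tagged promise #<procedure>)
arc> (force p)
computing
42
arc> (force p)  ; not memoized, so the body runs again
computing
42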
complement only works with functions. behind the scenes, it uses apply to call the original function. at first glance, i don't really see a reason that it couldn't just do the same as (compose no _)
I know compose works on macros... but I'm not sure you've got that right, because compose uses apply as well. And the macro expansions look exactly the same.
arc> (macex '(compose no litmatch))
(fn gs1639 (no (apply litmatch gs1639)))
arc> (macex '(complement litmatch))
(fn gs1641 (no (apply litmatch gs1641)))
arc> ((compose no litmatch) "a" "aston")
nil
arc> ((complement litmatch) "a" "aston")
Error: "vector-ref: expects type <non-negative exact integer> as 2nd argument,
given: \"a\"; other arguments were: #3(tagged mac #<procedure>)"
except calls to compose are optimized away in the interpreter. they are treated specially. look in the `ac' function in ac.scm. i'm guessing this is the reason.
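if that's right, compose should fail exactly like complement whenever it isn't literally the head of a call, since the special-casing can't kick in (untested):

arc> (= f (compose no litmatch))  ; compose as a plain value this time
arc> (f "a" "aston")              ; presumably the same vector-ref error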
Now, what to do about the other case... there's still no reason it should bug out, it seems to me. Or rather, it should be implementable in such a way that it shouldn't.
Yes, we do. But if I recall (and I can't test, because the git "wiki" broke for later mzschemes again), apply does work for macros, except it evaluates all the arguments first, which breaks things like and. Someone should test that, though; as I noted, I can't.
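If apply on macros does work that way, the breakage is easy to see: the arguments get evaluated while building the list, before and ever gets a chance to short-circuit (untested):

arc> (and nil (prn "never printed"))
nil
arc> (apply and (list nil (prn "printed anyway")))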
I saw, that's how I was able to test. Thank you for setting it up, by the way! And if you fixed it, thank you for that; if someone else did, thank them for that.
I'm fairly inexperienced with pure functional programming à la Haskell and ML, but isn't this a weaker variant of tagged unions? Tagged unions are (in my mind) fantastic, but what you have allows only one tagged union in the entire program. It would be nice to have Haskell's "maybe" union, which could be (just x) or (nothing). Then you could also have, say, a binary tree type* which could be (node x) or (branch btree-left btree-right), and they would be different things.
* Yes, I know that lists can represent binary trees. Yes, that's often better. However, I needed an example.
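For what it's worth, you can fake a maybe with annotate (a sketch; just, nothing, and maybe-map are made-up names):

(def just (x) (annotate 'just x))
(= nothing (annotate 'nothing nil))

(def maybe-map (f m)
  (if (isa m 'just)
      (just (f (rep m)))
      m))

But the tags 'just and 'nothing are just global symbols, which is exactly the one-union-per-program problem: another library's 'just would collide with this one.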