On the other hand, that operates differently if the last value is nil; in other words, it's a tradeoff and has different semantics. I'm personally inclined to think that a boolean might be more useful.
I agree up to a point; to be a useful idiom, the semantics of the return value should primarily make code easier for humans to read -- while still allowing the kind of flow construct that cuts down on code length.
I don't find t / nil to necessarily be the best range, though; the number of iterations performed, if non-zero (nil otherwise), conveys useful information that is often wanted and presently requires explicit code to define and update a counter variable. Ideally that count could instead just be there for you to capture, should you want it.
In languages where zero has boolean falsity, the unmodified iteration count would of course work even better and more cleanly, but I think nil-or-positive-integer works better in Arc.
If getting the count were presumed to be the more common case (vs. just figuring out whether a loop ran its body at all), I would argue for the count, zero or otherwise; but personal experience says you mostly want the "whether?", not the "how many?", and that the latter is the occasional useful fringe case that mainly adds extra convenience.
Given that the number of iterations adds strictly more information, I can't see why that would be a bad idea. I agree that it's not normally what you would want, so the asymmetry between numbers and nil is probably worth it.
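For concreteness, here's a minimal sketch of the nil-or-positive-count behaviour (count-each is just a name I'm inventing; it only wraps the existing 'each with a counter):

(mac count-each (var xs . body)
  (w/uniq n
    `(let ,n 0
       (each ,var ,xs (++ ,n) ,@body)
       (if (> ,n 0) ,n))))

(count-each x '(a b c) (prn x)) returns 3; (count-each x nil (prn x)) returns nil.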
Don't CL's and Scheme's 'do allow the user to specify the return value? Maybe this wouldn't necessarily be the case with 'each, but I think it is correct in spirit to allow the user to control the return value.
Hmmm, I like that. The only problem is that then you can't just say _ to access the first argument, and one-argument functions are the most common case. So you'd either need a new name for _ or a different name for the list, it seems to me, and those break symmetry.
I think he meant that updating Arc didn't make any difference to the requirements, not that using an old mzscheme doesn't make any difference, which it certainly does. I wouldn't be using Arc if there hadn't been a patch to get it working on newer mzschemes; the git "wiki" (http://git.nex-3.com/arc-wiki.git) works on said mzschemes right now. It may not be official, but it might help scratch your itch.
Oops, that extra "do" wrapping just the map was left over from debugging the macro and can be chopped. Note to noobs: this is also a macro-writing tip worth remembering -- you can put debugging print statements in a macro's body that run as the macro gets expanded, which helps with debugging and even with learning how to write macros.
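For instance, a toy macro of my own just to show the trick:

(mac plus-one (x)
  (prn "expanding plus-one, x = " x)  ; runs at expansion time, not run time
  `(+ ,x 1))

Typing (plus-one (* 2 3)) at the repl prints the message while the call is being expanded and then returns 7; if the call sits inside a def, the message prints when the def itself is evaluated, not when the function later runs.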
There are a couple of things wrong with your macro.

The first is that your (list ,@args) is inside ,(each ...), which is inside `(do ...); you can only have as many unquotes (,s or ,@s) as you have quasiquotes (`s), and you have two within one.

The second is that "each" is only run for its side effects: the return value of "each" is always nil. Thus, even if your macro worked, it would expand to (do nil), which isn't what you want. To replace "each", you want (map func lst), which returns a list where func has been applied to each element of lst; e.g. (map - '(1 2 3)) returns (-1 -2 -3). In my macro, the function returns a list of the form (report-result ARGUMENT 'ARGUMENT); the `',_ construct means "quote the value of _", since ,_ is within a `. Splicing this (map ...) into the (do ...) block will give you what you want. Is that clear?
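In other words, something along these lines -- the name check-all is mine, report-result is assumed to be defined elsewhere, and this is only a reconstruction of the shape being described:

(mac check-all args
  `(do ,@(map (fn (_) `(report-result ,_ ',_)) args)))

; (check-all (is 1 1) (is (+ 1 2) 3)) expands to:
; (do (report-result (is 1 1) '(is 1 1))
;     (report-result (is (+ 1 2) 3) '(is (+ 1 2) 3)))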
Also, a handy tip for debugging macros: (macex1 '(an-expression ...)) will expand the first macro call in (an-expression), which can help you see what's going wrong.
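For example, with a macro from arc.arc (output shown roughly as the repl prints it):

arc> (macex1 '(when t (prn 1) (prn 2)))
(if t (do (prn 1) (prn 2)))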
I can't tell you if this is the right place to ask these questions, but having some place for them would definitely be a good thing. I'm usually happy to answer them, though.
Enormously helpful. Thank you. Coming from Blub world, it's been hard for me to think functionally - making a distinction between returning values and side effects.
I don't understand your first point, however, as this is a perfectly valid macro:
Nitpick. This is not a good definition (valid, but not good). The problem is something like this:
(double (prn 3))
Try the above in your repl after entering the mac definition; then consider what must be done in order to protect the x. For example, you might notice that the macros in arc.arc have a lot of (w/uniq (...) `(let ...)) forms, even the arguably simpler ones.
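The definition itself isn't quoted here, but presumably it was something like the first form below; the second shows the usual arc.arc-style protection:

(mac double (x)
  `(+ ,x ,x))           ; (double (prn 3)) prints 3 twice: the form is pasted in two places

(mac double (x)
  (w/uniq gx
    `(let ,gx ,x
       (+ ,gx ,gx))))    ; (prn 3) now runs once and its value is reused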
I liked the "old" name (in http://www.paulgraham.com/ilc03.html) of "tag", but that appears to have gone away so the web library could use it as a name. "make" isn't bad, though.
Unfortunately, "cracker" already has the meaning of "one who breaks into other people's computer systems or software." We've had to fight hard enough to retain the real, positive meaning of "hacker" (and haven't entirely succeeded); "cracker" is a lost cause.
I agree with your general premise, but I have one correction--in arc0.tar, [...] expands directly to (fn (_) (...)). It's only in the git repository that it expands to (make-br-fn (...)) (which (semi-incidentally :P) I added, but that's not the point). Regardless, your point still stands, and I agree.
make-br-fn allows you to use _1 _2 ... _n (it searches for the n, AFAIK) as well as __. If you use _n you get an n-arity function. If you use _n and __, you get a >=n-arity function, with __ containing the rest. I have a few reservations about whether it handles checking of free variables properly, but I haven't actually dived into the code.
make-br-fn doesn't search for a literal _n, but it searches for anything matching the regexp /_(\d+|_)?/, except for anything of the form /_0+/. The free-variable checking code is based off of problems I did while working through Essentials of Programming Languages by Friedman, Wand, and Haynes, and it certainly shouldn't break in most common cases (especially since most common cases won't bind any extra variables). A second set of eyes is probably a good idea, though.
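Assuming the behaviour described above, usage looks something like this (my own examples):

([+ _1 _2] 3 4)        ; 2-arity function => 7
([cons _1 __] 1 2 3)   ; at-least-1-arity; __ gets the rest => (1 2 3)
([* _ 10] 4)           ; a bare _ still works as the single argument => 40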
I tried it on my machine, and got the same error (well, I ^Ced out before it finished, but it was the same thing). However, the segfault isn't in the creation--it's in the display. Observe:
arc> (= x '(1 2))
(1 2)
arc> (do (sref x x 1) t)
t
Of course, this makes it difficult to test if there is a bug somewhere...
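You can at least poke at the structure without printing it, which confirms the circular cons really got built:

arc> (is (cadr x) x)
t
arc> (car x)
1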
The biggest win, which actually results from a semantic change, is that it becomes possible to cleanly apply a macro to a variable number of arguments. Consider
(apply and (acons x) (car x) lst)
as opposed to
(and (acons x) (car x) @lst)
The first version is broken: it will be forced to evaluate (acons x), (car x), and lst before applying and to them, which breaks the semantics of and: (acons x) no longer guards against (car x) trying to take the car of a non-list, as everything is evaluated before being passed to apply.
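To see the guard in question, with plain Arc and no @ or apply involved:

(= x 5)                  ; x is not a cons
(and (acons x) (car x))  ; => nil; (car x) is never evaluated
(car x)                  ; => error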
On the other hand, the second version works: the list is spliced in and then the and macro is run as normal. The semantics are preserved here because @, like a macro, expands the code first; nothing is evaluated, and then the and macro runs on its arguments as usual, short-circuiting if (acons x) fails. The variable being spliced is still evaluated, but that is the point of this notation.
As a concrete example (though I think this is generally useful/powerful), partial application then becomes merely
(def par (f . args)
  (fn newargs (f @args @newargs)))
If apply were used instead of the splicing @, this would actually break on certain macros as noted above, both causing the abstraction to leak and preventing someone from partially applying and, or, etc.
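For comparison, here's a sketch of what you can already write today with apply -- fine for functions, but it inherits exactly the macro problem described above, so no partially applied and:

(def par (f . args)
  (fn newargs (apply f (join args newargs))))

; ((par + 1 2) 3 4) => 10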
The more I think about this, the more mind boggling I find it. What exactly would this expression expand to?
(and (acons x) (car x) @lst)
The actual lst is only available at runtime, while the 'and macro expands at compile time.
Edit:
After looking at the definition of 'and, it is clear that the above could never work. Without performing the splice it would expand to
(if (acons x) (if (car x) @lst))
For each additional argument 'and must add another 'if to the code, so with lst being spliced in at runtime, we have an impossible problem.
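If memory serves, the arc.arc definition is roughly the following; each expansion step consumes one literal argument and emits one more 'if, which is exactly what a runtime-spliced lst can't feed it:

(mac and args
  (if args
      (if (cdr args)
          `(if ,(car args) (and ,@(cdr args)))
          (car args))
      't))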
It seems @ can only ever work reliably in quasiquote. I guess longtime lispers already knew, otherwise we would probably have had this feature for several decades already.
If you insist, it could still be done for functions. To avoid inefficient code, it had better be smart about how it builds the list.
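A naive desugaring sketch (join is the list concatenation already in arc.arc; the rest is made up, and certainly not the smarter list-building just mentioned):

; (f a @xs b)  could compile to something like
;   (apply f (join (list a) xs (list b)))
; which you can already write by hand:

(def f args args)
(apply f (join (list 1) '(2 3) (list 4)))  ; => (1 2 3 4)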
I had assumed that (and (acons x) (car x) @lst) would expand to (and (acons x) (car x) lst.0 lst.1 ... lst.N), and only then would it expand to (if (acons x) (if (car x) (if lst.0 ...))). In other words, @s are expanded as though they were the outermost macro, not the innermost. That should remove the problem of 'and having to be psychic. Your point about efficiency, however, is a very good one.
As mentioned, the macro expands at compile time, but the value of lst is not available until runtime.
In order to expand a call to 'and with 6 sub expressions into (if e0 (if e1 (if e2 (if e3 (if e4 e5))))), all 6 expressions must be visible at compile time.
Splicing in a list of arguments to a macro will simply never work (except by compiling the code from scratch each time you run it, i.e. limiting yourself to the most inefficient of interpreter techniques).
Aha! Oh, I see. I should have realized that. Yes, that's a problem :) I'm not convinced it's insurmountable, but this syntax certainly won't work. Thank you.
I still, however, think it might be nice for functions, but that's merely a difference in appearance, not functionality.