Not quite. Arc has various bits of (gasp!) syntax. There are 10 pieces of syntax: () [] ' ` , ,@ ~ : . and ! . A quick breakdown:
1. () you know--function application, macro application, or a list. Basic stuff.
2. [] creates an anonymous function of 1 (or, on the Anarki, n) arguments. [...] is transformed (more or less) into (fn (_) ...).
3. '... is the same as (quote ...); whatever is inside becomes a literal. 'a is a symbol, '(a) is a list....
4. `... is the same as (quasiquote ...); the same as (quote ...), but evaluated things can be inserted with (unquote ...) and (unquote-splicing ...).
5. ,... is the same as (unquote ...); as observed, `(.1. ,... .2.) is similar to (list '.1. ... '.2.).
6. ,@... is the same as (unquote-splicing ...); the same as unquote, but ... must be a list and is spliced in. `(.1. ,@... .2.) is similar to (join '(.1.) ... '(.2.)).
7. ~func is the same as (complement func), which returns a function which is conceptually similar to (fn __ (no (apply func __))), i.e. which returns the boolean opposite of whatever the original returned.
8. f1:f2 is the same as (compose f1 f2), which returns a function which is conceptually similar to (fn __ (f1 (apply f2 __))), i.e. which applies f1 to the return value of f2 applied to the arguments.
9. x.y is the same as (x y); it's designed for nice structure access, e.g. lst.3 instead of (lst 3).
10. x!y is the same as (x 'y); it's also designed for nice structure access, e.g. my-table!password instead of (my-table 'password).
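Items 7 and 8 describe higher-order functions whose behavior is easy to sketch in Python (an illustration of the semantics only, not Arc's implementation; the helper names just mirror Arc's builtins):

```python
def complement(f):
    """Return a function computing the boolean opposite of f."""
    return lambda *args: not f(*args)

def compose(*fns):
    """Return a function applying fns right-to-left:
    compose(f, g)(x) == f(g(x))."""
    def composed(*args):
        # The rightmost function receives all the arguments...
        result = fns[-1](*args)
        # ...and each remaining function is applied to the previous result.
        for f in reversed(fns[:-1]):
            result = f(result)
        return result
    return composed

odd = lambda n: n % 2 == 1
even = complement(odd)
print(even(4))                        # True
print(compose(str, len)([1, 2, 3]))  # 3 (len first, then str)
```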
Good explanation. As an aside, the four syntax characters "~ : ! ." are the "special syntax". You can see the effects of these with ssexpand. Also, ~ and : can be combined, and ! and . can be combined, but not the other combinations (e.g. ~ and !).
arc> (ssexpand '~a:b:~c)
(compose (complement a) b (complement c))
arc> (ssexpand 'd.e!f)
(d e (quote f))
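For illustration, here is a toy Python model of the : and ~ expansions shown above (a sketch only; the real logic lives in ac.scm and also handles . and !):

```python
def ssexpand(sym):
    """Expand an 'f:g:~h'-style symbol into a prefix-list form,
    mimicking Arc's ssexpand for the : and ~ operators only."""
    parts = sym.split(":")
    def expand(part):
        # A leading ~ wraps the name in (complement ...).
        if part.startswith("~"):
            return ["complement", part[1:]]
        return part
    if len(parts) == 1:
        return expand(parts[0])
    # f:g:h becomes (compose f g h).
    return ["compose"] + [expand(p) for p in parts]

print(ssexpand("~a:b:~c"))
# ['compose', ['complement', 'a'], 'b', ['complement', 'c']]
```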
I just noticed that (car (ssexpand '.a)) returns the Scheme eof object. This doesn't seem like a good thing.
Good point. To be precise (as you noted), () [] ' , and ,@ are taken care of by mzscheme's read, but ~ : . and ! are handled afterwards in ssexpand (or its equivalent in ac.scm).
There are other bugs like those--pretty much any special syntax character on its own or in an incomplete way becomes an EOF.
The nth element of a list is best obtained by (lst n), e.g. (lst 5). Also note that assignment is best written =, not set. As far as I know, there's no library function to push or pop from the end of the list, but (though it's probably unintentional) (++ lst '(seven)) will append the list (seven) to lst, doing what you want. However, here are (untested) pushend and popend functions:
Hmm, hadn't thought about that. Good point and you're almost certainly right; I did dash them off in five minutes for this post. I'd like to benchmark the two to be sure, though.
Try building lists that are a reasonably large fraction of your gc-able space; push-rev creates 2N storage, while the [+ lst (list _)] creates about N^2/2 storage.
Right, analysis of algorithms--forgot about that :P In all seriousness, though, good point. The time complexity should be the same for both cases: push is O(1), and rev is O(n), so push-rev is O(n); + is O(n), so push-+ is also O(n). (Warning: IANACS (in a few years, I hope), so my analysis could be woefully, laughably wrong.)
Why O(n^2) memory, though? + will reallocate the memory for the list, so that will double the memory requirements; I can't figure out where the squaring is coming from.
We start with an empty list. Then we add an element, and + creates a new list of 1 element. We've created a total of 1 cons cell.
Then we add an element, we discard the previous list, and + creates a list of 2 elements. We've created a total of 3 cons cells (1 discarded)
Then we add an element, we discard the previous list, and + creates a list of 3 elements. We've created a total of 6 cons cells (3 discarded).
Basically this is a summation of an arithmetic progression. Memory used isn't exactly n^2, it's a formula ((n^2 + n)/2 or some such, IIRC), basically equal to sum of i from 1 to n; since the most significant component is the n^2, it's O(n^2).
edit: the processing time complexity isn't the same either--item copy operations still take O(1) time each. rev makes O(n) copy operations and + does too, but push-rev does rev only once (O(n) total), while pushend does + at each push (O(n^2) total).
OK, I see what you mean. I was referring to the behaviour of pushend on one element (as opposed to using it to create a list in general), which is linear; of course, using pushend to build up a list of n elements has n * O(n) = O(n^2) memory usage. If push-rev were simply
, then a sequence of those would have the same problem: n things that require O(n) memory. But if you're intelligent about it, and reverse your list, push on all the elements, and then reverse again, you come out with O(n) space usage.
(Oh, and the formula is n(n+1)/2.)
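The cell-counting argument can be checked mechanically. Here is a Python sketch that tallies conceptual cons-cell allocations for the two strategies (the counters are illustrative, not Arc's real allocator):

```python
def build_by_append(n):
    """Append one element at a time by copying,
    as (= lst (+ lst (list x))) does."""
    cells = 0
    lst = []
    for i in range(n):
        lst = lst + [i]   # + copies every existing cell plus one new one
        cells += len(lst)
    return cells

def build_by_push_rev(n):
    """Push onto the front (1 cell each), then reverse once (n cells)."""
    cells = 0
    lst = []
    for i in range(n):
        lst = [i] + lst   # conceptually one new cons cell per push
        cells += 1
    cells += len(lst)     # rev copies the whole list exactly once
    return cells

print(build_by_append(100))    # 5050, i.e. n(n+1)/2
print(build_by_push_rev(100))  # 200, i.e. 2n
```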
Ah well, reading about CS in one's spare time is one thing, but actual study is another. I'm off to study all this next year, so I'll hopefully have a better grasp on it soon :) Thanks for the help.
If you want to get a feel for Arc, I recommend either reading or skimming the tutorial: http://ycombinator.com/arc/tut.txt . It covers the basics of Arc and its idioms.
I have three objections to (push obj lst -1), however. First, it's not pushing (since that only refers to the top of the list). Second, the last argument is meaningless: -1? Why that? Can I do (push obj lst 2)? (push obj lst -4)? And third, pushing on the end can't be as efficient: lists don't know their last element. If we push 'a onto '(b c d), we create a list whose car is 'a and whose cdr is '(b c d); if we shove 'a onto the end of '(b c d), we have to iterate until we find the element whose car is 'd and whose cdr is nil, and then set its cdr to '(a). That will take more time, and there's no way around that, so it's not fast, per se.
Don't ask me how Lutz did it. (Maybe he keeps a pointer to the end of the list?) I think that lists in NewLisp aren't quite the same as lists in Common Lisp. Lutz wrote:
  The concept of a 'cons cell' is about details of early LISP
  implementations (CPU registers, etc). newLISP also has a basic cell
  structure which can be linked to a list but has no concept of a dotted
  pair. When have you last seen a dotted pair used in a real world
  program? newLISP has 'cons' but it works differently,
By the way, it would be nice if Arc let you get the last element with -1, as NewLisp does:
arc> ('(2 3 4) -1)
Error: "list-ref: expects type <non-negative exact integer> as 2nd argument, given: -1; other arguments were: (2 3 4 . nil)"
Huh. That (in my humble opinion) is a terribly misnamed function (push already has a specific meaning; what about ins, for insert, which is what it actually does?), but certainly a very useful one.
I'd be curious to see what (time (push 8 x 4)) looks like: is that also constant-time? What sort of data structure is newLISP using, if not a linked list? Linked list + end pointer, perhaps--but wouldn't that break on a push? Ah well. I (unfortunately) don't really have time to construct an in-depth analysis, but I'm still curious.
I'm not sure why the Anarki won't let you do ('(2 3 4) -1); it would indeed be useful. Mayhap there are efficiency concerns? Negative indexing works in cut.
Ruby has the opposite problem from conventional Lisp: it appends quickly, but prepends slowly:
t = Time.now; a=[2]; 99_000.times{ a.push( 8 ) }; p Time.now - t
0.14
==>nil
t = Time.now; a=[2]; 99_000.times{ a.unshift( 8 ) }; p Time.now - t
70.902
==>nil
t = Time.now; a=[0,1,2,3,4]; 99_000.times{ a.insert(2, 8) }; p Time.now - t
73.346
That's 73 seconds! NewLisp can insert in a list 1800 times as fast as Ruby can!
Right. In Ruby, pushing, unshifting, and splicing are all different operations, as they should be. Meaningful names are important.
It's not quite as fast to insert in the middle, but much faster than it should be. This would, to me, imply that newLISP isn't even using linked lists at all, but is instead using some form of array, which is decidedly unLispy. If that is the case (if, I say), then of course this will only work with "one reference only", as you can't construct lists which share cdrs; "one reference only" being an awful idea, I would consider the speed a fair tradeoff.
Have you never seen a good implementation of singly-linked lists? Because accessing the last element is a relatively common operation, a decent implementation will keep a pointer to the last element. Because it's singly-linked, the last element is the only one that pointer is any good for, so (lst -2) is still O(n), but (lst -1) becomes O(1).
The weird thing is, I actually found this good idea in an APCS review book.
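A minimal Python sketch of that idea, with made-up class names: a singly-linked list that carries a tail pointer gets O(1) append and O(1) access to the last element, while anything like (lst -2) still needs a walk from the head.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class SinglyLinkedList:
    """Singly-linked list with a tail pointer: O(1) append, O(1) last."""
    def __init__(self):
        self.head = None
        self.tail = None

    def push_front(self, value):
        # O(1), like Arc's push: new cell whose cdr is the old head.
        self.head = Node(value, self.head)
        if self.tail is None:
            self.tail = self.head

    def append(self, value):
        # O(1) thanks to the tail pointer; no iteration to find the end.
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def last(self):
        # O(1): just read through the tail pointer.
        return self.tail.value

lst = SinglyLinkedList()
for x in (2, 3, 4):
    lst.append(x)
print(lst.last())   # 4
```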
The very latest mzschemes--those even more recent than 372--have switched to immutable lists instead of mutable ones; i.e. (set-car! my-lst 'x) and the like no longer work. However, you can get mutable lists with the mcons, mcar, and mcdr operators, and this is what sacado was referring to.
Is this browser dependent? In Firefox 2.0.0.12 on Linux, I can't reproduce what you describe. After the first click, clicking on the same spot has no effect at all.
Please note that this forum doesn't use BBCode; to add code, you need to surround the code by blank lines and indent each line by two spaces or more. The indented lines will be formatted like code, and your * s won't disappear. It would be nice to see the code, but it's unreadable this way.
This has been discussed before, and the biggest problem is not optional arguments, but variadic functions. (+), (+ 42), (+ 42 10), (+ 42 10 7), etc. are all legal, so (+ x) would just be x. Currying is a very powerful feature without a doubt, but it's not clear that it works very well with variadic functions. Certainly it might be powerful for functions of a fixed arity, but explicit partial application might make more sense with the Arc system. See also http://arclanguage.org/item?id=2205.
I think variadic functions tend to be library (general) functions, whereas actual applications are often full of fixed arity functions. For instance, I think you will find that news.arc is full of fixed arity functions.
I believe that looking at the arc libraries when judging the value of implicit currying may be misleading, since its greatest value is in code written for a single application...
This is certainly true, at least to an extent, but it makes currying inconsistent, which is irritating. I'm not convinced that [] and par create substantially longer code than implicit partial application, which (as noted) has issues. And the other problem, as was pointed out, is optional parameters; since they are similar to rest parameters, the objection is similar, but the problem is more insidious.
Since the arity information is lost for any complement ~f (for instance, but any general or composed function will do) of a function f:
(def complement (f) (fn args (no (apply f args))))
automatic currying turns all composed functions into second-class citizens, which adds programmer burden to the use of any function, in keeping track of whether a call will invoke it or return a new function.
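To make the arity problem concrete, here is a Python sketch (auto_curry is a hypothetical helper, not anything Arc provides): currying by declared parameter count has nothing to go on once a function is written variadically, as every complement or compose result is.

```python
import inspect

def auto_curry(f):
    """Curry f by its declared parameter count.
    Raises TypeError when the arity cannot be determined."""
    params = list(inspect.signature(f).parameters.values())
    if any(p.kind == inspect.Parameter.VAR_POSITIONAL for p in params):
        # A (fn args ...)-style function: should (f x) call f now,
        # or wait for more arguments? There is no way to tell.
        raise TypeError("cannot curry a variadic function: arity unknown")
    arity = len(params)
    def curried(*args):
        if len(args) >= arity:
            return f(*args)
        return lambda *more: curried(*(args + more))
    return curried

def add3(a, b, c):
    return a + b + c

def complement(f):
    return lambda *args: not f(*args)

print(auto_curry(add3)(1)(2)(3))   # 6
# auto_curry(complement(add3)) raises TypeError: arity unknown
```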
Ahhhrg, too bad.
I searched for the topic before posting, but I didn't find anything on it.
Well, better live with [+ x _], I suppose. At least, it is compatible with variadic functions.
You can also have a par function to curry a function:
(def par (f . args)
" Partially apply `f' to `args'; i.e., return a function which, when called,
calls `f' with `args' and the arguments to the new function. "
(fn newargs (apply f (join args newargs))))
so (par + 42) will do what you want. Also, a generalized swap (which I called flip):
(def flip (fun)
" Returns a function which is the same as `fun', but takes its arguments in
the opposite order. "
(fn args (apply fun (rev args))))
This works for n-ary functions, but is still probably most useful for functions of two arguments: (par (flip -) 42).
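For comparison, roughly the same two helpers in Python (names copied from the Arc versions above): par corresponds to functools.partial, and flip reverses the argument list.

```python
from functools import partial

def par(f, *args):
    """Partially apply f to args, like Arc's par;
    functools.partial does the same thing."""
    return lambda *newargs: f(*args, *newargs)

def flip(f):
    """Return f with its arguments in the opposite order."""
    return lambda *args: f(*reversed(args))

def sub(a, b):
    return a - b

print(par(sub, 42)(10))        # 32
print(partial(sub, 42)(10))    # 32
print(par(flip(sub), 42)(50))  # 8, i.e. (- 50 42)
```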
Didn't think of "par", cool idea. It could even be turned into a macro, like this:
(par foo x) => [foo x] ; Well maybe too much.
About flip, I'd rather rotate the arguments instead of reversing them. That way, you have access to more argument orders by composing flip. You could also define flip2, flip3... in the library if they're used often.
Don't you mean [foo x] => (par foo x)? []s aren't fundamental, they're sugar.
I'm not sure there is a sensible way to extend flip to functions taking more than two arguments. Reverse is one, rotate is another; you might as well have two functions for that. It seems like six of one, half a dozen of the other to me.
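Both candidates are easy to sketch in Python (flip and rot are illustrative names): flip reverses all the arguments, while rot cycles them one place; composing rot with itself reaches every cyclic order, which reversal alone cannot.

```python
def flip(f):
    """Reverse the argument order."""
    return lambda *args: f(*reversed(args))

def rot(f):
    """Rotate the arguments one place to the left."""
    return lambda *args: f(*args[1:], args[0])

def triple(a, b, c):
    return (a, b, c)

print(flip(triple)(1, 2, 3))      # (3, 2, 1)
print(rot(triple)(1, 2, 3))       # (2, 3, 1)
print(rot(rot(triple))(1, 2, 3))  # (3, 1, 2)
```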