What's funny about his arguments about Lisp's "syntax irregularity": in PyArc, most of the things he listed are just syntax abbreviations, so you can use either the shorthand or the S-expression it expands into.
Even the [] function shorthand expands into S-expressions:
[+ 1 2] -> (square-brackets + 1 2)
He does have somewhat of a point about # and ; but I could easily change PyArc so they also expand into S-expressions:
#\x -> (char x)
;x -> (comment x)
He didn't mention string syntax, but I could do that too:
"x" -> (string x)
Thus, they are merely syntax shorthands, so we don't have to type as much. In fact, in PyArc, syntax (including ssyntax) is expanded at read time, so PyArc's eval doesn't actually know about #\ ' ` , or ,@; it only knows about S-expressions.
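For comparison, here's how the standard quote and ssyntax abbreviations expand:

'x   -> (quote x)
`x   -> (quasiquote x)
,x   -> (unquote x)
,@x  -> (unquote-splicing x)
a.b  -> (a b)
a!b  -> (a 'b)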
Also, I read that guy's articles a while back... I laughed when he said that cons cells were the primary reason people aren't using Lisp.
Yes, sometimes you do end up using (car) or (cdr) in Lisp programs, especially if you're writing utilities, but a lot of list processing happens with higher-order functions like map, filter, etc.
He seems to like Mathematica a lot, and dislikes Lisp because it's not Mathematica. That's all well and good, but it'd be nice if he actually tried to understand Lisp before bashing it. Here are some choice quotes:
I became aware of this essay in early 2000s. Kent himself
mentions it when he sees fit. I actually never red it. I just
knew that it's something lispers quote frequently like a
gospel. In the back of my mind, i tend to think it is something
particular to lisp, and thus is stupid, because many
oft-debated lisp issues (e.g. single vs multi semantic space),
never happens in a lisp-like language Mathematica which i'm
a expert, nor does it happen in any dynamic langs i have
become a expert of since 2000. (e.g. perl, python, php, javascript.).
So... he doesn't read what other people have to say, but because it relates to Lisp he assumes it must be stupid and wrong, and then claims that Lisp sucks because other languages don't have the same issue, even though he doesn't know what the issue is.
Actually, the article he linked to mentions that a generic copy function is possible, but would require making arbitrary choices about how it would behave in different situations. Also in the paper he referenced: "However, the problems cited here are quite general, and occur routinely in other dynamically-typed languages as well as user programs." I'm actually tempted to make a generic copy function in Arc.
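A minimal sketch of what that might look like (deep-copy is just an illustrative name, and copying lists and tables recursively while returning everything else unchanged is exactly one of those arbitrary choices):

(def deep-copy (x)
  (case (type x)
    cons  (map deep-copy x)   ; copy lists recursively
    table (let new (table)    ; copy tables key by key
            (maptable (fn (k v) (= (new k) (deep-copy v))) x)
            new)
          x))                 ; strings, numbers, syms: returned as-is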
---
For example, the difference between (list 1 2 3), '(1 2 3),
(quote (1 2 3)) is a frequently asked question.
It is? They all produce the same output: a list of 3 integers. Simple question, simple answer. In fact, '(1 2 3) expands to (quote (1 2 3)) so the last two are truly identical.
OK, I want to create a nested list in Lisp (always of only
integers) from a text file, such that each line in the text file
would be represented as a sublist in the 'imported' list.
Example of input
3 10 2
4 1
11 18
example of output:
((3 10 2) (4 1) (11 18))
Using only the core arc.arc, here is the solution I came up with:
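Something along these lines (a sketch built from arc.arc's w/infile, drain, readline, tokens, and coerce; "input.txt" is a placeholder filename):

; drain collects each line until readline hits eof, then every line
; is split on whitespace and each token coerced to an int
(w/infile f "input.txt"
  (map [map [coerce _ 'int] (tokens _)]
       (drain (readline f))))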
This could, of course, be made cleaner with a (readlines) macro. So... I guess his problems are more with Common Lisp, Scheme, and Elisp? Arc, using just base functionality, isn't that far behind Ruby. Here's the readlines version, which is basically the same length as the Ruby version:
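A sketch of that (the macro just wraps up the w/infile and drain boilerplate; "input.txt" is again a placeholder):

(mac readlines (file)
  (w/uniq f
    `(w/infile ,f ,file (drain (readline ,f)))))

; with that defined, the whole thing is one expression:
(map [map [coerce _ 'int] (tokens _)]
     (readlines "input.txt"))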
Kinda ugly, I think, but it's short, so I fail to see why conses are the cause of elisp's verbosity. You could also make `readlines` a function rather than a macro:
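Something like:

; readlines as a plain function: returns a list of lines from the file
(def readlines (file)
  (w/infile f file (drain (readline f))))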
I actually like the Ruby version because it reads in a left-to-right fashion: read the lines from the file, then map them, then split them, and so on, whereas the Arc version has the order jumbled up. It might be interesting to write this instead:
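Perhaps something along these lines (purely a sketch: -> here is a hypothetical pipe macro that binds each step's result to it for the next step):

; hypothetical left-to-right pipe: each form sees the previous result as it
(mac -> (x . forms)
  (if (no forms)
      x
      `(-> (let it ,x ,(car forms)) ,@(cdr forms))))

(-> (readlines "input.txt")
    (map [tokens _] it)
    (map [map [coerce _ 'int] _] it))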
Also, he's comparing apples to oranges. Ruby comes with a "readlines" function built-in, but he defined a "readlines" in elisp. Thus, half of the verbosity in elisp is because it doesn't have a built-in "readlines" function. If you assume that "readlines" is built-in, then the elisp one is in the same league as Ruby, for conciseness.
---
Dispite being a expert in trees, the lisp's cons business is
truely a pain to deal with. A large part of my time spent on
elisp programing is on the debugging the cons business.
But in lisp, it is at the low level exposed to programers the
“con” sequenced in a particular way, and accessed with
car,cdr, caard.. etc. A lisper may argue that a programer can
simply use higher-level constructs like “nth” or “list” to deal
with lists. However, one really cannot pretend that cons
doesn't exist in lisp because the language takes the con cell
as its fundamental primitive. In short, a programer cannot
program in lisp in a real world situation without having a good
understanding of the cons. (partly due to the language itself,
partly due to (i think vast majority) of existing code all
directly deals with cons)
all this may seem shallow and does not constitute a problem
for any serious, professional programer. But i think it is a
damnation to lisp and the major (unfixable) cause of
preventing lisp from becoming widely used in the future
(despite that lisp had been somewhat mainstream in the
1980s, which i wasn't a programer then). Because, the
reason for high-level lang being what they are is from all
these little details. In general, high-level lang being what
they are (Python, PHP, Javascript, Mathematica) is because
they are more and more abstract from the
hardware/implementation/compiler concepts/issues.
Well... yeah. In other languages you need to worry about how many bits an int is, or a floating point number is. In some popular languages (C, C++) you need to worry about whether it's a Pascal string or a C string. In most popular languages you need to worry about bytes vs Unicode code points. In Java, you need to worry about int vs Integer: boxed Integers are objects, so == compares object identity rather than value, which is why (new Integer(1) == new Integer(1)) is false even though (1 == 1) is true.
Whoops, there goes your "Lisp loses because it's less abstract" argument... The reason Lisp hasn't caught on isn't that other languages "hide the details" better, because they actually hide the details worse, in many ways.
So... if we don't use cons cells to construct lists, what do we use? Arrays like in Python, C, etc.? Objects like in JS and Lua? Really, there has to be a primitive somewhere, and cons cells are nice because they can construct lists, but also other structures when you need them. It's a flexible base datatype.
I guess he just hates that Lisp exposes the cons function. He wants (list) to be the only way to construct lists, meaning that all conses would be proper. But... are improper lists difficult to deal with in practice? I would expect most lists to be proper, with improper lists only occurring due to a mistake or because the programmer intended them to be improper.
---
Not to say that he might not have a point or two, but it's kinda bogged down by the rest of it. An amusing read, nonetheless. My suggestion to him: keep using Mathematica, since you seem to like it a lot.
He does have one point, though: Arc seems to be missing some higher-order functions, like "get all the nodes at level n. Map a function to level n. Map a function to just leafs". But... if we ever needed such functionality, it shouldn't be hard to write a function to do that, right? So what's the problem?
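For instance, a sketch of mapping over just the leaves (mapleaves is an illustrative name, and treating nil as an empty subtree is one of those choices you'd have to make):

(def mapleaves (f tree)
  (if (no tree)    nil                        ; empty subtree
      (acons tree) (map [mapleaves f _] tree) ; recurse into sublists
                   (f tree)))                 ; a leaf: apply f

(mapleaves [* _ 10] '((3 10 2) (4 1) (11 18)))
; => ((30 100 20) (40 10) (110 180))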
---
By the way, he mentions how in Mathematica, it's all lists, so it basically uses alists rather than hashes, etc. Then he says that the compiler should automatically figure out how to optimize it, or allow the programmer to give type hints. That's not a bad idea, but I was thinking of something more generic: iterators.
By defining a base "sequence of something" type, all you would need to do is change (coerce) so it coerces your type to an iterator, and everything would work, ala Python. Though I think it's possible to do it better than Python. One could do the same for hashes, defining a "mapping" or "collection" type.
In any case, there are solutions to the issues he brings up, so he seems to be nitpicking in an attempt to discredit Lisp in any way possible. Makes me wonder why he isn't just happy using Mathematica, since he likes it so much? Why is he trying so hard to help solve what he perceives to be problems in Lisp, if Mathematica is so good?
I think you've edited your post a few times, and given how long it is I'm not quite sure which parts have changed. Perhaps in the future you could break up your thoughts among multiple comments? One advantage would be that I could reply to each point separately.
Indeed. One might say the real problem with Lisp is that it's not concatenative. ;)
I think the OP's point about cons cells is excellent. I've spent a long time thinking cons cells were some ultimate abstraction only to realize they're more of a pretty hack.
I don't think cons cells are a hack, but I do think it's a hack to use them for things other than sequences. Since we almost always want len(x)=len(cdr(x))+1 for cons cells, rather than len(x)=2, they aren't really useful as their own two-element collection type.
Yeah, I'm tempted to agree with you. In the arc.arc source code, pg even mentions a solution to improper lists: allow any symbol to terminate a list, rather than just nil.
Of course, an easier fix would be to change `cons` so it throws an error if the second argument isn't a cons or nil. Honestly, are improper lists useful often enough to warrant overloading the meaning of cons? We could have a separate data-type for B-trees.
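The error-on-improper-tail check itself is tiny at the Arc level (a sketch; safe-cons is an illustrative name, and in PyArc the same test would live in the Python constructor for conses):

; refuse to build an improper list: the tail must be nil or another cons
(def safe-cons (a b)
  (if (or (no b) (acons b))
      (cons a b)
      (err "cons: second argument must be a proper list")))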
That's one area where I can actually agree with the article, but it has nothing to do with conses in general (when used to create proper lists), only with improper lists. And contrary to what he says, it's not an "unfixable problem"; it would probably take only 2 lines of Python code to fix it.
One thing though... function argument lists:
(fn (a b . c))
Of course I can special-case this in PyArc, so . has special meaning only in the argument list. This is, in fact, what I do right now. But that may make it harder for implementations in say... Common Lisp or Scheme (assuming you're piggybacking, rather than writing a full-on parser/reader/interpreter).
If so... then you may end up with the odd situation that it's possible to create improper lists using the (a . b) syntax, but not possible to create them using `cons`.
---
By the way... how about this: proper lists would have a type of 'list and improper lists would have a type of 'cons. Yes, it would break backwards compatibility, but it might be a good idea in the long run. Or we could have lists have a type of 'cons and improper lists have a type of 'pair.
I don't know if improper lists are really a problem, just hackish. :) My "solution" would be to remove the need for them by changing the rest parameter syntax (both in parameter lists and in destructuring patterns).
---
"how about this: proper lists would have a type of 'list and improper lists would have a type of 'cons."
I don't think I like the idea of something's type changing when it's modified. But then that's mostly because I don't think the 'type function is useful on a high level; it generally seems more flexible to do (if afoo.x ...) rather than (case type.x foo ..), 'cause that way something can have more than one "type." Because of this approach, I end up using the 'type type just to identify the kind of concrete implementation a value has, and the way I think about it, the concrete implementation is whatever invariants are preserved under mutation.
That's just my take on it. Your experience with 'type may vary. ^_^
In Common Lisp and Scheme, the syntax is pretty darn regular: pretty much everything is (or can be) a reader macro. It's just that the string syntax (the #\" reader macro) and the list syntax (the #\( reader macro) are rather arbitrary and inconsistent with each other. It's mostly a matter of readability:
; current
(def foo (a b)
(+ "foo: " a b))
; somewhat more consistency
(def (foo ((a (b nil
((+ ("foo: " (a (b nil nil
; a somewhat consistent version for a lisp without cons cells
(4 def foo (2 a b
(4 + "foo: " a b
; an extremely consistent version, without the need for escape
; sequences or separating whitespace
(^4 |^3def |^3foo (^2 |^1a |^1b
(^4 |^1+ "^5foo: |^1a |^1b
(I put in some whitespace anyway 'cause I'm nice like that.)
The real issue here is that a syntax that stops at providing reader macros isn't arbitrary enough to meet the arbitrary demands of readability, so the individual syntaxes end up having to make the choices the core didn't. That makes for a greater number of arbitrary choices overall, which more or less determines the apparent number of arbitrary choices, and someone who has a similar but less arbitrary alternative in mind (regardless of whether it's actually achievable) will see inconsistencies.
The arbitrary aspects of a language might be distracting and reduce productivity, but on the other hand they could punctuate the programming experience in an enjoyably artistic way, or even keep a programmer's attention focused while they plan their next moves. Maybe we don't always know what's best for us....
But I think we'll better know the full benefit of background music when we have convenient blank slates to compose it on. Programming's enthralling enough for me without background music anyway. ^_^ So I might as well continue to apply Occam's Razor without regret.
Reading his argument against cons, I can't help but compare it to Haskell. In Haskell one can define `data List a = Cons a (List a) | Nil` and use it just like Lisp's cons (ignoring polymorphism for the moment), but one can also define `data TriTree a = Node a (TriTree a) (TriTree a) (TriTree a) | Leaf` and have no difference in speed between getting the first in-order element and the last, which is not possible in Lisps while maintaining the common abstractions, such as being able to map over them with `fmap (+1) myTriTree` or `fmap (+1) myList`. This might say more about the power of typeclasses than about any weakness of cons, but I still think it is a valid point. The ability to abstract over the structure of the data while maintaining efficiency is very nice.
I don't agree with all of his points, especially those concerning syntax, but it does raise some interesting questions. Is a cons based list system actually bad? Are there better things that we could be doing? And how hard would it be to emulate some of the features that he seems to like so much about Mathematica?
"Is a cons based list system actually bad? Are there better things that we could be doing?"
They are interesting questions. I think that to be a lisp, a language must be homoiconic and applicative. A cons-based system is non-essential, and I am coming to agree with the OP that it's not the best way.
Update: Hmm... my mind is playing tricks on me now. What would you use besides conses to represent s-expressions? Maybe that is a good role for cons.
I wonder if there's a way to merge quote and lambda into a single operator. This is done in concatenative languages like Joy and Factor, where a "quotation" acts both as quoted code and as an anonymous function. But I'm struggling to see how you could translate that into applicative terms.
"What would you use besides conses to represent s-expressions?"
What are s-expressions? I thought their main aspect was the way they used lists to represent user-extensible syntax forms (macro forms, function applications, fexpr applications). I'm not a fan, but even if we assume s-expressions, they could always use some other implementation of lists.
--
"I wonder if there's a way to merge quote and lambda into a single operator."
Do you mean like in PicoLisp, where all user-defined functions are lists containing their code, and applying a list causes it to be interpreted as a function?
I don't like it, 'cause you don't get any lexical scoping this way. It probably makes more sense in Factor and Joy thanks to the lack of local variables for run-time-created functions to capture.
"What would you use besides conses to represent s-expressions? Maybe that is a good role for cons."
Arrays, objects, pretty much anything that can represent nested sequences. In fact, in PyArc, conses are a class, because Python has an infatuation with classes.
I don't see conses as an "ultimate abstraction". They're a low-level primitive that can be used to create lists, among other things. To put it bluntly, they're a simple way to implement linked lists. The only difference between an array and a linked list is performance, but they have the same semantics.
Most popular languages choose arrays as their default sequence type, but Lisp chose linked lists. I could represent conses as Python lists (which are like arrays), and then define car so it returns list[0] and cdr so it returns list[1:].
As far as Arc programs would be concerned, everything would look exactly the same. The only difference is that certain operations (like prepending) would be slower with arrays than with a linked list. Specifically, (cons 'a some-list) would be O(n) rather than O(1).
So... to say that conses are bad is like saying that arrays are bad. They're both sequence types, that have differing performance characteristics. Use whichever one happens to suit your language/program the best.