; edit - made scanner and make-scanner different
; a scanner is an annotated cons of two thunks, one producing
; the car and one producing the cdr
(def make-scanner args
  (dsb (&k car cdr) args
    (annotate 'scanner (cons car cdr))))

; accessing car/cdr forces the corresponding thunk
(defm car ((t c scanner))
  ((car (rep c))))

(defm cdr ((t c scanner))
  ((cdr (rep c))))

;(defm isa ((t c scanner) typ)
;  (if (is typ 'cons) t
;      (is typ 'scanner) t))

; a scanner claims to be both a 'scanner and a 'cons
(redef isa (c typ)
  (if (is (type c) 'scanner)
      (in typ 'scanner 'cons)
      (old c typ)))

; generic entry point; specialized per source type below
(def scanner (s . args)
  (err "Unknown scanner type" (type s)))
; string scanner: lazily steps through the characters of s,
; from start up to (but not including) end
(defm scanner ((t s string) (o start 0) (o end (len s)))
  (let (a d d-valid) nil
    (make-scanner
      'car (fn () (or a (= a (s start))))
      'cdr (fn ()
             (if d-valid
                 d
                 (= d-valid t
                    d (if (is (+ 1 start) end)
                          nil
                          (scanner s (+ 1 start) end))))))))
; input-port scanner: reads one character per cell, on demand
(defm scanner ((t s input))
  (let a (readc s)
    (if (no a)
        nil
        (let (d d-valid) nil
          (make-scanner
            'car (fn () a)
            'cdr (fn ()
                   (if d-valid
                       d
                       (= d-valid t
                          d (scanner s)))))))))
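Usage might look like this (a quick sketch, assuming the Anarki defm/redef machinery these definitions rely on is loaded):

  (let sc (scanner "abc")
    (list (car sc) (car:cdr sc) (isa sc 'cons)))
  ; => (#\a #\b t)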
^^ strings as lists, indeed ^^
Hmm. Another idea: make redef bind 'old to (fn args nil) if the function hasn't been defined yet. Possibly just replace the (let old ,name ...) with (let old (or (errsafe ,name) (fn args nil)) ...)
edit: Hmm, probably should make redef define the function using (fn args (let ,parms args ...)) instead, so that wrapped functions can pass arguments through arbitrarily. Or add a 'pass function which simply forwards the whole call to the old definition.
Yeah, that's exactly the sort of thing I was thinking.
As for old functions, I'd really like to have a better way than "errsafe" for determining whether or not a variable is bound. But yeah, I agree this should work for not-yet-defined functions.
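Something like this minimal sketch, assuming the Anarki-style redef that exposes the previous definition as 'old:

  (mac redef (name parms . body)
    `(let old (or (errsafe ,name) (fn args nil))
       (= ,name (fn ,parms ,@body))))

With the errsafe fallback, a redef still works even when the name had never been defined; 'old is just a function that returns nil.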
re: call/cc - I think part of the "Lambda the Ultimate" series of papers eventually boils down to the realization that a machine-language jump-to-subroutine is equivalent to a call/cc, where the target of the call/cc just has to access the return address on the stack as a function address.
I don't get it. There's the whole stack copying for call/cc, so call/cc is much more expensive.
(I read the "Lambda the ultimate GOTO" paper you referenced earlier; it's about goto vs structured programming, not call/cc. As an aside, it's very interesting to reflect on just how controversial structured programming was.)
Implementing call/cc efficiently has been well-researched in the Scheme community. For a very well-written account of a non-stack-copying implementation see
Then continue at ReadScheme under "Compiler Technology/Implementation Techniques and Optimization" to see further developments (look especially for Clinger's papers).
In a function call, the state (the current computation being done) is saved anyway, and therefore "copied" if that is your preferred term. So ideally, call/cc should have the same overhead as an ordinary function call; the only difference is that in call/cc the continuation state is the value given to the function, while in a function call it's just one of the values given to the function.
Note however that much of the theoretical analysis of call/cc makes a basic assumption of a "spaghetti stack", which would mean that partially unwound stacks are saved implicitly as long as any continuation refers to them, and all stacks themselves are subject to garbage collection. Most machines don't actually have a spaghetti stack and can't easily be made to use one ^^. That said, a spaghetti stack could be implemented as a simple list, with push == cons and pop == cdr.
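For instance (a minimal sketch; fpush/fpop are made-up names, since Arc's own push and pop mutate places):

  ; frames are never mutated, so a saved continuation can keep a
  ; reference to its own tail, which the GC then keeps alive
  (def fpush (frame stack) (cons frame stack)) ; push == cons
  (def fpop (stack) (cdr stack))               ; pop  == cdr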
Alternatively store the local variables on a garbage-collected heap, and include a reference to the local variables with the continuation/return address (you'll probably need to save the pointer-to-local-variables anyway, since the target function is likely to use that pointer for its own locals). Again, no additional overhead over plain function calls, except perhaps to restructure the return address and the pointer-to-local-variables.
Don't know about MIT Scheme, but if I were to implement call/cc on stock hardware, compiling down to machine language, that's what I'd do ^^
I'm still totally not understanding your claim that call/cc should have the same overhead as an ordinary function call.
I read Clinger's "Implementation Strategies for Continuations" paper, and they found call/cc about 10 times slower than function calls on the tak/ctak tests. I tried those tests on PLT Scheme and the overhead I saw was even worse: 0.7 seconds with function calls vs 51.8 seconds with continuations on (tak 24 16 8).
Clinger says about the stack strategy: "When a continuation is captured, however, a copy of the entire stack is made and stored in the heap. ... Variations on the stack strategy are used by most implementations of Scheme and Smalltalk-80."
Components of function call: (1) put arguments somewhere (2) put return address somewhere (3) jump to function.
Components of call/cc: (1) put return address as argument somewhere (2) put return address somewhere (3) jump to function.
That said, continuations generally assume that "put X somewhere" means somewhere != stack. Yes, even return addresses are not intended to be on the stack when using continuations. The main problem is when compiling down to C, which always assumes that somewhere == stack. If you're compiling down to machine language where you have access to everything, then you can just ignore the stack, but not in C, which inherently expects the stack.
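In Arc terms (where call/cc is spelled ccc), the continuation really does arrive as just another argument, matching the components listed above:

  (ccc (fn (k) (k 42)))        ; => 42, same shape as a plain call and return
  (+ 1 (ccc (fn (k) (k 41))))  ; => 42, k returns 41 to the pending +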
Perhaps the best responses will be code enhancements. And those have a slower cadence.
What would be cool is if we could see a thread-based view of anarki. Kinda like news.yc but with modules or functions as stories. So that next time somebody makes a change to this function it shows up as a comment under it. Patches that change multiple functions show up in them all.
well, each file is its own thread. I guess what I'm asking for is one thread per function. And you don't have to parse code or anything, just delimit functions to be between unindented lines.
Yeah, browsing static code won't show you every aspect of the dynamic runtime context for that code, but that doesn't seem like a fair comparison given we're just talking about a skin for gitweb. Perhaps I don't understand what you're getting at?
(let expander
     (fn (f var name body)
       `(let ,var (,f ,name)
          (after (do ,@body) (close ,var))))
Unindented here. Is the above given its own division? It wouldn't make much sense to group it by itself, since it's used privately by the functions that follow.
(mac w/infile (var name . body)
  " Opens the given file `name' for input, assigning the stream to `var'.
    The stream is automatically closed on exit from the `body'.
    See also [[w/outfile]] [[w/instring]] [[w/stdin]] [[w/socket]] "
  (expander 'infile var name body))
Again, unindented at this point. However, it shares some code with other functions after it.
(mac w/outfile (var name . body)
  " Opens the given file `name' for output, assigning the stream to `var'.
    The stream is automatically closed on exit from the `body'.
    See also [[w/infile]] [[w/appendfile]] [[w/outstring]] [[w/stdout]] "
  (expander 'outfile var name body))
... etc.
Of course, we could just define a definition as being divided by unindented non-empty lines ^^.
While we're nitpicking, a set of unindented lines should get grouped together as well. No point having a bunch of single-line separators.
And if using indentation seems too hackish, just balance parens, curlies, etc., and separate at lines containing level-1 brackets. Almost as easy to do, and just as language-agnostic.
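A quick sketch of the unindented-line version (def-chunks is a made-up name; it assumes the source is already split into a list of line strings, and ignores tabs for simplicity):

  (def def-chunks (linelist)
    (with (chunks nil last-top nil)
      (each line linelist
        (let top (and (> (len line) 0) (isnt (line 0) #\space))
          ; a new chunk starts at an unindented non-empty line,
          ; but a run of unindented lines stays in one chunk
          (if (and top (no last-top))
              (push (list line) chunks)
              (when chunks (push line (car chunks))))
          (= last-top top)))
      (rev (map rev chunks))))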
Crick, my code must be completely unclear. Oh well. The paper describes snakes; my implementation makes use of "downsnakes" and "rightsnakes" as references to the edit graph, with a "downsnake" being a single downward move followed by a snake and a "rightsnake" similarly defined.
The paper describes a graph traversal method which I eventually realized was suspiciously like Pascal's Triangle (tilt your head 45 degrees to the left and squint ^^); hence the reference to "start the triangle". Basically, instead of + as in Pascal's Triangle I have (minN (rightsnake left) (downsnake right)), while the leftmost and rightmost nodes are generated via downsnake and rightsnake, respectively, instead of via the identity function as in Pascal's.
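To make the analogy concrete, here's plain Pascal's Triangle written in that shape; swapping idfn at the edges for downsnake/rightsnake, and + for (minN (rightsnake left) (downsnake right)), gives the edit-graph traversal described above:

  (def next-row (row)
    (join (list (idfn (car row)))          ; leftmost node
          (map + (cut row 0 -1) (cdr row)) ; inner nodes from adjacent parents
          (list (idfn (last row)))))       ; rightmost node

  (next-row '(1 3 3 1))  ; => (1 4 6 4 1)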
Looks like the old nprocedure protocol used in really old lisps to me. ^^
The problem is that macros are not functions, in spite of having the same call syntax. How would addpref react if it were passed to map? By quoting map's variable?
Yes, make wrappers. You'd probably want to include wrappers for pr and prn too.
If you're using nex-3's arc-wiki.git (also called the "Anarki") there is a 'redef form which allows you to wrap easily.
Also, on arc-wiki there are two implementations of the same approach to adding a bit of OO in Arc, in lib/settable-fn.arc and lib/settable-fn2.arc. See also my series of "Create your own collection":
edit: P.S.: some Arc built-in features (function argument destructuring comes to mind) won't use the redefined Arc 'car, 'cdr, etc., but will use the underlying mzscheme operations, I think. Caveat emptor!
The difference lies mostly in how the type given to 'annotate is used. nex-3 views this type as the "real" type, with any "claimed" (user-interface) type ducked by overloading 'isa. I view this type as the "claimed" (user-interface) type, with the "real" type being the underlying implementation (usually a function).
Very minor nitpick: "nex3". The hyphen is only in my URL because nex3.com was taken :-p.
Also, I don't like overriding isa; it's just necessary sometimes because the standard libs aren't really geared towards duck typing. I much prefer overriding basic functions and having more complex functions assume those base functions will work. This is really what duck typing (as I understand it) is all about.
^^; lol. I think I've been rather consistently calling you nex-3 with a hyphen O.O;
The problem, I suppose, is the rather inconsistent formalization or non-formalization of type classes/interfaces/Hindley-Milner type systems. In Ruby it's informal: you just say this or that object has this or that function, which will just work in the expected manner. In Haskell and Java it's formal: you say this type provides this type class or presents this interface, and must therefore have this or that function, which must work in the expected manner.
annotated types are kinda like a formal type class, except existing type classes are not trivially extendable except informally (via redef).
Yeah, I think that's right. And I think Arc should go the path of informality, for a couple of reasons. First, it's dynamically typed, and duck typing (or informal typing) has historically worked well with dynamically-typed languages (Ruby, Python, and maybe Smalltalk*). CLOS also tends to that side of the spectrum, although I'm not sure how explicitly it embraces duck typing.
Second, and I think more importantly, duck typing gives the programmer more power, in exchange for more opportunity to screw stuff up. This is very much in line with Arc's philosophy.
Using formal interfaces, both the people writing the polymorphic code and the people writing the polymorphic objects have to explicitly code polymorphically. Using informal interfaces, only the people writing polymorphic objects have to be explicit. The other people can be aware of the polymorphism, which allows powerful stuff like Ruby's Enumerable module, but as long as the objects behave correctly ("quack like a duck"), you can pass them to code that doesn't expect them at all and they'll still work.
* I know less about Smalltalk than I want to, so I don't know how much it makes use of duck typing.
Here's another interesting bit: car and cdr could be modified into settable-fns and their defsets could be completely removed from arc.arc ^^. Also, it should be possible to modify compose to work on the '= attachments ^^.
> Hmm. Somehow I don't quite like the "scaffold" anaphoric variable, since it strikes me that it is accessible (indirectly) via (ename)
It is true that you could access the scaffold template object by using sname (which stands for "scaffold name", as opposed to ename, which stands for "entity name").
This might make sense... I'll give it some thought...
> By eliminating the scaffold anaphoric we can then create a purely macro-based system without having to use eval, which personally makes me nervous.
This doesn't seem to be true... the reason the eval is needed is that calling the scaffold function ("v" in this case) just returns an sexp, which still needs to be evaluated. I don't think the code sample you gave without the eval could be made to work. (That eval is basically the "magic" that lets you write a single scaffold for a "def" and have it become multiple defs, one for each entity: productview, customerview, orderview, etc.)
The only way I can see to get rid of this final eval is to not have an inst-entity1 function and elevate all the logic to be part of the inst-entity macro. Then the entire set of scaffold sexps could be unquote-spliced into the body of the macro. This is probably a good idea, but I haven't gotten to it yet. (I actually tried doing something like this once in Scheme, but Scheme's "define" has limitations when not used at the top level; I believe they fixed this in R6RS because of complaints. Arc defs can be used without constraints at any level, I think, so that specific problem wouldn't exist.)
It still doesn't look like it would work though, since a scaffold could have like 30 sexps and only the last one would be evaluated by your sample I think...
But if the "each" were changed to a "map" and a "do" were inside the macro, I think it could be made to work that way. That's along the lines of what I was thinking; a sketch of the shape follows below.
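A minimal sketch of that map-plus-do idea (defgetters is a made-up stand-in for the thread's inst-entity): the macro maps over the entities and splices every generated def into a single do, so nothing is left over for a runtime eval.

  (mac defgetters names
    `(do ,@(map (fn (n)
                  `(def ,(sym (string n "view")) (x)
                     (x ',n)))
                names)))

  (defgetters product customer order)
  ; expands into defs of productview, customerview, and orderview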