> 1. The best way is to do (thread (asv)), which will launch the server in a separate thread. Then, to modify it, just (load "blog.arc") and refresh the pages.
I really like the zap idea... although your second edit makes a good point. But on the other hand, it wouldn't necessarily be a bad thing to allow macros in ssyntax, would it? Just a little difficult to implement, perhaps. Or we could just get first-class macros...
Another thing I've been thinking about would be allowing macros in call* type tables, e.g. allowing (3 + 4 * 5) to macro-expand instead of forcing infix math to be implemented as a function at runtime. (This would seamlessly integrate infix math into the regular syntax of Arc, with no performance penalty at all... something I personally would like to see.) But this would probably require type inference in the language, so might not be implementable in the near future.
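Just to make the idea concrete, here's a rough sketch of the expansion half (prec*, parse-infix, and infix are made-up names, and this is untested; the hook that would let call* run it at macro-expansion time is exactly the part that doesn't exist yet):

(= prec* (obj + 1 - 1 * 2 / 2))

; precedence climbing: returns (cons parsed-expr remaining-tokens)
(def parse-infix (toks minp)
  ((afn (lhs toks)
     (let op (car toks)
       (if (and op (prec* op) (>= (prec* op) minp))
           (let (rhs . rest) (parse-infix (cdr toks) (+ 1 (prec* op)))
             (self (list op lhs rhs) rest))
           (cons lhs toks))))
   (car toks) (cdr toks)))

(def infix (e)
  (car (parse-infix e 0)))

So (infix '(3 + 4 * 5)) returns (+ 3 (* 4 5)); a macro-expanding call* would splice that form in at compile time instead of re-interpreting the list on every call.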
Well, it has to do with the precedence rules in ssyntaxes.arc and how they work: basically, a symbol is split according to the current ssyntax, and then the splitter moves on to the next ssyntax. Since #\. is listed before #\!, a symbol like foo!.x is first split at the #\. into (foo! x), so it works properly.
It won't work with a type whose name ends in ! if you also use the ? ssyntax:
(def my-type! (x)
  (annotate 'my-type! x))

(my-type!? my-type!.1) ; will transform to (my-type '?)
> Still, I wonder - how does CLOS implement this? How about for variadic functions?
I don't think CLOS lets you check types on &rest, &optional, or &key parameters. So you couldn't use CLOS for the current behavior of '+.
Also note that CLOS only works on methods with "congruent lambda lists", that is, methods with the same number of required, optional, and keyword arguments. So you can't have, say, one method on a generic function that takes two required arguments and another that takes three.
ah, I see. So I suppose this greatly simplifies things then.
Hmm. This certainly seems easier to hack into arc. We could have each method start with a generic function whose lambda list we have to match.
As an aside, the base of Arc lambda lists is currently just &rest parameters (i.e. optional parameters are converted into rest parameters with destructuring). Should we match on the plain rest parameters, or should we properly support optionals?
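To make that concrete, here's a minimal sketch of the rest-parameter approach (methods*, defgeneric, and defmethod are invented names, and this dispatches on the types of all the arguments rather than matching a declared lambda list):

(= methods* (table))

(mac defgeneric (name)
  `(def ,name args
     (aif (methods* (cons ',name (map type args)))
          (apply it args)
          (err "no applicable method"))))

(mac defmethod (name types params . body)
  `(= (methods* (cons ',name ',types))
      (fn ,params ,@body)))

; e.g. (defmethod plus (int int) (a b) (+ a b))
; makes (plus 1 2) dispatch on the key (plus int int)

Since the generic is defined with a plain rest parameter and dispatch keys on (map type args), the congruent-lambda-list question doesn't even come up -- at the cost of only matching exact arities.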
What OS and version of MzScheme are you on? I have had errors with line feeds before when pasting code from the clipboard on Windows, but those errors don't look the same as the ones you have encountered.
arc> '(1
2
3)
(1 2 3)
The above, typed directly into the terminal, works fine, but the below, pasted from Notepad, results in the mysterious insertion of the symbol 'M into the list.
arc> '(1
2
3)
(1 2 M)
I posted a bug report at http://bugs.plt-scheme.org/query/?cmd=view&pr=9210, but they don't seem to have done anything about it in the last couple of months :( (although I suppose it is possible this bug doesn't occur in version 400).
My problems also occurred when copying and pasting, but not when (load)ing the file I was copying from. Another common error in this situation was "undefined __M".
I just tried v4.0.1 to see if they had fixed the bug... and it seems to still be there. I guess you just have to be careful about what you paste into the REPL under Windows. (Or use arc-mode in Emacs ;-)
In doing so, I noticed one bug in Darmani's original implementation of 'floor and 'ceil. 'floor would return incorrect results on negative integers (e.g. (floor -1) => -2), and 'ceil on positive integers (e.g. (ceil 1) => 2). This has been corrected on Anarki.
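(For reference, here's a sketch of the corrected behavior in terms of Arc's 'trunc, which rounds toward zero -- not necessarily the actual Anarki code:)

(def floor (x)
  (let t (trunc x)
    (if (> t x) (- t 1) t))) ; trunc overshoots negative non-integers

(def ceil (x)
  (let t (trunc x)
    (if (< t x) (+ t 1) t))) ; trunc undershoots positive non-integers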
I also used mzscheme's 'sin, 'cos, and 'tan instead of Darmani's, not because of speed, but because Darmani's versions lose precision. Getting maximum precision would have meant carrying the Taylor series out an extra couple of terms, which I didn't feel like doing at the time.
I didn't commit 'signum, 'mod, 'prime, or 'prime-factorization, because I wasn't sure if they were needed except for computing 'sin, 'cos, and 'gcd... but feel free to commit them if you want.
1) Isn't pulling those math functions straight from scheme sort of cheating? I mean, maybe I'm just wrong, but wouldn't the solution be more long-term if we avoided scheme and implemented the math in arc?
2) Shouldn't fac be tail-recursive? Or is it, and I just can't tell? Or are you just expecting that no one will try to compute that large a factorial?
3) If someone did compute that large a factorial, is there some way for arc to handle arbitrarily sized integers?
1) No, you should implement the math in the underlying machine instructions, which are guaranteed to be as precise and as fast as the manufacturer can make them. The underlying machine instructions are fortunately accessible through the standard C libraries, and the standard C library functions are wrapped by mzscheme, which we then import in arc.
2) It should be, and it isn't. Here's a version that is:
(defmemo fac (n)
  ; loop with an accumulator so the recursive call is in tail position
  ((afn (n a)
     (if (> n 1)
         (self (- n 1) (* a n))
         a))
   n 1))
3) Yes, arc-on-mzscheme handles this automagically. arc2c does not (I think it'll overflow)
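For instance, with the tail-recursive 'fac above in arc-on-mzscheme (20! is already too big for a 32-bit fixnum):

arc> (fac 20)
2432902008176640000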
Implementing numerically stable and accurate transcendental functions is rather difficult. If you're going down that road, please don't just use Taylor series, but look up good algorithms that others have implemented. One source is http://developer.intel.com/technology/itj/q41999/pdf/transen...
That said, I don't see much value in re-implementing math libraries in Arc, given that Arc is almost certainly going to be running on a platform that already has good native math libraries.
I figured that being close to machine instructions was a good thing, but I thought that we should do that via some other method, not necessarily scheme, which may or may not remain the base of arc in the future.
That being said, if you think that pulling from scheme is a good idea, why don't we just pull all of the other math functions from there as well?
Actually I think it might be better if we had a spec which says "A Good Arc Implementation (TM) includes the following functions when you (require "lib/math.arc"): ...." Then the programmer doesn't even have to care about "scheme functions" or "java functions" or "c functions" or "machine language functions" or "SKI functions" - the implementation imports it by whatever means it wants.
Maybe also spec that the implementation can reserve the plain '$ for implementation-specific stuff.
No prob. Sorry if I go on at too much length here ;-)
So a useful formula for counting digits: a positive integer x has floor(log10(x)) + 1 digits (e.g. log10(12345) is about 4.09, so 12345 has 5 digits). You can figure this out yourself by thinking about where the digit-counting function bumps up a notch: at 10, 100, 1000, etc. So asymptotically the number of digits in x is O(log x).
So n! has O(log n!) digits. The trickier part is figuring out, or knowing, that O(n log n) = O(log n!). Using log ab = log a + log b you can expand out the factorial:

log n! = log n + log (n-1) + ... + log 2 + log 1
In case the last step isn't clear, you can do this splitting-in-half bounding trick. Since each term in the sum is at most log n, you can bound from above with
log n + log (n-1) + ... + log 2 + log 1 < n log n
And if you just take the larger half of the list you can bound from below with

log n + log (n-1) + ... + log (n/2) > (n/2) log (n/2)

which still grows like n log n, so the two bounds together give O(n log n) digits.
Usually, it seems to be either (n log n) or (n log n) - 1 digits.
And in this case I would usually leave off the O, as that usually refers to the performance of an algorithm in time or space. I suppose you could construe the number of digits to be "space", but multiplying O(n) numbers doesn't make that much sense.
Although it is customary to round the number 4.5 up to 5, in fact 4.5 is no nearer to 5 than it is to 4 (it is 0.5 away from both). When dealing with large sets of scientific or statistical data, where trends are important, traditional rounding on average biases the data upwards slightly. Over a large set of data, or when many subsequent rounding operations are performed as in digital signal processing, the round-to-even rule tends to reduce the total rounding error, with (on average) an equal portion of numbers rounding up as rounding down. This generally reduces upwards skewing of the result.
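For what it's worth, here's what that rule looks like in Arc (a sketch; round-even is a made-up name, and it assumes the 'floor discussed earlier in the thread):

(def round-even (x)
  (let f (floor x)
    (if (< (- x f) 1/2) f        ; closer to f
        (> (- x f) 1/2) (+ f 1)  ; closer to f + 1
        (even f)        f        ; exact half: take the even neighbor
                        (+ f 1))))

Note that mzscheme's native 'round already rounds halves to even (that's the R5RS behavior), so arc-on-mzscheme gets this for free.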