Arc Forum | eds's comments

> 1. The best way is to do (thread (asv)), which will launch the server in a separate thread. Then, to modify it, just (load "blog.arc") and refresh the pages.

http://arclanguage.org/item?id=2739

> 3. As far as I know, it doesn't---but there's been remarkably little spam here. I'm not 100% sure of this, though; I could well be wrong.

http://arclanguage.org/item?id=4412. People were actually disappointed that they couldn't vote it down ;-)

-----


I really like the zap idea... although your second edit has a good point. But on the other hand, it wouldn't necessarily be a bad thing to allow macros in ssyntax, would it? Just a little difficult to implement, perhaps. Or we could just get first-class macros...

Another thing I've been thinking about would be allowing macros in call* type tables, e.g. allowing (3 + 4 * 5) to macro-expand instead of forcing infix math to be implemented as a function at runtime. (This would seamlessly integrate infix math into the regular syntax of Arc, with no performance penalty at all... something I personally would like to see.) But this would probably require type inference in the language, so might not be implementable in the near future.

-----

2 points by eds 6442 days ago | link | parent | on: Poll: Destructive operations naming

Really? What if you want to use destructive-named functions with the current meaning of '.', '!' and ':'? E.g.

  (join.a.b)
Assuming s/join/join!/, this becomes:

  (join!.a.b)
And thus the destructive-named '!' isn't at the end of the string anymore, is it? Or do you have something else in mind?

-----

1 point by almkglor 6442 days ago | link

  (require "ssyntaxes.arc")
  (def foo! (x)
    (= (car x) 42))
  (= x '(3))
  foo!.x
Well, it has to do with the ssyntaxes.arc precedence rules and how they work: basically, split according to the current ssyntax, then go to next ssyntax. Since #\. is listed before #\!, symbols are first split by #\. into (foo! x), so it works properly.

It won't work with a type whose name ends in ! if you use the ? ssyntax:

  (def my-type! (x)
    (annotate 'my-type! x))
  (my-type!? my-type!.1) ; will transform to (my-type '?)
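The precedence rule (and its failure mode) can be sketched in a few lines of Python. This is a hypothetical toy model of the "split by each ssyntax character in listed order" behavior described above, not the actual ssyntaxes.arc code:

```python
# Toy model of the ssyntaxes.arc precedence rule: try each ssyntax
# character in its listed order, and split on the first one that
# appears in the interior of the symbol.
SSYNTAX_ORDER = ['.', '!', '?']  # #\. is listed before #\! and #\?

def split_ssyntax(sym):
    for ch in SSYNTAX_ORDER:
        parts = sym.split(ch)
        if len(parts) > 1 and all(parts):  # only interior occurrences count
            return (ch, parts)
    return (None, [sym])

print(split_ssyntax("foo!.x"))     # ('.', ['foo!', 'x'])   -- works properly
print(split_ssyntax("my-type!?"))  # ('!', ['my-type', '?']) -- the bad case
```

Since `.` is tried first, `foo!.x` splits into `foo!` and `x` as intended; but `my-type!?` has no interior `.`, so the `!` split fires first and produces `(my-type '?)`, exactly the bug described.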

-----


> Still, I wonder - how does CLOS implement this? How about for variadic functions?

I don't think CLOS lets you check types on &rest, &optional, or &key parameters. So you couldn't use CLOS for the current behavior of '+.

Also note that CLOS only works on methods with "congruent lambda lists", that is, methods with the same number of required, optional, and keyword arguments. So you can't have

  (defmethod foo ((a type-a)) ...)
  (defmethod foo ((b type-b) &optional (c ...)) ...)

-----

1 point by almkglor 6454 days ago | link

ah, I see. So I suppose this greatly simplifies things then.

Hmm. This certainly seems easier to hack into arc. We could have each method start with a generic function whose lambda list we have to match.

As an aside, the base of Arc lambda lists is currently just &rest parameters (i.e. optional parameters are converted into rest parameters with destructuring). Should we match on the plain rest parameters, or should we properly support optionals?

-----

1 point by eds 6470 days ago | link | parent | on: Checkbox

What OS and version of MzScheme are you on? I have had errors with line feeds before when pasting code from the clipboard on Windows, but those errors don't look the same as the ones you have encountered.

  arc> '(1
  2
  3)
  (1 2 3)
The above, typed directly into the terminal, works fine, but the below, pasted from Notepad, results in the mysterious insertion of the symbol 'M into the list.

  arc> '(1
  2
  3)
  (1 2 M)
I posted a bug report at http://bugs.plt-scheme.org/query/?cmd=view&pr=9210, but they don't seem to have done anything about it in the last couple of months :( (although I suppose it is possible this bug doesn't occur in version 400).

-----

1 point by gidyn 6469 days ago | link

Windows XP, MzScheme v352.

My problems also occurred when copying and pasting, but not when using (load) on the file I was copying from. Another common error in this situation was an undefined __M.

-----

1 point by eds 6469 days ago | link

I just tried v4.0.1 to see if they had fixed the bug... and it seems to still be there. I guess you just have to be careful about what you paste into the REPL under Windows. (Or use arc-mode in Emacs ;-)

-----

2 points by eds 6472 days ago | link | parent | on: Poll: Which library should we focus on first?

I just pushed 'floor, 'ceil, and 'fac (from http://arclanguage.org/item?id=7280), along with 'sin, 'cos, and 'tan, to Anarki.

In doing so, I noticed one bug in Darmani's original implementation of 'floor and 'ceil. 'floor would return incorrect results on negative integers (e.g. (floor -1) => -2), and 'ceil on positive integers (e.g. (ceil 1) => 2). This has been corrected on Anarki.
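For reference, the corrected behavior matches what a host math library gives. Here is a quick Python cross-check of the cases mentioned (an illustration, not the Anarki code itself):

```python
import math

# floor and ceil must be the identity on integers, negative or positive;
# the original bug gave (floor -1) => -2 and (ceil 1) => 2.
assert math.floor(-1) == -1
assert math.ceil(1) == 1

# they only move non-integers, and in opposite directions
assert math.floor(-1.5) == -2
assert math.ceil(1.5) == 2
```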

I also used mzscheme's 'sin, 'cos, and 'tan instead of Darmani's, not because of speed, but because Darmani's versions lose precision. Getting maximum precision would have required computing the Taylor series out an extra couple of terms, which I didn't feel like doing at the time.

I didn't commit 'signum, 'mod, 'prime, or 'prime-factorization, because I wasn't sure if they were needed except for computing 'sin, 'cos, and 'gcd... but feel free to commit them if you want.

-----

1 point by shader 6463 days ago | link

I have a few questions:

1) Isn't pushing those math functions straight from scheme sort of cheating? I mean, maybe I'm just wrong, but wouldn't the solution be more long-term if we avoided scheme and implemented the math in arc?

2) Shouldn't fac be tail recursive? Or is it, and I just can't tell? Or are you just expecting that no one will try to compute that large a factorial?

3) If some one did compute that large of a factorial, is there some way for arc to handle arbitrarily sized integers?

-----

1 point by almkglor 6463 days ago | link

1) No, you should implement the math in the underlying machine instructions, which are guaranteed to be as precise and as fast as the manufacturer can make them. The underlying machine instructions are fortunately accessible through standard C libraries, and the standard C library functions are wrapped by mzscheme, which we then import into arc.

2) It should be, and it isn't.

  (defmemo fac (n)
    ((afn (n a)
       (if (> n 1)
           (self (- n 1) (* a n))
           a))
     n 1))
3) Yes, arc-on-mzscheme handles this automagically. arc2c does not (I think it'll overflow)
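As a sanity check on the bignum point, here is the same accumulator-style factorial in Python, whose integers are also arbitrary-precision. This is an illustration of the pattern, not the Arc code above:

```python
def fac(n):
    # iterative form of the tail-recursive (afn ...) accumulator above
    a = 1
    while n > 1:
        a *= n
        n -= 1
    return a

# 1000! does not overflow; it is a 2568-digit integer
print(len(str(fac(1000))))  # 2568
```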

-----

3 points by kens 6463 days ago | link

Implementing numerically stable and accurate transcendental functions is rather difficult. If you're going down that road, please don't just use Taylor series, but look up good algorithms that others have implemented. One source is http://developer.intel.com/technology/itj/q41999/pdf/transen...

That said, I don't see much value in re-implementing math libraries in Arc, given that Arc is almost certainly going to be running on a platform that already has good native math libraries.

-----

1 point by shader 6463 days ago | link

I figured that being close to machine instructions was a good thing, but I thought that we should do that via some other method, not necessarily scheme, which may or may not remain the base of arc in the future.

That being said, if you think that pulling from scheme is a good idea, why don't we just pull all of the other math functions from there as well?

-----

2 points by almkglor 6463 days ago | link

> That being said, if you think that pulling from scheme is a good idea, why don't we just pull all of the other math functions from there as well?

Yes. Yes it is. http://arclanguage.com/item?id=7288

That's what I said ^^

-----

1 point by shader 6462 days ago | link

Ok, I added that tail-optimized version to math.arc.

Do you want to have separate math libs for the scheme functions and the native implementations? You already suggested the possibility.

-----

2 points by almkglor 6462 days ago | link

Err, "native implementations" being?

Actually I think it might be better if we had a spec which says "A Good Arc Implementation (TM) includes the following functions when you (require "lib/math.arc"): ...." Then the programmer doesn't even have to care about "scheme functions" or "java functions" or "c functions" or "machine language functions" or "SKI functions" - the implementation imports it by whatever means it wants.

Maybe also spec that the implementation can reserve the plain '$ for implementation-specific stuff.

-----

2 points by eds 6472 days ago | link | parent | on: Arc Code Jam

There is http://github.com/nex3/arc/tree/master/ac.sbcl.lisp, but it might be a little old.

-----

2 points by eds 6472 days ago | link

I just found http://github.com/pauek/arc-sbcl/tree/master from this old post http://arclanguage.org/item?id=5509, which seems to be where development moved after it stopped in the main arc tree... (although with that said it still doesn't seem to be under active development).

-----


Since mzscheme's lists are immutable as of version 4, you need to use 'mcons rather than 'cons in ac.scm.

-----

1 point by mr_luc 6459 days ago | link

Huh! Is that all it takes?

It's been a while -- does anyone have a version of ac.scm that works w/400? Can I see?

Man, I'm feeling lazy today. Fourth of July, woohoo.

-----

1 point by eds 6483 days ago | link | parent | on: Arc Programming Assignment

I don't follow how n! has O(n log n) digits. Mind explaining?

-----

2 points by lacker 6477 days ago | link

No prob. Sorry if I go on at too much length here ;-)

So a useful formula for counting digits is, a positive integer x has floor(log10(x)) + 1 digits. You can figure this out yourself by thinking, where does the digit-counting function bump up a notch? At 10, 100, 1000, etc. So asymptotically the number of digits in x is O(log x).

So n! has O(log n!) digits. The trickier part is figuring out or knowing that O(n log n) = O(log n!). Using log ab = log a + log b you can expand out the factorial:

  O(log n!) = O(log (n * (n-1) * ... * 2 * 1))
            = O(log n + log (n-1) + ... + log 2 + log 1)
            = O(n log n)
In case the last step isn't clear, you can do this splitting-in-half bounding trick. Since each element in the sum is at most log n, you can bound from above with

  log n + log (n-1) + ... + log 2 + log 1 < n log n
And if you just take the larger half of the list you can bound from below with

  log n + log (n-1) + ... + log 2 + log 1 > log n + log (n-1) + ... + log (n/2)
                                          > (n/2) log (n/2)
which is itself O(n log n). So O(log n!) = O(n log n).

In general the rules of thumb you use to reduce O(log n!) are:

  1. complicated expressions inside factorials are ugly, you should simplify them
  2. O(sum of n things) is usually O(n * the biggest thing)
Make sense?
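Both the digit-counting formula and the two bounds are easy to check numerically. A small Python sketch (my own check, following the argument above):

```python
import math

# rule: a positive integer x has floor(log10 x) + 1 digits
for x in (9, 10, 99, 100, 12345):
    assert len(str(x)) == math.floor(math.log10(x)) + 1

# log10(n!) is a sum of n terms, squeezed between the two bounds
n = 1000
log_fac = sum(math.log10(k) for k in range(2, n + 1))
assert log_fac < n * math.log10(n)            # each term is at most log n
assert log_fac > (n / 2) * math.log10(n / 2)  # the larger half alone gives this
```

So for n = 1000 the sum lands between (n/2) log (n/2) and n log n, which is the Theta(n log n) squeeze in the derivation.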

-----

1 point by kens 6482 days ago | link

You're multiplying O(n) numbers, each of which is O(log n) digits long, so the result is O(n log n) digits long.

-----

1 point by shader 6482 days ago | link

Usually, it seems to be either (n log n) or (n log n) - 1 digits.

And usually in this case I would leave off the O, as that usually refers to the performance of an algorithm in time or space. I suppose you could construe the number of digits to be "space", but multiplying O(n) numbers doesn't make that much sense.

-----

3 points by eds 6484 days ago | link | parent | on: Arc Programming Assignment

Not quite, since 'round returns the nearest even integer on halves. s/round/trunc/g gives the correct solution.

  arc> (round 4.5)
  4
  arc> (round 5.5)
  6
  arc> (trunc 4.5)
  4
  arc> (trunc 5.5)
  5

-----

2 points by bOR_ 6484 days ago | link

Ah... I remember reading about the reasoning behind that (round-to-even).

from: http://en.wikipedia.org/wiki/Rounding

  Although it is customary to round the number 4.5 up to 5, 
  in fact 4.5 is no nearer to 5 than it is to 4 (it is 0.5 
  away from both). When dealing with large sets of 
  scientific or statistical data, where trends are 
  important, traditional rounding on average biases the data
  upwards slightly. Over a large set of data, or when many 
  subsequent rounding operations are performed as in digital
  signal processing, the round-to-even rule tends to reduce 
  the total rounding error, with (on average) an equal 
  portion of numbers rounding up as rounding down. This 
  generally reduces upwards skewing of the result.
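Python 3's built-in round happens to use the same round-half-to-even rule, so the effect described in the quote is easy to observe outside Arc:

```python
import math

# ties at .5 go to the nearest even integer ("banker's rounding")
assert round(4.5) == 4
assert round(5.5) == 6

# truncation just drops the fractional part, as Arc's 'trunc does
assert math.trunc(4.5) == 4
assert math.trunc(5.5) == 5
```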

-----
