2 points by sacado 6447 days ago | link | parent | on: Poll: How often should Arc be released?

You're not the only one, I was wondering too!

-----

1 point by sacado 6447 days ago | link | parent | on: Poll: How often should Arc be released?

I cast 3 votes. My thought is: between once every two weeks (during fast periods) and once every 4 months (for more stable periods).

-----

2 points by sacado 6449 days ago | link | parent | on: Ray Tracer written in Arc

Looks pretty interesting... congratulations!

-----

1 point by comatose_kid 6449 days ago | link

Thanks sacado. The great community here was a real help when I was stuck.

-----


If you want to call already-compiled functions from a function being compiled, you can. For example, now that fib has been compiled (and its return type is known), the compiler knows it returns int values. But to do so, you have to compile the called functions before the calling ones (if foo calls bar, compile bar, then foo).
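
Roughly, the workflow would be something like this (untested sketch with made-up names; I'm assuming here that 'arco is the entry point that compiles one global function, which may not be its exact interface):

  (def bar (x) (* x x))        ; once compiled, known to return an int for an int arg
  (def foo (x) (+ (bar x) 1))  ; foo calls bar
  (arco 'bar)  ; compile the callee first, so its return type gets recorded
  (arco 'foo)  ; now the compiler can treat (bar x) as an int and inline the +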

-----


Yep, that's probably the next step after psyco.

-----


OK, so I started doing it in mzscheme. It shouldn't be done in pure Arc, for the following reasons:

- compiling Arc code is better done on the compiler side than on the Arc side

- that way, I can get rid of n+ et al., as they never really get called in Arc code

- manipulating what is generated by the 'ac function is easier than manipulating raw Arc code: 'ac has already done macro-expansion and translated ssyntax into Scheme syntax.

In practice, so far, I have added a function inside 'ac that wraps its result. This function walks and modifies the code generated by 'ac. Every time it sees a 'lambda, it takes its args and body and generates an actual lambda that looks like the Arc code I posted there: http://arclanguage.org/item?id=5216 .

So, 2 hash tables are generated for each lambda. Well, as you can guess, the code eats as much memory as you can imagine, particularly if you arco the code in arc.arc (which should be done anyway, if only to speed up loops). Now, I'm quite sure the solution based on hash tables is a dead end.

Maybe I should do it differently: instead of using hash tables, I could make the function's code grow every time it is applied to a new type:

  (if (type-is-the-new-type)
    (call new-generated-code-for-this-type)
    (call the-previously-generated-code))
I don't know if this would work better; however, I will probably not work on this today. Writing macros that generate macros that generate lambdas that analyse other lambdas to generate yet other lambdas is an unfailing source of headaches. And bugs too.
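
For instance, after a first call with an int argument, fib's code might have grown into something like this (fib-int being a hypothetical specialized version using n-, n<, etc.; this is only a sketch of the idea):

  (def fib (n)
    (if (isa n 'int)
        (fib-int n)  ; code generated for the int case
        (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))))  ; generic fallback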

-----


OK, I've got a working implementation now. It works this way: I defined a macro named adef which works exactly like def, except that it also creates a hash table associating tuples of types with optimized function definitions. When the function is defined, this table is empty.

Now, every time the function is called, it:

- determines the types of all args

- checks whether this list of types is a key in the table

- if it is, the associated function is called with the current args

- if not, a new function is generated based on the given types. In the fib example, since the type of n is int, we try to optimize the code. This way, (- n 1) is rewritten as (n- n 1), etc.

Actually, the code given when calling the adef macro is never really called: it is just a skeleton used to generate code adapted to the given types (if possible).

The algorithm is currently very naïve, as it is only able to know the return type of a function if it was manually declared (i.e., for now, the mathematical operators). In the fib example, it does not know the type of (fib (- n 1)), so we can't optimize the '+ operator there. almkglor's suggestions are the way to go, I guess, but I already had a hard time fighting with these macros, so we'll do type inference later ;)

And, well, there is another little problem. The lookup in the hash table takes a very long time. Most of the time is spent looking for the actual function to be called, thus slowing down the whole process... :( Maybe I should rewrite it using redef instead, or maybe I should write all of this directly in mzscheme (since what we are after is generating the best mzscheme code possible).

  (mac adef (name parms . body)
     (w/uniq (htypes tparms types)
        `(let ,htypes (table)
           (def ,name ,parms
              (withs
                 (,tparms (mergel ',parms (map type (list ,@parms)))
                  ,types (map cadr ,tparms))
                 (aif (,htypes ,types)
                    (apply it (list ,@parms))
                    (apply (= (,htypes ,types) (gen-fn ,parms ,tparms ,@body)) (list ,@parms))))))))
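
For illustration, a session with adef might look roughly like this (untested sketch; the function is defined exactly as it would be with def):

  (adef fib (n)
    (if (< n 2)
        n
        (+ (fib (- n 1)) (fib (- n 2)))))
  (fib 30)  ; first call with an int arg: a version using n-, n<, etc. is
            ; generated, stored under the key (int), and reused on later int calls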

-----


Hmm... I think I'll have to read that more deeply later on... There are obviously interesting ideas in it, but hard to implement... Not sure I understood everything yet...

Currently, I'm working on a naïve approach: it assumes the native numerical functions (only numerics for now) are not redefined, or at least that they are redefined in a conservative way (that is, as you mentioned, e.g. + might be redefined to add apples, a user type, but still works the regular way with numbers). It then knows that, if a numerical operation is called only with numbers:

- it can use its inlined version,

- the result will be a number too, so nested numerical operations can be taken into account too.

For example, when we call (fib 30), the compiler knows that the n arg and the literal numbers are numbers, so (- n 1), (- n 2) and (< n 2) only ever see numbers, and they get translated into (n- n 1), (n- n 2) and (n< n 2). However, it cannot know (yet) that (fib (n- n 1)) is a number, so the final sum cannot rely on the inlined +:

  (gen-ad-hoc (listtab '((n int))) '(fn (n)
    (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2))))))

  -> (fn (n) (if (n< n 2) n (+ ((fib (n- n 1) (fib (n- n 2)))))))
The gen-ad-hoc function generates the ad hoc code, based on the fact that n is an int. I still have a few bugs (too many parens somewhere), but that's a good start :)
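
Once the return type of fib itself can be inferred (or declared), the outer + should become optimizable too; the target output would presumably be something like:

  (fn (n) (if (n< n 2) n (n+ (fib (n- n 1)) (fib (n- n 2)))))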

-----

3 points by sacado 6454 days ago | link | parent | on: GTK binding

Glad to see ffi.arc was useful to (and usable by) someone other than me...

Now I think a sample app would be great too :)

-----

2 points by stefano 6454 days ago | link

Within the file gtk.arc there is a sample 'hello world' app. If I find some time I'll work on a slightly more complicated example :). At the moment I'm more focused on importing as many useful functions as possible.

-----

1 point by almkglor 6454 days ago | link

/me votes for lazy importing ^^

Edit: By any chance, is there any particular package/library/config needed for gtk+ bindings?

I got the following error:

Error: "ffi-lib: couldn't open \"libgtk-x11-2.0.so\" (libgtk-x11-2.0.so: cannot open shared object file: No such file or directory)"

Inspecting my /usr/lib reveals that I have libgtk-x11-2.0.so.0, which is a link to libgtk-x11-2.0.so.0.1200.0. I tried linking libgtk-x11-2.0.so to that library, but even though the hello world window exists and opens, when I close it or click it, mzscheme crashes ^^.

-----

2 points by stefano 6454 days ago | link

This is an idea to consider. That way, though, the binding could rapidly become fragmented, and it would be more difficult to one day make it complete.

-----

1 point by almkglor 6454 days ago | link

Hmm, how big is the change in the naming anyway? I mean, it could conceivably just be in the same state as 'cdar is in the language today - it's not there yet but there's a name reserved for it already.

-----

2 points by stefano 6453 days ago | link

The naming strictly follows the gtk+ naming, with '_' replaced by '-'. As an example, gtk_widget_show_all becomes gtk-widget-show-all. gtk+ names are so long that I think they will never collide with other names.
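
So, for instance, a call that would be gtk_widget_show_all (window) in C should look roughly like this from Arc (assuming window holds a widget pointer obtained from the binding):

  (gtk-widget-show-all window)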

-----

2 points by almkglor 6453 days ago | link

Hmm. In such a case I doubt that lazy importing would hurt badly, since the names are already unlikely to collide (not never - someone might make a Great Transformer Kollider library involving midgets, except they misspelled it as widgets (dyslexia's a bummer for programming, you know; it's hard to see the difference between i and ! sometimes)); think of it as "cdar", which isn't in Arc yet but which pg is too lazy to add "just for completeness".

-----

1 point by stefano 6453 days ago | link

I've tried it on linux with gtk 2.6 (quite old), with libgtk-x11-2.0.so as a link to libgtk-x11-2.0.so.0.600.4. I don't know why it crashes; on my computer it works correctly. What's the error message exactly?

-----

1 point by almkglor 6453 days ago | link

It just core dumps without a message, IIRC. Incidentally, I actually had to modify ffi.arc to add "-fPIC" to the gcc command; I have no idea why (Position Independent Code, yes, but what for? for the .so?), but ld complains if I don't and suggests that to me.

As an aside, I'm using an intel core duo, on a 64-bit SMP kernel. I don't know what those terms mean (core duo? like what, apple cores?), I'm a software hacker, not a hardware one. ^^ Oops, scratch that, okay my boss thinks I'm a hardware hacker but I hack FPGA's, not microprocessors and kernels ^^.

Also, I'm on an Ubuntu 7.10 box, with libgtk2.0-0 installed, which provides /usr/lib/libgtk-x11-2.0.so.0.1200.0 and is described as "This package contains the shared libraries.". However Ubuntu also has another package, libgtk2.0-dev (not installed on my computer), which is described as "This package contains the header files and static libraries which is needed for developing the GTK+ applications." Should I be using the -dev version?

-----

1 point by almkglor 6452 days ago | link

Tried again with the latest version. Running plain, once I load gtk.arc:

  /usr/bin/ld: /tmp/ccwElZvz.o: relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
  /tmp/ccwElZvz.o: could not read symbols: Bad value
  collect2: ld returned 1 exit status
Modifying ffi.arc to add -fPIC to gcc call:

  arc>  (load "gtk.arc")
  *** redefining cdef
  *** redefining w/ffi
  *** redefining w/inline
  *** redefining w/del
  gs1789.c: In function ‘inc_pt’:
  gs1789.c:3: warning: cast from pointer to integer of different size
  gs1789.c:3: warning: cast to pointer from integer of different size


  nil
  arc>  (gtk-hello-world)
  Segmentation fault (core dumped)
The segmentation fault occurs whenever I click the button or close the window. Moving it around and resizing doesn't seem to hurt it.

Modifying inc_pt to:

  void* inc_pt(void *pt, unsigned int offset)
  {
    return (void*)&((char*)pt)[offset];
  }
results in:

  arc>  (load "gtk.arc")
  *** redefining cdef
  *** redefining w/ffi
  *** redefining w/inline
  *** redefining w/del

  nil
  arc> (gtk-hello-world)
  Segmentation fault (core dumped)
Segfaults under the same conditions. ^^

Possibly the problem is in the 'connect thing?

-----

2 points by stefano 6452 days ago | link

I think the problem is in the connect: with mzscheme 352 it segfaults (not always, though); with mzscheme 372 it seems to work. Maybe it's a bug in mzscheme 352's C callbacks. Which version are you using?

-----

1 point by almkglor 6452 days ago | link

360. Yes, the problem does seem to be in 'connect, because that appears to be the part where it interacts with the user.

That said, is another potential problem the fact that I'm using a 64-bit machine+kernel?

-----

2 points by stefano 6452 days ago | link

To access some structures (such as GValue) I manually allocate the correct size with cmalloc, and to access the structure I use low-level functions (such as inc_pt) which make assumptions about the size of the structure. I program on a 32-bit machine, where pointers are smaller than on a 64-bit machine, so this could be (and probably is) a problem. I definitely need a better way to access C structures, but that would mean extending Arc's FFI capabilities.

Edit: I've tried mzscheme 360 and it works. The problem, then, is with the 64-bit machine.

-----

1 point by stefano 6450 days ago | link

I've found and solved the signal connection problem; the bug fix is now on Anarki.

-----


Good idea. Not perfect, as you have to annotate your code to get performance (but after all, CL and Gambit-C Scheme work this way), but it could be tried...

-----
