Arc Forum
1 point by FredBrach 4648 days ago | link | parent

And

  > (car '(x))
  x
  > (= (car '(x)) 2)
  2
  > x
  2

Let's say it's a runtime error for the moment.



2 points by FredBrach 4647 days ago | link

I've finished the tutorial.

For fun, I'm going to try a few implementations of such a language. Let's see if I can make the basic concepts a little more atomic, consistent, and powerful.

One of my other concerns is to generalize terms. And the idea of spaces looks pretty good: a name space, a symbol/bind space, a type space, an algebraic space, a call space, and so on. Each term has an entry in each of those.

So I can bind things to 3.14 or to "hello world", and I can call the function bound to 4 with ((+ 2 2)).
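
As a rough illustration (a Python sketch, not the proposed language; the call_space mapping and all names here are mine), binding callables to arbitrary values such as numbers or strings is easy once the "call space" is just a mapping keyed by value:

  # Hypothetical "call space": a mapping from arbitrary values to callables.
  call_space = {}

  call_space[3.14] = lambda: "bound to a float"
  call_space["hello world"] = lambda: "bound to a string"
  call_space[4] = lambda: "the function bound to 4"

  # Analogue of ((+ 2 2)): evaluate the inner expression, then call whatever it is bound to.
  print(call_space[2 + 2]())   # -> the function bound to 4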

I'm not sure it will work at all, i.e. whether it stays consistent and usable.

Also, macros are very useful because of the scope flexibility they offer (the only other thing they offer is heavy tweaking of the source text): we love to use so-called global/context variables, and we certainly need more flexibility for that. I think there is a misconception in the programming world behind the idea that scopes are defined by functions. I think (in fact I'm sure, because I've already done it) that you can define scopes independently of everything else. Then you can call a function which uses variables from the call context, with arguments becoming true arguments rather than context vars. OO tried to solve that, but with the wrong idea.

I will use {} for scopes.

So let's try with the bind rules I've defined earlier:

  (def y '2)

  {
    (def x '1)
    (def MyFn '((pr x) (pr y)))
    (MyFn)
    {
      (x '10) ; let's say it's possible in the call space, since 1 is not a list
      (def y '3.14)
      (MyFn)
    }
    (MyFn)
  }

  > 1
  > 2
  > 10
  > 3.14
  > 10
  > 2

I don't see any problem with that, except that there is no argument in my function. I haven't found a form I like for arguments yet, but it will come.

Note that (MyFn '({(def x '"blabla") (pr x) (pr y)})) is also possible. I don't see any problem with that. The rule is: scopes are independent of everything.

We can probably even do this:

  (MyFn '({(def x "blabla") (pr x)} x))

One idea: an inner zap, for the sake of a beautiful

  (x 10)

-----

2 points by akkartik 4647 days ago | link

Interesting idea to have functions not create a new scope by default. But it would make it too easy to create dynamically-scoped variables.

  (def foo() {
    (def a 34)
    (bar)})

  (def bar()
    a)
Here a acts dynamically scoped. I think it's very valuable to have a way to say "this variable is lexically scoped", meaning that functions called within the scope can't access it.
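
For comparison, a rough Python analogue (all names mine) of the same situation: with one shared scope, bar sees foo's a, exactly like the example above; with ordinary lexical scope, bar simply can't.

  # Simulating "functions don't create a new scope" with an explicit shared scope.
  scope = {}

  def foo():
      scope["a"] = 34    # like (def a 34) inside foo
      return bar()

  def bar():
      return scope["a"]  # bar sees foo's binding: dynamic-scope-like behaviour

  print(foo())           # -> 34

  # With ordinary lexical scope, bar has no access to foo's local:
  def foo_lex():
      a = 34
      return bar_lex()

  def bar_lex():
      return a           # NameError: 'a' is only visible inside foo_lex

  try:
      print(foo_lex())
  except NameError as err:
      print("lexical scope:", err)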

-----

1 point by FredBrach 4647 days ago | link

OK, let's do that, if I've understood correctly. That's cool :)

>> I think it's very valuable to have a way to say "this variable is lexically scoped", meaning that functions called within the scope can't access it.

Do you mean can't access it in the sense of a C++ library's private members, or more like can't use it at all?

In the case of can't use it: why would a function evaluate a variable which does not exist from its point of view? I.e., that's a compilation error.

I've found something for the inner zap:

  {
    (def MyFn '( (= MyFn '(3.14)) (1) ) )
    (pr (MyFn))
    (pr (MyFn))
  }

  > 1
  > 3.14

:)

EDIT: you have edited your text, so I need to re-evaluate it. But unfortunately, I have to sleep now :D Let's continue tomorrow :) Thank you, this is pretty interesting :)

-----

1 point by akkartik 4647 days ago | link

Yeah, sorry I got rid of the let from my code example. I thought I should follow your example more closely. Was that the change you noticed?

I think it would be really hard to implement let to have lexical scope. To do so you'd have to delete some bindings from the current scope before each function call. In that case functions modify the scopes going into them, sometimes deleting bindings and sometimes not. Seems confusing.
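
A crude Python sketch of that (all names hypothetical): to make a let-style binding invisible to called functions, the caller would have to strip it from the shared scope around every call and put it back afterwards.

  scope = {}

  def call_without(names, fn):
      """Call fn with the given names temporarily removed from the shared scope."""
      saved = {n: scope.pop(n) for n in names if n in scope}
      try:
          return fn()
      finally:
          scope.update(saved)

  def bar():
      return scope.get("a", "unbound")

  scope["a"] = 34                    # a "let" binding in the current scope
  print(bar())                       # -> 34: bar can see it
  print(call_without(["a"], bar))    # -> unbound: lexical-style hiding
  print(scope["a"])                  # -> 34: the binding is restored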

---

I don't follow your distinction between access and use...

"Why would a function evaluate a variable which does not exist from its point of view?"

Primarily because it assumes some implicit 'global' bindings. Like function names:

  (def foo() {
    (def list '(1 2 3))
    (bar)})

  (def bar()
    (list 1 2)) ; error
Much of the power of lisp derives from having just a few powerful concepts; function names are symbols just like any other and you can shadow their bindings like anything else.
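
The same foot-gun exists in Python if you picture its module globals as one shared scope (a sketch; Python is lexically scoped, so the shadowing has to happen at module level rather than inside foo):

  def bar(xs):
      return list(xs)       # relies on the binding of the name "list"

  print(bar((1, 2)))        # -> [1, 2]

  list = (1, 2, 3)          # shadow the builtin, like (def list '(1 2 3)) above

  try:
      print(bar((1, 2)))
  except TypeError as err:  # 'tuple' object is not callable
      print("error:", err)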

Even aside from functions, codebases tend to have variables that are implicitly accessed without being passed in as arguments. Implicit variables can be either global or in packages. If two subsystems have common names and you make a stray call between them, it can get hard to debug.

---

I don't understand your code example; can you edit it to add indentation? Just put two spaces at the start of every line of code and it'll preserve indentation. (http://arclanguage.org/formatdoc)

-----

1 point by FredBrach 4645 days ago | link

>> Primarily because it assumes some implicit 'global' bindings. Like function names:

Nothing in that should drive a language's design.

>> Implicit variables can be either global or in packages.

This is what I'm trying to improve above.

>> it can get hard to debug.

Debugging is about bad programming.

-----

1 point by akkartik 4643 days ago | link

> Debugging is about bad programming.

If so I'm a terrible programmer :)

There seem to be two schools of thought around debugging today. The first is to minimize debugging by use of types, like in Java or Haskell. The second is to embrace debugging as an eternal fact of life, and to ease things by making code super lightweight and easy to change.

Both approaches are valid; combining them doesn't seem to work well. The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic.

PG's style seems to be akin to sketching (http://paulgraham.com/hp.html; search for 'For a long time'). That implicitly assumes you're always making mistakes and constantly redoing code. My version of that is to add unit tests. That way I ensure I'm always making new mistakes.
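
For what it's worth, a throwaway test in Python (a hypothetical sketch; the function and file name are mine) is about this lightweight:

  # test_sketch.py -- a disposable test next to a disposable function
  import unittest

  def square_sum(x, y):
      return (x + y) * (x + y)

  class TestSquareSum(unittest.TestCase):
      def test_basic(self):
          self.assertEqual(square_sum(2, 3), 25)
          self.assertEqual(square_sum(-1, 1), 0)

  if __name__ == "__main__":
      unittest.main()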

-----

1 point by rocketnia 4643 days ago | link

I'd say both approaches you're talking about are all about failing fast, and that unit tests are a way to shove errors up to compile time manually, by running some arbitrary code after each compile. Languages that let the programmer run certain kinds of code at compile time anyway (like a type system or a macroexpander) have other options for where to shove these errors, though they may not always make sense there.

Conversely, they may not make sense in unit tests: If we want to know that a program behaves a certain way for all inputs, that might be easy to check with a static analysis but difficult (or effectively impossible) to check using example code.

---

"The combination of having no safety net at compile time but forcing the programmer to get his program right the very first try -- this seems unrealistic."

I'd say Arc is a demonstration of this option. XD I thought the whole point of Arc being for sufficiently smart programmers was that no guard rails would be erected to save programmers from their own buggy programs.

---

Anyway, if a language designer is trying to make a language that's easy to debug, static types and unit tests are hardly the only options. Here's a more exhaustive selection:

- Reject obviously buggy programs as being semantically meaningless. This could be any kind of error discovered by semantic analysis, including parse errors and type errors.

- Give the programmer tools to view the complexity of the program in intermediate stages as it simplifies. Step debuggers do this for imperative languages. Other languages may have bigger challenges thanks to staging (like macroexpansion) or notions of "effect" that feature simultaneous, time-sensitive, or tentative behavior, for instance.

- Create rich visualizations of the program's potential behavior. We discussed Bret Victor's demonstrations of this recently (though I didn't participate, lol): http://arclanguage.org/item?id=15966

- Collapse the edit-debug cycle so that diagnostic information is continuously visible as the programmer works. Again, this is something Bret Victor champions with a very visual approach. IDEs also provide this kind of information in the form of highlighting compile time errors.

- Give the running program extra functionality that exposes details of the codebase that would normally be hidden. If a program runs with a REPL or step debugger attached, this can be easy. (Also, a programmer can easily pursue this option in lots of languages by manually inserting these interaction points, whether they're as simple as printing to the console or as complicated as a live level editor.)

- Provide tools that write satisfactory code on the programmer's behalf. IDEs do this interactively, especially in languages where sophisticated static analysis can be performed. Compilers do this to whole programs.

- Provide abstraction mechanisms for the programmer to use, so that a single bug doesn't have to be repeated throughout the codebase.

- Provide the programmer with an obvious way to write their own sophisticated debugging tools. A static analysis library might help here, for instance. An extensible static analysis framework, such as a type system, can also help.

- Provide the programmer with an obvious way to write and run unit tests.

- Simply encourage the programmer to hang in there.

-----

1 point by akkartik 4643 days ago | link

"I'd say Arc is a demonstration of this option."

You don't hear people say of Arc, "it worked the first time I wrote it." That's more Haskell's claim to fame.

The dichotomy I'm drawing isn't (in this case) about how much you empower the user but how you view debugging as an activity. I claim that Haskellers would like you to reason through the correctness of a program before it ever runs. They consider debugging to be waste. I consider it to be an essential part of the workflow.

The points on the state space that you enumerate are totally valid; I was just thinking at a coarser granularity. All your options with the word 'debug' (at least) belong in my second category.

Perhaps what's confusing is the word 'debugging' with all its negative connotations. I should say instead, relying on watching the program run while you build vs relying just on abstract pre-runtime properties. It's the old philosophical dichotomy of finding truth by reason vs the senses.

-----

1 point by FredBrach 4646 days ago | link

By fixing some mistakes I've made, I can go forward.

I think I'm able to eliminate the def and have a working evaluation/call system.

Let's say we can have only symbols and lists of symbols. Symbols can be bound to another symbol or to a list.

For numbers and integers, the arithmetic functions work on the symbols as if they were numbers or integers. I don't see any problem with that; cf. lambda calculus.

Also, let's keep the previous scope system.

Evaluation. An evaluation of a symbol gives its bound symbol or list. If one evaluates a list, it's a call.

And now, the calls. We can call everything. A call on a symbol binds the symbol to the following argument, or to a list of the following arguments. If the symbol hasn't been called before in the current scope, this defines a new symbol in the scope. And if one calls a list, it's a function call.
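
A very rough Python sketch of those rules (the representation is entirely mine: symbols are strings, quoting is a ("quote", ...) tuple, and a scope is a (bindings, parent) pair chained dynamically):

  def lookup(scope, name):
      while scope is not None:
          if name in scope[0]:
              return scope[0][name]
          scope = scope[1]                  # walk up the dynamic chain
      raise NameError(name)

  def evaluate(expr, scope):
      if isinstance(expr, tuple):           # quoted data evaluates to itself
          return expr[1]
      if isinstance(expr, str):             # a symbol gives its bound value
          return lookup(scope, expr)
      return call(expr, scope)              # evaluating a list is a call

  def call(form, scope):
      head, *args = form
      if isinstance(head, tuple) and isinstance(head[1], str):
          # ('x 'v): calling a symbol binds it in the current scope
          scope[0][head[1]] = evaluate(args[0], scope)
          return scope[0][head[1]]
      body = evaluate(head, scope)          # otherwise a function call:
      child = ({}, scope)                   # run the body in a new child scope
      result = None
      for sub in body:
          result = evaluate(sub, child)
      return result

  top = ({}, None)
  evaluate([("quote", "y"), ("quote", 2)], top)        # ('y '2)
  evaluate([("quote", "MyFn"), ("quote", ["y"])], top)  # ('MyFn '(y))
  print(evaluate(["MyFn"], top))                        # -> 2, found via the dynamic chain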

So the previous code looks like this now:

  ('y '2)

  {
    ('x '1)
    ('MyFn '((pr x) (pr y)))
    (MyFn)
    {
      ('x '10) ; no problem in this anymore
      ('y '3.14)
      (MyFn)
    }
    (MyFn)
  }

  > 1
  > 2
  > 10
  > 3.14
  > 10
  > 2

What we can see now is that everything ends up with a '.

That's why I would like to explore the opposite strategy: a ' in front of what I want to evaluate.

It gives:

  (y 2)

  {
    (x 1)
    ('pr x)
    (MyFn (('pr 'x) ('pr 'y)))
    ('MyFn)
    {
      (x 10)
      (y 3.14)
      ('MyFn)
    }
    ('MyFn)
  }

  > x
  > 1
  > 2
  > 10
  > 3.14
  > 10
  > 2

I would like to use a star (like in C) instead of a ' for evaluation, but I didn't succeed.

-----

2 points by rocketnia 4645 days ago | link

"And if one call a list, it's a function call."

That's a lot like PicoLisp. In PicoLisp, functions are just lists:

  : (de foo (X Y)            # Define the function 'foo'
     (* (+ X Y) (+ X Y)) )
  -> foo
  : (foo 2 3)                # Call the function 'foo'
  -> 25
  : foo                      # Get the VAL of the symbol 'foo'
  -> ((X Y) (* (+ X Y) (+ X Y)))
(Example taken from http://software-lab.de/doc/ref.html.)

Unfortunately, this approach means not having lexical scope. If any function has a parameter named * or + and it calls foo, foo's behavior might be surprising. Worse, you can't use lambdas to encapsulate state! (Or other context...)
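
(To illustrate what that encapsulation buys you, here's a small Python sketch with made-up names: a closure keeps a private count that nothing outside can touch, which only works because the language is lexically scoped.)

  def make_counter():
      count = 0                 # private to this closure
      def bump():
          nonlocal count
          count += 1
          return count
      return bump

  c1, c2 = make_counter(), make_counter()
  print(c1(), c1(), c2())       # -> 1 2 1: each closure carries its own hidden state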

With dynamic scope, you might as well define every function at the top level; local function syntax is only useful for code organization.

In some cases, dynamic scope can be useful for certain variables (typically configuration variables), but it's actually very easy to simulate dynamic scope in a non-concurrent program; just change a global variable and reset it afterwards.
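
That set-and-reset trick looks roughly like this in Python (the configuration variable and names are invented):

  indent_width = 4                  # a global "configuration" variable

  def render():
      return " " * indent_width + "text"

  def with_indent(width, fn):
      global indent_width
      saved = indent_width
      indent_width = width          # rebind for the extent of this call
      try:
          return fn()
      finally:
          indent_width = saved      # always reset, even on error

  print(render())                   # uses the global value: 4 spaces
  print(with_indent(2, render))     # temporarily 2 spaces
  print(render())                   # back to 4 spaces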

---

"I would like to put a star (like in C) instead of a ' for evaluation but I didn't succeeded."

That's because asterisks ( * ) create italics on this forum. http://arclanguage.org/formatdoc

-----

2 points by FredBrach 4645 days ago | link

>> That's a lot like PicoLisp. In PicoLisp, functions are just lists:

Functions are lists of instructions/operations you can call and re-call. In every language in the world. Meaning there is no reason to treat them as a special case or with a special type.

The true concept is the call.

>> Unfortunately, this approach means not having lexical scope. If any function has a parameter named * or + and it calls foo, foo's behavior might be surprising.

That's about bad programming. Just know what you're doing.

>> Worse, you can't use lambdas to encapsulate state! (Or other context...)

I'm going to look at those lambdas. Thanks.

>> With dynamic scope, you might as well define every function at the top level; local function syntax is only useful for code organization.

That's not a question of code organization. That is a question of sense. If you define your functions at the top level because you can do it, you'll need a debugger and a default scope system based on lexical scope. Believe me :)

So with dynamic scope, you just have a function system which plays its role: the possibility to repeat code. And a macro system which plays its role: the possibility to (over-)tweak the source text in a way which has nothing to do with programming in itself. Functions and macros should be orthogonal concepts. That's the meaning of a concept: something which is orthogonal to every other concept in the system.

>> In some cases, dynamic scope can be useful for certain variables (typically configuration variables), but it's actually very easy to simulate dynamic scope in a non-concurrent program; just change a global variable and reset it afterwards.

The fact that Object Oriented programming exists tells you you're wrong here.

>> That's because asterisks ( * ) create italics on this forum

Cool, thanx for the link :)

-----

1 point by rocketnia 4644 days ago | link

"Functions are lists of instructions/operations you can call and re-call. In every languages of the world."

I'm going to nitpick your use of "list" there. There's no reason effects need to be short actions in sequence. We might want to apply effects continuously, in parallel, in combination, in reverse, under supervised control, or distributed on machines under multiple people's (sometimes untrustworthy) administration. I doubt there's one definition of "effect" that will satisfy everyone, but that doesn't mean we should settle for the same old imperative effects in every single language. :)

I'm also going to nitpick the premise that a function is something "you can call and re-call." It can be useful to write a function that you only call once... if you intend to define it more than once. And sometimes it's useful to enforce that a function can only be invoked once, perhaps for security; languages can potentially help us express properties like that.

---

"That's about bad programming. Just know what you're doing."

If I write a library with (de foo (X Y) (* (+ X Y) (+ X Y))) in it, would you say I should document the fact that it isn't compatible with programs that use + and * as local variables? Fair enough.

However, suppose we were given a language that had foo in it, and it behaved strangely whenever we used + or * as a local variable. Outrageous! That's a "hard coded pseudo-concept"! :-p We should ditch that language and build a new one with more consistent and orthogonal principles.

Alas, that language is exactly what we're using as soon as we define (de foo (X Y) (* (+ X Y) (+ X Y))).

---

"If you define your functions at the top level because you can do it, you'll need a debugguer and a default scope system based on lexical scope."

No need for a debugger if you're a good programmer, right? :-p And I'm not sure what you mean by needing lexical scope, since we're assuming that we've given up lexical scope already.

But I forgot, one downside to defining functions at the top level is that you need to give them names (global names). Maybe this is related to what you mean.

---

"Functions and macros should be orthogonal concepts."

Who's saying otherwise?

---

"The fact that Object Oriented programming exists tell you re wrong here."

The fact that OO exists is irrelevant. My point is that Arc's global scope is enough to achieve dynamic scope in a non-concurrent program. Who cares what other languages do?

(Incidentally, most popular OO languages also have global scope--static fields--which allows exactly the same kind of dynamic scope technique.)

-----

2 points by akkartik 4645 days ago | link

I'm a little lost :) Are you in favor of lexical scope or against it?

The argument that lexical scopes are entangled with our notion of functions, so let's drop them since they're not an orthogonal concept -- that seems internally consistent and worth thinking about.

-----

1 point by FredBrach 4644 days ago | link

Oh sorry if I've not been clear: I'm in favor of dynamic scope :)

>> The argument that lexical scopes are entangled with our notion of functions, so let's drop them since they're not an orthogonal concept

Exactly. In the fun exploration of an ultimate language for good programming, name conflicts should not drive the language design at all. Programmers should manage their namespaces with care. Also, having a tool for this, like namespaces, is not a problem. It even seems pretty good, and it fixes everything.

-----

2 points by akkartik 4644 days ago | link

"Programmers should manage their namespaces with care."

Totally. I think I'm closer to your point of view than anybody here (http://arclanguage.org/item?id=15587, footnote 1; http://arclanguage.org/item?id=12777). I've gradually moved to the dark side in several ways: I no longer care about hygiene[1] or making macros easy to compile[2]. But I still hold on to lexical scope for reasons I can't fully articulate. If unit tests are as great as I say they are, do we need lexical scope? Without them changes may break seemingly distant, unrelated code. Something to think about.

[1] http://arclanguage.org/item?id=15907, especially the http://arclanguage.org/item?id=15913 subtree.

[2] http://arclanguage.org/item?id=13585; http://www.arclanguage.org/item?id=14947; http://www.arclanguage.org/item?id=13319.

-----

1 point by FredBrach 4644 days ago | link

>> If unit tests are as great as I say they are, do we need lexical scope?

Very, very interesting. Unit testing... this is such an engineering concept. Why not build it into the language with meta tags (I don't know whether that's possible at all)?

>> Without them changes may break seemingly distant, unrelated code. Something to think about.

Let's try the fun of an extreme code expansion language without any compromise :)

-----

1 point by FredBrach 4644 days ago | link

>> and it fixes everything.

Oh no I'm completely wrong here.

Some of you were right: there is a real problem with names/scope that I hadn't expected. But I have the answer to everything :)

Libraries.

What are libraries? They are application foundations. In other words, applications are built on top of libraries.

So let's make it as it should.

A library is a function which takes as arguments other libraries or an end application.

Let loadlast be a function which binds to a symbol the eval of the last instruction of a file. And let's use the Arc evaluation syntax.

App.ext:

  ////////////// app.ext ////////////////////

  (loadlast '"lib1.ext" 'MyLib1)
  (loadlast '"lib2.ext" 'MyLib2)
  (loadlast '"lib3.ext" 'MyLib3)

  (= 'MyApp
     '(*put your application here*))

  (MyLib1 '(MyLib2 '(Mylib3 MyApp))) ; this launches the whole thing

  MyApp ; this makes MyApp a lib: MyApp works with MyLib1, MyLib2 and MyLib3, and thus must be embedded at least on top of a stack which contains them
Lib1.ext:

  ///////////// lib1.ext ////////////////////

  {
    *blabla*
    {
      arg1 ; it evaluates (MyLib2 '(Mylib3 MyApp)), which can now use lib1 via the dynamic scope system
    }
    *blabla*
  }
I'm not saying this is strictly innovative.

-----

2 points by FredBrach 4644 days ago | link

I'm definitely going to write an implementation of my exploration :)

I've decided to call the resulting language Prompt: http://promptlang.org

-----