2 points by zck 3902 days ago

Huh, this is similar to Church numerals: https://en.wikipedia.org/wiki/Church_encoding

What was the reason you decided to do it this way? It seems more complicated to work with.



2 points by rocketnia 3902 days ago

"What was the reason you decided to do it this way? It seems more complicated to work with."

In my designs, I don't just want to make things that are easy for casual consumers. I want to make things people can consume, understand, talk about, implement, upgrade, deprecate, and so on. These are things users do, even if they're not all formal uses of the interface.

I hardly need number types most of the time. If I add them to my language(s), they might just be dead weight that makes the language design more exhausting to talk about and more work to implement.

Still, sometimes I want bigints, or at least numbers big enough to measure time intervals and pixels. I don't think any one notion of "number" will satisfy everyone, so I like the idea of putting numbers in a separate module of their own, where their complexity will have a limited effect on the rest of the language design.

---

"Huh, this is similar to Church numerals"

I'm influenced by dependently typed languages (e.g. Agda, Twelf, Idris), which tend to use a unary encoding of natural numbers in most of their toy examples:

  data Nat = Zero | Succ Nat
To be fair, I think they only do this when efficiency isn't as important as the simplicity of the implementation. In toy examples, implementation simplicity is pretty important. :)
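
To make that concrete, here's a minimal Haskell sketch of what arithmetic on that unary encoding looks like (the name "add" is my own, nothing standard). Addition is just structural recursion:

  -- Same Nat as above.
  data Nat = Zero | Succ Nat
  
  -- Peel Succ constructors off the first argument one at a time.
  add :: Nat -> Nat -> Nat
  add Zero     n = n
  add (Succ m) n = Succ (add m n)
So add (Succ Zero) (Succ (Succ Zero)) rebuilds the first number on top of the second and comes out as Succ (Succ (Succ Zero)).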

A binary encoding might use a technique like this:

  data OneOrMore = One | Double OneOrMore | DoublePlusOne OneOrMore
  data Int = Negative OneOrMore | Zero | Positive OneOrMore
Or like this:

  data Bool = False | True
  data List a = Nil | Cons a (List a)
  data Int = Negative (List Bool) | Zero | Positive (List Bool)
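To pin down what these encodings mean, here's a rough Haskell sketch that reads the first variant back as an ordinary integer (the helper names magnitude and intValue are mine, just for illustration). Double is "times two" and DoublePlusOne is "times two plus one", so the innermost One ends up as the most significant bit:

  import Prelude hiding (Int)   -- this sketch shadows the built-in Int
  
  data OneOrMore = One | Double OneOrMore | DoublePlusOne OneOrMore
  data Int = Negative OneOrMore | Zero | Positive OneOrMore
  
  -- DoublePlusOne (Double One) reads as 2 * (2 * 1) + 1 = 5.
  magnitude :: OneOrMore -> Integer
  magnitude One               = 1
  magnitude (Double x)        = 2 * magnitude x
  magnitude (DoublePlusOne x) = 2 * magnitude x + 1
  
  intValue :: Int -> Integer
  intValue (Negative x) = negate (magnitude x)
  intValue Zero         = 0
  intValue (Positive x) = magnitude x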
I get the impression these languages go to the trouble of representing these user-defined binary types as efficient bit strings, at least some of the time. I could be making that up, though.

For what I'm talking about, I don't have the excuse of an optimization-friendly type system. :) I'm just talking about dynamically typed cons cells, but I still think it could be a nifty simplification.

-----

2 points by rocketnia 3902 days ago

  data Nat = Zero | Succ Nat
I don't think this itself would be called Church numerals, but it's related. The Church encoding takes an ADT definition like this one and looks at it as a polymorphic type. Originally we have two constructors for Nat whose types are as follows:

  Zero : Nat
  Succ : (Nat -> Nat)
These two constructors are all you need to build whatever natural number you want:

  buildMyNat : Nat -> (Nat -> Nat) -> Nat
  buildMyNat zero succ = succ (succ (succ zero))
We could make this function more general by abstracting it over any type, not just Nat:

  buildMyNat : a -> (a -> a) -> a
This type (a -> (a -> a) -> a) is the type of a Church numeral.
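
Concretely, in Haskell this takes RankNTypes and looks something like the following (my own sketch; the names Church, three, and churchToInteger aren't standard):

  {-# LANGUAGE RankNTypes #-}
  
  type Church = forall a. a -> (a -> a) -> a
  
  -- The numeral three just applies its "successor" argument three times.
  three :: Church
  three zero succ' = succ' (succ' (succ' zero))
  
  -- Instantiate at Integer: start from 0 and use (+ 1) as the successor.
  churchToInteger :: Church -> Integer
  churchToInteger n = n 0 (+ 1)
Here churchToInteger three evaluates to 3.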

While it's more general in this way, I think sometimes it's a bit less powerful. Dependently typed languages often provide induction and recursion support for ADT definitions, but I think they can't generally do that for Church-encoded types. (I could be wrong.)

For something more interesting, we can go through the same process to build a Church encoding for my binary integer example:

  data OneOrMore = One | Double OneOrMore | DoublePlusOne OneOrMore
  data Int = Negative OneOrMore | Zero | Positive OneOrMore
  
  buildMyInt :
    OneOrMore ->                  -- One
    (OneOrMore -> OneOrMore) ->   -- Double
    (OneOrMore -> OneOrMore) ->   -- DoublePlusOne
    (OneOrMore -> Int) ->         -- Negative
    Int ->                        -- Zero
    (OneOrMore -> Int) ->         -- Positive
      Int
We can abstract this over any types, just like before:

  buildMyInt :
    a ->          -- One
    (a -> a) ->   -- Double
    (a -> a) ->   -- DoublePlusOne
    (a -> b) ->   -- Negative
    b ->          -- Zero
    (a -> b) ->   -- Positive
      b
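As a sanity check on that last type (again just a sketch; ChurchInt, five, and churchIntToInteger are names I made up), here's the integer 5, i.e. Positive (DoublePlusOne (Double One)), as a Church-encoded value, along with one way to read it back:

  {-# LANGUAGE RankNTypes #-}
  
  type ChurchInt =
    forall a b.
      a ->          -- One
      (a -> a) ->   -- Double
      (a -> a) ->   -- DoublePlusOne
      (a -> b) ->   -- Negative
      b ->          -- Zero
      (a -> b) ->   -- Positive
        b
  
  -- 5 is Positive (DoublePlusOne (Double One)) in the earlier encoding.
  five :: ChurchInt
  five one double doublePlusOne negative zero positive =
    positive (doublePlusOne (double one))
  
  -- Instantiate both a and b at Integer to recover an ordinary number.
  churchIntToInteger :: ChurchInt -> Integer
  churchIntToInteger n = n 1 (2 *) (\x -> 2 * x + 1) negate 0 id
Evaluating churchIntToInteger five gives 5.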

-----