2 points by akkartik 2302 days ago | link | parent

"My current thought for solving the sandbox problem is to have a thoroughly first-class and recursive system for creating subsets of resources and providing them to children."

I hadn't been thinking about sandboxing, but my interest in testable interfaces lately has me asking how I can design an interface for memory allocation in a testable way. And that requires precisely such a recursive capability: one allocator doing its own book-keeping on memory allocated by a parent allocator. This would be the first step toward being able to run an OS's tests while running on that very OS.

The only way to have an address tagged as used in one allocator but free in a child allocator is to keep the metadata somewhere else. I'm imagining something like this, where the 'Allocator' blocks are book-keeping metadata on used vs. free areas:

    +---------------------------------------+
    |                                       |
    |              Allocator                |
    |                                       |
    +---------------------------------------+
    |                                       |
    |              Block 1                  |
    |                                       |
    +---------------------------------------+
    |              Block 2                  |
    +---------------------------------------+
    |              Block 3                  |
    |                                       |
    +---------------------------------------+
    | Block 4  +--------------------------+ |
    |          |        Allocator         | |
    |          +--------------------------+ |
    |          |        Block 1           | |
    |          +--------------------------+ |
    |          |        Block 2           | |
    |          +--------------------------+ |
    |          |        Unused            | |
    |          +~~~~~~~~~~~~~~~~~~~~~~~~~~+ |
    +---------------------------------------+
    |              Block 5                  |
    +---------------------------------------+
    |              Unused                   |
    |                                       |
    +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+
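
Here's a rough sketch in C of what I mean (the names, the fixed-size metadata table and the first-fit policy are all just placeholders, not a worked-out design): a root allocator hands one of its blocks to a child, and the child keeps its own out-of-band table of what's used and free inside that block.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_BLOCKS 64

    /* One book-keeping record: an area of the region, and whether it's in use.
       Crucially, these records live outside the region they describe. */
    typedef struct {
        size_t offset;
        size_t size;
        int    used;
    } block;

    /* An allocator = a region of raw memory + out-of-band metadata about it.
       A child allocator manages a region that is a single used block of its
       parent, so the same addresses can be used-in-parent but free-in-child. */
    typedef struct allocator {
        struct allocator *parent;
        char   *region;
        size_t  region_size;
        block   blocks[MAX_BLOCKS];   /* metadata table, fixed-size for brevity */
        int     nblocks;
    } allocator;

    void init_allocator(allocator *a, allocator *parent, char *region, size_t size) {
        a->parent = parent;
        a->region = region;
        a->region_size = size;
        a->blocks[0] = (block){0, size, 0};   /* start with one big free block */
        a->nblocks = 1;
    }

    /* First-fit allocation: find a free block, split it, mark the front used. */
    void *alloc(allocator *a, size_t size) {
        for (int i = 0; i < a->nblocks; ++i) {
            block *b = &a->blocks[i];
            if (!b->used && b->size >= size) {
                if (b->size > size && a->nblocks < MAX_BLOCKS) {
                    /* insert a record for the free remainder after this one */
                    memmove(&a->blocks[i+2], &a->blocks[i+1],
                            (a->nblocks - i - 1) * sizeof(block));
                    a->blocks[i+1] = (block){ b->offset + size, b->size - size, 0 };
                    a->nblocks++;
                    b->size = size;
                }
                b->used = 1;
                return a->region + b->offset;
            }
        }
        return NULL;   /* out of memory; a real version would ask a->parent here */
    }

    int main(void) {
        static char arena[4096];
        allocator root, child;
        init_allocator(&root, NULL, arena, sizeof arena);

        /* "Block 4" in the diagram: one block of the root, handed to a child. */
        char *sub = alloc(&root, 1024);
        init_allocator(&child, &root, sub, 1024);

        /* sub is 'used' from the root's point of view, but the child still
           sees all of it as free and does its own book-keeping as it carves
           it up. */
        alloc(&child, 128);
        char *p = alloc(&child, 128);
        printf("second child allocation at offset %ld of its region\n",
               (long)(p - sub));
        return 0;
    }
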
Anyway, if I have a bootstrappable stack to start with, the timeline for this allocator just got accelerated!


3 points by shader 2302 days ago | link

Yeah; I guess I didn't state it that clearly, but the sandboxing reference was from the reddit thread you linked:

"Sandboxing is challenging partly because our boundaries for what is trusted shift imperceptibly over time. Browsers came out in a world where the desktop was considered the crown jewels, while browsers were considered untrusted. Over time local disk has disappeared (e.g. Chromebooks) while our lives are ruled by the browser. This has made the original design of browser security (shielding the local file system; cookies; single origin) if not obsolete then at least dangerously incomplete. So any sandboxing solution has to think about how to fight this gradual creep. No matter how you organize your sandboxes using Qubes or whatnot, something inside a sandbox is liable to become more important than other stuff inside that sandbox. And even outside that sandbox."

The idea of running tests against a live application, particularly an OS, is intriguing. It shouldn't be too hard though, since the code and its state don't have to be in the same place. So you could run the live code against separate testing data in a different sandbox or something...

Your specific example of memory management makes me think that vague "recursive subsets of resources" probably won't cut it for a lot of things. After all, if you make memory access too indirect, it would be very very slow; but if you try to use the actual CPU management interfaces, it may not allow the kind of control that we want...

How much do you really need the bootstrappable stack? There shouldn't be anything stopping you from using something that's already been bootstrapped, instead of raw hex code. E.g. existing C tools, etc. There's also the picolisp-based PilOS project; basically picolisp (which is mostly bootstrapped from an assembler they wrote anyway) running directly on the metal as a kernel. (https://picolisp.com/wiki/?PilOS)

-----

1 point by akkartik 2302 days ago | link

Oh sorry, I did understand your sandboxing reference. Just meant I wasn't concerned with it when thinking about the memory allocator. Should have said, "I haven't been thinking about sandboxing.."

I'm less concerned about overhead because I'm primarily concerned about tests, which by definition don't run in production.

But I think in the case of memory allocation you can do several levels of sandboxing before you start hitting performance bottlenecks. You'd have no extra indirection when using memory, because it's just a pointer and you just have an address. You'd only hit extra levels of indirection when you try to allocate and the allocator is out of memory. Then you may need to request more from your parent, which needs more from its parent, and so on, all the way down to the OS.
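
Roughly, as a C sketch (the bump-pointer scheme, the names and REGION_SIZE are placeholders, not what a real allocator would necessarily do):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define REGION_SIZE 4096   /* placeholder chunk size to request from a parent */

    /* Using memory is plain pointer arithmetic; the chain of parents is only
       walked when the current region runs out. */
    typedef struct allocator {
        struct allocator *parent;   /* NULL at the root, which stands in for the OS */
        char *next;                 /* next free byte in the current region */
        char *limit;                /* end of the current region */
    } allocator;

    void *alloc(allocator *a, size_t size);

    /* Rare path: get a fresh region from the parent (or the OS at the root). */
    static void *grow(allocator *a, size_t size) {
        if (size > REGION_SIZE) return NULL;
        char *region = a->parent ? alloc(a->parent, REGION_SIZE)
                                 : malloc(REGION_SIZE);   /* stand-in for an OS call */
        if (!region) return NULL;
        a->next  = region;
        a->limit = region + REGION_SIZE;
        return alloc(a, size);
    }

    void *alloc(allocator *a, size_t size) {
        if (a->next && a->next + size <= a->limit) {   /* common case: just bump */
            void *p = a->next;
            a->next += size;
            return p;
        }
        return grow(a, size);   /* only now do we pay for the nesting */
    }

    int main(void) {
        allocator os    = {NULL, NULL, NULL};
        allocator child = {&os,  NULL, NULL};
        char *p = alloc(&child, 100);   /* first call recurses: child -> os -> malloc */
        char *q = alloc(&child, 100);   /* second call is just a pointer bump */
        printf("%p %p\n", (void*)p, (void*)q);
        return 0;
    }

The first allocation against an empty child pays one extra hop per ancestor; every allocation after that is a plain pointer bump until the region runs out again.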

---

"How much do you really need the bootstrappable stack?"

My basic idea is that code is easier to quickly understand if you can run it and see it in operation in a variety of tiny slices. I think it would be easy to quickly modify codebases if you had access to a curriculum of logs for the subsystems you care about, could click on any line to jump to the code emitting it, and so on. The curriculum would be created automatically while running tests (because tests are selected manually to capture interesting scenarios).

This idea requires being able to drill down arbitrarily deep into the traces, because we can't anticipate what a reader is trying to change. I may be trying to investigate why a certain sequence of operations causes my laptop to lock up. That may require drilling down inside a save operation to see the data structures inside the OS that perform buffering of disk writes. And so on.

Compilers are part of this infrastructure I want to drill down into. If I want to understand how the 'if' keyword works, it does me no good if the trail ends in some opaque binary that wasn't built on my computer. It's also only mildly nicer to find out that understanding some code in this compiler I'm reading requires understanding the whole compiler. Metacircularity is a cute trick, but it's hell on comprehension. Any sort of coiling is bad. What noobs need is to see things laid out linearly.

I've looked at PicoLisp before. I even modeled Wart's memory allocator on PicoLisp. So it's great in many ways. But like all software today it's not really intended for end-users to look at the source code. Its code is intended for insiders who have spent months and years building up an intimate understanding of its architecture. That assumption affects how it's written and managed.

-----

2 points by shader 2301 days ago | link

"its just a pointer and you just have an address"

That could be true; I'm not really familiar with CPU facilities for memory isolation, but this is probably one of the most solved of the isolation challenges, since it does require CPU support.

"You'd only hit extra levels of indirection when you try to allocate..."

Good point. I wonder if there's any way to improve that, or if the worst case is rare enough that it's acceptable?

---

"This idea requires being able to drill down arbitrarily deep into the traces..."

That's a cool idea, and fits very well with the GNU objective of fully open and transparent source code. I'm not sure that bootstrapping is what is required to achieve that goal, however. What you really need is transparent source all the way down, which can be satisfied with self-hosted code, even if it's not bootstrapped. In fact, I'd argue that multiple layers of bootstrapping would make the drill-down process very challenging, because you'd have to cross very sharp API boundaries — not just between functions or libraries, but between languages and runtime environments. Making a single debug tool that handles all of that would be impressive, let alone expecting your users to understand it.

"Metacircularity is a cute trick, but it's hell on comprehension"

Metacircularity applies to interpreters built on interpreters, not compilers. I can agree that it makes things opaque though, because it reuses the existing platform rather than fully implementing it. A self-hosted compiler, even though it's written in its own language, could be fully comprehensible (depending on the quality of the code...). It's just a program that takes source text and produces a binary.

Interestingly, while writing this, I ran across the following paper, which may be of some interest: Avoiding confusion in metacircularity: The meta-helix (Chiba et al.) (https://pdfs.semanticscholar.org/4319/37e467eb9a516628d47888...) I'll read more of it tomorrow, and possibly make a separate post for it.

I do think it would be really cool and possibly also useful if every compiler was self-hosted, and could be drilled into as part of the debugging process. It would mean that you could read the code, since it would be written in the language you are currently using, and the same debugger should be able to handle it.

---

"But like all software today it's not really intended for end-users to look at the source code"

I haven't actually looked at picolisp's source much myself, but what would it take to satisfy your desires for "end-user" readable code?

-----

2 points by akkartik 2301 days ago | link

Very interesting. I'm going to read that paper as well. And think about what you said about the difference between interpreting and compiling.

> what would it take to satisfy your desires for "end-user" readable code?

See code running. My big rant is that while it's possible to read static code on a screen and imagine it running, it's a very painful process to build up that muscle. And you have to build it from scratch for every new program and problem domain. And even after you build this muscle you screw up every once in a while and fail to imagine some scenario, thereby causing regressions. So to help people understand its inner workings, code should be easy to see running. In increasing order of difficulty:

a) It should be utterly trivial to build from source. Dependencies kill you here. If you try to install library version x and the system has library version y -- boom! If you want to use a library, really the only way to absolutely guarantee it won't cause problems is to bundle it with your program, suitably isolated and namespaced so it doesn't collide with a system installation of the same library.

b) It should be obvious how to run it. Provide example commands.

c) There should be lots and lots of example runs for subsets of the codebase. Automated tests are great for this. My approach to Literate Programming also helps, by making it easy to run subsets of a program without advanced features: http://akkartik.name/post/wart-layers. The build at https://travis-ci.org/akkartik/mu runs not just a single binary but dozens of binaries, gradually adding features from nothing in a pseudo-autobiographical fashion, recapitulating some approximation of its git history. In this way I guarantee that my readers (both of them) can run any part of the program by building just the parts it needs. That then lets them experiment with it more rigorously, without unnecessary variables and moving parts.

More reading:

* http://akkartik.name/post/readable-bad

* http://akkartik.name/about

-----