Actually, poking through the markup of the forum in general, I'm still impressed by how simple the site really is. There's something to be said for minimalism like that. Not only does it make the initial development easier, but I imagine it's easier to do mashups and derivative works too.
> There's something to be said for minimalism like that. Not only does it make the initial development easier, but I imagine it's easier to do mashups and derivative works too.
If I may kick off a tangent, this is the part of "Worse is better" that tends to be forgotten/deemphasized in Gabriel's formulation[1]. C and Unix succeeded because they focused on keeping the implementation simple and accessible for many years. (They eventually forgot that lesson, of course, and have been coasting on the initial momentum for a very long time.)
Indeed. And Richard actually makes that point: the "initial virus" has to be good and simple, and once it has won, there will be much more pressure to improve it until it gets to 90% of "good". Unfortunately, in the process it conditions users to accept worse, and the patching process probably doesn't leave a simple end result.
In fact, reading the story about the "PC loser-ing problem", I realized that I was so conditioned by the Unix solution that I had never even _considered_ the MIT approach as a possibility. I do sometimes wonder how many amazingly good ideas we've lost that would actually be much simpler than the stack we have now, but that we never revisit because we're just used to what we have.
I think the concept could be better generalized by rephrasing it as "cheaper is better", though. Technically it's not "worse"; it just reflects a different set of values. Obviously, users value it more, or they wouldn't adopt it.
I see it as closely related to ideas like "compatibility is key", "customer is king", and "money is power", each of which builds on the next.
Customers adopt products that have the best cost-benefit ratio. It doesn't matter if the fancy "good" solution is 10% better (from 90% to 100%) if it also costs 2x as much. Maintenance of the ideal solution may actually be cheaper, but it's really hard to estimate maintenance in advance, especially in design fields like software development.
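To put rough numbers on that (these are just the hypothetical figures from above, not real data), the "worse" option wins handily on benefit per unit of cost:

    # Hypothetical figures from the paragraph above: the "good" solution is
    # 100% of ideal but costs twice as much as the 90% solution.
    good_value, good_cost = 100, 2.0
    cheap_value, cheap_cost = 90, 1.0

    print(good_value / good_cost)    # 50.0 units of benefit per unit of cost
    print(cheap_value / cheap_cost)  # 90.0 -- the "cheap" option is nearly twice as efficient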
Once the "cheap" solution is adopted, future adoption and upgrades are even cheaper compared to switching to the "good" solution, because the user is already invested, and has built a network of integrations that would be very hard to replicate.
The network effect and basic epidemiology probably provide good explanations for the rapid victory of "cheap" solutions: they spread faster because they are easier to "get", and that amplifies the rate of infection of new nodes. Anyone can understand why to adopt something cheap, while it takes a lot of effort to learn and understand the technical advantages of a superior system. Given the work involved in properly evaluating competing options to discover the technically superior one, I think it's safe to assume that the percentage of potential customers who just pick the cheapest option that works, or the one already adopted by the largest number of other users, will always be higher than the percentage who actually compare all the options and pick the better one.
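Just to make the epidemiology analogy concrete, here's a toy adoption model (a sketch with made-up numbers, not a claim about real markets): the only difference between the two products is how easy they are to "get", and that alone decides the race.

    # Toy logistic "infection" model: each period, every adopter exposes a few
    # more people, and the ease of adoption decides how many of them convert.
    # All numbers here are invented purely for illustration.
    def adopters(ease, population=10_000, seed=10, periods=20, contacts=3):
        current = seed
        for _ in range(periods):
            converts = current * contacts * ease * (1 - current / population)
            current = min(population, current + converts)
        return round(current)

    print(adopters(ease=0.30))  # the cheap, easy-to-get option: ~10,000 (saturated)
    print(adopters(ease=0.05))  # the "better" but harder-to-evaluate option: ~160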
So "worse" solutions actually are "better", because they're cheaper to adopt. This is especially visible when you look at history and see how many times the systems focusing on backwards compatibility won out over those that merely tried to be "new." Compatibility reduces the cost of adoption. It's that simple.
Does that mean we're doomed to a "race to the bottom"? I don't think so. In fact, I think that with some care, new solutions can be designed that are sufficiently better/faster/cheaper that they do disrupt the existing ecosystem. It happens all the time. We've seen Facebook beating MySpace, all the various chat programs killing XMPP, Slack starting to eat IRC, etc. Most of those did it by making adoption easier for new users. The secret is that a new system doesn't have to replace the existing system; it just has to be easy to adopt. Lots of people use multiple chat programs at the same time. The Lean Startup book[0] was written by an entrepreneur working on a chat system who initially thought that, to make adoption easy, he had to integrate with the existing systems. What he learned was that people didn't mind adding it to their list of chat systems, and actually liked the ability to meet new networks of friends.
I've been very intrigued recently by a lot of early internet protocols, like IRC, SMTP, NNTP, etc., which are very clean and simple. They're so easy to use that you can literally connect to an SMTP server via telnet and send an email by hand with just a few simple text commands. I've seen people mention gopher a few times recently (the core doesn't change very fast, but people like to implement custom clients), and even HTTP is pretty simple. I think there's a lot to be said for simple, text-based protocols, because they're easy to understand and it's easy to implement something that connects to them. I almost think a good test of how complicated an interface is would be how easy it is to implement in arc, which has very little library support for most of these things. It turns out to be quite easy to build an IRC bot with arc[1].
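For anyone who hasn't tried it, here's roughly what sending a mail by hand looks like. This is just a sketch in Python rather than arc, and it assumes some test SMTP server is listening on localhost port 2525 (the host, port, and addresses are placeholders); the point is that the whole exchange is a handful of human-readable lines you could just as easily type into telnet.

    # Speaking SMTP "by hand" over a raw socket -- the same lines you would
    # type into a telnet session. Assumes a test SMTP server on localhost:2525.
    import socket

    def send(sock, line):
        sock.sendall((line + "\r\n").encode("ascii"))
        print(">>", line)
        print("<<", sock.recv(4096).decode("ascii").strip())

    with socket.create_connection(("localhost", 2525)) as s:
        print("<<", s.recv(4096).decode("ascii").strip())  # 220 greeting
        send(s, "HELO example.test")
        send(s, "MAIL FROM:<alice@example.test>")
        send(s, "RCPT TO:<bob@example.test>")
        send(s, "DATA")                                    # server answers 354
        # Everything after DATA, up to a line with a single ".", is the message.
        s.sendall(b"Subject: hello\r\n\r\nSent by hand, one text command at a time.\r\n")
        send(s, ".")                                       # ends the message; server answers 250
        send(s, "QUIT")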
It is interesting to me that arc may not be very widely adopted, but it is probably one of the few programming languages that has almost as many implementations as it has community members. If we made it just a little bit easier to pick up and start using (particularly in production), the community would probably grow a little faster.
I think there's a lot of opportunity now and in the near future for reintroducing simple foundations, perhaps slightly extended, but mostly made more accessible for new users. Our technology stack has gotten so tall and complicated in the name of shortcuts and simplicity that a lot of efficiency can be gained by cutting out a few layers. Once people start targeting well-defined abstraction boundaries, like WASM + WASI, it should be pretty easy to replace everything under that boundary with a much simpler system. The classic disadvantages of "good" designs, like the performance cost of microkernels versus monolithic kernels, are now so thoroughly outweighed by the overhead of the rest of the environment that it should be pretty straightforward to build an OS with much better security, running much closer to the metal than what we have now with 2+ layers of VM sandboxing.
I like http://yosefk.com/blog/what-worse-is-better-vs-the-right-thi... which slices through the ambiguous terms 'worse' and 'better' and focuses on the crucial ideological divide: do you think evolution is something to combat or something to go with the grain of? That fits with a lot of your comment as well.
But you should elaborate on your last 2 paragraphs. I'm not sure I buy either that Arc adoption can pick up or that the mainstream tech stack will ever cut out layers.
My synthesis of "Worse is better" for myself (with Mu[1] and SubX[2]):
a) I don't think of evolution as "bad". Building something incompatible is indeed maladaptive. I'm clear-eyed about that.
b) Mu doesn't try to come up with the perfect architecture that doesn't need to evolve. Instead it tries to identify and eliminate every source of friction for future rewrites.
c) My goal isn't to go mainstream. I'd be happy to just have some minor Arc-level adoption. I think it's better to have a small number of people who actually understand the goal (an implementation that's friendly to outsiders) than to have a lot of adoption that causes Mu to forget its roots. My real goal is to build something that outlasts the mainstream stack (the way mammals outlasted the dinosaurs). That doesn't feel as difficult. It's clear the mainstream has a lot of baggage bogging it down. It'll eventually run out of steam. But probably not in my lifetime.
Anyways, I hope in a year or so to give Mu an Arc-like high-level language. It won't improve Arc's adoption, but hopefully it will help promulgate the spirit of this forum: to keep the implementation transparent, and to be friendly to newcomers without burning ourselves out.