2 points by shader 2539 days ago | link | parent

Interesting discussion. I appreciate your observations that only the major version matters, and that incrementing it could just as well be renaming the project, since it doesn't really communicate anything beyond "differences exist".

However, I don't agree that preventing version pinning will somehow encourage better coding standards. Hypothetically it makes sense: without version pinning, breaking changes would be more noticeable and painful, and users would have more incentive to switch to libraries that avoid them. Unfortunately, software is not yet, and possibly never will be, the kind of competitive market required for that pressure to be effective.

The problem is that the pain is felt entirely by the users; the developers may not even realize when one of their changes is "breaking", because they aren't running their users' code. Furthermore, since libraries usually differ greatly even from others with similar functionality, updating to accommodate a breaking change will almost always be easier than switching to a different library with a better reputation (assuming one even exists). Since users can't control the behavior of developers, the best they can do on their own end is version pinning. It's not a good solution, but it's effectively the only one.
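For example, with pip-style requirement specifiers (library name invented), pinning outright vs. merely refusing the next major version would look like:

    somelib==1.4.2     # pinned: always install exactly this version
    somelib>=1.4,<2    # capped: accept fixes, refuse the next major version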

Maybe in the long run people would read the horror stories and avoid those libraries, but even that doesn't help much, because developers usually don't have much incentive to actually meet the needs of their users--users are not really customers, merely beneficiaries. This would be different for commercial software, but then you wouldn't be getting the resource through a package manager anyway.

One interesting solution would be a package manager with integrated CI functionality, so that any minor update to a package is automatically tested against every other package that depends on it. This wouldn't catch every error, since most user code isn't published, but it would make developers much more aware of, and sensitive to, breaking changes. If they still want to make a change anyway, they can change the name.
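Something like this rough Python sketch is what I have in mind; the registry and build functions are invented stand-ins for whatever the real package manager would expose:

    # Hypothetical sketch: before accepting a new release, run the test
    # suite of every published package that depends on it.
    def check_release(registry, package, new_version):
        """Return the names of dependents whose tests break under new_version."""
        failures = []
        for dependent in registry.reverse_dependencies(package):
            # Build the dependent with the candidate version substituted in.
            env = registry.build_env(dependent, overrides={package: new_version})
            if not env.run_tests():
                failures.append(dependent.name)
        return failures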



3 points by shader 2533 days ago | link

I've been thinking more about the "renaming" vs "numbered versions" for packages, and I'm now leaning more strongly towards a distinct version number—or at least a separate version field, number or otherwise.

It's true that the major version number doesn't convey much beyond the vague idea that the authors believe the new version to be better, or they wouldn't have written it. However, as long as versions are unique and there's an easy way to find the "latest" one, it doesn't really matter whether they're numbers. Commit hashes or code names would work just as well.

However, I think there's a simple security argument in favor of making the version a subfield, to prevent spoofing by malicious or mischievous third parties. Otherwise anyone could claim that their fork was "python-4".

An alternate solution might be to add a "successor" field, so that a package could identify another package as its rightful heir, even if it wasn't developed by the same team. That should make fork-based open-source community development a little easier. You'd still have to know what the root package was to follow the chain, though.
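Roughly, I'm imagining a manifest like this (every field name invented for illustration), where the registry would presumably only trust a "successor" link published by the original package's owners:

    manifest = {
        "name": "somelib",
        "version": {"major": 2, "minor": 7, "patch": 0},  # version as a subfield, not part of the name
        "successor": "somelib-ng",  # optional pointer to the package's rightful heir
    }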

-----

1 point by akkartik 2539 days ago | link

I updated my original post based on conversations I had about it, and my updated recommendation is towards the bottom:

"Package managers should by default never upgrade dependencies past a major version."

I now understand the value of version pinning. But it seems strictly superior to minimize the places where you need pinning: use it only when a library misbehaves, rather than adding it everywhere just to protect against major version changes.

---

CI integrated with a package manager would indeed be interesting. Hmm, it may disincentivize people from writing tests though.

-----

2 points by shader 2538 days ago | link

A reduced need for tests may not be a bad thing.

We don't write tests just to have tests; we write them to verify that certain important functionality is preserved across versions. The trouble with many tests is that they are written to fill coverage quotas and don't actually check important use cases. By using actual dependents and _their_ tests as the test suite for upstream libraries, we might get a much better idea of what functionality is actually considered important or necessary.

Anything that nobody is using can change; anything that people rely on should not, even if that's counterintuitive.

The problem remains that most user code will still not exist in the package manager. It might be more effective to integrate with the source-control services (GitHub, GitLab, etc.), which already provide CI and host software even when it isn't intended to be used as a dependency. The "smart package" system could then use the latest head that meets a specified testing requirement, instead of an arbitrary checkpoint chosen by the developers.
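Roughly, the resolution rule might look like this sketch, assuming the hosting service exposes commit history and CI results (all names invented):

    # Hypothetical: resolve a dependency to the newest commit whose test
    # results satisfy the declared requirement, instead of a tagged release.
    def resolve(repo, requirement):
        for commit in repo.commits(newest_first=True):
            if requirement.satisfied_by(repo.ci_results(commit)):
                return commit
        raise LookupError("no commit meets the testing requirement")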

-----

2 points by akkartik 2538 days ago | link

Oh, I just realized what you meant. Yes, I've often wanted an open-source app store of some sort where you had confidence you had found every user of a library. That would be the perfect place for CI if we could get it.

Perhaps you're also suggesting a package manager that is able to phone home and send back analytics about function calls and the arguments they were called with? That's compelling as well, though there's the political question of whether people would be creeped out by it. I wonder if there's some way to build confidence by showing exactly what it tracks, making all the data available to the world, and so on. If we could convince users to enter into such a bargain, it would be a definite improvement.

-----

2 points by shader 2536 days ago | link

I wasn't really considering that level of integration testing. It would certainly be cool to get detailed error reports from the CI tool. I don't see why you couldn't, since the code is published on the public package system, and you would be getting error listings anyway.

I don't think it would be creepy as long as it's not extracting runtime information from actual users. If it's just CI test results, it shouldn't matter.

Live user data would be a huge security risk. Your CI tests could send you passwords, etc., if they happen to pass through your code during an error case.

-----

2 points by akkartik 2538 days ago | link

I wasn't saying there'd be a reduced need for tests; it's hard for me to see how adding CI would reduce that need. I'm worried that people will say "this stupid CI keeps failing", so "best practice" becomes not writing tests. (The same way package managers don't enforce major versions, so "best practice" has evolved into always pinning.)

Unnecessary tests are an absolutely valid problem, but an independent one :)

-----

2 points by shader 2536 days ago | link

By "reduced need for tests" I didn't mean that the absolute number of tests would decline, but rather the need and incentives for the development team to write the tests themselves. Since they have the ecosystem providing tests for them, they don't need to make as many themselves. At least, that's how I understood the discussion.

So yes, if the package manager only enforced the tests you include in your own package, it would incorrectly discourage including tests. But if it enforces tests that _other_ people provide, you have no way around it. The only problem is how to handle bad tests submitted by other people. Maybe only enforce tests that passed on a previous version but fail on the current candidate?

-----

1 point by akkartik 2536 days ago | link

Ooh, that's another novel idea I hadn't considered. I don't know how I feel about others being able to add to my automated test suite, though. Would one of my users be able to delete tests that another wrote? If they only have create privileges, not update, how would they modify tests? Who has the modification privileges?

These are whole new vistas, and it seems fun to think through various scenarios.

-----

2 points by shader 2533 days ago | link

It's not really the same as others being able to add tests to your automated suite. Rather, they add tests to their own packages, and the CI tool collects all tests even indirectly dependent on your library into a virtual suite. Those tests are written to test their code, and only indirectly test yours. If a version of their package passes all of its tests with a previous version of your code, and the atomic change to the latest version of your code causes a test to fail, the failure was presumably caused by that change. The tests would probably have to be run multiple times to eliminate non-determinism.

It's still possible that someone writes code that depends on "features" you consider to be bugs, or writes a pathologically sensitive test, so the maintainer may need some way to flag tests as poor or unhelpful so they can be ignored in the future. Hopefully the requirement that a test pass on the previous version before it counts is sufficient to filter out most faulty tests, though.
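Something like this acceptance rule, per downstream test (names invented; the retries are a crude guard against the non-determinism mentioned above):

    RETRIES = 3  # re-run each test to reduce the impact of flaky results

    # Hypothetical: a downstream test only counts against the candidate if it
    # passed reliably on the previous version, fails reliably on the new one,
    # and the maintainer hasn't flagged it as unhelpful.
    def is_regression(test, old_version, new_version, flagged):
        if test in flagged:
            return False
        passed_old = all(test.run(old_version) for _ in range(RETRIES))
        failed_new = not any(test.run(new_version) for _ in range(RETRIES))
        return passed_old and failed_new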

-----

1 point by akkartik 2533 days ago | link

Yes, this was kinda what I had in mind when I talked about an open-source app store: make it easy for libraries to be aware of how they're used.

-----

2 points by shader 2530 days ago | link

It doesn't look like it would be too hard to start building such a system on top of GitHub; they provide an API for searching repos by language.

We could potentially build it first for Arc, which would be pretty small and simple, but also provide the package manager we've always wanted :P
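That part at least is real today; here's a minimal sketch against GitHub's public search endpoint, using Python and the requests library (unauthenticated search is heavily rate-limited, so a real tool would want a token):

    import requests

    # Search GitHub's public API for repositories written in Arc.
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": "language:arc", "sort": "updated"},
        headers={"Accept": "application/vnd.github+json"},
    )
    resp.raise_for_status()
    for repo in resp.json()["items"]:
        print(repo["full_name"], repo["html_url"])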

-----