Scores as things to constrain rather than optimize for

This is on the general theme of what you do with scores, metrics, and the like.

My experience is that treating scores as things you're constantly trying to improve leads to pathologies and weird incentives, whereas treating them as constraints, where they just have to stay in a certain range, produces good results almost regardless of how useful the score itself is.

e.g. code coverage. People have a bit of a hatred for it, but I think that's because they treat it as a score they're trying to push up, and so read higher coverage as meaning the tests are better. This is often not the case.

Code coverage treated instead as a constraint (either "you must have 100%" or "you must never decrease coverage") produces very good results, because at that point it becomes where testing starts rather than the goal, and it is much more useful in that role.
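
For concreteness, here's one way those two constraint styles might be enforced in CI, assuming coverage.py: the "must have 100%" rule maps to a `fail_under` setting in the report config, and the "never decrease" rule can be a small ratchet script like the sketch below. The baseline file name and location are illustrative, not part of any tool; it assumes `coverage json` has already been run to produce `coverage.json`.

```python
# coverage_ratchet.py -- a minimal sketch of the "never decrease coverage" constraint.
# Assumes coverage.py has already written coverage.json via `coverage json`.
# The baseline file (coverage_baseline.txt) is a hypothetical file committed to the repo.
import json
import sys
from pathlib import Path

BASELINE = Path("coverage_baseline.txt")


def main() -> int:
    # Total coverage percentage from coverage.py's JSON report.
    current = json.loads(Path("coverage.json").read_text())["totals"]["percent_covered"]
    baseline = float(BASELINE.read_text()) if BASELINE.exists() else 0.0

    if current < baseline:
        print(f"Coverage fell from {baseline:.2f}% to {current:.2f}%; add tests or justify the drop.")
        return 1

    # Ratchet the baseline upward so future runs are constrained by the new figure.
    BASELINE.write_text(f"{current:.4f}")
    print(f"Coverage OK: {current:.2f}% (baseline {baseline:.2f}%).")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The point of the ratchet is that nobody is asked to chase a number; the constraint only bites when a change would make things worse.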