Seeding Standards and Measuring Development Outcomes
I’ve thought a lot about how to measure development outcomes, but never been satisfied with my answers. I think I may have a bit more clarity thanks to a new perspective.
Measuring only total defects could incentivize misreporting. How could we measure defects in a way that aligns incentives with desired outcomes? I don’t have a sure answer, but here are some thoughts.
A previous post got me thinking about how principles can be measured and what value such measures could provide.
Software engineers are differentiated from other programming disciplines by economical, repeatable, and reliable results. Such consistency requires measurement: data on which to make informed decisions. I propose that source control and work item tracking are the kernel of such a process.
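To make that concrete, here is a minimal sketch of the kind of measure those two sources could feed together. Everything here is illustrative: the `v*` tag scheme, the work item export shape, and the field names are assumptions, not any real tracker's API.

```python
import subprocess

def release_tags() -> list[str]:
    """List release tags from source control (assumes a 'v*' tag scheme)."""
    out = subprocess.run(
        ["git", "tag", "--list", "v*"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def defects_per_release(work_items: list[dict]) -> dict[str, int]:
    """Count defect work items by release.

    `work_items` stands in for a hypothetical tracker export, e.g.
    [{"type": "defect", "release": "v1.2"}, ...].
    """
    counts: dict[str, int] = {}
    for item in work_items:
        if item.get("type") == "defect":
            release = item.get("release", "unassigned")
            counts[release] = counts.get(release, 0) + 1
    return counts
```

The point isn't these particular numbers; it's that both inputs already exist in any team using version control and a work tracker, so the measurement costs almost nothing to start collecting.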
My test types diagram sparked a concern: mature process focuses on repeated, measured improvement, not on specific techniques. That's right, but I don't see the conflict. Tests are a kind of measure, and the diagram identifies common tests (measures) that certain actors leverage to meet larger goals. This raises the question: what do the different kinds of tests measure? Consequently, what do they tell us about our system?
I recently put together that multiple verification (the quality technique) is effectively the same as governance techniques: both manage the level of trust and risk we accept from individual contributors.
I previously wrote about how acceptance tests can streamline communication between developers and teams. I’ve been thinking about the practical enactment of such a scheme and surfaced some interesting ideas.
I’ve been exploring large-scale formal development practices, and realized acceptance tests may be the best way for developers to encode expectations for other developers.
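As a rough illustration of what "encoding expectations" could look like, here is a minimal sketch using pytest-style tests. The `discount_for` function and its rules are hypothetical stand-ins for another team's code; the point is that the consuming team's expectation is stated executably rather than in prose.

```python
def discount_for(order_total: float, is_member: bool) -> float:
    """Hypothetical stand-in for a function owned by another team."""
    if is_member and order_total >= 100.0:
        return order_total * 0.10
    return 0.0

def test_members_get_ten_percent_off_large_orders():
    # The consuming team encodes its expectation as a runnable check:
    # members spending $100 or more receive a 10% discount.
    assert discount_for(150.0, is_member=True) == 15.0

def test_non_members_get_no_discount():
    assert discount_for(150.0, is_member=False) == 0.0
```

If the providing team changes the behavior, the failing test surfaces the broken expectation automatically, before anyone has to notice it in production and schedule a conversation.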
I was thinking about responsibility for different kinds of quality in an organization, and I noticed that different kinds of testing line up well with particular roles and software lifecycle phases. I’ve summed it all up in a quick visual.
The Software Engineering Body of Knowledge (SWEBOK) portrays the software lifecycle as a set of transforms. I realized that each transform creates an artifact, and these artifacts are the key to connecting cross-cutting concerns to the lifecycle phases.
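As a rough sketch of that idea (the phase and artifact names below are my own shorthand, not SWEBOK's exact terms), each lifecycle transform produces an artifact, and a cross-cutting concern like quality attaches to each artifact in turn:

```python
# Each lifecycle transform produces an artifact; cross-cutting concerns
# (quality, security, traceability, ...) attach to those artifacts.
LIFECYCLE_ARTIFACTS = {
    "requirements analysis": "requirements specification",
    "design": "design description",
    "construction": "source code",
    "testing": "test results and reports",
}

# Example: the quality concern maps onto each artifact as a review or check.
QUALITY_CHECKS = {
    "requirements specification": "requirements review",
    "design description": "design review",
    "source code": "code review and static analysis",
    "test results and reports": "coverage and defect analysis",
}
```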