What Matters In Software Development
by havoc
Lots of traffic on Twitter about Steve Yegge’s post defining a “software ideology” spectrum. Myles Recny made a survey to help you place yourself along said spectrum.
Thinking about it over the weekend, I can’t identify with this framing of software development. Those survey questions don’t seem to cover what I think about most in my own software work; in fact I’d be a little worried if I saw colleagues focused on some of these.
Here’s what I do think about, that could be called an ideology.
Ideological View 1: Risk = Getting Worse On Average
Whether we’re talking about dieting, finance, or software, flows matter more than stocks.
The risk I worry about is: are you adding bugs faster than you’re fixing them? Is your technical debt going up? Is this code getting worse, on average?
If the average direction is “worse” then sooner or later your code will be an incomprehensible, hopeless disaster that nobody will want to touch. The risk is descent into unmaintainable chaos where everyone involved hates their life and the software stops improving. I’ve been there on the death march.
Bugs vs. Features: Contextual Question!
In Steve’s post, he says conservatives are focused on safety (bugs in production) while liberals are focused on features. I don’t have an ideological view on “bugs in production risk”; it’s contextual.
For example: Red Hat maintains both Fedora and Enterprise Linux, two branches of the same software project with mostly the same team but with distinct “bugs in production” risk profiles and distinct processes to match. Red Hat uses the same code and the same people to support different tradeoffs in different contexts. Maybe they’re a post-partisan company?
If I were working on software for the Mars rover, I’d strenuously object to continuous deployment. (Maybe we should test that software update before we push it to Mars?) If I were working on I Can Has Cheezburger, bugs in production wouldn’t bother me much, so I’d be happy to keep the process lightweight.
But in both cases I don’t want to see the code getting worse on average, because in both cases I’d want to keep that code alive over a period of years. That’s the ideology that stays constant.
A project that’s getting worse on average will achieve neither safety nor features. A healthy project might have both (though not in the same release stream).
How to Avoid Getting Worse
To avoid the risk of steadily getting worse, a couple of issues come up every time.
Ideological View 2: Clarity and Simplicity Are Good
Can the team understand it?
This is relative to the team. If your team doesn’t know language XYZ, you can’t write code in that language. If your API is intended for mainstream, general programmers, it can’t be full of niche jargon. If your team doesn’t speak German, you can’t write your comments in German. Etc.
Software developers learn to make judgment calls about complexity and over- vs. under-engineering. These calls are countless, contextual, and always about tradeoffs. Experience matters.
A definition of “competent software developer” surely includes:
- they worry about complexity and can make judgments about when it’s worth it
- they can write both prose and code such that the rest of the team understands it
Not all teams have the same complexity capacity, but they all have some limit, and good ones use it wisely.
Ideological View 3: Have a Process
I’ve seen many different methodologies and processes work. They optimize for different team skills or different levels of “bugs in production” risk. My belief is that you need some method to your madness, something other than a free-for-all. Examples:
- Good unit test coverage with mandatory coverage for new code.
- OR hardass code review. (Hardass = reviewer spends a lot of time and most patches get heavily revised at least once. Most reviews will not be “looks good to me.”)
- OR just one developer on a codebase small enough to keep in one head.
- OR Joel’s approach.
You don’t need all of those, but you need at least one thing like that. There has to be some daily habit or structural setup that fights entropy, no matter how smart the individual team members are.
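The first item on that list is the easiest to make mechanical. Here’s a minimal sketch of a coverage gate a CI job could run after the test suite; it assumes Python, the coverage package, and an illustrative 85% threshold, none of which come from this post, so treat the details as placeholders:

```python
# Minimal coverage gate (a sketch, not a prescription): fail the build
# when total coverage drops below a chosen threshold.
# Assumes the `coverage` package and a .coverage data file already
# produced by the test run; the 85% threshold is illustrative.
import sys

import coverage

THRESHOLD = 85.0

cov = coverage.Coverage()
cov.load()              # read the .coverage data file from the test run
total = cov.report()    # prints a summary table, returns total % covered

if total < THRESHOLD:
    print(f"Coverage {total:.1f}% is below the required {THRESHOLD:.1f}%")
    sys.exit(1)         # non-zero exit makes CI mark the build as failed
```

(`coverage report --fail-under=85` does roughly the same thing from the command line, and gating only new code, rather than the whole tree, usually means layering a diff-aware tool on top. The point isn’t this particular tool; it’s that the check runs every time without anyone having to remember it.)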
Companies may have rule-based or trust-based cultures, and pick different processes. Lots of different approaches can work.
Summary
Ideological lines in the sand framing my thinking about software development:
- Risk = the project becomes intractable.
- Prerequisite to avoid this risk: you have to be understandable and understood.
- Process to avoid this risk: have one and stick to it.
If you can write clear, maintainable code, and keep it that way, using your OS, text editor, dynamic language, static language, XML-configured framework, agile process, or whatever, then I’m open to your approach.
If you’re creating complexity that doesn’t pay its way, not making sense to the rest of the team, or working without any real process, then I’m against it.
“How many bugs in production are OK,” “static vs. dynamic languages,” “do we need a spec for this,” “do we need a schema here,” “what do I name this function”: these are pragmatic, context-dependent issues. I like to consider them case-by-case.
Postscript: Me me me
A lot of these example “liberal/conservative” statements feel ego-driven. I’d look bad if we shipped a bug, I’m smart and can learn stuff, I never write slow code, I always write small code, blah blah.
It’s not about you.
When you agree or disagree with “programmers are only newbies for a little while” – are you thinking of software creation as an IQ test for developers? The goal is neither to “dumb down” the code nor to prove that, for you, it doesn’t need to be dumbed down.
Let me suggest a better framing: is this complexity worth it, in the context of our customers and our team? If we’re trying to maximize how useful our software can be, given the level of complexity our team can cope with, should we spend our brain cycles in this corner of the code or some other corner?
When you agree or disagree with “software should aim to be bug free before it launches” – do you have the same opinion about both the Mars rover and icanhascheezburger? If you do, you might need to refocus on the outside world our software is supposed to be serving.
Better framing: it has to work.
You get the point I guess…
Great post
Bravo – excellent post. I agree with you on pretty much all points. Like actual politics, I think it is impossible to really define people in two black-and-white camps. In the case of software development, so much just hinges on practicality and context. The Mars lander vs. icanhascheezburger example was the perfect dichotomy to demonstrate that.
Woohoo. You said it best with “I like to consider them case-by-case.”
There were too many questions I was perplexed about myself: I don’t have a single answer to them.
It’s the basic fallacy in any logical reasoning: you need to discuss the merits of actions, not people. And when you do that, you take the circumstances into account as well, and it all suddenly starts to make a bit more sense.
However, one thing is clear: anyone with such a rigid “software politics” system sounds like a nazi in that system—and even that doesn’t fit anywhere on the continuum imo 😉
Yes – great post – and the limitations in Yegge’s idea (and my survey) are apparent. Thank you.
Your survey does a great job of summarizing Yegge’s idea, thanks for putting it together. Trying to answer those questions was a helpful way to feel out whether the framework makes sense.
What would be interesting as a follow-up is a matrix of the different personas enumerated, and then advice on how to interact with them, i.e. you are blue, here’s how to interact with yellow, white, red, and blue people, etc.