Let’s say one day you decide to add a feature to your software or service. For example, you need a new flag on user accounts, so that different types of users get different features. (These don’t even have to be tiered account levels; maybe accounts of type “music lover” get a widget in the sidebar with suggestions for bands they might like, while “sports fan” accounts get a sports scores widget instead.)
So, following good software development processes, you first write a couple of tests:
UserTypeOne gets features A, B, and C
UserTypeOne does not get features D, E, or F
UserTypeTwo does not get features A, B, or C
UserTypeTwo gets features D, E, and F
Then you write some code that meets these tests, including creating a pair of mock users with the new flag set in both possible ways. You run the tests, and they all pass. Yay! Your new feature is ready!
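To make that concrete, here is roughly what the tests, the fixture, and the new code might look like. Everything in this sketch is hypothetical and boiled down: the User stand-in, the user_type field (playing the part of the new flag), and the features_for() helper are invented names, not anyone’s real API.

```python
# A minimal, self-contained sketch; User, user_type, and features_for()
# are hypothetical stand-ins for your real models and helpers.
import unittest
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    username: str
    user_type: Optional[str] = None  # the new, supposedly mandatory flag


# The new code under test: the feature set is just a lookup on the flag.
FEATURES_BY_TYPE = {
    "music lover": {"A", "B", "C"},
    "sports fan": {"D", "E", "F"},
}


def features_for(user):
    return FEATURES_BY_TYPE[user.user_type]


class UserTypeFeatureTests(unittest.TestCase):
    def setUp(self):
        # The fixture: a pair of mock users with the new flag set both ways.
        self.music_lover = User("alice", user_type="music lover")
        self.sports_fan = User("bob", user_type="sports fan")

    def test_music_lover_gets_a_b_c(self):
        self.assertEqual(features_for(self.music_lover), {"A", "B", "C"})

    def test_music_lover_does_not_get_d_e_f(self):
        self.assertTrue(features_for(self.music_lover).isdisjoint({"D", "E", "F"}))

    def test_sports_fan_does_not_get_a_b_c(self):
        self.assertTrue(features_for(self.sports_fan).isdisjoint({"A", "B", "C"}))

    def test_sports_fan_gets_d_e_f(self):
        self.assertEqual(features_for(self.sports_fan), {"D", "E", "F"})


if __name__ == "__main__":
    unittest.main()
```

Notice what the fixture quietly assumes: every User it builds already has user_type set. The suite is green, and no test ever constructs a user the way all of your existing accounts were constructed.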
Except that, of course, the moment you check this code in, you find that suddenly nobody can log in any more. After all, the new flag is supposed to be mandatory, and that requirement is reflected in one of the changes you made to the login code. But the only users that have the NewFlag set at all are the ones you just created in your fixture.
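In code, the mistake might look something like this; log_in, the user_type check, and the exception are hypothetical again, and only the shape of the change matters:

```python
# A sketch of the login-path change that ships along with the new feature.
class AccountConfigurationError(Exception):
    pass


def log_in(user):
    # (Credential checking elided; assume the password already checked out.)
    #
    # The change made alongside the feature: the flag is now mandatory.
    # Reasonable for accounts created from now on, fatal for every account
    # created before the flag existed (that is, all of your real users).
    if getattr(user, "user_type", None) is None:
        raise AccountConfigurationError("user_type is required")
    return {"session_for": user.username}  # stand-in for a real session


class LegacyUser:
    """Shaped like an account created before the flag existed."""
    def __init__(self, username):
        self.username = username


if __name__ == "__main__":
    try:
        log_in(LegacyUser("carol"))
    except AccountConfigurationError as err:
        print("Login is broken for pre-existing accounts:", err)
```

The two fixture users from the tests sail through that check; nobody else does.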
Sure, this will probably get caught before it goes live. Even if your entire QA department is asleep at the switch, the moment just one person in your organization runs even the most rudimentary tests, they’ll get back an error message. But “it didn’t get pushed to production” is only the most minimal level of success; I’d like to look at how you, the coder, could have prevented this problem. How could you keep this error from even getting checked in (and thus breaking the build and flat-lining your co-workers’ productivity for a few hours)?
The way a real QA department would spot this problem is regression testing. But it’s not reasonable to ask every coder to run a full suite of regression tests before every commit.
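To spell out what the regression suite buys you here: the existing login tests were presumably written against accounts that predate the new flag, so simply re-running them against the changed code surfaces the break. A boiled-down sketch, with the same hypothetical names and the buggy check repeated inline so the example stands alone:

```python
# What a pre-existing login test catches when re-run against the new code.
import unittest


class AccountConfigurationError(Exception):
    pass


def log_in(user):
    # The buggy, now-mandatory flag check from the change above.
    if getattr(user, "user_type", None) is None:
        raise AccountConfigurationError("user_type is required")
    return {"session_for": user.username}


class LegacyUser:
    """Shaped like an account created before the flag existed."""
    def __init__(self, username):
        self.username = username


class LoginRegressionTest(unittest.TestCase):
    def test_pre_existing_account_can_still_log_in(self):
        # Exactly the case the new feature's own tests never cover:
        # a user with no user_type at all.
        session = log_in(LegacyUser("dave"))
        self.assertIsNotNone(session)


if __name__ == "__main__":
    unittest.main()  # errors out against the buggy log_in above
```

The test itself is trivial; the catch, as noted above, is that it only gets run as part of the full suite.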
The real failure, of course, is that you made a change that you didn’t test. But saying “force programmers to write better tests” is not a viable answer; people are imperfect. You might as well mandate “developers may not check in any code that contains bugs” as one of your organization’s rules.
I don’t have a good answer to this problem. Asking developers to also be QA people is completely impractical. The current solution seems to be “developers sometimes check in code that breaks the build and blocks people’s productivity for a while”. (Distributed SCM can help mitigate this.)
Is there something QA people know that developers don’t, that would make an easy solution that we could adopt?