When it’s in DEV, it’s in DEV, dammit

Last week I was faced with a dilemma I run into on projects that are just gearing up their process.  The dilemma may sound simple, but it is this:  when is it OK to log defects?

Now some of you may think this is a stupid question and answer “ALWAYS, OF COURSE!”  But I heartily disagree.  Because BEFORE we start logging defects, we have to ask the question:  what are we trying to accomplish with this phase of the software?  If the software is still in DEV (that is, it’s not finished yet), then what does a logged defect really mean?  And who can the logging damage?

OK, let’s outline what I see as the problem once more: logging bugs on work that is still in development is a hindrance because the work is not finished.  The developer is still implementing the business requirements, so bugs filed against it are basically moot.

On last week’s project the UI guy and I had to release something on the rarely-used STAGE server, treating it as a DEV server, because we were also getting the whole lifecycle environment up and going, and we had no proper development environment to deploy to while fleshing out and developing a major rearchitecture we are working on.  Also, the data environment had not been refreshed in at least two years!   The UI guy started to ask the biz team to start banging on it, but I said WOAH WOAH WOAH cowboy, hold on a minute.   We aren’t done yet!  Logging bugs on an unfinished piece is like telling me that flour doesn’t taste like baked bread.   The repercussions of logging bugs out of context are these:

  1. Time lost investigating bad-data problems that are just dead ends, like old, out-of-context, or non-refreshed data.
  2. Statistics that will be mined (out of a bug tracker) by people who do not understand the context, which is tantamount to lying with stats.   Logging and “fixing” bugs on unfinished software (where the “fix” is basically just finishing the story) and using this to show how good or bad people and processes are is very simply a NON-OBJECTIVE and UNSCIENTIFIC use of crap data.
  3. Team dynamics can suffer when this happens: mistrust, people holding things close to the vest.
  4. If you are doing TDD or BDD, all your tests break up front by design.  Seriously, are you going to log those as defects, fix them, and close them as PART of developing a new story?  (See the sketch after this list.)
  5. If a developer has to worry about this QA process, why should time even be wasted sitting with the stakeholder fleshing out, say, a screen, if the missing pieces or code stubs are just going to get logged as defects?
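
To make point 4 concrete, here is a minimal sketch of the “red” phase of TDD in JUnit 4.  The DiscountCalculator class and its discount rule are hypothetical, invented purely for illustration:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Red phase of TDD: the test is written FIRST and fails on purpose.
// DiscountCalculator is a hypothetical example class, not from any real project.
class DiscountCalculator {
    double priceAfterDiscount(double price) {
        // Not implemented yet -- the next task in the story is to make the test pass.
        throw new UnsupportedOperationException("not implemented");
    }
}

public class DiscountCalculatorTest {
    @Test
    public void appliesTenPercentDiscountAtOneHundredDollars() {
        // This failure is a normal step in the red-green-refactor loop,
        // not a defect to be logged, triaged, and closed in a tracker.
        assertEquals(90.00, new DiscountCalculator().priceAfterDiscount(100.00), 0.001);
    }
}
```

That failing test is the normal starting state of every story under TDD, which is exactly why filing it as a defect makes no sense.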

See what I am getting at?  Here are a few more examples of how this kind of problem (when to log defects) seriously impacted work on my teams at two different Fortune 50 companies in the last 10 years:

  • At one place an engineer and I were developing (with another remote team) the Hudson builds.  We were also trying to normalize the IDE builds (i.e. build-button actions), the in-IDE Ant builds, and the Hudson Maven/Ivy builds.    Developing them.  On a development server.  But upper management saw fit to start sending out nasty “don’t break the build” messages over and over to everyone, pointing at us core people.   It was nasty . . . we spent a ton of money setting up another system just to get out of this stream before we could deploy to the actual place we were supposed to develop this!
  • Over 10 years ago I had a manager who had come from QA and who would literally log bugs on our DEV server, wait until we completed the feature, then close them.
  • At another place we’d do screens (like I mentioned above), and if we dared release a small piece to our server, or our iteration was observed, defects would get logged on partially complete work.

I have some other examples, but those are the major ones . . . and now this at my current gig.

Some of you may think this is a total outlier, but it’s not, not at all.  Some of you may say “well, why not just log everything?”  And my answer is:  have you worked at many places?  Because if you have, you’d know the general attitude toward developers may be distrust (which is partly our own fault; you know, those cowboy coders who left everyone holding the bag), and that stats can be used to show ANYTHING.

Case in point:  I worked at a place where, by the management team’s own measurement, we pushed out more features than any other team.  But they ignored the very stats that showed it, the stats they told us to use, due to a built-in site bias (i.e. all of the management lived in another city).  Hahaha.  Serious!

My advice is this:  avoid situations where anything other than a rigorous process can be used.  It just helps everyone out; it creates the proper buckets and behaviors.  More feedback is better, of course, but disallow bug logging against DEV in your Jira or whatever, or at worst meter it.  And crank it up in true QA.
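
If you want to at least watch that bucket, here is a minimal sketch of the metered option: a nightly check that surfaces any bugs filed against DEV.  It assumes a Jira instance at jira.example.com, a project key of PROJ, and that your team records the target environment in Jira’s built-in Environment field; all of those are placeholders to adapt:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Nightly check: surface any open Bug filed against the DEV environment.
// The URL, project key, credentials, and use of the Environment field are
// assumptions -- adapt the JQL to however your team tags environments.
public class DevBugWatch {
    public static void main(String[] args) throws Exception {
        String jql = "project = PROJ AND issuetype = Bug "
                + "AND environment ~ \"DEV\" AND resolution = Unresolved";
        String url = "https://jira.example.com/rest/api/2/search?jql="
                + URLEncoder.encode(jql, StandardCharsets.UTF_8);
        String auth = Base64.getEncoder()
                .encodeToString("user:apitoken".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Raw JSON for now; a real script would parse "total" and nag the filer.
        System.out.println(response.body());
    }
}
```

Even if you don’t block DEV bugs outright in the workflow, a report like this keeps them from silently polluting your stats.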

Most places have at minimum three levels to their bug flow:

  • Pre-Release Defect Tracking:  the DEV-to-QA defect process.  Code is released to QA, defects are found, and code is fixed and re-released.
  • UAT Defect Tracking:  QA to STAGE/User Acceptance.
  • Released-Code Defect Tracking:  code in the field.
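
If it helps to keep those buckets straight in your own tooling, here is a tiny sketch of the three levels as data, plus the in-progress DEV non-bucket this whole post is about.  The enum and its flag are my own framing, not a feature of any tracker:

```java
// Three defect-tracking levels, each with its own scope, plus the non-bucket.
// The names and the allowsDefects flag are illustrative only.
public enum DefectStage {
    PRE_RELEASE("DEV to QA: log, fix, re-release", true),
    UAT("QA to STAGE/User Acceptance", true),
    RELEASED("Code in the field", true),
    DEV_IN_PROGRESS("Unfinished work on a DEV server", false); // the point of this post

    private final String scope;
    private final boolean allowsDefects;

    DefectStage(String scope, boolean allowsDefects) {
        this.scope = scope;
        this.allowsDefects = allowsDefects;
    }

    public String scope() { return scope; }
    public boolean allowsDefects() { return allowsDefects; }
}
```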

Each has a different scope.  If we are true to the agile ideas, then all of us can take part in each phase; that is, all of us as a team: BAs, QAs, DEVs, and Owners can give meaningful feedback.  The power is in following a nice process and then re-evaluating that process.
