Simplifying Test Construction

Historically, software delivery teams, projects, and companies have subscribed to separating out different types of testing based upon an activity or role label rather than the contextual behavior within the evolution of the software. For example, unit testing must surely belong to a developer, while regression testing must of course belong to a non-technical black box tester. In some environments, performance testing is performed by technical or semi-technical non-developer testers using COTS solutions; in other environments the test suite is built from the command line up by compiler-level developers in clean room software engineering environments. Making a context-driven decision about who should be doing what forms of testing, and in what manner, is not inappropriate. But separating out forms of testing to different people or phases by default is unnecessarily complex, accidentally redundant, and in some cases just plain inefficient.

  • What if we began our software evolution by articulating all requirements of the system in simple statements like - "A user can..."?
  • What if we created "A user can..." statements for each role in the system?
  • What if, for each and every "user can" statement, we had an associated description just below that briefly told us a little about what this thing is that a user can do?
  • What if, for each and every "user can" statement, we had a short list of testable statements we called acceptance criteria that helped us understand how a customer would measure "done-ness" of the software? (An illustrative example follows this list.)
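
For illustration only, here is the shape one such statement might take, with a descriptor and acceptance criteria underneath. The feature and the criteria are invented for this example rather than drawn from any particular system:

    A user can withdraw cash from their account.

    Descriptor: An account holder takes money out of an existing account, and the balance reflects the withdrawal immediately.

    Acceptance criteria:
      • A withdrawal no greater than the current balance reduces the balance by exactly that amount.
      • A withdrawal greater than the current balance is refused and the balance is unchanged.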

Sound interesting? Read Mike Cohn's User Stories Applied: For Agile Software Development to get a clear understanding of how to leverage user stories, why, and what a successful project looks like under this approach.

  • What if each user story represented one batch of automated test scripts?
  • What if each batch of automated test scripts included at least two individual automated tests PER single acceptance criterion associated with a user story? For example, if a user story had two acceptance criteria, there would be one positive and one negative test per criterion, thereby suggesting at least four test scenarios per user story. Add path-based variation and boundary value scenarios and the set of scripts explodes. (A sketch of one such batch follows this list.)
  • What if each script written were its own encapsulated test such that it could be written once and called with any frequency, in any order?
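
As a concrete shape for those batches, here is a minimal sketch using Python's unittest (one member of the xUnit family). The story is the invented withdrawal example from earlier in this post, and the Account class is only a toy stand-in for whatever the real system under test would be, not a prescribed design:

    import unittest


    class Account:
        """Toy stand-in for the system under test; a real suite would import the application code."""

        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount <= 0:
                raise ValueError("withdrawal must be positive")
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount


    class TestUserCanWithdrawCash(unittest.TestCase):
        """One batch of tests for one user story: "A user can withdraw cash from their account."

        Two acceptance criteria, one positive and one negative test for each,
        gives the minimum four scenarios. Each test builds its own Account,
        so the harness can call them in any order, any number of times.
        """

        # Criterion 1: a withdrawal within the balance reduces the balance by that amount.
        def test_withdrawal_within_balance_reduces_balance(self):   # positive
            account = Account(balance=100)
            account.withdraw(30)
            self.assertEqual(account.balance, 70)

        def test_zero_withdrawal_is_rejected(self):                 # negative
            account = Account(balance=100)
            with self.assertRaises(ValueError):
                account.withdraw(0)
            self.assertEqual(account.balance, 100)

        # Criterion 2: a withdrawal beyond the balance is refused and the balance is unchanged.
        def test_withdrawal_beyond_balance_is_refused(self):        # positive
            account = Account(balance=100)
            with self.assertRaises(ValueError):
                account.withdraw(150)
            self.assertEqual(account.balance, 100)

        def test_withdrawal_of_exact_balance_succeeds(self):        # negative / boundary
            account = Account(balance=100)
            account.withdraw(100)
            self.assertEqual(account.balance, 0)


    if __name__ == "__main__":
        unittest.main()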

Take test-driven development concepts, combine them with Cohn's testable acceptance criteria statements, throw in some popular open source tools like FitNesse or xUnit, and we have an evolutionary landscape changing the role of testing and of developers and testers alike. In my own experience, this is not the role of a career tester, but rather of a small collection of developers who genuinely care about the evolution of customer-facing software in a matter of days/weeks.

  • What if the build routine polled for new code every 15 minutes to see if it was time to build a new package?
  • What if, every time it polled and found new code, it built the package and reported the results for public consumption?
  • What if every time the build completed and it was considered "good", it called the test harness, which in turn pulled all live tests associated with code-level unit tests as well as functional-level user stories, i.e. acceptance criteria?
  • What if every time said harness was called and the associated scripts were executed, all results were posted yet again for public consumption?
  • What if the definition of a good build was not only that it built ... but that all tests called during the build routine executed and, in particular, passed?
  • What if the definition of "good" was a clean build with no failed tests?
  • What if this routine ran every 15 minutes, 24x7x365, or at the very least daily? (A minimal polling sketch follows this list.)
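
For the flavor of that rhythm, here is a minimal polling sketch in Python. The version control, build, and test commands are placeholders, and a real team would more likely lean on a continuous integration server than hand-roll this loop; the point is only to make the 15-minute cadence and the definition of "good" concrete:

    import subprocess
    import time

    POLL_INTERVAL_SECONDS = 15 * 60  # "every 15 minutes"

    # Placeholder commands; substitute whatever your version control, build tool,
    # and test harness actually use.
    UPDATE_CMD = ["svn", "update"]   # hypothetical: pull the latest code
    BUILD_CMD = ["make", "package"]  # hypothetical: build the package
    TEST_CMD = ["make", "test"]      # hypothetical: unit tests plus acceptance tests


    def run(command):
        """Run one command; success means a zero exit code. Output lands on the console for public consumption."""
        return subprocess.run(command).returncode == 0


    def build_is_good():
        """A "good" build not only builds; every test called during the routine must also pass."""
        return run(BUILD_CMD) and run(TEST_CMD)


    def main():
        while True:  # 24x7x365
            # (Detecting whether the update actually brought down new code is elided here.)
            if run(UPDATE_CMD):
                status = "GOOD" if build_is_good() else "BROKEN"
                print(f"[{time.ctime()}] build status: {status}")  # post for public consumption
            time.sleep(POLL_INTERVAL_SECONDS)


    if __name__ == "__main__":
        main()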

Consider looking through Martin Fowler's material on continuous integration to understand the value of continuous feedback loops in the build and test cycle. There is no longer a need to do all of the development and then hand it over to the test group. Make the test efforts part of the development and build routine. If you want to explore the mother of all software configuration management minds, look to Brad Appleton.

  • What if the same set of user stories and associated acceptance criteria which were used to create a functional test suite were also re-usable as the baseline performance test suite?
  • This would seem to suggest the scripts must then be written in a non-proprietary language, with an open-source, technology-agnostic tool. Remember: write it once, use it N times.
  • What if the transaction mixes were composed of various user story and acceptance criteria combinations? (A sketch follows this list.)
  • What if the user mixes were composed of the various actors identified during user story construction?
  • What if the same set of user stories and acceptance criteria were to be considered the fundamental competitive function points of the system such that marketing materials and sales programs could be built upon this knowledge?
  • What if the same set of user stories and acceptance criteria were to be looked upon as the base table of contents for a modular XML based e-help solution? In other words, the user stories themselves point to the system decomposition ... which then elucidates the table of contents for the e-help and training approach.
  • What if this same set of user stories were considered to be the base set of functionality upon which the product roadmap were assessed and determined for years to come ... to veer from the base or not to veer from the base? In fact, the user story collection defines the base from which the system evolves.
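
As a sketch of that reuse, the fragment below composes a weighted transaction mix from the same scenario names that back the functional suite. The scenarios, actors, and weights are invented for illustration; an actual load driver would call the functional scripts themselves rather than merely tally them:

    import random

    # The same scenarios that back the functional suite, reused as the baseline
    # performance mix. Scenario names, actors, and weights are invented here.
    TRANSACTION_MIX = [
        # (user story / acceptance-criteria scenario, actor, share of traffic)
        ("withdraw cash within balance", "account holder", 0.50),
        ("withdraw cash beyond balance", "account holder", 0.10),
        ("review recent transactions",   "account holder", 0.30),
        ("approve a flagged withdrawal", "branch manager", 0.10),
    ]


    def pick_scenario(mix=TRANSACTION_MIX):
        """Choose the next scenario to drive, weighted by its share of the transaction mix."""
        names = [name for name, _actor, _weight in mix]
        weights = [weight for _name, _actor, weight in mix]
        return random.choices(names, weights=weights, k=1)[0]


    def simulate_load(virtual_users=100, requests_per_user=5):
        """Tally which scenarios a small simulated load would exercise."""
        counts = {}
        for _ in range(virtual_users * requests_per_user):
            scenario = pick_scenario()
            counts[scenario] = counts.get(scenario, 0) + 1
        return counts


    if __name__ == "__main__":
        for scenario, count in sorted(simulate_load().items()):
            print(f"{scenario}: {count}")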

What if a developer knew how to do most or all of this? What if ... we needed only a handful of people who could evolve the software purely based upon a user story, a descriptor, some acceptance tests, a continuous integration build routine which called the automated suite ... and the ice cream? What if we delivered every two weeks?

What if there were no testers, business analysts, or requirements engineers? And the requirements were the tests, and the tests preceded the code, and the code stopped growing when the tests passed?

What if a developer was someone who evolved a user story until a customer smiled? Where are the limits?

Compressing Software Lifecycles by Eliminating Lines of War

As we've experienced for years, software development lifecycles get chunked up into phases, particular groups of people get associated with each phase, entry and exit criteria end up attached to each hand-off in between ... and we now have a framework we've created in the name of quality. Ironically, the framework (ordinarily accompanied by required templates and tools) really only protects us from each other and ourselves; it doesn't demonstrably protect the customer or even serve the customer.

After all, what did the customer actually ask for but a deliverable, on a timeframe, for some specified amount of money?

What I see time and time again are "delivery teams" or "projects" composed of departments created for the purposes of career paths and salary trees, human resource considerations, and division of labor. Sometimes said resources are "matrixed" out to cross-functional teams, but there is a conflict for the resource in question as to "who is truly my boss?" I'm sure there are more reasons for this out there, and these are logical structures deduced from the need to organize people within a company. Unfortunately, some people tend to confuse the resource management structure within a company with the team structure actually needed to deliver something to a customer that they wanted, when they wanted it, at some specified or important cost. They are not the same.

Due to this confusion, the lines between resource management groups often become lines of war. When hand-offs are expected to happen between departments, expect turmoil. We're a team until some department doesn't deliver, and then it becomes "the so-and-so group did not deliver to us and we will now be late."

Here are three things I've personally done to help eliminate silos within groups responsible for delivering:

One: Eliminate the black box testing phase

I support Agile frameworks, which I'll explore in another post. In short, rather than having people do whatever work it is that they do ... and then eventually hand off to a seemingly ostracized test group "over there", eliminate the group.

Specifically,

  • Take the test group that sits in another part of the building, split them up, and sit them right in the middle of the development group, organized into pairs or iteration teams
  • Make the ordinarily black box or functional/regression tester part of the creation process rather than someone who validates that a creation process occurred
  • Expect the tester to write automated tests, based upon prioritized functional threads, that can be attached to the continuous integration build routine called on a nightly basis

In other words, figure out how to test during the development process rather than afterwards. As new builds come off the line each night (or hourly or whatever it is you do), have the testers reset their environments and go at it again. Clearly many traditional test groups are not comfortable moving at this pace as they prefer dedicated time on one load before moving to the next. Let the cadence be set by continuous integration builds and reset all other behaviors to synchronize with it.

Two: Eliminate the business analysis/requirements phase

Nothing new here for some. Consider that many times business analysts or requirements engineers are "sent in" to "go get" the requirements. This is often considered the precursor activity to anything else downstream.

Some of the challenges with this logic include:

  • The requirements/business analyst must interpret the need, document the need, and translate the need for the people who actually do the code construction (i.e. developers) and the testing somewhere else in the journey
  • Most other team members wait until someone tells them the requirements journey is "over" and they may now proceed
  • Requirements are often documented in a way that makes sense only to the person eliciting and documenting the requirement

Get rid of this experience altogether and instead consider the following four changes:

  • Have a developer with social skills interface with the customer and elicit the requirements in the form of user stories, descriptors and acceptance criteria
  • Include a technical tester to help document the conversation, summarize the results, and ensure that the descriptors make sense and the acceptance criteria are in fact automatable and testable
  • If possible, consider including your trainer/documentation/e-help person in this environment as an observer so that they begin to understand how the system will be organized and used, what is most important versus less important, testability, etc.
  • Deliver working tested software representative of N user stories every two weeks

Only include people who will actually do work. Eliminate translators in the middle. This action reduces cycle time, defect insertion, and the number of people necessary to understand and deliver the system. If a team is expected to deliver working tested software in two-week chunks, it will influence how much time is spent creating documentation versus actual software.

Three: Eliminate titles

Titles often box people into specific behaviors. "I am a requirements or business analyst". "I am a test analyst". "I am...".

Here's what the customer cares about ... working, useful, value-add software, not my title. Unfortunately, in many company systems, titles equate to salaries and rank. A miserable side effect is the "that's not my job"-isms stimulated by the territorial boundaries.

For the purposes of human resources and getting paid, titles must allegedly exist - though I continue to explore ways to make job titles less meaningful. For the purposes of creating a true team that trusts each other and is willing to do whatever is necessary, regardless of the role or task, to deliver working software every two weeks ... eliminate titles. Rather, have roles.

A team composed of a developer role, sys admin role, test role, and dba role is equally responsible for delivering N working user stories at the end of a two-week sprint.

Along the way, the encouraged behavior is that:

  • anyone can code if and when necessary
  • anyone can test if and when necessary
  • anyone can dba and sys admin when necessary

The only limits on such a team are attitude and aptitude. Yes, there are declared roles whereby one person may be expected to fulfill one or two roles; however, if time permits or need merits, the team helps the team help the customer. If we see one person watching while someone else is struggling, we have a broken team. N people on a team. X roles required to deliver Y user stories in two weeks.

  • Title-less teams require people to evaluate their own skills and upgrade or evolve them, or move on
  • Title-less individuals require teams to evaluate what they are capable of delivering in two weeks by considering everyone's ability, not just their own
  • Title-less teams force people to focus on work not done versus who's not doing work

There are many other changes available to stimulate a shift in how software is delivered in our organizations. The aforementioned three elements:

  • Eliminate the black box testing phase
  • Eliminate the business analysis/requirements phase
  • Eliminate titles