Simplifying Test Construction

Historically, software delivery teams, projects and companies have separated out different types of testing based upon an activity or role label rather than the contextual behavior of evolving software. For example, unit testing must surely belong to a developer, while regression testing must of course belong to a non-technical black box tester. In some environments, performance testing is performed by technical or semi-technical non-developer testers using COTS solutions, while in other environments the test suite is built from the command line up by compiler-level developers in clean room software engineering environments. Making a context-driven decision about who should be doing which forms of testing, and in what manner, is not inappropriate. But separating out forms of testing to different people or phases by default is unnecessarily complex, accidentally redundant, and in some cases just plain inefficient.

  • What if we began our software evolution by articulating all requirements of the system in simple statements like - "A user can..."?
  • What if we created "A user can..." statements for each role in the system?
  • What if, for each and every "user can" statement, we had an associated description just below it that briefly told us a little about what this thing is that a user can do?
  • What if, for each and every "user can" statement, we had a short list of testable statements we called acceptance criteria that helped us understand how a customer would measure "done-ness" of the software?
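For illustration only, a hypothetical story in this shape might read:

    A user can log in to the system.
    Description: a registered user supplies an email address and password to reach their personal dashboard.
    Acceptance criteria:
      - Valid credentials open the dashboard.
      - Invalid credentials display an error and do not grant access.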

Sound interesting? Read Mike Cohn's User Stories Applied: For Agile Software Development to get a clear understanding of how to leverage user stories, why, and what a successful project looks like under this approach.

  • What if each user story represented one batch of automated test scripts?
  • What if each batch of automated test scripts included at least two individual automated tests PER acceptance criteria statement associated with a user story? For example, if a user story had two acceptance criteria, there would be one positive and one negative test PER criterion, thereby suggesting at least four test scenarios per user story (a sketch follows this list). Add path-based variation and boundary value scenarios and the set of scripts explodes.
  • What if each script written were its own encapsulated test such that it could be written once and called with any frequency, in any order?
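To make the "two tests per acceptance criterion" idea concrete, here is a minimal xUnit-style sketch in JUnit against the hypothetical login story above. The LoginService class is a stand-in stubbed inline so the example is self-contained; it is not anyone's actual implementation. Each test is independent, so it can be called with any frequency and in any order.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    public class UserCanLogInTest {

        // Hypothetical system under test, stubbed inline so the sketch compiles on its own.
        static class LoginService {
            boolean logIn(String email, String password) {
                return "user@example.com".equals(email) && "secret".equals(password);
            }
        }

        // Acceptance criterion: valid credentials open the dashboard (positive test).
        @Test
        public void validCredentialsGrantAccess() {
            assertTrue(new LoginService().logIn("user@example.com", "secret"));
        }

        // Acceptance criterion: invalid credentials are rejected (negative test).
        @Test
        public void invalidCredentialsAreRejected() {
            assertFalse(new LoginService().logIn("user@example.com", "wrong-password"));
        }
    }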

Take test driven development concepts, combine them with Cohn's testable acceptance criteria statements, throw in some popular open source tools like FitNesse or xUnit, and we have an evolutionary landscape that changes the role of testing and the roles of developers and testers alike. In my own experience, this is not the work of a career tester, but rather of a small collection of developers who genuinely care about evolving customer-facing software in a matter of days or weeks.
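If FitNesse is the tool of choice, an acceptance criterion typically becomes a wiki table backed by a small fixture class. A minimal sketch, roughly mirroring the classic ColumnFixture pattern from the Fit documentation (the class and field names are illustrative, not from any real project):

    import fit.ColumnFixture;

    // Minimal fixture backing a wiki table of numerator / denominator / quotient() rows.
    public class Division extends ColumnFixture {
        public double numerator;
        public double denominator;

        public double quotient() {
            return numerator / denominator;
        }
    }

Each row of the corresponding wiki table supplies the inputs and the expected result, so the acceptance criterion reads as a table a customer can review while still executing as a test.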

  • What if the build routine polled for new code every 15 minutes to see if it was time to build a new package?
  • What if every time it polled and found new code, it built the package and reported the results for public consumption?
  • What if every time the build completed and was considered "good", it called the test harness, which in turn pulled all live tests associated with code-level unit tests as well as functional-level user stories, i.e. acceptance criteria?
  • What if every time said harness was called and the associated scripts were executed, all results were posted yet again for public consumption?
  • What if the definition of a good build was not only that it built ... but that all tests called during the build routine executed and, in particular, passed?
  • What if the definition of "good" was a clean build with no failed tests?
  • What if this routine ran every 15 minutes, 24x7x365, or at the very least daily? (A conceptual sketch follows this list.)
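None of this requires exotic tooling. As a conceptual sketch only, where the SourceRepository, BuildTool, and TestHarness interfaces are hypothetical stand-ins for whatever SCM, build, and test tools a team actually uses, the whole routine is little more than a polling loop:

    public class ContinuousBuildLoop {

        // Hypothetical stand-ins for the team's actual SCM, build, and test tooling.
        interface SourceRepository { boolean hasNewCode(); }
        interface BuildTool { boolean build(); }           // compile and package
        interface TestHarness { boolean runAllTests(); }   // unit + acceptance suites

        private final SourceRepository repo;
        private final BuildTool builder;
        private final TestHarness tests;

        public ContinuousBuildLoop(SourceRepository repo, BuildTool builder, TestHarness tests) {
            this.repo = repo;
            this.builder = builder;
            this.tests = tests;
        }

        public void run() throws InterruptedException {
            while (true) {
                if (repo.hasNewCode()) {
                    // "Good" means the package built AND every test called during the routine passed.
                    boolean good = builder.build() && tests.runAllTests();
                    publish(good);
                }
                Thread.sleep(15 * 60 * 1000);   // poll every 15 minutes
            }
        }

        private void publish(boolean good) {
            // Post the result somewhere visible to the whole team.
            System.out.println(good ? "GOOD BUILD: all tests passed" : "BROKEN BUILD: stop and fix");
        }
    }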

Consider looking through Martin Fowler's material on continuous integration to understand the value of continuous feedback loops in the build and test cycle. There is no longer a need to do all of the development and then hand it over to the test group. Make the test efforts part of the development and build routine. If you want to explore the mother of all software configuration management minds, look to Brad Appleton.

  • What if the same set of user stories and associated acceptance criteria which were used to create a functional test suite were also re-usable as the baseline performance test suite?
  • This would seem to suggest the scripts must then be written in a non-proprietary language, with an open-source tool that is technology agnostic. Remember: write it once, use it N times (a sketch of this reuse follows the list).
  • What if the transaction mixes were composed of various user story and acceptance criteria combinations?
  • What if the user mixes were composed of the various actors identified during user story construction?
  • What if the same set of user stories and acceptance criteria were to be considered the fundamental competitive function points of the system such that marketing materials and sales programs could be built upon this knowledge?
  • What if the same set of user stories and acceptance criteria were to be looked upon as the base table of contents for a modular XML based e-help solution? In other words, the user stories themselves point to the system decomposition ... which then elucidates the table of contents for the e-help and training approach.
  • What if this same set of user stories were considered to be the base set of functionality upon which the product roadmap were assessed and determined for years to come ... to veer from the base or not to veer from the base? In fact, the user story collection defines the base from which the system evolves.
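As a rough sketch of the "write it once, use it N times" idea, the same encapsulated acceptance scenarios can be replayed concurrently to form a user or transaction mix; the AcceptanceScenario interface below is hypothetical, standing in for any order-independent test script already written for the functional suite.

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class TransactionMix {

        // Hypothetical stand-in for any encapsulated, order-independent test script.
        interface AcceptanceScenario {
            void run();
        }

        // Replays each scenario 'iterations' times across 'virtualUsers' threads,
        // reusing the functional scripts as a baseline performance mix.
        public static void replay(List<AcceptanceScenario> mix, int virtualUsers, int iterations)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
            for (int i = 0; i < iterations; i++) {
                for (AcceptanceScenario scenario : mix) {
                    pool.submit(scenario::run);
                }
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
        }
    }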

What if a developer knew how to do most or all of this? What if we needed only a handful of people who could evolve the software purely based upon a user story, a descriptor, some acceptance tests, a continuous integration build routine which called the automated suite ... and the ice cream? What if we delivered every two weeks?

What if there were no testers, business analysts or requirements engineers? And the requirements were the tests, and the tests preceded the code, and the code stopped growing when the tests passed?

What if a developer was someone who evolved a user story until a customer smiled? Where are the limits?
