Paying in Advance

Insurance companies tend to pay for things only after they have already become a problem.

Of course, there is no guarantee they will pay even then. They reserve the right to say no. So you could discover a problem only after it has already caused damage, and then be declined coverage because it doesn't fall within the policy's conditions, even though it originally appeared to be covered.

The only thing insurance companies guarantee is a required monthly invoice. They send it, the customer pays it.

As it relates to insurance, the customer must pay no matter what. There is no guarantee that anything ever paid will actually benefit the customer. Money in; no guarantee of ROI for the customer.

Software testing at the end of a delivery pipeline plays exactly the same game.

There is no guarantee that end-of-pipeline testers will find anything useful. In fact, even if the end-of-pipeline test team communicates "this is what we test" and "this is what we do not test", there is no useful guarantee that either will happen as stated. To test does not mean to find. And to find does not imply useful discovery.

The only thing end of lifecycle test teams guarantee is a required monthly or bi-monthly invoice. They exist, the employer pays.

As it relates to end of lifecycle test teams and efforts, the employer must pay no matter what. There is no guarantee that anything ever paid will actually benefit the employer. Money in; no guarantee of ROI for the employer.

Both ideas are risk-mitigation tactics or "just in case" choices. Wouldn't it be great if there were things we could do to prevent, mitigate or otherwise minimize the probability of downstream issues?

This isn't a dialogue on proper health habits. Although, like all of life, daily healthy choices affect software engineering and leadership effectiveness.

So how do we change the cost and risk model of paying for something at the end of a lifecycle that may or may not provide timely value?

For some people this is known. For others, this is brand new knowledge. Start with the basics and evolve from there.

Like proper diet and exercise behaviours, it requires context-driven balance, exploration and a committed cadence. And yes, there's more to do after that. However, if you're trying to understand why you're paying all this money and getting nothing discernibly useful out of it beyond "risk mitigation", stuff that might possibly save you someday, consider changing the model. Change your cost model by changing your risk-exposure model. Prevent it now; find it now.

After all, who likes the idea of paying all that money monthly and finding out, when you needed it, you weren't covered?

If you don't understand these fundamentals, advanced topics like evolutionary design will evade you.

2 comments:

  1. Although I fully support the idea of Test-Driven Development, which involves small unit tests, refactoring and the like right through the development process, there is no substitute for testing a system end-to-end upon dev completion. A software system cannot be considered merely the sum of its parts.
    Geoffrey Sexon

  2. I understand where you're coming from on this and don't seek the complete elimination of things like exploratory testing in particular. However, I do think the cost model of testing is abused and misused all too often as the 'safety' net when a more effective method of upstream TDD exists in the form of acceptance testing.

    Acceptance testing, in the form of FitNesse, Cucumber and the like, takes the downstream requirements and acceptance-based testable statements and automates them during the TDD developer experience. In this case, most of the things a black-box tester would have tested at the end are now tested at the beginning, during the continuous delivery process.
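    A minimal sketch of that idea, using a hypothetical discount requirement and plain Python standing in for a FitNesse or Cucumber fixture (the requirement, function name and numbers are all invented for illustration):

    ```python
    # Hypothetical acceptance criterion, written upstream as an executable test:
    # "Orders of $100 or more get a 10% discount."
    # In practice this would live in a FitNesse table or a Cucumber scenario;
    # plain asserts keep the sketch self-contained.

    def apply_discount(order_total):
        """Production code, written to satisfy the acceptance checks below."""
        if order_total >= 100:
            return round(order_total * 0.90, 2)
        return order_total

    # Acceptance-style checks the whole team can read and run in the pipeline:
    assert apply_discount(100) == 90.0     # at the threshold, discount applies
    assert apply_discount(250) == 225.0    # above the threshold
    assert apply_discount(99.99) == 99.99  # below the threshold, no discount

    print("all acceptance checks pass")
    ```

    The point isn't the tool; it's that the statement a black-box tester would have verified at the end now fails the build at the beginning if it stops being true.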

    The end to end testing you mention can be addressed during the developer continuous cycles by bringing the reqs, tests and SMEs upstream. By the time we include people downstream to look it over, there should be nothing to do but a) explore for oddities and b) validate what we already know from the automated continuous deployment pipeline.

    Thoughts?
