Preventing Fires

Firefighters are a central and stalwart component of American society. We primarily associate firefighters with putting out fires and saving our lives and property. When we look at firefighters we think of historical events such as September 11, 2001 and the Great Chicago Fire of 1871, and of attributes such as ladder trucks, helmets, air tanks, axes, firehoses, firehouses, Dalmatians and perhaps even parades. We look at firefighters as people who save us from calamity. Given our esteem for firefighters and the profession, is it a surprise that firefighters do not really want to put out fires or save dogs, the elderly and children? Firefighters want to prevent fires, explosions and calamities from ever occurring so that people and property do not need to be put at risk, let alone saved.

Did you know that "Each year, more than 4,500 Americans die and more than 30,000 are injured in fires" (Hopkins, 2008)?

The weighted average property loss per fire in the United States is estimated at $7,957.49, with Michigan reporting the maximum at $37,306.00 and New Mexico the minimum at $851.00 (Statemaster.com, 2008). In 2006 alone, property losses due to fire were estimated at $11.3 billion (Security, Sales & Integration, 2008).

And did you know that "Each year in the United States and its protectorates, approximately 100 firefighters are killed while on duty and tens of thousands are injured" (TriData Corporation, 2002, p. 1)?


Could we possibly argue anything other than that preventing fires is a far cheaper and safer activity than managing them in arrears? Firefighters want to prevent fires, not fight them. Unfortunately, because of accidental and deliberate fires throughout the United States, firefighters are forced to manage fires after they happen, when doing so costs more time, money and health, and most critically, the lives of those in and around the fires. It is a fact. It is an unfortunate fact. Prevention is inarguably the better solution.

Software systems used in life-critical implementations such as cellular communication infrastructures, nuclear facilities, air, sea and land transportation, medical equipment and military operations require extensive preventive measures to ensure availability, reliability, accuracy and scalability (as was particularly evident in the cellular call volumes of firefighters, police and infrastructure teams during the events of 9/11). Merely being equipped to deal with system fault and failure in arrears is analogous to accepting the loss of health and life to fires. Software system fault and failure cannot and should not happen, and therefore must be pre-emptively prevented through training, design, test-driven development, continuous integration and continuous testing. Prevention is, again, cheaper on multiple levels than detecting and/or managing in arrears.

And what about software systems that are not necessarily life-critical, but critical to daily business operation? Is the need for proactive prevention any less important? There is a cost to failure in terms of time and money, though in many businesses it is documented in no form other than system downtime. Yes, we have cost-per-defect research from Barry Boehm in the 1980s, from IBM in sixteen reports from the 1990s, and from NIST in 2002 -- but this is not common knowledge for most business leaders. The easiest way to handle system downtime is to minimize its probability through contextual countermeasures such as training, design, test-driven development (TDD), continuous integration (CI) and continuous testing (CT). Because people tend to view software as a somewhat innocuous "thing" one can acquire for $N00 at the store or through $00/hour of labor, they are comfortable cutting corners to get software to market, believing that a lower cost of acquisition (the cost to get it there) will guarantee a lower cost of ownership and a quicker return on investment. If the software equivalent of a fire is system downtime (unavailability, unreliability, lack of scalability, etc.), then the software equivalent of fire prevention is training, TDD, CI and CT.
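As a minimal sketch of what test-first prevention looks like in practice (the `apply_discount` function and its rules are hypothetical, invented for illustration), the checks below are written alongside the code and would run on every CI build, catching the defect before it ever reaches production:

```python
# Hypothetical example of test-first prevention: the checks exist
# before the feature ships, so a defect fails the build rather than
# causing downtime in production.

def apply_discount(price, percent):
    """Return price reduced by percent; reject out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Checks run automatically on every continuous-integration build:
assert apply_discount(100.0, 10) == 90.0   # normal case
assert apply_discount(100.0, 0) == 100.0   # boundary case
try:
    apply_discount(100.0, 150)             # invalid input must fail fast
except ValueError:
    pass
else:
    raise AssertionError("out-of-range percent was not rejected")
```

The point is not the arithmetic; it is that the failure mode was considered, and guarded against, before anyone could be burned by it.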

Few if any would willingly hook themselves up to a life-support machine managing their heart if corners had been cut in its construction; this is why we have safety regulations. Similarly, Chicago considers the increased cost of housing acquisition that comes from requiring electrical conduit throughout an entire building (so that no wiring is exposed) to be quite conscionable, because the cost of failure far outweighs the cost of prevention.

Absent distinct legislation or regulation, the balance between spending on prevention and detection versus spending on fire management in arrears is subjective to the decision-maker in question, seemingly making it an art. Since art is ordinarily a matter of personal taste -- I may like the works of Paul Gauguin while you like those of Salvador Dali -- it seems we are left with a Robert Frost-like conflict: when is prevention a requirement, and when is the expense of firefighting in arrears acceptable? It is a safe bet that firefighters will choose prevention every time, because they understand first-hand the cost of failure. Is it then fair to assert that those who do not spend time on prevention do not really understand the cost of failure?

________________________________

Hopkins, Gary. "Fire Safety: Activities to Spark Learning!", Education World, October 1, 2008. Accessed November 18, 2008.

Security, Sales & Integration. "Fire Statistics: Installations, Profits & Damage", 2008. (Quoting the National Fire Protection Association (NFPA).)

TriData Corporation. Firefighter Fatality Retrospective Study, Arlington, Virginia, April 2002, p. 1.

Planned v Actual Deltas

I'm not a fan of PMI or the PMBOK. I believe the fundamental premises behind PMI and the PMBOK are honorable and correct in terms of work breakdown structures, dependencies, planned versus actuals, critical paths, risk mitigation and issue resolution. The problem I have is with people who, having no experience in a particular industry, show up and apply what they believe to be universal laws of work, team organization, management and delivery without regard for the context that makes one industry unique from another. For example, urban and regional planning requires different behaviors and decisions than moving software to market, or even managing manufacturing assembly lines and production flow through a plant. Context-driven decisioning is imperative. However, while there are some elements that are the same regardless of industry and product, there is one in particular that I believe to be so important it transcends industry, product, practice, team and culture -- planned versus actual tracking and reporting.

What did you say you would do, what actually happened, and what is the delta?

Assembly line #12 is configured to construct one sub-assembly -- 50 units per hour, 24 hours per day. The math used to arrive at this goal, and then used as a baseline, is built upon a series of assumptions and dependencies including sub-assembly availability, line staff availability across three shifts, power, mechanical availability, and service frequency and responsiveness. Once we have decided what is needed to run the line and what the run will produce, we have a plan. What happens across the 24-hour day, and within each hour of it, is the actual. If the delta is greater than the planned and acceptable deviation percentage, there is a gap that impacts the entire supply chain and everything that ripples thereafter.
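The hourly delta check described above can be sketched in a few lines (the tolerance percentage and the hourly figures here are illustrative assumptions, not real plant data):

```python
# Sketch of planned-versus-actual tracking for the assembly line:
# compare each hour's actual output to the plan and flag any hour
# whose deviation exceeds the acceptable percentage.

PLANNED_UNITS_PER_HOUR = 50
ACCEPTABLE_DEVIATION_PCT = 5.0  # assumed tolerance, for illustration

def delta_pct(planned, actual):
    """Signed deviation of actual from planned, as a percentage."""
    return (actual - planned) / planned * 100

def out_of_tolerance(planned, actual, tolerance_pct):
    """True when the absolute deviation exceeds the tolerance."""
    return abs(delta_pct(planned, actual)) > tolerance_pct

# Hourly actuals for part of a 24-hour run (hypothetical):
actuals = [50, 49, 51, 44, 50]
gaps = [a for a in actuals
        if out_of_tolerance(PLANNED_UNITS_PER_HOUR, a,
                            ACCEPTABLE_DEVIATION_PCT)]
# The hour that produced 44 units is a -12% delta -- a gap that
# ripples through the supply chain and demands attention.
```

Nothing here is sophisticated; the value is that the plan, the actuals, and the acceptable deviation are all explicit, so the gap is visible the hour it appears.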

Software product #13 is expected to deliver 5 widgets in AB time at XY cost. The math used here is similar to that of Assembly line #12 in that there are presumptive dependencies to even start, let alone continue and ultimately deliver. There is technical and business design (the plan) and then delivery (the actual), where the delta is measured by business stakeholders, but primarily by customers. Above this, there is the overall project plan and actual with an expected and manageable deviation; deviation greater than planned is a gap, and therefore unacceptable and requiring attention.

Both scenarios -- and there are surely more examples in more industries, including financials, urban and regional planning, transportation logistics, etc. -- suggest the pertinence of knowing the plan versus the actual in order to manage the gap. No plan means no gap acknowledgment, making actuals less relevant and meaningful. No documented actuals means deviation from plan goes unrecognized until funding is depleted. Sound familiar in the 2008 US financial market? We have a litany of Wall Street examples teaching us this concept very well right now. And even if there is a plan, an actual and an acceptable deviation margin, there must surely be a calculated probability suggesting an ability to deliver within that margin.

If there is one and only one concept from PMI and the PMBOK for anyone, educated or otherwise, to take away and use for life in any industry and in any situation, I argue it is understanding plan versus actual versus acceptable deviation, and the probability of delivery within this framework. Ironically, in order to understand and apply these concepts, one additionally needs to understand work decomposition (aka the work-breakdown structure), time and dependencies, staffing, critical pathing and cost.
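One crude but honest way to get at that last piece -- the probability of delivering within margin -- is to look at your own history of deltas. A minimal sketch, using entirely hypothetical historical data:

```python
# Sketch: estimate the probability of delivering within the acceptable
# deviation margin from historical plan-versus-actual deltas.
# The delta history and margin below are hypothetical.

historical_delta_pcts = [2.0, -4.0, 6.0, 1.0, -3.0, 9.0, 0.5, -2.5]
ACCEPTABLE_MARGIN_PCT = 5.0

within = [d for d in historical_delta_pcts
          if abs(d) <= ACCEPTABLE_MARGIN_PCT]
probability = len(within) / len(historical_delta_pcts)

# 6 of the 8 historical deltas fall within the +/-5% margin,
# so the empirical probability of delivering within margin is 0.75.
print(f"P(delivery within margin) = {probability:.2f}")
```

A team with no recorded actuals cannot run even this simple calculation -- which is precisely the earlier point about deviation going unrecognized until the funding is gone.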

It seems the fundamental concepts of the PMI PMBOK just transcended industry, service and product. Now if we can just get the majority to understand context-driven application.