Cost Justifying Good Ideas

I'm currently re-reading Mary and Tom Poppendieck's 2007 book, Implementing Lean Software Development: From Concept to Cash, and I find it as fascinating as if I were reading it for the first time - again. I think there are three fundamental and significant concepts that revolve around each other - intertwined, separate, but alike:

  • team structures should be of hybrid composition and small;
  • the cost of complexity can be managed through purposeful simplicity; and
  • cost justifying 'new' things/ideas should be a requirement, not a talking point.

Regarding team structures

The idea discussed around team structures challenges current organizational practices (prefaced by some very necessary conversation on 'project' versus 'product' funding models in organizations), suggesting they simply do not make sense. The book's assertion, more explicitly: build cross-functional teams, led by the business (not IT), that are responsible for defining the target, defending and justifying the value, and delivering and thereafter supporting the solution. Furthermore, change the team reward system to one based upon the profitability of the feature or delivery element in question, not something more superfluous such as a passing contract that was signed in the first quarter of some year or the fact that a team delivered something at all (Poppendieck, 2007, pp. 62-65).

Many times, organizations are structured around what makes the most sense for the management and HR staff, as well as the cost center logic used by Finance. In other words, people get organized based upon many different factors that make the most sense to those who are not actually doing or benefiting from the work; very few of these factors contribute any merit to delivering value to the business. If the purpose of all organizations is to deliver, why not structure them around delivery?

That stated, we already know from much of management theory and experience today that most people manage 8-12 direct reports well; with more direct reports than that, effectiveness degrades. We know from personal home experiences in the US that the more times cable Internet bandwidth is split in our neighborhoods due to concurrent usage, the less bandwidth there is for everyone. Likewise in software product development, we know that teams of 8-12 are just about the right size for synergistic teaming, effective communication, and competent value-driven delivery (this assumes a customer or customer proxy is a part of said team to provide iterative, value-based focus). As teams grow larger, it becomes more and more difficult to keep control, let alone lead and deliver effectively. Is it any wonder companies cancel project after project with 50, 100, 200 and 2,000 people on them that simply do not deliver?

Even the U.S. Military 'anonymously' asserted in a December 7, 2007 MSNBC article titled, "The Army's $200 billion makeover", that when a project grows so large, encompassing so much money and so many people in such a geographically distributed fashion, "you can't kill it" - suggesting it takes on a life of its own. The question thereafter demands an answer: "If a project is so big and distributed that it can't be easily killed, can it actually deliver something the customer wants at all, let alone within a reasonable cost of acquisition, ownership and return paradigm?" It seems we collectively have a hard time managing size, let alone scope and effectiveness. Mary and Tom Poppendieck very simply asserted that not only do we need smaller teams that answer directly to the business, but they should be cross-functional (hybrid) teams responsible for proposing and defending the value of an idea. Thereafter, said team should be responsible for defending and justifying the idea (feature) through the various stages of its life on into production implementation and evolution. And this constant justification of any idea from a value and financial perspective circles back to the argument the Poppendiecks make earlier in the book that most software companies should likely be funded and managed using product management paradigms, not project management ideologies.

When looking at the idea of small, hybrid teams, an example would be to construct each team to be self-encapsulated/solving by containing the following roles or something like them: {business proxy, developer, tester, database administrator, usability/human factors designer}. Could you break it down further? A business person, a developer and a tester can often make very significant progress as a team. The business person is the 'go-to', the developer is the 'do-er', and the tester is the 'validator' (often a developer or technical tester of some sort capable of development pairing, evaluating unit/functional tests, etc.). The idea of small, hybrid, self-encapsulated teams is covered quite well in nearly all Agile discussions and resources and is hard to miss if you're on the prowl for new methods/thoughts.

Make the teams small. Make the teams hybrid. Always include a business person inside the team.

Regarding the cost of complexity

The idea discussed around the cost of complexity is really best illustrated by a diagram contained in the book (which I do not currently have permission to re-illustrate here).

Textually, the cost of complexity is illustrated by a Y-axis labeled 'Cost' and an X-axis labeled 'Time'. There are two rays in the diagram, one labeled 'Essential Features' and the other 'Complexity'. The purpose of the diagram is to show the {Cost to Time} comparison of putting essential features into software versus falling prey to the requests to put complex and difficult features into software (which happens to many of us at some point or other). As you can perhaps visualize, the 'Essential Features' ray begins at approximately (X, Y) = (2, 0.5) and heads to the right toward infinity, containing just barely a rising arc. Conversely, the 'Complexity' ray also begins at approximately (2, 0.5), but inclines quite drastically while heading to the right toward infinity, suggesting that as time progresses, the complexity of doing, managing and supporting work - in addition to the pain, toil and labor of existence, and the search for resumes and jobs - all increase exponentially. The summary: complexity kills. So, "Write less code!" (Poppendieck, 2007, p. 67).
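To make the shape of those two rays concrete, here is a small numeric sketch. The exact curve formulas are my own illustrative assumptions, not taken from the book; the point is only that essential features carry a gently rising cost while complexity's cost compounds over time.

```python
# Illustrative model of the cost-of-complexity diagram:
# essential features rise roughly linearly with time,
# while accumulated complexity rises exponentially.

def essential_cost(t):
    """Gently rising cost curve for essential features (assumed linear)."""
    return 0.5 + 0.3 * t

def complexity_cost(t):
    """Steeply rising cost curve for accumulated complexity (assumed exponential)."""
    return 0.5 * 1.6 ** t

for t in range(0, 9, 2):
    print(f"t={t}: essential={essential_cost(t):5.1f}  complexity={complexity_cost(t):7.1f}")
```

Whatever the precise coefficients, the exponential term always overtakes the linear one, which is the diagram's whole argument for writing less code.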

Note special behaviors here ... if a musician believes that many notes, high technical difficulty and speed exemplify mastery of music and earn the utmost respect of other musicians, it is a mistaken belief. Otherwise, how could beautiful music ever be written without being mind-numbing to play and process? Likewise, if a writer believes that a message cannot be delivered other than through voluminous verbosity, it is again a mistaken belief; otherwise, how could children's books ever be written? And again, if there are technologists who believe that complexity and verbosity of design, code and test suggest supreme super-architect intellect, it is yet again a falsehood. Complexity in music prevents other musicians from enjoying the music through performance and lay people from actually understanding its contribution or value to the musical realm. Complexity in literature reduces readership and the application of what may be gold nuggets of wisdom due to the overbearing feel of the material. And lastly, complexity in code and technical solutions prevents other technologists from being a part of the team, and from supporting, refactoring and/or extending the solution on into the future.

Complexity is an enemy, not a shrine to intelligence.

Of course, writing less code, while a needed behavior, is not the only behavior required in any organization evolving a software product. As discussed throughout the book, as well as this entry, managing a company's predilection to keep adding new stuff is imperative as well. One without the other is quite obviously an unbalanced existence. How many companies out there actually have a stated corporate goal of being simple? Going forward, when you hear about this or that company being short on cash, performing lay-offs, firings or 'displacements', ask yourself ... "How much of their problem is due to unnecessary complexity?" Clearly more here to discuss.

Regarding cost justifying 'new' things

And very interestingly, the idea of cost justifying 'new' things quite healthily suggests that we should require those people petitioning, lobbying or otherwise demanding new things/features in our software products (particularly internal team members versus customers themselves) to do the math before, during and after the implementation of a new feature or widget. It seems like it should be an 'old news' type of snippet, but alas, companies still fail to require the proper math. Emotion and the silver tongue of Marcus Tullius Cicero present in someone's office unfortunately carry more weight than hard math quite often.

More explicitly, reward cross-functional (hybrid) teams for cost to value returned for each feature effort, not simply because product was delivered to production, but in an incremental assessment framework through time. Let's explore this a bit.

Let's assume we already have a hybrid/cross-functional team in place composed of a business person (actual customer desirably, but customer proxy if necessary), two developers who share the responsibility of coding and testing in a paired manner, and an implementation/sales support person. This particular team eventually comes to believe that the company must absolutely have Feature123 added to their software product in order to compete in the industry or all is lost - heavily implying that demos will not lead to sales, and/or long-term support costs will go up, implementation times will lengthen, and so on.

Step 1: This hybrid team defines an epic describing what Feature123 is, who cares and why it is important to the vendor and the customer base (in free-form, conversational text).

Step 2: Same hybrid team decomposes the epic down into N user stories so they can understand what it really means to do Feature123 by using statements such as, "As a user [type], I would like to perform [this action/activity] so that I can accomplish [this goal]".

Steps 3-99: The same team associates each user story with acceptance criteria that essentially define when something is 'done' from the perspective of a customer (hence the customer or customer proxy present on the team). User stories are prioritized and relatively sized; the highest-risk, highest-return story is chosen; tasks are constructed and effort hours associated ... until we have a work set.
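The story-selection step above can be sketched as follows. The story names, the risk and return scores, and the risk-times-return scoring rule are hypothetical illustrations of the idea, not a prescribed formula:

```python
# Sketch of choosing the highest-risk, highest-return story first.
# Scores are relative (e.g., planning-poker style), and the scoring
# rule (risk * return) is an illustrative assumption.

stories = [
    {"name": "Login audit",   "risk": 3, "return": 2, "size": 5},
    {"name": "Core workflow", "risk": 8, "return": 9, "size": 13},
    {"name": "Report export", "risk": 2, "return": 6, "size": 3},
]

def selection_score(story):
    """Higher risk combined with higher return means 'do this first'."""
    return story["risk"] * story["return"]

# Order the work set so the riskiest, most valuable story is tackled first.
work_set = sorted(stories, key=selection_score, reverse=True)
print([s["name"] for s in work_set])
```

Tackling the riskiest, highest-value story first means the team learns the most (and earns the most) as early as possible.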

The goal of the exercise is to understand what it will cost the company to implement one user story (cost of acquisition, COA), what it will cost to own it (cost of ownership, COO), and how soon the company will make money back on said user story (return on investment, ROI). In the Poppendieck model, it is not enough to have an idea ... the teams need to collectively understand what it is, what it will take to do it and own it, and when the company will get money back. And there is a critical element in here ... the comparison of project-based versus product-based funding.

In many environments today, companies seem to apply the idea of "project management" to everything that requires an effort, scope, budget, etc. Unfortunately in many of those situations, the request is to 'get it done' and funding is either ignored until the corporation is bleeding at a later date, or all funding for the project is provided up front thereby relieving everyone of the responsibility to be financially responsible. Ironically enough, when the project doesn't deliver and gets canceled and the company is out of money, people still get upset about not getting raises.

The need to cost-justify an effort works hand in hand with funding a product delivery stage by stage. At every stage of an effort, the sponsoring team with the idea should argue:

a) what it was projected to cost for user story 1
b) what it actually cost for user story 1
c) what monies, if any, have been made as a result of user story 1

d) what it will cost for user story 2
e) what it has cost to date for user story 2
f) what the projected ROI is for user story 2

and so on.
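The stage-by-stage accounting in (a) through (f) might be sketched like this. The story figures are hypothetical, and the report fields are simply one way to frame cost variance and ROI to date:

```python
# Sketch of the figures a hybrid team would defend at each funding gate:
# projected cost vs. actual cost, and return realized to date.
# All names and amounts are hypothetical.

stories = [
    {"name": "User story 1", "projected_cost": 12000, "actual_cost": 15000, "revenue": 9000},
    {"name": "User story 2", "projected_cost": 8000,  "actual_cost": 5000,  "revenue": 0},
]

def gate_report(story):
    """Return the figures to argue at a stage gate for one user story."""
    roi = (story["revenue"] - story["actual_cost"]) / story["actual_cost"]
    return {
        "name": story["name"],
        "cost_variance": story["actual_cost"] - story["projected_cost"],
        "roi_to_date": round(roi, 2),
    }

for s in stories:
    print(gate_report(s))
```

A negative ROI to date is not automatically a reason to cancel; the point is that the team must show the math at every gate rather than receive all funding up front.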

What different behavior would be exhibited by a corporation requiring gated or phase based product delivery accountability to finances - performed by teams (the money spenders)? And what more financially responsible decisioning and implementation may occur if teams were rewarded purely based upon the profitability of their implemented ideas?

We need to constantly strive for simplifying the idea, cost justifying the idea, using a hybrid team to birth, defend, implement, support and evolve the idea, and ensuring that the idea is considered a critical, foundational feature immediately contributing to the financial integrity of the product. And very importantly, we not only need to be simple, but to simplify every time we revisit the ideas in the future. Otherwise, we'll find our companies are more likely coding to the emotional whim of the few while increasing the complexity, technical debt, minimized margins, and future pain for us all.

We go fastest by being lean. And being lean, while being constantly pursued, is only attained by being the simplest in all possible forms.


Poppendieck, Mary and Tom Poppendieck. Implementing Lean Software Development: From Concept to Cash. Addison-Wesley Signature Series, September 2007. pp. 50-74.

Fall Scrum Gathering, London 2007 Reflections

Having recently completed a trip to London for the 2007 Fall Scrum Gathering, it is yet again made plain to me that, given the sheer size of the globe and the large number of software engineers/practitioners within it, it doesn't matter if you believe you're good at what you do, dress well, have a nice website or work for a major international company with a reputation that gets you in the door ... either you're individually perceived as effective by the customer or you're not - and this determines how long you stay.

I met wonderful people who had many excellent problems to solve such as managing scope through a single repository (or not); understanding how to manage people in the company who have opinions, but aren't solution providers; discerning how to implement Scrum concepts throughout the company rather than simply in development alone; ascertaining what metrics and if metrics are in fact helpful or hurtful; wondering if off- and on-shore teams are complex, practical or achievable...and so on.

One interesting conversation was composed of a collection of questions regarding how to ensure sprint planning meetings had what they needed in order to plan and execute sprints. To this, we talked about the value of having all requirements located in the same place or repository; thereafter having the corporation or appropriate stakeholders prioritize said repository on a very frequent (weekly) basis; and how it only makes sense to take the top 5 or 10 requirements from the list and turn them into executable statements by using user stories: {descriptions, acceptance criteria, relative sizes, and eventually tasks with 'ideal day' hours associated}. When technologists show up for a sprint planning meeting, the things they are equipped to assess, task out, and accept into the sprint are wholly dependent upon whether the user story statement, descriptor and acceptance criteria already exist. Otherwise, expecting technologists to take single statements and turn them into software systems is simply ludicrous. So, either the top 5 or 10 things on the prioritized list have been decomposed into user stories, or they haven't. If yes, they can be reviewed, decomposed, and most likely accepted; if no, skip them and move on until the next sprint planning meeting.
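That planning rule ('if yes, accept; if no, skip') can be sketched as a simple readiness filter. The field names and backlog items here are illustrative assumptions:

```python
# Sketch of a sprint-planning readiness check: items from the top of
# the prioritized backlog are plannable only if they already carry a
# user story statement and acceptance criteria; otherwise they are
# skipped until the next planning meeting.

backlog = [
    {"title": "Deposit via ATM", "story": "As a user, I want to deposit money.",
     "acceptance": ["balance reflects deposit"]},
    {"title": "Vague idea",      "story": None, "acceptance": []},
    {"title": "Print receipt",   "story": "As a user, I want a printed receipt.",
     "acceptance": ["option to print is offered"]},
]

def ready_for_sprint(item):
    """An item is plannable only when both story and acceptance criteria exist."""
    return bool(item["story"]) and bool(item["acceptance"])

# Take the top items off the prioritized list, skipping unready ones.
candidates = [item["title"] for item in backlog if ready_for_sprint(item)]
print(candidates)
```

Skipped items are not lost; they simply wait until someone decomposes them properly before the next planning meeting.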

Another interesting conversation regarded how to take customer requirements and break them down into parts and pieces effectively. To this we discussed how to teach others to think in user story statements, how to capture them, what they look like, and how one knows when a story has been delivered. Overall, the value of helping the customer 'fit in' to the process of delivering software requires us to work in language the customer uses naturally, every day. For example: "As a regular user of an ATM or Cash Point, I want to deposit money." Ordinary people speak like this, and defining 'done' can be had by using ordinary language as well in the form of acceptance criteria. For example: "The system should take my deposited monies from me"; "The balance should reflect my deposit"; and "I should have an option to print" are three examples of 'done definitions'. The key to capturing customer requirements frequently and often is to use the language of the user, not to force them into our language as technologists or solution providers. Very specifically, when a developer can see what functional use must exist, and everyone can discern how it should be tested ... we're on the right path of definition and implementation. This is a good place to perform further research by looking at Mike Cohn's books on user stories and estimation.
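The ATM example can be captured as a simple data structure: a user story in the customer's own language plus plain-language acceptance criteria that define 'done'. This is purely an illustrative sketch, not a prescribed format:

```python
# Sketch: a user story is 'done' only when every plain-language
# acceptance criterion has been verified.

from dataclasses import dataclass, field

@dataclass
class UserStory:
    statement: str
    acceptance_criteria: list = field(default_factory=list)
    verified: set = field(default_factory=set)

    def is_done(self):
        """'Done' means every acceptance criterion has been verified."""
        return set(self.acceptance_criteria) == self.verified

deposit = UserStory(
    statement="As a regular user of an ATM or Cash Point, I want to deposit money.",
    acceptance_criteria=[
        "The system should take my deposited monies from me",
        "The balance should reflect my deposit",
        "I should have an option to print",
    ],
)

# Verifying only one criterion is not enough to call the story done.
deposit.verified.add("The balance should reflect my deposit")
print(deposit.is_done())
```

Note that both the story statement and the 'done' checks stay in the customer's everyday language; only the wrapper around them is ours.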

And yet another interesting conversation regarded how to ensure requirements were in fact something that could be fulfilled by the company when contract expectations were being crafted during the sales process. To this we discussed the need for development to equip sales with enough knowledge to craft more intelligent contracts. Perhaps include an active technologist or two in the sales process for crafting the right expectations and the right contract language. Consider giving Sales 'canned' activity/estimate bundles so that when offering up 'X', they know where the cost floor is and whether they are cutting into margin or creating actual losses. Sales only knows what they are taught; and since development groups are often required to fulfill contracts crafted by Sales, it makes sense that Development should equip Sales to make more intelligent deals.

One of the side statements we discussed is ... if Sales is only measured on revenue acquired, but not measured on integrity or sanctity of sound financial contracts with guaranteed profit margins after calculating cost of acquisition, cost of ownership, return on investment and putting it in context of total cost of ownership, no process, behavior or person can fix this afterwards. In other words, if the company specifically teaches the sales group to 'get money no matter what', then the company is fostering bad behaviors, sometimes looked upon as 'wolf pack' behaviors, that other processes will not and cannot fix in arrears. A bad contract is a bad contract and cannot alone be measured by the cost of getting it (acquisition/installation/implementation) and cost of owning it (through maintenance and licensing alone). Either the behavior has to become more fiscally responsible by looking at total cost of ownership PRIOR TO contract signing, or making profits will be hit or miss going forward with no predictability.

Regardless of what country we originate from, we all struggle with multiple variations of the same problems: bad contracts, poor requirements, more than one repository, and knowing when 'done' is 'done'. Furthermore, the most common challenge we all share is being effective from the perspective of the customer ... either the customer is smiling, or the customer is wishing they had another option.

In every conversation I heard or participated in ... there was agreement by word and action in every case ... delivering solutions for customers is not a technical problem, but rather one of relationships and general sociology. We should never mistake the fact that we received our paycheck or paid invoice this week as validation that we are valuable - it just means we're still there. However, when we hear the customer say 'Thank you', compliment the effort, and most importantly - smile - we've been endorsed for just one more day. Tomorrow, we must start over and seek to add value that brings a smile to the face of our customer again.

Recognize Learning Styles or Fail to Communicate

How many snowflakes do you think fall in a cubic inch of snow? How many snowflakes would exist in a one-inch depth snow covering one geographical US state? Further then, how many snowflakes would exist in a one-inch depth snow covering the entire continental US? Do you realize that scientists suggest every snowflake that falls is unique from all others?

Now let's consider people.

Corporations, in order to manage employee records, ordinarily assign a unique employee ID to each and every employee whether past, present or future. People ordinarily receive their own unique phone number, computer with unique MAC addresses, and I.P. address as well. Though there may be a dress code, each person shows up dressed according to what they own, their budget, and personal style (or lack therein). If we put ten people in the room, whether from the same family or department, there will be like characteristics between all of them, but they will be primarily unique and different in most every way. Sounds like the diversity of snowflakes doesn't it?

Now let's consider processes within corporations.

Many times we see third-party 'process groups', or simply some third-party group with an opinion, identifying the method of doing some form of work. Kind of funny sometimes, amazingly frustrating other times, and every once in a while ... just plain ridiculous. I hold to the idea that no one knows how to do the work or improve the work better than the person doing the work. I may be the only person with this opinion, but oblige me for a moment. Is there only one way to make a sandwich? How about mowing the grass, birthing a child, edifying your team, or otherwise congratulating someone on a job well done? Interestingly, how many times do corporations buy a process framework and some associated tools, attempt implementing it 'as is', and then call everything that doesn't match the framework a 'deviation'? How do we know the process itself isn't ironically a deviation from what the true workers really needed to get the job done? Oh, the pain of it all. While it makes sense to have a baseline approach, eliminating or otherwise precluding people from evolving the process as needed is anticlimactic and horrible, horrible leadership. Have a baseline. Allow the opportunity to evolve the baseline in a context-driven manner. Truth be told by process zealots willing to be transparent, version 1.0 of any process framework is just the conversation starter, as the real process hasn't yet been discovered. Processes are ordinarily context-driven when successfully applied; inhibitors when not.

Snowflakes. People. Processes.

Given the unique diversity of so many things in life, is it any surprise that one method of doing anything simply doesn't work in all situations or for all people? Should we really be surprised that 4 year old children don't favor lecture? Should we be surprised that people who favor outdoor physical hobbies may hate sitting inside or in front of a computer, while indoor people may disfavor team-building field trips or physical activity? Have we noticed that some people respond to listening situations, while others favor reading, while yet others favor being given the problem and the opportunity to figure out a problem through trial and error? Did you ever notice in yourself whether you learn the most by watching an educational video, or listening to an audio message, reading the material, or simply jumping in and learning by doing in real-time? Did you notice if your team and company has a primary, secondary or mixed bag of understanding, learning and applying an idea?

So consider that when delivering a message, your success is defined not by your suit, haircut, freshly whitened teeth, or job title and role in the company...your success is wholly defined by the effortless ability of each individual on the receiving end of your message to repeat it in the hallway and operationally execute against it later in the day, week, month and quarter. To do this, your message must be crafted for the lowest common denominator on the receiving end of any message ever delivered in past or future history - the very unique, very important individual.

Artificial Urgency and Failed Leadership

If we have to create urgency in order for people to come together and solve a problem, then leadership has already failed.

When people do not know why what they do matters in the big picture, or they believe that no one knows they contribute, there is no inherent urgency in individuals, teams, projects or companies. No matter how hard one tries thereafter, instilling artificial urgency through other means is superficial at best, and contributes to a hollow victory because people merely followed orders to get a paycheck; they didn't team.

Want a focused team urgently pursuing goals? Lead them to take up the cause with urgency by giving them knowledge, context and clarity regarding where things are, where they need to be, why, and how each individual contributes to the system level goal. Want puppies? Give them doggie treats and put them back in the kennel; but certainly don't mistake yourself for a leader.

Urgency is not created. Urgency is a by-product of effective leadership.

5 Critical Factors for Creating and Fostering Urgency

  1. Take the time with care to teach, not tell, people what problem needs to be solved.

  2. Take the time with care to teach, not tell, people why this problem needs to be solved and how it is important to the company in the larger picture.

  3. Ensure you spend time helping people see and understand their personal value and contribution opportunity in relation to the problem and solution path.

  4. Lay out and explain a high-level plan showing people how you'd like to see them traverse the path in order to address the stated problem.

  5. Be an active, purposefully involved and ever-present leader. People look for someone who will provide clear goal setting, quick decisioning, a steady hand on the wheel, and the comfort that someone will look out for them and their contributions along the way. In other words, every time they look up from their heads-down work, they want to see the leader leading in front of them.

Still struggling with how to create urgency in your teams or company? Consider reading a book titled Good To Great, written by Jim Collins and published in 2001. And what is the premise of the book?

Companies evolve to greatness on purpose.

And how do they do this? Read the book. Summarily however, they are constantly evaluating, validating and changing their business plans, strategic and tactical goals to move where the ball is going to be through active, hands-on, from the front of the pack leadership. The leaders are active, purposeful, and leading from the front of the people. And what does it look like? Goals.

We're not discussing those storied, abstract and arbitrary goals sent down from the ivory tower up on the mountain that people don't really understand how to operationalize or what they really mean to them. We're discussing simple goals - goals so simple that if you're not a little bit embarrassed by them, they aren't simple enough. And when people intuitively understand them, why they care and how they matter, simple goals get operationalized. People are enabled and equipped to get excited about something they understand in terms of clarity and importance. People feel no sense of urgency about something to which they cannot relate.

And what next? Is it so simple that we just need a list of balanced scorecard-like goals in pretty templates and then our teams are hungry for success? No way. Writing up goals that intuitively make sense, give context and purpose, and help everyone understand why they matter and why these goals matter in a measurable, tangible sort of way is usually very hard and very time consuming...and that was the easy part. Now you need to become an evangelist regarding everything about these goals, their application, implementation, contextual importance and potential fruit of labor in terms of individual reward and corporate longevity and strength.

And what will being an evangelist take? Constant, hands-on-the-steering-wheel, active, purposeful, in-front-of-the-people, clear, decisive and swift leadership balanced with a healthy dose of listening to the people just as much, if not more, than talking. It will never suffice to distribute goals, state-of-the-union addresses, and blanket 'Thank-yous' and 'Good jobs' and call this leadership; those actions are nothing more than the administration required for you to be a good leader ... but they are not leadership.

Starting to make sense? If you want urgency on projects, teams and within individuals ... relate to them personally, help them relate to you personally, and provide tangible, clear, attainable, operational ready goals that help them understand why this is important and who they are in relation to why. Urgency is a by-product of leadership, relationship, knowledge and context. If this is too much work, then go get those doggie biscuits we discussed earlier and just pray those cute little puppies stay confined on your premises, never finding a crack in the gate to get loose and move on.

What the Agile Manifesto Looks Like When Implemented, Part II

Continued from Part I

If you have already looked at Part I of this soliloquy, we've traversed the primary idea of the Agile Manifesto which suggests that the journey of software development and delivery is
...constantly and purposefully evolutionary through teaming, pairing, mentoring and teaching by example - with no end in sight; in other words, we're never done and we're always changing and improving.
Let's take a look at the remaining components of the Manifesto and further explore what the Agile Manifesto looks like when implemented in your work environment.

Manifesto for Agile Software Development

"We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value items on the left more."

Let's look at the next elements of the Manifesto.

"Individuals and interactions over processes and tools."

Constant, collaborative, communicative interaction between team members is the goal. If you notice, this happens naturally every day in the line at the cafe, near the water cooler, in the parking lot, and before and after meetings. Most people are naturally social. This particular aspect of the Manifesto simply suggests that there is value in having a process and having a tool. However, there is more value in having people talk, develop relationships, be real, trust each other, and describe, postulate and solve problems by conversation and interaction. This is a very easy facet to see and understand in our work environments.

One symptom of imbalance: if people spend the majority of their time updating a tool, auditing against a process, discussing templates, or calling high-ceremony, high-attendance meetings ... the focus is most likely on the things that help us serve customers, not on customers, customer problems or teammates themselves.

Any bloke that works for a living will tell you that having the right tools will make the job easier; however, having the right tools does not a good deliverable make. You could give me all of the best, most expensive and high quality tools that exist to make a sidewalk ... though I haven't the faintest idea what to do with them. Liken this to chaps who spend exorbitant amounts of money on golf clubs, associated clothes and shoes to look crisp and impressive on the course, but haven't a clue regarding effective golfing. The tools may be helpful in the long-run, but are clearly not the priority as being a master craftsman precedes the value of master quality tools.

To serve a customer, create a relationship with a customer and get to know them, their environment and their problems by spending time with them, talking, and generally just being around. To lead a team and be part of a team, create a relationship with each individual, be around, talk and discuss things on and off-topic. What we always find is that the problems customers tell us to solve are usually different than the solutions they actually seek. If our focus is on the relationship, the true problem will reveal itself through time. If we focus on populating the tool, using the process, or delivering a template, we'll most often receive that for which we asked.

According to Jim Highsmith at the Agile 2007 conference, the average requirements document is approximately 17% complete, and usually around 8% correct. This suggests a number of things:

  • Customers may not always know what they want until they see what they don't want;
  • Trying to get all requirements defined up front is a waste of money and time; and
  • If the average requirements document is around 8% accurate, perhaps we should spend more time exploring problems and solutions directly with the customer and less time populating a document or tool.

"Working software over comprehensive documentation"

The previous conversation leads right into this one quite obviously. If we know that the average requirements document is around 8% accurate and that customers do not truly know what they want until they see what they don't want, then perhaps we need to deliver something right now so that we can get this process of discovery going at a quick clip. In other words, rather than taking weeks and months to create documentation about things the customer believes they want, take two weeks and deliver working, tested software that the customer can see, feel, touch and interact with in order to glean a reaction and potential project redirection. So the Manifesto clearly suggests there is value in some documentation; however, there is more value placed on working software that a customer can react to and modify than on a document.

What does this look like in a healthy environment? As a quick example, if you were to spend three hours with a customer helping them articulate their problems and business needs, and then have them prioritize the list from 1 to N ... you're already on the path of adding immediate value through working software and a working relationship. Then, evaluate the list and take off the top the one item that appears to offer the most business value and is technically achievable (managed risk) in a short period of time - deliver it in two weeks. In two weeks you will have helped a customer think through their top problems and needs and have shown them working software from which they will recalibrate themselves for the next sprint.
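The prioritize-then-select step described above can be sketched in a few lines. The backlog items, their priorities, and the `achievable` flag are invented for illustration; they are not drawn from the text.

```python
# Hypothetical sketch: pick the next sprint candidate from a customer-prioritized list.
# Each item carries the customer's priority (1 = most important) and the team's
# judgment of whether it is achievable within one two-week sprint (managed risk).
backlog = [
    {"item": "export monthly report", "priority": 1, "achievable": False},
    {"item": "login via email link",  "priority": 2, "achievable": True},
    {"item": "dashboard redesign",    "priority": 3, "achievable": True},
]

def next_sprint_candidate(items):
    """Return the highest-priority item the team believes it can deliver in one sprint."""
    candidates = [i for i in items if i["achievable"]]
    return min(candidates, key=lambda i: i["priority"]) if candidates else None

print(next_sprint_candidate(backlog)["item"])  # -> login via email link
```

The point of the sketch is the ordering of concerns: the customer owns the priority, the team owns the achievability judgment, and the intersection drives the next two weeks.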

If you were given the choice of the aforementioned experience, or waiting three months for a document that discusses a solution - which one would you choose?

"Customer collaboration over contract negotiation"

If serving a customer requires a partnership - nay, dare I say relationship - then spending time with a customer, talking and solving problems is the goal behavior. Ordinarily when we collaborate with someone, we're working hand-in-hand to think, talk, evolve, postulate, prove, disprove, and otherwise journey together from a mutual starting point. What this often looks like in collaborative relationships is two people standing shoulder to shoulder at a marker board drawing ideas, debating pros and cons, and otherwise sharing in peer level discourse. This type of relationship may grow through multiple meetings over tea, multiple phone conversations, or a constant exchange of ideas via marker boards, napkins or windows...sometimes planned, sometimes spontaneous. Whether walking the park, sitting beside each other or going video, the collaborative relationship is seemingly omnipresent, unbroken, and continual.

Inversely, many people seem to substitute a hierarchy of detailed contracts for the value of forging a relationship between customer and vendor. Some people choose to craft a contract of conditions, constraints, assumptions, risks, issues, hard dates, hard deliverables, hard costs, and service level agreements as if everything is known on day one with surety. Having a contract is of course legal and likely wise in most cases; however, when the contract defines the relationship to such an extent that it nearly represents an anti-relationship, we're killing collaboration through documentation. Couple this with Highsmith's earlier data suggesting the average requirements document is only about 17% complete and 8% correct ... by opting for a detailed contract over collaborative evolution with the customer, we gamble that we can land in that 8% at the moment we sign the contract, and we spend our time and money on the wrong activities rather than on delivering working, tested software in as short a period as two weeks.

"Responding to change over following a plan"

People are interesting creatures. If we provide a list of requirements to a contracted test group and ask them to automate said requirements as tests, they would. Many times, they would automate only those things requested of them or literally articulated in the requirements and nothing more. If the requirements were actually a user story list and associated acceptance criteria, the testers may not look for upper and lower bounds, combinations, negative tests, or even explore alternative paths in the system - they would automate what was provided and requested.

Similarly, again reflecting on Highsmith's 17% and 8% metrics, coupled with the fact that the customer will change their mind after they first see what they don't want, were we to create a several-hundred or even several-thousand line task-based project schedule that says "do this, then this, then this" and so on ... what do you think would happen? Of course, people tend to focus on "what's next" on the list of tasks rather than "what's next" on the path to meeting a particular customer need. When we create an elaborate, detailed, or otherwise time-investment-heavy plan, we tend to focus on using the plan rather than making context-driven decisions along the journey to our goal ... we tend to do what we are told (or what is suggested).

And if I'm given a framework within which to think and act, and along comes a change, my instinct is usually to protect the framework upon which I'm focused (in this case, it may be a project plan) from being compromised. Seems funny, but we see it time and again; people tend to work very hard to create project plans and really do not like to have them jumbled up along the way. We tend to like constants.

What does it look like in a healthy environment when we choose to respond to change versus follow a plan? Well, rather than preventing, discouraging or otherwise befuddling a customer's desire for change by using change prevention boards and processes, we actually invite change on a regular basis. We leverage the customer to identify everything they feel is important and we tell them what we believe is achievable in the next delivery sprint of perhaps two weeks. While we are delivering, we further invite the customer to not only add new items to the list, but to re-prioritize the list in preparation for the next sprint on the way in two weeks. At the end of the sprint when we demo our working, tested product to the customer, we invite defect observations, change observations, reactions, emotions and general feedback so that the customer tells us what they want changed ... and all this without pain, toil and penalty for the customer. Invite change frequently, often and on purpose.

"That is, while there is value in the items on the right, we value items on the left more."

Processes and tools, comprehensive documentation, contract negotiation and following a plan all have a place in serving a customer and delivering value-add solutions. However, when given the opportunity ... choose to spend time on individuals and their interactions over in-depth discussion of processes and tools; choose to deliver tested, working software in short bursts rather than lengthy documents discussing such a thing; choose to work with a customer to solve problems rather than creating contracts that define the relationship and solution; and choose to invite and respond to change as quickly, proactively, and often as possible rather than following a plan for the sake of honoring the plan.

Summary? Choose to practice being a person.

What the Agile Manifesto Looks Like When Implemented, Part I

Part II

As time goes by, many more people pick up Agile books, attend conferences, classes and user groups, and declare themselves to be "Agile" in some way, shape or form. A challenge lies in the fact that while many people state themselves to be practicing some form of Agile behavior, do they really know why they believe what they believe, or what it really looks like in action? Do all Agilists understand the evolution of the belief system they profess in public places, or have they simply gotten a free t-shirt at a conference and put on their belief system when getting dressed in that new shirt?

If by chance someone is starting anew on an Agile exploratory journey, that is of course outstanding and should be cultivated. And if someone has been on an Agile journey now for a number of years, it only means they are not new at it, but certainly not "done". Let's explore the Agile manifesto components, what they mean, and what it looks like in action when one sometimes refers to themselves as an "Agilist". In particular, let's look at the first sentence...

Manifesto for Agile Software Development

"We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value items on the left more."

Let's break it down.

"We are uncovering better ways of developing software by doing it and helping others do it."

This sentence suggests the journey is constantly and purposefully evolutionary through teaming, pairing, mentoring and teaching by example - with no end in sight; in other words, we're never done and we're always changing and improving.

What this looks like when done incorrectly in a workplace: As happens sometimes, were someone to document the development and delivery process at a company, put it up on the wall and suggest 'this is the process' as if the picture or document were an epilogue to a long, unwanted journey of pain ... it is often perceived (and sometimes intentionally communicated) as a declaration of law, complete and sound in usefulness as is. When we later discover the need for a process modification, the method of changing a process sometimes has its own process thereby making it difficult to veer from the base, and seemingly impossible to evolve it. A documented, baselined, published, measured, and audited process sometimes becomes the deliverable, the demi-god, and the purpose for employment. Unless one's company specifically designs and sells processes as a product, most of us are employed to deliver value to a customer, not a process map discussing the delivery of value.

Yes, there should always exist a pattern of behavior associated with delivering software, such as sprints, sprint planning and review meetings. No, it should not become so sacred that changing it violates normative laws of existence. Just as we invite the customer to change his or her mind in time for the next sprint, so should we be inviting improvement or optimization every sprint. Have a process; just do not craft it in granite and then worship it as changeless.

What this looks like when done correctly in a workplace: 'Constantly uncovering better ways of developing software' suggests constant observation, constant platform for expression of observations, and a belief that these communicated observations are actually heard, valued, supported, or otherwise encouraged to be implemented.

What this looks like in Toyota's workplace is the ability for any line worker to shut down any line, at any time. In other words, when they see a problem that needs to be improved or addressed they are expected to address it.

Underwater divers are taught something very similar in their early education when told that anyone can call off any dive at any time for any reason. In this case, when something is observed that needs changing, it usually means a life is potentially, or literally, at stake. If you find yourself embarrassed to bring attention to and enact change while diving, said timidity may well cost a life.

Ross Perot, founder of EDS, noted his favorite kind of employee as one who would see a problem, address a problem.

In our software development and delivery environments, this constant invitation for change takes on many faces - all may or may not be automated - though all require human intervention.

  • In one environment, change is invited by starting every sprint planning meeting with a ten minute retrospective simply asking, "What should we do differently this sprint?"
  • In another environment, change is invited by having a new scrum master for every new sprint such that all team members have opportunity to not only lead, but to understand how to follow.
  • In another environment, change is invited by having development staff be a part of demo and sales cycles for thirty day rotations so that fresh technical perspective is brought to the table to the benefit of sales, and first-hand sales process experience is brought back to the development teams for context.
  • In yet another environment, change is invited by having development staff be a part of product development/direction whereby feature requests are decomposed into themes, epics, chapters, stories and acceptance criteria in preparation for relative sizing and corporate prioritization meetings.
  • In yet another environment, change is invited by not only having sprint reviews at the end of every sprint requesting customer feedback, but including the customer in the journey of daily scrums, product backlog prioritization, feature to user story to acceptance criteria definition along the way.
And the latter part of the statement, "uncovering better ways ... by doing it and helping others do it", suggests that it is not enough to give someone a book, send them to a class, or have them listen to an .mp3 or watch an .mp4 for thirty-plus minutes. The optimum method of constantly discovering better ways is to do it together. What does this look like on a daily basis in our development environments?

  • In one environment it means physically co-located teams who are not only working on the same goals, but working on them together, at the same time ... eating, drinking, and breathing both the goals and the journey - together.
  • In another environment it means physically pairing two team mates together to work on the same piece of code, same acceptance criteria, task and/or story - discussing the structure and evolution of a test to prove yet unwritten code, followed by one coding while another contributes by looking for efficiencies, suggesting alternatives/options, and validating progress along the way.
  • In yet another environment it means on-demand peer-to-peer video capabilities such that there is a face connected to the conversation in real-time. The telephone works well, as does interactive chat; though there is additional value in seeing and experiencing the facial responses associated with speech, demeanor and intent.
  • And in another environment, a vendor may spend frequent time with a customer helping them to understand how to assess a feature request for business value in terms of net present value (NPV) versus simple return on investment (ROI), how to break it down into parts and pieces, and how to understand the development/delivery environment well enough to leverage strengths and conditions of the team and environment.
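The NPV-versus-ROI comparison mentioned in the last bullet can be made concrete with a short calculation. The cost, cash flows, and discount rate below are invented purely for illustration; the formulas themselves are the standard definitions.

```python
# Illustrative only: compare simple ROI with net present value (NPV) for a
# hypothetical feature costing $10,000 that returns $4,000/year for three years.
cost = 10_000.0
yearly_returns = [4_000.0, 4_000.0, 4_000.0]
discount_rate = 0.10  # assumed cost of capital

# Simple ROI ignores the time value of money.
simple_roi = (sum(yearly_returns) - cost) / cost

# NPV discounts each year's return back to today before netting against the cost.
npv = -cost + sum(r / (1 + discount_rate) ** t
                  for t, r in enumerate(yearly_returns, start=1))

print(f"simple ROI: {simple_roi:.0%}")  # prints: simple ROI: 20%
print(f"NPV at 10%: {npv:,.2f}")        # prints: NPV at 10%: -52.59
```

Notice the two measures disagree here: simple ROI says the feature earns 20%, while NPV says it slightly destroys value once the cost of capital is considered, which is exactly why helping a customer assess requests in NPV terms matters.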
There was a time when many people believed software development was a collection of nerd-like people, identified by unique sets of numbers as asset labels, with ne'er a developed social skill, sitting in cubes all alone without need for sunlight or inclusion in every day business. Today, we know this logic to be ridiculous and very simply - wrong. For every layer of separation between a developer and a customer, we increase the probability of delivering wrong solutions.

The very first sentence of the Agile Manifesto suggests that developing and delivering useful, working, tested software that reflects a direct, daily business need of a customer ... is social; and above all else ... moment by moment evolutionary between the developer and the customer.

How to Hold a Customer Hostage - Final Report

Purely fictitious for the fun of exploration, this submission explores what it looks like when vendors of a product and/or service effectively hold a customer in a form of stasis - never really delivering, never really finishing, and never really providing enough understanding for the customer to be happy, sad or equipped to decide to stay or leave - just unfinished, bleeding money. Because of the size, this post is broken into multiple parts.

Part I
Part II
Part III
Part IV

Continued from Part IV:


From: CTO, Senior Technologist Auditor, and Business Sponsor

This brief covers the recently canceled BioReCo project and identifies two primary elements:

a) What are the top 10 things we did not do correctly (observations)?
b) How will we make sure we do not repeat them (recommendations)?

It should be noted that we arrived at seven, rather than ten, primary items that we observed and for which we recommend attention. It is our belief that by addressing these seven elements, we will equip ourselves for greater success going forward.

Please address us directly with questions. Thank you.

Observation 1: The Project/Business Sponsor did not know how to manage the project
The Project Sponsor, while responsible for defining the business problem and managing project funding, did not understand how to appropriately manage the finer technical points of the project and therefore could not.

In particular, the Project Sponsor's strengths are to identify the business problem to be solved and then verify that deliverables not only meet expectations, but actually solve the business problem. Second to this, the strength and role of the Project Sponsor is, just like in normal daily business, to manage planned-to-actuals comparisons, validate forecasts, and agree to or decline funding for additional project work.

In this particular project, most of the work and staff were technological in nature and required a technologist, working on behalf of the business, to verify that solutions were necessary, appropriate, timely and complete. For this project, we believed the solutioning vendor would fulfill this role, acting on behalf of the business and helping the business solve problems. However, what we discovered was that the vendor did not fully understand the business problem; it did understand the technology, but the solution was never proven to be useful to the business.

Recommendation 1: Pair Business Sponsors with Technology Sponsors at project inception

For any and all projects in the future where there exists a business problem to be solved allegedly requiring a technology solution, assign a corporate-employed, executive-level technologist to the project, accountable for right technology and financial solutions in accordance with stated business problems.

In other words, when assigning a business sponsor, also assign a technology sponsor to be paired into the leadership structure. Furthermore, associate financial incentive for the technology sponsor to meet the business problems as defined by the business sponsor as quickly and in as economically responsible a manner as is reasonably possible without compromising stated business and customer quality expectations.

Always pair business and technology together at the leadership/sponsorship/problem definition level and lead the project from the 'paired' top down.

Observation 2: Project deliverables were not predictable, tangible, or comprehensible to all involved - in particular the Business Sponsor funding the project

Nearly all deliverables were stated to be important to the business, the business problem and the subsequent project. However, it seemed that only other technologists could actually appreciate, understand and actually use the deliverables of the project.

Furthermore, most of the project milestones were technical in nature and therefore under-appreciated by non-technical staff. In other words, when a technology team would be happy and communicate success about a particular milestone, suggesting the project was making great progress, non-technical team members, and especially the business sponsor, could not translate the value and therefore could not reasonably agree that the progress was useful.

Recommendation 2: Decompose the project and deliverables down into parts and pieces that are not only tangible and make sense from the Business Sponsor's point of view, but deliver them in a predictable and repeatable rhythm across time

First off, we recommend that all project milestones and deliverables are actually defined and stated as business objectives, not technical objectives. Given the reason the project exists is to solve a business problem (i.e. launch a new company), it makes sense that there must and will exist technical tasks and milestones along the way. However, in order for the business to define a project expenditure as successful, the objectives must be stated in terms understandable and measurable from the perspective of the business. You want $10? Deliver 10 business objectives. You want $10 more? Sign up to deliver 10 more business objectives.

Second to this, what gets delivered and how often must assuredly make sense from the perspective of the technologist - but must more importantly make sense from the perspective of a business person attempting to solve a business problem. In other words, do not deliver only a portion of one objective, deliver the whole objective in one sitting. Is the objective statement too broad or big to be delivered in one motion? Then break it down into parts and pieces until such time that each objective statement _can_ be delivered in one motion.

And what is 'one motion'? A predictable, repeatable delivery pattern that the technologists must strive to practice. In other words, on the business side of the house, deadlines are expected to be met regardless of the situation, and it should be no different on the technology side of the house. For example, were we to expect usable, tangible, measurable and useful software deliverables every thirty days, the answer from the technologist is not "it cannot be done", but rather a counter-proposal suggesting they will break the objectives down into parts and pieces until they identify what they _can_ deliver inside thirty days.

Observation 3: Not all deliverables created for the project were actually useful to anyone

We have already discussed this to some extent, but the summary of the conversation centers on deliverables, their usefulness, and determining whether money should be spent to develop them.

When spending someone else's money, it seems nearly all deliverables are important to somebody. However, many of the deliverables announced as 'successes' were in fact only useful to one or two other people, or only useful to other technologists. So in effect, the business was funding deliverables that it could not use, but some were touting as very useful.

Recommendation 3: Minimize and/or eliminate the creation of deliverables that do not first have a customer and purpose to exist

Our primary recommendation is couched in the need for all measurable project objectives to in fact be stated business objectives. If a deliverable does not or cannot be directly traced and evidenced as helping to solve a stated business objective, it should not be created.

Second to this, associate business objectives to deliverables, and deliverables being in a useful, measurable, tangible form, to further funding. In other words, rather than signing a fixed timeline, fixed bid/cost contract across time and then blindly paying invoices as they arrive - only pay invoices after business objectives for that period have been demonstrated to be met and the deliverables are in fact useful for meeting said objective.

There should be a direct trace between deliverables and business objectives in advance, not arrears.

Observation 4: Project Statusing was inconsistent and non-meaningful

Project status reports often included subjective statusing such as "people are working really hard" or "we found some problems, but they are being addressed". Furthermore, some of the status reports measured progress by counting the number of file check-ins to the code repository system, counting defects found (but not fixed), and the number of meetings that had occurred during the week.

An additional challenge lay in the fact that there was really no predictable pattern of when actual solutions would be delivered, but status reports were delivered in a predictable and repeatable pattern suggesting to us that there was perhaps more time spent talking about doing work, than actually doing work.

Recommendation 4: Measure progress by the number of tangible, tested, business functions from the perspective of the Business Sponsor, not the technologist or Project Manager

Put solution delivery and statusing on the same delivery schedule so that the solutions and reports of solutions share common timing.

Furthermore, make the measure of 'progress' the actual delivery of tangible, tested business objectives as stated at the beginning of the project, rather than determined in real-time and communicated ad hoc through status reports.

In other words, other than the small joys of personal and professional fulfillment along the journey of delivering, it does not matter what the Project Manager, Technologists or any other members of the solution team think or characterize as 'success' and 'completion' - it only matters whether a business objective has been met, can be demo'd or shown, is actually tangible and verifiable by the business sponsor, and actually knocks one objective off the list as 'done'.

It should be noted that at project charter (outset or beginning), the definition of 'done' for any objective should uniformly be defined as: known necessary coding is complete; known necessary testing is complete (unit, integration and functional). Acceptance testing will actually be performed by business people, appointed by the business sponsor, to verify that what is delivered and communicated as done, actually meets the need.
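The charter-level definition of 'done' above could be represented as a simple gate. The `Objective` type and its fields below are a hypothetical sketch of the stated completion criteria, with business acceptance modeled as the separate, business-performed step the report describes.

```python
# Hypothetical sketch of the project-charter 'done' gate: an objective is done
# only when known necessary coding and all known necessary testing are complete;
# business acceptance is a separate check performed by business people.
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    coding_complete: bool = False
    unit_tested: bool = False
    integration_tested: bool = False
    functionally_tested: bool = False
    business_accepted: bool = False  # verified by the business sponsor's appointees

    def is_done(self) -> bool:
        """Team-level 'done' per the charter definition."""
        return (self.coding_complete and self.unit_tested
                and self.integration_tested and self.functionally_tested)

    def counts_as_progress(self) -> bool:
        """Only objectives that are done AND business-accepted count as progress."""
        return self.is_done() and self.business_accepted
```

The design choice worth noting is that `is_done` and `counts_as_progress` are deliberately distinct: the team can claim 'done', but only business acceptance moves the progress measure.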

Observation 5: Staffing and costs grew reactively

This particular observation contains more than one point, but we believe the pivotal conversation point revolves around deciding what kind of staff is necessary, for how long, and how.

We believe our philosophy of hiring needs to change for the future. Historically, we have sought the best bid from vendors by looking at hourly rates first, timelines second. We felt that, being reasonably intelligent people, we would be capable of identifying a vendor that would provide good quality output balanced with the needed time and cost. In truth, we did not have good measures in place telling us the type of roles and responsibilities we truly needed to fill, let alone the definition of successful progress and associated 'quality' output. In fact, we found our definition of 'good' was not only different than that of the technologist vendor, but our definition of 'good' changed as we had more time to think about it.

As a result, given our definition of 'good' changed through time, as well as our lack of clarity regarding all of the roles and responsibilities required to do the work, when our timeline began to slip we truly had no bearing as to whether staffing levels were correct, and whether the staffers themselves were the correct staff. Thereafter, anytime someone made a request for more staffing we really had no feel for whether it was necessary at all and whether the additional staff would or would not positively impact the solution timeline and quality. As the vendor itself began to believe the answer was 'more staff', we had no real choice or context other than to approve increased staffing levels every single time they were requested.

Recommendation 5: Hire only a few really good technologists/solution providers and stick with them from end to end rather than throwing tens and hundreds of lower cost, lower ability resources at the project reactively

First, define a single problem statement and the associated business objectives that must be met in order to define success for this project effort.

Second, for each objective define a definition of 'done' that makes each objective tangible, measurable and complete from the perspective of a business stakeholder. In this example, said 'definitions of done' are more like acceptance criteria or otherwise criteria by which a business stakeholder decides whether a deliverable actually solves the need/meets the need.

Third, ascertain the roles and responsibilities of people we need on a team in order to meet these objectives (with definition of evaluation and acceptance) and solve the problem statement.

After having a definition of the problem statement, objectives, associated definitions of done, and asserted roles and responsibilities to perform the work - the next problem we need to solve is somewhat subjective, but worth discussing ... do we hire N medium skilled workers at a medium rate in order to get the work done, or do we hire N-1 high skilled workers at a higher rate to get the work done? There is no clearly objective algorithm, but it is our assertion based upon this experience and others that more people do not equate to higher output. Furthermore, we believe that having fewer high skilled people may actually improve output quality, communication and teaming.

It comes down to a choice on the part of leadership. Going forward, we believe we should experiment with having smaller teams/fewer people who are more highly paid and have more senior skill-sets rather than trying to hire N+ people at a lower rate and increasing the probability of communication problems, complexity, latency and overall management challenge. The responsibility lies with us, the customer, and never the vendor. And were we to take on a 'partner' versus a 'vendor' in the future, our responsibility does not change; it is our dollar and our responsibility to define how we'd like to spend it. Absent such clarity, we will most likely repeat history.
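One well-known way to quantify the communication cost behind this recommendation is to count pairwise communication channels in a team, which grow as n(n-1)/2. This small sketch is illustrative only and is not drawn from the report itself.

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.
# Doubling headcount roughly quadruples the channels that must stay coherent,
# which is one reason fewer, more senior people can out-deliver a larger team.
def channels(n: int) -> int:
    """Number of distinct person-to-person communication pairs in a team of n."""
    return n * (n - 1) // 2

for size in (5, 10, 20, 40):
    print(f"{size:2d} people -> {channels(size):3d} channels")
# prints: 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```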

Observation 6: Teams created according to skill-set/role contributed to non-contiguous system-level decisioning and solutioning thereby increasing costs, time utilization, and a higher probability of defect insertion and complex causal analysis

Summary? We inadvertently contributed to silos. In the beginning of the project it appeared to make sense having a research and development group headed by a senior architect who did all of the hard thinking and then passed it down the line to everyone else. Furthermore, it appeared to make sense to have the requirements people working together, the human factors/usability people together, and so on. What really happened, coupled with the fact that neither the project sponsor nor the project manager could manage the technical details, was that teams focused on their problems uniquely and did not collectively work towards single goals as defined by the business. Explained: Absent a clearly defined set of business objectives met by pre-defined technical solutions, each group of roles/people on the project interpreted the business objectives differently and pursued what they considered to be the goals for their particular skill-set/group.
Now, we already identified that we should have had appropriate pairing with senior technical resources who clearly understood the business objectives (as a method of mitigating over-engineered solutions); and we already noted that had we single business defined mission statement decomposed into business objectives which would be met by technical solutions we would have facilitated a common goal for all project participants grounded in business needs, definitions and measurements of 'done' or 'success'. Absent these two elements, and then by grouping people together by role/skill-set we inadvertently contributed to many groups of people individually believing they were working together when in fact no one really discovered they weren't working together until downstream in the project when parts and pieces did not fit together in a usable manner.

Restated: Absent a business-derived mission and objectives, and with people grouped according to their department instead of individual business problems, we created very effective silos and prevented system-level, cohesive, tangible progress from the perspective of the business.

Recommendation 6: Make self-contained teams responsible for all aspects of delivering a tangible business function from design through testing, development, delivery and Business Sponsor acceptance

In the future, regardless of the history, lobbying, politics, or predispositions, we need to practice creating self-contained teams of people who will deliver, from end to end, one complete business objective at a time.

In other words, the business provides US$1.00 and expects to receive one business objective that is completed, working, tested and validated as useful by the business representative setting the expectation from the beginning. The measure of success from the perspective of the business is when each and all business objectives are delivered - not when technical things happen along the way that are meaningless to the end-user.

It is our explicit recommendation that, going forward, projects may contain many teams, but each team contains all role types necessary to deliver one business objective. For example, one team may contain a designer, developer, tester, business representative, data person, and perhaps a usability engineer when necessary. This team would deliver to us a tested, complete, usable business objective with all design, development, testing, usability, and business validation completed in one motion. In this manner we are assured the possibility of putting in US$1 and getting back out what we ordered for US$1.

Anecdotal note: There may be special times when it makes sense to co-locate same skill-sets for thinking/designing, R&D, etc. However, regardless of preferences or opinions, we no longer believe in, nor support, putting like skill-sets together in co-located groups. Going forward all teams will be hybrid, self-contained, business-objective-driven teams.

Observation 7: Project Management did not actually facilitate success by definition of the business, but by definition of the project
We believe the project was staffed incorrectly in the context of project leadership. Our historic world view has been to assign a Project Manager from the PMO every time we have some type of activity called a 'project'. The nuance we have discovered is that in some cases, when the project manager did not really understand what was occurring on the project, she put more controls on the teams, making them focus more on process than on actually delivering work. At other times, when the project manager did not fully grasp what was going on or did not fully understand the weight/impact of a system-level situation, not wanting to appear ignorant or unable to handle things, this person seemed to blindly support whatever the technologists were saying.

In both cases, this challenge seems to suggest that when delivering a software and/or hardware system in particular, the problem to solve may not be so much putting a "PM" in charge of the project as having someone in a leadership capacity with technical or quasi-technical experience in these sectors of technology, as well as an ability to be a respected leader in the language of software/hardware engineers, in alignment with business goals. A difficult task to be sure; but we now challenge our historically practiced logic that if someone is certified as a project manager, they will be able to successfully navigate the waters of technical and business objectives purely because of their title or certification.

Recommendation 7: Leverage technologist leaders to define milestones, deliverables, timelines, risks and issues and consider using a project manager, or some project manager type of role, to simply organize and facilitate work measured against business objectives

Find a software model that expects, enables, and equips capable software/hardware technologists to self-manage and self-evolve the actual decomposition and delivery of planned work. The creation of a work queue and the prioritization of said queue should clearly continue to lie in the hands of the corporation. Furthermore, the preparation of queued and prioritized work should continue to lie in the hands of self-encapsulated teams, primarily including business representation. However, rather than inserting a 3rd party project manager into the process of micro-managing technologists who know their job better than the project manager, consider finding/developing a model whereby the technologist leaders themselves are expected to observe, define, decompose, size, and deliver their work, potentially eliminating the need for a 3rd party intercessor.

Given we already discussed the need for predictable, repeatable delivery patterns built upon and measured by the ability to functionally and tangibly meet a business objective, expect the senior team/technologist lead to facilitate the decomposition and delivery of work within this delivery pattern, according to the business objective. Were we to arrive at this model, we could logically leverage a project manager in other roles within the organization where it may make more sense, such as coordinating implementation, etc.

More work must be done in this area and is already underway as of this writing.

How to Hold a Customer Hostage - Part IV

Purely fictitious for the fun of exploration, this submission explores what it looks like when vendors of a product and/or service effectively hold a customer in a form of stasis - never really delivering, never really finishing, and never really providing enough understanding for the customer to be happy, sad or equipped to decide to stay or leave - just unfinished, bleeding money. Because of the size, this post is broken into multiple parts.

Part I
Part II
Part III
Final Report

Continued from Part III:

Prototyping Output
- As agreed, the architecture team used their additional thirty calendar days to finish proofing out ideas, components, and integrations. The Senior Architect was happy with the results and believed the output would be quite sufficient to guide the teams for the future.

A challenge lay in the fact that the prototype results proposed/suggested a bit more change than was originally projected. For example, upon further investigation the teams observed that most of their potential customer base used a particular operating system and same-vendor programming language other than what they had chosen for development, not to mention the fact that it appeared most customer companies, either purposefully or accidentally, were two to four major versions behind 'current' published versions of nearly everything. As a result, the architecture team proposed changing all development, test, and end-state architecture environments accordingly in order to ensure they were most compatible with, and appealing to, those customers valuing single-vendor lock-in in the interest of technology continuity (not to mention they represented the majority of potential revenue for BioRecCo).

Second to this, the prototype results suggested that integrating the hardware and software components would likely take more time than originally estimated due to several factors including, but not limited to, a few newly proofed out components, and the OS/programming language change implications to the original architecture proposal. It would take time to get corporate security approval to swap out the environments, get the programmers trained by the vendor on the new language and standards, and re-write the functional specification documents in a manner suitable for the programmers who would later read them.

With all of these items in consideration, the Architect proposed to the Project Manager that the most current known level of effort suggested a development window of six months from 'today', and approximately two months for testing thereafter. All in all, the Architect believed all of this work would be completed in eight months, but that he could pull in the window if the development staff size were increased to include an additional team of eight high-end consultants to pick up the slack that these changes would incur. Substantial conversation regarding money did not occur, though the Architect made it clear that if the company wanted this job done right, he simply needed to find 'the best' and get the job done.

Leadership Activities - With the help of the Architect, the Project Manager crafted a presentation for the Project Sponsor (aka Business Sponsor) in order to bring her up to speed on progress, lay out the definition of success for the project, and propose a change request 'requiring' a new timeline and more budget to meet the revised objectives. The Project Sponsor, caught by surprise, felt blind-sided, angry, and disappointed in the Project Manager because of the large change request presented to her. She vented, communicated the importance of not failing on this project, and reiterated to the Project Manager that this behavior was absolutely unacceptable.

Feeling defensive, the Project Manager returned the volley, communicating how incredibly complex the project really was and the fact that they had employed 'only the best' technologists in the industry to meet these goals. The PM further communicated that everyone had been working far more hours than they were billing for and had deferred vacations, and that if this team felt they needed more, there was really no one who knew more about it than this team. She further communicated the number of tests written, documents delivered, prototype decisions, and number of requirements in the system (functional and non-functional merged together) to evidence money well spent to date.

The Project Sponsor followed-up with a single question: "When can I see something?" The Project Manager responded with: "Most of the documents are still pretty technical and the prototype work is really at the technical infrastructure level accessible mostly to the technical folks, so we'd like to wait until we have something you can see and touch. Right now, we're planning to show you the system when we publish the first draft in six months and we think you will really like it." The meeting ended and the Project Sponsor left feeling badly about the project and wondering if she had attached her career to a lead weight.

The Re-based Project - November 16, 2000 - The Project Manager was granted the revised timeline, environment structure swap, training, and staff augmentation on the conditions that all of these expenditures would be outlined in detail for the Project Sponsor to understand and approve prior to acquisition attempts.

Projection: Eight months from 'today' and with the 66 existing people plus the eight new 'high-end' consultants, the project would be production ready. The revised blended rate for all staff increased to $105/hr (because some other consultant rates were negotiated down) and the new infrastructure purchase costs came out to a negotiated low of US$2.5 million (which was new, since the previous work was planned to occur on existing in-house and open-source solutions where possible). The Architect was pleased everything was coming together; the Project Manager was pleased to have this bullet already added to the resume; the consultants were pleased to know their contracts would run for another eight months (though likely longer, since customers always change their minds); and the employees were astounded at the ever-growing juggernaut in which they were at first willing participants, but now began to question.

High-ceremony status meetings occurred with all stakeholders, participants, on-lookers, food, and music on a regular monthly basis. The presentation, always delivered by the Project Manager and backed up by the Architect and/or Project Sponsor, without fail showed:

  • A Gantt chart;
  • the Top 3 Risks and Issues;
  • Milestones;
  • a Picture of the most current application architecture;
  • a List of documents and requirements completed;
  • the Number of screens created; and
  • the Number of tests written and successfully executed.

Anytime actuals did not match planned estimates, one of the top three issues would end up being 'Corrective actions necessary to bring actuals in line with planned LOE' and the Project Sponsor would then privately check with the Project Manager that all was well (to which the response would ordinarily be 'the Architect will look into it and make sure we get on-schedule').

February 16, 2001 - Three months into the re-based project and during the monthly status meeting, someone at the meeting asked if they could actually see the system work. The Project Manager immediately said 'Yes' and pulled up a PowerPoint presentation showing screen mock-ups the HF UI group had put together before the project was re-based. The Architect barely had a chance to process the question and resultant action before that same someone in the audience asked if they could actually see the application rather than seeing some documents that talk about the application. The Architect stood up from the back of the room and suggested that they did not have the application on an environment they could access and show during this particular meeting, but that he would arrange it and have a demo prepared for the following meeting in March. Immediately thereafter, the Project Manager volunteered to send out the PowerPoint deck they had just viewed so that everyone could see what the 'system looked like'.

Stream 1 - Development - To date, builds were happening as planned; wherever the specifications were not current, developers would check with the Architect for direction; HF UI screen creation work happened at a high rate of speed and the screens were scheduled to be 'hooked up' later in the project; and for all practical purposes, the hours actuals were within +/-25% of planned LOE at basically all times. All data points suggested a positive future, and the Project Manager and Sponsor were pleased with what they knew. Privately, the Project Sponsor began talking with a CTO friend elsewhere in the parent company, somewhat hoping that her discomfort could be allayed by something she did not yet understand.

A mild speed bump occurred over and over again during the builds, where developers would have trouble building some of the 'stubborn' code elements, but no one really took the time to understand why until somewhere in month four of eight, when a disagreement occurred over how to fix a couple of high-severity defects. It was at that point that someone made the observation that there were still basically three operating systems, two different databases, and some language version variations collectively among the developers. It later turned into a bit of an unhealthy conversation pitting off-site consultants against on-site full-time employees with regard to vendor lock-in versus platform/technology-agnostic development practices. This discussion further revealed that some developers were stubbing interfaces and others weren't, while the testers were actually writing tests in an environment with the wrong end-state integration points, on software loads nearly three weeks behind 'current' at all times...and the hits just kept coming.

The Architect eventually called an 'all technical hands' conference call to lay down the ground rules for all people to work together. The results of the meeting required 'all people' to have their computers reconfigured to a particular development environment specification. A Senior Partner of one of the consulting firms later called the Architect to suggest alternatives, and an undisclosed agreement was reached. Thereafter, the Architect brought the Project Manager up to speed, mentioning there had been a technical problem which required attention and had since been resolved. Three days were lost for some of the off-site consultants who had to have their computers reconfigured and their development environments rebuilt. Four of the consultants from yet a different consulting company somehow got caught in a Corporate Security dragnet and were not given permission to build their development environments locally (on their own machines) for another two days, until all the ruffled feathers were smoothed.

Stream 2 - Software Testing - Tests were being written based upon the original business requirement documents. Where there was a delta between what the testers were reading and what they were hearing at lunch and at the water cooler, they erred on the side of sticking to the business requirements document until they knew otherwise. They were taking a new load nearly every two to three weeks, which the test manager simply felt was too quick, but he had no choice in the matter since he wanted to stay 'current'. The test group was required to send in daily status reports which eventually trickled up to the Architect, but no one noticed the testers had failed to note all of the component changes that had occurred during the final thirty days of prototyping.

Stream 3 - Leadership - Meetings were happening in a consistent, mechanized fashion. All of the math and messages sounded good and all were pleased. The Project Manager and Architect communicated that things were all 'normal' for the typical software project and that, due to the complexity of this project, it would be considered 'ordinary' to have bumps here and there. The Project Sponsor reduced her conversation time with the CTO because everything seemed to be heading along appropriately. The Project Sponsor did, however, ask if the milestones suggested more parts of the system were being delivered and were tangible, to which the Architect responded that "later milestones would be attributable to a tangible system; but today they were largely associated with lower-level technical deliveries".

May 16, 2001 - Now six months into the project, work continued to be reported 'behind schedule' at the individual status level. After a handful of Project Management initiated meetings telling the teams they were failing to do work as planned and needed to correct (also known as 'flogging meetings' by many of the team members), it became difficult to report positive results during the monthly status meetings. After looking back through status reports for the last couple of weeks - due in part to the fact that the Architect himself had been relying upon the Project Management meetings for status - the Architect discovers that developers simply are not getting their questions answered in a timely manner and therefore often do not know what to code with clarity. He calls a meeting with the senior developers to ask them why they are late and what can be done about it. They report that they often do not get answers to their questions from the requirement team and reiterate that, for business reasons, they were told they could not talk to end-users of the system. As a result, many things upon which they are to deliver are stalled until questions get answered, and at this point they believe themselves to be about one month behind schedule. The solution, they believe, is to require business analysts/requirement people to answer their questions within one business day of receipt.

So, the Architect takes this information to the Project Manager who, relying upon the Gantt chart picture, is in disbelief. After the Project Manager processes the impact and utters some unsavories at the Architect, the Architect puts the Project Manager 'in her place' by communicating that the PM is not doing her job by failing to make all parties focus on helping development get their job done, and that as far as the Architect is concerned, this time lag is due to a lack of leadership on the part of the PM.

During a normally scheduled weekly status meeting, the Project Manager sits down with the Project Sponsor and communicates the needed schedule change couched in 'we have a problem with the requirement team responsiveness which is impacting our schedule'. Effectively communicated, the Project Sponsor runs much of the message down the flagpole to the Business Analysis Manager who then communicates that his team is not technical and does not understand everything being sent to his team. As a result, responsiveness is slow because they have to figure out what is being asked, then go do research on an answer prior to delivering a verdict. At this point, the Business Analysis Manager believes they are about two weeks behind answering development originated questions.

When all parts of the problem are explored, discussed and addressed, two months are added to the project schedule after considering one month for development plus 20% overage, coupled with 2 weeks for the requirements team plus another 20% overage calculation.
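The padding arithmetic described above can be sanity-checked with a short sketch. This is only illustrative: the 20% overage figures come from the narrative, while treating a month as four weeks is our assumption.

```python
# Schedule-slip calculation from the meeting described above:
# one month of development slip and two weeks of requirements-team slip,
# each padded with a 20% overage before being summed.

dev_slip_months = 1.0        # development reported one month behind
req_slip_months = 2 / 4      # two weeks, treating a month as four weeks
overage = 1.20               # 20% padding applied to each estimate

total_slip = dev_slip_months * overage + req_slip_months * overage
print(round(total_slip, 1))  # 1.8 -> rounded up to the two months added
```

The 1.8-month result explains why a round figure of two months landed on the schedule.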

August 16, 2001 - Today is the originally planned 'done' date for development. Due to the challenges discovered in May, two additional months of development were added prior to the test window, suggesting development 'done' should theoretically land on October 16, 2001. After that, testing had planned to take the expected two months, but now theorized it might be bumped out to three months because of the disparity being found between the design documentation and the actual software being received. During the August status meeting, with anyone interested, including the food, music, and theatrics, someone again asked if they could see the application demonstrated. The Architect immediately stood up and said they had been distracted by the change in project scope and schedule and had simply overlooked the need. He logged into the test site to show whatever might be there. A pleasant portal framework with nice skin compositions and predictable, repeatable user behavior could be discerned. Most of the icons did not go anywhere, there were no reports, users did not yet have to actually log in, and the majority of custom wizards were as yet inaccessible. However, while driving through the system, most of the conversations were something like... "And this is where you will be able to do this...and this is where you'll be able to do that..." and so on. When questioned, the Architect simply stated that much of the work was actually done and simply had not been delivered to the test group yet.

October 16, 2001 - The revised done date being today, development reports that it is done with all work and waiting on testing. Not seeing this coming, the test group communicates that while the development team reports being done, the test group does not have code much more mature than at the spontaneous demo two months back. In fact, even though the Business Requirements group was later required to respond to all development requests within one business day, all transmission occurred via email and there were no resulting updates to the requirement and design documents. So, the test group believed themselves to be at a standstill until someone updated the business requirements and design documents of yore. Furthermore, the test group communicated an inability to actually take new software loads and predictably, repeatably get them installed and usable within a one-day time period. The Architect suggested this issue was due to the fact that the test group had not tested the installation process yet, and a war ensued.

After much discussion, some of it expectedly unhealthy at this point, it came out that the development group had stopped building and executing unit tests back in May when the project architecture was changed, and then again when all computers were later re-imaged according to standards. The reality is that most developers had been reporting this through their individual status reports, but it had not been recognized because the Architect and Project Manager were processing fewer and fewer of the status reports appropriately, looking only at those things 'done' and not at those things 'not done'. The fact of the matter was...the definition of done had been reported according to when code was checked into the version repository; and how hard people had been working was being measured by the number of check-ins into the version control tool across thirty-day periods. Exacerbating the situation was the fact that test progress had to date been measured by the number of scripts written versus the number of scripts actually proving out something important to the user. Ironically, upon the advice of the CTO some weeks back, the Project Sponsor had decided to start making spontaneous visits to the on-ground troops to see how things were going. Little did anyone realize what she would be walking into during this spontaneous meeting, occurring in the hallway near a marker board.

The Project Sponsor asked three questions:

1. How do you know when you're done?
2. What do you have available to show me right now in relation to the aforementioned definition of done?
3. What will it take to correct the current state of affairs?

The Project Manager walked in during this period of time and immediately chose a side of the gameboard without understanding what was actually happening. Acting surprised at what was being stated, the Project Manager feigned being upset with the Architect and the Development team and the Project Sponsor ended up taking control of the meeting, giving assignments and deadlines.

Three days later, the Architect submitted detailed thoughts on the current state of the project, suggesting development was basically done and testing needed to begin, which would likely take two to three months. The Architect reported that integration testing was really happening every time a new build was performed, but that the test group should really be the ones doing the integration testing as a second set of eyes.

On the same deadline, the Test Manager submitted detailed thoughts on the current state of software maturity, how much work the test group had done, how much appeared to require rework, how much they didn't know, and even slipped in some unnecessary commentary on the performance of project leadership. Needed time was estimated and reported at three to four months, with the highest probability being four months to rewrite the old, write some new, and actually get some testing done.

The Project Manager, using the project management tool and individual status reports, arrived at a six-month estimate which included remaining development, testing, and implementation work thereafter.

The Project Sponsor then took all of this data over to her CTO friend to look over and discuss next steps. The CTO had expected the project to be in a relatively good state of affairs due to the fact that it had been assigned a senior architect and had a number of high-end consultants on board. Upon researching together, the data points suggested the following:

To date time elapsed: 20 months

To date expenditures:
Super Consultants: US$105/hr * 55hr/wk * 48 weeks * 8 consultants = $2,217,600.00
Pre-existing Staff: US$105/hr * 40hr/wk * 48 weeks * 66 staff = $13,305,600.00

  1. Blended wages this period: $15,523,200.00
  2. Infrastructure this period: $2,500,000.00
  3. Total this period: $18,023,200.00
  4. Previous period totals: $3,771,200.00
  5. Project Totals to Date: $21,794,400.00

To date ROI:
  1. 10 documents
  2. 76 automated tests
  3. 254 defects reported, 76 reported fixed
  4. 470 screen mocks
  5. 340,000+/- lines of code, commented
  6. 250 business requirements generated
  7. 127 business requirements fully/partially accessible in the system

Worst case project remaining costs per the PM: $7,114,800.00
Super Consultants: US$105/hr * 55hr/wk * 22 weeks * 8 consultants = $1,016,400.00
Pre-existing Staff: US$105/hr * 40hr/wk * 22 weeks * 66 staff = $6,098,400.00

Approximate final project costs: $28,909,200.00.
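For anyone wanting to audit these tallies, the cost model reduces to a few lines. This is only an illustrative sketch: the headcounts (8 'high-end' consultants, 66 pre-existing staff) and the previous period total come from the narrative above, and the 22-week remaining window is inferred from the worst-case breakdown.

```python
# Reproducing the project cost tallies above. Headcounts and the previous
# period total come from the narrative; everything else is simple arithmetic.

RATE = 105                    # blended hourly rate, US$
CONSULTANTS, STAFF = 8, 66    # 'high-end' consultants and pre-existing staff
C_HOURS, S_HOURS = 55, 40     # billed hours per week per person


def wages(weeks: int) -> int:
    """Blended wages for one period of the given length, in US$."""
    return RATE * (CONSULTANTS * C_HOURS + STAFF * S_HOURS) * weeks


period_wages = wages(48)                         # current 48-week period
total_this_period = period_wages + 2_500_000     # plus infrastructure
project_to_date = total_this_period + 3_771_200  # plus previous period totals
remaining = wages(22)                            # PM's worst-case estimate
final_cost = project_to_date + remaining

print(period_wages)     # 15523200
print(project_to_date)  # 21794400
print(final_cost)       # 28909200
```

The figures reconcile exactly with the tallies above, which is itself telling: the arithmetic was never the problem, the definition of done was.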

The final straw came when the CTO had another senior technical person look through the application, the code repository, the build output, tests, and test results; that person suggested he did not personally feel comfortable taking on this project due to the lack of clarity relating 'what is done' to 'what is stated to be done'.

The CTO went with the Business Sponsor to the Vice President's office to notify him in advance that they planned to suspend the project until such time as a more cost- and time-efficient method of completing it could be ascertained. Angry with the situation, but happy the Sponsor and CTO had the fortitude to say 'stop', the VP simply asked two questions and handed out one assignment:
  1. What are the top 10 things we did not do correctly?
  2. How will we make sure we will not do them like this again?
  3. You have 10 business days to bring me this material and recommended next steps.