Why ordinary PM thought doesn't work

As I've discussed in another post, Expecting Abnormal Human Behavior, we often practice different human behavior in our everyday private lives than we expect of ourselves and others at the workplace. At home, we plan the high points and roll with whatever exceptions are thrown at us the rest of the day. At work, we attempt to plan everything just short of human physiological needs. I think part of our difficulty and challenge delivering software day in and day out is not that we are miserable at doing good, solid work - but rather that we fight natural human behavior. Somehow we've fallen prey to thinking that we can and will control natural human tendencies for the eight to ten hours we spend each day in our work space by using a framework that makes little or no sense in the context of the expected and needed team behaviors. On the one hand we have a team of people with natural behavioral tendencies; on the other, a framework which disregards said tendencies and attempts to impose rigor where it only makes sense to those selling the real estate.

Project Management practices are an abstraction layer from reality. Based upon today's popular and accepted practices, they will always be N steps behind and discussing the wrong problems.

I'll cover more regarding positively leveraging human behavioral tendencies in a later post. Let's look at some of the common project management practices we experience in the software space today.

Common Practice #1 - Identify All Steps on Day 1

Traditional project management expects a team to identify all possible components and tasks required to deliver a particular project - up front, regardless of the length of the project, whether measured in weeks, months or years. Ideally and logically, of course, the software project has a defined goal or definition of "done" telling us where we are heading. However, for software - where customers don't know what they want until they see what they don't want, where the market and competitors shift on a daily basis, where requirements shift in priority and relevance as more knowledge is gained - knowing what is important is a changing tide, not a fixed one. Writing a project schedule up front and then re-baselining it and managing change with scope requests focuses effort on discussing work, not doing work. Of course, it depends on where we want to spend our money.

We simply don't know what we don't know until we have more data. To posit the steps it will take to integrate a software system with a particular database is far different from suggesting the software system will get integrated with a yet-to-be-determined third party content provider. One data set is relatively fixed while the other is purely a placeholder for discovery. We may have an idea how big or complex it may be based upon past collective experiences, but we simply do not know what steps it may take until we lift the hood. Asking a team to identify all steps necessary to do the unknown is an exercise in futility. Doing the work is far more valuable than discussing what the work might entail. "Had I only known building this [thing] would have taken so long and been so painful, I never would have done it." Unless we've done this work before, we simply postulate what it might be like, but we do not know it as fact. Ironically, project schedules are often created from end to end as if they contain only fact. It simply is not so.

Common Practice #2 - Identify Estimates on Day 1

I think some of the best current published thinking on "how long" something should take is contained in Mike Cohn's book Agile Estimating and Planning. Aside from the pertinent details one should glean from the book _after_ purchasing it, we note a conversation on sizing.

In many, many cases, the project management expectation is that a team should not only identify all steps to get from "here to there" on Day 1, but that said steps must be accompanied by estimated hours to get the work done to completion. Now, if we have about two weeks of work in front of us, we can reasonably assert the steps between here and there and how much time we _might_ spend getting the work to some state of done-ness. However, if we have a list of tasks in front of us spanning months or years, the farther away from "today" we get, the more abstract our reasoning must become in order to assert time. When we calculate time for the next two weeks, we can put one element in context of surrounding elements for some sort of relative thought. Were we to consider a particular task or individual element months into the future, we really have no true understanding of everything surrounding it - so we're forced to evaluate each task in and of itself, perhaps in context of a sub-group of surrounding tasks - which leads us to Mike Cohn's conversation on relative sizing.

We may not always know how long something will take, but we are usually pretty good at estimating how big something is in relation to something else. "We've never done this particular thing before; but when we did 'X' and 'Y' for Client ABC some time back, it was bigger than we thought, and far more painful. This thing looks very similar." The challenge lies in the fact that Project Managers and associated traditional project scheduling tools request sizing in days and hours in order to build a project plan with an end date. This stimulates people to estimate how long it might take, then to add extra time simply to offset the risk of being wrong. There are exercises in Ken Schwaber's Control Chaos ScrumMaster classwork, as well as that offered by Mike Cohn of Mountain Goat Software, that reinforce the following idea: the first estimate has a high probability of being wrong because we simply don't know how long something might take; this is then compounded by estimating "what if" scenarios thereafter, adding them to the original number and calling it contingency. Is it any wonder project schedules are off-budget, let alone off-time? The only way to manage this problem by current practices is by cutting scope. We set ourselves up for the problem and then solve it incorrectly at the expense of the customer not getting what was requested.
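To make the relative-sizing idea concrete, here is a minimal sketch (the stories, point values and velocity are hypothetical, not taken from Cohn's book): size each item against a reference story the team already understands, and only convert points into a forecast once real sprints have produced a real velocity.

```python
# Hypothetical illustration of relative sizing: each story is sized by
# comparison against a reference story the team knows well, not by
# guessing hours months in advance.

reference_story = ("Export report to CSV", 2)   # done before; worth 2 points

backlog = {
    "Integrate with a known database": 3,              # a bit bigger than the reference
    "Integrate with an unknown content provider": 13,  # mostly a placeholder for discovery
    "Add a login audit trail": 2,                      # about the same as the reference
}

# Observed velocity (points completed per two-week sprint) turns relative
# size into a forecast only after real sprints have produced real data.
velocity = 8
total_points = sum(backlog.values())
print(f"{total_points} points at {velocity} points/sprint "
      f"= about {total_points / velocity:.1f} sprints")
```

Nothing about the sketch asks anyone to defend an hour count six months out; the forecast moves as the backlog and the observed velocity move.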

Common Practice #3 - Allocate resource availability by %

One project model expects that all resources are fully dedicated to said project for the life of the journey. Another model expects there is a core of people fully dedicated, plus some supporting people who are partially allocated across time based upon availability or cost constraints. Both models ordinarily practiced today assume a project is defined by the beginning and end dates automagically calculated in project scheduling tools. So it stands to reason that for any project longer than two weeks or so, resources simply are or are not available. So, the same resources are allocated into the project schedule at arbitrary, semi-arbitrary or mathematically derived hours per day or week. What's missing? After it is in the project schedule, if the tasks suggest the work is done, but the work is in fact not done ... do we go get more % of said resource? Doesn't seem too complicated perhaps, unless a centralized DBA or Sys Admin team is allocated by % to multiple projects simultaneously.

Ah, resource contention. The very practice of asking for a single resource to be allocated to a multi-month project that is one long delivery bubble, or of allocating resources at particular percentages across time, perpetuates the problem - and solves the wrong problem. The issue is _not_ that "Joe" is not available more than five hours a week; the issue is that roles are assigned to people's names. If DBA=Joe, and there are five projects - Joe is the bottleneck, gets put on the risk and/or issue list, and is constantly caught in the middle. Wrong problem. If, however, the roles required for said delivery bubble are DBA, SysAdmin, Developer and so on, we begin discussing work to be done rather than "who".

It is not the job of the Project Manager to determine "who", but rather that of the team doing the work. When we spend money negotiating percentages of people, we aren't delivering. For those teams suggesting there are simply not enough resources to get the work done based upon skillsets - cross-train existing people to perform multiple roles and deliver in smaller bites, similar to the way most people eat when dining: one bite at a time.

Common Practice #4 - Building a Gantt chart

Gantt charts are wrong the minute they are conceived. Why? Well, if we spend enough time looking at a twenty-page Gantt printout after we've taped it to the wall, we oddly observe sequential, waterfall-like thought. The only time the chart actually reports anything useful is never. The problem we have is that the chart suggests "A" occurs before "B", then "C" and so on. It also suggests time for every progress bar in the picture, which leads one to posit that if the angle is going down, and page 20 represents the end, then when we get to the lower right-hand corner of the picture we are done.

Actually it has nothing to do with being done; in fact, it merely reports pictorially what we've constructed as the work-breakdown structure in the project schedule. It reports what is done in relation to what is planned to be done based upon tasks in the schedule, but doesn't tell us anything about what is truly left or what it truly took to get there. If we look at the chart, it really is something cool and evidences the complexity in doing any sort of project. Much thinking is necessary, not only on Day 1, but all along the journey. Aside from the wow factor, it teaches people the wrong message (that projects are sequential and downward sloped), does not show the complex iterative interactivity necessary within the teams to actually get something done, always talks about the project in arrears, and doesn't evidence the priority or complexity of one task over another. After someone creates a Gantt chart by which to illustrate the plan and associated progress, we become constrained to the picture, not the reality.

What are the problems with these practices?

Software project management practices ordinarily expect team members, customers and companies to predict the future. When change occurs, the project schedule is re-baselined and a scope change request is drawn up to gain approval for the change. At the end of the project, any deviation from the original baseline is considered scope creep, and the success of the project is called into question based upon deviation from the time and cost baseline. The pressure to predict the future well in advance is perpetuated by project management practices that do not solve problems, but merely talk about them. Not wanting to be reported as a deviation, teams spend more time trying to predict the future with more accuracy instead of focusing on better delivery practices. The fact is, we are not capable of predicting the future, and while this expectation exists, teams are influenced to solve the wrong problems.

So does this then suggest Project Managers themselves are non-useful? Absolutely not. In fact, Project Managers, just like anyone else employed under contract for pay on a delivery team, are working diligently to meet the expectations placed upon them. The communicated expectations are ordinarily something like, "You are the Project Manager. Make sure that project gets delivered on time no matter what!" or "You have $10 US and two days, get it done and don't take no for an answer." Usually something hubristic, fatalistic, and projecting the assumption that it can be done as charged. The conversation is not about the Project Manager. The conversation is about the role of Project Management. What should it really be?

  • Project Management practices measure "done" based upon project schedule line items versus the stateful evolution of software evidenced on a regular basis through sprints and demos. In fact, the project schedule can report "done" while the software is but a third evolved, with weeks or months remaining.
  • Some project managers actually have no idea what it takes at the nuts-and-bolts level to deliver software, and so are unable to put change in context of reality. "If it is a deviation from baseline, it is a deviation."
  • Status report-type questions are often built upon the project schedule as the single point of reference for all activity and thought. As expected, teams answer the questions asked - even when the questions are wrong and non-meaningful because the measure is assumed to be the project management report, not the work.
  • Risk is managed through "margin of error" or "contingency" calculations instead of through delivery methods such as sprints, iterations and bursts.

These are only some of the challenges. What are immediately accessible alternatives to straight-up Project Management practices? Agile Project Management with Scrum by Ken Schwaber, and its companion, Agile Software Development with Scrum by Ken Schwaber and Mike Beedle. The focus is shifted from a Project Manager in charge of all things software to equipping teams to identify, solve and deliver by using different team constructs, different delivery methods, and different methods of estimation. It makes sense that the team doing the work be the team discussing the work, rather than a proxy.

"Project Management" is not the solution to delivering quality software, building better teams, managing risk and complexity, providing value to customers and clients, or even hitting budget and time constraints. Building self-contained, self-managing teams is the solution to better software. We must change how we evolve software and it must start by changing the idea and use of project management.

Expect teams to identify and solve problems rather than expecting someone not actually involved in the work to insert themselves, add value, and provide meaningful reports of work someone else is actually performing. Project Managers are unnecessarily put into the position of having to predict, report on, and hold accountable the others doing the work. Why?

Expecting Abnormal Human Behavior

Assumptions for this conversation:

  • 1 human year = 52 weeks
  • 1 human week = 7 days (~364 days/year)
  • 1 human day = 24 hours (~8,736 hours/year)
and
  • 1 work year = 50 weeks
  • 1 work week = 5 days (~250 days/year)
  • 1 work day = 8 hours (~2000 hours/year)
This suggests to us:

  • 1 person allocates 33.3% of each work day working for pay (8h/24h*100)
  • 1 person allocates 23% of each working year for pay (2000h/8736h*100)
So, if on an average calendar year, we get approximately 23% of an individual, how should we best make use of their attention and effort?
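The arithmetic is simple enough to check directly; a quick sketch of the same calculation:

```python
# Verify the work-time percentages from the assumptions above.
human_hours_per_year = 52 * 7 * 24   # 8,736 hours in a 52-week year
work_hours_per_year = 50 * 5 * 8     # 2,000 hours in a 50-week work year

share_of_day = 8 / 24 * 100
share_of_year = work_hours_per_year / human_hours_per_year * 100

print(f"{share_of_day:.1f}% of each work day is worked for pay")        # ~33.3%
print(f"{share_of_year:.1f}% of each calendar year is worked for pay")  # ~22.9%, roughly 23%
```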

If we were to study the 77% of the calendar year an employee spends in their private life, we'd find roughly another third of the calendar year allocated to sleep. And the rest? Living.

And just how do they live, you ask? We need only look in the mirror.

Think about this: a thirty-year-old woman just started at your company. She is technically one day old at said company; however, she is thirty years old (young) in her life. She comes to the company with prior experiences, predispositions, declared opinions, education, successes and failures.

Is it the responsibility of the new employee to figure out how to fit into the culture of the company in which she is employed? Or is it the responsibility of the company to figure out how to make best use of the new employee?

Yes, the easy answer is both as they provide reciprocal services (work for paycheck:paycheck for work). However, many of our company cultural practices run contrary to the personal lives of our employees. For example, prediction.

For any project, we diligently work to predict the future by:

  • outlining all possible requirements
  • outlining all possible tasks to deliver said requirements
  • outlining all possible dependencies
  • calculating effort hours for all tasks (even those not scheduled to occur for six+ months)
  • assigning people to tasks in percentages (such as Joe being assigned 25% capacity)
All of this math would be completely logical if it weren't for the fact that we are unable to predict the future. So at work, in order to manage this risk, we add contingency calculations to everything "just in case". We don't do this in our personal lives to such an extent, if at all.

In our personal lives, we don't know for sure if the University of Illinois will have a good football season this year, if our $30,000 car will last for five years, if all invitees will show up to the birthday party, if we'll get an "A" in any particular class, or if our children will make it through elementary school without broken bones. We plan for the primary games, choose the most important components of the car, coordinate the party logistics, make sure we nail the major papers and tests, and try to educate our children on good decision-making. Yet, we are guaranteed nothing.

Most of us make no attempt to identify all intricate details for our vacations or to calculate contingency based upon risky dependencies. We live life knowing full well it will happen with or without us and often in a manner not predicted. This is not to suggest we should not plan or not think through a "Plan B". Rather it is suggesting that we simply cannot predict with 100% confidence. And so, we don't.

For any particular person who spends 77% of their calendar year thinking and solving problems for themselves in their own manner on their own time, why do we expect they will overhaul their natural human tendencies and practices at all, let alone well enough to be something different while at a place of work? How do we know work expectations are right?

Maybe we should consider placing greater value on understanding and leveraging natural human behavior versus trying to control it? After all, human history tells us over and over again that the human spirit is not ever stopped or controlled, but rather stimulated in one direction or another.

What if we worked diligently to understand why people are different? What if we worked to understand their strengths, weaknesses, goals and aspirations? What if we recognized that people are humans coming to work to earn money to go home and continue being humans?

Why do we expect people to be different at work than they are at home? Maybe we have it backwards.

Cadence

The average adult heartbeat is approximately 72 beats per minute. Assuming a full 24-hour day, we can calculate a little over 103,000 beats per day. Sure, there are variations in the rhythm due to hopefully anomalous circumstances like stress, but overall ... 103k beats per day for life ... and with every heartbeat, approximately 2.4 oz of blood are pumped. That's a lot of blood. We know that with a regular rhythm comes the possibility of a healthful, productive life; with an irregular rhythm comes the possibility of temporary or permanent bodily damage, i.e. loss; and the worst case of no rhythm at all brings about eventual, but certain, death. We know the pulsating beat of our heart and the associated flow of blood throughout our body is good when we are able to take it for granted. Predictable, repeatable, healthy ... assumed.

Likened to the naturally beating heart, i.e. pulse, there is the metronome ... a constructed tool with no greater calling and purpose than to simulate a predictable, repeatable pattern within the music, helping teach a music student the importance of constancy. Music, being composed of a collection of notes, is ordinarily aggregated one measure at a time according to time signatures such as 2/2, 2/4, 3/4, 4/4, 6/8 and so on. This framework helps us understand how many beats per measure are expected to occur. For example, the first of Ludwig van Beethoven's Bagatelles, Opus 33, for the piano is written in 6/8 time, requiring of us 6 beats per measure - regardless of the number of notes necessary to accomplish this feat. And one level higher than the time signature is the tempo - telling us how many beats per minute are expected for any particular musical piece. If the time signature is 4/4 (4 beats per measure) and the tempo is declared at 112 beats per minute ... by math alone do we conclude that 28 measures of music per minute must be played. And if the mood of the music varies throughout the piece, whether playing allegro (fast) or adagissimo (very slowly), we know the music will move faster or slower than the 112 beats per minute, but will always use 112 as the base tempo. So we have the number of beats per measure and the number of beats per minute, and we have expected variation built on a consistent base. First a cadence. Second a variation upon the cadence; but always the cadence.
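Both the heartbeat figure and the measures-per-minute figure above are plain multiplication; a quick sketch of the arithmetic:

```python
# Heartbeat: 72 beats per minute, around the clock.
beats_per_day = 72 * 60 * 24            # 103,680 - "a little over 103,000"
blood_oz_per_day = beats_per_day * 2.4  # ounces pumped per day at ~2.4 oz per beat

# Tempo: 112 beats per minute in 4/4 time (4 beats per measure).
measures_per_minute = 112 / 4           # 28 measures of music per minute

print(beats_per_day, round(blood_oz_per_day), measures_per_minute)
```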

Why don't we hear anyone discussing these things at concerts? Answer: because it is assumed, i.e. unspoken. The only times we hear any talk of tempo or time signature is while someone is learning the music, and when someone is playing it poorly or otherwise inconsistently. Without a tempo there is no flow. Without flow, there is no positive experience.

And what about drum corps? Soldiers. Marching bands. Colour Guard. We tend to notice when one or more people in the formation are out of step when walking down the street en parade. Why is that?

Heartbeats. Metronomes. Time signatures. Tempo. Predictable. Repeatable. Patterns ... signs of health and success. Inversely? Pain. Toil. Unpredictability. Non-repeatability. Anti-patterns. Stress. Fatigue. Attrition. Failure.

Living organisms have patterns. Music and art have patterns. Measurement is a pattern. Math is composed of patterns; patterns are math. Organizations? Should have patterns, i.e. pulse.

Not the simple ones like "there should exist one manager for every N headcount" or "we must achieve 5% market growth each year for the next 5 years"; but more importantly ... things like:

  1. Tasks should be no larger than 16h each
  2. Stories should be no bigger than one sprint
  3. Sprints should be no longer than two weeks in length
  4. Working, tested, self-contained software delivered every sprint
  5. Software integration builds executed every hour
  6. Unit test suite executed hourly, after each integration build
  7. Results posted publicly every hour

Sound like patterns? Repeatable? Predictable?
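Here is a hedged sketch of what such patterns look like once written down as checkable rules (the names and limits are illustrative, mirroring the list above, not a prescription):

```python
# Illustrative cadence rules - each one a predictable, repeatable pattern
# the team can check against on every task, sprint and build.
CADENCE = {
    "max_task_hours": 16,
    "max_sprint_weeks": 2,
    "build_interval_minutes": 60,   # integration build every hour
    "run_tests_after_build": True,  # unit suite executed after each build
    "publish_results": True,        # results posted publicly every hour
}

def fits_cadence(task_estimate_hours: float) -> bool:
    """A task bigger than the cadence allows should be split, not scheduled."""
    return task_estimate_hours <= CADENCE["max_task_hours"]

print(fits_cadence(12))  # True  - small enough to fit the rhythm
print(fits_cadence(40))  # False - split it before it enters a sprint
```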

What about:

  1. For every two-week sprint, we begin with a 2-hour sprint planning meeting
  2. For every two-week sprint, we end with a 2-hour demo review and lessons-learned meeting
  3. For every two-week sprint, we'll only pull from the backlog those things we can fully eat
  4. For every two-week sprint, the backlog will be prioritized prior to the planning meeting

Sounds like a collection of patterns.

If someone tells you they have a healthy organization, but you can't seem to find a delivery pattern, build pattern, planning pattern, prioritization pattern, test pattern, estimation and/or sizing pattern, or any kind of pattern that is predictable and repeatable, they may not know what a healthy organization looks like. To judge the health of a company by revenue, market share, cash flow, debt ratios, or customer acquisition rates is to evaluate the wrong components. Enron proved to us that money is the wrong dashboard needle to be watching without additionally understanding the internal pulse of the company (or lack thereof).

Spend one day and sit down with customer service representatives, developers, testers, project managers, database and system administrators, etc. - all of them junior people in the company.

Ask them one question: "Name for me everything in this company that has a cadence."

You may learn about the cadences, or lack thereof, of the company under scrutiny far faster than you can count the time signature changes in the average Kansas and YES albums of yore.

Reducing the Need for more Team Members through Retooling

How many times do we see staffing structures whereby there are dedicated Business Analysts in one group, Requirement resources in another, Trainers in another, Technical Writers in another, Manual Testers in another and even Project Manager groups assigned responsibility to manage them all on one or more projects?

How many times do we encounter situations whereby each group, regardless of the situation, has more work than they are able to complete and subsequently asks for more time and/or staff?

First, I posit that these groups do not need more staff - they need to be leveraged in a different manner than is currently popular and practiced.

Second, I posit that the problem does not truly lie with too much work, but rather with an organization's ability or willingness to manage itself. I'll discuss this one in another blog post.

I'd like to amplify a few common characteristics shared by all the aforementioned roles before going on this journey.

All of these roles require:

  1. Good listening skills
  2. Good communication skills (e.g. writing and speaking)
  3. A functional understanding of the system, the user, and the user's perspective
  4. Customer service skills, whether directly or by proxy
  5. An ability to organize large volumes of data into deductively logical relationships
  6. An ability to distill what is important now versus what may be important later

My opinion is that we really do not need groups specifically dedicated to each of these individual functional areas. Rather, we need a small pool of people who are equipped _and willing_ to do what is necessary to evolve solutions without boundary. Let's explore the fundamental model composed of six activities that I posit can be performed by a single person, rather than having six people perform one activity each.

Capturing role-based user stories, descriptors and acceptance criteria

Most of the people in the aforementioned roles have an ability to converse which often includes listening and talking. Water coolers, cab rides, waiting in line at the deli, no matter - in each situation most people naturally discuss whatever is important to them at the moment. Conversations occur regarding the needs of people. We naturally do this on a daily basis, case by case... "here are my needs..." or "here is what would make me smile".

It is my contention that the people ordinarily associated with a single, individual role such as tester, trainer, project manager, technical writer, or business/requirement analyst have an ability to:

  1. understand what a customer is asking or seeking
  2. document a short statement representing that which is sought
  3. describe a little about what it would look like if this sought experience were delivered
  4. articulate some short burst type statements, in the customer's own words, suggesting what would make this experience good versus stellar versus not so good

What we really need is some fruit juice, a veggie tray, a bit o' coffee, water, a marker board, some index cards, and an ability to listen and verify that what we heard is agreeable. Any one of the aforementioned roles could fulfill this effort. We simply do not benefit from a dedicated middle-man who 'gets' the material, then 'hands it over' to others. The getter can also be the do-er.
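As a minimal sketch of what those four abilities produce on an index card (the story and criteria here are hypothetical), the whole artifact is just a statement, a descriptor, and the customer's own acceptance criteria:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """One index card: a role-based statement, a short descriptor, and the
    customer's own words about what good versus not-so-good looks like."""
    statement: str                   # "A user can ..."
    descriptor: str                  # what it looks like when delivered
    acceptance_criteria: list[str] = field(default_factory=list)

story = UserStory(
    statement="A user can reset their own password",
    descriptor="From the login page, without calling support",
    acceptance_criteria=[
        "The reset email arrives within a minute",
        "The old password stops working immediately",
    ],
)
print(story.statement)
```

Whoever heard the conversation can fill this in; no hand-off required.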

Writing modular, portable e-help structures based on stories

It stands to reason that if I heard the customer make a request, and I documented the request in a short statement, with a descriptor, with some sort of acceptance criteria purely based upon conversation (in other words, I experienced the conversation, not just heard it) ... then I should have a reasonable understanding of what the customer really wants the system to do. Having this understanding through experience, I should then be able to document some useful help content to aid the customer in getting the experience they seek. It is a natural evolution to move from conversation to story to some sort of helpful documentation which helps the customer experience that which they first conversed with me about. Cyclical in nature.

What we need is someone capable of writing short, useful content modules around the particular experiences the user is seeking. Hear it. Write it. See it. Validate it. It makes sense that the people who first experienced the conversation are most easily equipped to write about it in the context of help documentation. Why have one person hear the conversation and document it ... then another read the documentation, interpret it, question it and then write multiple versions of it to be sent back and forth until we "get it right"? Eliminate the churn.

I heard the story and therefore have the greatest knowledge and experience available to validate that we delivered the story. Furthermore, due to this experience, I am equipped to write just enough help material to actually aid rather than confuse.

As a sidenote, I mention modular, portable e-help structures built upon stories for two reasons:

  • if "a user can" do something in the system, what else is there to discuss in e-help documentation? and
  • technology changes just like customer needs... I merely suggest modularity and technology agnosticism such that one can mix, match, order and re-order, add and remove modules just as easily as user stories in general; and, we never have to care about technology evolution. Move the arrow to the target rather than trying to make the target stay close to your arrow.

Training customers to use the product based on stories

Who better to work directly with a customer than the person or people who originally began conversations and documented the experience into user stories, descriptors and acceptance criteria? Rather than a requirement engineer handing to a business analyst handing to ... eventually a trainer ... what if someone capable of training was the person who originally conversed with the customer and documented the user stories?

When I train a customer, what is it that I'm training? How I want the customer to use the system? Perhaps. How the customer can use the system according to what they originally requested? Often. What is the common thread? What the customer can do in the system.

Consider this as a closed loop process ... converse, document, deliver, train. Condensed since I posit document is actually part of deliver ... converse, deliver, train. Further? Converse and Deliver.

What we need is someone capable of learning what the customer seeks, what it figuratively looks like when experienced, how it is validated as done or not done, and how to help the customer experience - the experience. What we don't need is someone brought in at the end of a lifecycle experience expected to get a clue, establish a relationship, and "bring it home". Constant inclusion, constant evolution.

Evaluating new system setups quickly based upon stories and acceptance criteria

After setting up a new customer installation, exactly what is it that we check to verify the setup is correct and complete? Sure, we may check number of files moved, directories present and accounted for, that the application actually launches and is accessible from outside the firewall, integration points and closed-loop data exchanges ... but what is it that we really check? We check that the system is usable from a customer perspective. And how is it that we check that the system is usable, i.e. upon what premise? In fact, we often use the most popular form of quality control ever devised .... a checklist.

And what, pray tell, composes the checklist? Ah, but if not for the stories, whatever would we be doing in the system after we log in? In fact, regardless of what we label it ... we are verifying that "a user can..." do a list of things.

And who is qualified to do this work? It is not singularly the trainer just before training. It surely is not the lone responsibility of the forlorn black box tester. The reality of the situation is ... again ... the person or people eliciting the stories through conversation and experience from the beginning of the journey are naturally equipped to verify the system "can" do what the user requested. We aren't ahead of the game by having dedicated resources who install systems ... and there is no particular magic involved. User stories once again. The title of the person doing the work is irrelevant. We are discussing roles and activities versus the historic titles and departments.
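A minimal sketch of that checklist, assuming stories captured along the lines of the earlier sketch - setup verification is nothing more than walking the "a user can..." list:

```python
# Hypothetical post-install checklist derived straight from the user stories:
# verification is simply "can a user do each thing we said a user can do?"
stories = [
    "A user can log in from outside the firewall",
    "A user can reset their own password",
    "A user can export a report to CSV",
]

def setup_checklist(stories: list[str]) -> list[str]:
    """Turn each story into a yes/no verification item for a new installation."""
    return [f"[ ] Verify: {s}" for s in stories]

for item in setup_checklist(stories):
    print(item)
```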

Sifting and prioritizing customer requests in context of the base and the future

Companies work diligently to move product to market. What should be in the product? Who will use the product? What is the edge? These are questions every company allegedly asks and answers in order to create the right solutions for customers and make money along the journey.

Fast-forwarding, what does one measure the product's future against? For companies that have a baseline set of requirements for their system ... the question of "do we change or not" is often measured against the base system definition. "Should we enhance what we have, or should we break off and create a brand new branch?"

For those companies managing legacy products, there may not exist a set of requirements - just groups of people with pools and years of experience and knowledge. The challenge in this situation is that what is important is relative to the loudest, most persuasive, most tenured person at the table. It is a very reasonable approach to filtering out the shoulds and should-nots. It is also less fact-based than one may desire.

When we have a core set of user stories, regardless of the age of the product, company and customer base, we have a measuring stick. "A user can..." do this and that. If we add these other three requests, it will change the fundamental principles of our application. If we do not add these three requests, we may not gain as much market share, but we will be solid experts in those things we do.

Have experienced experts. Have a measuring stick. Use user stories and have everyone keep their fingers on the pulse of these stories and their associated evolution. The user stories make money.

Knowing when you're done

This is a very simple conversation and very difficult action for many people to act upon ... knowing when to stop developing a solution.

Fundamentally, as suggested in Kent Beck's Test-Driven Development: By Example, first write the test and then code until the test passes. Then, you are done coding. Novel, and yet not practiced as often as necessary even today. It mitigates over-engineering a solution, helps manage money spent, and reduces complexity, thereby reducing defect insertion velocity and density.
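A minimal sketch of Beck's idea (the function here is hypothetical): the test exists first, and coding stops the moment the test passes.

```python
# Write the test first; it describes the behavior the customer asked for.
def test_full_name():
    assert full_name("Ada", "Lovelace") == "Ada Lovelace"

# Then write just enough code to make the test pass - and stop there.
def full_name(first: str, last: str) -> str:
    return f"{first} {last}"

test_full_name()  # passes, so we are done coding this behavior
```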

What if we were to practice this with solution delivery in general? What if we stopped delivering after all user stories could be executed as illustrated and associated acceptance criteria all pass? Who inherently has this knowledge? In many cases ... it is the requirement/business analyst, the tester, the trainer, sometimes the technical writer and could be the project manager. The definition of done is _not_ when we run out of tasks in our MS Project schedule. The definition of done is when the user stories are present and work.

We need people who know what these stories are, whether the software does or does not do them properly, and a relationship with a customer to evolve them.

We simply need a person who is willing to mold an application into user stories. It is very reasonable for people to have specialties and special interests. It is not reasonable to put all people with like specialties into special groups apart from one another.

Expect the manual tester, project manager, requirement analyst, business analyst, trainer and technical documentation writer to practice the same fundamental behaviors:

  1. understand what a customer is asking or seeking;
  2. document a short statement which represents that which is sought;
  3. describe a little about what it would look like if this sought experience were delivered; and
  4. articulate some short burst type statements, in the customer's own words, suggesting what would make this experience good versus stellar versus not so good.

Then evolve the customer and the stories with the software application, hand in hand, in two-week sprints - together. Eliminating phases eliminates the need for "specialists".

Do you really need more people? Or do you need the people you have to become more?

Symphony and Interplay

I recently watched an orchestra at the Lincoln Center in New York interpret a Mozart symphony.

Wolfgang Amadeus Mozart, born in Salzburg, Austria in 1756, created masterful works categorized variously from sonata to ballet, symphony to wind ensemble, chamber music to keyboard, and more - though they cannot be fairly reviewed in a mere blog such as this. To understate it, his works are considered masterpieces even today. Complex. Challenging. Multi-spectral. And a wee bit more intellectual than some of my favorite Van Halen with David Lee Roth or Sammy Hagar.

Symphony is of Greek derivation, built from two words: together, and sound (or sounding). Interesting, isn't it? Together, sound.

I watched this performance composed of reading music, hearing sound, seeing the pianist and conductor, reacting and evolving with a predictable cadence, together. Sounding. Together sounding.

On this particular eve, there was a guest pianist placed close to the audience, the conductor immediately behind, and the orchestra itself behind the maestro. Of interest to me was the fact that the guest pianist led the maestro, while the maestro led the orchestra. An interplay. Live. Real-time. Beautiful. I was the customer of this experience.

I watched intently as the maestro continued to listen to and watch the pianist while he led the orchestra. Whether the pianist paused, evolved a crescendo or decrescendo, or alternated themes or moods, the maestro interpreted the output and led the orchestra in such a way as to complement and supplement the pianist accordingly. Interesting enough.

Then I watched individual musicians reading their music, interpreting the music in front of them in context of the other written parts - 1st, 2nd, 3rd and perhaps 4th - within their instrument type, playing as but one individual and yet all the while balancing each contribution in context of what they saw, heard and felt from the pianist, the maestro, and the instrument group as a whole.

And again, circular, I watched as the pianist interpreted and performed music from his memory outwardly manifested by his physical choices on the keyboard contextual to his environment. The pianist heard the orchestral interpretation of his own interpretation and gave an infrequent glint of attention to the maestro -- silently speaking with each other through the experience. Mozart vicariously on stage for me.

Beautiful music. I enjoyed my time. I became part of the experience. Remember, I was the customer. I was not stressed or anxious. I worried about nothing, except perhaps that I might miss a single note somewhere amid everything that contributed to a good experience.

An example of a team. An example of a goal. An example of keeping the customer in mind.

The customer left happy, content, yet wanting for more.