How to Hold a Customer Hostage Part I

Purely fictitious and written for the fun of exploration, this post examines what it looks like when vendors of a product or service effectively hold a customer in a form of stasis: never really delivering, never really finishing, and never providing enough understanding for the customer to be happy, sad, or equipped to decide whether to stay or leave. The customer is simply left unfinished, bleeding money. Because of its size, this post is broken into multiple parts.

Part II
Part III
Part IV
Final Report

Here's what it looks like:

On January 1, 2000 - BioRecognition Co. (BioReCo) sends out 5 RFIs to various vendors, asking them to research and bid on a project to build software and a system that performs real-time facial scanning, collection, and recognition as people walk through an airport security checkpoint.

By January 10, 2000 - BioReCo has 4 responses, reviews them, prioritizes interest, and schedules meetings with each vendor - 2 per day over 2 consecutive days. The meetings go variably well: two vendors are dismissed on the spot, and two are invited to a follow-up meeting with more details.

By January 20, 2000 - BioReCo chose a vendor to research and develop the system in question based upon the professionalism of the reps at the meetings, resumes with flash-bang components, bench depth, big-name past customers, well-manicured presentations and materials, and an on/off-shore resource mix theoretically allowing BioReCo to manage acquisition costs to their liking. A 30-day contract was signed for 3 people at an average of $100/hour, roughly $72,000.00.

On February 1, 2000 - The engagement began with a roughly 30-day quick assessment of the project, which would deliver one document containing a scope, mission and objectives, and a project plan for the first 90 days. All planned work completed as scheduled, and the need for a project as communicated by the customer was validated. A new contract for 90 days was signed leveraging 10 people at an average of $100/hour, roughly $720,000.00.

On March 1, 2000 - Requirements elicitation commenced with tiger teams and geographically distributed meetings requiring constant travel, calendar adjustments, and team-building activities. This phase was to deliver the superset list of requirements, followed by a gap analysis against existing industry solutions, a revision of the requirements (prioritized), and a statement of work containing a high-level estimate of time, cost, and resource needs with an associated first-phase project plan.

On June 1, 2000 - The requirements elicitation work concluded with a superset document composed of industry and customer expectations, plus a list of additional new ideas to help set the company apart from competitors. A base project schedule was constructed as a list of tasks ordered by application architectural components versus the functional requirements, a timeline was estimated from the schedule, and costs were calculated from estimated environment and resource needs. The company knew this was the path to walk and signed a fixed-bid contract for the entire project schedule, resource list, and associated cost - without knowledge of risks, issues, contingency, actual resource costs, prioritized requirements for delivery, or a clear picture of what "it" would look like when "done". Step 1: prototype an architectural framework and purchase base infrastructure.

To date cost of acquisition: $792,000.00 + expenses
To date time elapsed: 5 months
To date return on investment: 6 documents (some >50 pages)
To date project perception: "It must be really hard to take this long."
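
The running totals above are simple burn-rate arithmetic. A minimal sketch, assuming 8 billable hours per person per day at the narrative's stated $100/hour blended rate:

```python
# Burn-rate arithmetic for the fictitious BioReCo engagement.
# Assumption: 8 billable hours per person per day at the stated $100/hour average.

HOURS_PER_DAY = 8
RATE = 100  # dollars per hour, blended across the vendor's staff

def phase_cost(people, days, rate=RATE):
    """Cost of one contract phase: people * days * hours/day * rate."""
    return people * days * HOURS_PER_DAY * rate

assessment = phase_cost(people=3, days=30)    # the initial 30-day contract
elicitation = phase_cost(people=10, days=90)  # the 90-day requirements contract

print(assessment)                # 72000
print(elicitation)               # 720000
print(assessment + elicitation)  # 792000 -- plus expenses, for 6 documents
```

Five months of burn, and the only deliverables the arithmetic can point at are documents.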

Looking to Matter

I sat in a coffee shop some time back working on various customer deliverables, some schoolwork, and other things of interest to me and those peering over my shoulder to see my screen.

I watched three people come into the coffee shop discussing various elements of work - nothing out of the ordinary to be seen or heard - so I went back about my business. I was eventually drawn back out of my work trance when I continued to hear one of the three speaking loudly, gesturing widely, and espousing fodder sprinkled with "I" seemingly every other word. Still nothing out of the ordinary, other than it was irritating to experience the unsolicited sharing of someone else's stories about themselves.

After a few minutes of trying to ignore the conversation, I felt the need to watch the behavior of these three people... one person was doing all of the talking - big, boisterous, loud, and starting most sentences with "I"; the second person stood silently facing the speaker, though I noticed her looking around as if semi-interested and scouting for other options at the same time; and the third person would interject a "yeah" or "hmmpf" periodically while looking around, studying mugs and coffee beans as if they held some secret requiring distinct concentration and effort. My guess is that this gathering of three was meant to be an informal social break from the day. As time went on, I noticed only the one was doing the talking.

Then it happened ... the loudest, most annoying ringer ever deviously loaded onto a cell phone by a vendor went off. I think I saw grout crack in the floor tiles. It rang three times, as if it weren't yet loud enough to draw its owner's attention to the fact that the phone was ringing. And who should answer it but the loudest speaker in the house, whom I had previously been observing. So now I, the coffee shop, and the other two parties to this fine gentleman were all enjoying the cell phone conversation, just as loud as ever. And what do you know, the conversation included truckloads of "I".

Finally, the phone call was over and we all returned to the previously unenjoyable loud conversation about "I". I think the guy's name was "I" ... or perhaps they were talking about another workmate named "I". Nevertheless, the drone was becoming normal, and then ... the cursed cell phone from that previously unnamed vendor went off again, and this time I know I saw grout crack across the width of the coffee shop. And then we were off again.

While working diligently to push this person into background noise so I could return to productivity, it occurred to me that this gentleman was really looking for validation. In fact, he was not so much interested in talking about himself as he was in having his workmates validate that he was important, or at the least, valued and valuable. Then it occurred to me ... our software delivery and support teams seek the same thing ... to know they are adding value, and that they are individually and uniquely valued.

When people feel valued and feel that their contributions to the team are valuable, they become part of the team and part of the solution. Conversely, when people do not feel individually valued, or feel that their contributions are not valued, they likely will not buy into the team's problems, solutions, and sense of urgency.

Like the gentleman at the coffee shop, those who do not feel valued or valuable may seek attention in other ways, such as being loud and boisterous within their teams, in meetings, and on phone calls, and exercising their right to disagree with others more often than is useful.

Maybe strife in a team has some direct or indirect relationship with whether members feel valued _and_ feel that they valuably contribute to an important problem. This seems to speak further to methods of leadership as well.

For teams led by prima donnas, dictators, and those with god complexes, the team may not feel valued, the individuals may not feel valued, and productivity and quality may be disappointing. In other words, the team becomes what it believes others view it as while at work. If they feel they are viewed as an intelligent team full of intelligent individuals, respected and capable of assessing situations and solving them, then that is the team that will exist. If, however, they feel they are viewed as "staff", "cogs" or "monkeys on the line" waiting for someone smarter than them to think and decide, then that is what the team will be, and will become. Interesting. Clearly there is more here to explore.

The next time one of us experiences a loud, boisterous, exceedingly energetic disruptor personality in meetings, phone calls, cafeterias and other places at work, maybe the problem is that said person in question has a hearing problem and needs a hearing aid. Maybe.

Another more plausible possibility is that said person is trying to find someplace where they feel valued, valuable, and that they are an important member of something bigger than themselves. People want to matter.

Maybe the problem is not a defective person, as we postulate when using the Jack Welch 20/70/10 method of trimming staff, where the top 20% are rewarded, the middle 70% are kept, and the bottom 10% are candidates for layoffs. Maybe the problem is defective leadership creating a defective team, which shows up as allegedly defective behaviors in individuals.

The next time I get interrupted at the coffee shop by a loud, boisterous, unbelievably disruptive person drawing all attention to themselves ... maybe I will be experiencing someone looking to matter at the coffee shop because they feel they do not matter at work.

Agile/XP Restaurants

It occurred to me one day whilst eating at Taki Japanese Steakhouse that whether the owners knew it or not, they were practicing many of the fundamental concepts of Agile/XP. Now I've never heard anyone at the restaurant use such words other than the people I may be lunching with at the time; though I've observed through countless visits their relative consistency in these practices.

Let me walk you through the experience.

1. Non-intrusive relationship building
Someone casually greets me nearly as soon as I get through the door onto the premises. Whether it be the actual host closest to the front door, or the sushi preparer furthest away from the door, someone always says "Hey", "What's up" or "How are you doing" before I've had opportunity to fully take in the surroundings. What's interesting is in some restaurants when I'm greeted, I feel like some unsolicited obligation was just placed on my shoulders and now I owe someone something for the attention. Here, I walk in anonymous, I become important for a moment with no obligations, and then they leave me alone to soak in the environment for a moment or two. The greeting is truly a "Hey, I'm glad to see you and glad you're here" ... without obligation, pressure, or being made to feel like I'm putting the host out by making him or her do their job. After I've had a moment, the same or different person, host or not, ordinarily asks me one question: "Where would you like to sit?" Often, someone simply hollers for me to go and sit anywhere I choose. And I do.

Relationships in an Agile/XP framework are low ceremony just like this. There is no formal diplomatic relationship between Mr. Barnaby Jones, President and CEO, and "The Team", with formal inquests for data utilizing two of each kind of resource in the company, in the pre-scheduled executive conference room with hors d'oeuvres, mint tea, and a ballroom dance. Quite the opposite: the relationship is of necessity and without baggage ... "Hi. My name is Matthew. I wondered if you'd like to spend some time teaching me about the problem you'd like to solve? I have some index cards and a pen. We can go anywhere you want for the conversation and it can be as long or short as you choose." This approach works at the restaurant and client site alike - because they are fundamentally the same in nature: simple, casual customer service.

2. Sprint planning meetings
Now after I choose a seat somewhere in the building, it is quite okay if I grab my own sushi menu or wait for someone to bring one over. Furthermore, I have the option of walking around looking at the sushi bar, talking to the chefs, reading the assorted menus, staring at the fish, or simply sitting down immediately. There are no clear expectations, but I have many options - start now, start later. Talk to anyone or be silent. The customer:restaurant relationship is one such that the sprint planning meeting, i.e. the beginning of this particular eating experience, starts when I want as the customer, on my terms, in the order I choose.

In nearly every case at this particular restaurant, with few exceptions, the minute I sit down someone promptly asks me what I'd like to drink, whether I'd like one of two soups, and which of two dressing options I'd like for a salad. You see, I have options. Some days I choose no salad and one soup. Other days I choose both soup and salad, but I have options to vary which soup and salad combination is appealing to me for that particular visit.

Agile planning meetings are customer-driven just like this. The point of delivering software is to solve a customer need and/or problem. When we sit down with the customer, at the customer's schedule and convenience, it is the customer who chooses what items we discuss and in what order we discuss them. Now, we may want to add a pinch of risk and value considerations into the conversation, but the meeting is driven by the customer. When at the restaurant, I have the choice of discussing and choosing drinks, soups, salads, menus, location, and timing. When we sit down with customers to plan the next sprint, they have the choice of what gets delivered next and in what order. If we practice two-week delivery sprints, there is likely no need to discuss 'time', as this window is far shorter than current industry practice in many companies anyway. The point is ... the customer drives the experience at each planning meeting. Were the restaurant to note my requests on the first visit and then lock me into those choices every time I visited thereafter, I'd stop going back. My tastes, preferences, moods, and desires for sushi, soup, and salad change weekly. Thankfully, so does the restaurant.

Ah, one beautifully Agile-like thing to note - when I make my drink, soup, and salad choices, the server leaves immediately. In less than 3-4 minutes (yes, I clocked it multiple times, with witnesses), the server brings back our selections for this portion of the experience - for the entire table. I don't have to wait 18 months for a prototype; I asked for something now and they delivered something now. They deliver something right now to instill confidence in me that they are capable of delivering at all. Furthermore, if I receive something right now ... I become more patient for those things I expect later. And what could possibly be more enticing? The first delivery, the soup and salad, is free.

3. Delivery Sprints
Now ordinarily at many restaurants I've experienced, while a salad and drink may or may not show up in a timely manner, the entrees are usually delivered in one big bang for the table. The variables impacting how quickly and thoroughly deliveries are made ordinarily include the number of people placing orders, orders en queue, number of servers, number of chefs, the complexity of the item(s) ordered, number of people at your table and so on. For any one or more reasons, one could experience food in fifteen minutes or fifty depending upon the situation.

This simply is not so at the sushi restaurant under scrutiny. On any particular day, I may order two or more sushi rolls depending on the day, the mood, and the hunger. As well, if with another person or two, they order two or more sushi rolls (approximately 8 pieces per roll for non-sushi-ites). Were we at standard and traditional restaurants, we wouldn't see this food until all food was prepared and then delivered at the same time. At this restaurant however, all orders are queued and prioritized for the chefs, by the chefs, in order to deliver quickly. Then, depending upon how busy they are, the chefs will make one roll per customer per table and the first wave of rolls is delivered immediately thereafter.

Most often this is not our entire order. However, they continue to keep me happy because they gave me food immediately upon entry (the soup and salad), and then they deliver a portion of my entree (the first roll) thereafter. Often, I'm barely done with my soup before the first roll shows up. And in nearly all cases, the second roll shows up before I'm finished with the first. Of course there are variations in here based upon complexity of roll, number of orders, number of people sharing or individually ordering and so forth. The point is ... I as the customer do not have to wait.

Agile practices expect teams to deliver incrementally and quickly just like this. When a customer places an order of ten user stories, and maybe even prioritizes which ones they would like first, we do not need to wait until "all" work is done before delivering anything. More appropriately, we'll take a look at the order and priority, determine what we will deliver first, and deliver it. The difference between this idea and common software practice for many teams today is that we are discussing what can be delivered fully functional, fully usable, and fully tested in a two-week sprint - not a 6-month requirements elicitation phase generating a document that discusses what will be delivered in the next 6 months.
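
The "take the highest-priority work that fits, deliver it, repeat" idea above can be sketched in a few lines. This is a hypothetical illustration only: the story names and point estimates are invented, and the greedy capacity fill is just one simple way a team might pick a sprint's worth of work from a customer-prioritized backlog:

```python
# Select the next sprint's work from a prioritized backlog without waiting
# for "all" work to be plannable: take stories in customer priority order
# until the team's estimated capacity for the sprint is filled.

def plan_sprint(backlog, capacity):
    """backlog: list of (story, points) pairs, already customer-prioritized."""
    sprint, remaining = [], capacity
    for story, points in backlog:
        if points <= remaining:  # skip stories that don't fit this sprint
            sprint.append(story)
            remaining -= points
    return sprint

# Invented example stories, loosely themed on the BioReCo narrative.
backlog = [
    ("scan face at checkpoint", 5),
    ("match against watchlist", 8),
    ("audit log of matches", 3),
    ("nightly model retraining", 13),
]

print(plan_sprint(backlog, capacity=16))
# ['scan face at checkpoint', 'match against watchlist', 'audit log of matches']
```

Whatever doesn't fit stays on the backlog for the next planning meeting, where the customer may well reprioritize it.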

Most restaurants understand that to keep customers, customers must be happy. They know that customers come to the restaurant because they are hungry, not because they wanted to see the cook or talk about how difficult it will be to prepare the food. When customers order food, they want food now rather than the promise of food later. Software and systems customers are no different.

4. Pairing+, Backlogging and Prioritization
At this particular restaurant, were you to walk by the sushi bar, you would see 1-3 sushi chefs working quickly and efficiently on any number of orders at the same time. Looking a bit deeper, what we really see is all of the orders laid out side by side in front of the chefs, in order of arrival. Now, what's interesting is that one entire order is not created at the same time. In fact, if there were 10 sushi orders in front of the chefs, the orders would become mentally indexed by them, creating a kind of mental, prioritized backlog based upon a) customer arrival time (first come, first served), and b) what work is going on at the time. Let me explain.

Let's say there are six orders split across three tables: 1 california, 1 spicy tuna, 2 tempura salmon, 1 eel, and 1 rainbow roll. If the upline sushi chef (the first reviewer of incoming orders, and usually the senior) notices there are two orders of tempura salmon at two different tables, he'll ask a downline chef to pick up the order for two tempura salmon rolls, and it will be done. At the same time, the lead chef notices the order for a rainbow roll from yet a different table - which happens to have salmon in it - and he asks the same chef currently building the tempura salmon rolls to build the rainbow, since the salmon is already out and being prepared. The remaining rolls are each unique in ingredients, but still share common denominators, i.e. rice and seaweed/rice paper/soy paper. So either the primary or another chef will build the base infrastructure for the remaining california, eel, and spicy tuna rolls concurrently, and then create the variations thereafter. In practice, the three rolls picked up by Chef B and the three by Chef A should all finish around the same time; but remember, they do not wait until all is done to big-bang the meal. Thankfully, they do not build out the entire base infrastructure for all possible orders and only then begin building the details. The first roll done is delivered. The next roll done is delivered, and so on. A form of spiking the system, if you will, as mentioned in some of Mike Cohn's work in Agile Estimating and Planning.
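
The chefs' mental indexing above amounts to grouping the queued orders by a shared ingredient while preserving arrival order, so one chef can prepare related rolls together. A rough sketch using the rolls from the example; assigning a single "key ingredient" per roll is a simplifying assumption for illustration:

```python
# A rough sketch of the chefs' heuristic: group queued orders that share
# a key ingredient so one chef can prepare them together, rather than
# building each order strictly one-at-a-time in arrival sequence.

from collections import defaultdict

# (roll, key_ingredient) pairs in customer arrival order -- the "backlog".
orders = [
    ("california", "crab"),
    ("spicy tuna", "tuna"),
    ("tempura salmon", "salmon"),
    ("tempura salmon", "salmon"),
    ("eel", "eel"),
    ("rainbow", "salmon"),  # the rainbow roll also uses salmon
]

def batch_by_ingredient(orders):
    """Index the backlog by shared ingredient, preserving arrival order."""
    batches = defaultdict(list)
    for roll, ingredient in orders:
        batches[ingredient].append(roll)
    return dict(batches)

batches = batch_by_ingredient(orders)
print(batches["salmon"])  # ['tempura salmon', 'tempura salmon', 'rainbow']
```

The salmon batch goes to one chef while another builds the base for the remaining rolls, and each finished roll ships as soon as it is done rather than waiting for the whole order.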

And they do this in real time. There is no time for meetings to discuss work - work is performed in real time, adjustments are made in real time, and the team picks up whatever work is necessary to make that particular batch of orders (sprints) available to the customers.

Agile teams make real time assessments, prioritizations and indexes of all known backlog elements at the time of review just like these sushi chefs. There is a backlog and ideally the customer prioritizes what they want first; but the technologists (chefs) are expected to manage customer value with delivery risk at the same time, so order of construction will be balanced between customer desire and risk of delivery (technology or food quality). Thankfully, the chefs have one backlog, else the probability of my order getting lost increases with every additional backlog from which the cooks pull and execute work.

Agile teams pair up when necessary just like the sushi chefs. When in need of help getting the work done, mentoring, accountability to quality, teaching, timeliness and efficiency, pairing up with a peer or even more experienced person is immeasurably valuable. Having a team member sit beside you and share the responsibility helps both people improve in real-time versus delivering something to alleged completion, and then receiving significant rework feedback at a later date. Rather than sending a sushi chef to endless hours of coursework, they learn through pairing. Rather than sending a technologist to days or weeks of class, pair them with someone.

A couple of mild differences to note in a sushi versus software construction environment:

  • Regarding sprint planning versus real-time assessment activities: In the sushi environment, a single person reviews all incoming orders and distributes the work in real time; whereas in the software environment, during an agile sprint planning session, all team members review the work and sign up for what they will do during the upcoming sprint. Note the difference is really based upon the work product and customer expectations. For example, I want my food in under 30 minutes, as opposed to in a 2-week sprint. Having multiple 30-minute sprint meetings would delay my food to perhaps 2 weeks - to which, of course, I stand opposed. The solution to delivering is context-driven in both examples. Both are sprints; both are implemented contextually to the problem statement.
  • Regarding pairing: In the sushi environment, pairing is used to teach and to hold accountability to the quality, timeliness, and efficiency expectations held by the senior sushi chef and restaurant owner. Sometimes pairs are used to perform concurrent delivery work for the same sprint. In the software environment, pairing is used for education, quality, timeliness, and efficiency in developing a solution for a customer-driven sprint. However, the purpose and use of pairing in no way pays homage to Mythical Man-Month logic, whereby if one person can do two rolls in 5 minutes, 2 people should be able to do 4 rolls, and so forth. Sushi and software stop showing similarities exactly right here. Something to note.

5. Review meetings and demos
Nearly every time a server passes my table thereafter, someone asks if everything meets expectations - and this includes servers _not_ taking care of my table. People are constantly asking if the food is good or warm, whether it meets expectations, and whether I'd like more drink, less drink, a different drink, or napkins (I'm messy). Social conversation occurs too - just asking how the day is, what is planned for the afternoon, and so on. The informal, relaxed relational conversation continues throughout without pomp, circumstance, high ceremony, or the tedious dance of the "Where's my waiter?" game. The experience is constantly being checked upon.

Agile teams informally and constantly check with customers for acceptability - and specifically, whether change is desired at any point along the journey - just like this. In the customer environment, change sometimes happens with or without the customer's desires or plans. As product and service vendors, Agile teams roll with the punches accordingly. A new change comes in? Prioritize it within the next sprint and deliver it as requested. How much more expensive would eating at a restaurant be if we were charged every time we changed our minds about food, drink, temperature, dessert, or seating arrangements?

Agile teams have review meetings and demos very frequently:

  • Deliver a portion now, more immediately thereafter, and more immediately thereafter, etc.;
  • Ask if it is what was ordered (perception v reality);
  • Ask if it meets expectations;
  • Talk about it if desired; and
  • Make changes to the next delivery iteration as requested.

At the end of every review experience with the customer, leave knowing what is most important to the customer next, and deliver it. When it is ready, show it to them and invite reaction. How often is contextual to what is being delivered, but no more than 30-day increments - two weeks being ideal.

And as far as the sushi restaurant? They're Agile and don't even know it. What a great problem to have. How many software solution providers out there have this problem? Or is it a strength? You decide. As for me, I'm thankful my favorite sushi restaurant is an Agile practitioner.

The Value of an Identity-less Team

Common to many organizations is the desire to have a department or group with like skills practicing like behaviors. For example, the "requirements group" or the "development group" and so on. I do not know the origins of this model, though it is indeed logical to put like people with like people. It is simply a logical thought similar to the way most of us categorize and organize our clothing in closets, silverware in utensil drawers, and food stuff in cabinets. This is what we do - we categorize (label) and organize associatively. Putting like people together into departments with like labels also tends to make sense in order to offer people a career path and differentiate skills and experience in terms of compensation and advancement triggers.

This model ordinarily gets extended into the operational machinery of a company by how we tend to staff and manage projects. One example is our tendency to estimate projects not only by task, but by role by task - often calculating capacity based upon an individual.

Another example: a department may take whatever one group says and multiply it by a mathematical factor and be content to say "if development says X, then I say X*0.75 by standard".

As discussed in an earlier post titled Compressing Software Lifecycles by Eliminating Lines of War, we see yet another example in how projects are structured: projects ordinarily have someone running the project called a "Project Manager", who then staffs and manages the project based upon labels, aka titles.

If we look at many statements of work or contracts in the marketplace, we tend to see estimates and rates broken down by role, with variability within each role associated to seniority or "junior-ity". The label identifies the chargeable cost based upon the title; but what does it really mean? Some value gets associated with the title, which then produces some equivalent hourly bill rate. If it were merely a method of identifying the roles utilized to complete work, it would be fine. Often, though, each label in the contract is actually a title with a person's name associated to it (whether publicly or privately communicated).

It is time to change our model.

I posit that the reason many teams do not work well together is company organizational models. The practice of putting people into buckets, assigning hierarchy within buckets, and then expecting them to "team" with others is counter-intuitive and runs contrary to how their compensation-based performance is measured. Team members are not ineffective by choice; they are ineffective because companies do not understand how their organizational structure impacts operational effectiveness.

If the organization creates "classes" or other differentiators between people groups within corporate society, then that is the way people see themselves and each other. Later when said people are on a particular project, they do not show up as solution providers serving a customer; rather they show up as their corporate societal class or label expects.

Is it a reasonable hypothesis that waterfall processes are merely child products of poorly thought out organizational models?

I am my label. You are your label. Our labels together will deliver a project solution as long as you do the stuff associated with your label, and then I will do mine.

What would happen if we did not have departments and labels? Aside from the uproar from people who believe they can label and organize people like their sock drawer at home, we would eliminate many territorial boundary problems which plague our teams with fruitless bureaucracy and latency.

What does an identity-less solution delivery team really look like?

  1. Everyone has equal license to perform any role, on any task, at any time, for any reason versus certain people only being licensed to perform certain tasks when it is their turn
  2. Everyone stands at the marker board or window at the same time, each with marker in hand, collaboratively talking, writing and solving together versus someone being designated "the leader" and everyone else waiting to be called upon
  3. The focus of the group is to identify what types of work must be performed to solve the problem at hand versus thinking of "who" should be involved, when, and how to get them
  4. Everyone has the proactive responsibility to participate and provide immediate and direct feedback on everything all of the time versus waiting until it is their turn, then logging the observations into a database and sending feedback through a process tunnel
  5. Everyone is equipped and licensed to answer questions at any time from anyone on any subject versus queuing and directing certain questions to certain people on certain timeframes
  6. The team ignores requests for titled people and immediately begins seeking an understanding of the problem versus receiving a request for "John, the Architect", then backing away from the need, and telling John he has a phone call on Line 1
  7. Each team collaboratively identifies the work, signs up to do the work, and delivers the work versus assigning someone to be responsible for motivating and managing the team
  8. Each team member focuses on skills necessary to create solutions versus "who"

Human Resource career-pathing plans, departments composed of like people, resource managers responsible for managing like people, and project managers requesting people's names or HR titles for projects are all products of corporate culture - and all unwittingly perpetuate team breakdown. As long as our conversation includes identifying a requirements person over the bridge, a developer through the woods, and a tester over yonder - we are focusing on the wrong problem.

Try this at the next planning meeting:
  1. What is the problem needing a solution?
  2. What tasks and roles are necessary to get there?
  3. In what order should these things be prioritized considering risk and value?
  4. Who wants to sign up to address each task?
  5. What do we think we can get done in a pre-defined timeframe like two weeks?
  6. Go.

Rather than having departments by role-type managed by resource managers, who farm out their department personnel into "matrixed" organizations, overhaul the system and change the paradigm. It works. Try it.

  1. Measure teams versus individuals and departments.
  2. Deliver whole solutions to customers versus bits and pieces between departments.
  3. Encourage self-directed teams versus third-party management.
  4. Let the operational structure be the organizational structure versus creating an organizational structure and figuring out how to get it operationalized.

Customers don't care how many departments we have or how many people are in each department with associated hierarchies. Customers care that we deliver.

Why ordinary PM thought doesn't work

As I've discussed in another post, Expecting Abnormal Human Behavior, we often practice different human behavior in our everyday private lives than we expect of ourselves and others at the workplace. At home, we plan the high points and roll with whatever exceptions are thrown the rest of the day. At work, we attempt to plan everything just short of human physiological needs. I think part of our difficulty and challenge delivering software day in and day out is not that we are miserable at doing good, solid work - but rather that we fight natural human behavior. Somehow we've fallen prey to thinking that we can and will control natural human tendencies for the eight to ten hours per each day while in our work space by using a framework that makes little or no sense contextual to the expected and needed team behaviors. On the one hand we have a team of people with natural behavioral tendencies; on the other, a framework which disregards said tendencies and attempts to put in rigor where it only makes sense to those selling the real estate.

Project Management practices are an abstraction layer from reality. Based upon today's currently popular and accepted practices, they will always be N steps behind and discussing the wrong problems.

I'll cover more regarding positively leveraging human behavioral tendencies in a later post. Let's look at some of the common project management practices we experience in the software space today.

Common Practice #1 - Identify All Steps on Day 1

Project Management technique expects a team to identify all possible components and tasks required to deliver a particular project - up front, regardless the length of the project measured in weeks, months or years. Ideally and logically of course, the software project has a defined goal or definition of "done" telling us where we are heading. However, for software, where customers don't know what they want until they see what they don't want, where the market and competitors shift on a daily basis, where requirements shift in priority and relevance as more knowledge is gained, knowing what is important is a changing versus fixed tide pattern. Writing a project schedule up front and then re-baselining it and managing change with scope requests focuses effort on discussing work, not actual work. Of course it depends on where we want to spend our money.

We simply don't know what we don't know until we have more data. To posit the steps it will take to integrate a software system with a particular database is far different than suggesting the software system will get integrated with a yet to be determined third party content provider. One data set is relatively fixed while the other is purely a placeholder for discovery. We may have an idea how big or complex it may be based upon past collective experiences, but we simply do not know what steps it may take until we lift the hood. Asking a team to identify all steps necessary to do the unknown is an exercise in futility. Doing the work is far more valuable than discussing what the work might entail. "Had I only known building this [thing] would have taken so long and been so painful, I never would have done it." Unless we've done this work before, we simply postulate what it might be like, but do not know with fact. Ironically, project schedules are often created from end to end as if they only contain fact. It simply is not so.

Common Practice #2 - Identify Estimates on Day 1

I think some of the best currently published thought on "how long" something should take is contained in Mike Cohn's book titled Agile Estimating and Planning. Aside from the pertinent details one should glean from the book _after_ purchasing it - we note a conversation on sizing.

In many, many cases a project management expectation suggests that a team should not only identify all steps to get from "here to there" on Day 1, but said steps must be accompanied with estimated hours to get the work done to completion. Now, if we have about two weeks of work in front of us, we can reasonably assert the steps between here and there and how much time we _might_ spend getting the work to some state of done-ness. However, if we have a list of tasks in front of us spanning months or years, the farther away from "today" we get, the more abstract our reasoning must forcibly become in order to assert time. When we calculate time in the next two weeks, we can put one element in context of surrounding elements for some sort of relative thought. Were we to consider a particular task or individual element months into the future, we really have no concrete understanding of its surrounding context - so, we're forced to evaluate each task in and of itself, perhaps even in context of a sub-group of surrounding tasks - which leads us to Mike Cohn's conversation of relative sizing.

We may not always know how long something will take, but we are usually pretty good at estimating how big something is in relation to something else. "We've never done this particular thing before; but when we did 'X' and 'Y' for Client ABC sometime back, it was bigger than we thought, and far more painful. This thing looks very similar." The challenge lies in the fact that Project Managers and associated traditional project scheduling tools request sizing in days and hours in order to build a project plan with an end date. This stimulates people to estimate how long it might take, then to add extra time to it simply to offset the risk of being wrong. There are exercises in Ken Schwaber's Control Chaos ScrumMaster classwork, as well as that offered by Mike Cohn of Mountain Goat Software, to reinforce the following idea: the first estimate has the probability of being wrong because we simply don't know how long something might take; this is then compounded by estimating "what if" scenarios thereafter, adding it to the original number and calling it contingency. Is it any wonder project schedules are off-budget, let alone off-time? The only way to manage this problem by current practices is by cutting scope. We set ourselves up for the problem and then solve it incorrectly at the expense of the customer not getting what was requested.
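The compounding effect of per-task contingency is easy to see with a few made-up numbers. Here is a minimal sketch; the estimates and the 50% pad are entirely hypothetical, purely for illustration:

```python
# Sketch: how per-task contingency inflates a schedule before any work begins.
# The raw estimates and the 50% pad are hypothetical numbers.

def padded_total(estimates, contingency=0.5):
    """Pad every raw estimate 'just in case', then sum - the common practice."""
    return sum(hours * (1 + contingency) for hours in estimates)

raw_estimates = [16, 40, 24, 8]        # honest first guesses, in hours
print(sum(raw_estimates))              # 88 hours of guessed work
print(padded_total(raw_estimates))     # 132.0 hours once every guess is padded
```

Every task carries the pad whether it needs it or not, so the schedule is inflated on Day 1 - and the raw guesses were already uncertain to begin with.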

Common Practice #3 - Allocate resource availability by %

One project model expects that all resources are fully dedicated to said project for the life of the journey. Another model expects there is a core of people fully dedicated, and then some referential people who are partially allocated across time based upon availability or cost constraints. Both models ordinarily practiced today assume a project is defined by the beginning and end dates automagically calculated in project scheduling tools. So it stands to reason for any project longer than two weeks or so, resources simply are or are not available. So, the same resources are allocated into the project schedule at arbitrary, semi-arbitrary or mathematically derived hours per day or week. What's missing? After it is in the project schedule, if the tasks suggest the work is done, but the work is in fact not done ... do we go get more % of said resource? Doesn't seem too complicated perhaps, unless a centralized DBA or Sys Admin team is allocated by % to multiple projects simultaneously.

Ah, resource contention. The very practice of asking for a single resource to be allocated to a multi-month project that is one long delivery bubble, or by allocating resources particular percentages across time perpetuates the problem - and solves the wrong problem. The issue is _not_ that "Joe" is not available more than five hours a week; the issue is that roles are assigned to people names. If DBA=Joe, and there are five projects - Joe is the bottleneck, gets put on the risk and/or issue list, and is constantly caught in the middle. Wrong problem. If however, roles required for said delivery bubble are DBA, SysAdmin, Developer and so on, we begin discussing work to be done rather than "who".
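The bottleneck can be made concrete with a small sketch. The name "Joe", the pool members, and the project list below are all invented for illustration:

```python
# Sketch: binding a role to one name makes that name the contention point.
# "Joe", "Ana", "Raj" and the project names are hypothetical.
from collections import Counter

projects = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon"]

# Name-based allocation: DBA == Joe, on every project.
name_based = {project: "Joe" for project in projects}
contention = Counter(name_based.values())
print(contention["Joe"])    # 5 projects queued behind one person

# Role-based allocation: the work is labeled "DBA work"; anyone in the
# qualified pool may sign up, so no single name is the bottleneck.
dba_pool = {"Joe", "Ana", "Raj"}
print(len(dba_pool))        # 3 people able to pull the same work
```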

It is not the problem of the Project Manager to determine "who", but rather that of the team doing the work. When we spend money negotiating percentages of people, we aren't delivering. For those teams suggesting there are simply not enough resources to get the work done based upon skillsets - cross-train existing people to perform multiple roles and deliver in smaller bites similar to the way most people eat when dining, one bite at a time.

Common Practice #4 - Building a Gantt chart

Gantt charts are wrong the minute they are conceived. Why? Well, if we spend enough time looking at a twenty-page Gantt printout after we've taped it to the wall, we oddly observe sequential, waterfall-like thought. The only time the chart actually reports anything useful is never. The problem we have is that the chart suggests "A" occurs before "B", then "C" and so on. It also suggests time for every progress bar in the picture which leads one to posit that if the angle is going down, and page 20 represents the end, when we get to the lower right-hand corner of the picture we are done.

Actually it has nothing to do with being done; in fact, it merely reports pictorially what we've constructed as the work-breakdown structure in the project schedule. It reports what is done in relation to what is planned to be done based upon tasks in the schedule, but doesn't tell us anything about what is truly left or what it truly took to get there. If we look at the chart, it really is something cool and evidences the complexity in doing any sort of project. Much thinking is necessary, not only on Day 1, but all along the journey. Aside from the wow factor, it teaches people the wrong message (that projects are sequential and downward sloped), does not show the complex iterative interactivity necessary within the teams to actually get something done, always talks about the project in arrears, and doesn't evidence the priority or complexity of one task over another. After someone creates a Gantt chart by which to illustrate the plan and associated progress, we become constrained to the picture, not the reality.

What are the problems with these practices?

Software project management practices ordinarily expect team members, customers and companies to predict the future. When change occurs, the project schedule is re-baselined and a scope change request is drawn up to gain approval for the change. At the end of the project, any deviation from the original baseline is considered to be scope creep and the success of the project is called into question based upon deviation from the time and cost baseline. The pressure to predict the future well in advance is perpetuated by project management practices that do not solve problems, but merely talk about them. Not wanting to be reported as a deviation, teams spend more time trying to predict the future with more accuracy instead of focusing on better delivery practices. The fact is, we are not capable of prediction and while this expectation exists, teams are influenced to solve the wrong problems.

So does this then suggest Project Managers themselves are non-useful? Absolutely not. In fact, Project Managers, just like anyone else employed under contract for pay on a delivery team, are working diligently to meet expectations placed upon them. The communicated expectations are ordinarily something like, "You are the Project Manager. Make sure that project gets delivered on time no matter what!" or "You have $10 US and two days, get it done and don't take no for an answer." Usually something hubristic, fatalistic, and projecting the assumption that it can be done as charged. The conversation is not about the Project Manager. The conversation is about the role of Project Management. What should it really be?

  • Project Management practices measure "done" based upon project schedule line items versus the stateful evolution of software evidenced on a regular basis through sprints and demos. In fact, the project schedule can report "done" while the software is but a third evolved with weeks or months remaining.
  • Some project managers actually have no idea what it takes at the rivets and bolts level to deliver software and so are unable to put change in context of reality. "If it is a deviation from baseline, it is a deviation."
  • Status report-type questions are often built upon the project schedule as the single point of reference for all activity and thought. As expected, teams answer the questions asked - even when the questions are wrong and non-meaningful because the measure is assumed to be the project management report, not the work.
  • Risk is managed through "margin of error" or "contingency" calculations instead of through delivery methods such as sprints, iterations and bursts.

These are only some of the challenges. What are immediately accessible alternatives to straight-up Project Management practices? Agile Project Management with Scrum by Ken Schwaber, along with its companion, Agile Software Development with Scrum by Ken Schwaber and Mike Beedle. The focus is shifted from a Project Manager in charge of all things software, to equipping teams to identify, solve and deliver by using different team constructs, different delivery methods, and different methods of estimation. It makes sense that the team doing the work be the team discussing the work rather than a proxy.

"Project Management" is not the solution to delivering quality software, building better teams, managing risk and complexity, providing value to customers and clients, or even hitting budget and time constraints. Building self-contained, self-managing teams is the solution to better software. We must change how we evolve software and it must start by changing the idea and use of project management.

Expect teams to identify and solve problems rather than expecting someone not actually involved in the work to insert themselves, add value, and provide meaningful reports of work someone else is actually performing. Project Managers are unnecessarily put into the position to predict, report and hold accountable others doing the work. Why?

Expecting Abnormal Human Behavior

Assumptions for this conversation:

  • 1 human year = 52 weeks
  • 1 human week = 7 days (~365 days/year)
  • 1 human day = 24 hours (~8760 hours/year)
  • 1 work year = 50 weeks
  • 1 work week = 5 days (~250 days/year)
  • 1 work day = 8 hours (~2000 hours/year)
This suggests to us:

  • 1 person allocates 33.3% of each work day working for pay (8h/24h*100)
  • 1 person allocates 23% of each working year for pay (2000h/8760h*100)
So, if on an average calendar year, we get approximately 23% of an individual, how should we best make use of their attention and effort?
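The arithmetic above can be checked directly, using the ~8,760-hour calendar year from the assumptions:

```python
# Re-deriving the percentages from the stated assumptions.
WORK_HOURS_PER_YEAR = 50 * 5 * 8        # 50 weeks x 5 days x 8 hours = 2000
CALENDAR_HOURS_PER_YEAR = 365 * 24      # = 8760

daily_share = 8 / 24 * 100                                     # share of each day
yearly_share = WORK_HOURS_PER_YEAR / CALENDAR_HOURS_PER_YEAR * 100

print(round(daily_share, 1))    # 33.3
print(round(yearly_share, 1))   # 22.8, i.e. roughly 23%
```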

If we were to study the 77% of the calendar year an employee spends in their private life, we'd find roughly another third of each day allocated to sleep. And the remaining third? Living.

And just how do they live you ask? We should but look in the mirror.

Think about this, a thirty-year old woman just started at your company. She is technically one day old at said company; however, she is thirty years old (young) in her life. She comes to the company with prior experiences, predispositions, declared opinions, education, successes and failures.

Is it the responsibility of the new employee to figure out how to fit into the culture of the company in which she is employed? Or is it the responsibility of the company to figure out how to make best use of the new employee?

Yes, the easy answer is both as they provide reciprocal services (work for paycheck:paycheck for work). However, many of our company cultural practices run contrary to the personal lives of our employees. For example, prediction.

For any project, we diligently work to predict the future by:

  • outlining all possible requirements
  • outlining all possible tasks to deliver said requirements
  • outlining all possible dependencies
  • calculating effort hours for all tasks (even those not scheduled to occur for six+ months)
  • assigning people to tasks in percentages (such as Joe being assigned 25% capacity)
All of this math would be completely logical if it weren't for the fact that we are unable to predict the future. So at work, in order to manage this risk, we add contingency calculations to everything "just in case". We don't do this in our personal lives to such an extent, if at all.

In our personal lives, we don't know for sure if the University of Illinois will have a good football season this year, if our $30,000 car will last for five years, if all invitees will show up to the birthday party, if we'll get an "A" in any particular class, or if our children will make it through elementary school without bone breaks. We plan for the primary games, choose the most important components of the car, coordinate the party logistics, make sure we nail the major papers and tests, and try and educate our children on good decision-making. Yet, we are guaranteed nothing.

Most of us make no attempt to identify all intricate details for our vacations or to calculate contingency based upon risky dependencies. We live life knowing full well it will happen with or without us and often in a manner not predicted. This is not to suggest we should not plan or not think through a "Plan B". Rather it is suggesting that we simply cannot predict with 100% confidence. And so, we don't.

For any particular person who spends 77% of their calendar year thinking and solving problems for themselves in their own manner on their own time, why do we expect they will overhaul their natural human tendencies and practices at all, let alone well enough to be something different while at a place of work? How do we know work expectations are right?

Maybe we should consider placing greater value on understanding and leveraging natural human behavior versus trying to control it? After all, human history tells us over and over again that the human spirit is not ever stopped or controlled, but rather stimulated in one direction or another.

What if we worked diligently to understand why people are different? What if we worked to understand their strengths, weaknesses, goals and aspirations? What if we recognized that people are humans coming to work to earn money to go home and continue being humans?

Why do we expect people to be different at work than they are at home? Maybe we have it backwards.


The average adult heartbeat is approximately 72 beats per minute. Assuming a full 24-hour day we can calculate a little over 103,000 beats per day. Sure there are variations in the rhythm due to hopefully anomalous circumstances like stress, but overall ... 103k beats per day for life ... and with every heartbeat, approximately 2.4 oz of blood are pumped. That's a lot of blood. We know that with a regular rhythm comes the possibility of healthful, productive life; with an irregular rhythm comes the possibility of temporary or permanent bodily damage, i.e. loss; and the worst case of no rhythm existing brings about eventual, but certain death. We know the pulsating beat of our heart and associated flow of blood throughout our body is good when we are able to take it for granted. Predictable, repeatable, healthy ... assumed.
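The heartbeat arithmetic works out as follows; the 72 bpm and 2.4 oz figures are the approximations used in the text:

```python
# Spelling out the heartbeat math from the paragraph above.
BEATS_PER_MINUTE = 72
OUNCES_PER_BEAT = 2.4                    # approximate volume per beat, in fl oz

beats_per_day = BEATS_PER_MINUTE * 60 * 24
gallons_per_day = beats_per_day * OUNCES_PER_BEAT / 128   # 128 fl oz per gallon

print(beats_per_day)             # 103680 - "a little over 103,000"
print(round(gallons_per_day))    # 1944 gallons pumped per day
```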

Likened to the naturally beating heartbeat, i.e. pulse, there is the metronome ... a constructed tool for no greater calling and purpose than to simulate a predictable, repeatable pattern within the music, helping teach a music student the importance of constancy. Music, being composed of a collection of notes, is ordinarily aggregated into one measure at a time according to time signatures such as 2/2, 2/4, 3/4, 4/4, 6/8 and so on. This framework helps us understand how many beats per measure are expected to occur. For example, the first of Ludwig van Beethoven's Bagatelles, Opus 33, for the piano is written in 6/8 time, requiring of us 6 beats per measure - regardless the number of notes necessary to accomplish this feat. And one level higher than the time signature is the tempo - telling us how many beats per minute are expected for any particular musical piece. If the time signature is 4/4 (4 beats per measure) and the tempo is declared at 112 beats per minute ... by math alone do we conclude that 28 measures of music per minute must be played. And if the mood of the music varies throughout the piece, whether playing allegro (fast) or adagissimo (very slowly), we know the music will move faster or slower than the 112 beats per minute, but will always use 112 as the base tempo. So we have the number of beats per measure and the number of beats per minute, and we have expected variation built on a consistent base. First a cadence. Second a variation upon the cadence; but always the cadence.

Why don't we hear anyone discussing these things at concerts? Answer: because it is assumed, i.e. unspoken. The only times we hear any talk of tempo or time signature is while someone is learning the music, and when someone is playing it poorly or otherwise inconsistently. Without a tempo there is no flow. Without flow, there is no positive experience.

And what about drum corps? Soldiers. Marching bands. Colour Guard. We tend to notice when one or more people in the formation are out of step when walking down the street en parade. Why is that?

Heartbeats. Metronomes. Time signatures. Tempo. Predictable. Repeatable. Patterns ... signs of health and success. Inversely? Pain. Toil. Unpredictability. Non-repeatability. Anti-patterns. Stress. Fatigue. Attrition. Failure.

Living organisms have patterns. Music and art have patterns. Measurement is a pattern. Math is composed of patterns; patterns are math. Organizations? Should have patterns, i.e. pulse.

Not the simple ones like "there should exist one manager for every N headcount" or "we must achieve 5% market growth each year for the next 5 years"; but more importantly ... things like:

  1. Tasks should be no larger than 16h each
  2. Stories should be no bigger than one sprint
  3. Sprints should be no longer than two weeks in length
  4. Working, tested, self-contained software delivered every sprint
  5. Software integration builds executed every hour
  6. Unit test suite called for execution hourly post-integration build
  7. Results posted publicly every hour

Sound like patterns? Repeatable? Predictable?

What about:

  1. Every two-week sprint begins with a 2-hour sprint planning meeting
  2. Every two-week sprint ends with a 2-hour demo review and lessons-learned meeting
  3. For every two-week sprint, we'll only pull from the backlog those things we can fully eat
  4. For every two-week sprint, the backlog will be prioritized prior to the planning meeting

Sounds like a collection of patterns.
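A cadence is only useful if it can be checked. Here is a minimal sketch of encoding the pattern as data a team could actually test against; the limits come from the lists above, but the structure and names are hypothetical:

```python
# Sketch: the cadence as checkable data rather than tribal knowledge.
# Keys and limits mirror the lists in this post; the dict itself is illustrative.
CADENCE = {
    "max_task_hours": 16,          # tasks no larger than 16h each
    "max_sprint_weeks": 2,         # sprints no longer than two weeks
    "build_interval_hours": 1,     # integration build every hour
    "planning_meeting_hours": 2,   # sprint opens with a 2-hour planning meeting
    "demo_review_hours": 2,        # sprint closes with a 2-hour demo review
}

def task_fits(task_hours):
    """A task over the cadence limit should be split, not scheduled as-is."""
    return task_hours <= CADENCE["max_task_hours"]

print(task_fits(12))   # True
print(task_fits(40))   # False - split it
```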

If someone tells you they have a healthy organization, but you can't seem to find a delivery pattern, build pattern, planning pattern, prioritization pattern, test pattern, estimation and/or sizing pattern, or any kind of pattern that is predictable and repeatable, they may not know what a healthy organization looks like. To judge the health of a company by revenue, market share, cash flow, debt ratios, or customer acquisition rates evaluates the wrong components. Enron proved to us that money is the wrong dashboard needle to be watching without additionally understanding the internal pulse of the company (or lack thereof).

Spend one day and sit down with customer service representatives, developers, testers, project managers, database and system administrators, etc. - all of them junior people in the company.

Ask them one question: "Name for me everything in this company that has a cadence."

You may learn about the cadences, or lack thereof, of the company under scrutiny far faster than you can count the time signature changes in the average Kansas and Yes albums of yore.

Reducing the Need for more Team Members through Retooling

How many times do we see staffing structures whereby there are dedicated Business Analysts in one group, Requirement resources in another, Trainers in another, Technical Writers in another, Manual Testers in another and even Project Manager groups assigned responsibility to manage them all on one or more projects?

How many times do we encounter situations whereby each group, regardless the situation, has more work than they are able to complete and subsequently ask for more time and/or staff?

First, I posit that these groups do not need more staff - they need to be leveraged in a different manner than currently popular and practiced.

Second, I posit that the problem does not truly lie with too much work, but rather with an organization's ability or willingness to manage itself. I'll discuss this one in another blog post.

I'd like to amplify a few common characteristics between all aforementioned roles before going on this journey.

All of these roles require:

  1. Good listening skills
  2. Good communication skills (e.g. writing and speaking)
  3. A functional understanding of the system, the user, and the user's perspective
  4. Customer service skills, whether directly or by proxy
  5. An ability to organize large volumes of data into deductively logical relationships
  6. An ability to distill what is important now versus what may be important later

My opinion is that we really do not need groups specifically dedicated to each of these individual functional areas. Rather, we need a small pool of people who are equipped _and willing_ to do what is necessary to evolve solutions without boundary. Let's explore the fundamental model composed of six activities that I posit can be performed by a single person rather than having six people perform one activity.

Capturing role based user stories, descriptors and acceptance criteria

Most of the people in the aforementioned roles have an ability to converse which often includes listening and talking. Water coolers, cab rides, waiting in line at the deli, no matter - in each situation most people naturally discuss whatever is important to them at the moment. Conversations occur regarding the needs of people. We naturally do this on a daily basis, case by case... "here are my needs..." or "here is what would make me smile".

It is my contention that any people ordinarily associated with a single, individual role such as tester, trainer, project manager, technical writer, and business/requirement analyst have an ability to:

  1. understand what a customer is asking or seeking
  2. document a short statement representing that which is sought
  3. describe a little about what it would look like if this sought experience were delivered
  4. articulate some short burst type statements, in the customer's own words, suggesting what would make this experience good versus stellar versus not so good

What we really need is some fruit juice, a veggie tray, a bit o' coffee, water, a marker board, some index cards, and an ability to listen and verify that what we heard is agreeable. Any one of the aforementioned roles could fulfill this effort. We simply do not benefit by a dedicated middle-man who 'gets' the material, then 'hands it over' to others. The getter can also be the do-er.

Writing modular, portable e-help structures based on stories

It stands to reason that if I heard the customer make a request, and I documented the request in a short statement, with a descriptor, with some sort of acceptance criteria purely based upon conversation (in other words, I experienced the conversation, not just heard it) ... then I should have a reasonable understanding of what the customer really wants the system to do. Having this understanding through experience, I should then be able to document some useful help content to aid the customer in getting the experience they seek. It is a natural evolution to move from conversation to story to some sort of useful documentation which helps the customer experience that which they first conversed about with me. Cyclical in nature.

What we need is someone capable of writing short useful content modules around the particular experiences the user is seeking. Hear it. Write it. See it. Validate it. It makes sense that the people who first experienced the conversation are most easily equipped to write about it in context of help documentation. Why have one person hear the conversation and document it ... then another read the documentation, interpret it, question it, and then write multiple versions of it to be sent back and forth until we "get it right"? Eliminate the churn.

I heard the story and therefore have the greatest knowledge and experience available to validate that we delivered the story. Furthermore, due to this experience, I am equipped to write just enough help material to actually aid rather than confuse.

As a sidenote, I mention modular, portable e-help structures built upon stories for two reasons:

  • if "a user can" do something in the system, what else is there to discuss in e-help documentation? and
  • technology changes just like customer needs... I merely suggest modularity and technology agnosticism such that one can mix, match, order and re-order, add and remove modules just as easily as user stories in general; and, we never have to care about technology evolution. Move the arrow to the target rather than trying to make the target stay close to your arrow.

Training customers to use the product based on stories

Who better to work directly with a customer than the person or people who originally began conversations and documented the experience into user stories, descriptors and acceptance criteria? Rather than a requirement engineer handing to a business analyst handing to ... eventually a trainer ... what if someone capable of training was the person who originally conversed with the customer and documented the user stories?

When I train a customer, what is it that I'm training? How I want the customer to use the system? Perhaps. How the customer can use the system according to what they originally requested? Often. What is the common thread? What the customer can do in the system.

Consider this as a closed loop process ... converse, document, deliver, train. Condensed since I posit document is actually part of deliver ... converse, deliver, train. Further? Converse and Deliver.

What we need is someone capable of learning what the customer seeks, what it figuratively looks like when experienced, how it is validated as done or not done, and how to help the customer experience - the experience. What we don't need is someone brought in at the end of a lifecycle experience expected to get a clue, establish a relationship, and "bring it home". Constant inclusion, constant evolution.

Evaluating new system setups quickly based upon stories and acceptance criteria

After setting up a new customer installation, exactly what is it that we check to verify the setup is correct and complete? Sure, we may check number of files moved, directories present and accounted for, that the application actually launches and is accessible from outside the firewall, integration points and closed-loop data exchanges ... but what is it that we really check? We check that the system is usable from a customer perspective. And how is it that we check that the system is usable, i.e. upon what premise? In fact, we often use the most popular form of quality control ever devised ... a checklist.

And what, pray tell, composes the checklist? Ah, but if not for the stories, whatever would we be doing in the system after we login? In fact, regardless what we label it ... we are verifying that "a user can..." do a list of things.

And who is qualified to do this work? It is not singularly the trainer just before training. It surely is not the lone responsibility of the forlorn black box tester. The reality of the situation is ... again ... the person or people eliciting the stories through conversation and experience from the beginning of the journey are naturally equipped to verify the system "can" do what the user requested. We aren't ahead of the game by having dedicated resources who install systems ... and there is no particular magic involved. User stories once again. The title of the person doing the work is irrelevant. We are discussing roles and activities versus the historic titles and departments.
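As a sketch of what a story-driven setup checklist could look like ... every name, story, and environment flag below is invented for illustration ... the verification is simply "run each 'a user can...' check against the new installation":

```python
# Sketch: deriving a post-install verification checklist directly from user
# stories. The stories, environment dict, and check functions are
# illustrative only - not from any real installation.

def user_can_log_in(env):
    # a real check would hit the login page of the fresh installation
    return env.get("login_page_up", False)

def user_can_upload_scan(env):
    # a real check would exercise the scan-upload endpoint
    return env.get("upload_endpoint_up", False)

# Each checklist item is simply an "A user can..." story paired with a check.
CHECKLIST = [
    ("A user can log in", user_can_log_in),
    ("A user can upload a facial scan", user_can_upload_scan),
]

def verify_setup(env):
    """Run every story-derived check against a freshly installed environment."""
    return {story: check(env) for story, check in CHECKLIST}

results = verify_setup({"login_page_up": True, "upload_endpoint_up": True})
```

The checklist is the story list; nothing else needs inventing at install time.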

Sifting and prioritizing customer requests in context of the base and the future

Companies work diligently to move product to market. What should be in the product? Who will use the product? What is the edge? Questions every company allegedly asks and answers in order to create right solutions for customers and make money along the journey.

Fast forwarding, what does one measure the product future against? For companies that have a baseline set of requirements for their system ... the measure of "do we change or not" is often taken against the base system definition. "Should we enhance what we have, or should we break off and create a brand new branch?"

For those companies managing legacy products, there may not exist a set of requirements, just groups of people with pools and years of experience and knowledge. The challenge in this situation is that what is important becomes relative to the loudest, most persuasive, most tenured person at the table. It is a workable approach to filtering shoulds from should-nots. It is less fact-based than one may desire.

When we have a core set of user stories, regardless the age of the product, company and customer base, we have a measuring stick. "A user can...." do this and that. If we add these other three requests, it will change the fundamental principles of our application. If we do not add these three requests, we may not gain as much market share, but we will be solid experts in those things we do.

Have experienced experts. Have a measuring stick. Use user stories and have everyone keep their fingers on the pulse of these stories and their associated evolution. The user stories make money.

Knowing when you're done

This is a very simple conversation and very difficult action for many people to act upon ... knowing when to stop developing a solution.

Fundamentally, as suggested in Kent Beck's Test-Driven Development: By Example, first write the test and then code until the test passes. Then you are done coding. Novel, and yet still not practiced as often as necessary. It mitigates over-engineering a solution, helps manage money spent, and reduces complexity, thereby reducing defect insertion velocity and density.
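Beck's test-first loop can be shown in miniature. The `add` function below is a toy stand-in: its test was written first, and the function was coded only until that test passed ... then coding stopped.

```python
import unittest

# Toy illustration of test-first development. The test below existed before
# the function did; `add` was then written only until the test went green.

def add(a, b):
    return a + b  # the simplest code that makes the test pass - then stop

class AddTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# Run the suite; once it is green, the coding is done.
suite = unittest.TestLoader().loadTestsFromTestCase(AddTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A green bar, not an empty task list, is the stopping condition.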

What if we were to practice this with solution delivery in general? What if we stopped delivering after all user stories could be executed as illustrated and associated acceptance criteria all pass? Who inherently has this knowledge? In many cases ... it is the requirement/business analyst, the tester, the trainer, sometimes the technical writer and could be the project manager. The definition of done is _not_ when we run out of tasks in our MS Project schedule. The definition of done is when the user stories are present and work.

We need people who know what these stories are, whether the software does or does not do them properly, and a relationship with a customer to evolve them.

We simply need a person who is willing to mold an application into user stories. It is very reasonable for people to have specialties and special interests. It is not reasonable to put all people with like specialties into special groups apart from one another.

Expect the manual tester, project manager, requirement analyst, business analyst, trainer and technical documentation writer to practice the same fundamental behaviors:

  1. understand what a customer is asking or seeking;
  2. document a short statement which represents that which is sought;
  3. describe a little about what it would look like if this sought experience were delivered; and
  4. articulate some short burst type statements, in the customer's own words, suggesting what would make this experience good versus stellar versus not so good.
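Those four behaviors fit naturally into one lightweight record per story. A minimal sketch in Python ... the `UserStory` shape and the story content are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical record capturing the four behaviors above for a single story.
@dataclass
class UserStory:
    statement: str    # 1-2. short "A user can..." statement of what is sought
    descriptor: str   # 3. what it would look like if the experience were delivered
    acceptance_criteria: list = field(default_factory=list)  # 4. customer's own words

story = UserStory(
    statement="A user can walk through the checkpoint without stopping",
    descriptor="The camera captures and matches a face at normal walking pace.",
    acceptance_criteria=[
        "A match or no-match decision appears before I reach the gate",
        "I never have to stop or look at the camera",
    ],
)
```

Anyone on the team, regardless of title, can read, write, and verify a record this small.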

Then evolve the customer and the stories with the software application, hand in hand, in two week sprints - together. Eliminating phases eliminates the need for "specialists".

Do you really need more people? Or do you need the people you have to become more?

Symphony and Interplay

I recently watched an orchestra at the Lincoln Center in New York interpret a Mozart symphony.

Wolfgang Amadeus Mozart, born in Salzburg, Austria in 1756, created masterful works categorized variously from sonata to ballet, symphony to wind ensemble, chamber music to keyboard, and more, though not fairly reviewed in but a blog as this. Understated, his works are considered masterpieces even today. Complex. Challenging. Multi-spectral. And a wee bit more intellectual than some of my favorite Van Halen with David Lee Roth or Sammy Hagar.

Symphony is of Greek derivation built with two different words: together and sound or sounding. Interesting isn't it? Together, sound.

I watched this performance composed of reading music, hearing sound, seeing the pianist and conductor, reacting and evolving with a predictable cadence, together. Sounding. Together sounding.

On this particular eve, there was a guest pianist placed close to the audience, the conductor immediately behind, and the orchestra itself behind the maestro. Of interest to me was the fact that the guest pianist led the maestro, while the maestro led the orchestra. An interplay. Live. Real-time. Beautiful. I was the customer of this experience.

I watched intently as the maestro continued to listen to and watch the pianist while he led the orchestra. Whether the pianist paused, evolved a crescendo or decrescendo, or alternated themes or moods, the maestro interpreted the output and led the orchestra in such a way as to complement and supplement the pianist accordingly. Interesting enough.

Then I watched individual musicians reading their music, interpreting the music in front of them in context of the other written parts 1st, 2nd, 3rd and perhaps 4th within their instrument type, playing as but one individual and yet all the while balancing each contribution in context of what they see, hear and feel from the pianist, the maestro, the instrument group, as a whole.

And again, circular, I watched as the pianist interpreted and performed music from his memory outwardly manifested by his physical choices on the keyboard contextual to his environment. The pianist heard the orchestral interpretation of his own interpretation and gave an infrequent glint of attention to the maestro -- silently speaking with each other through the experience. Mozart vicariously on stage for me.

Beautiful music. I enjoyed my time. I became part of the experience. Remember, I was the customer. I was not stressed or anxious. I worried for nothing but perhaps I might miss a single note somewhere that contributed to a good experience.

An example of a team. An example of a goal. An example of keeping the customer in mind.

The customer left happy, content, yet wanting for more.

Simplifying Test Construction

Historically, software delivery teams, projects and companies have subscribed to separating out different types of testing based upon the activity or role label versus the contextual behavior within the evolution of software. For example, unit testing must surely belong to a developer while regression testing must of course belong to a non-technical black box tester. In some environments, performance testing is performed by technical or semi-technical non-developer testers using COTS solutions; while in other environments the test suite is built from the command line up by compiler-level developers in clean room software engineering environments. Making a context-driven decision about who should be doing what forms of testing and in what manner is not inappropriate. Separating out forms of testing to different people or phases is unnecessarily complex, accidentally redundant, and in some cases just plain inefficient.

  • What if we began our software evolution by articulating all requirements of the system in simple statements like - "A user can..."?
  • What if we created "A user can..." statements for each role in the system?
  • What if, for each and every "user can" statement, we had an associated description just below that briefly told us a little about what this thing is that a user can do?
  • What if, for each and every "user can" statement, we had a short list of testable statements we called acceptance criteria that helped us understand how a customer would measure "done-ness" of the software?

Sound interesting? Read Mike Cohn's User Stories Applied: For Agile Software Development to get a clear understanding of how to leverage user stories, why, and what a successful project looks like under this approach.

  • What if each user story represented one batch of automated test scripts?
  • What if each batch of automated test scripts included at least two individual automated tests PER single acceptance criteria statement associated with a user story? For example, if a user story had two acceptance criteria, there would be one positive and one negative test per criterion, thereby suggesting at least four test scenarios per user story. Add path-based variation and boundary value scenarios and the set of scripts explodes.
  • What if each script written were its own encapsulated test such that it could be written once and called with any frequency, in any order?

Take test-driven development concepts, combine them with Cohn's testable acceptance criteria statements, throw in some popular open source tools like FitNesse or xUnit, and we have an evolutionary landscape changing the role of testing and the roles of developers and testers alike. In my own experience, this is not the work of a career tester, but rather of a small collection of developers who genuinely care for the evolution of customer-facing software in a matter of days/weeks.
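As a sketch of what "two tests per acceptance criterion" looks like in xUnit style, here is a hypothetical story, "A user can log in with valid credentials", with two criteria and four tests written with Python's unittest. The `login` function and its credential table stand in for the real system:

```python
import unittest

# Hypothetical system under test behind the story
# "A user can log in with valid credentials".
VALID = {"alice": "s3cret"}

def login(user, password):
    return VALID.get(user) == password

# One positive and one negative test per acceptance criterion:
# a story with two criteria yields at least four tests.
class LoginStoryTests(unittest.TestCase):
    # Criterion 1: valid credentials are accepted
    def test_valid_credentials_accepted(self):
        self.assertTrue(login("alice", "s3cret"))
    def test_wrong_password_rejected(self):
        self.assertFalse(login("alice", "wrong"))
    # Criterion 2: unknown users are rejected
    def test_unknown_user_rejected(self):
        self.assertFalse(login("mallory", "s3cret"))
    def test_empty_user_rejected(self):
        self.assertFalse(login("", ""))

suite = unittest.TestLoader().loadTestsFromTestCase(LoginStoryTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each test is encapsulated, the batch can be called once, nightly, or on every build, in any order.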

  • What if the build routine polled for new code every 15 minutes to see if it was time to build a new package?
  • What if every time it polled, it found new code, built the package, reported the results for public consumption?
  • What if every time the build completed and was considered "good", it called the test harness, which in turn pulled all live tests associated with code-level unit tests as well as functional-level user stories, i.e. acceptance criteria?
  • What if every time said harness was called and the associated scripts executed, all results were posted yet again for public consumption?
  • What if the definition of a good build was not only that it built ... but that all tests called during the build routine executed and in particular - passed?
  • What if the definition of "good" was a clean build with no failed tests?
  • What if this routine ran every 15 minutes 7x24x365 or at the least daily?

Consider looking through Martin Fowler's material on continuous integration to understand the value of continuous feedback loops in the build and test cycle. There is no longer a need to do all of the development and then hand it over to the test group. Make the test efforts part of the development and build routine. If you want to explore the mother of all software configuration management minds, look to Brad Appleton.
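The polling loop in the bullets above can be sketched in a few lines. Everything here is stubbed: `poll_for_new_code`, `build`, and `run_all_tests` are hypothetical hooks into a real build system, and the 15-minute interval is the default.

```python
import time

# Sketch of the poll-build-test loop. A build is "good" only if it builds
# AND every test called during the routine passes.

def continuous_integration(poll_for_new_code, build, run_all_tests,
                           interval=900, max_cycles=None):
    """Poll for code, build, run all tests, publish the verdict; repeat."""
    cycle, log = 0, []
    while max_cycles is None or cycle < max_cycles:
        if poll_for_new_code():
            built = build()
            good = built and run_all_tests()
            log.append("good" if good else "broken")  # public consumption
        cycle += 1
        if max_cycles is None:
            time.sleep(interval)  # every 15 minutes, 7x24x365
    return log

# Stubbed run: new code each cycle, build succeeds, tests pass twice then fail.
outcomes = iter([True, True, False])
log = continuous_integration(lambda: True, lambda: True,
                             lambda: next(outcomes), max_cycles=3)
```

The loop itself is trivial; the discipline is letting its verdict, not a schedule, define "done for today".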

  • What if the same set of user stories and associated acceptance criteria which were used to create a functional test suite were also re-usable as the baseline performance test suite?
  • This would seem to suggest the scripts must then be written with a non-proprietary, technology-agnostic open-source tool. Remember, write it once, use it N times.
  • What if the transaction mixes were composed of various user story and acceptance criteria combinations?
  • What if the user mixes were composed of the various actors identified during user story construction?
  • What if the same set of user stories and acceptance criteria were to be considered the fundamental competitive function points of the system such that marketing materials and sales programs could be built upon this knowledge?
  • What if the same set of user stories and acceptance criteria were to be looked upon as the base table of contents for a modular XML based e-help solution? In other words, the user stories themselves point to the system decomposition ... which then elucidates the table of contents for the e-help and training approach.
  • What if this same set of user stories were considered to be the base set of functionality upon which the product roadmap were assessed and determined for years to come ... to veer from the base or not to veer from the base? In fact, the user story collection defines the base from which the system evolves.
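Reusing the functional story scripts as a performance transaction mix might look like the following sketch; the scripts, weights, and actor mix are invented for illustration.

```python
import random

# Hypothetical story scripts reused verbatim from the functional suite.
def search_passenger():
    pass  # stand-in for the real "a user can search..." script

def enroll_face():
    pass  # stand-in for the real "a user can enroll..." script

# Transaction mix: the same user stories, weighted per the expected actor mix.
MIX = [(search_passenger, 0.8), (enroll_face, 0.2)]

def run_performance_pass(iterations, seed=42):
    """Replay the functional scripts as a weighted transaction mix."""
    rng = random.Random(seed)
    scripts, weights = zip(*MIX)
    counts = {s.__name__: 0 for s in scripts}
    for _ in range(iterations):
        script = rng.choices(scripts, weights=weights)[0]
        script()  # in a real run, timing would be recorded here
        counts[script.__name__] += 1
    return counts

counts = run_performance_pass(1000)
```

Write it once, use it N times: the functional suite, the performance baseline, and the marketing function-point list all trace back to the same stories.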

What if a developer knew how to do most or all of this? What if ... we needed a handful of people who could evolve the software purely based upon a user story, a descriptor, some acceptance tests, a continuous integration build routine which called the automated suite ... and the ice cream? What if we delivered every two weeks?

What if there were no testers, business analysts or requirement engineers? And the requirements were the tests and the tests preceded the code and the code stopped growing when the test passed?

What if a developer was someone who evolved a user story until a customer smiled? Where are the limits?

Compressing Software Lifecycles by Eliminating Lines of War

As we've experienced for years, software development lifecycles get chunked up into phases, particular groups of people get associated to each phase, entry and exit criteria end up associated to each hand-off in between ... and we now have a framework we've created in the name of quality. Ironically, the framework (ordinarily accompanied by required templates and tools) really only protects us from each other and ourselves, but doesn't evidently protect the customer or even serve the customer.

After all, what did the customer actually ask for but a deliverable, on a timeframe, for some specified amount of money?

What I see time and time again is "delivery teams" or "projects" composed of departments for the purposes of career paths and salary trees, human resource considerations, and division of labor. Sometimes said resources are "matrixed" out to cross-functional teams, but there is a conflict for the resource in question as to "who is truly my boss?" I'm sure there are more reasons for this out there, and these are deductively logical structures based upon the need to organize people within a company. Unfortunately, some people tend to confuse the resource management structure within a company with the team structure actually needed to deliver something to a customer that they wanted, when they wanted it, at some specified or important cost. They are not the same. Due to this confusion, the lines between resource management groups often become lines of war. When hand-offs are expected to happen between departments, expect turmoil. We're a team until some department does not deliver, and then "the so and so group did not deliver to us and we will now be late."

Here are three things I've personally done to help eliminate silos within groups responsible for delivering:

One: Eliminate the black box testing phase

I support Agile frameworks which I'll explore in another post. Summarily, rather than having people do whatever work it is that they do .. and then eventually hand off to a seemingly ostracized test group "over there", eliminate the group.


  • Take the test group that sits in another part of the building, split them up, and sit them right in the middle of the development group according to pairs or iteration teams
  • Make the ordinarily black box or functional/regression tester become part of the creation process rather than validating that a creation process occurred
  • Expect the tester to write automated tests based upon prioritized functional threads that can attach to the continuous integration build routine called on a nightly basis

In other words, figure out how to test during the development process rather than afterwards. As new builds come off the line each night (or hourly or whatever it is you do), have the testers reset their environments and go at it again. Clearly many traditional test groups are not comfortable moving at this pace as they prefer dedicated time on one load before moving to the next. Let the cadence be set by continuous integration builds and reset all other behaviors to synchronize with it.

Two: Eliminate the business analysis/requirements phase

Nothing new here for some. Consider that many times business analysts or requirement individuals are "sent in" to "go get" the requirements. This is often considered the precursor activity to anything else downstream.

Some of the multiple challenges with this logic include:

  • The requirement/business analyst must interpret the need, document the need, and translate the need to people who actually do the code construction (i.e. developers) and testing somewhere else in the journey
  • Most other team members wait until someone tells them the requirements journey is "over" and they may now proceed
  • Requirements are often documented in a way that makes sense only to the person eliciting and documenting the requirement

Get rid of this experience altogether and instead consider the following four changes:

  • Have a developer with social skills interface with the customer and elicit the requirements in the form of user stories, descriptors and acceptance criteria
  • Include a technical tester to help document the conversation, summarize the results, and ensure that the descriptors make sense and the acceptance criteria are in fact automatable and testable
  • If possible, consider including your trainer/documentation/e-help person in this environment as an observer so that they begin to understand how the system will be organized, used, what is most important versus less important, testability, etc.
  • Deliver working tested software representative of N user stories every two weeks

Only include people who will actually do work. Eliminate translators in the middle. This action reduces cycle time, defect insertion, and the number of people necessary to understand and deliver the system. If a team is expected to deliver working tested software in two week chunks, it will influence how much time is spent creating documentation versus actual software.

Three: Eliminate titles

Titles often box people into specific behaviors. "I am a requirements or business analyst". "I am a test analyst". "I am...".

Here's what the customer cares about ... working, useful, value-add software versus my title. Unfortunately, in many company systems, titles equate to salaries and rank. A miserable side effect is "that's not my job" -isms stimulated by the territorial boundaries.

For the purposes of human resources and getting paid, titles must allegedly exist - though I continue to explore ways to make job titles less meaningful. For the purposes of creating a true team that trusts each other and is willing to do whatever necessary, regardless the role or task, to deliver working software every two weeks ... eliminate titles. Rather, have roles.

A team composed of a developer role, sys admin role, test role, and dba role are equally responsible to deliver N working user stories at the end of a two week sprint.

Along the way, the encouraged behavior is that:

  • anyone can code if and when necessary
  • anyone can test if and when necessary
  • anyone can dba and sys admin when necessary

The only limit on such a team is attitude and aptitude. Yes, there are declared roles whereby one person may be expected to fulfill one or two roles; however, if time permits or need merits, the team helps the team help the customer. If we experience one person watching while someone else is struggling, we have a broken team. N people on a team. X roles required to deliver Y user stories in two weeks.

  • Title-less teams require people to evaluate their own skills and upgrade or evolve them, or move on
  • Title-less individuals require teams to evaluate what they are capable of delivering in two weeks by considering everyone's ability, not just their own
  • Title-less teams force people to focus on work not done versus who's not doing work

There are many other changes available to stimulate a shift in how software is delivered in our organizations. The aforementioned three elements are a starting point:

  • Eliminate the black box testing phase
  • Eliminate the business analysis/requirements phase
  • Eliminate titles