As I discussed in another post, Expecting Abnormal Human Behavior, we practice different human behavior in our everyday private lives than we expect of ourselves and others at the workplace. At home, we plan the high points and roll with whatever exceptions the rest of the day throws at us. At work, we attempt to plan everything short of human physiological needs. I think part of our difficulty delivering software day in and day out is not that we are miserable at doing good, solid work, but that we fight natural human behavior. Somehow we've fallen prey to thinking we can control natural human tendencies for the eight to ten hours we spend in our workspace each day by using a framework that makes little or no sense in the context of the team behaviors we expect and need. On the one hand we have a team of people with natural behavioral tendencies; on the other, a framework that disregards those tendencies and imposes rigor where it makes sense only to those selling the real estate.
Project Management practices are an abstraction layer over reality. Based upon today's popular and accepted practices, they will always be N steps behind, discussing the wrong problems.
I'll cover more regarding positively leveraging human behavioral tendencies in a later post. Let's look at some of the common project management practices we experience in the software space today.
Common Practice #1 - Identify All Steps on Day 1
Project Management technique expects a team to identify all possible components and tasks required to deliver a particular project up front, regardless of the length of the project, whether measured in weeks, months or years. Ideally and logically, of course, the software project has a defined goal or definition of "done" telling us where we are heading. However, in software, where customers don't know what they want until they see what they don't want, where the market and competitors shift daily, and where requirements shift in priority and relevance as more knowledge is gained, what is important is a changing tide, not a fixed one. Writing a project schedule up front, then re-baselining it and managing change with scope requests, spends money discussing work, not doing actual work. Of course, it depends on where we want to spend our money.
We simply don't know what we don't know until we have more data. Positing the steps it will take to integrate a software system with a particular database is far different from suggesting the system will be integrated with a yet-to-be-determined third-party content provider. One data set is relatively fixed; the other is purely a placeholder for discovery. We may have an idea how big or complex the work may be based upon past collective experience, but we simply do not know what steps it will take until we lift the hood. Asking a team to identify all steps necessary to do the unknown is an exercise in futility. Doing the work is far more valuable than discussing what the work might entail. "Had I only known building this [thing] would take so long and be so painful, I never would have done it." Unless we've done the work before, we can only postulate what it might be like; we do not know it for a fact. Ironically, project schedules are often created end to end as if they contain nothing but fact. It simply is not so.
Common Practice #2 - Identify Estimates on Day 1
I think some of the best current published thinking on "how long" something should take is contained in Mike Cohn's book Agile Estimating and Planning. Aside from the pertinent details one should glean from the book _after_ purchasing it, we note a conversation on sizing.
In many, many cases the project management expectation is that a team should not only identify all steps to get from "here to there" on Day 1, but that said steps be accompanied by estimated hours to complete the work. Now, if we have about two weeks of work in front of us, we can reasonably assert the steps between here and there and how much time we _might_ spend getting the work to some state of done-ness. However, if the list of tasks in front of us spans months or years, the farther we get from "today", the more abstract our reasoning must become in order to assert time. When we estimate time for the next two weeks, we can put each element in the context of its surrounding elements for some sort of relative thought. For a particular task months into the future, we have no true understanding of everything around it, so we are forced to evaluate each task in and of itself, perhaps in the context of a sub-group of surrounding tasks, which leads us to Mike Cohn's conversation on relative sizing.
We may not always know how long something will take, but we are usually pretty good at estimating how big something is in relation to something else. "We've never done this particular thing before; but when we did 'X' and 'Y' for Client ABC some time back, it was bigger than we thought, and far more painful. This thing looks very similar." The challenge lies in the fact that Project Managers and traditional project scheduling tools request sizing in days and hours in order to build a project plan with an end date. This stimulates people to estimate how long something might take, then add extra time simply to offset the risk of being wrong. There are exercises in Ken Schwaber's Control Chaos ScrumMaster classwork, as well as those offered by Mike Cohn of Mountain Goat Software, that reinforce the following idea: the first estimate is probably wrong because we simply don't know how long something might take; this is then compounded by estimating "what if" scenarios, adding them to the original number and calling the result contingency. Is it any wonder project schedules are off-budget, let alone off-time? The only way current practice manages this problem is by cutting scope. We set ourselves up for the problem and then solve it incorrectly at the expense of the customer, who does not get what was requested.
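To make the compounding concrete, here is a minimal sketch with purely illustrative numbers (the base estimate and pad percentages are invented, not from any real project) of how stacking "what if" padding on an already uncertain guess inflates a schedule:

```python
# Illustrative only: how contingency padding compounds an uncertain estimate.
def padded_estimate(base_hours, pads):
    """Apply each 'what if' pad (a fraction) on top of the running total."""
    total = base_hours
    for pad in pads:
        total *= (1 + pad)   # each pad compounds every pad before it
    return total

base = 40                        # the first guess -- already a guess
pads = [0.25, 0.20, 0.15]        # task risk, integration risk, "management reserve"
print(padded_estimate(base, pads))   # 40 * 1.25 * 1.20 * 1.15 = 69.0 hours
```

A 40-hour guess becomes a 69-hour "commitment", a 72% markup on a number nobody knew in the first place, and every line of the schedule gets the same treatment.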
Common Practice #3 - Allocate resource availability by %
One project model expects that all resources are fully dedicated to said project for the life of the journey. Another expects a core of people fully dedicated, plus some referential people partially allocated across time based upon availability or cost constraints. Both models as ordinarily practiced today assume a project is defined by the beginning and end dates automagically calculated in project scheduling tools. So it stands to reason that for any project longer than two weeks or so, resources simply are or are not available. The same resources are therefore allocated into the project schedule at arbitrary, semi-arbitrary or mathematically derived hours per day or week. What's missing? Once it is in the project schedule, if the tasks suggest the work is done, but the work is in fact not done ... do we go get more % of said resource? Doesn't seem too complicated, perhaps, unless a centralized DBA or Sys Admin team is allocated by % to multiple projects simultaneously.
Ah, resource contention. The very practice of asking for a single resource to be allocated to a multi-month project that is one long delivery bubble, or of allocating resources particular percentages across time, perpetuates the problem and solves the wrong one. The issue is _not_ that "Joe" is not available more than five hours a week; the issue is that roles are assigned to people's names. If DBA=Joe and there are five projects, Joe is the bottleneck, gets put on the risk and/or issue list, and is constantly caught in the middle. Wrong problem. If, however, the roles required for said delivery bubble are DBA, SysAdmin, Developer and so on, we begin discussing the work to be done rather than "who".
It is not the Project Manager's problem to determine "who"; that belongs to the team doing the work. When we spend money negotiating percentages of people, we aren't delivering. For those teams suggesting there are simply not enough resources to get the work done based upon skillsets: cross-train existing people to perform multiple roles and deliver in smaller bites, similar to the way most people eat when dining, one bite at a time.
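The "roles, not names" point can be sketched in a few lines. This is a toy model, not a tool recommendation, and the people and project names are hypothetical:

```python
# Hypothetical sketch: name-based plans create a bottleneck; role-based plans don't.
projects = ["A", "B", "C", "D", "E"]

# Name-based plan: every project schedule hard-codes the same person.
name_based = {p: {"dba_task": "Joe"} for p in projects}
joe_load = sum(1 for plan in name_based.values() if plan["dba_task"] == "Joe")
print(joe_load)  # 5 -- Joe appears on every project's risk list

# Role-based plan: the schedule names the role, not the person.
role_based = {p: {"dba_task": "DBA"} for p in projects}
dba_pool = {"Joe", "Ana", "Raj"}  # cross-trained people (hypothetical names)
# At execution time the team maps role -> available person; the plan never has to.
```

The schedule stops being a ledger of individual availability and becomes a statement of the work, which is the conversation we actually want.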
Common Practice #4 - Building a Gantt chart
Gantt charts are wrong the minute they are conceived. Why? Well, if we spend enough time looking at a twenty-page Gantt printout after we've taped it to the wall, we oddly observe sequential, waterfall-like thought. The only time the chart actually reports anything useful is never. The problem is that the chart suggests "A" occurs before "B", then "C" and so on. It also assigns time to every progress bar in the picture, which leads one to posit that if the slope is heading down, and page 20 represents the end, then when we reach the lower right-hand corner of the picture we are done.
Actually, it has nothing to do with being done; it merely reports pictorially the work-breakdown structure we constructed in the project schedule. It reports what is done in relation to what was planned to be done based upon tasks in the schedule, but tells us nothing about what is truly left or what it truly took to get here. If we look at the chart, it really is something cool, and it evidences the complexity in doing any sort of project. Much thinking is necessary, not only on Day 1, but all along the journey. Aside from the wow factor, though, it teaches people the wrong message (that projects are sequential and downward-sloped), does not show the complex iterative interactivity within teams necessary to actually get something done, always talks about the project in arrears, and doesn't evidence the priority or complexity of one task over another. Once someone creates a Gantt chart to illustrate the plan and associated progress, we become constrained to the picture, not the reality.
What are the problems with these practices?
Software project management practices ordinarily expect team members, customers and companies to predict the future. When change occurs, the project schedule is re-baselined and a scope change request is drawn up to gain approval for the change. At the end of the project, any deviation from the original baseline is considered scope creep, and the success of the project is called into question based upon deviation from the time and cost baseline. The pressure to predict the future well in advance is perpetuated by project management practices that do not solve problems, but merely talk about them. Not wanting to be reported as a deviation, teams spend more time trying to predict the future with more accuracy instead of focusing on better delivery practices. The fact is, we are not capable of predicting the future, and while this expectation exists, teams are influenced to solve the wrong problems.
So does this suggest Project Managers themselves are not useful? Absolutely not. In fact, Project Managers, just like anyone else employed under contract for pay on a delivery team, are working diligently to meet the expectations placed upon them. The communicated expectations are ordinarily something like, "You are the Project Manager. Make sure that project gets delivered on time no matter what!" or "You have $10 US and two days, get it done and don't take no for an answer." Usually something hubristic, fatalistic, and projecting the assumption that it can be done as charged. The conversation is not about the Project Manager. The conversation is about the role of Project Management. What should it really be?
- Project Management practices measure "done" based upon project schedule line items versus the stateful evolution of software evidenced on a regular basis through sprints and demos. In fact, the project schedule can report "done" while the software is but one-third evolved, with weeks or months remaining.
- Some project managers actually have no idea what it takes at the nuts-and-bolts level to deliver software and so are unable to put change in the context of reality. "If it is a deviation from baseline, it is a deviation."
- Status report-type questions are often built upon the project schedule as the single point of reference for all activity and thought. As expected, teams answer the questions asked, even when the questions are wrong and non-meaningful, because the measure is assumed to be the project management report, not the work.
- Risk is managed through "margin of error" or "contingency" calculations instead of through delivery methods such as sprints, iterations and bursts.
These are only some of the challenges. What are immediately accessible alternatives to straight-up Project Management practices? Agile Project Management with Scrum by Ken Schwaber, and its companion, Agile Software Development with Scrum by Ken Schwaber and Mike Beedle. The focus shifts from a Project Manager in charge of all things software to equipping teams to identify, solve and deliver by using different team constructs, different delivery methods, and different methods of estimation. It makes sense that the team doing the work be the team discussing the work, rather than a proxy.
"Project Management" is not the solution to delivering quality software, building better teams, managing risk and complexity, providing value to customers and clients, or even hitting budget and time constraints. Building self-contained, self-managing teams is the solution to better software. We must change how we evolve software and it must start by changing the idea and use of project management.
Expect teams to identify and solve problems rather than expecting someone not actually involved in the work to insert themselves, add value, and provide meaningful reports on work someone else is performing. Project Managers are unnecessarily put in the position of predicting, reporting on, and holding accountable the people doing the work. Why?