Continued from Part II:
On August 16, 2000, a new 'kick-off meeting' was held to mark the transition from 'design' to 'coding and delivery'. The meeting was scheduled for 60 minutes and run as an informal teaming event, with executive leadership in the room to show support for the coming project work. The Project Manager facilitated, allowing questions in popcorn fashion throughout the proceedings. There was, most certainly, an agenda.
Stream 1 - Prototyping - This team declared the need for an additional thirty calendar days of prototyping work to prove out remaining components and integration ideas, as well as the performance characteristics the end-state system would need to meet. Everyone in leadership (and of course the prototype team) agreed that this was the best plan, and it was scheduled accordingly. At this point, the prototype/architecture team consisted of four off-site consulting members and two on-site full-time members, with the lead architect being an off-site consultant.
Someone on the development team raised a water-cooler concern: the prototyping effort, running concurrently with but independently of development, could force development rework mid-stream if fundamental architectural positions changed with additional research. The developer's request was for the prototype and development teams to be further integrated, and possibly put on the same release/delivery schedules. The Project Manager quickly noted that the lead architect had already agreed to keep everyone informed through a weekly status report, to be sent out initially on Fridays. At some point in the near future, there would likely be an architectural review meeting, on an as-yet-undefined schedule, where prototype efforts and results would be discussed so that development could take the information and implement accordingly.
Expected output: Revised documentation including a list of decisions.
Expected audience: Developers.
Stream 2 - Development - Five development teams of twelve people each are scheduled to 'start' development effective the end of the kick-off meeting. Their responsibility is specifically to code and deliver the component architecture elements as defined in the specification documents, minus screens, since HF UI is delivering that work. Builds occur every Tuesday and Thursday, and developers are told not to check in their code until they are sure it is done and clean. Unit testing is 'required', a coding standards document is distributed via email, and a new lint utility for code screening is made available to everyone on a central server. Three of the development teams are off-site and composed wholly of consultants, while the remaining two on-site teams are primarily full-time employees partnered with some on-site consulting staff. The three off-site teams agree to build on Linux using open-source solutions in order to keep the system technology agnostic. The on-site teams are required by Corporate Security to use Microsoft solutions and cannot modify their PC configurations without prior permission. There are five team leads.
Developers are expected to turn in daily email reports describing which component they are working on and any risks or issues they see. The team leads are expected to turn in weekly reports to the lead architect of the prototype team summarizing which components are being worked on and any problems or concerns observed during this work. Very specifically, the lead architect expects to be notified if a selected component is not working as prototyped, or if developers have difficulty with integrations to 3rd-party vendors. The Project Manager requires that everyone enter their time daily into the corporate time-tracking system, and then log actual hours against tasks in the project schedule kept on the shared server. 'Code complete' is defined as the moment a developer checks in code and moves to the next assignment.
Expected output: Completed code, unit tests, status reports, and time entry.
Expected audience: Testers, Architects and the Project Manager.
Stream 3 - HF UI - At this point, the HF UI team is essentially part of the development teams, though on a different goal path than the teams in which they reside. Though their requirements effort was distilled down to roughly fifty HF UI requirements, they anticipate actually building between 450 and 700 screens, derived from those requirements coupled with the component architecture documents they have read from the architecture/prototype group. They are unsure how many screens they will truly need, but expect the number to reveal itself as they begin development. Their status rolls into the larger development status, both daily per individual and weekly from the team leads to the architect at large.
Expected output: Completed screens according to standards.
Expected audience: Developers and Testers.
Stream 4 - Software Testing - Testing of the application is expected to be fully automated, based upon the delivered code, screens and specification documents. The architect chose the tool during the prototype phase, and the testers were trained on it one month earlier in preparation for the development phase. These automated tests are expected to run nightly (even though builds only occur twice a week and developers only check in code when they think it is ready). Many of the testers are non-technical, though the architect believes the capture/playback functionality is solid enough that they will not need to learn a scripting pseudo-language. Of the five testers in the group, one is semi-technical, currently experimenting with Expect, Tcl/Tk, sed and awk, and can hardly wait to get started. The Project Manager made it clear that everyone will use the new automation tool per the architect's direction, but that additional zeal is welcome as long as it contributes to end-state product quality. The requirements against which they will automate and test are the HF UI requirements, as well as what is defined in the integrations document in terms of I/O and entry/exit criteria, plus all other specification documents. Statuses are provided weekly to the architect and Project Manager, who then merge them with everyone else's weekly data points.
Expected output: Completed tests, tests results and defect counts.
Expected audience: Developers, HF UI, Architecture, and Management.
September 16 - The prototype work has simply taken longer than planned. Performance testing of stubbed-out components produced inconclusive results; alternate components, databases and database loads were substituted to no significant conclusion, and the proto-team simply believes they need more time beyond the 30 additional days they just burned. No one really argues with the request for more time because - what are the options? No one else knows what the senior architect envisions, and most people are of the opinion they can only rest when the senior architect rests. As a result, an unspecified additional timeframe is approved with 30-day checkpoints, in addition to the regular status reporting the architect provides to the Project Manager. The Project Manager defends the need for more time and simply leans on the architect to make sure he knows what he's doing, because their necks are metaphorically 'on the line'.
Otherwise, individual developer status reports flow into the team leads, who distill them and send weekly status reports to the Project Manager and architect as requested. The reports simply contain statements such as "working on module X", "fixed defects" and "optimized for higher responsiveness". No one reports issues or risks with the project or their work.
The test statuses all report evident progress writing automated test scripts against stubbed logic, pending delivery of 'more working code'. Corporate security policy prevents the test team from having a database with real customer data, so they fabricate data based upon the schema elements known to date. No one really expects the test group to be logging many defects at this point, so defect reports, though low in arrival rate, are considered premature. Testing keeps preparing to test.
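The test team's data-fabrication approach can be sketched roughly as follows. This is a minimal illustration only: the actual schema is never shown in this account, so the column names and value generators below are entirely hypothetical.

```python
import random
import string

# Hypothetical schema elements. The real customer schema is off-limits per
# corporate security policy, so the test team generates plausible values
# from the column definitions known to date.
SCHEMA = {
    "customer_id": lambda: random.randint(100000, 999999),
    "last_name":   lambda: random.choice(string.ascii_uppercase)
                           + "".join(random.choices(string.ascii_lowercase, k=7)),
    "balance":     lambda: round(random.uniform(0.0, 50000.0), 2),
}

def fabricate_rows(n):
    """Generate n fake customer rows from the known schema elements."""
    return [{col: gen() for col, gen in SCHEMA.items()} for _ in range(n)]

rows = fabricate_rows(100)
```

Of course, fabricated data only exercises the shapes the testers can guess at; distributions, edge cases and referential quirks of real customer data stay invisible until much later.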
And the HF UI team? Remember, they are part of the development teams, but on their own development journey, creating screens independent of the infrastructure. At some point in the future, when middle- and lower-tier functionality becomes available, the architect expects to have everyone spend time hooking up screens. Until then, the HF UI team builds screens that are not hooked up to anything and reports percent done based on when it moves on to the next screen.
The Project Manager reports percent done in terms of screens, percent done in terms of automated tests written, positive development progress in terms of people handing in status reports (none reporting risks or issues), and blunt statements such as 'defects are being addressed'. The architecture group's status is simply communicated as 'more work is necessary to validate the right decisions' for a bunch of *-abilities like 'scalability and reliability', which everyone believes in, of course. Somehow, the milestones published in the project schedule are built upon when screens are built, tests are created, and architectural decisions/deliverables occur. And development? Because development is performing two builds per week and time is entered in the corporate time system, accompanied by status reports on the side, progress is measured by the perception of productivity.
Meanwhile, the accountant working on behalf of the Project Sponsor/Funder wondered out loud: "If we're spending $x per month and the Project Manager's report says everything is progressing appropriately, then we should theoretically finish this project on schedule, and our budget should be more than enough at this point." Invoices keep coming in to be paid - matching the number of hours logged in the time systems and the actuals in the project schedule.
A couple more interesting behaviors to note:
1. There exist milestones discussing when 'things' will be done, but so far no milestones discussing when the software will be useful
2. Many status reports and meetings occur communicating people are working hard with long hours, but so far the system being created for the end-user cannot yet be seen, touched or validated by the end-user
3. 'Code complete' implies 'done' to uninvolved listeners watching for project maturity. On this particular project, it goes unstated that integration, functional/system, regression and performance testing are yet to occur, let alone end-user/stakeholder validation of useful 'done-ness'
4. All deliverables created to date are for technical people
To date cost of acquisition: $3,771,200.00
($1,764,800.00 carried forward + ($95/hr * 2 mos * 66 people, the full staff less the PM))
To date time elapsed: 9 months
To date return on investment: 10 documents, some working prototype code, screens, automated tests, defect fixes, and completed tasks suggesting some of the product has been built.
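For the record, the acquisition figure above does reconcile arithmetically. Here is a quick sketch; the $95/hr rate, two-month span and 66-person headcount come from the write-up, while the 160 billable hours per month is an assumption needed to make the numbers line up.

```python
# Reproducing the to-date acquisition cost.
PRIOR_COST = 1_764_800.00   # cost carried in from Parts I-II
RATE = 95.00                # $/hr per billable person
HOURS_PER_MONTH = 160       # assumed billable hours/month (not stated in the text)
MONTHS = 2
HEADCOUNT = 66              # everyone billing hourly, the PM excluded

phase_cost = RATE * HOURS_PER_MONTH * MONTHS * HEADCOUNT
total = PRIOR_COST + phase_cost
print(f"${total:,.2f}")     # matches the $3,771,200.00 above
```

Note what this model measures: hours purchased, not value delivered - which is precisely the accountant's blind spot in the paragraph above.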
This project is already on a path to cost overrun, schedule overrun, and compromised quality; and it is all happening in front of everyone's eyes, one day and one decision at a time.