How many times do we see staffing structures whereby there are dedicated Business Analysts in one group, Requirement resources in another, Trainers in another, Technical Writers in another, Manual Testers in another and even Project Manager groups assigned responsibility to manage them all on one or more projects?
How many times do we encounter situations whereby each group, regardless of the situation, has more work than they are able to complete and subsequently asks for more time and/or staff?
First, I posit that these groups do not need more staff - they need to be leveraged in a different manner than is currently popular and practiced.
Second, I posit that the problem does not truly lie with too much work, but rather with an organization's ability or willingness to manage itself. I'll discuss this one in another blog.
I'd like to amplify a few characteristics shared by all the aforementioned roles before going on this journey.
All of these roles require:
- Good listening skills
- Good communication skills (e.g. writing and speaking)
- A functional understanding of the system, the user, and the user's perspective
- Customer service skills, whether directly or by proxy
- An ability to organize large volumes of data into deductively logical relationships
- An ability to distill what is important now versus what may be important later
My opinion is that we really do not need groups specifically dedicated to each of these individual functional areas. Rather, we need a small pool of people who are equipped _and willing_ to do what is necessary to evolve solutions without boundary. Let's explore the fundamental model composed of six activities that I posit can be performed by a single person rather than having six people perform one activity.
Most of the people in the aforementioned roles have an ability to converse which often includes listening and talking. Water coolers, cab rides, waiting in line at the deli, no matter - in each situation most people naturally discuss whatever is important to them at the moment. Conversations occur regarding the needs of people. We naturally do this on a daily basis, case by case... "here are my needs..." or "here is what would make me smile".
It is my contention that the people ordinarily associated with a single role such as tester, trainer, project manager, technical writer, and business/requirement analyst have an ability to:
- understand what a customer is asking or seeking
- document a short statement representing that which is sought
- describe a little about what it would look like if this sought experience were delivered
- articulate some short burst type statements, in the customer's own words, suggesting what would make this experience good versus stellar versus not so good
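The four outputs above can be captured in a very small structure. Here is a minimal sketch in Python; the class and field names are my own invention, not any particular tool's format:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """One conversation, captured: a statement, a descriptor, and acceptance notes."""
    statement: str                 # "A user can ..." in the customer's own words
    descriptor: str = ""           # what it would look like if delivered
    acceptance: list = field(default_factory=list)  # good vs. stellar vs. not so good

# A hypothetical story straight from a conversation:
story = UserStory(
    statement="A user can export the monthly report as a PDF",
    descriptor="One click from the report screen; the file opens in any PDF reader",
    acceptance=[
        "good: the export finishes in under a minute",
        "stellar: the report is emailed to me automatically",
        "not so good: I have to ask IT to run it for me",
    ],
)
print(story.statement)
```

Anyone in any of the aforementioned roles could fill in these three fields during the conversation itself - no hand-off required.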
What we really need is some fruit juice, a veggie tray, a bit o' coffee, water, a marker board, some index cards, and an ability to listen and verify that what we heard is agreeable. Any one of the aforementioned roles could fulfill this effort. We simply do not benefit from a dedicated middle-man who 'gets' the material, then 'hands it over' to others. The getter can also be the do-er.
It stands to reason that if I heard the customer make a request, and I documented the request in a short statement, with a descriptor, with some sort of acceptance criteria purely based upon conversation (in other words, I experienced the conversation, not just heard it) ... then I should have a reasonable understanding of what the customer really wants the system to do. Having this understanding through experience, I should then be able to document some useful help content to aid the customer in getting the experience they seek. It is a natural evolution to move from conversation to story to some sort of helpful documentation which helps the customer experience that which they first conversed with me about. Cyclical in nature.
What we need is someone capable of writing short, useful content modules around the particular experiences the user is seeking. Hear it. Write it. See it. Validate it. It makes sense that the people who first experienced the conversation are most easily equipped to write about it in the context of help documentation. Why have one person hear the conversation and document it ... then another read the documentation, interpret it, question it, and then write multiple versions of it to be sent back and forth until we "get it right"? Eliminate the churn.
I heard the story and therefore have the greatest knowledge and experience available to validate that we delivered the story. Furthermore, due to this experience, I am equipped to write just enough help material to actually aid rather than confuse.
As a sidenote, I mention modular, portable e-help structures built upon stories for two reasons:
- if "a user can" do something in the system, what else is there to discuss in e-help documentation? and
- technology changes just like customer needs... I merely suggest modularity and technology agnosticism such that one can mix, match, order and re-order, add and remove modules just as easily as user stories in general; and, we never have to care about technology evolution. Move the arrow to the target rather than trying to make the target stay close to your arrow.
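That modularity can be illustrated with a tiny sketch: help content stored as plain modules keyed by story, so modules can be added, removed, and re-ordered as easily as the stories themselves, with no commitment to any one delivery technology. The module keys and text here are hypothetical:

```python
# Hypothetical e-help modules, each keyed by the user story it supports.
# Plain data, no markup: the rendering technology can change without touching content.
help_modules = {
    "export-report": "To export the monthly report, click Export on the report screen.",
    "invite-user":   "To invite a colleague, open Settings and choose Invite.",
}

def render_help(order):
    """Render the selected modules, in any order, as technology-agnostic text."""
    return "\n\n".join(help_modules[key] for key in order if key in help_modules)

# Mix, match, and re-order - just like the stories:
print(render_help(["invite-user", "export-report"]))
```

Because each module stands alone, re-ordering or removing one never breaks the others - the arrow moves to the target.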
Who better to work directly with a customer than the person or people who originally began conversations and documented the experience into user stories, descriptors and acceptance criteria? Rather than a requirement engineer handing to a business analyst handing to ... eventually a trainer ... what if someone capable of training was the person who originally conversed with the customer and documented the user stories?
When I train a customer, what is it that I'm training? How I want the customer to use the system? Perhaps. How the customer can use the system according to what they originally requested? Often. What is the common thread? What the customer can do in the system.
Consider this as a closed loop process ... converse, document, deliver, train. Condensed since I posit document is actually part of deliver ... converse, deliver, train. Further? Converse and Deliver.
What we need is someone capable of learning what the customer seeks, what it figuratively looks like when experienced, how it is validated as done or not done, and how to help the customer experience - the experience. What we don't need is someone brought in at the end of a lifecycle experience expected to get a clue, establish a relationship, and "bring it home". Constant inclusion, constant evolution.
After setting up a new customer installation, exactly what is it that we check to verify the setup is correct and complete? Sure, we may check number of files moved, directories present and accounted for, that the application actually launches and is accessible from outside the firewall, integration points and closed-loop data exchanges ... but what is it that we really check? We check that the system is usable from a customer perspective. And how is it that we check that the system is usable, i.e. upon what premise? In fact, we often use the most popular form of quality control ever devised .... a checklist.
And what, pray tell, composes the checklist? Ah, but if not for the stories, whatever would we be doing in the system after we login? In fact, regardless what we label it ... we are verifying that "a user can..." do a list of things.
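A sketch of that idea: the post-installation checklist falls straight out of the user stories, with no separate artifact to maintain. The story text here is invented for illustration:

```python
def installation_checklist(stories):
    """Turn 'a user can ...' stories into a post-install verification checklist."""
    return [f"[ ] Verify: {s}" for s in stories]

# Hypothetical stories gathered through conversation:
stories = [
    "a user can log in from outside the firewall",
    "a user can export the monthly report as a PDF",
]

for item in installation_checklist(stories):
    print(item)
```

Keep the stories current and the checklist is always current - there is nothing extra to write.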
And who is qualified to do this work? It is not singularly the trainer just before training. It surely is not the lone responsibility of the forlorn black box tester. The reality of the situation is ... again ... the person or people eliciting the stories through conversation and experience from the beginning of the journey are naturally equipped to verify the system "can" do what the user requested. We aren't ahead of the game by having dedicated resources who install systems ... and there is no particular magic involved. User stories once again. The title of the person doing the work is irrelevant. We are discussing roles and activities versus the historic titles and departments.
Companies work diligently to move product to market. What should be in the product? Who will use the product? What is the edge? Questions every company allegedly asks and answers in order to create right solutions for customers and make money along the journey.
Fast forwarding, what does one measure the product future against? For companies that have a baseline set of requirements for their system ... the measure of "do we change or not" is often taken against the base system definition. "Should we enhance what we have, or should we break off and create a brand new branch?"
For those companies managing legacy products, there may not exist a set of requirements, just groups of people with pools and years of experience and knowledge. The challenge in this situation is that what is important is relative to the loudest, most persuasive, most tenured person at the table. It can be a reasonable approach to filtering out the shoulds and should-nots, but it is less fact-based than one may desire.
When we have a core set of user stories, regardless the age of the product, company and customer base, we have a measuring stick. "A user can...." do this and that. If we add these other three requests, it will change the fundamental principles of our application. If we do not add these three requests, we may not gain as much market share, but we will be solid experts in those things we do.
Have experienced experts. Have a measuring stick. Use user stories and have everyone keep their fingers on the pulse of these stories and their associated evolution. The user stories make money.
This is a very simple conversation and very difficult action for many people to act upon ... knowing when to stop developing a solution.
Fundamentally, as suggested in Kent Beck's Test-Driven Development: By Example, first write the test and then code until the test passes. Then, you are done coding. Novel, and yet still not practiced as often as necessary today. It mitigates over-engineering a solution, helps manage money spent, and reduces complexity, thereby reducing defect insertion velocity and density.
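Applied to stories rather than code, the same test-first shape might look like this sketch: the acceptance checks exist before the delivery, and delivery stops when they all pass. The function and checks are hypothetical illustrations, not Beck's examples:

```python
# Test-first sketch: acceptance checks are written before delivery,
# and "done" means every check passes - nothing more, nothing less.
import unittest

def story_is_done(delivered, acceptance_checks):
    """A hypothetical definition of done: all acceptance checks pass on the delivery."""
    return all(check(delivered) for check in acceptance_checks)

class TestDefinitionOfDone(unittest.TestCase):
    def test_done_when_all_acceptance_criteria_pass(self):
        # The story: "a user can export" and "the export is a pdf".
        checks = [lambda d: "export" in d, lambda d: "pdf" in d]
        self.assertTrue(story_is_done({"export", "pdf"}, checks))
        self.assertFalse(story_is_done({"export"}, checks))

unittest.main(exit=False, argv=["definition-of-done"])
```

When the last check passes, we are done - not when the task list runs dry.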
What if we were to practice this with solution delivery in general? What if we stopped delivering after all user stories could be executed as illustrated and associated acceptance criteria all pass? Who inherently has this knowledge? In many cases ... it is the requirement/business analyst, the tester, the trainer, sometimes the technical writer and could be the project manager. The definition of done is _not_ when we run out of tasks in our MS Project schedule. The definition of done is when the user stories are present and work.
We need people who know what these stories are, whether the software does or does not do them properly, and a relationship with a customer to evolve them.
We simply need a person who is willing to mold an application into user stories. It is very reasonable for people to have specialties and special interests. It is not reasonable to put all people with like specialties into special groups apart from one another.
Expect the manual tester, project manager, requirement analyst, business analyst, trainer, and technical writer to practice the same fundamental behaviors:
- understand what a customer is asking or seeking;
- document a short statement which represents that which is sought;
- describe a little about what it would look like if this sought experience were delivered; and
- articulate some short burst type statements, in the customer's own words, suggesting what would make this experience good versus stellar versus not so good.
Then evolve the customer and the stories with the software application, hand in hand, in two week sprints - together. Eliminating phases eliminates the need for "specialists".
Do you really need more people? Or do you need the people you have to become more?