Iterative, autonomous software evolutionary behaviors. What are they and why do you give a rip?
Very simple. Without them, you're practising old-school, big-bang, very expensive command-and-control delivery methods characterized by high administrative overhead, long-tail deliveries, miserable response times to change requests, missed market opportunities, high defect densities and completely bogus status reports detailing effort-to-return ratios with alleged quality metrics. Sure. Sound like subjective, exaggerated descriptors?
The next great game is not hardware. It is software. It is artificial intelligence. It is autonomous decisioning systems.
- Blood recirculates through the body 60-70 times/hour in a healthy person at rest.
- DVD player disc revolutions are anywhere from 37,800 to 91,800 revolutions per hour.
- Hummingbirds flap their wings about 200,000 times per hour.
- The Ford St. Thomas plant reports 62 completed passenger cars delivered per hour.
Question: How long does it take you to move production-class software to market?
Don't give me all the reasons, regulations, complexities and enormities of why it takes as long as it takes. I don't care. I'm not working at your company. Since you are, be honest with yourself. Why does it take as long as it takes?
Next Question: How much does it cost? (If you're moving software/hardware integrative systems, how much just for the software piece?)
There are a number of things you can do to implement a continuously evolving software system in your company. How frequently the software evolves internally and how frequently it moves to public production are two completely different conversations we can, and will, comfortably decouple.
Implementing a system that continuously looks for new code, continuously builds it into the existing code base and tells everyone when the latest check-in breaks the previously known-to-be-good source bundle is one of the easiest ways to move from a command-and-control manual system to an autonomous one. Set it up once, run it forever. This assumes your teams are *all* using version control.
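A minimal sketch of that poll-build-notify loop, in Python. The `fetch_cmd` and `build_cmd` arguments are placeholder shell commands, not a prescription for any particular version-control or build tool:

```python
# One cycle of a continuous-build loop: look for new check-ins,
# build them, and tell the house whether the bundle is still good.
import subprocess

def run(cmd):
    """Run a shell command; True means exit code 0 (success)."""
    return subprocess.run(cmd, shell=True).returncode == 0

def build_cycle(fetch_cmd, build_cmd, notify):
    """Poll for new code, build it, report the result via notify()."""
    if not run(fetch_cmd):                  # no new check-ins this cycle
        return "no-new-code"
    if run(build_cmd):
        notify("build OK: latest check-in integrates cleanly")
        return "ok"
    notify("build BROKEN: latest check-in breaks the known-good bundle")
    return "broken"
```

Run it from a scheduler every few minutes and the "tell everyone" half of the system never sleeps.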
Automated test-driven development is another facet of this continuously evolving structure. Tests are written first, based upon specifications. The new test is run to prove it fails. After the test agreeably fails, code is written only until the test passes, and you are done. No need to over-engineer the solution, for a myriad of reasons including defect insertion, feature and function specification creep and unwanted monies spent with no ROI potential. As the code is checked in as 'completed', the automated test(s) are checked in with it as a single unit. When the build routine looks for new code, it finds code+tests and kicks off a compile. No tests, no compile, and the new code is rejected. If the code compiles, test. If the tests pass, keep going.
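Test-first in miniature. The specification below (free shipping at $50 and up, a flat $5 otherwise) is invented purely for illustration; the point is the order of work, and that test and code travel as one unit:

```python
# The test is written first, from the specification, and is run to
# prove it fails before any production code exists.
def test_shipping_cost():
    assert shipping_cost(50) == 0      # boundary: free shipping starts here
    assert shipping_cost(49.99) == 5   # just under the threshold: flat rate
    assert shipping_cost(120) == 0

# Only then is code written -- and only until the test passes.
# Nothing here is engineered beyond what the test demands.
def shipping_cost(order_total):
    return 0 if order_total >= 50 else 5
```

When the test stops failing, you stop typing. Check both in together.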
Implementing another facet of this system means continuously inspecting the code against a set of previously defined standards: for example, naming standards, cut-and-paste duplication, complexity, dead (uncalled) code, traceability to change requests, information-security alignment and anything else important to your team's deliverables. Every time new code is checked in, it is sniffed out against the pre-set standards. New code, compile. Clean compile, test. Clean tests, inspect. Clean inspection, notify the house. Failure at any one of these gates, notify the house and reject the newest code submission.
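A toy inspection pass, sketched under the assumption of Python source and just two sample standards (snake_case function names and no copy-pasted lines). A real pipeline would wire in a dedicated static-analysis tool, but the gate logic is this simple:

```python
import re

# Sample standard: function names must be snake_case.
SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def inspect_source(lines):
    """Flag non-snake_case function names and verbatim duplicate lines."""
    findings, seen = [], {}
    for n, line in enumerate(lines, 1):
        m = re.match(r"\s*def\s+(\w+)", line)
        if m and not SNAKE_CASE.match(m.group(1)):
            findings.append(f"line {n}: name '{m.group(1)}' violates naming standard")
        key = line.strip()
        if key in seen:
            findings.append(f"line {n}: duplicate of line {seen[key]} (cut-and-paste?)")
        elif key:
            seen[key] = n
    return findings
```

An empty findings list means a clean inspection; anything else means notify the house and reject.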
And what about the ability to deploy on-demand, even continuously? Yet another facet of this mechanism. Deploying to downstream test systems, customer-acceptance test systems, sales and customer-support test systems, production beta sites and/or live production systems is at hand. The same routines that check for new code and carry it all the way through compiling, testing, inspection and notifying teams of new-package readiness are also capable of moving the package into a testable or production-usable environment and state, continuously.
If new code, then compile.
If clean compile, then run tests.
If clean tests, then inspect.
If clean inspection, then deploy.
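The four gates above chain into one short routine. The stage names and callables here are placeholders for whatever your shop actually runs at each gate:

```python
def pipeline(stages):
    """stages: ordered (name, callable) pairs; each callable returns
    True on success. The first failure stops the line and names the gate."""
    for name, gate in stages:
        if not gate():
            return f"stopped at {name}"
    return "deployed"
```

The first failing gate halts everything downstream, so a broken test can never reach deployment.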
And how fast does all this happen? The answer is relative to the size of the compile, the number of tests, the number of inspection checks and the deployment termination points. However, it is very reasonable to see all of this happen in minutes, multiple times per hour. In other words, while we may not yet be as fast as hummingbirds, we today have the capability to move new software to production in a measurably tested and validated state multiple times per hour, without people. We could reasonably perform over-the-air software updates to navigation, integration and operating systems in cars, tractors and other assembly-line deliverables in hours.
Question: Why aren't you?
Sure, there are some operational and organizational dependencies you need to solve to get here. For example, if I can generate a new, working software feature in less time than you can generate a document talking about it, why are you generating the document? Furthermore, why could a team not already be delivering working, tested, inspected software modules every thirty days as production-ready candidates while I'm still spec'ing out new feature requests? Why do I need a full list of feature requests prior to starting anything? Doesn't this seem like an assembly-line behaviour to you? Smaller batches, higher throughput.
The ability to deliver a software system in a matter of minutes and hours exists. Compiled, inspected to standards, tested to specifications and deployed. All autonomously. If you're still spending months eliciting requirements, designing possibilities and manually inspecting, verifying and validating, then your competitors are not only going to beat you to market and take more market share by exploiting your absence; their margins will be superior as well. We can religiously debate process all day. It is the money math we're really discussing.
Either you can move software to production as quickly as a completely functional car, tractor, grader or drill, or you can't. Simply looking at your software request backlog and budget-to-actual financials, coupled with production support reports, will tell you the whole story.
Now, when do we decouple software from hardware delivery velocities/methods and when do we recouple them? Let's keep going with this conversation.