Estimation, often perceived as a sophisticated form of guessing, is crucial for every project. Whether we’re estimating the problem, the solution, or potential returns, it all boils down to making educated guesses. But how often are these estimates accurate? And if they’re inherently flawed, how can they guide our decisions?
The Nature of Estimation
Estimations are merely predictions of the future. Their inherent unpredictability makes them susceptible to error. Predicting timelines or returns on new projects is particularly challenging in software development. Great software products don’t magically appear; they evolve. Each step in their development process is hard to predict, making accurate estimates even more elusive.
When we speak of estimations in software, we’re exploring whether the project or feature is worth our effort. Essentially, we’re trying to answer: “What should we work on next?”
Traditional approaches rely on the HiPPO (Highest Paid Person’s Opinion). But should we always trust the HiPPO? Ideally, we aim to get maximum value for minimal effort. This means focusing on projects that promise high returns for less work.
The Planning Dilemma
Organisations often struggle with detailed planning, constantly seeking certainty. The inclination to demand a comprehensive plan and a thorough cost-benefit analysis can hamper decision-making. However, there’s a simple tool that has made a difference in many organisations. By visualising the planning problem, this approach prioritises essential features and ultimately delivers better customer value. Contrary to what you might expect, this tool is straightforward.
Planning’s Inherent Challenge
Humans, by nature, are not the best at predicting the future. Planning is invariably a challenge, and plans often need to be revised. However, effective strategies can still be valuable. The challenge is in striking the right balance between accuracy and precision. While an accurate estimate is desired, chasing extreme precision can be counterproductive.
A scientific approach to this dilemma involves the concept of “error bars,” which measure the degree of uncertainty in any estimate. Given that software estimation has vast error bars, especially at a project’s onset, increasing precision might not be the right goal.
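One common way to make those error bars explicit (not from the source, but a standard technique) is a three-point estimate in the PERT style: ask for an optimistic, a most likely, and a pessimistic figure, and derive an expected value plus a spread. The function name and the example numbers below are illustrative only.

```python
def three_point(optimistic: float, likely: float, pessimistic: float) -> tuple[float, float]:
    """PERT-style estimate: weighted expected value and an approximate
    standard deviation, which serves as the 'error bar'."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6
    return expected, spread

# A task guessed at 2 weeks best case, 4 weeks likely, 12 weeks worst case:
expected, spread = three_point(2, 4, 12)
# expected = 5.0 weeks, with a spread of roughly 1.7 weeks either way
```

The asymmetry of the inputs (4 is much closer to 2 than to 12) is typical of software work, which is why the expected value lands above the “most likely” guess.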
Value Estimation: Beyond Just Costs
Traditional value estimation often overlooks the element of time. It’s not merely about the feature’s worth but the cost incurred for not having it over time. This concept is encapsulated in the CD3 (Cost of Delay Divided by Duration) model. CD3 provides a more comprehensive approach to prioritising tasks by focusing on the cost of delay.
However, as with all methods, CD3 has limitations, most notably that it still requires us to estimate the cost of delay accurately.
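The CD3 calculation itself is simple: divide each feature’s cost of delay by its estimated duration, then work on the highest scores first. The sketch below assumes a cost of delay expressed as value lost per week and a duration in weeks; the feature names and figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    cost_of_delay: float  # value lost per week the feature is missing
    duration: float       # estimated weeks of work

def cd3(feature: Feature) -> float:
    """Cost of Delay Divided by Duration."""
    return feature.cost_of_delay / feature.duration

features = [
    Feature("checkout redesign", cost_of_delay=30_000, duration=6),  # CD3 = 5_000
    Feature("invoice export", cost_of_delay=8_000, duration=1),      # CD3 = 8_000
]

# Highest CD3 first: the small, quick feature outranks the big, valuable one.
ranked = sorted(features, key=cd3, reverse=True)
```

Note how the cheap one-week feature wins despite its lower absolute value: delaying it costs proportionally more per week of effort, which is exactly the behaviour CD3 is designed to surface.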
A Pragmatic Alternative
At its core, this alternative favours simplicity. Here’s a more straightforward approach:
For every feature:
Determine its perceived value to users. Would they equate its worth to a beer, a car, or a house?
Estimate its cost. Is it as much as a beer or as high as a house?
Features that users deem as valuable as a house but cost as little as a beer are a go. Those with minimal user value but high costs are a no-go. For everything in between, further exploration is needed. This seemingly simplistic approach provides clarity, allowing us to concentrate on high-value projects and avoid wasting resources.
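The triage rule above can be sketched in a few lines. This is a minimal illustration of the beer/car/house idea, with an assumed three-point ordinal scale; only the two extreme combinations get a verdict, and everything else is flagged for exploration, as the text describes.

```python
# Ordinal scale for both perceived value and cost (assumed mapping).
SCALE = {"beer": 1, "car": 2, "house": 3}

def triage(value: str, cost: str) -> str:
    """Classify a feature from its perceived value and rough cost."""
    v, c = SCALE[value], SCALE[cost]
    if v == 3 and c == 1:
        return "go"       # worth a house, costs a beer
    if v == 1 and c == 3:
        return "no-go"    # worth a beer, costs a house
    return "explore"      # everything in between needs more thought
```

For example, `triage("house", "beer")` yields `"go"`, while a car-for-a-car feature lands in `"explore"`, prompting the conversation the tool is meant to trigger.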
Implementing a simple, qualitative tool like the one described can profoundly impact software development in several crucial ways. First and foremost, it focuses the development efforts on what truly matters. By distinguishing between high-value, low-cost features and high-cost, low-value ones, development teams can prioritise their efforts effectively. This approach significantly reduces the chances of spending vast amounts of time and resources on features that don’t yield a commensurate return on investment. As a result, projects have a higher likelihood of success because they align more closely with user needs and stakeholder expectations.
Cost-wise, this approach promotes efficient resource allocation. Development is expensive, both in terms of monetary cost and time. Projects can be delivered faster and more economically by focusing on high-value propositions and disregarding low-yield features. The funds saved on avoiding inconsequential features can be redirected to other high-impact areas, ensuring that the software meets and exceeds user expectations.
Including the client in this decision-making process amplifies these benefits. Clients bring a unique perspective: a deep understanding of their market, users, and business goals. When clients prioritise features based on perceived value and cost, the development process becomes more collaborative and transparent. This shared responsibility and decision-making foster trust, reducing potential friction and misalignment later in the project.
Furthermore, clients who see the logic behind the decisions are more likely to be understanding and supportive, even if certain desired features are deferred or excluded based on the evaluations. This synergy between the development team and the client ensures that the product is not just a culmination of technical expertise but also a true reflection of the client’s vision and the users’ needs.
By using a simple estimation model and actively involving the client, software development projects stand a better chance at success, delivering more value at a reduced cost.
A big thank you to Dave Farley, whose excellent YouTube channel, Continuous Delivery, introduced the ideas in this article.