No two software projects are exactly alike. One way to find out what a “typical” software project looks like, then, is to take a large sample from the QSM historical database of over 13,000 completed software projects and look at measures of central tendency for staff, effort, size, schedule duration, and productivity.
For this study, QSM looked at validated projects completed beginning in 2010. We eliminated one-person projects and those that expended less than one person-month of effort; the eliminated projects accounted for 13% of the sample. About 80% of the projects fell into the Business IT application domain, many of them from the financial services sector. This domain includes projects that typically automate common business functions such as payroll, financial transactions, personnel, order entry, inventory management, materials handling, and warranty and maintenance products.

We determined both a median and an average for each metric. With the exception of schedule (project duration), these differed significantly, which indicates that the sample metric values were not normally distributed. To minimize the effect of unrepresentative projects (those that comprise a small part of the sample but whose metric values are very large or very small), we used the medians, values with 50% of the projects above and 50% below, as the better measure of central tendency.
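The effect that drives this choice is easy to demonstrate. In the small, invented set of effort values below (hypothetical person-month figures, not drawn from the QSM database), one large project drags the mean far above the "typical" project, while the median stays put:

```python
import numpy as np

# Hypothetical effort values (person-months) for ten projects; the last
# one is an outlier, as large projects often are in skewed samples.
effort = np.array([4, 5, 6, 7, 8, 9, 10, 12, 15, 400])

print(f"mean:   {effort.mean():.1f} PM")     # pulled up to 47.6 by the outlier
print(f"median: {np.median(effort):.1f} PM") # stays at 8.5, the "typical" value
```

Nine of the ten projects used 15 person-months or less, yet the mean suggests a typical project costs nearly 48; the median tells the representative story.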
The "Typical" Project
It seems like ever since the dawn of software development, humans have struggled with the question of team size. What team size is most productive? Most economical? When does adding more people to a project cease to make sense? So it comes as no surprise that one of the most popular articles on our website is a study Doug Putnam did in 1997 on team size, Team Size Can Be the Key to a Successful Project
. The article leveraged data from 491 completed projects in the QSM Database to determine the optimal team size, with "optimal" meaning most likely to achieve the highest productivity, the shortest schedule, and the lowest cost with the least variation in the final outcome. The study determined that for medium-sized systems (35,000 to 95,000 new or modified source lines of code), smaller teams of 3-7 people were optimal
. This article continues to be referenced today, especially by the agile community.
The topic of team size reappeared in Don Beckett's study of Best in Class and Worst in Class projects for the 2006 QSM Software Almanac. To identify top and bottom performers, he ran regression fits of effort and schedule vs. project size through a sample of nearly 600 medium- and high-confidence IT projects completed between 2001 and 2004. On average, Best in Class projects delivered 5 times faster and used 15 times less effort than Worst in Class projects. What made the Best in Class projects perform so much better? They used smaller teams (over 4 times smaller, on average) than the worst performers.
Agile is all the rage today, and companies are investing significant capital to work within agile frameworks. Are these new methods the key to reducing project failure? When projects get behind schedule, a common reaction is still to add more people. Doug Putnam recently examined 390 contemporary applications of the same size, a significant portion of which used agile methods and tools, to see what matters more: staffing decisions or methodology. He discovered that while the additional staff reduced the schedule by approximately 30%, the project cost increased by 350%. The additional staff also created 500% more defects that had to be fixed during testing. Over the past 15 years, QSM has performed this same study in five-year increments and has found the same results: staffing decisions have more of an impact on project success than any development methodology. In this article, Doug Putnam identifies a staffing "sweet spot" and outlines a step-by-step planning process that uses predictive analysis and early estimation to more accurately account for staffing needs.
Why do projects fail? There are a multitude of reasons, from lack of up-front planning, to failing to make necessary adjustments as requirements change, to overstaffing when the project is running late. Whatever the reason, there are steps you can take to avoid these common traps. In this article for Software Executive Magazine
, Larry Putnam, Jr. explains how focusing on scope-based estimates, agile forecasting, and smaller teams will help your development team deliver products on time and according to budget.
Enterprise IT teams have been searching for years for the Holy Grail of software development: the greatest possible efficiency, at the least possible cost, without sacrificing quality.
This endless search has taken many forms over the years. Twenty years ago, development teams turned to waterfall methodologies as a saving grace. Soon after, waterfall begat object-oriented, incremental, and spiral approaches, and then Rational Unified Process (RUP) practices.
Today, it’s agile development’s turn in the spotlight
. C-suite executives are investing huge sums of money to develop their organizations’ agile methodologies. They’re also committing significant resources to train employees to work within agile frameworks.
Yet many projects are still failing, clients remain unsatisfied, and IT departments are often unable to meet scheduling deadlines. Why?
It’s the staff, not the method.
Whenever a project falls behind schedule, the natural inclination is to add more staff. There’s a belief that doing so will accelerate development and, ultimately, help the team hit their deadlines.
What does a typical software project in the QSM historical database
look like, and how has “what’s typical” changed over time? To find out, we segmented our IT sample by decade and looked at the average schedule, effort, team size, new and modified code delivered, productivity, language, and target industry for each 10-year period.
The QSM benchmark database represents:
- 8,000+ Business projects completed and put into production since 1980.
- Over 600 million total source lines of code (SLOC).
- 2.6 million total function points.
- Over 100 million person hours of effort.
- 600+ programming languages.
During the 1980s, the typical software project in our database delivered 154% more new and modified code, took 52% longer, and used 58% more effort than today’s projects. The table below captures these changes:
Frederick Brooks famously said that adding staff to a late project only makes it later
. The reasons are readily apparent. The project is already experiencing difficulties, most of which were not caused by understaffing. The usual culprit is an unreasonable schedule constraint; but starting work before the requirements were well defined, poor change control, or weak configuration management could also be the villains (or at least play a contributing role). None of these root causes is staff-related, and adding staff does not fix them: it merely adds more bodies to the confusion.
But how do we determine the most appropriate staffing profile for a software project? Parametric estimation models suggest a way: these models capture a relationship between the functionality a project delivers (called size in the software estimation vernacular) and staff. Fitting a regression line through hundreds or thousands of software projects determines an average and the deviation from that average. The regression reflects how software projects have actually performed; this is what has worked. This capability is built into estimation software like SLIM-Estimate. A wise approach is to take the average as a starting point, then adjust the modeling parameters that would raise or lower the staff. A word of caution here: if you find that your adjustments push staff, effort, or duration more than one standard deviation above or below average, you are probably being either too optimistic or too pessimistic. Don't do it!
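A minimal sketch of this kind of regression follows, using invented project data and plain numpy rather than anything from SLIM-Estimate's actual model: fit a power law (effort as a function of size) by least squares in log-log space, then report the average and the one-standard-deviation band around it.

```python
import numpy as np

# Hypothetical illustration, not SLIM-Estimate's implementation: fit
# effort = a * size^b through synthetic "historical" projects and flag
# the +/- 1 standard deviation band around the fitted average.

rng = np.random.default_rng(42)

# Synthetic sample: size in KSLOC, effort in person-months, with
# log-normal scatter standing in for real project-to-project variation.
size = rng.uniform(10, 100, 200)
effort = 2.0 * size**1.2 * np.exp(rng.normal(0, 0.3, 200))

log_size, log_effort = np.log(size), np.log(effort)
slope, log_a = np.polyfit(log_size, log_effort, 1)   # exponent, intercept
residuals = log_effort - (log_a + slope * log_size)
sigma = residuals.std(ddof=2)                         # scatter around the fit

def effort_band(ksloc):
    """Average effort and the +/- 1 sigma band for a given size."""
    center = np.exp(log_a + slope * np.log(ksloc))
    return center * np.exp(-sigma), center, center * np.exp(sigma)

low, avg, high = effort_band(50)
print(f"50 KSLOC: average {avg:.0f} PM, 1-sigma band {low:.0f}-{high:.0f} PM")
```

A plan that falls inside the band is consistent with how projects in the sample have actually performed; a plan outside it is the over-optimism (or over-pessimism) the caution above warns against.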
At one time or another, almost all information technology professionals have heard cries for more resources. They may even have been the one asking for help. "If only there were more people available for this project," they've said, "then maybe it would get done on time." Well, it turns out that more staffing is not the equivalent of optimal staffing. In fact, smaller project teams are more productive and can complete projects cheaper and faster than larger ones, according to a recent study from software life cycle consultancy Quantitative Software Management. That should be good news for IT departments that have seen their ranks depleted in recent years.
I should be the last one to complain about overstaffed projects; I may owe my career to one. My first job in information technology (IT) was with a mortgage company that was a textbook example of bad practices. Annual personnel turnover was 90%, and after six months on the job, I was the person on the IT staff with the most seniority. After a year, I knew it was time to go. I applied for a job with a large systems integrator that was hiring furiously. I was drug-free, did not have a criminal record, and knew COBOL, so I was a perfect match. The project to which I was assigned had planned to ramp up to a peak staff of 25 and last about 8 months. I was team member number 60 of the 80 it eventually grew to by the time it completed (in 18 months). I stayed with that company for a number of years and have no complaints about the wide range of experiences I had and skills I gained.
What is the best way to determine how much staff a software project should have? QSM has conducted a productivity study on projects sized in Function Points
that suggests a way. A large sample of projects (over 2,000) was split into size bins. Within each bin the projects were divided into quartiles based on their staffing. The average and median productivity (Function Points per Person Month) were determined for each quartile. The following table compares productivity and staffing levels for the smallest and largest staffing quartiles.
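The binning-and-quartile procedure described above can be sketched in a few lines. The data, field names, and bin boundaries below are all invented for illustration; they are not the QSM study's actual sample:

```python
import numpy as np

# Hypothetical sketch: bin projects by size (function points), split each
# bin into staffing quartiles, and compare median productivity (FP per
# person-month) between the smallest- and largest-staffed quartiles.

rng = np.random.default_rng(7)
n = 2000
fp = rng.uniform(50, 2000, n)                    # project size in function points
staff = rng.uniform(2, 40, n)                    # average team size
months = 3 + 0.02 * fp * rng.uniform(0.5, 1.5, n)
effort_pm = staff * months                       # person-months of effort
productivity = fp / effort_pm                    # FP per person-month

bins = np.digitize(fp, [200, 500, 1000])         # four illustrative size bins
results = {}
for i in range(4):
    in_bin = bins == i
    q1, q3 = np.quantile(staff[in_bin], [0.25, 0.75])
    small = np.median(productivity[in_bin & (staff <= q1)])
    large = np.median(productivity[in_bin & (staff >= q3)])
    results[i] = (small, large)
    print(f"size bin {i}: small-team median {small:.2f} FP/PM, "
          f"large-team median {large:.2f} FP/PM")
```

Because effort is staff multiplied by duration, and adding staff shortens duration far less than proportionally, the small-team quartile shows higher FP/PM in every size bin, which mirrors the pattern the study found.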
In part one of our team size series, we looked at Best and Worst in Class software projects and found that using small teams is a best practice for top performing projects
. Part two looked at differences in cost and quality between small and large team projects
and found that small teams use dramatically less effort and create fewer defects. But simply knowing that small teams perform better doesn't tell us how small a team to use. Most software metrics scale with project size, and team size is no exception. Management priorities must also be taken into account: small projects can realize some schedule compression by using slightly larger teams, but for larger projects, using too many people drives up cost while doing little to reduce time to market:
Larger teams create more defects, which in turn beget additional rework… These unplanned find/fix/retest cycles take additional time, drive up cost, and cancel out any schedule compression achieved by larger teams earlier in the lifecycle.
In the spring of 2011, QSM consultant Don Beckett designed a study that takes both system size and management priorities into account. He divided 1,920 IT projects into four size quartiles. Using median effort productivity (SLOC/PM) and schedule productivity (SLOC/Month) values for each size bin, he then isolated top-performing projects for schedule, effort, and balanced performance (better than average for effort and schedule):
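The "balanced performance" selection can be sketched as follows, again with invented data standing in for the study's sample: within each size quartile, keep only the projects that beat the bin's median on both effort productivity and schedule productivity.

```python
import numpy as np

# Hypothetical sketch of isolating balanced top performers: projects
# above the within-bin median on both SLOC/PM and SLOC/Month. All
# figures are synthetic, for illustration only.

rng = np.random.default_rng(3)
n = 1920
sloc = rng.uniform(5_000, 200_000, n)
effort_pm = sloc / rng.uniform(100, 600, n)      # person-months
months = sloc / rng.uniform(1_000, 8_000, n)     # calendar duration

eff_prod = sloc / effort_pm                      # SLOC per person-month
sch_prod = sloc / months                         # SLOC per calendar month

# Four size quartiles, as in the study.
quartile = np.digitize(sloc, np.quantile(sloc, [0.25, 0.5, 0.75]))

balanced = np.zeros(n, dtype=bool)
for q in range(4):
    m = quartile == q
    balanced[m] = ((eff_prod[m] > np.median(eff_prod[m]))
                   & (sch_prod[m] > np.median(sch_prod[m])))

print(f"{balanced.sum()} of {n} projects are balanced top performers")
```

If effort and schedule productivity were unrelated, roughly a quarter of each bin would clear both medians; how far a real sample departs from that is itself informative, since it shows how often fast projects are also cheap ones.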