Open-source or proprietary? Businesses have been arguing about this for years, often concerned about security issues. A new study reveals that the pros and cons may depend to a large extent on the size of the project for which the software is being used.
For several years there have been two opposing camps: on one side, advocates of open-source software for company use; on the other, those who champion proprietary software. Proprietary software providers do not hand over the source code to their customer firms, so customers cannot modify the software's functionality; open-source providers do make the source code available, allowing users to share their experiences with one another with a view to improving the software overall. Traditionally, most firms have preferred proprietary software, supposedly a guarantee of security. However, open-source software allows businesses to tap into a global community of developers who contribute to improving the code. Companies like Clinovo have brought this model to highly regulated industries, such as healthcare. Now the 2012 annual report from San Francisco-based software development testing specialist Coverity re-opens the debate by shedding new light on the comparative performance of the two approaches. According to Coverity's annual Scan report 2012, neither type of software is clearly superior in all cases: the answer seems to depend on the size of the source codebase – i.e. the entire collection of source code used to build a particular application or component.
Ongoing improvement in code quality
Coverity has been publishing an annual report for several years. This year around 450 million lines of code, both open-source and proprietary, were analyzed – the largest sample the report has studied to date, which should make the findings more accurate. In all, over 68 million lines of open-source code and more than 381 million lines of proprietary code were tested, representing total scanning of 374 projects. The report shows that code quality has continued to improve over the years, and suggests that this might well be due to the emphasis companies are now placing on development testing. Coverity's analysis found an average 'defect density' (i.e. defects per 1,000 lines of software code) of 0.69 for open-source projects that make use of the Coverity Scan service, and an average defect density of 0.68 for proprietary code developed by Coverity's enterprise customers; the accepted industry standard defect density for good-quality software is 1.0.
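The defect-density metric above is simple arithmetic: defects divided by thousands of lines of code. A minimal sketch (the project size and defect count below are illustrative assumptions, not figures from the report):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Return defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# A hypothetical 500,000-line project with 345 known defects
# lands exactly on the open-source average reported by Coverity:
print(round(defect_density(345, 500_000), 2))  # 0.69
```

A density below the 1.0 industry benchmark, as both averages here are, is read as good-quality code.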
Open-source strategy for smaller codebases, proprietary for large ones
The study highlights an interesting point regarding the impact of project size – i.e. number of lines of code – on quality. While the results indicate overall continuous improvement in the quality of code, a more detailed examination revealed that variations in the relative defect density between open-source and proprietary software correlate with the size of the codebase. For smaller codebases, open-source quality is much higher: projects with between 500,000 and 1 million lines of code show an average defect density of just 0.44, rising to 0.75 for projects with over 1 million lines of code. Proprietary software, on the other hand, shows a significantly lower average defect density for projects of over 1 million lines of code, at 0.66, than it does for projects with between 500,000 and 1 million lines of code, where the average defect density is 0.98.

Possible explanations for this discrepancy are that a) open-source projects are often highly specific projects on which a dedicated team of developers is working, creating a group dynamic suited to relatively small-scale projects; and b) formalized development testing processes are often implemented for proprietary software projects only above a minimum number of lines. So, open-source or proprietary? It seems that the verdict will depend on project size.
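The crossover described above can be seen by tabulating the report's four average densities side by side (lower is better; the bucket labels are shorthand for the ranges quoted in the text):

```python
# Average defect densities from the Coverity Scan 2012 report,
# keyed by (development model, codebase-size bucket).
densities = {
    ("open-source", "0.5M-1M lines"): 0.44,
    ("open-source", ">1M lines"):     0.75,
    ("proprietary", "0.5M-1M lines"): 0.98,
    ("proprietary", ">1M lines"):     0.66,
}

for size in ("0.5M-1M lines", ">1M lines"):
    oss = densities[("open-source", size)]
    prop = densities[("proprietary", size)]
    winner = "open-source" if oss < prop else "proprietary"
    print(f"{size}: lower defect density for {winner} ({min(oss, prop)})")
```

Run as-is, this prints open-source as the lower-density model for the smaller bucket and proprietary for the larger one, which is exactly the report's headline finding.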