The use of open source software (OSS) libraries and components is all but ubiquitous these days. Even organizations with policies prohibiting the use of open source code will find that someone, somewhere is running an application that takes advantage of components and dependencies created or modified by an open source community. Even the Windows operating system ships with open source components. The sheer scale of OSS adoption, along with several high-profile OSS-based exploits in recent years, has drawn industry attention to the debate over the security implications of open source.
On the one hand, many organizations—and some development teams—are hesitant to use OSS in their projects because of security concerns. They may fear that open source development communities have fewer resources to invest in writing secure code, or that attackers will have more opportunities to find vulnerabilities in code that is so readily accessible.
Conversely, open source proponents often tout claims of superior security as one of the main advantages of adopting an open approach to development. The often-repeated maxim “Many eyes make all bugs shallow” summarizes their main argument: because open source code can be viewed by anyone who wants to participate in the project, flaws in the software will be found—and fixed—more quickly.
Which group is right? Does relying on open source components increase security risks?
Here at Netrix, we believe that the true answer lies somewhere in the middle. From a vulnerability and quality perspective, there’s nothing about OSS that’s inherently more or less risky than proprietary (or closed source) software. Whether you’re relying on open source components or mostly on vendor-provided software, your team should still follow secure coding best practices to create the best and most secure software you can.
Software quality engineering research has demonstrated that having more developers inspect code does indeed improve its quality and security—and the effect is measurable—but only if those developers are well-trained, qualified experts who inspect the code in a systematic and thorough manner.
Simply having more people looking over the code—that is, reading it casually rather than inspecting all of its parts in a methodical way with the intent of finding and fixing security issues—isn’t going to improve its quality.
There’s nothing about OSS—or the process through which it’s created—that makes thorough inspection more likely to happen. Commercial software vendors can do it just as well, if not (theoretically) better, since they may have more funding and a financial incentive to create secure products.
Of course, there are cases where open source project teams have done an exceptionally good job of building high-quality software with robust secure development practices. The Linux kernel is a shining example, and the OpenSSL Project, which develops and maintains a widely used TLS/SSL toolkit for secure communication and general-purpose cryptography, is another. Trained security researchers spend a great deal of time investigating these widely used OSS products for potential vulnerabilities.
But the security of the Microsoft Windows NT kernel is well-researched, too. It’s not an open source project, but Microsoft does make the source code available to security researchers through a shared source model.
On the other hand, smaller commercial software companies may not have large enough quality assurance (QA) teams to thoroughly test their products. Without the ability to review the source code, their customers simply have to trust in their integrity and processes, which we find are often lacking.
Given that neither open source nor commercial software is inherently more secure, you should evaluate each package or library that you incorporate into your systems individually and carefully. If you’re using open source components in your development toolchain or solution, you should also take a look at the communities behind them.
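Part of evaluating a package individually is making sure the artifact you build against is the same one you vetted. One common tactic is pinning a checksum: record the package’s SHA-256 digest when you review it, and verify every later download against that pin. Here is a minimal Python sketch of the idea; the payload bytes and digest below are purely illustrative, standing in for a real downloaded package file.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Illustrative stand-in: in practice `payload` would be the downloaded package
# file, and `pinned` the digest recorded when the component was originally vetted.
payload = b"example-library-1.2.3"
pinned = hashlib.sha256(payload).hexdigest()

print(verify_artifact(payload, pinned))       # the vetted artifact passes
print(verify_artifact(b"tampered", pinned))   # a modified artifact fails
```

Package managers and lockfiles automate exactly this check, but the principle is the same: trust a digest you recorded, not whatever the network hands you today.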
Software supply chain attacks have been making headlines recently. When a seemingly innocuous logging library like Log4j is so widely used, the discovery of a vulnerability within it has a major impact. And today’s software—whether commercial or open source—inevitably relies on pre-built components.
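One reason Log4j hit so hard is that many affected teams never declared it as a dependency at all: it arrived transitively, pulled in by something they did depend on. The sketch below uses a hypothetical dependency graph (the package names are invented) and a breadth-first walk to answer the basic supply chain question: is a given component anywhere in my tree?

```python
from collections import deque

def transitive_dependencies(graph: dict[str, list[str]], root: str) -> set[str]:
    """Breadth-first walk collecting every package reachable from `root`
    (excluding the root itself)."""
    seen: set[str] = set()
    queue = deque(graph.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(graph.get(pkg, []))
    return seen

# Hypothetical graph: "my-app" never declares log4j directly, but picks it up
# through a reporting library two levels down.
graph = {
    "my-app": ["web-framework", "report-lib"],
    "web-framework": ["http-client"],
    "report-lib": ["pdf-writer"],
    "pdf-writer": ["log4j"],
}

deps = transitive_dependencies(graph, "my-app")
print("log4j" in deps)  # True — the component is in the tree even though it was never declared
```

Real tools (dependency-tree plugins, SBOM generators) do this walk against your actual build metadata, but the takeaway holds: you can’t assess your exposure from your direct dependency list alone.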
That’s why it’s critical for software engineers to consider the integrity of the entire system before they can be confident in the integrity of their code.
There are several best practices that can help with this:

- Maintain a software bill of materials (SBOM) so you know exactly which components and versions each product includes.
- Check your dependencies against known-vulnerability sources, such as the National Vulnerability Database (NVD) and project security advisories.
- Pin component versions and verify checksums or signatures before incorporating a package into your build.
- Keep dependencies patched and up to date, and monitor the health and responsiveness of the communities behind them.
None of these tactics can guarantee that your software will be perfect, but together they can keep it free of known vulnerabilities. And just like continuous integration and continuous delivery, the process of finding these vulnerabilities can and should be automated. This is a key part of building high-quality products with discipline and integrity.
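At its core, an automated known-vulnerability check just compares your pinned dependency versions against an advisory feed. The sketch below hard-codes a single advisory entry for illustration (CVE-2021-44228 is the real Log4Shell identifier; a real pipeline would query a live feed such as OSV or the NVD rather than a local dict, and would match version ranges, not exact versions).

```python
# Hypothetical advisory table: stands in for a real vulnerability feed.
ADVISORIES = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
}

def scan(dependencies: dict[str, str]) -> list[str]:
    """Return advisory IDs whose (name, version) pair matches a pinned dependency."""
    return [
        advisory
        for (name, version), advisory in ADVISORIES.items()
        if dependencies.get(name) == version
    ]

pinned = {"log4j-core": "2.14.1", "commons-text": "1.10.0"}
print(scan(pinned))  # ['CVE-2021-44228']
```

Running a check like this on every build—and failing the build when it finds a match—is what turns a one-time audit into a continuous practice.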
To learn more about how Netrix’s expert team can help, contact us at https://netrixglobal.com/contact-us/.