Does Open Source Leave You Open to Cyberattacks?

From a vulnerability and quality perspective, there’s nothing about OSS that’s inherently more or less risky

Usage of open source software (OSS) libraries and components is just about ubiquitous these days. Even organizations with policies prohibiting the use of open source code will find that someone, somewhere is running an application built on components and dependencies created or modified by an open source community. Even the Windows operating system ships with open source components. The sheer reach of OSS, along with several high-profile OSS-based exploits in recent years, has drawn industry attention to the debate over the security implications of open source.

On the one hand, many organizations—and some development teams—are hesitant to use OSS in their projects because of security concerns. They may fear that open source development communities have fewer resources to invest in writing secure code, or that attackers will have more opportunities to find vulnerabilities in code that is so readily accessible.

Conversely, open source proponents often tout superior security as one of the main advantages of an open approach to development. The often-repeated maxim “given enough eyeballs, all bugs are shallow” summarizes their main argument: because anyone who wants to participate in a project can read its code, flaws in the software will be found and fixed more quickly.

Which group is right? Does relying on open source components increase security risks?

Here at Netrix, we believe that the true answer lies somewhere in the middle. From a vulnerability and quality perspective, there’s nothing about OSS that’s inherently more or less risky than proprietary (closed source) software. Whether you rely on open source components or mostly on vendor-provided software, your team should still follow secure coding best practices to create the best and most secure software you can.

CERTAIN TYPES OF EYEBALLS MAKE ALL BUGS SHALLOW

Software quality engineering research has demonstrated that having more developers inspect code does indeed improve its quality and security, but only if those developers are well-trained, qualified experts who inspect the code in a systematic and thorough manner. This is a measurable phenomenon.

Simply having more people looking over the code—that is, reading it casually rather than inspecting all of its parts in a methodical way with the intent of finding and fixing security issues—isn’t going to improve its quality.

There’s nothing about OSS—or the process through which it’s created—that makes thorough inspection more likely to happen. Commercial software vendors can do it just as well, if not (theoretically) better, since they may have more funding and a financial incentive to create secure products.

Of course, there are cases where open source project teams have done an exceptionally good job of building high-quality software with robust secure development practices. The Linux kernel is a shining example, and the OpenSSL Project, which develops and maintains a widely used toolkit for Transport Layer Security (TLS/SSL) and general-purpose cryptography, is another. Trained security researchers spend a great deal of time investigating these widely used OSS products for potential vulnerabilities.

But the security of the Microsoft Windows NT kernel is well-researched, too. It’s not an open source project, but Microsoft does make the source code available to security researchers through a shared source model.

On the other hand, smaller commercial software companies may not have large enough quality assurance (QA) teams to thoroughly test their products. Without the ability to review the source code, their customers simply have to trust in their integrity and processes, which we find are often lacking.

CONSIDER EVERY PIECE OF SOFTWARE INDIVIDUALLY

Given that neither open source nor commercial software is inherently more secure, you should evaluate each package or library that you incorporate into your systems individually and carefully. If you’re using open source components in your development toolchain or solution, you should also take a look at the communities behind them.

  • How large is the user base?
  • How experienced are the core maintainers of the project?
  • How well funded is the project? Whether open or closed source, is it backed by a major enterprise software company or companies?
  • What types of security and quality reviews does the code undergo? Is it ever audited for security flaws?
  • Does the project have established channels for reporting vulnerabilities, and for publishing the ones it identifies? Is your team monitoring those vulnerability publications? (A small automation sketch for this follows the list.)
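
To make that last question concrete, here is a minimal sketch in TypeScript of a check a team could automate: it queries the public OSV.dev vulnerability database for published advisories affecting a pinned package version. The package name and version are placeholders, the response handling is simplified, and it assumes a Node.js 18+ runtime with the global fetch API; treat it as a starting point, not a finished monitoring tool.

```typescript
// Minimal sketch: ask the public OSV.dev API whether any advisories have been
// published for a specific package version. Assumes Node.js 18+ (global fetch).
// The interfaces below are a simplification of the real OSV response schema.

interface OsvVuln {
  id: string;        // e.g. a GHSA or CVE identifier
  summary?: string;  // short human-readable description
}

interface OsvResponse {
  vulns?: OsvVuln[];
}

async function checkPackage(name: string, version: string): Promise<void> {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      package: { name, ecosystem: "npm" }, // other ecosystems (NuGet, PyPI, ...) use different values
      version,
    }),
  });

  const data = (await res.json()) as OsvResponse;
  const vulns = data.vulns ?? [];

  if (vulns.length === 0) {
    console.log(`${name}@${version}: no published advisories found`);
    return;
  }

  console.log(`${name}@${version}: ${vulns.length} published advisories`);
  for (const v of vulns) {
    console.log(`  ${v.id} ${v.summary ?? ""}`);
  }
}

// Example: check a pinned dependency from your lockfile (placeholder values).
checkPackage("lodash", "4.17.20").catch((err) => console.error("OSV query failed:", err));
```

Running a check like this on a schedule, or on every build, against the versions in your lockfile is one lightweight way to turn “monitoring vulnerability publications” from a good intention into a repeatable process.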

IMPLEMENT SECURE DEVELOPMENT PRACTICES

Software supply chain attacks have been making headlines recently. When a seemingly innocuous logging library like Log4j is embedded in so much of the world’s software, the discovery of a critical vulnerability in it, as happened with Log4Shell in late 2021, has a massive impact. And today’s software, commercial and open source alike, inevitably relies on pre-built components.

That’s why it’s critical for software engineers to consider the integrity of every component in the system, not just the code they write themselves, before they can be confident in the security of the whole.

There are several best practices that can help with this:

  • Run automated dependency scans. These let you discover the full set of packages and libraries you’re pulling in and verify their integrity. Every development organization should have the discipline to run vulnerability scans on its code before it reaches the test engineers (a minimal sketch of such a scan gate follows this list).
  • Have developers with training and expertise in secure coding practices examine your code thoroughly and intentionally.
  • Shift left. It may be a buzzword, but it’s still true that the earlier in the development process you find a defect, the cheaper and easier it will be to fix it. Beyond dependency scans, ensure you manually inspect for vulnerabilities early and often, including in the requirements and design phases before you write a single line of code.
  • Understand your development toolchain’s dependency management capabilities (npm for Node.js, NuGet for .NET, etc.) and follow an intentional process for proactively incorporating upstream patches into your solution. You can take an automated or a manual approach to this essential maintenance step (both have tradeoffs), but the default for many organizations is to think about dependency management only when the development team happens to want newer features from a particular library update. By that point, you have likely missed multiple security fixes and exposed yourself to significant risk.
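
As an illustration of the first practice above, the following is a minimal sketch of a dependency scan gate for a Node.js project, the kind of check that could run in CI before code reaches the test engineers. It wraps `npm audit --json` and fails the build on high or critical findings. The JSON field it reads (`metadata.vulnerabilities`) reflects recent npm versions and may differ on older ones; other ecosystems such as NuGet have their own equivalent tooling.

```typescript
// Minimal sketch of an automated dependency scan gate for a Node.js project.
// It shells out to `npm audit --json` and fails the build when high or
// critical advisories are reported. Treat this as a starting point, not a
// drop-in tool: the report shape can vary across npm versions.

import { spawnSync } from "node:child_process";

interface AuditCounts {
  info?: number;
  low?: number;
  moderate?: number;
  high?: number;
  critical?: number;
}

function runAuditGate(): void {
  // npm audit exits non-zero when it finds vulnerabilities, so read stdout
  // regardless of the exit code instead of treating it as a hard failure.
  const result = spawnSync("npm", ["audit", "--json"], { encoding: "utf8" });

  if (!result.stdout) {
    throw new Error(`npm audit produced no output: ${result.stderr ?? ""}`);
  }

  const report = JSON.parse(result.stdout);
  const counts: AuditCounts = report?.metadata?.vulnerabilities ?? {};

  const blocking = (counts.high ?? 0) + (counts.critical ?? 0);
  console.log("Vulnerability counts:", counts);

  if (blocking > 0) {
    console.error(`Failing build: ${blocking} high/critical advisories found.`);
    process.exit(1);
  }
  console.log("Dependency scan passed.");
}

runAuditGate();
```

Wiring a script like this into your pipeline means the scan happens on every change, not just when someone remembers to run it.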

None of these tactics can guarantee that your software will be perfect, but together they can keep it free of known vulnerabilities in the components you depend on. And just like continuous integration and continuous delivery, the process of finding those vulnerabilities can and should be automated. This is a key part of building high-quality products with discipline and integrity.

To learn more about how Netrix’s expert team can help, contact us at https://netrixglobal.com/contact-us/.