Since 2016, the number of new vulnerabilities reported each year has nearly tripled. As of April 2022, predictions of continued growth keep coming true.
The trend continues to increase. Fundamentally, more code translates to more vulnerabilities. And code now takes many forms beyond just applications or software. Code exists within embedded systems and IoT devices, resulting in hardware-borne vulnerabilities, and code is also used to define and operate infrastructure as part of DevOps practices. Security engineers and analysts have grown accustomed to the Internet being broken every week. The shortage of developers is not helping either.
Vulnerabilities reported over the last few years
Not all critical vulnerabilities are equivalent to Log4j or Spring4Shell in terms of how widely the affected software package is adopted, how exploitable it is, or how great the impact.
The ideal state for any cybersecurity program is to be able to quickly identify vulnerabilities that are truly impacting the organization and are actionable. Burning out IT teams and security teams by chasing all vulnerabilities is untenable.
Review our blogs where we explain the Log4j vulnerability and the more recent Spring4Shell.
In this article, we want to explain what vulnerabilities are, how they are scored, and how to prioritize them effectively.
MITRE defines a vulnerability as:
“A weakness in the computational logic (e.g., code) found in software and hardware components that, when exploited, results in a negative impact to confidentiality, integrity, or availability.”
For this reason, it is possible to have a critical vulnerability in code that does not affect you at all, for example, because that code runs on an IoT device protected by other security controls that effectively mitigate the exploitability of the latent vulnerability in the embedded code.
On the other hand, you may have a low-severity vulnerability that negatively impacts the confidentiality of your application. You’d likely prioritize fixing it as soon as possible, because the issue directly compromises your data’s confidentiality.
As we mentioned before, the main problem is that we are continuously fed with new vulnerabilities while still wrestling with old vulnerabilities, and there is no easy way to manage them all. We have to be quick in detection and resolution processes when something really critical is discovered and put a majority of our efforts there without forgetting the rest of the vulnerability ecosystem. It sounds simple in theory and underpins all modern security programs, but vulnerability prioritization in practice is now one of the biggest gaps in security.
To go deeper into vulnerability management, we will explain what happens when a vulnerability is disclosed.
A vulnerability can originate from anywhere.
Sometimes, it is uncovered by a company that regularly tests its own code and spends great effort demonstrating a problem with an application or the abuse of a dependency. At other times, a vulnerability might be discovered by an independent security researcher probing a system in their free time, who then reports the findings as part of responsible disclosure, or creates a proof of concept (PoC) to exploit a system and publishes the details on Twitter.
Before being officially published, these are common examples of 0-days. The term is an overloaded buzzword: once a vulnerability is publicly known, it is no longer a 0-day. This is also why these vulnerabilities are gold on black markets. If an attacker has intimate knowledge of a previously unidentified vuln, they can exploit it readily, since most organizations likely lack detection and protection mechanisms for it, at least in the initial access or exploitation phase.
A good practice among researchers is to give developers some time to start working on a patch before registering the vulnerability publicly. Otherwise, days could go by without a fix being available.
What is the best thing to do then? Improve your processes so that when a 0-day is disclosed, you are ready to detect it from that moment on, apply the appropriate mitigations, and in many cases verify that the vulnerability was not exploited in the past (when it really was a 0-day).
When the vulnerability is registered, it receives an ID. This will help us identify the vulnerability and check whether we are impacted. But where is it registered?
One of the most common registries, but not the only one, is Common Vulnerabilities and Exposures (CVE). The MITRE Corporation is the organization that identifies, defines, and catalogs publicly disclosed cybersecurity vulnerabilities, and publicly shares the resulting CVE-IDs. Vulnerability information is also shared with NIST, where additional details or security guidance may be added. That information lives within NIST’s National Vulnerability Database (NVD) and is organized by CVE-IDs.
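Once a CVE-ID exists, checking whether you are impacted often starts with pulling the record from the NVD. The snippet below is a minimal sketch that extracts the ID and CVSS score from a record shaped like the NVD API 2.0 JSON response; the sample payload is illustrative, so verify the field names against the current NVD schema before relying on them.

```python
# Sample dict imitating the NVD API 2.0 response layout
# (vulnerabilities -> cve -> metrics). Field names should be
# double-checked against the live NVD documentation.
sample_response = {
    "vulnerabilities": [
        {
            "cve": {
                "id": "CVE-2022-22965",
                "metrics": {
                    "cvssMetricV31": [
                        {"cvssData": {"baseScore": 9.8, "baseSeverity": "CRITICAL"}}
                    ]
                },
            }
        }
    ]
}

def summarize(record: dict) -> list[tuple[str, float, str]]:
    """Return (cve_id, base_score, severity) for each vulnerability."""
    out = []
    for vuln in record.get("vulnerabilities", []):
        cve = vuln["cve"]
        metric = cve["metrics"]["cvssMetricV31"][0]["cvssData"]
        out.append((cve["id"], metric["baseScore"], metric["baseSeverity"]))
    return out

print(summarize(sample_response))
# [('CVE-2022-22965', 9.8, 'CRITICAL')]
```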
Other countries have their own systems to catalog and store vulnerabilities, such as the Chinese National Vulnerability Database (CNNVD) or the Japan Vulnerability Notes (JVN). But in this article, we focus on the NVD.
Once we have confirmation that the vulnerability is real, exploitable, and has an ID, the next process is to assess the severity.
The Common Vulnerability Scoring System (CVSS) provides a way to capture the key characteristics of a vulnerability and produce a numerical score that reflects its severity. Many security teams and SOCs use the CVSS to prioritize vulnerability management activities, such as incident response processes, defect tracking and resolution, or implementation of a mitigating control.
The metrics used in CVSS v3.1, the latest version, assess the different elements that depend on the exploitation process and the impact, resulting in the final severity score. The first thing we can find in the documentation is that CVSS measures severity, not risk.
CVSS, as scored, is an “objective” score: you set some attributes of the vulnerability without context, and a formula produces a score that also maps to a severity rating. Below, we can see the real CVSS of the Spring4Shell vulnerability, which scores the severity as 9.8 (Critical).
The base score is calculated with eight variables: Attack Vector (AV), Attack Complexity (AC), Privileges Required (PR), User Interaction (UI), Scope (S), Confidentiality (C), Integrity (I), and Availability (A).
The final format of CVE-2022-22965 is a vector with this information: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
This first part corresponds to the base score, an objective value that should remain stable over time and consistent across organizations. As a supplement, there are two more metric groups, Temporal and Environmental; these values introduce more scoring complexity, though, and may not be something your organization chooses to pay attention to in the early phases of vulnerability management.
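The base score can be reproduced from the vector itself. The sketch below parses a CVSS:3.1 vector and applies the v3.1 base-score formula for scope-unchanged vectors, using the metric weights from the CVSS specification; treat it as an illustration rather than a complete implementation (scope-changed vectors need different constants).

```python
# Metric weights from the CVSS v3.1 specification, scope-unchanged case.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "C": {"H": 0.56, "L": 0.22, "N": 0.0},
    "I": {"H": 0.56, "L": 0.22, "N": 0.0},
    "A": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """Round up to one decimal place, as defined in the CVSS v3.1 spec."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    # "CVSS:3.1/AV:N/AC:L/..." -> {"AV": "N", "AC": "L", ...}
    m = dict(part.split(":") for part in vector.split("/")[1:])
    if m["S"] != "U":
        raise NotImplementedError("this sketch covers scope-unchanged vectors only")
    iss = 1 - (1 - WEIGHTS["C"][m["C"]]) * (1 - WEIGHTS["I"][m["I"]]) \
        * (1 - WEIGHTS["A"][m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]] \
        * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

Running it on the Spring4Shell vector reproduces the 9.8 Critical rating shown above.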
The Temporal metrics measure the current state of exploit techniques or proof of concept code availability, the existence of any patches or workarounds, and the confidence in the description of a vulnerability. It is something that will change along the lifecycle of the vulnerability because there’s a huge difference between having or not having the remediation ready. Environment metrics enable the practitioners to customize the CVSS score depending on the importance or business criticality of the affected IT asset to the impacted organization.
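For illustration, the Temporal adjustment is a simple product of multipliers applied to the base score. The sketch below uses the Temporal metric values from the CVSS v3.1 specification (Exploit Code Maturity, Remediation Level, Report Confidence); the roundup helper follows the spec's rounding rule.

```python
# Temporal multipliers from the CVSS v3.1 specification.
EXPLOIT_MATURITY = {"X": 1.0, "U": 0.91, "P": 0.94, "F": 0.97, "H": 1.0}
REMEDIATION = {"X": 1.0, "O": 0.95, "T": 0.96, "W": 0.97, "U": 1.0}
CONFIDENCE = {"X": 1.0, "U": 0.92, "R": 0.96, "C": 1.0}

def roundup(x: float) -> float:
    """Round up to one decimal place, per the CVSS v3.1 spec."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    return roundup(base * EXPLOIT_MATURITY[e] * REMEDIATION[rl] * CONFIDENCE[rc])

# Functional exploit code exists (E:F), but an official fix is out (RL:O)
# and the report is confirmed (RC:C), so the 9.8 base score drops slightly:
print(temporal_score(9.8, "F", "O", "C"))  # 9.1
```

This is why the Temporal score changes along the vulnerability's lifecycle: the same base 9.8 decreases once an official fix ships.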
Vendors, such as Red Hat or Debian in their role as distribution providers, will also evaluate the severity of a vulnerability in a specific context (i.e., the package inside the distribution). Customers may trust the vendor’s score more than the generic scores assigned by MITRE or NIST, as it is usually more accurate.
As we can see, this score is not affected by the remediation or fix process. If a vulnerability requires great effort to be solved, that does not change the final score. In addition, two vulnerabilities with the same score could have a very different impact or likelihood depending on the economic sector or business vertical in which they occur.
From the CVSS score calculation, several derived scoring systems have appeared that can help when evaluating the security of a system.
Some of them are:
It is obvious that we must feed our systems with more information to correlate with the CVSS and improve our vulnerability management. Remember that risk-based prioritization is the goal of all modern cybersecurity programs.
If we dig deeper into the meaning of vulnerability severity, we may be more interested in other characteristics when calculating the CVSS score.
Obviously, depending on the use case or business sector, it is possible to find alternatives to the CVSS to help you prioritize the management of your vulnerabilities. It’s not always possible to patch quickly enough, especially in cases of third-party code or partner integrations. In this case, the shift-left approach is not enough and we recommend the use of runtime security as another layer of security that enables early detection and identification of affected software, expediting the implementation of a mitigating security control.
What is the actual probability of a vulnerability being exploited by an attacker? That probability is estimated by the Exploit Prediction Scoring System (EPSS). The EPSS model produces a probability score between 0 and 1: the higher the score, the greater the likelihood that the vulnerability will be exploited.
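FIRST exposes EPSS data through a public API (for example, https://api.first.org/data/v1/epss?cve=CVE-2022-0441). The sketch below parses a payload shaped like that API's response; the sample values are illustrative, so verify the field names and real scores against the live service.

```python
# Sample payload imitating the FIRST EPSS API response shape.
# The epss and percentile values here are illustrative, not real scores.
sample_payload = {
    "data": [
        {"cve": "CVE-2022-0441", "epss": "0.92150", "percentile": "0.99500"}
    ]
}

def epss_probability(payload: dict, cve_id: str):
    """Return the exploitation probability for cve_id, or None if absent."""
    for row in payload.get("data", []):
        if row["cve"] == cve_id:
            return float(row["epss"])
    return None

print(epss_probability(sample_payload, "CVE-2022-0441"))  # 0.9215
```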
The score is maintained by FIRST, the same organization that maintains the CVSS, which guarantees its consistency with the vulnerability taxonomies and classification systems mentioned above. If we look at the highest-rated vulnerabilities of the last 30 days, we better understand the potential real impact of vulnerabilities. An example can be seen here with CVE-2022-0441, which relates to the MasterStudy LMS WordPress plugin.
To calculate the probability, the EPSS uses part of the CVSS score, but also threat intelligence indicating how easy the vulnerability is to exploit. For example, an exploit might enable exploitation of other vulnerabilities to increase impact: as part of a complex attack chain, an attacker may achieve RCE by exploiting one vulnerability and then exploit others to elevate privileges, resulting in a much more significant impact. The score also factors in the availability of exploit tools or repositories, like Metasploit or Exploit-DB, which remove the need for detailed knowledge of the exploitation steps.
The Stakeholder-specific Vulnerability Categorization (SSVC) is mostly a conceptual tool for vulnerability management. SSVC aims to avoid one-size-fits-all solutions in favor of a modular decision-making system with clearly defined and tested parts that vulnerability managers can select and use as appropriate to their context.
The goal of SSVC is to be risk-oriented, be more transparent in the calculation process, and be able to scale the quantification of vulnerability risk through automation.
Vulnerability Priority Rating (VPR) is maintained by Tenable and, similar to EPSS, combines severity with the ease of exploitation.
The Vulnerability Priority Rating (VPR) is a dynamic companion to the data provided by the vulnerability’s CVSS score, since Tenable updates the VPR to reflect the current threat landscape, such as exploit code for a vulnerability becoming available or growing in maturity. VPR values range from 0.1 to 10.0, with a higher value representing a higher likelihood of exploitation.
Other vendors, such as Snyk, have created their own scores (e.g., the Snyk Priority Score) for prioritization by combining CVSS with other factors mentioned above, such as exploit maturity, the remediation process, or mentions in the community. They even rank vulnerabilities from their own threat research that may not have associated CVE-IDs but still provide value in prioritization.
Relevant to the medical sector, the Risk Scoring System for Medical Devices (RSS-MD) is being considered. As expected, a vulnerability in this industry can directly affect people’s health or safety, so a dedicated scale is necessary to manage this type of vulnerability and its relative impacts differently.
Relevant to the manufacturing industry, the Industrial Vulnerability Scoring System (IVSS) incorporates additional factors, such as physical security, into its calculation. This score is specifically designed for vulnerabilities in industrial control systems that affect critical infrastructure, where damage can impact entire cities and the lives of citizens.
The tendency, as seen above, is to calculate the best possible score for a vulnerability, or its associated risk, by correlating as much accessible and processable information as possible in order to enrich the final result.
It is rare for one method to contradict another. Normally, they all have a similar view of the final severity, but these small differences become crucial at the huge scale of vulnerability management. The value of simplicity is worth stressing, since some of these scoring mechanisms get incredibly complicated. Many organizations would benefit from keeping their risk scoring simple so they can focus their efforts on addressing security problems instead of burning cycles qualifying or quantifying risk.
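A simple, transparent formula combining severity, exploitation likelihood, and asset criticality is one way to keep scoring manageable. Everything in the sketch below (the Finding fields, the multiplicative formula, the sample numbers) is an assumption for illustration, not a standard.

```python
from dataclasses import dataclass

# Illustrative risk-based ranking: severity (CVSS) x likelihood (EPSS)
# x asset criticality. The weights and formula are assumptions.
@dataclass
class Finding:
    cve_id: str
    cvss: float               # 0.0 - 10.0, severity
    epss: float               # 0.0 - 1.0, exploitation probability
    asset_criticality: float  # 0.0 - 1.0, set by the organization

def priority(f: Finding) -> float:
    return f.cvss * f.epss * f.asset_criticality

findings = [
    Finding("CVE-A", cvss=9.8, epss=0.02, asset_criticality=0.3),
    Finding("CVE-B", cvss=7.5, epss=0.90, asset_criticality=1.0),
    Finding("CVE-C", cvss=5.3, epss=0.60, asset_criticality=0.8),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 2))
# CVE-B ranks first despite having a lower CVSS score than CVE-A.
```

The point is not this particular formula, but that a critical CVSS score alone (CVE-A here) does not make a finding the top priority once likelihood and business context are factored in.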
It is also necessary to have complete visibility of your environment at all times, so you know as soon as possible whether you are impacted and can effectively reduce the resulting risk.
With all this information, we now need to implement our vulnerability management processes and supporting tooling in our organization.
These vulnerability scores can be reviewed ad hoc, but effective cybersecurity requires ingesting vulnerability feeds into appropriate security tooling that serves the relevant stage of the system lifecycle.
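As a minimal sketch of what ingesting a feed into tooling can mean, the code below matches feed entries against an inventory of deployed packages so that only findings affecting software you actually run reach the team. The feed layout and the package versions are hypothetical.

```python
# Hypothetical asset inventory: package -> deployed version.
inventory = {"spring-beans": "5.3.17", "log4j-core": "2.17.2", "nginx": "1.21.6"}

# Hypothetical vulnerability feed entries.
feed = [
    {"cve": "CVE-2022-22965", "package": "spring-beans", "fixed_in": "5.3.18"},
    {"cve": "CVE-2021-44228", "package": "log4j-core", "fixed_in": "2.15.0"},
    {"cve": "CVE-XXXX-YYYY", "package": "exotic-lib", "fixed_in": "1.0.1"},
]

def version_tuple(v: str) -> tuple:
    """Naive numeric version comparison helper (no pre-release handling)."""
    return tuple(int(x) for x in v.split("."))

def affected(feed: list, inventory: dict) -> list:
    """Keep entries whose package is deployed at a version below the fix."""
    return [
        item for item in feed
        if item["package"] in inventory
        and version_tuple(inventory[item["package"]]) < version_tuple(item["fixed_in"])
    ]

for item in affected(feed, inventory):
    print(item["cve"], item["package"])
# CVE-2022-22965 spring-beans
```

Here the patched log4j-core install and the undeployed package are filtered out, leaving only the finding that actually needs action.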
This is where the power of a platform like Sysdig helps, since it is designed to power security capabilities throughout the lifecycle: scanning images in build pipelines to address vulnerabilities quickly (likely CVSS scored), but also identifying threats that emerge at runtime, where SecOps teams need to trigger DFIR workflows.
The key is to be prepared for a new vulnerability and be flexible to close the gap between the vulnerability release and the detection process in your environment.
One of the best-known feeds is VulnDB, which uses the National Vulnerability Database (NVD) as a trusted source, registers vulnerabilities of its own, and collaborates with security companies to stay as up to date as possible.
If you’re missing the explicit details of a vulnerability, you must still acknowledge the potential risk and then accept, avoid, or mitigate it. You need an alternative way to handle it sensibly.
A significant hurdle in remediation is being able to quickly patch every single asset or dependency that is impacted or potentially exploitable, and these processes also need to scale. Patching old versions is not trivial: a fix could break new features, degrade performance, or have other implications when rolled out at massive scale. The problem is exacerbated by transitive dependencies. That is, your code or system likely relies on many other codebases or systems, and dependency chains become quite nested in practice. Sometimes, it is even necessary to patch old versions that are still being distributed. That is what Red Hat calls backporting.
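To see why transitive dependencies complicate patching, consider a naive walk over a small, hypothetical dependency graph that finds every path pulling in a vulnerable package. All package names here are made up for illustration.

```python
# Hypothetical dependency graph: package -> direct dependencies.
DEPENDENCIES = {
    "my-app": ["web-framework", "logging-lib"],
    "web-framework": ["http-client"],
    "logging-lib": [],
    "http-client": ["logging-lib"],
}

def paths_to(graph: dict, root: str, target: str, path=None):
    """Yield every dependency path from root to target (acyclic graphs)."""
    path = (path or []) + [root]
    if root == target:
        yield path
        return
    for dep in graph.get(root, []):
        yield from paths_to(graph, dep, target, path)

for p in paths_to(DEPENDENCIES, "my-app", "logging-lib"):
    print(" -> ".join(p))
# my-app -> web-framework -> http-client -> logging-lib
# my-app -> logging-lib
```

Even in this toy graph, upgrading the direct dependency is not enough: the vulnerable package is also pulled in transitively through http-client, which is exactly what makes remediation at scale so hard.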
Backporting means taking a fix from a newer version of the software and applying it to an older version that is still supported, often managed through automation to minimize the associated risk. A fix designed for a new version may adversely affect the previous one, so you should weigh carefully when you want to upgrade as soon as possible.
When a vendor offers to backport security fixes, it ensures that the fixes apply cleanly to previously released versions and do not introduce unwanted side effects.
Prioritization in a world with hundreds of vulnerabilities every day is a necessity. We continuously develop more software that will be targeted by attackers and add to the libraries, firmware, or common dependencies that are already used by applications and systems.
To help us, we need to ingest the vulnerability information that organizations like MITRE share, generate better indicators through the correlation of other sources of information, and maintain full visibility of our assets (and associated attack vectors) to be quick in detecting the impact. Without this, it is impossible to both efficiently plan the vulnerability mitigation process to reduce the noise and time in which we are vulnerable, and be effective in any cybersecurity program.
Manual processes can’t scale to infinity, and you’ll never have enough headcount for all your security needs. Security needs to be seamless and automated. Organizations must plan accordingly to keep ahead of the tide of critical vulnerabilities, like Log4j and Spring4Shell.
See how fast and easy you can identify which vulnerabilities pose a real risk with Sysdig.
No more scrolling through vulnerabilities line by line, struggling to estimate risk in an endless spreadsheet of issues. With Risk Spotlight, you can easily find, focus on, and fix the vulnerabilities that matter to you.
Register for our Free 30-day trial and see for yourself!