For most security issues, the key question boils down to, "Does it matter?" In vulnerability management, that means asking whether a vulnerability poses a risk, or simply, "Does this vulnerability matter enough to fix right now?"
Enterprise security teams are bombarded with vulnerability information, so it's in their best interest to make the most of their limited time and resources by prioritizing which vulnerabilities to fix. They can't fix all of them: MITRE has assigned and published over 120,000 Common Vulnerabilities and Exposures (CVE) entries since its inception, and in 2017 alone, businesses had to deal with an average of 40 new vulnerabilities per day. The next best thing is to figure out which ones need immediate attention.
Enterprises can follow the schedule set by major companies such as Microsoft's monthly Patch Tuesday or Oracle's quarterly Critical Patch Update releases, patch every bug assigned a certain score on the Common Vulnerability Scoring System (CVSS) scale, or focus on flaws that are disclosed on public mailing lists. Research firm Cyentia Institute and security company Kenna Security analyzed existing vulnerability information and found that most remediation methods are just as effective as a strategy where vulnerabilities are selected randomly.
“Given the long and growing list of open vulnerabilities that must either be dealt with or delayed, which ones are most likely to be exploited, and thus deserving of priority remediation?" the researchers wrote.
Of the 15 common remediation strategies analyzed, more than half were no better than relying on chance, or "rolling the dice," said Jay Jacobs, Cyentia's chief data scientist.
The research team focused on "discovered and disclosed vulnerabilities" in the Common Vulnerabilities and Exposures (CVE) list from MITRE and enriched the dataset of 94,597 CVEs with details from other sources, such as the National Vulnerability Database (NVD) and CVSS. The team obtained information about what vulnerabilities are actually being used in attacks by analyzing five years of historical vulnerability data and millions of data points compiled from over 15 sources, including threat intelligence feeds and information collected by third-party scanners.
Vulnerability != Risk
Not every vulnerability is a risk. If no exploit code exists, the vulnerability won't be exploited, and there is no immediate risk to the business.
Researchers found that 77 percent of the vulnerabilities they analyzed did not have any exploits developed, and only 23 percent of published vulnerabilities had associated exploit code. Just 2 percent of published vulnerabilities had observed exploits in the wild.
“Out of the thousands of new vulnerabilities published every year, the vast majority never have exploits developed, and even fewer are actively used in an attack," the researchers wrote in the report.
In order to keep critical business systems protected, enterprises have, on average, ten working days to check whether or not the new vulnerability affects them, and if so, to fix the problem.
Half of the exploits were published within two weeks of the vulnerability being discovered, and 13 percent had exploits after 30 days. About 15 percent of exploits were weaponized a month or more before the CVE was published. Only one percent of vulnerabilities had exploits developed more than a year after discovery.
Not every vulnerability is a risk.
To clarify that last point: vulnerabilities that have not had any exploit code developed within a year of the flaw's details being released will most likely never show up in attacks. Yes, attackers frequently target old vulnerabilities because they know there are plenty of users still running unpatched software, but that's because exploit code exists. Enterprises should prioritize patching those old vulnerabilities.
There is no such thing as never in infosec, and this doesn't mean vulnerabilities with no associated exploit code can be safely ignored. It means each vulnerability should be continually re-evaluated alongside newly released ones in case exploit code is developed later, at which point the risk model changes.
“The fact that an exploit exists does nothing to harm vulnerable assets directly," the researchers wrote. “It does, however, represent a tool that can be used by those who intend to cause harm, which is why exploits are an important factor in remediation decisions. Active exploitation is another risk-altering event."
What to Remediate?
The goal of a vulnerability program is to remediate vulnerabilities in a cost-effective manner before they become security incidents. It is also a fortune-telling exercise, one that requires security teams to identify which vulnerability could be the attackers' potential entry point. Get the prediction wrong, and they face a whole slew of problems, including data theft, systems damage, business disruption, and regulatory fines. Guess right, and that's time saved to evaluate the next one, and the one after that.
There is no easy way to tell beforehand which vulnerabilities will have associated exploit code, but the researchers found some clues to look for.
Roughly two out of every three CVEs used in attacks already had associated exploit code published. When exploit code has been published, the chance of seeing the vulnerability targeted in the wild is seven times higher than without published exploit code, the researchers found.
When it comes to deciding which vulnerabilities need attention, enterprises can focus on CVSS scores; prioritize patches from popular vendors; rely on reference lists such as IBM X-Force Exchange, the Full Disclosure mailing list, and BugTraq; filter on keywords and phrases extracted from vulnerability descriptions; or combine several of these approaches. Whether the approach is effective depends on whether the right vulnerabilities were remediated, be it by deploying updates and patches, modifying system configuration, or implementing security controls.
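To make the idea of combining approaches concrete, here is a minimal sketch of a prioritization queue that weighs several of the signals above. The field names, weights, and CVE IDs are all hypothetical illustrations, not Kenna's actual model:

```python
# Hypothetical scoring sketch: combine exploit status, CVSS, and reference
# lists into one ranking. Weights are illustrative, not from the research.

def priority(cve):
    score = 0.0
    if cve.get("exploit_published"):      # strongest predictor per the research
        score += 7.0                      # roughly reflects the "seven times" finding
    if cve.get("exploited_in_wild"):      # active exploitation alters risk further
        score += 10.0
    score += cve.get("cvss", 0.0) / 10.0  # CVSS as one input among many, not the driver
    if cve.get("on_reference_list"):      # e.g., posted to BugTraq or Full Disclosure
        score += 1.0
    return score

# Hypothetical backlog: a severe-but-unexploited flaw ranks below flaws
# with published exploit code.
backlog = [
    {"id": "CVE-A", "cvss": 10.0},
    {"id": "CVE-B", "cvss": 6.5, "exploit_published": True},
    {"id": "CVE-C", "cvss": 7.2, "exploit_published": True, "exploited_in_wild": True},
]
queue = sorted(backlog, key=priority, reverse=True)
print([c["id"] for c in queue])  # ['CVE-C', 'CVE-B', 'CVE-A']
```

The point of the sketch is the ordering: a CVSS 10 with no exploit code drops below a CVSS 6.5 whose exploit has been published, which matches the research's central finding.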
Trying to handle as many vulnerabilities as possible would waste resources. If the focus on which vulnerabilities to address is too narrow, then the security program could potentially miss fixing issues that are likely to show up in an attack.
“Successful vulnerability management, then, balances the two opposing goals of coverage (fix everything that matters) and efficiency (delay/deprioritize what doesn't matter)," the researchers said.
Efficiency measures precision, where the focus is on addressing higher-risk vulnerabilities that are actively being targeted. Coverage measures completeness, as the focus is on addressing as many vulnerabilities as possible regardless of their criticality or likelihood of being exploited.
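As an illustrative sketch (not taken from the report), the two metrics can be computed as precision and recall over sets of CVE IDs. All counts below are hypothetical:

```python
# Illustrative only: efficiency = precision, coverage = recall,
# scored against the CVEs that were actually exploited.

def evaluate_strategy(remediated, exploited):
    """Score a remediation strategy against observed exploitation.

    remediated: set of CVE IDs the strategy chose to fix
    exploited:  set of CVE IDs actually exploited in the wild
    """
    hits = remediated & exploited
    efficiency = len(hits) / len(remediated)  # how much effort was well spent
    coverage = len(hits) / len(exploited)     # how much real risk was addressed
    return efficiency, coverage

# Hypothetical example: the strategy fixed 10 CVEs, 3 of which were
# among the 5 that attackers actually exploited.
remediated = {f"CVE-2018-{n:04d}" for n in range(10)}
exploited = {"CVE-2018-0001", "CVE-2018-0002", "CVE-2018-0003",
             "CVE-2018-9998", "CVE-2018-9999"}
eff, cov = evaluate_strategy(remediated, exploited)
print(f"efficiency={eff:.0%} coverage={cov:.0%}")  # efficiency=30% coverage=60%
```

Fixing more CVEs raises coverage but dilutes efficiency; fixing only sure bets does the reverse, which is the tradeoff the researchers describe.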
Does this vulnerability matter enough to fix right now?
A strategy that prioritizes only "really bad" CVEs, such as those with a CVSS score of 10, would be considered highly efficient, but have low coverage since many easy-to-exploit vulnerabilities with lower CVSS scores would be missed. A strategy that increases coverage, by remediating all vulnerabilities with a CVSS score of 6 or higher, would have lower efficiency because time is being spent addressing vulnerabilities that would never be exploited.
Ideally, a remediation strategy should have 100 percent coverage and 100 percent efficiency, but the reality is there is an “inherent tradeoff" between the two.
In Kenna and Cyentia's analysis, a strategy that focused on vulnerabilities with a CVSS score of 8 or higher had an efficiency of 23.1 percent and coverage of 14.9 percent. Just going down one rating on CVSS, to a score of 7, boosted the efficiency to 31.5 percent and the coverage to 53.2 percent.
Looking at popular vendors such as Microsoft and Adobe ("Top 5 Vendors") isn't a very effective strategy, after all, as it has an efficiency of 12 percent and coverage of 21.8 percent.
The reference lists fared a little better, as looking at the full-disclosure list had an efficiency of a little less than 40 percent—which makes sense since the list tends to come with proofs of concept and working exploits—but coverage is less than 10 percent as many vulnerabilities do not get posted on the list. Looking at BugTraq has better efficiency, closer to 40 percent, and higher coverage, around 35 percent.
Researchers calculated the effectiveness of a predictive model which combined different methods (“everything") and found it had better coverage and efficiency than any of the strategies on their own. While there were ways to maximize efficiency or coverage, the "balanced" approach had 60 percent efficiency and 60 percent coverage.
“The 'Everything' model we developed outperforms all other strategies, regardless of whether your goal is to maximize coverage, efficiency, or balance both,” the researchers wrote.
Keep Refining, Keep Predicting
The research shows the vast majority of security flaws don't get used in attacks, and just as importantly, a new vulnerability report shouldn't automatically jump to the top of the remediation queue. It is in line with recent research from Akamai which found that even when web application vulnerabilities are published on public mailing lists, they didn't always get exploited.
Not all vulnerabilities are created equal, and businesses have different requirements and risk profiles, so it doesn't make sense to try to develop a single prediction model. The "everything" model is a "worst-case scenario," since in reality nobody needs to consider every single published CVE in their decision making. The volume of CVEs under consideration automatically shrinks because security teams will consider only vulnerabilities that affect the products actually in their environment. If the vulnerability program focuses only on critical systems, then even more CVEs get filtered out of the discussion. With a smaller set of vulnerabilities to worry about, it becomes easier to figure out the combination of strategies that best fits their requirements.
A small business with limited resources may prefer a model tailored towards CVEs most likely to be exploited, while another organization may be willing to waste some resources in favor of having a broader coverage.
Predictions should happen on a continuum, as security programs can take on more remediation tasks as they mature, said Mike Roytman, Kenna's chief data scientist. As security teams gain capacity, they can add more variables to the model. Treat the existing strategy as the framework, make some policy tweaks, and observe the changes in efficiency and coverage.
"Our advice is to start small: measure the efficiency and coverage of your remediation strategy," the researchers wrote.