When we scrutinized CVSS's underlying mechanics in our previous article, we found that, although it offers certain benefits, its inherent limitations mean it is not an optimal standalone method for prioritization.
In this article, we'll delve into a range of tools and frameworks for remediation prioritization, exploring how they can be applied and discussing their respective advantages and disadvantages.
The Exploit Prediction Scoring System (EPSS), first unveiled at the Black Hat conference, is similar to the Common Vulnerability Scoring System (CVSS) in that it is managed by a special interest group (SIG). Its creators aimed to address a gap in the CVSS framework by developing a predictive model that estimates the likelihood of successful exploitation based on historical data.
EPSS originally used a model based on logistic regression: a statistically robust method for estimating the probability of a binary outcome, in which the influence of multiple independent variables is considered and weighted accordingly. To apply logistic regression to gauge the likelihood of a specific event – say, whether an individual will purchase one of your products – you would start by aggregating a substantial dataset of historical marketing data on previous customers and prospects. The independent variables that could plausibly influence a purchasing decision include factors such as age, gender, salary, disposable income, profession, geographic location, and existing ownership of a competing product, among others. Whether an individual purchases the product or not is the dependent variable.
The logistic regression model indicates which of these variables makes a significant contribution to the outcome, whether positive or negative. For instance, you might discover that age < 30 and salary > $50,000 are positively correlated with a purchase, whereas "already owns a related product = true" is, unsurprisingly, negatively correlated. By weighing the variables' respective influences, we can feed fresh data into the model and obtain an estimate of the likelihood that any particular prospect will purchase the product. To ensure a logistic regression model is effective, it's crucial to assess its predictive accuracy by quantifying false positive and false negative rates, typically using ROC curves as a validation tool.
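To make this concrete, here is a minimal sketch in Python using scikit-learn and entirely synthetic data; the features, coefficients, and thresholds are hypothetical illustrations of the marketing example above, not real data.

```python
# A minimal sketch of the logistic regression approach described above,
# using synthetic data and hypothetical features (age, salary, owns_competitor).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Independent variables: age, salary, and whether the prospect owns a competing product
age = rng.integers(18, 70, n)
salary = rng.normal(55_000, 15_000, n)
owns_competitor = rng.integers(0, 2, n)

# Dependent variable: did the prospect buy? (synthetic ground truth for illustration)
logit = -1.0 + 1.2 * (age < 30) + 0.8 * (salary > 50_000) - 1.5 * owns_competitor
purchased = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, salary, owns_competitor])
X_train, X_test, y_train, y_test = train_test_split(
    X, purchased, test_size=0.3, random_state=0
)

# In practice you would scale features; omitted here to keep the sketch short
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Coefficients show which variables push the probability up or down
print(dict(zip(["age", "salary", "owns_competitor"], model.coef_[0])))

# Predicted purchase probability for a new prospect: 28 years old, $60k, no competing product
print(model.predict_proba([[28, 60_000, 0]])[0][1])

# Validate predictive power with the area under the ROC curve
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```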
The EPSS research team analyzed over 25,000 vulnerabilities from 2016 to 2018, distilling 16 objective variables related to vulnerability characteristics, including affected vendor information, whether exploit code was publicly available in databases like Exploit-DB or frameworks such as Metasploit and Canvas, and the number of references within the corresponding CVE entry.
These were the independent variables; the dependent variable was exploitation of the vulnerability in the wild, based primarily on data from sources such as Proofpoint, Fortinet, AlienVault, and GreyNoise.
The existence of weaponized exploit code had the most significant positive impact on the model, followed by Microsoft being the affected vendor – presumably owing to the scale and recognizability of its products, and its history of being targeted by malicious actors. The availability of proof-of-concept code, and Adobe being the affected vendor, were also positively correlated with exploitation.
Interestingly, the study found a negative correlation when Google or Apple was the affected vendor. The assumption is that Google's products have numerous vulnerabilities, most of which are never exploited in the wild, whereas Apple's closed ecosystem has historically made it less of a target. The inherent characteristics of a vulnerability, as reflected in its CVSS rating, showed limited correlation with the outcome – although, as might be expected, remote code execution flaws were more likely to be exploited than local memory corruption issues.
Initially, EPSS was implemented in a spreadsheet, and provided an estimated probability that a specific vulnerability would be successfully exploited within the following 12 months. The model has since moved to a centralized architecture incorporating a machine learning algorithm, and has expanded its inputs with variables such as publicly disclosed vulnerability lists, Twitter/X mentions, integration into offensive security tools, and correlation of exploitation activity with vendor market share and age; it now estimates the likelihood of exploitation within a 30-day window rather than 12 months.
In its current form, users can access the latest model scores either by downloading a daily CSV file from the EPSS website or through an API integration with other systems. Although the National Vulnerability Database (NVD) favors CVSS scores and does not display EPSS, EPSS scores do appear in various other vulnerability databases.
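As an illustration, a minimal sketch of retrieving a score via the public EPSS API might look like the following; the endpoint and response fields shown reflect FIRST's published API at the time of writing, and should be verified before relying on them.

```python
# Sketch: querying the public EPSS API for a single CVE.
# Endpoint and field names are assumptions based on FIRST's published API docs.
import requests

def get_epss(cve_id: str):
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json().get("data", [])
    if not data:
        return None
    record = data[0]
    # 'epss' is the 30-day exploitation probability; 'percentile' ranks it among all scored CVEs
    return {
        "cve": record["cve"],
        "epss": float(record["epss"]),
        "percentile": float(record["percentile"]),
        "date": record["date"],
    }

if __name__ == "__main__":
    print(get_epss("CVE-2021-44228"))
```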
The correlation between CVSS scores and actual exploitation has always been tenuous, which makes EPSS a welcome addition: whereas CVSS provides insight into a vulnerability's severity, EPSS provides information on the likelihood of exploitation. A vulnerability might have a CVSS Base score of 9.8 but an EPSS score of just 0.8%, indicating that, despite its severity, the bug is less than 1% likely to be exploited over the next 30 days. Conversely, another vulnerability might have a relatively modest CVSS Base score of 6.3 but an EPSS score of 89.9%; in that case, prioritization would likely be warranted.
It might be tempting to simply multiply CVSS and EPSS scores together to produce a combined severity-risk value, but it's crucial to remember that a CVSS rating is an ordinal measure, not a precise quantitative assessment. EPSS's creators are clear that it communicates different information from CVSS, and that the two metrics should be considered separately, albeit alongside each other.
So does considering both CVSS and EPSS give us a more nuanced view of vulnerabilities? To an extent – but while EPSS, like CVSS, is free to use, it also comes with certain limitations that are important to understand.
What does EPSS truly measure?
EPSS provides a probability score, indicating the likelihood of a specific vulnerability being exploited in general. It does not claim to quantify the likelihood of a specific threat actor targeting the vulnerability, nor does it assess the impact of successful exploitation, or whether the vulnerability is being incorporated into tooling by a malicious group – a worm or a ransomware gang, for instance. The prediction is a binary outcome – exploitation either occurs or it doesn't – although this oversimplifies reality, in which exploitation comes in various shades of grey. An EPSS score therefore provides insight only into the likelihood of exploitation taking place within the subsequent 30-day window. On a related note, the value is only meaningful within that interval: EPSS scores are inherently time-dependent and should be recalculated regularly to remain accurate. A single EPSS score is a snapshot in time, not a fixed measure.
EPSS is a 'pre-threat' tool
EPSS is a predictive, forward-looking system. Assuming comprehensive data is available, each identified CVE carries an estimated probability that the corresponding vulnerability will be exploited within the ensuing 30-day period. Users can take advantage of this for prioritization if they choose – but the score gives no indication of whether a vulnerability is already being actively exploited, since it is a predictive measure. To return to our earlier logistic regression example, it would be like using the model to target marketing at customers who had already bought the product six weeks earlier. This may seem obvious, but it's worth stating: EPSS scores are of no use in prioritizing vulnerabilities that have already been exploited.
Lack of transparency
EPSS shares a transparency problem with CVSS, albeit for a different reason. EPSS is a machine learning model, and its underlying code and data are not publicly accessible. The EPSS maintainers acknowledge that certain information is withheld because of agreements with commercial partners, stating that "we cannot share some information" without elaborating on the specific nature of those arrangements. The model and the code behind EPSS are therefore something of a black box.
Assumptions and constraints
As Jonathan Spring, a researcher at Carnegie Mellon University's Software Engineering Institute, has pointed out, there are assumptions underpinning EPSS. The EPSS website states that the system calculates "the probability that a software vulnerability will be exploited in the wild", but certain simplifications apply. "Software vulnerability" typically means a designated CVE, yet some software providers or bug bounty managers might not use CVEs for prioritization at all. As Spring notes, this can happen when a CVE has not yet been published for a given issue – perhaps a vendor is collaborating with a researcher on a patch prior to disclosure – or when the issue is more of a misconfiguration that would not normally merit a CVE designation.
"In the wild" is also not defined in any further detail, leaving the extent and nature of observed exploitation unclear. The EPSS authors note that, because the exploitation data relies heavily on IDS signatures, there is a bias towards detecting network-based attacks targeting perimeter devices.
Numerical outputs
Like CVSS, EPSS generates a numerical output. And as with CVSS, users should understand that the risk posed by a vulnerability cannot be distilled into a single number, and it is generally not advisable to attempt to combine CVSS and EPSS scores. Users should weigh numerical scores carefully, taking contextual factors and the scheme's limitations into account when making decisions. The EPSS score is a standalone numerical value; it offers no guidance on how to interpret it or what action to take.
Potential future disadvantages
The authors of EPSS acknowledge some potential future problems. A cunning attacker might deliberately incorporate lower-severity flaws into their toolset, recognizing that some targets may overlook or downplay them. And because EPSS relies on machine learning, attackers could attempt to manipulate its inputs – social media mentions or GitHub repositories, for example – to artificially inflate the scores of certain vulnerabilities: a form of adversarial manipulation.
Stakeholder-Specific Vulnerability Categorization (SSVC), developed by Carnegie Mellon University's Software Engineering Institute (SEI) in collaboration with the Cybersecurity and Infrastructure Security Agency (CISA) in 2019, diverges from CVSS and EPSS in that it does not produce a numerical score. Instead, it's a decision-tree model (in the conventional, logical sense, rather than the machine learning sense). It aims to address two perceived shortcomings of CVSS and EPSS: first, users are typically left to interpret numerical scores on their own, with no guidance on what to do with them; second, both frameworks center the vulnerability, rather than the stakeholder, in their calculations.
Under the framework, users make prioritization decisions by working through a main decision tree with several branching decision points. The first decision point is the state of exploitation: whether exploitation is purely theoretical (no known exploits), demonstrated through proof-of-concept code, or actively occurring in real-world attacks. Subsequent decision points cover exposure (small, controlled, or open), whether exploitation can be automated, and "value density" – the resources a threat actor would gain from successful exploitation. Finally, two questions address the impact on safety and on the organization's mission. The leaves of the tree represent four possible outcomes: defer, scheduled, out-of-cycle, and immediate.
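To illustrate the general idea, here is a simplified, hypothetical sketch of how such a tree might be encoded. The decision points and outcome labels mirror those described above, but the branching logic is our own illustration, not the official SSVC tree.

```python
# A simplified, illustrative SSVC-style decision tree. The decision points
# (exploitation, exposure, automatable, mission impact) follow the framework's
# concepts, but this branching logic is a sketch, not the official SSVC tree.
from dataclasses import dataclass
from typing import Literal

Exploitation = Literal["none", "poc", "active"]
Exposure = Literal["small", "controlled", "open"]
Impact = Literal["low", "medium", "high"]
Priority = Literal["defer", "scheduled", "out-of-cycle", "immediate"]

@dataclass
class VulnContext:
    exploitation: Exploitation
    exposure: Exposure
    automatable: bool
    mission_impact: Impact

def prioritize(v: VulnContext) -> Priority:
    # Active exploitation against an exposed or high-impact system demands immediate action
    if v.exploitation == "active":
        if v.exposure == "open" or v.mission_impact == "high":
            return "immediate"
        return "out-of-cycle"
    # A public PoC plus automatable exploitation raises urgency
    if v.exploitation == "poc":
        if v.automatable and v.mission_impact != "low":
            return "out-of-cycle"
        return "scheduled"
    # No known exploitation: patch on the normal cadence, or defer low-impact issues
    return "defer" if v.mission_impact == "low" else "scheduled"

print(prioritize(VulnContext("active", "open", True, "high")))        # immediate
print(prioritize(VulnContext("poc", "controlled", True, "medium")))   # out-of-cycle
```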
The current SSVC model defines different roles – patch suppliers, coordinators triaging newly reported vulnerabilities, and those deciding whether to publish information about them – and the decision points and possible outcomes differ accordingly. For coordination triage, for example, the possible outcomes are to decline, track, or coordinate. Organizations can also adjust the labels and weightings to reflect their own sector and priorities.
After working through the decision tree, users can export the outcome in either JSON or PDF format. The output also includes a vector string, which will be familiar to anyone who has read our previous assessment of CVSS. Notably, this vector string incorporates a timestamp, enabling recalculation of SSVC outcomes that depend on time-sensitive contextual factors. The authors of the SSVC whitepaper recommend recalculating decisions that rely on the state of exploitation at least daily, as this can change rapidly; other factors, such as technical impact, should remain static.
As the name implies, SSVC endeavours to put stakeholders at the forefront of the process, producing qualitative decisions and recommended actions rather than numerical scores. One significant benefit of this approach is that it can be applied to vulnerabilities without a corresponding CVE identifier, as well as to misconfigurations. The framework is also flexible enough for stakeholders in different sectors and industries to adapt it to their own needs and circumstances. And once you've grasped the fundamental concepts, it is surprisingly straightforward to use.
However, no independent research has yet investigated the efficacy of SSVC; only a single pilot study, conducted by its developers, exists. The framework also arguably prioritizes simplicity over nuance in certain respects.
For example, while CVSS provides an Attack Complexity metric, SSVC has no comparable measure of the ease or reliability of exploiting a vulnerability; it considers only the binary questions of whether exploitation has taken place and whether a proof-of-concept exists.
To avoid overcomplicating the decision tree, the SSVC trees do not include "unknown" options by default; users must instead make a reasonable assumption based on past experience. In some cases this can significantly affect the final outcome, particularly for factors beyond an organization's control (such as whether a vulnerability is being actively exploited), which may lead some analysts to err on the side of caution in their assessments.
SSVC's decision not to use numerical scores might raise some eyebrows, but it has several advantages: the framework is highly customizable, fully open-source, and provides actionable recommendations in its final output. As with many of the tools and frameworks discussed here, a robust strategy often involves combining them; integrating EPSS and CVSS metrics (and the KEV Catalog, discussed below) into a bespoke SSVC decision tree can provide a reliable indication of which vulnerabilities to prioritize.
The Known Exploited Vulnerabilities (KEV) Catalog, maintained by the Cybersecurity and Infrastructure Security Agency (CISA), is a continually updated registry of CVEs that threat actors are known to have exploited. As of December 2024, the catalog contains 1,238 entries, each comprising details such as CVE-ID, vendor, product, a brief description, recommended action, due date (discussed below), and a notes section that often includes a link to a relevant vendor advisory.
Under CISA's requirements, US federal agencies must remediate vulnerabilities listed in the KEV Catalog (along with taking other required actions) within a specified timeframe: six months for CVE-IDs assigned prior to 2021, and two weeks for all others. CISA's reasoning behind creating the KEV Catalog mirrors arguments we made in our previous article: only a small percentage of vulnerabilities are ever exploited, and attackers appear to disregard severity ratings when developing and distributing exploits. Accordingly, CISA argues that known exploited vulnerabilities should be prioritized for remediation over the thousands of unexploited vulnerabilities that will never be leveraged in a realistic attack scenario.
The KEV Catalog is not updated on a regular schedule; instead, CISA adds a vulnerability within 24 hours of learning that it meets three criteria:
- A CVE-ID exists
- “There is reliable evidence that the vulnerability has been actively exploited in the wild”
- “There is a clear remediation action for the vulnerability, such as a vendor-provided update”
According to CISA, evidence of active exploitation – whether attempted or successful – comes from open-source research by independent groups and from “data directly from security providers, researchers, and partners…data through US authorities and international partners…and via third-party subscription services.” Notably, scanning activity or the mere existence of a proof-of-concept is not sufficient for a vulnerability to be included in the Catalog.
Although primarily intended to help US federal agencies prioritize remediation, the KEV Catalog is simple to use more widely: it provides an accessible, organized collection of actively exploited vulnerabilities in CSV or JSON formats, easily ingested and suitable for integration with a vulnerability management program to inform prioritization, as CISA recommends. Importantly, CISA emphasizes that organizations should not rely solely on the Catalog and must consider multiple sources of information.
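As a rough sketch of that kind of ingestion – the feed URL and field names reflect CISA's published JSON schema at the time of writing, and should be treated as assumptions to verify – a lookup might look like this:

```python
# Sketch: ingesting the KEV Catalog JSON feed and checking whether given CVEs appear.
# The URL and field names are assumptions based on CISA's published schema.
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev() -> dict:
    catalog = requests.get(KEV_URL, timeout=30).json()
    # Index entries by CVE ID for fast lookups
    return {entry["cveID"]: entry for entry in catalog["vulnerabilities"]}

def check(cve_ids: list) -> None:
    kev = load_kev()
    for cve in cve_ids:
        entry = kev.get(cve)
        if entry:
            print(f"{cve}: in KEV (added {entry['dateAdded']}, due {entry['dueDate']}) - prioritize")
        else:
            print(f"{cve}: not in KEV - consult other sources (EPSS, CVSS, internal context)")

if __name__ == "__main__":
    # Example IDs; the second is a placeholder, not a real CVE
    check(["CVE-2021-44228", "CVE-2099-00000"])
```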
Unlike EPSS, the KEV Catalog is binary: if a vulnerability appears on the list, it is known to have been exploited; there is no probability or nuance involved. However, the sheer size of the ever-growing list may itself come to require prioritization support, as it inevitably becomes increasingly unwieldy; an entry is only removed if a vendor update causes an “unforeseen issue with greater impact than the vulnerability itself”.
The Catalog also doesn't specify the extent of exploitation: was a bug exploited only a few times, or thousands of times? Nor does it provide data on the industries and regions most affected, which deprives stakeholders of context that could help them prioritize mitigation. Beyond flagging use by ransomware actors, it does not specify the type of threat actor exploiting the vulnerability, nor when exploitation was first observed. As with EPSS, further questions arise around the definition of a vulnerability and around transparency. On the former, a KEV Catalog entry must have a CVE identifier, which may make it less useful for some stakeholders; on the latter, the exploitation evidence is limited to what CISA and its partners can observe, and is not available for scrutiny or verification. A curated list of exploited vulnerabilities is still useful for many organizations, but supplementary information is needed to inform remediation decisions.
As you navigate this landscape of tools and frameworks, you may start to see how combining a few of these approaches can lead to a richer understanding of threats and, ultimately, to better prioritization decisions. CVSS primarily assesses vulnerability severity based on inherent characteristics. The KEV Catalog identifies exploited vulnerabilities, providing insight into attackers' behavior. EPSS estimates the likelihood of exploitation over time. SSVC facilitates prioritization decisions by integrating such data within stakeholder-specific decision trees, considering factors like exploitation status and impact.
To varying degrees, CVSS, EPSS, SSVC, and the KEV Catalog are the most prominent tools, but it's also worth exploring some lesser-known tools and frameworks. To be clear, we're focusing only on schemes relating to vulnerabilities and prioritization, rather than weaknesses.
Vendor-specific schemes
Several commercial organizations offer paid vulnerability rating tools and services to facilitate prioritization, including some that integrate EPSS-like predictive data from proprietary models, or that combine EPSS scores with closed-source information. Others use the CVSS framework but combine those scores with proprietary scoring schemes, drawing on threat intelligence, vulnerability intelligence, and detailed information about the customer's assets and infrastructure. While such metrics may paint a more comprehensive picture of threats and enable better prioritization than CVSS or EPSS scores alone, they are not publicly available and so cannot be easily scrutinized or evaluated.
Some product vendors have also developed their own scoring systems and publish the results. Microsoft has two schemes for assessing vulnerabilities in its own products: a severity rating system, which assigns a rating indicating the severity of a flaw (Microsoft notes that its rankings are based on “the worst theoretical consequence if that vulnerability were to be exploited”); and an exploitability assessment, which aims to evaluate the likelihood of a vulnerability being exploited. The latter is based on Microsoft's assessment of the vulnerability, its potential impact, and any prior exploitation, but the underlying data is not publicly available for scrutiny.
Red Hat's system features four possible ratings, alongside a calculated CVSS Base score. As with Microsoft's schemes, it applies only to vulnerabilities in the vendor's own products, and it is not entirely clear how the ratings are calculated.
CVE Trends
Now offline owing to X's restrictive API usage policies (more on that below), CVE Trends was a crowdsourced intelligence dashboard that aggregated data scraped from X, Reddit, GitHub, and the NVD, displaying the ten most-discussed vulnerabilities at any given time.
The dashboard displayed key vulnerability information, including CVSS and EPSS scores and relevant CVE data, alongside trending posts from X and Reddit and real-time indicators of recent discussion and activity.
CVE Trends provided useful insight into the current "flavor of the month" CVEs within the security community – and could serve as a source of breaking news on newly discovered vulnerabilities – but its utility for prioritization largely ended there, beyond flagging new, high-impact bugs. The tool only ever displayed ten vulnerabilities at a time, some of which, like Log4j, were relatively old but still trending owing to their widespread recognition and notoriety.
CVE Trends is currently dormant, having ceased operations in mid-2023. As of this writing, visitors to the site are greeted with the following message, which its creator also posted as a tweet:
Whether X will relax its API usage restrictions remains to be seen; unless it does, or the creator of CVE Trends finds an alternative data source, the platform seems unlikely to return.
Since the demise of Bell's site, a company called Intruder has developed a beta version of a similar tool, which features a 0-100 "hype score" based on social media activity.
SOCRadar offers an analogous service, CVE Radar, which consolidates data on the number of tweets, news stories, and vulnerability-related repositories in a single dashboard; to its credit, it acknowledges Simon Bell's CVE Trends work on its homepage, as Intruder does on its page. Both CVE Radar and Intruder's tool incorporate the text of relevant posts, providing a concise summary of the social media conversation around a specific vulnerability. Whether the developers of either tool intend to incorporate other social media platforms is unclear.
CVEMap
Introduced in 2024, ProjectDiscovery's relatively new command-line tool collates a range of information, including CVSS scores, EPSS scores, vulnerability age, KEV Catalog inclusion, proof-of-concept availability, and more. CVEMap does not generate any new data or scores of its own; it only aggregates existing information. Nevertheless, the fact that it consolidates diverse vulnerability data into a single interface – with filtering by product, vendor, and so on – makes it a valuable tool for defenders seeking to inform their prioritization decisions with multiple data sources.
Bug Alert
Bug Alert is a service that aims to notify responders of critical, high-severity vulnerabilities as soon as they emerge – rather than waiting for scheduled security bulletins or CVE publication – so that they can act quickly and minimize risk. It's envisioned as a community initiative, relying on researchers contributing reports of newly discovered vulnerabilities in the form of pull requests to its GitHub repository. The status of Bug Alert's creator and of its maintenance is unclear; there has been no activity on the repository since a final update in October 2023.
While CVE Trends and Bug Alert overlap somewhat in their focus on emerging vulnerabilities, they serve distinct purposes, with Bug Alert primarily intended to raise awareness rather than to support prioritization of fixes.
vPrioritizer
vPrioritizer is an open-source framework that enables users to assess and visualize risk in a context-specific manner, at the level of individual assets and vulnerabilities, thereby integrating asset management with prioritization. It leverages CVSS scores in conjunction with community analytics and data from vulnerability scanners. However, despite being mentioned in the 2019 SSVC whitepaper and elsewhere, it's unclear whether the developer of vPrioritizer still supports the project; there have been no updates to its GitHub repository since October 2020.
Vulntology
Vulntology – the name combines "vulnerability" and "ontology" – is an effort led by the National Institute of Standards and Technology (NIST) to characterize vulnerabilities in terms of how they are exploited, the potential impact of exploitation, and the measures that can prevent or mitigate it. Its stated goals include standardizing the description of vulnerabilities (as seen in vendor advisories and security bulletins), increasing the level of detail in those descriptions, and facilitating the sharing of vulnerability information across language barriers.
Vulntology is not a scoring framework or a decision tree, but a methodology for describing vulnerabilities. It's a small step towards a common language which, if widely adopted, could be of real value in vulnerability management: a uniform approach to characterizing vulnerabilities would make it far easier to compare vendor advisories, threat intelligence reports, and other disparate sources. We mention it here because, while it's not a prioritization scheme as such, it could have significant long-term implications for vulnerability prioritization, addressing a genuine problem in the vulnerability management space. However, the last commit to the project's GitHub repository appears to have been in the spring of 2023.
Criminal market data
Finally, data from criminal marketplaces could also be used to inform prioritization decisions. In 2014, researchers examined whether CVSS effectively predicts the likelihood of exploitation, and found that CVSS scores are a poor predictor of exploitation in the wild. While their research also looked at whether remediating on the basis of exploit-market data yields a reduction in risk, it would be interesting to see whether their findings still hold; exploit markets have grown significantly since 2014, and there is now a substantial underground economy dedicated to selling and advertising exploits.
Widening the scope beyond simply noting which vulnerabilities appear on criminal markets, metrics such as pricing and buyer interest could also serve as useful indicators to inform prioritization.
The primary challenge is access: many of these marketplaces are restricted, reachable only by invitation, payment, or exclusive membership. The underground economy has also arguably become less centralized than it once was. Prominent forums may serve as an initial platform for advertising goods, but crucial details, including prices, are often only available to prospective buyers via private messages, and the actual negotiations and sales usually take place through off-platform channels such as Jabber, Tox, and Telegram. Further research is needed to determine whether this information could be a viable source of insight for prioritization decisions.
Having examined CVSS, EPSS, SSVC, and the KEV Catalog in detail – and other tools and frameworks more briefly – you will not be surprised to learn that we did not uncover a single solution, or combination of solutions, that definitively resolves every prioritization challenge. That said, drawing on multiple frameworks is nearly always preferable to relying on a single one. Incorporating additional data points provides a more informed perspective, though it may require some technical effort upfront; many of the tools and frameworks we've discussed produce outputs that can easily be consumed in an automated manner, and tools like CVEMap have already done significant legwork to facilitate integration.
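As a hedged illustration of what such a combination might look like in practice – the thresholds and outcome labels below are arbitrary placeholders rather than recommendations – a simple triage rule could draw on several of these data sources at once:

```python
# Illustrative only: a simple triage rule combining KEV membership, EPSS probability,
# CVSS severity, and internal context. Thresholds are arbitrary placeholders.
from dataclasses import dataclass

@dataclass
class VulnRecord:
    cve_id: str
    cvss_base: float      # e.g. from NVD
    epss: float           # 30-day exploitation probability, 0.0-1.0
    in_kev: bool          # present in CISA's KEV Catalog?
    asset_critical: bool  # organization-specific context

def triage(v: VulnRecord) -> str:
    if v.in_kev:
        return "immediate"            # known exploited: act now
    if v.epss >= 0.5 and v.asset_critical:
        return "out-of-cycle"         # likely exploitation on a critical asset
    if v.cvss_base >= 9.0 or v.epss >= 0.5:
        return "scheduled"            # severe or likely, but not yet urgent
    return "defer"

# Example values for illustration only
print(triage(VulnRecord("CVE-2021-44228", 10.0, 0.97, True, True)))    # immediate
print(triage(VulnRecord("CVE-2099-00000", 6.3, 0.02, False, False)))   # defer
```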
Combining outputs can, in fact, be essential – not least because prioritization extends beyond vulnerabilities and exploits. They're a significant part of the problem, of course, but the crucial consideration is often not the vulnerability itself so much as how it may affect everything else.
Different organizations approach prioritization differently, shaped by their characteristics, operational rhythms, financial profiles, and appetite for risk.
If single, one-size-fits-all scores and recommendations don't make much sense for the frameworks we've assessed, they make even less sense for an individual organization's remediation priorities. Context is everything. Whatever tools or frameworks you use, put your organization at the center of the equation. You may even need to tailor your approach at a more granular level, depending on the size and structure of your organization, prioritizing and contextualizing data at a departmental or team-specific level. In any case, it's essential to tailor outputs to your specific context, recognizing that even well-established frameworks produce data that requires customization.
Some schemes, such as CVSS and SSVC, offer built-in options for customizing outputs. EPSS and the KEV Catalog are not designed for customization per se, but their outputs can still be enriched with user-added context, potentially by integrating them with other tools and frameworks to gain a more comprehensive picture.
Prioritization also extends beyond the tools discussed here. We've focused on this collection because they're an interesting aspect of vulnerability management, but the data driving prioritization decisions should ideally come from a diverse range of sources: threat intelligence, identified weaknesses, organizational security posture, implemented controls, risk assessments, the results of penetration tests and security audits, and so on.
Finally, we reiterate a key point from our first article: while we've highlighted some drawbacks of these tools and frameworks, we in no way intend to disparage the people behind them or their work, and we aim to be objective and fair in our assessments. Developing such frameworks takes meticulous effort and considerable thought, and their value lies in being used effectively – deployed judiciously, at the right moments. We hope this overview helps you do exactly that.