Security Response Management: Risk, Cost, and Best Practices in an Imperfect World

By David Reyna

Keeping our products secure is a requirement for survival, demanding vigilance in finding, applying, and distributing working patches to our customers in the timeliest manner possible. While there is heightened awareness about device vulnerabilities, what is often missing is awareness about the process of managing the security response process itself.

Community security vulnerability data is available, but the information varies greatly in quality, completeness, and applicability. Managing security defects can be very inefficient, resulting in high costs and delays for both the organization and its customers.

In this blog, I will discuss:

  • Security response management resources and challenges
  • Necessary costs, unnecessary costs
  • Advice on best practices
  • The new SRTool (Security Response Tool), Wind River’s open source answer to address costs and best practices

Security response management resources and challenges

Security response management is a necessary part of today’s product development processes. There are a number of resources available; however, there are challenges that must be overcome.

Upstream CVE Sources

CVEs (Common Vulnerabilities and Exposures) are identifiers for community-tracked security vulnerabilities. The list of CVEs is managed by the MITRE Corporation. CVEs are passed to the U.S. National Vulnerability Database (NVD), which is managed by the National Institute of Standards and Technology (NIST). These efforts are sponsored by US-CERT in the Office of Cybersecurity and Communications at the U.S. Department of Homeland Security. These are the primary community CVE databases, and both are available to the public and free to use.

In practice, MITRE publishes the assigned CVE numbers but includes only basic data. NIST gathers and publishes the detailed CVE information, but it only includes public CVEs. In addition, status updates in the NVD may noticeably lag behind MITRE.

Many individual vendors and maintainers track and share CVEs relevant to their product, but the access and coverage varies greatly between sites, making it difficult to implement programmatic access tools. In contrast, CVE aggregators attempt to provide a common interface to CVE sources, but the aggregator content also varies in coverage, accessibility, and quality.
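To illustrate why programmatic access across sources is hard, here is a minimal sketch of normalizing differently shaped upstream records into one internal form. The record shapes and field names below are illustrative stand-ins, not the actual schemas of any particular feed:

```python
# Hypothetical sketch: map two differently shaped CVE feeds (an NVD-style
# record and a simpler vendor-advisory record) into one common dictionary
# form so downstream tools only deal with a single shape.

def normalize_nvd(record):
    """Map an NVD-style JSON record to a common internal shape."""
    return {
        "cve_id": record["cve"]["id"],
        "description": record["cve"]["descriptions"][0]["value"],
        "modified": record["cve"].get("lastModified", ""),
    }

def normalize_vendor(record):
    """Map a simpler vendor-advisory record to the same shape."""
    return {
        "cve_id": record["name"],
        "description": record.get("summary", ""),
        "modified": record.get("updated", ""),
    }

nvd_sample = {"cve": {"id": "CVE-2018-0001",
                      "descriptions": [{"lang": "en", "value": "Example flaw."}],
                      "lastModified": "2018-11-01T00:00:00"}}
vendor_sample = {"name": "CVE-2018-0001", "summary": "Example flaw.",
                 "updated": "2018-11-02T00:00:00"}

records = [normalize_nvd(nvd_sample), normalize_vendor(vendor_sample)]
```

In practice each new source requires another adapter like these, which is exactly the maintenance burden that makes aggregators attractive despite their own coverage gaps.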

Finally, there are many mailing lists, websites, and forums, both public and private. These are a necessary part of being proactive in discovering and tracking issues, but they require constant tracking and interaction to use effectively.

Upstream CVE Quality and Completeness

While CVEs aim to be as accurate as possible, there are many gaps in this goal:

  • CVEs may only have a brief or incomplete description
  • CVEs may be missing the affected product list (CPE), have gaps, have errors, or have unexpected deviations in the version numbering
  • CVE content may be misleading, mentioning one package when it actually affects a different package in the developer’s system
  • CVEs may have few, inaccurate, or missing content links (discussion, reproducers, patches)
  • CVE status changes continually as new information is discovered and shared
  • There may be delays in content updates between the maintainers, MITRE, and NIST

The most recently created CVEs (within the last few months) are particularly prone to the above issues, but unfortunately these are often all that organizations have to work with for their pending releases.

The quality and completeness of each individual CVE certainly improves over time, but the price of capturing and re-evaluating each improved version of each CVE is costly.
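One way to contain that re-evaluation cost is to only re-queue CVEs whose upstream record actually changed since the last human review. A minimal sketch, assuming each record carries a last-modified timestamp and we keep our own review log (both structures invented for this example):

```python
def needs_reevaluation(cve, last_reviewed):
    """Return True if the upstream record changed after our last review.

    last_reviewed maps CVE id -> ISO timestamp of our last evaluation;
    ISO-8601 strings compare correctly as plain strings."""
    seen = last_reviewed.get(cve["cve_id"])
    return seen is None or cve["modified"] > seen

last_reviewed = {"CVE-2018-0001": "2018-10-01",
                 "CVE-2018-0003": "2018-12-01"}
incoming = [
    {"cve_id": "CVE-2018-0001", "modified": "2018-11-15"},  # changed upstream
    {"cve_id": "CVE-2018-0002", "modified": "2018-09-01"},  # never reviewed
    {"cve_id": "CVE-2018-0003", "modified": "2018-11-01"},  # unchanged since review
]
queue = [c for c in incoming if needs_reevaluation(c, last_reviewed)]
```

Only the first two records land in the work queue; the third has already been reviewed against a newer timestamp and can be skipped without risk.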

As for tools designed to process and check CVE vulnerabilities, they must also take into consideration all of the above limitations. Tools must:

  • be flexible in processing the vulnerability information
  • differentiate between strong and weak data
  • set appropriate expectations of what the tool can accomplish given the imperfect data stream
  • include human input appropriately in the evaluation process
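The second requirement above, differentiating strong from weak data, can be as simple as flagging the fields that commonly arrive incomplete so that a human knows to look closer. The field names and thresholds here are illustrative only:

```python
def data_quality_flags(cve):
    """Flag weak fields in a CVE record so a human reviews them.

    The thresholds and field names are illustrative, not part of any
    standard: a very short description, a missing CPE product list, or
    an absence of reference links all suggest an immature record."""
    flags = []
    if len(cve.get("description", "")) < 40:
        flags.append("short-description")
    if not cve.get("cpe_list"):
        flags.append("missing-cpe")
    if not cve.get("references"):
        flags.append("no-links")
    return flags

strong = {"description": "A heap buffer overflow in the example parser "
                         "allows remote code execution via crafted input.",
          "cpe_list": ["cpe:/a:example:parser"],
          "references": ["https://example.com/advisory"]}
weak = {"description": "TBD"}
```

A record with no flags can flow through automated matching with some confidence; a heavily flagged record goes straight to the human triage queue.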

Upstream CVE Volume

There are now more than a thousand new CVEs each month, and the volume will only increase.

Every new CVE must be evaluated. The cost of evaluation continues to rise with the sheer number of CVEs plus the increased evaluation time required to compensate for the quality limitations of the upstream information.

Despite the high cost of evaluation, evaluating CVEs is essential. To skimp on evaluation time and cost – and risk missing or incorrectly categorizing a vulnerability – would be even more costly, affecting the product integrity, release timing, and customer trust.

Security Response Management Process Challenges

The internal management of the security response has its own needs and issues.

Data accessibility: the relevant documents are often not aggregated in accessible ways. Information may be spread across different groups, stored in multiple locations, exist only in email, and/or be only partially captured in the organization’s defect and agile tools. This is especially true if the system was built organically, perhaps based on processes designed for earlier CVE volumes and team staffing.

Data aggregation: there is also the problem of effectively looking up and sharing the state of CVEs with regard to defects, products, releases, and programs in general. Without aggregation, it can take large amounts of time to continually regather the information for customers, management, public disclosures, and legal compliance.
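The aggregation itself need not be exotic. A small relational schema that links CVEs to defects and releases lets one query answer the question that otherwise requires regathering by hand. This is a minimal illustrative sketch with invented table and column names, not any tool's actual schema:

```python
import sqlite3

# Minimal illustrative schema: one place to look up a CVE's state across
# defects and releases. Table and column names are invented for this sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cve    (id TEXT PRIMARY KEY, status TEXT);
CREATE TABLE defect (id TEXT PRIMARY KEY, cve_id TEXT, release TEXT,
                     state TEXT, FOREIGN KEY (cve_id) REFERENCES cve (id));
""")
con.execute("INSERT INTO cve VALUES ('CVE-2018-0001', 'Vulnerable')")
con.execute("INSERT INTO defect VALUES ('DEF-100', 'CVE-2018-0001', '9.0', 'Fixed')")
con.execute("INSERT INTO defect VALUES ('DEF-101', 'CVE-2018-0001', '8.0', 'Open')")

# One query answers "what is the state of this CVE across our releases?"
rows = con.execute("""
    SELECT d.release, d.state FROM defect d
    JOIN cve c ON c.id = d.cve_id
    WHERE c.id = 'CVE-2018-0001' ORDER BY d.release
""").fetchall()
```

The same join, run in the other direction, answers "which CVEs are still open against release 8.0?" for a customer or compliance report.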

Vendors: some companies offload this process to external vendors. These vendors can provide the missing expertise and resources, but the pass-through can reduce customer response times, and the external support can be quite expensive.

Scanning Tools: there are many tools (both commercial and open source) that can provide CVE analysis and support. Some tools focus on running systems (e.g. Nessus), and some do source or build-time analysis (e.g. Black Duck, Yocto Project “cve-check”). All such tools can be very valuable in catching product issues and providing a protective backstop. However, all have built-in limitations, in that they rely on the available CVE information (plus or minus secret sauce) and are thus vulnerable to all the above issues, reducing the quality of the results or producing ambiguous results that require continued evaluation. Also, none of these systems address the larger process of security management across an organization.

Defect systems: an organization’s defect tracking system is often not the best place to manage the security response. Vulnerabilities are generally across multiple products and may have different impacts and solutions for different releases. In addition, with such systems it can be difficult or impossible to correctly store embargoed information, especially given that each reserved CVE might require a different access list of permitted engineers.

Security Management Costs: Necessary versus Unnecessary

Some tasks are simply the cost of doing business; while they can be complicated, they are nevertheless generally well understood and executed.

Examples of necessary costs:

  • Tracking upstream CVEs
  • Creating and fixing defects
  • Providing updates to customers, management
  • Providing patches to customers

Many other potential costs are not well understood or managed. While companies may have simply absorbed these lower costs in the past, today’s higher CVE volume and complexity makes them too expensive to bear.

Examples of unnecessary costs:

  • Repeated manual polling of upstream data, for the initial report and all updates
  • Repeated manual polling of defect status
  • Manual analysis of each CVE for vulnerability status, across products
  • Manual re-evaluation of each updated CVE for vulnerability status
  • Manual tracking and sharing of patches, reports, documents, and so forth
  • Manual regathering of status information for customers and management
  • Manual tracking of embargoed data, need-to-know, and “who knew what when” for compliance
  • Manual repackaging of data for the organization’s public databases

Best practices and solutions

Here is a list of best practices that can help address the above issues.

  1. Automate as much of the process as possible
    • CVE data gathering
    • CVE update polling, with filtered change notifications
    • Defect update polling, with filtered change notifications
    • Defect creation, population, and updates
    • Report tools for management and customers
    • History and audit tracking
  2. Use multiple sources
    • MITRE/NIST: this is the place to start; however, it is not and never will be complete
    • Alternate sources can sometimes provide clues to missing or incomplete information, and may often be more up to date
  3. Aggregate the data
    • Maintain a central database dedicated to security management
    • Maintain files, patches, links, etc. in a central location, and provide links in the central database
  4. Be flexible with the data
    • Treat the CVE data as advisory; do not assume it is ever definitive
    • Do not fixate on small details (for example, version numbering) during analysis, to keep small unexpected differences from leading to big hidden misses
    • Make it easy for humans to find and review the ambiguities
    • Allow engineers to add notes and even override CVE values, so that they can add intelligence to the data and correct for upstream errors
  5. Provide tools to help with the high cost of CVE inflow triage
    • Provide an easy interface to walk through and evaluate incoming CVEs
    • Provide heuristics and helper tools to manage the gaps in CVE content
    • Analyze all available data to help match CVEs to products/packages
    • Make it easy to triage CVEs in groups, for example removing obvious vulnerabilities and non-vulnerabilities by simple keyword or CPE hits, so that time can be spent on the CVEs that truly need investigation

Ultimately, spend people’s time on the actual problems, not the process.
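The bulk-triage idea above can be sketched very simply: keyword hits pre-sort incoming CVEs so reviewers only spend time on the ambiguous ones. The keyword sets below are invented stand-ins for a site's accumulated knowledge of what it does and does not ship:

```python
# Illustrative bulk-triage pass. The keyword lists stand in for a site's
# accumulated knowledge: packages actually shipped ("for") versus products
# the organization never ships ("against").
FOR_KEYWORDS = {"linux", "openssl", "busybox"}      # packages we ship
AGAINST_KEYWORDS = {"wordpress", "ios", "acrobat"}  # things we never ship

def triage(cve):
    """Pre-sort one CVE by simple keyword hits in its description."""
    words = set(cve["description"].lower().split())
    if words & FOR_KEYWORDS:
        return "investigate"        # likely relevant, route to an engineer
    if words & AGAINST_KEYWORDS:
        return "not-vulnerable"     # obviously out of scope, close in bulk
    return "needs-review"           # ambiguous, needs a human look
```

A real implementation would also match against CPE entries and handle multi-word phrases, but even this crude pass removes the obvious cases from the human queue.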

Introducing the SRTool (Security Response Tool)

To help mitigate the unnecessary costs and to achieve best practices, Wind River has developed a new tool called the SRTool and has shared it with the Yocto Project and the general Open Source community.

Here are some of the features and how they implement the best practices:

  1. A central SQL-based database aggregates the data
  2. Backend scripts that gather CVE bulk and incremental data from the many alternate sources, triggered by cron jobs
  3. A browser-based GUI front end allows general developers and management to easily traverse the data, add data, and generate views, reports, and exports (if authorized)
  4. A general data relationship model manages the information. The model includes records for CVEs, “Vulnerabilities” to track the high level issues across products, “Investigations” to track vulnerabilities against specific releases, and “Defects” to map to the organization’s defect system. Each record supports specific comments, attachments, an optional access list, and change stamps for audits
  5. The ability to add content and keywords to records to capture local knowledge, plus the ability to cleanly override CVE values when the upstream is not correct
  6. A modular design so that it is easy to extend to (a) new CVE data sources, (b) local defect and/or agile systems, and (c) other site processes
  7. An enhanced work page for triaging and managing incoming CVEs

  8. A wealth of knowledge derived from years of matching CVEs to packages, captured as a table of “for” and “against” keywords that can help guide the triage engineer in deciding whether an incoming CVE is a vulnerability, a non-vulnerability, or one requiring investigation

This last feature is particularly valuable: it explicitly addresses the problem of incomplete CVEs by incorporating heuristics and acquired knowledge to provide the missing guidance.
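The four-level record model described in the feature list (CVEs, Vulnerabilities, Investigations, Defects) can be sketched as nested records. The field names here are illustrative, not the SRTool's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of the four-level record model: a CVE spawns Vulnerabilities
# (one per affected product), each examined as Investigations (one per
# release), which in turn map to Defects in the organization's tracker.
# All names are invented for this illustration.

@dataclass
class Defect:                 # maps to the organization's defect system
    defect_id: str
    state: str = "Open"

@dataclass
class Investigation:          # a vulnerability examined against one release
    release: str
    defects: List[Defect] = field(default_factory=list)

@dataclass
class Vulnerability:          # the high-level issue tracked per product
    product: str
    investigations: List[Investigation] = field(default_factory=list)

@dataclass
class CVE:                    # the upstream record, root of the chain
    cve_id: str
    vulnerabilities: List[Vulnerability] = field(default_factory=list)

# Build one chain: a CVE affecting one product, one release, one defect.
cve = CVE("CVE-2018-0001", [
    Vulnerability("product-a", [
        Investigation("9.0", [Defect("DEF-100", "Fixed")]),
    ]),
])
```

The value of separating the levels is that the same CVE can be "Fixed" in one release and "Open" in another without any record contradicting itself, and per-record access lists and comments can attach at exactly the level where they apply.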

There is quite a wealth of security vulnerability information available. The issue becomes the quality, quantity, timeliness, and cost effective management of that information. With knowledge, awareness, adaptability, and automation, we can manage the increasing deluge of information.

Wind River’s new SRTool is a valuable tool to help manage CVEs and decrease costs in the process. We invite the Open Source community to use this new tool and join us in making it even more helpful.

Use these links to learn more about this topic and about using the SRTool for your own projects:

