Tuesday, October 27, 2020

Keeping track of your applications' security posture

There are so many different indicators that you can use to track the security posture of your applications. The challenge is to determine what to track and what good looks like. Make sure that the indicators that you track offer value and aren’t just creating noise. You’ll also need to differentiate between increased risk and improved effectiveness of security controls.

The intention is to comprehensively implement controls and ensure they are efficient and effective at meeting the desired outcome. Do your indicators help you do this?

In this article I’ll be discussing application security indicators and some wider considerations around how to use them to monitor the ongoing effectiveness of your security initiatives and programs.

General Considerations

Some basic considerations:

  • Check which CVSS version your tools utilise; where possible, use a consistent version;
  • Check comparable findings across tools to ensure vulnerabilities have consistent severity ratings;
  • Question the validity of the findings – not all findings will be valid;
  • Tools require refinement to reduce false positive and false negative results;
  • Watch out for exceptions being created around false results; check that these are, and remain, valid.

Quality of findings

It’s important to note that the quality of findings will vary across controls. Penetration testing involves an individual proving the existence of a vulnerability. Automated tools, especially those that run unauthenticated, have a higher probability of identifying false results.

Of the total findings identified, check how many are:

  • True positive – correctly identify a genuine vulnerability;
  • False positive – incorrectly identify a vulnerability that does not exist;
  • False negative – fail to identify a vulnerability that does exist.

You’ll need to go through a process of refinement to reduce the number of false findings. This may involve recording respective false findings as exceptions within your tools.
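As a rough sketch of how you might track triage outcomes (the tool names, field names and figures below are purely hypothetical), the false positive rate per tool can be derived from your triaged findings:

```python
from collections import Counter

# Hypothetical triaged findings: each record captures the source tool
# and the outcome of manual triage.
findings = [
    {"tool": "sast", "triage": "true_positive"},
    {"tool": "sast", "triage": "false_positive"},
    {"tool": "dast", "triage": "true_positive"},
]

counts = Counter((f["tool"], f["triage"]) for f in findings)
for tool in sorted({f["tool"] for f in findings}):
    tp = counts[(tool, "true_positive")]
    fp = counts[(tool, "false_positive")]
    print(f"{tool}: false positive rate {fp / (tp + fp):.0%}")

# Note: false negatives can't be derived from a tool's own output;
# they surface via other controls such as penetration tests.
```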

Potential Indicators

There are many potential indicators that you can track; the following are some to consider. All of them are impacted by the quality of findings and the consistency of ratings, so refine the tools used to reduce false results and ensure findings are rated consistently.

Control Implementation Indicators

According to NIST SP 800-55, implementation measures demonstrate “progress in implementing programs, specific security controls, and associated policies and procedures.”

Number of automated tests and tools

Determine what proportion of your applications are in scope for your security services. From that scope, check how many are actually covered by your services. It doesn’t matter how effective your controls are if the scope of implementation is limited.
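As a minimal sketch (the inventory and service names are hypothetical), coverage per service can be expressed as a simple proportion of the in-scope estate:

```python
# Hypothetical inventory: applications in scope for security services
# versus the applications each service actually covers today.
in_scope = {"app-a", "app-b", "app-c", "app-d"}
covered_by = {
    "sast": {"app-a", "app-b"},
    "dast": {"app-a", "app-b", "app-c"},
}

for service, covered in covered_by.items():
    pct = len(covered & in_scope) / len(in_scope)
    print(f"{service}: {pct:.0%} of in-scope applications covered")
```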

Assessment Coverage

Check what proportion of the application is covered by the assessment. Seek to maximise the coverage of your controls.

Assessment Frequency

Check how frequently security assessments are being performed against the in-scope applications. Does this frequency adhere to your own or industry-defined standards? A simple adherence check is sketched after the list below.

There is a balance to be found according to your organisation’s risk profile and the resources available to you. Testing:

  • Infrequently will leave vulnerabilities undetected on your applications for longer durations;
  • Frequently will provide you with a more current snapshot of your security posture but is more resource intensive to operate.
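Here is the simple adherence check mentioned above (the policy intervals and dates are hypothetical): flag applications whose last assessment falls outside your defined frequency.

```python
from datetime import date

# Hypothetical policy: maximum days allowed between assessments,
# keyed by application criticality.
max_interval_days = {"critical": 90, "standard": 365}

# Last verified assessment date per application.
last_assessed = {
    "app-a": ("critical", date(2020, 6, 1)),
    "app-b": ("standard", date(2019, 9, 15)),
}

today = date(2020, 10, 27)
for app, (criticality, assessed) in last_assessed.items():
    overdue = (today - assessed).days - max_interval_days[criticality]
    if overdue > 0:
        print(f"{app}: assessment overdue by {overdue} days")
```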

Control Effectiveness / Efficiency indicators

According to NIST SP 800-55, effectiveness / efficiency measures “monitor if program-level processes and system level security controls are implemented correctly, operating as intended, and meeting the desired outcome”. The following are some potential indicators to consider.

Number of application vulnerabilities

At a very basic level you can track the number of vulnerabilities across one or more applications. The number is typically tracked along with the respective severity ratings:

  • Critical;
  • High;
  • Medium;
  • Low.

Ratings reflect the vulnerability’s exploitability and impact. The vulnerability score is typically derived using the Common Vulnerability Scoring System (CVSS). It is important to note that a combination of vulnerabilities can represent a higher severity to the application than the individual ratings considered in isolation. For instance, two medium findings combined may constitute a critical vulnerability.
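If your tools expose raw CVSS scores rather than ratings, a small helper (sketched below with hypothetical scores) can bucket findings into the severity bands defined by CVSS v3.x:

```python
from collections import Counter

def severity(cvss_score: float) -> str:
    # Qualitative rating bands from the CVSS v3.x specification.
    if cvss_score >= 9.0:
        return "critical"
    if cvss_score >= 7.0:
        return "high"
    if cvss_score >= 4.0:
        return "medium"
    return "low"

# Hypothetical CVSS base scores for one application's findings.
scores = [9.8, 7.5, 5.3, 5.3, 3.1]
print(Counter(severity(s) for s in scores))
```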

Type of vulnerability

Tracking the type of vulnerability helps to apply context to the vulnerabilities identified across applications. Example types include:

  • Cross Site Scripting (XSS);
  • SQL Injection;
  • Cross Site Request Forgery (CSRF);
  • Insecure Cryptographic Storage.

For a more comprehensive list of web application vulnerabilities see the OWASP Top 10.

Look out for consistent / repeat findings across one or more applications. These can be an indicator of weak standards or poor implementation of standard requirements. Seek to understand the root cause rather than trying to address findings on a case-by-case basis.
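A minimal sketch of spotting repeat findings (the application names and types are hypothetical) is to group findings by type and flag any type that appears across more than one application:

```python
from collections import defaultdict

# Hypothetical findings as (application, vulnerability type) pairs.
findings = [
    ("app-a", "Cross Site Scripting"),
    ("app-b", "Cross Site Scripting"),
    ("app-c", "Cross Site Scripting"),
    ("app-a", "SQL Injection"),
]

apps_by_type = defaultdict(set)
for app, vuln_type in findings:
    apps_by_type[vuln_type].add(app)

# A type recurring across applications may point at a weak standard
# rather than a one-off implementation mistake.
for vuln_type, apps in apps_by_type.items():
    if len(apps) > 1:
        print(f"{vuln_type}: recurs across {len(apps)} applications")
```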

Source of discovery

It is common to have multiple controls that are used to identify application vulnerabilities. Consider correlating the number, type and quality of findings with the source tool. This helps to determine which tools and processes are most effective and offer the greatest return on investment (ROI).

Average Time to Fix or Defect Remediation Window (DRW)

From the initial identification of a vulnerability, track how long it takes in days until it is verified closed. Companies will typically define time-to-fix requirements in accordance with the finding severity rating.

Make sure you:

  • Refer to the original identification date, as findings often come from multiple sources;
  • Verify that findings have been closed.

This indicator will help determine if vulnerabilities are being addressed within the timescales set out in your standards. Repeated failure to meet the agreed timescales can be a good indicator of increased risk exposure.
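As a minimal sketch (the SLA values and dates are hypothetical), the average time to fix per severity can be compared against your defined timescales:

```python
from datetime import date
from statistics import mean

# Hypothetical time-to-fix standards in days, by severity.
sla_days = {"critical": 7, "high": 30, "medium": 90, "low": 180}

# Verified-closed findings: (severity, identified, verified closed).
closed = [
    ("critical", date(2020, 9, 1), date(2020, 9, 5)),
    ("high", date(2020, 8, 1), date(2020, 9, 20)),
    ("high", date(2020, 9, 10), date(2020, 10, 1)),
]

for sev, limit in sla_days.items():
    days = [(fixed - found).days for s, found, fixed in closed if s == sev]
    if days:
        avg = mean(days)
        status = "within standard" if avg <= limit else "exceeds standard"
        print(f"{sev}: average {avg:.0f} days to fix ({status})")
```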

Finding / Flaw Creation Rate

Track the rate of new findings over a set period. It’s worth noting that an increase in findings may reflect an improvement in your ability to identify them rather than a drop in development standards.

Finding / Flaw Remediation Rate

Track the rate of remediated findings over a set period. Make sure you verify that findings are closed. It’s possible that an applied fix isn’t sufficient to address the finding.

Finding / Flaw Growth Rate

If the finding creation rate exceeds the finding remediation rate, this is a key indicator that the number of open vulnerabilities is increasing. This is a good way to identify increasing risk exposure.

Flaw Growth Rate = Flaw Creation Rate – Flaw Remediation Rate

Factor in the severity rating of findings. An increased growth rate driven by low-severity findings alone may not suggest an increased risk exposure.
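A minimal worked example of the formula above (the monthly counts are hypothetical):

```python
# Hypothetical counts of findings opened and verified closed per month.
created = {"2020-08": 12, "2020-09": 15, "2020-10": 9}
remediated = {"2020-08": 10, "2020-09": 11, "2020-10": 14}

# Flaw Growth Rate = Flaw Creation Rate - Flaw Remediation Rate
for month in created:
    growth = created[month] - remediated[month]
    print(f"{month}: growth {growth:+d}")
```

A positive number means the backlog is growing; a sustained negative number means remediation is outpacing discovery.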

Density

Not all applications are created equal. Larger applications have a greater attack surface that can be exploited, and it’s reasonable to expect them to contain a greater number of findings.

Consider tracking the number and severity of findings according to a defined density measure, such as lines of code or the number of pages or screens. This will be a more effective indicator of the security standard of an application.

Rate of findings per line of code = Number of Findings / Lines of Code
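A minimal worked example of the density calculation (the sizes and counts are hypothetical), normalised to findings per thousand lines of code (KLOC):

```python
# Hypothetical application sizes and open finding counts.
apps = {
    "app-a": {"loc": 250_000, "findings": 40},
    "app-b": {"loc": 15_000, "findings": 12},
}

for name, data in apps.items():
    # Normalise to findings per thousand lines of code (KLOC) so that
    # large and small applications can be compared fairly.
    per_kloc = data["findings"] / data["loc"] * 1000
    print(f"{name}: {per_kloc:.2f} findings per KLOC")
```

Note how the smaller application, despite having fewer findings in absolute terms, can show the higher density.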

It may be difficult to source sizing details for applications. Investigate what tools are available to you to support the gathering of metadata relating to your applications.

Weighted Risk Trend (WRT)

It is difficult to articulate the actual risk of applications based on just the volume and severity of their findings. WRT makes it possible to derive a single measure which uses the business criticality of the application to determine risk in the context of the business.

WRT = ((critical multiplier x critical defects) + (high multiplier x high defects) + (medium multiplier x medium defects) + (low multiplier x low defects)) x Business Criticality

Choose severity multipliers that work within your organisation and make sure the calculations you use are transparent to the intended audience. Multipliers could be assigned scores on a simple scale of 1 (low) to 4 (critical), or with greater weighting afforded to the more significant severities, such as the following (a worked example follows the list):

  • Critical = 9;
  • High = 6;
  • Medium = 3;
  • Low = 1.
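Here is the worked example promised above, using the weighted multipliers from the list and a hypothetical business criticality rated 1 (low) to 5 (high):

```python
# Severity multipliers from the weighted scale above.
multipliers = {"critical": 9, "high": 6, "medium": 3, "low": 1}

def weighted_risk_trend(defects, business_criticality):
    """WRT = sum(multiplier x defect count) x business criticality."""
    weighted = sum(multipliers[sev] * n for sev, n in defects.items())
    return weighted * business_criticality

# An application with 2 highs, 5 mediums and 8 lows, criticality 3:
print(weighted_risk_trend({"high": 2, "medium": 5, "low": 8}, 3))
# (6 x 2 + 3 x 5 + 1 x 8) x 3 = 105
```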

Rate of Defect Recurrence

Identify the recurrence of previously remediated findings (a minimal detection sketch follows this list). Look out for:

  • Version control related issues that lead to loss of security fixes in the production environment;
  • Remediation of issues without solving the root cause of the problem;
  • Developers who lack security awareness and continue to produce insecure code, overwriting code that was previously secure.
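The detection sketch mentioned above: if each finding can be reduced to a stable fingerprint (the application / type / location triple here is a hypothetical choice), recurrence is a simple set intersection between the latest scan and previously closed findings:

```python
# Hypothetical finding fingerprints: (application, type, location).
previously_closed = {
    ("app-a", "XSS", "/search"),
    ("app-a", "SQL Injection", "/login"),
}
latest_scan = {
    ("app-a", "XSS", "/search"),    # reappeared after remediation
    ("app-a", "CSRF", "/profile"),  # genuinely new
}

recurred = latest_scan & previously_closed
for finding in recurred:
    print("Recurred:", finding)
```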

Consider Business Context

Security findings need to be considered in the context of the wider organisation. For instance, an application handling confidential / regulated data will pose a greater risk to the company than a brochureware site containing publicly accessible data. Look to prioritise your resources to effectively manage the security risk of your applications.

Application criticality

Determine the criticality of your applications to the organisation. This is often used to determine and prioritise the scope of security services offered. Typical factors include:

  • Type and volume of data;
  • External availability;
  • Compliance or contractual requirements.

Business Impact Assessments (BIAs) are a useful source of this information.

Gamify reporting

In isolation it can be hard to articulate what a good application security posture is. Organisations typically own / operate multiple applications; within a large organisation it’s reasonable to expect this to run into the hundreds or potentially thousands.

Consider the comparison across regions / business lines / legal entities / departments / functions within your organisation. Identify those with the best and worst security posture and encourage some friendly competition. You could even create a league table so that the different areas can easily see how they compare to each other.

A mean of the overall rating will enable you to trend indicators over time and track increases or decreases in your organisation’s overall security posture.
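As a minimal league table sketch (the business lines and scores are hypothetical, using a mean Weighted Risk Trend where lower is better):

```python
# Hypothetical mean posture score per business line (lower is better).
scores = {"retail": 42.0, "payments": 18.5, "logistics": 27.3}

# A simple league table, best posture first.
ranked = sorted(scores.items(), key=lambda item: item[1])
for rank, (area, score) in enumerate(ranked, start=1):
    print(f"{rank}. {area}: {score:.1f}")
```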

Summary

Identify a select number of key application indicators that will help you track the effectiveness of your controls. Be careful to avoid tracking data that does not provide an indicator of implementation, effectiveness or return on investment (ROI); such data creates noise and detracts from the indicators that are important.

Target indicators to respective audiences to ensure that stakeholders have access to the right information. Identify where indicators are increasing / decreasing and determine what actions are required to address any increasing risk exposures.

It’s important to work with the stakeholders involved in the delivery / operation of your security controls. Work closely with your development / technical teams to educate them in the usage of the tools and provide guidance around remediation. Make sure that you recognise stakeholders for improvements that are being made and support them in their ongoing journey to improve.

Let me know what application indicators you track and what you have found to work effectively within your organisation.
