Risk management in Quality Assurance

Imagine being the first person to test a 10-digit calculator.

You could start testing every possible combination, such as 1 + 1 = 2, 1 + 2 = 3, 1 + 3 = 4, and so on. However, completing this task would take a few billion years.

Clearly, we need to be smarter and choose how to invest our finite resources across a practically infinite number of possible tests.

In this article, we will look at how to make that choice and explore how adopting a risk-based approach can improve our Test Plans.

Introducing risk management

Risk management in Quality Assurance is the ability to make considered decisions about what to test, where to test and how to test in order to maximize quality in a given time.

Additionally, it involves a systematic process of identifying, assessing, and mitigating risks that may impact the quality, performance, and reliability of products or services. These risks will inform decisions about test prioritization, ensuring that your testing efforts are focused and efficient.

Adopting a proactive risk management approach enables organizations to deliver higher-quality products, optimize resources, and protect their reputation.

We will delve into the strategic process of test selection to optimize our testing efforts and manage risks effectively.

Risk analysis

To ensure thoroughness and avoid overlooking critical aspects, it is essential to ask the right questions during the development process.

A well-structured checklist with relevant questions can help us cover all necessary areas and ensure comprehensive risk analysis. The checklist should encompass inquiries directed at developers, product managers, and quality assurance teams.

Instead of merely asking developers about test cases, it is more effective to collectively assess the potential impact and then determine appropriate testing measures.

Here is an illustrative example of the checklist used at Contentsquare for every feature (a sketch of one way to capture the answers follows the list):

  • What component has been updated/deleted?
  • Where is this component used in the application?
  • Does it impact Web and/or Mobile?
  • Does it impact the backend or database?
  • Does it have side-effects in other modules?
  • Does it impact a specific user? (employee, partner or customer)
  • Does it impact a specific role? (guest, viewer or admin)
  • Do we need a feature flag?
  • Does it affect metrics?
  • Does it work with other languages?
  • Any “No data” case to consider?
  • Does it include error handling?
  • What is the loading behavior?
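
If you want these answers to live alongside the feature itself, for example in the ticket or in the repository, a lightweight structured form can help. The sketch below is one hypothetical way to capture them in Python; the field names simply mirror the checklist questions and should be adapted to your own process.

```python
from dataclasses import dataclass, field


@dataclass
class FeatureRiskChecklist:
    """Hypothetical structured form for the risk-analysis checklist."""
    feature: str
    updated_components: list[str] = field(default_factory=list)
    impacts_web: bool = False
    impacts_mobile: bool = False
    impacts_backend_or_database: bool = False
    side_effects_in_other_modules: list[str] = field(default_factory=list)
    affected_user_types: list[str] = field(default_factory=list)  # employee, partner, customer
    affected_roles: list[str] = field(default_factory=list)       # guest, viewer, admin
    needs_feature_flag: bool = False
    affects_metrics: bool = False
    needs_language_checks: bool = False
    has_no_data_case: bool = False
    has_error_handling: bool = False
    loading_behavior: str = ""


# Example: answers for a hypothetical "export to CSV" change.
checklist = FeatureRiskChecklist(
    feature="export to CSV",
    updated_components=["export-service"],
    impacts_web=True,
    impacts_backend_or_database=True,
    affected_roles=["viewer", "admin"],
    has_no_data_case=True,
    loading_behavior="spinner while the export is generated",
)
```

Keeping the answers in a structured form also makes it easy to query them later, for example to find every feature that touches the database.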

Test data

In our initial calculator example, we quickly realized that testing all possible data sets is impossible. Therefore, we must approach the selection of our test data sets with care, ensuring that we avoid an overwhelming number of test cases.

To guide us in this endeavor, we can adhere to some fundamental principles:

  1. Data types: Test each type with at least one representative test case. For instance, for numbers we should consider:

    • Single-digit number
    • Multi-digit number
    • Fraction
    • Decimal
    • Very large number
    • Zero
    • Positive & negative numbers

    When we follow this rule, it is reasonable to assume that if test cases involving numbers like 1 + 2 or 12,454,676,242,974,233 + 14,325,853,785,234,245,245,111,222,075 produce the expected results, all other number combinations will function correctly as well. This principle obviously extends beyond calculators and applies to many other applications and systems.

  2. Boundary values: Include data points that represent the boundaries or limits of the system’s functionality. Testing with values at the upper and lower ends of these boundaries allows us to identify any issues or anomalies that may arise in these critical areas.

  3. Realistic data: Pick realistic and representative data sets that mimic the actual usage patterns and characteristics of the system’s intended users. This approach validates the system’s performance and behavior in real-world scenarios, improving its reliability and usability.

  4. Edge cases: Include edge cases within the data set, focusing on exceptional or uncommon scenarios that may arise during system usage. Exploring these edge cases allows us to uncover any unexpected behaviors or vulnerabilities that may not surface during normal testing.

Integrating these factors allows us to create a robust test suite that comprehensively validates the system’s functionalities, ensuring its reliability and quality with a minimal number of tests.
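
To make this concrete, here is a minimal sketch of what such a reduced data set could look like for the calculator example, using pytest parametrization. The `add` function is just a stand-in for the system under test, and the specific values are illustrative rather than an exhaustive recommendation.

```python
import pytest


def add(a, b):
    """Stand-in for the system under test: the calculator's addition."""
    return a + b


# One representative case per data type, plus boundary and edge cases,
# instead of an exhaustive sweep of every possible input.
ADDITION_CASES = [
    (1, 2, 3),                            # single-digit numbers
    (12, 345, 357),                       # multi-digit numbers
    (0.5, 0.25, 0.75),                    # decimals (exactly representable as floats)
    (9_999_999_998, 1, 9_999_999_999),    # very large number at the 10-digit boundary
    (0, 0, 0),                            # zero
    (-7, 7, 0),                           # positive and negative numbers
]


@pytest.mark.parametrize("a, b, expected", ADDITION_CASES)
def test_addition_with_representative_data(a, b, expected):
    assert add(a, b) == expected
```

Each tuple covers a distinct data type or boundary, so a handful of cases stands in for billions of equivalent combinations.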

Time limits

In the real world, testing resources are not unlimited, and sometimes we end up with even less time than originally anticipated.

In such cases, it becomes crucial to carefully select the test cases from our Test Plan that we will execute and those that we may need to defer.

While it might appear as if we are compromising on quality, this is where effective risk management in Quality Assurance becomes essential, aiming for the minimum risk while maintaining maximum quality.

Below are some fundamental guidelines to consider when prioritizing test cases under time constraints, in order of importance (a small prioritization sketch follows the list):

  1. High-Risk test cases: Focus on the test cases with the highest potential for bugs. If uncertain, seek input from the developer or technical lead.
  2. Most frequently used features: Prioritize testing cases that represent the most commonly used scenarios by customers. While unique edge cases may exist in the test plan, their rarity may not justify investing limited time in them. Consult with Customer Success or the User Acceptance Testing team if needed.
  3. Historical analysis: Analyze past testing results to identify and address commonly recurring bugs.
  4. Domain-specific priorities: Focus on specific types of test cases based on the nature of the feature being tested. For example:
    1. Data/Migration Feature: Emphasize migration and performance testing.
    2. Infrastructure Changes: Prioritize basic sanity testing.
    3. UI Changes: Concentrate on thorough frontend tests.
    4. Functional Changes: Focus on testing the functionality itself.
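
One common way to operationalize these guidelines is to give each test case a rough risk score, for example likelihood of failure multiplied by impact of a failure, and execute the highest-scoring cases that fit into the remaining time. The sketch below uses a hypothetical `TestCase` structure and made-up 1-to-5 scores purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    likelihood: int       # how likely this area is to break (1 = rare, 5 = very likely)
    impact: int           # how bad a failure would be for users (1 = minor, 5 = critical)
    minutes_to_run: int   # rough execution time, manual or automated

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact


def select_for_time_budget(cases: list[TestCase], budget_minutes: int) -> list[TestCase]:
    """Greedily pick the riskiest test cases that fit into the available time."""
    selected, remaining = [], budget_minutes
    for case in sorted(cases, key=lambda c: c.risk_score, reverse=True):
        if case.minutes_to_run <= remaining:
            selected.append(case)
            remaining -= case.minutes_to_run
    return selected


plan = [
    TestCase("checkout payment flow", likelihood=4, impact=5, minutes_to_run=30),
    TestCase("profile avatar upload", likelihood=2, impact=2, minutes_to_run=15),
    TestCase("data migration rollback", likelihood=3, impact=5, minutes_to_run=45),
]
for case in select_for_time_budget(plan, budget_minutes=60):
    print(case.name, case.risk_score)
```

A greedy selection like this is deliberately simple; the point is that the ordering is driven by risk rather than by the order in which the test cases happen to appear in the Test Plan.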

Mobile and Web testing

Mobile testing

Users worldwide possess an extensive array of mobile devices, each featuring countless variations in their configurations.

To decide which devices to test on, we should focus on a few key differentiators:

  • Operating System: Test on both iOS and Android, across several OS versions, to guarantee compatibility and consistent functionality.
  • Mobile Brands: Test on a range of brands, such as Samsung, Apple, and others, to ensure compatibility and performance across diverse devices. Each brand ships different hardware: CPU, RAM, GPU, camera, screen resolution, and battery capacity.
  • Screen Size and Resolution: Mobile devices come in a wide assortment of screen sizes and resolutions, so ensuring that your application adapts seamlessly to varying screen dimensions is crucial for a flawless presentation.

It is up to you to select the right combinations of these factors in order to maximize test coverage.
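
As a rough illustration, the sketch below (taking Android as an example) builds the full combination matrix from a few hypothetical values for each differentiator, then greedily trims it to a much smaller set that still exercises every OS version, brand, and screen size at least once. The values are placeholders, not a recommended device matrix.

```python
from itertools import product

# Hypothetical Android values for each differentiator; replace with your own
# market data, and repeat the exercise for iOS.
android_versions = ["14", "13", "12"]
brands = ["Samsung", "Google", "Xiaomi"]
screen_sizes = ["small", "medium", "large"]

full_matrix = list(product(android_versions, brands, screen_sizes))
print(f"{len(full_matrix)} combinations in the full matrix")  # 27: too many to test by hand

# Greedy reduction: keep a combination only when it covers at least one
# value (version, brand, or screen size) that is not covered yet.
covered: set[str] = set()
reduced = []
for combo in full_matrix:
    uncovered = set(combo) - covered
    if uncovered:
        reduced.append(combo)
        covered.update(uncovered)

print(f"{len(reduced)} combinations still cover every value at least once")
for version, brand, size in reduced:
    print(f"Android {version} | {brand} | {size} screen")
```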

Web testing

Similar to Mobile, users access the Web through a dizzying variety of browsers and operating systems. Here are the key dimensions to consider (a short sketch of a cross-browser check follows the list):

  • Browsers: While Chrome is the clear leader on desktop, it remains critical to test web apps on as many different browsers as possible to ensure consistency.
  • Legacy Browsers: If your target audience includes users who may still be using older browsers like Internet Explorer, they should be included in your test plan.
  • Screen sizes and resolutions: Verify that your web application adapts correctly to different screen sizes and resolutions. Use the developer tools to simulate various devices or manually resize the browser window.
  • Operating System: Even when using the exact same version of a browser across different operating systems, there will be slight differences in rendering due to fonts and native components. Using third-party services like BrowserStack or Sauce Labs might make it easier to implement such multi-platform tests.
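
As a sketch of what such a matrix can look like in practice, the snippet below uses Playwright's Python API (assuming Playwright and its browsers are installed) to open the same page in several browser engines and viewport sizes. The URL and the exact combinations are placeholders, and in a real suite each combination would run proper assertions rather than a simple title check.

```python
from playwright.sync_api import sync_playwright

# Placeholder matrix; adjust to the browsers, OSes, and resolutions your users actually have.
BROWSERS = ["chromium", "firefox", "webkit"]
VIEWPORTS = [
    {"width": 1920, "height": 1080},  # desktop
    {"width": 1366, "height": 768},   # laptop
    {"width": 375, "height": 667},    # small mobile viewport
]
URL = "https://example.com"  # replace with your application

with sync_playwright() as p:
    for browser_name in BROWSERS:
        browser = getattr(p, browser_name).launch()
        for viewport in VIEWPORTS:
            page = browser.new_page(viewport=viewport)
            page.goto(URL)
            # A real test would assert on layout, navigation, key flows, etc.
            assert page.title() != "", f"Empty title on {browser_name} at {viewport}"
            page.close()
        browser.close()
```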

Summary

Hopefully, the process of time and risk management in testing is now clearer to you. We think it is an essential tool to maximize the quality of our work, ensuring efficient and effective testing practices.

Remember to remain flexible and be willing to adapt your approach, embracing new tools and processes when necessary. Stay proactive about keeping up to date so you never fall behind and can keep delivering top-notch results.