Functional testing


1. Introduction

1.1. Purpose

Functional testing is a crucial aspect of the software development lifecycle that verifies whether a software application’s functions or features work as intended and meet the specified requirements.

The primary goal of functional testing is to ensure that the software behaves correctly according to the defined specifications and user expectations. It helps identify defects and discrepancies in the software’s behavior, ensuring it delivers the intended value to users.

1.2. Objectives

The main objectives of functional testing are:

Validating requirements

Functional testing helps verify that the software application meets the documented requirements and specifications. Executing test scenarios based on these requirements ensures that the software meets its intended objectives.

Detecting defects

Functional testing aims to identify software defects, bugs, and errors. By systematically testing various functions and features, QA teams can uncover issues that might negatively impact the user experience or hinder the application’s functionality.

Assuring User Experience

Functional testing ensures the software provides a seamless and user-friendly experience. It validates that end-users can interact with the application without encountering unexpected errors or incorrect behavior.

Regression testing

Performing functional testing is crucial to prevent regression issues as software advances and new features are added. It ensures that recent code changes do not break existing functionality, maintaining the application’s overall stability.

Validating business logic

Many software applications rely on complex business rules and logic. Functional testing ensures the implemented rules are accurate and generate the intended outcomes.

Verifying integration

Functional testing ensures that different software components and modules integrate correctly and collaborate as intended. This method helps identify integration issues early in the development process.

Compliance and standardization

In some industries, software must comply with specific regulations or standards. Functional testing verifies that the software adheres to these requirements.

Enhancing product quality

By identifying and rectifying defects early in the development cycle, functional testing contributes to overall product quality improvement. This results in a more reliable and robust software application.

Mitigating risks

Effective functional testing minimizes the risk of releasing a faulty or substandard product to end-users. Identifying and resolving issues before deployment helps mitigate potential financial, reputation, and legal risks.

Customer Satisfaction

Ultimately, functional testing ensures the software meets user expectations and delivers value. By identifying and addressing issues, it enhances customer satisfaction and loyalty.

1.3. Functional testing scope

The scope of functional testing encompasses all the platform and registry components within the Platform for state registries.

2. Functional testing approach

2.1. Functional testing methodologies

Functional testing follows a combination of manual and automated testing approaches. It includes unit, integration, system, acceptance, regression, installation/update, and visual testing.

Here is the list of used functional testing methodologies:

Unit testing

Unit testing involves isolating individual components, functions, or modules. Every unit undergoes independent testing to guarantee its accuracy and functionality. The primary goal of unit testing is to validate that each unit of code performs as intended. It helps identify defects early in the development cycle and ensures that smaller components work correctly before being integrated into the more extensive system. Unit tests are typically automated and focus on specific input-output scenarios. Developers write unit tests as part of the development process to ensure code reliability and maintainability.
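
For illustration, a unit test in this style might look like the sketch below, using JUnit 5 and Mockito from the toolset listed later in this document. The DiscountService unit and its TaxProvider dependency are hypothetical names, not platform code:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class DiscountServiceTest {

    // Hypothetical collaborator, mocked so the unit under test is isolated.
    interface TaxProvider {
        double taxRateFor(String region);
    }

    // Hypothetical unit under test.
    static class DiscountService {
        private final TaxProvider taxProvider;

        DiscountService(TaxProvider taxProvider) {
            this.taxProvider = taxProvider;
        }

        double finalPrice(double netPrice, String region) {
            return netPrice * (1 + taxProvider.taxRateFor(region));
        }
    }

    @Test
    void appliesRegionalTaxRate() {
        // The dependency is replaced with a mock, so only this unit's
        // logic is exercised by the test.
        TaxProvider taxProvider = mock(TaxProvider.class);
        when(taxProvider.taxRateFor("UA")).thenReturn(0.20);

        DiscountService service = new DiscountService(taxProvider);

        // 100.0 net with a 20% tax rate should yield 120.0.
        assertEquals(120.0, service.finalPrice(100.0, "UA"), 0.001);
    }
}
```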

Integration testing

Integration testing verifies the interactions and interfaces between different components or modules. It ensures that integrated components work together as expected. Integration testing aims to detect errors related to data exchange, communication, and collaboration between components. It helps identify issues that arise due to the integration of individual units. Different strategies for conducting integration testing exist, such as top-down, bottom-up, or sandwich approaches. Testers might use stubs or mocks to simulate the behavior of dependent components that are not yet available.
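
A minimal sketch of such an integration test, using JUnit 5, RestAssured, and AssertJ from the toolset listed later in this document; the base URL and endpoint paths are illustrative assumptions, not real platform endpoints:

```java
import static io.restassured.RestAssured.given;
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class RegistryApiIntegrationTest {

    // Hypothetical base URL of a deployed test instance.
    private static final String BASE_URL = "https://registry.example.com/api";

    @Test
    void createdRecordIsReadableThroughTheApi() {
        // Create a record through one component...
        String id = given()
                .baseUri(BASE_URL)
                .contentType("application/json")
                .body("{\"name\": \"test-entry\"}")
            .when()
                .post("/records")
            .then()
                .statusCode(201)
                .extract().path("id");

        // ...and verify that another component can read it back,
        // exercising the data exchange between the two.
        String name = given()
                .baseUri(BASE_URL)
            .when()
                .get("/records/{id}", id)
            .then()
                .statusCode(200)
                .extract().path("name");

        assertThat(name).isEqualTo("test-entry");
    }
}
```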

System testing

System testing evaluates the entire software application as a complete and integrated entity. It involves testing the application’s functionalities, interfaces, and interactions holistically. System testing aims to validate that the software meets the specified requirements and behaves as expected in different scenarios. It covers end-to-end scenarios and assesses the software’s overall behavior. System testing often includes various tests, such as functional, performance, security, and usability testing. It focuses on user scenarios and real-world usage.
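
An end-to-end system check might be scripted with Selenide along these lines; the URL and element selectors are hypothetical:

```java
import static com.codeborne.selenide.Condition.text;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

import org.junit.jupiter.api.Test;

class CitizenPortalSystemTest {

    @Test
    void userCanSearchTheRegistryEndToEnd() {
        // Drive the deployed application through a real browser.
        open("https://portal.example.com");

        $("#search-input").setValue("Shevchenko");
        $("#search-button").click();

        // The whole stack (UI, API, database) must cooperate for this
        // assertion to pass, which is what makes it a system-level test.
        $("#results").shouldHave(text("Shevchenko"));
    }
}
```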

Acceptance testing

Acceptance testing evaluates whether the software meets the acceptance criteria defined by stakeholders and users. It determines whether the software is ready for deployment. The purpose of acceptance testing is to ensure that the software aligns with user expectations and business requirements. It aims to verify that the software delivers value and meets the needs of its intended users. It comprises two sections:

  • UAT Testing
    User Acceptance Testing (UAT) is conducted yearly with the customer representative. In this testing phase, we prepare test cases, define registry regulations, and conduct testing on the registry. The customer representative goes through each test case, indicating the success or failure of each scenario.

  • Alpha/Beta Testing
    Alpha/beta testing is performed with each release iteration by the registry development teams using the test cases they have prepared.

Regression testing

Regression testing ensures that a software application’s recent code changes or enhancements do not negatively impact its existing functionality. It involves re-running a predefined set of test cases to validate that new code changes have not introduced unintended side effects or defects.
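
One common way to keep such a predefined suite re-runnable is JUnit 5 tags, as in the sketch below. The test and its helper are placeholders, and the Maven command in the comment assumes the Surefire plugin with JUnit 5:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// The whole class joins the regression suite via the tag, so CI can
// re-run the same set after every change, e.g.: mvn test -Dgroups=regression
@Tag("regression")
class PaymentFlowRegressionTest {

    @Test
    void previouslyWorkingPaymentFlowStillAcceptsValidCard() {
        // Placeholder for a previously passing scenario that is re-executed
        // on every change to catch unintended side effects.
        assertTrue(isPaymentAccepted("4111111111111111"));
    }

    // Hypothetical call into the system under test.
    private boolean isPaymentAccepted(String cardNumber) {
        return cardNumber != null && cardNumber.length() == 16;
    }
}
```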

Installation/Update testing

The purpose of installation/update testing is to ensure that the software can be installed, updated, configured, and uninstalled smoothly without any issues, mistakes, or harmful impacts on the targeted system. This type of testing is essential because even a well-developed software application can suffer from installation-related or update-related issues that might disrupt its functionality or cause conflicts with other software components.

Visual testing

Visual testing is a critical aspect of quality assurance that focuses on evaluating the visual appearance and layout of a software application or website. It ensures that the user interface elements, such as fonts, colors, images, and graphical components, are displayed correctly and consistently across various devices, browsers, and resolutions. Visual testing utilizes automated tools to capture screenshots of the application’s different states and then compares these images to a set of baseline images representing the expected appearance. This process helps identify discrepancies, visual regressions, or layout issues, ensuring a visually appealing and consistent user experience.
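
As a simplified illustration of the compare-to-baseline idea, the sketch below performs a naive pixel-by-pixel comparison of a captured screenshot against a baseline image. Real visual-testing tools add tolerances, ignore regions, and generate diff images, but the core mechanism is the same:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// A naive pixel-by-pixel comparison; production tools add thresholds
// and produce highlighted diff images instead of a boolean.
public final class VisualDiff {

    public static boolean matchesBaseline(File actual, File baseline) throws Exception {
        BufferedImage a = ImageIO.read(actual);
        BufferedImage b = ImageIO.read(baseline);

        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false; // layout change: image dimensions differ
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false; // at least one pixel deviates from the baseline
                }
            }
        }
        return true;
    }
}
```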

2.2. Testing approaches description

2.2.1. Acceptance Testing

2.2.1.1. UAT Testing

This testing method involves verifying a build that is a potential candidate for further deployment to the production environment. It takes place once per year with a customer representative and includes the following procedures:

  • Coordination and creation of acceptance scenarios with client representatives.

  • Establishment of the necessary testing infrastructure.

  • Search or creation of required test data.

  • Direct execution of acceptance scenarios and agreement of their results with client representatives.

The tests performed during this phase require confirmation of successful completion — the presence of snapshots, logs, and detailed reproduction steps.

2.2.1.2. Alpha/Beta Testing

Alpha testing involves testing a pre-release software version within the development environment. Internal testers, developers, or quality assurance teams perform this task. The goal is to identify bugs, assess system stability, and ensure basic functionalities work as intended before external testing.

Beta testing occurs after alpha testing and involves releasing the software to a select group of external users or customers. The aim is to gather real-world feedback, uncover usability issues, and identify any remaining bugs or performance problems. This testing phase helps refine the software based on user input before its official release, ensuring a more polished and user-friendly product.

Registry development teams are responsible for alpha/beta testing and perform it during the release process.

2.2.2. Integration Testing

This testing method follows the approach below:

  • Designing testing scenarios for the listed integrations and preparing test data.

  • Developing an automated solution for testing integration data and forming test groups if such a solution is feasible.

  • Manual tests that form the regression suite should be executed regularly and kept up to date. They should be added to the appropriate test groups in the git repository, written in the Gherkin language.

Such tests involve testing integrations with real instances of external test systems and require confirmation of successful execution — the presence of snapshots, logs, and detailed reproduction steps.
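
To illustrate how Gherkin-described scenarios can be bound to automated code, the sketch below uses Cucumber's Java bindings together with RestAssured and AssertJ. The scenario wording, base URL, and endpoints are assumptions for illustration only:

```java
import static io.restassured.RestAssured.given;
import static org.assertj.core.api.Assertions.assertThat;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.restassured.response.Response;

// Binds the steps of a Gherkin scenario such as:
//   Given the external registry test instance is available
//   When I request citizen data for "12345"
//   Then the response status is 200
public class ExternalRegistrySteps {

    // Hypothetical external test system address.
    private static final String BASE_URL = "https://external-registry.example.com";

    private Response response;

    @Given("the external registry test instance is available")
    public void testInstanceIsAvailable() {
        // Illustrative health check against the external test system.
        given().baseUri(BASE_URL).get("/health").then().statusCode(200);
    }

    @When("I request citizen data for {string}")
    public void requestCitizenData(String id) {
        response = given().baseUri(BASE_URL).get("/citizens/" + id);
    }

    @Then("the response status is {int}")
    public void responseStatusIs(int status) {
        assertThat(response.statusCode()).isEqualTo(status);
    }
}
```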

Artifacts resulting from this type of testing:

  • Automated tests added to relevant test groups (nightly runs, integration, etc.).

  • Manual tests added to relevant test groups (regression, integration, etc.).

  • Updated requirement coverage matrices for tests and automated tests.

  • Results of test runs should be well-structured and accessible to all stakeholders:

    • Reports of automated test runs on Jenkins.

    • Reports of manual test runs in the git repository, described in Gherkin.

  • Formulated evidence of test execution — snapshots, attachments, and logs.

2.2.3. Regression Testing

This testing method follows the approach below:

  • Develop an automated solution for test goal management.

  • Automated solutions are designed according to their access levels (see the sketch after this list):

    • Backend — This level involves direct access to contracts and their interactions during testing.

    • UI — This level involves building automated solutions for testing platform UI functionality.

  • Automated testing encompasses the following methods:

    • Functional testing

    • Installation testing

    • Integration testing

  • Developed automated tests are added to corresponding quality gates.

  • Developed automated tests reference the requirements they verify.

  • The number of tests should be evenly distributed across testing levels, forming a balanced testing pyramid.

  • Several levels of quality gates are integrated into the CI/CD process.

  • Test data comprises synthetic data resembling industrial data or a sample of real industrial data (if accessible).

  • To ensure the stability of the automated solution, virtualization tools are utilized.
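
A minimal sketch of what tests at the two access levels might look like, tagged so quality gates can select them. The URLs, selectors, and tag names are illustrative assumptions:

```java
import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;
import static io.restassured.RestAssured.given;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class RegressionLevelsTest {

    // Backend level: talk to the API contract directly, no browser involved.
    @Test
    @Tag("backend")
    void contractReturnsExpectedStatus() {
        given().baseUri("https://registry.example.com/api")
            .when().get("/records")
            .then().statusCode(200);
    }

    // UI level: exercise the same functionality through the platform UI.
    @Test
    @Tag("ui")
    void recordsPageRendersList() {
        open("https://registry.example.com/records");
        $(".records-list").shouldBe(visible);
    }
}
```

Keeping many fast backend-level tests and fewer UI-level tests is one way to preserve the balanced testing pyramid mentioned above.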

Artifacts of this testing include:

  • Documented design of the automated solution.

  • Developed code conventions and guidelines for automated test developers.

  • Established principles and rules for conducting code reviews.

  • Description of quality gate levels and test categories.

2.2.4. Installation/Update Testing

This functionality is tested manually only, and we add it to the regression test suite. Because this testing is resource-intensive and requires a separate environment, we agree with the infrastructure team on when to execute it, as needed.

Artifacts of this testing include:

  • Manual tests added to relevant test groups (Regression, Integration, etc.).

  • Results of test runs should be well-structured and accessible to all stakeholders.

2.2.5. Visual Testing

This testing method follows the approach below:

  • Develop an automated solution for test goal management.

  • Automated solution captures screenshots of the application under different scenarios and compares them to baseline images, highlighting any discrepancies.

  • Visual testing checks how the UI adapts to different screen sizes to ensure a seamless experience across devices (see the sketch after this list).

  • It confirms that the visual representation matches the expected design and layout and that no visual regressions or inconsistencies have been introduced.
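
A sketch of the screen-size check using Selenide's browserSize configuration with a JUnit 5 parameterized test; the viewport list, URL, and selector are assumptions:

```java
import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

import com.codeborne.selenide.Configuration;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class ResponsiveLayoutTest {

    // Re-run the same layout check at several viewport sizes.
    @ParameterizedTest
    @ValueSource(strings = {"1920x1080", "768x1024", "375x812"})
    void navigationStaysVisibleAcrossViewports(String size) {
        Configuration.browserSize = size; // applied when the page is opened
        open("https://portal.example.com");
        $("#main-navigation").shouldBe(visible);
    }
}
```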

Artifacts of this testing include:

  • Reports highlighting differences between captured screenshots and baseline images, making it easy to identify visual defects or regressions.

2.2.6. Unit Testing

This testing method follows the approach below:

  • Unit tests isolate a specific code unit, such as a function or method, from the rest of the application. This isolation ensures that the test results are not affected by external dependencies.

  • Each unit test is independent and should not rely on the execution order of other tests. This approach allows for parallel execution and accurate identification of failures.

  • Unit tests detect defects and issues early in the development process, reducing the cost and effort of fixing bugs in later stages.
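
The isolation and independence principles above can be made concrete with a fresh fixture per test, as in this self-contained sketch (the ShoppingCart class is a stand-in, not platform code):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class ShoppingCartTest {

    // Hypothetical unit under test.
    static class ShoppingCart {
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        int size() { return items.size(); }
    }

    private ShoppingCart cart;

    // A fresh fixture before every test keeps tests independent of each
    // other's state and of execution order.
    @BeforeEach
    void createFreshCart() {
        cart = new ShoppingCart();
    }

    @Test
    void startsEmpty() {
        assertEquals(0, cart.size());
    }

    @Test
    void addingAnItemIncreasesSize() {
        cart.add("book");
        assertEquals(1, cart.size());
    }
}
```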

Artifacts of this testing include:

  • A passed code review pipeline in which Sonar validates at least 80% code coverage.

  • Generated code coverage reports.

2.3. Quality gates

All the described testing types make up the quality gates. We structure the CI/CD process into several stages:

Stage cluster

We deploy the newly developed functionality to the stage cluster and perform initial testing:

  • Unit tests

  • Sonar unit test coverage check

  • Integration tests

  • Visual tests

Install cluster

We deploy the compiled platform to a separate cluster and carry out installation testing through the following stages:

  • Regression tests

  • Integration tests

  • System tests

  • Visual tests

  • Manual installation test cases execution

Update cluster

After validating the installation, we deploy a new cluster to verify the platform update from the previous version to the new one. Update testing includes the following stages:

  • Regression tests

  • Integration tests

  • System tests

  • Visual tests

  • Manual update test case execution

Pre-production cluster

Once we confirm that the platform installs and updates correctly, we deploy the update to the pre-production cluster, where testing includes:

  • Manual User Acceptance Tests (UAT) execution

  • Alpha/beta tests by registry development teams

Production cluster

After a year-long development cycle, we conduct the following testing in the production environment:

  • Manual User Acceptance Tests (UAT) execution with customer representatives.


2.4. Tools and Technologies Used

The following types, tools, and technologies are used during functional testing:

Functional testing type | Toolset                            | Status
------------------------|------------------------------------|------------------
Unit testing            | JUnit, Mockito                     | Automated
Integration testing     | JUnit, AssertJ, RestAssured        | Automated
System testing          | JUnit, Selenide, RestAssured       | Automated
Visual testing          | Selenide, Moon                     | Automated
Acceptance testing      | JUnit, Selenide                    | Automated/Manual
Regression testing      | JUnit, Selenide, Moon, RestAssured | Automated

3. Managing defects

3.1. Registering and processing defects

We categorize the newly discovered defects into three groups depending on their origins:

  • Production defects

  • Regression defects

  • Functional defects

3.1.1. Production defects

These are defects identified in the live or production environment after the software deployment. Production defects can impact end-users, disrupt business processes, and require immediate attention to minimize adverse effects.

Defects found in the production environment should be linked to the defect-handling epic for the production environment. They should carry competency labels (JSM: DevOps, Backend, Frontend, etc.) and include a reference to the user-reported defect description. For defects reported by users, a link to the corresponding Jira defect should be provided.

3.1.2. Regression defects

Regression defects occur when a new code change or feature introduction inadvertently causes a previously working functionality to fail. They can happen due to code changes affecting interconnected parts of the software.

We log defects found during regression testing or while testing other tasks within the scope of a regression epic.

3.1.3. Functional defects

Functional defects arise when a software component does not perform its intended function correctly during development. They can include incorrect calculations, inaccurate data processing, or failure to execute specific actions as expected.

We link any defects detected during new functionality testing to the corresponding user story where we found them.

3.2. Processing defects

The defect processing is as follows:

  • We prioritize all defects according to the conditions outlined in the Prioritizing defects section and review them as described in the Determining defect importance section.

  • A resolved defect is marked as Ready for QA and forwarded to the defect registrar. If the defect registrar represents the client, we forward it to the testing team leader.

  • The defect registrar reviews the defect and, if resolved, marks it as closed on a valid quality gate. If the defect reproduces again, it is returned to development with a Rework status.

3.3. Prioritizing defects

Consider the criteria listed in the table below to determine the severity of defects and their impact on further development:

Priority Level | Description | Impact on Testing
---------------|-------------|------------------
0: Blocker | The Platform stops functioning, and there is no workaround. | The testing team sends the build back to development.
1: Critical | Functionality is not working. | The testing team provides a test report to the development and management teams. The management team decides on the flow (rework, hotfix).
2: Major | Critical business requirements are broken. | Priority 2 defects require additional agreement with the business team and project management.
3: Minor | Functionality does not work according to design, but an acceptable workaround exists. | Business and development teams agree on the necessity of resolving the defect within the current release.
4: Trivial | Minor changes are needed in functionality, such as aesthetic or cosmetic changes. | Business and development teams agree on the necessity of resolving the defect within the current release.

3.4. Determining defect importance

During the development and regression/stabilization stages, the development team conducts internal and external sessions to review the list of defects and determine their current priorities and statuses. To refine a defect, use the clarifying statuses in the table below and include a detailed comment.

The testing team leader and the defect registrar are responsible for closing defects.

Status | Explanation | Will it be resolved?
-------|-------------|---------------------
Not a bug: Cannot reproduce | The defect cannot be reproduced at the moment. | No
Not a bug: Duplicate | The defect is already registered. | No
Done | Testing is fully completed, and the functionality works. | Yes
Rework | Testing is fully completed, and the functionality does not work after the fix. | No
Won't Do | The defect has minimal business impact and won't be resolved. | No
Fixed | The changes were made and thoroughly tested. | Yes
Obsolete | The defect is outdated. | No
Cancelled | The functionality was cancelled. | No
Implemented | A technical error that doesn't require testing. | Yes
Deferred | Awaiting resolution in upcoming releases and planned functionality. | Yes
Not a Bug | Not a defect. | No

4. Reporting

Testing reports comprise two types:

  • ReportPortal automation report that can be provided to stakeholders.

  • Manual report after manual suite execution that can be provided to stakeholders.