Software development and quality control practices

1. Approaches, methodologies, and standards of software development

1.1. Development approaches and methodologies

The following methodologies and approaches are implemented during the development of information systems components:

  1. Utilization of object-oriented approach in software development;

  2. Implementation of container-based virtualization;

  3. Automation of deployment using the GitOps approach;

  4. Organizing documentation in the same way as code (docs-as-code);

  5. Unification and standardization of software components;

  6. Subsystem decomposition into modules;

  7. Risk assessment and security threat modeling;

  8. Execution of functional and non-functional testing.

1.2. Coding standards

1.2.1. Design standards and recommendations

  1. Adhere to the "Zen of Python" philosophy:

    • Beautiful is better than ugly.

    • Explicit is better than implicit.

    • Simple is better than complex.

    • Complex is better than complicated.

    • Flat is better than nested.

    • Sparse is better than dense.

    • Readability counts.

    • Special cases aren’t special enough to break the rules.

    • Although practicality beats purity.

    • Errors should never pass silently.

    • Unless explicitly silenced.

    • In the face of ambiguity, refuse the temptation to guess.

    • There should be one — and preferably only one — obvious way to do it.

    • Although that way may not be obvious at first unless you’re Dutch.

    • Now is better than never.

    • Although never is often better than right now.

    • If the implementation is hard to explain, it’s a bad idea.

    • If the implementation is easy to explain, it may be a good idea.

    • Namespaces are one honking great idea — let’s do more of those!

  2. "Fail fast" is often the best option.

  3. Use "Return early" to validate input parameters.

  4. Apply the "Scout rule": "Leave the code better than you found it."

  5. Avoid using break, continue, and return in complex constructs.

  6. Always use curly braces in loops and conditions.

  7. Avoid creating methods with more than four parameters.

  8. Avoid creating methods that modify input parameters.

  9. Log objects without transforming them, and guard against NPE (NullPointerException) when doing so.
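
Rules 2, 3, 7, and 8 above can be sketched in Java. The UserValidator class and its normalizeEmail method are illustrative examples, not Platform code:

```java
// Illustrative sketch of "fail fast" and "return early" guard clauses.
// The class and method names are hypothetical, not Platform code.
public class UserValidator {

    // At most four parameters; the input is validated but never modified.
    public static String normalizeEmail(String email) {
        // Return early: reject bad input before doing any work.
        if (email == null) {
            throw new IllegalArgumentException("email must not be null");
        }
        String trimmed = email.trim();
        if (trimmed.isEmpty() || !trimmed.contains("@")) {
            throw new IllegalArgumentException("malformed email: " + email);
        }
        // The happy path stays unnested at the end of the method.
        return trimmed.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(normalizeEmail("  User@Example.COM ")); // prints "user@example.com"
    }
}
```

Keeping the guards at the top means the method fails fast on invalid data instead of carrying it deeper into the call chain.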

1.2.2. Code styling standards (code style)

  1. Indentation: use four spaces (no tabs).

  2. Use static imports only when working with DSL.

  3. Replace wildcard imports ("*") with specific ones.

  4. Separate logical groups of fields with new lines, rather than each field individually.

  5. Declare class (static) fields before object (instance) fields.

  6. When choosing variable names, focus on their function rather than type. For example, StringBuilder stringbuilder = new StringBuilder() is poor practice, whereas StringBuilder fields = new StringBuilder() is good.

  7. Prefer class names that start with a domain value. For example: UserService, UserKafkaService (technology component may go in the middle).

1.2.3. Git workflow standards

  1. The repository should not contain specific data, such as local paths or settings specific to individual developers (properties).

  2. When creating Git commit messages:

    • The Jira task number should be mentioned at the beginning in square brackets.

    • The message should answer the question: "What does this commit change?"

    • If the commit description requires further elaboration, use a short headline on the first line, followed by a detailed description on a new line (Git handles this well).

    • Do not end the commit message with a period.
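
For example, a commit message following these rules might look like this (the Jira key ABC-123 is illustrative):

```
[ABC-123] Add pagination to the user search endpoint

The previous implementation loaded the full user list into memory.
This change introduces limit/offset request parameters with a
default page size of 20.
```

The first line is a short headline with no trailing period; the detailed description follows after a blank line.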

1.2.4. POM (project object model) working requirements

  1. Use the parent pom.xml for:

    • Defining dependency versions;

    • Managing plugins.

  2. When moving library versions to settings (.properties), add the .version suffix. For example, <querydsl.version>…</querydsl.version>.
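
A minimal parent pom.xml following these rules might be sketched as below; the artifact coordinates and all version numbers are illustrative, not the Platform's actual values:

```xml
<!-- Sketch of a parent pom.xml; coordinates and versions are illustrative. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.platform</groupId>
    <artifactId>platform-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>

    <properties>
        <!-- Library versions live in properties with the .version suffix. -->
        <querydsl.version>5.0.0</querydsl.version>
    </properties>

    <!-- Dependency versions are defined once, in the parent. -->
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>com.querydsl</groupId>
                <artifactId>querydsl-core</artifactId>
                <version>${querydsl.version}</version>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <!-- Plugin versions are likewise pinned in the parent. -->
    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>3.2.5</version>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>
</project>
```

Child modules then declare dependencies without versions and inherit them from the parent.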

1.2.5. Modular testing standards

  1. It is recommended to utilize the AAA (Arrange-Act-Assert) approach.

  2. Comments (given, when, then) are unnecessary; separation by empty lines is sufficient.

  3. Use the @DisplayName annotation in JUnit 5 to provide detailed information.

  4. Avoid using "throws Exception" in test declarations.

  5. Usage of PowerMock is not recommended.

  6. For mock objects, add the appropriate mock prefix, e.g., mockRepository.

  7. Avoid meaningless messages in assertions, e.g., Assertions.assertNotNull(object, "Shouldn’t be null").
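
The AAA structure and the mock-prefix convention can be sketched in plain Java. FieldJoiner and the hand-rolled mockSource below are illustrative stand-ins for real production code and a framework mock; an actual JUnit 5 test would also carry @DisplayName:

```java
import java.util.List;
import java.util.function.Supplier;

// AAA (Arrange-Act-Assert) sketch in plain Java; all names are hypothetical.
public class FieldJoinerTest {

    // Hypothetical unit under test.
    static String joinFields(Supplier<List<String>> source) {
        return String.join(", ", source.get());
    }

    static void joinsFieldsInOrder() {
        // Phases are separated by blank lines, not given/when/then comments.
        Supplier<List<String>> mockSource = () -> List.of("id", "name");

        String result = joinFields(mockSource);

        if (!result.equals("id, name")) {
            throw new AssertionError("unexpected result: " + result);
        }
    }

    public static void main(String[] args) {
        joinsFieldsInOrder();
        System.out.println("ok"); // prints "ok"
    }
}
```

Note the mock prefix on mockSource, the blank-line separation of the three phases, and the absence of "throws Exception" on the test method.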

1.2.6. Clean code requirement

During code development, adhere to the principles of Clean Code, which emphasizes creating quality, well-written code.

Code is considered "good" if it:

  • Meets requirements and has passed tests;

  • Clearly expresses all intended design concepts;

  • Does not contain duplication;

  • Minimizes the number of components.

1.2.7. Using SQALE metrics for technical debt assessment and management

In software development, changes in one part of the code often require related changes in other code segments or documentation. This process is known as accumulating "technical debt". The term "technical debt" also encompasses other necessary but unfinished changes that constitute a "debt" to be repaid at a later time.

The SQALE methodology is used to measure and manage the volume of technical debt.

2. Software development methodologies

The Platform development follows the Agile software development methodology.

Agile software development is a class of methodologies based on iterative development. It assumes that requirements and decisions evolve through collaboration among self-organizing cross-functional teams.

2.1. System delivery structure

2.1.1. Development and management teams

  • Management Team

  • Platform Team

    • Three Scrum teams

    • Platform Service Team

  • System Architects Team

  • Competence Center and Register Team

  • Service Management Team

2.1.2. Adoption of Agile methodology

Within the Agile methodology, the primary approach is Scrum.

Scrum is a project management methodology oriented toward flexible software development, emphasizing high development quality.

Given the Platform’s scale and technical complexity involving numerous teams and specialists, the LeSS (Large Scale Scrum) approach is used. LeSS is a large-scale Scrum designed for multiple teams working on a single product.

This approach involves two-week sprint durations.

The Scrum Master is a specialist from the Management Team acting as a Delivery Manager.

Task effort is estimated in story points. Each task that can be divided into smaller parts is logged in the Jira backlog and allocated among the respective teams by the Management Team.

The planning process is conducted by the Management Team with the participation of lead developers and, if necessary, all members of the development team. Planning poker, which entails consensus-based complexity estimation, is used. A pessimistic outlook is used to set task completion deadlines, allowing for time buffers and flexibility in case of complications.

The Scrum process incorporates a dependency management approach involving active analysis and minimization of risks related to intra-team or inter-team dependencies.

2.1.3. Inter-Team collaboration tools

  • Using Jira as a ticket management system.

  • Using Confluence as a knowledge base for documentation.

  • Using Git repositories for source code collaboration.

2.1.4. Usage of Scrum artifacts

2.1.4.1. Definition of Ready (DoR) artifact

In the Scrum framework, the Definition of Ready (DoR) represents the criteria a task (User Story) must fulfill before work begins. These conditions determine when a User Story is ready for execution and inclusion in a sprint. Having clear DoR for backlog items is important for the team.

The readiness criteria include:

  • User Story/task is clearly defined and estimated at a high level.

  • Technical priorities are set based on dependencies or technical capabilities.

  • User story/task includes a detailed description, acceptance criteria in list format, non-functional requirements, and risks.

  • Managers/architects understand what needs to be done and how, and have asked their clarifying questions.

  • User story/task is in one of the statuses: "In Analysis," "Open," "Blocked."

  • User story/task has a link to the corresponding Epic.

  • User story/task is assigned to the responsible development team.

  • Development phase of the project is specified for the story/task.

  • User story/task is independent.

  • User story/task is approved and prioritized by the client based on stated requirements during grooming sessions.

  • All blocking factors for sprint stories/tasks are resolved.

  • Acceptance criteria are clearly described and understood, from both a development and a testing perspective, by the development and quality control teams.

  • Story/task name may contain specific prefixes for designation: [SPIKE], [POC], [DESIGN].

  • Each user story/task is a tested functional unit, and the tester understands how to verify it and what to do before (setting up the test environment, preparing test data, etc.).

  • User story/task is in the "Ready for Development" status.

  • All sub-tasks are well-defined (one or two days of development for each) and assigned to executors.

  • Each sub-task should have one of the following prefixes based on specialization: [UX], [BA], [BE], [FE], [DB], [DEVOPS], [QA], [TW], [AUTO].

  • For data modeling user stories/tasks, a link to the Knowledge Base page with approved data models should be provided.

  • For business process modeling user stories/tasks, a link to the Knowledge Base page should be available, containing the following information:

    • Integration point description.

    • Form field description.

    • User flow description.

    • UX/UI layouts.

2.1.4.2. Definition of Done (DoD) artifact

Definition of Done (DoD) is a set of conditions under which a task or user story can be considered completed ("Done"). These criteria are developed for user stories to provide the development team with a clear understanding of the expected outcome of the work.

Success Criteria:

  1. Development completed:

    • Code review conducted according to internal standards;

    • Code successfully applied (merged) to the Master branch;

    • Static code analysis and deployment completed (no critical issues; unit test coverage > 80%);

    • Functionality tested in the "UAT-Integration" environment;

    • Automated security scanning performed using SAST, SCA, and DAST scanners.

  2. Successful development testing in the "UAT-Integration" environment.

  3. Successful manual testing.

  4. Automated tests developed and passing in CI/CD (meeting all acceptance criteria).

  5. Execution time of user stories/tasks recorded in Jira.

  6. Outcome of story/task can be demonstrated to the Client in the UAT environment.

  7. Status of user story/task set to "Closed" in Jira.

  8. If defects are identified, tickets for all of them are created, triaged, assigned, and planned.

2.1.5. Approaches to Information System (IS) release management

Semantic versioning of the Platform and Platform components is used as the approach to system release management.

The general approach includes three main release types:

  • MAJOR — major version, including incompatible API changes.

  • MINOR — minor version, including backward-compatible functional additions.

  • PATCH or HOTFIX — version including error fixes with backward compatibility.
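
For illustration, starting from a hypothetical version 1.9.5, each release type bumps the number as follows:

```
1.9.5 -> 2.0.0   MAJOR: incompatible API change
1.9.5 -> 1.10.0  MINOR: backward-compatible functional addition
1.9.5 -> 1.9.6   PATCH: backward-compatible bug fix
```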

Platform and Platform component releases are independent.

2.1.5.1. Sprint and Release numbering

Sprint duration is two weeks.

The release numbering template for the Platform and Platform components is currently as follows: 1.X.X, where the X positions represent the functionality-extension (minor) and patch release versions. For example, 1.9.5.

After each release, a new Jenkins pipeline is created with the name release-1-X-X using the EDP Admin Console.

3. Code quality control

Specialized methodologies and tools, collectively referred to as code quality control, are used to ensure high-quality code during software development.

3.1. Static code analysis

The primary method utilized by developers is static code analysis.

Static code analysis is a software analysis methodology performed without actually executing the program. The source code is examined using specialized software.

The following tools are employed for conducting static code analysis:

  • IntelliJ IDEA — an integrated development environment that analyzes code in open files, highlighting problematic areas during input. IntelliJ IDEA also allows manual initiation of checks or a set of checks on a selected set of files, providing a detailed report on all issues found in the code.

  • SonarQube — an open-source platform designed for continuous analysis and code quality checking, detecting errors and security vulnerabilities through static code analysis. The tool is used in Jenkins pipelines when creating pull requests for changes to the master branch, as well as during the merging of a developer’s branch into the master branch.

  • Semgrep — a static code analyzer that identifies potential errors and vulnerabilities in Java programs.

  • Yelp Detect-secrets — a code analyzer that helps detect inadvertently stored secrets in the code.

  • Checkmarx KICS (Keeping Infrastructure as Code Secure) — an open-source solution for static analysis of infrastructure described in code.

  • Trivy — a static scanner for Docker images that detects vulnerabilities and configuration errors.

3.2. Code coverage testing

Developers employ code coverage testing to ensure high-quality code. Code coverage metrics from analysis tools should exceed 80%.

The following tools are used to check code coverage through testing:

  • IntelliJ IDEA — an integrated development environment for software.

    Code coverage in IntelliJ IDEA allows you to see how your code has been executed. The tool also allows you to assess how well your code is covered by unit tests, enabling you to evaluate their effectiveness.

    IntelliJ IDEA applies several local plugins for these purposes, such as EMMA, JaCoCo, and others.

  • SonarQube — an open-source platform designed for continuous inspection and code quality checking, offering automated code review with static code analysis for error and security vulnerability detection.
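
One way to enforce the 80% threshold in a Maven build is a jacoco-maven-plugin check rule; the sketch below is illustrative (the plugin version shown and the choice of the LINE counter are assumptions, not Platform configuration):

```xml
<!-- Sketch: fail the build when line coverage drops below 80%. -->
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.11</version>
    <executions>
        <execution>
            <goals>
                <goal>prepare-agent</goal> <!-- instrument the test run -->
            </goals>
        </execution>
        <execution>
            <id>check-coverage</id>
            <goals>
                <goal>check</goal>
            </goals>
            <configuration>
                <rules>
                    <rule>
                        <element>BUNDLE</element>
                        <limits>
                            <limit>
                                <counter>LINE</counter>
                                <value>COVEREDRATIO</value>
                                <minimum>0.80</minimum>
                            </limit>
                        </limits>
                    </rule>
                </rules>
            </configuration>
        </execution>
    </executions>
</plugin>
```

With such a rule in place, the same threshold is checked locally and in CI, not only in SonarQube's quality gate.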

3.3. Unit testing

To ensure high quality and "cleanliness" of code written by developers, the method of unit testing is used. Code coverage by unit tests should exceed 80%.

Unit testing is a software testing method that involves testing each module of a program’s code individually. A module refers to the smallest part of the program that can be tested. In procedural programming, a module can refer to an individual function or procedure.

In the design and development process, the approach of decomposing parts of the information system into separate modules is employed, each of which is thoroughly tested.

For conducting unit testing, developers use the following tools:

  • JUnit;

  • AssertJ;

  • WireMock;

  • MockMvc;

  • Spring-boot-test.

Standards and recommendations for conducting unit testing by developers in the development of this information system are described in the Modular testing standards section of this document.

3.4. Automated testing coverage

Automated software testing is a part of the quality control process in software development. It utilizes software tools to execute tests and verify execution results, helping to reduce testing time and simplify the process.

To carry out proper automated testing procedures, a set of professional tools is used.

3.4.1. Platform testing tools

The list of tools involved in Platform testing is provided in the "Platform testing tools" table (see below).

Several categories of tools are identified:

  • Information Storage and Exchange Tools — tools intended for storing and creating project documentation, serving as a single entry point to the project.

  • Testing Tools — tools used during manual and automated testing.

  • Monitoring Tools — tools used for monitoring the platform’s status and displaying it on configured monitors.

Table 1. Platform testing tools

Information storage and exchange tools:

  • Requirement management system — JIRA, Confluence

  • Test case management system — JIRA plugins

  • Defect management system — JIRA

Testing tools:

  • API contracts — SoapUI, RestAssured, Postman

  • SOAP contracts — SoapUI, JAX-WS

  • Web applications — Selenium WebDriver, Cucumber or derivatives

  • Desktop system (Camunda) — TBD

  • Data testing — WireMock (data masking)

  • Integration with Trembita (UA-specific) — SoapUI

  • Load testing — Gatling

  • Security testing:

    • OWASP ZAP — DAST

    • Trivy — container security/SCA

    • detect-secrets (Yelp) — secrets scanning

    • KICS (Checkmarx) — IaC security

    • Semgrep — SAST

  • Web content accessibility testing — WAVE (Web Accessibility Evaluation Tool)

Monitoring tools:

  • Monitoring system — Prometheus

  • Data visualization system — Grafana

The overall scope of functional and non-functional testing, as well as the testing methodology (strategy) of the information system, is detailed in the Functional testing section.

3.5. Source code review (Code review)

Code Review is a systematic process of examining program source code used during the development of the information system. This process is aimed not only at error detection but also serves as a crucial stage in software development, enhancing code quality.

3.5.1. Code review process in the context of information system development

  • During the deployment of information system components, a GitOps approach is employed, based on CI/CD processes. One of the key features of this approach, including security aspects, is that Git serves as the sole entry point for making any changes to the system.

  • A developer initially makes changes to their own protected remote VCS repository branch, using the git commit and git push commands.

  • The next step involves creating a Merge Request (MR) to merge changes from the developer’s branch into the master branch of the repository.

  • Subsequently, members of the development team conduct code reviews, which is a collective process. Its purpose is to review the written code to identify errors and provide suggestions for correction or improvement.

  • To merge code into the master branch, at least one approval from the lead developer of the team is required.

  • The merging of changes discussed within the created Merge Request is carried out by an authorized person with appropriate access rights.
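
The developer-side part of this flow can be sketched in shell. Branch and task names are illustrative, and a throwaway local repository stands in for the real remote so the commands can run anywhere:

```shell
# Sketch of the developer-side Git flow; names are hypothetical examples.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "[ABC-100] Initial commit"

# Work happens in the developer's own branch, then is committed and pushed.
git checkout -q -b feature/ABC-123-add-retry
echo "retry logic" > consumer.txt
git add consumer.txt
git commit -q -m "[ABC-123] Add retry logic to the consumer"
# git push origin feature/ABC-123-add-retry   # publish the branch, then open
#                                             # a Merge Request to master
git log --oneline -1
```

The push is commented out here because the sketch has no real remote; in practice it is followed by creating the Merge Request, the team code review, and a merge performed by an authorized person.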

3.5.2. Code refactoring

Code refactoring, a standard methodology, is employed to improve code quality and optimize the code.

Code refactoring is typically conducted in two scenarios:

  • Code refactoring within a code review to address critical errors and improve application functionality.

  • Code refactoring as part of system optimization (non-critical tasks).

Optimization of source code is determined by, but not limited to, the following criteria:

  • Naming;

  • Clean code principles;

  • Performance optimization: RAM, CPU, queries per second, etc.;

  • Code optimization;

  • Simplification of API contracts.

4. Monitoring non-functional requirements compliance

The development of the Platform adheres to the following principles (non-functional requirements):

  • Performance efficiency;

  • Security;

  • Reliability;

  • Portability;

  • Operability;

  • Modifiability;

  • Verifiability;

  • Interoperability.