Registry audit events logging subsystem


1. Overview

The Registry audit events logging subsystem receives and processes messages about significant system events and ensures they are recorded in the audit log for long-term storage and analysis.

2. Subsystem functions

The subsystem logs the following events:

  • Operations on registry data initiated by users during business process execution.

  • Events critical for ensuring system security.

  • General system-level events.

3. Technical design

The following diagram shows the components of the Registry audit events logging subsystem and their interactions with other subsystems in functional scenarios.

[Diagram: audit overview]

The Registry audit events logging subsystem provides an asynchronous API in the form of the Kafka audit-events topic, through which target subsystems publish audit event messages according to a predefined schema. The subsystem saves data to the Audit events operational database using the Kafka Connect API, which supports exactly-once semantics for message processing.
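A minimal sketch of what publishing to the audit-events topic involves: a producer builds a message and validates it against the expected structure before sending. The field names and schema below are hypothetical illustrations, not the platform's actual Avro schema, which is registered in the schema storage service (kafka-schema-registry).

```python
from datetime import datetime, timezone

# Hypothetical minimal schema: field name -> required Python type.
# The real message structure is defined by the Avro schema registered
# in the schema registry service (kafka-schema-registry).
AUDIT_EVENT_SCHEMA = {
    "request_id": str,
    "source_system": str,   # e.g. a service name such as registry-rest-api
    "name": str,            # event name, e.g. a business operation identifier
    "type": str,            # event category, e.g. USER_ACTION or SYSTEM_EVENT
    "timestamp": str,       # ISO-8601 event time
}

def validate_audit_event(event: dict) -> None:
    """Reject messages that do not match the expected structure,
    mirroring the validation the schema registry enforces."""
    for field, expected_type in AUDIT_EVENT_SCHEMA.items():
        if field not in event:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(event[field], expected_type):
            raise TypeError(f"field {field} must be {expected_type.__name__}")

event = {
    "request_id": "example-request-id",
    "source_system": "registry-rest-api",
    "name": "READ_ENTITY",
    "type": "USER_ACTION",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
validate_audit_event(event)  # raises if the message is malformed
# A real producer would now publish `event` to the `audit-events` Kafka topic.
```

In the actual subsystem this validation happens centrally against the registered schema, so producers cannot write structurally invalid messages to the topic.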

Administrators can view audit logs through the Registry analytical reporting subsystem’s web interface as a set of service dashboards created during registry deployment by the Platform and registries deployment and configuration subsystem.

For details on the Registry analytical reporting subsystem’s design, see Registry analytical reporting subsystem.

4. Subsystem components

| Component name | Registry representation | Source | Repository | Function |
|---|---|---|---|---|
| Audit event message schema storage service | kafka-schema-registry | 3rd-party | github:/epam/edp-ddm-kafka-schema-registry | Validation of message structure against the current schema |
| Audit event storage service | kafka-connect-cluster-connect | 3rd-party | github:/epam/edp-ddm-strimzi-kafka-operator | Saving messages to the database |
| Audit events operational database | operational:audit | origin | github:/epam/edp-ddm-registry-postgres/tree/main/platform-db/changesets/audit | A separate database for audit events |

5. List of services subject to audit

| Owner subsystem | Component name | Registry representation |
|---|---|---|
| Registry data management subsystem | Synchronous registry data management service | registry-rest-api |
| Registry data management subsystem | Asynchronous registry data management service | registry-kafka-api |
| Business process management subsystem | Business process history access service | process-history-service-api |
| Business process management subsystem | Business process history logging service | process-history-service-persistence |
| User settings management subsystem | User settings management service | user-settings |
| User notification subsystem | User notification service | ddm-notification-service |
| Registry excerpt generation subsystem | Excerpt management service | excerpt-service-api |
| Registry excerpt generation subsystem | Excerpt generation services | excerpt-worker, excerpt-worker-csv, excerpt-worker-docx |

6. Technology stack

The following technologies were used when designing and developing the subsystem:

7. Subsystem quality attributes

7.1. Security

TLS authentication between the application and the message broker prevents man-in-the-middle attacks. All data in transit is also encrypted using TLS.
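For illustration, a Kafka client configured for mutual TLS typically carries settings like the following. These are standard Kafka client property names; the store paths, passwords, and how secrets are mounted are deployment-specific assumptions, not values from this platform.

```properties
# Encrypt traffic and authenticate both sides with certificates.
security.protocol=SSL
# Trust store with the broker's CA certificate (client verifies the broker).
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=<truststore-password>
# Key store with the client's certificate (broker verifies the client).
ssl.keystore.location=/etc/kafka/secrets/client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
```

With this configuration, an attacker cannot impersonate the broker or the client without a valid certificate, and all topic traffic is encrypted on the wire.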

7.2. Reliability

The overall system reliability is ensured by a number of mechanisms implemented in the subsystem’s components.

  • Kafka (Replication, Fault Tolerance, Message Persistence, Message immutability, Acknowledgment Mechanism).

  • Crunchy PostgreSQL (Replication and Failover, High Availability).

7.3. Scalability

Parallel message processing and the application's stateless design enable horizontal scaling.

7.4. Performance

Service events are created as asynchronous events (Application Events) and do not significantly affect the performance of service scenarios.

7.5. Data integrity

The integrity and immutability of data are guaranteed by the immutability of Kafka messages and by restricting write access to the database.

7.6. Data retention and archiving

Retention and archiving policies are implemented by configuring Kafka's built-in message retention settings and the database backup tools.
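As an example, Kafka's retention is controlled by standard topic-level settings such as the following. The property names are real Kafka topic configs; the values shown are illustrative assumptions, not the platform's defaults.

```properties
# Illustrative topic-level retention settings for the audit-events topic:
retention.ms=31536000000   # time-based retention, ~1 year in milliseconds
retention.bytes=-1         # no size-based limit; rely on time-based retention
```

Such settings can be applied per topic (for example, with the kafka-configs.sh tool), while long-term storage of audit records is handled by the operational database and its backup policy.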