In modern distributed systems, data changes come from many sources - API requests, background jobs, scheduled processes, event consumers, admin tools, and even automated scripts. As platforms scale, it becomes increasingly difficult to answer the most essential questions: who changed what, when, and why.
Whether for compliance, security, debugging, support, or product analytics, businesses need a reliable way to trace data modifications across their ecosystem.
A well-designed audit logging system becomes the source of truth for understanding platform behavior. It reduces investigation time, strengthens governance, and ensures confidence that every meaningful change can be explained and trusted - without relying on developers to sprinkle logging code everywhere.
This case study describes how we solved exactly that challenge through a clean, modular, and scalable audit logging architecture.
The client is a rapidly scaling enterprise SaaS platform used by mid-market and large organizations. As their product footprint grew, user activities and system events became increasingly distributed across multiple microservices. This created gaps in traceability and slowed down support workflows.
They needed a unified audit logging solution because:
Different microservices emitted logs in different ways - sometimes through request logs, sometimes through job logs, sometimes not at all. There was no unified source of truth.
The solution needed to work asynchronously, without slowing down production databases or microservices.
Raw database changes lacked application context:
No user ID
No originating API request
No associated background job
No business process detail
The platform needed full CUD (Create/Update/Delete) tracking with:
Before/after values
Table & column-level visibility
Time-based search
Object-level filtering
User-level filtering
Retention & cleanup policies
Each microservice needed the ability to define:
Which tables to track
Which fields to include or exclude
The platform runs multiple environments (dev, staging, UAT, production), so the audit system had to ensure environment isolation.
We designed a solution that automatically captures every meaningful data change, enriches it with business context, and exposes it through a fast, filterable Elasticsearch-backed API.
Below is the end-to-end system flow, followed by the module breakdown.
Whenever an API request, background job, scheduled task, or integration workflow modifies data, it records a transaction context before committing to the database.
This captures details like the acting user, the originating API request or background job, and the business process involved.
This ensures the system never loses the “intent”.
Once the DB commit happens, a real-time change-data-capture (CDC) component observes the change and publishes a compact event describing details such as the affected table, the operation type (insert, update, or delete), and the before/after values.
This requires no custom logging code inside any microservice - the system “sees” all changes automatically.
A dedicated Enricher Service consumes the CDC event and looks up the corresponding transaction context.
It then merges both pieces into a single complete audit record capturing who, what, where, when, and why.
If the context isn’t immediately available, the Enricher uses a safe retry mechanism with exponential backoff and fallbacks to handle delayed or missing data.
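The retry-with-backoff behavior can be sketched as follows. This is a minimal illustration, not the client's implementation: the function names, attempt count, and delay values are all assumptions, and the fallback record shape is hypothetical.

```python
import time

def lookup_context(txn_id, fetch_context, max_attempts=5, base_delay=0.1):
    """Look up the transaction context for a CDC event, retrying with
    exponential backoff if the context row has not landed yet.

    `fetch_context` is any callable that returns the context dict for a
    transaction ID, or None if it is not available yet."""
    for attempt in range(max_attempts):
        context = fetch_context(txn_id)
        if context is not None:
            return context
        # Exponential backoff: 0.1s, 0.2s, 0.4s, ... between attempts
        time.sleep(base_delay * (2 ** attempt))
    # Fallback: emit the audit record without attribution rather than drop it
    return {"txn_id": txn_id, "user_id": None, "source": "unknown"}
```

The key design choice is the fallback: a late or missing context degrades attribution but never loses the underlying change event.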
The enriched event is written to an Elasticsearch index built specifically for audit logs.
Index templates, mappings, and ILM policies ensure fast time-based queries, environment isolation, and automated retention and cleanup.
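To make this concrete, here are illustrative Elasticsearch ILM policy and index template bodies for such an audit index. The index pattern, policy name, retention periods, and field names are assumptions for the sketch, not the client's actual configuration.

```python
# Hypothetical ILM policy: roll the index over while hot, then delete
# old indices once the retention window has passed.
ILM_POLICY = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_age": "7d", "max_primary_shard_size": "50gb"}
                }
            },
            "delete": {"min_age": "180d", "actions": {"delete": {}}},
        }
    }
}

# Hypothetical index template: one pattern per environment keeps
# dev/staging/UAT/production data isolated.
INDEX_TEMPLATE = {
    "index_patterns": ["audit-logs-prod-*"],
    "template": {
        "settings": {"index.lifecycle.name": "audit-logs-policy"},
        "mappings": {
            "properties": {
                "timestamp": {"type": "date"},
                "user_id": {"type": "keyword"},
                "table": {"type": "keyword"},
                "operation": {"type": "keyword"},
                # Store before/after snapshots without indexing every field
                "before": {"type": "object", "enabled": False},
                "after": {"type": "object", "enabled": False},
            }
        },
    },
}
```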
A simple Audit API queries the optimized Elasticsearch index and provides time-based search along with object-level and user-level filtering. This API will power a future UI module for audit exploration.
This module records the “intent” behind every database change before it happens, without slowing down application logic.
Whenever an API request, background job, workflow engine, or integration process modifies data, it writes into the database a small transaction context entry just before committing the database transaction. This ensures every DB change can later be traced back to a user, service, or system action.
This makes the system self-observing and incredibly reliable by providing the critical link needed for “who” and “why”.
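The pattern above can be sketched in a few lines. This example uses SQLite and invented table and column names (`txn_context`, `accounts`) purely for illustration; the point is that the context row and the business change commit in the same atomic transaction.

```python
import json
import sqlite3
import uuid

def update_with_context(conn, user_id, source, new_email, account_id):
    """Write a transaction-context row and the business change in one
    atomic transaction, so CDC output can later be joined back to it."""
    txn_id = str(uuid.uuid4())
    with conn:  # commits both statements together, or neither
        conn.execute(
            "INSERT INTO txn_context (txn_id, user_id, source, detail) "
            "VALUES (?, ?, ?, ?)",
            (txn_id, user_id, source, json.dumps({"action": "update_email"})),
        )
        conn.execute(
            "UPDATE accounts SET email = ? WHERE id = ?",
            (new_email, account_id),
        )
    return txn_id
```

Because both writes share one transaction, a CDC event for the `accounts` update can always be correlated with a committed context row.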
Any change to a configured table - insert, update, delete - is automatically detected and streamed out as an event containing before/after data.
This satisfies the requirement of "Track all DB changes with minimal overhead."
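A compact CDC event of this kind might look like the sketch below. The field names are assumptions, and the changed-column helper shows how before/after snapshots support column-level visibility.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeEvent:
    """Hypothetical shape of the compact event emitted by CDC."""
    table: str
    operation: str                 # "insert" | "update" | "delete"
    txn_id: str                    # links back to the transaction context
    before: Optional[dict] = None  # None for inserts
    after: Optional[dict] = None   # None for deletes

    def changed_columns(self):
        """Columns whose value differs between the before and after images."""
        before, after = self.before or {}, self.after or {}
        return sorted(
            k for k in set(before) | set(after)
            if before.get(k) != after.get(k)
        )
```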
The Enricher merges the raw DB change event with the corresponding transaction context. This produces a fully attributed and enriched audit record: who, what, where, when, why.
This module ensures no change is ever orphaned or missing crucial attribution data.
Once enriched, the audit record is indexed into Elasticsearch so users can search, filter, and explore logs quickly.
It supports time-based search, object-level filtering, user-level filtering, and retention policies - exactly what an audit system needs.
This module provides the speed, searchability, and structure needed for effective audit exploration.
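The kind of filtered query such an API sends to Elasticsearch can be sketched as below; the document field names (`user_id`, `table`, `timestamp`) are assumptions for the example.

```python
def build_audit_query(user_id=None, table=None, start=None, end=None):
    """Build an Elasticsearch query combining user-level, object-level,
    and time-based filters, newest results first."""
    filters = []
    if user_id:
        filters.append({"term": {"user_id": user_id}})
    if table:
        filters.append({"term": {"table": table}})
    if start or end:
        time_range = {}
        if start:
            time_range["gte"] = start
        if end:
            time_range["lte"] = end
        filters.append({"range": {"timestamp": time_range}})
    return {
        # `filter` clauses skip relevance scoring, which suits audit lookups
        "query": {"bool": {"filter": filters}},
        "sort": [{"timestamp": "desc"}],
    }
```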
With this unified audit logging foundation in place, the client is now well-positioned to achieve full visibility and governance across their platform. The system delivers strong technical guarantees while remaining lightweight for developers and non-disruptive to existing services.
Every change can now be reconstructed with complete clarity: who changed what, where, when, and why.
The system captures audit trails without placing responsibility on individual microservices.
Because CDC monitors every committed change at the database level, the system automatically captures actions triggered by API requests, background jobs, scheduled processes, admin tools, and automated scripts. Nothing is missed, even when no API is involved.
Elasticsearch indexing and ILM policies enable fast search at scale alongside automated retention and cleanup.
The solution is designed to support growth without redesign: new services and tables can be onboarded simply by configuring which tables and fields to track.
The organization now has a future-ready audit logging foundation that ties together application behavior and database changes with clarity and consistency. The architecture provides automatic change capture, rich attribution, fast searchability, and controlled retention.
This project demonstrates our ability to deliver elegant, scalable, and intelligence-driven platforms that strengthen trust, transparency, and operational excellence across distributed systems.