Server Architecture
The Data Nadhi Server is the main entry point for all logs and events sent from SDKs.
It validates, enriches, and routes log data into your configured pipelines using Temporal for async processing.
The system is built to be stateless, heavily cached, and fault-tolerant.
[GitHub Repository]: data-nadhi-server
Overview
Core Responsibilities
- Authenticate SDK requests using org-level secrets.
- Fetch and validate project and pipeline configs.
- Use extensive caching to minimize MongoDB queries.
- Enqueue data into Temporal workflows for async processing.
- Return structured responses for synchronous validation.
- Push async workflow failures into MinIO for post-processing and debugging.
Extensive Use of Caching
Caching is a huge part of the server architecture.
Almost every data access path — like fetching org secrets, project configs, or pipeline definitions — gets served from cache first.
When there's a cache miss:
- The data is fetched from MongoDB.
- It is decrypted or transformed if needed.
- It is written back to the cache with a defined TTL.
This read-through pattern (fetch on miss, then write the result back to the cache) ensures:
- Database calls are minimal.
- Secrets and configs are instantly accessible.
- Server latency stays consistently low, even under high load.
In short:
Every piece of reusable data flows through the cache before hitting the database.
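As a rough sketch of this pattern, here is what a read-through lookup could look like, assuming a Redis-backed cache and MongoDB via pymongo. The helper name `get_pipeline_config`, the collection and field names, and the TTL value are illustrative assumptions, not the server's actual code; the cache key follows the format listed in the table further below.

```python
import json

import redis
from pymongo import MongoClient

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
db = MongoClient("mongodb://localhost:27017")["datanadhi"]

CACHE_TTL_SECONDS = 300  # illustrative; the real TTL is a deployment choice

def get_pipeline_config(org_id: str, project_id: str, pipeline_code: str) -> dict | None:
    """Read-through lookup: serve from cache first, fall back to MongoDB on a miss."""
    key = f"datanadhiserver:org:{org_id}:prj:{project_id}:plc:{pipeline_code}"

    # 1. Try the cache first.
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # 2. Cache miss: fetch the config from MongoDB.
    doc = db["pipelines"].find_one(
        {"orgId": org_id, "projectId": project_id, "pipelineCode": pipeline_code},
        {"_id": 0},
    )
    if doc is None:
        return None

    # 3. Write it back to the cache with a TTL so later requests skip Mongo.
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(doc))
    return doc
```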
Request Processing Flow
The processing is divided into two main parts — Sync Flow (validation and acknowledgment) and Async Flow (background processing via Temporal).
Sync Flow
- Validate API Key — Extract and verify the org secret. See API Key Management for more info.
- Attach Identifiers — Add `org_id` and `project_id` to the request.
- Fetch Pipeline Config — Resolve using `pipelineCode`, `org_id`, and `project_id` from cache or Mongo.
- Validate Pipeline:
  - Return 401 if the API key is invalid.
  - Return 400 if the payload or `pipelineCode` is missing.
  - Return 404 if the pipeline isn't found.
  - Return 400 if the pipeline is inactive.
  - Return 500 if any unexpected internal error happens.
- Generate Message ID — On success, send back `200 OK`.
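To make the sync flow concrete, here is a minimal sketch as a FastAPI-style handler. The framework choice, the `/v1/logs` route, and the `validate_api_key` helper are assumptions for illustration only (`get_pipeline_config` reuses the caching sketch above); the actual server may differ.

```python
import json
import uuid

import redis
from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def validate_api_key(api_key: str | None) -> dict | None:
    """Hypothetical helper: resolve an API key via its cache entry (see key table below)."""
    if not api_key:
        return None
    cached = cache.get(f"datanadhiserver:apikey:{api_key}")
    return json.loads(cached) if cached else None

@app.post("/v1/logs")
async def ingest_log(request: Request, x_api_key: str | None = Header(default=None)):
    # 1. Validate API Key -> 401 on failure.
    identity = validate_api_key(x_api_key)
    if identity is None:
        raise HTTPException(status_code=401, detail="Invalid API key")

    # 2. Attach Identifiers resolved from the key.
    org_id, project_id = identity["org_id"], identity["project_id"]

    body = await request.json()
    pipeline_code = body.get("pipelineCode")
    if not body or not pipeline_code:
        raise HTTPException(status_code=400, detail="Missing payload or pipelineCode")

    # 3. Fetch Pipeline Config from cache or Mongo (helper from the caching sketch).
    pipeline = get_pipeline_config(org_id, project_id, pipeline_code)
    if pipeline is None:
        raise HTTPException(status_code=404, detail="Pipeline not found")
    if not pipeline.get("active", False):
        raise HTTPException(status_code=400, detail="Pipeline is inactive")

    # 4. Generate Message ID and acknowledge with 200 OK.
    return {"messageId": str(uuid.uuid4()), "status": "accepted"}
```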
Async Flow
- Generate Metadata — Create structured log metadata for processing.
- Generate Workflow ID — Create a unique ID for each pipeline execution.
- Resolve Processor ID — Pick the Temporal task queue ID based on priority: `pipeline-level > project-level > org-level > default`.
- Push to Temporal Queue — Enqueue workflow execution into Temporal for downstream processing.
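A sketch of this hand-off using the Temporal Python SDK (`temporalio`). The workflow name `ProcessLogWorkflow`, the `processorId` field, and the default queue name are assumptions; only the priority order comes from the steps above.

```python
import uuid

from temporalio.client import Client

def resolve_task_queue(pipeline: dict, project: dict, org: dict) -> str:
    # Precedence: pipeline-level > project-level > org-level > default.
    # The "processorId" field name is an assumption for this sketch.
    return (
        pipeline.get("processorId")
        or project.get("processorId")
        or org.get("processorId")
        or "datanadhi-default"
    )

async def enqueue(payload: dict, pipeline: dict, project: dict, org: dict) -> str:
    client = await Client.connect("localhost:7233")

    # Unique workflow ID per pipeline execution.
    workflow_id = f"{pipeline['pipelineCode']}-{uuid.uuid4()}"

    # Hand off to Temporal; downstream workers own the processing from here.
    await client.start_workflow(
        "ProcessLogWorkflow",  # hypothetical workflow name
        payload,
        id=workflow_id,
        task_queue=resolve_task_queue(pipeline, project, org),
    )
    return workflow_id
```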
Exception Handling
The server uses a two-tiered error handling approach:
Synchronous Errors
- These happen during validation or metadata generation (before handing off to Temporal).
- The SDK immediately gets an error response (e.g., `401`, `400`, `404`, `500`).
- No retries happen at this stage.
Asynchronous Errors
- Once the workflow is handed off to Temporal, the SDK isn't involved anymore.
- Any failures during async processing (such as destination connection errors or transformation issues) are captured and logged to MinIO.
- These logs are used later for debugging and analysis.
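As an illustration of the MinIO capture step, a sketch using the `minio` Python client. The endpoint, bucket name, object layout, and failure-record schema are all assumptions for the sketch.

```python
import io
import json
from datetime import datetime, timezone

from minio import Minio

minio_client = Minio(
    "minio.internal:9000",  # illustrative endpoint
    access_key="...",
    secret_key="...",
    secure=False,
)

def log_workflow_failure(workflow_id: str, stage: str, error: Exception) -> None:
    """Persist an async failure record for later debugging and analysis."""
    record = {
        "workflowId": workflow_id,
        "stage": stage,  # e.g. "destination" or "transform"
        "error": str(error),
        "failedAt": datetime.now(timezone.utc).isoformat(),
    }
    body = json.dumps(record).encode("utf-8")

    # One object per failure, keyed by workflow ID (hypothetical layout).
    minio_client.put_object(
        "workflow-failures",  # hypothetical bucket
        f"{workflow_id}.json",
        io.BytesIO(body),
        length=len(body),
        content_type="application/json",
    )
```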
Data Access and Structure
Data is accessed through well-defined keys that uniquely represent each resource:
| Purpose | Key Format |
|---|---|
| Organisation secret | `datanadhiserver:org:<orgId>:secret` |
| Pipeline by `pipelineCode` | `datanadhiserver:org:<orgId>:prj:<projectId>:plc:<pipelineCode>` |
| API key validation metadata | `datanadhiserver:apikey:<apiKey>` |
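If it helps to see the key formats in code, a few hypothetical helpers mirroring the table (these builders are illustrative, not part of the server's API):

```python
def org_secret_key(org_id: str) -> str:
    return f"datanadhiserver:org:{org_id}:secret"

def pipeline_key(org_id: str, project_id: str, pipeline_code: str) -> str:
    return f"datanadhiserver:org:{org_id}:prj:{project_id}:plc:{pipeline_code}"

def api_key_metadata_key(api_key: str) -> str:
    return f"datanadhiserver:apikey:{api_key}"
```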
Summary
The Data Nadhi Server works as a secure, high-performance orchestrator between SDKs and the Temporal processing engine.
It achieves speed and reliability by:
- Serving all reads and writes through the cache.
- Validating everything synchronously before workflow execution.
- Offloading heavy processing to Temporal.
- Safely handling async failures through MinIO.
This design gives you consistent behavior, low latency, and solid failure isolation across all log ingestion workflows.