Enable distributed tracing
To effectively monitor and debug workflows in production, you need to track a workflow instance from the moment it starts, through every internal task, and across the network to external services.
Quarkus Flow provides a lightweight tracing facility that achieves this through two mechanisms:
- Local Tracing (MDC Logging): Emits structured lifecycle logs with Mapped Diagnostic Context (MDC) fields, allowing you to filter logs by instance ID in tools like Kibana, Datadog, or Loki.
- Distributed Tracing (HTTP Headers): Automatically propagates those same instance IDs to downstream services via HTTP headers.
This guide shows how to enable and configure both mechanisms.
1. Enable execution log tracing
The Quarkus Flow execution tracer emits a log line every time a workflow or task starts, completes, or fails.
Tracing is enabled by default in dev and test modes, but disabled by default in prod to avoid flooding your production logs.
To enable it in production, add this to your application.properties:
quarkus.flow.tracing.enabled=true
(If you ever need to explicitly disable it in dev or test, use `%dev.quarkus.flow.tracing.enabled=false`.)
2. Structured logging and MDC fields
To make your tracing logs searchable in centralized logging platforms, you should enable JSON console logging. This automatically extracts the MDC fields injected by Quarkus Flow into queryable JSON attributes.
# Enable JSON console logging
quarkus.log.console.json.enabled=true
If you prefer pattern logging (plain text) over JSON, you must manually reference the MDC keys in your log format string using the `%X{key}` syntax (for example, `%X{quarkus.flow.instanceId}`).
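As an illustration, a pattern-logging configuration that includes the workflow instance ID in every log line might look like the following (a sketch; adapt the surrounding pattern to your own format):

```properties
# Plain-text console format that prints the workflow instance ID via %X{key}
quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] %X{quarkus.flow.instanceId} %s%e%n
```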
MDC keys emitted by the tracer
Every trace log emitted by the engine includes the following MDC fields:
| Key | Example | Description |
|---|---|---|
| `quarkus.flow.instanceId` | `01K9GDCXJVN89V0N4CWVG40R7C` | The unique workflow instance ID. |
| `quarkus.flow.event` | `workflow.started` | The normalized lifecycle event name (e.g., `workflow.started`). |
| | | The exact event timestamp. |
| | | The task name (only present for task-level events). |
| `quarkus.flow.taskPos` | | The JSON pointer to the task’s position in the workflow definition (only present for task-level events). |
Example JSON Output
When JSON logging is enabled, your application will emit structured logs resembling this:
{
"timestamp": "2026-03-12T19:20:59.117Z",
"level": "INFO",
"message": "Workflow id=01K9GDCXJVN89V0N4CWVG40R7C started...",
"mdc": {
"quarkus.flow.event": "workflow.started",
"quarkus.flow.instanceId": "01K9GDCXJVN89V0N4CWVG40R7C"
}
}
Because the `mdc` block is structured, you can easily run a query in your log aggregator like `mdc."quarkus.flow.instanceId" == "01K9GDCXJVN89V0N4CWVG40R7C"` to see the exact linear history of a single workflow execution.
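The same kind of filtering can be done anywhere you can read the raw log stream. As a minimal, dependency-free sketch (the class and method names below are invented for this example, not part of Quarkus Flow), here is how you might select only the lines belonging to one instance, using a regex instead of a full JSON parser for brevity:

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FlowLogFilter {

    // Extracts the quarkus.flow.instanceId value from one JSON log line.
    // A real pipeline would use a JSON parser; a regex keeps this sketch self-contained.
    private static final Pattern INSTANCE_ID =
            Pattern.compile("\"quarkus\\.flow\\.instanceId\"\\s*:\\s*\"([^\"]+)\"");

    static boolean matchesInstance(String logLine, String instanceId) {
        Matcher m = INSTANCE_ID.matcher(logLine);
        return m.find() && m.group(1).equals(instanceId);
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
            "{\"message\":\"Workflow started\",\"mdc\":{\"quarkus.flow.instanceId\":\"01K9GDCXJVN89V0N4CWVG40R7C\"}}",
            "{\"message\":\"Other workflow\",\"mdc\":{\"quarkus.flow.instanceId\":\"OTHER\"}}"
        );
        // Keep only the lines for the instance we are debugging.
        lines.stream()
             .filter(l -> matchesInstance(l, "01K9GDCXJVN89V0N4CWVG40R7C"))
             .forEach(System.out::println); // prints only the first line
    }
}
```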
3. Distributed Tracing via HTTP Headers
Local logs are only half the battle. When your workflow executes an HTTP or OpenAPI task, you need a way to correlate your workflow logs with the logs of the external service.
For idempotency and end-to-end distributed traceability, Quarkus Flow automatically attaches correlation metadata to all outgoing HTTP calls as headers.
When your workflow makes an external call, the downstream service will receive:
- `X-Flow-Instance-Id`: Matches the `quarkus.flow.instanceId` from your MDC logs.
- `X-Flow-Task-Id`: Matches the `quarkus.flow.taskPos` from your MDC logs.
These headers allow downstream services to implement idempotency keys, ensuring that if a workflow retries an HTTP call due to a network timeout, the external service knows it is part of the exact same execution step.
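To make the idempotency idea concrete, here is a hedged sketch of what a downstream service could do with these two header values (the `IdempotencyGuard` class and the sample task ID are invented for this illustration; a production service would persist the keys in a store with a TTL rather than in memory):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotencyGuard {

    // Remembers which (instanceId, taskId) pairs have already been processed.
    private final Map<String, Boolean> processed = new ConcurrentHashMap<>();

    /**
     * Returns true the first time a given workflow step calls us,
     * false when the call is a retry of a step we already handled.
     */
    public boolean firstExecution(String flowInstanceId, String flowTaskId) {
        String key = flowInstanceId + "|" + flowTaskId;
        // putIfAbsent returns null only for the first writer of the key.
        return processed.putIfAbsent(key, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        IdempotencyGuard guard = new IdempotencyGuard();
        // Values would come from the X-Flow-Instance-Id / X-Flow-Task-Id headers.
        String instance = "01K9GDCXJVN89V0N4CWVG40R7C";
        String task = "/do/0/placeOrder"; // hypothetical task position

        System.out.println(guard.firstExecution(instance, task)); // true: process normally
        System.out.println(guard.firstExecution(instance, task)); // false: retry, return cached result
    }
}
```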
You can disable automatic header injection globally if a strict downstream API rejects unrecognized headers:
quarkus.flow.http.client.enable-metadata-propagation=false
4. Log Tracing vs. Messaging Events
It is important to understand the difference between this logging tracer and the lifecycle events emitted over messaging:
- Log Tracing (`quarkus.flow.tracing.enabled`): Writes text/JSON to `stdout`. It is designed for operational observability (log aggregation, debugging, Grafana Loki).
- Messaging Lifecycle (`quarkus.flow.messaging.lifecycle-enabled`): Broadcasts CloudEvents to a Kafka broker. It is designed for system-to-system integration (e.g., triggering another service when a workflow completes).
You can safely enable both simultaneously.
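For instance, a configuration that turns on both channels side by side (using the two properties described above):

```properties
# Operational log tracing to stdout
quarkus.flow.tracing.enabled=true
# Lifecycle CloudEvents over messaging
quarkus.flow.messaging.lifecycle-enabled=true
```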
See also
- Metrics & Prometheus Integration — complement your logs with numeric health metrics.
- Use messaging and events — publish lifecycle events to Kafka as CloudEvents.
- Quarkus Logging guide — full logging configuration reference.