Structured Logging
Quarkus Flow can emit all workflow and task lifecycle events as structured JSON logs to stdout. This enables you to export complete workflow execution data to external databases, analytics platforms, or audit systems without coupling your runtime to specific storage technologies.
Overview
Structured logging follows the logs-as-transport pattern: your workflow runtime emits JSON events to stdout, and a log forwarder (like FluentBit or Vector) routes those events to your chosen destination—PostgreSQL, Elasticsearch, S3, Kafka, or any combination.
This approach provides several advantages:
- Zero transport ownership – Your application doesn’t manage database connections, retries, or buffering. The log forwarder handles all transport concerns.
- Flexibility – The same log stream can feed multiple destinations simultaneously (e.g., PostgreSQL for queries + S3 for compliance archives).
- Clear support boundary – If data isn’t reaching your database, the issue is either "logs not being emitted" (your code) or "logs not being forwarded" (infrastructure). No ambiguity.
- Scalability – Log forwarders are designed for high-volume event streaming and can scale independently of your application.
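To make the pattern concrete, here is a minimal sketch of a consumer reading the event stream directly: it parses each line as one JSON event and fans it out to more than one destination. The sink methods are hypothetical placeholders; in production this role belongs to a log forwarder such as FluentBit or Vector, not your application.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.nio.file.Files;
import java.nio.file.Path;

public class EventStreamFanOut {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        // Dev-mode default path; see the configuration section below.
        Path eventFile = Path.of("target/quarkus-flow-events.log");

        try (var lines = Files.lines(eventFile)) {
            lines.forEach(line -> {
                try {
                    // Each line is one complete JSON event.
                    JsonNode event = MAPPER.readTree(line);
                    // The same event can feed multiple destinations.
                    sendToDatabase(event);   // e.g., PostgreSQL for queries
                    sendToArchive(event);    // e.g., S3 for compliance archives
                } catch (Exception e) {
                    System.err.println("Skipping malformed event line: " + e.getMessage());
                }
            });
        }
    }

    // Placeholder sinks; a real deployment delegates transport to the log forwarder.
    private static void sendToDatabase(JsonNode event) { /* ... */ }
    private static void sendToArchive(JsonNode event) { /* ... */ }
}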
Use Cases
- Query APIs – Build GraphQL or REST APIs on top of PostgreSQL to query workflow instances and execution history.
- Analytics – Feed workflow events into data warehouses (BigQuery, Snowflake, Redshift) for business intelligence.
- Compliance auditing – Maintain long-term audit trails with detailed execution history.
- Custom dashboards – Power monitoring UIs with workflow execution data from Elasticsearch.
- Event-driven integrations – Stream events to Kafka for downstream processing.
Configuration
Structured logging is disabled by default. Enable it via configuration:
# Enable structured logging (REQUIRED)
quarkus.flow.structured-logging.enabled=true
# Event filtering (default: all events)
quarkus.flow.structured-logging.events=workflow.*
# Payload inclusion (default: workflow payloads included, task payloads excluded)
quarkus.flow.structured-logging.include-workflow-payloads=true
quarkus.flow.structured-logging.include-task-payloads=false
# Always include full context in error events (default: true)
quarkus.flow.structured-logging.include-error-context=true
# Truncation for large payloads (default: 10KB)
quarkus.flow.structured-logging.payload-max-size=10240
quarkus.flow.structured-logging.truncate-preview-size=1024
# Log level (default: INFO)
quarkus.flow.structured-logging.log-level=INFO
# Timestamp format (default: ISO8601)
quarkus.flow.structured-logging.timestamp-format=iso8601
# For custom format, specify the pattern (java.time.format.DateTimeFormatter)
# quarkus.flow.structured-logging.timestamp-pattern=yyyy-MM-dd'T'HH:mm:ss.SSSXXX
When you enable structured logging, Quarkus Flow automatically configures a separate file handler to write workflow events to a dedicated file. Events are written to target/quarkus-flow-events.log in dev/test mode, and /var/log/quarkus-flow/events.log in production. This ensures clean separation between application logs and event streams. See Integration with quarkus-logging-json for details.
Timestamp Format Configuration
Different log processors and downstream systems have varying requirements for timestamp formats. Quarkus Flow allows you to configure how timestamps are formatted in structured logging events.
# ISO 8601 format (default) - human-readable, widely compatible
quarkus.flow.structured-logging.timestamp-format=iso8601
# Unix epoch seconds with fractional nanoseconds (e.g., 1776807366.427833)
# Best for: PostgreSQL TIMESTAMP WITH TIME ZONE, InfluxDB
quarkus.flow.structured-logging.timestamp-format=epoch-seconds
# Unix epoch milliseconds as long (e.g., 1776807366428)
# Best for: Elasticsearch date fields, Kafka
quarkus.flow.structured-logging.timestamp-format=epoch-millis
# Unix epoch nanoseconds as long (e.g., 1776807366427832969)
# Best for: High-precision time-series databases
quarkus.flow.structured-logging.timestamp-format=epoch-nanos
# Custom format using java.time.format.DateTimeFormatter pattern
quarkus.flow.structured-logging.timestamp-format=custom
quarkus.flow.structured-logging.timestamp-pattern=yyyy-MM-dd'T'HH:mm:ss.SSSXXX
Format Examples:
| Format | Example Value |
|---|---|
| iso8601 | "2026-04-13T14:30:00.123Z" |
| epoch-seconds | 1776807366.427833 |
| epoch-millis | 1776807366428 |
| epoch-nanos | 1776807366427832969 |
| custom | Depends on the configured timestamp-pattern |
The timestamp format applies to all timestamp fields in events: timestamp, startTime, endTime, and lastUpdateTime.
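Downstream consumers must parse whichever representation you pick. The following is a minimal Java sketch of converting each format back to a java.time.Instant; the literal values are the examples from the table above and the class name is illustrative.
import java.math.BigDecimal;
import java.time.Instant;

public class TimestampParsing {

    public static void main(String[] args) {
        // iso8601: an ISO 8601 string
        Instant fromIso = Instant.parse("2026-04-13T14:30:00.123Z");

        // epoch-millis: a long value
        Instant fromMillis = Instant.ofEpochMilli(1776807366428L);

        // epoch-nanos: a long value
        long nanos = 1776807366427832969L;
        Instant fromNanos = Instant.ofEpochSecond(nanos / 1_000_000_000L, nanos % 1_000_000_000L);

        // epoch-seconds: a decimal value with fractional nanoseconds
        BigDecimal seconds = new BigDecimal("1776807366.427833");
        long wholeSeconds = seconds.longValue();
        long fractionalNanos = seconds.remainder(BigDecimal.ONE).movePointRight(9).longValue();
        Instant fromSeconds = Instant.ofEpochSecond(wholeSeconds, fractionalNanos);

        System.out.println(fromIso + " " + fromMillis + " " + fromNanos + " " + fromSeconds);
    }
}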
Event Filtering
Control which events are logged using glob patterns:
# All events (default)
quarkus.flow.structured-logging.events=workflow.*
# Only workflow-level events (no task details)
quarkus.flow.structured-logging.events=workflow.instance.*
# Workflow events + task failures (recommended for most use cases)
quarkus.flow.structured-logging.events=workflow.instance.*,workflow.task.faulted
# Specific events only
quarkus.flow.structured-logging.events=\
workflow.instance.started,\
workflow.instance.completed,\
workflow.instance.faulted
Payload Inclusion Strategy
By default, structured logging captures execution graphs (what executed when) but not task payloads (input/output data). This keeps log volume low while providing enough information for execution visualization.
Default behavior:
- Workflow events: Include input/output (needed for instance queries)
- Task events: Only metadata (taskName, position, status, timing)
- Error events: Always include full context (overrides task payload setting)
This produces ~5KB of logs per workflow (compared to 50-500KB if all task payloads were included).
When to enable task payloads:
# Enable full audit trail with all task input/output
quarkus.flow.structured-logging.include-task-payloads=true
Use this for:
- Compliance requirements mandating complete execution records
- Debugging specific workflows in non-production environments
- Workflows with small payloads where volume isn’t a concern
Large Payload Handling
For agentic workflows with large contexts (conversation history, retrieved documents, etc.), payloads exceeding the configured threshold are automatically truncated:
{
"input": {
"__truncated__": true,
"__originalSize__": 157000,
"__preview__": "First 1KB of data..."
}
}
This prevents overwhelming log systems while preserving metadata about what was truncated.
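A consumer that needs to know whether a payload was truncated can check for these marker fields. Below is a minimal Jackson-based sketch; the class name and handling are illustrative.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TruncationCheck {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        String eventJson = """
                {"input": {"__truncated__": true, "__originalSize__": 157000, "__preview__": "First 1KB of data..."}}
                """;

        JsonNode input = MAPPER.readTree(eventJson).path("input");

        if (input.path("__truncated__").asBoolean(false)) {
            // Only the preview and the original size survive truncation.
            long originalSize = input.path("__originalSize__").asLong();
            String preview = input.path("__preview__").asText();
            System.out.printf("Payload truncated (original %d bytes): %s%n", originalSize, preview);
        } else {
            System.out.println("Full payload available: " + input);
        }
    }
}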
Event Schema
All events follow a consistent JSON schema:
{
"eventType": "workflow.instance.started",
"timestamp": "2026-04-13T14:30:00.123Z", // Format depends on configuration
"instanceId": "550e8400-e29b-41d4-a716-446655440000",
"workflowNamespace": "default",
"workflowName": "greetings",
"workflowVersion": "1.0.0",
...event-specific fields...
}
Timestamp fields can be formatted as ISO 8601 strings, Unix epoch values, or custom formats depending on your timestamp format configuration.
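If you consume events from Java, the shared envelope fields map naturally onto a small record. The following is a minimal Jackson-based sketch, assuming only the common fields shown above; the record and class names are illustrative, and event-specific fields are simply ignored.
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class EnvelopeExample {

    // Only the fields common to every event; event-specific fields are ignored.
    @JsonIgnoreProperties(ignoreUnknown = true)
    record EventEnvelope(String eventType,
                         String instanceId,
                         String workflowNamespace,
                         String workflowName,
                         String workflowVersion) {
    }

    public static void main(String[] args) throws Exception {
        String line = """
                {"eventType":"workflow.instance.started","timestamp":"2026-04-13T14:30:00.123Z",
                 "instanceId":"550e8400-e29b-41d4-a716-446655440000","workflowNamespace":"default",
                 "workflowName":"greetings","workflowVersion":"1.0.0"}
                """;

        EventEnvelope envelope = new ObjectMapper().readValue(line, EventEnvelope.class);
        System.out.println(envelope.workflowName() + " -> " + envelope.eventType());
    }
}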
Workflow Instance Events
- workflow.instance.started – Workflow execution begins
- workflow.instance.completed – Workflow finishes successfully
- workflow.instance.faulted – Workflow fails with error
- workflow.instance.cancelled – Workflow is cancelled
- workflow.instance.suspended – Workflow is suspended (waiting)
- workflow.instance.resumed – Workflow resumes after suspension
- workflow.instance.status.changed – Workflow status changes
Task Events
- workflow.task.started – Task execution begins
- workflow.task.completed – Task finishes successfully
- workflow.task.faulted – Task fails with error
- workflow.task.cancelled – Task is cancelled
- workflow.task.suspended – Task is suspended
- workflow.task.resumed – Task resumes after suspension
- workflow.task.retried – Task is retried after failure
Example Events
Workflow Started (ISO 8601 format):
{
"eventType": "io.serverlessworkflow.workflow.started.v1",
"timestamp": "2026-04-13T14:30:00.123Z",
"instanceId": "550e8400-e29b-41d4-a716-446655440000",
"workflowNamespace": "default",
"workflowName": "greetings",
"workflowVersion": "1.0.0",
"status": "RUNNING",
"startTime": "2026-04-13T14:30:00.123Z",
"input": {
"name": "Alice"
}
}
Workflow Started (epoch-seconds format):
{
"eventType": "io.serverlessworkflow.workflow.started.v1",
"timestamp": 1744642200.123,
"instanceId": "550e8400-e29b-41d4-a716-446655440000",
"workflowNamespace": "default",
"workflowName": "greetings",
"workflowVersion": "1.0.0",
"status": "RUNNING",
"startTime": 1744642200.123,
"input": {
"name": "Alice"
}
}
Workflow Failed:
{
"eventType": "io.serverlessworkflow.workflow.faulted.v1",
"timestamp": "2026-04-13T14:30:05.789Z",
"instanceId": "550e8400-e29b-41d4-a716-446655440000",
"status": "FAULTED",
"endTime": "2026-04-13T14:30:05.789Z",
"error": {
"message": "Service unavailable",
"type": "java.net.ConnectException",
"stackTrace": "..."
},
"input": {
"name": "Alice"
}
}
Task Started (no payloads):
{
"eventType": "io.serverlessworkflow.task.started.v1",
"timestamp": "2026-04-13T14:30:01.000Z",
"taskExecutionId": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
"instanceId": "550e8400-e29b-41d4-a716-446655440000",
"taskName": "callGreetingService",
"taskPosition": "do/0",
"status": "RUNNING",
"startTime": "2026-04-13T14:30:01.000Z"
}
Integration with quarkus-logging-json
When you use quarkus-logging-json in your application, Quarkus Flow automatically configures a separate file handler for structured events to avoid double JSON serialization.
The Problem (and Automatic Solution)
When quarkus-logging-json is enabled, it wraps all log messages in a JSON structure:
{
"timestamp": "2026-04-13T21:00:40.475075-03:00",
"level": "ERROR",
"loggerName": "io.quarkiverse.flow.structuredlogging",
"message": "{\"instanceId\":\"...\",\"eventType\":\"...\"}", // ← JSON string inside JSON
"threadName": "pool-11-thread-1"
}
Notice how the message field contains a JSON string, not a JSON object. This requires log consumers to:
- Parse the outer JSON (from quarkus-logging-json)
- Parse the inner JSON string (our workflow event)
This "double serialization" defeats the purpose of structured logging.
The Solution: Automatic Separate File Handler
Quarkus Flow automatically configures a separate file handler when structured logging is enabled. No manual configuration required!
Default Configuration:
When you enable structured logging:
# Enable Quarkus Flow structured logging
quarkus.flow.structured-logging.enabled=true
quarkus.flow.structured-logging.events=workflow.*
Quarkus Flow automatically:
- Creates a file handler named FLOW_EVENTS
- Writes events to:
  - Dev/Test mode: target/quarkus-flow-events.log
  - Production mode: /var/log/quarkus-flow/events.log
- Uses raw JSON format (%s%n – no timestamps, just the event JSON)
- Prevents events from appearing in the console (no double logging)
Customizing the Configuration:
You can override any of the defaults:
# Custom file path
quarkus.log.handler.file."FLOW_EVENTS".path=/custom/path/workflow-events.log
# Or disable the file handler and use console only (not recommended with quarkus-logging-json)
quarkus.log.handler.file."FLOW_EVENTS".enable=false
quarkus.log.category."io.quarkiverse.flow.structuredlogging".use-parent-handlers=true
Quarkus Flow handles formatting programmatically at runtime. Even if quarkus-logging-json takes over the global loggers, Quarkus Flow automatically bypasses the logging wrappers to enforce raw string formatting for the FLOW_EVENTS file handler.
This guarantees that your event stream remains pure, single-level JSON without any configuration effort on your part.
With the default configuration:
- Console logs: Application logs in JSON format (from quarkus-logging-json)
- Event file: Pure workflow event JSON (one event per line, safe from double-wrapping)
Why Separate Files?
This follows the logs-as-transport pattern correctly:
- Application logs: Diagnostic information for debugging (stdout/stderr)
- Event streams: Structured data for analytics/auditing (dedicated file)
Event streams and diagnostic logs serve different purposes and should be treated separately. Log forwarders can then:
- Parse application logs with the appropriate schema
- Parse workflow events as pure JSON
- Route each to different destinations (e.g., Elasticsearch for logs, PostgreSQL for events)
Automatic Detection and Configuration
Quarkus Flow automatically detects quarkus-logging-json at build time and:
- Emits an informational message about the auto-configured file handler
- Shows the default file path being used
- Reminds you that the path can be customized
Check your build logs for:
INFO [io.qua.flo.dep.FlowProcessor] Quarkus Flow structured logging file handler auto-configured.
Events will be written to: target/quarkus-flow-events.log
(override with quarkus.log.handler.file."FLOW_EVENTS".path)
Log Forwarder Integration
FluentBit Example
FluentBit is a lightweight, high-performance log forwarder. Here’s a basic configuration to route structured logs to PostgreSQL:
[INPUT]
Name tail
Path /var/log/quarkus-flow/events.log # Default production path
Parser json
Tag flow.events
[FILTER]
Name modify
Match flow.events
Add kubernetes.namespace ${K8S_NAMESPACE}
Add kubernetes.pod ${K8S_POD_NAME}
[OUTPUT]
Name pgsql
Match flow.events
Host postgres.database.svc
Port 5432
User flowuser
Password ${DB_PASSWORD}
Database workflow_data
Table workflow_events
Timestamp_Key timestamp
Production Recommendations
What to Log
Recommended (default):
- All workflow-level events (workflow.instance.*)
- Task failures (workflow.task.faulted)
This captures complete workflow state while keeping volume low.
Optional (high-volume):
- All task events (workflow.task.*) – Only if you need complete task execution history or are debugging specific workflows.
Log Rotation
Configure log rotation to prevent disk fill:
quarkus.log.handler.file."FLOW_EVENTS".rotation.max-file-size=100M
quarkus.log.handler.file."FLOW_EVENTS".rotation.max-backup-index=7
quarkus.log.handler.file."FLOW_EVENTS".rotation.file-suffix=.yyyy-MM-dd
quarkus.log.handler.file."FLOW_EVENTS".rotation.rotate-on-boot=true
Retention Strategy
- Active workflows: Hot storage (PostgreSQL/Redis)
- Completed workflows (<30 days): Warm storage (PostgreSQL)
- Completed workflows (>30 days): Cold storage (S3/object storage)
- Completed workflows (>1 year): Archive or delete (configurable by compliance needs)
Implement this via your log forwarder’s routing rules or database policies.
Performance Impact
Structured logging is designed to be lightweight:
- CPU overhead: <1% (JSON serialization is fast, truncation is efficient)
- Memory overhead: Negligible (events are streamed, not buffered)
- Log volume:
  - Default (workflow + task failures): ~5KB per workflow
  - With task payloads: ~50-500KB per workflow (depends on data size)
The logs-as-transport pattern ensures your application performance isn’t affected by database connectivity issues or backpressure.
Comparison with Custom Listeners
If you’re considering writing a custom listener for audit logging or data export, structured logging may be a simpler alternative:
| Approach | Pros | Cons |
|---|---|---|
| Structured Logging | ✅ No code required | ⚠️ Eventual consistency (log → database delay) |
| Custom Listener | ✅ Synchronous writes | ❌ You own database connections |
For most use cases, structured logging is recommended. Reserve custom listeners for scenarios requiring synchronous writes or complex business logic.
See Also
- Custom Execution Listeners – Write your own listeners for advanced use cases
- Metrics & Prometheus – Monitor workflow performance
- Distributed Tracing – Debug cross-service workflow execution