Data flow and Context Management
Workflows in Quarkus Flow operate on a single logical Workflow Context—usually a JSON document or a strongly typed Java record. As the workflow executes, this context is passed from task to task.
However, a task rarely needs the entire workflow context to do its job, and its result rarely needs to overwrite the entire document.
To manage this, the Serverless Workflow specification defines a strict Data Flow model. Every task can define:
- Input: What specific data the task extracts from the workflow context before executing.
- Output: How to format the raw result produced by the task.
- Export: How to merge that formatted result back into the workflow context for the next task.
1. The Core Data Flow Methods
In the Quarkus Flow Java DSL, you control this flow using three optional filtering methods on your tasks. If you don’t use them, the engine uses sensible defaults (the task sees the whole context, and its result is merged directly back).
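The default behavior can be pictured with a small, self-contained sketch. Plain Java maps stand in for the JSON context here; this mimics the semantics described above and is not engine code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class DefaultDataFlow {
    // Illustrative only: with no filters, the task receives the whole
    // context and its result is merged directly back into it.
    public static Map<String, Object> runTask(
            Map<String, Object> context,
            Function<Map<String, Object>, Map<String, Object>> task) {
        Map<String, Object> result = task.apply(context); // task sees the whole context
        Map<String, Object> merged = new HashMap<>(context);
        merged.putAll(result);                            // result merged directly back
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> next = runTask(
                Map.of("customerId", "c-42"),
                in -> Map.of("inventoryStatus", Boolean.TRUE));
        // next now contains both customerId and inventoryStatus
        System.out.println(next);
    }
}
```

The three filter methods below let you override each stage of this default pipeline.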
| Spec Concept | Java DSL Method | Purpose |
|---|---|---|
| Input | `inputFrom` | Filters the workflow context to provide only the exact data the task needs. |
| Output | `outputAs` | Transforms the raw result returned by the task itself (e.g., formatting an HTTP response) before it is exported. |
| Export | `exportAs` | Dictates how the task's output is merged back into the workflow context for downstream tasks. |
2. Using the DSL Filters
You can define these filters using either jq expressions (great for JSON manipulation) or Java Functions (great for type safety and complex logic). You can mix and match them freely within the same workflow.
2.1 inputFrom – Shaping the Task Input
Use inputFrom to give a task a narrowed, focused view of the workflow context. This isolates the task from changes in the broader document.
Using jq:
```java
// The task only receives the customer's id from the workflow context
call("processOrder")
    .inputFrom("{ customerId: .customer.id }");
```
Using Java Functions:
```java
// OrderContext is the current workflow context type
call("processOrder")
    .inputFrom((OrderContext ctx) -> new ProcessRequest(ctx.customerId()), OrderContext.class);
```
2.2 outputAs – Formatting the Task Result
outputAs controls the direct result of the task before it is merged into the workflow context. This is highly useful when a task (like an HTTP call or an AI agent) returns a large payload, but you only care about a specific field.
Using jq:
```java
// The task makes an HTTP call, but we only want the "status" field from the response body
call("checkInventory")
    .outputAs(".body.status");
```
Using Java Functions:
```java
call("checkInventory")
    .outputAs((InventoryResponse response) -> response.isAvailable(), InventoryResponse.class);
```
2.3 exportAs – Merging Back to the Workflow Context
exportAs dictates how the task’s formatted output is appended to or updates the workflow context. This is the data that the next task will see.
Using jq:
```java
// Nests the task's output under a new "inventoryStatus" key in the workflow context
call("checkInventory")
    .exportAs("{ inventoryStatus: . }");
```
Using Java Functions:
```java
call("checkInventory")
    .exportAs((Boolean isAvailable, WorkflowContextData wf) -> {
        // Merge the boolean result back into the typed workflow context
        var currentCtx = (OrderContext) wf.currentData();
        return currentCtx.withInventoryStatus(isAvailable);
    }, Boolean.class);
```
3. Context-Aware Functions (Advanced)
Sometimes, transforming data requires metadata about the workflow execution itself. Quarkus Flow provides context-aware functional interfaces for inputFrom, outputAs, and exportAs.
```java
import java.util.Map;

import io.serverlessworkflow.api.types.func.JavaFilterFunction;
import io.serverlessworkflow.impl.WorkflowContextData;
import io.serverlessworkflow.impl.TaskContextData;

call("auditStep")
    .inputFrom((OrderContext in, WorkflowContextData wf, TaskContextData task) -> Map.of(
        "orderId", in.id(),
        "workflowInstanceId", wf.instanceId(), // Engine metadata
        "taskPosition", task.position().jsonPointer() // e.g. do/0/task
    ), OrderContext.class);
```
- `WorkflowContextData` (`wf`) gives you access to the instance ID, workflow metadata, and the current workflow data state.
- `TaskContextData` (`task`) gives you access to the task's specific execution position, name, and raw unformatted output.
4. Best Practices
- Keep tasks ignorant: Use `inputFrom` heavily. A task should only know about the exact data it needs to execute, not the entire workflow payload.
- Lean on Java Records: If you are building Agentic workflows or complex orchestrations, define your workflow context as a Java Record. Using the typed Java function overloads for `inputFrom` and `exportAs` provides compile-time safety that jq expressions cannot.
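As a sketch of that second practice, a record-based context can expose an immutable "wither" instead of setters. The names `OrderContext` and `withInventoryStatus` match the earlier examples but are otherwise illustrative:

```java
// Hypothetical typed workflow context; records are immutable, so the
// "wither" returns an updated copy rather than mutating shared state.
record OrderContext(String customerId, Boolean inventoryStatus) {
    OrderContext withInventoryStatus(Boolean status) {
        return new OrderContext(customerId, status);
    }
}
```

Because every filter returns a new copy, a mid-workflow failure can never leave the context half-updated.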
For a compact list of all DSL data flow operators, see the Java DSL cheatsheet.
5. Which one should I use?
A practical rule of thumb:
- Use `inputFrom` when:
  - you want to keep the task insulated from changes in the data shape,
  - or you want to pass a small, explicit input record.
- Use `outputAs` when:
  - you want to pipe data between adjacent steps,
  - or you are shaping a payload for events / downstream services.
- Use `exportAs` when:
  - you are deciding what should live in the workflow data context,
  - or you want neat, testable state at the end of the workflow (what Dev UI shows as Output).

In many workflows you will only need `inputFrom` and `outputAs`. `exportAs` is most useful in more advanced pipelines and event-driven flows.
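To tie the three stages together, here is a plain-Java simulation of a single `checkInventory` task with all three filters applied in order. The record and method names are illustrative; in a real workflow the engine wires these stages around the task invocation for you:

```java
import java.util.function.Function;

public class FilterPipeline {
    record OrderContext(String customerId, Boolean inventoryStatus) {}
    record ProcessRequest(String customerId) {}
    record InventoryResponse(boolean available) {}

    // Illustrative composition of the three filter stages around one task.
    public static OrderContext runCheckInventory(
            OrderContext ctx, Function<ProcessRequest, InventoryResponse> task) {
        // inputFrom: narrow the context to just what the task needs
        ProcessRequest input = new ProcessRequest(ctx.customerId());
        // the task itself (e.g., an HTTP call)
        InventoryResponse raw = task.apply(input);
        // outputAs: keep only the field we care about
        Boolean available = raw.available();
        // exportAs: merge the shaped output back into the context
        return new OrderContext(ctx.customerId(), available);
    }

    public static void main(String[] args) {
        OrderContext out = runCheckInventory(
                new OrderContext("c-42", null),
                req -> new InventoryResponse(true));
        System.out.println(out);
    }
}
```

Note that the task only ever sees a `ProcessRequest` and the context only ever absorbs a `Boolean`, which is exactly the isolation the best practices above recommend.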