Agentic workflows with LangChain4j
This guide shows how to orchestrate LangChain4j
agents inside Quarkus Flow using the Java DSL.
You will define one or more agents (interfaces) and then compose them as tasks (agent(…))
with optional loops and human-in-the-loop (HITL) steps.
Prerequisites
- A Quarkus application with Quarkus Flow already set up.
- At least one LangChain4j provider dependency on the classpath (Ollama, OpenAI, …).
- Basic familiarity with Quarkus configuration and CDI.
1. Add LangChain4j (choose a backend)
Pick the provider(s) you want; Quarkus will configure them via application properties.
<dependency>
<groupId>io.quarkiverse.langchain4j</groupId>
<artifactId>quarkus-langchain4j-ollama</artifactId>
</dependency>
<dependency>
<groupId>io.quarkiverse.langchain4j</groupId>
<artifactId>quarkus-langchain4j-openai</artifactId>
</dependency>
Note: You can have multiple providers on the classpath. Bind agents to a provider using standard LangChain4j / Quarkus configuration.
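With several providers present, one way to pin each agent to a backend is the named-model configuration from quarkus-langchain4j. The model names below (drafter, critic) are illustrative; check the property layout against your quarkus-langchain4j version:

```properties
# Hypothetical named models: each agent interface can opt into one of them
# via @RegisterAiService(modelName = "drafter") / (modelName = "critic").
quarkus.langchain4j.drafter.chat-model.provider=ollama
quarkus.langchain4j.critic.chat-model.provider=openai
```

Agents without an explicit model name fall back to the default provider configuration.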
2. Define an agent
Agents are plain interfaces with annotations such as @SystemMessage and @UserMessage
and are registered as CDI beans.
For workflows, it is common to accept a memory id and structured parameters.
@RegisterAiService
@ApplicationScoped
@SystemMessage("""
You draft a short, friendly newsletter paragraph.
Return ONLY the final draft text (no extra markup).
""")
public interface DrafterAgent {
// Exactly two parameters: memoryId + one argument (brief)
@UserMessage("Brief:\n{{brief}}")
String draft(@MemoryId String memoryId,
@V("brief") String brief);
}
Key points:
- @RegisterAiService registers the interface as a CDI bean.
- @SystemMessage defines role/instructions and output shape.
- @UserMessage builds the prompt from method arguments.
- Use @MemoryId when the backend supports per-conversation memory.
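Section 3 below also injects a CriticAgent, which this guide never defines. A minimal sketch following the same conventions; the "NEEDS_REVISION:" prefix is chosen to match the jq check used in the workflow, and the exact wording of the contract is up to you:

```java
@RegisterAiService
@ApplicationScoped
@SystemMessage("""
        You review a newsletter draft.
        If it needs changes, answer exactly "NEEDS_REVISION: <short reason>".
        Otherwise answer exactly "OK".
        """)
public interface CriticAgent {

    // Same two-parameter shape as DrafterAgent: memoryId + one argument
    @UserMessage("Draft:\n{{draft}}")
    String critique(@MemoryId String memoryId,
                    @V("draft") String draft);
}
```

Keeping the critic's answer to a fixed prefix makes the downstream startswith(…) branch trivial to evaluate.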
3. Compose agents in a workflow
Create a Flow that calls agents with agent(name, fn, ResultType.class).
You can export results, branch, and loop based on typed outputs.
@ApplicationScoped
public class HelloAgenticWorkflow extends Flow {
@Inject
org.acme.agentic.DrafterAgent drafterAgent;
@Inject
org.acme.agentic.CriticAgent criticAgent;
@Override
public Workflow descriptor() {
return FuncWorkflowBuilder.workflow("hello-agentic")
.tasks(
// Build a single brief string from topic + notes and feed it to the drafter
// (jq-style expression produces a String)
agent("draftAgent", drafterAgent::draft, String.class)
.inputFrom("\"Topic: \" + $.topic + \"\\nNotes: \" + $.notes")
.exportAs("."), // expose the whole draft text to the next step
// Critic evaluates the draft and we persist a normalized review state
agent("criticAgent", criticAgent::critique, String.class)
.outputAs("{ reviewRaw: ., needsRevision: (. | tostring | startswith(\"NEEDS_REVISION:\")) }"),
// If needsRevision == true → loop back to draftAgent; else END
switchWhenOrElse(
".needsRevision",
"draftAgent",
FlowDirectiveEnum.END))
.build();
}
}
What is happening:
- Draft via the first agent (for example, a newsletter drafter).
- Critique the draft via a second agent.
- Optionally emit an event to request human review (emitJson(…)), then listen for a review result (listen(…).outputAs(…)).
- Branch with switchWhenOrElse(…) to either revise (loop back to the drafter) or consume the final output (for example, send an email).
4. Shape data between steps
Use the standard transformations to keep prompts and results clean:
- inputFrom(…) – select what the step consumes (jq or a Java lambda).
- exportAs(…) – expose only part of a result to the next step without committing it to global state.
- outputAs(…) – write a shaped result to the workflow data.
Example (agent → agent piping):
agent("draftAgent", drafterAgent::draft, String.class)
.inputFrom("$.seedPrompt") // read only the seed
.exportAs("$.draft"); // pass only the draft text forward
agent("criticAgent", criticAgent::critique, String.class)
.outputAs((String r) -> Map.of( // persist a structured review
"review", r,
"status", r.startsWith("NEEDS_REVISION:") ? "REVISION" : "OK"
));
5. Add an optional human-in-the-loop gate
You can insert a human approval step using events:
emitJson("org.acme.email.review.required", CriticAgentReview.class);
listen("waitHumanReview", to().one(event("org.acme.newsletter.review.done")))
.outputAs((java.util.Collection<Object> c) -> c.iterator().next());
switchWhenOrElse(
(HumanReview h) -> ReviewStatus.NEEDS_REVISION.equals(h.status()),
"draftAgent", // loop back to the drafter
"sendNewsletter", // or finalize
HumanReview.class
);
This pattern lets you have a drafter → critic → human triad, while still keeping the workflow deterministic and observable.
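The HumanReview and ReviewStatus types used above are not defined in this guide. A minimal sketch of what the review-event payload could look like; the names and fields are illustrative and should match whatever your reviewers actually send back:

```java
// Hypothetical status carried by the "review done" event.
enum ReviewStatus { OK, NEEDS_REVISION }

// Hypothetical payload deserialized from the review event; the workflow's
// switchWhenOrElse(…) predicate only needs status().
record HumanReview(ReviewStatus status, String comments) { }
```

Because the branch predicate receives a typed HumanReview, the workflow stays statically checkable even though the event itself arrives as JSON.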
6. Configure providers
Choose one provider (or both) and configure it via application.properties (or YAML).
quarkus.langchain4j.ollama.base-url=http://localhost:11434
quarkus.langchain4j.ollama.chat-model.model=llama3.1
# Optional: temperature/max-tokens/etc. (provider-specific)
# quarkus.langchain4j.ollama.chat-model.temperature=0.3
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.chat-model.model=gpt-4o-mini
# Optional knobs:
# quarkus.langchain4j.openai.chat-model.temperature=0.2
# quarkus.langchain4j.openai.chat-model.max-tokens=1024
# Example: enable memory if the provider supports it; scope depends on backend
# quarkus.langchain4j.memory.enabled=true
# quarkus.langchain4j.memory.window-size=20
Tip: Keep prompts short and strict in @SystemMessage, with an explicit JSON schema for outputs. This makes downstream workflow tasks easier to type and test.
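For example, the critic could be given a strict JSON contract and a structured return type. This is a sketch: whether the provider maps JSON straight onto the record depends on your quarkus-langchain4j version, so be prepared to parse the raw string in outputAs(…) instead:

```java
@RegisterAiService
@SystemMessage("""
        You review a newsletter draft.
        Reply with ONLY this JSON object, no prose:
        { "needsRevision": <true|false>, "reason": "<short reason or empty>" }
        """)
public interface StructuredCriticAgent {

    @UserMessage("Draft:\n{{draft}}")
    Review critique(@MemoryId String memoryId, @V("draft") String draft);

    // Structured result: downstream tasks branch on needsRevision()
    // without string matching.
    record Review(boolean needsRevision, String reason) { }
}
```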
7. Testing agents and workflows
- Unit-test your agent interface by injecting it directly and calling the method (mock the provider if you want speed or deterministic outputs).
- Unit-test your Flow by injecting the Flow subclass and calling startInstance(…); mock agents to avoid network calls.
- For end-to-end tests, run quarkus:dev (or use Testcontainers) with your chosen provider and exercise the HTTP endpoints or messaging entry points.
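A minimal Flow test along those lines, as a sketch: it assumes quarkus-junit5-mockito is on the test classpath for @InjectMock, and that startInstance(…) is the entry point your Flow base class exposes; adjust the start/await calls to your actual Quarkus Flow API:

```java
@QuarkusTest
class HelloAgenticWorkflowTest {

    @Inject
    HelloAgenticWorkflow workflow;

    // Replace the real agents with deterministic stubs: no network calls.
    @InjectMock
    DrafterAgent drafterAgent;

    @InjectMock
    CriticAgent criticAgent;

    @Test
    void draftIsAcceptedOnFirstPass() {
        Mockito.when(drafterAgent.draft(Mockito.anyString(), Mockito.anyString()))
                .thenReturn("Hello readers!");
        Mockito.when(criticAgent.critique(Mockito.anyString(), Mockito.anyString()))
                .thenReturn("OK");

        // Hypothetical start call; the exact signature depends on your
        // Quarkus Flow version.
        var instance = workflow.startInstance(Map.of("topic", "Quarkus", "notes", "v1"));

        // Assert on the final workflow data here, e.g. that the critic
        // accepted the draft without looping back to the drafter.
    }
}
```

Because the agents are mocked, the branch logic (loop vs END) is exercised deterministically and the test runs without any model backend.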
Troubleshooting
- Output does not parse as expected – tighten your @SystemMessage contract and validate it in a small unit test; then use outputAs(…) to normalize the result into a stable shape.
- Prompt drift – keep @UserMessage minimal, compute heavy strings in Java, and feed only what is needed via inputFrom(…).
- Latency or cost – export narrow fields (exportAs(…)) and cache intermediate results in your workflow data.
- Memory issues – if the provider keeps chat history, ensure you pass a stable @MemoryId (for example, your workflow instance id) or turn memory off for deterministic runs.
See also
- Java DSL cheatsheet — all task providers and transformations (inputFrom, exportAs, outputAs, …).
- CNCF Workflow mapping and concepts — the CNCF Workflow concepts behind call, events, conditions, and loops.
- Use messaging and events — emitting review requests and waiting for approvals via messaging.