Agentic workflows with LangChain4j

This guide shows how to orchestrate LangChain4j agents inside Quarkus Flow using the Java DSL. You will define one or more agents (interfaces) and then compose them as tasks (agent(…​)) with optional loops and human-in-the-loop (HITL) steps.

Prerequisites

  • A Quarkus application with Quarkus Flow already set up.

  • At least one LangChain4j provider dependency on the classpath (Ollama, OpenAI, …).

  • Basic familiarity with Quarkus configuration and CDI.

1. Add LangChain4j (choose a backend)

Pick the provider(s) you want; Quarkus will configure them via application properties.

Ollama (local models)
<dependency>
  <groupId>io.quarkiverse.langchain4j</groupId>
  <artifactId>quarkus-langchain4j-ollama</artifactId>
</dependency>
OpenAI (hosted)
<dependency>
  <groupId>io.quarkiverse.langchain4j</groupId>
  <artifactId>quarkus-langchain4j-openai</artifactId>
</dependency>
You can have multiple providers on the classpath. Bind agents to a provider using standard LangChain4j / Quarkus configuration.
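
If more than one provider is on the classpath, you can point an individual agent at a named model configuration. A hedged sketch (the model name critic is illustrative; check the Quarkus LangChain4j documentation for the exact property names your version supports):

```properties
# Hypothetical: bind the named model "critic" to the OpenAI provider
quarkus.langchain4j.critic.chat-model.provider=openai
```

The agent then opts in with @RegisterAiService(modelName = "critic"); agents without a model name use the default configuration.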

2. Define an agent

Agents are plain interfaces annotated with @SystemMessage and @UserMessage; @RegisterAiService turns them into CDI beans. For workflows, it is common to accept a memory id plus structured parameters.

@RegisterAiService
@ApplicationScoped
@SystemMessage("""
        You draft a short, friendly newsletter paragraph.
        Return ONLY the final draft text (no extra markup).
        """)
public interface DrafterAgent {

    // Exactly two parameters: memoryId + one argument (brief)
    @UserMessage("Brief:\n{{brief}}")
    String draft(@MemoryId String memoryId,
            @V("brief") String brief);
}

Key points:

  • @RegisterAiService registers the interface as a CDI bean.

  • @SystemMessage defines role/instructions and output shape.

  • @UserMessage builds the prompt from method arguments.

  • Use @MemoryId when the backend supports per-conversation memory.
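
The workflow in the next section also injects a CriticAgent, which is not shown above. A minimal sketch of what it could look like, in the same style (the NEEDS_REVISION: prefix is an assumed convention that matches the branching check used later):

```java
// Hypothetical companion agent; the "NEEDS_REVISION:" prefix is a convention
// this guide assumes so the workflow can branch on the critic's verdict.
@RegisterAiService
@ApplicationScoped
@SystemMessage("""
        You review a newsletter draft.
        If it needs changes, reply starting with "NEEDS_REVISION:" followed by your notes.
        Otherwise return ONLY the approved draft text.
        """)
public interface CriticAgent {

    @UserMessage("Draft:\n{{draft}}")
    String critique(@MemoryId String memoryId,
            @V("draft") String draft);
}
```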

3. Compose agents in a workflow

Create a Flow that calls agents with agent(name, fn, ResultType.class). You can export results, branch, and loop based on typed outputs.

@ApplicationScoped
public class HelloAgenticWorkflow extends Flow {

    @Inject
    org.acme.agentic.DrafterAgent drafterAgent;
    @Inject
    org.acme.agentic.CriticAgent criticAgent;

    @Override
    public Workflow descriptor() {
        return FuncWorkflowBuilder.workflow("hello-agentic")
                .tasks(
                        // Build a single brief string from topic + notes and feed it to the drafter
                        // (jq-style expression produces a String)
                        agent("draftAgent", drafterAgent::draft, String.class)
                                .inputFrom("\"Topic: \" + .topic + \"\\nNotes: \" + .notes")
                                .exportAs("."), // expose the whole draft text to the next step

                        // Critic evaluates the draft and we persist a normalized review state
                        agent("criticAgent", criticAgent::critique, String.class)
                                .outputAs("{ reviewRaw: ., needsRevision: (. | tostring | startswith(\"NEEDS_REVISION:\")) }"),

                        // If needsRevision == true → loop back to draftAgent; else END
                        switchWhenOrElse(
                                ".needsRevision",
                                "draftAgent",
                                FlowDirectiveEnum.END))
                .build();
    }
}

What is happening:

  1. Draft via the first agent (for example, a newsletter drafter).

  2. Critique the draft via a second agent.

  3. Optionally emit an event to request human review (emitJson(…​)), then listen for a review result (listen(…​).outputAs(…​)).

  4. Branch with switchWhenOrElse(…​) to either revise (loop back to the drafter) or consume the final output (for example, send an email).

4. Shape data between steps

Use the standard transformations to keep prompts and results clean:

  • inputFrom(…​) – select what the step consumes (jq or Java lambda).

  • exportAs(…​) – expose only part of a result to the next step without committing to global state.

  • outputAs(…​) – write a shaped result to the workflow data.

Example (agent → agent piping):

agent("draftAgent", drafterAgent::draft, String.class)
  .inputFrom(".seedPrompt")            // read only the seed
  .exportAs(".draft");                 // pass only the draft text forward

agent("criticAgent", criticAgent::critique, String.class)
  .outputAs(r -> Map.of(               // persist a structured review
      "review", r,
      "status", r.needsRevision() ? "REVISION" : "OK"
  ));
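
The jq normalization shown for the critic can also be done in plain Java. A sketch, assuming the critic signals revisions with a NEEDS_REVISION: prefix (the CriticAgentReview record name is illustrative):

```java
public class ReviewShaping {

    // Illustrative record mirroring the { reviewRaw, needsRevision } shape
    // produced by the jq expression in the workflow above
    record CriticAgentReview(String reviewRaw, boolean needsRevision) {}

    static CriticAgentReview normalize(String raw) {
        // Same check as the jq: startswith("NEEDS_REVISION:")
        return new CriticAgentReview(raw, raw.startsWith("NEEDS_REVISION:"));
    }

    public static void main(String[] args) {
        System.out.println(normalize("NEEDS_REVISION: tighten the intro").needsRevision()); // true
        System.out.println(normalize("Looks good!").needsRevision());                       // false
    }
}
```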

5. Add an optional human-in-the-loop gate

You can insert a human approval step using events:

emitJson("org.acme.newsletter.review.required", CriticAgentReview.class);

listen("waitHumanReview", to().one(event("org.acme.newsletter.review.done")))
  .outputAs((java.util.Collection<Object> c) -> c.iterator().next());

switchWhenOrElse(
  (HumanReview h) -> ReviewStatus.NEEDS_REVISION.equals(h.status()),
  "draftAgent",            // loop back to the drafter
  "sendNewsletter",        // or finalize
  HumanReview.class
);

This pattern lets you have a drafter → critic → human triad, while still keeping the workflow deterministic and observable.
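
The HumanReview and ReviewStatus types used by the switch predicate are not defined in this guide; a minimal sketch of what they might look like:

```java
public class HumanReviewTypes {

    // Illustrative payload for the review-done event
    enum ReviewStatus { OK, NEEDS_REVISION }

    record HumanReview(ReviewStatus status, String comment) {}

    // The predicate used in switchWhenOrElse above, as a plain method
    static boolean needsRevision(HumanReview h) {
        return ReviewStatus.NEEDS_REVISION.equals(h.status());
    }

    public static void main(String[] args) {
        HumanReview h = new HumanReview(ReviewStatus.NEEDS_REVISION, "shorter please");
        System.out.println(needsRevision(h)); // true
    }
}
```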

6. Configure providers

Configure the provider(s) you picked via application.properties (or YAML).

Ollama (local)
quarkus.langchain4j.ollama.base-url=http://localhost:11434
quarkus.langchain4j.ollama.chat-model.model=llama3.1
# Optional: temperature/max-tokens/etc. (provider-specific)
# quarkus.langchain4j.ollama.chat-model.temperature=0.3
OpenAI
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.chat-model.model=gpt-4o-mini
# Optional knobs:
# quarkus.langchain4j.openai.chat-model.temperature=0.2
# quarkus.langchain4j.openai.chat-model.max-tokens=1024
Memory (optional, if used by your agent)
# Example: enable memory if the provider supports it; scope depends on backend
# quarkus.langchain4j.memory.enabled=true
# quarkus.langchain4j.memory.window-size=20
Keep prompts short and strict in @SystemMessage with an explicit JSON schema for outputs. This makes downstream workflow tasks easier to type and test.

7. Testing agents and workflows

  • Unit-test your agent interface by injecting it directly and calling the method (mock the provider if you want speed or deterministic outputs).

  • Unit-test your Flow by injecting the Flow subclass and calling startInstance(…​); mock agents to avoid network calls.

  • For end-to-end tests, run quarkus:dev (or use Testcontainers) with your chosen provider and exercise the HTTP endpoints or messaging entry points.
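
Because agents are plain interfaces, a deterministic lambda stub is often enough for unit tests, with no mocking library required. A sketch using a simplified stand-in for the DrafterAgent signature:

```java
public class DrafterStubExample {

    // Simplified stand-in for the DrafterAgent signature (annotations omitted;
    // it is a plain interface, so a lambda is a valid test double)
    interface Drafter {
        String draft(String memoryId, String brief);
    }

    static String runStub(String brief) {
        // Deterministic stub: no network call, no model
        Drafter stub = (memoryId, b) -> "Draft about: " + b;
        return stub.draft("wf-42", brief);
    }

    public static void main(String[] args) {
        System.out.println(runStub("Quarkus Flow release")); // Draft about: Quarkus Flow release
    }
}
```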

Troubleshooting

  • Output does not parse as expected – tighten your @SystemMessage contract and validate it in a small unit test; then use outputAs(…​) to normalize the result into a stable shape.

  • Prompt drift – keep @UserMessage minimal, compute heavy strings in Java, and feed only what is needed via inputFrom(…​).

  • Latency or cost – export narrow fields (exportAs(…​)) and cache intermediate results in your workflow data.

  • Memory issues – if the provider keeps chat history, ensure you pass a stable @MemoryId (for example, your workflow instance id) or turn memory off for deterministic runs.

See also