Agentic workflows with LangChain4j

This guide shows how to orchestrate LangChain4j agents and HTTP tasks in a single workflow.

We’ll build a small Investment Memo Agent inspired by common enterprise patterns: the workflow calls an HTTP API to fetch market data for a ticker, then asks an AI agent to produce a short, structured memo your backend can safely consume.

High-level flow:

  1. Input: { "ticker": "CSU.TO", "objective": "long-term compounder", "horizon": "5y" }

  2. HTTP task calls a market-data API (internal or public).

  3. The HTTP task’s output and the original input are combined into an InvestmentPrompt.

  4. LangChain4j agent produces a typed memo (InvestmentMemo with summary, stance, keyRisks).

  5. The memo becomes the workflow output (or can be emitted as an event, if you want).
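
For the sample input above, the final memo might look like this (illustrative values only; the fields match the InvestmentMemo schema used throughout this guide):

    {
      "summary": "Disciplined serial acquirer with durable recurring revenue; fundamentals support the stated 5y horizon.",
      "stance": "HOLD",
      "keyRisks": [
        "Valuation already prices in strong execution",
        "Acquisition pace may slow as the company scales"
      ]
    }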

1. Add LangChain4j (choose a backend)

Pick the provider(s) you want; Quarkus will configure them via application properties.

Ollama (local models)
<dependency>
  <groupId>io.quarkiverse.langchain4j</groupId>
  <artifactId>quarkus-langchain4j-ollama</artifactId>
</dependency>
OpenAI (hosted)
<dependency>
  <groupId>io.quarkiverse.langchain4j</groupId>
  <artifactId>quarkus-langchain4j-openai</artifactId>
</dependency>
You can have multiple providers on the classpath. Bind agents to a provider using standard LangChain4j / Quarkus config.
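
For example, with both providers on the classpath, you can route the default chat model via the standard quarkus-langchain4j property (shown here for Ollama):

quarkus.langchain4j.chat-model.provider=ollama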

2. Define the Investment Analyst agent

The agent receives:

  • the ticker and investment context (objective, horizon), and

  • a market data snapshot fetched by the HTTP task,

then returns a typed memo your workflow can pass around safely.

import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;
import io.quarkiverse.langchain4j.RegisterAiService;

/**
 * Simple investment analyst agent.
 * <p>
 * It receives an {@link InvestmentPrompt} (ticker + JSON market snapshot) and returns an {@link InvestmentMemo} with a
 * short recommendation.
 */
@RegisterAiService
@SystemMessage("""
        You are a careful, conservative investment analyst.

        Given:
        - a stock ticker
        - a description of the investment objective
        - an investment horizon
        - and a compact JSON snapshot of market data,

        you MUST respond with a short JSON document that can be mapped to:
          InvestmentMemo {
            String summary;
            String stance;      // BUY, HOLD or AVOID
            List<String> keyRisks;
          }

        Be concise and avoid marketing language.
        """)
public interface InvestmentAnalystAgent {

    /**
     * Analyze the prompt and produce an investment memo.
     *
     * @param memoryId
     *        Conversation / workflow memory id (provided by Quarkus Flow).
     * @param prompt
     *        Ticker, objective, horizon and raw market-data JSON.
     */
    @UserMessage("""
            Ticker: {prompt.ticker}
            Objective: {prompt.objective}
            Horizon: {prompt.horizon}

            Here is the JSON market-data snapshot you should analyze:

            {prompt.marketDataJson}

            Produce an InvestmentMemo JSON as specified above.
            """)
    InvestmentMemo analyse(@MemoryId String memoryId, @V("prompt") InvestmentPrompt prompt);
}

Key points:

  • @RegisterAiService turns the interface into a CDI bean backed by your chosen LLM.

  • @SystemMessage sets strict instructions and the expected output schema (JSON fields like summary, stance, keyRisks).

  • @UserMessage combines:

    • user intent (ticker, objective, horizon), and

    • serialized market data (from the HTTP call).

  • The method returns a strongly-typed DTO such as InvestmentMemo instead of a raw String, which makes downstream tasks easier to test.

Keep the system prompt short and explicit about the response JSON structure. Your workflow can then validate/map that DTO without extra parsing.
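
For reference, both DTOs can be plain Java records; the fields below follow directly from the prompt templates and the schema in the system message:

import java.util.List;

/** Input for the analyst agent: user intent plus the raw market-data JSON. */
public record InvestmentPrompt(String ticker, String objective, String horizon, String marketDataJson) {
}

/** Typed memo produced by the agent; stance is BUY, HOLD or AVOID. */
public record InvestmentMemo(String summary, String stance, List<String> keyRisks) {
}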

3. Call a market-data HTTP API from the workflow

We assume you have a simple REST endpoint that exposes market data:
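
For example, a minimal sketch of such a resource; apart from ticker (which the workflow uses below), the MarketDataSnapshot fields shown here (price, peRatio) are illustrative assumptions:

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/market-data")
public class MarketDataResource {

    @GET
    @Path("/{ticker}")
    @Produces(MediaType.APPLICATION_JSON)
    public MarketDataSnapshot snapshot(@PathParam("ticker") String ticker) {
        // Static data for the example; a real implementation would query a market-data provider
        return new MarketDataSnapshot(ticker, 3250.0, 38.5);
    }
}

public record MarketDataSnapshot(String ticker, double price, double peRatio) {
}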

You can call this service directly from the workflow using the HTTP Func DSL. Instead of just storing the HTTP response in the data tree, we use an outputAs filter to build an InvestmentPrompt that becomes the input for the agent.

How the outputAs filter works here

In this example we use the typed outputAs variant:

get("fetchMarketData", "http://localhost:8081/market-data/{ticker}")
    .outputAs((result, wf, tf) -> {
        final Map<String, Object> input = tf.input().asMap().orElseThrow();
        final String response = tf.rawOutput().asText().orElseThrow();
        return new InvestmentPrompt(
                result.ticker(),
                input.get("objective").toString(),
                input.get("horizon").toString(),
                response);
    }, MarketDataSnapshot.class)
  • result is the typed HTTP output (MarketDataSnapshot) – the deserialized response body.

  • wf is the workflow context (not used here, but available if you need globals).

  • tf is the task context, where:

    • tf.input() is the original task input – here, the same shape as what the user sent when starting the workflow (e.g. {ticker, objective, horizon}).

    • tf.rawOutput() is the raw HTTP output before the filter – here we read it as a String so it can be passed to the agent as JSON.

The lambda returns an InvestmentPrompt, so after this step:

  • the workflow data is now an InvestmentPrompt instance,

  • the next task sees that InvestmentPrompt as its input,

  • you effectively did: “HTTP result + original user input → unified prompt object”.

This is the core pattern for data transformation between tasks: use outputAs to map arbitrary inputs/outputs into a shape that the next step (LLM agent, another HTTP call, event emit, etc.) actually needs.

For HTTP timeouts, logging, proxy, and TLS options, see Configure the HTTP client.

4. Compose HTTP + agent in a single Flow

Now we can wire the HTTP step and the agent into a single Flow subclass.

        return workflow("investment-memo").tasks(
                // 1) Fetch market data via HTTP and turn it into an InvestmentPrompt
                get("fetchMarketData", "http://localhost:8081/market-data/{ticker}").outputAs((result, wf, tf) -> {
                    // This is the original task input, as sent by the user who started the workflow;
                    // it carries the objective and horizon. It could be mapped to a record, but we
                    // read it as a Map here to show how to handle untyped input.
                    final Map<String, Object> input = tf.input().asMap().orElseThrow();
                    // This is the task output before the outputAs filter
                    final String response = tf.rawOutput().asText().orElseThrow();
                    return new InvestmentPrompt(result.ticker(), input.get("objective").toString(),
                            input.get("horizon").toString(), response);
                }, MarketDataSnapshot.class),

                // 2) Call the LLM-backed investment analyst agent
                agent("investmentAnalyst", analyst::analyse, InvestmentPrompt.class)).build();

What this workflow does:

  1. Accepts an input like:

    {
      "ticker": "CSU.TO",
      "objective": "long-term compounder",
      "horizon": "5y"
    }
  2. Runs the HTTP task fetchMarketData, which:

    • calls GET /market-data/{ticker} using the ticker from the input, and

    • uses outputAs to combine:

      • the HTTP output (MarketDataSnapshot), and

      • the original input (objective, horizon)

        into a single InvestmentPrompt.

  3. Calls the InvestmentAnalystAgent with that InvestmentPrompt (the agent takes ticker + objective + horizon + raw market-data JSON).

  4. The agent returns an InvestmentMemo that becomes the workflow data / result (no extra outputAs needed in this simple case).

This pattern shows the typical “agent + tool” combination:

  • HTTP task = deterministic, structured tool (prices, fundamentals).

  • outputAs = data shaper that turns “tool output + user input” into a single prompt object.

  • Agent = judgement + narrative (interpretation, explanation, recommendation).

5. Expose the workflow via REST and Dev UI

You can expose the workflow as a simple JAX-RS endpoint, or just run it from the Flow Dev UI.

5.1 REST resource

@Path("/investments")
public class InvestmentMemoResource {

    @Inject
    InvestmentMemoFlow flow;

    @GET
    @Path("/{ticker}")
    @Produces(MediaType.APPLICATION_JSON)
    public CompletionStage<InvestmentMemo> analyse(@PathParam("ticker") String ticker) {
        return flow.instance(Map.of("ticker", ticker, "objective", "Long-term growth", "horizon", "3–5 years")).start()
                .thenApply(data -> data.as(InvestmentMemo.class).orElseThrow());
    }
}
  • The endpoint accepts the ticker as a path parameter and fills in a default objective and horizon for the request.

  • It injects the InvestmentMemoFlow Flow subclass and starts an instance.

  • It returns the resulting InvestmentMemo to the caller as JSON.

Because the method returns a CompletionStage (here CompletionStage<InvestmentMemo>) rather than blocking, any WorkflowException (e.g. HTTP 4xx/5xx from the market-data API) propagates directly and is mapped to an RFC 7807 / WorkflowError HTTP response. See CompletionStage vs blocking style for details.
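
With the application running on the default Quarkus HTTP port (8080, an assumption for local dev mode), you can exercise the endpoint:

curl http://localhost:8080/investments/CSU.TO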

5.2 Run from Flow Dev UI

In dev mode:

  1. Open Dev UI → Flow → Workflows.

  2. Select the investment-memo workflow.

  3. Provide input JSON such as:

    {
      "ticker": "CSU.TO",
      "objective": "long-term compounder",
      "horizon": "5y"
    }
  4. Click Start workflow.

You’ll see:

  • The Input panel with your investment request.

  • The Output panel with the final InvestmentMemo.

  • In the logs you can inspect:

    • the HTTP call to /market-data/{ticker}, and

    • the agent interaction (if you enable LangChain4j logging).

Combine this with Enable tracing to get MDC-enriched logs per workflow instance.
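
To actually see the agent interaction mentioned above, enable request/response logging for your provider; for the Ollama setup used in this guide:

quarkus.langchain4j.ollama.log-requests=true
quarkus.langchain4j.ollama.log-responses=true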

6. Configuration (Optional)

Example configuration for an Ollama-backed analyst agent and a named HTTP client tuned for your market-data service:

# LangChain4j (Ollama)
quarkus.langchain4j.ollama.base-url=http://localhost:11434
quarkus.langchain4j.ollama.chat-model.model=llama3.1

# Optional: stricter / cheaper behaviour
# quarkus.langchain4j.ollama.chat-model.temperature=0.2
# quarkus.langchain4j.ollama.chat-model.max-tokens=1024

# Named HTTP client for market data
quarkus.flow.http.client.named.market-data.connect-timeout=2000
quarkus.flow.http.client.named.market-data.read-timeout=4000
quarkus.flow.http.client.named.market-data.user-agent=QuarkusFlow/InvestmentMemoDemo
quarkus.flow.http.client.named.market-data.logging.scope=request-response
quarkus.flow.http.client.named.market-data.logging.body-limit=2048

# Route the HTTP task in this workflow to the "market-data" client
quarkus.flow.http.client.workflow.investment-memo.task.fetchMarketData.name=market-data

For more HTTP tuning options (proxy, TLS, compression, redirects) see Configure the HTTP client.

7. Extending the pattern

Once this “agent + HTTP tool” pattern is in place, you can easily extend it:

  • Add a second agent to critique or shorten the memo before returning it (see the sketch after this list).

  • Emit the memo as a CloudEvent using Use messaging and events so other services can react to memo.ready.

  • Replace the market-data API with:

    • an internal pricing engine,

    • a credit-risk service,

    • or any other HTTP/OpenAPI backend.
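
For instance, the critique idea from the first bullet is just one more task appended to the chain. In this fragment, "memoReviewer", the injected reviewer bean, and its review method are hypothetical names:

                agent("investmentAnalyst", analyst::analyse, InvestmentPrompt.class),
                // Hypothetical second agent that critiques and tightens the memo before it is returned
                agent("memoReviewer", reviewer::review, InvestmentMemo.class)).build();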

The key idea is always the same: use workflows to coordinate tools and agents, so your business logic stays testable, observable, and safe to evolve.