Agentic workflows with LangChain4j
This guide shows how to orchestrate LangChain4j agents and HTTP tasks in a single workflow.
We’ll build a small Investment Memo Agent inspired by common enterprise patterns: the workflow calls an HTTP API to fetch market data for a ticker, then asks an AI agent to produce a short, structured memo your backend can safely consume.
High-level flow:

- Input: `{ "ticker": "CSU.TO", "objective": "long-term compounder", "horizon": "5y" }`
- An HTTP task calls a market-data API (internal or public).
- The HTTP task's output and the original input are combined into an `InvestmentPrompt`.
- The LangChain4j agent produces a typed memo (`InvestmentMemo` with summary, stance, risks).
- The memo becomes the workflow output (or can be emitted as an event, if you want).
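The guide refers to a couple of small DTOs without defining them; a minimal sketch of what they might look like in your project (field names follow the agent's declared schema and the prompts below; the exact shapes are assumptions, adapt them to your codebase):

```java
import java.util.List;

// Hypothetical DTO sketches used throughout this guide.

/** Combined agent input: user intent plus the raw market-data JSON. */
record InvestmentPrompt(String ticker, String objective, String horizon, String marketDataJson) {}

/** Structured memo the agent returns; safe to map and validate downstream. */
record InvestmentMemo(String summary, String stance, List<String> keyRisks) {}
```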
1. Add LangChain4j (choose a backend)
Pick the provider(s) you want; Quarkus will configure them via application properties.
<dependency>
<groupId>io.quarkiverse.langchain4j</groupId>
<artifactId>quarkus-langchain4j-ollama</artifactId>
</dependency>
<dependency>
<groupId>io.quarkiverse.langchain4j</groupId>
<artifactId>quarkus-langchain4j-openai</artifactId>
</dependency>
| You can have multiple providers on the classpath. Bind agents to a provider using standard LangChain4j / Quarkus config. |
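For example, with both extensions on the classpath you might pin the default chat model to one provider. Property names are assumed from the standard quarkus-langchain4j configuration; verify them against your extension version:

```properties
# Assumed quarkus-langchain4j properties; check your version's config reference
quarkus.langchain4j.chat-model.provider=ollama
quarkus.langchain4j.ollama.chat-model.model=llama3.1
```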
2. Define the Investment Analyst agent
The agent receives:

- the ticker and investment context (objective, horizon), and
- a market data snapshot fetched by the HTTP task,

then returns a typed memo your workflow can pass around safely.
/**
* Simple investment analyst agent.
* <p>
* It receives an {@link InvestmentPrompt} (ticker + JSON market snapshot) and returns an {@link InvestmentMemo} with a
* short recommendation.
*/
@RegisterAiService
@SystemMessage("""
You are a careful, conservative investment analyst.
Given:
- a stock ticker
- a description of the investment objective
- an investment horizon
- and a compact JSON snapshot of market data,
you MUST respond with a short JSON document that can be mapped to:
InvestmentMemo {
String summary;
String stance; // BUY, HOLD or AVOID
List<String> keyRisks;
}
Be concise and avoid marketing language.
""")
public interface InvestmentAnalystAgent {
/**
* Analyze the prompt and produce an investment memo.
*
* @param memoryId
* Conversation / workflow memory id (provided by Quarkus Flow).
* @param prompt
* Ticker, objective, horizon and raw market-data JSON.
*/
@UserMessage("""
Ticker: {prompt.ticker}
Objective: {prompt.objective}
Horizon: {prompt.horizon}
Here is the JSON market-data snapshot you should analyze:
{prompt.marketDataJson}
Produce an InvestmentMemo JSON as specified above.
""")
InvestmentMemo analyse(@MemoryId String memoryId, @V("prompt") InvestmentPrompt prompt);
}
Key points:

- `@RegisterAiService` turns the interface into a CDI bean backed by your chosen LLM.
- `@SystemMessage` sets strict instructions and the output schema (JSON fields like `summary`, `stance`, `keyRisks`).
- `@UserMessage` combines:
  - user intent (ticker, objective, horizon), and
  - serialized market data (from the HTTP call).
- The method returns a strongly typed DTO such as `InvestmentMemo` instead of a raw `String`, which makes downstream tasks easier to test.
| Keep the system prompt short and explicit about the response JSON structure. Your workflow can then validate/map that DTO without extra parsing. |
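Since LLM output can still drift, a hypothetical post-processing step can validate the memo before trusting it downstream (a sketch; `MemoValidator` is not part of the framework, the schema simply promises `BUY`, `HOLD` or `AVOID`):

```java
import java.util.Set;

// Hypothetical validator: reject memos whose stance is outside the schema.
final class MemoValidator {
    private static final Set<String> STANCES = Set.of("BUY", "HOLD", "AVOID");

    static void require(String stance) {
        if (!STANCES.contains(stance)) {
            throw new IllegalStateException("Unexpected stance: " + stance);
        }
    }
}
```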
3. Call a market-data HTTP API from the workflow
We assume you have a simple REST endpoint that exposes market data:

- Returns JSON like: `{ "ticker": "CSU.TO", "price": 3170.25, "pe": 28.4, "dividendYield": 0.003, "currency": "CAD" }`
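A matching snapshot type for that payload might look like this (a sketch; the field names mirror the JSON keys above so the HTTP task can bind the response body to it):

```java
// Hypothetical record mirroring the JSON keys of the market-data response.
record MarketDataSnapshot(String ticker, double price, double pe,
        double dividendYield, String currency) {}
```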
You can call this service directly from the workflow using the HTTP Func DSL.
Instead of just storing the HTTP response in the data tree, we use an `outputAs` filter to build an `InvestmentPrompt` that becomes the input for the agent.
How the outputAs filter works here
In this example we use the typed outputAs variant:
get("fetchMarketData", "http://localhost:8081/market-data/{ticker}")
.outputAs((result, wf, tf) -> {
final Map<String, Object> input = tf.input().asMap().orElseThrow();
final String response = tf.rawOutput().asText().orElseThrow();
return new InvestmentPrompt(
result.ticker(),
input.get("objective").toString(),
input.get("horizon").toString(),
response);
}, MarketDataSnapshot.class)
- `result` is the typed HTTP output (`MarketDataSnapshot`) – the deserialized response body.
- `wf` is the workflow context (not used here, but available if you need globals).
- `tf` is the task context, where:
  - `tf.input()` is the original task input – here, the same shape as what the user sent when starting the workflow (e.g. `{ticker, objective, horizon}`).
  - `tf.rawOutput()` is the raw HTTP output before the filter – here we turn it into a `String` to pass as JSON to the agent.
- The lambda returns an `InvestmentPrompt`, so after this step:
  - the workflow data is now an `InvestmentPrompt` instance,
  - the next task sees that `InvestmentPrompt` as its input,
  - you effectively did: “HTTP result + original user input → unified prompt object”.
This is the core pattern for data transformation between tasks:
use outputAs to map arbitrary inputs/outputs into a shape that the next step
(LLM agent, another HTTP call, event emit, etc.) actually needs.
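If the shaping logic grows, it can be extracted into a pure helper and unit-tested without running the workflow. A self-contained sketch (the record types are simplified stand-ins for `MarketDataSnapshot` and `InvestmentPrompt`, repeated here only to keep the example runnable on its own):

```java
import java.util.Map;

// Stand-ins for the guide's DTOs, kept minimal for a self-contained example.
record MarketData(String ticker) {}
record Prompt(String ticker, String objective, String horizon, String json) {}

final class PromptShaper {
    /** The same shaping the outputAs lambda performs, as a pure method. */
    static Prompt shape(MarketData result, Map<String, Object> input, String rawJson) {
        return new Prompt(
                result.ticker(),
                input.get("objective").toString(),
                input.get("horizon").toString(),
                rawJson);
    }
}
```

Because the method has no workflow dependencies, a plain unit test can cover the "tool output + user input → prompt" mapping directly.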
For HTTP timeouts, logging, proxy, and TLS options, see Configure the HTTP client.
4. Compose HTTP + agent in a single Flow
Now we can wire the HTTP step and the agent into a single Flow subclass.
return workflow("investment-memo").tasks(
// 1) Fetch market data via HTTP and turn it into an InvestmentPrompt
get("fetchMarketData", "http://localhost:8081/market-data/{ticker}").outputAs((result, wf, tf) -> {
// This is the original task input, as sent by the workflow user
// It has the user's objective and horizon
// It could be a record, but we use as a Map to exemplify how to handle this type of object in the
// example.
final Map<String, Object> input = tf.input().asMap().orElseThrow();
// This is the task output before the outputAs filter
final String response = tf.rawOutput().asText().orElseThrow();
return new InvestmentPrompt(result.ticker(), input.get("objective").toString(),
input.get("horizon").toString(), response);
}, MarketDataSnapshot.class),
// 2) Call the LLM-backed investment analyst agent
agent("investmentAnalyst", analyst::analyse, InvestmentPrompt.class)).build();
What this workflow does:

- Accepts an input like: `{ "ticker": "CSU.TO", "objective": "long-term compounder", "horizon": "5y" }`
- Runs the HTTP task `fetchMarketData`, which:
  - calls `GET /market-data/{ticker}` using the ticker from the input, and
  - uses `outputAs` to combine the HTTP output (`MarketDataSnapshot`) and the original input (`objective`, `horizon`) into a single `InvestmentPrompt`.
- Calls the `InvestmentAnalystAgent` with that `InvestmentPrompt` (the agent takes ticker + objective + horizon + raw market-data JSON).
- The agent returns an `InvestmentMemo` that becomes the workflow data / result (no extra `outputAs` needed in this simple case).
This pattern shows the typical “agent + tool” combination:

- HTTP task = deterministic, structured tool (prices, fundamentals).
- `outputAs` = data shaper that turns “tool output + user input” into a single prompt object.
- Agent = judgement + narrative (interpretation, explanation, recommendation).
5. Expose the workflow via REST and Dev UI
You can expose the workflow as a simple JAX-RS endpoint, or just run it from the Flow Dev UI.
5.1 REST resource
@Path("/investments")
public class InvestmentMemoResource {
@Inject
InvestmentMemoFlow flow;
@GET
@Path("/{ticker}")
@Produces(MediaType.APPLICATION_JSON)
public CompletionStage<InvestmentMemo> analyse(@PathParam("ticker") String ticker) {
return flow.instance(Map.of("ticker", ticker, "objective", "Long-term growth", "horizon", "3–5 years")).start()
.thenApply(data -> data.as(InvestmentMemo.class).orElseThrow());
}
}
- The endpoint accepts the investment request (ticker, objective, horizon).
- It injects the `InvestmentMemoFlow` `Flow` subclass and starts an instance.
- It returns the resulting `InvestmentMemo` to the caller as JSON.
Because the method returns `CompletionStage<InvestmentMemo>` (or another reactive type),
any `WorkflowException` (e.g. HTTP 4xx/5xx from the market-data API) propagates
directly and is mapped to an RFC 7807 / `WorkflowError` HTTP response. See
CompletionStage vs blocking style for details.
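The asynchronous plumbing in the resource is plain `CompletionStage` composition. Stripped of Quarkus, the same mechanics look like this (a standalone sketch using `CompletableFuture` as the stage source; `toUpperCase` stands in for the `data.as(InvestmentMemo.class)` mapping):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

final class StageDemo {
    // start() yields a stage; thenApply maps the workflow data to the DTO;
    // a blocking caller can bridge with toCompletableFuture().join().
    static String run() {
        CompletionStage<String> stage = CompletableFuture.supplyAsync(() -> "hold");
        return stage
                .thenApply(String::toUpperCase) // stand-in for data.as(...)
                .toCompletableFuture()
                .join();
    }
}
```

In the JAX-RS resource you simply return the stage instead of joining it, so no worker thread blocks while the workflow runs.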
5.2 Run from Flow Dev UI
In dev mode:

- Open Dev UI → Flow → Workflows.
- Select the `investment-memo` workflow.
- Provide input JSON such as: `{ "ticker": "CSU.TO", "objective": "long-term compounder", "horizon": "5y" }`
- Click Start workflow.

You’ll see:

- The Input panel with your investment request.
- The Output panel with the final `InvestmentMemo`.
- In the logs you can inspect:
  - the HTTP call to `/market-data/{ticker}`, and
  - the agent interaction (if you enable LangChain4j logging).

Combine this with Enable tracing to get MDC-enriched logs per workflow instance.
6. Configuration (Optional)
Example configuration for an Ollama-backed analyst agent and a named HTTP client tuned for your market-data service:
# LangChain4j (Ollama)
quarkus.langchain4j.ollama.base-url=http://localhost:11434
quarkus.langchain4j.ollama.chat-model.model=llama3.1
# Optional: stricter / cheaper behaviour
# quarkus.langchain4j.ollama.chat-model.temperature=0.2
# quarkus.langchain4j.ollama.chat-model.max-tokens=1024
# Named HTTP client for market data
quarkus.flow.http.client.named.market-data.connect-timeout=2000
quarkus.flow.http.client.named.market-data.read-timeout=4000
quarkus.flow.http.client.named.market-data.user-agent=QuarkusFlow/InvestmentMemoDemo
quarkus.flow.http.client.named.market-data.logging.scope=request-response
quarkus.flow.http.client.named.market-data.logging.body-limit=2048
# Route the HTTP task in this workflow to the "market-data" client
quarkus.flow.http.client.workflow.investment-memo.task.fetchMarketData.name=market-data
For more HTTP tuning options (proxy, TLS, compression, redirects) see Configure the HTTP client.
7. Extending the pattern
Once this “agent + HTTP tool” pattern is in place, you can easily extend it:
- Add a second agent to critique or shorten the memo before returning it.
- Emit the memo as a CloudEvent (see Use messaging and events) so other services can react to `memo.ready`.
- Replace the market-data API with:
  - an internal pricing engine,
  - a credit-risk service,
  - or any other HTTP/OpenAPI backend.

The key idea is always the same: use workflows to coordinate tools and agents, so your business logic stays testable, observable, and safe to evolve.