IBM watsonx.ai
You can develop generative AI solutions with foundation models in IBM watsonx.ai. You can use prompts to generate, classify, summarize, or extract content from your input text. Choose from IBM models or open source models from Hugging Face. You can tune foundation models to customize your prompt output or optimize inferencing performance.
Supported only for IBM watsonx as a service on IBM Cloud.
Using watsonx.ai
To use watsonx.ai LLMs, add the following dependency to your project:
<dependency>
<groupId>io.quarkiverse.langchain4j</groupId>
<artifactId>quarkus-langchain4j-watsonx</artifactId>
<version>0.23.0.CR1</version>
</dependency>
If no other LLM extension is installed, AI Services will automatically use the configured watsonx.ai extension.
Configuration
To use the watsonx.ai dependency, you must configure some required values in the application.properties file.
Base URL
The base-url property depends on the region of the provided service instance. Use one of the following values:
- Dallas: https://us-south.ml.cloud.ibm.com
- Frankfurt: https://eu-de.ml.cloud.ibm.com
- London: https://eu-gb.ml.cloud.ibm.com
quarkus.langchain4j.watsonx.base-url=https://us-south.ml.cloud.ibm.com
Project ID
To prompt foundation models in watsonx.ai programmatically, you need to pass the identifier (ID) of a project.
To get the ID of a project, complete the following steps:
1. Open the project, and then click the Manage tab.
2. Copy the project ID from the Details section of the General page.
To view the list of projects, go to https://dataplatform.cloud.ibm.com/projects/?context=wx.
quarkus.langchain4j.watsonx.project-id=23d...
API Key
To prompt foundation models in IBM watsonx.ai programmatically, you need an IBM Cloud API key.
quarkus.langchain4j.watsonx.api-key=hG-...
To create an API key, go to https://cloud.ibm.com/iam/apikeys and generate one.
Interacting with Models
The watsonx.ai module provides two different modes for interacting with LLMs: generation and chat. These modes let you tailor the interaction to the complexity of your use case and to how much control you want over the prompt structure.
You can select the interaction mode with the property quarkus.langchain4j.watsonx.mode.
- generation: In this mode, you must explicitly structure the prompts using the required model-specific tags. This gives you full control over the format of the prompt, but requires in-depth knowledge of the model being used. For best results, always refer to the documentation of each model to maximize the effectiveness of your prompts.
- chat: This mode abstracts away the complexity of tagging by automatically formatting prompts, so you can focus on the content (default value).
To choose between these two modes, add the mode property to your application.properties file:
quarkus.langchain4j.watsonx.mode=chat // or 'generation'
Depending on the mode selected, the values for configuring the model are found under the chat-model or generation-model properties.
Chat Mode
In chat mode, you can interact with models without having to manually manage the tags of a prompt.
Choose this mode if you want dynamic interactions where the model can build on previous messages and provide more contextually relevant responses. It simplifies the interaction by automatically managing the necessary tags, allowing you to focus on the content of your prompts rather than on their formatting.
Chat mode also supports the use of tools, allowing the model to perform specific actions or retrieve external data as part of its responses. This extends the capabilities of the model, allowing it to perform complex tasks dynamically and adapt to your needs. More information about tools is available on the Agent and Tools page.
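As a minimal sketch (ClockTool and AssistantWithTools are hypothetical names, not part of the extension), a tool is a CDI bean with a method annotated with LangChain4j's @Tool, registered on the service through the tools attribute of @RegisterAiService:
@ApplicationScoped
public class ClockTool {

    // Hypothetical tool the model can invoke when it needs the current date or time
    @Tool("Returns the current date and time")
    public String currentDateTime() {
        return java.time.LocalDateTime.now().toString();
    }
}

@RegisterAiService(tools = ClockTool.class)
public interface AssistantWithTools {

    @SystemMessage("You are a helpful assistant")
    String chat(@MemoryId String id, @UserMessage String message);
}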
quarkus.langchain4j.watsonx.base-url=${BASE_URL}
quarkus.langchain4j.watsonx.api-key=${API_KEY}
quarkus.langchain4j.watsonx.project-id=${PROJECT_ID}
quarkus.langchain4j.watsonx.chat-model.model-id=mistralai/mistral-large
@RegisterAiService
public interface AiService {
@SystemMessage("You are a helpful assistant")
public String chat(@MemoryId String id, @UserMessage String message);
}
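The service can then be injected like any other CDI bean. The JAX-RS resource below is a hypothetical usage example, not part of the extension:
@Path("/assistant")
public class AssistantResource {

    @Inject
    AiService aiService;

    @GET
    public String ask(@QueryParam("question") String question) {
        // The memory id groups messages into a single conversation in the chat memory
        return aiService.chat("default-user", question);
    }
}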
The availability of chat and tools is currently limited to certain models. Not all models support these features, so be sure to consult the documentation for the specific model you are using to confirm whether these features are available.
Generation Mode
In generation mode, you have complete control over the structure of your prompts by manually specifying the tags for a specific model. This mode can be useful in scenarios where a single response is desired.
quarkus.langchain4j.watsonx.base-url=${BASE_URL}
quarkus.langchain4j.watsonx.api-key=${API_KEY}
quarkus.langchain4j.watsonx.project-id=${PROJECT_ID}
quarkus.langchain4j.watsonx.generation-model.model-id=mistralai/mistral-large
quarkus.langchain4j.watsonx.mode=generation
@RegisterAiService(chatMemoryProviderSupplier = RegisterAiService.NoChatMemoryProviderSupplier.class)
public interface AiService {
@UserMessage("""
<s>[INST] You are a helpful assistant [/INST]</s>\
[INST] What is the capital of {capital}? [/INST]""")
public String askCapital(String capital);
}
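As with chat mode, the service is injected and used like a regular bean. The following usage sketch is illustrative (the CapitalService class is hypothetical):
@ApplicationScoped
public class CapitalService {

    @Inject
    AiService aiService;

    public String capitalOf(String country) {
        // Sends the fully tagged prompt defined in the @UserMessage template
        return aiService.askCapital(country);
    }
}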
The @SystemMessage and @UserMessage annotations are joined by default with a new line. If you want to change this behavior, use the property quarkus.langchain4j.watsonx.chat-model.prompt-joiner=<value>. By adjusting this property, you can define your preferred way of joining messages and ensure that the prompt structure meets your specific needs.
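For example, using the property name given above, the joiner could be changed to a blank line (an illustrative value; \n is the standard properties-file escape for a new line):
# Illustrative: separate the @SystemMessage and @UserMessage parts with a blank line
quarkus.langchain4j.watsonx.chat-model.prompt-joiner=\n\n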
Sometimes it may be useful to use the quarkus.langchain4j.watsonx.chat-model.stop-sequences property to prevent the model from generating more output than desired.
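For instance (illustrative values), generation can be stopped as soon as the model emits one of the configured strings:
# Illustrative: stop generating when the model produces either of these strings
quarkus.langchain4j.watsonx.chat-model.stop-sequences=</s>,[INST]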
All configuration properties
Configuration property fixed at build time - All other configuration properties are overridable at runtime
Configuration property | Type | Default
---|---|---
Whether the model should be enabled. | boolean |
Whether the embedding model should be enabled. | boolean |
Whether the scoring model should be enabled. | boolean |
Specifies the mode of interaction with the LLM model. This property allows you to choose between two modes of operation: chat and generation. | string |
Base URL of the watsonx.ai API. | string |
IBM Cloud API key. To create a new API key, go to https://cloud.ibm.com/iam/apikeys. | string |
Timeout for watsonx.ai calls. | Duration |
The version date for the API, of the form YYYY-MM-DD. | string |
The space that contains the resource. Either the space or the project must be specified. | string |
The project that contains the resource. Either the space or the project must be specified. To look up your project ID, go to https://dataplatform.cloud.ibm.com/projects/?context=wx. | string |
Whether the watsonx.ai client should log requests. | boolean |
Whether the watsonx.ai client should log responses. | boolean |
Whether to enable the integration. Defaults to true. | boolean |
Base URL of the IAM Authentication API. | |
Timeout for IAM authentication calls. | Duration |
Grant type for the IAM Authentication API. | string |
Model id to use. | string |
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | double |
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. | boolean |
An integer specifying the number of most likely tokens to return at each token position, each with an associated log probability. The option logprobs must be set to true if this parameter is used. | int |
The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | int |
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. | int |
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | double |
What sampling temperature to use. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | double |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | double |
Specifies the desired format for the model's output. Applicable in chat mode only. | string |
Whether chat model requests should be logged. | boolean |
Whether chat model responses should be logged. | boolean |
Model id to use. | string |
Represents the strategy used for picking the tokens during generation of the output text. When set to greedy, each successive token corresponds to the highest probability token given the text that has already been generated; this strategy can lead to repetitive results, especially for longer output sequences. Allowable values: greedy, sample. | string |
Represents the factor of exponential decay. Larger values correspond to more aggressive decay. | double |
A number of generated tokens after which this should take effect. | int |
The maximum number of new tokens to be generated. The maximum supported value for this field depends on the model being used. How the "token" is defined depends on the tokenizer and vocabulary size, which in turn depends on the model. Often the tokens are a mix of full words and sub-words. Depending on the user's plan, and on the model being used, there may be an enforced maximum number of new tokens. | int |
If stop sequences are given, they are ignored until the minimum number of tokens is generated. | int |
Random number generator seed to use in sampling mode for experimental repeatability. | int |
Stop sequences are one or more strings which will cause the text generation to stop if/when they are produced as part of the output. Stop sequences encountered prior to the minimum number of tokens being generated will be ignored. | list of string |
A value used to modify the next-token probabilities in sampling mode. | double |
The number of highest probability vocabulary tokens to keep for top-k-filtering. Only applies for sampling mode. | int |
Similar to top-k, except that the candidates for the next token are the smallest set of tokens whose cumulative probability mass exceeds the configured value (nucleus sampling). | double |
Represents the penalty for penalizing tokens that have already been generated or belong to the context. The value 1.0 means that there is no penalty. | double |
Represents the maximum number of input tokens accepted. This can be used to avoid requests failing due to input being longer than configured limits. If the text is truncated, then it truncates the start of the input (on the left), so the end of the input will remain the same. If this value exceeds the maximum sequence length (refer to the documentation to find this value for the model) then the call will fail if the total number of tokens exceeds the maximum sequence length. Zero means don't truncate. | int |
Pass false to omit matched stop sequences from the end of the output text. | boolean |
Whether chat model requests should be logged. | boolean |
Whether chat model responses should be logged. | boolean |
Delimiter used to concatenate the ChatMessage elements into a single string. By setting this property, you can define your preferred way of concatenating messages to ensure that the prompt is structured in the correct way. | string | ` `
Model id to use. | string |
Represents the maximum number of input tokens accepted. This can be used to avoid requests failing due to input being longer than configured limits. If the text is truncated, then it truncates the end of the input (on the right), so the start of the input will remain the same. If this value exceeds the maximum sequence length (refer to the documentation to find this value for the model) then the call will fail if the total number of tokens exceeds the maximum sequence length. | int |
Whether embedding model requests should be logged. | boolean |
Whether embedding model responses should be logged. | boolean |
Model id to use. | string |
Represents the maximum number of input tokens accepted. This can be used to avoid requests failing due to input being longer than configured limits. If the text is truncated, then it truncates the end of the input (on the right), so the start of the input will remain the same. If this value exceeds the maximum sequence length (refer to the documentation to find this value for the model) then the call will fail if the total number of tokens exceeds the maximum sequence length. | int |
Whether embedding model requests should be logged. | boolean |
Whether embedding model responses should be logged. | boolean |
About the Duration format
To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information.
You can also use a simplified format, starting with a number:
- If the value is only a number, it represents time in seconds.
- If the value is a number followed by ms, it represents time in milliseconds.
In other cases, the simplified format is translated to the java.time.Duration format for parsing:
- If the value is a number followed by h, m, or s, it is prefixed with PT.
- If the value is a number followed by d, it is prefixed with P.
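For example, assuming the watsonx.ai call timeout from the table above maps to the quarkus.langchain4j.watsonx.timeout property, a 10-second timeout could be written as:
# Illustrative: 10-second timeout for watsonx.ai calls
quarkus.langchain4j.watsonx.timeout=10s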