Anthropic
Using Anthropic Models
To use Anthropic LLMs, add the following dependency to your project:
```xml
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-anthropic</artifactId>
    <version>0.22.0</version>
</dependency>
```
If no other LLM extension is installed, AI Services will automatically use the configured Anthropic model.
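For illustration, a minimal AI service might look like the following sketch (the `Assistant` interface, its prompt, and its method are our own names, not part of the extension):

```java
package org.acme;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// An AI service; with only the Anthropic extension installed, it is
// automatically backed by the configured Anthropic chat model.
@RegisterAiService
public interface Assistant {

    @SystemMessage("You are a helpful, concise assistant.")
    String chat(@UserMessage String question);
}
```

The interface can then be injected into any CDI bean with `@Inject` and invoked like a regular method.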
Configuration
Anthropic models require an API key, which you can obtain by creating an account on the Claude platform.
The API key can be set in the application.properties file:

```properties
quarkus.langchain4j.anthropic.api-key=...
```
Alternatively, use the QUARKUS_LANGCHAIN4J_ANTHROPIC_API_KEY environment variable.
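To keep the key out of the properties file itself, you can also reference an environment variable of your choosing through a Quarkus configuration expression (the `ANTHROPIC_API_KEY` name below is just an example):

```properties
quarkus.langchain4j.anthropic.api-key=${ANTHROPIC_API_KEY}
```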
Several configuration properties are available:

Configuration property fixed at build time - All other configuration properties are overridable at runtime.

| Configuration property | Type | Default |
|---|---|---|
| `quarkus.langchain4j.anthropic.chat-model.enabled`<br>Whether the model should be enabled<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_ENABLED` | boolean | `true` |
| `quarkus.langchain4j.anthropic.base-url`<br>Base URL of the Anthropic API<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_BASE_URL` | string | `https://api.anthropic.com/v1/` |
| `quarkus.langchain4j.anthropic.api-key`<br>Anthropic API key<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_API_KEY` | string | required |
| `quarkus.langchain4j.anthropic.version`<br>The Anthropic version<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_VERSION` | string | `2023-06-01` |
| `quarkus.langchain4j.anthropic.timeout`<br>Timeout for Anthropic calls<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_TIMEOUT` | Duration | `10s` |
| `quarkus.langchain4j.anthropic.log-requests`<br>Whether the Anthropic client should log requests<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_LOG_REQUESTS` | boolean | `false` |
| `quarkus.langchain4j.anthropic.log-responses`<br>Whether the Anthropic client should log responses<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_LOG_RESPONSES` | boolean | `false` |
| `quarkus.langchain4j.anthropic.enable-integration`<br>Whether to enable the integration. Defaults to `true`<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_ENABLE_INTEGRATION` | boolean | `true` |
| `quarkus.langchain4j.anthropic.chat-model.model-name`<br>Model name to use<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_MODEL_NAME` | string | `claude-3-haiku-20240307` |
| `quarkus.langchain4j.anthropic.chat-model.temperature`<br>What sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. It is generally recommended to set this or the `top-p` property, but not both<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_TEMPERATURE` | double | `0.7` |
| `quarkus.langchain4j.anthropic.chat-model.max-tokens`<br>The maximum number of tokens to generate in the completion. The token count of your prompt plus `max-tokens` cannot exceed the model's context length<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_MAX_TOKENS` | int | `1024` |
| `quarkus.langchain4j.anthropic.chat-model.top-p`<br>Double (0.0-1.0). Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to set this or the `temperature` property, but not both<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_TOP_P` | double | |
| `quarkus.langchain4j.anthropic.chat-model.top-k`<br>Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_TOP_K` | int | |
| `quarkus.langchain4j.anthropic.chat-model.max-retries`<br>The maximum number of times to retry. 1 means exactly one attempt, with retrying disabled<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_MAX_RETRIES` | int | `1` |
| `quarkus.langchain4j.anthropic.chat-model.stop-sequences`<br>The custom text sequences that will cause the model to stop generating<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_STOP_SEQUENCES` | list of string | |
| `quarkus.langchain4j.anthropic.chat-model.log-requests`<br>Whether chat model requests should be logged<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_LOG_REQUESTS` | boolean | `false` |
| `quarkus.langchain4j.anthropic.chat-model.log-responses`<br>Whether chat model responses should be logged<br>Environment variable: `QUARKUS_LANGCHAIN4J_ANTHROPIC_CHAT_MODEL_LOG_RESPONSES` | boolean | `false` |
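As a concrete example, a development-time configuration combining several of these properties might look as follows (the model name and values are illustrative choices, not defaults):

```properties
quarkus.langchain4j.anthropic.api-key=${ANTHROPIC_API_KEY}
quarkus.langchain4j.anthropic.chat-model.model-name=claude-3-5-sonnet-20240620
# Lower temperature for more focused, deterministic answers
quarkus.langchain4j.anthropic.chat-model.temperature=0.2
quarkus.langchain4j.anthropic.chat-model.max-tokens=1024
# Allow slower responses before timing out
quarkus.langchain4j.anthropic.timeout=30s
# Log traffic while developing
quarkus.langchain4j.anthropic.log-requests=true
quarkus.langchain4j.anthropic.log-responses=true
```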
About the Duration format
To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information.
You can also use a simplified format, starting with a number:
- If the value is only a number, it represents time in seconds.
- If the value is a number followed by `ms`, it represents time in milliseconds.
In other cases, the simplified format is translated to the java.time.Duration format for parsing:
- If the value is a number followed by `h`, `m`, or `s`, it is prefixed with `PT`.
- If the value is a number followed by `d`, it is prefixed with `P`.
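For example, the following two timeout settings are equivalent:

```properties
# Simplified format, translated to PT30S
quarkus.langchain4j.anthropic.timeout=30s
# Standard java.time.Duration format
# quarkus.langchain4j.anthropic.timeout=PT30S
```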