Mistral
Mistral is a French company that provides open-source large language models (LLMs).
Using Mistral Models
To use Mistral LLMs, add the following dependency to your project:
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-mistral-ai</artifactId>
    <version>0.21.0</version>
</dependency>
If no other LLM extension is installed, AI Services will automatically use the configured Mistral model.
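For example, an AI Service is declared as an annotated interface. The sketch below assumes a hypothetical Assistant interface and relies only on the @RegisterAiService annotation provided by quarkus-langchain4j:

import io.quarkiverse.langchain4j.RegisterAiService;

// Hypothetical AI Service: with only the Mistral extension installed,
// its calls are backed by the configured Mistral chat model.
@RegisterAiService
public interface Assistant {

    String chat(String userMessage);
}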
Configuration
Configuring Mistral models requires an API key, which you can obtain by creating an account on the Mistral platform.
The API key can be set in the application.properties file:
quarkus.langchain4j.mistralai.api-key=...
Alternatively, use the QUARKUS_LANGCHAIN4J_MISTRALAI_API_KEY environment variable.
Several configuration properties are available:
Configuration properties fixed at build time cannot be changed afterwards; all other configuration properties can be overridden at runtime. Each property can also be set through its corresponding environment variable (name uppercased, with dots and dashes replaced by underscores), as shown above for the API key.

Configuration property | Description | Type
quarkus.langchain4j.mistralai.chat-model.enabled | Whether the chat model should be enabled | boolean
quarkus.langchain4j.mistralai.embedding-model.enabled | Whether the embedding model should be enabled | boolean
quarkus.langchain4j.mistralai.base-url | Base URL of the Mistral API | string
quarkus.langchain4j.mistralai.api-key | Mistral API key | string
quarkus.langchain4j.mistralai.timeout | Timeout for Mistral calls | duration
quarkus.langchain4j.mistralai.chat-model.model-name | Model name to use | string
quarkus.langchain4j.mistralai.chat-model.temperature | What sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. It is generally recommended to set this or top-p, but not both. | double
quarkus.langchain4j.mistralai.chat-model.max-tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus max-tokens cannot exceed the model's context length. | int
quarkus.langchain4j.mistralai.chat-model.top-p | Nucleus sampling (0.0-1.0), where the model considers the results of the tokens with top-p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to set this or temperature, but not both. | double
quarkus.langchain4j.mistralai.chat-model.safe-prompt | Whether to inject a safety prompt before all conversations | boolean
quarkus.langchain4j.mistralai.chat-model.random-seed | The seed to use for random sampling. If set, different calls will generate deterministic results. | int
quarkus.langchain4j.mistralai.chat-model.log-requests | Whether chat model requests should be logged | boolean
quarkus.langchain4j.mistralai.chat-model.log-responses | Whether chat model responses should be logged | boolean
quarkus.langchain4j.mistralai.embedding-model.model-name | Model name to use | string
quarkus.langchain4j.mistralai.embedding-model.log-requests | Whether embedding model requests should be logged | boolean
quarkus.langchain4j.mistralai.embedding-model.log-responses | Whether embedding model responses should be logged | boolean
quarkus.langchain4j.mistralai.log-requests | Whether the Mistral client should log requests | boolean
quarkus.langchain4j.mistralai.log-responses | Whether the Mistral client should log responses | boolean
quarkus.langchain4j.mistralai.enable-integration | Whether to enable the integration. Defaults to true. | boolean
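As an illustration, the default chat model could be tuned like this in application.properties. The property names follow the table above; the chosen model name and values are only examples:

# Example configuration for the default Mistral chat model
quarkus.langchain4j.mistralai.api-key=${MISTRAL_API_KEY}
quarkus.langchain4j.mistralai.chat-model.model-name=mistral-small-latest
# Lower temperature for more focused, deterministic answers
quarkus.langchain4j.mistralai.chat-model.temperature=0.2
quarkus.langchain4j.mistralai.chat-model.max-tokens=1024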
The same properties, with the same meaning and types, can also be set per named model configuration, under the quarkus.langchain4j.mistralai."model-name" prefix (where "model-name" is the name of the specific configuration): base-url, api-key, timeout, chat-model.model-name, chat-model.temperature, chat-model.max-tokens, chat-model.top-p, chat-model.safe-prompt, chat-model.random-seed, chat-model.log-requests, chat-model.log-responses, embedding-model.model-name, embedding-model.log-requests, embedding-model.log-responses, log-requests, log-responses and enable-integration.
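For instance, a second, named configuration could point at a different chat model. The configuration name m1 and the chosen model are purely illustrative:

# Hypothetical named configuration "m1"
quarkus.langchain4j.mistralai.m1.api-key=${MISTRAL_API_KEY}
quarkus.langchain4j.mistralai.m1.chat-model.model-name=mistral-large-latest
quarkus.langchain4j.mistralai.m1.chat-model.temperature=0.7

An AI Service can then be bound to this configuration, for example through the modelName attribute of @RegisterAiService.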
About the Duration format
To write duration values, use the standard java.time.Duration format. You can also use a simplified format, starting with a number: if the value is only a number, it represents time in seconds; if the value is a number followed by ms, it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: a number followed by h, m, or s is prefixed with PT, and a number followed by d is prefixed with P.
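For example, assuming the timeout property from the table above, the Mistral call timeout could be raised to one minute using the simplified seconds format:

quarkus.langchain4j.mistralai.timeout=60s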