Mistral Moderation Models
Mistral provides moderation models that detect harmful or unsafe content in user input before your application processes it.
Prerequisites
The setup is the same as for the chat and embedding models: add the extension below to your project and provide a valid Mistral API key.
<dependency>
<groupId>io.quarkiverse.langchain4j</groupId>
<artifactId>quarkus-langchain4j-mistral-ai</artifactId>
<version>1.0.2</version>
</dependency>
Configuration
To enable the moderation model, configure your API key and the moderation model name in application.properties:
quarkus.langchain4j.mistralai.api-key=...
quarkus.langchain4j.mistralai.moderation-model.model-name=mistral-moderation-latest
Available moderation model names may evolve — refer to https://docs.mistral.ai/platform/endpoints/#moderation for an up-to-date list.
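For example, a minimal application.properties that avoids hard-coding the key could look like this (the MISTRAL_API_KEY environment variable name is only an illustration):
# read the key from an environment variable instead of committing it to the file
quarkus.langchain4j.mistralai.api-key=${MISTRAL_API_KEY}
quarkus.langchain4j.mistralai.moderation-model.model-name=mistral-moderation-latest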
To use it programmatically:
@Inject ModerationModel moderationModel;

var result = moderationModel.moderate("user input text...").content();
if (result.flagged()) {
    // handle unsafe input
}
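Below is a minimal sketch of how the snippet above could be packaged as a CDI bean; the ContentGuard class and its isUnsafe method are illustrative names, not part of the extension:
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.moderation.ModerationModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class ContentGuard {

    @Inject
    ModerationModel moderationModel;

    // Returns true when the moderation model flags the text as unsafe.
    public boolean isUnsafe(String userInput) {
        // moderate(...) returns a Response<Moderation>; content() unwraps the Moderation result
        Moderation moderation = moderationModel.moderate(userInput).content();
        return moderation.flagged();
    }
}
Callers can then reject or sanitize the input before handing it to a chat model.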
Configuration properties fixed at build time - all other configuration properties are overridable at runtime.

Description | Type | Default
---|---|---
Whether the chat model should be enabled | boolean |
Whether the embedding model should be enabled | boolean |
Base URL of the Mistral API | string |
Mistral API key | string |
Timeout for Mistral calls | Duration |
Chat model name to use | string |
What sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. It is generally recommended to set this or the top-p parameter, but not both | double |
The maximum number of tokens to generate in the completion. The token count of your prompt plus the maximum number of tokens cannot exceed the model's context length | int |
Double (0.0-1.0). Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to set this or the temperature parameter, but not both | double |
Whether to inject a safety prompt before all conversations | boolean |
The seed to use for random sampling. If set, different calls will generate deterministic results | int |
Whether chat model requests should be logged | boolean |
Whether chat model responses should be logged | boolean |
Embedding model name to use | string |
Whether embedding model requests should be logged | boolean |
Whether embedding model responses should be logged | boolean |
Whether the Mistral client should log requests | boolean |
Whether the Mistral client should log responses | boolean |
Whether to enable the integration. Defaults to true | boolean | true
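As one example of a runtime override, request and response logging for the Mistral client can be switched on during development. The exact keys below are an assumption, derived from the quarkus.langchain4j.mistralai prefix used elsewhere on this page and the log-requests/log-responses rows in the table:
# assumed keys matching the "should log requests/responses" rows above
quarkus.langchain4j.mistralai.log-requests=true
quarkus.langchain4j.mistralai.log-responses=true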
About the Duration format
To write duration values, use the standard java.time.Duration format. See the Duration#parse() javadoc for more information.
You can also use a simplified format, starting with a number:
- If the value is only a number, it represents time in seconds.
- If the value is a number followed by ms, it represents time in milliseconds.
In other cases, the simplified format is translated to the java.time.Duration format for parsing:
- If the value is a number followed by h, m, or s, it is prefixed with PT.
- If the value is a number followed by d, it is prefixed with P.
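For instance, the timeout listed in the table above is a Duration, so it accepts both forms. The property key here is an assumption following the quarkus.langchain4j.mistralai prefix used elsewhere on this page:
# assumed key for the "Timeout for Mistral calls" row; simplified format (60 seconds)
quarkus.langchain4j.mistralai.timeout=60s
# equivalent standard java.time.Duration form
# quarkus.langchain4j.mistralai.timeout=PT60S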