OpenAI Moderation Models
OpenAI provides dedicated moderation models designed to detect and filter harmful, offensive, or otherwise inappropriate content in user-generated text. These models are particularly useful in public-facing applications where user safety and content compliance are essential.
To learn more about moderation models and their role in AI applications, refer to the Moderation Models section in the Models reference guide.
Prerequisites
OpenAI Account and API Key
To use OpenAI models in your Quarkus application:

- Generate an API key from the API Keys page.
- Add the following to your application.properties:

quarkus.langchain4j.openai.api-key=sk-...

Use environment variables

Instead of hardcoding the key, you can set the QUARKUS_LANGCHAIN4J_OPENAI_API_KEY environment variable, which Quarkus maps to the same property.

Using configuration placeholders

You can also reference an environment variable directly in your properties file, for example using an OPENAI_API_KEY variable that you export yourself:

quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
OpenAI Quarkus Extension
To use OpenAI moderation models in your Quarkus application, add the following dependency:
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-openai</artifactId>
    <version>1.0.2</version>
</dependency>
If no other LLM extension is installed, AI Services will automatically use the configured OpenAI moderation model. Note that to enable moderation, the AI service method must be annotated with @Moderate (see Using the Moderation Model below).
Configuration
Configuration properties fixed at build time; all other configuration properties are overridable at runtime. Each property can also be set through its corresponding environment variable (the property name uppercased, with dots and dashes replaced by underscores), following the standard Quarkus configuration mapping.

| Configuration property | Type | Default |
|---|---|---|
| `quarkus.langchain4j.openai.chat-model.enabled`: whether the chat model should be enabled | boolean | `true` |
| `quarkus.langchain4j.openai.embedding-model.enabled`: whether the embedding model should be enabled | boolean | `true` |
| `quarkus.langchain4j.openai.moderation-model.enabled`: whether the moderation model should be enabled | boolean | `true` |
| `quarkus.langchain4j.openai.image-model.enabled`: whether the image model should be enabled | boolean | `true` |
| `quarkus.langchain4j.openai.base-url`: base URL of the OpenAI API | string | `https://api.openai.com/v1/` |
| `quarkus.langchain4j.openai.tls-configuration-name`: if set, the named TLS configuration with the configured name will be applied to the REST Client | string | |
| `quarkus.langchain4j.openai.api-key`: OpenAI API key | string | |
| `quarkus.langchain4j.openai.organization-id`: OpenAI Organization ID (https://platform.openai.com/docs/api-reference/organization-optional) | string | |
| `quarkus.langchain4j.openai.timeout`: timeout for OpenAI calls | Duration | `10s` |
| `quarkus.langchain4j.openai.max-retries`: the maximum number of times to retry; 1 means exactly one attempt, with retrying disabled | int | `1` |
| `quarkus.langchain4j.openai.log-requests`: whether the OpenAI client should log requests | boolean | `false` |
| `quarkus.langchain4j.openai.log-responses`: whether the OpenAI client should log responses | boolean | `false` |
| `quarkus.langchain4j.openai.enable-integration`: whether to enable the integration | boolean | `true` |
| `quarkus.langchain4j.openai.proxy-type`: the proxy type | string | `HTTP` |
| `quarkus.langchain4j.openai.proxy-host`: the proxy host | string | |
| `quarkus.langchain4j.openai.proxy-port`: the proxy port | int | `3128` |
| `quarkus.langchain4j.openai.chat-model.model-name`: model name to use | string | |
| `quarkus.langchain4j.openai.chat-model.temperature`: what sampling temperature to use, with values between 0 and 2. Higher values mean the model will take more risks; a value of 0.9 is good for more creative applications, while 0 (argmax sampling) is good for ones with a well-defined answer. It is recommended to alter this or top-p, but not both | double | `1.0` |
| `quarkus.langchain4j.openai.chat-model.top-p`: an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top-p probability mass; 0.1 means only the tokens comprising the top 10% probability mass are considered. It is recommended to alter this or temperature, but not both | double | `1.0` |
| `quarkus.langchain4j.openai.chat-model.max-tokens`: the maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096) | int | |
| `quarkus.langchain4j.openai.chat-model.presence-penalty`: number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics | double | `0` |
| `quarkus.langchain4j.openai.chat-model.frequency-penalty`: number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim | double | `0` |
| `quarkus.langchain4j.openai.chat-model.log-requests`: whether chat model requests should be logged | boolean | `false` |
| `quarkus.langchain4j.openai.chat-model.log-responses`: whether chat model responses should be logged | boolean | `false` |
| `quarkus.langchain4j.openai.chat-model.response-format`: the response format the model should use. Some models are not compatible with some response formats; make sure to review the OpenAI documentation | string | |
| `quarkus.langchain4j.openai.chat-model.strict-json-schema`: whether responses follow JSON Schema for Structured Outputs | boolean | |
| `quarkus.langchain4j.openai.chat-model.stop`: the list of stop words to use | list of string | |
| `quarkus.langchain4j.openai.embedding-model.model-name`: model name to use | string | `text-embedding-ada-002` |
| `quarkus.langchain4j.openai.embedding-model.log-requests`: whether embedding model requests should be logged | boolean | `false` |
| `quarkus.langchain4j.openai.embedding-model.log-responses`: whether embedding model responses should be logged | boolean | `false` |
| `quarkus.langchain4j.openai.embedding-model.user`: a unique identifier representing your end-user, which can help OpenAI monitor and detect abuse | string | |
| `quarkus.langchain4j.openai.moderation-model.model-name`: model name to use | string | `text-moderation-latest` |
| `quarkus.langchain4j.openai.moderation-model.log-requests`: whether moderation model requests should be logged | boolean | `false` |
| `quarkus.langchain4j.openai.moderation-model.log-responses`: whether moderation model responses should be logged | boolean | `false` |
| `quarkus.langchain4j.openai.image-model.model-name`: model name to use | string | `dall-e-3` |
| `quarkus.langchain4j.openai.image-model.persist`: whether the generated images will be saved to disk. By default, persisting is disabled, but it is implicitly enabled when `quarkus.langchain4j.openai.image-model.persist-directory` is set | boolean | `false` |
| `quarkus.langchain4j.openai.image-model.persist-directory`: the path where the generated images will be persisted to disk. This only applies if `quarkus.langchain4j.openai.image-model.persist` is enabled | path | |
| `quarkus.langchain4j.openai.image-model.response-format`: the format in which the generated images are returned. Must be one of `url` or `b64_json` | string | |
| `quarkus.langchain4j.openai.image-model.size`: the size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`; must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` | string | `1024x1024` |
| `quarkus.langchain4j.openai.image-model.quality`: the quality of the image that will be generated. This parameter is only supported when the model is `dall-e-3` | string | `standard` |
| `quarkus.langchain4j.openai.image-model.number`: the number of images to generate. Must be between 1 and 10; when the model is `dall-e-3`, only `n=1` is supported | int | `1` |
| `quarkus.langchain4j.openai.image-model.style`: the style of the generated images. Must be one of `vivid` or `natural`. This parameter is only supported when the model is `dall-e-3` | string | `vivid` |
| `quarkus.langchain4j.openai.image-model.user`: a unique identifier representing your end-user, which can help OpenAI monitor and detect abuse | string | |
| `quarkus.langchain4j.openai.image-model.log-requests`: whether image model requests should be logged | boolean | `false` |
| `quarkus.langchain4j.openai.image-model.log-responses`: whether image model responses should be logged | boolean | `false` |
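For example, using the enabled flags from the top of the table, you can switch off the models your application does not need. A minimal sketch (all four flags default to true):

quarkus.langchain4j.openai.moderation-model.enabled=true
quarkus.langchain4j.openai.embedding-model.enabled=false
quarkus.langchain4j.openai.image-model.enabled=false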
The same set of properties is also available for named model configurations: replace the `quarkus.langchain4j.openai.` prefix with `quarkus.langchain4j.openai."model-name".` to configure a specific named model. Types and defaults are identical to the table above.
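For instance, assuming a named configuration called my-moderation (the name is arbitrary and used here only for illustration), the prefixed form looks like this:

quarkus.langchain4j.openai.my-moderation.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.my-moderation.moderation-model.model-name=text-moderation-latest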
About the Duration format
To write duration values, use the standard java.time.Duration format. You can also use a simplified format, starting with a number:

- If the value is only a number, it represents time in seconds.
- If the value is a number followed by ms, it represents time in milliseconds.

In other cases, the simplified format is translated to the java.time.Duration format for parsing:

- If the value is a number followed by h, m, or s, it is prefixed with PT.
- If the value is a number followed by d, it is prefixed with P.
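Putting this together, a minimal moderation-oriented configuration might look like the following sketch (the model name, timeout, and logging flag are illustrative choices, not requirements):

# API key read from an environment variable
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
# moderation model to use
quarkus.langchain4j.openai.moderation-model.model-name=text-moderation-latest
# 30 seconds, using the simplified Duration format described above
quarkus.langchain4j.openai.timeout=30s
# log moderation requests while developing
quarkus.langchain4j.openai.moderation-model.log-requests=true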
Using the Moderation Model
You can use moderation models in Quarkus LangChain4j either declaratively via @Moderate in an AI service interface, or programmatically using the ModerationModel API.
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import io.quarkiverse.langchain4j.moderation.Moderate;
@RegisterAiService
public interface MyModerationService {
@Moderate
@UserMessage("Answer this question: {input}")
String answer(String input);
}
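If the moderation model flags the user message, LangChain4j aborts the call by throwing a ModerationException. A minimal handling sketch (SafeAnswerService is a hypothetical wrapper around the service above):

import dev.langchain4j.service.ModerationException;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class SafeAnswerService {

    @Inject
    MyModerationService service;

    public String safeAnswer(String input) {
        try {
            return service.answer(input);
        } catch (ModerationException e) {
            // thrown when the moderation model flags the input
            return "Sorry, this request cannot be processed.";
        }
    }
}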
Programmatic Usage
For more control, inject the ModerationModel and call the moderate method directly:
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.moderation.ModerationModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class ContentChecker {

    @Inject
    ModerationModel moderationModel;

    public void check(String userInput) {
        // moderate(...) returns a Response<Moderation>; content() unwraps it
        Moderation moderation = moderationModel.moderate(userInput).content();
        if (moderation.flagged()) {
            // Take appropriate action
        }
    }
}
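As a usage sketch, the programmatic check can guard a REST endpoint before the input reaches any other model (the resource class and path here are hypothetical):

import dev.langchain4j.model.moderation.ModerationModel;
import jakarta.inject.Inject;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;

@Path("/comments")
public class CommentResource {

    @Inject
    ModerationModel moderationModel;

    @POST
    public String submit(String comment) {
        // reject flagged content before any further processing
        if (moderationModel.moderate(comment).content().flagged()) {
            return "Comment rejected by moderation.";
        }
        return "Comment accepted.";
    }
}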