Hugging Face Chat Models
Hugging Face is a leading platform in the field of natural language processing (NLP) that provides a large collection of pre-trained language models. It offers easy access to cutting-edge models for various NLP tasks through hosted APIs or local inference.
Prerequisites
Extension Installation
To use Hugging Face chat models in your Quarkus application, add the following extension:
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-hugging-face</artifactId>
    <version>1.0.2</version>
</dependency>
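If you use Gradle instead, the equivalent dependency declaration (same coordinates as the Maven snippet above) would be:
implementation("io.quarkiverse.langchain4j:quarkus-langchain4j-hugging-face:1.0.2")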
If no other LLM extension is installed, AI Services will automatically use the configured Hugging Face chat model.
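For illustration, a minimal AI Service might look like the following sketch; the Assistant interface name and the prompt text are hypothetical, only the annotations come from the Quarkus LangChain4j and LangChain4j APIs:
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Hypothetical AI Service: with only this extension installed,
// its methods are backed by the configured Hugging Face chat model.
@RegisterAiService
public interface Assistant {

    @SystemMessage("You are a concise assistant.")
    String chat(@UserMessage String question);
}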
Supported Models
Only text-to-text models are supported (i.e., models that take a textual prompt and return a textual result).
By default, the following model is used:
- tiiuae/falcon-7b-instruct for chat-style generation
Configuration
To use a custom Hugging Face model hosted on the Hugging Face Hub, configure the endpoint:
quarkus.langchain4j.huggingface.chat-model.inference-endpoint-url=https://api-inference.huggingface.co/models/google/flan-t5-small
You can also point to a locally hosted or private endpoint:
quarkus.langchain4j.huggingface.chat-model.inference-endpoint-url=http://localhost:8085
For a fully hosted endpoint (e.g., AWS-hosted Hugging Face endpoint):
quarkus.langchain4j.huggingface.chat-model.inference-endpoint-url=https://<endpoint>.endpoints.huggingface.cloud
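Putting these together, a minimal configuration for the Inference API might look like the following sketch. The endpoint URL property is the one shown above; the api-key property name is assumed from the extension's usual naming convention (see the reference below), and the token value is a placeholder:
# Hugging Face access token (placeholder value)
quarkus.langchain4j.huggingface.api-key=hf_xxx
# Model served through the Hugging Face Inference API
quarkus.langchain4j.huggingface.chat-model.inference-endpoint-url=https://api-inference.huggingface.co/models/google/flan-t5-small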
Configuration Reference
Configuration properties fixed at build time; all other configuration properties are overridable at runtime.

Description | Type
---|---
Whether the model should be enabled | boolean
Whether the model should be enabled | boolean
Whether the model should be enabled | boolean
HuggingFace API key | string
Timeout for HuggingFace calls | Duration
The URL of the inference endpoint for the chat model. When using the Hugging Face Inference API, the URL has the form https://api-inference.huggingface.co/models/<model-id> (see the example above). When using a deployed inference endpoint, it is the URL of that endpoint. When using a local Hugging Face model, it is the URL of the local model. |
Float (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always taking the highest score, and 100.0 gets closer to a uniform probability. | double
Int (0-250). The number of new tokens to be generated; this does not include the input length, it is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and the length of the generated text. | int
If set to false, the returned results will not contain the original query, which makes prompting easier. | boolean
If the model is not ready, wait for it instead of receiving a 503. This limits the number of requests required to get your inference done. It is advised to set this flag to true only after receiving a 503 error, as it limits hanging in your application to known places. | boolean
Whether to use sampling; greedy decoding is used otherwise. | boolean
The number of highest-probability vocabulary tokens to keep for top-k filtering. | int
If set to a value less than 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation. | double
The parameter for repetition penalty; 1.0 means no penalty. | double
Whether chat model requests should be logged | boolean
Whether chat model responses should be logged | boolean
The URL of the inference endpoint for the embedding model: either a Hugging Face Inference API URL, the URL of a deployed inference endpoint, or the URL of a locally hosted model. |
If the model is not ready, wait for it instead of receiving a 503. This limits the number of requests required to get your inference done. It is advised to set this flag to true only after receiving a 503 error, as it limits hanging in your application to known places. | boolean
Whether the HuggingFace client should log requests | boolean
Whether the HuggingFace client should log responses | boolean
Whether to enable the integration. Defaults to true. | boolean
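For example, request and response logging for the chat model could be switched on as follows; the property names are assumed from the extension's naming convention and correspond to the logging entries in the table above:
quarkus.langchain4j.huggingface.chat-model.log-requests=true
quarkus.langchain4j.huggingface.chat-model.log-responses=true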
About the Duration format
To write duration values, use the standard java.time.Duration format. You can also use a simplified format, starting with a number: if the value is only a number, it represents time in seconds; if the value is a number followed by ms, it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: if the value is a number followed by h, m, or s, it is prefixed with PT; if the value is a number followed by d, it is prefixed with P.
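For example, the timeout for Hugging Face calls could be written in either form; the property name is assumed from the extension's naming convention:
quarkus.langchain4j.huggingface.timeout=10s
quarkus.langchain4j.huggingface.timeout=PT10S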