HuggingFace integration
Deploy public or private AI models from HuggingFace.
The HuggingFace integration allows Beamlit users to deploy models hosted in a HuggingFace repository, whether public, gated, or private, directly on Beamlit.
The integration must be set up by an admin in the Integrations section in the workspace settings.
Set up the integration
To use this integration, you must register a HuggingFace access token in your Beamlit workspace settings. Beamlit's access to HuggingFace is limited to the scope of this token (i.e. the HuggingFace resources the token is allowed to access).
First, generate a HuggingFace access token from your HuggingFace settings, giving it the scope (e.g. repositories) that you want Beamlit to access on HuggingFace.
Then, in your Beamlit workspace settings, open the HuggingFace integration and paste this token into the “Access token” field.
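Before registering the token, you may want to sanity-check it outside Beamlit. A minimal sketch using HuggingFace's public `whoami-v2` endpoint (the endpoint and header are HuggingFace's API, not part of Beamlit; the token value is a placeholder):

```python
import json
import urllib.request

# HuggingFace's account-info endpoint; a valid token returns the account it belongs to.
WHOAMI_URL = "https://huggingface.co/api/whoami-v2"

def whoami_request(token: str) -> urllib.request.Request:
    """Build the authenticated request used to verify a HuggingFace access token."""
    return urllib.request.Request(
        WHOAMI_URL,
        headers={"Authorization": f"Bearer {token}"},
    )

if __name__ == "__main__":
    # Replace "hf_xxx" with the token you plan to register in Beamlit.
    req = whoami_request("hf_xxx")
    with urllib.request.urlopen(req) as resp:
        info = json.load(resp)
    print(info.get("name"))  # the account the token authenticates as
```

If the call returns 401, the token is invalid or expired and Beamlit will not be able to use it.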
Deploy from HuggingFace
Once you’ve set up the integration in the workspace, any workspace member can use it to reference a HuggingFace repository as the origin for a model deployment.
Public and private models
When deploying a model, select “Deploy from HuggingFace”. You can search for any public model, or any private model in the organizations & repositories allowed by the integration’s token.
Gated models
If the model you’re trying to deploy is gated, you’ll first need to request access on HuggingFace, and accept their terms and conditions of usage (if applicable). Access to some HuggingFace models is granted immediately after request, while others require manual approval.
When the model is deployed, Beamlit checks whether the integration token is allowed to access the model on HuggingFace. If you have not been granted access, the deployment will fail with an error.