The Mistral AI integration allows Beamlit users to call Mistral AI models using a Beamlit endpoint in order to unify access control, credentials and observability management.

The integration must be set up by an admin in the Integrations section in the workspace settings.

Set up the integration

To use this integration, you must register a Mistral AI access token in your Beamlit workspace settings. First, generate an API key from your Mistral AI La Plateforme settings.

On Beamlit, in Workspace Settings > Mistral AI integration, create a new connection and paste this token into the “Access token” section.

Connect to a Mistral AI model

Once you’ve set up the integration in the workspace, any workspace member can use it to reference a Mistral AI model as an external model API.

When creating a model API, select Connect an external model > Mistral AI. You can search for any model from the Mistral AI catalog.

After the model API is created, you will receive a dedicated global Beamlit endpoint to call the model. Beamlit will forward inference requests to Mistral AI, using your Mistral AI credentials for authentication and authorization.
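As an illustration, calling the dedicated endpoint looks like a standard HTTP inference call. The sketch below is a minimal example built with the Python standard library; the endpoint URL, payload shape, and Bearer authorization scheme are assumptions for illustration, not confirmed Beamlit specifics. Substitute the endpoint and authentication details shown in your workspace.

```python
import json
import urllib.request

def build_inference_request(endpoint: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build an HTTP request for a Beamlit model API endpoint.

    The payload follows a common chat-completions shape; the header
    names and auth scheme are assumptions for this sketch.
    """
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # hypothetical auth scheme
        },
        method="POST",
    )

# Build (but do not send) a request against a placeholder endpoint:
req = build_inference_request(
    "https://run.beamlit.com/my-workspace/models/my-mistral-model",  # placeholder URL
    "MY_BEAMLIT_API_KEY",
    "Hello!",
)
```

Sending the request (for example with `urllib.request.urlopen(req)`) would have Beamlit forward the inference call to Mistral AI using the credentials stored in the integration.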

Because your own credentials are used, any inference request on this endpoint may incur costs on your Mistral AI account, just as if you had queried the model directly on Mistral AI.