OpenAI integration
Connect your agents to LLMs from OpenAI.
The OpenAI integration lets Beamlit users call OpenAI models through a Beamlit endpoint, unifying access control, credential management, and observability.
The integration must be set up by an admin in the Integrations section of the workspace settings.
Set up the integration
To use this integration, you must register an OpenAI access token in your Beamlit workspace settings. The scope of this token (i.e. the OpenAI resources it is allowed to access) determines the scope that Beamlit has access to.
First, generate an OpenAI API key from your OpenAI Platform settings, and set the key's permissions to Read-only.
On Beamlit, go to Workspace Settings > OpenAI integration, create a new connection, and paste this key into the “Access token” section.
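Before saving the connection, you can optionally sanity-check the key outside of Beamlit. The sketch below assumes the official `openai` Python package; listing models is a safe call for a Read-only key.

```python
# Sanity-check an OpenAI API key before registering it in Beamlit.
# Assumes the official `openai` package (pip install openai).
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # the Read-only key you just generated

# Listing models only requires read access, so it works with a Read-only key.
for model in client.models.list():
    print(model.id)
```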
Connect to an OpenAI model
Once you’ve set up the integration in the workspace, any workspace member can use it to reference an OpenAI model as an external model API.
When creating a model API, select Connect an external model > OpenAI. You can search for any model from the OpenAI catalog.
After the model API is created, you will receive a dedicated global Beamlit endpoint to call the model. Beamlit will forward inference requests to OpenAI, using your OpenAI credentials for authentication and authorization.
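For illustration, a call to such an endpoint might look like the sketch below. The endpoint URL, auth header, and payload shape are assumptions based on a typical OpenAI-style chat messages API; copy the exact endpoint and credentials from your model API's page in Beamlit.

```python
# Hypothetical request to a Beamlit model API endpoint.
# The endpoint URL, auth header, and payload shape are assumptions; check
# your model API's page in Beamlit for the real values.
import requests

BEAMLIT_ENDPOINT = "https://run.beamlit.com/<workspace>/models/<model-api>"  # placeholder
BEAMLIT_API_KEY = "<your-beamlit-api-key>"  # a Beamlit credential, not your OpenAI key

response = requests.post(
    BEAMLIT_ENDPOINT,
    headers={"Authorization": f"Bearer {BEAMLIT_API_KEY}"},
    json={
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

Note that the request is authenticated with your Beamlit credentials; Beamlit then authenticates to OpenAI with the integration's stored key, so the OpenAI key never needs to be shared with workspace members.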