This tutorial shows how to deploy a custom LangChain AI agent directly from your IDE using the Beamlit CLI.

Your agent will typically include these key components:

  • A core agent logic algorithm → Beamlit will deploy this as an Agent
  • A set of tools that the agent can use → Beamlit will deploy these in sandboxed environments as Functions
  • A chat model that powers the agent → Beamlit can route the agent’s LLM calls through a Beamlit model API.

When the main agent runs, Beamlit orchestrates everything in secure, sandboxed environments, providing complete traceability of all requests and production-grade scalability.

Prerequisites

Guide

(Optional) Quickstart from a template

For Python, you will need uv installed; for TypeScript, you will need npm installed.

Let’s initialize a first app. The following command creates a pre-scaffolded local repository ready for developing and deploying your agent on Beamlit.

bl create-agent-app my-agent

Select your model API, choose the Custom template, and press Enter to create the repo.

Add your agent code

Add or import the script files containing your LangChain agent. Here’s an example using a simple LangChain ReAct agent:
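Below is a minimal sketch of such an agent file. The helloworld tool import matches the custom tool mentioned just below; the chat model class, model name, and the langgraph create_react_agent constructor are illustrative assumptions rather than part of the template.

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Custom tool developed in another folder of your project
from customfunctions.helloworld import helloworld

# Chat model powering the agent (provider and model name are placeholders)
model = ChatOpenAI(model="gpt-4o-mini")

# Simple ReAct-style agent with a single custom tool
react_agent = create_react_agent(model, [helloworld])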

Note that this code references a custom tool from another folder, customfunctions.helloworld.helloworld, which we assume you have also developed. By default, this function would not be deployed as a separate component. See the next section for instructions on deploying it in its own sandboxed environment to trace request usage.

The next step is to use the Beamlit SDK to specify which function should be deployed on Beamlit. You’ll need two key elements:

  • Create a main async function to handle agent execution—this will be your default entry point for serving and deployment.
  • Add the @agent decorator to designate your main agent function for Beamlit serving.

Here’s an example showing the main async function:
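Below is a minimal sketch of such an entry point, assuming it receives a Starlette-style request object and reuses the react_agent built above; the exact signature Beamlit expects is detailed in the reference linked below.

async def main(request):
    # Read the caller's input from the request body, e.g. {"inputs": "Hello world!"}
    body = await request.json()

    # Run the LangChain agent and return its final answer
    result = await react_agent.ainvoke({"messages": [("user", body["inputs"])]})
    return result["messages"][-1].content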

Read our reference for the main agent function to serve.

Then, here’s an example adding the Beamlit decorator:
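Below is a sketch of the same entry point with the decorator applied. The import path and the empty decorator call are assumptions; the exact usage is covered in the reference linked below.

from beamlit.agents import agent  # import path is an assumption

@agent()  # decorator arguments, if any, are documented in the reference below
async def main(request):
    body = await request.json()
    result = await react_agent.ainvoke({"messages": [("user", body["inputs"])]})
    return result["messages"][-1].content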

Read our reference for @agent decorator.

At this time:

  • your agent is ready to be deployed on Beamlit
  • functions (tool calls) and model APIs are not yet ready to be deployed in separate sandboxed environments.

To get total observability and traceability on the requests of your agent, it’s recommended to use a microservices-like architecture where each component runs independently in its own environment. Beamlit helps you cross this last mile with just a few lines of code.

Sandbox the tool call execution

To deploy your tool on Beamlit, the simplest way is to place the main function’s file in the src/functions/ folder.

Here’s an example with the helloworld custom tool from the previous code snippets:
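For instance, moved into src/functions/helloworld.py, the tool could look like the following sketch; the function body is a placeholder standing in for your actual tool logic.

# src/functions/helloworld.py

def helloworld(query: str) -> str:
    """A simple example tool that greets the caller."""
    return f"Hello world! You said: {query}"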

Now, add the @function decorator to specify the default entry point for serving and deployment of the function.
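Here is the same file with the decorator applied, as a sketch; the import path is an assumption, so refer to the Beamlit SDK documentation for the exact one.

# src/functions/helloworld.py
from beamlit.functions import function  # import path is an assumption

@function()  # decorator arguments, if any, are documented in the Beamlit SDK reference
def helloworld(query: str) -> str:
    """A simple example tool that greets the caller."""
    return f"Hello world! You said: {query}"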

Functions placed in the src/functions/ folder will automatically be deployed as custom functions and can be made available to the agent during execution by calling get_functions(). The final step is to update the tool binding from the main agent file:
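In the main agent file, this could look like the following sketch; the import path and the synchronous call shape of get_functions() are assumptions, so check the reference linked below.

from beamlit.functions import get_functions  # import path is an assumption

# Replace the locally imported tool with the functions deployed on Beamlit
functions = get_functions()
react_agent = create_react_agent(model, functions)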

Read our reference for get_functions().

Sandbox the model API call

Use the Beamlit console to create the corresponding integration connection to your LLM provider, using one of the supported integrations.

Then create a model API using your integration, and deploy it. Here’s an example from the Beamlit console using an OpenAI model:

Model APIs can be made available to the agent during execution by calling get_chat_model():
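As a sketch, binding the deployed model API in the main agent file could look like this; the import path and the argument passed to get_chat_model() are assumptions, and "my-model-api" is a placeholder name.

from beamlit.models import get_chat_model  # import path is an assumption

# Load the chat model from the model API deployed on Beamlit
model = get_chat_model("my-model-api")  # placeholder: use your model API's name
react_agent = create_react_agent(model, functions)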

At this time: your agent, functions and model API are ready to be deployed on Beamlit in sandboxed environments.

Test and deploy your AI agent

Run the following command to serve your agent locally:

cd my-agent
bl serve --hotreload

Query your agent locally by making a POST request to http://localhost:1338 with the following payload format: {"inputs": "Hello world!"}.
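For example, with curl:

curl -X POST http://localhost:1338 -H "Content-Type: application/json" -d '{"inputs": "Hello world!"}'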

To push to Beamlit on the default production environment, run the following command. Beamlit will handle the build and deployment:

bl deploy

That’s it! 🌎 🌏 🌍 Your agent is now distributed and available across the entire Beamlit global infrastructure! The Global Inference Network significantly speeds up inferences by executing your agent’s workloads in sandboxed environments and smartly routing requests based on your policies.

Make a first inference

Run a first inference on your Beamlit agent with the following command:

bl run agent my-agent --data '{"inputs":"Hello world!"}'

Your agent is available behind a global endpoint. Read this guide on how to use it for HTTP requests.

You can also run inference requests on your agent (or on each function or model API) from the Beamlit console using the Playground.

Next steps

You are ready to run AI with Beamlit! Here’s a curated list of guides to help you make the most of the Beamlit platform, but feel free to explore the product on your own!

Deploy agents

Complete guide for deploying AI agents on Beamlit.

Manage environment policies

Complete guide for managing deployment and routing policies on the Global Inference Network.

Guide for querying agents

Complete guide for querying your AI agents on the Global Inference Network.