Inference API
bl run
Run a resource on Beamlit.
bl run resource-type resource-name [flags]
Examples
bl run agent my-agent --data '{"inputs": "Hello, world!"}'
bl run model my-model --data '{"inputs": "Hello, world!"}'
bl run function my-function --data '{"query": "4+2"}'
Options
--data string JSON body data for the inference request
--header stringArray Request headers in 'Key: Value' format. Can be specified multiple times
-h, --help help for run
--method string HTTP method for the inference request (default "POST")
--path string Path for the inference request
--show-headers Show response headers in output
--upload-file string Transfer the specified local file to the remote URL
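The options above describe an ordinary HTTP inference request: --data is the body, --header adds request headers, --method sets the verb, and --path extends the URL. A minimal sketch of that mapping (the base URL and the header-splitting logic are assumptions for illustration, not taken from the CLI source):

```python
import urllib.request

def build_request(data, headers=None, method="POST", path=""):
    """Construct (but do not send) the HTTP request that the
    --data/--header/--method/--path flags would correspond to.
    The base URL here is a placeholder assumption."""
    url = "https://example.invalid/inference" + path
    req = urllib.request.Request(url, data=data.encode(), method=method)
    for h in headers or []:
        # --header takes 'Key: Value' strings; split on the first colon
        key, _, value = h.partition(":")
        req.add_header(key.strip(), value.strip())
    req.add_header("Content-Type", "application/json")
    return req

req = build_request('{"inputs": "Hello, world!"}',
                    headers=["X-Request-Id: 123"])
print(req.get_method())  # "POST" is the documented default method
```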
Options inherited from parent commands
-e, --env string Environment. One of: development, production
-o, --output string Output format. One of: pretty, yaml, json, table
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
SEE ALSO
- bl - Beamlit CLI is a command-line tool to interact with Beamlit APIs.