- **Dynamic Agent Delegation:** The supervisor can decide whether to handle a user query itself or delegate it to a configured specialist agent.
- **Customizable System Prompt:** Tailor the supervisor's behavior and instructions using a configurable system prompt, with a sensible default provided.
To run the supervisor locally, first clone the repository:

```bash
git clone https://github.com/langchain-ai/open-agent-supervisor.git
```

Next, copy the `.env.example` file into `.env` and set your environment variables (we default to OpenAI to provide the underlying supervisor agent LLM):

```bash
cp .env.example .env
```
Create and activate a virtual environment:

```bash
uv venv
source .venv/bin/activate
```
Install dependencies:

```bash
uv sync
```
Start the LangGraph server:

```bash
# The --no-browser flag disables auto-opening LangGraph Studio when the server starts.
# Optional, but recommended since the studio is not needed for this project.
uv run langgraph dev --no-browser
```
> [!IMPORTANT]
> Prerequisites: Have at least one supervisor deployed to your instance of LangGraph Platform (LGP) that matches or extends the base implementation provided in this repo.
To add a supervisor agent to the platform:

1. Navigate to the `Agents` tab in the left menu bar.
2. Create an agent and select the graph deployed to your LGP instance described in the prerequisites above.
3. Give your supervisor a name, a description, and an optional system prompt if you'd like to modify it, and select the agents that it will orchestrate.

Note: the supervisor's configuration options are defined by the `GraphConfigPydantic` class in the `agent.py` file. OAP will automatically register any changes to this class. You can modify a specific field's properties by editing the `x_oap_ui_config` metadata object. For more information, see the Open Agent Platform documentation on graph configuration.

The supervisor operates based on a system prompt that instructs it on how to manage incoming user messages. It can either:
- Answer the user directly, or
- Delegate the query to a specialist agent by calling `delegate_to_<agent_name>(user_query)`.

The user sees all messages and, optionally, all tool calls, ensuring transparency in the conversation flow.
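The per-agent delegation tools can be pictured as dynamically generated functions, one per configured specialist. The sketch below is illustrative only and assumes hypothetical names (`make_delegation_tool`, `specialist_agents`); it is not the repo's actual implementation.

```python
# Hypothetical sketch: generate one delegate_to_<agent_name> tool per specialist.
# In the real supervisor each tool would invoke the specialist agent's graph;
# here the "agents" are stub callables so the pattern is runnable standalone.

def make_delegation_tool(agent_name, handler):
    """Return a callable named delegate_to_<agent_name> that forwards the query."""
    def tool(user_query: str) -> str:
        return handler(user_query)
    tool.__name__ = f"delegate_to_{agent_name}"
    return tool

# Stub specialists standing in for deployed agents.
specialist_agents = {
    "research": lambda q: f"[research agent] handling: {q}",
    "coding": lambda q: f"[coding agent] handling: {q}",
}

tools = [make_delegation_tool(name, fn) for name, fn in specialist_agents.items()]
```

The supervisor's LLM then chooses among these tools (or none, to answer directly) based on its system prompt.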
This project uses LangGraph custom auth to authenticate requests to the server. It's configured to use Supabase as the authentication provider, though it can easily be swapped for another service.
Requests must include an `Authorization` header with a `Bearer` token. This token should be a valid JWT from Supabase.

The auth handler then takes that token and verifies it with Supabase. If the token is valid, it returns the user's identity; if the token is invalid, it raises an exception. This means you must have a Supabase URL & key set in your environment variables to use this auth handler:
```bash
SUPABASE_URL=""
# Ensure this is your Supabase Service Role key
SUPABASE_KEY=""
```
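Since the auth handler cannot verify tokens without these credentials, it can help to fail fast at startup when they are missing. This is a stdlib-only sketch; the helper name `require_supabase_env` is hypothetical, not part of the repo.

```python
import os

def require_supabase_env(env=os.environ):
    """Fail fast if the Supabase credentials needed by the auth handler are missing."""
    missing = [name for name in ("SUPABASE_URL", "SUPABASE_KEY") if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return env["SUPABASE_URL"], env["SUPABASE_KEY"]
```

Calling this once when the server boots surfaces a clear error instead of a failed token verification on the first request.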
The auth handler is then used as middleware for all requests to the server. It is configured to run on the following events:
- `threads.create`
- `threads.read`
- `threads.delete`
- `threads.update`
- `threads.search`
- `assistants.create`
- `assistants.read`
- `assistants.delete`
- `assistants.update`
- `assistants.search`
- `store`
For creation methods, it auto-injects the user's ID into the metadata. This is then used in all read/update/delete/search methods to ensure that the user can only access their own threads and assistants.
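The owner-scoping pattern described above can be sketched in plain Python. In the real server this logic lives in LangGraph custom auth handlers; the function names and metadata layout below are assumptions for illustration.

```python
# Illustrative owner-scoping sketch (stdlib only, hypothetical names).

def on_create(user_id: str, metadata: dict) -> dict:
    """Inject the authenticated user's ID into new-resource metadata."""
    return {**metadata, "owner": user_id}

def can_access(user_id: str, metadata: dict) -> bool:
    """Allow read/update/delete/search only on resources the user owns."""
    return metadata.get("owner") == user_id
```

Because the owner is stamped at creation time and checked on every subsequent operation, one user's threads and assistants are invisible to another user even though both hit the same server.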
By using custom authentication, we can call this LangGraph server directly from a frontend application, without having to worry about exposing API keys/secrets, since you only need a JWT token from Supabase to authenticate.
This agent is configured to accept model API keys set by a user in the OAP settings page. The default OAP implementation allows users to add their own OpenAI, Anthropic, and Google API keys, and these can easily be extended in OAP.
If a user sets their API keys in the OAP settings page, those keys will be passed to the agent through the config in an `apiKeys` field and used by default. Otherwise, we automatically fall back to the environment variables set in the agent deployed on LangGraph Platform.
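The resolution order above (user-supplied `apiKeys` first, deployment environment variables second) can be sketched as follows. The exact key names inside `apiKeys` and the env-var mapping are assumptions, not the repo's confirmed schema.

```python
import os
from typing import Optional

# Assumed provider-to-env-var mapping for the fallback path.
ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def resolve_api_key(provider: str, config: dict, env=os.environ) -> Optional[str]:
    """Prefer the user's key from the run config; otherwise fall back to the env."""
    user_keys = config.get("configurable", {}).get("apiKeys") or {}
    return user_keys.get(provider) or env.get(ENV_VARS[provider])
```

Returning `None` when neither source has a key lets the caller raise a clear configuration error before any model call is attempted.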