Parallel Tool Calling

When an agent calls multiple tools, Letta can execute them concurrently instead of sequentially.

Parallel tool calling has two configuration levels:

  • Agent LLM config: Controls whether the LLM can request multiple tool calls at once
  • Individual tool settings: Controls whether requested tools actually execute in parallel or sequentially

Parallel tool calling is supported for OpenAI and Anthropic models.

Set parallel_tool_calls: true in the agent’s LLM config:

```typescript
const agent = await client.agents.create({
  llm_config: {
    model: "anthropic/claude-sonnet-4-20250514",
    parallel_tool_calls: true,
  },
});
```

Individual tools must opt-in to parallel execution:

```typescript
await client.tools.update(toolId, {
  enable_parallel_execution: true,
});
```

By default, tools execute sequentially (`enable_parallel_execution: false`).
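Because the default is sequential, each parallel-capable tool has to be opted in individually. A minimal sketch of batching the opt-in across several read-only tools — the tool IDs are placeholders, and the recording stub stands in for the Letta client so the sketch runs on its own (in a real script, use the `client` from the snippets above):

```typescript
// Stand-in for the Letta client so this sketch is self-contained;
// replace with the real client in practice.
const updated: string[] = [];
const client = {
  tools: {
    update: async (toolId: string, opts: { enable_parallel_execution: boolean }) => {
      updated.push(toolId);
    },
  },
};

// Placeholder IDs for read-only tools that are safe to run concurrently.
const readOnlyToolIds = ["tool-search-web", "tool-search-database"];

// Each update is an independent request, so they can be issued together.
await Promise.all(
  readOnlyToolIds.map((toolId) =>
    client.tools.update(toolId, { enable_parallel_execution: true })
  )
);
```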

To enable parallel tool calls at the agent level:

  1. Open Settings → LLM Config
  2. Enable “Parallel tool calls”

To enable parallel execution for an individual tool:

  1. Open the Tools panel
  2. Click a tool to open it
  3. Go to the Settings tab
  4. Enable “Enable parallel execution”

When the agent calls multiple tools:

  • Sequential tools execute one-by-one
  • Parallel-enabled tools execute concurrently
  • Mixed: sequential tools complete first, then parallel tools execute together

Example:

```
Agent calls:
- search_web (parallel: true)
- search_database (parallel: true)
- send_message (parallel: false)

Execution:
1. send_message executes
2. search_web AND search_database execute concurrently
```
  • Parallel execution is automatically disabled when tool rules are configured
  • Only enable for tools safe to run concurrently (e.g., read-only operations)
  • Tools that modify shared state should remain sequential
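The mixed-execution ordering above can be illustrated with plain promises. The tool functions here are hypothetical stand-ins (not Letta internals) that record when they finish:

```typescript
const order: string[] = [];
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Hypothetical tools that record their completion order.
async function sendMessage() { order.push("send_message"); } // parallel: false
async function searchWeb() { await delay(20); order.push("search_web"); } // parallel: true
async function searchDatabase() { await delay(10); order.push("search_database"); } // parallel: true

// Sequential tools complete first, one-by-one...
await sendMessage();

// ...then parallel-enabled tools run concurrently; their relative
// finish order depends only on their own latencies.
await Promise.all([searchWeb(), searchDatabase()]);
```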