Create Batch

client.batches.create(body: BatchCreateParams { requests, callback_url }, options?: RequestOptions): BatchJob { id, agent_id, background, 15 more }
POST /v1/messages/batches

Submit a batch of agent runs for asynchronous processing.

Creates a job that will fan out messages to all listed agents and process them in parallel. The request will be rejected if it exceeds 256MB.
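As a quick orientation before the full parameter reference, here is a hedged sketch of a multi-request batch using only fields documented below (agent_id, input, max_steps, include_return_message_types, callback_url). The agent IDs and callback URL are placeholders, not real values.

import Letta from '@letta-ai/letta-client';

const client = new Letta({ apiKey: process.env.LETTA_API_KEY ?? '' });

// Fan one prompt out to several agents in a single batch job.
const batchJob = await client.batches.create({
  requests: [
    {
      agent_id: 'agent-123', // placeholder agent ID
      input: 'Summarize the latest support tickets.',
      max_steps: 10,
      // Only return assistant messages for this request.
      include_return_message_types: ['assistant_message'],
    },
    {
      agent_id: 'agent-456', // placeholder agent ID
      input: 'Draft a weekly status update.',
    },
  ],
  // Optional: Letta POSTs {job_id, status, completed_at} here when the batch finishes.
  callback_url: 'https://example.com/letta/batch-callback',
});

console.log(batchJob.id, batchJob.status);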

Parameters
body: BatchCreateParams { requests, callback_url }
requests: Array<Request>

List of requests to be processed in batch.

agent_id: string

The ID of the agent to send this batch request to.

Deprecated: assistant_message_tool_kwarg?: string

The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Deprecated: assistant_message_tool_name?: string

The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Deprecated: enable_thinking?: string

If set to True, enables reasoning before responses or tool calls from the agent.

include_return_message_types?: Array<MessageType> | null

Only return specified message types in the response. If None (default) returns all messages.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
input?: string | Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null

Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].

Accepts one of the following:
string
Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more>
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoningContent { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: Array<Summary>

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content?: string

The encrypted reasoning content.

type?: "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
max_steps?: number

Maximum number of steps the agent should take to process the request.

messages?: Array<MessageCreate { content, role, batch_item_id, 5 more } | ApprovalCreate { approval_request_id, approvals, approve, 3 more } > | null

The messages to be sent to the agent.

Accepts one of the following:
MessageCreate { content, role, batch_item_id, 5 more }

Request to create a message

content: Array<LettaMessageContentUnion> | string

The content of the message.

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
string
role: "user" | "system" | "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id?: string | null

The id of the LLMBatchItem that this message is associated with

group_id?: string | null

The multi-agent group that the message was sent in

name?: string | null

The name of the participant.

otid?: string | null

The offline threading id associated with this message

sender_id?: string | null

The id of the sender of the message; this can be an identity id or an agent id

type?: "message" | null

The message type to be created.

Accepts one of the following:
"message"
ApprovalCreate { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

Deprecated: approval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecated: approve?: boolean | null

Whether the tool has been approved

group_id?: string | null

The multi-agent group that the message was sent in

Deprecated: reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
Deprecated: use_assistant_message?: boolean

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
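To make the messages union above concrete, here is a sketch of two batch requests: one sending a plain user message (MessageCreate), and one answering a pending tool-approval request (ApprovalCreate). The agent and tool call IDs are placeholders.

const requests = [
  // A plain user message via MessageCreate:
  {
    agent_id: 'agent-123', // placeholder
    messages: [{ role: 'user' as const, content: 'Run the nightly report.' }],
  },
  // Answering a pending approval request via ApprovalCreate:
  {
    agent_id: 'agent-456', // placeholder
    messages: [
      {
        type: 'approval' as const,
        approvals: [
          {
            type: 'approval' as const,
            tool_call_id: 'call_abc123', // placeholder tool call ID
            approve: true,
            reason: 'Reviewed and safe to execute.',
          },
        ],
      },
    ],
  },
];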

callback_url?: string | null

Optional URL to call via POST when the batch completes. The callback payload will be a JSON object with the following fields: {'job_id': string, 'status': string, 'completed_at': string}. Where 'job_id' is the unique batch job identifier, 'status' is the final batch status (e.g., 'completed', 'failed'), and 'completed_at' is an ISO 8601 timestamp indicating when the batch job completed.

maxLength: 2083
minLength: 1
format: uri
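The callback payload has a fixed shape, so a receiver only needs to parse job_id, status, and completed_at. A minimal sketch using Node's built-in http module (the port and route path are assumptions):

import { createServer } from 'node:http';

// Minimal receiver for the batch completion callback.
createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/letta/batch-callback') { // path is an assumption
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      const { job_id, status, completed_at } = JSON.parse(body);
      console.log(`batch ${job_id} finished with status ${status} at ${completed_at}`);
      res.writeHead(204).end();
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(8080);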
Returns
BatchJob { id, agent_id, background, 15 more }
id: string

The human-friendly ID of the Job

agent_id?: string | null

The agent associated with this job/run.

background?: boolean | null

Whether the job was created in background mode.

callback_error?: string | null

Optional error message from attempting to POST the callback endpoint.

callback_sent_at?: string | null

Timestamp when the callback was last attempted.

format: date-time
callback_status_code?: number | null

HTTP status code returned by the callback endpoint.

callback_url?: string | null

If set, POST to this URL when the job completes.

completed_at?: string | null

The timestamp of when the job was completed.

format: date-time
created_at?: string

The timestamp of when the job was created.

format: date-time
created_by_id?: string | null

The id of the user that made this object.

job_type?: JobType
Accepts one of the following:
"job"
"run"
"batch"
last_updated_by_id?: string | null

The id of the user that last updated this object.

metadata?: Record<string, unknown> | null

The metadata of the job.

status?: JobStatus

The status of the job.

Accepts one of the following:
"created"
"running"
"completed"
"failed"
"pending"
"cancelled"
"expired"
stop_reason?: StopReasonType | null

The reason why the job was stopped.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
total_duration_ns?: number | null

Total run duration in nanoseconds

ttft_ns?: number | null

Time to first token for a run in nanoseconds

updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
Create Batch
import Letta from '@letta-ai/letta-client';

const client = new Letta({
  apiKey: 'My API Key',
});

const batchJob = await client.batches.create({ requests: [{ agent_id: 'agent_id' }] });

console.log(batchJob.id);
Returns Examples
{
  "id": "job-123e4567-e89b-12d3-a456-426614174000",
  "agent_id": "agent_id",
  "background": true,
  "callback_error": "callback_error",
  "callback_sent_at": "2019-12-27T18:11:19.117Z",
  "callback_status_code": 0,
  "callback_url": "callback_url",
  "completed_at": "2019-12-27T18:11:19.117Z",
  "created_at": "2019-12-27T18:11:19.117Z",
  "created_by_id": "created_by_id",
  "job_type": "job",
  "last_updated_by_id": "last_updated_by_id",
  "metadata": {
    "foo": "bar"
  },
  "status": "created",
  "stop_reason": "end_turn",
  "total_duration_ns": 0,
  "ttft_ns": 0,
  "updated_at": "2019-12-27T18:11:19.117Z"
}
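The create call returns immediately with status "created"; completion is signaled either through callback_url or by re-fetching the job. The sketch below continues from the example above and assumes the SDK exposes a retrieve method for batch jobs (client.batches.retrieve(id) is an assumption here; check the SDK for the actual call), keying off the JobStatus values listed under Returns.

// Assumption: client.batches.retrieve(id) exists; verify against the SDK.
const TERMINAL = new Set(['completed', 'failed', 'cancelled', 'expired']);

async function waitForBatch(jobId: string) {
  for (;;) {
    const job = await client.batches.retrieve(jobId); // assumed method name
    if (job.status && TERMINAL.has(job.status)) return job;
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // poll every 5 seconds
  }
}

const finished = await waitForBatch(batchJob.id);
console.log(finished.status, finished.stop_reason);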