Send Message
Process a user message and return the agent's response. This endpoint accepts a message from a user and processes it through the agent.
The response format is controlled by the streaming field in the request body:
- streaming=false (default): Returns a complete LettaResponse with all messages
- streaming=true: Returns a Server-Sent Events (SSE) stream

Additional streaming options (only used when streaming=true):
- stream_tokens: Stream individual tokens instead of complete steps
- include_pings: Include keepalive pings to prevent connection timeouts
- background: Process the request in the background
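For example, a token-streaming request can be made like this (a sketch that reuses the $AGENT_ID and $LETTA_API_KEY placeholders from the example at the bottom of this page; curl's -N flag just disables output buffering so SSE events print as they arrive):

curl -N https://api.letta.com/v1/agents/$AGENT_ID/messages \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $LETTA_API_KEY" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "streaming": true,
    "stream_tokens": true,
    "include_pings": true
  }'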
Path Parameters
agent_id: string
The ID of the agent, in the format 'agent-…'.
Body Parameters
assistant_message_tool_kwarg: optional string (Deprecated)
The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
assistant_message_tool_name: optional string (Deprecated)
The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
background: optional boolean
Whether to process the request in the background (only used when streaming=true).
enable_thinking: optional string (Deprecated)
If set to True, enables reasoning before responses or tool calls from the agent.
include_pings: optional boolean
Whether to include periodic keepalive ping messages in the stream to prevent connection timeouts (only used when streaming=true).
include_return_message_types: optional array of strings
Only return the specified message types in the response. If None (default), returns all messages.
input: optional string or array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].
UnionMember1 = array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
SummarizedReasoning = object { id, summary, encrypted_content, type }
The style of reasoning content returned by the OpenAI Responses API
id: string
The unique identifier for this reasoning step.
summary: array of object { index, text }
Summaries of the reasoning content.
index: number
The index of the summary part.
text: string
The text of the summary part.
encrypted_content: optional string
The encrypted reasoning content.
type: optional "summarized_reasoning"
Indicates this is a summarized reasoning step.
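To illustrate the content union above, a user message combining text with a URL-sourced image could be sent as follows (a sketch; the image URL is a placeholder, and base64 or Letta file sources use the same shape with their respective fields):

# sketch: the image URL below is a placeholder
curl https://api.letta.com/v1/agents/$AGENT_ID/messages \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $LETTA_API_KEY" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image", "source": {"type": "url", "url": "https://example.com/photo.jpg"}}
      ]
    }]
  }'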
max_steps: optional number
Maximum number of steps the agent should take to process the request.
messages: optional array of MessageCreate { content, role, batch_item_id, 5 more } or ApprovalCreate { approval_request_id, approvals, approve, 3 more }
The messages to be sent to the agent.
MessageCreate = object { content, role, batch_item_id, 5 more }
Request to create a message
content: string or array of content parts
The content of the message.
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
role: "user" or "system" or "assistant"
The role of the participant.
batch_item_id: optional string
The id of the LLMBatchItem that this message is associated with
group_id: optional string
The multi-agent group that the message was sent in
name: optional string
The name of the participant.
otid: optional string
The offline threading id associated with this message
sender_id: optional string
The id of the sender of the message, can be an identity id or agent id
type: optional "message"
The message type to be created.
ApprovalCreate = object { approval_request_id, approvals, approve, 3 more }
Input to approve or deny a tool call request
approval_request_id: optional string (Deprecated)
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
approve: optional boolean (Deprecated)
Whether the tool has been approved
group_id: optional string
The multi-agent group that the message was sent in
reason: optional string (Deprecated)
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
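To answer an ApprovalRequestMessage, send an ApprovalCreate back through this same endpoint, for example (a sketch; the tool_call_id value is a placeholder and should come from the tool_calls on the approval request you received):

# sketch: replace the placeholder tool_call_id with the one from the approval request
curl https://api.letta.com/v1/agents/$AGENT_ID/messages \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $LETTA_API_KEY" \
  -d '{
    "messages": [{
      "type": "approval",
      "approvals": [
        {"type": "approval", "approve": true, "tool_call_id": "tool_call_id", "reason": "Safe to run"}
      ]
    }]
  }'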
stream_tokens: optional boolean
Flag to determine if individual tokens should be streamed, rather than streaming per step (only used when streaming=true).
streaming: optional boolean
If True, returns a streaming response (Server-Sent Events). If False (default), returns a complete response.
use_assistant_message: optional boolean (Deprecated)
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
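Putting the body parameters together, a minimal non-streaming request that uses the input shorthand and caps the agent's step count might look like this (a sketch; the max_steps value is arbitrary):

curl https://api.letta.com/v1/agents/$AGENT_ID/messages \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $LETTA_API_KEY" \
  -d '{
    "input": "Summarize our last conversation.",
    "max_steps": 10
  }'

Because streaming defaults to false, this returns a complete LettaResponse as described under Returns below.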
Returns
LettaResponse = object { messages, stop_reason, usage }
Response object from an agent interaction, consisting of the new messages generated by the agent and usage statistics.
The type of the returned messages can be either Message or LettaMessage, depending on what was specified in the request.
Attributes:
- messages (List[Union[Message, LettaMessage]]): The messages returned by the agent.
- usage (LettaUsageStatistics): The usage statistics.
messages: array of SystemMessage or UserMessage or ReasoningMessage or 8 more
The messages returned by the agent.
SystemMessage = object { id, content, date, 8 more }
A message generated by the system. Never streamed back on a response, only used for cursor pagination.
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- content (str): The message content sent by the system
content: string
The message content sent by the system
message_type: optional "system_message"
The type of the message.
UserMessage = object { id, content, date, 8 more }
A message sent by the user. Never streamed back on a response, only used for cursor pagination.
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)
content: string or array of content parts
The message content sent by the user (can be a string or an array of multi-modal content parts)
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
message_type: optional "user_message"
The type of the message.
ReasoningMessage = object { id, date, reasoning, 10 more }
Representation of an agent's internal reasoning.
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
- reasoning (str): The internal reasoning of the agent
- signature (Optional[str]): The model-generated signature of the reasoning step
message_type: optional "reasoning_message"
The type of the message.
source: optional "reasoner_model" or "non_reasoner_model"
HiddenReasoningMessage = object { id, date, state, 9 more }
Representation of an agent's internal reasoning where reasoning content has been hidden from the response.
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
- hidden_reasoning (Optional[str]): The internal reasoning of the agent
state: "redacted" or "omitted"
message_type: optional "hidden_reasoning_message"
The type of the message.
ToolCallMessage = object { id, date, tool_call, 9 more }
A message representing a request to call a tool (generated by the LLM to trigger tool execution).
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- tool_call (Union[ToolCall, ToolCallDelta]): The tool call
tool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id } (Deprecated)
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "tool_call_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
ToolReturnMessage = object { id, date, status, 13 more }
A message representing the return value of a tool call (generated by Letta executing the requested tool).
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- tool_return (str): The return value of the tool (deprecated, use tool_returns)
- status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns)
- tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns)
- stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns)
- stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns)
- tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support
status: "success" or "error" (Deprecated)
message_type: optional "tool_return_message"
The type of the message.
status: "success" or "error"
type: optional "tool"
The message type to be created.
AssistantMessage = object { id, content, date, 8 more }
A message sent by the LLM in response to user input. Used in the LLM context.
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)
content: string or array of content parts
The message content sent by the agent (can be a string or an array of content parts)
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
message_type: optional "assistant_message"
The type of the message.
ApprovalRequestMessage = object { id, date, tool_call, 9 more }
A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- tool_call (ToolCall): The tool call
tool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id } (Deprecated)
The tool call that the LLM has requested to run
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "approval_request_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool calls that the LLM has requested to run, which are pending approval
ToolCallDelta = object { arguments, name, tool_call_id }
ApprovalResponseMessage = object { id, date, approval_request_id, 11 more }
A message representing a response from the user indicating whether a tool has been approved to run.
Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- approve (bool): Whether the tool has been approved
- approval_request_id: The ID of the approval request
- reason (Optional[str]): An optional explanation for the provided approval status
approval_request_id: optional string (Deprecated)
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
approve: optional boolean (Deprecated)
Whether the tool has been approved
message_type: optional "approval_response_message"
The type of the message.
reason: optional string (Deprecated)
An optional explanation for the provided approval status
SummaryMessage = object { id, date, summary, 8 more }
A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.
message_type: optional "summary"
EventMessage = object { id, date, event_data, 9 more }
A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.
event_type: "compaction"
message_type: optional "event"
stop_reason: object { stop_reason, message_type }
The stop reason from Letta indicating why the agent loop stopped execution.
stop_reason: string (e.g. "end_turn")
The reason why execution stopped.
message_type: optional "stop_reason"
The type of the message.
usage: object { completion_tokens, message_type, prompt_tokens, 3 more }
The usage statistics of the agent.
completion_tokens: optional number
The number of tokens generated by the agent.
message_type: optional "usage_statistics"
prompt_tokens: optional number
The number of tokens in the prompt.
run_ids: optional array of string
The background task run IDs associated with the agent interaction
step_count: optional number
The number of steps taken by the agent.
total_tokens: optional number
The total number of tokens processed by the agent.
Send Message
curl https://api.letta.com/v1/agents/$AGENT_ID/messages \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $LETTA_API_KEY" \
-d '{}'
{
"messages": [
{
"id": "id",
"content": "content",
"date": "2019-12-27T18:11:19.117Z",
"is_err": true,
"message_type": "system_message",
"name": "name",
"otid": "otid",
"run_id": "run_id",
"sender_id": "sender_id",
"seq_id": 0,
"step_id": "step_id"
}
],
"stop_reason": {
"stop_reason": "end_turn",
"message_type": "stop_reason"
},
"usage": {
"completion_tokens": 0,
"message_type": "usage_statistics",
"prompt_tokens": 0,
"run_ids": [
"string"
],
"step_count": 0,
"total_tokens": 0
}
}
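For non-streaming calls, the message_type field shown above makes it straightforward to pull out just the agent's replies, for example with jq (a sketch that assumes jq is installed; not part of the API itself):

curl https://api.letta.com/v1/agents/$AGENT_ID/messages \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $LETTA_API_KEY" \
  -d '{"input": "Hello!"}' \
  | jq '.messages[] | select(.message_type == "assistant_message") | .content'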