Send Message Streaming
Process a user message and return the agent's response. This endpoint accepts a message from a user and processes it through the agent. It always streams the steps of the response, and also streams individual tokens if 'stream_tokens' is set to True.
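For orientation, here is a minimal sketch of calling this endpoint from the TypeScript SDK shown in the example at the bottom of this page. It assumes the returned value is an async-iterable stream of the LettaStreamingResponse events documented under Returns, and it uses a placeholder agent ID.

import Letta from '@letta-ai/letta-client';

const client = new Letta({ apiKey: 'My API Key' });
const agentId = 'agent-123e4567-e89b-42d3-8456-426614174000'; // placeholder ID

const stream = await client.agents.messages.stream(agentId, {
  messages: [{ role: 'user', content: 'Hello!' }],
  stream_tokens: true, // also stream individual tokens, not just per-step events
});

// Assumption: the SDK exposes the SSE stream as an async iterable of events.
for await (const event of stream) {
  console.log(event.message_type, event);
}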
Parameters
agentID: string
The ID of the agent in the format 'agent-<uuid>'.
body: MessageStreamParams { assistant_message_tool_kwarg, assistant_message_tool_name, background, 9 more }
assistant_message_tool_kwarg?: string (Deprecated)
The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
assistant_message_tool_name?: string (Deprecated)
The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
background?: boolean
Whether to process the request in the background (only used when streaming=true).
enable_thinking?: string (Deprecated)
If set to True, enables reasoning before responses or tool calls from the agent.
include_pings?: boolean
Whether to include periodic keepalive ping messages in the stream to prevent connection timeouts (only used when streaming=true).
include_return_message_types?: Array<MessageType> | null
Only return specified message types in the response. If None (default) returns all messages.
input?: string | Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null
Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].
Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more>
TextContent { text, signature, type }
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }
The source of the image.
URLImage { url, type }
url: string
The URL of the image.
type?: "url"
The source type for the image.
Base64Image { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type?: "base64"
The source type for the image.
LettaImage { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data?: string | null
The base64 encoded image data.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type?: string | null
The media type for the image.
type?: "letta"
The source type for the image.
type?: "image"
The type of the message.
ToolCallContent { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: Record<string, unknown>
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature?: string | null
Stores a unique identifier for any reasoning associated with this tool call.
type?: "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type?: "tool_return"
Indicates this content represents a tool return event.
ReasoningContent { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature?: string | null
A unique identifier for this reasoning step.
type?: "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type?: "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature?: string | null
A unique identifier for this reasoning step.
type?: "omitted_reasoning"
Indicates this is an omitted reasoning step.
SummarizedReasoningContent { id, summary, encrypted_content, type }
The style of reasoning content returned by the OpenAI Responses API
id: string
The unique identifier for this reasoning step.
summary: Array<Summary>
Summaries of the reasoning content.
index: number
The index of the summary part.
text: string
The text of the summary part.
encrypted_content?: string
The encrypted reasoning content.
type?: "summarized_reasoning"
Indicates this is a summarized reasoning step.
max_steps?: number
Maximum number of steps the agent should take to process the request.
messages?: Array<MessageCreate { content, role, batch_item_id, 5 more } | ApprovalCreate { approval_request_id, approvals, approve, 3 more } > | null
The messages to be sent to the agent.
MessageCreate { content, role, batch_item_id, 5 more }
Request to create a message
content: string | Array<LettaMessageContentUnion>
The content of the message.
Array<LettaMessageContentUnion>
TextContent { text, signature, type }
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }
The source of the image.
URLImage { url, type }
url: string
The URL of the image.
type?: "url"
The source type for the image.
Base64Image { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type?: "base64"
The source type for the image.
LettaImage { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data?: string | null
The base64 encoded image data.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type?: string | null
The media type for the image.
type?: "letta"
The source type for the image.
type?: "image"
The type of the message.
ToolCallContent { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: Record<string, unknown>
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature?: string | null
Stores a unique identifier for any reasoning associated with this tool call.
type?: "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type?: "tool_return"
Indicates this content represents a tool return event.
ReasoningContent { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature?: string | null
A unique identifier for this reasoning step.
type?: "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type?: "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature?: string | null
A unique identifier for this reasoning step.
type?: "omitted_reasoning"
Indicates this is an omitted reasoning step.
role: "user" | "system" | "assistant"
The role of the participant.
batch_item_id?: string | null
The id of the LLMBatchItem that this message is associated with
group_id?: string | null
The multi-agent group that the message was sent in
name?: string | null
The name of the participant.
otid?: string | null
The offline threading id associated with this message
sender_id?: string | null
The id of the sender of the message, can be an identity id or agent id
type?: "message" | null
The message type to be created.
ApprovalCreate { approval_request_id, approvals, approve, 3 more }
Input to approve or deny a tool call request
approval_request_id?: string | null (Deprecated)
The message ID of the approval request
approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null
The list of approval responses
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason?: string | null
An optional explanation for the provided approval status
type?: "approval"
The message type to be created.
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
type?: "tool"
The message type to be created.
approve?: boolean | null (Deprecated)
Whether the tool has been approved
group_id?: string | null
The multi-agent group that the message was sent in
reason?: string | null (Deprecated)
An optional explanation for the provided approval status
type?: "approval"
The message type to be created.
stream_tokens?: boolean
Flag to determine if individual tokens should be streamed, rather than streaming per step (only used when streaming=true).
streaming?: boolean
If True, returns a streaming response (Server-Sent Events). If False (default), returns a complete response.
use_assistant_message?: boolean (Deprecated)
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
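A sketch of a fuller MessageStreamParams body that exercises several of the fields above (multi-modal message content, max_steps, stream_tokens, include_pings, background). Field names follow the schema documented here; the exact TypeScript typings and the two-argument stream(agentId, body) call shape are assumptions.

import Letta from '@letta-ai/letta-client';

const client = new Letta({ apiKey: 'My API Key' });
const agentId = 'agent-123e4567-e89b-42d3-8456-426614174000'; // placeholder ID
const base64Png = '<base64-encoded PNG data>'; // placeholder image payload

const stream = await client.agents.messages.stream(agentId, {
  messages: [
    {
      role: 'user' as const,
      content: [
        { type: 'text' as const, text: 'What is in this image?' },
        {
          type: 'image' as const,
          source: {
            type: 'base64' as const,
            media_type: 'image/png',
            data: base64Png,
            detail: 'auto', // low | high | auto
          },
        },
      ],
    },
  ],
  max_steps: 10,       // cap the number of agent steps for this request
  stream_tokens: true, // token-level streaming in addition to step events
  include_pings: true, // keep-alive pings for long-running requests
  background: false,   // set true to process the run in the background
});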
Returns
LettaStreamingResponse = SystemMessage { id, content, date, 8 more } | UserMessage { id, content, date, 8 more } | ReasoningMessage { id, date, reasoning, 10 more } | 9 more
Streaming response type for Server-Sent Events (SSE) endpoints. Each event in the stream will be one of these types.
SystemMessage { id, content, date, 8 more }
A message generated by the system. Never streamed back on a response, only used for cursor pagination.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (str): The message content sent by the system
content: string
The message content sent by the system
message_type?: "system_message"
The type of the message.
UserMessage { id, content, date, 8 more }
A message sent by the user. Never streamed back on a response, only used for cursor pagination.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)
content: string | Array<LettaUserMessageContentUnion>
The message content sent by the user (can be a string or an array of multi-modal content parts)
Array<LettaUserMessageContentUnion>
TextContent { text, signature, type }
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }
The source of the image.
URLImage { url, type }
url: string
The URL of the image.
type?: "url"
The source type for the image.
Base64Image { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type?: "base64"
The source type for the image.
LettaImage { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data?: string | null
The base64 encoded image data.
detail?: string | null
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type?: string | null
The media type for the image.
type?: "letta"
The source type for the image.
type?: "image"
The type of the message.
message_type?: "user_message"
The type of the message.
ReasoningMessage { id, date, reasoning, 10 more }
Representation of an agent's internal reasoning.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
reasoning (str): The internal reasoning of the agent
signature (Optional[str]): The model-generated signature of the reasoning step
message_type?: "reasoning_message"
The type of the message.
source?: "reasoner_model" | "non_reasoner_model"
HiddenReasoningMessage { id, date, state, 9 more }
Representation of an agent's internal reasoning where reasoning content has been hidden from the response.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
hidden_reasoning (Optional[str]): The internal reasoning of the agent
state: "redacted" | "omitted"
message_type?: "hidden_reasoning_message"
The type of the message.
ToolCallMessage { id, date, tool_call, 9 more }
A message representing a request to call a tool (generated by the LLM to trigger tool execution).
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (Union[ToolCall, ToolCallDelta]): The tool call
tool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id } (Deprecated)
ToolCall { arguments, name, tool_call_id }
ToolCallDelta { arguments, name, tool_call_id }
message_type?: "tool_call_message"
The type of the message.
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null
Array<ToolCall { arguments, name, tool_call_id } >
ToolCallDelta { arguments, name, tool_call_id }
ToolReturnMessage { id, date, status, 13 more }
A message representing the return value of a tool call (generated by Letta executing the requested tool).
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_return (str): The return value of the tool (deprecated, use tool_returns)
status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns)
tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns)
stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns)
stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns)
tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support
status: "success" | "error" (Deprecated)
message_type?: "tool_return_message"
The type of the message.
status: "success" | "error"
type?: "tool"
The message type to be created.
AssistantMessage { id, content, date, 8 more }
A message sent by the LLM in response to user input. Used in the LLM context.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)
content: string | Array<LettaAssistantMessageContentUnion>
The message content sent by the agent (can be a string or an array of content parts)
Array<LettaAssistantMessageContentUnion { text, signature, type } >
text: string
The text content of the message.
signature?: string | null
Stores a unique identifier for any reasoning associated with this text content.
type?: "text"
The type of the message.
message_type?: "assistant_message"
The type of the message.
ApprovalRequestMessage { id, date, tool_call, 9 more }
A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (ToolCall): The tool call
tool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id } (Deprecated)
The tool call that has been requested by the LLM to run
ToolCall { arguments, name, tool_call_id }
ToolCallDelta { arguments, name, tool_call_id }
message_type?: "approval_request_message"
The type of the message.
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null
The tool calls that have been requested by the LLM to run, which are pending approval
Array<ToolCall { arguments, name, tool_call_id } >
ToolCallDelta { arguments, name, tool_call_id }
ApprovalResponseMessage { id, date, approval_request_id, 11 more }
A message representing a response from the user indicating whether a tool has been approved to run.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
approve (bool): Whether the tool has been approved
approval_request_id (str): The ID of the approval request
reason (Optional[str]): An optional explanation for the provided approval status
approval_request_id?: string | null (Deprecated)
The message ID of the approval request
approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null
The list of approval responses
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason?: string | null
An optional explanation for the provided approval status
type?: "approval"
The message type to be created.
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
type?: "tool"
The message type to be created.
approve?: boolean | null (Deprecated)
Whether the tool has been approved
message_type?: "approval_response_message"
The type of the message.
reason?: string | null (Deprecated)
An optional explanation for the provided approval status
LettaPing { message_type }
Ping messages are a keep-alive to prevent SSE streams from timing out during long-running requests.
message_type: "ping"
The type of the message.
LettaStopReason { stop_reason, message_type }
The stop reason from Letta indicating why agent loop stopped execution.
The reason why execution stopped.
message_type?: "stop_reason"
The type of the message.
LettaUsageStatistics { completion_tokens, message_type, prompt_tokens, 3 more }
Usage statistics for the agent interaction.
Attributes:
completion_tokens (int): The number of tokens generated by the agent.
prompt_tokens (int): The number of tokens in the prompt.
total_tokens (int): The total number of tokens processed by the agent.
step_count (int): The number of steps taken by the agent.
completion_tokens?: number
The number of tokens generated by the agent.
message_type?: "usage_statistics"
prompt_tokens?: number
The number of tokens in the prompt.
run_ids?: Array<string> | null
The background task run IDs associated with the agent interaction
step_count?: number
The number of steps taken by the agent.
total_tokens?: number
The total number of tokens processed by the agent.
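The stream interleaves the event types above. Here is a sketch of dispatching on message_type, assuming `stream` was obtained as in the request sketches earlier on this page:

for await (const event of stream) {
  switch (event.message_type) {
    case 'reasoning_message':
      console.log('[reasoning]', event.reasoning);
      break;
    case 'assistant_message':
      console.log(event.content); // string or array of text parts
      break;
    case 'tool_call_message':
      console.log('tool call requested:', event.tool_calls);
      break;
    case 'tool_return_message':
      console.log('tool returned with status:', event.status);
      break;
    case 'approval_request_message':
      console.log('approval needed for:', event.tool_calls);
      break;
    case 'ping':
      break; // keep-alive only; safe to ignore
    case 'stop_reason':
      console.log('run stopped:', event.stop_reason);
      break;
    case 'usage_statistics':
      console.log('total tokens:', event.total_tokens);
      break;
  }
}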
Send Message Streaming
TypeScript
import Letta from '@letta-ai/letta-client';
const client = new Letta({
apiKey: 'My API Key',
});
const lettaStreamingResponse = await client.agents.messages.stream(
'agent-123e4567-e89b-42d3-8456-426614174000',
);
console.log(lettaStreamingResponse);
Returns Examples
{
"id": "id",
"content": "content",
"date": "2019-12-27T18:11:19.117Z",
"is_err": true,
"message_type": "system_message",
"name": "name",
"otid": "otid",
"run_id": "run_id",
"sender_id": "sender_id",
"seq_id": 0,
"step_id": "step_id"
}
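When the agent emits an approval_request_message, the follow-up request can approve or deny the pending tool call using an ApprovalCreate message (see Parameters above). A sketch of that follow-up, with a hypothetical tool_call_id copied from the approval request:

import Letta from '@letta-ai/letta-client';

const client = new Letta({ apiKey: 'My API Key' });
const agentId = 'agent-123e4567-e89b-42d3-8456-426614174000'; // placeholder ID

const resumed = await client.agents.messages.stream(agentId, {
  messages: [
    {
      type: 'approval' as const,
      approvals: [
        {
          type: 'approval' as const,
          tool_call_id: 'call_abc123', // hypothetical ID from the approval_request_message
          approve: true,
          reason: 'Read-only lookup; safe to run',
        },
      ],
    },
  ],
});

for await (const event of resumed) {
  console.log(event.message_type);
}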