Send Message Async
Asynchronously process a user message and return a run object. The actual processing happens in the background, and the status can be checked using the run ID.
This endpoint is "asynchronous" in the sense that it starts a background run whose results must be fetched explicitly by run ID.
Parameters
agent_id: str
The ID of the agent, in the format 'agent-<uuid>' (for example, 'agent-123e4567-e89b-42d3-8456-426614174000').
assistant_message_tool_kwarg: Optional[str] (Deprecated)
The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
assistant_message_tool_name: Optional[str] (Deprecated)
The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
callback_url: Optional[str]
Optional callback URL to POST to when the job completes
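When the run completes, the server POSTs to this URL. A minimal sketch of parsing such a callback body, assuming the POST payload is the run object serialized as JSON (shaped like the Run schema and response example in this document):

```python
import json

def handle_run_callback(body: bytes) -> dict:
    """Parse a run-completion callback body and pull out the fields
    most consumers care about. Assumes the POST body is the run
    object serialized as JSON."""
    run = json.loads(body)
    return {
        "run_id": run["id"],
        "status": run.get("status"),
        "stop_reason": run.get("stop_reason"),
    }

# Payload shaped like the response example in this document
sample = json.dumps({
    "id": "run-123e4567-e89b-12d3-a456-426614174000",
    "status": "completed",
    "stop_reason": "end_turn",
}).encode()
summary = handle_run_callback(sample)
print(summary["status"])  # completed
```

Also check callback_error, callback_status_code, and callback_sent_at on the run itself if the callback delivery needs to be audited.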
enable_thinking: Optional[str] (Deprecated)
If set to True, enables reasoning before responses or tool calls from the agent.
include_return_message_types: Optional[List[MessageType]]
Only return the specified message types in the response. If None (default), all message types are returned.
input: Optional[Union[str, Iterable[InputUnionMember1], null]]
Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].
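The equivalence can be sketched with plain request bodies (illustrative dicts, not the SDK's typed classes):

```python
def body_from_input(text: str) -> dict:
    """Expand the `input` shorthand into the explicit `messages` form."""
    return {"messages": [{"role": "user", "content": text}]}

shorthand = body_from_input("What's the weather today?")
explicit = {"messages": [{"role": "user", "content": "What's the weather today?"}]}
assert shorthand == explicit
```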
InputUnionMember1 = Union[TextContent, ImageContent, ToolCallContent, ToolReturnContent, ReasoningContent, RedactedReasoningContent, OmittedReasoningContent, InputUnionMember1SummarizedReasoningContent]
class TextContent: …
text: str
The text content of the message.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this text content.
type: Optional[Literal["text"]]
The type of the message.
class ImageContent: …
source: Source
The source of the image.
class SourceURLImage: …
url: str
The URL of the image.
type: Optional[Literal["url"]]
The source type for the image.
class SourceBase64Image: …
data: str
The base64 encoded image data.
media_type: str
The media type for the image.
detail: Optional[str]
What level of detail to use when processing and understanding the image: 'low', 'high', or 'auto' (the model decides).
type: Optional[Literal["base64"]]
The source type for the image.
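A base64 image part can be assembled from raw bytes like this (a sketch of the SourceBase64Image shape above, using plain dicts rather than the SDK's typed classes):

```python
import base64

def base64_image_part(image_bytes: bytes, media_type: str, detail: str = "auto") -> dict:
    """Build an image content part with a base64 source.
    `detail` is one of 'low', 'high', or 'auto'."""
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(image_bytes).decode("ascii"),
            "detail": detail,
        },
    }

part = base64_image_part(b"\x89PNG\r\n\x1a\n", "image/png")
# The data round-trips back to the original bytes
assert base64.b64decode(part["source"]["data"]) == b"\x89PNG\r\n\x1a\n"
```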
class SourceLettaImage: …
file_id: str
The unique identifier of the image file persisted in storage.
data: Optional[str]
The base64 encoded image data.
detail: Optional[str]
What level of detail to use when processing and understanding the image: 'low', 'high', or 'auto' (the model decides).
media_type: Optional[str]
The media type for the image.
type: Optional[Literal["letta"]]
The source type for the image.
type: Optional[Literal["image"]]
The type of the message.
class ToolCallContent: …
id: str
A unique identifier for this specific tool call instance.
input: Dict[str, object]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: str
The name of the tool being called.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this tool call.
type: Optional[Literal["tool_call"]]
Indicates this content represents a tool call event.
class ToolReturnContent: …
content: str
The content returned by the tool execution.
is_error: bool
Indicates whether the tool execution resulted in an error.
tool_call_id: str
References the ID of the ToolCallContent that initiated this tool call.
type: Optional[Literal["tool_return"]]
Indicates this content represents a tool return event.
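Because each tool return references the ID of the call that initiated it, call/return pairs can be reassembled from a flat list of content parts. A sketch (the tool name and arguments here are illustrative):

```python
def pair_tool_results(parts: list) -> dict:
    """Match each tool_return part to the tool_call part that
    initiated it, keyed by tool call ID."""
    calls = {p["id"]: p for p in parts if p.get("type") == "tool_call"}
    pairs = {}
    for p in parts:
        if p.get("type") == "tool_return":
            pairs[p["tool_call_id"]] = {"call": calls.get(p["tool_call_id"]), "return": p}
    return pairs

parts = [
    {"type": "tool_call", "id": "call_1", "name": "web_search", "input": {"q": "letta"}},
    {"type": "tool_return", "tool_call_id": "call_1", "content": "ok", "is_error": False},
]
paired = pair_tool_results(parts)
assert paired["call_1"]["call"]["name"] == "web_search"
```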
class ReasoningContent: …
Sent via the Anthropic Messages API
is_native: bool
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: str
The intermediate reasoning or thought process content.
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["reasoning"]]
Indicates this is a reasoning/intermediate step.
class RedactedReasoningContent: …
Sent via the Anthropic Messages API
data: str
The redacted or filtered intermediate reasoning content.
type: Optional[Literal["redacted_reasoning"]]
Indicates this is a redacted thinking step.
class OmittedReasoningContent: …
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["omitted_reasoning"]]
Indicates this is an omitted reasoning step.
class InputUnionMember1SummarizedReasoningContent: …
The style of reasoning content returned by the OpenAI Responses API
id: str
The unique identifier for this reasoning step.
summary: Iterable[InputUnionMember1SummarizedReasoningContentSummary]
Summaries of the reasoning content.
class InputUnionMember1SummarizedReasoningContentSummary: …
index: int
The index of the summary part.
text: str
The text of the summary part.
encrypted_content: Optional[str]
The encrypted reasoning content.
type: Optional[Literal["summarized_reasoning"]]
Indicates this is a summarized reasoning step.
max_steps: Optional[int]
Maximum number of steps the agent should take to process the request.
messages: Optional[Iterable[Message]]
The messages to be sent to the agent.
class MessageCreate: …
Request to create a message
content: Union[str, Iterable[Content]]
The content of the message.
class TextContent: …
text: str
The text content of the message.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this text content.
type: Optional[Literal["text"]]
The type of the message.
class ImageContent: …
source: Source
The source of the image.
class SourceURLImage: …
url: str
The URL of the image.
type: Optional[Literal["url"]]
The source type for the image.
class SourceBase64Image: …
data: str
The base64 encoded image data.
media_type: str
The media type for the image.
detail: Optional[str]
What level of detail to use when processing and understanding the image: 'low', 'high', or 'auto' (the model decides).
type: Optional[Literal["base64"]]
The source type for the image.
class SourceLettaImage: …
file_id: str
The unique identifier of the image file persisted in storage.
data: Optional[str]
The base64 encoded image data.
detail: Optional[str]
What level of detail to use when processing and understanding the image: 'low', 'high', or 'auto' (the model decides).
media_type: Optional[str]
The media type for the image.
type: Optional[Literal["letta"]]
The source type for the image.
type: Optional[Literal["image"]]
The type of the message.
class ToolCallContent: …
id: str
A unique identifier for this specific tool call instance.
input: Dict[str, object]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: str
The name of the tool being called.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this tool call.
type: Optional[Literal["tool_call"]]
Indicates this content represents a tool call event.
class ToolReturnContent: …
content: str
The content returned by the tool execution.
is_error: bool
Indicates whether the tool execution resulted in an error.
tool_call_id: str
References the ID of the ToolCallContent that initiated this tool call.
type: Optional[Literal["tool_return"]]
Indicates this content represents a tool return event.
class ReasoningContent: …
Sent via the Anthropic Messages API
is_native: bool
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: str
The intermediate reasoning or thought process content.
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["reasoning"]]
Indicates this is a reasoning/intermediate step.
class RedactedReasoningContent: …
Sent via the Anthropic Messages API
data: str
The redacted or filtered intermediate reasoning content.
type: Optional[Literal["redacted_reasoning"]]
Indicates this is a redacted thinking step.
class OmittedReasoningContent: …
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["omitted_reasoning"]]
Indicates this is an omitted reasoning step.
role: Literal["user", "system", "assistant"]
The role of the participant.
batch_item_id: Optional[str]
The id of the LLMBatchItem that this message is associated with
group_id: Optional[str]
The multi-agent group that the message was sent in
name: Optional[str]
The name of the participant.
otid: Optional[str]
The offline threading id associated with this message
sender_id: Optional[str]
The id of the sender of the message, can be an identity id or agent id
type: Optional[Literal["message"]]
The message type to be created.
class ApprovalCreate: …
Input to approve or deny a tool call request
approval_request_id: Optional[str] (Deprecated)
The message ID of the approval request
approvals: Optional[List[Approval]]
The list of approval responses
class ApprovalApprovalReturn: …
approve: bool
Whether the tool has been approved
tool_call_id: str
The ID of the tool call that corresponds to this approval
reason: Optional[str]
An optional explanation for the provided approval status
type: Optional[Literal["approval"]]
The message type to be created.
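An approval response can be constructed like this (a plain-dict sketch of the ApprovalCreate shape above; the tool call ID and reason text are illustrative):

```python
from typing import Optional

def approval_message(tool_call_id: str, approve: bool, reason: Optional[str] = None) -> dict:
    """Build an ApprovalCreate-shaped message answering one tool call
    approval request."""
    entry = {"approve": approve, "tool_call_id": tool_call_id}
    if reason is not None:
        entry["reason"] = reason
    return {"type": "approval", "approvals": [entry]}

msg = approval_message("call_1", False, reason="Destructive operation not permitted")
assert msg["approvals"][0]["approve"] is False
```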
class ToolReturn: …
status: Literal["success", "error"]
type: Optional[Literal["tool"]]
The message type to be created.
approve: Optional[bool] (Deprecated)
Whether the tool has been approved
group_id: Optional[str]
The multi-agent group that the message was sent in
reason: Optional[str] (Deprecated)
An optional explanation for the provided approval status
type: Optional[Literal["approval"]]
The message type to be created.
use_assistant_message: Optional[bool] (Deprecated)
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Returns
class Run: …
Representation of a run - a conversation or processing session for an agent. Runs track when agents process messages and maintain the relationship between agents, steps, and messages.
id: str
The human-friendly ID of the Run
agent_id: str
The unique identifier of the agent associated with the run.
background: Optional[bool]
Whether the run was created in background mode.
base_template_id: Optional[str]
The base template ID that the run belongs to.
callback_error: Optional[str]
Optional error message from attempting to POST the callback endpoint.
callback_sent_at: Optional[datetime]
Timestamp when the callback was last attempted.
callback_status_code: Optional[int]
HTTP status code returned by the callback endpoint.
callback_url: Optional[str]
If set, POST to this URL when the run completes.
completed_at: Optional[datetime]
The timestamp when the run was completed.
created_at: Optional[datetime]
The timestamp when the run was created.
metadata: Optional[Dict[str, object]]
Additional metadata for the run.
request_config: Optional[RequestConfig]
The request configuration for the run.
assistant_message_tool_kwarg: Optional[str]
The name of the message argument in the designated message tool.
assistant_message_tool_name: Optional[str]
The name of the designated message tool.
include_return_message_types: Optional[List[MessageType]]
Only return the specified message types in the response. If None (default), all message types are returned.
use_assistant_message: Optional[bool]
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects.
status: Optional[Literal["created", "running", "completed", …]]
The current status of the run.
stop_reason: Optional[StopReasonType]
The reason why the run was stopped.
total_duration_ns: Optional[int]
Total run duration in nanoseconds
ttft_ns: Optional[int]
Time to first token for a run in nanoseconds
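Both duration fields are reported in nanoseconds; converting them for display is a one-liner (sketch):

```python
def ns_to_ms(ns: int) -> float:
    """Convert a nanosecond duration (e.g. ttft_ns or total_duration_ns)
    to milliseconds."""
    return ns / 1_000_000

print(ns_to_ms(2_500_000_000))  # 2500.0
```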
Example (Python)
from letta_client import Letta
client = Letta(
api_key="My API Key",
)
run = client.agents.messages.send_async(
agent_id="agent-123e4567-e89b-42d3-8456-426614174000",
)
print(run.id)
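Because send_async returns immediately, the run must be polled until it reaches a terminal status. A sketch of the loop with the fetch step stubbed out (in real code the stub would be replaced by fetching the run by ID via the SDK or HTTP API; only 'completed' is treated as terminal here, and production code should also stop on error statuses):

```python
import time

def wait_for_run(fetch_run, run_id: str, poll_interval: float = 1.0, max_polls: int = 100) -> dict:
    """Poll until the run reaches a terminal status. `fetch_run` stands in
    for retrieving the run by ID."""
    for _ in range(max_polls):
        run = fetch_run(run_id)
        if run["status"] == "completed":
            return run
        time.sleep(poll_interval)
    raise TimeoutError(f"run {run_id} did not complete after {max_polls} polls")

# Stubbed fetcher that 'completes' on the third poll
states = iter(["created", "running", "completed"])
fetch = lambda run_id: {"id": run_id, "status": next(states)}
done = wait_for_run(fetch, "run-123", poll_interval=0)
assert done["status"] == "completed"
```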
{
"id": "run-123e4567-e89b-12d3-a456-426614174000",
"agent_id": "agent_id",
"background": true,
"base_template_id": "base_template_id",
"callback_error": "callback_error",
"callback_sent_at": "2019-12-27T18:11:19.117Z",
"callback_status_code": 0,
"callback_url": "callback_url",
"completed_at": "2019-12-27T18:11:19.117Z",
"created_at": "2019-12-27T18:11:19.117Z",
"metadata": {
"foo": "bar"
},
"request_config": {
"assistant_message_tool_kwarg": "assistant_message_tool_kwarg",
"assistant_message_tool_name": "assistant_message_tool_name",
"include_return_message_types": [
"system_message"
],
"use_assistant_message": true
},
"status": "created",
"stop_reason": "end_turn",
"total_duration_ns": 0,
"ttft_ns": 0
}