Agents
List Agents
Create Agent
Modify Agent
Retrieve Agent
Delete Agent
Export Agent
Import Agent
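The operations above can be sketched as a verb/path table. The routes below are illustrative assumptions, not taken from this reference; consult each endpoint's page for the actual paths.

```python
# Hypothetical route table for the agent operations listed above.
# Verbs and paths are assumptions for illustration only.
AGENT_ROUTES = {
    "list":     ("GET",    "/v1/agents"),
    "create":   ("POST",   "/v1/agents"),
    "retrieve": ("GET",    "/v1/agents/{agent_id}"),
    "modify":   ("PATCH",  "/v1/agents/{agent_id}"),
    "delete":   ("DELETE", "/v1/agents/{agent_id}"),
    "export":   ("GET",    "/v1/agents/{agent_id}/export"),
    "import":   ("POST",   "/v1/agents/import"),
}

def build_request(op, agent_id=None):
    """Return (verb, path) for an operation, substituting the agent id."""
    verb, path = AGENT_ROUTES[op]
    if "{agent_id}" in path:
        if agent_id is None:
            raise ValueError(f"operation {op!r} requires an agent_id")
        path = path.format(agent_id=agent_id)
    return verb, path
```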
Models
AgentEnvironmentVariable = object { agent_id, key, value, 7 more }
agent_id: string
The ID of the agent this environment variable belongs to.
key: string
The name of the environment variable.
value: string
The value of the environment variable.
id: optional string
The human-friendly ID of the Agent-env
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
description: optional string
An optional description of the environment variable.
last_updated_by_id: optional string
The id of the user that last updated this object.
updated_at: optional string
The timestamp when the object was last updated.
value_enc: optional string
Encrypted secret value (stored as encrypted string)
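A sketch of an AgentEnvironmentVariable payload built from the fields above: agent_id, key, and value are required, the rest are optional or server-assigned. The concrete values are made up for illustration.

```python
# Example AgentEnvironmentVariable-shaped object; all values are hypothetical.
env_var = {
    "agent_id": "agent-123",       # required: owning agent
    "key": "OPENWEATHER_API_KEY",  # required: variable name (hypothetical)
    "value": "sk-example",         # required: variable value
    "description": "API key used by the weather tool",  # optional
}

REQUIRED_FIELDS = {"agent_id", "key", "value"}

def validate_env_var(obj):
    """Check that every required field is present and is a string."""
    return all(isinstance(obj.get(f), str) for f in REQUIRED_FIELDS)
```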
AgentState = object { id, agent_type, blocks, 39 more }
Representation of an agent's state. This is the state of the agent at a given time, and is persisted in the DB backend. The state has all the information needed to recreate a persisted agent.
id: string
The id of the agent. Assigned by the database.
agent_type: "memgpt_agent" or "memgpt_v2_agent" or "letta_v1_agent" or 6 more
The type of agent.
blocks: array of object { value, id, base_template_id, 15 more }
The memory blocks used by the agent.
value: string
Value of the block.
id: optional string
The human-friendly ID of the Block
base_template_id: optional string
The base template id of the block.
created_by_id: optional string
The id of the user that made this Block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
last_updated_by_id: optional string
The id of the user that last updated this Block.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
Deprecated embedding_config: EmbeddingConfig { embedding_dim, embedding_endpoint_type, embedding_model, 7 more }
Deprecated: Use embedding field instead. The embedding configuration used by the agent.
embedding_dim: number
The dimension of the embedding.
embedding_endpoint_type: "openai" or "anthropic" or "bedrock" or 16 more
The endpoint type for the model.
embedding_model: string
The model for the embedding.
azure_deployment: optional string
The Azure deployment for the model.
azure_endpoint: optional string
The Azure endpoint for the model.
azure_version: optional string
The Azure version for the model.
batch_size: optional number
The maximum batch size for processing embeddings.
embedding_chunk_size: optional number
The chunk size of the embedding.
embedding_endpoint: optional string
The endpoint for the model (None if local).
handle: optional string
The handle for this config, in the format provider/model-name.
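An illustrative EmbeddingConfig payload. Per the fields above, embedding_dim, embedding_endpoint_type, and embedding_model are required; the model name, dimension, and handle below are assumptions, not values confirmed by this reference.

```python
# Example EmbeddingConfig-shaped object; concrete values are hypothetical.
embedding_config = {
    "embedding_endpoint_type": "openai",         # one of the listed endpoint types
    "embedding_model": "text-embedding-3-small", # hypothetical model name
    "embedding_dim": 1536,                       # dimension of the embedding
    "embedding_chunk_size": 300,                 # optional: chunk size
    "handle": "openai/text-embedding-3-small",   # optional: provider/model-name
}

def validate_embedding_config(cfg):
    """Required: numeric embedding_dim plus string endpoint type and model."""
    return (
        isinstance(cfg.get("embedding_dim"), (int, float))
        and isinstance(cfg.get("embedding_endpoint_type"), str)
        and isinstance(cfg.get("embedding_model"), str)
    )
```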
Deprecated llm_config: LLMConfig { context_window, model, model_endpoint_type, 17 more }
Deprecated: Use model field instead. The LLM configuration used by the agent.
context_window: number
The context window size for the model.
model: string
LLM model name.
model_endpoint_type: "openai" or "anthropic" or "google_ai" or 18 more
The endpoint type for the model.
compatibility_type: optional "gguf" or "mlx"
The framework compatibility type for the model.
display_name: optional string
A human-friendly display name for the model.
enable_reasoner: optional boolean
Whether or not the model should use extended thinking if it is a 'reasoning' style model
frequency_penalty: optional number
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. From OpenAI: Number between -2.0 and 2.0.
handle: optional string
The handle for this config, in the format provider/model-name.
max_reasoning_tokens: optional number
Configurable thinking budget for extended thinking. Used for enable_reasoner and also for Google Vertex models like Gemini 2.5 Flash. Minimum value is 1024 when used with enable_reasoner.
max_tokens: optional number
The maximum number of tokens to generate. If not set, the model will use its default value.
model_endpoint: optional string
The endpoint for the model.
model_wrapper: optional string
The wrapper for the model.
parallel_tool_calls: optional boolean
If set to True, enables parallel tool calling. Defaults to False.
provider_category: optional "base" or "byok"
The provider category for the model.
provider_name: optional string
The provider name for the model.
put_inner_thoughts_in_kwargs: optional boolean
Puts 'inner_thoughts' as a kwarg in the function call if this is set to True. This helps with function calling performance and also the generation of inner thoughts.
reasoning_effort: optional "minimal" or "low" or "medium" or "high"
The reasoning effort to use when generating text with reasoning models.
temperature: optional number
The temperature to use when generating text with the model. A higher temperature will result in more random text.
tier: optional string
The cost tier for the model (cloud only).
verbosity: optional "low" or "medium" or "high"
Soft control for how verbose model output should be, used for GPT-5 models.
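An illustrative payload for this (deprecated) LLM configuration: context_window, model, and model_endpoint_type are required, the rest optional. The model name and limits are assumptions for illustration.

```python
# Example deprecated-style LLM config; concrete values are hypothetical.
llm_config = {
    "model": "gpt-4o-mini",                # required: LLM model name (hypothetical)
    "model_endpoint_type": "openai",       # required: endpoint type
    "context_window": 128000,              # required: context window size
    "temperature": 0.7,                    # optional: higher = more random output
    "max_tokens": 4096,                    # optional: generation cap
    "put_inner_thoughts_in_kwargs": True,  # optional: inner_thoughts as a kwarg
}

def validate_llm_config(cfg):
    """Required: string model and endpoint type plus an integer context window."""
    return (
        isinstance(cfg.get("model"), str)
        and isinstance(cfg.get("model_endpoint_type"), str)
        and isinstance(cfg.get("context_window"), int)
    )
```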
Deprecated memory: object { blocks, agent_type, file_blocks, prompt_template }
Deprecated: Use blocks field instead. The in-context memory of the agent.
blocks: array of object { value, id, base_template_id, 15 more }
Memory blocks contained in the agent's in-context memory
value: string
Value of the block.
id: optional string
The human-friendly ID of the Block
base_template_id: optional string
The base template id of the block.
created_by_id: optional string
The id of the user that made this Block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
last_updated_by_id: optional string
The id of the user that last updated this Block.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
agent_type: optional AgentType
Agent type controlling prompt rendering.
AgentType = "memgpt_agent" or "memgpt_v2_agent" or "letta_v1_agent" or 6 more
Enum to represent the type of agent.
file_blocks: optional array of object { file_id, is_open, source_id, 19 more }
Special blocks representing the agent's in-context memory of an attached file
file_id: string
Unique identifier of the file.
is_open: boolean
True if the agent currently has the file open.
source_id: string
Unique identifier of the source.
value: string
Value of the block.
id: optional string
The human-friendly ID of the Block
base_template_id: optional string
The base template id of the block.
created_by_id: optional string
The id of the user that made this Block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
last_accessed_at: optional string
UTC timestamp of the agent’s most recent access to this file. Any operations from the open, close, or search tools will update this field.
last_updated_by_id: optional string
The id of the user that last updated this Block.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
prompt_template: optional string
Deprecated. Ignored for performance reasons.
name: string
The name of the agent.
sources: array of object { id, embedding_config, name, 8 more }
The sources used by the agent.
id: string
The human-friendly ID of the Source
embedding_config: EmbeddingConfig { embedding_dim, embedding_endpoint_type, embedding_model, 7 more }
The embedding configuration used by the source.
embedding_dim: number
The dimension of the embedding.
embedding_endpoint_type: "openai" or "anthropic" or "bedrock" or 16 more
The endpoint type for the model.
embedding_model: string
The model for the embedding.
azure_deployment: optional string
The Azure deployment for the model.
azure_endpoint: optional string
The Azure endpoint for the model.
azure_version: optional string
The Azure version for the model.
batch_size: optional number
The maximum batch size for processing embeddings.
embedding_chunk_size: optional number
The chunk size of the embedding.
embedding_endpoint: optional string
The endpoint for the model (None if local).
handle: optional string
The handle for this config, in the format provider/model-name.
name: string
The name of the source.
created_at: optional string
The timestamp when the source was created.
created_by_id: optional string
The id of the user that made this source.
description: optional string
The description of the source.
instructions: optional string
Instructions for how to use the source.
last_updated_by_id: optional string
The id of the user that last updated this source.
metadata: optional map[unknown]
Metadata associated with the source.
updated_at: optional string
The timestamp when the source was last updated.
vector_db_provider: optional string
The vector database provider used for this source's passages
system: string
The system prompt used by the agent.
tags: array of string
The tags associated with the agent.
tools: array of object { id, args_json_schema, created_by_id, 14 more }
The tools used by the agent.
id: string
The human-friendly ID of the Tool
args_json_schema: optional map[unknown]
The args JSON schema of the function.
created_by_id: optional string
The id of the user that made this Tool.
default_requires_approval: optional boolean
Default value for whether or not executing this tool requires approval.
description: optional string
The description of the tool.
enable_parallel_execution: optional boolean
If set to True, this tool may be executed concurrently with other tools. Defaults to False.
json_schema: optional map[unknown]
The JSON schema of the function.
last_updated_by_id: optional string
The id of the user that last updated this Tool.
metadata_: optional map[unknown]
A dictionary of additional metadata for the tool.
name: optional string
The name of the function.
npm_requirements: optional array of object { name, version }
Optional list of npm packages required by this tool.
name: string
Name of the npm package.
version: optional string
Optional version of the package, following semantic versioning.
pip_requirements: optional array of object { name, version }
Optional list of pip packages required by this tool.
name: string
Name of the pip package.
version: optional string
Optional version of the package, following semantic versioning.
return_char_limit: optional number
The maximum number of characters in the response.
source_code: optional string
The source code of the function.
source_type: optional string
The type of the source code.
tags: optional array of string
Metadata tags.
The type of the tool.
base_template_id: optional string
The base template id of the agent.
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
deployment_id: optional string
The id of the deployment.
description: optional string
The description of the agent.
embedding: optional object { model, provider }
Schema for defining settings for an embedding model
model: string
The name of the model.
provider: "openai" or "ollama"
The provider of the model.
enable_sleeptime: optional boolean
If set to True, memory management will move to a background agent thread.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the agent will be hidden.
identities: optional array of object { id, agent_ids, block_ids, 5 more }
The identities associated with this agent.
id: string
The human-friendly ID of the Identity
Deprecated agent_ids: array of string
The IDs of the agents associated with the identity.
Deprecated block_ids: array of string
The IDs of the blocks associated with the identity.
identifier_key: string
External, user-generated identifier key of the identity.
identity_type: "org" or "user" or "other"
The type of the identity.
name: string
The name of the identity.
project_id: optional string
The project id of the identity, if applicable.
properties: optional array of object { key, type, value }
List of properties associated with the identity
key: string
The key of the property
type: "string" or "number" or "boolean" or "json"
The type of the property
value: string or number or boolean or map[unknown]
The value of the property
Deprecated identity_ids: optional array of string
Deprecated: Use identities field instead. The ids of the identities associated with this agent.
last_run_completion: optional string
The timestamp when the agent last completed a run.
last_run_duration_ms: optional number
The duration in milliseconds of the agent's last run.
The stop reason from the agent's last run.
last_updated_by_id: optional string
The id of the user that last updated this object.
managed_group: optional object { id, base_template_id, deployment_id, 5 more }
The multi-agent group that this agent manages.
id: string
The id of the group. Assigned by the database.
base_template_id: optional string
The base template id.
deployment_id: optional string
The id of the deployment.
hidden: optional boolean
If set to True, the group will be hidden.
max_message_buffer_length: optional number
The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.
min_message_buffer_length: optional number
The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.
project_id: optional string
The associated project id.
template_id: optional string
The id of the template.
max_files_open: optional number
Maximum number of files that can be open at once for this agent. Setting this too high may exceed the context window, which will break the agent.
message_buffer_autoclear: optional boolean
If set to True, the agent will not remember previous messages (though the agent will still retain state via core memory blocks and archival/recall memory). Not recommended unless you have an advanced use case.
message_ids: optional array of string
The ids of the messages in the agent's in-context memory.
metadata: optional map[unknown]
The metadata of the agent.
model: optional object { model, max_output_tokens, parallel_tool_calls }
Schema for defining settings for a model
model: string
The name of the model.
max_output_tokens: optional number
The maximum number of tokens the model can generate.
parallel_tool_calls: optional boolean
Whether to enable parallel tool calling.
Deprecated multi_agent_group: optional object { id, base_template_id, deployment_id, 5 more }
Deprecated: Use managed_group field instead. The multi-agent group that this agent manages.
id: string
The id of the group. Assigned by the database.
base_template_id: optional string
The base template id.
deployment_id: optional string
The id of the deployment.
hidden: optional boolean
If set to True, the group will be hidden.
max_message_buffer_length: optional number
The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.
min_message_buffer_length: optional number
The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.
project_id: optional string
The associated project id.
template_id: optional string
The id of the template.
per_file_view_window_char_limit: optional number
The per-file view window character limit for this agent. Setting this too high may exceed the context window, which will break the agent.
project_id: optional string
The id of the project the agent belongs to.
response_format: optional TextResponseFormat { type } or JsonSchemaResponseFormat { json_schema, type } or JsonObjectResponseFormat { type }
The response format used by the agent
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
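One example of each response_format variant described above. The json_schema payload itself is a made-up illustration; only the "type" tags and the presence of a json_schema field come from this reference.

```python
# The three response_format variants; the schema contents are hypothetical.
text_format = {"type": "text"}

json_schema_format = {
    "type": "json_schema",
    "json_schema": {  # required for this variant; contents are illustrative
        "type": "object",
        "properties": {"temperature": {"type": "number"}},
        "required": ["temperature"],
    },
}

json_object_format = {"type": "json_object"}

def is_json_mode(fmt):
    """True when the format constrains output to JSON."""
    return fmt.get("type") in ("json_schema", "json_object")
```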
secrets: optional array of AgentEnvironmentVariable { agent_id, key, value, 7 more }
The environment variables for tool execution specific to this agent.
agent_id: string
The ID of the agent this environment variable belongs to.
key: string
The name of the environment variable.
value: string
The value of the environment variable.
id: optional string
The human-friendly ID of the Agent-env
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
description: optional string
An optional description of the environment variable.
last_updated_by_id: optional string
The id of the user that last updated this object.
updated_at: optional string
The timestamp when the object was last updated.
value_enc: optional string
Encrypted secret value (stored as encrypted string)
template_id: optional string
The id of the template the agent belongs to.
timezone: optional string
The timezone of the agent (IANA format).
Deprecated tool_exec_environment_variables: optional array of AgentEnvironmentVariable { agent_id, key, value, 7 more }
Deprecated: use secrets field instead.
agent_id: string
The ID of the agent this environment variable belongs to.
key: string
The name of the environment variable.
value: string
The value of the environment variable.
id: optional string
The human-friendly ID of the Agent-env
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
description: optional string
An optional description of the environment variable.
last_updated_by_id: optional string
The id of the user that last updated this object.
updated_at: optional string
The timestamp when the object was last updated.
value_enc: optional string
Encrypted secret value (stored as encrypted string)
tool_rules: optional array of ChildToolRule { children, tool_name, child_arg_nodes, 2 more } or InitToolRule { tool_name, args, prompt_template, type } or TerminalToolRule { tool_name, prompt_template, type } or 6 more
The list of tool rules.
ChildToolRule = object { children, tool_name, child_arg_nodes, 2 more }
A ToolRule represents a tool that can be invoked by the agent.
children: array of string
The children tools that can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
child_arg_nodes: optional array of object { name, args }
Optional list of typed child argument overrides. Each node must reference a child in 'children'.
name: string
The name of the child tool to invoke next.
args: optional map[unknown]
Optional prefilled arguments for this child tool. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.
prompt_template: optional string
Optional template string (ignored).
type: optional "constrain_child_tools"
InitToolRule = object { tool_name, args, prompt_template, type }
Represents the initial tool rule configuration.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
args: optional map[unknown]
Optional prefilled arguments for this tool. When present, these values will override any LLM-provided arguments with the same keys during invocation. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.
prompt_template: optional string
Optional template string (ignored). Rendering uses fast built-in formatting for performance.
type: optional "run_first"
TerminalToolRule = object { tool_name, prompt_template, type }
Represents a terminal tool rule configuration where if this tool gets called, it must end the agent loop.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "exit_loop"
ConditionalToolRule = object { child_output_mapping, tool_name, default_child, 3 more }
A ToolRule that conditionally maps to different child tools based on the output.
child_output_mapping: map[string]
Maps tool output values to the name of the child tool to invoke next.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
default_child: optional string
The default child tool to be called. If None, any tool can be called.
prompt_template: optional string
Optional template string (ignored).
require_output_mapping: optional boolean
Whether to throw an error when output doesn't match any case
type: optional "conditional"
ContinueToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration where if this tool gets called, it must continue the agent loop.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "continue_loop"
RequiredBeforeExitToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration where this tool must be called before the agent loop can exit.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "required_before_exit"
MaxCountPerStepToolRule = object { max_count_limit, tool_name, prompt_template, type }
Represents a tool rule configuration which constrains the total number of times this tool can be invoked in a single step.
max_count_limit: number
The max limit for the total number of times this tool can be invoked in a single step.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "max_count_per_step"
ParentToolRule = object { children, tool_name, prompt_template, type }
A ToolRule that only allows a child tool to be called if the parent has been called.
children: array of string
The children tools that can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "parent_last_tool"
RequiresApprovalToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration which requires approval before the tool can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored). Rendering uses fast built-in formatting for performance.
type: optional "requires_approval"
updated_at: optional string
The timestamp when the object was last updated.
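A heavily trimmed, illustrative AgentState-shaped object showing only the required fields documented above (id, agent_type, blocks, name, sources, system, tags, tools). Every concrete value is made up; a real state carries many more fields.

```python
# Trimmed example of an AgentState-shaped object; all values hypothetical.
agent_state = {
    "id": "agent-123",
    "agent_type": "memgpt_agent",
    "name": "support-bot",
    "system": "You are a helpful support agent.",
    "tags": ["demo"],
    "blocks": [  # in-context memory blocks with labels and character limits
        {"label": "persona", "value": "I am a concise support agent.", "limit": 2000},
        {"label": "human", "value": "The user is a new customer.", "limit": 2000},
    ],
    "sources": [],
    "tools": [],
}

def block_within_limit(block):
    """True when a memory block's value respects its character limit."""
    return len(block["value"]) <= block.get("limit", float("inf"))
```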
AgentType = "memgpt_agent" or "memgpt_v2_agent" or "letta_v1_agent" or 6 more
Enum to represent the type of agent.
ChildToolRule = object { children, tool_name, child_arg_nodes, 2 more }
A ToolRule represents a tool that can be invoked by the agent.
children: array of string
The children tools that can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
child_arg_nodes: optional array of object { name, args }
Optional list of typed child argument overrides. Each node must reference a child in 'children'.
name: string
The name of the child tool to invoke next.
args: optional map[unknown]
Optional prefilled arguments for this child tool. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.
prompt_template: optional string
Optional template string (ignored).
type: optional "constrain_child_tools"
ConditionalToolRule = object { child_output_mapping, tool_name, default_child, 3 more }
A ToolRule that conditionally maps to different child tools based on the output.
child_output_mapping: map[string]
Maps tool output values to the name of the child tool to invoke next.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
default_child: optional string
The default child tool to be called. If None, any tool can be called.
prompt_template: optional string
Optional template string (ignored).
require_output_mapping: optional boolean
Whether to throw an error when output doesn't match any case
type: optional "conditional"
ContinueToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration where if this tool gets called, it must continue the agent loop.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "continue_loop"
InitToolRule = object { tool_name, args, prompt_template, type }
Represents the initial tool rule configuration.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
args: optional map[unknown]
Optional prefilled arguments for this tool. When present, these values will override any LLM-provided arguments with the same keys during invocation. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.
prompt_template: optional string
Optional template string (ignored). Rendering uses fast built-in formatting for performance.
type: optional "run_first"
JsonObjectResponseFormat = object { type }
Response format for JSON object responses.
type: optional "json_object"
The type of the response format.
JsonSchemaResponseFormat = object { json_schema, type }
Response format for JSON schema-based responses.
json_schema: map[unknown]
The JSON schema of the response.
type: optional "json_schema"
The type of the response format.
LettaMessageContentUnion = TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 4 more
Sent via the Anthropic Messages API
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
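Illustrative content parts from the union above: plain text, a URL image, and a tool call paired with its return. IDs, URLs, and values are made up; only the field names and "type" tags come from this reference.

```python
# Example content parts from the message content union; values hypothetical.
text_part = {"type": "text", "text": "What is in this picture?"}

image_part = {
    "type": "image",
    "source": {"type": "url", "url": "https://example.com/cat.png"},
}

tool_call_part = {
    "type": "tool_call",
    "id": "call-001",               # unique id for this tool call instance
    "name": "get_weather",          # hypothetical tool name
    "input": {"city": "Berlin"},    # parameter names -> values
}

tool_return_part = {
    "type": "tool_return",
    "tool_call_id": "call-001",     # references the tool call's id
    "content": "12 degrees C, overcast",
    "is_error": False,
}

def pairs_with(call, ret):
    """True when a tool return references the given tool call."""
    return ret["tool_call_id"] == call["id"]
```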
MaxCountPerStepToolRule = object { max_count_limit, tool_name, prompt_template, type }
Represents a tool rule configuration which constrains the total number of times this tool can be invoked in a single step.
max_count_limit: number
The max limit for the total number of times this tool can be invoked in a single step.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "max_count_per_step"
MessageCreate = object { content, role, batch_item_id, 5 more }
Request to create a message
The content of the message.
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
role: "user" or "system" or "assistant"
The role of the participant.
batch_item_id: optional string
The id of the LLMBatchItem that this message is associated with
group_id: optional string
The multi-agent group that the message was sent in
name: optional string
The name of the participant.
otid: optional string
The offline threading id associated with this message
sender_id: optional string
The id of the sender of the message, can be an identity id or agent id
type: optional "message"
The message type to be created.
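Putting the fields above together, a MessageCreate body can be as small as a role plus a string, or carry a list of typed content parts. A minimal sketch (the text and image URL are placeholders):

```python
# Simplest form: role plus a plain string content.
message = {
    "role": "user",   # "user", "system", or "assistant"
    "content": "What's on my calendar today?",
}

# Content may instead be a list of typed parts, e.g. text plus an image by URL.
multimodal_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image", "source": {"type": "url", "url": "https://example.com/cat.png"}},
    ],
}
```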
ParentToolRule = object { children, tool_name, prompt_template, type }
A ToolRule that only allows a child tool to be called if the parent has been called.
children: array of string
The children tools that can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "parent_last_tool"
RequiredBeforeExitToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration where this tool must be called before the agent loop can exit.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "required_before_exit"
RequiresApprovalToolRule = object { tool_name, prompt_template, type }
Represents a tool rule configuration which requires approval before the tool can be invoked.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "requires_approval"
TerminalToolRule = object { tool_name, prompt_template, type }
Represents a terminal tool rule configuration where if this tool gets called, it must end the agent loop.
tool_name: string
The name of the tool. Must exist in the database for the user's organization.
prompt_template: optional string
Optional template string (ignored).
type: optional "exit_loop"
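Tool rules are typically supplied together as a list, one entry per constrained tool. A sketch combining two of the rule types above (the tool names are hypothetical placeholders):

```python
# Hypothetical tool_rules list mixing rule types.
tool_rules = [
    {
        "type": "required_before_exit",
        "tool_name": "send_report",       # agent loop cannot exit until this tool runs
    },
    {
        "type": "exit_loop",
        "tool_name": "finish_task",       # calling this tool ends the agent loop
    },
]
```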
TextResponseFormat = object { type }
Response format for plain text responses.
type: optional "text"
The type of the response format.
Agents Messages
List Messages
Send Message
Modify Message
Send Message Streaming
Cancel Message
Send Message Async
Reset Messages
Summarize Messages
Models
ApprovalCreate = object { approval_request_id, approvals, approve, 3 more }
Input to approve or deny a tool call request
Deprecatedapproval_request_id: optional string
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
Deprecatedapprove: optional boolean
Whether the tool has been approved
group_id: optional string
The multi-agent group that the message was sent in
Deprecatedreason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
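An ApprovalCreate body using the non-deprecated `approvals` list might look like the sketch below; the `tool_call_id` is a placeholder that would come from a prior approval_request_message.

```python
# Hypothetical ApprovalCreate payload denying one pending tool call.
approval = {
    "type": "approval",
    "approvals": [
        {
            "type": "approval",
            "tool_call_id": "call_abc123",   # placeholder; taken from the approval request
            "approve": False,
            "reason": "Tool arguments look unsafe; denying this call.",
        }
    ],
}
```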
ApprovalRequestMessage = object { id, date, tool_call, 9 more }
A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (ToolCall): The tool call
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool call that the LLM has requested to run
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "approval_request_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool calls that the LLM has requested to run, pending approval
ToolCallDelta = object { arguments, name, tool_call_id }
ApprovalResponseMessage = object { id, date, approval_request_id, 11 more }
A message representing a response from the user indicating whether a tool has been approved to run.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message approve: (bool) Whether the tool has been approved approval_request_id: The ID of the approval request reason: (Optional[str]) An optional explanation for the provided approval status
Deprecatedapproval_request_id: optional string
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
Deprecatedapprove: optional boolean
Whether the tool has been approved
message_type: optional "approval_response_message"
The type of the message.
Deprecatedreason: optional string
An optional explanation for the provided approval status
AssistantMessage = object { id, content, date, 8 more }
A message sent by the LLM in response to user input. Used in the LLM context.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)
The message content sent by the agent (can be a string or an array of content parts)
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
message_type: optional "assistant_message"
The type of the message.
EventMessage = object { id, date, event_data, 9 more }
A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.
event_type: "compaction"
message_type: optional "event"
HiddenReasoningMessage = object { id, date, state, 9 more }
Representation of an agent's internal reasoning where reasoning content has been hidden from the response.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API hidden_reasoning (Optional[str]): The internal reasoning of the agent
state: "redacted" or "omitted"
message_type: optional "hidden_reasoning_message"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
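Of the three source variants, the base64 form carries the image bytes inline. A minimal sketch of building one in Python (the image bytes are a stand-in):

```python
import base64

raw = b"\x89PNG..."  # stand-in for real image bytes

# Hypothetical ImageContent part using the base64 source variant.
image_part = {
    "type": "image",
    "source": {
        "type": "base64",
        "media_type": "image/png",
        "data": base64.b64encode(raw).decode("ascii"),
        "detail": "auto",   # low, high, or auto to let the model decide
    },
}
```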
JobStatus = "created" or "running" or "completed" or 4 more
Status of the job.
JobType = "job" or "run" or "batch"
LettaAssistantMessageContentUnion = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
LettaMessageUnion = SystemMessage { id, content, date, 8 more } or UserMessage { id, content, date, 8 more } or ReasoningMessage { id, date, reasoning, 10 more } or 8 more
SystemMessage = object { id, content, date, 8 more }
A message generated by the system. Never streamed back on a response, only used for cursor pagination.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (str): The message content sent by the system
content: string
The message content sent by the system
message_type: optional "system_message"
The type of the message.
UserMessage = object { id, content, date, 8 more }
A message sent by the user. Never streamed back on a response, only used for cursor pagination.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)
The message content sent by the user (can be a string or an array of multi-modal content parts)
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
message_type: optional "user_message"
The type of the message.
ReasoningMessage = object { id, date, reasoning, 10 more }
Representation of an agent's internal reasoning.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting reasoning (str): The internal reasoning of the agent signature (Optional[str]): The model-generated signature of the reasoning step
message_type: optional "reasoning_message"
The type of the message.
source: optional "reasoner_model" or "non_reasoner_model"
HiddenReasoningMessage = object { id, date, state, 9 more }
Representation of an agent's internal reasoning where reasoning content has been hidden from the response.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API hidden_reasoning (Optional[str]): The internal reasoning of the agent
state: "redacted" or "omitted"
message_type: optional "hidden_reasoning_message"
The type of the message.
ToolCallMessage = object { id, date, tool_call, 9 more }
A message representing a request to call a tool (generated by the LLM to trigger tool execution).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (Union[ToolCall, ToolCallDelta]): The tool call
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "tool_call_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
ToolReturnMessage = object { id, date, status, 13 more }
A message representing the return value of a tool call (generated by Letta executing the requested tool).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_return (str): The return value of the tool (deprecated, use tool_returns) status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns) tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns) stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns) stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns) tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support
Deprecatedstatus: "success" or "error"
message_type: optional "tool_return_message"
The type of the message.
status: "success" or "error"
type: optional "tool"
The message type to be created.
AssistantMessage = object { id, content, date, 8 more }
A message sent by the LLM in response to user input. Used in the LLM context.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)
The message content sent by the agent (can be a string or an array of content parts)
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
message_type: optional "assistant_message"
The type of the message.
ApprovalRequestMessage = object { id, date, tool_call, 9 more }
A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (ToolCall): The tool call
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool call that the LLM has requested to run
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "approval_request_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool calls that the LLM has requested to run, pending approval
ToolCallDelta = object { arguments, name, tool_call_id }
ApprovalResponseMessage = object { id, date, approval_request_id, 11 more }
A message representing a response from the user indicating whether a tool has been approved to run.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message approve: (bool) Whether the tool has been approved approval_request_id: The ID of the approval request reason: (Optional[str]) An optional explanation for the provided approval status
Deprecatedapproval_request_id: optional string
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
Deprecatedapprove: optional boolean
Whether the tool has been approved
message_type: optional "approval_response_message"
The type of the message.
Deprecatedreason: optional string
An optional explanation for the provided approval status
SummaryMessage = object { id, date, summary, 8 more }
A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.
message_type: optional "summary"
EventMessage = object { id, date, event_data, 9 more }
A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.
event_type: "compaction"
message_type: optional "event"
LettaRequest = object { assistant_message_tool_kwarg, assistant_message_tool_name, enable_thinking, 5 more }
Deprecatedassistant_message_tool_kwarg: optional string
The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Deprecatedassistant_message_tool_name: optional string
The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Deprecatedenable_thinking: optional string
If set to True, enables reasoning before responses or tool calls from the agent.
Only return specified message types in the response. If None (default) returns all messages.
input: optional string or array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].
UnionMember1 = array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
SummarizedReasoning = object { id, summary, encrypted_content, type }
The style of reasoning content returned by the OpenAI Responses API
id: string
The unique identifier for this reasoning step.
summary: array of object { index, text }
Summaries of the reasoning content.
index: number
The index of the summary part.
text: string
The text of the summary part.
encrypted_content: optional string
The encrypted reasoning content.
type: optional "summarized_reasoning"
Indicates this is a summarized reasoning step.
max_steps: optional number
Maximum number of steps the agent should take to process the request.
messages: optional array of MessageCreate { content, role, batch_item_id, 5 more } or ApprovalCreate { approval_request_id, approvals, approve, 3 more }
The messages to be sent to the agent.
MessageCreate = object { content, role, batch_item_id, 5 more }
Request to create a message
The content of the message.
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
role: "user" or "system" or "assistant"
The role of the participant.
batch_item_id: optional string
The id of the LLMBatchItem that this message is associated with
group_id: optional string
The multi-agent group that the message was sent in
name: optional string
The name of the participant.
otid: optional string
The offline threading id associated with this message
sender_id: optional string
The id of the sender of the message, can be an identity id or agent id
type: optional "message"
The message type to be created.
ApprovalCreate = object { approval_request_id, approvals, approve, 3 more }
Input to approve or deny a tool call request
Deprecatedapproval_request_id: optional string
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
Deprecatedapprove: optional boolean
Whether the tool has been approved
group_id: optional string
The multi-agent group that the message was sent in
Deprecatedreason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
Deprecateduse_assistant_message: optional boolean
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
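Putting the ApprovalCreate fields above together, a request body to approve a pending tool call might look like the following sketch. The field names come directly from the schema; the `tool_call_id` value is a made-up placeholder.

```python
# Illustrative ApprovalCreate payload using the non-deprecated `approvals`
# list; the tool_call_id is a placeholder, not a real ID.
approval_create = {
    "type": "approval",
    "approvals": [
        {
            "type": "approval",
            "approve": True,                  # approve the requested tool call
            "tool_call_id": "tool-call-123",  # placeholder: ID from the approval request
            "reason": "Read-only operation, safe to run",
        }
    ],
}
```

Note the top-level `approve`, `reason`, and `approval_request_id` fields are deprecated; new integrations should prefer the per-call entries in `approvals`.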
LettaResponse = object { messages, stop_reason, usage }
Response object from an agent interaction, consisting of the new messages generated by the agent and usage statistics.
The type of the returned messages can be either Message or LettaMessage, depending on what was specified in the request.
Attributes: messages (List[Union[Message, LettaMessage]]): The messages returned by the agent. usage (LettaUsageStatistics): The usage statistics
The messages returned by the agent.
SystemMessage = object { id, content, date, 8 more }
A message generated by the system. Never streamed back on a response, only used for cursor pagination.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (str): The message content sent by the system
content: string
The message content sent by the system
message_type: optional "system_message"
The type of the message.
UserMessage = object { id, content, date, 8 more }
A message sent by the user. Never streamed back on a response, only used for cursor pagination.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)
The message content sent by the user (can be a string or an array of multi-modal content parts)
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
message_type: optional "user_message"
The type of the message.
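The three image source variants above (URL, Base64, Letta) can be sketched as plain payloads. Field names follow the schema; the URL, base64 data, and file ID are illustrative placeholders.

```python
# The three ImageContent source variants from the schema above.
url_source = {"type": "url", "url": "https://example.com/cat.png"}  # placeholder URL

base64_source = {
    "type": "base64",
    "media_type": "image/png",
    "data": "iVBORw0KGgo",   # truncated placeholder base64 data
    "detail": "auto",        # low, high, or auto to let the model decide
}

letta_source = {"type": "letta", "file_id": "file-abc"}  # placeholder file ID

# An ImageContent part wraps exactly one source variant.
image_content = {"type": "image", "source": url_source}
```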
ReasoningMessage = object { id, date, reasoning, 10 more }
Representation of an agent's internal reasoning.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting reasoning (str): The internal reasoning of the agent signature (Optional[str]): The model-generated signature of the reasoning step
message_type: optional "reasoning_message"
The type of the message.
source: optional "reasoner_model" or "non_reasoner_model"
HiddenReasoningMessage = object { id, date, state, 9 more }
Representation of an agent's internal reasoning where reasoning content has been hidden from the response.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API hidden_reasoning (Optional[str]): The internal reasoning of the agent
state: "redacted" or "omitted"
message_type: optional "hidden_reasoning_message"
The type of the message.
ToolCallMessage = object { id, date, tool_call, 9 more }
A message representing a request to call a tool (generated by the LLM to trigger tool execution).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (Union[ToolCall, ToolCallDelta]): The tool call
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "tool_call_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
ToolReturnMessage = object { id, date, status, 13 more }
A message representing the return value of a tool call (generated by Letta executing the requested tool).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_return (str): The return value of the tool (deprecated, use tool_returns) status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns) tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns) stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns) stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns) tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support
Deprecatedstatus: "success" or "error"
message_type: optional "tool_return_message"
The type of the message.
status: "success" or "error"
type: optional "tool"
The message type to be created.
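Since the top-level `status`, `tool_call_id`, `stdout`, and `stderr` fields are deprecated in favor of `tool_returns`, a consumer can read returns with a fallback. This is a sketch with illustrative values, not an official client snippet.

```python
# Read a ToolReturnMessage, preferring the newer tool_returns list over the
# deprecated top-level fields; all values here are illustrative.
msg = {
    "message_type": "tool_return_message",
    "status": "success",       # deprecated, kept for older payloads
    "tool_call_id": "tool-call-1",
    "tool_returns": [
        {"type": "tool", "status": "success", "tool_call_id": "tool-call-1"}
    ],
}

# Fall back to the deprecated fields when tool_returns is absent or empty.
returns = msg.get("tool_returns") or [
    {"status": msg["status"], "tool_call_id": msg.get("tool_call_id")}
]
```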
AssistantMessage = object { id, content, date, 8 more }
A message sent by the LLM in response to user input. Used in the LLM context.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)
The message content sent by the agent (can be a string or an array of content parts)
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
message_type: optional "assistant_message"
The type of the message.
ApprovalRequestMessage = object { id, date, tool_call, 9 more }
A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (ToolCall): The tool call
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool call that has been requested by the LLM to run
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "approval_request_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool calls that have been requested by the LLM to run, which are pending approval
ToolCallDelta = object { arguments, name, tool_call_id }
ApprovalResponseMessage = object { id, date, approval_request_id, 11 more }
A message representing a response from the user indicating whether a tool has been approved to run.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message approve: (bool) Whether the tool has been approved approval_request_id: The ID of the approval request reason: (Optional[str]) An optional explanation for the provided approval status
Deprecatedapproval_request_id: optional string
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
Deprecatedapprove: optional boolean
Whether the tool has been approved
message_type: optional "approval_response_message"
The type of the message.
Deprecatedreason: optional string
An optional explanation for the provided approval status
SummaryMessage = object { id, date, summary, 8 more }
A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.
message_type: optional "summary"
EventMessage = object { id, date, event_data, 9 more }
A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.
event_type: "compaction"
message_type: optional "event"
stop_reason: object { stop_reason, message_type }
The stop reason from Letta indicating why the agent loop stopped execution.
The reason why execution stopped.
message_type: optional "stop_reason"
The type of the message.
usage: object { completion_tokens, message_type, prompt_tokens, 3 more }
The usage statistics of the agent.
completion_tokens: optional number
The number of tokens generated by the agent.
message_type: optional "usage_statistics"
prompt_tokens: optional number
The number of tokens in the prompt.
run_ids: optional array of string
The background task run IDs associated with the agent interaction
step_count: optional number
The number of steps taken by the agent.
total_tokens: optional number
The total number of tokens processed by the agent.
LettaStreamingRequest = object { assistant_message_tool_kwarg, assistant_message_tool_name, background, 9 more }
Deprecatedassistant_message_tool_kwarg: optional string
The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Deprecatedassistant_message_tool_name: optional string
The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
background: optional boolean
Whether to process the request in the background (only used when streaming=true).
Deprecatedenable_thinking: optional string
If set to True, enables reasoning before responses or tool calls from the agent.
include_pings: optional boolean
Whether to include periodic keepalive ping messages in the stream to prevent connection timeouts (only used when streaming=true).
Only return specified message types in the response. If None (default) returns all messages.
input: optional string or array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].
UnionMember1 = array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
SummarizedReasoning = object { id, summary, encrypted_content, type }
The style of reasoning content returned by the OpenAI Responses API
id: string
The unique identifier for this reasoning step.
summary: array of object { index, text }
Summaries of the reasoning content.
index: number
The index of the summary part.
text: string
The text of the summary part.
encrypted_content: optional string
The encrypted reasoning content.
type: optional "summarized_reasoning"
Indicates this is a summarized reasoning step.
max_steps: optional number
Maximum number of steps the agent should take to process the request.
messages: optional array of MessageCreate { content, role, batch_item_id, 5 more } or ApprovalCreate { approval_request_id, approvals, approve, 3 more }
The messages to be sent to the agent.
MessageCreate = object { content, role, batch_item_id, 5 more }
Request to create a message
The content of the message.
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
role: "user" or "system" or "assistant"
The role of the participant.
batch_item_id: optional string
The id of the LLMBatchItem that this message is associated with
group_id: optional string
The multi-agent group that the message was sent in
name: optional string
The name of the participant.
otid: optional string
The offline threading id associated with this message
sender_id: optional string
The id of the sender of the message (can be an identity id or agent id)
type: optional "message"
The message type to be created.
ApprovalCreate = object { approval_request_id, approvals, approve, 3 more }
Input to approve or deny a tool call request
Deprecatedapproval_request_id: optional string
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
Deprecatedapprove: optional boolean
Whether the tool has been approved
group_id: optional string
The multi-agent group that the message was sent in
Deprecatedreason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
stream_tokens: optional boolean
Flag to determine if individual tokens should be streamed, rather than streaming per step (only used when streaming=true).
streaming: optional boolean
If True, returns a streaming response (Server-Sent Events). If False (default), returns a complete response.
Deprecateduse_assistant_message: optional boolean
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
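Combining the LettaStreamingRequest fields above, a request body might look like the following sketch. Field names come from the schema; all values are illustrative, and the `input` shorthand expands to a one-element `messages` list as described above.

```python
# Sketch of a LettaStreamingRequest body; field names come from the schema
# above, values are illustrative.
streaming_request = {
    "streaming": True,      # return Server-Sent Events instead of one response
    "stream_tokens": True,  # stream individual tokens rather than whole steps
    "include_pings": True,  # keepalive pings to prevent connection timeouts
    "background": False,    # process in the foreground
    "max_steps": 10,        # cap on agent steps for this request
    "input": "Hello",       # syntactic sugar for a single user message
}

# Per the schema, `input` is equivalent to this explicit messages list:
explicit = {k: v for k, v in streaming_request.items() if k != "input"}
explicit["messages"] = [{"role": "user", "content": streaming_request["input"]}]
```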
LettaStreamingResponse = SystemMessage { id, content, date, 8 more } or UserMessage { id, content, date, 8 more } or ReasoningMessage { id, date, reasoning, 10 more } or 9 more
Streaming response type for Server-Sent Events (SSE) endpoints. Each event in the stream will be one of these types.
SystemMessage = object { id, content, date, 8 more }
A message generated by the system. Never streamed back on a response, only used for cursor pagination.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (str): The message content sent by the system
content: string
The message content sent by the system
message_type: optional "system_message"
The type of the message.
UserMessage = object { id, content, date, 8 more }
A message sent by the user. Never streamed back on a response, only used for cursor pagination.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)
The message content sent by the user (can be a string or an array of multi-modal content parts)
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
message_type: optional "user_message"
The type of the message.
ReasoningMessage = object { id, date, reasoning, 10 more }
Representation of an agent's internal reasoning.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting reasoning (str): The internal reasoning of the agent signature (Optional[str]): The model-generated signature of the reasoning step
message_type: optional "reasoning_message"
The type of the message.
source: optional "reasoner_model" or "non_reasoner_model"
HiddenReasoningMessage = object { id, date, state, 9 more }
Representation of an agent's internal reasoning where reasoning content has been hidden from the response.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API hidden_reasoning (Optional[str]): The internal reasoning of the agent
state: "redacted" or "omitted"
message_type: optional "hidden_reasoning_message"
The type of the message.
ToolCallMessage = object { id, date, tool_call, 9 more }
A message representing a request to call a tool (generated by the LLM to trigger tool execution).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (Union[ToolCall, ToolCallDelta]): The tool call
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "tool_call_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
ToolReturnMessage = object { id, date, status, 13 more }
A message representing the return value of a tool call (generated by Letta executing the requested tool).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_return (str): The return value of the tool (deprecated, use tool_returns) status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns) tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns) stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns) stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns) tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support
Deprecatedstatus: "success" or "error"
message_type: optional "tool_return_message"
The type of the message.
status: "success" or "error"
type: optional "tool"
The message type to be created.
AssistantMessage = object { id, content, date, 8 more }
A message sent by the LLM in response to user input. Used in the LLM context.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)
The message content sent by the agent (can be a string or an array of content parts)
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
message_type: optional "assistant_message"
The type of the message.
ApprovalRequestMessage = object { id, date, tool_call, 9 more }
A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message tool_call (ToolCall): The tool call
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool call that has been requested by the LLM to run
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "approval_request_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
The tool calls that have been requested by the LLM to run, which are pending approval
ToolCallDelta = object { arguments, name, tool_call_id }
ApprovalResponseMessage = object { id, date, approval_request_id, 11 more }
A message representing a response from the user indicating whether a tool has been approved to run.
Args: id (str): The ID of the message date (datetime): The date the message was created in ISO format name (Optional[str]): The name of the sender of the message approve: (bool) Whether the tool has been approved approval_request_id: The ID of the approval request reason: (Optional[str]) An optional explanation for the provided approval status
Deprecatedapproval_request_id: optional string
The message ID of the approval request
approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }
The list of approval responses
Approval = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
Deprecatedapprove: optional boolean
Whether the tool has been approved
message_type: optional "approval_response_message"
The type of the message.
Deprecatedreason: optional string
An optional explanation for the provided approval status
Ping = object { message_type }
Ping messages are a keep-alive to prevent SSE streams from timing out during long-running requests.
message_type: "ping"
The type of the message.
StopReason = object { stop_reason, message_type }
The stop reason from Letta indicating why the agent loop stopped execution.
The reason why execution stopped.
message_type: optional "stop_reason"
The type of the message.
UsageStatistics = object { completion_tokens, message_type, prompt_tokens, 3 more }
Usage statistics for the agent interaction.
Attributes: completion_tokens (int): The number of tokens generated by the agent. prompt_tokens (int): The number of tokens in the prompt. total_tokens (int): The total number of tokens processed by the agent. step_count (int): The number of steps taken by the agent.
completion_tokens: optional number
The number of tokens generated by the agent.
message_type: optional "usage_statistics"
prompt_tokens: optional number
The number of tokens in the prompt.
run_ids: optional array of string
The background task run IDs associated with the agent interaction
step_count: optional number
The number of steps taken by the agent.
total_tokens: optional number
The total number of tokens processed by the agent.
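A minimal sketch of a usage_statistics payload, with hypothetical token counts. Per the field descriptions above, total_tokens is the total processed, which is typically the sum of prompt and completion tokens.

```python
# Hypothetical usage_statistics payload; all counts are illustrative.
usage = {
    "message_type": "usage_statistics",
    "prompt_tokens": 1200,
    "completion_tokens": 300,
    "total_tokens": 1500,
    "step_count": 2,
    "run_ids": ["run-789"],   # placeholder background run ID
}

# Sanity check: total is typically prompt + completion.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```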
LettaUserMessageContentUnion = TextContent { text, signature, type } or ImageContent { source, type }
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
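The three image source variants above can be illustrated with the following payload sketches. The URL, base64 data, and file ID are placeholders, and the truncated base64 string is deliberately left incomplete.

```python
# Illustrative ImageContent payloads, one per source variant.
url_image = {
    "type": "image",
    "source": {"type": "url", "url": "https://example.com/cat.png"},  # placeholder URL
}
base64_image = {
    "type": "image",
    "source": {
        "type": "base64",
        "media_type": "image/png",
        "data": "iVBORw0KGgo...",   # truncated base64 payload (placeholder)
        "detail": "auto",           # low, high, or auto (model decides)
    },
}
letta_image = {
    "type": "image",
    "source": {"type": "letta", "file_id": "file-abc"},  # hypothetical persisted file ID
}
```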
Message = object { id, role, agent_id, 21 more }
Letta's internal representation of a message. Includes methods to convert to/from LLM provider formats.
Attributes:
id (str): The unique identifier of the message.
role (MessageRole): The role of the participant.
text (str): The text of the message.
user_id (str): The unique identifier of the user.
agent_id (str): The unique identifier of the agent.
model (str): The model used to make the function call.
name (str): The name of the participant.
created_at (datetime): The time the message was created.
tool_calls (List[OpenAIToolCall]): The list of tool calls requested.
tool_call_id (str): The id of the tool call.
step_id (str): The id of the step that this message was created in.
otid (str): The offline threading id associated with this message.
tool_returns (List[ToolReturn]): The list of tool returns requested.
group_id (str): The multi-agent group that the message was sent in.
sender_id (str): The id of the sender of the message, can be an identity id or agent id.
id: string
The human-friendly ID of the Message
The role of the participant.
agent_id: optional string
The unique identifier of the agent.
approval_request_id: optional string
The id of the approval request if this message is associated with a tool call request.
approvals: optional array of object { approve, tool_call_id, reason, type } or object { status, func_response, stderr, 2 more }
The list of approvals for this message.
ApprovalReturn = object { approve, tool_call_id, reason, type }
approve: boolean
Whether the tool has been approved
tool_call_id: string
The ID of the tool call that corresponds to this approval
reason: optional string
An optional explanation for the provided approval status
type: optional "approval"
The message type to be created.
LettaSchemasMessageToolReturn = object { status, func_response, stderr, 2 more }
status: "success" or "error"
The status of the tool call
func_response: optional string
The function response string
stderr: optional array of string
Captured stderr from the tool invocation
stdout: optional array of string
Captured stdout (e.g. prints, logs) from the tool invocation
tool_call_id: optional unknown
The ID for the tool call
approve: optional boolean
Whether the tool call is approved.
batch_item_id: optional string
The id of the LLMBatchItem that this message is associated with
content: optional array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
The content of the message.
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
SummarizedReasoning = object { id, summary, encrypted_content, type }
The style of reasoning content returned by the OpenAI Responses API
id: string
The unique identifier for this reasoning step.
summary: array of object { index, text }
Summaries of the reasoning content.
index: number
The index of the summary part.
text: string
The text of the summary part.
encrypted_content: optional string
The encrypted reasoning content.
type: optional "summarized_reasoning"
Indicates this is a summarized reasoning step.
created_at: optional string
The timestamp when the object was created.
created_by_id: optional string
The id of the user that made this object.
denial_reason: optional string
The reason the tool call request was denied.
group_id: optional string
The multi-agent group that the message was sent in
is_err: optional boolean
Whether this message is part of an error step. Used only for debugging purposes.
last_updated_by_id: optional string
The id of the user that made this object.
model: optional string
The model used to make the function call.
name: optional string
For role user/assistant: the (optional) name of the participant. For role tool/function: the name of the function called.
otid: optional string
The offline threading id associated with this message
run_id: optional string
The id of the run that this message was created in.
sender_id: optional string
The id of the sender of the message, can be an identity id or agent id
step_id: optional string
The id of the step that this message was created in.
tool_call_id: optional string
The ID of the tool call. Only applicable for role tool.
tool_calls: optional array of object { id, function, type }
The list of tool calls requested. Only applicable for role assistant.
function: object { arguments, name }
type: "function"
tool_returns: optional array of object { status, func_response, stderr, 2 more }
Tool execution return information for prior tool calls
status: "success" or "error"
The status of the tool call
func_response: optional string
The function response string
stderr: optional array of string
Captured stderr from the tool invocation
stdout: optional array of string
Captured stdout (e.g. prints, logs) from the tool invocation
tool_call_id: optional unknown
The ID for the tool call
updated_at: optional string
The timestamp when the object was last updated.
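To show how the content union above fits together, here is a sketch of a Message.content array mixing text, a tool call, and its return. The tool name, arguments, and IDs are hypothetical; note that the tool_return's tool_call_id references the ToolCallContent's id.

```python
# Sketch of a Message.content array using three variants of the content union.
content = [
    {"type": "text", "text": "Checking the weather now."},
    {
        "type": "tool_call",
        "id": "call-001",                 # unique ID for this tool call instance
        "name": "get_weather",            # hypothetical tool name
        "input": {"city": "Berlin"},      # parameters as a dict of name -> value
    },
    {
        "type": "tool_return",
        "tool_call_id": "call-001",       # references the ToolCallContent id above
        "content": "Sunny, 21°C",
        "is_error": False,
    },
]
```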
MessageRole = "assistant" or "user" or "tool" or 3 more
MessageType = "system_message" or "user_message" or "assistant_message" or 6 more
OmittedReasoningContent = object { signature, type }
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: optional string
A unique identifier for this reasoning step.
type: optional "omitted_reasoning"
Indicates this is an omitted reasoning step.
ReasoningContent = object { is_native, reasoning, signature, type }
Sent via the Anthropic Messages API
is_native: boolean
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: string
The intermediate reasoning or thought process content.
signature: optional string
A unique identifier for this reasoning step.
type: optional "reasoning"
Indicates this is a reasoning/intermediate step.
ReasoningMessage = object { id, date, reasoning, 10 more }
Representation of an agent's internal reasoning.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
reasoning (str): The internal reasoning of the agent
signature (Optional[str]): The model-generated signature of the reasoning step
message_type: optional "reasoning_message"
The type of the message.
source: optional "reasoner_model" or "non_reasoner_model"
RedactedReasoningContent = object { data, type }
Sent via the Anthropic Messages API
data: string
The redacted or filtered intermediate reasoning content.
type: optional "redacted_reasoning"
Indicates this is a redacted thinking step.
Run = object { id, agent_id, background, 13 more }
Representation of a run - a conversation or processing session for an agent. Runs track when agents process messages and maintain the relationship between agents, steps, and messages.
id: string
The human-friendly ID of the Run
agent_id: string
The unique identifier of the agent associated with the run.
background: optional boolean
Whether the run was created in background mode.
base_template_id: optional string
The base template ID that the run belongs to.
callback_error: optional string
Optional error message from attempting to POST the callback endpoint.
callback_sent_at: optional string
Timestamp when the callback was last attempted.
callback_status_code: optional number
HTTP status code returned by the callback endpoint.
callback_url: optional string
If set, POST to this URL when the run completes.
completed_at: optional string
The timestamp when the run was completed.
created_at: optional string
The timestamp when the run was created.
metadata: optional map[unknown]
Additional metadata for the run.
request_config: optional object { assistant_message_tool_kwarg, assistant_message_tool_name, include_return_message_types, use_assistant_message }
The request configuration for the run.
assistant_message_tool_kwarg: optional string
The name of the message argument in the designated message tool.
assistant_message_tool_name: optional string
The name of the designated message tool.
include_return_message_types: optional array of MessageType
Only return specified message types in the response. If None (default) returns all messages.
use_assistant_message: optional boolean
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects.
status: optional "created" or "running" or "completed" or 2 more
The current status of the run.
The reason why the run was stopped.
total_duration_ns: optional number
Total run duration in nanoseconds
ttft_ns: optional number
Time to first token for a run in nanoseconds
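The two timing fields above are reported in nanoseconds. A small sketch of reading them from a Run payload (all values hypothetical) and converting to milliseconds for display:

```python
# Hypothetical Run payload; durations are in nanoseconds per the schema above.
run = {
    "id": "run-123",
    "agent_id": "agent-456",
    "status": "completed",
    "background": True,
    "ttft_ns": 180_000_000,            # time to first token
    "total_duration_ns": 2_400_000_000,  # total run duration
}

def ns_to_ms(ns: int) -> float:
    """Convert nanoseconds to milliseconds."""
    return ns / 1_000_000

print(f"TTFT: {ns_to_ms(run['ttft_ns']):.0f} ms, "
      f"total: {ns_to_ms(run['total_duration_ns']):.0f} ms")
# → TTFT: 180 ms, total: 2400 ms
```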
SummaryMessage = object { id, date, summary, 8 more }
A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.
message_type: optional "summary"
SystemMessage = object { id, content, date, 8 more }
A message generated by the system. Never streamed back on a response, only used for cursor pagination.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (str): The message content sent by the system
content: string
The message content sent by the system
message_type: optional "system_message"
The type of the message.
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ToolCall = object { arguments, name, tool_call_id }
ToolCallContent = object { id, input, name, 2 more }
id: string
A unique identifier for this specific tool call instance.
input: map[unknown]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: string
The name of the tool being called.
signature: optional string
Stores a unique identifier for any reasoning associated with this tool call.
type: optional "tool_call"
Indicates this content represents a tool call event.
ToolCallDelta = object { arguments, name, tool_call_id }
ToolCallMessage = object { id, date, tool_call, 9 more }
A message representing a request to call a tool (generated by the LLM to trigger tool execution).
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (Union[ToolCall, ToolCallDelta]): The tool call
tool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id } (Deprecated)
ToolCall = object { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
message_type: optional "tool_call_message"
The type of the message.
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
ToolCallDelta = object { arguments, name, tool_call_id }
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
type: optional "tool"
The message type to be created.
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string
The content returned by the tool execution.
is_error: boolean
Indicates whether the tool execution resulted in an error.
tool_call_id: string
References the ID of the ToolCallContent that initiated this tool call.
type: optional "tool_return"
Indicates this content represents a tool return event.
UpdateAssistantMessage = object { content, message_type }
The message content sent by the assistant (can be a string or an array of content parts)
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
message_type: optional "assistant_message"
UpdateReasoningMessage = object { reasoning, message_type }
message_type: optional "reasoning_message"
UpdateSystemMessage = object { content, message_type }
content: string
The message content sent by the system (can be a string or an array of multi-modal content parts)
message_type: optional "system_message"
UpdateUserMessage = object { content, message_type }
The message content sent by the user (can be a string or an array of multi-modal content parts)
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
message_type: optional "user_message"
UserMessage = object { id, content, date, 8 more }
A message sent by the user. Never streamed back on a response, only used for cursor pagination.
Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)
The message content sent by the user (can be a string or an array of multi-modal content parts)
TextContent = object { text, signature, type }
text: string
The text content of the message.
signature: optional string
Stores a unique identifier for any reasoning associated with this text content.
type: optional "text"
The type of the message.
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }
The source of the image.
URL = object { url, type }
url: string
The URL of the image.
type: optional "url"
The source type for the image.
Base64 = object { data, media_type, detail, type }
data: string
The base64 encoded image data.
media_type: string
The media type for the image.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: optional "base64"
The source type for the image.
Letta = object { file_id, data, detail, 2 more }
file_id: string
The unique identifier of the image file persisted in storage.
data: optional string
The base64 encoded image data.
detail: optional string
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: optional string
The media type for the image.
type: optional "letta"
The source type for the image.
type: optional "image"
The type of the message.
message_type: optional "user_message"
The type of the message.
Agents › Blocks
Retrieve Block For Agent
Modify Block For Agent
List Blocks For Agent
Attach Block To Agent
Detach Block From Agent
Models
Block = object { value, id, base_template_id, 15 more }
A Block represents a reserved section of the LLM's context window.
value: string
Value of the block.
id: optional string
The human-friendly ID of the Block
base_template_id: optional string
The base template id of the block.
created_by_id: optional string
The id of the user that made this Block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
last_updated_by_id: optional string
The id of the user that last updated this Block.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
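Since a Block's limit is a character limit on its value, a client may want to check the length before writing. A sketch with hypothetical block contents:

```python
# Hypothetical Block payload; `limit` caps the character length of `value`.
block = {
    "label": "human",   # label shown in the context window
    "value": "The user's name is Ada. She prefers concise answers.",
    "limit": 2000,
    "read_only": False,
    "description": "Facts about the human the agent is talking to.",
}

# Client-side sanity check against the character limit.
assert len(block["value"]) <= block["limit"]
```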
BlockModify = object { base_template_id, deployment_id, description, 12 more }
Update a block
base_template_id: optional string
The base template id of the block.
deployment_id: optional string
The id of the deployment.
description: optional string
Description of the block.
entity_id: optional string
The id of the entity within the template.
hidden: optional boolean
If set to True, the block will be hidden.
is_template: optional boolean
Whether the block is a template (e.g. saved human/persona options).
label: optional string
Label of the block (e.g. 'human', 'persona') in the context window.
limit: optional number
Character limit of the block.
metadata: optional map[unknown]
Metadata of the block.
preserve_on_migration: optional boolean
Preserve the block on template migration.
project_id: optional string
The associated project id.
read_only: optional boolean
Whether the agent has read-only access to the block.
template_id: optional string
The id of the template.
template_name: optional string
Name of the block if it is a template.
value: optional string
Value of the block.
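Because every BlockModify field is optional, an update payload carries only the fields being changed. A minimal sketch (values hypothetical):

```python
# Hypothetical BlockModify payload: only the fields to change are sent;
# omitted fields keep their current server-side values.
block_update = {
    "value": "The user's name is Ada. She is learning Rust.",
    "limit": 4000,   # raise the character limit alongside the new value
}
```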