Agents

List Agents
client.agents.list(query?: AgentListParams { after, ascending, base_template_id, 16 more }, options?: RequestOptions): ArrayPage<AgentState { id, agent_type, blocks, 39 more }>
GET /v1/agents/
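Listing is cursor-paginated via `after`. As a minimal sketch (the interface below is a hand-written subset of the documented `AgentListParams`, not the SDK's exported type, and the agent ID is hypothetical), a query object might be built like this:

```typescript
// Hand-written subset of the documented AgentListParams query shape;
// the real type has 19 fields.
interface AgentListQuery {
  after?: string;            // cursor: list agents after this agent ID
  ascending?: boolean;       // sort direction
  base_template_id?: string; // filter by base template
}

const query: AgentListQuery = {
  after: "agent-123", // hypothetical agent ID
  ascending: true,
};

// With the real client you would call:
// const page = await client.agents.list(query);
// for await (const agent of page) console.log(agent.id);
```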
Create Agent
client.agents.create(body: AgentCreateParams { agent_type, base_template_id, block_ids, 42 more }, options?: RequestOptions): AgentState { id, agent_type, blocks, 39 more }
POST /v1/agents/
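Only a handful of the 45 body fields are usually needed to create an agent. A sketch under the assumption that the three documented fields shown above behave as named (the interface is hand-written, not the SDK type):

```typescript
// Hand-written subset of AgentCreateParams.
interface CreateAgentBody {
  agent_type?: string;  // one of the documented AgentType values
  block_ids?: string[]; // attach existing memory blocks by ID
  base_template_id?: string;
}

const body: CreateAgentBody = {
  agent_type: "memgpt_v2_agent",
  block_ids: [], // no pre-existing blocks attached
};

// const agent = await client.agents.create(body);
// console.log(agent.id); // assigned by the database
```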
Modify Agent
client.agents.modify(agentID: string, body: AgentModifyParams { base_template_id, block_ids, context_window_limit, 31 more }, options?: RequestOptions): AgentState { id, agent_type, blocks, 39 more }
PATCH /v1/agents/{agent_id}
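Since this is a PATCH endpoint, only the fields present in the body are changed. A sketch (hand-written subset of `AgentModifyParams`; the ID and limit value are hypothetical):

```typescript
// Hand-written subset of AgentModifyParams.
interface ModifyAgentBody {
  base_template_id?: string;
  context_window_limit?: number;
}

const agentID = "agent-123"; // hypothetical
const patch: ModifyAgentBody = { context_window_limit: 16000 };

// const updated = await client.agents.modify(agentID, patch);
```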
Retrieve Agent
client.agents.retrieve(agentID: string, query?: AgentRetrieveParams { include, include_relationships }, options?: RequestOptions): AgentState { id, agent_type, blocks, 39 more }
GET /v1/agents/{agent_id}
Delete Agent
client.agents.delete(agentID: string, options?: RequestOptions): AgentDeleteResponse
DELETE /v1/agents/{agent_id}
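Retrieve and delete both take the agent ID as the first positional argument. A sketch of the two-field retrieve query (the value types below are assumptions; the reference only names the fields):

```typescript
// Hand-written sketch of AgentRetrieveParams; field value types are assumed.
interface RetrieveQuery {
  include?: string[];               // assumed: extra fields to include
  include_relationships?: string[]; // assumed: related resources to expand
}

const agentID = "agent-123"; // hypothetical
const query: RetrieveQuery = { include_relationships: ["tools", "sources"] };

// const agent = await client.agents.retrieve(agentID, query);
// await client.agents.delete(agentID); // returns AgentDeleteResponse
```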
Export Agent
client.agents.exportFile(agentID: string, query?: AgentExportFileParams { max_steps, use_legacy_format }, options?: RequestOptions): AgentExportFileResponse
GET /v1/agents/{agent_id}/export
Import Agent
client.agents.importFile(params: AgentImportFileParams { file, append_copy_suffix, env_vars_json, 6 more }, options?: RequestOptions): AgentImportFileResponse { agent_ids }
POST /v1/agents/import
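Export produces a serialized agent; import takes a body whose required field is `file` and returns the new `agent_ids`. A param-shape sketch (hand-written subsets; the real import body has 9 fields):

```typescript
// Hand-written subsets of AgentExportFileParams / AgentImportFileParams.
interface ExportQuery {
  max_steps?: number;
  use_legacy_format?: boolean;
}
interface ImportParams {
  file: unknown; // the exported agent file (exact upload type depends on runtime)
  append_copy_suffix?: boolean;
}

const exportQuery: ExportQuery = { use_legacy_format: false, max_steps: 100 };

// const exported = await client.agents.exportFile(agentID, exportQuery);
// const { agent_ids } = await client.agents.importFile({ file: exported });
```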
Models
AgentEnvironmentVariable { agent_id, key, value, 7 more }
agent_id: string

The ID of the agent this environment variable belongs to.

key: string

The name of the environment variable.

value: string

The value of the environment variable.

id?: string

The human-friendly ID of the Agent-env

created_at?: string | null

The timestamp when the object was created.

format: date-time
created_by_id?: string | null

The id of the user that made this object.

description?: string | null

An optional description of the environment variable.

last_updated_by_id?: string | null

The id of the user that made this object.

updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
value_enc?: string | null

Encrypted secret value (stored as encrypted string)
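Only `agent_id`, `key`, and `value` are required; the remaining fields are server-populated metadata. A sketch (hand-written interface, hypothetical key name):

```typescript
// Hand-written subset of AgentEnvironmentVariable: the three required
// fields plus the optional description.
interface AgentEnvVar {
  agent_id: string;
  key: string;
  value: string;
  description?: string | null;
}

const envVar: AgentEnvVar = {
  agent_id: "agent-123",                        // hypothetical
  key: "OPENWEATHER_API_KEY",                   // hypothetical tool secret
  value: "secret-value",
  description: "API key used by the weather tool",
};
```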

AgentState { id, agent_type, blocks, 39 more }

Representation of an agent's state. This is the state of the agent at a given time, and is persisted in the DB backend. The state has all the information needed to recreate a persisted agent.

id: string

The id of the agent. Assigned by the database.

agent_type: AgentType

The type of agent.

Accepts one of the following:
"memgpt_agent"
"memgpt_v2_agent"
"letta_v1_agent"
"react_agent"
"workflow_agent"
"split_thread_agent"
"sleeptime_agent"
"voice_convo_agent"
"voice_sleeptime_agent"
blocks: Array<Block { value, id, base_template_id, 15 more } >

The memory blocks used by the agent.

value: string

Value of the block.

id?: string

The human-friendly ID of the Block

base_template_id?: string | null

The base template id of the block.

created_by_id?: string | null

The id of the user that made this Block.

deployment_id?: string | null

The id of the deployment.

description?: string | null

Description of the block.

entity_id?: string | null

The id of the entity within the template.

hidden?: boolean | null

If set to True, the block will be hidden.

is_template?: boolean

Whether the block is a template (e.g. saved human/persona options).

label?: string | null

Label of the block (e.g. 'human', 'persona') in the context window.

last_updated_by_id?: string | null

The id of the user that last updated this Block.

limit?: number

Character limit of the block.

metadata?: Record<string, unknown> | null

Metadata of the block.

preserve_on_migration?: boolean | null

Preserve the block on template migration.

project_id?: string | null

The associated project id.

read_only?: boolean

Whether the agent has read-only access to the block.

template_id?: string | null

The id of the template.

template_name?: string | null

Name of the block if it is a template.
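Only `value` is required on a Block; `label` and `limit` are the commonly set optional fields. A sketch of building a persona block and guarding the documented character limit (hand-written interface, hypothetical values):

```typescript
// Hand-written subset of the documented Block model.
interface MemoryBlock {
  value: string;          // required
  label?: string | null;  // e.g. 'human', 'persona'
  limit?: number;         // character limit of the block
  read_only?: boolean;    // agent has read-only access
}

const persona: MemoryBlock = {
  value: "You are a helpful research assistant.",
  label: "persona",
  limit: 5000,
  read_only: false,
};

// Guard the character limit before writing to the block.
const fits = persona.value.length <= (persona.limit ?? Infinity);
```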

Deprecated embedding_config: EmbeddingConfig { embedding_dim, embedding_endpoint_type, embedding_model, 7 more }

Deprecated: Use embedding field instead. The embedding configuration used by the agent.

embedding_dim: number

The dimension of the embedding.

embedding_endpoint_type: "openai" | "anthropic" | "bedrock" | 16 more

The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"bedrock"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"pinecone"
embedding_model: string

The model for the embedding.

azure_deployment?: string | null

The Azure deployment for the model.

azure_endpoint?: string | null

The Azure endpoint for the model.

azure_version?: string | null

The Azure version for the model.

batch_size?: number

The maximum batch size for processing embeddings.

embedding_chunk_size?: number | null

The chunk size of the embedding.

embedding_endpoint?: string | null

The endpoint for the model (None if local).

handle?: string | null

The handle for this config, in the format provider/model-name.

Deprecated llm_config: LlmConfig { context_window, model, model_endpoint_type, 17 more }

Deprecated: Use model field instead. The LLM configuration used by the agent.

context_window: number

The context window size for the model.

model: string

LLM model name.

model_endpoint_type: "openai" | "anthropic" | "google_ai" | 18 more

The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"lmstudio-chatcompletions"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"bedrock"
"deepseek"
"xai"
compatibility_type?: "gguf" | "mlx" | null

The framework compatibility type for the model.

Accepts one of the following:
"gguf"
"mlx"
display_name?: string | null

A human-friendly display name for the model.

enable_reasoner?: boolean

Whether the model should use extended thinking if it is a 'reasoning'-style model.

frequency_penalty?: number | null

Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. From OpenAI: Number between -2.0 and 2.0.

handle?: string | null

The handle for this config, in the format provider/model-name.

max_reasoning_tokens?: number

Configurable thinking budget for extended thinking. Used for enable_reasoner and also for Google Vertex models like Gemini 2.5 Flash. Minimum value is 1024 when used with enable_reasoner.

max_tokens?: number | null

The maximum number of tokens to generate. If not set, the model will use its default value.

model_endpoint?: string | null

The endpoint for the model.

model_wrapper?: string | null

The wrapper for the model.

parallel_tool_calls?: boolean | null

If set to True, enables parallel tool calling. Defaults to False.

provider_category?: ProviderCategory | null

The provider category for the model.

Accepts one of the following:
"base"
"byok"
provider_name?: string | null

The provider name for the model.

put_inner_thoughts_in_kwargs?: boolean | null

Puts 'inner_thoughts' as a kwarg in the function call if this is set to True. This helps with function calling performance and also the generation of inner thoughts.

reasoning_effort?: "minimal" | "low" | "medium" | "high" | null

The reasoning effort to use when generating text with reasoning models.

Accepts one of the following:
"minimal"
"low"
"medium"
"high"
temperature?: number

The temperature to use when generating text with the model. A higher temperature will result in more random text.

tier?: string | null

The cost tier for the model (cloud only).

verbosity?: "low" | "medium" | "high" | null

Soft control for how verbose model output should be, used for GPT-5 models.

Accepts one of the following:
"low"
"medium"
"high"
Deprecated memory: Memory { blocks, agent_type, file_blocks, prompt_template }

Deprecated: Use blocks field instead. The in-context memory of the agent.

blocks: Array<Block { value, id, base_template_id, 15 more } >

Memory blocks contained in the agent's in-context memory

value: string

Value of the block.

id?: string

The human-friendly ID of the Block

base_template_id?: string | null

The base template id of the block.

created_by_id?: string | null

The id of the user that made this Block.

deployment_id?: string | null

The id of the deployment.

description?: string | null

Description of the block.

entity_id?: string | null

The id of the entity within the template.

hidden?: boolean | null

If set to True, the block will be hidden.

is_template?: boolean

Whether the block is a template (e.g. saved human/persona options).

label?: string | null

Label of the block (e.g. 'human', 'persona') in the context window.

last_updated_by_id?: string | null

The id of the user that last updated this Block.

limit?: number

Character limit of the block.

metadata?: Record<string, unknown> | null

Metadata of the block.

preserve_on_migration?: boolean | null

Preserve the block on template migration.

project_id?: string | null

The associated project id.

read_only?: boolean

Whether the agent has read-only access to the block.

template_id?: string | null

The id of the template.

template_name?: string | null

Name of the block if it is a template.

agent_type?: AgentType | (string & {}) | null

Agent type controlling prompt rendering.

Accepts one of the following:
AgentType = "memgpt_agent" | "memgpt_v2_agent" | "letta_v1_agent" | 6 more

Enum to represent the type of agent.

Accepts one of the following:
"memgpt_agent"
"memgpt_v2_agent"
"letta_v1_agent"
"react_agent"
"workflow_agent"
"split_thread_agent"
"sleeptime_agent"
"voice_convo_agent"
"voice_sleeptime_agent"
(string & {})
file_blocks?: Array<FileBlock>

Special blocks representing the agent's in-context memory of an attached file

file_id: string

Unique identifier of the file.

is_open: boolean

True if the agent currently has the file open.

source_id: string

Unique identifier of the source.

value: string

Value of the block.

id?: string

The human-friendly ID of the Block

base_template_id?: string | null

The base template id of the block.

created_by_id?: string | null

The id of the user that made this Block.

deployment_id?: string | null

The id of the deployment.

description?: string | null

Description of the block.

entity_id?: string | null

The id of the entity within the template.

hidden?: boolean | null

If set to True, the block will be hidden.

is_template?: boolean

Whether the block is a template (e.g. saved human/persona options).

label?: string | null

Label of the block (e.g. 'human', 'persona') in the context window.

last_accessed_at?: string | null

UTC timestamp of the agent’s most recent access to this file. Any operations from the open, close, or search tools will update this field.

format: date-time
last_updated_by_id?: string | null

The id of the user that last updated this Block.

limit?: number

Character limit of the block.

metadata?: Record<string, unknown> | null

Metadata of the block.

preserve_on_migration?: boolean | null

Preserve the block on template migration.

project_id?: string | null

The associated project id.

read_only?: boolean

Whether the agent has read-only access to the block.

template_id?: string | null

The id of the template.

template_name?: string | null

Name of the block if it is a template.

prompt_template?: string

Deprecated. Ignored for performance.

name: string

The name of the agent.

sources: Array<Source>

The sources used by the agent.

id: string

The human-friendly ID of the Source

embedding_config: EmbeddingConfig { embedding_dim, embedding_endpoint_type, embedding_model, 7 more }

The embedding configuration used by the source.

embedding_dim: number

The dimension of the embedding.

embedding_endpoint_type: "openai" | "anthropic" | "bedrock" | 16 more

The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"bedrock"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"pinecone"
embedding_model: string

The model for the embedding.

azure_deployment?: string | null

The Azure deployment for the model.

azure_endpoint?: string | null

The Azure endpoint for the model.

azure_version?: string | null

The Azure version for the model.

batch_size?: number

The maximum batch size for processing embeddings.

embedding_chunk_size?: number | null

The chunk size of the embedding.

embedding_endpoint?: string | null

The endpoint for the model (None if local).

handle?: string | null

The handle for this config, in the format provider/model-name.

name: string

The name of the source.

created_at?: string | null

The timestamp when the source was created.

format: date-time
created_by_id?: string | null

The id of the user that made this source.

description?: string | null

The description of the source.

instructions?: string | null

Instructions for how to use the source.

last_updated_by_id?: string | null

The id of the user that made this source.

metadata?: Record<string, unknown> | null

Metadata associated with the source.

updated_at?: string | null

The timestamp when the source was last updated.

format: date-time
vector_db_provider?: VectorDBProvider

The vector database provider used for this source's passages

Accepts one of the following:
"native"
"tpuf"
"pinecone"
system: string

The system prompt used by the agent.

tags: Array<string>

The tags associated with the agent.

tools: Array<Tool { id, args_json_schema, created_by_id, 14 more } >

The tools used by the agent.

id: string

The human-friendly ID of the Tool

args_json_schema?: Record<string, unknown> | null

The args JSON schema of the function.

created_by_id?: string | null

The id of the user that made this Tool.

default_requires_approval?: boolean | null

Default value for whether or not executing this tool requires approval.

description?: string | null

The description of the tool.

enable_parallel_execution?: boolean | null

If set to True, then this tool will potentially be executed concurrently with other tools. Default False.

json_schema?: Record<string, unknown> | null

The JSON schema of the function.

last_updated_by_id?: string | null

The id of the user that made this Tool.

metadata_?: Record<string, unknown> | null

A dictionary of additional metadata for the tool.

name?: string | null

The name of the function.

npm_requirements?: Array<NpmRequirement { name, version } > | null

Optional list of npm packages required by this tool.

name: string

Name of the npm package.

minLength: 1
version?: string | null

Optional version of the package, following semantic versioning.

pip_requirements?: Array<PipRequirement { name, version } > | null

Optional list of pip packages required by this tool.

name: string

Name of the pip package.

minLength: 1
version?: string | null

Optional version of the package, following semantic versioning.

return_char_limit?: number

The maximum number of characters in the response.

maximum: 1000000
minimum: 1
source_code?: string | null

The source code of the function.

source_type?: string | null

The type of the source code.

tags?: Array<string>

Metadata tags.

tool_type?: ToolType

The type of the tool.

Accepts one of the following:
"custom"
"letta_core"
"letta_memory_core"
"letta_multi_agent_core"
"letta_sleeptime_core"
"letta_voice_sleeptime_core"
"letta_builtin"
"letta_files_core"
"external_langchain"
"external_composio"
"external_mcp"
base_template_id?: string | null

The base template id of the agent.

created_at?: string | null

The timestamp when the object was created.

format: date-time
created_by_id?: string | null

The id of the user that made this object.

deployment_id?: string | null

The id of the deployment.

description?: string | null

The description of the agent.

embedding?: Embedding | null

Schema for defining settings for an embedding model

model: string

The name of the model.

provider: "openai" | "ollama"

The provider of the model.

Accepts one of the following:
"openai"
"ollama"
enable_sleeptime?: boolean | null

If set to True, memory management will move to a background agent thread.

entity_id?: string | null

The id of the entity within the template.

hidden?: boolean | null

If set to True, the agent will be hidden.

identities?: Array<Identity { id, agent_ids, block_ids, 5 more } >

The identities associated with this agent.

id: string

The human-friendly ID of the Identity

Deprecated agent_ids: Array<string>

The IDs of the agents associated with the identity.

Deprecated block_ids: Array<string>

The IDs of the blocks associated with the identity.

identifier_key: string

External, user-generated identifier key of the identity.

identity_type: IdentityType

The type of the identity.

Accepts one of the following:
"org"
"user"
"other"
name: string

The name of the identity.

project_id?: string | null

The project id of the identity, if applicable.

properties?: Array<IdentityProperty { key, type, value } >

List of properties associated with the identity

key: string

The key of the property

type: "string" | "number" | "boolean" | "json"

The type of the property

Accepts one of the following:
"string"
"number"
"boolean"
"json"
value: string | number | boolean | Record<string, unknown>

The value of the property

Accepts one of the following:
string
number
boolean
Record<string, unknown>
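An identity property carries a `type` discriminant that tells you how to interpret `value`. A sketch of the documented shape (hand-written types, hypothetical keys):

```typescript
// The documented IdentityProperty shape: value interpreted per `type`.
type PropValue = string | number | boolean | Record<string, unknown>;

interface IdentityProperty {
  key: string;
  type: "string" | "number" | "boolean" | "json";
  value: PropValue;
}

const props: IdentityProperty[] = [
  { key: "plan", type: "string", value: "pro" },          // hypothetical
  { key: "seats", type: "number", value: 5 },             // hypothetical
  { key: "prefs", type: "json", value: { theme: "dark" } }, // hypothetical
];
```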
Deprecated identity_ids?: Array<string>

Deprecated: Use identities field instead. The ids of the identities associated with this agent.

last_run_completion?: string | null

The timestamp when the agent last completed a run.

format: date-time
last_run_duration_ms?: number | null

The duration in milliseconds of the agent's last run.

last_stop_reason?: StopReasonType | null

The stop reason from the agent's last run.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
last_updated_by_id?: string | null

The id of the user that made this object.

managed_group?: Group { id, agent_ids, description, 15 more } | null

The multi-agent group that this agent manages

id: string

The id of the group. Assigned by the database.

agent_ids: Array<string>
description: string
manager_type: ManagerType
Accepts one of the following:
"round_robin"
"supervisor"
"dynamic"
"sleeptime"
"voice_sleeptime"
"swarm"
base_template_id?: string | null

The base template id.

deployment_id?: string | null

The id of the deployment.

hidden?: boolean | null

If set to True, the group will be hidden.

last_processed_message_id?: string | null
manager_agent_id?: string | null
max_message_buffer_length?: number | null

The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.

max_turns?: number | null
min_message_buffer_length?: number | null

The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.

project_id?: string | null

The associated project id.

Deprecated shared_block_ids?: Array<string>
sleeptime_agent_frequency?: number | null
template_id?: string | null

The id of the template.

termination_token?: string | null
turns_counter?: number | null
max_files_open?: number | null

Maximum number of files that can be open at once for this agent. Setting this too high may exceed the context window, which will break the agent.

message_buffer_autoclear?: boolean

If set to True, the agent will not remember previous messages (though the agent will still retain state via core memory blocks and archival/recall memory). Not recommended unless you have an advanced use case.

message_ids?: Array<string> | null

The ids of the messages in the agent's in-context memory.

metadata?: Record<string, unknown> | null

The metadata of the agent.

model?: Model | null

Schema for defining settings for a model

model: string

The name of the model.

max_output_tokens?: number

The maximum number of tokens the model can generate.

parallel_tool_calls?: boolean

Whether to enable parallel tool calling.
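The newer `model` field replaces the deprecated `llm_config`; only the model name is required. A sketch (hand-written subset; the model name is a hypothetical example):

```typescript
// Hand-written subset of the Model settings schema.
interface ModelSettings {
  model: string;               // required: the name of the model
  max_output_tokens?: number;  // cap on generated tokens
  parallel_tool_calls?: boolean;
}

const model: ModelSettings = {
  model: "openai/gpt-4o", // hypothetical name
  max_output_tokens: 4096,
  parallel_tool_calls: false,
};
```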

Deprecated multi_agent_group?: Group { id, agent_ids, description, 15 more } | null

Deprecated: Use managed_group field instead. The multi-agent group that this agent manages.

id: string

The id of the group. Assigned by the database.

agent_ids: Array<string>
description: string
manager_type: ManagerType
Accepts one of the following:
"round_robin"
"supervisor"
"dynamic"
"sleeptime"
"voice_sleeptime"
"swarm"
base_template_id?: string | null

The base template id.

deployment_id?: string | null

The id of the deployment.

hidden?: boolean | null

If set to True, the group will be hidden.

last_processed_message_id?: string | null
manager_agent_id?: string | null
max_message_buffer_length?: number | null

The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.

max_turns?: number | null
min_message_buffer_length?: number | null

The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.

project_id?: string | null

The associated project id.

Deprecated shared_block_ids?: Array<string>
sleeptime_agent_frequency?: number | null
template_id?: string | null

The id of the template.

termination_token?: string | null
turns_counter?: number | null
per_file_view_window_char_limit?: number | null

The per-file view window character limit for this agent. Setting this too high may exceed the context window, which will break the agent.

project_id?: string | null

The id of the project the agent belongs to.

response_format?: TextResponseFormat { type } | JsonSchemaResponseFormat { json_schema, type } | JsonObjectResponseFormat { type } | null

The response format used by the agent

Accepts one of the following:
TextResponseFormat { type }

Response format for plain text responses.

type?: "text"

The type of the response format.

Accepts one of the following:
"text"
JsonSchemaResponseFormat { json_schema, type }

Response format for JSON schema-based responses.

json_schema: Record<string, unknown>

The JSON schema of the response.

type?: "json_schema"

The type of the response format.

Accepts one of the following:
"json_schema"
JsonObjectResponseFormat { type }

Response format for JSON object responses.

type?: "json_object"

The type of the response format.

Accepts one of the following:
"json_object"
secrets?: Array<AgentEnvironmentVariable { agent_id, key, value, 7 more } >

The environment variables for tool execution specific to this agent.

agent_id: string

The ID of the agent this environment variable belongs to.

key: string

The name of the environment variable.

value: string

The value of the environment variable.

id?: string

The human-friendly ID of the Agent-env

created_at?: string | null

The timestamp when the object was created.

format: date-time
created_by_id?: string | null

The id of the user that made this object.

description?: string | null

An optional description of the environment variable.

last_updated_by_id?: string | null

The id of the user that made this object.

updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
value_enc?: string | null

Encrypted secret value (stored as encrypted string)

template_id?: string | null

The id of the template the agent belongs to.

timezone?: string | null

The timezone of the agent (IANA format).

Deprecated tool_exec_environment_variables?: Array<AgentEnvironmentVariable { agent_id, key, value, 7 more } >

Deprecated: use secrets field instead.

agent_id: string

The ID of the agent this environment variable belongs to.

key: string

The name of the environment variable.

value: string

The value of the environment variable.

id?: string

The human-friendly ID of the Agent-env

created_at?: string | null

The timestamp when the object was created.

format: date-time
created_by_id?: string | null

The id of the user that made this object.

description?: string | null

An optional description of the environment variable.

last_updated_by_id?: string | null

The id of the user that made this object.

updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
value_enc?: string | null

Encrypted secret value (stored as encrypted string)

tool_rules?: Array<ChildToolRule { children, tool_name, child_arg_nodes, 2 more } | InitToolRule { tool_name, args, prompt_template, type } | TerminalToolRule { tool_name, prompt_template, type } | 6 more> | null

The list of tool rules.

Accepts one of the following:
ChildToolRule { children, tool_name, child_arg_nodes, 2 more }

A ToolRule represents a tool that can be invoked by the agent.

children: Array<string>

The children tools that can be invoked.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

child_arg_nodes?: Array<ChildArgNode> | null

Optional list of typed child argument overrides. Each node must reference a child in 'children'.

name: string

The name of the child tool to invoke next.

args?: Record<string, unknown> | null

Optional prefilled arguments for this child tool. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.

prompt_template?: string | null

Optional template string (ignored).

type?: "constrain_child_tools"
Accepts one of the following:
"constrain_child_tools"
InitToolRule { tool_name, args, prompt_template, type }

Represents the initial tool rule configuration.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

args?: Record<string, unknown> | null

Optional prefilled arguments for this tool. When present, these values will override any LLM-provided arguments with the same keys during invocation. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.

prompt_template?: string | null

Optional template string (ignored). Rendering uses fast built-in formatting for performance.

type?: "run_first"
Accepts one of the following:
"run_first"
TerminalToolRule { tool_name, prompt_template, type }

Represents a terminal tool rule configuration where if this tool gets called, it must end the agent loop.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "exit_loop"
Accepts one of the following:
"exit_loop"
ConditionalToolRule { child_output_mapping, tool_name, default_child, 3 more }

A ToolRule that conditionally maps to different child tools based on the output.

child_output_mapping: Record<string, string>

The mapping from tool output values to child tool names.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

default_child?: string | null

The default child tool to be called. If None, any tool can be called.

prompt_template?: string | null

Optional template string (ignored).

require_output_mapping?: boolean

Whether to throw an error when the output doesn't match any case.

type?: "conditional"
Accepts one of the following:
"conditional"
ContinueToolRule { tool_name, prompt_template, type }

Represents a tool rule configuration where if this tool gets called, it must continue the agent loop.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "continue_loop"
Accepts one of the following:
"continue_loop"
RequiredBeforeExitToolRule { tool_name, prompt_template, type }

Represents a tool rule configuration where this tool must be called before the agent loop can exit.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "required_before_exit"
Accepts one of the following:
"required_before_exit"
MaxCountPerStepToolRule { max_count_limit, tool_name, prompt_template, type }

Represents a tool rule configuration which constrains the total number of times this tool can be invoked in a single step.

max_count_limit: number

The max limit for the total number of times this tool can be invoked in a single step.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "max_count_per_step"
Accepts one of the following:
"max_count_per_step"
ParentToolRule { children, tool_name, prompt_template, type }

A ToolRule that only allows a child tool to be called if the parent has been called.

children: Array<string>

The children tools that can be invoked.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "parent_last_tool"
Accepts one of the following:
"parent_last_tool"
RequiresApprovalToolRule { tool_name, prompt_template, type }

Represents a tool rule configuration which requires approval before the tool can be invoked.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored). Rendering uses fast built-in formatting for performance.

type?: "requires_approval"
Accepts one of the following:
"requires_approval"
updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
AgentType = "memgpt_agent" | "memgpt_v2_agent" | "letta_v1_agent" | 6 more

Enum to represent the type of agent.

Accepts one of the following:
"memgpt_agent"
"memgpt_v2_agent"
"letta_v1_agent"
"react_agent"
"workflow_agent"
"split_thread_agent"
"sleeptime_agent"
"voice_convo_agent"
"voice_sleeptime_agent"
ChildToolRule { children, tool_name, child_arg_nodes, 2 more }

A ToolRule represents a tool that can be invoked by the agent.

children: Array<string>

The children tools that can be invoked.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

child_arg_nodes?: Array<ChildArgNode> | null

Optional list of typed child argument overrides. Each node must reference a child in 'children'.

name: string

The name of the child tool to invoke next.

args?: Record<string, unknown> | null

Optional prefilled arguments for this child tool. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.

prompt_template?: string | null

Optional template string (ignored).

type?: "constrain_child_tools"
Accepts one of the following:
"constrain_child_tools"
ConditionalToolRule { child_output_mapping, tool_name, default_child, 3 more }

A ToolRule that conditionally maps to different child tools based on the output.

child_output_mapping: Record<string, string>

The mapping from tool output values to child tool names.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

default_child?: string | null

The default child tool to be called. If None, any tool can be called.

prompt_template?: string | null

Optional template string (ignored).

require_output_mapping?: boolean

Whether to throw an error when output doesn't match any case

type?: "conditional"
Accepts one of the following:
"conditional"
ContinueToolRule { tool_name, prompt_template, type }

Represents a tool rule configuration where if this tool gets called, it must continue the agent loop.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "continue_loop"
Accepts one of the following:
"continue_loop"
InitToolRule { tool_name, args, prompt_template, type }

Represents the initial tool rule configuration.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

args?: Record<string, unknown> | null

Optional prefilled arguments for this tool. When present, these values will override any LLM-provided arguments with the same keys during invocation. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.

prompt_template?: string | null

Optional template string (ignored). Rendering uses fast built-in formatting for performance.

type?: "run_first"
Accepts one of the following:
"run_first"
JsonObjectResponseFormat { type }

Response format for JSON object responses.

type?: "json_object"

The type of the response format.

Accepts one of the following:
"json_object"
JsonSchemaResponseFormat { json_schema, type }

Response format for JSON schema-based responses.

json_schema: Record<string, unknown>

The JSON schema of the response.

type?: "json_schema"

The type of the response format.

Accepts one of the following:
"json_schema"
LettaMessageContentUnion = TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 4 more

Sent via the Anthropic Messages API

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
MaxCountPerStepToolRule { max_count_limit, tool_name, prompt_template, type }

Represents a tool rule configuration which constrains the total number of times this tool can be invoked in a single step.

max_count_limit: number

The max limit for the total number of times this tool can be invoked in a single step.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "max_count_per_step"
Accepts one of the following:
"max_count_per_step"
MessageCreate { content, role, batch_item_id, 5 more }

Request to create a message

content: Array<LettaMessageContentUnion> | string

The content of the message.

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
string
role: "user" | "system" | "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id?: string | null

The id of the LLMBatchItem that this message is associated with

group_id?: string | null

The multi-agent group that the message was sent in

name?: string | null

The name of the participant.

otid?: string | null

The offline threading id associated with this message

sender_id?: string | null

The id of the sender of the message, can be an identity id or agent id

type?: "message" | null

The message type to be created.

Accepts one of the following:
"message"
ParentToolRule { children, tool_name, prompt_template, type }

A ToolRule that only allows a child tool to be called if the parent has been called.

children: Array<string>

The children tools that can be invoked.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "parent_last_tool"
Accepts one of the following:
"parent_last_tool"
RequiredBeforeExitToolRule { tool_name, prompt_template, type }

Represents a tool rule configuration where this tool must be called before the agent loop can exit.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "required_before_exit"
Accepts one of the following:
"required_before_exit"
RequiresApprovalToolRule { tool_name, prompt_template, type }

Represents a tool rule configuration which requires approval before the tool can be invoked.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored). Rendering uses fast built-in formatting for performance.

type?: "requires_approval"
Accepts one of the following:
"requires_approval"
TerminalToolRule { tool_name, prompt_template, type }

Represents a terminal tool rule configuration where if this tool gets called, it must end the agent loop.

tool_name: string

The name of the tool. Must exist in the database for the user's organization.

prompt_template?: string | null

Optional template string (ignored).

type?: "exit_loop"
Accepts one of the following:
"exit_loop"
TextResponseFormat { type }

Response format for plain text responses.

type?: "text"

The type of the response format.

Accepts one of the following:
"text"

Agents › Messages

List Messages
client.agents.messages.list(stringagentID, MessageListParams { after, assistant_message_tool_kwarg, assistant_message_tool_name, 7 more } query?, RequestOptionsoptions?): ArrayPage<LettaMessageUnion>
get/v1/agents/{agent_id}/messages
Send Message
client.agents.messages.send(stringagentID, MessageSendParamsbody, RequestOptionsoptions?): LettaResponse { messages, stop_reason, usage } | Stream<LettaStreamingResponse>
post/v1/agents/{agent_id}/messages
Modify Message
client.agents.messages.modify(stringmessageID, MessageModifyParamsparams, RequestOptionsoptions?): MessageModifyResponse
patch/v1/agents/{agent_id}/messages/{message_id}
Send Message Streaming
client.agents.messages.stream(stringagentID, MessageStreamParams { assistant_message_tool_kwarg, assistant_message_tool_name, background, 9 more } body, RequestOptionsoptions?): LettaStreamingResponse | Stream<LettaStreamingResponse>
post/v1/agents/{agent_id}/messages/stream
Cancel Message
client.agents.messages.cancel(stringagentID, MessageCancelParams { run_ids } body?, RequestOptionsoptions?): MessageCancelResponse
post/v1/agents/{agent_id}/messages/cancel
Send Message Async
client.agents.messages.sendAsync(stringagentID, MessageSendAsyncParams { assistant_message_tool_kwarg, assistant_message_tool_name, callback_url, 6 more } body, RequestOptionsoptions?): Run { id, agent_id, background, 13 more }
post/v1/agents/{agent_id}/messages/async
Reset Messages
client.agents.messages.reset(stringagentID, MessageResetParams { add_default_initial_messages } body, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more }
patch/v1/agents/{agent_id}/reset-messages
Summarize Messages
client.agents.messages.summarize(stringagentID, RequestOptionsoptions?): void
post/v1/agents/{agent_id}/summarize
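A minimal sketch of composing a body for the Send Message endpoint above. The `messages` field name follows the MessageCreate model in this section, but MessageSendParams is not expanded here, so treat the exact shape as an assumption; the client call itself is shown only as a comment:

```typescript
const agentId = "agent-123"; // placeholder agent ID

// Assumed body shape for client.agents.messages.send(agentId, body):
const sendParams = {
  messages: [
    { role: "user" as const, content: "Summarize our last conversation." },
  ],
};

// With a configured client this would be (not run here):
// const response = await client.agents.messages.send(agentId, sendParams);
// console.log(response.messages, response.stop_reason, response.usage);
```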
Models
ApprovalCreate { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

(Deprecated) approval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
(Deprecated) approve?: boolean | null

Whether the tool has been approved

group_id?: string | null

The multi-agent group that the message was sent in

(Deprecated) reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ApprovalRequestMessage { id, date, tool_call, 9 more }

A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (ToolCall): The tool call

id: string
date: string
(Deprecated) tool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }

The tool call that the LLM has requested to run

Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "approval_request_message"

The type of the message.

Accepts one of the following:
"approval_request_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null

The tool calls that the LLM has requested to run, which are pending approval

Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ApprovalResponseMessage { id, date, approval_request_id, 11 more }

A message representing a response from the user indicating whether a tool has been approved to run.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
approve (bool): Whether the tool has been approved
approval_request_id: The ID of the approval request
reason (Optional[str]): An optional explanation for the provided approval status

id: string
date: string
(Deprecated) approval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
(Deprecated) approve?: boolean | null

Whether the tool has been approved

is_err?: boolean | null
message_type?: "approval_response_message"

The type of the message.

Accepts one of the following:
"approval_response_message"
name?: string | null
otid?: string | null
(Deprecated) reason?: string | null

An optional explanation for the provided approval status

run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
AssistantMessage { id, content, date, 8 more }

A message sent by the LLM in response to user input. Used in the LLM context.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)

id: string
content: Array<LettaAssistantMessageContentUnion { text, signature, type } > | string

The message content sent by the agent (can be a string or an array of content parts)

Accepts one of the following:
Array<LettaAssistantMessageContentUnion { text, signature, type } >
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
string
date: string
is_err?: boolean | null
message_type?: "assistant_message"

The type of the message.

Accepts one of the following:
"assistant_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
EventMessage { id, date, event_data, 9 more }

A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.

id: string
date: string
event_data: Record<string, unknown>
event_type: "compaction"
Accepts one of the following:
"compaction"
is_err?: boolean | null
message_type?: "event"
Accepts one of the following:
"event"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
HiddenReasoningMessage { id, date, state, 9 more }

Representation of an agent's internal reasoning where reasoning content has been hidden from the response.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
hidden_reasoning (Optional[str]): The internal reasoning of the agent

id: string
date: string
state: "redacted" | "omitted"
Accepts one of the following:
"redacted"
"omitted"
hidden_reasoning?: string | null
is_err?: boolean | null
message_type?: "hidden_reasoning_message"

The type of the message.

Accepts one of the following:
"hidden_reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
JobStatus = "created" | "running" | "completed" | 4 more

Status of the job.

Accepts one of the following:
"created"
"running"
"completed"
"failed"
"pending"
"cancelled"
"expired"
JobType = "job" | "run" | "batch"
Accepts one of the following:
"job"
"run"
"batch"
LettaAssistantMessageContentUnion { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
LettaMessageUnion = SystemMessage { id, content, date, 8 more } | UserMessage { id, content, date, 8 more } | ReasoningMessage { id, date, reasoning, 10 more } | 8 more

Accepts one of the following:
SystemMessage { id, content, date, 8 more }

A message generated by the system. Never streamed back on a response, only used for cursor pagination.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (str): The message content sent by the system

id: string
content: string

The message content sent by the system

date: string
is_err?: boolean | null
message_type?: "system_message"

The type of the message.

Accepts one of the following:
"system_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
UserMessage { id, content, date, 8 more }

A message sent by the user. Never streamed back on a response, only used for cursor pagination.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)

id: string
content: Array<LettaUserMessageContentUnion> | string

The message content sent by the user (can be a string or an array of multi-modal content parts)

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
string
date: string
is_err?: boolean | null
message_type?: "user_message"

The type of the message.

Accepts one of the following:
"user_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ReasoningMessage { id, date, reasoning, 10 more }

Representation of an agent's internal reasoning.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
reasoning (str): The internal reasoning of the agent
signature (Optional[str]): The model-generated signature of the reasoning step

id: string
date: string
reasoning: string
is_err?: boolean | null
message_type?: "reasoning_message"

The type of the message.

Accepts one of the following:
"reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
signature?: string | null
source?: "reasoner_model" | "non_reasoner_model"
Accepts one of the following:
"reasoner_model"
"non_reasoner_model"
step_id?: string | null
HiddenReasoningMessage { id, date, state, 9 more }

Representation of an agent's internal reasoning where reasoning content has been hidden from the response.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
hidden_reasoning (Optional[str]): The internal reasoning of the agent

id: string
date: string
state: "redacted" | "omitted"
Accepts one of the following:
"redacted"
"omitted"
hidden_reasoning?: string | null
is_err?: boolean | null
message_type?: "hidden_reasoning_message"

The type of the message.

Accepts one of the following:
"hidden_reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ToolCallMessage { id, date, tool_call, 9 more }

A message representing a request to call a tool (generated by the LLM to trigger tool execution).

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (Union[ToolCall, ToolCallDelta]): The tool call

id: string
date: string
(Deprecated) tool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }
Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "tool_call_message"

The type of the message.

Accepts one of the following:
"tool_call_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null
Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ToolReturnMessage { id, date, status, 13 more }

A message representing the return value of a tool call (generated by Letta executing the requested tool).

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_return (str): The return value of the tool (deprecated, use tool_returns)
status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns)
tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns)
stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns)
stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns)
tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support

id: string
date: string
(Deprecated) status: "success" | "error"
Accepts one of the following:
"success"
"error"
Deprecatedtool_call_id: string
Deprecatedtool_return: string
is_err?: boolean | null
message_type?: "tool_return_message"

The type of the message.

Accepts one of the following:
"tool_return_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
Deprecatedstderr?: Array<string> | null
Deprecatedstdout?: Array<string> | null
step_id?: string | null
tool_returns?: Array<ToolReturn { status, tool_call_id, tool_return, 3 more } > | null
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
AssistantMessage { id, content, date, 8 more }

A message sent by the LLM in response to user input. Used in the LLM context.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)

id: string
content: Array<LettaAssistantMessageContentUnion { text, signature, type } > | string

The message content sent by the agent (can be a string or an array of content parts)

Accepts one of the following:
Array<LettaAssistantMessageContentUnion { text, signature, type } >
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
string
date: string
is_err?: boolean | null
message_type?: "assistant_message"

The type of the message.

Accepts one of the following:
"assistant_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
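Because `AssistantMessage.content` is a union of a plain string and an array of text parts, display code usually flattens it first. A small sketch, with the part shape hand-written to mirror the schema above (not imported from the SDK):

```typescript
// Hand-written shape mirroring LettaAssistantMessageContentUnion (assumption).
interface TextPart {
  text: string;
  signature?: string | null;
  type?: "text";
}

// Flatten the string-or-parts union into a single display string.
function assistantText(content: string | TextPart[]): string {
  if (typeof content === "string") {
    return content;
  }
  return content.map((part) => part.text).join("");
}
```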
ApprovalRequestMessage { id, date, tool_call, 9 more }

A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- tool_call (ToolCall): The tool call

id: string
date: string
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }

The tool call that has been requested by the LLM to run

Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "approval_request_message"

The type of the message.

Accepts one of the following:
"approval_request_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null

The tool calls that have been requested by the LLM to run, which are pending approval

Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
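To surface an approval prompt, a client needs the pending calls regardless of whether the server populated the `tool_calls` array, a streaming delta, or the deprecated `tool_call` field. A sketch, with shapes hand-written to mirror the schema above (not imported from the SDK):

```typescript
// Hand-written shapes mirroring the ToolCall / ToolCallDelta schema (assumption).
interface ToolCall {
  arguments: string;
  name: string;
  tool_call_id: string;
}

interface ToolCallDelta {
  arguments?: string | null;
  name?: string | null;
  tool_call_id?: string | null;
}

// Collect pending calls, preferring `tool_calls` and falling back to the
// deprecated single `tool_call` field.
function pendingToolCalls(msg: {
  tool_call?: ToolCall | ToolCallDelta;
  tool_calls?: ToolCall[] | ToolCallDelta | null;
}): (ToolCall | ToolCallDelta)[] {
  if (Array.isArray(msg.tool_calls)) {
    return msg.tool_calls;
  }
  if (msg.tool_calls) {
    return [msg.tool_calls]; // single streaming delta
  }
  return msg.tool_call ? [msg.tool_call] : [];
}
```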
ApprovalResponseMessage { id, date, approval_request_id, 11 more }

A message representing a response from the user indicating whether a tool has been approved to run.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- approve (bool): Whether the tool has been approved
- approval_request_id: The ID of the approval request
- reason (Optional[str]): An optional explanation for the provided approval status

id: string
date: string
Deprecatedapproval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecatedapprove?: boolean | null

Whether the tool has been approved

is_err?: boolean | null
message_type?: "approval_response_message"

The type of the message.

Accepts one of the following:
"approval_response_message"
name?: string | null
otid?: string | null
Deprecatedreason?: string | null

An optional explanation for the provided approval status

run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
SummaryMessage { id, date, summary, 8 more }

A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.

id: string
date: string
summary: string
is_err?: boolean | null
message_type?: "summary"
Accepts one of the following:
"summary"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
EventMessage { id, date, event_data, 9 more }

A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.

id: string
date: string
event_data: Record<string, unknown>
event_type: "compaction"
Accepts one of the following:
"compaction"
is_err?: boolean | null
message_type?: "event"
Accepts one of the following:
"event"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
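Every message model above carries a `message_type` discriminant, so a mixed list of messages can be filtered without inspecting each shape. A minimal sketch using a hand-written base shape (an assumption, not the SDK's union type):

```typescript
// Minimal common shape shared by the message models above (assumption).
interface LettaMessageLike {
  id: string;
  message_type?: string;
}

// Filter a heterogeneous message list by its `message_type` discriminant.
function ofType<T extends LettaMessageLike>(messages: T[], messageType: string): T[] {
  return messages.filter((m) => m.message_type === messageType);
}
```

In real SDK code the `message_type` literal types also let TypeScript narrow the union automatically inside a `switch` statement.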
LettaRequest { assistant_message_tool_kwarg, assistant_message_tool_name, enable_thinking, 5 more }
Deprecatedassistant_message_tool_kwarg?: string

The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Deprecatedassistant_message_tool_name?: string

The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Deprecatedenable_thinking?: string

If set to True, enables reasoning before responses or tool calls from the agent.

include_return_message_types?: Array<MessageType> | null

Only return specified message types in the response. If None (default) returns all messages.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
input?: string | Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null

Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].

Accepts one of the following:
string
Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more>
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoningContent { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: Array<Summary>

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content?: string

The encrypted reasoning content.

type?: "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
max_steps?: number

Maximum number of steps the agent should take to process the request.

messages?: Array<MessageCreate { content, role, batch_item_id, 5 more } | ApprovalCreate { approval_request_id, approvals, approve, 3 more } > | null

The messages to be sent to the agent.

Accepts one of the following:
MessageCreate { content, role, batch_item_id, 5 more }

Request to create a message

content: Array<LettaMessageContentUnion> | string

The content of the message.

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
string
role: "user" | "system" | "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id?: string | null

The id of the LLMBatchItem that this message is associated with

group_id?: string | null

The multi-agent group that the message was sent in

name?: string | null

The name of the participant.

otid?: string | null

The offline threading id associated with this message

sender_id?: string | null

The id of the sender of the message, can be an identity id or agent id

type?: "message" | null

The message type to be created.

Accepts one of the following:
"message"
ApprovalCreate { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

Deprecatedapproval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecatedapprove?: boolean | null

Whether the tool has been approved

group_id?: string | null

The multi-agent group that the message was sent in

Deprecatedreason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
Deprecateduse_assistant_message?: boolean

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
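Two request-body shapes from the schema above are worth seeing side by side: the `input` shorthand versus an explicit `messages` array, and an `ApprovalCreate` entry used to answer a pending approval request. These are plain object literals only; the tool call ID is a hypothetical placeholder.

```typescript
// The `input` field is sugar for a single user message...
const viaInput = { input: "What's the weather in SF?" };

// ...equivalent to spelling out `messages` explicitly:
const viaMessages = {
  messages: [{ role: "user" as const, content: "What's the weather in SF?" }],
};

// Approving a pending tool call uses an ApprovalCreate entry in `messages`.
// "tc-123" is a hypothetical ID taken from a prior ApprovalRequestMessage.
const approval = {
  messages: [
    {
      type: "approval" as const,
      approvals: [
        { approve: true, tool_call_id: "tc-123", reason: "Safe read-only call" },
      ],
    },
  ],
};
```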

LettaResponse { messages, stop_reason, usage }

Response object from an agent interaction, consisting of the new messages generated by the agent and usage statistics. The type of the returned messages can be either Message or LettaMessage, depending on what was specified in the request.

Attributes:
- messages (List[Union[Message, LettaMessage]]): The messages returned by the agent
- usage (LettaUsageStatistics): The usage statistics

messages: Array<LettaMessageUnion>

The messages returned by the agent.

Accepts one of the following:
SystemMessage { id, content, date, 8 more }

A message generated by the system. Never streamed back on a response, only used for cursor pagination.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- content (str): The message content sent by the system

id: string
content: string

The message content sent by the system

date: string
is_err?: boolean | null
message_type?: "system_message"

The type of the message.

Accepts one of the following:
"system_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
UserMessage { id, content, date, 8 more }

A message sent by the user. Never streamed back on a response, only used for cursor pagination.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)

id: string
content: Array<LettaUserMessageContentUnion> | string

The message content sent by the user (can be a string or an array of multi-modal content parts)

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
string
date: string
is_err?: boolean | null
message_type?: "user_message"

The type of the message.

Accepts one of the following:
"user_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ReasoningMessage { id, date, reasoning, 10 more }

Representation of an agent's internal reasoning.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
- reasoning (str): The internal reasoning of the agent
- signature (Optional[str]): The model-generated signature of the reasoning step

id: string
date: string
reasoning: string
is_err?: boolean | null
message_type?: "reasoning_message"

The type of the message.

Accepts one of the following:
"reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
signature?: string | null
source?: "reasoner_model" | "non_reasoner_model"
Accepts one of the following:
"reasoner_model"
"non_reasoner_model"
step_id?: string | null
HiddenReasoningMessage { id, date, state, 9 more }

Representation of an agent's internal reasoning where reasoning content has been hidden from the response.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
- hidden_reasoning (Optional[str]): The internal reasoning of the agent

id: string
date: string
state: "redacted" | "omitted"
Accepts one of the following:
"redacted"
"omitted"
hidden_reasoning?: string | null
is_err?: boolean | null
message_type?: "hidden_reasoning_message"

The type of the message.

Accepts one of the following:
"hidden_reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ToolCallMessage { id, date, tool_call, 9 more }

A message representing a request to call a tool (generated by the LLM to trigger tool execution).

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- tool_call (Union[ToolCall, ToolCallDelta]): The tool call

id: string
date: string
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }
Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "tool_call_message"

The type of the message.

Accepts one of the following:
"tool_call_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null
Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ToolReturnMessage { id, date, status, 13 more }

A message representing the return value of a tool call (generated by Letta executing the requested tool).

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- tool_return (str): The return value of the tool (deprecated, use tool_returns)
- status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns)
- tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns)
- stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns)
- stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns)
- tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support

id: string
date: string
Deprecatedstatus: "success" | "error"
Accepts one of the following:
"success"
"error"
Deprecatedtool_call_id: string
Deprecatedtool_return: string
is_err?: boolean | null
message_type?: "tool_return_message"

The type of the message.

Accepts one of the following:
"tool_return_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
Deprecatedstderr?: Array<string> | null
Deprecatedstdout?: Array<string> | null
step_id?: string | null
tool_returns?: Array<ToolReturn { status, tool_call_id, tool_return, 3 more } > | null
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
AssistantMessage { id, content, date, 8 more }

A message sent by the LLM in response to user input. Used in the LLM context.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)

id: string
content: Array<LettaAssistantMessageContentUnion { text, signature, type } > | string

The message content sent by the agent (can be a string or an array of content parts)

Accepts one of the following:
Array<LettaAssistantMessageContentUnion { text, signature, type } >
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
string
date: string
is_err?: boolean | null
message_type?: "assistant_message"

The type of the message.

Accepts one of the following:
"assistant_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ApprovalRequestMessage { id, date, tool_call, 9 more }

A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- tool_call (ToolCall): The tool call

id: string
date: string
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }

The tool call that has been requested by the LLM to run

Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "approval_request_message"

The type of the message.

Accepts one of the following:
"approval_request_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null

The tool calls that have been requested by the LLM to run, which are pending approval

Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ApprovalResponseMessage { id, date, approval_request_id, 11 more }

A message representing a response from the user indicating whether a tool has been approved to run.

Args:
- id (str): The ID of the message
- date (datetime): The date the message was created in ISO format
- name (Optional[str]): The name of the sender of the message
- approve (bool): Whether the tool has been approved
- approval_request_id: The ID of the approval request
- reason (Optional[str]): An optional explanation for the provided approval status

id: string
date: string
Deprecatedapproval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecatedapprove?: boolean | null

Whether the tool has been approved

is_err?: boolean | null
message_type?: "approval_response_message"

The type of the message.

Accepts one of the following:
"approval_response_message"
name?: string | null
otid?: string | null
Deprecatedreason?: string | null

An optional explanation for the provided approval status

run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
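A minimal sketch of building the non-deprecated `approvals` list described above. The interface below is a local stand-in mirroring the documented fields, not an SDK import, and the tool call IDs are placeholders.

```typescript
// Local stand-in for the documented ApprovalReturn shape (assumption:
// mirrors the fields listed above; not imported from the SDK).
interface ApprovalReturn {
  approve: boolean;
  tool_call_id: string;
  reason?: string | null;
  type?: "approval";
}

// Approve one pending tool call and deny another in a single response,
// using the `approvals` list rather than the deprecated `approve` flag.
const approvals: ApprovalReturn[] = [
  { approve: true, tool_call_id: "call_001", type: "approval" },
  {
    approve: false,
    tool_call_id: "call_002",
    reason: "touches production data",
    type: "approval",
  },
];

// Collect the IDs that were denied, e.g. for logging.
const denied = approvals.filter((a) => !a.approve).map((a) => a.tool_call_id);
```

Because each entry carries its own `tool_call_id`, a single response can resolve several pending approval requests at once.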
SummaryMessage { id, date, summary, 8 more }

A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.

id: string
date: string
summary: string
is_err?: boolean | null
message_type?: "summary"
Accepts one of the following:
"summary"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
EventMessage { id, date, event_data, 9 more }

A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.

id: string
date: string
event_data: Record<string, unknown>
event_type: "compaction"
Accepts one of the following:
"compaction"
is_err?: boolean | null
message_type?: "event"
Accepts one of the following:
"event"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
stop_reason: StopReason { stop_reason, message_type }

The stop reason from Letta indicating why the agent loop stopped execution.

stop_reason: StopReasonType

The reason why execution stopped.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
message_type?: "stop_reason"

The type of the message.

Accepts one of the following:
"stop_reason"
usage: Usage { completion_tokens, message_type, prompt_tokens, 3 more }

The usage statistics of the agent.

completion_tokens?: number

The number of tokens generated by the agent.

message_type?: "usage_statistics"
Accepts one of the following:
"usage_statistics"
prompt_tokens?: number

The number of tokens in the prompt.

run_ids?: Array<string> | null

The background task run IDs associated with the agent interaction

step_count?: number

The number of steps taken by the agent.

total_tokens?: number

The total number of tokens processed by the agent.
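The usage fields above compose additively across turns. As an illustration, here is a hypothetical reducer that totals usage over several agent interactions; the `Usage` interface is a local stand-in mirroring the documented fields, not an SDK import.

```typescript
// Local stand-in for the documented Usage shape (assumption: mirrors
// the optional numeric fields listed above).
interface Usage {
  completion_tokens?: number;
  prompt_tokens?: number;
  total_tokens?: number;
  step_count?: number;
}

// Sum usage statistics across multiple turns, treating missing
// fields as zero since all of them are optional.
function sumUsage(turns: Usage[]): Usage {
  return turns.reduce<Usage>(
    (acc, u) => ({
      completion_tokens: (acc.completion_tokens ?? 0) + (u.completion_tokens ?? 0),
      prompt_tokens: (acc.prompt_tokens ?? 0) + (u.prompt_tokens ?? 0),
      total_tokens: (acc.total_tokens ?? 0) + (u.total_tokens ?? 0),
      step_count: (acc.step_count ?? 0) + (u.step_count ?? 0),
    }),
    {},
  );
}

const total = sumUsage([
  { prompt_tokens: 120, completion_tokens: 30, total_tokens: 150, step_count: 1 },
  { prompt_tokens: 200, completion_tokens: 50, total_tokens: 250, step_count: 2 },
]);
```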

LettaStreamingRequest { assistant_message_tool_kwarg, assistant_message_tool_name, background, 9 more }
Deprecatedassistant_message_tool_kwarg?: string

The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Deprecatedassistant_message_tool_name?: string

The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

background?: boolean

Whether to process the request in the background (only used when streaming=true).

Deprecatedenable_thinking?: string

If set to True, enables reasoning before responses or tool calls from the agent.

include_pings?: boolean

Whether to include periodic keepalive ping messages in the stream to prevent connection timeouts (only used when streaming=true).

include_return_message_types?: Array<MessageType> | null

Only return specified message types in the response. If None (default) returns all messages.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
input?: string | Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null

Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].

Accepts one of the following:
string
Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more>
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoningContent { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: Array<Summary>

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content?: string

The encrypted reasoning content.

type?: "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
max_steps?: number

Maximum number of steps the agent should take to process the request.

messages?: Array<MessageCreate { content, role, batch_item_id, 5 more } | ApprovalCreate { approval_request_id, approvals, approve, 3 more } > | null

The messages to be sent to the agent.

Accepts one of the following:
MessageCreate { content, role, batch_item_id, 5 more }

Request to create a message

content: Array<LettaMessageContentUnion> | string

The content of the message.

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
string
role: "user" | "system" | "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id?: string | null

The id of the LLMBatchItem that this message is associated with

group_id?: string | null

The multi-agent group that the message was sent in

name?: string | null

The name of the participant.

otid?: string | null

The offline threading id associated with this message

sender_id?: string | null

The id of the sender of the message, can be an identity id or agent id

type?: "message" | null

The message type to be created.

Accepts one of the following:
"message"
ApprovalCreate { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

Deprecatedapproval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecatedapprove?: boolean | null

Whether the tool has been approved

group_id?: string | null

The multi-agent group that the message was sent in

Deprecatedreason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
stream_tokens?: boolean

Flag to determine if individual tokens should be streamed, rather than streaming per step (only used when streaming=true).

streaming?: boolean

If True, returns a streaming response (Server-Sent Events). If False (default), returns a complete response.

Deprecateduse_assistant_message?: boolean

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
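Pulling the streaming-related options above together, here is a hedged sketch of a request body that uses only the non-deprecated fields; the prompt text and step cap are illustrative values, not recommendations.

```typescript
// A streaming request body sketch using the documented option names.
const streamingBody = {
  input: "Summarize my unread email.", // sugar for a single user message
  streaming: true,       // return Server-Sent Events instead of a full response
  stream_tokens: true,   // stream per token rather than per step
  include_pings: true,   // keepalive pings to prevent SSE timeouts
  background: false,     // process in the foreground
  max_steps: 10,         // cap on agent steps for this request
  // Only surface these message types in the stream:
  include_return_message_types: ["assistant_message", "tool_return_message"],
};
```

Note that `stream_tokens`, `include_pings`, and `background` are documented as only taking effect when `streaming` is true.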

LettaStreamingResponse = SystemMessage { id, content, date, 8 more } | UserMessage { id, content, date, 8 more } | ReasoningMessage { id, date, reasoning, 10 more } | 9 more

Streaming response type for Server-Sent Events (SSE) endpoints. Each event in the stream will be one of these types.

Accepts one of the following:
SystemMessage { id, content, date, 8 more }

A message generated by the system. Never streamed back on a response, only used for cursor pagination.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    content (str): The message content sent by the system

id: string
content: string

The message content sent by the system

date: string
is_err?: boolean | null
message_type?: "system_message"

The type of the message.

Accepts one of the following:
"system_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
UserMessage { id, content, date, 8 more }

A message sent by the user. Never streamed back on a response, only used for cursor pagination.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)

id: string
content: Array<LettaUserMessageContentUnion> | string

The message content sent by the user (can be a string or an array of multi-modal content parts)

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
string
date: string
is_err?: boolean | null
message_type?: "user_message"

The type of the message.

Accepts one of the following:
"user_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ReasoningMessage { id, date, reasoning, 10 more }

Representation of an agent's internal reasoning.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
    reasoning (str): The internal reasoning of the agent
    signature (Optional[str]): The model-generated signature of the reasoning step

id: string
date: string
reasoning: string
is_err?: boolean | null
message_type?: "reasoning_message"

The type of the message.

Accepts one of the following:
"reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
signature?: string | null
source?: "reasoner_model" | "non_reasoner_model"
Accepts one of the following:
"reasoner_model"
"non_reasoner_model"
step_id?: string | null
HiddenReasoningMessage { id, date, state, 9 more }

Representation of an agent's internal reasoning where reasoning content has been hidden from the response.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
    hidden_reasoning (Optional[str]): The internal reasoning of the agent

id: string
date: string
state: "redacted" | "omitted"
Accepts one of the following:
"redacted"
"omitted"
hidden_reasoning?: string | null
is_err?: boolean | null
message_type?: "hidden_reasoning_message"

The type of the message.

Accepts one of the following:
"hidden_reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ToolCallMessage { id, date, tool_call, 9 more }

A message representing a request to call a tool (generated by the LLM to trigger tool execution).

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    tool_call (Union[ToolCall, ToolCallDelta]): The tool call

id: string
date: string
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }
Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "tool_call_message"

The type of the message.

Accepts one of the following:
"tool_call_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null
Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ToolReturnMessage { id, date, status, 13 more }

A message representing the return value of a tool call (generated by Letta executing the requested tool).

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    tool_return (str): The return value of the tool (deprecated, use tool_returns)
    status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns)
    tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns)
    stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns)
    stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns)
    tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support

id: string
date: string
Deprecatedstatus: "success" | "error"
Accepts one of the following:
"success"
"error"
Deprecatedtool_call_id: string
Deprecatedtool_return: string
is_err?: boolean | null
message_type?: "tool_return_message"

The type of the message.

Accepts one of the following:
"tool_return_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
Deprecatedstderr?: Array<string> | null
Deprecatedstdout?: Array<string> | null
step_id?: string | null
tool_returns?: Array<ToolReturn { status, tool_call_id, tool_return, 3 more } > | null
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
AssistantMessage { id, content, date, 8 more }

A message sent by the LLM in response to user input. Used in the LLM context.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)

id: string
content: Array<LettaAssistantMessageContentUnion { text, signature, type } > | string

The message content sent by the agent (can be a string or an array of content parts)

Accepts one of the following:
Array<LettaAssistantMessageContentUnion { text, signature, type } >
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
string
date: string
is_err?: boolean | null
message_type?: "assistant_message"

The type of the message.

Accepts one of the following:
"assistant_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ApprovalRequestMessage { id, date, tool_call, 9 more }

A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    tool_call (ToolCall): The tool call

id: string
date: string
Deprecatedtool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }

The tool call that has been requested by the LLM to run

Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "approval_request_message"

The type of the message.

Accepts one of the following:
"approval_request_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null

The tool calls that have been requested by the LLM to run, which are pending approval

Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ApprovalResponseMessage { id, date, approval_request_id, 11 more }

A message representing a response from the user indicating whether a tool has been approved to run.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    approve (bool): Whether the tool has been approved
    approval_request_id (str): The ID of the approval request
    reason (Optional[str]): An optional explanation for the provided approval status

id: string
date: string
Deprecatedapproval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecatedapprove?: boolean | null

Whether the tool has been approved

is_err?: boolean | null
message_type?: "approval_response_message"

The type of the message.

Accepts one of the following:
"approval_response_message"
name?: string | null
otid?: string | null
Deprecatedreason?: string | null

An optional explanation for the provided approval status

run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
LettaPing { message_type }

Ping messages are a keep-alive to prevent SSE streams from timing out during long-running requests.

message_type: "ping"

The type of the message.

Accepts one of the following:
"ping"
LettaStopReason { stop_reason, message_type }

The stop reason from Letta indicating why the agent loop stopped execution.

stop_reason: StopReasonType

The reason why execution stopped.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
message_type?: "stop_reason"

The type of the message.

Accepts one of the following:
"stop_reason"
LettaUsageStatistics { completion_tokens, message_type, prompt_tokens, 3 more }

Usage statistics for the agent interaction.

Attributes:
    completion_tokens (int): The number of tokens generated by the agent.
    prompt_tokens (int): The number of tokens in the prompt.
    total_tokens (int): The total number of tokens processed by the agent.
    step_count (int): The number of steps taken by the agent.

completion_tokens?: number

The number of tokens generated by the agent.

message_type?: "usage_statistics"
Accepts one of the following:
"usage_statistics"
prompt_tokens?: number

The number of tokens in the prompt.

run_ids?: Array<string> | null

The background task run IDs associated with the agent interaction

step_count?: number

The number of steps taken by the agent.

total_tokens?: number

The total number of tokens processed by the agent.
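Every streamed event above carries a `message_type` discriminator, so consumers can narrow the union with a plain `switch`. A sketch, using local stand-in types that mirror a few of the documented variants rather than the full SDK union:

```typescript
// Stand-in subset of the LettaStreamingResponse union, keyed on the
// documented message_type discriminator.
type StreamEvent =
  | { message_type: "assistant_message"; content: string }
  | { message_type: "stop_reason"; stop_reason: string }
  | { message_type: "usage_statistics"; total_tokens?: number }
  | { message_type: "ping" };

// Render a one-line description of each event; the switch is
// exhaustive over the stand-in union, so TypeScript narrows each arm.
function describe(ev: StreamEvent): string {
  switch (ev.message_type) {
    case "assistant_message":
      return `agent: ${ev.content}`;
    case "stop_reason":
      return `stopped: ${ev.stop_reason}`;
    case "usage_statistics":
      return `tokens: ${ev.total_tokens ?? 0}`;
    case "ping":
      return "keepalive"; // safe to ignore; only prevents SSE timeouts
  }
}
```

In a real consumer the same dispatch would cover the remaining variants (reasoning, tool call, tool return, approval messages) listed above.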

LettaUserMessageContentUnion = TextContent { text, signature, type } | ImageContent { source, type }
Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
Message { id, role, agent_id, 21 more }
Letta's internal representation of a message. Includes methods to convert to/from LLM provider formats.

Attributes:
    id (str): The unique identifier of the message.
    role (MessageRole): The role of the participant.
    text (str): The text of the message.
    user_id (str): The unique identifier of the user.
    agent_id (str): The unique identifier of the agent.
    model (str): The model used to make the function call.
    name (str): The name of the participant.
    created_at (datetime): The time the message was created.
    tool_calls (List[OpenAIToolCall]): The list of tool calls requested.
    tool_call_id (str): The id of the tool call.
    step_id (str): The id of the step that this message was created in.
    otid (str): The offline threading id associated with this message.
    tool_returns (List[ToolReturn]): The list of tool returns requested.
    group_id (str): The multi-agent group that the message was sent in.
    sender_id (str): The id of the sender of the message, can be an identity id or agent id.


id: string

The human-friendly ID of the Message

role?: "assistant" | "user" | "tool" | 3 more

The role of the participant.

Accepts one of the following:
"assistant"
"user"
"tool"
"function"
"system"
"approval"
agent_id?: string | null

The unique identifier of the agent.

approval_request_id?: string | null

The id of the approval request if this message is associated with a tool call request.

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | LettaSchemasMessageToolReturn { status, func_response, stderr, 2 more } > | null

The list of approvals for this message.

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
LettaSchemasMessageToolReturn { status, func_response, stderr, 2 more }
status: "success" | "error"

The status of the tool call

Accepts one of the following:
"success"
"error"
func_response?: string | null

The function response string

stderr?: Array<string> | null

Captured stderr from the tool invocation

stdout?: Array<string> | null

Captured stdout (e.g. prints, logs) from the tool invocation

tool_call_id?: unknown

The ID for the tool call

approve?: boolean | null

Whether tool call is approved.

batch_item_id?: string | null

The id of the LLMBatchItem that this message is associated with

content?: Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null

The content of the message.

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoningContent { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: Array<Summary>

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content?: string

The encrypted reasoning content.

type?: "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
created_at?: string

The timestamp when the object was created.

format: date-time
created_by_id?: string | null

The id of the user that made this object.

denial_reason?: string | null

The reason the tool call request was denied.

group_id?: string | null

The multi-agent group that the message was sent in

is_err?: boolean | null

Whether this message is part of an error step. Used only for debugging purposes.

last_updated_by_id?: string | null

The id of the user that made this object.

model?: string | null

The model used to make the function call.

name?: string | null

For role user/assistant: the (optional) name of the participant. For role tool/function: the name of the function called.

otid?: string | null

The offline threading id associated with this message

run_id?: string | null

The id of the run that this message was created in.

sender_id?: string | null

The id of the sender of the message, can be an identity id or agent id

step_id?: string | null

The id of the step that this message was created in.

tool_call_id?: string | null

The ID of the tool call. Only applicable for role tool.

tool_calls?: Array<ToolCall> | null

The list of tool calls requested. Only applicable for role assistant.

id: string
function: Function { arguments, name }
arguments: string
name: string
type: "function"
Accepts one of the following:
"function"
tool_returns?: Array<ToolReturn> | null

Tool execution return information for prior tool calls

status: "success" | "error"

The status of the tool call

Accepts one of the following:
"success"
"error"
func_response?: string | null

The function response string

stderr?: Array<string> | null

Captured stderr from the tool invocation

stdout?: Array<string> | null

Captured stdout (e.g. prints, logs) from the tool invocation

tool_call_id?: unknown

The ID for the tool call

updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
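Note that on a Message, `tool_calls[].function.arguments` is a JSON-encoded string rather than an object, so it must be parsed before dispatching to a tool. A sketch under that assumption (the message literal is fabricated for illustration):

```typescript
// A Message-shaped object with one tool call; arguments arrive as a JSON string.
const message = {
  id: "message-123",
  role: "assistant",
  tool_calls: [
    {
      id: "call-1",
      type: "function",
      function: { name: "send_message", arguments: "{\"message\":\"hi\"}" },
    },
  ],
};

// Decode each call's arguments before handing them to a tool implementation.
const parsed = (message.tool_calls ?? []).map((tc) => ({
  name: tc.function.name,
  args: JSON.parse(tc.function.arguments) as Record<string, unknown>,
}));
```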
MessageRole = "assistant" | "user" | "tool" | 3 more
Accepts one of the following:
"assistant"
"user"
"tool"
"function"
"system"
"approval"
MessageType = "system_message" | "user_message" | "assistant_message" | 6 more
Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
ReasoningMessage { id, date, reasoning, 10 more }

Representation of an agent's internal reasoning.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
    reasoning (str): The internal reasoning of the agent
    signature (Optional[str]): The model-generated signature of the reasoning step

id: string
date: string
reasoning: string
is_err?: boolean | null
message_type?: "reasoning_message"

The type of the message.

Accepts one of the following:
"reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
signature?: string | null
source?: "reasoner_model" | "non_reasoner_model"
Accepts one of the following:
"reasoner_model"
"non_reasoner_model"
step_id?: string | null
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
Run { id, agent_id, background, 13 more }

Representation of a run - a conversation or processing session for an agent. Runs track when agents process messages and maintain the relationship between agents, steps, and messages.

id: string

The human-friendly ID of the Run

agent_id: string

The unique identifier of the agent associated with the run.

background?: boolean | null

Whether the run was created in background mode.

base_template_id?: string | null

The base template ID that the run belongs to.

callback_error?: string | null

Optional error message from attempting to POST the callback endpoint.

callback_sent_at?: string | null

Timestamp when the callback was last attempted.

format: date-time
callback_status_code?: number | null

HTTP status code returned by the callback endpoint.

callback_url?: string | null

If set, POST to this URL when the run completes.

completed_at?: string | null

The timestamp when the run was completed.

format: date-time
created_at?: string

The timestamp when the run was created.

format: date-time
metadata?: Record<string, unknown> | null

Additional metadata for the run.

request_config?: RequestConfig | null

The request configuration for the run.

assistant_message_tool_kwarg?: string

The name of the message argument in the designated message tool.

assistant_message_tool_name?: string

The name of the designated message tool.

include_return_message_types?: Array<MessageType> | null

Only return specified message types in the response. If None (default) returns all messages.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
use_assistant_message?: boolean

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects.

status?: "created" | "running" | "completed" | 2 more

The current status of the run.

Accepts one of the following:
"created"
"running"
"completed"
"failed"
"cancelled"
stop_reason?: StopReasonType | null

The reason why the run was stopped.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
total_duration_ns?: number | null

Total run duration in nanoseconds

ttft_ns?: number | null

Time to first token for a run in nanoseconds
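Since `total_duration_ns` and `ttft_ns` are reported in nanoseconds, converting to milliseconds is a simple division. A sketch (the run literal is fabricated for illustration):

```typescript
// Durations on a Run are nanoseconds; divide by 1e6 for milliseconds.
const run = {
  id: "run-1",
  agent_id: "agent-1",
  status: "completed",
  total_duration_ns: 2_500_000_000,
  ttft_ns: 350_000_000,
};

const toMs = (ns: number | null | undefined): number | null =>
  ns == null ? null : ns / 1_000_000;

const totalMs = toMs(run.total_duration_ns); // 2500 ms
const ttftMs = toMs(run.ttft_ns); // 350 ms
```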

SummaryMessage { id, date, summary, 8 more }

A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.

id: string
date: string
summary: string
is_err?: boolean | null
message_type?: "summary"
Accepts one of the following:
"summary"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
SystemMessage { id, content, date, 8 more }

A message generated by the system. Never streamed back on a response, only used for cursor pagination.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    content (str): The message content sent by the system

id: string
content: string

The message content sent by the system

date: string
is_err?: boolean | null
message_type?: "system_message"

The type of the message.

Accepts one of the following:
"system_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ToolCallMessage { id, date, tool_call, 9 more }

A message representing a request to call a tool (generated by the LLM to trigger tool execution).

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    tool_call (Union[ToolCall, ToolCallDelta]): The tool call

id: string
date: string
Deprecated tool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }
Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "tool_call_message"

The type of the message.

Accepts one of the following:
"tool_call_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null
Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
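When a ToolCallMessage is streamed, it may carry `ToolCallDelta` fragments whose `arguments` arrive piecewise; concatenating the fragments in order reconstructs the complete call. A sketch of that accumulation (the delta values are fabricated for illustration):

```typescript
// Shape mirroring the documented ToolCallDelta.
type ToolCallDelta = { arguments?: string | null; name?: string | null; tool_call_id?: string | null };

// Fold streamed deltas into one complete tool call: first non-null id/name win,
// argument fragments are concatenated in arrival order.
function accumulate(deltas: ToolCallDelta[]) {
  return deltas.reduce(
    (acc, d) => ({
      tool_call_id: acc.tool_call_id ?? d.tool_call_id ?? null,
      name: acc.name ?? d.name ?? null,
      arguments: (acc.arguments ?? "") + (d.arguments ?? ""),
    }),
    { tool_call_id: null as string | null, name: null as string | null, arguments: "" },
  );
}

const call = accumulate([
  { tool_call_id: "call-1", name: "send_message", arguments: "{\"mess" },
  { arguments: "age\":\"hi\"}" },
]);
// call.arguments is now the complete JSON string '{"message":"hi"}'
```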
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
UpdateAssistantMessage { content, message_type }
content: Array<LettaAssistantMessageContentUnion { text, signature, type } > | string

The message content sent by the assistant (can be a string or an array of content parts)

Accepts one of the following:
Array<LettaAssistantMessageContentUnion { text, signature, type } >
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
string
message_type?: "assistant_message"
Accepts one of the following:
"assistant_message"
UpdateReasoningMessage { reasoning, message_type }
reasoning: string
message_type?: "reasoning_message"
Accepts one of the following:
"reasoning_message"
UpdateSystemMessage { content, message_type }
content: string

The message content sent by the system (can be a string or an array of multi-modal content parts)

message_type?: "system_message"
Accepts one of the following:
"system_message"
UpdateUserMessage { content, message_type }
content: Array<LettaUserMessageContentUnion> | string

The message content sent by the user (can be a string or an array of multi-modal content parts)

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
string
message_type?: "user_message"
Accepts one of the following:
"user_message"
UserMessage { id, content, date, 8 more }

A message sent by the user. Never streamed back on a response, only used for cursor pagination.

Args:
    id (str): The ID of the message
    date (datetime): The date the message was created in ISO format
    name (Optional[str]): The name of the sender of the message
    content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)

id: string
content: Array<LettaUserMessageContentUnion> | string

The message content sent by the user (can be a string or an array of multi-modal content parts)

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
string
date: string
is_err?: boolean | null
message_type?: "user_message"

The type of the message.

Accepts one of the following:
"user_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null

AgentsBlocks

Retrieve Block For Agent
client.agents.blocks.retrieve(stringblockLabel, BlockRetrieveParams { agent_id } params, RequestOptionsoptions?): BlockResponse { id, value, base_template_id, 15 more }
get/v1/agents/{agent_id}/core-memory/blocks/{block_label}
Modify Block For Agent
client.agents.blocks.modify(stringblockLabel, BlockModifyParams { agent_id, base_template_id, deployment_id, 13 more } params, RequestOptionsoptions?): BlockResponse { id, value, base_template_id, 15 more }
patch/v1/agents/{agent_id}/core-memory/blocks/{block_label}
List Blocks For Agent
client.agents.blocks.list(stringagentID, BlockListParams { after, before, limit, 2 more } query?, RequestOptionsoptions?): ArrayPage<BlockResponse { id, value, base_template_id, 15 more } >
get/v1/agents/{agent_id}/core-memory/blocks
Attach Block To Agent
client.agents.blocks.attach(stringblockID, BlockAttachParams { agent_id } params, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more }
patch/v1/agents/{agent_id}/core-memory/blocks/attach/{block_id}
Detach Block From Agent
client.agents.blocks.detach(stringblockID, BlockDetachParams { agent_id } params, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more }
patch/v1/agents/{agent_id}/core-memory/blocks/detach/{block_id}
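A common pattern with the endpoints above is a read-modify-write on a block's `value`: retrieve the block by label, then patch it back. A minimal sketch, assuming a configured SDK client is passed in (`appendToBlock` and the narrow `BlocksApi` interface are illustrative, not SDK exports):

```typescript
// Minimal view of the documented blocks methods, enough to type the helper.
interface BlocksApi {
  retrieve(label: string, params: { agent_id: string }): Promise<{ value: string }>;
  modify(label: string, params: { agent_id: string; value: string }): Promise<{ value: string }>;
}

// Append text to an agent's core-memory block via retrieve + modify.
async function appendToBlock(
  client: { agents: { blocks: BlocksApi } },
  agentId: string,
  blockLabel: string,
  extra: string,
): Promise<{ value: string }> {
  const block = await client.agents.blocks.retrieve(blockLabel, { agent_id: agentId });
  return client.agents.blocks.modify(blockLabel, {
    agent_id: agentId,
    value: block.value + extra,
  });
}
```

Note this is not atomic: a concurrent writer between the retrieve and the modify would be overwritten.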
ModelsExpand Collapse
Block { value, id, base_template_id, 15 more }

A Block represents a reserved section of the LLM's context window.

value: string

Value of the block.

id?: string

The human-friendly ID of the Block

base_template_id?: string | null

The base template id of the block.

created_by_id?: string | null

The id of the user that made this Block.

deployment_id?: string | null

The id of the deployment.

description?: string | null

Description of the block.

entity_id?: string | null

The id of the entity within the template.

hidden?: boolean | null

If set to True, the block will be hidden.

is_template?: boolean

Whether the block is a template (e.g. saved human/persona options).

label?: string | null

Label of the block (e.g. 'human', 'persona') in the context window.

last_updated_by_id?: string | null

The id of the user that last updated this Block.

limit?: number

Character limit of the block.

metadata?: Record<string, unknown> | null

Metadata of the block.

preserve_on_migration?: boolean | null

Preserve the block on template migration.

project_id?: string | null

The associated project id.

read_only?: boolean

Whether the agent has read-only access to the block.

template_id?: string | null

The id of the template.

template_name?: string | null

Name of the block if it is a template.

BlockModify { base_template_id, deployment_id, description, 12 more }

Update a block

base_template_id?: string | null

The base template id of the block.

deployment_id?: string | null

The id of the deployment.

description?: string | null

Description of the block.

entity_id?: string | null

The id of the entity within the template.

hidden?: boolean | null

If set to True, the block will be hidden.

is_template?: boolean

Whether the block is a template (e.g. saved human/persona options).

label?: string | null

Label of the block (e.g. 'human', 'persona') in the context window.

limit?: number | null

Character limit of the block.

metadata?: Record<string, unknown> | null

Metadata of the block.

preserve_on_migration?: boolean | null

Preserve the block on template migration.

project_id?: string | null

The associated project id.

read_only?: boolean

Whether the agent has read-only access to the block.

template_id?: string | null

The id of the template.

template_name?: string | null

Name of the block if it is a template.

value?: string | null

Value of the block.
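Because `limit` is a character limit on `value`, a proposed write can be validated client-side before calling modify. A sketch (plain object literal; `fitsBlockLimit` is a hypothetical helper, not an SDK export):

```typescript
// Check a proposed value against a Block's character limit before updating.
// A missing limit is treated as unbounded.
function fitsBlockLimit(block: { value: string; limit?: number | null }, newValue: string): boolean {
  return block.limit == null || newValue.length <= block.limit;
}

const persona = { value: "You are a helpful assistant.", limit: 40 };
const ok = fitsBlockLimit(persona, "A curt but friendly persona."); // 28 chars, within 40
```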

AgentsTools

List Tools For Agent
client.agents.tools.list(stringagentID, ToolListParams { after, before, limit, 2 more } query?, RequestOptionsoptions?): ArrayPage<Tool { id, args_json_schema, created_by_id, 14 more } >
get/v1/agents/{agent_id}/tools
Attach Tool To Agent
client.agents.tools.attach(stringtoolID, ToolAttachParams { agent_id } params, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more } | null
patch/v1/agents/{agent_id}/tools/attach/{tool_id}
Detach Tool From Agent
client.agents.tools.detach(stringtoolID, ToolDetachParams { agent_id } params, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more } | null
patch/v1/agents/{agent_id}/tools/detach/{tool_id}
Modify Approval For Tool
client.agents.tools.updateApproval(stringtoolName, ToolUpdateApprovalParams { agent_id, query_requires_approval } params, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more } | null
patch/v1/agents/{agent_id}/tools/approval/{tool_name}
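`updateApproval` toggles whether calls to a named tool require approval before execution. A minimal sketch with the client passed in (`requireApproval` and the narrow `ToolsApi` interface are illustrative, not SDK exports):

```typescript
// Minimal view of the documented updateApproval method.
interface ToolsApi {
  updateApproval(
    toolName: string,
    params: { agent_id: string; query_requires_approval: boolean },
  ): Promise<unknown>;
}

// Mark a tool as requiring human approval before the agent may execute it.
async function requireApproval(
  client: { agents: { tools: ToolsApi } },
  agentId: string,
  toolName: string,
): Promise<unknown> {
  return client.agents.tools.updateApproval(toolName, {
    agent_id: agentId,
    query_requires_approval: true,
  });
}
```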

AgentsFolders

Attach Folder To Agent
client.agents.folders.attach(stringfolderID, FolderAttachParams { agent_id } params, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more }
patch/v1/agents/{agent_id}/folders/attach/{folder_id}
Detach Folder From Agent
client.agents.folders.detach(stringfolderID, FolderDetachParams { agent_id } params, RequestOptionsoptions?): AgentState { id, agent_type, blocks, 39 more }
patch/v1/agents/{agent_id}/folders/detach/{folder_id}
List Folders For Agent
client.agents.folders.list(stringagentID, FolderListParams { after, before, limit, 2 more } query?, RequestOptionsoptions?): ArrayPage<FolderListResponse { id, embedding_config, name, 8 more } >
get/v1/agents/{agent_id}/folders

AgentsFiles

Close All Files For Agent
client.agents.files.closeAll(stringagentID, RequestOptionsoptions?): FileCloseAllResponse
patch/v1/agents/{agent_id}/files/close-all
Open File For Agent
client.agents.files.open(stringfileID, FileOpenParams { agent_id } params, RequestOptionsoptions?): FileOpenResponse
patch/v1/agents/{agent_id}/files/{file_id}/open
Close File For Agent
client.agents.files.close(stringfileID, FileCloseParams { agent_id } params, RequestOptionsoptions?): FileCloseResponse
patch/v1/agents/{agent_id}/files/{file_id}/close
List Files For Agent
client.agents.files.list(stringagentID, FileListParams { after, before, cursor, 4 more } query?, RequestOptionsoptions?): NextFilesPage<FileListResponse { id, file_id, file_name, 7 more } >
get/v1/agents/{agent_id}/files

AgentsGroups

List Groups For Agent
client.agents.groups.list(stringagentID, GroupListParams { after, before, limit, 3 more } query?, RequestOptionsoptions?): ArrayPage<Group { id, agent_ids, description, 15 more } >
get/v1/agents/{agent_id}/groups

AgentsArchives

Attach Archive To Agent
client.agents.archives.attach(stringarchiveID, ArchiveAttachParams { agent_id } params, RequestOptionsoptions?): ArchiveAttachResponse
patch/v1/agents/{agent_id}/archives/attach/{archive_id}
Detach Archive From Agent
client.agents.archives.detach(stringarchiveID, ArchiveDetachParams { agent_id } params, RequestOptionsoptions?): ArchiveDetachResponse
patch/v1/agents/{agent_id}/archives/detach/{archive_id}

AgentsIdentities

Attach Identity To Agent
client.agents.identities.attach(stringidentityID, IdentityAttachParams { agent_id } params, RequestOptionsoptions?): IdentityAttachResponse
patch/v1/agents/{agent_id}/identities/attach/{identity_id}
Detach Identity From Agent
client.agents.identities.detach(stringidentityID, IdentityDetachParams { agent_id } params, RequestOptionsoptions?): IdentityDetachResponse
patch/v1/agents/{agent_id}/identities/detach/{identity_id}