Reset Messages

agents.messages.reset(agent_id: str, **kwargs: MessageResetParams) -> AgentState
PATCH /v1/agents/{agent_id}/reset-messages

Resets the messages for an agent

Parameters
agent_id: str

The ID of the agent in the format 'agent-<UUID>'

minLength: 42
maxLength: 42
add_default_initial_messages: Optional[bool]

If true, adds the default initial messages after resetting.
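
As a quick sketch of how this parameter is passed with the Python SDK (the client setup mirrors the example at the end of this page, and the agent ID is a placeholder):

from letta_client import Letta

client = Letta(
    api_key="My API Key",
)
# Reset the message history, then re-add the default initial messages
agent_state = client.agents.messages.reset(
    agent_id="agent-123e4567-e89b-42d3-8456-426614174000",
    add_default_initial_messages=True,
)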

Returns
class AgentState:

Representation of an agent's state. This is the state of the agent at a given time, and is persisted in the DB backend. The state has all the information needed to recreate a persisted agent.

id: str

The id of the agent. Assigned by the database.

agent_type: AgentType

The type of agent.

Accepts one of the following:
"memgpt_agent"
"memgpt_v2_agent"
"letta_v1_agent"
"react_agent"
"workflow_agent"
"split_thread_agent"
"sleeptime_agent"
"voice_convo_agent"
"voice_sleeptime_agent"
blocks: List[Block]

The memory blocks used by the agent.

value: str

Value of the block.

id: Optional[str]

The human-friendly ID of the Block

base_template_id: Optional[str]

The base template id of the block.

created_by_id: Optional[str]

The id of the user that made this Block.

deployment_id: Optional[str]

The id of the deployment.

description: Optional[str]

Description of the block.

entity_id: Optional[str]

The id of the entity within the template.

hidden: Optional[bool]

If set to True, the block will be hidden.

is_template: Optional[bool]

Whether the block is a template (e.g. saved human/persona options).

label: Optional[str]

Label of the block (e.g. 'human', 'persona') in the context window.

last_updated_by_id: Optional[str]

The id of the user that last updated this Block.

limit: Optional[int]

Character limit of the block.

metadata: Optional[Dict[str, object]]

Metadata of the block.

preserve_on_migration: Optional[bool]

Preserve the block on template migration.

project_id: Optional[str]

The associated project id.

read_only: Optional[bool]

Whether the agent has read-only access to the block.

template_id: Optional[str]

The id of the template.

template_name: Optional[str]

Name of the block if it is a template.
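
Since the returned AgentState carries the full list of memory blocks, here is a minimal sketch of inspecting them after a reset (agent_state refers to the value returned by the call above; the printed summary is purely illustrative):

# Summarize each memory block on the returned AgentState
for block in agent_state.blocks:
    print(f"{block.label}: {len(block.value)} chars (limit={block.limit}, read_only={block.read_only})")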

embedding_config: EmbeddingConfig (Deprecated)

Deprecated: Use embedding field instead. The embedding configuration used by the agent.

embedding_dim: int

The dimension of the embedding.

embedding_endpoint_type: Literal["openai", "anthropic", "bedrock", 16 more]

The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"bedrock"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"pinecone"
embedding_model: str

The model for the embedding.

azure_deployment: Optional[str]

The Azure deployment for the model.

azure_endpoint: Optional[str]

The Azure endpoint for the model.

azure_version: Optional[str]

The Azure version for the model.

batch_size: Optional[int]

The maximum batch size for processing embeddings.

embedding_chunk_size: Optional[int]

The chunk size of the embedding.

embedding_endpoint: Optional[str]

The endpoint for the model (None if local).

handle: Optional[str]

The handle for this config, in the format provider/model-name.

llm_config: LlmConfig (Deprecated)

Deprecated: Use model field instead. The LLM configuration used by the agent.

context_window: int

The context window size for the model.

model: str

LLM model name.

model_endpoint_type: Literal["openai", "anthropic", "google_ai", 18 more]

The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"lmstudio-chatcompletions"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"bedrock"
"deepseek"
"xai"
compatibility_type: Optional[Literal["gguf", "mlx"]]

The framework compatibility type for the model.

Accepts one of the following:
"gguf"
"mlx"
display_name: Optional[str]

A human-friendly display name for the model.

enable_reasoner: Optional[bool]

Whether the model should use extended thinking if it is a 'reasoning'-style model.

frequency_penalty: Optional[float]

Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. From OpenAI: Number between -2.0 and 2.0.

handle: Optional[str]

The handle for this config, in the format provider/model-name.

max_reasoning_tokens: Optional[int]

Configurable thinking budget for extended thinking. Used for enable_reasoner and also for Google Vertex models like Gemini 2.5 Flash. Minimum value is 1024 when used with enable_reasoner.

max_tokens: Optional[int]

The maximum number of tokens to generate. If not set, the model will use its default value.

model_endpoint: Optional[str]

The endpoint for the model.

model_wrapper: Optional[str]

The wrapper for the model.

parallel_tool_calls: Optional[bool]

If set to True, enables parallel tool calling. Defaults to False.

provider_category: Optional[ProviderCategory]

The provider category for the model.

Accepts one of the following:
"base"
"byok"
provider_name: Optional[str]

The provider name for the model.

put_inner_thoughts_in_kwargs: Optional[bool]

Puts 'inner_thoughts' as a kwarg in the function call if this is set to True. This helps with function calling performance and also the generation of inner thoughts.

reasoning_effort: Optional[Literal["minimal", "low", "medium", "high"]]

The reasoning effort to use when generating text with reasoning models.

Accepts one of the following:
"minimal"
"low"
"medium"
"high"
temperature: Optional[float]

The temperature to use when generating text with the model. A higher temperature will result in more random text.

tier: Optional[str]

The cost tier for the model (cloud only).

verbosity: Optional[Literal["low", "medium", "high"]]

Soft control for how verbose model output should be, used for GPT-5 models.

Accepts one of the following:
"low"
"medium"
"high"
memory: Memory (Deprecated)

Deprecated: Use blocks field instead. The in-context memory of the agent.

blocks: List[Block]

Memory blocks contained in the agent's in-context memory

value: str

Value of the block.

id: Optional[str]

The human-friendly ID of the Block

base_template_id: Optional[str]

The base template id of the block.

created_by_id: Optional[str]

The id of the user that made this Block.

deployment_id: Optional[str]

The id of the deployment.

description: Optional[str]

Description of the block.

entity_id: Optional[str]

The id of the entity within the template.

hidden: Optional[bool]

If set to True, the block will be hidden.

is_template: Optional[bool]

Whether the block is a template (e.g. saved human/persona options).

label: Optional[str]

Label of the block (e.g. 'human', 'persona') in the context window.

last_updated_by_id: Optional[str]

The id of the user that last updated this Block.

limit: Optional[int]

Character limit of the block.

metadata: Optional[Dict[str, object]]

Metadata of the block.

preserve_on_migration: Optional[bool]

Preserve the block on template migration.

project_id: Optional[str]

The associated project id.

read_only: Optional[bool]

Whether the agent has read-only access to the block.

template_id: Optional[str]

The id of the template.

template_name: Optional[str]

Name of the block if it is a template.

agent_type: Optional[Union[AgentType, str]]

Agent type controlling prompt rendering.

Accepts one of the following:
Literal["memgpt_agent", "memgpt_v2_agent", "letta_v1_agent", 6 more]
Accepts one of the following:
"memgpt_agent"
"memgpt_v2_agent"
"letta_v1_agent"
"react_agent"
"workflow_agent"
"split_thread_agent"
"sleeptime_agent"
"voice_convo_agent"
"voice_sleeptime_agent"
MemoryAgentTypeUnionMember1 = str
file_blocks: Optional[List[MemoryFileBlock]]

Special blocks representing the agent's in-context memory of an attached file

file_id: str

Unique identifier of the file.

is_open: bool

True if the agent currently has the file open.

source_id: str

Unique identifier of the source.

value: str

Value of the block.

id: Optional[str]

The human-friendly ID of the Block

base_template_id: Optional[str]

The base template id of the block.

created_by_id: Optional[str]

The id of the user that made this Block.

deployment_id: Optional[str]

The id of the deployment.

description: Optional[str]

Description of the block.

entity_id: Optional[str]

The id of the entity within the template.

hidden: Optional[bool]

If set to True, the block will be hidden.

is_template: Optional[bool]

Whether the block is a template (e.g. saved human/persona options).

label: Optional[str]

Label of the block (e.g. 'human', 'persona') in the context window.

last_accessed_at: Optional[datetime]

UTC timestamp of the agent’s most recent access to this file. Any operations from the open, close, or search tools will update this field.

format: date-time
last_updated_by_id: Optional[str]

The id of the user that last updated this Block.

limit: Optional[int]

Character limit of the block.

metadata: Optional[Dict[str, object]]

Metadata of the block.

preserve_on_migration: Optional[bool]

Preserve the block on template migration.

project_id: Optional[str]

The associated project id.

read_only: Optional[bool]

Whether the agent has read-only access to the block.

template_id: Optional[str]

The id of the template.

template_name: Optional[str]

Name of the block if it is a template.

prompt_template: Optional[str]

Deprecated. Ignored for performance.
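
While the deprecated memory field is still returned, its file_blocks list shows which attached files the agent currently has open; a small, guarded sketch (both memory and file_blocks are treated as optional here):

# List the files the agent currently has open via the deprecated memory field
if agent_state.memory and agent_state.memory.file_blocks:
    open_files = [fb.file_id for fb in agent_state.memory.file_blocks if fb.is_open]
    print("Open files:", open_files)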

name: str

The name of the agent.

sources: List[Source]

The sources used by the agent.

id: str

The human-friendly ID of the Source

embedding_config: EmbeddingConfig

The embedding configuration used by the source.

embedding_dim: int

The dimension of the embedding.

embedding_endpoint_type: Literal["openai", "anthropic", "bedrock", 16 more]

The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"bedrock"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"pinecone"
embedding_model: str

The model for the embedding.

azure_deployment: Optional[str]

The Azure deployment for the model.

azure_endpoint: Optional[str]

The Azure endpoint for the model.

azure_version: Optional[str]

The Azure version for the model.

batch_size: Optional[int]

The maximum batch size for processing embeddings.

embedding_chunk_size: Optional[int]

The chunk size of the embedding.

embedding_endpoint: Optional[str]

The endpoint for the model (None if local).

handle: Optional[str]

The handle for this config, in the format provider/model-name.

name: str

The name of the source.

created_at: Optional[datetime]

The timestamp when the source was created.

format: date-time
created_by_id: Optional[str]

The id of the user that made this Source.

description: Optional[str]

The description of the source.

instructions: Optional[str]

Instructions for how to use the source.

last_updated_by_id: Optional[str]

The id of the user that last updated this Source.

metadata: Optional[Dict[str, object]]

Metadata associated with the source.

updated_at: Optional[datetime]

The timestamp when the source was last updated.

format: date-time
vector_db_provider: Optional[VectorDBProvider]

The vector database provider used for this source's passages

Accepts one of the following:
"native"
"tpuf"
"pinecone"
system: str

The system prompt used by the agent.

tags: List[str]

The tags associated with the agent.

tools: List[Tool]

The tools used by the agent.

id: str

The human-friendly ID of the Tool

args_json_schema: Optional[Dict[str, object]]

The args JSON schema of the function.

created_by_id: Optional[str]

The id of the user that made this Tool.

default_requires_approval: Optional[bool]

Default value for whether or not executing this tool requires approval.

description: Optional[str]

The description of the tool.

enable_parallel_execution: Optional[bool]

If set to True, this tool may be executed concurrently with other tools. Defaults to False.

json_schema: Optional[Dict[str, object]]

The JSON schema of the function.

last_updated_by_id: Optional[str]

The id of the user that last updated this Tool.

metadata: Optional[Dict[str, object]]

A dictionary of additional metadata for the tool.

name: Optional[str]

The name of the function.

npm_requirements: Optional[List[NpmRequirement]]

Optional list of npm packages required by this tool.

name: str

Name of the npm package.

minLength: 1
version: Optional[str]

Optional version of the package, following semantic versioning.

pip_requirements: Optional[List[PipRequirement]]

Optional list of pip packages required by this tool.

name: str

Name of the pip package.

minLength: 1
version: Optional[str]

Optional version of the package, following semantic versioning.

return_char_limit: Optional[int]

The maximum number of characters in the response.

maximum: 1000000
minimum: 1
source_code: Optional[str]

The source code of the function.

source_type: Optional[str]

The type of the source code.

tags: Optional[List[str]]

Metadata tags.

tool_type: Optional[ToolType]

The type of the tool.

Accepts one of the following:
"custom"
"letta_core"
"letta_memory_core"
"letta_multi_agent_core"
"letta_sleeptime_core"
"letta_voice_sleeptime_core"
"letta_builtin"
"letta_files_core"
"external_langchain"
"external_composio"
"external_mcp"
base_template_id: Optional[str]

The base template id of the agent.

created_at: Optional[datetime]

The timestamp when the object was created.

format: date-time
created_by_id: Optional[str]

The id of the user that made this object.

deployment_id: Optional[str]

The id of the deployment.

description: Optional[str]

The description of the agent.

embedding: Optional[Embedding]

Schema for defining settings for an embedding model

model: str

The name of the model.

provider: Literal["openai", "ollama"]

The provider of the model.

Accepts one of the following:
"openai"
"ollama"
enable_sleeptime: Optional[bool]

If set to True, memory management will move to a background agent thread.

entity_id: Optional[str]

The id of the entity within the template.

hidden: Optional[bool]

If set to True, the agent will be hidden.

identities: Optional[List[Identity]]

The identities associated with this agent.

id: str

The human-friendly ID of the Identity

agent_ids: List[str] (Deprecated)

The IDs of the agents associated with the identity.

block_ids: List[str] (Deprecated)

The IDs of the blocks associated with the identity.

identifier_key: str

External, user-generated identifier key of the identity.

identity_type: IdentityType

The type of the identity.

Accepts one of the following:
"org"
"user"
"other"
name: str

The name of the identity.

project_id: Optional[str]

The project id of the identity, if applicable.

properties: Optional[List[IdentityProperty]]

List of properties associated with the identity

key: str

The key of the property

type: Literal["string", "number", "boolean", "json"]

The type of the property

Accepts one of the following:
"string"
"number"
"boolean"
"json"
value: Union[str, float, bool, Dict[str, object]]

The value of the property

Accepts one of the following:
ValueUnionMember0 = str
ValueUnionMember1 = float
ValueUnionMember2 = bool
ValueUnionMember3 = Dict[str, object]
identity_ids: Optional[List[str]] (Deprecated)

Deprecated: Use identities field instead. The ids of the identities associated with this agent.

last_run_completion: Optional[datetime]

The timestamp when the agent last completed a run.

format: date-time
last_run_duration_ms: Optional[int]

The duration in milliseconds of the agent's last run.

last_stop_reason: Optional[StopReasonType]

The stop reason from the agent's last run.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
last_updated_by_id: Optional[str]

The id of the user that last updated this object.

managed_group: Optional[Group]

The multi-agent group that this agent manages

id: str

The id of the group. Assigned by the database.

agent_ids: List[str]
description: str
manager_type: ManagerType
Accepts one of the following:
"round_robin"
"supervisor"
"dynamic"
"sleeptime"
"voice_sleeptime"
"swarm"
base_template_id: Optional[str]

The base template id.

deployment_id: Optional[str]

The id of the deployment.

hidden: Optional[bool]

If set to True, the group will be hidden.

last_processed_message_id: Optional[str]
manager_agent_id: Optional[str]
max_message_buffer_length: Optional[int]

The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.

max_turns: Optional[int]
min_message_buffer_length: Optional[int]

The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.

project_id: Optional[str]

The associated project id.

shared_block_ids: Optional[List[str]] (Deprecated)
sleeptime_agent_frequency: Optional[int]
template_id: Optional[str]

The id of the template.

termination_token: Optional[str]
turns_counter: Optional[int]
max_files_open: Optional[int]

Maximum number of files that can be open at once for this agent. Setting this too high may exceed the context window, which will break the agent.

message_buffer_autoclear: Optional[bool]

If set to True, the agent will not remember previous messages (though the agent will still retain state via core memory blocks and archival/recall memory). Not recommended unless you have an advanced use case.

message_ids: Optional[List[str]]

The ids of the messages in the agent's in-context memory.

metadata: Optional[Dict[str, object]]

The metadata of the agent.

model: Optional[Model]

Schema for defining settings for a model

model: str

The name of the model.

max_output_tokens: Optional[int]

The maximum number of tokens the model can generate.

parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calling.

multi_agent_group: Optional[Group] (Deprecated)

Deprecated: Use managed_group field instead. The multi-agent group that this agent manages.

id: str

The id of the group. Assigned by the database.

agent_ids: List[str]
description: str
manager_type: ManagerType
Accepts one of the following:
"round_robin"
"supervisor"
"dynamic"
"sleeptime"
"voice_sleeptime"
"swarm"
base_template_id: Optional[str]

The base template id.

deployment_id: Optional[str]

The id of the deployment.

hidden: Optional[bool]

If set to True, the group will be hidden.

last_processed_message_id: Optional[str]
manager_agent_id: Optional[str]
max_message_buffer_length: Optional[int]

The desired maximum length of messages in the context window of the convo agent. This is a best effort, and may be off slightly due to user/assistant interleaving.

max_turns: Optional[int]
min_message_buffer_length: Optional[int]

The desired minimum length of messages in the context window of the convo agent. This is a best effort, and may be off-by-one due to user/assistant interleaving.

project_id: Optional[str]

The associated project id.

shared_block_ids: Optional[List[str]] (Deprecated)
sleeptime_agent_frequency: Optional[int]
template_id: Optional[str]

The id of the template.

termination_token: Optional[str]
turns_counter: Optional[int]
per_file_view_window_char_limit: Optional[int]

The per-file view window character limit for this agent. Setting this too high may exceed the context window, which will break the agent.

project_id: Optional[str]

The id of the project the agent belongs to.

response_format: Optional[ResponseFormat]

The response format used by the agent

Accepts one of the following:
class TextResponseFormat:

Response format for plain text responses.

type: Optional[Literal["text"]]

The type of the response format.

Accepts one of the following:
"text"
class JsonSchemaResponseFormat:

Response format for JSON schema-based responses.

json_schema: Dict[str, object]

The JSON schema of the response.

type: Optional[Literal["json_schema"]]

The type of the response format.

Accepts one of the following:
"json_schema"
class JsonObjectResponseFormat:

Response format for JSON object responses.

type: Optional[Literal["json_object"]]

The type of the response format.

Accepts one of the following:
"json_object"
secrets: Optional[List[AgentEnvironmentVariable]]

The environment variables for tool execution specific to this agent.

agent_id: str

The ID of the agent this environment variable belongs to.

key: str

The name of the environment variable.

value: str

The value of the environment variable.

id: Optional[str]

The human-friendly ID of the Agent-env

created_at: Optional[datetime]

The timestamp when the object was created.

format: date-time
created_by_id: Optional[str]

The id of the user that made this object.

description: Optional[str]

An optional description of the environment variable.

last_updated_by_id: Optional[str]

The id of the user that last updated this object.

updated_at: Optional[datetime]

The timestamp when the object was last updated.

format: date-time
value_enc: Optional[str]

Encrypted secret value (stored as encrypted string)

template_id: Optional[str]

The id of the template the agent belongs to.

timezone: Optional[str]

The timezone of the agent (IANA format).

tool_exec_environment_variables: Optional[List[AgentEnvironmentVariable]] (Deprecated)

Deprecated: use secrets field instead.

agent_id: str

The ID of the agent this environment variable belongs to.

key: str

The name of the environment variable.

value: str

The value of the environment variable.

id: Optional[str]

The human-friendly ID of the Agent-env

created_at: Optional[datetime]

The timestamp when the object was created.

format: date-time
created_by_id: Optional[str]

The id of the user that made this object.

description: Optional[str]

An optional description of the environment variable.

last_updated_by_id: Optional[str]

The id of the user that last updated this object.

updated_at: Optional[datetime]

The timestamp when the object was last updated.

format: date-time
value_enc: Optional[str]

Encrypted secret value (stored as encrypted string)

tool_rules: Optional[List[ToolRule]]

The list of tool rules.

Accepts one of the following:
class ChildToolRule:

A ToolRule represents a tool that can be invoked by the agent.

children: List[str]

The children tools that can be invoked.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

child_arg_nodes: Optional[List[ChildArgNode]]

Optional list of typed child argument overrides. Each node must reference a child in 'children'.

name: str

The name of the child tool to invoke next.

args: Optional[Dict[str, object]]

Optional prefilled arguments for this child tool. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.

prompt_template: Optional[str]

Optional template string (ignored).

type: Optional[Literal["constrain_child_tools"]]
Accepts one of the following:
"constrain_child_tools"
class InitToolRule:

Represents the initial tool rule configuration.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

args: Optional[Dict[str, object]]

Optional prefilled arguments for this tool. When present, these values will override any LLM-provided arguments with the same keys during invocation. Keys must match the tool's parameter names and values must satisfy the tool's JSON schema. Supports partial prefill; non-overlapping parameters are left to the model.

prompt_template: Optional[str]

Optional template string (ignored). Rendering uses fast built-in formatting for performance.

type: Optional[Literal["run_first"]]
Accepts one of the following:
"run_first"
class TerminalToolRule:

Represents a terminal tool rule configuration where if this tool gets called, it must end the agent loop.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

prompt_template: Optional[str]

Optional template string (ignored).

type: Optional[Literal["exit_loop"]]
Accepts one of the following:
"exit_loop"
class ConditionalToolRule:

A ToolRule that conditionally maps to different child tools based on the output.

child_output_mapping: Dict[str, str]

Maps tool output values to the child tool to call for each case.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

default_child: Optional[str]

The default child tool to be called. If None, any tool can be called.

prompt_template: Optional[str]

Optional template string (ignored).

require_output_mapping: Optional[bool]

Whether to throw an error when output doesn't match any case

type: Optional[Literal["conditional"]]
Accepts one of the following:
"conditional"
class ContinueToolRule:

Represents a tool rule configuration where if this tool gets called, it must continue the agent loop.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

prompt_template: Optional[str]

Optional template string (ignored).

type: Optional[Literal["continue_loop"]]
Accepts one of the following:
"continue_loop"
class RequiredBeforeExitToolRule:

Represents a tool rule configuration where this tool must be called before the agent loop can exit.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

prompt_template: Optional[str]

Optional template string (ignored).

type: Optional[Literal["required_before_exit"]]
Accepts one of the following:
"required_before_exit"
class MaxCountPerStepToolRule:

Represents a tool rule configuration which constrains the total number of times this tool can be invoked in a single step.

max_count_limit: int

The max limit for the total number of times this tool can be invoked in a single step.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

prompt_template: Optional[str]

Optional template string (ignored).

type: Optional[Literal["max_count_per_step"]]
Accepts one of the following:
"max_count_per_step"
class ParentToolRule:

A ToolRule that only allows a child tool to be called if the parent has been called.

children: List[str]

The children tools that can be invoked.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

prompt_template: Optional[str]

Optional template string (ignored).

type: Optional[Literal["parent_last_tool"]]
Accepts one of the following:
"parent_last_tool"
class RequiresApprovalToolRule:

Represents a tool rule configuration which requires approval before the tool can be invoked.

tool_name: str

The name of the tool. Must exist in the database for the user's organization.

prompt_template: Optional[str]

Optional template string (ignored). Rendering uses fast built-in formatting for performance.

type: Optional[Literal["requires_approval"]]
Accepts one of the following:
"requires_approval"
updated_at: Optional[datetime]

The timestamp when the object was last updated.

format: date-time
Reset Messages
from letta_client import Letta

client = Letta(
    api_key="My API Key",
)
agent_state = client.agents.messages.reset(
    agent_id="agent-123e4567-e89b-42d3-8456-426614174000",
)
print(agent_state.id)
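
For environments without the SDK, the same operation can be issued directly against the PATCH route shown at the top of this page. The base URL and Bearer-token header below are assumptions about a typical deployment and may differ for yours:

import requests  # any HTTP client works; requests is used here for brevity

agent_id = "agent-123e4567-e89b-42d3-8456-426614174000"  # placeholder ID
resp = requests.patch(
    f"https://api.letta.com/v1/agents/{agent_id}/reset-messages",  # base URL is an assumption
    headers={"Authorization": "Bearer MY_API_KEY"},  # auth scheme is an assumption
    json={"add_default_initial_messages": True},
)
resp.raise_for_status()
print(resp.json()["id"])
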
Returns Examples
{
  "id": "id",
  "agent_type": "memgpt_agent",
  "blocks": [
    {
      "value": "value",
      "id": "block-123e4567-e89b-12d3-a456-426614174000",
      "base_template_id": "base_template_id",
      "created_by_id": "created_by_id",
      "deployment_id": "deployment_id",
      "description": "description",
      "entity_id": "entity_id",
      "hidden": true,
      "is_template": true,
      "label": "label",
      "last_updated_by_id": "last_updated_by_id",
      "limit": 0,
      "metadata": {
        "foo": "bar"
      },
      "preserve_on_migration": true,
      "project_id": "project_id",
      "read_only": true,
      "template_id": "template_id",
      "template_name": "template_name"
    }
  ],
  "embedding_config": {
    "embedding_dim": 0,
    "embedding_endpoint_type": "openai",
    "embedding_model": "embedding_model",
    "azure_deployment": "azure_deployment",
    "azure_endpoint": "azure_endpoint",
    "azure_version": "azure_version",
    "batch_size": 0,
    "embedding_chunk_size": 0,
    "embedding_endpoint": "embedding_endpoint",
    "handle": "handle"
  },
  "llm_config": {
    "context_window": 0,
    "model": "model",
    "model_endpoint_type": "openai",
    "compatibility_type": "gguf",
    "display_name": "display_name",
    "enable_reasoner": true,
    "frequency_penalty": 0,
    "handle": "handle",
    "max_reasoning_tokens": 0,
    "max_tokens": 0,
    "model_endpoint": "model_endpoint",
    "model_wrapper": "model_wrapper",
    "parallel_tool_calls": true,
    "provider_category": "base",
    "provider_name": "provider_name",
    "put_inner_thoughts_in_kwargs": true,
    "reasoning_effort": "minimal",
    "temperature": 0,
    "tier": "tier",
    "verbosity": "low"
  },
  "memory": {
    "blocks": [
      {
        "value": "value",
        "id": "block-123e4567-e89b-12d3-a456-426614174000",
        "base_template_id": "base_template_id",
        "created_by_id": "created_by_id",
        "deployment_id": "deployment_id",
        "description": "description",
        "entity_id": "entity_id",
        "hidden": true,
        "is_template": true,
        "label": "label",
        "last_updated_by_id": "last_updated_by_id",
        "limit": 0,
        "metadata": {
          "foo": "bar"
        },
        "preserve_on_migration": true,
        "project_id": "project_id",
        "read_only": true,
        "template_id": "template_id",
        "template_name": "template_name"
      }
    ],
    "agent_type": "memgpt_agent",
    "file_blocks": [
      {
        "file_id": "file_id",
        "is_open": true,
        "source_id": "source_id",
        "value": "value",
        "id": "block-123e4567-e89b-12d3-a456-426614174000",
        "base_template_id": "base_template_id",
        "created_by_id": "created_by_id",
        "deployment_id": "deployment_id",
        "description": "description",
        "entity_id": "entity_id",
        "hidden": true,
        "is_template": true,
        "label": "label",
        "last_accessed_at": "2019-12-27T18:11:19.117Z",
        "last_updated_by_id": "last_updated_by_id",
        "limit": 0,
        "metadata": {
          "foo": "bar"
        },
        "preserve_on_migration": true,
        "project_id": "project_id",
        "read_only": true,
        "template_id": "template_id",
        "template_name": "template_name"
      }
    ],
    "prompt_template": "prompt_template"
  },
  "name": "name",
  "sources": [
    {
      "id": "source-123e4567-e89b-12d3-a456-426614174000",
      "embedding_config": {
        "embedding_dim": 0,
        "embedding_endpoint_type": "openai",
        "embedding_model": "embedding_model",
        "azure_deployment": "azure_deployment",
        "azure_endpoint": "azure_endpoint",
        "azure_version": "azure_version",
        "batch_size": 0,
        "embedding_chunk_size": 0,
        "embedding_endpoint": "embedding_endpoint",
        "handle": "handle"
      },
      "name": "name",
      "created_at": "2019-12-27T18:11:19.117Z",
      "created_by_id": "created_by_id",
      "description": "description",
      "instructions": "instructions",
      "last_updated_by_id": "last_updated_by_id",
      "metadata": {
        "foo": "bar"
      },
      "updated_at": "2019-12-27T18:11:19.117Z",
      "vector_db_provider": "native"
    }
  ],
  "system": "system",
  "tags": [
    "string"
  ],
  "tools": [
    {
      "id": "tool-123e4567-e89b-12d3-a456-426614174000",
      "args_json_schema": {
        "foo": "bar"
      },
      "created_by_id": "created_by_id",
      "default_requires_approval": true,
      "description": "description",
      "enable_parallel_execution": true,
      "json_schema": {
        "foo": "bar"
      },
      "last_updated_by_id": "last_updated_by_id",
      "metadata_": {
        "foo": "bar"
      },
      "name": "name",
      "npm_requirements": [
        {
          "name": "x",
          "version": "version"
        }
      ],
      "pip_requirements": [
        {
          "name": "x",
          "version": "version"
        }
      ],
      "return_char_limit": 1,
      "source_code": "source_code",
      "source_type": "source_type",
      "tags": [
        "string"
      ],
      "tool_type": "custom"
    }
  ],
  "base_template_id": "base_template_id",
  "created_at": "2019-12-27T18:11:19.117Z",
  "created_by_id": "created_by_id",
  "deployment_id": "deployment_id",
  "description": "description",
  "embedding": {
    "model": "model",
    "provider": "openai"
  },
  "enable_sleeptime": true,
  "entity_id": "entity_id",
  "hidden": true,
  "identities": [
    {
      "id": "identity-123e4567-e89b-12d3-a456-426614174000",
      "agent_ids": [
        "string"
      ],
      "block_ids": [
        "string"
      ],
      "identifier_key": "identifier_key",
      "identity_type": "org",
      "name": "name",
      "project_id": "project_id",
      "properties": [
        {
          "key": "key",
          "type": "string",
          "value": "string"
        }
      ]
    }
  ],
  "identity_ids": [
    "string"
  ],
  "last_run_completion": "2019-12-27T18:11:19.117Z",
  "last_run_duration_ms": 0,
  "last_stop_reason": "end_turn",
  "last_updated_by_id": "last_updated_by_id",
  "managed_group": {
    "id": "id",
    "agent_ids": [
      "string"
    ],
    "description": "description",
    "manager_type": "round_robin",
    "base_template_id": "base_template_id",
    "deployment_id": "deployment_id",
    "hidden": true,
    "last_processed_message_id": "last_processed_message_id",
    "manager_agent_id": "manager_agent_id",
    "max_message_buffer_length": 0,
    "max_turns": 0,
    "min_message_buffer_length": 0,
    "project_id": "project_id",
    "shared_block_ids": [
      "string"
    ],
    "sleeptime_agent_frequency": 0,
    "template_id": "template_id",
    "termination_token": "termination_token",
    "turns_counter": 0
  },
  "max_files_open": 0,
  "message_buffer_autoclear": true,
  "message_ids": [
    "string"
  ],
  "metadata": {
    "foo": "bar"
  },
  "model": {
    "model": "model",
    "max_output_tokens": 0,
    "parallel_tool_calls": true
  },
  "multi_agent_group": {
    "id": "id",
    "agent_ids": [
      "string"
    ],
    "description": "description",
    "manager_type": "round_robin",
    "base_template_id": "base_template_id",
    "deployment_id": "deployment_id",
    "hidden": true,
    "last_processed_message_id": "last_processed_message_id",
    "manager_agent_id": "manager_agent_id",
    "max_message_buffer_length": 0,
    "max_turns": 0,
    "min_message_buffer_length": 0,
    "project_id": "project_id",
    "shared_block_ids": [
      "string"
    ],
    "sleeptime_agent_frequency": 0,
    "template_id": "template_id",
    "termination_token": "termination_token",
    "turns_counter": 0
  },
  "per_file_view_window_char_limit": 0,
  "project_id": "project_id",
  "response_format": {
    "type": "text"
  },
  "secrets": [
    {
      "agent_id": "agent_id",
      "key": "key",
      "value": "value",
      "id": "agent-env-123e4567-e89b-12d3-a456-426614174000",
      "created_at": "2019-12-27T18:11:19.117Z",
      "created_by_id": "created_by_id",
      "description": "description",
      "last_updated_by_id": "last_updated_by_id",
      "updated_at": "2019-12-27T18:11:19.117Z",
      "value_enc": "value_enc"
    }
  ],
  "template_id": "template_id",
  "timezone": "timezone",
  "tool_exec_environment_variables": [
    {
      "agent_id": "agent_id",
      "key": "key",
      "value": "value",
      "id": "agent-env-123e4567-e89b-12d3-a456-426614174000",
      "created_at": "2019-12-27T18:11:19.117Z",
      "created_by_id": "created_by_id",
      "description": "description",
      "last_updated_by_id": "last_updated_by_id",
      "updated_at": "2019-12-27T18:11:19.117Z",
      "value_enc": "value_enc"
    }
  ],
  "tool_rules": [
    {
      "children": [
        "string"
      ],
      "tool_name": "tool_name",
      "child_arg_nodes": [
        {
          "name": "name",
          "args": {
            "foo": "bar"
          }
        }
      ],
      "prompt_template": "prompt_template",
      "type": "constrain_child_tools"
    }
  ],
  "updated_at": "2019-12-27T18:11:19.117Z"
}