Steps

List Steps
steps.list(**kwargs: StepListParams) -> SyncArrayPage[Step]
GET /v1/steps/
Retrieve Step
steps.retrieve(step_id: str) -> Step
GET /v1/steps/{step_id}
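As a quick illustration, the two endpoints above can be combined into a small helper. This is a sketch, not a definitive implementation: `client` stands in for a configured Letta SDK client exposing the `steps.list` and `steps.retrieve` methods documented here, and the `limit` keyword is an assumption about StepListParams.

```python
def fetch_step_summaries(client, limit: int = 10):
    """List recent steps, then retrieve each one in full.

    `client` is assumed to be a configured Letta SDK client whose
    `steps.list(**kwargs)` returns an iterable page of Step objects
    and whose `steps.retrieve(step_id)` returns a single Step.
    """
    summaries = []
    for step in client.steps.list(limit=limit):
        full = client.steps.retrieve(step.id)
        summaries.append((full.id, full.status))
    return summaries
```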
Models
class ProviderTrace:

Letta's internal representation of a provider trace.

Attributes:
  id (str): The unique identifier of the provider trace.
  request_json (Dict[str, Any]): JSON content of the provider request.
  response_json (Dict[str, Any]): JSON content of the provider response.
  step_id (str): ID of the step that this trace is associated with.
  organization_id (str): The unique identifier of the organization.
  created_at (datetime): The timestamp when the object was created.

request_json: Dict[str, object]

JSON content of the provider request

response_json: Dict[str, object]

JSON content of the provider response

id: Optional[str]

The human-friendly ID of the provider trace.

created_at: Optional[datetime]

The timestamp when the object was created.

Format: date-time
created_by_id: Optional[str]

The id of the user that made this object.

last_updated_by_id: Optional[str]

The id of the user that last updated this object.

step_id: Optional[str]

ID of the step that this trace is associated with

updated_at: Optional[datetime]

The timestamp when the object was last updated.

Format: date-time
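Since `request_json` and `response_json` are plain dictionaries, a trace can be inspected with ordinary dict access. The keys used below (`model`, `usage`) follow common LLM provider payloads and are assumptions; the actual contents depend on the provider that handled the step.

```python
def summarize_trace(request_json: dict, response_json: dict) -> dict:
    """Extract a few commonly present fields from a provider trace.

    "model" and "usage" are typical provider payload keys, not
    guaranteed by the ProviderTrace schema itself.
    """
    return {
        "model": request_json.get("model"),
        "usage": response_json.get("usage", {}),
    }
```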
class Step:
id: str

The id of the step. Assigned by the database.

agent_id: Optional[str]

The ID of the agent that performed the step.

completion_tokens: Optional[int]

The number of tokens generated by the agent during this step.

completion_tokens_details: Optional[Dict[str, object]]

Metadata for the agent.

context_window_limit: Optional[int]

The context window limit configured for this step.

error_data: Optional[Dict[str, object]]

Error details including message, traceback, and additional context

error_type: Optional[str]

The type/class of the error that occurred

feedback: Optional[Literal["positive", "negative"]]

The feedback for this step. Must be either 'positive' or 'negative'.

Accepts one of the following:
"positive"
"negative"
messages: Optional[List[Message]] (Deprecated)

The messages generated during this step. Deprecated: use GET /v1/steps/{step_id}/messages endpoint instead

id: str

The human-friendly ID of the Message

role: Literal["assistant", "user", "tool", "function", "system", "approval"]

The role of the participant.

Accepts one of the following:
"assistant"
"user"
"tool"
"function"
"system"
"approval"
agent_id: Optional[str]

The unique identifier of the agent.

approval_request_id: Optional[str]

The id of the approval request if this message is associated with a tool call request.

approvals: Optional[List[Approval]]

The list of approvals for this message.

Accepts one of the following:
class ApprovalApprovalReturn:
approve: bool

Whether the tool has been approved

tool_call_id: str

The ID of the tool call that corresponds to this approval

reason: Optional[str]

An optional explanation for the provided approval status

type: Optional[Literal["approval"]]

The message type to be created.

Accepts one of the following:
"approval"
class ApprovalLettaSchemasMessageToolReturn:
status: Literal["success", "error"]

The status of the tool call

Accepts one of the following:
"success"
"error"
func_response: Optional[str]

The function response string

stderr: Optional[List[str]]

Captured stderr from the tool invocation

stdout: Optional[List[str]]

Captured stdout (e.g. prints, logs) from the tool invocation

tool_call_id: Optional[object]

The ID for the tool call

approve: Optional[bool]

Whether tool call is approved.

batch_item_id: Optional[str]

The id of the LLMBatchItem that this message is associated with

content: Optional[List[Content]]

The content of the message.

Accepts one of the following:
class TextContent:
text: str

The text content of the message.

signature: Optional[str]

Stores a unique identifier for any reasoning associated with this text content.

type: Optional[Literal["text"]]

The type of the message.

Accepts one of the following:
"text"
class ImageContent:
source: Source

The source of the image.

Accepts one of the following:
class SourceURLImage:
url: str

The URL of the image.

type: Optional[Literal["url"]]

The source type for the image.

Accepts one of the following:
"url"
class SourceBase64Image:
data: str

The base64 encoded image data.

media_type: str

The media type for the image.

detail: Optional[str]

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: Optional[Literal["base64"]]

The source type for the image.

Accepts one of the following:
"base64"
class SourceLettaImage:
file_id: str

The unique identifier of the image file persisted in storage.

data: Optional[str]

The base64 encoded image data.

detail: Optional[str]

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: Optional[str]

The media type for the image.

type: Optional[Literal["letta"]]

The source type for the image.

Accepts one of the following:
"letta"
type: Optional[Literal["image"]]

The type of the message.

Accepts one of the following:
"image"
class ToolCallContent:
id: str

A unique identifier for this specific tool call instance.

input: Dict[str, object]

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: str

The name of the tool being called.

signature: Optional[str]

Stores a unique identifier for any reasoning associated with this tool call.

type: Optional[Literal["tool_call"]]

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
class ToolReturnContent:
content: str

The content returned by the tool execution.

is_error: bool

Indicates whether the tool execution resulted in an error.

tool_call_id: str

References the ID of the ToolCallContent that initiated this tool call.

type: Optional[Literal["tool_return"]]

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
class ReasoningContent:

Sent via the Anthropic Messages API

is_native: bool

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: str

The intermediate reasoning or thought process content.

signature: Optional[str]

A unique identifier for this reasoning step.

type: Optional[Literal["reasoning"]]

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
class RedactedReasoningContent:

Sent via the Anthropic Messages API

data: str

The redacted or filtered intermediate reasoning content.

type: Optional[Literal["redacted_reasoning"]]

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
class OmittedReasoningContent:

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature: Optional[str]

A unique identifier for this reasoning step.

type: Optional[Literal["omitted_reasoning"]]

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
class ContentSummarizedReasoningContent:

The style of reasoning content returned by the OpenAI Responses API

id: str

The unique identifier for this reasoning step.

summary: List[ContentSummarizedReasoningContentSummary]

Summaries of the reasoning content.

class ContentSummarizedReasoningContentSummary:
index: int

The index of the summary part.

text: str

The text of the summary part.

encrypted_content: Optional[str]

The encrypted reasoning content.

type: Optional[Literal["summarized_reasoning"]]

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
created_at: Optional[datetime]

The timestamp when the object was created.

Format: date-time
created_by_id: Optional[str]

The id of the user that made this object.

denial_reason: Optional[str]

The reason the tool call request was denied.

group_id: Optional[str]

The multi-agent group that the message was sent in

is_err: Optional[bool]

Whether this message is part of an error step. Used only for debugging purposes.

last_updated_by_id: Optional[str]

The id of the user that last updated this object.

model: Optional[str]

The model used to make the function call.

name: Optional[str]

For role user/assistant: the (optional) name of the participant. For role tool/function: the name of the function called.

otid: Optional[str]

The offline threading id associated with this message

run_id: Optional[str]

The id of the run that this message was created in.

sender_id: Optional[str]

The id of the sender of the message, can be an identity id or agent id

step_id: Optional[str]

The id of the step that this message was created in.

tool_call_id: Optional[str]

The ID of the tool call. Only applicable for role tool.

tool_calls: Optional[List[ToolCall]]

The list of tool calls requested. Only applicable for role assistant.

id: str
function: ToolCallFunction
arguments: str
name: str
type: Literal["function"]
Accepts one of the following:
"function"
tool_returns: Optional[List[ToolReturn]]

Tool execution return information for prior tool calls

status: Literal["success", "error"]

The status of the tool call

Accepts one of the following:
"success"
"error"
func_response: Optional[str]

The function response string

stderr: Optional[List[str]]

Captured stderr from the tool invocation

stdout: Optional[List[str]]

Captured stdout (e.g. prints, logs) from the tool invocation

tool_call_id: Optional[object]

The ID for the tool call

updated_at: Optional[datetime]

The timestamp when the object was last updated.

Format: date-time
model: Optional[str]

The name of the model used for this step.

model_endpoint: Optional[str]

The model endpoint url used for this step.

origin: Optional[str]

The surface that this agent step was initiated from.

project_id: Optional[str]

The project that the agent that executed this step belongs to (cloud only).

prompt_tokens: Optional[int]

The number of tokens in the prompt during this step.

provider_category: Optional[str]

The category of the provider used for this step.

provider_id: Optional[str]

The unique identifier of the provider that was configured for this step

provider_name: Optional[str]

The name of the provider used for this step.

run_id: Optional[str]

The unique identifier of the run that this step belongs to. Only included for async calls.

status: Optional[Literal["pending", "success", "failed", "cancelled"]]

Status of a step execution

Accepts one of the following:
"pending"
"success"
"failed"
"cancelled"
stop_reason: Optional[StopReasonType]

The stop reason associated with the step.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
tags: Optional[List[str]]

Metadata tags.

tid: Optional[str]

The unique identifier of the transaction that processed this step.

total_tokens: Optional[int]

The total number of tokens processed by the agent during this step.
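All three token counters (`prompt_tokens`, `completion_tokens`, `total_tokens`) are optional, so aggregation code should treat missing values as zero. A sketch over steps represented as dicts:

```python
def token_totals(steps) -> dict:
    """Sum prompt, completion, and total token counts across steps,
    treating absent or None counters as zero."""
    totals = {"prompt": 0, "completion": 0, "total": 0}
    for step in steps:
        totals["prompt"] += step.get("prompt_tokens") or 0
        totals["completion"] += step.get("completion_tokens") or 0
        totals["total"] += step.get("total_tokens") or 0
    return totals
```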

trace_id: Optional[str]

The trace id of the agent step.

Steps Metrics

Retrieve Metrics For Step
steps.metrics.retrieve(step_id: str) -> MetricRetrieveResponse
GET /v1/steps/{step_id}/metrics

Steps Trace

Retrieve Trace For Step
steps.trace.retrieve(step_id: str) -> ProviderTrace
GET /v1/steps/{step_id}/trace

Steps Feedback

Modify Feedback For Step
steps.feedback.create(step_id: str, **kwargs: FeedbackCreateParams) -> Step
PATCH /v1/steps/{step_id}/feedback
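Since `Step.feedback` only accepts "positive" or "negative", it can help to validate locally before calling the endpoint. A hedged sketch: the `feedback` keyword is an assumption about FeedbackCreateParams based on the Step.feedback field above, and `client` stands in for a configured SDK client.

```python
VALID_FEEDBACK = ("positive", "negative")

def submit_feedback(client, step_id: str, feedback: str):
    """Validate feedback locally, then submit it for the step.

    Assumes `client.steps.feedback.create(step_id, feedback=...)`;
    the exact keyword accepted by FeedbackCreateParams is an
    assumption, not confirmed by this reference.
    """
    if feedback not in VALID_FEEDBACK:
        raise ValueError(f"feedback must be one of {VALID_FEEDBACK}")
    return client.steps.feedback.create(step_id, feedback=feedback)
```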

Steps Messages

List Messages For Step
steps.messages.list(step_id: str, **kwargs: MessageListParams) -> SyncArrayPage[MessageListResponse]
GET /v1/steps/{step_id}/messages
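This endpoint replaces the deprecated `Step.messages` field. A minimal sketch of reading from it, assuming the returned page is iterable and each message exposes a `role` attribute (as the Message model above documents):

```python
def collect_step_roles(client, step_id: str):
    """Return the role of every message generated during a step,
    in order, using the paginated messages endpoint."""
    return [message.role for message in client.steps.messages.list(step_id)]
```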