List Steps

client.steps.list(StepListParams { after, agent_id, before, 11 more } query?, RequestOptions options?): ArrayPage<Step { id, agent_id, completion_tokens, 21 more } >
GET /v1/steps/

List steps with optional pagination and date filters.

Parameters
query: StepListParams { after, agent_id, before, 11 more }
after?: string | null

Return steps after this step ID

agent_id?: string | null

Filter by the ID of the agent that performed the step

before?: string | null

Return steps before this step ID

end_date?: string | null

Return steps before this ISO datetime (e.g. "2025-01-29T15:01:19-08:00")

feedback?: "positive" | "negative" | null

Filter by feedback

Accepts one of the following:
"positive"
"negative"
has_feedback?: boolean | null

Filter by whether steps have feedback (true) or not (false)

limit?: number | null

Maximum number of steps to return

model?: string | null

Filter by the name of the model used for the step

order?: "asc" | "desc"

Sort order for steps by creation time. 'asc' for oldest first, 'desc' for newest first

Accepts one of the following:
"asc"
"desc"
order_by?: "created_at"

Field to sort by

Accepts one of the following:
"created_at"
project_id?: string | null

Filter by the project ID that is associated with the step (cloud only).

start_date?: string | null

Return steps after this ISO datetime (e.g. "2025-01-29T15:01:19-08:00")

tags?: Array<string> | null

Filter by tags

trace_ids?: Array<string> | null

Filter by trace ids returned by the server
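
As a sketch of how these filters combine, the call below lists recent steps for one agent with negative feedback, newest first. It assumes the same @letta-ai/letta-client setup as the request example at the bottom of this page; the agent ID and date are placeholders.

import Letta from '@letta-ai/letta-client';

const client = new Letta({
  apiKey: 'My API Key',
});

// Placeholder filter values for illustration only.
const filtered = client.steps.list({
  agent_id: 'agent-123e4567-e89b-12d3-a456-426614174000', // steps performed by this agent
  start_date: '2025-01-01T00:00:00-08:00', // ISO datetime lower bound
  feedback: 'negative', // only steps marked with negative feedback
  order: 'desc', // newest first
  limit: 50, // page size per request
});

// Iterating the returned page object fetches further pages automatically.
for await (const step of filtered) {
  console.log(step.id, step.model, step.status);
}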

Returns
Step { id, agent_id, completion_tokens, 21 more }
id: string

The id of the step. Assigned by the database.

agent_id?: string | null

The ID of the agent that performed the step.

completion_tokens?: number | null

The number of tokens generated by the agent during this step.

completion_tokens_details?: Record<string, unknown> | null

Details about the completion tokens generated during this step.

context_window_limit?: number | null

The context window limit configured for this step.

error_data?: Record<string, unknown> | null

Error details including message, traceback, and additional context

error_type?: string | null

The type/class of the error that occurred

feedback?: "positive" | "negative" | null

The feedback for this step. Must be either 'positive' or 'negative'.

Accepts one of the following:
"positive"
"negative"
messages?: Array<Message { id, role, agent_id, 21 more } > (Deprecated)

The messages generated during this step. Deprecated: use the GET /v1/steps/{step_id}/messages endpoint instead (see the sketch after the request example below).

id: string

The human-friendly ID of the Message

role: "assistant" | "user" | "tool" | "function" | "system" | "approval"

The role of the participant.

Accepts one of the following:
"assistant"
"user"
"tool"
"function"
"system"
"approval"
agent_id?: string | null

The unique identifier of the agent.

approval_request_id?: string | null

The id of the approval request if this message is associated with a tool call request.

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | LettaSchemasMessageToolReturn { status, func_response, stderr, 2 more } > | null

The list of approvals for this message.

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
LettaSchemasMessageToolReturn { status, func_response, stderr, 2 more }
status: "success" | "error"

The status of the tool call

Accepts one of the following:
"success"
"error"
func_response?: string | null

The function response string

stderr?: Array<string> | null

Captured stderr from the tool invocation

stdout?: Array<string> | null

Captured stdout (e.g. prints, logs) from the tool invocation

tool_call_id?: unknown

The ID for the tool call

approve?: boolean | null

Whether tool call is approved.

batch_item_id?: string | null

The id of the LLMBatchItem that this message is associated with

content?: Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null

The content of the message. A short handling example follows the list of content part types below.

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoningContent { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: Array<Summary>

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content?: string

The encrypted reasoning content.

type?: "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
created_at?: string

The timestamp when the object was created.

format: date-time
created_by_id?: string | null

The id of the user that made this object.

denial_reason?: string | null

The reason the tool call request was denied.

group_id?: string | null

The multi-agent group that the message was sent in

is_err?: boolean | null

Whether this message is part of an error step. Used only for debugging purposes.

last_updated_by_id?: string | null

The id of the user that made this object.

model?: string | null

The model used to make the function call.

name?: string | null

For role user/assistant: the (optional) name of the participant. For role tool/function: the name of the function called.

otid?: string | null

The offline threading id associated with this message

run_id?: string | null

The id of the run that this message was created in.

sender_id?: string | null

The id of the sender of the message; can be an identity id or agent id

step_id?: string | null

The id of the step that this message was created in.

tool_call_id?: string | null

The ID of the tool call. Only applicable for role tool.

tool_calls?: Array<ToolCall> | null

The list of tool calls requested. Only applicable for role assistant.

id: string
function: Function { arguments, name }
arguments: string
name: string
type: "function"
Accepts one of the following:
"function"
tool_returns?: Array<ToolReturn> | null

Tool execution return information for prior tool calls

status: "success" | "error"

The status of the tool call

Accepts one of the following:
"success"
"error"
func_response?: string | null

The function response string

stderr?: Array<string> | null

Captured stderr from the tool invocation

stdout?: Array<string> | null

Captured stdout (e.g. prints, logs) from the tool invocation

tool_call_id?: unknown

The ID for the tool call

updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
model?: string | null

The name of the model used for this step.

model_endpoint?: string | null

The model endpoint url used for this step.

origin?: string | null

The surface that this agent step was initiated from.

project_id?: string | null

The project that the agent that executed this step belongs to (cloud only).

prompt_tokens?: number | null

The number of tokens in the prompt during this step.

provider_category?: string | null

The category of the provider used for this step.

provider_id?: string | null

The unique identifier of the provider that was configured for this step

provider_name?: string | null

The name of the provider used for this step.

run_id?: string | null

The unique identifier of the run that this step belongs to. Only included for async calls.

status?: "pending" | "success" | "failed" | "cancelled" | null

Status of a step execution

Accepts one of the following:
"pending"
"success"
"failed"
"cancelled"
stop_reason?: StopReasonType | null

The stop reason associated with the step.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
tags?: Array<string>

Metadata tags.

tid?: string | null

The unique identifier of the transaction that processed this step.

total_tokens?: number | null

The total number of tokens processed by the agent during this step.

trace_id?: string | null

The trace id of the agent step.
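
Putting the step fields together, the sketch below tallies prompt and completion tokens across an agent's steps and collects any that failed. The agent ID is a placeholder and the client setup mirrors the request example below.

import Letta from '@letta-ai/letta-client';

const client = new Letta({
  apiKey: 'My API Key',
});

let promptTokens = 0;
let completionTokens = 0;
const failedStepIds: string[] = [];

// Placeholder agent ID; pagination is handled by the iterator.
for await (const step of client.steps.list({ agent_id: 'agent-123e4567-e89b-12d3-a456-426614174000' })) {
  promptTokens += step.prompt_tokens ?? 0;
  completionTokens += step.completion_tokens ?? 0;
  if (step.status === 'failed') {
    failedStepIds.push(step.id);
  }
}

console.log({ promptTokens, completionTokens, failedSteps: failedStepIds.length });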

List Steps
import Letta from '@letta-ai/letta-client';

const client = new Letta({
  apiKey: 'My API Key',
});

// Automatically fetches more pages as needed.
for await (const step of client.steps.list()) {
  console.log(step.id);
}
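
Because the messages field on Step is deprecated, messages for a single step are better fetched from the GET /v1/steps/{step_id}/messages endpoint mentioned above. A raw-HTTP sketch follows; the base URL, bearer auth header, and step ID are placeholders for a self-hosted server, and the SDK may expose a typed helper for the same endpoint.

// Placeholder values: adjust the base URL and credentials for your deployment.
const stepId = 'step-123e4567-e89b-12d3-a456-426614174000';
const response = await fetch(`http://localhost:8283/v1/steps/${stepId}/messages`, {
  headers: { Authorization: 'Bearer My API Key' },
});
const messages = await response.json();
console.log(`step ${stepId} has ${messages.length} messages`);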
Returns Examples
[
  {
    "id": "id",
    "agent_id": "agent_id",
    "completion_tokens": 0,
    "completion_tokens_details": {
      "foo": "bar"
    },
    "context_window_limit": 0,
    "error_data": {
      "foo": "bar"
    },
    "error_type": "error_type",
    "feedback": "positive",
    "messages": [
      {
        "id": "message-123e4567-e89b-12d3-a456-426614174000",
        "role": "assistant",
        "agent_id": "agent_id",
        "approval_request_id": "approval_request_id",
        "approvals": [
          {
            "approve": true,
            "tool_call_id": "tool_call_id",
            "reason": "reason",
            "type": "approval"
          }
        ],
        "approve": true,
        "batch_item_id": "batch_item_id",
        "content": [
          {
            "text": "text",
            "signature": "signature",
            "type": "text"
          }
        ],
        "created_at": "2019-12-27T18:11:19.117Z",
        "created_by_id": "created_by_id",
        "denial_reason": "denial_reason",
        "group_id": "group_id",
        "is_err": true,
        "last_updated_by_id": "last_updated_by_id",
        "model": "model",
        "name": "name",
        "otid": "otid",
        "run_id": "run_id",
        "sender_id": "sender_id",
        "step_id": "step_id",
        "tool_call_id": "tool_call_id",
        "tool_calls": [
          {
            "id": "id",
            "function": {
              "arguments": "arguments",
              "name": "name"
            },
            "type": "function"
          }
        ],
        "tool_returns": [
          {
            "status": "success",
            "func_response": "func_response",
            "stderr": [
              "string"
            ],
            "stdout": [
              "string"
            ],
            "tool_call_id": {}
          }
        ],
        "updated_at": "2019-12-27T18:11:19.117Z"
      }
    ],
    "model": "model",
    "model_endpoint": "model_endpoint",
    "origin": "origin",
    "project_id": "project_id",
    "prompt_tokens": 0,
    "provider_category": "provider_category",
    "provider_id": "provider_id",
    "provider_name": "provider_name",
    "run_id": "run_id",
    "status": "pending",
    "stop_reason": "end_turn",
    "tags": [
      "string"
    ],
    "tid": "tid",
    "total_tokens": 0,
    "trace_id": "trace_id"
  }
]