Send Message Streaming

client.agents.messages.stream(agentID: string, body: MessageStreamParams { assistant_message_tool_kwarg, assistant_message_tool_name, background, 9 more }, options?: RequestOptions): LettaStreamingResponse | Stream<LettaStreamingResponse>
POST /v1/agents/{agent_id}/messages/stream

Process a user message and return the agent's response. This endpoint accepts a message from a user and processes it through the agent. It always streams the steps of the response, and also streams individual tokens if 'stream_tokens' is set to True.
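Outside the SDK, the response is plain Server-Sent Events: each event arrives on a `data: <json>` line. A minimal parser sketch (the helper name and sample payload are illustrative, not part of the API):

```typescript
// Parse one SSE line from the /v1/agents/{agent_id}/messages/stream response.
// Returns the decoded event object, or null for non-data lines
// (comments and blank keepalive lines start without a "data:" prefix).
function parseSseLine(line: string): Record<string, unknown> | null {
  if (!line.startsWith('data:')) return null;
  const payload = line.slice('data:'.length).trim();
  return JSON.parse(payload) as Record<string, unknown>;
}

// Illustrative event, shaped like the LettaStreamingResponse variants below.
const event = parseSseLine('data: {"message_type":"ping"}');
```

In practice the SDK's `Stream<LettaStreamingResponse>` does this decoding for you; the sketch only shows what travels on the wire.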

Parameters
agentID: string

The ID of the agent in the format 'agent-'

minLength: 42
maxLength: 42
body: MessageStreamParams { assistant_message_tool_kwarg, assistant_message_tool_name, background, 9 more }
Deprecated assistant_message_tool_kwarg?: string

The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Deprecated assistant_message_tool_name?: string

The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

background?: boolean

Whether to process the request in the background (only used when streaming=true).

Deprecated enable_thinking?: string

If set to True, enables reasoning before responses or tool calls from the agent.

include_pings?: boolean

Whether to include periodic keepalive ping messages in the stream to prevent connection timeouts (only used when streaming=true).

include_return_message_types?: Array<MessageType> | null

Only return specified message types in the response. If None (default) returns all messages.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
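A body using `include_return_message_types` to keep only the final assistant text and tool results might look like this (the prompt is illustrative; the type names mirror the enum above):

```typescript
// The nine MessageType values documented above.
type MessageType =
  | 'system_message' | 'user_message' | 'assistant_message'
  | 'reasoning_message' | 'hidden_reasoning_message' | 'tool_call_message'
  | 'tool_return_message' | 'approval_request_message' | 'approval_response_message';

// Ask the server to return only assistant replies and tool returns,
// dropping reasoning and bookkeeping events from the stream.
const body = {
  messages: [{ role: 'user' as const, content: 'Summarize my notes' }],
  include_return_message_types: ['assistant_message', 'tool_return_message'] as MessageType[],
};
```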
input?: string | Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more> | null

Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].

Accepts one of the following:
string
Array<TextContent { text, signature, type } | ImageContent { source, type } | ToolCallContent { id, input, name, 2 more } | 5 more>
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoningContent { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: Array<Summary>

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content?: string

The encrypted reasoning content.

type?: "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
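The `input` shorthand's equivalence to `messages` can be expressed as a tiny desugaring helper (the function name is hypothetical; the expansion follows the docs exactly):

```typescript
// Expands the `input` shorthand into the equivalent `messages` form,
// per the docs: messages=[{'role': 'user', 'content': input}].
function desugarInput(input: string) {
  return { messages: [{ role: 'user' as const, content: input }] };
}

const expanded = desugarInput('What changed since yesterday?');
```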
max_steps?: number

Maximum number of steps the agent should take to process the request.

messages?: Array<MessageCreate { content, role, batch_item_id, 5 more } | ApprovalCreate { approval_request_id, approvals, approve, 3 more } > | null

The messages to be sent to the agent.

Accepts one of the following:
MessageCreate { content, role, batch_item_id, 5 more }

Request to create a message

content: Array<LettaMessageContentUnion> | string

The content of the message.

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: Record<string, unknown>

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature?: string | null

Stores a unique identifier for any reasoning associated with this tool call.

type?: "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type?: "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature?: string | null

A unique identifier for this reasoning step.

type?: "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type?: "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature?: string | null

A unique identifier for this reasoning step.

type?: "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
string
role: "user" | "system" | "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id?: string | null

The id of the LLMBatchItem that this message is associated with

group_id?: string | null

The multi-agent group that the message was sent in

name?: string | null

The name of the participant.

otid?: string | null

The offline threading id associated with this message

sender_id?: string | null

The id of the sender of the message, can be an identity id or agent id

type?: "message" | null

The message type to be created.

Accepts one of the following:
"message"
ApprovalCreate { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

Deprecated approval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecated approve?: boolean | null

Whether the tool has been approved

group_id?: string | null

The multi-agent group that the message was sent in

Deprecated reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
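A `messages` array can mix the two union members above, e.g. a `MessageCreate` followed by an `ApprovalCreate` answering a pending tool-call approval (the `tool_call_id` and reason text are illustrative):

```typescript
// A regular user message, per MessageCreate.
const userMessage = {
  role: 'user' as const,
  content: [{ type: 'text' as const, text: 'Please fetch the report' }],
};

// An approval response for a previously streamed approval request,
// per ApprovalCreate with the non-deprecated `approvals` list.
const approval = {
  type: 'approval' as const,
  approvals: [
    { approve: true, tool_call_id: 'call_abc123', reason: 'Read-only tool' },
  ],
};

const messages = [userMessage, approval];
```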
stream_tokens?: boolean

Flag to determine if individual tokens should be streamed, rather than streaming per step (only used when streaming=true).

streaming?: boolean

If True, returns a streaming response (Server-Sent Events). If False (default), returns a complete response.
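The streaming-only flags interact: `stream_tokens`, `background`, and `include_pings` only take effect when `streaming=true`. A body sketch enabling token-level streaming (the prompt is illustrative):

```typescript
// Token-level streaming with keepalive pings enabled; note that
// stream_tokens and include_pings are ignored unless streaming=true.
const tokenStreamingBody = {
  messages: [{ role: 'user' as const, content: 'Write a haiku' }],
  streaming: true,
  stream_tokens: true,
  include_pings: true,
};
```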

Deprecated use_assistant_message?: boolean

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

Returns
LettaStreamingResponse = SystemMessage { id, content, date, 8 more } | UserMessage { id, content, date, 8 more } | ReasoningMessage { id, date, reasoning, 10 more } | 9 more

Streaming response type for Server-Sent Events (SSE) endpoints. Each event in the stream will be one of these types.

Accepts one of the following:
SystemMessage { id, content, date, 8 more }

A message generated by the system. Never streamed back on a response, only used for cursor pagination.

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  content (str): The message content sent by the system

id: string
content: string

The message content sent by the system

date: string
is_err?: boolean | null
message_type?: "system_message"

The type of the message.

Accepts one of the following:
"system_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
UserMessage { id, content, date, 8 more }

A message sent by the user. Never streamed back on a response, only used for cursor pagination.

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)

id: string
content: Array<LettaUserMessageContentUnion> | string

The message content sent by the user (can be a string or an array of multi-modal content parts)

Accepts one of the following:
TextContent { text, signature, type }
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent { source, type }
source: URLImage { url, type } | Base64Image { data, media_type, detail, type } | LettaImage { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URLImage { url, type }
url: string

The URL of the image.

type?: "url"

The source type for the image.

Accepts one of the following:
"url"
Base64Image { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type?: "base64"

The source type for the image.

Accepts one of the following:
"base64"
LettaImage { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data?: string | null

The base64 encoded image data.

detail?: string | null

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type?: string | null

The media type for the image.

type?: "letta"

The source type for the image.

Accepts one of the following:
"letta"
type?: "image"

The type of the message.

Accepts one of the following:
"image"
string
date: string
is_err?: boolean | null
message_type?: "user_message"

The type of the message.

Accepts one of the following:
"user_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ReasoningMessage { id, date, reasoning, 10 more }

Representation of an agent's internal reasoning.

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
  reasoning (str): The internal reasoning of the agent
  signature (Optional[str]): The model-generated signature of the reasoning step

id: string
date: string
reasoning: string
is_err?: boolean | null
message_type?: "reasoning_message"

The type of the message.

Accepts one of the following:
"reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
signature?: string | null
source?: "reasoner_model" | "non_reasoner_model"
Accepts one of the following:
"reasoner_model"
"non_reasoner_model"
step_id?: string | null
HiddenReasoningMessage { id, date, state, 9 more }

Representation of an agent's internal reasoning where reasoning content has been hidden from the response.

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
  hidden_reasoning (Optional[str]): The internal reasoning of the agent

id: string
date: string
state: "redacted" | "omitted"
Accepts one of the following:
"redacted"
"omitted"
hidden_reasoning?: string | null
is_err?: boolean | null
message_type?: "hidden_reasoning_message"

The type of the message.

Accepts one of the following:
"hidden_reasoning_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
ToolCallMessage { id, date, tool_call, 9 more }

A message representing a request to call a tool (generated by the LLM to trigger tool execution).

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  tool_call (Union[ToolCall, ToolCallDelta]): The tool call

id: string
date: string
Deprecated tool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }
Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "tool_call_message"

The type of the message.

Accepts one of the following:
"tool_call_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null
Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
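When `stream_tokens=true`, tool calls arrive as `ToolCallDelta` fragments whose `arguments` string grows across events; concatenating the fragments yields the full call. A merge sketch (field names follow the schema above; the helper name and sample deltas are illustrative):

```typescript
// Partial tool-call fragment, as streamed in ToolCallMessage events.
interface ToolCallDelta {
  arguments?: string | null;
  name?: string | null;
  tool_call_id?: string | null;
}

// Fold a sequence of deltas into one complete tool call:
// the id and name come from whichever delta carries them,
// and the argument fragments are concatenated in order.
function mergeToolCallDeltas(deltas: ToolCallDelta[]) {
  return deltas.reduce(
    (acc, d) => ({
      tool_call_id: d.tool_call_id ?? acc.tool_call_id,
      name: d.name ?? acc.name,
      arguments: acc.arguments + (d.arguments ?? ''),
    }),
    { tool_call_id: '', name: '', arguments: '' },
  );
}

const merged = mergeToolCallDeltas([
  { tool_call_id: 'call_1', name: 'web_search', arguments: '{"query"' },
  { arguments: ': "letta"}' },
]);
// merged.arguments is now '{"query": "letta"}'
```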
ToolReturnMessage { id, date, status, 13 more }

A message representing the return value of a tool call (generated by Letta executing the requested tool).

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  tool_return (str): The return value of the tool (deprecated, use tool_returns)
  status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns)
  tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns)
  stdout (Optional[List[str]]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns)
  stderr (Optional[List[str]]): Captured stderr from the tool invocation (deprecated, use tool_returns)
  tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support

id: string
date: string
Deprecated status: "success" | "error"
Accepts one of the following:
"success"
"error"
Deprecated tool_call_id: string
Deprecated tool_return: string
is_err?: boolean | null
message_type?: "tool_return_message"

The type of the message.

Accepts one of the following:
"tool_return_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
Deprecated stderr?: Array<string> | null
Deprecated stdout?: Array<string> | null
step_id?: string | null
tool_returns?: Array<ToolReturn { status, tool_call_id, tool_return, 3 more } > | null
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
AssistantMessage { id, content, date, 8 more }

A message sent by the LLM in response to user input. Used in the LLM context.

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)

id: string
content: Array<LettaAssistantMessageContentUnion { text, signature, type } > | string

The message content sent by the agent (can be a string or an array of content parts)

Accepts one of the following:
Array<LettaAssistantMessageContentUnion { text, signature, type } >
text: string

The text content of the message.

signature?: string | null

Stores a unique identifier for any reasoning associated with this text content.

type?: "text"

The type of the message.

Accepts one of the following:
"text"
string
date: string
is_err?: boolean | null
message_type?: "assistant_message"

The type of the message.

Accepts one of the following:
"assistant_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
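Because `AssistantMessage.content` may be a plain string or an array of text parts, consumers typically normalize it before display. A sketch (the helper name is hypothetical; the shapes follow the union above):

```typescript
// AssistantMessage content: either a string or an array of text parts.
type AssistantContent = string | Array<{ type?: 'text'; text: string }>;

// Normalize both shapes into a single display string.
function assistantText(content: AssistantContent): string {
  if (typeof content === 'string') return content;
  return content.map((part) => part.text).join('');
}
```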
ApprovalRequestMessage { id, date, tool_call, 9 more }

A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  tool_call (ToolCall): The tool call

id: string
date: string
Deprecated tool_call: ToolCall { arguments, name, tool_call_id } | ToolCallDelta { arguments, name, tool_call_id }

The tool call that the LLM has requested to run

Accepts one of the following:
ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
is_err?: boolean | null
message_type?: "approval_request_message"

The type of the message.

Accepts one of the following:
"approval_request_message"
name?: string | null
otid?: string | null
run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
tool_calls?: Array<ToolCall { arguments, name, tool_call_id } > | ToolCallDelta { arguments, name, tool_call_id } | null

The tool calls that the LLM has requested to run, which are pending approval

Accepts one of the following:
Array<ToolCall { arguments, name, tool_call_id } >
arguments: string
name: string
tool_call_id: string
ToolCallDelta { arguments, name, tool_call_id }
arguments?: string | null
name?: string | null
tool_call_id?: string | null
ApprovalResponseMessage { id, date, approval_request_id, 11 more }

A message representing a response from the user indicating whether a tool has been approved to run.

Args:
  id (str): The ID of the message
  date (datetime): The date the message was created in ISO format
  name (Optional[str]): The name of the sender of the message
  approve (bool): Whether the tool has been approved
  approval_request_id (str): The ID of the approval request
  reason (Optional[str]): An optional explanation for the provided approval status

id: string
date: string
Deprecated approval_request_id?: string | null

The message ID of the approval request

approvals?: Array<ApprovalReturn { approve, tool_call_id, reason, type } | ToolReturn { status, tool_call_id, tool_return, 3 more } > | null

The list of approval responses

Accepts one of the following:
ApprovalReturn { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason?: string | null

An optional explanation for the provided approval status

type?: "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" | "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr?: Array<string> | null
stdout?: Array<string> | null
type?: "tool"

The message type to be created.

Accepts one of the following:
"tool"
Deprecated approve?: boolean | null

Whether the tool has been approved

is_err?: boolean | null
message_type?: "approval_response_message"

The type of the message.

Accepts one of the following:
"approval_response_message"
name?: string | null
otid?: string | null
Deprecated reason?: string | null

An optional explanation for the provided approval status

run_id?: string | null
sender_id?: string | null
seq_id?: number | null
step_id?: string | null
LettaPing { message_type }

Ping messages are a keep-alive to prevent SSE streams from timing out during long-running requests.

message_type: "ping"

The type of the message.

Accepts one of the following:
"ping"
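With `include_pings=true`, `LettaPing` events appear in the stream purely as keepalives, so consumers usually drop them before processing. A filter sketch (the helper name is hypothetical):

```typescript
// Minimal shape shared by all streamed events.
interface StreamEvent { message_type?: string }

// Drop keepalive pings, keeping every substantive event.
function withoutPings<T extends StreamEvent>(events: T[]): T[] {
  return events.filter((e) => e.message_type !== 'ping');
}

const kept = withoutPings([
  { message_type: 'ping' },
  { message_type: 'assistant_message' },
]);
```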
LettaStopReason { stop_reason, message_type }

The stop reason from Letta indicating why agent loop stopped execution.

stop_reason: StopReasonType

The reason why execution stopped.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
message_type?: "stop_reason"

The type of the message.

Accepts one of the following:
"stop_reason"
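A stream ends with a `LettaStopReason` event, and consumers commonly branch on `stop_reason` to decide what to do next. A sketch (which reasons warrant a retry is an application choice, not something the API prescribes; treating transient LLM failures as retryable is one plausible policy):

```typescript
// The StopReasonType values documented above.
type StopReasonType =
  | 'end_turn' | 'error' | 'llm_api_error' | 'invalid_llm_response'
  | 'invalid_tool_call' | 'max_steps' | 'no_tool_call' | 'tool_rule'
  | 'cancelled' | 'requires_approval';

// One possible policy: retry only on transient LLM-side failures.
function isRetryable(reason: StopReasonType): boolean {
  return reason === 'llm_api_error' || reason === 'invalid_llm_response';
}
```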
LettaUsageStatistics { completion_tokens, message_type, prompt_tokens, 3 more }

Usage statistics for the agent interaction.

Attributes: completion_tokens (int): The number of tokens generated by the agent. prompt_tokens (int): The number of tokens in the prompt. total_tokens (int): The total number of tokens processed by the agent. step_count (int): The number of steps taken by the agent.

completion_tokens?: number

The number of tokens generated by the agent.

message_type?: "usage_statistics"
Accepts one of the following:
"usage_statistics"
prompt_tokens?: number

The number of tokens in the prompt.

run_ids?: Array<string> | null

The background task run IDs associated with the agent interaction

step_count?: number

The number of steps taken by the agent.

total_tokens?: number

The total number of tokens processed by the agent.
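`LettaUsageStatistics` is emitted at the end of the interaction, so logging it is a natural last step of a stream-consuming loop. A formatting sketch (all fields are optional in the schema, hence the fallbacks; the helper name and sample numbers are illustrative):

```typescript
// Usage fields as documented above; every field is optional.
interface UsageStats {
  completion_tokens?: number;
  prompt_tokens?: number;
  total_tokens?: number;
  step_count?: number;
}

// Render a one-line summary, defaulting missing fields to zero.
function describeUsage(u: UsageStats): string {
  return `${u.step_count ?? 0} step(s), ${u.total_tokens ?? 0} tokens ` +
    `(${u.prompt_tokens ?? 0} prompt + ${u.completion_tokens ?? 0} completion)`;
}
```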

Send Message Streaming
import Letta from '@letta-ai/letta-client';

const client = new Letta({
  apiKey: 'My API Key',
});

const stream = await client.agents.messages.stream(
  'agent-123e4567-e89b-42d3-8456-426614174000',
  { messages: [{ role: 'user', content: 'Hello' }] },
);

for await (const event of stream) {
  console.log(event);
}
Returns Examples
{
  "id": "id",
  "content": "content",
  "date": "2019-12-27T18:11:19.117Z",
  "is_err": true,
  "message_type": "system_message",
  "name": "name",
  "otid": "otid",
  "run_id": "run_id",
  "sender_id": "sender_id",
  "seq_id": 0,
  "step_id": "step_id"
}