Send Group Message

POST /v1/groups/{group_id}/messages

Process a user message and return the group's response. This endpoint accepts a message from a user and processes it through the agents in the group based on the specified pattern.
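
For example, a minimal request that sends a single user message to the group (a sketch; the GROUP_ID and LETTA_API_KEY values are placeholders you supply) might look like:

curl https://api.letta.com/v1/groups/$GROUP_ID/messages \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{
      "messages": [
        { "role": "user", "content": "Hello everyone, what is the current plan?" }
      ]
    }'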

Path Parameters
group_id: string

The ID of the group in the format 'group-'

minLength: 42
maxLength: 42
Body Parameters
assistant_message_tool_kwarg: optional string (Deprecated)

The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

assistant_message_tool_name: optional string (Deprecated)

The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.

enable_thinking: optional string (Deprecated)

If set to True, enables reasoning before responses or tool calls from the agent.

include_return_message_types: optional array of MessageType

Only return specified message types in the response. If None (default) returns all messages.

Accepts one of the following:
"system_message"
"user_message"
"assistant_message"
"reasoning_message"
"hidden_reasoning_message"
"tool_call_message"
"tool_return_message"
"approval_request_message"
"approval_response_message"
input: optional string or array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more

Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}] (see the body sketches after the accepted content types below).

Accepts one of the following:
UnionMember0 = string
UnionMember1 = array of TextContent { text, signature, type } or ImageContent { source, type } or ToolCallContent { id, input, name, 2 more } or 5 more
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Accepts one of the following:
"url"
Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Accepts one of the following:
"base64"
Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

Accepts one of the following:
"letta"
type: optional "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent = object { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: map[unknown]

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature: optional string

Stores a unique identifier for any reasoning associated with this tool call.

type: optional "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type: optional "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent = object { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature: optional string

A unique identifier for this reasoning step.

type: optional "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent = object { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type: optional "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent = object { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature: optional string

A unique identifier for this reasoning step.

type: optional "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
SummarizedReasoning = object { id, summary, encrypted_content, type }

The style of reasoning content returned by the OpenAI Responses API

id: string

The unique identifier for this reasoning step.

summary: array of object { index, text }

Summaries of the reasoning content.

index: number

The index of the summary part.

text: string

The text of the summary part.

encrypted_content: optional string

The encrypted reasoning content.

type: optional "summarized_reasoning"

Indicates this is a summarized reasoning step.

Accepts one of the following:
"summarized_reasoning"
max_steps: optional number

Maximum number of steps the agent should take to process the request.

messages: optional array of MessageCreate { content, role, batch_item_id, 5 more } or ApprovalCreate { approval_request_id, approvals, approve, 3 more }

The messages to be sent to the agent.

Accepts one of the following:
MessageCreate = object { content, role, batch_item_id, 5 more }

Request to create a message

content: array of LettaMessageContentUnion or string

The content of the message.

Accepts one of the following:
UnionMember0 = array of LettaMessageContentUnion
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Accepts one of the following:
"url"
Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Accepts one of the following:
"base64"
Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

Accepts one of the following:
"letta"
type: optional "image"

The type of the message.

Accepts one of the following:
"image"
ToolCallContent = object { id, input, name, 2 more }
id: string

A unique identifier for this specific tool call instance.

input: map[unknown]

The parameters being passed to the tool, structured as a dictionary of parameter names to values.

name: string

The name of the tool being called.

signature: optional string

Stores a unique identifier for any reasoning associated with this tool call.

type: optional "tool_call"

Indicates this content represents a tool call event.

Accepts one of the following:
"tool_call"
ToolReturnContent = object { content, is_error, tool_call_id, type }
content: string

The content returned by the tool execution.

is_error: boolean

Indicates whether the tool execution resulted in an error.

tool_call_id: string

References the ID of the ToolCallContent that initiated this tool call.

type: optional "tool_return"

Indicates this content represents a tool return event.

Accepts one of the following:
"tool_return"
ReasoningContent = object { is_native, reasoning, signature, type }

Sent via the Anthropic Messages API

is_native: boolean

Whether the reasoning content was generated by a reasoner model that processed this step.

reasoning: string

The intermediate reasoning or thought process content.

signature: optional string

A unique identifier for this reasoning step.

type: optional "reasoning"

Indicates this is a reasoning/intermediate step.

Accepts one of the following:
"reasoning"
RedactedReasoningContent = object { data, type }

Sent via the Anthropic Messages API

data: string

The redacted or filtered intermediate reasoning content.

type: optional "redacted_reasoning"

Indicates this is a redacted thinking step.

Accepts one of the following:
"redacted_reasoning"
OmittedReasoningContent = object { signature, type }

A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)

signature: optional string

A unique identifier for this reasoning step.

type: optional "omitted_reasoning"

Indicates this is an omitted reasoning step.

Accepts one of the following:
"omitted_reasoning"
UnionMember1 = string
role: "user" or "system" or "assistant"

The role of the participant.

Accepts one of the following:
"user"
"system"
"assistant"
batch_item_id: optional string

The id of the LLMBatchItem that this message is associated with

group_id: optional string

The multi-agent group that the message was sent in

name: optional string

The name of the participant.

otid: optional string

The offline threading id associated with this message

sender_id: optional string

The id of the sender of the message, can be an identity id or agent id

type: optional "message"

The message type to be created.

Accepts one of the following:
"message"
ApprovalCreate = object { approval_request_id, approvals, approve, 3 more }

Input to approve or deny a tool call request

approval_request_id: optional string (Deprecated)

The message ID of the approval request

approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }

The list of approval responses

Accepts one of the following:
Approval = object { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason: optional string

An optional explanation for the provided approval status

type: optional "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr: optional array of string
stdout: optional array of string
type: optional "tool"

The message type to be created.

Accepts one of the following:
"tool"
approve: optional boolean (Deprecated)

Whether the tool has been approved

group_id: optional string

The multi-agent group that the message was sent in

reason: optional string (Deprecated)

An optional explanation for the provided approval status

type: optional "approval"

The message type to be created.

Accepts one of the following:
"approval"
use_assistant_message: optional boolean (Deprecated)

Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
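
Putting the body parameters together: if a previous response paused on an approval_request_message, the next request can answer it with an ApprovalCreate entry in messages. A sketch (the tool_call_id is a placeholder copied from the pending tool call, and max_steps is illustrative):

{
  "messages": [
    {
      "type": "approval",
      "approvals": [
        { "type": "approval", "tool_call_id": "call_123", "approve": true, "reason": "Looks safe to run" }
      ]
    }
  ],
  "max_steps": 10
}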

Returns
LettaResponse = object { messages, stop_reason, usage }

Response object from an agent interaction, consisting of the new messages generated by the agent and usage statistics. The type of the returned messages can be either Message or LettaMessage, depending on what was specified in the request.

Attributes:
messages (List[Union[Message, LettaMessage]]): The messages returned by the agent.
usage (LettaUsageStatistics): The usage statistics

messages: array of LettaMessageUnion

The messages returned by the agent.

Accepts one of the following:
SystemMessage = object { id, content, date, 8 more }

A message generated by the system. Never streamed back on a response, only used for cursor pagination.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (str): The message content sent by the system

id: string
content: string

The message content sent by the system

date: string
is_err: optional boolean
message_type: optional "system_message"

The type of the message.

Accepts one of the following:
"system_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
UserMessage = object { id, content, date, 8 more }

A message sent by the user. Never streamed back on a response, only used for cursor pagination.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (Union[str, List[LettaUserMessageContentUnion]]): The message content sent by the user (can be a string or an array of multi-modal content parts)

id: string
content: array of LettaUserMessageContentUnion or string

The message content sent by the user (can be a string or an array of multi-modal content parts)

Accepts one of the following:
UnionMember0 = array of LettaUserMessageContentUnion
Accepts one of the following:
TextContent = object { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

Accepts one of the following:
"text"
ImageContent = object { source, type }
source: object { url, type } or object { data, media_type, detail, type } or object { file_id, data, detail, 2 more }

The source of the image.

Accepts one of the following:
URL = object { url, type }
url: string

The URL of the image.

type: optional "url"

The source type for the image.

Accepts one of the following:
"url"
Base64 = object { data, media_type, detail, type }
data: string

The base64 encoded image data.

media_type: string

The media type for the image.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

type: optional "base64"

The source type for the image.

Accepts one of the following:
"base64"
Letta = object { file_id, data, detail, 2 more }
file_id: string

The unique identifier of the image file persisted in storage.

data: optional string

The base64 encoded image data.

detail: optional string

What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)

media_type: optional string

The media type for the image.

type: optional "letta"

The source type for the image.

Accepts one of the following:
"letta"
type: optional "image"

The type of the message.

Accepts one of the following:
"image"
UnionMember1 = string
date: string
is_err: optional boolean
message_type: optional "user_message"

The type of the message.

Accepts one of the following:
"user_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
ReasoningMessage = object { id, date, reasoning, 10 more }

Representation of an agent's internal reasoning.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
source (Literal["reasoner_model", "non_reasoner_model"]): Whether the reasoning content was generated natively by a reasoner model or derived via prompting
reasoning (str): The internal reasoning of the agent
signature (Optional[str]): The model-generated signature of the reasoning step

id: string
date: string
reasoning: string
is_err: optional boolean
message_type: optional "reasoning_message"

The type of the message.

Accepts one of the following:
"reasoning_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
signature: optional string
source: optional "reasoner_model" or "non_reasoner_model"
Accepts one of the following:
"reasoner_model"
"non_reasoner_model"
step_id: optional string
HiddenReasoningMessage = object { id, date, state, 9 more }

Representation of an agent's internal reasoning where reasoning content has been hidden from the response.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
state (Literal["redacted", "omitted"]): Whether the reasoning content was redacted by the provider or simply omitted by the API
hidden_reasoning (Optional[str]): The internal reasoning of the agent

id: string
date: string
state: "redacted" or "omitted"
Accepts one of the following:
"redacted"
"omitted"
hidden_reasoning: optional string
is_err: optional boolean
message_type: optional "hidden_reasoning_message"

The type of the message.

Accepts one of the following:
"hidden_reasoning_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
ToolCallMessage = object { id, date, tool_call, 9 more }

A message representing a request to call a tool (generated by the LLM to trigger tool execution).

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (Union[ToolCall, ToolCallDelta]): The tool call

id: string
date: string
tool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id } (Deprecated)
Accepts one of the following:
ToolCall = object { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta = object { arguments, name, tool_call_id }
arguments: optional string
name: optional string
tool_call_id: optional string
is_err: optional boolean
message_type: optional "tool_call_message"

The type of the message.

Accepts one of the following:
"tool_call_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }
Accepts one of the following:
UnionMember0 = array of ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta = object { arguments, name, tool_call_id }
arguments: optional string
name: optional string
tool_call_id: optional string
ToolReturnMessage = object { id, date, status, 13 more }

A message representing the return value of a tool call (generated by Letta executing the requested tool).

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_return (str): The return value of the tool (deprecated, use tool_returns)
status (Literal["success", "error"]): The status of the tool call (deprecated, use tool_returns)
tool_call_id (str): A unique identifier for the tool call that generated this message (deprecated, use tool_returns)
stdout (Optional[List(str)]): Captured stdout (e.g. prints, logs) from the tool invocation (deprecated, use tool_returns)
stderr (Optional[List(str)]): Captured stderr from the tool invocation (deprecated, use tool_returns)
tool_returns (Optional[List[ToolReturn]]): List of tool returns for multi-tool support

id: string
date: string
status: "success" or "error" (Deprecated)
Accepts one of the following:
"success"
"error"
tool_call_id: string (Deprecated)
tool_return: string (Deprecated)
is_err: optional boolean
message_type: optional "tool_return_message"

The type of the message.

Accepts one of the following:
"tool_return_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
stderr: optional array of string (Deprecated)
stdout: optional array of string (Deprecated)
step_id: optional string
tool_returns: optional array of ToolReturn { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr: optional array of string
stdout: optional array of string
type: optional "tool"

The message type to be created.

Accepts one of the following:
"tool"
AssistantMessage = object { id, content, date, 8 more }

A message sent by the LLM in response to user input. Used in the LLM context.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
content (Union[str, List[LettaAssistantMessageContentUnion]]): The message content sent by the agent (can be a string or an array of content parts)

id: string
content: array of LettaAssistantMessageContentUnion { text, signature, type } or string

The message content sent by the agent (can be a string or an array of content parts)

Accepts one of the following:
UnionMember0 = array of LettaAssistantMessageContentUnion { text, signature, type }
text: string

The text content of the message.

signature: optional string

Stores a unique identifier for any reasoning associated with this text content.

type: optional "text"

The type of the message.

Accepts one of the following:
"text"
UnionMember1 = string
date: string
is_err: optional boolean
message_type: optional "assistant_message"

The type of the message.

Accepts one of the following:
"assistant_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
ApprovalRequestMessage = object { id, date, tool_call, 9 more }

A message representing a request for approval to call a tool (generated by the LLM to trigger tool execution).

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
tool_call (ToolCall): The tool call

id: string
date: string
tool_call: ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id } (Deprecated)

The tool call that the LLM has requested to run

Accepts one of the following:
ToolCall = object { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta = object { arguments, name, tool_call_id }
arguments: optional string
name: optional string
tool_call_id: optional string
is_err: optional boolean
message_type: optional "approval_request_message"

The type of the message.

Accepts one of the following:
"approval_request_message"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
tool_calls: optional array of ToolCall { arguments, name, tool_call_id } or ToolCallDelta { arguments, name, tool_call_id }

The tool calls that the LLM has requested to run, which are pending approval

Accepts one of the following:
UnionMember0 = array of ToolCall { arguments, name, tool_call_id }
arguments: string
name: string
tool_call_id: string
ToolCallDelta = object { arguments, name, tool_call_id }
arguments: optional string
name: optional string
tool_call_id: optional string
ApprovalResponseMessage = object { id, date, approval_request_id, 11 more }

A message representing a response from the user indicating whether a tool has been approved to run.

Args:
id (str): The ID of the message
date (datetime): The date the message was created in ISO format
name (Optional[str]): The name of the sender of the message
approve (bool): Whether the tool has been approved
approval_request_id: The ID of the approval request
reason (Optional[str]): An optional explanation for the provided approval status

id: string
date: string
approval_request_id: optional string (Deprecated)

The message ID of the approval request

approvals: optional array of object { approve, tool_call_id, reason, type } or ToolReturn { status, tool_call_id, tool_return, 3 more }

The list of approval responses

Accepts one of the following:
Approval = object { approve, tool_call_id, reason, type }
approve: boolean

Whether the tool has been approved

tool_call_id: string

The ID of the tool call that corresponds to this approval

reason: optional string

An optional explanation for the provided approval status

type: optional "approval"

The message type to be created.

Accepts one of the following:
"approval"
ToolReturn = object { status, tool_call_id, tool_return, 3 more }
status: "success" or "error"
Accepts one of the following:
"success"
"error"
tool_call_id: string
tool_return: string
stderr: optional array of string
stdout: optional array of string
type: optional "tool"

The message type to be created.

Accepts one of the following:
"tool"
approve: optional boolean (Deprecated)

Whether the tool has been approved

is_err: optional boolean
message_type: optional "approval_response_message"

The type of the message.

Accepts one of the following:
"approval_response_message"
name: optional string
otid: optional string
reason: optional string (Deprecated)

An optional explanation for the provided approval status

run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
SummaryMessage = object { id, date, summary, 8 more }

A message representing a summary of the conversation. Sent to the LLM as a user or system message depending on the provider.

id: string
date: string
summary: string
is_err: optional boolean
message_type: optional "summary"
Accepts one of the following:
"summary"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
EventMessage = object { id, date, event_data, 9 more }

A message notifying the developer that an event has occurred (e.g. a compaction). Events are NOT part of the context window.

id: string
date: string
event_data: map[unknown]
event_type: "compaction"
Accepts one of the following:
"compaction"
is_err: optional boolean
message_type: optional "event"
Accepts one of the following:
"event"
name: optional string
otid: optional string
run_id: optional string
sender_id: optional string
seq_id: optional number
step_id: optional string
stop_reason: object { stop_reason, message_type }

The stop reason from Letta indicating why the agent loop stopped execution.

stop_reason: StopReasonType

The reason why execution stopped.

Accepts one of the following:
"end_turn"
"error"
"llm_api_error"
"invalid_llm_response"
"invalid_tool_call"
"max_steps"
"no_tool_call"
"tool_rule"
"cancelled"
"requires_approval"
message_type: optional "stop_reason"

The type of the message.

Accepts one of the following:
"stop_reason"
usage: object { completion_tokens, message_type, prompt_tokens, 3 more }

The usage statistics of the agent.

completion_tokens: optional number

The number of tokens generated by the agent.

message_type: optional "usage_statistics"
Accepts one of the following:
"usage_statistics"
prompt_tokens: optional number

The number of tokens in the prompt.

run_ids: optional array of string

The background task run IDs associated with the agent interaction

step_count: optional number

The number of steps taken by the agent.

total_tokens: optional number

The total number of tokens processed by the agent.
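
If the group pauses to ask for a tool approval, execution stops with the requires_approval stop reason and the response typically contains an approval_request_message describing the pending tool call; a follow-up request carrying an ApprovalCreate message (see the approval sketch under Body Parameters) can then resolve it. A fragment of the relevant response field only:

"stop_reason": {
  "stop_reason": "requires_approval",
  "message_type": "stop_reason"
}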

Send Group Message
curl https://api.letta.com/v1/groups/$GROUP_ID/messages \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $LETTA_API_KEY" \
    -d '{}'
{
  "messages": [
    {
      "id": "id",
      "content": "content",
      "date": "2019-12-27T18:11:19.117Z",
      "is_err": true,
      "message_type": "system_message",
      "name": "name",
      "otid": "otid",
      "run_id": "run_id",
      "sender_id": "sender_id",
      "seq_id": 0,
      "step_id": "step_id"
    }
  ],
  "stop_reason": {
    "stop_reason": "end_turn",
    "message_type": "stop_reason"
  },
  "usage": {
    "completion_tokens": 0,
    "message_type": "usage_statistics",
    "prompt_tokens": 0,
    "run_ids": [
      "string"
    ],
    "step_count": 0,
    "total_tokens": 0
  }
}