Send Group Message Streaming
Process a user message and return the group's responses. This endpoint accepts a message from a user and processes it through the agents in the group according to the specified pattern. The steps of the response are always streamed; individual tokens are streamed as well if 'stream_tokens' is set to True.
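As a quick orientation, here is a minimal sketch of calling the endpoint from the Python SDK and consuming the stream. The group ID and message text are placeholders, and passing stream_tokens as a keyword argument (and plain dicts for messages) is an assumption based on the parameter list below.

from letta_client import Letta

client = Letta(api_key="My API Key")

stream = client.groups.messages.stream(
    group_id="group-123e4567-e89b-42d3-8456-426614174000",  # placeholder group ID
    messages=[{"role": "user", "content": "Summarize today's standup."}],
    stream_tokens=True,  # assumed keyword; also stream individual tokens, not just per-step events
)

for chunk in stream:
    # Each chunk is one Server-Sent Event emitted as the group's agents take their steps.
    print(chunk)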
Parameters
group_id: str
The ID of the group in the format 'group-<uuid>'.
assistant_message_tool_kwarg: Optional[str] (Deprecated)
The name of the message argument in the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
assistant_message_tool_name: Optional[str] (Deprecated)
The name of the designated message tool. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Whether to process the request in the background (only used when streaming=true).
enable_thinking: Optional[str] (Deprecated)
If set to True, enables reasoning before responses or tool calls from the agent.
Whether to include periodic keepalive ping messages in the stream to prevent connection timeouts (only used when streaming=true).
Only return specified message types in the response. If None (default) returns all messages.
input: Optional[Union[str, Iterable[InputUnionMember1], None]]
Syntactic sugar for a single user message. Equivalent to messages=[{'role': 'user', 'content': input}].
InputUnionMember1 = Union[TextContent, ImageContent, ToolCallContent, ToolReturnContent, ReasoningContent, RedactedReasoningContent, OmittedReasoningContent, InputUnionMember1SummarizedReasoningContent]
class TextContent: …
text: str
The text content of the message.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this text content.
type: Optional[Literal["text"]]
The type of the message.
class ImageContent: …
source: Source
The source of the image.
class SourceURLImage: …
url: str
The URL of the image.
type: Optional[Literal["url"]]
The source type for the image.
class SourceBase64Image: …
data: str
The base64 encoded image data.
media_type: str
The media type for the image.
detail: Optional[str]
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: Optional[Literal["base64"]]
The source type for the image.
class SourceLettaImage: …
file_id: str
The unique identifier of the image file persisted in storage.
data: Optional[str]
The base64 encoded image data.
detail: Optional[str]
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: Optional[str]
The media type for the image.
type: Optional[Literal["letta"]]
The source type for the image.
type: Optional[Literal["image"]]
The type of the message.
class ToolCallContent: …
id: str
A unique identifier for this specific tool call instance.
input: Dict[str, object]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: str
The name of the tool being called.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this tool call.
type: Optional[Literal["tool_call"]]
Indicates this content represents a tool call event.
class ToolReturnContent: …
content: str
The content returned by the tool execution.
is_error: bool
Indicates whether the tool execution resulted in an error.
tool_call_id: str
References the ID of the ToolCallContent that initiated this tool call.
type: Optional[Literal["tool_return"]]
Indicates this content represents a tool return event.
class ReasoningContent: …
Sent via the Anthropic Messages API
is_native: bool
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: str
The intermediate reasoning or thought process content.
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["reasoning"]]
Indicates this is a reasoning/intermediate step.
class RedactedReasoningContent: …
Sent via the Anthropic Messages API
data: str
The redacted or filtered intermediate reasoning content.
type: Optional[Literal["redacted_reasoning"]]
Indicates this is a redacted thinking step.
class OmittedReasoningContent: …
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["omitted_reasoning"]]
Indicates this is an omitted reasoning step.
class InputUnionMember1SummarizedReasoningContent: …
The style of reasoning content returned by the OpenAI Responses API
id: str
The unique identifier for this reasoning step.
summary: Iterable[InputUnionMember1SummarizedReasoningContentSummary]
Summaries of the reasoning content.
class InputUnionMember1SummarizedReasoningContentSummary: …
index: int
The index of the summary part.
text: str
The text of the summary part.
encrypted_content: Optional[str]
The encrypted reasoning content.
type: Optional[Literal["summarized_reasoning"]]
Indicates this is a summarized reasoning step.
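To illustrate the input union documented above, a hedged sketch of its two equivalent forms: a bare string, or a list of content parts such as text and image dicts. The URL and group ID are placeholders, and dict-shaped content parts are an assumption about the generated client.

from letta_client import Letta

client = Letta(api_key="My API Key")

# Form 1: a plain string, sugar for a single user message.
stream = client.groups.messages.stream(
    group_id="group-123e4567-e89b-42d3-8456-426614174000",
    input="What changed since yesterday?",
)

# Form 2: a list of content parts (text plus an image fetched by URL).
stream = client.groups.messages.stream(
    group_id="group-123e4567-e89b-42d3-8456-426614174000",
    input=[
        {"type": "text", "text": "Describe this diagram."},
        {"type": "image", "source": {"type": "url", "url": "https://example.com/diagram.png"}},
    ],
)

# Consume the returned stream as in the example at the end of this page.
for chunk in stream:
    print(chunk)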
max_steps: Optional[int]
Maximum number of steps the agent should take to process the request.
messages: Optional[Iterable[Message]]
The messages to be sent to the agent.
class MessageCreate: …
Request to create a message
content: Union[str, Iterable[…]]
The content of the message.
class TextContent: …
text: str
The text content of the message.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this text content.
type: Optional[Literal["text"]]
The type of the message.
class ImageContent: …
source: Source
The source of the image.
class SourceURLImage: …
url: str
The URL of the image.
type: Optional[Literal["url"]]
The source type for the image.
class SourceBase64Image: …
data: str
The base64 encoded image data.
media_type: str
The media type for the image.
detail: Optional[str]
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
type: Optional[Literal["base64"]]
The source type for the image.
class SourceLettaImage: …
file_id: str
The unique identifier of the image file persisted in storage.
data: Optional[str]
The base64 encoded image data.
detail: Optional[str]
What level of detail to use when processing and understanding the image (low, high, or auto to let the model decide)
media_type: Optional[str]
The media type for the image.
type: Optional[Literal["letta"]]
The source type for the image.
type: Optional[Literal["image"]]
The type of the message.
class ToolCallContent: …
id: str
A unique identifier for this specific tool call instance.
input: Dict[str, object]
The parameters being passed to the tool, structured as a dictionary of parameter names to values.
name: str
The name of the tool being called.
signature: Optional[str]
Stores a unique identifier for any reasoning associated with this tool call.
type: Optional[Literal["tool_call"]]
Indicates this content represents a tool call event.
class ToolReturnContent: …
content: str
The content returned by the tool execution.
is_error: bool
Indicates whether the tool execution resulted in an error.
tool_call_id: str
References the ID of the ToolCallContent that initiated this tool call.
type: Optional[Literal["tool_return"]]
Indicates this content represents a tool return event.
class ReasoningContent: …
Sent via the Anthropic Messages API
is_native: bool
Whether the reasoning content was generated by a reasoner model that processed this step.
reasoning: str
The intermediate reasoning or thought process content.
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["reasoning"]]
Indicates this is a reasoning/intermediate step.
class RedactedReasoningContent: …
Sent via the Anthropic Messages API
data: str
The redacted or filtered intermediate reasoning content.
type: Optional[Literal["redacted_reasoning"]]
Indicates this is a redacted thinking step.
class OmittedReasoningContent: …
A placeholder for reasoning content we know is present, but isn't returned by the provider (e.g. OpenAI GPT-5 on ChatCompletions)
signature: Optional[str]
A unique identifier for this reasoning step.
type: Optional[Literal["omitted_reasoning"]]
Indicates this is an omitted reasoning step.
role: Literal["user", "system", "assistant"]
The role of the participant.
batch_item_id: Optional[str]
The id of the LLMBatchItem that this message is associated with
group_id: Optional[str]
The multi-agent group that the message was sent in
name: Optional[str]
The name of the participant.
otid: Optional[str]
The offline threading id associated with this message
sender_id: Optional[str]
The id of the sender of the message, can be an identity id or agent id
type: Optional[Literal["message"]]
The message type to be created.
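Putting the MessageCreate fields above together, a hedged sketch of a messages payload; the participant name, otid, and step cap are placeholders, and dict-shaped message entries are an assumption about the generated client.

from letta_client import Letta

client = Letta(api_key="My API Key")

stream = client.groups.messages.stream(
    group_id="group-123e4567-e89b-42d3-8456-426614174000",
    messages=[
        {
            "type": "message",
            "role": "user",
            "name": "alice",  # optional participant name
            "content": [{"type": "text", "text": "Kick off the planning round."}],
            "otid": "otid-0001",  # placeholder offline threading id
        }
    ],
    max_steps=10,  # cap the number of agent steps for this request
)

for chunk in stream:
    print(chunk)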
class ApprovalCreate: …
Input to approve or deny a tool call request
approval_request_id: Optional[str] (Deprecated)
The message ID of the approval request
approvals: Optional[List[Approval]]
The list of approval responses
class ApprovalApprovalReturn: …
approve: bool
Whether the tool has been approved
tool_call_id: str
The ID of the tool call that corresponds to this approval
reason: Optional[str]
An optional explanation for the provided approval status
type: Optional[Literal["approval"]]
The message type to be created.
class ToolReturn: …
status: Literal["success", "error"]
type: Optional[Literal["tool"]]
The message type to be created.
approve: Optional[bool] (Deprecated)
Whether the tool has been approved
group_id: Optional[str]
The multi-agent group that the message was sent in
reason: Optional[str] (Deprecated)
An optional explanation for the provided approval status
type: Optional[Literal["approval"]]
The message type to be created.
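When an earlier step produced a tool-call approval request, the next request can answer it with an ApprovalCreate-shaped entry. A minimal sketch using the fields above; the tool_call_id is a placeholder for the ID carried by the approval request, and the dict payload is the same assumption as in the other examples.

from letta_client import Letta

client = Letta(api_key="My API Key")

stream = client.groups.messages.stream(
    group_id="group-123e4567-e89b-42d3-8456-426614174000",
    messages=[
        {
            "type": "approval",
            "approvals": [
                {
                    "approve": True,
                    "tool_call_id": "call_abc123",  # ID from the pending tool-call approval request
                    "reason": "Read-only operation, safe to run.",
                }
            ],
        }
    ],
)

for chunk in stream:
    print(chunk)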
stream_tokens: Optional[bool]
Flag to determine if individual tokens should be streamed, rather than streaming per step (only used when streaming=true).
streaming: Optional[bool]
If True, returns a streaming response (Server-Sent Events). If False (default), returns a complete response.
Whether the server should parse specific tool call arguments (default send_message) as AssistantMessage objects. Still supported for legacy agent types, but deprecated for letta_v1_agent onward.
Returns
Send Group Message Streaming
from letta_client import Letta

client = Letta(
    api_key="My API Key",
)

response = client.groups.messages.stream(
    group_id="group-123e4567-e89b-42d3-8456-426614174000",
    messages=[{"role": "user", "content": "Hello, group!"}],  # placeholder message
)

# The response is a stream of Server-Sent Events; iterate to consume it.
for chunk in response:
    print(chunk)