# JSON Mode & Structured Output
Letta provides two ways to get structured JSON output from agents: structured generation through tools (recommended) and the `response_format` parameter.
## Quick Comparison

## Structured Generation through Tools (Recommended)

Create a tool that defines your desired response format. The tool's arguments become your structured data, which you can extract from the tool call.
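The idea can be illustrated without the Letta SDK: a Python function signature with type hints already implies a JSON schema for its arguments, and that schema is what the model fills in when it calls the tool. A minimal sketch of the mapping (the `args_schema` helper and `PY_TO_JSON` table below are illustrative, not part of Letta):

```python
import inspect

# Illustrative subset of the Python-annotation -> JSON Schema type mapping
PY_TO_JSON = {int: "integer", str: "string", float: "number", bool: "boolean"}

def args_schema(func):
    """Derive a JSON-schema-like dict from a function's typed parameters."""
    params = inspect.signature(func).parameters
    return {
        "type": "object",
        "properties": {
            name: {"type": PY_TO_JSON[p.annotation]} for name, p in params.items()
        },
        "required": list(params),
    }

def generate_rank(rank: int, reason: str):
    """Generate a ranking with explanation."""

schema = args_schema(generate_rank)
print(schema["required"])  # ['rank', 'reason']
```

When the agent calls `generate_rank`, the provider is constrained to emit arguments matching this implied schema, which is why tool calls double as a structured-output channel.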
## Creating a Structured Generation Tool

```typescript
import { LettaClient } from "@letta-ai/letta-client";

// Create client connected to Letta Cloud
const client = new LettaClient({ token: process.env.LETTA_API_KEY });

// First create the tool
const toolCode = `def generate_rank(rank: int, reason: str):
    """Generate a ranking with explanation.

    Args:
        rank (int): The numerical rank from 1-10.
        reason (str): The reasoning behind the rank.
    """
    print("Rank generated")
    return`;

const tool = await client.tools.create({
  sourceCode: toolCode,
  sourceType: "python",
});

// Create agent with the structured generation tool
const agentState = await client.agents.create({
  model: "openai/gpt-4o-mini",
  memoryBlocks: [
    {
      label: "human",
      value:
        "The human's name is Chad. They are a food enthusiast who enjoys trying different cuisines.",
    },
    {
      label: "persona",
      value:
        "I am a helpful food critic assistant. I provide detailed rankings and reviews of different foods and restaurants.",
    },
  ],
  toolIds: [tool.id],
});
```

```python
import os

from letta_client import Letta

# Create client connected to Letta Cloud
client = Letta(token=os.getenv("LETTA_API_KEY"))

def generate_rank(rank: int, reason: str):
    """Generate a ranking with explanation.

    Args:
        rank (int): The numerical rank from 1-10.
        reason (str): The reasoning behind the rank.
    """
    print("Rank generated")
    return

# Create the tool
tool = client.tools.create(func=generate_rank)

# Create agent with the structured generation tool
agent_state = client.agents.create(
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    memory_blocks=[
        {
            "label": "human",
            "value": "The human's name is Chad. They are a food enthusiast who enjoys trying different cuisines.",
        },
        {
            "label": "persona",
            "value": "I am a helpful food critic assistant. I provide detailed rankings and reviews of different foods and restaurants.",
        },
    ],
    tool_ids=[tool.id],
)
```

## Using the Structured Generation Tool
```typescript
// Send message and instruct agent to use the tool
const response = await client.agents.messages.create(agentState.id, {
  messages: [
    {
      role: "user",
      content:
        "How do you rank sushi as a food? Please use the generate_rank tool to provide your response.",
    },
  ],
});

// Extract structured data from tool call
for (const message of response.messages) {
  if (message.messageType === "tool_call_message") {
    const args = JSON.parse(message.toolCall.arguments);
    console.log(`Rank: ${args.rank}`);
    console.log(`Reason: ${args.reason}`);
  }
}

// Example output:
// Rank: 8
// Reason: Sushi is a highly regarded cuisine known for its fresh ingredients...
```

```python
import json

# Send message and instruct agent to use the tool
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        {
            "role": "user",
            "content": "How do you rank sushi as a food? Please use the generate_rank tool to provide your response.",
        }
    ],
)

# Extract structured data from tool call
for message in response.messages:
    if message.message_type == "tool_call_message":
        args = json.loads(message.tool_call.arguments)
        rank = args["rank"]
        reason = args["reason"]
        print(f"Rank: {rank}")
        print(f"Reason: {reason}")

# Example output:
# Rank: 8
# Reason: Sushi is a highly regarded cuisine known for its fresh ingredients...
```

The agent will call the tool, and you can extract the structured arguments:
```json
{
  "rank": 8,
  "reason": "Sushi is a highly regarded cuisine known for its fresh ingredients, artistic presentation, and cultural significance."
}
```

## Using response_format for Provider-Native JSON Mode

The `response_format` parameter enables structured output (JSON mode) from LLM providers that support it. This approach is fundamentally different from tools: `response_format` becomes a persistent part of the agent's state, so once set, all future responses from that agent follow the format until it is explicitly changed.

Under the hood, `response_format` constrains the agent's assistant messages to follow the specified schema. It does not affect tools, which continue to work normally with their original schemas.
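One practical consequence: even under JSON mode, the assistant message still arrives as a string, so you parse it client-side. A defensive sketch (the `parse_json_message` helper below is illustrative, not part of the Letta SDK):

```python
import json

def parse_json_message(content: str) -> dict:
    """Parse an assistant message produced under JSON mode.

    Raises ValueError if the content is not a JSON object, which can happen
    when the underlying provider does not actually support JSON mode.
    """
    try:
        data = json.loads(content)
    except json.JSONDecodeError as exc:
        raise ValueError(f"assistant message is not valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("expected a top-level JSON object")
    return data

reply = parse_json_message('{"rank": 8, "reason": "Fresh ingredients."}')
print(reply["rank"])  # 8
```

Failing loudly here is usually better than letting a malformed reply propagate into downstream code.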
## Basic JSON Mode

```typescript
import { LettaClient } from "@letta-ai/letta-client";

// Create client (Letta Cloud)
const client = new LettaClient({ token: "LETTA_API_KEY" });

// Create agent with basic JSON mode (OpenAI/compatible providers only)
const agentState = await client.agents.create({
  model: "openai/gpt-4o-mini",
  memoryBlocks: [
    {
      label: "human",
      value:
        "The human's name is Chad. They work as a data analyst and prefer clear, organized information.",
    },
    {
      label: "persona",
      value:
        "I am a helpful assistant who provides clear and well-organized responses.",
    },
  ],
  responseFormat: { type: "json_object" },
});

// Send message expecting JSON response
const response = await client.agents.messages.create(agentState.id, {
  messages: [
    {
      role: "user",
      content:
        "How do you rank sushi as a food? Please respond in JSON format with rank and reason fields.",
    },
  ],
});

for (const message of response.messages) {
  console.log(message);
}
```

```python
from letta_client import Letta

# Create client (Letta Cloud)
client = Letta(token="LETTA_API_KEY")

# Create agent with basic JSON mode (OpenAI/compatible providers only)
agent_state = client.agents.create(
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    memory_blocks=[
        {
            "label": "human",
            "value": "The human's name is Chad. They work as a data analyst and prefer clear, organized information.",
        },
        {
            "label": "persona",
            "value": "I am a helpful assistant who provides clear and well-organized responses.",
        },
    ],
    response_format={"type": "json_object"},
)

# Send message expecting JSON response
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        {
            "role": "user",
            "content": "How do you rank sushi as a food? Please respond in JSON format with rank and reason fields.",
        }
    ],
)

for message in response.messages:
    print(message)
```

## Advanced JSON Schema Mode

For more precise control, you can use OpenAI's `json_schema` mode with strict validation:
```typescript
import { LettaClient } from "@letta-ai/letta-client";

const client = new LettaClient({ token: "LETTA_API_KEY" });

// Define structured schema (from OpenAI structured outputs guide)
const responseFormat = {
  type: "json_schema",
  jsonSchema: {
    name: "food_ranking",
    schema: {
      type: "object",
      properties: {
        rank: { type: "integer", minimum: 1, maximum: 10 },
        reason: { type: "string" },
        categories: {
          type: "array",
          items: {
            type: "object",
            properties: {
              name: { type: "string" },
              score: { type: "integer" },
            },
            required: ["name", "score"],
            additionalProperties: false,
          },
        },
      },
      required: ["rank", "reason", "categories"],
      additionalProperties: false,
    },
    strict: true,
  },
};

// Create agent
const agentState = await client.agents.create({
  model: "openai/gpt-4o-mini",
  memoryBlocks: [],
});

// Update agent with response format
const updatedAgent = await client.agents.update(agentState.id, {
  responseFormat,
});

// Send message
const response = await client.agents.messages.create(agentState.id, {
  messages: [
    {
      role: "user",
      content:
        "How do you rank sushi? Include categories for taste, presentation, and value.",
    },
  ],
});

for (const message of response.messages) {
  console.log(message);
}
```

```python
from letta_client import Letta

client = Letta(token="LETTA_API_KEY")

# Define structured schema (from OpenAI structured outputs guide)
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "food_ranking",
        "schema": {
            "type": "object",
            "properties": {
                "rank": {"type": "integer", "minimum": 1, "maximum": 10},
                "reason": {"type": "string"},
                "categories": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "score": {"type": "integer"},
                        },
                        "required": ["name", "score"],
                        "additionalProperties": False,
                    },
                },
            },
            "required": ["rank", "reason", "categories"],
            "additionalProperties": False,
        },
        "strict": True,
    },
}

# Create agent
agent_state = client.agents.create(
    model="openai/gpt-4o-mini",
    embedding="openai/text-embedding-3-small",
    memory_blocks=[],
)

# Update agent with response format
agent_state = client.agents.update(
    agent_id=agent_state.id,
    response_format=response_format,
)

# Send message
response = client.agents.messages.create(
    agent_id=agent_state.id,
    messages=[
        {
            "role": "user",
            "content": "How do you rank sushi? Include categories for taste, presentation, and value.",
        }
    ],
)

for message in response.messages:
    print(message)
```

With a strict JSON schema, the agent's response is validated against the schema:

```json
{
  "rank": 8,
  "reason": "Sushi is highly regarded for its fresh ingredients and artful presentation",
  "categories": [
    { "name": "taste", "score": 9 },
    { "name": "presentation", "score": 10 },
    { "name": "value", "score": 6 }
  ]
}
```

## Updating Agent Response Format
You can update an existing agent's response format at any time:
```typescript
// Update agent to use JSON mode (OpenAI/compatible only)
await client.agents.update(agentState.id, {
  responseFormat: { type: "json_object" },
});

// Or remove JSON mode
await client.agents.update(agentState.id, {
  responseFormat: null,
});
```

```python
# Update agent to use JSON mode (OpenAI/compatible only)
client.agents.update(
    agent_id=agent_state.id,
    response_format={"type": "json_object"},
)

# Or remove JSON mode
client.agents.update(
    agent_id=agent_state.id,
    response_format=None,
)
```
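Even with `strict: true`, it is sensible to validate responses client-side before acting on them, since the guarantee depends on the provider actually enforcing strict schemas. A stdlib-only sketch for the `food_ranking` schema above (the `validate_food_ranking` helper is hypothetical, not part of Letta):

```python
def validate_food_ranking(data: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload matches."""
    problems = []
    # rank: integer constrained to 1-10 by the schema
    if not isinstance(data.get("rank"), int) or not 1 <= data["rank"] <= 10:
        problems.append("rank must be an integer from 1-10")
    # reason: free-form string
    if not isinstance(data.get("reason"), str):
        problems.append("reason must be a string")
    # categories: array of {name: str, score: int} objects
    categories = data.get("categories")
    if not isinstance(categories, list):
        problems.append("categories must be an array")
    else:
        for i, cat in enumerate(categories):
            if (
                not isinstance(cat, dict)
                or not isinstance(cat.get("name"), str)
                or not isinstance(cat.get("score"), int)
            ):
                problems.append(f"categories[{i}] needs a string name and integer score")
    return problems

payload = {
    "rank": 8,
    "reason": "Fresh ingredients and artful presentation",
    "categories": [{"name": "taste", "score": 9}],
}
print(validate_food_ranking(payload))  # []
```

Collecting all problems, rather than raising on the first, makes it easier to log exactly how a provider's output diverged from the schema.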