List LLM Models

GET /v1/models/

List the available LLM models. This endpoint uses an asynchronous implementation for improved performance.

Returns the Model format, which extends LLMConfig with additional metadata fields. Legacy LLMConfig fields are marked as deprecated but remain available for backward compatibility.

Query Parameters
provider_category: optional array of ProviderCategory
Accepts one of the following:
"base"
"byok"
provider_name: optional string
provider_type: optional ProviderType
Accepts one of the following:
"anthropic"
"azure"
"bedrock"
"cerebras"
"deepseek"
"google_ai"
"google_vertex"
"groq"
"hugging-face"
"letta"
"lmstudio_openai"
"mistral"
"ollama"
"openai"
"together"
"vllm"
"xai"
Returns
context_window: number (Deprecated)

Deprecated: Use 'max_context_window' field instead. The context window size for the model.

max_context_window: number

The maximum context window for the model.

model: string (Deprecated)

Deprecated: Use 'name' field instead. LLM model name.

model_endpoint_type: string (Deprecated)

Deprecated: Use 'provider_type' field instead. The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"lmstudio-chatcompletions"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"bedrock"
"deepseek"
"xai"
name: string

The actual model name used by the provider.

provider_type: ProviderType

The type of the provider.

Accepts one of the following:
"anthropic"
"azure"
"bedrock"
"cerebras"
"deepseek"
"google_ai"
"google_vertex"
"groq"
"hugging-face"
"letta"
"lmstudio_openai"
"mistral"
"ollama"
"openai"
"together"
"vllm"
"xai"
compatibility_type: optional "gguf" or "mlx" (Deprecated)

Deprecated: The framework compatibility type for the model.

Accepts one of the following:
"gguf"
"mlx"
display_name: optional string

A human-friendly display name for the model.

enable_reasoner: optional boolean (Deprecated)

Deprecated: Whether or not the model should use extended thinking if it is a 'reasoning' style model.

frequency_penalty: optional number (Deprecated)

Deprecated: Positive values penalize new tokens based on their existing frequency in the text so far.

handle: optional string

The handle for this config, in the format provider/model-name.

max_reasoning_tokens: optional number (Deprecated)

Deprecated: Configurable thinking budget for extended thinking.

max_tokens: optional number (Deprecated)

Deprecated: The maximum number of tokens to generate.

model_endpoint: optional string (Deprecated)

Deprecated: The endpoint for the model.

model_type: optional "llm"

The type of model (llm or embedding).

Accepts one of the following:
"llm"
model_wrapper: optional string (Deprecated)

Deprecated: The wrapper for the model.

parallel_tool_calls: optional boolean (Deprecated)

Deprecated: If set to True, enables parallel tool calling.

provider_category: optional ProviderCategory (Deprecated)

Deprecated: The provider category for the model.

Accepts one of the following:
"base"
"byok"
provider_name: optional string

The provider name for the model.

put_inner_thoughts_in_kwargs: optional boolean (Deprecated)

Deprecated: Puts 'inner_thoughts' as a kwarg in the function call.

reasoning_effort: optional "minimal" or "low" or "medium" or "high" (Deprecated)

Deprecated: The reasoning effort to use when generating text with reasoning models.

Accepts one of the following:
"minimal"
"low"
"medium"
"high"
temperature: optional number (Deprecated)

Deprecated: The temperature to use when generating text with the model.

tier: optional string (Deprecated)

Deprecated: The cost tier for the model (cloud only).

verbosity: optional "low" or "medium" or "high" (Deprecated)

Deprecated: Soft control for how verbose model output should be.

Accepts one of the following:
"low"
"medium"
"high"
List LLM Models
curl https://api.letta.com/v1/models/ \
    -H "Authorization: Bearer $LETTA_API_KEY"
[
  {
    "context_window": 0,
    "max_context_window": 0,
    "model": "model",
    "model_endpoint_type": "openai",
    "name": "name",
    "provider_type": "anthropic",
    "compatibility_type": "gguf",
    "display_name": "display_name",
    "enable_reasoner": true,
    "frequency_penalty": 0,
    "handle": "handle",
    "max_reasoning_tokens": 0,
    "max_tokens": 0,
    "model_endpoint": "model_endpoint",
    "model_type": "llm",
    "model_wrapper": "model_wrapper",
    "parallel_tool_calls": true,
    "provider_category": "base",
    "provider_name": "provider_name",
    "put_inner_thoughts_in_kwargs": true,
    "reasoning_effort": "minimal",
    "temperature": 0,
    "tier": "tier",
    "verbosity": "low"
  }
]
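Since each entry carries both the new fields and their deprecated LLMConfig counterparts, client code can normalize responses onto the non-deprecated names. The sketch below is one way to do that; the helper names are illustrative, and the new/legacy field pairings follow the deprecation notes above (`name`/`model`, `max_context_window`/`context_window`, `provider_type`/`model_endpoint_type`).

```python
def pick(model: dict, new_field: str, legacy_field: str):
    """Return the new field when present, otherwise the deprecated legacy one."""
    value = model.get(new_field)
    return value if value is not None else model.get(legacy_field)

def normalize_model(model: dict) -> dict:
    """Map one response entry onto the non-deprecated field names."""
    return {
        "name": pick(model, "name", "model"),
        "max_context_window": pick(model, "max_context_window", "context_window"),
        "provider_type": pick(model, "provider_type", "model_endpoint_type"),
        "handle": model.get("handle"),
    }
```

The explicit `is not None` check (rather than `or`) matters here: a legitimate value of `0` in a new field should not fall through to the legacy field.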