xAI (Grok)

To enable the xAI provider, set your key as an environment variable:

```bash
export XAI_API_KEY="..."
```
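The server reads the key from its environment at startup, so before launching it you can sanity-check that the key is actually visible to the process. A minimal sketch:

```python
import os

# The Letta server picks up XAI_API_KEY from its environment.
key = os.environ.get("XAI_API_KEY")
if key:
    print("XAI_API_KEY is set")
else:
    print("XAI_API_KEY is missing; export it before starting the Letta server")
```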

To enable xAI models when running the Letta server with Docker, pass your `XAI_API_KEY` into the container as an environment variable:

```bash
# replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e XAI_API_KEY="your_xai_api_key" \
  letta/letta:latest
```

See the self-hosting guide for more information on running Letta with Docker.

When creating agents on your self-hosted server, you must specify both the LLM and embedding models to use. You can additionally specify a context window limit, which must be less than or equal to the model's maximum context window.

```python
from letta_client import Letta

# Connect to your self-hosted server
client = Letta(base_url="http://localhost:8283")

agent = client.agents.create(
    model="xai/grok-2-1212",
    # An embedding model is required for self-hosted servers
    embedding="openai/text-embedding-3-small",
    # Optional: cap the context window (must not exceed the model's maximum)
    context_window_limit=30000,
)
```

xAI (Grok) models have very large context windows; filling the full window on every request drives up both cost and latency. We recommend setting a lower `context_window_limit` when using xAI (Grok) models.
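As a rough illustration of why the cap matters: input-token cost (and prefill latency) scales with the tokens actually sent per request, so `context_window_limit` bounds the worst-case spend per call. The window sizes and per-token price below are hypothetical placeholders, not xAI's actual pricing:

```python
def max_input_cost(context_window_limit: int, price_per_million_tokens: float) -> float:
    """Upper bound on input-token cost for a single request, in dollars."""
    return context_window_limit / 1_000_000 * price_per_million_tokens

# Hypothetical $5 per 1M input tokens (placeholder, not real xAI pricing):
uncapped = max_input_cost(131_072, 5.0)  # a full ~128K-token window
capped = max_input_cost(30_000, 5.0)     # capped at 30K as in the example above
print(f"uncapped: ${uncapped:.4f}, capped: ${capped:.4f}")
```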