# xAI (Grok)
## Enabling xAI (Grok) models

To enable the xAI provider, set your key as an environment variable:
```shell
export XAI_API_KEY="..."
```
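Before starting the server, you can confirm the variable is actually visible to your session with a small Python check (this snippet is illustrative only, not part of the Letta SDK):

```python
import os

# Check whether the key the server will read is present in the environment.
api_key = os.environ.get("XAI_API_KEY")
if api_key:
    print("XAI_API_KEY is set; the xAI provider can be enabled")
else:
    print("XAI_API_KEY is not set; the xAI provider will stay disabled")
```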
## Enabling xAI with Docker

To enable xAI models when running the Letta server with Docker, set your `XAI_API_KEY` as an environment variable:
```shell
# replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e XAI_API_KEY="your_xai_api_key" \
  letta/letta:latest
```

See the self-hosting guide for more information on running Letta with Docker.
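If you prefer Docker Compose, the same container can be described declaratively. This is a sketch that mirrors the flags in the `docker run` command above (the service name `letta` is an arbitrary choice):

```yaml
# docker-compose.yml sketch; equivalent to the `docker run` flags above
services:
  letta:
    image: letta/letta:latest
    ports:
      - "8283:8283"
    volumes:
      # replace with wherever you want to store your agent data
      - ~/.letta/.persist/pgdata:/var/lib/postgresql/data
    environment:
      # forwards the key from your shell into the container
      - XAI_API_KEY=${XAI_API_KEY}
```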
## Specifying agent models

When creating agents on your self-hosted server, you must specify both the LLM and embedding models to use. You can additionally specify a context window limit, which must be less than or equal to the model's maximum context size.
```python
from letta_client import Letta

# Connect to your self-hosted server
client = Letta(base_url="http://localhost:8283")

agent = client.agents.create(
    model="xai/grok-2-1212",
    # An embedding model is required for self-hosted servers
    embedding="openai/text-embedding-3-small",
    # Optional: cap the context window
    context_window_limit=30000,
)
```

xAI (Grok) models have very large context windows, which can be expensive and high-latency to use in full. We recommend setting a lower `context_window_limit` when using xAI (Grok) models.
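The constraint described above can be sketched as a simple clamping rule: the effective window is the smaller of your requested limit and the model's maximum. This helper is hypothetical, not part of the Letta API, and the 131,072-token maximum is an assumption about Grok-2 rather than a figure from this page:

```python
# Hypothetical helper illustrating the context-window constraint.
GROK_2_MAX_CONTEXT = 131_072  # assumed maximum context window for Grok-2


def effective_context_window(requested: int, model_max: int = GROK_2_MAX_CONTEXT) -> int:
    """Clamp a requested context window to the model's maximum."""
    if requested <= 0:
        raise ValueError("context_window_limit must be positive")
    return min(requested, model_max)


# A conservative 30,000-token limit is honored as-is
print(effective_context_window(30_000))
# A request beyond the model maximum is clamped down
print(effective_context_window(200_000))
```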