LM Studio

  1. Download and install LM Studio, then download the model you want to test with
  2. Start the LM Studio server
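Once the server is running, you can confirm it is reachable before wiring it up to Letta. LM Studio's local server exposes an OpenAI-compatible API (on port 1234 by default), so querying /v1/models shows which models are available. This is a small sketch; the helper name and defaults here are illustrative, not part of any SDK:

```python
import json
import urllib.request

def list_lmstudio_models(base_url="http://localhost:1234"):
    """Return the IDs of models served by LM Studio, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=5) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except OSError:
        return None  # server not reachable -- make sure it is started in LM Studio
```

If this returns None, start the server from LM Studio's Developer tab (or its CLI) before launching the Letta container.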

To enable LM Studio models when running the Letta server with Docker, set the LMSTUDIO_BASE_URL environment variable.

macOS/Windows: Because LM Studio runs on the host network, use host.docker.internal instead of localhost to reach the LM Studio server from inside the container.

# replace `~/.letta/.persist/pgdata` with wherever you want to store your agent data
docker run \
-v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
-p 8283:8283 \
-e LMSTUDIO_BASE_URL="http://host.docker.internal:1234" \
letta/letta:latest

Linux: Use --network host and localhost:

docker run \
-v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
--network host \
-e LMSTUDIO_BASE_URL="http://localhost:1234" \
letta/letta:latest

See the self-hosting guide for more information on running Letta with Docker.
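After docker run, the Letta server can take a few seconds to initialize before it accepts connections on port 8283. In scripts, a small polling helper avoids racing that startup; the URL and timing values below are illustrative defaults, not Letta-specific settings:

```python
import time
import urllib.error
import urllib.request

def wait_for_server(url="http://localhost:8283", timeout=30.0, interval=1.0):
    """Poll until the server at `url` accepts connections, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=2)
            return True
        except urllib.error.HTTPError:
            return True   # got an HTTP response, so the server is up
        except OSError:
            time.sleep(interval)  # not accepting connections yet
    return False
```

Treating any HTTP response (even an error status) as "up" is deliberate: it means the port is open and the server is answering, which is all a startup check needs.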

The following models have been tested with Letta as of 7-11-2025 on LM Studio 0.3.18.

  • qwen3-30b-a3b
  • qwen3-14b-mlx
  • qwen3-8b-mlx
  • qwen2.5-32b-instruct
  • qwen2.5-14b-instruct-1m
  • qwen2.5-7b-instruct
  • meta-llama-3.1-8b-instruct

Some models recommended in LM Studio, such as mlx-community/ministral-8b-instruct-2410 and bartowski/ministral-8b-instruct-2410, may not work well with Letta because their default prompt templates are incompatible. Adjusting the templates can restore compatibility, but may degrade model performance.