List Archives

client.archives.list(query?: ArchiveListParams, options?: RequestOptions): ArrayPage<Archive>

GET /v1/archives/

Get a list of all archives for the current organization with optional filters and pagination.

Parameters
query?: ArchiveListParams
after?: string | null

Archive ID cursor for pagination. Returns archives that come after this archive ID in the specified sort order.

agent_id?: string | null

Return only archives attached to this agent ID.

before?: string | null

Archive ID cursor for pagination. Returns archives that come before this archive ID in the specified sort order.

limit?: number | null

Maximum number of archives to return.

name?: string | null

Filter by archive name (exact match).

order?: "asc" | "desc"

Sort order for archives by creation time: "asc" for oldest first, "desc" for newest first.

Accepts one of the following:
"asc"
"desc"
order_by?: "created_at"

Field to sort by

Accepts one of the following:
"created_at"
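Putting the parameters above together, a filtered query might look like the sketch below. The field names mirror this reference, but the `ArchiveListParams` type is written out locally here rather than imported from the SDK, and the agent ID is a placeholder.

```typescript
// Local sketch of the query-parameter shape documented above.
type ArchiveListParams = {
  after?: string | null;
  agent_id?: string | null;
  before?: string | null;
  limit?: number | null;
  name?: string | null;
  order?: 'asc' | 'desc';
  order_by?: 'created_at';
};

// One agent's archives, newest first, at most 10 per page.
// The agent ID below is a placeholder, not a real ID.
const query: ArchiveListParams = {
  agent_id: 'agent-00000000-0000-0000-0000-000000000000',
  limit: 10,
  order: 'desc',
  order_by: 'created_at',
};

console.log(query.order); // 'desc'
```

Pass an object like `query` as the first argument to `client.archives.list(query)`; all fields are optional, so an empty call lists every archive in the organization.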
Returns
Archive

Representation of an archive - a collection of archival passages that can be shared between agents.

id: string

The human-friendly ID of the Archive.

created_at: string

The creation date of the archive.

format: date-time
embedding_config: EmbeddingConfig

Embedding configuration for passages in this archive.

embedding_dim: number

The dimension of the embedding.

embedding_endpoint_type: "openai" | "anthropic" | "bedrock" | 16 more

The endpoint type for the model.

Accepts one of the following:
"openai"
"anthropic"
"bedrock"
"google_ai"
"google_vertex"
"azure"
"groq"
"ollama"
"webui"
"webui-legacy"
"lmstudio"
"lmstudio-legacy"
"llamacpp"
"koboldcpp"
"vllm"
"hugging-face"
"mistral"
"together"
"pinecone"
embedding_model: string

The model for the embedding.

azure_deployment?: string | null

The Azure deployment for the model.

azure_endpoint?: string | null

The Azure endpoint for the model.

azure_version?: string | null

The Azure version for the model.

batch_size?: number

The maximum batch size for processing embeddings.

embedding_chunk_size?: number | null

The chunk size of the embedding.

embedding_endpoint?: string | null

The endpoint for the model (None if local).

handle?: string | null

The handle for this config, in the format provider/model-name.

name: string

The name of the archive.

created_by_id?: string | null

The id of the user that made this object.

description?: string | null

A description of the archive.

last_updated_by_id?: string | null

The id of the user that made this object.

metadata?: Record<string, unknown> | null

Additional metadata.

updated_at?: string | null

The timestamp when the object was last updated.

format: date-time
vector_db_provider?: VectorDBProvider

The vector database provider used for this archive's passages.

Accepts one of the following:
"native"
"tpuf"
"pinecone"
List Archives
import Letta from '@letta-ai/letta-client';

const client = new Letta({
  apiKey: 'My API Key',
});

// Automatically fetches more pages as needed.
for await (const archive of client.archives.list()) {
  console.log(archive.id);
}
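The `after`/`before` cursor behavior described under Parameters can be illustrated with a small local sketch. This is plain TypeScript with no SDK; `pageAfter` and `ArchiveStub` are hypothetical names used only for this illustration, not part of the client.

```typescript
type ArchiveStub = { id: string; created_at: string };

// `after` semantics from this reference: return items that come
// after the given archive ID in the current sort order.
function pageAfter(
  sorted: ArchiveStub[],
  after: string | null,
  limit: number,
): ArchiveStub[] {
  const start =
    after === null ? 0 : sorted.findIndex((a) => a.id === after) + 1;
  return sorted.slice(start, start + limit);
}

// Three archives already sorted ascending by created_at.
const archives: ArchiveStub[] = [
  { id: 'archive-a', created_at: '2019-01-01T00:00:00Z' },
  { id: 'archive-b', created_at: '2019-02-01T00:00:00Z' },
  { id: 'archive-c', created_at: '2019-03-01T00:00:00Z' },
];

console.log(pageAfter(archives, 'archive-a', 2).map((a) => a.id)); // ['archive-b', 'archive-c']
```

The endpoint performs the same cursoring server-side: to page manually instead of using the async iterator, pass the last archive's `id` from one page as `after` on the next request.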
Returns Example
[
  {
    "id": "archive-123e4567-e89b-12d3-a456-426614174000",
    "created_at": "2019-12-27T18:11:19.117Z",
    "embedding_config": {
      "embedding_dim": 0,
      "embedding_endpoint_type": "openai",
      "embedding_model": "embedding_model",
      "azure_deployment": "azure_deployment",
      "azure_endpoint": "azure_endpoint",
      "azure_version": "azure_version",
      "batch_size": 0,
      "embedding_chunk_size": 0,
      "embedding_endpoint": "embedding_endpoint",
      "handle": "handle"
    },
    "name": "name",
    "created_by_id": "created_by_id",
    "description": "description",
    "last_updated_by_id": "last_updated_by_id",
    "metadata": {
      "foo": "bar"
    },
    "updated_at": "2019-12-27T18:11:19.117Z",
    "vector_db_provider": "native"
  }
]