3 changes: 3 additions & 0 deletions .env.example
@@ -20,6 +20,9 @@ EMBEDDING_MODEL=text-embedding-3-small
 # You can customize it according to the throughput of your embedding model. Generally, larger batch size means less indexing time.
 EMBEDDING_BATCH_SIZE=100
 
+# Maximum number of chunks to process during indexing (default: 450000)
+# CHUNK_LIMIT=450000
+
 # =============================================================================
 # OpenAI Configuration
 # =============================================================================
1 change: 1 addition & 0 deletions docs/getting-started/environment-variables.md
@@ -60,6 +60,7 @@ Claude Context supports a global configuration file at `~/.context/.env` to simp
 |----------|-------------|---------|
 | `HYBRID_MODE` | Enable hybrid search (BM25 + dense vector). Set to `false` for dense-only search | `true` |
 | `EMBEDDING_BATCH_SIZE` | Batch size for processing. Larger batch size means less indexing time | `100` |
+| `CHUNK_LIMIT` | Maximum number of chunks to process during indexing | `450000` |
 | `SPLITTER_TYPE` | Code splitter type: `ast`, `langchain` | `ast` |
 | `CUSTOM_EXTENSIONS` | Additional file extensions to include (comma-separated, e.g., `.vue,.svelte,.astro`) | None |
 | `CUSTOM_IGNORE_PATTERNS` | Additional ignore patterns (comma-separated, e.g., `temp/**,*.backup,private/**`) | None |
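As a usage sketch, the new variable would sit in the global `~/.context/.env` file alongside the options already documented in this table (the `200000` override here is purely illustrative, not a recommended value):

```
# ~/.context/.env
EMBEDDING_BATCH_SIZE=100
CHUNK_LIMIT=200000
HYBRID_MODE=true
```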
3 changes: 2 additions & 1 deletion packages/core/src/context.ts
@@ -702,8 +702,9 @@ export class Context {
     ): Promise<{ processedFiles: number; totalChunks: number; status: 'completed' | 'limit_reached' }> {
         const isHybrid = this.getIsHybrid();
         const EMBEDDING_BATCH_SIZE = Math.max(1, parseInt(envManager.get('EMBEDDING_BATCH_SIZE') || '100', 10));
-        const CHUNK_LIMIT = 450000;
+        const CHUNK_LIMIT = Math.max(1, parseInt(envManager.get('CHUNK_LIMIT') || '450000', 10));
         console.log(`[Context] 🔧 Using EMBEDDING_BATCH_SIZE: ${EMBEDDING_BATCH_SIZE}`);
+        console.log(`[Context] 🔧 Using CHUNK_LIMIT: ${CHUNK_LIMIT}`);
 
         let chunkBuffer: Array<{ chunk: CodeChunk; codebasePath: string }> = [];
         let processedFiles = 0;
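The fallback-and-clamp pattern this hunk adopts (matching the existing `EMBEDDING_BATCH_SIZE` line above) can be sketched standalone. `parseChunkLimit` is a hypothetical helper name used only for illustration; the real code reads the value inline through `envManager`:

```typescript
// Hypothetical helper mirroring the CHUNK_LIMIT parsing in this hunk.
function parseChunkLimit(raw: string | undefined): number {
    // An unset or empty value falls back to the documented default;
    // the result is then clamped to at least 1, so a zero or negative
    // override cannot disable indexing entirely.
    return Math.max(1, parseInt(raw || '450000', 10));
}

console.log(parseChunkLimit(undefined)); // 450000 (default)
console.log(parseChunkLimit('200000'));  // 200000 (explicit override)
console.log(parseChunkLimit('0'));       // 1 (clamped)
```

One caveat of this pattern: a non-numeric value such as `CHUNK_LIMIT=abc` makes `parseInt` return `NaN`, which `Math.max` does not clamp away; that behavior is inherited from the inline code, not introduced by this sketch.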
6 changes: 6 additions & 0 deletions packages/mcp/src/config.ts
@@ -203,6 +203,12 @@ Environment Variables:
   MILVUS_ADDRESS          Milvus address (optional, can be auto-resolved from token)
   MILVUS_TOKEN            Milvus token (optional, used for authentication and address resolution)
 
+Advanced Configuration:
+  EMBEDDING_BATCH_SIZE    Batch size for processing (default: 100)
+  CHUNK_LIMIT             Maximum number of chunks to process during indexing (default: 450000)
+  HYBRID_MODE             Enable hybrid search (default: true)
+  SPLITTER_TYPE           Code splitter type: ast, langchain (default: ast)
+
 Examples:
   # Start MCP server with OpenAI (default) and explicit Milvus address
   OPENAI_API_KEY=sk-xxx MILVUS_ADDRESS=localhost:19530 npx @zilliz/claude-context-mcp@latest
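Following the pattern of the examples in this help text, the new variable could be supplied the same way on the command line (the `200000` value is an arbitrary illustration):

```
# Start MCP server with a custom chunk limit (illustrative value)
OPENAI_API_KEY=sk-xxx MILVUS_ADDRESS=localhost:19530 CHUNK_LIMIT=200000 \
  npx @zilliz/claude-context-mcp@latest
```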