Commit e713e83

Standardize env.example with setup tool conventions and double-hash syntax
- Add setup tool documentation header
- Convert examples to `# #` double-hash format
- Enable OpenAI embedding by default
- Update Ollama embedding model reference
1 parent a660638 commit e713e83
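The `# #` convention this commit standardizes distinguishes three kinds of lines in `env.example`: active `KEY=VALUE` lines, single-hash commented-out defaults, and double-hash placeholder copies that the setup tool must leave untouched. A minimal sketch of how a setup tool might classify lines under that convention (a hypothetical helper, not the actual `make setup*` implementation):

```python
import re

def classify(line: str) -> str:
    """Classify an env.example line per the double-hash convention.

    Hypothetical helper, not part of the LightRAG setup tool:
    - 'placeholder': '# # KEY=VALUE' repeated examples; never substituted
    - 'comment':     '###' documentation lines and blank lines
    - 'default':     '# KEY=VALUE' commented-out defaults
    - 'active':      'KEY=VALUE' lines copied into .env as-is
    """
    if line.startswith("# # "):
        return "placeholder"
    if line.startswith("###") or not line.strip():
        return "comment"
    if re.match(r"#\s*\w+=", line):
        return "default"
    if re.match(r"\w+=", line):
        return "active"
    return "comment"
```

Under this sketch, only `active` and `default` lines would be candidates for value substitution when generating the final `.env`.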

File tree

env.example (1 file changed: 86 additions, 79 deletions)
@@ -1,3 +1,8 @@
+### All configurable environment variables must appear in this sample file, either active or commented out
+### The setup tool (`make setup*`) uses this file to generate the final .env file
+### Lines starting with `# #` represent repeated environment variables;
+### these are placeholders, and the setup tool must not substitute actual values on these lines.
+
 ###########################
 ### Server Configuration
 ###########################
@@ -122,22 +127,22 @@ RERANK_BINDING=null
 # RERANK_BY_DEFAULT=True
 
 ### Cohere AI
-# RERANK_MODEL=rerank-v3.5
-# RERANK_BINDING_HOST=https://api.cohere.com/v2/rerank
-# RERANK_BINDING_API_KEY=your_rerank_api_key_here
+# # RERANK_MODEL=rerank-v3.5
+# # RERANK_BINDING_HOST=https://api.cohere.com/v2/rerank
+# # RERANK_BINDING_API_KEY=your_rerank_api_key_here
 ### Cohere rerank chunking configuration (useful for models with token limits like ColBERT)
 # RERANK_ENABLE_CHUNKING=true
 # RERANK_MAX_TOKENS_PER_DOC=480
 
 ### Aliyun Dashscope
-# RERANK_MODEL=gte-rerank-v2
-# RERANK_BINDING_HOST=https://dashscope.aliyuncs.com/api/v1/services/rerank/text-rerank/text-rerank
-# RERANK_BINDING_API_KEY=your_rerank_api_key_here
+# # RERANK_MODEL=gte-rerank-v2
+# # RERANK_BINDING_HOST=https://dashscope.aliyuncs.com/api/v1/services/rerank/text-rerank/text-rerank
+# # RERANK_BINDING_API_KEY=your_rerank_api_key_here
 
 ### Jina AI
-# RERANK_MODEL=jina-reranker-v2-base-multilingual
-# RERANK_BINDING_HOST=https://api.jina.ai/v1/rerank
-# RERANK_BINDING_API_KEY=your_rerank_api_key_here
+# # RERANK_MODEL=jina-reranker-v2-base-multilingual
+# # RERANK_BINDING_HOST=https://api.jina.ai/v1/rerank
+# # RERANK_BINDING_API_KEY=your_rerank_api_key_here
 
 ### For local deployment Embedding and Reranker with vLLM (OpenAI-compatible API)
 ### Wizard metadata used to preserve the chosen rerank provider across setup reruns
@@ -165,7 +170,6 @@ RERANK_BINDING=null
 # NVIDIA_VISIBLE_DEVICES=0
 ### Optional Docker runtime equivalent; generated GPU compose honors either variable.
 # VLLM_RERANK_EXTRA_ARGS=
-# Docker note: generated compose files rewrite localhost to host.docker.internal for the container only.
 
 ########################################
 ### Document processing configuration
@@ -267,46 +271,50 @@ LLM_MODEL=gpt-5-mini
 ### Azure OpenAI example
 ### Use deployment name as model name or set AZURE_OPENAI_DEPLOYMENT instead
 # AZURE_OPENAI_API_VERSION=2024-08-01-preview
-# LLM_BINDING=azure_openai
-# LLM_BINDING_HOST=https://xxxx.openai.azure.com/
-# LLM_BINDING_API_KEY=your_api_key
-# LLM_MODEL=my-gpt-mini-deployment
+# # LLM_BINDING=azure_openai
+# # LLM_BINDING_HOST=https://xxxx.openai.azure.com/
+# # LLM_BINDING_API_KEY=your_api_key
+# # LLM_MODEL=my-gpt-mini-deployment
 
 ### Openrouter example
-# LLM_BINDING=openai
-# LLM_BINDING_HOST=https://openrouter.ai/api/v1
-# LLM_BINDING_API_KEY=your_api_key
-# LLM_MODEL=google/gemini-2.5-flash
+# # LLM_BINDING=openai
+# # LLM_BINDING_HOST=https://openrouter.ai/api/v1
+# # LLM_BINDING_API_KEY=your_api_key
+# # LLM_MODEL=google/gemini-2.5-flash
 
 ### Google Gemini example (AI Studio)
-# LLM_BINDING=gemini
-# LLM_BINDING_API_KEY=your_gemini_api_key
-# LLM_BINDING_HOST=https://generativelanguage.googleapis.com
-# LLM_MODEL=gemini-flash-latest
+# # LLM_BINDING=gemini
+# # LLM_BINDING_API_KEY=your_gemini_api_key
+# # LLM_BINDING_HOST=https://generativelanguage.googleapis.com
+# # LLM_MODEL=gemini-flash-latest
 
 ### use the following command to see all supported options for OpenAI, azure_openai or OpenRouter
 ### lightrag-server --llm-binding gemini --help
 ### Gemini Specific Parameters
 # GEMINI_LLM_MAX_OUTPUT_TOKENS=9000
 # GEMINI_LLM_TEMPERATURE=0.7
-### Enable Thinking
+### Enable or disable thinking
 # GEMINI_LLM_THINKING_CONFIG='{"thinking_budget": -1, "include_thoughts": true}'
-### Disable Thinking
-# GEMINI_LLM_THINKING_CONFIG='{"thinking_budget": 0, "include_thoughts": false}'
+# # GEMINI_LLM_THINKING_CONFIG='{"thinking_budget": 0, "include_thoughts": false}'
 
 ### Google Vertex AI example
 ### Vertex AI use GOOGLE_APPLICATION_CREDENTIALS instead of API-KEY for authentication
 ### LLM_BINDING_HOST=DEFAULT_GEMINI_ENDPOINT means select endpoint based on project and location automatically
-# LLM_BINDING=gemini
-# LLM_BINDING_HOST=https://aiplatform.googleapis.com
+# # LLM_BINDING=gemini
+# # LLM_BINDING_HOST=https://aiplatform.googleapis.com
 ### or use DEFAULT_GEMINI_ENDPOINT to select endpoint based on project and location automatically
-# LLM_BINDING_HOST=DEFAULT_GEMINI_ENDPOINT
-# LLM_MODEL=gemini-2.5-flash
+# # LLM_BINDING_HOST=DEFAULT_GEMINI_ENDPOINT
+# # LLM_MODEL=gemini-2.5-flash
 # GOOGLE_GENAI_USE_VERTEXAI=true
 # GOOGLE_CLOUD_PROJECT='your-project-id'
 # GOOGLE_CLOUD_LOCATION='us-central1'
 # GOOGLE_APPLICATION_CREDENTIALS='/Users/xxxxx/your-service-account-credentials-file.json'
 
+### Ollama example
+# # LLM_BINDING=ollama
+# # LLM_BINDING_HOST=http://localhost:11434
+# # LLM_MODEL=qwen3.5:9b
+
 ### use the following command to see all supported options for Ollama LLM
 ### lightrag-server --llm-binding ollama --help
 ### Ollama Server Specific Parameters
@@ -321,8 +329,8 @@ OLLAMA_LLM_NUM_CTX=32768
 ### Bedrock Specific Parameters
 ### Bedrock uses AWS credentials from the environment / AWS credential chain.
 ### It does not use LLM_BINDING_API_KEY.
-# LLM_BINDING=aws_bedrock
-# LLM_MODEL=anthropic.claude-3-5-sonnet-20241022-v2:0
+# # LLM_BINDING=aws_bedrock
+# # LLM_MODEL=anthropic.claude-3-5-sonnet-20241022-v2:0
 # AWS_ACCESS_KEY_ID=your_aws_access_key_id
 # AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
 # AWS_SESSION_TOKEN=your_optional_aws_session_token
@@ -344,40 +352,39 @@ OLLAMA_LLM_NUM_CTX=32768
 # EMBEDDING_TIMEOUT=30
 
 ### OpenAI compatible embedding
-### For local vLLM: set EMBEDDING_BINDING_API_KEY=EMPTY (any non-empty placeholder)
-# EMBEDDING_BINDING=openai
-# EMBEDDING_BINDING_HOST=https://api.openai.com/v1
-# EMBEDDING_BINDING_API_KEY=your_api_key
-# EMBEDDING_MODEL=text-embedding-3-large
-# EMBEDDING_DIM=3072
-# EMBEDDING_TOKEN_LIMIT=8192
-# EMBEDDING_SEND_DIM=false
+EMBEDDING_BINDING=openai
+EMBEDDING_BINDING_HOST=https://api.openai.com/v1
+EMBEDDING_BINDING_API_KEY=your_api_key
+EMBEDDING_MODEL=text-embedding-3-large
+EMBEDDING_DIM=3072
+EMBEDDING_TOKEN_LIMIT=8192
+EMBEDDING_SEND_DIM=false
 
 ### Optional for Azure Embedding
 ### Use deployment name as model name or set AZURE_EMBEDDING_DEPLOYMENT instead
+# # EMBEDDING_BINDING=azure_openai
+# # EMBEDDING_BINDING_HOST=https://xxxx.openai.azure.com/
+# # EMBEDDING_API_KEY=your_api_key
+# # EMBEDDING_MODEL=my-text-embedding-3-large-deployment
+# # EMBEDDING_DIM=3072
 # AZURE_EMBEDDING_API_VERSION=2024-08-01-preview
-# EMBEDDING_BINDING=azure_openai
-# EMBEDDING_BINDING_HOST=https://xxxx.openai.azure.com/
-# EMBEDDING_API_KEY=your_api_key
-# EMBEDDING_MODEL==my-text-embedding-3-large-deployment
-# EMBEDDING_DIM=3072
 
 ### Gemini embedding
-# EMBEDDING_BINDING=gemini
-# EMBEDDING_MODEL=gemini-embedding-001
-# EMBEDDING_DIM=1536
-# EMBEDDING_TOKEN_LIMIT=2048
-# EMBEDDING_BINDING_HOST=https://generativelanguage.googleapis.com
-# EMBEDDING_BINDING_API_KEY=your_api_key
+# # EMBEDDING_BINDING=gemini
+# # EMBEDDING_MODEL=gemini-embedding-001
+# # EMBEDDING_DIM=1536
+# # EMBEDDING_TOKEN_LIMIT=2048
+# # EMBEDDING_BINDING_HOST=https://generativelanguage.googleapis.com
+# # EMBEDDING_BINDING_API_KEY=your_api_key
 ### Gemini embedding requires sending dimension to server
-# EMBEDDING_SEND_DIM=true
+# # EMBEDDING_SEND_DIM=true
 
 ### Ollama embedding
-# EMBEDDING_BINDING=ollama
-# EMBEDDING_BINDING_HOST=http://localhost:11434
-# EMBEDDING_BINDING_API_KEY=your_api_key
-# EMBEDDING_MODEL=bge-m3:latest
-# EMBEDDING_DIM=1024
+# # EMBEDDING_BINDING=ollama
+# # EMBEDDING_BINDING_HOST=http://localhost:11434
+# # EMBEDDING_BINDING_API_KEY=your_api_key
+# # EMBEDDING_MODEL=qwen3-embedding:4b
+# # EMBEDDING_DIM=2560
 ### Optional for Ollama embedding
 OLLAMA_EMBEDDING_NUM_CTX=8192
 ### use the following command to see all supported options for Ollama embedding
@@ -386,20 +393,20 @@ OLLAMA_EMBEDDING_NUM_CTX=8192
 ### Bedrock embedding
 ### Bedrock uses AWS credentials from the environment / AWS credential chain.
 ### It does not use EMBEDDING_BINDING_API_KEY.
-# EMBEDDING_BINDING=aws_bedrock
-# EMBEDDING_MODEL=amazon.titan-embed-text-v2:0
-# EMBEDDING_DIM=1024
+# # EMBEDDING_BINDING=aws_bedrock
+# # EMBEDDING_MODEL=amazon.titan-embed-text-v2:0
+# # EMBEDDING_DIM=1024
 # AWS_ACCESS_KEY_ID=your_aws_access_key_id
 # AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
 # AWS_SESSION_TOKEN=your_optional_aws_session_token
 # AWS_REGION=us-east-1
 
 ### Jina AI Embedding
-# EMBEDDING_BINDING=jina
-# EMBEDDING_BINDING_HOST=https://api.jina.ai/v1/embeddings
-# EMBEDDING_MODEL=jina-embeddings-v4
-# EMBEDDING_DIM=2048
-# EMBEDDING_BINDING_API_KEY=your_api_key
+# # EMBEDDING_BINDING=jina
+# # EMBEDDING_BINDING_HOST=https://api.jina.ai/v1/embeddings
+# # EMBEDDING_MODEL=jina-embeddings-v4
+# # EMBEDDING_DIM=2048
+# # EMBEDDING_BINDING_API_KEY=your_api_key
 
 ####################################################################
 ### WORKSPACE sets workspace name for all storage types
@@ -418,29 +425,29 @@ OLLAMA_EMBEDDING_NUM_CTX=8192
 # LIGHTRAG_VECTOR_STORAGE=NanoVectorDBStorage
 
 ### Redis Storage (Recommended for production deployment)
-# LIGHTRAG_KV_STORAGE=RedisKVStorage
-# LIGHTRAG_DOC_STATUS_STORAGE=RedisDocStatusStorage
+# # LIGHTRAG_KV_STORAGE=RedisKVStorage
+# # LIGHTRAG_DOC_STATUS_STORAGE=RedisDocStatusStorage
 
 ### Vector Storage (Recommended for production deployment)
-# LIGHTRAG_VECTOR_STORAGE=MilvusVectorDBStorage
-# LIGHTRAG_VECTOR_STORAGE=QdrantVectorDBStorage
-# LIGHTRAG_VECTOR_STORAGE=FaissVectorDBStorage
+# # LIGHTRAG_VECTOR_STORAGE=MilvusVectorDBStorage
+# # LIGHTRAG_VECTOR_STORAGE=QdrantVectorDBStorage
+# # LIGHTRAG_VECTOR_STORAGE=FaissVectorDBStorage
 
 ### Graph Storage (Recommended for production deployment)
-# LIGHTRAG_GRAPH_STORAGE=Neo4JStorage
-# LIGHTRAG_GRAPH_STORAGE=MemgraphStorage
+# # LIGHTRAG_GRAPH_STORAGE=Neo4JStorage
+# # LIGHTRAG_GRAPH_STORAGE=MemgraphStorage
 
 ### PostgreSQL
-# LIGHTRAG_KV_STORAGE=PGKVStorage
-# LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
-# LIGHTRAG_GRAPH_STORAGE=PGGraphStorage
-# LIGHTRAG_VECTOR_STORAGE=PGVectorStorage
+# # LIGHTRAG_KV_STORAGE=PGKVStorage
+# # LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
+# # LIGHTRAG_GRAPH_STORAGE=PGGraphStorage
+# # LIGHTRAG_VECTOR_STORAGE=PGVectorStorage
 
 ### MongoDB (Vector storage only available on Atlas Cloud)
-# LIGHTRAG_KV_STORAGE=MongoKVStorage
-# LIGHTRAG_DOC_STATUS_STORAGE=MongoDocStatusStorage
-# LIGHTRAG_GRAPH_STORAGE=MongoGraphStorage
-# LIGHTRAG_VECTOR_STORAGE=MongoVectorDBStorage
+# # LIGHTRAG_KV_STORAGE=MongoKVStorage
+# # LIGHTRAG_DOC_STATUS_STORAGE=MongoDocStatusStorage
+# # LIGHTRAG_GRAPH_STORAGE=MongoGraphStorage
+# # LIGHTRAG_VECTOR_STORAGE=MongoVectorDBStorage
 
 ### PostgreSQL Configuration
 POSTGRES_HOST=localhost
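One practical consequence of the `# #` format in the hunks above: switching to an alternative provider block (say, Ollama embedding) only requires stripping the `# # ` prefix from its lines. A minimal sketch of that step, as a hypothetical helper rather than the actual `make setup*` wizard:

```python
def activate(block: str) -> str:
    """Turn '# # KEY=VALUE' placeholder lines into active .env lines.

    Illustrative only; the documented workflow is the `make setup*`
    wizard, and this helper is not part of the LightRAG codebase.
    Lines without the placeholder prefix pass through unchanged.
    """
    return "\n".join(
        line[4:] if line.startswith("# # ") else line
        for line in block.splitlines()
    )

# Example: activate the Ollama embedding block from the diff above.
print(activate("# # EMBEDDING_BINDING=ollama\n# # EMBEDDING_DIM=2560"))
```

Because the placeholder prefix is uniform, any provider block in the file can be activated the same way, which is what the standardization in this commit buys.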
