Complete configuration reference for RagCode MCP server.
RagCode installs to `~/.local/share/ragcode/` with the following structure:

```
~/.local/share/ragcode/
├── bin/
│   ├── rag-code-mcp   # Main MCP server binary
│   ├── index-all      # CLI indexing tool
│   └── mcp.log        # Server logs
└── config.yaml        # Main configuration file
```
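After installation, you can confirm the layout with a quick listing (standard coreutils, nothing RagCode-specific):

```bash
ls -R ~/.local/share/ragcode/
```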
Edit `~/.local/share/ragcode/config.yaml` to customize RagCode:

```yaml
llm:
  provider: "ollama"
  base_url: "http://localhost:11434"
  model: "phi3:medium"              # LLM for code analysis
  embed_model: "mxbai-embed-large"  # Embedding model

storage:
  vector_db:
    url: "http://localhost:6333"
    collection_prefix: "ragcode"

workspace:
  auto_index: true
  exclude_patterns:
    - "vendor"
    - "node_modules"
    - ".git"
    - "dist"
    - "build"

logging:
  level: "info"  # debug, info, warn, error
  path: "~/.local/share/ragcode/bin/mcp.log"
```
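After editing, it is worth confirming the file still parses as valid YAML before restarting the server. One way is the third-party `yq` tool (an assumption; it is not bundled with RagCode):

```bash
# Prints the parsed config, or an error if the YAML is malformed
yq eval '.' ~/.local/share/ragcode/config.yaml
```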
Available LLM models:

| Model | Size | Speed | Quality | Use Case |
|-------|------|-------|---------|----------|
| phi3:medium | 7.9 GB | Fast | Good | Recommended default |

Available embedding models:

| Model | Size | Dimensions | Use Case |
|-------|------|------------|----------|
| mxbai-embed-large | 670 MB | 1024 | Recommended default |
| all-minilm | 45 MB | 384 | Faster, lower quality |
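If a model is not already present, it can be fetched with the standard Ollama CLI (assuming an Ollama instance as set up by the installer):

```bash
ollama pull phi3:medium
ollama pull mxbai-embed-large

# Or, when Ollama runs in the bundled container:
docker exec ragcode-ollama ollama pull phi3:medium
```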
Environment variables override `config.yaml` settings. Set these in your IDE’s MCP configuration:

| Variable | Default | Description |
|---|---|---|
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama server URL |
| `OLLAMA_MODEL` | `phi3:medium` | LLM model for code analysis |
| `OLLAMA_EMBED` | `mxbai-embed-large` | Embedding model |
| `QDRANT_URL` | `http://localhost:6333` | Qdrant vector database URL |
| `MCP_LOG_LEVEL` | `info` | Log level (debug, info, warn, error) |
For example, an IDE's `mcpServers` entry:

```json
{
  "mcpServers": {
    "ragcode": {
      "command": "/home/YOUR_USERNAME/.local/share/ragcode/bin/rag-code-mcp",
      "args": [],
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "OLLAMA_MODEL": "phi3:medium",
        "OLLAMA_EMBED": "mxbai-embed-large",
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```
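Before wiring up the IDE, you can verify that both backends answer on their default ports (these are the standard Ollama and Qdrant HTTP endpoints):

```bash
curl http://localhost:11434/api/tags    # Ollama: lists installed models
curl http://localhost:6333/collections  # Qdrant: lists vector collections
```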
Server logs are written to `~/.local/share/ragcode/bin/mcp.log`:

```bash
tail -f ~/.local/share/ragcode/bin/mcp.log
```

Container logs for the bundled services:

```bash
docker logs ragcode-ollama
docker logs ragcode-qdrant
```
Available log levels:

- `debug` - Verbose output, useful for development
- `info` - Normal operation (default)
- `warn` - Warnings only
- `error` - Errors only
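To raise verbosity temporarily without editing `config.yaml`, set the log level via the environment (this assumes the server picks up `MCP_LOG_LEVEL` at startup, per the table above):

```bash
MCP_LOG_LEVEL=debug ~/.local/share/ragcode/bin/rag-code-mcp
```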
The `ragcode-installer` supports various configurations:

| Flag | Values | Description |
|---|---|---|
| `-ollama` | `docker`, `local` | Where to run Ollama |
| `-qdrant` | `docker`, `remote` | Where to run Qdrant |
| `-gpu` | (flag) | Enable GPU acceleration for Ollama |
| `-models-dir` | path | Mount local Ollama models directory |
| `-skip-build` | (flag) | Skip binary compilation |
```bash
# Everything in Docker (default, recommended)
./ragcode-installer -ollama=docker -qdrant=docker

# Local Ollama + Docker Qdrant
./ragcode-installer -ollama=local -qdrant=docker

# Docker with GPU acceleration
./ragcode-installer -ollama=docker -qdrant=docker -gpu

# Reuse existing Ollama models
./ragcode-installer -ollama=docker -models-dir=$HOME/.ollama

# Re-configure IDEs only (no rebuild)
./ragcode-installer -skip-build
```
Minimum requirements:

| Component | Requirement | Notes |
|---|---|---|
| CPU | 4 cores | For running Ollama models |
| RAM | 16 GB | 8 GB for phi3:medium, 1 GB for mxbai-embed-large, 7 GB system |
| Disk | 10 GB free | ~8 GB for models + 2 GB for data |
| OS | Linux, macOS, Windows | Docker required for Qdrant |
Recommended configuration:

| Component | Requirement | Notes |
|---|---|---|
| CPU | 8+ cores | Better performance for concurrent operations |
| RAM | 32 GB | Allows comfortable multi‑workspace indexing |
| GPU | NVIDIA GPU with 8 GB+ VRAM | Significantly speeds up Ollama inference (optional) |
| Disk | 20 GB free SSD | Faster indexing and search |
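On Linux, a quick way to check a machine against these numbers (standard coreutils/procps tools, nothing RagCode-specific):

```bash
nproc                  # CPU cores
free -h                # Total RAM
df -h ~/.local/share   # Free disk where RagCode installs
nvidia-smi             # GPU / VRAM, if an NVIDIA card is present
```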
Model disk usage:

- `mxbai-embed-large`: ~670 MB
- `phi3:medium`: ~7.9 GB

RagCode features smart incremental indexing that only processes changed files. Indexing state is tracked per workspace in `.ragcode/state.json`. For technical details, see `incremental_indexing.md`.
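To see what the tracker has recorded for a workspace, the state file can be inspected directly (the exact schema is internal; see `incremental_indexing.md`). Removing it should trigger a full re-index on the next run, assuming RagCode rebuilds missing state:

```bash
jq . .ragcode/state.json   # inspect the indexing state (requires jq)
rm .ragcode/state.json     # assumption: forces a full re-index next run
```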