rag-code-mcp

⚙️ RagCode Configuration Guide

Complete configuration reference for the RagCode MCP server.


📁 Installation Directory

RagCode installs to ~/.local/share/ragcode/ with the following structure:

~/.local/share/ragcode/
├── bin/
│   ├── rag-code-mcp      # Main MCP server binary
│   ├── index-all         # CLI indexing tool
│   └── mcp.log           # Server logs
└── config.yaml           # Main configuration file
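
You can confirm the layout after installation with a quick listing:

ls ~/.local/share/ragcode/bin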

📄 Configuration File

Edit ~/.local/share/ragcode/config.yaml to customize RagCode:

llm:
  provider: "ollama"
  base_url: "http://localhost:11434"
  model: "phi3:medium"        # LLM for code analysis
  embed_model: "mxbai-embed-large"  # Embedding model

storage:
  vector_db:
    url: "http://localhost:6333"
    collection_prefix: "ragcode"

workspace:
  auto_index: true
  exclude_patterns:
    - "vendor"
    - "node_modules"
    - ".git"
    - "dist"
    - "build"

logging:
  level: "info"           # debug, info, warn, error
  path: "~/.local/share/ragcode/bin/mcp.log"

LLM Models (for code analysis)

| Model | Size | Speed | Quality | Use Case |
|-------|------|-------|---------|----------|
| phi3:medium | 7.9 GB | Fast | Good | Recommended default |

Embedding Models

| Model | Size | Dimensions | Use Case |
|-------|------|------------|----------|
| mxbai-embed-large | 670 MB | 1024 | Recommended default |
| all-minilm | 45 MB | 384 | Faster, lower quality |
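
If you change either model, make sure it has been pulled into Ollama first. The defaults can be fetched and verified with:

ollama pull phi3:medium
ollama pull mxbai-embed-large
ollama list                      # shows installed models and their sizes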


🌍 Environment Variables

Environment variables override config.yaml settings. Set these in your IDE’s MCP configuration:

| Variable | Default | Description |
|----------|---------|-------------|
| OLLAMA_BASE_URL | http://localhost:11434 | Ollama server URL |
| OLLAMA_MODEL | phi3:medium | LLM model for code analysis |
| OLLAMA_EMBED | mxbai-embed-large | Embedding model |
| QDRANT_URL | http://localhost:6333 | Qdrant vector database URL |
| MCP_LOG_LEVEL | info | Log level (debug, info, warn, error) |
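
For a quick manual test outside your IDE, you can export any of these overrides in a shell and launch the server binary directly (a sketch; in normal use the binary is started by your IDE's MCP client):

export MCP_LOG_LEVEL=debug              # verbose logging while troubleshooting
export OLLAMA_EMBED=mxbai-embed-large   # embedding model override
~/.local/share/ragcode/bin/rag-code-mcp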

Example IDE Configuration

{
  "mcpServers": {
    "ragcode": {
      "command": "/home/YOUR_USERNAME/.local/share/ragcode/bin/rag-code-mcp",
      "args": [],
      "env": {
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "OLLAMA_MODEL": "phi3:medium",
        "OLLAMA_EMBED": "mxbai-embed-large",
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
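
Replace YOUR_USERNAME with your actual user name; the exact binary path on your machine can be printed with:

echo "$HOME/.local/share/ragcode/bin/rag-code-mcp"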

📊 Logs and Monitoring

Log File Location

The server writes its log to ~/.local/share/ragcode/bin/mcp.log (configurable via logging.path in config.yaml).

Watch Logs in Real-Time

tail -f ~/.local/share/ragcode/bin/mcp.log

Docker Container Logs

docker logs ragcode-ollama
docker logs ragcode-qdrant
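
The usual docker flags apply here as well, for example to follow the last 100 lines of the Qdrant container:

docker logs -f --tail 100 ragcode-qdrant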

Log Levels

RagCode supports four log levels: debug, info, warn, and error. Set the level with logging.level in config.yaml or override it with the MCP_LOG_LEVEL environment variable; use debug when troubleshooting and info for normal operation.

🔧 Installer Options

The ragcode-installer binary supports the following flags:

| Flag | Values | Description |
|------|--------|-------------|
| -ollama | docker, local | Where to run Ollama |
| -qdrant | docker, remote | Where to run Qdrant |
| -gpu | (flag) | Enable GPU acceleration for Ollama |
| -models-dir | path | Mount local Ollama models directory |
| -skip-build | (flag) | Skip binary compilation |

Common Scenarios

# Everything in Docker (default, recommended)
./ragcode-installer -ollama=docker -qdrant=docker

# Local Ollama + Docker Qdrant
./ragcode-installer -ollama=local -qdrant=docker

# Docker with GPU acceleration
./ragcode-installer -ollama=docker -qdrant=docker -gpu

# Reuse existing Ollama models
./ragcode-installer -ollama=docker -models-dir=$HOME/.ollama

# Re-configure IDEs only (no rebuild)
./ragcode-installer -skip-build
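
The flag syntax above suggests the installer is built with Go's standard flag package; if that assumption holds, the full and authoritative list of options can be printed with:

./ragcode-installer -h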

📦 System Requirements

Minimum Requirements

| Component | Requirement | Notes |
|-----------|-------------|-------|
| CPU | 4 cores | For running Ollama models |
| RAM | 16 GB | 8 GB for phi3:medium, 1 GB for mxbai-embed-large, 7 GB system |
| Disk | 10 GB free | ~8 GB for models + 2 GB for data |
| OS | Linux, macOS, Windows | Docker required for Qdrant |

Recommended Requirements

| Component | Requirement | Notes |
|-----------|-------------|-------|
| CPU | 8+ cores | Better performance for concurrent operations |
| RAM | 32 GB | Allows comfortable multi-workspace indexing |
| GPU | NVIDIA GPU with 8 GB+ VRAM | Significantly speeds up Ollama inference (optional) |
| Disk | 20 GB free SSD | Faster indexing and search |
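
On Linux, a quick way to check your machine against these numbers is with standard tools:

free -h          # total and available RAM
df -h ~          # free disk space on the home partition
nvidia-smi       # GPU model and VRAM, if an NVIDIA GPU is present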

Model Sizes

The default models take roughly 8.6 GB in total: phi3:medium is 7.9 GB and mxbai-embed-large is 670 MB. The lighter all-minilm embedding model is only 45 MB.

🔄 Incremental Indexing

RagCode features smart incremental indexing that only processes changed files.
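
In practice this means that after the first full pass you can re-run the bundled indexer whenever files change and only the modified files are re-processed. A sketch of a manual run (exact arguments, if any, may differ):

~/.local/share/ragcode/bin/index-all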

How It Works

On each run, RagCode compares the current state of the workspace with what was indexed previously and re-processes only the files that changed; entries for unchanged files are left in place.

Performance

For technical details, see incremental_indexing.md.