rag-code-mcp

🐳 Docker Setup for RagCode

This guide explains how to run the RagCode infrastructure (Qdrant + Ollama) using Docker, while leveraging your existing local Ollama models.

Why run Ollama in Docker?

Running Ollama in a container keeps the whole stack (Qdrant + Ollama) on one Compose network with pinned, reproducible versions: the services can reach each other by name, and the entire infrastructure starts and stops with a single command.

🚀 The “Smart” Setup (Model Mapping)

We have configured docker-compose.yml to map your local Ollama models (~/.ollama) into the container. This means:

  1. You don’t need to re-download models.
  2. Models downloaded inside Docker appear on your host.
  3. Model files are stored only once, on the host, so no disk space is wasted on duplicate copies.
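
The key line is the volume mapping in docker-compose.yml. A sketch of the relevant section (image tags, ports, and the Qdrant details here are illustrative; check the actual file):

```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ragcode-ollama
    ports:
      - "11434:11434"            # default Ollama API port
    volumes:
      - ~/.ollama:/root/.ollama  # reuse the host's model store; no re-download
  qdrant:
    image: qdrant/qdrant
    container_name: ragcode-qdrant
    ports:
      - "6333:6333"              # default Qdrant HTTP port
```

Because the host's ~/.ollama is mounted at /root/.ollama (where the Ollama image keeps its models), the same files serve both the host install and the container.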

Prerequisites

  - Docker and Docker Compose installed.
  - (Optional) Existing Ollama models in ~/.ollama on the host; these become visible inside the container automatically.

Usage

  1. Start the stack:
    docker-compose up -d
    
  2. Verify Ollama is running:
    docker logs ragcode-ollama
    
  3. Check available models (inside container):
    docker exec -it ragcode-ollama ollama list
    

    You should see all your locally downloaded models here!

  4. Pull a new model (if needed):
    docker exec -it ragcode-ollama ollama pull phi3:medium
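
You can also check from the host that the Ollama API is reachable, assuming the default port 11434 is published as in the sketch above:

```shell
# Quick host-side health check: /api/tags lists the models Ollama serves.
# Prints a status line either way, so it is safe to run before the stack is up.
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "ollama: reachable"
else
  echo "ollama: not reachable"
fi
```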
    

⚠️ Troubleshooting

“Error: could not connect to ollama”
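
Most often the client is pointed at the wrong address. From the host, Ollama is at http://localhost:11434; from another container on the same Compose network, localhost refers to that container itself, so use the service name instead. A sketch, assuming the service is named ollama and the client honors the standard OLLAMA_HOST variable (the ragcode-mcp service name here is hypothetical):

```yaml
# Hypothetical client service on the same Compose network:
services:
  ragcode-mcp:
    environment:
      - OLLAMA_HOST=http://ollama:11434   # service name, not localhost
```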

GPU not working
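
Ollama only uses the GPU if Docker exposes it to the container. With an NVIDIA card this requires the NVIDIA Container Toolkit on the host, plus a device reservation on the Ollama service; a sketch of the standard Compose syntax:

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all              # or a specific number of GPUs
              capabilities: [gpu]
```

After adding this, recreate the container with docker-compose up -d and check the startup logs for GPU detection.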