This guide explains how to run the RagCode infrastructure (Qdrant + Ollama) using Docker, while leveraging your existing local Ollama models.
## Shared model storage

The docker-compose.yml is configured to map your local Ollama model directory (`~/.ollama`) into the container. This means the containerized Ollama can use every model you have already downloaded, with no re-downloading required.
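The mapping described above would typically appear in docker-compose.yml like the following sketch (the image tag and volume target are illustrative assumptions, not copied from the actual file; `/root/.ollama` is the default model path in the official Ollama image):

```yaml
services:
  ollama:
    image: ollama/ollama          # illustrative tag
    container_name: ragcode-ollama
    ports:
      - "11434:11434"             # Ollama's default API port
    volumes:
      - ~/.ollama:/root/.ollama   # reuse models already on the host
```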
## Starting the services

Prerequisite: existing Ollama models in `~/.ollama` (optional, but recommended).

Start the containers in the background:

```bash
docker-compose up -d
```
Check the Ollama container logs:

```bash
docker logs ragcode-ollama
```
List the models available inside the container:

```bash
docker exec -it ragcode-ollama ollama list
```
You should see all your locally downloaded models here!
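Alternatively, you can verify the server from the host by querying the Ollama HTTP API; a sketch assuming the default port mapping of 11434 (`/api/tags` returns the available models as JSON):

```shell
# Query the Ollama HTTP API on the host-mapped port.
if curl -sf http://localhost:11434/api/tags; then
  echo ""   # newline after the JSON payload
  api_status="reachable"
else
  api_status="unreachable"
  echo "Ollama API not reachable on localhost:11434 - is the container running?"
fi
```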
To download an additional model into the shared store:

```bash
docker exec -it ragcode-ollama ollama pull phi3:medium
```
## Troubleshooting

**"Error: could not connect to ollama"**
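To confirm that a port clash is the cause, check what is listening on port 11434 first (a sketch using `ss`; `lsof -i :11434` gives the same information):

```shell
# Report whether anything on the host is already listening on Ollama's port.
# Note: if `ss` is unavailable, this falls through to "free" - check manually.
if command -v ss >/dev/null 2>&1 && ss -ltn 2>/dev/null | grep -q ':11434'; then
  port_status="in use"
else
  port_status="free"
fi
echo "Port 11434 is ${port_status}"
```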
Make sure port 11434 is not already in use by a local Ollama instance on the host. If it is, stop it with `systemctl stop ollama` or `pkill ollama` before starting the containers.

**GPU not working**
If your machine has no supported GPU, remove the `deploy` section from `docker-compose.yml` to run in CPU-only mode (slower).
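For reference, the GPU reservation that such a `deploy` section typically contains looks like this (standard Docker Compose syntax for NVIDIA GPUs, shown as an assumed example rather than the project's exact file):

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```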