Ollama lets you run large language models on your own machine. To use it with this project, follow these steps:

```bash
# Start the Ollama server
ollama serve

# Pull the model used by this project
ollama pull a-kore/Arctic-Text2SQL-R1-7B

# Optionally, chat with the model interactively to confirm it works
ollama run a-kore/Arctic-Text2SQL-R1-7B
```
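If you prefer to check from Python, here is a minimal smoke test using the `ollama` Python package (assuming it is installed, e.g. via `pip install ollama`, and that `ollama serve` is running on the default port):

```python
import ollama

# Minimal smoke test: ask the locally pulled model a trivial question
# to confirm the server is up and the model loads.
response = ollama.chat(
    model="a-kore/Arctic-Text2SQL-R1-7B",
    messages=[{"role": "user", "content": "Write a SQL query that selects the number 1."}],
)
print(response["message"]["content"])
```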
To verify the setup, run `scripts/test_ollama.py`.

You can also run Ollama on a remote server and connect to it from your development machine:
```bash
# On your remote server (e.g., 192.168.0.30)
curl -LsSf https://ollama.com/install.sh | sh
ollama serve
ollama pull a-kore/Arctic-Text2SQL-R1-7B
```
**Option 1: Environment Variable**

On your development machine, point the analyzer at the remote server:

```bash
export OLLAMA_HOST=http://192.168.0.30:11434
python -m iqtoolkit_analyzer your_log.log
```
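The `ollama` Python package resolves `OLLAMA_HOST` itself; written out explicitly, the resolution looks roughly like this (the fallback is Ollama's default local address):

```python
import os
import ollama

# Resolve the server address the way the shell example above does:
# use OLLAMA_HOST if set, otherwise Ollama's default local address.
host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
client = ollama.Client(host=host)
print(client.list())  # confirms the remote server is reachable
```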
**Option 2: Configuration File**

Add to `.iqtoolkit-analyzer.yml`:

```yaml
llm_provider: ollama
ollama_model: a-kore/Arctic-Text2SQL-R1-7B
ollama_host: http://192.168.0.30:11434
```
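For illustration, this is roughly how a client could be built from those keys. The loader below is a hypothetical sketch (it is not code from the analyzer, and it assumes PyYAML is installed):

```python
import ollama
import yaml

# Hypothetical sketch: read the analyzer's YAML config and build an
# Ollama client from the keys shown above.
with open(".iqtoolkit-analyzer.yml") as f:
    cfg = yaml.safe_load(f)

if cfg.get("llm_provider") == "ollama":
    host = cfg.get("ollama_host", "http://localhost:11434")
    client = ollama.Client(host=host)
    print(f"Using Ollama model {cfg['ollama_model']} at {host}")
```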
Use the included test script:

```bash
# Test connection and functionality
export OLLAMA_HOST=http://192.168.0.30:11434
python test_remote_ollama.py

# Or test directly
python -c "import ollama; print(ollama.list())"
```
```bash
# Run unit tests
OLLAMA_HOST=http://192.168.0.30:11434 pytest -c pytest-remote.ini tests/test_llm_client.py -v

# Run specific Ollama tests
OLLAMA_HOST=http://192.168.0.30:11434 pytest -c pytest-remote.ini tests/test_llm_client.py::TestLLMClientOllama -v
```
**Troubleshooting**

- If you get a model not found error, make sure you have pulled the model on the server (see the check after this list).
- By default, the client connects to `http://localhost:11434`; set `OLLAMA_HOST` to reach a different server.
- Make sure `llm_provider: ollama` is set in `.iqtoolkit-analyzer.yml`.
- To configure a remote server in the file instead, add `ollama_host: http://your-host:port` to the config.

For more details, see the official docs: https://ollama.com/docs
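To check the model not found case programmatically, `show()` raises a `ResponseError` for a model the server does not know about (again a sketch using the `ollama` package):

```python
import os
import ollama

host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
client = ollama.Client(host=host)
try:
    # show() returns model details, or raises ResponseError if the
    # model has not been pulled on this server.
    client.show("a-kore/Arctic-Text2SQL-R1-7B")
    print("Model is available.")
except ollama.ResponseError:
    print("Model not found: run `ollama pull a-kore/Arctic-Text2SQL-R1-7B` on the server.")
```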