iqtoolkit-analyzer

Running Ollama Locally

Ollama lets you run large language models on your own machine. To use it with this project, follow these steps:

1. Install Ollama
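Ollama provides an install script for Linux; on macOS and Windows you can download the desktop app from ollama.com instead. A minimal Linux install looks like this (it is the same script the remote-server setup below uses):

curl -LsSf https://ollama.com/install.sh | sh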

2. Start the Ollama Server
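Once installed, start the server in a terminal (the macOS and Windows desktop apps start it automatically). By default it listens on localhost, port 11434:

ollama serve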

3. Pull a Model
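With the server running, pull a model from another terminal. The examples in this project use a-kore/Arctic-Text2SQL-R1-7B, but any model from the Ollama library works the same way:

ollama pull a-kore/Arctic-Text2SQL-R1-7B

# Verify the download
ollama list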

4. Test the Server
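As a quick smoke test, the server's root endpoint returns a plain "Ollama is running" banner, and /api/tags lists the models you have pulled:

curl http://localhost:11434
curl http://localhost:11434/api/tags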

5. Using a Remote Ollama Server

You can run Ollama on a remote server and connect to it from your development machine:

Server Setup

# On your remote server (e.g., 192.168.0.30)
curl -LsSf https://ollama.com/install.sh | sh

# Bind to all interfaces; by default Ollama only listens on 127.0.0.1
OLLAMA_HOST=0.0.0.0 ollama serve

# In a second shell (ollama serve stays in the foreground)
ollama pull a-kore/Arctic-Text2SQL-R1-7B

Client Configuration

Option 1: Environment Variable

export OLLAMA_HOST=http://192.168.0.30:11434
python -m iqtoolkit_analyzer your_log.log
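Before running the analyzer, you can confirm the variable points at a reachable server by querying the models endpoint directly:

curl "$OLLAMA_HOST/api/tags"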

Option 2: Configuration File

Add to .iqtoolkit-analyzer.yml:

llm_provider: ollama
ollama_model: a-kore/Arctic-Text2SQL-R1-7B
ollama_host: http://192.168.0.30:11434

Testing Remote Connection

Use the included test script:

# Test connection and functionality
export OLLAMA_HOST=http://192.168.0.30:11434
python test_remote_ollama.py

# Or test directly
python -c "import ollama; print(ollama.list())"

Running Tests Against Remote Server

# Run unit tests
OLLAMA_HOST=http://192.168.0.30:11434 pytest -c pytest-remote.ini tests/test_llm_client.py -v

# Run specific Ollama tests
OLLAMA_HOST=http://192.168.0.30:11434 pytest -c pytest-remote.ini tests/test_llm_client.py::TestLLMClientOllama -v

6. Troubleshooting
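Common issues when connecting to a remote server:

- Connection refused: the server is likely bound to 127.0.0.1. Restart it with OLLAMA_HOST=0.0.0.0 ollama serve, and make sure port 11434 is open in the server's firewall.
- Model not found: pull the model on the server and confirm it appears in ollama list.
- Requests still go to localhost: check that OLLAMA_HOST is exported in the same shell that runs the analyzer or the tests.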

For more details, see the official docs: https://ollama.com/docs