For a summary, see the Project README.
See PostgreSQL Examples for complete instructions on enabling slow query logging in PostgreSQL.
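For quick reference, a minimal `postgresql.conf` sketch for slow query logging might look like the following; the values are illustrative (pick a threshold and log format that match your environment), and the PostgreSQL Examples document has the complete walkthrough.

```conf
# postgresql.conf — minimal slow query logging (illustrative values)
logging_collector = on                 # needed for csvlog; requires a server restart
log_destination = 'csvlog'             # CSV logs match the analyzer's log_format: csv
log_min_duration_statement = 1000      # log statements running longer than 1000 ms
```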
You can create a `.iqtoolkit-analyzer.yml` file in your project directory to customize analysis options. Example:

```yaml
log_format: csv
min_duration: 1000
output: my_report.md
top_n: 10

# AI Provider: OpenAI or Ollama
llm_provider: ollama  # or 'openai'

# OpenAI Configuration
openai_model: gpt-4o-mini
openai_api_key: sk-xxx  # optional, can use OPENAI_API_KEY env var

# Ollama Configuration (local or remote)
ollama_model: a-kore/Arctic-Text2SQL-R1-7B
ollama_host: http://localhost:11434  # or remote: http://192.168.0.30:11434

# LLM Settings
llm_temperature: 0.3
max_tokens: 300
llm_timeout: 30
```
Set llm_provider to openai or ollama to choose which LLM backend to use. Specify the model for each provider with openai_model or ollama_model.
| Provider | Cost | Privacy | Speed | Notes |
|----------|------|---------|-------|-------|
| OpenAI | Paid (API) | Data sent to OpenAI servers | Fast (cloud) | Requires API key, best for latest models |
| Ollama | Free/local | Data stays on your machine | Fast (local, depends on hardware) | Requires local install, limited to available models |
Use openai_api_key or the OPENAI_API_KEY environment variable for OpenAI access, and ollama_host if your Ollama server runs on a non-default host.
You can switch providers by changing llm_provider in your config file. For most users, OpenAI is best for accuracy and features; Ollama is best for privacy and cost.
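For example, switching the example configuration above from Ollama to OpenAI only takes a couple of keys (a minimal sketch using the option names shown earlier):

```yaml
# .iqtoolkit-analyzer.yml — switch the LLM backend to OpenAI
llm_provider: openai
openai_model: gpt-4o-mini
# Leave openai_api_key unset and export OPENAI_API_KEY instead if you prefer
# not to keep the key in the config file.
```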
See the README and this file for all available options.
| Variable | Description | Default | Example |
|---|---|---|---|
| OPENAI_API_KEY | OpenAI API key (required for OpenAI) | None | sk-xxx... |
| OPENAI_MODEL | GPT model to use | gpt-4o-mini | gpt-4o |
| OPENAI_BASE_URL | Custom OpenAI endpoint | Default API URL | Custom endpoint |
| OLLAMA_HOST | Ollama server URL (local or remote) | http://localhost:11434 | http://192.168.0.30:11434 |
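For example, a typical shell setup for either provider might look like this (the key and host values are placeholders):

```bash
# OpenAI backend: API key is required, model override is optional
export OPENAI_API_KEY=sk-xxx...
export OPENAI_MODEL=gpt-4o

# Ollama backend: only needed when the server is not on the default localhost:11434
export OLLAMA_HOST=http://192.168.0.30:11434
```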
- `pyyaml` is required for config file support.
- `pandas` and `tqdm` are required for multi-format log parsing and progress bars.
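If those packages are not already present in your environment, they can usually be installed from PyPI:

```bash
pip install pyyaml pandas tqdm
```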