⚙️ Configuration¶
For a summary, see the Project README.
🔥 NEW: IQToolkit Config File for Database Connections¶
Major Feature: You can now use the IQToolkit config file (~/.iqtoolkit/config.yaml) to connect directly to databases and run EXPLAIN analysis without needing log files!
Config File Location¶
Create your config file at:
- `~/.iqtoolkit/config.yaml` (recommended)
- Or specify with `--config /path/to/config.yaml`
Full Config Example¶
```yaml
# AI Provider Configuration
default_provider: openai  # or 'ollama', 'gemini', 'bedrock', 'claude', 'azure'

providers:
  openai:
    api_key: ${OPENAI_API_KEY}  # Use environment variables for secrets
    model: gpt-4o-mini
  ollama:
    host: http://localhost:11434
    model: arctic-text2sql-r1:7b
  gemini:
    api_key: ${GEMINI_API_KEY}
    model: gemini-pro

# Database Connections (NEW!)
databases:
  # Local PostgreSQL
  local_dev:
    type: postgres
    host: localhost
    port: 5432
    database: myapp_dev
    user: postgres
    password: ${DEV_DB_PASSWORD}

  # Production RDS
  rds_prod:
    type: postgres
    host: my-db.us-east-1.rds.amazonaws.com
    port: 5432
    database: production
    user: admin
    password: ${RDS_PASSWORD}

  # Cloud SQL
  gcp_staging:
    type: postgres
    host: 10.128.0.3
    port: 5432
    database: staging
    user: postgres
    password: ${CLOUDSQL_PASSWORD}
```
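The `${VAR}` placeholders in the config are meant to be filled from environment variables when the file is loaded. As a rough illustration of that substitution (`expand_env` is a hypothetical helper; the analyzer's actual expansion logic is assumed, not documented here):

```python
import os
import re

def expand_env(text: str) -> str:
    # Replace ${VAR} with the value of VAR from the environment,
    # leaving the placeholder intact if the variable is unset.
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        text,
    )

os.environ["DEV_DB_PASSWORD"] = "s3cret"  # example value
print(expand_env("password: ${DEV_DB_PASSWORD}"))  # → password: s3cret
print(expand_env("password: ${UNSET_VAR}"))        # → password: ${UNSET_VAR}
```

Leaving unset placeholders intact (rather than substituting an empty string) makes a missing secret visible in error messages instead of silently producing an empty password.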
Using Database Connections¶
Once your config file is set up, run EXPLAIN directly against any configured database:
```bash
# Analyze query from command line
poetry run python -m iqtoolkit_analyzer \
  --config ~/.iqtoolkit/config.yaml \
  postgresql \
  --db-name local_dev \
  --sql "SELECT * FROM users WHERE email = 'test@example.com'" \
  --output analysis.md

# Analyze query from SQL file
poetry run python -m iqtoolkit_analyzer \
  --config ~/.iqtoolkit/config.yaml \
  postgresql \
  --db-name rds_prod \
  --query-file slow_query.sql \
  --output production_analysis.md
```
Database Connection Fields¶
| Field | Description | Required | Example |
|---|---|---|---|
| `type` | Database type | Yes | `postgres` |
| `host` | Database hostname or IP | Yes | `localhost`, `my-db.rds.amazonaws.com` |
| `port` | Database port | Yes | `5432` |
| `database` | Database name | Yes | `production` |
| `user` | Database username | Yes | `postgres`, `admin` |
| `password` | Database password (use env vars!) | Yes | `${DB_PASSWORD}` |
Security Best Practices¶
Always use environment variables for passwords:
```yaml
# ✅ GOOD - Uses environment variable
databases:
  prod:
    password: ${DB_PASSWORD}

# ❌ BAD - Hardcoded password in config
databases:
  prod:
    password: my_secret_password
```
Set environment variables before running:
```bash
export DB_PASSWORD="your_password"
export OPENAI_API_KEY="sk-..."
export RDS_PASSWORD="rds_password"
```
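To fail fast when a secret is missing, you can guard the variables with standard shell parameter expansion before invoking the analyzer (variable names here match the examples above; this is a general shell pattern, not an analyzer feature):

```shell
# Export secrets, then abort with a clear message if any is empty or unset
export DB_PASSWORD="your_password"
export OPENAI_API_KEY="sk-..."
: "${DB_PASSWORD:?DB_PASSWORD is not set}"
: "${OPENAI_API_KEY:?OPENAI_API_KEY is not set}"
echo "secrets ok"
```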
PostgreSQL Log Setup¶
See PostgreSQL Examples for complete instructions on enabling slow query logging in PostgreSQL.
Legacy Configuration File (.iqtoolkit-analyzer.yml)¶
You can create a .iqtoolkit-analyzer.yml file in your project directory to customize analysis options. Example:
```yaml
log_format: csv
min_duration: 1000
output: my_report.md
top_n: 10

# AI Provider: OpenAI or Ollama
llm_provider: ollama  # or 'openai'

# OpenAI Configuration
openai_model: gpt-4o-mini
openai_api_key: sk-xxx  # optional, can use OPENAI_API_KEY env var

# Ollama Configuration (local or remote)
ollama_model: a-kore/Arctic-Text2SQL-R1-7B
ollama_host: http://localhost:11434  # or remote: http://192.168.0.30:11434

# LLM Settings
llm_temperature: 0.3
max_tokens: 300
llm_timeout: 30
```
Set `llm_provider` to `openai` or `ollama` to choose which LLM backend to use, and specify the model for each provider with `openai_model` or `ollama_model`.
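Options omitted from the legacy file presumably fall back to built-in defaults, as is typical for such configs. Conceptually the merge looks like this (default values mirror the example above; the analyzer's real loader is assumed and `user_config` is shown as a plain dict for brevity):

```python
# Built-in defaults (values mirror the example above; assumed, not verified)
DEFAULTS = {
    "llm_provider": "openai",
    "llm_temperature": 0.3,
    "max_tokens": 300,
    "llm_timeout": 30,
}

# Keys parsed from .iqtoolkit-analyzer.yml
user_config = {"llm_provider": "ollama", "top_n": 10}

# User settings override defaults; unspecified keys keep their default values
settings = {**DEFAULTS, **user_config}
print(settings["llm_provider"])  # → ollama
print(settings["max_tokens"])    # → 300
```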
Choosing Your LLM Provider: OpenAI vs Ollama¶
| Provider | Cost | Privacy | Speed | Notes |
|---|---|---|---|---|
| OpenAI | Paid (API) | Data sent to OpenAI servers | Fast (cloud) | Requires API key, best for latest models |
| Ollama | Free/local | Data stays on your machine | Fast (local, depends on hardware) | Requires local install, limited to available models |

Use `openai_api_key` or the `OPENAI_API_KEY` environment variable for OpenAI access, and `ollama_host` if your Ollama server runs on a non-default host.

- OpenAI: Use for access to the latest GPT models, high reliability, and cloud scalability. Requires an API key and incurs usage costs. Data is processed on OpenAI's servers.
- Ollama: Use for privacy, cost savings, and offline/local inference. No API key needed, but you must install Ollama and download models. Data never leaves your machine.
You can switch providers by changing llm_provider in your config file. For most users, OpenAI is best for accuracy and features; Ollama is best for privacy and cost.
See the README and this file for all available options.
Environment Variables¶
| Variable | Description | Default | Example |
|---|---|---|---|
| OPENAI_API_KEY | OpenAI API key (required for OpenAI) | None | sk-xxx... |
| OPENAI_MODEL | GPT model to use | gpt-4o-mini | gpt-4o |
| OPENAI_BASE_URL | Custom OpenAI endpoint | Default API URL | Custom endpoint |
| OLLAMA_HOST | Ollama server URL (local or remote) | http://localhost:11434 | http://192.168.0.30:11434 |
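Environment variables of this kind conventionally override built-in defaults. A minimal sketch of that lookup order (default values come from the table above; the analyzer's exact resolution logic is assumed):

```python
import os

# Documented defaults from the table above
DEFAULTS = {
    "OPENAI_MODEL": "gpt-4o-mini",
    "OLLAMA_HOST": "http://localhost:11434",
}

def resolve(name: str) -> str:
    # The environment value wins; otherwise fall back to the documented default.
    return os.environ.get(name, DEFAULTS[name])

print(resolve("OLLAMA_HOST"))    # default, unless OLLAMA_HOST is exported
os.environ["OPENAI_MODEL"] = "gpt-4o"
print(resolve("OPENAI_MODEL"))   # → gpt-4o
```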
Dependencies¶
- `pyyaml` is required for config file support
- `pandas` and `tqdm` are required for multi-format log parsing and progress bars