CLI Reference · Python 3.11+


codesage run [TARGET] · all flags

  --provider, -p <openai|gemini|anthropic|groq|ollama>
      LLM provider to use
  --model, -m <model-id>
      Specific model name
  --output, -o <file.md>
      Output report path
  --max-chunks <int>
      Max source chunks to analyse
  --no-wizard
      Skip interactive setup
  --no-cache
      Bypass result cache
  --verbose, -v
      Enable verbose logging
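The flag list above can be sketched as an argparse parser. This is a hypothetical mapping for illustration only (the parser wiring and the default output path are assumptions, not codesage's actual source); note that argparse turns `--max-chunks` into `args.max_chunks` and `--no-wizard` into `args.no_wizard`.

```python
# Hypothetical argparse sketch of the `codesage run` flags listed above.
import argparse

parser = argparse.ArgumentParser(prog="codesage run")
parser.add_argument("target", nargs="?", default=".")
parser.add_argument("--provider", "-p",
                    choices=["openai", "gemini", "anthropic", "groq", "ollama"])
parser.add_argument("--model", "-m")
parser.add_argument("--output", "-o", default="reports/report.md")  # assumed default
parser.add_argument("--max-chunks", type=int)
parser.add_argument("--no-wizard", action="store_true")
parser.add_argument("--no-cache", action="store_true")
parser.add_argument("--verbose", "-v", action="store_true")

args = parser.parse_args(["./my-project", "--no-wizard", "-p", "openai"])
```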

codesage init [TARGET]

Generate .codesage.yml interactively

codesage doctor

Check Python version, dependencies & API keys
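The kind of checks `codesage doctor` describes could be sketched as below. This is a rough illustration, assuming only the Python 3.11+ requirement and the key names from the "Environment variables" section; the actual check logic is not taken from codesage's source.

```python
# Illustrative health-check sketch: verify the interpreter version and
# that at least one provider API key is present in the environment.
import os
import sys

def doctor() -> list[str]:
    problems = []
    if sys.version_info < (3, 11):
        problems.append(f"Python 3.11+ required, found {sys.version.split()[0]}")
    keys = ["OPENAI_API_KEY", "GEMINI_API_KEY", "ANTHROPIC_API_KEY", "GROQ_API_KEY"]
    if not any(os.environ.get(k) for k in keys):
        problems.append("no provider API key found in the environment")
    return problems
```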

Common usage patterns

# Install
pip install -e .

# Analyse current dir (wizard)
codesage run

# Analyse a path, no wizard
codesage run ./my-project \
  --no-wizard \
  --provider openai \
  --model gpt-4o-mini \
  --output report.md

# Larger analysis budget
codesage run . --max-chunks 100

# Init config for a project
codesage init ./my-project

# Health check
codesage doctor

Environment variables

OPENAI_API_KEY       OpenAI key
GEMINI_API_KEY       Google Gemini key
ANTHROPIC_API_KEY    Anthropic key
GROQ_API_KEY         Groq key

Config priority: CLI flags → env vars → .codesage.yml → defaults
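One way to implement the stated priority order (CLI flags over env vars over .codesage.yml over defaults) is a lookup chain like the sketch below. Every name here is hypothetical: the `CODESAGE_PROVIDER` env var, the defaults, and the function shape are assumptions for illustration, not codesage's actual API.

```python
# Minimal resolution-order sketch: first non-empty source wins.
import os

DEFAULTS = {"provider": "openai", "max_chunks": 50}   # illustrative defaults
ENV_MAP = {"provider": "CODESAGE_PROVIDER"}           # hypothetical env var name

def resolve(key, cli_value=None, file_config=None):
    if cli_value is not None:                  # 1. CLI flag wins
        return cli_value
    env_name = ENV_MAP.get(key)
    if env_name and os.environ.get(env_name):  # 2. then environment
        return os.environ[env_name]
    if file_config and key in file_config:     # 3. then .codesage.yml values
        return file_config[key]
    return DEFAULTS.get(key)                   # 4. finally built-in defaults
```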

Output

Reports are written to ./reports/report.md by default. If the file already exists, a timestamped copy is created instead of overwriting it.
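The non-overwriting behaviour above could look like the following sketch; the exact timestamp pattern in the filename is an assumption.

```python
# Sketch: return the default report path, or a timestamped sibling
# (e.g. report-20240101-120000.md) if the default already exists.
from datetime import datetime
from pathlib import Path

def report_path(base: Path = Path("reports/report.md")) -> Path:
    if not base.exists():
        return base
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")  # assumed pattern
    return base.with_name(f"{base.stem}-{stamp}{base.suffix}")
```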