Audit your LLM stack against current pricing and alternatives.
Fetches live pricing from OpenRouter, analyzes your configured models, and recommends potential savings or upgrades by category.
Usage:
# Full audit with recommendations
python3 {baseDir}/scripts/model_audit.py
# JSON output
python3 {baseDir}/scripts/model_audit.py --json
# Audit specific models
python3 {baseDir}/scripts/model_audit.py --models "anthropic/claude-opus-4-6,openai/gpt-4o"
# Show top models by category
python3 {baseDir}/scripts/model_audit.py --top
# Compare two models
python3 {baseDir}/scripts/model_audit.py --compare "anthropic/claude-sonnet-4" "openai/gpt-4o"
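Under the hood, the pricing fetch works against OpenRouter's public models endpoint, which returns per-token prices as strings. A minimal sketch of that step (the endpoint and `pricing` field names follow OpenRouter's models API; the helper names here are illustrative, not the script's actual internals):

```python
import json
import urllib.request

OPENROUTER_MODELS_URL = "https://openrouter.ai/api/v1/models"

def per_million(pricing):
    """Convert OpenRouter's per-token string prices to USD per 1M tokens."""
    return {
        "in": float(pricing["prompt"]) * 1_000_000,
        "out": float(pricing["completion"]) * 1_000_000,
    }

def fetch_pricing():
    """Fetch the live model list and map model id -> per-1M pricing."""
    with urllib.request.urlopen(OPENROUTER_MODELS_URL) as resp:
        data = json.load(resp)["data"]
    return {m["id"]: per_million(m["pricing"]) for m in data}
```

Calling `fetch_pricing()` hits the network; `per_million` is the pure conversion step that turns OpenRouter's per-token strings into the per-1M figures shown in the audit output.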
Example output:
═══ LLM Stack Audit ═══
Your Models:
anthropic/claude-opus-4-6 $5.00/$25.00 per 1M tokens (in/out)
openai/gpt-4o $2.50/$10.00 per 1M tokens
google/gemini-2.0-flash $0.10/$0.40 per 1M tokens
Recommendations:
💡 For fast tasks: gemini-2.0-flash is 50x cheaper than opus
💡 Consider: deepseek/deepseek-r1 for reasoning at $0.55/$2.19
💡 Your stack covers: reasoning ✓, code ✓, fast ✓, vision ✓
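Recommendations like "50x cheaper" come down to a simple price ratio. A sketch of the arithmetic using the sample prices above (the 80/20 input/output token split is an illustrative assumption, not the script's configured mix):

```python
def blended_cost(price_in, price_out, in_share=0.8):
    """Blended USD per 1M tokens at a given input-token share."""
    return price_in * in_share + price_out * (1 - in_share)

opus = blended_cost(5.00, 25.00)    # 5.00*0.8 + 25.00*0.2 = $9.00 per 1M
flash = blended_cost(0.10, 0.40)    # 0.10*0.8 + 0.40*0.2 = $0.16 per 1M
ratio = opus / flash                # ~56x at this mix
print(f"gemini-2.0-flash is {ratio:.0f}x cheaper than opus at an 80/20 mix")
```

The headline "50x" figure compares input prices alone ($5.00 vs $0.10); a blended ratio shifts with the input/output mix you assume.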
Requires OPENROUTER_API_KEY environment variable.
Built by M. Abidi | agxntsix.ai | YouTube | GitHub
Part of the AgxntSix Skill Suite for OpenClaw agents.
📅 Need help setting up OpenClaw for your business? Book a free consultation