The Caveman skill forces Claude to drop filler words, hedging, and pleasantries. No "I'd be happy to help." No "let me summarise." Just the answer. Same technical accuracy, way fewer tokens. See the difference for yourself...
"I'd be happy to help you with that. Let me summarise what I just did. I executed the file search and identified three matching results for your query."
"Found 3 files. Done."
Open your terminal. Run these two commands. Auto-activation is built in.
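The two commands themselves aren't reproduced on this page. As a hedged sketch based on the Cursor example further down, the generic install likely follows this pattern (the exact sequence is an assumption, not verified):

```shell
# Hypothetical sketch -- exact commands not shown on this page.
# Install the skill; the installer auto-detects your agent:
npx skills add JuliusBrussee/caveman
# Then launch Claude Code; auto-activation is built in:
claude
```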
Works with Cursor, Windsurf, Codex, Copilot, Cline, Gemini CLI, and 40+ more. Auto-detects your agent.
For Cursor specifically: npx skills add JuliusBrussee/caveman -a cursor. Same pattern for windsurf, copilot, cline, etc.
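Spelled out, that pattern just swaps the agent name after the -a flag (agent names taken from the list above; treat these as illustrative, not verified):

```shell
# Same command, different -a target per agent:
npx skills add JuliusBrussee/caveman -a windsurf
npx skills add JuliusBrussee/caveman -a copilot
npx skills add JuliusBrussee/caveman -a cline
```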
Six levels available. Default is Full. Switch anytime with /caveman ultra in Claude Code.
The cited study tested 31 LLMs across 1,485 problems. Forcing large models to give brief responses improved accuracy by 26 percentage points on certain benchmarks. The proposed mechanism: large models talk themselves into wrong answers by overelaborating, and brevity strips that out.
Read the paper, or open the GitHub repo and copy the install command. You'll be running caveman mode in under a minute.