
claude-code-analyzer: A Self-Optimization Skill That Actually Reads Your History


161 stars on SkillsMP and trending. That's not viral noise — that's developers solving a real problem and telling their friends. I installed this, ran it against a few projects, and here's my honest take.

What This Skill Actually Does

The pitch is: analyze your Claude Code usage history, figure out what you're actually doing, and then generate the configurations you should have set up weeks ago.

Concretely, it runs two shell scripts bundled with the skill: a usage analyzer (analyze.sh) that parses your Claude Code history files, and a project analyzer that inspects the current repo's stack and tooling.

Then it synthesizes both results, fetches the latest Anthropic docs on demand, and generates actual config files: an updated settings.json for auto-allows, slash commands for repetitive operations, agent definitions for complex workflows, and a CLAUDE.md bootstrapped with your real project details.

The key word is actual. This isn't a template generator. It's reading your history.

Why This Matters

If you've been using Claude Code for more than a few weeks, you've accumulated habits. You've approved the same Bash invocation 200 times. You've typed the same test command in every session. Your CLAUDE.md is either nonexistent or a copy-paste from a tutorial that doesn't reflect your actual stack.

The gap this fills is the feedback loop. Claude Code doesn't tell you "hey, you've approved Read 80 times this month, maybe just auto-allow it." It doesn't notice that you run npm test before every commit and offer to make that a slash command. You're expected to figure out your own optimization, which most people don't do because it's not urgent — it's just friction.
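The tally behind that kind of observation is simple. Here's a minimal sketch, assuming session history is stored as JSONL with tool_use entries — the bundled analyze.sh may parse your history differently:

```shell
#!/bin/sh
# Minimal sketch of a per-tool usage tally. The JSONL shape below is an
# assumption for illustration; the real analyze.sh may read a different format.
history=$(mktemp)
cat > "$history" <<'EOF'
{"message":{"content":[{"type":"tool_use","name":"Read"}]}}
{"message":{"content":[{"type":"tool_use","name":"Read"}]}}
{"message":{"content":[{"type":"tool_use","name":"Bash"}]}}
EOF
# Pull each tool name out of the log and count occurrences, most used first.
counts=$(grep -o '"name":"[^"]*"' "$history" | cut -d'"' -f4 | sort | uniq -c | sort -rn)
echo "$counts"
rm -f "$history"
```

A tool that shows up hundreds of times in that list but isn't in your auto-allows is exactly the friction this skill is hunting for.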

This skill makes that audit automatic. That's the value proposition, and it's a legitimate one.

Key Capabilities Worth Calling Out

1. Usage-driven auto-allow recommendations

This is the most immediately useful feature. The script identifies tools you're approving constantly (high friction, low risk) versus tools that are auto-allowed but you never actually use (configuration debt). Both are problems. Getting a diff-style recommendation — add these, remove these — is genuinely useful.
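Those recommendations map onto the permissions block in Claude Code's settings.json. A sketch of the shape, with illustrative rules rather than anything the script will emit for your particular history:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Bash(npm test:*)"
    ],
    "deny": []
  }
}
```

The "add these, remove these" diff is just edits to that allow list.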

2. GitHub community resource discovery

When it runs the usage analysis, it also searches GitHub for community skills, agents, and slash commands that match your tool patterns. If you're a TypeScript developer using Vitest, it'll surface relevant community configs. This is a nice touch — you're not just getting generic recommendations, you're getting pointed at what other people in similar setups have built.
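The discovery step presumably reduces to a keyword search built from your detected stack. A sketch of the idea, assuming the GitHub CLI (gh) — the skill itself may hit the search API differently:

```shell
#!/bin/sh
# Build a GitHub search from detected stack keywords. "typescript" and
# "vitest" are stand-ins for whatever the project analyzer actually found.
stack="typescript vitest"
cmd="gh search repos \"claude code skill $stack\" --limit 5"
# Printed rather than executed so the sketch runs without gh or network access.
echo "$cmd"
```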

3. Stack-aware CLAUDE.md generation

The project analyzer detects your actual setup and generates CLAUDE.md sections that are relevant to it. Next.js project with Vitest and pnpm? It'll scaffold the right commands, testing notes, and framework conventions. This beats starting from a blank file or copying someone else's CLAUDE.md that's full of irrelevant sections.
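For the Next.js/Vitest/pnpm case, the generated file might look roughly like this — the sections and commands here are illustrative placeholders, not the skill's actual output:

```markdown
# CLAUDE.md

## Commands
- pnpm dev    — start the Next.js dev server
- pnpm test   — run the Vitest suite
- pnpm lint   — run ESLint

## Conventions
- App Router; components live under app/
- Tests are colocated as *.test.ts next to the code they cover
```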

4. On-demand doc fetching before config creation

Before generating any configuration file, the skill fetches the latest Anthropic documentation for that config type. This matters more than it sounds. The Claude Code docs change. Agent frontmatter syntax, skill directory structure, slash command argument handling — these have evolved. Baking in stale assumptions is how you end up with configs that silently don't work. Fetching fresh docs before writing is the right behavior.

5. Complete workflow, not just analysis

The skill doesn't just tell you what to do — it does it. The workflow goes: analyze → recommend → fetch docs → write configs. You end up with actual files, not a report you have to act on manually.
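For the slash-command output, Claude Code reads markdown files from .claude/commands/, with frontmatter and an $ARGUMENTS placeholder. A generated command might look roughly like this (the body is illustrative):

```markdown
---
description: Run the test suite and summarize failures
allowed-tools: Bash(npm test:*)
---
Run `npm test $ARGUMENTS` and summarize any failures,
pointing at the specific assertions that broke.
```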

Who Should Install This

Install it if:

- You've been using Claude Code for a month or more and haven't done a serious config audit
- Your auto-allows are either empty or stale
- You don't have a CLAUDE.md, or yours is a placeholder
- You're setting up Claude Code on a new project and want a bootstrapped starting point rather than building from scratch
- You're curious what the community has built for your specific tool patterns

Skip it if:

- You're brand new to Claude Code and don't have enough history for the analysis to be meaningful
- You've already done careful manual optimization and your configs are tight — this will mostly confirm what you already know
- You're in an environment where running shell scripts against your history files raises security or privacy concerns (more on this below)

How to Install

Clone or download the skill directory and drop it into your skills folder:

```shell
# Project-level
mkdir -p .claude/skills
cp -r claude-code-analyzer .claude/skills/

# Or user-level (available across all projects)
mkdir -p ~/.claude/skills
cp -r claude-code-analyzer ~/.claude/skills/
```

Then ask Claude: "Help me optimize my Claude Code setup" and it'll trigger the workflow. The scripts need to be executable — if they're not, chmod +x scripts/*.sh inside the skill directory.

The full repo has a static site at microck.github.io/ordinary-claude-skills if you want to browse the broader collection before cloning.

Concerns and Limitations

I want to be straight with you on a few things.

The scripts are shell scripts running against your local history. Before you run anything, read analyze.sh. It's not long. You should know what's touching your JSONL history files. I didn't find anything alarming, but "trust but verify" applies here. The author's maintenance stance is explicitly "passive" — this isn't a commercially supported tool.

The GitHub discovery is only as good as the search. It's doing GitHub searches for community resources matching your patterns. That's useful, but community quality varies wildly. You'll get links to things that range from excellent to abandoned to subtly wrong. Treat the discoveries as leads, not endorsements.

The doc-fetching approach is smart but has a dependency. Fetching live docs before generating configs is the right call, but it means the skill needs network access and the Anthropic docs URLs need to stay stable. If Anthropic restructures their docs (it happens), the fetched content might 404 or return irrelevant pages. The URLs are hardcoded in the SKILL.md. This isn't a dealbreaker, but it's a maintenance surface.

CLAUDE.md generation is a starting point, not a finished product. The stack detection is solid for common setups, but it's heuristic-based. It'll miss things. Treat the generated CLAUDE.md as a first draft you need to review and extend, not a finished artifact.

161 stars with 0 gained in the last 7 days. The trend status says "rising" but the weekly delta is flat. That suggests it had a strong initial run and has leveled off. Not a red flag, but worth noting — it's not currently gaining new users at pace. Could be saturation, could be that the initial audience already found it.

Verdict

Install it. Run it once. The worst case is you spend 10 minutes and confirm your setup is already good. The realistic case is you find 3-5 things to fix — an auto-allow you've been clicking through for months, a CLAUDE.md that's missing your actual test commands, a slash command that would save you 30 seconds per session.

The skill does what it says. The approach is sound — read actual behavior, generate actual configs, fetch current docs. That's the right architecture for a tool like this.

Just read the scripts before you run them, treat the GitHub discoveries as leads not gospel, and review the generated configs before committing them. Standard due diligence for any tool that touches your environment.

For a passive-maintenance community skill, this is well-thought-out work.


Links:

- SkillsMP page
- GitHub source
- Full skill collection
