# CodeFluent
Personal AI fluency analytics for Claude Code users
## Overview
CodeFluent is an open-source tool that helps developers measure and improve how effectively they collaborate with AI coding assistants. While millions of developers use AI assistants daily, Anthropic’s research shows most users exhibit only 3 of 11 key fluency behaviors — and that interaction patterns directly predict whether developers build skills or lose them.
CodeFluent reads your local Claude Code session data, scores your prompting behaviors against Anthropic's AI fluency research, and provides actionable recommendations for becoming a more effective AI collaborator. It ships as an open-source VS Code extension on the Visual Studio Marketplace and as a standalone web app.
## Key Features
| Feature | Description |
|---|---|
| Fluency Score | Scores sessions against 11 fluency behaviors and 6 coding interaction patterns, with color-coded benchmark comparisons |
| Prompt Optimizer | Paste any prompt and get an optimized version that incorporates missing fluency behaviors, factoring in your CLAUDE.md config so it won’t duplicate covered behaviors. Shows before/after scores and lets you copy or run the result directly |
| Quick Wins | Scans your GitHub repos and generates copy-paste-ready Claude Code prompts for high-value tasks, scoped to the selected project and launchable directly from VS Code |
| Recommendations | Personalized, research-backed coaching prioritized by impact, with links to the underlying Anthropic research papers |
| CLAUDE.md Config Scoring | Analyzes your project’s CLAUDE.md against the same fluency framework — behaviors defined as conventions boost your effective score |
| Usage Dashboard | Token consumption, cost tracking, and model breakdown from your Claude Code history via stacked area charts |
## How It Works

### Session Parsing
Parses JSONL session files from ~/.claude/projects/ to extract user prompts and metadata including plan mode usage, tool diversity, and thinking count.
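The parsing step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the record field names (`type`, `message`, `content`) are assumptions about the session file format.

```python
import json
from pathlib import Path

def parse_session(path: Path) -> dict:
    """Extract user prompts from one Claude Code JSONL session file.

    Assumed record shape: {"type": "user", "message": {"content": "..."}}.
    Metadata extraction (plan mode, tool diversity) is omitted here.
    """
    prompts = []
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("type") == "user":
            content = record.get("message", {}).get("content", "")
            if isinstance(content, str):  # skip structured/tool content
                prompts.append(content)
    return {"file": path.name, "prompts": prompts}

# Usage: scan every project's sessions under ~/.claude/projects/
# sessions = [parse_session(p)
#             for p in Path.home().glob(".claude/projects/**/*.jsonl")]
```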
### Fluency Scoring
Sends prompts to Claude Sonnet for behavioral scoring against Anthropic's 4D AI Fluency Framework, with results cached locally to avoid re-scoring.
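A cache-by-content-hash scheme like the one described might look like this sketch. The `scorer` callable stands in for the Claude Sonnet API request, and the cache layout is an assumption, not CodeFluent's actual storage format.

```python
import hashlib
import json
from pathlib import Path

def score_prompt(prompt: str, scorer, cache_dir: Path) -> dict:
    """Score a prompt once, serving repeats from a local file cache.

    `scorer` is any callable that sends the prompt to the model and
    returns a dict of behavior scores; the real API call is omitted.
    """
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    cache_file = cache_dir / f"{key}.json"
    if cache_file.exists():  # already scored: no API call
        return json.loads(cache_file.read_text())
    result = scorer(prompt)
    cache_dir.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(json.dumps(result))
    return result
```

Keying the cache on a hash of the prompt text means identical prompts across sessions are scored only once.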
### Config Analysis
Scores the project's CLAUDE.md configuration separately, then merges it with session scores using OR logic: a behavior counts toward your effective score if it appears in either the session or the config.
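The "session OR config" merge reduces to a per-behavior logical OR, sketched below. The behavior names are illustrative placeholders, not the framework's actual labels.

```python
def effective_behaviors(session: dict[str, bool],
                        config: dict[str, bool]) -> dict[str, bool]:
    """A behavior is effectively covered if the session shows it OR
    the CLAUDE.md config defines it as a convention."""
    return {b: session.get(b, False) or config.get(b, False)
            for b in set(session) | set(config)}
```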
### Usage Tracking
Integrates ccusage to read Claude Code session history and export token/cost data, broken down by cache read, cache creation, input, and output tokens.
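Aggregating that breakdown per model might look like the sketch below. The record field names are assumptions about the exported usage data, not ccusage's documented schema.

```python
from collections import defaultdict

def summarize_usage(records: list[dict]) -> dict[str, dict[str, int]]:
    """Sum token counts per model across ccusage-style usage records."""
    totals: dict[str, dict[str, int]] = defaultdict(
        lambda: {"input": 0, "output": 0, "cache_read": 0, "cache_creation": 0}
    )
    for r in records:
        model = totals[r["model"]]
        for field in model:
            model[field] += r.get(field, 0)  # missing fields count as 0
    return dict(totals)
```

Feeding these per-model totals to Chart.js as stacked series gives the dashboard's stacked area view.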
### GitHub Integration
Uses the gh CLI to pull repo context and open issues, generating targeted Claude Code prompts scoped to your current workspace.
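A minimal sketch of this flow: fetch open issues with `gh issue list --json number,title`, then render one into a workspace-scoped prompt. The prompt template here is illustrative, not CodeFluent's actual wording.

```python
import json
import subprocess

def fetch_issues(repo_dir: str) -> list[dict]:
    """List open issues for the repo at repo_dir via the gh CLI."""
    out = subprocess.run(
        ["gh", "issue", "list", "--json", "number,title"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def issue_prompt(issue: dict, workspace: str) -> str:
    """Turn one issue into a copy-paste-ready Claude Code prompt."""
    return (f"In the {workspace} workspace, investigate issue "
            f"#{issue['number']} ({issue['title']}) and propose a fix "
            f"with tests.")
```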
### Prompt Optimization
Analyzes any prompt against the 11 fluency behaviors, factors in your CLAUDE.md config (scoring on demand if not cached), then generates an optimized version that incorporates only the missing behaviors not already covered by project conventions.
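Selecting which behaviors the optimizer should inject is a set difference: the full framework minus what the prompt already demonstrates minus what CLAUDE.md conventions already cover. A sketch, with illustrative behavior names standing in for the actual 11:

```python
# Placeholder labels; the real framework defines 11 behaviors.
FLUENCY_BEHAVIORS = frozenset({
    "goal_statement", "context_provision", "output_spec",
})

def behaviors_to_add(prompt_behaviors: set[str],
                     config_behaviors: set[str]) -> set[str]:
    """Behaviors the optimized prompt should incorporate: not already
    in the prompt and not already covered by project conventions."""
    return set(FLUENCY_BEHAVIORS) - prompt_behaviors - config_behaviors
```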
## What Sets It Apart
Existing Claude Code monitoring tools measure what happened — token counts, costs, error rates. CodeFluent is the first tool to analyze how you interact with AI and whether your patterns build or erode skills. Every score maps to published population benchmarks from Anthropic’s research, not subjective heuristics.
All data stays on your machine. The only external calls are to the Anthropic API for scoring.
## Technology Stack
- VS Code Extension: TypeScript, VS Code WebviewViewProvider
- Web App: Python, FastAPI, uv
- Frontend: Vanilla HTML/CSS/JS, Chart.js
- Scoring: Anthropic API (Claude Sonnet)
- Usage Data: ccusage
- GitHub Integration: gh CLI
- Testing: Jest + ts-jest, pytest (662 tests including security-focused suites)
- CI/CD: GitHub Actions with automated testing, security audit (pip-audit), and marketplace publishing
## Supported Platforms
| Platform | VS Code Extension | Web App |
|---|---|---|
| Linux | Yes | Yes |
| macOS | Yes | Yes |
| Windows | Yes | Yes |
## Research Foundations
- Anthropic AI Fluency Index (Feb 2026) — 11 behavioral indicators and population benchmarks
- Coding Skills Formation with AI (Jan 2026) — 6 coding interaction patterns and quality analysis
- Claude Code Best Practices — Practical guidelines for effective AI collaboration