Claude Code Insights

3,312 messages across 292 sessions (362 total) | 2026-01-14 to 2026-04-17

At a Glance
What's working: You've built an impressively complete PR lifecycle workflow — review, fix, verify, communicate — that you routinely execute in single sessions with high success rates. Your ability to layer context onto reviews by cross-referencing epics, stacked PRs, and merge order produces genuinely useful findings, not just surface-level nits. You're also one of the more effective users at recovering when Claude goes sideways, especially in complex git/monorepo scenarios. Impressive Things You Did →
What's hindering you: On Claude's side, the biggest issue is wrong initial approaches — Claude frequently picks the wrong branch, wrong test strategy, wrong subsystem to investigate, or wrong environment to run commands in, and you end up spending real time redirecting. On your side, the monorepo setup (Sail, submodules, worktrees across sub-repos, stacked PR chains) is complex enough that Claude can't reliably infer the right context without explicit guidance, and it seems like that context isn't always provided up front or codified in project instructions. Where Things Go Wrong →
Quick wins to try: Turn your PR review workflow into a custom slash-command skill — you do it constantly and the steps are consistent enough to standardize, which would also reduce wrong-approach friction by baking in the right defaults. Also, set up hooks or update your CLAUDE.md to enforce environment constraints (e.g., always use Sail for artisan commands, always check sub-repo worktrees not just root) so Claude stops making the same infrastructure mistakes. Features to Try →
Ambitious workflows: As models get more reliable at multi-step execution, your PR review → fix → test → push → reply workflow is a prime candidate for full autonomy — one command that runs the entire pipeline without you steering between steps. Longer term, parallel sub-agents that each hold context for a specific sub-repo in your monorepo could eliminate the branch confusion and worktree mishaps that plague your git operations today, letting you kick off coordinated cross-repo work as a single task. On the Horizon →
3,312
Messages
+74,023/-9,733
Lines
1,109
Files
65
Days
51
Msgs/Day

What You Work On

PR Reviews & Code Quality ~45 sessions
Extensive use of Claude Code for structured PR reviews with cross-referencing against epics, sub-tasks, and stacked PRs. Claude performed multi-pass reviews, posted inline findings, drafted review responses to colleagues, and directly implemented must-fix code changes on PR branches. This was the dominant workflow, often involving iterative refinement of review tone, factual accuracy, and finding severity.
Feature Implementation & Bug Fixes ~40 sessions
Claude Code was used to implement features and fixes across a full-stack monorepo (PHP/Laravel backend, JavaScript frontend) including import worker graceful shutdown, CAPTCHA configuration, cookie lifetime changes, revenue labeling standardization, Failed status with retry UI, and database constraint removals. Work typically followed a pattern of planning, multi-file edits, test writing, PR creation, and CI fix-up, with heavy use of Bash and Edit tools.
Git Operations & Monorepo Management ~25 sessions
Significant time spent on git workflow tasks within a monorepo structure containing submodules and worktrees. Claude resolved merge conflicts, managed worktree creation/cleanup across sub-repos, handled branch tracking, forward merges across PR chains, and submodule pointer conflicts. Friction frequently arose from worktree path issues, wrong branch contexts, and submodule navigation challenges.
Debugging & Production Issues ~20 sessions
Claude Code assisted with diagnosing production issues including API timeouts, silently dying import workers (identifying missing EnsureConnection as root cause), database migration errors, and Sentry-reported bugs. Debugging sessions leveraged Bash and Read tools heavily for log analysis and codebase investigation, though some sessions were interrupted before fixes were fully implemented.
Tooling, Automation & Project Planning ~15 sessions
Claude Code was used to maintain and fix internal tooling such as session-renaming scripts, insights report publishing to GitHub Pages, quarterly planner updates, and clipboard integration for WSL environments. It also helped with GitHub issue creation, PR description generation, and skill/command documentation. These sessions often hit edge cases in scripts requiring iterative debugging.
What You Wanted
Code Review
35
Debugging
34
Git Operations
22
Bug Fix
21
Feature Implementation
16
PR Creation
14
Top Tools Used
Bash
11363
Read
3228
Edit
2073
Grep
1219
Write
692
TodoWrite
678
Languages
Markdown
1912
Shell
307
JSON
180
YAML
43
JavaScript
29
Python
17
Session Types
Multi Task
61
Iterative Refinement
55
Single Task
45
Quick Question
4
Exploration
3

How You Use Claude Code

You are a power user running a sophisticated, multi-repo monorepo workflow with Claude Code as your primary development partner across nearly 300 sessions over three months. Your interaction style is distinctly task-oriented and delegation-heavy — you routinely hand Claude complex, multi-step workflows like "review this PR, implement the fixes, commit, push, and post the review response" and expect end-to-end execution. With 11,363 Bash calls dwarfing all other tools, you clearly prefer Claude to execute directly in your environment rather than just advise, and the heavy use of TodoWrite (678 calls) shows you rely on Claude to track and manage structured plans. Your top goals — code review, debugging, git operations, bug fixes, and PR creation — paint a picture of someone who uses Claude as a full-cycle development collaborator, not just a code generator.

Your interaction pattern is iterative but impatient with wrong directions. The friction data reveals a recurring theme: Claude frequently takes a wrong approach (107 instances), and you course-correct quickly — redirecting Claude from unit tests to real e2e tests, from cron job analysis to the actual People API, or from the wrong git branch to the correct one. You don't over-specify upfront; instead, you let Claude run and then steer when it drifts. This is evident in sessions where Claude initially ran migrations outside Docker, checked the wrong repos for worktrees, or drafted messages that were "too robotic" — in each case you gave pointed corrections and expected immediate recovery. Despite 75 dissatisfied moments and 19 frustrated ones, your overall success rate is exceptional (101 fully achieved, 42 mostly achieved out of 168 analyzed), suggesting you've learned exactly when and how to intervene to keep Claude productive.

What's particularly distinctive is your PR-centric workflow orchestration. You regularly chain together PR reviews with epic cross-referencing, direct code fixes pushed to branches, review response drafting, and CI failure resolution — all in single sessions. You treat Claude as a junior developer who needs supervision but can handle the mechanical work: you provide strategic context ("check this against the epic's sub-tasks", "verify this defends one UUID → one email") while Claude handles the implementation grunt work across your multi-repo setup. The 380 commits across this period — more than two per analyzed session — confirm you're shipping real code through Claude at a remarkable pace, with Markdown (1,912 files) dominating your language profile because so much of your work involves documentation, PR descriptions, and review write-ups rather than raw code generation.

Key pattern: You delegate complex, multi-step PR and git workflows end-to-end, let Claude execute autonomously, and course-correct sharply when it drifts off track — functioning as a hands-off manager who intervenes only at decision points.
User Response Time Distribution
2-10s
134
10-30s
317
30s-1m
470
1-2m
519
2-5m
532
5-15m
347
>15m
213
Median: 93.2s • Average: 292.5s
Multi-Clauding (Parallel Sessions)
102
Overlap Events
132
Sessions Involved
20%
Of Messages

You run multiple Claude Code sessions simultaneously. Multi-clauding is detected when sessions overlap in time, suggesting parallel workflows.

User Messages by Time of Day
Morning (6-12)
692
Afternoon (12-18)
1803
Evening (18-24)
689
Night (0-6)
128
Tool Errors Encountered
Command Failed
903
Other
418
User Rejected
119
File Not Found
31
File Too Large
25
Edit Failed
22

Impressive Things You Did

Over the past three months, you've run 292 sessions with a 60% fully-achieved rate and 380 commits, building a deeply integrated Claude Code workflow around PR management, code review, and monorepo operations.

End-to-End PR Lifecycle Management
You've built a remarkably complete PR workflow where Claude handles reviews, implements fixes from review findings, runs tests, posts review replies, manages forward merges across PR chains, and even drafts polished review summaries. Your ability to chain these steps—review, fix, verify, communicate—in single sessions consistently produces fully-achieved outcomes rated as essential.
Epic-Aware Contextual Code Reviews
You go beyond surface-level reviews by having Claude cross-reference PRs against epic sub-tasks, stacked PR context, and merge order dependencies. This iterative refinement approach—where you feed Claude additional context and have it update findings accordingly—produces nuanced reviews that catch real architectural issues rather than just style nits.
Complex Monorepo and Worktree Operations
You confidently leverage git worktrees, submodule management, and multi-repo coordination across your monorepo setup, using Claude to resolve merge conflicts, manage branch tracking, and clean up worktrees across sub-repos. Despite the inherent complexity of this environment, you consistently guide Claude through tricky git scenarios and recover quickly when it takes a wrong approach.
What Helped Most (Claude's Capabilities)
Multi-file Changes
40
Good Debugging
34
Proactive Help
32
Good Explanations
22
Correct Code Edits
21
Fast/Accurate Search
14
Outcomes
Not Achieved
2
Partially Achieved
21
Mostly Achieved
42
Fully Achieved
101
Unclear
2

Where Things Go Wrong

Your sessions reveal a recurring pattern where Claude takes wrong initial approaches, struggles with your monorepo/Docker environment specifics, and makes errors in git workflow operations that require you to redirect and correct.

Wrong Initial Approach Requiring Redirection
More than 100 times, Claude took a wrong approach before arriving at the right one, costing you significant time in correction and redirection. You could reduce this by front-loading more context in your prompts—specifying exactly what kind of test you want, which part of the system to investigate, or what branch to work from—rather than relying on Claude to infer your intent.
  • Claude ran unit tests when you wanted a real end-to-end import test, requiring you to explicitly clarify 'I was referring to a proper import'—wasting a round-trip on the wrong test type entirely.
  • Claude spent significant time analyzing cron workers for API timeout investigation that you determined were irrelevant distractions, and the resulting inquiry doc was too long for its intended Slack audience.
Monorepo and Environment Misunderstanding
Claude repeatedly fails to account for your specific infrastructure—Docker/Sail for running commands, monorepo structure with submodules, and worktree layouts across sub-repos. You could mitigate this by ensuring your CLAUDE.md or project instructions explicitly document that all artisan commands must run via Sail, that worktrees exist in sub-repos not just the root, and how your monorepo branches relate to sub-repo branches.
  • Claude tried to run a migration with plain 'php artisan' instead of through Docker/Sail, causing a false failure that required you to correct the execution context.
  • A session-renaming script incorrectly renamed 112 monorepo sessions to the same PR title because Claude didn't account for shared branches across the monorepo, and later missed sessions by only checking the first text block for GitHub URLs.
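A guard in that spirit could be codified as a small shell helper — a dry-run sketch under the assumption that Sail lives at its default `./vendor/bin/sail` path (the helper name `sail_wrap` is hypothetical):

```shell
# Hypothetical helper: print the artisan command that would run,
# preferring the Sail wrapper whenever it exists in the current repo.
sail_wrap() {
  if [ -x "./vendor/bin/sail" ]; then
    # Inside a Sail project: route the command through the container
    echo "./vendor/bin/sail artisan $*"
  else
    # No Sail wrapper found: fall back to host PHP
    echo "php artisan $*"
  fi
}

sail_wrap migrate
```

Printing the command rather than executing it keeps the sketch safe to try in a hook or CLAUDE.md check before wiring it up for real.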
Git Workflow and Branch Management Errors
Claude frequently makes mistakes with git operations—creating worktrees from wrong branches, generating PRs against wrong repos, and misunderstanding push/cleanup instructions—which is notable since git_operations and pr_creation are among your top task types. You could reduce this friction by specifying the exact source branch, target branch, and repo in your initial prompt rather than assuming Claude will infer the correct git context.
  • Claude created a git worktree from the wrong branch instead of directly from the PR branch, forcing you to correct it and redo the operation, and separately the PR description generator ran against the monorepo branch instead of the worktree branch.
  • Claude tried to add a verified flash fix to the PR branch when master already had it, and misunderstood your push/worktree cleanup instruction by assuming you wanted to push from the worktree instead of removing it and switching branches on main.
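The "worktree from the wrong branch" failure mode can be avoided by deriving the worktree directly from the PR's head branch. A dry-run sketch (the helper name, paths, and PR number are hypothetical; in practice the branch name could come from `gh pr view <number> --json headRefName`):

```shell
# Hypothetical helper: print the git commands that would create a
# worktree from a PR's head branch instead of from main.
worktree_from_pr() {
  pr="$1"
  branch="$2"
  echo "git fetch origin $branch"
  echo "git worktree add ../wt-pr-$pr origin/$branch"
}

worktree_from_pr 123 feature/retry-ui
```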
Primary Friction Types
Wrong Approach
107
Buggy Code
48
Misunderstood Request
39
Excessive Changes
16
API Errors
8
Tool Error
6
Inferred Satisfaction (model-estimated)
Frustrated
19
Dissatisfied
75
Likely Satisfied
389
Satisfied
94
Happy
6

Existing CC Features to Try

Suggested CLAUDE.md Additions

Just copy this into Claude Code to add it to your CLAUDE.md.

Claude repeatedly tried running commands outside the Sail container (migrations, tests), causing false failures and requiring user correction.
Multiple sessions had friction from wrong branch detection in worktrees, PR description generators targeting wrong repos, and broken GitHub references in comments.
Claude repeatedly only checked the main monorepo when the user needed operations across sub-repos (worktree cleanup, session renaming, branch management), requiring redirection.
Claude ran unit tests when the user wanted real E2E import tests, ran tests outside Docker, and admitted its own test suggestions were vacuous — all recurring friction points.
Multiple sessions required 2+ rounds of correction on drafted messages being too robotic, too long, or making wrong assumptions about what happened.
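Translated into directive form, those observations might become CLAUDE.md rules along these lines (a sketch only — container names, paths, and repo layout are assumptions drawn from the friction notes above):

```markdown
## Environment
- Run all artisan and test commands through Sail (`./vendor/bin/sail ...`);
  never bare `php artisan`.

## Git & Monorepo
- Worktrees exist in sub-repos, not only the monorepo root; check sub-repo
  worktrees before any cleanup, renaming, or branch operation.
- State which repo and branch you are operating in before any git command.
- Generate PR descriptions from the sub-repo worktree branch, never the
  monorepo root.

## Communication drafts
- Keep drafted messages short and plain-spoken; do not assert what happened
  without evidence from the session.
```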

Just copy this into Claude Code and it'll set it up for you.

Custom Skills
Reusable prompt workflows triggered by a single /command.
Why for you: You already have a /publish-insights skill and do PR reviews (35 sessions), PR creation (14 sessions), and planner updates repeatedly. Codifying your PR review format, PR creation checklist (correct branch, correct repo, Sail tests, description generator), and commit workflows as skills would eliminate the recurring friction from wrong branches and missed steps.
mkdir -p .claude/skills/pr-create && cat > .claude/skills/pr-create/SKILL.md << 'EOF'
---
name: pr-create
description: Create a draft PR with the correct branch, Sail tests, and description generator
---

# Create PR Skill
1. Confirm current branch and worktree context (show branch name, ask user to confirm)
2. Run tests inside Sail container: `./vendor/bin/sail test`
3. Run PR description generator against THIS repo's branch (not monorepo root)
4. Create draft PR with `gh pr create --draft`
5. Show PR URL to user
EOF
Hooks
Auto-run shell commands at specific lifecycle events (pre-edit, post-edit, etc.).
Why for you: With 48 buggy_code and 107 wrong_approach friction events, auto-running lint and type checks after edits would catch issues before they compound. You could also auto-verify the correct Docker/Sail context before running any artisan commands.
# Add to .claude/settings.json:
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "./vendor/bin/sail exec laravel.test php -l $CLAUDE_FILE_PATH 2>/dev/null || true"
          }
        ]
      }
    ]
  }
}
MCP Servers
Connect Claude to external tools like GitHub, Sentry, and Slack via MCP.
Why for you: You frequently review PRs (35 sessions), create GitHub issues, check Sentry errors, and draft Slack messages. A GitHub MCP server would eliminate the recurring `gh` CLI deprecation errors and GraphQL issues you hit. A Sentry MCP server would let Claude pull error data directly instead of manual investigation.
claude mcp add github -- npx -y @modelcontextprotocol/server-github
claude mcp add sentry -- npx -y @sentry/mcp-server

New Ways to Use Claude Code

Just copy this into Claude Code and it'll walk you through it.

Wrong-approach friction dominates your sessions
107 'wrong_approach' friction events account for nearly half of all recorded friction — invest in guardrails to prevent Claude from going down rabbit holes.
Over 45% of your friction comes from Claude taking a wrong approach: investigating cron workers instead of the People API, running unit tests instead of E2E, creating worktrees from wrong branches, checking only the monorepo root. These are preventable with better upfront context in CLAUDE.md. The pattern suggests Claude lacks domain-specific knowledge about your project structure, so front-loading that context will have outsized impact on your 292 sessions.
Paste into Claude Code:
Before making any changes, briefly outline your plan and which repo/branch/container you'll work in. Wait for my confirmation before proceeding.
Your PR review workflow is your highest-value automation target
Code review (35) + PR creation (14) = your top workflow. Standardize it into a repeatable skill with built-in checks.
You spend more sessions on PR reviews than any other task, and they consistently achieve 'fully_achieved' with 'essential' helpfulness. But friction creeps in from wrong branches, stale context, and format iteration. A structured /review skill that auto-fetches PR context, cross-references epic/sub-tasks (which you do often), and outputs in your preferred format would eliminate the 2-3 correction rounds you currently need per review session.
Paste into Claude Code:
Review PR #XXX. First, fetch the PR diff and linked epic/issues for context. Then produce findings in this format: severity (must-fix/suggestion/nit), file:line, finding, and recommendation. Cross-reference against any stacked PRs in the same epic.
Leverage Task Agents for your codebase exploration sessions
You already use 232 Agent invocations, but your debugging sessions (34) often stall when Claude explores the wrong subsystem.
Sessions like the API timeout diagnosis show Claude spending significant time on irrelevant cron workers before being redirected. Instead of letting Claude sequentially explore, explicitly ask it to spawn parallel sub-agents to investigate multiple hypotheses simultaneously. With your monorepo spanning frontend, backend, and multiple sub-repos, parallel exploration would significantly reduce the time-to-diagnosis that currently causes 'partially_achieved' outcomes in debugging sessions.
Paste into Claude Code:
Use sub-agents to investigate this issue in parallel: Agent 1 should check the People API controller and middleware for timeouts. Agent 2 should check the database queries and indexes. Agent 3 should check the queue/worker configuration. Report back findings from all three before we decide on a direction.

On the Horizon

Your 292 sessions over three months reveal a power user whose workflow is ripe for autonomous, multi-agent orchestration that could eliminate the friction patterns showing up in 107 wrong-approach incidents and dramatically accelerate your PR-heavy development cycle.

Autonomous PR Review-Fix-Verify Pipeline
Your top workflows—code review (35), debugging (34), bug fixes (21), and PR creation (14)—are sequential steps that Claude already handles well individually but could execute as a single autonomous pipeline. Imagine kicking off one command that reviews a PR, implements all must-fix findings, runs tests in the correct environment (Sail, not host), verifies coverage, commits, pushes, and posts the review response—eliminating the back-and-forth that caused friction in multiple sessions. With 40 successful multi-file changes already proven, this pipeline could chain those capabilities end-to-end without human intervention between steps.
Getting started: Use Claude Code's sub-agent spawning (Agent tool, already used 232 times) combined with TodoWrite for structured task tracking. Set up a CLAUDE.md with environment rules (e.g., always use Sail for artisan commands) to prevent recurring friction.
Paste into Claude Code:
Review PR #[NUMBER] thoroughly using our /pr-review skill, then autonomously: 1) Create a worktree from the PR branch (not from main), 2) Implement ALL must-fix findings as code changes, 3) Run the full test suite inside Sail (never bare php artisan), 4) If tests fail, debug and fix until green, 5) Commit with conventional commit messages referencing the PR, 6) Push to the PR branch, 7) Post a review response on GitHub summarizing what was fixed. Use TodoWrite to track each finding as a checklist item. Do NOT ask me for confirmation between steps—only stop if you encounter an ambiguous architectural decision.
Parallel Agents for Monorepo Operations
Your monorepo workflow across lindris-frontend, lindris-backend, and related sub-repos caused repeated friction—wrong branches, submodule confusion, worktree mismanagement, and PR descriptions generated against the wrong repo. Parallel sub-agents could each own a sub-repo context, independently resolve conflicts, create PRs with correct branch targeting, and coordinate merges across your stacked PR chains. This would have prevented the incident where 112 sessions were incorrectly renamed and the multiple worktree/branch mishaps that plagued your sessions.
Getting started: Spawn dedicated Agent sub-tasks for each sub-repo operation, with explicit repo-path context passed to each. Codify your monorepo structure and branch conventions in CLAUDE.md so agents never confuse monorepo root with sub-repo worktrees.
Paste into Claude Code:
I need coordinated changes across our monorepo. Spawn parallel sub-agents for each task: Agent 1: In lindris-backend/, create a feature branch from the correct base (check existing PR chain first), implement [CHANGES], run tests via Sail. Agent 2: In lindris-frontend/, create a matching feature branch, implement [FRONTEND CHANGES], run tests. Agent 3: Coordinate—once both pass, create draft PRs for each with cross-references, update submodule pointers in the monorepo root, and handle the monorepo commit. CRITICAL RULES: Always verify which repo you're in before any git operation. Never create worktrees from main when a PR branch exists. PR descriptions must be generated from the sub-repo worktree, not the monorepo root. Report back a unified status when all agents complete.
Test-Driven Debugging with Iterative Verification
Your data shows 34 debugging sessions and 48 buggy-code friction events, with standout successes like improving regression detection from 2/9 to 5/9 failing tests and catching a real bug during manual CAPTCHA testing. An autonomous debug loop could take a failing test or Sentry error, form a hypothesis, write a regression test that reproduces the failure, implement the fix, verify the test goes green, then expand to related edge cases—all without human steering. The silent-death import worker diagnosis and the API timeout investigation both stalled due to rabbit-holing; a structured test-first approach with automated verification gates would keep the agent on track.
Getting started: Combine Claude Code's Bash tool (11,363 invocations shows heavy CLI usage) with a structured TodoWrite plan that enforces: reproduce → test → fix → verify → expand. Use the Agent tool to spawn a verification sub-agent that independently confirms the fix.
Paste into Claude Code:
Debug this issue: [DESCRIBE BUG OR PASTE SENTRY ERROR]. Follow this autonomous loop strictly: 1) REPRODUCE: Find or write a minimal test that demonstrates the failure (run inside Sail). Confirm it fails. If you can't reproduce it, investigate logs and database state before hypothesizing. 2) HYPOTHESIZE: Write your root-cause theory in a TodoWrite entry. 3) REGRESSION TEST: Write a focused test that will fail with the bug present and pass with the fix. Run it—confirm it fails. 4) FIX: Implement the minimal fix. Do NOT make excessive or unrelated changes. 5) VERIFY: Run the regression test—confirm it passes. Run the full test suite—confirm nothing else broke. 6) EXPAND: Identify 2-3 related edge cases, write tests for them, fix any that fail. 7) REPORT: Summarize root cause, fix, test coverage added, and confidence level. Do not go down rabbit holes investigating unrelated systems. If your first hypothesis doesn't pan out after 10 minutes of investigation, step back and list the top 3 alternative theories before continuing.
"Claude honestly admitted its own test suggestions were 'vacuous' when the user questioned them during a PR review"
During a PR review continuation for PR #520, when the user pushed back on Claude's test recommendations, Claude candidly acknowledged that its suggestions had no real substance — a rare moment of AI self-awareness that apparently earned it a 'mostly_achieved' rating anyway.