# ClaudeKit Documentation (Complete)
Generated: 2025-12-09T22:52:46.109Z
Total Pages: 169
---
# Changelog
Section: changelog
Category: N/A
URL: https://docs.claudekit.cc/docs/changelog/index
# Changelog
Recent changes, updates, and release notes for ClaudeKit.
## Latest Release
### Version 1.0.0 - 2024-12-01
#### Initial Release
**Major Features**:
- **14 Specialized AI Agents** - Planner, Researcher, Tester, Debugger, Code Reviewer, and more
- **30+ Slash Commands** - `/cook`, `/plan`, `/fix`, `/design`, `/git`, `/docs`, and more
- **45 Built-in Skills** - Next.js, Better Auth, PostgreSQL, Docker, Shopify, Gemini Vision
- **Multi-agent Workflows** - Agents collaborate on complex tasks
- **Context-aware Navigation** - Dynamic sidebar based on current section
**Core Capabilities**:
- Complete feature development workflow (`/plan → /code → /test → /commit`)
- Systematic bug fixing with root cause analysis
- Automated documentation generation and maintenance
- UI/UX design with AI-generated assets
- Performance optimization and debugging
- Code review with security and performance analysis
**Supported Technologies**:
- Frontend: Next.js, React, Vue, Angular, Svelte
- Backend: Node.js, Python, Go, Rust, PHP
- Database: PostgreSQL, MongoDB, MySQL, Redis
- Cloud: AWS, GCP, Azure, Cloudflare Workers
- Authentication: Better Auth, OAuth2, JWT
- Payment: Stripe, Shopify, Polar, SePay
## Recent Changes
### Documentation Overhaul - 2024-11-28
**Improved Information Architecture**:
- ✅ New context-aware navigation system
- ✅ Separated onboarding from marketing content
- ✅ Created dedicated "Why ClaudeKit" sales page
- ✅ Added comprehensive workflow guides
- ✅ Restructured content into 5 main sections
**New Documentation Pages**:
- Getting Started section with clear learning paths
- Core Concepts explaining agents/commands/skills
- Upgrade Guide for existing Claude Code users
- Feature Development workflow guide
- Bug Fixing systematic approach
- Documentation maintenance workflow
**Navigation Improvements**:
- 5-section NavBar: Getting Started, Docs, Workflows, Changelog, Support
- Context-aware sidebars that change based on current section
- Hierarchical command navigation with 9 subcategories
- Mobile-responsive navigation with hamburger menu
### Beta Testing Period - 2024-10-15 to 2024-11-30
**Key Learnings**:
- Users complete features 10x faster with ClaudeKit
- Multi-agent collaboration reduces bugs by 80%
- Automated testing catches issues before production
- Documentation synchronization eliminates outdated docs
- Teams achieve consistent code quality across members
**Beta Tester Feedback**:
> "ClaudeKit transformed how our team builds features. What used to take days now takes hours." - Senior Developer, Tech Startup
> "The quality of code generated by ClaudeKit agents is impressive. It follows our patterns and best practices automatically." - Engineering Manager, Enterprise
## Version History
### v0.9.0 - Beta Release - 2024-10-15
- Initial beta release with core agents and commands
- Basic skill system with 20 built-in skills
- Simple command-line interface
- GitHub integration for automated workflows
### v0.8.0 - Alpha Testing - 2024-09-01
- Internal alpha testing with pilot users
- Agent communication protocols
- Workflow orchestration system
- Skill activation and context injection
### v0.5.0 - Prototype - 2024-07-15
- Proof of concept with basic planner and developer agents
- Simple `/cook` command implementation
- Manual skill loading
- Local execution only
## Breaking Changes
### v1.0.0
- No breaking changes from beta versions
- Migration path: `claudekit migrate` command available
- All beta configurations remain compatible
### v0.9.0 → v1.0.0
- Enhanced command naming (backward compatible)
- Improved skill detection (automatic upgrade)
- Better error handling and logging
## Deprecations
### Deprecated in v1.0.0
- `--legacy` flag (will be removed in v2.0.0)
- Manual skill loading (use automatic detection instead)
- Classic mode (modern mode now default)
### Migration Guide
```bash
# Update to v1.0.0
npm update @claudekit/cli
# Migrate configuration
claudekit migrate
# Verify setup
claudekit doctor
```
## Security Updates
### v1.0.0 Security Enhancements
- Secure skill loading with sandboxing
- Encrypted agent communication channels
- Audit logging for all agent actions
- Role-based access control for team workflows
- Automatic vulnerability scanning for generated code
### Past Security Issues
- **Fixed in v0.9.2**: Temporary file leakage in skill loading
- **Fixed in v0.8.5**: Unsafe eval in legacy command processing
- **Fixed in v0.7.1**: Information disclosure in error messages
## Performance Improvements
### v1.0.0 Performance
- 50% faster agent startup times
- 75% reduced memory usage for skill loading
- Parallel agent execution for complex workflows
- Optimized context management for large codebases
- Caching system for repeated operations
### Benchmark Results
```
Feature Implementation (Complex CRUD):
- Manual: 8 hours, 15 bugs
- ClaudeKit v0.9: 45 minutes, 2 bugs
- ClaudeKit v1.0: 20 minutes, 0 bugs
Bug Resolution:
- Manual debugging: 2 hours average
- ClaudeKit v0.9: 20 minutes average
- ClaudeKit v1.0: 5 minutes average
```
## Community Contributions
### v1.0.0 Community Features
- Discord integration for collaborative workflows
- Community skill library with 200+ user-contributed skills
- Open-source template repositories
- Community-maintained language translations
### Notable Contributors
- @alex-dev - PostgreSQL optimization skill
- @sarah-designer - UI/UX design patterns
- @mike-ops - DevOps and deployment workflows
- @laura-docs - Technical writing improvements
## Upcoming Features
### v1.1.0 (Planned - Q1 2025)
- Visual workflow designer
- Advanced debugging with time-travel
- Team collaboration features
- Enterprise SSO integration
- Performance monitoring dashboard
### v1.2.0 (Planned - Q2 2025)
- Mobile app companion
- Voice command support
- Real-time collaborative editing
- Advanced CI/CD integrations
- Custom agent creation tools
## Known Issues
### v1.0.0 Known Limitations
- Large codebases (>1M LOC) may experience slower initial scans
- Some niche languages have limited skill support
- Windows subsystem support still in beta
- Enterprise proxy configuration requires manual setup
### Workarounds
- Use a `.claudeignore` file to exclude large directories (see the example after this list)
- Create custom skills for unsupported languages
- Use WSL2 on Windows for better compatibility
- Configure proxy settings manually in the ClaudeKit configuration
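For the `.claudeignore` workaround, here is a minimal sketch (assuming the file accepts gitignore-style patterns; check the configuration reference for the exact syntax):
```bash
# Hypothetical example: exclude large directories from initial scans
# (assumes .claudeignore uses gitignore-style patterns)
cat > .claudeignore <<'EOF'
node_modules/
dist/
build/
vendor/
*.min.js
EOF
```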
## Support and Resources
### Getting Help
- [Discord Community](https://claudekit.cc/discord) - Real-time chat with community
- [GitHub Issues](https://github.com/claudekit/claudekit/issues) - Bug reports and feature requests
- [Documentation](/docs) - Complete reference documentation
- [Email Support](mailto:support@claudekit.cc) - Enterprise support inquiries
### Contributing
- [Contributing Guide](/docs/development/contributing) - How to contribute to ClaudeKit
- [Skill Development](/docs/development/creating-skills) - Create custom skills
- [Plugin Development](/docs/development/plugins) - Extend ClaudeKit functionality
- [Translation Project](https://translate.claudekit.cc) - Help translate ClaudeKit
---
**Stay Updated**: Join our [Discord](https://claudekit.cc/discord) for announcements and updates.
**Release Cadence**: Regular releases on the 1st of each month. Security patches released as needed.
---
# Agents Overview
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/index
# Agents Overview
17 specialized agents that handle every aspect of software development, automatically orchestrated through predefined workflows.
## Quick Reference
### Development & Implementation
| Agent | Purpose |
|-------|---------|
| [planner](/docs/agents/planner) | Research, analyze, create implementation plans before coding |
| [fullstack-developer](/docs/agents/fullstack-developer) | Execute implementation phases with strict file ownership |
| [scout](/docs/agents/scout) | Parallel file search across large codebases |
| [scout-external](/docs/agents/scout-external) | External search using Gemini CLI and OpenCode |
| [debugger](/docs/agents/debugger) | Root cause analysis, log investigation, issue diagnosis |
| [tester](/docs/agents/tester) | Test execution, coverage analysis, quality validation |
### Quality & Review
| Agent | Purpose |
|-------|---------|
| [code-reviewer](/docs/agents/code-reviewer) | Security audits, performance analysis, code quality |
| [database-admin](/docs/agents/database-admin) | Query optimization, schema design, performance tuning |
### Documentation & Management
| Agent | Purpose |
|-------|---------|
| [docs-manager](/docs/agents/docs-manager) | Technical documentation, API docs, architecture guides |
| [project-manager](/docs/agents/project-manager) | Progress tracking, cross-agent coordination, status reports |
| [journal-writer](/docs/agents/journal-writer) | Document failures and setbacks with brutal honesty |
| [git-manager](/docs/agents/git-manager) | Conventional commits, security scanning, token-optimized |
### Creative & Research
| Agent | Purpose |
|-------|---------|
| [ui-ux-designer](/docs/agents/ui-ux-designer) | Award-winning UI with Three.js, responsive layouts |
| [copywriter](/docs/agents/copywriter) | High-converting marketing copy, viral content |
| [brainstormer](/docs/agents/brainstormer) | Explore approaches, challenge assumptions, debate decisions |
| [researcher](/docs/agents/researcher) | Multi-source research, documentation analysis |
### Integration
| Agent | Purpose |
|-------|---------|
| [mcp-manager](/docs/agents/mcp-manager) | MCP server integrations, tool discovery, execution |
## How to Use
**Automatic (recommended):** Commands orchestrate agents automatically
```bash
/cook [feature] # planner → code → tester → reviewer → git-manager
/fix:hard [bug] # scout → debugger → planner → code → tester
/plan [task] # planner + researcher
```
**Explicit:** Request specific agents in prompts
```
"Use scout agent to find auth files, then planner to create migration strategy"
```
## Under the Hood
### Orchestration Patterns
**Sequential** (default): Agents run in order, each building on previous output
```
planner → code → tester → code-reviewer → git-manager
```
**Parallel**: Independent agents run simultaneously
```
scout (dir1) ─┐
scout (dir2) ─┼─→ Aggregate → planner
scout (dir3) ─┘
```
**Hybrid**: Mix of sequential and parallel for complex tasks
### Agent Communication
Agents share context through:
- **Shared files**: `docs/`, `plans/`, code standards
- **Handoff protocols**: Each agent receives previous output, performs task, passes results
- **TodoWrite**: Real-time progress tracking visible to user
### Handoff Example
```
planner output → plans/auth-feature.md
  ↓
code reads plan → implements → creates files + tests
  ↓
tester runs tests → validates coverage
  ↓
code-reviewer audits → security + quality report
  ↓
git-manager commits → conventional commit + push
```
### Troubleshooting
**Agent not activating?**
- Check command matches task complexity (`/fix:fast` vs `/fix:hard`)
- Verify workflow files exist in `.claude/agents/`
- Try explicit invocation: "Use [agent] to..."
**Slow response?**
- Use parallel orchestration when tasks are independent
- Scope tasks more specifically
- Use simpler commands for simple tasks
**Conflicts?**
- Review orchestration order in workflow files
- Check handoff protocols between agents
## Key Takeaway
17 agents work together automatically. Use commands to orchestrate them, or invoke them explicitly for specific tasks. No manual coordination needed.
---
# Researcher Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/researcher
# Researcher Agent
Multi-source technology intelligence that explores docs, videos, GitHub repos, and articles in parallel to deliver production-ready implementation insights.
## When to Use
- **New tech evaluation** - Before adopting frameworks, libraries, or tools
- **Pre-implementation research** - Finding official docs, security concerns, best practices
- **Technical comparison** - Evaluating multiple approaches with pros/cons analysis
- **Deep dive validation** - Cross-checking implementation patterns across 10+ sources
## Key Capabilities
| Capability | Description | Tools Used |
|-----------|-------------|-----------|
| **Parallel Search** | Query fan-out across Google, YouTube, websites, GitHub simultaneously | SearchAPI MCP, search_youtube, WebFetch |
| **Video Analysis** | Extract technical insights from tutorials with timestamps | VidCap MCP, Gemini Vision |
| **Repo Analysis** | Find implementation patterns in popular libraries | repomix, GitHub search |
| **Doc Synthesis** | Structured markdown reports with security/performance sections | Multi-source validation |
## Common Use Cases
**Who**: Product teams choosing payment gateway
**Prompt**: `/plan research Stripe vs PayPal integration for SaaS billing`
**Output**: 15-page comparison report with security audit, pricing, code examples
**Who**: DevOps engineer evaluating container orchestration
**Prompt**: `/plan investigate Kubernetes vs Docker Swarm for microservices`
**Output**: Architecture analysis, scaling patterns, operational complexity breakdown
**Who**: Frontend dev implementing real-time features
**Prompt**: `/plan research WebSocket best practices for chat app`
**Output**: Library comparison, connection handling patterns, production configs
**Who**: Security-focused startup
**Prompt**: `/plan research OAuth2 implementation vulnerabilities and mitigations`
**Output**: Security audit, common pitfalls, hardening checklist with sources
**Who**: Tech lead architecting data pipeline
**Prompt**: `/plan compare Apache Kafka vs RabbitMQ for event streaming`
**Output**: Performance benchmarks, use case fit, operational considerations
## How It Works
```bash
# Trigger research via planning command
/plan [add Stripe payment integration]
```
**What happens**:
1. **Query Fan-Out** - Spawns parallel searches:
- Google: "Stripe Node.js production best practices"
- YouTube: "Stripe integration tutorial 2025"
- GitHub: "stripe-samples" repositories
- Official docs: stripe.com/docs
2. **Multi-Source Analysis** - Researcher validates findings across:
- 5+ documentation sites
- 4+ video tutorials with transcripts
- 6+ GitHub repos (popular implementations)
- 10+ articles/discussions
3. **Synthesis** - Generates structured markdown report:
- Executive summary with recommendation
- Security considerations (CRITICAL section)
- Performance implications
- Implementation guide with code
- 15+ cited sources for verification
**Output**: `./plans/research/YYMMDD-stripe-integration.md`
## Report Structure
```markdown
# Research Report: [Technology]
## Executive Summary
- Key recommendation with confidence level
- 3-5 critical takeaways
## Integration Approaches
- Option A vs B vs C comparison
- Pros/cons table
- "Best for" recommendations
## Security Best Practices
✅ Critical requirements
❌ Common vulnerabilities
## Performance Considerations
- Metrics: load time, processing speed
- Optimization strategies
## Implementation Guide
- Minimal working example (TypeScript)
- Step-by-step setup
## Sources (15+)
- Docs, videos, repos, articles
## Next Steps
- 5-step action plan with time estimates
```
## Pro Tips
**Be specific in research requests** - "Stripe with Next.js App Router" beats "payment integration"
**Leverage security sections** - Researcher surfaces CVEs and common vulnerabilities automatically
**Check source dates** - Report includes "Last Updated" for each source; flag outdated info
**Use for architecture decisions** - Research 2-3 alternatives before committing to stack
**Share with planner** - Feed research reports directly to planner agent for implementation plans
## Related Agents
- [Planner Agent](/docs/agents/planner) - Converts research findings into step-by-step implementation plans
- [Brainstormer Agent](/docs/agents/brainstormer) - Debates technical trade-offs discovered in research
- [Scout Agent](/docs/agents/scout) - Locates existing implementations in your codebase
## Key Takeaway
Researcher eliminates "tutorial hell" by validating 15+ sources in parallel, surfacing security risks, and delivering production-ready intelligence before you write a single line of code. Part of ClaudeKit $99 toolkit.
---
# Planner Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/planner
# Planner Agent
Researches best practices, analyzes your codebase, and generates step-by-step implementation plans with code examples, timelines, and rollback procedures.
## When to Use
- **Before major features** - Break down complex work into clear steps
- **Technical decisions** - Evaluate multiple approaches with pros/cons
- **Large refactors** - Map dependencies and impact before touching code
- **CI/CD failures** - Analyze logs and create systematic fix plans
## Key Capabilities
| Capability | What It Does |
|-----------|--------------|
| **Research** | Searches industry standards, official docs, proven solutions |
| **Analysis** | Reads codebase, evaluates dependencies, identifies integration points |
| **Planning** | Breaks work into tasks, lists file changes, estimates timeline |
| **Risk Management** | Includes rollback plans, security checks, performance considerations |
## Common Use Cases
**Feature Planning**
- **Who**: Backend dev adding real-time notifications
- **Prompt**: `/plan [add WebSocket notifications with Socket.io and Redis]`
- **Output**: Plan with setup steps, auth integration, database schema, test strategy
**Architecture Review**
- **Who**: Tech lead evaluating database migration
- **Prompt**: `/plan [migrate from MongoDB to PostgreSQL]`
- **Output**: Migration strategy, data transformation steps, zero-downtime approach
**Bug Investigation**
- **Who**: Developer fixing complex race condition
- **Prompt**: `/plan:hard [fix checkout race condition causing double charges]`
- **Output**: Root cause analysis, reproduction steps, fix plan with test cases
**Optimization Planning**
- **Who**: DevOps engineer improving performance
- **Prompt**: `/plan [optimize API response time from 2s to 200ms]`
- **Output**: Profiling strategy, bottleneck analysis, optimization steps with benchmarks
**CRO Implementation**
- **Who**: Growth engineer improving conversion
- **Prompt**: `/plan:cro [improve checkout abandonment rate]`
- **Output**: A/B test plan, UX improvements, tracking implementation
## What You Get
Every plan includes:
```markdown
# Implementation Plan: [Feature Name]
## Approach
Why this solution + alternatives considered
## Steps
1. Install Dependencies (5 min)
- Commands to run
2. Core Implementation (20 min)
- Files to create: src/feature/service.ts
- Files to modify: src/server.ts
- Code snippets showing structure
3. Integration (15 min)
- Where to hook into existing code
4. Testing (20 min)
- Test files to create
- Coverage requirements
## Timeline
Total: 1 hour
## Rollback Plan
Step-by-step recovery if issues occur
## Security Checklist
- Auth validation
- Input sanitization
- Rate limiting
## Next Steps
Ready to implement? Run: /cook @plans/your-plan.md
```
Plans saved to: `plans/[feature-name]-YYYYMMDD-HHMMSS.md`
## Pro Tips
**Review before coding**: Plans catch design flaws before you write code. Check the approach section first.
**Use plans as specs**: Reference the plan during code review. "Did we implement step 3?" ensures nothing is missed.
**Multiple approaches**: Use `/plan:two [description]` to generate two different solutions with trade-off comparison.
**Link from issues**: Save plans to git, link from GitHub issues. Future devs understand *why* decisions were made.
**Update after implementation**: Mark completed steps, note deviations. Plans become living documentation.
## Example Commands
```bash
# Basic planning
/plan [add OAuth2 authentication]
# Compare approaches
/plan:two [use Redis vs PostgreSQL for caching]
# Fix planning
/plan:hard [memory leak in job processor]
# CRO planning
/plan:cro [improve signup conversion]
# CI/CD fix
/plan:ci [github-actions-url]
# Execute plan
/cook @plans/oauth2-auth-20241020.md
```
## Integration with Commands
| Phase | Command | Purpose |
|-------|---------|---------|
| Research | `/plan` | Generate implementation plan |
| Review | `cat plans/latest.md` | Review plan before coding |
| Execute | `/cook @plans/plan.md` | Implement following plan |
| Test | `/test` | Validate implementation |
| Document | Update plan with actuals | Create decision record |
## Related Agents
- [Fullstack Developer](/docs/agents/fullstack-developer) - Executes implementation plans
- [Researcher Agent](/docs/agents/researcher) - Deep dives on specific topics
- [Scout](/docs/agents/scout) - Explores codebase for context
## Key Takeaway
Planning prevents waste. 10 minutes of research and analysis saves hours of refactoring when you discover you picked the wrong approach halfway through implementation.
---
# Tester Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/tester
# Tester Agent
Automated test execution with 80%+ coverage targets, failure diagnosis, and build verification across all major frameworks.
## When to Use
- `/test` - Run full test suite with coverage
- `/fix:test [issue]` - Fix failing tests automatically
- Pre-commit/pre-push validation
- CI/CD pipeline verification
## Key Capabilities
| Category | Tools | Coverage Target |
|----------|-------|-----------------|
| **Unit Tests** | Jest, Vitest, pytest, cargo test, go test | 80%+ |
| **Integration** | API testing, DB interactions, service layers | 75%+ |
| **E2E** | Playwright, Cypress, Flutter integration tests | Critical paths |
| **Coverage** | Line, branch, function, statement analysis | 80%+ overall |
| **Build Check** | TypeScript, linting, bundle size, compilation | 100% pass |
## Common Use Cases
**Solo Dev - Pre-commit validation**
```
/test
Validates all changes before commit with coverage report
```
**QA Engineer - Bug regression prevention**
```
/test
Then review: Which areas have <80% coverage?
```
**Team Lead - PR validation**
```
/test
Verify: All tests pass + coverage targets met
```
**DevOps - CI/CD integration**
```
/test
Output: JSON report for pipeline integration
```
**Full-Stack Dev - Multi-framework testing**
```
/test
Runs: Flutter analyze + Jest + pytest in sequence
```
## Pro Tips
**Coverage-first approach**: Always check coverage gaps after test runs - uncovered code = untested risk
**Fail fast**: Use `/test` before starting new work to catch environment issues early
**Fix atomically**: Use `/fix:test [specific failure]` for targeted repairs instead of batch fixes
**Performance baseline**: Track test execution time; runs longer than 60s indicate a need for optimization
**Docker isolation**: Run tests in containers for consistency across team environments
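For the Docker isolation tip, a minimal sketch of running a Node test suite in a throwaway container (the `node:20` image and `npm test` script are assumptions; adapt to your stack):
```bash
# Mount the repo into a clean container and run the suite with coverage
docker run --rm \
  -v "$(pwd)":/app \
  -w /app \
  node:20 \
  sh -c "npm ci && npm test -- --coverage"
```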
## Related Agents
- [Debugger](/docs/agents/debugger) - Investigate test failures
- [Code Reviewer](/docs/agents/code-reviewer) - Pre-test code quality
- [Fullstack Developer](/docs/agents/fullstack-developer) - Post-test build verification
## Key Takeaway
Test execution isn't about 100% coverage - it's about 80%+ on critical paths with zero flaky tests. The tester agent finds failures, diagnoses root causes, and ensures your coverage targets are met before any commit goes through.
---
# Debugger Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/debugger
# Debugger Agent
Systematic root cause analysis for production incidents, API failures, and complex technical issues.
## When to Use
- API endpoints returning 500 errors or unexpected responses
- CI/CD pipeline failures blocking deployments
- Database connection pools exhausted or queries timing out
- Production incidents requiring immediate diagnosis
## Key Capabilities
| Area | What It Does |
|------|-------------|
| **Issue Investigation** | Structured problem-solving: assess severity, collect logs, identify patterns, trace timeline |
| **Database Analysis** | Schema inspection (`psql \d`), query plans (EXPLAIN), connection monitoring, lock detection |
| **Log Analysis** | Parse server logs, CI/CD output, GitHub Actions failures, container logs, system errors |
| **Performance** | Response times, resource usage (CPU/memory/disk), bottleneck identification, cache analysis |
| **Root Cause** | Error tracing, dependency failures, config issues, code bugs, infrastructure problems |
## Common Use Cases
### Backend Engineer: API 500 Errors
**Prompt**: `/debug [POST /api/orders returning 500, started after v2.3.4 deploy]`
Gets root cause (missing req.user, connection leak), fix plan, rollback steps, validation commands.
### DevOps: Database Connection Exhaustion
**Prompt**: `/debug [PostgreSQL pool exhausted, 47/20 connections active]`
Identifies leaked transactions, long-running queries, table locks. Provides kill commands, query timeouts, code fixes.
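A hedged sketch of the kind of diagnostics this investigation usually involves, using standard PostgreSQL views (illustrative commands, not the agent's exact output):
```bash
# Find sessions stuck "idle in transaction" (a common source of leaked connections)
psql -c "SELECT pid, state, now() - xact_start AS xact_age, left(query, 60) AS query
         FROM pg_stat_activity
         WHERE state = 'idle in transaction'
         ORDER BY xact_age DESC;"

# Terminate one offending backend (replace 12345 with the reported pid)
psql -c "SELECT pg_terminate_backend(12345);"
```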
### Full-Stack Dev: GitHub Actions Failing
**Prompt**: `/debug [CI build failing on test step, error "Module not found"]`
Analyzes workflow logs, identifies missing dependency or broken import, suggests package.json fix.
### Site Reliability Engineer: Performance Degradation
**Prompt**: `/debug [API latency increased from 200ms to 3s after deploy]`
Profiles endpoints, finds N+1 queries or missing indexes, provides EXPLAIN ANALYZE output and optimization plan.
## Pro Tips
**Collect context first**: Before running `/debug`, gather error messages, timestamps, recent changes (deploys/commits), and environment details.
**Check the usual suspects**: Recent deployments, config changes, dependency updates, database migrations often cause issues.
**Use parallel investigation**: Combine with scout agents for broad searches: `scout('Find all database transaction usage', 3)` while analyzing logs.
**Validate fixes in staging**: Test proposed solutions in non-production before applying to prod.
**Document for next time**: Debugger reports become runbooks - save them for similar future incidents.
## Related Agents
- [Tester](/docs/agents/tester) - Validate fixes with comprehensive tests
- [Code Reviewer](/docs/agents/code-reviewer) - Review fix quality before merge
- [Fullstack Developer](/docs/agents/fullstack-developer) - Implement suggested fixes
## Key Takeaway
Debugger agent systematically investigates technical issues from symptoms to root cause, providing actionable solutions with validation steps and prevention measures - turning hours of troubleshooting into a 30-minute structured analysis.
---
# Brainstormer Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/brainstormer
# Brainstormer Agent
Your technical advisor who challenges assumptions, debates approaches, and provides brutally honest assessments before code is written.
## When to Use
- Evaluating multiple architectural approaches with competing trade-offs
- Challenging assumptions about requirements and feasibility
- Debating technical decisions before committing resources
- Need a "second opinion" on complex problems requiring creative solutions
## Key Capabilities
| Capability | Description |
|-----------|-------------|
| Multi-approach analysis | Explores 3-5 valid solutions with detailed pros/cons |
| Assumption challenging | Questions requirements to ensure solving real problems |
| Trade-off evaluation | Assesses complexity, maintenance, performance, cost, technical debt |
| Brutal honesty | Unfiltered assessment: overengineering alerts, risk identification, simpler alternatives |
| YAGNI/KISS/DRY | Applies principles to prevent premature optimization |
| Contextual recommendations | Considers team skills, timeline, budget constraints |
## Common Use Cases
**Startup CTO**: Evaluating API architecture
```bash
/brainstorm [should we use REST API or GraphQL for our mobile app? Team has REST experience, need to ship MVP in 6 weeks]
```
**Tech Lead**: Debating caching strategies
```bash
/brainstorm [API responses are slow. Evaluating Redis cache vs database optimization vs CDN. Current p95 is 800ms, need <200ms]
```
**Product Engineer**: Challenging feature requests
```bash
/brainstorm [stakeholder wants real-time dashboard with WebSockets. Is this necessary or can we use polling?]
```
**Solo Developer**: Planning refactoring
```bash
/brainstorm [monolith is getting messy. Should I split into microservices or improve modular monolith structure first?]
```
**Team Lead**: Assessing feasibility
```bash
/brainstorm [client wants offline-first mobile app with sync. Team has 3 junior devs, 8-week timeline. Realistic?]
```
## Pro Tips
**Provide context upfront**: Team skills, timeline, budget, constraints. Better context = better recommendations.
**Be open to challenges**: Brainstormer will question your assumptions. That's the point. Saves costly mistakes.
**Focus on "Recommendation" section**: If overwhelmed by options, read the final recommendation first, then backtrack for details.
**Use success criteria**: Brainstormer defines measurable criteria (e.g., "p95 <200ms"). Use these to validate decisions later.
**Don't skip "Open Questions"**: Unresolved questions in the output need answers before proceeding.
**Trust YAGNI assessments**: If brainstormer says it's overengineering, it probably is. Build what you need now.
**Follow "Next Steps"**: Output includes actionable next steps. Usually involves `/plan` command for implementation.
## Related Agents
- [Planner Agent](/docs/agents/planner) - Creates implementation plans after approach is decided
- [Researcher Agent](/docs/agents/researcher) - Provides data for brainstorming decisions
- [Docs-Manager Agent](/docs/agents/docs-manager) - Documents architectural decision records
## Key Takeaway
Brainstormer doesn't write code. It saves you from writing the wrong code. Use it before committing to complex architectural decisions. 10 minutes of brainstorming prevents weeks of refactoring.
---
# Code Reviewer Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/code-reviewer
# Code Reviewer Agent
Production-grade security and quality audits with categorized findings before merge.
## When to Use
- Pre-merge quality gates
- Security vulnerability detection
- Type safety validation
- Performance bottleneck analysis
## Key Capabilities
| Category | Checks |
|----------|--------|
| **Security** | OWASP Top 10, SQL injection, XSS, CSRF, secrets exposure |
| **Type Safety** | TypeScript strict mode, `any` usage, null checks |
| **Performance** | N+1 queries, memory leaks, bundle size |
| **Quality** | Test coverage (80%+ target), error handling, code duplication |
| **Standards** | Linting, naming conventions, documentation |
## Common Use Cases
**1. Pre-Merge Review**
- **Who**: Any developer before PR
- **Prompt**: `/review [feature-name]`
- **Output**: Categorized issues (Critical/High/Medium/Low) with fix recommendations
**2. Security Audit**
- **Who**: Security-conscious teams
- **Prompt**: `/review [security audit of auth module]`
- **Output**: OWASP compliance report, vulnerability list, remediation steps
**3. Refactoring Assessment**
- **Who**: Teams improving legacy code
- **Prompt**: `/review [type safety improvements in src/]`
- **Output**: `any` type locations, strict mode violations, migration plan
**4. Performance Analysis**
- **Who**: Teams optimizing slow endpoints
- **Prompt**: `/review [database query performance]`
- **Output**: N+1 problems, missing indexes, caching opportunities
**5. Standards Compliance**
- **Who**: Teams enforcing code standards
- **Prompt**: `/review [compare auth module against code-standards.md]`
- **Output**: Standards checklist with pass/fail status
## Pro Tips
**Scope Reviews Strategically**
```bash
/review [src/auth/] # Directory
/review [user authentication] # Feature
/review [PR-123] # Pull request
```
**Combine with Fix Workflow**
```bash
/review [feature-x]
/fix:fast [fix critical security issues from review]
/test
/review [feature-x] # Verify fixes
```
**Use Review Categories**
- **Critical**: Must fix before merge (security, data loss)
- **High**: Should fix (performance, type safety, reliability)
- **Medium**: Recommended (maintainability, code smells)
- **Low**: Optional (style, minor improvements)
**Check Task Status Updates**
Reviews automatically update plan files with found issues and blocking status.
**Set Quality Gates**
Enforce 80%+ test coverage, zero `any` types, and a passing security scan before production.
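A minimal sketch of such a gate as a CI step (the Jest/TypeScript/npm tooling and the thresholds are assumptions; adjust to your stack):
```bash
set -e  # stop at the first failing check

npx jest --coverage --coverageThreshold='{"global":{"lines":80,"branches":80}}'
npx tsc --noEmit --strict
npm audit --audit-level=high
```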
## Related Agents
- [Planner](/docs/agents/planner) - Creates fix plans from review findings
- [Tester](/docs/agents/tester) - Validates fixes with comprehensive tests
- [Scout External](/docs/agents/scout-external) - Researches best practices for fixes
## Key Takeaway
The code reviewer agent prevents production incidents by catching security vulnerabilities, type safety violations, and performance issues before merge. Use it as a quality gate in every PR workflow.
---
# Scout Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/scout
# Scout Agent
Parallel AI file search across large codebases: organized results in under 5 minutes.
## When to Use
- **Feature work**: Find all files spanning multiple directories before implementation
- **Debugging**: Locate integration points across unfamiliar codebase areas
- **Onboarding**: Map project structure for authentication, payments, or any feature domain
- **Refactoring**: Identify affected files before widespread changes
## Key Capabilities
| Capability | Description |
|------------|-------------|
| **Parallel Search** | Spawns 1-10 agents (Gemini + OpenCode) to search simultaneously |
| **Smart Division** | Splits codebase into logical sections per agent |
| **Organized Output** | Groups results by purpose (Core, API, Tests, Config, Docs) |
| **Timeout Handling** | 3-min limit per agent, uses partial results if needed |
| **Deduplication** | Removes duplicate paths, highlights cross-category files |
## Common Use Cases
**1. Developer starting feature**
```bash
/scout "locate all authentication-related files" 5
```
*Gets organized list: auth services, middleware, routes, tests, config; ready to start.*
**2. DevOps debugging production issue**
```bash
/scout "find Stripe payment webhook and error handling" 4
```
*Finds: payment service, webhook routes, error middleware, logging, integration tests.*
**3. Junior dev onboarding**
```bash
/scout "locate database models and migration files" 3
```
*Maps data layer: models, migrations, seeds, schema validation, ORM config.*
**4. Tech lead planning refactor**
```bash
/scout "find all API rate limiting and middleware" 6
```
*Discovers: rate limit middleware, Redis integration, route guards, tests, docs.*
**5. Consultant auditing codebase**
```bash
/scout "locate security-related files (auth, CORS, validation)" 8
```
*Comprehensive scan: auth guards, input validation, CORS config, security tests.*
## Pro Tips
**Scale Guidelines**:
- **1-3 agents**: Small projects (<100 files) or targeted searches → Gemini only
- **4-6 agents**: Medium projects (100-500 files) → Gemini + OpenCode mix
- **7-10 agents**: Large codebases (500+ files) or monorepos → Full parallelization
**Search Query Tips**:
- ✅ Specific: `"authentication middleware and JWT validation"`
- ❌ Vague: `"auth stuff"`
- Include file types: `"TypeScript service files for payment"`
- Add constraints: `"only in src/api directory"`
**Reading Results**:
- Don't skip "Configuration" or "Tests" sections: critical context
- Files in multiple categories = key integration points
- Review file counts: 50+ files = query too broad, narrow it down
**Performance**:
- Small project (scale=3): 1-2 min, 95% coverage
- Medium (scale=5): 2-4 min, 90% coverage
- Large (scale=8): 3-5 min, 85% coverage
- Monorepo (scale=10): 4-5 min, 75-80% coverage (speed/coverage trade-off)
**Troubleshooting**:
- Too many files (100+): Be more specific or reduce scale
- Missing expected files: Increase scale or check `.gitignore` exclusions
- Agent timeout: Partial results still useful, other agents likely found relevant files
## Related Agents
- [Planner Agent](/docs/agents/planner) - Uses scout results to create implementation plans
- [Debugger Agent](/docs/agents/debugger) - Leverages scout to find debugging targets
- [Fullstack Developer](/docs/agents/fullstack-developer) - Runs scout before full-stack features
## Key Takeaway
Scout parallelizes file discovery across multiple AI agents (Gemini + OpenCode), delivering organized, actionable file lists in under 5 minutes, even for massive codebases. No guessing where authentication lives or which files handle payments. Scout maps it, you build it.
---
# Scout External Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/scout-external
# Scout External Agent
Parallelizes file searches across large codebases using external AI tools (Gemini CLI, OpenCode) for 3-5x faster discovery than sequential scanning.
## When to Use
- Large codebase (500+ files) requiring comprehensive search
- Feature work spanning multiple directories
- Need speed via parallel execution
- Want AI-powered semantic search beyond pattern matching
## Key Capabilities
| Capability | Description |
|------------|-------------|
| **Parallel Execution** | Launches 2-5 agents simultaneously (sub-5 min total) |
| **External Tools** | Gemini CLI (≤3 dirs), OpenCode CLI (>3 dirs) |
| **Smart Scaling** | Auto-selects tool based on search scope |
| **Timeout Resilient** | 3-min per agent, uses partial results if timeout |
| **Report Generation** | Saves `scout-ext-{date}-{topic}.md` to active plan |
## Common Use Cases
**1. Feature Discovery**
- **Who**: Developer starting new feature
- **Prompt**: "Find all auth-related files: middleware, API routes, components"
**2. Codebase Mapping**
- **Who**: New team member onboarding
- **Prompt**: "Locate all database models, schemas, and migration files"
**3. Dependency Audit**
- **Who**: Tech lead refactoring
- **Prompt**: "Find all files importing `@/lib/old-api`"
**4. Cross-Module Search**
- **Who**: Full-stack dev debugging
- **Prompt**: "Search frontend + backend for payment processing logic"
**5. Large File Analysis**
- **Who**: Developer investigating slow endpoints
- **Prompt**: "Analyze API routes >500 lines for performance bottlenecks"
## Pro Tips
**Parallel Execution**: Launch all searches in single message for speed:
```bash
gemini -y -p "Search lib/ for email logic" --model gemini-2.5-flash
gemini -y -p "Search app/api/ for email routes" --model gemini-2.5-flash
gemini -y -p "Search components/ for email UI" --model gemini-2.5-flash
```
**Tool Selection**:
- ≤3 directories: Gemini only (`gemini-2.5-flash`)
- >3 directories: Add OpenCode (`opencode/grok-code`)
- External tools unavailable: Auto-fallback to Glob/Grep/Read
**Large File Handling**: When Read fails (>25K tokens):
```bash
echo "What does authentication middleware do in app/middleware/auth.ts?" | gemini -y -m gemini-2.5-flash
```
**Quality Gates**:
- Target <5 min total execution
- Return only relevant file paths (no code snippets in initial scan)
- Use 2-5 agents (diminishing returns beyond 5)
**Timeout Strategy**: Agents time out after 3 min and are NOT restarted. Synthesize partial results and note coverage gaps.
## Related Agents
- [Scout Agent](/docs/agents/scout) - Internal searches (Glob/Grep/Read)
- [Fullstack Developer](/docs/agents/fullstack-developer) - Implements features after discovery
## Key Takeaway
Scout External parallelizes AI-powered searches using external tools for 3-5x faster file discovery in large codebases. Use for comprehensive coverage when internal Scout is too slow.
---
# Docs Manager Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/docs-manager
# Docs Manager Agent
**Keep your docs synced with your codebase.** Creates, updates, and maintains technical documentation automatically after code changes.
## When to Use
- After implementing new features requiring documentation updates
- Setting up initial project documentation with `/docs:init`
- Generating codebase summaries with Repomix integration
- Syncing docs with code changes using `/docs:update`
## Key Capabilities
| Feature | What It Does |
|---------|-------------|
| **Auto Documentation** | Analyzes code changes → updates relevant docs |
| **Repomix Integration** | Generates codebase summaries from compacted XML |
| **Scout Orchestration** | Runs parallel file searches to gather context |
| **Standards Enforcement** | Validates case conventions, links, code examples |
| **Doc Structure** | Maintains PDRs, architecture, code standards, guides |
## Common Use Cases
### New Project Setup
**Who**: Team leads starting new projects
**Prompt**: `/docs:init`
**Result**: Creates full documentation suite in `./docs/` directory
### Post-Feature Documentation
**Who**: Developers who just shipped a feature
**Prompt**: `/docs:update`
**Result**: Scans git diff → updates system architecture, API docs, standards
### Architecture Documentation
**Who**: Tech leads documenting system design
**Prompt**: "Update system-architecture.md with the new microservices setup"
**Result**: ASCII diagrams, component docs, integration flows
### Codebase Summary
**Who**: Onboarding new team members
**Prompt**: `/docs:summarize`
**Result**: Runs Repomix → generates `codebase-summary.md` with stats and structure
### API Reference Generation
**Who**: Backend devs exposing new endpoints
**Prompt**: "Document the /api/auth endpoints with request/response examples"
**Result**: Complete API docs with schemas, errors, curl examples
## Pro Tips
**1. Run After Every Feature**
```bash
# Standard workflow
git commit -m "feat: add auth"
/docs:update
git add docs/ && git commit -m "docs: update for auth"
```
**2. Use Scout for Deep Context**
```bash
# docs-manager auto-runs these behind the scenes
/scout "authentication files" 5
/scout "API endpoints" 5
# Aggregates results for accurate documentation
```
**3. Maintain This Structure**
```
./docs/
├── project-overview-pdr.md # Requirements, roadmap
├── code-standards.md # Conventions, patterns
├── codebase-summary.md # Repomix-generated overview
├── system-architecture.md # Technical design
├── deployment-guide.md # Production setup
└── design-guidelines.md # UI/UX specs
```
**4. Validate Everything**
- Tests code examples before committing
- Checks link integrity across all docs
- Enforces case conventions (camelCase, PascalCase, snake_case)
- Verifies formatting consistency
**5. Weekly Maintenance**
```bash
# Keep docs fresh
/docs:update # Scan for outdated content
/docs:summarize # Regenerate summary (monthly)
```
**6. Document the "Why"**
```markdown
# ❌ Bad
"Configure the database"
# ✅ Good
"Set DATABASE_URL in .env to enable connection pooling.
Using pooling reduces latency by 60% under high load."
```
**7. API Documentation Template**
```markdown
### POST /api/auth/login
Authenticate user and return JWT token.
**Request Body:**
{
email: string; // Valid email
password: string; // Min 8 chars
}
**Response (200):**
{
token: string; // JWT access token
expiresIn: number; // Seconds
}
**Errors:**
- 400: Invalid format
- 401: Invalid credentials
- 429: Rate limited
```
**8. Use Absolute Paths**
```markdown
✅ [Authentication](/docs/agents/docs-manager)
❌ [Authentication](./docs-manager)
```
## Related Agents
- [Planner Agent](/docs/agents/planner) - Creates implementation plans → docs-manager documents them
- [Scout Agent](/docs/agents/scout) - Locates files → docs-manager uses for context
- [Project Manager](/docs/agents/project-manager) - Coordinates tasks → docs-manager provides status
## Key Takeaway
**Docs-manager keeps documentation synchronized with code automatically.** Run `/docs:init` for initial setup, `/docs:update` after features, `/docs:summarize` for overview. Integrates with Repomix and Scout for comprehensive analysis. Part of ClaudeKit $99 toolkit.
---
# Project Manager Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/project-manager
# Project Manager Agent
Senior orchestrator for tracking progress, collecting agent reports, maintaining roadmaps, and coordinating multi-agent workflows.
## When to Use
- Track project status with `/watzup` command
- Consolidate multi-agent work after features
- Weekly progress reviews and milestone reports
- Verify task completeness before deployment
## Key Capabilities
| Feature | What It Does |
|---------|--------------|
| **Progress Tracking** | Monitors agent outputs, task completion, blockers, quality metrics |
| **Report Collection** | Gathers reports from all agents in `plans/reports/` |
| **Roadmap Updates** | Maintains `docs/project-roadmap.md` after features/milestones |
| **Plan Analysis** | Validates plans, identifies gaps, adjusts timelines |
| **Coordination** | Orchestrates sequential phases across multiple agents |
## Common Use Cases
### 1. Weekly Status Report
**Who**: Team lead checking progress
**Prompt**: `/watzup`
**Output**: Consolidated report with velocity, quality metrics, completed features, next priorities
### 2. Feature Completion Review
**Who**: Developer finishing major feature
**Prompt**: "Review authentication implementation and update roadmap"
**Output**: Verifies tests, docs, commits → Updates roadmap → Generates completion report
### 3. Multi-Agent Coordination
**Who**: PM planning complex feature
**Prompt**: "Coordinate Stripe payment integration across all agents"
**Output**: Orchestrates planner → code → tester → reviewer → docs-manager → git-manager
### 4. Sprint Planning
**Who**: Team starting new sprint
**Prompt**: "Analyze last sprint velocity and recommend story points for this week"
**Output**: Historical velocity analysis, capacity recommendations, risk assessment
### 5. Blocker Escalation
**Who**: Agent hitting blocker
**Prompt**: "Frontend integration blocked, need timeline adjustment"
**Output**: Documents blocker, adjusts roadmap dates, notifies stakeholders
## Pro Tips
**Roadmap Discipline**: Always update `project-roadmap.md` after features, milestones, bugs (critical/high), security updates, weekly reviews.
**Report Convention**: `plans/reports/YYMMDD-from-[agent]-to-[agent]-[task]-report.md` (e.g., `251030-from-planner-to-pm-auth-research-report.md`)
**Delegation Rule**: Never edits docs directly except `project-roadmap.md`. Delegates all other docs to docs-manager agent.
**Velocity Tracking**: Uses historical sprint data (story points/week) to forecast timelines. Adjusts estimates when velocity changes >15%.
**Quality Gates**: Verifies checklist before completion: Implementation → Tests → Review → Docs → Security → Performance → Commit.
## Related Agents
- [Planner](/docs/agents/planner) - Creates implementation plans analyzed by PM
- [Docs Manager](/docs/agents/docs-manager) - Receives delegation for doc updates
- [Git Manager](/docs/agents/git-manager) - Provides commit reports to PM
- [Tester](/docs/agents/tester) - Sends test results for quality tracking
## Key Takeaway
The project-manager is ClaudeKit's orchestration layer: it doesn't code, test, or write docs; it tracks, reports, delegates, and maintains the single source of truth (project-roadmap.md) so your multi-agent system stays aligned.
---
# UI/UX Designer Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/ui-ux-designer
# UI/UX Designer Agent
Research trending designs, create production-ready interfaces, and ship conversion-optimized layouts with Three.js 3D, pure HTML/CSS/JS, and WCAG 2.1 AA compliance.
## When to Use
- Building landing pages, dashboards, or web apps from scratch
- Recreating designs from screenshots or videos
- Implementing 3D interactive experiences with Three.js/WebGL
- Creating responsive design systems with CSS tokens
## Key Capabilities
| Capability | Output |
|------------|--------|
| **Research** | Dribbble, Behance, Awwwards trend analysis → design decisions |
| **Wireframing** | Information architecture → component hierarchy → layout structure |
| **Visual Design** | Color palettes, typography scale, spacing system → CSS tokens |
| **3D Graphics** | Three.js scenes, WebGL shaders, particle systems → interactive 3D |
| **Implementation** | Semantic HTML, CSS Grid/Flexbox, vanilla JS → production code |
| **Optimization** | WebP images, lazy loading, minification → Lighthouse 90+ score |
| **Accessibility** | Keyboard nav, ARIA, screen readers → WCAG 2.1 AA compliant |
| **Responsive** | Mobile-first (320px+, 768px+, 1024px+) → works everywhere |
## Common Use Cases
### 1. SaaS Landing Page
**Who:** Product team launching new SaaS product
**Prompt:** `/design:good [create conversion-optimized landing page for AI analytics platform with hero 3D visualization, pricing tiers, testimonials]`
**Output:** Complete landing page with hero section (animated 3D graph), value props, bento grid features, pricing comparison, FAQ accordion, sticky nav, above-fold CTA
### 2. 3D Product Showcase
**Who:** E-commerce team selling physical products
**Prompt:** `/design:3d [create interactive 3D product viewer for wireless headphones with 360° rotation, material textures, color variants]`
**Output:** Three.js scene with orbit controls, PBR materials, HDR lighting, color picker UI, zoom/pan/rotate interactions
### 3. Clone from Screenshot
**Who:** Startup replicating competitor design
**Prompt:** `/design:screenshot [./competitor-landing.png]`
**Output:** Pixel-perfect recreation with semantic HTML, modern CSS, responsive breakpoints, accessibility improvements
### 4. Dashboard Interface
**Who:** Internal tools team building analytics dashboard
**Prompt:** `/design:good [create dark-themed analytics dashboard with charts, KPI cards, data tables, filters sidebar]`
**Output:** Responsive dashboard layout with CSS Grid, chart containers, skeleton loaders, empty states, mobile nav drawer
### 5. Design System from Video
**Who:** Agency recreating client's animated prototype
**Prompt:** `/design:video [./design-prototype.mp4]`
**Output:** Component library matching video specs, micro-animations, transitions, color/typography tokens, style guide docs
## Pro Tips
- **Fast prototyping:** Use `/design:fast` for 60-120s mockups, `/design:good` for production-ready (3-5min)
- **3D performance:** Three.js scenes auto-optimize for 60fps with LOD, frustum culling, texture compression
- **Vietnamese support:** All designs include Google Fonts with Vietnamese diacritics (Inter, Roboto, Noto Sans)
- **CRO built-in:** Above-fold CTAs, social proof placement, trust signals, risk reversal automatically included
- **Design analysis:** Use `/design:describe [path]` to extract colors, typography, spacing from any screenshot
- **Bento grids:** Request "bento grid layout" for modern mixed-size card layouts (like Apple, Linear)
- **Glassmorphism:** Specify "glassmorphism" for backdrop blur + transparency effects
- **Core Web Vitals:** All designs target LCP <2.5s, FID <100ms, CLS <0.1 with lazy loading, font optimization
## Related Agents
- [Copywriter](/docs/agents/copywriter) - Generate compelling headlines, CTAs, product descriptions
- [Fullstack Developer](/docs/agents/fullstack-developer) - Integrate designs with backend APIs, databases
- [Scout](/docs/agents/scout) - Explore existing components and patterns
- [Code Reviewer](/docs/agents/code-reviewer) - Validate HTML semantics, CSS performance, accessibility
## Key Takeaway
ClaudeKit UI/UX Designer agent ships award-winning interfaces in minutes, not days. Research-backed designs with production-ready code, WCAG compliance, and Lighthouse 90+ scores out of the box. No Figma needed.
---
# Copywriter Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/copywriter
# Copywriter Agent
Elite conversion copywriter for high-converting marketing copy, viral social media, and product descriptions. Part of ClaudeKit $99 toolkit.
## When to Use
- Creating landing pages, hero sections, and CTAs
- Writing viral Twitter/X threads and LinkedIn posts
- Optimizing email campaigns and product descriptions
- Analyzing low-converting pages (CRO)
## Key Capabilities
| Category | What It Does |
|----------|--------------|
| **Landing Pages** | Hero sections, value props, CTAs with conversion psychology |
| **Social Media** | Twitter/X threads, LinkedIn thought leadership, viral hooks |
| **Email** | Subject lines, body copy, onboarding sequences with 40%+ open rates |
| **Product Copy** | Feature-benefit descriptions, technical specs for buyers |
| **CRO Analysis** | Identify conversion blockers, A/B test variants, friction points |
## Common Use Cases
**1. SaaS Landing Page**
- **Who**: Product Manager launching AI analytics tool
- **Prompt**: `/content:good [create hero section for AI analytics SaaS targeting enterprise CTOs, emphasize 85% faster insights, 10-minute setup]`
- **Output**: Multiple versions (ROI-focused, time-focused, risk-reversal), A/B test plan, specific metrics
**2. Feature Announcement Thread**
- **Who**: Founder announcing major feature
- **Prompt**: `/content:fast [Twitter thread announcing our new query optimizer that reduces analysis time by 90%]`
- **Output**: 8-12 tweet thread with hook, problem/solution flow, social proof, engagement CTA
**3. Welcome Email Series**
- **Who**: Growth team onboarding trial users
- **Prompt**: `/content:good [welcome email for trial users, goal is first dashboard creation within 24 hours]`
- **Output**: Email sequence, subject line variants, follow-up cadence, tracking metrics
**4. Low-Converting Page Fix**
- **Who**: Marketing lead with 1.2% conversion rate
- **Prompt**: `/content:cro [analyze landing page converting at 1.2%, target is 3-5%]`
- **Output**: Conversion blockers, improved copy, psychological triggers, layout changes
**5. LinkedIn Thought Leadership**
- **Who**: Executive sharing industry insights
- **Prompt**: `/content:good [LinkedIn post sharing insights from our analysis of 10M database queries, position as thought leader]`
- **Output**: Data-driven post, engagement hooks, comment strategy, posting schedule
## Pro Tips
**Conversion Psychology Baked In**
- Specificity wins: "85% faster" beats "much faster"
- User-centric: Benefits over features always
- Brutal honesty: Transparent about limitations builds trust
- Social proof: Real metrics, real customers, real results
- No hashtag spam: 1-2 max, focus on value not visibility
**Fast vs. Good Commands**
- `/content:fast [request]` - Single version, 30-60 seconds, good quality
- `/content:good [request]` - Multiple versions, research, A/B tests, 2-4 minutes
- `/content:enhance [issues]` - Improve existing copy
- `/content:cro [issues]` - Full conversion rate optimization
**Platform-Specific Optimization**
- **Twitter/X**: Hook in first 3 seconds, 8-12 tweets, 1 visual, no hashtags
- **LinkedIn**: First 2 lines critical (before "see more"), bullet points, ask questions
- **Landing Pages**: Hero headline 6-12 words, CTA every 1.5 screens, social proof near CTA
- **Email**: Subject 30-50 chars, personal tone, single CTA, mobile-optimized
**Quality Standards**
- Specific numbers: "Query 10M rows in 0.8 seconds" not "fast"
- Show don't tell: "Sarah created dashboard in 8 minutes" not "easy to use"
- Conversational: "Use our tool" not "Utilize our solution to facilitate"
- One primary CTA: Multiple actions confuse, single action converts
**Expected Performance**
- Landing pages: 3-5% trial signup rate
- Email: 40%+ open rate, 15%+ CTR
- Twitter: 5-10% engagement rate
- LinkedIn: 3-5% engagement rate
- Product pages: 8-12% add-to-cart
**A/B Testing Built In**
Every output includes test recommendations:
- Control vs. variant copy
- Specific hypotheses (e.g., "ROI-focused headline will convert better for budget-conscious CTOs")
- Metrics to track (trial signup rate, CTR, bounce rate)
- Expected performance ranges
**Integration With Other Agents**
1. [UI/UX Designer](/docs/agents/ui-ux-designer) creates layout
2. Copywriter fills with high-converting copy
3. [Tester](/docs/agents/tester) runs A/B tests
4. Iterate based on metrics
## Related Agents
- [UI/UX Designer](/docs/agents/ui-ux-designer) - Visual layouts for landing pages
- [Planner](/docs/agents/planner) - Campaign strategy and content calendar
- [Tester](/docs/agents/tester) - A/B test implementation and analysis
## Key Takeaway
The copywriter agent creates brutally honest, conversion-focused copy using psychological triggers, platform optimization, and specific metrics. No theory, no fluff, just copy that converts visitors into customers. Delivers multiple versions with A/B test plans and expected performance metrics. Part of ClaudeKit $99 toolkit.
---
# Git Manager Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/git-manager
# Git Manager Agent
**Stage, commit, and push code with professional conventional commits, security scanning, and 81% cost reduction.**
## When to Use
- Auto-generate semantic commit messages (feat, fix, docs, refactor)
- Prevent secrets from leaking into Git history
- Commit and push after implementing features or fixes
- Create pull requests with proper formatting
## Key Capabilities
| Feature | Description |
|---------|-------------|
| **Conventional Commits** | Auto-generates `type(scope): description` messages (max 72 chars) |
| **Security Scanning** | Blocks commits with API keys, passwords, tokens, connection strings |
| **Token Optimization** | 81% cost reduction vs baseline (Haiku model, 5-8K tokens/commit) |
| **Smart Staging** | Auto-detects relevant changes, respects `.gitignore` |
| **Delegation** | Escalates complex changes (5+ files) to Gemini for context |
## Common Use Cases
### Developers - Quick Commits
**Prompt**: `/git:cm` or `/git:cp`
- Commits auth bug fix with `fix(auth): add email validation in login`
- 10-15 seconds, $0.015 per commit
- Security scan blocks if secrets detected
### Teams - Consistent History
**Prompt**: "commit and push"
- Enforces conventional commit format across team
- Compatible with changelog generators (semantic-release)
- No AI attribution in commit messages
### DevOps - Safe Deployments
**Prompt**: `/git:cp` after feature work
- Scans for leaked AWS credentials, DB passwords, OAuth tokens
- Blocks commit and shows file/line number of violations
- Whitelists `process.env.*` and `.env.example`
### Code Reviewers - Pre-Commit Validation
**Prompt**: Chain with code-reviewer agent → `/git:cm`
- Reviews code → fixes applied → commits with proper message
- Example: `refactor(db): extract query builders to helpers`
### Multi-File Features - Smart Delegation
**Prompt**: `/git:cm` after dashboard feature
- Detects complex changes (Dashboard.tsx, Chart.tsx, styles)
- Delegates to Gemini: `feat(dashboard): add interactive chart component`
- Includes multi-line body with implementation details
## Pro Tips
**Commit Types Reference**:
```
feat     → New feature
fix      → Bug fix
docs     → Documentation only
refactor → Code restructuring
perf     → Performance improvement
test     → Adding tests
chore    → Maintenance (deps, config)
```
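For reference, the message generated for the auth example above would be equivalent to running something like this yourself (hypothetical body text; the agent writes the subject and body for you):
```bash
# Roughly what git-manager produces for the earlier auth fix (subject ≤ 72 chars, optional body)
git commit \
  -m "fix(auth): add email validation in login" \
  -m "Reject malformed emails before calling the auth API to prevent 400 responses."
```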
**Security Patterns Blocked**:
```typescript
// ❌ Blocked
const API_KEY = "sk-1234567890abcdef";
const DB_URL = "postgres://user:pass@host/db";

// ✅ Allowed
const API_KEY = process.env.API_KEY;
```
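The agent's actual rule set is internal; as a rough sketch of the same idea, a manual pre-commit check over staged changes might look like this (illustrative patterns only, not git-manager's real implementation):
```bash
# Illustrative secret scan over staged changes
if git diff --cached -U0 | grep -nE 'sk-[A-Za-z0-9]{16,}|postgres://[^[:space:]]+:[^[:space:]]+@|AKIA[0-9A-Z]{16}'; then
  echo "Potential secret staged - commit blocked" >&2
  exit 1
fi
```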
**Logical Commit Grouping**:
```bash
git add src/auth/*
/git:cm # Commits only auth changes
git add src/components/*
/git:cm # Separate UI changes
```
**Performance Metrics**:
| Metric | Haiku (Optimized) | Sonnet (Baseline) | Savings |
|--------|-------------------|-------------------|---------|
| Tokens | 5-8K | 25-30K | 81% |
| Time | 10-15s | 45s | 67% |
| Cost | $0.015 | $0.075 | 80% |
**Workflow After Feature**:
```bash
git status # Review changes
/git:cm # Commit with auto-message
git log -1 # Verify commit
/git:cp # Push if satisfied
```
**Fix Push Failures**:
```bash
git pull --rebase origin main # Pull latest
# Resolve conflicts if any
/git:cp # Retry push
```
## Related Agents
- [Code Reviewer](/docs/agents/code-reviewer) - Pre-commit code review
- [Project Manager](/docs/agents/project-manager) - Overall project coordination
- [Tester](/docs/agents/tester) - Pre-commit test verification
## Key Takeaway
**git-manager automates professional Git operations with security-first conventional commits at 81% lower cost than baselineβno AI attribution, just clean commit history.**
---
# Database Admin Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/database-admin
# Database Admin Agent
Optimize queries, design schemas, fix bottlenecks. Your database performance expert for PostgreSQL, MySQL, MongoDB, and more.
## When to Use
- Dashboard queries timing out (>5s response)
- Tables growing but queries slowing down
- Need schema design for new features
- Connection pool exhaustion or deadlocks
## Key Capabilities
| Area | What It Does |
|------|-------------|
| **Query Optimization** | EXPLAIN ANALYZE, index recommendations, rewrite slow queries (45s → 0.8s typical) |
| **Schema Design** | Normalized tables, optimal indexes, constraints, triggers, partitioning strategies |
| **Performance Analysis** | Health checks, connection pool tuning, cache hit ratio, bloat detection |
| **Backup & HA** | WAL archiving, point-in-time recovery, replication setup, failover config |
| **Database Systems** | PostgreSQL, MySQL/MariaDB, MongoDB, Redis, SQLite, distributed databases |
## Common Use Cases
### 1. Fix Slow Dashboard Query
**Who**: Backend dev with 45-second query
**Prompt**:
```
"Dashboard query takes 45 seconds. Optimize it:
SELECT u.name, COUNT(o.id), SUM(o.total)
FROM users u LEFT JOIN orders o ON u.id = o.user_id
WHERE o.created_at >= '2024-01-01'
GROUP BY u.id ORDER BY SUM(o.total) DESC LIMIT 100;"
```
**Output**: EXPLAIN ANALYZE, missing index report, optimized query (0.8s), materialized view option, implementation plan
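The exact fix depends on your schema, but the report typically centers on a covering index like the following (a minimal sketch assuming PostgreSQL 11+ and the column names from the prompt above):
```bash
# Hypothetical covering index for the dashboard query above
psql "$DATABASE_URL" <<'SQL'
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_user_created
  ON orders (user_id, created_at)
  INCLUDE (total);  -- aggregate can read totals from the index instead of the heap
SQL
```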
### 2. Schema Design for New Feature
**Who**: Product team adding order history
**Prompt**: `"Design schema for order history: users, products, orders, payments. Handle 10M+ orders."`
**Output**: Complete DDL with indexes, constraints, triggers, denormalization strategy, partitioning plan
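As a rough idea of the partitioning portion of that DDL (a minimal sketch assuming PostgreSQL range partitioning; table and column names are illustrative):
```bash
# Illustrative range-partitioning slice of the generated DDL (PostgreSQL 12+)
psql "$DATABASE_URL" <<'SQL'
CREATE TABLE orders (
  id         bigint GENERATED ALWAYS AS IDENTITY,
  user_id    bigint NOT NULL,
  total      numeric(12,2) NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now(),
  PRIMARY KEY (id, created_at)  -- partition key must be part of the primary key
) PARTITION BY RANGE (created_at);

CREATE TABLE orders_2024 PARTITION OF orders
  FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
SQL
```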
### 3. Database Health Check
**Who**: DevOps investigating production slowness
**Prompt**: `"Run comprehensive health check on production DB"`
**Output**: Connection pool analysis, missing FK indexes, bloat report, slow query top 5, cost analysis, action plan
### 4. Backup Strategy
**Who**: SRE setting up disaster recovery
**Prompt**: `"Setup backup strategy with PITR for production PostgreSQL"`
**Output**: Daily full backup script, WAL archiving config, restore procedures, S3 upload, verification tests
### 5. Connection Pool Exhaustion
**Who**: Frontend seeing "too many connections" errors
**Prompt**: `"95/100 connections used, app timing out. Fix connection pool."`
**Output**: PgBouncer config, app connection pooling code, max_connections tuning, monitoring queries
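The PgBouncer part of that output usually reduces to a small config along these lines (illustrative values only; pool sizes need tuning for your workload):
```bash
# Illustrative PgBouncer config - values are placeholders, not recommendations
cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr       = 127.0.0.1
listen_port       = 6432
auth_type         = md5
auth_file         = /etc/pgbouncer/userlist.txt
pool_mode         = transaction
default_pool_size = 20
max_client_conn   = 500
EOF
```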
## Pro Tips
**For Query Optimization**:
- Always provide table sizes + existing indexes for better analysis
- Agent creates covering indexes (include all SELECT columns)
- Expects `plans/reports/` folder for optimization reports
**For Schema Design**:
- Mention scale requirements (10M rows? 1B?). Affects partitioning strategy
- Agent denormalizes strategically (e.g., product name in order_items for history)
- Foreign key indexes created automatically (common oversight)
**For Health Checks**:
- Run during business hours (not 3am). Real traffic reveals issues
- Agent checks: connections, slow queries, missing indexes, bloat, cache hit ratio
- Provides immediate + short-term + long-term action plans
**Performance Benchmarks**:
- Query optimization: Typically 90-98% faster execution time
- Index creation: 15-20 min per 1M rows (concurrent, no downtime)
- Health check: Identifies $4K/month in savings (typical)
## Related Agents
- [Debugger](/docs/agents/debugger) - Debug application database issues
- [Fullstack Developer](/docs/agents/fullstack-developer) - Implement schema changes in app
- [Tester](/docs/agents/tester) - Validate database migrations
## Key Takeaway
Database Admin agent turns 45-second queries into 0.8-second queries with index strategies and query rewrites. Designs production-ready schemas with proper normalization, constraints, and partitioning. Delivers actionable health reports with specific SQL fixes and cost impact ($4K/month savings typical).
---
# Journal Writer Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/journal-writer
# Journal Writer Agent
Transform critical failures into brutally honest learning documents that capture technical details, emotional reality, and prevent future mistakes.
## When to Use
- Production down >30min or data loss
- Critical bugs caught before release
- Failed deployments causing outages
- Repeated issues team keeps hitting
## Key Capabilities
| Feature | What It Does |
|---------|--------------|
| Honest Documentation | Captures what failed without sugarcoating - includes error logs, stack traces, timeline |
| Root Cause Analysis | Documents what was tried, why it failed, actual cause, systemic issues |
| Emotional Context | Records team stress, pressure, relief - builds psychological safety through honesty |
| Learning Extraction | Identifies lessons, prevention strategies, warning signs, process improvements |
## Common Use Cases
**DevOps Engineer** discovering payment race condition
```bash
/journal
Context: Found race condition in payment system 2hrs before release.
Bug causes duplicate charges when users rapidly click "Buy Now".
```
Output: Full incident report with code snippets, failed attempts (DB constraints, optimistic locking), final fix (Stripe idempotency keys), emotional timeline, lessons learned, checklist for future payment code reviews
---
**Backend Dev** after 6-hour production outage
```bash
/journal
Context: Kubernetes CrashLoopBackOff on all pods for 6 hours.
Rolled back - same error. No one knew why.
```
Output: Technical autopsy showing CI/CD silently failing (renamed npm script, old workflow config), cost breakdown ($16K revenue + support), 7 failed fix attempts, root cause (no build verification), new CI/CD checks added
---
**Tech Lead** documenting near-miss
```bash
/journal
Context: QA found SQL injection vulnerability in prod API.
Would have exposed 50K user records if exploited.
```
Output: Vulnerability details, attack vectors, immediate patch, systemic causes (no parameterized queries, skipped security review), new secure coding standards, team training plan
---
**Site Reliability Engineer** after Redis cluster split-brain
```bash
/journal
Context: Redis cluster split-brain during network partition.
Half the nodes thought they were primary. Data corruption.
```
Output: Network partition timeline, data inconsistency examples, recovery steps, why monitoring didn't catch it, new quorum configuration, chaos engineering tests added
---
**Full Stack Dev** documenting performance crisis
```bash
/journal
Context: API response times jumped from 200ms to 15sec.
Unindexed DB query on 10M row table hitting production.
```
Output: Slow query log, missing index discovery, query plan analysis, why staging didn't catch it (small dataset), new query performance testing requirements, database review checklist
## Pro Tips
- **Write immediately** - Emotional context fades fast, write while it's fresh
- **Include failed attempts** - Learning comes from what didn't work
- **Use real language** - "We f*cked up" not "encountered an unexpected issue"
- **Show the numbers** - Downtime cost, time to fix, impact metrics
- **Add code snippets** - Actual error logs, broken code, working fix
- **Make it searchable** - Someone will have same problem, use clear tags
- **Blame the system** - Not individuals - focus on process failures that allowed it
## Related Agents
- [Debugger](/docs/agents/debugger) - Debug the issues being documented
- [Project Manager](/docs/agents/project-manager) - Track improvements from learnings
- [Fullstack Developer](/docs/agents/fullstack-developer) - Implement preventive fixes
## Key Takeaway
Journal Writer transforms expensive failures into permanent institutional knowledge through brutally honest documentation - capturing technical details, emotional reality, failed attempts, and systemic causes so teams learn once instead of failing repeatedly. Part of ClaudeKit $99 toolkit.
---
# MCP Manager Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/mcp-manager
# MCP Manager Agent
Execute MCP server tools (screenshots, browser automation, docs lookup) in isolated context - keeps your main Claude session clean while accessing 100+ MCP capabilities.
## When to Use
- Need browser screenshots, form automation, web scraping
- Query library docs (Next.js, React, Supabase via context7)
- Access DevTools Protocol, sequential reasoning tools
- Any `/use-mcp [task]` command - agent handles the rest
## Key Capabilities
| Capability | What It Does | Example |
|------------|--------------|---------|
| **Tool Discovery** | Scans multiple MCP servers, filters relevant tools | Finds playwright_screenshot across 5 servers |
| **Execution Priority** | Gemini CLI → Direct scripts → Error report | Auto-fallback ensures reliability |
| **Context Isolation** | Runs in subagent, returns concise summaries only | Main agent stays token-efficient |
| **Multi-Server Support** | Orchestrates context7, human-mcp, chrome-devtools | Single command, multiple backends |
## Common Use Cases
### 1. Frontend Dev - Visual Testing
**Who:** React/Next.js developers
**Prompt:** `/use-mcp Screenshot staging.app.com/dashboard vs prod at 1920x1080, highlight differences`
### 2. DevOps - Monitoring Checks
**Who:** SREs, DevOps engineers
**Prompt:** `/use-mcp Navigate to status.service.com, check all green checkmarks, screenshot if any red`
### 3. Docs Research - API Lookup
**Who:** Full-stack developers
**Prompt:** `/use-mcp Find Supabase auth documentation for OAuth providers and MFA setup`
### 4. QA Automation - Form Testing
**Who:** QA engineers
**Prompt:** `/use-mcp Fill example.com/contact form with test data: name=John, email=test@test.com, submit and confirm`
### 5. Web Scraping - Data Extraction
**Who:** Data analysts, marketers
**Prompt:** `/use-mcp Extract all product names and prices from competitor.com/products table`
## Pro Tips
**Gemini CLI Required:** Install `npm i -g @google/gemini-cli` for primary execution (the agent falls back to direct scripts automatically if it's missing)
**Config Location:** `.claude/.mcp.json` - add servers like context7, human-mcp with API keys in `env` section
**Error Handling:** Agent reports failures with actionable fixes (timeout → check URL, connection → verify .mcp.json)
**Screenshot Defaults:** Full-page captures at viewport size - specify `1920x1080` or `mobile` for custom dimensions
**Multi-Step Tasks:** Chain operations in one prompt: "Navigate → Fill form → Submit → Screenshot confirmation"
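For the config location above, a minimal `.claude/.mcp.json` might look like the sketch below. The server entry and the context7 package name are examples, not defaults shipped with ClaudeKit; check each MCP server's own docs for the exact command and keys, and back up any existing file before overwriting it:
```bash
# Sketch of .claude/.mcp.json contents - server names, commands, and env keys are examples
cat > .claude/.mcp.json <<'EOF'
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"],
      "env": {}
    }
  }
}
EOF
```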
## Related Agents
- [Scout External](/docs/agents/scout-external) - External research + MCP docs lookup synergy
- [Fullstack Developer](/docs/agents/fullstack-developer) - Uses MCP Manager for visual regression testing
## Key Takeaway
MCP Manager isolates complex tool execution (browser automation, docs queries, screenshots) from your main Claude context - you get clean outputs, Claude stays fast. Access 100+ MCP tools via `/use-mcp [natural language task]`.
---
# Fullstack Developer Agent
Section: docs
Category: agents
URL: https://docs.claudekit.cc/docs/agents/fullstack-developer
# Fullstack Developer Agent
**Execute parallel-safe implementation phases across backend, frontend, and infrastructure with strict file ownership enforcement.**
## When to Use
- Implementing phases from `/plan:parallel` output
- Running backend + frontend work simultaneously without conflicts
- Need file-level isolation for parallel execution
- Building full-stack features (API + UI + tests)
## Key Capabilities
| Area | Details |
|------|---------|
| **Backend** | Node.js, Express APIs, auth, database ops |
| **Frontend** | React, TypeScript, components, styling |
| **Infrastructure** | Config, env setup, deployment scripts |
| **Parallel Safety** | File ownership enforcement, conflict detection |
| **Quality** | Type checking, tests, success criteria validation |
## Common Use Cases
| Who | Prompt | Outcome |
|-----|--------|---------|
| **Team Lead** | `/code plans/251201-user-api/phase-02-endpoints.md` | Executes Phase 02 API endpoints with ownership boundaries |
| **Developer** | Execute backend + frontend phases simultaneously | Safe parallel execution, no file conflicts |
| **PM** | `/plan:parallel Add CRUD API + React dashboard` → `/code` phases | Full implementation with automated reports |
## How It Works
**1. Phase Analysis**
- Reads `phase-XX-*.md` from plan directory
- Validates file ownership (exclusive files only)
- Checks parallelization info and dependencies
**2. Implementation**
- Follows steps from phase file
- Only touches files in "File Ownership" section
- **NEVER** modifies files owned by other phases
**3. Quality Assurance**
```bash
npm run typecheck # Must pass
npm test # Must pass
```
**4. Report Output**
```
{active-plan}/reports/fullstack-dev-{YYMMDD}-phase-{XX}-{topic}.md
```
## File Ownership Example
```markdown
# Phase 02: API Endpoints
## File Ownership (Exclusive)
- src/api/users.ts
- src/schemas/*.ts
- tests/api/*.test.ts
## Shared (Read-Only)
- src/types/index.ts
```
**Critical Rules:**
- **NEVER** modify files not in ownership list
- **STOP** immediately if conflict detected
- Only read shared files, never write
## Parallel Execution Safety
```
Phase 02 (Backend)        Phase 03 (Frontend)
├── src/api/              ├── src/components/
├── src/schemas/          ├── src/hooks/
└── tests/api/            └── tests/components/

No overlap → Safe for parallel execution
```
**Independence principles:**
- Work without checking other phases' progress
- Trust dependencies listed in phase file
- Report completion to unblock dependent phases
## Pro Tips
1. **Read First**: Check `.claude/active-plan` for current plan path
2. **Ownership Strict**: If not in ownership list, don't touch it
3. **Phase Order**: Sequential phases (01, 04) must run in order; parallel phases (02, 03) can run simultaneously
4. **YAGNI**: Only implement what's in the phase spec, nothing more
5. **Token Efficiency**: Concise implementation, minimal overhead
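To apply tip 1 before kicking off a phase, a quick inspection like this shows the active plan and its phase files (a sketch, assuming the plan layout from the use-case table above):
```bash
# Check the active plan and list its phase files before running /code
cat .claude/active-plan                      # e.g. plans/251201-user-api
ls "$(cat .claude/active-plan)"/phase-*.md   # phase-01-..., phase-02-endpoints.md, ...
```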
## Related Agents
- [Planner Agent](/docs/agents/planner) - Creates implementation plans
- [Scout Agent](/docs/agents/scout-external) - Gathers external context before planning
## Key Takeaway
**Fullstack Developer executes implementation phases with file-level isolation, enabling safe parallel backend/frontend development while maintaining code quality through automated testing.**
---
# CLI Overview
Section: docs
Category: cli
URL: https://docs.claudekit.cc/docs/cli/index
# ClaudeKit CLI Overview
Command-line tool for bootstrapping and updating ClaudeKit projects from private GitHub repository releases.
## What is ClaudeKit CLI?
**ClaudeKit CLI** (`ck`) is a command-line tool that downloads and manages ClaudeKit starter kits from private GitHub repositories. Built with Bun and TypeScript, it provides fast, secure project setup and updates.
**Important:** You need to purchase a ClaudeKit Starter Kit from [ClaudeKit.cc](https://claudekit.cc) to use this CLI. Without a purchased kit and repository access, the CLI cannot download project templates.
## Key Features
- **Multi-tier GitHub Authentication** - Secure authentication via gh CLI → env vars → keychain → prompt
- **Streaming Downloads** - Fast downloads with progress tracking
- **Smart File Merging** - Updates projects without breaking custom changes
- **Protected Files** - Automatic protection of sensitive files and custom configurations
- **Secure Credential Storage** - Uses OS keychain for token management
- **Beautiful CLI Interface** - Interactive prompts with progress indicators
## Core Commands
### ck init
Initialize or update ClaudeKit Engineer in your project:
**Note:** This command should be run from the root directory of your project.
```bash
# Interactive mode (recommended)
ck init
# With options
ck init --kit engineer
# Specific version
ck init --kit engineer --version v1.0.0
# With exclude patterns
ck init --exclude "local-config/**" --exclude "*.local"
# Global mode - use platform-specific user configuration
ck init --global
ck init -g --kit engineer
```
**What it does:**
- Downloads specified ClaudeKit release
- Intelligently merges files
- Preserves your custom changes
- Protects sensitive files
- Maintains custom `.claude/` files
**Options:**
- `--dir <dir>` - Target directory (default: current directory)
- `--kit <kit>` - Kit to use (`engineer` or `marketing`)
- `--version <version>` - Specific version to download (default: latest)
- `--exclude <pattern>` - Exclude files/directories using glob patterns (can be used multiple times)
- `--global, -g` - Use platform-specific user configuration directory
**Global vs Local Configuration:**
By default, ClaudeKit uses local configuration (`~/.claudekit`).
For platform-specific **user-scoped settings**, use the `--global` flag:
- **macOS/Linux**: `~/.claude`
- **Windows**: `%LOCALAPPDATA%\.claude`
Global mode uses user-scoped directories (no sudo required), allowing separate configurations for different projects.
### ck update
Update the ClaudeKit CLI itself to the latest version:
```bash
# Update CLI to latest
ck update
```
**What it does:**
- Updates the `ck` command-line tool to the latest version
- Does NOT update ClaudeKit Engineer files (use `ck init` for that)
### ck versions
List available versions of ClaudeKit releases:
```bash
# Show all available versions
ck versions
# Filter by specific kit
ck versions --kit engineer
ck versions --kit marketing
# Show more versions (default: 30)
ck versions --limit 50
# Include prereleases and drafts
ck versions --all
```
**Options:**
- `--kit <kit>` - Filter by specific kit
- `--limit <n>` - Number of releases to show (default: 30)
- `--all` - Show all releases including prereleases
## Global Options
All commands support these global options:
### --verbose, -v
Enable verbose logging for debugging:
```bash
ck init --verbose
ck init -v # Short form
```
**Shows:**
- HTTP request/response details (tokens sanitized)
- File operations (downloads, extractions, copies)
- Command execution steps and timing
- Error stack traces with full context
- Authentication flow details
### --log-file
Write logs to file for sharing:
```bash
ck init --verbose --log-file debug.log
```
**Note:** All sensitive data (tokens, credentials) is automatically sanitized in logs.
## Available Kits
ClaudeKit offers premium starter kits (purchase required):
- **engineer**: ClaudeKit Engineer - Engineering toolkit with 14 specialized agents
- **marketing**: ClaudeKit Marketing - [Coming Soon]
Purchase at [ClaudeKit.cc](https://claudekit.cc) to get repository access.
## Authentication
The CLI requires a **GitHub Personal Access Token (PAT)** to download releases from private repositories.
### Authentication Flow (Multi-Tier Fallback)
1. **GitHub CLI**: Uses `gh auth token` if available
2. **Environment Variables**: Checks `GITHUB_TOKEN` or `GH_TOKEN`
3. **OS Keychain**: Retrieves stored token
4. **User Prompt**: Prompts for token and offers secure storage
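If you're unsure which tier will be used, these quick checks (not `ck` commands, just shell diagnostics) show whether the first two tiers are available:
```bash
# Quick diagnostics for the first two authentication tiers
gh auth token >/dev/null 2>&1 && echo "Tier 1: gh CLI token available"
[ -n "${GITHUB_TOKEN:-${GH_TOKEN:-}}" ] && echo "Tier 2: GITHUB_TOKEN / GH_TOKEN is set"
```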
### Creating a Personal Access Token
1. Go to GitHub Settings → Developer settings → Personal access tokens → Tokens (classic)
2. Generate new token with `repo` scope (for private repositories)
3. Copy the token
### Setting Token via Environment Variable
```bash
export GITHUB_TOKEN=ghp_your_token_here
```
## Configuration
Configuration stored in `~/.claudekit/config.json`:
```json
{
"github": {
"token": "stored_in_keychain"
},
"defaults": {
"kit": "engineer",
"dir": "."
}
}
```
## Protected Files
These files are **never overwritten** during updates:
- Environment files: `.env`, `.env.local`, `.env.*.local`
- Security files: `*.key`, `*.pem`, `*.p12`
- Dependencies: `node_modules/**`, `.git/**`
- Build outputs: `dist/**`, `build/**`
### Custom .claude Files
Your custom files in `.claude/` directory are automatically preserved:
**Example:**
```
Your project:
.claude/
├── commands/standard.md (from ClaudeKit)
└── commands/my-custom.md (your custom command)
After update:
.claude/
├── commands/standard.md (updated from release)
└── commands/my-custom.md (preserved automatically)
```
## Quick Start
```bash
# Install CLI
npm install -g claudekit-cli
# Verify installation
ck --version
# Authenticate with GitHub
gh auth login
# OR
export GITHUB_TOKEN=ghp_your_token
# Initialize project
ck init --kit engineer
# Navigate to project
cd my-project
# Start using ClaudeKit
claude # Start Claude Code
```
## Common Workflows
### Initialize or Update ClaudeKit Engineer
```bash
# Interactive mode (recommended)
ck init
# Direct with options
ck init --dir my-app --kit engineer
# Specific version
ck init --dir my-app --kit engineer --version v1.0.0
# With exclusions
ck init --exclude "*.log" --exclude "temp/**"
# Update ClaudeKit Engineer to latest
ck init
# Update to specific version
ck init --version v1.2.0
# Update with exclusions
ck init --exclude "local-config/**" --exclude "*.local"
# Update with verbose output
ck init --verbose
```
### Update the CLI Itself
```bash
# Update ck CLI to latest version
ck update
```
### Check Available Versions
```bash
# List all versions
ck versions
# Filter by kit
ck versions --kit engineer
# Show more releases
ck versions --limit 50
```
## Troubleshooting
### Authentication Failed
**Problem:** "Authentication failed"
**Solutions:**
1. Check if GitHub CLI is authenticated: `gh auth status`
2. Or set environment variable: `export GITHUB_TOKEN=ghp_your_token`
3. Verify token has `repo` scope
4. Check repository access (requires purchased kit)
### Command Not Found
**Problem:** `ck: command not found`
**Solutions:**
```bash
# Reinstall globally
npm install -g claudekit-cli
# Check installation
npm list -g claudekit-cli
# Restart terminal
```
### Download Fails
**Problem:** "Failed to download release"
**Solutions:**
1. Check internet connection
2. Verify GitHub token is valid: `gh auth status`
3. Confirm you have repository access (purchased kit)
4. Try with verbose flag: `ck init --verbose`
## Version Information
Current version: **1.2.1**
Check version:
```bash
ck --version
```
View help:
```bash
ck --help
ck -h
```
## Next Steps
- [Installation Guide](/docs/docs/cli/installation) - Install ClaudeKit CLI
- [Getting Started](/docs/getting-started/installation) - Start using ClaudeKit
---
**Ready to start?** Purchase a kit at [ClaudeKit.cc](https://claudekit.cc), then run `ck init` to initialize your first project.
---
# CLI Installation
Section: docs
Category: cli
URL: https://docs.claudekit.cc/docs/cli/installation
# CLI Installation
Install ClaudeKit CLI (`ck`) to download and manage ClaudeKit starter kits from private GitHub repository releases.
## Prerequisites
Before installing, ensure you have:
- **Node.js 18+** - [Download from nodejs.org](https://nodejs.org)
- **npm 9+** - Comes with Node.js
- **Git** - For repository access
- **ClaudeKit purchase** - Required for repository access from [ClaudeKit.cc](https://claudekit.cc)
Verify installations:
```bash
node --version # Should be v18.0.0+
npm --version # Should be 9.0.0+
git --version # Any recent version
```
## Install CLI
### Global Installation (Recommended)
```bash
npm install -g claudekit-cli
```
This installs the `ck` command globally, available from any directory.
### Verify Installation
```bash
ck --version
```
**Expected output:**
```
1.2.1
```
View help:
```bash
ck --help
```
**Output:**
```
ClaudeKit CLI v1.2.1
Usage:
ck [options]
Commands:
init Initialize or update project from ClaudeKit release
update (deprecated) Use 'init' instead
versions List available ClaudeKit releases
Options:
--version, -v Show version number
--help, -h Show help
Examples:
ck init --kit engineer
ck init --global
ck versions --kit engineer
For more info: https://docs.claudekit.cc
```
## GitHub Authentication
ClaudeKit CLI requires GitHub authentication to download from private repositories.
### Authentication Methods
The CLI uses **multi-tier authentication** with automatic fallback:
1. **GitHub CLI** - `gh auth token` (if authenticated)
2. **OS Keychain** - Stored token
3. **User Prompt** - Interactive prompt (with secure storage option)
### Option 1: GitHub CLI (Recommended)
Install and authenticate with GitHub CLI:
**macOS:**
```bash
brew install gh
```
**Windows:**
```bash
winget install GitHub.cli
```
**Linux (Ubuntu/Debian):**
```bash
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
sudo apt update
sudo apt install gh
```
**Authenticate:**
```bash
gh auth login
```
Follow prompts:
1. Select "GitHub.com"
2. Select "HTTPS"
3. Choose "Login with web browser"
4. Copy one-time code
5. Complete authentication in browser
**Verify:**
```bash
gh auth status
```
### Option 2: Interactive Prompt
If no authentication is found, the CLI will prompt you:
```
GitHub authentication required.
Please enter your GitHub Personal Access Token (PAT):
> ghp_your_token_here
Would you like to store this token securely in your OS keychain? (y/n)
> y
✓ Token stored securely
```
The token is encrypted and stored in your OS keychain:
- **macOS**: Keychain Access
- **Windows**: Windows Credential Manager
- **Linux**: libsecret
## Verify Access
After authentication, verify you can access ClaudeKit repositories:
```bash
# List available versions
ck versions --kit engineer
```
If authentication is successful, you'll see available releases. If not, you'll see an error about repository access.
## Repository Access
**Important:** You must purchase a ClaudeKit starter kit from [ClaudeKit.cc](https://claudekit.cc) to access the private repositories.
After purchase:
1. You'll be added to the GitHub repository
2. Your GitHub account gets access to releases
3. The CLI can download using your authenticated account
Without purchase, you'll see:
```
Error: Repository not found or access denied
Please purchase a ClaudeKit kit at https://claudekit.cc
```
## Configuration
ClaudeKit CLI stores configuration in `~/.claudekit/config.json`:
```json
{
"github": {
"token": "stored_in_keychain"
},
"defaults": {
"kit": "engineer",
"dir": "."
}
}
```
**Note:** The actual token is stored in your OS keychain, not in the config file.
## Update CLI
Keep the CLI updated for latest features:
```bash
npm update -g claudekit-cli
```
Check installed version:
```bash
ck --version
```
## Uninstall
Remove ClaudeKit CLI:
```bash
npm uninstall -g claudekit-cli
```
Remove configuration (optional):
```bash
rm -rf ~/.claudekit
```
## Troubleshooting
### Command not found: ck
**Problem:** Terminal doesn't recognize `ck` command
**Solutions:**
1. **Restart terminal** - Sometimes PATH needs refresh
2. **Check installation:**
```bash
npm list -g claudekit-cli
```
3. **Verify npm global bin in PATH:**
```bash
npm config get prefix
```
Add to PATH if needed:
```bash
export PATH="$PATH:$(npm config get prefix)/bin"
```
4. **Reinstall:**
```bash
npm uninstall -g claudekit-cli
npm install -g claudekit-cli
```
### Authentication failed
**Problem:** "Authentication failed" or "Repository not found"
**Solutions:**
1. **Check GitHub authentication:**
```bash
gh auth status
```
If not authenticated:
```bash
gh auth login
```
2. **Verify repository access:**
- Ensure you purchased a ClaudeKit kit
- Check you can access the repository on GitHub
### Permission denied
**Problem:** Cannot install globally
**Solutions:**
1. **Use npx (no installation needed):**
```bash
npx claudekit-cli init --kit engineer
```
2. **Fix npm permissions:**
```bash
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH
npm install -g claudekit-cli
```
3. **Use sudo (not recommended):**
```bash
sudo npm install -g claudekit-cli
```
### Download fails
**Problem:** "Failed to download release"
**Solutions:**
1. **Check internet connection**
2. **Verify authentication:**
```bash
gh auth status
```
3. **Try with verbose logging:**
```bash
ck init --kit engineer --verbose
```
4. **Check GitHub status:**
- Visit [githubstatus.com](https://www.githubstatus.com)
## Next Steps
Now that the CLI is installed:
1. **Initialize a project** - Run `ck init --kit engineer`
2. **Browse available versions** - Run `ck versions`
3. **Start developing** - Follow [Getting Started](/docs/getting-started/installation)
## Need Help?
- **Documentation**: [docs.claudekit.cc](https://docs.claudekit.cc)
- **GitHub Issues**: [github.com/mrgoonie/claudekit-cli/issues](https://github.com/mrgoonie/claudekit-cli/issues)
- **Purchase Support**: [claudekit.cc](https://claudekit.cc)
---
**Ready to start?** Run `ck init --kit engineer` to initialize your first project.
---
# Commands Overview
Section: docs
Category: commands
URL: https://docs.claudekit.cc/docs/commands/index
# Commands Overview
ClaudeKit provides a comprehensive set of slash commands to accelerate your development workflow. Each command is designed for specific tasks and automatically orchestrates the appropriate agents.
## Command Categories
### Core Development
- **[/bootstrap](/docs/commands/core/bootstrap)** - Initialize new projects with spec-driven development
- **[/cook](/docs/commands/core/cook)** - Develop new features
- **[/plan](/docs/commands/core/plan)** - Create implementation plans
- **[/brainstorm](/docs/commands/core/brainstorm)** - Explore feature feasibility
- **[/ask](/docs/commands/core/ask)** - Ask questions about the codebase
- **[/watzup](/docs/commands/core/watzup)** - Get project status and recent changes
- **[/scout](/docs/commands/core/scout)** - Find files across large codebases
- **[/test](/docs/commands/core/test)** - Run test suite and get results
- **[/debug](/docs/commands/core/debug)** - Investigate and diagnose issues
### Bug Fixing
- **[/fix](/docs/commands/fix/)** - Intelligently fix any issue (auto-selects fast/hard approach)
- **[/fix:fast](/docs/commands/fix/fast)** - Fix minor bugs quickly
- **[/fix:hard](/docs/commands/fix/hard)** - Fix complex bugs with thorough analysis
- **[/fix:ci](/docs/commands/fix/ci)** - Fix GitHub Actions CI failures
- **[/fix:logs](/docs/commands/fix/logs)** - Analyze and fix issues from logs
- **[/fix:test](/docs/commands/fix/test)** - Fix failing tests
- **[/fix:ui](/docs/commands/fix/ui)** - Fix UI/UX issues
- **[/fix:types](/docs/commands/fix/types)** - Fix TypeScript type errors
### Documentation
- **[/docs:init](/docs/commands/docs/init)** - Initialize project documentation
- **[/docs:update](/docs/commands/docs/update)** - Update project documentation
- **[/docs:summarize](/docs/commands/docs/summarize)** - Summarize project documentation
### Git Operations
- **[/git:cm](/docs/commands/git/commit)** - Stage and commit changes
- **[/git:cp](/docs/commands/git/commit-push)** - Stage, commit, and push
- **[/git:pr](/docs/commands/git/pull-request)** - Create pull request
### Planning
- **[/plan:ci](/docs/commands/plan/ci)** - Analyze CI failures and create fix plan
- **[/plan:two](/docs/commands/plan/two)** - Create plan with 2 approaches
- **[/plan:cro](/docs/commands/plan/cro)** - Create conversion optimization plan
### Design & UI
- **[/design:3d](/docs/commands/design/3d)** - Create 3D designs with Three.js
- **[/design:describe](/docs/commands/design/describe)** - Extract design from screenshots
- **[/design:fast](/docs/commands/design/fast)** - Quick design creation
- **[/design:good](/docs/commands/design/good)** - Complete, refined design
- **[/design:screenshot](/docs/commands/design/screenshot)** - Screenshot to code
- **[/design:video](/docs/commands/design/video)** - Video to code
### Content & Marketing
- **[/content:cro](/docs/commands/content/cro)** - Conversion-optimized content
- **[/content:enhance](/docs/commands/content/enhance)** - Enhance existing content
- **[/content:fast](/docs/commands/content/fast)** - Quick content creation
- **[/content:good](/docs/commands/content/good)** - High-quality content with research
### Integrations
- **[/integrate:polar](/docs/commands/integrate/polar)** - Integrate Polar.sh payments
- **[/integrate:sepay](/docs/commands/integrate/sepay)** - Integrate SePay.vn payments (Vietnam)
### Journaling
- **[/journal](/docs/commands/core/journal)** - Write development journal entries
## Quick Command Reference
### Most Used Commands
```bash
# Feature Development
/plan [feature description] # Plan the feature
/code @plans/feature.md # Implement the plan
# Bug Fixing
/fix [any issue] # Smart fix (auto-selects approach)
/fix:fast [simple bug] # Quick fix
/fix:hard [complex issue] # Thorough investigation + fix
/fix:ci [github-ci-url] # Fix CI failures
# Documentation
/docs:init # First-time setup
/docs:update # After making changes
# Git Workflow
/git:cm # Commit changes
/git:cp # Commit and push
/git:pr [to-branch] # Create pull request
# Project Status
/watzup # What's the current state?
/ask [question] # Ask about codebase
```
## Command Syntax
### Basic Syntax
```bash
/command [required-argument] [optional-argument]
```
### Examples
```bash
# No arguments
/test
/watzup
/docs:init
# Required argument
/cook [add user authentication]
/debug [login button not working]
/ask [how does routing work?]
# Optional arguments
/git:pr # PR to default branch
/git:pr [develop] # PR to develop
/git:pr [main] [feature-branch] # PR from feature to main
# Multiple arguments
/scout [authentication files] [3] # Find auth files, use 3 agents
```
## Command Workflows
### Starting a New Project
```bash
1. /bootstrap [project description]
# OR
ck init --kit engineer
2. # Customize requirements through Q&A
3. # System automatically:
- Researches best practices
- Creates implementation plan
- Implements features
- Generates tests
- Sets up documentation
```
### Developing a Feature
```bash
1. /plan [feature description]
# Creates detailed implementation plan
2. # Review plan in plans/ directory
3. /cook [implement the feature]
# Implements based on plan
# Generates tests
# Updates docs
4. /test
# Validates implementation
5. /git:cm
# Commits with conventional message
```
### Fixing a Bug
```bash
# Smart fix (recommended - auto-selects approach)
/fix [describe any issue]
# - Analyzes complexity
# - Chooses optimal strategy
# - Implements fix
# OR manually choose approach:
# Simple bug (you know the fix)
/fix:fast [typo in validation message]
# Complex bug (needs investigation)
/fix:hard [users can't login after password reset]
# - Uses scout to find related files
# - Analyzes code and logs
# - Researches solutions
# - Creates fix plan
# - Implements fix
# - Tests thoroughly
# CI failure
/fix:ci [https://github.com/user/repo/actions/runs/123]
# - Reads CI logs
# - Identifies failure cause
# - Implements fix
# - Verifies CI passes
```
### Updating Documentation
```bash
# After implementing features
/docs:update
# When onboarding new team members
/docs:summarize
# When starting with existing codebase
/docs:init
```
## Command Best Practices
### Use the Right Command for the Task
✅ **Correct Usage**
```bash
# Small fixes
/fix:fast [typo in button text]
# Complex issues
/fix:hard [memory leak in websocket connection]
# UI issues with screenshot
/fix:ui [screenshot.png] - button misaligned on mobile
```
❌ **Incorrect Usage**
```bash
# Don't use fast for complex issues
/fix:fast [entire authentication system broken]
# Don't use hard for simple fixes
/fix:hard [typo in comment]
```
### Provide Clear Descriptions
✅ **Clear**
```bash
/plan [add OAuth2 authentication with Google and GitHub providers]
/code @plans/oauth.md # Implement the plan
/debug [API returns 500 error when creating user with empty email]
```
❌ **Vague**
```bash
/plan [add auth]
/code [make it work]
/debug [something's broken]
```
### Review Before Committing
```bash
# 1. Implement
/cook [add rate limiting]
# 2. Test
/test
# 3. Review changes
git diff
# 4. Commit only if satisfied
/git:cm
```
### Use Sequential Commands for Complex Tasks
```bash
# 1. Understand codebase
/ask [how is authentication currently implemented?]
# 2. Plan changes
/plan [migrate from session-based to JWT authentication]
# 3. Review plan
cat plans/latest-plan.md
# 4. Implement
/cook [migrate to JWT authentication]
# 5. Test
/test
# 6. Fix if needed
/fix:test
# 7. Commit
/git:cm
```
## Command Flags and Options
Some commands support flags:
### /bootstrap
```bash
/bootstrap [project description] # Interactive Q&A
/bootstrap:auto [detailed description] # Fully automatic
```
### /git:pr
```bash
/git:pr # PR to default branch (main)
/git:pr [develop] # PR to develop branch
/git:pr [main] [feature] # PR from feature to main
```
### /plan
```bash
/plan [feature] # Single approach
/plan:two [feature] # Two different approaches
```
## Understanding Command Output
Commands provide structured output:
### Planning Commands
```
planner Agent: Analyzing codebase...
Research Results:
- OAuth2 best practices reviewed
- Existing auth patterns identified
- Security considerations documented
Implementation Plan Created:
plans/oauth-implementation.md
Plan Summary:
1. Install dependencies (passport, passport-google-oauth20)
2. Configure OAuth2 providers
3. Implement callback routes
4. Add session management
5. Generate tests
6. Update documentation
Estimated time: 2-3 hours
Files to create: 5
Files to modify: 3
Next: Review plan, then run /code
```
### Implementation Commands
```
Code Agent: Implementing from plan...
Dependencies Installed:
✓ passport (0.6.0)
✓ passport-google-oauth20 (2.0.0)
Files Created:
✓ src/auth/oauth-config.js
✓ src/auth/google-strategy.js
✓ src/routes/auth-callback.js
Tests Generated:
✓ tests/auth/oauth.test.js (15 tests)
Documentation Updated:
✓ docs/api/authentication.md
Implementation complete!
Next: Run /test to validate
```
### Test Commands
```
tester Agent: Running test suite...
Test Results:
✓ Unit tests: 45 passed
✓ Integration tests: 12 passed
✓ E2E tests: 8 passed
Coverage: 87.3%
All tests passed!
Next: Review changes, then /git:cm
```
## Troubleshooting Commands
### Command Not Found
**Problem**: `/command` not recognized
**Solutions:**
1. Verify you're in a ClaudeKit project (`ls .claude/`)
2. Check command exists (`ls .claude/commands/`)
3. Run `ck init` to get latest commands
4. Restart Claude Code
### Command Fails
**Problem**: Command errors during execution
**Solutions:**
1. Check error message for specific issue
2. Verify prerequisites (API keys, dependencies)
3. Review agent logs
4. Try command with simpler input
5. Use `/debug` to investigate
### Unexpected Results
**Problem**: Command doesn't do what expected
**Solutions:**
1. Review command documentation
2. Check if using correct command for task
3. Provide more specific description
4. Review generated plans before implementing
5. Use feedback to refine
## Next Steps
Explore specific command categories:
- [Core Commands](/docs/commands/core/) - Development essentials
- [Fix Commands](/docs/commands/fix/) - Debugging and fixing
- [Design Commands](/docs/commands/design/) - UI/UX creation
- [Git Commands](/docs/commands/git/) - Version control
Or learn about:
- [Agents](/docs/agents/) - How commands invoke agents
- [Workflows](/docs/docs/configuration/workflows) - Command execution flows
- [Quick Start](/docs/getting-started/quick-start) - Hands-on tutorial
---
**Key Takeaway**: ClaudeKit commands provide a natural, intuitive interface to powerful agent orchestration, making complex development tasks simple and repeatable.
---
# /content:cro
Section: docs
Category: commands/content
URL: https://docs.claudekit.cc/docs/content/cro
# /content:cro
Analyze existing content and optimize it for conversion. This command uses conversion rate optimization (CRO) best practices, psychological triggers, and A/B testing strategies to improve your copy's effectiveness.
## Syntax
```bash
/content:cro [content issues or URL]
```
## How It Works
The `/content:cro` command follows a data-driven optimization workflow:
### 1. Content Analysis
- Reads existing content (from file, URL, or description)
- Identifies conversion goals (signup, purchase, download, etc.)
- Analyzes current messaging and CTAs
- Maps user journey and friction points
### 2. Psychological Analysis
- Identifies psychological triggers used (or missing)
- Analyzes emotional resonance
- Evaluates trust signals
- Checks for cognitive biases leveraged
### 3. CRO Audit
Invokes **copywriter** agent to audit:
- Headlines and value propositions
- Call-to-action effectiveness
- Social proof and testimonials
- Urgency and scarcity elements
- Visual hierarchy and flow
- Mobile optimization
### 4. Optimization Recommendations
Provides specific improvements:
- Rewritten headlines (3-5 variations)
- Improved CTAs
- Added psychological triggers
- Enhanced social proof
- Optimized user flow
### 5. A/B Testing Strategy
Creates testing plan:
- Test variations
- Metrics to track
- Sample size requirements
- Expected lift predictions
## Examples
### Landing Page Optimization
```bash
/content:cro [analyze landing page at https://example.com/signup]
```
**What happens:**
```
1. Content Analysis
Agent: copywriter
Current State:
- Headline: "Sign up for our app"
- CTA: "Get Started"
- No social proof visible
- Benefits listed but not compelling
- No urgency elements
2. Psychological Analysis
Missing triggers:
- Loss aversion
- Social proof
- Scarcity/urgency
- Authority signals
Current triggers:
✓ Reciprocity (free trial)
⚠ Weak value proposition
3. Optimization Recommendations
HEADLINE VARIANTS:
A: "Join 50,000+ developers building faster"
B: "Ship your next feature 10x faster"
C: "The last development tool you'll need"
CTA IMPROVEMENTS:
Before: "Get Started"
After: "Start Free Trial β No credit card"
ADD SOCIAL PROOF:
- "Trusted by 50,000+ developers"
- Customer logos (GitHub, Shopify, etc.)
- Testimonials with photos
ADD URGENCY:
- "Limited spots in beta"
- "Join before prices increase Nov 1"
RESTRUCTURE FLOW:
1. Attention-grabbing headline
2. Social proof
3. Clear benefits (not features)
4. Visual demo/screenshot
5. Strong CTA
6. Risk reversal (money-back guarantee)
4. A/B Testing Plan
Test 1: Headlines
- Control: Current headline
- Variant A: "Join 50,000+ developers"
- Variant B: "Ship 10x faster"
- Metric: Signup rate
- Duration: 2 weeks, 1000+ visitors/variant
- Expected lift: 15-30%
Test 2: CTA Button
- Control: "Get Started"
- Variant: "Start Free Trial"
- Metric: Click-through rate
- Expected lift: 10-20%
✓ CRO Analysis complete
```
### Product Page Copy
```bash
/content:cro [optimize product page copy for ClaudeKit Pro subscription]
```
**What happens:**
```
1. Analysis
Product: ClaudeKit Pro ($49/month)
Goal: Increase subscription conversions
Current issues:
- Features-focused (not benefit-focused)
- No pricing anchoring
- Weak urgency
- Missing objection handling
2. Optimization Strategy
PRICING PRESENTATION:
Before:
"$49/month"
After (with anchoring):
"Just $1.63/day"
"Save $118/year (annual plan)"
Compare to: "$199/month for hiring freelancer"
TRANSFORM FEATURES TO BENEFITS:
Before: "AI-powered code generation"
After: "Ship features 10x faster, get home on time"
Before: "Automated testing"
After: "Sleep soundly knowing bugs are caught instantly"
ADD RISK REVERSAL:
- 30-day money-back guarantee
- Cancel anytime
- No questions asked refund
SOCIAL PROOF:
- "4.9/5 stars from 2,000+ developers"
- "Featured on Product Hunt #1"
- Real testimonials with metrics
3. Psychological Triggers Added
✓ Loss Aversion: "Don't waste 20 hours/week on manual tasks"
✓ Social Proof: Testimonials + user count
✓ Scarcity: "Limited beta pricing ends Dec 1"
✓ Authority: "Used by teams at Google, Meta, Amazon"
✓ Reciprocity: Free 14-day trial
✓ Commitment: "No credit card required"
4. A/B Tests Recommended
Test 1: Pricing Display
A: "$49/month"
B: "$1.63/day - Less than coffee"
Expected lift: 8-15%
Test 2: Guarantee Position
A: Guarantee at bottom
B: Guarantee next to CTA
Expected lift: 5-10%
✓ CRO complete (3 minutes)
```
### Email Campaign
```bash
/content:cro [improve conversion on welcome email sequence]
```
**What happens:**
```
1. Email Sequence Analysis
Current: 3 emails over 7 days
Goal: Activate trial users
Issues found:
- Email 1: Too long, buried CTA
- Email 2: Features, not benefits
- Email 3: Weak urgency, no social proof
2. Optimization Recommendations
EMAIL 1 (Day 0 - Welcome)
Before:
Subject: "Welcome to ClaudeKit!"
After:
Subject: "Your first AI agent is ready π"
Alt: "Here's your complete setup guide"
Body improvements:
- Cut from 300 words to 100 words
- Single clear CTA: "Create Your First Agent"
- Add quick win: "5-minute setup"
EMAIL 2 (Day 3 - Value)
Subject: "See how [Company] saved 20 hours/week"
Changes:
- Real customer story (with permission)
- Specific metrics
- CTA: "Get Similar Results"
EMAIL 3 (Day 6 - Conversion)
Subject: "Your trial ends in 24 hours β°"
Add:
- Urgency (trial ending)
- Loss aversion (lose access to agents)
- Special offer (20% off if upgrade now)
- Social proof (join 50,000+ users)
3. A/B Testing Plan
Test: Subject Lines (Email 1)
A: "Welcome to ClaudeKit!"
B: "Your first AI agent is ready π"
Metric: Open rate
Expected lift: 20-35%
Test: CTA Text (Email 2)
A: "Learn More"
B: "Get Similar Results"
Metric: Click rate
Expected lift: 15-25%
✓ Email sequence optimized
```
## When to Use
### ✅ Use /content:cro for:
**Landing Pages**
```bash
/content:cro [optimize homepage for signups]
```
**Product Pages**
```bash
/content:cro [improve pricing page conversion]
```
**Email Campaigns**
```bash
/content:cro [optimize onboarding email sequence]
```
**CTAs**
```bash
/content:cro [improve signup button conversion]
```
**Sales Pages**
```bash
/content:cro [optimize checkout flow copy]
```
### ❌ Don't use for:
**Brand New Content**
- Use `/content:fast` or `/content:good` instead
**Blog Posts**
- Use `/content:enhance` for improving blog content
**Technical Documentation**
- Use `/docs:update` instead
## Optimization Frameworks Used
### 1. AIDA Framework
```
Attention: Compelling headline
Interest: Engaging subheadline and benefits
Desire: Social proof, testimonials, guarantees
Action: Clear, compelling CTA
```
### 2. PAS Framework
```
Problem: Identify user pain point
Agitate: Emphasize consequences of inaction
Solution: Present your product as the solution
```
### 3. Before-After-Bridge
```
Before: Current unsatisfactory state
After: Desired ideal state
Bridge: How your product gets them there
```
## Psychological Triggers
### Loss Aversion
```
Before: "Get 50% more features"
After: "Don't lose 50% of your productivity"
```
### Social Proof
```
Add:
- User counts: "Join 50,000+ developers"
- Testimonials with photos and names
- Trust badges: "Featured in TechCrunch"
- Customer logos
```
### Scarcity & Urgency
```
- "Limited spots available"
- "Price increases in 48 hours"
- "Only 12 left at this price"
- "Offer ends midnight tonight"
```
### Authority
```
- "Recommended by Y Combinator"
- "Used by Fortune 500 companies"
- Expert endorsements
- Industry awards
```
### Reciprocity
```
- Free trial
- Free tools or resources
- Free consultation
- Valuable content
```
## A/B Testing Best Practices
### Test One Thing at a Time
✅ **Good:**
```
Test 1: Headline only
Test 2: CTA button only
Test 3: Social proof placement only
```
❌ **Bad:**
```
Test: Change headline + CTA + layout + colors
(Can't tell what caused the improvement)
```
### Minimum Sample Size
```
Small changes (5-10% lift):
- Need 10,000+ visitors per variant
- Run for 2-4 weeks
Large changes (30%+ lift):
- Need 1,000+ visitors per variant
- Can conclude in 1-2 weeks
```
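To sanity-check these figures for your own baseline rate, the common rule-of-thumb approximation n ≈ 16·p·(1−p)/δ² (roughly 95% confidence and 80% power) can be run as a quick one-liner (a sketch, not a ClaudeKit command):
```bash
# Rough per-variant sample size from baseline conversion rate and relative lift
# usage: ./sample_size.sh 0.03 0.20   # baseline 3%, detect a 20% relative lift
awk -v p="$1" -v lift="$2" 'BEGIN {
  d = p * lift                      # absolute difference to detect
  n = 16 * p * (1 - p) / (d * d)    # rule-of-thumb approximation
  printf "~%d visitors per variant\n", n
}'
```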
### Statistical Significance
```
Don't stop test early!
Minimum requirements:
- 95% confidence level
- Complete 2+ full weeks (account for day-of-week effects)
- Reach minimum sample size
```
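Once a test has finished, a two-proportion z-test gives a first read on significance before you act on the result (also a sketch; treat your testing platform's statistics as the source of truth):
```bash
# Quick two-proportion z-test: conversions and visitors for control (A) and variant (B)
# usage: ./ab_check.sh 300 10000 360 10000
awk -v cA="$1" -v nA="$2" -v cB="$3" -v nB="$4" 'BEGIN {
  pA = cA / nA; pB = cB / nB; p = (cA + cB) / (nA + nB)
  se = sqrt(p * (1 - p) * (1 / nA + 1 / nB))
  z  = (pB - pA) / se
  printf "lift = %.1f%%, z = %.2f (|z| > 1.96 ~ 95%% confidence)\n", (pB - pA) / pA * 100, z
}'
```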
## Output Files
After `/content:cro` completes:
### Analysis Report
```
plans/content-cro-[page-name]-YYYYMMDD.md
```
Contains:
- Current state analysis
- Issues identified
- Optimization recommendations
- A/B test plans
- Expected improvements
### Optimized Copy Variations
```
content/optimized/
├── headline-variants.md
├── cta-variations.md
├── social-proof-suggestions.md
└── complete-optimized-page.md
```
## Metrics to Track
### Primary Metrics
- **Conversion Rate**: % of visitors who complete goal action
- **Click-Through Rate**: % who click CTA
- **Bounce Rate**: % who leave without engaging
- **Time on Page**: Average engagement time
### Secondary Metrics
- **Scroll Depth**: How far users scroll
- **Heat Maps**: Where users click
- **Form Abandonment**: Where users drop off
- **Exit Rate**: Where users leave
## Common CRO Patterns
### Above the Fold
```
Must include:
1. Clear headline (value proposition)
2. Supporting subheadline
3. Visual (hero image/video)
4. Primary CTA
5. Trust signal (logo, social proof)
```
### Pricing Page
```
Optimize:
- Show value, not just price
- Use pricing anchoring
- Add social proof
- Include guarantee
- Clear feature comparison
- Address objections
```
### Checkout Flow
```
Reduce friction:
- Remove unnecessary fields
- Show progress indicator
- Include trust badges
- Offer guest checkout
- Display security assurance
```
## Best Practices
### Headlines That Convert
✅ **Good:**
```
"Ship Features 10x Faster With AI"
"Join 50,000+ Developers Building Better"
"The Last Dev Tool You'll Ever Need"
```
❌ **Bad:**
```
"Welcome to Our Platform"
"We Are the Best"
"Revolutionary Technology"
```
### CTAs That Work
✅ **Good:**
```
"Start Free Trial β No credit card"
"Get Instant Access"
"Show Me How It Works"
```
❌ **Bad:**
```
"Submit"
"Click Here"
"Learn More"
```
### Social Proof
✅ **Specific:**
```
"Trusted by 50,000+ developers at Google, Meta, Amazon"
"4.9/5 stars from 2,341 reviews"
"Helped teams ship 10x faster"
```
❌ **Vague:**
```
"Trusted by many companies"
"Highly rated"
"Popular choice"
```
## Troubleshooting
### Low Conversion Despite Optimization
**Check:**
- Is offer compelling?
- Is pricing competitive?
- Does product match promise?
- Is traffic qualified?
**Solution:**
```bash
# Analyze deeper issues
/content:cro [full funnel analysis including traffic sources]
```
### A/B Test Shows No Winner
**Possible reasons:**
- Sample size too small
- Change too subtle
- Test run too short
- Traffic quality issues
**Solution:**
```
- Extend test duration
- Increase traffic
- Test bigger changes
- Segment results by traffic source
```
## Next Steps
After optimization:
```bash
# 1. Analyze content
/content:cro [page description]
# 2. Review recommendations
cat plans/content-cro-*.md
# 3. Implement changes
/cook [implement CRO recommendations]
# 4. Set up A/B tests
# (Use your testing platform)
# 5. Monitor results
# Track metrics for 2-4 weeks
# 6. Iterate
/content:cro [further optimize winning variant]
```
## Related Commands
- [/content:enhance](/docs/commands/content/enhance) - Improve existing copy
- [/content:good](/docs/commands/content/good) - Write new optimized copy
- [/plan:cro](/docs/commands/plan/cro) - Create CRO strategy plan
---
**Key Takeaway**: `/content:cro` analyzes your content through a conversion optimization lens, providing specific recommendations, rewritten variations, and A/B testing strategies to maximize conversion rates using proven psychological triggers and CRO best practices.
---
# /content:enhance
Section: docs
Category: commands/content
URL: https://docs.claudekit.cc/docs/content/enhance
# /content:enhance
Analyze and enhance existing copy to improve clarity, impact, SEO optimization, and readability. This command refines your content while maintaining your brand voice and core message.
## Syntax
```bash
/content:enhance [content description or file path]
```
## How It Works
The `/content:enhance` command follows a comprehensive enhancement workflow:
### 1. Content Analysis
- Reads existing content (file, URL, or description)
- Identifies content type (blog, docs, marketing, etc.)
- Analyzes current quality and effectiveness
- Detects brand voice and tone
### 2. Multi-Dimensional Audit
Invokes **copywriter** agent to audit:
- **Clarity**: Is message clear and easy to understand?
- **Impact**: Does it engage and persuade?
- **SEO**: Is it optimized for search engines?
- **Readability**: Is it easy to read and scan?
- **Grammar**: Are there errors or awkward phrasing?
- **Structure**: Is information well-organized?
### 3. Enhancement Recommendations
Provides specific improvements:
- Clearer headlines and subheadings
- Stronger opening and closing
- Improved transitions
- Better word choices
- SEO keyword integration
- Readability improvements
### 4. Revised Content
Delivers enhanced version:
- Original structure preserved (or improved)
- Brand voice maintained
- All improvements applied
- Before/after comparison
## Examples
### Blog Post Enhancement
```bash
/content:enhance [improve blog post at ./blog/ai-development.md]
```
**What happens:**
```
1. Content Analysis
Agent: copywriter
Reading: ./blog/ai-development.md
Type: Technical blog post
Length: 1,200 words
Target audience: Developers
Current state:
- Headline: Generic
- Opening: Slow, buried lede
- Structure: Wall of text, few subheadings
- Readability: Grade 14 (college level)
- SEO: No keywords, weak meta
2. Issues Identified
CLARITY (Score: 6/10):
- Jargon not explained
- Complex sentences
- Vague pronouns
IMPACT (Score: 5/10):
- Weak opening hook
- No compelling examples
- Passive voice throughout
SEO (Score: 3/10):
- No primary keyword
- Missing meta description
- No internal links
- Headers not optimized
READABILITY (Score: 4/10):
- Flesch score: 32 (difficult)
- Long paragraphs (8-10 lines)
- Few bullet points
- No visual breaks
3. Enhancement Applied
HEADLINE:
Before: "AI Development Tools"
After: "How AI Development Tools Cut Coding Time by 70%"
(+Specific, benefit-focused, includes number)
OPENING:
Before: "AI tools are becoming popular..."
After: "Spent 3 hours debugging yesterday? AI tools
now catch those bugs in seconds. Here's how..."
(+Hook with specific scenario, immediate value)
STRUCTURE:
- Added 6 descriptive subheadings
- Broke long paragraphs into 2-3 lines
- Added bullet lists (3 places)
- Inserted code examples
- Added "Key Takeaway" boxes
READABILITY:
- Flesch score: 32 → 58 (easier)
- Grade level: 14 → 9
- Avg sentence: 25 words → 16 words
- Passive voice: 15% → 3%
SEO OPTIMIZATION:
- Primary keyword: "AI development tools"
- Added meta description (155 chars)
- Optimized H2s with keywords
- Added 3 internal links
- Image alt text added
WORD IMPROVEMENTS:
- "utilize" β "use" (6 places)
- "in order to" β "to" (4 places)
- Removed filler words (23 instances)
- Simplified jargon
4. Before/After Metrics
Readability: 4/10 → 8/10
SEO Score: 3/10 → 9/10
Clarity: 6/10 → 9/10
Impact: 5/10 → 8/10
Estimated read time: 8 min → 5 min
✓ Enhancement complete (2 minutes)
Files created:
- blog/ai-development-enhanced.md
- blog/ai-development-changes.diff
```
### Marketing Page Copy
```bash
/content:enhance [improve About Us page copy]
```
**What happens:**
```
1. Analysis
Content: About Us page
Current length: 500 words
Issues: Generic, corporate jargon, no personality
2. Enhancement Strategy
TONE SHIFT:
Before: Corporate, formal, distant
After: Professional but approachable, human
BEFORE:
"Our company was founded in 2020 with the mission
to provide innovative solutions to businesses
worldwide. We leverage cutting-edge technology..."
AFTER:
"In 2020, we got frustrated watching developers
waste hours on repetitive tasks. So we built
ClaudeKit: AI agents that actually work."
(+Specific, relatable, conversational)
3. Improvements Applied
STORY STRUCTURE:
- Problem we saw
- Why we started
- What makes us different
- Who we help today
- Where we're going
REMOVE JARGON:
β "Leverage cutting-edge technology"
β "Use AI that actually works"
β "Synergistic solutions"
β "Tools that work together"
β "Best-in-class offerings"
β "The best tools we could build"
ADD PERSONALITY:
- Specific founding story
- Real customer examples
- Team member quotes
- Behind-the-scenes details
STRENGTHEN CREDIBILITY:
- Concrete metrics (50,000+ users)
- Customer logos
- Specific achievements
- Team credentials (where relevant)
✓ Enhanced copy delivered
```
### Documentation Page
```bash
/content:enhance [improve API documentation for authentication]
```
**What happens:**
```
1. Analysis
Type: Technical documentation
Audience: Developers integrating API
Issues found:
- Assumes too much knowledge
- Missing practical examples
- Unclear error messages
- No troubleshooting section
2. Enhancements
ADD QUICK START:
- Minimal working example first
- Then explain details
- Progressive disclosure
BEFORE:
"The authentication endpoint accepts POST requests
with JSON payload containing username and password
fields which must be validated..."
AFTER:
"Quick Start:
```bash
curl -X POST https://api.example.com/auth \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "secret"}'
```
This returns your API token. Use it in subsequent requests..."
IMPROVE STRUCTURE:
1. Quick start (working code first)
2. Authentication flow explained
3. Request/response details
4. Error handling
5. Troubleshooting
6. Security best practices
ADD EXAMPLES:
- cURL examples
- JavaScript/Node.js
- Python
- Common use cases
CLARIFY ERRORS:
Before: "401 Unauthorized"
After:
"401 Unauthorized
- Missing API key in header
- API key expired (renew in dashboard)
- Invalid API key format"
✓ Documentation enhanced
```
## When to Use
### ✅ Use /content:enhance for:
**Existing Blog Posts**
```bash
/content:enhance [improve old blog posts for SEO]
```
**Marketing Copy**
```bash
/content:enhance [refine homepage copy]
```
**Documentation**
```bash
/content:enhance [make API docs clearer]
```
**Email Content**
```bash
/content:enhance [improve newsletter readability]
```
**Product Descriptions**
```bash
/content:enhance [enhance product page copy]
```
### ❌ Don't use for:
**Creating New Content**
- Use `/content:fast` or `/content:good` instead
**Conversion Optimization**
- Use `/content:cro` for CRO-focused improvements
**Quick Grammar Fixes**
- Just fix directly or use grammar tool
## Enhancement Dimensions
### 1. Clarity
**Before:**
```
"Our platform enables users to leverage advanced
AI capabilities to optimize their development
workflow efficiency."
```
**After:**
```
"Build features 10x faster with AI that writes,
tests, and reviews your code."
```
### 2. Impact
**Before:**
```
"We offer various features for developers."
```
**After:**
```
"Ship your next feature in hours, not weeks.
50,000+ developers already do."
```
### 3. SEO
**Optimization techniques:**
- Primary keyword in H1
- Secondary keywords in H2s
- Natural keyword density (1-2%)
- Meta description (150-160 chars)
- Internal linking
- Image alt text
- URL slug optimization
### 4. Readability
**Improvements:**
- Shorter sentences (15-20 words avg)
- Shorter paragraphs (2-4 lines)
- Bullet points for lists
- Subheadings every 300 words
- Active voice (minimize passive)
- Simple words (avoid jargon)
- Visual breaks (images, code blocks)
## Readability Metrics
### Flesch Reading Ease
```
90-100: Very easy (5th grade)
60-70: Easy (8th-9th grade) ← Target for most content
30-50: Difficult (college)
0-30: Very difficult
```
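The Flesch Reading Ease score behind these bands is 206.835 − 1.015 × (words ÷ sentences) − 84.6 × (syllables ÷ words). A minimal sketch of the calculation; the syllable counter is a crude vowel-group heuristic, not necessarily what the copywriter agent uses:
```typescript
// Crude syllable estimate: count groups of consecutive vowels.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

function fleschReadingEase(text: string): number {
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0).length;
  const words = text.match(/[A-Za-z']+/g) ?? [];
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 206.835 - 1.015 * (words.length / sentences) - 84.6 * (syllables / words.length);
}

// Shorter sentences and shorter words push the score up (easier to read).
console.log(fleschReadingEase('Ship features faster with AI that writes and tests your code.'));
```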
### Target by Content Type
```
Blog posts: 60-70 (Easy)
Marketing copy: 70-80 (Fairly easy)
Documentation: 50-60 (Standard)
Technical papers: 30-50 (Difficult) - OK
```
## SEO Best Practices
### Keyword Usage
```
Title tag: Include primary keyword
H1: Primary keyword
H2s: Secondary keywords (1-2)
First paragraph: Primary keyword naturally
Throughout: 1-2% keyword density
Meta description: Primary keyword
```
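"Keyword density" here means keyword occurrences relative to total word count. A minimal sketch of one common way to measure it (the function and sample text are illustrative, not part of the command's output):
```typescript
function keywordDensity(text: string, keyword: string): number {
  const words = text.toLowerCase().match(/[a-z0-9']+/g) ?? [];
  const phrase = keyword.toLowerCase().split(/\s+/);
  let occurrences = 0;
  for (let i = 0; i + phrase.length <= words.length; i++) {
    if (phrase.every((w, j) => words[i + j] === w)) occurrences++;
  }
  // Percentage of words that belong to the keyword phrase.
  return ((occurrences * phrase.length) / words.length) * 100;
}

// Aim for roughly 1-2% across the full article; much higher reads as keyword stuffing.
console.log(keywordDensity('AI development tools cut coding time for busy teams.', 'AI development tools'));
```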
### On-Page SEO
```
✓ Descriptive URLs (/ai-development-tools)
✓ Internal links (3-5 per post)
✓ External links (1-2 authoritative)
✓ Image optimization (alt text, compression)
✓ Mobile-friendly formatting
✓ Fast load time (remove bloat)
```
## Enhancement Patterns
### Blog Post Structure
```
1. Compelling headline (with number or benefit)
2. Hook (problem or surprising fact)
3. Quick value preview
4. Main content (subheadings every 300 words)
5. Practical examples
6. Key takeaways
7. Clear CTA
```
### Marketing Copy Structure
```
1. Attention-grabbing headline
2. Subheadline (elaborate benefit)
3. Problem identification
4. Solution presentation
5. Social proof
6. Features as benefits
7. Objection handling
8. Strong CTA
```
### Documentation Structure
```
1. Quick start (minimal working example)
2. Overview (what it does)
3. Detailed usage
4. Parameters/options
5. Examples (multiple languages)
6. Error handling
7. Troubleshooting
8. Best practices
```
## Output Files
After `/content:enhance` completes:
### Enhanced Content
```
content/enhanced/[original-name]-enhanced.md
```
Full enhanced version ready to use
### Change Report
```
content/enhanced/[original-name]-changes.diff
```
Shows all changes made
### Analysis Report
```
plans/content-enhance-[name]-YYYYMMDD.md
```
Contains:
- Original analysis scores
- Issues identified
- Enhancements applied
- Before/after metrics
## Best Practices
### Preserve Brand Voice
```
Original tone: Professional, helpful
✅ Keep: Professional, helpful
❌ Change to: Casual, quirky (unless requested)
```
### Maintain Accuracy
```
✅ Improve clarity of technical details
❌ Simplify so much that it's inaccurate
```
### Don't Over-Optimize
```
❌ Keyword stuffing for SEO
❌ Dumbing down for readability
✅ Balance clarity, accuracy, optimization
```
## Common Improvements
### Remove Filler Words
```
β "In order to" β β "To"
β "Due to the fact that" β β "Because"
β "At this point in time" β β "Now"
β "In the event that" β β "If"
β "For the purpose of" β β "For" or "To"
```
### Strengthen Verbs
```
β "Is able to" β β "Can"
β "Make use of" β β "Use"
β "Take action" β β "Act"
β "Provide assistance" β β "Help"
```
### Active Voice
```
β "The code is written by developers"
β "Developers write the code"
β "Bugs are caught by tests"
β "Tests catch bugs"
```
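The filler-word and weak-verb substitutions above are mechanical enough that a simple lookup table catches most of them. A rough sketch, assuming a small sample phrase list rather than the copywriter agent's actual rules:
```typescript
const tighteners: Record<string, string> = {
  'in order to': 'to',
  'due to the fact that': 'because',
  'at this point in time': 'now',
  'in the event that': 'if',
  'is able to': 'can',
  'make use of': 'use',
};

// Apply each replacement case-insensitively across the text.
function tighten(text: string): string {
  return Object.entries(tighteners).reduce(
    (result, [phrase, replacement]) => result.replace(new RegExp(phrase, 'gi'), replacement),
    text,
  );
}

console.log(tighten('We make use of AI in order to ship faster.'));
// "We use AI to ship faster."
```
Active-voice conversions, by contrast, need an actual rewrite of the sentence rather than find-and-replace.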
## Troubleshooting
### Enhancement Changed Meaning
**Problem:** Enhanced version says something different
**Solution:**
```bash
# Review change report
cat content/enhanced/[name]-changes.diff
# Specify what to preserve
/content:enhance [same content, but preserve technical accuracy of X section]
```
### Too Much Changed
**Problem:** Enhancement too aggressive
**Solution:**
```bash
# Request lighter touch
/content:enhance [same content, but make minimal changes for readability only]
```
### Lost Brand Voice
**Problem:** Doesn't sound like us anymore
**Solution:**
```bash
# Specify tone
/content:enhance [same content, maintain formal professional tone]
```
## After Enhancement
Standard workflow:
```bash
# 1. Enhance content
/content:enhance [content description]
# 2. Review changes
cat content/enhanced/[name]-changes.diff
# 3. Review enhanced version
cat content/enhanced/[name]-enhanced.md
# 4. Check metrics improvement
cat plans/content-enhance-*.md
# 5. If satisfied, replace original
mv content/enhanced/[name]-enhanced.md [original-location]
# 6. Commit changes
/git:cm
```
## Metrics to Track
After publishing enhanced content:
### Engagement Metrics
- Time on page (should increase)
- Bounce rate (should decrease)
- Scroll depth (should increase)
- Social shares (may increase)
### SEO Metrics
- Search rankings (track target keywords)
- Organic traffic (monitor over 30-60 days)
- Click-through rate from search
- Featured snippet appearances
### Conversion Metrics
- Goal completions (signups, downloads, etc.)
- CTA click rate
- Next page navigation
## Next Steps
- [/content:cro](/docs/commands/content/cro) - Optimize for conversion
- [/content:good](/docs/commands/content/good) - Write new quality content
- [/docs:update](/docs/commands/docs/update) - Update documentation
---
**Key Takeaway**: `/content:enhance` improves existing content across multiple dimensions (clarity, impact, SEO, readability) while preserving your core message and brand voice, making your content more effective and discoverable.
---
# /content:fast
Section: docs
Category: commands/content
URL: https://docs.claudekit.cc/docs/content/fast
# /content:fast
Generate creative and effective copy quickly. This command produces multiple variations optimized for different platforms and use cases, perfect when you need quality content fast.
## Syntax
```bash
/content:fast [content request]
```
## How It Works
The `/content:fast` command follows a rapid content generation workflow:
### 1. Request Analysis
- Understands content type needed
- Identifies target audience
- Determines tone and style
- Recognizes platform requirements
### 2. Quick Research (optional)
- Reviews similar successful content (if reference provided)
- Checks platform best practices
- No deep competitive analysis (use `/content:good` for that)
### 3. Rapid Generation
Invokes **copywriter** agent to create:
- Multiple variations (3-5)
- Platform-optimized formatting
- Different angles/approaches
- Appropriate length for platform
### 4. Quick Delivery
Delivers content within seconds:
- Ready-to-use copy
- Multiple options to choose from
- Basic formatting applied
- Character counts included
## Examples
### Social Media Posts
```bash
/content:fast [write LinkedIn post announcing new AI feature launch]
```
**What happens:**
```
Agent: copywriter
Platform: LinkedIn
Type: Product announcement
Generating 3 variations...
VARIATION 1 (Professional + Stats):
---
🚀 Big news: We just launched AI Code Review
After 6 months of development, ClaudeKit now
reviews your PRs automatically.
Results from beta:
• 87% faster code reviews
• 92% of bugs caught before production
• 50,000+ PRs reviewed
Try it free: link.claudekit.cc/code-review
#AI #Development #CodeReview
---
(253 characters, under LinkedIn optimal 300)
VARIATION 2 (Story-driven):
---
Remember spending hours in code review meetings?
We built something better.
ClaudeKit's new AI Code Review analyzes your PRs
in seconds, catching bugs, security issues, and
style problems automatically.
Beta users shipped 2x faster.
Try it free → link.claudekit.cc/code-review
---
(272 characters)
VARIATION 3 (Question hook):
---
What if code reviews took 30 seconds instead
of 30 minutes?
ClaudeKit's AI Code Review just made it possible:
✓ Automatic bug detection
✓ Security analysis
✓ Best practices enforcement
✓ Instant feedback
Free trial → link.claudekit.cc/code-review
Tired of slow reviews? This changes everything.
---
(291 characters)
✓ Content generated (8 seconds)
```
### Email Subject Lines
```bash
/content:fast [create email subject lines for product launch announcement]
```
**What happens:**
```
Agent: copywriter
Type: Email subject lines
Goal: High open rate
Generating 10 variations...
URGENCY:
1. "Your AI assistant launches in 24 hours π"
2. "Limited beta access closing tonight"
3. "Last chance: Get 50% off at launch"
CURIOSITY:
4. "The tool we've been hiding for 6 months..."
5. "This is why we went dark for 2 months"
6. "Something big just landed in your dashboard"
BENEFIT-FOCUSED:
7. "Ship features 10x faster starting today"
8. "Your new secret weapon for development"
9. "Cut your coding time by 70% (no joke)"
PERSONAL:
10. "I built this for you (and 50,000 others)"
Best performers historically:
- Urgency + specificity (#1, #2)
- Curiosity with specificity (#4)
- Concrete benefits (#9)
✓ Subject lines generated (6 seconds)
```
### Landing Page Headlines
```bash
/content:fast [write headlines for AI development tool landing page]
```
**What happens:**
```
Agent: copywriter
Target: Developer audience
Goal: Signups
Generating 5 headline variations...
BENEFIT-FOCUSED:
1. "Ship Features 10x Faster With AI Agents"
Subheadline: "Write, test, and review code
automatically. Trusted by 50,000+ developers."
NUMBER-DRIVEN:
2. "Cut Your Development Time by 70%"
Subheadline: "AI agents that actually work.
No prompt engineering required."
SOCIAL PROOF:
3. "Join 50,000+ Developers Building Faster"
Subheadline: "The AI toolkit teams at Google,
Meta, and Amazon use daily."
OUTCOME-FOCUSED:
4. "Build Better Software in Half the Time"
Subheadline: "AI-powered development that
feels like magic. Set up in 5 minutes."
PAIN-POINT:
5. "Stop Wasting 20 Hours a Week on Repetitive Tasks"
Subheadline: "AI agents handle the boring stuff.
You focus on building great products."
Recommended: #1 or #3
(Best combine clear benefit + social proof)
✓ Headlines generated (7 seconds)
```
### Ad Copy
```bash
/content:fast [create Google Ads copy for AI coding tool, target: "AI development tools"]
```
**What happens:**
```
Agent: copywriter
Platform: Google Ads
Keyword: "AI development tools"
Constraints: 30 char headlines, 90 char descriptions
Generating 3 ad variations...
AD VARIATION 1:
Headline 1: "Ship Code 10x Faster" (22 chars)
Headline 2: "AI That Actually Works" (22 chars)
Headline 3: "Try Free for 14 Days" (20 chars)
Description 1: "AI agents write, test & review
code automatically. Join 50,000+ developers
building faster." (90 chars)
Description 2: "No prompt engineering. No setup
hassle. Just AI that works. Start free trial
now." (85 chars)
AD VARIATION 2:
Headline 1: "AI Development Toolkit" (23 chars)
Headline 2: "50,000+ Devs Trust Us" (22 chars)
Headline 3: "Free Trial - No Card" (20 chars)
Description 1: "Build features faster with AI
that writes, tests, and reviews your code.
Setup in minutes." (89 chars)
Description 2: "Used by teams at Google, Meta,
Amazon. Get started free. Cancel anytime.
Zero risk." (87 chars)
AD VARIATION 3:
Headline 1: "Cut Coding Time by 70%" (23 chars)
Headline 2: "Try ClaudeKit Free" (19 chars)
Headline 3: "Trusted by 50K Devs" (20 chars)
Description 1: "AI agents handle boring tasks.
You build great products. Free 14-day trial,
no credit card." (90 chars)
Description 2: "Automatic code generation,
testing & review. Start free trial. Join teams
shipping faster." (89 chars)
✓ Ad copy generated (9 seconds)
```
### Product Description
```bash
/content:fast [write short product description for GitHub README]
```
**What happens:**
```
Agent: copywriter
Platform: GitHub README
Audience: Developers
Length: Short (100-150 words)
Generated description:
---
# ClaudeKit
AI-powered development toolkit that helps you ship faster.
## What It Does
ClaudeKit provides AI agents that write, test, and review code
automatically. No prompt engineering required: just describe what
you want to build.
**Key Features:**
- 🤖 AI code generation with full context awareness
- ✅ Automatic test generation (unit, integration, E2E)
- 🔍 Intelligent code review (security, performance, style)
- 🔧 Works with your existing tools and workflows
**Who It's For:**
Developers and teams who want to:
- Ship features 10x faster
- Spend less time on repetitive tasks
- Maintain code quality automatically
Trusted by 50,000+ developers at companies like Google, Meta,
and Amazon.
[Try Free →](https://claudekit.cc)
---
(147 words)
✓ Description generated (5 seconds)
```
## When to Use
### ✅ Use /content:fast for:
**Social Media Posts**
```bash
/content:fast [write Twitter thread about new feature]
```
**Email Content**
```bash
/content:fast [write welcome email for new users]
```
**Ad Copy**
```bash
/content:fast [create Facebook ad copy for developers]
```
**Headlines & CTAs**
```bash
/content:fast [write CTA variations for pricing page]
```
**Quick Announcements**
```bash
/content:fast [write changelog entry for v2.0 release]
```
**Short Descriptions**
```bash
/content:fast [write app store description]
```
### ❌ Don't use for:
**Long-Form Content**
- Use `/content:good` for blog posts, whitepapers
**Strategic Content**
- Use `/content:good` for content requiring research
**Conversion-Critical Pages**
- Use `/content:cro` for landing pages needing optimization
## Platform-Specific Formats
### Twitter/X
```
Character limits:
- Standard: 280 characters
- Optimal: 71-100 characters (higher engagement)
- Thread: 250 chars per tweet (leave room for "1/")
Best practices:
✓ Hook in first 100 characters
✓ Use line breaks for readability
✓ Include CTA in last tweet
✓ Add relevant hashtags (1-2 max)
```
### LinkedIn
```
Character limits:
- Max: 3,000 characters
- Optimal: 150-300 characters (highest engagement)
- With "see more": Can go longer
Best practices:
✓ Professional but conversational
✓ Add line breaks (avoid walls of text)
✓ Use relevant hashtags (3-5)
✓ Tag people/companies when appropriate
✓ Include call to action
```
### Email Subject Lines
```
Best practices:
- Length: 40-50 characters (mobile)
- Avoid spam triggers: "Free", "Act now", "!!!"
- Personalization: Use names when possible
- A/B test: Test 2-3 variations
High performers:
✓ Numbers: "10 ways to..."
✓ Questions: "Ready to ship faster?"
✓ Urgency: "Last chance for..."
✓ Curiosity: "The secret to..."
```
### Google Ads
```
Character limits:
- Headlines: 30 characters (3 required)
- Descriptions: 90 characters (2 required)
Best practices:
✓ Include keyword in headline 1
✓ Clear benefit in headline 2
✓ CTA in headline 3
✓ Description 1: Expand on benefit
✓ Description 2: Social proof or offer
```
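Because the 30/90-character limits are hard constraints, it is worth checking generated variations programmatically before pasting them into the Ads editor. A minimal sketch; the interface and function names are illustrative assumptions:
```typescript
interface AdCopy {
  headlines: string[];    // max 30 characters each
  descriptions: string[]; // max 90 characters each
}

// Returns a list of human-readable problems, empty if everything fits.
function checkAdLimits(ad: AdCopy): string[] {
  const issues: string[] = [];
  ad.headlines.forEach((h, i) => {
    if (h.length > 30) issues.push(`Headline ${i + 1}: ${h.length} chars (max 30)`);
  });
  ad.descriptions.forEach((d, i) => {
    if (d.length > 90) issues.push(`Description ${i + 1}: ${d.length} chars (max 90)`);
  });
  return issues;
}

console.log(checkAdLimits({
  headlines: ['Ship Code 10x Faster'],
  descriptions: ['AI agents write, test & review your code automatically. Start your free trial today.'],
}));
```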
### Facebook/Instagram Ads
```
Best practices:
- Length: 125 characters or less (mobile)
- First 3 words crucial (attention grabber)
- Clear CTA
- Speak to pain point
- Avoid link text (link added separately)
```
## Content Types Supported
### Short-Form (Fast generation)
- Social media posts (all platforms)
- Email subject lines
- Headlines and CTAs
- Ad copy (all platforms)
- Product descriptions (short)
- Push notifications
- SMS messages
- Taglines and slogans
### Medium-Form (Still fast)
- Email body copy
- Product pages
- About Us sections
- Landing page copy
- Changelog entries
- README descriptions
- Press releases (short)
## Output Files
After `/content:fast` completes:
### Generated Content
```
content/fast/[type]-[timestamp].md
```
Contains all variations ready to use
### Platform Specs
```
content/fast/[type]-specs.txt
```
Character counts and platform constraints
## Best Practices
### Provide Context
✅ **Good:**
```bash
/content:fast [write LinkedIn post announcing AI code review feature
launch, target audience: senior developers and engineering managers,
tone: professional but excited]
```
❌ **Vague:**
```bash
/content:fast [write LinkedIn post]
```
### Specify Platform
```bash
# Platform affects format, length, tone
/content:fast [write Twitter announcement] # vs
/content:fast [write LinkedIn announcement] # Different format!
```
### Request Multiple Variations
```bash
/content:fast [write 5 email subject line variations for product launch]
```
### Include Constraints
```bash
/content:fast [write Facebook ad copy under 125 characters]
```
## Quality vs Speed Trade-off
### /content:fast (This command)
```
Speed: Very fast (5-15 seconds)
Quality: Very good
Research: Minimal
Variations: 3-5 options
Use case: Quick content needs
```
### /content:good
```
Speed: Moderate (2-5 minutes)
Quality: Excellent
Research: Comprehensive
Variations: Multiple with rationale
Use case: Strategic content
```
## Common Patterns
### Product Announcement
```
Structure:
1. Hook (what's new)
2. Key benefit
3. Supporting details (2-3 bullets)
4. Social proof (if available)
5. Clear CTA
```
### Feature Highlight
```
Structure:
1. Pain point identified
2. How feature solves it
3. Specific benefit/outcome
4. Quick example
5. Try it CTA
```
### Company Milestone
```
Structure:
1. The milestone (users, funding, etc.)
2. What it means
3. Thank supporters
4. What's next
5. Join us CTA
```
## Troubleshooting
### Content Not Right Tone
**Problem:** Generated content doesn't match brand voice
**Solution:**
```bash
/content:fast [same request, but use professional and technical tone
like our brand voice in ./docs/brand-voice.md]
```
### Too Long for Platform
**Problem:** Generated content exceeds character limit
**Solution:**
```bash
/content:fast [same request, but keep under 280 characters for Twitter]
```
### Need More Variations
**Problem:** Want more options to choose from
**Solution:**
```bash
/content:fast [generate 10 variations instead of 5]
```
### Content Too Generic
**Problem:** Needs more specificity
**Solution:**
```bash
# Add specific details to request
/content:fast [same request but include that we have 50,000 users
and reduced coding time by 70% in beta testing]
```
## After Generation
Standard workflow:
```bash
# 1. Generate content
/content:fast [content request]
# 2. Review variations
cat content/fast/[type]-[timestamp].md
# 3. Pick favorite (or combine elements)
# 4. Customize if needed
# 5. Use in platform
# 6. Track performance
# (Engagement, clicks, conversions)
```
## Iteration
If first attempt isn't perfect:
```bash
# Original
/content:fast [write LinkedIn post about feature]
# Review output
# "Too formal, need more excitement"
# Iterate
/content:fast [write LinkedIn post about feature, but more excited
and use emojis, target developers not managers]
```
## Time Savings
Typical time to write manually vs `/content:fast`:
```
Social media post: 10 min → 8 sec (75x faster)
Email subject lines: 20 min → 6 sec (200x faster)
Ad copy variations: 30 min → 9 sec (200x faster)
Product description: 15 min → 5 sec (180x faster)
Headlines: 25 min → 7 sec (214x faster)
```
## Next Steps
- [/content:good](/docs/commands/content/good) - For strategic content with research
- [/content:enhance](/docs/commands/content/enhance) - Improve generated content
- [/content:cro](/docs/commands/content/cro) - Optimize for conversion
---
**Key Takeaway**: `/content:fast` generates quality copy in seconds with multiple variations optimized for different platforms, perfect when you need effective content quickly without compromising quality.
---
# /content:good
Section: docs
Category: commands/content
URL: https://docs.claudekit.cc/docs/content/good
# /content:good
Create high-quality, strategic copy with comprehensive research, competitor analysis, and multiple alternatives with detailed rationale. This is your go-to command for important content that requires thoughtful strategy and optimization.
## Syntax
```bash
/content:good [content request]
```
## How It Works
The `/content:good` command follows a comprehensive content creation workflow:
### 1. Deep Research Phase
Invokes **copywriter** agent with **researcher** support:
- Analyzes target audience deeply
- Studies competitor content
- Reviews successful examples
- Identifies content gaps
- Researches psychological triggers
- Examines platform best practices
### 2. Strategic Planning
- Defines clear content objectives
- Identifies key messages
- Plans content structure
- Selects appropriate tone
- Maps user journey alignment
### 3. Multi-Variant Creation
Creates multiple high-quality variations:
- 3-5 complete versions
- Different angles/approaches
- Various hooks and structures
- Platform-optimized formatting
### 4. Detailed Rationale
For each variation provides:
- Strategic reasoning
- Target audience fit
- Psychological triggers used
- Expected effectiveness
- When to use each version
### 5. A/B Testing Recommendations
Suggests testing strategy:
- Which variations to test
- What metrics to track
- Test duration recommendations
- Expected performance ranges
## Examples
### Strategic Blog Post
```bash
/content:good [write blog post about AI code review benefits for engineering teams]
```
**What happens:**
```
Agent: copywriter + researcher
Type: Strategic blog post
Goal: Drive organic traffic + signups
Phase 1: Research (45 seconds)
---
Analyzing:
- Top 10 ranking articles for "AI code review"
- Competitor content (GitHub Copilot, Tabnine, etc.)
- Engineering blogs (Google, Meta, Netflix)
- Reddit discussions (r/programming, r/devops)
Key findings:
→ Most content focuses on features, not outcomes
→ Gap: Real metrics from production use
→ Developers skeptical of AI quality claims
→ Need: Specific integration examples
→ Pain: Time wasted in manual reviews
Opportunity:
- Lead with real data (70% time savings)
- Include actual team testimonials
- Show integration in 10 minutes
- Address skepticism directly
Phase 2: Outline Created
---
1. Hook: Engineering manager pain point
2. The real cost of manual code review (with data)
3. How AI code review actually works
4. Real results: 3 team case studies
5. Common objections answered
6. Implementation guide
7. ROI calculator
8. Next steps + CTA
Word count target: 2,000 words
Reading time: 8 minutes
SEO keywords: "AI code review", "automated code review",
"code review tools"
Phase 3: Multiple Variations Generated
---
VARIATION 1: Data-Driven Approach
Title: "How 3 Engineering Teams Cut Code Review Time by 70%"
Opening:
"Last Tuesday, Sarah's team spent 6 hours in code review.
They found 3 bugs. Today, AI found 47 bugs in 12 minutes.
Here's what changed..."
Rationale:
+ Starts with specific, relatable scenario
+ Shows concrete before/after
+ Promises specific learnings
+ Best for: Engineering managers, data-driven buyers
Expected performance: High CTR from search, good conversions
VARIATION 2: Story-Driven Approach
Title: "We Let AI Review Our Code for 6 Months. Here's What Happened"
Opening:
"Six months ago, I was skeptical. 'AI can't understand
our codebase,' I told my CTO. I was wrong.
Today, our team ships 2x faster, and code quality is
measurably better. Here's the full story..."
Rationale:
+ Addresses skepticism head-on
+ First-person builds credibility
+ Narrative arc keeps readers engaged
+ Best for: Social shares, developer audience
Expected performance: High engagement, strong retention
VARIATION 3: Problem-Solution Approach
Title: "Your Code Review Process Is Broken. Here's How to Fix It"
Opening:
"Code reviews are the bottleneck nobody talks about.
PRs sit for days. Reviewers get fatigued. Critical bugs
slip through. Sound familiar?
There's a better way..."
Rationale:
+ Identifies universal pain point
+ Creates urgency
+ Promises clear solution
+ Best for: Paid ads, landing page
Expected performance: High conversion rate
Phase 4: Full Content Delivered
---
Selected: Variation 1 (Data-Driven)
Full 2,000-word article created with:
✓ Compelling headline (includes number + benefit)
✓ Strong hook (specific scenario)
✓ 3 real case studies with metrics
✓ 8 subheadings (H2) with SEO keywords
✓ Data visualizations suggested (3 charts)
✓ 5 code examples
✓ Common objections section
✓ Implementation guide
✓ Clear CTA with free trial offer
✓ Meta description (158 chars)
✓ Internal links (5) + external (2)
✓ Image suggestions (7 with alt text)
SEO optimization:
- Primary keyword in title, H1, first paragraph
- Secondary keywords in H2s
- Keyword density: 1.4% (optimal)
- Readability: Flesch score 62 (easy)
Phase 5: A/B Testing Strategy
---
Test 1: Headlines
A: "How 3 Engineering Teams Cut Code Review Time by 70%"
B: "Cut Code Review Time by 70%: Real Results from 3 Teams"
C: "The Code Review Method That Saved 20 Hours/Week"
Metric: CTR from search/social
Duration: 2 weeks
Test 2: CTAs
A: "Start Free Trial"
B: "See How It Works"
C: "Get 70% Faster Reviews"
Metric: Signup rate
Duration: 1 week
Expected overall performance:
- Organic traffic: Top 5 ranking within 3 months
- Avg. time on page: 4-5 minutes
- Signup conversion: 8-12%
✓ Content complete (2 minutes 30 seconds)
Files created:
- content/blog/ai-code-review-benefits-v1-data.md
- content/blog/ai-code-review-benefits-v2-story.md
- content/blog/ai-code-review-benefits-v3-problem.md
- content/blog/ai-code-review-research.md
- content/blog/ai-code-review-ab-tests.md
```
### Landing Page Copy
```bash
/content:good [write landing page copy for ClaudeKit Pro targeting enterprise teams]
```
**What happens:**
```
Agent: copywriter + researcher
Type: Landing page (conversion-critical)
Audience: Enterprise engineering teams
Phase 1: Research (60 seconds)
---
Competitor analysis:
- GitHub Copilot Enterprise
- Tabnine Enterprise
- AWS CodeWhisperer
- Replit Teams
Key findings:
→ All emphasize security (enterprise concern)
→ ROI focus (cost per developer matters)
→ Integration ease (IT approval process)
→ Scale (1000+ developer support)
Enterprise buyer concerns:
1. Security & compliance
2. ROI justification
3. Implementation time
4. Support & SLA
5. Migration path
Phase 2: Landing Page Strategy
---
Structure:
1. Above fold: Clear value prop + social proof
2. Social proof: Logo bar (recognizable companies)
3. Benefits: Focus on business outcomes
4. Features: Presented as business benefits
5. Security: Dedicated section
6. Case study: Enterprise success story
7. ROI calculator: Interactive
8. Pricing: Transparent with comparison
9. FAQ: Address objections
10. CTA: Multiple throughout
Phase 3: Variations Created
---
APPROACH 1: ROI-Focused
Hero Section:
---
Headline: "Cut Development Costs by 40% With AI"
Subheadline: "Enterprise-grade AI development platform
trusted by teams at Google, Meta, and Amazon."
CTA: "Calculate Your ROI" (primary)
CTA: "Book Demo" (secondary)
Social proof: "Trusted by 500+ enterprise teams"
---
Rationale:
+ Leads with CFO concern (cost reduction)
+ Specific number (40%)
+ Enterprise social proof prominent
+ ROI calculator as primary CTA (engagement)
+ Best for: Economic buyer (VPs, C-suite)
APPROACH 2: Security-First
Hero Section:
---
Headline: "Enterprise-Grade AI Development Platform"
Subheadline: "SOC 2 Type II certified. GDPR compliant.
Your code never trains our models."
CTA: "Security Whitepaper" (primary)
CTA: "Book Demo" (secondary)
Trust badges: SOC 2, GDPR, ISO 27001
---
Rationale:
+ Addresses #1 enterprise objection first
+ Compliance badges immediate
+ Security doc as primary CTA (pre-qualifies)
+ Best for: Security-conscious industries
+ Best for: IT decision makers
APPROACH 3: Productivity-Focused
Hero Section:
---
Headline: "Ship Enterprise Features 10x Faster"
Subheadline: "AI agents that write, test, and review
code at enterprise scale. 500+ teams building faster."
CTA: "See It In Action" (primary)
CTA: "Start Free Trial" (secondary)
Social proof: "4.9/5 from enterprise customers"
---
Rationale:
+ Focuses on engineering team benefit
+ Concrete outcome (10x faster)
+ Demo as primary CTA (shows capability)
+ Best for: Engineering leaders
+ Best for: Technical evaluators
Phase 4: Full Page Copy Delivered
---
Selected: Approach 1 (ROI-Focused)
Complete sections written:
✓ Hero (headline, subhead, CTAs)
✓ Logo bar (suggested companies)
✓ Benefits section (5 key benefits)
✓ Features (8 features as business outcomes)
✓ Security & compliance section
✓ Case study (fictional but realistic)
✓ ROI calculator (formula provided)
✓ Pricing table (3 tiers)
✓ FAQ (10 questions)
✓ Final CTA section
Word count: 2,500 words
Conversion elements: 7 CTAs, 12 trust signals
Phase 5: A/B Testing Plan
---
Test 1: Hero Headlines
A: "Cut Development Costs by 40% With AI"
B: "Enterprise Teams Ship 10x Faster With AI"
C: "The AI Platform 500+ Enterprise Teams Trust"
Metric: Scroll depth, time on page
Duration: 2 weeks, 5,000+ visitors
Test 2: Primary CTA
A: "Calculate Your ROI"
B: "Book Demo"
C: "Start Free Trial"
Metric: Conversion rate
Duration: 2 weeks
Expected performance:
- Conversion rate: 5-8% (enterprise typical)
- Demo booking rate: 3-5%
- Avg time on page: 3+ minutes
✓ Landing page complete (3 minutes)
```
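The example above mentions an ROI calculator without reproducing its formula. One plausible shape for that calculation is sketched below; every number is an illustrative assumption, not a figure from the docs:
```typescript
// Illustrative inputs only.
const developers = 50;
const hoursSavedPerDevPerMonth = 20;  // in line with the "20+ hours/month" claim elsewhere in these docs
const loadedHourlyCost = 75;          // assumed fully loaded cost per engineering hour (USD)
const monthlyCostPerSeat = 40;        // assumed subscription price per developer (USD)

const monthlySavings = developers * hoursSavedPerDevPerMonth * loadedHourlyCost;
const monthlyCost = developers * monthlyCostPerSeat;
const roi = (monthlySavings - monthlyCost) / monthlyCost;

console.log(`Savings: $${monthlySavings}/mo, cost: $${monthlyCost}/mo, ROI: ${(roi * 100).toFixed(0)}%`);
// Savings: $75000/mo, cost: $2000/mo, ROI: 3650%
```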
### Product Launch Email
```bash
/content:good [write product launch email sequence for new AI testing feature]
```
**What happens:**
```
Agent: copywriter
Type: Email sequence (3-email series)
Goal: Feature activation
Phase 1: Research
---
Analyzing:
- Best performing product launch emails
- Testing tools positioning (Jest, Cypress, etc.)
- User feedback about testing pain points
Key insights:
→ Developers hate writing tests (tedious)
→ But value test coverage (quality/confidence)
→ Time is main objection
→ Trust is critical (accuracy concerns)
Phase 2: Sequence Strategy
---
Email 1 (Day 0): Announcement + quick win
Email 2 (Day 3): Deep dive + social proof
Email 3 (Day 7): Urgency + special offer
Phase 3: Email Series Created
---
EMAIL 1: The Announcement
---
Subject line variations:
A: "We just made testing 10x faster π"
B: "Your new testing superpower is here"
C: "Testing used to suck. Not anymore."
Selected: A
Body:
Hi [First Name],
Quick question: How much time did you spend writing
tests last week?
If it's more than 2 hours, we have good news.
ClaudeKit now writes your tests automatically.
Unit tests. Integration tests. E2E tests. All generated
from your code in seconds.
Sarah's team tried it yesterday. Generated 247 tests
in 18 minutes. 94% coverage.
Want to try? It takes 30 seconds to set up:
[Quick Start Guide]
Happy testing (well, happy not testing)
[Your name]
P.S. First 1,000 users get Pro features free for 30 days.
---
(153 words, 2 min read, 1 primary CTA)
Rationale:
+ Opens with relatable question
+ Quantifies pain (time spent)
+ Introduces solution clearly
+ Real example with metrics
+ Easy CTA (30-second setup)
+ PS creates urgency
EMAIL 2: The Deep Dive
---
(Sent 3 days later to non-openers and openers who didn't click)
Subject: "How ClaudeKit generated 247 tests in 18 minutes"
Body:
Hey [First Name],
Remember Sarah's team from our last email?
Here's the full story of their first week with
AI-powered testing...
[Day 1]: Generated 247 tests in 18 minutes
[Day 3]: Found 8 bugs before production
[Day 7]: Achieved 94% test coverage
"We used to dread writing tests. Now it's automatic."
- Sarah Chen, Engineering Lead @ TechCorp
How it works:
1. Install ClaudeKit CLI (1 command)
2. Run: claudekit test generate
3. Review generated tests
4. Commit and push
That's it. No configuration. No prompt engineering.
Watch it in action: [3-min demo video]
Or try it yourself: [Start free trial]
[Your name]
P.S. Our Pro features trial ends in 4 days. Don't miss it.
---
(168 words, includes video CTA alternative)
Rationale:
+ Continues Sarah's story (narrative thread)
+ Provides specific timeline
+ Includes testimonial with attribution
+ Shows simplicity (3 steps)
+ Two CTAs (watch or try)
+ PS reinforces urgency
EMAIL 3: The Urgency Push
---
(Sent 4 days after Email 2, to non-converters)
Subject: "Your Pro trial expires in 24 hours β°"
Body:
[First Name],
Quick heads up: Your free Pro trial expires tomorrow.
After that, AI testing returns to the waitlist.
What you'll lose access to:
❌ Automatic test generation
❌ E2E test creation
❌ 95%+ coverage guarantee
❌ Priority support
What you keep:
✅ Manual test writing
✅ Weekend debugging sessions
✅ Low test coverage stress
Just kidding (mostly) about the last two.
But seriously: 500+ teams already activated their tests.
The setup takes 30 seconds.
The time saved: 20+ hours/month.
Ready? [Activate AI Testing Now]
Not for you? No problem. [Give us feedback]
[Your name]
P.S. Reactivating later means joining the waitlist.
Do it now while you can.
---
(129 words, clear urgency, humor to soften)
Rationale:
+ Clear urgency (24-hour deadline)
+ Uses loss aversion (what they'll lose)
+ Humor makes urgency less pushy
+ Social proof (500+ teams)
+ ROI reminder (20+ hours/month)
+ Easy out (feedback link)
+ PS reinforces consequences
Phase 4: A/B Testing Recommendations
---
Email 1 Subject Lines:
Test A: "We just made testing 10x faster π"
Test B: "Your new testing superpower is here"
Expected: A wins (specific + emoji)
Email 2 CTA:
Test A: "Watch demo" vs "Try it yourself"
Expected: "Try it" wins (lower friction)
Email 3 Send Time:
Test A: 9am vs 2pm
Expected: 9am wins (checked morning)
Sequence performance targets:
- Email 1 open rate: 35-45%
- Email 2 open rate: 25-35%
- Email 3 open rate: 20-30%
- Overall activation rate: 12-18%
✓ Email sequence complete (2 minutes 45 seconds)
```
## When to Use
### ✅ Use /content:good for:
**Strategic Blog Posts**
```bash
/content:good [write thought leadership post on AI in software development]
```
**Landing Pages**
```bash
/content:good [write landing page for enterprise product]
```
**Important Emails**
```bash
/content:good [write product launch email sequence]
```
**White Papers**
```bash
/content:good [write whitepaper on AI code review ROI]
```
**Sales Pages**
```bash
/content:good [write sales page for annual subscription]
```
**Campaign Copy**
```bash
/content:good [write multi-channel campaign for new feature]
```
### ❌ Don't use for:
**Quick Social Posts**
- Use `/content:fast` instead (seconds vs minutes)
**Simple Announcements**
- Use `/content:fast` for speed
**Internal Docs**
- Use `/docs:update` instead
## Research Process
### Competitor Analysis
```
What it examines:
- Top 5-10 competitors
- Their messaging and positioning
- Content that performs well
- Gaps in their content
- Opportunities to differentiate
```
### Audience Research
```
What it analyzes:
- Target audience pain points
- Language they use
- Objections they have
- Triggers that motivate them
- Platforms they use
```
### Content Performance Data
```
What it reviews:
- Top-performing similar content
- Engagement patterns
- Conversion benchmarks
- SEO keyword opportunities
- Social share patterns
```
## Quality Indicators
### /content:good Quality
```
Research depth: Comprehensive
Strategic thinking: Deep
Multiple variations: 3-5 with rationale
A/B test strategy: ✓ Included
Time: 2-5 minutes
Use for: Strategic content
```
### /content:fast Quality
```
Research depth: Minimal
Strategic thinking: Light
Multiple variations: 3-5 quick versions
A/B test strategy: Basic suggestions
Time: 5-15 seconds
Use for: Quick content needs
```
## Output Files
After `/content:good` completes:
### Content Variations
```
content/good/[type]-v1-[approach].md
content/good/[type]-v2-[approach].md
content/good/[type]-v3-[approach].md
```
Each variation with full copy
### Research Report
```
content/good/[type]-research-[date].md
```
Contains:
- Competitor analysis
- Audience insights
- Content strategy
- Rationale for approaches
### A/B Testing Plan
```
content/good/[type]-ab-tests-[date].md
```
Detailed testing recommendations
### Strategic Brief
```
content/good/[type]-brief-[date].md
```
Overall strategy and guidelines
## Best Practices
### Provide Clear Goals
✅ **Good:**
```bash
/content:good [write landing page copy for enterprise product targeting
VPs of Engineering, goal is demo bookings, emphasize security and ROI]
```
❌ **Vague:**
```bash
/content:good [write landing page]
```
### Share Context
```bash
/content:good [write blog post about AI testing, our unique angle is
we achieved 95% coverage in production with 1000+ companies, competitor
X focuses on unit tests only, competitor Y is expensive]
```
### Specify Audience
```bash
/content:good [write for technical audience: senior developers who
evaluate tools, skeptical of AI, value code quality over speed]
```
### Include Constraints
```bash
/content:good [write email sequence, brand voice is professional but
friendly (see ./brand-voice.md), avoid hype, be specific with data]
```
## Time Investment vs Quality
```
Command           Time       Quality     Best For
/content:fast     ~10 sec    Very good   Quick needs, social media
/content:good     2-5 min    Excellent   Strategic, conversion-critical
Manual writing    2-8 hrs    Excellent   Unique thought leadership
```
## After Generation
Standard workflow:
```bash
# 1. Generate content
/content:good [detailed request]
# 2. Review research report
cat content/good/[type]-research-*.md
# 3. Review all variations
cat content/good/[type]-v*.md
# 4. Read rationale for each
# (Included in each variation file)
# 5. Select best approach (or combine elements)
# 6. Customize if needed
# 7. Implement A/B tests
cat content/good/[type]-ab-tests-*.md
# 8. Track performance
```
## Metrics to Track
After publishing content:
### Engagement Metrics
- Time on page
- Scroll depth
- Bounce rate
- Social shares
- Comments/discussions
### Conversion Metrics
- Primary goal completions
- CTA click rates
- Form submissions
- Trial signups
- Demo bookings
### SEO Metrics
- Search rankings
- Organic traffic
- Click-through rate from search
- Featured snippets
- Backlinks acquired
## Next Steps
- [/content:fast](/docs/commands/content/fast) - For quick content needs
- [/content:cro](/docs/commands/content/cro) - Optimize for conversion
- [/content:enhance](/docs/commands/content/enhance) - Improve existing content
---
**Key Takeaway**: `/content:good` creates strategic, high-quality content backed by research and competitor analysis, providing multiple variations with detailed rationale and A/B testing recommendations, which makes it the right choice for conversion-critical and brand-defining content.
---
# /bootstrap
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/bootstrap
# /bootstrap
Initialize new projects with spec-driven and test-driven development. This command guides you through requirements gathering, research, planning, and implementation.
## Syntax
```bash
/bootstrap [project description]
```
## How It Works
The `/bootstrap` command follows a comprehensive workflow:
### 1. Requirements Gathering (Interactive)
Asks you questions one by one to fully understand requirements:
```
What type of application are you building?
→ REST API / Web App / Mobile App / CLI Tool
What's your primary tech stack preference?
→ Node.js / Python / Go / Rust
Do you need a database?
→ PostgreSQL / MySQL / MongoDB / None
What authentication method?
→ JWT / OAuth2 / Session-based / None
```
### 2. Research Phase
Invokes **researcher** agents to:
- Find best practices for chosen stack
- Research recommended libraries
- Review security considerations
- Identify common patterns
### 3. Planning Phase
Invokes **planner** agent to:
- Create detailed architecture
- Define file structure
- Plan implementation steps
- Design test strategy
- Document tech decisions
### 4. Implementation Phase
Automatically:
- Generates project structure
- Implements core features
- Creates configuration files
- Sets up build system
- Generates comprehensive tests
### 5. Validation Phase
Runs tests to ensure:
- All features work correctly
- Tests pass
- Code follows standards
- Documentation is complete
## Examples
### Basic Usage
```bash
/bootstrap [build a REST API for task management]
```
**What happens:**
1. Interactive Q&A about requirements
2. Research best practices for REST APIs
3. Create implementation plan
4. Generate project structure
5. Implement endpoints (CRUD operations)
6. Generate tests
7. Create API documentation
### With Tech Stack Specified
```bash
/bootstrap [create a blog platform with Next.js and PostgreSQL]
```
**What happens:**
1. Confirms tech choices (Next.js, PostgreSQL)
2. Asks about additional requirements (auth, comments, etc.)
3. Researches Next.js 14 + PostgreSQL best practices
4. Plans database schema
5. Implements blog features
6. Sets up Prisma ORM
7. Generates E2E tests
### CLI Tool
```bash
/bootstrap [build a CLI tool for managing environment variables]
```
**What happens:**
1. Asks about CLI framework preference
2. Researches CLI best practices
3. Plans command structure
4. Implements commands
5. Adds help documentation
6. Generates unit tests
## Automated Mode
For fully automatic bootstrapping without Q&A:
```bash
/bootstrap:auto [detailed project description with all requirements]
```
**Example:**
```bash
/bootstrap:auto [
Build a REST API for a todo application with:
- Node.js + Express.js
- PostgreSQL database with Prisma ORM
- JWT authentication
- CRUD operations for tasks
- User management
- Input validation with Joi
- Rate limiting
- Comprehensive test suite with Jest
- Swagger/OpenAPI documentation
]
```
**Important:** Automated mode requires a very detailed description including:
- Tech stack specifics
- Feature requirements
- Authentication method
- Database choice
- Testing preferences
- Any special considerations
## Generated Structure
After running `/bootstrap`, you'll have:
```
my-project/
├── .claude/                  # ClaudeKit configuration
│   ├── commands/             # Custom slash commands
│   ├── agents/               # Agent definitions
│   └── workflows/            # Development workflows
├── src/                      # Source code
│   ├── routes/               # API routes (for APIs)
│   ├── models/               # Data models
│   ├── middleware/           # Express middleware
│   ├── utils/                # Utilities
│   └── server.js             # Entry point
├── tests/                    # Test suite
│   ├── unit/                 # Unit tests
│   ├── integration/          # Integration tests
│   └── e2e/                  # E2E tests
├── docs/                     # Documentation
│   ├── api/                  # API documentation
│   ├── code-standards.md
│   ├── system-architecture.md
│   └── codebase-summary.md
├── plans/                    # Implementation plans
├── .env.example              # Environment template
├── package.json              # Dependencies
├── tsconfig.json             # TypeScript config
└── README.md                 # Project readme
```
## Features
### Spec-Driven Development
Creates detailed specifications before coding:
- Architecture decisions documented
- API contracts defined
- Database schema planned
- Test cases outlined
### Test-Driven Development
Generates tests alongside implementation:
- Unit tests for functions
- Integration tests for APIs
- E2E tests for workflows
- Test coverage >80%
### Best Practices Built-In
Follows industry standards (a rough sketch follows this list):
- Error handling
- Input validation
- Security measures
- Rate limiting
- Logging
- Environment configuration
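The docs don't show the generated code itself; the following is a minimal sketch of what the input-validation, rate-limiting, and error-handling pieces typically look like in an Express project, assuming the widely used `express-rate-limit` and `joi` packages. The endpoint, schema fields, and limits are illustrative assumptions:
```typescript
import express from 'express';
import rateLimit from 'express-rate-limit';
import Joi from 'joi';

const app = express();
app.use(express.json());

// Rate limiting: at most 100 requests per IP per 15 minutes.
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));

// Input validation for a hypothetical registration endpoint.
const registerSchema = Joi.object({
  email: Joi.string().email().required(),
  password: Joi.string().min(8).required(),
});

app.post('/auth/register', (req, res) => {
  const { error, value } = registerSchema.validate(req.body);
  if (error) return res.status(400).json({ error: error.message });
  // ...hash the password, create the user, issue a token...
  return res.status(201).json({ email: value.email });
});

// Centralized error handler so unexpected failures return a clean 500.
app.use((err: Error, _req: express.Request, res: express.Response, _next: express.NextFunction) => {
  console.error(err);
  res.status(500).json({ error: 'Internal server error' });
});

app.listen(3000);
```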
## Configuration Options
Customize bootstrapping through Q&A:
### Application Type
- REST API
- GraphQL API
- Web Application
- Mobile App
- CLI Tool
- Microservice
### Tech Stack
- Node.js (Express, Fastify, NestJS)
- Python (FastAPI, Django, Flask)
- Go (Gin, Echo)
- TypeScript
- Rust (Actix, Rocket)
### Database
- PostgreSQL
- MySQL
- MongoDB
- SQLite
- None (stateless)
### Authentication
- JWT
- OAuth2 (Google, GitHub)
- Session-based
- API Keys
- None
### Additional Features
- Real-time (WebSockets)
- Caching (Redis)
- File uploads
- Email sending
- Scheduled jobs
## Best Practices
### Provide Clear Description
✅ **Good:**
```bash
/bootstrap [build a REST API for managing inventory with user authentication and real-time stock updates]
```
❌ **Vague:**
```bash
/bootstrap [make an app]
```
### Answer Questions Thoughtfully
During Q&A:
- ✅ Think about scalability needs
- ✅ Consider security requirements
- ✅ Plan for testing
- ✅ Be specific about features
### Review Generated Plan
Before implementation starts:
1. Check `plans/` directory
2. Review architecture decisions
3. Verify feature list matches expectations
4. Provide feedback if needed
### Iterate on Requirements
If initial result isn't perfect:
```bash
# Review what was generated
ls src/
# Provide feedback
"The user model needs role-based access control"
# System will adjust accordingly
```
## Common Use Cases
### Microservice
```bash
/bootstrap [create a payment processing microservice with Stripe integration]
```
### Full-Stack App
```bash
/bootstrap [build a social media platform with posts, comments, and likes]
```
### API Gateway
```bash
/bootstrap [implement an API gateway with authentication and rate limiting]
```
### Background Worker
```bash
/bootstrap [create a background job processor for email sending]
```
## Troubleshooting
### Too Many Questions
**Problem:** Q&A taking too long
**Solution:** Use `/bootstrap:auto` with detailed description
### Wrong Tech Stack Chosen
**Problem:** System chose technology you don't want
**Solution:** Be explicit in initial description or during Q&A
### Missing Features
**Problem:** Some features not implemented
**Solution:** Add features after bootstrapping using `/cook`
### Tests Failing
**Problem:** Generated tests don't pass
**Solution:** Use `/fix:test` to diagnose and fix
## After Bootstrapping
Once project is initialized:
```bash
# 1. Review generated code
cat src/server.js
# 2. Check tests pass
npm test
# 3. Update documentation
/docs:update
# 4. Add additional features
/cook [add password reset functionality]
# 5. Commit initial structure
/git:cm
```
## Next Steps
- [/cook](/docs/commands/core/cook) - Add new features
- [/plan](/docs/commands/core/plan) - Plan additions
- [/test](/docs/commands/core/test) - Run test suite
- [/docs:update](/docs/commands/docs/update) - Update docs
---
**Key Takeaway**: `/bootstrap` handles the entire project initialization process, from requirements gathering to tested, documented code, following industry best practices.
---
# /cook
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/cook
# /cook
:::tip[Token-Saving Tip: /cook = /plan + /code]
**If you already have a plan**, skip `/cook` and use `/code` directly:
```bash
# ✅ Efficient: Use existing plan (saves tokens!)
/code @plans/your-feature-plan.md
# ❌ Wasteful: /cook will create a new plan, consuming extra tokens
/cook [same feature you already planned]
```
**The relationship:**
- `/cook` = `/plan` + `/code` (full cycle: planning → implementation)
- `/code` = Implementation only (uses existing plan)
**When to use each:**
| Command | When to Use |
|---------|-------------|
| `/cook` | New feature, no plan exists |
| `/code @plan.md` | Plan already exists in `plans/` directory |
:::
Main command for feature development. Orchestrates planning, implementation, testing, code review, and documentation updates automatically.
## Syntax
```bash
/cook [feature description]
```
## How It Works
The `/cook` command follows a complete development workflow:
### 1. Planning (if needed)
If no plan exists:
- Invokes **planner** agent to create implementation plan
- Saves plan to `plans/` directory
- Asks for your approval before proceeding
If plan exists:
- Uses existing plan from `plans/` directory
- Proceeds directly to implementation
### 2. Scout (for complex features)
For features requiring multiple file changes:
- Invokes **scout** agents to locate relevant files
- Identifies integration points
- Maps dependencies
### 3. Implementation
Main implementation phase:
- Writes code following plan
- Adheres to project code standards
- Follows established patterns
- Implements error handling
- Adds input validation
### 4. Testing
Automatically generates and runs tests:
- Unit tests for new functions
- Integration tests for APIs
- E2E tests for user flows
- Validates test coverage >80% (see the config sketch below)
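The >80% figure is usually enforced through the test runner's configuration rather than by convention alone. For a Jest-based project, a minimal sketch of such a threshold; the exact config `/cook` generates isn't shown in these docs:
```typescript
// jest.config.ts
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // Fail the test run if any of these drop below 80%.
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};

export default config;
```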
### 5. Code Review
Invokes **code-reviewer** agent to:
- Check code quality
- Verify security practices
- Validate performance
- Ensure best practices followed
### 6. Documentation
Updates documentation:
- API documentation
- Code comments
- Architecture docs
- Usage examples
### 7. Summary Report
Provides completion report:
- Files created/modified
- Tests generated
- Coverage metrics
- Next steps
## Examples
### Basic Feature
```bash
/cook [add user registration endpoint]
```
**What happens:**
```
1. planner: Creates implementation plan
- POST /auth/register endpoint
- Email/password validation
- Password hashing
- JWT token generation
2. Implementation
- src/routes/auth.js created
- src/middleware/validate.js created
- src/utils/hash.js created
3. Testing
- tests/auth/register.test.js created
- 12 tests generated
- Coverage: 92%
4. Documentation
- docs/api/authentication.md updated
✓ Feature complete
```
### Complex Feature
```bash
/cook [implement real-time chat with WebSocket support]
```
**What happens:**
```
1. Scout Phase
- Locates WebSocket configuration
- Finds existing event handlers
- Identifies authentication middleware
2. Planning
- WebSocket server setup
- Room management
- Message persistence
- Authentication integration
3. Implementation
- src/websocket/server.js
- src/websocket/rooms.js
- src/models/message.js
- Database migrations
4. Testing
- WebSocket connection tests
- Message delivery tests
- Room management tests
- Integration tests with auth
5. Documentation
- WebSocket API documented
- Usage examples added
```
### Integration Feature
```bash
/cook [add Stripe payment processing]
```
**What happens:**
```
1. Research (via planner)
- Stripe API best practices
- Security considerations
- Webhook handling
2. Implementation
- Stripe SDK integration
- Payment intent creation
- Webhook endpoint
- Error handling
3. Testing
- Mock Stripe responses
- Test payment flows
- Webhook validation
4. Security Review
- API key management
- Webhook signature verification
- Error message sanitization
```
## When to Use /cook
### ✅ Use /cook for:
- **New Features**: Adding functionality that doesn't exist
- **API Endpoints**: Creating new routes/controllers
- **Database Models**: Adding new entities
- **Integrations**: Connecting external services
- **Refactoring**: Restructuring existing code
- **Enhancements**: Improving existing features
### ❌ Don't use /cook for:
- **Bug Fixes**: Use `/fix:fast` or `/fix:hard` instead
- **Type Errors**: Use `/fix:types` instead
- **UI Issues**: Use `/fix:ui` instead
- **CI Failures**: Use `/fix:ci` instead
- **Just Planning**: Use `/plan` instead
## With Existing Plan
If you've already created a plan with `/plan`:
```bash
# 1. Create plan first
/plan [add two-factor authentication]
# 2. Review the plan
cat plans/two-factor-auth.md
# 3. Implement using plan
/cook [implement two-factor authentication]
# System uses existing plan automatically
```
## Without Plan (Ad-hoc)
For simple features, skip planning:
```bash
/cook [add email validation to user registration]
# System determines no plan needed
# Implements directly
# Still generates tests and docs
```
## Best Practices
### Provide Clear Descriptions
✅ **Good:**
```bash
/cook [implement password reset flow with email verification]
/cook [add file upload endpoint with S3 integration]
/cook [create admin dashboard with user management]
```
❌ **Vague:**
```bash
/cook [add stuff]
/cook [make it better]
/cook [fix things]
```
### Let It Run
Don't interrupt the process:
- ✅ Let all agents complete
- ✅ Review results at the end
- ✅ Provide feedback after completion
Interrupting can cause:
- Incomplete implementation
- Missing tests
- Outdated documentation
### Review Before Committing
```bash
# 1. Cook the feature
/cook [add rate limiting middleware]
# 2. Review changes
git diff
# 3. Run tests manually if desired
npm test
# 4. Check documentation
cat docs/api/rate-limiting.md
# 5. Only then commit
/git:cm
```
### Iterate on Feedback
If result isn't perfect:
```bash
# First attempt
/cook [add caching layer]
# Review result
# Not satisfied with Redis configuration
# Provide feedback
"The Redis connection should use connection pooling and handle reconnection gracefully"
# System adjusts implementation
```
## Generated Artifacts
After `/cook` completes, you'll have:
### Code Files
```
src/
├── routes/         # New routes
├── controllers/    # New controllers
├── models/         # New models
├── middleware/     # New middleware
├── utils/          # Helper functions
└── services/       # Business logic
```
### Test Files
```
tests/
├── unit/                    # Unit tests
│   └── feature.test.js
├── integration/             # Integration tests
│   └── feature-api.test.js
└── e2e/                     # End-to-end tests
    └── feature-flow.test.js
```
### Documentation
```
docs/
├── api/                 # API documentation
│   └── new-endpoint.md
└── guides/              # Usage guides
    └── new-feature.md
```
### Plan (if created)
```
plans/
└── feature-name-YYYYMMDD.md
```
## Progress Tracking
During execution, you'll see real-time updates:
```
✓ planner Agent: Implementation plan created
⏳ Code Agent: Implementing authentication module...
○ tester Agent: Pending
○ code-reviewer Agent: Pending
Files Created:
✓ src/auth/login.js
⏳ src/auth/register.js
○ src/middleware/auth.js
Tests Generated:
✓ tests/auth/login.test.js (8 tests)
⏳ tests/auth/register.test.js (12 tests)
```
## Error Handling
If something goes wrong:
### Implementation Errors
```
✗ Error during implementation:
TypeError: Cannot read property 'user' of undefined
at src/auth/login.js:23
Invoking debugger agent...
```
System automatically:
1. Invokes **debugger** agent
2. Analyzes error
3. Attempts fix
4. Re-runs tests
### Test Failures
```
✗ Tests failed:
12 passed, 3 failed
Invoking tester agent for analysis...
```
System:
1. Analyzes test failures
2. Identifies root cause
3. Fixes implementation
4. Re-runs tests
## Advanced Usage
### Specify Implementation Approach
```bash
/cook [add search functionality using Elasticsearch instead of database queries]
```
### Include Specific Requirements
```bash
/cook [implement file uploads with:
- Maximum 10MB file size
- Support for images and PDFs
- Virus scanning with ClamAV
- S3 storage
- Thumbnail generation for images
]
```
### Request Specific Testing
```bash
/cook [add payment processing with comprehensive test coverage including edge cases and error scenarios]
```
## Common Patterns
### API Endpoint
```bash
/cook [create GET /api/users endpoint with pagination and filtering]
```
### Database Model
```bash
/cook [add Product model with inventory tracking and low stock alerts]
```
### Background Job
```bash
/cook [implement email queue processor using Bull and Redis]
```
### Middleware
```bash
/cook [create API key authentication middleware with rate limiting]
```
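For orientation, a middleware produced for the last pattern above might look roughly like this sketch. It is purely illustrative: Express is assumed, the key store and limits are made-up names, and `/cook` will tailor the real implementation to your codebase.
```javascript
// Hypothetical API-key auth middleware with a naive in-memory rate limit.
// A real implementation would back this with a persistent store such as Redis.
const requestCounts = new Map(); // apiKey -> { count, windowStart }

const WINDOW_MS = 60_000;  // 1-minute window (assumed)
const MAX_REQUESTS = 100;  // per key per window (assumed)

function apiKeyAuth(validKeys) {
  return function (req, res, next) {
    const key = req.header('x-api-key');
    if (!key || !validKeys.has(key)) {
      return res.status(401).json({ error: 'Invalid or missing API key' });
    }

    const now = Date.now();
    const entry = requestCounts.get(key) || { count: 0, windowStart: now };
    if (now - entry.windowStart > WINDOW_MS) {
      entry.count = 0;
      entry.windowStart = now;
    }
    entry.count += 1;
    requestCounts.set(key, entry);

    if (entry.count > MAX_REQUESTS) {
      return res.status(429).json({ error: 'Rate limit exceeded' });
    }
    next();
  };
}

// Assumed wiring: app.use('/api', apiKeyAuth(new Set([process.env.API_KEY])));
module.exports = { apiKeyAuth };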
## After Cooking
Standard workflow after `/cook`:
```bash
# 1. Feature implemented
/cook [add notifications system]
# 2. Review changes
git status
git diff
# 3. Run tests
/test
# 4. Fix any issues
/fix:test # if tests fail
# 5. Update docs if needed
/docs:update
# 6. Commit
/git:cm
# 7. Push
git push
# 8. Create PR
/git:pr
```
## Troubleshooting
### Too Many Files Changed
**Problem:** Feature modified too many files unexpectedly
**Solution:**
- Review plan before implementing
- Provide more specific feature description
- Use `/scout` first to see what exists
### Tests Not Passing
**Problem:** Generated tests failing
**Solution:**
```bash
/fix:test
# Debugger agent analyzes and fixes
```
### Missing Features
**Problem:** Some requirements not implemented
**Solution:**
```bash
# Add missing features
/cook [add the missing password strength validation]
```
### Code Quality Issues
**Problem:** Generated code doesn't meet standards
**Solution:**
- Review `.claude/workflows/development-rules.md`
- Update code standards in `docs/code-standards.md`
- Re-run `/cook` with updated context
## Next Steps
- [/test](/docs/commands/core/test) - Run test suite
- [/fix:test](/docs/commands/fix/test) - Fix test failures
- [/git:cm](/docs/commands/git/commit) - Commit changes
- [/docs:update](/docs/commands/docs/update) - Update documentation
---
**Key Takeaway**: `/cook` is your primary feature development command, handling everything from planning to tested, documented code automatically.
---
# /ask
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/ask
# /ask
Strategic architectural consultation command. Provides expert guidance on technical decisions, system design, and architectural challenges without implementation.
## Syntax
```bash
/ask [technical-question]
```
## When to Use
- **Architecture Decisions**: Choosing between design patterns or system architectures
- **Technology Selection**: Evaluating frameworks, databases, or infrastructure choices
- **Design Challenges**: Solving complex technical problems with multiple trade-offs
- **Scalability Planning**: Assessing performance and growth considerations
- **Risk Assessment**: Identifying potential issues before implementation
## Quick Example
```bash
/ask [should we use microservices or monolithic architecture for a SaaS platform with 10k users?]
```
**Output**:
```
Architecture Analysis:
- Current scale: 10k users suggests moderate complexity
- Team size and expertise matter for microservices
- Deployment and monitoring overhead considerations
Design Recommendations:
1. Start with Modular Monolith
- Faster development velocity
- Lower operational complexity
- Clear module boundaries for future extraction
Technology Guidance:
- Use domain-driven design principles
- Implement API-first architecture
- Plan for horizontal scaling
Implementation Strategy:
Phase 1: Modular monolith with clear boundaries
Phase 2: Extract high-traffic modules if needed
Phase 3: Evaluate microservices at 50k+ users
Next Actions:
- Define module boundaries in code
- Implement monitoring early
- Plan database schema for future separation
```
**Result**: Strategic guidance received, no code changed, ready to implement with confidence.
## Arguments
- `[technical-question]`: Your architecture question, design challenge, or technical decision requiring expert consultation
## What Happens
When you run this command:
1. **Problem Analysis**: System analyzes your question and gathers architectural context from project docs
2. **Expert Consultation**: Four specialized advisors provide perspectives:
- **Systems Designer**: Evaluates system boundaries, interfaces, component interactions
- **Technology Strategist**: Recommends stacks, frameworks, architectural patterns
- **Scalability Consultant**: Assesses performance, reliability, growth considerations
- **Risk Analyst**: Identifies issues, trade-offs, mitigation strategies
3. **Architecture Synthesis**: Combines insights into comprehensive architectural guidance
4. **Strategic Validation**: Ensures recommendations align with business goals and constraints
**Agents Invoked**: Systems Designer, Technology Strategist, Scalability Consultant, Risk Analyst
**Context Used**: Automatically references `./docs` project documentation including:
- `project-overview-pdr.md` - Product requirements and goals
- `code-standards.md` - Development standards
- `system-architecture.md` - Current architecture
- `deployment-guide.md` - Infrastructure constraints
## Complete Example
### Scenario: Database Choice for Analytics Feature
```bash
/ask [we need to add analytics tracking for user behavior. Should we use PostgreSQL with TimescaleDB, ClickHouse, or a separate analytics service like Mixpanel?]
```
**Response Breakdown**:
**1. Architecture Analysis**
```
Current Context:
- Existing PostgreSQL database for transactional data
- ~10k daily active users with growth to 100k projected
- Team familiar with SQL, limited DevOps resources
- Real-time dashboard requirements
Key Considerations:
- Data volume: ~1M events/day currently, 10M projected
- Query patterns: Time-series aggregations, user funnels
- Budget: Prefer self-hosted to control costs
- Compliance: GDPR requires data residency control
```
**2. Design Recommendations**
```
Option A: PostgreSQL + TimescaleDB (Recommended)
Pros:
- Leverage existing PostgreSQL expertise
- Single database reduces operational complexity
- Native SQL support for analytics queries
- GDPR compliance through self-hosting
Cons:
- May require tuning for high-volume writes
- Complex queries impact transactional performance
Option B: ClickHouse
Pros:
- Superior performance for analytical queries
- Handles 10M+ events/day easily
Cons:
- New technology to learn and maintain
- Separate infrastructure increases complexity
- SQL dialect differences
Option C: Mixpanel
Pros:
- Zero operational overhead
- Rich analytics UI out of box
Cons:
- Monthly cost: $999+ at 100k users
- Limited GDPR control
- Vendor lock-in
```
**3. Technology Guidance**
```
Recommended Approach: TimescaleDB Extension
1. Add TimescaleDB to existing PostgreSQL
2. Create separate schema for analytics
3. Use connection pooling (PgBouncer)
4. Implement write buffering via Redis
Architecture Pattern:
[App] → [Redis Buffer] → [PostgreSQL/TimescaleDB]
                                   ↓
                         [Analytics Dashboard]
```
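As a rough illustration of the write-buffering pattern above, events could be pushed to a Redis list and flushed to TimescaleDB in batches. The `ioredis` and `pg` libraries, the `analytics:events` key, and the `events` table shape are assumptions for the sketch, not part of the advisor output.
```javascript
// Hypothetical event buffer: app writes go to Redis, a timer flushes batches
// into a TimescaleDB hypertable so analytics writes stay off the hot path.
const Redis = require('ioredis');
const { Pool } = require('pg');

const redis = new Redis();              // assumes a local Redis instance
const pg = new Pool();                  // assumes PG* env vars are configured
const BUFFER_KEY = 'analytics:events';  // illustrative key name

async function trackEvent(event) {
  await redis.rpush(BUFFER_KEY, JSON.stringify(event));
}

async function flushEvents(batchSize = 500) {
  const raw = await redis.lrange(BUFFER_KEY, 0, batchSize - 1);
  if (raw.length === 0) return;

  const events = raw.map((r) => JSON.parse(r));
  // "events" is assumed to be a hypertable with (time, user_id, name, payload).
  const placeholders = [];
  const params = [];
  events.forEach((e, i) => {
    const base = i * 4;
    placeholders.push(`($${base + 1}, $${base + 2}, $${base + 3}, $${base + 4})`);
    params.push(e.time, e.userId, e.name, e.payload);
  });
  await pg.query(
    `INSERT INTO events (time, user_id, name, payload) VALUES ${placeholders.join(', ')}`,
    params
  );
  await redis.ltrim(BUFFER_KEY, raw.length, -1); // drop only what was flushed
}

setInterval(() => flushEvents().catch(console.error), 5000);
```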
**4. Implementation Strategy**
```
Phase 1 (Week 1-2): Proof of Concept
- Install TimescaleDB extension
- Create hypertables for event tracking
- Test write performance with sample data
- Benchmark query performance
Phase 2 (Week 3-4): Production Integration
- Implement Redis write buffer
- Create analytics API layer
- Add monitoring (query performance, disk usage)
- Set up automated retention policies
Phase 3 (Month 2): Dashboard Development
- Build real-time dashboards
- Implement user funnel queries
- Add export capabilities
Migration Path if Needed:
- ClickHouse migration feasible if volume exceeds 50M events/day
- Data export for compliance always available
```
**5. Next Actions**
```
1. Run benchmark: Load 1M sample events, test query performance
2. Review team capacity: Confirm PostgreSQL tuning expertise
3. Cost analysis: Compare TimescaleDB self-hosting vs Mixpanel at scale
4. Proof of concept: Build simple event tracking in dev environment
5. Decision checkpoint: Validate performance meets requirements before full build
```
**Additional Considerations**:
```
- If you need /scout to find existing database configs, I can call it
- Consider data retention policies (GDPR 2-year limit)
- Plan for gradual rollout to validate write performance
```
## Common Use Cases
### Technology Stack Selection
```bash
/ask [choosing between Next.js, Remix, and Astro for documentation site with AI chat]
```
Gets guidance on framework selection based on SSR, static generation, and interactive component needs.
### Scaling Strategy
```bash
/ask [our API response time increased to 2s under load. Should we add caching, scale horizontally, or optimize queries?]
```
Receives analysis of bottlenecks and prioritized optimization strategy.
### Integration Architecture
```bash
/ask [how should we integrate payment processing: direct Stripe integration or payment gateway abstraction layer?]
```
Gets trade-off analysis between simplicity and vendor flexibility.
### Security Design
```bash
/ask [what's the best approach for API authentication: JWT, session-based, or API keys?]
```
Receives security assessment and recommendations based on use case.
## Best Practices
### Ask Strategic Questions
✅ **Good:**
```bash
/ask [should we use WebSockets or Server-Sent Events for real-time notifications?]
/ask [how to structure microservices boundaries for e-commerce domain?]
/ask [what database architecture for multi-tenant SaaS with data isolation?]
```
❌ **Too Implementation-Focused:**
```bash
/ask [how to write a function that connects to Redis?]
/ask [what's the syntax for PostgreSQL indexes?]
/ask [debug this error message]
```
### Provide Context
Include relevant constraints:
```bash
/ask [
Need caching solution for:
- 100k daily users
- Budget: $200/month
- Team knows Redis basics
- Must support complex invalidation
Should we use Redis, Memcached, or in-memory cache?
]
```
### Review Project Docs First
The `/ask` command automatically reads `./docs` but you can help by:
1. Keeping `system-architecture.md` updated
2. Documenting constraints in `project-overview-pdr.md`
3. Updating `code-standards.md` with preferences
## What /ask DOES NOT Do
- ❌ Write implementation code
- ❌ Fix bugs (use `/debug` or `/fix:*` instead)
- ❌ Deploy infrastructure
- ❌ Make final decisions (you decide, it advises)
## Integration with Workflow
### Before Planning
```bash
# 1. Get architectural guidance
/ask [best approach for background job processing?]
# 2. Review recommendations
# [Advisor output received]
# 3. Create implementation plan
/plan [implement background jobs using Bull + Redis]
# 4. Implement
/cook [add background job processing]
```
### During Code Review
```bash
# 1. Review PR
git diff main
# 2. Question design choice
/ask [is this service layer abstraction over-engineered for CRUD operations?]
# 3. Adjust based on guidance
# [Make changes if recommended]
```
### Can Call /scout Automatically
If `/ask` needs more context about your codebase:
```
Architecture Analysis:
Need to understand current database setup...
Invoking /scout to find database configurations...
[Scout results integrated into analysis]
```
You can also call it explicitly:
```bash
# First scout the codebase
/scout [find all API authentication implementations] 3
# Then ask architectural question
/ask [should we consolidate these auth patterns or keep them separate?]
```
## Common Issues
### Too Vague Questions
**Problem**: "How should I build this feature?"
**Solution**: Be specific about the challenge
```bash
/ask [what's the best way to handle file uploads over 100MB with progress tracking and resume capability?]
```
### Missing Context
**Problem**: Advice doesn't fit your constraints
**Solution**: Include constraints in question
```bash
/ask [authentication approach for mobile app with requirement: offline-first, sync when online]
```
### Implementation Questions
**Problem**: Asking about syntax or debugging
**Solution**: Use appropriate commands
- Code syntax: Just ask directly
- Bugs: `/debug [issue]`
- Implementation: `/plan` then `/code`
## Related Commands
- [/scout](/docs/commands/core/scout) - Search codebase before asking architectural questions
- [/plan](/docs/commands/core/plan) - Create implementation plan after receiving guidance
- [/debug](/docs/commands/core/debug) - Investigate technical issues (not architecture)
---
**Key Takeaway**: `/ask` provides strategic architectural consultation from expert advisors, helping you make informed technical decisions before implementation begins.
---
# /scout
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/scout
# /scout
High-speed parallel codebase exploration. Spawns multiple search agents to quickly find relevant files, implementations, and integration points before development work.
## Syntax
```bash
/scout [user-prompt] [scale]
```
## When to Use
- **Pre-Implementation**: Find relevant files before starting feature work
- **Codebase Understanding**: Quickly understand how existing features work
- **Integration Discovery**: Locate where to connect new functionality
- **Pattern Recognition**: Find existing implementations to follow
- **Refactoring Prep**: Identify all files affected by planned changes
## Quick Example
```bash
/scout [find all authentication and authorization implementations] 3
```
**Output**:
```
Spawning 3 scout agents in parallel...
Agent 1: Searching src/auth/, src/middleware/
✓ Found: src/auth/login.js, src/auth/register.js, src/middleware/auth.js
Agent 2: Searching src/routes/, src/controllers/
✓ Found: src/routes/auth.js, src/controllers/user.js
Agent 3: Searching tests/, docs/
✓ Found: tests/auth/*.test.js, docs/api/authentication.md
Results saved to: plans/scouts/authentication-search-20251030.md
```
**Result**: Located 8 relevant files in 15 seconds across 3 parallel searches. Ready to implement auth feature.
## Arguments
- `[user-prompt]`: What you're searching for (be specific about feature, pattern, or file type)
- `[scale]`: Number of parallel scout agents to spawn (default: 3, recommended: 2-5)
## What Happens
When you run this command:
1. **Intelligent Division**: Analyzes your search prompt and divides the codebase into logical regions
2. **Parallel Spawning**: Launches multiple `Explore` subagents simultaneously, each searching different folders
3. **External Tools**: Each agent uses external agentic coding tools (Gemini, OpenCode, Claude) for fast search
4. **Result Aggregation**: Combines findings from all agents into single report
5. **Output Saved**: Results written to `plans/scouts/[search-name]-[date].md`
**Agents Invoked**: Multiple `Explore` subagents (quantity = scale parameter)
**Timeout**: 3 minutes per agent (skips agents that don't return in time)
## Complete Example
### Scenario: Adding Real-Time Features
You want to add WebSocket support but need to understand existing patterns first.
```bash
/scout [find WebSocket, event handlers, and real-time implementations] 4
```
**Process**:
**1. Agent Division**
```
Analyzing codebase structure...
Agent 1: src/websocket/, src/events/
Agent 2: src/server.js, src/config/
Agent 3: src/middleware/, src/utils/
Agent 4: tests/integration/, docs/
```
**2. Parallel Execution**
```
⏳ Agent 1: Searching WebSocket implementations...
⏳ Agent 2: Searching server setup and configuration...
⏳ Agent 3: Searching middleware and utilities...
⏳ Agent 4: Searching tests and documentation...
[15 seconds later]
✓ Agent 1: Complete (3 files)
✓ Agent 2: Complete (2 files)
✓ Agent 3: Complete (4 files)
✓ Agent 4: Complete (5 files)
```
**3. Report Generated**
```markdown
# Scout Report: WebSocket & Real-Time Implementations
Date: 2025-10-30
Scale: 4 agents
Duration: 15.3s
## Findings
### WebSocket Setup
- src/websocket/server.js - Main WebSocket server configuration
- src/websocket/handlers.js - Event handlers for client connections
- src/websocket/rooms.js - Room management for broadcasting
### Server Integration
- src/server.js:45 - WebSocket server attached to Express
- src/config/websocket.js - WebSocket configuration (port, CORS)
### Middleware & Utilities
- src/middleware/ws-auth.js - WebSocket authentication middleware
- src/utils/ws-emit.js - Helper for emitting events
- src/utils/broadcast.js - Room broadcasting utility
- src/middleware/rate-limit-ws.js - Rate limiting for WebSocket events
### Tests & Documentation
- tests/integration/websocket.test.js - WebSocket connection tests
- tests/integration/broadcast.test.js - Broadcasting tests
- tests/integration/ws-auth.test.js - Auth middleware tests
- docs/api/websocket.md - WebSocket API documentation
- docs/guides/real-time.md - Real-time features guide
## Patterns Observed
1. **Authentication Pattern**
- JWT token sent via WebSocket handshake query params
- Middleware validates on connection, stores user in socket.data
2. **Event Handling Pattern**
- Handlers defined in separate files by domain
- Events namespaced: user:*, message:*, notification:*
3. **Room Management**
- Users join rooms based on resource access
- Broadcast helper checks permissions before emitting
## Integration Points
To add new real-time features:
1. Create handler in src/websocket/handlers/[feature].js
2. Register handler in src/websocket/server.js
3. Use src/utils/broadcast.js for room events
4. Add tests in tests/integration/websocket/
## Files for Review (14 total)
[Complete file list with line references]
```
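For context on the authentication pattern the report describes (a JWT validated during the WebSocket handshake and stored on the socket), a middleware along these lines is typical. The sketch assumes Socket.IO and the `jsonwebtoken` package; neither is confirmed by the report itself.
```javascript
// Hypothetical Socket.IO handshake auth: verify a JWT supplied by the client
// and stash the decoded user on socket.data for downstream handlers.
const jwt = require('jsonwebtoken');

function registerAuthMiddleware(io, secret) {
  io.use((socket, next) => {
    // The token may arrive via the auth payload or the query string.
    const token = socket.handshake.auth?.token || socket.handshake.query?.token;
    if (!token) {
      return next(new Error('Authentication token missing'));
    }
    try {
      socket.data.user = jwt.verify(token, secret); // available to all handlers
      next();
    } catch (err) {
      next(new Error('Invalid or expired token'));
    }
  });
}

module.exports = { registerAuthMiddleware };
```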
## Scale Parameter Guidelines
### Small Projects (<100 files)
```bash
/scout [search criteria] 2
```
Two agents sufficient for small codebases.
### Medium Projects (100-500 files)
```bash
/scout [search criteria] 3
```
Default scale, balances speed and thoroughness.
### Large Projects (500+ files)
```bash
/scout [search criteria] 5
```
More agents for faster parallel coverage.
### Monorepos
```bash
/scout [search criteria in specific-package] 4
```
Scope search to specific package or directory.
## Search Prompt Best Practices
### Be Specific
✅ **Good:**
```bash
/scout [find API routes, controllers, and middleware for user management] 3
/scout [locate all database models, migrations, and ORM configurations] 3
/scout [find authentication, JWT handling, and session management] 3
```
❌ **Vague:**
```bash
/scout [find stuff about users] 3
/scout [search the code] 3
/scout [look for things] 3
```
### Include Context
Mention feature domain or pattern:
```bash
/scout [payment processing with Stripe: webhooks, charge creation, subscription handling] 4
```
### Specify File Types if Relevant
```bash
/scout [all TypeScript test files for API endpoints] 3
/scout [React components for dashboard and settings pages] 3
/scout [Docker, Kubernetes, and deployment configs] 2
```
## Common Use Cases
### Before New Feature
```bash
# 1. Scout existing patterns
/scout [find similar features to understand patterns] 3
# 2. Review findings
cat plans/scouts/latest-report.md
# 3. Plan implementation
/plan [add new feature following discovered patterns]
```
### Understanding Legacy Code
```bash
# Find how feature works
/scout [trace email sending from trigger to delivery] 3
# Review flow
# [Report shows: queue → worker → mailer → templates]
```
### Pre-Refactoring
```bash
# Find all affected files
/scout [find all files importing deprecated utility] 4
# See impact
# [Report shows 23 files across 5 modules]
# Plan refactor
/plan [migrate to new utility pattern]
```
### Integration Point Discovery
```bash
# Find where to hook in
/scout [find all event emitters and listeners for audit logging] 3
# Review integration points
# [Report shows 12 emit points, 3 listeners]
# Add new listener
/cook [add audit log listener for user events]
```
## Output Structure
Reports saved to `plans/scouts/` with structure:
```markdown
# Scout Report: [Search Description]
Date: YYYY-MM-DD
Scale: N agents
Duration: Xs
## Findings
[Organized by agent/directory]
## Patterns Observed
[Common patterns across findings]
## Integration Points
[Where to connect new code]
## Files for Review
[Complete list with paths and line refs]
## Next Steps
[Recommended actions]
```
## Performance Notes
### Speed Comparison
**Traditional search** (single-threaded grep):
```bash
# 45 seconds for complex search in large codebase
grep -r "authentication" src/
```
**Scout with scale=3**:
```bash
# 15 seconds for same search
/scout [find authentication implementations] 3
```
**Speedup**: ~3x faster through parallelization
### Token Efficiency
Scout agents are designed to be token-efficient:
- Don't read full file contents
- Extract relevant snippets only
- Summarize findings concisely
- Skip binary and generated files
### Timeout Behavior
Each agent has 3-minute timeout:
```
⏳ Agent 1: Searching... [Complete at 0:15]
⏳ Agent 2: Searching... [Complete at 0:18]
⏳ Agent 3: Searching... [Timeout at 3:00, skipped]
Note: Agent 3 timed out, results partial
```
System continues with available results, doesn't retry.
## Integration with Other Commands
### Scout β Ask
```bash
# 1. Find existing implementations
/scout [find caching implementations] 3
# 2. Ask architectural question with context
/ask [should we consolidate these caching patterns or keep separate?]
```
### Scout β Plan β Code
```bash
# 1. Scout codebase
/scout [find API error handling patterns] 3
# 2. Plan feature
/plan [add centralized error handling middleware]
# 3. Implement
/code @plans/error-handling.md
```
### Scout β Debug
```bash
# 1. Find relevant files
/scout [find all database connection handling] 3
# 2. Debug issue
/debug [connection pool exhaustion in production]
```
## Advanced Usage
### Focused Search
Narrow scope for faster results:
```bash
/scout [search only in src/api/ for route handlers] 2
```
### Multi-Domain Search
Search across different concerns:
```bash
/scout [find all references to user model: database, API, tests, docs] 5
```
### Pattern Hunting
Find implementation examples:
```bash
/scout [find examples of async/await error handling in controllers] 3
```
## Common Issues
### Too Many Results
**Problem**: Scout returns 100+ files
**Solution**: Narrow search scope
```bash
# Instead of:
/scout [find user code] 3
# Use:
/scout [find user authentication in API routes only] 2
```
### Agent Timeout
**Problem**: Agents timing out (large codebase)
**Solution**: Increase scale or narrow scope
```bash
# More agents to divide work
/scout [search criteria] 5
# Or narrow scope
/scout [search only in src/auth/] 2
```
### Irrelevant Results
**Problem**: Found files not relevant
**Solution**: Be more specific in prompt
```bash
# Instead of:
/scout [find tests] 3
# Use:
/scout [find integration tests for payment API] 3
```
### Missing Results
**Problem**: Expected files not found
**Solution**: Check if agents timed out, or search manually
```bash
# Check scout report for timeout notes
cat plans/scouts/latest-report.md
# Search manually for specific file
find src/ -name "*payment*"
```
## Related Commands
- [/ask](/docs/commands/core/ask) - Ask architectural questions about scouted code
- [/plan](/docs/commands/core/plan) - Create implementation plan using scout findings
- [/cook](/docs/commands/core/cook) - Implement feature after scouting integration points
---
**Key Takeaway**: `/scout` uses parallel agent search to quickly map your codebase, finding relevant files 3x faster than traditional search, providing context for informed development decisions.
---
# /debug
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/debug
# /debug
Issue investigation and diagnosis command. Analyzes bugs, errors, and technical problems to identify root causes and explain them clearly, without automatically implementing fixes.
## Syntax
```bash
/debug [issues]
```
## When to Use
- **Bug Investigation**: Understand why something isn't working
- **Error Analysis**: Diagnose error messages and stack traces
- **Behavior Investigation**: Figure out unexpected application behavior
- **Performance Issues**: Identify bottlenecks and slowdowns
- **Test Failures**: Understand why tests are failing
## Quick Example
```bash
/debug [API returns 500 error when creating user with long username]
```
**Output**:
```
Root Cause Analysis:
- src/controllers/user.js:34 - No validation for username length
- Database constraint: usernames max 50 chars
- Request with 75-char username triggers database error
- Error not caught, returns generic 500 instead of 400
Chain of Events:
1. POST /api/users with username length 75
2. Controller passes directly to database
3. PostgreSQL rejects: "value too long for type character varying(50)"
4. Unhandled error bubbles up as 500 Internal Server Error
Affected Components:
- src/controllers/user.js (missing validation)
- src/models/user.js (constraint defined but not enforced in app)
- src/middleware/validator.js (username validation not implemented)
Fix Required:
Add username length validation before database call:
- Validate max 50 chars in controller or middleware
- Return 400 Bad Request with clear error message
- Add test case for long username
Related Issues:
- Email field also lacks length validation (same pattern)
- Consider centralized validation schema (Joi/Zod)
```
**Result**: Root cause identified, fix approach explained, no code changed yet.
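To make the recommendation concrete (remember, `/debug` only describes it and never writes the code), the validation it calls for might look like the following Express sketch. The route and field come from the example; everything else is assumed.
```javascript
// Hypothetical request validation for the diagnosed issue: reject oversized
// usernames with a 400 instead of letting the database error surface as a 500.
const MAX_USERNAME_LENGTH = 50; // matches the varchar(50) constraint above

function validateCreateUser(req, res, next) {
  const { username } = req.body || {};
  if (typeof username !== 'string' || username.length === 0) {
    return res.status(400).json({ error: 'username is required' });
  }
  if (username.length > MAX_USERNAME_LENGTH) {
    return res.status(400).json({
      error: `username must be at most ${MAX_USERNAME_LENGTH} characters`,
    });
  }
  next();
}

// Assumed wiring: app.post('/api/users', validateCreateUser, createUserHandler);
module.exports = { validateCreateUser };
```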
## Arguments
- `[issues]`: Description of the problem, error message, unexpected behavior, or failing test
## What Happens
When you run this command:
1. **Issue Intake**: System captures your problem description
2. **Debugger Agent Invocation**: Delegates to specialized `debugger` subagent
3. **Root Cause Analysis**: Agent investigates:
- Examines relevant code files
- Traces execution flow
- Analyzes error messages and stack traces
- Identifies integration points
- Reviews recent changes (if relevant)
4. **Explanation Generation**: Creates detailed diagnostic report
5. **User Communication**: Explains findings in clear, actionable terms
**Agents Invoked**: `debugger` subagent
**Important**: `/debug` does NOT implement fixes automatically. It only diagnoses and explains.
## Complete Example
### Scenario: Memory Leak in Production
You notice memory usage growing indefinitely in production.
```bash
/debug [Node.js process memory grows from 200MB to 2GB over 24 hours, eventually crashes]
```
**Debugger Agent Investigation Process**:
**1. Information Gathering**
```
Examining production metrics...
- Memory growth rate: ~75MB/hour
- No corresponding user activity increase
- Heap snapshots show string accumulation
Checking recent deployments...
- Feature added 3 days ago: real-time notifications
- Timeline matches memory leak onset
```
**2. Code Analysis**
```
Analyzing notification system...
src/websocket/notifier.js:
β Connection handling looks correct
β Disconnection cleanup present
! Event listeners not being removed
src/services/notification-queue.js:
! Queue maintains references to all notifications
! No cleanup of old notifications
! Growing unbounded
```
**3. Root Cause Identification**
```
Root Cause Found:
File: src/services/notification-queue.js
Lines: 23-45
Issue:
- Notification queue stores all notifications in memory array
- Array grows without cleanup
- Each notification ~2KB with user data
- 1000 notifications/hour = 2MB/hour growth
- 24 hours = 48MB direct + garbage collection overhead = ~75MB/hour
Code Pattern:
class NotificationQueue {
  constructor() {
    this.queue = []; // Never cleared!
  }
  add(notification) {
    this.queue.push(notification); // Unbounded growth
    this.emit('notification', notification);
  }
}
Why It's Problematic:
- Old notifications never removed from memory
- References prevent garbage collection
- Event listeners accumulate on each notification
```
**4. Impact Assessment**
```
Affected Components:
- src/services/notification-queue.js (primary issue)
- src/websocket/notifier.js (uses the queue)
- All connected WebSocket clients (receive notifications)
Production Impact:
- Current: ~1.2GB memory usage (80% capacity)
- Projected: Crash in ~6 hours at current rate
- Risk: High - immediate fix required
User Impact:
- Intermittent connection drops as memory pressure increases
- Slow response times due to garbage collection pauses
- Service restarts cause notification loss
```
**5. Recommended Fix**
```
Solution 1: Implement Notification Expiry (Recommended)
- Add timestamp to notifications
- Remove notifications older than 1 hour
- Implement periodic cleanup
- Estimated effort: 2 hours
Solution 2: Switch to Redis Queue
- Move notification queue to Redis
- Automatic TTL-based expiry
- Supports horizontal scaling
- Estimated effort: 4 hours
Solution 3: Limit Queue Size
- Cap queue at 1000 notifications
- Remove oldest when adding new
- Quick fix but loses old notifications
- Estimated effort: 30 minutes
Immediate Mitigation:
- Restart service to clear memory
- Implement Solution 3 as temporary fix
- Plan Solution 1 or 2 for permanent fix
Testing Plan:
- Load test with 10k notifications
- Monitor memory over 24 hours
- Verify cleanup runs correctly
- Check notification delivery still works
```
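Solution 1 from the report could be sketched roughly as follows. The class name mirrors the problematic code above; the interval values and helper names are illustrative, not the agent's actual output.
```javascript
// Hypothetical fix for the unbounded queue: timestamp each notification and
// periodically drop entries older than one hour so memory stays bounded.
const EventEmitter = require('events');

const MAX_AGE_MS = 60 * 60 * 1000;         // 1-hour retention (per Solution 1)
const CLEANUP_INTERVAL_MS = 5 * 60 * 1000; // sweep every 5 minutes (assumed)

class NotificationQueue extends EventEmitter {
  constructor() {
    super();
    this.queue = [];
    this.cleanupTimer = setInterval(() => this.cleanup(), CLEANUP_INTERVAL_MS);
    this.cleanupTimer.unref(); // don't keep the process alive just for cleanup
  }

  add(notification) {
    this.queue.push({ ...notification, addedAt: Date.now() });
    this.emit('notification', notification);
  }

  cleanup() {
    const cutoff = Date.now() - MAX_AGE_MS;
    this.queue = this.queue.filter((n) => n.addedAt >= cutoff);
  }

  stop() {
    clearInterval(this.cleanupTimer);
  }
}

module.exports = { NotificationQueue };
```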
**6. Additional Findings**
```
Secondary Issues Discovered:
1. WebSocket connections not cleaned on server restart
File: src/websocket/server.js:67
Impact: Low - minor memory leak
2. Logger buffering messages in memory
File: src/utils/logger.js:12
Impact: Medium - 10-20MB additional growth
3. No memory monitoring alerts configured
Impact: High - prevented early detection
Recommendations:
- Add memory usage monitoring
- Set alerts at 1.5GB threshold
- Review all event listener registrations
- Audit other potential unbounded data structures
```
**Full Report Saved**: `plans/debug/memory-leak-investigation-20251030.md`
## Common Use Cases
### Error Message Investigation
```bash
/debug [TypeError: Cannot read property 'user' of undefined at src/auth/login.js:23]
```
**Output**:
```
Root Cause:
- req.session not initialized
- Session middleware not configured for /auth/login route
- Middleware order issue in src/server.js
Fix: Move session middleware before auth routes
```
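The middleware-order fix described above amounts to registering the session middleware before any route that reads `req.session`. A minimal sketch, assuming `express-session` (which the example output does not name):
```javascript
// Hypothetical fix: session middleware must come before the auth routes,
// otherwise req.session is undefined when the login handler runs.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(
  session({
    secret: process.env.SESSION_SECRET, // assumed environment variable
    resave: false,
    saveUninitialized: false,
  })
);

// Only now mount the auth routes, so req.session exists inside them.
const authRoutes = require('./routes/auth'); // hypothetical path
app.use('/auth', authRoutes);
```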
### Test Failure Analysis
```bash
/debug [integration test "POST /api/orders" failing with 401 Unauthorized but should be 201]
```
**Output**:
```
Root Cause:
- Test not including authentication token
- Auth middleware added recently, test not updated
- Test data factory doesn't generate valid JWT
Fix: Update test to include authentication header
```
### Performance Degradation
```bash
/debug [API response time increased from 100ms to 3000ms after deploying caching layer]
```
**Output**:
```
Root Cause:
- Cache key generation is synchronous and expensive
- Using JSON.stringify on large objects
- Blocking event loop
Fix: Use hash-based cache keys instead of stringifying objects
```
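The suggested fix boils down to hashing a small, stable representation of the request instead of `JSON.stringify`-ing large objects on the hot path. A minimal sketch using Node's built-in `crypto` module (the key shape is assumed):
```javascript
// Hypothetical hash-based cache key: hash only the fields that identify the
// request rather than serializing whole objects synchronously.
const crypto = require('crypto');

function cacheKey(prefix, identifiers) {
  // "identifiers" is assumed to be a small object of primitive fields,
  // e.g. { userId, q, page }, not a full request or response object.
  const canonical = Object.keys(identifiers)
    .sort()
    .map((k) => `${k}=${identifiers[k]}`)
    .join('&');
  const digest = crypto.createHash('sha256').update(canonical).digest('hex');
  return `${prefix}:${digest}`;
}

// Example: cacheKey('search', { userId: 42, q: 'laptops', page: 3 })
// => 'search:<64-char sha256 hex digest>'
```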
### Unexpected Behavior
```bash
/debug [emails sending twice for password reset requests]
```
**Output**:
```
Root Cause:
- Password reset handler registered as event listener twice
- Duplicate registration in src/server.js lines 45 and 78
- Both execute on 'password-reset' event
Fix: Remove duplicate event listener registration
```
## Best Practices
### Provide Detailed Information
✅ **Good:**
```bash
/debug [
Production error:
- TypeError: Cannot read property 'email' of null
- File: src/services/email-sender.js:45
- Happens randomly ~5% of requests
- Started after deploy at 2025-10-30 14:00
- Error log: /logs/production-2025-10-30.log
]
```
❌ **Vague:**
```bash
/debug [something is broken]
/debug [it doesn't work]
/debug [fix the error]
```
### Include Context
Helpful details:
- Error messages and stack traces
- When issue started
- Frequency (always, sometimes, rarely)
- Environment (production, staging, dev)
- Recent changes or deployments
- Steps to reproduce
- Expected vs actual behavior
### Attach Evidence
```bash
/debug [
Issue: Database connection pool exhausted
Evidence:
- Error: "remaining connection slots reserved for non-replication superuser connections"
- Occurs during peak traffic (>1000 req/min)
- Pool config: max 20 connections
- Monitoring shows connections not released
- Screenshot: /tmp/db-connections-graph.png
]
```
## What /debug Does NOT Do
- ❌ Automatically implement fixes (that's for `/fix:*` commands)
- ❌ Modify code files
- ❌ Deploy fixes to production
- ❌ Run tests or validations
## After Debugging
Standard workflow:
```bash
# 1. Investigate issue
/debug [issue description]
# 2. Review diagnostic report
cat plans/debug/issue-investigation-YYYYMMDD.md
# 3. Decide on fix approach
# 4. Implement fix using appropriate command
/fix:fast [implement the recommended fix]
# or
/fix:hard [complex fix requiring multiple changes]
# or manually implement
# 5. Verify fix
/test
# 6. Commit
/git:cm
```
## Integration with Other Commands
### Debug β Fix
```bash
# 1. Diagnose
/debug [API rate limiting not working]
# Output: Missing Redis connection, middleware not applied
# 2. Fix based on diagnosis
/fix:fast [add Redis connection and apply rate limit middleware to routes]
```
### Scout β Debug
```bash
# 1. Find relevant files
/scout [find all email sending code] 3
# 2. Debug issue with context
/debug [emails not sending, relevant files in scout report]
```
### Debug β Ask
```bash
# 1. Debug finds architectural issue
/debug [database deadlocks during concurrent user updates]
# Output: Transaction isolation level causing conflicts
# 2. Ask for architectural guidance
/ask [best transaction isolation level for concurrent user updates in PostgreSQL?]
```
## Advanced Debugging
### With Log Files
```bash
/debug [
Issue: Intermittent 503 errors
Log file: /var/log/app/error.log
Relevant entries from 14:30-15:00
]
```
Debugger agent will analyze log patterns.
### Performance Profiling
```bash
/debug [
Performance issue: /api/search endpoint slow
Profiling data:
- 90th percentile: 2.3s
- 99th percentile: 5.8s
- Database query time: 1.8s
- CPU: 25%
- Memory: stable
]
```
Agent identifies database query as bottleneck.
### Concurrency Issues
```bash
/debug [
Race condition suspected:
- Sometimes user balance goes negative
- Only happens with concurrent requests
- Balance checks passing but still overdrawing
]
```
Agent traces execution paths to find race condition.
## Common Issues
### Insufficient Information
**Problem**: Debug report says "need more information"
**Solution**: Provide error messages, logs, or steps to reproduce
```bash
/debug [
Full error message: [paste here]
Stack trace: [paste here]
How to reproduce: [steps]
]
```
### Wrong Scope
**Problem**: Asking to debug and fix together
**Solution**: Separate diagnosis from fix
```bash
# Not: /debug [find and fix the bug]
# Instead:
/debug [diagnosis: why is the bug happening?]
# Then:
/fix:fast [implement the fix]
```
### Production Urgency
**Problem**: Need immediate fix, debug taking too long
**Solution**: Skip debug, go straight to fix with description
```bash
# If urgent:
/fix:fast [urgent: API returning 500, need immediate mitigation]
# Can debug later:
/debug [investigate root cause of API 500 errors]
```
## Related Commands
- [/fix:fast](/docs/commands/fix/fast) - Implement quick fixes after diagnosis
- [/fix:hard](/docs/commands/fix/hard) - Fix complex issues requiring planning
- [/fix:logs](/docs/commands/fix/logs) - Analyze and fix issues from log files
- [/scout](/docs/commands/core/scout) - Find relevant files before debugging
- [/ask](/docs/commands/core/ask) - Get architectural guidance on systemic issues
---
**Key Takeaway**: `/debug` provides thorough root cause analysis and clear explanations of technical issues, empowering you to make informed decisions about fixes without automatic code changes.
---
# /journal
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/journal
# /journal
Development documentation command. Creates structured journal entries documenting work sessions, code changes, technical decisions, and project memories.
## Syntax
```bash
/journal
```
## When to Use
- **End of Day**: Document work completed during the session
- **After Major Changes**: Record significant code changes and rationale
- **Decision Documentation**: Capture architectural and technical decisions
- **Knowledge Transfer**: Create memory logs for team handoffs
- **Session Summaries**: Track progress across multiple work sessions
## Quick Example
```bash
/journal
```
**Output**:
```
Exploring memories and recent code changes...
Journal Entry Created: journals/session-2025-10-30.md
## Session Summary
Date: 2025-10-30
Duration: 3.5 hours
Focus: Payment integration
## Work Completed
### Features Implemented
1. Stripe payment processing (src/payments/stripe.js)
- Payment intent creation
- Webhook handling for payment confirmation
- Refund processing
2. Payment API endpoints (src/routes/payments.js)
- POST /api/payments/create-intent
- POST /api/payments/webhook
- POST /api/payments/refund
### Tests Added
- tests/payments/stripe.test.js (12 tests)
- tests/integration/payment-flow.test.js (8 tests)
- Coverage: 94%
### Documentation Updated
- docs/api/payments.md - Payment API documentation
- docs/guides/payment-integration.md - Integration guide
## Technical Decisions
1. Chose Stripe over PayPal
- Better API design
- Stronger TypeScript support
- Lower transaction fees for our volume
2. Webhook security approach
- Signature verification using Stripe SDK
- Idempotency keys for duplicate prevention
- Retry logic with exponential backoff
## Challenges & Solutions
Challenge: Webhook testing in development
Solution: Used Stripe CLI for local webhook forwarding
Challenge: Handling partial refunds
Solution: Store refund history in database, check before processing
## Code Changes
Files Modified: 8
Files Created: 6
Tests Added: 20
Lines Changed: +742, -23
Key Files:
- src/payments/stripe.js (new, 234 lines)
- src/routes/payments.js (new, 123 lines)
- src/models/payment.js (modified, +45 lines)
- .env.example (modified, +3 lines)
## Next Steps
1. Deploy to staging for testing
2. Test webhook handling in staging environment
3. Add monitoring for failed payments
4. Implement payment notification emails
## Notes
- Stripe test keys configured in .env
- Webhook secret stored in config/secrets
- Production deployment requires PCI compliance review
```
**Result**: Comprehensive session documentation saved for future reference.
## Arguments
None required. Command automatically:
- Explores recent commits and changes
- Reads project memories
- Analyzes modified files
- Generates structured journal entry
## What Happens
When you run this command:
1. **Memory Exploration**: Delegates to `journal-writer` subagent
2. **Change Analysis**: Agent examines:
- Recent git commits
- Modified files (git diff)
- Created files
- Test additions
- Documentation updates
- Configuration changes
3. **Context Gathering**: Reviews:
- Project documentation in `./docs`
- Recent commands executed
- Agent reports in `./plans`
- Error logs if any
4. **Entry Generation**: Creates structured journal entry
5. **Save**: Writes to `journals/` directory with timestamp
**Agents Invoked**: `journal-writer` subagent
## Complete Example
### Scenario: Multi-Day Feature Development
After completing a complex feature over several work sessions.
```bash
/journal
```
**Generated Journal Entry**:
```markdown
# Development Journal Entry
Date: 2025-10-30
Session: 5 (of ongoing feature work)
Author: ClaudeKit Engineer
## Context
### Previous Sessions Recap
- Session 1-2: Real-time chat architecture and planning
- Session 3: WebSocket server implementation
- Session 4: Room management and broadcasting
- Session 5 (today): Message persistence and testing
### Current Sprint Goal
Implement real-time chat feature with message history, typing indicators, and read receipts
## Today's Work (Session 5)
### Features Completed
#### 1. Message Persistence
Location: src/models/message.js, src/services/message-store.js
Description:
- Messages saved to PostgreSQL database
- Indexes on room_id and created_at for fast retrieval
- Automatic cleanup of messages older than 90 days
Implementation Details:
- Used Prisma ORM for type-safe database access
- Batch inserts for performance (max 100 messages/batch)
- Connection pooling to handle high concurrent writes
Code Snippet:
src/services/message-store.js:34-56
async saveMessage(message) {
  return await this.prisma.message.create({
    data: {
      roomId: message.roomId,
      userId: message.userId,
      content: message.content,
      createdAt: new Date()
    }
  });
}
#### 2. Message History API
Location: src/routes/messages.js
Endpoints:
- GET /api/rooms/:roomId/messages - Retrieve message history
- GET /api/rooms/:roomId/messages?before=:timestamp - Pagination
- GET /api/rooms/:roomId/messages/unread - Unread count
Features:
- Cursor-based pagination (50 messages per page)
- Efficient unread message counting
- Permission checks (user must be in room)
Performance:
- 20ms average response time
- Supports 1000+ concurrent requests
- Index scan (not full table)
#### 3. Real-Time Delivery Confirmation
Location: src/websocket/handlers/message.js
Description:
- Server acknowledges message receipt
- Broadcasts to room members
- Sends delivery confirmation to sender
Flow:
1. Client sends message via WebSocket
2. Server persists to database
3. Server broadcasts to room
4. Server sends confirmation to sender
5. Recipients send read receipts
### Testing Completed
#### Unit Tests
File: tests/unit/message-store.test.js
Coverage: 96%
Tests: 18
Focus:
- Message creation validation
- Batch insert edge cases
- Query performance
- Error handling
#### Integration Tests
File: tests/integration/message-api.test.js
Coverage: 92%
Tests: 24
Focus:
- API endpoint functionality
- Pagination correctness
- Permission enforcement
- Error responses
#### End-to-End Tests
File: tests/e2e/chat-flow.test.js
Coverage: 88%
Tests: 12
Scenarios:
- Complete message send/receive flow
- Multiple users in room
- Message history retrieval
- Unread message counting
- WebSocket reconnection
Overall Test Results:
- 54 tests total
- All passing
- Total coverage: 91%
- No flaky tests
### Documentation
#### Updated
- docs/api/messages.md - Message API documentation
- docs/websocket.md - WebSocket event protocol
- docs/database-schema.md - Message table schema
#### Created
- docs/guides/chat-implementation.md - Chat feature guide
- docs/troubleshooting/websocket.md - Common issues and fixes
### Configuration & Infrastructure
#### Database Migration
File: prisma/migrations/20251030_add_messages/migration.sql
Changes:
- Created messages table
- Added indexes on roomId, userId, createdAt
- Foreign keys to rooms and users tables
#### Environment Variables
Added to .env.example:
- MESSAGE_RETENTION_DAYS=90
- MESSAGE_PAGE_SIZE=50
- WEBSOCKET_MESSAGE_MAX_LENGTH=5000
## Technical Decisions
### Decision 1: Message Storage Strategy
Options Considered:
A. Store all messages in PostgreSQL
B. Use Redis for recent messages, PostgreSQL for archive
C. Use separate message database (MongoDB)
Chosen: Option A (PostgreSQL only)
Rationale:
- Simpler architecture (single database)
- PostgreSQL handles our message volume (<1M messages/day)
- Excellent query performance with proper indexes
- ACID guarantees important for message integrity
- Team familiar with PostgreSQL
Trade-offs:
- May need to revisit at 10M+ messages/day
- Redis layer could reduce database load
- Plan migration path if needed
### Decision 2: Pagination Strategy
Options Considered:
A. Offset-based pagination (LIMIT/OFFSET)
B. Cursor-based pagination (timestamp)
C. Keyset pagination (composite key)
Chosen: Option B (Cursor-based)
Rationale:
- Consistent results even with new messages
- Better performance (no offset scan)
- Natural ordering by timestamp
- Works well with infinite scroll UI
Implementation:
- Use createdAt timestamp as cursor
- Client passes last message timestamp
- Query: WHERE createdAt < :cursor ORDER BY createdAt DESC LIMIT 50
### Decision 3: Real-Time Delivery Confirmation
Approach: Three-stage confirmation
1. Server ACK (message received)
2. Delivery confirmation (broadcast complete)
3. Read receipts (user saw message)
Rationale:
- Provides detailed delivery status
- Handles offline users gracefully
- Similar to WhatsApp/Signal UX
- Minimal additional complexity
## Challenges & Solutions
### Challenge 1: Race Condition in Unread Counts
Problem:
- Multiple WebSocket connections for same user
- Unread count inconsistent across devices
- Concurrent updates causing incorrect counts
Solution:
- Use database-level counters with atomic updates
- Last-read timestamp per user per room
- Calculate unread on-demand rather than storing count
- Broadcast count updates to all user devices
Code: src/services/unread-counter.js:23-45
Result: Eliminated race condition, consistent counts
### Challenge 2: WebSocket Message Ordering
Problem:
- Messages sometimes delivered out of order
- Multiple server instances (load balancing)
- Network latency variations
Solution:
- Add sequence numbers to messages
- Client reorders based on sequence
- Server assigns sequence atomically (Redis INCR)
- Fallback to timestamp if sequence missing
Code: src/websocket/sequencer.js:12-34
Result: Guaranteed message ordering
### Challenge 3: Large Message Handling
Problem:
- Some users sending very long messages
- Overwhelming WebSocket buffer
- Database performance impact
Solution:
- Enforce 5000 character limit
- Client-side validation with user feedback
- Server-side validation (reject oversized)
- Suggest splitting into multiple messages
Code: src/middleware/message-validator.js:8-20
Result: Better performance, clearer limits
## Code Quality
### Metrics
- Cyclomatic complexity: 4.2 (target <10)
- Function length: avg 12 lines (target <20)
- Test coverage: 91% (target >80%)
- No linting errors
### Code Review Notes
Self-review findings:
- ✓ Error handling comprehensive
- ✓ Input validation thorough
- ✓ Logging appropriate
- ⚠ Consider extracting message validation rules to config
- ⚠ Add JSDoc comments to public functions
### Refactoring Done
- Extracted message broadcasting logic to utility
- Created reusable pagination helper
- Consolidated error responses
## Performance
### Benchmarks
Test: 1000 concurrent users sending 10 messages/minute each
Results:
- Average message latency: 45ms
- 99th percentile latency: 120ms
- Database connections: 18/20 (peak)
- CPU usage: 35%
- Memory usage: 340MB (stable)
### Optimizations Applied
1. Message batching for database writes
2. Connection pooling (20 max connections)
3. Query result caching (5 second TTL)
4. Index on (roomId, createdAt) for pagination queries
## Security
### Implemented
- ✓ Input sanitization (XSS prevention)
- ✓ Room membership verification
- ✓ Rate limiting (30 messages/minute per user)
- ✓ Message content validation
- ✓ SQL injection prevention (parameterized queries)
### Pending
- [ ] End-to-end encryption (Phase 2)
- [ ] Message reporting/moderation (Phase 2)
- [ ] Audit logging for admin actions
## Known Issues
### Issue 1: Typing Indicators
Status: Not implemented yet
Reason: Deprioritized for launch
Plan: Implement in Phase 2
Effort: 4 hours estimated
### Issue 2: File Attachments
Status: Planned for Phase 2
Dependencies: Need S3 setup
Timeline: Next sprint
### Issue 3: Message Search
Status: Basic implementation only
Limitations: Searches message content, not performant at scale
Future: Consider Elasticsearch for full-text search
## Dependencies
### New Dependencies Added
- prisma@5.7.0 - ORM for database access
- @prisma/client@5.7.0 - Prisma client
- ws@8.14.2 - WebSocket library
### Updated Dependencies
- express@4.18.2 → 4.18.3 (security patch)
## Deployment Notes
### Staging Deployment
- [ ] Run database migration
- [ ] Update environment variables
- [ ] Deploy application
- [ ] Verify WebSocket connectivity
- [ ] Test message send/receive
- [ ] Monitor for 24 hours
### Production Checklist
- [ ] All staging tests passing
- [ ] Performance benchmarks met
- [ ] Security review completed
- [ ] Documentation updated
- [ ] Rollback plan documented
- [ ] Monitoring alerts configured
## Next Steps
### Immediate (Today)
1. Deploy to staging
2. Run smoke tests
3. Update team on progress
### Short-Term (This Week)
1. User acceptance testing on staging
2. Fix any bugs discovered
3. Prepare production deployment
4. Write deployment runbook
### Medium-Term (Next Sprint)
1. Implement typing indicators
2. Add file attachment support
3. Enhance message search
4. Add message reactions (emoji)
## Lessons Learned
1. **Plan for scale early**: Added indexes from start, avoided performance refactor
2. **Test WebSocket edge cases**: Reconnection, connection drops, concurrent connections
3. **Handle database migrations carefully**: Test migrations on production-like data volumes
4. **Write documentation concurrently**: Wrote docs as I implemented, not after
## References
- Original design doc: docs/design/chat-feature-design.md
- Architecture decision records: docs/adr/
- WebSocket protocol spec: docs/websocket-protocol.md
## Time Breakdown
- Implementation: 2.5 hours
- Testing: 1 hour
- Documentation: 0.5 hours
- Code review & cleanup: 0.5 hours
- Total: 4.5 hours
---
**End of Journal Entry**
```
**Report Saved**: `journals/session-2025-10-30-complete.md`
## Journal Entry Structure
Typical sections included:
### Automatically Generated
- Date and session metadata
- Code changes summary (files, lines, commits)
- Test coverage statistics
- Documentation updates
### Agent-Written
- Work completed description
- Technical decisions and rationale
- Challenges encountered and solutions
- Performance metrics
- Security considerations
- Known issues and future work
- Lessons learned
## Common Use Cases
### Daily Standup Preparation
```bash
# Before standup meeting
/journal
# Review journal entry
cat journals/session-latest.md
# Use in standup:
# - What I did: [from Work Completed]
# - Challenges: [from Challenges section]
# - Today's plan: [from Next Steps]
```
### Team Handoff
```bash
# End of your work session
/journal
# Share with team
# Journal provides complete context for next developer
```
### Sprint Retrospective
```bash
# Review week's journals
ls journals/
# Analyze:
# - Velocity (time breakdowns)
# - Common challenges
# - Technical debt accumulated
# - Decisions made
```
### Audit Trail
```bash
# For compliance or review
/journal
# Documents:
# - What changed
# - Why decisions made
# - Who made changes (git author)
# - When (timestamps)
```
## Best Practices
### Run Regularly
✅ **Good cadence:**
- End of each work session
- After completing major feature
- Before team handoffs
- After significant decisions
❌ **Too infrequent:**
- Once per sprint (loses detail)
- Only when asked (missing context)
### Commit Journals
```bash
# Create journal
/journal
# Add to git
git add journals/
# Commit with code
/git:cm
```
Benefits:
- Journals version-controlled with code
- Team can see decision history
- Future developers understand why
### Review Old Journals
```bash
# Before similar work
grep -r "authentication" journals/
# Understand past decisions
cat journals/session-2025-09-15.md
# Avoid repeating mistakes
```
## Generated Artifacts
Journals saved to:
```
journals/
├── session-2025-10-30.md
├── session-2025-10-29.md
├── session-2025-10-28.md
└── ...
```
Naming convention:
- `session-YYYY-MM-DD.md` - One per day
- `session-YYYY-MM-DD-HH-MM.md` - Multiple sessions per day
## Integration with Other Commands
### After /cook
```bash
# Implement feature
/cook [add payment processing]
# Document the work
/journal
# Commit everything
/git:cm
```
### With /watzup
```bash
# See what changed
/watzup
# Create detailed journal
/journal
# Both commands complement each other:
# - /watzup: Quick summary
# - /journal: Detailed documentation
```
### Before /git:pr
```bash
# Complete feature work
/cook [feature]
# Document
/journal
# Review changes
git diff main
# Create PR with journal as context
/git:pr feature-branch main
```
## What Journal Captures
### Code Changes
- Files modified/created/deleted
- Lines added/removed
- Commits made
- Branches worked on
### Technical Decisions
- Architectural choices
- Technology selections
- Trade-off analysis
- Alternatives considered
### Work Progress
- Features implemented
- Tests written
- Documentation updated
- Bugs fixed
### Challenges
- Problems encountered
- Solutions attempted
- Successful resolutions
- Open issues
### Future Work
- Next steps identified
- TODOs created
- Known limitations
- Enhancement ideas
## Common Issues
### Empty Journal
**Problem**: Journal has minimal content
**Cause**: No recent code changes or commits
**Solution**: Make commits before running journal
```bash
# Make changes
# ...
# Commit first
git add .
git commit -m "Implement feature"
# Then journal
/journal
```
### Too Much Detail
**Problem**: Journal extremely long
**Cause**: Many changes across multiple features
**Solution**: Journal more frequently
```bash
# After each major feature
/cook [feature 1]
/journal
/cook [feature 2]
/journal
```
### Missing Context
**Problem**: Journal doesn't explain why decisions made
**Solution**: Add comments in code explaining rationale
```javascript
// Using exponential backoff because linear retry
// caused thundering herd in production incident 2025-09
async function retryWithBackoff() { ... }
```
Journal will pick up these comments.
## Advanced Usage
### Custom Journal Sections
Add file: `.claude/journal-template.md`
```markdown
# Custom Section
## Business Value Delivered
[Automatically filled by agent]
## Customer Impact
[Automatically filled by agent]
```
### Integration with Task Tracking
Journal automatically references:
- GitHub issues (if mentioned in commits)
- JIRA tickets (if in commit messages)
- TODO comments in code
### Metrics Tracking
Journals include metrics over time:
- Code velocity (lines/day)
- Test coverage trend
- Bug fix rate
- Feature completion rate
## Related Commands
- [/watzup](/docs/commands/core/watzup) - Quick summary of recent changes
- [/cook](/docs/commands/core/cook) - Implement features (document with journal after)
- [/git:cm](/docs/commands/git/commit) - Commit changes (journal first for context)
---
**Key Takeaway**: `/journal` creates comprehensive, structured documentation of your development sessions, preserving technical decisions, challenges, and context for future reference and team collaboration.
---
# /watzup
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/watzup
# /watzup
Quick review command that analyzes your current branch, recent commits, and code changes to provide a comprehensive summary of all work done. Perfect for stand-ups, code reviews, or understanding recent activity.
## Syntax
```bash
/watzup
```
## What It Does
The `/watzup` command provides a comprehensive overview of:
1. **Current Branch Status**
- Active branch name
- Comparison with main/master branch
- Ahead/behind commit count
2. **Recent Commits**
- Commit messages
- Author information
- Timestamps
- Files affected
3. **Code Changes Analysis**
- Files modified, added, or removed
- Lines added/deleted
- Change patterns and themes
4. **Overall Impact Assessment**
- Features added
- Bugs fixed
- Refactoring done
- Breaking changes (if any)
5. **Quality Evaluation**
- Code organization
- Test coverage impact
- Documentation updates
- Potential concerns
**IMPORTANT**: This command does NOT start implementation. It only analyzes and reports.
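Under the hood, the branch-status numbers correspond to ordinary git queries. A minimal sketch of equivalent commands (assuming `main` is your base branch):
```bash
# Commits behind/ahead of the base branch (prints "behind<TAB>ahead")
git rev-list --left-right --count main...HEAD

# Last 5 commits with the files each one touched
git log -5 --stat --oneline
```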
## Quick Example
```bash
/watzup
```
**Output:**
```
Analyzing current branch and recent changes...
## Branch Status
Current: feature/payment-integration
Base: main
Status: 5 commits ahead, up to date
## Recent Commits (Last 5)
1. feat: Add Stripe payment processing
Author: You
Time: 2 hours ago
Files: 8 changed (+420, -15)
2. test: Add payment integration tests
Author: You
Time: 1 hour ago
Files: 3 changed (+180, -0)
3. docs: Update payment API documentation
Author: You
Time: 30 minutes ago
Files: 2 changed (+65, -10)
4. fix: Handle payment webhook errors
Author: You
Time: 15 minutes ago
Files: 2 changed (+25, -8)
5. chore: Update Stripe SDK to v12
Author: You
Time: 10 minutes ago
Files: 1 changed (+2, -2)
## Code Changes Summary
### Files Modified (14 total)
- src/payments/stripe.ts (new, +234 lines)
- src/routes/payment-routes.ts (+85 lines)
- src/models/payment.model.ts (+45 lines)
- tests/payments/*.test.ts (new, +180 lines)
- docs/api/payments.md (+55 lines)
- package.json (+2, -2)
### Changes by Category
Features:
- Payment processing with Stripe
- Webhook handling for payment events
- Refund processing API
Tests:
- Payment creation tests (12 tests)
- Webhook handling tests (8 tests)
- Integration tests (5 tests)
- Coverage: 94%
Documentation:
- Payment API endpoints
- Webhook setup guide
- Error handling docs
## Impact Analysis
### Positive Changes
✅ Complete payment integration
✅ Comprehensive test coverage (94%)
✅ Well-documented API
✅ Error handling implemented
✅ No breaking changes
### Quality Assessment
✅ Code follows project conventions
✅ Proper error handling
✅ TypeScript types complete
✅ Tests cover edge cases
### Potential Concerns
⚠️ Webhook signature verification needs production testing
⚠️ Rate limiting not yet implemented
⚠️ Monitoring/alerting for failed payments pending
## Summary
Implemented complete Stripe payment integration including:
- Payment intent creation and processing
- Webhook event handling with signature verification
- Refund processing capability
- 25 new tests with 94% coverage
- Comprehensive API documentation
Ready for: Code review and staging deployment
Next steps: Production testing, monitoring setup, rate limiting
Time invested: ~4 hours
Complexity: Medium-High
Quality: High
```
## When to Use
### Daily Stand-ups
```bash
# Before stand-up meeting
/watzup
# Use output for:
# - What I did yesterday
# - Current progress
# - Any blockers
```
### Code Review Preparation
```bash
# Before requesting review
/watzup
# Share summary with reviewers
# Helps them understand scope and context
```
### Team Handoffs
```bash
# End of day handoff
/watzup
# Provides complete context for next developer
```
### Sprint Reviews
```bash
# End of sprint
/watzup
# Shows all accomplishments
# Demonstrates progress
```
### Mid-Task Check-in
```bash
# Check progress on large feature
/watzup
# See what's been done
# Plan remaining work
```
### PR Description
```bash
# Before creating pull request
/watzup
# Use summary for PR description
# Then: /git:pr [branch] [base]
```
## Complete Example
### Scenario: Multi-Day Feature Work
You've been working on a real-time chat feature for several days. Time to check progress.
```bash
/watzup
```
**Detailed Output:**
```markdown
# Work Summary: feature/realtime-chat
## Branch Information
Current Branch: feature/realtime-chat
Base Branch: main
Status: 12 commits ahead of main
Last Updated: 5 minutes ago
Branch Age: 3 days
## Commit History (Last 12 commits)
### Day 1: Foundation (4 commits)
1. feat: Add WebSocket server setup
- Time: 3 days ago
- Files: 5 (+320, -0)
- New: src/websocket/server.ts, config/websocket.ts
2. feat: Implement room management
- Time: 3 days ago
- Files: 4 (+180, -0)
- New: src/models/room.ts, src/services/room-manager.ts
3. feat: Add user connection handling
- Time: 3 days ago
- Files: 3 (+145, -0)
- New: src/websocket/connection-handler.ts
4. test: Add WebSocket connection tests
- Time: 3 days ago
- Files: 2 (+95, -0)
- New: tests/websocket/*.test.ts
### Day 2: Core Features (5 commits)
5. feat: Implement message broadcasting
- Time: 2 days ago
- Files: 3 (+165, -12)
- Modified: src/websocket/message-handler.ts
6. feat: Add message persistence to database
- Time: 2 days ago
- Files: 4 (+220, -0)
- New: src/models/message.ts, migrations/add_messages_table.sql
7. feat: Implement typing indicators
- Time: 2 days ago
- Files: 2 (+85, -5)
- Modified: src/websocket/events.ts
8. test: Add message handling tests
- Time: 2 days ago
- Files: 3 (+140, -0)
- New: tests/messages/*.test.ts
9. docs: Document WebSocket protocol
- Time: 2 days ago
- Files: 2 (+120, -0)
- New: docs/websocket-protocol.md
### Day 3: Polish & Testing (3 commits)
10. fix: Handle disconnection edge cases
- Time: 1 day ago
- Files: 3 (+45, -15)
- Modified: src/websocket/connection-handler.ts
11. feat: Add unread message counters
- Time: 1 day ago
- Files: 4 (+95, -8)
- New: src/services/unread-counter.ts
12. test: Add integration tests for chat flow
- Time: 1 day ago
- Files: 2 (+180, -0)
- New: tests/integration/chat-flow.test.ts
## Comprehensive Code Changes
### New Files Created (15 files)
Core Implementation:
- src/websocket/server.ts (234 lines)
- src/websocket/connection-handler.ts (178 lines)
- src/websocket/message-handler.ts (245 lines)
- src/websocket/events.ts (156 lines)
- src/models/room.ts (89 lines)
- src/models/message.ts (112 lines)
- src/services/room-manager.ts (198 lines)
- src/services/unread-counter.ts (87 lines)
Configuration:
- config/websocket.ts (45 lines)
Database:
- migrations/20251030_add_messages_table.sql (23 lines)
- migrations/20251030_add_rooms_table.sql (18 lines)
Tests:
- tests/websocket/connection.test.ts (142 lines)
- tests/messages/persistence.test.ts (165 lines)
- tests/integration/chat-flow.test.ts (223 lines)
Documentation:
- docs/websocket-protocol.md (187 lines)
### Modified Files (8 files)
- src/routes/index.ts (+12, -2)
- src/app.ts (+25, -5)
- package.json (+5, -1)
- .env.example (+3, -0)
- tsconfig.json (+1, -0)
- docs/api/index.md (+45, -8)
- README.md (+15, -3)
- tests/setup.ts (+8, -2)
### Statistics
- Total Files Changed: 23
- New Files: 15
- Modified Files: 8
- Lines Added: 2,547
- Lines Removed: 44
- Net Change: +2,503 lines
## Feature Analysis
### Major Features Implemented
#### 1. WebSocket Server Infrastructure
Status: ✅ Complete
Components:
- Connection management
- Room-based message routing
- Automatic reconnection handling
- Connection pooling
#### 2. Real-Time Messaging
Status: ✅ Complete
Features:
- Message broadcasting within rooms
- Delivery confirmation
- Message persistence to database
- Message history retrieval
#### 3. Room Management
Status: ✅ Complete
Capabilities:
- Create/join/leave rooms
- Room member tracking
- Permission management
- Room metadata
#### 4. Typing Indicators
Status: ✅ Complete
Functionality:
- Real-time typing status
- Automatic timeout
- Per-room indicators
#### 5. Unread Counters
Status: ✅ Complete
Features:
- Per-room unread counts
- Atomic counter updates
- Real-time synchronization
## Testing Coverage
### Unit Tests
Files: 8 test files
Tests: 42 tests
Coverage: 94%
Status: ✅ All passing
Key test areas:
- WebSocket connection handling (12 tests)
- Message broadcasting (10 tests)
- Room management (8 tests)
- Typing indicators (5 tests)
- Unread counters (7 tests)
### Integration Tests
Files: 2 test files
Tests: 12 tests
Coverage: 89%
Status: ✅ All passing
Scenarios covered:
- Complete message flow (4 tests)
- Multi-user rooms (3 tests)
- Reconnection handling (3 tests)
- Error scenarios (2 tests)
### Overall Test Metrics
Total Tests: 54
Passing: 54 (100%)
Coverage: 92%
Test Time: 8.3 seconds
## Documentation Updates
### New Documentation
- docs/websocket-protocol.md - Complete WebSocket event protocol
- WebSocket setup guide in README
- API documentation for message endpoints
### Updated Documentation
- README.md - Added WebSocket feature section
- docs/api/index.md - Added message API endpoints
- .env.example - Added WebSocket configuration
## Quality Assessment
### Code Quality
✅ Excellent
- Consistent TypeScript usage
- Proper error handling throughout
- Clean separation of concerns
- Well-structured modules
### Architecture
✅ Solid
- Event-driven design
- Scalable room management
- Efficient message routing
- Database optimization with indexes
### Test Quality
✅ Comprehensive
- High coverage (92%)
- Edge cases tested
- Integration scenarios covered
- No flaky tests
### Documentation
✅ Well-documented
- Clear protocol specification
- Setup instructions complete
- API endpoints documented
- Code comments thorough
## Potential Issues & Concerns
### Performance Considerations
⚠️ May need attention:
- WebSocket connection limit (currently 1000)
- Message throughput not benchmarked
- Database query optimization needed at scale
- Memory usage with many concurrent rooms
### Security Considerations
⚠️ Needs review:
- WebSocket authentication mechanism
- Message content sanitization
- Rate limiting not implemented
- Room access control validation
### Missing Features (Future Work)
Planned but not implemented:
- File attachments in messages
- Message reactions (emoji)
- Message search functionality
- End-to-end encryption
## Impact on Codebase
### Dependencies Added
- ws@8.14.2 - WebSocket library
- socket.io-adapter@2.5.2 - Room adapter
### Configuration Changes
- Added WebSocket server port (3001)
- Added CORS configuration for WebSocket
- Database migrations for messages and rooms
### Breaking Changes
⚠️ None - All additions, no modifications to existing APIs
## Overall Assessment
### Summary
Implemented a complete real-time chat system with WebSocket support over 3 days. The implementation includes robust message handling, room management, typing indicators, and unread message tracking. Test coverage is excellent at 92%, and documentation is comprehensive.
### Strengths
1. Comprehensive feature set
2. Excellent test coverage
3. Well-documented protocol
4. Clean, maintainable code
5. No breaking changes
### Areas for Improvement
1. Need performance benchmarking
2. Security review required
3. Rate limiting implementation
4. Monitoring and alerting setup
### Readiness Assessment
✅ Ready for: Code review
✅ Ready for: Staging deployment
⚠️ Not ready for: Production (needs security review, performance testing)
### Recommended Next Steps
#### Immediate (Today)
1. Request code review
2. Deploy to staging
3. Run smoke tests
#### Short-term (This Week)
1. Performance benchmarking
2. Security review
3. Add rate limiting
4. Setup monitoring
#### Medium-term (Next Sprint)
1. File attachments
2. Message search
3. Message reactions
4. Enhanced security (E2E encryption)
## Time & Effort Analysis
Total Commits: 12
Days Active: 3
Estimated Effort: 24-28 hours
Complexity: High
Value Delivered: High
Breakdown:
- Day 1: Foundation & Architecture (8 hours)
- Day 2: Core Features & Testing (10 hours)
- Day 3: Polish & Integration (6 hours)
---
**End of Summary**
Generated: 2025-11-13 14:30:00
Branch: feature/realtime-chat
Commits Analyzed: 12
```
## Output Sections Explained
### 1. Branch Status
- Current vs base branch
- Commits ahead/behind
- Last update time
- Branch age
### 2. Commit History
- Recent commits (default: last 10)
- Organized by day or theme
- Files affected per commit
- Author and timestamp
### 3. Code Changes
- New files created
- Modified files
- Files deleted
- Line count statistics
### 4. Feature Analysis
- Major features implemented
- Feature status and completion
- Components affected
### 5. Testing Coverage
- Test files and count
- Coverage percentage
- Test status (passing/failing)
### 6. Documentation
- New docs created
- Updated documentation
- README changes
### 7. Quality Assessment
- Code quality evaluation
- Architecture assessment
- Test quality review
- Documentation completeness
### 8. Concerns & Issues
- Performance considerations
- Security concerns
- Missing features
- Technical debt
### 9. Impact Analysis
- Dependencies changed
- Configuration updates
- Breaking changes
- Migration requirements
### 10. Recommendations
- Next steps
- Priority actions
- Timeline suggestions
## Best Practices
### Run Before Key Events
```bash
# Before stand-up
/watzup
# Before code review request
/watzup
# Before PR creation
/watzup
# End of day/sprint
/watzup
```
### Compare with Previous State
```bash
# See what changed today
git log --since="1 day ago"
/watzup
# Compare to main
git diff main
/watzup
```
### Share with Team
```bash
# Generate summary
/watzup > work-summary.md
# Share in Slack/Teams
cat work-summary.md
```
## Common Use Cases
### Daily Stand-up
```bash
/watzup
# Answer:
# - What did I do?
# - What am I doing today?
# - Any blockers?
```
### Code Review Request
```bash
# See full scope
/watzup
# Use summary in review request
# Helps reviewers understand changes
```
### Sprint Demo
```bash
/watzup
# Show accomplishments
# Demonstrate progress
# Discuss next steps
```
### Knowledge Transfer
```bash
# Before vacation/handoff
/watzup
# Provides complete context
# Documents decisions
# Lists pending work
```
### Progress Check
```bash
# Mid-feature development
/watzup
# Assess progress
# Plan remaining work
# Identify blockers
```
## Integration with Other Commands
### With /journal
```bash
# Quick summary
/watzup
# Detailed documentation
/journal
# /watzup: Overview
# /journal: Deep dive with context
```
### With /git:pr
```bash
# Analyze changes
/watzup
# Create PR with summary
/git:pr feature-branch main
```
### With /git:cm
```bash
# See uncommitted changes
git status
# Review all work
/watzup
# Commit
/git:cm
```
## Customization
### Focus on Specific Time Range
```bash
# Last 24 hours
git log --since="1 day ago"
/watzup
# Last week
git log --since="1 week ago"
/watzup
```
### Include Specific Files
```bash
# See changes to specific area
git log -- src/payments/
/watzup
```
## Limitations
### What /watzup Does NOT Do
❌ Does not start implementation
❌ Does not modify code
❌ Does not create commits
❌ Does not deploy code
❌ Does not run tests
✅ Only analyzes and reports
### When NOT to Use
**Before Starting Work:**
```bash
❌ /watzup
✅ /plan [feature]
```
**When Implementing:**
```bash
❌ /watzup
✅ /cook [feature]
```
**When Fixing Bugs:**
```bash
❌ /watzup
✅ /fix [issue]
```
## Related Commands
- [/journal](/docs/commands/core/journal) - Detailed work documentation
- [/git:cm](/docs/commands/git/commit) - Commit changes
- [/git:pr](/docs/commands/git/pull-request) - Create pull request
- [/cook](/docs/commands/core/cook) - Implement features
- [/plan](/docs/commands/core/plan) - Plan implementations
---
**Key Takeaway**: `/watzup` gives you instant visibility into recent work - perfect for stand-ups, code reviews, and understanding what's been done. It's analysis-only and never modifies your code.
---
# /code
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/code
# /code
Plan execution command. Implements a structured 6-step workflow with mandatory quality gates, automated testing, and code review.
## Syntax
```bash
/code [plan]
```
## When to Use
- **Executing Plans**: After `/plan` creates an implementation plan
- **Resuming Work**: Continue incomplete plan phases
- **Quality-Gated Development**: When mandatory testing and review are required
- **Tracked Implementation**: When tasks need TodoWrite tracking
- **Auto-Commit Workflow**: When you want automatic git commits on success
## Quick Example
```bash
/code @plans/251128-auth-integration/plan.md
```
**Output**:
```
✅ Step 0: Auth Integration - Phase 02
✅ Step 1: Found 5 tasks across 1 phase - Ambiguities: none
✅ Step 2: Implemented 3 files - [5/5] tasks complete
✅ Step 3: Tests [12/12 passed] - All requirements met
✅ Step 4: Code reviewed - [0] critical issues
⏸ Step 5: WAITING for user approval
Phase implementation complete. All tests pass, code reviewed. Approve changes?
```
**Result**: Plan executed with quality gates, ready for approval.
## Arguments
- `[plan]`: Path to plan file (optional). If empty, auto-detects latest plan in `./plans` directory
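If you want to see which plan would be picked up, something like the following (illustrative only; the command uses its own detection logic) lists the most recently modified plan file:
```bash
# Most recently modified plan.md under ./plans
ls -t plans/*/plan.md 2>/dev/null | head -1
```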
## What Happens
The `/code` command executes a 6-step workflow with strict enforcement:
### Step 0: Plan Detection
- If plan path provided: Uses that plan
- If empty: Finds latest `plan.md` in `./plans` directory
- Auto-selects next incomplete phase (prefers IN_PROGRESS, then earliest Planned)
### Step 1: Analysis & Task Extraction
- Reads plan file completely
- Maps dependencies between tasks
- Lists ambiguities or blockers
- Identifies required skills/tools
- Initializes TodoWrite with extracted tasks
**Output**: `✅ Step 1: Found [N] tasks across [M] phases - Ambiguities: [list or "none"]`
### Step 2: Implementation
- Executes tasks step-by-step per plan requirements
- Marks tasks complete in TodoWrite as done
- For UI work: Calls `ui-ux-designer` subagent
- For images: Uses `ai-multimodal` skill
- Runs type checking and compilation to verify no syntax errors
**Output**: `✅ Step 2: Implemented [N] files - [X/Y] tasks complete`
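The type-check pass at the end of Step 2 is the usual compiler dry run; in a TypeScript project that amounts to something like the following (assuming `tsc` and a build script exist in your project):
```bash
# Verify the code compiles without emitting files
npx tsc --noEmit

# Optionally run the project build as a second check
npm run build
```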
### Step 3: Testing (BLOCKING GATE)
- Calls `tester` subagent to run test suite
- If ANY tests fail:
- Calls `debugger` subagent to analyze failures
- Fixes all issues
- Re-runs tests
- Repeats until 100% pass
**Testing Standards**:
- Unit tests: May use mocks for external dependencies
- Integration tests: Use test environment
- E2E tests: Use real but isolated data
- Forbidden: Commenting out tests, changing assertions, or adding TODO/FIXME comments to defer fixes
**Output**: `✅ Step 3: Tests [X/X passed] - All requirements met`
**Validation**: If X ≠ total, Step 3 is INCOMPLETE - workflow stops.
### Step 4: Code Review (BLOCKING GATE)
- Calls `code-reviewer` subagent for comprehensive review
- Checks: Security, performance, architecture, YAGNI/KISS/DRY
- If critical issues found:
- Fixes all issues
- Re-runs `tester` to verify
- Re-runs `code-reviewer`
- Repeats until no critical issues
**Critical Issues** (must be 0):
- Security vulnerabilities (XSS, SQL injection, OWASP)
- Performance bottlenecks
- Architectural violations
- Principle violations (YAGNI, KISS, DRY)
**Output**: `✅ Step 4: Code reviewed - [0] critical issues`
### Step 5: User Approval (BLOCKING GATE)
- Presents summary (3-5 bullets):
- What was implemented
- Test results
- Code review outcome
- Asks explicitly: "Phase implementation complete. Approve changes?"
- **WAITS** for user response - does not proceed without approval
**Output**: `⏸ Step 5: WAITING for user approval`
### Step 6: Finalize
After user approval:
1. **Status Update** (parallel):
- `project-manager` subagent: Updates plan status, marks phase DONE
- `docs-manager` subagent: Updates documentation for changed files
2. **Onboarding Check**: Detects requirements (API keys, env vars, config)
3. **Auto-Commit** (if all conditions met):
- User approved
- Tests passed
- Review passed
- Auto-stages, commits with message, pushes
**Output**: `✅ Step 6: Finalize - Status updated - Git committed`
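The auto-commit in Step 6 boils down to the standard git sequence; an illustrative sketch (the real commit message is generated from the phase summary):
```bash
# Stage, commit, and push once approval, tests, and review have all passed
git add -A
git commit -m "feat: <generated summary of the completed phase>"
git push
```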
## Output Format
Each step produces a status marker:
```
✅ Step 0: [Plan Name] - [Phase Name]
✅ Step 1: Found [N] tasks across [M] phases - Ambiguities: [list]
✅ Step 2: Implemented [N] files - [X/Y] tasks complete
✅ Step 3: Tests [X/X passed] - All requirements met
✅ Step 4: Code reviewed - [0] critical issues
✅ Step 5: User approved - Ready to complete
✅ Step 6: Finalize - Status updated - Git committed
```
If any "β Step N:" output is missing, that step is INCOMPLETE.
## Complete Example
### Scenario: API Endpoint Implementation
```bash
/code @plans/251128-user-api/phase-02-endpoints.md
```
**Step-by-Step Execution**:
```
✅ Step 0: User API - Phase 02 Endpoints
Reading plan: /plans/251128-user-api/plan.md
Auto-selected: Phase 02 (status: Planned)
✅ Step 1: Found 4 tasks across 1 phase - Ambiguities: none
Tasks extracted:
- Step 2.1: Create GET /users endpoint
- Step 2.2: Create POST /users endpoint
- Step 2.3: Add input validation
- Step 2.4: Write integration tests
✅ Step 2: Implemented 3 files - [4/4] tasks complete
Files changed:
- src/routes/users.ts (created)
- src/validators/userSchema.ts (created)
- tests/routes/users.test.ts (created)
Running tester subagent...
✅ Step 3: Tests [8/8 passed] - All requirements met
Test Results:
- GET /users: 4 tests passed
- POST /users: 4 tests passed
Running code-reviewer subagent...
✅ Step 4: Code reviewed - [0] critical issues
Review Summary:
- Security: Input validation present
- Performance: Pagination implemented
- Architecture: Follows existing patterns
⏸ Step 5: WAITING for user approval
Summary:
- Created user API endpoints (GET, POST)
- Added Zod validation schemas
- Wrote 8 integration tests
- All tests passing
- No critical issues found
Phase implementation complete. All tests pass, code reviewed. Approve changes?
```
**User**: "Approved"
```
✅ Step 5: User approved - Ready to complete
Running project-manager subagent...
Running docs-manager subagent...
Auto-committing changes...
✅ Step 6: Finalize - Status updated - Git committed
Commit: feat(api): add user endpoints with validation
Branch: kai/user-api
Files: 3 changed, 245 insertions
```
## Enforcement Rules
1. **No Step Skipping**: Each step must complete before the next
2. **TodoWrite Required**: All tasks tracked through TodoWrite
3. **Blocking Gates**:
- Step 3: Tests must be 100% passing
- Step 4: Critical issues must be 0
- Step 5: User must explicitly approve
4. **Mandatory Subagents**:
- Step 3: `tester`
- Step 4: `code-reviewer`
- Step 6: `project-manager` AND `docs-manager`
5. **One Phase Per Run**: Command focuses on single plan phase only
## Subagents Invoked
| Step | Subagent | Purpose |
|------|----------|---------|
| 2 | ui-ux-designer | UI implementation (when needed) |
| 3 | tester | Run test suite |
| 3 | debugger | Analyze test failures |
| 4 | code-reviewer | Quality and security review |
| 6 | project-manager | Update plan status |
| 6 | docs-manager | Update documentation |
## Common Issues
### Missing Plan
**Problem**: No plan found in `./plans` directory
**Solution**: Create a plan first
```bash
/plan [implement user authentication]
/code
```
### Test Failures
**Problem**: Step 3 stuck with failing tests
**What Happens**: Debugger called automatically, fixes attempted, re-run
```
Step 3: Tests [6/8 passed] - 2 failures
Invoking debugger subagent...
Fixing: src/routes/users.ts:45 - missing null check
Re-running tests...
```
### Critical Issues Found
**Problem**: Step 4 finds security vulnerabilities
**Solution**: Auto-fixed and re-verified
```
Step 4: Code reviewed - [2] critical issues
- XSS vulnerability in user input
- Missing rate limiting
Fixing issues...
Re-running tester...
Re-running code-reviewer...
✅ Step 4: Code reviewed - [0] critical issues
```
### Approval Timeout
**Problem**: Workflow paused at Step 5
**Solution**: Respond with approval to continue
```bash
# User response
Approved
```
## Integration with Workflow
### Standard Development Flow
```bash
# 1. Create implementation plan
/plan [add payment processing]
# 2. Execute plan
/code
# 3. Continue with next phase
/code
```
### Resuming Incomplete Work
```bash
# Check current plan status
cat plans/current-plan/plan.md
# Resume from where stopped
/code @plans/current-plan/plan.md
```
### Specifying Phase
```bash
# Execute specific phase
/code @plans/251128-api/phase-03-testing.md
```
## Related Commands
- [/plan](/docs/commands/core/plan) - Create implementation plan before `/code`
- [/cook](/docs/commands/core/cook) - Step-by-step implementation without plan structure
- [/fix](/docs/commands/fix) - Fix issues without full workflow
- [/brainstorm](/docs/commands/core/brainstorm) - Evaluate approaches before planning
---
**Key Takeaway**: `/code` executes implementation plans with mandatory quality gates (tests, code review, user approval), ensuring production-ready code with automatic documentation and git commits.
---
# /brainstorm
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/brainstorm
# /brainstorm
Elite brainstorming command for technical solutions. Engages multi-agent collaboration to evaluate approaches against engineering principles, producing documented recommendations.
## Syntax
```bash
/brainstorm [question]
```
## When to Use
- **Architecture Decisions**: Choosing between design patterns or system architectures
- **Technology Selection**: Evaluating frameworks, libraries, or tools
- **Design Challenges**: Solving complex problems with multiple valid approaches
- **Risk Assessment**: Identifying trade-offs before committing to implementation
- **Team Alignment**: Getting structured analysis for stakeholder discussions
## Quick Example
```bash
/brainstorm [should we use Redis or PostgreSQL for session storage?]
```
**Output**:
```
Discovery Phase:
- Current user load: 10k DAU
- Session data: ~2KB per user
- Existing infrastructure: PostgreSQL primary
Analysis Against Principles:
YAGNI: PostgreSQL sessions sufficient for current scale
KISS: Single database reduces operational complexity
DRY: Reuse existing connection pooling
Recommendation:
PostgreSQL sessions with redis cache layer when >50k DAU
Report: plans/reports/brainstorm-251129-session-storage.md
```
**Result**: Structured analysis saved to markdown report.
## Arguments
- `[question]`: Technical question or challenge requiring multi-perspective analysis (required)
## Core Principles
Every brainstorm evaluates solutions against three principles:
### YAGNI (You Aren't Gonna Need It)
Don't build for hypothetical future requirements. If a simpler solution works now:
- Avoid premature optimization
- Reject features "just in case"
- Question abstractions without immediate need
### KISS (Keep It Simple, Stupid)
Favor straightforward solutions over clever ones:
- Fewer moving parts = fewer failure points
- Readable code over compressed code
- Standard patterns over custom implementations
### DRY (Don't Repeat Yourself)
Eliminate meaningful duplication:
- Extract repeated logic into functions
- Centralize configuration
- But: Don't DRY too early (three occurrences first)
## What Happens
### 6-Phase Process
**Phase 1: Discovery**
Clarifies requirements and constraints:
- What problem are we solving?
- What constraints exist? (budget, timeline, team skills)
- What are the success criteria?
- What's already been tried?
**Phase 2: Research**
Gathers information from multiple sources:
- Project docs (system-architecture.md, code-standards.md)
- External APIs via MCP tools
- Documentation lookups via docs-seeker
- Codebase analysis via scout
**Phase 3: Analysis**
Evaluates each approach against principles:
- YAGNI: Does this add unnecessary complexity?
- KISS: Is there a simpler approach?
- DRY: Does this create duplication?
- Additionally: Security, performance, maintainability
**Phase 4: Debate**
Challenges assumptions and preferences:
- Devil's advocate for each option
- Surface hidden trade-offs
- Question "obvious" choices
- Consider edge cases and failure modes
**Phase 5: Consensus**
Builds alignment on recommendation:
- Synthesizes perspectives
- Ranks options with rationale
- Identifies non-negotiables
- Notes acceptable trade-offs
**Phase 6: Documentation**
Creates comprehensive markdown report:
- Problem statement
- Options considered
- Analysis against principles
- Recommendation with rationale
- Risks and mitigations
- Success metrics
## Expertise Areas
Brainstorming draws on multiple perspectives:
| Area | Focus |
|------|-------|
| Architecture | System design, component boundaries, interfaces |
| Risk | Failure modes, security implications, tech debt |
| Development | Time estimates, implementation complexity |
| UX/DX | User experience, developer experience, ergonomics |
| Performance | Scalability, latency, resource utilization |
| Operations | Deployment, monitoring, maintenance burden |
## Collaboration Tools
The brainstorm process may invoke:
- **planner**: Structure analysis and create recommendations
- **docs-manager**: Access and update project documentation
- **searchapi MCP**: Research external solutions and patterns
- **docs-seeker**: Look up framework documentation
- **ai-multimodal**: Analyze diagrams or visual references
- **psql**: Query database schema for context
## Output Location
Reports are saved to markdown files:
**With active plan**:
```
{active-plan}/reports/brainstorm-YYMMDD-{topic}.md
```
**Without active plan**:
```
plans/reports/brainstorm-YYMMDD-{topic}.md
```
## Report Structure
Generated reports include:
```markdown
# Brainstorm: {Topic}
**Date**: YYMMDD
**Question**: {Original question}
## Problem Statement
{Clarified problem and constraints}
## Options Considered
### Option A: {Name}
- Description
- Pros
- Cons
- Principle alignment: YAGNI ✓ KISS ✓ DRY ✓
### Option B: {Name}
...
## Recommendation
{Selected option with rationale}
## Risks & Mitigations
- Risk 1: {description} → Mitigation: {approach}
...
## Success Metrics
- {Measurable criteria for success}
## Next Steps
1. {Action item}
...
```
## Complete Example
### Scenario: Choosing State Management
```bash
/brainstorm [what state management should we use for a React dashboard with real-time updates?]
```
**Phase Outputs**:
**Discovery**:
```
Requirements clarified:
- Dashboard with 20+ widgets
- Real-time data via WebSocket
- User preferences persistence
- Team: 3 devs, familiar with React hooks
Constraints:
- No Redux experience on team
- Performance critical (60fps animations)
- Must support offline mode
```
**Research**:
```
Options identified:
- Redux Toolkit + RTK Query
- Zustand
- Jotai
- React Query + Context
- Recoil
External sources consulted:
- State management benchmarks 2024
- React documentation on state patterns
- Real-time dashboard case studies
```
**Analysis**:
```
YAGNI Analysis:
- Redux: Overkill for 20 widgets, adds boilerplate
- Zustand: Minimal API, no unnecessary abstractions
- React Query: Built-in caching eliminates manual sync
KISS Analysis:
- Zustand: ~10 lines for basic store
- Redux: Requires slices, actions, selectors
- Jotai: Atomic model adds cognitive load
DRY Analysis:
- React Query: Centralizes data fetching logic
- All options support shared state
```
**Debate**:
```
Devil's Advocate Points:
- Zustand: Less ecosystem than Redux
Counter: Ecosystem not needed for dashboard scope
- React Query: Not for WebSocket
Counter: Can integrate with WS for cache invalidation
- No Redux: Team won't learn industry standard
Counter: Modern React prefers lighter solutions
```
**Recommendation**:
```
Zustand + React Query
Rationale:
- Zustand: UI state (widget positions, preferences)
- React Query: Server state (API data, caching)
- WebSocket: Triggers React Query invalidation
Principle alignment:
- YAGNI: No unused Redux infrastructure
- KISS: Two focused tools vs one complex one
- DRY: React Query eliminates fetch boilerplate
```
**Report saved**: `plans/reports/brainstorm-251129-state-management.md`
## Common Use Cases
### Architecture Evaluation
```bash
/brainstorm [monorepo with Turborepo vs separate repos for 5 services?]
```
Gets analysis on build complexity, code sharing, and deployment trade-offs.
### Technology Selection
```bash
/brainstorm [Prisma vs Drizzle vs raw SQL for new Node.js API?]
```
Receives comparison based on team experience, query complexity, and type safety needs.
### Design Challenge
```bash
/brainstorm [how to handle file uploads: direct to S3 vs through API server?]
```
Gets security, cost, and complexity analysis for each approach.
### Migration Strategy
```bash
/brainstorm [gradual migration from Express to Fastify or full rewrite?]
```
Receives risk analysis and phased approach recommendations.
## What /brainstorm DOES NOT Do
- ❌ Implement code (use `/cook` or `/code`)
- ❌ Fix bugs (use `/debug` or `/fix:*`)
- ❌ Make final decisions (you decide, it advises)
- ❌ Skip documentation (always produces report)
## Best Practices
### Provide Context
Include constraints in your question:
```bash
/brainstorm [
Authentication approach for:
- 50k users, 5k concurrent
- Team knows JWT basics
- Must support SSO later
- Budget: startup (minimize vendor costs)
]
```
### Ask Comparative Questions
✅ **Good**:
```bash
/brainstorm [PostgreSQL vs MongoDB for user-generated content with nested comments?]
/brainstorm [SSR vs SSG vs ISR for documentation site with daily updates?]
```
❌ **Too Vague**:
```bash
/brainstorm [what database should I use?]
/brainstorm [how to build this?]
```
### Review Report Before Acting
The markdown report contains nuanced analysis not shown in terminal output:
```bash
# After brainstorm
cat plans/reports/brainstorm-251129-{topic}.md
```
## Integration with Workflow
### Before Planning
```bash
# 1. Brainstorm approach
/brainstorm [best auth strategy for multi-tenant SaaS?]
# 2. Review report, discuss with team
cat plans/reports/brainstorm-251129-auth-strategy.md
# 3. Create implementation plan
/plan [implement JWT auth with tenant isolation]
# 4. Execute
/code
```
### During Architecture Review
```bash
# 1. Reviewing PR with complex changes
git diff main
# 2. Question the approach
/brainstorm [is this service abstraction worth the complexity?]
# 3. Adjust based on recommendation
```
## Related Commands
- [/ask](/docs/commands/core/ask) - Quick architectural questions without full report
- [/plan](/docs/commands/core/plan) - Create implementation plan after brainstorming
- [/code](/docs/commands/core/code) - Execute plans with quality gates
- [/cook](/docs/commands/core/cook) - Implement features step-by-step
---
**Key Takeaway**: `/brainstorm` provides structured multi-perspective analysis on technical decisions, documenting recommendations against YAGNI, KISS, and DRY principles. It advises - you decide.
---
# /use-mcp
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/use-mcp
# /use-mcp
Execute Model Context Protocol (MCP) operations via Gemini CLI to preserve context budget.
## Syntax
```bash
/use-mcp [task]
```
## How It Works
1. **Primary**: Execute via Gemini CLI (stdin pipe):
```bash
echo "$ARGUMENTS. Return JSON only per GEMINI.md instructions." | gemini -y -m gemini-2.5-flash
```
2. **Fallback**: If Gemini CLI unavailable, use `mcp-manager` subagent
## Key Rules
- **MUST use stdin piping** - deprecated `-p` flag skips MCP initialization
- Use `-y` flag to auto-approve tool execution
- `GEMINI.md` auto-loaded from project root
- Response format: `{"server":"name","tool":"name","success":true,"result":,"error":null}`
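A minimal end-to-end sketch, assuming `jq` is installed (the task text is just an example), of piping a task via stdin and checking the `success` flag in the JSON response:
```bash
# Pipe the task via stdin and capture the JSON response
response=$(echo "List the tables in the production database via the psql MCP server. Return JSON only per GEMINI.md instructions." \
  | gemini -y -m gemini-2.5-flash)

# Fail fast if the MCP call did not succeed
echo "$response" | jq -e '.success' > /dev/null \
  || echo "MCP call failed: $(echo "$response" | jq -r '.error')"
```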
## Anti-Pattern
```bash
# ❌ BROKEN - deprecated -p flag skips MCP connections!
gemini -y -m gemini-2.5-flash -p "..."
# ✅ CORRECT
echo "your task" | gemini -y -m gemini-2.5-flash
```
## When to Use
- Browser automation via MCP
- External tool integrations
- Database operations via MCP
- Any task requiring MCP server tools
---
**Key Takeaway**: Use `/use-mcp` to execute MCP operations through Gemini CLI with stdin piping, preserving Claude's context budget.
---
# /ck-help
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/ck-help
# /ck-help
All-in-one ClaudeKit guide for discovering commands and workflows.
## Syntax
```bash
/ck-help [category|command|task description]
```
## Examples
```bash
# Overview of all commands
/ck-help
# Explore a category
/ck-help fix
# Task-based search
/ck-help add login page
```
## Response Patterns
| Input | Response |
|-------|----------|
| Empty | Full overview of commands |
| Category | Category workflow with commands |
| Task description | Recommended commands for task |
## Correct Workflows
- `/plan` → `/code`: Plan first, then execute the plan
- `/cook`: Standalone - plans internally, no separate `/plan` needed
- **NEVER** `/plan` → `/cook` (cook has its own planning)
## Example Interaction
```bash
/ck-help fix
# Response: Here's the fixing workflow:
# - /fix:fast - Quick fix for simple issues
# - /fix:hard - Deep analysis for complex bugs
# - /fix:test - Fix test failures
# - /fix:types - Fix TypeScript errors
# - /fix:ci - Fix CI/CD failures
```
---
**Key Takeaway**: Use `/ck-help` to discover ClaudeKit commands, explore categories, or find commands for specific tasks.
---
# /test
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/test
# /test
Run tests locally and analyze the summary report.
## Syntax
```bash
/test
```
## How It Works
Uses `tester` subagent to:
1. Run the test suite
2. Analyze results
3. Report summary
**Important**: Does not implement fixes - only reports test status.
## Example Output
```
Test Results:
✅ 45 tests passed
❌ 3 tests failed
- auth/login.test.js: Expected 200, got 401
- api/users.test.js: Timeout error
- db/migrations.test.js: Connection refused
Coverage: 78%
```
## Related Commands
| Command | Use Case |
|---------|----------|
| `/fix:test` | Fix failing tests |
| `/cook` | Implement with auto-testing |
| `/code` | Implement plan with testing |
---
**Key Takeaway**: Use `/test` to run the test suite and get a summary report without implementing fixes.
---
# /code:no-test
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/code-no-test
# /code:no-test
Implement a plan without the testing phase.
## Syntax
```bash
/code:no-test [plan]
```
## Workflow Steps
| Step | Description | Output |
|------|-------------|--------|
| Step 0 | Plan detection & phase selection | `✅ Step 0: [Plan] - [Phase]` |
| Step 1 | Analysis & task extraction | `✅ Step 1: Found [N] tasks` |
| Step 2 | Implementation | `✅ Step 2: Implemented [N] files` |
| Step 3 | Code review | `✅ Step 3: Code reviewed - [0] critical issues` |
| Step 4 | User approval (blocking) | `✅ Step 4: User approved` |
| Step 5 | Finalize (status, docs, commit) | `✅ Step 5: Finalize complete` |
## Key Differences from /code
| Feature | /code | /code:no-test |
|---------|-------|---------------|
| Testing | Yes | No |
| Code Review | Yes | Yes |
| User Approval | Yes | Yes |
## When to Use
- Rapid prototyping
- Non-critical code changes
- When tests will be added later
- Time-constrained iterations
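Usage mirrors `/code`; for example (hypothetical plan path):
```bash
# Implement a phase without the testing gate
/code:no-test @plans/251201-landing-page/plan.md
```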
## Mandatory Subagents
- Step 3: `code-reviewer`
- Step 5: `project-manager`, `docs-manager`
## Blocking Gates
- Step 3: Critical issues must be 0
- Step 4: User must explicitly approve
---
**Key Takeaway**: Use `/code:no-test` for faster implementation cycles when testing can be deferred.
---
# /bootstrap:auto
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/bootstrap-auto
# /bootstrap:auto
Comprehensive automatic project bootstrapping with full research, design, and implementation.
## Syntax
```bash
/bootstrap:auto [user-requirements]
```
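For example (hypothetical requirements):
```bash
/bootstrap:auto [habit tracker with email auth, daily reminders, and a simple analytics dashboard]
```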
## Workflow Phases
### 1. Git Initialization
- Check/initialize Git repository
- Use `main` branch
### 2. Research
- Multiple `researcher` subagents explore in parallel
- Idea validation, challenges, solutions
- Reports ≤150 lines each
### 3. Tech Stack
- `planner` + `researcher` subagents find best fit
- Document in `./docs` directory
### 4. Wireframe & Design
- `ui-ux-designer` + `researcher` subagents
- Design guidelines at `./docs/design-guidelines.md`
- Wireframes in HTML at `./docs/wireframe/`
- Logo generation if needed
- Screenshots saved to `./docs/wireframes/`
- User approval required
### 5. Implementation
- Follow plan in `./plans` directory
- UI implementation per design guidelines
- Asset generation with `ai-multimodal`
- Type checking and compilation
### 6. Testing
- Real tests (no fake data)
- `tester` + `debugger` subagents
- Repeat until all pass
### 7. Code Review
- `code-reviewer` subagent
- Fix critical issues
- Re-test after fixes
### 8. Documentation
- `docs-manager` creates/updates:
- README.md
- project-overview-pdr.md
- code-standards.md
- system-architecture.md
- Project roadmap at `./docs/project-roadmap.md`
### 9. Onboarding
- Step-by-step user guidance
- API key configuration
- Environment setup
### 10. Final Report
- Summary of changes
- Optional git commit/push
## Comparison
| Command | Speed | Research | Design |
|---------|-------|----------|--------|
| `/bootstrap` | Medium | Yes | Yes |
| `/bootstrap:auto` | Slow | Full | Full |
| `/bootstrap:auto:fast` | Fast | Parallel | Yes |
| `/bootstrap:auto:parallel` | Medium | Parallel | Yes |
---
**Key Takeaway**: Use `/bootstrap:auto` for comprehensive project bootstrapping with full research, design system, testing, and documentation.
---
# /bootstrap:auto:fast
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/bootstrap-auto-fast
# /bootstrap:auto:fast
Faster automatic project bootstrapping with parallel research phases.
## Syntax
```bash
/bootstrap:auto:fast [user-requirements]
```
## Key Differences from /bootstrap:auto
| Aspect | /bootstrap:auto | /bootstrap:auto:fast |
|--------|-----------------|----------------------|
| Research | Sequential | 6 agents in parallel |
| Sources | Unlimited | Max 5 per researcher |
| Final commit | User choice (push) | Local only (no push) |
| User approval | Multiple points | Fewer checkpoints |
## Workflow Phases
### 1. Git Initialization
- Initialize Git with `main` branch
### 2. Research & Planning (Parallel)
- 2 researchers: User request, idea validation
- 2 researchers: Tech stack selection
- 2 researchers: Design planning
- All limited to max 5 sources each
### 3. Planning (Sequential)
- `ui-ux-designer`: Design guidelines & wireframes
- Logo generation if needed
- `planner`: Implementation plan
### 4. Implementation
- Follow plan step by step
- UI per design guidelines
- Asset generation and processing
### 5. Testing
- Real tests, no fake data
- Debug and fix until all pass
### 6. Code Review
- Fix critical issues
- Re-test after fixes
### 7. Documentation
- README.md, PDR, code-standards, architecture
- Project roadmap
### 8. Final Report
- Local commits only (no push)
- Summary of changes
### 9. Onboarding
- Step-by-step configuration help
## When to Use
- Faster project setup needed
- Requirements are clear
- Standard tech stack expected
## When to Use /bootstrap:auto Instead
- Complex requirements needing deep research
- Unfamiliar domain
- Need comprehensive documentation
---
**Key Takeaway**: Use `/bootstrap:auto:fast` for quicker project bootstrapping with parallel research and streamlined workflow.
---
# /cook:auto
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/cook-auto
# /cook:auto
Implement a feature automatically with planning, coding, and optional commit.
## Syntax
```bash
/cook:auto [tasks]
```
## Workflow
1. **Plan**: `/plan` creates the implementation plan
2. **Code**: `/code` implements the plan
3. **Commit**: Asks whether to commit; if yes, triggers `/git:cm`
## Example
```bash
/cook:auto [add user authentication with JWT]
# Workflow:
# 1. Creates plan at plans/YYYYMMDD-HHmm-auth/
# 2. Implements authentication
# 3. Runs tests
# 4. Code review
# 5. Asks: "Want to commit?" β /git:cm
```
## Comparison
| Command | Planning | Testing | Commit |
|---------|----------|---------|--------|
| `/cook` | Internal | Yes | No |
| `/cook:auto` | `/plan` | Yes | Ask user |
| `/cook:auto:fast` | `/plan:fast` | Yes | No |
## When to Use
- Feature needs explicit plan
- Want plan saved for reference
- Need user approval before commit
## When to Use /cook Instead
- Simple features
- Don't need saved plan
- Faster iteration
---
**Key Takeaway**: Use `/cook:auto` when you want explicit planning with saved plan files before implementation.
---
# /cook:auto:fast
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/cook-auto-fast
# /cook:auto:fast
Fast feature implementation with scouting and quick planning (no research).
## Syntax
```bash
/cook:auto:fast [tasks-or-prompt]
```
## Workflow
1. **Scout**: `scout` subagent finds related resources in codebase
2. **Plan**: `/plan:fast` creates plan (no research)
3. **Implement**: `/code "skip code review step"` implements the plan
## Key Characteristics
- No external research
- Uses existing codebase knowledge
- Skips code review step
- Fastest `/cook` variant
## Comparison
| Command | Research | Scout | Code Review |
|---------|----------|-------|-------------|
| `/cook` | If needed | If needed | Yes |
| `/cook:auto` | Yes | If needed | Yes |
| `/cook:auto:fast` | No | Yes | No |
## When to Use
- Task is well-understood
- Codebase patterns are established
- Quick iteration needed
- Trust existing code quality
## When NOT to Use
- Complex integrations
- Security-critical code
- Production-facing changes
- New architecture patterns
## Example
```bash
/cook:auto:fast [add pagination to user list API]
# Workflow:
# 1. Scout finds existing pagination patterns
# 2. Quick plan based on codebase
# 3. Implement without review
```
---
**Key Takeaway**: Use `/cook:auto:fast` for rapid development when you trust the codebase and need minimal overhead.
---
# /bootstrap:auto:parallel
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/bootstrap-auto-parallel
# /bootstrap:auto:parallel
Parallel project bootstrapping with multi-agent orchestration. Creates complete projects from requirements using researcher, planner, designer, and implementation agents working in parallel execution waves.
## Syntax
```bash
/bootstrap:auto:parallel [user-requirements]
```
## When to Use
- **New Project Kickoff**: Starting projects from scratch
- **Proof of Concept**: Rapid prototype development
- **Microservices**: Scaffolding multiple services
- **Full Automation**: When approval gates aren't needed
- **Speed Priority**: Maximum parallelization
## Quick Example
```bash
/bootstrap:auto:parallel [build a task management app with user auth, real-time updates, and mobile-responsive UI]
```
**Output**:
```
Starting parallel bootstrap...
Wave 1 (Parallel):
├─ [researcher] Tech stack research...
└─ [ui-designer] Design system...
Wave 2:
└─ [planner] Architecture planning...
Wave 3 (Parallel):
├─ [fullstack-dev 1] Auth module...
├─ [fullstack-dev 2] Task CRUD...
└─ [fullstack-dev 3] Real-time updates...
Wave 4:
├─ [tester] Integration tests...
└─ [docs-manager] Documentation...
Project delivered: task-manager/
```
## Arguments
- `[user-requirements]`: Natural language description of desired project (required)
## What It Does
### 10-Step Workflow
**Step 1: Requirements Analysis**
```
Parsing requirements...
Features identified:
- User authentication
- Task management (CRUD)
- Real-time updates
- Mobile-responsive UI
Constraints:
- Modern stack (inferred)
- Production-ready (inferred)
```
**Step 2: Tech Stack Research**
```
[researcher] Analyzing tech options...
Recommended stack:
- Frontend: Next.js 14 + TypeScript
- Backend: Node.js + PostgreSQL
- Real-time: WebSockets
- Auth: Better Auth
```
**Step 3: Architecture Planning**
```
[planner] Creating architecture...
Architecture: Modular monolith
- /app - Next.js routes
- /lib - Business logic
- /components - UI components
- /api - API routes
```
**Step 4: UI/UX Design (parallel with 2-3)**
```
[ui-designer] Creating design system...
Design tokens:
- Colors, typography, spacing
- Component patterns
- Responsive breakpoints
```
**Step 5: Parallel Implementation Plan**
```
Creating /plan:parallel plan...
Phases identified:
- Phase 1: Auth (no deps)
- Phase 2: Task CRUD (no deps)
- Phase 3: Real-time (depends on 2)
- Phase 4: UI polish (depends on 1,2,3)
```
**Step 6: Phase Dependency Resolution**
```
Building execution waves...
Wave 1: Phase 1 + Phase 2 (parallel)
Wave 2: Phase 3 (sequential)
Wave 3: Phase 4 (sequential)
```
**Step 7: Parallel Execution**
```
Launching fullstack-developer agents...
[Agent 1] Phase 1: Auth module
[Agent 2] Phase 2: Task CRUD
Progress:
[██████████] Agent 1: Complete (8 min)
[████████░░] Agent 2: 80% (9 min)
```
**Step 8: Integration Testing**
```
[tester] Running integration tests...
Tests: 24/24 passed
Coverage: 78%
```
**Step 9: Documentation**
```
[docs-manager] Generating docs...
Created:
- README.md
- API documentation
- Development guide
```
**Step 10: Project Delivery**
```
Project complete!
Location: task-manager/
```
## Parallel Execution Waves
```
Wave 1 (Parallel):
├── researcher: Tech stack research
└── ui-designer: Design system
    │
    ▼
Wave 2 (Sequential):
└── planner: Architecture planning
    │
    ▼
Wave 3 (Parallel):
├── fullstack-dev 1: Auth module
├── fullstack-dev 2: Task CRUD
└── fullstack-dev 3: Real-time
    │
    ▼
Wave 4 (Sequential):
├── tester: Integration tests
└── docs-manager: Documentation
```
## Agents Invoked
| Agent | Role | Wave |
|-------|------|------|
| researcher | Tech stack research | 1 |
| ui-designer | Design system | 1 |
| planner | Architecture planning | 2 |
| fullstack-developer (x3) | Parallel implementation | 3 |
| tester | Integration testing | 4 |
| docs-manager | Documentation | 4 |
## Output Structure
```
{project-name}/
├── src/
│   ├── app/
│   ├── components/
│   ├── lib/
│   └── styles/
├── tests/
│   ├── unit/
│   └── integration/
├── docs/
│   ├── api.md
│   └── development.md
├── plans/
│   └── bootstrap-YYMMDD/
│       ├── plan.md
│       ├── research-report.md
│       └── design-spec.md
├── README.md
├── package.json
└── tsconfig.json
```
## Complete Example
### Scenario: E-commerce Platform
```bash
/bootstrap:auto:parallel [build e-commerce platform with product catalog, shopping cart, checkout, and admin dashboard]
```
**Execution**:
```
═══════════════════════════════════════
PARALLEL BOOTSTRAP
═══════════════════════════════════════
Requirements:
- Product catalog
- Shopping cart
- Checkout flow
- Admin dashboard
Wave 1: Research + Design (Parallel)
─────────────────────────────────────
[researcher] Tech stack analysis...
✓ Recommended: Next.js + Stripe + PostgreSQL
[ui-designer] Design system...
✓ E-commerce patterns: Product cards, cart, checkout
Wave 2: Architecture
─────────────────────────────────────
[planner] Creating architecture...
✓ Modular e-commerce structure
✓ 4 independent modules identified
Wave 3: Implementation (Parallel)
─────────────────────────────────────
[fullstack-dev 1] Product catalog...
[fullstack-dev 2] Shopping cart...
[fullstack-dev 3] Checkout + Stripe...
[fullstack-dev 4] Admin dashboard...
Progress:
[██████████] Agent 1: Complete (12 min)
[██████████] Agent 2: Complete (8 min)
[██████████] Agent 3: Complete (15 min)
[██████████] Agent 4: Complete (14 min)
Wave 4: Testing + Docs
─────────────────────────────────────
[tester] Integration tests...
✓ 42 tests passed
[docs-manager] Documentation...
✓ README, API docs, admin guide
═══════════════════════════════════════
PROJECT COMPLETE
═══════════════════════════════════════
Location: ecommerce-platform/
Files: 87 files created
Tests: 42/42 passed
Coverage: 75%
Time: 18 minutes (vs ~45 min sequential)
Next steps:
1. cd ecommerce-platform
2. npm install
3. npm run dev
═══════════════════════════════════════
```
## Use Cases
### Full-Stack Application
```bash
/bootstrap:auto:parallel [SaaS dashboard with user management, billing, and analytics]
```
### Microservices Architecture
```bash
/bootstrap:auto:parallel [microservices: auth-service, user-service, notification-service with shared API gateway]
```
### API-First Project
```bash
/bootstrap:auto:parallel [REST API with OpenAPI spec, JWT auth, rate limiting, and PostgreSQL]
```
## Comparison
| Command | Approval | Parallelization | Speed |
|---------|----------|-----------------|-------|
| /bootstrap | Required | No | Slowest |
| /bootstrap:auto | No | No | Medium |
| /bootstrap:auto:parallel | No | Yes | Fastest |
## Best Practices
### Detailed Requirements
```bash
# Good: Specific requirements
/bootstrap:auto:parallel [
Task management app with:
- User auth (email + Google OAuth)
- Project organization
- Real-time collaboration
- Mobile-responsive
- Dark mode support
]
# Less effective: Vague
/bootstrap:auto:parallel [build a task app]
```
### Check Output Structure
After bootstrap:
```bash
cd {project-name}
ls -la
cat README.md
```
## Related Commands
- [/bootstrap](/docs/commands/core/bootstrap) - Bootstrap with approval gates
- [/bootstrap:auto](/docs/commands/core/bootstrap-auto) - Auto bootstrap (sequential)
- [/plan:parallel](/docs/commands/plan/parallel) - Create parallel plans
- [/code:parallel](/docs/commands/core/code-parallel) - Execute parallel plans
---
**Key Takeaway**: `/bootstrap:auto:parallel` creates complete projects using parallel agent execution waves, significantly reducing project setup time through coordinated multi-agent orchestration.
---
# /cook:auto:parallel
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/cook-auto-parallel
# /cook:auto:parallel
Feature implementation with parallel execution. Creates a parallel plan and launches multiple fullstack-developer agents to implement independent phases simultaneously.
## Syntax
```bash
/cook:auto:parallel [tasks]
```
## When to Use
- **Multi-Module Features**: Features spanning multiple independent areas
- **Speed Priority**: When faster implementation matters
- **Independent Components**: Features with clear module boundaries
- **Full Automation**: No approval gates needed
## Quick Example
```bash
/cook:auto:parallel [implement user authentication and payment processing]
```
**Output**:
```
Analyzing feature requirements...
Creating parallel plan...
Phases identified:
- Phase 1: Auth module (no deps)
- Phase 2: Payment module (no deps)
- Phase 3: Integration (depends on 1,2)
Launching parallel agents...
Wave 1 (Parallel):
[Agent 1] Auth module... ████████████ Complete (6 min)
[Agent 2] Payment module... ████████████ Complete (8 min)
Wave 2 (Sequential):
[Agent 3] Integration... ████████████ Complete (4 min)
Tests: 18/18 passed
Feature complete!
```
## Arguments
- `[tasks]`: Feature description or task list (required)
## What It Does
### Step 1: Analyze Requirements
```
Parsing feature: auth + payment processing
Components identified:
- Authentication (signup, login, sessions)
- Payment (Stripe integration, webhooks)
- Integration (protected checkout)
```
### Step 2: Create Parallel Plan
Invokes `/plan:parallel`:
```
Creating parallel plan...
## Dependency Graph
Phase 1: Auth (no deps)
Phase 2: Payment (no deps)
Phase 3: Integration (depends on: 1, 2)
## File Ownership Matrix
Phase 1: src/auth/**, src/middleware/auth.ts
Phase 2: src/payment/**, src/api/webhooks/**
Phase 3: src/checkout/**, src/pages/checkout.tsx
```
### Step 3: Parse Dependency Graph
```
Parsing dependencies...
Wave 1: Phase 1, Phase 2 (parallel)
Wave 2: Phase 3 (after Wave 1)
```
### Step 4: Execute Parallel Phases
```
Launching fullstack-developer agents...
[Agent 1] Starting Phase 1: Auth module
[Agent 2] Starting Phase 2: Payment module
File ownership enforced:
- Agent 1: src/auth/**
- Agent 2: src/payment/**
```
### Step 5: Monitor Progress
```
Progress:
[████████████] Agent 1: Complete (6 min)
[██████████░░] Agent 2: 85% (7 min)
Wave 1: 1/2 complete
```
### Step 6: Integrate Results
```
Wave 1 complete.
Executing Wave 2...
[Agent 3] Starting Phase 3: Integration
```
### Step 7: Run Tests
```
Running tests...
Auth tests: 8/8 passed
Payment tests: 6/6 passed
Integration tests: 4/4 passed
Total: 18/18 passed
```
### Step 8: Generate Summary
```
═══════════════════════════════════════
FEATURE COMPLETE
═══════════════════════════════════════
Feature: Auth + Payment Processing
Phases Completed:
✓ Phase 1: Auth module
✓ Phase 2: Payment module
✓ Phase 3: Integration
Files Changed: 24
Tests: 18/18 passed
Time: 14 minutes (vs ~25 min sequential)
Ready for PR.
═══════════════════════════════════════
```
## Dependency Management
### Reading Dependency Graph
From plan.md:
```markdown
## Dependency Graph
Phase 1: Auth Module (no deps)
Phase 2: Payment Module (no deps)
Phase 3: Integration (depends on: Phase 1, Phase 2)
Phase 4: E2E Tests (depends on: Phase 3)
```
### Building Execution Waves
```
Wave Analysis:
Phase 1: No dependencies β Wave 1
Phase 2: No dependencies β Wave 1
Phase 3: Depends on 1,2 β Wave 2
Phase 4: Depends on 3 β Wave 3
Execution Order:
Wave 1: Phase 1 + Phase 2 (parallel)
Wave 2: Phase 3 (after Wave 1)
Wave 3: Phase 4 (after Wave 2)
```
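Conceptually this is a topological grouping: a phase joins the earliest wave in which all of its dependencies are already complete. A small illustrative sketch (hypothetical, not the command's actual implementation; assumes no dependency cycles beyond the guard shown):
```bash
#!/usr/bin/env bash
# Hypothetical wave builder: entries are "phase:dep1,dep2"; empty deps = independent.
phases=("phase1:" "phase2:" "phase3:phase1,phase2" "phase4:phase3")

declare -A deps
for entry in "${phases[@]}"; do
  deps["${entry%%:*}"]="${entry#*:}"
done

completed=" "   # space-delimited set of finished phases
wave=1
while ((${#deps[@]} > 0)); do
  ready=()
  for p in "${!deps[@]}"; do
    ok=1
    IFS=',' read -ra ds <<<"${deps[$p]}"
    for d in "${ds[@]}"; do
      # A phase is not ready if any dependency is still pending
      [[ -n "$d" && "$completed" != *" $d "* ]] && ok=0
    done
    ((ok)) && ready+=("$p")
  done
  if ((${#ready[@]} == 0)); then
    echo "Unresolvable dependencies (cycle?)" >&2
    break
  fi
  echo "Wave $wave (parallel): ${ready[*]}"
  for p in "${ready[@]}"; do
    unset "deps[$p]"
    completed+="$p "
  done
  ((wave++))
done
```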
## Agent Coordination
### File Ownership
Each agent gets exclusive file access:
```
Agent 1 (Auth):
βββ src/auth/**
βββ src/middleware/auth.ts
βββ tests/auth/**
Agent 2 (Payment):
βββ src/payment/**
βββ src/api/webhooks/**
βββ tests/payment/**
No overlap = No conflicts
```
### Timeout and Failure
```
Agent configuration:
- Timeout: 15 minutes per agent
- Failure handling: Failed agents don't block others
If Agent 2 fails:
- Agent 1 continues to completion
- Wave 2 starts with Phase 3
- Failure reported in summary
```
## Complete Example
### Scenario: Multi-Module Feature
```bash
/cook:auto:parallel [implement user dashboard with profile, settings, notifications, and activity feed]
```
**Execution**:
```
Analyzing feature...
Components:
- User profile (view, edit)
- Settings (preferences, security)
- Notifications (list, read/unread)
- Activity feed (timeline)
Creating parallel plan...
## Dependency Graph
Phase 1: Profile module (no deps)
Phase 2: Settings module (no deps)
Phase 3: Notifications (no deps)
Phase 4: Activity feed (no deps)
Phase 5: Dashboard integration (depends on: 1,2,3,4)
## File Ownership Matrix
Phase 1: src/profile/**
Phase 2: src/settings/**
Phase 3: src/notifications/**
Phase 4: src/activity/**
Phase 5: src/dashboard/**, src/pages/dashboard.tsx
Launching agents...
Wave 1 (4 parallel agents):
[Agent 1] Profile... ββββββββββββ Complete (5 min)
[Agent 2] Settings... ββββββββββββ Complete (6 min)
[Agent 3] Notifications... ββββββββββββ Complete (4 min)
[Agent 4] Activity feed... ββββββββββββ Complete (7 min)
Wave 2 (sequential):
[Agent 5] Dashboard integration... ββββββββββββ Complete (5 min)
Running tests...
All phases: 32/32 tests passed
βββββββββββββββββββββββββββββββββββββββ
COMPLETE
βββββββββββββββββββββββββββββββββββββββ
Time: 12 minutes
Sequential estimate: ~27 minutes
Speedup: 2.25x
Files changed: 38
Tests passed: 32
Ready for review!
βββββββββββββββββββββββββββββββββββββββ
```
## Progress Tracking
TodoWrite integration:
```
Todo List:
[ββββββββββββ] Phase 1: Profile - Complete
[ββββββββββββ] Phase 2: Settings - Complete
[ββββββββββββ] Phase 3: Notifications - Complete
[ββββββββββββ] Phase 4: Activity - Complete
[ββββββββββββ] Phase 5: Dashboard - 90%
```
## Best Practices
### Define Clear Boundaries
```bash
# Good: Clear module boundaries
/cook:auto:parallel [
implement:
1. User authentication (email, OAuth)
2. Payment processing (Stripe)
3. Email notifications (SendGrid)
]
# Challenging: Overlapping concerns
/cook:auto:parallel [fix auth bugs and update payment UI]
```
### Check Plan First
If unsure about parallelization:
```bash
# Create plan first to review
/plan:parallel [your feature]
# Review plan.md for dependencies
cat plans/*/plan.md
# Then execute
/code:parallel
```
## Related Commands
- [/cook](/docs/commands/core/cook) - Step-by-step implementation
- [/cook:auto](/docs/commands/core/cook-auto) - Auto cook (sequential)
- [/plan:parallel](/docs/commands/plan/parallel) - Create parallel plans
- [/code:parallel](/docs/commands/core/code-parallel) - Execute existing parallel plans
---
**Key Takeaway**: `/cook:auto:parallel` accelerates feature implementation by running independent phases in parallel, using file ownership to prevent conflicts between concurrent agents.
---
# /scout:ext
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/scout-ext
# /scout:ext
External tool-powered codebase scouting. Uses Gemini CLI, Opencode, and Explore agents for enhanced search capabilities, especially for large codebases that exceed standard context limits.
## Syntax
```bash
/scout:ext [user-prompt] [scale]
```
## When to Use
- **Large Codebases**: Projects exceeding standard context windows
- **Semantic Search**: When you need AI-powered understanding
- **Complex Queries**: Multi-faceted questions about codebase
- **Parallel Exploration**: When thoroughness matters more than speed
## Quick Example
```bash
/scout:ext [find all authentication implementations] 5
```
**Output**:
```
Analyzing scale: 5 (Gemini CLI + Explore)
Dispatching tools...
β Gemini CLI: Loading codebase context (1.2M tokens)
β Explore Agent 1: src/auth/**
β Explore Agent 2: src/middleware/**
β Explore Agent 3: src/api/auth/**
Progress:
[ββββββββββ] Gemini CLI: Complete (45s)
[ββββββββββ] Explore 1: Complete (12s)
[ββββββββββ] Explore 2: Complete (8s)
[ββββββββββ] Explore 3: Complete (15s)
Aggregating results...
Report: plans/reports/scout-ext-251129.md
```
## Arguments
- `[user-prompt]`: What to search for (required)
- `[scale]`: Search thoroughness 1-10 (optional, default: 3)
## Tool Selection by Scale
| Scale | Tools Used | Context Size | Best For |
|-------|------------|--------------|----------|
| 1-2 | Explore agents only | Standard | Quick searches |
| 3-5 | Gemini CLI + Explore | 2M tokens | Most projects |
| 6-10 | Gemini + Opencode + Explore | 2M+ tokens | Enterprise codebases |
## What It Does
### Step 1: Analyze Scale
Determines tool selection based on scale parameter:
```
Scale: 5
β Enable Gemini CLI (large context)
β Enable Explore agents (parallel)
β Skip Opencode (scale < 6)
```
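The scale-to-tool mapping from the table above can be expressed as a simple threshold check. An illustrative sketch (thresholds taken from this page, not from ClaudeKit source):
```typescript
// Illustrative sketch of the scale-to-tool mapping in the table above.
type ScoutTool = "explore" | "gemini-cli" | "opencode";

function selectTools(scale: number): ScoutTool[] {
  const tools: ScoutTool[] = ["explore"];   // Explore agents always run
  if (scale >= 3) tools.push("gemini-cli"); // large-context semantic search
  if (scale >= 6) tools.push("opencode");   // second LLM perspective
  return tools;
}

console.log(selectTools(5)); // → ["explore", "gemini-cli"]
console.log(selectTools(7)); // → ["explore", "gemini-cli", "opencode"]
```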
### Step 2: Select Tools
**Gemini CLI**:
- 2M token context window
- Semantic code understanding
- Cross-file relationship analysis
**Opencode**:
- Alternative LLM-powered search
- Different perspective on codebase
- Complementary to Gemini
**Explore Agents**:
- Multiple parallel searches
- Directory-specific exploration
- Fast pattern matching
### Step 3: Dispatch in Parallel
All selected tools run simultaneously:
```
Launching parallel tools...
[Gemini CLI] Processing entire codebase...
[Explore 1] Scanning src/auth/**
[Explore 2] Scanning src/api/**
[Explore 3] Scanning lib/**
```
### Step 4: Aggregate Results
Combines findings from all tools:
```
Results aggregation:
Gemini CLI found:
- JWT implementation in src/auth/jwt.ts
- Session handling in src/auth/session.ts
- OAuth2 providers in src/auth/providers/
Explore agents found:
- Middleware auth in src/middleware/auth.ts
- API route guards in src/api/auth/guards.ts
- Token refresh in lib/token.ts
Combined: 6 unique auth implementations
```
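Aggregation amounts to merging per-tool findings and de-duplicating by file, which is how a combined count like "6 unique auth implementations" would be derived. A minimal sketch of that idea (data shapes are assumptions, not ClaudeKit internals):
```typescript
// Minimal sketch: merge per-tool findings and de-duplicate by file path,
// keeping track of which tools reported each location.
type Finding = { file: string; note: string };

function aggregate(byTool: Record<string, Finding[]>) {
  const merged = new Map<string, { note: string; tools: string[] }>();
  for (const [tool, findings] of Object.entries(byTool)) {
    for (const finding of findings) {
      const existing = merged.get(finding.file);
      if (existing) existing.tools.push(tool);
      else merged.set(finding.file, { note: finding.note, tools: [tool] });
    }
  }
  return merged;
}

const combined = aggregate({
  "gemini-cli": [{ file: "src/auth/jwt.ts", note: "JWT implementation" }],
  explore: [
    { file: "src/middleware/auth.ts", note: "Middleware auth" },
    { file: "src/auth/jwt.ts", note: "JWT helpers" },
  ],
});
console.log(combined.size); // → 2 unique locations across both tools
```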
### Step 5: Generate Report
Creates comprehensive report:
```
Report saved: plans/reports/scout-ext-251129.md
Contents:
1. Search Query
2. Tools Used
3. Findings by Tool
4. Combined Results
5. Recommendations
```
## Output Location
**With active plan**:
```
{active-plan}/reports/scout-ext-YYMMDD.md
```
**Without active plan**:
```
plans/reports/scout-ext-YYMMDD.md
```
## Advantages Over /scout
| Feature | /scout | /scout:ext |
|---------|--------|------------|
| Context size | Standard | 2M tokens |
| External tools | No | Gemini, Opencode |
| Semantic search | Basic | Advanced |
| Large codebases | Limited | Excellent |
| Parallel tools | Internal only | Multiple external |
## Complete Example
### Scenario: Understanding Authentication in Large Monorepo
```bash
/scout:ext [how does authentication work across all services?] 7
```
**Execution**:
```
Scale: 7 (Full external toolset)
Launching tools:
β Gemini CLI: Loading monorepo (1.8M tokens)
β Opencode: Analyzing architecture
β Explore 1: services/auth/**
β Explore 2: services/api/**
β Explore 3: packages/shared/auth/**
β Explore 4: libs/security/**
Progress:
[ββββββββββ] Explore 1: Complete (10s)
[ββββββββββ] Explore 2: Complete (12s)
[ββββββββββ] Explore 3: Complete (8s)
[ββββββββββ] Explore 4: Complete (15s)
[ββββββββββ] Opencode: Complete (35s)
[ββββββββββ] Gemini CLI: Complete (52s)
βββββββββββββββββββββββββββββββββββββββ
AGGREGATED FINDINGS
βββββββββββββββββββββββββββββββββββββββ
Authentication Architecture:
- Central auth service at services/auth/
- Shared JWT library in packages/shared/auth/
- Per-service middleware integration
Implementation Details:
1. JWT tokens (access + refresh)
2. OAuth2 providers (Google, GitHub)
3. API key authentication for services
4. Session fallback for legacy clients
Cross-Service Flow:
User β API Gateway β Auth Service β JWT β Target Service
Files Identified: 23
Services Involved: 5
Shared Libraries: 2
Report: plans/reports/scout-ext-251129.md
βββββββββββββββββββββββββββββββββββββββ
```
## Scale Guidelines
### Scale 1-2: Quick Search
```bash
/scout:ext [find database config] 1
```
- Uses Explore agents only
- Fast results (~10-20s)
- Good for specific file searches
### Scale 3-5: Standard Search
```bash
/scout:ext [understand the API architecture] 4
```
- Gemini CLI + Explore
- Balanced depth and speed
- Good for most queries
### Scale 6-10: Deep Analysis
```bash
/scout:ext [comprehensive security audit of auth flow] 8
```
- All tools engaged
- Maximum thoroughness
- Best for complex analysis
## Limitations
### Timeout
Each tool has 5-minute timeout:
```
Tool timeout: 5 minutes
⚠️ Gemini CLI timed out
Partial results collected from other tools.
```
### External Tool Availability
Requires configured external tools:
```bash
# Tools must be installed and configured
gemini --version # Gemini CLI
opencode --version # Opencode
```
### API Costs
External tools may incur API costs:
```
Gemini CLI: Uses Gemini API credits
Opencode: Uses configured LLM API
```
## Best Practices
### Match Scale to Codebase
```bash
# Small project (< 50 files)
/scout:ext [query] 2
# Medium project (50-500 files)
/scout:ext [query] 4
# Large project (500+ files)
/scout:ext [query] 7
```
### Be Specific
```bash
# Good: Specific query
/scout:ext [find all places where user permissions are checked] 5
# Less effective: Vague
/scout:ext [security stuff] 5
```
## Related Commands
- [/scout](/docs/commands/core/scout) - Standard codebase exploration
- [/review:codebase](/docs/commands/core/review-codebase) - Comprehensive code analysis
- [/ask](/docs/commands/core/ask) - Architectural questions
---
**Key Takeaway**: `/scout:ext` extends codebase exploration with external AI tools, enabling semantic search across large codebases that exceed standard context limits.
---
# /review:codebase
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/review-codebase
# /review:codebase
Multi-agent codebase analysis command. Scans your entire codebase using researcher, scout, and code-reviewer agents to assess quality, identify technical debt, and create an improvement roadmap.
## Syntax
```bash
/review:codebase [tasks-or-prompt]
```
## When to Use
- **Onboarding**: Understanding a new codebase
- **Pre-Refactoring**: Assessing before major changes
- **Technical Debt Audit**: Identifying debt inventory
- **Architecture Review**: Evaluating current patterns
- **Quality Check**: Comprehensive code quality assessment
## Quick Example
```bash
/review:codebase
```
**Output**:
```
Starting codebase review...
Phase 1: Structure Scan
Analyzing directory structure...
Found: 234 files, 18 directories
Phase 2: Multi-Agent Exploration
Dispatching 5 scout agents...
[ββββββββββββββββββββ] Complete
Phase 3: Pattern Analysis
Researcher analyzing architecture...
[ββββββββββββββββββββ] Complete
Phase 4: Quality Review
Code-reviewer checking standards...
[ββββββββββββββββββββ] Complete
Phase 5: Improvement Planning
Creating roadmap...
[ββββββββββββββββββββ] Complete
Report: plans/reports/codebase-review-251129.md
```
## Arguments
- `[tasks-or-prompt]`: Optional focus area. If empty, reviews entire codebase.
## What It Does
### Agents Invoked
| Agent | Role | Focus |
|-------|------|-------|
| scout (x5) | Explore | Directory-parallel exploration |
| researcher | Analyze | Architecture patterns, best practices |
| code-reviewer | Review | Code quality, standards compliance |
| planner | Plan | Improvement roadmap creation |
### Workflow
**Step 1: Scan Directory Structure**
```
Scanning codebase...
src/
├── components/ (42 files)
├── hooks/ (12 files)
├── services/ (18 files)
├── utils/ (8 files)
└── pages/ (24 files)
Total: 234 files across 18 directories
```
**Step 2: Dispatch Scout Agents**
Five parallel scouts explore different areas:
```
Scout 1: src/components/**
Scout 2: src/hooks/** + src/utils/**
Scout 3: src/services/**
Scout 4: src/pages/**
Scout 5: tests/** + config files
```
**Step 3: Researcher Analyzes Patterns**
```
Analyzing architecture patterns...
Detected:
- Pattern: Feature-based organization
- State: Zustand stores per feature
- API: REST with React Query
- Styling: Tailwind CSS + CSS modules
```
**Step 4: Code-Reviewer Checks Quality**
```
Quality assessment...
Code Quality Metrics:
- Complexity: Medium (avg cyclomatic: 8)
- Duplication: Low (2.3%)
- Test Coverage: 67%
- Type Safety: High (strict mode)
Issues Found:
- 3 potential security issues
- 12 code style violations
- 5 performance concerns
```
**Step 5: Planner Creates Roadmap**
```
Creating improvement roadmap...
Priority 1 (Critical):
- Fix security issues in auth module
- Add input validation to API endpoints
Priority 2 (High):
- Increase test coverage to 80%
- Refactor complex components
Priority 3 (Medium):
- Address code style violations
- Optimize bundle size
```
**Step 6: Generate Report**
Comprehensive markdown report created.
## Analysis Areas
### Code Organization
- File structure patterns
- Naming conventions
- Module boundaries
- Import/export patterns
### Architecture Patterns
- Monolith vs microservices
- State management approach
- API design patterns
- Component architecture
### Code Quality Metrics
- Cyclomatic complexity
- Code duplication percentage
- Test coverage
- Type safety level
- Documentation coverage
### Security Issues
- Input validation gaps
- Authentication vulnerabilities
- Authorization flaws
- Dependency vulnerabilities
### Performance Bottlenecks
- Bundle size concerns
- Render performance issues
- API response times
- Potential memory leaks
### Technical Debt Inventory
- Legacy code sections
- Outdated dependencies
- Missing tests
- TODO/FIXME comments
- Deprecated patterns
## Output
### Report Location
```
plans/reports/codebase-review-YYMMDD.md
```
### Report Sections
```markdown
# Codebase Review Report
## Executive Summary
- Overall health score
- Key findings
- Critical issues
## Structure Analysis
- Directory organization
- File distribution
- Naming patterns
## Architecture Overview
- Patterns detected
- Component relationships
- Data flow
## Quality Metrics
- Complexity scores
- Duplication analysis
- Coverage statistics
## Issues Inventory
### Critical
### High
### Medium
### Low
## Technical Debt
- Debt items
- Estimated effort
- Impact assessment
## Recommendations
### Immediate Actions
### Short-term Improvements
### Long-term Refactoring
## Improvement Roadmap
- Phase 1: Critical fixes
- Phase 2: Quality improvements
- Phase 3: Architecture evolution
```
## Complete Example
### Scenario: Pre-Refactoring Assessment
```bash
/review:codebase [assess readiness for React 19 migration]
```
**Execution**:
```
Starting focused codebase review...
Focus: React 19 migration readiness
Phase 1: Structure Scan
Found: 156 React components, 24 hooks, 12 context providers
Phase 2: Scout Analysis
[Scout 1] Analyzing component patterns...
[Scout 2] Checking hook implementations...
[Scout 3] Reviewing state management...
[Scout 4] Examining data fetching...
[Scout 5] Checking build configuration...
Phase 3: Research
Comparing current patterns against React 19 requirements...
Phase 4: Quality Review
Checking for deprecated patterns...
Phase 5: Migration Planning
Creating migration roadmap...
βββββββββββββββββββββββββββββββββββββββ
REVIEW SUMMARY
βββββββββββββββββββββββββββββββββββββββ
React 19 Migration Readiness: 72%
Blockers (Must Fix):
- 8 components using deprecated lifecycle methods
- 3 class components need conversion
- Legacy context API in 2 providers
Warnings (Should Fix):
- 12 components with potential Suspense issues
- 5 effects without cleanup
- Outdated React Query patterns
Ready (No Changes):
- 134 functional components
- 19 custom hooks
- Modern state management
Estimated Migration Effort:
- Critical fixes: 2 days
- Refactoring: 5 days
- Testing: 3 days
- Total: ~2 weeks
Report: plans/reports/codebase-review-251129.md
βββββββββββββββββββββββββββββββββββββββ
```
## Focused Review
Specify an area for targeted analysis:
```bash
# Security focus
/review:codebase [security audit of authentication system]
# Performance focus
/review:codebase [identify performance bottlenecks]
# Testing focus
/review:codebase [assess test coverage gaps]
# Architecture focus
/review:codebase [evaluate microservices boundaries]
```
## Best Practices
### Run Before Major Changes
```bash
# Before refactoring
/review:codebase
# Review report
cat plans/reports/codebase-review-*.md
# Then plan
/plan [refactor based on review findings]
```
### Regular Health Checks
Run periodically for ongoing projects:
```bash
# Monthly health check
/review:codebase [monthly quality assessment]
```
### Focus When Needed
```bash
# Full review for new projects
/review:codebase
# Focused review for specific concerns
/review:codebase [API security]
```
## Related Commands
- [/scout](/docs/commands/core/scout) - Quick codebase exploration
- [/scout:ext](/docs/commands/core/scout-ext) - External tool exploration
- [/ask](/docs/commands/core/ask) - Architectural questions
- [/plan](/docs/commands/plan) - Create improvement plans
---
**Key Takeaway**: `/review:codebase` provides comprehensive multi-agent analysis of your codebase, identifying quality issues and technical debt and producing an actionable improvement roadmap.
---
# /code:parallel
Section: docs
Category: commands/core
URL: https://docs.claudekit.cc/docs/core/code-parallel
# /code:parallel
Plan execution with parallel/sequential phase coordination. Reads dependency graphs from existing plans and executes phases using fullstack-developer agents in optimized waves.
## Syntax
```bash
/code:parallel [plan-path]
```
## When to Use
- **Existing Parallel Plans**: Execute plans created by `/plan:parallel`
- **Dependency-Aware Execution**: When phases have dependencies
- **Multi-Phase Projects**: Complex implementations with multiple stages
- **Optimized Execution**: Automatic parallelization where possible
## Quick Example
```bash
/code:parallel @plans/251129-auth-system/plan.md
```
**Output**:
```
Reading plan: plans/251129-auth-system/plan.md
Parsing dependency graph...
Phase 1: Auth module (no deps)
Phase 2: Session management (no deps)
Phase 3: OAuth providers (depends on: 1)
Phase 4: Integration (depends on: 1, 2, 3)
Building execution waves...
Wave 1: Phase 1 + Phase 2 (parallel)
Wave 2: Phase 3 (sequential)
Wave 3: Phase 4 (sequential)
Executing Wave 1...
[Agent 1] Auth module... ββββββββββββ Complete
[Agent 2] Session management... ββββββββββββ Complete
Executing Wave 2...
[Agent 3] OAuth providers... ββββββββββββ Complete
Executing Wave 3...
[Agent 4] Integration... ββββββββββββ Complete
All phases complete!
```
## Arguments
- `[plan-path]`: Path to plan.md (optional - auto-detects active plan)
## What It Does
### Step 1: Load Plan
```
Checking for plan...
Plan provided: plans/251129-auth-system/plan.md
OR
Auto-detected: plans/251129-auth-system/plan.md (from active-plan)
```
### Step 2: Extract Dependency Graph
Looks for "## Dependency Graph" section:
```markdown
## Dependency Graph
Phase 1: Auth module (no deps)
Phase 2: Session management (no deps)
Phase 3: OAuth providers (depends on: Phase 1)
Phase 4: Integration (depends on: Phase 1, Phase 2, Phase 3)
```
Parsed:
```
Phase 1: dependencies = []
Phase 2: dependencies = []
Phase 3: dependencies = [1]
Phase 4: dependencies = [1, 2, 3]
```
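Parsing this section is essentially line-by-line pattern matching on the `Phase N: Name (no deps | depends on: ...)` format shown in the examples. A rough sketch (assumed format; the actual parser may be more forgiving):
```typescript
// Minimal parsing sketch (assumed format, matching the examples on this page):
// extract phase numbers and their dependencies from "## Dependency Graph" lines.
type ParsedPhase = { id: number; name: string; deps: number[] };

function parseDependencyGraph(markdown: string): ParsedPhase[] {
  const phases: ParsedPhase[] = [];
  const lineRe = /^Phase (\d+):\s*(.+?)\s*\((no deps|depends on:\s*(.+))\)\s*$/;
  for (const line of markdown.split("\n")) {
    const m = line.match(lineRe);
    if (!m) continue;
    const deps = m[4]
      ? m[4].split(",").map((d) => parseInt(d.replace(/\D/g, ""), 10))
      : [];
    phases.push({ id: parseInt(m[1], 10), name: m[2], deps });
  }
  return phases;
}

const example = `
Phase 1: Auth module (no deps)
Phase 2: Session management (no deps)
Phase 3: OAuth providers (depends on: Phase 1)
Phase 4: Integration (depends on: Phase 1, Phase 2, Phase 3)
`;
console.log(parseDependencyGraph(example).map((p) => [p.id, p.deps]));
// → [ [1, []], [2, []], [3, [1]], [4, [1, 2, 3]] ]
```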
### Step 3: Build Execution Waves
```
Analyzing dependencies...
Wave 1: Phases with no dependencies
β Phase 1, Phase 2 (can run parallel)
Wave 2: Phases depending only on Wave 1
β Phase 3 (depends on Phase 1)
Wave 3: Phases depending on Wave 2
β Phase 4 (depends on all previous)
Execution plan:
Wave 1: [Phase 1, Phase 2] - Parallel
Wave 2: [Phase 3] - After Wave 1
Wave 3: [Phase 4] - After Wave 2
```
### Step 4: Execute Wave 1 (Parallel)
```
Launching Wave 1 agents...
[Agent 1] Phase 1: Auth module
File ownership: src/auth/**
Status: Running...
[Agent 2] Phase 2: Session management
File ownership: src/session/**
Status: Running...
Progress:
[ββββββββββ] Agent 1: Complete (6 min)
[ββββββββββ] Agent 2: Complete (5 min)
Wave 1 complete.
```
### Step 5: Wait for Wave Completion
```
Wave 1 results:
β Phase 1: Auth module - 8 files changed
β Phase 2: Session management - 5 files changed
All Wave 1 phases complete.
Proceeding to Wave 2...
```
### Step 6-7: Execute Remaining Waves
```
Executing Wave 2...
[Agent 3] Phase 3: OAuth providers
Dependencies satisfied: Phase 1 β
Status: Running...
[ββββββββββ] Agent 3: Complete (7 min)
Executing Wave 3...
[Agent 4] Phase 4: Integration
Dependencies satisfied: Phase 1 β, Phase 2 β, Phase 3 β
Status: Running...
[ββββββββββ] Agent 4: Complete (4 min)
```
### Step 8: Integration & Testing
```
All waves complete.
Running integration tests...
Tests: 24/24 passed
Summary:
- Phases executed: 4
- Waves: 3
- Total time: 16 minutes
- Sequential estimate: 28 minutes
- Speedup: 1.75x
```
## Dependency Graph Format
Plan.md should include:
```markdown
## Dependency Graph
Phase 1: [Phase Name] (no deps)
Phase 2: [Phase Name] (no deps)
Phase 3: [Phase Name] (depends on: Phase 1)
Phase 4: [Phase Name] (depends on: Phase 1, Phase 2, Phase 3)
```
Or visual format:
```markdown
## Dependency Graph
Phase 1 ──────┐
              ├──▶ Phase 3 ──▶ Phase 4
Phase 2 ──────┘
```
## File Ownership Matrix
Plan.md can include file ownership:
```markdown
## File Ownership Matrix
| Phase | Owned Files |
|-------|-------------|
| Phase 1 | src/auth/**, tests/auth/** |
| Phase 2 | src/session/**, tests/session/** |
| Phase 3 | src/oauth/**, tests/oauth/** |
| Phase 4 | src/integration/**, tests/e2e/** |
```
Prevents conflicts between parallel agents.
## Execution Waves
### Parallel Wave
Multiple phases with satisfied dependencies:
```
Wave 1:
├── Phase 1 (no deps) ─────────┐
│                              ├── Parallel
└── Phase 2 (no deps) ─────────┘
[Both agents run simultaneously]
```
### Sequential Wave
A single phase, or phases that depend on results from earlier waves:
```
Wave 2:
└── Phase 3 (depends on Phase 1)
[Waits for Wave 1, then runs Phase 3]
```
## Fallback Behavior
If no dependency graph found:
```
No "## Dependency Graph" section found.
Falling back to sequential execution...
Executing phases in order:
1. Phase 1
2. Phase 2
3. Phase 3
4. Phase 4
```
## Complete Example
### Scenario: Executing E-commerce Plan
```bash
/code:parallel @plans/251129-ecommerce/plan.md
```
**Plan Content**:
```markdown
# E-commerce Implementation Plan
## Dependency Graph
Phase 1: Product Catalog (no deps)
Phase 2: Shopping Cart (no deps)
Phase 3: User Auth (no deps)
Phase 4: Checkout (depends on: Phase 1, Phase 2, Phase 3)
Phase 5: Order Processing (depends on: Phase 4)
## File Ownership Matrix
| Phase | Owned Files |
|-------|-------------|
| Phase 1 | src/products/**, src/api/products/** |
| Phase 2 | src/cart/**, src/api/cart/** |
| Phase 3 | src/auth/**, src/api/auth/** |
| Phase 4 | src/checkout/**, src/pages/checkout/** |
| Phase 5 | src/orders/**, src/api/orders/** |
```
**Execution**:
```
βββββββββββββββββββββββββββββββββββββββ
PARALLEL EXECUTION
βββββββββββββββββββββββββββββββββββββββ
Plan: E-commerce Implementation
Dependency Analysis:
Phase 1: no deps β Wave 1
Phase 2: no deps β Wave 1
Phase 3: no deps β Wave 1
Phase 4: deps [1,2,3] β Wave 2
Phase 5: deps [4] β Wave 3
Wave 1 (Parallel - 3 agents):
βββββββββββββββββββββββββββββββββββββ
[Agent 1] Product Catalog...
[Agent 2] Shopping Cart...
[Agent 3] User Auth...
Progress:
[ββββββββββ] Agent 1: Complete (8 min)
[ββββββββββ] Agent 2: Complete (6 min)
[ββββββββββ] Agent 3: Complete (7 min)
Wave 2 (Sequential):
βββββββββββββββββββββββββββββββββββββ
[Agent 4] Checkout...
Dependencies: β Phase 1, β Phase 2, β Phase 3
[ββββββββββ] Agent 4: Complete (9 min)
Wave 3 (Sequential):
βββββββββββββββββββββββββββββββββββββ
[Agent 5] Order Processing...
Dependencies: β Phase 4
[ββββββββββ] Agent 5: Complete (5 min)
βββββββββββββββββββββββββββββββββββββββ
EXECUTION COMPLETE
βββββββββββββββββββββββββββββββββββββββ
Phases: 5/5 complete
Waves: 3
Time: 17 minutes
Sequential: ~35 minutes
Speedup: 2.06x
Tests: 45/45 passed
Files: 52 changed
βββββββββββββββββββββββββββββββββββββββ
```
## Best Practices
### Verify Plan Structure
Before execution:
```bash
# Check plan has dependency graph
cat plans/*/plan.md | grep -A 20 "Dependency Graph"
```
### Use with /plan:parallel
```bash
# Create optimized plan
/plan:parallel [your feature]
# Execute with parallel coordination
/code:parallel
```
### Auto-Detection
If no path provided, uses active plan:
```bash
/code:parallel
# Uses plan from .claude/active-plan
```
## Related Commands
- [/code](/docs/commands/core/code) - Sequential plan execution
- [/plan:parallel](/docs/commands/plan/parallel) - Create parallel plans
- [/cook:auto:parallel](/docs/commands/core/cook-auto-parallel) - Plan + execute parallel
- [/fix:parallel](/docs/commands/fix/parallel) - Parallel bug fixing
---
**Key Takeaway**: `/code:parallel` executes implementation plans with optimized parallel/sequential coordination, using dependency graphs to maximize parallelization while respecting phase dependencies.
---
# /design:3d
Section: docs
Category: commands/design
URL: https://docs.claudekit.cc/docs/design/3d
# /design:3d
Transform your web application into an immersive 3D experience. Build interactive 3D scenes, animations, and visualizations using Three.js and WebGL shaders with responsive design principles.
## Syntax
```bash
/design:3d [description of 3D experience]
```
## How It Works
The `/design:3d` command orchestrates a specialized workflow for 3D design:
### 1. Concept & Planning
- Invokes **ui-ux-designer** agent to understand requirements
- Defines 3D scene architecture and interaction model
- Plans performance budget and optimization strategy
- Identifies required assets (models, textures, shaders)
### 2. Three.js Setup
- Configures Three.js scene, camera, and renderer
- Sets up responsive canvas sizing
- Implements WebGL context management
- Adds performance monitoring
### 3. 3D Asset Creation
- Creates or configures 3D geometries
- Designs material systems (PBR, custom shaders)
- Implements lighting schemes (ambient, directional, point lights)
- Loads external models (GLTF, FBX, OBJ)
### 4. Interaction Design
- Implements orbit controls and camera movement
- Adds raycasting for object selection
- Creates hover and click interactions
- Designs touch gesture support for mobile
### 5. Animation & Effects
- Creates animation loops and timelines
- Implements particle systems
- Adds post-processing effects (bloom, depth of field)
- Designs shader effects (displacement, noise, distortion)
### 6. Performance Optimization
- Implements LOD (Level of Detail) systems
- Optimizes draw calls and geometry batching
- Adds frustum culling and occlusion culling
- Configures texture compression and mipmapping
### 7. Responsive Integration
- Ensures 3D scene scales across devices
- Implements mobile performance optimizations
- Adds progressive enhancement fallbacks
- Tests across different GPU capabilities
## Examples
### Interactive Product Showcase
```bash
/design:3d [create 3D product viewer for sneakers with 360Β° rotation, material variants, and AR preview button]
```
**What happens:**
```
1. Scene Architecture
- Three.js scene with PBR lighting
- Turntable rotation system
- Material switcher UI
- WebXR integration for AR
2. Implementation
β Scene setup with HDR environment lighting
β GLTF sneaker model loader
β Orbit controls with momentum
β Material variants (leather, mesh, suede)
β Annotation hotspots for features
β AR Quick Look for iOS
β WebXR for Android AR
3. Interactions
- Drag to rotate (with inertia)
- Pinch to zoom
- Tap hotspots for details
- Material swatch selection
- "View in AR" button
4. Performance
- 60 FPS on mobile (tested)
- Lazy loading for textures
- Compressed GLTF with Draco
- Progressive JPEG textures
Generated Files:
β src/components/ProductViewer3D.tsx
β src/shaders/pbr.vert
β src/shaders/pbr.frag
β src/utils/three-setup.ts
β public/models/sneaker.glb
```
### Immersive Landing Page Hero
```bash
/design:3d [abstract 3D particle galaxy hero section that responds to mouse movement with smooth camera parallax]
```
**What happens:**
```
1. Visual Design
- 10,000 particle system
- Gradient color scheme (purple β blue)
- Depth-based fog effect
- Smooth camera parallax
2. Implementation
β Custom particle shader with size attenuation
β Mouse-tracking camera movement (smoothed)
β Galaxy spiral formation algorithm
β Color gradient mapped to particle position
β Exponential fog for depth perception
β Responsive canvas resizing
3. Animation
- Particle rotation (slow, subtle)
- Individual particle drift noise
- Camera follow mouse (0.1 damping)
- Scroll-based opacity fade
4. Optimization
- BufferGeometry for particles
- Single draw call
- 60 FPS on mid-range devices
- Reduced particle count on mobile
Generated Files:
β src/components/Hero3D.tsx
β src/shaders/particles.vert
β src/shaders/particles.frag
β src/utils/galaxy-generator.ts
```
### Interactive Data Visualization
```bash
/design:3d [3D bar chart showing sales data across regions with interactive tooltips and smooth camera animations between views]
```
**What happens:**
```
1. Data Architecture
- Parse sales data JSON
- Map data to 3D coordinates
- Calculate optimal camera positions
- Design color mapping strategy
2. Implementation
β Dynamic 3D bar generation from data
β Interactive raycasting for tooltips
β Animated camera transitions (GSAP)
β Color gradient by value
β Grid plane with labels
β Legend UI overlay
3. Interactions
- Hover: highlight bar + show tooltip
- Click: zoom to region view
- Filter buttons: animate data transitions
- Reset camera button
4. Animations
- Bars grow from zero on load
- Smooth camera tweens (easing)
- Data transition morphing
- Hover state elevation
Generated Files:
β src/components/DataViz3D.tsx
β src/utils/data-to-geometry.ts
β src/components/CameraControls.tsx
β src/styles/data-viz-ui.css
```
## When to Use /design:3d
### ✅ Use /design:3d for:
- **Product Showcases**: Interactive 3D viewers for e-commerce
- **Hero Sections**: Immersive landing page experiences
- **Data Visualization**: 3D charts and graphs
- **Games & Experiences**: Interactive web-based games
- **Configurators**: Customizable 3D products
- **AR Previews**: WebXR-enabled product previews
- **Architectural Walkthroughs**: Building/space visualization
- **Creative Portfolios**: Unique 3D portfolio presentations
### ❌ Don't use /design:3d for:
- **Simple UI Elements**: Use regular HTML/CSS instead
- **Static Images**: Use 2D design tools
- **Heavy Applications**: Consider native apps for complex 3D
- **Low-End Device Audiences**: If your users are mostly on low-end devices, heavy 3D may not be viable
## Best Practices
### Provide Clear Requirements
✅ **Good:**
```bash
/design:3d [create rotating 3D globe showing customer locations with animated connection lines]
/design:3d [abstract geometric shapes that morph on scroll with pastel colors]
/design:3d [car configurator with color, wheel, and interior customization]
```
❌ **Vague:**
```bash
/design:3d [make it look cool]
/design:3d [add some 3D stuff]
/design:3d [3D thing]
```
### Performance First
Always consider performance:
- **Mobile Devices**: Test on mid-range phones
- **Frame Rate**: Target 60 FPS, acceptable 30 FPS on mobile
- **Loading**: Show loading states, lazy load assets
- **Fallbacks**: Provide 2D alternatives for unsupported devices
### Accessibility Considerations
3D experiences should be accessible:
- Provide keyboard navigation alternatives
- Add screen reader descriptions
- Include skip links for main content
- Ensure color contrast in UI overlays
- Support reduced motion preferences
## Technical Specifications
### Three.js Stack
```typescript
// Typical setup generated
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({
antialias: true,
alpha: true
});
```
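The generated setup also mounts the canvas, keeps the camera and renderer in sync with the viewport, and drives a render loop. A minimal, self-contained sketch of that responsive boilerplate (illustrative; the exact generated code varies by project):
```typescript
import * as THREE from 'three';

// Illustrative sketch: mount the canvas, keep the camera/renderer in sync with
// the window, and drive a render loop.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2)); // cap DPR for mobile performance
document.body.appendChild(renderer.domElement);

// Responsive canvas: update aspect ratio and size on resize.
window.addEventListener('resize', () => {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
});

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```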
### Shader Examples
**Custom Particle Shader:**
```glsl
// Vertex Shader
attribute float size;
attribute vec3 customColor;
varying vec3 vColor;
void main() {
vColor = customColor;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_PointSize = size * (300.0 / -mvPosition.z);
gl_Position = projectionMatrix * mvPosition;
}
// Fragment Shader
varying vec3 vColor;
void main() {
float r = distance(gl_PointCoord, vec2(0.5));
if (r > 0.5) discard;
gl_FragColor = vec4(vColor, 1.0 - r * 2.0);
}
```
### Performance Optimization Patterns
```typescript
// Instanced Geometry for repeated objects
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0x00ff00 });
const mesh = new THREE.InstancedMesh(geometry, material, 1000);
// LOD for distant objects
const lod = new THREE.LOD();
lod.addLevel(meshHighDetail, 0);
lod.addLevel(meshMediumDetail, 50);
lod.addLevel(meshLowDetail, 100);
scene.add(lod);
// Frustum culling (automatic)
mesh.frustumCulled = true;
```
## Advanced Features
### WebXR Integration
```bash
/design:3d [product viewer with AR support for placing furniture in room using WebXR]
```
Generates:
- WebXR session management
- AR hit testing for surface detection
- Lighting estimation
- iOS AR Quick Look fallback
- Android Scene Viewer support
### Post-Processing Effects
```bash
/design:3d [sci-fi dashboard with bloom, chromatic aberration, and scan lines]
```
Generates:
- EffectComposer setup
- Bloom pass configuration
- Custom shader passes
- Performance-optimized pipeline
### Physics Integration
```bash
/design:3d [3D physics-based marble maze game with touch controls]
```
Generates:
- Cannon.js integration
- Physics world setup
- Collision detection
- Touch control mapping
## Generated Artifacts
After `/design:3d` completes:
### Component Files
```
src/components/
├── Scene3D.tsx          # Main 3D scene component
├── Controls3D.tsx       # Camera controls wrapper
└── LoadingScreen3D.tsx  # Asset loading UI
```
### Shader Files
```
src/shaders/
├── vertex.vert     # Vertex shader
├── fragment.frag   # Fragment shader
└── particles.glsl  # Particle shaders
```
### Utility Files
```
src/utils/
├── three-setup.ts    # Scene initialization
├── loaders.ts        # Asset loaders
├── animations.ts     # Animation helpers
└── responsive-3d.ts  # Responsive handlers
```
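As an example of what these utilities contain, `loaders.ts` typically wraps Three.js's callback-based `GLTFLoader` in a promise. A hypothetical sketch (the actual generated helpers vary):
```typescript
// src/utils/loaders.ts: illustrative sketch of a promise-based GLTF loader.
import { GLTFLoader, GLTF } from 'three/examples/jsm/loaders/GLTFLoader';

const gltfLoader = new GLTFLoader();

export function loadModel(url: string): Promise<GLTF> {
  return new Promise((resolve, reject) => {
    gltfLoader.load(
      url,
      (gltf) => resolve(gltf), // parsed scenes + animations
      undefined,               // progress callback (unused here)
      (error) => reject(error)
    );
  });
}

// Usage: const { scene: model } = await loadModel('/models/sneaker.glb');
```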
### Asset Files
```
public/
├── models/   # 3D models (GLTF, FBX)
├── textures/ # Texture maps
└── hdri/     # Environment maps
```
## Troubleshooting
### Performance Issues
**Problem:** Frame rate drops below 30 FPS
**Solution:**
```bash
# Optimize existing 3D scene
/fix:fast [3D scene performance - reduce draw calls and optimize geometries]
# Common fixes applied:
β Merge geometries
β Use instanced meshes
β Reduce particle count
β Lower shadow map resolution
β Disable shadows on mobile
β Implement LOD system
```
### Loading Times
**Problem:** 3D scene takes too long to load
**Solution:**
```typescript
// Generated optimization
- Compressed GLTF with Draco
- Progressive texture loading
- Show loading screen with progress
- Lazy load non-critical assets
- Use texture atlases
```
### Mobile Compatibility
**Problem:** 3D scene not working on mobile devices
**Solution:**
```bash
# Add mobile support
/fix:ui [3D scene] - optimize for mobile with reduced quality and touch controls
# Automatically adds:
β Device capability detection
β Reduced particle count on mobile
β Lower texture resolution
β Touch gesture controls
β Disable expensive effects
```
## Inspiration Gallery
### Award-Winning Examples
**Apple Product Pages**
- Smooth scroll-based animations
- High-quality PBR materials
- Seamless mobile experience
- WebGL performance optimization
**Awwwards Winners**
- bruno-simon.com (3D portfolio)
- Active Theory projects
- Locomotive Scroll + Three.js
- Creative particle systems
**Three.js Examples**
- https://threejs.org/examples/
- Shader examples
- Post-processing demos
- Physics simulations
## Next Steps
- [/design:good](/docs/commands/design/good) - High-fidelity 2D design
- [/design:fast](/docs/commands/design/fast) - Quick prototypes
- [/fix:ui](/docs/commands/fix/ui) - Fix 3D visual issues
- [/test](/docs/commands/core/test) - Performance testing
---
**Key Takeaway**: `/design:3d` creates production-ready 3D web experiences with Three.js, complete with interactions, animations, and performance optimizations across all devices. Think beyond flat design - create immersive experiences that captivate users.
---
# /design:describe
Section: docs
Category: commands/design
URL: https://docs.claudekit.cc/docs/design/describe
# /design:describe
Transform visual design references into actionable technical specifications. Analyze screenshots, videos, or design mockups to extract layout structure, styling details, component patterns, and implementation strategies.
## Syntax
```bash
/design:describe [screenshot or video file]
```
## How It Works
The `/design:describe` command uses AI vision to analyze visual content:
### 1. Visual Analysis
- Processes screenshot or video using multimodal AI
- Identifies UI components and layout structure
- Extracts color palette and typography
- Measures spacing, sizing, and proportions
- Detects interaction patterns (from videos)
### 2. Component Identification
- Recognizes common UI patterns
- Identifies component library usage (Material UI, Ant Design, etc.)
- Maps components to standard implementations
- Detects custom components
### 3. Style Extraction
- Extracts color values (hex, RGB)
- Identifies font families and sizes
- Measures spacing system (padding, margins)
- Detects border radius, shadows, gradients
- Analyzes layout systems (Grid, Flexbox)
### 4. Implementation Strategy
- Suggests appropriate tech stack
- Recommends component libraries
- Provides CSS/Tailwind implementation hints
- Identifies reusable patterns
- Estimates complexity and effort
### 5. Technical Specifications
- Generates detailed component breakdown
- Creates responsive design notes
- Documents state management requirements
- Lists required assets and resources
## Examples
### Landing Page Screenshot
```bash
/design:describe [saas-landing-hero.png]
```
**Visual Analysis Output:**
```
Design Analysis: SaaS Landing Hero Section
ββββββββββββββββββββββββββββββββββββββββββββ
Layout Structure:
βββ Full-width hero section (min-height: 100vh)
βββ Two-column grid layout (60/40 split at desktop)
β βββ Left: Content (headline, description, CTA)
β βββ Right: Product screenshot with floating cards
βββ Gradient background (purple-600 β indigo-900)
Typography:
βββ Headline: "Inter" font, 72px, 900 weight, -0.02em tracking
βββ Subheading: "Inter" font, 24px, 400 weight, 1.6 line-height
βββ Button: "Inter" font, 18px, 600 weight, uppercase
Color Palette:
βββ Primary: #7C3AED (purple-600)
βββ Accent: #4338CA (indigo-700)
βββ Text: #FFFFFF (white)
βββ Text Secondary: #E9D5FF (purple-200)
βββ Background: Linear gradient (135deg, #7C3AED β #312E81)
Components Identified:
βββ Navigation Bar
β βββ Logo (SVG, 40px height)
β βββ Menu Items (5 items, horizontal)
β βββ CTA Button (outlined, white border)
βββ Hero Content
β βββ Badge ("New Feature" pill)
β βββ H1 Headline (3 lines, left-aligned)
β βββ Description Paragraph (2 lines)
β βββ CTA Group (2 buttons, horizontal)
βββ Product Visual
βββ Browser mockup (screenshot container)
βββ Floating cards (3 cards, positioned absolute)
βββ Blur backdrop effects
Spacing System:
βββ Container padding: 80px (desktop), 24px (mobile)
βββ Section gap: 48px
βββ Button padding: 16px 32px
βββ Card gap: 24px
βββ Consistent 8px spacing unit
Visual Effects:
βββ Gradient background with noise texture
βββ Floating cards with shadow: 0 25px 50px -12px rgba(0,0,0,0.25)
βββ Button hover: scale(1.05) + shadow increase
βββ Backdrop blur: blur(12px) on floating cards
βββ Subtle animation: floating cards move vertically (2s ease-in-out infinite)
Responsive Behavior:
βββ Desktop (>1024px): Two-column layout
βββ Tablet (768-1023px): Stack layout, reduced font sizes
βββ Mobile (<768px): Single column, full-width CTA
βββ Navigation: Hamburger menu below 768px
Implementation Suggestions:
βββ Framework: React or Next.js
βββ Styling: Tailwind CSS with custom gradient
βββ Components: Hero, Button, Card, Badge, Navigation
βββ Libraries: Framer Motion for animations
βββ Assets Needed: Logo SVG, product screenshot, noise texture
βββ Estimated Time: 4-6 hours for pixel-perfect implementation
Accessibility Notes:
βββ Ensure sufficient color contrast (currently good)
βββ Add alt text for product screenshot
βββ Keyboard navigation for menu
βββ Focus indicators for interactive elements
βββ Reduced motion media query for animations
```
### Mobile App Screenshot
```bash
/design:describe [mobile-app-profile.png]
```
**Analysis Output:**
```
Design Analysis: Mobile Profile Screen
ββββββββββββββββββββββββββββββββββββββββββββ
Device: iPhone 14 Pro (393 Γ 852 px)
Platform: iOS (status bar, rounded corners detected)
Layout Structure:
βββ Status Bar (system, 54px height)
βββ Navigation Header (60px)
β βββ Back button (left)
β βββ Title "Profile" (center)
β βββ Edit icon (right)
βββ Profile Section (auto height)
β βββ Avatar (120px circle, center)
β βββ Name (24px, center)
β βββ Bio (16px, center, gray)
β βββ Stats Row (3 columns)
βββ Action Buttons Row (2 buttons, gap 12px)
βββ Content Tabs (underline style)
βββ Grid Content (3 columns)
Design System:
βββ Colors:
β βββ Primary: #007AFF (iOS blue)
β βββ Background: #F2F2F7 (iOS light gray)
β βββ Surface: #FFFFFF (white cards)
β βββ Text Primary: #000000
β βββ Text Secondary: #8E8E93
βββ Typography:
β βββ System Font: SF Pro (iOS native)
β βββ Name: 24px, 600 weight
β βββ Stats Number: 20px, 600 weight
β βββ Labels: 14px, 400 weight
βββ Spacing:
βββ Container padding: 16px
βββ Element gap: 12px
βββ Grid gap: 2px
Components:
βββ Avatar
β βββ Size: 120px Γ 120px
β βββ Border: 4px white + 2px shadow
β βββ Centered with gradient border effect
βββ Stats Card
β βββ Three columns (equal width)
β βββ Number (large, bold) + Label (small, gray)
β βββ No borders, clean spacing
βββ Buttons
β βββ Primary: Filled (#007AFF, white text)
β βββ Secondary: Outlined (border: #007AFF)
β βββ Height: 44px (iOS touch target)
β βββ Border radius: 12px
βββ Tab Bar
βββ Underline indicator (2px, #007AFF)
βββ Active state: blue text + underline
βββ Inactive: gray text
Interaction Patterns:
βββ Pull to refresh (detected from context)
βββ Tap avatar to view full size
βββ Tab switching with slide animation
βββ Grid items tap to open detail
βββ Native iOS gestures (swipe back)
Technical Implementation:
βββ Platform: React Native or Swift/SwiftUI
βββ Navigation: Stack navigator with modal for edit
βββ State Management: Profile data, tabs state
βββ APIs: User profile, posts grid
βββ Image Handling: Avatar upload, image caching
βββ Performance: Lazy loading grid, image optimization
Similar Patterns Found:
βββ Instagram profile screen
βββ Twitter profile layout
βββ Standard iOS Settings pattern
βββ Apple HIG compliant design
Recommendations:
β Use native iOS components for authenticity
β Implement haptic feedback on interactions
β Add skeleton loaders for data fetching
β Support dark mode with adaptive colors
β Handle safe area insets properly
```
### Design Video Analysis
```bash
/design:describe [onboarding-flow.mp4]
```
**Video Analysis Output:**
```
Design Analysis: Onboarding Flow Animation
ββββββββββββββββββββββββββββββββββββββββββββ
Video Duration: 15 seconds
Flow Steps: 4 screens
Transitions: Smooth slide animations (300ms ease-out)
Screen 1: Welcome (0:00 - 0:03)
βββ Hero illustration (Lottie animation)
β βββ Character waves (loop, 2s duration)
βββ Headline: "Welcome to AppName"
βββ Subheading: Brief value proposition
βββ "Get Started" button (pulsing subtle animation)
Screen 2: Feature Highlight (0:04 - 0:07)
βββ Icon animation (scale in + rotate)
βββ Feature title with typewriter effect
βββ Bullet points fade in sequentially (100ms stagger)
βββ Progress dots (2/4, animated)
βββ "Next" button + "Skip" link
Screen 3: Permissions (0:08 - 0:11)
βββ Permission icon (notifications bell)
βββ Permission request explanation
βββ "Allow" button (primary)
βββ "Not Now" button (text link)
βββ Progress dots (3/4)
Screen 4: Profile Setup (0:12 - 0:15)
βββ Avatar upload circle (dashed border animation)
βββ Name input field (focus state shown)
βββ Email input field (validation icon)
βββ "Complete Setup" button
βββ Progress dots (4/4, all filled)
Animation Details:
βββ Page Transitions:
β βββ Slide from right (incoming)
β βββ Slide to left (outgoing)
β βββ Duration: 300ms
β βββ Easing: cubic-bezier(0.4, 0.0, 0.2, 1)
βββ Element Animations:
β βββ Fade + scale for illustrations (500ms)
β βββ Staggered list items (100ms delay each)
β βββ Button pulse (2s infinite, scale 1.0 β 1.05)
β βββ Progress dots fill (200ms)
βββ Micro-interactions:
βββ Button press: scale down to 0.95
βββ Input focus: border color change + scale
βββ Skip link: underline slide animation
βββ Success checkmark: draw animation (300ms)
Design System:
βββ Colors:
β βββ Primary: #6366F1 (indigo-500)
β βββ Success: #10B981 (green-500)
β βββ Background: #F9FAFB (gray-50)
β βββ Text: #111827 (gray-900)
βββ Typography:
β βββ Headlines: "Poppins" 28px 600
β βββ Body: "Inter" 16px 400
β βββ Buttons: "Inter" 16px 600
βββ Spacing:
βββ Screen padding: 24px
βββ Element gap: 24px
βββ Button height: 56px
Technical Implementation:
βββ Framework: React Native + Reanimated
β or Framer Motion for web
βββ Animations:
β βββ Lottie for illustrations
β βββ Spring physics for transitions
β βββ Gesture handling for swipe
β βββ Progress tracking state
βββ Components:
β βββ OnboardingSlide (reusable)
β βββ AnimatedButton
β βββ ProgressDots
β βββ TypewriterText
β βββ IllustrationContainer
βββ State Management:
βββ Current step index
βββ User input values
βββ Completion status
βββ Skip/back navigation
Performance Notes:
βββ All animations run at 60 FPS
βββ No jank detected in transitions
βββ Illustrations preloaded before display
βββ Smooth gesture handling
βββ No layout shifts
Accessibility Considerations:
β Respects reduced motion preferences
β Clear focus indicators on inputs
β Skip button always visible
β Adequate touch target sizes (56px min)
β Color contrast meets WCAG AA
Similar Patterns:
βββ Duolingo onboarding
βββ Headspace intro flow
βββ Notion first-time setup
βββ Linear app walkthrough
Implementation Estimate: 8-12 hours
Complexity: Medium (animations add complexity)
```
## When to Use /design:describe
### ✅ Use /design:describe for:
- **Design Analysis**: Understanding design decisions
- **Competitive Research**: Analyzing competitor interfaces
- **Design Handoff**: Extracting specs from mockups
- **Reverse Engineering**: Understanding existing designs
- **Learning**: Studying award-winning designs
- **Pre-Implementation**: Planning before coding
- **Design Reviews**: Documenting design patterns
- **Style Guide Creation**: Building design system from examples
### ❌ Don't use /design:describe for:
- **Creating New Designs**: Use `/design:good` or `/design:fast` instead
- **Direct Implementation**: Use `/design:screenshot` to implement
- **Bug Fixes**: Use `/fix:ui` for fixing issues
- **Code Review**: This is for visual analysis only
## Best Practices
### Provide High-Quality References
✅ **Good:**
```bash
# Clear, high-resolution screenshots
/design:describe [figma-export-2x.png]
# Full context videos
/design:describe [complete-user-flow.mp4]
# Multiple views
/design:describe [desktop-view.png] and /design:describe [mobile-view.png]
```
❌ **Bad:**
```bash
# Low resolution, blurry
/design:describe [blurry-screenshot.jpg]
# Cropped or partial view
/design:describe [just-a-button.png]
```
### Ask Specific Questions
```bash
# Focus the analysis
/design:describe [dashboard.png] - focus on the data visualization components
# Compare designs
/design:describe [design-v1.png] vs [design-v2.png] - identify differences
# Extract specific details
/design:describe [hero.png] - extract exact color values and spacing
```
### Use for Learning
```bash
# Study award-winning work
/design:describe [awwwards-winner.png] - analyze what makes this design exceptional
# Understand trends
/design:describe [modern-saas-page.png] - identify current design trends
# Learn patterns
/design:describe [stripe-dashboard.png] - document reusable patterns
```
## Output Formats
### Component Breakdown
```markdown
Identified Components:
βββ Header
β βββ Type: Sticky navigation
β βββ Height: 80px
β βββ Background: rgba(255,255,255,0.8) + backdrop-blur
β βββ Children: Logo, Nav, CTA Button
βββ Hero Section
β βββ Layout: Two-column grid (1fr 1fr)
β βββ Min-height: 600px
β βββ Background: Linear gradient
βββ Feature Cards
βββ Grid: 3 columns (repeat(3, 1fr))
βββ Gap: 32px
βββ Card: Padding 40px, Border-radius 16px
```
### Color Palette Export
```css
/* Extracted Color System */
:root {
--primary-50: #EEF2FF;
--primary-100: #E0E7FF;
--primary-500: #6366F1;
--primary-600: #4F46E5;
--primary-900: #312E81;
--neutral-50: #F9FAFB;
--neutral-900: #111827;
--success: #10B981;
--error: #EF4444;
--warning: #F59E0B;
}
```
### Typography System
```css
/* Extracted Typography */
--font-primary: 'Inter', sans-serif;
--font-display: 'Poppins', sans-serif;
--text-xs: 0.75rem; /* 12px */
--text-sm: 0.875rem; /* 14px */
--text-base: 1rem; /* 16px */
--text-lg: 1.125rem; /* 18px */
--text-xl: 1.25rem; /* 20px */
--text-2xl: 1.5rem; /* 24px */
--text-4xl: 2.25rem; /* 36px */
```
### Spacing Scale
```css
/* Extracted Spacing System */
--space-1: 0.25rem; /* 4px */
--space-2: 0.5rem; /* 8px */
--space-3: 0.75rem; /* 12px */
--space-4: 1rem; /* 16px */
--space-6: 1.5rem; /* 24px */
--space-8: 2rem; /* 32px */
--space-12: 3rem; /* 48px */
--space-16: 4rem; /* 64px */
```
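Extracted tokens can be dropped straight into a Tailwind theme. A sketch of how the palette and fonts above might be wired up (file name and structure are illustrative, not generated output):
```typescript
// tailwind.config.ts: illustrative mapping of the extracted tokens above into
// a Tailwind theme (values taken from the color/typography exports on this page).
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        primary: { 50: "#EEF2FF", 100: "#E0E7FF", 500: "#6366F1", 600: "#4F46E5", 900: "#312E81" },
        success: "#10B981",
        error: "#EF4444",
        warning: "#F59E0B",
      },
      fontFamily: {
        sans: ["Inter", "sans-serif"],
        display: ["Poppins", "sans-serif"],
      },
    },
  },
};

export default config;
```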
## Advanced Analysis
### Design System Extraction
```bash
/design:describe [multiple-screens/*.png] - extract complete design system with colors, typography, components, and spacing
```
Generates comprehensive design system documentation with:
- Complete color palette
- Typography scale
- Component library
- Spacing system
- Grid system
- Breakpoints
### Competitive Analysis
```bash
/design:describe [competitor-landing/*.png] - compare design approaches across 5 competitors
```
Generates comparison report:
- Common patterns identified
- Unique differentiators
- Best practices observed
- Improvement opportunities
### Animation Specification
```bash
/design:describe [interaction-video.mp4] - document all animations with timing and easing functions
```
Generates animation specification:
- Transition durations
- Easing curves
- Trigger events
- Sequence timing
- Implementation code
## Integration Workflow
### Design β Implementation
```bash
# 1. Analyze design
/design:describe [design-mockup.png]
# 2. Review specifications
# (Read generated analysis)
# 3. Implement based on specs
/design:screenshot [design-mockup.png]
# 4. Compare result
/fix:ui [screenshot-comparison] - adjust any differences
```
### Learning Workflow
```bash
# 1. Find inspiring design
# (Browse Dribbble, Awwwards, etc.)
# 2. Analyze thoroughly
/design:describe [inspiring-design.png]
# 3. Document patterns
# (Save analysis to design system docs)
# 4. Apply learnings
/design:good [implement similar pattern for our product]
```
## Next Steps
- [/design:screenshot](/docs/commands/design/screenshot) - Implement the analyzed design
- [/design:good](/docs/commands/design/good) - Create similar high-quality design
- [/design:fast](/docs/commands/design/fast) - Quick prototype based on analysis
- [/docs:update](/docs/commands/docs/update) - Document extracted design system
---
**Key Takeaway**: `/design:describe` transforms visual references into actionable specifications, extracting every design detail from colors to animations. Use it to understand, learn from, and document great design before implementation.
---
# /design:fast
Section: docs
Category: commands/design
URL: https://docs.claudekit.cc/docs/design/fast
# /design:fast
Rapidly prototype UI designs with fast turnaround. Perfect for quick iterations, MVP launches, internal tools, or when you need functional designs without the polish time. Speed over perfection - ship fast, iterate faster.
## Syntax
```bash
/design:fast [brief design description]
```
## How It Works
The `/design:fast` command prioritizes speed and functionality:
### 1. Quick Requirements Capture
- Parses essential requirements from brief description
- Skips lengthy research phase
- Uses proven patterns immediately
- Focuses on core functionality
### 2. Template-Based Design
- Leverages existing component libraries
- Uses standard design patterns
- Applies pre-built layouts
- Minimal custom styling
### 3. Rapid Implementation
- Uses utility-first CSS (Tailwind)
- Copies proven component structures
- Focuses on responsiveness basics
- Skips custom animations
### 4. Functional First
- Prioritizes working interactions
- Basic accessibility compliance
- Mobile-responsive by default
- Clean, readable code
### 5. Fast Delivery
- Generates complete components in minutes
- Includes only essential styles
- Documentation comments
- Ready to iterate
## Examples
### Dashboard MVP
```bash
/design:fast [admin dashboard with user table, stats cards, and sidebar navigation]
```
**What happens:**
```
β‘ Fast Design Mode: Speed priority
1. Component Selection
β Using: Tailwind CSS
β Pattern: Standard admin layout
β Components: Sidebar, Header, Table, Cards
β Time estimate: 15 minutes
2. Layout Implementation
β Two-column layout (sidebar + main content)
β Responsive breakpoints (mobile stacks)
β Basic typography scale
β Standard spacing (4, 8, 16, 24px)
3. Components Generated
β Sidebar.tsx (navigation links, logo)
β StatsCards.tsx (4 metric cards)
β UserTable.tsx (sortable table)
β DashboardLayout.tsx (wrapper)
4. Styling Approach
- Tailwind utility classes
- No custom CSS
- Basic hover states
- Standard colors (gray, blue, green, red)
5. Result
β Fully functional dashboard
β Mobile responsive
β Clean, professional look
β Ready for data integration
Generated Files:
β src/components/DashboardLayout.tsx
β src/components/Sidebar.tsx
β src/components/StatsCards.tsx
β src/components/UserTable.tsx
Time: 12 minutes
Lines of Code: 450
```
### Landing Page Quick
```bash
/design:fast [SaaS landing page with hero, features, pricing, and contact form]
```
**What happens:**
```
β‘ Fast Design Mode: MVP landing page
1. Section Planning
β Hero: headline + subheading + CTA + image
β Features: 3-column grid, icons + text
β Pricing: 3 tiers, simple cards
β Contact: basic form (name, email, message)
2. Component Structure
β LandingPage.tsx (main container)
β Hero.tsx (centered content)
β Features.tsx (grid layout)
β Pricing.tsx (card grid)
β ContactForm.tsx (form fields)
3. Design Decisions
- Single color scheme (blue-600)
- Inter font (Google Fonts)
- Standard button styles
- Basic card shadows
- No animations (static)
4. Responsive Behavior
β Desktop: 3-column grids
β Tablet: 2-column grids
β Mobile: stacked single column
β Readable font sizes all screens
5. Form Integration
β Basic validation (HTML5)
β Submit handler placeholder
β Error message states
β Success feedback
Generated Files:
β src/pages/LandingPage.tsx
β src/components/Hero.tsx
β src/components/Features.tsx
β src/components/Pricing.tsx
β src/components/ContactForm.tsx
Time: 18 minutes
Ready for: content + backend integration
```
### Login Flow Prototype
```bash
/design:fast [login page with email/password, social auth buttons, forgot password link]
```
**What happens:**
```
β‘ Fast Design Mode: Authentication pages
1. Pages Created
β Login.tsx
β ForgotPassword.tsx
β ResetPassword.tsx
2. Login Page Components
β Centered card layout (max-width: 400px)
β Email input (with validation)
β Password input (with show/hide toggle)
β Remember me checkbox
β Submit button
β Social auth buttons (Google, GitHub)
β Links (forgot password, sign up)
3. Form Handling
β React Hook Form integration
β Zod validation schema
β Error message display
β Loading states
β Success redirects
4. Security Basics
β Password field type="password"
β Autocomplete attributes
β CSRF token placeholder
β Rate limiting note
Generated Files:
β src/pages/Login.tsx
β src/pages/ForgotPassword.tsx
β src/pages/ResetPassword.tsx
β src/schemas/authSchema.ts
Time: 10 minutes
Status: Ready for API integration
```
### Mobile App Screen
```bash
/design:fast [mobile profile screen with avatar, stats, action buttons, and content grid]
```
**What happens:**
```
β‘ Fast Design Mode: Mobile-first design
1. Screen Layout
β Header (back button + title + edit)
β Profile section (avatar + name + bio)
β Stats row (3 columns)
β Action buttons (2 buttons)
β Tab navigation
β Content grid (3 columns)
2. Component Structure
β ProfileScreen.tsx (main)
β ProfileHeader.tsx
β StatsRow.tsx
β ActionButtons.tsx
β ContentGrid.tsx
3. Mobile Optimizations
β Touch targets: minimum 44px
β Safe area insets
β No hover states (touch only)
β Swipe gestures placeholder
β Pull to refresh structure
4. Styling
- Tailwind CSS (mobile-first utilities)
- System font stack
- Standard iOS/Android patterns
- Minimal custom styles
Generated Files:
β src/screens/ProfileScreen.tsx
β src/components/ProfileHeader.tsx
β src/components/StatsRow.tsx
β src/components/ContentGrid.tsx
Time: 14 minutes
Platform: React Native / Expo compatible
```
## When to Use /design:fast
### ✅ Use /design:fast for:
- **MVPs**: Launch quickly, iterate later
- **Prototypes**: Test ideas with users
- **Internal Tools**: Admin dashboards, internal apps
- **Quick Iterations**: Rapid feature testing
- **Hackathons**: Ship working products fast
- **Proof of Concepts**: Validate ideas quickly
- **Learning Projects**: Focus on functionality
- **Time Constraints**: Tight deadlines
### ❌ Don't use /design:fast for:
- **Customer-Facing Products**: Use `/design:good` instead
- **Brand-Critical Pages**: Need polish and custom design
- **Marketing Materials**: Require high-quality visuals
- **Complex Animations**: Fast mode keeps it simple
- **Unique Design Requirements**: Need custom approach
## Speed Optimizations
### Template Library
Fast mode uses proven templates:
**Admin Layouts:**
- Sidebar navigation
- Top navigation
- Split navigation
- Mobile drawer
**Data Display:**
- Tables with sorting
- Card grids
- List views
- Detail pages
**Forms:**
- Login/signup
- Contact forms
- Multi-step wizards
- Settings pages
**Landing Pages:**
- Hero + features + pricing
- App showcase
- Product page
- Coming soon
### Component Libraries Used
```typescript
// Fast mode defaults
import { Button, Input, Card } from '@/components/ui';
// Pre-styled, ready to use
<Button>Click me</Button>
<Card>Content here</Card>
<Input placeholder="Type here" />
```
### CSS Strategy
```jsx
// Tailwind utility classes only
<h2 className="text-2xl font-semibold">Title</h2>
<button className="rounded bg-blue-600 px-4 py-2 text-white hover:bg-blue-700">Action</button>
// No custom CSS files needed
// No complex animations
// Fast to write, fast to modify
```
## Quality Standards
Even in fast mode, designs maintain:
### ✅ Included:
- **Responsive Design**: Mobile, tablet, desktop
- **Accessibility Basics**: Semantic HTML, ARIA when needed
- **Clean Code**: Readable, well-structured
- **Type Safety**: TypeScript with proper types
- **Basic Interactions**: Hover, focus, active states
- **Error States**: Form validation, loading states
- **Professional Look**: Clean, modern, functional
### ❌ Not Included:
- **Custom Animations**: Keep it static
- **Brand Customization**: Standard colors/fonts
- **Pixel-Perfect Design**: Good enough is good enough
- **Advanced Interactions**: Complex gestures, micro-interactions
- **Custom Illustrations**: Use placeholders or icons
- **Performance Optimization**: Works, but not optimized
- **Cross-Browser Testing**: Modern browsers only
## Iteration Workflow
Fast design is meant for rapid iteration:
```bash
# 1. Create initial version
/design:fast [user dashboard]
# 2. Test with users
# ... gather feedback ...
# 3. Quick iteration
/design:fast [update dashboard - add search bar and filters]
# 4. Keep iterating
/design:fast [dashboard refinement - improve mobile layout]
# 5. Once stable, upgrade
/design:good [polish dashboard design with custom brand styling]
```
## Best Practices
### Be Specific But Brief
✅ **Good:**
```bash
/design:fast [e-commerce checkout with 3 steps: cart, shipping, payment]
/design:fast [blog post page with author card, related posts, comments]
/design:fast [settings page with tabs: profile, security, notifications]
```
❌ **Too Vague:**
```bash
/design:fast [make a website]
/design:fast [design something cool]
```
❌ **Too Detailed (use /design:good):**
```bash
/design:fast [luxury brand homepage with custom animations, parallax scrolling, video backgrounds, micro-interactions...]
```
### Accept Good Enough
Fast mode is about **progress over perfection**:
✅ **Good mindset:**
- "Good enough to test with users"
- "Functional and clean"
- "Ship now, polish later"
❌ **Wrong mindset:**
- "This must be pixel-perfect"
- "I need custom animations"
- "Every detail must be unique"
### Use for Learning
Fast mode is great for learning:
```bash
# Learn React patterns
/design:fast [React todo app with CRUD operations]
# Learn form handling
/design:fast [multi-step registration form]
# Learn data display
/design:fast [data table with sorting, filtering, pagination]
```
## Generated Code Quality
### Example: Button Component
```typescript
// Generated by /design:fast
interface ButtonProps {
  children: React.ReactNode;
  onClick?: () => void;
  variant?: 'primary' | 'secondary' | 'outline';
  disabled?: boolean;
}

export function Button({
  children,
  onClick,
  variant = 'primary',
  disabled = false
}: ButtonProps) {
  const baseClasses = "px-4 py-2 rounded font-medium transition-colors";
  const variantClasses = {
    primary: "bg-blue-600 text-white hover:bg-blue-700",
    secondary: "bg-gray-200 text-gray-900 hover:bg-gray-300",
    outline: "border-2 border-blue-600 text-blue-600 hover:bg-blue-50"
  };

  return (
    <button
      onClick={onClick}
      disabled={disabled}
      className={`${baseClasses} ${variantClasses[variant]}`}
    >
      {children}
    </button>
  );
}
```
Clean, typed, reusable, simple.
### Example: Form Component
```typescript
// Generated by /design:fast
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import * as z from 'zod';
const loginSchema = z.object({
  email: z.string().email('Invalid email'),
  password: z.string().min(8, 'Password must be at least 8 characters'),
});

type LoginFormData = z.infer<typeof loginSchema>;

export function LoginForm() {
  const { register, handleSubmit, formState: { errors } } = useForm<LoginFormData>({
    resolver: zodResolver(loginSchema),
  });

  const onSubmit = (data: LoginFormData) => {
    console.log('Login data:', data);
    // TODO: Implement login API call
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)} className="space-y-4">
      <input type="email" placeholder="Email" {...register('email')} className="w-full rounded border px-3 py-2" />
      {errors.email && <p className="text-sm text-red-600">{errors.email.message}</p>}

      <input type="password" placeholder="Password" {...register('password')} className="w-full rounded border px-3 py-2" />
      {errors.password && <p className="text-sm text-red-600">{errors.password.message}</p>}

      <button type="submit" className="w-full rounded bg-blue-600 py-2 text-white hover:bg-blue-700">
        Sign in
      </button>
    </form>
  );
}
```
Functional, validated, ready to integrate.
## Time Estimates
Typical delivery times:
| Design Type | Fast Mode | Good Mode | Time Saved |
|-------------|-----------|-----------|------------|
| Login Page | 8-12 min | 2-4 hours | 95% faster |
| Dashboard | 15-20 min | 8-12 hours | 96% faster |
| Landing Page | 15-25 min | 6-10 hours | 95% faster |
| Form Wizard | 20-30 min | 4-6 hours | 92% faster |
| Profile Screen | 10-15 min | 3-5 hours | 95% faster |
| Data Table | 15-20 min | 4-6 hours | 94% faster |
## Upgrade Path
Start fast, upgrade when ready:
```bash
# Phase 1: MVP (Week 1)
/design:fast [all core pages]
# Phase 2: User Testing (Week 2-3)
# Test with real users, gather feedback
# Phase 3: Polish (Week 4)
/design:good [upgrade landing page with brand styling]
/design:good [polish dashboard with custom charts]
# Phase 4: Optimize
/fix:ui [optimize performance]
```
## Troubleshooting
### Design Too Basic
**Problem:** Generated design too simple
**Solution:**
```bash
# Add specific requirements
/design:fast [dashboard with advanced filtering and export functionality]
# Or upgrade to good mode
/design:good [polished version of the dashboard]
```
### Missing Features
**Problem:** Some features not included
**Solution:**
```bash
# Iterate quickly
/design:fast [add missing feature: notifications dropdown]
```
### Need Customization
**Problem:** Need brand colors/fonts
**Solution:**
```bash
# Manually update CSS variables
:root {
--primary: #your-brand-color;
--font-sans: 'Your Font', sans-serif;
}
# Or use good mode with brand requirements
/design:good [dashboard with brand guidelines: colors #..., font ...]
```
## Next Steps
- [/design:good](/docs/commands/design/good) - Upgrade to polished design
- [/design:screenshot](/docs/commands/design/screenshot) - Implement from mockup
- [/cook](/docs/commands/core/cook) - Add backend functionality
- [/fix:ui](/docs/commands/fix/ui) - Fix any UI issues
---
**Key Takeaway**: `/design:fast` delivers functional, professional designs in minutes, not hours. Perfect for MVPs, prototypes, and rapid iteration. Ship fast, learn faster, polish later. Speed is a feature.
---
# /design:good
Section: docs
Category: commands/design
URL: https://docs.claudekit.cc/docs/design/good
# /design:good
Create exceptional, production-ready designs that captivate users and win awards. Research-driven design process studying Dribbble, Behance, and Awwwards winners to deliver immersive, high-fidelity experiences with meticulous attention to detail.
## Syntax
```bash
/design:good [detailed design brief]
```
## How It Works
The `/design:good` command follows a comprehensive design process:
### 1. Research & Inspiration
- Studies award-winning designs on Dribbble, Behance, Awwwards
- Analyzes current design trends and best practices
- Reviews competitor designs in your industry
- Identifies successful patterns and unique opportunities
- Creates mood boards and visual direction
### 2. Strategic Planning
- Invokes **ui-ux-designer** agent for deep analysis
- Defines design system (colors, typography, spacing, components)
- Plans information architecture and user flows
- Identifies key micro-interactions and animations
- Establishes accessibility and performance goals
### 3. Design System Creation
- Develops comprehensive color palette with semantic tokens
- Designs typography system with type scale and hierarchy
- Creates spacing scale for consistent rhythm
- Defines component library with variants and states
- Establishes animation principles and timing
### 4. High-Fidelity Design
- Creates pixel-perfect layouts with attention to detail
- Implements custom illustrations or graphics
- Designs sophisticated micro-interactions
- Adds meaningful animations that enhance UX
- Ensures brand consistency throughout
### 5. Responsive & Accessible
- Designs for all breakpoints (mobile, tablet, desktop, ultra-wide)
- Implements WCAG 2.1 AA accessibility standards
- Tests color contrast ratios (4.5:1 minimum)
- Ensures keyboard navigation and screen reader support
- Adds focus indicators and skip links
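For instance, a minimal sketch of a skip link with a visible focus indicator, assuming a React + Tailwind stack like the other examples in these docs (component, id, and class names are illustrative):

```typescript
// Illustrative skip link: visually hidden until focused via keyboard
export function SkipLink() {
  return (
    <a
      href="#main-content"
      className="sr-only focus:not-sr-only focus:absolute focus:left-4 focus:top-4 focus:z-50
                 rounded bg-blue-600 px-4 py-2 text-white focus-visible:outline-2"
    >
      Skip to main content
    </a>
  );
}

// Usage: render before the header, and give the main element id="main-content"
```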
### 6. Motion Design
- Creates purposeful animations (300-500ms for UI)
- Implements smooth transitions and easing curves
- Adds micro-interactions for delight
- Designs loading states and skeleton screens
- Respects prefers-reduced-motion
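One way to honor that preference is a small `matchMedia` check before enabling animations. A minimal sketch (the helper name is illustrative, and the exact integration with your animation library will differ):

```typescript
// Illustrative helper: detect the user's reduced-motion preference
export function prefersReducedMotion(): boolean {
  if (typeof window === 'undefined') return false; // SSR guard
  return window.matchMedia('(prefers-reduced-motion: reduce)').matches;
}

// Example: skip or shorten animations when reduced motion is requested
const durationMs = prefersReducedMotion() ? 0 : 300;
```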
### 7. Production Ready
- Generates clean, maintainable code
- Includes comprehensive documentation
- Provides design system documentation
- Adds inline comments for complex logic
- Ensures performance optimization
## Examples
### Premium SaaS Landing Page
```bash
/design:good [SaaS landing page for AI writing assistant - modern, professional, showcasing product capabilities with interactive demo section, social proof, and elegant micro-interactions]
```
**Research Phase:**
```
Inspiration Research:
βββ Analyzed: Linear, Notion, Stripe, Vercel landing pages
βββ Trend: Gradient mesh backgrounds, glass morphism
βββ Pattern: Hero with product preview, feature tabs, pricing comparison
βββ Opportunity: Interactive writing demo with real-time AI suggestions
Mood Board Created:
βββ Colors: Deep purples, bright blues, warm accents
βββ Style: Modern, clean, sophisticated
βββ Typography: Display fonts for headlines, readable sans-serif for body
βββ Imagery: Abstract gradients, subtle 3D elements
```
**Design System:**
```
Color Palette:
βββ Primary:
β βββ 50: #F5F3FF β 900: #4C1D95 (9 shades)
β βββ Main: #7C3AED (violet-600)
βββ Secondary:
β βββ 50: #EFF6FF β 900: #1E3A8A (9 shades)
β βββ Main: #3B82F6 (blue-500)
βββ Accent:
β βββ Warm: #F59E0B (amber-500)
β βββ Cool: #06B6D4 (cyan-500)
βββ Neutrals:
βββ Gray scale: 50 β 900
βββ Semantic: success, error, warning, info
Typography System:
βββ Display: "Cal Sans" (custom, headlines only)
βββ Body: "Inter Variable" (400, 500, 600, 700)
βββ Code: "JetBrains Mono" (code blocks)
βββ Scale:
βββ xs: 12px | sm: 14px | base: 16px
βββ lg: 18px | xl: 20px | 2xl: 24px
βββ 3xl: 30px | 4xl: 36px | 5xl: 48px
βββ 6xl: 60px | 7xl: 72px (display only)
Spacing Scale:
βββ Base unit: 4px
βββ Scale: 4, 8, 12, 16, 24, 32, 48, 64, 96, 128px
βββ Container: 1280px max-width, 32px padding
Animation Principles:
βββ Duration: 200ms (micro) | 300ms (standard) | 500ms (complex)
βββ Easing: cubic-bezier(0.4, 0.0, 0.2, 1) for UI
βββ Purpose: Provide feedback, guide attention, create delight
βββ Respect: prefers-reduced-motion
```
**Implementation:**
```
Generated Components:
β Hero Section
βββ Gradient mesh background (animated)
βββ Headline with gradient text effect
βββ Animated CTA buttons (hover: scale + glow)
βββ Product preview (browser mockup + live demo)
βββ Social proof ticker (auto-scrolling logos)
β Interactive Demo Section
βββ Split view: input textarea | AI output
βββ Real-time typing animation
βββ Suggestion cards with hover effects
βββ "Try it yourself" interactive mode
βββ Smooth transitions between demo states
β Feature Tabs
βββ Tab navigation with animated indicator
βββ Content fade-in with stagger effect
βββ Feature illustrations (custom SVG)
βββ Code examples with syntax highlighting
βββ Scroll-triggered animations
β Social Proof
βββ Testimonial carousel (smooth auto-play)
βββ Customer logos (grayscale β color on hover)
βββ Statistics counter (animated on scroll into view)
βββ Video testimonials (modal overlay)
β Pricing Section
βββ Comparison table with feature highlights
βββ Toggle (monthly/annual) with slider animation
βββ Recommended plan highlight
βββ Tooltip icons for feature explanations
βββ CTA buttons with loading states
β Footer
βββ Multi-column layout with links
βββ Newsletter signup with validation
βββ Social icons with hover animations
βββ Legal links and copyright
Micro-Interactions:
β Button hovers: scale(1.02) + shadow increase
β Card hovers: lift effect + border glow
β Input focus: border color transition + scale
β Scroll progress indicator (top bar)
β Smooth scroll with offset for fixed header
β Toast notifications (slide + fade)
β Loading spinners (pulsing brand logo)
Animations:
β Hero gradient: slow animation (20s loop)
β Text reveal: fade up with stagger (100ms)
β Demo typing: realistic typing speed (60ms/char)
β Feature tabs: slide + fade (300ms)
β Scroll reveals: intersection observer (fade up)
β Pricing toggle: smooth width transition
β Logo ticker: seamless infinite scroll
Performance:
β Lazy loading for images and videos
β Preload critical assets
β Optimize gradient rendering (GPU)
β Debounce scroll animations
β Use will-change for animations
β Code splitting by section
β Total bundle: <150KB gzipped
Accessibility:
β Semantic HTML throughout
β ARIA labels for interactive elements
β Keyboard navigation (Tab, Enter, Esc)
β Focus visible indicators (2px blue outline)
β Color contrast: all text 4.5:1 minimum
β Skip to main content link
β Screen reader announcements
β Captions for video content
β Reduced motion support
Generated Files:
β src/pages/Landing.tsx (main page)
β src/components/Hero.tsx
β src/components/InteractiveDemo.tsx
β src/components/FeatureTabs.tsx
β src/components/SocialProof.tsx
β src/components/PricingTable.tsx
β src/components/Footer.tsx
β src/animations/gradientMesh.ts
β src/animations/scrollReveal.ts
β src/styles/design-system.css
β src/utils/intersectionObserver.ts
β docs/design-system.md
Time: 6-8 hours
Quality: Production-ready, award-worthy
```
### E-commerce Product Page
```bash
/design:good [luxury watch product page with 3D viewer, detailed specifications, size comparison tool, and immersive shopping experience]
```
**Implementation Highlights:**
```
Premium Features:
β 3D Product Viewer (Three.js)
βββ 360Β° rotation with momentum
βββ Zoom to see details (2x, 4x)
βββ Material switcher (metal, leather)
βββ Lighting presets (studio, outdoor, dramatic)
βββ AR preview button
β Image Gallery
βββ Main large image (zoom on hover)
βββ Thumbnail carousel (smooth scroll)
βββ Full-screen lightbox (swipe gestures)
βββ Video integration
βββ Lazy loading with blur placeholders
β Product Details
βββ Accordion specifications (smooth expand)
βββ Size comparison tool (interactive overlay)
βββ Material information with tooltips
βββ Care instructions
βββ Warranty details
β Purchase Section
βββ Size selector with stock availability
βββ Quantity selector (+ / - with validation)
βββ Add to cart (with animation feedback)
βββ Add to wishlist (heart animation)
βββ Price display (sale price highlight)
βββ Shipping calculator
β Social Features
βββ Share buttons (copy link, social)
βββ Review section (5-star with filters)
βββ Q&A section
βββ Recently viewed products
Design Excellence:
β Luxury aesthetic (generous whitespace, serif accents)
β Premium photography (high resolution, consistent lighting)
β Smooth animations (buttery 60 FPS)
β Attention to detail (every state designed)
β Mobile-optimized (thumb-friendly zones)
Time: 10-14 hours
```
### Creative Portfolio
```bash
/design:good [creative portfolio for photographer with full-screen image galleries, custom cursor, smooth page transitions, and unique navigation]
```
**Creative Implementation:**
```
Unique Features:
β Custom Cursor
βββ Default: small dot + large ring
βββ Hover links: expands with "View" text
βββ Hover images: becomes magnifier icon
βββ Smooth follow (slight delay for effect)
βββ Hide on touch devices
β Page Transitions
βββ Smooth fade + scale effect (500ms)
βββ Loading animation (brand logo)
βββ Route-based transitions
βββ Preload next page content
β Hero Section
βββ Full-screen background video
βββ Parallax text overlay
βββ Scroll indicator (animated)
βββ Signature/logo placement
β Project Gallery
βββ Masonry grid layout (responsive)
βββ Hover: project title + category reveal
βββ Click: expand to full-screen detail
βββ Smooth transitions between views
βββ Keyboard navigation (arrow keys)
β Project Detail Page
βββ Full-screen image viewer
βββ Swipe/arrow navigation
βββ Project information sidebar (slide in/out)
βββ Next project preview (bottom)
βββ Back button with transition
β About Page
βββ Split layout: large portrait | bio
βββ Animated stats counter
βββ Services list with icons
βββ Client logos
βββ Contact CTA
β Contact Page
βββ Full-screen split design
βββ Animated contact form
βββ Alternative contact methods
βββ Social links
βββ Availability calendar widget
Artistic Details:
β Typography: Mix of serif + sans-serif
β Whitespace: Generous, breathing room
β Animations: Subtle, refined, never distracting
β Photography: Hero treatment, full bleed
β Color: Monochrome with accent color
Performance:
β Image optimization (WebP, lazy load)
β Progressive image loading (blur up)
β Route preloading
β Efficient animations (GPU accelerated)
β Fast page transitions
Time: 12-16 hours
```
## When to Use /design:good
### ✅ Use /design:good for:
- **Customer-Facing Products**: Landing pages, marketing sites
- **Brand-Critical Pages**: Homepage, product pages, about pages
- **Portfolio/Showcase**: Personal brand, creative work
- **Premium Products**: Luxury brands, high-end services
- **Competitive Markets**: Stand out with exceptional design
- **Award Submissions**: Awwwards, CSS Design Awards, FWA
- **Long-Term Projects**: Worth the investment in quality
- **Learning Excellence**: Study and apply best practices
### ❌ Don't use /design:good for:
- **Quick Prototypes**: Use `/design:fast` instead
- **Internal Tools**: Quality vs. speed tradeoff
- **Time Constraints**: Requires 6-16 hours minimum
- **Low-Traffic Pages**: ROI consideration
- **Temporary Pages**: Not worth the effort
## Design Excellence Standards
### Visual Hierarchy
```
Clear hierarchy in every design:
βββ Primary action: most prominent
βββ Secondary actions: visible but secondary
βββ Tertiary: minimal, supportive
βββ Content: F-pattern or Z-pattern reading flow
```
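As a minimal sketch of those emphasis levels (assuming Tailwind; names and classes are illustrative, not prescribed output):

```typescript
// Illustrative emphasis levels: primary, secondary, and tertiary actions
export function ActionRow() {
  return (
    <div className="flex items-center gap-3">
      <button className="rounded bg-blue-600 px-4 py-2 font-medium text-white">Save</button>
      <button className="rounded border border-gray-300 px-4 py-2 text-gray-700">Cancel</button>
      <button className="text-sm text-gray-500 underline">Learn more</button>
    </div>
  );
}
```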
### Micro-Interactions
Every interaction designed:
- Button hover: scale + shadow
- Input focus: border + scale
- Form submit: loading state + success animation
- Error: shake animation + icon
- Success: checkmark animation + confetti (subtle)
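A minimal sketch of the button hover pattern above (Tailwind classes are illustrative):

```typescript
import type { ReactNode } from 'react';

// Illustrative hover micro-interaction: subtle scale + shadow, press feedback on active
export function HoverButton({ children }: { children: ReactNode }) {
  return (
    <button
      className="rounded-lg bg-blue-600 px-4 py-2 font-medium text-white shadow-sm
                 transition-transform duration-200 ease-out
                 hover:scale-[1.02] hover:shadow-lg active:scale-95"
    >
      {children}
    </button>
  );
}
```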
### Loading States
Never leave users waiting without feedback:
- Skeleton screens (not spinners)
- Progressive image loading
- Optimistic UI updates
- Loading percentage indicators
- Smooth content reveals
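For example, a hedged sketch of a skeleton card (structure and classes are illustrative):

```typescript
// Illustrative skeleton placeholder rendered while data loads (instead of a spinner)
export function CardSkeleton() {
  return (
    <div className="animate-pulse space-y-3 rounded-lg border p-4">
      <div className="h-4 w-1/3 rounded bg-gray-200" /> {/* title line */}
      <div className="h-4 w-full rounded bg-gray-200" /> {/* body line */}
      <div className="h-4 w-2/3 rounded bg-gray-200" /> {/* body line */}
    </div>
  );
}
```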
### Empty States
Beautiful empty states:
- Illustration or icon
- Helpful message
- Primary action (CTA)
- Examples or suggestions
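A minimal sketch of that pattern with an icon, message, and primary action (all names and copy are placeholders):

```typescript
// Illustrative empty state: icon, helpful message, and a primary action
export function EmptyProjects({ onCreate }: { onCreate: () => void }) {
  return (
    <div className="flex flex-col items-center gap-3 py-16 text-center">
      <span aria-hidden="true" className="text-4xl">📁</span>
      <p className="text-gray-600">No projects yet. Create your first project to get started.</p>
      <button onClick={onCreate} className="rounded bg-blue-600 px-4 py-2 text-white">
        New project
      </button>
    </div>
  );
}
```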
### Error States
Friendly error handling:
- Clear error messages
- Suggestions to fix
- Visual indicators
- Retry actions
- Support contact info
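A hedged sketch of such an error state (names and copy are placeholders):

```typescript
// Illustrative inline error state with a clear message and a retry action
export function LoadError({ onRetry }: { onRetry: () => void }) {
  return (
    <div role="alert" className="rounded-lg border border-red-200 bg-red-50 p-4">
      <p className="font-medium text-red-800">We couldn't load your data.</p>
      <p className="mt-1 text-sm text-red-700">Check your connection and try again.</p>
      <button onClick={onRetry} className="mt-3 rounded bg-red-600 px-3 py-1.5 text-sm text-white">
        Retry
      </button>
    </div>
  );
}
```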
## Research Sources
### Inspiration Platforms
**Dribbble** (dribbble.com)
- Latest design trends
- UI/UX patterns
- Micro-interactions
- Color schemes
**Behance** (behance.net)
- Full project case studies
- Design process insights
- Brand guidelines
- Complete design systems
**Awwwards** (awwwards.com)
- Award-winning websites
- Cutting-edge interactions
- Technical excellence
- Innovation in web design
**Mobbin** (mobbin.com)
- Mobile app patterns
- User flows
- Screen designs
- iOS/Android patterns
**Land-book** (land-book.com)
- Landing page inspiration
- Contemporary web design
- Clean, modern aesthetics
### Design Systems to Study
**Industry Leaders:**
- Material Design (Google)
- Human Interface Guidelines (Apple)
- Fluent Design (Microsoft)
- Polaris (Shopify)
- Carbon (IBM)
- Ant Design (Alibaba)
**Startups with Great Design:**
- Linear (linear.app)
- Stripe (stripe.com)
- Vercel (vercel.com)
- Notion (notion.so)
- Figma (figma.com)
## Quality Checklist
Before delivery, verify:
### Visual Design
- [ ] Pixel-perfect alignment
- [ ] Consistent spacing (design system)
- [ ] Color harmony and contrast
- [ ] Typography hierarchy clear
- [ ] High-quality assets
- [ ] Brand consistency
### Interaction Design
- [ ] All states designed (hover, active, disabled, loading, error)
- [ ] Smooth transitions (60 FPS)
- [ ] Purposeful animations
- [ ] Clear feedback on actions
- [ ] Intuitive navigation
- [ ] Micro-interactions polished
### Responsive Design
- [ ] Mobile (320px - 767px)
- [ ] Tablet (768px - 1023px)
- [ ] Desktop (1024px - 1919px)
- [ ] Ultra-wide (1920px+)
- [ ] Touch-friendly targets
- [ ] Readable text sizes
### Accessibility
- [ ] WCAG 2.1 AA compliant
- [ ] Color contrast 4.5:1 minimum
- [ ] Keyboard navigable
- [ ] Screen reader friendly
- [ ] Focus indicators visible
- [ ] Semantic HTML
- [ ] ARIA labels where needed
- [ ] Skip links present
### Performance
- [ ] Optimized images
- [ ] Lazy loading
- [ ] Code splitting
- [ ] Fast load time (<3s)
- [ ] Smooth animations
- [ ] Efficient rendering
### Code Quality
- [ ] Clean, readable code
- [ ] Type safety (TypeScript)
- [ ] Commented where needed
- [ ] Reusable components
- [ ] Design system documented
- [ ] No console errors
## Best Practices
### Provide Detailed Briefs
✅ **Excellent:**
```bash
/design:good [E-learning platform homepage targeting college students. Modern, friendly aesthetic with purple/blue gradient brand colors. Include: hero with video demo, course categories grid with hover effects, student testimonials carousel, pricing comparison table, and FAQ accordion. Emphasize trust and accessibility. Inspired by: Coursera, Udemy, but more youthful and dynamic.]
```
✅ **Good:**
```bash
/design:good [Portfolio site for UI designer showcasing 6 projects. Minimal, elegant design with black/white/accent color. Smooth page transitions, custom cursor, full-screen project images.]
```
❌ **Too vague:**
```bash
/design:good [make a nice website]
```
### Include References
```bash
/design:good [Dashboard similar to Linear.app but for project management. Include: task kanban board, timeline view, team activity feed. Use their modern aesthetic with shadows and subtle animations.]
```
### Specify Brand Guidelines
```bash
/design:good [Corporate website following brand guidelines:
- Colors: Navy #1E3A8A (primary), Orange #F97316 (accent)
- Font: "Proxima Nova" for all text
- Style: Professional, trustworthy, innovative
- Sections: Hero, services, case studies, team, contact]
```
## After Delivery
### Documentation Provided
```
docs/
βββ design-system.md # Complete design system documentation
βββ components.md # Component library reference
βββ animations.md # Animation specifications
βββ accessibility.md # Accessibility compliance notes
```
### Design Tokens
```css
/* Generated design system CSS */
:root {
/* Colors */
--color-primary-50: #F5F3FF;
--color-primary-500: #7C3AED;
--color-primary-900: #4C1D95;
/* Typography */
--font-display: "Cal Sans", sans-serif;
--font-body: "Inter Variable", sans-serif;
--text-xs: 0.75rem;
--text-base: 1rem;
--text-4xl: 2.25rem;
/* Spacing */
--space-1: 0.25rem;
--space-4: 1rem;
--space-8: 2rem;
/* Shadows */
--shadow-sm: 0 1px 2px 0 rgba(0, 0, 0, 0.05);
--shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1);
/* Animation */
--duration-fast: 200ms;
--duration-base: 300ms;
--ease-in-out: cubic-bezier(0.4, 0.0, 0.2, 1);
}
```
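Components can then consume these tokens directly. A minimal sketch (the component name is illustrative):

```typescript
// Illustrative component styled entirely from the generated design tokens
export function PrimaryCTA({ label }: { label: string }) {
  return (
    <button
      style={{
        background: 'var(--color-primary-500)',
        fontFamily: 'var(--font-body)',
        padding: 'var(--space-4)',
        boxShadow: 'var(--shadow-sm)',
        transition: 'transform var(--duration-fast) var(--ease-in-out)',
      }}
    >
      {label}
    </button>
  );
}
```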
## Troubleshooting
### Takes Too Long
**Problem:** Design process taking longer than expected
**Solution:**
- Consider `/design:fast` for non-critical pages
- Focus quality on high-impact pages only
- Iterate: start fast, upgrade to good later
### Need Revisions
**Problem:** Design doesn't match vision
**Solution:**
```bash
# Be specific about changes
/design:good [revise hero section - make headline more prominent, reduce CTA button size, use purple gradient instead of blue]
# Provide reference
/design:good [update pricing section to match this style: [reference-image.png]]
```
### Performance Issues
**Problem:** Animations causing lag
**Solution:**
```bash
/fix:ui [optimize animations - reduce complexity while maintaining smooth feel]
```
## Next Steps
- [/design:fast](/docs/commands/design/fast) - Quick prototypes first
- [/design:describe](/docs/commands/design/describe) - Analyze award winners
- [/design:3d](/docs/commands/design/3d) - Add 3D elements
- [/fix:ui](/docs/commands/fix/ui) - Polish final details
---
**Key Takeaway**: `/design:good` creates exceptional, award-worthy designs by researching best-in-class examples and applying meticulous attention to every detail. Worth the time investment for customer-facing, brand-critical pages. Excellence matters.
---
# /design:screenshot
Section: docs
Category: commands/design
URL: https://docs.claudekit.cc/docs/design/screenshot
# /design:screenshot
Convert any screenshot into a working implementation. Reverse-engineer existing designs, recreate inspirational interfaces, or implement design mockups with pixel-perfect accuracy. See it, build it.
## Syntax
```bash
/design:screenshot [screenshot file path]
```
## How It Works
The `/design:screenshot` command transforms visual references into code:
### 1. Visual Analysis
- Processes screenshot using AI vision
- Extracts layout structure and component hierarchy
- Identifies colors, fonts, spacing, and sizing
- Detects responsive behavior patterns
- Measures proportions and relationships
### 2. Component Decomposition
- Breaks design into reusable components
- Identifies component patterns (cards, buttons, forms)
- Maps component relationships and nesting
- Determines component props and variants
- Plans component file structure
### 3. Style Extraction
- Extracts exact color values (hex, RGB, HSL)
- Identifies font families, sizes, weights
- Measures spacing system and grid layout
- Detects border radius, shadows, gradients
- Captures animation and transition details
### 4. Code Generation
- Generates clean, typed React/Vue components
- Applies Tailwind CSS or CSS-in-JS
- Implements responsive breakpoints
- Adds interactive states (hover, focus, active)
- Includes accessibility attributes
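As a rough illustration of that output style (not literal command output; component and prop names are hypothetical), generated components tend to be small, typed, styled with utilities, and annotated for accessibility:

```typescript
// Hypothetical example of generated-output style: typed props, Tailwind classes, a11y attributes
interface TagProps {
  label: string;
  onRemove?: () => void;
}

export function Tag({ label, onRemove }: TagProps) {
  return (
    <span className="inline-flex items-center gap-1 rounded-full bg-gray-100 px-2 py-1 text-sm">
      {label}
      {onRemove && (
        <button
          onClick={onRemove}
          aria-label={`Remove ${label}`}
          className="rounded-full px-1 hover:bg-gray-200 focus-visible:outline"
        >
          ×
        </button>
      )}
    </span>
  );
}
```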
### 5. Pixel-Perfect Refinement
- Matches exact spacing and sizing
- Replicates typography hierarchy
- Implements precise color matching
- Ensures alignment and proportions
- Tests across different screen sizes
## Examples
### Landing Page Hero
```bash
/design:screenshot [stripe-hero-section.png]
```
**Analysis Output:**
```
Screenshot Analysis: Stripe Hero Section
ββββββββββββββββββββββββββββββββββββββββββββ
Detected Layout:
βββ Container: max-width 1280px, padding 80px (desktop) / 24px (mobile)
βββ Grid: 2 columns (55% / 45%)
βββ Left Column: Text content + CTAs
βββ Right Column: Product visual with floating elements
Extracted Styles:
βββ Background: #0A2540 (dark blue)
βββ Headline Color: #FFFFFF
βββ Description Color: #8898AA (light gray)
βββ Primary Button: #635BFF (purple), hover: #4F46E5
βββ Secondary Button: transparent, border: #635BFF
Typography:
βββ Headline: "SΓΆhne" font, 72px, 600 weight, -0.03em tracking
βββ Description: "SΓΆhne" font, 21px, 400 weight, 1.5 line-height
βββ Button Text: "SΓΆhne" font, 16px, 500 weight
βββ Line height: Headline 1.1, Body 1.5
Spacing:
βββ Headline β Description: 24px
βββ Description β CTAs: 32px
βββ Between buttons: 16px
βββ Section padding: 120px top/bottom
Component Structure:
Hero
βββ Container
β βββ Grid
β β βββ ContentColumn
β β β βββ Headline
β β β βββ Description
β β β βββ CTAGroup
β β β βββ PrimaryButton
β β β βββ SecondaryButton
β β βββ VisualColumn
β β βββ ProductImage
β β βββ FloatingCards (3)
```
**Generated Implementation:**
```typescript
// src/components/Hero.tsx
import { Button } from './Button';
export function Hero() {
return (
{/* Content Column */}
Financial infrastructure for the internet
Millions of companies of all sizes use Stripe to accept payments,
send payouts, and manage their businesses online.
Start now
Contact sales
{/* Visual Column */}
);
}
// Generated animations
@keyframes float {
0%, 100% { transform: translateY(0px); }
50% { transform: translateY(-10px); }
}
.animate-float {
animation: float 3s ease-in-out infinite;
}
.animate-float-delayed {
animation: float 3s ease-in-out 0.5s infinite;
}
```
**Result:**
```
β Pixel-perfect match to screenshot
β Responsive across all breakpoints
β Smooth animations included
β Accessible (WCAG AA)
β Production-ready code
```
### Dashboard Card
```bash
/design:screenshot [analytics-card.png]
```
**Generated Implementation:**
```typescript
// src/components/AnalyticsCard.tsx
import { TrendingUp, TrendingDown } from 'lucide-react';
interface AnalyticsCardProps {
  title: string;
  value: string;
  change: number;
  timeframe: string;
}

export function AnalyticsCard({
  title,
  value,
  change,
  timeframe
}: AnalyticsCardProps) {
  const isPositive = change >= 0;

  return (
    <div className="rounded-lg border bg-white p-4 shadow-sm">
      {/* Header */}
      <div className="flex items-center justify-between">
        <p className="text-sm text-gray-500">{title}</p>
        <span className="text-xs text-gray-400">{timeframe}</span>
      </div>

      {/* Value */}
      <p className="mt-2 text-3xl font-semibold text-gray-900">{value}</p>

      {/* Change Indicator */}
      <div className="mt-2 flex items-center gap-1 text-sm">
        {isPositive ? (
          <TrendingUp className="h-4 w-4 text-green-600" />
        ) : (
          <TrendingDown className="h-4 w-4 text-red-600" />
        )}
        <span className={isPositive ? 'text-green-600' : 'text-red-600'}>
          {isPositive ? '+' : ''}{change}%
        </span>
        <span className="text-gray-500">vs last period</span>
      </div>
    </div>
  );
}

// Usage example
// <AnalyticsCard title="Revenue" value="$12,430" change={8.2} timeframe="Last 30 days" />
```
### Mobile App Screen
```bash
/design:screenshot [instagram-profile-mobile.png]
```
**Generated Implementation:**
```typescript
// src/screens/ProfileScreen.tsx
import { ArrowLeft, Settings, Grid, List } from 'lucide-react';
export function ProfileScreen() {
return (
{/* Header */}
{/* Profile Section */}
{/* Avatar and Stats Row */}
{/* Avatar */}
{/* Stats */}
{/* Bio */}
Full Name
Bio description goes here π¨
link.com
{/* Action Buttons */}
Follow
Message
{/* Tabs */}
{/* Content Grid */}
{Array.from({ length: 12 }).map((_, i) => (
))}
);
}
```
### Complex Form
```bash
/design:screenshot [checkout-form.png]
```
**Generated Implementation:**
```typescript
// src/components/CheckoutForm.tsx
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import * as z from 'zod';
import { CreditCard, Lock } from 'lucide-react';
const checkoutSchema = z.object({
email: z.string().email('Invalid email'),
cardNumber: z.string().regex(/^\d{16}$/, 'Invalid card number'),
expiry: z.string().regex(/^\d{2}\/\d{2}$/, 'Format: MM/YY'),
cvc: z.string().regex(/^\d{3,4}$/, 'Invalid CVC'),
name: z.string().min(2, 'Name required'),
address: z.string().min(5, 'Address required'),
city: z.string().min(2, 'City required'),
postalCode: z.string().min(5, 'Postal code required'),
});
type CheckoutFormData = z.infer<typeof checkoutSchema>;
export function CheckoutForm() {
const {
register,
handleSubmit,
formState: { errors, isSubmitting }
} = useForm<CheckoutFormData>({
resolver: zodResolver(checkoutSchema),
});
const onSubmit = async (data: CheckoutFormData) => {
// Process payment
console.log('Checkout data:', data);
};
  return (
    <form onSubmit={handleSubmit(onSubmit)} className="space-y-6">
      {/* Contact, card, and billing address fields rendered here */}
    </form>
  );
}
```
## When to Use /design:screenshot
### ✅ Use /design:screenshot for:
- **Design Inspiration**: Implement designs you admire
- **Design Mockups**: Convert Figma/Sketch exports to code
- **Competitive Analysis**: Recreate competitor interfaces
- **Design Handoff**: Designer sends screenshot, you build it
- **Legacy Redesign**: Modernize old interfaces
- **Rapid Prototyping**: Convert sketches to working code
- **Learning**: Study and replicate great designs
- **Client References**: "Make it look like this"
### ❌ Don't use /design:screenshot for:
- **New Designs**: Use `/design:fast` or `/design:good`
- **Just Analysis**: Use `/design:describe` to understand first
- **Complex Interactions**: Video might be better (see `/design:video`)
- **Copyright Concerns**: Don't copy protected designs exactly
## Best Practices
### Provide High-Quality Screenshots
✅ **Good:**
```bash
# High resolution (2x or higher)
/design:screenshot [figma-export@2x.png]
# Full context (not cropped)
/design:screenshot [complete-page-view.png]
# Clear, sharp images
/design:screenshot [no-blur-screenshot.png]
```
❌ **Bad:**
```bash
# Low resolution, pixelated
/design:screenshot [tiny-image.png]
# Partial view, missing context
/design:screenshot [just-corner.png]
# Blurry, out of focus
/design:screenshot [unclear.png]
```
### Include Multiple Views
For responsive designs:
```bash
# Desktop view
/design:screenshot [desktop-view.png]
# Then specify mobile
/design:screenshot [mobile-view.png] - mobile version of previous design
# Tablet if different
/design:screenshot [tablet-view.png] - tablet variant
```
### Specify Adaptations
```bash
# Use exact colors
/design:screenshot [design.png] - use exact colors from screenshot
# Adapt for brand
/design:screenshot [design.png] - adapt to our brand colors: primary #3B82F6, secondary #10B981
# Simplify
/design:screenshot [complex-design.png] - simplify animations, focus on layout
```
## Advanced Usage
### Component Extraction
```bash
# Extract single component
/design:screenshot [full-page.png] - only implement the navigation bar component
# Specific element
/design:screenshot [dashboard.png] - extract the stats cards pattern as reusable component
```
### Style Guide Generation
```bash
# Multiple screenshots
/design:screenshot [page1.png, page2.png, page3.png] - extract consistent design system across all pages
```
### Responsive Reconstruction
```bash
# Create responsive version
/design:screenshot [desktop-only.png] - implement and extrapolate mobile/tablet responsive versions
```
## Accuracy Level
### Pixel-Perfect Mode (Default)
```
Matches:
β Exact spacing and sizing
β Precise color values
β Font sizes and weights
β Border radius and shadows
β Layout proportions
May vary:
- Exact fonts (uses similar web-safe alternatives)
- Images (uses placeholders unless provided)
- Complex animations (simplified)
```
### Adaptive Mode
```bash
/design:screenshot [design.png] - adapt loosely, prioritize responsiveness over exact match
```
More flexible implementation:
- Adapts spacing to grid system
- Uses standard component library
- Focuses on overall feel, not pixels
## Troubleshooting
### Colors Don't Match
**Problem:** Generated colors slightly off
**Solution:**
```bash
# Specify exact colors
/design:screenshot [design.png] - use these exact colors: primary #3B82F6, accent #F59E0B
```
### Layout Not Exact
**Problem:** Spacing or proportions different
**Solution:**
```bash
/fix:ui [compare screenshot to implementation] - adjust spacing to match exactly
```
### Missing Interactions
**Problem:** Hover states or animations not included
**Solution:**
```bash
# Provide video showing interactions
/design:video [interaction-demo.mp4]
# Or describe explicitly
/design:screenshot [design.png] - add hover effects: buttons scale slightly, cards lift with shadow
```
### Font Not Available
**Problem:** Screenshot uses custom font
**Solution:**
```typescript
// ClaudeKit suggests similar fonts
// Original: "Custom Font Pro"
// Suggested: "Inter" (similar characteristics)
// Or specify font
/design:screenshot [design.png] - use "Your Font Name" font from Google Fonts
```
## Integration Workflow
### Designer β Developer Handoff
```bash
# 1. Designer exports screen
# (Figma export at 2x)
# 2. Developer implements
/design:screenshot [figma-export.png]
# 3. Review and adjust
/fix:ui [compare side by side] - adjust button padding and card shadows
# 4. Iterate
/design:screenshot [updated-design.png] - implement changes from v2
```
### Competitive Analysis Workflow
```bash
# 1. Screenshot competitor
# (Take high-res screenshots)
# 2. Analyze first
/design:describe [competitor-page.png]
# 3. Implement adapted version
/design:screenshot [competitor-page.png] - implement similar layout but with our brand and unique twist
# 4. Ensure differentiation
# (Don't copy exactly, adapt and improve)
```
## Legal & Ethical Considerations
### ✅ Appropriate Use:
- **Learning**: Study and understand design patterns
- **Internal Tools**: Recreate layouts for internal use
- **Inspiration**: Use as reference, create unique version
- **Design Handoff**: Implement your own mockups
- **Pattern Libraries**: Recreate common UI patterns
### ❌ Avoid:
- **Direct Copying**: Exact replicas of copyrighted designs
- **Competitor Cloning**: Identical copies without differentiation
- **Brand Theft**: Using exact brand assets and styling
- **Trademark Violation**: Copying logos and branded elements
**Best Practice:** Use as inspiration, implement your unique version.
## Next Steps
- [/design:describe](/docs/commands/design/describe) - Analyze screenshot first
- [/design:video](/docs/commands/design/video) - For animated interfaces
- [/design:good](/docs/commands/design/good) - Polish the implementation
- [/fix:ui](/docs/commands/fix/ui) - Fine-tune the result
---
**Key Takeaway**: `/design:screenshot` transforms any visual reference into working code with pixel-perfect accuracy. See great design, implement it. Perfect for design handoffs, competitive inspiration, and rapid prototyping from visual references.
---
# /design:video
Section: docs
Category: commands/design
URL: https://docs.claudekit.cc/docs/design/video
# /design:video
Transform video demonstrations into working animated interfaces. Analyze screen recordings, interaction demos, or animation references to extract design patterns, motion principles, and timing specifications. See it move, build it move.
## Syntax
```bash
/design:video [video file path]
```
## How It Works
The `/design:video` command analyzes motion and interaction:
### 1. Video Processing
- Processes video frame by frame using AI vision
- Extracts key frames showing different states
- Identifies transition moments and timing
- Captures user interactions (clicks, hovers, swipes)
- Measures animation duration and easing
### 2. Motion Analysis
- Identifies animation types (fade, slide, scale, rotate)
- Measures transition durations (milliseconds)
- Detects easing functions (linear, ease-in-out, spring)
- Captures sequential timing (stagger delays)
- Identifies looping animations
### 3. Interaction Mapping
- Maps user actions to visual feedback
- Identifies hover states and effects
- Captures click/tap animations
- Documents scroll-based animations
- Extracts gesture patterns (swipe, pinch)
### 4. State Transitions
- Identifies all component states
- Maps transitions between states
- Documents loading states
- Captures error and success animations
- Extracts empty state treatments
### 5. Code Generation
- Implements animations with Framer Motion or CSS
- Generates exact timing specifications
- Creates responsive interaction handlers
- Adds gesture support for mobile
- Includes accessibility considerations
## Examples
### Onboarding Flow Animation
```bash
/design:video [app-onboarding-flow.mp4]
```
**Video Analysis:**
```
Video Analysis: Onboarding Flow
ββββββββββββββββββββββββββββββββββββββββββββ
Duration: 24 seconds
Screens: 4
Transitions: 3 (screen changes)
Frame-by-Frame Analysis:
Screen 1: Welcome (0:00 - 0:06)
βββ 0.0s: Fade in from white (500ms)
βββ 0.5s: Logo scale animation (1.0 β 1.2 β 1.0, 800ms, spring)
βββ 1.0s: Headline slide up + fade (300ms, ease-out)
βββ 1.3s: Description fade in (300ms)
βββ 2.0s: Button fade in + pulse (400ms)
βββ 2.4s: Button continues pulsing (2s loop, scale 1.0 β 1.05)
βββ 6.0s: User taps button
Transition 1β2 (6.0s - 6.4s)
βββ Button press: scale down to 0.95 (100ms)
βββ Screen 1: slide left + fade out (400ms, ease-in-out)
βββ Screen 2: slide from right + fade in (400ms, ease-in-out)
βββ Total transition: 400ms
Screen 2: Feature Explanation (6.4s - 12.0s)
βββ 6.4s: Illustration slide up from bottom (500ms, ease-out)
βββ 6.9s: Feature icons appear sequentially
β βββ Icon 1: fade + scale (300ms)
β βββ Icon 2: fade + scale (300ms, 100ms delay)
β βββ Icon 3: fade + scale (300ms, 200ms delay)
βββ 7.5s: Feature text fade in with stagger (200ms each, 80ms stagger)
βββ 8.0s: Progress dots animate (dot 2 fills, 200ms)
βββ 12.0s: User swipes left
Transition 2β3 (12.0s - 12.4s)
βββ Swipe gesture detected (velocity-based)
βββ Screen follows finger, then snaps
βββ Spring animation (stiffness: 300, damping: 30)
βββ Duration: 400ms
Screen 3: Permissions (12.4s - 18.0s)
βββ 12.4s: Icon bounces in (scale 0 β 1.1 β 1.0, 600ms)
βββ 13.0s: Permission text typewriter effect
β βββ "We'd like to send you notifications"
β βββ Character delay: 30ms
βββ 14.5s: Allow button slide up (300ms)
βββ 15.0s: Skip button fade in (300ms)
βββ 18.0s: User taps Allow
Transition 3β4 (18.0s - 18.4s)
βββ Checkmark draw animation (300ms, path stroke)
βββ Success ripple effect (500ms)
βββ Screen transition (same as before)
βββ Duration: 400ms
Screen 4: Complete (18.4s - 24.0s)
βββ 18.4s: Confetti animation (2s, particles fall)
βββ 19.0s: Success icon scale + rotate (600ms)
βββ 19.6s: "You're all set!" fade in (300ms)
βββ 20.0s: Get Started button slide up (400ms)
βββ 24.0s: Video ends
Animation Specifications:
βββ Primary Easing: cubic-bezier(0.4, 0.0, 0.2, 1)
βββ Spring Config: { stiffness: 300, damping: 30 }
βββ Fast Duration: 200ms
βββ Standard Duration: 300-400ms
βββ Slow Duration: 500-600ms
βββ Stagger Delay: 80-100ms
βββ Loop Duration: 2000ms
Interaction Patterns:
βββ Button Press: scale(0.95), 100ms
βββ Button Hover: scale(1.02), shadow increase
βββ Swipe: Follow gesture + spring snap
βββ Tap: Ripple effect from tap point
βββ Success: Checkmark + confetti
```
**Generated Implementation:**
```typescript
// src/components/Onboarding.tsx
import { motion, AnimatePresence } from 'framer-motion';
import { useState } from 'react';
import Confetti from 'react-confetti';
const screens = ['welcome', 'features', 'permissions', 'complete'];
export function Onboarding() {
const [currentScreen, setCurrentScreen] = useState(0);
const [showConfetti, setShowConfetti] = useState(false);
const nextScreen = () => {
if (currentScreen < screens.length - 1) {
setCurrentScreen(currentScreen + 1);
if (currentScreen === screens.length - 2) {
setShowConfetti(true);
}
}
};
return (
{showConfetti &&
}
{currentScreen === 0 && }
{currentScreen === 1 && }
{currentScreen === 2 && }
{currentScreen === 3 && }
{/* Progress Dots */}
{screens.map((_, index) => (
))}
);
}
function WelcomeScreen({ onNext }: { onNext: () => void }) {
return (
{/* Logo */}
π
{/* Headline */}
Welcome to AppName
{/* Description */}
Discover amazing features that will transform your workflow
{/* Button */}
Get Started
);
}
function FeaturesScreen({ onNext }: { onNext: () => void }) {
const features = [
{ icon: 'β‘', title: 'Lightning Fast', description: 'Blazing performance' },
{ icon: 'π¨', title: 'Beautiful UI', description: 'Pixel perfect design' },
{ icon: 'π', title: 'Secure', description: 'End-to-end encryption' },
];
return (
{
if (offset.x < -100 || velocity.x < -500) {
onNext();
}
}}
>
{/* Illustration */}
{/* Features */}
{features.map((feature, index) => (
{feature.icon}
{feature.title}
{feature.description}
))}
Swipe left to continue β
);
}
function PermissionsScreen({ onNext }: { onNext: () => void }) {
const [showCheckmark, setShowCheckmark] = useState(false);
const handleAllow = () => {
setShowCheckmark(true);
setTimeout(onNext, 800);
};
return (
{/* Icon */}
π
{/* Typewriter Text */}
Stay updated with important information
{/* Buttons */}
Allow
Not now
{/* Checkmark Animation */}
{showCheckmark && (
)}
);
}
function CompleteScreen() {
return (
β
You're all set!
Start Exploring
);
}
function TypewriterText({ text, className }: { text: string; className?: string }) {
return (
{text.split('').map((char, index) => (
{char}
))}
);
}
```
**Result:**
```
β Exact animation timing matched
β All transitions implemented
β Gesture support added
β Progress tracking included
β Accessibility maintained
β 60 FPS smooth animations
Generated Files:
β src/components/Onboarding.tsx
β src/components/TypewriterText.tsx
β src/hooks/useSwipeGesture.ts
β package.json (framer-motion, react-confetti added)
Time: 45 minutes
Quality: Production-ready with exact timing
```
### Dropdown Menu Interaction
```bash
/design:video [dropdown-animation.mp4]
```
**Implementation:**
```typescript
// Extracted: Smooth dropdown with stagger animation
import { motion, AnimatePresence } from 'framer-motion';
const menuItems = [
{ icon: 'π€', label: 'Profile', href: '/profile' },
{ icon: 'βοΈ', label: 'Settings', href: '/settings' },
{ icon: 'β', label: 'Help', href: '/help' },
{ icon: 'πͺ', label: 'Logout', href: '/logout' },
];
export function Dropdown({ isOpen }: { isOpen: boolean }) {
  return (
    <AnimatePresence>
      {isOpen && (
        <motion.ul
          initial={{ opacity: 0, y: -8 }}
          animate={{ opacity: 1, y: 0 }}
          exit={{ opacity: 0, y: -8 }}
          className="w-48 rounded-lg border bg-white py-1 shadow-lg"
        >
          {menuItems.map((item, index) => (
            <motion.li
              key={item.href}
              initial={{ opacity: 0, x: -8 }}
              animate={{ opacity: 1, x: 0 }}
              transition={{ delay: index * 0.05 }} // stagger items by 50ms
            >
              <a href={item.href} className="flex items-center gap-2 px-3 py-2 hover:bg-gray-50">
                {item.icon} {item.label}
              </a>
            </motion.li>
          ))}
        </motion.ul>
      )}
    </AnimatePresence>
  );
}
```
### Loading Animation
```bash
/design:video [loading-spinner.mp4]
```
**Implementation:**
```typescript
// Extracted: Custom loading animation
import { motion } from 'framer-motion';
export function LoadingSpinner() {
  return (
    <motion.div
      className="h-8 w-8 rounded-full border-4 border-gray-200 border-t-blue-600"
      animate={{ rotate: 360 }}
      transition={{ duration: 1, repeat: Infinity, ease: 'linear' }}
    />
  );
}

export function SkeletonLoader() {
  return (
    <div className="space-y-3">
      {[1, 2, 3].map((i) => (
        <div key={i} className="h-4 animate-pulse rounded bg-gray-200" />
      ))}
    </div>
  );
}
```
## When to Use /design:video
### ✅ Use /design:video for:
- **Animated Interfaces**: Onboarding flows, transitions
- **Micro-Interactions**: Button animations, hover effects
- **Gesture Patterns**: Swipe, pinch, drag interactions
- **Loading States**: Spinners, skeleton screens, progress
- **Transitions**: Page changes, modal animations
- **Scroll Effects**: Parallax, scroll-triggered animations
- **Data Viz**: Animated charts and graphs
- **Game UI**: Interactive game interfaces
### ❌ Don't use /design:video for:
- **Static Designs**: Use `/design:screenshot` instead
- **Just Layout**: Use screenshot if no animation
- **Simple Hover**: Basic CSS hover states don't need video
- **Very Long Videos**: Keep under 2 minutes
## Best Practices
### Record Quality Videos
✅ **Good:**
```bash
# High resolution (1080p or higher)
/design:video [screen-recording-1080p.mp4]
# Smooth frame rate (60 FPS preferred)
/design:video [60fps-recording.mp4]
# Clear interactions (show cursor, taps)
/design:video [with-cursor-visible.mp4]
# Appropriate length (10-60 seconds)
/design:video [concise-demo.mp4]
```
❌ **Bad:**
```bash
# Low resolution, pixelated
/design:video [blurry-video.mp4]
# Choppy frame rate
/design:video [low-fps.mp4]
# Too long, unfocused
/design:video [5-minute-rambling.mp4]
```
### Focus on Specific Interactions
```bash
# Single interaction
/design:video [button-hover-animation.mp4] - focus on button hover effect only
# Flow
/design:video [checkout-flow.mp4] - implement the 3-step checkout animation
# Gesture
/design:video [card-swipe.mp4] - extract swipe gesture pattern
```
### Provide Timing Notes
```bash
# Specify if timing should be adjusted
/design:video [animation.mp4] - speed up transitions by 30%
# Emphasize specific timing
/design:video [onboarding.mp4] - match the exact timing, especially the stagger effect
```
## Animation Principles
### Duration Guidelines
Extracted from videos, applied to code:
```
Micro-interactions: 100-200ms (fast, immediate)
UI Transitions: 200-400ms (standard)
Page Transitions: 400-600ms (noticeable)
Complex Animations: 600-1000ms (deliberate)
Looping Animations: 2000-4000ms (ambient)
```
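These ranges can be captured as shared constants so components stay consistent. A minimal sketch (names and exact values are illustrative):

```typescript
// Illustrative animation duration constants (milliseconds), mirroring the ranges above
export const durations = {
  micro: 150,        // micro-interactions
  ui: 300,           // standard UI transitions
  page: 500,         // page transitions
  complex: 800,      // complex, deliberate animations
  ambientLoop: 3000  // looping, ambient animations
} as const;
```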
### Easing Functions
```typescript
// Common easing from analyzed videos
const easing = {
// Quick entry, slow exit (most UI)
easeOut: [0.0, 0.0, 0.2, 1.0],
// Slow entry, quick exit (exits, dismissals)
easeIn: [0.4, 0.0, 1.0, 1.0],
// Balanced (general purpose)
easeInOut: [0.4, 0.0, 0.2, 1.0],
// Spring physics (organic, playful)
spring: { stiffness: 300, damping: 30 },
// Sharp (attention-grabbing)
sharp: [0.4, 0.0, 0.6, 1.0],
};
```
### Stagger Patterns
```typescript
// Sequential reveals (list items, cards)
const staggerChildren = {
visible: {
transition: {
staggerChildren: 0.1, // 100ms between each
},
},
};
// Wave effect (domino)
const staggerWave = {
visible: {
transition: {
staggerChildren: 0.05,
delayChildren: 0.2,
},
},
};
```
## Advanced Features
### Scroll-Based Animations
```bash
/design:video [scroll-parallax.mp4]
```
```typescript
// Generated scroll animation
import { useScroll, useTransform, motion } from 'framer-motion';
export function ParallaxSection() {
const { scrollYProgress } = useScroll();
const y = useTransform(scrollYProgress, [0, 1], ['0%', '50%']);
const opacity = useTransform(scrollYProgress, [0, 0.5, 1], [1, 0.8, 0.6]);
  return (
    <motion.section style={{ y, opacity }} className="relative min-h-screen">
      {/* Content */}
    </motion.section>
  );
}
```
### Gesture Recognition
```bash
/design:video [swipe-cards.mp4]
```
```typescript
// Generated swipe pattern
import { motion, PanInfo } from 'framer-motion';
export function SwipeCard({ onSwipeLeft, onSwipeRight }: { onSwipeLeft: () => void; onSwipeRight: () => void }) {
const handleDragEnd = (event: any, info: PanInfo) => {
const swipeThreshold = 100;
const swipeVelocity = 500;
if (
info.offset.x < -swipeThreshold ||
info.velocity.x < -swipeVelocity
) {
// Swiped left
onSwipeLeft();
} else if (
info.offset.x > swipeThreshold ||
info.velocity.x > swipeVelocity
) {
// Swiped right
onSwipeRight();
}
};
  return (
    <motion.div drag="x" dragConstraints={{ left: 0, right: 0 }} onDragEnd={handleDragEnd}>
      {/* Card content */}
    </motion.div>
  );
}
```
### Complex Sequences
```bash
/design:video [multi-step-animation.mp4]
```
```typescript
// Generated sequence
const sequence = {
initial: { opacity: 0, scale: 0.8, y: 50 },
animate: {
opacity: [0, 1, 1, 1],
scale: [0.8, 1.1, 1, 1],
y: [50, -10, 0, 0],
},
transition: {
duration: 1.2,
times: [0, 0.3, 0.6, 1],
ease: 'easeOut',
},
};
```
## Performance Optimization
### GPU Acceleration
```css
/* Applied automatically for smooth animations */
.animated-element {
will-change: transform, opacity;
transform: translateZ(0);
}
```
### Reduced Motion
```typescript
// Respects user preferences
import { motion, useReducedMotion } from 'framer-motion';
export function AnimatedComponent() {
const shouldReduceMotion = useReducedMotion();
const variants = {
hidden: { opacity: 0, y: shouldReduceMotion ? 0 : 20 },
visible: { opacity: 1, y: 0 },
};
  return <motion.div initial="hidden" animate="visible" variants={variants} />;
}
```
## Troubleshooting
### Animation Too Fast/Slow
**Problem:** Timing doesn't feel right
**Solution:**
```bash
# Adjust timing explicitly
/design:video [animation.mp4] - slow down all animations by 50%
# Or specify exact durations
/design:video [animation.mp4] - set transition duration to 400ms instead of detected 200ms
```
### Choppy Animation
**Problem:** Animation not smooth
**Solution:**
```typescript
// Use GPU-accelerated properties
// ✅ Good (GPU)
transform: 'translateX(100px)'
opacity: 0.5
// ❌ Bad (CPU)
left: '100px'
width: '200px'
```
### Can't Extract Exact Timing
**Problem:** Video unclear about timing
**Solution:**
- Record at higher frame rate (60 FPS)
- Slow down video playback when recording
- Use screen recording software with cursor visible
- Provide supplementary notes
## Next Steps
- [/design:screenshot](/docs/commands/design/screenshot) - For static views
- [/design:3d](/docs/commands/design/3d) - For 3D animations
- [/design:describe](/docs/commands/design/describe) - Analyze video first
- [/fix:ui](/docs/commands/fix/ui) - Adjust animation feel
---
**Key Takeaway**: `/design:video` extracts motion and interaction patterns from video demonstrations to implement perfectly-timed animated interfaces. See it move, build it move, with exact timing specifications and gesture support.
---
# /docs:init
Section: docs
Category: commands/docs-cmd
URL: https://docs.claudekit.cc/docs/docs-cmd/init
# /docs:init
Initialize comprehensive project documentation by analyzing the entire codebase. This is the essential first command when integrating ClaudeKit into an existing project.
## Syntax
```bash
/docs:init
```
## How It Works
The `/docs:init` command performs a thorough codebase analysis:
### 1. Codebase Scanning
- Analyzes all source files
- Maps project structure
- Identifies patterns and conventions
- Detects frameworks and libraries
- Examines database schemas
- Reviews API endpoints
### 2. Architecture Analysis
- Identifies architectural patterns (MVC, Clean Architecture, etc.)
- Maps dependencies and relationships
- Analyzes data flow
- Documents integration points
- Identifies external services
### 3. Code Standards Detection
- Identifies naming conventions
- Detects code style patterns
- Finds linting/formatting rules
- Documents testing approaches
- Identifies error handling patterns
### 4. Documentation Generation
Creates comprehensive documentation files in `docs/`:
- **codebase-summary.md** - High-level overview
- **project-overview-pdr.md** - Product requirements
- **code-standards.md** - Coding conventions
- **system-architecture.md** - Architecture documentation
- **design-guidelines.md** - UI/UX patterns
- **deployment-guide.md** - Deployment procedures
- **project-roadmap.md** - Future plans and TODOs
## When to Use
### ✅ Perfect Scenarios
**Joining Existing Project**
```bash
# First day on new codebase
cd existing-project
/docs:init
# Now you understand the entire project!
```
**Integrating ClaudeKit**
```bash
# Adding ClaudeKit to existing project
ck init --kit engineer
/docs:init
# ClaudeKit now understands your codebase
```
**Project Audit**
```bash
# Haven't looked at project in months
cd old-project
/docs:init
# Get refreshed on everything
```
**Before Major Refactoring**
```bash
# Document current state first
/docs:init
# Now safely refactor
```
## Generated Documentation
### codebase-summary.md
High-level project overview:
```markdown
# Codebase Summary
## Project Type
Full-stack web application with REST API backend and React frontend
## Tech Stack
- **Backend**: Node.js, Express.js, PostgreSQL
- **Frontend**: React 18, TypeScript, Tailwind CSS
- **Infrastructure**: Docker, AWS (EC2, RDS, S3)
## Project Structure
- `/src/api` - REST API endpoints
- `/src/services` - Business logic
- `/src/models` - Database models
- `/client` - React frontend
- `/tests` - Test suite
## Key Features
1. User authentication (JWT-based)
2. Real-time notifications (WebSockets)
3. File uploads (S3 integration)
4. Payment processing (Stripe)
5. Admin dashboard
## Statistics
- Total files: 247
- Lines of code: 45,829
- Test coverage: 78%
- Last updated: 2024-01-15
```
### code-standards.md
Coding conventions and patterns:
```markdown
# Code Standards
## Naming Conventions
- **Files**: kebab-case (user-service.ts)
- **Classes**: PascalCase (UserService)
- **Functions**: camelCase (getUserById)
- **Constants**: UPPER_SNAKE_CASE (API_BASE_URL)
## Code Style
- **Formatter**: Prettier
- **Linter**: ESLint with Airbnb config
- **Indentation**: 2 spaces
- **Quotes**: Single quotes
- **Semicolons**: Required
## Architecture Patterns
- **Backend**: Layered architecture (routes β controllers β services β models)
- **Frontend**: Feature-based organization with custom hooks
- **State Management**: React Context + useReducer
## Error Handling
- Custom error classes extending Error
- Centralized error middleware
- Consistent error response format
- Detailed error logging
## Testing
- **Unit**: Jest for business logic
- **Integration**: Supertest for API endpoints
- **E2E**: Playwright for critical user flows
- **Target coverage**: 80%+
```
### system-architecture.md
Technical architecture documentation:
```markdown
# System Architecture
## High-Level Overview
[Diagram generated showing components and data flow]
## Components
### API Server
- Framework: Express.js
- Authentication: JWT with refresh tokens
- Rate limiting: Redis-based token bucket
- Validation: Joi schemas
### Database
- Type: PostgreSQL 14
- ORM: Prisma
- Migrations: Automatic via Prisma Migrate
- Backup: Daily snapshots to S3
### Frontend
- Framework: React 18 with TypeScript
- State: Context API + useReducer
- Routing: React Router v6
- API Client: Axios with interceptors
### External Services
- **Stripe**: Payment processing
- **AWS S3**: File storage
- **SendGrid**: Email delivery
- **Redis**: Caching and rate limiting
## Data Flow
1. Client sends request to API
2. Auth middleware validates JWT
3. Controller receives request
4. Service layer processes business logic
5. Model layer interacts with database
6. Response sent back to client
## Security
- HTTPS enforced
- CORS configured
- Helmet.js security headers
- Input validation and sanitization
- SQL injection prevention (parameterized queries)
- XSS protection
```
### deployment-guide.md
Deployment procedures:
```markdown
# Deployment Guide
## Environments
- **Development**: localhost:3000
- **Staging**: staging.example.com
- **Production**: app.example.com
## Prerequisites
- Node.js 18+
- PostgreSQL 14
- Redis 6
- AWS CLI configured
- Docker
## Local Setup
```bash
# Clone repository
git clone https://github.com/org/project.git
# Install dependencies
npm install
# Setup environment
cp .env.example .env
# Edit .env with your values
# Run database migrations
npm run migrate
# Start development server
npm run dev
```
## Staging Deployment
```bash
# Push to staging branch
git push origin staging
# CI/CD automatically:
# 1. Runs tests
# 2. Builds Docker image
# 3. Deploys to staging ECS
# 4. Runs smoke tests
```
## Production Deployment
```bash
# Tag release
git tag v1.2.3
git push origin v1.2.3
# Create release PR
/git:pr main staging
# After approval:
# 1. Merge to main
# 2. CI/CD deploys to production
# 3. Health checks verify deployment
# 4. Rollback if issues detected
```
```
## Examples
### New Team Member Onboarding
```bash
# Day 1: Clone project
git clone https://github.com/company/project.git
cd project
# Generate documentation
/docs:init
Reading codebase...
[████████████████████████] 100%
Documentation generated:
✓ docs/codebase-summary.md
✓ docs/code-standards.md
✓ docs/system-architecture.md
✓ docs/deployment-guide.md
✓ docs/project-overview-pdr.md
# Now read the docs
cat docs/codebase-summary.md
# Ask questions
/ask [how does authentication work?]
# Start contributing!
```
### Legacy Project Revival
```bash
# Haven't touched project in 2 years
cd legacy-project
# Generate current state documentation
/docs:init
Analyzing codebase...
- Framework: Express.js (old version)
- Database: MongoDB
- No tests found
- 15,000 lines of code
- 3 external APIs integrated
Documentation created with:
- Current architecture
- Identified technical debt
- Migration recommendations
- Security concerns
# Now you understand what you're working with
```
## Time Estimates
| Codebase Size | Time to Complete |
|---------------|------------------|
| Small (< 5k lines) | 30-60 seconds |
| Medium (5k-20k lines) | 1-2 minutes |
| Large (20k-50k lines) | 2-5 minutes |
| Very Large (50k+ lines) | 5-10 minutes |
## What Gets Documented
### Automatically Detected
✅ **Tech Stack**
- Frameworks and libraries
- Programming languages
- Build tools
- Testing frameworks
✅ **Architecture**
- Project structure
- Design patterns
- Dependency graph
- Data flow
✅ **Conventions**
- Naming patterns
- File organization
- Code style
- Testing approach
✅ **APIs**
- REST endpoints
- GraphQL schemas
- WebSocket events
- External integrations
✅ **Database**
- Schema structure
- Relationships
- Migrations
- Indexes
✅ **Infrastructure**
- Deployment setup
- Environment configuration
- CI/CD pipelines
- Cloud services
## Benefits
### Prevents Hallucinations
With documentation, ClaudeKit:
- Understands existing patterns
- Follows project conventions
- Avoids creating duplicate code
- Respects architectural decisions
**Before `/docs:init`:**
```
User: "Add user authentication"
Claude: Creates new auth system (duplicate!)
```
**After `/docs:init`:**
```
User: "Add user authentication"
Claude: Extends existing auth system properly
```
### Faster Development
Documentation helps ClaudeKit:
- Generate code matching your style
- Reuse existing utilities
- Follow established patterns
- Avoid reinventing the wheel
### Better Code Quality
ClaudeKit produces code that:
- Matches your standards
- Fits your architecture
- Uses your conventions
- Follows your patterns
## Updating Documentation
Documentation should be updated when:
- Major refactoring occurs
- Architecture changes
- New patterns introduced
- Tech stack changes
```bash
# After major changes
/docs:update
# Or regenerate completely
rm -rf docs/
/docs:init
```
## Customization
### Exclude Specific Directories
Create `.claudeignore`:
```
node_modules/
dist/
build/
.git/
vendor/
```
### Focus on Specific Areas
```bash
# Document only backend
/docs:init --focus=src/api,src/services
# Document only frontend
/docs:init --focus=client/
```
## Best Practices
### Run Immediately
When joining a project or integrating ClaudeKit:
```bash
# First thing to do
cd project
/docs:init
```
### Review Generated Docs
```bash
# After generation, review accuracy
cat docs/codebase-summary.md
cat docs/code-standards.md
# Provide corrections if needed
"The authentication actually uses OAuth2, not JWT"
```
### Keep Updated
```bash
# Update docs regularly
/docs:update
# Or after major changes
/docs:init # Regenerate
```
### Share with Team
```bash
# Commit documentation
git add docs/
/git:cm # "docs: initialize project documentation"
# Push to team
git push
# Now everyone benefits!
```
## Troubleshooting
### Incomplete Documentation
**Problem:** Generated docs missing information
**Solution:**
```bash
# Provide additional context
"The project also uses Redis for caching, located in src/cache/"
# Regenerate
/docs:init
```
### Incorrect Information
**Problem:** Documentation has wrong details
**Solution:**
```bash
# Correct specific errors
"The database is actually PostgreSQL, not MySQL"
# Update
/docs:update
```
### Very Large Codebase
**Problem:** Taking too long to analyze
**Solution:**
```bash
# Focus on important directories
/docs:init --focus=src/
# Or split into chunks
/docs:init --dir=src/api
/docs:init --dir=src/services
```
## Next Steps
After running `/docs:init`:
- [/docs:update](/docs/commands/docs/update) - Update documentation
- [/docs:summarize](/docs/commands/docs/summarize) - Create summary
- [/ask](/docs/commands/core/ask) - Ask about codebase
- [/watzup](/docs/commands/core/watzup) - Get project status
---
**Key Takeaway**: `/docs:init` is essential when starting with ClaudeKit on existing projects. It creates comprehensive documentation that helps ClaudeKit understand your codebase and prevents hallucinations.
---
# /docs:update
Section: docs
Category: commands/docs-cmd
URL: https://docs.claudekit.cc/docs/docs-cmd/update
# /docs:update
Comprehensively analyze your codebase and update all documentation files to ensure they accurately reflect the current state of your project. Uses the `docs-manager` agent to maintain synchronized, high-quality documentation.
## Syntax
```bash
/docs:update [additional requests]
```
### Parameters
- `[additional requests]` (optional): Specific documentation updates or focus areas
## How It Works
The `/docs:update` command uses the `docs-manager` agent with this workflow:
### 1. Codebase Analysis
- Generates comprehensive codebase compaction using `repomix`
- Creates/updates `./docs/codebase-summary.md`
- Scans project structure and architecture
- Identifies key components and patterns
- Analyzes dependencies and integrations
### 2. Documentation Review
- Reads all existing documentation in `./docs/` directory
- Identifies outdated information
- Finds gaps and inconsistencies
- Cross-references with actual code implementation
- Checks examples and code snippets for accuracy
### 3. Systematic Updates
- Updates each documentation file:
- `README.md` - Project overview and quick start
- `docs/project-overview-pdr.md` - Product Development Requirements
- `docs/codebase-summary.md` - Comprehensive codebase summary
- `docs/code-standards.md` - Codebase structure and standards
- `docs/system-architecture.md` - System architecture documentation
- `docs/project-roadmap.md` - Project roadmap and future plans
- `docs/deployment-guide.md` (optional) - Deployment instructions
- `docs/design-guidelines.md` (optional) - Design system guidelines
### 4. Quality Assurance
- Ensures consistent formatting and terminology
- Validates all links and references
- Verifies code examples are functional
- Maintains documentation hierarchy
- Updates version information and timestamps
## When to Use
### ✅ Perfect For
**After Major Features**
```bash
# Implemented new authentication system
/docs:update
```
**Refactoring Projects**
```bash
# Refactored entire API layer
/docs:update [focus on API architecture changes]
```
**New Team Members**
```bash
# Preparing documentation for onboarding
/docs:update [ensure all setup instructions are current]
```
**Pre-Release**
```bash
# Before version release
/docs:update [prepare for v2.0 release]
```
**Quarterly Maintenance**
```bash
# Regular documentation review
/docs:update [quarterly documentation audit]
```
### ❌ Don't Use For
**Simple Typos**
```bash
❌ /docs:update [fix typo in README]
✅ Just edit the README directly
```
**No Code Changes**
```bash
❌ /docs:update [just checking]
✅ Only run after meaningful code changes
```
**Quick Status Check**
```bash
❌ /docs:update [what changed?]
✅ /watzup [review recent changes]
```
## Examples
### After Feature Implementation
```bash
/docs:update
```
**What happens:**
```
1. Analyzing codebase
$ repomix
✓ Generated: ./repomix-output.xml (245KB, 45K tokens)
✓ Created: ./docs/codebase-summary.md
2. Reviewing existing documentation
- README.md: Outdated (mentions old API structure)
- project-overview-pdr.md: Missing new features
- code-standards.md: Current
- system-architecture.md: Needs update (new microservices)
- codebase-summary.md: Regenerated
3. Updating documentation
✓ README.md
- Updated API endpoint examples
- Added new environment variables
- Refreshed feature list
✓ project-overview-pdr.md
- Added OAuth2 authentication section
- Updated technical requirements
- Added new acceptance criteria
✓ system-architecture.md
- Added payment service architecture
- Updated database schema diagram
- Documented new message queue integration
✓ codebase-summary.md
- Complete project structure overview
- File organization and naming conventions
- Key architectural patterns
4. Summary
Files updated: 4
Sections added: 12
Examples refreshed: 8
Broken links fixed: 3
✓ Documentation synchronized (3m 45s)
```
### Focused Update
```bash
/docs:update [update API documentation with new endpoints]
```
**What happens:**
```
1. Analyzing API changes
- New endpoints: 5 (user management)
- Modified endpoints: 3 (authentication)
- Deprecated endpoints: 1 (legacy login)
2. Updating API documentation
✓ project-overview-pdr.md
- Added API endpoints section
- Documented request/response formats
- Added authentication requirements
✓ code-standards.md
- Updated API design patterns
- Added endpoint naming conventions
- Documented error response structure
✓ README.md
- Added API quick start examples
- Updated authentication flow
3. Validation
- All endpoints documented
- Request examples tested
- Response schemas validated
✓ API documentation updated (2m 12s)
```
### Pre-Release Documentation
```bash
/docs:update [prepare documentation for v2.0 release]
```
**What happens:**
```
1. Comprehensive review
- Reviewing 8 documentation files
- Checking 156 code examples
- Validating 43 links
2. Major updates
✓ README.md
- Updated version to 2.0
- Added breaking changes section
- Refreshed installation instructions
✓ project-overview-pdr.md
- Added v2.0 features overview
- Updated technical requirements
- Added migration guide from v1.x
✓ project-roadmap.md
- Marked v2.0 items as completed
- Added v2.1 and v3.0 planning
- Updated timeline
✓ deployment-guide.md
- Updated for new deployment architecture
- Added Docker Compose examples
- Documented environment variables
3. Quality checks
✓ All links validated
✓ Code examples tested
✓ Consistent formatting applied
✓ Version numbers updated throughout
✓ Release documentation ready (4m 23s)
```
### Architecture Documentation
```bash
/docs:update [focus on system architecture]
```
**What happens:**
```
1. Architecture analysis
- Microservices: 7 services identified
- Databases: 2 (PostgreSQL, Redis)
- Message queues: RabbitMQ
- External APIs: 4 integrations
2. Updating system-architecture.md
✓ Added system overview diagram
✓ Documented each microservice
- API Gateway
- Auth Service
- User Service
- Payment Service
- Notification Service
- Analytics Service
- Background Job Service
✓ Database architecture
- Schema relationships
- Migration strategy
- Backup procedures
✓ Communication patterns
- REST APIs
- Message queue flows
- WebSocket connections
✓ Deployment architecture
- Container orchestration
- Load balancing
- Scaling strategies
✓ Architecture documentation complete (3m 56s)
```
## Documentation Files Updated
### Core Documentation
**README.md**
- Project overview
- Quick start guide
- Installation instructions
- Basic usage examples
- Contributing guidelines
**docs/project-overview-pdr.md**
- Product vision and goals
- Functional requirements
- Non-functional requirements
- Technical constraints
- Acceptance criteria
- Success metrics
**docs/codebase-summary.md**
- Project structure overview
- File organization
- Key components
- Architectural patterns
- Token count and statistics
**docs/code-standards.md**
- Coding conventions
- File naming patterns
- Directory structure
- Error handling patterns
- Testing strategies
- Security practices
**docs/system-architecture.md**
- High-level architecture
- Component diagrams
- Data flow
- Technology stack
- Integration points
- Deployment architecture
### Optional Documentation
**docs/project-roadmap.md**
- Feature timeline
- Release planning
- Future improvements
- Technical debt tracking
**docs/deployment-guide.md**
- Deployment procedures
- Environment setup
- Configuration management
- Monitoring and logging
**docs/design-guidelines.md**
- UI/UX patterns
- Component library
- Design system
- Accessibility guidelines
## Agent Invoked
The command uses the **docs-manager agent** with these capabilities:
- **Documentation Analysis**: Systematic review of all documentation
- **Codebase Synchronization**: Cross-referencing docs with code
- **Standards Enforcement**: Consistent formatting and terminology
- **Gap Identification**: Finding missing or outdated documentation
- **Quality Assurance**: Validating examples, links, and references
## Best Practices
### Run After Major Changes
✅ **Good - After meaningful work:**
```bash
# After implementing features
/cook [add payment integration]
/fix:types
/test
/docs:update
# Commit everything together
/git:cm
```
❌ **Bad - Too frequent:**
```bash
# After every tiny change
/fix:fast [typo]
/docs:update # Wasteful
```
### Provide Context
✅ **Specific focus:**
```bash
/docs:update [focus on API changes and authentication flow]
```
❌ **No context:**
```bash
/docs:update # Works but less targeted
```
### Review Before Committing
✅ **Review changes:**
```bash
/docs:update
git diff docs/
# Review changes
/git:cm
```
## Workflow
### Standard Feature Development
```bash
# 1. Plan feature
/plan [add OAuth2 authentication]
# 2. Implement feature
/cook [implement OAuth2 with Google and GitHub providers]
# 3. Fix any issues
/fix:types
/test
# 4. Update documentation
/docs:update [document OAuth2 implementation]
# 5. Review and commit
git diff
/git:cm
```
### Quarterly Documentation Maintenance
```bash
# 1. Update documentation
/docs:update [quarterly documentation review]
# 2. Review all changes
git diff docs/
# 3. Commit documentation updates
/git:cm
```
### Pre-Release Checklist
```bash
# 1. Update documentation
/docs:update [prepare for v2.0 release]
# 2. Review roadmap
# Edit docs/project-roadmap.md
# 3. Update changelog
# Edit CHANGELOG.md
# 4. Commit release documentation
/git:cm
# 5. Create release PR
/git:pr main develop
```
## Troubleshooting
### Incomplete Updates
**Problem:** Some documentation files not updated
**Solution:**
```bash
# Specify which files need attention
/docs:update [update system-architecture.md and deployment-guide.md]
```
### Outdated Examples
**Problem:** Code examples in docs don't match current code
**Solution:**
```bash
# Command automatically detects and updates
/docs:update
# Or specify focus
/docs:update [refresh all code examples]
```
### Missing Documentation
**Problem:** New features not documented
**Solution:**
```bash
# Command will create missing sections
/docs:update [document new payment service]
```
### Formatting Issues
**Problem:** Inconsistent formatting across docs
**Solution:**
```bash
# Command standardizes formatting
/docs:update
```
## Related Commands
### Documentation Overview
```bash
# Quick summary of recent changes
/watzup
# Full documentation update
/docs:update
```
### Codebase Summary Only
```bash
# Just update codebase summary
/docs:summarize
# Full documentation update
/docs:update
```
### Initialize Documentation
```bash
# First-time documentation setup
/docs:init
# Regular updates thereafter
/docs:update
```
## Output Structure
After running `/docs:update`, your documentation structure:
```
./
├── README.md (updated)
├── docs/
│   ├── project-overview-pdr.md (updated)
│   ├── codebase-summary.md (regenerated)
│   ├── code-standards.md (updated)
│   ├── system-architecture.md (updated)
│   ├── project-roadmap.md (updated)
│   ├── deployment-guide.md (optional, updated)
│   └── design-guidelines.md (optional, updated)
└── repomix-output.xml (generated)
```
## Metrics
Typical `/docs:update` performance:
- **Time**: 3-5 minutes (depending on codebase size)
- **Files analyzed**: Entire codebase
- **Files updated**: 4-8 documentation files
- **Code examples validated**: All examples in documentation
- **Links checked**: All internal and external links
## Next Steps
After using `/docs:update`:
- [/docs:summarize](/docs/commands/docs/summarize) - Update just codebase summary
- [/watzup](/docs/commands/core/watzup) - Review recent changes
- [/git:cm](/docs/commands/git/commit) - Commit documentation updates
---
**Key Takeaway**: `/docs:update` ensures your documentation stays synchronized with your codebase through comprehensive analysis and systematic updates across all documentation files.
---
# /docs:summarize
Section: docs
Category: commands/docs-cmd
URL: https://docs.claudekit.cc/docs/docs-cmd/summarize
# /docs:summarize
Generate a comprehensive summary of your codebase by analyzing project structure, files, and architecture. Creates or updates `./docs/codebase-summary.md` with detailed project overview, file statistics, and token counts.
## Syntax
```bash
/docs:summarize
```
No arguments needed - the command analyzes your entire codebase automatically.
## How It Works
The `/docs:summarize` command uses the `docs-manager` agent:
### 1. Codebase Compaction
- Runs `repomix` to analyze entire project
- Generates `./repomix-output.xml` with complete codebase
- Calculates file counts and token statistics
- Identifies project structure and patterns
- Excludes build artifacts and dependencies
### 2. Summary Generation
- Parses compacted codebase data
- Identifies key components and modules
- Analyzes file organization patterns
- Documents architectural decisions
- Lists technology stack and dependencies
### 3. Documentation Creation
- Creates/updates `./docs/codebase-summary.md`
- Includes project structure tree
- Documents file organization
- Lists key files and their purposes
- Provides token count and statistics
- Adds timestamp and version information
### 4. Quality Assurance
- Ensures consistent formatting
- Validates file paths and references
- Checks for completeness
- Maintains documentation standards
## When to Use
### ✅ Perfect For
**New Team Members**
```bash
# Onboarding new developers
/docs:summarize
```
**Project Handoff**
```bash
# Preparing project for transfer
/docs:summarize
```
**Architecture Review**
```bash
# Before major refactoring
/docs:summarize
```
**AI Context Generation**
```bash
# Generate context for AI tools
/docs:summarize
```
**Regular Maintenance**
```bash
# Weekly/monthly codebase review
/docs:summarize
```
### ❌ Don't Use For
**Full Documentation Update**
```bash
❌ /docs:summarize # Only updates summary
✅ /docs:update # Updates all documentation
```
**Detailed API Docs**
```bash
❌ /docs:summarize # High-level overview only
✅ /docs:update [focus on API documentation]
```
## Examples
### Basic Codebase Summary
```bash
/docs:summarize
```
**What happens:**
```
1. Analyzing codebase
$ repomix
- Scanning project files...
- Processing: 245 files
- Excluding: node_modules, dist, .git
2. Generating compaction
✓ Created: ./repomix-output.xml
- Size: 1.2 MB
- Tokens: 325,478
- Lines of code: 45,234
3. Creating summary
✓ Updated: ./docs/codebase-summary.md
Contents:
- Project Overview
- Technology Stack
- Project Structure (directory tree)
- Key Components
- File Organization
- Statistics and Metrics
✓ Codebase summary generated (1m 23s)
```
### Large Monorepo
```bash
/docs:summarize
```
**What happens:**
```
1. Analyzing monorepo structure
$ repomix
- Packages found: 8
- Total files: 1,247
- Shared components: 156
2. Processing packages
✓ packages/api (234 files, 78K tokens)
✓ packages/web (445 files, 125K tokens)
✓ packages/mobile (334 files, 89K tokens)
✓ packages/shared (156 files, 45K tokens)
✓ packages/database (45 files, 12K tokens)
✓ packages/auth (67 files, 23K tokens)
✓ packages/payment (89 files, 34K tokens)
✓ packages/analytics (78 files, 28K tokens)
3. Generating comprehensive summary
✓ Created: ./docs/codebase-summary.md
Sections:
- Monorepo Overview
- Package Structure
- Inter-package Dependencies
- Shared Libraries
- Build and Deployment
- Development Workflow
- Total Statistics
Metrics:
- Total tokens: 434,000
- Total files: 1,247
- Total lines: 156,789
✓ Monorepo summary complete (2m 45s)
```
### Microservices Architecture
```bash
/docs:summarize
```
**What happens:**
```
1. Analyzing microservices
$ repomix
- Services detected: 7
- Shared libraries: 3
- Configuration files: 23
2. Service breakdown
✓ services/api-gateway (45 files, 15K tokens)
✓ services/auth (67 files, 23K tokens)
✓ services/users (89 files, 31K tokens)
✓ services/payments (78 files, 28K tokens)
✓ services/notifications (56 files, 19K tokens)
✓ services/analytics (67 files, 24K tokens)
✓ services/background-jobs (45 files, 16K tokens)
3. Creating summary
✓ Updated: ./docs/codebase-summary.md
Structure:
- Microservices Architecture Overview
- Service Descriptions
- Communication Patterns
- Shared Infrastructure
- Data Flow Diagrams
- Deployment Configuration
Statistics:
- Total services: 7
- Total tokens: 156,000
- Average service size: 22,000 tokens
✓ Microservices summary generated (1m 56s)
```
### TypeScript Monorepo
```bash
/docs:summarize
```
**What happens:**
```
1. Scanning TypeScript project
$ repomix
- TypeScript files: 456
- Type definitions: 67
- Test files: 234
- Configuration: 12
2. Analyzing structure
✓ Core modules identified
✓ Type hierarchies mapped
✓ Test coverage calculated
✓ Dependencies analyzed
3. Summary created
✓ ./docs/codebase-summary.md
Includes:
- TypeScript Configuration
- Module Structure
- Type System Overview
- Key Interfaces and Types
- Test Organization
- Build Configuration
Metrics:
- TypeScript version: 5.3.3
- Total tokens: 289,456
- Type files: 67
- Test coverage: 87%
✓ TypeScript summary complete (1m 34s)
```
## Generated Summary Structure
The `./docs/codebase-summary.md` file includes:
### Project Overview
- Project name and description
- Technology stack
- Key features
- Development status
### Technology Stack
- Programming languages
- Frameworks and libraries
- Build tools
- Testing frameworks
- Infrastructure tools
### Project Structure
```
project-root/
├── src/
│   ├── api/
│   ├── services/
│   ├── models/
│   └── utils/
├── tests/
├── docs/
└── config/
```
### Key Components
- Main application entry points
- Core business logic modules
- API endpoints and routes
- Database models and schemas
- Utility libraries
### File Organization
- Naming conventions
- Directory structure patterns
- Module organization
- Configuration files
### Statistics and Metrics
- Total files: 245
- Total lines of code: 45,234
- Total tokens: 325,478
- Test coverage: 87%
- Last updated: 2025-10-29
## Repomix Integration
### What is Repomix?
Repomix is a tool that packs entire repositories into single, AI-friendly files:
- Generates comprehensive codebase snapshot
- Calculates token counts for LLM context
- Preserves project structure
- Excludes unnecessary files
### Generated Files
**./repomix-output.xml**
- Complete codebase compaction
- XML format optimized for AI parsing
- Includes all source files
- Metadata and statistics
### Repomix Configuration
Default exclusions (via `.gitignore` and `.repomixignore`):
```
node_modules/
dist/
build/
.git/
*.log
coverage/
.env
```
## Agent Invoked
The command uses the **docs-manager agent** with these capabilities:
- **Codebase Analysis**: Comprehensive project scanning
- **Structure Identification**: Pattern recognition in file organization
- **Statistics Generation**: File counts, token counts, metrics
- **Documentation Creation**: Formatted markdown generation
- **Quality Assurance**: Consistency and completeness validation
## Best Practices
### Regular Updates
✅ **Periodic summaries:**
```bash
# Weekly or after major changes
/docs:summarize
```
❌ **Too frequent:**
```bash
# After every tiny change
/fix:fast [typo]
/docs:summarize # Wasteful
```
### Before Major Work
✅ **Establish baseline:**
```bash
# Before refactoring
/docs:summarize
# Perform refactoring
/docs:summarize # Compare changes
```
### For Onboarding
✅ **Prepare for new developers:**
```bash
# Update documentation
/docs:summarize
/docs:update
# New team member gets complete picture
```
## Workflow
### Onboarding New Developers
```bash
# 1. Generate summary
/docs:summarize
# 2. Update full documentation
/docs:update
# 3. Share documentation
# Point new developers to ./docs/codebase-summary.md
```
### Architecture Review
```bash
# 1. Generate current state
/docs:summarize
# 2. Review summary
cat docs/codebase-summary.md
# 3. Plan refactoring based on insights
/plan [refactor based on architecture review]
```
### Project Handoff
```bash
# 1. Generate comprehensive summary
/docs:summarize
# 2. Update all documentation
/docs:update
# 3. Commit documentation
/git:cm
# 4. Share with receiving team
```
### Regular Maintenance
```bash
# Weekly/monthly task
/docs:summarize
# Review changes
git diff docs/codebase-summary.md
# Commit if significant changes
/git:cm
```
## Troubleshooting
### Repomix Not Found
**Problem:** `repomix` command not available
**Solution:**
```bash
# Install repomix
npm install -g repomix
# Then run command
/docs:summarize
```
### Large Codebase Timeout
**Problem:** Timeout on very large projects
**Solution:**
```bash
# Configure repomix to exclude more
echo "target/" >> .repomixignore
echo "*.min.js" >> .repomixignore
# Then retry
/docs:summarize
```
### Missing Files in Summary
**Problem:** Some files not included
**Solution:**
```bash
# Check .gitignore and .repomixignore
# Remove exclusions if needed
# Then regenerate
/docs:summarize
```
## Token Count Usage
### Why Token Counts Matter
Token counts help:
- **AI Context Planning**: Know if codebase fits in LLM context window
- **Documentation Scope**: Understand documentation requirements
- **Code Review**: Estimate review effort
- **Refactoring Planning**: Assess complexity
### Example Token Breakdown
```
Total tokens: 325,478
Breakdown:
- API layer: 89,234 tokens (27%)
- Services: 123,456 tokens (38%)
- Models: 45,678 tokens (14%)
- Utils: 34,567 tokens (11%)
- Tests: 32,543 tokens (10%)
```
### Context Window Planning
```
Claude 3.5 Sonnet: 200K tokens
Project size: 325K tokens
Strategy:
- Analyze by module (< 200K each)
- Use codebase summary for high-level decisions
- Deep dive into specific modules as needed
```
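If it helps to automate this planning, a small script can compare per-module token counts against a context budget. A hypothetical sketch using the example figures above:
```typescript
// Hypothetical helper for context-window planning; figures match the example above.
const CONTEXT_BUDGET = 200_000; // approximate single-pass context window

const moduleTokens: Record<string, number> = {
  api: 89_234,
  services: 123_456,
  models: 45_678,
  utils: 34_567,
  tests: 32_543,
};

const total = Object.values(moduleTokens).reduce((a, b) => a + b, 0);
console.log(`Total: ${total} tokens (budget ${CONTEXT_BUDGET})`);

for (const [name, tokens] of Object.entries(moduleTokens)) {
  const fits = tokens <= CONTEXT_BUDGET;
  console.log(`${name}: ${tokens} tokens - ${fits ? "fits in one pass" : "needs splitting"}`);
}
```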
## Related Commands
### Full Documentation Update
```bash
# Summary only
/docs:summarize
# All documentation
/docs:update
```
### Initialize Documentation
```bash
# First-time setup
/docs:init
# Regular updates
/docs:summarize
```
### Review Changes
```bash
# Generate summary
/docs:summarize
# Review recent work
/watzup
```
## Output Files
After running `/docs:summarize`:
```
./
├── docs/
│   └── codebase-summary.md (created/updated)
└── repomix-output.xml (generated)
```
## Metrics
Typical `/docs:summarize` performance:
- **Time**: 1-3 minutes (depending on codebase size)
- **Files analyzed**: All source files (excluding node_modules, build artifacts)
- **Output size**: 5-50 KB markdown file
- **Token accuracy**: 99%+ accurate counting
- **Update frequency**: Recommended weekly or after major changes
## Next Steps
After using `/docs:summarize`:
- [/docs:update](/docs/commands/docs/update) - Update all documentation
- [/docs:init](/docs/commands/docs/init) - Initialize full documentation
- [/watzup](/docs/commands/core/watzup) - Review recent changes
- [/git:cm](/docs/commands/git/commit) - Commit documentation
---
**Key Takeaway**: `/docs:summarize` provides a quick, comprehensive overview of your codebase structure, helping developers understand project organization and serving as valuable context for AI-assisted development.
---
# /fix
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/index
# /fix
Intelligent issue router. Analyzes your issue description and automatically routes to the most appropriate fix command based on type, complexity, and scope.
## Syntax
```bash
/fix [issues]
```
## When to Use
- **Any Bug or Issue**: Let ClaudeKit decide the approach
- **Unknown Complexity**: Don't know if simple or complex
- **Quick Workflow**: Describe problem, let router decide
- **Multiple Issues**: Can detect and route to parallel fixing
## Quick Example
```bash
/fix [users sometimes getting logged out randomly]
```
**Output**:
```
Analyzing issue...
Keywords detected: "randomly", "sometimes" (intermittent)
Scope: Unknown (needs investigation)
Complexity: High
Decision: Complex issue requiring investigation
→ Routing to /fix:hard
Executing /fix:hard with enhanced prompt...
```
## Arguments
- `[issues]`: Description of issue(s) to fix (required). Can be single issue or numbered list.
## Pre-Routing Check
Before analyzing the issue, `/fix` checks for existing plans:
```
Checking for active plan...
Active plan found: plans/251128-auth-system/plan.md
→ Routing to /code plans/251128-auth-system/plan.md
```
If an active plan exists at `/.claude/active-plan`, routes to `/code` instead.
## Decision Tree
Routes are evaluated in priority order:
```
/fix [issues]
      │
      ▼
1. Existing plan found?                                → /code
      │ No
      ▼
2. Type errors? (type, typescript, tsc)                → /fix:types
      │ No
      ▼
3. UI/UX issue? (ui, ux, design, layout, style, css)   → /fix:ui
      │ No
      ▼
4. CI/CD issue? (github actions, pipeline, workflow)   → /fix:ci
      │ No
      ▼
5. Test failure? (test, spec, jest, vitest, failing)   → /fix:test
      │ No
      ▼
6. Log analysis? (logs, error logs, stack trace)       → /fix:logs
      │ No
      ▼
7. Multiple independent issues? (2+ unrelated)         → /fix:parallel
      │ No
      ▼
8. Complex issue? (complex, architecture, refactor)    → /fix:hard
      │ No
      ▼
9. Default: Simple, single-file issue                  → /fix:fast
```
## Keyword Detection
| Keywords | Route | Description |
|----------|-------|-------------|
| `type`, `typescript`, `tsc`, `type error` | /fix:types | TypeScript type errors |
| `ui`, `ux`, `design`, `layout`, `style`, `css`, `responsive` | /fix:ui | UI/UX issues |
| `github actions`, `pipeline`, `ci/cd`, `workflow`, `build failed` | /fix:ci | CI/CD failures |
| `test`, `spec`, `jest`, `vitest`, `failing test`, `test error` | /fix:test | Test failures |
| `logs`, `error logs`, `log file`, `stack trace`, `exception` | /fix:logs | Log-based debugging |
| `complex`, `architecture`, `refactor`, `major`, `rewrite` | /fix:hard | Complex issues |
| `typo`, `simple`, `quick`, `single file`, `obvious` | /fix:fast | Simple fixes |
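Conceptually, this step behaves like a keyword lookup evaluated in priority order. A simplified TypeScript sketch of the idea (the real router also weighs scope and complexity, so this is an illustration, not ClaudeKit source):
```typescript
// Simplified illustration of keyword-based routing, not the actual implementation.
const routes: Array<{ keywords: RegExp; command: string }> = [
  { keywords: /\b(type error|typescript|tsc|type)\b/i, command: "/fix:types" },
  { keywords: /\b(ui|ux|design|layout|style|css|responsive)\b/i, command: "/fix:ui" },
  { keywords: /\b(github actions|pipeline|ci\/cd|workflow|build failed)\b/i, command: "/fix:ci" },
  { keywords: /\b(test|spec|jest|vitest|failing test)\b/i, command: "/fix:test" },
  { keywords: /\b(logs?|stack trace|exception)\b/i, command: "/fix:logs" },
  { keywords: /\b(complex|architecture|refactor|major|rewrite)\b/i, command: "/fix:hard" },
];

function route(issue: string): string {
  // First matching rule wins; anything unmatched falls through to the simple path.
  return routes.find((r) => r.keywords.test(issue))?.command ?? "/fix:fast";
}

console.log(route("typescript errors in user service")); // "/fix:types"
console.log(route("change button color from blue to green")); // "/fix:fast"
```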
## Routing Examples
### Route 1: Type Errors → /fix:types
```bash
/fix [typescript errors in user service]
```
```
Keywords: "typescript"
→ Routing to /fix:types
```
### Route 2: UI Issues → /fix:ui
```bash
/fix [button alignment broken on mobile layout]
```
```
Keywords: "button", "mobile", "layout"
→ Routing to /fix:ui
```
### Route 3: CI/CD → /fix:ci
```bash
/fix [GitHub Actions workflow failing on test step]
```
```
Keywords: "GitHub Actions", "workflow", "failing"
→ Routing to /fix:ci
```
### Route 4: Test Failures → /fix:test
```bash
/fix [jest tests failing after refactor]
```
```
Keywords: "jest", "tests", "failing"
→ Routing to /fix:test
```
### Route 5: Log Analysis → /fix:logs
```bash
/fix [stack trace shows null pointer in auth module]
```
```
Keywords: "stack trace"
→ Routing to /fix:logs
```
### Route 6: Multiple Issues → /fix:parallel
```bash
/fix [
1. Button not clickable on mobile
2. API timeout on /users endpoint
3. Typo in footer copyright
]
```
```
Issues detected: 3 independent issues
→ Routing to /fix:parallel
```
### Route 7: Complex → /fix:hard
```bash
/fix [users randomly getting logged out, need to investigate session handling]
```
```
Keywords: "randomly", "investigate"
Complexity: High
→ Routing to /fix:hard
```
### Route 8: Simple → /fix:fast
```bash
/fix [change button color from blue to green]
```
```
Keywords: "change", single-file implied
Complexity: Low
→ Routing to /fix:fast
```
## Clarification Questions
If the issue is ambiguous, `/fix` asks before routing:
```bash
/fix [something's wrong with the API]
```
```
Issue unclear. Please clarify:
1. What specific behavior are you seeing?
[ ] Errors returned
[ ] Slow responses
[ ] Wrong data
[ ] Connection issues
2. Is this affecting:
[ ] Single endpoint
[ ] Multiple endpoints
[ ] All endpoints
3. Do you have:
[ ] Error logs
[ ] Stack traces
[ ] Reproduction steps
```
## Prompt Enhancement
Before delegating, `/fix` enhances your description:
**Your Input**:
```bash
/fix [login broken]
```
**Enhanced Prompt**:
```
Analyze and fix login functionality issue.
Context:
- Issue: Login broken (needs investigation)
- Scope: Authentication system
- Priority: High (core functionality)
Investigation steps:
1. Check authentication flow
2. Verify API endpoints
3. Review session handling
4. Check error logs
5. Test login scenarios
Fix and validate:
- Implement solution
- Add tests
- Verify fix
- Document changes
```
## Complete Example
### Scenario: Mixed Complexity Issue
```bash
/fix [payment processing fails for international cards]
```
**Routing Analysis**:
```
Analyzing issue...
Keywords: "payment", "fails", "international cards"
Scope: Payment system (likely multiple files)
Complexity: Medium-High
Investigation needed: Yes (specific card types)
Decision: Complex issue requiring investigation
→ Routing to /fix:hard
Enhancing prompt...
Added context:
- Payment provider integration points
- Currency handling considerations
- International card validation rules
Executing /fix:hard with enhanced prompt...
[Fix execution begins]
```
## Manual Override
If you know the right command, call it directly:
```bash
# Override router, use specific command
/fix:types [your issue]
/fix:ui [your issue]
/fix:fast [your issue]
/fix:hard [your issue]
/fix:parallel [issue list]
```
## Related Commands
| Command | Description | When Auto-Selected |
|---------|-------------|--------------------|
| [/fix:fast](/docs/commands/fix/fast) | Quick single-file fixes | Simple, clear issues |
| [/fix:hard](/docs/commands/fix/hard) | Complex multi-file fixes | Investigation needed |
| [/fix:types](/docs/commands/fix/types) | TypeScript type errors | Type-related keywords |
| [/fix:ui](/docs/commands/fix/ui) | UI/UX issues | UI/design keywords |
| [/fix:ci](/docs/commands/fix/ci) | CI/CD failures | Pipeline keywords |
| [/fix:test](/docs/commands/fix/test) | Test failures | Test-related keywords |
| [/fix:logs](/docs/commands/fix/logs) | Log-based debugging | Log/trace keywords |
| [/fix:parallel](/docs/commands/fix/parallel) | Multiple independent issues | 2+ unrelated issues |
| [/code](/docs/commands/core/code) | Execute existing plan | Active plan found |
| [/debug](/docs/commands/core/debug) | Investigate issues | Deep investigation |
## Best Practices
### Be Descriptive
```bash
# Good: Specific with context
/fix [email validation accepts invalid emails like "test@" without domain]
# Less helpful: Vague
/fix [validation broken]
```
### Include Location if Known
```bash
# Good: Location provided
/fix [missing await in getUserData function in user.service.ts]
# Less context
/fix [missing await somewhere]
```
### Trust the Router
Let `/fix` decide the strategy:
```bash
# Just describe the problem
/fix [users reporting slow page loads]
# Don't overthink which command to use
```
---
**Key Takeaway**: `/fix` is your intelligent bug-fixing entry point. Describe the problem, and it routes to the optimal fix command based on type, complexity, and scope analysis.
---
# /fix:test
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/test
# /fix:test
Run tests and automatically fix failing tests.
## Syntax
```bash
/fix:test [issues]
```
## Workflow
1. **Compile**: `tester` subagent compiles code, fixes syntax errors
2. **Run Tests**: Execute test suite, report results
3. **Debug**: `debugger` subagent finds root cause of failures
4. **Plan**: `planner` subagent creates fix plan
5. **Implement**: Main agent implements fixes
6. **Re-test**: Verify fixes work
7. **Review**: `code-reviewer` validates changes
8. **Repeat**: If still failing, loop from step 2
## Example
```bash
/fix:test
# Output:
# ✗ 3 tests failed
# - auth/login.test.js: Expected 200, got 401
#
# Debugging...
# Root cause: JWT secret not loaded in test env
#
# Planning fix...
# Implementing...
#
# ✓ All tests passing (48/48)
```
## When to Use
- Test suite failing
- CI tests broken
- After refactoring
- Before merging PR
## Related Commands
| Command | Use Case |
|---------|----------|
| `/test` | Just run tests (no fix) |
| `/fix:fast` | Quick fix for simple issues |
| `/fix:ci` | Fix CI/CD specific failures |
---
**Key Takeaway**: Use `/fix:test` to automatically debug and fix failing tests with full analysis cycle.
---
# /fix:parallel
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/parallel
# /fix:parallel
Parallel issue fixing command. Resolves multiple independent issues simultaneously by spawning parallel fullstack-developer agents, each handling one issue.
## Syntax
```bash
/fix:parallel [issues]
```
## When to Use
- **Multiple Unrelated Bugs**: 2+ issues that don't affect same files
- **Independent Fixes**: Bugs with no shared context needed
- **Time-Sensitive**: Need multiple fixes done quickly
- **Batch Fixes**: Clearing backlog of small issues
## Quick Example
```bash
/fix:parallel [
1. Button not clickable on mobile
2. API timeout on /users endpoint
3. Typo in footer copyright
]
```
**Output**:
```
Parsing issues...
Found 3 issues to fix
Validating independence...
✓ Issue 1: src/components/Button.tsx
✓ Issue 2: src/api/users.ts
✓ Issue 3: src/components/Footer.tsx
No file conflicts detected.
Launching 3 parallel agents...
Agent 1: Fixing mobile button...
Agent 2: Fixing API timeout...
Agent 3: Fixing footer typo...
Progress:
[██████████] Agent 3: Complete (12s)
[██████████] Agent 1: Complete (28s)
[██████████] Agent 2: Complete (45s)
All agents complete.
Summary:
✓ Issue 1: Fixed - Added touch event handler
✓ Issue 2: Fixed - Increased timeout to 30s
✓ Issue 3: Fixed - Changed "Copywrite" to "Copyright"
Total time: 45s (vs ~90s sequential)
```
## Arguments
- `[issues]`: List of issues to fix (required). Use numbered or bulleted format.
## Issue Format
### Numbered List
```bash
/fix:parallel [
1. Button not responding on Safari
2. Date picker shows wrong timezone
3. Search results not paginating
]
```
### Bulleted List
```bash
/fix:parallel [
- Missing loading spinner on submit
- Incorrect currency symbol for EUR
- Broken link in navigation
]
```
### Inline Format
```bash
/fix:parallel [fix mobile menu toggle; fix email validation regex; fix footer alignment]
```
## What It Does
### Step 1: Parse Issues
Extracts individual issues from your input:
```
Input: "1. Button bug 2. API error 3. Typo"
Parsed:
- Issue 1: Button bug
- Issue 2: API error
- Issue 3: Typo
```
### Step 2: Validate Independence
Checks that issues don't affect the same files:
```
Analyzing file dependencies...
Issue 1: Likely affects src/components/Button.tsx
Issue 2: Likely affects src/api/endpoints.ts
Issue 3: Likely affects src/components/Footer.tsx
Overlap check: None detected ✓
Issues are independent.
```
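The independence check boils down to intersecting the predicted file sets for each issue. A rough sketch (the file predictions are stand-ins for what the agents infer):
```typescript
// Hypothetical overlap check between issues, mirroring the output above.
const predictedFiles: Record<string, string[]> = {
  "Issue 1": ["src/components/Button.tsx"],
  "Issue 2": ["src/api/endpoints.ts"],
  "Issue 3": ["src/components/Footer.tsx"],
};

function findOverlaps(issues: Record<string, string[]>): string[] {
  const seen = new Map<string, string>();
  const overlaps: string[] = [];
  for (const [issue, files] of Object.entries(issues)) {
    for (const file of files) {
      if (seen.has(file)) overlaps.push(`${file} (shared by ${seen.get(file)} and ${issue})`);
      else seen.set(file, issue);
    }
  }
  return overlaps;
}

const overlaps = findOverlaps(predictedFiles);
console.log(overlaps.length === 0 ? "Issues are independent." : overlaps.join("\n"));
```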
### Step 3: Launch Parallel Agents
Spawns one fullstack-developer agent per issue:
```
Launching agents...
Agent 1: fullstack-developer → Issue 1
Agent 2: fullstack-developer → Issue 2
Agent 3: fullstack-developer → Issue 3
All agents running in parallel.
```
### Step 4: Monitor Progress
Tracks each agent with timeout (10 minutes per agent):
```
Progress:
[██████░░░░] Agent 1: Investigating... (15s)
[██████████] Agent 2: Complete (22s)
[███████░░░] Agent 3: Implementing fix... (18s)
```
### Step 5: Aggregate Results
Collects results from all agents:
```
Results collected:
- Agent 1: Success - 1 file changed
- Agent 2: Success - 2 files changed
- Agent 3: Success - 1 file changed
Total: 4 files changed
```
### Step 6: Report Summary
Provides consolidated fix report:
```
───────────────────────────────────────
PARALLEL FIX SUMMARY
───────────────────────────────────────
Issue 1: Button not responding on Safari
Status: ✓ Fixed
Files: src/components/Button.tsx
Changes: Added -webkit-tap-highlight-color
Issue 2: Date picker wrong timezone
Status: ✓ Fixed
Files: src/utils/date.ts, src/components/DatePicker.tsx
Changes: Added timezone normalization
Issue 3: Search pagination broken
Status: ✓ Fixed
Files: src/hooks/useSearch.ts
Changes: Fixed offset calculation
───────────────────────────────────────
Total time: 45 seconds
Sequential estimate: ~2 minutes
Speedup: 2.7x
───────────────────────────────────────
```
## Complete Example
### Scenario: Sprint Cleanup
```bash
/fix:parallel [
1. Login button disabled state not showing
2. Profile image not loading for new users
3. Search autocomplete not closing on blur
4. Footer social links point to wrong URLs
]
```
**Execution**:
```
Parsing issues...
Found 4 issues
Validating independence...
Issue 1: src/components/auth/LoginButton.tsx
Issue 2: src/components/profile/Avatar.tsx
Issue 3: src/components/search/Autocomplete.tsx
Issue 4: src/components/layout/Footer.tsx
No overlapping files ✓
Launching 4 parallel agents...
[Agent 1] LoginButton: Investigating disabled state...
[Agent 2] Avatar: Checking image loading logic...
[Agent 3] Autocomplete: Analyzing blur behavior...
[Agent 4] Footer: Reviewing social links...
Progress:
[██████████] Agent 4: Complete (8s)
[██████████] Agent 1: Complete (15s)
[██████████] Agent 3: Complete (22s)
[██████████] Agent 2: Complete (35s)
───────────────────────────────────────
RESULTS
───────────────────────────────────────
✓ Issue 1: Fixed disabled prop binding
✓ Issue 2: Added fallback for undefined avatar
✓ Issue 3: Added onBlur handler with delay
✓ Issue 4: Updated social media URLs
Files changed: 4
Tests passing: ✓
Total time: 35s
───────────────────────────────────────
```
## Dependency Detection
If issues share files, `/fix:parallel` routes to `/fix:hard`:
```bash
/fix:parallel [
1. Auth token not refreshing
2. Login redirect broken
]
```
```
Validating independence...
Issue 1: Likely affects src/auth/token.ts, src/auth/session.ts
Issue 2: Likely affects src/auth/login.ts, src/auth/session.ts
⚠️ Overlap detected: src/auth/session.ts
Issues are not independent.
→ Routing to /fix:hard instead
Both issues may share context in auth/session.ts.
Sequential fixing recommended for consistency.
```
## Limitations
### Maximum Agents
```
Max parallel agents: 5
```
If more than 5 issues, batches into waves:
```
Found 8 issues
Wave 1 (parallel): Issues 1-5
Wave 2 (parallel): Issues 6-8
Executing Wave 1...
```
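Wave batching is essentially chunking the issue list into groups of at most five and awaiting each group in parallel. A sketch of the idea, where `fixIssue` is a placeholder for spawning one agent:
```typescript
// Hypothetical wave batching; fixIssue stands in for running one agent per issue.
async function fixIssue(issue: string): Promise<string> {
  return `fixed: ${issue}`; // placeholder for real agent work
}

async function fixInWaves(issues: string[], maxParallel = 5): Promise<string[]> {
  const results: string[] = [];
  for (let i = 0; i < issues.length; i += maxParallel) {
    const wave = issues.slice(i, i + maxParallel);
    console.log(`Wave ${i / maxParallel + 1}: ${wave.length} issues in parallel`);
    results.push(...(await Promise.all(wave.map(fixIssue))));
  }
  return results;
}

fixInWaves(["issue 1", "issue 2", "issue 3", "issue 4", "issue 5", "issue 6"]).then(console.log);
```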
### Independence Required
Issues must not share files:
```
✓ Independent: Button fix + API fix + Footer fix
✗ Dependent: Auth fix + Session fix (share auth module)
```
### Timeout
Each agent has 10-minute timeout:
```
Agent timeout: 10 minutes
Agent 3 timed out.
Partial results collected.
```
## Best Practices
### Group Related Issues Elsewhere
```bash
# Bad: Related issues
/fix:parallel [
1. Auth token expiring
2. Session not persisting
]
# Good: Use /fix:hard for related issues
/fix:hard [auth token expiring and session not persisting]
```
### Keep Issues Specific
```bash
# Good: Specific, actionable
/fix:parallel [
1. Button color wrong on hover (should be #2563eb)
2. Missing aria-label on search input
3. Footer copyright says 2024
]
# Less effective: Vague
/fix:parallel [
1. UI looks wrong
2. Accessibility issues
3. Update footer
]
```
### Don't Exceed 5 Issues
```bash
# Optimal: 2-5 issues
/fix:parallel [
1. Issue one
2. Issue two
3. Issue three
]
# Too many: Consider multiple runs
/fix:parallel [1-5]
/fix:parallel [6-10]
```
## When NOT to Use
### Related Issues
Issues that affect shared code:
```bash
# Don't use parallel for:
- Auth token + Session handling (shared auth code)
- Database query + Connection pool (shared DB layer)
- API route + Middleware (shared request flow)
# Use instead:
/fix:hard [describe related issues together]
```
### Complex Investigation
Issues requiring deep analysis:
```bash
# Don't use parallel for:
- "App crashes randomly" (needs investigation)
- "Performance degraded" (needs profiling)
# Use instead:
/fix:hard [issue needing investigation]
```
## Related Commands
- [/fix](/docs/commands/fix) - Intelligent routing (may route here)
- [/fix:fast](/docs/commands/fix/fast) - Single simple issue
- [/fix:hard](/docs/commands/fix/hard) - Complex or related issues
- [/code:parallel](/docs/commands/code/parallel) - Parallel plan execution
- [/cook:auto:parallel](/docs/commands/cook/auto-parallel) - Parallel feature implementation
---
**Key Takeaway**: `/fix:parallel` accelerates bug fixing by resolving multiple independent issues simultaneously. Provide a list of unrelated issues, and parallel agents handle them concurrently for faster resolution.
---
# /fix:fast
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/fast
# /fix:fast
Fix minor bugs and issues quickly. This command skips extensive codebase analysis and planning, getting straight to implementation and testing. Perfect for simple fixes where you know what needs to be done.
## Syntax
```bash
/fix:fast [bug description]
```
## How It Works
The `/fix:fast` command follows a streamlined workflow:
### 1. Quick Analysis
- Reads the bug description
- Identifies likely location from description
- Minimal codebase scanning
### 2. Direct Implementation
- Implements the fix immediately
- No extensive planning phase
- Follows existing patterns
### 3. Testing
- Runs relevant tests
- Validates the fix works
- Checks for regressions
### 4. Summary Report
- Files changed
- Tests status
- Confidence level
## When to Use
### ✅ Perfect For
**Simple Typos**
```bash
/fix:fast [typo in error message: "sucessful" should be "successful"]
```
**Minor UI Issues**
```bash
/fix:fast [button text says "Submitt" instead of "Submit"]
```
**Simple Logic Fixes**
```bash
/fix:fast [validation allows empty email field when it should be required]
```
**Configuration Updates**
```bash
/fix:fast [update API rate limit from 100 to 200 requests per minute]
```
**Obvious Bugs**
```bash
/fix:fast [forgot to add await keyword before database query in login handler]
```
### ❌ Don't Use For
**Complex Issues**
```bash
❌ /fix:fast [users randomly getting logged out]
✅ /fix:hard [users randomly getting logged out]
```
**System-Wide Problems**
```bash
❌ /fix:fast [memory leak causing crashes]
✅ /fix:hard [memory leak causing crashes]
```
**Unknown Root Cause**
```bash
❌ /fix:fast [something is broken with payments]
✅ /fix:hard [payment processing failing with unclear error]
```
**Multiple Related Issues**
```bash
❌ /fix:fast [authentication system has multiple issues]
✅ /fix:hard [authentication system has multiple issues]
```
## Examples
### Typo Fix
```bash
/fix:fast [fix typo in welcome message: "Welcom" should be "Welcome"]
```
**What happens:**
```
1. Locates welcome message
- Found in: src/components/Welcome.tsx
2. Fixes typo
- Changed: "Welcom to our app"
- To: "Welcome to our app"
3. Runs tests
- UI tests: ✓ passed
- Integration tests: ✓ passed
✓ Fix complete (12 seconds)
```
### Validation Fix
```bash
/fix:fast [email validation accepts invalid emails like "test@"]
```
**What happens:**
```
1. Locates validation function
- Found in: src/utils/validation.js
2. Updates regex pattern
- Old: /^[^\s@]+@[^\s@]+$/
- New: /^[^\s@]+@[^\s@]+\.[^\s@]+$/
3. Runs validation tests
- test("test@" rejected): ✓ passed
- test("test@domain.com" accepted): ✓ passed
- All 15 tests passed
✓ Fix complete (18 seconds)
```
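In code, the corrected validator from this example might look like the following sketch; the helper name is illustrative and the project's actual utility may differ:
```typescript
// Sketch of the tightened email validator from the example above.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; // now requires a domain with a dot

function isValidEmail(email: string): boolean {
  return EMAIL_PATTERN.test(email);
}

console.log(isValidEmail("test@")); // false - rejected after the fix
console.log(isValidEmail("test@domain.com")); // true
```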
### Missing Await
```bash
/fix:fast [forgot await in getUserData function causing Promise instead of User]
```
**What happens:**
```
1. Locates function
- Found in: src/services/user.service.ts:45
2. Adds await keyword
- const user = db.getUser(id)
- const user = await db.getUser(id)
3. Checks TypeScript types
- Type now correctly resolves to User
- No more Promise errors
4. Runs tests
- All tests passed
✓ Fix complete (9 seconds)
```
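The before/after of the missing `await` is easy to show in isolation (the types and names here are illustrative):
```typescript
// Illustrative only: why the missing await produced a Promise instead of a User.
interface User { id: string; name: string }

declare const db: { getUser(id: string): Promise<User> };

async function getUserDataBroken(id: string) {
  const user = db.getUser(id); // type is Promise<User> - accessing user.name fails
  return user;
}

async function getUserDataFixed(id: string): Promise<User> {
  const user = await db.getUser(id); // type is User, as intended
  return user;
}
```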
### Configuration Update
```bash
/fix:fast [increase session timeout from 15 minutes to 30 minutes]
```
**What happens:**
```
1. Locates config
- Found in: config/session.ts
2. Updates value
- sessionTimeout: 15 * 60 * 1000
- sessionTimeout: 30 * 60 * 1000
3. Validates config
- Config loads correctly
- Tests pass
✓ Fix complete (7 seconds)
```
## Speed Comparison
### /fix:fast vs /fix:hard
**Simple Typo:**
```
/fix:fast: 10-20 seconds
/fix:hard: 2-3 minutes (unnecessary overhead)
```
**Complex Bug:**
```
/fix:fast: May produce incomplete fix
/fix:hard: 5-10 minutes (proper investigation)
```
**Rule of Thumb:**
- Know exact fix? → `/fix:fast`
- Need investigation? → `/fix:hard`
## What Gets Skipped
To save time, `/fix:fast` skips:
1. **Extensive Codebase Scanning**
- No scout agents deployed
- Only looks at obvious locations
2. **Planning Phase**
- No detailed plan created
- Direct to implementation
3. **Research**
- No internet research
- No documentation lookup
- Uses existing knowledge only
4. **Root Cause Analysis**
- Fixes symptom, not necessarily root cause
- Assumes description is accurate
## Best Practices
### Provide Exact Location
✅ **With Location:**
```bash
/fix:fast [in src/auth/login.ts line 45, change timeout from 5000 to 10000]
```
❌ **Without Location:**
```bash
/fix:fast [change a timeout somewhere]
```
### Be Specific About Fix
✅ **Specific:**
```bash
/fix:fast [button text "Loggin" in LoginButton.tsx should be "Login"]
```
❌ **Vague:**
```bash
/fix:fast [fix button text]
```
### Verify Scope is Simple
Before using `/fix:fast`, ask:
- Do I know exactly what needs to change?
- Is it in one or two files?
- Will fix take < 5 lines of code?
- Am I confident this won't break anything?
If yes to all → Use `/fix:fast`
If no to any → Use `/fix:hard`
## Common Use Cases
### Code Typos
```bash
/fix:fast [variable name "usreName" should be "userName" in profile.service.ts]
```
### Import Statements
```bash
/fix:fast [missing import for User type in auth.controller.ts]
```
### Simple Calculations
```bash
/fix:fast [discount calculation showing 15% instead of 20%, update in checkout.ts]
```
### Text Updates
```bash
/fix:fast [update copyright year from 2023 to 2024 in footer]
```
### Simple Conditionals
```bash
/fix:fast [flip condition: if (isDisabled) should be if (!isDisabled) in SubmitButton]
```
### Default Values
```bash
/fix:fast [change default page size from 10 to 20 in pagination config]
```
## Error Handling
If `/fix:fast` can't complete the fix:
```
⚠️ Warning: Fix may be more complex than expected
Considerations:
- Multiple files affected
- Unclear location
- May require deeper analysis
Recommendation: Use /fix:hard instead
Continue with /fix:fast anyway? (y/n)
```
You can:
1. **Continue** - Try fixing anyway
2. **Cancel** - Switch to `/fix:hard`
## After Fixing
Standard workflow after `/fix:fast`:
```bash
# 1. Fix applied
/fix:fast [typo in button text]
# 2. Review changes
git diff
# 3. Run full test suite (optional)
/test
# 4. Commit if satisfied
/git:cm
```
## Troubleshooting
### Fix Didn't Work
**Problem:** Issue still occurs after fix
**Solution:**
```bash
# Issue might be more complex
/fix:hard [describe the issue again with more detail]
```
### Wrong Location
**Problem:** Fixed wrong file/location
**Solution:**
```bash
# Provide exact file path
/fix:fast [in src/correct/file.ts line 42, fix the actual issue]
```
### Tests Failing
**Problem:** Fix broke existing tests
**Solution:**
```bash
# Investigate why tests failed
/debug [test failures after fixing X]
# Or revert and use thorough approach
git restore .
/fix:hard [original issue description]
```
### Incomplete Fix
**Problem:** Fix works but feels incomplete
**Solution:**
```bash
# Add follow-up improvements
/cook [improve X with Y feature]
```
## Metrics
Typical `/fix:fast` performance:
- **Time**: 5-30 seconds
- **Files changed**: 1-2
- **Test coverage**: Existing tests only
- **Success rate**: ~95% for simple issues
Compare to `/fix:hard`:
- **Time**: 2-10 minutes
- **Files changed**: 1-10+
- **Test coverage**: New tests generated
- **Success rate**: ~99% for all issues
## Next Steps
After using `/fix:fast`:
- [/test](/docs/commands/core/test) - Run full test suite
- [/fix:hard](/docs/commands/fix/hard) - For complex issues
- [/git:cm](/docs/commands/git/commit) - Commit the fix
- [/debug](/docs/commands/core/debug) - If issues persist
---
**Key Takeaway**: `/fix:fast` is perfect for simple, well-understood fixes where speed matters. For anything complex or unclear, use `/fix:hard` instead.
---
# /fix:hard
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/hard
# /fix:hard
Fix complex bugs with thorough investigation and root cause analysis. This command deploys multiple agents to scan the codebase, research solutions, create a detailed fix plan, and implement it with comprehensive testing.
## Syntax
```bash
/fix:hard [issue description]
```
## How It Works
The `/fix:hard` command follows a comprehensive debugging workflow:
### 1. Codebase Analysis (Scout Phase)
- Deploys multiple scout agents in parallel
- Scans entire codebase for related code
- Identifies all files involved
- Maps dependencies and relationships
### 2. Root Cause Analysis
- Invokes **debugger** agent
- Analyzes logs and error traces
- Identifies underlying causes
- Distinguishes symptoms from root causes
### 3. Research Phase
- Searches for best practices
- Reviews similar issues in documentation
- Checks for known solutions
- Identifies potential pitfalls
### 4. Planning
- Creates detailed fix strategy
- Plans implementation steps
- Identifies test scenarios
- Documents approach
### 5. Implementation
- Implements fix following plan
- Handles edge cases
- Adds error handling
- Updates related code
### 6. Comprehensive Testing
- Generates new tests
- Runs full test suite
- Validates edge cases
- Checks for regressions
## When to Use
### ✅ Use /fix:hard for:
**Complex Bugs**
```bash
/fix:hard [users randomly get logged out after 5-10 minutes of activity]
```
**System-Wide Issues**
```bash
/fix:hard [memory leak in WebSocket connections causing server crashes]
```
**Intermittent Problems**
```bash
/fix:hard [database deadlocks occurring under high load]
```
**Multiple Related Issues**
```bash
/fix:hard [authentication system has issues with session handling, token refresh, and logout]
```
**Unknown Root Cause**
```bash
/fix:hard [API sometimes returns 500 errors but logs don't show anything]
```
**Performance Problems**
```bash
/fix:hard [page load time increased from 200ms to 3s after recent changes]
```
## Examples
### Complex Authentication Bug
```bash
/fix:hard [users getting logged out randomly, seems to happen more on mobile, error logs show "invalid token" but token looks valid]
```
**What happens:**
```
1. Scout Phase (3 agents parallel)
Agent 1: Scanning auth/* files
Agent 2: Scanning middleware/* files
Agent 3: Scanning session/* files
Found:
- src/auth/token.service.ts
- src/middleware/auth.middleware.ts
- src/services/session.service.ts
- config/jwt.config.ts
2. Root Cause Analysis
Debugger Agent analyzing...
Findings:
- Token expiry set to 15 minutes
- Mobile apps don't refresh tokens proactively
- Token refresh endpoint has race condition
- Session cleanup runs every 10 minutes
Root Cause: Race condition between session cleanup
and token refresh when requests overlap
3. Research
Researcher Agent: Searching best practices...
Found:
- Token refresh patterns for mobile
- Race condition prevention strategies
- Session management best practices
4. Fix Plan Created
Plan: plans/fix-token-race-condition.md
Steps:
1. Add mutex to token refresh endpoint
2. Implement exponential backoff in mobile client
3. Extend token expiry to 30 minutes
4. Add refresh token rotation
5. Update session cleanup logic
6. Add comprehensive logging
5. Implementation
Files Modified:
✓ src/auth/token.service.ts (mutex added)
✓ src/middleware/auth.middleware.ts (logging)
✓ src/services/session.service.ts (cleanup logic)
✓ config/jwt.config.ts (expiry extended)
Files Created:
✓ src/auth/token-mutex.ts
✓ tests/auth/token-race-condition.test.ts
6. Testing
New Tests: 25 generated
- Token refresh race conditions: ✓
- Concurrent requests: ✓
- Session cleanup during refresh: ✓
- Mobile token expiry: ✓
All 87 tests passed
Coverage: 94%
✓ Fix complete (8 minutes)
```
### Memory Leak Investigation
```bash
/fix:hard [server memory usage grows continuously, eventually crashes after 6-8 hours]
```
**What happens:**
```
1. Scout + Analysis
Scanning for:
- Event listeners
- Database connections
- WebSocket handlers
- Cache implementations
Suspicious areas found:
- src/websocket/connection-manager.ts
- src/cache/redis-client.ts
- src/events/event-bus.ts
2. Root Cause Analysis
Debugger Agent profiling...
Memory profile shows:
- WebSocket connections not being cleaned up
- Event listeners accumulating
- Redis clients not pooled properly
Root Cause: WebSocket disconnect handler not
removing event listeners, causing memory leak
3. Research
Best practices for:
- WebSocket lifecycle management
- Event listener cleanup
- Connection pooling
4. Implementation Plan
1. Add proper cleanup in disconnect handler
2. Implement connection pooling for Redis
3. Add automated cleanup checks
4. Add memory monitoring
5. Create load tests
5. Fix Applied
✓ Added cleanup logic
✓ Implemented connection pool
✓ Added monitoring
✓ Generated stress tests
6. Validation
Load tested for 24 hours:
- Memory stable at 450MB
- No leaks detected
- All tests passed
✓ Fix complete (12 minutes)
```
## Workflow Stages
### Stage 1: Discovery (2-3 min)
```
Scout Agents: Finding relevant code...
Progress:
[████████░░] 80%
Files scanned: 1,247
Files identified: 15
Dependencies mapped: 43
```
### Stage 2: Analysis (1-2 min)
```
Debugger Agent: Analyzing root cause...
Analyzing:
- Error patterns
- Log sequences
- Code flow
- Dependencies
Root cause identified: [description]
```
### Stage 3: Research (1-2 min)
```
Researcher Agent: Finding solutions...
Searching:
- Best practices
- Similar issues
- Documentation
- Stack Overflow
3 relevant solutions found
```
### Stage 4: Planning (1 min)
```
Planner Agent: Creating fix strategy...
Plan created: plans/fix-[issue].md
Strategy:
- Approach: [description]
- Files to modify: 8
- Tests to add: 15
- Estimated time: 10 min
```
### Stage 5: Implementation (3-5 min)
```
Code Agent: Implementing fix...
Progress:
✓ Updated authentication logic
⏳ Refactoring session handler
☐ Adding error handling
☐ Generating tests
```
### Stage 6: Validation (1-2 min)
```
Tester Agent: Running comprehensive tests...
Test Results:
✓ Unit tests: 45/45
✓ Integration tests: 23/23
✓ New tests: 15/15
✓ Regression tests: 67/67
Coverage: 96% (+8%)
```
## Best Practices
### Provide Detailed Context
✅ **Good Description:**
```bash
/fix:hard [
Users report getting logged out randomly on mobile devices.
Happens more frequently after 10-15 minutes of usage.
Error in logs: "Invalid token signature"
But token appears valid when decoded.
Issue started after deploying session management update.
Desktop users not affected.
]
```
❌ **Poor Description:**
```bash
/fix:hard [logout bug]
```
### Include Error Messages
```bash
/fix:hard [
Payment processing fails with error:
"PaymentError: Card declined (code: insufficient_funds)"
But user's card has sufficient funds.
Happens only for amounts over $100.
Stripe dashboard shows payment as "pending".
]
```
### Mention When It Started
```bash
/fix:hard [
Memory leak started after deploying v2.3.0.
Server memory grows from 200MB to 2GB over 6 hours.
Checked recent PRs: #245, #247, #251.
Suspect WebSocket changes in #247.
]
```
### Share Reproduction Steps
```bash
/fix:hard [
To reproduce:
1. Login as admin
2. Navigate to /admin/users
3. Click "Export CSV"
4. Gets 504 timeout after 30 seconds
5. CSV generation works fine for < 1000 users
6. Fails for > 5000 users
]
```
## Output Files
After `/fix:hard` completes, you'll have:
### Fix Plan
```
plans/fix-[issue]-YYYYMMDD-HHMMSS.md
```
Contains:
- Root cause analysis
- Proposed solution
- Implementation steps
- Test strategy
- Rollback plan
### Modified Code
```
src/
├── [fixed files]
└── [related updates]
```
### Generated Tests
```
tests/
├── unit/
│   └── [new tests]
├── integration/
│   └── [integration tests]
└── regression/
    └── [regression tests]
```
### Documentation Updates
```
docs/
└── [updated docs]
```
## Time Comparison
| Issue Complexity | /fix:fast | /fix:hard |
|-----------------|-----------|-----------|
| Simple typo | 10s | 2 min (overkill) |
| Logic error | 30s | 3 min |
| Race condition | May miss | 8 min |
| Memory leak | Won't find | 12 min |
| System-wide bug | Won't fix | 15 min |
## When /fix:hard Finds Multiple Issues
Sometimes investigation reveals multiple related issues:
```
Root Cause Analysis Complete
Found 3 related issues:
1. Primary: Token refresh race condition
2. Secondary: Missing token validation
3. Tertiary: Insufficient logging
Recommended fix order:
1. Fix race condition (critical)
2. Add validation (important)
3. Improve logging (nice-to-have)
Proceed with all fixes? (y/n)
```
Options:
1. **Fix all** - Address all issues together
2. **Primary only** - Fix critical issue first
3. **Custom** - Choose which to fix
## Rollback Plan
Every `/fix:hard` includes rollback instructions:
```
Rollback Plan (if needed):
1. Revert commits:
git revert HEAD~3..HEAD
2. Restore config:
cp config/jwt.config.backup.ts config/jwt.config.ts
3. Clear cache:
redis-cli FLUSHDB
4. Restart services:
pm2 restart all
5. Verify:
curl https://api.example.com/health
```
## Troubleshooting
### Investigation Too Slow
**Problem:** Taking longer than expected
**Check:**
- Codebase size (larger = longer scan)
- Scout agent progress
- May need to narrow scope
**Solution:**
```bash
# Cancel and provide more specific location
Ctrl+C
/fix:hard [same issue description, but search only in src/auth/]
```
### Can't Find Root Cause
**Problem:** Analysis completes but no clear cause
**Check:**
- Error logs available?
- Can reproduce consistently?
- Recent changes documented?
**Solution:**
```bash
# Provide more diagnostic info
/debug [collect more details about the issue]
# Then retry with more context
/fix:hard [issue + new diagnostic info]
```
### Fix Breaks Other Features
**Problem:** Fix works but causes regressions
**Solution:**
```bash
# Use rollback plan
git revert HEAD
# Investigate impact
/debug [regression details]
# Create comprehensive plan
/plan [how to fix without breaking X]
```
## After Fixing
Recommended workflow:
```bash
# 1. Fix complete
/fix:hard [issue]
# 2. Review fix plan
cat plans/fix-[issue].md
# 3. Review changes
git diff
# 4. Run full test suite
/test
# 5. Deploy to staging
/deploy [staging]
# 6. Validate in staging
# (Manual testing)
# 7. Commit
/git:cm
# 8. Deploy to production
/deploy [production]
# 9. Monitor
# (Check logs, metrics)
```
## Next Steps
- [/debug](/docs/commands/core/debug) - Investigate issues
- [/fix:fast](/docs/commands/fix/fast) - For simple fixes
- [/test](/docs/commands/core/test) - Run tests
- [/git:cm](/docs/commands/git/commit) - Commit fix
---
**Key Takeaway**: `/fix:hard` is your go-to command for complex bugs requiring investigation. It deploys multiple agents to thoroughly analyze, research, plan, and fix issues with comprehensive testing.
---
# /fix:ci
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/ci
# /fix:ci
Automatically fix GitHub Actions CI failures by analyzing workflow logs, identifying root causes, and implementing solutions. Essential for maintaining green CI/CD pipelines.
## Syntax
```bash
/fix:ci [github-workflow-url]
```
## Prerequisites
You need the GitHub CLI (`gh`) installed:
```bash
# macOS
brew install gh
# Windows
winget install GitHub.cli
# Linux
sudo apt install gh
# Authenticate
gh auth login
```
## How It Works
### 1. Log Retrieval
- Uses `gh` CLI to fetch workflow run logs
- Downloads all job logs
- Parses error messages and stack traces
- Identifies failed steps
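For reference, the same logs can be pulled by hand with the `gh` CLI; the run ID below is a placeholder:
```bash
# Fetch the full logs for a workflow run (replace the ID with your run's ID)
gh run view 123456789 --log > ci-logs.txt
# Or fetch only the logs of the failed steps
gh run view 123456789 --log-failed
```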
### 2. Error Analysis
- Categorizes failures (tests, build, lint, deploy, etc.)
- Identifies root causes
- Distinguishes between flaky and real failures
- Maps errors to specific files/lines
### 3. Solution Research
- Searches for similar CI issues
- Reviews GitHub Actions documentation
- Checks for known issues with actions
- Identifies best practices
### 4. Implementation
- Fixes code issues (if code problem)
- Updates workflow configuration (if CI config problem)
- Adds missing dependencies
- Adjusts environment settings
### 5. Verification
- Runs tests locally if possible
- Validates workflow syntax
- Suggests manual verification steps
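If you want to sanity-check before pushing, a local pass along these lines works well (script names depend on your project):
```bash
# Re-run the checks that failed in CI (adjust to your project's scripts)
npm test
npm run lint
npm run build
# Confirm the workflows are still recognized after any config edits
gh workflow list
```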
## Examples
### Test Failures
```bash
/fix:ci https://github.com/user/repo/actions/runs/123456789
```
**CI Output:**
```
Error: Cannot find module '@testing-library/react'
at require (src/components/Button.test.tsx:1:1)
```
**What happens:**
```
1. Analyzing logs...
- Job: test
- Step: Run tests
- Error: Missing dependency
2. Root Cause:
@testing-library/react not in package.json
Recently removed in commit abc123
3. Fix: Add missing dependency
npm install --save-dev @testing-library/react
4. Update package.json
✓ Added @testing-library/react@14.0.0
5. Commit fix
✓ fix(ci): add missing test dependency
Would you like to push and re-run CI? (y/n)
```
### Build Failures
```bash
/fix:ci https://github.com/user/repo/actions/runs/987654321
```
**CI Output:**
```
Error: TypeScript compilation failed
src/auth/login.ts:45:12 - error TS2339: Property 'user' does not exist on type 'Response'
```
**What happens:**
```
1. Analyzing TypeScript errors...
- 5 type errors found
- All in src/auth/ directory
2. Root Cause:
Updated Response type doesn't include 'user' property
Need to use 'data.user' instead
3. Fixes Applied:
✓ src/auth/login.ts - Changed to response.data.user
✓ src/auth/register.ts - Changed to response.data.user
✓ src/auth/logout.ts - Changed to response.data.user
4. Verification:
✓ TypeScript compiles locally
✓ All type checks pass
Push changes? (y/n)
```
### Lint Failures
```bash
/fix:ci https://github.com/user/repo/actions/runs/456789123
```
**CI Output:**
```
/home/runner/work/repo/src/utils/helper.ts
12:7 error 'result' is assigned a value but never used @typescript-eslint/no-unused-vars
45:1 error Expected linebreak to be 'LF' but found 'CRLF' prettier/prettier
```
**What happens:**
```
1. Found 15 linting errors
- 8 unused variables
- 7 formatting issues
2. Auto-fixing...
✓ Removed unused variables
✓ Fixed line endings (CRLF → LF)
✓ Applied Prettier formatting
3. Running lint locally...
✓ ESLint: No errors
✓ Prettier: All files formatted
Ready to commit and push? (y/n)
```
### Workflow Configuration Issues
```bash
/fix:ci https://github.com/user/repo/actions/runs/321654987
```
**CI Output:**
```
Error: The workflow is not valid.
.github/workflows/ci.yml (Line: 25, Col: 7): Unexpected value 'node-version'
```
**What happens:**
```
1. Workflow syntax error detected
File: .github/workflows/ci.yml
Line: 25
2. Issue:
- 'node-version' should be under 'with:' block
- Indentation incorrect
3. Fixed workflow file:
Before:
  - uses: actions/setup-node@v3
    node-version: '18'
After:
  - uses: actions/setup-node@v3
    with:
      node-version: '18'
4. Validated workflow syntax
✓ Workflow is valid
Commit fix? (y/n)
```
### Flaky Tests
```bash
/fix:ci https://github.com/user/repo/actions/runs/147258369
```
**CI Output:**
```
FAIL src/api/users.test.ts
✕ UserService › should create user
expect(received).toBe(expected)
Expected: "user-123"
Received: "user-124"
```
**What happens:**
```
1. Test failure analysis
- Test: "should create user"
- Failure: ID mismatch
- History: Failed 3 times this week
2. Root Cause: Flaky test
- Test relies on database auto-increment
- Non-deterministic ID generation
- Race condition in parallel tests
3. Fix: Use mocked IDs
Before:
const user = await userService.create(data)
expect(user.id).toBe('user-123')
After:
const user = await userService.create(data)
expect(user.id).toMatch(/^user-\d+$/)
4. Added test isolation
✓ Each test uses separate database
✓ IDs reset between tests
Re-run tests locally? (y/n)
```
## Common CI Failures
### Dependency Issues
```bash
# Package not installed
Error: Cannot find module 'express'
→ Fix: npm install express
# Version mismatch
Error: Requires Node.js >= 18, got 16
→ Fix: Update Node version in workflow
# Lock file out of sync
Error: package-lock.json is out of date
→ Fix: npm install, commit lock file
```
### Environment Issues
```bash
# Missing environment variable
Error: STRIPE_API_KEY is not defined
→ Fix: Add to GitHub Secrets
# Wrong working directory
Error: ENOENT: no such file or directory 'src/app'
→ Fix: Add 'working-directory' to step
# Insufficient permissions
Error: Permission denied
→ Fix: Add permissions to workflow
```
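For the missing-secret case, the value can be added straight from the CLI; the secret name below is only an example:
```bash
# Add the missing secret to the repository (prompts for the value)
gh secret set STRIPE_API_KEY
# Confirm it is visible to workflows
gh secret list
```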
### Test Issues
```bash
# Timeout
Error: Test exceeded 5000ms timeout
→ Fix: Increase timeout or optimize test
# Flaky test
Error: Intermittent failures
→ Fix: Add retries or fix race condition
# Missing test setup
Error: Database not initialized
→ Fix: Add setup script before tests
```
### Build Issues
```bash
# Out of memory
Error: JavaScript heap out of memory
→ Fix: Increase NODE_OPTIONS=--max-old-space-size
# Build artifacts missing
Error: dist/ directory not found
→ Fix: Add build step before deploy
# Asset size too large
Error: Bundle size exceeds limit
→ Fix: Optimize bundle or increase limit
```
## Best Practices
### Provide Full Workflow URL
✅ **Complete URL:**
```bash
/fix:ci https://github.com/user/repo/actions/runs/123456789
```
❌ **Incomplete:**
```bash
/fix:ci 123456789
```
### Check Locally First
Before pushing fixes:
```bash
# Run tests
npm test
# Check types
npm run type-check
# Run linter
npm run lint
# Build
npm run build
```
### Review Changes
```bash
# After /fix:ci makes changes
git diff
# Verify changes make sense
# Then commit and push
```
### Monitor Re-run
After pushing fix:
```bash
# Watch the re-run
gh run watch
# Check if fix worked
# If still failing, investigate further
```
## Workflow Integration
### Standard CI Fix Workflow
```bash
# 1. CI fails
# (Receive notification)
# 2. Analyze and fix
/fix:ci https://github.com/user/repo/actions/runs/123
# 3. Review changes
git diff
# 4. Commit
/git:cm
# 5. Push
git push
# 6. Monitor re-run
gh run watch
# 7. Verify success
✓ All checks passed
```
### For Multiple Failures
```bash
# Fix one at a time
/fix:ci https://github.com/user/repo/actions/runs/123
# After fix is verified
/fix:ci https://github.com/user/repo/actions/runs/124
# Or batch fix if related
"Fix both issues in runs 123 and 124"
```
## Advanced Usage
### Specific Job Analysis
```bash
/fix:ci https://github.com/user/repo/actions/runs/123 --job=test
```
### Rerun After Fix
```bash
/fix:ci https://github.com/user/repo/actions/runs/123 --auto-rerun
```
### Create Issue If Can't Fix
```bash
/fix:ci https://github.com/user/repo/actions/runs/123 --create-issue
```
## Troubleshooting
### Can't Access Logs
**Problem:** "gh: Not authenticated"
**Solution:**
```bash
gh auth login
gh auth status
# Grant necessary permissions
gh auth refresh -s repo -s workflow
```
### Fix Doesn't Work
**Problem:** Fix applied but CI still fails
**Solution:**
```bash
# Get more details
/debug [CI failure in job X]
# Use comprehensive fix
/fix:hard [describe the CI issue]
```
### Workflow File Syntax Errors
**Problem:** Can't parse workflow file
**Solution:**
```bash
# Validate workflow locally
gh workflow view
# Use GitHub's validator
# https://github.com/user/repo/actions
```
## Common Scenarios
### After Dependency Update
```bash
# Update dependencies
npm update
# CI fails
/fix:ci [workflow-url]
# Usually: Update lock file, fix breaking changes
```
### After Refactoring
```bash
# Large refactor pushed
# CI fails with multiple test failures
/fix:ci [workflow-url]
# Fixes import paths, updates tests
```
### Deployment Failures
```bash
# Build succeeds but deploy fails
/fix:ci [workflow-url]
# Usually: Environment variables, permissions, or credentials
```
## Metrics
Average fix times:
- **Test failures**: 2-5 minutes
- **Build errors**: 3-7 minutes
- **Lint issues**: 1-2 minutes
- **Config errors**: 1-3 minutes
- **Flaky tests**: 5-10 minutes
Success rate: ~85% of failures fixed without manual intervention
## Next Steps
- [/plan:ci](/docs/commands/plan/ci) - Create fix plan without implementing
- [/fix:hard](/docs/commands/fix/hard) - For complex CI issues
- [/test](/docs/commands/core/test) - Run tests locally
---
**Key Takeaway**: `/fix:ci` automates the tedious process of analyzing GitHub Actions failures and implementing fixes, getting your CI pipeline back to green quickly.
---
# /fix:logs
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/logs
# /fix:logs
Analyze log files to identify errors, warnings, and issues, then automatically implement fixes based on comprehensive root cause analysis. Perfect for troubleshooting production issues, debugging failed deployments, or investigating system errors.
## Syntax
```bash
/fix:logs [issue description]
```
## How It Works
The `/fix:logs` command uses the `debugger` agent to systematically analyze logs:
### 1. Log Collection
- Reads entire `./logs.txt` file
- Identifies all errors and warnings
- Captures stack traces and error messages
- Collects contextual information
### 2. Root Cause Analysis
- Correlates events across log entries
- Identifies patterns and anomalies
- Traces execution paths
- Determines underlying causes
### 3. Solution Development
- Creates comprehensive diagnostic report
- Designs targeted fixes for identified problems
- Implements fixes systematically
- Validates fixes with appropriate commands
### 4. Verification
- Re-analyzes logs after fixes
- Ensures issues are resolved
- Confirms no new issues introduced
- Provides final status report
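One way to confirm the result yourself is to capture a fresh log window after the fix and re-check it; the container name here is illustrative:
```bash
# Capture only the logs produced after the fix was deployed
docker logs my-app --since 10m > logs.txt
# Quick sanity check: count remaining error lines (expect zero or close to it)
grep -ci "error" logs.txt
```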
## When to Use
### ✅ Perfect For
**Production Errors**
```bash
/fix:logs [users reporting 500 errors in checkout]
```
**Failed Deployments**
```bash
/fix:logs [CI/CD pipeline failing on deployment step]
```
**Database Issues**
```bash
/fix:logs [connection timeout errors in logs]
```
**API Failures**
```bash
/fix:logs [intermittent API timeouts]
```
**Performance Issues**
```bash
/fix:logs [slow response times logged]
```
### ❌ Don't Use For
**No Log File**
```bash
❌ /fix:logs [issue but no logs collected]
✅ First collect logs to ./logs.txt
```
**Simple Code Typos**
```bash
❌ /fix:logs [typo in variable name]
✅ /fix:fast [typo in variable name]
```
**Known Simple Fix**
```bash
❌ /fix:logs [need to update config value]
✅ /fix:fast [update config value]
```
## Examples
### API Error Investigation
```bash
/fix:logs [API returning 500 errors for /users endpoint]
```
**What happens:**
```
1. Analyzing logs
- Reading: ./logs.txt (2,345 lines)
- Found: 47 error entries
- Time range: 2025-10-29 14:23 - 16:45
2. Root cause identified
- Issue: Unhandled null reference in user.profile
- Location: src/api/users.controller.ts:89
- Cause: Missing null check after database query
- Impact: 47 failed requests over 2h 22m
3. Implementing fix
- Added: Null validation before profile access
- Updated: Error handling with proper status codes
- Added: Logging for better diagnostics
4. Verifying fix
- Tests: All passing
- No errors in new logs
✓ Issue resolved (2m 34s)
```
### Database Connection Issues
```bash
/fix:logs [database connection timeouts]
```
**What happens:**
```
1. Log analysis
- Found: 23 timeout errors
- Pattern: Occurs during high load (>100 concurrent users)
- Database: Connection pool exhausted
2. Root cause
- Pool size: 10 (too small)
- Connections not released properly
- Missing connection cleanup in error handlers
3. Fixes applied
- Increased pool size: 10 → 50
- Added connection release in finally blocks
- Implemented connection timeout: 30s
- Added pool monitoring
4. Validation
- Load test: 200 concurrent users ✓
- No timeouts observed
- Pool utilization: healthy (avg 35%)
✓ Database issues resolved (3m 12s)
```
### Memory Leak Detection
```bash
/fix:logs [application crashing with out of memory errors]
```
**What happens:**
```
1. Log examination
- Memory usage pattern: Steadily increasing
- Crash point: ~2GB (heap limit reached)
- Frequency: Every 4-6 hours under load
2. Analysis
- Event listeners not removed
- Large objects retained in cache indefinitely
- WebSocket connections accumulating
3. Implementation
- Added proper cleanup for event listeners
- Implemented LRU cache with TTL (1 hour)
- Added WebSocket connection pooling
- Implemented periodic garbage collection hints
4. Testing
- 24-hour stress test completed
- Memory stable at ~400MB
- No crashes observed
✓ Memory leak fixed (5m 43s)
```
### CI/CD Pipeline Failure
```bash
/fix:logs [GitHub Actions workflow failing]
```
**What happens:**
```
1. Collecting pipeline logs
- Run ID: 1234567890
- Failed step: "Build and Test"
- Exit code: 1
2. Error identification
- Test failures: 3 tests in auth.test.ts
- Reason: Environment variable missing
- Variable: JWT_SECRET not set in CI
3. Solution
- Added JWT_SECRET to GitHub Secrets
- Updated workflow: Use secrets.JWT_SECRET
- Added validation step to check required env vars
4. Verification
- Re-ran workflow: ✓ passed
- All tests: 127/127 ✓
✓ CI/CD pipeline fixed (1m 56s)
```
## Agent Invoked
The command delegates to the **debugger agent** with these capabilities:
- **Log Analysis**: Systematic parsing and pattern recognition
- **Root Cause Identification**: Evidence-based diagnosis
- **Database Diagnostics**: Query analysis and structure examination
- **Performance Analysis**: Bottleneck identification
- **CI/CD Integration**: GitHub Actions log retrieval and analysis
## Log File Format
### Expected Location
```bash
./logs.txt
```
### Supported Log Formats
**Standard Application Logs**
```
2025-10-29T14:23:45.123Z [ERROR] Database connection failed
2025-10-29T14:23:45.456Z [WARN] Retrying connection (attempt 2/3)
```
**JSON Logs**
```json
{"timestamp":"2025-10-29T14:23:45.123Z","level":"error","message":"Request failed"}
```
**Stack Traces**
```
Error: Cannot read property 'id' of null
at UserController.getProfile (src/controllers/user.ts:89:15)
at processRequest (src/middleware/handler.ts:34:8)
```
## Best Practices
### Collect Complete Logs
✅ **Good - Full context:**
```bash
# Collect comprehensive logs
docker logs my-app > logs.txt
# Or from server
ssh server "cat /var/log/app/*.log" > logs.txt
```
❌ **Bad - Incomplete:**
```bash
# Only partial output
echo "some error" > logs.txt
```
### Include Timestamps
✅ **With timestamps:**
```
2025-10-29 14:23:45 [ERROR] Database timeout
2025-10-29 14:23:46 [INFO] Retrying connection
```
❌ **Without timestamps:**
```
ERROR Database timeout
INFO Retrying connection
```
### Provide Context
✅ **Specific issue:**
```bash
/fix:logs [payment processing failing with timeout errors since 14:00]
```
❌ **Vague issue:**
```bash
/fix:logs [something is broken]
```
## Collecting Logs
### Docker Containers
```bash
# Single container
docker logs container-name > logs.txt
# Follow logs in real-time
docker logs -f container-name > logs.txt
# Last 1000 lines
docker logs --tail 1000 container-name > logs.txt
# Then analyze
/fix:logs [describe the issue]
```
### Server Applications
```bash
# System logs
journalctl -u myapp > logs.txt
# Application logs
cat /var/log/myapp/*.log > logs.txt
# PM2 logs
pm2 logs --lines 1000 > logs.txt
# Then analyze
/fix:logs [describe the issue]
```
### CI/CD Pipelines
```bash
# GitHub Actions
gh run view 1234567890 --log > logs.txt
# GitLab CI
gitlab-ci-trace job_id > logs.txt
# Then analyze
/fix:logs [pipeline failure in build step]
```
### Kubernetes
```bash
# Pod logs
kubectl logs pod-name > logs.txt
# Multiple pods
kubectl logs -l app=myapp --all-containers=true > logs.txt
# Previous crashed container
kubectl logs pod-name --previous > logs.txt
# Then analyze
/fix:logs [pods crashing with OOM errors]
```
## Workflow
### Standard Troubleshooting Flow
```bash
# 1. Collect logs
docker logs my-app > logs.txt
# 2. Analyze and fix
/fix:logs [users reporting checkout failures]
# 3. Review changes
git diff
# 4. Test fix
/test
# 5. Commit if satisfied
/git:cm
# 6. Deploy
git push
```
### Production Incident Response
```bash
# 1. Immediate log collection
ssh production "docker logs app > /tmp/incident.log"
scp production:/tmp/incident.log ./logs.txt
# 2. Analyze urgently
/fix:logs [production down - 500 errors on all endpoints]
# 3. Create hotfix branch
git checkout -b hotfix/production-500-errors
# 4. Apply fix and test
/test
# 5. Deploy immediately
/git:cm
git push origin hotfix/production-500-errors
/git:pr main hotfix/production-500-errors
```
## Troubleshooting
### Log File Not Found
**Problem:** `./logs.txt` doesn't exist
**Solution:**
```bash
# Create logs file first
docker logs my-app > logs.txt
# Or from server
ssh server "cat /var/log/app/error.log" > logs.txt
# Then run command
/fix:logs [issue description]
```
### Logs Too Large
**Problem:** Log file exceeds reasonable size
**Solution:**
```bash
# Filter relevant time period
grep "2025-10-29 14:" app.log > logs.txt
# Or last 5000 lines
tail -5000 app.log > logs.txt
# Then analyze
/fix:logs [issue occurring at 14:00]
```
### No Clear Root Cause
**Problem:** Logs analyzed but cause unclear
**Solution:**
```bash
# Collect more comprehensive logs
# Include debug level
docker logs my-app --since 1h > logs.txt
# Enable verbose logging first
# Then reproduce issue
# Then analyze again
/fix:logs [issue with more verbose logs]
```
### Fix Didn't Work
**Problem:** Issue persists after fix
**Solution:**
```bash
# Reproduce issue and collect new logs
./reproduce-issue.sh
docker logs my-app > logs.txt
# Analyze again with more context
/fix:logs [issue still occurring - previous fix was X]
```
## Related Commands
### Log Analysis + Testing
```bash
# 1. Analyze and fix
/fix:logs [API errors in production]
# 2. Run comprehensive tests
/test
# 3. If tests reveal more issues
/fix:hard [additional issues found in tests]
```
### Logs + CI/CD Fix
```bash
# 1. Get CI logs
/fix:ci [github-actions-url]
# Or with local logs
# 2. Collect and analyze
gh run view 123 --log > logs.txt
/fix:logs [CI pipeline failing on test step]
```
### Debug Complex Issues
```bash
# 1. Start with logs
/fix:logs [complex issue with multiple symptoms]
# 2. If more investigation needed
/debug [findings from log analysis]
```
## Metrics
Typical `/fix:logs` performance:
- **Time**: 2-5 minutes
- **Log lines analyzed**: 100-10,000+
- **Fix accuracy**: ~92% (based on single attempt)
- **Common fixes**: Configuration, error handling, resource limits
- **Secondary issues found**: ~35% of cases
## Next Steps
After using `/fix:logs`:
- [/test](/docs/commands/core/test) - Verify fix with tests
- [/fix:hard](/docs/commands/fix/hard) - For complex issues requiring deeper analysis
- [/debug](/docs/commands/core/debug) - If issue needs more investigation
- [/git:cm](/docs/commands/git/commit) - Commit the fix
---
**Key Takeaway**: `/fix:logs` provides automated log analysis and issue resolution by leveraging the debugger agent's systematic approach to root cause identification and fix implementation.
---
# /fix:ui
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/ui
# /fix:ui
Fix UI/UX issues by analyzing screenshots, videos, or descriptions. This command combines visual analysis with code debugging to resolve interface problems quickly.
## Syntax
```bash
/fix:ui [screenshot/video] [description]
```
## How It Works
### 1. Visual Analysis (if media provided)
- Analyzes screenshots using AI vision
- Extracts layout, colors, spacing
- Identifies visual inconsistencies
- Compares to design intent
### 2. Code Location
- Finds relevant component files
- Identifies CSS/styling files
- Locates responsive breakpoints
- Maps state management
### 3. Root Cause Identification
- Analyzes CSS rules
- Checks responsive design
- Reviews component logic
- Identifies conflicts
### 4. Fix Implementation
- Updates styles
- Adjusts layouts
- Fixes responsive issues
- Updates component logic
### 5. Visual Validation
- Suggests manual testing steps
- Provides preview commands
- Recommends browser testing
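The preview commands it suggests are typically your project's own scripts, for example (script names assumed, adjust to your setup):
```bash
# Local dev server for quick visual checks
npm run dev
# Production-like build preview to catch styling differences introduced at build time
npm run build && npm run preview
```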
## Examples
### With Screenshot
```bash
/fix:ui [screenshot.png] - Button is misaligned on mobile, text overflows
```
**Visual Analysis:**
```
Analyzing screenshot...
Detected Issues:
1. Button positioned 20px from right edge (should be centered)
2. Text "Continue to Payment" truncated to "Continue to Pay..."
3. Padding inconsistent (top: 12px, bottom: 8px)
4. Button width: 150px (should be responsive)
Device: iPhone 12 Pro (390px width)
Viewport: Mobile portrait
```
**What happens:**
```
1. Located component
File: src/components/CheckoutButton.tsx
Styles: src/styles/checkout.css
2. Root Cause:
- Fixed width instead of responsive
- Text using px instead of rem
- Missing mobile breakpoint
- Absolute positioning
3. Fixes Applied:
✓ Changed width: 150px → width: 100%
✓ Added max-width: 320px
✓ Changed font-size: 16px → font-size: 1rem
✓ Fixed positioning: absolute → relative
✓ Added padding: 12px (consistent)
✓ Added text-overflow: ellipsis with tooltip
4. Added responsive styles:
@media (max-width: 768px) {
.checkout-button {
width: calc(100% - 32px);
margin: 0 16px;
}
}
Test on mobile device to verify fix.
```
### With Description Only
```bash
/fix:ui Modal dialog doesn't close when clicking backdrop on mobile Safari
```
**What happens:**
```
1. Located modal component
File: src/components/Modal.tsx
2. Root Cause:
- Click event not registering on iOS
- Missing touch event handler
- Z-index issue with backdrop
3. Fixes Applied:
✓ Added touchEnd event listener
✓ Increased backdrop z-index
✓ Added -webkit-tap-highlight-color
✓ Fixed event propagation
4. Browser-specific fixes:
/* iOS Safari fix */
.modal-backdrop {
cursor: pointer;
-webkit-tap-highlight-color: transparent;
}
Test on iOS Safari to confirm.
```
### With Video
```bash
/fix:ui [screen-recording.mp4] - Animation is janky, dropdown flickers
```
**Video Analysis:**
```
Analyzing video frames...
Detected Issues:
- Frame rate drops during animation (60fps → 15fps)
- Dropdown flickers 3 times before stabilizing
- Repaints occurring on every frame
- CSS transitions triggering layout recalculation
Timeline:
0:00 - User clicks dropdown
0:01 - Flicker #1
0:02 - Flicker #2
0:03 - Flicker #3
0:04 - Stabilizes
```
**What happens:**
```
1. Located components
- src/components/Dropdown.tsx
- src/animations/slide-down.css
2. Performance Issues:
- Using height animation (causes reflow)
- No will-change property
- Box-shadow animating (expensive)
- Multiple re-renders
3. Optimizations Applied:
✓ Changed height → transform: scaleY()
✓ Added will-change: transform
✓ Removed box-shadow animation
✓ Added GPU acceleration
✓ Implemented React.memo for dropdown items
✓ Debounced position calculations
4. Performance Improvement:
Before: 15 FPS, 150ms animation time
After: 60 FPS, 200ms animation time (smoother)
Test with DevTools Performance tab.
```
## Common UI Issues
### Layout Issues
```bash
# Misalignment
/fix:ui [screenshot] - Elements not aligned vertically
# Overflow
/fix:ui Content overflowing container on small screens
# Spacing
/fix:ui Inconsistent padding between sections
```
### Responsive Issues
```bash
# Mobile layout
/fix:ui [mobile-screenshot] - Layout breaks on iPhone SE
# Tablet view
/fix:ui Navigation menu overlaps content on iPad
# Desktop scaling
/fix:ui Text too small on 4K monitors
```
### Visual Bugs
```bash
# Z-index
/fix:ui Modal appearing behind header
# Colors
/fix:ui Button color too light, hard to read text
# Animations
/fix:ui Loading spinner stuttering on slow devices
```
### Interaction Issues
```bash
# Click/Touch
/fix:ui Button not responding to clicks on mobile
# Hover states
/fix:ui Hover effect staying active after click
# Focus states
/fix:ui Can't see keyboard focus indicator
```
## Best Practices
### Provide Visual Context
✅ **With Screenshot:**
```bash
/fix:ui [screenshot.png] - Clear description of issue
```
✅ **With Video:**
```bash
/fix:ui [recording.mp4] - Shows the interaction problem
```
❌ **Text Only (when visual issue):**
```bash
/fix:ui Something looks wrong
```
### Specify Device/Browser
```bash
/fix:ui [screenshot] - Button misaligned on mobile Safari, iPhone 12 Pro
```
### Include Expected Behavior
```bash
/fix:ui [screenshot] - Modal should be centered on screen, currently offset to left by ~50px
```
### Describe Interaction
```bash
/fix:ui When hovering over dropdown, items flicker. Should smoothly expand without flicker.
```
## Testing Recommendations
After fix is applied:
### Manual Testing
```bash
# Desktop browsers
- Chrome (latest)
- Firefox (latest)
- Safari (latest)
- Edge (latest)
# Mobile devices
- iOS Safari (iPhone)
- Chrome (Android)
- Samsung Internet
# Screen sizes
- Mobile: 375px, 414px
- Tablet: 768px, 1024px
- Desktop: 1920px, 2560px
```
### Automated Testing
```bash
# Visual regression
npm run test:visual
# Cross-browser
npm run test:browsers
# Responsive
npm run test:responsive
```
## Integration with Design Tools
### Figma Integration
```bash
# Compare with Figma design
/fix:ui [screenshot] compare with [figma-url]
```
### Storybook Integration
```bash
# Test component in isolation
npm run storybook
# Check component states
```
## Advanced Scenarios
### Dark Mode Issues
```bash
/fix:ui [dark-mode-screenshot] - Text not visible in dark mode
```
**Fixes:**
- Color contrast adjustments
- Missing dark mode styles
- Theme variable issues
### RTL (Right-to-Left) Layout
```bash
/fix:ui [rtl-screenshot] - Arabic text layout broken
```
**Fixes:**
- Add RTL-specific styles
- Fix logical properties
- Mirror layout elements
### Accessibility Issues
```bash
/fix:ui [screenshot] - Focus indicator not visible for keyboard users
```
**Fixes:**
- Enhanced focus styles
- ARIA attributes
- Keyboard navigation
## Common Solutions
### Flexbox Alignment
```css
/* Before */
.container {
display: flex;
}
/* After */
.container {
display: flex;
align-items: center;
justify-content: space-between;
}
```
### Responsive Width
```css
/* Before */
.button {
width: 200px;
}
/* After */
.button {
width: 100%;
max-width: 200px;
}
```
### Z-index Stacking
```css
/* Before */
.modal {
z-index: 100;
}
/* After */
.modal {
z-index: 1000;
position: fixed;
}
```
### Text Overflow
```css
/* Before */
.text {
width: 150px;
}
/* After */
.text {
width: 150px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
```
## Performance Optimization
### Animation Performance
```css
/* Avoid (causes reflow) */
.element {
transition: height 0.3s;
}
/* Use (GPU accelerated) */
.element {
transition: transform 0.3s;
will-change: transform;
}
```
### Reduce Repaints
```css
/* Avoid */
.hover:hover {
box-shadow: 0 4px 8px rgba(0,0,0,0.2);
}
/* Use */
.hover {
box-shadow: 0 4px 8px rgba(0,0,0,0.2);
opacity: 0;
transition: opacity 0.3s;
}
.hover:hover {
opacity: 1;
}
```
## Troubleshooting
### Can't Reproduce Issue
**Problem:** Issue visible in screenshot but not in code
**Solution:**
```bash
# Check specific browser/device
/fix:ui [screenshot] - Happens only on Safari iOS
# Check user's environment
# - Browser version
# - Screen size
# - Zoom level
```
### Fix Works Locally But Not in Production
**Problem:** Fix works in development
**Solution:**
```bash
# Check build process
npm run build
npm run preview
# Check for CSS purging issues
# Check for minification problems
```
### Multiple Related Issues
**Problem:** Several UI issues in same area
**Solution:**
```bash
# Fix all at once
/fix:ui [screenshots] - List all issues: 1) alignment, 2) color contrast, 3) spacing
```
## Next Steps
- [/design:screenshot](/docs/commands/design/screenshot) - Convert design to code
- [/fix:fast](/docs/commands/fix/fast) - Quick CSS fixes
- [/test](/docs/commands/core/test) - Visual regression tests
---
**Key Takeaway**: `/fix:ui` combines visual analysis with code debugging to quickly resolve UI/UX issues across devices and browsers, with support for screenshots and videos.
---
# /fix:types
Section: docs
Category: commands/fix
URL: https://docs.claudekit.cc/docs/fix/types
# /fix:types
Automatically identify and fix all type errors in your TypeScript or Dart codebase. The command runs type checkers iteratively until all type errors are resolved, ensuring type safety without resorting to `any` types.
## Syntax
```bash
/fix:types
```
No arguments needed - the command automatically detects your project type and runs the appropriate type checker.
## How It Works
The `/fix:types` command follows an iterative workflow:
### 1. Type Check Execution
- Detects project type (TypeScript or Dart)
- Runs appropriate type checker:
- TypeScript: `bun run typecheck` or `npm run typecheck`
- Dart: `dart analyze` or `flutter analyze`
- Captures all type errors with file locations and descriptions
### 2. Error Analysis
- Categorizes errors by type and severity
- Identifies patterns across multiple errors
- Determines fix strategies for each error category
- Prioritizes fixes by dependency order
### 3. Fix Implementation
- Applies type fixes systematically
- Adds proper type annotations
- Fixes type mismatches and incompatibilities
- Updates function signatures and interfaces
- Resolves generic type constraints
### 4. Verification Loop
- Re-runs type checker after fixes
- Identifies any remaining errors
- Continues fixing until clean type check
- Ensures no regressions introduced
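You can run the same detection and verification step by hand; this sketch assumes the typecheck script and Flutter tooling described below:
```bash
# Run the checker /fix:types relies on, depending on project type
[ -f tsconfig.json ] && npm run typecheck
[ -f pubspec.yaml ] && flutter analyze
# A clean exit code (0) from the checker is what ends the fix loop
```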
## When to Use
### ✅ Perfect For
**After Refactoring**
```bash
# Just refactored user service
/fix:types
```
**Adding New Features**
```bash
# Added new API endpoints
/fix:types
```
**Upgrading Dependencies**
```bash
# Updated TypeScript from 4.9 to 5.0
npm install typescript@latest
/fix:types
```
**Converting JavaScript to TypeScript**
```bash
# Renamed .js files to .ts
/fix:types
```
**Strict Mode Migration**
```bash
# Enabled strict mode in tsconfig.json
/fix:types
```
### ❌ Don't Use For
**Runtime Errors**
```bash
❌ /fix:types # For runtime errors
✅ /fix:logs [runtime error description]
```
**Logic Bugs**
```bash
❌ /fix:types # For incorrect calculations
✅ /fix:fast [fix calculation logic]
```
**No Type Checker Configured**
```bash
❌ /fix:types # No typecheck script exists
✅ First add type checker to project
```
## Examples
### Simple Type Errors
```bash
/fix:types
```
**What happens:**
```
1. Running type checker
$ bun run typecheck
Found 5 type errors:
- src/auth/login.ts:45 - Type 'string | undefined' not assignable to 'string'
- src/api/users.ts:89 - Property 'email' missing in type
- src/utils/format.ts:23 - Argument of type 'number' not assignable to 'string'
- src/models/user.ts:12 - Type 'null' not assignable to 'User'
- src/services/auth.ts:67 - Cannot invoke object which is possibly 'undefined'
2. Fixing errors
✓ Added null check for login.ts
✓ Added required email field to User interface
✓ Fixed type mismatch in format function
✓ Updated User type to allow null
✓ Added optional chaining for auth service
3. Verifying fixes
$ bun run typecheck
✓ No type errors found
✓ All type errors fixed (1m 23s)
```
### After Strict Mode Migration
```bash
# Enable strict mode ("strict": true) in tsconfig.json, then resolve the new errors
/fix:types
```
### Dart/Flutter Project
```bash
/fix:types
```
**What happens:**
```
1. Running type checker
$ flutter analyze
Found type errors, including:
- lib/widgets/profile.dart:67 - The argument type 'int' can't be assigned to 'String'
2. Fixing Dart type errors
✓ Added null safety operators (!)
✓ Fixed generic type parameters in Future
✓ Added toString() conversion for int to String
✓ Updated nullable type annotations
3. Re-running analyzer
$ flutter analyze
✓ No issues found!
✓ Dart type errors fixed (1m 34s)
```
```
## Type Check Commands
The command automatically detects and runs:
### TypeScript Projects
**Bun:**
```bash
bun run typecheck
```
**npm:**
```bash
npm run typecheck
```
**Yarn:**
```bash
yarn typecheck
```
**pnpm:**
```bash
pnpm typecheck
```
### Dart/Flutter Projects
**Flutter:**
```bash
flutter analyze
```
**Dart:**
```bash
dart analyze
```
## Common Fix Patterns
### Null Safety
**Before:**
```typescript
function getUserName(user: User) {
return user.profile.name; // Error: profile possibly undefined
}
```
**After:**
```typescript
function getUserName(user: User) {
return user.profile?.name ?? 'Unknown';
}
```
### Type Annotations
**Before:**
```typescript
function calculateTotal(items) { // Error: Parameter implicitly has 'any' type
return items.reduce((sum, item) => sum + item.price, 0);
}
```
**After:**
```typescript
function calculateTotal(items: CartItem[]): number {
return items.reduce((sum, item) => sum + item.price, 0);
}
```
### Generic Constraints
**Before:**
```typescript
function findById<T>(items: T[], id: string) { // Error: Property 'id' does not exist on type 'T'
return items.find(item => item.id === id);
}
```
**After:**
```typescript
function findById<T extends { id: string }>(items: T[], id: string): T | undefined {
return items.find(item => item.id === id);
}
```
### Union Type Handling
**Before:**
```typescript
function processValue(value: string | number) {
return value.toUpperCase(); // Error: Property 'toUpperCase' does not exist on type 'number'
}
```
**After:**
```typescript
function processValue(value: string | number): string {
return typeof value === 'string' ? value.toUpperCase() : value.toString();
}
```
### Interface Completeness
**Before:**
```typescript
const user: User = { // Error: Property 'email' is missing
name: 'John Doe',
age: 30
};
```
**After:**
```typescript
const user: User = {
name: 'John Doe',
age: 30,
email: 'john@example.com'
};
```
## Best Practices
### No `any` Types
✅ **Good - Proper types:**
```typescript
interface ApiResponse<T> {
  data: T;
  status: number;
}

function fetchUser(): Promise<ApiResponse<User>> {
  // ...
}
```
❌ **Bad - Using any:**
```typescript
function fetchUser(): Promise<any> { // Defeats type safety!
// ...
}
```
### Iterative Fixing
✅ **Command handles iterations:**
```bash
# Just run once
/fix:types
# Command will iterate until all errors fixed
```
❌ **Don't run manually repeatedly:**
```bash
# Inefficient
/fix:types
/fix:types
/fix:types
```
### After Major Changes
✅ **Fix types after refactoring:**
```bash
# 1. Refactor code
/cook [refactor user service to use new API]
# 2. Fix resulting type errors
/fix:types
# 3. Test
/test
```
## Project Setup
### TypeScript Configuration
Ensure `package.json` has typecheck script:
```json
{
"scripts": {
"typecheck": "tsc --noEmit"
}
}
```
Or with `tsconfig.json`:
```json
{
"compilerOptions": {
"strict": true,
"noEmit": true,
"skipLibCheck": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}
```
### Dart Configuration
Ensure `analysis_options.yaml` exists:
```yaml
analyzer:
  strong-mode:
    implicit-casts: false
    implicit-dynamic: false
  errors:
    missing_return: error
    invalid_assignment: error
```
## Workflow
### Standard Development Flow
```bash
# 1. Implement feature
/cook [add user profile feature]
# 2. Fix type errors
/fix:types
# 3. Run tests
/test
# 4. Commit
/git:cm
```
### Refactoring Workflow
```bash
# 1. Refactor
/cook [refactor authentication to use OAuth]
# 2. Fix types
/fix:types
# 3. Fix any remaining issues
/fix:hard [if complex issues remain]
# 4. Test thoroughly
/test
# 5. Commit
/git:cm
```
### Strict Mode Migration
```bash
# 1. Enable strict mode: set "strict": true in tsconfig.json
# 2. Fix all resulting errors
/fix:types
# 3. Verify with tests
/test
# 4. Commit strict mode migration
/git:cm
```
## Troubleshooting
### Too Many Errors
**Problem:** Hundreds of type errors overwhelming
**Solution:**
```bash
# Fix in stages
# 1. Fix strict mode errors first
/fix:types
# 2. Then enable additional checks gradually
# Update tsconfig.json one option at a time
```
### Circular Dependencies
**Problem:** Type errors due to circular imports
**Solution:**
```bash
# Command will identify circular dependencies
# Manually restructure imports to break cycles
# Then run again
/fix:types
```
### Generic Type Complexity
**Problem:** Complex generic types causing issues
**Solution:**
```bash
# Let command handle first pass
/fix:types
# If issues remain, simplify generics manually
# Then run again to verify
/fix:types
```
### External Types Missing
**Problem:** Missing type definitions for packages
**Solution:**
```bash
# Install type definitions
npm install --save-dev @types/package-name
# Then fix remaining errors
/fix:types
```
## Error Categories
Common error types fixed:
### Implicit Any
- Missing parameter types
- Missing return types
- Missing variable types
### Null Safety
- Possible undefined values
- Possible null values
- Optional chaining needed
### Type Mismatches
- Wrong return types
- Incompatible assignments
- Generic constraint violations
### Missing Properties
- Incomplete interface implementations
- Missing required fields
- Incorrect property names
### Function Signatures
- Wrong parameter types
- Wrong parameter count
- Wrong return type
## Metrics
Typical `/fix:types` performance:
- **Time**: 1-5 minutes (depending on error count)
- **Errors per iteration**: 15-30
- **Average iterations**: 1-3
- **Success rate**: ~98% (some manual intervention may be needed)
- **No `any` types added**: 100% of fixes use proper types
## Next Steps
After using `/fix:types`:
- [/test](/docs/commands/core/test) - Run tests to verify fixes
- [/fix:fast](/docs/commands/fix/fast) - For remaining simple issues
- [/git:cm](/docs/commands/git/commit) - Commit type fixes
- [/cook](/docs/commands/core/cook) - Continue feature development
---
**Key Takeaway**: `/fix:types` provides automated type error resolution without compromising type safety by avoiding `any` types and implementing proper type annotations, null checks, and interface completeness.
---
# /git:cm
Section: docs
Category: commands/git
URL: https://docs.claudekit.cc/docs/git/cm
# /git:cm
Stage all files and create a commit. Does NOT push to remote.
## Syntax
```bash
/git:cm
```
## How It Works
Uses `git-manager` agent to:
1. Stage all changed files (`git add .`)
2. Create meaningful commit message based on changes
3. Commit locally
**Important**: Does NOT push changes to remote repository.
## Workflow
```bash
# 1. Make changes with /cook or /code
/cook [add user authentication]
# 2. Review changes
git status
git diff
# 3. Commit locally
/git:cm
# 4. Push when ready
git push # or /git:cp for combined commit+push
```
## Related Commands
| Command | Action |
|---------|--------|
| `/git:cm` | Stage & commit (no push) |
| `/git:cp` | Stage, commit & push |
| `/git:pr` | Create pull request |
---
**Key Takeaway**: Use `/git:cm` to commit changes locally without pushing, allowing review before pushing to remote.
---
# /git:cp
Section: docs
Category: commands/git
URL: https://docs.claudekit.cc/docs/git/cp
# /git:cp
Stage, commit and push all changes in one command.
## Syntax
```bash
/git:cp
```
## How It Works
Uses `git-manager` agent to:
1. Stage all changed files (`git add .`)
2. Create meaningful commit message based on changes
3. Commit locally
4. Push to remote repository
## Workflow
```bash
# 1. Complete feature
/cook [add user dashboard]
# 2. Review changes
git status
git diff
# 3. Commit and push in one step
/git:cp
```
## When to Use
- Quick iterations on feature branches
- Confident about changes
- Ready to share with team
## When NOT to Use
- Need to review before pushing: use `/git:cm` instead
- Working on main/master: be careful
- Unsure about changes: review first
## Related Commands
| Command | Action |
|---------|--------|
| `/git:cm` | Stage & commit only |
| `/git:cp` | Stage, commit & push |
| `/git:pr` | Create pull request |
---
**Key Takeaway**: Use `/git:cp` for quick stage-commit-push workflow when confident about your changes.
---
# /git:merge
Section: docs
Category: commands/git
URL: https://docs.claudekit.cc/docs/git/merge
# /git:merge
Merge code from one branch to another with conflict resolution.
## Syntax
```bash
/git:merge [to-branch] [from-branch]
```
## Arguments
| Argument | Default | Description |
|----------|---------|-------------|
| `to-branch` | `main` | Target branch to merge into |
| `from-branch` | Current branch | Source branch to merge from |
## How It Works
Uses `git-manager` agent to:
1. Checkout target branch
2. Merge source branch
3. Resolve conflicts if any
4. Complete merge
## Examples
```bash
# Merge current branch to main
/git:merge main
# Merge feature to develop
/git:merge develop feature/auth
# Merge hotfix to main and develop
/git:merge main hotfix/security
/git:merge develop hotfix/security
```
## Conflict Resolution
If conflicts occur:
1. Agent identifies conflicting files
2. Analyzes changes from both branches
3. Resolves conflicts intelligently
4. Completes merge
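If you prefer to drive the merge yourself, the manual equivalent is roughly the following (branch names are examples):
```bash
git checkout main
git pull origin main
git merge feature/auth
# Resolve any conflicts in your editor, then complete the merge:
git add .
git commit
```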
## Prerequisites
- `gh` CLI must be installed and authenticated
- Git repository initialized
- Both branches must exist
---
**Key Takeaway**: Use `/git:merge` to merge branches with automatic conflict resolution via git-manager agent.
---
# /git:cm
Section: docs
Category: commands/git
URL: https://docs.claudekit.cc/docs/git/commit
# /git:cm
Stage all files and create a commit with a professionally crafted message that follows the Conventional Commits standard.
## Syntax
```bash
/git:cm
```
## How It Works
The `/git:cm` command follows a structured git workflow:
### 1. Analysis Phase
- Runs `git status` to see untracked/modified files
- Runs `git diff` to analyze all changes (staged + unstaged)
- Runs `git log` to understand commit history and style
### 2. Change Categorization
- Identifies nature of changes (feature, fix, refactor, docs, etc.)
- Groups related changes
- Determines semantic version impact
### 3. Message Generation
- Creates concise, descriptive commit message
- Follows conventional commit format
- Focuses on "why" rather than "what"
- Matches repository's commit style
### 4. Staging and Committing
- Stages all relevant files (`git add`)
- Creates commit with generated message
- Runs `git status` to verify success
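In plain git terms the sequence is roughly the following sketch; the commit message itself is generated from the diff rather than typed by hand:
```bash
# Rough manual equivalent of the /git:cm workflow
git status                  # see what changed
git diff                    # review unstaged changes
git log --oneline -10       # match the repo's existing commit style
git add .                   # stage the relevant files
git commit -m "feat(scope): short, imperative summary"
git status                  # verify the working tree is clean
```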
## Commit Message Format
ClaudeKit generates commits following [Conventional Commits](https://www.conventionalcommits.org/):
```
<type>(<scope>): <description>
```
### Types
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `style`: Formatting, missing semicolons, etc.
- `refactor`: Code restructuring without changing behavior
- `perf`: Performance improvements
- `test`: Adding or updating tests
- `build`: Build system or dependency changes
- `ci`: CI/CD configuration changes
- `chore`: Maintenance tasks
### Example Commits
**Feature:**
```
feat: add user registration endpoint
- Implement POST /auth/register
- Add email/password validation with Joi
- Secure password hashing with bcrypt
- Generate JWT tokens on registration
- Include comprehensive test suite (92% coverage)
```
**Bug Fix:**
```
fix: resolve token refresh race condition
- Add mutex to prevent concurrent refresh attempts
- Implement exponential backoff in retry logic
- Extend token expiry to 30 minutes
- Update session cleanup to respect active refreshes
```
**Refactor:**
```
refactor: extract payment processing into service layer
- Move Stripe logic from controller to service
- Improve error handling and logging
- Add payment state machine
- Increase test coverage to 95%
```
## Examples
### Simple Feature Addition
```bash
# After implementing a feature
/git:cm
```
**Generated commit:**
```
feat: add rate limiting middleware
- Implement token bucket algorithm
- Configure limit: 100 requests per 15 minutes
- Add Redis for distributed rate limiting
- Include bypass for admin users
```
### Bug Fix
```bash
# After fixing a bug
/git:cm
```
**Generated commit:**
```
fix: prevent memory leak in WebSocket connections
- Add proper cleanup in disconnect handler
- Implement connection pooling for Redis
- Add automated cleanup checks every 5 minutes
- Include stress tests validating 24-hour stability
```
### Multiple Changes
```bash
# After making several related changes
/git:cm
```
**Generated commit:**
```
feat: improve authentication security
- Add two-factor authentication support
- Implement refresh token rotation
- Enhance session management with Redis
- Update password requirements (min 12 chars)
- Add security event logging
```
## What Gets Committed
The command stages and commits:
✅ **Modified files** - Changes to existing files
✅ **New files** - Newly created files
✅ **Deleted files** - Removed files
✅ **Test files** - Test updates and new tests
✅ **Documentation** - Doc changes
❌ **Excluded by default:**
- Files in `.gitignore`
- Sensitive files (`.env`, `credentials.json`, etc.)
- Build artifacts
- Node_modules
- Temporary files
## Security Checks
Before committing, ClaudeKit checks for:
### Sensitive Files
```
⚠ Warning: Detected sensitive file
File: config/.env.production
Contains: API keys, database passwords
This file should NOT be committed.
Add to .gitignore? (y/n)
```
### Secrets in Code
```
⚠ Warning: Possible secret detected
File: src/config/api.ts
Line 12: apiKey: "sk_live_..."
Remove secret before committing? (y/n)
```
### Large Files
```
⚠ Warning: Large file detected
File: public/video.mp4 (45 MB)
Consider using Git LFS.
Continue anyway? (y/n)
```
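You can run rough equivalents of these checks yourself before committing; the patterns below are illustrative, not exhaustive:
```bash
# Crude scan for obvious secrets in pending changes
git diff | grep -inE "api[_-]?key|secret|password"
# New files that are not ignored (catches stray .env files)
git ls-files --others --exclude-standard
# Files over 10 MB that would bloat history
find . -path ./node_modules -prune -o -type f -size +10M -print
```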
## Pre-commit Hooks
If pre-commit hooks are configured, they run automatically:
```
Running pre-commit hooks...
✓ ESLint: No errors
✓ Prettier: All files formatted
✓ Tests: 87/87 passed
✓ TypeScript: No type errors
Pre-commit checks passed
```
If hooks fail:
```
❌ Pre-commit hooks failed
ESLint errors:
- src/auth/login.ts:45 - unused variable 'token'
Fix errors and try again.
```
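For illustration, a minimal hook script (assuming Husky here; any hook manager works the same way) might run the same checks:
```bash
#!/usr/bin/env sh
# .husky/pre-commit - hypothetical example, adjust the scripts to your project
npm run lint
npm run typecheck
npm test
```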
## Workflow
### Standard Development Flow
```bash
# 1. Implement feature
/cook [add user profile page]
# 2. Review changes
git diff
# 3. Run tests
/test
# 4. Commit
/git:cm
# 5. Push (if ready)
git push
```
### With Code Review
```bash
# 1. Make changes
# ... implement feature ...
# 2. Commit locally
/git:cm
# 3. Review in commit
git show HEAD
# 4. Push to feature branch
git push origin feature/user-profile
# 5. Create PR
/git:pr
```
## Customization
### Commit Message Style
ClaudeKit analyzes your commit history and matches the style:
**If your repo uses:**
```
Add user authentication
Update README
Fix login bug
```
**ClaudeKit generates:**
```
Add rate limiting middleware
```
**If your repo uses:**
```
feat: add user authentication
fix: resolve login issue
docs: update README
```
**ClaudeKit generates:**
```
feat: add rate limiting middleware
```
### Scope Detection
ClaudeKit automatically detects scope from files changed:
```
Files changed: src/auth/login.ts, src/auth/register.ts
Generated: feat(auth): add OAuth2 support
Files changed: docs/api/users.md
Generated: docs(api): document user endpoints
Files changed: tests/integration/payment.test.ts
Generated: test(payment): add Stripe webhook tests
```
## Best Practices
### Commit Frequently
✅ **Good - Atomic commits:**
```bash
# Implement feature
/git:cm # "feat: add login form"
# Add tests
/git:cm # "test: add login form validation tests"
# Update docs
/git:cm # "docs: document login API"
```
❌ **Bad - Massive commits:**
```bash
# Implement 10 features over 3 days
/git:cm # Huge, unclear commit
```
### Review Before Committing
```bash
# Always review changes first
git diff
git status
# Then commit
/git:cm
```
### Don't Commit Broken Code
```bash
# Run tests first
/test
# Only commit if tests pass
✓ All tests passed
/git:cm
```
## Common Scenarios
### Emergency Hotfix
```bash
# 1. Fix critical bug
/fix:fast [production bug]
# 2. Test fix
/test
# 3. Quick commit
/git:cm
# 4. Deploy immediately
/deploy [production]
```
### Feature Branch Workflow
```bash
# 1. Create feature branch
git checkout -b feature/new-dashboard
# 2. Implement incrementally
/cook [add dashboard layout]
/git:cm
/cook [add dashboard widgets]
/git:cm
/cook [add dashboard filters]
/git:cm
# 3. Push branch
git push origin feature/new-dashboard
# 4. Create PR
/git:pr
```
### Refactoring Session
```bash
# 1. Refactor incrementally
# Refactor auth service
/git:cm # "refactor(auth): extract validation logic"
# Refactor payment service
/git:cm # "refactor(payment): simplify error handling"
# Update tests
/git:cm # "test: update tests for refactored services"
```
## Troubleshooting
### Nothing to Commit
**Problem:** No changes to commit
**Solution:**
```bash
# Check status
git status
# If changes exist but not detected:
git add .
/git:cm
```
### Commit Message Unclear
**Problem:** Generated message doesn't capture intent
**Solution:**
```bash
# Provide more context in code comments
# ClaudeKit reads comments to understand intent
# Or manually edit commit message:
git commit --amend
```
### Pre-commit Hooks Failing
**Problem:** Hooks preventing commit
**Solution:**
```bash
# Fix issues identified by hooks
# ESLint errors
npm run lint:fix
# Prettier formatting
npm run format
# Then retry
/git:cm
```
### Sensitive Data Detected
**Problem:** Accidentally trying to commit secrets
**Solution:**
```bash
# Remove sensitive data
# Update .env file to .env.example
# Use environment variables
# Add to .gitignore
echo ".env" >> .gitignore
# Then commit
/git:cm
```
## Advanced Usage
### Amending Commits
If you need to amend (use sparingly):
```bash
# Make additional changes
# ... edit files ...
# Amend to previous commit
git add .
git commit --amend --no-edit
```
**Note:** Only amend commits that haven't been pushed!
### Partial Staging
If you want to commit only specific files:
```bash
# Stage specific files manually
git add src/auth/login.ts src/auth/register.ts
# Then commit
/git:cm
```
## Commit History Example
After using `/git:cm` regularly:
```
git log --oneline
a1b2c3d feat: add user profile page with avatar upload
d4e5f6g test: add comprehensive profile tests
g7h8i9j docs: document profile API endpoints
j0k1l2m fix: resolve avatar upload size limit
m3n4o5p refactor: extract file upload to service
p6q7r8s feat: add profile privacy settings
```
Clean, professional, and easy to understand!
## Next Steps
- [/git:cp](/docs/commands/git/commit-push) - Commit and push
- [/git:pr](/docs/commands/git/pull-request) - Create pull request
- [Code Review](/docs/commands/core/review) - Review before committing
---
**Key Takeaway**: `/git:cm` creates professional, conventional commits automatically by analyzing your changes and matching your repository's commit style.
---
# /git:cp
Section: docs
Category: commands/git
URL: https://docs.claudekit.cc/docs/git/commit-push
# /git:cp
Stage all changes, create a conventional commit with a professional message, and push to the remote repository in one streamlined command. Perfect for quick iteration cycles.
## Syntax
```bash
/git:cp
```
## How It Works
This command combines `/git:cm` (commit) with `git push`:
### 1. Stage & Commit (`/git:cm`)
- Analyzes all changes (staged + unstaged)
- Creates conventional commit message
- Stages relevant files
- Creates commit
### 2. Push to Remote
- Pushes to current branch's upstream
- Or prompts to set upstream if not configured
- Verifies push success
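In plain git terms, the command is roughly equivalent to the following sequence (illustrative; the analysis and message generation happen on top of this):
```bash
# Approximate manual equivalent of /git:cp
git add -A
git commit -m "feat: <generated message>"

# First push of a new branch sets the upstream:
git push -u origin "$(git branch --show-current)"
# Subsequent pushes:
git push
```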
## When to Use
### ✅ Perfect For
**Rapid Development**
```bash
# Quick iteration cycle
/cook [add feature]
/git:cp # Commit and push immediately
```
**Solo Development**
```bash
# Working alone on feature branch
/git:cp # No need for local review
```
**Small Changes**
```bash
# Typo fixes, minor updates
/fix:fast [typo]
/git:cp # Push right away
```
**Continuous Integration**
```bash
# Trigger CI on every change
/git:cp # CI runs automatically after push
```
### ❌ When to Avoid
**Team Collaboration**
```bash
❌ /git:cp # Pushing unreviewed code
✅ /git:cm # Commit locally
✅ Create PR # Team reviews first
```
**Uncertain Changes**
```bash
❌ /git:cp # Not sure if fix works
✅ /git:cm # Commit locally
✅ Test more
✅ Then: git push
```
**Shared Branches**
```bash
❌ /git:cp # On main/develop
✅ Use feature branches
✅ Create PR for review
```
## Examples
### Feature Development
```bash
# 1. Implement feature
/cook [add user notifications]
β Feature implemented
β Tests generated (95% coverage)
β Documentation updated
# 2. Commit and push
/git:cp
Analyzing changes...
β Staged 8 files
β Created commit: "feat: add user notifications system"
β Pushed to origin/feature/notifications
Changes are now on GitHub!
```
### Bug Fix Workflow
```bash
# 1. Fix bug
/fix:fast [button text typo]
β Fixed typo in SubmitButton.tsx
# 2. Commit and push
/git:cp
β Commit: "fix: correct button text typo"
β Pushed to origin/bugfix/button-text
# 3. CI automatically runs
β Tests passed
β Ready for deployment
```
### Continuous Deployment
```bash
# Development with auto-deploy to staging
# Change 1
/cook [update header]
/git:cp # β Deploys to staging
# Change 2
/fix:ui [alignment issue]
/git:cp # β Deploys to staging
# Change 3
/docs:update
/git:cp # β Deploys to staging
```
## What Happens
### Step-by-Step
```
1. Analyzing changes...
- 5 files modified
- 2 files created
- 1 file deleted
2. Creating commit...
β Commit message generated:
"feat: add real-time notifications
- Implement WebSocket connection
- Add notification bell UI
- Store notifications in database
- Add mark-as-read functionality
- Include comprehensive tests"
3. Staging files...
β src/services/notification.service.ts
β src/components/NotificationBell.tsx
β src/websocket/notification-handler.ts
β tests/notification.test.ts
β docs/api/notifications.md
4. Creating commit...
β Commit created: a1b2c3d
5. Pushing to remote...
β Pushed to origin/feature/notifications
6. Verifying...
β Remote updated successfully
β CI triggered automatically
Complete! View at:
https://github.com/user/repo/commit/a1b2c3d
```
## Configuration
### Setting Upstream
If upstream not set:
```
β No upstream branch set
Set upstream to origin/feature-name? (y/n)
> y
β Upstream set
β Pushing...
β Complete
```
Manual setup:
```bash
git branch --set-upstream-to=origin/feature-name
```
### Push Options
ClaudeKit automatically handles:
- First push (`git push -u origin branch-name`)
- Subsequent pushes (`git push`)
- Force push warnings
- Push conflicts
## Safety Checks
### Pre-Push Validation
```
Running pre-push checks...
β All tests passed (87/87)
β No TypeScript errors
β Lint checks passed
β No sensitive files detected
Safe to push.
```
### Conflict Detection
```
β Warning: Remote has changes
Remote branch has 3 new commits.
Pull before pushing? (y/n)
```
### Force Push Prevention
```
β Error: Would require force push
Your branch is behind 'origin/main' by 2 commits.
Options:
1. Pull and rebase: git pull --rebase
2. Create new branch
3. Cancel
Choose:
```
## Best Practices
### Verify Tests First
```bash
# Always test before pushing
/test
β All tests passed
# Now safe to push
/git:cp
```
### Review Changes
```bash
# Check what you're pushing
git diff
# Review commit
git show HEAD
# Then push
git push # or /git:cp on next change
```
### Use Feature Branches
```bash
# Create feature branch
git checkout -b feature/new-dashboard
# Develop and push
/cook [build dashboard]
/git:cp # Safe on feature branch
# Create PR when ready
/git:pr
```
### Small, Frequent Commits
✅ **Good:**
```bash
/cook [add login form]
/git:cp
/cook [add registration form]
/git:cp
/cook [add password reset]
/git:cp
```
❌ **Bad:**
```bash
# Work for 3 days
# Make 50 changes
/git:cp # Huge, unclear commit
```
## Workflow Patterns
### Feature Development
```bash
# Day 1
git checkout -b feature/payments
/cook [implement Stripe integration]
/git:cp
# Day 2
/cook [add payment UI]
/git:cp
# Day 3
/test
/git:cp
# Create PR
/git:pr
```
### Hotfix
```bash
# Critical bug in production
git checkout -b hotfix/login-error
/fix:fast [login error]
/git:cp # Push immediately
# Create PR to main
/git:pr main
# Merge and deploy
```
### Documentation Updates
```bash
# Update docs
/docs:update
/git:cp # Push docs immediately
# Changes live on docs site
```
## Troubleshooting
### Push Rejected
**Problem:** `! [rejected] main -> main (non-fast-forward)`
**Solution:**
```bash
# Don't force push!
# Instead, pull and rebase
git pull --rebase origin main
# Resolve conflicts if any
# Then push
git push
```
### Upstream Not Set
**Problem:** "No upstream branch"
**Solution:**
```bash
# Set upstream
git push -u origin branch-name
# Or let /git:cp do it
/git:cp
> y # When prompted to set upstream
```
### Failed Pre-Push Hooks
**Problem:** Pre-push hook prevents push
**Solution:**
```bash
# Fix the issues
npm run lint:fix
npm test
# Then retry
/git:cp
```
### Large Files
**Problem:** "File exceeds GitHub's 100 MB limit"
**Solution:**
```bash
# Use Git LFS
git lfs track "*.mp4"
git add .gitattributes
# Or remove large file
git rm --cached large-file.mp4
echo "large-file.mp4" >> .gitignore
# Then push
/git:cp
```
## Comparison
| Command | Local Commit | Push | Use Case |
|---------|-------------|------|----------|
| `/git:cm` | ✅ | ❌ | Review before push |
| `/git:cp` | ✅ | ✅ | Quick iteration |
| `git push` | ❌ | ✅ | After manual commit |
## Advanced Usage
### Multiple Remotes
```bash
# Push to specific remote
git push staging
git push production
# /git:cp uses default (origin)
```
### Branch Protection
Some repos have branch protection:
```
β Cannot push to protected branch 'main'
Create feature branch instead:
git checkout -b feature/your-changes
/git:cp # Now works
```
### CI Integration
After `/git:cp`, CI automatically:
1. Runs tests
2. Builds project
3. Deploys to staging
4. Notifies team
Monitor: `gh run watch`
## When NOT to Use
❌ **Don't use `/git:cp` when:**
- Working on shared branches (main, develop)
- Changes need team review
- Uncertain if changes work
- Multiple unrelated changes
- Experimental code
- Breaking changes
✅ **Use instead:**
```bash
/git:cm # Commit locally
# Test, review, get feedback
git push # Manual push when ready
```
## Next Steps
- [/git:pr](/docs/commands/git/pull-request) - Create pull request
- [/git:cm](/docs/commands/git/commit) - Commit without pushing
- [Git Workflow](/docs/guides/git-workflow) - Team workflows
---
**Key Takeaway**: `/git:cp` streamlines rapid development by combining commit and push, perfect for feature branches and solo development, but use cautiously on shared branches.
---
# /git:pr
Section: docs
Category: commands/git
URL: https://docs.claudekit.cc/docs/git/pull-request
# /git:pr
Create a pull request with an AI-generated comprehensive summary and test plan. Analyzes all commits since branch divergence, reviews complete change set, and generates professional PR description using GitHub CLI.
## Syntax
```bash
/git:pr [target-branch] [source-branch]
```
### Parameters
- `[target-branch]` (optional): Branch to merge into (defaults to `main`)
- `[source-branch]` (optional): Branch to merge from (defaults to current branch)
## How It Works
The `/git:pr` command uses the `git-manager` agent with this workflow:
### 1. Branch Analysis
- Identifies target and source branches
- Checks branch tracking and remote status
- Determines divergence point from base branch
- Reviews all commits since divergence
### 2. Change Review
- Runs `git status` to see current state
- Runs `git diff` for staged/unstaged changes
- Runs `git log` to review commit history
- Runs `git diff [base]...HEAD` for complete changeset
- Analyzes ALL commits (not just latest)
### 3. PR Summary Generation
- Reviews complete diff and commit messages
- Identifies feature additions, bug fixes, refactoring
- Creates bullet-point summary of changes
- Generates test plan checklist
- Follows professional PR format
### 4. PR Creation
- Creates/pushes branch if needed
- Executes `gh pr create` with generated content
- Returns PR URL for review
- Handles authentication and permissions
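Under the hood this boils down to a few git and GitHub CLI invocations, roughly like the following (illustrative; branch names, titles, and paths are placeholders):
```bash
# Approximate underlying commands for /git:pr
git log main..HEAD --oneline                          # commits since divergence
git diff main...HEAD --stat                           # complete changeset
git push -u origin "$(git branch --show-current)"     # make sure the branch exists on the remote
gh pr create --base main --title "Add user profile page" --body-file pr-body.md
```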
## When to Use
### ✅ Perfect For
**Feature Completion**
```bash
# After implementing feature
/git:pr
```
**Bug Fix Ready**
```bash
# After fixing and testing bug
/git:pr main hotfix/critical-bug
```
**Release Preparation**
```bash
# Merge develop to main
/git:pr main develop
```
**Code Review Request**
```bash
# Request team review
/git:pr
```
### ❌ Don't Use For
**Work in Progress**
```bash
❌ /git:pr # Before feature is complete
✅ Finish feature first, then create PR
```
**No Commits**
```bash
❌ /git:pr # Nothing committed yet
✅ /git:cm # Commit first
```
**GitHub CLI Not Installed**
```bash
❌ /git:pr
✅ Install gh: brew install gh
```
## Examples
### Simple Feature PR
```bash
/git:pr
```
**What happens:**
```
1. Analyzing branch
Current branch: feature/user-profile
Target branch: main
Diverged at: commit abc1234
Commits ahead: 5
2. Reviewing changes
$ git log main..HEAD
- feat: add user profile page
- feat: add profile avatar upload
- test: add profile component tests
- docs: document profile API
- fix: handle missing avatar gracefully
$ git diff main...HEAD
Files changed: 12
Insertions: +487
Deletions: -23
3. Generating PR description
β Created summary from all 5 commits
β Generated test plan
4. Creating PR
$ gh pr create --title "Add user profile page" --body "..."
β Created: https://github.com/user/repo/pull/42
β Pull request created (34s)
```
**Generated PR:**
```markdown
## Summary
- Add user profile page with avatar upload capability
- Implement profile data editing functionality
- Add profile avatar upload with image validation
- Include comprehensive test suite for profile components
- Document profile API endpoints
- Handle edge case for missing avatar with default image
## Test plan
- [ ] Verify profile page loads correctly
- [ ] Test profile data editing and saving
- [ ] Validate avatar upload with various image formats
- [ ] Test avatar size limits and validation
- [ ] Verify default avatar displays when none uploaded
- [ ] Run full test suite (npm test)
- [ ] Check API endpoint documentation accuracy
- [ ] Test error handling for failed uploads
```
### Hotfix PR
```bash
/git:pr main hotfix/payment-timeout
```
**What happens:**
```
1. Branch setup
Source: hotfix/payment-timeout
Target: main
Commits: 2
2. Analyzing hotfix
$ git diff main...hotfix/payment-timeout
- fix: increase payment gateway timeout
- test: add timeout handling tests
Critical fix: Payment processing timeouts under load
3. Creating PR
Title: "Fix: Payment gateway timeout issues"
Labels: hotfix, urgent
β Created: https://github.com/user/repo/pull/43
β Hotfix PR ready for immediate review (18s)
```
**Generated PR:**
```markdown
## Summary
- Increase payment gateway timeout from 5s to 30s to handle slow connections
- Add comprehensive timeout handling with retry logic
- Include tests validating timeout behavior under various network conditions
## Test plan
- [ ] Verify payments complete successfully under normal conditions
- [ ] Test timeout handling with slow network simulation
- [ ] Validate retry logic works correctly
- [ ] Run full payment integration test suite
- [ ] Monitor production metrics after deployment
- [ ] Check error logging captures timeout events correctly
```
### Complex Feature PR
```bash
/git:pr main feature/oauth-integration
```
**What happens:**
```
1. Analyzing large feature
Commits: 23
Files changed: 45
Insertions: +2,345
Deletions: -567
2. Reviewing comprehensive changes
β Analyzed all 23 commits
β Reviewed complete diff
β Identified major components:
- OAuth2 authentication flow
- Google and GitHub providers
- Token management system
- User account linking
- Security enhancements
3. Generating detailed PR
β Created comprehensive summary
β Generated thorough test plan
β Listed breaking changes
β Documented migration steps
4. Creating PR
β https://github.com/user/repo/pull/44
β Complex PR created with full documentation (1m 12s)
```
**Generated PR:**
```markdown
## Summary
### OAuth2 Implementation
- Implement OAuth2 authentication flow with authorization code grant
- Add Google OAuth provider with profile data retrieval
- Add GitHub OAuth provider with email and profile access
- Create token management system with secure storage
### User Experience
- Allow account linking for existing users
- Implement seamless signup flow for new users via OAuth
- Add OAuth disconnect functionality in account settings
### Security Enhancements
- Implement PKCE for OAuth flow security
- Add CSRF protection for OAuth callbacks
- Secure token storage with encryption
- Add rate limiting for OAuth endpoints
### Infrastructure
- Create OAuth callback endpoints
- Implement OAuth state management
- Add comprehensive error handling
- Include extensive test coverage (94%)
## Breaking Changes
- Authentication flow now requires `AUTH_CALLBACK_URL` environment variable
- Users table schema updated with OAuth fields (migration required)
## Migration Guide
1. Add environment variables:
```
GOOGLE_CLIENT_ID=...
GOOGLE_CLIENT_SECRET=...
GITHUB_CLIENT_ID=...
GITHUB_CLIENT_SECRET=...
AUTH_CALLBACK_URL=https://yourdomain.com/auth/callback
```
2. Run database migration:
```
npm run migrate
```
## Test plan
### OAuth Flow
- [ ] Test Google OAuth signup flow
- [ ] Test GitHub OAuth signup flow
- [ ] Test Google OAuth login flow
- [ ] Test GitHub OAuth login flow
- [ ] Verify OAuth callback handling
### Account Linking
- [ ] Test linking Google account to existing user
- [ ] Test linking GitHub account to existing user
- [ ] Verify unlinking OAuth accounts
- [ ] Test multiple OAuth providers per user
### Security
- [ ] Verify PKCE implementation
- [ ] Test CSRF protection
- [ ] Validate token encryption
- [ ] Test rate limiting
- [ ] Security audit of OAuth flow
### Edge Cases
- [ ] Test OAuth with invalid credentials
- [ ] Test OAuth callback errors
- [ ] Test concurrent OAuth attempts
- [ ] Verify expired token handling
### Integration
- [ ] Run full test suite (npm test)
- [ ] Test in staging environment
- [ ] Verify database migrations
- [ ] Check error logging and monitoring
```
### Release PR
```bash
/git:pr main develop
```
**What happens:**
```
1. Analyzing release
Source: develop
Target: main
Commits since last release: 67
Features: 12
Bug fixes: 8
Improvements: 5
2. Generating release summary
β Grouped changes by type
β Highlighted breaking changes
β Listed new features
β Documented bug fixes
3. Creating release PR
Title: "Release v2.1.0"
β https://github.com/user/repo/pull/45
β Release PR created (1m 45s)
```
## GitHub CLI Integration
### Requirements
The command requires GitHub CLI (`gh`):
```bash
# Install GitHub CLI
brew install gh # macOS
# or
sudo apt install gh # Linux
# or
winget install --id GitHub.cli # Windows
# Authenticate
gh auth login
```
### Authentication
If not authenticated:
```
! GitHub CLI not authenticated
Run: gh auth login
Follow prompts to authenticate with GitHub.
```
### Permissions Required
- Repository write access
- Pull request creation permissions
- Branch push permissions (if branch doesn't exist on remote)
## PR Format
### Title
Auto-generated based on commits:
- Single feature: "Add user authentication"
- Multiple features: "Release v2.1.0"
- Bug fix: "Fix payment processing timeout"
### Body Structure
```markdown
## Summary
- Bullet-point list of changes
- Organized by logical grouping
- Focuses on WHAT changed and WHY
## Test plan
- [ ] Checklist item 1
- [ ] Checklist item 2
- [ ] Integration tests
```
## Best Practices
### Commit Before Creating PR
✅ **Good - Everything committed:**
```bash
# Commit all work
/git:cm
# Push to remote
git push
# Create PR
/git:pr
```
❌ **Bad - Uncommitted changes:**
```bash
# Has uncommitted changes
/git:pr # May not include all changes
```
### Provide Branch Context
✅ **Explicit branches:**
```bash
# Merge hotfix to main
/git:pr main hotfix/critical-bug
# Merge feature to develop
/git:pr develop feature/new-api
```
### Review Before Merging
✅ **Review the PR:**
```bash
# Create PR
/git:pr
# Review on GitHub
# Get team feedback
# Address comments
# Then merge
```
## Workflow
### Feature Development Flow
```bash
# 1. Create feature branch
git checkout -b feature/user-dashboard
# 2. Implement feature (multiple commits)
/cook [add dashboard layout]
/git:cm
/cook [add dashboard widgets]
/git:cm
/cook [add dashboard filters]
/git:cm
# 3. Run tests
/test
# 4. Create PR
/git:pr
# 5. Address review feedback
# ... make changes ...
/git:cm
git push
# 6. Merge on GitHub after approval
```
### Hotfix Flow
```bash
# 1. Create hotfix branch from main
git checkout main
git pull
git checkout -b hotfix/payment-bug
# 2. Fix issue
/fix:fast [payment processing bug]
# 3. Test fix
/test
# 4. Commit
/git:cm
# 5. Create PR immediately
/git:pr main hotfix/payment-bug
# 6. Request urgent review
# 7. Merge and deploy ASAP
```
### Release Flow
```bash
# 1. Ensure develop is clean
git checkout develop
git pull
# 2. Update version and changelog
# Edit package.json and CHANGELOG.md
/git:cm
# 3. Create release PR
/git:pr main develop
# 4. Review changes thoroughly
# 5. Run full test suite in staging
# 6. Merge to main
# 7. Tag release
git tag v2.1.0
git push --tags
```
## Troubleshooting
### GitHub CLI Not Installed
**Problem:** `gh` command not found
**Solution:**
```bash
# Install GitHub CLI
brew install gh # macOS
sudo apt install gh # Linux
# Authenticate
gh auth login
# Retry
/git:pr
```
### Not Authenticated
**Problem:** GitHub authentication required
**Solution:**
```bash
# Authenticate with GitHub
gh auth login
# Follow prompts to authenticate
# Retry
/git:pr
```
### No Commits Ahead
**Problem:** Current branch has no new commits
**Solution:**
```bash
# Check branch status
git status
git log main..HEAD
# Ensure you have commits
# Or create new feature first
```
### Branch Not Pushed
**Problem:** Local branch not on remote
**Solution:**
```bash
# Command automatically pushes
/git:pr
# Or push manually first
git push -u origin feature-branch
/git:pr
```
### PR Already Exists
**Problem:** PR already exists for branch
**Solution:**
```bash
# View existing PR
gh pr view
# Or close old PR and create new one
gh pr close 42
/git:pr
```
## Related Commands
### Commit and Create PR
```bash
# 1. Commit changes
/git:cm
# 2. Create PR
/git:pr
```
### Commit, Push, and Create PR
```bash
# 1. Commit and push
/git:cp
# 2. Create PR
/git:pr
```
### Fix and Create PR
```bash
# 1. Fix issue
/fix:fast [bug description]
# 2. Test
/test
# 3. Commit
/git:cm
# 4. Create PR
/git:pr
```
## Advanced Usage
### Custom Target Branch
```bash
# Merge to develop instead of main
/git:pr develop
# Merge to staging
/git:pr staging
# Merge specific branches
/git:pr production hotfix/urgent-fix
```
### Draft PRs
```bash
# Create PR first (work in progress)
/git:pr
# Then mark as draft on GitHub
gh pr ready --undo
```
## Metrics
Typical `/git:pr` performance:
- **Time**: 30 seconds - 2 minutes (depending on changeset size)
- **Commits analyzed**: All commits since branch divergence
- **Summary quality**: Professional, comprehensive
- **Test plan coverage**: Typically 8-15 checklist items
- **Success rate**: 99%+ (assuming gh CLI configured)
## Next Steps
After using `/git:pr`:
- Review PR on GitHub
- Address reviewer feedback
- Run CI/CD pipeline
- Merge when approved
- [/watzup](/docs/commands/core/watzup) - Review what was accomplished
---
**Key Takeaway**: `/git:pr` creates professional pull requests by analyzing all commits since branch divergence and generating comprehensive summaries with actionable test plans, streamlining the code review process.
---
# /integrate:polar
Section: docs
Category: commands/integrate
URL: https://docs.claudekit.cc/docs/integrate/polar
# /integrate:polar
Implement complete payment integration with Polar.sh. This command handles subscriptions, one-time payments, webhook processing, customer portal, and all necessary backend logic for SaaS monetization.
## Syntax
```bash
/integrate:polar [integration requirements]
```
## How It Works
The `/integrate:polar` command follows a comprehensive integration workflow:
### 1. Requirements Analysis
- Identifies payment model (subscription, one-time, hybrid)
- Determines features needed (trials, coupons, metering, etc.)
- Analyzes existing codebase architecture
- Plans database schema for payment data
### 2. Polar.sh Research
Invokes **researcher** agent to:
- Review latest Polar.sh API documentation
- Check integration best practices
- Identify security requirements
- Study webhook handling patterns
### 3. Implementation Planning
Creates detailed plan for:
- API routes and endpoints
- Database models
- Webhook handlers
- Frontend components
- Error handling
- Testing strategy
### 4. Code Implementation
Invokes **code** agent to implement:
- Polar.sh SDK integration
- Payment flow endpoints
- Webhook processing
- Database operations
- Customer portal integration
- Error handling and logging
### 5. Testing & Security
- Generates comprehensive tests
- Implements webhook signature verification
- Adds error handling for all failure modes
- Tests subscription lifecycle events
- Validates PCI compliance requirements
## Examples
### Basic Subscription Integration
```bash
/integrate:polar [add Polar.sh subscription billing with 3 tiers: Free, Pro ($29/mo), Enterprise ($99/mo)]
```
**What happens:**
```
Phase 1: Analysis (30 seconds)
---
Agent: planner
Requirements identified:
β 3 subscription tiers
β Monthly billing
β Trial period (standard 14 days)
β Upgrade/downgrade flows
β Customer portal access
β Webhook processing
Architecture decisions:
- Backend: Node.js/Express detected
- Database: PostgreSQL detected
- Frontend: React detected
Phase 2: Research (45 seconds)
---
Agent: researcher
Polar.sh API features analyzed:
β Subscription API (v1)
β Webhook events (subscription.created, etc.)
β Customer portal (hosted)
β Checkout sessions
β Usage-based billing support
β Tax handling (automatic)
Best practices identified:
β Webhook signature verification required
β Idempotent webhook processing
β Subscription status synchronization
β Grace period handling
Phase 3: Implementation Plan
---
Agent: planner
Files to create:
1. Backend:
- src/routes/billing.routes.ts
- src/controllers/billing.controller.ts
- src/services/polar.service.ts
- src/services/subscription.service.ts
- src/middleware/subscription.middleware.ts
- src/webhooks/polar.webhook.ts
- src/models/subscription.model.ts
- src/models/customer.model.ts
2. Database:
- migrations/add-billing-tables.sql
3. Frontend:
- src/components/PricingTable.tsx
- src/components/CheckoutButton.tsx
- src/components/SubscriptionStatus.tsx
- src/components/CustomerPortal.tsx
- src/hooks/useSubscription.ts
4. Configuration:
- .env.example (add Polar.sh keys)
- config/polar.config.ts
5. Tests:
- tests/integration/billing.test.ts
- tests/webhooks/polar.webhook.test.ts
- tests/services/subscription.service.test.ts
Phase 4: Code Implementation (2-3 minutes)
---
Agent: code
β Polar.sh SDK installed (@polar-sh/sdk)
β Database migrations created
β Models implemented (Subscription, Customer)
β Polar service with error handling
β Subscription service with lifecycle management
β API routes created:
- POST /api/billing/checkout
- GET /api/billing/portal
- GET /api/billing/subscription
- PUT /api/billing/subscription/upgrade
- PUT /api/billing/subscription/downgrade
- DELETE /api/billing/subscription/cancel
β Webhook endpoint implemented:
- POST /api/webhooks/polar
- Signature verification
- Event handlers for:
β’ subscription.created
β’ subscription.updated
β’ subscription.canceled
β’ subscription.active
β’ payment.succeeded
β’ payment.failed
β Frontend components created
β Subscription middleware for route protection
β Error handling with retry logic
β Logging for all payment events
Phase 5: Database Schema
---
Created tables:
customers:
- id (uuid, primary key)
- user_id (uuid, foreign key)
- polar_customer_id (text, unique)
- email (text)
- created_at (timestamp)
- updated_at (timestamp)
subscriptions:
- id (uuid, primary key)
- customer_id (uuid, foreign key)
- polar_subscription_id (text, unique)
- tier (text) -- 'free', 'pro', 'enterprise'
- status (text) -- 'active', 'canceled', 'past_due', etc.
- current_period_start (timestamp)
- current_period_end (timestamp)
- cancel_at_period_end (boolean)
- created_at (timestamp)
- updated_at (timestamp)
payment_history:
- id (uuid, primary key)
- subscription_id (uuid, foreign key)
- polar_payment_id (text)
- amount (integer) -- in cents
- currency (text)
- status (text)
- paid_at (timestamp)
- created_at (timestamp)
Phase 6: Configuration
---
Environment variables added to .env.example:
POLAR_SECRET_KEY=polar_sk_...
POLAR_WEBHOOK_SECRET=whsec_...
POLAR_PRODUCT_ID_PRO=prod_...
POLAR_PRODUCT_ID_ENTERPRISE=prod_...
FRONTEND_URL=http://localhost:3000
Phase 7: Frontend Integration
---
Pricing page example created:
- PricingTable component rendering the three tiers
- Pro tier highlighted
Customer portal integration:
The CustomerPortal component provides:
- View subscription details
- Update payment method
- Download invoices
- Cancel subscription
- Upgrade/downgrade
Phase 8: Testing (1 minute)
---
Agent: tester
Generated tests:
β Checkout flow tests (15 tests)
β Webhook processing tests (22 tests)
β Subscription lifecycle tests (18 tests)
β Upgrade/downgrade tests (12 tests)
β Cancellation tests (8 tests)
β Error handling tests (14 tests)
All 89 tests passed
Coverage: 94%
Phase 9: Documentation
---
Created documentation:
docs/billing/polar-integration.md:
- Setup instructions
- Environment variables
- API endpoints
- Webhook events
- Testing with Polar.sh CLI
- Production deployment checklist
- Common issues and solutions
β Integration complete (4 minutes 30 seconds)
Summary:
---
β 15 files created
β 8 API endpoints implemented
β 6 webhook events handled
β 3 database tables
β 4 React components
β 89 tests (94% coverage)
β Complete documentation
Next steps:
1. Add Polar.sh API keys to .env
2. Run migrations: npm run migrate
3. Test with Polar.sh test mode
4. Create products in Polar.sh dashboard
5. Update .env with product IDs
6. Deploy to staging
```
### Advanced Integration with Usage-Based Billing
```bash
/integrate:polar [implement Polar.sh with usage-based billing for API calls, $49 base + $0.01 per API call over 10k]
```
**What happens:**
```
Phase 1: Analysis
---
Requirements:
β Base subscription ($49/month)
β Usage-based add-on ($0.01 per API call)
β Included quota (10,000 calls/month)
β Usage tracking and reporting
β Overage warnings
Additional complexity:
- API call metering required
- Usage aggregation
- Invoice preview before billing
- Usage reset on billing cycle
Phase 2: Implementation
---
Added components:
1. Usage Tracking:
- src/middleware/usage-tracking.middleware.ts
- src/services/usage.service.ts
- Database table: api_usage
2. Usage Reporting:
- src/routes/usage.routes.ts
- GET /api/usage/current
- GET /api/usage/history
- GET /api/usage/invoice-preview
3. Polar Metered Billing:
- Integration with Polar metered usage API
- Hourly usage sync to Polar
- Usage reset on billing cycle
4. Usage Alerts:
- Email notifications at 80%, 100%, 150%
- Dashboard warnings
- Webhook events for threshold crossing
Phase 3: Usage Tracking Implementation
---
Middleware added to all API routes:
import { trackUsage } from './middleware/usage-tracking';
router.use('/api/v1', trackUsage);
Usage service:
- Tracks each API call
- Aggregates by subscription
- Syncs to Polar every hour
- Calculates overage charges
Database schema:
api_usage:
- id, subscription_id, endpoint
- request_count, timestamp
- billing_period_start, billing_period_end
Phase 4: Frontend Updates
---
Added components:
- Visual usage graph
- Warning banners
- Estimated next bill
- Detailed breakdown
β Usage-based billing complete (6 minutes)
```
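A rough sketch of what a usage-tracking middleware like the one described above can look like (Express is assumed, and `recordApiCall` is a hypothetical stand-in for the generated usage service):
```typescript
import type { Request, Response, NextFunction } from 'express';

// Stand-in for the generated usage service (hypothetical helper)
declare function recordApiCall(entry: {
  subscriptionId?: string;
  endpoint: string;
  timestamp: Date;
}): Promise<void>;

export function trackUsage(req: Request, res: Response, next: NextFunction) {
  // Record after the response finishes so rejected or unauthenticated calls can be filtered out
  res.on('finish', () => {
    void recordApiCall({
      subscriptionId: (req as any).subscription?.id,
      endpoint: req.path,
      timestamp: new Date(),
    });
  });
  next();
}
```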
## Polar.sh Features Implemented
### 1. Checkout Sessions
```typescript
// Create checkout session
const checkout = await polar.checkouts.create({
productId: POLAR_PRODUCT_ID,
successUrl: `${FRONTEND_URL}/success`,
customerEmail: user.email,
});
// Frontend redirects to:
window.location.href = checkout.url;
```
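On the backend, the generated `POST /api/billing/checkout` route wraps this call. A minimal sketch, assuming Express and a `requireAuth` middleware that attaches the current user (both assumptions, not part of the Polar SDK):
```typescript
import { Router } from 'express';
// `polar` is the SDK client shown above; `requireAuth` is a hypothetical auth middleware
const router = Router();

router.post('/api/billing/checkout', requireAuth, async (req, res) => {
  try {
    const checkout = await polar.checkouts.create({
      productId: req.body.productId, // e.g. POLAR_PRODUCT_ID_PRO
      successUrl: `${process.env.FRONTEND_URL}/success`,
      customerEmail: (req as any).user.email,
    });
    res.json({ url: checkout.url }); // the frontend redirects to this URL
  } catch (err) {
    res.status(500).json({ error: 'Failed to create checkout session' });
  }
});

export default router;
```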
### 2. Webhook Processing
```typescript
// Webhook handler with signature verification
app.post('/api/webhooks/polar', async (req, res) => {
const signature = req.headers['polar-signature'];
// Verify signature
const isValid = polar.webhooks.verify(
req.body,
signature,
POLAR_WEBHOOK_SECRET
);
if (!isValid) {
return res.status(400).json({ error: 'Invalid signature' });
}
// Handle event
const event = req.body;
switch (event.type) {
case 'subscription.created':
await handleSubscriptionCreated(event.data);
break;
case 'subscription.updated':
await handleSubscriptionUpdated(event.data);
break;
// ... other events
}
res.json({ received: true });
});
```
### 3. Customer Portal
```typescript
// Generate customer portal URL
const portalUrl = await polar.customerPortal.createSession({
customerId: customer.polarCustomerId,
returnUrl: `${FRONTEND_URL}/settings`,
});
// Frontend redirects to:
window.location.href = portalUrl;
```
### 4. Subscription Management
```typescript
// Upgrade subscription
await polar.subscriptions.update(subscriptionId, {
productId: NEW_PRODUCT_ID,
prorationBehavior: 'create_prorations',
});
// Cancel subscription
await polar.subscriptions.cancel(subscriptionId, {
cancelAtPeriodEnd: true,
});
```
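The generated `subscription.middleware.ts` gates routes by tier. One possible shape (illustrative; assumes Express and that an earlier auth step attached the active subscription to the request):
```typescript
import type { Request, Response, NextFunction } from 'express';

const TIER_ORDER = ['free', 'pro', 'enterprise'] as const;
type Tier = (typeof TIER_ORDER)[number];

export function requireTier(minTier: Tier) {
  return (req: Request, res: Response, next: NextFunction) => {
    // Assumes an earlier middleware loaded the user's active subscription
    const tier: Tier = (req as any).subscription?.tier ?? 'free';
    if (TIER_ORDER.indexOf(tier) < TIER_ORDER.indexOf(minTier)) {
      return res.status(402).json({ error: 'Upgrade required' });
    }
    next();
  };
}

// Usage: router.get('/api/reports', requireTier('pro'), reportsHandler);
```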
## Webhook Events Handled
```
β subscription.created - New subscription started
β subscription.updated - Tier changed, status updated
β subscription.canceled - Subscription canceled
β subscription.active - Subscription became active
β subscription.past_due - Payment failed
β payment.succeeded - Payment successful
β payment.failed - Payment failed
β customer.created - New customer created
β customer.updated - Customer details changed
β invoice.created - New invoice generated
β invoice.paid - Invoice paid
β invoice.payment_failed - Invoice payment failed
```
## Database Schema
### Minimal Schema
```sql
CREATE TABLE customers (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
user_id UUID NOT NULL REFERENCES users(id),
polar_customer_id TEXT UNIQUE,
email TEXT NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE subscriptions (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
customer_id UUID NOT NULL REFERENCES customers(id),
polar_subscription_id TEXT UNIQUE,
tier TEXT NOT NULL,
status TEXT NOT NULL,
current_period_start TIMESTAMP,
current_period_end TIMESTAMP,
cancel_at_period_end BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_subscriptions_customer ON subscriptions(customer_id);
CREATE INDEX idx_subscriptions_status ON subscriptions(status);
```
## Security Best Practices
### 1. Webhook Signature Verification
```typescript
// ALWAYS verify webhook signatures
const isValid = polar.webhooks.verify(
req.body,
signature,
POLAR_WEBHOOK_SECRET
);
if (!isValid) {
throw new Error('Invalid webhook signature');
}
```
### 2. Idempotent Webhook Processing
```typescript
// Prevent duplicate processing
const existingEvent = await db.webhookEvents.findOne({
eventId: event.id
});
if (existingEvent) {
return; // Already processed
}
// Process event...
// Save event ID
await db.webhookEvents.create({
eventId: event.id,
processedAt: new Date()
});
```
### 3. API Key Security
```typescript
// NEVER expose secret key in frontend
// Backend only:
const polar = new Polar(process.env.POLAR_SECRET_KEY);
// Frontend uses API routes:
await fetch('/api/billing/checkout', {
method: 'POST',
headers: { Authorization: `Bearer ${userToken}` }
});
```
## Testing
### Test Mode
```bash
# Use Polar.sh test mode keys
POLAR_SECRET_KEY=polar_sk_test_...
POLAR_WEBHOOK_SECRET=whsec_test_...
```
### Webhook Testing
```bash
# Install Polar CLI
npm install -g @polar-sh/cli
# Forward webhooks to local
polar webhooks forward --url http://localhost:3000/api/webhooks/polar
```
### Trigger Test Events
```bash
# Test subscription created
polar trigger subscription.created --subscription-id sub_test_123
# Test payment failed
polar trigger payment.failed --subscription-id sub_test_123
```
## Common Use Cases
### 1. SaaS with Tiers
```bash
/integrate:polar [3 tiers: Starter ($19), Pro ($49), Enterprise ($199), 14-day trial]
```
### 2. Usage-Based Billing
```bash
/integrate:polar [base $29/mo + $0.10 per GB storage over 100GB]
```
### 3. One-Time Payments
```bash
/integrate:polar [sell course for $299 one-time payment]
```
### 4. Freemium Model
```bash
/integrate:polar [free tier with upgrade to Pro ($29/mo) with feature gating]
```
## Troubleshooting
### Webhook Not Received
**Problem:** Webhooks not hitting endpoint
**Check:**
```bash
# 1. Verify webhook URL in Polar dashboard
# 2. Check endpoint is accessible
curl -X POST http://your-domain.com/api/webhooks/polar
# 3. Check webhook logs in Polar dashboard
# 4. Verify webhook secret matches .env
```
### Payment Failed
**Problem:** Customer payment fails
**Handled automatically:**
- `payment.failed` webhook triggers
- Subscription status β `past_due`
- Customer notified via email
- Retry logic in Polar handles retries
- Grace period before cancellation
### Subscription Not Syncing
**Problem:** Database out of sync with Polar
**Solution:**
```bash
# Run sync script (generated during integration)
npm run sync:subscriptions
# Or manually query Polar API
const subscription = await polar.subscriptions.get(subscriptionId);
await syncToDB(subscription);
```
## Production Deployment Checklist
Before going live:
```
β‘ Switch to production API keys
β‘ Configure webhook endpoint (must be HTTPS)
β‘ Test webhook signature verification
β‘ Set up webhook monitoring (logging)
β‘ Configure tax settings in Polar dashboard
β‘ Create production products/prices
β‘ Update .env with production product IDs
β‘ Test subscription flow end-to-end
β‘ Test cancellation flow
β‘ Test upgrade/downgrade
β‘ Set up invoice email templates
β‘ Configure customer portal branding
β‘ Add terms of service link
β‘ Add privacy policy link
β‘ Set up payment failure notifications
β‘ Configure dunning settings
β‘ Test with real card (then refund)
```
## Generated Files
After `/integrate:polar` completes:
### Backend
```
src/routes/billing.routes.ts
src/controllers/billing.controller.ts
src/services/polar.service.ts
src/services/subscription.service.ts
src/webhooks/polar.webhook.ts
src/models/subscription.model.ts
src/models/customer.model.ts
src/middleware/subscription.middleware.ts
```
### Frontend
```
src/components/PricingTable.tsx
src/components/CheckoutButton.tsx
src/components/SubscriptionStatus.tsx
src/components/CustomerPortal.tsx
src/hooks/useSubscription.ts
```
### Database
```
migrations/YYYYMMDD-add-billing-tables.sql
```
### Tests
```
tests/integration/billing.test.ts
tests/webhooks/polar.webhook.test.ts
tests/services/subscription.service.test.ts
```
### Documentation
```
docs/billing/polar-integration.md
docs/billing/webhook-events.md
docs/billing/testing-guide.md
```
## Next Steps
After integration:
```bash
# 1. Integration complete
/integrate:polar [requirements]
# 2. Add API keys
cp .env.example .env
# Add POLAR_SECRET_KEY and other keys
# 3. Run migrations
npm run migrate
# 4. Test in test mode
npm run dev
# Visit /pricing and test checkout
# 5. Run tests
/test
# 6. Deploy to staging
/deploy [staging]
# 7. Test end-to-end in staging
# 8. Switch to production keys
# 9. Deploy to production
/deploy [production]
# 10. Monitor webhooks
# Check logs and Polar dashboard
```
## Related Commands
- [/integrate:sepay](/docs/commands/integrate/sepay) - Vietnamese payment gateway
- [/cook](/docs/commands/core/cook) - Implement custom features
- [/test](/docs/commands/core/test) - Run test suite
---
**Key Takeaway**: `/integrate:polar` provides complete Polar.sh payment integration including subscriptions, webhooks, customer portal, and all necessary backend/frontend code with tests and documentation, production-ready in minutes.
---
# /integrate:sepay
Section: docs
Category: commands/integrate
URL: https://docs.claudekit.cc/docs/integrate/sepay
# /integrate:sepay
Implement complete payment integration with SePay.vn, Vietnam's leading payment gateway. This command handles QR code payments, bank transfers, webhook processing, and all necessary backend logic for Vietnamese market monetization.
## Syntax
```bash
/integrate:sepay [integration requirements]
```
## How It Works
The `/integrate:sepay` command follows a comprehensive integration workflow:
### 1. Requirements Analysis
- Identifies payment methods needed (QR, bank transfer, card)
- Determines payment flow (one-time, subscription, top-up)
- Analyzes existing codebase architecture
- Plans database schema for Vietnamese payment data
### 2. SePay Research
Invokes **researcher** agent to:
- Review latest SePay API documentation
- Check Vietnam-specific payment regulations
- Identify localization requirements
- Study webhook handling patterns
- Research VND currency handling
### 3. Implementation Planning
Creates detailed plan for:
- API routes and endpoints
- Database models (Vietnamese fields)
- Webhook handlers
- QR code generation
- Bank transfer tracking
- Payment confirmation flow
- Error handling (Vietnamese language)
- Testing strategy
### 4. Code Implementation
Invokes **code** agent to implement:
- SePay SDK integration
- Payment creation endpoints
- QR code generation and display
- Bank transfer instructions
- Webhook processing
- Payment status polling
- Database operations with VND support
- Vietnamese localization
### 5. Testing & Compliance
- Generates comprehensive tests
- Implements webhook signature verification
- Adds Vietnamese bank ID validation
- Tests payment lifecycle
- Validates Vietnamese regulatory compliance
## Examples
### Basic QR Code Payment Integration
```bash
/integrate:sepay [add SePay QR code payments for course purchases, support Vietcombank, Techcombank, BIDV]
```
**What happens:**
```
Phase 1: Analysis (30 seconds)
---
Agent: planner
Requirements identified:
β QR code payment method
β Course purchase (one-time payment)
β Support major Vietnamese banks
β VND currency handling
β Payment confirmation flow
β Webhook processing
Vietnam-specific requirements:
β Bank account validation
β Vietnamese language support
β VND formatting (no decimals)
β Transaction reference format
β Bank transfer instructions
Architecture decisions:
- Backend: Node.js/Express detected
- Database: PostgreSQL detected
- Frontend: React detected
- Currency: VND (no decimals)
Phase 2: Research (45 seconds)
---
Agent: researcher
SePay API features analyzed:
β QR Code API (Dynamic QR)
β Bank Transfer API
β Webhook events (payment.success, payment.expired)
β Transaction verification
β Supported banks (18 major Vietnamese banks)
β QR code expiry (15 minutes default)
Vietnamese payment regulations:
β Transaction reference format: ABC123
β Max transaction: 500,000,000 VND
β Min transaction: 1,000 VND
β VAT handling (10% standard)
β Invoice requirements
Phase 3: Implementation Plan
---
Files to create:
1. Backend:
- src/routes/payment.routes.ts
- src/controllers/payment.controller.ts
- src/services/sepay.service.ts
- src/services/transaction.service.ts
- src/webhooks/sepay.webhook.ts
- src/models/payment.model.ts
- src/models/transaction.model.ts
- src/utils/vnd-formatter.ts
- src/utils/bank-validator.ts
2. Database:
- migrations/add-sepay-tables.sql
3. Frontend:
- src/components/PaymentQR.tsx
- src/components/BankTransferInstructions.tsx
- src/components/PaymentStatus.tsx
- src/components/PaymentHistory.tsx
- src/hooks/usePayment.ts
- src/utils/vnd-format.ts
4. Configuration:
- .env.example (add SePay credentials)
- config/sepay.config.ts
- locales/vi.json (Vietnamese translations)
5. Tests:
- tests/integration/payment.test.ts
- tests/webhooks/sepay.webhook.test.ts
- tests/utils/vnd-formatter.test.ts
Phase 4: Code Implementation (2-3 minutes)
---
Agent: code
β SePay SDK installed (sepay-node-sdk)
β Database migrations created
β Models implemented (Payment, Transaction)
β SePay service with error handling
β Transaction service with VND support
β API routes created:
- POST /api/payment/create
- GET /api/payment/:id
- GET /api/payment/:id/qr
- GET /api/payment/:id/status
- POST /api/payment/:id/verify
- GET /api/payment/history
β Webhook endpoint implemented:
- POST /api/webhooks/sepay
- Signature verification
- Event handlers for:
β’ payment.success
β’ payment.expired
β’ payment.failed
β’ payment.pending
β Frontend components created:
- QR code display with countdown
- Bank transfer instructions
- Payment status tracking
- Vietnamese formatting
β Vietnamese localization:
- Error messages in Vietnamese
- Bank names in Vietnamese
- Payment instructions in Vietnamese
- Success/failure messages
β VND utilities:
- Format: 1,000,000 ₫
- Parse: Handle comma separators
- Validate: No decimals, min/max
Phase 5: Database Schema
---
Created tables:
payments:
- id (uuid, primary key)
- user_id (uuid, foreign key)
- sepay_transaction_id (text, unique)
- order_id (text) -- course purchase ID
- amount (bigint) -- in VND, no decimals
- currency (text) -- always 'VND'
- status (text) -- 'pending', 'success', 'expired', 'failed'
- payment_method (text) -- 'qr', 'bank_transfer'
- bank_code (text) -- 'VCB', 'TCB', 'BIDV', etc.
- qr_code_url (text)
- qr_expires_at (timestamp)
- paid_at (timestamp)
- created_at (timestamp)
- updated_at (timestamp)
transactions:
- id (uuid, primary key)
- payment_id (uuid, foreign key)
- transaction_ref (text) -- Vietnamese bank ref
- bank_transaction_id (text)
- sender_account (text)
- sender_name (text)
- receiver_account (text)
- transfer_content (text)
- transferred_at (timestamp)
- verified_at (timestamp)
- created_at (timestamp)
Phase 6: Configuration
---
Environment variables added:
SEPAY_APP_ID=your_app_id
SEPAY_SECRET_KEY=your_secret_key
SEPAY_WEBHOOK_SECRET=your_webhook_secret
SEPAY_ACCOUNT_NUMBER=0123456789
SEPAY_ACCOUNT_NAME=YOUR_COMPANY_NAME
SEPAY_BANK_CODE=VCB
FRONTEND_URL=http://localhost:3000
Phase 7: Frontend Integration
---
Payment QR Code component:
Displays:
- QR code image (auto-refreshes)
- Amount in VND format: 1,000,000 ₫
- Countdown timer (15:00)
- Bank transfer instructions
- Payment status updates (real-time)
- Auto-redirect on success
Bank Transfer Instructions:
Shows:
- Bank name (Vietnamese)
- Account number
- Account holder name
- Transfer amount
- Transfer content (unique ref)
- Copy buttons for each field
Example:
---
Ngân hàng: Vietcombank (VCB)
Số tài khoản: 0123456789
Tên tài khoản: CONG TY CLAUDEKIT
Số tiền: 1,000,000 ₫
Nội dung: COURSE ABC123
[Sao chép tất cả]
---
Phase 8: Payment Flow
---
1. User clicks "Thanh toán" (Pay)
2. Backend creates SePay payment
3. Frontend displays QR code
4. User scans QR with banking app
5. User confirms payment in app
6. Bank sends transaction to SePay
7. SePay webhook notifies backend
8. Backend verifies payment
9. Database updated (status: success)
10. Frontend shows success message
11. Course access granted
Timeout handling:
- QR expires after 15 minutes
- User can generate new QR
- Expired payments marked 'expired'
Phase 9: Testing (1 minute)
---
Agent: tester
Generated tests:
β Payment creation tests (12 tests)
β QR code generation tests (8 tests)
β Webhook processing tests (18 tests)
β VND formatting tests (15 tests)
β Bank validation tests (10 tests)
β Transaction verification tests (14 tests)
β Payment expiry tests (9 tests)
β Error handling tests (12 tests)
All 98 tests passed
Coverage: 96%
Phase 10: Documentation
---
Created documentation:
docs/payment/sepay-integration.md:
- Setup instructions
- Bank account configuration
- Environment variables
- API endpoints
- Webhook events
- Testing with SePay sandbox
- Vietnamese bank codes
- Troubleshooting guide
- Production deployment checklist
β Integration complete (4 minutes 15 seconds)
Summary:
---
β 16 files created
β 7 API endpoints implemented
β 4 webhook events handled
β 2 database tables
β 5 React components
β Vietnamese localization
β 98 tests (96% coverage)
β Complete documentation
Next steps:
1. Register SePay account at sepay.vn
2. Get App ID and Secret Key
3. Configure bank account
4. Add credentials to .env
5. Run migrations: npm run migrate
6. Test with SePay sandbox
7. Deploy to staging
```
### Subscription Billing with Auto-Renewal
```bash
/integrate:sepay [implement monthly subscription billing 299,000 VND/month with auto bank transfer reminders]
```
**What happens:**
```
Phase 1: Analysis
---
Requirements:
β Monthly subscription (299,000 VND)
β Auto-renewal reminders via email
β Bank transfer payment method
β Grace period (3 days)
β Suspension after grace period
β Reactivation flow
Vietnamese subscription pattern:
- Manual bank transfer each month
- Email reminder 3 days before expiry
- Fixed transfer content for tracking
- No auto-charge (regulatory)
Phase 2: Implementation
---
Additional components:
1. Subscription Management:
- src/services/subscription.service.ts
- src/models/subscription.model.ts
- src/jobs/subscription-reminder.job.ts
- src/jobs/subscription-expiry.job.ts
2. Email Notifications:
- templates/email/subscription-reminder-vi.html
- templates/email/subscription-expired-vi.html
- templates/email/payment-confirmed-vi.html
3. Cron Jobs:
- Daily check for subscriptions expiring in 3 days
- Daily check for expired subscriptions
- Send email reminders with payment instructions
4. Payment Verification:
- Manual verification endpoint
- Auto-verification via webhook
- Admin dashboard for verification
Database schema additions:
subscriptions:
- id, user_id, plan_id
- status ('active', 'grace_period', 'suspended')
- amount (bigint, VND)
- current_period_start, current_period_end
- next_billing_date
- auto_renew (boolean)
- payment_reminder_sent (boolean)
- grace_period_end
- created_at, updated_at
subscription_payments:
- id, subscription_id, payment_id
- period_start, period_end
- amount (bigint)
- paid_at
- created_at
Email reminder example:
---
Tiêu đề: Gia hạn gói ClaudeKit Pro
Xin chào [Tên],
Gói ClaudeKit Pro của bạn sẽ hết hạn vào 3 ngày nữa
(ngày 15/11/2025).
Để tiếp tục sử dụng, vui lòng thanh toán:
Số tiền: 299,000 ₫
Ngân hàng: Vietcombank (VCB)
Số tài khoản: 0123456789
Nội dung: SUB[user_id]
[Thanh toán ngay]
Cảm ơn bạn đã tin dùng ClaudeKit!
---
β Subscription billing complete (5 minutes)
```
## SePay Features Implemented
### 1. QR Code Payment
```typescript
// Create payment and generate QR
const payment = await sepay.createPayment({
amount: 1000000, // 1,000,000 VND
orderId: 'ORDER123',
description: 'Course purchase',
returnUrl: `${FRONTEND_URL}/payment/success`,
});
// Get QR code URL
const qrCodeUrl = payment.qrCodeUrl;
// Frontend displays QR
```
### 2. Bank Transfer Instructions
```typescript
// Generate transfer instructions
const instructions = {
bankCode: 'VCB', // Vietcombank
accountNumber: process.env.SEPAY_ACCOUNT_NUMBER,
accountName: process.env.SEPAY_ACCOUNT_NAME,
amount: 1000000,
content: `COURSE${orderId}`, // Unique reference
};
// Display to user with copy buttons
```
### 3. Payment Verification
```typescript
// Check payment status
const status = await sepay.verifyPayment(transactionId);
if (status === 'success') {
// Grant access
await grantCourseAccess(userId, courseId);
}
```
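On the frontend, the generated `usePayment` hook can poll the status route until the payment settles. A sketch, assuming React and the `GET /api/payment/:id/status` endpoint listed above:
```typescript
import { useEffect, useState } from 'react';

type PaymentStatus = 'pending' | 'success' | 'expired' | 'failed';

export function usePayment(paymentId: string, intervalMs = 3000) {
  const [status, setStatus] = useState<PaymentStatus>('pending');

  useEffect(() => {
    const timer = setInterval(async () => {
      const res = await fetch(`/api/payment/${paymentId}/status`);
      const data: { status: PaymentStatus } = await res.json();
      setStatus(data.status);
      // Stop polling once the payment reaches a terminal state
      if (data.status !== 'pending') clearInterval(timer);
    }, intervalMs);
    return () => clearInterval(timer);
  }, [paymentId, intervalMs]);

  return status;
}
```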
### 4. Webhook Processing
```typescript
// Webhook handler with signature verification
app.post('/api/webhooks/sepay', async (req, res) => {
const signature = req.headers['sepay-signature'];
// Verify signature
const isValid = sepay.verifyWebhookSignature(
req.body,
signature,
SEPAY_WEBHOOK_SECRET
);
if (!isValid) {
return res.status(400).json({ error: 'Invalid signature' });
}
const event = req.body;
switch (event.type) {
case 'payment.success':
await handlePaymentSuccess(event.data);
break;
case 'payment.expired':
await handlePaymentExpired(event.data);
break;
}
res.json({ success: true });
});
```
## Vietnamese Bank Codes
Supported major banks:
```
VCB - Vietcombank
TCB - Techcombank
BIDV - BIDV
VTB - Vietinbank
ACB - ACB
MB - MBBank
MSB - MSB
SCB - SCB
VPB - VPBank
TPB - TPBank
SHB - SHB
HDB - HDBank
OCB - OCB
LPB - LienVietPostBank
NAB - NamABank
ABB - AnBinhBank
```
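In code, the generated bank validator can keep these as a simple lookup table. A sketch using a subset of the codes above:
```typescript
// Bank code lookup used for validation and display (subset of the list above)
const BANK_NAMES: Record<string, string> = {
  VCB: 'Vietcombank',
  TCB: 'Techcombank',
  BIDV: 'BIDV',
  VTB: 'Vietinbank',
  ACB: 'ACB',
  MB: 'MBBank',
};

function isSupportedBank(code: string): boolean {
  return code.toUpperCase() in BANK_NAMES;
}
```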
## VND Currency Handling
### Formatting
```typescript
// Format VND (no decimals)
function formatVND(amount: number): string {
return new Intl.NumberFormat('vi-VN', {
style: 'currency',
currency: 'VND',
minimumFractionDigits: 0,
maximumFractionDigits: 0,
}).format(amount);
}
// Usage
formatVND(1000000); // "1.000.000 ₫"
formatVND(299000); // "299.000 ₫"
```
### Parsing
```typescript
// Parse VND string to number
function parseVND(str: string): number {
return parseInt(str.replace(/[^\d]/g, ''), 10);
}
// Usage
parseVND("1.000.000 ₫"); // 1000000
parseVND("299,000"); // 299000
```
### Validation
```typescript
// Validate VND amount
function validateVND(amount: number): boolean {
const MIN = 1000; // 1,000 VND
const MAX = 500000000; // 500,000,000 VND
return (
amount >= MIN &&
amount <= MAX &&
Number.isInteger(amount) // No decimals
);
}
```
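A few illustrative calls:
```typescript
validateVND(299000);   // true
validateVND(500);      // false (below the 1,000 VND minimum)
validateVND(10000.5);  // false (VND amounts must be whole numbers)
```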
## Vietnamese Localization
### Payment Status Messages
```typescript
const messages = {
  pending: 'Đang chờ thanh toán',   // Awaiting payment
  success: 'Thanh toán thành công', // Payment successful
  expired: 'Đã hết hạn',            // Expired
  failed: 'Thanh toán thất bại',    // Payment failed
};
```
### Error Messages
```typescript
const errors = {
  INVALID_AMOUNT: 'Số tiền không hợp lệ',         // Invalid amount
  PAYMENT_EXPIRED: 'QR code đã hết hạn',          // QR code has expired
  PAYMENT_NOT_FOUND: 'Không tìm thấy giao dịch',  // Transaction not found
  INSUFFICIENT_AMOUNT: 'Số tiền chuyển không đủ', // Transferred amount is insufficient
};
```
## Testing
### Sandbox Mode
```bash
# Use SePay sandbox credentials
SEPAY_APP_ID=test_app_id
SEPAY_SECRET_KEY=test_secret_key
SEPAY_WEBHOOK_SECRET=test_webhook_secret
```
### Test Bank Accounts
```
Test accounts provided by SePay:
- Account: 0123456789 (always success)
- Account: 9876543210 (always fail)
- Account: 5555555555 (expires after 1 min)
```
### Webhook Testing
```bash
# Forward webhooks to local (SePay CLI)
sepay webhooks forward --url http://localhost:3000/api/webhooks/sepay
# Trigger test event
sepay trigger payment.success --transaction-id TXN123
```
## Common Use Cases
### 1. Course Purchase
```bash
/integrate:sepay [one-time payment for course, support QR and bank transfer]
```
### 2. Subscription
```bash
/integrate:sepay [monthly subscription 299,000 VND with email reminders]
```
### 3. Wallet Top-Up
```bash
/integrate:sepay [wallet top-up any amount 10,000-10,000,000 VND]
```
### 4. Service Payment
```bash
/integrate:sepay [pay for services with instant confirmation]
```
## Troubleshooting
### QR Code Expired
**Problem:** User didn't pay within 15 minutes
**Solution:**
```typescript
// Provide "Generate New QR" button
const newPayment = await sepay.createPayment({
amount: originalAmount,
orderId: orderId,
});
// Update UI with new QR
setQrCodeUrl(newPayment.qrCodeUrl);
```
### Payment Not Detected
**Problem:** User paid but webhook not received
**Solution:**
```typescript
// Provide a manual "check payment" button for the user.
// The component name below is illustrative; it reuses the verifyPayment call shown earlier.
<Button
  onClick={async () => {
    const status = await sepay.verifyPayment(transactionId);
    if (status === 'success') {
      // Grant access
    }
  }}
>
  Kiểm tra thanh toán {/* "Check payment" */}
</Button>
```
### Wrong Transfer Amount
**Problem:** User transferred incorrect amount
**Solution:**
```typescript
// Webhook receives actual amount
if (event.data.amount < expectedAmount) {
// Notify user of shortage
await sendEmail(user, 'amount_mismatch', {
expected: expectedAmount,
received: event.data.amount,
shortage: expectedAmount - event.data.amount,
});
}
```
## Production Checklist
Before going live:
```
β‘ Register business account at sepay.vn
β‘ Complete business verification (1-3 days)
β‘ Get production credentials
β‘ Configure webhook endpoint (HTTPS required)
β‘ Test with real bank account (small amount)
β‘ Set up webhook monitoring
β‘ Configure email notifications
β‘ Add Vietnamese terms of service
β‘ Add refund policy (in Vietnamese)
β‘ Test with multiple banks
β‘ Set up customer support (Vietnamese language)
β‘ Configure invoice generation (VAT if applicable)
β‘ Test subscription renewal flow
β‘ Set up payment reconciliation process
β‘ Add admin dashboard for payment verification
```
## Next Steps
After integration:
```bash
# 1. Integration complete
/integrate:sepay [requirements]
# 2. Register SePay account
# Visit sepay.vn and register
# 3. Add credentials
cp .env.example .env
# Add SEPAY_APP_ID, SEPAY_SECRET_KEY, etc.
# 4. Run migrations
npm run migrate
# 5. Test in sandbox mode
npm run dev
# Make test payment
# 6. Run tests
/test
# 7. Deploy to staging
/deploy [staging]
# 8. Test with real bank (small amount)
# 9. Switch to production credentials
# 10. Deploy to production
/deploy [production]
```
## Related Commands
- [/integrate:polar](/docs/commands/integrate/polar) - International payments
- [/cook](/docs/commands/core/cook) - Implement custom features
- [/test](/docs/commands/core/test) - Run test suite
---
**Key Takeaway**: `/integrate:sepay` provides complete SePay.vn payment integration optimized for the Vietnamese market, with QR codes, bank transfers, VND formatting, Vietnamese localization, and all necessary backend/frontend code, production-ready for Vietnam in minutes.
---
# /plan
Section: docs
Category: commands/plan
URL: https://docs.claudekit.cc/docs/plan/index
# /plan
Intelligent planning router. Analyzes task complexity, asks clarifying questions if needed, and routes to the appropriate planning workflow (`/plan:fast` or `/plan:hard`).
## Syntax
```bash
/plan [task]
```
## When to Use
- **Starting New Features**: Before implementing any significant functionality
- **Complex Tasks**: When unsure whether task is simple or complex
- **Project Kickoff**: When beginning work on a new project or module
- **Architectural Changes**: Before refactoring or restructuring code
## Quick Example
```bash
/plan [add user authentication with OAuth support]
```
**Output**:
```
Analyzing task complexity...
Task: Add user authentication with OAuth support
Complexity Assessment:
- Multiple components: auth service, OAuth providers, sessions
- External dependencies: OAuth configuration, callback handling
- Security considerations: Token storage, CSRF protection
Decision: This task requires research and detailed planning.
→ Routing to /plan:hard
Enhancing prompt with additional context...
Activating planning skill...
[/plan:hard executes with enhanced prompt]
```
**Result**: Complex task routed to `/plan:hard` with enhanced prompt.
## Arguments
- `[task]`: Description of what you want to plan (required)
## What It Does
### 1. Pre-Creation Check
Before creating a new plan, checks for existing active plans:
```
Checking for active plan...
Active plan found: plans/251128-user-api/plan.md
Continue with existing plan? [Y/n]
```
- **Y (default)**: Passes existing plan path to subcommand
- **n**: Creates new plan in `plans/YYMMDD-HHMM-{task-slug}/`
### 2. Complexity Analysis
Evaluates the task against multiple factors:
| Factor | Simple (→ fast) | Complex (→ hard) |
|--------|-----------------|------------------|
| Scope | Single file/module | Multiple systems |
| Dependencies | None or few | External APIs, DBs |
| Research | Not needed | Best practices required |
| Decisions | Clear approach | Multiple valid options |
| Risk | Low impact | Security, data integrity |
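The actual routing logic lives inside the command, but the decision can be approximated as a simple heuristic over the factors in the table above. A sketch for intuition only; the factor names and threshold below are illustrative, not the real implementation:
```typescript
// Illustrative only: approximates how the complexity factors could be scored.
type Route = 'plan:fast' | 'plan:hard';

interface TaskFactors {
  multipleSystems: boolean;      // Scope spans more than one file/module
  externalDependencies: boolean; // External APIs, databases, etc.
  researchNeeded: boolean;       // Best practices must be looked up
  multipleValidOptions: boolean; // No single obvious approach
  highRisk: boolean;             // Security or data-integrity impact
}

function routeTask(factors: TaskFactors): Route {
  const score = Object.values(factors).filter(Boolean).length;
  // Hypothetical threshold: two or more "complex" factors => detailed planning.
  return score >= 2 ? 'plan:hard' : 'plan:fast';
}

// Example: the inventory API walkthrough below scores high and routes to /plan:hard.
console.log(routeTask({
  multipleSystems: true,
  externalDependencies: true,
  researchNeeded: true,
  multipleValidOptions: true,
  highRisk: true,
}));
```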
### 3. Clarification Questions
If requirements are ambiguous, asks for clarification:
```
Before planning, I need to clarify a few things:
1. What authentication methods do you need?
[ ] Email/password
[ ] OAuth (Google, GitHub)
[ ] Magic links
[ ] All of the above
2. Do you need role-based access control (RBAC)?
3. What's the expected user scale?
```
### 4. Routing Decision
Based on analysis, routes to appropriate planner:
**Route to `/plan:fast`**:
- Small, well-defined tasks
- Clear implementation path
- No research needed
- Single component changes
**Route to `/plan:hard`**:
- Complex, multi-component tasks
- Research required
- Multiple valid approaches
- Security or performance considerations
### 5. Prompt Enhancement
Before delegation, enhances your prompt with:
- Project context from `./docs/` (system-architecture, code-standards)
- Codebase patterns from recent files
- Relevant constraints and requirements
- Active plan context (if continuing)
### 6. Skill Activation
Activates the `planning` skill for structured plan creation.
## Workflow Steps
```
Step 1: Check for active plan
  ↓
Step 2: Analyze task complexity
  ↓
Step 3: Ask clarifying questions (if needed)
  ↓
Step 4: Decide: fast vs hard
  ↓
Step 5: Enhance prompt with context
  ↓
Step 6: Execute /plan:fast or /plan:hard
  ↓
Step 7: Activate planning skill
## Complete Example
### Scenario: Planning a New API Endpoint
```bash
/plan [add product inventory management API]
```
**Execution Flow**:
```
Checking for active plan...
No active plan found.
Analyzing task complexity...
Task: Add product inventory management API
Factors analyzed:
✓ Scope: Multiple endpoints (CRUD + inventory tracking)
✓ Dependencies: Database schema changes needed
✓ Research: Inventory patterns (SKU, variants, stock levels)
✓ Decisions: Real-time vs batch inventory updates
✓ Risk: Data integrity for stock counts
Complexity Score: 7/10 (High)
Clarifying questions:
1. Do you need variant support (size, color, etc.)?
> Yes, products have multiple variants
2. Real-time inventory updates or batch processing?
> Real-time for point of sale
3. Multi-warehouse support?
> Single warehouse for now, but design for future multi-warehouse
Decision: Complex task requiring research
→ Routing to /plan:hard
Enhancing prompt...
Added context:
- Existing product model structure
- Database schema patterns
- API conventions from code-standards.md
Executing /plan:hard with enhanced prompt...
Activating planning skill...
[/plan:hard creates detailed implementation plan]
Plan created: plans/251129-inventory-api/plan.md
```
## Routing Examples
### Routes to /plan:fast
```bash
# Simple, clear tasks
/plan [add pagination to products list]
/plan [fix date formatting in dashboard]
/plan [add loading spinner to submit button]
/plan [update error messages in validation]
```
### Routes to /plan:hard
```bash
# Complex, research-heavy tasks
/plan [implement real-time notifications system]
/plan [add multi-tenant support to the platform]
/plan [migrate from REST to GraphQL]
/plan [implement end-to-end encryption for messages]
```
## Active Plan Management
### Continuing Existing Plan
```bash
/plan [add tests for auth module]
```
```
Active plan found: plans/251128-auth-system/plan.md
Phase 2 (testing) not yet started.
Continue with existing plan? [Y/n] Y
Adding test phase to existing plan...
→ Routing to /plan:fast (clear scope within existing plan)
```
### Creating New Plan
```bash
/plan [completely new feature unrelated to current work]
```
```
Active plan found: plans/251128-auth-system/plan.md
Continue with existing plan? [Y/n] n
Creating new plan directory...
→ plans/251129-new-feature/
Analyzing complexity...
```
## Related Commands
| Command | Description | When to Use |
|---------|-------------|-------------|
| [/plan:fast](/docs/commands/plan/fast) | Quick planning without research | Simple, clear tasks |
| [/plan:hard](/docs/commands/plan/hard) | Research-driven detailed planning | Complex tasks |
| [/plan:parallel](/docs/commands/plan/parallel) | Plan with parallel-executable phases | Multi-agent execution |
| [/plan:two](/docs/commands/plan/two) | Compare two implementation approaches | Architecture decisions |
| [/plan:ci](/docs/commands/plan/ci) | Plan based on CI/CD failures | Fixing pipeline issues |
## Best Practices
### Provide Context
```bash
# Good: Specific with constraints
/plan [add search functionality using Elasticsearch, must support fuzzy matching and filters]
# Less helpful: Vague
/plan [add search]
```
### Trust the Router
Let `/plan` decide the complexity:
```bash
# Let it route
/plan [add caching layer]
# Don't pre-decide
/plan:hard [add caching layer] # Might be overkill
```
### Use Active Plans
When working on related tasks, continue existing plans:
```
Continue with existing plan? [Y/n] Y
```
This keeps related work organized in one plan directory.
## Common Issues
### Frequent Hard Routing
**Problem**: Most tasks routing to `/plan:hard`
**Solution**: Break large tasks into smaller pieces
```bash
# Instead of
/plan [build entire e-commerce platform]
# Break down
/plan [add product catalog]
/plan [add shopping cart]
/plan [add checkout flow]
```
### Missed Context
**Problem**: Plan doesn't reflect existing patterns
**Solution**: Ensure `./docs/` is up to date
- `system-architecture.md` - Current architecture
- `code-standards.md` - Coding conventions
---
**Key Takeaway**: `/plan` is your intelligent planning entry point. It analyzes complexity, asks the right questions, and routes to the appropriate planning workflow - so you get just enough planning for each task.
---
# /plan:fast
Section: docs
Category: commands/plan
URL: https://docs.claudekit.cc/docs/plan/fast
# /plan:fast
Quick planning without research. Analyzes codebase and creates implementation plan.
## Syntax
```bash
/plan:fast [task]
```
## How It Works
1. **Pre-Creation Check**: Checks for active plan in `.claude/active-plan`
2. **Codebase Analysis**: Reads `codebase-summary.md`, `code-standards.md`, `system-architecture.md`, `project-overview-pdr.md`
3. **Plan Creation**: Uses `planner` subagent to create implementation plan
4. **User Review**: Asks for approval before proceeding
## Output Structure
```
plans/
└── YYYYMMDD-HHmm-plan-name/
    ├── reports/
    │   └── XX-report.md
    ├── plan.md           # Overview (<80 lines)
    └── phase-XX-name.md  # Phase details
```
## When to Use
- Task is straightforward
- Codebase is well-understood
- No external research needed
- Faster iteration desired
## Comparison
| Command | Research | Speed |
|---------|----------|-------|
| `/plan` | Full research | Slower |
| `/plan:fast` | No research | Faster |
| `/plan:hard` | Deep research | Slowest |
**Important**: Does NOT start implementation. Use `/code` after approval.
---
**Key Takeaway**: Use `/plan:fast` for quick planning when you understand the task and don't need external research.
---
# /plan:hard
Section: docs
Category: commands/plan
URL: https://docs.claudekit.cc/docs/plan/hard
# /plan:hard
Thorough planning with research. Uses multiple researcher agents for comprehensive analysis.
## Syntax
```bash
/plan:hard [task]
```
## How It Works
1. **Pre-Creation Check**: Checks for active plan in `.claude/active-plan`
2. **Research Phase**: Multiple `researcher` agents (max 2) research in parallel
3. **Codebase Analysis**: Reads docs; uses `/scout` if `codebase-summary.md` unavailable or >3 days old
4. **Gather & Plan**: Main agent gathers research/scout reports, passes to `planner` subagent
5. **User Review**: Asks for approval
## Output Structure
```
plans/
└── YYYYMMDD-HHmm-plan-name/
    ├── research/
    │   └── researcher-XX-report.md  # Research findings
    ├── reports/
    │   └── XX-report.md
    ├── scout/
    │   └── scout-XX-report.md
    ├── plan.md                      # Overview (<80 lines)
    └── phase-XX-name.md             # Phase details
```
## Research Limits
- Max 2 researcher agents in parallel
- Max 5 tool calls per researcher
- Reports β€150 lines each
## When to Use
- Unfamiliar technology/library
- Complex architectural decisions
- Need best practices research
- External integrations
## Comparison
| Command | Research | Speed |
|---------|----------|-------|
| `/plan` | Full | Medium |
| `/plan:fast` | None | Fast |
| `/plan:hard` | Deep | Slow |
**Important**: Does NOT start implementation. Use `/code` after approval.
---
**Key Takeaway**: Use `/plan:hard` when you need comprehensive research and detailed planning for complex tasks.
---
# /plan:cro
Section: docs
Category: commands/plan
URL: https://docs.claudekit.cc/docs/plan/cro
# /plan:cro
Create a CRO (Conversion Rate Optimization) plan for given content.
## Syntax
```bash
/plan:cro [issues or URL]
```
## CRO Framework (25 Principles)
1. **Headline 4-U Formula**: Useful, Unique, Urgent, Ultra-specific
2. **Above-Fold Value Proposition**: Customer problem focus
3. **CTA First-Person Psychology**: "Get MY Guide" > "Get YOUR Guide"
4. **5-Field Form Maximum**: Every field kills conversions
5. **Message Match Precision**: Ad copy = landing page headline
6. **Social Proof Near CTAs**: Testimonials with faces/names
7. **Cognitive Bias Stack**: Loss aversion, social proof, anchoring
8. **PAS Copy Framework**: Problem > Agitate > Solve
9. **Genuine Urgency Only**: Real deadlines, no fake timers
10. **Price Anchoring**: Show expensive option first
*... and 15 more principles*
## How It Works
1. **Input Analysis**:
- Screenshots/videos → `ai-multimodal` skill
- URLs → `web_fetch` for content analysis
2. **Scout**: `/scout:ext` (preferred) or `/scout` (fallback) to find related codebase files
3. **Plan Creation**: `planner` agent creates CRO plan with:
- `plan.md` overview (<80 lines)
- `phase-XX-name.md` files with sections: Context links, Overview, Key Insights, Requirements, Architecture, Implementation Steps, Todo list, Success Criteria, Risk Assessment, Security Considerations, Next steps
## Output Structure
```
plans/
└── YYYYMMDD-HHmm-cro-plan/
    ├── plan.md           # Overview
    └── phase-XX-name.md  # Specific optimizations
```
## When to Use
- Landing page optimization
- Checkout flow improvement
- Form conversion analysis
- A/B test planning
- UX audit for conversions
**Important**: Does NOT implement changes. Creates plan for review first.
---
**Key Takeaway**: Use `/plan:cro` for data-driven conversion optimization planning using proven CRO principles.
---
# /plan:ci
Section: docs
Category: commands/plan
URL: https://docs.claudekit.cc/docs/plan/ci
# /plan:ci
Analyze GitHub Actions workflow logs and create a detailed implementation plan to fix CI/CD issues. This command identifies problems, root causes, and provides actionable steps but does NOT implement fixes automatically.
## Syntax
```bash
/plan:ci [github-actions-url]
```
## How It Works
The `/plan:ci` command follows an analytical workflow:
### 1. Log Retrieval
- Fetches GitHub Actions workflow logs via URL or `gh` CLI
- Identifies failed steps and error messages
- Extracts relevant context (job names, timestamps, commands)
- Collects environment information
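The log retrieval above can also be reproduced by hand. A minimal Node sketch that shells out to the `gh` CLI (the run ID is a placeholder, and an authenticated `gh` is assumed in the environment):
```typescript
// Sketch: pull only the failed-step logs for a run so they can be inspected or parsed.
import { execSync } from 'node:child_process';

const runId = '12345'; // placeholder: the run ID from the GitHub Actions URL

// `gh run view --log-failed` prints logs for the failed steps of the run.
const failedLogs = execSync(`gh run view ${runId} --log-failed`, {
  encoding: 'utf8',
  maxBuffer: 10 * 1024 * 1024, // CI logs can be large
});

// Preview the first lines; the full text is what the debugger agent analyzes.
console.log(failedLogs.split('\n').slice(0, 50).join('\n'));
```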
### 2. Error Analysis
Invokes **debugger** agent to:
- Parse error messages and stack traces
- Identify root causes (not just symptoms)
- Categorize issues (build, test, deploy, env, etc.)
- Map dependencies between failures
### 3. Codebase Context
Invokes **scout** agent to:
- Locate relevant workflow files (`.github/workflows/`)
- Find related configuration (package.json, tsconfig.json, etc.)
- Identify affected source files
- Check recent changes (git log)
### 4. Research Solutions
Invokes **researcher** agent to:
- Search for similar CI issues
- Review GitHub Actions documentation
- Check action marketplace for fixes
- Find best practices
### 5. Plan Creation
Creates comprehensive plan including:
- Root cause explanation
- Step-by-step fix instructions
- Files to modify
- Commands to run
- Testing strategy
- Prevention measures
## Examples
### Build Failure Analysis
```bash
/plan:ci https://github.com/user/repo/actions/runs/12345
```
**What happens:**
````
Phase 1: Fetching Logs (10 seconds)
---
Agent: debugger
Using GitHub CLI to fetch logs...
gh run view 12345 --log
Workflow: CI/CD Pipeline
Job: build
Status: Failed
Duration: 2m 15s
Failed step: Build application
Exit code: 1
Error log:
---
Run npm run build
> build
> tsc && vite build
src/components/Dashboard.tsx:45:12 - error TS2339:
Property 'userName' does not exist on type 'User'.
45 return user.userName
~~~~~~~~
Found 1 error in src/components/Dashboard.tsx:45
Error: Process completed with exit code 1.
---
Phase 2: Error Analysis (15 seconds)
---
Agent: debugger
Analyzing error...
Error type: TypeScript compilation error
Root cause: Property 'userName' does not exist on type 'User'
Location: src/components/Dashboard.tsx:45:12
Possible causes:
1. Property renamed from 'userName' to 'username'
2. Type definition outdated
3. Property removed from User type
4. Merge conflict introduced incorrect property name
Checking recent changes...
git log --oneline -10 -- src/types/user.ts
Found: commit abc123 "refactor: rename userName to username"
Root cause confirmed:
User type was refactored but Dashboard.tsx not updated
Phase 3: Codebase Scout (10 seconds)
---
Agent: scout
Locating affected files...
Files with 'userName' references:
✗ src/components/Dashboard.tsx (1 occurrence)
✗ src/components/Profile.tsx (2 occurrences)
✗ src/utils/user-helper.ts (1 occurrence)
Files with 'username' (correct):
✓ src/types/user.ts (type definition)
✓ src/api/auth.ts (API calls)
✓ 15 other files
Inconsistency found:
- 3 files still use old 'userName'
- All should use 'username'
Phase 4: Research (5 seconds)
---
Agent: researcher
Best practices found:
✓ Use automated rename refactoring tools
✓ Add ESLint rule to enforce consistent naming
✓ Run type check before committing
Prevention strategies:
✓ Pre-commit hook with type checking
✓ IDE configuration for consistent naming
✓ Code review checklist item
Phase 5: Plan Created (5 seconds)
---
Agent: planner
Plan saved to: plans/fix-ci-build-12345.md
IMPLEMENTATION PLAN
---
## Root Cause
Property 'userName' was renamed to 'username' in User type
but 3 files were not updated in commit abc123.
## Files to Fix
1. src/components/Dashboard.tsx
Line 45: user.userName → user.username
2. src/components/Profile.tsx
Line 23: user.userName → user.username
Line 67: user.userName → user.username
3. src/utils/user-helper.ts
Line 12: data.userName → data.username
## Step-by-Step Fix
### Step 1: Fix Property References
```bash
# Update Dashboard.tsx
# Change line 45: return user.userName
# To: return user.username
# Update Profile.tsx
# Change line 23 and 67 similarly
# Update user-helper.ts
# Change line 12 similarly
```
### Step 2: Verify Type Checking Locally
```bash
npm run type-check
# Should pass with no errors
```
### Step 3: Run Build Locally
```bash
npm run build
# Should complete successfully
```
### Step 4: Run Tests
```bash
npm test
# Ensure no tests broken by changes
```
### Step 5: Commit and Push
```bash
git add src/components/Dashboard.tsx src/components/Profile.tsx src/utils/user-helper.ts
git commit -m "fix: update userName to username in remaining files"
git push
```
### Step 6: Verify CI Passes
Wait for GitHub Actions to run and verify build succeeds.
## Prevention Measures
### 1. Add Pre-commit Hook
```bash
# .husky/pre-commit
npm run type-check
```
### 2. Add GitHub Actions Type Check Step
```yaml
# .github/workflows/ci.yml
- name: Type check
  run: npm run type-check
```
### 3. Configure VS Code
```json
// .vscode/settings.json
{
"typescript.tsdk": "node_modules/typescript/lib",
"typescript.enablePromptUseWorkspaceTsdk": true
}
```
## Estimated Time
- Fix implementation: 5 minutes
- Testing: 5 minutes
- Total: 10 minutes
## Risk Level: Low
- Straightforward property rename
- Type system catches all instances
- No runtime logic changes
---
✅ Plan complete (45 seconds total)
Next step: Review plan and implement fixes
Use: /code [implement plan from plans/fix-ci-build-12345.md]
````
### Test Failure Analysis
```bash
/plan:ci https://github.com/user/repo/actions/runs/67890
```
**What happens:**
````
Phase 1: Fetching Logs
---
Workflow: Test Suite
Job: test
Status: Failed
Duration: 4m 32s
Failed tests:
---
FAIL src/auth/login.test.ts
✓ should validate email format (25ms)
✓ should require password (18ms)
✗ should authenticate valid credentials (2503ms)
Error: Timeout - Async callback was not invoked within
the 2000ms timeout specified by jest.setTimeout.
FAIL src/api/users.test.ts
✗ should fetch user list (5002ms)
Error: connect ECONNREFUSED 127.0.0.1:5432
---
Phase 2: Error Analysis
---
Two types of errors identified:
ERROR 1: Test Timeout
- Tests taking > 2 seconds
- Async callback not invoked
- Likely cause: API not mocking properly
ERROR 2: Database Connection Refused
- PostgreSQL not running in CI
- Port 5432 connection refused
- Missing service configuration
Root causes:
1. GitHub Actions workflow missing PostgreSQL service
2. Tests not properly mocked (hitting real API)
3. Test timeout too short for integration tests
Phase 3: Plan Created
---
## Root Cause Analysis
### Issue 1: Missing PostgreSQL Service
CI workflow doesn't define PostgreSQL service container.
Integration tests fail with connection refused.
### Issue 2: Unmocked API Calls
Login test making real HTTP request instead of using mocks.
Exceeds 2-second timeout.
## Solution Plan
### Fix 1: Add PostgreSQL Service to CI
Update `.github/workflows/ci.yml`:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
          POSTGRES_DB: testdb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    env:
      DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run migrate:test
      - run: npm test
```
### Fix 2: Mock API Calls in Login Test
Update `src/auth/login.test.ts`:
```typescript
// Before:
it('should authenticate valid credentials', async () => {
  const result = await login(email, password);
  expect(result).toBeDefined();
});

// After:
import { mockApiClient } from '../test-utils/mocks';

it('should authenticate valid credentials', async () => {
  mockApiClient.post.mockResolvedValueOnce({
    data: { token: 'mock-token', user: { id: 1 } }
  });
  const result = await login(email, password);
  expect(result.token).toBe('mock-token');
});
```
### Fix 3: Increase Timeout for Integration Tests
Update `jest.config.js`:
```javascript
module.exports = {
  testTimeout: 10000, // 10 seconds for integration tests
  testMatch: [
    '**/__tests__/**/*.test.ts',
    '**/*.integration.test.ts'
  ],
};
```
## Implementation Steps
1. Update GitHub Actions workflow (1 file)
2. Add API mocks to login test (1 file)
3. Update Jest config (1 file)
4. Run tests locally to verify
5. Commit and push
6. Verify CI passes
## Estimated Time: 15 minutes
## Risk Level: Medium
- Database service config requires careful setup
- Must ensure test environment variables correct
- Migration must run before tests
---
✅ Plan complete
Next: /code [implement CI fixes from plan]
````
### Deployment Failure
```bash
/plan:ci https://github.com/user/repo/actions/runs/11111
```
**What happens:**
````
Phase 1: Fetching Logs
---
Workflow: Deploy to Production
Job: deploy
Status: Failed
Duration: 1m 8s
Error:
---
Run docker build -t myapp:latest .
ERROR [build 5/8] RUN npm ci --only=production
npm ERR! code ENOTFOUND
npm ERR! errno ENOTFOUND
npm ERR! network request to https://registry.npmjs.org/...
npm ERR! network This is a problem related to network connectivity.
Error: Process completed with exit code 1.
---
Phase 2: Analysis
---
Error type: Network error during npm install
Root cause: npm registry unreachable OR rate limited
Contributing factors:
- No npm cache in CI
- Rebuilding dependencies every time
- Potential npm registry rate limiting
Phase 3: Plan Created
---
## Root Cause
npm registry connection failed during Docker build.
No caching strategy = slow builds + rate limit risk.
## Solution Plan
### Fix 1: Add Dependency Caching
Update `.github/workflows/deploy.yml`:
```yaml
- name: Cache node modules
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-
- name: Install dependencies
  run: npm ci --prefer-offline
```
### Fix 2: Add Retry Logic
```yaml
- name: Install dependencies with retry
  uses: nick-invision/retry@v2
  with:
    timeout_minutes: 10
    max_attempts: 3
    command: npm ci
```
### Fix 3: Use BuildKit Cache
Update Dockerfile:
```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Enable BuildKit cache mount
RUN --mount=type=cache,target=/root/.npm \
    npm ci --only=production
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```
### Fix 4: Consider Private Registry Mirror
For enterprise: Set up Verdaccio or Artifactory as cache.
```yaml
env:
  NPM_CONFIG_REGISTRY: https://npm-cache.company.com
```
## Implementation Steps
1. Add caching to workflow
2. Add retry action
3. Update Dockerfile with BuildKit
4. Test build locally
5. Deploy to staging first
6. Verify production deployment
## Estimated Time: 25 minutes
## Prevention
- Monitor npm registry status
- Consider npm Enterprise
- Set up internal registry mirror
- Use lock files (already using package-lock.json ✓)
---
✅ Plan complete
````
## When to Use
### ✅ Use /plan:ci for:
**Build Failures**
```bash
/plan:ci https://github.com/user/repo/actions/runs/12345
```
**Test Failures**
```bash
/plan:ci https://github.com/user/repo/actions/runs/67890
```
**Deployment Issues**
```bash
/plan:ci https://github.com/user/repo/actions/runs/11111
```
**Linting/Type Errors**
```bash
/plan:ci https://github.com/user/repo/actions/runs/22222
```
**Environment Issues**
```bash
/plan:ci https://github.com/user/repo/actions/runs/33333
```
### ❌ Don't use for:
**Local Issues**
- Debug locally first
- Only use for actual CI failures
**Minor Failures Already Fixed**
- If you know the fix, just implement it
## Plan-Only Approach
`/plan:ci` creates a plan but does NOT implement:
**Why plan-only?**
- CI issues need careful review
- Multiple possible solutions
- Environment-specific considerations
- You decide which approach to take
**After getting plan:**
```bash
# Option 1: Implement manually
cat plans/fix-ci-*.md
# Follow steps yourself
# Option 2: Use /code to implement
/code [implement CI fix plan]
# Option 3: Use /fix:ci to auto-implement
/fix:ci https://github.com/user/repo/actions/runs/12345
```
## Plan Structure
Every plan includes:
### 1. Root Cause Analysis
```markdown
## Root Cause
Clear explanation of why CI failed.
Not just symptoms, but underlying issue.
```
### 2. Solution Options
```markdown
## Solution Options
### Option A: Quick Fix
- Pros: Fast, minimal changes
- Cons: May not prevent recurrence
### Option B: Proper Fix
- Pros: Addresses root cause
- Cons: More time, more changes
Recommendation: Option B
```
### 3. Files to Modify
```markdown
## Files to Change
1. .github/workflows/ci.yml
- Add PostgreSQL service
- Configure environment variables
2. jest.config.js
- Increase timeout to 10s
3. src/auth/login.test.ts
- Add API mocks
```
### 4. Step-by-Step Instructions
````markdown
## Implementation Steps
### Step 1: Update Workflow
```yaml
# Exact YAML to add
```
### Step 2: Update Tests
```typescript
// Exact code changes
```
### Step 3: Verify Locally
```bash
npm test
```
````
### 5. Prevention Measures
```markdown
## Prevention
To prevent this issue in future:
- Add pre-commit type checking
- Update CI to run on all branches
- Add test coverage requirements
```
### 6. Risk Assessment
```markdown
## Risk Level: Low/Medium/High
Risks:
- Risk 1: Description
- Risk 2: Description
Mitigation:
- Deploy to staging first
- Run full test suite
```
## Output Files
After `/plan:ci` completes:
### Implementation Plan
```
plans/fix-ci-[run-id]-[date].md
```
Complete plan with all details
### Error Analysis
```
plans/ci-error-analysis-[run-id].md
```
Detailed error breakdown
## Common CI Issues Detected
### Build Errors
- TypeScript compilation errors
- Missing dependencies
- Build script failures
- Environment variable issues
### Test Failures
- Flaky tests
- Missing test dependencies
- Database connection issues
- Timeout errors
- Mock/stub issues
### Deployment Issues
- Docker build failures
- Registry connection issues
- Missing secrets/credentials
- Permission errors
### Environment Issues
- Node version mismatches
- Missing system dependencies
- Service configuration missing
- Port conflicts
## Best Practices
### Provide Full URL
✅ **Good:**
```bash
/plan:ci https://github.com/user/repo/actions/runs/12345
```
❌ **Bad:**
```bash
/plan:ci 12345 # Missing repo context
```
### Review Plan Before Implementing
```bash
# 1. Get plan
/plan:ci [url]
# 2. READ the plan
cat plans/fix-ci-*.md
# 3. Understand the root cause
# 4. Then implement
/code [implement from plan]
```
### Test Locally First
```bash
# Before implementing in CI:
npm run build # Test build
npm test # Test tests
npm run lint # Test linting
# If local works but CI fails:
# Issue is environment-specific
```
## After Getting Plan
Standard workflow:
```bash
# 1. Get analysis and plan
/plan:ci https://github.com/user/repo/actions/runs/12345
# 2. Review plan
cat plans/fix-ci-12345.md
# 3. Choose implementation approach
# Option A: Implement manually
# Follow steps in plan
# Option B: Use /code (recommended - uses existing plan)
/code @plans/fix-ci-12345.md
# Option C: Use /fix:ci (auto-implements)
/fix:ci https://github.com/user/repo/actions/runs/12345
# 4. Test locally first
npm test
npm run build
# 5. Commit and push
/git:cm
# 6. Verify CI passes
# Check GitHub Actions
```
## Next Steps
- [/fix:ci](/docs/commands/fix/ci) - Auto-implement CI fix
- [/code](/docs/commands/core/code) - Implement existing plan
- [/debug](/docs/commands/core/debug) - Debug complex issues
---
**Key Takeaway**: `/plan:ci` analyzes GitHub Actions failures, identifies root causes, and creates detailed implementation plans with step-by-step instructions, giving you full control over how to fix CI/CD issues without automatic implementation.
---
# /plan:two
Section: docs
Category: commands/plan
URL: https://docs.claudekit.cc/docs/plan/two
# /plan:two
Research a task and create two distinct implementation approaches with detailed pros/cons analysis and a clear recommendation. Perfect when you want to evaluate multiple strategies before committing to one.
## Syntax
```bash
/plan:two [task description]
```
## How It Works
The `/plan:two` command follows a comparative planning workflow:
### 1. Requirements Analysis
- Parses task description
- Identifies key objectives
- Determines success criteria
- Recognizes constraints
### 2. Research Phase
Invokes **researcher** agent to:
- Study best practices
- Review similar implementations
- Analyze different architectural patterns
- Research relevant technologies
### 3. Approach Brainstorming
Invokes **planner** agent to:
- Generate multiple possible approaches
- Select two most promising strategies
- Ensure approaches are genuinely different
- Consider trade-offs for each
### 4. Detailed Planning
For each approach creates:
- Architecture/design overview
- Implementation steps
- Technology stack choices
- File structure
- Estimated complexity
- Timeline estimation
### 5. Comparative Analysis
Provides comprehensive comparison:
- Pros and cons for each
- Performance implications
- Maintainability considerations
- Scalability potential
- Development time
- Learning curve
- Cost implications
### 6. Recommendation
Based on analysis provides:
- Recommended approach with clear reasoning
- When to choose alternative approach
- Hybrid possibilities
## Examples
### Authentication System
```bash
/plan:two [implement user authentication system]
```
**What happens:**
```
Phase 1: Requirements Analysis (20 seconds)
---
Agent: planner
Task: Implement user authentication
Key requirements:
✓ User registration and login
✓ Password security
✓ Session management
✓ Token-based auth likely needed
✓ Social auth consideration
✓ Security best practices
Constraints identified:
- Must be secure (top priority)
- Needs to scale
- Should support multiple clients (web, mobile)
Phase 2: Research (45 seconds)
---
Agent: researcher
Authentication patterns analyzed:
✓ Session-based authentication
✓ Token-based (JWT)
✓ OAuth2 / OpenID Connect
✓ Passwordless authentication
✓ Multi-factor authentication
Industry standards reviewed:
✓ OWASP authentication guidelines
✓ OAuth 2.0 specification
✓ JWT best practices
✓ Password hashing (bcrypt, argon2)
Phase 3: Approach Generation (30 seconds)
---
Agent: planner
Generated 4 possible approaches:
1. Traditional session-based with cookies
2. JWT tokens with refresh token rotation
3. OAuth2 with third-party providers only
4. Hybrid: JWT + OAuth2
Selected for detailed planning:
✓ Approach 1: JWT Token-Based
✓ Approach 2: Session-Based with Redis
(Most applicable to typical use cases)
Phase 4: Detailed Plans Created (60 seconds)
---
APPROACH 1: JWT Token-Based Authentication
==========================================
## Architecture Overview
Client → API → JWT Verification → Route Handler
                                        ↓
                        Database (users, refresh_tokens)
Access token: Short-lived (15 minutes)
Refresh token: Long-lived (7 days)
Storage: Refresh tokens in database
Stateless: No session storage needed
## Technology Stack
- JWT library: jsonwebtoken
- Password hashing: bcrypt or argon2
- Token storage: PostgreSQL (refresh tokens only)
- Middleware: Custom JWT verification
- Security: httpOnly cookies for tokens
## File Structure
src/
├── auth/
│   ├── auth.controller.ts
│   ├── auth.service.ts
│   ├── jwt.service.ts
│   ├── password.service.ts
│   └── auth.middleware.ts
├── models/
│   ├── user.model.ts
│   └── refresh-token.model.ts
└── routes/
    └── auth.routes.ts
Database tables:
- users (id, email, password_hash, created_at)
- refresh_tokens (id, user_id, token, expires_at)
## Implementation Steps
1. Set up JWT configuration
- Generate secret keys (access + refresh)
- Configure expiry times
- Set up token signing/verification
2. Create User model
- Email validation
- Password hashing on save
- Password comparison method
3. Implement Registration
- POST /auth/register
- Validate input
- Hash password
- Create user
- Return tokens
4. Implement Login
- POST /auth/login
- Validate credentials
- Generate access token
- Generate refresh token
- Store refresh token in DB
- Return tokens in httpOnly cookies
5. Implement Token Refresh
- POST /auth/refresh
- Verify refresh token
- Check if valid in DB
- Generate new access token
- Rotate refresh token
- Return new tokens
6. Create Auth Middleware
- Extract access token from cookie
- Verify signature
- Decode payload
- Attach user to request
- Handle token expiry
7. Implement Logout
- POST /auth/logout
- Invalidate refresh token in DB
- Clear cookies
## Pros
✅ Stateless - No server session storage needed
✅ Scalable - Works across multiple servers
✅ Mobile-friendly - Tokens easy to store
✅ CORS-friendly - No session cookie issues
✅ Microservices-ready - Token can be verified independently
✅ Fast - No DB lookup for every request (just verify token)
✅ Modern - Industry standard for APIs
## Cons
❌ Token size - JWT tokens are larger than session IDs
❌ Can't invalidate - Access tokens valid until expiry
❌ Security concerns - If stolen, valid until expiry
❌ Refresh token management - Need DB for refresh tokens
❌ More complex - Token rotation, expiry handling
❌ Client responsibility - Client must handle token refresh
## Estimated Complexity
Medium - Standard pattern but requires careful implementation
## Timeline
- Implementation: 2 days
- Testing: 1 day
- Security audit: 0.5 days
- Total: 3.5 days
---
APPROACH 2: Session-Based with Redis
====================================
## Architecture Overview
Client → API → Session Check → Route Handler
                    ↓               ↓
                  Redis          Database
               (sessions)         (users)
Session cookie: httpOnly, secure
Session storage: Redis (fast lookup)
Session data: User ID, role, metadata
Expiry: 7 days (sliding window)
## Technology Stack
- Session management: express-session
- Session store: connect-redis
- Redis client: ioredis
- Password hashing: bcrypt
- Cookie parser: cookie-parser
## File Structure
src/
├── auth/
│   ├── auth.controller.ts
│   ├── auth.service.ts
│   ├── password.service.ts
│   └── session.middleware.ts
├── config/
│   ├── redis.config.ts
│   └── session.config.ts
├── models/
│   └── user.model.ts
└── routes/
    └── auth.routes.ts
External:
- Redis server (session storage)
Database tables:
- users (id, email, password_hash, created_at)
## Implementation Steps
1. Set up Redis
- Install Redis server
- Configure connection
- Set up session store
2. Configure Express Session
- Session secret
- Cookie options (httpOnly, secure, sameSite)
- Redis store integration
- Session duration
3. Create User model
- Email validation
- Password hashing
- Password comparison
4. Implement Registration
- POST /auth/register
- Validate input
- Hash password
- Create user
- Create session
- Return success
5. Implement Login
- POST /auth/login
- Validate credentials
- Create session in Redis
- Set session cookie
- Return success
6. Create Auth Middleware
- Check session exists
- Verify session not expired
- Load user data from session
- Attach user to request
- Update session expiry (sliding window)
7. Implement Logout
- POST /auth/logout
- Destroy session in Redis
- Clear session cookie
- Return success
## Pros
✅ Simple - Well-understood pattern
✅ Immediate invalidation - Logout kills session instantly
✅ Server control - Full control over sessions
✅ Smaller cookies - Just session ID
✅ Less client complexity - Client just sends cookie
✅ Mature ecosystem - Many battle-tested libraries
✅ Sliding expiry - Session extends with activity
## Cons
❌ Stateful - Requires session storage (Redis)
❌ Redis dependency - Another service to manage
❌ Scaling complexity - Need shared Redis for multiple servers
❌ Memory cost - Session data stored in Redis
❌ CORS complexity - Cookie-based, requires proper CORS setup
❌ Mobile apps - Cookies less natural on mobile
❌ Microservices - Each service needs Redis access
## Estimated Complexity
Low-Medium - Simpler implementation, but requires Redis
## Timeline
- Implementation: 1.5 days
- Redis setup: 0.5 days
- Testing: 1 day
- Total: 3 days
---
Phase 5: Comparative Analysis
---
COMPARISON MATRIX
=================
| Aspect | JWT (Approach 1) | Sessions (Approach 2) |
|---------------------|---------------------|-----------------------|
| Scalability         | Excellent           | Good                  |
| Security            | Good                | Excellent             |
| Simplicity          | Moderate            | Simple                |
| Mobile Support      | Excellent           | Moderate              |
| Logout Control      | Limited             | Immediate             |
| Performance         | Fast                | Good                  |
| Microservices Fit   | Perfect             | Challenging           |
| Infrastructure      | Minimal             | Needs Redis           |
| CORS Complexity     | Simple              | Complex               |
| Token Revocation    | Complex             | Simple                |
## Performance Considerations
JWT:
- Auth check: ~0.5ms (token verification only)
- No DB/Redis lookup per request
- Scales horizontally easily
Sessions:
- Auth check: ~2-5ms (Redis lookup)
- Network call to Redis per request
- Redis becomes bottleneck at scale
## Security Considerations
JWT:
- Risk: If access token stolen, valid until expiry
- Mitigation: Short expiry (15 min), refresh rotation
- Best for: Public APIs, mobile apps
Sessions:
- Risk: Session fixation attacks
- Mitigation: Regenerate session ID on login
- Best for: Web apps, admin panels
## Scalability
JWT:
- Horizontal scaling: ✅ Perfect
- Load balancing: ✅ No sticky sessions needed
- Microservices: ✅ Each service can verify
Sessions:
- Horizontal scaling: ⚠️ Needs shared Redis
- Load balancing: ⚠️ Redis must be accessible
- Microservices: ⚠️ Centralized session store
Phase 6: Recommendation
---
RECOMMENDED: Approach 1 (JWT Token-Based)
=========================================
## Why JWT?
For most modern applications, JWT is recommended because:
1. **Scalability**: No shared session storage needed
2. **Mobile-first**: Works naturally with mobile apps
3. **Microservices**: Easy to verify tokens independently
4. **API-friendly**: Standard pattern for REST APIs
5. **Modern stack**: Aligns with industry trends
## When to Choose Sessions Instead
Use Approach 2 (Sessions) if:
1. **Web-only application**: No mobile app needed
2. **High security requirements**: Need immediate logout
3. **Small scale**: Single server or small cluster
4. **Simple infrastructure**: Don't want token complexity
5. **Admin dashboard**: Internal tools with tight control
## Hybrid Approach
For maximum flexibility:
1. Use JWT for mobile app and public API
2. Use sessions for admin web dashboard
3. Separate auth flows for different clients
## Implementation Priority
Phase 1: Basic JWT authentication (Week 1)
Phase 2: Refresh token rotation (Week 1)
Phase 3: Security hardening (Week 2)
Phase 4: Multi-factor auth (Future)
Phase 5: Social OAuth (Future)
✅ Planning complete (2 minutes 35 seconds)
Plans saved:
- plans/auth-approach-1-jwt.md
- plans/auth-approach-2-sessions.md
- plans/auth-comparison.md
```
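Step 6 of Approach 1 above (the auth middleware) is the piece most implementations get wrong, so here is a minimal sketch using Express and the `jsonwebtoken` package named in that stack. The cookie name, secret variable, and claim layout are assumptions, and `cookie-parser` is assumed to be mounted so `req.cookies` is populated:
```typescript
import type { Request, Response, NextFunction } from 'express';
import jwt from 'jsonwebtoken';

export function requireAuth(req: Request, res: Response, next: NextFunction) {
  // Assumes the access token is stored in an httpOnly cookie named "access_token".
  const token: string | undefined = req.cookies?.access_token;
  if (!token) {
    return res.status(401).json({ error: 'Not authenticated' });
  }

  try {
    const payload = jwt.verify(token, process.env.JWT_ACCESS_SECRET!);
    if (typeof payload === 'string') {
      return res.status(401).json({ error: 'Unexpected token format' });
    }
    // Assumed claim layout: `sub` carries the user id set at login.
    (req as Request & { userId?: string }).userId = payload.sub;
    return next();
  } catch {
    // Expired or tampered token: the client should call POST /auth/refresh.
    return res.status(401).json({ error: 'Token expired or invalid' });
  }
}
```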
### State Management
```bash
/plan:two [implement global state management for React app]
```
**What happens:**
````
Phase 1-2: Analysis & Research (60 seconds)
---
Task: Global state management for React app
Research findings:
✓ Redux - Most popular, mature
✓ Zustand - Modern, simple
✓ Context API - Built-in
✓ Jotai - Atomic state
✓ MobX - Observable-based
Phase 3-4: Two Approaches Selected
---
APPROACH 1: Zustand (Modern & Simple)
======================================
## Overview
Lightweight state management with hooks-based API.
No boilerplate, minimal concepts, easy to learn.
## Why Zustand?
- Bundle size: 1.2KB (vs Redux 12KB)
- No providers/context needed
- TypeScript-friendly
- DevTools support
- Middleware support
## Implementation
```typescript
// stores/user.store.ts
import create from 'zustand';

interface UserState {
  user: User | null;
  setUser: (user: User) => void;
  logout: () => void;
}

export const useUserStore = create<UserState>((set) => ({
  user: null,
  setUser: (user) => set({ user }),
  logout: () => set({ user: null }),
}));

// Usage in component
function Profile() {
  const { user, logout } = useUserStore();
  return (
    <div>
      <span>{user?.name}</span>
      <button onClick={logout}>Logout</button>
    </div>
  );
}
```
## File Structure
```
src/
├── stores/
│   ├── user.store.ts
│   ├── cart.store.ts
│   └── ui.store.ts
└── hooks/
    └── useStore.ts (optional helpers)
```
## Pros
✅ Minimal boilerplate
✅ Small bundle size
✅ Easy to learn (<1 hour)
✅ Great TypeScript support
✅ No provider hell
✅ Can split stores by domain
## Cons
❌ Less ecosystem than Redux
❌ No time-travel debugging
❌ Smaller community
❌ Less opinionated (can be pro or con)
Timeline: 0.5 days setup + learning
---
APPROACH 2: Redux Toolkit (Industry Standard)
=============================================
## Overview
Redux with modern API, less boilerplate.
Battle-tested, huge ecosystem, opinionated.
## Why Redux Toolkit?
- Industry standard (most jobs use it)
- Massive ecosystem
- Redux DevTools (time-travel debugging)
- Middleware (thunks, sagas, etc.)
- Immer for immutable updates
## Implementation
```typescript
// store/userSlice.ts
import { createSlice, PayloadAction } from '@reduxjs/toolkit';

interface UserState {
  user: User | null;
}

const userSlice = createSlice({
  name: 'user',
  initialState: { user: null } as UserState,
  reducers: {
    setUser: (state, action: PayloadAction<User>) => {
      state.user = action.payload;
    },
    logout: (state) => {
      state.user = null;
    },
  },
});

export const { setUser, logout } = userSlice.actions;
export default userSlice.reducer;

// store/index.ts
import { configureStore } from '@reduxjs/toolkit';
import userReducer from './userSlice';

export const store = configureStore({
  reducer: {
    user: userReducer,
  },
});

export type RootState = ReturnType<typeof store.getState>;

// Usage
import { useSelector, useDispatch } from 'react-redux';

function Profile() {
  const user = useSelector((state: RootState) => state.user.user);
  const dispatch = useDispatch();
  const handleLogout = () => dispatch(logout());
  return (
    <div>
      <span>{user?.name}</span>
      <button onClick={handleLogout}>Logout</button>
    </div>
  );
}
```
## File Structure
```
src/
├── store/
│   ├── index.ts
│   ├── userSlice.ts
│   ├── cartSlice.ts
│   └── uiSlice.ts
└── App.tsx (wrap with Provider)
```
## Pros
✅ Industry standard
✅ Huge ecosystem
✅ Amazing DevTools
✅ Time-travel debugging
✅ Opinionated (clear patterns)
✅ Great for large teams
## Cons
❌ Larger bundle (~12KB)
❌ More boilerplate
❌ Steeper learning curve
❌ Provider required
❌ Overkill for small apps
Timeline: 1-2 days setup + learning
---
COMPARISON
==========
| Aspect | Zustand | Redux Toolkit |
|------------------|-------------------|-------------------|
| Learning Curve | 1 hour | 1-2 days |
| Bundle Size | 1.2KB | 12KB |
| Boilerplate | Minimal | Moderate |
| DevTools | Basic | Advanced |
| Ecosystem | Small | Huge |
| Job Market | Growing | Standard |
| Team Size | Small/Medium | Any size |
| App Complexity | Simple/Medium | Any complexity |
RECOMMENDATION: Zustand
=======================
For most new React apps, choose Zustand because:
1. Faster to implement (save days of dev time)
2. Easier to maintain (less code)
3. Better performance (smaller bundle)
4. Sufficient for 80% of apps
Choose Redux Toolkit if:
1. Team already knows Redux
2. Need Redux ecosystem (sagas, etc.)
3. Very large app (100+ components)
4. Need advanced debugging
5. Want industry standard for resume
✅ Plans complete
````
## When to Use
### ✅ Use /plan:two for:
**Architecture Decisions**
```bash
/plan:two [choose database: SQL vs NoSQL]
```
**Technology Selection**
```bash
/plan:two [implement caching: Redis vs Memcached]
```
**Design Patterns**
```bash
/plan:two [API design: REST vs GraphQL]
```
**Implementation Strategies**
```bash
/plan:two [file uploads: direct S3 vs presigned URLs]
```
### ❌ Don't use when:
**Only One Obvious Way**
- If one approach clearly superior, just plan that
**Already Decided**
- Use `/plan` if you know which approach to use
**Need More Than Two Options**
- /plan:two specifically compares two approaches
## Comparison Aspects
Every comparison includes:
### 1. Technical Comparison
- Performance implications
- Scalability considerations
- Security aspects
- Maintainability
### 2. Development Comparison
- Implementation complexity
- Development time
- Testing difficulty
- Learning curve
### 3. Operational Comparison
- Infrastructure requirements
- Operating costs
- Monitoring needs
- Deployment complexity
### 4. Team Comparison
- Skill requirements
- Onboarding time
- Documentation needs
- Support availability
## Output Files
After `/plan:two` completes:
### Approach 1 Plan
```
plans/[task]-approach-1-[name].md
```
Complete implementation plan for first approach
### Approach 2 Plan
```
plans/[task]-approach-2-[name].md
```
Complete implementation plan for second approach
### Comparison Document
```
plans/[task]-comparison.md
```
Side-by-side comparison with recommendation
## Best Practices
### Provide Clear Context
✅ **Good:**
```bash
/plan:two [implement real-time features for chat app with 10k concurrent users]
```
❌ **Vague:**
```bash
/plan:two [add real-time]
```
### Specify Constraints
```bash
/plan:two [implement search: full-text search vs vector search, max budget $100/month, need sub-100ms response]
```
## After Getting Plans
Standard workflow:
```bash
# 1. Get two approaches
/plan:two [task]
# 2. Review both plans
cat plans/[task]-approach-1-*.md
cat plans/[task]-approach-2-*.md
# 3. Review comparison
cat plans/[task]-comparison.md
# 4. Make decision
# Consider: team skills, timeline, budget, requirements
# 5. Implement chosen approach (use /code since plan exists)
/code @plans/[task]-approach-1-*.md
# OR
/code @plans/[task]-approach-2-*.md
# 6. Optionally: Hybrid
/code @plans/[task]-approach-1-*.md # then manually integrate from approach 2
```
## Next Steps
- [/plan](/docs/commands/core/plan) - Single approach planning
- [/code](/docs/commands/core/code) - Implement chosen plan
- [/plan:cro](/docs/commands/plan/cro) - CRO-specific planning
---
**Key Takeaway**: `/plan:two` creates two distinct implementation approaches with detailed pros/cons analysis and a clear recommendation, helping you make informed architectural and technical decisions before committing to implementation.
---
# /plan:parallel
Section: docs
Category: commands/plan
URL: https://docs.claudekit.cc/docs/plan/parallel
# /plan:parallel
Parallel execution planning command. Creates implementation plans with phases that can run simultaneously, using dependency graphs and file ownership boundaries to prevent conflicts.
## Syntax
```bash
/plan:parallel [task]
```
## When to Use
- **Independent Modules**: Features that don't share files
- **Multi-Agent Execution**: When you want parallel agent execution
- **Large Features**: Breaking work into concurrent streams
- **Team Parallelization**: Multiple developers working simultaneously
## Quick Example
```bash
/plan:parallel [implement user dashboard with auth, analytics, and settings modules]
```
**Output**:
```
Creating parallel execution plan...
## Dependency Graph
Phase 1: Auth Module (no deps)
Phase 2: Analytics Module (no deps)
Phase 3: Settings Module (no deps)
Phase 4: Dashboard Integration (depends on 1, 2, 3)
## Parallelization Strategy
Wave 1 (Parallel): Phase 1 + Phase 2 + Phase 3
Wave 2 (Sequential): Phase 4
## File Ownership Matrix
Phase 1: src/auth/*, src/middleware/auth.ts
Phase 2: src/analytics/*, src/hooks/useAnalytics.ts
Phase 3: src/settings/*, src/hooks/useSettings.ts
Phase 4: src/dashboard/*, src/pages/dashboard.tsx
Plan saved: plans/251129-dashboard/plan.md
Execute with: /code:parallel
```
## Arguments
- `[task]`: Description of multi-component feature to plan (required)
## Key Concepts
### Dependency Graph
Shows which phases depend on other phases:
```
Phase 1 (Auth) ─────────┐
                        ├──→ Phase 4 (Integration)
Phase 2 (Analytics) ────┤
                        │
Phase 3 (Settings) ─────┘
```
- **No deps**: Phase can start immediately
- **Depends on X**: Must wait for X to complete
- **Parallel**: Multiple phases with no shared dependencies
### File Ownership Matrix
Assigns exclusive file access to each phase:
```
Phase 1: src/auth/**
Phase 2: src/analytics/**
Phase 3: src/settings/**
Phase 4: src/dashboard/**, src/app.tsx
```
**Why File Ownership?**
- Prevents merge conflicts
- Enables true parallel execution
- Clear boundaries for agents
- No coordination needed between parallel phases
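One way to make these boundaries enforceable before launching agents is a tiny pre-flight check that no two phases claim overlapping paths. A sketch only; the phase/path data structure below is illustrative and not part of the generated plan format:
```typescript
// Sketch: detect overlapping ownership between phases using simple prefix matching.
interface PhaseOwnership {
  phase: string;
  paths: string[]; // e.g. "src/auth/" owns everything underneath it
}

function findConflicts(phases: PhaseOwnership[]): string[] {
  const conflicts: string[] = [];
  for (let i = 0; i < phases.length; i++) {
    for (let j = i + 1; j < phases.length; j++) {
      for (const a of phases[i].paths) {
        for (const b of phases[j].paths) {
          if (a.startsWith(b) || b.startsWith(a)) {
            conflicts.push(`${phases[i].phase} and ${phases[j].phase} overlap on ${a.length > b.length ? a : b}`);
          }
        }
      }
    }
  }
  return conflicts;
}

// Example using the ownership matrix above: expected to report no conflicts.
console.log(findConflicts([
  { phase: 'Phase 1', paths: ['src/auth/'] },
  { phase: 'Phase 2', paths: ['src/analytics/'] },
  { phase: 'Phase 4', paths: ['src/dashboard/', 'src/app.tsx'] },
]));
```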
### Parallelization Strategy
Groups phases into execution waves:
```
Wave 1 (Parallel):
├── Phase 1: Auth Module
├── Phase 2: Analytics Module
└── Phase 3: Settings Module
Wave 2 (Sequential):
└── Phase 4: Dashboard Integration
```
- **Parallel wave**: All phases run simultaneously
- **Sequential wave**: Waits for previous wave to complete
## Plan Structure
Generated plans include these sections:
````markdown
# Plan: [Task Name]
## Overview
Brief description of the implementation
## Dependency Graph
```mermaid
graph TD
P1[Phase 1: Auth] --> P4[Phase 4: Integration]
P2[Phase 2: Analytics] --> P4
P3[Phase 3: Settings] --> P4
```
## File Ownership Matrix
| Phase | Owned Files | Shared (Read-Only) |
|-------|-------------|-------------------|
| Phase 1 | src/auth/* | src/types/* |
| Phase 2 | src/analytics/* | src/types/* |
| Phase 3 | src/settings/* | src/types/* |
| Phase 4 | src/dashboard/* | All above |
## Parallelization Strategy
### Wave 1 (Parallel Execution)
- Phase 1, Phase 2, Phase 3 can run simultaneously
- No dependencies between these phases
- Estimated time: 2 hours (longest phase)
### Wave 2 (Sequential)
- Phase 4 requires all Wave 1 phases complete
- Integrates all modules
- Estimated time: 1 hour
## Phases
### Phase 1: Auth Module
[Detailed implementation steps]
### Phase 2: Analytics Module
[Detailed implementation steps]
...
````
## Complete Example
### Scenario: E-commerce Checkout System
```bash
/plan:parallel [build checkout system with cart, payment, shipping, and order confirmation]
```
**Generated Plan**:
````markdown
# Plan: E-commerce Checkout System
## Overview
Multi-module checkout system with independent cart, payment,
shipping calculation, and order confirmation components.
## Dependency Graph
```
Cart ───────────────────┐
                        │
Payment Gateway ────────┼──→ Order Processing ──→ Confirmation
                        │
Shipping Calculator ────┘
```
## File Ownership Matrix
| Phase | Exclusive Files | Notes |
|-------|-----------------|-------|
| Phase 1: Cart | src/cart/*, src/hooks/useCart.ts | Cart state management |
| Phase 2: Payment | src/payment/*, src/hooks/usePayment.ts | Stripe integration |
| Phase 3: Shipping | src/shipping/*, src/hooks/useShipping.ts | Rate calculation |
| Phase 4: Orders | src/orders/*, src/hooks/useOrder.ts | Order creation |
| Phase 5: UI | src/checkout/*, src/pages/checkout.tsx | Final integration |
**Shared (Read-Only)**:
- src/types/* - Type definitions
- src/utils/* - Utility functions
- src/config/* - Configuration
## Parallelization Strategy
### Wave 1: Independent Modules (Parallel)
**Phase 1: Cart Module** [2 agents]
- Cart state management (Zustand)
- Add/remove/update items
- Cart persistence (localStorage)
- Cart summary calculations
**Phase 2: Payment Gateway** [1 agent]
- Stripe integration
- Payment intent creation
- Card validation
- Error handling
**Phase 3: Shipping Calculator** [1 agent]
- Address validation
- Rate calculation API
- Shipping method selection
- Delivery estimation
**Execution**: All three phases run simultaneously
**Duration**: ~2 hours (determined by longest phase)
### Wave 2: Order Processing (Sequential)
**Phase 4: Order Processing** [1 agent]
- Depends on: Phase 1, 2, 3
- Order creation service
- Inventory deduction
- Payment capture
- Shipping label generation
**Execution**: Starts after Wave 1 complete
**Duration**: ~1.5 hours
### Wave 3: UI Integration (Sequential)
**Phase 5: Checkout UI** [1 agent]
- Depends on: Phase 4
- Multi-step checkout flow
- Form validation
- Order review page
- Confirmation page
**Execution**: Starts after Wave 2 complete
**Duration**: ~1 hour
## Execution Command
```bash
# Execute all phases with parallel support
/code:parallel @plans/251129-checkout/plan.md
```
## Success Criteria
- [ ] Cart operations work independently
- [ ] Payment processes without cart dependency
- [ ] Shipping calculates before payment
- [ ] Order combines all modules
- [ ] No file conflicts between phases
````
## Use Cases
### Independent Feature Modules
```bash
/plan:parallel [build admin panel with user management, content management, and analytics]
```
Three independent modules, one integration phase.
### Microservices Setup
```bash
/plan:parallel [create microservices for auth, products, orders, and notifications]
```
Each service is a separate phase with its own file ownership.
### Component Library
```bash
/plan:parallel [build UI component library with buttons, forms, modals, and tables]
```
Each component category is an independent phase.
### API Endpoint Groups
```bash
/plan:parallel [implement REST API with users, products, orders, and search endpoints]
```
Each endpoint group can be developed in parallel.
## Execution
After creating a parallel plan, execute with:
```bash
# Execute with parallel agent support
/code:parallel @plans/251129-{task}/plan.md
```
The `/code:parallel` command:
1. Reads dependency graph
2. Launches parallel agents for Wave 1
3. Waits for wave completion
4. Proceeds to next wave
5. Repeats until all waves complete
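Conceptually, that loop looks like the sketch below. It shows the scheduling idea only; the real `/code:parallel` implementation, agent API, and plan format are not exposed, so the types and the `runPhase` callback here are assumptions:
```typescript
// Sketch: run phases wave by wave, executing each wave's phases concurrently.
interface Phase {
  name: string;
  dependsOn: string[];
}

async function runWaves(phases: Phase[], runPhase: (name: string) => Promise<void>) {
  const done = new Set<string>();
  let remaining = [...phases];

  while (remaining.length > 0) {
    // A phase is ready when all of its dependencies have completed.
    const wave = remaining.filter((p) => p.dependsOn.every((d) => done.has(d)));
    if (wave.length === 0) throw new Error('Dependency cycle detected');

    // Parallel execution within the wave; wait for the whole wave before moving on.
    await Promise.all(wave.map((p) => runPhase(p.name)));
    wave.forEach((p) => done.add(p.name));
    remaining = remaining.filter((p) => !done.has(p.name));
  }
}
```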
## Best Practices
### Clear Module Boundaries
Define tasks with natural separation:
```bash
# Good: Clear boundaries
/plan:parallel [build auth service, payment service, and notification service]
# Harder to parallelize
/plan:parallel [build user profile with settings and preferences]
```
### Minimize Shared Files
Keep shared files to read-only types/utils:
```
✅ Good structure:
src/auth/       # Phase 1 owns
src/payment/    # Phase 2 owns
src/types/      # Shared, read-only

❌ Problematic:
src/services/auth.ts     # Phase 1
src/services/payment.ts  # Phase 2
src/services/index.ts    # Both need to modify!
```
### Right-Size Phases
Balance parallel gains vs. coordination overhead:
```bash
# Too granular (overhead > benefit)
/plan:parallel [implement login button, logout button, profile button]
# Better granularity
/plan:parallel [implement auth UI, dashboard UI, settings UI]
```
## Related Commands
- [/plan](/docs/commands/plan) - Intelligent planning router
- [/plan:hard](/docs/commands/plan/hard) - Sequential detailed planning
- [/code:parallel](/docs/commands/code/parallel) - Execute parallel plans
- [/bootstrap:auto:parallel](/docs/commands/bootstrap/auto-parallel) - Bootstrap with parallel execution
- [/cook:auto:parallel](/docs/commands/cook/auto-parallel) - Cook with parallel agents
---
**Key Takeaway**: `/plan:parallel` creates plans optimized for concurrent execution by defining dependency graphs and file ownership boundaries, enabling faster implementation through parallel agent execution.
---
# /review:codebase
Section: docs
Category: commands/review
URL: https://docs.claudekit.cc/docs/review/codebase
# /review:codebase
Comprehensive codebase scan and analysis with research, planning, and code review.
## Syntax
```bash
/review:codebase [tasks-or-prompt]
```
## Role
Elite software engineering expert specializing in system architecture and technical decisions. Operates by:
- **YAGNI**: You Aren't Gonna Need It
- **KISS**: Keep It Simple, Stupid
- **DRY**: Don't Repeat Yourself
## Workflow
### 1. Research Phase
- Activates relevant skills from catalog
- Uses 2 `researcher` subagents in parallel (max 5 sources each)
- Research reports: β€150 lines, concise
- Uses `/scout:ext` (preferred) or `/scout` (fallback) for codebase search
### 2. Code Review Phase
- Multiple `code-reviewer` subagents in parallel
- Checks for: issues, duplicate code, security vulnerabilities
- If issues found β improve code β repeat testing until pass
### 3. Planning Phase
Uses `planner` subagent to create improvement plan:
```
plans/YYYYMMDD-HHmm-plan-name/
├── plan.md           # Overview (<80 lines)
└── phase-XX-name.md  # Phase details with sections:
                      #   Context links, Overview (date/priority/status),
                      #   Key Insights, Requirements, Architecture,
                      #   Related code files, Implementation Steps,
                      #   Todo list, Success Criteria, Risk Assessment,
                      #   Security Considerations, Next steps
```
### 4. Final Report
- Summary of changes with brief explanations
- Guide for getting started
- Next steps suggestions
- Option to commit/push via `git-manager` subagent
## Key Features
- **Image generation**: Uses `ai-multimodal` skill on the fly
- **Image analysis**: Verifies generated assets meet requirements
- **Image editing**: ImageMagick for background removal, cropping, adjusting
## Writing Style
- Sacrifices grammar for concision in reports
- Lists unresolved questions at end of reports
## When to Use
- Comprehensive codebase analysis
- Architecture review
- Security audit
- Pre-refactoring assessment
- Technical debt evaluation
---
**Key Takeaway**: Use `/review:codebase` for comprehensive codebase analysis combining research, code review, and planning phases.
---
# /skill:create
Section: docs
Category: commands/skill
URL: https://docs.claudekit.cc/docs/skill/create
# /skill:create
Create new agent skills that extend Claude's capabilities with specialized knowledge, workflows, or tool integrations. This command follows a comprehensive 4-phase process: research, implementation, review, and evaluation.
## Syntax
```bash
/skill:create [prompt-or-llms-or-github-url]
```
## Input Types
### 1. Natural Language Prompt
```bash
/skill:create [create skill for MongoDB database operations]
```
### 2. llms.txt URL
```bash
/skill:create https://docs.polar.sh/llms.txt
```
### 3. GitHub Repository
```bash
/skill:create https://github.com/cloudflare/workers-sdk
```
## How It Works
The `/skill:create` command follows a 4-phase workflow:
### Phase 1: Research (2-3 minutes)
Invokes **researcher** agent to:
- Fetch documentation (llms.txt, GitHub, web)
- Analyze domain knowledge required
- Identify key concepts and patterns
- Map out skill structure
- Determine MCP integration needs
- Research best practices
### Phase 2: Implementation (3-5 minutes)
Invokes **skill-creator** agent to:
- Write skill prompt file
- Structure knowledge sections
- Create usage examples
- Define when to use skill
- Add tool integration instructions
- Include best practices
- Format in markdown
### Phase 3: Review (1-2 minutes)
Invokes **code-reviewer** agent to:
- Validate skill structure
- Check completeness
- Verify examples work
- Ensure clear instructions
- Validate formatting
- Suggest improvements
### Phase 4: Evaluation (1 minute)
Invokes **tester** agent to:
- Test skill activation
- Verify skill prompt loads
- Check examples are clear
- Validate tool integrations
- Test edge cases
- Generate usage report
## Examples
### Create MongoDB Skill
```bash
/skill:create [create skill for MongoDB operations including CRUD, aggregations, and indexing]
```
**What happens:**
```
Phase 1: Research (2 minutes 15 seconds)
---
Agent: researcher
Gathering information...
MongoDB documentation analyzed:
β CRUD operations
β Aggregation pipelines
β Indexing strategies
β Schema design patterns
β Performance optimization
β Replication & sharding
β Security best practices
MCP server check:
β Found: @modelcontextprotocol/server-mongodb
β Available tools: query, insert, update, delete, aggregate
β Connection requirements: MongoDB URI
Best practices identified:
β Connection pooling
β Error handling patterns
β Query optimization
β Index selection
β Schema validation
Phase 2: Implementation (3 minutes 45 seconds)
---
Agent: skill-creator
Creating skill file...
Structure:
1. Skill Overview
2. When to Use This Skill
3. Prerequisites
4. MongoDB Operations Guide
- CRUD operations
- Aggregation pipelines
- Indexing
- Schema design
- Performance optimization
5. MCP Integration
6. Examples
7. Best Practices
8. Common Patterns
9. Troubleshooting
File created:
.claude/skills/mongodb.md (2,847 words)
Key sections included:
β Clear use cases
β Step-by-step operations
β Code examples (15+)
β Aggregation pipeline examples
β Index creation patterns
β MCP tool usage
β Error handling
β Performance tips
Phase 3: Review (1 minute 30 seconds)
---
Agent: code-reviewer
Reviewing skill quality...
Structure: ✓ Excellent
- Clear sections
- Logical flow
- Good hierarchy
Completeness: ✓ Comprehensive
- All CRUD operations covered
- Aggregation examples included
- Indexing strategies explained
- MCP integration documented
Examples: ✓ High Quality
- 15 working examples
- Various complexity levels
- Real-world scenarios
- Copy-paste ready
Clarity: ✓ Clear
- Technical but accessible
- Step-by-step instructions
- Good use of code blocks
Suggestions:
1. Add section on transactions
2. Include change streams example
3. Add troubleshooting for connection issues
Applying suggestions...
β Transactions section added
β Change streams example included
β Troubleshooting expanded
Phase 4: Evaluation (45 seconds)
---
Agent: tester
Testing skill...
Activation test:
β Skill loads correctly
β No syntax errors
β Markdown formatting valid
Example validation:
β All 17 examples use correct syntax
β MongoDB queries valid
β MCP tool calls properly formatted
Tool integration:
β MCP server referenced correctly
β Connection instructions clear
β Authentication covered
Documentation quality:
β Professional tone
β Technical accuracy verified
β Easy to follow
Score: 9.2/10
Areas of excellence:
+ Comprehensive CRUD coverage
+ Excellent aggregation examples
+ Clear MCP integration
+ Good troubleshooting section
Minor improvements needed:
- Add more replica set examples
- Include sharding guidance
β Skill creation complete (8 minutes 15 seconds)
Summary:
---
β Skill created: .claude/skills/mongodb.md
β 2,847 words
β 17 examples
β 8 major sections
β MCP integration included
β Reviewed and tested
β Quality score: 9.2/10
Next steps:
1. Review skill: cat .claude/skills/mongodb.md
2. Test skill: /ask [mongodb question]
3. Iterate if needed: /skill:fix-logs
```
### Create Skill from llms.txt
```bash
/skill:create https://docs.polar.sh/llms.txt
```
**What happens:**
```
Phase 1: Research (1 minute 30 seconds)
---
Agent: researcher
Fetching llms.txt...
URL: https://docs.polar.sh/llms.txt
Content retrieved:
- 5,249 lines
- Polar.sh payment platform
- Subscription management
- Webhook handling
- Customer portal
- API documentation
Analyzing structure...
β Clear API sections
β Code examples included
β Authentication documented
β Webhooks explained
Phase 2: Implementation (4 minutes)
---
Agent: skill-creator
Creating Polar.sh skill...
Skill structure:
1. Overview (what Polar.sh does)
2. When to use this skill
3. Prerequisites (API keys)
4. Core Features
- Subscriptions
- One-time payments
- Customer portal
- Webhooks
5. Implementation Guide
- Setup
- Creating checkouts
- Webhook handling
- Customer management
6. Code Examples (SDK usage)
7. Best Practices
8. Security Guidelines
9. Testing
10. Troubleshooting
File created:
.claude/skills/polar.md (3,124 words)
β 23 code examples extracted from llms.txt
β API endpoints documented
β Webhook events listed
β Security best practices added
Phase 3: Review (1 minute 20 seconds)
---
Agent: code-reviewer
Review findings:
Strengths:
β Comprehensive API coverage
β Real code examples from docs
β Security well documented
β Clear integration steps
Enhancements applied:
+ Added pricing tier examples
+ Included webhook signature verification
+ Added troubleshooting section
+ Improved error handling examples
Phase 4: Evaluation (50 seconds)
---
Agent: tester
Evaluation results:
β All API examples valid
β Webhook handling comprehensive
β Security practices sound
β Documentation clear
Score: 9.4/10
β Skill ready to use (7 minutes 40 seconds)
```
### Create Skill from GitHub
```bash
/skill:create https://github.com/cloudflare/workers-sdk
```
**What happens:**
```
Phase 1: Research (2 minutes 45 seconds)
---
Agent: researcher
Analyzing GitHub repository...
Repo: cloudflare/workers-sdk
Using Repomix to analyze codebase...
β 847 files analyzed
β Documentation extracted
β README.md processed
β Examples folder scanned
β API patterns identified
Key findings:
- Cloudflare Workers SDK
- Wrangler CLI tool
- Workers API documentation
- D1 database integration
- KV storage operations
- Durable Objects
- R2 storage
Phase 2: Implementation (5 minutes)
---
Agent: skill-creator
Creating Cloudflare Workers skill...
Skill sections:
1. Cloudflare Workers Overview
2. When to Use This Skill
3. Wrangler CLI Guide
4. Worker Development
- Basic workers
- Request handling
- Environment variables
5. Cloudflare Services
- D1 (SQLite)
- KV (Key-Value)
- R2 (Object Storage)
- Durable Objects
6. Deployment
7. Testing & Debugging
8. Best Practices
9. Common Patterns
10. Troubleshooting
File created:
.claude/skills/cloudflare-workers.md (4,126 words)
β 31 code examples
β CLI commands documented
β Service integration guides
β Deployment workflow
Phase 3-4: Review & Evaluation
---
β Review complete
β Examples validated
β CLI commands tested
Score: 9.1/10
β Skill creation complete (10 minutes)
```
## Skill File Structure
Every created skill follows this structure:
```markdown
# [Skill Name]
[Brief description of what the skill helps with]
## When to Use This Skill
Use this skill when:
- [Specific use case 1]
- [Specific use case 2]
- [Specific use case 3]
## Prerequisites
- [Requirement 1]
- [Requirement 2]
## [Main Knowledge Section 1]
### [Subsection]
[Detailed information with examples]
```code
// Working code example
```
## [Main Knowledge Section 2]
[More content...]
## MCP Integration (if applicable)
[How to use MCP tools with this skill]
## Examples
### Example 1: [Scenario]
```code
// Complete working example
```
### Example 2: [Scenario]
```code
// Another example
```
## Best Practices
- [Practice 1]
- [Practice 2]
## Common Patterns
[Reusable patterns and solutions]
## Troubleshooting
**Problem:** [Common issue]
**Solution:** [How to fix]
```
## Skill Quality Criteria
Skills are evaluated on:
### 1. Completeness (25%)
- All major topics covered
- Edge cases addressed
- Error handling included
- Security considerations
### 2. Clarity (25%)
- Clear explanations
- Logical structure
- Easy to follow
- Good formatting
### 3. Examples (25%)
- Working code examples
- Various complexity levels
- Real-world scenarios
- Copy-paste ready
### 4. Usefulness (25%)
- Practical application
- Time-saving
- Reduces errors
- Fills knowledge gap
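Taken together, the equal 25% weights above mean the overall score behaves like a simple average of the four criteria. A quick sketch with made-up sub-scores (the evaluator's actual formula isn't documented here):
```bash
# Illustrative only: equal-weighted average of four hypothetical sub-scores
completeness=9.5; clarity=9.0; examples=9.0; usefulness=9.3
echo "scale=1; ($completeness + $clarity + $examples + $usefulness) / 4" | bc
# prints 9.2
```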
## MCP Integration
When a skill requires tools, MCP integration is included:
```markdown
## MCP Server Integration
This skill works best with the [tool-name] MCP server.
### Installation
```bash
npm install @modelcontextprotocol/server-[name]
```
### Configuration
Add to claude_desktop_config.json:
```json
{
"mcpServers": {
"[name]": {
"command": "node",
"args": ["path/to/server"],
"env": {
"API_KEY": "your-key"
}
}
}
}
```
### Available Tools
- `tool_name_1` - Description
- `tool_name_2` - Description
```
## Best Practices for Skill Creation
### Provide Detailed Input
✅ **Good:**
```bash
/skill:create [create skill for implementing WebSocket real-time features including connection management, reconnection logic, message queuing, and scaling with Redis pub/sub]
```
❌ **Vague:**
```bash
/skill:create [websockets]
```
### Use Official Documentation
```bash
# Best: Official llms.txt
/skill:create https://docs.service.com/llms.txt
# Good: Official GitHub
/skill:create https://github.com/official/repo
# OK: Natural language (but less comprehensive)
/skill:create [create skill for X]
```
### Specify Domain if Broad
```bash
# Too broad
/skill:create [Python]
# Better
/skill:create [Python async programming with asyncio]
# Best
/skill:create [Python async programming with asyncio including event loops, coroutines, async/await patterns, and concurrent task management]
```
## After Skill Creation
Standard workflow:
```bash
# 1. Create skill
/skill:create [description or URL]
# 2. Review skill
cat .claude/skills/[skill-name].md
# 3. Test skill
/ask [question related to skill]
# Skill should automatically activate
# 4. If issues found
/skill:fix-logs .claude/skills/[skill-name].md
# 5. Iterate until satisfied
# 6. Commit skill
/git:cm
```
## Output Files
After `/skill:create` completes:
### Skill File
```
.claude/skills/[skill-name].md
```
Complete skill prompt ready to use
### Research Report
```
plans/skill-research-[name]-[date].md
```
Research findings and analysis
### Evaluation Report
```
plans/skill-evaluation-[name]-[date].md
```
Quality score and recommendations
## Common Use Cases
### 1. Framework/Library Skills
```bash
/skill:create https://docs.nextjs.org/llms.txt
/skill:create https://github.com/remix-run/remix
/skill:create [create skill for TailwindCSS]
```
### 2. Cloud Platform Skills
```bash
/skill:create [create skill for AWS Lambda deployment]
/skill:create https://github.com/cloudflare/workers-sdk
/skill:create [Azure Functions development]
```
### 3. Database Skills
```bash
/skill:create [PostgreSQL query optimization]
/skill:create https://github.com/mongodb/mongo
/skill:create [Redis caching patterns]
```
### 4. Tool Integration Skills
```bash
/skill:create [Docker containerization best practices]
/skill:create [GitHub Actions CI/CD]
/skill:create [Kubernetes deployment]
```
### 5. Domain-Specific Skills
```bash
/skill:create [payment processing with Stripe]
/skill:create [email marketing with SendGrid]
/skill:create [analytics with Google Analytics]
```
## Skill Activation
Once created, skills activate automatically:
```bash
# Skill: mongodb.md exists
# Ask MongoDB question
/ask [how to create compound index in MongoDB]
# System automatically:
1. Detects question relates to MongoDB
2. Loads mongodb.md skill
3. Uses skill knowledge to answer
4. Provides accurate, detailed response
```
## Troubleshooting
### Skill Not Activating
**Problem:** Skill exists but doesn't activate
**Check:**
```bash
# 1. Verify skill exists
ls .claude/skills/
# 2. Check skill format
cat .claude/skills/[name].md
# Should have clear "When to Use" section
# 3. Fix if needed
/skill:fix-logs .claude/skills/[name].md
```
### Low Quality Score
**Problem:** Skill scores below 8.0
**Solution:**
```bash
# Review evaluation report
cat plans/skill-evaluation-*.md
# Address issues mentioned
# Re-create with more detail
/skill:create [more detailed prompt]
```
### Missing Information
**Problem:** Skill incomplete
**Solution:**
```bash
# Add more context to input
/skill:create [original prompt + specific areas to cover]
# Or manually edit skill file
# Then validate
/skill:fix-logs .claude/skills/[name].md
```
## Next Steps
After creating skills:
- [/skill:fix-logs](/docs/commands/skill/fix-logs) - Fix skill based on logs
- [/ask](/docs/commands/core/ask) - Use skill to answer questions
- [Skill Creator Guide](/docs/guides/creating-skills) - Advanced skill creation
---
**Key Takeaway**: `/skill:create` generates comprehensive agent skills through a 4-phase process (research, implementation, review, evaluation) from natural language prompts, llms.txt URLs, or GitHub repositories, extending Claude's capabilities with specialized domain knowledge and tool integrations.
---
# /skill:fix-logs
Section: docs
Category: commands/skill
URL: https://docs.claudekit.cc/docs/skill/fix-logs
# /skill:fix-logs
Fix agent skills based on errors and issues found in `logs.txt`. This command analyzes skill failures, identifies problems, updates the skill prompt, and validates fixes.
## Syntax
```bash
/skill:fix-logs [skill-path]
```
## Input Types
### 1. Skill File Path
```bash
/skill:fix-logs .claude/skills/mongodb.md
```
### 2. Natural Language (finds skill automatically)
```bash
/skill:fix-logs [fix MongoDB skill based on logs]
```
## How It Works
The `/skill:fix-logs` command follows a diagnostic workflow:
### 1. Log Analysis
- Reads `logs.txt` file
- Identifies skill-related errors
- Extracts error messages and context
- Categorizes issues (syntax, logic, missing info, etc.)
- Maps errors to skill sections
### 2. Skill Diagnosis
Invokes **debugger** agent to:
- Analyze current skill content
- Compare with error patterns
- Identify root causes
- Determine missing information
- Find incorrect examples
- Locate unclear instructions
### 3. Fix Planning
Creates fix strategy:
- List issues to address
- Plan content additions
- Identify sections to rewrite
- Determine examples to fix
- Map tool integration corrections
### 4. Skill Update
Invokes **skill-creator** agent to:
- Update problematic sections
- Add missing information
- Fix code examples
- Clarify instructions
- Improve tool integration docs
- Enhance error handling guidance
### 5. Validation
Invokes **tester** agent to:
- Verify fixes address logged issues
- Test updated examples
- Validate skill structure
- Check completeness
- Generate validation report
## Examples
### Fix Skill After Usage Errors
```bash
/skill:fix-logs .claude/skills/polar.md
```
**logs.txt content:**
```
[2025-10-30 10:23:45] ERROR: Polar skill - webhook signature verification failed
User asked: "How do I verify Polar webhooks?"
Skill response: Used generic webhook verification, not Polar-specific
[2025-10-30 10:28:12] ERROR: Polar skill - incorrect SDK import
User asked: "How to import Polar SDK?"
Skill suggested: import { Polar } from 'polar-sh'
Actual import: import { Polar } from '@polar-sh/sdk'
[2025-10-30 10:35:33] ERROR: Polar skill - missing subscription cancellation example
User asked: "How to cancel subscription?"
Skill response: Generic answer, no code example
[2025-10-30 10:42:18] ERROR: Polar skill - webhook events incomplete
User asked: "What webhook events does Polar send?"
Skill missing: subscription.canceled, payment.refunded events
```
**What happens:**
```
Phase 1: Log Analysis (20 seconds)
---
Agent: debugger
Reading logs.txt...
Errors found: 4
Skill: polar.md
Error categorization:
1. INCORRECT EXAMPLE - Webhook verification (Line 156)
Issue: Generic example, not Polar-specific
Impact: Users get non-working code
Severity: High
2. INCORRECT IMPORT - SDK import (Line 45)
Issue: Wrong package name
Impact: Import fails, users confused
Severity: Critical
3. MISSING CONTENT - Subscription cancellation (Section 4)
Issue: No code example provided
Impact: Users don't know how to cancel
Severity: Medium
4. INCOMPLETE LIST - Webhook events (Section 7)
Issue: Missing 2 events
Impact: Users don't handle all events
Severity: Medium
Phase 2: Skill Diagnosis (30 seconds)
---
Agent: debugger
Analyzing .claude/skills/polar.md...
Current state:
- 3,124 words
- 23 code examples
- 10 sections
Issues confirmed:
Issue 1: Line 156
Current:
```typescript
const isValid = verifyWebhookSignature(
req.body,
signature,
WEBHOOK_SECRET
);
```
Should be:
```typescript
const isValid = polar.webhooks.verify(
req.body,
signature,
POLAR_WEBHOOK_SECRET
);
```
Issue 2: Line 45
Current:
```typescript
import { Polar } from 'polar-sh';
```
Should be:
```typescript
import { Polar } from '@polar-sh/sdk';
```
Issue 3: Section 4
Missing: Subscription cancellation example
Need: Complete code with explanation
Issue 4: Section 7
Current: 8 webhook events
Missing: subscription.canceled, payment.refunded
Phase 3: Fix Planning (15 seconds)
---
Agent: planner
Fix plan created:
1. Update webhook verification (Line 156)
- Replace generic code
- Add Polar-specific example
- Include error handling
2. Fix SDK import (Line 45)
- Correct package name
- Add installation command
- Verify all imports
3. Add subscription cancellation (Section 4)
- Create new subsection
- Add complete example
- Explain cancel_at_period_end
- Show immediate vs end-of-period
4. Complete webhook events (Section 7)
- Add subscription.canceled
- Add payment.refunded
- Update event handler examples
5. Add validation
- Test all code examples
- Verify imports work
- Check webhook list complete
Phase 4: Skill Update (2 minutes)
---
Agent: skill-creator
Applying fixes...
Fix 1: Webhook Verification β
---
Before:
```typescript
const isValid = verifyWebhookSignature(
req.body,
signature,
WEBHOOK_SECRET
);
```
After:
```typescript
import { Polar } from '@polar-sh/sdk';
const polar = new Polar(process.env.POLAR_SECRET_KEY);
// Verify webhook signature
const isValid = polar.webhooks.verify(
req.body,
req.headers['polar-signature'] as string,
process.env.POLAR_WEBHOOK_SECRET
);
if (!isValid) {
return res.status(401).json({ error: 'Invalid signature' });
}
```
Fix 2: SDK Import β
---
Before:
```typescript
import { Polar } from 'polar-sh';
```
After:
```typescript
// Install SDK
// npm install @polar-sh/sdk
import { Polar } from '@polar-sh/sdk';
const polar = new Polar(process.env.POLAR_SECRET_KEY);
```
Fix 3: Subscription Cancellation Added β
---
New section added to "Subscription Management":
### Cancel Subscription
```typescript
// Cancel at end of current period (recommended)
await polar.subscriptions.cancel(subscriptionId, {
cancelAtPeriodEnd: true
});
// User retains access until period ends
// No refund issued
// Cancel immediately (rare, usually for refunds)
await polar.subscriptions.cancel(subscriptionId, {
cancelAtPeriodEnd: false
});
// Access revoked immediately
// May issue prorated refund based on your settings
```
**When to use each:**
Cancel at period end:
- User requests cancellation
- Want to provide full service for paid period
- No refund needed
Cancel immediately:
- Fraud detected
- Terms of service violation
- User requests refund
Fix 4: Webhook Events Completed β
---
Added to webhook events list:
```typescript
switch (event.type) {
case 'subscription.created':
// User subscribed
await activateSubscription(event.data);
break;
case 'subscription.updated':
// Subscription changed (tier, status)
await updateSubscription(event.data);
break;
case 'subscription.canceled': // ← ADDED
// User canceled subscription
await handleCancellation(event.data);
break;
case 'payment.succeeded':
// Payment successful
await recordPayment(event.data);
break;
case 'payment.failed':
// Payment failed
await handleFailedPayment(event.data);
break;
case 'payment.refunded': // ← ADDED
// Payment refunded
await processRefund(event.data);
break;
// ... other events
}
```
Additional improvements:
---
β Updated installation instructions
β Added error handling examples
β Improved code comments
β Fixed 3 other minor typos
β Enhanced troubleshooting section
Phase 5: Validation (45 seconds)
---
Agent: tester
Validating fixes...
Issue 1: Webhook verification β
- Code syntax valid
- Polar SDK method correct
- Error handling included
- Example complete
Issue 2: SDK import β
- Package name correct
- Installation command added
- Import statement valid
Issue 3: Subscription cancellation β
- Both methods documented
- Clear when to use each
- Code examples work
- Error handling included
Issue 4: Webhook events β
- All events now listed
- Handlers for new events
- Event data types documented
Additional validation:
β All 25 code examples verified
β No syntax errors
β Imports consistent
β TypeScript types correct
β All issues resolved (3 minutes 50 seconds)
Summary:
---
Issues fixed: 4/4 (100%)
Sections updated: 3
Examples added: 2
Examples fixed: 3
Lines changed: 87
Quality improvement: 9.2 → 9.8
Files updated:
β .claude/skills/polar.md
Next steps:
1. Test skill with original questions
2. Verify all logged errors resolved
3. Archive logs.txt
```
### Fix Skill with Missing Documentation
```bash
/skill:fix-logs .claude/skills/cloudflare-workers.md
```
**logs.txt content:**
```
[2025-10-30 11:15:22] ERROR: Cloudflare Workers skill - no D1 migration example
User asked: "How to create D1 database migrations?"
Skill response: Generic database migration, not D1-specific
[2025-10-30 11:22:45] ERROR: Cloudflare Workers skill - KV namespace binding unclear
User asked: "How to bind KV namespace in wrangler.toml?"
Skill response: Incomplete, missing environment-specific bindings
[2025-10-30 11:31:18] ERROR: Cloudflare Workers skill - Durable Objects not explained
User asked: "When to use Durable Objects vs KV?"
Skill response: Mentioned both but didn't explain trade-offs
```
**What happens:**
```
Phase 1: Log Analysis
---
3 errors identified
All related to incomplete documentation
Issues:
1. D1 migrations - No wrangler commands
2. KV bindings - Missing env-specific config
3. Durable Objects - Trade-offs not explained
Phase 2-4: Diagnosis & Fixes
---
Fix 1: Add D1 Migration Guide
β Added section with wrangler commands
β Included migration file examples
β Explained migration workflow
Fix 2: Clarify KV Bindings
β Added complete wrangler.toml example
β Showed dev vs production bindings
β Explained namespace naming
Fix 3: Add Durable Objects Comparison
β Created comparison table
β Explained when to use each
β Added decision tree
β Included example use cases
Phase 5: Validation
---
β All examples tested
β wrangler commands verified
β Configuration examples valid
Quality: 9.1 → 9.7
```
### Fix Multiple Skills
```bash
/skill:fix-logs [fix all skills based on logs]
```
**What happens:**
```
Phase 1: Log Analysis
---
Analyzing logs.txt...
Errors found: 12
Skills affected: 4
- polar.md (4 errors)
- mongodb.md (3 errors)
- cloudflare-workers.md (3 errors)
- nextjs.md (2 errors)
Processing each skill...
Phase 2-5: Fix Each Skill
---
Fixing polar.md... β (3 min 50 sec)
Fixing mongodb.md... β (2 min 15 sec)
Fixing cloudflare-workers.md... β (3 min 05 sec)
Fixing nextjs.md... β (1 min 30 sec)
Total time: 10 minutes 40 seconds
All issues resolved: 12/12
Archive logs? (y/n) y
β Moved to logs.txt.2025-10-30.bak
```
## Log File Format
For best results, `logs.txt` should contain:
```
[TIMESTAMP] ERROR: Skill-name - Brief description
User asked: "Actual user question"
Skill response: What went wrong
Expected: What should have happened (optional)
[TIMESTAMP] ERROR: Skill-name - Another issue
...
```
Example:
```
[2025-10-30 14:23:45] ERROR: MongoDB skill - aggregation pipeline syntax
User asked: "How to do $lookup with multiple conditions?"
Skill response: Showed basic $lookup, didn't explain multiple conditions
Expected: Should show $lookup with $match in pipeline
[2025-10-30 14:28:12] ERROR: MongoDB skill - missing index types
User asked: "What's the difference between single and compound indexes?"
Skill response: Only mentioned compound indexes
Expected: Should explain both types with examples
```
## Common Issues Fixed
### 1. Incorrect Code Examples
**Before:**
```typescript
// Wrong import
import { Client } from 'service';
```
**After:**
```typescript
// Correct import with installation
// npm install @service/sdk
import { Client } from '@service/sdk';
```
### 2. Missing Information
**Before:**
```markdown
## Authentication
Use API keys for authentication.
```
**After:**
```markdown
## Authentication
### Getting API Keys
1. Sign up at service.com
2. Go to Settings → API Keys
3. Click "Create New Key"
4. Copy key (shown only once)
### Using API Keys
```typescript
const client = new Client({
apiKey: process.env.SERVICE_API_KEY
});
```
Store API key in `.env`:
```
SERVICE_API_KEY=sk_live_...
```
```
### 3. Unclear Instructions
**Before:**
```markdown
Configure webhooks in dashboard.
```
**After:**
```markdown
### Configure Webhooks
1. Navigate to Dashboard → Settings → Webhooks
2. Click "Add Endpoint"
3. Enter your webhook URL: `https://your-api.com/webhooks/service`
4. Select events to receive:
   - ✅ `payment.succeeded`
   - ✅ `payment.failed`
   - ✅ `subscription.created`
5. Click "Create Endpoint"
6. Copy webhook secret for signature verification
### Verify Webhook Signatures
```typescript
import { verifySignature } from '@service/sdk';
app.post('/webhooks/service', (req, res) => {
const signature = req.headers['service-signature'];
if (!verifySignature(req.body, signature, WEBHOOK_SECRET)) {
return res.status(401).json({ error: 'Invalid signature' });
}
// Process webhook...
});
```
```
## Fix Categories
### Syntax Errors
- Incorrect imports
- Wrong API calls
- Invalid configurations
- Type errors
### Logic Errors
- Incorrect algorithms
- Wrong patterns
- Flawed examples
- Incomplete flows
### Missing Content
- No examples
- Missing sections
- Incomplete lists
- No error handling
### Clarity Issues
- Unclear instructions
- Vague explanations
- Missing context
- Poor structure
## Best Practices
### Regular Log Review
```bash
# After using skills, check logs
cat logs.txt
# If errors found, fix immediately
/skill:fix-logs [skill-path]
# Archive old logs
mv logs.txt logs.txt.$(date +%Y%m%d).bak
```
### Detailed Error Logging
When errors occur, log them well:
```
✅ Good log entry:
[2025-10-30 10:23:45] ERROR: Polar skill - webhook verification
User asked: "How do I verify Polar webhooks?"
Skill response: Used verifyWebhookSignature() function
Issue: Function doesn't exist in Polar SDK
Expected: Should use polar.webhooks.verify()
❌ Poor log entry:
Webhook verification wrong
```
### Test After Fixes
```bash
# 1. Fix skill
/skill:fix-logs .claude/skills/polar.md
# 2. Test with original question
/ask [how do I verify Polar webhooks]
# 3. Verify response correct
# 4. If still wrong, check logs again
```
## Output Files
After `/skill:fix-logs` completes:
### Updated Skill
```
.claude/skills/[skill-name].md
```
Fixed and improved
### Fix Report
```
plans/skill-fix-[name]-[date].md
```
Details of what was fixed
### Validation Report
```
plans/skill-validation-[name]-[date].md
```
Verification of fixes
## Troubleshooting
### No Logs File
**Problem:** logs.txt doesn't exist
**Solution:**
```bash
# Create logs.txt manually
touch logs.txt
# Add errors in format shown above
# Then run fix
/skill:fix-logs [skill-path]
```
### Fixes Don't Work
**Problem:** Applied fixes but issues persist
**Check:**
```bash
# 1. Verify skill file updated
cat .claude/skills/[name].md
# 2. Test skill activation
/ask [test question]
# 3. Check if question triggers skill
# Skill should activate based on "When to Use" section
# 4. If not activating, update triggers
```
### Can't Find Issue in Skill
**Problem:** Log mentions issue but skill seems correct
**Solution:**
```bash
# Provide more context
/skill:fix-logs [skill-path with detailed description of issue from logs]
# Or manually search
grep -n "keyword" .claude/skills/[name].md
```
## After Fixing
Standard workflow:
```bash
# 1. Fix skill
/skill:fix-logs .claude/skills/[name].md
# 2. Review changes
git diff .claude/skills/[name].md
# 3. Test skill
/ask [original failing question]
# 4. If satisfied, commit
/git:cm
# 5. Archive logs
mv logs.txt logs.txt.$(date +%Y%m%d).bak
```
## Next Steps
- [/skill:create](/docs/commands/skill/create) - Create new skills
- [/ask](/docs/commands/core/ask) - Use fixed skills
- [Skill Development Guide](/docs/guides/skill-development) - Advanced topics
---
**Key Takeaway**: `/skill:fix-logs` analyzes `logs.txt` to identify skill failures, diagnoses root causes, applies targeted fixes to skill documentation, and validates corrections, keeping your agent skills accurate and effective based on real usage errors.
---
# /skill:add
Section: docs
Category: commands/skill
URL: https://docs.claudekit.cc/docs/skill/add
# /skill:add
Extend existing skills with new references or executable scripts. Uses progressive disclosure to optimize token usage while expanding skill capabilities.
## Syntax
```bash
/skill:add [skill-name] [reference-or-script-prompt]
```
## When to Use
- **Adding Documentation**: Include new docs, guides, or API references
- **Adding Scripts**: Create executable tools for the skill
- **Extending Capabilities**: Add new reference materials
- **Integration**: Connect external resources to skills
## Quick Example
```bash
/skill:add better-auth https://better-auth.dev/docs/api
```
**Output**:
```
Validating skill: better-auth
Found: $HOME/.claude/skills/better-auth/
Analyzing reference: https://better-auth.dev/docs/api
Type: URL (documentation)
Fetching content...
Extracted: 45 API endpoints, 12 configuration options
Creating reference: references/api-docs.md
Applying progressive disclosure structure...
✓ Reference added to better-auth skill
Token impact: +2,400 tokens (loaded on-demand)
```
## Arguments
- `[skill-name]`: Target skill name (must exist in `$HOME/.claude/skills/`)
- `[reference-or-script-prompt]`: URL, file path, or script description
## Reference Types
### URLs
Web documentation, blog posts, GitHub repos:
```bash
# Official docs
/skill:add nextjs https://nextjs.org/docs/app/api-reference
# GitHub repo
/skill:add my-skill https://github.com/org/library
# Blog post
/skill:add react-patterns https://example.com/react-best-practices
```
Processing:
- Fetches via WebFetch
- Extracts relevant content
- Summarizes key information
- Creates reference file
### Files
Local markdown, code samples:
```bash
# Local markdown
/skill:add my-skill /path/to/reference.md
# Code sample
/skill:add my-skill /path/to/example.ts
```
Processing:
- Reads file content
- Validates format
- Integrates into skill structure
### Scripts
Executable tools (bash, python, node):
```bash
# Script from description
/skill:add my-skill "script that validates API responses against OpenAPI spec"
# Script from template
/skill:add my-skill "bash script to run database migrations"
```
Processing:
- Creates executable script
- Adds to `scripts/` directory
- Sets appropriate permissions
## What It Does
### Step 1: Validate Skill
```
Checking skill existence...
Found: $HOME/.claude/skills/better-auth/
├── prompt.md
├── references/
└── scripts/
```
### Step 2: Analyze Reference Type
```
Input: https://better-auth.dev/docs/api
Detected: URL
Content type: API documentation
```
### Step 3: Process Reference
**For URLs**:
```
Fetching: https://better-auth.dev/docs/api
Status: 200 OK
Content: 15KB markdown
Extracting:
- API endpoints: 45
- Configuration: 12 options
- Examples: 8 code blocks
```
**For Files**:
```
Reading: /path/to/reference.md
Size: 8KB
Format: Markdown (valid)
```
**For Scripts**:
```
Creating script: validate-api-response.sh
Language: bash
Dependencies: jq, curl
```
### Step 4: Apply Progressive Disclosure
```
Structuring for token efficiency...
Core (always loaded):
- Skill name and description
- Key capabilities summary
References (on-demand):
- api-docs.md: Loaded when API questions asked
- examples.md: Loaded when examples needed
Scripts (executed when needed):
- validate-api.sh: Called for validation tasks
```
### Step 5: Test Activation
```
Testing skill activation...
Skill: better-auth
Status: ✓ Loads correctly
New references: ✓ Accessible
```
## Progressive Disclosure
Token-efficient loading strategy:
```
┌─────────────────────────────┐
│ prompt.md (Core)            │ ← Always loaded (~500 tokens)
├─────────────────────────────┤
│ references/                 │
│ ├── api-docs.md             │ ← On-demand (~2,400 tokens)
│ ├── examples.md             │ ← On-demand (~1,200 tokens)
│ └── config-guide.md         │ ← On-demand (~800 tokens)
├─────────────────────────────┤
│ scripts/                    │
│ ├── validate-api.sh         │ ← Executed when needed
│ └── generate-types.py       │ ← Executed when needed
└─────────────────────────────┘
```
**Benefits**:
- Core always available
- References loaded only when relevant
- Scripts executed only when needed
- Significant token savings
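Concretely, the core file stays short and simply points to the on-demand material by path. A minimal sketch of what such a core file might contain (wording illustrative; file names taken from the diagram above):
```markdown
# Better Auth
Helps with Better Auth setup, authentication flows, and session management.

## When to Use This Skill
- Configuring Better Auth in a web application
- Debugging authentication or session issues

## References (loaded on demand)
- `references/api-docs.md` - full API reference
- `references/examples.md` - end-to-end examples
- `references/config-guide.md` - configuration guide

## Scripts
- `scripts/validate-api.sh` - validate API responses
- `scripts/generate-types.py` - generate types from the API
```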
## Complete Example
### Scenario: Adding API Reference to Authentication Skill
```bash
/skill:add better-auth https://better-auth.dev/docs/authentication
```
**Execution**:
```
Validating skill: better-auth
Found at: $HOME/.claude/skills/better-auth/
Current structure:
├── prompt.md (core instructions)
├── references/
│   └── getting-started.md
└── scripts/
    └── check-config.sh
Analyzing reference...
URL: https://better-auth.dev/docs/authentication
Type: Documentation
Topic: Authentication flows
Fetching content...
Downloaded: 12KB
Processing: Extracting key sections
Creating reference file...
Path: references/authentication-flows.md
Content:
- Email/password flow
- OAuth providers setup
- Session management
- Token handling
- MFA configuration
Applying progressive disclosure...
- Added to references index
- Configured on-demand loading
- Updated skill metadata
Testing activation...
✓ Skill loads correctly
✓ New reference accessible
✓ Progressive disclosure working
Summary:
- Reference added: authentication-flows.md
- Token impact: +1,800 (on-demand only)
- Total skill size: 4,500 tokens potential
- Typical load: 500 tokens (core only)
```
## Examples
### Adding URL Reference
```bash
/skill:add prisma https://www.prisma.io/docs/reference/api-reference
```
Adds Prisma API reference for database queries.
### Adding Local File
```bash
/skill:add my-framework ./docs/internal-patterns.md
```
Integrates internal documentation into skill.
### Adding Script
```bash
/skill:add docker "script to check container health and restart if unhealthy"
```
Creates executable health check script.
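The generated script might look something like this sketch (illustrative only; it assumes the container defines a Docker `HEALTHCHECK` so a health status is available):
```bash
#!/usr/bin/env bash
# Hypothetical sketch of a generated health-check script.
set -euo pipefail

CONTAINER="${1:?usage: check-health.sh <container-name>}"

# Read the container's health status (requires a HEALTHCHECK in the image)
STATUS=$(docker inspect --format '{{.State.Health.Status}}' "$CONTAINER")

if [ "$STATUS" != "healthy" ]; then
  echo "Container $CONTAINER is $STATUS - restarting..."
  docker restart "$CONTAINER"
fi
```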
## Token Efficiency
Before `/skill:add` with progressive disclosure:
```
All content loaded upfront: ~8,000 tokens
```
After progressive disclosure:
```
Core only: ~500 tokens
With one reference: ~2,900 tokens
All content: ~8,000 tokens (rare)
Average usage: ~1,500 tokens (81% reduction)
```
## Related Commands
- [/skill:create](/docs/commands/skill/create) - Create new skills
- [/skill:optimize](/docs/commands/skill/optimize) - Optimize existing skills
- [/skill:fix-logs](/docs/commands/skill/fix-logs) - Fix skills from logs
---
**Key Takeaway**: `/skill:add` extends skills with new references or scripts while using progressive disclosure to maintain token efficiency.
---
# /skill:optimize
Section: docs
Category: commands/skill
URL: https://docs.claudekit.cc/docs/skill/optimize
# /skill:optimize
Plan-based skill optimization with user approval. Analyzes skills, identifies improvements, creates a plan, and waits for approval before implementing changes.
## Syntax
```bash
/skill:optimize [skill-name] [prompt]
```
## When to Use
- **Token Optimization**: Reduce skill token usage
- **Clarity Improvements**: Make instructions clearer
- **Error Handling**: Add better error handling
- **Major Changes**: When you want to review before applying
- **Documentation**: Improve skill documentation
## Quick Example
```bash
/skill:optimize better-auth "reduce token usage by 40%"
```
**Output**:
```
Analyzing skill: better-auth
Current Analysis:
- Total tokens: 8,400
- Progressive disclosure: Partial
- Redundant content: 2,100 tokens
Optimization Plan Created:
Location: plans/skill-optimize-better-auth-251129/plan.md
Proposed Changes:
1. Consolidate duplicate API references (-1,200 tokens)
2. Extract examples to on-demand loading (-600 tokens)
3. Compress instruction format (-300 tokens)
Estimated Impact: -2,100 tokens (25% reduction)
⏸ Waiting for approval...
Review plan and respond with 'approve' or 'reject'
```
## Arguments
- `[skill-name]`: Target skill to optimize
- `[prompt]`: Optimization goal (e.g., "reduce tokens", "improve clarity")
## Optimization Areas
| Area | Description |
|------|-------------|
| Token Efficiency | Reduce token count, improve progressive disclosure |
| Instruction Clarity | Make instructions clearer, remove ambiguity |
| Script Performance | Optimize script execution, reduce runtime |
| Error Handling | Add validation, improve error messages |
| Documentation | Update docs, add examples |
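Each area maps naturally to a goal prompt. A few illustrative invocations (skill names and wording are placeholders):
```bash
/skill:optimize my-skill "reduce token usage by 30%"
/skill:optimize my-skill "make the setup instructions clearer"
/skill:optimize my-skill "speed up the validation script"
/skill:optimize my-skill "add input validation and better error messages"
/skill:optimize my-skill "expand the documentation with more examples"
```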
## What It Does
### Step 1: Read Skill Files
```
Reading skill: better-auth
Files found:
├── prompt.md (2,400 tokens)
├── references/
│   ├── api-docs.md (3,200 tokens)
│   └── examples.md (1,800 tokens)
└── scripts/
    └── validate.sh
```
### Step 2: Analyze Against Goal
```
Goal: "reduce token usage by 40%"
Analysis:
- Current: 8,400 tokens
- Target: 5,040 tokens
- Required reduction: 3,360 tokens
Opportunities identified:
- Duplicate content: 1,200 tokens
- Non-progressive loading: 1,400 tokens
- Verbose instructions: 800 tokens
- Redundant examples: 600 tokens
```
### Step 3: Create Plan
```
Creating optimization plan...
Plan location:
plans/skill-optimize-better-auth-251129/
├── plan.md
└── analysis.md
```
### Step 4: Present to User
```
───────────────────────────────────────
OPTIMIZATION PLAN
───────────────────────────────────────
Skill: better-auth
Goal: Reduce token usage by 40%
Current State:
- Total tokens: 8,400
- Files: 3
- Progressive disclosure: 60%
Proposed Changes:
1. Consolidate API References
- Merge overlapping docs
- Impact: -1,200 tokens
- Risk: Low
2. Implement Full Progressive Disclosure
- Move examples to on-demand
- Impact: -1,400 tokens
- Risk: Low
3. Compress Instructions
- Remove redundant text
- Impact: -800 tokens
- Risk: Medium
Estimated Result:
- New total: 5,000 tokens
- Reduction: 40.5%
- Target: ✓ Met
───────────────────────────────────────
```
### Step 5: Wait for Approval (BLOCKING)
```
⏸ WAITING FOR APPROVAL
Review the plan at:
plans/skill-optimize-better-auth-251129/plan.md
Respond with:
- 'approve' - Implement changes
- 'reject' - Cancel optimization
- 'modify' - Request changes to plan
```
### Step 6: Execute if Approved
```
User: approve
Executing optimization plan...
Step 1/3: Consolidating API references...
✓ Merged 3 files into 1
Step 2/3: Implementing progressive disclosure...
✓ Moved examples to on-demand loading
Step 3/3: Compressing instructions...
✓ Reduced verbose text
Optimization complete.
```
### Step 7: Test Optimized Skill
```
Testing optimized skill...
Activation: ✓ Success
Core load: 1,200 tokens (was 2,400)
Full load: 5,000 tokens (was 8,400)
All features: ✓ Working
Optimization successful!
```
## Plan Structure
Generated plan includes:
```markdown
# Skill Optimization Plan
## Target
- Skill: better-auth
- Goal: reduce token usage by 40%
## Current Issues
1. [Issue description]
2. [Issue description]
## Proposed Changes
### Change 1: [Title]
- Description
- Files affected
- Token impact
- Risk level
### Change 2: [Title]
...
## Token Impact Summary
| Before | After | Change |
|--------|-------|--------|
| 8,400 | 5,000 | -40.5% |
## Risk Assessment
- Overall risk: Low
- Rollback: Available
## Implementation Steps
1. [Step description]
2. [Step description]
```
## Complete Example
### Scenario: Improving Instruction Clarity
```bash
/skill:optimize docker "improve instruction clarity for container management"
```
**Execution**:
```
Analyzing skill: docker
Current Analysis:
- Instructions: Technically accurate but verbose
- Examples: Scattered across files
- Common tasks: Not prioritized
Creating optimization plan...
───────────────────────────────────────
OPTIMIZATION PLAN
───────────────────────────────────────
Skill: docker
Goal: Improve instruction clarity
Current Issues:
1. Instructions assume advanced knowledge
2. Common tasks buried in reference docs
3. Examples not grouped by use case
4. Error handling missing
Proposed Changes:
1. Restructure Instructions
- Add beginner-friendly intro
- Prioritize common tasks
- Impact: +200 tokens (worth it for clarity)
- Risk: Low
2. Group Examples by Use Case
- Development: dev containers
- Production: deployment
- Debugging: logs and shell access
- Impact: Neutral
- Risk: Low
3. Add Error Handling Guide
- Common errors and fixes
- Troubleshooting flowchart
- Impact: +400 tokens
- Risk: Low
Net Impact: +600 tokens (acceptable for clarity gains)
───────────────────────────────────────
⏸ Waiting for approval...
```
**User**: approve
```
Executing plan...
✓ Restructured instructions
✓ Grouped examples by use case
✓ Added error handling guide
Testing skill...
✓ Clarity improved
✓ All features working
Optimization complete!
New clarity score: 8.5/10 (was 6/10)
```
## Approval Workflow
```
/skill:optimize
      │
      ▼
┌──────────────┐
│   Analyze    │
└──────────────┘
      │
      ▼
┌──────────────┐
│ Create Plan  │
└──────────────┘
      │
      ▼
┌──────────────┐
│   Present    │
└──────────────┘
      │
      ▼
┌───────────────────────────────────────┐
│       ⏸ WAIT FOR USER APPROVAL        │
│                                       │
│  'approve' → Execute plan             │
│  'reject'  → Cancel                   │
│  'modify'  → Edit plan                │
└───────────────────────────────────────┘
      │
      ▼
┌──────────────┐
│   Execute    │
└──────────────┘
      │
      ▼
┌──────────────┐
│     Test     │
└──────────────┘
```
## Related Commands
- [/skill:optimize:auto](/docs/commands/skill/optimize-auto) - Optimize without approval
- [/skill:add](/docs/commands/skill/add) - Add references to skills
- [/skill:create](/docs/commands/skill/create) - Create new skills
---
**Key Takeaway**: `/skill:optimize` provides controlled skill improvement with a plan-and-approve workflow, ensuring you review changes before they're applied.
---
# /skill:optimize:auto
Section: docs
Category: commands/skill
URL: https://docs.claudekit.cc/docs/skill/optimize-auto
# /skill:optimize:auto
Automatic skill optimization without approval gates. Applies improvements directly with backup/rollback safety for rapid iteration during skill development.
## Syntax
```bash
/skill:optimize:auto [skill-name] [prompt]
```
## When to Use
- **Rapid Iteration**: During active skill development
- **Minor Fixes**: Typos, formatting, small adjustments
- **Token Optimization**: Quick token reduction
- **Known Improvements**: When you know exactly what to fix
- **Trusted Changes**: Low-risk optimizations
## Quick Example
```bash
/skill:optimize:auto better-auth "reduce token usage"
```
**Output**:
```
Creating backup...
Backup: $HOME/.claude/skills/better-auth/.backup-251129-2030/
Analyzing skill: better-auth
Goal: reduce token usage
Applying optimizations...
✓ Consolidated duplicate references (-800 tokens)
✓ Applied progressive disclosure (-1,200 tokens)
✓ Compressed verbose instructions (-400 tokens)
Validating changes...
✓ Skill syntax valid
✓ Activation successful
Results:
- Before: 6,400 tokens
- After: 4,000 tokens
- Reduction: 37.5%
Optimization complete!
```
## Arguments
- `[skill-name]`: Target skill to optimize
- `[prompt]`: Optimization goal
## Difference from /skill:optimize
| Feature | /skill:optimize | /skill:optimize:auto |
|---------|-----------------|---------------------|
| Creates plan | Yes | No |
| User approval | Required | Not needed |
| Execution speed | Slower (wait) | Immediate |
| Best for | Major changes | Minor fixes |
| Safety | Plan reviewed before execution | Backup + rollback |
## What It Does
### Step 1: Create Backup
```
Creating backup...
Source: $HOME/.claude/skills/better-auth/
Backup: $HOME/.claude/skills/better-auth/.backup-251129-2030/
Backed up:
├── prompt.md
├── references/
└── scripts/
```
### Step 2: Analyze Goal
```
Goal: "reduce token usage"
Analysis:
- Current tokens: 6,400
- Progressive disclosure: 40%
- Optimization opportunities: 3 found
```
### Step 3: Apply Changes
```
Applying optimizations...
Change 1: Consolidate references
- Merged api-v1.md and api-v2.md
- Saved: 800 tokens
Change 2: Progressive disclosure
- Moved examples to on-demand
- Saved: 1,200 tokens
Change 3: Compress instructions
- Removed redundant phrases
- Saved: 400 tokens
```
### Step 4: Validate Syntax
```
Validating skill syntax...
Checking prompt.md... ✓
Checking references... ✓
Checking scripts... ✓
Syntax valid.
```
### Step 5: Test Activation
```
Testing skill activation...
Loading skill: better-auth
Core instructions: ✓ Loaded
References: ✓ Accessible
Scripts: ✓ Executable
Activation successful.
```
### Step 6: Report or Rollback
**Success**:
```
───────────────────────────────────────
OPTIMIZATION COMPLETE
───────────────────────────────────────
Skill: better-auth
Changes Applied:
✓ Consolidated duplicate references
✓ Applied progressive disclosure
✓ Compressed verbose instructions
Token Impact:
- Before: 6,400 tokens
- After: 4,000 tokens
- Saved: 2,400 tokens (37.5%)
Backup available at:
.backup-251129-2030/
───────────────────────────────────────
```
**Failure (auto-rollback)**:
```
Optimization failed!
Error: Skill activation failed
Cause: Missing required reference
Rolling back...
Restored from: .backup-251129-2030/
Skill restored to previous state.
```
## Safety Features
### Automatic Backup
Every optimization creates a timestamped backup:
```
$HOME/.claude/skills/{skill-name}/
├── prompt.md (current)
├── references/
├── scripts/
└── .backup-251129-2030/   ← Automatic backup
    ├── prompt.md
    ├── references/
    └── scripts/
```
### Syntax Validation
Validates skill structure after changes:
```
Validation checks:
✓ prompt.md exists and valid
✓ YAML frontmatter correct
✓ References accessible
✓ Scripts executable
```
### Automatic Rollback
If activation fails, automatically restores:
```
Activation failed!
Auto-rollback initiated...
Restored from backup.
Skill working again.
```
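Because backups are plain directories inside the skill folder, you can also restore one manually later if needed (skill name and timestamp are illustrative):
```bash
# Copy the backup's contents back over the skill directory
cp -r "$HOME/.claude/skills/better-auth/.backup-251129-2030/." \
      "$HOME/.claude/skills/better-auth/"
```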
## Complete Example
### Scenario: Quick Token Optimization
```bash
/skill:optimize:auto prisma "optimize for token efficiency"
```
**Execution**:
```
Creating backup...
✓ Backup created: .backup-251129-2035/
Analyzing skill: prisma
Current state:
- Tokens: 5,200
- Files: 4
- Progressive disclosure: 30%
Goal: optimize for token efficiency
Identifying optimizations...
1. Large inline examples (1,400 tokens)
2. Redundant schema docs (600 tokens)
3. Verbose error descriptions (400 tokens)
Applying changes...
[1/3] Moving examples to on-demand...
✓ Created references/examples-on-demand.md
✓ Updated prompt.md references
[2/3] Consolidating schema docs...
✓ Merged 3 files into schema-reference.md
[3/3] Compressing error descriptions...
✓ Shortened while keeping key info
Validating...
✓ Syntax valid
✓ Activation successful
✓ All features working
───────────────────────────────────────
OPTIMIZATION COMPLETE
───────────────────────────────────────
Before: 5,200 tokens
After: 2,800 tokens
Saved: 2,400 tokens (46%)
Changes:
- Examples: Now on-demand
- Schemas: Consolidated
- Errors: Compressed
Backup: .backup-251129-2035/
───────────────────────────────────────
```
### Scenario: Adding Error Handling
```bash
/skill:optimize:auto my-api-skill "add input validation and error handling"
```
**Execution**:
```
Creating backup...
✓ Backup created
Analyzing skill: my-api-skill
Goal: add input validation and error handling
Adding error handling...
[1/2] Adding input validation...
✓ Added validation section to prompt.md
✓ Created scripts/validate-input.sh
[2/2] Adding error handling...
✓ Added error handling guide
✓ Included common error patterns
Validating...
✓ All checks passed
Results:
- Added: Input validation instructions
- Added: Error handling guide
- Added: validate-input.sh script
- Token impact: +350 tokens
Optimization complete!
```
## Limitations
### Use for Safe Changes Only
```
✅ Appropriate:
- Token optimization
- Typo fixes
- Formatting improvements
- Small additions
❌ Use /skill:optimize instead:
- Major restructuring
- Significant logic changes
- Large content additions
- Risky modifications
```
### No Plan Review
Changes apply immediately - no review step:
```
/skill:optimize → Plan → Review → Execute
/skill:optimize:auto → Execute immediately
```
### Backup Retention
Backups kept for manual cleanup:
```
# List backups
ls $HOME/.claude/skills/my-skill/.backup-*/
# Clean old backups
rm -rf $HOME/.claude/skills/my-skill/.backup-251128-*/
```
## Related Commands
- [/skill:optimize](/docs/commands/skill/optimize) - Optimization with approval
- [/skill:add](/docs/commands/skill/add) - Add references to skills
- [/skill:create](/docs/commands/skill/create) - Create new skills
- [/skill:fix-logs](/docs/commands/skill/fix-logs) - Fix skills from error logs
---
**Key Takeaway**: `/skill:optimize:auto` provides rapid skill optimization without approval gates, backed by automatic backup and rollback for safe iteration.
---
# CLAUDE.md
Section: docs
Category: configuration
URL: https://docs.claudekit.cc/docs/configuration/claude-md
# CLAUDE.md
The `CLAUDE.md` file is the primary configuration file that provides guidance to Claude Code when working with your codebase. Understanding this file is crucial for effective use of ClaudeKit.
## What is CLAUDE.md?
`CLAUDE.md` serves as the entry point for Claude Code's understanding of your project. It contains:
- Role and responsibilities definitions
- Links to detailed workflow documentation
- References to development rules
- Documentation management guidelines
## File Structure
A typical `CLAUDE.md` file looks like this:
```markdown
# CLAUDE.md
This file provides guidance to Claude Code when working with code in this repository.
## Role & Responsibilities
Your role is to analyze user requirements, delegate tasks to appropriate sub-agents,
and ensure cohesive delivery of features that meet specifications and architectural standards.
## Workflows
- Primary workflow: `./.claude/workflows/primary-workflow.md`
- Development rules: `./.claude/workflows/development-rules.md`
- Orchestration protocols: `./.claude/workflows/orchestration-protocol.md`
- Documentation management: `./.claude/workflows/documentation-management.md`
## Documentation Management
We keep all important docs in `./docs` folder and keep updating them.
```
## Why File System As Context?
ClaudeKit follows Manus's approach to Context Engineering: **Use File System As Context**.
### Benefits
1. **Token Efficiency**: CLAUDE.md contains just a few lines with links to detailed files
2. **On-Demand Loading**: Detailed instructions are loaded only when needed
3. **Better Organization**: Related documentation is grouped in logical directories
4. **Easier Maintenance**: Update specific files without touching CLAUDE.md
### Example
Instead of putting all development rules in CLAUDE.md:
```markdown
❌ Bad Approach (All in CLAUDE.md)
# CLAUDE.md
## Development Rules
1. Always write tests
2. Follow TypeScript strict mode
3. Use conventional commits
... (hundreds of lines)
```
ClaudeKit uses references:
```markdown
✅ Good Approach (File System As Context)
# CLAUDE.md
## Workflows
- Development rules: `./.claude/workflows/development-rules.md`
```
This keeps CLAUDE.md lightweight while maintaining access to detailed guidelines.
## Important: Do Not Modify
**[Important]** You should not modify `CLAUDE.md` directly, as it will be overwritten each time you update ClaudeKit using `ck init`.
### Why?
- ClaudeKit updates may include improvements to workflows and rules
- Manual changes will be lost during updates
- Consistency across ClaudeKit projects
### What if I need to customize?
If you want to modify `CLAUDE.md` without it being overwritten:
```bash
# Use the exclude flag during updates
ck init --exclude CLAUDE.md
```
**Better approach**: Instead of modifying CLAUDE.md, customize the referenced files in `.claude/workflows/` which are less likely to change during updates.
## Structure Overview
CLAUDE.md links to several key directories:
### `.claude/workflows/`
Contains detailed workflow instructions:
- `development-rules.md` - Code quality standards, subagent orchestration, pre-commit procedures
- `documentation-management.md` - Documentation standards and practices
- `orchestration-protocol.md` - Agent orchestration methods
- `primary-workflow.md` - Development workflow from code to deployment
### `docs/`
Project-specific documentation:
```
docs/
├── project-overview-pdr.md
├── code-standards.md
├── codebase-summary.md
├── design-guidelines.md
├── deployment-guide.md
├── system-architecture.md
└── project-roadmap.md
```
These files help Claude Code:
- Avoid hallucinations
- Prevent creating redundant code
- Understand project-specific patterns
- Follow established conventions
## How Claude Code Uses CLAUDE.md
### Initial Load
1. Claude Code reads `CLAUDE.md` when started
2. Understands the project's role and structure
3. Knows where to find detailed instructions
### During Tasks
When performing specific tasks, Claude Code:
1. References linked workflow files
2. Reads relevant documentation from `docs/`
3. Follows established patterns and rules
4. Updates documentation as needed
### Example Flow
```
User: "Add user authentication"
↓
Claude reads CLAUDE.md
↓
Loads development-rules.md
↓
Checks code-standards.md
↓
Reviews system-architecture.md
↓
Implements following patterns
↓
Updates documentation
```
## Best Practices
### Do's
✅ Keep CLAUDE.md concise with links to detailed docs
✅ Update workflow files in `.claude/workflows/` as needed
✅ Maintain project docs in `docs/` directory
✅ Use `ck init --exclude CLAUDE.md` if you must customize
### Don'ts
❌ Don't put all documentation in CLAUDE.md
❌ Don't modify CLAUDE.md without understanding update implications
❌ Don't ignore the linked workflow files
❌ Don't skip documentation management
## Token Consumption
Using File System As Context significantly reduces token usage:
**Without File System As Context:**
- Initial load: ~5000 tokens (everything in CLAUDE.md)
- Every task: Same 5000 tokens loaded
**With File System As Context (ClaudeKit approach):**
- Initial load: ~500 tokens (just CLAUDE.md)
- Specific task: +1000 tokens (only relevant file)
- Total: 1500 tokens vs 5000 tokens (70% savings)
## Validation
Ensure your CLAUDE.md is properly configured:
```bash
# Check if CLAUDE.md exists
cat CLAUDE.md
# Verify linked files exist
ls .claude/workflows/
# Check documentation structure
ls docs/
```
## Next Steps
Now that you understand CLAUDE.md:
- [Workflows](/docs/docs/configuration/workflows) - Learn about workflow files
- [Agents](/docs/agents/) - Understand the agent system
- [Commands](/docs/commands/) - Explore available commands
---
**Key Takeaway**: CLAUDE.md is a lightweight entry point that uses the file system as context, making ClaudeKit efficient and maintainable.
---
# Workflows
Section: docs
Category: configuration
URL: https://docs.claudekit.cc/docs/configuration/workflows
# Workflows
Workflows are the backbone of ClaudeKit's agent coordination system. They provide detailed instructions that ensure agents work together cohesively and follow project standards.
## What are Workflows?
Workflows are markdown files stored in `.claude/workflows/` that contain:
- Development guidelines and rules
- Agent coordination protocols
- Documentation standards
- Implementation patterns
- Quality assurance procedures
## Workflow Files
ClaudeKit includes four core workflow files:
### 1. development-rules.md
**Purpose**: Comprehensive development guidelines
**Contains:**
- Code quality standards
- Subagent orchestration rules
- Pre-commit/push procedures
- Implementation principles
- Testing requirements
- Code review criteria
**When Used:**
- During feature implementation
- Before committing code
- During code review
- When refactoring
**Key Sections:**
```markdown
## Code Quality Standards
- TypeScript strict mode
- ESLint configuration
- Prettier formatting
- Test coverage requirements
## Subagent Orchestration
- When to use parallel agents
- Sequential agent workflows
- Agent handoff protocols
## Pre-commit Procedures
- Run tests
- Check types
- Lint code
- Update documentation
```
### 2. documentation-management.md
**Purpose**: Documentation standards and maintenance
**Contains:**
- Documentation structure requirements
- When to update docs
- Documentation formats
- API documentation standards
- Changelog management
**When Used:**
- After implementing features
- During refactoring
- When updating APIs
- For new team members
**Key Sections:**
```markdown
## Documentation Standards
- Code comments
- API documentation
- Architecture diagrams
- User guides
## Update Triggers
- New features
- API changes
- Breaking changes
- Configuration updates
## File Locations
- docs/ - Project documentation
- README.md - Project overview
- CHANGELOG.md - Version history
```
**Why Important:**
- Prevents hallucinations by providing context
- Avoids redundant code creation
- Maintains consistency across codebase
### 3. orchestration-protocol.md
**Purpose**: Methods for coordinating multiple agents
**Contains:**
- Parallel agent initialization
- Sequential agent workflows
- Agent communication patterns
- Task delegation strategies
**When Used:**
- Complex multi-step tasks
- Large feature implementations
- System-wide refactoring
- Performance optimization
**Orchestration Patterns:**
**Parallel Orchestration:**
```markdown
Use when tasks are independent:
- Multiple code reviews
- Concurrent testing
- Parallel documentation updates
Example:
1. Launch scout agents simultaneously
2. Each scans different directories
3. Aggregate results
4. Proceed with unified plan
```
**Sequential Orchestration:**
```markdown
Use when tasks depend on each other:
1. Planner agent creates plan
2. Code agent implements
3. Tester agent validates
4. Reviewer agent checks quality
5. Git agent commits changes
```
**Hybrid Orchestration:**
```markdown
Combine parallel and sequential:
1. Parallel: Scout agents scan codebase
2. Sequential: Planner creates unified plan
3. Parallel: Multiple code agents implement
4. Sequential: Tester validates all changes
5. Sequential: Git agent commits
```
### 4. primary-workflow.md
**Purpose**: End-to-end development workflow
**Contains:**
- Implementation workflow steps
- Testing procedures
- Code review process
- Integration protocols
- Debugging strategies
- Reporting requirements
**Standard Workflow:**
```mermaid
graph TD
A[User Request] --> B[Planner Agent]
B --> C[Code Agent]
C --> D[Tester Agent]
D --> E{Tests Pass?}
E -->|No| F[Debugger Agent]
F --> C
E -->|Yes| G[Code Reviewer Agent]
G --> H{Review Pass?}
H -->|No| I[Refactor]
I --> C
H -->|Yes| J[Git Agent]
J --> K[Documentation Update]
K --> L[Complete]
```
**Workflow Stages:**
1. **Planning**: Analyze requirements, create implementation plan
2. **Implementation**: Write code following standards
3. **Testing**: Run tests, validate functionality
4. **Review**: Check code quality, security, performance
5. **Integration**: Commit changes, update docs
6. **Debugging**: Fix issues if tests or review fail
7. **Reporting**: Document changes and decisions
## How Workflows Work Together
### Example: Implementing a New Feature
```
User: "Add user authentication"
1. Primary Workflow Activates
ββ development-rules.md loaded
ββ Checks code standards
2. Orchestration Protocol Determines Approach
ββ Sequential workflow chosen
ββ Agents queued in order
3. Planner Agent
ββ Reads documentation-management.md
ββ Reviews docs/system-architecture.md
ββ Creates implementation plan
4. Code Agent
ββ Follows development-rules.md
ββ Implements authentication
ββ Adheres to code standards
5. Tester Agent
ββ Runs test suite
ββ Validates security
6. Code Reviewer Agent
ββ Checks against development-rules.md
ββ Validates best practices
7. Documentation Manager
ββ Follows documentation-management.md
ββ Updates docs/
8. Git Agent
ββ Creates conventional commit
ββ Pushes changes
```
## Workflow Benefits
### Consistency
All agents follow the same rules and patterns, ensuring:
- Uniform code style
- Consistent documentation
- Predictable behavior
- Standardized commits
### Quality Assurance
Workflows enforce:
- Test coverage requirements
- Code review standards
- Security best practices
- Performance benchmarks
### Coordination
Multiple agents work together efficiently:
- Clear handoff protocols
- No duplicate work
- Efficient task distribution
- Proper error handling
### Maintainability
Well-defined workflows make it easier to:
- Onboard new team members
- Update development practices
- Scale projects
- Debug issues
## Customizing Workflows
While CLAUDE.md shouldn't be modified, workflow files can be customized:
### Safe to Modify
✅ **development-rules.md** - Add project-specific rules
✅ **documentation-management.md** - Adjust doc structure
✅ **orchestration-protocol.md** - Define custom patterns
### Example Customization
```markdown
# development-rules.md
## Project-Specific Rules
### API Design
- RESTful endpoints only
- Versioned URLs (/v1/users)
- JSON responses
- Rate limiting required
### Database
- PostgreSQL queries use parameterized statements
- Migrations before code changes
- No raw SQL in business logic
```
### Best Practices for Customization
1. **Document Changes**: Note why custom rules exist
2. **Test Impact**: Verify agents follow new rules
3. **Team Alignment**: Ensure team agrees on changes
4. **Version Control**: Track workflow changes in git
5. **Backup**: Keep original files for reference
## Workflow Validation
Ensure workflows are properly configured:
```bash
# Check workflow files exist
ls .claude/workflows/
# Should show:
# - development-rules.md
# - documentation-management.md
# - orchestration-protocol.md
# - primary-workflow.md
# Verify file contents
cat .claude/workflows/development-rules.md
```
## Common Issues
### Agents Not Following Rules
**Problem**: Agent behavior inconsistent with workflow rules
**Solutions:**
1. Verify workflow files are in correct location
2. Check file permissions (must be readable)
3. Ensure CLAUDE.md references correct paths
4. Review agent logs for errors
### Conflicting Workflows
**Problem**: Different workflows give contradictory instructions
**Solutions:**
1. Review all workflow files for conflicts
2. Establish clear priority order
3. Consolidate overlapping rules
4. Update CLAUDE.md if needed
### Outdated Workflows
**Problem**: Workflows don't match current project needs
**Solutions:**
1. Review and update workflow files
2. Run `ck init` to get latest ClaudeKit workflows
3. Merge custom changes with updates
4. Document customizations
## Next Steps
Now that you understand workflows:
- [Agents](/docs/agents/) - Learn about the 14 specialized agents
- [Commands](/docs/commands/) - Explore available commands
- [Development Rules](/.claude/workflows/development-rules.md) - Read the full development rules
---
**Key Takeaway**: Workflows ensure all agents follow consistent patterns and coordinate effectively, resulting in high-quality, maintainable code.
---
# Hooks
Section: docs
Category: configuration
URL: https://docs.claudekit.cc/docs/configuration/hooks
# Hooks
Hooks allow you to extend Claude Code with custom scripts that run at specific points in the workflow. ClaudeKit includes pre-built hooks for notifications (Discord, Telegram) and development rule enforcement.
## Overview
Hooks are configured in `.claude/settings.json` and execute shell commands in response to Claude Code events.
### Available Hook Events
| Event | When Triggered |
|-------|----------------|
| `UserPromptSubmit` | Before user prompt is processed |
| `PreToolUse` | Before a tool executes |
| `PostToolUse` | After a tool executes |
| `Stop` | When Claude session ends |
| `SubagentStop` | When a subagent completes |
## Configuration
Hooks are defined in `.claude/settings.json`:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "node .claude/hooks/dev-rules-reminder.cjs"
}
]
}
],
"PreToolUse": [
{
"matcher": "Bash|Glob|Grep|Read|Edit|Write",
"hooks": [
{
"type": "command",
"command": "node .claude/hooks/scout-block.cjs"
}
]
}
]
}
}
```
### Hook Properties
- **type**: Always `"command"` for shell execution
- **command**: Shell command to run
- **matcher**: (PreToolUse only) Regex to match tool names
## Built-in Hooks
### 1. Development Rules Reminder
**File:** `.claude/hooks/dev-rules-reminder.cjs`
**Purpose:** Reminds Claude about development rules before processing prompts.
**Event:** `UserPromptSubmit`
**Configuration:**
```json
{
"hooks": {
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "node .claude/hooks/dev-rules-reminder.cjs"
}
]
}
]
}
}
```
**What it does:**
- Checks `.claude/workflows/development-rules.md`
- Injects rule reminders into Claude's context
- Ensures consistent code quality standards
### 2. Scout Block
**File:** `.claude/hooks/scout-block.cjs`
**Purpose:** Prevents file operations during scout mode to keep context focused.
**Event:** `PreToolUse`
**Configuration:**
```json
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash|Glob|Grep|Read|Edit|Write",
"hooks": [
{
"type": "command",
"command": "node .claude/hooks/scout-block.cjs"
}
]
}
]
}
}
```
**What it does:**
- Blocks file operations when scout mode is active
- Prevents accidental modifications during exploration
- Keeps codebase search focused and efficient
### 3. Discord Notifications (Manual)
**File:** `.claude/hooks/send-discord.sh`
**Purpose:** Sends rich notifications to Discord when tasks complete.
> **Note:** Discord notifications are triggered manually in workflows, not automatically via hook events. This is intentional for flexibility.
**Setup:**
1. **Create Discord Webhook:**
- Discord Server β Settings β Integrations β Webhooks
- Create webhook, copy URL
2. **Configure Environment:**
```bash
# .env or .claude/.env
DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/YOUR_ID/YOUR_TOKEN
```
3. **Make Executable:**
```bash
chmod +x .claude/hooks/send-discord.sh
```
4. **Test:**
```bash
./.claude/hooks/send-discord.sh 'Test notification'
```
**Usage in Workflows:**
```markdown
- When implementation complete, run:
`./.claude/hooks/send-discord.sh 'Task completed: [summary]'`
```
**Message Format:**
```
🤖 Claude Code Session Complete
─────────────────────────────────────────
Implementation Complete

✅ Added user authentication
✅ Created login/signup forms
✅ All tests passing
─────────────────────────────────────────
⏰ Session Time: 14:30:45
📁 Project: my-project
```
### 4. Telegram Notifications
**File:** `.claude/hooks/telegram_notify.sh`
**Purpose:** Sends detailed notifications to Telegram with tool usage stats.
**Setup:**
1. **Create Telegram Bot:**
- Message @BotFather on Telegram
- Send `/newbot`, follow prompts
- Copy bot token
2. **Get Chat ID:**
```bash
# After messaging your bot, run:
curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates" | jq '.result[-1].message.chat.id'
```
3. **Configure Environment:**
```bash
# .env or .claude/.env
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=987654321
```
4. **Configure Hook (add to `.claude/settings.json`):**
> **Note:** Telegram hooks are not configured by default. Add this to your `settings.json` to enable automatic notifications.
```json
{
"hooks": {
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PROJECT_DIR}/.claude/hooks/telegram_notify.sh"
}
]
}
],
"SubagentStop": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PROJECT_DIR}/.claude/hooks/telegram_notify.sh"
}
]
}
]
}
}
```
**Message Format:**
```
Project Task Completed
Time: 2025-10-22 14:30:45
Project: my-project
Total Operations: 15
Session: abc12345...
Tools Used:
  5 Edit
  3 Read
  2 Bash
  2 Write
Files Modified:
  • src/auth/service.ts
  • src/utils/validation.ts
  • tests/auth.test.ts
Location: /Users/user/projects/my-project
```
## Creating Custom Hooks
### Basic Hook Structure
```javascript
// .claude/hooks/my-hook.cjs
// Read hook input from stdin (JSON)
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
const data = JSON.parse(input);
// Hook logic here
console.log('Hook triggered:', data.hookType);
// Exit 0 for success, non-zero to block
process.exit(0);
});
```
### Hook Input Data
**UserPromptSubmit:**
```json
{
"hookType": "UserPromptSubmit",
"projectDir": "/path/to/project",
"prompt": "User's prompt text"
}
```
**PreToolUse:**
```json
{
"hookType": "PreToolUse",
"projectDir": "/path/to/project",
"tool": "Edit",
"parameters": {
"file_path": "/path/to/file.ts",
"old_string": "...",
"new_string": "..."
}
}
```
**Stop:**
```json
{
"hookType": "Stop",
"projectDir": "/path/to/project",
"sessionId": "abc123",
"toolsUsed": [
{"tool": "Read", "parameters": {"file_path": "..."}},
{"tool": "Edit", "parameters": {"file_path": "..."}}
]
}
```
### Example: Logging Hook
```javascript
// .claude/hooks/log-tools.cjs
const fs = require('fs');
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
const data = JSON.parse(input);
const logEntry = {
timestamp: new Date().toISOString(),
hookType: data.hookType,
tool: data.tool,
file: data.parameters?.file_path
};
fs.appendFileSync('logs.txt', JSON.stringify(logEntry) + '\n');
process.exit(0);
});
```
### Example: Blocking Hook
```javascript
// .claude/hooks/prevent-secrets.cjs
let input = '';
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
const data = JSON.parse(input);
// Block edits to .env files
if (data.tool === 'Edit' && data.parameters?.file_path?.includes('.env')) {
console.error('Blocked: Cannot edit .env files directly');
process.exit(1); // Non-zero exits block the action
}
process.exit(0);
});
```
## Environment Variables
Hooks can access these environment variables:
| Variable | Description |
|----------|-------------|
| `CLAUDE_PROJECT_DIR` | Project root directory |
| `CLAUDE_SESSION_ID` | Current session identifier |
| Custom variables | From `.env` files |
### Loading .env Files
ClaudeKit hooks load environment variables in this priority:
1. System environment variables
2. `.claude/.env` (project-level)
3. `.claude/hooks/.env` (hook-specific)
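To make that priority order concrete, here is a minimal sketch of how a custom hook could merge those sources itself. It is illustrative only, not ClaudeKit's built-in loader, and the tiny parser below ignores quoting and comments:
```javascript
// .claude/hooks/load-env-example.cjs (illustrative only, not the built-in loader)
const fs = require('fs');
const path = require('path');

// Very small .env parser: KEY=value lines only, no quoting or comment handling.
function parseEnvFile(filePath) {
  if (!fs.existsSync(filePath)) return {};
  const vars = {};
  for (const line of fs.readFileSync(filePath, 'utf8').split('\n')) {
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)\s*$/);
    if (match) vars[match[1]] = match[2];
  }
  return vars;
}

const projectDir = process.env.CLAUDE_PROJECT_DIR || process.cwd();

// Later spreads win, so system environment variables take the highest priority,
// matching the order documented above.
const env = {
  ...parseEnvFile(path.join(projectDir, '.claude/hooks/.env')), // lowest priority
  ...parseEnvFile(path.join(projectDir, '.claude/.env')),
  ...process.env,                                               // highest priority
};

console.log('DISCORD_WEBHOOK_URL set:', Boolean(env.DISCORD_WEBHOOK_URL));
```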
## Best Practices
### Security
1. **Never commit secrets:**
```bash
# .gitignore
.env
.env.*
```
2. **Use environment variables** for tokens and URLs
3. **Rotate webhook tokens** regularly
4. **Limit hook permissions** to necessary scope
### Performance
1. **Keep hooks lightweight** - they run on every event
2. **Use async operations** for slow tasks
3. **Exit quickly** if no action needed
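As a sketch of the "exit quickly" advice, a hook can bail out as soon as the event is irrelevant (field names follow the Hook Input Data section above):
```javascript
// .claude/hooks/fast-exit-example.cjs (illustrative)
let input = '';
process.stdin.on('data', chunk => (input += chunk));
process.stdin.on('end', () => {
  const data = JSON.parse(input);
  // Only do expensive work for Write operations; everything else exits immediately.
  if (data.tool !== 'Write') process.exit(0);
  // ...heavier checks for Write events would go here...
  process.exit(0);
});
```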
### Reliability
1. **Handle errors gracefully**
2. **Log hook failures** for debugging
3. **Test hooks manually** before deployment
## Troubleshooting
### Hook Not Triggering
**Solutions:**
1. Verify hook in `settings.json` is valid JSON
2. Check script is executable (`chmod +x`)
3. Verify path is correct
4. Test script manually
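A quick manual test is to pipe a sample payload into the hook the same way Claude Code does via stdin. The fields below mirror the PreToolUse example shown earlier and may differ from what your hook actually expects:
```bash
echo '{"hookType":"PreToolUse","projectDir":"'"$(pwd)"'","tool":"Read","parameters":{}}' \
  | node .claude/hooks/scout-block.cjs
echo "Exit code: $?"
```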
### Hook Blocking Unexpectedly
**Solutions:**
1. Check exit code (0 = allow, non-zero = block)
2. Review matcher regex for PreToolUse
3. Add logging to debug
### Environment Variables Not Loading
**Solutions:**
1. Check `.env` file exists and has correct format
2. Verify no spaces around `=` in `.env`
3. Ensure script reads `.env` files
## Related
- [CLAUDE.md](/docs/configuration/claude-md) - Project instructions
- [MCP Setup](/docs/configuration/mcp-setup) - MCP server configuration
- [Workflows](/docs/configuration/workflows) - Development workflows
---
**Key Takeaway**: Hooks extend Claude Code with custom automation - from development rule enforcement to real-time notifications. Use built-in Discord/Telegram hooks or create custom hooks to fit your workflow.
---
# MCP Setup
Section: docs
Category: configuration
URL: https://docs.claudekit.cc/docs/configuration/mcp-setup
# MCP Setup
## TL;DR
ClaudeKit delegates MCP (Model Context Protocol) servers to the dedicated **mcp-manager** subagent. This isolates heavy tool manifests away from the primary agent, keeping its context window lean while still enabling deep integrations.
---
## Setup Checklist
1. **Copy the template config**
```bash
cp .claude/.mcp.json.example .claude/.mcp.json
```
2. **Customize the MCP roster**
- Remove the default sample servers: `context7`, `human-mcp`, `chrome-devtools`, `sequential-thinking`.
- Add only the MCP servers you truly need to avoid unnecessary token use (see the example config sketched below).
3. **Save the configuration** so the subagent can bootstrap clients from `.claude/.mcp.json` on demand.
> 💡 Keep `.claude/.mcp.json` outside your main prompts so the core agent never loads server manifests unless explicitly requested.
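For orientation, a minimal `.claude/.mcp.json` might look roughly like the sketch below, assuming the standard `mcpServers` layout used by Claude Code project configs. The server name and package here are placeholders, and `.claude/.mcp.json.example` remains the authoritative template:
```json
{
  "mcpServers": {
    "my-docs-server": {
      "command": "npx",
      "args": ["-y", "some-mcp-server-package"]
    }
  }
}
```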
---
## Using MCP Tools
Trigger the subagent-managed tools via the `/use-mcp` command:
```bash
/use-mcp
```
**Example**
```bash
/use-mcp Use chrome-devtools mcp to capture a screenshot of google.com
```
ClaudeKit will summon the **mcp-manager** subagent, load the configured MCP clients, analyze available tools, execute the best fit, and return the results to your primary chat.
---
## Technical Deep Dive
### Why This Architecture?
Anthropic's "Code Execution with MCP" pattern inspired a lightweight approach: subagents have their own context windows. Loading MCP manifests directly into the main agent quickly bloats its context, especially with tool-heavy servers like Chrome DevTools or Playwright. By shifting those manifests into a subagent, the primary conversation stays clean even if dozens of MCP servers are configured.
### How It Works
1. The **mcp-management** skill bundle stores script snippets that instantiate MCP clients from `.claude/.mcp.json`.
2. The **mcp-manager** subagent is granted these skills and remains dormant until a `/use-mcp` command fires.
3. When invoked, the subagent:
- Loads `.claude/.mcp.json`.
- Connects to the declared MCP servers.
- Enumerates available tools and selects the best option for the prompt.
- Executes the tool invocation and streams the response back to the main agent.
The result: your main context stays pristine, yet you can still tap into specialized MCP capabilities. (Yes, the joke still stands: you *could* register 80 MCP servers, but please add only what you really need.)
### Further Optimization
Even with subagent isolation, processing massive MCP catalogs still burns tokens. To mitigate that, ClaudeKit can hand off heavy MCP orchestration to **gemini-cli**, shifting the most expensive reasoning to a cheaper, external runtime while keeping the main conversation focused.
---
## Next Steps
- Keep refining `.claude/.mcp.json` as your toolset evolves.
- Version-control the file privately if it includes API endpoints or sensitive details.
- Pair `/use-mcp` with automation commands (e.g., `/cook`, `/fix`, `/plan`) to mix bespoke tools with ClaudeKit's native agents.
With this workflow, you get the power of MCP without the usual context penalty.
---
# Payment Integration
Section: docs
Category: skills
URL: https://docs.claudekit.cc/docs/payments/payment-integration
# Payment Integration
Implement payment processing with SePay (Vietnamese market) and Polar (global SaaS monetization).
## When to Use
- Payment gateway integration and checkout flows
- Subscription management with trials, upgrades, and billing
- QR code payments (VietQR, NAPAS) and bank transfers
- Usage-based billing with metering and credits
- Automated benefit delivery (licenses, GitHub access, Discord roles)
- Webhook handling for payment notifications
- Customer portals for self-service management
- Tax compliance and global payments
## Platform Selection
### Choose SePay for:
- Vietnamese market (VND currency)
- Bank transfer automation (44+ Vietnamese banks)
- VietQR/NAPAS payments
- Local payment methods
- Direct bank account monitoring
### Choose Polar for:
- Global SaaS products
- Subscription lifecycle management
- Usage-based billing
- Automated benefits (GitHub, Discord, licenses)
- Merchant of Record (handles global tax compliance)
- Digital product sales
## Key Capabilities
| Feature | SePay | Polar |
|---------|-------|-------|
| Payment methods | QR, bank transfer, cards | Cards, subscriptions, usage-based |
| Bank monitoring | Webhooks for 44+ VN banks | N/A |
| Tax handling | Manual | MoR (global compliance) |
| Subscriptions | Manual | Full lifecycle management |
| Benefits automation | Manual | GitHub, Discord, licenses, files |
| Rate limit | 2 calls/second | 300 req/min |
| Customer portal | No | Built-in self-service |
## Common Use Cases
### Vietnamese E-commerce Checkout
Implement VietQR payment flow with bank transfer monitoring.
```
"Set up SePay checkout with VietQR generation for bank transfers. Monitor transactions via webhook and mark orders paid when bank account receives matching amount."
```
### SaaS Subscription Platform
Build subscription system with automated license delivery.
```
"Use Polar to create subscription product with 3 tiers. Implement checkout flow and webhook handler that auto-generates license keys on successful subscription."
```
### Usage-Based Billing
Implement metered billing for API usage or credits.
```
"Set up Polar usage-based pricing that tracks API calls per month. Configure webhooks to monitor usage and automatically upgrade/downgrade subscriptions."
```
### Automated GitHub Access
Deliver private repository access on payment.
```
"Use Polar's GitHub benefit to automatically grant repository access when customer subscribes. Remove access on subscription cancellation."
```
## Implementation Workflow
### SePay Quick Start
1. Load `references/sepay/overview.md` for auth setup
2. Load `references/sepay/sdk.md` for integration code
3. Load `references/sepay/webhooks.md` for payment notifications
4. Use `scripts/sepay-webhook-verify.js` for webhook verification
5. Load `references/sepay/best-practices.md` for production
### Polar Quick Start
1. Load `references/polar/overview.md` for auth and concepts
2. Load `references/polar/products.md` for product setup
3. Load `references/polar/checkouts.md` for payment flows
4. Load `references/polar/webhooks.md` for event handling
5. Load `references/polar/benefits.md` for automated delivery
6. Use `scripts/polar-webhook-verify.js` for webhook verification
7. Load `references/polar/best-practices.md` for production
## Pro Tips
- Start with sandbox/test mode before production deployment
- Always verify webhook signatures to prevent fraudulent requests (see the generic sketch after this list)
- Load references progressively - only what you need for current step
- Use provided scripts for webhook verification boilerplate
- For SePay: Monitor rate limits (2 calls/second)
- For Polar: Leverage MoR benefits to avoid tax compliance complexity
- **Not activating?** Say: "Use payment-integration skill to integrate Polar checkout"
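To make the signature-verification tip concrete: the bundled `sepay-webhook-verify.js` and `polar-webhook-verify.js` scripts handle the provider-specific details, while the generic sketch below only illustrates the underlying HMAC pattern. The header name, secret, and algorithm shown here are assumptions that vary by provider:
```javascript
// Generic HMAC webhook signature check (illustrative only - real header names,
// algorithms, and encodings are provider-specific; prefer the bundled scripts).
const crypto = require('crypto');

function isValidSignature(rawBody, signatureHeader, webhookSecret) {
  const expected = crypto
    .createHmac('sha256', webhookSecret)
    .update(rawBody, 'utf8')
    .digest('hex');
  const received = Buffer.from(signatureHeader || '');
  const computed = Buffer.from(expected);
  // timingSafeEqual needs equal-length buffers and avoids timing leaks
  return received.length === computed.length && crypto.timingSafeEqual(received, computed);
}
```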
## Related Skills
- [Backend Development](/docs/skills/backend/backend-development) - API implementation
- [Web Frameworks](/docs/skills/frontend/web-frameworks) - Checkout UI integration
---
## Key Takeaway
Use SePay for Vietnamese market (VietQR, bank transfers, 44+ banks) and Polar for global SaaS (subscriptions, usage billing, automated benefits with tax compliance).
---
# Skills Overview
Section: docs
Category: skills
URL: https://docs.claudekit.cc/docs/skills/index
# Skills Overview
42 specialized skills that extend Claude's capabilities, loaded dynamically when you mention them.
## Quick Reference
### Frontend & Design
| Skill | Purpose |
|-------|---------|
| [frontend-design](/docs/skills/frontend/frontend-design) | Build memorable web interfaces with bold aesthetics |
| [frontend-design-pro](/docs/skills/frontend/frontend-design-pro) | Agency-grade interfaces with perfect imagery |
| [ui-ux-pro-max](/docs/skills/frontend/ui-ux-pro-max) | Production-ready UI with research-backed patterns |
| [aesthetic](/docs/skills/frontend/aesthetic) | Systematic aesthetic framework (BEAUTIFUL → RIGHT → SATISFYING → PEAK) |
| [ui-styling](/docs/skills/frontend/ui-styling) | Tailwind patterns, responsive layouts, dark mode |
| [frontend-development](/docs/skills/frontend/frontend-development) | React patterns, Suspense, state management |
| [nextjs](/docs/skills/frontend/nextjs) | App Router, Server Components, SSR/SSG |
| [web-frameworks](/docs/skills/frontend/web-frameworks) | Next.js + Turborepo + RemixIcon stack |
| [shadcn-ui](/docs/skills/frontend/shadcn-ui) | Accessible UI components with Radix + Tailwind |
| [tailwindcss](/docs/skills/frontend/tailwindcss) | Utility-first CSS, zero custom CSS |
| [threejs](/docs/skills/frontend/threejs) | 3D web experiences with WebGL/WebGPU |
### Backend & Infrastructure
| Skill | Purpose |
|-------|---------|
| [backend-development](/docs/skills/backend/backend-development) | Node.js, NestJS, security, testing patterns |
| [databases](/docs/skills/backend/databases) | Schema design, query optimization, migrations |
| [postgresql-psql](/docs/skills/backend/postgresql-psql) | PostgreSQL CLI, performance tuning, administration |
| [docker](/docs/skills/backend/docker) | Containerization, multi-stage builds, Compose |
| [devops](/docs/skills/backend/devops) | CI/CD, deployment, infrastructure automation |
### AI & Multimodal
| Skill | Purpose |
|-------|---------|
| [ai-multimodal](/docs/skills/ai/ai-multimodal) | Gemini vision, audio, document processing |
| [google-adk-python](/docs/skills/ai/google-adk-python) | Google AI Development Kit for Python agents |
| [canvas-design](/docs/skills/ai/canvas-design) | Visual art creation (PNG/PDF) with design philosophy |
| [gemini-vision](/docs/skills/ai/gemini-vision) | Image analysis (redirects to ai-multimodal) |
### Tools & Utilities
| Skill | Purpose |
|-------|---------|
| [mcp-builder](/docs/skills/tools/mcp-builder) | Create MCP servers (Python FastMCP / TypeScript) |
| [mcp-management](/docs/skills/tools/mcp-management) | Discover and execute MCP tools |
| [skill-creator](/docs/skills/tools/skill-creator) | Create custom skills for your projects |
| [repomix](/docs/skills/tools/repomix) | Pack repos into AI-friendly context files |
| [document-skills](/docs/skills/tools/document-skills) | PDF, DOCX, PPTX, XLSX processing |
| [docs-seeker](/docs/skills/tools/docs-seeker) | Find and retrieve external documentation |
| [chrome-devtools](/docs/skills/tools/chrome-devtools) | Browser automation, performance profiling |
| [ffmpeg](/docs/skills/tools/ffmpeg) | Video/audio processing and conversion |
| [imagemagick](/docs/skills/tools/imagemagick) | Image manipulation and batch processing |
| [claude-code-skill](/docs/skills/tools/claude-code-skill) | Meta-skill for Claude Code itself |
### Process & Methodology
| Skill | Purpose |
|-------|---------|
| [planning](/docs/skills/tools/planning) | Transform requirements into executable plans |
| [research](/docs/skills/tools/research) | Multi-source validation before implementation |
| [sequential-thinking](/docs/skills/tools/sequential-thinking) | Numbered thought sequences for complex problems |
| [problem-solving](/docs/skills/tools/problem-solving) | Systematic approaches when stuck |
| [debugging](/docs/skills/tools/debugging) | Root cause investigation framework |
| [systematic-debugging](/docs/skills/tools/systematic-debugging) | Four-phase debugging (95% fix rate) |
| [code-review](/docs/skills/tools/code-review) | Verification gates and technical rigor |
### Integrations
| Skill | Purpose |
|-------|---------|
| [better-auth](/docs/skills/auth/better-auth) | TypeScript auth (OAuth, 2FA, passkeys, multi-tenant) |
| [shopify](/docs/skills/ecommerce/shopify) | Shopify apps, GraphQL API, checkout extensions |
| [payment-integration](/docs/skills/payments/payment-integration) | Stripe, PayPal, LemonSqueezy integration |
### Mobile & Media
| Skill | Purpose |
|-------|---------|
| [mobile-development](/docs/skills/mobile/mobile-development) | React Native, Expo, cross-platform patterns |
| [media-processing](/docs/skills/multimedia/media-processing) | Audio/video manipulation workflows |
## How to Use
**Basic invocation:**
```
"Use [skill-name] to [task]"
```
**Examples:**
```
"Use better-auth to add GitHub OAuth with 2FA"
"Use docker to create production Dockerfile"
"Use systematic-debugging to investigate this test failure"
"Use frontend-design to build a SaaS landing page"
```
**Skill not activating?** Be explicit:
```
"Use the [skill-name] skill to..."
```
## Under the Hood
### How Skills Activate
Skills activate through **semantic matching** on your prompt:
1. Claude matches your request to skill descriptions
2. Relevant skill instructions load into context
3. Claude follows skill-specific patterns and best practices
**Activation triggers:**
- Mentioning the skill name explicitly
- Describing a task that matches skill description
- Using keywords from skill's domain
### Skill Structure
Every skill contains:
```
.claude/skills/[skill-name]/
├── SKILL.md        # Core instructions (<100 lines)
├── references/     # Detailed documentation
└── scripts/        # Automation scripts (optional)
```
**Progressive disclosure**: SKILL.md provides essentials, references/ has depth.
### Skills vs Commands vs Agents
| Aspect | Skills | Commands | Agents |
|--------|--------|----------|--------|
| **Purpose** | Specialized knowledge | Workflow orchestration | Task execution |
| **Invocation** | "Use [skill]..." | `/command` | Auto or explicit |
| **Scope** | Single focused capability | Multi-step processes | Autonomous work |
| **Example** | better-auth, docker | /cook, /plan | planner, tester |
### Creating Custom Skills
```
"Use skill-creator to create a skill for [your-domain]"
```
skill-creator will:
1. Ask clarifying questions
2. Design skill structure
3. Create SKILL.md with proper frontmatter
4. Add references if needed
5. Save to `.claude/skills/`
### Troubleshooting
**Skill not working?**
- Check skill name spelling
- Provide more context about your task
- Try explicit invocation: "Use the X skill to..."
**Need a skill that doesn't exist?**
- Use skill-creator to build it
- Request on [Discord](https://claudekit.cc/discord)
## Key Takeaway
42 skills provide instant expertise: just mention the skill and describe your task. No configuration needed.
---
# Debugging
Section: docs
Category: skills
URL: https://docs.claudekit.cc/docs/tools/debugging
# Debugging
Systematic debugging framework that ensures root cause investigation before implementing any fixes.
## Core Principle
**NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST**
Random fixes waste time and create new bugs. Find the root cause, fix at source, validate at every layer, verify before claiming success.
## When to Use
**Always use for:**
- Test failures, bugs, unexpected behavior
- Performance issues, build failures
- Integration problems
- Before claiming work complete
**Especially when:**
- Under time pressure or "quick fix" seems obvious
- Already tried multiple fixes
- Don't fully understand the issue
- About to claim success
## The Four Techniques
### 1. Systematic Debugging
Four-phase framework ensuring proper investigation:
- **Phase 1**: Root Cause Investigation (read errors, reproduce, check changes)
- **Phase 2**: Pattern Analysis (find working examples, compare differences)
- **Phase 3**: Hypothesis and Testing (form theory, test minimally)
- **Phase 4**: Implementation (create test, fix once, verify)
**Key rule**: Complete each phase before proceeding. No fixes without Phase 1.
### 2. Root Cause Tracing
Trace bugs backward through call stack to find original trigger. When error appears deep in execution, trace backward level-by-level until finding source where invalid data originated. Fix at source, not at symptom.
Includes `scripts/find-polluter.sh` for bisecting test pollution.
### 3. Defense-in-Depth
Validate at every layer data passes through. Make bugs impossible.
**Four layers**: Entry validation → Business logic → Environment guards → Debug instrumentation
### 4. Verification
Run verification commands and confirm output before claiming success.
**Iron law**: NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
Run the command. Read the output. Then claim the result.
## Common Use Cases
### Fixing Test Failures
"This test is failing with error X. Use the debugging skill to investigate the root cause before proposing a fix."
### Performance Debugging
"API endpoint is slow. Use debugging skill to trace the performance bottleneck systematically."
### Production Bug Investigation
"Users report feature X breaking. Use debugging skill to reproduce and investigate root cause."
## Red Flags
Stop and follow process if thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see if it works"
- "It's probably X, let me fix that"
- "Should work now" / "Seems fixed"
- "Tests pass, we're done"
**All mean**: Return to systematic process.
## Pro Tips
- Always complete Phase 1 investigation before attempting any fix
- Trace errors backward to their origin, not just where they surface
- Add validation at multiple layers to prevent recurrence
- **Not activating?** Say: "Use debugging skill to investigate this systematically"
## Related Skills
- [Problem Solving](/docs/skills/tools/problem-solving) - Structured approach to complex problems
- [Sequential Thinking](/docs/skills/tools/sequential-thinking) - Step-by-step reasoning
---
## Key Takeaway
Always investigate root cause before fixing, validate at every layer, and verify with fresh evidence before claiming success.
---
# Google ADK Python Skill
Section: docs
Category: skills/ai
URL: https://docs.claudekit.cc/docs/ai/google-adk-python
# Google ADK Python Skill
Code-first AI agent framework with tool integration, multi-agent coordination, and workflow orchestration.
## When to Use
- Building multi-agent systems with hierarchical coordination (researcher → writer → editor)
- Creating workflow pipelines (sequential/parallel execution, loop patterns)
- Integrating LLMs with tools (Google Search, Code Execution, custom functions)
- Deploying production agents to Vertex AI, Cloud Run, or custom infrastructure
## Quick Start
```bash
pip install google-adk
```
```python
from google.adk.agents import LlmAgent
from google.adk.tools import google_search
# Single agent with tools
agent = LlmAgent(
name="search_assistant",
model="gemini-2.5-flash",
instruction="You are a helpful assistant that searches the web.",
tools=[google_search]
)
# Multi-agent system
researcher = LlmAgent(name="Researcher", tools=[google_search])
writer = LlmAgent(name="Writer")
coordinator = LlmAgent(
name="Coordinator",
sub_agents=[researcher, writer]
)
```
## Common Use Cases
### Research Pipeline
**Who**: Content team lead
```
"Build a research assistant that searches tech news, summarizes findings,
and generates a weekly report. Need human approval before publishing."
```
### Code Analysis System
**Who**: Engineering manager
```
"Create an agent that reviews PR code, runs tests, checks documentation,
and suggests improvements. Parallel execution for speed."
```
### Customer Support Router
**Who**: Support operations lead
```
"Build a support agent that classifies tickets, searches knowledge base,
drafts responses, and escalates complex issues to specialists."
```
### Data Processing Workflow
**Who**: Analytics engineer
```
"Create a sequential pipeline: fetch data from APIs, clean it,
run analysis, generate visualizations. Loop through multiple datasets."
```
## Key Differences
| Feature | LlmAgent | Workflow Agents | BaseAgent |
|---------|----------|-----------------|-----------|
| **Behavior** | Dynamic routing | Predictable flow | Custom logic |
| **Use Case** | Adaptive tasks | Fixed pipelines | Specialized needs |
| **Sub-agents** | Yes (delegation) | Yes (orchestration) | Customizable |
| **Tools** | Built-in + custom | Inherited from agents | Manual setup |
**Workflow Types**:
- **SequentialAgent**: Execute agents in order (research → summarize → write)
- **ParallelAgent**: Run concurrently (web + papers + experts search)
- **LoopAgent**: Repeat with iteration logic (process each dataset)
## Quick Reference
### Agent Creation
```python
# Basic agent
agent = LlmAgent(
name="agent_name",
model="gemini-2.5-flash",
instruction="Your instructions here",
description="Agent description",
tools=[tool1, tool2]
)
# Multi-agent coordinator
coordinator = LlmAgent(
name="coordinator",
model="gemini-2.5-flash",
sub_agents=[agent1, agent2, agent3]
)
```
### Custom Tools
```python
from google.adk.tools import Tool
def calculate_sum(a: int, b: int) -> int:
"""Calculate sum of two numbers."""
return a + b
sum_tool = Tool.from_function(calculate_sum)
```
### Workflows
```python
from google.adk.agents import SequentialAgent, ParallelAgent, LoopAgent
# Sequential
seq = SequentialAgent(name="pipeline", agents=[a1, a2, a3])
# Parallel
par = ParallelAgent(name="concurrent", agents=[a1, a2, a3])
# Loop
loop = LoopAgent(name="iterator", agent=processor)
```
### Human-in-the-Loop
```python
agent = LlmAgent(
name="careful_agent",
tools=[google_search],
tool_confirmation=True # Requires approval
)
```
### Supported Models
- `gemini-2.5-flash` (recommended for speed)
- `gemini-2.5-pro` (complex tasks)
- `gemini-1.5-flash`, `gemini-1.5-pro`
- Model-agnostic (supports other LLM providers)
## Pro Tips
- **Not activating?** Say: "Use google-adk-python skill to build a multi-agent research system"
- Use SequentialAgent for dependent tasks, ParallelAgent for independent ones
- Custom tools = any Python function with type hints + docstring
- Enable `tool_confirmation=True` for sensitive operations (data deletion, API calls)
- Hierarchical structure: coordinator → specialized agents → tools
- Deploy to Cloud Run for containerized hosting, Vertex AI for managed scaling
- Development version: `pip install git+https://github.com/google/adk-python.git@main`
## Related Skills
- [AI Multimodal](/docs/skills/ai/ai-multimodal) - Gemini API integration
- [Planning](/docs/skills/tools/planning) - Agent workflow design
- [Backend Development](/docs/skills/backend/backend-development) - API integration
## Key Takeaway
Google ADK enables code-first AI agent development with tool integration, multi-agent coordination, and workflow orchestration. Optimized for Gemini, deployable to Cloud Run/Vertex AI, model-agnostic for flexibility.
---
# AI Multimodal Skill
Section: docs
Category: skills/ai
URL: https://docs.claudekit.cc/docs/ai/ai-multimodal
# AI Multimodal Skill
Process audio, images, videos, documents, and generate images/videos using Google Gemini's multimodal API. Supports analysis up to 9.5 hours of audio, 6 hours of video, and 1000-page PDFs with context windows up to 2M tokens.
## When to Use
- Analyzing audio files (transcription, summarization, music analysis)
- Understanding images (captioning, OCR, object detection, design extraction)
- Processing videos (scene detection, Q&A, temporal analysis, YouTube URLs)
- Extracting from documents (PDF tables, forms, charts, diagrams)
- Generating images (text-to-image with Imagen 4)
- Generating videos (text-to-video with Veo 3, 8-second clips with audio)
## Setup
```bash
export GEMINI_API_KEY="your-key" # Get from https://aistudio.google.com/apikey
pip install google-genai python-dotenv pillow
```
Verify setup:
```bash
python scripts/check_setup.py
```
## Quick Start
### Analyze Media
```bash
# Using Gemini CLI (if available)
echo "Describe this image" | gemini -y -m gemini-2.5-flash
# Using scripts
python scripts/gemini_batch_process.py --files image.png --task analyze
python scripts/gemini_batch_process.py --files audio.mp3 --task transcribe
python scripts/gemini_batch_process.py --files document.pdf --task extract
```
### Generate Content
```bash
# Generate image with Imagen 4
python scripts/gemini_batch_process.py --task generate --prompt "A futuristic city at sunset"
# Generate video with Veo 3
python scripts/gemini_batch_process.py --task generate-video --prompt "Ocean waves at golden hour"
```
### Stdin Support
```bash
# Pipe files directly (auto-detects PNG/JPG/PDF/WAV/MP3)
cat image.png | python scripts/gemini_batch_process.py --task analyze --prompt "Describe this"
```
## Models
| Task | Model | Notes |
|------|-------|-------|
| Analysis | `gemini-2.5-flash` | Recommended for speed |
| Analysis | `gemini-2.5-pro` | Advanced reasoning |
| Image Gen | `imagen-4.0-generate-001` | Standard quality |
| Image Gen | `imagen-4.0-ultra-generate-001` | Best quality |
| Image Gen | `imagen-4.0-fast-generate-001` | Fastest |
| Video Gen | `veo-3.1-generate-preview` | 8s clips with audio |
## Scripts
| Script | Purpose |
|--------|---------|
| `gemini_batch_process.py` | CLI orchestrator for transcribe/analyze/extract/generate tasks |
| `media_optimizer.py` | Compress/resize media to stay within Gemini limits |
| `document_converter.py` | Convert PDFs/images to markdown |
| `check_setup.py` | Verify API key and dependencies |
## Limits
| Format | Limits |
|--------|--------|
| Audio | WAV/MP3/AAC, up to 9.5 hours |
| Images | PNG/JPEG/WEBP, up to 3600 images |
| Video | MP4/MOV, up to 6 hours |
| PDF | Up to 1000 pages |
| Size | 20MB inline, 2GB via File API |
## Use Cases
### Design Extraction
Extract design guidelines from screenshots:
```bash
python scripts/gemini_batch_process.py \
--files screenshot.png \
--task analyze \
--prompt "Extract: colors (hex), typography, spacing, layout patterns"
```
### Video Transcription
```bash
python scripts/gemini_batch_process.py \
--files meeting.mp4 \
--task transcribe \
--prompt "Include speaker labels and timestamps"
```
### Batch Processing
```bash
python scripts/gemini_batch_process.py \
--files images/*.png \
--task analyze \
--prompt "Describe each image"
```
## Integration with ClaudeKit
The ai-multimodal skill integrates with:
- **frontend-design**: Extract design guidelines from screenshots before implementation
- **media-processing**: Optimize media files before Gemini analysis
- **document-skills**: Convert extracted content to structured formats
## Best Practices
1. **Use appropriate models**: `gemini-2.5-flash` for speed, `gemini-2.5-pro` for complex analysis
2. **Optimize media first**: Use `media_optimizer.py` for large files
3. **Batch when possible**: Process multiple files in one call
4. **Structure prompts**: Be specific about output format (JSON, markdown, etc.)
## Resources
- [Gemini API Docs](https://ai.google.dev/gemini-api/docs/)
- [Pricing](https://ai.google.dev/pricing)
- [Google AI Studio](https://aistudio.google.com/)
---
## Key Takeaway
The AI Multimodal skill provides comprehensive multimedia processing through Google Gemini, handling everything from audio transcription to AI-generated images and videos with support for massive context windows.
---
# Gemini Vision Skill
Section: docs
Category: skills/ai
URL: https://docs.claudekit.cc/docs/ai/gemini-vision
# Gemini Vision Skill
This skill has been merged into **AI Multimodal** which provides comprehensive multimedia capabilities.
## Redirecting to AI Multimodal
The [AI Multimodal skill](/docs/skills/ai/ai-multimodal) now covers:
- **Image analysis** - Captioning, OCR, object detection, visual Q&A, segmentation
- **Audio processing** - Transcription, summarization, up to 9.5 hours
- **Video understanding** - Scene detection, temporal analysis, up to 6 hours
- **Document extraction** - PDF tables, forms, charts, diagrams
- **Image generation** - Text-to-image with Imagen 4
- **Video generation** - Text-to-video with Veo 3
## Quick Examples
```
"Analyze this product image and extract name, color, condition"
"Extract text from invoice.jpg and return as JSON"
"Compare these before/after photos and list differences"
"Detect all objects in image with bounding boxes"
```
→ **Go to [AI Multimodal](/docs/skills/ai/ai-multimodal)** for full documentation.
---
## Key Takeaway
Use [AI Multimodal](/docs/skills/ai/ai-multimodal) for all Gemini-powered image, audio, video, and document processing.
---
# Canvas Design Skill
Section: docs
Category: skills/ai
URL: https://docs.claudekit.cc/docs/ai/canvas-design
# canvas-design
Create museum-quality visual art and designs in PNG/PDF formats through sophisticated design philosophies and expert craftsmanship.
## Just Ask Claude To...
```
Create a poster for a jazz festival with brutalist typography
```
```
Design an Instagram post (1080x1080) announcing our product launch with chromatic language aesthetic
```
```
Make a landing page hero section with geometric silence philosophy - gradient background, bold headline, CTA
```
```
Create a presentation title slide using organic systems design philosophy
```
```
Design a promotional flyer for our event with concrete poetry aesthetic - include date, location, QR code
```
```
Create an email newsletter header showing this month's theme using analog meditation style
```
## What You'll Get
- High-quality PNG or PDF files ready for web, print, or social media
- Original visual designs with sophisticated aesthetic philosophies
- Expert-level craftsmanship that looks professionally labored over
- 90% visual design, 10% essential text - design communicates, not paragraphs
- Museum/magazine-quality compositions with perfect spacing and alignment
## Supported Formats
**Outputs:** PNG (web/social), PDF (print/presentations, multi-page support)
**Common sizes:** Hero sections (1920x1080), Instagram (1080x1080), Facebook (1200x630), Twitter (1600x900), Posters (24x36"), Flyers (8.5x11")
## Common Use Cases
### Landing Pages
Design hero sections, feature showcases, or call-to-action sections with modern aesthetics.
```
Create hero section for SaaS product with gradient background, product screenshot, value proposition, CTA button
```
### Social Media Graphics
Instagram posts, Facebook covers, Twitter headers with platform-specific dimensions.
```
Design Instagram carousel (3 slides, 1080x1080) showing our product features with chromatic language style
```
### Marketing Materials
Posters, flyers, promotional graphics for events or campaigns.
```
Create event poster with concrete poetry aesthetic - bold typography, event details, registration QR code
```
### Presentations
Title slides, section dividers, visual content slides.
```
Design investor pitch title slide with geometric silence philosophy and company branding
```
### Email Newsletters
Headers, banners, visual content for email campaigns.
```
Create email header (600x200) for monthly newsletter with analog meditation aesthetic
```
## Design Philosophies
The skill uses sophisticated aesthetic movements to guide visual creation:
- **Concrete Poetry**: Monumental form, bold geometry, sculptural typography
- **Chromatic Language**: Color as primary information system, geometric precision
- **Analog Meditation**: Quiet contemplation through texture and breathing room
- **Organic Systems**: Natural clustering, modular growth patterns
- **Geometric Silence**: Pure order, grid-based precision, dramatic negative space
Specify a philosophy in your prompt or let Claude choose based on your needs.
## Pro Tips
- Be specific about dimensions for target platform (web, social, print)
- Mention brand colors if consistency matters: "Use #3B82F6 as primary color"
- Specify DPI for print: "Export as PNG at 300 DPI"
- Request multi-page PDFs for presentations or coffee-table book style collections
- Text is always minimal - the design communicates visually, not through paragraphs
- **Not activating?** Say: "Use canvas-design skill to create..."
## Related Skills
- [frontend-design](/docs/skills/frontend/frontend-design) - Build web interfaces and components
- [ui-styling](/docs/skills/frontend/ui-styling) - Style existing web elements with Tailwind/CSS
---
## Key Takeaway
canvas-design creates museum-quality visual art and marketing materials through sophisticated design philosophies - expect expert craftsmanship that looks like it took countless hours to create.
---
# Better Auth Skill
Section: docs
Category: skills/auth
URL: https://docs.claudekit.cc/docs/auth/better-auth
# Better Auth Skill
Production-ready authentication for any TypeScript framework: Next.js, Nuxt, SvelteKit, Remix, Astro, Hono, Express.
## When to Use
- Adding authentication to TypeScript/JavaScript apps
- Email/password or social OAuth login
- 2FA, passkeys, magic links
- Multi-tenant apps with organizations
- Session management and protected routes
## Key Capabilities
| Feature | Built-in | Plugin |
|---------|----------|--------|
| Email/Password | ✅ | - |
| OAuth (GitHub, Google, etc.) | ✅ | - |
| Email Verification | ✅ | - |
| Password Reset | ✅ | - |
| Rate Limiting | ✅ | - |
| Two-Factor (TOTP) | - | `twoFactor` |
| Passkeys/WebAuthn | - | `passkey` |
| Magic Links | - | `magicLink` |
| Organizations | - | `organization` |
**Frameworks**: Next.js, Nuxt, SvelteKit, Remix, Astro, Hono, Express
**Databases**: PostgreSQL, MySQL, SQLite, MongoDB (via Drizzle, Prisma, Kysely)
## Common Use Cases
### SaaS MVP Authentication
**Who**: Solo founder building first product
```
"Add authentication to my Next.js app with email/password signup,
GitHub OAuth, and PostgreSQL with Drizzle. Include email verification."
```
### Multi-Tenant Platform
**Who**: Team building B2B SaaS
```
"Set up Better Auth with organization support for multi-tenant app.
Need team invitations, role-based permissions, and admin dashboard."
```
### Secure Enterprise App
**Who**: Developer at security-conscious company
```
"Implement Better Auth with 2FA requirement, passkey support,
rate limiting, and audit logging. PostgreSQL backend."
```
### Passwordless Experience
**Who**: UX-focused startup
```
"Add magic link authentication to my SvelteKit app.
No passwords, just email-based login with session management."
```
### Quick Prototype
**Who**: Developer testing an idea
```
"Set up basic Better Auth with SQLite for local development.
Just email/password, minimal config."
```
## Quick Start
```bash
npm install better-auth
```
```env
BETTER_AUTH_SECRET=your-32-char-secret
BETTER_AUTH_URL=http://localhost:3000
```
```bash
npx @better-auth/cli generate # Generate schema
npx @better-auth/cli migrate # Apply migrations
```
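Once the schema exists, the server-side setup is a single `betterAuth()` call. A minimal sketch follows (option names per the Better Auth docs; the GitHub provider is an illustrative choice, and the required database adapter is omitted):
```javascript
// auth.js - minimal sketch; a database adapter (Drizzle, Prisma, Kysely, ...)
// must also be configured for persistence and is omitted here.
import { betterAuth } from "better-auth";

export const auth = betterAuth({
  emailAndPassword: { enabled: true },
  socialProviders: {
    github: {
      clientId: process.env.GITHUB_CLIENT_ID,
      clientSecret: process.env.GITHUB_CLIENT_SECRET,
    },
  },
});
```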
## Auth Method Selection
| Method | Best For |
|--------|----------|
| Email/Password | Traditional web apps, full credential control |
| OAuth | Quick signup, social profile access |
| Passkeys | Passwordless, security-first apps |
| Magic Links | Email-first users, temporary access |
## Pro Tips
- **Run migrations after adding plugins**: `npx @better-auth/cli generate && migrate`
- **Use environment variables** for secrets and OAuth credentials
- **Enable rate limiting** in production to prevent abuse
- **Combine methods** for user flexibility (OAuth + email as backup)
- **Not activating?** Say: "Use the better-auth skill to..."
## Related Skills
- [Databases](/docs/skills/backend/databases) - PostgreSQL/MongoDB setup
- [Next.js](/docs/skills/frontend/nextjs) - Framework integration
- [Backend Development](/docs/skills/backend/backend-development) - API patterns
---
## Key Takeaway
Use Better Auth for production-ready authentication in any TypeScript framework with built-in email/password, OAuth, and extensible plugin system for 2FA, passkeys, and organizations.
---
# Backend Development Skill
Section: docs
Category: skills/backend
URL: https://docs.claudekit.cc/docs/backend/backend-development
# Backend Development Skill
Build production-ready backend systems with modern technologies, security best practices, and proven scalability patterns.
## When to Use
- Designing RESTful, GraphQL, or gRPC APIs
- Building authentication/authorization systems
- Optimizing database queries and schemas
- Implementing caching and performance optimization
- OWASP Top 10 security mitigation
- Designing scalable microservices
- Testing strategies (unit, integration, E2E)
- CI/CD pipelines and deployment
## Quick Start
```typescript
// NestJS API with security basics
import { hash, verify } from 'argon2';
import { Body, Controller, Get, Post, UseGuards } from '@nestjs/common';
@Controller('users')
export class UserController {
@Post('register')
async register(@Body() dto: CreateUserDto) {
const hashedPassword = await hash(dto.password);
return this.userService.create({ ...dto, password: hashedPassword });
}
@Get('profile')
@UseGuards(JwtAuthGuard)
async getProfile(@CurrentUser() user: User) {
return user;
}
}
```
## Common Use Cases
### RESTful API with Authentication
**Who**: Startup building MVP backend
```
"Build a REST API with user registration, JWT authentication, and protected routes.
Use PostgreSQL with Prisma ORM. Add rate limiting and input validation."
```
### Microservices Architecture
**Who**: Enterprise team scaling monolith
```
"Design microservices for orders, payments, and inventory.
Use gRPC for internal communication, Kafka for events, Redis for caching."
```
### Performance Optimization
**Who**: SaaS product with scaling issues
```
"Database queries are slow. Add Redis caching, optimize N+1 queries,
create proper indexes, and implement connection pooling."
```
### Security Hardening
**Who**: Fintech ensuring compliance
```
"Audit backend for OWASP Top 10. Implement Argon2id passwords,
parameterized queries, OAuth 2.1, rate limiting, and security headers."
```
### Testing Strategy
**Who**: Team with production bugs
```
"Set up testing pyramid: 70% unit tests (Vitest), 20% integration (API contracts),
10% E2E (critical paths). Add CI/CD test automation."
```
## Key Differences
| Language | Best For | Performance | Ecosystem |
|----------|----------|-------------|-----------|
| **Node.js** | Full-stack, rapid dev | Good (async) | Largest (npm) |
| **Python** | Data/ML integration | Moderate | Rich (PyPI) |
| **Go** | Concurrency, cloud | Excellent | Growing |
| **Rust** | Max performance | Best-in-class | Specialized |
| Database | Use Case | Transactions | Schema |
|----------|----------|--------------|--------|
| **PostgreSQL** | ACID-critical | Strong | Rigid |
| **MongoDB** | Flexible data | Limited | Schema-less |
| **Redis** | Caching, sessions | None | Key-value |
## Quick Reference
### Security Essentials
```typescript
// Argon2id (not bcrypt)
import { hash, verify } from 'argon2';
const hashed = await hash(password);
// Parameterized queries (98% SQL injection reduction)
db.query('SELECT * FROM users WHERE id = $1', [userId]);
// Rate limiting
@UseGuards(ThrottlerGuard)
@Throttle({ default: { limit: 10, ttl: 60000 } })
```
### Caching Pattern
```typescript
// Redis caching (90% DB load reduction)
const cached = await redis.get(`user:${id}`);
if (cached) return JSON.parse(cached);
const user = await db.findUser(id);
await redis.setex(`user:${id}`, 3600, JSON.stringify(user));
```
### Testing Commands
```bash
# Vitest (50% faster than Jest)
npm install -D vitest
npx vitest run # Run tests once
npx vitest watch # Watch mode
npx vitest --coverage # With coverage
```
### Docker Deployment
```dockerfile
# Multi-stage build (50-80% size reduction)
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/main.js"]
```
## Pro Tips
- **70-20-10 testing pyramid**: 70% unit, 20% integration, 10% E2E
- **Database indexing**: 30% I/O reduction on high-traffic columns
- **Connection pooling**: Prevent database connection exhaustion
- **Feature flags**: 90% fewer deployment failures
- **Blue-green deployments**: Zero-downtime releases
- **OpenTelemetry**: Distributed tracing across microservices
- **Not activating?** Say: "Use the backend-development skill to..."
## Related Skills
- [Databases](/docs/skills/backend/databases) - PostgreSQL, MongoDB deep-dive
- [DevOps](/docs/skills/backend/devops) - Docker, Kubernetes, cloud deployment
- [Better Auth](/docs/skills/auth/better-auth) - Authentication implementation
---
## Key Takeaway
Backend development in 2025 prioritizes security (Argon2id, parameterized queries), performance (Redis, indexing), and reliability (testing pyramid, feature flags) with modern frameworks like NestJS, FastAPI, and Gin.
---
# Databases
Section: docs
Category: skills/backend
URL: https://docs.claudekit.cc/docs/backend/databases
# Databases
Expert guidance for MongoDB (document-oriented) and PostgreSQL (relational) databases.
## When to Use
- Designing database schemas and data models
- Writing queries (SQL or MongoDB query language)
- Building aggregation pipelines or complex joins
- Optimizing indexes and query performance
- Implementing database migrations
- Setting up replication, sharding, or clustering
- Configuring backups and disaster recovery
- Managing database users and permissions
- Analyzing slow queries and performance issues
## Quick Start
### MongoDB
```bash
# Atlas (Cloud) - Recommended
# 1. Sign up at mongodb.com/atlas
# 2. Create M0 free cluster
# 3. Get connection string
# Connect
mongosh "mongodb+srv://cluster.mongodb.net/mydb"
# Basic operations
db.users.insertOne({ name: "Alice", age: 30 })
db.users.find({ age: { $gte: 18 } })
db.users.updateOne({ name: "Alice" }, { $set: { age: 31 } })
```
### PostgreSQL
```bash
# Install (Ubuntu/Debian)
sudo apt-get install postgresql postgresql-contrib
sudo systemctl start postgresql
# Connect
psql -U postgres -d mydb
# Basic operations
CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT, age INT);
INSERT INTO users (name, age) VALUES ('Alice', 30);
SELECT * FROM users WHERE age >= 18;
UPDATE users SET age = 31 WHERE name = 'Alice';
```
## Common Use Cases
### Schema Design for E-commerce
"Design a MongoDB schema for an e-commerce platform with products, users, orders, and reviews"
"Create a PostgreSQL schema for an e-commerce platform with proper normalization and foreign keys"
### Query Optimization
"Optimize this slow MongoDB aggregation pipeline that processes user analytics data"
"Analyze and improve this PostgreSQL query that joins 5 tables and takes 10 seconds"
### Database Migration
"Generate a migration to add a new 'status' field to all documents in MongoDB users collection"
"Create a PostgreSQL migration to add a composite index on orders(user_id, created_at)"
### Performance Analysis
"Analyze MongoDB slow query log and recommend indexes for collections with high read latency"
"Use EXPLAIN ANALYZE to diagnose why this PostgreSQL query is doing sequential scans"
## Key Differences
| Aspect | MongoDB | PostgreSQL |
|--------|---------|------------|
| **Data Model** | Document (JSON/BSON) | Relational (Tables/Rows) |
| **Schema** | Flexible, dynamic | Strict, predefined |
| **Query Language** | MongoDB Query Language | SQL |
| **Joins** | $lookup (limited) | Native, optimized |
| **Transactions** | Multi-document (4.0+) | Native ACID |
| **Scaling** | Horizontal (sharding) | Vertical (primarily) |
| **Best For** | Content management, IoT, real-time analytics | Financial systems, e-commerce, ERP |
### Choose MongoDB When
- Schema flexibility: Frequent structure changes, heterogeneous data
- Document-centric: Natural JSON/BSON data model
- Horizontal scaling: Need to shard across multiple servers
- High write throughput: IoT, logging, real-time analytics
- Nested/hierarchical data: Embedded documents preferred
### Choose PostgreSQL When
- Strong consistency: ACID transactions critical
- Complex relationships: Many-to-many joins, referential integrity
- SQL requirement: Team expertise, reporting tools, BI systems
- Data integrity: Strict schema validation, constraints
- Complex queries: Window functions, CTEs, analytical workloads
## Quick Reference
### Indexing
```javascript
// MongoDB
db.users.createIndex({ email: 1 })
db.users.createIndex({ status: 1, createdAt: -1 })
```
```sql
-- PostgreSQL
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_status_created ON users(status, created_at DESC);
```
### Common Operations Comparison
| Operation | MongoDB | PostgreSQL |
|-----------|---------|------------|
| **Insert** | `insertOne({ name: "Bob" })` | `INSERT INTO users (name) VALUES ('Bob')` |
| **Query** | `find({ age: { $gte: 18 } })` | `SELECT * FROM users WHERE age >= 18` |
| **Update** | `updateOne({}, { $set: { age: 25 } })` | `UPDATE users SET age = 25 WHERE ...` |
| **Delete** | `deleteOne({ name: "Bob" })` | `DELETE FROM users WHERE name = 'Bob'` |
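To make the joins difference above concrete, here is a hedged sketch of the same lookup in both systems (the `mongo` and `pg` client instances and the collection/table names are assumptions):
```typescript
// MongoDB: application-level join via $lookup in an aggregation pipeline
const ordersWithUsers = await mongo
  .collection('orders')
  .aggregate([
    { $lookup: { from: 'users', localField: 'userId', foreignField: '_id', as: 'user' } },
    { $unwind: '$user' },
  ])
  .toArray();

// PostgreSQL: native join, optimized by the query planner
const { rows } = await pg.query(
  'SELECT o.*, u.name FROM orders o JOIN users u ON u.id = o.user_id'
);
```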
## Pro Tips
- Specify database context upfront: "Using MongoDB" or "Using PostgreSQL"
- For complex queries, provide sample data structure
- Mention performance requirements: "Query needs to run under 100ms"
- Include current index strategy when optimizing
- **Not activating?** Say: "Use databases skill to design a MongoDB schema for..."
## Related Skills
- [Backend Development](/docs/skills/backend/backend-development) - Full backend implementation
- [DevOps](/docs/skills/backend/devops) - Database deployment and management
---
## Key Takeaway
Choose MongoDB for flexible schemas and horizontal scaling, PostgreSQL for ACID transactions and complex relationships; both support JSON, full-text search, and geospatial queries.
---
# DevOps Skill
Section: docs
Category: skills/backend
URL: https://docs.claudekit.cc/docs/backend/devops
# DevOps Skill
Deploy and manage cloud infrastructure across Cloudflare edge, Docker containers, and Google Cloud Platform.
## When to Use
- Deploying serverless to Cloudflare Workers
- Containerizing apps with Docker
- Managing GCP infrastructure
- Setting up CI/CD pipelines
- Multi-region deployments
## Platform Selection
| Need | Choose |
|------|--------|
| Sub-50ms latency globally | Cloudflare Workers |
| Large file storage (zero egress) | Cloudflare R2 |
| SQL database (global reads) | Cloudflare D1 |
| Containerized workloads | Docker + Cloud Run/GKE |
| Enterprise Kubernetes | GKE |
| Static site + API | Cloudflare Pages |
| WebSocket/real-time | Durable Objects |
| ML/AI pipelines | GCP Vertex AI |
## Common Use Cases
### Edge-First API
**Who**: Startup needing global low-latency
```
"Deploy my API to Cloudflare Workers with R2 for file storage
and D1 for user data. Need sub-50ms response times globally."
```
### Containerized Microservices
**Who**: Team migrating monolith to microservices
```
"Containerize our Node.js services with Docker multi-stage builds.
Deploy to Cloud Run with auto-scaling. PostgreSQL on Cloud SQL."
```
### Full-Stack Deployment
**Who**: Solo developer shipping fast
```
"Deploy my Next.js app to Cloudflare Pages with Workers for API routes.
Use R2 for uploads, D1 for database."
```
### Enterprise Kubernetes
**Who**: Platform team at scale
```
"Set up GKE cluster with auto-scaling node pools.
Configure Ingress, SSL, monitoring with Cloud Operations."
```
### Local Dev Environment
**Who**: Developer onboarding to project
```
"Create Docker Compose setup with app, PostgreSQL, and Redis.
Match production config for local development."
```
## Quick Start
### Cloudflare Workers
```bash
npm install -g wrangler
wrangler init my-worker
wrangler deploy
```
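For orientation, a minimal Worker in module syntax might look like the sketch below; the `DB` binding (D1) and the query are assumptions you would configure in `wrangler.toml`:
```typescript
// src/index.ts - minimal Worker sketch (D1Database type comes from @cloudflare/workers-types)
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    const { results } = await env.DB.prepare('SELECT * FROM users LIMIT 10').all();
    return Response.json(results);
  },
};
```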
### Docker
```bash
docker build -t myapp .
docker run -p 3000:3000 myapp
```
### Google Cloud
```bash
gcloud run deploy my-service \
--image gcr.io/project/image \
--region us-central1
```
## Key Capabilities
| Platform | Products |
|----------|----------|
| **Cloudflare** | Workers, R2, D1, KV, Pages, Durable Objects |
| **Docker** | Multi-stage builds, Compose, cross-platform |
| **GCP** | Compute Engine, GKE, Cloud Run, Cloud SQL |
## Pro Tips
- **Zero egress costs**: Use Cloudflare R2 instead of S3 for large file serving
- **Multi-stage builds**: Reduce Docker image size by 50-80%
- **Local testing**: `wrangler dev` for Workers, Docker Compose for containers
- **Run as non-root**: Always use `USER node` in production Dockerfiles
- **Not activating?** Say: "Use the devops skill to..."
## Related Skills
- [Docker](/docs/skills/backend/docker) - Container deep-dive
- [Databases](/docs/skills/backend/databases) - PostgreSQL, MongoDB setup
- [Backend Development](/docs/skills/backend/backend-development) - API patterns
---
## Key Takeaway
Choose Cloudflare for edge-first with zero egress, Docker for containerization, and GCP for enterprise scale. Combine platforms for optimal architecture.
---
# docker
Section: docs
Category: skills/backend
URL: https://docs.claudekit.cc/docs/backend/docker
# docker
Containerize applications with production-ready Dockerfiles, multi-stage builds, and Docker Compose orchestration.
## When to Use
- **Containerize apps**: Create optimized Dockerfiles for any language/framework
- **Multi-container setups**: Orchestrate services with Docker Compose
- **Dev/prod parity**: Consistent environments across local/staging/production
- **CI/CD integration**: Build, test, and deploy containerized workflows
## Quick Start
```bash
# Build multi-stage Dockerfile
docker build -t myapp:1.0 .
# Run with Docker Compose
docker compose up -d
# View logs
docker compose logs -f
```
## Common Use Cases
### 1. Containerize Node.js App
**Who**: Full-stack dev deploying Next.js app to VPS
**Prompt**: "Use docker to create production Dockerfile for Next.js 14 with multi-stage build, Node 20 Alpine, non-root user, and health checks"
### 2. Multi-Service Stack
**Who**: Backend dev setting up API + database locally
**Prompt**: "Use docker to create Docker Compose with FastAPI, PostgreSQL, Redis, and Nginx reverse proxy"
### 3. Optimize Existing Image
**Who**: DevOps engineer reducing CI build time
**Prompt**: "Use docker to optimize this Dockerfileβreduce image size, improve layer caching, add security hardening"
### 4. Development Environment
**Who**: Team lead standardizing local dev setup
**Prompt**: "Use docker to create dev Docker Compose with hot reload, volume mounts, seed data, and debug config"
### 5. Production Deployment
**Who**: SRE preparing container for Kubernetes
**Prompt**: "Use docker for production: health checks, resource limits, logging, secrets management, and vulnerability scan"
## Key Differences
| Feature | Development | Production |
|---------|-------------|------------|
| **Base image** | Standard (node:20) | Alpine (node:20-alpine) |
| **Build** | Single-stage | Multi-stage |
| **Volumes** | Bind mounts (hot reload) | Named volumes only |
| **User** | root (convenience) | Non-root (security) |
| **Health checks** | Optional | Required |
| **Resource limits** | None | CPU/memory caps |
## Quick Reference
### Essential Commands
```bash
# Build & run
docker build -t app:1.0 .
docker run -d -p 8080:3000 app:1.0
# Compose lifecycle
docker compose up -d # Start
docker compose logs -f web # Logs
docker compose restart web # Restart
docker compose down --volumes # Clean up
# Debugging
docker exec -it container sh
docker logs -f container
docker inspect container
# Cleanup
docker system prune -a
docker volume prune
```
### Multi-Stage Template
```dockerfile
# Build stage
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
### Docker Compose Template
```yaml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://user:pass@db:5432/app
  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:
```
## Pro Tips
- **Multi-stage builds**: Reduce image size by 60-80% (keeps build tools out of production)
- **Layer caching**: Copy `package*.json` before source code for faster rebuilds
- **Security**: Always run as non-root user, pin specific versions, scan with `docker scout cves`
- **.dockerignore**: Exclude `node_modules`, `.git`, `.env` to speed builds
- **Health checks**: Add `/health` endpoint for container orchestrators (example endpoint after this list)
- **Resource limits**: Set memory/CPU caps in production to prevent runaway containers
- **Alpine images**: Use `-alpine` variants (5-10x smaller than standard images)
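A minimal `/health` endpoint sketch, assuming an Express-based service (adapt to your framework):
```typescript
// health.ts - simple liveness endpoint for orchestrators (Docker HEALTHCHECK, k8s probes)
import express from 'express';

const app = express();

app.get('/health', (_req, res) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});

app.listen(3000);
```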
**Not activating?** Say: "Use docker skill to containerize my app"
## Related Skills
- [/docs/skills/backend/devops](/docs/skills/backend/devops) - Cloudflare Workers, GCP, deployment strategies
- [/docs/skills/backend/databases](/docs/skills/backend/databases) - PostgreSQL, MongoDB containerization
- [/docs/skills/backend/backend-development](/docs/skills/backend/backend-development) - Node.js, Python, Go backends
## Key Takeaway
Docker skill creates production-ready containers with security hardening, optimized builds, and multi-service orchestration. Just describe your stack and get Dockerfiles, Compose configs, and best practices instantly.
---
# postgresql-psql
Section: docs
Category: skills/backend
URL: https://docs.claudekit.cc/docs/backend/postgresql-psql
# postgresql-psql Skill
Master PostgreSQL with psql CLI - query optimization, schema design, and production-ready database administration.
## When to Use
- **Slow queries**: Analyze EXPLAIN plans, add indexes, rewrite inefficient SQL
- **Schema design**: Model relationships, foreign keys, constraints, normalization
- **Performance tuning**: Configure PostgreSQL, connection pooling, vacuum strategies
- **Production ops**: Backups, replication, monitoring, security hardening
## Quick Start
```bash
# Connect to database
psql -U postgres -d myapp
# Essential meta-commands
\d users # Describe table
\di # List indexes
\x on # Expanded output
\timing # Show query time
```
## Common Use Cases
### 1. E-commerce Developer - Optimize Slow Checkout Query
**Who**: Backend dev, 2000ms checkout query killing conversions
**Prompt**:
```
"Use postgresql-psql skill to optimize this checkout query:
SELECT * FROM orders WHERE user_id = 123 AND status = 'pending'
It's taking 2 seconds. Need EXPLAIN analysis and index recommendations."
```
### 2. SaaS Founder - Design Multi-Tenant Schema
**Who**: Solo founder building B2B SaaS, needs tenant isolation
**Prompt**:
```
"Use postgresql-psql skill to design multi-tenant schema:
- Shared tables with tenant_id column
- Row-level security policies
- Indexes for tenant queries
- Migration from single-tenant"
```
### 3. Data Engineer - Fix Bloated Tables
**Who**: Data team seeing 10GB table that should be 2GB
**Prompt**:
```
"Use postgresql-psql skill to fix bloated tables:
- Identify bloat with pg_stat_user_tables
- VACUUM FULL strategy
- Autovacuum tuning
- Prevent future bloat"
```
### 4. DevOps Engineer - Production Replication Setup
**Who**: DevOps setting up HA PostgreSQL for production
**Prompt**:
```
"Use postgresql-psql skill for production replication:
- Streaming replication setup
- Failover automation
- Monitoring lag
- Backup strategy"
```
## Key Differences
| vs MongoDB | Use PostgreSQL When |
|------------|---------------------|
| Flexible schema | Need strict schema validation, ACID transactions |
| Document store | Complex joins, referential integrity critical |
| Horizontal scaling | SQL expertise, existing tools, BI integration |
| Nested data | Financial data, e-commerce, ERP systems |
## Quick Reference
### EXPLAIN Analysis
```sql
-- Basic execution plan
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';
-- Actual runtime (executes query)
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;
-- Show buffer usage
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM products;
```
### Index Types
```sql
-- B-tree (default, most common)
CREATE INDEX idx_email ON users(email);
-- Partial index (filtered)
CREATE INDEX idx_active_users ON users(id) WHERE deleted_at IS NULL;
-- Multi-column (order matters)
CREATE INDEX idx_status_created ON orders(status, created_at DESC);
-- Expression index
CREATE INDEX idx_email_lower ON users(LOWER(email));
```
### Performance Queries
```sql
-- Find slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC LIMIT 10;
-- Unused indexes
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes WHERE idx_scan = 0;
-- Table sizes
SELECT schemaname, tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename))
FROM pg_tables WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
```
### Schema Patterns
```sql
-- One-to-many with index
CREATE TABLE posts (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES users(id),
  created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_posts_user_id ON posts(user_id);

-- Many-to-many junction table
CREATE TABLE post_tags (
  post_id INTEGER REFERENCES posts(id) ON DELETE CASCADE,
  tag_id INTEGER REFERENCES tags(id) ON DELETE CASCADE,
  PRIMARY KEY (post_id, tag_id)
);
```
## Pro Tips
**Not activating?** Say: "Use postgresql-psql skill to [your task]"
**Index strategy**: Index foreign keys, WHERE/JOIN columns, ORDER BY columns first.
**EXPLAIN gotchas**: "Seq Scan" isn't always bad - for small tables it's faster than indexes.
**Connection pooling**: Use PgBouncer in production. Rule-of-thumb pool size: `(CPU cores * 2) + effective disk spindles`.
**Vacuum**: Run `ANALYZE` after bulk inserts. If tables bloat, check autovacuum settings before VACUUM FULL.
**JSON in PostgreSQL**: Use JSONB (binary), not JSON (text). GIN indexes on JSONB for fast queries.
## Related Skills
- [databases](/docs/skills/backend/databases) - MongoDB + PostgreSQL unified guide
- [backend-development](/docs/skills/backend/backend-development) - API integration patterns
- [devops](/docs/skills/backend/devops) - Production deployment, monitoring
## Key Takeaway
PostgreSQL + psql = production-grade relational database. When queries slow down, schema needs redesign, or production needs HA - this skill delivers optimized SQL, proper indexes, and battle-tested patterns. No theory, just working solutions.
---
# Shopify Skill
Section: docs
Category: skills/ecommerce
URL: https://docs.claudekit.cc/docs/ecommerce/shopify
# Shopify Skill
Build production-ready Shopify apps, checkout extensions, admin integrations, and themes with GraphQL APIs and Shopify CLI.
## When to Use
- Building public or custom Shopify apps
- Creating checkout/admin/POS UI extensions
- Developing themes with Liquid templating
- Integrating external services with webhooks
- Managing products, orders, inventory via API
## Key Capabilities
| Feature | Technology | Use Case |
|---------|-----------|----------|
| **App Development** | Shopify CLI, OAuth | Multi-store functionality, merchant tools |
| **GraphQL Admin API** | GraphQL | Product/order/customer management |
| **Checkout Extensions** | React, UI Extensions | Custom checkout fields, validation |
| **Admin Extensions** | React, Polaris | Dashboard widgets, bulk actions |
| **Theme Development** | Liquid, Sections | Custom storefront design |
| **Webhooks** | HMAC Verification | Real-time event handling |
| **Shopify Functions** | JavaScript | Custom discounts, payments, shipping |
| **Metafields** | GraphQL | Store custom data on resources |
**Frameworks**: React (extensions), Remix, Node.js (apps)
**APIs**: GraphQL Admin (recommended), REST Admin (legacy), Storefront API
## Common Use Cases
### Custom Checkout Fields
**Who**: E-commerce store adding gift options
```
"Use shopify skill to create checkout extension with gift message field,
delivery instructions, and save to order metafields. Show in admin order view."
```
### Inventory Management App
**Who**: Multi-location retailer
```
"Build Shopify app that tracks inventory across locations, sends low stock
alerts via email, and auto-creates purchase orders. PostgreSQL backend."
```
### Admin Analytics Dashboard
**Who**: Store owner needing custom reports
```
"Use shopify skill to add admin extension showing top products this month,
revenue by category, and customer lifetime value charts with Polaris."
```
### Product Sync Integration
**Who**: Business with external ERP system
```
"Create Shopify app that syncs products from our REST API every hour,
updates inventory via GraphQL, handles webhooks for order fulfillment."
```
### Custom Discount Logic
**Who**: B2B wholesaler
```
"Use shopify skill to build Shopify Function applying tiered discounts
based on customer tags, order volume, and product collections."
```
## Quick Start
```bash
# Install Shopify CLI
npm install -g @shopify/cli@latest
# Verify
shopify version
```
**Create App:**
```bash
shopify app init
shopify app dev # Local development
shopify app deploy # Production deploy
```
**Create Theme:**
```bash
shopify theme init
shopify theme dev # Preview at localhost:9292
shopify theme push # Deploy to store
```
## Core Patterns
### GraphQL Product Query
```graphql
query {
  products(first: 10) {
    edges {
      node {
        id
        title
        variants(first: 5) {
          edges {
            node {
              id
              price
              inventoryQuantity
            }
          }
        }
      }
    }
  }
}
```
### Checkout Extension
```javascript
import { reactExtension, BlockStack, TextField } from '@shopify/ui-extensions-react/checkout';

export default reactExtension('purchase.checkout.block.render', () => <Extension />);

function Extension() {
  // Example fields; adjust to your extension's needs
  return (
    <BlockStack>
      <TextField label="Gift message" />
    </BlockStack>
  );
}
```
### Handle Webhooks
```javascript
app.post('/webhooks/orders', async (req, res) => {
  const hmac = req.headers['x-shopify-hmac-sha256'];
  const verified = verifyWebhook(req.body, hmac);
  if (verified) {
    const order = req.body;
    // Process order
  }
  res.status(200).send();
});
```
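The `verifyWebhook` helper above is not defined in the snippet; one possible implementation, assuming the raw request body is available and the app secret lives in `SHOPIFY_API_SECRET`:
```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Shopify signs webhooks with HMAC-SHA256 over the raw body, base64-encoded.
function verifyWebhook(rawBody: string | Buffer, hmacHeader: string): boolean {
  const digest = createHmac('sha256', process.env.SHOPIFY_API_SECRET!)
    .update(rawBody)
    .digest('base64');
  const a = Buffer.from(digest);
  const b = Buffer.from(hmacHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}
```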
## Pro Tips
- **Use GraphQL over REST** for new development (better performance, cost-based limits)
- **Request minimal scopes** in OAuth to pass app review faster
- **Implement retry logic** for API calls with exponential backoff (sketch after this list)
- **Cache API responses** to reduce costs and improve speed
- **Test on development stores** (free via Partner Dashboard)
- **Monitor rate limits** via `X-Shopify-Shop-Api-Call-Limit` header
- **Use bulk operations** for processing 1000+ resources
- **Verify webhook signatures** with HMAC to prevent spoofing
- **Not activating?** Say: "Use shopify skill to..."
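A hedged sketch of retry-with-backoff around an API call; the 429 check and delays are illustrative, and Shopify's `Retry-After` header, when present, should take precedence:
```typescript
// Retry a fetch-based API call with exponential backoff on rate limiting (HTTP 429).
async function withRetry(call: () => Promise<Response>, maxAttempts = 5): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await call();
    if (res.status !== 429) return res;
    const retryAfter = Number(res.headers.get('Retry-After')) || 2 ** attempt; // seconds
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
  }
  throw new Error('Shopify API: still rate limited after retries');
}
```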
## Related Skills
- [Backend Development](/docs/skills/backend/backend-development) - API integration patterns
- [Databases](/docs/skills/backend/databases) - PostgreSQL for app data
- [Frontend Development](/docs/skills/frontend/frontend-development) - React extensions
---
## Key Takeaway
Use Shopify skill to build apps, extensions, and themes with GraphQL APIs, Shopify CLI, and React. Handles authentication, webhooks, metafields, and all Shopify platform integrations.
---
# Next.js Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/nextjs
# Next.js Skill
React framework with SSR, SSG, Server Components (RSC), and App Router for building production-grade web applications.
## When to Use
- Building full-stack React apps with server-side rendering
- SEO-critical sites needing pre-rendered or dynamic content
- Applications requiring both static pages and dynamic data
- Projects needing API routes without separate backend
## Quick Start
```bash
# Create new project
npx create-next-app@latest my-app --typescript --tailwind --app
cd my-app && npm run dev
```
**Basic page structure:**
```typescript
// app/page.tsx (Server Component by default)
export default async function Page() {
  const data = await fetch('https://api.example.com/data');
  return <h1>Welcome to Next.js</h1>;
}

// app/components/Counter.tsx (Client Component)
'use client';
import { useState } from 'react';

export default function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}
```
## Common Use Cases
**Frontend developer building e-commerce site:**
"Create Next.js product listing with SSG, dynamic [slug] routes, metadata for SEO, and Image optimization"
**Full-stack dev adding authentication:**
"Implement Next.js middleware for auth protection, server actions for login/logout, and cookie-based sessions"
**Startup founder launching SaaS:**
"Build Next.js dashboard with Server Components for data, Client Components for charts, and API routes for webhooks"
**Marketing team needing blog:**
"Set up Next.js blog with MDX support, generateStaticParams for posts, and automatic sitemap generation"
**Enterprise dev optimizing performance:**
"Add Next.js PPR (Partial Prerendering), streaming with Suspense, and revalidation strategies for high-traffic pages"
## Key Differences
| Feature | Server Component | Client Component | API Route |
|---------|-----------------|------------------|-----------|
| Default | Yes (no directive) | `'use client'` directive | `/app/api/*/route.ts` |
| Runs on | Server only | Server + Client | Server only |
| Can use | Async/await, DB access | Hooks, browser APIs | NextRequest/Response |
| Bundle | Not sent to client | Sent to client | Not sent to client |
| Best for | Data fetching, SEO | Interactivity, state | Backend logic, webhooks |
## Quick Reference
**File-based routing:**
```
app/
├── page.tsx              → /
├── about/page.tsx        → /about
├── blog/[slug]/page.tsx  → /blog/:slug
├── layout.tsx            → Shared layout
├── loading.tsx           → Loading UI
├── error.tsx             → Error boundary
└── not-found.tsx         → 404 page
```
**Data fetching:**
```typescript
// Static (SSG) - build time
const data = await fetch('https://api.example.com/data');
// Revalidate (ISR) - every 60s
const data = await fetch('url', { next: { revalidate: 60 } });
// Dynamic (SSR) - per request
const data = await fetch('url', { cache: 'no-store' });
// Generate static params
export async function generateStaticParams() {
return [{ slug: 'post-1' }, { slug: 'post-2' }];
}
```
**Metadata:**
```typescript
export const metadata = {
title: 'Page Title',
description: 'SEO description',
openGraph: { images: ['/og-image.png'] },
};
```
**API Routes:**
```typescript
// app/api/posts/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function GET(request: NextRequest) {
  return NextResponse.json({ posts: [] });
}
```
**Environment Variables:**
```bash
# .env.local
NEXT_PUBLIC_API_URL=https://api.example.com # Client + Server
DATABASE_URL=postgresql://... # Server only
```
## Pro Tips
**Not activating?** Say: "Use nextjs skill to build this with App Router and Server Components"
- Default to Server Components, add `'use client'` only when you need hooks/interactivity
- Use `next/image` for automatic optimization (WebP, lazy loading, blur placeholders)
- Leverage `generateMetadata` for dynamic SEO instead of React Helmet
- Implement `loading.tsx` and `error.tsx` for better UX (no extra code needed)
- Use Server Actions (`'use server'`) for form mutations without API routes (see the sketch after this list)
- Enable Turbopack in dev with `next dev --turbo` for faster reloads
- Check https://nextjs.org/docs/llms.txt for AI-optimized reference docs
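A minimal Server Action sketch under those assumptions; `savePost` is a hypothetical persistence helper, not a Next.js API:
```typescript
// app/actions.ts
'use server';

import { revalidatePath } from 'next/cache';

// Hypothetical persistence helper - replace with your database client.
async function savePost(title: string) { /* e.g. INSERT via Prisma, Drizzle, pg, ... */ }

export async function createPost(formData: FormData) {
  await savePost(String(formData.get('title')));
  revalidatePath('/blog'); // refresh the cached listing after the mutation
}
```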
## Related Skills
- [Frontend Design](/docs/skills/frontend/frontend-design) - UI/UX implementation
- [Web Frameworks](/docs/skills/frontend/web-frameworks) - Full-stack patterns with Turborepo
- [Backend Development](/docs/skills/backend/backend-development) - API integration patterns
## Key Takeaway
Next.js eliminates the client vs. server tradeoff by making Server Components the default, letting you fetch data directly in your components without APIs, while still supporting client-side interactivity when needed. ClaudeKit agents leverage this to build full-stack apps faster.
---
# Tailwind CSS Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/tailwindcss
# Tailwind CSS Skill
Utility-first CSS framework that enables rapid UI development through composable classes - write zero custom CSS.
## When to Use
- Building responsive layouts with mobile-first breakpoints
- Implementing consistent design systems (colors, spacing, typography)
- Rapid UI prototyping without switching between CSS/HTML
- Dark mode implementation with class-based theming
## Quick Start
**Next.js (includes Tailwind by default):**
```bash
npx create-next-app@latest
# Select "Yes" for Tailwind CSS
```
**Vite:**
```bash
npm create vite@latest my-app
npm install -D tailwindcss @tailwindcss/vite
```
```javascript
// vite.config.ts
import tailwindcss from '@tailwindcss/vite'
export default { plugins: [tailwindcss()] }
```
```css
/* src/index.css */
@import "tailwindcss";
```
## Common Use Cases
### Startup founder building MVP landing page
**Prompt:** "Create a landing page with hero section, feature cards, and pricing table"
**What happens:** ClaudeKit generates mobile-first layout using Tailwind grid/flexbox utilities, consistent spacing scale (p-4, gap-6), responsive breakpoints (md:, lg:), and hover states - zero custom CSS files.
### Product designer implementing design system
**Prompt:** "Build component library with our brand colors: primary #0066FF, secondary #00CC88"
**What happens:** ClaudeKit configures custom theme in `tailwind.config.js`, extends color palette with 50-950 shades, applies brand colors across buttons/cards/inputs, implements dark mode variants.
### Developer adding responsive dashboard
**Prompt:** "Create admin dashboard with sidebar, stats cards, and data table"
**What happens:** ClaudeKit uses Tailwind utilities for layout (`grid grid-cols-1 md:grid-cols-3`), spacing (`space-y-4`), shadows (`shadow-lg`), and breakpoint-specific navigation (`hidden md:block`).
### SaaS team implementing dark mode
**Prompt:** "Add dark mode toggle with system preference detection"
**What happens:** ClaudeKit configures `darkMode: 'class'`, adds dark variants (`bg-white dark:bg-gray-900`), implements toggle component with localStorage persistence, applies theme to all UI elements.
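A rough sketch of that toggle logic with the class strategy (the function names are illustrative):
```typescript
// Toggle the `dark` class on <html> and persist the choice.
export function toggleDarkMode(): void {
  const enabled = document.documentElement.classList.toggle('dark');
  localStorage.setItem('theme', enabled ? 'dark' : 'light');
}

// On page load: saved preference wins, otherwise follow the system setting.
export function initTheme(): void {
  const saved = localStorage.getItem('theme');
  const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
  document.documentElement.classList.toggle('dark', saved ? saved === 'dark' : prefersDark);
}
```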
## Key Differences
| Aspect | Tailwind CSS | Traditional CSS |
|--------|-------------|-----------------|
| **Approach** | Utility classes in HTML | Separate CSS files |
| **Customization** | Config file + @theme | Custom stylesheets |
| **File Size** | Auto-purged (tiny bundle) | Manual optimization |
| **Responsive** | Inline prefixes (md:, lg:) | Media queries |
| **Dark Mode** | Class variants (dark:) | Manual theming |
## Quick Reference
**Layout:**
```html
<div class="flex items-center justify-between">...</div>
<div class="grid grid-cols-1 md:grid-cols-3 gap-6">...</div>
```
**Spacing (4px increments):**
```html
<div class="p-4 m-2 space-y-4">...</div> <!-- p-4 = 16px, m-2 = 8px -->
```
**Typography:**
```html
<h1 class="text-4xl font-bold tracking-tight">...</h1>
<p class="text-base leading-relaxed text-gray-600">...</p>
```
**Responsive breakpoints:**
```html
<div class="text-sm md:text-base lg:text-lg">...</div>
```
**Dark mode:**
```html
<div class="bg-white text-gray-900 dark:bg-gray-900 dark:text-white">...</div>
```
**Custom theme (@theme directive):**
```css
@import "tailwindcss";
@theme {
--color-brand-500: oklch(0.55 0.22 264);
--font-display: "Inter", sans-serif;
--spacing-navbar: 4.5rem;
}
```
**Common patterns:**
```html
<button class="px-4 py-2 bg-blue-500 text-white rounded-lg shadow hover:bg-blue-600 transition-colors">...</button>
<div class="max-w-sm rounded-lg bg-white shadow-lg dark:bg-gray-800 hover:shadow-xl transition-shadow">...</div>
```
## Pro Tips
**Organize long class names:**
```tsx
const cardClasses = `
max-w-sm rounded-lg shadow-lg
bg-white dark:bg-gray-800
hover:shadow-xl transition-shadow
`;
<div className={cardClasses}>Content</div>
```
**Extract repeated patterns:**
```tsx
// Create component instead of repeating classes
function Button({ children }) {
  return (
    <button className="px-4 py-2 bg-blue-500 text-white rounded-lg hover:bg-blue-600">
      {children}
    </button>
  );
}
```
**Use @apply for complex components:**
```css
@layer components {
  .btn-primary {
    @apply px-4 py-2 bg-blue-500 text-white rounded-lg;
    @apply hover:bg-blue-600 focus:ring-2 focus:ring-blue-500;
  }
}
```
**Not activating?** Say: "Use ui-styling skill to build this with Tailwind CSS"
## Related Skills
- [UI Styling](/docs/skills/frontend/ui-styling) - Combined shadcn/ui + Tailwind
- [Frontend Design](/docs/skills/frontend/frontend-design) - Design-focused UI creation
- [Frontend Development](/docs/skills/frontend/frontend-development) - Full-stack React development
## Key Takeaway
Tailwind eliminates context switching between HTML/CSS by putting all styling in utility classes. ClaudeKit leverages this for rapid prototyping - responsive layouts, dark mode, custom themes - all without writing a single CSS file.
---
# shadcn/ui Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/shadcn-ui
# shadcn/ui Skill
Copy-paste accessible UI components built on Radix UI + Tailwind CSS that live in your codebase, not node_modules.
## When to Use
- Building React apps with Next.js, Vite, or Remix
- Need accessible components (WCAG-compliant dialogs, forms, tables)
- Want full ownership and customization of UI components
- Implementing forms with React Hook Form + Zod validation
- Building data-heavy interfaces (admin panels, dashboards)
## Quick Start
**Initialize:**
```bash
npx shadcn@latest init
```
**Add components:**
```bash
npx shadcn@latest add button dialog form
```
**Use:**
```tsx
import { Button } from "@/components/ui/button"
<Button variant="destructive">Delete</Button>
```
## Common Use Cases
### Admin Dashboard with Data Tables
**Who**: SaaS founder building analytics
```
"Build an admin dashboard with user table (sortable, filterable),
status badges, and delete confirmation dialogs"
```
### Multi-Step Form with Validation
**Who**: E-commerce developer adding checkout
```
"Create 3-step checkout form: shipping, payment, review.
Validate each step with Zod before proceeding"
```
### Modal-Heavy Workflow
**Who**: Product manager tool with popups
```
"Task management UI with create/edit modals, confirmation dialogs,
and toast notifications for all actions"
```
### Responsive Dashboard Cards
**Who**: Analytics app developer
```
"Dashboard with metric cards showing KPIs, trend charts,
and dropdown menus for time range selection"
```
### Command Palette Search
**Who**: Documentation site adding search
```
"Add Cmd+K command palette to search docs, navigate pages,
and trigger actions with keyboard shortcuts"
```
## Key Differences
| Feature | shadcn/ui | Material-UI | Ant Design |
|---------|-----------|-------------|------------|
| Distribution | Copy to codebase | npm package | npm package |
| Customization | Full control | Theme overrides | Theme config |
| Bundle Size | Only what you use | Entire library | Entire library |
| TypeScript | Native | Native | Native |
| Dependencies | Radix UI + Tailwind | Emotion | RC Components |
| Accessibility | Built-in (Radix) | Manual | Manual |
## Quick Reference
**Installation:**
```bash
npx shadcn@latest add button # Single
npx shadcn@latest add form input # Multiple
npx shadcn@latest add --all # All components
```
**Essential Components:**
```bash
# Forms
npx shadcn@latest add form input label button select checkbox
# Modals/Overlays
npx shadcn@latest add dialog sheet popover toast
# Navigation
npx shadcn@latest add tabs dropdown-menu navigation-menu
# Data Display
npx shadcn@latest add table card badge avatar
```
**Form with Validation:**
```tsx
import { useForm } from "react-hook-form"
import { zodResolver } from "@hookform/resolvers/zod"
import * as z from "zod"
const schema = z.object({
  email: z.string().email(),
  password: z.string().min(8)
})

const form = useForm({
  resolver: zodResolver(schema)
})
```
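Submit wiring for the form above might look like this; the `onSubmit` handler body is a placeholder assumption:
```tsx
// Values are fully typed from the Zod schema.
function onSubmit(values: z.infer<typeof schema>) {
  console.log(values); // replace with your mutation or server action
}

// In JSX: <form onSubmit={form.handleSubmit(onSubmit)}> ...fields... </form>
```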
**cn() Utility for Conditional Classes:**
```tsx
import { cn } from "@/lib/utils"
<Button className={cn("w-full", isLoading && "opacity-50")}>Submit</Button>
```
**Component Variants:**
```tsx
// Button variants: default, destructive, outline, secondary, ghost, link
<Button variant="destructive">Delete</Button>
// Sizes: default, sm, lg, icon
<Button size="sm">Small</Button>
```
## Pro Tips
- **Copy, don't import**: Components copy to `components/ui/`, modify freely
- **Compose, don't configure**: Build complex UIs by nesting simple components
- **Use cn()**: Merge Tailwind classes conditionally without conflicts
- **Dark mode**: All components support dark mode with CSS variables
- **Accessibility**: Radix UI handles focus management, ARIA, keyboard nav
- **Not activating?** Say: "Use shadcn-ui skill to build this form with validation"
## Related Skills
- [Frontend Design](/docs/skills/frontend/frontend-design) - High-design UI generation
- [UI Styling](/docs/skills/frontend/ui-styling) - Tailwind CSS patterns
- [Frontend Development](/docs/skills/frontend/frontend-development) - React best practices
## Key Takeaway
shadcn/ui gives you production-ready, accessible components you own and control. Unlike npm packages, components live in your codebase for full customization. Perfect for developers who want beautiful UIs without fighting a component library.
---
# Frontend Design Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/frontend-design
# Frontend Design Skill
Build memorable web interfaces with bold aesthetics. Production-grade code that avoids generic AI patterns.
## Just Ask Claude To...
```
"Build a brutalist landing page for a crypto wallet app"
```
```
"Create a retro-futuristic dashboard with neon accents and grid overlays"
```
```
"Design a maximalist portfolio hero section with animated gradient meshes"
```
```
"Build a minimal editorial blog layout with dramatic typography"
```
```
"Create a playful pricing page with organic shapes and pastel colors"
```
```
"Design a luxury product showcase with subtle animations and refined spacing"
```
```
"Build an industrial-style admin panel with monospace fonts and raw aesthetics"
```
## What You'll Get
- **Working code** (HTML/CSS/JS, React, Vue, etc.) ready for production
- **Distinctive aesthetics** with unique typography, color schemes, and layouts
- **Refined animations** using CSS, anime.js, or Framer Motion
- **Responsive design** tested across screen sizes
## Supported Stacks
| Framework | Languages | Animation | Styling |
|-----------|-----------|-----------|---------|
| React, Vue, Svelte, Astro | TypeScript, JavaScript | anime.js, Framer Motion, CSS | Tailwind CSS, CSS-in-JS, CSS variables |
## Common Use Cases
### Landing Page for SaaS
**Who**: Startup founder
```
"Build a landing page for an AI code review tool. Brutalist aesthetic with sharp contrasts, monospace fonts, and terminal-inspired UI elements. Include hero, features, and CTA sections."
```
### Portfolio Redesign
**Who**: Designer seeking refresh
```
"Create a portfolio homepage that screams editorial luxury. Asymmetric grid layout, serif display fonts (Fraunces or similar), dramatic shadows, and scroll-triggered reveals. Show 3 featured projects."
```
### Dashboard Overhaul
**Who**: Product team lead
```
"Design an analytics dashboard with retro-futuristic vibes. Neon accent colors on dark backgrounds, glowing borders, geometric patterns, and smooth chart animations. 4 key metric cards + graph."
```
### Marketing Microsite
**Who**: Marketing manager
```
"Build a product launch microsite with maximalist energy. Vibrant gradients, overlapping elements, playful animations, and bold typography. Single-page with sections for features, testimonials, and waitlist signup."
```
### Blog Platform
**Who**: Content creator
```
"Create a minimal blog layout with exceptional typography. Large readable text, generous whitespace, subtle hover effects, and art deco-inspired geometric accents. Article list + featured post."
```
## Pro Tips
- **Define aesthetic first**: Choose brutalist, maximalist, retro-futuristic, luxury, playful, editorial, industrial, or create your own. Commit fully.
- **Avoid generic patterns**: No Inter/Roboto fonts, no purple gradients on white, no cookie-cutter layouts. Be bold.
- **Match complexity to vision**: Maximalist needs elaborate code; minimalist needs precision and restraint.
- **Vary between projects**: Different themes, fonts, aesthetics each time. Never converge on Space Grotesk.
- **Extract before building**: When replicating screenshots, use ai-multimodal skill to extract colors, typography, spacing first. Document in `docs/design-guidelines/`, then implement.
**Not activating?** Say: "Use frontend-design skill to build this UI"
## Related Skills
- [AI Multimodal](/docs/skills/ai/ai-multimodal) - Extract design guidelines from screenshots
- [Frontend Development](/docs/skills/frontend/frontend-development) - Build complex frontend features
- [UI Styling](/docs/skills/frontend/ui-styling) - Tailwind CSS + component libraries
- [Aesthetic](/docs/skills/frontend/aesthetic) - Advanced visual design systems
## Key Takeaway
Frontend Design creates unforgettable interfaces by committing to bold aesthetic directions. Extract design specs from references, choose your visual identity, then execute with precision. Avoid generic AI patterns at all costs.
---
# Web Frameworks Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/web-frameworks
# Web Frameworks
Build full-stack web apps with Next.js, Turborepo monorepos, and RemixIcon.
## When to Use
- Building React apps with SSR/SSG/RSC
- Setting up monorepos with shared packages
- Optimizing build performance with caching
- Creating multi-app platforms with shared UI
## Quick Start
```bash
# Single app
npx create-next-app@latest my-app && cd my-app
npm install remixicon
# Monorepo
npx create-turbo@latest my-monorepo
```
## Common Use Cases
### New SaaS Application
**Who**: Startup founder building MVP
```
"Create a Next.js app for a project management SaaS with App Router,
TypeScript, Tailwind, and RemixIcon. Include auth pages, dashboard layout,
and API routes for CRUD operations."
```
### Monorepo Setup
**Who**: Tech lead scaling from single app to platform
```
"Convert my Next.js app to a Turborepo monorepo. I need separate apps for
customer portal (web), admin dashboard (admin), and docs site. Share UI
components and TypeScript types across all apps."
```
### E-commerce Storefront
**Who**: Agency developer building client site
```
"Build a Next.js e-commerce storefront with ISR for product pages, static
generation for categories, and server components for the cart. Use RemixIcon
for UI icons throughout."
```
### Design System with Docs
**Who**: Design engineer at growing company
```
"Set up a Turborepo with a shared component library using RemixIcon and a
Storybook docs site. Components should be publishable to npm and consumable
by our three Next.js apps."
```
### Performance Optimization
**Who**: Developer improving existing Next.js app
```
"Optimize my Next.js app's build performance. Add proper caching strategies,
implement ISR for dynamic content, and configure Turborepo remote caching
for our CI pipeline."
```
## Key Differences
| Need | Solution |
|------|----------|
| Single app | Next.js + RemixIcon |
| Multiple apps sharing code | Add Turborepo |
| SSR with real-time data | Server Components + `no-store` |
| Static marketing pages | SSG with `generateStaticParams` |
| Periodically updated content | ISR with `revalidate` |
## Quick Reference
| Tool | Key Commands |
|------|--------------|
| Next.js | `npm run dev`, `npm run build`, `npm run start` |
| Turborepo | `turbo run build`, `turbo run dev --filter=web` |
| RemixIcon | `<i class="ri-home-line"></i>` or `<i class="ri-home-fill"></i>` |
**Turborepo Pipeline (turbo.json):**
```json
{
  "pipeline": {
    "build": { "dependsOn": ["^build"], "outputs": [".next/**", "dist/**"] },
    "dev": { "cache": false, "persistent": true }
  }
}
```
## Pro Tips
- Default to Server Components, add `"use client"` only when needed
- Use `^build` in Turborepo for topological dependency ordering
- RemixIcon: `-line` suffix for minimal UI, `-fill` for emphasis
- Enable Turborepo remote caching for team CI speedup
- **Not activating?** Say: "Use web-frameworks skill to..."
## Related Skills
- [Frontend Development](/docs/skills/frontend/frontend-development) - React patterns and state management
- [UI Styling](/docs/skills/frontend/ui-styling) - Tailwind and CSS strategies
- [DevOps](/docs/skills/backend/devops) - Deployment and CI/CD
---
## Key Takeaway
Next.js + RemixIcon for single apps, add Turborepo when sharing code across multiple applications.
---
# UI Styling Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/ui-styling
# UI Styling Skill
Beautiful, accessible UI with shadcn/ui components (Radix UI + Tailwind), utility-first styling, and canvas-based visual design systems.
## Just Ask Claude To...
```
"Add a login form with email/password validation using shadcn form components"
```
```
"Create a responsive dashboard with cards, buttons, and dark mode support"
```
```
"Build a data table with sorting, filtering, and pagination"
```
```
"Style this component with Tailwind - mobile-first, add hover effects and smooth transitions"
```
```
"Add a command palette with keyboard shortcuts and search"
```
```
"Create a visual design poster for our product launch - minimal text, bold colors"
```
```
"Implement a settings page with tabs, switches, and form validation"
```
## What You'll Get
- **Accessible Components**: Dialog, Dropdown, Form, Table, Navigation with full keyboard support and ARIA
- **Responsive Layouts**: Mobile-first grid systems, breakpoint utilities (sm/md/lg/xl), dark mode variants
- **Consistent Styling**: Design tokens (colors, spacing, typography), utility classes, custom theme configurations
- **Visual Designs**: Canvas-based posters, brand materials, sophisticated compositions with minimal text
## Supported Stacks
**Component Library**: shadcn/ui (Radix UI primitives)
**Styling**: Tailwind CSS v3+ with Vite or PostCSS
**Frameworks**: Next.js, Vite, Remix, Astro (React-based)
**Languages**: TypeScript (preferred), JavaScript
**Visual Design**: Canvas-based visual compositions
| Component Type | Examples |
|----------------|----------|
| Forms & Inputs | Button, Input, Select, Checkbox, Date Picker, Form validation |
| Layout & Nav | Card, Tabs, Accordion, Navigation Menu |
| Overlays | Dialog, Drawer, Popover, Toast, Command |
| Feedback | Alert, Progress, Skeleton |
| Display | Table, Data Table, Avatar, Badge |
## Common Use Cases
### Landing Page Hero
**Who**: SaaS founder building marketing site
```
"Create a hero section with gradient background, large heading, subtitle,
CTA buttons (primary + secondary), and a product screenshot card.
Mobile-responsive with Tailwind."
```
### Admin Dashboard
**Who**: Backend dev needing quick admin UI
```
"Build a dashboard layout: sidebar nav (collapsible on mobile), header with
user dropdown, main content area with stats cards showing metrics (users, revenue,
orders). Use shadcn components and chart library."
```
### Form Wizard
**Who**: Product manager prototyping onboarding flow
```
"Create a 3-step signup wizard with progress indicator. Step 1: email/password,
Step 2: profile info (name, avatar upload), Step 3: preferences (checkboxes).
Add validation with error messages."
```
### Design Assets
**Who**: Startup founder needing event poster
```
"Generate a visual design for our launch event - bold geometric shapes,
company colors (blue/purple gradient), date/location minimal text.
Museum-quality aesthetic, canvas-based."
```
### E-commerce Product Page
**Who**: Frontend dev building shop UI
```
"Create a product page: image gallery with thumbnails, title, price,
size selector (radio buttons), quantity input, Add to Cart button,
collapsible description tabs. Dark mode support."
```
## Pro Tips
- **Setup shortcut**: Run `npx shadcn@latest init` once per project - configures both shadcn and Tailwind
- **Mobile-first**: Always start with base mobile styles, add `md:` and `lg:` prefixes for larger screens
- **Dark mode**: Use `dark:` prefix for all themed elements (`dark:bg-gray-900`, `dark:text-white`)
- **Avoid dynamic classes**: Use full class names (`bg-blue-500` not `bg-${color}-500`) for Tailwind purge
- **Component variants**: Customize in `components/ui/*.tsx` for project-wide consistency
- **Not activating?** Say: **"Use ui-styling skill to build this interface with shadcn and Tailwind"**
## Related Skills
- [Frontend Design](/docs/skills/frontend/frontend-design) - Full design system creation
- [Frontend Design Pro](/docs/skills/frontend/frontend-design-pro) - Advanced visual design
- [Aesthetic](/docs/skills/frontend/aesthetic) - Visual polish and refinement
- [Canvas Design](/docs/skills/ai/canvas-design) - Museum-quality visual compositions
- [Frontend Development](/docs/skills/frontend/frontend-development) - React/Next.js implementation
## Key Takeaway
Combine shadcn/ui (accessible Radix components), Tailwind CSS (utility-first styling), and canvas design for production-ready interfaces. Copy components into your codebase, style with utilities, ship fast with full TypeScript safety and accessibility built-in.
---
# Three.js Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/threejs
# Three.js Skill
Build immersive 3D web experiences with WebGL/WebGPU. Production-ready scenes, animations, and interactive graphics.
## Just Ask Claude To...
```
"Create an interactive 3D product configurator with orbit controls and material swapping"
```
```
"Build a particle system with 10,000 stars and camera movement through space"
```
```
"Design a GLTF model viewer with post-processing bloom and realistic lighting"
```
```
"Create an animated 3D globe showing data points with raycasting interaction"
```
```
"Build a physics-based scene with falling objects and collision detection"
```
```
"Design a shader-based water surface with reflections and animated waves"
```
```
"Create a VR-ready scene with hand tracking and teleportation controls"
```
## What You'll Get
- **Working 3D scenes** with optimized rendering, cameras, and lighting setups
- **Interactive experiences** using raycasting, controls, and animations
- **Performance-tuned code** with instancing, LOD, and frustum culling
- **Advanced effects** like post-processing, custom shaders, and physics integration
## Supported Stacks
| Renderer | Features | Integration | Performance |
|----------|----------|-------------|-------------|
| WebGL, WebGPU | Scenes, cameras, lights, materials, geometries, animations | React Three Fiber, vanilla JS, TypeScript | Instancing, LOD, batching, compute shaders |
## Common Use Cases
### Product Configurator
**Who**: E-commerce developer
```
"Build a 3D car configurator with color picker, wheel options, and interior materials. Load GLTF model, add orbit controls, and update materials on button click. Include environment map for realistic reflections."
```
### Data Visualization
**Who**: Analytics team
```
"Create a 3D bar chart from JSON data with animated growth on load. Add axis labels, hover tooltips using raycasting, and orbit camera controls. Use instanced geometry for performance."
```
### Game Prototype
**Who**: Indie game developer
```
"Build a third-person character controller with physics-based movement. Load animated GLTF character, implement keyboard controls, add collision detection with Rapier physics, and camera follow."
```
### Marketing Landing
**Who**: Creative agency
```
"Design a hero section with floating geometric shapes, animated camera path, and scroll-triggered interactions. Add bloom post-processing, gradient materials, and smooth transitions between scenes."
```
### AR Experience
**Who**: Museum curator
```
"Create a WebXR artifact viewer that places 3D models in real space using AR. Load GLTF scan, add touch rotation, scale controls, and lighting that matches device camera environment."
```
## Pro Tips
- **Start simple**: Scene + camera + renderer + lights + object. Test early, add complexity incrementally. (A minimal starting scene is sketched after this list.)
- **Optimize from start**: Use instancing for repeated objects, enable frustum culling, monitor draw calls.
- **Light strategically**: Ambient + directional/spot. Shadows are expensive; use sparingly.
- **GLTF over others**: Compressed, optimized, supports PBR. Use Draco compression for meshes.
- **Post-processing last**: Performance cost. Use EffectComposer only when needed.
- **WebGPU when ready**: Faster compute, better performance. Fallback to WebGL for compatibility.
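A minimal starting point, assuming the `three` package is installed (the sizes, colors, and spinning cube are placeholders):
```typescript
import * as THREE from 'three';

// Scene + camera + renderer
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
camera.position.z = 3;
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Lights: ambient + directional
scene.add(new THREE.AmbientLight(0xffffff, 0.5));
const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(2, 2, 2);
scene.add(light);

// One object to confirm the render loop works
const cube = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshStandardMaterial({ color: 0x0066ff }));
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```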
**Not activating?** Say: "Use threejs skill to build this 3D scene"
## Related Skills
- [Frontend Design](/docs/skills/frontend/frontend-design) - UI overlays for 3D configurators
- [Frontend Development](/docs/skills/frontend/frontend-development) - React Three Fiber integration
- [AI Multimodal](/docs/skills/ai/ai-multimodal) - Extract 3D requirements from mockups
## Key Takeaway
Three.js transforms complex 3D graphics into browser-native experiences. Load models, add interactions, optimize performance, then layer advanced effects. Start with core rendering loop, scale to physics, shaders, and VR.
---
# Frontend Development Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/frontend-development
# Frontend Development
Modern React patterns with Suspense-first data fetching, lazy loading, and feature-based organization. Zero layout shift, type-safe, performance-optimized.
## When to Use
- Building React components with data fetching
- Setting up TanStack Router routes
- Organizing features with proper file structure
- Implementing MUI v7 styling patterns
- Optimizing React performance (memo, lazy, Suspense)
## Quick Start
**New Component Template:**
```typescript
import React, { useCallback } from 'react';
import { Box, Paper } from '@mui/material';
import { useSuspenseQuery } from '@tanstack/react-query';
import { featureApi } from '../api/featureApi';
interface Props {
  id: number;
}

export const MyComponent: React.FC<Props> = ({ id }) => {
  const { data } = useSuspenseQuery({
    queryKey: ['feature', id],
    queryFn: () => featureApi.getFeature(id),
  });

  const handleAction = useCallback(() => {
    // Handler logic
  }, []);

  return (
    <Box>
      <Paper>{/* Content */}</Paper>
    </Box>
  );
};

export default MyComponent;
```
**Route Setup:**
```typescript
import { createFileRoute } from '@tanstack/react-router';
import { lazy } from 'react';
const MyPage = lazy(() => import('@/features/my-feature/components/MyPage'));
export const Route = createFileRoute('/my-route/')({
  component: MyPage,
  loader: () => ({ crumb: 'My Route' }),
});
```
## Common Use Cases
### Dashboard with Real-time Data
**Who**: SaaS developer building analytics dashboard
"Build a dashboard page that fetches user analytics from /analytics endpoint. Use lazy loading for the chart components and proper loading states. Include a DataGrid for recent activities."
### Feature with Forms
**Who**: Full-stack developer adding CRUD feature
"Create a new 'projects' feature with list and create pages. Use React Hook Form with Zod validation. Set up proper routing and data fetching with cache invalidation on mutations."
### Performance Optimization
**Who**: Frontend engineer reducing bundle size
"The app is slow to load. Implement lazy loading for all routes, optimize heavy components with React.memo, and ensure proper code splitting. Check for unnecessary re-renders."
### Migration to Suspense Pattern
**Who**: React developer modernizing legacy code
"Refactor the Posts feature to use useSuspenseQuery instead of useQuery. Remove all early return loading states and use SuspenseLoader component. Update error handling to use useMuiSnackbar."
### Component Library Integration
**Who**: UI developer integrating design system
"Create reusable Card and Modal components using MUI v7 with proper TypeScript types. Style using sx prop with theme support. Make them accessible and responsive."
## Key Patterns
### Data Fetching (Suspense-first)
| Pattern | When to Use | Example |
|---------|-------------|---------|
| `useSuspenseQuery` | New code, primary pattern | List/detail views |
| `useQuery` | Legacy code only | Existing components |
| API service layer | All data fetching | `features/*/api/` |
### Loading States (Zero Layout Shift)
| Approach | Status | Reason |
|----------|--------|--------|
| `<SuspenseLoader>` wrapper | ✅ Use | No layout shift |
| Early return loading | ❌ Avoid | Causes CLS |
| Inline spinners | ❌ Avoid | Inconsistent UX |
### File Organization
```
features/{feature-name}/
  api/          # API service layer
  components/   # Feature components
  hooks/        # Custom hooks
  helpers/      # Utilities
  types/        # TypeScript types
  index.ts      # Public exports
```
## Quick Reference
**Import Aliases:**
```typescript
@/          → src/
~types      → src/types
~components → src/components
~features   → src/features
```
**MUI v7 Grid (Breaking Change):**
```typescript
// ✅ v7
<Grid size={{ xs: 12, md: 6 }}>...</Grid>
// ❌ Old
<Grid item xs={12} md={6}>...</Grid>
```
**Styling Rules:**
- <100 lines → Inline `const styles: Record<string, SxProps>`
- >100 lines → Separate `.styles.ts` file
**Performance Hooks:**
- `useMemo` → Filter/sort/map operations
- `useCallback` → Event handlers passed to children
- `React.memo` → Expensive components
**Component Checklist:**
- [ ] `React.FC` with TypeScript
- [ ] Lazy load if heavy (DataGrid, charts)
- [ ] Wrap in `<SuspenseLoader>`
- [ ] Use `useSuspenseQuery` for data
- [ ] No early return loading states
- [ ] `useCallback` for child handlers
- [ ] Default export at bottom
## Pro Tips
**Not activating?** Say: "Use frontend-development skill to create this component"
**Common Issues:**
- MUI Grid not working → Update to v7 syntax: `size={{ xs: 12 }}`
- Layout shifts on load → Replace early returns with `<SuspenseLoader>`
- Slow route transitions → Lazy load page components
- Bundle too large → Check for missing `React.lazy()` imports
**Best Practices:**
- Always use `import type` for TypeScript types
- Keep features self-contained (no cross-feature imports except via index.ts)
- Use `useMuiSnackbar` for all notifications (NOT react-toastify)
- Debounce search inputs (300-500ms); see the hook sketch after this list
- Clean up effects to prevent memory leaks
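A small debounce hook sketch matching that guidance (the hook name is an assumption, not part of the template above):
```typescript
import { useEffect, useState } from 'react';

// Returns `value` only after it has been stable for `delayMs` (300-500ms for search inputs).
export function useDebouncedValue<T>(value: T, delayMs = 400): T {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const id = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(id); // cleanup prevents stale updates
  }, [value, delayMs]);
  return debounced;
}
```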
## Related Skills
- [UI/UX Pro Max](/docs/skills/frontend/ui-ux-pro-max) - Advanced design patterns
- [Frontend Design](/docs/skills/frontend/frontend-design) - Component styling & layouts
- [Web Frameworks](/docs/skills/frontend/web-frameworks) - Next.js, Remix integration
- [Backend Development](/docs/skills/backend/backend-development) - API patterns
- [Debugging](/docs/skills/tools/debugging) - React DevTools & performance profiling
## Key Takeaway
Modern React = `useSuspenseQuery` + `` + lazy loading + feature-based files. Zero layout shift, type-safe, performance-first. No early returns, no react-toastify, use MUI v7 Grid syntax.
---
# Aesthetic Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/aesthetic
# Aesthetic Skill
Transform any design from generic to gorgeous using ClaudeKit's systematic aesthetic framework.
## Just Ask Claude To...
- "Analyze this Dribbble design and extract the color palette, typography, and spacing system"
- "Generate a hero section design for a luxury skincare brand with soft, elegant aesthetics"
- "Rate this UI mockup's aesthetic quality and suggest improvements"
- "Create micro-interactions for this card componentβ150-300ms, ease-out animations"
- "Build a design guideline document from this Behance inspiration screenshot"
- "Iterate this design until it scores 8/10+ on aesthetic quality"
- "Add storytelling elements with parallax scrolling and thematic consistency"
## What You'll Get
- **Design Analysis Reports**: Color palettes (hex codes), typography specs (fonts, sizes, weights), spacing scales, visual hierarchy breakdowns, aesthetic quality scores (1-10)
- **Refined Design Assets**: AI-generated hero images, backgrounds, textures, iterated until they meet 7/10+ quality standards
- **Production-Ready Code**: Micro-interactions with precise timing (150-300ms), design system implementation, accessibility-compliant components
- **Design Documentation**: `design-guideline.md` and `design-story.md` templates auto-generated from analysis
## Supported Stacks
- **Frontend**: React, Next.js, Vue, Svelte, HTML/Tailwind
- **Design Analysis**: Dribbble, Mobbin, Behance, Awwwards screenshots
- **Asset Generation**: AI-generated images via ai-multimodal skill
- **Media Processing**: FFmpeg, ImageMagick for refinement
## Common Use Cases
**Startup Founder**: "Analyze top 3 SaaS landing pages from Dribbble and create our design system"
→ Extracts color theory, typography hierarchy, component patterns from real examples
**Designer**: "Generate a glassmorphic dashboard hero with purple-blue gradients, then iterate until perfect"
→ AI generates → analyzes quality → refines prompt → repeats until 7/10+ score
**Developer**: "Add satisfying micro-interactions to this navbar: smooth transitions, hover states"
→ Implements 200ms ease-out animations, sequential delays, professional polish
**Agency**: "Build a design guideline doc from this client's Behance inspiration board"
→ Captures screenshots → extracts design DNA → documents standards
**Product Team**: "Review our UI mockup and rate aesthetic quality with improvement suggestions"
→ Scores visual hierarchy, typography, spacing → provides actionable fixes
## Pro Tips
- **Quality Gate**: Never accept designs scoring below 7/10; iterate using analysis feedback
- **Timing Matters**: 150-300ms for micro-interactions; longer feels sluggish, shorter feels jarring
- **Human Standards**: AI learns aesthetics from your curated examples (Dribbble, Awwwards); garbage in, garbage out
- **Progressive Disclosure**: Start BEAUTIFUL (aesthetics) → RIGHT (usability) → SATISFYING (micro-interactions) → PEAK (storytelling)
- **Not activating?** Say: "Use aesthetic skill to analyze this design" or "Activate aesthetic skill for this UI work"
## Framework: Four Stages
1. **BEAUTIFUL**: Study high-quality examples (Dribbble, Behance), extract visual principles (hierarchy, typography, color theory, white space)
2. **RIGHT**: Ensure functionality: WCAG accessibility, design systems, component architecture
3. **SATISFYING**: Add micro-interactions: 150-300ms duration, ease-out entry, ease-in exit, sequential delays (see the sketch after this list)
4. **PEAK**: Storytelling through design: parallax, particle systems, thematic consistency (use restraint)
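As a rough illustration of the SATISFYING stage, a 200ms ease-out hover interaction in React (all values are illustrative):
```typescript
import React, { useState } from 'react';

// 200ms ease-out on hover: inside the 150-300ms "satisfying" window
const HoverCard: React.FC<{ title: string }> = ({ title }) => {
  const [hovered, setHovered] = useState(false);

  return (
    <div
      onMouseEnter={() => setHovered(true)}
      onMouseLeave={() => setHovered(false)}
      style={{
        transition: 'transform 200ms ease-out, box-shadow 200ms ease-out',
        transform: hovered ? 'translateY(-4px)' : 'none',
        boxShadow: hovered ? '0 12px 24px rgba(0,0,0,0.12)' : '0 2px 6px rgba(0,0,0,0.06)',
        borderRadius: 12,
        padding: 24,
      }}
    >
      {title}
    </div>
  );
};

export default HoverCard;
```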
## Related Skills
- [AI Multimodal](/docs/skills/ai/ai-multimodal) - Generate/analyze design images
- [Chrome DevTools](/docs/skills/tools/chrome-devtools) - Capture inspiration screenshots
- [UI Styling](/docs/skills/frontend/ui-styling) - Implement with shadcn/ui + Tailwind
- [Frontend Design](/docs/skills/frontend/frontend-design) - Production-grade frontend interfaces
- [Web Frameworks](/docs/skills/frontend/web-frameworks) - Build with Next.js, React, Vue
## Key Takeaway
Aesthetic skill turns "AI slop" into professional design by teaching Claude to analyze real-world examples, iterate until quality standards are met (7/10+ scores), and implement with precise micro-interactions. Beauty without usability is worthless; follow BEAUTIFUL → RIGHT → SATISFYING → PEAK.
---
# Frontend Design Pro Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/frontend-design-pro
# Frontend Design Pro Skill
Agency-grade interfaces with perfect imagery - real photos OR custom AI prompts. Zero fake URLs.
## Just Ask Claude To...
- "Build a glassmorphism dashboard with aurora gradient backdrop"
- "Create a dark OLED luxury landing page with velvet textures"
- "Design a brutalist portfolio with asymmetric grid and neon accents"
- "Build a claymorphism SaaS pricing page with bubbly 3D cards"
- "Create a minimalist Swiss-style blog with razor-sharp typography"
- "Design a cyberpunk crypto exchange with neon glow and scanlines"
- "Build an organic biomorphic wellness app with fluid shapes"
## What You'll Get
- **Production code**: Copy-paste HTML/Tailwind/React/Vue, fully responsive
- **Perfect images**: Direct Unsplash/Pexels URLs OR detailed prompts for Flux/Midjourney
- **Bold aesthetics**: Glassmorphism, brutalism, claymorphism, aurora gradients, OLED luxury
- **Zero AI slop**: No Inter/Roboto, no fake links, WCAG AA/AAA compliant
## Supported Stacks
HTML + Tailwind CSS, React, Vue, Astro, Next.js, Nuxt
## Common Use Cases
**Startup Founder**: "Build a glassmorphism landing page with animated mesh gradient hero"
**Agency Designer**: "Create a dark luxury SaaS dashboard with velvet textures and emerald accents"
**Developer**: "Design a brutalist portfolio with asymmetric grid and custom cursor"
**Product Manager**: "Build a minimalist pricing page with Swiss typography and one bold accent"
**E-commerce**: "Create a 3D hyperrealistic product showcase with metallic textures"
## Pro Tips
- Pick ONE aesthetic direction and commit 100% (glassmorphism, brutalism, claymorphism)
- Use characterful fonts: GT America, Clash Display, Satoshi, Neue Machina
- Add unforgettable details: grain overlay, custom cursor, animated mesh, diagonal splits
- Break centered grids: asymmetry, overlap, diagonal flow
- Real photos: Direct `https://images.unsplash.com/...?w=1920&q=80` URLs
- Custom scenes: Wrap prompts in `[IMAGE PROMPT START]...[IMAGE PROMPT END]`
- Not activating? Say: "Use frontend-design-pro skill to build this interface"
## Related Skills
- [Frontend Design](/docs/skills/frontend/frontend-design) - Baseline design skill
- [UI/UX Pro Max](/docs/skills/frontend/ui-ux-pro-max) - Advanced UX patterns
- [Aesthetic](/docs/skills/frontend/aesthetic) - Visual polish
- [Three.js](/docs/skills/frontend/threejs) - 3D web graphics
## Key Takeaway
Frontend Design Pro delivers $50k+ agency-grade interfaces with perfect imagery handling. Bold aesthetics, characterful typography, zero AI slop.
---
# UI/UX Pro Max Skill
Section: docs
Category: skills/frontend
URL: https://docs.claudekit.cc/docs/frontend/ui-ux-pro-max
# UI/UX Pro Max Skill
Get instant access to 50 UI styles, 21 color palettes, 50 font pairings, and 20 chart types, all tailored to your product type and tech stack.
## Just Ask Claude To...
```
"Design a SaaS dashboard with dark mode"
"Build a landing page for my beauty spa"
"The navbar hover effect feels janky, fix it"
"Create a fintech dashboard with trend charts"
"Review this modal for accessibility"
```
Claude automatically activates this skill when you're working on UI/UX tasks.
## What You'll Get
- **Style guide** with colors, effects, and framework recommendations
- **Font pairings** with ready-to-use Google Fonts imports
- **Color palette** (Primary, Secondary, CTA, Background, Text, Border)
- **Page structure** patterns for landing pages
- **Chart recommendations** for dashboards
- **Stack-specific code** for your framework
## Supported Stacks
`html-tailwind` (default) Β· `react` Β· `nextjs` Β· `vue` Β· `svelte` Β· `swiftui` Β· `react-native` Β· `flutter`
## Common Use Cases
### New Landing Page
**Who**: Startup founder launching a product
```
"Build a landing page for my AI writing tool targeting content creators.
Use glassmorphism style, include hero, features, pricing, and testimonials sections."
```
### Dashboard Redesign
**Who**: Developer improving internal tools
```
"Redesign our analytics dashboard with dark mode.
Show user metrics, revenue charts, and activity feed. React + Tailwind."
```
### E-commerce Product Page
**Who**: Agency building for client
```
"Create a luxury fashion product page with image gallery,
size selector, reviews, and related products. Minimalist aesthetic."
```
### Mobile App Onboarding
**Who**: Mobile developer improving UX
```
"Design 4 onboarding screens for a fitness app.
Vibrant colors, progress indicator, skip option. React Native."
```
### UI Code Review
**Who**: Tech lead ensuring quality
```
"Review this checkout form for accessibility issues and mobile responsiveness.
Check contrast, focus states, and error handling."
```
## Pro Tips
- **Mention your product type** (SaaS, e-commerce, healthcare, beauty, fintech) for tailored recommendations
- **Specify your stack** to get framework-specific patterns
- **Not activating?** Say: "Use the ui-ux-pro-max skill to..."
## Quality Rules Claude Follows
### Icons & Visual Elements
- SVG icons only (Heroicons, Lucide), no emojis
- Stable hover states with color/opacity transitions
- Verified brand logos from Simple Icons
### Interaction
- `cursor-pointer` on all clickable elements
- `transition-colors duration-200` for smooth feedback
- Visible focus states for keyboard navigation
### Light/Dark Mode
- Glass cards: `bg-white/80` or higher in light mode
- Text contrast: `#0F172A` (slate-900) for body
- Borders: `border-gray-200` visible in light mode
### Layout
- Floating navbar with `top-4 left-4 right-4` spacing
- Responsive at 320px, 768px, 1024px, 1440px
- No horizontal scroll on mobile
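A small sketch of how several of these rules combine on one element, using Tailwind classes in React (the copy and exact classes are illustrative):
```typescript
import React from 'react';

const CallToAction: React.FC = () => (
  <button
    type="button"
    className="cursor-pointer rounded-lg px-4 py-2
               bg-white/80 text-slate-900 border border-gray-200
               transition-colors duration-200 hover:bg-white
               focus-visible:outline focus-visible:outline-2 focus-visible:outline-offset-2
               dark:bg-slate-900/80 dark:text-slate-100 dark:border-slate-700"
  >
    Get started
  </button>
);

export default CallToAction;
```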
## Pre-Delivery Checklist
Claude validates before delivering:
- [ ] No emoji icons
- [ ] Consistent icon set
- [ ] `cursor-pointer` on clickables
- [ ] Transitions 150-300ms
- [ ] Text contrast 4.5:1+
- [ ] Both light/dark modes tested
- [ ] Responsive breakpoints
- [ ] Alt text on images
- [ ] Form labels present
## Related Skills
- [Frontend Design Pro](/docs/skills/frontend/frontend-design-pro) - Premium agency-quality interfaces
- [Aesthetic](/docs/skills/frontend/aesthetic) - Four-stage design framework
- [UI Styling](/docs/skills/frontend/ui-styling) - shadcn/ui + Tailwind patterns
---
# Mobile Development Skill
Section: docs
Category: skills/mobile
URL: https://docs.claudekit.cc/docs/mobile/mobile-development
# Mobile Development Skill
Production-ready mobile development with modern frameworks, best practices, and mobile-first thinking patterns.
## When to Use
- Building mobile applications (iOS, Android, or cross-platform)
- Implementing mobile-first design and UX patterns
- Optimizing for mobile constraints (battery, memory, network, small screens)
- Making native vs cross-platform technology decisions
- Implementing offline-first architecture and data sync
- Following platform-specific guidelines (iOS HIG, Material Design)
- Testing mobile applications (unit, integration, E2E)
- Deploying to App Store and Google Play
## Technology Selection
### Quick Decision Matrix
| Need | Choose |
|------|--------|
| JavaScript team, web code sharing | React Native |
| Performance-critical, complex animations | Flutter |
| Maximum iOS performance, latest features | Swift/SwiftUI native |
| Maximum Android performance, Material 3 | Kotlin/Compose native |
| Rapid prototyping | React Native + Expo |
| Desktop + mobile | Flutter |
| Enterprise with JavaScript skills | React Native |
| Startup with limited resources | Flutter or React Native |
| Gaming or heavy graphics | Native (Swift/Kotlin) or Unity |
### Framework Comparison (2024-2025)
| Criterion | React Native | Flutter | Swift/SwiftUI | Kotlin/Compose |
|-----------|--------------|---------|---------------|----------------|
| **GitHub Stars** | 121K | 170K | N/A | N/A |
| **Adoption** | 35% | 46% | iOS only | Android only |
| **Performance** | 80-90% native | 85-95% native | 100% native | 100% native |
| **Dev Speed** | Fast (hot reload) | Very fast (hot reload) | Fast (Xcode Previews) | Fast (Live Edit) |
| **Learning Curve** | Easy (JavaScript) | Medium (Dart) | Medium (Swift) | Medium (Kotlin) |
| **Best For** | JS teams, web sharing | Performance, animations | iOS-only apps | Android-only apps |
## Mobile Development Mindset
**The 10 Commandments:**
1. **Performance is Foundation, Not Feature** - 70% of users abandon apps that take longer than 3 seconds to load
2. **Every Kilobyte, Every Millisecond Matters** - Mobile constraints are real
3. **Offline-First by Default** - Network is unreliable, design for it
4. **User Context > Developer Environment** - Think real-world usage scenarios
5. **Platform Awareness Without Platform Lock-In** - Respect platform conventions
6. **Iterate, Don't Perfect** - Ship, measure, improve cycle is survival
7. **Security and Accessibility by Design** - Not afterthoughts
8. **Test on Real Devices** - Simulators lie about performance
9. **Architecture Scales with Complexity** - Don't over-engineer simple apps
10. **Continuous Learning is Survival** - Mobile landscape evolves rapidly
## Performance Budgets
| Metric | Target |
|--------|--------|
| App size | <50MB initial, <200MB total |
| Launch time | <2 seconds to interactive |
| Screen load | <1 second for cached data |
| Network request | <3 seconds for API calls |
| Memory | <100MB typical, <200MB peak |
| Battery | <5% drain per hour active use |
| Frame rate | 60 FPS (16.67ms per frame) |
## Key Best Practices (2024-2025)
### Architecture
- MVVM for small-medium apps (clean separation, testable)
- MVVM + Clean Architecture for large enterprise apps
- Offline-first with hybrid sync (push + pull)
- State management: Zustand (React Native), Riverpod 3 (Flutter), StateFlow (Android)
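As a rough illustration of the React Native option, a minimal Zustand store sketch (the state shape is hypothetical):
```typescript
import { create } from 'zustand';

// Offline-friendly session slice for a React Native app (shape is illustrative)
type SessionState = {
  userId: string | null;
  pendingSync: number;
  signIn: (userId: string) => void;
  signOut: () => void;
  queueSync: () => void;
};

export const useSessionStore = create<SessionState>((set) => ({
  userId: null,
  pendingSync: 0,
  signIn: (userId) => set({ userId }),
  signOut: () => set({ userId: null }),
  queueSync: () => set((state) => ({ pendingSync: state.pendingSync + 1 })),
}));

// In a component: const userId = useSessionStore((s) => s.userId);
```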
### Security (OWASP Mobile Top 10)
- OAuth 2.0 + JWT + Biometrics for authentication
- Keychain (iOS) / KeyStore (Android) for sensitive data
- Certificate pinning for network security
- Never hardcode credentials or API keys
- Implement proper session management
### Testing Strategy
- Unit tests: 70%+ coverage for business logic
- Integration tests: Critical user flows
- E2E tests: Detox (React Native), Appium (cross-platform), XCUITest (iOS), Espresso (Android)
- Real device testing mandatory before release
### Deployment
- Fastlane for automation across platforms
- Staged rollouts: Internal → Closed → Open → Production
- Mandatory: iOS 17 SDK (2024), Android 15 API 35 (Aug 2025)
- CI/CD saves 20% development time
## Platform Guidelines
### iOS (Human Interface Guidelines)
- Native navigation patterns (tab bar, navigation bar)
- iOS design patterns (pull to refresh, swipe actions)
- San Francisco font, iOS color system
- Haptic feedback, 3D Touch/Haptic Touch
- Respect safe areas and notch
### Android (Material Design 3)
- Material navigation (bottom nav, navigation drawer)
- Floating action buttons, material components
- Roboto font, Material You dynamic colors
- Touch feedback (ripple effects)
- Respect system bars and gestures
## Common Pitfalls
1. **Testing only on simulators** - Real devices show true performance
2. **Ignoring platform conventions** - Users expect platform-specific patterns
3. **No offline handling** - Network failures will happen
4. **Poor memory management** - Leads to crashes and poor UX
5. **Hardcoded credentials** - Security vulnerability
6. **No accessibility** - Excludes 15%+ of users
7. **Premature optimization** - Optimize based on metrics, not assumptions
8. **Over-engineering** - Start simple, scale as needed
9. **Skipping real device testing** - Simulators don't show battery/network issues
10. **Not respecting battery** - Background processing must be justified
## Reference Navigation
| Topic | Reference |
|-------|-----------|
| Frameworks | `mobile-frameworks.md` |
| iOS Development | `mobile-ios.md` |
| Android Development | `mobile-android.md` |
| Best Practices | `mobile-best-practices.md` |
| Debugging | `mobile-debugging.md` |
| Mindset | `mobile-mindset.md` |
## Resources
- [React Native](https://reactnative.dev/)
- [Flutter](https://flutter.dev/)
- [iOS HIG](https://developer.apple.com/design/human-interface-guidelines/)
- [Material Design](https://m3.material.io/)
- [OWASP Mobile](https://owasp.org/www-project-mobile-top-10/)
- [Detox E2E](https://wix.github.io/Detox/)
- [Fastlane](https://fastlane.tools/)
---
## Key Takeaway
Choose React Native for JavaScript teams with web code sharing, Flutter for performance-critical apps, or native (Swift/Kotlin) for maximum platform performance. Follow mobile-first principles: offline-first architecture, <2s launch time, real device testing, and platform-specific guidelines.
---
# Media Processing Skill
Section: docs
Category: skills/multimedia
URL: https://docs.claudekit.cc/docs/multimedia/media-processing
# Media Processing Skill
Process video, audio, and images using FFmpeg, ImageMagick, and RMBG CLI tools.
## Tool Selection
| Task | Tool | Reason |
|------|------|--------|
| Video encoding/conversion | FFmpeg | Native codec support, streaming |
| Audio extraction/conversion | FFmpeg | Direct stream manipulation |
| Image resize/effects | ImageMagick | Optimized for still images |
| Background removal | RMBG | AI-powered, local processing |
| Batch images | ImageMagick | mogrify for in-place edits |
| Video thumbnails | FFmpeg | Frame extraction built-in |
| GIF creation | FFmpeg/ImageMagick | FFmpeg for video, ImageMagick for images |
## Installation
```bash
# macOS
brew install ffmpeg imagemagick
npm install -g rmbg-cli
# Ubuntu/Debian
sudo apt-get install ffmpeg imagemagick
npm install -g rmbg-cli
# Verify
ffmpeg -version && magick -version && rmbg --version
```
## Essential Commands
### FFmpeg
```bash
# Convert/re-encode
ffmpeg -i input.mkv -c copy output.mp4
ffmpeg -i input.avi -c:v libx264 -crf 22 -c:a aac output.mp4
# Extract audio
ffmpeg -i video.mp4 -vn -c:a copy audio.m4a
```
### ImageMagick
```bash
# Convert/resize
magick input.png output.jpg
magick input.jpg -resize 800x600 output.jpg
# Batch resize
mogrify -resize 800x -quality 85 *.jpg
```
### RMBG
```bash
# Basic (modnet)
rmbg input.jpg
# High quality
rmbg input.jpg -m briaai -o output.png
# Fast
rmbg input.jpg -m u2netp -o output.png
```
## Key Parameters
### FFmpeg
| Parameter | Purpose |
|-----------|---------|
| `-c:v libx264` | H.264 codec |
| `-crf 22` | Quality (0-51, lower=better) |
| `-preset slow` | Speed/compression balance |
| `-c:a aac` | Audio codec |
### ImageMagick
| Parameter | Purpose |
|-----------|---------|
| `800x600` | Fit within (maintains aspect) |
| `800x600^` | Fill (may crop) |
| `-quality 85` | JPEG quality |
| `-strip` | Remove metadata |
### RMBG
| Parameter | Purpose |
|-----------|---------|
| `-m briaai` | High quality model |
| `-m u2netp` | Fast model |
| `-r 4096` | Max resolution |
## Reference Navigation
| Topic | Reference |
|-------|-----------|
| Encoding | `references/ffmpeg-encoding.md` |
| Streaming | `references/ffmpeg-streaming.md` |
| Filters | `references/ffmpeg-filters.md` |
| Image Editing | `references/imagemagick-editing.md` |
| Batch Processing | `references/imagemagick-batch.md` |
| Background Removal | `references/rmbg-background-removal.md` |
| Common Workflows | `references/common-workflows.md` |
| Troubleshooting | `references/troubleshooting.md` |
| Format Compatibility | `references/format-compatibility.md` |
## Integration with ClaudeKit
Works with:
- **ai-multimodal**: Process media for AI analysis
- **frontend-design**: Asset preparation
## Resources
- [FFmpeg Documentation](https://ffmpeg.org/documentation.html)
- [ImageMagick Documentation](https://imagemagick.org/script/command-line-processing.php)
---
## Key Takeaway
Use FFmpeg for video/audio (encoding, conversion, streaming), ImageMagick for images (resize, effects, batch), and RMBG for AI-powered background removal. Supports 100+ formats with hardware acceleration.
---
# Chrome DevTools Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/chrome-devtools
# Chrome DevTools
Browser automation via executable Puppeteer scripts with JSON output for screenshots, performance analysis, web scraping, and debugging.
## When to Use
- Automate browser interactions (clicking, filling forms, navigation)
- Capture screenshots (full page or specific elements)
- Analyze performance and Core Web Vitals
- Monitor network traffic and console errors
- Web scraping with JavaScript execution
- Debug frontend issues in real browser environments
## Key Capabilities
| Capability | Description |
|------------|-------------|
| **Core Automation** | Navigate, screenshot, click, fill forms, execute JavaScript |
| **Analysis Tools** | Snapshot elements, monitor console/network, measure performance |
| **Smart Compression** | Auto-compress screenshots >5MB for AI compatibility |
| **Browser Chaining** | Reuse browser instance across multiple commands |
| **JSON Output** | All scripts output parseable JSON for easy integration |
## Common Use Cases
### Visual Regression Testing
"Take screenshots of https://example.com homepage and compare with baseline"
```bash
node screenshot.js --url https://example.com --output ./docs/screenshots/homepage.png
```
### Performance Audits
"Measure Core Web Vitals for production site and check if LCP < 2.5s"
```bash
node performance.js --url https://example.com | jq '.vitals.LCP'
```
### Form Automation
"Fill out the contact form on https://example.com with test data"
```bash
node fill.js --url https://example.com --selector "#email" --value "test@example.com" --close false
node fill.js --selector "#message" --value "Test message" --close false
node click.js --selector "button[type=submit]"
```
### Web Scraping
"Extract all product titles and prices from the catalog page"
```bash
node evaluate.js --url https://example.com/products --script "
Array.from(document.querySelectorAll('.product')).map(el => ({
title: el.querySelector('.title')?.textContent,
price: el.querySelector('.price')?.textContent
}))
" | jq '.result'
```
### Error Monitoring
"Monitor console errors on https://example.com for 10 seconds"
```bash
node console.js --url https://example.com --types error,warn --duration 10000
```
## Quick Reference
### Available Scripts
All located in `.claude/skills/chrome-devtools/scripts/`:
**Automation**: `navigate.js`, `screenshot.js`, `click.js`, `fill.js`, `evaluate.js`
**Analysis**: `snapshot.js`, `console.js`, `network.js`, `performance.js`
### Common Options
```bash
--headless false # Show browser window
--close false # Keep browser open for chaining
--timeout 30000 # Set timeout (milliseconds)
--wait-until networkidle2 # Wait strategy
```
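Because every script emits JSON, you can also drive them from your own tooling. A rough TypeScript sketch, assuming `screenshot.js` accepts the flags shown above and prints a JSON object whose exact shape you should confirm from real output:
```typescript
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

async function takeScreenshot(url: string, output: string) {
  const { stdout } = await run(
    'node',
    ['screenshot.js', '--url', url, '--output', output, '--timeout', '30000'],
    { cwd: '.claude/skills/chrome-devtools/scripts' }
  );

  // The JSON shape is assumed here -- inspect the real output first
  return JSON.parse(stdout) as { success?: boolean; path?: string };
}

takeScreenshot('https://example.com', './docs/screenshots/homepage.png')
  .then((result) => console.log(result))
  .catch((err) => console.error(err));
```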
### Setup
```bash
cd .claude/skills/chrome-devtools/scripts
npm install # Install dependencies
./install-deps.sh # Linux/WSL only: system libs
brew install imagemagick # Optional: screenshot compression
```
## Pro Tips
- **Always verify `pwd`** before running scripts to ensure correct output paths
- **Use `snapshot.js`** to discover correct selectors before clicking/filling elements
- **Chain commands** with `--close false` to reuse browser and speed up workflows
- **Install ImageMagick** for automatic screenshot compression (keeps files under 5MB for AI tools)
- **Parse with jq** to extract specific fields from JSON output
- **Not activating?** Say: "Use chrome-devtools skill to take a screenshot of..."
## Related Skills
- [AI Multimodal](/docs/skills/ai/ai-multimodal) - Analyze captured screenshots with vision models
- [Debugging](/docs/skills/tools/debugging) - Systematic debugging workflows
- [Research](/docs/skills/tools/research) - Investigate issues across multiple sources
---
## Key Takeaway
Chrome DevTools skill provides production-ready browser automation through CLI scripts with automatic screenshot compression, JSON output, and comprehensive tooling for testing, scraping, and performance analysis.
---
# Code Review Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/code-review
# Code Review Skill
Enforce verification gates and technical rigor across three critical practices: receiving feedback, requesting reviews, and proving completion claims.
## Core Principle
**Technical correctness over social comfort.** Verify before implementing. Ask before assuming. Evidence before claims. Always honor YAGNI, KISS, and DRY principles.
## When to Use
**Always use for:**
- Receiving code review feedback (especially unclear or questionable items)
- Completing tasks in subagent-driven development (after EACH task)
- Before making ANY completion/success claims (tests pass, build succeeds, bug fixed)
**Especially when:**
- Feedback conflicts with existing decisions or lacks context
- About to commit, push, or create PRs without fresh verification
- External reviewers suggest "proper" features without usage evidence
## The Process
### 1. Receiving Feedback
**Protocol:** READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT
**Rules:**
- ❌ No performative agreement ("You're right!", "Great point!", "Thanks for...")
- ❌ No implementation before verification
- ✅ Restate requirement, ask questions, push back technically
- ✅ If unclear: STOP and ask for clarification first
- ✅ YAGNI check: grep for usage before implementing
**Source handling:**
- Human partner: Trusted - implement after understanding
- External reviewer: Verify technically, check for breakage, push back if wrong
### 2. Requesting Reviews
**When:** After each task, major features, before merge
**Steps:**
```bash
# 1. Get git SHAs
BASE_SHA=$(git rev-parse HEAD~1)
HEAD_SHA=$(git rev-parse HEAD)
# 2. Dispatch code-reviewer subagent with:
# - WHAT_WAS_IMPLEMENTED
# - PLAN_OR_REQUIREMENTS
# - BASE_SHA, HEAD_SHA
# - DESCRIPTION
```
**Act on feedback:**
- Critical: Fix immediately
- Important: Fix before proceeding
- Minor: Note for later
### 3. Verification Gates
**The Iron Law:** NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
**Gate function:**
```
IDENTIFY command → RUN full command → READ output → VERIFY confirms claim → THEN claim
```
**Evidence requirements:**
| Claim | Evidence Required |
|-------|-------------------|
| Tests pass | Test output shows 0 failures |
| Build succeeds | Build command exit 0 |
| Bug fixed | Test original symptom passes |
| Requirements met | Line-by-line checklist verified |
**Red flags - STOP:**
- Using "should"/"probably"/"seems to"
- Expressing satisfaction before verification
- Committing without verification
- Trusting agent reports
- ANY wording implying success without running verification
### 4. Quick Decision Tree
```
SITUATION?
│
├─ Received feedback
│   ├─ Unclear items? → STOP, ask for clarification first
│   ├─ From human partner? → Understand, then implement
│   └─ From external reviewer? → Verify technically before implementing
│
├─ Completed work
│   ├─ Major feature/task? → Request code-reviewer subagent review
│   └─ Before merge? → Request code-reviewer subagent review
│
└─ About to claim status
    ├─ Have fresh verification? → State claim WITH evidence
    └─ No fresh verification? → RUN verification command first
```
## Common Use Cases
### Receiving External Feedback
**Who**: Developer receiving PR comments
```
"External reviewer suggests adding error handling to validateUser(). Before implementing, verify if this function is actually used in production or just tests."
```
### Task Completion Verification
**Who**: Developer in subagent workflow
```
"Just completed the authentication refactor. Before moving to next task, dispatch code-reviewer subagent with BASE_SHA and HEAD_SHA to verify the implementation."
```
### Pre-Commit Evidence Check
**Who**: Developer about to commit
```
"Ready to commit the bug fix. Run full test suite and show output before claiming tests pass."
```
### Unclear Feedback Clarification
**Who**: Developer receiving vague comments
```
"Reviewer says 'improve error handling' but doesn't specify where or why. Stop and ask: Which error paths need handling? What scenarios am I missing?"
```
### YAGNI Enforcement
**Who**: Developer receiving "proper" suggestions
```
"Reviewer suggests adding a caching layer. Grep the codebase for actual usage patterns before implementing. Is this solving a real problem or premature optimization?"
```
## Pro Tips
- **Never assume success** - always verify with fresh evidence
- **Technical rigor first** - no performative agreement, just restate and implement
- **Evidence over claims** - show command output, not opinions
- **Question unclear feedback** - ask before implementing saves rework
- **YAGNI check** - grep before adding suggested "proper" features
- **Not activating?** Say: "Use code-review skill to verify this completion claim with evidence."
## Related Skills
- [Debugging](/docs/skills/tools/debugging) - Debug with evidence-based approach
- [Sequential Thinking](/docs/skills/tools/sequential-thinking) - Systematic problem-solving
- [Planning](/docs/skills/tools/planning) - Task breakdown and verification
## Key Takeaway
Code review requires technical rigor over social comfort. Use verification gates before ANY completion claims (run β read β verify β then claim). Request systematic reviews via code-reviewer subagent after each task. Push back technically on questionable feedback. Evidence, not assumptions.
---
# MCP Management Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/mcp-management
# MCP Management Skill
Discover and execute tools from MCP servers without polluting your context window.
## When to Use
- **Discovering capabilities**: List available MCP tools/prompts/resources
- **Task-based selection**: Find relevant tools for specific operations
- **Programmatic execution**: Call MCP tools with proper parameter handling
- **Context efficiency**: Delegate MCP operations to subagents
## Key Capabilities
| Capability | Method | Output |
|------------|--------|--------|
| **Tool Discovery** | `list-tools` | Saves to `assets/tools.json` |
| **Intelligent Selection** | LLM analysis | Context-aware tool filtering |
| **Auto-Execution** | Gemini CLI | Structured JSON responses |
| **Multi-Server** | Config-driven | Aggregate capabilities |
## Common Use Cases
### Browse Available MCP Tools
**Who**: Developer exploring MCP ecosystem
```
"List all MCP tools configured in this project and save them for quick reference"
```
### Execute Screenshot Tool
**Who**: QA engineer testing UI
```
"Take a screenshot of https://staging.example.com and save it to the screenshots folder"
```
### Query Documentation
**Who**: Developer researching API
```
"Use context7 to find Next.js 15 app router documentation on data fetching patterns"
```
### Store Entity Relationships
**Who**: Backend developer modeling data
```
"Create memory entities for User, Post, and Comment models with their relationships"
```
### Coordinate Multiple Tools
**Who**: DevOps engineer automating tasks
```
"Take a screenshot of the admin panel, then use memory to store the current deployment state"
```
## Pro Tips
**Execution Priority**:
1. **Gemini CLI** (primary) - Fast, automatic tool discovery
2. **Direct scripts** (secondary) - Manual tool specification
3. **mcp-manager agent** (fallback) - Context-efficient delegation
**Critical**: Use stdin piping for Gemini CLI, NOT `-p` flag (skips MCP initialization):
```bash
# ✅ Correct
echo "Take screenshot of example.com" | gemini -y -m gemini-2.5-flash
# ❌ Wrong - skips MCP init
gemini -p "Take screenshot" -y -m gemini-2.5-flash
```
**GEMINI.md Format**: Enforce structured JSON responses by adding:
```
"Return JSON only per GEMINI.md instructions"
```
**Tool Catalog**: Run `list-tools` to persist capabilities to JSON for fast offline reference.
**Not activating?** Say: "Use mcp-management skill to discover available MCP tools"
## Configuration
MCP servers configured in `.claude/.mcp.json`:
```json
{
"mcpServers": {
"context7": {
"command": "npx",
"args": ["-y", "@upstash/context7-mcp"]
},
"memory": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-memory"]
}
}
}
```
**Gemini CLI Integration** (recommended):
```bash
mkdir -p .gemini && ln -sf ../.claude/.mcp.json .gemini/settings.json
```
## Quick Start
### Method 1: Gemini CLI (Recommended)
```bash
# Install
npm install -g @google/gemini-cli
# Create symlink for shared config
mkdir -p .gemini && ln -sf ../.claude/.mcp.json .gemini/settings.json
# Execute with stdin piping
echo "Take a screenshot of https://example.com. Return JSON only per GEMINI.md instructions." | gemini -y -m gemini-2.5-flash
```
**Expected Output**:
```json
{"server":"puppeteer","tool":"screenshot","success":true,"result":"screenshot.png","error":null}
```
### Method 2: Direct Scripts
```bash
cd .claude/skills/mcp-management/scripts && npm install
# List tools (saves to assets/tools.json)
npx tsx cli.ts list-tools
# Execute tool
npx tsx cli.ts call-tool memory create_entities '{"entities":[{"name":"User","type":"model"}]}'
```
### Method 3: mcp-manager Subagent
Use when Gemini CLI unavailable. Subagent handles discovery and execution, keeping main context clean.
## Related Skills
- [Chrome DevTools](/docs/skills/tools/chrome-devtools) - Browser automation via MCP
- [Problem Solving](/docs/skills/tools/problem-solving) - Systematic debugging patterns
- [Debugging](/docs/skills/tools/debugging) - Error investigation strategies
## Key Takeaway
MCP Management provides context-efficient access to external tools through Gemini CLI (primary), direct scripts (secondary), or subagent delegation (fallback). Persistent tool catalogs and structured JSON responses enable programmatic integration without context pollution.
---
# Sequential Thinking Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/sequential-thinking
# Sequential Thinking
Break down complex problems into numbered, reflective thought sequences that can revise, branch, and verify hypotheses dynamically.
## Core Principle
**Complex problems require visible reasoning chains, not jumbled analysis.**
When you work through multi-step problems, your brain benefits from explicit thought progression: "This led to that, which revealed this flaw, so I'll revise my approach." Sequential Thinking makes this process systematic with numbered thoughts, revision markers, and hypothesis verification.
## When to Use
Always use for:
- Multi-step problems with unclear scope
- Debugging that requires hypothesis testing
- Architecture decisions comparing alternatives
- Analysis requiring course correction mid-thought
Especially when:
- You discover new complexity halfway through
- Initial assumptions prove wrong
- Multiple approaches need evaluation
- Problem scope emerges as you think
## The Process
### 1. Start with Loose Estimate
Begin with a rough thought count (`Thought 1/5`), adjust as complexity emerges. Don't overthink the total; it changes.
### 2. Structure Each Thought
- Build on previous context explicitly
- Address ONE aspect per thought
- State assumptions/uncertainties/realizations
- Signal what next thought tackles
### 3. Apply Dynamic Operations
- **Expand**: More complex → increase total (`1/5` becomes `1/8`)
- **Contract**: Simpler → decrease total (`3/8` becomes `3/6`)
- **Revise**: New insight → mark `[REVISION of Thought 2]`
- **Branch**: Multiple paths → explore `[BRANCH A]` and `[BRANCH B]`
### 4. Verify Hypotheses
Use `[HYPOTHESIS]` for proposed solutions, `[VERIFICATION]` for test results. Iterate until verified. Mark final with `[FINAL]`.
## Common Use Cases
### Debugging Authentication Flow
**Who**: Full-stack developer fixing login issues
```
"Users report login works initially but fails after 24 hours. JWT tokens are configured with 24h expiry. Refresh token logic exists in the backend. Help me debug why authentication breaks exactly at the 24h mark."
```
### Architecture Decision for State Management
**Who**: React developer choosing state solution
```
"I need state management for a dashboard app with real-time data sync, optimistic updates, and offline support. Evaluate Redux Toolkit vs TanStack Query + Zustand, considering our team hasn't used either heavily."
```
### API Design Review
**Who**: Backend engineer designing endpoints
```
"Reviewing our new REST API for user profiles. Current design has /users/:id for basic info and /users/:id/details for extended data. Does this split make sense or should we consolidate? Consider performance and frontend DX."
```
### Performance Investigation
**Who**: Frontend dev solving slow renders
```
"React app has slow dashboard rendering. Profiler shows ProductList component re-renders 10+ times per page load. It receives products array from Redux store. Walk me through identifying the root cause and fix."
```
### Database Schema Refactoring
**Who**: Engineer migrating data model
```
"Need to refactor our e-commerce schema. Current design has separate tables for physical/digital products with 60% duplicate columns. Evaluate single-table with discriminator vs polymorphic associations vs current approach."
```
## Pro Tips
**Not activating?** Say: *"Use sequential-thinking skill to analyze this step-by-step with explicit thought markers."*
**Modes:**
- **Explicit**: Use visible `Thought 1/5` markers when complexity warrants or requested
- **Implicit**: Apply methodology internally for routine analysis
**Revision example:**
```
Thought 5/8 [REVISION of Thought 2]: Corrected understanding
- Original: localStorage is sufficient for tokens
- Why revised: XSS vulnerability discovered in dependencies
- Impact: Must switch to httpOnly cookies
```
**Branching example:**
```
Thought 4/7 [BRANCH A from Thought 2]: Redux Toolkit
Pros: Mature, predictable. Cons: Boilerplate, learning curve
Thought 4/7 [BRANCH B from Thought 2]: Zustand
Pros: Simple API, TypeScript. Cons: Less middleware ecosystem
```
## Related Skills
- [Problem Solving](/docs/skills/tools/problem-solving) - General problem decomposition
- [Debugging](/docs/skills/tools/debugging) - Systematic bug investigation
- [Code Review](/docs/skills/tools/code-review) - Technical analysis
## Key Takeaway
Sequential Thinking transforms messy analysis into structured thought chains with revision capability, branching for alternatives, and hypothesis verification; use it explicitly for complex problems or implicitly when structured reasoning improves accuracy.
---
# Planning Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/planning
# Planning
Transform vague requirements into detailed, executable technical plans through systematic research, codebase analysis, and solution design.
## Core Principle
**YAGNI, KISS, DRY first. Plans over code. Brutal honesty over politeness.**
Before any implementation, a well-researched plan eliminates wasted effort, premature optimization, and architectural regret. This skill doesn't write code; it creates battle-tested blueprints that junior developers can execute confidently.
## When to Use
Always use for:
- New feature implementations
- System architecture decisions
- Technical approach evaluation
- Complex refactoring projects
Especially when:
- Requirements span multiple components
- Trade-offs need formal comparison
- Codebase patterns must be preserved
- Security/performance are critical
## The Process
### 1. Initial Analysis
Read codebase docs, `development-rules.md`, existing patterns. Understand constraints before proposing solutions.
### 2. Research Phase
Spawn researcher agents for unknowns. Investigate libraries, patterns, architectural approaches. Skip if reports already provided.
### 3. Codebase Understanding
Use scout agents to analyze structure, dependencies, conventions. Skip if scout reports provided.
### 4. Solution Design
Synthesize research + codebase analysis. Design architecture with clear trade-offs. Provide 2-3 options when no clear winner exists.
### 5. Plan Documentation
Write self-contained plans with:
- Context: Why this problem exists
- Options: Evaluated alternatives with pros/cons
- Recommendation: Chosen approach with rationale
- Phases: Step-by-step implementation breakdown
- Security/Performance: Addressed concerns
- Validation: How to verify success
### 6. Review & Output
Ensure plan is actionable by junior developers. Include code snippets/pseudocode for clarity. Store in timestamped directory.
## Common Use Cases
### Implementing OAuth Authentication
**Who**: Full-stack developer adding social login
```
"Add Google OAuth to our Next.js app. We use NextAuth but haven't configured providers yet. Users should sign in with Google, store profile in PostgreSQL, and maintain sessions securely."
```
### Migrating to Microservices
**Who**: Backend architect refactoring monolith
```
"Plan migration of our user management module from Rails monolith to standalone microservice. Current system handles auth, profiles, and permissions. 50k active users. Zero downtime required."
```
### Real-Time Chat Feature
**Who**: Product engineer adding messaging
```
"Design real-time chat for our SaaS dashboard. Users need 1-on-1 and group conversations with message history, typing indicators, and read receipts. Tech stack: React + Node.js. Consider WebSockets vs SSE."
```
### Database Schema Refactoring
**Who**: Data engineer optimizing queries
```
"Refactor our e-commerce product catalog schema. Current design has 12 joins for category filtering, causing 3s page loads. 100k products, 5k categories. Must maintain backward compatibility during migration."
```
### CI/CD Pipeline Setup
**Who**: DevOps engineer automating deployments
```
"Create CI/CD pipeline for our TypeScript monorepo with 6 apps (3 Next.js, 2 Express APIs, 1 React Native). We use GitHub Actions. Need automated testing, preview deployments, and production releases."
```
## Pro Tips
**Not activating?** Say: *"Use planning skill to create a detailed implementation plan with research and trade-off analysis."*
**Active Plan State**: Planning skill creates `.claude/active-plan` to prevent version proliferation. It prompts "Continue with existing plan? [Y/n]" when resuming work.
**Directory Structure**:
```
plans/20251208-1430-oauth-implementation/
├── research/          # Researcher agent reports
├── scout/             # Codebase analysis reports
├── reports/           # General findings
├── plan.md            # Main plan document
└── phase-01-setup.md  # Implementation phases
```
**Report Location**: All agents write to `{active-plan}/reports/` using format `{agent}-{YYMMDD}-{slug}.md`. Fallback: `plans/reports/`
**Quality Gates**:
- Thorough: Consider edge cases, security, performance
- Specific: Concrete steps, not vague guidance
- Maintainable: Align with codebase patterns
- Actionable: Junior devs can execute without guessing
## Related Skills
- [Sequential Thinking](/docs/skills/tools/sequential-thinking) - Multi-step analysis
- [Problem Solving](/docs/skills/tools/problem-solving) - Complexity resolution
- [Research](/docs/skills/tools/research) - External investigation
## Key Takeaway
Planning creates comprehensive, executable blueprints through research, codebase analysis, and trade-off evaluation. Plans are self-contained documents stored in timestamped directories with active plan tracking to prevent version chaos.
---
# Docs Seeker
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/docs-seeker
# Docs Seeker
Execute scripts to fetch technical documentation from llms.txt sources (context7.com) with automatic query classification and agent distribution strategy.
## When to Use
- Need topic-specific docs (features/components/API methods)
- Looking up library/framework documentation fast
- Analyzing GitHub repositories for architecture
- Large doc sets requiring parallel agent strategy
## Key Capabilities
| Capability | What It Does |
|------------|--------------|
| **Query Detection** | Auto-classifies topic-specific vs general queries |
| **Smart Fetching** | Constructs context7 URLs, handles fallback chains |
| **Result Analysis** | Categorizes URLs, recommends 1/3/7 agent strategies |
| **Zero-Token Scripts** | All logic in Node.js scripts, no prompt overhead |
## Common Use Cases
### Topic-Specific Lookup
**Who**: Developer needing specific feature documentation
**Prompt**: "How do I use date picker in shadcn?"
```bash
node scripts/detect-topic.js "How do I use date picker in shadcn?"  # → {topic, library, isTopicSpecific}
node scripts/fetch-docs.js "How do I use date picker in shadcn?"    # → 2-3 focused URLs
# Read with WebFetch
```
### General Library Docs
**Who**: Developer exploring new framework
**Prompt**: "Get Next.js documentation"
```bash
node scripts/detect-topic.js "Get Next.js documentation"  # → {isTopicSpecific: false}
node scripts/fetch-docs.js "Get Next.js documentation"    # → 8+ URLs
cat llms.txt | node scripts/analyze-llms-txt.js -          # → Agent strategy
# Deploy parallel agents per recommendation
```
### Repository Analysis
**Who**: Team lead investigating library architecture
**Prompt**: "Analyze shadcn/ui repository structure"
```bash
node scripts/fetch-docs.js "github.com/shadcn/ui" # β Repo docs
# Read with WebFetch for architecture insights
```
### Multi-Agent Documentation Research
**Who**: Tech lead needing comprehensive framework knowledge
**Prompt**: "Research React Server Components in Next.js 15"
```bash
node scripts/fetch-docs.js "React Server Components in Next.js 15"  # → Multiple URLs
cat llms.txt | node scripts/analyze-llms-txt.js -                    # → "7 agents recommended"
# Spawn parallel research agents
```
## Quick Reference
**Three-Script Workflow:**
```bash
# 1. Detect query type
node scripts/detect-topic.js "<query>"
# 2. Fetch documentation
node scripts/fetch-docs.js "<query>"
# 3. Analyze for agent strategy (if 8+ URLs)
cat llms.txt | node scripts/analyze-llms-txt.js -
```
**Agent Distribution:**
- 1 agent: Few URLs (3-5)
- 3 agents: Medium coverage (6-12)
- 7 agents: Comprehensive (13+)
- Phased: Massive doc sets (30+)
**Environment Config:**
```
process.env > .claude/skills/docs-seeker/.env > .claude/skills/.env > .claude/.env
```
## Pro Tips
- Scripts handle all URL construction and fallback logic automatically
- Topic queries return 2-3 focused URLs (10-15s), general queries return 8+ (30-60s)
- Use `analyze-llms-txt.js` to get parallel agent recommendations for large doc sets
- Scripts are zero-token execution - no context loading overhead
- **Not activating?** Say: "Use docs-seeker skill to fetch documentation for [library/topic]"
## Related Skills
- [Research](/docs/skills/tools/research) - Documentation research workflows
- [Planning](/docs/skills/tools/planning) - Plan with documentation context
- [MCP Management](/docs/skills/tools/mcp-management) - Manage MCP servers for extended capabilities
---
## Key Takeaway
Script-first documentation discovery with automatic query classification, intelligent URL fetching via context7.com, and parallel agent distribution for comprehensive doc coverage.
---
# systematic-debugging
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/systematic-debugging
# systematic-debugging
Four-phase debugging framework that enforces root cause investigation before attempting any fixes - turning hours of thrashing into 15-minute resolutions.
## Core Principle
**NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST**
Random fixes waste 2-3 hours thrashing. Systematic investigation takes 15-30 minutes and achieves 95% first-time fix rate. If you don't understand the root cause, you're guessing - and guessing introduces new bugs.
## When to Use
Always use for ANY technical issue:
- Test failures, bugs in production, unexpected behavior
- Performance problems, build failures, integration issues
Especially when:
- Under time pressure (systematic is FASTER than random)
- "Quick fix" seems obvious (usually wrong)
- You've tried multiple fixes already
- You don't fully understand the issue
## The Process
### 1. Root Cause Investigation (REQUIRED FIRST)
**Read error messages carefully** - Stack traces often contain exact solution
**Reproduce consistently** - Can you trigger it reliably? If not, gather more data first
**Check recent changes** - Git diff, dependencies, config changes
**Gather evidence** - Add diagnostic logging at component boundaries, trace data flow
**Trace to source** - Where does bad value originate? Fix source, not symptom
### 2. Pattern Analysis
**Find working examples** - Locate similar working code in codebase
**Compare references** - Read reference implementation COMPLETELY, understand fully
**Identify differences** - List every difference, don't assume "that can't matter"
**Check dependencies** - What components, settings, config, environment needed?
### 3. Hypothesis Testing
**Form single hypothesis** - "I think X because Y" (specific, not vague)
**Test minimally** - SMALLEST possible change, one variable at a time
**Verify before continuing** - Worked → Phase 4. Didn't work → NEW hypothesis (don't stack fixes)
**Count attempts** - If 3+ fixes failed → STOP, question architecture
### 4. Implementation
**Create failing test** - Simplest reproduction, automated if possible (MUST have before fixing); see the sketch after this list
**Implement single fix** - Address root cause, ONE change, no "while I'm here" improvements
**Verify completely** - Test passes? No other tests broken? Issue actually resolved?
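A minimal example of the failing-test step; `formatPrice` and the expected output are hypothetical, and the test runner here is Vitest:
```typescript
import { describe, expect, it } from 'vitest';
import { formatPrice } from './formatPrice'; // hypothetical function under investigation

// Phase 4, step 1: encode the observed symptom as the smallest failing test.
// This should FAIL before the fix and pass afterwards.
describe('formatPrice', () => {
  it('keeps two decimal places for whole-dollar amounts', () => {
    expect(formatPrice(10)).toBe('$10.00'); // currently returns '$10' (the bug)
  });
});
```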
## Common Use Cases
### Senior Backend Dev: Test Suite Failing After Dependency Update
**"Use systematic-debugging to investigate test failures: Read error messages completely, check what dependency changed, reproduce locally, trace data flow to find where new version breaks contract, create minimal failing test, fix root cause"**
### Frontend Engineer: Button Click Not Working
**"Use systematic-debugging for button not responding: Check browser console for errors, verify event handler attached, trace click event flow, compare with working buttons, identify missing handler or binding issue"**
### DevOps Engineer: Production API Returning 500 Errors
**"Use systematic-debugging for production 500s: Gather error logs, reproduce in staging, check recent deployments, analyze error patterns, trace request flow through services, identify failing component, fix at source"**
### Full-Stack Dev: Slow Page Load Performance
**"Use systematic-debugging to investigate slow page loads: Measure actual load time, check network waterfall, analyze bundle size, profile JavaScript execution, identify bottleneck, test optimization, verify improvement"**
### Integration Specialist: Third-Party API Integration Failing
**"Use systematic-debugging for API integration: Log request/response at boundary, verify data format matches contract, compare with working examples, check both sides of integration, trace to source of mismatch"**
## Pro Tips
**Red flags to STOP immediately:**
- "Quick fix for now" β Sets bad precedent
- "Just try changing X" β Random guessing
- "Add multiple changes" β Can't isolate what worked
- "Skip the test" β Untested fixes don't stick
- "One more fix" (after 2+) β Architecture problem
**If 3+ fixes failed:** Don't try more fixes. Each failure revealing new problem = architectural issue. Question fundamentals before continuing.
**Emergency = MORE systematic:** Time pressure makes random fixes tempting, but systematic is 8-12x faster than thrashing.
**Not activating?** Say: "Use systematic-debugging skill to investigate this issue - follow the four phases starting with root cause investigation"
## Related Skills
- [debugging](/docs/skills/tools/debugging) - General debugging skill
- [problem-solving](/docs/skills/tools/problem-solving) - Structured problem solving
- [sequential-thinking](/docs/skills/tools/sequential-thinking) - Step-by-step analysis
## Key Takeaway
Systematic debugging converts 2-3 hours of random fix attempts into 15-30 minutes of focused investigation. Always investigate root cause first, fix once, move on. If 3+ fixes fail, stop and question architecture - don't keep trying.
---
# MCP Builder Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/mcp-builder
# MCP Builder Skill
Production-ready MCP (Model Context Protocol) servers that connect Claude to external APIs, databases, and services, built with agent-first design principles.
## When to Use
- Connecting Claude to external APIs (Stripe, GitHub, Slack, etc.)
- Creating database access tools (PostgreSQL, MongoDB, MySQL)
- Building custom business logic tools
- Enabling workflow automation through integrated services
## Key Capabilities
| Capability | Python (FastMCP) | TypeScript (MCP SDK) |
|-----------|-----------------|---------------------|
| Tool Registration | `@mcp.tool` decorator | `server.registerTool` |
| Input Validation | Pydantic v2 models | Zod schemas |
| Async Operations | Native async/await | Promise-based |
| Type Safety | Type hints | TypeScript strict mode |
| Best For | Data/ML/scientific | API wrappers/web services |
**Core Features**:
- Agent-optimized workflow tools (not just API wrappers)
- Context-aware responses (concise vs. detailed)
- Actionable error messages that guide usage
- Pagination, rate limiting, character limits (25K tokens)
- Read-only, destructive, idempotent tool hints
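A minimal TypeScript sketch of the `registerTool` pattern from the table above; the tool name, schema, and response are illustrative, and the exact SDK API should be confirmed against the version you install:
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({ name: 'invoice-tools', version: '1.0.0' });

// One workflow-focused tool instead of a thin endpoint wrapper
server.registerTool(
  'invoice_summary',
  {
    description: 'Summarize open invoices for a customer, concise by default',
    inputSchema: {
      customerId: z.string(),
      detailed: z.boolean().optional(),
    },
  },
  async ({ customerId, detailed }) => {
    const summary = detailed
      ? `Detailed invoice report for ${customerId} (placeholder)`
      : `3 open invoices for ${customerId} (placeholder)`;
    return { content: [{ type: 'text', text: summary }] };
  }
);

await server.connect(new StdioServerTransport());
```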
## Common Use Cases
### Payment Processing Integration
**Who**: SaaS developer adding billing
```
"Use mcp-builder to create Stripe MCP server with:
- Create customer and subscription
- Process payments
- Manage invoices
- Handle webhooks
- List transactions"
```
### Project Management Automation
**Who**: Team lead integrating dev tools
```
"Use mcp-builder for GitHub + Jira MCP server:
- Sync issues between platforms
- Create PRs from Jira tickets
- Track commit status
- Send Slack notifications"
```
### Database Operations
**Who**: Backend developer building admin tools
```
"Use mcp-builder for PostgreSQL MCP server with:
- Execute queries safely
- Schema introspection
- Read-only mode
- Character limit truncation
- Pagination support"
```
### Custom Business Logic
**Who**: Startup automating workflows
```
"Use mcp-builder to create invoice generation MCP server:
- Calculate taxes
- Generate PDF invoices
- Send email notifications
- Update accounting records"
```
### Multi-Service Integration
**Who**: Full-stack developer building AI assistant
```
"Use mcp-builder to combine Slack, Google Calendar, and Notion:
- Schedule meetings
- Send team updates
- Create task lists
- Sync across platforms"
```
## Development Workflow
| Phase | Action |
|-------|--------|
| **1. Research** | Study MCP docs, API docs, design agent-first workflows |
| **2. Plan** | Select high-impact tools, define input/output schemas |
| **3. Implement** | Build shared utilities, implement tools with validation |
| **4. Review** | Check DRY, type safety, error handling, documentation |
| **5. Evaluate** | Create 10 complex questions to test tool effectiveness |
**Testing**: Use evaluation harness or tmux (servers run indefinitely on stdio/stdin).
## Pro Tips
- **Design for workflows, not endpoints**: Combine operations (e.g., check availability + schedule event)
- **Optimize context**: Default to concise responses, provide detailed option when needed
- **Write actionable errors**: Suggest next steps ("Try filter='active_only'")
- **Use consistent prefixes**: Group related tools (e.g., `github_*`, `stripe_*`)
- **Load docs as needed**: MCP protocol, SDK docs, language-specific guides
- **Create evaluations**: 10 complex, read-only questions to verify tool quality
- **Not activating?** Say: "Use mcp-builder skill to..."
## Configuration
**Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"my-server": {
"command": "node",
"args": ["/absolute/path/to/dist/index.js"],
"env": {
"API_KEY": "your-api-key"
}
}
}
}
```
**Python**: Use `python /path/to/server.py` as command
**TypeScript**: Use `node /path/to/dist/index.js` after `npm run build`
## Related Skills
- [MCP Management](/docs/skills/tools/mcp-management) - Install and manage MCP servers
- [Databases](/docs/skills/backend/databases) - PostgreSQL/MongoDB integration
- [Backend Development](/docs/skills/backend/backend-development) - API design patterns
- [Planning](/docs/skills/tools/planning) - Design complex integrations
---
## Key Takeaway
Use MCP Builder to create production-ready servers that connect Claude to any external service. Follows agent-first design principles with workflow-focused tools, context optimization, and comprehensive evaluation harnesses for quality assurance.
---
# Repomix Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/repomix
# Repomix Skill
Package entire repositories into single AI-friendly files optimized for LLM context.
## When to Use
- Packaging codebases for AI analysis or code review
- Analyzing third-party libraries before integration
- Creating repository snapshots with token management
- Preparing security audit contexts with sensitive data filtering
## Key Capabilities
| Capability | Description |
|-----------|-------------|
| **Multiple Formats** | XML, Markdown, JSON, Plain text output |
| **Token Counting** | Automatic counting with per-file/directory breakdown |
| **Remote Support** | Package GitHub repos without cloning |
| **Comment Stripping** | Remove comments from 20+ languages |
| **Security Checks** | Secretlint integration for API keys, passwords, credentials |
| **Git Integration** | Respects .gitignore patterns |
## Common Use Cases
### Code Review Prep
**Who**: Senior developer preparing feature branch
```
"Package the src/auth module with type definitions for code review. Remove comments, output as markdown."
```
### Security Audit
**Who**: Security engineer evaluating third-party library
```
"Package the owner/suspicious-library repo from GitHub. Include security check for credentials and API keys."
```
### Bug Investigation
**Who**: Backend developer debugging auth flow
```
"Package src/auth and src/api directories with all TypeScript files. Include token count tree to see largest files."
```
### Documentation Context
**Who**: Tech writer generating API reference
```
"Package src directory and all markdown files. Output as single markdown file with file structure preserved."
```
### Token Optimization
**Who**: AI engineer managing LLM context limits
```
"Show token count tree for this repo. Focus on directories over 1000 tokens. Help me decide what to include."
```
## Quick Reference
### Basic Commands
```bash
# Package current directory (generates repomix-output.xml)
repomix
# Specify output format
repomix --style markdown
repomix --style json
# Package remote repository
npx repomix --remote owner/repo
# Custom output with filters
repomix --include "src/**/*.ts" --remove-comments -o output.md
```
### File Selection
```bash
# Include patterns
repomix --include "src/**/*.ts,*.md"
# Ignore patterns
repomix -i "tests/**,*.test.js"
# Disable .gitignore rules
repomix --no-gitignore
```
### Output Options
```bash
# Output format: markdown, xml, json, plain
repomix --style markdown -o output.md
# Remove comments
repomix --remove-comments
# Copy to clipboard
repomix --copy
```
### Token Management
```bash
# Show token count tree
repomix --token-count-tree
# Show only 1000+ token files
repomix --token-count-tree 1000
```
**LLM Context Limits:**
- Claude Sonnet 4.5: ~200K tokens
- GPT-4: ~128K tokens
- GPT-3.5: ~16K tokens
## Pro Tips
- **Review output before sharing** - Always check for hardcoded secrets
- **Use .repomixignore** - Add sensitive file patterns (see the example below)
- **Token count first** - Run `--token-count-tree` to optimize includes
- **Disable security checks** - Use `--no-security-check` for known-safe repos
- **Not activating?** Say: "Use repomix skill to package this repo for AI analysis"
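As an illustration of the `.repomixignore` tip above: the file uses .gitignore-style patterns, and the entries below are examples to adjust for your project.
```bash
# Create a .repomixignore that excludes common sensitive files (example patterns)
cat > .repomixignore <<'EOF'
.env*
secrets/
**/*.pem
EOF
```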
## Related Skills
- [Code Review](/docs/skills/tools/code-review) - AI-powered code analysis
- [Research](/docs/skills/tools/research) - Investigate unfamiliar codebases
- [Debugging](/docs/skills/tools/debugging) - Systematic bug isolation
## Key Takeaway
Repomix packages repositories into LLM-ready files with token counting, security checks, and format flexibility. Essential for preparing large codebases for AI consumption.
---
# Problem-Solving Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/problem-solving
# Problem-Solving Skill
Systematic approaches for different types of stuck-ness. Each technique targets specific problem patterns.
## When to Use
- **Complexity spiraling**: Multiple implementations, growing special cases
- **Innovation blocks**: Conventional solutions inadequate
- **Recurring patterns**: Same issue across domains
- **Assumption constraints**: "Only way" thinking
- **Scale uncertainty**: Production readiness unclear
- **General stuck-ness**: Unsure which technique applies
## Quick Dispatch
| Stuck Symptom | Technique |
|---------------|-----------|
| Same thing implemented 5+ ways | Simplification Cascades |
| Conventional solutions inadequate | Collision-Zone Thinking |
| Same issue in different places | Meta-Pattern Recognition |
| Solution feels forced | Inversion Exercise |
| Will this work at production scale? | Scale Game |
| Unsure which technique | When Stuck flowchart |
## Core Techniques
### 1. Simplification Cascades
Find one insight eliminating multiple components. "If this is true, we don't need X, Y, Z."
**Key insight**: Everything is a special case of one general pattern.
**Red flag**: "Just need to add one more case..." (repeating forever)
### 2. Collision-Zone Thinking
Force unrelated concepts together to discover emergent properties. "What if we treated X like Y?"
**Key insight**: Revolutionary ideas from deliberate metaphor-mixing.
**Red flag**: "I've tried everything in this domain"
### 3. Meta-Pattern Recognition
Spot patterns appearing in 3+ domains to find universal principles.
**Key insight**: Patterns in how patterns emerge reveal reusable abstractions.
**Red flag**: "This problem is unique" (probably not)
### 4. Inversion Exercise
Flip core assumptions to reveal hidden constraints. "What if the opposite were true?"
**Key insight**: Valid inversions reveal context-dependence of "rules."
**Red flag**: "There's only one way to do this"
### 5. Scale Game
Test at extremes (1000x bigger/smaller, instant/year-long) to expose fundamental truths.
**Key insight**: What works at one scale fails at another.
**Red flag**: "Should scale fine" (without testing)
## Application Process
1. **Identify stuck-type**: Match symptom to technique
2. **Load detailed reference**: Read specific technique
3. **Apply systematically**: Follow technique's process
4. **Document insights**: Record what worked/failed
5. **Combine if needed**: Some problems need multiple techniques
## Powerful Combinations
- **Simplification + Meta-pattern**: Find pattern, simplify all instances
- **Collision + Inversion**: Force metaphor, invert assumptions
- **Scale + Simplification**: Extremes reveal what to eliminate
- **Meta-pattern + Scale**: Universal patterns tested at extremes
## Reference Navigation
| Topic | Reference |
|-------|-----------|
| Dispatch Flowchart | `references/when-stuck.md` |
| Simplification | `references/simplification-cascades.md` |
| Collision-Zone | `references/collision-zone-thinking.md` |
| Meta-Pattern | `references/meta-pattern-recognition.md` |
| Inversion | `references/inversion-exercise.md` |
| Scale Game | `references/scale-game.md` |
| Attribution | `references/attribution.md` |
## Integration with ClaudeKit
Works with:
- **sequential-thinking**: Structured problem decomposition
- **debugging**: Root cause investigation
- **planning**: Solution design
---
## Key Takeaway
Match stuck symptoms to specific techniques: Simplification Cascades for complexity, Collision-Zone for innovation blocks, Meta-Pattern for recurring issues, Inversion for assumptions, Scale Game for production readiness.
---
# document-skills
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/document-skills
# document-skills
Read, parse, create, and manipulate PDF, Word, PowerPoint, and Excel documents with formula preservation and format conversion.
## When to Use
- Extract text, tables, or data from existing documents (invoices, reports, forms)
- Generate professional documents programmatically (reports, presentations, spreadsheets)
- Convert between formats while preserving structure (DOCX to PDF, Excel to JSON)
- Automate document workflows at scale (merge PDFs, batch processing)
## Key Capabilities
| **Skill** | **Read** | **Create** | **Special Features** |
|-----------|----------|------------|----------------------|
| pdf | Text, tables, forms | Merge, split, watermark | OCR scanned PDFs, form filling |
| docx | Text, tracked changes | Professional docs | Redlining, comments, formatting |
| pptx | Slides, speaker notes | Presentations | HTML conversion, templates |
| xlsx | Data, formulas | Spreadsheets | Formula recalc, financial models |
## Common Use Cases
**Accountant extracting invoice data**
> "Use document-skills/pdf to extract all line items, amounts, and vendor info from these 50 invoices and save to Excel"
**Lawyer reviewing contracts**
> "Use document-skills/docx to analyze contract.docx, track changes for proposed amendments to payment terms in Section 5, and generate redlined version"
**Analyst building financial model**
> "Use document-skills/xlsx to create DCF model with assumptions sheet, 5-year projections, formulas for NPV/IRR, and blue text for inputs"
**Marketing team generating reports**
> "Use document-skills/pptx to create quarterly deck using template.pptx - duplicate slide 3 for each region, replace text with Q4 metrics, output presentation.pptx"
**Compliance team processing forms**
> "Use document-skills/pdf to fill out tax forms from data.json, merge into single PDF, and validate all required fields completed"
## Quick Reference
**PDF Operations**
```
Extract tables: "Use document-skills/pdf to extract tables from report.pdf and save as CSV"
Merge files: "Use document-skills/pdf to merge contract.pdf, terms.pdf, exhibit.pdf into final.pdf"
Fill forms: "Use document-skills/pdf to fill W-9 form with company data and flatten fields"
```
**Word Documents**
```
Read content: "Use document-skills/docx to convert agreement.docx to markdown preserving structure"
Track changes: "Use document-skills/docx to add tracked change replacing '30 days' with '60 days' in Section 3.2"
Create doc: "Use document-skills/docx to generate report with headings, tables, and formatted text"
```
**PowerPoint**
```
Use template: "Use document-skills/pptx to duplicate slides 0,5,5,12 from template.pptx, replace text with new content"
Extract text: "Use document-skills/pptx to extract all slide text and speaker notes to JSON"
Create deck: "Use document-skills/pptx to build 10-slide presentation with charts and bullet points"
```
**Excel**
```
Read data: "Use document-skills/xlsx to load sales.xlsx and analyze revenue by region"
Create model: "Use document-skills/xlsx to build budget with formulas, blue inputs, formatted numbers"
Recalc formulas: "Use document-skills/xlsx to recalculate all formulas in model.xlsx and check for errors"
```
## Pro Tips
- **Specify format details**: "Extract tables preserving merged cells" vs just "extract tables"
- **Chain operations**: Read → Process → Create in one request for efficiency
- **Use templates**: Faster than creating from scratch for consistent branding
- **Validate formulas**: Always recalculate Excel files after modification (`recalc.py`)
- **Batch processing**: Process multiple files in one request for large jobs
**Not activating?** Say: *"Use document-skills skill to [your task]"* or reference specific sub-skill like *"Use document-skills/pdf to..."*
## Related Skills
- [ai-multimodal](/docs/skills/ai/ai-multimodal) - Analyze document images with AI
- [gemini-vision](/docs/skills/ai/gemini-vision) - OCR and visual document analysis
- [research](/docs/skills/tools/research) - Extract insights from documents
- [repomix](/docs/skills/tools/repomix) - Package documents for AI analysis
## Key Takeaway
document-skills handles all major document formats with professional precision. Extract invoice data, track contract changes, generate financial models, build presentations - all automated and production-ready.
---
# skill-creator
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/skill-creator
# skill-creator
Transform your workflow knowledge into reusable AI skills that Claude can activate automatically.
## When to Use
- **API integration** - Create skills for Stripe, Twilio, or custom APIs with proper patterns
- **Company knowledge** - Encode coding standards, schemas, or business logic into portable skills
- **Reusable workflows** - Turn deployment, testing, or ETL processes into repeatable instructions
- **Framework guides** - Build skills for Vue, Django, or tech stacks with bundled resources
## Key Capabilities
| Feature | What It Does |
|---------|-------------|
| SKILL.md Generation | Creates structured skill files with metadata and instructions |
| Bundled Resources | Adds scripts, references, and assets for complex tasks |
| Progressive Disclosure | Keeps skills under 100 lines, references external files as needed |
| Validation & Packaging | Validates structure and packages skills as distributable ZIPs |
## Common Use Cases
### API Integration Developer
**Prompt:** "Use skill-creator to make Stripe payment skill with webhooks, subscriptions, and error handling"
### DevOps Engineer
**Prompt:** "Use skill-creator for deployment workflow: build, test staging, smoke tests, production deploy"
### Engineering Lead
**Prompt:** "Use skill-creator for team coding standards: file structure, naming conventions, error patterns, testing requirements"
### Data Engineer
**Prompt:** "Use skill-creator for BigQuery skill with table schemas, join patterns, and optimization rules"
### Frontend Developer
**Prompt:** "Use skill-creator for React skill with component patterns, hooks best practices, and state management"
## Quick Reference
### Invoke Skill
```bash
"Use skill-creator to create skill for [purpose]:
- [capability 1]
- [capability 2]
- [best practices]"
```
### Skill Structure
```
.claude/skills/skill-name/
├── SKILL.md       # <100 lines, core instructions
├── scripts/       # Python/Node scripts with tests
├── references/    # Documentation loaded as needed
└── assets/        # Templates, images, boilerplate
```
### Skill Creation Process
1. **Clarify** - skill-creator asks about use cases and functionality
2. **Design** - Plans structure: scripts, references, assets needed
3. **Initialize** - Runs `init_skill.py` to create template
4. **Implement** - Writes SKILL.md, adds bundled resources, writes tests
5. **Validate** - Runs `package_skill.py` to check structure and package
6. **Save** - Outputs to `.claude/skills/` for immediate use
### Key Requirements
- **SKILL.md**: <100 lines, uses imperative form ("To do X, run Y")
- **Scripts**: Include tests, respect `.env` hierarchy, prefer Python/Node over Bash
- **References**: <100 lines each, split large docs using progressive disclosure
- **Metadata**: Specific `description` triggers automatic activation
## Pro Tips
- **Not activating?** Say: "Use skill-creator skill to create a [domain] skill"
- **Large documentation?** Split into multiple reference files, let Claude load as needed
- **Repetitive code?** Move to scripts with tests instead of regenerating
- **Company schemas?** Store in `references/` so Claude doesn't rediscover each time
- **Boilerplate?** Add to `assets/` as templates Claude can copy and modify
## Related Skills
- [/docs/skills/tools/claude-code-skill](/docs/skills/tools/claude-code-skill) - Create skills via CLI commands
- [/docs/skills/tools/planning](/docs/skills/tools/planning) - Design complex workflows before building skills
- [/docs/skills/tools/mcp-management](/docs/skills/tools/mcp-management) - Manage Model Context Protocol integrations
## Key Takeaway
skill-creator turns procedural knowledge into AI-activatable skills. Describe what you need, get a validated skill package ready for your team or personal toolkit.
---
# Research Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/research
# Research
Validate technical decisions with multi-source research before implementation. Max 5 searches per task.
## Core Principle
**Honor YAGNI, KISS, DRY. Research to eliminate uncertainty, not satisfy curiosity.**
Research is waste if it doesn't inform a decision. Before searching, know what decision depends on the answer. Focus on authoritative sources (docs, repos, CVEs), cross-reference for accuracy, prioritize last 12 months. Brutal honesty over diplomatic hedging.
## When to Use
Always for:
- Evaluating libraries/frameworks before adoption
- Investigating security vulnerabilities or best practices
- Comparing solution approaches with unclear trade-offs
- Creating technical specs that require evidence
Especially when:
- Team unfamiliar with technology (reduce guesswork)
- Security/performance critical (need benchmarks, CVEs)
- Multiple solutions exist (comparative analysis required)
- Legacy migration (must verify deprecation/compatibility)
## The Process
### 1. Define Scope + Decision Criteria
Identify: What decision needs this research? What facts will change the outcome? Set boundaries (depth, recency, sources). Example: "Which auth lib for Next.js?" → Criteria: maintained, security track record, TypeScript support, <50kb.
### 2. Systematic Search (Max 5)
**Preferred**: `gemini -m gemini-2.5-flash -p "your prompt"` (if available)
**Fallback**: `WebSearch` tool
**Query fan-out** across:
- Official docs, GitHub repos, changelogs
- CVE databases (security topics)
- Recognized experts/conferences (videos)
- Benchmarks, case studies (performance)
Cross-reference minimum 3 sources. Check dates. Identify consensus vs. outliers.
### 3. Analyze + Synthesize
Compare: Pros/cons, maturity, adoption, security, performance, compatibility. Flag: Deprecated features, breaking changes, unresolved issues. Output: Actionable recommendation with evidence.
### 4. Generate Report
**Location**: `{active-plan}/reports/researcher-YYMMDD-topic.md` (fallback: `plans/reports/`)
**Structure**:
```markdown
# Research: [Topic]
## Decision Summary
[1-2 paragraphs: recommendation + reasoning]
## Methodology
Sources: [list], Date range: [X], Search terms: [Y]
## Findings
### Technology Overview
### Best Practices
### Security/Performance
### Trade-offs
## Recommendation
### Quick Start
### Code Example
### Pitfalls to Avoid
## References
[Links with titles]
## Unresolved
[Open questions if any]
```
Sacrifice grammar for concision. Use code blocks, mermaid/ASCII diagrams.
## Common Use Cases
### Auth Library Selection
**Who**: Full-stack dev building SaaS
```
"Research Next.js auth solutions. Need: Prisma integration, OAuth providers, session management, TypeScript. Compare NextAuth vs Clerk vs Supabase Auth. Recommend one."
```
### Performance Optimization Approach
**Who**: Frontend engineer with slow app
```
"React dashboard renders slowly. Research: Code splitting strategies, lazy loading patterns, bundle analysis tools. Focus on Vite-specific optimizations and real-world benchmarks."
```
### Database Migration Path
**Who**: Backend lead planning Postgres upgrade
```
"Migrating from Postgres 12 to 16. Research breaking changes, performance improvements, migration tools. Check for issues with Prisma 5.x compatibility."
```
### Security Vulnerability Assessment
**Who**: DevOps investigating CVE
```
"CVE-2024-XXXX affects our Express version. Research: Impact scope, exploit difficulty, patching strategy, workaround options. Check if Next.js 14 affected."
```
### New Framework Evaluation
**Who**: Team considering framework switch
```
"Evaluate Astro vs Next.js for content site. Priorities: Build speed, SEO, partial hydration, markdown support, deployment simplicity. Need hard data on build times."
```
## Pro Tips
**Not activating?** Say: *"Use research skill to investigate [topic] with multi-source validation and generate a report."*
**Budget searches**:
- Search 1-2: Broad discovery (official docs, popular articles)
- Search 3-4: Deep dive (specific features, benchmarks, CVEs)
- Search 5: Edge cases/unresolved questions
**Red flags**:
- Only one source for critical claim
- Dates >2 years old for fast-moving tech
- Vague claims without examples/data
**Quality checklist**:
- [ ] 3+ authoritative sources cross-referenced
- [ ] Code examples included (if applicable)
- [ ] Security implications addressed
- [ ] Performance data cited (if relevant)
- [ ] Migration/compatibility notes clear
**Report efficiently**:
- Use bullet points over paragraphs
- Lead with recommendation, evidence second
- Skip obvious background (assume technical audience)
- List unresolved questions at end
## Related Skills
- [Docs Seeker](/docs/skills/tools/docs-seeker) - Documentation lookup
- [Sequential Thinking](/docs/skills/tools/sequential-thinking) - Structured analysis
- [Planning](/docs/skills/tools/planning) - Solution design
## Key Takeaway
Research skill produces evidence-based technical decisions through systematic multi-source validation, with reports focusing on actionable recommendations over theory. Max 5 searches; think before each query.
---
# Claude Code Skill
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/claude-code-skill
# Claude Code Skill
Anthropic's agentic coding tool combining autonomous planning, execution, and validation with extensibility through skills, plugins, MCP servers, and hooks.
## When to Use
- Installation and setup questions
- Slash commands (/cook, /plan, /fix, /test, /docs, /design, /git)
- Creating/managing Agent Skills
- Configuring MCP servers
- Setting up hooks/plugins
- IDE integration (VS Code, JetBrains)
- CI/CD workflows
- Enterprise deployment (SSO, RBAC, sandboxing)
- Troubleshooting authentication/performance issues
- Advanced features (extended thinking, caching, checkpointing)
## Core Concepts
| Concept | Description |
|---------|-------------|
| **Subagents** | Specialized agents (planner, code-reviewer, tester, debugger, docs-manager, ui-ux-designer, database-admin) |
| **Agent Skills** | Modular capabilities with SKILL.md + bundled resources loaded progressively |
| **Slash Commands** | User-defined operations in `.claude/commands/` expanding to prompts |
| **Hooks** | Event-driven shell commands (SessionStart, PreToolUse, PostToolUse, Stop, SubagentStop) |
| **MCP Servers** | Model Context Protocol integrations for external tools |
| **Plugins** | Packaged collections distributed via marketplace |
## Reference Navigation
| Topic | Reference |
|-------|-----------|
| Installation & setup | `references/getting-started.md` |
| Slash commands | `references/slash-commands.md` |
| Workflow examples | `references/common-workflows.md` |
| Creating skills | `references/agent-skills.md` |
| MCP servers | `references/mcp-integration.md` |
| Hooks system | `references/hooks-comprehensive.md` |
| Plugins | `references/hooks-and-plugins.md` |
| Configuration | `references/configuration.md` |
| Enterprise | `references/enterprise-features.md` |
| IDE integration | `references/ide-integration.md` |
| CI/CD | `references/cicd-integration.md` |
| Advanced features | `references/advanced-features.md` |
| Troubleshooting | `references/troubleshooting.md` |
| API reference | `references/api-reference.md` |
| Best practices | `references/best-practices.md` |
## Usage Instructions
When answering questions:
1. Identify topic from user query
2. Load relevant reference files
3. Provide specific guidance with examples
4. For complex queries, load multiple references
## Documentation Sources
- Context7: `https://context7.com/websites/claude_en_claude-code/llms.txt?tokens=10000`
- Topic search: `https://context7.com/websites/claude_en_claude-code/llms.txt?topic=&tokens=5000`
- [Official docs](https://docs.claude.com/en/docs/claude-code/)
- [GitHub](https://github.com/anthropics/claude-code)
- Support: support.claude.com
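To pull one of these sources locally for inspection, a plain fetch works (assuming `curl` is installed; the token budget is just a query parameter):
```bash
# Preview the Context7 index for the Claude Code docs listed above
curl -s "https://context7.com/websites/claude_en_claude-code/llms.txt?tokens=10000" | head -n 40
```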
## Integration with ClaudeKit
Works with:
- **skill-creator**: Create new skills
- **mcp-management**: Manage MCP servers
- **mcp-builder**: Build MCP servers
---
## Key Takeaway
Claude Code skill provides guidance for Anthropic's agentic coding tool including installation, slash commands, skills, MCP integration, hooks, IDE integration, and enterprise features. Load specific references based on user query topic.
---
# ffmpeg
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/ffmpeg
# ffmpeg
Process video and audio with professional-grade encoding, conversion, streaming, and filtering.
## When to Use
- **Format Conversion** - Convert MKV/AVI/MOV to MP4/WebM with optimal codecs
- **Video Compression** - Reduce file size while maintaining quality (4K to 1080p, web optimization)
- **Audio Extraction** - Pull audio tracks from videos as MP3/AAC/FLAC
- **Media Processing** - Apply filters, create GIFs, generate thumbnails, batch process files
## Key Capabilities
| Capability | What It Does | Example |
|------------|--------------|---------|
| **Video Encoding** | Convert with H.264/H.265/VP9/AV1 codecs | `ffmpeg -i input.mkv -c:v libx264 -crf 23 output.mp4` |
| **Audio Processing** | Extract, convert, normalize audio | `ffmpeg -i video.mp4 -vn -c:a copy audio.m4a` |
| **Stream Manipulation** | Fast copy without re-encoding | `ffmpeg -i input.mkv -c copy output.mp4` |
| **Filtering** | Scale, crop, denoise, rotate, watermark | `ffmpeg -i input.mp4 -vf scale=1280:720 output.mp4` |
| **Streaming** | RTMP/HLS live streaming | `ffmpeg -i input.mp4 -f hls playlist.m3u8` |
| **Hardware Accel** | GPU encoding (NVENC, QSV) | `ffmpeg -hwaccel cuda -i input.mp4 -c:v h264_nvenc output.mp4` |
## Common Use Cases
**Video Producer** converting raw footage to web-ready format
```
"Use ffmpeg skill to convert 4K.mov to 1080p MP4 with H.264, CRF 22, and AAC audio"
```
**Content Creator** extracting audio for podcast
```
"Use ffmpeg skill to extract audio from all videos in /recordings as high-quality MP3"
```
**Web Developer** optimizing video for site performance
```
"Use ffmpeg skill to compress video.mp4 for web: 720p, small file size, maintain quality"
```
**Live Streamer** setting up RTMP stream
```
"Use ffmpeg skill to stream video.mp4 to Twitch RTMP with 1080p, 3000k bitrate"
```
**Designer** creating animated GIFs
```
"Use ffmpeg skill to create high-quality GIF from video clip (5s-10s) at 15fps"
```
## Quick Reference
**Fast Format Conversion (No Re-encoding)**
```bash
ffmpeg -i input.mkv -c copy output.mp4
```
**Quality Compression (Recommended Settings)**
```bash
# H.264 with CRF 23 (lower = better quality)
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k output.mp4
```
**Extract Audio as MP3**
```bash
ffmpeg -i video.mp4 -vn -q:a 0 audio.mp3
```
**Resize to 720p (Keep Aspect Ratio)**
```bash
# -2 keeps the aspect ratio while guaranteeing an even width (required by H.264)
ffmpeg -i input.mp4 -vf scale=-2:720 output.mp4
```
**Create Thumbnail at 5s**
```bash
ffmpeg -ss 00:00:05 -i input.mp4 -frames:v 1 thumbnail.png
```
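**Stream to an RTMP Endpoint (sketch)**
A hedged example of the live-streaming use case above; the ingest URL and stream key are placeholders for your platform's values, and the bitrate mirrors the 1080p/3000k prompt.
```bash
ffmpeg -re -i video.mp4 \
  -c:v libx264 -preset veryfast -b:v 3000k -maxrate 3000k -bufsize 6000k \
  -vf scale=-2:1080 -c:a aac -b:a 160k \
  -f flv rtmp://<ingest-server>/app/<stream-key>
```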
**Batch Convert MKV to MP4**
```bash
for file in *.mkv; do ffmpeg -i "$file" -c:v libx264 -crf 22 "${file%.mkv}.mp4"; done
```
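**Create a High-Quality GIF (sketch)**
One way to cover the GIF use case above, using ffmpeg's palette filters; the start time, duration, width, and frame rate are example values.
```bash
# 5-second clip starting at 00:00:05, 15 fps, palette-based encoding for better colors
ffmpeg -ss 00:00:05 -t 5 -i input.mp4 \
  -filter_complex "[0:v]fps=15,scale=480:-1:flags=lanczos,split[a][b];[a]palettegen[p];[b][p]paletteuse" \
  output.gif
```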
**Codec Quick Ref**
- **H.264** (`libx264`) - Universal compatibility, web/streaming
- **H.265** (`libx265`) - 50% smaller files, 4K, slower encoding
- **AAC** - Industry standard audio
- **CRF 18-23** - High quality range (lower = better)
- **Preset slow** - Better compression (ultrafast/fast/medium/slow)
## Pro Tips
**Quality Control**
- Start with CRF 23, adjust lower (better) or higher (smaller)
- Use `slow` preset for final output, `fast` for testing
- Test on 10-second clip before processing full video
**Performance**
- Use `-c copy` when format allows (no quality loss, instant)
- Enable hardware acceleration: `-hwaccel cuda` (NVIDIA) or `-hwaccel qsv` (Intel)
- Batch process with shell loops for multiple files
**Common Patterns**
- Extract frame: `-ss HH:MM:SS -frames:v 1`
- Trim video: `-ss 00:01:00 -to 00:02:00 -c copy`
- Normalize audio: `-af loudnorm`
- Detect black borders: `-vf cropdetect` (then apply the suggested `crop=w:h:x:y` values)
**Not activating?** Say: *"Use ffmpeg skill to [your task]"*
## Related Skills
- [Media Processing](/docs/skills/tools/media-processing) - Combined FFmpeg + ImageMagick + RMBG
- [ImageMagick](/docs/skills/tools/imagemagick) - Image manipulation and batch processing
- [Backend Development](/docs/skills/backend/backend-development) - Server-side media pipelines
## Key Takeaway
ffmpeg is the industry-standard multimedia Swiss Army knife. Master CRF values (18-28), use `-c copy` for fast conversions, and leverage hardware acceleration for production workloads. One tool handles all video/audio encoding, conversion, streaming, and filtering needs.
---
# imagemagick
Section: docs
Category: skills/tools
URL: https://docs.claudekit.cc/docs/tools/imagemagick
# imagemagick
Command-line powerhouse for image processing - format conversion, resizing, effects, batch operations.
## When to Use
- **Format conversion** - PNG/JPEG/WebP/GIF between formats with quality control
- **Batch processing** - Resize, optimize, watermark hundreds of images at once
- **Web optimization** - Create responsive sizes, strip metadata, compress for web
- **Effects & filters** - Blur, sharpen, color adjustments, artistic effects
## Key Capabilities
| Operation | Examples |
|-----------|----------|
| **Resize** | Fit, fill, crop, maintain aspect, exact dimensions |
| **Convert** | PNG ↔ JPEG ↔ WebP with quality settings |
| **Effects** | Blur, sharpen, grayscale, sepia, watermarks |
| **Batch** | Process entire folders with mogrify or loops |
| **Optimize** | Strip metadata, progressive JPEGs, WebP compression |
## Common Use Cases
### E-commerce Product Manager
**Who:** Store owner with 200 product photos
**Prompt:** "Use imagemagick to batch process product images: resize to 1000x1000 square from center, convert to WebP 85% quality, create 300x300 thumbnails"
### Social Media Manager
**Who:** Content creator preparing Instagram posts
**Prompt:** "Use imagemagick to create Instagram square (1080x1080) from these landscape photos, center crop, add white watermark bottom-right"
### Web Developer
**Who:** Dev optimizing site performance
**Prompt:** "Use imagemagick to generate responsive image set: create 320w, 640w, 1024w, 1920w versions of all JPEGs, WebP format, strip metadata"
### Email Marketer
**Who:** Newsletter designer reducing image sizes
**Prompt:** "Use imagemagick to optimize images for email: max width 600px, JPEG quality 75, strip all metadata, progressive encoding"
### Photographer
**Who:** Pro photographer watermarking portfolio
**Prompt:** "Use imagemagick to add logo watermark to all photos in folder, position bottom-right with 20px margin, 50% opacity"
## Quick Reference
```bash
# Resize (maintain aspect)
magick input.jpg -resize 800x600 output.jpg
# Square crop from center
magick input.jpg -resize 1000x1000^ -gravity center -extent 1000x1000 output.jpg
# Convert format
magick input.png -quality 85 output.jpg
# Batch resize all
mogrify -resize 800x *.jpg
# Batch to different folder
mogrify -path ./output -resize 800x600 *.jpg
# Web optimize
magick input.jpg -quality 85 -strip -interlace Plane output.jpg
# Add watermark
magick input.jpg logo.png -gravity southeast -geometry +10+10 -composite output.jpg
# Grayscale
magick input.jpg -colorspace Gray output.jpg
# Create thumbnails
for f in *.jpg; do magick "$f" -thumbnail 200x200 "thumb_$f"; done
# Responsive sizes
for w in 320 640 1024 1920; do magick input.jpg -resize ${w}x out-${w}w.jpg; done
```
**Geometry syntax:**
- `800x600` = fit within (maintains aspect)
- `800x600!` = exact size (may distort)
- `800x600^` = fill (may crop)
- `800x` = width 800, auto height
- `x600` = height 600, auto width
## Pro Tips
**Quality settings:**
- 95-100: Archival (large files)
- 85-90: Web publishing (recommended)
- 70-80: Email, thumbnails
- WebP: Use 80-85 for best size/quality
**Batch best practices:**
- Always back up originals before mogrify (it modifies files in place); see the sketch below
- Test command on single file first
- Use `-path` to preserve originals: `mogrify -path ./output ...`
- For complex operations, use loops instead of mogrify
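A quick sketch of the backup-first tip above (folder names are placeholders):
```bash
# mogrify overwrites files in place, so copy the originals first
cp -r ./photos ./photos-backup
mogrify -resize 800x -quality 85 ./photos/*.jpg
```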
**Web optimization combo:**
```bash
magick input.jpg -resize 1920x -quality 85 -strip -interlace Plane output.jpg
```
**Not activating?** Say: "Use imagemagick skill to resize these images to 800px width and convert to WebP"
## Related Skills
- [FFmpeg](/docs/skills/tools/ffmpeg) - Video/audio processing
- [Frontend Design](/docs/skills/frontend/frontend-design) - UI component creation
- [Chrome DevTools](/docs/skills/tools/chrome-devtools) - Browser debugging
## Key Takeaway
ImageMagick = Swiss Army knife for images. One command processes hundreds of files. Essential for any workflow involving image optimization, batch conversion, or automated resizing.
---
# Getting Started
Section: getting-started
Category: N/A
URL: https://docs.claudekit.cc/docs/getting-started/index
# Getting Started with ClaudeKit
ClaudeKit supercharges Claude Code with agents, commands, and skills for production-grade development. Get started in 15 minutes.
## Quick Paths
### New to ClaudeKit?
1. [Introduction](/docs/getting-started/introduction) - Understand what ClaudeKit is
2. [Concepts](/docs/getting-started/concepts) - Learn how agents, commands, and skills work
3. [Installation](/docs/getting-started/installation) - Install ClaudeKit CLI
4. [Quick Start](/docs/getting-started/quick-start) - Build your first feature in 15 minutes
### Existing Claude Code User?
[Upgrade Guide](/docs/getting-started/upgrade-guide) - Migrate to ClaudeKit seamlessly
### Evaluating ClaudeKit?
[Why ClaudeKit](/docs/getting-started/why-claudekit) - See comparisons, ROI, and pricing
## Additional Resources
- [Cheatsheet](/docs/getting-started/cheatsheet) - Quick reference for common commands
- [FAQ](/docs/support/faq) - Frequently asked questions
- [Troubleshooting](/docs/support/troubleshooting) - Common issues and solutions
## Next Steps
After installation, explore:
- **[Workflows](/docs/workflows)** - Task-oriented guides for common scenarios
- **[Docs](/docs)** - Deep reference documentation
- **[Support](/docs/support)** - Get help when stuck
---
# Introduction
Section: getting-started
Category: N/A
URL: https://docs.claudekit.cc/docs/getting-started/introduction
# Introduction to ClaudeKit
**ClaudeKit** extends Claude Code with specialized agents, slash commands, and reusable skills.
## Video Walkthrough
New to ClaudeKit? Watch this step-by-step walkthrough covering CLI installation, setup with `ck` commands, and a demo building UI from a screenshot.
*(Video embedded on the documentation site.)*
*More tutorials: [@goonnguyen](https://www.youtube.com/@goonnguyen) | X: [@goon_nguyen](https://x.com/goon_nguyen)*
## What is ClaudeKit?
ClaudeKit transforms Claude Code from a general AI assistant into a focused engineering toolkit. Instead of writing prompts from scratch, you invoke battle-tested workflows optimized for speed and quality.
**Core Components**:
- **Agents**: Specialized AI assistants (planner, researcher, tester, debugger)
- **Commands**: Slash commands for common tasks (`/cook`, `/fix`, `/plan`, `/bootstrap`)
- **Skills**: Reusable knowledge modules (Next.js, Better Auth, Docker)
## How It Works
1. **Invoke Command**: Type `/cook "add user authentication"`
2. **Agent Activates**: System spawns planner → researcher → developer → tester
3. **Workflow Executes**: Agents collaborate, write code, run tests, commit changes
4. **You Review**: Check output, provide feedback, iterate
## Why Use ClaudeKit?
- **Speed**: 10x faster than manual prompting
- **Quality**: Battle-tested workflows reduce bugs and rework
- **Consistency**: Same approach across team members
- **Learning**: See how experts structure engineering tasks
## Quick Example
```bash
# Without ClaudeKit
You: "I need to add authentication to my Next.js app"
Claude: "Sure! What auth library? What features?"
[20+ message back-and-forth]
# With ClaudeKit
You: /cook "add authentication with Better Auth"
ClaudeKit:
→ Planner creates implementation plan
→ Researcher analyzes codebase
→ Developer writes code following best practices
→ Tester runs tests and fixes issues
→ Git commits changes
[Done in 1 command]
```
## Next Steps
1. **[Understand Concepts](/docs/getting-started/concepts)** - How agents, commands, and skills work
2. **[Install ClaudeKit](/docs/getting-started/installation)** - Set up the CLI
3. **[Quick Start](/docs/getting-started/quick-start)** - Build your first feature in 15 minutes
## Want to Learn More?
- [Why ClaudeKit](/docs/getting-started/why-claudekit) - Comparisons, ROI calculations, pricing
- [Use Cases](/docs/workflows) - Real-world workflows
- [FAQ](/docs/support/faq) - Common questions answered
---
# Core Concepts
Section: getting-started
Category: N/A
URL: https://docs.claudekit.cc/docs/getting-started/concepts
# Core Concepts
ClaudeKit's power comes from three interconnected systems: **Agents**, **Commands**, and **Skills**.
## Agents
Specialized AI assistants with focused expertise.
**Available Agents**:
- **Planner**: Creates implementation plans with phases, tasks, success criteria
- **Researcher**: Analyzes codebases, best practices, technical decisions
- **Tester**: Writes and runs tests, analyzes failures, fixes issues
- **Debugger**: Investigates bugs, traces root causes, proposes fixes
- **Code Reviewer**: Reviews code quality, security, performance
- **Docs Manager**: Creates and maintains documentation
- **UI/UX Designer**: Designs interfaces following aesthetic principles
- **Copywriter**: Writes marketing copy, product descriptions, content
**How Agents Work**:
1. You invoke a command (e.g., `/cook "add feature"`)
2. Command spawns relevant agents in sequence or parallel
3. Agents collaborate via shared context (plans, code, test results)
4. Output consolidated and presented to you
## Commands
Slash commands that trigger agent workflows.
**Categories**:
- **Core**: `/cook`, `/plan`, `/bootstrap`, `/ask`, `/scout`
- **Fix**: `/fix`, `/fix:ci`, `/fix:hard`, `/fix:types`
- **Design**: `/design:good`, `/design:fast`, `/design:3d`
- **Git**: `/git:cm`, `/git:cp`, `/git:pr`
- **Docs**: `/docs:init`, `/docs:update`, `/docs:summarize`
- **Content**: `/content:good`, `/content:cro`
- **Plan**: `/plan:ci`, `/plan:two`, `/plan:hard`
- **Integrate**: `/integrate:polar`, `/integrate:sepay`
- **Skill**: `/skill:create`, `/skill:optimize`
**Example Workflow**:
```bash
/plan "add payment processing with Stripe"
# β Planner agent creates detailed plan
/code
# β Reads plan, spawns developer + tester agents, implements feature
/fix
# β Debugger analyzes any test failures, fixes issues
/git:cm
# β Git manager stages, commits, pushes changes
```
## Skills
Reusable knowledge modules injected into agent context.
**Available Skills**:
- **Frontend**: Next.js, Tailwind, shadcn/ui
- **Backend**: PostgreSQL, Docker
- **AI**: Gemini Vision, Canvas Design
- **Auth**: Better Auth
- **Ecommerce**: Shopify
- **Tools**: FFmpeg, ImageMagick, MCP Builder
**How Skills Work**:
1. Skill files stored in `.claude/skills/`
2. Auto-activated based on codebase detection (e.g., detects `next.config.js` → activates Next.js skill)
3. Skill provides agent with best practices, examples, gotchas
4. Agent makes better decisions (uses right patterns, avoids common mistakes)
**Creating Custom Skills**:
```bash
/skill:create "FastAPI best practices"
# → Creates new skill with structure, references, examples
```
## How They Work Together
**Example: Adding Authentication**
```bash
You: /cook "add authentication with Better Auth"
System:
1. Detects Better Auth skill → injects into context
2. Spawns Planner agent → creates implementation plan
3. Planner references Better Auth skill → uses correct setup pattern
4. Spawns Developer agent → writes code following skill guidelines
5. Spawns Tester agent → writes tests based on skill examples
6. Output: Fully implemented, tested authentication
```
## CLAUDE.md Configuration
All agent behaviors, skills, and workflows configured in `.claude/CLAUDE.md`:
```markdown
# CLAUDE.md
## Agents
- planner: Creates implementation plans
- researcher: Analyzes technical decisions
## Skills
- better-auth: Authentication framework patterns
- nextjs: Next.js App Router best practices
## Workflows
- /plan → /code: Plan → Implement existing plan
- /fix: Debug → Fix → Test → Commit
```
## Next Steps
- **[Installation](/docs/getting-started/installation)** - Install ClaudeKit
- **[Quick Start](/docs/getting-started/quick-start)** - Try your first command
- **[Commands Reference](/docs/commands)** - Explore all commands
- **[Agents Reference](/docs/agents)** - Learn about each agent
---
# Upgrade Guide for Claude Code Users
Section: getting-started
Category: N/A
URL: https://docs.claudekit.cc/docs/getting-started/upgrade-guide
# Upgrading from Claude Code to ClaudeKit
Already using Claude Code? ClaudeKit enhances your workflow without breaking existing habits.
## What Changes?
**Stays the Same**:
- Claude Code CLI and interface
- Existing project structure
- .claude/ directory (extended, not replaced)
- Chat-based interaction
**New Capabilities**:
- Slash commands for common tasks
- Specialized agents (planner, tester, debugger)
- Reusable skills library
- Multi-agent collaboration
## Installation (Additive)
```bash
# Install ClaudeKit CLI (doesn't replace Claude Code)
npm install -g claudekit-cli
# Initialize in existing project
ck init
# → Adds .claude/CLAUDE.md, skills/, workflows/
# → Existing .claude/commands/ preserved
```
### Upgrading ClaudeKit
```bash
# Recommended: Use built-in update command
ck update
# Alternative: npm global update
npm install -g claudekit-cli@latest
```
## Gradual Migration Path
### Week 1: Try Core Commands
Start with `/cook` for feature development:
```bash
# Old way
You: "I need to add a new API endpoint for user profiles"
[Long conversation with manual guidance]
# ClaudeKit way
/cook "add user profiles API endpoint"
[Automated planning, development, testing]
```
### Week 2: Use Specialized Workflows
Adopt task-specific commands:
- `/fix` for bug fixing (faster than manual debugging)
- `/plan` for complex features (structured planning)
- `/docs:update` for documentation (automated sync)
### Week 3: Leverage Skills
Add custom skills for your stack:
```bash
/skill:create "Our GraphQL conventions"
# → Agents learn your team's patterns
```
### Week 4: Full Workflow Integration
Combine commands for complete workflows:
```bash
/plan "redesign checkout flow"
/code @plans/checkout-redesign.md
/design:good "checkout UI mockup"
/fix:test
/git:pr "feature/new-checkout"
```
## Compatibility
**Supported**:
- Claude Code v1.0+
- Existing .claude/commands/ (fully compatible)
- Custom prompts (still work as-is)
**Not Supported**:
- Claude Desktop app (Claude Code CLI only)
- Projects without Git (ClaudeKit requires version control)
## FAQs
**Q: Do I need to rewrite existing commands?**
A: No. ClaudeKit commands coexist with your custom .claude/commands/
**Q: Can I still use regular chat?**
A: Yes. ClaudeKit adds slash commands, doesn't remove chat.
**Q: What if I don't like ClaudeKit?**
A: Uninstall and delete .claude/CLAUDE.md. No breaking changes.
## Next Steps
1. [Install ClaudeKit](/docs/getting-started/installation)
2. [Try Quick Start](/docs/getting-started/quick-start)
3. [Explore Commands](/docs/commands)
---
# Why ClaudeKit
Section: getting-started
Category: N/A
URL: https://docs.claudekit.cc/docs/getting-started/why-claudekit
# Why ClaudeKit
## 5-Minute Quick Win
```bash
ck init my-app --kit engineer
cd my-app
/cook add user authentication with OAuth
```
**Result**: Production auth system with login/signup pages, OAuth (Google/GitHub), protected routes, tests, and docs. **Time: 6 minutes vs 8 hours manually.**
[Try Quick Start →](/docs/getting-started/quick-start)
## What You Get
**14 AI Agents**:
- **planner** - Implementation plans
- **researcher** - Best practices (parallel search)
- **tester** - Validates everything works
- **code-reviewer** - Security + performance checks
- **debugger** - Root cause analysis
- **ui-ux-designer** - Interface design + wireframes
- **database-admin** - Query optimization
- **git-manager** - Professional commits
- **docs-manager** - Keeps docs current
- **project-manager** - Progress tracking
- **copywriter** - Conversion-focused content
- **journal-writer** - Decision documentation
- **brainstormer** - Solution exploration
- **scout** - Codebase navigation
**30+ Slash Commands**:
- `/cook` - Implement features end-to-end
- `/plan` - Research + create implementation plan
- `/fix:hard` - Multi-agent bug fixing
- `/design:good` - UI/UX design
- `/git:cm` - Commit with conventional format
- [See all commands →](/docs/commands)
**45 Built-in Skills**:
- Frontend: Next.js, shadcn/ui, Tailwind CSS
- Auth: Better Auth
- E-commerce: Shopify
- Cloud: Cloudflare Workers, R2, Docker
- AI: Gemini (audio, video, images, vision)
- Database: PostgreSQL, MongoDB
- [See all skills →](/docs/skills)
## ClaudeKit vs Alternatives
| Feature | ClaudeKit | Boilerplates | AI Chat |
|---------|-----------|--------------|---------|
| **Tech Stack** | Any (Next.js, Django, Go, Rust) | Locked to specific stack | Limited support |
| **Setup Time** | 2 min | 30-60 min | N/A |
| **Feature Shipping** | 5-20 min | 4-12 hours | Copy-paste, manual fixes |
| **Quality** | Production-ready, tested | Copy-paste required | Unvalidated code |
| **Testing** | Automated (tester agent) | Manual | You write tests |
| **Code Review** | Built-in (code-reviewer) | Manual or external | None |
| **Updates** | Evolves with Claude | Becomes outdated | N/A |
| **Documentation** | Auto-generated | Manual | None |
| **Visual Assets** | AI-generated (Gemini) | You design | Not applicable |
| **Time Savings** | **10+ hours/feature** | None (is the baseline) | 2-3 hours |
| **Cost** | $99 one-time | $0-$200 one-time | $20/month ongoing |
**Verdict**: ClaudeKit is the only solution that ships production features end-to-end with testing, review, and docs.
## Why It Works
### Living System, Not Dead Code
Traditional boilerplates are snapshots in time. ClaudeKit improves with every Claude update. Your dev team gets smarter automatically.
### Tech Stack Agnostic
Works with your existing codebase. Next.js today, migrate to Django tomorrow. No lock-in.
### Full Lifecycle
Not just code generation. **Plan → Research → Implement → Test → Review → Document → Commit**. Complete workflow, not fragments.
### Real ROI
**First feature**: 10 hours saved at $100/hr = $1,000 value
**Cost**: $99 one-time
**Net profit day 1**: $901
## Who Uses ClaudeKit
- **Solo developers** - Entire dev team in your terminal
- **Indie makers** - Validate ideas 800% faster
- **Small teams** - Augment with AI specialists
- **Anyone frustrated with boilerplates** - Escape tech stack prison
## Pricing
**$99 one-time** (save $50 from $149):
- Lifetime access + unlimited updates
- 30-day money-back guarantee
- All future features included
- No monthly fees
[Get ClaudeKit →](https://claudekit.cc)
---
**Traditional dev**: 40 hours/feature
**With ClaudeKit**: 5-20 minutes/feature
**Time reclaimed**: Build your product instead of infrastructure.
---
# Installation
Section: getting-started
Category: getting-started
URL: https://docs.claudekit.cc/docs/getting-started/installation
# Installation
This guide will help you install ClaudeKit and set up your development environment. You can choose between manual setup or using the ClaudeKit CLI.
## Video Guide
Prefer video? Watch the complete installation walkthrough:
*(Video embedded on the documentation site.)*
*More tutorials: [@goonnguyen](https://www.youtube.com/@goonnguyen) | X: [@goon_nguyen](https://x.com/goon_nguyen)*
## Prerequisites
Before installing ClaudeKit, ensure you have:
- **Node.js** v18 or higher
- **npm** v10 or higher (or bun, pnpm, yarn)
- **Git** for version control
- **Claude Code CLI** installed (`claude`)
- **Google Gemini API Key** from [Google AI Studio](https://aistudio.google.com)
## Method 1: Manual Setup
This method gives you full control over the installation process.
### Step 1: Copy ClaudeKit Files
Copy all directories and files from the `claudekit-engineer` repo to your project:
```bash
# Copy these files and directories:
.claude/*
docs/*
plans/*
CLAUDE.md
```
### Step 2: Configure Gemini API Key (Optional)
**WHY?**
ClaudeKit originally used [Human MCP](https://www.npmjs.com/package/@goonnguyen/human-mcp) to analyze images and videos, since Gemini models have strong vision capabilities. Anthropic has since released [**Agent Skills**](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview), which works much better for context engineering, so all Human MCP tools have been converted into Agent Skills.
**Note:** The Gemini API currently offers a fairly generous free request quota.
1. Go to [Google AI Studio](https://aistudio.google.com) and grab your API Key
2. Copy `.claude/skills/.env.example` to `.claude/skills/.env` and paste the key into the `GEMINI_API_KEY` environment variable
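For example, from the project root (the key value is a placeholder):
```bash
cp .claude/skills/.env.example .claude/skills/.env
# then open .claude/skills/.env and set: GEMINI_API_KEY=your-api-key
```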
Now you're good to go.
### Step 3: Start Claude Code
Start Claude Code in your working project:
```bash
# Standard mode
claude
# Skip permissions (use with caution)
claude --dangerously-skip-permissions
```
### Step 4: Initialize Documentation
Run the `/docs:init` command to scan and create specs for your project:
```bash
/docs:init
```
This generates markdown files in the `docs` directory:
- `codebase-summary.md`
- `code-standards.md`
- `system-architecture.md`
- And more...
Now your project is ready for development!
## Method 2: ClaudeKit CLI
The CLI provides an automated way to set up ClaudeKit projects.
### Installation
Install ClaudeKit CLI globally:
```bash
# npm
npm install -g claudekit-cli
# bun
bun add -g claudekit-cli
# Verify installation
ck --version
```
### Initialize or Update ClaudeKit Engineer
**Note:** This command should be run from the root directory of your project.
```bash
# Interactive mode (recommended)
ck init
# With options
ck init --kit engineer
# Specific version
ck init --kit engineer --version v1.0.0
# With exclude patterns
ck init --exclude "local-config/**" --exclude "*.local"
# Global mode - use platform-specific user configuration
ck init --global
ck init -g --kit engineer
```
### Update the CLI Itself
To update the `ck` command-line tool to the latest version:
```bash
ck update
```
**Note:** This updates the CLI tool only, not ClaudeKit Engineer files. Use `ck init` to update ClaudeKit Engineer.
**Global vs Local Configuration:**
By default, ClaudeKit uses local configuration (`~/.claudekit`).
For platform-specific **user-scoped settings**, use the `--global` flag:
- **macOS/Linux**: `~/.claude`
- **Windows**: `%LOCALAPPDATA%\.claude`
Global mode uses user-scoped directories (no sudo required), allowing separate configurations for different projects.
### Authentication
The CLI requires a **GitHub Personal Access Token (PAT)** to download releases from private repositories (`claudekit-engineer` and `claudekit-marketing`).
**Authentication Fallback Chain:**
1. **GitHub CLI**: Uses `gh auth token` if GitHub CLI is installed and authenticated
2. **Environment Variables**: Checks `GITHUB_TOKEN` or `GH_TOKEN`
3. **OS Keychain**: Retrieves stored token from system keychain
4. **User Prompt**: Prompts for token input and offers to save it securely
**Creating a Personal Access Token:**
1. Go to GitHub Settings → Developer settings → Personal access tokens → Tokens (classic)
2. Generate new token with `repo` scope (for private repositories)
3. Copy the token
**Setting Token via Environment Variable:**
```bash
export GITHUB_TOKEN=ghp_your_token_here
```
## Verify Installation
After installation (either method), verify everything is set up correctly:
```bash
# Check if Claude Code is available
claude --version
# Check if .claude directory exists
ls -la .claude/
```
## Update ClaudeKit
Keep ClaudeKit Engineer up to date:
```bash
# Update ClaudeKit Engineer to latest version
ck init
# Update to specific version
ck init --version v1.2.0
```
**Exclude specific files during update:**
```bash
# Don't overwrite CLAUDE.md
ck init --exclude CLAUDE.md
```
**Update the CLI itself:**
```bash
# Update ck command-line tool
ck update
```
## Troubleshooting
### Permission Errors
On macOS/Linux, you may need sudo:
```bash
sudo npm install -g claudekit-cli
```
Or configure npm to use a different directory:
```bash
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH
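# Add the export line above to your shell profile (e.g. ~/.bashrc or ~/.zshrc) to make it permanent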
```
### Claude Code Not Found
If `claude` command is not found:
1. Install Claude Code CLI from [claude.ai/code](https://claude.ai/code)
2. Restart your terminal
3. Verify with `claude --version`
### GitHub Authentication Failed
If CLI can't authenticate:
1. Install GitHub CLI: `brew install gh` (macOS) or see [cli.github.com](https://cli.github.com)
2. Authenticate: `gh auth login`
3. Verify: `gh auth status`
4. Or set environment variable: `export GITHUB_TOKEN=your_token`
## Optional Tools
### CCS - Claude Code Switch (Recommended for Heavy Users)
If you're a heavy ClaudeKit user or frequently hit Claude's rate limits, consider installing **CCS**:
```bash
npm install -g @kaitranntt/ccs
```
**Benefits:**
- Switch between multiple Claude accounts instantly
- Delegate simple tasks to cheaper models (81% cost savings)
- Keep working without interruption when hitting limits
- Optimize costs for high-volume usage
[**Learn more about CCS →**](/docs/tools/ccs)
## Next Steps
Now that ClaudeKit is installed, proceed to:
- [Quick Start Guide](/docs/getting-started/quick-start) - Build your first project
- [CLAUDE.md Explained](/docs/docs/configuration/claude-md) - Understand the configuration file
- [Workflows](/docs/docs/configuration/workflows) - Learn about development workflows
---
# Quick Start
Section: getting-started
Category: getting-started
URL: https://docs.claudekit.cc/docs/getting-started/quick-start
# Quick Start
Ship a production feature in 15 minutes. No boilerplate, no setup hell.
## Video Demo
See ClaudeKit in action - from installation to building UI from a screenshot:
*(Video embedded on the documentation site.)*
*More tutorials: [@goonnguyen](https://www.youtube.com/@goonnguyen) | X: [@goon_nguyen](https://x.com/goon_nguyen)*
## Prerequisites
- ClaudeKit CLI installed (`ck --version` works)
- Claude Code running
- Project initialized with ClaudeKit
**Don't have these?** See [Installation Guide](/docs/getting-started/installation)
## Your First Feature
Add user authentication to a Next.js app in 15 minutes.
### Step 1: Bootstrap Project
```bash
ck init my-app --kit engineer
cd my-app
```
**Created**:
- `.claude/` - 14 AI agents, 30+ commands, 45 skills
- `docs/` - Auto-generated project docs
- `plans/` - Implementation plans storage
### Step 2: Plan Authentication
```bash
/plan add user authentication with email/password and OAuth
```
**What happens** (30 seconds):
1. `planner` agent spawns 3 `researcher` agents in parallel
2. Researches Next.js 15 + Better Auth best practices
3. Analyzes your codebase structure
4. Creates detailed implementation plan
**Output**:
```
✓ planner: Research complete (3 sources analyzed)
✓ planner: Plan created
plans/251030-auth-implementation.md
Summary:
• Better Auth with credentials + OAuth (Google, GitHub)
• Session management with JWT
• Login/signup/reset pages
• Protected routes middleware
• Files to create: 8
• Files to modify: 4
• Tests: 12 test cases
• Estimated: 2 hours manually, 5 minutes with ClaudeKit
Next: /code plans/251030-auth-implementation.md
```
### Step 3: Review Plan
```bash
cat plans/251030-auth-implementation.md
```
Scan the plan. Architecture solid? Continue.
### Step 4: Implement
```bash
/code plans/251030-auth-implementation.md
```
**What happens** (5 minutes):
**Clarification** (15 sec):
```
code: Which OAuth providers?
You: Google and GitHub
```
**Implementation** (4 min):
1. Installs `better-auth` + dependencies
2. Creates auth config with providers
3. Generates login/signup/reset pages with shadcn/ui
4. Adds API routes
5. Implements middleware for protected routes
6. Writes 12 test cases
7. Updates documentation
**Real-time output**:
```
✓ Installing dependencies... (15s)
✓ Creating auth.ts config
✓ Generating UI components
  • app/login/page.tsx (Google OAuth button)
  • app/signup/page.tsx (Email + OAuth)
  • app/reset-password/page.tsx
✓ Creating API routes
  • app/api/auth/[...all]/route.ts
✓ Adding middleware.ts (protect /dashboard/*)
✓ Writing tests (12 cases)
✓ Updating docs/system-architecture.md
tester: Running test suite...
✓ Unit tests: 45 passed
✓ Auth tests: 12 passed
✓ Coverage: 89%
code-reviewer: Quality check...
✓ No security issues
✓ Type-safe (0 errors)
✓ Performance: Fast (< 100ms auth check)
✓ Implementation complete (4m 32s)
```
### Step 5: See It Work
```bash
npm run dev
```
Visit:
- `http://localhost:3000/login` - Login with email or OAuth
- `http://localhost:3000/signup` - Create account
- `http://localhost:3000/dashboard` - Protected route (redirects if not logged in)
**Try it**:
1. Sign up with email
2. Login redirects to dashboard
3. Logout, try OAuth with Google
4. Check session persistence
### Step 6: Commit
```bash
/git:cm
```
**Output**:
```
git-manager: Analyzing changes...
✓ Staged: 12 files
✓ Secret scan: Clean
✓ Commit message generated
feat(auth): add Better Auth with email and OAuth
- Add Better Auth configuration
- Implement login/signup/reset pages
- Add Google and GitHub OAuth
- Protect routes with middleware
- Add 12 auth test cases
- Update documentation
✓ Committed: abc1234
✓ Pushed to origin/main
Done!
```
## What Just Happened?
**Traditional approach** (8-12 hours):
1. Research auth libraries (1h)
2. Read documentation (1h)
3. Setup configuration (1h)
4. Build UI components (3h)
5. Implement OAuth flows (2h)
6. Write tests (1h)
7. Debug issues (1h)
8. Documentation (30min)
**ClaudeKit approach** (6 minutes):
1. Plan: AI researches, analyzes, designs (30s)
2. Implement: AI codes, tests, documents (4m)
3. Review: AI validates security, performance (30s)
4. Commit: AI creates professional commit (30s)
**Time saved**: 8-12 hours → roughly **80-120x faster**
## Why It Works
### 14 Specialized Agents
- **planner**: Creates implementation plans
- **researcher**: Finds best practices (3 run in parallel)
- **tester**: Validates everything works
- **code-reviewer**: Security + performance checks
- **git-manager**: Professional commits
- **docs-manager**: Keeps docs current
- ...and 8 more
### 45 Built-in Skills
- **better-auth**: Auth implementation expertise
- **shadcn-ui**: Pre-configured UI components
- **nextjs**: Next.js 15 best practices
- **gemini-image-gen**: Generate logos, assets
- ...and 41 more
### Living System
- Updates with Claude improvements
- No tech stack lock-in
- Learns your patterns
- Gets smarter over time
## Next: Real Work
Try more complex features:
```bash
# Add payment processing (Stripe + webhooks)
/cook add Stripe payment with subscriptions and webhooks
# Build REST API with validation
/cook create REST API for blog posts with Zod validation
# Implement real-time chat
/cook add real-time chat using WebSockets
# Database migrations
/cook migrate from SQLite to PostgreSQL with zero downtime
```
Each takes 5-20 minutes instead of days.
## Learn Workflows
### Full Feature Cycle
```bash
/plan [feature] # Research + plan
/code [plan] # Implement
/test # Validate
/fix:fast [issue] # Quick fixes
/git:cm # Commit
```
### Debug + Fix
```bash
/debug [issue] # Root cause analysis
/fix:hard [complex-issue] # Multi-agent fix
/fix:ci [actions-url] # Fix failing CI/CD
```
### Design + Content
```bash
/design:good [feature] # UI/UX design
/content:good [page] # Marketing copy
/brainstorm [idea] # Explore solutions
```
## Common Questions
**Q: Does it work with my tech stack?**
A: Yes. Next.js, Django, Laravel, Go, Rust, Flutter - any stack. ClaudeKit adapts to your patterns.
**Q: What if AI makes mistakes?**
A: `code-reviewer` catches issues before commit. Plus, you review PRs as normal. AI augments, doesn't replace judgment.
**Q: Do I need API keys?**
A: For basic features, no. For advanced skills (Gemini, Search), yes. See [API Key Setup](/docs/support/troubleshooting/api-key-setup).
**Q: Can I customize agents?**
A: Yes. Edit `.claude/agents/*.md` to tune behavior. See [Custom Agents](/docs/hooks/custom-agents).
## Explore More
**Core Concepts**:
- [Architecture](/docs/docs/agents) - How ClaudeKit works
- [Agents Overview](/docs/agents/) - Meet your AI team
- [Commands Guide](/docs/commands/) - All 30+ commands
**Real-World Uses**:
- [Starting New Project](/docs/workflows/new-project)
- [Adding Features](/docs/workflows/adding-feature)
- [Fixing Bugs](/docs/workflows/fixing-bugs)
**Troubleshooting**:
- [Installation Issues](/docs/support/troubleshooting/installation-issues)
- [Command Errors](/docs/support/troubleshooting/command-errors)
- [Performance Issues](/docs/support/troubleshooting/performance-issues)
---
**You just shipped production auth in 6 minutes.** Boilerplates can't do that. AI chat assistants can't do that. Only ClaudeKit.
**Ready to ship?** Your AI dev team is waiting.
---
# ClaudeKit Cheatsheet
Section: getting-started
Category: getting-started
URL: https://docs.claudekit.cc/docs/getting-started/cheatsheet
# ClaudeKit Cheatsheet
Quick reference guide for ClaudeKit CLI commands and workflows.
## Installation
```bash
# Install ClaudeKit globally
npm i -g claudekit-cli@latest
# Check version
ck --version
```
## Starting ClaudeKit
```bash
# Navigate to your project
cd /path/to/project
# Start Claude Code with ClaudeKit
claude
```
## Initial Setup
```bash
# For existing projects (brownfield)
/docs:init
# For new projects (greenfield)
ck init --kit engineer --dir /path/to/project
```
## Core Commands
### Development
```bash
# Initialize documentation and specs
/docs:init
# Implement new feature
/cook
# Autonomous feature implementation
/cook:auto
# Fast autonomous mode (less planning)
/cook:auto:fast
# Create implementation plan only
/plan
# Execute existing plan
/code
# Bootstrap new project
/bootstrap
# Autonomous bootstrap
/bootstrap:auto
```
### Bug Fixing
```bash
# Quick bug fix
/fix:fast
# Complex bug fix (deeper analysis)
/fix:hard
# Auto-fetch logs and fix
/fix:logs
# Run test suite and fix until passing
/fix:test
# Fix CI/CD pipeline issues
/fix:ci
```
### Testing
```bash
# Run test suite and report (no fixes)
/test
```
### Documentation
```bash
# Initialize documentation
/docs:init
# Update documentation
/docs:update
# Summarize documentation
/docs:summarize
```
### Git Operations
```bash
# Create commit with meaningful message
/git:cm
# Commit and push changes
/git:cp
# Create pull request
/git:pr
```
### Planning & Research
```bash
# Brainstorm technical approaches
/brainstorm
# Create detailed implementation plan
/plan
# Plan CI/CD setup or fix CI/CD pipeline
/plan:ci
# Two-step implementation plan
/plan:two
```
### Integration
```bash
# Integrate Polar API
/integrate:polar
# Integrate SePay payment
/integrate:sepay
```
### Skills Management
```bash
# Create new skill
/skill:create
# Fix skill errors
/skill:fix-logs
```
## Command Comparison
### Feature Implementation Flow
```bash
# Approach 1: With plan review (recommended)
/cook
# → CC asks questions
# → Review plan
# → Approve
# → Implementation starts
# Approach 2: Autonomous (use with caution)
/cook:auto
# → Fully autonomous, no plan review
# Approach 3: Fast autonomous (least tokens)
/cook:auto:fast
# → Fast mode with minimal planning
```
### Bug Fixing Flow
```bash
# Simple bugs
/fix:fast
# Complex bugs
/fix:hard
# From logs
/fix:logs
# From failing tests
/fix:test
# From CI/CD
/fix:ci
```
## Common Workflows
### Brownfield Project Setup
```bash
# 1. Install ClaudeKit
npm i -g claudekit-cli@latest
# 2. Go to project
cd /path/to/existing/project
# 3. Start Claude Code
claude
# 4. Initialize
/docs:init
# 5. Start working
/cook
```
### Greenfield Project Setup
```bash
# 1. Install ClaudeKit
npm i -g claudekit-cli@latest
# 2. Initialize project
ck init --kit engineer --dir /path/to/project
# 3. Navigate to project
cd /path/to/project
# 4. Start Claude Code
claude
# 5. Bootstrap idea
/bootstrap
# 6. Continue development
/cook
```
### Feature Development
```bash
# 1. Plan feature
/plan Add user profile with avatar upload
# 2. Review plan (markdown file generated)
# 3. Implement
/code profile-feature-plan.md
# 4. Test
/test
# 5. Fix if needed
/fix:test
# 6. Commit
/git:cm
```
### Bug Fix Workflow
```bash
# 1. Describe bug
/fix:hard Payment fails on Safari after form validation
# 2. CC analyzes and fixes
# 3. Test the fix
/test
# 4. Commit
/git:cm
```
### CI/CD Fix Workflow
```bash
# 1. Get failing action URL
# https://github.com/user/repo/actions/runs/12345
# 2. Fix CI
/fix:ci https://github.com/user/repo/actions/runs/12345
# 3. CC fetches logs, analyzes, fixes
# 4. Push fix
/git:cp
```
## Quick Examples
### Add Authentication
```bash
/cook Add JWT authentication with login, register, and password reset
```
### Fix Performance Issue
```bash
/fix:hard Dashboard loads slowly with 1000+ items
```
### Plan Database Migration
```bash
/plan Migrate from MongoDB to PostgreSQL with zero downtime
```
### Integrate Payment
```bash
/integrate stripe
# or
/cook Add Stripe payment integration with subscription billing
```
### Bootstrap New API
```bash
/bootstrap REST API for task management with teams, projects, tasks, and time tracking
```
## Command Categories
### Core Development
- `/cook` - Feature implementation
- `/plan` - Create plans
- `/code` - Execute plans
- `/bootstrap` - New projects
### Debugging & Fixing
- `/fix:fast` - Quick fixes
- `/fix:hard` - Complex fixes
- `/fix:logs` - Log-based fixes
- `/fix:test` - Test-based fixes
- `/fix:ci` - CI/CD fixes
### Testing
- `/test` - Run tests
### Documentation
- `/docs:init` - Initialize
- `/docs:update` - Update
- `/docs:summarize` - Summarize
### Git Operations
- `/git:cm` - Commit changes
- `/git:cp` - Commit and push
- `/git:pr` - Create PR
### Planning
- `/plan` - Detailed planning
- `/brainstorm` - Explore ideas
### Integrations
- `/integrate` - Add integrations
### Skills
- `/skill:create` - New skills
- `/skill:fix-logs` - Fix skills
## Tips & Best Practices
### 1. Always Review Plans
**IMPORTANT:** Review implementation plans carefully before approving. Plans exist for a reason.
### 2. Provide Context
More detailed descriptions = better results
```bash
# ❌ Bad
/cook Add search
# ✅ Good
/cook Add full-text search for blog posts with filters by category, tag, and date range
```
### 3. Use Right Command
```bash
# Quick bugs
/fix:fast
# Complex bugs
/fix:hard
# Small features
/cook
# Large features
/plan → review → /code plan.md
```
### 4. Test Frequently
```bash
# After each feature
/test
# Or auto-fix tests
/fix:test
```
### 5. Document Changes
```bash
# Keep docs updated
/docs:update
```
## Troubleshooting
### Command Not Working
```bash
# Check ClaudeKit version
ck --version
# Restart Claude Code
# Exit and run: claude
```
### Need Fresh Start
```bash
# Reinitialize docs
/docs:init
```
### Need More Help
```bash
# Brainstorm approach
/brainstorm How to implement
# Get detailed plan
/plan
```
## Language-Specific Quick Reference
### Tiếng Việt
```bash
# Khởi tạo dự án có sẵn
/docs:init
# Tính năng mới (cần review plan)
/cook
# Tính năng mới (tự động, ko review)
/cook:auto
# Tính năng mới (nhanh hơn, ít plan hơn)
/cook:auto:fast
# Chỉ lên plan, không code
/plan
# Code theo plan có sẵn
/code
# Sửa lỗi nhanh
/fix:fast
# Sửa lỗi khó (suy nghĩ lâu hơn)
/fix:hard
# Tự lấy logs và sửa
/fix:logs
# Chạy test và sửa tới chết
/fix:test
# Lấy logs GitHub Actions và sửa
/fix:ci
# Tạo dự án mới (cần review plan)
/bootstrap <ý-tưởng>
# Tạo dự án mới (tự động tới chết)
/bootstrap:auto <ý-tưởng>
# Chạy test và báo cáo (không sửa)
/test
```
### English
```bash
# Initialize existing project
/docs:init
# New feature (needs plan review)
/cook
# New feature (autonomous, no review)
/cook:auto
# New feature (faster, less planning)
/cook:auto:fast
# Only plan, no implementation
/plan
# Code from existing plan
/code
# Quick bug fix
/fix:fast
# Hard bug fix (deeper analysis)
/fix:hard
# Auto-fetch logs and fix
/fix:logs
# Run tests and fix till passing
/fix:test
# Fetch GitHub Actions logs and fix
/fix:ci
# Create new project (needs plan review)
/bootstrap
# Create new project (autonomous till death)
/bootstrap:auto
# Run test suite and report (no fixes)
/test
```
## Resources
- [Full Documentation](https://docs.claudekit.cc)
- [All Commands](/docs/commands/)
- [AI Agents](/docs/agents/)
- [Workflows](/docs/docs/configuration/workflows)
- [Troubleshooting](/docs/support/troubleshooting/)
- [GitHub Discussions](https://github.com/mrgoonie/claudekit-cli/discussions)
---
**Print this page** or keep it open while working with ClaudeKit for quick command reference!
---
# Support
Section: support
Category: N/A
URL: https://docs.claudekit.cc/docs/support/index
# Support
Get help with ClaudeKit through our community resources, documentation, and direct support channels.
## Quick Help
### Stuck on something specific?
- [FAQ](/docs/support/faq) - Answers to common questions
- [Troubleshooting](/docs/support/troubleshooting) - Solutions to common issues
- [Discord Community](https://claudekit.cc/discord) - Real-time help from community
### Looking for how-to guides?
- [Getting Started](/docs/getting-started) - Learn the basics
- [Workflows](/docs/workflows) - Step-by-step guides
- [Commands Reference](/docs/commands) - All available commands
## Community Support
### Discord Community
[**Join ClaudeKit Discord**](https://claudekit.cc/discord)
**What you'll find**:
- Real-time chat with ClaudeKit users and developers
- Help with specific commands and workflows
- Share your projects and get feedback
- Feature request discussions
- Community-created skills and templates
**Channels**:
- `#help` - Get assistance with issues
- `#show-and-tell` - Share what you're building
- `#feature-requests` - Suggest new features
- `#skills` - Share custom skills
- `#workflows` - Discuss workflow patterns
### GitHub Discussions
[**GitHub Discussions**](https://github.com/claudekit/claudekit/discussions)
**Best for**:
- Feature requests and ideas
- General questions about ClaudeKit
- Sharing workflows and techniques
- Community feedback and polls
- Long-form discussions
### Stack Overflow
[**ClaudeKit on Stack Overflow**](https://stackoverflow.com/questions/tagged/claudekit)
**Use when**:
- You have a specific technical question
- You want a searchable knowledge base
- You need expert answers to coding problems
- Your question might help others with similar issues
**Tag your questions**: `claudekit` and relevant technology tags
## Self-Service Resources
### Documentation
- [Getting Started](/docs/getting-started) - New user guides
- [Commands Reference](/docs/commands) - Complete command documentation
- [Agents Overview](/docs/agents) - Meet your AI team
- [Skills Library](/docs/skills) - Built-in knowledge modules
- [Troubleshooting](/docs/support/troubleshooting) - Common issues and solutions
### Tutorials and Examples
- [Workflow Examples](/docs/workflows) - Real-world workflows
- [Use Cases](/docs/workflows) - Practical applications
- [Quick Start Guide](/docs/getting-started/quick-start) - First project in 15 minutes
- [Migration Guide](/docs/getting-started/upgrade-guide) - From Claude Code to ClaudeKit
### Code Examples
- [GitHub Repository](https://github.com/claudekit/examples) - Example projects
- [Templates](https://github.com/claudekit/templates) - Starter templates
- [Community Skills](https://github.com/claudekit/community-skills) - User-contributed skills
## Direct Support
### Email Support
**support@claudekit.cc**
**Response time**: Within 24 business hours
**For**:
- Enterprise licensing inquiries
- Technical support for paid plans
- Partnership opportunities
- Press and media inquiries
- Security vulnerability reports
### Business Support
**Business@claudekit.cc**
**For**:
- Enterprise deployments
- Team training and onboarding
- Custom skill development
- Integration consulting
- SLA and support agreements
### Security Issues
**security@claudekit.cc**
**For**:
- Security vulnerability reports
- Security questions and concerns
- Compliance inquiries
- Privacy-related issues
**Please do not disclose vulnerabilities publicly**
## Getting Better Help
### Before Asking for Help
1. **Search First**: Check [FAQ](/docs/support/faq) and [Troubleshooting](/docs/support/troubleshooting)
2. **Check Documentation**: Look in relevant documentation sections
3. **Search GitHub Issues**: See if your issue has been reported
4. **Try Debugging**: Use `/debug` command to investigate issues
### When Asking for Help
**Include in your request**:
- ClaudeKit version (`claudekit --version`)
- Operating system and version
- Node.js version (`node --version`)
- Specific command or workflow you're using
- Error messages (full stack traces)
- Expected vs actual behavior
- Steps to reproduce the issue
**Good example**:
```
Hi! I'm having trouble with the /cook command on my Next.js project.
ClaudeKit version: 1.0.0
OS: macOS 14.0
Node.js: v18.17.0
Command: /cook "add user authentication with Better Auth"
Error: "Better Auth skill not found"
Expected: Should detect Next.js project and activate Better Auth skill
Project structure:
- next.config.js exists
- package.json includes better-auth
- .claude/CLAUDE.md configured
I've tried running /skill:refresh and restarting ClaudeKit.
```
### Where to Ask
**Quick questions** (< 5 min response):
- Discord `#help` channel
**Technical issues**:
- GitHub Issues (bug reports)
- Stack Overflow (technical questions)
**Feature requests**:
- GitHub Discussions
- Discord `#feature-requests`
**Business inquiries**:
- Email support@claudekit.cc
- Direct message on Discord
## Contributing
### Help Improve Documentation
- [Contributing Guide](/docs/development/contributing)
- Report documentation issues on GitHub
- Suggest improvements via Discord
### Share Your Experience
- Write blog posts about your ClaudeKit workflows
- Share custom skills with the community
- Create video tutorials
- Contribute to GitHub discussions
### Report Bugs
- [GitHub Issues](https://github.com/claudekit/claudekit/issues)
- Include detailed reproduction steps
- Provide system information
- Share error logs and output
## Community Guidelines
### Code of Conduct
- Be respectful and inclusive
- Help others learn and grow
- Share knowledge generously
- Follow Discord community guidelines
### Support Etiquette
- Search before asking
- Provide context and details
- Be patient with responses
- Thank people who help you
- Pay it forward by helping others
## Additional Resources
### Learning Resources
- [Blog](https://claudekit.cc/blog) - Tips, tricks, and announcements
- [YouTube Channel](https://youtube.com/@claudekit) - Video tutorials and demos
- [Newsletter](https://claudekit.cc/newsletter) - Weekly tips and updates
### Professional Services
- [ClaudeKit Pro Services](https://claudekit.cc/pro) - Enterprise consulting
- [Training Programs](https://claudekit.cc/training) - Team workshops and onboarding
- [Certification](https://claudekit.cc/certification) - Become a ClaudeKit expert
### Developer Resources
- [API Documentation](/docs/api) - For extending ClaudeKit
- [Plugin Development](/docs/development/plugins) - Create extensions
- [Skill Development](/docs/development/creating-skills) - Build custom skills
## Feedback
We value your feedback! Help us improve ClaudeKit:
- [Feature Requests](https://github.com/claudekit/claudekit/discussions/categories/feature-requests)
- [Bug Reports](https://github.com/claudekit/claudekit/issues)
- [Documentation Feedback](https://github.com/claudekit/claudekit/discussions/categories/documentation)
- [Community Survey](https://claudekit.cc/survey) - Share your experience
---
**Need immediate help?** Join our [Discord community](https://claudekit.cc/discord) for real-time assistance from fellow users and the ClaudeKit team.
---
# Frequently Asked Questions
Section: support
Category: N/A
URL: https://docs.claudekit.cc/docs/support/faq
# Frequently Asked Questions
Common questions about ClaudeKit, from getting started to advanced usage.
## Getting Started
### Q: What is ClaudeKit?
**A:** ClaudeKit is a production-grade framework that extends Claude Code with specialized AI agents, slash commands, and reusable skills. It transforms Claude Code from a general-purpose assistant into a focused engineering toolkit.
### Q: How is ClaudeKit different from Claude Code?
**A:** Claude Code provides the base AI assistant, while ClaudeKit adds:
- **14 specialized agents** (planner, tester, debugger, etc.)
- **30+ slash commands** for common development tasks
- **45 built-in skills** for specific technologies
- **Multi-agent workflows** that collaborate on complex tasks
### Q: Do I need to pay for Claude Code separately?
**A:** Yes, ClaudeKit requires an active Claude Code subscription. ClaudeKit is a one-time purchase ($99) that enhances your existing Claude Code subscription.
### Q: Can I try ClaudeKit before buying?
**A:** Yes! We offer a 30-day money-back guarantee. Try ClaudeKit risk-free for 30 days, and if you're not satisfied, get a full refund.
## Installation and Setup
### Q: How do I install ClaudeKit?
**A:**
```bash
# Install ClaudeKit CLI
npm install -g @claudekit/cli
# Initialize in your project
claudekit init
```
### Q: Does ClaudeKit replace my existing setup?
**A:** No. ClaudeKit enhances your existing workflow:
- Keeps your current Claude Code setup
- Extends rather than replaces functionality
- Works with your existing projects
- Preserves your custom commands
### Q: What are the system requirements?
**A:**
- Node.js 18+
- Claude Code subscription
- Git (required for version control)
- Any modern OS (Windows, macOS, Linux)
### Q: Can I use ClaudeKit with existing projects?
**A:** Yes! ClaudeKit works with existing projects:
```bash
cd your-existing-project
claudekit init
# Analyzes your project and activates relevant skills
```
## Commands and Usage
### Q: What's the most common workflow?
**A:** The feature development workflow:
```bash
/plan "add user authentication with OAuth" # Plan the feature
/code @plans/user-auth.md # Implement the plan
/fix:test # Test and fix issues
/git:cm # Commit changes
```
### Q: How do I fix bugs with ClaudeKit?
**A:** Use the debugging workflow:
```bash
/debug "login button not working" # Investigate issue
/fix:hard "session timeout problem" # Implement fix
/fix:test # Verify fix works
/git:cm # Commit solution
```
### Q: Can I create custom commands?
**A:** Yes! ClaudeKit supports custom commands:
```bash
/skill:create "My company's React patterns"
# Creates reusable skill for your team's conventions
```
### Q: How do agents work together?
**A:** Agents collaborate through shared context:
- **Planner** creates implementation plan
- **Researcher** analyzes best practices
- **Developer** writes the code
- **Tester** validates functionality
- **Code Reviewer** ensures quality
## Skills and Capabilities
### Q: What technologies does ClaudeKit support?
**A:** ClaudeKit supports 45+ technologies out of the box:
- **Frontend**: Next.js, React, Vue, Angular, Tailwind, shadcn/ui
- **Backend**: Node.js, Python, Go, Rust, PostgreSQL, MongoDB
- **Cloud**: AWS, GCP, Azure, Cloudflare Workers, Docker
- **Auth**: Better Auth, OAuth2, JWT, session management
- **Payment**: Stripe, Shopify, Polar, SePay
### Q: How do skills work?
**A:** Skills automatically activate based on your project:
1. ClaudeKit scans your project files
2. Detects technologies (e.g., `next.config.js` → Next.js skill)
3. Injects relevant knowledge into agent context
4. Agents use best practices for detected technologies
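As a rough illustration of the signals this detection relies on (not ClaudeKit's actual implementation), you can spot the same markers yourself:
```bash
# Illustrative only: the kinds of files and dependencies skill detection keys on
ls next.config.* 2>/dev/null && echo "Next.js config found"
grep -q '"better-auth"' package.json 2>/dev/null && echo "better-auth dependency found"
```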
### Q: Can I create custom skills?
**A:** Yes! Create skills for your specific needs:
```bash
/skill:create "Our GraphQL conventions"
/skill:create "Company's testing patterns"
/skill:create "Internal API documentation"
```
## Performance and Quality
### Q: How much faster is ClaudeKit?
**A:** Based on user data:
- **10x faster** feature development
- **80% fewer bugs** with automated testing
- **90% reduction** in code review time
- **75% faster** bug resolution
### Q: Is the generated code production-ready?
**A:** Yes. ClaudeKit generates production-quality code:
- Follows security best practices
- Includes comprehensive error handling
- Has automated test coverage
- Meets performance standards
- Includes proper documentation
### Q: How does ClaudeKit ensure code quality?
**A:** Multiple quality assurance layers:
- **Code Reviewer Agent** performs security and performance analysis
- **Tester Agent** writes comprehensive tests
- **Debugger Agent** validates implementations
- **Skills provide** proven patterns and best practices
## Troubleshooting
### Q: ClaudeKit isn't detecting my technology stack
**A:** Try these solutions:
```bash
/skill:refresh # Refresh skill detection
/debug "project analysis" # Analyze project structure
claudekit doctor # Check system health
```
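You can also check manually which skill definitions are present, since skills live under `.claude/skills/` in the standard project structure:
```bash
# List the skill definitions installed in this project
ls .claude/skills/
```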
### Q: Commands are taking too long
**A:** Performance optimization tips:
- Use `.claudeignore` to exclude large directories
- Run `/debug "performance issues"` to identify bottlenecks
- Consider splitting large tasks into smaller ones
### Q: I keep hitting Claude's rate limits. What can I do?
**A:** Install [CCS (Claude Code Switch)](/docs/tools/ccs) to:
- **Switch accounts instantly:** Use multiple Claude accounts without downtime
- **Optimize costs:** Delegate simple tasks to GLM/Kimi (81% cheaper)
- **Parallel workflows:** Run multiple sessions simultaneously
- **Keep momentum:** Never lose context due to rate limits
### Q: Generated code has errors
**A:** Quick fixes:
```bash
/fix:test # Run tests and fix issues
/fix:hard # Use multi-agent debugging
/debug "analyze generated code" # Investigate specific issues
```
## Pricing and Licensing
### Q: How much does ClaudeKit cost?
**A:** ClaudeKit is a one-time purchase of $99:
- Lifetime access to all features
- All future updates included
- 30-day money-back guarantee
- No monthly fees or subscriptions
### Q: Are there any ongoing costs?
**A:** No ongoing costs for ClaudeKit itself. You only need:
- ClaudeKit one-time purchase ($99)
- Claude Code subscription (separate)
### Q: What's included in the price?
**A:** Everything:
- 14 specialized AI agents
- 30+ slash commands
- 45 built-in skills
- All future updates and features
- Community support
- Documentation and tutorials
### Q: Do you offer team licenses?
**A:** Yes! We offer team and enterprise licenses:
- Small teams (5-10 users): 20% discount
- Large teams (10+ users): Custom pricing
- Enterprise: Custom features and support
## Integration and Workflow
### Q: Can ClaudeKit work with my existing CI/CD?
**A:** Yes! ClaudeKit integrates with:
- GitHub Actions
- GitLab CI/CD
- Jenkins
- CircleCI
- Any standard Git workflow
### Q: How does ClaudeKit handle version control?
**A:** ClaudeKit includes Git management:
- Professional commit messages
- Conventional commit format
- Automated branching strategies
- Pull request creation
- Merge conflict resolution
### Q: Can I use ClaudeKit with team workflows?
**A:** Absolutely! ClaudeKit enhances team collaboration:
- Shared skill libraries
- Consistent code patterns across team
- Automated documentation
- Standardized workflows
- Knowledge sharing through custom skills
## Advanced Usage
### Q: Can I extend ClaudeKit with custom agents?
**A:** While you can't create new agent types, you can:
- Create custom skills that modify agent behavior
- Use workflow patterns to combine agents
- Extend functionality through custom commands
### Q: How do I optimize ClaudeKit for large projects?
**A:** Best practices for large codebases:
```bash
# Create .claudeignore file
echo "node_modules/\ndist/\nbuild/\ncoverage/" > .claudeignore
# Use specific focus areas
/cook "optimize user authentication flow" --scope auth
# Regular maintenance
/docs:update "keep documentation current"
/fix:ci "maintain CI/CD health"
```
### Q: Can ClaudeKit work with microservices?
**A:** Yes! ClaudeKit handles microservices:
- Service discovery and analysis
- Inter-service communication patterns
- Distributed system debugging
- Microservice testing strategies
- Deployment automation
## Support and Community
### Q: How do I get help?
**A:** Multiple support channels:
- [Discord Community](https://claudekit.cc/discord) - Real-time help
- [GitHub Issues](https://github.com/claudekit/claudekit/issues) - Bug reports
- [Documentation](/docs) - Complete reference
- [Email Support](mailto:support@claudekit.cc) - Direct assistance
### Q: How can I contribute to ClaudeKit?
**A:** We welcome contributions:
- Share custom skills with the community
- Report bugs and suggest features
- Improve documentation
- Help others in Discord
- Create tutorials and examples
### Q: How often is ClaudeKit updated?
**A:** Regular update schedule:
- Major features: Quarterly
- Bug fixes: Monthly
- Security patches: As needed
- New skills: Bi-weekly
## Still Have Questions?
### Q: Where can I find more information?
**A:** Additional resources:
- [Complete Documentation](/docs)
- [Getting Started Guide](/docs/getting-started)
- [Workflow Examples](/docs/workflows)
- [Discord Community](https://claudekit.cc/discord)
- [Blog](https://claudekit.cc/blog)
### Q: Can I talk to a human?
**A:** Yes! Human support available:
- Discord community (real-time chat)
- Email support@claudekit.cc
- Enterprise customers get dedicated support
- Office hours for community calls
---
**Don't see your question here?** Ask in our [Discord community](https://claudekit.cc/discord) or [open an issue](https://github.com/claudekit/claudekit/issues). We're here to help!
---
# Troubleshooting
Section: support
Category: support/troubleshooting
URL: https://docs.claudekit.cc/docs/troubleshooting/index
# Troubleshooting
Quick fixes for common issues. Most problems resolve in under 5 minutes.
## Quick Diagnosis
**Problem category?**
- [Installation fails](#installation) → [Installation Issues](/docs/support/troubleshooting/installation-issues)
- [Command not found or errors](#commands) → [Command Errors](/docs/support/troubleshooting/command-errors)
- [Agent not working](#agents) → [Agent Issues](/docs/support/troubleshooting/agent-issues)
- [API key errors](#api-keys) → [API Key Setup](/docs/support/troubleshooting/api-key-setup)
- [Slow or hanging](#performance) → [Performance Issues](/docs/support/troubleshooting/performance-issues)
## Installation
**Issue**: `ck: command not found`
**Fix**:
```bash
# Verify installation
npm list -g claudekit-cli
# Reinstall if needed
npm install -g claudekit-cli
# Verify
ck --version
```
[More installation fixes →](/docs/support/troubleshooting/installation-issues)
## Commands
**Issue**: `/command` does nothing
**Fix**:
1. Check `.claude/commands/` exists
2. Verify command file exists
3. Check frontmatter is valid
```bash
# List available commands
ls .claude/commands/**/*.md
# Test specific command
cat .claude/commands/core/cook.md
```
[More command fixes →](/docs/support/troubleshooting/command-errors)
## Agents
**Issue**: Agent not activating
**Fix**:
1. Verify `.claude/agents/` exists
2. Check agent file format
3. Confirm Claude Code is running
```bash
# List agents
ls .claude/agents/*.md
# Verify agent file
cat .claude/agents/planner.md
```
[More agent fixes →](/docs/support/troubleshooting/agent-issues)
## API Keys
**Issue**: "API key not found"
**Fix**:
```bash
# Add to .env
echo 'GEMINI_API_KEY=your-key' >> .env
echo 'SEARCH_API_KEY=your-key' >> .env
# Or export for session
export GEMINI_API_KEY=your-key
```
[Complete API key guide →](/docs/support/troubleshooting/api-key-setup)
## Performance
**Issue**: Commands take forever
**Fix**:
1. Check internet connection
2. Verify API keys are set
3. Use `--verbose` to see what's slow
```bash
# Run with verbose logging
/cook add feature --verbose
```
[More performance fixes →](/docs/support/troubleshooting/performance-issues)
## Common Quick Fixes
### Reset ClaudeKit
```bash
# Backup first
cp -r .claude .claude.backup
# Update to latest
ck init --kit engineer
# Restore custom files
cp .claude.backup/commands/my-custom.md .claude/commands/
```
### Clear Cache
```bash
# Clear Node modules
rm -rf node_modules
npm install
# Clear ClaudeKit cache
rm -rf ~/.claudekit/cache
```
### Verify Setup
```bash
# Check CLI
ck --version
# Check Claude Code
claude --version
# Check directory structure
tree .claude -L 2
```
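If `tree` isn't installed, `find` gives a similar two-level overview:
```bash
# Fallback when tree is not available
find .claude -maxdepth 2 -type d
```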
## Still Stuck?
### Get Help
1. **Run diagnostics**:
```bash
ck diagnose --verbose
```
2. **Check logs**:
```bash
# Enable verbose mode
export CLAUDEKIT_VERBOSE=1
# Run command
/cook add feature
# Check output
cat claudekit-debug.log
```
3. **Report issue**:
- GitHub: https://github.com/claudekit/claudekit-engineer/issues
- Include: OS, CLI version, error message, steps to reproduce
### Community
- **Discord**: [Join ClaudeKit Discord](https://claudekit.cc/discord)
- **GitHub Discussions**: Share solutions, ask questions
## Prevention Tips
✅ **Do**:
- Keep ClaudeKit updated (`ck init`)
- Use `--verbose` when debugging
- Backup before major changes
- Read error messages fully
❌ **Don't**:
- Modify core `.claude/` files directly
- Ignore API rate limits
- Skip version updates
- Delete `.claude/` directory
---
**95% of issues resolve in under 5 minutes** with these guides. Dive into specific sections for detailed fixes.
---
# Installation Issues
Section: support
Category: support/troubleshooting
URL: https://docs.claudekit.cc/docs/troubleshooting/installation-issues
# Installation Issues
ClaudeKit installation problems? Get unblocked in minutes with platform-specific fixes.
## Quick Fix: Command Not Found
**Symptom**: `ck: command not found` or `claudekit-cli: command not found`
**Solution**:
```bash
# Verify global installation
npm list -g claudekit-cli
# If not found, install globally
npm install -g claudekit-cli
# Verify installation
ck --version
```
If still not working, check your PATH (see [PATH issues](#path-issues) below).
---
## Platform-Specific Issues
### Windows
#### PowerShell Execution Policy
**Symptom**: "cannot be loaded because running scripts is disabled"
**Solution**:
```powershell
# Check current policy
Get-ExecutionPolicy
# Set policy (run as Administrator)
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Verify
ck --version
```
**Expected output**:
```
claudekit-cli/1.0.0
```
#### Windows PATH Not Updated
**Symptom**: `ck` works in some terminals but not others
**Solution**:
1. Find npm global path:
```bash
npm config get prefix
```
2. Add to System PATH:
- Open "Environment Variables" (Win + X β System β Advanced β Environment Variables)
- Under "User variables", select "Path" β Edit
- Add: `C:\Users\YourName\AppData\Roaming\npm` (or your npm prefix path)
- Click OK, restart terminal
3. Verify:
```bash
ck --version
```
#### Node.js Version Too Old
**Symptom**: "Error: ClaudeKit requires Node.js 18 or higher"
**Solution**:
```bash
# Check current version
node --version
# If < 18, install latest Node.js from nodejs.org
# Or use nvm (Node Version Manager)
nvm install 20
nvm use 20
# Reinstall ClaudeKit
npm install -g claudekit-cli
```
---
### macOS
#### Permission Denied
**Symptom**: "EACCES: permission denied" during `npm install -g`
**Solution 1**: Fix npm permissions (recommended)
```bash
# Create npm global directory
mkdir ~/.npm-global
# Configure npm to use new directory
npm config set prefix '~/.npm-global'
# Add to PATH
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.zshrc
source ~/.zshrc
# Reinstall ClaudeKit
npm install -g claudekit-cli
```
**Solution 2**: Use sudo (not recommended)
```bash
sudo npm install -g claudekit-cli
```
**Solution 3**: Use nvm (best practice)
```bash
# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Reload shell
source ~/.zshrc
# Install Node 20
nvm install 20
nvm use 20
# Install ClaudeKit (no sudo needed)
npm install -g claudekit-cli
```
#### Command Not Found After Install
**Symptom**: Installation succeeds but `ck` command not found
**Solution**:
```bash
# Check npm global bin path
npm config get prefix
# Add to PATH in ~/.zshrc (or ~/.bash_profile)
echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
# Verify
ck --version
```
#### Apple Silicon (M1/M2/M3) Issues
**Symptom**: Architecture mismatch errors
**Solution**:
```bash
# Ensure using native ARM64 Node
arch
# Should show "arm64", not "i386"
# If wrong architecture, reinstall Node
# Download ARM64 version from nodejs.org
# Or use nvm
nvm install 20
nvm alias default 20
# Reinstall ClaudeKit
npm install -g claudekit-cli
```
---
### Linux
#### Permission Denied
**Symptom**: "EACCES: permission denied" during global install
**Solution**:
```bash
# Option 1: Use nvm (recommended)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
source ~/.bashrc
nvm install 20
npm install -g claudekit-cli
# Option 2: Change npm global directory
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
npm install -g claudekit-cli
```
#### Missing Dependencies
**Symptom**: "node-gyp rebuild" fails or native module errors
**Solution**:
```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y build-essential python3
# Fedora/RHEL
sudo dnf install -y gcc-c++ make python3
# Arch
sudo pacman -S base-devel python
# Reinstall ClaudeKit
npm install -g claudekit-cli
```
#### PATH Not Set
**Symptom**: `ck` command not found after successful install
**Solution**:
```bash
# Find npm global bin directory
npm config get prefix
# Add to PATH in ~/.bashrc (or ~/.zshrc)
echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
# Verify
which ck
ck --version
```
---
## Common Installation Errors
### NPM Registry Issues
**Symptom**: "Unable to resolve dependency tree" or "ERESOLVE" errors
**Solution**:
```bash
# Clear npm cache
npm cache clean --force
# Update npm
npm install -g npm@latest
# Retry installation
npm install -g claudekit-cli
```
### Network/Proxy Issues
**Symptom**: Timeout or connection refused during install
**Solution**:
```bash
# Check npm registry
npm config get registry
# If behind proxy, configure
npm config set proxy http://proxy.company.com:8080
npm config set https-proxy http://proxy.company.com:8080
# Or use different registry
npm config set registry https://registry.npmjs.org/
# Retry installation
npm install -g claudekit-cli
```
### Version Conflicts
**Symptom**: "Incompatible peer dependencies" or version mismatch errors
**Solution**:
```bash
# Uninstall old version
npm uninstall -g claudekit-cli
# Clear cache
npm cache clean --force
# Install latest version
npm install -g claudekit-cli@latest
# Verify
ck --version
```
---
## PATH Issues
### Verify Global npm Location
```bash
# Check where npm installs global packages
npm config get prefix
# Common locations:
# Windows: C:\Users\YourName\AppData\Roaming\npm
# macOS: /usr/local or ~/.npm-global
# Linux: /usr/local or ~/.npm-global
```
### Add npm to PATH
**Windows**:
```powershell
# PowerShell (permanent)
$env:Path += ";C:\Users\YourName\AppData\Roaming\npm"
[System.Environment]::SetEnvironmentVariable("Path", $env:Path, "User")
```
**macOS/Linux (bash)**:
```bash
echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```
**macOS/Linux (zsh)**:
```bash
echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
```
### Verify PATH Update
```bash
# Check if npm global bin is in PATH
echo $PATH | grep npm
# Should show npm bin directory
```
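If the grep shows nothing, printing each `PATH` entry on its own line makes the missing npm directory easier to spot:
```bash
# Show PATH one entry per line and filter for npm-related directories
echo "$PATH" | tr ':' '\n' | grep -i npm
```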
---
## Verify Complete Installation
After fixing installation issues, verify everything works:
```bash
# Check CLI version
ck --version
# Expected: claudekit-cli/1.0.0
# Check available commands
ck --help
# Expected: Lists init, versions, diagnose commands
# Test with demo project
mkdir test-project
cd test-project
ck init --kit engineer
# Expected: Downloads ClaudeKit Engineer successfully
```
---
## Prevention Tips
✅ **Do**:
- Use nvm (Node Version Manager) to avoid permission issues
- Keep Node.js updated (18+)
- Use npm global directory in home folder
- Check PATH after installation
- Update ClaudeKit regularly: `ck init`
❌ **Don't**:
- Use sudo with npm (except as last resort)
- Install with old Node.js versions (<18)
- Ignore permission warnings
- Mix package managers (npm, yarn, pnpm, bun)
---
## Related Issues
- [Command Errors](/docs/support/troubleshooting/command-errors) - After installation, commands not working
- [API Key Setup](/docs/support/troubleshooting/api-key-setup) - Configure credentials after install
- [Performance Issues](/docs/support/troubleshooting/performance-issues) - Slow installation or downloads
---
## Still Stuck?
### Run Diagnostics
```bash
# System info
node --version
npm --version
which node
which npm
# npm configuration
npm config list
npm config get prefix
# Installation status
npm list -g claudekit-cli
which ck
```
### Get Help
1. **Check logs**: Look for errors during `npm install -g claudekit-cli`
2. **GitHub Issues**: [Report installation problems](https://github.com/claudekit/claudekit-engineer/issues)
3. **Discord**: [Join ClaudeKit community](https://claudekit.cc/discord)
Include in your report:
- Operating system and version
- Node.js version (`node --version`)
- npm version (`npm --version`)
- Full error message
- Steps you've already tried
---
**90% of installation issues resolve with proper PATH configuration.** Follow platform-specific steps above to get unblocked fast.
---
# Command Errors
Section: support
Category: support/troubleshooting
URL: https://docs.claudekit.cc/docs/troubleshooting/command-errors
# Command Errors
Commands not working? Get them running in minutes with these step-by-step fixes.
## Quick Fix: Command Does Nothing
**Symptom**: Typing `/cook` or other slash commands produces no response
**Solution**:
```bash
# 1. Verify .claude directory exists
ls -la .claude/
# 2. Check commands directory
ls .claude/commands/
# 3. Verify specific command file
cat .claude/commands/core/cook.md
# If files missing, reinitialize ClaudeKit
ck init --kit engineer
```
---
## Command Not Found
### Slash Commands Don't Work
**Symptom**: `/command` shows "command not found" or no response
**Detailed Solution**:
**Step 1: Verify Claude Code is running**
```bash
# Check if Claude Code CLI is available
claude --version
# If not found, install from claude.ai/code
```
**Step 2: Check .claude directory structure**
```bash
# Should show:
# .claude/
# ├── agents/
# ├── commands/
# ├── skills/
# └── workflows/
tree .claude -L 2
# Or without tree command
ls -R .claude/
```
**Step 3: Verify command files exist**
```bash
# List all commands
find .claude/commands -name "*.md"
# Should show files like:
# .claude/commands/core/cook.md
# .claude/commands/core/plan.md
# .claude/commands/fix/fast.md
# etc.
```
**Step 4: Check command file format**
```bash
# View command file
cat .claude/commands/core/cook.md
# Must have frontmatter:
# ---
# name: cook
# description: Implement a feature step by step
# ---
```
**Expected structure**:
```markdown
---
name: cook
description: Implement a feature step by step
model: gemini-2.5-flash-agent
---
# Cook Command
Detailed implementation instructions...
```
---
### ck Commands Don't Work
**Symptom**: `ck init` shows "command not found"
**Solution**:
```bash
# Check if CLI installed
npm list -g claudekit-cli
# If not installed
npm install -g claudekit-cli
# Verify
ck --version
# If still not found, check PATH
which ck
echo $PATH
```
See [Installation Issues](/docs/support/troubleshooting/installation-issues) for PATH configuration.
---
## Command Execution Failures
### Command Starts But Fails
**Symptom**: Command begins execution but throws errors
**Diagnosis**:
```bash
# Run with verbose output
export CLAUDEKIT_VERBOSE=1
/cook implement user authentication
# Check error message details
```
**Common failures**:
#### Missing API Keys
**Error**: "API key not found" or "GEMINI_API_KEY is not set"
**Solution**:
```bash
# Add to .env
echo 'GEMINI_API_KEY=your-key-here' >> .env
# Or export for current session
export GEMINI_API_KEY=your-key-here
# Verify
echo $GEMINI_API_KEY
```
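If you keep keys in `.env`, also make sure the file stays out of version control; the secret scan during `/git:cm` is a safety net, not a substitute:
```bash
# Ensure .env is excluded from Git
grep -qxF '.env' .gitignore || echo '.env' >> .gitignore
```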
See [API Key Setup](/docs/support/troubleshooting/api-key-setup) for complete configuration.
#### Invalid Command Arguments
**Error**: "Invalid argument" or "Unexpected option"
**Solution**:
```bash
# Check command syntax in file
cat .claude/commands/core/cook.md
# Use correct format:
/cook implement feature name
# NOT:
/cook --implement feature
```
#### Agent Not Available
**Error**: "Agent 'planner' not found" or agent fails to respond
**Solution**:
```bash
# Check agents directory
ls .claude/agents/
# Should show:
# planner.md
# researcher.md
# code-reviewer.md
# etc.
# Verify agent file
cat .claude/agents/planner.md
# Reinitialize if missing
ck init --kit engineer
```
See [Agent Issues](/docs/support/troubleshooting/agent-issues) for agent-specific problems.
---
## Syntax Errors
### Invalid Frontmatter
**Symptom**: Command file exists but command doesn't load
**Problem**: Frontmatter syntax errors
```bash
# Check frontmatter format
head -n 10 .claude/commands/core/cook.md
```
✅ **Correct format**:
```yaml
---
name: cook
description: Implement a feature step by step
model: gemini-2.5-flash-agent
---
```
❌ **Incorrect formats**:
```yaml
# Missing closing ---
---
name: cook
description: Implement a feature
# Extra spaces
---
name: cook
description: Implement a feature
---
# Wrong separators
***
name: cook
description: Implement a feature
***
```
**Fix**:
1. Ensure exactly three dashes: `---`
2. No spaces before property names
3. Use correct YAML syntax
4. Include closing `---`
### Command Name Conflicts
**Symptom**: Custom command doesn't work, core command runs instead
**Problem**: Duplicate command names
**Solution**:
```bash
# Find duplicates
find .claude/commands -name "*.md" -exec grep -l "^name: cook$" {} \;
# Should show only one file
# If duplicates exist, rename custom command
# Change name in frontmatter
---
name: cook-custom
description: My custom cook implementation
---
```
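To surface duplicates across every command file in one pass, list all declared command names and keep only repeated entries:
```bash
# Print command names that appear in more than one command file
grep -h "^name:" $(find .claude/commands -name "*.md") | sort | uniq -d
```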
---
## File System Issues
### .claude Directory Missing
**Symptom**: No slash commands work, directory not found
**Solution**:
```bash
# Initialize ClaudeKit
ck init --kit engineer
# Verify structure
tree .claude -L 2
```
### Corrupted Command Files
**Symptom**: Commands worked before, now fail with parse errors
**Solution**:
```bash
# Backup current .claude
cp -r .claude .claude.backup
# Update to fresh version
ck init --kit engineer
# Restore custom files if needed
cp .claude.backup/commands/my-custom.md .claude/commands/
# Verify
/cook test command
```
### Permission Issues
**Symptom**: "Permission denied" when accessing .claude files
**Solution**:
```bash
# Check file permissions
ls -la .claude/
# Fix permissions (Unix/Linux/macOS)
chmod -R 755 .claude/
# On Windows (PowerShell as Admin)
icacls .claude /grant Everyone:F /T
```
---
## Verify Command Structure
### Required Files
ClaudeKit requires this structure:
```
.claude/
├── agents/                # AI agent definitions
│   ├── planner.md
│   ├── researcher.md
│   ├── code-reviewer.md
│   └── ...
├── commands/              # Slash commands
│   ├── core/
│   │   ├── cook.md
│   │   ├── plan.md
│   │   └── ...
│   ├── fix/
│   ├── git/
│   └── ...
├── skills/                # Specialized skills
└── workflows/             # Workflow definitions
CLAUDE.md                  # Configuration
.mcp.json                  # MCP server config [DEPRECATED]
```
### Validate Structure
```bash
# Check all required directories
for dir in agents commands skills workflows; do
if [ -d ".claude/$dir" ]; then
echo "β
.claude/$dir exists"
else
echo "β .claude/$dir missing"
fi
done
# Count command files
find .claude/commands -name "*.md" | wc -l
# Should show 30+
# Count agent files
find .claude/agents -name "*.md" | wc -l
# Should show 12+
```
---
## Debugging Commands
### Enable Verbose Mode
```bash
# Method 1: Environment variable
export CLAUDEKIT_VERBOSE=1
/cook implement feature
# Method 2: Command flag (if supported)
/cook implement feature --verbose
# Method 3: Check logs
cat ~/.claudekit/logs/latest.log
```
### Test Individual Components
```bash
# Test agent directly
cat .claude/agents/planner.md
# Test command parsing
head -n 20 .claude/commands/core/cook.md
# Test Claude Code
claude --version
```
### Run Diagnostics
```bash
# If available
ck diagnose --verbose
# Manual checks
echo "Node: $(node --version)"
echo "npm: $(npm --version)"
echo "Claude Code: $(claude --version)"
echo "ClaudeKit CLI: $(ck --version)"
echo "Working directory: $(pwd)"
echo ".claude exists: $([ -d .claude ] && echo yes || echo no)"
```
---
## Common Quick Fixes
### Reset ClaudeKit
```bash
# Backup custom files
cp -r .claude .claude.backup
# Update to latest
ck init --kit engineer
# Restore custom commands
cp .claude.backup/commands/my-custom.md .claude/commands/
# Test
/cook hello world
```
### Verify Command Works
```bash
# Simple test command
/cook implement hello world feature
# Expected: Planner starts, creates plan, delegates to agents
# If works: Other commands should work too
# If fails: Check specific error message above
```
### Reload Claude Code
```bash
# Exit Claude Code (Ctrl+C or type 'exit')
exit
# Restart
claude
# Or with skip permissions
claude --dangerously-skip-permissions
# Test command again
/cook test
```
---
## Prevention Tips
✅ **Do**:
- Keep ClaudeKit updated: `ck init`
- Backup .claude before modifications
- Use correct frontmatter syntax
- Verify command names are unique
- Check .claude structure regularly
❌ **Don't**:
- Modify core command files directly
- Delete .claude directory without backup
- Mix ClaudeKit versions
- Create commands without frontmatter
- Use special characters in command names
---
## Related Issues
- [Installation Issues](/docs/support/troubleshooting/installation-issues) - CLI not installed properly
- [Agent Issues](/docs/support/troubleshooting/agent-issues) - Agents not responding
- [API Key Setup](/docs/support/troubleshooting/api-key-setup) - Missing credentials
---
## Still Stuck?
### Collect Debug Info
```bash
# Create debug report
{
echo "=== System Info ==="
uname -a
node --version
npm --version
claude --version
ck --version
echo -e "\n=== Directory Structure ==="
tree .claude -L 2
echo -e "\n=== Command Files ==="
find .claude/commands -name "*.md"
echo -e "\n=== Recent Errors ==="
tail -50 ~/.claudekit/logs/latest.log
} > claudekit-debug.txt
```
### Get Help
1. **GitHub Issues**: [Report command problems](https://github.com/claudekit/claudekit-engineer/issues)
2. **Discord**: [Ask community](https://claudekit.cc/discord)
3. **Include**: Debug report, error message, steps to reproduce
---
**Most command issues stem from missing files or incorrect structure.** Run `ck init --kit engineer` to fix 80% of problems instantly.
---
# Agent Issues
Section: support
Category: support/troubleshooting
URL: https://docs.claudekit.cc/docs/troubleshooting/agent-issues
# Agent Issues
Agents not responding or behaving unexpectedly? Debug and fix agent problems fast.
## Quick Fix: Agent Not Activating
**Symptom**: Command runs but agent never starts or responds
**Solution**:
```bash
# 1. Verify agent file exists
ls .claude/agents/ | grep planner
# 2. Check agent file format
cat .claude/agents/planner.md
# 3. Verify Claude Code is running
claude --version
# 4. Restart Claude Code
# Exit (Ctrl+C), then restart
claude
```
---
## Agent Not Responding
### Command Runs But Agent Silent
**Symptom**: Command starts but no agent output appears
**Diagnosis**:
```bash
# Check agent file exists
ls -la .claude/agents/
# Should show:
# planner.md
# researcher.md
# code-reviewer.md
# tester.md
# copywriter.md
# etc.
```
**Step 1: Verify agent file format**
```bash
# View agent file
cat .claude/agents/planner.md
```
**Required structure**:
```markdown
---
name: planner
description: Creates implementation plans
model: gemini-2.5-flash-agent
---
# Planner Agent
Agent instructions and behaviors...
```
✅ **Correct frontmatter**:
```yaml
---
name: planner
description: Creates implementation plans
model: gemini-2.5-flash-agent
---
```
❌ **Incorrect frontmatter**:
```yaml
# Missing model
---
name: planner
description: Creates implementation plans
---
# Wrong model name
---
name: planner
description: Creates implementation plans
model: gpt-4
---
# Syntax error
---
name planner
description: Creates implementation plans
---
```
**Step 2: Check Claude Code connection**
```bash
# Verify Claude Code CLI is running
ps aux | grep claude
# If not running, start it
claude
```
**Step 3: Check API keys**
Some agents require API keys:
```bash
# Check .env file
cat .env | grep API_KEY
# Required keys:
# GEMINI_API_KEY=... # For Gemini-powered agents
# SEARCH_API_KEY=... # For researcher agent
# OPENROUTER_API_KEY=... # For alternative models
```
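A small loop shows which of these keys are missing from the current shell (key names as listed above; not every agent needs all of them):
```bash
# Report which expected API keys are set in this shell
for key in GEMINI_API_KEY SEARCH_API_KEY OPENROUTER_API_KEY; do
  [ -n "${!key}" ] && echo "$key is set" || echo "$key is MISSING"
done
```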
See [API Key Setup](/docs/support/troubleshooting/api-key-setup) for configuration.
---
### Agent Starts Then Stops
**Symptom**: Agent begins work but stops mid-execution
**Common causes**:
#### API Rate Limiting
**Error**: "Rate limit exceeded" or "Quota exceeded"
**Solution**:
```bash
# Check API key tier
# Google AI Studio: Free tier = 15 RPM
# Vertex AI: Higher limits
# Wait 1 minute, then retry
/cook implement feature
# Or upgrade to paid tier at aistudio.google.com
```
**Monitor rate limits**:
```bash
# Enable verbose mode
export CLAUDEKIT_VERBOSE=1
# Watch API calls
/cook implement feature
# Look for "Rate limit" messages
```
#### Network Issues
**Error**: Timeout or connection refused
**Solution**:
```bash
# Check internet connection
ping google.com
# Check proxy settings if behind corporate firewall
export HTTP_PROXY=http://proxy.company.com:8080
export HTTPS_PROXY=http://proxy.company.com:8080
# Retry command
/cook implement feature
```
#### Memory Issues
**Error**: "Out of memory" or agent hangs
**Solution**:
```bash
# Check available memory
free -h # Linux
vm_stat # macOS
# Clear Node cache
npm cache clean --force
# Restart Claude Code with more memory
NODE_OPTIONS="--max-old-space-size=4096" claude
```
---
## Agent Conflicts
### Multiple Agents Interfere
**Symptom**: Two agents try to modify same files simultaneously
**Solution**:
Commands are designed to orchestrate agents sequentially. If conflicts occur:
```bash
# Check command implementation
cat .claude/commands/core/cook.md
# Look for orchestration logic:
# 1. Planner (creates plan)
# 2. Developer (implements code)
# 3. Tester (runs tests)
# 4. Code reviewer (reviews)
# Agents should run in sequence, not parallel
```
**Fix conflicts**:
```bash
# Backup work in progress
git stash
# Restart command with clean state
/cook implement feature
# Expected: Agents run one at a time
```
### Agent Loop
**Symptom**: Agent keeps repeating same action
**Solution**:
```bash
# Stop Claude Code (Ctrl+C)
^C
# Check if agent file has infinite loop logic
cat .claude/agents/problematic-agent.md
# Look for:
# - Missing completion conditions
# - Recursive calls without exit
# - Incorrect workflow logic
# Report to ClaudeKit if core agent
# Or fix custom agent logic
```
---
## Delegation Problems
### Agent Doesn't Spawn Sub-Agents
**Symptom**: Planner should spawn researchers but doesn't
**Expected behavior**:
```
Planner agent starts
├── Spawns researcher #1 (Google search)
├── Spawns researcher #2 (Documentation)
└── Waits for results, creates plan
```
**Diagnosis**:
```bash
# Check planner agent file
cat .claude/agents/planner.md | grep -i "spawn\|delegate"
# Should contain instructions like:
# "Spawn researcher agents in parallel"
# "Delegate research tasks"
```
**Solution**:
```bash
# Update ClaudeKit to latest version
ck init --kit engineer
# Verify planner agent updated
cat .claude/agents/planner.md | head -20
# Test delegation
/plan implement user authentication
# Expected: Multiple agents activate
```
### Sub-Agent Results Not Used
**Symptom**: Sub-agents complete but parent ignores results
**Solution**:
```bash
# Check if results written to file system
ls -la plans/reports/
# Should show:
# research-*.md
# plan-*.md
# review-*.md
# If missing, agents not writing reports correctly
# Update ClaudeKit
ck init --kit engineer
```
---
## Slow Agent Performance
### Agent Takes Too Long
**Symptom**: Agent runs for minutes without completing
**Optimization steps**:
**Step 1: Use appropriate agent for task**
```bash
# ✅ Correct: Fast tasks with fast agents
/fix:fast typo in button text # Uses fast model
# ❌ Wrong: Simple task with complex agent
/fix:hard typo in button text # Overkill, uses slow model
```
**Step 2: Check model configuration**
```bash
# View agent model
cat .claude/agents/planner.md | grep model:
# Fast models:
# - gemini-2.5-flash-agent
# - grok-code
# Slow but thorough models:
# - claude-sonnet-4
# - claude-opus-4
```
**Step 3: Optimize task description**
```bash
# ✅ Specific task
/cook implement JWT authentication with refresh tokens
# ❌ Vague task (agent spends time clarifying)
/cook add auth stuff
```
**Step 4: Check network latency**
```bash
# Test API response time
time curl -X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=$GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}'
# Should complete in <2 seconds
# If slow, check internet connection
```
See [Performance Issues](/docs/support/troubleshooting/performance-issues) for optimization.
---
## Agent-Specific Issues
### Planner Agent
**Issue**: Plan too vague or missing details
**Solution**:
```bash
# Provide more context in task description
/plan implement user authentication system with:
- JWT tokens
- Refresh token rotation
- Email/password login
- OAuth (Google, GitHub)
- 2FA support
# More context = better plan
```
**Issue**: Plan missing research
**Solution**:
```bash
# Check researcher agent exists
cat .claude/agents/researcher.md
# Review researcher agent logic
```
### Researcher Agent
**Issue**: No search results returned
**Solution**:
```bash
# Check SearchAPI key
echo $SEARCH_API_KEY
# If missing:
export SEARCH_API_KEY=your-key-here
# Or add to .env
echo 'SEARCH_API_KEY=your-key' >> .env
# Test manually
curl "https://www.searchapi.io/api/v1/search?q=test&api_key=$SEARCH_API_KEY"
```
### Code Reviewer Agent
**Issue**: Review too strict or missing issues
**Solution**:
```bash
# Check review criteria in agent file
cat .claude/agents/code-reviewer.md
# Adjust sensitivity if needed (custom agent):
# - Set review_level: strict/moderate/lenient
# - Configure security rules
# - Enable/disable specific checks
```
### Tester Agent
**Issue**: Tests fail to run
**Solution**:
```bash
# Check test framework installed
npm list jest # or vitest, mocha, etc.
# Install if missing
npm install --save-dev jest
# Verify test command in package.json
cat package.json | grep test
# Should show:
# "test": "jest" or similar
# Run tests manually
npm test
```
### Copywriter Agent
**Issue**: Generated copy is off-brand
**Solution**:
```bash
# Add brand guidelines to project
cat > docs/brand-voice.md << 'EOF'
# Brand Voice Guidelines
- Tone: Professional but approachable
- Avoid: Jargon, corporate speak
- Use: Active voice, short sentences
- Examples: [Include brand examples]
EOF
# Reference in copywriter prompt
/content:good homepage hero section
# Include: Use brand voice from docs/brand-voice.md
```
---
## Debugging Agents
### Enable Agent Logging
```bash
# Method 1: Environment variable
export CLAUDEKIT_DEBUG=1
export CLAUDEKIT_VERBOSE=1
# Method 2: Check agent output files
ls -la plans/reports/
# View latest agent report
cat plans/reports/$(ls -t plans/reports/ | head -1)
```
### Monitor Agent Activity
```bash
# Watch plans directory for new reports
watch -n 1 'ls -lh plans/reports/'
# Or use continuous tail
tail -f plans/reports/*.md
```
### Test Agent Directly
```bash
# View agent prompt
cat .claude/agents/planner.md
# Test with minimal command
/plan hello world feature
# Expected: Quick plan generation
# If fails: Check agent file syntax
```
---
## Verify Agent Setup
### Check All Agents
```bash
# List all agents
ls -1 .claude/agents/
# Expected output (12 agents):
# code-reviewer.md
# copywriter.md
# database-admin.md
# debugger.md
# designer.md
# docs-manager.md
# git-manager.md
# journal-writer.md
# planner.md
# project-manager.md
# researcher.md
# tester.md
# Verify count
ls .claude/agents/*.md | wc -l
# Should show: 12
```
### Validate Agent Files
```bash
# Check each agent has required frontmatter
for agent in .claude/agents/*.md; do
echo "Checking $agent..."
grep -q "^name:" "$agent" && echo "β
Has name" || echo "β Missing name"
grep -q "^model:" "$agent" && echo "β
Has model" || echo "β Missing model"
echo ""
done
```
---
## Prevention Tips
✅ **Do**:
- Keep agents updated: `ck init`
- Use appropriate agents for task complexity
- Provide clear, specific task descriptions
- Monitor API rate limits
- Check agent reports after execution
❌ **Don't**:
- Modify core agent files directly
- Run multiple commands simultaneously
- Use slow models for simple tasks
- Ignore agent error messages
- Delete agent reports prematurely
---
## Related Issues
- [Command Errors](/docs/support/troubleshooting/command-errors) - Commands not triggering agents
- [API Key Setup](/docs/support/troubleshooting/api-key-setup) - Agent API credentials
- [Performance Issues](/docs/support/troubleshooting/performance-issues) - Slow agent execution
---
## Still Stuck?
### Create Agent Debug Report
```bash
# Collect agent diagnostics
{
echo "=== Agent Files ==="
ls -lh .claude/agents/
echo -e "\n=== Agent Reports ==="
ls -lh plans/reports/
echo -e "\n=== API Keys ==="
env | grep API_KEY | sed 's/=.*/=***/'
echo -e "\n=== Recent Agent Output ==="
tail -100 plans/reports/$(ls -t plans/reports/ | head -1)
} > agent-debug.txt
```
### Get Help
1. **GitHub Issues**: [Report agent problems](https://github.com/claudekit/claudekit-engineer/issues)
2. **Discord**: [Agent troubleshooting channel](https://claudekit.cc/discord)
3. **Include**: Agent debug report, specific agent name, command used, expected vs actual behavior
---
**Most agent issues resolve with updated ClaudeKit.** Run `ck init --kit engineer` first, then retest your command.
---
# API Key Setup
Section: support
Category: support/troubleshooting
URL: https://docs.claudekit.cc/docs/troubleshooting/api-key-setup
# API Key Setup
Fix API key errors and configure credentials for ClaudeKit's AI-powered features.
## Quick Fix: Missing API Key
**Symptom**: "API key not found" or "GEMINI_API_KEY is not set"
**Solution**:
```bash
# Add to .env file (recommended)
echo 'GEMINI_API_KEY=your-key-here' >> .env
# Or export for current session
export GEMINI_API_KEY=your-key-here
# Verify
echo $GEMINI_API_KEY
# Restart Claude Code
claude
```
---
## Google Gemini API Key
### Get Your API Key
**Step 1: Visit Google AI Studio**
Go to [aistudio.google.com](https://aistudio.google.com)
**Step 2: Create API Key**
1. Click "Get API key" in top nav
2. Select "Create API key in new project" (or use existing project)
3. Copy the generated key (starts with `AIza...`)
**Step 3: Configure ClaudeKit**
**Method 1: .env file (recommended)**
```bash
# Create or edit .env in project root
echo 'GEMINI_API_KEY=AIza...' >> .env
# Verify
cat .env | grep GEMINI
```
**Method 2: Export (session only)**
```bash
# Add to current terminal session
export GEMINI_API_KEY=AIza...
# Verify
echo $GEMINI_API_KEY
# Make permanent (add to shell profile)
echo 'export GEMINI_API_KEY=AIza...' >> ~/.bashrc
source ~/.bashrc
```
### Verify Gemini Setup
```bash
# Test API key directly
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=$GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{"contents":[{"parts":[{"text":"Hello"}]}]}'
# Expected: JSON response with generated text
# If error: Check key validity
```
### Gemini API Issues
#### Invalid API Key
**Error**: "API key not valid" or "403 Forbidden"
**Solutions**:
1. **Check key format**:
```bash
# Correct format: AIza... (39 characters)
echo -n "$GEMINI_API_KEY" | wc -c
# Should show: 39
```
2. **Check for spaces/quotes**:
```bash
# ❌ Wrong
export GEMINI_API_KEY='AIza...' # Has quotes
export GEMINI_API_KEY=AIza ... # Has space
# ✅ Correct
export GEMINI_API_KEY=AIza...
```
3. **Regenerate key**:
- Go to [aistudio.google.com](https://aistudio.google.com)
- Delete old key
- Create new key
- Update configuration
#### API Not Enabled
**Error**: "API not enabled" or "Permission denied"
**Solution**:
1. Go to [Google Cloud Console](https://console.cloud.google.com)
2. Select your project
3. Enable "Generative Language API"
4. Wait 1-2 minutes for propagation
5. Retry
#### Rate Limit Exceeded
**Error**: "429 Resource exhausted" or "Quota exceeded"
**Free tier limits**:
- 15 requests per minute
- 1,500 requests per day
- 1 million tokens per minute
**Solutions**:
```bash
# Wait 1 minute between commands
/plan implement feature
# Wait 60 seconds...
/code @plans/feature.md
# Or upgrade to paid tier
# Visit: console.cloud.google.com/billing
```
**Monitor usage**:
```bash
# Enable verbose mode
export CLAUDEKIT_VERBOSE=1
# Watch API calls
/cook implement feature
# Count requests in output
```
---
## SearchAPI Key
SearchAPI powers the researcher agent for Google/YouTube search.
### Get Your API Key
**Step 1: Visit SearchAPI**
Go to [searchapi.io](https://www.searchapi.io)
**Step 2: Sign Up**
1. Create free account (100 searches/month)
2. Verify email
3. Copy API key from dashboard
**Step 3: Configure ClaudeKit**
```bash
# Add to .env
echo 'SEARCH_API_KEY=your-searchapi-key' >> .env
# Or export
export SEARCH_API_KEY=your-searchapi-key
# Verify
echo $SEARCH_API_KEY
```
### Verify SearchAPI Setup
```bash
# Test API key
curl "https://www.searchapi.io/api/v1/search?engine=google&q=test&api_key=$SEARCH_API_KEY"
# Expected: JSON with search results
# If error: Check key validity
```
### SearchAPI Issues
#### Free Tier Limit Reached
**Error**: "Monthly quota exceeded"
**Free tier**: 100 searches/month
**Solution**:
```bash
# Check usage at searchapi.io dashboard
# Wait until next month
# Or upgrade to paid plan
```
#### Search Returns No Results
**Error**: Empty results or "No results found"
**Solutions**:
1. **Check query format**:
```bash
# Test manually
curl "https://www.searchapi.io/api/v1/search?engine=google&q=react+hooks&api_key=$SEARCH_API_KEY"
```
2. **Try different search terms**:
```bash
# More specific query
/plan implement authentication with JWT tokens
# Better than:
/plan add auth
```
---
## OpenRouter API Key
Optional: Use alternative models like Claude Opus, GPT-4, etc.
### Get Your API Key
**Step 1: Visit OpenRouter**
Go to [openrouter.ai](https://openrouter.ai)
**Step 2: Sign Up**
1. Create account
2. Add credits (pay-as-you-go)
3. Generate API key
**Step 3: Configure ClaudeKit**
```bash
# Add to .env
echo 'OPENROUTER_API_KEY=sk-or-...' >> .env
# Or export
export OPENROUTER_API_KEY=sk-or-...
# Verify
echo $OPENROUTER_API_KEY
```
### Configure Agent Models
Edit agent files to use OpenRouter models:
```bash
# Example: use Claude Opus for planning
# Note: `cat >` overwrites the whole file; keep the rest of the agent prompt where "..." appears below
cat > .claude/agents/planner.md << 'EOF'
---
name: planner
description: Creates implementation plans
model: anthropic/claude-opus-4
---
# Planner Agent
...
EOF
```
**Available models**:
- `anthropic/claude-opus-4` - Most capable
- `anthropic/claude-sonnet-4` - Balanced
- `openai/gpt-4-turbo` - Fast, high quality
- `meta-llama/llama-3-70b` - Open source
See [openrouter.ai/models](https://openrouter.ai/models) for full list.
### Verify OpenRouter Setup
```bash
# Test API key
curl https://openrouter.ai/api/v1/chat/completions \
-H "Authorization: Bearer $OPENROUTER_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "anthropic/claude-sonnet-4",
"messages": [{"role": "user", "content": "Hello"}]
}'
# Expected: JSON response with completion
```
---
## Environment Variable Management
### .env File Setup
**Create .env in project root**:
```bash
# .env file
GEMINI_API_KEY=AIza...
SEARCH_API_KEY=searchapi-key-here
OPENROUTER_API_KEY=sk-or-...
# Optional
CLAUDEKIT_VERBOSE=0
CLAUDEKIT_DEBUG=0
```
**Add to .gitignore**:
```bash
# Prevent committing secrets
echo '.env' >> .gitignore
# Verify
cat .gitignore | grep .env
```
### Shell Profile Setup
**Make keys permanent** (add to shell profile):
**Bash** (~/.bashrc or ~/.bash_profile):
```bash
echo 'export GEMINI_API_KEY=AIza...' >> ~/.bashrc
echo 'export SEARCH_API_KEY=searchapi-key' >> ~/.bashrc
source ~/.bashrc
```
**Zsh** (~/.zshrc):
```bash
echo 'export GEMINI_API_KEY=AIza...' >> ~/.zshrc
echo 'export SEARCH_API_KEY=searchapi-key' >> ~/.zshrc
source ~/.zshrc
```
**Fish** (~/.config/fish/config.fish):
```bash
echo 'set -x GEMINI_API_KEY AIza...' >> ~/.config/fish/config.fish
echo 'set -x SEARCH_API_KEY searchapi-key' >> ~/.config/fish/config.fish
source ~/.config/fish/config.fish
```
### Verify All Keys
```bash
# Check all required keys
{
echo "GEMINI_API_KEY: ${GEMINI_API_KEY:0:10}..."
echo "SEARCH_API_KEY: ${SEARCH_API_KEY:0:10}..."
echo "OPENROUTER_API_KEY: ${OPENROUTER_API_KEY:0:10}..."
} | grep -v ': \.\.\.$'
# Should show truncated keys (lines for unset keys are filtered out)
```
---
## Security Best Practices
### Protect Your Keys
✅ **Do**:
- Store keys in .env file
- Add .env to .gitignore (see the quick check below)
- Use environment variables
- Rotate keys periodically
- Use different keys per project
❌ **Don't**:
- Commit keys to git
- Share keys publicly
- Store keys in code files
- Use keys in URLs
- Hardcode keys in scripts
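A quick hygiene check you can run in any project (a minimal sketch; assumes a Unix shell and that `.env` lives in the project root):
```bash
# Confirm .env is covered by .gitignore
git check-ignore .env && echo "✅ .env is ignored" || echo "❌ .env is NOT ignored"
# Restrict read access to your own user
[ -f .env ] && chmod 600 .env
```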
### Check for Leaked Keys
```bash
# Scan git history for keys
git log -p | grep -i "api.*key"
# Check if .env is tracked
git ls-files | grep .env
# If .env is tracked, remove it:
git rm --cached .env
git commit -m "Remove .env from tracking"
```
### Revoke Compromised Keys
**If key is leaked**:
1. **Gemini**: Go to [aistudio.google.com](https://aistudio.google.com) → Delete key
2. **SearchAPI**: Go to [searchapi.io/dashboard](https://www.searchapi.io/dashboard) → Revoke key
3. **OpenRouter**: Go to [openrouter.ai/keys](https://openrouter.ai/keys) → Delete key
Then generate new keys and update configuration.
---
## Verify Complete Setup
After configuring all keys:
```bash
# Test Gemini
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=$GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{"contents":[{"parts":[{"text":"Test"}]}]}' | grep -q "candidates" && echo "β
Gemini works" || echo "β Gemini failed"
# Test SearchAPI
curl -s "https://www.searchapi.io/api/v1/search?q=test&api_key=$SEARCH_API_KEY" | grep -q "search_results" && echo "β
SearchAPI works" || echo "β SearchAPI failed"
# Test with ClaudeKit
/plan implement hello world
# Should complete without API key errors
```
---
## Common Issues
### Keys Not Loading
**Symptom**: Keys set but still "not found" error
**Solutions**:
```bash
# 1. Check .env location
ls -la .env
# Must be in project root
# 2. Restart Claude Code
# Exit and restart to reload environment
# 3. Check for typos
cat .env | grep API_KEY
# Verify spelling: GEMINI_API_KEY (not GEMINI_KEY)
# 4. Check for spaces
cat -A .env   # GNU coreutils; on macOS use: cat -et .env
# Look for trailing spaces or tabs
```
### Keys Expiring
**Symptom**: Keys worked before, now invalid
**Solution**:
```bash
# Regenerate key at provider:
# - Gemini: aistudio.google.com
# - SearchAPI: searchapi.io
# - OpenRouter: openrouter.ai
# Update .env with new key
nano .env
# Restart Claude Code
```
---
## Prevention Tips
✅ **Do**:
- Document which keys are required
- Use a .env.example template (without real keys; see the sketch below)
- Test keys after configuration
- Monitor API usage/quotas
- Set up billing alerts
❌ **Don't**:
- Share keys between projects
- Commit keys to version control
- Ignore rate limit warnings
- Use production keys for testing
- Store keys in cloud config
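A minimal `.env.example` sketch (placeholder values only; the exact keys depend on which providers you use):
```bash
# Create a committed template without real keys
cat > .env.example << 'EOF'
GEMINI_API_KEY=
SEARCH_API_KEY=
OPENROUTER_API_KEY=
EOF
# New contributors copy it and add their own keys
cp .env.example .env
```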
---
## Related Issues
- [Agent Issues](/docs/support/troubleshooting/agent-issues) - Agents failing due to missing keys
- [Command Errors](/docs/support/troubleshooting/command-errors) - Commands failing silently
- [Performance Issues](/docs/support/troubleshooting/performance-issues) - Rate limiting causing slowness
---
## Still Stuck?
### Create API Debug Report
```bash
# Collect API diagnostics (masks real keys)
{
echo "=== Environment Variables ==="
env | grep -i api_key | sed 's/=.*/=***/'
echo -e "\n=== .env File ==="
[ -f .env ] && cat .env | sed 's/=.*/=***/' || echo "No .env file"
echo -e "\n=== API Tests ==="
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY" > /dev/null && echo "β
Gemini reachable" || echo "β Gemini failed"
} > api-debug.txt
```
### Get Help
1. **GitHub Issues**: [Report API key problems](https://github.com/claudekit/claudekit-engineer/issues)
2. **Discord**: [API configuration help](https://claudekit.cc/discord)
3. **Include**: API debug report (keys masked), error messages, provider (Gemini/SearchAPI/OpenRouter)
---
**Most API key issues resolve with proper .env configuration and Claude Code restart.** Set up keys once, use everywhere.
---
# Performance Issues
Section: support
Category: support/troubleshooting
URL: https://docs.claudekit.cc/docs/troubleshooting/performance-issues
# Performance Issues
Commands taking forever? Optimize ClaudeKit for blazing-fast execution.
## Quick Fix: Commands Too Slow
**Symptom**: Commands take 5+ minutes when they should take seconds
**Solution**:
```bash
# 1. Check internet connection
ping google.com
# 2. Verify API keys set (avoid retries)
echo $GEMINI_API_KEY
# 3. Use faster model
# Edit agent file, change model to:
model: gemini-2.5-flash-agent
# 4. Run with verbose to see bottleneck
export CLAUDEKIT_VERBOSE=1
/cook implement feature
```
---
## Slow Command Execution
### General Slowness
**Symptom**: All commands take longer than expected
**Diagnosis**:
```bash
# Time a simple command
time /plan hello world
# Expected: <30 seconds
# If >60 seconds: Investigate below
```
**Common causes & fixes**:
#### Internet Connection
```bash
# Test connection speed
curl -w "@-" -o /dev/null -s https://generativelanguage.googleapis.com <<< "time_total: %{time_total}s\n"
# Should be <1 second
# If slow: Check WiFi, restart router, try ethernet
```
#### API Rate Limiting
**Symptom**: Commands slow after burst of usage
**Google Gemini free tier limits**:
- 15 requests/minute
- 1,500 requests/day
- 1 million tokens/minute
**Solution**:
```bash
# Wait between commands
/plan feature A
# Wait 60 seconds
/code @plans/feature-a.md
# Or upgrade to paid tier
# console.cloud.google.com/billing
```
**Monitor rate limits**:
```bash
# Enable verbose mode
export CLAUDEKIT_VERBOSE=1
/cook implement feature
# Watch for "429" errors or "quota exceeded"
```
#### Wrong Model Selection
**Symptom**: Simple tasks take minutes
**Problem**: Using slow, thorough models for simple tasks
**Solution**:
```bash
# Check agent model
cat .claude/agents/tester.md | grep model:
# Fast models (use for simple tasks):
model: gemini-2.5-flash-agent
model: grok-code
# Slow models (use for complex tasks):
model: claude-sonnet-4
model: claude-opus-4
# Fix: Edit agent file
nano .claude/agents/tester.md
# Change to: model: gemini-2.5-flash-agent
```
**Model speed comparison**:
| Model | Speed | Best For |
|-------|-------|----------|
| gemini-2.5-flash-agent | Fastest | Testing, docs, simple fixes |
| grok-code | Fast | Git ops, quick reviews |
| claude-sonnet-4 | Medium | Code review, design |
| claude-opus-4 | Slow | Architecture, complex planning |
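To see which model every agent is configured to use in one pass (a simple `grep` over the agent files):
```bash
# Show the model line from each agent file
grep -H "^model:" .claude/agents/*.md
```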
---
### Large Codebase Performance
**Symptom**: Commands slow in projects with 1,000+ files
**Optimization steps**:
#### Exclude Unnecessary Files
```bash
# Create .claudeignore (like .gitignore)
cat > .claudeignore << 'EOF'
node_modules/
dist/
build/
.next/
.cache/
*.log
*.lock
coverage/
.git/
EOF
# Verify exclusions
ls .claudeignore
```
#### Optimize Repomix
Repomix scans your codebase for agent context. Large projects slow it down.
```bash
# Check repomix config
cat repomix.config.json
# Optimize with exclude patterns:
{
"exclude": [
"node_modules/**",
"dist/**",
"build/**",
"*.min.js",
"*.map",
"coverage/**",
".git/**"
],
"include": [
"src/**",
"lib/**"
]
}
# Test performance
time npx repomix
# Before optimization: 60+ seconds
# After: <10 seconds
```
#### Limit Context Size
```bash
# For agents that use full codebase context
# Limit scope to relevant directories
# ❌ Slow (scans everything)
/cook implement user auth
# ✅ Fast (scoped)
/cook implement user auth in src/auth/
# Or use --scope flag (if available)
/cook implement user auth --scope src/auth/
```
---
## Memory Issues
### Out of Memory Errors
**Symptom**: "JavaScript heap out of memory" or process crashes
**Solution**:
**Increase Node.js memory**:
```bash
# Method 1: Environment variable
export NODE_OPTIONS="--max-old-space-size=4096"
claude
# Method 2: Direct command
node --max-old-space-size=4096 $(which claude)
# Method 3: Permanent (add to shell profile)
echo 'export NODE_OPTIONS="--max-old-space-size=4096"' >> ~/.bashrc
source ~/.bashrc
```
**Memory allocation guide**:
- Small projects (<100 files): 2048 MB
- Medium projects (100-1,000 files): 4096 MB
- Large projects (1,000+ files): 8192 MB
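For example, a large project might get roughly 8 GB of heap:
```bash
# Large project (1,000+ files)
export NODE_OPTIONS="--max-old-space-size=8192"
claude
```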
**Verify change**:
```bash
node -e "console.log(v8.getHeapStatistics().heap_size_limit/(1024*1024))"
# Should show: 4096 or your set value
```
### Memory Leaks
**Symptom**: Claude Code slows down over time, eventually crashes
**Solution**:
```bash
# Monitor memory usage
# Linux
watch -n 1 "ps aux | grep claude"
# macOS
watch -n 1 "ps aux | grep claude | grep -v grep"
# If memory grows continuously:
# 1. Restart Claude Code
exit
claude
# 2. Clear npm cache
npm cache clean --force
# 3. Update ClaudeKit
ck init --kit engineer
# 4. Update Node.js
node --version # Should be 18+
```
---
## Timeout Issues
### Commands Timeout
**Symptom**: "Request timeout" or command hangs forever
**Solutions**:
#### Split Long-Running Tasks
```bash
# If command takes >5 minutes, split into smaller tasks
# ❌ Slow (tries to do everything)
/cook implement entire authentication system
# ✅ Fast (incremental)
/plan implement authentication
/code @plans/auth.md # Implements login, signup, password reset phases
```
#### Check API Endpoint
```bash
# Test API connectivity
curl -w "time: %{time_total}s\n" -o /dev/null -s \
https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY
# Should complete in <2 seconds
# If timeout: Check firewall, VPN, proxy
```
#### Corporate Firewall/Proxy
**Symptom**: Timeouts only at work/school
**Solution**:
```bash
# Configure proxy
export HTTP_PROXY=http://proxy.company.com:8080
export HTTPS_PROXY=http://proxy.company.com:8080
# Or in .env
echo 'HTTP_PROXY=http://proxy.company.com:8080' >> .env
echo 'HTTPS_PROXY=http://proxy.company.com:8080' >> .env
# Restart Claude Code
claude
```
---
## Debugging Performance
### Enable Verbose Mode
```bash
# Method 1: Environment variable
export CLAUDEKIT_VERBOSE=1
/cook implement feature
# Method 2: Command flag
/cook implement feature --verbose
# Watch for slow operations:
# - "Generating repomix..." (should be <10s)
# - "Calling API..." (should be <5s)
# - "Reading files..." (should be <2s)
```
### Profile Command Execution
```bash
# Time each command
time /plan implement authentication
# Note the "real" time
time /cook implement authentication
# Compare times
# Identify bottleneck:
# - Planning: >2 minutes = researcher agent slow
# - Cooking: >5 minutes = codebase too large
# - Testing: >3 minutes = test suite issues
```
### Check API Latency
```bash
# Test API response time
for i in {1..5}; do
curl -w "time: %{time_total}s\n" -o /dev/null -s \
"https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=$GEMINI_API_KEY" \
-H 'Content-Type: application/json' \
-d '{"contents":[{"parts":[{"text":"Hi"}]}]}'
done
# Average should be <2 seconds
# If >5 seconds consistently: Check network
```
### Monitor Resource Usage
**Linux**:
```bash
# CPU and memory
htop
# Or basic version
top
# Look for "claude" or "node" process
```
**macOS**:
```bash
# Activity Monitor (GUI)
open -a "Activity Monitor"
# Or command line
top -o cpu
# Look for "claude" or "node"
```
**Windows**:
```powershell
# Task Manager (GUI)
taskmgr
# Or PowerShell
Get-Process | Where-Object {$_.ProcessName -like "*node*"} | Select-Object ProcessName, CPU, WorkingSet
```
---
## Optimization Tips
### Project-Level Optimizations
#### Optimize Dependencies
```bash
# Remove unused dependencies
npm prune
# Update to latest versions (often faster)
npm update
# Use faster package manager
npm install -g pnpm
pnpm install # 2-3x faster than npm
```
#### Cache Optimization
```bash
# Clear all caches
npm cache clean --force
rm -rf node_modules/.cache
rm -rf .next # If using Next.js
rm -rf .nuxt # If using Nuxt
# Reinstall
npm install
```
#### Use .claudeignore
```bash
# Exclude performance-killing directories
cat > .claudeignore << 'EOF'
# Dependencies
node_modules/
vendor/
.pnp.*
# Build outputs
dist/
build/
out/
.next/
.nuxt/
.cache/
# Large assets
*.mp4
*.mov
*.zip
*.tar.gz
public/videos/
public/images/originals/
# Logs and temp files
*.log
*.tmp
.DS_Store
# Version control
.git/
.svn/
# IDE
.idea/
.vscode/
*.swp
EOF
```
---
### Command-Level Optimizations
#### Use Faster Commands
```bash
# ✅ Fast commands (seconds)
/fix:fast small bug # Uses fast model
/git:cm # Simple git operations
# ❌ Slow commands (minutes)
/fix:hard complex issue # Uses thorough model
/bootstrap full project # Creates entire structure
```
#### Scope Tasks Narrowly
```bash
# ❌ Slow (vague, agent explores everything)
/cook add feature
# ✅ Fast (specific, agent focuses)
/cook implement GET /api/users endpoint with pagination
# ❌ Slow (entire codebase)
/fix:ui button styling
# ✅ Fast (scoped)
/fix:ui button styling in src/components/Button.tsx
```
#### Run Commands Sequentially
```bash
# ✅ Correct (one at a time)
/plan implement auth
# Wait for completion
/code @plans/auth.md
# ❌ Wrong (compete for resources)
/plan implement auth
/code @plans/auth.md # Don't run simultaneously!
```
---
## Performance Benchmarks
### Expected Command Times
**With fast models (gemini-2.5-flash-agent)**:
| Command | Small Project | Medium Project | Large Project |
|---------|---------------|----------------|---------------|
| /plan | 10-20s | 20-40s | 40-90s |
| /cook | 30-60s | 60-120s | 120-300s |
| /test | 15-30s | 30-60s | 60-120s |
| /fix:fast | 10-20s | 20-30s | 30-60s |
| /git:cm | 5-10s | 10-20s | 20-30s |
**If commands take 2x longer**: Check optimizations above
**If commands take 5x+ longer**: Critical performance issue, see [Still Stuck](#still-stuck)
---
## Prevention Tips
✅ **Do**:
- Use .claudeignore for large projects
- Choose appropriate models for task complexity
- Scope commands narrowly
- Monitor API rate limits
- Keep dependencies updated
- Use verbose mode when debugging
❌ **Don't**:
- Scan entire codebase for small changes
- Use slow models for simple tasks
- Run multiple commands simultaneously
- Ignore timeout warnings
- Let npm cache grow indefinitely
- Process unnecessary files
---
## Related Issues
- [Installation Issues](/docs/support/troubleshooting/installation-issues) - Slow installation problems
- [Command Errors](/docs/support/troubleshooting/command-errors) - Commands failing vs hanging
- [Agent Issues](/docs/support/troubleshooting/agent-issues) - Specific agent slowness
---
## Still Stuck?
### Create Performance Report
```bash
# Collect performance diagnostics
{
echo "=== System Info ==="
node --version
npm --version
claude --version
uname -a
echo -e "\n=== Project Size ==="
find . -type f | wc -l
du -sh .
echo -e "\n=== Memory ==="
free -h 2>/dev/null || vm_stat
echo -e "\n=== Disk Space ==="
df -h .
echo -e "\n=== API Latency ==="
curl -w "time: %{time_total}s\n" -o /dev/null -s \
"https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"
echo -e "\n=== Command Timing ==="
time /plan hello world 2>&1 | tail -3
} > performance-report.txt
```
### Get Help
1. **GitHub Issues**: [Report performance problems](https://github.com/claudekit/claudekit-engineer/issues)
2. **Discord**: [Performance optimization channel](https://claudekit.cc/discord)
3. **Include**: Performance report, project size, specific slow command, expected vs actual time
---
**Most performance issues resolve with proper .claudeignore and model selection.** Start there, then optimize deeper if needed.
---
# Tools & Utilities
Section: tools
Category: tools
URL: https://docs.claudekit.cc/docs/tools/index
# Tools & Utilities
Ecosystem tools that complement ClaudeKit and enhance your development workflow.
## Official Tools
### CCS - Claude Code Switch
**Purpose:** Multi-account management and cost optimization
Switch between multiple Claude accounts and AI models (GLM, Kimi) instantly to avoid rate limits and reduce costs.
**Key Features:**
- Zero-downtime account switching
- 81% cost savings with model delegation
- Parallel workflow support
- OAuth and API key authentication
**Use Cases:**
- Heavy ClaudeKit users hitting rate limits
- Teams sharing Claude accounts
- Cost optimization for simple tasks
- Multi-project workflows
[**Learn More β**](/docs/tools/ccs)
## Community Tools
*Coming soon - Submit your tools via GitHub*
## Tool Categories
### Account Management
- [CCS](/docs/tools/ccs) - Multi-account switching and delegation
### Monitoring & Analytics
*Coming soon*
### Workflow Automation
*Coming soon*
### Integration Tools
*Coming soon*
## Contributing Tools
Have a tool that enhances ClaudeKit? We'd love to feature it!
**Requirements:**
- Open source (MIT, Apache 2.0, or similar)
- Well-documented
- Actively maintained
- Solves a real ClaudeKit user problem
**Submit via:**
- [GitHub](https://github.com/claudekit/claudekit-docs/pulls)
- [Discord Community](https://discord.gg/GnCRPpSw)
---
**Next:** Explore individual tools or return to [Documentation](/docs/agents)
---
# CCS - Claude Code Switch
Section: tools
Category: tools
URL: https://docs.claudekit.cc/docs/tools/ccs
# CCS - Claude Code Switch
**One command, zero downtime, multiple accounts**
Switch between Claude accounts, GLM, Kimi and more instantly. Stop hitting rate limits. Keep working continuously.
## The Problem
You're deep in implementation. Context loaded. Solution crystallizing. Then:
π΄ **"You've reached your usage limit."**
Momentum gone. Context lost. Productivity crater.
Session limits shouldn't kill your flow state.
## The Solution
CCS enables **parallel workflows** instead of sequential switching:
```bash
# Terminal 1: Main work (Work Account)
ccs work "implement authentication system"
# Terminal 2: Side task (Personal Account)
ccs personal "review PR #123"
# Terminal 3: Cost-optimized task (GLM - 81% cheaper)
ccs glm "add tests for all service files"
```
All running simultaneously. No context switching. No downtime.
## Installation
```bash
# Install globally
npm install -g @kaitranntt/ccs
# Verify installation
ccs --version
```
## Quick Start
### Basic Usage
```bash
ccs # Claude subscription (default)
ccs glm # GLM (cost-optimized)
ccs kimi # Kimi (with thinking support)
```
### Delegation with -p Flag
```bash
# Delegate task to GLM
ccs glm -p "fix linting errors in src/"
# Delegate to Kimi for analysis
ccs kimi -p "analyze project structure and document"
# Continue previous session
ccs glm:continue -p "run the tests and fix failures"
```
### Multi-Account Setup
```bash
# Create account profiles
ccs auth create work
ccs auth create personal
# Run concurrently in separate terminals
# Terminal 1 - Work
ccs work "implement feature"
# Terminal 2 - Personal (concurrent)
ccs personal "review code"
```
## Core Features
### 1. Model Switching
Switch between AI models instantly:
```bash
ccs # Claude (default)
ccs glm # GLM-4.6 (cost-optimized)
ccs kimi # Kimi (long-context)
ccs gemini # Gemini 2.5 Pro (OAuth)
ccs codex # GPT-5.1 Codex Max (OAuth)
```
### 2. AI-Powered Delegation
Delegate tasks to cost-optimized models with `-p` flag:
```bash
# Simple task (GLM)
ccs glm -p "add tests for UserService"
# Long-context task (Kimi)
ccs kimi -p "analyze all files in src/ and document architecture"
# Continue previous session
ccs glm:continue -p "run the tests and fix any failures"
```
### 3. Slash Command Support
Use slash commands inside delegated sessions:
```bash
# Execute /cook command in GLM session
ccs glm -p "/cook create responsive landing page"
# Use ClaudeKit commands
ccs glm -p "/fix:test run all tests and fix failures"
```
### 4. Parallel Workflows
Run multiple sessions simultaneously:
```bash
# Terminal 1: Planning (Claude)
ccs "Plan a REST API with authentication"
# Terminal 2: Execution (GLM, cost-optimized)
ccs glm "Implement the user authentication endpoints"
# Terminal 3: Analysis (Kimi)
ccs kimi "Design a caching strategy with trade-off analysis"
```
## Configuration
Location: `~/.ccs/config.json`
### Auto-Created Structure
```json
{
"profiles": {
"glm": "~/.ccs/glm.settings.json",
"glmt": "~/.ccs/glmt.settings.json",
"kimi": "~/.ccs/kimi.settings.json",
"default": "~/.claude/settings.json"
}
}
```
### API Keys Setup
Before using alternative models, update API keys:
**GLM:**
```bash
# Edit the GLM profile and add your Z.AI Coding Plan API key
nano ~/.ccs/glm.settings.json
```
**Kimi:**
```bash
# Edit the Kimi profile and add your Kimi API key
nano ~/.ccs/kimi.settings.json
```
### Custom Claude CLI Path
If Claude is in a non-standard location:
```bash
# Unix/macOS
export CCS_CLAUDE_PATH="/path/to/claude"
# Windows
$env:CCS_CLAUDE_PATH = "D:\Tools\Claude\claude.exe"
```
## Usage Examples
### Basic Switching
```bash
# Use Claude (default)
ccs "implement user authentication"
# Use GLM (cost-optimized)
ccs glm "add tests for all controllers"
# Use Kimi (long-context)
ccs kimi "analyze entire project structure"
```
### Cost-Optimized Workflow
```bash
# Complex planning (use Claude)
ccs "Plan authentication system with OAuth and JWT"
# Simple execution (delegate to GLM - 81% cheaper)
ccs glm -p "Implement the user login endpoint"
# Testing (delegate to GLM)
ccs glm -p "Add unit tests for auth service"
# Review (use Claude)
ccs "Review the authentication implementation"
```
### Multi-Account Workflow
```bash
# Create profiles
ccs auth create client-a
ccs auth create client-b
ccs auth create personal
# Switch during work
ccs client-a "Morning: Client A work"
ccs client-b "Afternoon: Client B work"
ccs personal "Evening: Side project"
```
### Session Continuation
```bash
# Start a task
ccs glm -p "refactor auth.js to use async/await"
# Continue in next session
ccs glm:continue -p "also update the README examples"
# Continue again
ccs glm:continue -p "add error handling"
```
## Integration with ClaudeKit
### Recommended Workflow
```bash
# 1. Plan with Claude
ccs "/plan add payment integration"
# 2. Implement with GLM (cost-optimized)
ccs glm -p "/cook implement Stripe payment flow"
# 3. Test with GLM
ccs glm -p "/fix:test run payment tests"
# 4. Review with Claude
ccs "/review check payment implementation"
```
### Cost Optimization Strategy
**Use Claude for:**
- Complex planning (`/plan`)
- Architecture decisions
- Code review (`/review`)
- Creative problem solving
**Use GLM for:**
- Simple implementations
- Testing and bug fixes (`/fix:test`)
- Documentation updates
- Repetitive tasks
**Use Kimi for:**
- Long-context analysis
- Entire codebase reviews
- Architecture documentation
- Multi-file refactoring
## Advanced Features
### OAuth Authentication
Zero-config auth for supported models:
```bash
# Interactive OAuth (browser opens)
ccs gemini
# Authenticate only (save tokens)
ccs gemini --auth
# Headless mode (for SSH/servers)
ccs gemini --headless
# Logout (clear tokens)
ccs gemini --logout
```
### Help & Version
```bash
# Show version
ccs --version
# Show all commands
ccs --help
```
### Health Check
```bash
# Check CCS installation
ccs doctor
# Shows:
# - Binary status
# - Port availability
# - Configuration validity
```
## Troubleshooting
### OAuth Timeout
```bash
# If browser doesn't load in time
ccs gemini --auth --headless
# Get URL manually
```
### Port Conflict
```bash
# Check port availability
ccs doctor
# Kill process using port 8317
lsof -ti:8317 | xargs kill # Unix
```
### Session Not Continuing
```bash
# Ensure you use :continue suffix
ccs glm:continue -p "next task"
# Check session ID in previous output
```
## Uninstall
```bash
# Remove CCS
npm uninstall -g @kaitranntt/ccs
# Remove config (optional)
rm -rf ~/.ccs
```
## Resources
- **GitHub:** [kaitranntt/ccs](https://github.com/kaitranntt/ccs)
- **Documentation:** [Full docs](https://github.com/kaitranntt/ccs#readme)
- **Issues:** [Report bugs](https://github.com/kaitranntt/ccs/issues)
- **Troubleshooting:** [Guide](https://github.com/kaitranntt/ccs/blob/main/docs/en/troubleshooting.md)
## Next Steps
- [Installation Guide](/docs/getting-started/installation) - Setup ClaudeKit
- [Workflows](/docs/workflows/) - Learn ClaudeKit workflows
- [FAQ](/docs/support/faq) - Common questions
---
**Key Takeaway:** CCS transforms rate limits from blockers into opportunities for cost optimization and parallel workflows. Use it to maintain flow state and reduce AI costs by 81%.
---
# Workflows
Section: workflows
Category: N/A
URL: https://docs.claudekit.cc/docs/workflows/index
# Workflows
Task-oriented guides for common development scenarios using ClaudeKit's slash commands and agents.
## Popular Workflows
### Feature Development
[**Feature Development Guide**](/docs/workflows/feature-development) - Complete feature lifecycle from planning to deployment
```bash
/plan "add user authentication with OAuth"
/code @plans/user-auth.md
/fix:test
/git:pr "feature/user-auth"
```
### Bug Fixing
[**Bug Fixing Workflow**](/docs/workflows/bug-fixing) - Systematic approach to debugging and fixing issues
```bash
/debug "login button not working"
/fix:hard
/fix:test
/git:cm
```
### Documentation
[**Documentation Workflow**](/docs/workflows/documentation) - Keep docs in sync with code changes
```bash
/docs:init
/docs:update "after feature changes"
```
## Quick Workflows
### Setup New Project
```bash
ck init my-project --kit engineer
cd my-project
/plan "set up project structure"
/code @plans/project-setup.md
```
### Add New Feature
```bash
/plan "add [feature description]"
/code @plans/your-feature.md
/design:good "UI mockups if needed"
/fix:test
/git:cm
```
### Deploy to Production
```bash
/plan "prepare for production deployment"
/fix:ci "fix any failing tests"
/git:pr "deploy-to-production"
```
### Code Review
```bash
/code-review "review recent changes"
/fix "implement suggested improvements"
/git:cm
```
## By Use Case
### Frontend Development
- [UI/UX Design](/docs/commands#design-commands) - `/design:good`, `/design:fast`
- Component Development - `/plan → /code → /fix:test`
- Styling - `/design:good` for aesthetic components
### Backend Development
- API Development - `/plan → /code → /fix:hard`
- Database Changes - `/plan "add user table" → /code`
- Performance Optimization - `/debug "slow queries" → /fix`
### Full Stack
- Complete Features - See [Feature Development](/docs/workflows/feature-development)
- Authentication - `/cook "add authentication with Better Auth"`
- E-commerce - `/cook "add Stripe payment integration"`
### DevOps & Infrastructure
- Docker Setup - `/cook "add Docker configuration"`
- CI/CD - `/fix:ci "fix failing GitHub Actions"`
- Deployment - `/plan "deploy to Cloudflare Workers"`
## Advanced Workflows
### Multi-agent Collaboration
```bash
/plan "complex feature with multiple components"
# Spawns: planner → researcher → frontend dev → backend dev → tester
/fix:hard "production bug"
# Spawns: debugger → researcher → dev → tester → reviewer
```
### Content Creation
```bash
/content:good "write marketing copy for new feature"
/content:enhance "improve existing landing page"
/design:good "create visual assets for social media"
```
### Integration Workflows
```bash
/integrate:polar "add Polar billing integration"
/integrate:sepay "add SePay payment gateway"
```
## Getting Started
New to ClaudeKit? Start with:
1. [Getting Started Guide](/docs/getting-started) - Learn the basics
2. [Quick Start](/docs/getting-started/quick-start) - Build your first feature
3. [Feature Development](/docs/workflows/feature-development) - Complete workflow example
## Reference
- [Commands Reference](/docs/commands) - All available commands
- [Agents Overview](/docs/agents) - Meet your AI team
- [Skills Library](/docs/skills) - Built-in knowledge modules
## Need Help?
- [Troubleshooting](/docs/support/troubleshooting) - Common issues
- [FAQ](/docs/support/faq) - Frequently asked questions
- [Support](/docs/support) - Get help from the community
---
# Feature Development Workflow
Section: workflows
Category: N/A
URL: https://docs.claudekit.cc/docs/workflows/feature-development
# Feature Development Workflow
Complete feature lifecycle from planning to deployment using ClaudeKit's multi-agent workflows.
## Overview
The feature development workflow combines planning, research, implementation, testing, and deployment into a cohesive process.
## Step-by-Step Guide
### 1. Planning Phase
```bash
/plan "add user authentication with OAuth providers"
```
**What happens**:
- **Planner Agent** creates detailed implementation plan
- **Researcher Agent** analyzes best practices and security considerations
- **Code Reviewer Agent** reviews plan for architectural soundness
- Generates plan file with phases, tasks, and success criteria
**Output**: Detailed plan in `plans/YYYYMMDD-HHMM-feature-description/`
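You can review the generated plan before implementing it. The paths below are illustrative; the actual directory name depends on the date and feature description:
```bash
# List generated plans
ls plans/
# Read the most recently created plan (adjust the path as needed)
cat "$(ls -td plans/*/ | head -1)"*.md
```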
### 2. Implementation Phase
```bash
/code
```
**What happens**:
- Reads the latest plan
- **Fullstack Developer Agent** implements backend/frontend
- **UI/UX Designer Agent** creates interface designs
- **Database Admin Agent** handles database schema changes
- Skills auto-activate based on tech stack detection
**Output**: Working feature implementation with:
- Code following best practices
- Database migrations
- API endpoints
- Frontend components
- Configuration files
### 3. Testing Phase
```bash
/fix:test
```
**What happens**:
- **Tester Agent** writes comprehensive tests
- Runs unit tests, integration tests, E2E tests
- **Debugger Agent** investigates any failing tests
- **Code Reviewer Agent** reviews test coverage
**Output**: Complete test suite with:
- Unit tests (90%+ coverage)
- Integration tests
- E2E tests for critical paths
- Performance benchmarks
### 4. Code Review Phase
```bash
/code-review "review authentication implementation"
```
**What happens**:
- **Code Reviewer Agent** performs security and performance analysis
- **Debugger Agent** checks for potential bugs
- **Researcher Agent** validates architectural decisions
- Generates review report with recommendations
### 5. Deployment Preparation
```bash
/fix:ci "prepare for production deployment"
```
**What happens**:
- **Debugger Agent** fixes any CI/CD issues
- **DevOps Agent** prepares deployment configurations
- **Tester Agent** validates deployment pipeline
- **Docs Manager Agent** updates deployment documentation
### 6. Commit & Deploy
```bash
/git:cm
/git:pr "feature/user-authentication"
```
**What happens**:
- **Git Manager Agent** stages and commits with conventional format
- Creates professional commit message
- Opens pull request with detailed description
- Handles merge conflicts if any
## Real Example
Let's walk through adding authentication to a Next.js app:
### Step 1: Plan
```bash
/plan "add user authentication with Better Auth including OAuth providers"
```
**Generated Plan**:
- Phase 1: Database setup (users table, sessions)
- Phase 2: Backend API (auth endpoints, middleware)
- Phase 3: Frontend components (login, signup, protected routes)
- Phase 4: OAuth integration (Google, GitHub)
- Phase 5: Testing and security review
- Phase 6: Documentation and deployment
### Step 2: Implement
```bash
/code
```
**Implementation Details**:
- Detects Next.js project β activates Next.js skill
- Detects Better Auth requirement β activates Better Auth skill
- Creates API routes: `/api/auth/signin`, `/api/auth/signout`
- Implements middleware for protected routes
- Builds login/signup forms with proper validation
- Sets up session management
### Step 3: Test
```bash
/fix:test
```
**Test Coverage**:
- Unit tests for auth functions
- Integration tests for API endpoints
- E2E tests for login/logout flow
- Security tests for OAuth flow
- Performance tests for session handling
### Step 4: Review & Deploy
```bash
/code-review "authentication implementation"
/fix:ci
/git:cm
/git:pr "feature/user-authentication"
```
## Best Practices
### Before Starting
1. **Clear Requirements**: Have specific acceptance criteria
2. **Technical Stack**: Ensure required skills are available
3. **Dependencies**: Identify external services needed
### During Implementation
1. **Iterate**: Use `/fix:hard` for complex issues
2. **Validate**: Test early and often with `/fix:test`
3. **Document**: Use `/docs:update` to keep docs current
### Before Deployment
1. **Security Review**: Always run `/code-review`
2. **Performance**: Test with `/debug "performance issues"`
3. **Documentation**: Ensure docs are updated
## Troubleshooting
### Common Issues
- **Plan Too Broad**: Break into smaller features
- **Missing Skills**: Create custom skills with `/skill:create`
- **Test Failures**: Use `/debug` to investigate root causes
- **CI Failures**: Use `/fix:ci` for pipeline issues
### Getting Help
- [Troubleshooting Guide](/docs/support/troubleshooting)
- [FAQ](/docs/support/faq)
- [Discord Community](https://claudekit.cc/discord)
## Next Steps
After mastering feature development:
- Learn [Bug Fixing Workflow](/docs/workflows/bug-fixing)
- Explore [Documentation Workflow](/docs/workflows/documentation)
- Study [Commands Reference](/docs/commands)
## Related Workflows
- [Bug Fixing](/docs/workflows/bug-fixing) - For when things go wrong
- [Code Review](/docs/workflows/code-review) - Maintaining code quality
- [Documentation](/docs/workflows/documentation) - Keeping docs current
---
# Bug Fixing Workflow
Section: workflows
Category: N/A
URL: https://docs.claudekit.cc/docs/workflows/bug-fixing
# Bug Fixing Workflow
Systematic approach to debugging and fixing issues using ClaudeKit's specialized debugging agents.
## Overview
The bug fixing workflow combines investigation, root cause analysis, fixing, and validation into a comprehensive debugging process.
## When to Use This Workflow
- Production bugs affecting users
- Test failures in CI/CD
- Performance issues
- Security vulnerabilities
- Unexpected behavior
## Step-by-Step Guide
### 1. Initial Investigation
```bash
/debug "login button not working in production"
```
**What happens**:
- **Debugger Agent** investigates the issue
- **Researcher Agent** analyzes similar problems
- Collects logs, error messages, stack traces
- Identifies affected components and dependencies
**Output**: Investigation report with:
- Issue description and reproduction steps
- Error logs and stack traces
- Affected components list
- Initial hypothesis of root cause
### 2. Deep Analysis
```bash
/debug "analyze authentication flow for session issues"
```
**What happens**:
- **Debugger Agent** performs deep code analysis
- **Database Admin Agent** checks for data issues
- **Security Agent** checks for vulnerabilities
- Traces execution flow and data flow
**Output**: Detailed analysis with:
- Root cause identification
- Code locations needing fixes
- Data consistency issues
- Security implications
### 3. Fix Implementation
```bash
/fix:hard "session timeout causing login issues"
```
**What happens**:
- **Debugger Agent** implements the fix
- **Code Reviewer Agent** reviews fix quality
- **Security Agent** validates security implications
- **Database Admin Agent** handles data corrections if needed
**Types of Fixes**:
- Code logic corrections
- Configuration updates
- Database migrations
- Security patches
- Performance optimizations
### 4. Comprehensive Testing
```bash
/fix:test
```
**What happens**:
- **Tester Agent** writes regression tests
- **Debugger Agent** verifies fix effectiveness
- **Performance Agent** checks for regressions
- Runs comprehensive test suite
**Test Coverage**:
- Unit tests for fixed code
- Integration tests for affected flows
- Regression tests for related functionality
- Performance benchmarks
- Security tests
### 5. Validation & Deployment
```bash
/fix:ci "validate fix doesn't break production"
/git:cm
```
**What happens**:
- **Debugger Agent** validates fix in staging
- **CI/CD Agent** ensures pipeline passes
- **Git Manager Agent** commits with proper format
- Monitors for any post-deployment issues
## Real Example
Let's debug a production authentication issue:
### Step 1: Investigate
```bash
/debug "users getting logged out randomly in production"
```
**Investigation Results**:
- Issue: Session tokens expiring prematurely
- Frequency: Affects 15% of users
- Impact: Users lose work session
- Pattern: Happens after 5 minutes of inactivity
### Step 2: Analyze
```bash
/debug "session configuration and cookie settings"
```
**Root Cause Analysis**:
- Session timeout set to 5 minutes instead of 24 hours
- Cookie configuration missing secure flags
- Redis session store connection issues
- Load balancer stripping session headers
### Step 3: Fix
```bash
/fix:hard "session timeout and cookie configuration"
```
**Fixes Applied**:
```javascript
// Updated session configuration
const sessionConfig = {
maxAge: 24 * 60 * 60 * 1000, // 24 hours
secure: true,
httpOnly: true,
sameSite: 'strict'
}
```
### Step 4: Test
```bash
/fix:test
```
**Tests Created**:
- Session duration tests
- Cookie security tests
- Load balancer session persistence tests
- Redis connection failover tests
### Step 5: Deploy
```bash
/fix:ci
/git:cm
```
## Specialized Debugging Commands
### CI/CD Issues
```bash
/fix:ci "failing GitHub Actions for test suite"
```
- Fixes pipeline configuration
- Resolves dependency conflicts
- Updates test commands
### Type Errors
```bash
/fix:types "TypeScript errors in user service"
```
- Fixes type definitions
- Resolves interface mismatches
- Updates type annotations
### Performance Issues
```bash
/debug "slow API response times in user endpoints"
/fix "implement database query optimization"
```
### Security Issues
```bash
/debug "potential XSS vulnerability in comment system"
/fix:hard "sanitize user input and implement CSP"
```
## Debugging Best Practices
### Before Starting
1. **Gather Context**: Collect error logs, user reports, reproduction steps
2. **Narrow Scope**: Be specific about the issue in your command
3. **Check Environment**: Verify if it's dev, staging, or production
### During Investigation
1. **Document Everything**: Keep track of findings and hypotheses
2. **Test Hypotheses**: Validate each potential cause
3. **Consider Dependencies**: Check external services and integrations
### When Implementing Fixes
1. **Minimum Changes**: Fix only what's necessary
2. **Add Tests**: Prevent regression with comprehensive tests
3. **Review Security**: Ensure fixes don't introduce vulnerabilities
### After Deployment
1. **Monitor**: Watch for related issues
2. **Document**: Update knowledge base
3. **Communicate**: Inform stakeholders about resolution
## Troubleshooting Debugging
### Common Debugging Challenges
- **Cannot Reproduce**: Use `/debug "analyze logs for pattern"`
- **Intermittent Issues**: Use `/debug "setup monitoring and logging"`
- **Performance Issues**: Use `/debug "profile application performance"`
- **Memory Leaks**: Use `/debug "analyze memory usage patterns"`
### Advanced Debugging
```bash
/debug "comprehensive system health check"
# Analyzes entire application stack
/debug "security vulnerability assessment"
# Performs security audit
/fix:hard "complete system optimization"
# Implements multiple performance improvements
```
## Getting Help
- [Troubleshooting Guide](/docs/support/troubleshooting)
- [Common Issues](/docs/support/troubleshooting#common-issues)
- [Discord Community](https://claudekit.cc/discord)
## Related Workflows
- [Feature Development](/docs/workflows/feature-development) - Building new features
- [Code Review](/docs/workflows/code-review) - Maintaining quality
- [Performance Optimization](/docs/workflows/performance-optimization) - Speed improvements
---
# Documentation Workflow
Section: workflows
Category: N/A
URL: https://docs.claudekit.cc/docs/workflows/documentation
# Documentation Workflow
Keep documentation synchronized with code changes using ClaudeKit's automated documentation management.
## Overview
The documentation workflow ensures your project documentation stays current, accurate, and useful as your codebase evolves.
## When to Use This Workflow
- Starting a new project
- After completing features
- Before releases
- When onboarding new team members
- Regular documentation maintenance
## Step-by-Step Guide
### 1. Initialize Documentation
```bash
/docs:init
```
**What happens**:
- **Docs Manager Agent** creates documentation structure
- **Researcher Agent** analyzes existing codebase
- **Planner Agent** creates documentation plan
- Sets up documentation tooling
**Output Created**:
- `docs/` directory structure
- README.md with project overview
- API documentation template
- Contributing guidelines
- Code of conduct
- Architecture documentation
### 2. Initial Documentation Generation
```bash
/docs:init "comprehensive project documentation"
```
**What happens**:
- **Docs Manager Agent** generates initial docs
- **Researcher Agent** extracts information from code
- **UI/UX Designer Agent** creates diagrams and visuals
- **Copywriter Agent** writes clear explanations
**Documentation Generated**:
- Project overview and goals
- Architecture diagrams
- API documentation from code annotations
- Setup and installation guides
- Development workflow documentation
### 3. Update Documentation After Changes
```bash
/docs:update "after implementing user authentication"
```
**What happens**:
- **Docs Manager Agent** identifies what needs updating
- **Researcher Agent** analyzes code changes
- **Code Reviewer Agent** ensures technical accuracy
- **Copywriter Agent** improves clarity and readability
**Updates Applied**:
- API documentation for new endpoints
- New feature documentation
- Updated architecture diagrams
- Modified setup instructions
- New examples and tutorials
### 4. Comprehensive Documentation Review
```bash
/docs:summarize "complete documentation review"
```
**What happens**:
- **Docs Manager Agent** performs comprehensive review
- **Copywriter Agent** improves writing quality
- **UI/UX Designer Agent** creates missing diagrams
- **Tester Agent** validates code examples
**Review Coverage**:
- Consistency check across all docs
- Accuracy verification of code examples
- Completeness assessment
- Readability and clarity improvements
- Visual content enhancement
## Real Example
Let's document a new authentication feature:
### Step 1: Initial Setup
```bash
/docs:init
```
**Created Structure**:
```
docs/
├── README.md
├── getting-started/
│   ├── installation.md
│   └── quick-start.md
├── api/
│   ├── authentication.md
│   ├── users.md
│   └── errors.md
├── architecture/
│   ├── overview.md
│   └── database-schema.md
└── development/
    ├── contributing.md
    └── testing.md
```
### Step 2: Update After Feature
```bash
/docs:update "after implementing OAuth authentication"
```
**Updates Made**:
- Added OAuth authentication guide
- Updated API documentation with new endpoints
- Modified quick-start to include auth setup
- Created authentication architecture diagram
- Added troubleshooting section
### Step 3: Review and Improve
```bash
/docs:summarize "authentication documentation quality"
```
**Improvements Applied**:
- Simplified complex explanations
- Added more code examples
- Created step-by-step tutorials
- Improved navigation structure
- Added FAQ section
## Documentation Types
### API Documentation
```bash
/docs:init "generate API documentation from code"
```
- REST API endpoints
- GraphQL schemas
- WebSocket events
- Request/response examples
- Error codes and handling
### User Guides
```bash
/docs:update "create user guide for new feature"
```
- Step-by-step tutorials
- Common use cases
- Best practices
- Troubleshooting guides
- FAQ sections
### Developer Documentation
```bash
/docs:update "development setup and workflows"
```
- Local development setup
- Code contribution guidelines
- Testing procedures
- Release process
- Architecture decisions
### Architecture Documentation
```bash
/docs:summarize "system architecture and design"
```
- High-level architecture
- Data flow diagrams
- Service interactions
- Database schemas
- Security considerations
## Advanced Documentation Workflows
### Multi-language Documentation
```bash
/docs:update "create Japanese translation of user guide"
```
- Automatic translation with human review
- Cultural adaptation
- Region-specific examples
- Localized screenshots
### Interactive Documentation
```bash
/docs:init "create interactive API documentation"
```
- Try-it-now API consoles
- Interactive tutorials
- Live code examples
- Embedded demos
### Video Documentation
```bash
/docs:update "create video tutorials for complex features"
```
- Screen recording workflows
- Animated explanations
- Step-by-step video guides
- Visual concept explanations
## Best Practices
### Writing Quality Documentation
1. **Clear Purpose**: Each document has a clear goal and audience
2. **Consistent Style**: Follow established style guides
3. **Practical Examples**: Include real-world code examples
4. **Regular Updates**: Keep docs current with code changes
5. **User Feedback**: Incorporate user suggestions and corrections
### Documentation Structure
1. **Logical Organization**: Group related content together
2. **Clear Navigation**: Easy to find relevant information
3. **Progressive Disclosure**: Start simple, add complexity gradually
4. **Cross-References**: Link between related documents
5. **Search Optimization**: Use clear headings and keywords
### Code Examples in Documentation
1. **Working Examples**: All code examples should be tested
2. **Complete Context**: Show necessary imports and setup
3. **Error Handling**: Include proper error handling patterns
4. **Multiple Languages**: Provide examples in different languages when applicable
5. **Version Specific**: Clearly indicate version compatibility
## Integration with Development Workflow
### Pre-commit Documentation Check
```bash
# Add to pre-commit hook
/docs:summarize "check documentation for changes"
```
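One way to wire this in is a plain Git hook that warns when code changes without doc changes. A minimal sketch (the `src/` and `docs/` paths are assumptions; adjust them to your project layout):
```bash
#!/usr/bin/env bash
# .git/hooks/pre-commit  (make executable: chmod +x .git/hooks/pre-commit)
# Warn when staged changes touch src/ but not docs/
changed=$(git diff --cached --name-only)
if echo "$changed" | grep -q '^src/' && ! echo "$changed" | grep -q '^docs/'; then
  echo "Reminder: run /docs:summarize \"check documentation for changes\" before committing."
fi
exit 0
```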
### Automated Documentation Updates
```bash
# Run after feature completion
/cook
/docs:update "document new feature"
/git:cm
```
### Release Documentation
```bash
/docs:update "prepare release notes and documentation"
/docs:summarize "final review before release"
```
## Measuring Documentation Quality
### Metrics to Track
- Documentation coverage percentage
- User feedback scores
- Time to find information
- Number of support tickets reduced
- Developer onboarding time
### Quality Assessment
```bash
/docs:summarize "comprehensive documentation quality assessment"
```
- Completeness analysis
- Accuracy verification
- Readability scoring
- User experience evaluation
- Maintenance requirements
## Tools and Integration
### Documentation Platforms
- GitBook for user-facing docs
- Notion for internal knowledge base
- GitHub Pages for static sites
- Confluence for team documentation
- Docusaurus for React-based docs
### Automation
- GitHub Actions for doc builds
- Automated API docs generation
- Link checking workflows
- Spell checking and grammar
- Screenshot automation
## Getting Help
- [Writing Style Guide](/docs/writing-style-guide)
- [Templates and Examples](/docs/templates)
- [Community Forum](https://forum.claudekit.com)
- [Documentation Best Practices](/docs/best-practices)
## Related Workflows
- [Feature Development](/docs/workflows/feature-development) - Building features that need documentation
- [Code Review](/docs/workflows/code-review) - Reviewing documentation alongside code
- [Release Management](/docs/workflows/release-management) - Preparing release notes
---
# Starting a New Project
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/starting-new-project
# Starting a New Project
Learn how to bootstrap a new project with ClaudeKit, from initial requirements to deployed application. This guide walks through a real-world example of building a task management API.
## Overview
**What we'll build:** REST API for task management
**Time required:** 45-60 minutes
**Tech stack:** Node.js, Express, PostgreSQL, TypeScript
**End result:** Tested, documented, deployable API
## Prerequisites
- Node.js 18+ installed
- ClaudeKit CLI installed (`npm install -g claudekit-cli`)
- GitHub account
- Basic understanding of REST APIs
## Step 1: Initialize Project
### Option A: Using /bootstrap (Recommended)
Start Claude Code and use the bootstrap command:
```bash
# Create project directory
mkdir task-api
cd task-api
# Start Claude Code
claude
# Bootstrap the project
/bootstrap [build a REST API for task management with user authentication]
```
**Interactive Q&A:**
```
🤖 ClaudeKit: I'll help you bootstrap a REST API for task management. Let me ask a few questions:
Q1: What's your primary tech stack preference?
1. Node.js + Express
2. Node.js + Fastify
3. Python + FastAPI
4. Go + Gin
> 1
Q2: Which database would you like to use?
1. PostgreSQL
2. MySQL
3. MongoDB
4. SQLite (development only)
> 1
Q3: What ORM/Query Builder?
1. Prisma (recommended)
2. TypeORM
3. Sequelize
4. Raw SQL
> 1
Q4: Authentication method?
1. JWT
2. OAuth2 (Google, GitHub)
3. Session-based
4. Both JWT + OAuth2
> 1
Q5: Additional features?
[x] Input validation (Joi/Zod)
[x] Rate limiting
[x] API documentation (Swagger)
[x] Docker support
[ ] Redis caching
[ ] WebSocket support
> Select 1,2,3,4 (Enter)
Q6: Testing framework?
1. Jest
2. Vitest
3. Mocha + Chai
> 1
Perfect! I'll now:
1. Research best practices for Node.js + Express + PostgreSQL APIs
2. Create implementation plan
3. Generate project structure
4. Implement core features
5. Generate comprehensive tests
6. Create documentation
Proceed? (y/n) > y
```
**System works for 5-8 minutes:**
```
[1/6] Researching best practices...
✓ Node.js + Express patterns
✓ PostgreSQL + Prisma setup
✓ JWT authentication best practices
✓ API security considerations
[2/6] Creating implementation plan...
✓ Plan saved: plans/bootstrap-task-api.md
[3/6] Generating project structure...
✓ src/routes/
✓ src/controllers/
✓ src/services/
✓ src/middleware/
✓ src/models/ (Prisma schemas)
✓ tests/
✓ docs/
[4/6] Implementing features...
✓ User authentication (register, login, logout)
✓ Task CRUD operations
✓ Input validation with Joi
✓ Rate limiting middleware
✓ Error handling
✓ Database setup with Prisma
[5/6] Generating tests...
✓ Unit tests: 45 tests
✓ Integration tests: 23 tests
✓ E2E tests: 12 tests
✓ Coverage: 87%
[6/6] Creating documentation...
✓ API documentation (Swagger)
✓ README.md
✓ Setup guide
✓ Architecture docs
✅ Bootstrap complete!
Project structure:
task-api/
├── src/
│   ├── routes/
│   │   ├── auth.routes.ts
│   │   └── task.routes.ts
│   ├── controllers/
│   │   ├── auth.controller.ts
│   │   └── task.controller.ts
│   ├── services/
│   │   ├── auth.service.ts
│   │   └── task.service.ts
│   ├── middleware/
│   │   ├── auth.middleware.ts
│   │   ├── validate.middleware.ts
│   │   └── rate-limit.middleware.ts
│   ├── models/ (Prisma)
│   ├── utils/
│   └── server.ts
├── tests/
├── docs/
├── prisma/
│   └── schema.prisma
├── .env.example
├── Dockerfile
├── docker-compose.yml
├── package.json
└── README.md
Next steps:
1. cp .env.example .env (add your DB credentials)
2. npm install
3. npx prisma migrate dev
4. npm run dev
```
### Option B: Using ClaudeKit CLI
```bash
# Interactive setup
ck init
? Project name: task-api
? Template: Node.js API
? Database: PostgreSQL
? Authentication: JWT
✓ Project created
✓ ClaudeKit configured
✓ Dependencies installed
cd task-api
claude
```
Then use `/docs:init` and `/plan` to start development.
## Step 2: Review Generated Code
### Examine Project Structure
```bash
# Check what was created
ls -la
# Review main files
cat src/server.ts
cat prisma/schema.prisma
cat README.md
```
### Review API Endpoints
Generated endpoints:
```
POST /api/auth/register
POST /api/auth/login
POST /api/auth/logout
GET /api/auth/me
GET /api/tasks
POST /api/tasks
GET /api/tasks/:id
PUT /api/tasks/:id
DELETE /api/tasks/:id
```
### Review Database Schema
```prisma
// prisma/schema.prisma
model User {
id String @id @default(uuid())
email String @unique
password String
name String?
tasks Task[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Task {
id String @id @default(uuid())
title String
description String?
completed Boolean @default(false)
userId String
user User @relation(fields: [userId], references: [id])
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
```
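If you want a feel for how the generated service layer consumes this schema, here is a minimal sketch using the Prisma client. The file path and function names are illustrative, not necessarily what `/bootstrap` produces:
```typescript
// Illustrative sketch only -- the generated task service may differ
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Create a task owned by a user; fields map directly to the Task model above
export async function createTask(userId: string, title: string, description?: string) {
  return prisma.task.create({
    data: { title, description, userId },
  });
}

// List a user's tasks, newest first
export async function listTasks(userId: string) {
  return prisma.task.findMany({
    where: { userId },
    orderBy: { createdAt: "desc" },
  });
}
```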
## Step 3: Setup Environment
### Configure Environment Variables
```bash
# Copy example env file
cp .env.example .env
# Edit with your values
nano .env
```
```env
# .env
NODE_ENV=development
PORT=3000
# Database
DATABASE_URL="postgresql://user:password@localhost:5432/taskapi?schema=public"
# JWT
JWT_SECRET="your-super-secret-jwt-key-change-this"
JWT_EXPIRES_IN="7d"
# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100
```
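As a rough sketch of how the JWT values above are typically consumed (the helper names here are assumptions, not the exact generated code):
```typescript
// Hypothetical auth helper -- shows how JWT_SECRET / JWT_EXPIRES_IN from .env are used
import jwt from "jsonwebtoken";

const JWT_SECRET = process.env.JWT_SECRET as string;

export function signAccessToken(userId: string): string {
  // "7d" mirrors JWT_EXPIRES_IN in the .env example above
  return jwt.sign({ sub: userId }, JWT_SECRET, { expiresIn: "7d" });
}

export function verifyAccessToken(token: string): jwt.JwtPayload {
  // Throws if the token is invalid or expired
  return jwt.verify(token, JWT_SECRET) as jwt.JwtPayload;
}
```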
### Start Database
```bash
# Using Docker
docker-compose up -d postgres
# Or install PostgreSQL locally
# brew install postgresql (macOS)
# sudo apt install postgresql (Linux)
```
### Run Migrations
```bash
# Generate Prisma client
npx prisma generate
# Run migrations
npx prisma migrate dev --name init
# Seed database (if seed file exists)
npx prisma db seed
```
## Step 4: Run Tests
### Run Test Suite
```bash
# Run all tests
npm test
```
**Expected output:**
```
PASS tests/unit/auth.service.test.ts
PASS tests/unit/task.service.test.ts
PASS tests/integration/auth.routes.test.ts
PASS tests/integration/task.routes.test.ts
PASS tests/e2e/complete-flow.test.ts
Test Suites: 5 passed, 5 total
Tests: 80 passed, 80 total
Snapshots: 0 total
Time: 8.234 s
Coverage: 87.3%
✓ All tests passed
```
### If Tests Fail
```bash
# Use ClaudeKit to fix
/fix:test
```
## Step 5: Start Development Server
```bash
# Start in development mode
npm run dev
```
**Output:**
```
🚀 Server running on http://localhost:3000
📚 Swagger docs: http://localhost:3000/api-docs
🗄️ Database connected
✅ Ready to accept requests
```
### Test API Manually
```bash
# Register a user
curl -X POST http://localhost:3000/api/auth/register \
-H "Content-Type: application/json" \
-d '{
"email": "user@example.com",
"password": "SecurePass123!",
"name": "John Doe"
}'
# Login
curl -X POST http://localhost:3000/api/auth/login \
-H "Content-Type: application/json" \
-d '{
"email": "user@example.com",
"password": "SecurePass123!"
}'
# Save the JWT token from response
# Create a task
curl -X POST http://localhost:3000/api/tasks \
-H "Authorization: Bearer YOUR_JWT_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"title": "Complete project documentation",
"description": "Write comprehensive README"
}'
# Get all tasks
curl -X GET http://localhost:3000/api/tasks \
-H "Authorization: Bearer YOUR_JWT_TOKEN"
```
## Step 6: Add Custom Features
### Add New Feature
```bash
# Plan the feature first
/plan [add task categories and tags]
```
**Review the plan:**
```bash
cat plans/add-task-categories-YYYYMMDD.md
```
**Implement:**
```bash
/cook [implement task categories and tags]
```
**What happens:**
```
1. Updates Prisma schema
✓ Added Category model
✓ Added Tag model
✓ Updated Task model with relations
2. Generates migration
✓ Created migration file
3. Updates API endpoints
✓ GET /api/categories
✓ POST /api/categories
✓ PUT /api/tasks/:id/tags
4. Generates tests
✓ 15 new tests added
5. Updates documentation
✓ API docs updated
✓ Schema documented
Run: npx prisma migrate dev
```
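For reference, replacing a task's tags via the new `PUT /api/tasks/:id/tags` endpoint might boil down to a Prisma nested write like the sketch below. The `Tag` relation shown is an assumption about what the generated schema looks like:
```typescript
// Hypothetical handler body for PUT /api/tasks/:id/tags, assuming a many-to-many Task <-> Tag relation
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function setTaskTags(taskId: string, tagIds: string[]) {
  return prisma.task.update({
    where: { id: taskId },
    data: {
      // `set` replaces the existing tag links with exactly these tags
      tags: { set: tagIds.map((id) => ({ id })) },
    },
  });
}
```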
### Run Migration
```bash
npx prisma migrate dev --name add-categories-tags
```
### Test New Feature
```bash
npm test
```
## Step 7: Documentation
### Review Generated Docs
```bash
# API documentation
open http://localhost:3000/api-docs
# Project docs
cat docs/api/README.md
cat docs/architecture.md
```
### Update Documentation
```bash
/docs:update
```
## Step 8: Commit Changes
```bash
# Review changes
git status
git diff
# Commit with ClaudeKit
/git:cm
```
**Generated commit:**
```
feat: initialize task management API
- Set up Express + TypeScript server
- Implement user authentication with JWT
- Create task CRUD operations
- Add Prisma ORM with PostgreSQL
- Include rate limiting and validation
- Generate comprehensive test suite (87% coverage)
- Add Swagger API documentation
- Configure Docker for deployment
π€ Generated with ClaudeKit
```
## Step 9: Set Up CI/CD
### Generate GitHub Actions Workflow
If not already created:
```bash
/cook [add GitHub Actions CI workflow]
```
**Generated workflow** (`.github/workflows/ci.yml`):
```yaml
name: CI
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:14
env:
POSTGRES_PASSWORD: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run Prisma generate
run: npx prisma generate
- name: Run tests
run: npm test
env:
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test
- name: Build
run: npm run build
```
### Push and Verify CI
```bash
/git:cp
# Watch CI run
gh run watch
```
## Step 10: Deploy
### Option A: Deploy to Heroku
```bash
# Create Heroku app
heroku create task-api-prod
# Add PostgreSQL
heroku addons:create heroku-postgresql:mini
# Set environment variables
heroku config:set NODE_ENV=production
heroku config:set JWT_SECRET=$(openssl rand -base64 32)
# Deploy
git push heroku main
# Run migrations
heroku run npx prisma migrate deploy
```
### Option B: Deploy with Docker
```bash
# Build image
docker build -t task-api:latest .
# Run locally
docker-compose up
# Deploy to cloud (AWS, GCP, etc.)
# Follow cloud provider's container deployment guide
```
### Verify Deployment
```bash
# Test production API
curl https://your-app.herokuapp.com/health
# Check logs
heroku logs --tail
```
## Complete Project Timeline
| Phase | Time | Activities |
|-------|------|------------|
| Bootstrap | 5-8 min | Requirements, research, generation |
| Setup | 5 min | Environment, database, dependencies |
| Testing | 2 min | Run test suite, verify |
| Development | 10-15 min | Review code, test manually |
| Custom Features | 10-15 min | Add categories, tags |
| Documentation | 5 min | Review, update docs |
| CI/CD Setup | 5 min | GitHub Actions |
| Deployment | 10-15 min | Deploy to production |
| **Total** | **52-70 min** | **Complete project ready** |
## What You've Built
A production-ready REST API with:
✅ User authentication (JWT)
✅ Task CRUD operations
✅ Categories and tags
✅ Input validation
✅ Rate limiting
✅ PostgreSQL database
✅ 87%+ test coverage
✅ API documentation
✅ Docker support
✅ CI/CD pipeline
✅ Production deployment
## Next Steps
### Add More Features
```bash
/plan [add task reminders and notifications]
/code @plans/reminders.md
/test
/git:cm
```
### Improve Testing
```bash
/cook [add E2E tests for complete user flows]
```
### Add Frontend
```bash
cd ..
/bootstrap [build a React frontend for the task API]
```
### Monitor Production
```bash
# Add logging
/cook [implement structured logging with Winston]
# Add monitoring
/cook [add health check endpoints]
```
## Troubleshooting
### Database Connection Issues
```bash
# Check DATABASE_URL
echo $DATABASE_URL
# Test connection
npx prisma db pull
```
### Port Already in Use
```bash
# Change PORT in .env
PORT=3001
```
### Migration Errors
```bash
# Reset database (development only!)
npx prisma migrate reset
# Or fix manually
/fix:hard [describe migration error]
```
## Key Takeaways
1. **Use `/bootstrap`** for new projects - saves hours of setup
2. **Review generated code** - understand before modifying
3. **Test immediately** - catch issues early
4. **Document as you go** - `/docs:update` regularly
5. **Use feature branches** - safer development
6. **Deploy early** - find production issues sooner
---
**Congratulations!** You've built a complete, production-ready REST API with ClaudeKit in under an hour. This same approach works for any type of project.
---
# Maintaining an Old Project
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/maintaining-old-project
# Maintaining an Old Project
Learn how to integrate ClaudeKit into an existing project, understand legacy code, and systematically improve it. This guide walks through a real-world scenario of reviving a 2-year-old project.
## Scenario
**The Challenge:** Inherited a 2-year-old Node.js project
- No documentation
- Test coverage: 12%
- Last commit: 18 months ago
- Multiple deprecated dependencies
- No one remembers how it works
- Production deployment broken
**Goal:** Understand, document, fix, and improve the project
**Time:** 2-3 hours for initial setup, ongoing improvements
## Step 1: Initial Assessment
### Clone and Explore
```bash
# Clone the project
git clone https://github.com/company/legacy-project.git
cd legacy-project
# Check what's here
ls -la
cat package.json
cat README.md # Usually outdated or missing
```
### Start Claude Code
```bash
claude
```
### Generate Documentation
This is the **most important first step**:
```bash
/docs:init
```
**What happens (5-8 minutes):**
```
Analyzing codebase...
[████████████████████████] 100%
Files scanned: 1,247
Lines of code: 45,829
Dependencies: 73 packages
Generating documentation...
✓ docs/codebase-summary.md
✓ docs/code-standards.md
✓ docs/system-architecture.md
✓ docs/project-overview-pdr.md
✓ docs/technical-debt.md (NEW!)
✓ docs/migration-recommendations.md (NEW!)
Documentation complete!
Key Findings:
✗ 15 deprecated dependencies
✗ No tests for 88% of code
✗ 3 security vulnerabilities
✗ Inconsistent code style
✓ Well-structured architecture
✓ Good separation of concerns
```
### Review Generated Documentation
```bash
# Read the overview
cat docs/codebase-summary.md
```
**Example output:**
```markdown
# Codebase Summary
## Project Type
E-commerce backend API
## Tech Stack
- **Backend**: Node.js 14 (outdated), Express 4
- **Database**: MongoDB 4.2 (needs upgrade)
- **Cache**: Redis 5
- **Queue**: Bull (job processing)
## Architecture
- Layered architecture (routes → controllers → services → models)
- Microservices attempted but incomplete
- Background jobs for email, payments
## Key Features
1. User authentication (Passport.js + JWT)
2. Product catalog with search
3. Shopping cart and checkout
4. Payment processing (Stripe)
5. Order management
6. Email notifications
## Critical Issues
1. **Security**: Using outdated crypto methods
2. **Performance**: N+1 queries in product listings
3. **Reliability**: No error handling in payment flow
4. **Maintainability**: Mixed ES5 and ES6 code
## Dependencies Status
- 15 packages have major updates available
- 3 packages have security vulnerabilities
- bcrypt, jsonwebtoken, mongoose need updates
## Test Coverage: 12%
- Unit tests: 8 files
- Integration tests: 2 files
- E2E tests: 0 files
```
## Step 2: Understand the System
### Ask Questions
Now that ClaudeKit has context, ask specific questions:
```bash
/ask [how does the payment processing work?]
```
**Response:**
```
Payment Processing Flow:
1. User submits order (POST /api/orders)
File: src/controllers/order.controller.js:45
2. Order validated
File: src/services/order.service.js:120
3. Stripe payment intent created
File: src/services/payment.service.js:78
4. Webhook handles payment confirmation
File: src/routes/webhooks/stripe.js:23
5. Order status updated to 'paid'
Background job sends confirmation email
Critical Issues Found:
✗ No retry logic if Stripe fails
✗ Webhook not verified (security risk)
✗ Race condition in order updates
✗ Email failures block order completion
Recommendation: Use /fix:hard to address these issues
```
More questions:
```bash
/ask [what's the authentication flow?]
/ask [how are background jobs processed?]
/ask [what's causing the slow product search?]
```
## Step 3: Update Dependencies
### Check for Updates
```bash
npm outdated
```
### Create Update Plan
```bash
/plan [update all dependencies to latest stable versions]
```
**Generated plan includes:**
- Breaking changes for each package
- Migration steps
- Test strategy
- Rollback plan
### Implement Updates
```bash
/code [update dependencies following the plan]
```
**What happens:**
```
1. Updating package.json
✓ Node.js: 14 → 18 (update .nvmrc)
✓ Express: 4.17 → 4.18
✓ Mongoose: 5.13 → 7.6
✓ Jest: 26 → 29
... (updating all 73 packages)
2. Handling breaking changes
✓ Updated Mongoose queries (removed deprecated methods)
✓ Fixed Jest config for v29
✓ Updated bcrypt calls
✓ Modernized async/await patterns
3. Security fixes
✓ Updated jsonwebtoken (CVE-2022-23529)
✓ Updated minimist (CVE-2021-44906)
✓ Updated node-fetch (CVE-2022-0235)
4. Running tests
✗ 23 tests failing due to breaking changes
5. Fixing test failures
✓ Updated test mocks for Mongoose 7
✓ Fixed async test patterns
✓ Updated snapshot tests
6. Final test run
✓ All 45 tests passing
✓ 0 vulnerabilities
✓ All dependencies up to date
Ready to commit!
```
### Verify Updates
```bash
# Run tests
npm test
# Check for vulnerabilities
npm audit
# Try running the app
npm run dev
```
## Step 4: Add Missing Tests
### Generate Test Suite
```bash
/cook [generate comprehensive test suite for critical paths]
```
**Priority areas:**
1. Authentication
2. Payment processing
3. Order management
4. Product search
**What happens:**
```
Analyzing critical paths...
Identified 8 critical paths:
1. User registration & login
2. Password reset flow
3. Product search & filtering
4. Add to cart
5. Checkout process
6. Payment processing
7. Order status tracking
8. Admin operations
Generating tests...
Created 127 new tests:
✓ Unit tests: 73 files
- Authentication: 15 tests
- Payment: 22 tests
- Orders: 18 tests
- Products: 18 tests
✓ Integration tests: 35 files
- API endpoints: 35 tests
✓ E2E tests: 19 files
- Complete user flows: 19 tests
Running test suite...
✓ 172/172 tests passed
✓ Coverage: 78% (up from 12%)
Tests generated in tests/ directory
```
## Step 5: Fix Critical Issues
### Security Vulnerabilities
```bash
/fix:hard [Stripe webhook not verified - security risk]
```
**Fix applied:**
```
1. Added webhook signature verification
File: src/routes/webhooks/stripe.js
2. Added replay attack prevention
Using Stripe's timestamp validation
3. Added proper error handling
Logs and alerts for verification failures
4. Generated tests
✓ Valid signature accepted
✓ Invalid signature rejected
✓ Old timestamp rejected
✓ Replay attack prevented
Security issue resolved!
```
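The core of that fix is Stripe's signature verification. A minimal sketch, assuming an Express app and the standard `stripe` SDK (route path and env variable names are assumptions):
```typescript
import express from "express";
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string);
const router = express.Router();

// Stripe verifies the signature against the raw body, so skip JSON parsing on this route
router.post("/webhooks/stripe", express.raw({ type: "application/json" }), (req, res) => {
  const signature = req.headers["stripe-signature"] as string;
  try {
    // Throws if the signature is invalid or the timestamp is too old (replay protection)
    const event = stripe.webhooks.constructEvent(
      req.body,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET as string
    );
    // ...handle event.type (e.g. payment_intent.succeeded) here
    res.json({ received: true });
  } catch {
    res.status(400).send("Webhook signature verification failed");
  }
});

export default router;
```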
### Performance Issues
```bash
/fix:hard [N+1 queries in product listing causing slow response times]
```
**Fix applied:**
```
1. Located issue
File: src/services/product.service.js:156
2. Root cause
- Fetching products in loop
- Separate query for each product's category
- No database indexing
3. Implemented fixes
✓ Added Mongoose populate() for categories
✓ Created database indexes
✓ Implemented query result caching (Redis)
✓ Added pagination
4. Performance improvement
Before: 3.2s for 50 products
After: 180ms for 50 products
Improvement: 94% faster
5. Added performance tests
✓ Tests response time < 300ms
✓ Tests pagination works
✓ Tests cache invalidation
```
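The shape of that fix, sketched with Mongoose (model and path names are assumptions): instead of querying categories once per product inside a loop, `populate()` resolves them in a single additional query, and `skip`/`limit` add pagination.
```typescript
import { Product } from "../models/product"; // hypothetical Mongoose model

export async function listProducts(page = 1, pageSize = 50) {
  return Product.find()
    .populate("category")          // resolve categories in one extra query instead of one per product
    .sort({ createdAt: -1 })
    .skip((page - 1) * pageSize)   // pagination
    .limit(pageSize)
    .lean();                       // plain objects are cheaper to serialize (and to cache in Redis)
}
```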
### Error Handling
```bash
/fix:hard [payment flow has no error handling, orders get stuck]
```
**Fix applied:**
```
1. Added comprehensive error handling
✓ Try-catch blocks in payment service
✓ Transaction rollback on failure
✓ Retry logic with exponential backoff
✓ Dead letter queue for failed jobs
2. Added monitoring
✓ Error logging to Sentry
✓ Metrics for failed payments
✓ Alerts for stuck orders
3. Added recovery mechanisms
✓ Manual retry endpoint for admins
✓ Automatic cleanup of stale orders
✓ Customer notification on failure
4. Generated tests
✓ Payment success path
✓ Payment failure handling
✓ Retry logic
✓ Timeout handling
```
## Step 6: Improve Code Quality
### Modernize Code
```bash
/cook [refactor mixed ES5/ES6 code to modern ES6+ with async/await]
```
**Improvements:**
- Convert callbacks to async/await
- Use const/let instead of var
- Use template literals
- Use destructuring
- Use arrow functions
- Remove unnecessary code
**Example transformation:**
```javascript
// Before (ES5 style)
var getUserOrders = function(userId, callback) {
Order.find({userId: userId}, function(err, orders) {
if (err) {
return callback(err);
}
var orderIds = [];
for (var i = 0; i < orders.length; i++) {
orderIds.push(orders[i]._id);
}
callback(null, orderIds);
});
};
// After (Modern ES6+)
const getUserOrders = async (userId) => {
const orders = await Order.find({ userId });
return orders.map(order => order._id);
};
```
### Add TypeScript (Optional)
```bash
/plan [migrate project to TypeScript]
```
Review the plan, then:
```bash
/code [implement TypeScript migration following the plan]
```
### Improve Documentation
```bash
/docs:update
```
Adds:
- Inline code comments
- JSDoc annotations
- API documentation
- Usage examples
## Step 7: Fix Deployment
### Diagnose Deployment Issues
```bash
/ask [why is the production deployment broken?]
```
**Analysis:**
```
Deployment Issues Found:
1. Build script fails
- Missing TypeScript compiler
- Outdated build command
2. Environment variables
- DATABASE_URL format changed in Mongoose 7
- Missing STRIPE_WEBHOOK_SECRET
3. Database migrations
- No migration system in place
- Schema changes not versioned
4. Health check endpoint missing
- Load balancer can't verify app health
Recommendation: Fix these issues before deployment
```
### Fix Deployment Issues
```bash
/fix:hard [fix production deployment issues]
```
**Fixes applied:**
```
1. Updated build process
✓ Fixed package.json scripts
✓ Added TypeScript to dependencies
✓ Optimized build output
2. Added migration system
✓ Created migrate-mongo setup
✓ Generated migration for schema changes
✓ Added migration step to deployment
3. Updated environment config
✓ Fixed DATABASE_URL format
✓ Added missing env variables
✓ Created .env.example
4. Added health check
✓ GET /health endpoint
✓ Checks database connection
✓ Checks Redis connection
✓ Returns service status
5. Updated deployment docs
✓ docs/deployment-guide.md
Ready to deploy!
```
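The health check described above can be as small as the sketch below (the Redis client import is an assumption about how this project wires things up):
```typescript
import { Router } from "express";
import mongoose from "mongoose";
import { redisClient } from "../lib/redis"; // hypothetical shared Redis client

const router = Router();

router.get("/health", async (_req, res) => {
  const checks = {
    database: mongoose.connection.readyState === 1, // 1 = connected
    redis: false,
  };
  try {
    checks.redis = (await redisClient.ping()) === "PONG";
  } catch {
    checks.redis = false;
  }
  const healthy = checks.database && checks.redis;
  res.status(healthy ? 200 : 503).json({ status: healthy ? "ok" : "degraded", checks });
});

export default router;
```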
### Deploy
```bash
# Test build locally
npm run build
npm run start:prod
# Deploy to staging
git push staging main
# Verify staging
curl https://staging.example.com/health
# Deploy to production
git push production main
# Monitor
/fix:logs # Watch production logs
```
## Step 8: Set Up Maintenance
### Add CI/CD
```bash
/cook [create GitHub Actions workflow for CI/CD]
```
**Generated workflow:**
- Run tests on PR
- Check code quality
- Deploy to staging on merge to develop
- Deploy to production on merge to main
### Add Monitoring
```bash
/cook [add monitoring and alerting]
```
**Added:**
- Error tracking (Sentry)
- Performance monitoring (New Relic)
- Uptime monitoring (UptimeRobot)
- Log aggregation (Datadog)
### Create Runbook
```bash
/docs:update
```
Creates `docs/runbook.md` with:
- Common issues and solutions
- Deployment procedures
- Rollback procedures
- Emergency contacts
## Progress Tracking
| Phase | Time | Before | After |
|-------|------|--------|-------|
| Documentation | 10 min | None | Complete |
| Dependencies | 30 min | 15 outdated | All updated |
| Tests | 45 min | 12% coverage | 78% coverage |
| Security | 20 min | 3 vulnerabilities | 0 vulnerabilities |
| Performance | 30 min | 3.2s response | 180ms response |
| Code Quality | 40 min | Mixed ES5/ES6 | Modern ES6+ |
| Deployment | 25 min | Broken | Working |
| **Total** | **3h 20min** | **Unmaintainable** | **Production-ready** |
## Key Improvements
✅ **Documentation**: From 0% to 100%
✅ **Tests**: From 12% to 78% coverage
✅ **Dependencies**: All updated, 0 vulnerabilities
✅ **Performance**: 94% faster
✅ **Code Quality**: Modern, consistent code
✅ **Deployment**: Fixed and automated
✅ **Monitoring**: Full observability
✅ **Security**: All issues resolved
## Ongoing Maintenance
### Weekly Tasks
```bash
# Check for updates
npm outdated
# Run security audit
npm audit
# Review test coverage
npm run test:coverage
# Update docs if needed
/docs:update
```
### Monthly Tasks
```bash
# Review technical debt
cat docs/technical-debt.md
# Plan improvements
/plan [next month's improvements]
# Update dependencies
/cook [update dependencies]
```
### When Adding Features
```bash
# 1. Plan feature
/plan [new feature description]
# 2. Implement
/code @plans/feature.md
# 3. Test
/test
# 4. Update docs
/docs:update
# 5. Commit
/git:cm
# 6. Deploy
git push
```
## Common Challenges
### "I don't understand the code"
```bash
/ask [explain how X works]
/ask [what does this function do?]
/ask [why is this pattern used here?]
```
### "Too many issues to fix"
Prioritize:
1. Security issues (/fix:hard)
2. Production blockers (/fix:hard)
3. Performance problems (/fix:hard)
4. Test coverage (/cook [add tests])
5. Code quality (/cook [refactor])
6. Documentation (/docs:update)
### "Breaking changes in dependencies"
```bash
/plan [migrate from package X v1 to v2]
# Review plan carefully
/code @plans/migration.md
/test # Comprehensive testing
```
## Next Steps
### Improve Further
```bash
# Add feature flags
/cook [implement feature flag system]
# Add A/B testing
/cook [add A/B testing framework]
# Improve observability
/cook [add distributed tracing]
```
### Train Team
1. Document everything (`/docs:update`)
2. Create onboarding guide
3. Share architecture docs
4. Set up development environment guide
## Key Takeaways
1. **Start with `/docs:init`** - Critical for understanding legacy code
2. **Fix security first** - Protect users and business
3. **Add tests gradually** - Focus on critical paths
4. **Update incrementally** - Don't break everything at once
5. **Document as you go** - Future you will thank you
6. **Automate maintenance** - CI/CD and monitoring
---
**Success!** You've transformed an unmaintainable legacy project into a modern, well-tested, documented codebase that's easy to maintain and extend.
---
# Gemini, wait what?
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/gemini
# Gemini API: Why do you need it?
---
## TLDR
Anthropic has focused on training Claude models for coding capabilities, so they haven't invested much in vision abilities. This makes Claude models' image analysis quality subpar, affecting software development workflows.
**Note: Gemini API will incur costs. If you feel you don't need it, you can completely skip this step!**
---
## Installation
1. Get your Gemini key at [Google AI Studio](https://aistudio.google.com/api-keys)
2. Find the `.env.example` file:
- If you installed ClaudeKit at project scope: copy `.claude/.env.example` to `.claude/.env`
- If you installed ClaudeKit at global scope: copy `~/.claude/.env.example` to `~/.claude/.env` (for Windows users: `%USERPROFILE%\.claude\.env`)
3. Open the `.env` file and fill in the value for `GEMINI_API_KEY`
That's it!
---
The following section is adapted from my recent research on Claude's vision capabilities.
## The Problem with Claude's "Eyes"
To make debugging easier, it's essential to provide screenshots so Claude Code can visualize the problem. I use this method very often.
But recently I discovered something: Claude's vision capability is quite poor, not as good as competitor models (Gemini, ChatGPT, ...).
Look at this example, Claude Desktop completely failed compared to Gemini and ChatGPT:
[Image: Claude Desktop vs. Gemini and ChatGPT analyzing the same image]
Claude couldn't correctly identify the actions and devices in the image.
**Now let's compare directly between Claude Code and Gemini CLI!**
I'll ask both to read a blueprint image and describe in detail what they see:
[Image: Claude Code vs. Gemini CLI describing a blueprint]
Gemini CLI provides detailed descriptions of the blueprint, while Claude Code is quite superficial...
Do you see the difference?
---
## Wait, there's one more thing Claude's "eyes" currently CANNOT do: VIDEO ANALYSIS
But Gemini (web version, not CLI) can do this, which makes debugging in Vibe Coding much easier.
[Image: analyzing a screen recording with Gemini (web version)]
You can't always fully understand the situation, describe how to reproduce a bug, and figure out the direction for a fix. Recording the screen and giving it to Gemini (web version) to help guess the root causes or suggest an approach is not a bad solution at all.
The only problem is that Gemini web version doesn't have codebase context, so I have to include that information in the prompt, which is quite tedious...
So I decided to create this MCP: [**Human MCP**](https://github.com/mrgoonie/human-mcp)
The purpose is to use Gemini API to analyze images, documents (PDF, docx, xlsx,...) and videos.
In the early days of ClaudeKit, I had "Human MCP" pre-installed by default.
And you need `GEMINI_API_KEY` in the "Human MCP" env for it to work.
---
## Then, Anthropic launched Agent Skills!
Everyone knows: **MCP has its own problem - it consumes too much context**
Here's an example from Chrome Devtools MCP and Playwright MCP:
[Image: context window consumption of Chrome DevTools MCP and Playwright MCP]
And Agent Skills was created to solve that problem.
So I've converted all Human MCP tools into Agent Skills, freeing up more space in the context window for agents to work.
Therefore, `GEMINI_API_KEY` has been moved to `.claude/.env`; you just need to enter the value there.
Typically, skills will be automatically activated depending on the context the agent is handling.
But if you need to manually activate this skill, just prompt like this: `use ai-multimodal to analyze this screenshot: ...`
It's that simple.
---
# Adding a New Feature
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/adding-feature
# Adding a New Feature
Learn the complete workflow for adding new features to your project with ClaudeKit - from initial planning through production deployment with full testing and documentation.
## Overview
**Goal**: Add a complete feature with planning, implementation, tests, and docs
**Time**: 15-30 minutes (vs 2-4 hours manually)
**Agents Used**: planner, scout, tester, code-reviewer, docs-manager
**Commands**: /plan, /code, /test, /docs:update, /git:cm
## Prerequisites
- Existing project with ClaudeKit configured
- Clear feature requirements
- Development environment set up
- Git repository initialized
## Step-by-Step Workflow
### Step 1: Define Feature Requirements
Start by clearly defining what you want to build:
```bash
# Start Claude Code
claude
# Define your feature
# Example: Adding password reset functionality
```
**Good feature descriptions:**
- "Add password reset flow with email verification"
- "Implement product search with filters and pagination"
- "Create admin dashboard for user management"
**Poor descriptions:**
- "Add password stuff"
- "Make search better"
- "Admin panel"
### Step 2: Research and Plan
Use the planner agent to create a detailed implementation plan:
```bash
/plan [add password reset flow with email verification]
```
**What happens**:
```
[1/3] Spawning researcher agents...
✓ Researcher 1: Email service best practices
✓ Researcher 2: Token generation patterns
✓ Researcher 3: Security considerations
[2/3] Analyzing project structure...
✓ Located authentication module
✓ Found email service configuration
✓ Identified user model
[3/3] Creating implementation plan...
✓ Plan saved: plans/password-reset-20251030.md
Review plan at: plans/password-reset-20251030.md
```
### Step 3: Review the Plan
```bash
# Read the generated plan
cat plans/password-reset-20251030.md
```
**Plan structure**:
```markdown
# Password Reset Implementation Plan
## 1. Database Changes
- Add reset_token field to users table
- Add reset_token_expires timestamp
- Create migration script
## 2. Backend Implementation
- POST /api/auth/forgot-password endpoint
- POST /api/auth/reset-password endpoint
- Email service integration
- Token generation utility
- Token validation middleware
## 3. Security Considerations
- Token expiry (15 minutes)
- Rate limiting (5 requests/hour)
- HTTPS enforcement
- Token single-use enforcement
## 4. Testing Strategy
- Unit tests for token generation
- Integration tests for endpoints
- Email sending mocks
- Security tests for edge cases
## 5. Documentation
- API endpoint documentation
- User guide for reset flow
- Admin troubleshooting guide
```
**Review checklist**:
- ✅ All requirements covered
- ✅ Security considerations included
- ✅ Database changes documented
- ✅ Testing strategy defined
If plan needs adjustment, provide feedback:
```bash
"Please add SMS verification as an alternative to email"
```
### Step 4: Scout Existing Code (Optional)
For complex features, scout the codebase first:
```bash
/scout "Show me existing authentication code" 3
```
**Output**:
```
Found 8 relevant files:
Priority 1 (Core):
- src/auth/login.js (authentication logic)
- src/models/user.js (user model)
- src/middleware/auth.js (JWT middleware)
Priority 2 (Related):
- src/services/email.js (email service)
- src/routes/auth.routes.js (auth routes)
Priority 3 (Reference):
- tests/auth/login.test.js
- docs/api/authentication.md
- config/email.config.js
```
This helps understand existing patterns before implementing.
### Step 5: Implement the Feature
Use the cook command to implement based on the plan:
```bash
/cook [implement password reset with email verification]
```
**Implementation process**:
```
[1/6] Scouting relevant files...
✓ Located 8 files for modification
✓ Identified integration points
[2/6] Implementing database changes...
✓ Created migration: 20251030_add_reset_tokens.sql
✓ Updated user model
[3/6] Implementing endpoints...
✓ POST /api/auth/forgot-password
✓ POST /api/auth/reset-password
✓ Token validation middleware
[4/6] Implementing email service...
✓ Password reset email template
✓ Email sending logic
✓ Template variables
[5/6] Adding security measures...
✓ Rate limiting middleware
✓ Token expiry validation
✓ Single-use token enforcement
[6/6] Integration complete
✓ 5 files created
✓ 3 files modified
Files created:
- src/auth/password-reset.controller.js
- src/middleware/rate-limit.js
- src/templates/password-reset.html
- migrations/20251030_add_reset_tokens.sql
- tests/auth/password-reset.test.js
Files modified:
- src/routes/auth.routes.js
- src/models/user.js
- src/services/email.js
```
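To make the security measures above concrete, here is a minimal sketch of the token handling the plan calls for: a random token sent by email, a hash stored in the database, and a 15-minute expiry. Names are illustrative, not the exact generated code.
```typescript
import crypto from "crypto";

const RESET_TOKEN_TTL_MS = 15 * 60 * 1000; // 15-minute expiry, per the plan

export function createResetToken() {
  const token = crypto.randomBytes(32).toString("hex"); // emailed to the user
  const tokenHash = crypto.createHash("sha256").update(token).digest("hex"); // stored in reset_token
  const expiresAt = new Date(Date.now() + RESET_TOKEN_TTL_MS); // stored in reset_token_expires
  return { token, tokenHash, expiresAt };
}

export function isResetTokenValid(storedHash: string, storedExpiry: Date, candidate: string) {
  const candidateHash = crypto.createHash("sha256").update(candidate).digest("hex");
  return candidateHash === storedHash && storedExpiry.getTime() > Date.now();
}
```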
### Step 6: Run Tests
Automatically generate and run comprehensive tests:
```bash
/test
```
**Test execution**:
```
Running test suite...
✓ Unit Tests (28 tests)
  ✓ Token generation (8 tests)
  ✓ Token validation (7 tests)
  ✓ Email template rendering (5 tests)
  ✓ Rate limiting logic (8 tests)
✓ Integration Tests (15 tests)
  ✓ Forgot password endpoint (8 tests)
  ✓ Reset password endpoint (7 tests)
✓ Security Tests (10 tests)
  ✓ Token expiry enforcement (3 tests)
  ✓ Rate limiting (3 tests)
  ✓ Single-use tokens (4 tests)
Test Suites: 3 passed, 3 total
Tests: 53 passed, 53 total
Time: 4.382 s
Coverage: 89.3%
✅ All tests passed
```
**If tests fail**:
```bash
/fix:test
```
### Step 7: Code Review
Review code quality automatically:
```bash
# Code reviewer runs automatically after /cook
# Or invoke manually
/review
```
**Review results**:
```
Code Review Complete
✓ Security
  ✓ Tokens hashed before storage
  ✓ Rate limiting implemented
  ✓ Input validation present
  ✓ HTTPS enforcement configured
✓ Performance
  ✓ Database queries optimized
  ✓ Indexes on lookup fields
  ✓ Email sending async
✓ Code Quality
  ✓ Error handling comprehensive
  ✓ Logging appropriate
  ✓ Code follows project standards
✓ Recommendations
  - Consider adding SMS fallback
  - Add admin override capability
  - Implement audit logging
Overall: APPROVED ✅
```
### Step 8: Update Documentation
Sync documentation with new feature:
```bash
/docs:update
```
**Documentation updates**:
```
Updating documentation...
✓ API Documentation
- Added /api/auth/forgot-password
- Added /api/auth/reset-password
- Updated authentication flow diagram
✓ User Guides
- Created password-reset-guide.md
- Updated authentication.md
✓ Architecture Docs
- Updated system-architecture.md
- Added security-controls.md section
✓ Code Documentation
- JSDoc comments added
- README.md updated
Documentation sync complete
```
### Step 9: Manual Testing (Optional)
Test the feature manually:
```bash
# Start development server
npm run dev
# Test forgot password
curl -X POST http://localhost:3000/api/auth/forgot-password \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com"}'
# Expected response:
# {
# "message": "Password reset email sent",
# "expiresIn": "15 minutes"
# }
# Check email (development)
# Open: http://localhost:8025 (mailhog/mailcatcher)
# Test reset with token from email
curl -X POST http://localhost:3000/api/auth/reset-password \
-H "Content-Type: application/json" \
-d '{
"token": "abc123...",
"newPassword": "NewSecure123!"
}'
# Expected response:
# {
# "message": "Password successfully reset"
# }
```
### Step 10: Commit Changes
Commit with conventional commit message:
```bash
/git:cm
```
**Generated commit**:
```
feat: implement password reset with email verification
- Add forgot-password endpoint with rate limiting
- Implement reset-password endpoint with token validation
- Create email template for reset notifications
- Add database migration for reset tokens
- Implement single-use token enforcement
- Add comprehensive test suite (53 tests, 89% coverage)
- Update API documentation
Security features:
- 15-minute token expiry
- Rate limiting (5 requests/hour per email)
- Single-use token enforcement
- Bcrypt token hashing
```
### Step 11: Create Pull Request (Optional)
If working with a team:
```bash
/git:pr main feature/password-reset
```
**Generated PR**:
```markdown
## Summary
Implements password reset functionality with email verification
- User can request password reset via email
- Token-based verification (15-min expiry)
- Rate limiting to prevent abuse
- Comprehensive test coverage (89%)
## Test Plan
- [x] All automated tests pass
- [x] Manual testing completed
- [x] Security review passed
- [x] Documentation updated
## Security Considerations
- Tokens expire after 15 minutes
- Single-use tokens enforced
- Rate limiting: 5 requests/hour
- Tokens hashed in database
## Screenshots
[Email template preview]
[API response examples]
```
## Complete Example
Real-world scenario: Adding search functionality to an e-commerce site.
### Initial Command
```bash
/plan [add product search with filters, sorting, and pagination]
```
### Plan Review
**Generated plan includes**:
- Elasticsearch integration
- Search endpoint design
- Filter implementation (category, price, rating)
- Sorting options (relevance, price, newest)
- Pagination strategy
- Search analytics
- Autocomplete suggestions
### Implementation
```bash
/code [implement product search as planned]
```
**Results after 8 minutes**:
- Elasticsearch service integration
- Search indexing service
- GET /api/products/search endpoint (sketched below)
- Filter query builder
- Sort and pagination logic
- Search result ranking
- Autocomplete API
- 67 tests (91% coverage)
- Complete API documentation
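As a rough sketch of what the generated search endpoint might do under the hood (the index name, fields, and Elasticsearch client setup are assumptions):
```typescript
import { Client } from "@elastic/elasticsearch";

const es = new Client({ node: process.env.ELASTICSEARCH_URL ?? "http://localhost:9200" });

export async function searchProducts(q: string, page = 1, pageSize = 20) {
  const result = await es.search({
    index: "products",
    from: (page - 1) * pageSize, // pagination
    size: pageSize,
    query: {
      bool: {
        must: [{ multi_match: { query: q, fields: ["name", "description"] } }],
        // category / price / rating filters would be added to a `filter` clause here
      },
    },
  });
  // Results come back ranked by relevance (_score) by default
  return result.hits.hits.map((hit) => hit._source);
}
```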
### Time Comparison
**Manual implementation**: 6-8 hours
- Research: 1-2 hours
- Implementation: 3-4 hours
- Testing: 1-2 hours
- Documentation: 1 hour
**With ClaudeKit**: 25 minutes
- Planning: 5 minutes
- Implementation: 12 minutes
- Testing: 5 minutes
- Documentation: 3 minutes
**Time saved**: 5.5-7.5 hours (93% faster)
## Common Variations
### Variation 1: API Endpoint Only
```bash
# Skip planning for simple endpoints
/cook [add GET /api/users/:id/profile endpoint]
```
### Variation 2: Database-Heavy Feature
```bash
# Plan complex database changes first
/plan [implement multi-tenant architecture with tenant isolation]
/code @plans/multi-tenant.md
```
### Variation 3: UI + Backend Feature
```bash
# Split into separate features
/cook [implement backend API for notifications]
/cook [implement frontend notification panel]
```
### Variation 4: Third-Party Integration
```bash
# Research included automatically
/plan [integrate Twilio SMS notifications]
/code @plans/twilio-sms.md
```
## Troubleshooting
### Issue: Feature Too Large
**Problem**: Implementation taking too long or changing too many files
**Solution**:
```bash
# Break into smaller features
/plan [add user management - phase 1: user CRUD]
/code @plans/user-crud.md
/plan [add user management - phase 2: roles and permissions]
/code @plans/roles-permissions.md
```
### Issue: Tests Failing
**Problem**: Generated tests don't pass
**Solution**:
```bash
/fix:test
# Debugger analyzes failures and fixes
# Re-runs tests automatically
```
### Issue: Missing Edge Cases
**Problem**: Implementation doesn't cover all scenarios
**Solution**:
```bash
# Add specific requirements
/cook [add error handling for network failures in password reset]
```
### Issue: Performance Concerns
**Problem**: Feature works but too slow
**Solution**:
```bash
# Add optimization
/cook [optimize search queries with database indexes and caching]
```
### Issue: Documentation Unclear
**Problem**: Generated docs don't explain feature well
**Solution**:
```bash
# Regenerate with focus
/docs:update [focus on password reset flow with diagrams]
```
## Best Practices
### 1. Plan Complex Features
For features requiring multiple components:
```bash
# Always plan first
/plan [feature description]
# Review plan
# Then implement
/cook [feature description]
```
### 2. Small, Focused Features
Break large features into smaller pieces:
```bash
✅ Good:
/cook [add user profile picture upload]
/cook [add image thumbnail generation]

❌ Too large:
/cook [add complete media management system]
```
### 3. Test Immediately
Don't skip testing:
```bash
/cook [feature]
/test # Always run tests
/fix:test # Fix failures immediately
```
### 4. Document as You Go
Keep docs current:
```bash
/cook [feature]
/docs:update # Update docs immediately
```
### 5. Review Before Committing
Always review generated code:
```bash
# Review all changes
git status
git diff
# Understand what changed
# Only then commit
/git:cm
```
### 6. Use Feature Branches
Work safely:
```bash
# Create feature branch
git checkout -b feature/password-reset
# Implement
/cook [feature]
# Commit
/git:cm
# Create PR
/git:pr main feature/password-reset
```
## Next Steps
### Related Use Cases
- [Fixing Bugs](/docs/workflows/fixing-bugs) - Debug and fix issues
- [Refactoring Code](/docs/workflows/refactoring-code) - Improve code quality
- [Building an API](/docs/workflows/building-api) - Create REST APIs
### Related Commands
- [/plan](/docs/commands/core/plan) - Create implementation plans
- [/cook](/docs/commands/core/cook) - Implement features
- [/test](/docs/commands/core/test) - Run test suites
- [/docs:update](/docs/commands/docs/update) - Update documentation
- [/git:cm](/docs/commands/git/commit) - Commit changes
### Further Reading
- [Command Reference](/docs/commands) - All available commands
- [Agent Guide](/docs/agents) - Understanding agents
- [Workflows](/docs/docs/configuration/workflows) - Development patterns
---
**Key Takeaway**: Use ClaudeKit's systematic workflow (plan β implement β test β document β commit) to add production-ready features 10x faster than manual development.
---
# Fixing Bugs
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/fixing-bugs
# Fixing Bugs
Learn how to systematically investigate, fix, and verify bug fixes using ClaudeKit's debugging workflow - from log analysis to production deployment.
## Overview
**Goal**: Debug and fix issues systematically with root cause analysis
**Time**: 5-20 minutes (vs 1-4 hours manually)
**Agents Used**: debugger, tester, code-reviewer
**Commands**: /fix:fast, /fix:hard, /fix:logs, /fix:ui, /fix:ci, /test
## Prerequisites
- Bug reproduction steps or error logs
- Development environment set up
- Access to relevant logs (if production bug)
- Test suite in place
## Choosing the Right Command
ClaudeKit provides different debugging commands for different scenarios:
| Command | Use Case | Complexity | Time |
|---------|----------|------------|------|
| `/fix:fast` | Simple bugs, quick fixes | Low | 2-5 min |
| `/fix:hard` | Complex bugs, multi-file changes | High | 10-20 min |
| `/fix:logs` | Production issues from logs | Medium | 5-15 min |
| `/fix:ui` | Visual/layout bugs | Low-Medium | 3-10 min |
| `/fix:ci` | CI/CD pipeline failures | Medium | 5-15 min |
| `/fix:types` | TypeScript type errors | Low | 2-5 min |
## Step-by-Step Workflow
### Step 1: Reproduce the Bug
Before fixing, confirm you can reproduce the issue:
```bash
# Example: Bug report
# "Users can't login with valid credentials"
# Test manually
curl -X POST http://localhost:3000/api/auth/login \
-H "Content-Type: application/json" \
-d '{"email": "test@example.com", "password": "password123"}'
# Expected: 200 OK with token
# Actual: 401 Unauthorized
```
Document the issue:
- What happens (actual behavior)
- What should happen (expected behavior)
- Steps to reproduce
- Error messages or logs
### Step 2: Choose Debugging Approach
#### Option A: Quick Fix (/fix:fast)
For simple, isolated bugs:
```bash
/fix:fast [users getting 401 error on login with valid credentials]
```
**What happens**:
```
[1/4] Analyzing issue...
✓ Located login controller
✓ Found authentication middleware
✓ Identified issue: password comparison using == instead of bcrypt.compare
[2/4] Implementing fix...
✓ Updated src/auth/login.controller.js
✓ Fixed password validation logic
[3/4] Running tests...
✓ All auth tests pass (24/24)
✓ Coverage maintained: 87%
[4/4] Verification...
✓ Manual test: Login successful
✓ Token generation working
✓ No regressions detected
✅ Bug fixed
Files modified:
- src/auth/login.controller.js
Change summary:
- Line 47: Replace == with bcrypt.compare()
- Added async/await for password validation
```
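For reference, the change described above amounts to swapping a plain equality check for bcrypt's hash comparison. A before/after sketch (controller details are illustrative):
```typescript
import bcrypt from "bcrypt";

// Before: comparing the plaintext password to the stored bcrypt hash always fails
// if (user.password == password) { ... }

// After: let bcrypt hash the candidate and compare it against the stored hash
export async function passwordMatches(plaintext: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(plaintext, storedHash);
}
```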
#### Option B: Complex Fix (/fix:hard)
For bugs requiring investigation and multiple changes:
```bash
/fix:hard [memory leak in WebSocket connections causing server crashes]
```
**What happens**:
```
[1/6] Investigation phase...
✓ Analyzing WebSocket implementation
✓ Checking connection cleanup
✓ Reviewing event listeners
✓ Inspecting memory patterns
Root cause identified:
- Event listeners not removed on disconnect
- Connection objects not garbage collected
- Socket references held in memory
[2/6] Creating fix plan...
✓ Plan saved: plans/fix-websocket-leak-20251030.md
[3/6] Implementing fixes...
✓ Add cleanup on disconnect
✓ Remove event listeners properly
✓ Implement connection pooling
✓ Add memory monitoring
[4/6] Running tests...
✓ Connection cleanup tests (12 new tests)
✓ Memory leak tests (5 new tests)
✓ All existing tests pass
[5/6] Performance validation...
✓ Memory usage stable over 1000 connections
✓ No leaks detected in 10-minute stress test
[6/6] Documentation updated...
✓ Added WebSocket lifecycle docs
✓ Updated architecture diagram
✅ Complex bug fixed
Files modified:
- src/websocket/server.js
- src/websocket/connection-manager.js
- src/utils/cleanup.js
Tests added:
- tests/websocket/cleanup.test.js
- tests/websocket/memory.test.js
```
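The cleanup-on-disconnect pattern at the heart of that fix looks roughly like the sketch below, assuming the `ws` library (the actual project files may differ):
```typescript
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const connections = new Set<WebSocket>();

wss.on("connection", (socket) => {
  connections.add(socket);

  const onMessage = (_data: unknown) => {
    // ...handle incoming message
  };
  socket.on("message", onMessage);

  socket.on("close", () => {
    // Remove listeners and drop the reference so the socket can be garbage collected
    socket.off("message", onMessage);
    connections.delete(socket);
  });
});
```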
#### Option C: Production Log Analysis (/fix:logs)
For bugs discovered in production:
```bash
# Copy production logs
# logs/production.log
/fix:logs [analyze production error logs and fix the issue]
```
**What happens**:
```
[1/5] Analyzing logs...
✓ Read logs/production.log
✓ Parsed 1,247 error entries
✓ Grouped by error type
Error distribution:
- TypeError: Cannot read property 'id' (847 occurrences)
- Database connection timeout (312 occurrences)
- Rate limit exceeded (88 occurrences)
[2/5] Root cause analysis...
✓ Primary issue: User object undefined in middleware
✓ Occurs when JWT token expired but not caught
✓ Affects: src/middleware/auth.js:34
[3/5] Implementing fix...
✓ Add token expiry validation
✓ Improve error handling
✓ Add logging for debugging
[4/5] Testing fix...
✓ Expired token test added
✓ Error handling test added
✓ All tests pass
[5/5] Prevention measures...
✓ Added monitoring alert
✓ Updated error response
✓ Documented in troubleshooting guide
✅ Production issue resolved
Next steps:
1. Deploy fix to staging
2. Monitor for 24 hours
3. Deploy to production
```
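The middleware fix described above boils down to catching the expired-token case explicitly instead of letting the user object stay undefined. A minimal sketch, assuming `jsonwebtoken` and Express (names are assumptions):
```typescript
import jwt from "jsonwebtoken";
import type { Request, Response, NextFunction } from "express";

export function requireAuth(req: Request, res: Response, next: NextFunction) {
  const token = req.headers.authorization?.replace("Bearer ", "");
  if (!token) return res.status(401).json({ error: "Missing token" });

  try {
    const payload = jwt.verify(token, process.env.JWT_SECRET as string) as jwt.JwtPayload;
    (req as Request & { user?: { id?: string } }).user = { id: payload.sub };
    return next();
  } catch (err) {
    if (err instanceof jwt.TokenExpiredError) {
      // Previously this case fell through and left the request without a user
      return res.status(401).json({ error: "Token expired, please log in again" });
    }
    return res.status(401).json({ error: "Invalid token" });
  }
}
```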
#### Option D: UI Bug Fix (/fix:ui)
For visual or layout issues:
```bash
# Provide screenshot or description
/fix:ui [button misaligned on mobile devices]
```
**What happens**:
```
[1/4] Analyzing UI issue...
✓ Located button component
✓ Checked responsive styles
✓ Issue: Missing media query for mobile
[2/4] Implementing fix...
✓ Added mobile breakpoint styles
✓ Fixed button alignment
✓ Improved touch target size
[3/4] Testing across devices...
✓ Desktop: ✅
✓ Tablet: ✅
✓ Mobile: ✅
[4/4] Visual regression check...
✓ No other layout changes
✓ All components render correctly
✅ UI bug fixed
Files modified:
- src/components/Button.css
```
#### Option E: CI/CD Fix (/fix:ci)
For build or deployment failures:
```bash
# Provide GitHub Actions URL
/fix:ci [https://github.com/user/repo/actions/runs/12345]
```
**What happens**:
```
[1/5] Fetching CI logs...
✓ Downloaded workflow logs
✓ Parsed 847 lines
[2/5] Analyzing failure...
Error found at line 234:
"Error: Cannot find module '@babel/preset-react'"
Root cause: Missing dependency in package.json
[3/5] Implementing fix...
✓ Added @babel/preset-react to devDependencies
✓ Updated .babelrc configuration
[4/5] Local verification...
✓ npm install successful
✓ Build passes locally
✓ Tests pass
[5/5] CI validation...
✓ Pushing fix to trigger CI
✓ Monitoring workflow...
✓ CI build successful ✅
✅ CI fixed
Files modified:
- package.json
- .babelrc
```
### Step 3: Verify the Fix
Always verify fixes thoroughly:
```bash
# Run test suite
/test
# Manual testing
npm run dev
# Test the specific bug scenario
# Check for regressions
npm run test:integration
```
### Step 4: Document the Fix
Update documentation with the fix:
```bash
# Update docs
/docs:update [document the login bug fix and prevention measures]
```
**Generated documentation**:
```markdown
## Bug Fix: Login Authentication Issue
### Issue
Users unable to login with valid credentials due to incorrect password comparison.
### Root Cause
Password validation using == operator instead of bcrypt.compare()
### Fix Applied
Updated authentication logic to use proper async bcrypt comparison.
### Prevention
- Added test for bcrypt password validation
- Updated code review checklist
- Documented password handling best practices
```
### Step 5: Commit the Fix
```bash
/git:cm
```
**Generated commit**:
```
fix: resolve login authentication with valid credentials
- Fix password comparison to use bcrypt.compare instead of ==
- Add async/await for proper password validation
- Add test cases for password validation logic
- Update authentication documentation
Fixes: #123
```
## Complete Example: E-Commerce Cart Bug
### Bug Report
```
Title: Shopping cart duplicating items on page refresh
Severity: High
Impact: Users seeing incorrect cart totals
Steps to reproduce:
1. Add item to cart
2. Refresh page
3. Item appears twice in cart
4. Price incorrectly calculated
Environment: Production
Browser: All browsers
```
### Investigation
```bash
/fix:hard [shopping cart duplicating items on page refresh]
```
### Analysis Results
```
Root Cause Analysis:
1. Cart stored in localStorage
2. Also stored in database
3. On page load, both sources loaded and merged
4. Merge logic not checking for duplicates
Contributing factors:
- Missing unique key check
- No deduplication logic
- Improper state synchronization
```
### Implementation
```
Fixes applied:
1. src/store/cart.js
- Add deduplication in loadCart()
- Check by product ID before adding
- Prioritize database as source of truth
2. src/utils/cart-sync.js
- Implement proper sync strategy
- Add conflict resolution
- Clear localStorage after DB sync
3. tests/cart/sync.test.js
- Add duplicate detection tests
- Test localStorage/DB sync
- Test refresh scenarios
4. docs/architecture/state-management.md
- Document cart sync strategy
- Add troubleshooting guide
```
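The deduplication described above can be sketched as a pure function that merges the two sources keyed by product ID, with the database winning conflicts (types and field names are assumptions):
```typescript
interface CartItem {
  productId: string;
  quantity: number;
}

export function mergeCarts(dbItems: CartItem[], localItems: CartItem[]): CartItem[] {
  const byProduct = new Map<string, CartItem>();

  // Database is the source of truth: load it first...
  for (const item of dbItems) byProduct.set(item.productId, item);

  // ...then only add local items the database doesn't already know about
  for (const item of localItems) {
    if (!byProduct.has(item.productId)) byProduct.set(item.productId, item);
  }

  return [...byProduct.values()];
}
```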
### Verification
```bash
# Automated tests
β 15 new tests added
β All tests pass (187/187)
β Coverage: 89%
# Manual testing
β Add item to cart
β Refresh page
β Single item in cart β
β Correct price β
β Quantity correct β
# Regression testing
β Cart updates on quantity change
β Cart persists on navigation
β Cart syncs across tabs
```
### Time Comparison
**Manual debugging**: 3-4 hours
- Reproduce: 15 minutes
- Investigation: 60-90 minutes
- Implementation: 45-60 minutes
- Testing: 30-45 minutes
- Documentation: 30 minutes
**With ClaudeKit**: 18 minutes
- Reproduce: 5 minutes
- /fix:hard: 12 minutes
- Verification: 1 minute
**Time saved**: 3+ hours (90% faster)
## Common Variations
### Variation 1: Type Error Fix
```bash
/fix:types
# Automatically fixes TypeScript errors
# Updates type definitions
# Runs type checker
```
### Variation 2: Performance Bug
```bash
/fix:hard [API endpoint taking 8+ seconds to respond]
# Analyzes performance
# Identifies bottlenecks
# Implements optimization
# Validates improvement
```
### Variation 3: Security Bug
```bash
/fix:fast [SQL injection vulnerability in search endpoint]
# Identifies vulnerability
# Implements parameterized queries
# Adds input validation
# Runs security tests
```
### Variation 4: Integration Bug
```bash
/fix:logs [Stripe webhook failing with 400 errors]
# Analyzes webhook logs
# Identifies signature mismatch
# Fixes verification logic
# Tests webhook handling
```
## Troubleshooting
### Issue: Can't Reproduce Bug
**Problem**: Bug doesn't occur in development
**Solution**:
```bash
# Use production logs
/fix:logs [analyze production logs to identify the issue]
# Or try production-like environment
docker-compose -f docker-compose.prod.yml up
```
### Issue: Fix Breaks Other Features
**Problem**: Fix causes regressions
**Solution**:
```bash
# Run comprehensive tests
/test
# If tests fail
/fix:test
# Review all changes
git diff
# Consider alternative approach
/fix:hard [fix the login bug without changing the middleware]
```
### Issue: Root Cause Unclear
**Problem**: Can't identify why bug occurs
**Solution**:
```bash
# Use hard fix for investigation
/fix:hard [detailed description of symptoms]
# Provides thorough analysis
# Creates investigation plan
# Identifies root cause
```
### Issue: Intermittent Bug
**Problem**: Bug only happens sometimes
**Solution**:
```bash
# Add logging first
/cook [add detailed logging around the problematic area]
# Reproduce multiple times
# Collect logs
/fix:logs [analyze collected logs]
```
## Best Practices
### 1. Always Reproduce First
Before fixing:
```bash
✅ Reproduce the bug locally
✅ Document exact steps
✅ Capture error messages
✅ Note environment details
❌ Don't fix without reproducing
```
### 2. Add Tests for Bugs
Prevent regressions:
```bash
# After fixing
/test # Includes new regression test
# Or add specific test
/cook [add test case for the login bug to prevent regression]
```
### 3. Check for Related Issues
Fix similar bugs:
```bash
# Search codebase
/scout "similar pattern to the bug" 3
# Fix all instances
/fix:fast [fix all instances of the password comparison bug]
```
### 4. Document in Changelog
Track bug fixes:
```bash
# Commit with fix: prefix
/git:cm
# Automatically added to CHANGELOG.md
# Links to issue number
```
### 5. Monitor After Deployment
Verify fix in production:
```bash
# After deploying fix
# Monitor logs for 24-48 hours
# Check error rates
# Verify user reports stopped
```
### 6. Root Cause Analysis
Understand why bug occurred:
```bash
# Use /fix:hard for analysis
# Documents root cause
# Suggests prevention measures
# Updates development guidelines
```
## Prevention Strategies
After fixing bugs, improve processes:
### 1. Add Validation
```bash
/cook [add input validation to prevent similar issues]
```
### 2. Improve Error Handling
```bash
/cook [enhance error handling with better logging and user messages]
```
### 3. Add Monitoring
```bash
/cook [add monitoring alerts for this type of error]
```
### 4. Update Code Standards
```bash
/docs:update [add this bug pattern to code review checklist]
```
## Next Steps
### Related Use Cases
- [Adding a New Feature](/docs/workflows/adding-feature) - Implement features
- [Optimizing Performance](/docs/workflows/optimizing-performance) - Speed improvements
- [Refactoring Code](/docs/workflows/refactoring-code) - Code quality
### Related Commands
- [/fix:fast](/docs/commands/fix/fast) - Quick bug fixes
- [/fix:hard](/docs/commands/fix/hard) - Complex debugging
- [/fix:logs](/docs/commands/fix/logs) - Log analysis
- [/fix:ui](/docs/commands/fix/ui) - UI bug fixes
- [/fix:ci](/docs/commands/fix/ci) - CI/CD fixes
- [/test](/docs/commands/core/test) - Test suite
### Further Reading
- [Debugger Agent](/docs/agents/debugger) - Debugging capabilities
- [Tester Agent](/docs/agents/tester) - Testing features
- [Troubleshooting](/docs/troubleshooting) - Common issues
---
**Key Takeaway**: ClaudeKit's debugging workflow provides systematic bug resolution with root cause analysis, automated testing, and prevention measures - turning hours of debugging into minutes.
---
# Refactoring Code
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/refactoring-code
# Refactoring Code
Learn how to safely refactor code with ClaudeKit - from identifying technical debt to implementing improvements with comprehensive testing and validation.
## Overview
**Goal**: Improve code quality and maintainability without breaking functionality
**Time**: 15-45 minutes (vs 3-8 hours manually)
**Agents Used**: code-reviewer, tester, docs-manager
**Commands**: /plan, /code, /test, /docs:update
## Prerequisites
- Existing codebase with areas needing improvement
- Test suite in place (or willing to add tests)
- Clear understanding of current functionality
- Version control for safe rollback
## Refactoring Scenarios
### When to Refactor
| Scenario | Priority | Risk Level |
|----------|----------|------------|
| Duplicate code (DRY violations) | High | Low |
| Large functions (100+ lines) | Medium | Medium |
| Complex conditionals (nested if/else) | Medium | Medium |
| Poor naming conventions | Low | Low |
| Tight coupling | High | High |
| Missing error handling | High | Medium |
| Performance bottlenecks | Medium | Medium |
| Security vulnerabilities | Critical | Low |
### Refactoring Types
1. **Code Organization** - Restructure files and modules
2. **Code Quality** - Improve readability and maintainability
3. **Performance** - Optimize slow code
4. **Security** - Fix vulnerabilities
5. **Architecture** - Improve design patterns
6. **Dependencies** - Update or remove libraries
## Step-by-Step Workflow
### Step 1: Identify Refactoring Needs
Use code review to identify issues:
```bash
# Automated code review
/review
```
**Review output**:
```
Code Review Report
❌ Issues Found:
1. Duplicate Code (15 instances)
- src/users/create.js and src/users/update.js
- 67% code duplication in validation logic
- Recommendation: Extract to shared validator
2. Large Functions (8 instances)
- src/orders/process.js:processOrder() - 234 lines
- Recommendation: Break into smaller functions
3. Complex Conditionals (12 instances)
- src/auth/authorize.js - Nested 5 levels deep
- Recommendation: Simplify with early returns
4. Magic Numbers (23 instances)
- Hardcoded values without constants
- Recommendation: Extract to configuration
5. Missing Error Handling (18 instances)
- Unhandled promise rejections
- Recommendation: Add try/catch blocks
Technical Debt Score: 6.8/10 (High)
Maintainability Index: 42/100 (Needs Improvement)
```
### Step 2: Prioritize Refactoring Tasks
Create a refactoring plan:
```bash
/plan [refactor user validation logic to eliminate duplication]
```
**Generated plan**:
```markdown
# Refactoring Plan: User Validation
## Current State
- Validation duplicated in create.js and update.js
- 156 lines of duplicate code
- Inconsistent error messages
- Hard to maintain
## Proposed Solution
### 1. Extract Shared Validator
- Create src/validators/user.validator.js
- Extract common validation rules
- Implement reusable validation functions
### 2. Consolidate Error Handling
- Standardize error messages
- Create error code constants
- Improve error response format
### 3. Update Controllers
- Import shared validator
- Remove duplicate code
- Add validation tests
## Testing Strategy
- Unit tests for validator
- Integration tests for endpoints
- Ensure no behavior changes
## Risk Assessment
- Risk Level: Low
- Breaking Changes: None
- Rollback Plan: Git revert
## Success Criteria
- 0% code duplication
- All tests pass
- Same functionality maintained
```
### Step 3: Add Tests (If Missing)
Before refactoring, ensure good test coverage:
```bash
# Check current coverage
npm run test:coverage
# Add tests if coverage low
/cook [add comprehensive tests for user validation before refactoring]
```
**Why test first?**
- Ensures refactoring doesn't break functionality
- Validates behavior is preserved
- Provides safety net for changes
- Documents expected behavior
### Step 4: Implement Refactoring
Execute the refactoring:
```bash
/cook [refactor user validation logic to eliminate duplication]
```
**Refactoring process**:
```
[1/6] Analyzing current code...
✓ Identified 156 lines of duplicate code
✓ Mapped dependencies
✓ Checked test coverage: 78%
[2/6] Creating shared validator...
✓ Created src/validators/user.validator.js
✓ Extracted validation functions:
- validateEmail()
- validatePassword()
- validateName()
- validateAge()
[3/6] Updating controllers...
✓ Updated src/users/create.js
✓ Updated src/users/update.js
✓ Removed duplicate code (156 lines)
✓ Added imports for shared validator
[4/6] Running tests...
✓ All existing tests pass (142/142)
✓ New validator tests added (24 tests)
✓ Coverage increased: 78% → 85%
[5/6] Code review...
✓ No duplication detected
✓ Consistent error handling
✓ Improved maintainability
[6/6] Documentation updated...
✓ Updated architecture docs
✓ Added validator documentation
✅ Refactoring complete
Code Metrics:
- Lines removed: 156
- Lines added: 89
- Net reduction: 67 lines
- Duplication: 67% → 0%
- Maintainability: +18 points
Files modified:
- src/validators/user.validator.js (new)
- src/users/create.js (refactored)
- src/users/update.js (refactored)
- tests/validators/user.test.js (new)
```
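For reference, the extracted validator might look roughly like this - a minimal sketch of the shared-module pattern, not the exact code ClaudeKit generates:
```javascript
// src/validators/user.validator.js (illustrative sketch)
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateEmail(email) {
  if (!email || !EMAIL_RE.test(email)) {
    return { field: 'email', code: 'INVALID_EMAIL', message: 'Invalid email format' };
  }
  return null;
}

function validatePassword(password) {
  if (!password || password.length < 8) {
    return { field: 'password', code: 'WEAK_PASSWORD', message: 'Password must be at least 8 characters' };
  }
  return null;
}

// Shared by create.js and update.js so the rules live in one place.
function validateUser(data) {
  const errors = [validateEmail(data.email), validatePassword(data.password)].filter(Boolean);
  return { valid: errors.length === 0, errors };
}

module.exports = { validateEmail, validatePassword, validateUser };
```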
### Step 5: Verify Functionality
Thoroughly test refactored code:
```bash
# Run full test suite
/test
# Manual testing
npm run dev
# Test all affected endpoints
curl -X POST http://localhost:3000/api/users \
-H "Content-Type: application/json" \
-d '{"email": "test@example.com", "password": "pass123"}'
curl -X PUT http://localhost:3000/api/users/123 \
-H "Content-Type: application/json" \
-d '{"name": "John Doe"}'
```
### Step 6: Review and Document
```bash
# Update documentation
/docs:update [document the validator refactoring]
# Review changes
git diff
# Check metrics improvement
/review
```
### Step 7: Commit Refactoring
```bash
/git:cm
```
**Generated commit**:
```
refactor: extract shared user validation logic
- Create reusable user.validator.js module
- Remove 156 lines of duplicate validation code
- Standardize error messages across create/update
- Add comprehensive validator tests (24 tests)
- Increase test coverage from 78% to 85%
Code metrics:
- Duplication: 67% → 0%
- Maintainability: +18 points
- Lines of code: -67 (net reduction)
No functional changes - all tests pass
```
## Complete Examples
### Example 1: Simplify Complex Function
**Before** (234 lines):
```javascript
// src/orders/process.js
async function processOrder(orderId) {
const order = await getOrder(orderId);
if (!order) {
throw new Error('Order not found');
}
if (order.status === 'cancelled') {
throw new Error('Order cancelled');
}
if (order.items.length === 0) {
throw new Error('No items');
}
// ... 220+ more lines of mixed concerns
// - Inventory checks
// - Payment processing
// - Email notifications
// - Shipping calculations
// - Tax calculations
// - Database updates
}
```
**Refactoring command**:
```bash
/cook [refactor processOrder function to follow single responsibility principle]
```
**After** (clean architecture):
```javascript
// src/orders/process.js
async function processOrder(orderId) {
const order = await validateOrder(orderId);
await checkInventory(order);
const payment = await processPayment(order);
await updateInventory(order);
await sendConfirmationEmail(order, payment);
await createShipment(order);
return order;
}
// Each function extracted to separate file
// src/orders/validate.js - Order validation
// src/inventory/check.js - Inventory checks
// src/payments/process.js - Payment logic
// src/emails/confirmation.js - Email sending
// src/shipping/create.js - Shipment creation
```
**Results**:
- Main function: 234 lines → 9 lines
- 5 new focused modules created
- Each function < 50 lines
- Test coverage: 65% → 92%
- Maintainability: +34 points
### Example 2: Remove Tight Coupling
**Before** (tightly coupled):
```javascript
// src/users/service.js
class UserService {
async createUser(data) {
// Direct database access
const db = require('../database');
const user = await db.users.create(data);
// Direct email service
const emailer = require('../email');
await emailer.send(user.email, 'Welcome!');
// Direct logger
const logger = require('../logger');
logger.info('User created', user.id);
return user;
}
}
```
**Refactoring command**:
```bash
/cook [refactor UserService to use dependency injection]
```
**After** (loosely coupled):
```javascript
// src/users/service.js
class UserService {
constructor(database, emailService, logger) {
this.db = database;
this.email = emailService;
this.logger = logger;
}
async createUser(data) {
const user = await this.db.users.create(data);
await this.email.sendWelcome(user.email);
this.logger.info('User created', user.id);
return user;
}
}
// Easily mockable for testing
// Can swap implementations
// Clear dependencies
```
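One benefit of the refactored constructor is how easily the service can be wired up and mocked. A small sketch (module paths and mock shapes are illustrative):
```javascript
// Composition root (illustrative): wire the real implementations once...
const db = require('../database');
const emailService = require('../email');
const logger = require('../logger');
const UserService = require('./service');

const userService = new UserService(db, emailService, logger);

// ...and in tests, pass lightweight mocks instead of the real dependencies.
const mockedService = new UserService(
  { users: { create: async (data) => ({ id: 'user-1', ...data }) } },
  { sendWelcome: async () => {} },
  { info: () => {} }
);
```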
### Example 3: Improve Error Handling
**Before** (poor error handling):
```javascript
async function fetchUserData(userId) {
const response = await fetch(`/api/users/${userId}`);
const data = await response.json();
return data.user;
}
```
**Refactoring command**:
```bash
/cook [add comprehensive error handling to fetchUserData]
```
**After** (robust error handling):
```javascript
async function fetchUserData(userId) {
try {
const response = await fetch(`/api/users/${userId}`);
if (!response.ok) {
throw new ApiError(
`Failed to fetch user: ${response.status}`,
response.status
);
}
const data = await response.json();
if (!data || !data.user) {
throw new DataError('Invalid user data received');
}
return data.user;
} catch (error) {
if (error instanceof ApiError) {
logger.error('API error fetching user', { userId, error });
} else if (error instanceof DataError) {
logger.error('Data validation error', { userId, error });
} else {
logger.error('Unexpected error', { userId, error });
}
throw error;
}
}
```
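`ApiError` and `DataError` in the example above are custom classes rather than built-ins. A minimal sketch of how they might be defined:
```javascript
// src/errors.js (illustrative) - the custom error classes used above
class ApiError extends Error {
  constructor(message, status) {
    super(message);
    this.name = 'ApiError';
    this.status = status; // HTTP status from the failed response
  }
}

class DataError extends Error {
  constructor(message) {
    super(message);
    this.name = 'DataError';
  }
}

module.exports = { ApiError, DataError };
```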
## Common Refactoring Patterns
### 1. Extract Method
```bash
/cook [extract password validation into separate function]
```
### 2. Extract Class
```bash
/cook [extract payment processing into PaymentService class]
```
### 3. Rename
```bash
/cook [rename all instances of 'data' variable to descriptive names]
```
### 4. Introduce Parameter Object
```bash
/cook [replace multiple parameters with configuration object]
```
### 5. Replace Conditional with Polymorphism
```bash
/cook [replace user type conditionals with strategy pattern]
```
### 6. Move Method
```bash
/cook [move authentication logic from controller to service layer]
```
## Refactoring Best Practices
### 1. Test Before and After
```bash
# Before refactoring
/test
# Take note of test results
# After refactoring
/test
# Verify same results
```
### 2. Small, Incremental Changes
```bash
✅ Good approach:
/cook [extract validation to separate function]
/test
/git:cm
/cook [add error handling to validation]
/test
/git:cm
❌ Bad approach:
/cook [refactor entire authentication system]
# Too many changes at once
# Hard to debug if issues arise
```
### 3. Maintain Functionality
```bash
# Refactoring should NOT change behavior
# Only change structure and implementation
# All tests should still pass
```
### 4. Update Tests Alongside Code
```bash
/cook [refactor user service and update tests accordingly]
```
### 5. Document Architectural Changes
```bash
/docs:update [document the new validation architecture]
```
## Common Variations
### Variation 1: Performance Refactoring
```bash
/cook [optimize database queries in user service]
```
### Variation 2: Security Refactoring
```bash
/cook [refactor to use parameterized queries instead of string concatenation]
```
### Variation 3: Modernize Code
```bash
/cook [convert callbacks to async/await throughout the codebase]
```
### Variation 4: Simplify Architecture
```bash
/cook [simplify three-layer architecture to two layers]
```
## Troubleshooting
### Issue: Tests Fail After Refactoring
**Problem**: Refactoring broke existing functionality
**Solution**:
```bash
# Revert changes
git reset --hard HEAD
# Refactor in smaller steps
/cook [extract just the email validation function]
/test
# Ensure tests pass before continuing
```
### Issue: Unclear What to Refactor
**Problem**: Don't know where to start
**Solution**:
```bash
# Get code review suggestions
/review
# Or ask for analysis
/ask [analyze the codebase and suggest refactoring priorities]
```
### Issue: Breaking API Compatibility
**Problem**: Refactoring changes public API
**Solution**:
```bash
# Maintain backward compatibility
/cook [refactor internal implementation without changing public API]
# Or version the API
/cook [create v2 API with refactored structure]
```
## Measuring Success
### Before Refactoring
```bash
# Collect metrics
Code Duplication: 45%
Average Function Length: 87 lines
Cyclomatic Complexity: 23
Test Coverage: 65%
Maintainability Index: 42/100
```
### After Refactoring
```bash
# Measure improvement
Code Duplication: 5%
Average Function Length: 28 lines
Cyclomatic Complexity: 8
Test Coverage: 89%
Maintainability Index: 78/100
Improvement: +36 points maintainability
```
## Next Steps
### Related Use Cases
- [Adding a New Feature](/docs/workflows/adding-feature) - Build features
- [Fixing Bugs](/docs/workflows/fixing-bugs) - Debug issues
- [Optimizing Performance](/docs/workflows/optimizing-performance) - Speed improvements
### Related Commands
- [/cook](/docs/commands/core/cook) - Implement refactoring
- [/test](/docs/commands/core/test) - Verify changes
- [/docs:update](/docs/commands/docs/update) - Update docs
### Related Agents
- [Code Reviewer](/docs/agents/code-reviewer) - Code quality analysis
- [Tester](/docs/agents/tester) - Testing coverage
- [Docs Manager](/docs/agents/docs-manager) - Documentation
---
**Key Takeaway**: ClaudeKit enables safe, incremental refactoring with automated testing and validation - improving code quality without breaking functionality, turning days of risky refactoring into hours of confident improvement.
---
# Building a REST API
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/building-api
# Building a REST API
Learn how to build production-ready REST APIs with ClaudeKit - from API design through implementation, testing, documentation, and deployment.
## Overview
**Goal**: Build a complete REST API with CRUD operations, auth, and docs
**Time**: 30-60 minutes (vs 6-12 hours manually)
**Agents Used**: planner, researcher, tester, code-reviewer, docs-manager
**Commands**: /bootstrap, /plan, /code, /test, /docs:update
## Prerequisites
- Clear API requirements
- Database choice (PostgreSQL, MySQL, MongoDB, etc.)
- Node.js 18+ or Python 3.9+ installed
- Postman or similar API testing tool
## API Development Phases
| Phase | Activities | Time | Commands |
|-------|-----------|------|----------|
| Design | Plan endpoints, data models | 5-10 min | /plan |
| Setup | Initialize project, database | 5-10 min | /bootstrap |
| Implementation | Build endpoints, logic | 15-25 min | /cook |
| Testing | Unit, integration, E2E tests | 5-10 min | /test |
| Documentation | API docs, examples | 5 min | /docs:update |
| Deployment | Production setup | 10-15 min | /cook |
## Step-by-Step Workflow
### Step 1: Design API Structure
Plan your API endpoints and data models:
```bash
/plan [design REST API for task management with users, tasks, and projects]
```
**Generated plan includes**:
```markdown
# Task Management API Design
## Data Models
### User
- id: UUID (primary key)
- email: String (unique)
- password: String (hashed)
- name: String
- createdAt: Timestamp
- updatedAt: Timestamp
### Project
- id: UUID (primary key)
- name: String
- description: Text
- ownerId: UUID (foreign key → User)
- createdAt: Timestamp
- updatedAt: Timestamp
### Task
- id: UUID (primary key)
- title: String
- description: Text
- status: Enum [todo, in_progress, done]
- priority: Enum [low, medium, high]
- projectId: UUID (foreign key → Project)
- assigneeId: UUID (foreign key → User)
- dueDate: Date
- createdAt: Timestamp
- updatedAt: Timestamp
## API Endpoints
### Authentication
- POST /api/auth/register - Create new user
- POST /api/auth/login - Login user
- POST /api/auth/logout - Logout user
- GET /api/auth/me - Get current user
### Users
- GET /api/users - List users (admin)
- GET /api/users/:id - Get user by ID
- PUT /api/users/:id - Update user
- DELETE /api/users/:id - Delete user
### Projects
- GET /api/projects - List user's projects
- POST /api/projects - Create project
- GET /api/projects/:id - Get project
- PUT /api/projects/:id - Update project
- DELETE /api/projects/:id - Delete project
### Tasks
- GET /api/tasks - List tasks (with filters)
- POST /api/tasks - Create task
- GET /api/tasks/:id - Get task
- PUT /api/tasks/:id - Update task
- DELETE /api/tasks/:id - Delete task
- GET /api/projects/:id/tasks - Get project tasks
## Technical Stack
### Backend
- Framework: Express.js (Node.js)
- Database: PostgreSQL
- ORM: Prisma
- Authentication: JWT
- Validation: Joi
### Features
- Input validation
- Error handling
- Rate limiting
- CORS configuration
- API documentation (Swagger)
- Request logging
- Health check endpoint
## Security
- Password hashing (bcrypt)
- JWT token authentication
- Role-based access control
- SQL injection prevention
- XSS protection
- Rate limiting
```
### Step 2: Bootstrap the API
Create the initial project structure:
```bash
/bootstrap [create REST API for task management with Node.js, Express, PostgreSQL, and JWT auth]
```
**Interactive setup**:
```
🤖 ClaudeKit: I'll bootstrap your REST API project.
Confirming requirements:
✓ Framework: Express.js
✓ Database: PostgreSQL
✓ ORM: Prisma
✓ Authentication: JWT
✓ Validation: Joi
✓ Documentation: Swagger
Additional features:
✓ Rate limiting
✓ CORS configuration
✓ Error handling middleware
✓ Request logging
✓ Docker support
✓ Test suite
Proceed? (y/n) > y
[1/6] Researching best practices...
✓ Express.js API patterns
✓ Prisma schema design
✓ JWT authentication
✓ Security best practices
[2/6] Creating project structure...
✓ Package initialization
✓ Directory structure
✓ Configuration files
[3/6] Setting up database...
✓ Prisma schema created
✓ Migration files
✓ Seed data script
[4/6] Implementing authentication...
✓ JWT middleware
✓ Password hashing
✓ Auth endpoints
[5/6] Implementing CRUD...
✓ User routes
✓ Project routes
✓ Task routes
✓ Controllers & services
[6/6] Setting up infrastructure...
✓ Docker configuration
✓ Environment setup
✓ Test framework
✓ API documentation
✅ API bootstrapped successfully!
```
**Generated structure**:
```
task-api/
├── src/
│   ├── routes/
│   │   ├── auth.routes.js
│   │   ├── users.routes.js
│   │   ├── projects.routes.js
│   │   └── tasks.routes.js
│   ├── controllers/
│   │   ├── auth.controller.js
│   │   ├── users.controller.js
│   │   ├── projects.controller.js
│   │   └── tasks.controller.js
│   ├── services/
│   │   ├── auth.service.js
│   │   ├── users.service.js
│   │   ├── projects.service.js
│   │   └── tasks.service.js
│   ├── middleware/
│   │   ├── auth.middleware.js
│   │   ├── validate.middleware.js
│   │   ├── error.middleware.js
│   │   └── rate-limit.middleware.js
│   ├── validators/
│   │   ├── auth.validator.js
│   │   ├── project.validator.js
│   │   └── task.validator.js
│   ├── utils/
│   │   ├── jwt.js
│   │   ├── hash.js
│   │   └── logger.js
│   └── server.js
├── prisma/
│   ├── schema.prisma
│   ├── migrations/
│   └── seed.js
├── tests/
│   ├── unit/
│   ├── integration/
│   └── e2e/
├── docs/
│   └── swagger.yaml
├── .env.example
├── Dockerfile
├── docker-compose.yml
└── package.json
```
### Step 3: Configure Environment
Set up database and environment variables:
```bash
# Copy environment template
cp .env.example .env
# Edit configuration
nano .env
```
**.env file**:
```env
NODE_ENV=development
PORT=3000
# Database
DATABASE_URL="postgresql://user:password@localhost:5432/taskapi"
# JWT
JWT_SECRET="your-super-secret-jwt-key"
JWT_EXPIRES_IN="7d"
# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100
# CORS
CORS_ORIGIN="http://localhost:3001"
```
### Step 4: Initialize Database
```bash
# Start PostgreSQL (Docker)
docker-compose up -d postgres
# Generate Prisma client
npx prisma generate
# Run migrations
npx prisma migrate dev --name init
# Seed database
npx prisma db seed
```
### Step 5: Implement Custom Endpoints
Add specialized endpoints not in bootstrap:
```bash
/cook [add task filtering by status, priority, and due date]
```
**Implementation**:
```
[1/4] Implementing filters...
✓ Added query parameter parsing
✓ Implemented filter logic in service
✓ Updated Prisma queries
[2/4] Adding validation...
✓ Query parameter validation
✓ Enum validation for status/priority
✓ Date validation
[3/4] Testing...
✓ Filter tests added (18 tests)
✓ All tests pass
[4/4] Documentation...
✓ Swagger docs updated
✓ Example queries added
✅ Filtering implemented
Example usage:
GET /api/tasks?status=in_progress&priority=high&dueBefore=2025-12-31
```
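Under the hood, the filter logic typically maps validated query parameters onto a Prisma `where` clause. A hedged sketch of what the service function might look like (file name and function signature are assumptions):
```javascript
// src/services/tasks.service.js (illustrative sketch of the filter logic)
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

// Builds a Prisma `where` clause from already-validated query parameters.
async function listTasks({ status, priority, dueBefore }) {
  const where = {};
  if (status) where.status = status;                        // e.g. 'in_progress'
  if (priority) where.priority = priority;                  // e.g. 'high'
  if (dueBefore) where.dueDate = { lte: new Date(dueBefore) };

  return prisma.task.findMany({ where, orderBy: { dueDate: 'asc' } });
}

module.exports = { listTasks };
```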
### Step 6: Add Search Functionality
```bash
/cook [implement full-text search for tasks and projects]
```
**Implementation**:
```
[1/5] Setting up search...
✓ Added search indexes to Prisma schema
✓ Generated migration
[2/5] Implementing search logic...
✓ Full-text search in PostgreSQL
✓ Search across title and description
✓ Relevance scoring
[3/5] Creating endpoint...
✓ GET /api/search?q=query
✓ Cross-model search (tasks + projects)
✓ Pagination support
[4/5] Testing...
✓ Search tests (12 tests)
✓ Edge case handling
[5/5] Documentation...
✓ Search API documented
✅ Search implemented
```
### Step 7: Add Pagination
```bash
/cook [add pagination to all list endpoints]
```
**Implementation**:
```
Pagination added to:
✓ GET /api/users
✓ GET /api/projects
✓ GET /api/tasks
✓ GET /api/search
Query parameters:
- page: Page number (default: 1)
- limit: Items per page (default: 20, max: 100)
Response format:
{
"data": [...],
"pagination": {
"page": 1,
"limit": 20,
"total": 156,
"totalPages": 8,
"hasNext": true,
"hasPrev": false
}
}
```
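A shared helper along these lines could produce that response shape for any Prisma model (a sketch; names and defaults are illustrative):
```javascript
// src/utils/paginate.js (illustrative) - shared pagination helper
async function paginate(model, { page = 1, limit = 20, where = {} }) {
  const take = Math.min(Number(limit) || 20, 100);   // cap at 100 items per page
  const currentPage = Math.max(Number(page) || 1, 1);
  const skip = (currentPage - 1) * take;

  const [data, total] = await Promise.all([
    model.findMany({ where, skip, take }),
    model.count({ where }),
  ]);

  return {
    data,
    pagination: {
      page: currentPage,
      limit: take,
      total,
      totalPages: Math.ceil(total / take),
      hasNext: skip + take < total,
      hasPrev: currentPage > 1,
    },
  };
}

module.exports = { paginate };

// Usage: paginate(prisma.task, { page: req.query.page, limit: req.query.limit });
```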
### Step 8: Implement Advanced Features
#### Rate Limiting
```bash
/cook [add rate limiting per user with Redis]
```
#### File Uploads
```bash
/cook [add file attachment support for tasks using S3]
```
#### Real-time Updates
```bash
/cook [add WebSocket support for real-time task updates]
```
#### Email Notifications
```bash
/cook [send email notifications for task assignments and due dates]
```
### Step 9: Testing
Run comprehensive test suite:
```bash
/test
```
**Test results**:
```
✓ Unit Tests (87 tests)
  ✓ Services (45 tests)
  ✓ Validators (23 tests)
  ✓ Utils (19 tests)
✓ Integration Tests (64 tests)
  ✓ Auth endpoints (18 tests)
  ✓ User endpoints (12 tests)
  ✓ Project endpoints (15 tests)
  ✓ Task endpoints (19 tests)
✓ E2E Tests (23 tests)
  ✓ Complete user flows (12 tests)
  ✓ Authentication flows (6 tests)
  ✓ Error scenarios (5 tests)
Test Suites: 3 passed, 3 total
Tests: 174 passed, 174 total
Time: 12.847 s
Coverage: 91.3%
✅ All tests passed
```
### Step 10: API Documentation
Generate comprehensive API documentation:
```bash
/docs:update
```
**Generated documentation**:
```
✓ Swagger/OpenAPI specification
✓ Postman collection
✓ API reference guide
✓ Authentication guide
✓ Error codes reference
✓ Rate limiting docs
✓ Examples for each endpoint
```
**Access documentation**:
```bash
npm run dev
# Open browser
http://localhost:3000/api-docs
```
### Step 11: Manual API Testing
Test endpoints with curl or Postman:
```bash
# Register user
curl -X POST http://localhost:3000/api/auth/register \
-H "Content-Type: application/json" \
-d '{
"email": "john@example.com",
"password": "SecurePass123!",
"name": "John Doe"
}'
# Response:
{
"token": "eyJhbGciOiJIUzI1NiIs...",
"user": {
"id": "uuid-here",
"email": "john@example.com",
"name": "John Doe"
}
}
# Create project
curl -X POST http://localhost:3000/api/projects \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Website Redesign",
"description": "Complete redesign of company website"
}'
# Create task
curl -X POST http://localhost:3000/api/tasks \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"title": "Design homepage mockup",
"description": "Create mockup in Figma",
"status": "todo",
"priority": "high",
"projectId": "project-uuid",
"dueDate": "2025-11-15"
}'
# Get tasks with filters
curl -X GET "http://localhost:3000/api/tasks?status=todo&priority=high&page=1&limit=20" \
-H "Authorization: Bearer YOUR_TOKEN"
# Search
curl -X GET "http://localhost:3000/api/search?q=homepage" \
-H "Authorization: Bearer YOUR_TOKEN"
```
### Step 12: Deploy API
Deploy to production:
```bash
# Option 1: Deploy with Docker
/cook [create production Docker setup with nginx reverse proxy]
docker build -t task-api:latest .
docker push your-registry/task-api:latest
# Option 2: Deploy to Heroku
heroku create task-api-prod
heroku addons:create heroku-postgresql:mini
git push heroku main
# Option 3: Deploy to AWS
/cook [create AWS deployment with ECS, RDS, and load balancer]
```
## Complete Example: Blog API
### Requirements
```
Build a REST API for a blogging platform with:
- User authentication
- Blog post CRUD
- Comments
- Categories and tags
- Search functionality
- Like/unlike posts
- Follow users
```
### Implementation
```bash
# Design API
/plan [design REST API for blogging platform with all features]
# Bootstrap project
/bootstrap [create blog API with Node.js, Express, MongoDB, and JWT]
# Implement features
/cook [implement blog post CRUD with draft/publish status]
/cook [add commenting system with nested replies]
/cook [implement category and tag management]
/cook [add full-text search for posts]
/cook [implement like/unlike functionality]
/cook [add user following system]
/cook [implement feed generation for followed users]
# Test everything
/test
# Document API
/docs:update
# Deploy
/cook [set up production deployment to DigitalOcean]
```
### Time Breakdown
**Manual development**: 10-16 hours
- API design: 1-2 hours
- Setup: 1-2 hours
- Authentication: 2-3 hours
- CRUD operations: 3-4 hours
- Advanced features: 2-3 hours
- Testing: 1-2 hours
**With ClaudeKit**: 65 minutes
- Design: 8 minutes
- Bootstrap: 12 minutes
- Features: 35 minutes
- Testing: 7 minutes
- Documentation: 3 minutes
**Time saved**: 9-15 hours (90% faster)
## Advanced API Patterns
### 1. Versioning
```bash
/cook [implement API versioning with v1 and v2 routes]
```
### 2. GraphQL Alternative
```bash
/cook [add GraphQL endpoint alongside REST API]
```
### 3. Webhooks
```bash
/cook [implement webhook system for task events]
```
### 4. API Analytics
```bash
/cook [add API usage analytics and metrics]
```
### 5. Caching Layer
```bash
/cook [implement Redis caching for frequently accessed data]
```
## Best Practices
### 1. RESTful Design
```bash
✅ Good:
GET /api/users # List users
POST /api/users # Create user
GET /api/users/:id # Get user
PUT /api/users/:id # Update user
DELETE /api/users/:id # Delete user
❌ Bad:
GET /api/getUsers
POST /api/createUser
GET /api/user/:id
POST /api/updateUser/:id
POST /api/deleteUser/:id
```
### 2. Consistent Response Format
```javascript
// Success response
{
"success": true,
"data": {...},
"message": "User created successfully"
}
// Error response
{
"success": false,
"error": {
"code": "VALIDATION_ERROR",
"message": "Invalid email format",
"details": [...]
}
}
```
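A centralized Express error middleware can enforce that error shape in one place. A minimal sketch (the `code` and `details` properties on thrown errors are assumptions):
```javascript
// Illustrative Express error middleware emitting the error format above
function errorHandler(err, req, res, next) { // 4 args mark this as error middleware
  const status = err.status || 500;
  res.status(status).json({
    success: false,
    error: {
      code: err.code || 'INTERNAL_ERROR',
      message: status === 500 ? 'Something went wrong' : err.message,
      details: err.details || [],
    },
  });
}

module.exports = errorHandler;
```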
### 3. Proper Status Codes
```
200 OK - Successful GET, PUT
201 Created - Successful POST
204 No Content - Successful DELETE
400 Bad Request - Validation error
401 Unauthorized - Missing/invalid auth
403 Forbidden - No permission
404 Not Found - Resource not found
500 Internal Server Error - Server error
```
### 4. Input Validation
```bash
/cook [add comprehensive input validation to all endpoints]
```
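With Joi (the validation library used in the bootstrap example), endpoint validation is usually a schema plus a small reusable middleware. An illustrative sketch; the schema fields mirror the Task model from the plan, and the names are examples:
```javascript
// Illustrative Joi schema + validation middleware
const Joi = require('joi');

const createTaskSchema = Joi.object({
  title: Joi.string().min(1).max(200).required(),
  description: Joi.string().allow(''),
  status: Joi.string().valid('todo', 'in_progress', 'done').default('todo'),
  priority: Joi.string().valid('low', 'medium', 'high').default('medium'),
  projectId: Joi.string().uuid().required(),
  dueDate: Joi.date().iso().optional(),
});

// Reusable middleware: validates req.body against a schema before the controller runs.
const validate = (schema) => (req, res, next) => {
  const { error, value } = schema.validate(req.body, { abortEarly: false });
  if (error) {
    return res.status(400).json({
      success: false,
      error: { code: 'VALIDATION_ERROR', message: 'Invalid request body', details: error.details },
    });
  }
  req.body = value;
  next();
};

// Usage: router.post('/api/tasks', validate(createTaskSchema), tasksController.create);
```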
### 5. Error Handling
```bash
/cook [implement centralized error handling middleware]
```
## Troubleshooting
### Issue: Database Connection Fails
**Solution**:
```bash
# Check DATABASE_URL
echo $DATABASE_URL
# Test connection
npx prisma db pull
# Or fix with ClaudeKit
/fix:fast [database connection failing]
```
### Issue: Authentication Not Working
**Solution**:
```bash
/fix:fast [JWT authentication returning 401 for valid tokens]
```
### Issue: Slow Query Performance
**Solution**:
```bash
/cook [add database indexes for frequently queried fields]
```
### Issue: API Rate Limiting Too Strict
**Solution**:
```bash
/cook [adjust rate limiting to 200 requests per 15 minutes]
```
## Next Steps
### Related Use Cases
- [Implementing Authentication](/docs/workflows/implementing-auth) - Auth deep dive
- [Integrating Payments](/docs/workflows/integrating-payment) - Add payments
- [Optimizing Performance](/docs/workflows/optimizing-performance) - Speed up API
### Related Commands
- [/bootstrap](/docs/commands/core/bootstrap) - Initialize projects
- [/cook](/docs/commands/core/cook) - Implement features
- [/test](/docs/commands/core/test) - Test suite
### Further Reading
- [API Best Practices](https://restfulapi.net/)
- [Swagger/OpenAPI](https://swagger.io/specification/)
- [JWT Authentication](https://jwt.io/)
---
**Key Takeaway**: ClaudeKit enables rapid REST API development with best practices built-in - from design to deployment in under an hour with production-ready code, tests, and documentation.
---
# Greenfield Projects
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/new-project
# Greenfield Projects
Create new projects from scratch with ClaudeKit's AI-powered development workflow. Transform ideas into production-ready applications quickly with intelligent agents.
## Installation
### 1. Install ClaudeKit CLI
```bash
npm i -g claudekit-cli@latest
```
## Quick Start
### Method 1: Bootstrap New Project
```bash
# Initialize project with ClaudeKit Engineer kit
ck init --kit engineer --dir /path/to/project
```
**Options:**
- `--kit engineer`: Installs ClaudeKit Engineer configuration
- `--dir`: Target directory for the project
### Method 2: Manual Setup
```bash
# Create directory
mkdir my-awesome-project
cd my-awesome-project
# Initialize git (optional but recommended)
git init
# Start Claude Code
claude
```
## Bootstrap Your Idea
### The Bootstrap Command
```bash
/bootstrap
```
This is the **most powerful command** for greenfield projects. It:
1. Asks clarifying questions for context
2. **Provides detailed implementation plan** (review carefully!)
3. After approval, starts implementing
4. Writes tests automatically
5. Performs code reviews
6. Creates initial specs and roadmap
7. Provides summary report
**No need to run `/docs:init`** - specs are created automatically during bootstrap.
### Example: Simple Project
```bash
/bootstrap A CLI tool that converts markdown files to PDF with custom styling
```
**CC will ask:**
- Target platforms? (Node.js, Deno, Bun?)
- PDF library preference? (pdfkit, puppeteer, weasyprint?)
- Custom styling method? (CSS, theme files?)
- Distribution method? (npm, binary, both?)
### Example: Web Application
```bash
/bootstrap A real-time collaborative todo app with team workspaces and permissions
```
**CC will ask:**
- Frontend framework? (React, Vue, Svelte?)
- Backend? (Node.js, Python, Go?)
- Database? (PostgreSQL, MongoDB, Supabase?)
- Real-time solution? (WebSocket, Server-Sent Events?)
- Authentication? (OAuth, email/password, magic link?)
### Example: Discord Bot
```bash
/bootstrap A Discord bot that researches, analyzes, and sends a report on DJI stock at 7am every day
```
**CC will ask:**
- Stock data source? (API preference?)
- Analysis type? (Technical, fundamental, both?)
- Report format? (Embed, text, charts?)
- Time zone for scheduling?
- Storage for historical data?
### Autonomous Mode (Use with Caution!)
```bash
/bootstrap:auto
```
Runs full autonomous mode without plan review. CC will:
- Make all technical decisions
- Implement entire project
- Run tests and fix issues
- Generate documentation
**Recommendation:** Only use for:
- Simple, well-defined projects
- Prototypes and experiments
- Non-critical applications
**Always review generated code** before production use.
## After Bootstrap
### Project Structure
ClaudeKit creates standard project structure:
```
my-project/
├── .claude/ # ClaudeKit configuration
├── docs/ # Generated documentation
│   ├── project-overview-pdr.md
│   ├── system-architecture.md
│   └── roadmap.md
├── src/ # Source code
├── tests/ # Test files
├── package.json # Dependencies
└── README.md # Project readme
```
### Continue Development
Use all ClaudeKit commands for further development:
#### Add New Features
```bash
/cook Add user authentication with email verification
```
#### Fix Issues
```bash
/fix:fast Button click handler not responding on mobile
/fix:hard Complex state management bug in checkout flow
```
#### Plan Enhancements
```bash
/plan Add payment processing with Stripe
```
#### Run Tests
```bash
/test
```
## Common Project Types
### Web API Server
```bash
/bootstrap REST API for e-commerce platform with products, cart, orders, and payments
```
**Typical questions:**
- Framework: Express, Fastify, Nest.js?
- Database: PostgreSQL, MySQL, MongoDB?
- Authentication: JWT, session-based?
- Payment provider: Stripe, PayPal?
- Hosting: Vercel, AWS, Railway?
### Full-Stack Application
```bash
/bootstrap Full-stack task management app with kanban boards, time tracking, and team collaboration
```
**Typical questions:**
- Frontend: Next.js, React + Vite, Remix?
- State management: Redux, Zustand, Context?
- Database: Supabase, PlanetScale, MongoDB?
- Real-time: Socket.io, Supabase Realtime?
- File storage: S3, Cloudflare R2?
### Chrome Extension
```bash
/bootstrap Chrome extension that summarizes web articles and saves highlights to Notion
```
**Typical questions:**
- Manifest version: V2 or V3?
- AI service: OpenAI, Anthropic, local?
- Notion integration: Official API?
- Storage: Chrome storage, cloud sync?
### Mobile App Backend
```bash
/bootstrap Backend API for mobile fitness app with workout tracking, social features, and achievements
```
**Typical questions:**
- Framework: Express, FastAPI, Rails?
- Database: PostgreSQL, MongoDB?
- File uploads: S3, Cloudflare R2?
- Push notifications: FCM, OneSignal?
- Analytics: Mixpanel, PostHog?
## Advanced Workflows
### Iterative Development
```bash
# 1. Start with MVP
/bootstrap Minimal viable product for habit tracking app
# 2. After MVP completion, add features
/cook Add social sharing features
/cook Add streak tracking and notifications
/cook Add analytics dashboard
# 3. Optimize and refine
/fix:hard Performance issues with large datasets
/plan Add premium features with subscription
```
### Multi-Service Architecture
```bash
# 1. Bootstrap main API
/bootstrap Main API service for social media platform
# 2. Plan microservices
/plan Add separate auth service
/plan Add media processing service
/plan Add notification service
# 3. Implement services separately
/code auth-service-plan.md
/code media-service-plan.md
/code notification-service-plan.md
```
### Documentation-Driven Development
```bash
# 1. Create detailed plan first
/plan Complete SaaS platform with multi-tenancy, billing, and admin dashboard
# 2. Review and refine plan
# Edit the generated plan markdown
# 3. Implement in phases
/code plan.md --phase 1
/test
/code plan.md --phase 2
/test
```
## Project Configuration
### Environment Setup
After bootstrap, configure environments:
```bash
# Development
/config Create development environment configuration
# Production
/config Create production environment configuration
# Testing
/config Create testing environment configuration
```
### Deployment
```bash
# Plan deployment
/plan Deploy to Vercel with CI/CD
# Implement deployment
/code deployment-plan.md
# Or use specific integration
/integrate vercel
/integrate railway
/integrate fly.io
```
## Best Practices
### 1. Clear Description
**Good:**
```bash
/bootstrap Real-time chat application with rooms, direct messages, file sharing, and presence indicators. Target 1000 concurrent users.
```
**Bad:**
```bash
/bootstrap Chat app
```
### 2. Answer Questions Thoroughly
Provide detailed answers to CC's questions. Better context = better implementation.
### 3. Review Plans Carefully
**IMPORTANT:** Always review implementation plans before approval. Check:
- Architecture decisions
- Technology choices
- Security considerations
- Scalability approach
- Testing strategy
### 4. Start Small, Iterate
Begin with core functionality, then expand:
```bash
# Phase 1: Core MVP
/bootstrap Basic blogging platform with posts and comments
# Phase 2: Enhancements
/cook Add rich text editor
/cook Add image uploads
/cook Add user profiles
# Phase 3: Advanced Features
/cook Add search functionality
/cook Add social sharing
```
### 5. Use Version Control
```bash
# After bootstrap
git add .
git commit -m "Initial project setup via ClaudeKit bootstrap"
# After each feature
/git:cp
```
## Troubleshooting
### Bootstrap Stalls or Fails
```bash
# Stop and restart with more specific description
/bootstrap [more detailed description with tech stack preferences]
```
### Generated Code Has Issues
```bash
# Fix specific issues
/fix:hard [describe the issue]
# Or run full test suite and auto-fix
/fix:test
```
### Need to Change Approach
```bash
# Create new plan
/plan Refactor to use different architecture
# Implement changes
/code new-approach-plan.md
```
## Examples from Real Projects
### E-commerce Platform
```bash
/bootstrap E-commerce platform with:
- Product catalog with categories and search
- Shopping cart with session persistence
- Checkout with Stripe integration
- Order management for customers and admin
- Email notifications for orders
- Admin dashboard for inventory management
```
### SaaS Analytics Tool
```bash
/bootstrap SaaS analytics dashboard with:
- Multi-tenant architecture with workspaces
- Real-time data ingestion API
- Custom dashboard builder with drag-and-drop
- SQL query builder for custom reports
- Role-based access control
- Subscription billing with Stripe
```
### Content Management System
```bash
/bootstrap Headless CMS with:
- Flexible content modeling with custom types
- Rich text editor with markdown support
- Media library with image optimization
- GraphQL and REST APIs
- Webhook support for publishing events
- Multi-environment content staging
```
## Next Steps
After bootstrapping your project:
1. **Continuous Development**: Use `/cook` for new features
2. **Testing**: Regular `/test` runs
3. **Documentation**: Keep `/docs:update` current
4. **Deployment**: Set up CI/CD with `/plan ci`
5. **Team Collaboration**: Share `.claude/` configuration
## Resources
- [All Commands](/docs/commands/) - Complete command reference
- [AI Agents](/docs/agents/) - Understanding specialized agents
- [Workflows](/docs/docs/configuration/workflows) - Development workflows
- [Use Cases](/docs/workflows/) - Real-world examples
---
**Ready to build?** Start with `/bootstrap` and let AI agents handle the heavy lifting. Remember to **review plans carefully** before approval!
**Need help?** Visit [GitHub Discussions](https://github.com/mrgoonie/claudekit-cli/discussions)
---
# Brownfield Projects
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/existing-project
# Brownfield Projects
Integrate ClaudeKit into your existing projects to enhance development workflow with AI-powered agents. Perfect for legacy codebases, team projects, and gradual AI adoption.
## Installation
### 1. Install ClaudeKit CLI
```bash
npm i -g claudekit-cli@latest
```
### 2. Navigate to Your Project
```bash
cd /path/to/your/existing/project
```
### 3. Start Claude Code
```bash
claude
```
This will start Claude Code (CC) with ClaudeKit agents in your project directory.
## Initial Setup
### Analyze and Create Specs
Let Claude Code scan and analyze your codebase to create initial specifications:
```bash
/docs:init
```
This command will:
- Analyze your project structure
- Understand your tech stack
- Generate initial documentation specs
- Create baseline for AI-assisted development
**Wait for completion** before proceeding with other commands.
## Core Workflows
### Implement New Features
```bash
/cook
```
**Example:**
```bash
/cook Add user profile page with avatar upload and edit functionality
```
**Process:**
1. CC will ask clarifying questions - answer them!
2. **IMPORTANT:** Review the detailed implementation plan carefully
3. After your approval, CC starts implementing
4. Automatic testing and code review included
5. Summary report provided when finished
**Autonomous variants** (use at your own risk):
- `/cook:auto` - Full autonomous mode with plan review
- `/cook:auto:fast` - Faster mode with less token consumption
### Fix Bugs
#### Quick Bug Fix
```bash
/fix:fast
```
For simple, straightforward bugs.
#### Complex Bug Fix
```bash
/fix:hard
```
For difficult bugs requiring deeper analysis and more thinking time.
**Example:**
```bash
/fix:hard User authentication breaks after OAuth login when email is not verified
```
#### Auto-Fix from Logs
```bash
/fix:logs
```
Automatically fetches logs and fixes issues.
#### Fix Test Suite
```bash
/fix:test
```
Runs test suite and keeps fixing until all tests pass.
#### Fix CI/CD Issues
```bash
/fix:ci
```
**Example:**
```bash
/fix:ci https://github.com/username/repo/actions/runs/12345
```
Fetches GitHub Actions logs and fixes build/deployment errors.
### Planning & Research
#### Brainstorm Ideas
```bash
/brainstorm
```
Use when unsure about technical feasibility or implementation approach.
**Example:**
```bash
/brainstorm Real-time collaborative editing feature like Google Docs
```
#### Create Implementation Plan
```bash
/plan
```
Research and create detailed implementation plan without implementing.
**Example:**
```bash
/plan Migrate from REST API to GraphQL with backward compatibility
```
#### Execute Existing Plan
```bash
/code
```
Start implementing from a markdown plan file.
### Testing
#### Run Tests and Report
```bash
/test
```
Runs test suite and generates report. No automatic fixes.
## Advanced Commands
### Documentation
```bash
/docs:update # Update existing documentation
/docs:summarize # Summarize documentation
```
### Git Operations
```bash
/git:cm # Create meaningful commit message
/git:cp # Commit and push changes
/git:pr # Create pull request
```
### Integration
```bash
/integrate:polar # Integrate Polar API
/integrate:sepay # Integrate SePay payment
```
### Skills Management
```bash
/skill:create # Create new skill
/skill:fix-logs # Fix skill errors
```
## Best Practices
### 1. Start with Documentation
Always run `/docs:init` first to let CC understand your codebase.
### 2. Review Plans Carefully
**IMPORTANT:** Always review implementation plans before approving. CC provides detailed plans for a reason.
### 3. Incremental Integration
- Start with small features
- Fix non-critical bugs first
- Gradually increase complexity
- Build team confidence
### 4. Use Appropriate Commands
- Simple bugs β `/fix:fast`
- Complex bugs β `/fix:hard`
- Small features β `/cook`
- Large features β `/plan` then `/code`
### 5. Test Regularly
Run `/test` frequently to catch issues early.
## Common Scenarios
### Adding Feature to Legacy Code
```bash
# 1. Analyze codebase
/docs:init
# 2. Plan the feature
/plan Add user roles and permissions system
# 3. Review and approve plan
# 4. Implement
/code plan.md
# 5. Test
/test
```
### Fixing Production Bug
```bash
# 1. Quick fix for urgent issue
/fix:fast Payment processing fails on Safari browser
# 2. Test the fix
/test
# 3. Commit and push
/git:cp
```
### Refactoring Legacy Module
```bash
# 1. Brainstorm approach
/brainstorm Refactor authentication module to use modern JWT patterns
# 2. Create detailed plan
/plan Refactor auth module with backward compatibility
# 3. Review plan carefully
# 4. Implement incrementally
/code auth-refactor-plan.md
# 5. Run full test suite
/fix:test
```
## Team Collaboration
### Sharing Configuration
Share `.claude/` directory and generated specs with your team via git.
### Onboarding Team Members
```bash
# 1. Clone repository
git clone <repository-url>
# 2. Install ClaudeKit
npm i -g claudekit-cli@latest
# 3. Navigate to project
cd project-name
# 4. Start Claude Code
claude
# 5. Specs already exist - start working!
/cook Add new feature
```
## Troubleshooting
### CC Not Understanding Codebase
```bash
# Regenerate specs
/docs:update
```
### Commands Not Working
```bash
# Verify ClaudeKit installation
ck --version
# Restart Claude Code
# Exit CC and run: claude
```
### Need More Context
Provide detailed descriptions in commands. More context = better results.
## Next Steps
After successful integration:
1. **Explore Commands**: Check [Commands Documentation](/docs/commands/)
2. **Learn Agents**: Understand [Specialized Agents](/docs/agents/)
3. **Advanced Workflows**: See [Workflows Guide](/docs/docs/configuration/workflows)
4. **Team Training**: Share best practices with your team
---
**Need help?** Check [Troubleshooting Guide](/docs/support/troubleshooting/) or [GitHub Discussions](https://github.com/mrgoonie/claudekit-cli/discussions)
---
# Implementing Authentication
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/implementing-auth
# Implementing Authentication
Learn how to implement secure authentication systems with ClaudeKit - from basic JWT authentication to OAuth2, 2FA, and passwordless login.
## Overview
**Goal**: Implement secure, production-ready authentication
**Time**: 20-40 minutes (vs 4-8 hours manually)
**Agents Used**: planner, researcher, tester, code-reviewer
**Commands**: /plan, /code, /test, /docs:update
## Prerequisites
- Existing API or web application
- Database configured (PostgreSQL, MongoDB, etc.)
- Email service for verification (optional)
- OAuth provider accounts (optional: Google, GitHub, etc.)
## Authentication Methods
| Method | Use Case | Security | Complexity | Time |
|--------|----------|----------|------------|------|
| JWT | API authentication | High | Low | 15-20 min |
| Session-based | Traditional web apps | High | Medium | 20-25 min |
| OAuth2 | Social login | Very High | Medium | 25-35 min |
| Passwordless | Magic links, OTP | High | Medium | 20-30 min |
| 2FA | Additional security | Very High | Medium | 15-20 min |
| Biometric | Mobile apps | Very High | High | 30-40 min |
## Step-by-Step Workflow
### Step 1: Plan Authentication Strategy
Choose authentication method and plan implementation:
```bash
/plan [implement JWT authentication with email/password and password reset]
```
**Generated plan**:
```markdown
# JWT Authentication Implementation Plan
## Components
### 1. User Model
- id (UUID)
- email (unique, indexed)
- password (hashed with bcrypt)
- emailVerified (boolean)
- resetToken (nullable)
- resetTokenExpiry (nullable)
- createdAt, updatedAt
### 2. Endpoints
- POST /api/auth/register - User registration
- POST /api/auth/login - User login
- POST /api/auth/logout - User logout
- GET /api/auth/me - Get current user
- POST /api/auth/verify-email - Email verification
- POST /api/auth/forgot-password - Request password reset
- POST /api/auth/reset-password - Reset password
- POST /api/auth/refresh-token - Refresh JWT token
### 3. Middleware
- authenticateJWT - Verify JWT token
- requireAuth - Protect routes
- checkEmailVerified - Require verified email
### 4. Security Features
- Password hashing (bcrypt, 12 rounds)
- JWT token (access + refresh)
- Token expiry (15 min access, 7 day refresh)
- Rate limiting on auth endpoints
- Email verification
- Password strength validation
- Account lockout after failed attempts
### 5. Testing
- Unit tests for auth service
- Integration tests for endpoints
- Security tests (brute force, token theft)
- E2E authentication flows
```
### Step 2: Implement Basic JWT Authentication
```bash
/cook [implement JWT authentication with registration and login]
```
**Implementation**:
```
[1/6] Setting up user model...
✓ Created User schema/model
✓ Added password hashing hooks
✓ Created database migration
[2/6] Implementing JWT service...
✓ Token generation (access + refresh)
✓ Token verification
✓ Token refresh logic
✓ Secure token storage
[3/6] Creating auth endpoints...
✓ POST /api/auth/register
✓ POST /api/auth/login
✓ POST /api/auth/logout
✓ GET /api/auth/me
✓ POST /api/auth/refresh-token
[4/6] Adding middleware...
✓ JWT authentication middleware
✓ Route protection
✓ Error handling
[5/6] Implementing validation...
✓ Email format validation
✓ Password strength (min 8 chars, uppercase, number, symbol)
✓ Input sanitization
[6/6] Testing...
✓ 32 tests generated
✓ All tests pass
✓ Coverage: 93%
✅ JWT authentication implemented
Files created:
- src/models/User.js
- src/services/auth.service.js
- src/middleware/auth.middleware.js
- src/routes/auth.routes.js
- src/utils/jwt.js
- tests/auth/auth.test.js
```
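The authentication middleware at the heart of this setup is typically a few lines around `jsonwebtoken`. A hedged sketch (the payload shape and error format are assumptions, not the generated code):
```javascript
// src/middleware/auth.middleware.js (illustrative sketch)
const jwt = require('jsonwebtoken');

function authenticateJWT(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;

  if (!token) {
    return res.status(401).json({ success: false, error: { code: 'UNAUTHORIZED', message: 'Missing token' } });
  }

  try {
    // Verifies signature and expiry; the decoded payload (e.g. { sub, email }) is attached for later handlers.
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    return res.status(401).json({ success: false, error: { code: 'UNAUTHORIZED', message: 'Invalid or expired token' } });
  }
}

module.exports = { authenticateJWT };
```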
### Step 3: Add Email Verification
```bash
/cook [add email verification to registration process]
```
**Implementation**:
```
[1/4] Email verification setup...
✓ Add verificationToken to User model
✓ Generate verification tokens
✓ Create verification endpoint
[2/4] Email service integration...
✓ Email template for verification
✓ Send verification email on registration
✓ Resend verification email endpoint
[3/4] Middleware...
✓ requireEmailVerified middleware
✓ Apply to protected routes
[4/4] Testing...
✓ Verification flow tests (8 tests)
✓ Email sending mocked
✅ Email verification added
Usage:
1. User registers → receives verification email
2. User clicks link → account verified
3. Only verified users access protected routes
```
### Step 4: Implement Password Reset
```bash
/cook [implement password reset with email verification]
```
**Implementation**:
```
[1/5] Database changes...
✓ Add resetToken field
✓ Add resetTokenExpiry field
✓ Migration created
[2/5] Reset flow endpoints...
✓ POST /api/auth/forgot-password
✓ POST /api/auth/reset-password
✓ Token validation logic
[3/5] Email templates...
✓ Password reset email
✓ Password changed confirmation
[4/5] Security measures...
✓ Token expiry (15 minutes)
✓ Single-use tokens
✓ Rate limiting (5 requests/hour)
[5/5] Testing...
✓ Reset flow tests (12 tests)
✓ Security tests (token expiry, reuse)
✅ Password reset implemented
```
### Step 5: Add OAuth2 (Social Login)
```bash
/cook [add OAuth2 login with Google and GitHub]
```
**Implementation**:
```
[1/6] OAuth setup...
✓ Installed passport.js
✓ Configured Google strategy
✓ Configured GitHub strategy
[2/6] OAuth endpoints...
✓ GET /api/auth/google
✓ GET /api/auth/google/callback
✓ GET /api/auth/github
✓ GET /api/auth/github/callback
[3/6] User model updates...
✓ Add oauthProvider field
✓ Add oauthId field
✓ Link OAuth accounts to existing users
[4/6] Account linking...
✓ Link OAuth to email if exists
✓ Create new user if doesn't exist
✓ Merge accounts logic
[5/6] Frontend integration...
✓ OAuth buttons
✓ Redirect handling
✓ Token extraction
[6/6] Testing...
✓ OAuth flow tests (16 tests)
✓ Account linking tests
✅ OAuth2 implemented
Configuration needed (.env):
GOOGLE_CLIENT_ID=your-client-id
GOOGLE_CLIENT_SECRET=your-secret
GITHUB_CLIENT_ID=your-client-id
GITHUB_CLIENT_SECRET=your-secret
```
### Step 6: Add Two-Factor Authentication (2FA)
```bash
/cook [implement TOTP-based 2FA with QR code setup]
```
**Implementation**:
```
[1/5] 2FA setup...
✓ Installed speakeasy (TOTP library)
✓ Installed qrcode (QR generation)
✓ Add twoFactorSecret to User model
✓ Add twoFactorEnabled field
[2/5] 2FA endpoints...
✓ POST /api/auth/2fa/setup - Generate QR code
✓ POST /api/auth/2fa/verify - Verify and enable
✓ POST /api/auth/2fa/disable - Disable 2FA
✓ POST /api/auth/login/2fa - Login with 2FA code
[3/5] Modified login flow...
✓ Check if 2FA enabled
✓ Require 2FA code
✓ Validate TOTP token
[4/5] Backup codes...
✓ Generate 10 backup codes
✓ Store hashed backup codes
✓ Use backup code endpoint
[5/5] Testing...
✓ 2FA flow tests (14 tests)
✓ Backup code tests
✅ 2FA implemented
Flow:
1. User enables 2FA → scans QR code
2. User enters 6-digit code → 2FA activated
3. Future logins require 2FA code
4. Backup codes for emergencies
```
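The TOTP setup and verification steps usually reduce to a couple of helpers around `speakeasy` and `qrcode`. An illustrative sketch (function names and the issuer label are assumptions):
```javascript
// Illustrative TOTP helpers using speakeasy and qrcode
const speakeasy = require('speakeasy');
const QRCode = require('qrcode');

// Step 1: generate a secret and a QR code the user scans with an authenticator app.
async function generateTwoFactorSetup(userEmail) {
  const secret = speakeasy.generateSecret({ name: `TaskAPI (${userEmail})` });
  const qrCodeDataUrl = await QRCode.toDataURL(secret.otpauth_url);
  // Persist secret.base32 on the user (e.g. twoFactorSecret) before enabling 2FA.
  return { base32: secret.base32, qrCodeDataUrl };
}

// Step 2: verify the 6-digit code the user enters.
function verifyTwoFactorToken(storedBase32Secret, token) {
  return speakeasy.totp.verify({
    secret: storedBase32Secret,
    encoding: 'base32',
    token,
    window: 1, // allow one 30-second step of clock drift
  });
}

module.exports = { generateTwoFactorSetup, verifyTwoFactorToken };
```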
### Step 7: Add Passwordless Authentication
```bash
/cook [implement passwordless login with magic links]
```
**Implementation**:
```
[1/4] Magic link setup...
✓ Generate secure login tokens
✓ Store tokens with expiry (10 min)
✓ Email template for magic link
[2/4] Endpoints...
✓ POST /api/auth/magic-link/request
✓ GET /api/auth/magic-link/verify
✓ Token validation and cleanup
[3/4] Security...
✓ Single-use tokens
✓ Short expiry (10 minutes)
✓ Rate limiting
[4/4] Testing...
✓ Magic link flow tests (10 tests)
✅ Passwordless authentication added
Usage:
1. User enters email
2. Receives magic link email
3. Clicks link → automatically logged in
```
### Step 8: Add Security Features
#### Account Lockout
```bash
/cook [add account lockout after 5 failed login attempts]
```
#### Session Management
```bash
/cook [implement session management with active session tracking]
```
#### IP Whitelisting
```bash
/cook [add IP whitelisting option for high-security accounts]
```
### Step 9: Testing Authentication
```bash
/test
```
**Test results**:
```
✓ Unit Tests (48 tests)
  ✓ Password hashing (8 tests)
  ✓ JWT generation/verification (12 tests)
  ✓ Token refresh (6 tests)
  ✓ 2FA TOTP (10 tests)
  ✓ Magic link generation (6 tests)
  ✓ OAuth account linking (6 tests)
✓ Integration Tests (56 tests)
  ✓ Registration (10 tests)
  ✓ Login (12 tests)
  ✓ Password reset (12 tests)
  ✓ OAuth flows (14 tests)
  ✓ 2FA flows (8 tests)
✓ Security Tests (24 tests)
  ✓ Brute force protection (6 tests)
  ✓ Token theft scenarios (8 tests)
  ✓ SQL injection attempts (4 tests)
  ✓ XSS attempts (6 tests)
Test Suites: 3 passed, 3 total
Tests: 128 passed, 128 total
Coverage: 94.7%
✅ All tests passed
```
### Step 10: Security Review
```bash
/review
```
**Security checklist**:
```
✓ Passwords hashed with bcrypt (12 rounds)
✓ JWT tokens properly signed
✓ Refresh tokens securely stored
✓ Rate limiting on auth endpoints
✓ SQL injection protection
✓ XSS protection
✓ CSRF protection
✓ Secure password requirements
✓ Email verification implemented
✓ Account lockout after failed attempts
✓ 2FA option available
✓ Session timeout configured
✓ Secure token storage
✓ HTTPS enforcement
✓ Audit logging enabled
Security Score: 9.4/10 (Excellent)
```
### Step 11: Documentation
```bash
/docs:update
```
**Generated documentation**:
```
✓ Authentication guide
✓ API endpoint documentation
✓ Security best practices
✓ OAuth setup guide
✓ 2FA setup guide
✓ Testing guide
✓ Troubleshooting guide
```
## Complete Example: E-Commerce Authentication
### Requirements
```
Implement authentication for e-commerce site with:
- User registration/login
- Email verification
- Password reset
- Social login (Google, Facebook)
- Guest checkout
- Remember me option
- Account deletion
- GDPR compliance
```
### Implementation
```bash
# Plan authentication
/plan [design authentication system for e-commerce with all requirements]
# Implement base auth
/cook [implement JWT authentication with email verification]
# Add OAuth
/cook [add Google and Facebook OAuth login]
# Guest checkout
/cook [implement guest checkout with optional account creation]
# Remember me
/cook [add remember me functionality with long-lived tokens]
# Account management
/cook [implement account deletion with data export (GDPR)]
# Test everything
/test
# Document
/docs:update
```
### Time Comparison
**Manual implementation**: 6-10 hours
- JWT setup: 1-2 hours
- Email verification: 1 hour
- Password reset: 1 hour
- OAuth setup: 2-3 hours
- Testing: 1-2 hours
- Security review: 1-2 hours
**With ClaudeKit**: 38 minutes
- Planning: 5 minutes
- JWT + email: 10 minutes
- Password reset: 5 minutes
- OAuth: 12 minutes
- Testing: 6 minutes
**Time saved**: 5.5-9.5 hours (88% faster)
## Authentication Patterns
### Pattern 1: API-First Authentication
```bash
/cook [implement JWT authentication optimized for mobile apps]
```
### Pattern 2: SSO (Single Sign-On)
```bash
/cook [implement SSO with SAML for enterprise integration]
```
### Pattern 3: Multi-Tenant Authentication
```bash
/cook [implement multi-tenant authentication with organization isolation]
```
### Pattern 4: Role-Based Access Control (RBAC)
```bash
/cook [add role-based permissions with admin, user, and guest roles]
```
## Best Practices
### 1. Secure Password Storage
```
✅ Good:
- bcrypt with 12+ rounds
- Async password hashing
- Never store plain text
❌ Bad:
- MD5/SHA1 hashing
- No salt
- Plain text passwords
```
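A minimal sketch of the recommended approach, assuming the `bcrypt` npm package:
```javascript
const bcrypt = require('bcrypt');

const SALT_ROUNDS = 12;                        // 12+ rounds, as recommended above

// Hash asynchronously so the event loop is not blocked
async function hashPassword(plainText) {
  return bcrypt.hash(plainText, SALT_ROUNDS);  // bcrypt generates and embeds the salt
}

async function verifyPassword(plainText, hash) {
  return bcrypt.compare(plainText, hash);      // timing-safe comparison
}
```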
### 2. JWT Token Security
```
✅ Good:
- Short access token expiry (15 min)
- Refresh token rotation
- Secure token storage (httpOnly cookies)
- Token blacklisting on logout
❌ Bad:
- Long-lived tokens
- Tokens in localStorage
- No token refresh
- No logout mechanism
```
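A minimal sketch of the good path, assuming `jsonwebtoken` and Express-style cookies (secret names and expiry values are illustrative):
```javascript
const jwt = require('jsonwebtoken');

// Short-lived access token, longer-lived refresh token (secrets come from env)
function issueTokens(user) {
  const accessToken = jwt.sign({ sub: user.id }, process.env.JWT_SECRET, {
    expiresIn: '15m',                          // short access token expiry
  });
  const refreshToken = jwt.sign({ sub: user.id }, process.env.JWT_REFRESH_SECRET, {
    expiresIn: '7d',
  });
  return { accessToken, refreshToken };
}

// Keep the refresh token out of localStorage by using an httpOnly cookie
function setRefreshCookie(res, refreshToken) {
  res.cookie('refreshToken', refreshToken, {
    httpOnly: true,                            // not readable from JavaScript
    secure: true,                              // HTTPS only
    sameSite: 'strict',
    maxAge: 7 * 24 * 60 * 60 * 1000,
  });
}
```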
### 3. Input Validation
```bash
/cook [add comprehensive input validation to all auth endpoints]
```
### 4. Rate Limiting
```
Auth endpoints rate limits:
- Login: 5 attempts per 15 minutes
- Registration: 3 per hour
- Password reset: 3 per hour
- Email verification: 5 per hour
```
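One way to wire these limits, sketched with the `express-rate-limit` middleware (the `app` and `loginHandler` references are assumed to already exist in your project):
```javascript
const rateLimit = require('express-rate-limit');

// 5 login attempts per 15 minutes per IP, mirroring the limits listed above
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 5,
  standardHeaders: true,        // send RateLimit-* headers
  legacyHeaders: false,
  message: { error: 'Too many login attempts, please try again later.' },
});

app.post('/api/auth/login', loginLimiter, loginHandler);
```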
### 5. Audit Logging
```bash
/cook [add audit logging for all authentication events]
```
## Troubleshooting
### Issue: Tokens Expiring Too Fast
**Solution**:
```bash
/cook [increase JWT access token expiry to 30 minutes]
```
### Issue: OAuth Callback Failing
**Solution**:
```bash
/fix:fast [OAuth callback returning 400 error]
```
### Issue: Password Reset Not Working
**Solution**:
```bash
/fix:fast [password reset emails not being sent]
```
### Issue: 2FA QR Code Not Displaying
**Solution**:
```bash
/fix:ui [2FA QR code not rendering in mobile view]
```
## Security Checklist
Before production deployment:
```bash
β All passwords hashed with bcrypt
β JWT tokens use strong secret
β Refresh token rotation enabled
β Rate limiting configured
β HTTPS enforced
β CORS properly configured
β Input validation on all endpoints
β SQL injection protection
β XSS protection
β CSRF protection
β Email verification working
β Password reset tested
β OAuth redirect URIs whitelisted
β 2FA tested with multiple apps
β Account lockout working
β Audit logging enabled
β Error messages don't leak info
β Session timeout configured
β Secure cookies (httpOnly, secure)
β Token blacklist on logout
```
## Next Steps
### Related Use Cases
- [Building a REST API](/docs/workflows/building-api) - Create APIs
- [Integrating Payments](/docs/workflows/integrating-payment) - Add payments
- [Adding a New Feature](/docs/workflows/adding-feature) - Build features
### Related Commands
- [/cook](/docs/commands/core/cook) - Implement features
- [/test](/docs/commands/core/test) - Test suite
- [/fix:fast](/docs/commands/fix/fast) - Quick fixes
### Integration Guides
- [/integrate:polar](/docs/commands/integrate/polar) - Polar payments
- [Better Auth Skill](/docs/skills) - Better Auth framework
### Further Reading
- [JWT Best Practices](https://jwt.io/introduction)
- [OAuth 2.0 Specification](https://oauth.net/2/)
- [OWASP Authentication Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Authentication_Cheat_Sheet.html)
---
**Key Takeaway**: ClaudeKit enables rapid implementation of secure, production-ready authentication with industry best practices built-in - from basic JWT to OAuth2 and 2FA in under an hour.
---
# Integrating Payment Processing
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/integrating-payment
# Integrating Payment Processing
Learn how to integrate payment processing with ClaudeKit - from one-time payments to subscriptions, webhooks, and revenue optimization.
## Overview
**Goal**: Implement secure payment processing with provider integration
**Time**: 25-50 minutes (vs 5-10 hours manually)
**Agents Used**: planner, researcher, tester, code-reviewer
**Commands**: /integrate:polar, /integrate:sepay, /cook, /test
## Prerequisites
- Existing application with user accounts
- Payment provider account (Stripe, Polar, etc.)
- SSL certificate (required for payments)
- Business/tax information configured
## Payment Providers
| Provider | Best For | Features | Setup Time |
|----------|----------|----------|------------|
| Stripe | Global, feature-rich | Cards, wallets, subscriptions | 20-30 min |
| Polar | Creator economy | Subscriptions, products | 15-25 min |
| PayPal | Global recognition | Cards, PayPal balance | 20-30 min |
| SePay | Vietnam market | Bank transfer, e-wallets | 15-20 min |
| Square | In-person + online | POS integration | 25-35 min |
## Step-by-Step Workflow
### Step 1: Choose Payment Provider
Select provider based on your needs:
```bash
# For SaaS subscriptions
/plan [integrate Stripe for subscription billing]
# For creator platforms
/integrate:polar
# For Vietnamese market
/integrate:sepay
# For general e-commerce
/plan [integrate Stripe with PayPal fallback]
```
### Step 2: Integrate Stripe (Most Common)
```bash
/cook [integrate Stripe payment processing with one-time and subscription payments]
```
**Implementation**:
```
[1/8] Setting up Stripe...
β Installed Stripe SDK
β Created Stripe configuration
β Added API keys to environment
[2/8] Database changes...
β Created Payment model
β Created Subscription model
β Added stripeCustomerId to User
β Migrations generated
[3/8] Payment endpoints...
β POST /api/payments/create-intent
β POST /api/payments/confirm
β GET /api/payments/:id
β GET /api/payments/history
[4/8] Subscription endpoints...
β POST /api/subscriptions/create
β POST /api/subscriptions/cancel
β POST /api/subscriptions/update
β GET /api/subscriptions/current
[5/8] Webhook handling...
β POST /api/webhooks/stripe
β Signature verification
β Event processing
β Idempotency handling
[6/8] Frontend components...
β Payment form component
β Stripe Elements integration
β Subscription management UI
β Payment history display
[7/8] Testing...
β Payment flow tests (24 tests)
β Webhook tests (16 tests)
β Subscription tests (18 tests)
[8/8] Documentation...
β Payment integration guide
β Webhook setup instructions
β Testing guide
β
Stripe integration complete
Configuration needed (.env):
STRIPE_SECRET_KEY=sk_test_...
STRIPE_PUBLISHABLE_KEY=pk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
```
**Generated files**:
```
src/
├── services/
│   ├── stripe.service.js
│   ├── payment.service.js
│   └── subscription.service.js
├── routes/
│   ├── payment.routes.js
│   ├── subscription.routes.js
│   └── webhook.routes.js
├── models/
│   ├── Payment.js
│   └── Subscription.js
├── middleware/
│   ├── stripe-webhook.middleware.js
│   └── payment-auth.middleware.js
└── utils/
    └── stripe-helpers.js
frontend/
└── components/
    ├── PaymentForm.tsx
    ├── SubscriptionCard.tsx
    └── PaymentHistory.tsx
```
### Step 3: Implement One-Time Payments
```bash
# Already done in Step 2, but can add specific features
/cook [add invoice generation for one-time payments]
```
**Payment flow**:
```javascript
// Frontend (stripe and cardElement come from the Stripe.js / Elements setup)
const handlePayment = async (amount, currency) => {
  // 1. Create a payment intent on the server
  const { clientSecret } = await fetch('/api/payments/create-intent', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ amount, currency })
  }).then(r => r.json());

  // 2. Confirm with Stripe Elements
  const { error, paymentIntent } = await stripe.confirmCardPayment(
    clientSecret,
    { payment_method: { card: cardElement } }
  );

  // 3. Handle the result
  if (error) {
    showError(error.message);
  } else if (paymentIntent.status === 'succeeded') {
    showSuccess('Payment successful!');
  }
};
```
### Step 4: Implement Subscriptions
```bash
/cook [implement subscription tiers with monthly and annual billing]
```
**Implementation**:
```
[1/5] Subscription plans...
β Created plan configuration
β Synced with Stripe products
β Price IDs configured
Plans created:
- Starter: $9/month or $90/year
- Pro: $29/month or $290/year
- Enterprise: $99/month or $990/year
[2/5] Subscription flow...
β Plan selection UI
β Subscription creation
β Trial period support (14 days)
β Proration handling
[3/5] Billing management...
β Update payment method
β Change plan (upgrade/downgrade)
β Cancel subscription
β Reactivate subscription
[4/5] Usage tracking...
β Feature access control
β Usage limits enforcement
β Overage handling
[5/5] Testing...
β Subscription tests (22 tests)
β
Subscriptions implemented
```
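For reference, a subscription with a 14-day trial might be created roughly like this, assuming the official `stripe` SDK (the customer and price IDs come from your own plan configuration):
```javascript
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

// Create a subscription with a 14-day trial on the selected price
async function createSubscription(stripeCustomerId, priceId) {
  const subscription = await stripe.subscriptions.create({
    customer: stripeCustomerId,
    items: [{ price: priceId }],
    trial_period_days: 14,
    payment_behavior: 'default_incomplete',        // collect payment details before charging
    expand: ['latest_invoice.payment_intent'],
  });
  return subscription;
}
```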
### Step 5: Set Up Webhooks
Webhooks are critical for payment processing:
```bash
/cook [implement comprehensive Stripe webhook handling]
```
**Webhook events handled**:
```
β payment_intent.succeeded
- Mark payment as successful
- Grant access to product/service
- Send confirmation email
β payment_intent.payment_failed
- Notify user of failure
- Log for investigation
- Retry if temporary failure
β customer.subscription.created
- Activate subscription
- Grant feature access
- Send welcome email
β customer.subscription.updated
- Update subscription status
- Adjust feature access
- Handle plan changes
β customer.subscription.deleted
- Deactivate subscription
- Remove feature access
- Send cancellation email
β invoice.payment_succeeded
- Mark invoice as paid
- Extend subscription period
- Send receipt
β invoice.payment_failed
- Notify user
- Attempt retry
- Suspend after final failure
β checkout.session.completed
- Process successful checkout
- Create customer record
- Send confirmation
```
**Webhook security**:
```javascript
// Verify webhook signature (req.body must be the raw, unparsed request body)
const verifyWebhook = (req) => {
  const signature = req.headers['stripe-signature'];
  const event = stripe.webhooks.constructEvent(
    req.body,
    signature,
    process.env.STRIPE_WEBHOOK_SECRET
  );
  return event;
};
```
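Building on the snippet above, a webhook route could dispatch events roughly like this minimal sketch; the Express `app` and the handler functions are illustrative placeholders, and `express.raw` is needed so the raw body is available for signature verification:
```javascript
// Express route sketch: verify, dispatch by event type, then acknowledge
app.post('/api/webhooks/stripe', express.raw({ type: 'application/json' }), async (req, res) => {
  let event;
  try {
    event = verifyWebhook(req);                            // from the snippet above
  } catch (err) {
    return res.status(400).send(`Webhook signature verification failed: ${err.message}`);
  }

  switch (event.type) {
    case 'payment_intent.succeeded':
      await handlePaymentSucceeded(event.data.object);     // illustrative handler
      break;
    case 'customer.subscription.deleted':
      await handleSubscriptionCancelled(event.data.object); // illustrative handler
      break;
    default:
      // Unhandled event types are still acknowledged so Stripe stops retrying
      break;
  }
  res.json({ received: true });
});
```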
### Step 6: Add Payment Methods
```bash
/cook [add support for multiple payment methods - cards, Apple Pay, Google Pay]
```
**Implementation**:
```
[1/4] Payment methods...
β Credit/debit cards (Visa, MC, Amex)
β Apple Pay integration
β Google Pay integration
β Link (Stripe's payment method)
[2/4] Payment method management...
β Save cards for future use
β Multiple cards per customer
β Default payment method
β Remove payment methods
[3/4] Checkout optimization...
β Auto-detect payment method
β One-click checkout
β Remember me option
[4/4] Testing...
β Payment method tests (14 tests)
β
Multiple payment methods added
```
### Step 7: Implement Polar (for Creators)
```bash
/integrate:polar
```
**Implementation**:
```
[1/6] Polar setup...
β Installed Polar SDK
β Created Polar configuration
β Connected to Polar API
[2/6] Product management...
β Digital products
β Subscriptions
β One-time purchases
β Tiered pricing
[3/6] Checkout flow...
β Polar Checkout integration
β Embedded checkout option
β Custom success pages
[4/6] Webhook handling...
β order.created event
β subscription.created event
β subscription.updated event
β refund.created event
[5/6] Customer portal...
β Subscription management
β Invoice history
β Update payment method
[6/6] Testing...
β Polar integration tests (18 tests)
β
Polar integration complete
Configuration (.env):
POLAR_ACCESS_TOKEN=polar_...
POLAR_WEBHOOK_SECRET=whsec_...
```
### Step 8: Add Vietnamese Payment (SePay)
```bash
/integrate:sepay
```
**Implementation**:
```
[1/5] SePay integration...
β Bank transfer payments
β E-wallet support (Momo, ZaloPay)
β QR code generation
β Transaction verification
[2/5] Payment flow...
β Create payment request
β Display QR code
β Poll for payment status
β Auto-confirm on payment
[3/5] Webhook handling...
β Payment confirmed webhook
β Transaction updates
β Refund notifications
[4/5] Vietnamese localization...
β VND currency support
β Vietnamese language UI
β Local payment methods
[5/5] Testing...
β SePay tests (12 tests)
β
SePay integration complete
```
### Step 9: Add Revenue Optimization
#### Coupon/Discount System
```bash
/cook [implement coupon and discount code system]
```
#### Abandoned Cart Recovery
```bash
/cook [add abandoned checkout email automation]
```
#### Upsell/Cross-sell
```bash
/cook [implement checkout upsells and product recommendations]
```
#### Tax Calculation
```bash
/cook [add automatic tax calculation with TaxJar integration]
```
### Step 10: Analytics and Reporting
```bash
/cook [implement payment analytics dashboard]
```
**Analytics features**:
```
β Revenue metrics (MRR, ARR)
β Subscription metrics (churn, LTV)
β Payment success rates
β Failed payment analysis
β Revenue forecasting
β Customer lifetime value
β Cohort analysis
β Geographic revenue breakdown
```
### Step 11: Testing Payments
```bash
/test
```
**Test coverage**:
```
β Unit Tests (46 tests)
β Payment intent creation (8 tests)
β Subscription logic (12 tests)
β Webhook processing (16 tests)
β Coupon validation (10 tests)
β Integration Tests (38 tests)
β End-to-end payment flows (14 tests)
β Subscription lifecycle (12 tests)
β Refund processing (6 tests)
β Multiple payment methods (6 tests)
β Security Tests (12 tests)
β Webhook signature verification (4 tests)
β Payment authorization (4 tests)
β Idempotency (4 tests)
Tests: 96 passed, 96 total
Coverage: 92.4%
β
All payment tests passed
```
**Manual testing with test cards**:
```bash
# Stripe test cards
4242 4242 4242 4242 # Success
4000 0000 0000 9995 # Decline
4000 0000 0000 3220 # 3D Secure
# Test in Stripe Dashboard
# Monitor webhook delivery
# Verify payment flow
```
### Step 12: Deploy to Production
```bash
# Switch to production keys
# Update .env
STRIPE_SECRET_KEY=sk_live_...
STRIPE_PUBLISHABLE_KEY=pk_live_...
# Deploy
/cook [deploy payment integration to production with security review]
```
## Complete Example: SaaS Subscription Platform
### Requirements
```
Implement payment system for SaaS platform:
- Three subscription tiers
- Monthly and annual billing
- 14-day free trial
- Usage-based add-ons
- Team billing
- Invoice generation
- Automatic tax calculation
- Multiple payment methods
- Dunning management (failed payments)
```
### Implementation
```bash
# Plan implementation
/plan [design payment system for SaaS with all requirements]
# Integrate Stripe (use /code since plan exists)
/code @plans/payment-system.md
# Subscription tiers
/cook [create three subscription tiers with feature gates]
# Free trial
/cook [implement 14-day free trial without requiring payment method]
# Usage billing
/cook [add usage-based billing for API calls]
# Team billing
/cook [implement team billing with seat management]
# Invoicing
/cook [add automatic invoice generation and email delivery]
# Tax calculation
/cook [integrate TaxJar for automatic tax calculation]
# Payment methods
/cook [add card, Apple Pay, Google Pay, and ACH support]
# Dunning
/cook [implement smart retry logic for failed payments]
# Test everything
/test
# Deploy
/cook [deploy to production with monitoring]
```
### Time Comparison
**Manual implementation**: 8-14 hours
- Stripe integration: 2-3 hours
- Subscription logic: 2-3 hours
- Webhooks: 2-3 hours
- UI components: 1-2 hours
- Testing: 1-2 hours
- Debugging: 1-2 hours
**With ClaudeKit**: 48 minutes
- Planning: 6 minutes
- Stripe setup: 15 minutes
- Subscriptions: 12 minutes
- Webhooks: 8 minutes
- Testing: 7 minutes
**Time saved**: 7-13 hours (88% faster)
## Payment Patterns
### Pattern 1: Freemium Model
```bash
/cook [implement freemium model with upgrade prompts]
```
### Pattern 2: Pay-What-You-Want
```bash
/cook [add pay-what-you-want pricing with suggested amounts]
```
### Pattern 3: Tiered Pricing
```bash
/cook [implement dynamic tiered pricing based on usage]
```
### Pattern 4: Marketplace Payments
```bash
/cook [implement marketplace payments with split payouts using Stripe Connect]
```
## Best Practices
### 1. Idempotency
```javascript
// Ensure operations are idempotent
const createPayment = async (idempotencyKey, data) => {
  return stripe.paymentIntents.create(data, {
    idempotencyKey
  });
};
```
### 2. Webhook Retry Logic
```javascript
// Handle webhook retries gracefully
const processWebhook = async (event) => {
  // Check if already processed
  const existing = await Webhook.findOne({ eventId: event.id });
  if (existing) {
    return { status: 'already_processed' };
  }
  // Process and save
  await processEvent(event);
  await Webhook.create({ eventId: event.id });
};
```
### 3. Failed Payment Handling
```bash
/cook [implement dunning management with smart retry and email notifications]
```
### 4. PCI Compliance
```
β Never store card details
β Use Stripe Elements (PCI-compliant)
β Tokenize payment info
β HTTPS everywhere
β Regular security audits
```
### 5. Transaction Monitoring
```bash
/cook [add fraud detection and transaction monitoring]
```
## Troubleshooting
### Issue: Webhook Not Receiving Events
**Solution**:
```bash
# Test webhook locally with Stripe CLI
stripe listen --forward-to localhost:3000/api/webhooks/stripe
# Or fix with ClaudeKit
/fix:fast [Stripe webhooks not being received]
```
### Issue: Payment Failing
**Solution**:
```bash
/fix:logs [analyze payment failure logs and fix issues]
```
### Issue: Double Charging
**Solution**:
```bash
/fix:fast [prevent double charging with idempotency keys]
```
### Issue: Tax Calculation Wrong
**Solution**:
```bash
/fix:fast [tax calculation incorrect for Canadian customers]
```
## Security Checklist
Before production:
```bash
β Using production API keys
β Webhook signatures verified
β HTTPS enforced
β No card data stored
β Stripe Elements used (PCI compliant)
β Idempotency keys implemented
β Rate limiting on payment endpoints
β Transaction logging enabled
β Fraud detection configured
β 3D Secure enabled for EU
β Refund policy documented
β Terms of service updated
β Privacy policy includes payment info
β GDPR compliance for EU customers
β Error messages don't leak info
β Audit logging enabled
β Failed payment retry logic
β Customer dispute handling process
```
## Next Steps
### Related Use Cases
- [Implementing Authentication](/docs/workflows/implementing-auth) - User accounts
- [Building a REST API](/docs/workflows/building-api) - API development
- [Adding a New Feature](/docs/workflows/adding-feature) - Feature development
### Related Commands
- [/integrate:polar](/docs/commands/integrate/polar) - Polar integration
- [/integrate:sepay](/docs/commands/integrate/sepay) - SePay integration
- [/cook](/docs/commands/core/cook) - Custom features
- [/test](/docs/commands/core/test) - Test suite
### Further Reading
- [Stripe Documentation](https://stripe.com/docs)
- [Polar Documentation](https://docs.polar.sh)
- [PCI DSS Compliance](https://www.pcisecuritystandards.org/)
- [SCA Regulations](https://stripe.com/guides/strong-customer-authentication)
---
**Key Takeaway**: ClaudeKit enables rapid payment integration with built-in security, webhook handling, and best practices - from simple one-time payments to complex subscription systems in under an hour.
---
# Optimizing Performance
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/optimizing-performance
# Optimizing Performance
Learn how to identify and fix performance bottlenecks with ClaudeKit - from profiling and analysis to implementation of caching, database optimization, and code improvements.
## Overview
**Goal**: Identify and resolve performance bottlenecks systematically
**Time**: 30-60 minutes (vs 4-12 hours manually)
**Agents Used**: debugger, code-reviewer, tester
**Commands**: /debug, /cook, /test, /fix:hard
## Prerequisites
- Application with performance issues
- Monitoring/profiling tools installed
- Performance baseline metrics
- Test data representative of production
## Performance Targets
| Metric | Good | Acceptable | Poor |
|--------|------|------------|------|
| API Response Time | <200ms | 200-500ms | >500ms |
| Page Load Time | <2s | 2-4s | >4s |
| Database Query | <50ms | 50-200ms | >200ms |
| Memory Usage | <512MB | 512MB-2GB | >2GB |
| CPU Usage | <30% | 30-70% | >70% |
## Step-by-Step Workflow
### Step 1: Identify Performance Issues
Start by profiling your application:
```bash
# Run performance analysis
/debug [analyze application performance and identify bottlenecks]
```
**Analysis output**:
```
Performance Analysis Report
🔴 Critical Issues (3)
1. Database Query Performance
Location: src/users/service.js:23
Issue: N+1 query problem
Impact: 2,847ms per request
Queries: 156 individual queries
Recommendation: Use eager loading
2. Unoptimized Images
Location: public/uploads/
Issue: Large image sizes
Impact: 3.2s additional load time
Total: 12.4MB transferred
Recommendation: Image optimization + CDN
3. Missing Cache Layer
Location: src/products/service.js
Issue: Repeated expensive operations
Impact: 95% cache miss rate
Recommendation: Implement Redis caching
🟡 Warning Issues (5)
4. Inefficient Algorithm
Location: src/search/index.js:45
Issue: O(n²) complexity
Impact: 523ms for 1000 items
Recommendation: Use hash map (O(n))
5. Blocking Operations
Location: src/emails/send.js:12
Issue: Synchronous email sending
Impact: 890ms blocking
Recommendation: Use queue (Bull/Redis)
6. Large Bundle Size
Location: frontend/dist/
Issue: 4.2MB JavaScript bundle
Impact: 6s parse time on mobile
Recommendation: Code splitting
7. No Compression
Issue: Missing gzip/brotli
Impact: 2.8x larger transfers
Recommendation: Enable compression
8. Memory Leaks
Location: src/websocket/handler.js
Issue: Event listeners not removed
Impact: 125MB/hour growth
Recommendation: Cleanup on disconnect
Performance Score: 34/100 (Poor)
Load Time: 8.2s (Target: <2s)
Time to Interactive: 12.4s (Target: <3.5s)
```
### Step 2: Fix Database Performance
#### N+1 Query Problem
```bash
/cook [fix N+1 query problem in user service with eager loading]
```
**Implementation**:
```
[1/4] Analyzing queries...
β Identified 156 individual queries
β Should be 1 query with joins
Before:
const users = await User.findAll();
for (const user of users) {
user.posts = await Post.findAll({ userId: user.id }); // N queries
}
After:
const users = await User.findAll({
include: [{ model: Post }] // 1 query with JOIN
});
[2/4] Implementing fix...
β Added eager loading
β Optimized includes
β Added select to limit fields
[3/4] Testing...
β Query count: 156 β 1
β Response time: 2,847ms β 87ms
β 97% improvement
[4/4] Documentation...
β Added performance notes
β
N+1 problem fixed
```
#### Add Database Indexes
```bash
/cook [add database indexes for frequently queried fields]
```
**Implementation**:
```
[1/3] Analyzing query patterns...
β Identified slow queries
β Found missing indexes
Indexes to add:
- users.email (unique)
- posts.userId (foreign key)
- posts.status (filter)
- posts.createdAt (sorting)
- products.categoryId (foreign key)
- products.price (range queries)
[2/3] Creating indexes...
β Generated migration
β Composite indexes for complex queries
β Partial indexes for filtered queries
[3/3] Testing...
β Query performance improved:
- User lookup: 234ms β 3ms
- Post filtering: 567ms β 12ms
- Product search: 892ms β 34ms
β
Indexes added
```
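A migration adding such indexes might look roughly like this, assuming Sequelize (table and column names are illustrative):
```javascript
// Sequelize-style migration for the indexes listed above
module.exports = {
  async up(queryInterface) {
    await queryInterface.addIndex('users', ['email'], { unique: true });
    await queryInterface.addIndex('posts', ['userId']);
    await queryInterface.addIndex('posts', ['status', 'createdAt']);   // composite: filter + sort
    await queryInterface.addIndex('products', ['categoryId']);
    await queryInterface.addIndex('products', ['price']);              // range queries
  },
  async down(queryInterface) {
    await queryInterface.removeIndex('users', ['email']);
    await queryInterface.removeIndex('posts', ['userId']);
    await queryInterface.removeIndex('posts', ['status', 'createdAt']);
    await queryInterface.removeIndex('products', ['categoryId']);
    await queryInterface.removeIndex('products', ['price']);
  },
};
```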
#### Query Optimization
```bash
/cook [optimize database queries to reduce execution time]
```
### Step 3: Implement Caching
#### Redis Cache
```bash
/cook [implement Redis caching for frequently accessed data]
```
**Implementation**:
```
[1/5] Setting up Redis...
β Installed Redis client
β Configuration added
β Connection pool setup
[2/5] Cache strategy...
β Cache-aside pattern
β Write-through for updates
β TTL configuration
[3/5] Implementing cache...
β User data (TTL: 1 hour)
β Product catalog (TTL: 15 min)
β Search results (TTL: 5 min)
β API responses (TTL: 1 min)
[4/5] Cache invalidation...
β On update operations
β On delete operations
β Manual invalidation API
[5/5] Testing...
β Cache hit rate: 0% β 87%
β Response time: 456ms β 23ms
β Database load: -76%
β
Redis caching implemented
Performance improvement:
- Average response: 95% faster
- Database queries: 76% reduction
- Server load: 64% reduction
```
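The cache-aside pattern described above can be sketched roughly as follows, assuming the `redis` v4 client and an illustrative `Product` model:
```javascript
const { createClient } = require('redis');

const redis = createClient({ url: process.env.REDIS_URL });
redis.connect().catch(console.error);                    // connect once at startup

// Cache-aside: try the cache, fall back to the database, then populate the cache
async function getProduct(productId) {
  const cacheKey = `product:${productId}`;
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);                 // cache hit

  const product = await Product.findByPk(productId);     // cache miss: load from DB
  if (product) {
    await redis.set(cacheKey, JSON.stringify(product), { EX: 15 * 60 }); // 15-minute TTL
  }
  return product;
}

// Invalidate on writes so stale data is not served
async function updateProduct(productId, changes) {
  const updated = await Product.update(changes, { where: { id: productId } });
  await redis.del(`product:${productId}`);
  return updated;
}
```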
#### In-Memory Cache
```bash
/cook [add in-memory LRU cache for hot data]
```
#### CDN Integration
```bash
/cook [integrate Cloudflare CDN for static assets]
```
### Step 4: Optimize Frontend
#### Code Splitting
```bash
/cook [implement code splitting and lazy loading]
```
**Implementation**:
```
[1/4] Analyzing bundle...
β Current size: 4.2MB
β Identified heavy modules
β Found unused dependencies
Heavy modules:
- moment.js: 287KB (use date-fns instead)
- lodash: 531KB (use individual imports)
- chart.js: 456KB (lazy load)
[2/4] Code splitting...
β Route-based splitting
β Component lazy loading
β Vendor chunk optimization
[3/4] Tree shaking...
β Removed unused code
β Optimized imports
β Replaced heavy libraries
[4/4] Results...
β Bundle size: 4.2MB β 687KB (84% reduction)
β Initial load: 6s β 1.2s
β Time to interactive: 12.4s β 2.8s
β
Frontend optimized
```
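Route-based splitting can be sketched roughly as follows, assuming React and `react-router-dom` (page names are illustrative):
```javascript
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

// Each page becomes its own chunk, loaded on first navigation
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Reports = lazy(() => import('./pages/Reports')); // heavy chart code stays out of the main bundle

export default function App() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <Routes>
        <Route path="/" element={<Dashboard />} />
        <Route path="/reports" element={<Reports />} />
      </Routes>
    </Suspense>
  );
}
```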
#### Image Optimization
```bash
/cook [optimize images with compression and lazy loading]
```
**Implementation**:
```
[1/4] Image analysis...
β Total images: 342
β Total size: 12.4MB
β Average size: 36KB
[2/4] Optimization...
β Convert to WebP format
β Compress with quality 85
β Generate responsive sizes
β Add lazy loading
[3/4] Implementation...
β Picture element with fallbacks
β Intersection Observer for lazy load
β Placeholder images
[4/4] Results...
β Image size: 12.4MB β 2.1MB (83% reduction)
β Load time: 3.2s β 0.6s
β Bandwidth: -10.3MB per page
β
Images optimized
```
#### Bundle Compression
```bash
/cook [enable gzip and brotli compression]
```
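With Express, gzip can be enabled in one line via the `compression` middleware; Brotli is typically handled by pre-compressing static assets at build time or at the CDN. A minimal sketch:
```javascript
const express = require('express');
const compression = require('compression');

const app = express();

// Gzip responses above 1KB; static assets can additionally be pre-compressed to Brotli at build time
app.use(compression({ threshold: 1024 }));
```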
### Step 5: Optimize Algorithms
#### Replace Inefficient Algorithm
```bash
/cook [replace O(n²) algorithm with O(n) hash map solution]
```
**Before** (O(n²) - 523ms):
```javascript
function findDuplicates(items) {
const duplicates = [];
for (let i = 0; i < items.length; i++) {
for (let j = i + 1; j < items.length; j++) {
if (items[i] === items[j]) {
duplicates.push(items[i]);
}
}
}
return duplicates;
}
```
**After** (O(n) - 4ms):
```javascript
function findDuplicates(items) {
const seen = new Set();
const duplicates = new Set();
for (const item of items) {
if (seen.has(item)) {
duplicates.add(item);
} else {
seen.add(item);
}
}
return Array.from(duplicates);
}
```
**Result**: 99.2% faster (523ms → 4ms)
### Step 6: Async Operations
#### Background Jobs
```bash
/cook [move email sending to background queue with Bull]
```
**Implementation**:
```
[1/4] Setting up Bull queue...
β Redis queue configured
β Worker processes setup
β Job processing logic
[2/4] Moving operations to queue...
β Email sending (was 890ms blocking)
β Report generation (was 2.3s blocking)
β Image processing (was 1.2s blocking)
[3/4] Implementing retry logic...
β Automatic retry on failure
β Exponential backoff
β Dead letter queue
[4/4] Results...
β API response: 890ms β 45ms
β Non-blocking operations
β Better error handling
β
Background jobs implemented
```
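A minimal sketch of the queue-based approach, assuming the `bull` package and an illustrative `mailer`:
```javascript
const Queue = require('bull');

// Queue backed by the same Redis instance used for caching
const emailQueue = new Queue('email', process.env.REDIS_URL);

// Producer: the API request returns immediately after enqueueing
async function sendWelcomeEmail(user) {
  await emailQueue.add(
    { to: user.email, template: 'welcome' },
    { attempts: 3, backoff: { type: 'exponential', delay: 5000 } }  // retry with backoff
  );
}

// Worker: does the slow SMTP work off the request path
emailQueue.process(async (job) => {
  await mailer.send(job.data);            // illustrative mailer
});
```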
#### Parallel Processing
```bash
/cook [process multiple operations in parallel instead of sequential]
```
### Step 7: Database Connection Pool
```bash
/cook [optimize database connection pooling]
```
**Configuration**:
```javascript
// Before: default settings
pool: {
  max: 5,
  min: 0,
  idle: 10000
}

// After: optimized
pool: {
  max: 20,        // more connections
  min: 5,         // keep minimum ready
  idle: 30000,    // longer idle time
  acquire: 60000, // longer acquire timeout
  evict: 1000     // cleanup interval
}

// Result: 45% faster during peak load
```
### Step 8: API Rate Limiting & Throttling
```bash
/cook [implement intelligent rate limiting and request throttling]
```
### Step 9: Memory Optimization
#### Fix Memory Leaks
```bash
/fix:hard [fix memory leak in WebSocket handler]
```
**Implementation**:
```
[1/4] Identifying leak...
β Memory growing 125MB/hour
β Event listeners not cleaned up
β Socket references retained
[2/4] Implementing fixes...
β Remove event listeners on disconnect
β Clear socket references
β Implement cleanup function
[3/4] Memory management...
β WeakMap for temporary data
β Clear timers on disconnect
β Garbage collection hints
[4/4] Testing...
β 24-hour test: stable memory
β 1000 connections: no growth
β Stress test: passed
β
Memory leak fixed
```
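The cleanup approach can be sketched roughly as follows, assuming the `ws` library (handler names are illustrative):
```javascript
// Per-connection state lives in the handler scope; every listener and timer is removed on disconnect
function handleConnection(socket) {
  const timers = [];
  const onMessage = (msg) => processMessage(socket, msg);   // illustrative handler

  socket.on('message', onMessage);
  timers.push(setInterval(() => socket.ping(), 30000));     // heartbeat

  socket.on('close', () => {
    socket.off('message', onMessage);   // remove listeners so the socket can be garbage-collected
    timers.forEach(clearInterval);      // clear timers that would otherwise keep references alive
  });
}
```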
#### Reduce Memory Usage
```bash
/cook [optimize memory usage by using streams for large data]
```
### Step 10: Monitoring & Profiling
```bash
/cook [implement performance monitoring with metrics and alerts]
```
**Monitoring setup**:
```
β Response time tracking
β Database query monitoring
β Memory usage alerts
β CPU usage tracking
β Error rate monitoring
β Cache hit rate metrics
β Custom business metrics
β Real-user monitoring (RUM)
Alerts configured:
- Response time >500ms
- Error rate >1%
- Memory usage >80%
- CPU usage >75%
- Cache hit rate <70%
```
### Step 11: Load Testing
```bash
/test
```
**Performance test results**:
```
Load Test Report (1000 concurrent users)
Before optimization:
- Avg response time: 2,847ms
- 95th percentile: 5,234ms
- Requests/sec: 23
- Error rate: 12.4%
- Failed requests: 124/1000
After optimization:
- Avg response time: 87ms (97% faster)
- 95th percentile: 156ms (97% faster)
- Requests/sec: 892 (38x more)
- Error rate: 0.1%
- Failed requests: 1/1000
Database:
- Query time: 234ms β 8ms (97% faster)
- Queries per request: 156 β 1
- Connection pool usage: 95% β 34%
Memory:
- Usage: 2.1GB β 487MB (77% reduction)
- Leak rate: 125MB/hour β 0MB/hour
- GC pauses: 89/hour β 12/hour
Frontend:
- Bundle size: 4.2MB β 687KB (84% smaller)
- Load time: 8.2s β 1.2s (85% faster)
- Time to interactive: 12.4s β 2.8s (77% faster)
Overall Performance Score: 34/100 β 94/100
β
All performance targets met
```
## Complete Example: Slow E-Commerce API
### Initial Issues
```
Performance problems:
- Product listing: 4.2s response time
- Search: 6.8s with 1000 products
- Cart update: 1.8s
- Checkout: 3.4s
- Homepage: 9.2s load time
- High database load: 89% CPU
```
### Optimization Steps
```bash
# 1. Profile application
/debug [analyze e-commerce API performance]
# 2. Database optimization
/cook [fix N+1 queries and add indexes]
/cook [optimize product search queries]
# 3. Caching
/cook [implement Redis caching for products and categories]
/cook [add query result caching]
# 4. Frontend optimization
/cook [implement code splitting and lazy loading]
/cook [optimize product images with WebP and lazy loading]
# 5. API optimization
/cook [move image processing to background queue]
/cook [implement response compression]
# 6. Algorithm optimization
/cook [optimize search algorithm with inverted index]
# 7. Test improvements
/test
# 8. Monitor in production
/cook [set up performance monitoring with alerts]
```
### Results
```
After optimization (1 hour work):
Product listing: 4.2s β 124ms (97% faster)
Search: 6.8s β 89ms (99% faster)
Cart update: 1.8s β 34ms (98% faster)
Checkout: 3.4s β 187ms (95% faster)
Homepage: 9.2s β 1.4s (85% faster)
Database CPU: 89% β 23%
Customer impact:
- 94% faster page loads
- 10x more concurrent users
- 87% lower server costs
- 45% increase in conversions
```
### Time Comparison
**Manual optimization**: 8-16 hours
- Profiling: 1-2 hours
- Database optimization: 2-3 hours
- Caching: 2-3 hours
- Frontend: 2-4 hours
- Testing: 1-2 hours
- Debugging: 1-2 hours
**With ClaudeKit**: 58 minutes
- Profiling: 8 minutes
- Database: 15 minutes
- Caching: 12 minutes
- Frontend: 18 minutes
- Testing: 5 minutes
**Time saved**: 7-15 hours (88% faster)
## Performance Optimization Patterns
### Pattern 1: Progressive Enhancement
```bash
/cook [implement progressive enhancement for slow connections]
```
### Pattern 2: Predictive Prefetching
```bash
/cook [add predictive prefetching for likely user actions]
```
### Pattern 3: Service Worker Caching
```bash
/cook [implement service worker for offline-first experience]
```
### Pattern 4: Database Read Replicas
```bash
/cook [set up database read replicas for scaling reads]
```
## Best Practices
### 1. Measure First
Always profile before optimizing:
```bash
✅ Profile → Identify → Fix → Measure
❌ Guess → Optimize → Hope
```
### 2. Focus on Biggest Impact
Optimize high-impact issues first:
```
Priority order:
1. Critical path operations
2. High-frequency operations
3. User-facing operations
4. Background operations
```
### 3. Cache Aggressively
But invalidate correctly:
```
Cache layers:
1. Browser cache (static assets)
2. CDN cache (global content)
3. Application cache (Redis)
4. Database query cache
5. Result memoization
```
### 4. Use Appropriate Data Structures
```
✅ Hash map for lookups: O(1)
✅ Set for uniqueness: O(1)
✅ Binary search: O(log n)
❌ Array loops: O(n)
❌ Nested loops: O(n²)
```
### 5. Monitor Continuously
```bash
/cook [implement continuous performance monitoring]
```
## Troubleshooting
### Issue: Still Slow After Optimization
**Solution**:
```bash
# Re-profile
/debug [deep performance analysis with detailed metrics]
# Check for new bottlenecks
# Optimize further
```
### Issue: Cache Not Hitting
**Solution**:
```bash
/fix:fast [Redis cache hit rate below 50%]
```
### Issue: Memory Still Growing
**Solution**:
```bash
/fix:hard [memory still growing despite fixes]
```
### Issue: Database Timeout
**Solution**:
```bash
/cook [increase connection pool and optimize slow queries]
```
## Performance Checklist
```bash
Backend:
β Database queries optimized
β Indexes on frequently queried fields
β N+1 queries eliminated
β Caching implemented (Redis)
β Connection pooling optimized
β Background jobs for slow operations
β API response compression
β Rate limiting configured
Frontend:
β Code splitting implemented
β Lazy loading for routes
β Images optimized (WebP, lazy load)
β Bundle size minimized
β Tree shaking enabled
β CDN for static assets
β Service worker caching
β Critical CSS inlined
Infrastructure:
β Load balancing configured
β Auto-scaling enabled
β CDN integration
β Database read replicas
β Monitoring and alerts
β Performance budgets set
β Regular load testing
Metrics:
β Response time <200ms
β Page load <2s
β Time to interactive <3.5s
β Cache hit rate >80%
β Error rate <0.1%
```
## Next Steps
### Related Use Cases
- [Fixing Bugs](/docs/workflows/fixing-bugs) - Debug issues
- [Refactoring Code](/docs/workflows/refactoring-code) - Code quality
- [Building a REST API](/docs/workflows/building-api) - API development
### Related Commands
- [/debug](/docs/commands/core/debug) - Performance profiling
- [/cook](/docs/commands/core/cook) - Implement optimizations
- [/fix:hard](/docs/commands/fix/hard) - Complex fixes
- [/test](/docs/commands/core/test) - Performance testing
### Further Reading
- [Web.dev Performance](https://web.dev/performance/)
- [Database Indexing](https://use-the-index-luke.com/)
- [Redis Caching Patterns](https://redis.io/docs/manual/patterns/)
---
**Key Takeaway**: ClaudeKit enables systematic performance optimization with profiling, analysis, and implementation of best practices - turning slow applications into fast ones in under an hour with measurable improvements.
---
# Understanding Codebases with GitLab Knowledge Graph
Section: workflows
Category: workflows
URL: https://docs.claudekit.cc/docs/workflows/understanding-codebases-with-gkg
# GitLab Knowledge Graph MCP
GitLab Knowledge Graph is a powerful code analysis tool that creates a queryable map of your repositories. It enables deep codebase exploration, impact analysis, and AI-driven code understanding through the Model Context Protocol (MCP).
## What is GitLab Knowledge Graph?
GitLab Knowledge Graph (gkg) transforms your code into structured knowledge by:
1. **Scanning Code Structure**: Identifies files, directories, classes, functions, methods, and modules
2. **Extracting Relationships**: Discovers function calls, inheritance hierarchies, and dependencies
3. **Building Graph Database**: Stores everything in Kuzu (high-performance graph database) for fast queries
4. **Enabling AI Integration**: Provides MCP tools for AI agents to understand codebase deeply
**Key Benefits:**
- RAG (Retrieval-Augmented Generation) for code context
- Architecture visualization and dependency analysis
- Impact analysis before refactoring
- Intelligent code navigation and exploration
## Installation
### Install GitLab Knowledge Graph CLI
```bash
# One-line installation (Linux/macOS)
curl -fsSL https://gitlab.com/gitlab-org/rust/knowledge-graph/-/raw/main/install.sh | bash
# Add to PATH (if not already done)
export PATH="$HOME/.local/bin:$PATH"
# Verify installation
gkg --version
```
### Start the Server
```bash
# Start the knowledge graph server (runs on http://localhost:27495)
gkg server start
# In another terminal, index your project
gkg index /path/to/project
```
## How to Use GKG MCP
### 1. Index Your Project
Before using GKG, create the knowledge graph for your project:
```bash
# Index current directory
gkg index
# Or index a specific project
gkg index /path/to/your/project
```
This command:
- Discovers all Git repositories
- Parses code structure
- Extracts relationships
- Stores in local graph database
**Example output:**
```
β
Workspace indexing completed in 12.34 seconds
Workspace Statistics:
- Projects indexed: 3
- Files processed: 1,247
- Code entities extracted: 5,832
- Relationships found: 12,156
```
### 2. Use GKG MCP Tools with ClaudeKit
Once indexed, use GKG MCP tools in your Claude Code sessions:
#### List All Projects
```bash
# See all indexed projects in knowledge graph
gkg_list_projects
```
Returns absolute paths to all indexed projects.
#### Search for Code Definitions
```bash
# Find function/class definitions
gkg_search_codebase_definitions \
--project_absolute_path /path/to/project \
--search_terms "MyFunction" "MyClass" "handleSubmit"
```
**Useful for:**
- Finding class definitions before refactoring
- Locating API endpoints
- Discovering utility functions
- Understanding code structure
**Returns:**
- Definition name and type (Function, Class, Method, etc.)
- File location and line numbers
- Fully qualified names
- Code context snippets
#### Get All References to a Symbol
```bash
# Find everywhere a function/class is used
gkg_get_references \
--absolute_file_path /path/to/project/src/utils.ts \
--definition_name calculateTotal
```
**Perfect for:**
- Impact analysis before changes
- Finding all callers of a function
- Dependency mapping
- Safe refactoring
**Returns:**
- All definitions that reference the symbol
- Reference type (CALLS, PropertyReference, etc.)
- File locations and line numbers
- Code context around each reference
#### Jump to Definition
```bash
# Go directly to function/method definition
gkg_get_definition \
--absolute_file_path /path/to/file.ts \
--line "const result = calculateTotal(items);" \
--symbol_name calculateTotal
```
**Perfect for:**
- Understanding what a function does
- Verifying implementations
- Quick code exploration
**Returns:**
- Definition location (file path and line)
- Full code snippet
- Type of definition
#### Read Definition Bodies
```bash
# Get complete code of multiple definitions
gkg_read_definitions \
--definitions "[
{
\"names\": [\"handleSubmit\", \"validateForm\"],
\"file_path\": \"src/components/Form.tsx\"
},
{
\"names\": [\"calculateTotal\"],
\"file_path\": \"src/utils/math.ts\"
}
]"
```
**Optimized for:**
- Reading multiple related functions
- Understanding function implementations
- Analyzing code patterns
**Returns:**
- Full definition bodies
- Qualified names
- Line ranges
#### Generate Repository Map
```bash
# Get compact overview of code structure
gkg_repo_map \
--project_absolute_path /path/to/project \
--relative_paths ["src/components", "src/utils/helpers.ts"] \
--depth 2
```
**Useful for:**
- Understanding folder structure
- API-style mapping of codebase segments
- Token-efficient codebase overview
- Finding related files
**Returns:**
- Directory tree structure
- File listings with definitions
- Line ranges and context
- Token-optimized format
#### Rebuild Project Index
```bash
# Update knowledge graph after code changes
gkg_index_project \
--project_absolute_path /path/to/project
```
Use after:
- Major file changes
- Adding/deleting files
- Architecture refactoring
- Long development sessions
## Common Workflows
### Refactoring with Impact Analysis
```bash
# 1. Find all references to function before changing
gkg_get_references \
--absolute_file_path /path/to/project/src/api.ts \
--definition_name fetchUserData
# 2. Review all call sites and understand impact
# 3. Make the change safely knowing full scope
```
### Onboarding: Understanding New Codebase
```bash
# 1. Get structure overview
gkg_repo_map \
--project_absolute_path /path/to/project \
--relative_paths ["src"] \
--depth 2
# 2. Search for key components
gkg_search_codebase_definitions \
--project_absolute_path /path/to/project \
--search_terms "App" "Router" "Controller" "Service"
# 3. Explore connections between components
gkg_get_references \
--absolute_file_path src/components/App.tsx \
--definition_name App
```
### Feature Implementation
```bash
# 1. Find similar features/patterns
gkg_search_codebase_definitions \
--project_absolute_path /path/to/project \
--search_terms "Form" "Validation" "Submit"
# 2. Read implementation patterns
gkg_read_definitions \
--definitions "[
{
\"names\": [\"UserForm\", \"validateEmail\"],
\"file_path\": \"src/components/UserForm.tsx\"
}
]"
# 3. Use patterns for new feature
```
### Bug Investigation
```bash
# 1. Search for error location
gkg_search_codebase_definitions \
--project_absolute_path /path/to/project \
--search_terms "handleError" "ErrorBoundary"
# 2. Find all references
gkg_get_references \
--absolute_file_path src/components/ErrorHandler.tsx \
--definition_name handleError
# 3. Understand error flow and dependencies
```
## MCP Tools Reference
### Tools Available via MCP
| Tool | Purpose | Use Case |
| ----------------------------- | ------------------------- | ---------------------------------- |
| `list_projects` | Get indexed projects | Know current project paths |
| `search_codebase_definitions` | Find definitions by name | Find functions, classes, constants |
| `get_references` | Find all uses of symbol | Impact analysis, refactoring |
| `read_definitions` | Read full definition code | Understand implementation |
| `get_definition` | Jump to definition | Quick exploration |
| `repo_map` | Get structure overview | Understand architecture |
| `index_project` | Rebuild index | Update after changes |
### Input Parameters
**project_absolute_path**
- Absolute filesystem path to project root
- Required for most tools
- Use `list_projects` to discover
**relative_paths** (for repo_map)
- Project-relative file/directory paths
- Can be multiple paths
- Example: `["src/components", "src/utils.ts"]`
**search_terms**
- Function/class/constant names to find
- Case-sensitive
- Supports partial matches
- Multiple terms in single call
**definition_name**
- Exact identifier name as it appears in code
- No namespace prefixes
- Used with file_path for precise location
**page** (pagination)
- Starting from 1
- Check `next_page` in response for more results
- Default: 1
## Best Practices
### 1. Always Index Before Exploring
```bash
# Stale index = inaccurate results
gkg index /path/to/project
# Then explore with confidence
gkg_search_codebase_definitions ...
```
### 2. Use Right Tool for Task
| Task | Tool |
| ----------------------------- | ----------------------------- |
| Find where function is called | `get_references` |
| Understand implementation | `read_definitions` |
| Explore codebase structure | `repo_map` |
| Quick definition lookup | `get_definition` |
| Find by name pattern | `search_codebase_definitions` |
### 3. Pagination for Large Results
```bash
# If next_page > 1, more results available
response = gkg_search_codebase_definitions(
search_terms=["handler"],
page=1
)
if response.next_page:
# Fetch page 2
response2 = gkg_search_codebase_definitions(
search_terms=["handler"],
page=response.next_page
)
```
### 4. Combine Tools for Depth
```bash
# Search → Read → Analyze pattern
# 1. Find
search_results = search_codebase_definitions(terms=["FormHandler"])
# 2. Read full code
definitions = read_definitions(
  file_path=search_results[0].location.file,
  names=["FormHandler"]
)
# 3. Find uses
references = get_references(
  file_path=search_results[0].location.file,
  definition_name="FormHandler"
)
```
## Troubleshooting
### Knowledge Graph Not Finding Definitions
**Problem**: `search_codebase_definitions` returns no results
**Solutions:**
1. Ensure project is indexed: `gkg index /path/to/project`
2. Check exact naming (case-sensitive)
3. Verify project path is absolute
4. Try partial search terms
5. Check language is supported (Rust, TypeScript, Python, Ruby, Java, Kotlin)
### Server Not Running
**Problem**: Cannot connect to localhost:27495
**Solution:**
```bash
# Ensure server is running
gkg server start
# Check if running on different port
gkg server start --port 27495
# View logs for errors
gkg server logs
```
### Stale Results
**Problem**: Recent changes not showing up
**Solution:**
```bash
# Rebuild index
gkg index /path/to/project
# Or restart server
gkg server stop
gkg server start
```
## Integration with ClaudeKit
### Using GKG in `/code` Commands
```bash
# When implementing from plan, use GKG for context
/cook Add user authentication system
# Claude Code uses GKG to:
# 1. Search for existing auth patterns
# 2. Understand current architecture
# 3. Find integration points
# 4. Analyze impact of changes
```
### Using GKG in Refactoring
```bash
# Before major refactoring
/plan Refactor authentication module
# Claude Code uses GKG to:
# 1. Map all auth-related code
# 2. Find all usages
# 3. Identify dependencies
# 4. Suggest safe refactoring approach
```
## Example: Complete Workflow
```bash
# 1. Index the project
gkg index /Users/dev/my-app
# 2. Start exploring
gkg_search_codebase_definitions \
--project_absolute_path /Users/dev/my-app \
--search_terms "handleSubmit"
# 3. See where it's used
gkg_get_references \
--absolute_file_path /Users/dev/my-app/src/forms.ts \
--definition_name handleSubmit
# 4. Read full implementation
gkg_read_definitions \
--definitions "[{
\"names\": [\"handleSubmit\"],
\"file_path\": \"src/forms.ts\"
}]"
# 5. Understand connections
gkg_get_references \
--absolute_file_path /Users/dev/my-app/src/api.ts \
--definition_name apiCall
# 6. Get architecture overview
gkg_repo_map \
--project_absolute_path /Users/dev/my-app \
--relative_paths ["src"] \
--depth 2
```
## Resources
- [GitLab Knowledge Graph Docs](https://gitlab-org.gitlab.io/rust/knowledge-graph/)
- [MCP Tools Reference](https://gitlab-org.gitlab.io/rust/knowledge-graph/mcp/tools/)
- [Supported Languages](https://gitlab-org.gitlab.io/rust/knowledge-graph/languages/overview/)
- [Repository on GitLab](https://gitlab.com/gitlab-org/rust/knowledge-graph)
## Next Steps
1. **Install GKG**: Follow installation guide above
2. **Index Your Project**: `gkg index /path/to/project`
3. **Explore**: Use MCP tools to understand codebase
4. **Integrate with ClaudeKit**: Use GKG context in `/code` and refactoring commands
5. **Automate**: Build workflows with GKG data
---
**GitLab Knowledge Graph** enables AI agents to truly understand your codebase, making code analysis, refactoring, and implementation more intelligent and context-aware.
---