bmad initialization

2025-11-01 19:22:39 +08:00
parent 5b21dc0bd5
commit 426ae41f54
447 changed files with 80633 additions and 0 deletions


@@ -0,0 +1,444 @@
# Document Project Workflow
**Version:** 1.2.0
**Module:** BMM (BMAD Method Module)
**Type:** Action Workflow (Documentation Generator)
## Purpose
Analyzes and documents brownfield projects by scanning codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development. Generates a master index and multiple documentation files tailored to project structure and type.
**NEW in v1.2.0:** Context-safe architecture with scan levels, resumability, and write-as-you-go pattern to prevent context exhaustion.
## Key Features
- **Multi-Project Type Support**: Handles web, backend, mobile, CLI, game, embedded, data, infra, library, desktop, and extension projects
- **Multi-Part Detection**: Automatically detects and documents projects with separate client/server or multiple services
- **Three Scan Levels** (NEW v1.2.0): Quick (2-5 min), Deep (10-30 min), Exhaustive (30-120 min)
- **Resumability** (NEW v1.2.0): Interrupt and resume workflows without losing progress
- **Write-as-you-go** (NEW v1.2.0): Documents written immediately to prevent context exhaustion
- **Intelligent Batching** (NEW v1.2.0): Subfolder-based processing for deep/exhaustive scans
- **Data-Driven Analysis**: Uses CSV-based project type detection and documentation requirements
- **Comprehensive Scanning**: Analyzes APIs, data models, UI components, configuration, security patterns, and more
- **Architecture Matching**: Matches projects to 170+ architecture templates from the solutioning registry
- **Brownfield PRD Ready**: Generates documentation specifically designed for AI agents planning new features
## How to Invoke
```bash
workflow document-project
```
Or from BMAD CLI:
```bash
/bmad:bmm:workflows:document-project
```
## Scan Levels (NEW in v1.2.0)
Choose the right scan depth for your needs:
### 1. Quick Scan (Default)
**Duration:** 2-5 minutes
**What it does:** Pattern-based analysis without reading source files
**Reads:** Config files, package manifests, directory structure, README
**Use when:**
- You need a fast project overview
- Initial understanding of project structure
- Planning next steps before deeper analysis
**Does NOT read:** Source code files (`*.js`, `*.ts`, `*.py`, `*.go`, etc.)
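The distinction can be pictured as a simple include/skip filter. A minimal sketch follows; the specific globs are illustrative and not the workflow's actual configuration:

```python
# Illustrative only: the kind of filter a quick scan implies. The real workflow
# derives its patterns from documentation-requirements.csv, not this hard-coded list.
from pathlib import Path

MANIFEST_NAMES = {"package.json", "pyproject.toml", "go.mod", "README.md"}  # assumed examples
CONFIG_SUFFIXES = {".yaml", ".yml", ".toml", ".json"}
SOURCE_SUFFIXES = {".js", ".ts", ".py", ".go"}  # deliberately never read at this level

def quick_scan_candidates(root: str):
    """Yield files a quick scan would read: manifests and configs, never source code."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix in SOURCE_SUFFIXES:
            continue
        if path.name in MANIFEST_NAMES or path.suffix in CONFIG_SUFFIXES:
            yield path
```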
### 2. Deep Scan
**Duration:** 10-30 minutes
**What it does:** Reads files in critical directories based on project type
**Reads:** Files in critical paths defined by documentation requirements
**Use when:**
- Creating comprehensive documentation for brownfield PRD
- Need detailed analysis of key areas
- Want balance between depth and speed
**Example:** For a web app, reads controllers/, models/, components/, but not every utility file
### 3. Exhaustive Scan
**Duration:** 30-120 minutes
**What it does:** Reads ALL source files in project
**Reads:** Every source file (excludes node_modules, dist, build, .git)
**Use when:**
- Complete project analysis needed
- Migration planning requires full understanding
- Detailed audit of entire codebase
- Deep technical debt assessment
**Note:** Deep-dive mode ALWAYS uses exhaustive scan (no choice)
## Resumability (NEW in v1.2.0)
The workflow can be interrupted and resumed without losing progress:
- **State Tracking:** Progress saved in `project-scan-report.json`
- **Auto-Detection:** Workflow detects incomplete runs (<24 hours old)
- **Resume Prompt:** Choose to resume or start fresh
- **Step-by-Step:** Resume from exact step where interrupted
- **Archiving:** Old state files automatically archived
**Example Resume Flow:**
```
> workflow document-project
I found an in-progress workflow state from 2025-10-11 14:32:15.
Current Progress:
- Mode: initial_scan
- Scan Level: deep
- Completed Steps: 5/12
- Last Step: step_5
Would you like to:
1. Resume from where we left off - Continue from step 6
2. Start fresh - Archive old state and begin new scan
3. Cancel - Exit without changes
Your choice [1/2/3]:
```
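Under the hood, the resume decision reduces to a small age-and-state check. A minimal sketch, assuming the state file lives at `docs/project-scan-report.json` and that its `timestamps` block carries a `last_updated` epoch value (the actual schema is shown under State File Format below):

```python
# Sketch of the resume check described above; treat field names as assumptions
# apart from those shown in the State File Format section of this README.
import json, time
from pathlib import Path

STATE_FILE = Path("docs/project-scan-report.json")
MAX_AGE_SECONDS = 24 * 60 * 60  # runs older than 24 hours are archived instead

def resumable_state():
    """Return the saved state if it is recent enough to offer a resume, else None."""
    if not STATE_FILE.exists():
        return None
    state = json.loads(STATE_FILE.read_text())
    last_updated = state["timestamps"]["last_updated"]  # assumed to be epoch seconds
    return state if (time.time() - last_updated) < MAX_AGE_SECONDS else None
```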
## What It Does
### Step-by-Step Process
1. **Detects Project Structure** - Identifies if project is single-part or multi-part (client/server/etc.)
2. **Classifies Project Type** - Matches against 11 project types (web, backend, mobile, etc.)
3. **Discovers Documentation** - Finds existing README, CONTRIBUTING, ARCHITECTURE files
4. **Analyzes Tech Stack** - Parses package files, identifies frameworks, versions, dependencies
5. **Conditional Scanning** - Performs targeted analysis based on project type requirements:
- API routes and endpoints
- Database models and schemas
- State management patterns
- UI component libraries
- Configuration and security
- CI/CD and deployment configs
6. **Generates Source Tree** - Creates annotated directory structure with critical paths
7. **Extracts Dev Instructions** - Documents setup, build, run, and test commands
8. **Creates Architecture Docs** - Generates detailed architecture using matched templates
9. **Builds Master Index** - Creates comprehensive index.md as primary AI retrieval source
10. **Validates Output** - Runs 140+ point checklist to ensure completeness
### Output Files
**Single-Part Projects:**
- `index.md` - Master index
- `project-overview.md` - Executive summary
- `architecture.md` - Detailed architecture
- `source-tree-analysis.md` - Annotated directory tree
- `component-inventory.md` - Component catalog (if applicable)
- `development-guide.md` - Local dev instructions
- `api-contracts.md` - API documentation (if applicable)
- `data-models.md` - Database schema (if applicable)
- `deployment-guide.md` - Deployment process (optional)
- `contribution-guide.md` - Contributing guidelines (optional)
- `project-scan-report.json` - State file for resumability (NEW v1.2.0)
**Multi-Part Projects (e.g., client + server):**
- `index.md` - Master index with part navigation
- `project-overview.md` - Multi-part summary
- `architecture-{part_id}.md` - Per-part architecture docs
- `source-tree-analysis.md` - Full tree with part annotations
- `component-inventory-{part_id}.md` - Per-part components
- `development-guide-{part_id}.md` - Per-part dev guides
- `integration-architecture.md` - How parts communicate
- `project-parts.json` - Machine-readable metadata
- `project-scan-report.json` - State file for resumability (NEW v1.2.0)
- Additional conditional files per part (API, data models, etc.)
## Data Files
The workflow uses a single comprehensive CSV file:
**documentation-requirements.csv** - Complete project analysis guide
- Location: `/bmad/bmm/workflows/document-project/documentation-requirements.csv`
- 11 project types (web, mobile, backend, cli, library, desktop, game, data, extension, infra, embedded)
- 24 columns combining:
- **Detection columns**: `project_type_id`, `key_file_patterns` (identifies project type from codebase)
- **Requirement columns**: `requires_api_scan`, `requires_data_models`, `requires_ui_components`, etc.
- **Pattern columns**: `critical_directories`, `test_file_patterns`, `config_patterns`, etc.
- Self-contained: All project detection AND scanning requirements in one file
- Architecture patterns inferred from tech stack (no external registry needed)
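As a sketch of how a single CSV row drives scanning: once a project type is detected, that row supplies both the boolean `requires_*` flags and the semicolon-separated path patterns. The column and type names below come from the CSV itself; the lookup code is illustrative, not part of the workflow.

```python
# Illustrative lookup against documentation-requirements.csv; column names match
# the shipped CSV, while the helper itself is only a sketch.
import csv

def load_requirements(csv_path: str, project_type_id: str) -> dict:
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["project_type_id"] == project_type_id:
                return row
    raise KeyError(f"unknown project type: {project_type_id}")

reqs = load_requirements("documentation-requirements.csv", "web")
critical_dirs = reqs["critical_directories"].split(";")  # e.g. src/, app/, pages/, ...
needs_api_docs = reqs["requires_api_scan"] == "true"
```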
## Use Cases
### Primary Use Case: Brownfield PRD Creation
After running this workflow, use the generated `index.md` as input to brownfield PRD workflows:
```
User: "I want to add a new dashboard feature"
PRD Workflow: Loads docs/index.md
→ Understands existing architecture
→ Identifies reusable components
→ Plans integration with existing APIs
→ Creates contextual PRD with epics and stories
```
### Other Use Cases
- **Onboarding New Developers** - Comprehensive project documentation
- **Architecture Review** - Structured analysis of existing system
- **Technical Debt Assessment** - Identify patterns and anti-patterns
- **Migration Planning** - Understand current state before refactoring
## Requirements
### Recommended Inputs (Optional)
- Project root directory (defaults to current directory)
- README.md or similar docs (auto-discovered if present)
- User guidance on key areas to focus on (the workflow will ask)
### Tools Used
- File system scanning (Glob, Read, Grep)
- Code analysis
- Git repository analysis (optional)
## Configuration
### Default Output Location
Files are saved to: `{output_folder}` (from config.yaml)
Default: `/docs/` folder in project root
### Customization
- Modify `documentation-requirements.csv` to adjust scanning patterns for project types
- Add new project types to `project-types.csv`
- Add new architecture templates to `registry.csv`
## Example: Multi-Part Web App
**Input:**
```
my-app/
├── client/ # React frontend
├── server/ # Express backend
└── README.md
```
**Detection Result:**
- Repository Type: Monorepo
- Part 1: client (web/React)
- Part 2: server (backend/Express)
**Output (10+ files):**
```
docs/
├── index.md
├── project-overview.md
├── architecture-client.md
├── architecture-server.md
├── source-tree-analysis.md
├── component-inventory-client.md
├── development-guide-client.md
├── development-guide-server.md
├── api-contracts-server.md
├── data-models-server.md
├── integration-architecture.md
└── project-parts.json
```
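For a layout like this, the `project-parts.json` metadata file might look roughly as follows; the exact schema is defined by the workflow, so treat the field names here as illustrative.

```json
{
  "repository_type": "monorepo",
  "parts": [
    { "part_id": "client", "project_type": "web", "root_path": "client/", "tech_stack": "React" },
    { "part_id": "server", "project_type": "backend", "root_path": "server/", "tech_stack": "Express" }
  ]
}
```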
## Example: Simple CLI Tool
**Input:**
```
hello-cli/
├── main.go
├── go.mod
└── README.md
```
**Detection Result:**
- Repository Type: Monolith
- Part 1: main (cli/Go)
**Output (4 files):**
```
docs/
├── index.md
├── project-overview.md
├── architecture.md
└── source-tree-analysis.md
```
## Deep-Dive Mode
### What is Deep-Dive Mode?
When you run the workflow on a project that already has documentation, you'll be offered a choice:
1. **Rescan entire project** - Update all documentation with latest changes
2. **Deep-dive into specific area** - Generate EXHAUSTIVE documentation for a particular feature/module/folder
3. **Cancel** - Keep existing documentation
Deep-dive mode performs **comprehensive, file-by-file analysis** of a specific area, reading EVERY file completely and documenting:
- All exports with complete signatures
- All imports and dependencies
- Dependency graphs and data flow
- Code patterns and implementations
- Testing coverage and strategies
- Integration points
- Reuse opportunities
### When to Use Deep-Dive Mode
- **Before implementing a feature** - Deep-dive the area you'll be modifying
- **During architecture review** - Deep-dive complex modules
- **For code understanding** - Deep-dive unfamiliar parts of codebase
- **When creating PRDs** - Deep-dive areas affected by new features
### Deep-Dive Process
1. Workflow detects existing `index.md`
2. Offers deep-dive option
3. Suggests areas based on project structure:
- API route groups
- Feature modules
- UI component areas
- Services/business logic
4. You select area or specify custom path
5. Workflow reads EVERY file in that area
6. Generates `deep-dive-{area-name}.md` with complete analysis
7. Updates `index.md` with link to deep-dive doc
8. Offers to deep-dive another area or finish
### Deep-Dive Output Example
**docs/deep-dive-dashboard-feature.md:**
- Complete file inventory (47 files analyzed)
- Every export with signatures
- Dependency graph
- Data flow analysis
- Integration points
- Testing coverage
- Related code references
- Implementation guidance
- ~3,000 LOC documented in detail
### Incremental Deep-Diving
You can deep-dive multiple areas over time:
- First run: Full project scan generates index.md
- Second run: Deep-dive dashboard feature
- Third run: Deep-dive API layer
- Fourth run: Deep-dive authentication system
All deep-dive docs are linked from the master index.
## Validation
The workflow includes a comprehensive 160+ point checklist covering:
- Project detection accuracy
- Technology stack completeness
- Codebase scanning thoroughness
- Architecture documentation quality
- Multi-part handling (if applicable)
- Brownfield PRD readiness
- Deep-dive completeness (if applicable)
## Next Steps After Completion
1. **Review** `docs/index.md` - Your master documentation index
2. **Validate** - Check generated docs for accuracy
3. **Use for PRD** - Point brownfield PRD workflow to index.md
4. **Maintain** - Re-run workflow when architecture changes significantly
## File Structure
```
document-project/
├── workflow.yaml # Workflow configuration
├── instructions.md # Step-by-step workflow logic
├── checklist.md # Validation criteria
├── documentation-requirements.csv # Project type scanning patterns
├── templates/ # Output templates
│ ├── index-template.md
│ ├── project-overview-template.md
│ └── source-tree-template.md
└── README.md # This file
```
## Troubleshooting
**Issue: Project type not detected correctly**
- Solution: Workflow will ask for confirmation; manually select correct type
**Issue: Missing critical information**
- Solution: Provide additional context when prompted; re-run specific analysis steps
**Issue: Multi-part detection missed a part**
- Solution: When asked to confirm parts, specify the missing part and its path
**Issue: Architecture template doesn't match well**
- Solution: Check registry.csv; may need to add new template or adjust matching criteria
## Architecture Improvements in v1.2.0
### Context-Safe Design
The workflow now uses a write-as-you-go architecture:
- Documents written immediately to disk (not accumulated in memory)
- Detailed findings purged after writing (only summaries kept)
- State tracking enables resumption from any step
- Batching strategy prevents context exhaustion on large projects
### Batching Strategy
For deep/exhaustive scans:
- Process ONE subfolder at a time
- Read files → Extract info → Write output → Validate → Purge context
- Primary concern is file SIZE (not count)
- Track batches in state file for resumability
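A minimal sketch of that batch loop follows (illustrative only; the real workflow performs these steps through its instruction files rather than code, and extracts far richer findings than file sizes):

```python
# One subfolder per batch: read, extract, write output immediately, keep only a
# short summary in memory, and persist state after every batch.
import json
from pathlib import Path

def scan_in_batches(subfolders, state: dict, state_file: Path, out_dir: Path):
    for folder in subfolders:
        files = sorted(p for p in Path(folder).rglob("*") if p.is_file())
        # "Extract info": here just names and sizes; the real scan reads file content.
        findings = [{"path": str(p), "bytes": p.stat().st_size} for p in files]
        doc = out_dir / f"analysis-{Path(folder).name}.md"
        doc.write_text("\n".join(f"- `{f['path']}` ({f['bytes']} bytes)" for f in findings))
        state.setdefault("batches_completed", []).append(str(folder))
        state.setdefault("findings", {})[str(folder)] = f"{len(findings)} files documented"
        state_file.write_text(json.dumps(state, separators=(",", ":")))  # write-as-you-go
        findings.clear()  # purge detailed context before the next batch
```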
### State File Format
Optimized JSON (no pretty-printing):
```json
{
"workflow_version": "1.2.0",
"timestamps": {...},
"mode": "initial_scan",
"scan_level": "deep",
"completed_steps": [...],
"current_step": "step_6",
"findings": {"summary": "only"},
"outputs_generated": [...],
"resume_instructions": "..."
}
```
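Producing that compact form is a one-liner in most languages; here is a sketch in Python, using a minimal state dict that mirrors the format above:

```python
# Compact (non-pretty-printed) state file write, matching the format sketched above.
import json

state = {
    "workflow_version": "1.2.0",
    "mode": "initial_scan",
    "scan_level": "deep",
    "completed_steps": [],
    "current_step": "step_1",
}
with open("docs/project-scan-report.json", "w") as f:
    json.dump(state, f, separators=(",", ":"))  # minimal whitespace, cheap to re-read on resume
```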


@@ -0,0 +1,245 @@
# Document Project Workflow - Validation Checklist
## Scan Level and Resumability (v1.2.0)
- [ ] Scan level selection offered (quick/deep/exhaustive) for initial_scan and full_rescan modes
- [ ] Deep-dive mode automatically uses exhaustive scan (no choice given)
- [ ] Quick scan does NOT read source files (only patterns, configs, manifests)
- [ ] Deep scan reads files in critical directories per project type
- [ ] Exhaustive scan reads ALL source files (excluding node_modules, dist, build)
- [ ] State file (project-scan-report.json) created at workflow start
- [ ] State file updated after each step completion
- [ ] State file contains all required fields per schema
- [ ] Resumability prompt shown if state file exists and is <24 hours old
- [ ] Old state files (>24 hours) automatically archived
- [ ] Resume functionality loads previous state correctly
- [ ] Workflow can jump to correct step when resuming
## Write-as-you-go Architecture
- [ ] Each document written to disk IMMEDIATELY after generation
- [ ] Document validation performed right after writing (section-level)
- [ ] State file updated after each document is written
- [ ] Detailed findings purged from context after writing (only summaries kept)
- [ ] Context contains only high-level summaries (1-2 sentences per section)
- [ ] No accumulation of full project analysis in memory
## Batching Strategy (Deep/Exhaustive Scans)
- [ ] Batching applied for deep and exhaustive scan levels
- [ ] Batches organized by SUBFOLDER (not arbitrary file count)
- [ ] Large files (>5000 LOC) handled with appropriate judgment
- [ ] Each batch: read files, extract info, write output, validate, purge context
- [ ] Batch completion tracked in state file (batches_completed array)
- [ ] Batch summaries kept in context (1-2 sentences max)
## Project Detection and Classification
- [ ] Project type correctly identified and matches actual technology stack
- [ ] Multi-part vs single-part structure accurately detected
- [ ] All project parts identified if multi-part (no missing client/server/etc.)
- [ ] Documentation requirements loaded for each part type
- [ ] Architecture registry match is appropriate for detected stack
## Technology Stack Analysis
- [ ] All major technologies identified (framework, language, database, etc.)
- [ ] Versions captured where available
- [ ] Technology decision table is complete and accurate
- [ ] Dependencies and libraries documented
- [ ] Build tools and package managers identified
## Codebase Scanning Completeness
- [ ] All critical directories scanned based on project type
- [ ] API endpoints documented (if requires_api_scan = true)
- [ ] Data models captured (if requires_data_models = true)
- [ ] State management patterns identified (if requires_state_management = true)
- [ ] UI components inventoried (if requires_ui_components = true)
- [ ] Configuration files located and documented
- [ ] Authentication/security patterns identified
- [ ] Entry points correctly identified
- [ ] Integration points mapped (for multi-part projects)
- [ ] Test files and patterns documented
## Source Tree Analysis
- [ ] Complete directory tree generated with no major omissions
- [ ] Critical folders highlighted and described
- [ ] Entry points clearly marked
- [ ] Integration paths noted (for multi-part)
- [ ] Asset locations identified (if applicable)
- [ ] File organization patterns explained
## Architecture Documentation Quality
- [ ] Architecture document uses appropriate template from registry
- [ ] All template sections filled with relevant information (no placeholders)
- [ ] Technology stack section is comprehensive
- [ ] Architecture pattern clearly explained
- [ ] Data architecture documented (if applicable)
- [ ] API design documented (if applicable)
- [ ] Component structure explained (if applicable)
- [ ] Source tree included and annotated
- [ ] Testing strategy documented
- [ ] Deployment architecture captured (if config found)
## Development and Operations Documentation
- [ ] Prerequisites clearly listed
- [ ] Installation steps documented
- [ ] Environment setup instructions provided
- [ ] Local run commands specified
- [ ] Build process documented
- [ ] Test commands and approach explained
- [ ] Deployment process documented (if applicable)
- [ ] CI/CD pipeline details captured (if found)
- [ ] Contribution guidelines extracted (if found)
## Multi-Part Project Specific (if applicable)
- [ ] Each part documented separately
- [ ] Part-specific architecture files created (architecture-{part_id}.md)
- [ ] Part-specific component inventories created (if applicable)
- [ ] Part-specific development guides created
- [ ] Integration architecture document created
- [ ] Integration points clearly defined with type and details
- [ ] Data flow between parts explained
- [ ] project-parts.json metadata file created
## Index and Navigation
- [ ] index.md created as master entry point
- [ ] Project structure clearly summarized in index
- [ ] Quick reference section complete and accurate
- [ ] All generated docs linked from index
- [ ] All existing docs linked from index (if found)
- [ ] Getting started section provides clear next steps
- [ ] AI-assisted development guidance included
- [ ] Navigation structure matches project complexity (simple for single-part, detailed for multi-part)
## File Completeness
- [ ] index.md generated
- [ ] project-overview.md generated
- [ ] source-tree-analysis.md generated
- [ ] architecture.md (or per-part) generated
- [ ] component-inventory.md (or per-part) generated if UI components exist
- [ ] development-guide.md (or per-part) generated
- [ ] api-contracts.md (or per-part) generated if APIs documented
- [ ] data-models.md (or per-part) generated if data models found
- [ ] deployment-guide.md generated if deployment config found
- [ ] contribution-guide.md generated if guidelines found
- [ ] integration-architecture.md generated if multi-part
- [ ] project-parts.json generated if multi-part
## Content Quality
- [ ] Technical information is accurate and specific
- [ ] No generic placeholders or "TODO" items remain
- [ ] Examples and code snippets are relevant to actual project
- [ ] File paths and directory references are correct
- [ ] Technology names and versions are accurate
- [ ] Terminology is consistent across all documents
- [ ] Descriptions are clear and actionable
## Brownfield PRD Readiness
- [ ] Documentation provides enough context for AI to understand existing system
- [ ] Integration points are clear for planning new features
- [ ] Reusable components are identified for leveraging in new work
- [ ] Data models are documented for schema extension planning
- [ ] API contracts are documented for endpoint expansion
- [ ] Code conventions and patterns are captured for consistency
- [ ] Architecture constraints are clear for informed decision-making
## Output Validation
- [ ] All files saved to correct output folder
- [ ] File naming follows convention (no part suffix for single-part, with suffix for multi-part)
- [ ] No broken internal links between documents
- [ ] Markdown formatting is correct and renders properly
- [ ] JSON files are valid (project-parts.json if applicable)
## Final Validation
- [ ] User confirmed project classification is accurate
- [ ] User provided any additional context needed
- [ ] All requested areas of focus addressed
- [ ] Documentation is immediately usable for brownfield PRD workflow
- [ ] No critical information gaps identified
## Issues Found
### Critical Issues (must fix before completion)
-
### Minor Issues (can be addressed later)
-
### Missing Information (to note for user)
-

***
## Deep-Dive Mode Validation (if deep-dive was performed)
- [ ] Deep-dive target area correctly identified and scoped
- [ ] All files in target area read completely (no skipped files)
- [ ] File inventory includes all exports with complete signatures
- [ ] Dependencies mapped for all files
- [ ] Dependents identified (who imports each file)
- [ ] Code snippets included for key implementation details
- [ ] Patterns and design approaches documented
- [ ] State management strategy explained
- [ ] Side effects documented (API calls, DB queries, etc.)
- [ ] Error handling approaches captured
- [ ] Testing files and coverage documented
- [ ] TODOs and comments extracted
- [ ] Dependency graph created showing relationships
- [ ] Data flow traced through the scanned area
- [ ] Integration points with rest of codebase identified
- [ ] Related code and similar patterns found outside scanned area
- [ ] Reuse opportunities documented
- [ ] Implementation guidance provided
- [ ] Modification instructions clear
- [ ] Index.md updated with deep-dive link
- [ ] Deep-dive documentation is immediately useful for implementation
---
## State File Quality
- [ ] State file is valid JSON (no syntax errors)
- [ ] State file is optimized (no pretty-printing, minimal whitespace)
- [ ] State file contains all completed steps with timestamps
- [ ] State file outputs_generated list is accurate and complete
- [ ] State file resume_instructions are clear and actionable
- [ ] State file findings contain only high-level summaries (not detailed data)
- [ ] State file can be successfully loaded for resumption
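Several of these checks can be scripted rather than eyeballed. A minimal sketch, assuming the documentation output folder is `docs/` and the field names from the README's state file format:

```python
# Rough sanity check for the state file items above; required fields follow the
# README's State File Format section and may differ from the final schema.
import json
from pathlib import Path

REQUIRED = ["workflow_version", "mode", "scan_level", "completed_steps",
            "current_step", "outputs_generated", "resume_instructions"]

def check_state_file(path: str = "docs/project-scan-report.json") -> dict:
    text = Path(path).read_text()
    state = json.loads(text)                      # raises if the JSON is invalid
    missing = [field for field in REQUIRED if field not in state]
    compact = "\n" not in text.strip()            # rough indicator of pretty-printing
    return {"missing_fields": missing, "compact": compact}
```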
## Completion Criteria
All items in the following sections must be checked:
- ✓ Scan Level and Resumability (v1.2.0)
- ✓ Write-as-you-go Architecture
- ✓ Batching Strategy (if deep/exhaustive scan)
- ✓ Project Detection and Classification
- ✓ Technology Stack Analysis
- ✓ Architecture Documentation Quality
- ✓ Index and Navigation
- ✓ File Completeness
- ✓ Brownfield PRD Readiness
- ✓ State File Quality
- ✓ Deep-Dive Mode Validation (if applicable)
The workflow is complete when:
1. All critical checklist items are satisfied
2. No critical issues remain
3. User has reviewed and approved the documentation
4. Generated docs are ready for use in brownfield PRD workflow
5. Deep-dive docs (if any) are comprehensive and implementation-ready
6. State file is valid and can enable resumption if interrupted


@@ -0,0 +1,12 @@
project_type_id,requires_api_scan,requires_data_models,requires_state_management,requires_ui_components,requires_deployment_config,key_file_patterns,critical_directories,integration_scan_patterns,test_file_patterns,config_patterns,auth_security_patterns,schema_migration_patterns,entry_point_patterns,shared_code_patterns,monorepo_workspace_patterns,async_event_patterns,ci_cd_patterns,asset_patterns,hardware_interface_patterns,protocol_schema_patterns,localization_patterns,requires_hardware_docs,requires_asset_inventory
web,true,true,true,true,true,package.json;tsconfig.json;*.config.js;*.config.ts;vite.config.*;webpack.config.*;next.config.*;nuxt.config.*,src/;app/;pages/;components/;api/;lib/;styles/;public/;static/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.spec.ts;*.test.tsx;*.spec.tsx;**/__tests__/**;**/*.test.*;**/*.spec.*,.env*;config/*;*.config.*;.config/;settings/,*auth*.ts;*session*.ts;middleware/auth*;*.guard.ts;*authenticat*;*permission*;guards/,migrations/**;prisma/**;*.prisma;alembic/**;knex/**;*migration*.sql;*migration*.ts,main.ts;index.ts;app.ts;server.ts;_app.tsx;_app.ts;layout.tsx,shared/**;common/**;utils/**;lib/**;helpers/**;@*/**;packages/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json;workspace.json;rush.json,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;jobs/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;bitbucket-pipelines.yml;.drone.yml,public/**;static/**;assets/**;images/**;media/**,N/A,*.proto;*.graphql;graphql/**;schema.graphql;*.avro;openapi.*;swagger.*,i18n/**;locales/**;lang/**;translations/**;messages/**;*.po;*.pot,false,false
mobile,true,true,true,true,true,package.json;pubspec.yaml;Podfile;build.gradle;app.json;capacitor.config.*;ionic.config.json,src/;app/;screens/;components/;services/;models/;assets/;ios/;android/,*client.ts;*service.ts;*api.ts;fetch*.ts;axios*.ts;*http*.ts,*.test.ts;*.test.tsx;*_test.dart;*.test.dart;**/__tests__/**,.env*;config/*;app.json;capacitor.config.*;google-services.json;GoogleService-Info.plist,*auth*.ts;*session*.ts;*authenticat*;*permission*;*biometric*;secure-store*,migrations/**;realm/**;*.realm;watermelondb/**;sqlite/**,main.ts;index.ts;App.tsx;App.ts;main.dart,shared/**;common/**;utils/**;lib/**;components/shared/**;@*/**,pnpm-workspace.yaml;lerna.json;nx.json;turbo.json,*event*.ts;*notification*.ts;*push*.ts;background-fetch*,fastlane/**;.github/workflows/**;.gitlab-ci.yml;bitbucket-pipelines.yml;appcenter-*,assets/**;Resources/**;res/**;*.xcassets;drawable*/;mipmap*/;images/**,N/A,*.proto;graphql/**;*.graphql,i18n/**;locales/**;translations/**;*.strings;*.xml,false,true
backend,true,true,false,false,true,package.json;requirements.txt;go.mod;Gemfile;pom.xml;build.gradle;Cargo.toml;*.csproj,src/;api/;services/;models/;routes/;controllers/;middleware/;handlers/;repositories/;domain/,*client.ts;*repository.ts;*service.ts;*connector*.ts;*adapter*.ts,*.test.ts;*.spec.ts;*_test.go;test_*.py;*Test.java;*_test.rs,.env*;config/*;*.config.*;application*.yml;application*.yaml;appsettings*.json;settings.py,*auth*.ts;*session*.ts;*authenticat*;*authorization*;middleware/auth*;guards/;*jwt*;*oauth*,migrations/**;alembic/**;flyway/**;liquibase/**;prisma/**;*.prisma;*migration*.sql;*migration*.ts;db/migrate,main.ts;index.ts;server.ts;app.ts;main.go;main.py;Program.cs;__init__.py,shared/**;common/**;utils/**;lib/**;core/**;@*/**;pkg/**,pnpm-workspace.yaml;lerna.json;nx.json;go.work,*event*.ts;*queue*.ts;*subscriber*.ts;*consumer*.ts;*producer*.ts;*worker*.ts;*handler*.ts;jobs/**;workers/**,.github/workflows/**;.gitlab-ci.yml;Jenkinsfile;.circleci/**;azure-pipelines.yml;.drone.yml,N/A,N/A,*.proto;*.graphql;graphql/**;*.avro;*.thrift;openapi.*;swagger.*;schema/**,N/A,false,false
cli,false,false,false,false,false,package.json;go.mod;Cargo.toml;setup.py;pyproject.toml;*.gemspec,src/;cmd/;cli/;bin/;lib/;commands/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*_spec.rb,.env*;config/*;*.config.*;.*.rc;.*rc,N/A,N/A,main.ts;index.ts;cli.ts;main.go;main.py;__main__.py;bin/*,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;goreleaser.yml,N/A,N/A,N/A,N/A,false,false
library,false,false,false,false,false,package.json;setup.py;Cargo.toml;go.mod;*.gemspec;*.csproj;pom.xml,src/;lib/;dist/;pkg/;build/;target/,N/A,*.test.ts;*_test.go;test_*.py;*.spec.ts;*Test.java;*_test.rs,.*.rc;tsconfig.json;rollup.config.*;vite.config.*;webpack.config.*,N/A,N/A,index.ts;index.js;lib.rs;main.go;__init__.py,src/**;lib/**;core/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false
desktop,false,false,true,true,true,package.json;Cargo.toml;*.csproj;CMakeLists.txt;tauri.conf.json;electron-builder.yml;wails.json,src/;app/;components/;main/;renderer/;resources/;assets/;build/,*service.ts;ipc*.ts;*bridge*.ts;*native*.ts;invoke*,*.test.ts;*.spec.ts;*_test.rs;*.spec.tsx,.env*;config/*;*.config.*;app.config.*;forge.config.*;builder.config.*,*auth*.ts;*session*.ts;keychain*;secure-storage*,N/A,main.ts;index.ts;main.js;src-tauri/main.rs;electron.ts,shared/**;common/**;utils/**;lib/**;components/shared/**,N/A,*event*.ts;*ipc*.ts;*message*.ts,.github/workflows/**;.gitlab-ci.yml;.circleci/**,resources/**;assets/**;icons/**;static/**;build/resources,N/A,N/A,i18n/**;locales/**;translations/**;lang/**,false,true
game,false,false,true,false,false,*.unity;*.godot;*.uproject;package.json;project.godot,Assets/;Scenes/;Scripts/;Prefabs/;Resources/;Content/;Source/;src/;scenes/;scripts/,N/A,*Test.cs;*_test.gd;*Test.cpp;*.test.ts,.env*;config/*;*.ini;settings/;GameSettings/,N/A,N/A,main.gd;Main.cs;GameManager.cs;main.cpp;index.ts,shared/**;common/**;utils/**;Core/**;Framework/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,Assets/**;Scenes/**;Prefabs/**;Materials/**;Textures/**;Audio/**;Models/**;*.fbx;*.blend;*.shader;*.hlsl;*.glsl;Shaders/**;VFX/**,N/A,N/A,Localization/**;Languages/**;i18n/**,false,true
data,false,true,false,false,true,requirements.txt;pyproject.toml;dbt_project.yml;airflow.cfg;setup.py;Pipfile,dags/;pipelines/;models/;transformations/;notebooks/;sql/;etl/;jobs/,N/A,test_*.py;*_test.py;tests/**,.env*;config/*;profiles.yml;dbt_project.yml;airflow.cfg,N/A,migrations/**;dbt/models/**;*.sql;schemas/**,main.py;__init__.py;pipeline.py;dag.py,shared/**;common/**;utils/**;lib/**;helpers/**,N/A,*event*.py;*consumer*.py;*producer*.py;*worker*.py;jobs/**;tasks/**,.github/workflows/**;.gitlab-ci.yml;airflow/dags/**,N/A,N/A,*.proto;*.avro;schemas/**;*.parquet,N/A,false,false
extension,true,false,true,true,false,manifest.json;package.json;wxt.config.ts,src/;popup/;content/;background/;assets/;components/,*message.ts;*runtime.ts;*storage.ts;*tabs.ts,*.test.ts;*.spec.ts;*.test.tsx,.env*;wxt.config.*;webpack.config.*;vite.config.*,*auth*.ts;*session*.ts;*permission*,N/A,index.ts;popup.ts;background.ts;content.ts,shared/**;common/**;utils/**;lib/**,N/A,*message*.ts;*event*.ts;chrome.runtime*;browser.runtime*,.github/workflows/**,assets/**;icons/**;images/**;static/**,N/A,N/A,_locales/**;locales/**;i18n/**,false,false
infra,false,false,false,false,true,*.tf;*.tfvars;pulumi.yaml;cdk.json;*.yml;*.yaml;Dockerfile;docker-compose*.yml,terraform/;modules/;k8s/;charts/;playbooks/;roles/;policies/;stacks/,N/A,*_test.go;test_*.py;*_test.tf;*_spec.rb,.env*;*.tfvars;config/*;vars/;group_vars/;host_vars/,N/A,N/A,main.tf;index.ts;__main__.py;playbook.yml,modules/**;shared/**;common/**;lib/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml;.circleci/**,N/A,N/A,N/A,N/A,false,false
embedded,false,false,false,false,false,platformio.ini;CMakeLists.txt;*.ino;Makefile;*.ioc;mbed-os.lib,src/;lib/;include/;firmware/;drivers/;hal/;bsp/;components/,N/A,test_*.c;*_test.cpp;*_test.c;tests/**,.env*;config/*;sdkconfig;*.json;settings/,N/A,N/A,main.c;main.cpp;main.ino;app_main.c,lib/**;shared/**;common/**;drivers/**,N/A,N/A,.github/workflows/**;.gitlab-ci.yml,N/A,*.h;*.hpp;drivers/**;hal/**;bsp/**;pinout.*;peripheral*;gpio*;*.fzz;schematics/**,*.proto;mqtt*;coap*;modbus*,N/A,true,false


@@ -0,0 +1,222 @@
# Document Project Workflow Router
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {project-root}/bmad/bmm/workflows/document-project/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<workflow>
<critical>This router determines workflow mode and delegates to specialized sub-workflows</critical>
<step n="1" goal="Validate workflow and get project info">
<invoke-workflow path="{project-root}/bmad/bmm/workflows/workflow-status">
<param>mode: data</param>
<param>data_request: project_config</param>
</invoke-workflow>
<check if="status_exists == false">
<output>{{suggestion}}</output>
<output>Note: Documentation workflow can run standalone. Continuing without progress tracking.</output>
<action>Set standalone_mode = true</action>
<action>Set status_file_found = false</action>
</check>
<check if="status_exists == true">
<action>Store {{status_file_path}} for later updates</action>
<action>Set status_file_found = true</action>
<!-- Extract brownfield/greenfield from status data -->
<check if="field_type == 'greenfield'">
<output>Note: This is a greenfield project. Documentation workflow is typically for brownfield projects.</output>
<ask>Continue anyway to document planning artifacts? (y/n)</ask>
<check if="n">
<action>Exit workflow</action>
</check>
</check>
<!-- Now validate sequencing -->
<invoke-workflow path="{project-root}/bmad/bmm/workflows/workflow-status">
<param>mode: validate</param>
<param>calling_workflow: document-project</param>
</invoke-workflow>
<check if="warning != ''">
<output>{{warning}}</output>
<output>Note: This may be auto-invoked by prd for brownfield documentation.</output>
<ask>Continue with documentation? (y/n)</ask>
<check if="n">
<output>{{suggestion}}</output>
<action>Exit workflow</action>
</check>
</check>
</check>
</step>
<step n="2" goal="Check for resumability and determine workflow mode">
<critical>SMART LOADING STRATEGY: Check state file FIRST before loading any CSV files</critical>
<action>Check for existing state file at: {output_folder}/project-scan-report.json</action>
<check if="project-scan-report.json exists">
<action>Read state file and extract: timestamps, mode, scan_level, current_step, completed_steps, project_classification</action>
<action>Extract cached project_type_id(s) from state file if present</action>
<action>Calculate age of state file (current time - last_updated)</action>
<ask>I found an in-progress workflow state from {{last_updated}}.
**Current Progress:**
- Mode: {{mode}}
- Scan Level: {{scan_level}}
- Completed Steps: {{completed_steps_count}}/{{total_steps}}
- Last Step: {{current_step}}
- Project Type(s): {{cached_project_types}}
Would you like to:
1. **Resume from where we left off** - Continue from step {{current_step}}
2. **Start fresh** - Archive old state and begin new scan
3. **Cancel** - Exit without changes
Your choice [1/2/3]:
</ask>
<check if="user selects 1">
<action>Set resume_mode = true</action>
<action>Set workflow_mode = {{mode}}</action>
<action>Load findings summaries from state file</action>
<action>Load cached project_type_id(s) from state file</action>
<critical>CONDITIONAL CSV LOADING FOR RESUME:</critical>
<action>For each cached project_type_id, load ONLY the corresponding row from: {documentation_requirements_csv}</action>
<action>Skip loading project-types.csv and architecture_registry.csv (not needed on resume)</action>
<action>Store loaded doc requirements for use in remaining steps</action>
<action>Display: "Resuming {{workflow_mode}} from {{current_step}} with cached project type(s): {{cached_project_types}}"</action>
<check if="workflow_mode == deep_dive">
<action>Load and execute: {installed_path}/workflows/deep-dive-instructions.md with resume context</action>
</check>
<check if="workflow_mode == initial_scan OR workflow_mode == full_rescan">
<action>Load and execute: {installed_path}/workflows/full-scan-instructions.md with resume context</action>
</check>
</check>
<check if="user selects 2">
<action>Create archive directory: {output_folder}/.archive/</action>
<action>Move old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
<action>Set resume_mode = false</action>
<action>Continue to Step 3</action>
</check>
<check if="user selects 3">
<action>Display: "Exiting workflow without changes."</action>
<action>Exit workflow</action>
</check>
</check>
<check if="state file age >= 24 hours">
<action>Display: "Found old state file (>24 hours). Starting fresh scan."</action>
<action>Archive old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
<action>Set resume_mode = false</action>
<action>Continue to Step 3</action>
</check>
</step>
<step n="3" goal="Check for existing documentation and determine workflow mode" if="resume_mode == false">
<action>Check if {output_folder}/index.md exists</action>
<check if="index.md exists">
<action>Read existing index.md to extract metadata (date, project structure, parts count)</action>
<action>Store as {{existing_doc_date}}, {{existing_structure}}</action>
<ask>I found existing documentation generated on {{existing_doc_date}}.
What would you like to do?
1. **Re-scan entire project** - Update all documentation with latest changes
2. **Deep-dive into specific area** - Generate detailed documentation for a particular feature/module/folder
3. **Cancel** - Keep existing documentation as-is
Your choice [1/2/3]:
</ask>
<check if="user selects 1">
<action>Set workflow_mode = "full_rescan"</action>
<action>Display: "Starting full project rescan..."</action>
<action>Load and execute: {installed_path}/workflows/full-scan-instructions.md</action>
<action>After sub-workflow completes, continue to Step 4</action>
</check>
<check if="user selects 2">
<action>Set workflow_mode = "deep_dive"</action>
<action>Set scan_level = "exhaustive"</action>
<action>Display: "Starting deep-dive documentation mode..."</action>
<action>Load and execute: {installed_path}/workflows/deep-dive-instructions.md</action>
<action>After sub-workflow completes, continue to Step 4</action>
</check>
<check if="user selects 3">
<action>Display message: "Keeping existing documentation. Exiting workflow."</action>
<action>Exit workflow</action>
</check>
</check>
<check if="index.md does not exist">
<action>Set workflow_mode = "initial_scan"</action>
<action>Display: "No existing documentation found. Starting initial project scan..."</action>
<action>Load and execute: {installed_path}/workflows/full-scan-instructions.md</action>
<action>After sub-workflow completes, continue to Step 4</action>
</check>
</step>
<step n="4" goal="Update status and complete">
<check if="status_file_found == true">
<invoke-workflow path="{project-root}/bmad/bmm/workflows/workflow-status">
<param>mode: update</param>
<param>action: complete_workflow</param>
<param>workflow_name: document-project</param>
</invoke-workflow>
<check if="success == true">
<output>Status updated!</output>
</check>
</check>
<output>**✅ Document Project Workflow Complete, {user_name}!**
**Documentation Generated:**
- Mode: {{workflow_mode}}
- Scan Level: {{scan_level}}
- Output: {output_folder}/index.md and related files
{{#if status_file_found}}
**Status Updated:**
- Progress tracking updated
**Next Steps:**
- **Next required:** {{next_workflow}} ({{next_agent}} agent)
Check status anytime with: `workflow-status`
{{else}}
**Next Steps:**
Since no workflow is in progress:
- Refer to the BMM workflow guide if unsure what to do next
- Or run `workflow-init` to create a workflow path and get guided next steps
{{/if}}
</output>
</step>
</workflow>


@@ -0,0 +1,38 @@
# Document Project Workflow Templates
This directory contains template files for the `document-project` workflow.
## Template Files
- **index-template.md** - Master index template (adapts for single/multi-part projects)
- **project-overview-template.md** - Executive summary and high-level overview
- **source-tree-template.md** - Annotated directory structure
## Template Usage
The workflow dynamically selects and populates templates based on:
1. **Project structure** (single part vs multi-part)
2. **Project type** (web, backend, mobile, etc.)
3. **Documentation requirements** (from documentation-requirements.csv)
## Variable Naming Convention
Templates use Handlebars-style variables:
- `{{variable_name}}` - Simple substitution
- `{{#if condition}}...{{/if}}` - Conditional blocks
- `{{#each collection}}...{{/each}}` - Iteration
## Additional Templates
Architecture-specific templates are dynamically loaded from:
`/bmad/bmm/workflows/3-solutioning/templates/`
based on the matched architecture type from the registry.
## Notes
- Templates support both simple and complex project structures
- Multi-part projects get part-specific file naming (e.g., `architecture-{part_id}.md`)
- Single-part projects use simplified naming (e.g., `architecture.md`)


@@ -0,0 +1,345 @@
# {{target_name}} - Deep Dive Documentation
**Generated:** {{date}}
**Scope:** {{target_path}}
**Files Analyzed:** {{file_count}}
**Lines of Code:** {{total_loc}}
**Workflow Mode:** Exhaustive Deep-Dive
## Overview
{{target_description}}
**Purpose:** {{target_purpose}}
**Key Responsibilities:** {{responsibilities}}
**Integration Points:** {{integration_summary}}
## Complete File Inventory
{{#each files_in_inventory}}
### {{file_path}}
**Purpose:** {{purpose}}
**Lines of Code:** {{loc}}
**File Type:** {{file_type}}
**What Future Contributors Must Know:** {{contributor_note}}
**Exports:**
{{#each exports}}
- `{{signature}}` - {{description}}
{{/each}}
**Dependencies:**
{{#each imports}}
- `{{import_path}}` - {{reason}}
{{/each}}
**Used By:**
{{#each dependents}}
- `{{dependent_path}}`
{{/each}}
**Key Implementation Details:**
```{{language}}
{{key_code_snippet}}
```
{{implementation_notes}}
**Patterns Used:**
{{#each patterns}}
- {{pattern_name}}: {{pattern_description}}
{{/each}}
**State Management:** {{state_approach}}
**Side Effects:**
{{#each side_effects}}
- {{effect_type}}: {{effect_description}}
{{/each}}
**Error Handling:** {{error_handling_approach}}
**Testing:**
- Test File: {{test_file_path}}
- Coverage: {{coverage_percentage}}%
- Test Approach: {{test_approach}}
**Comments/TODOs:**
{{#each todos}}
- Line {{line_number}}: {{todo_text}}
{{/each}}
---
{{/each}}
## Contributor Checklist
- **Risks & Gotchas:** {{risks_notes}}
- **Pre-change Verification Steps:** {{verification_steps}}
- **Suggested Tests Before PR:** {{suggested_tests}}
## Architecture & Design Patterns
### Code Organization
{{organization_approach}}
### Design Patterns
{{#each design_patterns}}
- **{{pattern_name}}**: {{usage_description}}
{{/each}}
### State Management Strategy
{{state_management_details}}
### Error Handling Philosophy
{{error_handling_philosophy}}
### Testing Strategy
{{testing_strategy}}
## Data Flow
{{data_flow_diagram}}
### Data Entry Points
{{#each entry_points}}
- **{{entry_name}}**: {{entry_description}}
{{/each}}
### Data Transformations
{{#each transformations}}
- **{{transformation_name}}**: {{transformation_description}}
{{/each}}
### Data Exit Points
{{#each exit_points}}
- **{{exit_name}}**: {{exit_description}}
{{/each}}
## Integration Points
### APIs Consumed
{{#each apis_consumed}}
- **{{api_endpoint}}**: {{api_description}}
- Method: {{method}}
- Authentication: {{auth_requirement}}
- Response: {{response_schema}}
{{/each}}
### APIs Exposed
{{#each apis_exposed}}
- **{{api_endpoint}}**: {{api_description}}
- Method: {{method}}
- Request: {{request_schema}}
- Response: {{response_schema}}
{{/each}}
### Shared State
{{#each shared_state}}
- **{{state_name}}**: {{state_description}}
- Type: {{state_type}}
- Accessed By: {{accessors}}
{{/each}}
### Events
{{#each events}}
- **{{event_name}}**: {{event_description}}
- Type: {{publish_or_subscribe}}
- Payload: {{payload_schema}}
{{/each}}
### Database Access
{{#each database_operations}}
- **{{table_name}}**: {{operation_type}}
- Queries: {{query_patterns}}
- Indexes Used: {{indexes}}
{{/each}}
## Dependency Graph
{{dependency_graph_visualization}}
### Entry Points (Not Imported by Others in Scope)
{{#each entry_point_files}}
- {{file_path}}
{{/each}}
### Leaf Nodes (Don't Import Others in Scope)
{{#each leaf_files}}
- {{file_path}}
{{/each}}
### Circular Dependencies
{{#if has_circular_dependencies}}
⚠️ Circular dependencies detected:
{{#each circular_deps}}
- {{cycle_description}}
{{/each}}
{{else}}
✓ No circular dependencies detected
{{/if}}
## Testing Analysis
### Test Coverage Summary
- **Statements:** {{statements_coverage}}%
- **Branches:** {{branches_coverage}}%
- **Functions:** {{functions_coverage}}%
- **Lines:** {{lines_coverage}}%
### Test Files
{{#each test_files}}
- **{{test_file_path}}**
- Tests: {{test_count}}
- Approach: {{test_approach}}
- Mocking Strategy: {{mocking_strategy}}
{{/each}}
### Test Utilities Available
{{#each test_utilities}}
- `{{utility_name}}`: {{utility_description}}
{{/each}}
### Testing Gaps
{{#each testing_gaps}}
- {{gap_description}}
{{/each}}
## Related Code & Reuse Opportunities
### Similar Features Elsewhere
{{#each similar_features}}
- **{{feature_name}}** (`{{feature_path}}`)
- Similarity: {{similarity_description}}
- Can Reference For: {{reference_use_case}}
{{/each}}
### Reusable Utilities Available
{{#each reusable_utilities}}
- **{{utility_name}}** (`{{utility_path}}`)
- Purpose: {{utility_purpose}}
- How to Use: {{usage_example}}
{{/each}}
### Patterns to Follow
{{#each patterns_to_follow}}
- **{{pattern_name}}**: Reference `{{reference_file}}` for implementation
{{/each}}
## Implementation Notes
### Code Quality Observations
{{#each quality_observations}}
- {{observation}}
{{/each}}
### TODOs and Future Work
{{#each all_todos}}
- **{{file_path}}:{{line_number}}**: {{todo_text}}
{{/each}}
### Known Issues
{{#each known_issues}}
- {{issue_description}}
{{/each}}
### Optimization Opportunities
{{#each optimizations}}
- {{optimization_suggestion}}
{{/each}}
### Technical Debt
{{#each tech_debt_items}}
- {{debt_description}}
{{/each}}
## Modification Guidance
### To Add New Functionality
{{modification_guidance_add}}
### To Modify Existing Functionality
{{modification_guidance_modify}}
### To Remove/Deprecate
{{modification_guidance_remove}}
### Testing Checklist for Changes
{{#each testing_checklist_items}}
- [ ] {{checklist_item}}
{{/each}}
---
_Generated by `document-project` workflow (deep-dive mode)_
_Base Documentation: docs/index.md_
_Scan Date: {{date}}_
_Analysis Mode: Exhaustive_


@@ -0,0 +1,169 @@
# {{project_name}} Documentation Index
**Type:** {{repository_type}}{{#if is_multi_part}} with {{parts_count}} parts{{/if}}
**Primary Language:** {{primary_language}}
**Architecture:** {{architecture_type}}
**Last Updated:** {{date}}
## Project Overview
{{project_description}}
{{#if is_multi_part}}
## Project Structure
This project consists of {{parts_count}} parts:
{{#each project_parts}}
### {{part_name}} ({{part_id}})
- **Type:** {{project_type}}
- **Location:** `{{root_path}}`
- **Tech Stack:** {{tech_stack_summary}}
- **Entry Point:** {{entry_point}}
{{/each}}
## Cross-Part Integration
{{integration_summary}}
{{/if}}
## Quick Reference
{{#if is_single_part}}
- **Tech Stack:** {{tech_stack_summary}}
- **Entry Point:** {{entry_point}}
- **Architecture Pattern:** {{architecture_pattern}}
- **Database:** {{database}}
- **Deployment:** {{deployment_platform}}
{{else}}
{{#each project_parts}}
### {{part_name}} Quick Ref
- **Stack:** {{tech_stack_summary}}
- **Entry:** {{entry_point}}
- **Pattern:** {{architecture_pattern}}
{{/each}}
{{/if}}
## Generated Documentation
### Core Documentation
- [Project Overview](./project-overview.md) - Executive summary and high-level architecture
- [Source Tree Analysis](./source-tree-analysis.md) - Annotated directory structure
{{#if is_single_part}}
- [Architecture](./architecture.md) - Detailed technical architecture
- [Component Inventory](./component-inventory.md) - Catalog of major components{{#if has_ui_components}} and UI elements{{/if}}
- [Development Guide](./development-guide.md) - Local setup and development workflow
{{#if has_api_docs}}- [API Contracts](./api-contracts.md) - API endpoints and schemas{{/if}}
{{#if has_data_models}}- [Data Models](./data-models.md) - Database schema and models{{/if}}
{{else}}
### Part-Specific Documentation
{{#each project_parts}}
#### {{part_name}} ({{part_id}})
- [Architecture](./architecture-{{part_id}}.md) - Technical architecture for {{part_name}}
{{#if has_components}}- [Components](./component-inventory-{{part_id}}.md) - Component catalog{{/if}}
- [Development Guide](./development-guide-{{part_id}}.md) - Setup and dev workflow
{{#if has_api}}- [API Contracts](./api-contracts-{{part_id}}.md) - API documentation{{/if}}
{{#if has_data}}- [Data Models](./data-models-{{part_id}}.md) - Data architecture{{/if}}
{{/each}}
### Integration
- [Integration Architecture](./integration-architecture.md) - How parts communicate
- [Project Parts Metadata](./project-parts.json) - Machine-readable structure
{{/if}}
### Optional Documentation
{{#if has_deployment_guide}}- [Deployment Guide](./deployment-guide.md) - Deployment process and infrastructure{{/if}}
{{#if has_contribution_guide}}- [Contribution Guide](./contribution-guide.md) - Contributing guidelines and standards{{/if}}
## Existing Documentation
{{#if has_existing_docs}}
{{#each existing_docs}}
- [{{title}}]({{path}}) - {{description}}
{{/each}}
{{else}}
No existing documentation files were found in the project.
{{/if}}
## Getting Started
{{#if is_single_part}}
### Prerequisites
{{prerequisites}}
### Setup
```bash
{{setup_commands}}
```
### Run Locally
```bash
{{run_commands}}
```
### Run Tests
```bash
{{test_commands}}
```
{{else}}
{{#each project_parts}}
### {{part_name}} Setup
**Prerequisites:** {{prerequisites}}
**Install & Run:**
```bash
cd {{root_path}}
{{setup_command}}
{{run_command}}
```
{{/each}}
{{/if}}
## For AI-Assisted Development
This documentation was generated specifically to enable AI agents to understand and extend this codebase.
### When Planning New Features:
**UI-only features:**
{{#if is_multi_part}}→ Reference: `architecture-{{ui_part_id}}.md`, `component-inventory-{{ui_part_id}}.md`{{else}}→ Reference: `architecture.md`, `component-inventory.md`{{/if}}
**API/Backend features:**
{{#if is_multi_part}}→ Reference: `architecture-{{api_part_id}}.md`, `api-contracts-{{api_part_id}}.md`, `data-models-{{api_part_id}}.md`{{else}}→ Reference: `architecture.md`{{#if has_api_docs}}, `api-contracts.md`{{/if}}{{#if has_data_models}}, `data-models.md`{{/if}}{{/if}}
**Full-stack features:**
→ Reference: All architecture docs{{#if is_multi_part}} + `integration-architecture.md`{{/if}}
**Deployment changes:**
{{#if has_deployment_guide}}→ Reference: `deployment-guide.md`{{else}}→ Review CI/CD configs in project{{/if}}
---
_Documentation generated by BMAD Method `document-project` workflow_

View File

@@ -0,0 +1,103 @@
# {{project_name}} - Project Overview
**Date:** {{date}}
**Type:** {{project_type}}
**Architecture:** {{architecture_type}}
## Executive Summary
{{executive_summary}}
## Project Classification
- **Repository Type:** {{repository_type}}
- **Project Type(s):** {{project_types_list}}
- **Primary Language(s):** {{primary_languages}}
- **Architecture Pattern:** {{architecture_pattern}}
{{#if is_multi_part}}
## Multi-Part Structure
This project consists of {{parts_count}} distinct parts:
{{#each project_parts}}
### {{part_name}}
- **Type:** {{project_type}}
- **Location:** `{{root_path}}`
- **Purpose:** {{purpose}}
- **Tech Stack:** {{tech_stack}}
{{/each}}
### How Parts Integrate
{{integration_description}}
{{/if}}
## Technology Stack Summary
{{#if is_single_part}}
{{technology_table}}
{{else}}
{{#each project_parts}}
### {{part_name}} Stack
{{technology_table}}
{{/each}}
{{/if}}
## Key Features
{{key_features}}
## Architecture Highlights
{{architecture_highlights}}
## Development Overview
### Prerequisites
{{prerequisites}}
### Getting Started
{{getting_started_summary}}
### Key Commands
{{#if is_single_part}}
- **Install:** `{{install_command}}`
- **Dev:** `{{dev_command}}`
- **Build:** `{{build_command}}`
- **Test:** `{{test_command}}`
{{else}}
{{#each project_parts}}
#### {{part_name}}
- **Install:** `{{install_command}}`
- **Dev:** `{{dev_command}}`
{{/each}}
{{/if}}
## Repository Structure
{{repository_structure_summary}}
## Documentation Map
For detailed information, see:
- [index.md](./index.md) - Master documentation index
- [architecture.md](./architecture{{#if is_multi_part}}-{part_id}{{/if}}.md) - Detailed architecture
- [source-tree-analysis.md](./source-tree-analysis.md) - Directory structure
- [development-guide.md](./development-guide{{#if is_multi_part}}-{part_id}{{/if}}.md) - Development workflow
---
_Generated using BMAD Method `document-project` workflow_

View File

@@ -0,0 +1,160 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Project Scan Report Schema",
"description": "State tracking file for document-project workflow resumability",
"type": "object",
"required": ["workflow_version", "timestamps", "mode", "scan_level", "completed_steps", "current_step"],
"properties": {
"workflow_version": {
"type": "string",
"description": "Version of document-project workflow",
"example": "1.2.0"
},
"timestamps": {
"type": "object",
"required": ["started", "last_updated"],
"properties": {
"started": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp when workflow started"
},
"last_updated": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp of last state update"
},
"completed": {
"type": "string",
"format": "date-time",
"description": "ISO 8601 timestamp when workflow completed (if finished)"
}
}
},
"mode": {
"type": "string",
"enum": ["initial_scan", "full_rescan", "deep_dive"],
"description": "Workflow execution mode"
},
"scan_level": {
"type": "string",
"enum": ["quick", "deep", "exhaustive"],
"description": "Scan depth level (deep_dive mode always uses exhaustive)"
},
"project_root": {
"type": "string",
"description": "Absolute path to project root directory"
},
"output_folder": {
"type": "string",
"description": "Absolute path to output folder"
},
"completed_steps": {
"type": "array",
"items": {
"type": "object",
"required": ["step", "status"],
"properties": {
"step": {
"type": "string",
"description": "Step identifier (e.g., 'step_1', 'step_2')"
},
"status": {
"type": "string",
"enum": ["completed", "partial", "failed"]
},
"timestamp": {
"type": "string",
"format": "date-time"
},
"outputs": {
"type": "array",
"items": { "type": "string" },
"description": "Files written during this step"
},
"summary": {
"type": "string",
"description": "1-2 sentence summary of step outcome"
}
}
}
},
"current_step": {
"type": "string",
"description": "Current step identifier for resumption"
},
"findings": {
"type": "object",
"description": "High-level summaries only (detailed findings purged after writing)",
"properties": {
"project_classification": {
"type": "object",
"properties": {
"repository_type": { "type": "string" },
"parts_count": { "type": "integer" },
"primary_language": { "type": "string" },
"architecture_type": { "type": "string" }
}
},
"technology_stack": {
"type": "array",
"items": {
"type": "object",
"properties": {
"part_id": { "type": "string" },
"tech_summary": { "type": "string" }
}
}
},
"batches_completed": {
"type": "array",
"description": "For deep/exhaustive scans: subfolders processed",
"items": {
"type": "object",
"properties": {
"path": { "type": "string" },
"files_scanned": { "type": "integer" },
"summary": { "type": "string" }
}
}
}
}
},
"outputs_generated": {
"type": "array",
"items": { "type": "string" },
"description": "List of all output files generated"
},
"resume_instructions": {
"type": "string",
"description": "Instructions for resuming from current_step"
},
"validation_status": {
"type": "object",
"properties": {
"last_validated": {
"type": "string",
"format": "date-time"
},
"validation_errors": {
"type": "array",
"items": { "type": "string" }
}
}
},
"deep_dive_targets": {
"type": "array",
"description": "Track deep-dive areas analyzed (for deep_dive mode)",
"items": {
"type": "object",
"properties": {
"target_name": { "type": "string" },
"target_path": { "type": "string" },
"files_analyzed": { "type": "integer" },
"output_file": { "type": "string" },
"timestamp": { "type": "string", "format": "date-time" }
}
}
}
}
}
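
For reference, a minimal state file that satisfies this schema might look like the sketch below; all paths, timestamps, and summaries are illustrative placeholders, not output from a real run.
```json
{
  "workflow_version": "1.2.0",
  "timestamps": {
    "started": "2025-11-01T10:15:00Z",
    "last_updated": "2025-11-01T10:42:00Z"
  },
  "mode": "initial_scan",
  "scan_level": "deep",
  "project_root": "/path/to/project",
  "output_folder": "/path/to/project/docs",
  "completed_steps": [
    {
      "step": "step_1",
      "status": "completed",
      "timestamp": "2025-11-01T10:20:00Z",
      "outputs": ["docs/project-overview.md"],
      "summary": "Classified repository as a two-part client/server monorepo."
    }
  ],
  "current_step": "step_2",
  "findings": {
    "project_classification": {
      "repository_type": "monorepo",
      "parts_count": 2,
      "primary_language": "TypeScript",
      "architecture_type": "client-server"
    }
  },
  "outputs_generated": ["docs/project-overview.md"],
  "resume_instructions": "Resume at step_2 (technology stack analysis)."
}
```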

View File

@@ -0,0 +1,135 @@
# {{project_name}} - Source Tree Analysis
**Date:** {{date}}
## Overview
{{source_tree_overview}}
{{#if is_multi_part}}
## Multi-Part Structure
This project is organized into {{parts_count}} distinct parts:
{{#each project_parts}}
- **{{part_name}}** (`{{root_path}}`): {{purpose}}
{{/each}}
{{/if}}
## Complete Directory Structure
```
{{complete_source_tree}}
```
## Critical Directories
{{#each critical_folders}}
### `{{folder_path}}`
{{description}}
**Purpose:** {{purpose}}
**Contains:** {{contents_summary}}
{{#if entry_points}}**Entry Points:** {{entry_points}}{{/if}}
{{#if integration_note}}**Integration:** {{integration_note}}{{/if}}
{{/each}}
{{#if is_multi_part}}
## Part-Specific Trees
{{#each project_parts}}
### {{part_name}} Structure
```
{{source_tree}}
```
**Key Directories:**
{{#each critical_directories}}
- **`{{path}}`**: {{description}}
{{/each}}
{{/each}}
## Integration Points
{{#each integration_points}}
### {{from_part}} → {{to_part}}
- **Location:** `{{integration_path}}`
- **Type:** {{integration_type}}
- **Details:** {{details}}
{{/each}}
{{/if}}
## Entry Points
{{#if is_single_part}}
- **Main Entry:** `{{main_entry_point}}`
{{#if additional_entry_points}}
- **Additional:**
{{#each additional_entry_points}}
- `{{path}}`: {{description}}
{{/each}}
{{/if}}
{{else}}
{{#each project_parts}}
### {{part_name}}
- **Entry Point:** `{{entry_point}}`
- **Bootstrap:** {{bootstrap_description}}
{{/each}}
{{/if}}
## File Organization Patterns
{{file_organization_patterns}}
## Key File Types
{{#each file_type_patterns}}
### {{file_type}}
- **Pattern:** `{{pattern}}`
- **Purpose:** {{purpose}}
- **Examples:** {{examples}}
{{/each}}
## Asset Locations
{{#if has_assets}}
{{#each asset_locations}}
- **{{asset_type}}**: `{{location}}` ({{file_count}} files, {{total_size}})
{{/each}}
{{else}}
No significant assets detected.
{{/if}}
## Configuration Files
{{#each config_files}}
- **`{{path}}`**: {{description}}
{{/each}}
## Notes for Development
{{development_notes}}
---
_Generated using BMAD Method `document-project` workflow_

View File

@@ -0,0 +1,34 @@
# Document Project Workflow Configuration
name: "document-project"
version: "1.2.0"
description: "Analyzes and documents brownfield projects by scanning codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development"
author: "BMad"
# Critical variables
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
user_skill_level: "{config_source}:user_skill_level"
date: system-generated
# Module path and component files
installed_path: "{project-root}/bmad/bmm/workflows/document-project"
template: false # This is an action workflow with multiple output files
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
# Required data files - CRITICAL for project type detection and documentation requirements
documentation_requirements_csv: "{installed_path}/documentation-requirements.csv"
# Optional input - project root to scan (defaults to current working directory)
recommended_inputs:
- project_root: "User will specify or use current directory"
- existing_readme: "README.md at project root (if exists)"
- project_config: "package.json, go.mod, requirements.txt, etc. (auto-detected)"
# Output configuration - Multiple files generated in output folder
# Primary output: {output_folder}/index.md
# Additional files generated by sub-workflows based on project structure
standalone: true

View File

@@ -0,0 +1,298 @@
# Deep-Dive Documentation Instructions
<workflow>
<critical>This workflow performs exhaustive deep-dive documentation of specific areas</critical>
<critical>Called by: ../document-project/instructions.md router</critical>
<critical>Handles: deep_dive mode only</critical>
<step n="13" goal="Deep-dive documentation of specific area" if="workflow_mode == deep_dive">
<critical>Deep-dive mode requires literal full-file review. Sampling, guessing, or relying solely on tooling output is FORBIDDEN.</critical>
<action>Load existing project structure from index.md and project-parts.json (if exists)</action>
<action>Load source tree analysis to understand available areas</action>
<step n="13a" goal="Identify area for deep-dive">
<action>Analyze existing documentation to suggest deep-dive options</action>
<ask>What area would you like to deep-dive into?
**Suggested Areas Based on Project Structure:**
{{#if has_api_routes}}
### API Routes ({{api_route_count}} endpoints found)
{{#each api_route_groups}}
{{group_index}}. {{group_name}} - {{endpoint_count}} endpoints in `{{path}}`
{{/each}}
{{/if}}
{{#if has_feature_modules}}
### Feature Modules ({{feature_count}} features)
{{#each feature_modules}}
{{module_index}}. {{module_name}} - {{file_count}} files in `{{path}}`
{{/each}}
{{/if}}
{{#if has_ui_components}}
### UI Component Areas
{{#each component_groups}}
{{group_index}}. {{group_name}} - {{component_count}} components in `{{path}}`
{{/each}}
{{/if}}
{{#if has_services}}
### Services/Business Logic
{{#each service_groups}}
{{service_index}}. {{service_name}} - `{{path}}`
{{/each}}
{{/if}}
**Or specify custom:**
- Folder path (e.g., "client/src/features/dashboard")
- File path (e.g., "server/src/api/users.ts")
- Feature name (e.g., "authentication system")
Enter your choice (number or custom path):
</ask>
<action>Parse user input to determine:
- target_type: "folder" | "file" | "feature" | "api_group" | "component_group"
- target_path: Absolute path to scan
- target_name: Human-readable name for documentation
- target_scope: List of all files to analyze
</action>
<action>Store as {{deep_dive_target}}</action>
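As a hedged illustration, selecting a feature folder such as the `client/src/features/dashboard` example above might produce a `deep_dive_target` along these lines (the file names are hypothetical):
```json
{
  "target_type": "folder",
  "target_name": "Dashboard feature",
  "target_path": "client/src/features/dashboard",
  "target_scope": [
    "client/src/features/dashboard/Dashboard.tsx",
    "client/src/features/dashboard/useDashboardData.ts",
    "client/src/features/dashboard/dashboard.api.ts"
  ]
}
```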
<action>Display confirmation:
Target: {{target_name}}
Type: {{target_type}}
Path: {{target_path}}
Estimated files to analyze: {{estimated_file_count}}
This will read EVERY file in this area. Proceed? [y/n]
</action>
<action if="user confirms 'n'">Return to Step 13a (select different area)</action>
</step>
<step n="13b" goal="Comprehensive exhaustive scan of target area">
<action>Set scan_mode = "exhaustive"</action>
<action>Initialize file_inventory = []</action>
<critical>You must read every line of every file in scope and capture a plain-language explanation (what the file does, side effects, why it matters) that future developer agents can act on. No shortcuts.</critical>
<check if="target_type == folder">
<action>Get complete recursive file list from {{target_path}}</action>
<action>Filter out: node_modules/, .git/, dist/, build/, coverage/, *.min.js, *.map</action>
<action>For EVERY remaining file in folder:
- Read complete file contents (all lines)
- Extract all exports (functions, classes, types, interfaces, constants)
- Extract all imports (dependencies)
- Identify purpose from comments and code structure
- Write 1-2 sentences (minimum) in natural language describing behaviour, side effects, assumptions, and anything a developer must know before modifying the file
- Extract function signatures with parameter types and return types
- Note any TODOs, FIXMEs, or comments
- Identify patterns (hooks, components, services, controllers, etc.)
- Capture per-file contributor guidance: `contributor_note`, `risks`, `verification_steps`, `suggested_tests`
- Store in file_inventory
</action>
</check>
<check if="target_type == file">
<action>Read complete file at {{target_path}}</action>
<action>Extract all information as above</action>
<action>Read all files it imports (follow import chain 1 level deep)</action>
<action>Find all files that import this file (dependents via grep)</action>
<action>Store all in file_inventory</action>
</check>
<check if="target_type == api_group">
<action>Identify all route/controller files in API group</action>
<action>Read all route handlers completely</action>
<action>Read associated middleware, controllers, services</action>
<action>Read data models and schemas used</action>
<action>Extract complete request/response schemas</action>
<action>Document authentication and authorization requirements</action>
<action>Store all in file_inventory</action>
</check>
<check if="target_type == feature">
<action>Search codebase for all files related to feature name</action>
<action>Include: UI components, API endpoints, models, services, tests</action>
<action>Read each file completely</action>
<action>Store all in file_inventory</action>
</check>
<check if="target_type == component_group">
<action>Get all component files in group</action>
<action>Read each component completely</action>
<action>Extract: Props interfaces, hooks used, child components, state management</action>
<action>Store all in file_inventory</action>
</check>
<action>For each file in file_inventory, document the following (a hypothetical entry is sketched after this list):
- **File Path:** Full path
- **Purpose:** What this file does (1-2 sentences)
- **Lines of Code:** Total LOC
- **Exports:** Complete list with signatures
  - Functions: `functionName(param: Type): ReturnType` - Description
  - Classes: `ClassName` - Description with key methods
  - Types/Interfaces: `TypeName` - Description
  - Constants: `CONSTANT_NAME: Type` - Description
- **Imports/Dependencies:** What it uses and why
- **Used By:** Files that import this (dependents)
- **Key Implementation Details:** Important logic, algorithms, patterns
- **State Management:** If applicable (Redux, Context, local state)
- **Side Effects:** API calls, database queries, file I/O, external services
- **Error Handling:** Try/catch blocks, error boundaries, validation
- **Testing:** Associated test files and coverage
- **Comments/TODOs:** Any inline documentation or planned work
</action>
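A hypothetical `file_inventory` entry capturing these fields might look like the following sketch (the file, signatures, and notes are invented for illustration only):
```json
{
  "file_path": "client/src/features/dashboard/useDashboardData.ts",
  "purpose": "Hook that loads and caches dashboard metrics from the reporting API.",
  "lines_of_code": 120,
  "exports": ["useDashboardData(range: DateRange): DashboardData"],
  "imports": ["react", "../api/reportingClient"],
  "used_by": ["Dashboard.tsx"],
  "key_implementation_details": "Debounces range changes and memoizes the derived series.",
  "state_management": "Local state via useState/useMemo",
  "side_effects": ["GET /api/reports/summary"],
  "error_handling": "Errors surface through a local error state and a retry handler.",
  "testing": "useDashboardData.test.ts (happy path only)",
  "comments_todos": ["TODO: cache responses across range changes"],
  "contributor_note": "Changing the response shape requires updating DashboardData and its tests.",
  "risks": ["Assumes the reporting API returns ISO 8601 dates."],
  "verification_steps": ["Run the dashboard locally against staging data."],
  "suggested_tests": ["npm test -- useDashboardData"]
}
```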
<template-output>comprehensive_file_inventory</template-output>
</step>
<step n="13c" goal="Analyze relationships and data flow">
<action>Build dependency graph for scanned area:
- Create graph with files as nodes
- Add edges for import relationships
- Identify circular dependencies if any
- Find entry points (files not imported by others in scope)
- Find leaf nodes (files that don't import others in scope)
</action>
<action>Trace data flow through the system:
- Follow function calls and data transformations
- Track API calls and their responses
- Document state updates and propagation
- Map database queries and mutations
</action>
<action>Identify integration points:
- External APIs consumed
- Internal APIs/services called
- Shared state accessed
- Events published/subscribed
- Database tables accessed
</action>
<template-output>dependency_graph</template-output>
<template-output>data_flow_analysis</template-output>
<template-output>integration_points</template-output>
</step>
<step n="13d" goal="Find related code and similar patterns">
<action>Search codebase OUTSIDE scanned area for:
- Similar file/folder naming patterns
- Similar function signatures
- Similar component structures
- Similar API patterns
- Reusable utilities that could be used
</action>
<action>Identify code reuse opportunities:
- Shared utilities available
- Design patterns used elsewhere
- Component libraries available
- Helper functions that could apply
</action>
<action>Find reference implementations:
- Similar features in other parts of codebase
- Established patterns to follow
- Testing approaches used elsewhere
</action>
<template-output>related_code_references</template-output>
<template-output>reuse_opportunities</template-output>
</step>
<step n="13e" goal="Generate comprehensive deep-dive documentation">
<action>Create documentation filename: deep-dive-{{sanitized_target_name}}.md</action>
<action>Aggregate contributor insights across files:
- Combine unique risk/gotcha notes into {{risks_notes}}
- Combine verification steps developers should run before changes into {{verification_steps}}
- Combine recommended test commands into {{suggested_tests}}
</action>
<action>Load complete deep-dive template from: {installed_path}/templates/deep-dive-template.md</action>
<action>Fill template with all collected data from steps 13b-13d</action>
<action>Write filled template to: {output_folder}/deep-dive-{{sanitized_target_name}}.md</action>
<action>Validate deep-dive document completeness</action>
<template-output>deep_dive_documentation</template-output>
<action>Update state file:
- Add to deep_dive_targets array: {"target_name": "{{target_name}}", "target_path": "{{target_path}}", "files_analyzed": {{file_count}}, "output_file": "deep-dive-{{sanitized_target_name}}.md", "timestamp": "{{now}}"}
- Add output to outputs_generated
- Update last_updated timestamp
</action>
</step>
<step n="13f" goal="Update master index with deep-dive link">
<action>Read existing index.md</action>
<action>Check if "Deep-Dive Documentation" section exists</action>
<check if="section does not exist">
<action>Add new section after "Generated Documentation":
## Deep-Dive Documentation
Detailed exhaustive analysis of specific areas:
</action>
</check>
<action>Add link to new deep-dive doc:
- [{{target_name}} Deep-Dive](./deep-dive-{{sanitized_target_name}}.md) - Comprehensive analysis of {{target_description}} ({{file_count}} files, {{total_loc}} LOC) - Generated {{date}}
</action>
<action>Update index metadata:
Last Updated: {{date}}
Deep-Dives: {{deep_dive_count}}
</action>
<action>Save updated index.md</action>
<template-output>updated_index</template-output>
</step>
<step n="13g" goal="Offer to continue or complete">
<action>Display summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Deep-Dive Documentation Complete! ✓
**Generated:** {output_folder}/deep-dive-{{sanitized_target_name}}.md
**Files Analyzed:** {{file_count}}
**Lines of Code Scanned:** {{total_loc}}
**Time Taken:** ~{{duration}}
**Documentation Includes:**
- Complete file inventory with all exports
- Dependency graph and data flow
- Integration points and API contracts
- Testing analysis and coverage
- Related code and reuse opportunities
- Implementation guidance
**Index Updated:** {output_folder}/index.md now includes link to this deep-dive
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</action>
<ask>Would you like to:
1. **Deep-dive another area** - Analyze another feature/module/folder
2. **Finish** - Complete workflow
Your choice [1/2]:
</ask>
<action if="user selects 1">
<action>Clear current deep_dive_target</action>
<action>Go to Step 13a (select new area)</action>
</action>
<action if="user selects 2">
<action>Display final message:
All deep-dive documentation complete!
**Master Index:** {output_folder}/index.md
**Deep-Dives Generated:** {{deep_dive_count}}
These comprehensive docs are now ready for:
- Architecture review
- Implementation planning
- Code understanding
- Brownfield PRD creation
Thank you for using the document-project workflow!
</action>
<action>Exit workflow</action>
</action>
</step>
</step>
</workflow>

View File

@@ -0,0 +1,31 @@
# Deep-Dive Documentation Workflow Configuration
name: "document-project-deep-dive"
description: "Exhaustive deep-dive documentation of specific project areas"
author: "BMad"
# This is a sub-workflow called by document-project/workflow.yaml
parent_workflow: "{project-root}/bmad/bmm/workflows/document-project/workflow.yaml"
# Critical variables inherited from parent
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
date: system-generated
# Module path and component files
installed_path: "{project-root}/bmad/bmm/workflows/document-project/workflows"
template: false # Action workflow
instructions: "{installed_path}/deep-dive-instructions.md"
validation: "{project-root}/bmad/bmm/workflows/document-project/checklist.md"
# Templates
deep_dive_template: "{project-root}/bmad/bmm/workflows/document-project/templates/deep-dive-template.md"
# Runtime inputs (passed from parent workflow)
workflow_mode: "deep_dive"
scan_level: "exhaustive" # Deep-dive always uses exhaustive scan
project_root_path: ""
existing_index_path: "" # Path to existing index.md
# Configuration
autonomous: false # Requires user input to select target area

File diff suppressed because it is too large

View File

@@ -0,0 +1,31 @@
# Full Project Scan Workflow Configuration
name: "document-project-full-scan"
description: "Complete project documentation workflow (initial scan or full rescan)"
author: "BMad"
# This is a sub-workflow called by document-project/workflow.yaml
parent_workflow: "{project-root}/bmad/bmm/workflows/document-project/workflow.yaml"
# Critical variables inherited from parent
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
date: system-generated
# Data files
documentation_requirements_csv: "{project-root}/bmad/bmm/workflows/document-project/documentation-requirements.csv"
# Module path and component files
installed_path: "{project-root}/bmad/bmm/workflows/document-project/workflows"
template: false # Action workflow
instructions: "{installed_path}/full-scan-instructions.md"
validation: "{project-root}/bmad/bmm/workflows/document-project/checklist.md"
# Runtime inputs (passed from parent workflow)
workflow_mode: "" # "initial_scan" or "full_rescan"
scan_level: "" # "quick", "deep", or "exhaustive"
resume_mode: false
project_root_path: ""
# Configuration
autonomous: false # Requires user input at key decision points