bmad initialization

2025-11-01 19:22:39 +08:00
parent 5b21dc0bd5
commit 426ae41f54
447 changed files with 80633 additions and 0 deletions

# Non-Functional Requirements Assessment Workflow
**Workflow ID:** `testarch-nfr`
**Agent:** Test Architect (TEA)
**Command:** `bmad tea *nfr-assess`
---
## Overview
The **nfr-assess** workflow performs a comprehensive assessment of non-functional requirements (NFRs) to validate that the implementation meets performance, security, reliability, and maintainability standards before release. It uses evidence-based validation with deterministic PASS/CONCERNS/FAIL rules and provides actionable recommendations for remediation.
**Key Features:**
- Assess multiple NFR categories (performance, security, reliability, maintainability, custom)
- Validate NFRs against defined thresholds from tech specs, PRD, or defaults
- Classify status deterministically (PASS/CONCERNS/FAIL) based on evidence
- Never guess thresholds - mark as CONCERNS if unknown
- Generate CI/CD-ready YAML snippets for quality gates
- Provide quick wins and recommended actions for remediation
- Create evidence checklists for gaps
---
## When to Use This Workflow
Use `*nfr-assess` when you need to:
- ✅ Validate non-functional requirements before release
- ✅ Assess performance against defined thresholds
- ✅ Verify security requirements are met
- ✅ Validate reliability and error handling
- ✅ Check maintainability standards (coverage, quality, documentation)
- ✅ Generate NFR assessment reports for stakeholders
- ✅ Create gate-ready metrics for CI/CD pipelines
**Typical Timing:**
- Before release (validate all NFRs)
- Before PR merge (validate critical NFRs)
- During sprint retrospectives (assess maintainability)
- After performance testing (validate performance NFRs)
- After security audit (validate security NFRs)
---
## Prerequisites
**Required:**
- Implementation deployed locally or accessible for evaluation
- Evidence sources available (test results, metrics, logs, CI results)
**Recommended:**
- NFR requirements defined in tech-spec.md, PRD.md, or story
- Test results from performance, security, reliability tests
- Application metrics (response times, error rates, throughput)
- CI/CD pipeline results for burn-in validation
**Halt Conditions:**
- NFR targets are undefined and cannot be obtained → Halt and request definition
- Implementation is not accessible for evaluation → Halt and request deployment
---
## Usage
### Basic Usage (BMad Mode)
```bash
bmad tea *nfr-assess
```
The workflow will:
1. Read tech-spec.md for NFR requirements
2. Gather evidence from test results, metrics, logs
3. Assess each NFR category against thresholds
4. Generate NFR assessment report
5. Save to `bmad/output/nfr-assessment.md`
### Standalone Mode (No Tech Spec)
```bash
bmad tea *nfr-assess --feature-name "User Authentication"
```
### Custom Configuration
```bash
bmad tea *nfr-assess \
--assess-performance true \
--assess-security true \
--assess-reliability true \
--assess-maintainability true \
--performance-response-time-ms 500 \
--security-score-min 85
```
---
## Workflow Steps
1. **Load Context** - Read tech spec, PRD, knowledge base fragments
2. **Identify NFRs** - Determine categories and thresholds
3. **Gather Evidence** - Read test results, metrics, logs, CI results
4. **Assess NFRs** - Apply deterministic PASS/CONCERNS/FAIL rules
5. **Identify Actions** - Quick wins, recommended actions, monitoring hooks
6. **Generate Deliverables** - NFR assessment report, gate YAML, evidence checklist
---
## Outputs
### NFR Assessment Report (`nfr-assessment.md`)
Comprehensive markdown file with:
- Executive summary (overall status, critical issues)
- Assessment by category (performance, security, reliability, maintainability)
- Evidence for each NFR (test results, metrics, thresholds)
- Status classification (PASS/CONCERNS/FAIL)
- Quick wins section
- Recommended actions section
- Evidence gaps checklist
### Gate YAML Snippet (Optional)
```yaml
nfr_assessment:
date: '2025-10-14'
categories:
performance: 'PASS'
security: 'CONCERNS'
reliability: 'PASS'
maintainability: 'PASS'
overall_status: 'CONCERNS'
critical_issues: 0
high_priority_issues: 1
concerns: 1
blockers: false
```
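A pipeline can consume this snippet directly. Below is a minimal sketch, assuming the snippet is saved as `nfr-gate.yaml` and PyYAML is available; both the file name and the failure policy are illustrative, not part of the workflow:
```python
# check_nfr_gate.py - minimal gate-check sketch. Assumes the snippet above was
# saved as nfr-gate.yaml and that PyYAML is installed; adjust to your pipeline.
import sys

import yaml

with open("nfr-gate.yaml") as f:
    gate = yaml.safe_load(f)["nfr_assessment"]

# Any FAIL category, or an explicit blockers flag, stops the pipeline.
if gate["blockers"] or "FAIL" in gate["categories"].values():
    print(f"NFR gate blocked: {gate['categories']}")
    sys.exit(1)

# CONCERNS passes with a warning so remediation can be tracked.
if gate["overall_status"] == "CONCERNS":
    print(f"NFR gate passed with {gate['high_priority_issues']} high-priority issue(s) outstanding")
```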
### Evidence Checklist (Optional)
- List of NFRs with missing or incomplete evidence
- Owners for evidence collection
- Suggested evidence sources
- Deadlines for evidence collection
---
## NFR Categories
### Performance
**Criteria:** Response time, throughput, resource usage, scalability
**Thresholds (Default):**
- Response time p95: 500ms
- Throughput: 100 RPS
- CPU usage: < 70%
- Memory usage: < 80%
**Evidence Sources:** Load test results, APM data, Lighthouse reports, Playwright traces
---
### Security
**Criteria:** Authentication, authorization, data protection, vulnerability management
**Thresholds (Default):**
- Security score: >= 85/100
- Critical vulnerabilities: 0
- High vulnerabilities: < 3
- MFA enabled
**Evidence Sources:** SAST results, DAST results, dependency scanning, pentest reports
---
### Reliability
**Criteria:** Availability, error handling, fault tolerance, disaster recovery
**Thresholds (Default):**
- Uptime: >= 99.9%
- Error rate: < 0.1%
- MTTR: < 15 minutes
- CI burn-in: 100 consecutive runs
**Evidence Sources:** Uptime monitoring, error logs, CI burn-in results, chaos tests
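The CI burn-in evidence above can be gathered with a simple repeated-run driver. A minimal sketch, where the test command is a hypothetical stand-in for whatever suite your CI runs:
```python
# burn_in.py - sketch of a CI burn-in loop: the suite must pass RUNS
# consecutive times; the first failure fails the job immediately.
import subprocess
import sys

RUNS = 100               # matches the default reliability threshold above
CMD = ["npm", "test"]    # hypothetical stand-in for your actual test command

for i in range(1, RUNS + 1):
    if subprocess.run(CMD).returncode != 0:
        print(f"Burn-in failed on run {i}/{RUNS}")
        sys.exit(1)
print(f"Burn-in complete: {RUNS} consecutive successful runs")
```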
---
### Maintainability
**Criteria:** Code quality, test coverage, documentation, technical debt
**Thresholds (Default):**
- Test coverage: >= 80%
- Code quality: >= 85/100
- Technical debt: < 5%
- Documentation: >= 90%
**Evidence Sources:** Coverage reports, static analysis, documentation audit, test review
---
## Assessment Rules
### PASS ✅
- Evidence exists AND meets or exceeds threshold
- No concerns flagged in evidence
- Quality is acceptable
### CONCERNS ⚠️
- Threshold is UNKNOWN (not defined)
- Evidence is MISSING or INCOMPLETE
- Evidence is close to threshold (within 10%)
- Evidence shows intermittent issues
### FAIL ❌
- Evidence exists BUT does not meet threshold
- Critical evidence is MISSING
- Evidence shows consistent failures
- Quality is unacceptable
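Because these rules are deterministic, they can be expressed directly as code. A minimal sketch for a "lower is better" metric such as response time; the function name and the explicit 10% proximity margin are illustrative:
```python
# classify_nfr.py - deterministic status sketch for a "lower is better"
# metric such as response time in ms (names and margin are illustrative).
def classify(actual, threshold, evidence_exists=True):
    if threshold is None or not evidence_exists:
        return "CONCERNS"          # threshold UNKNOWN or evidence MISSING - never guess
    if actual > threshold:
        return "FAIL"              # evidence does not meet threshold
    if actual >= threshold * 0.9:
        return "CONCERNS"          # within 10% of threshold
    return "PASS"                  # meets threshold with margin

assert classify(350, 500) == "PASS"       # well under threshold
assert classify(480, 500) == "CONCERNS"   # 96% of threshold
assert classify(750, 500) == "FAIL"       # exceeds threshold
assert classify(400, None) == "CONCERNS"  # threshold never defined
```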
---
## Configuration
### workflow.yaml Variables
```yaml
variables:
# NFR categories to assess
assess_performance: true
assess_security: true
assess_reliability: true
assess_maintainability: true
# Custom NFR categories
custom_nfr_categories: '' # e.g., "accessibility,compliance"
# Evidence sources
test_results_dir: '{project-root}/test-results'
metrics_dir: '{project-root}/metrics'
logs_dir: '{project-root}/logs'
include_ci_results: true
# Thresholds
performance_response_time_ms: 500
performance_throughput_rps: 100
security_score_min: 85
reliability_uptime_pct: 99.9
maintainability_coverage_pct: 80
# Assessment configuration
use_deterministic_rules: true
never_guess_thresholds: true
require_evidence: true
suggest_monitoring: true
# Output configuration
output_file: '{output_folder}/nfr-assessment.md'
generate_gate_yaml: true
generate_evidence_checklist: true
```
---
## Knowledge Base Integration
This workflow automatically loads relevant knowledge fragments:
- `nfr-criteria.md` - Non-functional requirements criteria
- `ci-burn-in.md` - CI/CD burn-in patterns for reliability
- `test-quality.md` - Test quality expectations (maintainability)
- `playwright-config.md` - Performance configuration patterns
---
## Examples
### Example 1: Full NFR Assessment Before Release
```bash
bmad tea *nfr-assess
```
**Output:**
```markdown
# NFR Assessment - Story 1.3
**Overall Status:** PASS ✅ (No blockers)
## Performance Assessment
- Response Time p95: PASS ✅ (320ms < 500ms threshold)
- Throughput: PASS ✅ (250 RPS > 100 RPS threshold)
## Security Assessment
- Authentication: PASS ✅ (MFA enforced)
- Data Protection: PASS ✅ (AES-256 + TLS 1.3)
## Reliability Assessment
- Uptime: PASS ✅ (99.95% > 99.9% threshold)
- Error Rate: PASS ✅ (0.05% < 0.1% threshold)
## Maintainability Assessment
- Test Coverage: PASS ✅ (87% > 80% threshold)
- Code Quality: PASS ✅ (92/100 > 85/100 threshold)
Gate Status: PASS ✅ - Ready for release
```
### Example 2: NFR Assessment with Concerns
```bash
bmad tea *nfr-assess --feature-name "User Authentication"
```
**Output:**
```markdown
# NFR Assessment - User Authentication
**Overall Status:** CONCERNS ⚠️ (1 HIGH issue)
## Security Assessment
### Authentication Strength
- **Status:** CONCERNS ⚠️
- **Threshold:** MFA enabled for all users
- **Actual:** MFA optional (not enforced)
- **Evidence:** Security audit (security-audit-2025-10-14.md)
- **Recommendation:** HIGH - Enforce MFA for all new accounts
## Quick Wins
1. **Enforce MFA (Security)** - HIGH - 4 hours
- Add configuration flag to enforce MFA
- No code changes needed
Gate Status: CONCERNS ⚠️ - Address HIGH priority issues before release
```
### Example 3: Performance-Only Assessment
```bash
bmad tea *nfr-assess \
--assess-performance true \
--assess-security false \
--assess-reliability false \
--assess-maintainability false
```
---
## Troubleshooting
### "NFR thresholds not defined"
- Check tech-spec.md for NFR requirements
- Check PRD.md for product-level SLAs
- Check story file for feature-specific requirements
- If thresholds truly unknown, mark as CONCERNS and recommend defining them
### "No evidence found"
- Check evidence directories (test-results, metrics, logs)
- Check CI/CD pipeline for test results
- If evidence truly missing, mark NFR as "NO EVIDENCE" and recommend generating it
### "CONCERNS status but no threshold exceeded"
- CONCERNS is correct when threshold is UNKNOWN or evidence is MISSING/INCOMPLETE
- CONCERNS is also correct when evidence is close to threshold (within 10%)
- Document why CONCERNS was assigned in assessment report
### "FAIL status blocks release"
- This is intentional - FAIL means critical NFR not met
- Recommend remediation actions with specific steps
- Re-run assessment after remediation
---
## Integration with Other Workflows
- **testarch-test-design** → `*nfr-assess` - Define NFR requirements, then assess
- **testarch-framework** → `*nfr-assess` - Set up frameworks, then validate NFRs
- **testarch-ci** → `*nfr-assess` - Configure CI, then assess reliability with burn-in
- `*nfr-assess` → **testarch-trace (Phase 2)** - Assess NFRs, then apply quality gates
- `*nfr-assess` → **testarch-test-review** - Assess maintainability, then review tests
---
## Best Practices
1. **Never Guess Thresholds**
- If threshold is unknown, mark as CONCERNS
- Recommend defining threshold in tech-spec.md
- Don't infer thresholds from similar features
2. **Evidence-Based Assessment**
- Every assessment must be backed by evidence
- Mark NFRs without evidence as "NO EVIDENCE"
- Don't assume or infer - require explicit evidence
3. **Deterministic Rules**
- Apply PASS/CONCERNS/FAIL consistently
- Document reasoning for each classification
- Use same rules across all NFR categories
4. **Actionable Recommendations**
- Provide specific steps, not generic advice
- Include priority, effort estimate, owner suggestion
- Focus on quick wins first
5. **Gate Integration**
- Enable `generate_gate_yaml` for CI/CD integration
- Use YAML snippets in pipeline quality gates
- Export metrics for dashboard visualization
---
## Quality Gates
| Status | Criteria | Action |
| ----------- | ---------------------------- | --------------------------- |
| PASS ✅ | All NFRs have PASS status | Ready for release |
| CONCERNS ⚠️ | Any NFR has CONCERNS status | Address before next release |
| FAIL ❌ | Critical NFR has FAIL status | Do not release - BLOCKER |
---
## Related Commands
- `bmad tea *test-design` - Define NFR requirements and test plan
- `bmad tea *framework` - Set up performance/security testing frameworks
- `bmad tea *ci` - Configure CI/CD for NFR validation
- `bmad tea *trace` (Phase 2) - Apply quality gates using NFR assessment metrics
- `bmad tea *test-review` - Review test quality (maintainability NFR)
---
## Resources
- [Instructions](./instructions.md) - Detailed workflow steps
- [Checklist](./checklist.md) - Validation checklist
- [Template](./nfr-report-template.md) - NFR assessment report template
- [Knowledge Base](../../testarch/knowledge/) - NFR criteria and best practices
---
<!-- Powered by BMAD-CORE™ -->

# Non-Functional Requirements Assessment - Validation Checklist
**Workflow:** `testarch-nfr`
**Purpose:** Ensure comprehensive and evidence-based NFR assessment with actionable recommendations
---
## Prerequisites Validation
- [ ] Implementation is deployed and accessible for evaluation
- [ ] Evidence sources are available (test results, metrics, logs, CI results)
- [ ] NFR categories are determined (performance, security, reliability, maintainability, custom)
- [ ] Evidence directories exist and are accessible (`test_results_dir`, `metrics_dir`, `logs_dir`)
- [ ] Knowledge base is loaded (nfr-criteria, ci-burn-in, test-quality)
---
## Context Loading
- [ ] Tech-spec.md loaded successfully (if available)
- [ ] PRD.md loaded (if available)
- [ ] Story file loaded (if applicable)
- [ ] Relevant knowledge fragments loaded from `tea-index.csv`:
- [ ] `nfr-criteria.md`
- [ ] `ci-burn-in.md`
- [ ] `test-quality.md`
- [ ] `playwright-config.md` (if using Playwright)
---
## NFR Categories and Thresholds
### Performance
- [ ] Response time threshold defined or marked as UNKNOWN
- [ ] Throughput threshold defined or marked as UNKNOWN
- [ ] Resource usage thresholds defined or marked as UNKNOWN
- [ ] Scalability requirements defined or marked as UNKNOWN
### Security
- [ ] Authentication requirements defined or marked as UNKNOWN
- [ ] Authorization requirements defined or marked as UNKNOWN
- [ ] Data protection requirements defined or marked as UNKNOWN
- [ ] Vulnerability management thresholds defined or marked as UNKNOWN
- [ ] Compliance requirements identified (GDPR, HIPAA, PCI-DSS, etc.)
### Reliability
- [ ] Availability (uptime) threshold defined or marked as UNKNOWN
- [ ] Error rate threshold defined or marked as UNKNOWN
- [ ] MTTR (Mean Time To Recovery) threshold defined or marked as UNKNOWN
- [ ] Fault tolerance requirements defined or marked as UNKNOWN
- [ ] Disaster recovery requirements defined (RTO, RPO) or marked as UNKNOWN
### Maintainability
- [ ] Test coverage threshold defined or marked as UNKNOWN
- [ ] Code quality threshold defined or marked as UNKNOWN
- [ ] Technical debt threshold defined or marked as UNKNOWN
- [ ] Documentation completeness threshold defined or marked as UNKNOWN
### Custom NFR Categories (if applicable)
- [ ] Custom NFR category 1: Thresholds defined or marked as UNKNOWN
- [ ] Custom NFR category 2: Thresholds defined or marked as UNKNOWN
- [ ] Custom NFR category 3: Thresholds defined or marked as UNKNOWN
---
## Evidence Gathering
### Performance Evidence
- [ ] Load test results collected (JMeter, k6, Gatling, etc.)
- [ ] Application metrics collected (response times, throughput, resource usage)
- [ ] APM data collected (New Relic, Datadog, Dynatrace, etc.)
- [ ] Lighthouse reports collected (if web app)
- [ ] Playwright performance traces collected (if applicable)
### Security Evidence
- [ ] SAST results collected (SonarQube, Checkmarx, Veracode, etc.)
- [ ] DAST results collected (OWASP ZAP, Burp Suite, etc.)
- [ ] Dependency scanning results collected (Snyk, Dependabot, npm audit)
- [ ] Penetration test reports collected (if available)
- [ ] Security audit logs collected
- [ ] Compliance audit results collected (if applicable)
### Reliability Evidence
- [ ] Uptime monitoring data collected (Pingdom, UptimeRobot, StatusCake)
- [ ] Error logs collected
- [ ] Error rate metrics collected
- [ ] CI burn-in results collected (stability over time)
- [ ] Chaos engineering test results collected (if available)
- [ ] Failover/recovery test results collected (if available)
- [ ] Incident reports and postmortems collected (if applicable)
### Maintainability Evidence
- [ ] Code coverage reports collected (Istanbul, NYC, c8, JaCoCo)
- [ ] Static analysis results collected (ESLint, SonarQube, CodeClimate)
- [ ] Technical debt metrics collected
- [ ] Documentation audit results collected
- [ ] Test review report collected (from test-review workflow, if available)
- [ ] Git metrics collected (code churn, commit frequency, etc.)
---
## NFR Assessment with Deterministic Rules
### Performance Assessment
- [ ] Response time assessed against threshold
- [ ] Throughput assessed against threshold
- [ ] Resource usage assessed against threshold
- [ ] Scalability assessed against requirements
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, metric name)
### Security Assessment
- [ ] Authentication strength assessed against requirements
- [ ] Authorization controls assessed against requirements
- [ ] Data protection assessed against requirements
- [ ] Vulnerability management assessed against thresholds
- [ ] Compliance assessed against requirements
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, scan result)
### Reliability Assessment
- [ ] Availability (uptime) assessed against threshold
- [ ] Error rate assessed against threshold
- [ ] MTTR assessed against threshold
- [ ] Fault tolerance assessed against requirements
- [ ] Disaster recovery assessed against requirements (RTO, RPO)
- [ ] CI burn-in assessed (stability over time)
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, monitoring data)
### Maintainability Assessment
- [ ] Test coverage assessed against threshold
- [ ] Code quality assessed against threshold
- [ ] Technical debt assessed against threshold
- [ ] Documentation completeness assessed against threshold
- [ ] Test quality assessed (from test-review, if available)
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, coverage report)
### Custom NFR Assessment (if applicable)
- [ ] Custom NFR 1 assessed against threshold with justification
- [ ] Custom NFR 2 assessed against threshold with justification
- [ ] Custom NFR 3 assessed against threshold with justification
---
## Status Classification Validation
### PASS Criteria Verified
- [ ] Evidence exists for PASS status
- [ ] Evidence meets or exceeds threshold
- [ ] No concerns flagged in evidence
- [ ] Quality is acceptable
### CONCERNS Criteria Verified
- [ ] Threshold is UNKNOWN (documented) OR
- [ ] Evidence is MISSING or INCOMPLETE (documented) OR
- [ ] Evidence is close to threshold (within 10%, documented) OR
- [ ] Evidence shows intermittent issues (documented)
### FAIL Criteria Verified
- [ ] Evidence exists BUT does not meet threshold (documented) OR
- [ ] Critical evidence is MISSING (documented) OR
- [ ] Evidence shows consistent failures (documented) OR
- [ ] Quality is unacceptable (documented)
### No Threshold Guessing
- [ ] All thresholds are either defined or marked as UNKNOWN
- [ ] No thresholds were guessed or inferred
- [ ] All UNKNOWN thresholds result in CONCERNS status
---
## Quick Wins and Recommended Actions
### Quick Wins Identified
- [ ] Low-effort, high-impact improvements identified for CONCERNS/FAIL
- [ ] Configuration changes (no code changes) identified
- [ ] Optimization opportunities identified (caching, indexing, compression)
- [ ] Monitoring additions identified (detect issues before failures)
### Recommended Actions
- [ ] Specific remediation steps provided (not generic advice)
- [ ] Priority assigned (CRITICAL, HIGH, MEDIUM, LOW)
- [ ] Estimated effort provided (hours, days)
- [ ] Owner suggestions provided (dev, ops, security)
### Monitoring Hooks
- [ ] Performance monitoring suggested (APM, synthetic monitoring)
- [ ] Error tracking suggested (Sentry, Rollbar, error logs)
- [ ] Security monitoring suggested (intrusion detection, audit logs)
- [ ] Alerting thresholds suggested (notify before breach)
### Fail-Fast Mechanisms
- [ ] Circuit breakers suggested for reliability
- [ ] Rate limiting suggested for performance
- [ ] Validation gates suggested for security
- [ ] Smoke tests suggested for maintainability
---
## Deliverables Generated
### NFR Assessment Report
- [ ] File created at `{output_folder}/nfr-assessment.md`
- [ ] Template from `nfr-report-template.md` used
- [ ] Executive summary included (overall status, critical issues)
- [ ] Assessment by category included (performance, security, reliability, maintainability)
- [ ] Evidence for each NFR documented
- [ ] Status classifications documented (PASS/CONCERNS/FAIL)
- [ ] Findings summary included (PASS count, CONCERNS count, FAIL count)
- [ ] Quick wins section included
- [ ] Recommended actions section included
- [ ] Evidence gaps checklist included
### Gate YAML Snippet (if enabled)
- [ ] YAML snippet generated
- [ ] Date included
- [ ] Categories status included (performance, security, reliability, maintainability)
- [ ] Overall status included (PASS/CONCERNS/FAIL)
- [ ] Issue counts included (critical, high, medium, concerns)
- [ ] Blockers flag included (true/false)
- [ ] Recommendations included
### Evidence Checklist (if enabled)
- [ ] All NFRs with MISSING or INCOMPLETE evidence listed
- [ ] Owners assigned for evidence collection
- [ ] Suggested evidence sources provided
- [ ] Deadlines set for evidence collection
### Updated Story File (if enabled and requested)
- [ ] "NFR Assessment" section added to story markdown
- [ ] Link to NFR assessment report included
- [ ] Overall status and critical issues included
- [ ] Gate status included
---
## Quality Assurance
### Accuracy Checks
- [ ] All NFR categories assessed (none skipped)
- [ ] All thresholds documented (defined or UNKNOWN)
- [ ] All evidence sources documented (file paths, metric names)
- [ ] Status classifications are deterministic and consistent
- [ ] No false positives (status correctly assigned)
- [ ] No false negatives (all issues identified)
### Completeness Checks
- [ ] All NFR categories covered (performance, security, reliability, maintainability, custom)
- [ ] All evidence sources checked (test results, metrics, logs, CI results)
- [ ] All status types used appropriately (PASS, CONCERNS, FAIL)
- [ ] All NFRs with CONCERNS/FAIL have recommendations
- [ ] All evidence gaps have owners and deadlines
### Actionability Checks
- [ ] Recommendations are specific (not generic)
- [ ] Remediation steps are clear and actionable
- [ ] Priorities are assigned (CRITICAL, HIGH, MEDIUM, LOW)
- [ ] Effort estimates are provided (hours, days)
- [ ] Owners are suggested (dev, ops, security)
---
## Integration with BMad Artifacts
### With tech-spec.md
- [ ] Tech spec loaded for NFR requirements and thresholds
- [ ] Performance targets extracted
- [ ] Security requirements extracted
- [ ] Reliability SLAs extracted
- [ ] Architectural decisions considered
### With test-design.md
- [ ] Test design loaded for NFR test plan
- [ ] Test priorities referenced (P0/P1/P2/P3)
- [ ] Assessment aligned with planned NFR validation
### With PRD.md
- [ ] PRD loaded for product-level NFR context
- [ ] User experience goals considered
- [ ] Unstated requirements checked
- [ ] Product-level SLAs referenced
---
## Quality Gates Validation
### Release Blocker (FAIL)
- [ ] Critical NFR status checked (security, reliability)
- [ ] Performance failures assessed for user impact
- [ ] Release blocker flagged if critical NFR has FAIL status
### PR Blocker (HIGH CONCERNS)
- [ ] High-priority NFR status checked
- [ ] Multiple CONCERNS assessed
- [ ] PR blocker flagged if HIGH priority issues exist
### Warning (CONCERNS)
- [ ] Any NFR with CONCERNS status flagged
- [ ] Missing or incomplete evidence documented
- [ ] Warning issued to address before next release
### Pass (PASS)
- [ ] All NFRs have PASS status
- [ ] No blockers or concerns exist
- [ ] Ready for release confirmed
---
## Non-Prescriptive Validation
- [ ] NFR categories adapted to team needs
- [ ] Thresholds appropriate for project context
- [ ] Assessment criteria customized as needed
- [ ] Teams can extend with custom NFR categories
- [ ] Integration with external tools supported (New Relic, Datadog, SonarQube, JIRA)
---
## Documentation and Communication
- [ ] NFR assessment report is readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
- [ ] Overall status is prominent and unambiguous
- [ ] Executive summary provides quick understanding
---
## Final Validation
- [ ] All prerequisites met
- [ ] All NFR categories assessed with evidence (or gaps documented)
- [ ] No thresholds were guessed (all defined or UNKNOWN)
- [ ] Status classifications are deterministic and justified
- [ ] Quick wins identified for all CONCERNS/FAIL
- [ ] Recommended actions are specific and actionable
- [ ] Evidence gaps documented with owners and deadlines
- [ ] NFR assessment report generated and saved
- [ ] Gate YAML snippet generated (if enabled)
- [ ] Evidence checklist generated (if enabled)
- [ ] Workflow completed successfully
---
## Sign-Off
**NFR Assessment Status:**
- [ ] ✅ PASS - All NFRs meet requirements, ready for release
- [ ] ⚠️ CONCERNS - Some NFRs have concerns, address before next release
- [ ] ❌ FAIL - Critical NFRs not met, BLOCKER for release
**Next Actions:**
- If PASS ✅: Proceed to `*gate` workflow or release
- If CONCERNS ⚠️: Address HIGH/CRITICAL issues, re-run `*nfr-assess`
- If FAIL ❌: Resolve FAIL status NFRs, re-run `*nfr-assess`
**Critical Issues:** {COUNT}
**High Priority Issues:** {COUNT}
**Concerns:** {COUNT}
---
<!-- Powered by BMAD-CORE™ -->

# Non-Functional Requirements Assessment - Instructions v4.0
**Workflow:** `testarch-nfr`
**Purpose:** Assess non-functional requirements (performance, security, reliability, maintainability) before release with evidence-based validation
**Agent:** Test Architect (TEA)
**Format:** Pure Markdown v4.0 (no XML blocks)
---
## Overview
This workflow performs a comprehensive assessment of non-functional requirements (NFRs) to validate that the implementation meets performance, security, reliability, and maintainability standards before release. It uses evidence-based validation with deterministic PASS/CONCERNS/FAIL rules and provides actionable recommendations for remediation.
**Key Capabilities:**
- Assess multiple NFR categories (performance, security, reliability, maintainability, custom)
- Validate NFRs against defined thresholds from tech specs, PRD, or defaults
- Classify status deterministically (PASS/CONCERNS/FAIL) based on evidence
- Never guess thresholds - mark as CONCERNS if unknown
- Generate gate-ready YAML snippets for CI/CD integration
- Provide quick wins and recommended actions for remediation
- Create evidence checklists for gaps
---
## Prerequisites
**Required:**
- Implementation deployed locally or accessible for evaluation
- Evidence sources available (test results, metrics, logs, CI results)
**Recommended:**
- NFR requirements defined in tech-spec.md, PRD.md, or story
- Test results from performance, security, reliability tests
- Application metrics (response times, error rates, throughput)
- CI/CD pipeline results for burn-in validation
**Halt Conditions:**
- If NFR targets are undefined and cannot be obtained, halt and request definition
- If implementation is not accessible for evaluation, halt and request deployment
---
## Workflow Steps
### Step 1: Load Context and Knowledge Base
**Actions:**
1. Load relevant knowledge fragments from `{project-root}/bmad/bmm/testarch/tea-index.csv`:
- `nfr-criteria.md` - Non-functional requirements criteria and thresholds (security, performance, reliability, maintainability with code examples, 658 lines, 4 examples)
- `ci-burn-in.md` - CI/CD burn-in patterns for reliability validation (10-iteration detection, sharding, selective execution, 678 lines, 4 examples)
- `test-quality.md` - Test quality expectations for maintainability (deterministic, isolated, explicit assertions, length/time limits, 658 lines, 5 examples)
- `playwright-config.md` - Performance configuration patterns: parallelization, timeout standards, artifact output (722 lines, 5 examples)
- `error-handling.md` - Reliability validation patterns: scoped exceptions, retry validation, telemetry logging, graceful degradation (736 lines, 4 examples)
2. Read story file (if provided):
- Extract NFR requirements
- Identify specific thresholds or SLAs
- Note any custom NFR categories
3. Read related BMad artifacts (if available):
- `tech-spec.md` - Technical NFR requirements and targets
- `PRD.md` - Product-level NFR context (user expectations)
- `test-design.md` - NFR test plan and priorities
**Output:** Complete understanding of NFR targets, evidence sources, and validation criteria
---
### Step 2: Identify NFR Categories and Thresholds
**Actions:**
1. Determine which NFR categories to assess (default: performance, security, reliability, maintainability):
- **Performance**: Response time, throughput, resource usage
- **Security**: Authentication, authorization, data protection, vulnerability scanning
- **Reliability**: Error handling, recovery, availability, fault tolerance
- **Maintainability**: Code quality, test coverage, documentation, technical debt
2. Add custom NFR categories if specified (e.g., accessibility, internationalization, compliance)
3. Gather thresholds for each NFR:
- From tech-spec.md (primary source)
- From PRD.md (product-level SLAs)
- From story file (feature-specific requirements)
- From workflow variables (default thresholds)
- Mark thresholds as UNKNOWN if not defined
4. Never guess thresholds - if a threshold is unknown, mark the NFR as CONCERNS
**Output:** Complete list of NFRs to assess with defined (or UNKNOWN) thresholds
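As an illustration of the source precedence above (tech spec, then PRD, then story, then workflow defaults), a minimal sketch; the parsed dictionaries are hypothetical stand-ins for values extracted from each artifact:
```python
# resolve_threshold.py - sketch of the precedence described above: tech spec,
# then PRD, then story, then workflow defaults; otherwise UNKNOWN.
def resolve_threshold(nfr, *sources):
    for source in sources:
        if nfr in source:
            return source[nfr]
    return None  # UNKNOWN - the NFR will be classified as CONCERNS

# Hypothetical parsed values, for illustration only.
tech_spec = {"response_time_p95_ms": 400}
prd = {"uptime_pct": 99.9}
story = {}
defaults = {"response_time_p95_ms": 500, "security_score_min": 85}

print(resolve_threshold("response_time_p95_ms", tech_spec, prd, story, defaults))  # 400
print(resolve_threshold("error_rate_pct", tech_spec, prd, story, defaults))        # None -> UNKNOWN
```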
---
### Step 3: Gather Evidence
**Actions:**
1. For each NFR category, discover evidence sources:
**Performance Evidence:**
- Load test results (JMeter, k6, Lighthouse)
- Application metrics (response times, throughput, resource usage)
- Performance monitoring data (New Relic, Datadog, APM)
- Playwright performance traces (if applicable)
**Security Evidence:**
- Security scan results (SAST, DAST, dependency scanning)
- Authentication/authorization test results
- Penetration test reports
- Vulnerability assessment reports
- Compliance audit results
**Reliability Evidence:**
- Error logs and error rates
- Uptime monitoring data
- Chaos engineering test results
- Failover/recovery test results
- CI burn-in results (stability over time)
**Maintainability Evidence:**
- Code coverage reports (Istanbul, NYC, c8)
- Static analysis results (ESLint, SonarQube)
- Technical debt metrics
- Documentation completeness
- Test quality assessment (from test-review workflow)
2. Read relevant files from evidence directories:
- `{test_results_dir}` for test execution results
- `{metrics_dir}` for application metrics
- `{logs_dir}` for application logs
- CI/CD pipeline results (if `include_ci_results` is true)
3. Mark NFRs without evidence as "NO EVIDENCE" - never infer or assume
**Output:** Comprehensive evidence inventory for each NFR
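A minimal sketch of the discovery pass described above; the glob patterns are assumptions, not a prescribed layout:
```python
# gather_evidence.py - sketch: collect evidence files per category from the
# configured directories; categories with no matches are marked NO EVIDENCE.
from pathlib import Path

# Hypothetical glob patterns - adapt to how your project stores evidence.
SOURCES = {
    "performance": ["test-results/load-*.json", "metrics/*.json"],
    "security": ["test-results/*scan*.json", "test-results/security-*.json"],
    "reliability": ["logs/errors-*.log", "metrics/uptime-*.csv"],
    "maintainability": ["coverage/**/index.html", "test-results/lint-*.json"],
}

def gather(root="."):
    evidence = {}
    for category, patterns in SOURCES.items():
        files = [p for pattern in patterns for p in Path(root).glob(pattern)]
        evidence[category] = files or "NO EVIDENCE"  # never infer or assume
    return evidence

for category, found in gather().items():
    print(f"{category}: {found}")
```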
---
### Step 4: Assess NFRs with Deterministic Rules
**Actions:**
1. For each NFR, apply deterministic PASS/CONCERNS/FAIL rules:
**PASS Criteria:**
- Evidence exists AND meets defined threshold
- No concerns flagged in evidence
- Example: Response time is 350ms (threshold: 500ms) → PASS
**CONCERNS Criteria:**
- Threshold is UNKNOWN (not defined)
- Evidence is MISSING or INCOMPLETE
- Evidence is close to threshold (within 10%)
- Evidence shows intermittent issues
- Example: Response time is 480ms (threshold: 500ms, 96% of threshold) → CONCERNS
**FAIL Criteria:**
- Evidence exists BUT does not meet threshold
- Critical evidence is MISSING
- Evidence shows consistent failures
- Example: Response time is 750ms (threshold: 500ms) → FAIL
2. Document findings for each NFR:
- Status (PASS/CONCERNS/FAIL)
- Evidence source (file path, test name, metric name)
- Actual value vs threshold
- Justification for status classification
3. Classify severity based on category:
- **CRITICAL**: Security failures, reliability failures (affect users immediately)
- **HIGH**: Performance failures, maintainability failures (affect users soon)
- **MEDIUM**: Concerns without failures (may affect users eventually)
- **LOW**: Missing evidence for non-critical NFRs
**Output:** Complete NFR assessment with deterministic status classifications
---
### Step 5: Identify Quick Wins and Recommended Actions
**Actions:**
1. For each NFR with CONCERNS or FAIL status, identify quick wins:
- Low-effort, high-impact improvements
- Configuration changes (no code changes needed)
- Optimization opportunities (caching, indexing, compression)
- Monitoring additions (detect issues before they become failures)
2. Provide recommended actions for each issue:
- Specific steps to remediate (not generic advice)
- Priority (CRITICAL, HIGH, MEDIUM, LOW)
- Estimated effort (hours, days)
- Owner suggestion (dev, ops, security)
3. Suggest monitoring hooks for gaps:
- Add performance monitoring (APM, synthetic monitoring)
- Add error tracking (Sentry, Rollbar, error logs)
- Add security monitoring (intrusion detection, audit logs)
- Add alerting thresholds (notify before thresholds are breached)
4. Suggest fail-fast mechanisms:
- Add circuit breakers for reliability
- Add rate limiting for performance
- Add validation gates for security
- Add smoke tests for maintainability
**Output:** Actionable remediation plan with prioritized recommendations
---
### Step 6: Generate Deliverables
**Actions:**
1. Create NFR assessment markdown file:
- Use template from `nfr-report-template.md`
- Include executive summary (overall status, critical issues)
- Add NFR-by-NFR assessment (status, evidence, thresholds)
- Add findings summary (PASS count, CONCERNS count, FAIL count)
- Add quick wins section
- Add recommended actions section
- Add evidence gaps checklist
- Save to `{output_folder}/nfr-assessment.md`
2. Generate gate YAML snippet (if enabled):
```yaml
nfr_assessment:
date: '2025-10-14'
categories:
performance: 'PASS'
security: 'CONCERNS'
reliability: 'PASS'
maintainability: 'PASS'
overall_status: 'CONCERNS'
critical_issues: 0
high_priority_issues: 1
concerns: 2
blockers: false
```
3. Generate evidence checklist (if enabled):
- List all NFRs with MISSING or INCOMPLETE evidence
- Assign owners for evidence collection
- Suggest evidence sources (tests, metrics, logs)
- Set deadlines for evidence collection
4. Update story file (if enabled and requested):
- Add "NFR Assessment" section to story markdown
- Link to NFR assessment report
- Include overall status and critical issues
- Add gate status
**Output:** Complete NFR assessment documentation ready for review and CI/CD integration
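For teams automating step 2 above, a minimal sketch of deriving and emitting the gate snippet (assumes PyYAML; the aggregation policy shown, where any FAIL fails overall and any CONCERNS degrades overall, mirrors the deterministic rules):
```python
# emit_gate_yaml.py - sketch: derive the overall status from per-category
# statuses and write the snippet shown above (assumes PyYAML is installed).
import datetime

import yaml

categories = {"performance": "PASS", "security": "CONCERNS",
              "reliability": "PASS", "maintainability": "PASS"}

statuses = set(categories.values())
overall = ("FAIL" if "FAIL" in statuses
           else "CONCERNS" if "CONCERNS" in statuses
           else "PASS")

gate = {"nfr_assessment": {
    "date": datetime.date.today().isoformat(),
    "categories": categories,
    "overall_status": overall,
    "blockers": "FAIL" in statuses,
}}

with open("nfr-gate.yaml", "w") as f:
    yaml.safe_dump(gate, f, sort_keys=False)
```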
---
## Non-Prescriptive Approach
**Minimal Examples:** This workflow provides principles and patterns, not rigid templates. Teams should adapt NFR categories, thresholds, and assessment criteria to their needs.
**Key Patterns to Follow:**
- Use evidence-based validation (no guessing or inference)
- Apply deterministic rules (consistent PASS/CONCERNS/FAIL classification)
- Never guess thresholds (mark as CONCERNS if unknown)
- Provide actionable recommendations (specific steps, not generic advice)
- Generate gate-ready artifacts (YAML snippets for CI/CD)
**Extend as Needed:**
- Add custom NFR categories (accessibility, internationalization, compliance)
- Integrate with external tools (New Relic, Datadog, SonarQube, JIRA)
- Add custom thresholds and rules
- Link to external assessment systems
---
## NFR Categories and Criteria
### Performance
**Criteria:**
- Response time (p50, p95, p99 percentiles)
- Throughput (requests per second, transactions per second)
- Resource usage (CPU, memory, disk, network)
- Scalability (horizontal, vertical)
**Thresholds (Default):**
- Response time p95: 500ms
- Throughput: 100 RPS
- CPU usage: < 70% average
- Memory usage: < 80% max
**Evidence Sources:**
- Load test results (JMeter, k6, Gatling)
- APM data (New Relic, Datadog, Dynatrace)
- Lighthouse reports (for web apps)
- Playwright performance traces
---
### Security
**Criteria:**
- Authentication (login security, session management)
- Authorization (access control, permissions)
- Data protection (encryption, PII handling)
- Vulnerability management (SAST, DAST, dependency scanning)
- Compliance (GDPR, HIPAA, PCI-DSS)
**Thresholds (Default):**
- Security score: >= 85/100
- Critical vulnerabilities: 0
- High vulnerabilities: < 3
- Authentication strength: MFA enabled
**Evidence Sources:**
- SAST results (SonarQube, Checkmarx, Veracode)
- DAST results (OWASP ZAP, Burp Suite)
- Dependency scanning (Snyk, Dependabot, npm audit)
- Penetration test reports
- Security audit logs
---
### Reliability
**Criteria:**
- Availability (uptime percentage)
- Error handling (graceful degradation, error recovery)
- Fault tolerance (redundancy, failover)
- Disaster recovery (backup, restore, RTO/RPO)
- Stability (CI burn-in, chaos engineering)
**Thresholds (Default):**
- Uptime: >= 99.9% (three nines)
- Error rate: < 0.1% (1 in 1000 requests)
- MTTR (Mean Time To Recovery): < 15 minutes
- CI burn-in: 100 consecutive successful runs
**Evidence Sources:**
- Uptime monitoring (Pingdom, UptimeRobot, StatusCake)
- Error logs and error rates
- CI burn-in results (see `ci-burn-in.md`)
- Chaos engineering test results (Chaos Monkey, Gremlin)
- Incident reports and postmortems
---
### Maintainability
**Criteria:**
- Code quality (complexity, duplication, code smells)
- Test coverage (unit, integration, E2E)
- Documentation (code comments, README, architecture docs)
- Technical debt (debt ratio, code churn)
- Test quality (from test-review workflow)
**Thresholds (Default):**
- Test coverage: >= 80%
- Code quality score: >= 85/100
- Technical debt ratio: < 5%
- Documentation completeness: >= 90%
**Evidence Sources:**
- Coverage reports (Istanbul, NYC, c8, JaCoCo)
- Static analysis (ESLint, SonarQube, CodeClimate)
- Documentation audit (manual or automated)
- Test review report (from test-review workflow)
- Git metrics (code churn, commit frequency)
---
## Deterministic Assessment Rules
### PASS Rules
- Evidence exists
- Evidence meets or exceeds threshold
- No concerns flagged
- Quality is acceptable
**Example:**
```markdown
NFR: Response Time p95
Threshold: 500ms
Evidence: Load test result shows 350ms p95
Status: PASS ✅
```
---
### CONCERNS Rules
- Threshold is UNKNOWN
- Evidence is MISSING or INCOMPLETE
- Evidence is close to threshold (within 10%)
- Evidence shows intermittent issues
- Quality is marginal
**Example:**
```markdown
NFR: Response Time p95
Threshold: 500ms
Evidence: Load test result shows 480ms p95 (96% of threshold)
Status: CONCERNS ⚠️
Recommendation: Optimize before production - very close to threshold
```
---
### FAIL Rules
- Evidence exists BUT does not meet threshold
- Critical evidence is MISSING
- Evidence shows consistent failures
- Quality is unacceptable
**Example:**
```markdown
NFR: Response Time p95
Threshold: 500ms
Evidence: Load test result shows 750ms p95 (150% of threshold)
Status: FAIL ❌
Recommendation: BLOCKER - optimize performance before release
```
---
## Integration with BMad Artifacts
### With tech-spec.md
- Primary source for NFR requirements and thresholds
- Load performance targets, security requirements, reliability SLAs
- Use architectural decisions to understand NFR trade-offs
### With test-design.md
- Understand NFR test plan and priorities
- Reference test priorities (P0/P1/P2/P3) for severity classification
- Align assessment with planned NFR validation
### With PRD.md
- Understand product-level NFR expectations
- Verify NFRs align with user experience goals
- Check for unstated NFR requirements (implied by product goals)
---
## Quality Gates
### Release Blocker (FAIL)
- Critical NFR has FAIL status (security, reliability)
- Performance failure affects user experience severely
- Do not release until FAIL is resolved
### PR Blocker (HIGH CONCERNS)
- High-priority NFR has FAIL status
- Multiple CONCERNS exist
- Block PR merge until addressed
### Warning (CONCERNS)
- Any NFR has CONCERNS status
- Evidence is missing or incomplete
- Address before next release
### Pass (PASS)
- All NFRs have PASS status
- No blockers or concerns
- Ready for release
---
## Example NFR Assessment
````markdown
# NFR Assessment - Story 1.3
**Feature:** User Authentication
**Date:** 2025-10-14
**Overall Status:** CONCERNS ⚠️ (1 HIGH issue)
## Executive Summary
**Assessment:** 3 PASS, 1 CONCERNS, 0 FAIL
**Blockers:** None
**High Priority Issues:** 1 (Security - MFA not enforced)
**Recommendation:** Address security concern before release
## Performance Assessment
### Response Time (p95)
- **Status:** PASS ✅
- **Threshold:** 500ms
- **Actual:** 320ms (64% of threshold)
- **Evidence:** Load test results (test-results/load-2025-10-14.json)
- **Findings:** Response time well below threshold across all percentiles
### Throughput
- **Status:** PASS ✅
- **Threshold:** 100 RPS
- **Actual:** 250 RPS (250% of threshold)
- **Evidence:** Load test results (test-results/load-2025-10-14.json)
- **Findings:** System handles 2.5x target load without degradation
## Security Assessment
### Authentication Strength
- **Status:** CONCERNS ⚠️
- **Threshold:** MFA enabled for all users
- **Actual:** MFA optional (not enforced)
- **Evidence:** Security audit (security-audit-2025-10-14.md)
- **Findings:** MFA is implemented but not enforced by default
- **Recommendation:** HIGH - Enforce MFA for all new accounts, provide migration path for existing users
### Data Protection
- **Status:** PASS ✅
- **Threshold:** PII encrypted at rest and in transit
- **Actual:** AES-256 at rest, TLS 1.3 in transit
- **Evidence:** Security scan (security-scan-2025-10-14.json)
- **Findings:** All PII properly encrypted
## Reliability Assessment
### Uptime
- **Status:** PASS ✅
- **Threshold:** 99.9% (three nines)
- **Actual:** 99.95% over 30 days
- **Evidence:** Uptime monitoring (uptime-report-2025-10-14.csv)
- **Findings:** Exceeds target with margin
### Error Rate
- **Status:** PASS ✅
- **Threshold:** < 0.1% (1 in 1000)
- **Actual:** 0.05% (1 in 2000)
- **Evidence:** Error logs (logs/errors-2025-10.log)
- **Findings:** Error rate well below threshold
## Maintainability Assessment
### Test Coverage
- **Status:** PASS ✅
- **Threshold:** >= 80%
- **Actual:** 87%
- **Evidence:** Coverage report (coverage/lcov-report/index.html)
- **Findings:** Coverage exceeds threshold with good distribution
### Code Quality
- **Status:** PASS ✅
- **Threshold:** >= 85/100
- **Actual:** 92/100
- **Evidence:** SonarQube analysis (sonarqube-report-2025-10-14.pdf)
- **Findings:** High code quality score with low technical debt
## Quick Wins
1. **Enforce MFA (Security)** - HIGH - 4 hours
- Add configuration flag to enforce MFA for new accounts
- No code changes needed, only config adjustment
## Recommended Actions
### Immediate (Before Release)
1. **Enforce MFA for all new accounts** - HIGH - 4 hours - Security Team
- Add `ENFORCE_MFA=true` to production config
- Update user onboarding flow to require MFA setup
- Test MFA enforcement in staging environment
### Short-term (Next Sprint)
1. **Migrate existing users to MFA** - MEDIUM - 3 days - Product + Engineering
- Design migration UX (prompt, incentives, deadline)
- Implement migration flow with grace period
- Communicate migration to existing users
## Evidence Gaps
- [ ] Chaos engineering test results (reliability)
- Owner: DevOps Team
- Deadline: 2025-10-21
- Suggested evidence: Run chaos monkey tests in staging
- [ ] Penetration test report (security)
- Owner: Security Team
- Deadline: 2025-10-28
- Suggested evidence: Schedule third-party pentest
## Gate YAML Snippet
```yaml
nfr_assessment:
date: '2025-10-14'
story_id: '1.3'
categories:
performance: 'PASS'
security: 'CONCERNS'
reliability: 'PASS'
maintainability: 'PASS'
overall_status: 'CONCERNS'
critical_issues: 0
high_priority_issues: 1
medium_priority_issues: 0
concerns: 1
blockers: false
recommendations:
- 'Enforce MFA for all new accounts (HIGH - 4 hours)'
evidence_gaps: 2
```
## Recommendations Summary
- **Release Blocker:** None ✅
- **High Priority:** 1 (Enforce MFA before release)
- **Medium Priority:** 1 (Migrate existing users to MFA)
- **Next Steps:** Address HIGH priority item, then proceed to gate workflow
````
---
## Validation Checklist
Before completing this workflow, verify:
- ✅ All NFR categories assessed (performance, security, reliability, maintainability, custom)
- ✅ Thresholds defined or marked as UNKNOWN
- ✅ Evidence gathered for each NFR (or marked as MISSING)
- ✅ Status classified deterministically (PASS/CONCERNS/FAIL)
- ✅ No thresholds were guessed (marked as CONCERNS if unknown)
- ✅ Quick wins identified for CONCERNS/FAIL
- ✅ Recommended actions are specific and actionable
- ✅ Evidence gaps documented with owners and deadlines
- ✅ NFR assessment report generated and saved
- ✅ Gate YAML snippet generated (if enabled)
- ✅ Evidence checklist generated (if enabled)
---
## Notes
- **Never Guess Thresholds:** If a threshold is unknown, mark as CONCERNS and recommend defining it
- **Evidence-Based:** Every assessment must be backed by evidence (tests, metrics, logs, CI results)
- **Deterministic Rules:** Use consistent PASS/CONCERNS/FAIL classification based on evidence
- **Actionable Recommendations:** Provide specific steps, not generic advice
- **Gate Integration:** Generate YAML snippets that can be consumed by CI/CD pipelines
---
## Troubleshooting
### "NFR thresholds not defined"
- Check tech-spec.md for NFR requirements
- Check PRD.md for product-level SLAs
- Check story file for feature-specific requirements
- If thresholds truly unknown, mark as CONCERNS and recommend defining them
### "No evidence found"
- Check evidence directories (test-results, metrics, logs)
- Check CI/CD pipeline for test results
- If evidence truly missing, mark NFR as "NO EVIDENCE" and recommend generating it
### "CONCERNS status but no threshold exceeded"
- CONCERNS is correct when threshold is UNKNOWN or evidence is MISSING/INCOMPLETE
- CONCERNS is also correct when evidence is close to threshold (within 10%)
- Document why CONCERNS was assigned
### "FAIL status blocks release"
- This is intentional - FAIL means critical NFR not met
- Recommend remediation actions with specific steps
- Re-run assessment after remediation
---
## Related Workflows
- **testarch-test-design** - Define NFR requirements and test plan
- **testarch-framework** - Set up performance/security testing frameworks
- **testarch-ci** - Configure CI/CD for NFR validation
- **testarch-gate** - Use NFR assessment as input for quality gate decisions
- **testarch-test-review** - Review test quality (maintainability NFR)
---
<!-- Powered by BMAD-CORE™ -->
# NFR Assessment - {FEATURE_NAME}
**Date:** {DATE}
**Story:** {STORY_ID} (if applicable)
**Overall Status:** {OVERALL_STATUS} {STATUS_ICON}
---
## Executive Summary
**Assessment:** {PASS_COUNT} PASS, {CONCERNS_COUNT} CONCERNS, {FAIL_COUNT} FAIL
**Blockers:** {BLOCKER_COUNT} {BLOCKER_DESCRIPTION}
**High Priority Issues:** {HIGH_PRIORITY_COUNT} {HIGH_PRIORITY_DESCRIPTION}
**Recommendation:** {OVERALL_RECOMMENDATION}
---
## Performance Assessment
### Response Time (p95)
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
### Throughput
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
### Resource Usage
- **CPU Usage**
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
- **Memory Usage**
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
### Scalability
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
---
## Security Assessment
### Authentication Strength
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
- **Recommendation:** {RECOMMENDATION} (if CONCERNS or FAIL)
### Authorization Controls
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
### Data Protection
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
### Vulnerability Management
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION} (e.g., "0 critical, <3 high vulnerabilities")
- **Actual:** {ACTUAL_DESCRIPTION} (e.g., "0 critical, 1 high, 5 medium vulnerabilities")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Snyk scan results - scan-2025-10-14.json")
- **Findings:** {FINDINGS_DESCRIPTION}
### Compliance (if applicable)
- **Status:** {STATUS} {STATUS_ICON}
- **Standards:** {COMPLIANCE_STANDARDS} (e.g., "GDPR, HIPAA, PCI-DSS")
- **Actual:** {ACTUAL_COMPLIANCE_STATUS}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
---
## Reliability Assessment
### Availability (Uptime)
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "99.9%")
- **Actual:** {ACTUAL_VALUE} (e.g., "99.95%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Uptime monitoring - uptime-report-2025-10-14.csv")
- **Findings:** {FINDINGS_DESCRIPTION}
### Error Rate
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "<0.1%")
- **Actual:** {ACTUAL_VALUE} (e.g., "0.05%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Error logs - logs/errors-2025-10.log")
- **Findings:** {FINDINGS_DESCRIPTION}
### MTTR (Mean Time To Recovery)
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "<15 minutes")
- **Actual:** {ACTUAL_VALUE} (e.g., "12 minutes")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Incident reports - incidents/")
- **Findings:** {FINDINGS_DESCRIPTION}
### Fault Tolerance
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
### CI Burn-In (Stability)
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "100 consecutive successful runs")
- **Actual:** {ACTUAL_VALUE} (e.g., "150 consecutive successful runs")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "CI burn-in results - ci-burn-in-2025-10-14.log")
- **Findings:** {FINDINGS_DESCRIPTION}
### Disaster Recovery (if applicable)
- **RTO (Recovery Time Objective)**
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
- **RPO (Recovery Point Objective)**
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
---
## Maintainability Assessment
### Test Coverage
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=80%")
- **Actual:** {ACTUAL_VALUE} (e.g., "87%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Coverage report - coverage/lcov-report/index.html")
- **Findings:** {FINDINGS_DESCRIPTION}
### Code Quality
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=85/100")
- **Actual:** {ACTUAL_VALUE} (e.g., "92/100")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "SonarQube analysis - sonarqube-report-2025-10-14.pdf")
- **Findings:** {FINDINGS_DESCRIPTION}
### Technical Debt
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "<5% debt ratio")
- **Actual:** {ACTUAL_VALUE} (e.g., "3.2% debt ratio")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "CodeClimate analysis - codeclimate-2025-10-14.json")
- **Findings:** {FINDINGS_DESCRIPTION}
### Documentation Completeness
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=90%")
- **Actual:** {ACTUAL_VALUE} (e.g., "95%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Documentation audit - docs-audit-2025-10-14.md")
- **Findings:** {FINDINGS_DESCRIPTION}
### Test Quality (from test-review, if available)
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Test review report - test-review-2025-10-14.md")
- **Findings:** {FINDINGS_DESCRIPTION}
---
## Custom NFR Assessments (if applicable)
### {CUSTOM_NFR_NAME_1}
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
### {CUSTOM_NFR_NAME_2}
- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
---
## Quick Wins
{QUICK_WIN_COUNT} quick wins identified for immediate implementation:
1. **{QUICK_WIN_TITLE_1}** ({NFR_CATEGORY}) - {PRIORITY} - {ESTIMATED_EFFORT}
- {QUICK_WIN_DESCRIPTION}
- No code changes needed / Minimal code changes
2. **{QUICK_WIN_TITLE_2}** ({NFR_CATEGORY}) - {PRIORITY} - {ESTIMATED_EFFORT}
- {QUICK_WIN_DESCRIPTION}
---
## Recommended Actions
### Immediate (Before Release) - CRITICAL/HIGH Priority
1. **{ACTION_TITLE_1}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
- {ACTION_DESCRIPTION}
- {SPECIFIC_STEPS}
- {VALIDATION_CRITERIA}
2. **{ACTION_TITLE_2}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
- {ACTION_DESCRIPTION}
- {SPECIFIC_STEPS}
- {VALIDATION_CRITERIA}
### Short-term (Next Sprint) - MEDIUM Priority
1. **{ACTION_TITLE_3}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
- {ACTION_DESCRIPTION}
2. **{ACTION_TITLE_4}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
- {ACTION_DESCRIPTION}
### Long-term (Backlog) - LOW Priority
1. **{ACTION_TITLE_5}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
- {ACTION_DESCRIPTION}
---
## Monitoring Hooks
{MONITORING_HOOK_COUNT} monitoring hooks recommended to detect issues before failures:
### Performance Monitoring
- [ ] {MONITORING_TOOL_1} - {MONITORING_DESCRIPTION}
- **Owner:** {OWNER}
- **Deadline:** {DEADLINE}
- [ ] {MONITORING_TOOL_2} - {MONITORING_DESCRIPTION}
- **Owner:** {OWNER}
- **Deadline:** {DEADLINE}
### Security Monitoring
- [ ] {MONITORING_TOOL_3} - {MONITORING_DESCRIPTION}
- **Owner:** {OWNER}
- **Deadline:** {DEADLINE}
### Reliability Monitoring
- [ ] {MONITORING_TOOL_4} - {MONITORING_DESCRIPTION}
- **Owner:** {OWNER}
- **Deadline:** {DEADLINE}
### Alerting Thresholds
- [ ] {ALERT_DESCRIPTION} - Notify when {THRESHOLD_CONDITION}
- **Owner:** {OWNER}
- **Deadline:** {DEADLINE}
---
## Fail-Fast Mechanisms
{FAIL_FAST_COUNT} fail-fast mechanisms recommended to prevent failures:
### Circuit Breakers (Reliability)
- [ ] {CIRCUIT_BREAKER_DESCRIPTION}
- **Owner:** {OWNER}
- **Estimated Effort:** {EFFORT}
### Rate Limiting (Performance)
- [ ] {RATE_LIMITING_DESCRIPTION}
- **Owner:** {OWNER}
- **Estimated Effort:** {EFFORT}
### Validation Gates (Security)
- [ ] {VALIDATION_GATE_DESCRIPTION}
- **Owner:** {OWNER}
- **Estimated Effort:** {EFFORT}
### Smoke Tests (Maintainability)
- [ ] {SMOKE_TEST_DESCRIPTION}
- **Owner:** {OWNER}
- **Estimated Effort:** {EFFORT}
---
## Evidence Gaps
{EVIDENCE_GAP_COUNT} evidence gaps identified - action required:
- [ ] **{NFR_NAME_1}** ({NFR_CATEGORY})
- **Owner:** {OWNER}
- **Deadline:** {DEADLINE}
- **Suggested Evidence:** {SUGGESTED_EVIDENCE_SOURCE}
- **Impact:** {IMPACT_DESCRIPTION}
- [ ] **{NFR_NAME_2}** ({NFR_CATEGORY})
- **Owner:** {OWNER}
- **Deadline:** {DEADLINE}
- **Suggested Evidence:** {SUGGESTED_EVIDENCE_SOURCE}
- **Impact:** {IMPACT_DESCRIPTION}
---
## Findings Summary
| Category | PASS | CONCERNS | FAIL | Overall Status |
| --------------- | ---------------- | -------------------- | ---------------- | ----------------------------------- |
| Performance | {P_PASS_COUNT} | {P_CONCERNS_COUNT} | {P_FAIL_COUNT} | {P_STATUS} {P_ICON} |
| Security | {S_PASS_COUNT} | {S_CONCERNS_COUNT} | {S_FAIL_COUNT} | {S_STATUS} {S_ICON} |
| Reliability | {R_PASS_COUNT} | {R_CONCERNS_COUNT} | {R_FAIL_COUNT} | {R_STATUS} {R_ICON} |
| Maintainability | {M_PASS_COUNT} | {M_CONCERNS_COUNT} | {M_FAIL_COUNT} | {M_STATUS} {M_ICON} |
| **Total** | **{TOTAL_PASS}** | **{TOTAL_CONCERNS}** | **{TOTAL_FAIL}** | **{OVERALL_STATUS} {OVERALL_ICON}** |
---
## Gate YAML Snippet
```yaml
nfr_assessment:
date: '{DATE}'
story_id: '{STORY_ID}'
feature_name: '{FEATURE_NAME}'
categories:
performance: '{PERFORMANCE_STATUS}'
security: '{SECURITY_STATUS}'
reliability: '{RELIABILITY_STATUS}'
maintainability: '{MAINTAINABILITY_STATUS}'
overall_status: '{OVERALL_STATUS}'
critical_issues: {CRITICAL_COUNT}
high_priority_issues: {HIGH_COUNT}
medium_priority_issues: {MEDIUM_COUNT}
concerns: {CONCERNS_COUNT}
blockers: {BLOCKER_BOOLEAN} # true/false
quick_wins: {QUICK_WIN_COUNT}
evidence_gaps: {EVIDENCE_GAP_COUNT}
recommendations:
- '{RECOMMENDATION_1}'
- '{RECOMMENDATION_2}'
- '{RECOMMENDATION_3}'
```
---
## Related Artifacts
- **Story File:** {STORY_FILE_PATH} (if applicable)
- **Tech Spec:** {TECH_SPEC_PATH} (if available)
- **PRD:** {PRD_PATH} (if available)
- **Test Design:** {TEST_DESIGN_PATH} (if available)
- **Evidence Sources:**
- Test Results: {TEST_RESULTS_DIR}
- Metrics: {METRICS_DIR}
- Logs: {LOGS_DIR}
- CI Results: {CI_RESULTS_PATH}
---
## Recommendations Summary
**Release Blocker:** {RELEASE_BLOCKER_SUMMARY}
**High Priority:** {HIGH_PRIORITY_SUMMARY}
**Medium Priority:** {MEDIUM_PRIORITY_SUMMARY}
**Next Steps:** {NEXT_STEPS_DESCRIPTION}
---
## Sign-Off
**NFR Assessment:**
- Overall Status: {OVERALL_STATUS} {OVERALL_ICON}
- Critical Issues: {CRITICAL_COUNT}
- High Priority Issues: {HIGH_COUNT}
- Concerns: {CONCERNS_COUNT}
- Evidence Gaps: {EVIDENCE_GAP_COUNT}
**Gate Status:** {GATE_STATUS} {GATE_ICON}
**Next Actions:**
- If PASS ✅: Proceed to `*gate` workflow or release
- If CONCERNS ⚠️: Address HIGH/CRITICAL issues, re-run `*nfr-assess`
- If FAIL ❌: Resolve FAIL status NFRs, re-run `*nfr-assess`
**Generated:** {DATE}
**Workflow:** testarch-nfr v4.0
---
<!-- Powered by BMAD-CORE™ -->

# Test Architect workflow: nfr-assess
name: testarch-nfr
description: "Assess non-functional requirements (performance, security, reliability, maintainability) before release with evidence-based validation"
author: "BMad"
# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/testarch/nfr-assess"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
template: "{installed_path}/nfr-report-template.md"
# Variables and inputs
variables:
# NFR category assessment (defaults to all categories)
custom_nfr_categories: "" # Optional additional categories beyond standard (security, performance, reliability, maintainability)
# Output configuration
default_output_file: "{output_folder}/nfr-assessment.md"
# Required tools
required_tools:
- read_file # Read story, test results, metrics, logs, BMad artifacts
- write_file # Create NFR assessment, gate YAML, evidence checklist
- list_files # Discover test results, metrics, logs
- search_repo # Find NFR-related tests and evidence
- glob # Find result files matching patterns
# Recommended inputs
recommended_inputs:
- story: "Story markdown with NFR requirements (optional)"
- tech_spec: "Technical specification with NFR targets (recommended)"
- test_results: "Test execution results (performance, security, etc.)"
- metrics: "Application metrics (response times, error rates, etc.)"
- logs: "Application logs for reliability analysis"
- ci_results: "CI/CD pipeline results for burn-in validation"
tags:
- qa
- nfr
- test-architect
- performance
- security
- reliability
execution_hints:
interactive: false # Minimize prompts
autonomous: true # Proceed without user input unless blocked
iterative: true