bmad initialization

bmad/bmm/workflows/testarch/atdd/README.md

# ATDD (Acceptance Test-Driven Development) Workflow

Generates failing acceptance tests BEFORE implementation following TDD's red-green-refactor cycle. Creates comprehensive test coverage at appropriate levels (E2E, API, Component) with supporting infrastructure (fixtures, factories, mocks) and provides an implementation checklist to guide development toward passing tests.

**Core Principle**: Tests fail first (red phase), guide development to green, then enable confident refactoring.

## Usage

```bash
bmad tea *atdd
```

The TEA agent runs this workflow when:

- User story is approved with clear acceptance criteria
- Development is about to begin (before any implementation code)
- Team is practicing Test-Driven Development (TDD)
- Need to establish test-first contract with DEV team

## Inputs

**Required Context Files:**

- **Story markdown** (`{story_file}`): User story with acceptance criteria, functional requirements, and technical constraints
- **Framework configuration**: Test framework config (playwright.config.ts or cypress.config.ts) from framework workflow

**Workflow Variables:**

- `story_file`: Path to story markdown with acceptance criteria (required)
- `test_dir`: Directory for test files (default: `{project-root}/tests`)
- `test_framework`: Detected from framework workflow (playwright or cypress)
- `test_levels`: Which test levels to generate (default: "e2e,api,component")
- `primary_level`: Primary test level for acceptance criteria (default: "e2e")
- `start_failing`: Tests must fail initially - red phase (default: true)
- `use_given_when_then`: BDD-style test structure (default: true)
- `network_first`: Route interception before navigation to prevent race conditions (default: true)
- `one_assertion_per_test`: Atomic test design (default: true)
- `generate_factories`: Create data factory stubs using faker (default: true)
- `generate_fixtures`: Create fixture architecture with auto-cleanup (default: true)
- `auto_cleanup`: Fixtures clean up their data automatically (default: true)
- `include_data_testids`: List required data-testid attributes for DEV (default: true)
- `include_mock_requirements`: Document mock/stub needs (default: true)
- `auto_load_knowledge`: Load fixture-architecture, data-factories, component-tdd fragments (default: true)
- `share_with_dev`: Provide implementation checklist to DEV agent (default: true)
- `output_checklist`: Path for implementation checklist (default: `{output_folder}/atdd-checklist-{story_id}.md`)

**Optional Context:**

- **Test design document**: For risk/priority context alignment (P0-P3 scenarios)
- **Existing fixtures/helpers**: For consistency with established patterns
- **Architecture documents**: For understanding system boundaries and integration points

## Outputs

**Primary Deliverable:**

- **ATDD Checklist** (`atdd-checklist-{story_id}.md`): Implementation guide containing:
  - Story summary and acceptance criteria breakdown
  - Test files created with paths and line counts
  - Data factories created with patterns
  - Fixtures created with auto-cleanup logic
  - Mock requirements for external services
  - Required data-testid attributes list
  - Implementation checklist mapping tests to code tasks
  - Red-green-refactor workflow guidance
  - Execution commands for running tests

**Test Files Created:**

- **E2E tests** (`tests/e2e/{feature-name}.spec.ts`): Full user journey tests for critical paths
- **API tests** (`tests/api/{feature-name}.api.spec.ts`): Business logic and service contract tests
- **Component tests** (`tests/component/{ComponentName}.test.tsx`): UI component behavior tests

**Supporting Infrastructure:**

- **Data factories** (`tests/support/factories/{entity}.factory.ts`): Factory functions using @faker-js/faker for generating test data with overrides support
- **Test fixtures** (`tests/support/fixtures/{feature}.fixture.ts`): Playwright fixtures with setup/teardown and auto-cleanup
- **Mock/stub documentation**: Requirements for external service mocking (payment gateways, email services, etc.)
- **data-testid requirements**: List of required test IDs for stable selectors in UI implementation

**Validation Safeguards:**

- All tests must fail initially (red phase verified by local test run)
- Failure messages are clear and actionable
- Tests use Given-When-Then format for readability
- Network-first pattern applied (route interception before navigation)
- One assertion per test (atomic test design)
- No hard waits or sleeps (explicit waits only)

## Key Features

### Red-Green-Refactor Cycle

**RED Phase** (TEA Agent responsibility):

- Write failing tests first, defining expected behavior
- Tests fail for the right reason (missing implementation, not test bugs)
- All supporting infrastructure (factories, fixtures, mocks) created

**GREEN Phase** (DEV Agent responsibility):

- Implement minimal code to pass one test at a time
- Use implementation checklist as guide
- Run tests frequently to verify progress

**REFACTOR Phase** (DEV Agent responsibility):

- Improve code quality with confidence (tests provide safety net)
- Extract duplications, optimize performance
- Ensure tests still pass after changes

### Test Level Selection Framework

**E2E (End-to-End)**:

- Critical user journeys (login, checkout, core workflows)
- Multi-system integration
- User-facing acceptance criteria
- Characteristics: High confidence, slow execution, brittle

**API (Integration)**:

- Business logic validation
- Service contracts and data transformations
- Backend integration without UI
- Characteristics: Fast feedback, good balance, stable

**Component**:

- UI component behavior (buttons, forms, modals)
- Interaction testing (click, hover, keyboard navigation)
- Visual regression and state management
- Characteristics: Fast, isolated, granular

**Unit**:

- Pure business logic and algorithms
- Edge cases and error handling
- Minimal dependencies
- Characteristics: Fastest, most granular
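
As a tiny illustration at the unit level (`calculateTotal` is a hypothetical pure function; Playwright's runner can execute pure-logic tests like this, though teams often prefer a dedicated unit runner):

```typescript
import { test, expect } from '@playwright/test';
import { calculateTotal } from '../../src/lib/pricing'; // hypothetical pure function

test('applies a 10% discount above the 100 threshold', () => {
  // 150 - (150 * 0.1) = 135
  expect(calculateTotal({ subtotal: 150, discountThreshold: 100, discountRate: 0.1 })).toBe(135);
});
```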

**Selection Strategy**: Avoid duplicate coverage. Use E2E for the critical happy path, API for business logic variations, component for UI edge cases, and unit for pure logic.

### Recording Mode (NEW - Phase 2.5)

**atdd** can record complex UI interactions instead of relying on AI generation.

**Activation**: Automatic for complex UI when `config.tea_use_mcp_enhancements` is true and the Playwright MCP is available

- Fallback: AI generation (silent, automatic)

**When to Use Recording Mode:**

- ✅ Complex UI interactions (drag-drop, multi-step forms, wizards)
- ✅ Visual workflows (modals, dialogs, animations)
- ✅ Unclear requirements (exploratory, discovering expected behavior)
- ✅ Multi-page flows (checkout, registration, onboarding)
- ❌ NOT for simple CRUD (AI generation faster)
- ❌ NOT for API-only tests (no UI to record)

**When to Use AI Generation (Default):**

- ✅ Clear acceptance criteria available
- ✅ Standard patterns (login, CRUD, navigation)
- ✅ Need many tests quickly
- ✅ API/backend tests (no UI interaction)

**How Test Generation Works (Default - AI-Based):**

TEA generates tests using AI by:

1. **Analyzing acceptance criteria** from story markdown
2. **Inferring selectors** from requirement descriptions (e.g., "login button" → `[data-testid="login-button"]`)
3. **Synthesizing test code** based on knowledge base patterns
4. **Estimating interactions** using common UI patterns (click, type, verify)
5. **Applying best practices** from knowledge fragments (Given-When-Then, network-first, fixtures)

**This works well for:**

- ✅ Clear requirements with known UI patterns
- ✅ Standard workflows (login, CRUD, navigation)
- ✅ When selectors follow conventions (data-testid attributes)

**What MCP Adds (Interactive Verification & Enhancement):**

When Playwright MCP is available, TEA **additionally**:

1. **Verifies generated tests** by:
   - **Launching real browser** with `generator_setup_page`
   - **Executing generated test steps** with `browser_*` tools (`navigate`, `click`, `type`)
   - **Seeing actual UI** with `browser_snapshot` (visual verification)
   - **Discovering real selectors** with `browser_generate_locator` (auto-generate from live DOM)

2. **Enhances AI-generated tests** by:
   - **Validating selectors exist** in actual DOM (not just guesses)
   - **Verifying behavior** with `browser_verify_text`, `browser_verify_visible`, `browser_verify_url`
   - **Capturing actual interaction log** with `generator_read_log`
   - **Refining test code** with real observed behavior

3. **Catches issues early** by:
   - **Finding missing selectors** before DEV implements (requirements clarification)
   - **Discovering edge cases** not in requirements (loading states, error messages)
   - **Validating assumptions** about UI structure and behavior

**Key Benefits of MCP Enhancement:**

- ✅ **AI generates tests** (fast, based on requirements) **+** **MCP verifies tests** (accurate, based on reality)
- ✅ **Accurate selectors**: Validated against actual DOM, not just inferred
- ✅ **Visual validation**: TEA sees what user sees (modals, animations, state changes)
- ✅ **Complex flows**: Records multi-step interactions precisely
- ✅ **Edge case discovery**: Observes actual app behavior beyond requirements
- ✅ **Selector resilience**: MCP generates robust locators from live page (role-based, text-based, fallback chains)

**Example Enhancement Flow:**

```
1. AI generates test based on acceptance criteria
   → await page.click('[data-testid="submit-button"]')

2. MCP verifies selector exists (browser_generate_locator)
   → Found: button[type="submit"].btn-primary
   → No data-testid attribute exists!

3. TEA refines test with actual selector
   → await page.locator('button[type="submit"]').click()
   → Documents requirement: "Add data-testid='submit-button' to button"
```

**Recording Workflow (MCP-Based):**

```
1. Set generation_mode: "recording"
2. Use generator_setup_page to init recording session
3. For each acceptance criterion:
   a. Execute scenario with browser_* tools:
      - browser_navigate, browser_click, browser_type
      - browser_select, browser_check
   b. Add verifications with browser_verify_* tools:
      - browser_verify_text, browser_verify_visible
      - browser_verify_url
   c. Capture log with generator_read_log
   d. Generate test with generator_write_test
4. Enhance generated tests with knowledge base patterns:
   - Add Given-When-Then comments
   - Replace selectors with data-testid
   - Add network-first interception
   - Add fixtures/factories
5. Verify tests fail (RED phase)
```

**Example: Recording a Checkout Flow**

```markdown
Recording session for: "User completes checkout with credit card"

Actions recorded:

1. browser_navigate('/cart')
2. browser_click('[data-testid="checkout-button"]')
3. browser_type('[data-testid="card-number"]', '4242424242424242')
4. browser_type('[data-testid="expiry"]', '12/25')
5. browser_type('[data-testid="cvv"]', '123')
6. browser_click('[data-testid="place-order"]')
7. browser_verify_text('Order confirmed')
8. browser_verify_url('/confirmation')

Generated test (enhanced):

- Given-When-Then structure added
- data-testid selectors used
- Network-first payment API mock added
- Card factory created for test data
- Test verified to FAIL (checkout not implemented)
```

**Graceful Degradation:**

- Recording mode is OPTIONAL (default: AI generation)
- Requires Playwright MCP (falls back to AI if unavailable)
- Generated tests enhanced with knowledge base patterns
- Same quality output regardless of generation method

### Given-When-Then Structure

All tests follow BDD format for clarity:

```typescript
test('should display error for invalid credentials', async ({ page }) => {
  // GIVEN: User is on login page
  await page.goto('/login');

  // WHEN: User submits invalid credentials
  await page.fill('[data-testid="email-input"]', 'invalid@example.com');
  await page.fill('[data-testid="password-input"]', 'wrongpassword');
  await page.click('[data-testid="login-button"]');

  // THEN: Error message is displayed
  await expect(page.locator('[data-testid="error-message"]')).toHaveText('Invalid email or password');
});
```
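
The same Given-When-Then shape applies below the UI. As a hedged sketch (the endpoint, payload, and file name are illustrative, echoing the example checklist later in this README), an API-level test can use Playwright's built-in `request` fixture:

```typescript
// tests/api/auth.api.spec.ts — illustrative API-level test
import { test, expect } from '@playwright/test';

test('POST /api/auth/login - should return 401 for invalid credentials', async ({ request }) => {
  // GIVEN: Credentials that do not match any user
  const credentials = { email: 'invalid@example.com', password: 'wrongpassword' };

  // WHEN: The client attempts to log in
  const response = await request.post('/api/auth/login', { data: credentials });

  // THEN: The API rejects the request (one assertion, atomic)
  expect(response.status()).toBe(401);
});
```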

### Network-First Testing Pattern

**Critical pattern to prevent race conditions**:

```typescript
// ✅ CORRECT: Intercept BEFORE navigation
await page.route('**/api/data', handler);
await page.goto('/page');

// ❌ WRONG: Navigate then intercept (race condition)
await page.goto('/page');
await page.route('**/api/data', handler); // Too late!
```

Always set up route interception before navigating to pages that make network requests.
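
As a concrete handler sketch, the interception can fulfill the request with factory data so the test stays deterministic; the `/api/users` URL and response shape here are illustrative, and `createUsers` comes from the factory shown in the next section:

```typescript
// Stub the users endpoint with factory data BEFORE navigating
await page.route('**/api/users', async (route) => {
  await route.fulfill({
    status: 200,
    contentType: 'application/json',
    body: JSON.stringify(createUsers(3)), // factory defined below
  });
});
await page.goto('/users');
```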

### Data Factory Architecture

Use faker for all test data generation:

```typescript
// tests/support/factories/user.factory.ts
import { faker } from '@faker-js/faker';

export const createUser = (overrides = {}) => ({
  id: faker.number.int(),
  email: faker.internet.email(),
  name: faker.person.fullName(),
  createdAt: faker.date.recent().toISOString(),
  ...overrides,
});

export const createUsers = (count: number) => Array.from({ length: count }, () => createUser());
```

**Factory principles:**

- Use faker for random data (no hardcoded values to prevent collisions)
- Support overrides for specific test scenarios
- Generate complete valid objects matching API contracts
- Include helper functions for bulk creation

### Fixture Architecture with Auto-Cleanup

Playwright fixtures with automatic data cleanup:

```typescript
// tests/support/fixtures/auth.fixture.ts
import { test as base } from '@playwright/test';
import { createUser } from '../factories/user.factory';
// deleteUser is assumed to be a cleanup helper alongside the factory

export const test = base.extend({
  authenticatedUser: async ({ page }, use) => {
    // Setup: Create and authenticate user
    const user = await createUser();
    await page.goto('/login');
    await page.fill('[data-testid="email"]', user.email);
    await page.fill('[data-testid="password"]', 'password123');
    await page.click('[data-testid="login-button"]');
    await page.waitForURL('/dashboard');

    // Provide to test
    await use(user);

    // Cleanup: Delete user (automatic)
    await deleteUser(user.id);
  },
});
```

**Fixture principles:**

- Auto-cleanup (always delete created data in teardown)
- Composable (fixtures can use other fixtures via mergeTests; see the sketch below)
- Isolated (each test gets fresh data)
- Type-safe with TypeScript
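
As a minimal sketch of that composability (`mergeTests` is Playwright's built-in fixture composition helper; the `api.fixture` module is hypothetical):

```typescript
// tests/support/fixtures/merged.fixture.ts
import { mergeTests } from '@playwright/test';
import { test as authTest } from './auth.fixture';
import { test as apiTest } from './api.fixture'; // hypothetical second fixture module

// Tests importing this get authenticatedUser plus anything api.fixture provides
export const test = mergeTests(authTest, apiTest);
```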

### One Assertion Per Test (Atomic Design)

Each test should verify exactly one behavior:

```typescript
// ✅ CORRECT: One assertion
test('should display user name', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
});

// ❌ WRONG: Multiple assertions (not atomic)
test('should display user info', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
  await expect(page.locator('[data-testid="user-email"]')).toHaveText('john@example.com');
});
```

**Why?** If the second assertion fails, you don't know whether the first one is still valid. Split into separate tests for clear failure diagnosis.

### Implementation Checklist for DEV

Maps each failing test to concrete implementation tasks:

```markdown
## Implementation Checklist

### Test: User Login with Valid Credentials

- [ ] Create `/login` route
- [ ] Implement login form component
- [ ] Add email/password validation
- [ ] Integrate authentication API
- [ ] Add `data-testid` attributes: `email-input`, `password-input`, `login-button`
- [ ] Implement error handling
- [ ] Run test: `npm run test:e2e -- login.spec.ts`
- [ ] ✅ Test passes (green phase)
```

Provides a clear path from red to green for each test.

## Integration with Other Workflows

**Before this workflow:**

- **framework** workflow: Must run first to establish test framework architecture (Playwright or Cypress config, directory structure, base fixtures)
- **test-design** workflow: Optional but recommended for P0-P3 priority alignment and risk assessment context

**After this workflow:**

- **DEV agent** implements features guided by failing tests and implementation checklist
- **test-review** workflow: Review generated test quality before sharing with DEV team
- **automate** workflow: After story completion, expand regression suite with additional edge case coverage

**Coordinates with:**

- **Story approval process**: ATDD runs after story is approved but before DEV begins implementation
- **Quality gates**: Failing tests serve as acceptance criteria for story completion (all tests must pass)

## Important Notes

### ATDD is Test-First, Not Test-After

**Critical timing**: Tests must be written BEFORE any implementation code. This ensures:

- Tests define the contract (what needs to be built)
- Implementation is guided by tests (no over-engineering)
- Tests verify behavior, not implementation details
- Confidence in refactoring (tests catch regressions)

### All Tests Must Fail Initially

**Red phase verification is mandatory**:

- Run tests locally after creation to confirm RED phase
- Failure should be due to missing implementation, not test bugs
- Failure messages should be clear and actionable
- Document expected failure messages in ATDD checklist

If a test passes before implementation, it's not testing the right thing.

### Use data-testid for Stable Selectors

**Why data-testid?**

- CSS classes change frequently (styling refactors)
- IDs may not be unique or stable
- Text content changes with localization
- data-testid is an explicit contract between tests and UI

```typescript
// ✅ CORRECT: Stable selector
await page.click('[data-testid="login-button"]');

// ❌ FRAGILE: Class-based selector
await page.click('.btn.btn-primary.login-btn');
```

The ATDD checklist includes a complete list of required data-testid attributes for the DEV team.

### No Hard Waits or Sleeps

**Use explicit waits only**:

```typescript
// ✅ CORRECT: Explicit wait for condition
await page.waitForSelector('[data-testid="user-name"]');
await expect(page.locator('[data-testid="user-name"]')).toBeVisible();

// ❌ WRONG: Hard wait (flaky, slow)
await page.waitForTimeout(2000);
```

Playwright's auto-waiting is preferred (`expect()` automatically waits up to the configured timeout).
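
When an action triggers a network call, waiting on the response itself is another deterministic option; a small sketch (the URL and button are illustrative):

```typescript
// Register the wait BEFORE the action that triggers the request
const responsePromise = page.waitForResponse('**/api/user');
await page.click('[data-testid="refresh-button"]');
await responsePromise;
```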

### Component Tests for Complex UI Only

**When to use component tests:**

- Complex UI interactions (drag-drop, keyboard navigation)
- Form validation logic
- State management within component
- Visual edge cases

**When NOT to use:**

- Simple rendering (snapshot tests are sufficient)
- Integration with backend (use E2E or API tests)
- Full user journeys (use E2E tests)

Component tests are valuable but should complement, not replace, E2E and API tests.
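
For context, a component test might look like the following hedged sketch using Playwright Component Testing (`@playwright/experimental-ct-react`, referenced under Knowledge Base References below); the `LoginForm` component and its props are hypothetical:

```typescript
// tests/component/LoginForm.test.tsx
import { test, expect } from '@playwright/experimental-ct-react';
import { LoginForm } from '../../src/components/LoginForm'; // hypothetical component

test('should disable submit until email is valid', async ({ mount }) => {
  // GIVEN: A mounted login form
  const component = await mount(<LoginForm onSubmit={() => {}} />);

  // WHEN: The user types an invalid email
  await component.getByTestId('email-input').fill('not-an-email');

  // THEN: The submit button stays disabled
  await expect(component.getByTestId('login-button')).toBeDisabled();
});
```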

### Auto-Cleanup is Non-Negotiable

**Every test must clean up its data**:

- Use fixtures with automatic teardown
- Never leave test data in database/storage
- Each test should be isolated (no shared state)

**Cleanup patterns:**

- Fixtures: Cleanup in teardown function
- Factories: Provide deletion helpers
- Tests: Use `test.afterEach()` for manual cleanup if needed

Without auto-cleanup, tests become flaky and depend on execution order.
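
Where a fixture is overkill, the `test.afterEach()` pattern listed above can be sketched like this (assuming the test tracks what it created, and that a `deleteUser` helper like the one in the fixture example exists):

```typescript
import { test } from '@playwright/test';

const createdUserIds: number[] = [];

test.afterEach(async () => {
  // Delete everything this test created, then reset the tracker
  for (const id of createdUserIds) {
    await deleteUser(id); // assumed cleanup helper
  }
  createdUserIds.length = 0;
});
```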

## Knowledge Base References

This workflow automatically consults:

- **fixture-architecture.md** - Test fixture patterns with setup/teardown and auto-cleanup using Playwright's test.extend()
- **data-factories.md** - Factory patterns using @faker-js/faker for random test data generation with overrides support
- **component-tdd.md** - Component test strategies using Playwright Component Testing (@playwright/experimental-ct-react)
- **network-first.md** - Route interception patterns (intercept before navigation to prevent race conditions)
- **test-quality.md** - Test design principles (Given-When-Then, one assertion per test, determinism, isolation)
- **test-levels-framework.md** - Test level selection framework (E2E vs API vs Component vs Unit)

See `tea-index.csv` for complete knowledge fragment mapping and additional references.

## Example Output

After running this workflow, the ATDD checklist will contain:

````markdown
# ATDD Checklist - Epic 3, Story 5: User Authentication

## Story Summary

As a user, I want to log in with email and password so that I can access my personalized dashboard.

## Acceptance Criteria

1. User can log in with valid credentials
2. User sees error message with invalid credentials
3. User is redirected to dashboard after successful login

## Failing Tests Created (RED Phase)

### E2E Tests (3 tests)

- `tests/e2e/user-authentication.spec.ts` (87 lines)
  - ✅ should log in with valid credentials (RED - missing /login route)
  - ✅ should display error for invalid credentials (RED - error message not implemented)
  - ✅ should redirect to dashboard after login (RED - redirect logic missing)

### API Tests (2 tests)

- `tests/api/auth.api.spec.ts` (54 lines)
  - ✅ POST /api/auth/login - should return token for valid credentials (RED - endpoint not implemented)
  - ✅ POST /api/auth/login - should return 401 for invalid credentials (RED - validation missing)

## Data Factories Created

- `tests/support/factories/user.factory.ts` - createUser(), createUsers(count)

## Fixtures Created

- `tests/support/fixtures/auth.fixture.ts` - authenticatedUser fixture with auto-cleanup

## Required data-testid Attributes

### Login Page

- `email-input` - Email input field
- `password-input` - Password input field
- `login-button` - Submit button
- `error-message` - Error message container

### Dashboard Page

- `user-name` - User name display
- `logout-button` - Logout button

## Implementation Checklist

### Test: User Login with Valid Credentials

- [ ] Create `/login` route
- [ ] Implement login form component
- [ ] Add email/password validation
- [ ] Integrate authentication API
- [ ] Add data-testid attributes: `email-input`, `password-input`, `login-button`
- [ ] Run test: `npm run test:e2e -- user-authentication.spec.ts`
- [ ] ✅ Test passes (green phase)

### Test: Display Error for Invalid Credentials

- [ ] Add error state management
- [ ] Display error message UI
- [ ] Add `data-testid="error-message"`
- [ ] Run test: `npm run test:e2e -- user-authentication.spec.ts`
- [ ] ✅ Test passes (green phase)

### Test: Redirect to Dashboard After Login

- [ ] Implement redirect logic after successful auth
- [ ] Verify authentication token stored
- [ ] Add dashboard route protection
- [ ] Run test: `npm run test:e2e -- user-authentication.spec.ts`
- [ ] ✅ Test passes (green phase)

## Running Tests

```bash
# Run all failing tests
npm run test:e2e

# Run specific test file
npm run test:e2e -- user-authentication.spec.ts

# Run tests in headed mode (see browser)
npm run test:e2e -- --headed

# Debug specific test
npm run test:e2e -- user-authentication.spec.ts --debug
```
````

## Red-Green-Refactor Workflow

**RED Phase** (Complete):

- ✅ All tests written and failing
- ✅ Fixtures and factories created
- ✅ data-testid requirements documented

**GREEN Phase** (DEV Team - Next Steps):

1. Pick one failing test from the checklist
2. Implement minimal code to make it pass
3. Run the test to verify green
4. Check off the task in the checklist
5. Move to the next test
6. Repeat until all tests pass

**REFACTOR Phase** (DEV Team - After All Tests Pass):

1. All tests passing (green)
2. Improve code quality (extract functions, optimize)
3. Remove duplications
4. Ensure tests still pass after each refactor

## Next Steps

1. Review this checklist with the team
2. Run failing tests to confirm RED phase: `npm run test:e2e`
3. Begin implementation using the checklist as a guide
4. Share progress in daily standup
5. When all tests pass, run `bmad sm story-done` to move the story to DONE

This comprehensive checklist guides the DEV team from red to green with clear tasks and validation steps.

bmad/bmm/workflows/testarch/atdd/atdd-checklist-template.md

# ATDD Checklist - Epic {epic_num}, Story {story_num}: {story_title}

**Date:** {date}
**Author:** {user_name}
**Primary Test Level:** {primary_level}

---

## Story Summary

{Brief 2-3 sentence summary of the user story}

**As a** {user_role}
**I want** {feature_description}
**So that** {business_value}

---

## Acceptance Criteria

{List all testable acceptance criteria from the story}

1. {Acceptance criterion 1}
2. {Acceptance criterion 2}
3. {Acceptance criterion 3}

---

## Failing Tests Created (RED Phase)

### E2E Tests ({e2e_test_count} tests)

**File:** `{e2e_test_file_path}` ({line_count} lines)

{List each E2E test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

### API Tests ({api_test_count} tests)

**File:** `{api_test_file_path}` ({line_count} lines)

{List each API test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

### Component Tests ({component_test_count} tests)

**File:** `{component_test_file_path}` ({line_count} lines)

{List each component test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

---

## Data Factories Created

{List all data factory files created with their exports}

### {Entity} Factory

**File:** `tests/support/factories/{entity}.factory.ts`

**Exports:**

- `create{Entity}(overrides?)` - Create single entity with optional overrides
- `create{Entity}s(count)` - Create array of entities

**Example Usage:**

```typescript
const user = createUser({ email: 'specific@example.com' });
const users = createUsers(5); // Generate 5 random users
```

---

## Fixtures Created

{List all test fixture files created with their fixture names and descriptions}

### {Feature} Fixtures

**File:** `tests/support/fixtures/{feature}.fixture.ts`

**Fixtures:**

- `{fixtureName}` - {description_of_what_fixture_provides}
  - **Setup:** {what_setup_does}
  - **Provides:** {what_test_receives}
  - **Cleanup:** {what_cleanup_does}

**Example Usage:**

```typescript
import { test } from './fixtures/{feature}.fixture';

test('should do something', async ({ {fixtureName} }) => {
  // {fixtureName} is ready to use with auto-cleanup
});
```

---

## Mock Requirements

{Document external services that need mocking and their requirements}

### {Service Name} Mock

**Endpoint:** `{HTTP_METHOD} {endpoint_url}`

**Success Response:**

```json
{
  {success_response_example}
}
```

**Failure Response:**

```json
{
  {failure_response_example}
}
```

**Notes:** {any_special_mock_requirements}

---

## Required data-testid Attributes

{List all data-testid attributes required in UI implementation for test stability}

### {Page or Component Name}

- `{data-testid-name}` - {description_of_element}
- `{data-testid-name}` - {description_of_element}

**Implementation Example:**

```tsx
<button data-testid="login-button">Log In</button>
<input data-testid="email-input" type="email" />
<div data-testid="error-message">{errorText}</div>
```

---

## Implementation Checklist

{Map each failing test to concrete implementation tasks that will make it pass}

### Test: {test_name_1}

**File:** `{test_file_path}`

**Tasks to make this test pass:**

- [ ] {Implementation task 1}
- [ ] {Implementation task 2}
- [ ] {Implementation task 3}
- [ ] Add required data-testid attributes: {list_of_testids}
- [ ] Run test: `{test_execution_command}`
- [ ] ✅ Test passes (green phase)

**Estimated Effort:** {effort_estimate} hours

---

### Test: {test_name_2}

**File:** `{test_file_path}`

**Tasks to make this test pass:**

- [ ] {Implementation task 1}
- [ ] {Implementation task 2}
- [ ] {Implementation task 3}
- [ ] Add required data-testid attributes: {list_of_testids}
- [ ] Run test: `{test_execution_command}`
- [ ] ✅ Test passes (green phase)

**Estimated Effort:** {effort_estimate} hours

---

## Running Tests

```bash
# Run all failing tests for this story
{test_command_all}

# Run specific test file
{test_command_specific_file}

# Run tests in headed mode (see browser)
{test_command_headed}

# Debug specific test
{test_command_debug}

# Run tests with coverage
{test_command_coverage}
```

---

## Red-Green-Refactor Workflow

### RED Phase (Complete) ✅

**TEA Agent Responsibilities:**

- ✅ All tests written and failing
- ✅ Fixtures and factories created with auto-cleanup
- ✅ Mock requirements documented
- ✅ data-testid requirements listed
- ✅ Implementation checklist created

**Verification:**

- All tests run and fail as expected
- Failure messages are clear and actionable
- Tests fail due to missing implementation, not test bugs

---

### GREEN Phase (DEV Team - Next Steps)

**DEV Agent Responsibilities:**

1. **Pick one failing test** from implementation checklist (start with highest priority)
2. **Read the test** to understand expected behavior
3. **Implement minimal code** to make that specific test pass
4. **Run the test** to verify it now passes (green)
5. **Check off the task** in implementation checklist
6. **Move to next test** and repeat

**Key Principles:**

- One test at a time (don't try to fix all at once)
- Minimal implementation (don't over-engineer)
- Run tests frequently (immediate feedback)
- Use implementation checklist as roadmap

**Progress Tracking:**

- Check off tasks as you complete them
- Share progress in daily standup
- Mark story as IN PROGRESS in `bmm-workflow-status.md`

---

### REFACTOR Phase (DEV Team - After All Tests Pass)

**DEV Agent Responsibilities:**

1. **Verify all tests pass** (green phase complete)
2. **Review code for quality** (readability, maintainability, performance)
3. **Extract duplications** (DRY principle)
4. **Optimize performance** (if needed)
5. **Ensure tests still pass** after each refactor
6. **Update documentation** (if API contracts change)

**Key Principles:**

- Tests provide safety net (refactor with confidence)
- Make small refactors (easier to debug if tests fail)
- Run tests after each change
- Don't change test behavior (only implementation)

**Completion:**

- All tests pass
- Code quality meets team standards
- No duplications or code smells
- Ready for code review and story approval

---

## Next Steps

1. **Review this checklist** with team in standup or planning
2. **Run failing tests** to confirm RED phase: `{test_command_all}`
3. **Begin implementation** using implementation checklist as guide
4. **Work one test at a time** (red → green for each)
5. **Share progress** in daily standup
6. **When all tests pass**, refactor code for quality
7. **When refactoring is complete**, run `bmad sm story-done` to move the story to DONE

---

## Knowledge Base References Applied

This ATDD workflow consulted the following knowledge fragments:

- **fixture-architecture.md** - Test fixture patterns with setup/teardown and auto-cleanup using Playwright's `test.extend()`
- **data-factories.md** - Factory patterns using `@faker-js/faker` for random test data generation with overrides support
- **component-tdd.md** - Component test strategies using Playwright Component Testing
- **network-first.md** - Route interception patterns (intercept BEFORE navigation to prevent race conditions)
- **test-quality.md** - Test design principles (Given-When-Then, one assertion per test, determinism, isolation)
- **test-levels-framework.md** - Test level selection framework (E2E vs API vs Component vs Unit)

See `tea-index.csv` for complete knowledge fragment mapping.

---

## Test Execution Evidence

### Initial Test Run (RED Phase Verification)

**Command:** `{test_command_all}`

**Results:**

```
{paste_test_run_output_showing_all_tests_failing}
```

**Summary:**

- Total tests: {total_test_count}
- Passing: 0 (expected)
- Failing: {total_test_count} (expected)
- Status: ✅ RED phase verified

**Expected Failure Messages:**
{list_expected_failure_messages_for_each_test}

---

## Notes

{Any additional notes, context, or special considerations for this story}

- {Note 1}
- {Note 2}
- {Note 3}

---

## Contact

**Questions or Issues?**

- Ask in team standup
- Tag @{tea_agent_username} in Slack/Discord
- Refer to `testarch/README.md` for workflow documentation
- Consult `testarch/knowledge/` for testing best practices

---

**Generated by BMad TEA Agent** - {date}

bmad/bmm/workflows/testarch/atdd/checklist.md

# ATDD Workflow Validation Checklist

Use this checklist to validate that the ATDD workflow has been executed correctly and all deliverables meet quality standards.

## Prerequisites

Before starting this workflow, verify:

- [ ] Story approved with clear acceptance criteria (AC must be testable)
- [ ] Development sandbox/environment ready
- [ ] Framework scaffolding exists (run `framework` workflow if missing)
- [ ] Test framework configuration available (playwright.config.ts or cypress.config.ts)
- [ ] Package.json has test dependencies installed (Playwright or Cypress)

**Halt if missing:** Framework scaffolding or story acceptance criteria

---

## Step 1: Story Context and Requirements

- [ ] Story markdown file loaded and parsed successfully
- [ ] All acceptance criteria identified and extracted
- [ ] Affected systems and components identified
- [ ] Technical constraints documented
- [ ] Framework configuration loaded (playwright.config.ts or cypress.config.ts)
- [ ] Test directory structure identified from config
- [ ] Existing fixture patterns reviewed for consistency
- [ ] Similar test patterns searched and found in `{test_dir}`
- [ ] Knowledge base fragments loaded:
  - [ ] `fixture-architecture.md`
  - [ ] `data-factories.md`
  - [ ] `component-tdd.md`
  - [ ] `network-first.md`
  - [ ] `test-quality.md`

---

## Step 2: Test Level Selection and Strategy

- [ ] Each acceptance criterion analyzed for appropriate test level
- [ ] Test level selection framework applied (E2E vs API vs Component vs Unit)
- [ ] E2E tests: Critical user journeys and multi-system integration identified
- [ ] API tests: Business logic and service contracts identified
- [ ] Component tests: UI component behavior and interactions identified
- [ ] Unit tests: Pure logic and edge cases identified (if applicable)
- [ ] Duplicate coverage avoided (same behavior not tested at multiple levels unnecessarily)
- [ ] Tests prioritized using P0-P3 framework (if test-design document exists)
- [ ] Primary test level set in `primary_level` variable (typically E2E or API)
- [ ] Test levels documented in ATDD checklist

---

## Step 3: Failing Tests Generated

### Test File Structure Created

- [ ] Test files organized in appropriate directories:
  - [ ] `tests/e2e/` for end-to-end tests
  - [ ] `tests/api/` for API tests
  - [ ] `tests/component/` for component tests
  - [ ] `tests/support/` for infrastructure (fixtures, factories, helpers)

### E2E Tests (If Applicable)

- [ ] E2E test files created in `tests/e2e/`
- [ ] All tests follow Given-When-Then format
- [ ] Tests use `data-testid` selectors (not CSS classes or fragile selectors)
- [ ] One assertion per test (atomic test design)
- [ ] No hard waits or sleeps (explicit waits only)
- [ ] Network-first pattern applied (route interception BEFORE navigation)
- [ ] Tests fail initially (RED phase verified by local test run)
- [ ] Failure messages are clear and actionable

### API Tests (If Applicable)

- [ ] API test files created in `tests/api/`
- [ ] Tests follow Given-When-Then format
- [ ] API contracts validated (request/response structure)
- [ ] HTTP status codes verified
- [ ] Response body validation includes all required fields
- [ ] Error cases tested (400, 401, 403, 404, 500)
- [ ] Tests fail initially (RED phase verified)

### Component Tests (If Applicable)

- [ ] Component test files created in `tests/component/`
- [ ] Tests follow Given-When-Then format
- [ ] Component mounting works correctly
- [ ] Interaction testing covers user actions (click, hover, keyboard)
- [ ] State management within component validated
- [ ] Props and events tested
- [ ] Tests fail initially (RED phase verified)

### Test Quality Validation

- [ ] All tests use Given-When-Then structure with clear comments
- [ ] All tests have descriptive names explaining what they test
- [ ] No duplicate tests (same behavior tested multiple times)
- [ ] No flaky patterns (race conditions, timing issues)
- [ ] No test interdependencies (tests can run in any order)
- [ ] Tests are deterministic (same input always produces same result)

---

## Step 4: Data Infrastructure Built

### Data Factories Created

- [ ] Factory files created in `tests/support/factories/`
- [ ] All factories use `@faker-js/faker` for random data generation (no hardcoded values)
- [ ] Factories support overrides for specific test scenarios
- [ ] Factories generate complete valid objects matching API contracts
- [ ] Helper functions for bulk creation provided (e.g., `createUsers(count)`)
- [ ] Factory exports are properly typed (TypeScript)

### Test Fixtures Created

- [ ] Fixture files created in `tests/support/fixtures/`
- [ ] All fixtures use Playwright's `test.extend()` pattern
- [ ] Fixtures have setup phase (arrange test preconditions)
- [ ] Fixtures provide data to tests via `await use(data)`
- [ ] Fixtures have teardown phase with auto-cleanup (delete created data)
- [ ] Fixtures are composable (can use other fixtures if needed)
- [ ] Fixtures are isolated (each test gets fresh data)
- [ ] Fixtures are type-safe (TypeScript types defined)

### Mock Requirements Documented

- [ ] External service mocking requirements identified
- [ ] Mock endpoints documented with URLs and methods
- [ ] Success response examples provided
- [ ] Failure response examples provided
- [ ] Mock requirements documented in ATDD checklist for DEV team

### data-testid Requirements Listed

- [ ] All required data-testid attributes identified from E2E tests
- [ ] data-testid list organized by page or component
- [ ] Each data-testid has clear description of element it targets
- [ ] data-testid list included in ATDD checklist for DEV team

---

## Step 5: Implementation Checklist Created

- [ ] Implementation checklist created with clear structure
- [ ] Each failing test mapped to concrete implementation tasks
- [ ] Tasks include:
  - [ ] Route/component creation
  - [ ] Business logic implementation
  - [ ] API integration
  - [ ] data-testid attribute additions
  - [ ] Error handling
  - [ ] Test execution command
  - [ ] Completion checkbox
- [ ] Red-Green-Refactor workflow documented in checklist
- [ ] RED phase marked as complete (TEA responsibility)
- [ ] GREEN phase tasks listed for DEV team
- [ ] REFACTOR phase guidance provided
- [ ] Execution commands provided:
  - [ ] Run all tests: `npm run test:e2e`
  - [ ] Run specific test file
  - [ ] Run in headed mode
  - [ ] Debug specific test
- [ ] Estimated effort included (hours or story points)

---

## Step 6: Deliverables Generated

### ATDD Checklist Document Created

- [ ] Output file created at `{output_folder}/atdd-checklist-{story_id}.md`
- [ ] Document follows template structure from `atdd-checklist-template.md`
- [ ] Document includes all required sections:
  - [ ] Story summary
  - [ ] Acceptance criteria breakdown
  - [ ] Failing tests created (paths and line counts)
  - [ ] Data factories created
  - [ ] Fixtures created
  - [ ] Mock requirements
  - [ ] Required data-testid attributes
  - [ ] Implementation checklist
  - [ ] Red-green-refactor workflow
  - [ ] Execution commands
  - [ ] Next steps for DEV team

### All Tests Verified to Fail (RED Phase)

- [ ] Full test suite run locally before finalizing
- [ ] All tests fail as expected (RED phase confirmed)
- [ ] No tests passing before implementation (if passing, test is invalid)
- [ ] Failure messages documented in ATDD checklist
- [ ] Failures are due to missing implementation, not test bugs
- [ ] Test run output captured for reference

### Summary Provided

- [ ] Summary includes:
  - [ ] Story ID
  - [ ] Primary test level
  - [ ] Test counts (E2E, API, Component)
  - [ ] Test file paths
  - [ ] Factory count
  - [ ] Fixture count
  - [ ] Mock requirements count
  - [ ] data-testid count
  - [ ] Implementation task count
  - [ ] Estimated effort
  - [ ] Next steps for DEV team
  - [ ] Output file path
  - [ ] Knowledge base references applied

---

## Quality Checks

### Test Design Quality

- [ ] Tests are readable (clear Given-When-Then structure)
- [ ] Tests are maintainable (use factories and fixtures, not hardcoded data)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Tests are deterministic (no race conditions or flaky patterns)
- [ ] Tests are atomic (one assertion per test)
- [ ] Tests are fast (no unnecessary waits or delays)

### Knowledge Base Integration

- [ ] fixture-architecture.md patterns applied to all fixtures
- [ ] data-factories.md patterns applied to all factories
- [ ] network-first.md patterns applied to E2E tests with network requests
- [ ] component-tdd.md patterns applied to component tests
- [ ] test-quality.md principles applied to all test design

### Code Quality

- [ ] All TypeScript types are correct and complete
- [ ] No linting errors in generated test files
- [ ] Consistent naming conventions followed
- [ ] Imports are organized and correct
- [ ] Code follows project style guide

---

## Integration Points

### With DEV Agent

- [ ] ATDD checklist provides clear implementation guidance
- [ ] Implementation tasks are granular and actionable
- [ ] data-testid requirements are complete and clear
- [ ] Mock requirements include all necessary details
- [ ] Execution commands work correctly

### With Story Workflow

- [ ] Story ID correctly referenced in output files
- [ ] Acceptance criteria from story accurately reflected in tests
- [ ] Technical constraints from story considered in test design

### With Framework Workflow

- [ ] Test framework configuration correctly detected and used
- [ ] Directory structure matches framework setup
- [ ] Fixtures and helpers follow established patterns
- [ ] Naming conventions consistent with framework standards

### With test-design Workflow (If Available)

- [ ] P0 scenarios from test-design prioritized in ATDD
- [ ] Risk assessment from test-design considered in test coverage
- [ ] Coverage strategy from test-design aligned with ATDD tests

---

## Completion Criteria

All of the following must be true before marking this workflow as complete:

- [ ] **Story acceptance criteria analyzed** and mapped to appropriate test levels
- [ ] **Failing tests created** at all appropriate levels (E2E, API, Component)
- [ ] **Given-When-Then format** used consistently across all tests
- [ ] **RED phase verified** by local test run (all tests failing as expected)
- [ ] **Network-first pattern** applied to E2E tests with network requests
- [ ] **Data factories created** using faker (no hardcoded test data)
- [ ] **Fixtures created** with auto-cleanup in teardown
- [ ] **Mock requirements documented** for external services
- [ ] **data-testid attributes listed** for DEV team
- [ ] **Implementation checklist created** mapping tests to code tasks
- [ ] **Red-green-refactor workflow documented** in ATDD checklist
- [ ] **Execution commands provided** and verified to work
- [ ] **ATDD checklist document created** and saved to correct location
- [ ] **Output file formatted correctly** using template structure
- [ ] **Knowledge base references applied** and documented in summary
- [ ] **No test quality issues** (flaky patterns, race conditions, hardcoded data)

---

## Common Issues and Resolutions

### Issue: Tests pass before implementation

**Problem:** A test passes even though no implementation code exists yet.

**Resolution:**

- Review the test to ensure it's testing actual behavior, not mocked/stubbed behavior
- Check if the test is accidentally using existing functionality
- Verify test assertions are correct and meaningful
- Rewrite the test to fail until implementation is complete

### Issue: Network-first pattern not applied

**Problem:** Route interception happens after navigation, causing race conditions.

**Resolution:**

- Move `await page.route()` calls BEFORE `await page.goto()`
- Review the `network-first.md` knowledge fragment
- Update all E2E tests to follow the network-first pattern

### Issue: Hardcoded test data in tests

**Problem:** Tests use hardcoded strings/numbers instead of factories.

**Resolution:**

- Replace all hardcoded data with factory function calls
- Use `faker` for all random data generation
- Update data factories to support all required test scenarios

### Issue: Fixtures missing auto-cleanup

**Problem:** Fixtures create data but don't clean it up in teardown.

**Resolution:**

- Add cleanup logic after `await use(data)` in the fixture
- Call deletion/cleanup functions in teardown
- Verify cleanup works by checking database/storage after the test run

### Issue: Tests have multiple assertions

**Problem:** Tests verify multiple behaviors in a single test (not atomic).

**Resolution:**

- Split into separate tests (one assertion per test)
- Each test should verify exactly one behavior
- Use descriptive test names to clarify what each test verifies

### Issue: Tests depend on execution order

**Problem:** Tests fail when run in isolation or in a different order.

**Resolution:**

- Remove shared state between tests
- Each test should create its own test data
- Use fixtures for consistent setup across tests
- Verify tests can run with the `.only` flag

---

## Notes for TEA Agent

- **Preflight halt is critical:** Do not proceed if the story has no acceptance criteria or the framework is missing
- **RED phase verification is mandatory:** Tests must fail before sharing with the DEV team
- **Network-first pattern:** Route interception BEFORE navigation prevents race conditions
- **One assertion per test:** Atomic tests provide clear failure diagnosis
- **Auto-cleanup is non-negotiable:** Every fixture must clean up data in teardown
- **Use knowledge base:** Load relevant fragments (fixture-architecture, data-factories, network-first, component-tdd, test-quality) for guidance
- **Share with DEV agent:** ATDD checklist provides the implementation roadmap from red to green

bmad/bmm/workflows/testarch/atdd/instructions.md

<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# Acceptance Test-Driven Development (ATDD)
|
||||
|
||||
**Workflow ID**: `bmad/bmm/testarch/atdd`
|
||||
**Version**: 4.0 (BMad v6)
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Generates failing acceptance tests BEFORE implementation following TDD's red-green-refactor cycle. This workflow creates comprehensive test coverage at appropriate levels (E2E, API, Component) with supporting infrastructure (fixtures, factories, mocks) and provides an implementation checklist to guide development.
|
||||
|
||||
**Core Principle**: Tests fail first (red phase), then guide development to green, then enable confident refactoring.
|
||||
|
||||
---
|
||||
|
||||
## Preflight Requirements

**Critical:** Verify these requirements before proceeding. If any fail, HALT and notify the user.

- ✅ Story approved with clear acceptance criteria
- ✅ Development sandbox/environment ready
- ✅ Framework scaffolding exists (run `framework` workflow if missing)
- ✅ Test framework configuration available (playwright.config.ts or cypress.config.ts)

---

## Step 1: Load Story Context and Requirements

### Actions

1. **Read Story Markdown**
   - Load story file from `{story_file}` variable
   - Extract acceptance criteria (all testable requirements)
   - Identify affected systems and components
   - Note any technical constraints or dependencies

2. **Load Framework Configuration**
   - Read framework config (playwright.config.ts or cypress.config.ts)
   - Identify test directory structure
   - Check existing fixture patterns
   - Note test runner capabilities

3. **Load Existing Test Patterns**
   - Search `{test_dir}` for similar tests
   - Identify reusable fixtures and helpers
   - Check data factory patterns
   - Note naming conventions

4. **Load Knowledge Base Fragments**

   **Critical:** Consult `{project-root}/bmad/bmm/testarch/tea-index.csv` to load:
   - `fixture-architecture.md` - Test fixture patterns with auto-cleanup (pure function → fixture → mergeTests composition, 406 lines, 5 examples)
   - `data-factories.md` - Factory patterns using faker (override patterns, nested factories, API seeding, 498 lines, 5 examples)
   - `component-tdd.md` - Component test strategies (red-green-refactor, provider isolation, accessibility, visual regression, 480 lines, 4 examples)
   - `network-first.md` - Route interception patterns (intercept before navigate, HAR capture, deterministic waiting, 489 lines, 5 examples)
   - `test-quality.md` - Test design principles (deterministic tests, isolated with cleanup, explicit assertions, length limits, execution time optimization, 658 lines, 5 examples)
   - `test-healing-patterns.md` - Common failure patterns and healing strategies (stale selectors, race conditions, dynamic data, network errors, hard waits, 648 lines, 5 examples)
   - `selector-resilience.md` - Selector best practices (data-testid > ARIA > text > CSS hierarchy, dynamic patterns, anti-patterns, 541 lines, 4 examples)
   - `timing-debugging.md` - Race condition prevention and async debugging (network-first, deterministic waiting, anti-patterns, 370 lines, 3 examples)

**Halt Condition:** If story has no acceptance criteria or framework is missing, HALT with message: "ATDD requires clear acceptance criteria and test framework setup"

---

## Step 1.5: Generation Mode Selection (NEW - Phase 2.5)

### Actions

1. **Detect Generation Mode**

   Determine mode based on scenario complexity:

   **AI Generation Mode (DEFAULT)**:
   - Clear acceptance criteria with standard patterns
   - Uses: AI-generated tests from requirements
   - Appropriate for: CRUD, auth, navigation, API tests
   - Fastest approach

   **Recording Mode (OPTIONAL - Complex UI)**:
   - Complex UI interactions (drag-drop, wizards, multi-page flows)
   - Uses: Interactive test recording with Playwright MCP
   - Appropriate for: Visual workflows, unclear requirements
   - Only if `config.tea_use_mcp_enhancements` is true AND MCP is available

2. **AI Generation Mode (DEFAULT - Continue to Step 2)**

   For standard scenarios:
   - Continue with existing workflow (Step 2: Select Test Levels and Strategy)
   - AI generates tests based on acceptance criteria from Step 1
   - Use knowledge base patterns for test structure

3. **Recording Mode (OPTIONAL - Complex UI Only)**

   For complex UI scenarios AND `config.tea_use_mcp_enhancements` is true:

   **A. Check MCP Availability**

   If Playwright MCP tools are available in your IDE:
   - Use MCP recording mode (Step 3.B)

   If MCP is unavailable:
   - Fall back to AI generation mode (silent, automatic)
   - Continue to Step 2

   **B. Interactive Test Recording (MCP-Based)**

   Use Playwright MCP test-generator tools:

   **Setup:**

   ```
   1. Use generator_setup_page to initialize recording session
   2. Navigate to application starting URL (from story context)
   3. Ready to record user interactions
   ```

   **Recording Process (Per Acceptance Criterion):**

   ```
   4. Read acceptance criterion from story
   5. Manually execute test scenario using browser_* tools:
      - browser_navigate: Navigate to pages
      - browser_click: Click buttons, links, elements
      - browser_type: Fill form fields
      - browser_select: Select dropdown options
      - browser_check: Check/uncheck checkboxes
   6. Add verification steps using browser_verify_* tools:
      - browser_verify_text: Verify text content
      - browser_verify_visible: Verify element visibility
      - browser_verify_url: Verify URL navigation
   7. Capture interaction log with generator_read_log
   8. Generate test file with generator_write_test
   9. Repeat for next acceptance criterion
   ```

   **Post-Recording Enhancement:**

   ```
   10. Review generated test code
   11. Enhance with knowledge base patterns:
       - Add Given-When-Then comments
       - Replace recorded selectors with data-testid (if needed)
       - Add network-first interception (from network-first.md)
       - Add fixtures for auth/data setup (from fixture-architecture.md)
       - Use factories for test data (from data-factories.md)
   12. Verify tests fail (missing implementation)
   13. Continue to Step 4 (Build Data Infrastructure)
   ```

   **When to Use Recording Mode:**
   - ✅ Complex UI interactions (drag-drop, multi-step forms, wizards)
   - ✅ Visual workflows (modals, dialogs, animations)
   - ✅ Unclear requirements (exploratory, discovering expected behavior)
   - ✅ Multi-page flows (checkout, registration, onboarding)
   - ❌ NOT for simple CRUD (AI generation is faster)
   - ❌ NOT for API-only tests (no UI to record)

   **When to Use AI Generation (Default):**
   - ✅ Clear acceptance criteria available
   - ✅ Standard patterns (login, CRUD, navigation)
   - ✅ Need many tests quickly
   - ✅ API/backend tests (no UI interaction)

4. **Proceed to Test Level Selection**

   After mode selection:
   - AI Generation: Continue to Step 2 (Select Test Levels and Strategy)
   - Recording: Skip to Step 4 (Build Data Infrastructure) - tests are already generated

---

## Step 2: Select Test Levels and Strategy

### Actions

1. **Analyze Acceptance Criteria**

   For each acceptance criterion, determine:
   - Does it require a full user journey? → E2E test
   - Does it test business logic/API contract? → API test
   - Does it validate UI component behavior? → Component test
   - Can it be unit tested? → Unit test

2. **Apply Test Level Selection Framework**

   **Knowledge Base Reference**: `test-levels-framework.md`

   **E2E (End-to-End)**:
   - Critical user journeys (login, checkout, core workflow)
   - Multi-system integration
   - User-facing acceptance criteria
   - **Characteristics**: High confidence, slow execution, brittle

   **API (Integration)**:
   - Business logic validation
   - Service contracts
   - Data transformations
   - **Characteristics**: Fast feedback, good balance, stable

   **Component**:
   - UI component behavior (buttons, forms, modals)
   - Interaction testing
   - Visual regression
   - **Characteristics**: Fast, isolated, granular

   **Unit**:
   - Pure business logic
   - Edge cases
   - Error handling
   - **Characteristics**: Fastest, most granular

3. **Avoid Duplicate Coverage**

   Don't test the same behavior at multiple levels unless necessary:
   - Use E2E for the critical happy path only
   - Use API tests for complex business logic variations
   - Use component tests for UI interaction edge cases
   - Use unit tests for pure logic edge cases

4. **Prioritize Tests**

   If a test-design document exists, align with priority levels:
   - P0 scenarios → Must cover in failing tests
   - P1 scenarios → Should cover if time permits
   - P2/P3 scenarios → Optional for this iteration

**Decision Point:** Set the `primary_level` variable to the main test level for this story (typically E2E or API)

---

## Step 3: Generate Failing Tests

### Actions

1. **Create Test File Structure**

   ```
   tests/
   ├── e2e/
   │   └── {feature-name}.spec.ts        # E2E acceptance tests
   ├── api/
   │   └── {feature-name}.api.spec.ts    # API contract tests
   ├── component/
   │   └── {ComponentName}.test.tsx      # Component tests
   └── support/
       ├── fixtures/                     # Test fixtures
       ├── factories/                    # Data factories
       └── helpers/                      # Utility functions
   ```

2. **Write Failing E2E Tests (If Applicable)**

   **Use Given-When-Then format:**

   ```typescript
   import { test, expect } from '@playwright/test';

   test.describe('User Login', () => {
     test('should display error for invalid credentials', async ({ page }) => {
       // GIVEN: User is on login page
       await page.goto('/login');

       // WHEN: User submits invalid credentials
       await page.fill('[data-testid="email-input"]', 'invalid@example.com');
       await page.fill('[data-testid="password-input"]', 'wrongpassword');
       await page.click('[data-testid="login-button"]');

       // THEN: Error message is displayed
       await expect(page.locator('[data-testid="error-message"]')).toHaveText('Invalid email or password');
     });
   });
   ```

   **Critical patterns:**
   - One assertion per test (atomic tests)
   - Explicit waits, no hard waits/sleeps (see the contrast below)
   - Network-first approach (route interception before navigation)
   - data-testid selectors for stability
   - Clear Given-When-Then structure

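   A quick contrast for the explicit-waits rule: Playwright's web-first assertions retry automatically, so hard sleeps are never needed (illustrative sketch):

   ```typescript
   // ❌ WRONG: Hard wait - flaky and slow
   await page.waitForTimeout(3000);
   await expect(page.locator('[data-testid="status"]')).toHaveText('Ready');

   // ✅ CORRECT: Web-first assertion retries until the element settles or times out
   await expect(page.locator('[data-testid="status"]')).toHaveText('Ready');
   ```
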
3. **Apply Network-First Pattern**

   **Knowledge Base Reference**: `network-first.md`

   ```typescript
   test('should load user dashboard after login', async ({ page }) => {
     // CRITICAL: Intercept routes BEFORE navigation
     await page.route('**/api/user', (route) =>
       route.fulfill({
         status: 200,
         body: JSON.stringify({ id: 1, name: 'Test User' }),
       }),
     );

     // NOW navigate
     await page.goto('/dashboard');

     await expect(page.locator('[data-testid="user-name"]')).toHaveText('Test User');
   });
   ```

4. **Write Failing API Tests (If Applicable)**

   ```typescript
   import { test, expect } from '@playwright/test';

   test.describe('User API', () => {
     test('POST /api/users - should create new user', async ({ request }) => {
       // GIVEN: Valid user data
       const userData = {
         email: 'newuser@example.com',
         name: 'New User',
       };

       // WHEN: Creating user via API
       const response = await request.post('/api/users', {
         data: userData,
       });

       // THEN: User is created successfully
       expect(response.status()).toBe(201);
       const body = await response.json();
       expect(body).toMatchObject({
         email: userData.email,
         name: userData.name,
         id: expect.any(Number),
       });
     });
   });
   ```

5. **Write Failing Component Tests (If Applicable)**

   **Knowledge Base Reference**: `component-tdd.md`

   ```tsx
   import { test, expect } from '@playwright/experimental-ct-react';
   import { LoginForm } from './LoginForm';

   test.describe('LoginForm Component', () => {
     test('should disable submit button when fields are empty', async ({ mount }) => {
       // GIVEN: LoginForm is mounted
       const component = await mount(<LoginForm />);

       // WHEN: Form is initially rendered
       const submitButton = component.locator('button[type="submit"]');

       // THEN: Submit button is disabled
       await expect(submitButton).toBeDisabled();
     });
   });
   ```

6. **Verify Tests Fail Initially**

   **Critical verification:**
   - Run tests locally to confirm they fail
   - Failure should be due to missing implementation, not test errors
   - Failure messages should be clear and actionable
   - All tests must be in RED phase before sharing with DEV

**Important:** Tests MUST fail initially. If a test passes before implementation, it's not a valid acceptance test.

---

## Step 4: Build Data Infrastructure

### Actions

1. **Create Data Factories**

   **Knowledge Base Reference**: `data-factories.md`

   ```typescript
   // tests/support/factories/user.factory.ts
   import { faker } from '@faker-js/faker';

   export const createUser = (overrides = {}) => ({
     id: faker.number.int(),
     email: faker.internet.email(),
     name: faker.person.fullName(),
     createdAt: faker.date.recent().toISOString(),
     ...overrides,
   });

   export const createUsers = (count: number) => Array.from({ length: count }, () => createUser());
   ```

   **Factory principles:**
   - Use faker for random data (no hardcoded values)
   - Support overrides for specific scenarios (see the usage sketch below)
   - Generate complete valid objects
   - Include helper functions for bulk creation

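   A short usage sketch of the override and bulk helpers defined above:

   ```typescript
   import { createUser, createUsers } from './user.factory';

   // Pin only the field the scenario cares about; faker fills the rest
   const admin = createUser({ email: 'admin@example.com' });

   // Bulk creation for list/pagination scenarios
   const pageOfUsers = createUsers(25);
   ```
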
2. **Create Test Fixtures**

   **Knowledge Base Reference**: `fixture-architecture.md`

   ```typescript
   // tests/support/fixtures/auth.fixture.ts
   import { test as base } from '@playwright/test';
   import { createUser } from '../factories/user.factory';

   export const test = base.extend({
     authenticatedUser: async ({ page }, use) => {
       // Setup: Create and authenticate user
       const user = await createUser();
       await page.goto('/login');
       await page.fill('[data-testid="email"]', user.email);
       await page.fill('[data-testid="password"]', 'password123');
       await page.click('[data-testid="login-button"]');
       await page.waitForURL('/dashboard');

       // Provide to test
       await use(user);

       // Cleanup: Delete user (deleteUser is an assumed API helper)
       await deleteUser(user.id);
     },
   });
   ```

   **Fixture principles:**
   - Auto-cleanup (always delete created data)
   - Composable (fixtures can use other fixtures; see the `mergeTests` sketch below)
   - Isolated (each test gets fresh data)
   - Type-safe

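   Composition in Playwright typically goes through `mergeTests`. A minimal sketch, assuming a sibling `data.fixture.ts` exists alongside the auth fixture:

   ```typescript
   // tests/support/fixtures/index.ts
   import { mergeTests } from '@playwright/test';
   import { test as authTest } from './auth.fixture';
   import { test as dataTest } from './data.fixture'; // assumed sibling fixture

   // Tests importing this `test` receive fixtures from both modules
   export const test = mergeTests(authTest, dataTest);
   ```
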
3. **Document Mock Requirements**

   If external services need mocking, document requirements:

   ```markdown
   ### Mock Requirements for DEV Team

   **Payment Gateway Mock**:

   - Endpoint: `POST /api/payments`
   - Success response: `{ status: 'success', transactionId: '123' }`
   - Failure response: `{ status: 'failed', error: 'Insufficient funds' }`

   **Email Service Mock**:

   - Should not send real emails in test environment
   - Log email contents for verification
   ```

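   In test code, such requirements map directly onto route handlers. A sketch of the payment-gateway success case above (endpoint and payload taken from the example requirements, not from a real service):

   ```typescript
   await page.route('**/api/payments', (route) =>
     route.fulfill({
       status: 200,
       body: JSON.stringify({ status: 'success', transactionId: '123' }),
     }),
   );
   ```
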
4. **List Required data-testid Attributes**

   ```markdown
   ### Required data-testid Attributes

   **Login Page**:

   - `email-input` - Email input field
   - `password-input` - Password input field
   - `login-button` - Submit button
   - `error-message` - Error message container

   **Dashboard Page**:

   - `user-name` - User name display
   - `logout-button` - Logout button
   ```

---

## Step 5: Create Implementation Checklist

### Actions

1. **Map Tests to Implementation Tasks**

   For each failing test, create corresponding implementation task:

   ```markdown
   ## Implementation Checklist

   ### Epic X - User Authentication

   #### Test: User Login with Valid Credentials

   - [ ] Create `/login` route
   - [ ] Implement login form component
   - [ ] Add email/password validation
   - [ ] Integrate authentication API
   - [ ] Add `data-testid` attributes: `email-input`, `password-input`, `login-button`
   - [ ] Implement error handling
   - [ ] Run test: `npm run test:e2e -- login.spec.ts`
   - [ ] ✅ Test passes (green phase)

   #### Test: Display Error for Invalid Credentials

   - [ ] Add error state management
   - [ ] Display error message UI
   - [ ] Add `data-testid="error-message"`
   - [ ] Run test: `npm run test:e2e -- login.spec.ts`
   - [ ] ✅ Test passes (green phase)
   ```

2. **Include Red-Green-Refactor Guidance**

   ```markdown
   ## Red-Green-Refactor Workflow

   **RED Phase** (Complete):

   - ✅ All tests written and failing
   - ✅ Fixtures and factories created
   - ✅ Mock requirements documented

   **GREEN Phase** (DEV Team):

   1. Pick one failing test
   2. Implement minimal code to make it pass
   3. Run test to verify green
   4. Move to next test
   5. Repeat until all tests pass

   **REFACTOR Phase** (DEV Team):

   1. All tests passing (green)
   2. Improve code quality
   3. Extract duplications
   4. Optimize performance
   5. Ensure tests still pass
   ```

3. **Add Execution Commands**

   ````markdown
   ## Running Tests

   ```bash
   # Run all failing tests
   npm run test:e2e

   # Run specific test file
   npm run test:e2e -- login.spec.ts

   # Run tests in headed mode (see browser)
   npm run test:e2e -- --headed

   # Debug specific test
   npm run test:e2e -- login.spec.ts --debug
   ```
   ````

---

## Step 6: Generate Deliverables

### Actions

1. **Create ATDD Checklist Document**

   Use template structure at `{installed_path}/atdd-checklist-template.md`:
   - Story summary
   - Acceptance criteria breakdown
   - Test files created (with paths)
   - Data factories created
   - Fixtures created
   - Mock requirements
   - Required data-testid attributes
   - Implementation checklist
   - Red-green-refactor workflow
   - Execution commands

2. **Verify All Tests Fail**

   Before finalizing:
   - Run full test suite locally
   - Confirm all tests in RED phase
   - Document expected failure messages
   - Ensure failures are due to missing implementation, not test bugs

3. **Write to Output File**

   Save to `{output_folder}/atdd-checklist-{story_id}.md`

---

## Important Notes

### Red-Green-Refactor Cycle

**RED Phase** (TEA responsibility):

- Write failing tests first
- Tests define expected behavior
- Tests must fail for the right reason (missing implementation)

**GREEN Phase** (DEV responsibility):

- Implement minimal code to pass tests
- One test at a time
- Don't over-engineer

**REFACTOR Phase** (DEV responsibility):

- Improve code quality with confidence
- Tests provide safety net
- Extract duplications, optimize

### Given-When-Then Structure

**GIVEN** (Setup):

- Arrange test preconditions
- Create necessary data
- Navigate to starting point

**WHEN** (Action):

- Execute the behavior being tested
- Single action per test

**THEN** (Assertion):

- Verify expected outcome
- One assertion per test (atomic)

### Network-First Testing

**Critical pattern:**

```typescript
// ✅ CORRECT: Intercept BEFORE navigation
await page.route('**/api/data', handler);
await page.goto('/page');

// ❌ WRONG: Navigate then intercept (race condition)
await page.goto('/page');
await page.route('**/api/data', handler); // Too late!
```

### Data Factory Best Practices

**Use faker for all test data:**

```typescript
// ✅ CORRECT: Random data
email: faker.internet.email();

// ❌ WRONG: Hardcoded data (collisions, maintenance burden)
email: 'test@example.com';
```

**Auto-cleanup principle:**

- Every factory that creates data must provide cleanup
- Fixtures automatically clean up in teardown
- No manual cleanup in test code

### One Assertion Per Test

**Atomic test design:**

```typescript
// ✅ CORRECT: One assertion
test('should display user name', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
});

// ❌ WRONG: Multiple assertions (not atomic)
test('should display user info', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
  await expect(page.locator('[data-testid="user-email"]')).toHaveText('john@example.com');
});
```

**Why?** When the first assertion fails, the test stops and the remaining assertions never run, so a single failure can hide the state of every other behavior in the test.

### Component Test Strategy

**When to use component tests:**

- Complex UI interactions (drag-drop, keyboard nav)
- Form validation logic
- State management within component
- Visual edge cases

**When NOT to use:**

- Simple rendering (snapshot tests are sufficient)
- Integration with backend (use E2E or API tests)
- Full user journeys (use E2E tests)

### Knowledge Base Integration

**Core Fragments (Auto-loaded in Step 1):**

- `fixture-architecture.md` - Pure function → fixture → mergeTests patterns (406 lines, 5 examples)
- `data-factories.md` - Factory patterns with faker, overrides, API seeding (498 lines, 5 examples)
- `component-tdd.md` - Red-green-refactor, provider isolation, accessibility, visual regression (480 lines, 4 examples)
- `network-first.md` - Intercept before navigate, HAR capture, deterministic waiting (489 lines, 5 examples)
- `test-quality.md` - Deterministic tests, cleanup, explicit assertions, length/time limits (658 lines, 5 examples)
- `test-healing-patterns.md` - Common failure patterns: stale selectors, race conditions, dynamic data, network errors, hard waits (648 lines, 5 examples)
- `selector-resilience.md` - Selector hierarchy (data-testid > ARIA > text > CSS), dynamic patterns, anti-patterns (541 lines, 4 examples)
- `timing-debugging.md` - Race condition prevention, deterministic waiting, async debugging (370 lines, 3 examples)

**Reference for Test Level Selection:**

- `test-levels-framework.md` - E2E vs API vs Component vs Unit decision framework (467 lines, 4 examples)

**Manual Reference (Optional):**

- Use `tea-index.csv` to find additional specialized fragments as needed

---

## Output Summary

After completing this workflow, provide a summary:

```markdown
## ATDD Complete - Tests in RED Phase

**Story**: {story_id}
**Primary Test Level**: {primary_level}

**Failing Tests Created**:

- E2E tests: {e2e_count} tests in {e2e_files}
- API tests: {api_count} tests in {api_files}
- Component tests: {component_count} tests in {component_files}

**Supporting Infrastructure**:

- Data factories: {factory_count} factories created
- Fixtures: {fixture_count} fixtures with auto-cleanup
- Mock requirements: {mock_count} services documented

**Implementation Checklist**:

- Total tasks: {task_count}
- Estimated effort: {effort_estimate} hours

**Required data-testid Attributes**: {data_testid_count} attributes documented

**Next Steps for DEV Team**:

1. Run failing tests: `npm run test:e2e`
2. Review implementation checklist
3. Implement one test at a time (RED → GREEN)
4. Refactor with confidence (tests provide safety net)
5. Share progress in daily standup

**Output File**: {output_file}

**Knowledge Base References Applied**:

- Fixture architecture patterns
- Data factory patterns with faker
- Network-first route interception
- Component TDD strategies
- Test quality principles
```

---

## Validation

After completing all steps, verify:

- [ ] Story acceptance criteria analyzed and mapped to tests
- [ ] Appropriate test levels selected (E2E, API, Component)
- [ ] All tests written in Given-When-Then format
- [ ] All tests fail initially (RED phase verified)
- [ ] Network-first pattern applied (route interception before navigation)
- [ ] Data factories created with faker
- [ ] Fixtures created with auto-cleanup
- [ ] Mock requirements documented for DEV team
- [ ] Required data-testid attributes listed
- [ ] Implementation checklist created with clear tasks
- [ ] Red-green-refactor workflow documented
- [ ] Execution commands provided
- [ ] Output file created and formatted correctly

Refer to `checklist.md` for comprehensive validation criteria.

52
bmad/bmm/workflows/testarch/atdd/workflow.yaml
Normal file
@@ -0,0 +1,52 @@

# Test Architect workflow: atdd
name: testarch-atdd
description: "Generate failing acceptance tests before implementation using TDD red-green-refactor cycle"
author: "BMad"

# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/testarch/atdd"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
template: "{installed_path}/atdd-checklist-template.md"

# Variables and inputs
variables:
  test_dir: "{project-root}/tests" # Root test directory

# Output configuration
default_output_file: "{output_folder}/atdd-checklist-{story_id}.md"

# Required tools
required_tools:
  - read_file # Read story markdown, framework config
  - write_file # Create test files, checklist, factory stubs
  - create_directory # Create test directories
  - list_files # Find existing fixtures and helpers
  - search_repo # Search for similar test patterns

# Recommended inputs
recommended_inputs:
  - story: "Story markdown with acceptance criteria (required)"
  - framework_config: "Test framework configuration (playwright.config.ts, cypress.config.ts)"
  - existing_fixtures: "Current fixture patterns for consistency"
  - test_design: "Test design document (optional, for risk/priority context)"

tags:
  - qa
  - atdd
  - test-architect
  - tdd
  - red-green-refactor

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true