# Phase 5: Testing & Validation Guide

## Completed Implementation

✅ **Phase 1-2**: All 26 edge functions + 29 frontend calls have request tracking

✅ **Phase 3**: All 7 admin forms use `submissionReducer` for state management

✅ **Phase 4**: Ready for moderation state machine integration

## Manual Testing Checklist

### Test Suite 1: Form Submission Flow (30 min)

#### Test Case: RideForm submission with state machine

1. Navigate to `/admin` → Create Ride
2. Fill out form completely
3. **DevTools Check:**
   - React DevTools → Find `RideForm` component
   - Watch `submissionState` prop
   - Verify transitions: `draft` → `validating` → `submitting` → `complete` (see the reducer sketch after this list)
4. Click Submit
5. **Expected Behavior:**
   - Button becomes disabled immediately
   - Text changes to "Saving..."
   - Success toast appears
   - Form redirects or resets
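
For reference while watching DevTools, the transitions above correspond to a small discriminated-union reducer. The sketch below is illustrative only; the action names and payloads are assumptions, not the actual `submissionReducer` implementation.

```typescript
// Hypothetical sketch of the submission state machine — names assumed, not the real code.
type SubmissionState =
  | { status: 'draft'; errors?: string[] }
  | { status: 'validating' }
  | { status: 'submitting' }
  | { status: 'complete'; requestId?: string };

type SubmissionAction =
  | { type: 'VALIDATE' }
  | { type: 'VALIDATION_FAILED'; errors: string[] }
  | { type: 'SUBMIT' }
  | { type: 'SUBMIT_SUCCEEDED'; requestId?: string }
  | { type: 'SUBMIT_FAILED'; error: string };

function submissionReducerSketch(state: SubmissionState, action: SubmissionAction): SubmissionState {
  switch (action.type) {
    case 'VALIDATE':
      // draft → validating
      return state.status === 'draft' ? { status: 'validating' } : state;
    case 'VALIDATION_FAILED':
      // validating → draft (with errors); the component keeps the entered form data
      return state.status === 'validating' ? { status: 'draft', errors: action.errors } : state;
    case 'SUBMIT':
      // validating → submitting
      return state.status === 'validating' ? { status: 'submitting' } : state;
    case 'SUBMIT_SUCCEEDED':
      // submitting → complete
      return state.status === 'submitting' ? { status: 'complete', requestId: action.requestId } : state;
    case 'SUBMIT_FAILED':
      // any in-flight state → draft, so the submit button re-enables for retry
      return { status: 'draft', errors: [action.error] };
    default:
      return state;
  }
}
```

The same sketch also covers the two error test cases below: validation failures loop back to `draft` with errors, and network failures land back in `draft` with an error toast.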

#### Test Case: Validation error handling

1. Fill out form with missing required field (e.g., no name)
2. Click Submit
3. **Expected Behavior:**
   - State transitions: `draft` → `validating` → `draft` (with errors)
   - Validation error toast appears
   - Button re-enables for retry
   - Form retains entered data

#### Test Case: Network error handling

1. Fill out form completely
2. Open DevTools → Network tab → Throttle to "Offline"
3. Click Submit
4. **Expected Behavior:**
   - State attempts transition
   - Error caught and handled
   - State resets to `draft`
   - Error toast with retry option
   - Button re-enables

### Test Suite 2: Request Tracking (30 min)

#### Test Case: Edge function correlation

1. Submit RideForm
2. **Browser Check:**
   - Network tab → Find POST request to edge function
   - Response Headers → Verify `X-Request-ID` present
   - Response Body → Verify `requestId` field present
3. Copy `requestId` value
4. **Database Check:**

   ```sql
   SELECT * FROM request_metadata
   WHERE request_id = 'PASTE_REQUEST_ID_HERE';
   ```

5. **Expected:** Single row with matching endpoint, user_id, duration
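
For context on where the `requestId` comes from: all frontend calls are expected to go through the `invokeWithTracking` wrapper (see Success Criteria). The following is a minimal sketch of what such a wrapper could look like, assuming every edge function echoes a `requestId` field in its JSON body; the real signature and behavior may differ.

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

// Illustrative wrapper only — not the project's actual invokeWithTracking.
async function invokeWithTrackingSketch<T extends { requestId?: string }>(
  supabase: SupabaseClient,
  functionName: string,
  body: Record<string, unknown>
): Promise<{ data: T | null; requestId?: string; error?: string }> {
  const { data, error } = await supabase.functions.invoke<T>(functionName, { body });

  if (error) {
    // Surface the failure; the UI can show a toast that includes the requestId
    // (when available) so support can correlate it with request_metadata.
    return { data: null, requestId: data?.requestId, error: error.message };
  }

  return { data, requestId: data?.requestId };
}
```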

#### Test Case: Toast notification with requestId

1. Trigger photo upload
2. **Expected:** Success toast displays:

   ```
   Upload Successful
   Request ID: abc12345
   ```
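
A hypothetical sketch of how such a toast could be raised — the toast helper signature here is an assumption, not the project's actual API:

```typescript
// `toast` stands in for whatever notification helper the app uses.
type ToastFn = (opts: { title: string; description: string }) => void;

function showUploadSuccessToast(toast: ToastFn, requestId: string): void {
  toast({
    title: 'Upload Successful',
    // Shortened id, matching the "abc12345" form shown above.
    description: `Request ID: ${requestId.slice(0, 8)}`,
  });
}
```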

### Test Suite 3: Database Validation (1 hour)

#### Query 1: Request Metadata Coverage

```sql
SELECT
  endpoint,
  COUNT(*) as request_count,
  COUNT(DISTINCT user_id) as unique_users,
  AVG(duration_ms) as avg_duration_ms,
  MAX(duration_ms) as max_duration_ms,
  MIN(duration_ms) as min_duration_ms,
  COUNT(CASE WHEN error_message IS NOT NULL THEN 1 END) as error_count,
  ROUND(100.0 * COUNT(CASE WHEN error_message IS NOT NULL THEN 1 END) / COUNT(*), 2) as error_rate_percent
FROM request_metadata
WHERE created_at > NOW() - INTERVAL '1 hour'
GROUP BY endpoint
ORDER BY request_count DESC;
```

**Expected:** All critical endpoints present (`process-selective-approval`, `upload-image`, etc.)

#### Query 2: Trace ID Correlation

```sql
SELECT
  trace_id,
  COUNT(*) as operation_count,
  MIN(created_at) as first_operation,
  MAX(created_at) as last_operation,
  EXTRACT(EPOCH FROM (MAX(created_at) - MIN(created_at))) as total_duration_seconds,
  STRING_AGG(DISTINCT endpoint, ', ' ORDER BY endpoint) as endpoints_hit
FROM request_metadata
WHERE trace_id IS NOT NULL
  AND created_at > NOW() - INTERVAL '1 day'
GROUP BY trace_id
HAVING COUNT(*) > 1
ORDER BY operation_count DESC
LIMIT 20;
```

**Expected:** Batch approvals show 5-50 operations with same `trace_id`
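
To see why many rows share one `trace_id`, consider how a batch approval might issue its calls. This is a hypothetical illustration; how the trace id is actually passed (body field vs. header) is an assumption.

```typescript
// Hypothetical batch loop: one trace id generated up front and reused for every call,
// so all resulting request_metadata rows group together in Query 2.
type InvokeFn = (functionName: string, body: Record<string, unknown>) => Promise<unknown>;

async function approveBatchSketch(invoke: InvokeFn, itemIds: string[]): Promise<void> {
  const traceId = crypto.randomUUID(); // shared by the whole batch

  for (const itemId of itemIds) {
    await invoke('process-selective-approval', { itemId, traceId });
  }
}
```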

#### Query 3: Status Type Safety

```sql
SELECT
  'content_submissions' as table_name,
  status,
  COUNT(*) as count,
  CASE
    WHEN status IN ('draft', 'pending', 'locked', 'reviewing', 'partially_approved', 'approved', 'rejected', 'escalated')
    THEN 'VALID'
    ELSE 'INVALID'
  END as validity
FROM content_submissions
GROUP BY status

UNION ALL

SELECT
  'submission_items' as table_name,
  status,
  COUNT(*) as count,
  CASE
    WHEN status IN ('pending', 'approved', 'rejected', 'flagged', 'skipped')
    THEN 'VALID'
    ELSE 'INVALID'
  END as validity
FROM submission_items
GROUP BY status
ORDER BY table_name, count DESC;
```

**Expected:** All rows show `VALID` in validity column
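
The `VALID` lists in this query should stay in sync with the frontend status unions. A sketch of what those unions look like, with type names assumed rather than taken from the codebase:

```typescript
// Mirrors the status values Query 3 treats as VALID for content_submissions.
type SubmissionStatus =
  | 'draft'
  | 'pending'
  | 'locked'
  | 'reviewing'
  | 'partially_approved'
  | 'approved'
  | 'rejected'
  | 'escalated';

// Mirrors the status values Query 3 treats as VALID for submission_items.
type SubmissionItemStatus =
  | 'pending'
  | 'approved'
  | 'rejected'
  | 'flagged'
  | 'skipped';
```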

#### Query 4: Orphaned Data Check

```sql
SELECT
  cs.id,
  cs.created_at,
  cs.status,
  cs.submission_type,
  cs.submitted_by,
  COUNT(si.id) as item_count
FROM content_submissions cs
LEFT JOIN submission_items si ON si.submission_id = cs.id
WHERE cs.created_at > NOW() - INTERVAL '2 hours'
GROUP BY cs.id, cs.created_at, cs.status, cs.submission_type, cs.submitted_by
HAVING COUNT(si.id) = 0
ORDER BY cs.created_at DESC;
```

**Expected:** 0 rows (or only very recent submissions < 1 hour old)

#### Query 5: Lock Duration Analysis

```sql
SELECT
  DATE_TRUNC('hour', locked_at) as hour,
  COUNT(*) as locks_acquired,
  AVG(EXTRACT(EPOCH FROM (locked_until - locked_at))) / 60 as avg_lock_duration_minutes,
  COUNT(CASE WHEN locked_until < NOW() THEN 1 END) as expired_locks,
  COUNT(CASE WHEN status = 'locked' AND locked_until < NOW() THEN 1 END) as stuck_locks
FROM content_submissions
WHERE locked_at > NOW() - INTERVAL '24 hours'
GROUP BY DATE_TRUNC('hour', locked_at)
ORDER BY hour DESC;
```

**Expected:**

- Average lock duration ~15 minutes
- Few expired locks
- Zero stuck locks

### Test Suite 4: Performance Testing (1 hour)

#### Test 1: State Machine Overhead

1. Open Chrome DevTools → Performance tab
2. Click "Record" (⚫)
3. Fill out and submit RideForm
4. Stop recording
5. **Analysis:**
   - Find "Reducer" or "submissionReducer" in flame graph
   - Measure total time in reducer calls
   - **Target:** < 5ms total overhead per submission
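
If flame-graph attribution is noisy, the same measurement can be taken by wrapping the reducer in a dev-only timer. A hypothetical sketch (not part of the app code; the wrapper name is made up):

```typescript
// Wraps any reducer and logs cumulative time spent inside it.
function withTiming<S, A>(
  reducer: (state: S, action: A) => S,
  label = 'submissionReducer'
): (state: S, action: A) => S {
  let totalMs = 0;
  return (state, action) => {
    const start = performance.now();
    const next = reducer(state, action);
    totalMs += performance.now() - start;
    // Target from above: < 5ms cumulative per submission
    console.debug(`${label}: ${totalMs.toFixed(3)} ms cumulative`);
    return next;
  };
}
```

During a test build this could be dropped in as `useReducer(withTiming(submissionReducer), initialState)`.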

#### Test 2: Request Metadata Insert Performance

```sql
EXPLAIN ANALYZE
INSERT INTO request_metadata (
  request_id, user_id, endpoint, method, status_code, duration_ms
) VALUES (
  gen_random_uuid(),
  'test-user-id',  -- substitute a real user UUID if user_id is a uuid / foreign-key column
  '/functions/test',
  'POST',
  200,
  150
);
```

Note that `EXPLAIN ANALYZE` actually executes the insert; wrap it in `BEGIN; ... ROLLBACK;` if you don't want the test row to persist.

**Target:** Execution time < 50ms

#### Test 3: Memory Leak Detection

1. Open Chrome DevTools → Memory tab
2. Take heap snapshot (Baseline)
3. Perform 20 form submissions (RideForm)
4. Force garbage collection (🗑️ icon)
5. Take second heap snapshot
6. Compare snapshots
7. **Expected:**
   - No significant memory retention from state machines
   - No dangling event listeners
   - No uncleaned timeouts/intervals

## Success Criteria

### Functional Requirements

- ✅ All 26 edge functions return `requestId` and `X-Request-ID` header
- ✅ All 29 `supabase.functions.invoke` calls use `invokeWithTracking`
- ✅ All 7 admin forms use `submissionReducer` for submission flow
- ⏳ `SubmissionReviewManager` uses `moderationReducer` for review flow
- ⏳ `useModerationQueue` uses `moderationReducer` for claim/release operations
- ⏳ Lock expiry monitoring active with warning toasts
- ✅ Error toasts display `requestId` for debugging support

### Quality Requirements

- ✅ Zero TypeScript errors in strict mode
- ✅ No illegal state transitions possible (enforced by reducers)
- ✅ 100% request correlation coverage for critical paths
- ⏳ Database queries validate no orphaned data or invalid statuses
- ⏳ Performance overhead within acceptable limits

## Next Steps

1. **Phase 4**: Integrate moderation state machine into `SubmissionReviewManager` and `useModerationQueueManager`
2. **Complete Testing**: Run all manual test scenarios
3. **Database Validation**: Execute all validation queries
4. **Performance Benchmarks**: Verify all metrics meet targets
5. **Memory Leak Testing**: Ensure no memory retention issues