Phase 5: Testing & Validation Guide
Completed Implementation
- ✅ Phase 1-2: All 26 edge functions + 29 frontend calls have request tracking
- ✅ Phase 3: All 7 admin forms use `submissionReducer` for state management (see the sketch below)
- ✅ Phase 4: Ready for moderation state machine integration
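For reference while testing, the submission state machine has roughly the shape below. This is a minimal sketch with illustrative type names; the repo's actual `submissionReducer` is the source of truth and may differ in detail.

```typescript
// Minimal sketch of the submission state machine (illustrative types; the
// repo's submissionReducer is authoritative).
type SubmissionState =
  | { status: 'draft'; errors?: string[] }
  | { status: 'validating' }
  | { status: 'submitting' }
  | { status: 'complete' };

type SubmissionAction =
  | { type: 'VALIDATE' }
  | { type: 'VALIDATION_FAILED'; errors: string[] }
  | { type: 'SUBMIT' }
  | { type: 'SUCCESS' }
  | { type: 'ERROR'; message: string };

function submissionReducer(state: SubmissionState, action: SubmissionAction): SubmissionState {
  switch (state.status) {
    case 'draft':
      // The only legal exit from draft is into validation.
      return action.type === 'VALIDATE' ? { status: 'validating' } : state;
    case 'validating':
      if (action.type === 'SUBMIT') return { status: 'submitting' };
      if (action.type === 'VALIDATION_FAILED') return { status: 'draft', errors: action.errors };
      return state;
    case 'submitting':
      if (action.type === 'SUCCESS') return { status: 'complete' };
      if (action.type === 'ERROR') return { status: 'draft', errors: [action.message] };
      return state;
    case 'complete':
      // Terminal for this submission; a reset starts a new draft.
      return state;
  }
}
```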
Manual Testing Checklist
Test Suite 1: Form Submission Flow (30 min)
Test Case: RideForm submission with state machine
- Navigate to `/admin` → Create Ride
- Fill out form completely
- DevTools Check:
  - React DevTools → Find the `RideForm` component
  - Watch the `submissionState` prop
  - Verify transitions: `draft` → `validating` → `submitting` → `complete`
- Click Submit
- Expected Behavior:
  - Button becomes disabled immediately
  - Text changes to "Saving..."
  - Success toast appears
  - Form redirects or resets
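The disabled-button behavior above falls out of the state value rather than ad-hoc flags. A hypothetical wiring of a submit handler to the reducer sketched earlier (names and hook shape are assumptions, not the repo's exact code):

```typescript
// Hypothetical wiring of a form submit handler to the submissionReducer sketched above.
// RideForm's real implementation may differ; this only illustrates the flow under test.
import { useReducer } from 'react';

function useSubmissionFlow(save: () => Promise<void>) {
  const [state, dispatch] = useReducer(submissionReducer, { status: 'draft' });

  async function handleSubmit(isValid: boolean, validationErrors: string[] = []) {
    dispatch({ type: 'VALIDATE' });
    if (!isValid) {
      dispatch({ type: 'VALIDATION_FAILED', errors: validationErrors });
      return;
    }
    dispatch({ type: 'SUBMIT' });
    try {
      await save();
      dispatch({ type: 'SUCCESS' });
    } catch (err) {
      dispatch({ type: 'ERROR', message: err instanceof Error ? err.message : 'Unknown error' });
    }
  }

  // The "Saving..." label and the disabled state both key off the machine,
  // which is why the button disables immediately once VALIDATE is dispatched.
  const isBusy = state.status === 'validating' || state.status === 'submitting';
  return { state, isBusy, handleSubmit };
}
```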
Test Case: Validation error handling
- Fill out form with missing required field (e.g., no name)
- Click Submit
- Expected Behavior:
  - State transitions: `draft` → `validating` → `draft` (with errors)
  - Validation error toast appears
  - Button re-enables for retry
  - Form retains entered data
Test Case: Network error handling
- Fill out form completely
- Open DevTools → Network tab → Throttle to "Offline"
- Click Submit
- Expected Behavior:
  - State attempts to transition
  - Error caught and handled
  - State resets to `draft`
  - Error toast with retry option (see the sketch below)
  - Button re-enables
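Because the reducer has already returned the form to `draft` by the time the error toast appears, "retry" is simply re-running the same submit handler. A sketch, assuming a sonner-style toast API (the project's actual toast helper may differ):

```typescript
// Sketch of the network-error toast with a retry action (sonner-style API assumed).
import { toast } from 'sonner';

async function submitWithErrorToast(submit: () => Promise<void>) {
  try {
    await submit();
  } catch (err) {
    // At this point the state machine has reset the form to 'draft',
    // so retrying just invokes the same handler again.
    toast.error('Submission failed', {
      description: err instanceof Error ? err.message : 'Network error',
      action: { label: 'Retry', onClick: () => void submitWithErrorToast(submit) },
    });
  }
}
```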
Test Suite 2: Request Tracking (30 min)
Test Case: Edge function correlation
- Submit RideForm
- Browser Check:
  - Network tab → Find POST request to edge function
  - Response Headers → Verify `X-Request-ID` present
  - Response Body → Verify `requestId` field present
- Copy `requestId` value
- Database Check:
  SELECT * FROM request_metadata WHERE request_id = 'PASTE_REQUEST_ID_HERE';
- Expected: Single row with matching endpoint, user_id, duration
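When tracing a request end to end, it helps to know roughly what the tracking wrapper does. A hedged sketch of an `invokeWithTracking`-style helper (the repo's real helper and import paths are authoritative; the signature here is an assumption):

```typescript
// Hedged sketch of an invokeWithTracking-style wrapper around supabase.functions.invoke.
import { supabase } from '@/integrations/supabase/client'; // hypothetical import path

interface TrackingOptions {
  body?: unknown;
  traceId?: string; // shared across related calls, e.g. a batch approval
}

export async function invokeWithTracking<T>(endpoint: string, options: TrackingOptions = {}) {
  const requestId = crypto.randomUUID();
  const started = performance.now();

  const { data, error } = await supabase.functions.invoke<T>(endpoint, {
    body: options.body,
    headers: {
      'X-Request-ID': requestId,
      ...(options.traceId ? { 'X-Trace-ID': options.traceId } : {}),
    },
  });

  // The edge function echoes the id back (X-Request-ID header / requestId field)
  // and records endpoint, user_id, and duration in request_metadata.
  console.debug(`[${endpoint}] requestId=${requestId} duration=${Math.round(performance.now() - started)}ms`);

  if (error) throw Object.assign(new Error(error.message), { requestId });
  return { data, requestId };
}
```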
Test Case: Toast notification with requestId
- Trigger photo upload
- Expected: Success toast displays:
  Upload Successful
  Request ID: abc12345
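The toast body can be produced directly from the value returned by the tracking wrapper, truncating the UUID to its first eight characters as in the example above (sonner-style toast assumed):

```typescript
// Sketch: surface the request id in the success toast for later correlation.
import { toast } from 'sonner';

function notifyUploadSuccess(requestId: string) {
  toast.success('Upload Successful', {
    description: `Request ID: ${requestId.slice(0, 8)}`,
  });
}
```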
Test Suite 3: Database Validation (1 hour)
Query 1: Request Metadata Coverage
SELECT
endpoint,
COUNT(*) as request_count,
COUNT(DISTINCT user_id) as unique_users,
AVG(duration_ms) as avg_duration_ms,
MAX(duration_ms) as max_duration_ms,
MIN(duration_ms) as min_duration_ms,
COUNT(CASE WHEN error_message IS NOT NULL THEN 1 END) as error_count,
ROUND(100.0 * COUNT(CASE WHEN error_message IS NOT NULL THEN 1 END) / COUNT(*), 2) as error_rate_percent
FROM request_metadata
WHERE created_at > NOW() - INTERVAL '1 hour'
GROUP BY endpoint
ORDER BY request_count DESC;
Expected: All critical endpoints present (process-selective-approval, upload-image, etc.)
Query 2: Trace ID Correlation
SELECT
trace_id,
COUNT(*) as operation_count,
MIN(created_at) as first_operation,
MAX(created_at) as last_operation,
EXTRACT(EPOCH FROM (MAX(created_at) - MIN(created_at))) as total_duration_seconds,
STRING_AGG(DISTINCT endpoint, ', ' ORDER BY endpoint) as endpoints_hit
FROM request_metadata
WHERE trace_id IS NOT NULL
AND created_at > NOW() - INTERVAL '1 day'
GROUP BY trace_id
HAVING COUNT(*) > 1
ORDER BY operation_count DESC
LIMIT 20;
Expected: Batch approvals show 5-50 operations with same trace_id
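A single `trace_id` spanning many rows comes from the caller reusing one identifier across a batch, roughly like the hypothetical loop below (built on the `invokeWithTracking` sketch above; the request body shape and the real batch approval code may differ):

```typescript
// Hypothetical batch approval loop: one trace id, many tracked requests.
async function approveSelectedItems(itemIds: string[]) {
  const traceId = crypto.randomUUID();
  for (const itemId of itemIds) {
    // Each call gets its own request_id but shares the batch's trace_id,
    // which is what Query 2 groups on.
    await invokeWithTracking('process-selective-approval', {
      body: { itemId, decision: 'approved' }, // illustrative body shape
      traceId,
    });
  }
}
```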
Query 3: Status Type Safety
SELECT
'content_submissions' as table_name,
status,
COUNT(*) as count,
CASE
WHEN status IN ('draft', 'pending', 'locked', 'reviewing', 'partially_approved', 'approved', 'rejected', 'escalated')
THEN 'VALID'
ELSE 'INVALID'
END as validity
FROM content_submissions
GROUP BY status
UNION ALL
SELECT
'submission_items' as table_name,
status,
COUNT(*) as count,
CASE
WHEN status IN ('pending', 'approved', 'rejected', 'flagged', 'skipped')
THEN 'VALID'
ELSE 'INVALID'
END as validity
FROM submission_items
GROUP BY status
ORDER BY table_name, count DESC;
Expected: All rows show VALID in validity column
Query 4: Orphaned Data Check
SELECT
cs.id,
cs.created_at,
cs.status,
cs.submission_type,
cs.submitted_by,
COUNT(si.id) as item_count
FROM content_submissions cs
LEFT JOIN submission_items si ON si.submission_id = cs.id
WHERE cs.created_at > NOW() - INTERVAL '2 hours'
GROUP BY cs.id, cs.created_at, cs.status, cs.submission_type, cs.submitted_by
HAVING COUNT(si.id) = 0
ORDER BY cs.created_at DESC;
Expected: 0 rows (or only very recent submissions < 1 hour old)
Query 5: Lock Duration Analysis
SELECT
DATE_TRUNC('hour', locked_at) as hour,
COUNT(*) as locks_acquired,
AVG(EXTRACT(EPOCH FROM (locked_until - locked_at))) / 60 as avg_lock_duration_minutes,
COUNT(CASE WHEN locked_until < NOW() THEN 1 END) as expired_locks,
COUNT(CASE WHEN status = 'locked' AND locked_until < NOW() THEN 1 END) as stuck_locks
FROM content_submissions
WHERE locked_at > NOW() - INTERVAL '24 hours'
GROUP BY DATE_TRUNC('hour', locked_at)
ORDER BY hour DESC;
Expected:
- Average lock duration ~15 minutes
- Few expired locks
- Zero stuck locks
Test Suite 4: Performance Testing (1 hour)
Test 1: State Machine Overhead
- Open Chrome DevTools → Performance tab
- Click "Record" (⚫)
- Fill out and submit RideForm
- Stop recording
- Analysis:
- Find "Reducer" or "submissionReducer" in flame graph
- Measure total time in reducer calls
- Target: < 5ms total overhead per submission
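If the flame graph is noisy, the reducer can also be timed directly. A quick instrumentation sketch (wrap the reducer temporarily and remove after measuring; builds on the earlier sketch):

```typescript
// Temporary instrumentation: accumulate time spent inside the reducer per submission.
let reducerTimeMs = 0;

function instrumentedSubmissionReducer(state: SubmissionState, action: SubmissionAction): SubmissionState {
  const t0 = performance.now();
  const next = submissionReducer(state, action);
  reducerTimeMs += performance.now() - t0;
  return next;
}

// After one full submission (draft → validating → submitting → complete),
// reducerTimeMs should stay well under the 5 ms target.
```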
Test 2: Request Metadata Insert Performance
EXPLAIN ANALYZE
INSERT INTO request_metadata (
request_id, user_id, endpoint, method, status_code, duration_ms
) VALUES (
gen_random_uuid(),
'test-user-id',
'/functions/test',
'POST',
200,
150
);
Target: Execution time < 50ms
Test 3: Memory Leak Detection
- Open Chrome DevTools → Memory tab
- Take heap snapshot (Baseline)
- Perform 20 form submissions (RideForm)
- Force garbage collection (🗑️ icon)
- Take second heap snapshot
- Compare snapshots
- Expected:
- No significant memory retention from state machines
- No dangling event listeners
- No uncleaned timeouts/intervals
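The most common sources of retention in this flow are timers and listeners started per submission or per lock. The cleanup pattern the heap comparison is checking for looks roughly like this (hypothetical hook; lock expiry warnings are the pending Phase 4 item listed in the success criteria):

```typescript
// Hypothetical lock-expiry warning hook showing the cleanup pattern the
// heap snapshots should confirm: every timeout is cleared on unmount.
import { useEffect } from 'react';

function useLockExpiryWarning(lockedUntil: Date | null, onExpiring: () => void) {
  useEffect(() => {
    if (!lockedUntil) return;
    const warnAt = lockedUntil.getTime() - 60_000; // warn one minute before expiry (illustrative)
    const id = window.setTimeout(onExpiring, Math.max(warnAt - Date.now(), 0));
    return () => window.clearTimeout(id); // no dangling timeouts between snapshots
  }, [lockedUntil, onExpiring]);
}
```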
Success Criteria
Functional Requirements
- ✅ All 26 edge functions return `requestId` and `X-Request-ID` header
- ✅ All 29 `supabase.functions.invoke` calls use `invokeWithTracking`
- ✅ All 7 admin forms use `submissionReducer` for submission flow
- ⏳ `SubmissionReviewManager` uses `moderationReducer` for review flow
- ⏳ `useModerationQueue` uses `moderationReducer` for claim/release operations
- ⏳ Lock expiry monitoring active with warning toasts
- ✅ Error toasts display `requestId` for debugging support
Quality Requirements
- ✅ Zero TypeScript errors in strict mode
- ✅ No illegal state transitions possible (enforced by reducers)
- ✅ 100% request correlation coverage for critical paths
- ⏳ Database queries validate no orphaned data or invalid statuses
- ⏳ Performance overhead within acceptable limits
Next Steps
- Phase 4: Integrate moderation state machine into `SubmissionReviewManager` and `useModerationQueueManager`
- Complete Testing: Run all manual test scenarios
- Database Validation: Execute all validation queries
- Performance Benchmarks: Verify all metrics meet targets
- Memory Leak Testing: Ensure no memory retention issues