# Moderation Queue Security
## Overview
The moderation queue implements multiple layers of security to prevent unauthorized access, enforce proper workflows, and maintain a comprehensive audit trail.
## Security Layers
### 1. Role-Based Access Control (RBAC)
All moderation actions require one of the following roles:
- `moderator`: Can review and approve/reject submissions
- `admin`: Full moderation access + user management
- `superuser`: All admin privileges + system configuration
**Implementation:**
- Roles stored in separate `user_roles` table (not on profiles)
- `has_role()` function uses `SECURITY DEFINER` to avoid RLS recursion
- RLS policies enforce role requirements on all sensitive operations
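A minimal sketch of how this can be structured, assuming `user_roles` columns named `user_id` and `role` (the deployed table and `has_role()` body may differ):
```sql
-- Illustrative only: shapes assumed from the description above
CREATE TABLE IF NOT EXISTS user_roles (
  user_id UUID REFERENCES auth.users(id) NOT NULL,
  role TEXT NOT NULL,
  PRIMARY KEY (user_id, role)
);

CREATE OR REPLACE FUNCTION has_role(_user_id UUID, _role TEXT)
RETURNS BOOLEAN AS $$
  -- SECURITY DEFINER lets RLS policies call this check without recursing
  -- into the policies defined on user_roles itself
  SELECT EXISTS (
    SELECT 1 FROM user_roles
    WHERE user_id = _user_id AND role = _role
  );
$$ LANGUAGE sql STABLE SECURITY DEFINER;
```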
### 2. Lock Enforcement
Submissions can be "claimed" by moderators to prevent concurrent modifications.
**Lock Mechanism:**
- 15-minute expiry window
- Only the claiming moderator can approve/reject/delete
- Backend validation via `validate_moderation_action()` function
- RLS policies prevent lock bypassing
**Lock States:**
```typescript
interface LockState {
  submissionId: string;
  lockedBy: string;
  expiresAt: Date;
}
```
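As an illustration of the claim flow (the client import path and exact update semantics below are assumptions), a moderator claims a submission by setting `assigned_to` and a `locked_until` timestamp 15 minutes out:
```typescript
import { supabase } from '@/integrations/supabase/client'; // hypothetical client path

// Sketch: claim a submission for the current moderator with a 15-minute lock
async function claimSubmission(submissionId: string, userId: string) {
  const lockExpiry = new Date(Date.now() + 15 * 60 * 1000);

  const { data, error } = await supabase
    .from('content_submissions')
    .update({ assigned_to: userId, locked_until: lockExpiry.toISOString() })
    .eq('id', submissionId)
    .is('assigned_to', null) // only claim unassigned items; reclaiming expired locks needs its own path
    .select();

  return { claimed: !error && (data?.length ?? 0) > 0, error };
}
```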
### 3. Rate Limiting
**Client-Side:**
- Debounced filter updates (300ms)
- Action buttons disabled during processing
- Toast notifications for user feedback
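One way to get the 300ms debounce on filter updates is a small hook like the sketch below (the hook name is illustrative, not the project's actual helper):
```typescript
import { useEffect, useState } from 'react';

// Sketch: delay propagation of a changing filter value by `delayMs`
export function useDebouncedValue<T>(value: T, delayMs = 300): T {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(timer); // restart the timer whenever the value changes
  }, [value, delayMs]);

  return debounced;
}
```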
**Server-Side:**
- Maximum 10 moderation actions per minute per user
- Enforced by `validate_moderation_action()` function
- Uses `moderation_audit_log` for tracking
### 4. Input Sanitization
All user-generated content is sanitized before rendering to prevent XSS attacks.
**Sanitization Functions:**
```typescript
import { sanitizeURL, sanitizePlainText, sanitizeHTML } from '@/lib/sanitize';
// Sanitize URLs to prevent javascript: and data: protocols
const safeUrl = sanitizeURL(userInput);
// Escape HTML entities in plain text
const safeText = sanitizePlainText(userInput);
// Sanitize HTML with whitelist
const safeHTML = sanitizeHTML(userInput);
```
**Protected Fields:**
- `submission_notes` - Plain text sanitization
- `source_url` - URL protocol validation
- `reviewer_notes` - Plain text sanitization
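For illustration (the component and prop shapes below are hypothetical), a detail view applies these sanitizers before rendering the protected fields:
```typescript
import { sanitizeURL, sanitizePlainText } from '@/lib/sanitize';

// Sketch: render protected fields only after sanitization
function SubmissionDetails({ submission }: {
  submission: { submission_notes: string; source_url: string };
}) {
  const safeNotes = sanitizePlainText(submission.submission_notes);
  const safeUrl = sanitizeURL(submission.source_url);

  return (
    <div>
      <p>{safeNotes}</p>
      {safeUrl && <a href={safeUrl} rel="noopener noreferrer">Source</a>}
    </div>
  );
}
```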
### 5. Audit Trail
All moderation actions are automatically logged in the `moderation_audit_log` table.
**Logged Actions:**
- `approve` - Submission approved
- `reject` - Submission rejected
- `delete` - Submission permanently deleted
- `reset` - Submission reset to pending
- `claim` - Submission locked by moderator
- `release` - Lock released
- `extend_lock` - Lock expiry extended
- `retry_failed` - Failed items retried
**Audit Log Schema:**
```sql
CREATE TABLE moderation_audit_log (
  id UUID PRIMARY KEY,
  submission_id UUID REFERENCES content_submissions(id),
  moderator_id UUID REFERENCES auth.users(id),
  action TEXT,
  previous_status TEXT,
  new_status TEXT,
  notes TEXT,
  metadata JSONB,
  created_at TIMESTAMPTZ
);
```
**Access:**
- Read-only for moderators/admins/superusers
- Inserted automatically via database trigger
- Cannot be modified or deleted (immutable audit trail)
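A sketch of the kind of trigger that performs the automatic insert (names, the action mapping, and column handling may differ in the deployed migration):
```sql
-- Illustrative trigger sketch only
CREATE OR REPLACE FUNCTION log_moderation_change()
RETURNS TRIGGER AS $$
BEGIN
  INSERT INTO moderation_audit_log
    (id, submission_id, moderator_id, action, previous_status, new_status, created_at)
  VALUES
    (gen_random_uuid(), NEW.id, auth.uid(),
     NEW.status, -- the real implementation maps the change to an action name ('approve', 'reject', ...)
     OLD.status, NEW.status, NOW());
  RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER moderation_audit_trigger
AFTER UPDATE OF status ON content_submissions
FOR EACH ROW
WHEN (OLD.status IS DISTINCT FROM NEW.status)
EXECUTE FUNCTION log_moderation_change();
```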
## Validation Function
The `validate_moderation_action()` function enforces all security rules:
```sql
SELECT validate_moderation_action(
  _submission_id := '<uuid>',
  _user_id := auth.uid(),
  _action := 'approve'
);
```
**Validation Steps:**
1. Check if user has moderator/admin/superuser role
2. Check if submission is locked by another user
3. Check rate limit (10 actions/minute)
4. Return `true` if valid, raise exception otherwise
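A plpgsql sketch of those four steps, assuming the column names used elsewhere in this document (`assigned_to`, `locked_until`); the deployed function body may differ:
```sql
CREATE OR REPLACE FUNCTION validate_moderation_action(
  _submission_id UUID,
  _user_id UUID,
  _action TEXT
) RETURNS BOOLEAN AS $$
DECLARE
  _locked_by UUID;
  _locked_until TIMESTAMPTZ;
  _recent_actions INT;
BEGIN
  -- 1. Role check
  IF NOT (has_role(_user_id, 'moderator')
          OR has_role(_user_id, 'admin')
          OR has_role(_user_id, 'superuser')) THEN
    RAISE EXCEPTION 'User lacks moderation privileges';
  END IF;

  -- 2. Lock check: reject if another moderator holds an unexpired lock
  SELECT assigned_to, locked_until INTO _locked_by, _locked_until
  FROM content_submissions WHERE id = _submission_id;
  IF _locked_by IS NOT NULL AND _locked_by <> _user_id AND _locked_until > NOW() THEN
    RAISE EXCEPTION 'Submission is locked by another moderator';
  END IF;

  -- 3. Rate limit: at most 10 actions in the last minute
  SELECT COUNT(*) INTO _recent_actions
  FROM moderation_audit_log
  WHERE moderator_id = _user_id
    AND created_at > NOW() - INTERVAL '1 minute';
  IF _recent_actions >= 10 THEN
    RAISE EXCEPTION 'Rate limit exceeded';
  END IF;

  -- 4. All checks passed
  RETURN TRUE;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```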
**Usage in Application:**
Although the validation function is available, these rules are primarily enforced through:
- RLS policies on the `content_submissions` table
- Automatic audit logging via triggers
- Frontend lock state management
The validation function can also be called explicitly as an additional check:
```typescript
const { data, error } = await supabase.rpc('validate_moderation_action', {
  _submission_id: submissionId,
  _user_id: userId,
  _action: 'approve'
});

if (error) {
  // Handle validation failure
}
```
## RLS Policies
### content_submissions
```sql
-- Update policy with lock enforcement
CREATE POLICY "Moderators can update with validation"
ON content_submissions FOR UPDATE
USING (has_role(auth.uid(), 'moderator'))
WITH CHECK (
  has_role(auth.uid(), 'moderator')
  AND (
    assigned_to IS NULL
    OR assigned_to = auth.uid()
    OR locked_until < NOW()
  )
);
```
### moderation_audit_log
```sql
-- Read-only for moderators
CREATE POLICY "Moderators can view audit log"
ON moderation_audit_log FOR SELECT
USING (has_role(auth.uid(), 'moderator'));

-- Insert only (via trigger or explicit call)
CREATE POLICY "System can insert audit log"
ON moderation_audit_log FOR INSERT
WITH CHECK (moderator_id = auth.uid());
```
## Security Best Practices
### For Developers
1. **Always sanitize user input** before rendering:
```typescript
// ❌ NEVER DO THIS
<div>{userInput}</div>
// ✅ ALWAYS DO THIS
<div>{sanitizePlainText(userInput)}</div>
```
2. **Never bypass validation** for "convenience":
```typescript
// ❌ WRONG
if (isAdmin) {
  // Skip lock check for admins
  await updateSubmission(id, { status: 'approved' });
}

// ✅ CORRECT
// Let RLS policies handle authorization
const { error } = await supabase
  .from('content_submissions')
  .update({ status: 'approved' })
  .eq('id', id);
```
3. **Always check lock state** before actions:
```typescript
const isLockedByOther = useModerationQueue().isLockedByOther(
  item.id,
  item.assigned_to,
  item.locked_until
);

if (isLockedByOther) {
  toast.error('Submission is locked by another moderator');
  return;
}
```
4. **Log all admin actions** for audit trail:
```typescript
await supabase.rpc('log_admin_action', {
  action: 'delete_submission',
  target_id: submissionId,
  details: { reason: 'spam' }
});
```
### For Moderators
1. **Always claim submissions** before reviewing (prevents conflicts)
2. **Release locks** if stepping away (allows others to review)
3. **Provide clear notes** for rejections (improves submitter experience)
4. **Respect rate limits** (prevents accidental mass actions)
## Threat Mitigation
### XSS (Cross-Site Scripting)
**Threat:** Malicious users submit content with JavaScript to steal session tokens or modify page behavior.
**Mitigation:**
- All user input sanitized via `DOMPurify`
- URL validation blocks `javascript:` and `data:` protocols
- CSP headers (if configured) provide additional layer
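A sketch of how `@/lib/sanitize` could wrap DOMPurify and validate URL protocols (the actual whitelist and helper bodies may differ):
```typescript
import DOMPurify from 'dompurify';

// Sketch: whitelist-based HTML sanitization
export function sanitizeHTML(input: string): string {
  return DOMPurify.sanitize(input, {
    ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'ul', 'ol', 'li'],
    ALLOWED_ATTR: ['href', 'title'],
  });
}

// Sketch: allow only http(s) URLs, blocking javascript: and data: schemes
export function sanitizeURL(input: string): string {
  try {
    const url = new URL(input, window.location.origin);
    return ['http:', 'https:'].includes(url.protocol) ? url.toString() : '';
  } catch {
    return ''; // unparsable input is dropped
  }
}
```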
### CSRF (Cross-Site Request Forgery)
**Threat:** An attacker tricks an authenticated user into performing unwanted actions.
**Mitigation:**
- Supabase JWT tokens provide CSRF protection
- All API calls require valid session token
- SameSite cookie settings (managed by Supabase)
### Privilege Escalation
**Threat:** Regular user gains moderator/admin privileges.
**Mitigation:**
- Roles stored in separate `user_roles` table with RLS
- Only superusers can grant roles (enforced by RLS)
- `has_role()` runs as `SECURITY DEFINER`, so role checks read `user_roles` without RLS recursion
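An illustrative shape for the `user_roles` policies implied above (the second, self-read policy is an assumption, not confirmed by this document):
```sql
ALTER TABLE user_roles ENABLE ROW LEVEL SECURITY;

-- Only superusers may grant or revoke roles
CREATE POLICY "Superusers manage roles"
ON user_roles FOR ALL
USING (has_role(auth.uid(), 'superuser'))
WITH CHECK (has_role(auth.uid(), 'superuser'));

-- Assumed: users may read their own roles
CREATE POLICY "Users can read own roles"
ON user_roles FOR SELECT
USING (user_id = auth.uid());
```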
### Lock Bypassing
**Threat:** User modifies submission while locked by another moderator.
**Mitigation:**
- RLS policies check lock state on UPDATE
- Backend validation in `validate_moderation_action()`
- Frontend disables action buttons for submissions locked by another moderator
### Rate Limit Abuse
**Threat:** User spams approve/reject actions to overwhelm system.
**Mitigation:**
- Server-side rate limiting (10 actions/minute)
- Client-side debouncing on filters
- Action buttons disabled during processing
## Testing Security
See `tests/integration/moderation-security.test.ts` for comprehensive security tests:
- ✅ Role validation
- ✅ Lock enforcement
- ✅ Rate limiting
- ✅ Audit logging
- ✅ XSS protection (unit tests in `tests/unit/sanitize.test.ts`)
**Run Security Tests:**
```bash
npm run test:integration -- moderation-security
npm run test:unit -- sanitize
```
## Monitoring & Alerts
**Key Metrics to Monitor:**
1. **Failed validation attempts** - May indicate attack
2. **Rate limit violations** - May indicate abuse
3. **Expired locks** - May indicate abandoned reviews
4. **Audit log anomalies** - Unusual action patterns
**Query Audit Log:**
```sql
-- Recent moderation actions
SELECT * FROM moderation_audit_log
ORDER BY created_at DESC
LIMIT 100;

-- Actions by moderator
SELECT action, COUNT(*) AS count
FROM moderation_audit_log
WHERE moderator_id = '<uuid>'
GROUP BY action;

-- Rate limit violations (proxy: high action density)
SELECT moderator_id, COUNT(*) AS action_count
FROM moderation_audit_log
WHERE created_at > NOW() - INTERVAL '1 minute'
GROUP BY moderator_id
HAVING COUNT(*) > 10;
```
## Incident Response
If a security issue is detected:
1. **Immediate:** Revoke affected user's role in `user_roles` table
2. **Investigate:** Query `moderation_audit_log` for suspicious activity
3. **Rollback:** Reset affected submissions to pending if needed
4. **Notify:** Alert other moderators via admin panel
5. **Document:** Record incident details for review
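Example SQL for steps 1 and 3, assuming the `user_roles` columns are named `user_id` and `role`:
```sql
-- Step 1: revoke the affected user's elevated roles
DELETE FROM user_roles
WHERE user_id = '<affected-user-uuid>'
  AND role IN ('moderator', 'admin');

-- Step 3: reset affected submissions back to pending and clear any lock
UPDATE content_submissions
SET status = 'pending', assigned_to = NULL, locked_until = NULL
WHERE id IN ('<submission-uuid>');
```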
## Future Enhancements
- [ ] MFA requirement for delete/reverse actions
- [ ] IP-based rate limiting (in addition to user-based)
- [ ] Anomaly detection on audit log patterns
- [ ] Automated lock expiry notifications
- [ ] Scheduled security audits via cron jobs