Add secret management guide, client-side performance monitoring, and search accessibility enhancements

- Introduced a comprehensive Secret Management Guide detailing best practices, secret classification, development setup, production management, rotation procedures, and emergency protocols.
- Implemented a client-side performance monitoring script to track various metrics including page load performance, paint metrics, layout shifts, and memory usage.
- Enhanced search accessibility with keyboard navigation support for search results, ensuring compliance with WCAG standards and improving user experience.
pacnpal · 2025-12-23 16:41:42 -05:00 · commit edcd8f2076 (parent ae31e889d7) · 155 changed files with 22046 additions and 4645 deletions

---
# ThrillWiki Deployment Guide
This document outlines deployment strategies, build processes, and infrastructure considerations for the ThrillWiki Django + HTMX application.
## Architecture Overview
ThrillWiki is a **Django monolith** with HTMX for dynamic interactivity. There is no separate frontend build process - templates and static assets are served directly by Django.
```mermaid
graph TB
A[Source Code] --> B[Django Application]
B --> C[Static Files Collection]
C --> D[Docker Container]
D --> E[Production Deployment]
subgraph "Django Application"
B1[Python Dependencies]
B2[Database Migrations]
B3[HTMX Templates]
end
```
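Because there is no client-side build, dynamic behaviour comes from ordinary Django views that return either a full page or an HTMX partial. A minimal sketch of that pattern (the view, template paths, and model are illustrative assumptions, not the project's actual code):
```python
# Illustrative sketch only; view name, template paths, and model are assumptions.
from django.shortcuts import render


def park_list(request):
    """Serve the full page normally, or just the fragment when HTMX asks for it."""
    context = {"parks": []}  # e.g. Park.objects.all() in the real application
    if request.headers.get("HX-Request"):
        # HTMX sets the HX-Request header; return only the partial to swap in.
        return render(request, "parks/_park_list.html", context)
    return render(request, "parks/park_list.html", context)
```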
## Development Environment
### Prerequisites
- Python 3.13+ with UV package manager
- PostgreSQL 14+ with PostGIS extension
- Redis 6+ (for caching and sessions)
### Local Development Setup
```bash
# Clone repository
git clone <repository-url>
cd thrillwiki
# Install dependencies
cd backend
uv sync --frozen
# Configure environment
cp .env.example .env
# Edit .env with your settings
# Database setup
uv run manage.py migrate
uv run manage.py collectstatic --noinput
# Start development server
uv run manage.py runserver
```
## Build Strategies
### Containerized Deployment (Recommended)
#### Multi-stage Dockerfile
```dockerfile
# backend/Dockerfile
FROM python:3.13-slim as builder
WORKDIR /app
# Install system dependencies for GeoDjango
RUN apt-get update && apt-get install -y \
binutils libproj-dev gdal-bin libgdal-dev \
libpq-dev gcc \
&& rm -rf /var/lib/apt/lists/*
# Install UV
RUN pip install uv
# Copy dependency files
COPY pyproject.toml uv.lock ./
# Install dependencies
RUN uv sync --frozen --no-dev
FROM python:3.13-slim as runtime
WORKDIR /app
# Install runtime dependencies for GeoDjango
RUN apt-get update && apt-get install -y \
libpq5 gdal-bin libgdal32 libgeos-c1v5 libproj25 \
&& rm -rf /var/lib/apt/lists/*
# Copy virtual environment from builder
COPY --from=builder /app/.venv /app/.venv
ENV PATH="/app/.venv/bin:$PATH"
# Copy application code
COPY . .
# Collect static files
RUN python manage.py collectstatic --noinput
# Create logs directory
RUN mkdir -p logs
EXPOSE 8000
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"]
```
#### Dockerfile for Frontend
```dockerfile
# frontend/Dockerfile
FROM node:18-alpine as builder
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm
RUN pnpm install --frozen-lockfile
COPY . .
RUN pnpm run build
FROM nginx:alpine as runtime
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# Run with gunicorn
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "4"]
```
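The worker count baked into the `CMD` is only a starting point. If you prefer to keep tuning out of the image, Gunicorn can read its settings from a `gunicorn.conf.py` instead; a sketch under that assumption (the file name and values are not taken from the project):
```python
# gunicorn.conf.py (hypothetical); the image above passes equivalent flags on the CLI instead.
import multiprocessing

bind = "0.0.0.0:8000"
workers = multiprocessing.cpu_count() * 2 + 1  # common starting heuristic; tune under real load
timeout = 30
accesslog = "-"  # log to stdout so Docker captures access logs
errorlog = "-"
```
With such a file baked into the image, the command line shrinks to `gunicorn config.wsgi:application -c gunicorn.conf.py`.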
#### Docker Compose for Development
```yaml
# docker-compose.dev.yml
version: '3.8'
services:
db:
image: postgis/postgis:15-3.3
environment:
POSTGRES_DB: thrillwiki
POSTGRES_USER: thrillwiki
      POSTGRES_PASSWORD: password  # must match the credentials in DATABASE_URL below
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
ports:
- "6379:6379"
web:
build:
context: ./backend
dockerfile: Dockerfile.dev
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
- ./shared/media:/app/media
environment:
- DEBUG=1
- DATABASE_URL=postgis://thrillwiki:password@db:5432/thrillwiki
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
command: python manage.py runserver 0.0.0.0:8000
celery:
build:
context: ./backend
dockerfile: Dockerfile.dev
volumes:
- ./backend:/app
environment:
- DATABASE_URL=postgis://thrillwiki:password@db:5432/thrillwiki
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
command: celery -A config.celery worker -l info
volumes:
postgres_data:
```
#### Docker Compose for Production
```yaml
# docker-compose.prod.yml
version: '3.8'
services:
db:
image: postgis/postgis:15-3.3
environment:
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
  redis:
image: redis:7-alpine
restart: unless-stopped
web:
build:
context: ./backend
dockerfile: Dockerfile
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - SECRET_KEY=${SECRET_KEY}
    volumes:
      - static_files:/app/staticfiles
      - ./shared/media:/app/media
    depends_on:
      - db
- redis
restart: unless-stopped
celery:
build:
context: ./backend
dockerfile: Dockerfile
environment:
- DATABASE_URL=${DATABASE_URL}
- REDIS_URL=${REDIS_URL}
- SECRET_KEY=${SECRET_KEY}
depends_on:
- db
- redis
command: celery -A config.celery worker -l info
restart: unless-stopped
nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
- static_files:/usr/share/nginx/html/static
- ./shared/media:/usr/share/nginx/html/media
depends_on:
- web
restart: unless-stopped
volumes:
  postgres_data:
static_files:
```
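Both compose files start the worker with `celery -A config.celery worker`, which implies a conventional Celery bootstrap module. A typical `config/celery.py` looks roughly like the following sketch (assumed layout, not verified against the repository):
```python
# config/celery.py: conventional Celery + Django bootstrap (sketch)
import os

from celery import Celery

# Fall back to the production settings module named in the environment section below.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.django.production")

app = Celery("thrillwiki")
app.config_from_object("django.conf:settings", namespace="CELERY")  # read CELERY_* settings
app.autodiscover_tasks()  # find tasks.py modules in installed apps
```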
### Nginx Configuration
```nginx
# nginx/nginx.conf
upstream django {
server web:8000;
}
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name yourdomain.com www.yourdomain.com;
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers off;
# Security headers
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Static files
location /static/ {
alias /usr/share/nginx/html/static/;
expires 1y;
add_header Cache-Control "public, immutable";
}
# Media files
location /media/ {
alias /usr/share/nginx/html/media/;
expires 1M;
add_header Cache-Control "public";
}
# Django application
location / {
proxy_pass http://django;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# HTMX considerations
proxy_set_header HX-Request $http_hx_request;
proxy_set_header HX-Current-URL $http_hx_current_url;
}
# Health check endpoint
location /api/v1/health/simple/ {
proxy_pass http://django;
proxy_set_header Host $http_host;
access_log off;
}
}
```
## CI/CD Pipeline
### GitHub Actions Workflow
```yaml
# .github/workflows/deploy.yml
name: Deploy ThrillWiki
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgis/postgis:15-3.3
env:
POSTGRES_PASSWORD: postgres
options: >-
          --health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
redis:
image: redis:7-alpine
ports:
- 6379:6379
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.13'
- name: Install UV
run: pip install uv
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.cache/uv
key: ${{ runner.os }}-uv-${{ hashFiles('backend/uv.lock') }}
- name: Install dependencies
run: |
cd backend
uv sync --frozen
- name: Run tests
run: |
cd backend
uv run manage.py test
env:
DATABASE_URL: postgis://postgres:postgres@localhost:5432/postgres
REDIS_URL: redis://localhost:6379/0
SECRET_KEY: test-secret-key
DEBUG: "1"
- name: Run linting
run: |
cd backend
uv run ruff check .
uv run black --check .
build:
needs: test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v4
- name: Build Docker image
run: |
docker build -t thrillwiki-web ./backend
- name: Push to registry
run: |
# Push to your container registry
# docker push your-registry/thrillwiki-web:${{ github.sha }}
deploy:
needs: build
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- name: Deploy to production
run: |
# Deploy using your preferred method
# SSH, Kubernetes, AWS ECS, etc.
```
## Environment Configuration
### Required Environment Variables
#### Backend (.env)
```bash
# Django Settings
DEBUG=0
SECRET_KEY=your-production-secret-key
ALLOWED_HOSTS=yourdomain.com,www.yourdomain.com
CSRF_TRUSTED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
DJANGO_SETTINGS_MODULE=config.django.production
# Database
DATABASE_URL=postgis://user:password@host:port/database
# Redis
REDIS_URL=redis://host:port/0
# File Storage
MEDIA_ROOT=/app/media
STATIC_ROOT=/app/staticfiles
# Email
EMAIL_BACKEND=django.core.mail.backends.smtp.EmailBackend
EMAIL_HOST=smtp.yourmailprovider.com
EMAIL_PORT=587
EMAIL_USE_TLS=True
EMAIL_HOST_USER=your-email@yourdomain.com
EMAIL_HOST_PASSWORD=your-email-password
# Cloudflare Images
CLOUDFLARE_IMAGES_ACCOUNT_ID=your-account-id
CLOUDFLARE_IMAGES_API_TOKEN=your-api-token
CLOUDFLARE_IMAGES_ACCOUNT_HASH=your-account-hash
# Sentry (optional)
SENTRY_DSN=your-sentry-dsn
SENTRY_ENVIRONMENT=production
```
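How these variables are consumed is up to the settings modules; a plain `os.environ` sketch of the usual pattern (illustrative only, the project may use a helper such as django-environ):
```python
# Illustrative sketch of reading the variables above in config/django/production.py
import os

SECRET_KEY = os.environ["SECRET_KEY"]  # fail fast if the secret is missing
DEBUG = os.environ.get("DEBUG", "0") == "1"
ALLOWED_HOSTS = [h for h in os.environ.get("ALLOWED_HOSTS", "").split(",") if h]
CSRF_TRUSTED_ORIGINS = [o for o in os.environ.get("CSRF_TRUSTED_ORIGINS", "").split(",") if o]
```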
## Performance Optimization
### Database Optimization
```python
# backend/config/django/production.py
DATABASES = {
'default': {
'ENGINE': 'django.contrib.gis.db.backends.postgis',
'CONN_MAX_AGE': 60, # Keep connections alive for 60 seconds
'OPTIONS': {
'connect_timeout': 10,
'options': '-c statement_timeout=30000', # 30 second query timeout
}
}
}
```
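Deployments supply a single `DATABASE_URL` with the `postgis://` scheme, so the block above is typically derived from it rather than written out by hand. If dj-database-url is used (an assumption, not confirmed by the repository), the equivalent is:
```python
# Assumed sketch: building DATABASES from DATABASE_URL with dj-database-url
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(
        conn_max_age=60,          # mirrors CONN_MAX_AGE above
        conn_health_checks=True,  # Django 4.1+: verify persistent connections before reuse
    )
}
```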
### Redis Caching
```python
# Caching configuration is in config/django/production.py
# Multiple cache backends for different purposes:
# - default: General caching
# - sessions: Session storage
# - api: API response caching
```
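For reference, a Redis-backed layout of that shape might look like the sketch below; the authoritative version is the one in `config/django/production.py`.
```python
# Illustrative sketch of a multi-backend cache layout backed by Redis
import os

REDIS_URL = os.environ.get("REDIS_URL", "redis://127.0.0.1:6379/0")

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": REDIS_URL,
        "KEY_PREFIX": "thrillwiki",
    },
    "sessions": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": REDIS_URL,
        "KEY_PREFIX": "thrillwiki-sessions",
    },
    "api": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": REDIS_URL,
        "KEY_PREFIX": "thrillwiki-api",
    },
}

# Keep sessions in their dedicated cache alias
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "sessions"
```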
### Static Files with WhiteNoise
```python
# backend/config/django/production.py
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```
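WhiteNoise also needs its middleware directly after Django's `SecurityMiddleware`; only the relevant entries are shown in this sketch (the full `MIDDLEWARE` list lives in the project settings):
```python
# Sketch: WhiteNoise middleware placement
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of the project's middleware ...
]
```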
## Monitoring and Logging
### Health Check Endpoints
| Endpoint | Purpose | Use Case |
|----------|---------|----------|
| `/api/v1/health/` | Comprehensive health check | Monitoring dashboards |
| `/api/v1/health/simple/` | Simple OK/ERROR | Load balancer health checks |
| `/api/v1/health/performance/` | Performance metrics | Debug mode only |
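
The simple endpoint is what load balancers and the nginx health-check location above rely on; conceptually it only has to prove that the application and the database respond. A hypothetical sketch (the real views live in the project and may check more):
```python
# Hypothetical sketch of a simple OK/ERROR health view
from django.db import connection
from django.http import HttpResponse


def health_simple(request):
    """Return 200 "OK" when the database answers, 503 "ERROR" otherwise."""
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
        return HttpResponse("OK")
    except Exception:
        return HttpResponse("ERROR", status=503)
```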
### Logging Configuration
Production logging uses JSON format for log aggregation:
```python
# backend/config/django/production.py
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'json',
},
'file': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/django.log',
'maxBytes': 1024 * 1024 * 15, # 15MB
'backupCount': 10,
'formatter': 'json',
},
},
'root': {
'handlers': ['file'],
},
}
```
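The handlers above reference a `json` formatter defined alongside them; one common way to provide it is python-json-logger, which is an assumption here (the project may ship its own formatter class):
```python
# Assumed sketch: a "json" formatter provided by python-json-logger
LOGGING["formatters"] = {
    "json": {
        "()": "pythonjsonlogger.jsonlogger.JsonFormatter",
        "format": "%(asctime)s %(levelname)s %(name)s %(message)s",
    },
}
```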
### Infrastructure Monitoring
- Use Prometheus + Grafana for metrics
- Implement health check endpoints
- Set up log aggregation (ELK stack or similar)
- Monitor database performance
- Track API response times
### Sentry Integration
```python
# Sentry is configured in config/django/production.py
# Enable by setting SENTRY_DSN environment variable
```
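Under the hood this is the standard sentry-sdk Django integration; a sketch of an initialisation gated on the environment variable (the parameter values are examples, not the project's):
```python
# Sketch: initialise Sentry only when SENTRY_DSN is set
import os

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

if os.environ.get("SENTRY_DSN"):
    sentry_sdk.init(
        dsn=os.environ["SENTRY_DSN"],
        environment=os.environ.get("SENTRY_ENVIRONMENT", "production"),
        integrations=[DjangoIntegration()],
        traces_sample_rate=0.1,  # sample 10% of transactions
        send_default_pii=False,  # keep user PII out of events by default
    )
```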
## Security Considerations
### Production Security Checklist
- [ ] `DEBUG=False` in production
- [ ] `SECRET_KEY` is unique and secure
- [ ] `ALLOWED_HOSTS` properly configured
- [ ] HTTPS enforced with SSL certificates
- [ ] Security headers configured (HSTS, CSP, etc.)
- [ ] Database credentials secured
- [ ] Secret keys rotated regularly
- [ ] Redis password configured (if exposed)
- [ ] CORS properly configured
- [ ] Rate limiting enabled
- [ ] File upload validation
- [ ] SQL injection protection (Django ORM)
- [ ] XSS protection enabled
- [ ] CSRF protection active
### Security Headers
```python
# backend/config/django/production.py
SECURE_SSL_REDIRECT = True
SECURE_HSTS_SECONDS = 31536000 # 1 year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
X_FRAME_OPTIONS = 'DENY'
# CORS for API
CORS_ALLOWED_ORIGINS = [
"https://yourdomain.com",
"https://www.yourdomain.com",
]
SECURE_CONTENT_TYPE_NOSNIFF = True
```
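Because nginx terminates TLS and forwards `X-Forwarded-Proto` (see the proxy headers in the nginx configuration above), Django must be told to trust that header or `SECURE_SSL_REDIRECT` will redirect in a loop. A sketch of the related settings, safe only when nginx is the sole entry point:
```python
# Trust the scheme/host headers set by the nginx proxy (sketch)
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
USE_X_FORWARDED_HOST = True
```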
## Backup and Recovery
### Database Backup Strategy
```bash
#!/bin/bash
# Automated backup script
pg_dump $DATABASE_URL | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz
aws s3 cp backup_*.sql.gz s3://your-backup-bucket/database/
```
### Media Files Backup
```bash
# Sync media files to S3
aws s3 sync ./shared/media/ s3://your-media-bucket/media/ --delete
```
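To keep the backup bucket from growing without bound, a small retention job can prune dumps older than a cutoff. A hedged boto3 sketch (bucket name and prefix follow the commands above; the 30-day window is an example):
```python
# Hypothetical retention script: delete database dumps older than 30 days
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "your-backup-bucket"
CUTOFF = datetime.now(timezone.utc) - timedelta(days=30)

s3 = boto3.client("s3")
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix="database/"):
    for obj in page.get("Contents", []):
        if obj["LastModified"] < CUTOFF:
            s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
```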
## Scaling Strategies
### Horizontal Scaling
- Use load balancer (nginx, AWS ALB, etc.)
- Database read replicas for read-heavy workloads
- CDN for static assets (Cloudflare, CloudFront)
- Redis cluster for session/cache scaling
- Multiple Gunicorn workers per container
### Vertical Scaling
- Database connection pooling (pgBouncer)
- Query optimization with select_related/prefetch_related (see the sketch below)
- Memory usage optimization
- Background task offloading to Celery
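
For the query-optimization point, the usual win is collapsing N+1 query patterns with `select_related` and `prefetch_related`; a sketch with hypothetical model and relation names:
```python
# Hypothetical models and relations, used purely for illustration
from parks.models import Ride  # assumed app and model names

# Accessing ride.park in a loop here would issue one query per ride (N+1)
rides = list(Ride.objects.all())

# Better: fetch the FK via a JOIN and the M2M in one extra query
rides = list(
    Ride.objects
    .select_related("park")             # foreign key resolved in the same query
    .prefetch_related("manufacturers")  # many-to-many fetched in one extra query
)
```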
## Troubleshooting Guide
### Common Issues
1. **Static files not loading**
- Run `python manage.py collectstatic`
- Check nginx static file configuration
- Verify WhiteNoise settings
2. **Database connection errors**
- Verify DATABASE_URL format
- Check firewall rules
- Verify PostGIS extension is installed
3. **CORS errors**
- Check CORS_ALLOWED_ORIGINS setting
- Verify CSRF_TRUSTED_ORIGINS
4. **Memory issues**
- Monitor with `docker stats`
- Optimize Gunicorn worker count
- Check for query inefficiencies
### Debug Commands
```bash
# Check Django configuration
cd backend
uv run manage.py check --deploy
# Database shell
uv run manage.py dbshell
# Django shell
uv run manage.py shell
# Validate settings
uv run manage.py validate_settings
```
---
This deployment guide provides a comprehensive approach to deploying the ThrillWiki Django + HTMX application while maintaining security, performance, and scalability.