Add secret management guide, client-side performance monitoring, and search accessibility enhancements

- Introduced a comprehensive Secret Management Guide detailing best practices, secret classification, development setup, production management, rotation procedures, and emergency protocols.
- Implemented a client-side performance monitoring script that tracks page load performance, paint metrics, layout shifts, and memory usage.
- Enhanced search accessibility with keyboard navigation support for search results, ensuring compliance with WCAG standards and improving user experience.
pacnpal
2025-12-23 16:41:42 -05:00
parent ae31e889d7
commit edcd8f2076
155 changed files with 22046 additions and 4645 deletions

# ADR-004: Caching Strategy
## Status
Accepted
## Context
ThrillWiki serves data that is:
- Read-heavy (browsing parks and rides)
- Moderately updated (user contributions, moderation)
- Geographically queried (map views, location searches)
We needed a caching strategy that would:
- Reduce database load for common queries
- Provide fast response times for users
- Handle cache invalidation correctly
- Support different caching needs (sessions, API, geographic)
## Decision
We implemented a **Multi-Layer Caching Strategy** using Redis with multiple cache backends for different purposes.
### Cache Architecture
```
┌─────────────────────────────────────────────────────────┐
│                       Application                       │
└─────────────────────────────────────────────────────────┘
         │                   │                   │
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│  Default Cache  │ │  Session Cache  │ │    API Cache    │
│ (General data)  │ │ (User sessions) │ │ (API responses) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
         │                   │                   │
         └───────────────────┼───────────────────┘
                    ┌─────────────────┐
                    │      Redis      │
                    │  (with pools)   │
                    └─────────────────┘
```
### Cache Configuration
```python
# backend/config/django/production.py
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": redis_url,
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "PARSER_CLASS": "redis.connection.HiredisParser",
            "COMPRESSOR": "django_redis.compressors.zlib.ZlibCompressor",
            "CONNECTION_POOL_CLASS_KWARGS": {
                "max_connections": 100,
                "timeout": 20,
            },
        },
        "KEY_PREFIX": "thrillwiki",
    },
    "sessions": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": redis_sessions_url,
        "KEY_PREFIX": "sessions",
    },
    "api": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": redis_api_url,
        "OPTIONS": {
            "COMPRESSOR": "django_redis.compressors.zlib.ZlibCompressor",
        },
        "KEY_PREFIX": "api",
    },
}
```
### Caching Layers
| Layer | Purpose | TTL | Invalidation |
|-------|---------|-----|--------------|
| QuerySet | Expensive database queries | 1 hour | On model save |
| API Response | Serialized API responses | 30 min | On data change |
| Geographic | Map data and location queries | 30 min | On location update |
| Template Fragment | Rendered template parts | 15 min | On context change |
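
To make the TTL column concrete, the sketch below applies the table's timeouts through Django's low-level cache API against the backends configured above. It is an illustration only: `Park.objects.in_bounds()` and the import path for `Park` are assumed for the example, not taken from the codebase.

```python
# Illustrative sketch: layer TTLs from the table, applied via Django's low-level cache API.
from django.core.cache import caches

from apps.parks.models import Park  # assumed import path for the project's Park model

QUERYSET_TTL = 60 * 60      # QuerySet layer: 1 hour
GEO_TTL = 60 * 30           # Geographic / API Response layers: 30 minutes
FRAGMENT_TTL = 60 * 15      # Template Fragment layer: 15 minutes


def get_parks_in_bounds_cached(min_lat, min_lng, max_lat, max_lng, zoom):
    """Geographic layer: serve a bounding-box query from cache for 30 minutes."""
    cache = caches["default"]
    key = f"geo:bounds:{min_lat}:{min_lng}:{max_lat}:{max_lng}:z{zoom}"
    parks = cache.get(key)
    if parks is None:
        # `in_bounds` stands in for whatever geographic manager method the project uses.
        parks = list(Park.objects.in_bounds(min_lat, min_lng, max_lat, max_lng))
        cache.set(key, parks, GEO_TTL)
    return parks
```

The key format here anticipates the naming convention documented below, which keeps geographic entries easy to invalidate with a `geo:*` pattern.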
## Consequences
### Benefits
1. **Reduced Database Load**: Common queries served from cache
2. **Fast Response Times**: Sub-millisecond cache hits
3. **Scalability**: Cache can be distributed across Redis cluster
4. **Flexibility**: Different TTLs for different data types
5. **Compression**: Reduced memory usage with zlib compression
### Trade-offs
1. **Cache Invalidation**: Must carefully invalidate on data changes
2. **Memory Usage**: Redis memory must be monitored
3. **Consistency**: Potential for stale data during TTL window
4. **Complexity**: Multiple cache backends to manage
### Cache Key Naming Convention
```
{prefix}:{entity_type}:{identifier}:{context}
Examples:
thrillwiki:park:123:detail
thrillwiki:park:123:rides
api:parks:list:page1:filter_operating
geo:bounds:40.7:-74.0:41.0:-73.5:z10
```
### Cache Invalidation Patterns
```python
# Model signal for cache invalidation
from django.db.models.signals import post_save
from django.dispatch import receiver


@receiver(post_save, sender=Park)
def invalidate_park_cache(sender, instance, **kwargs):
    cache_service = EnhancedCacheService()

    # Invalidate specific park cache
    cache_service.invalidate_model_cache('park', instance.id)

    # Invalidate list caches
    cache_service.invalidate_pattern('api:parks:list:*')

    # Invalidate geographic caches if location changed
    if instance.location_changed:
        cache_service.invalidate_pattern('geo:*')
```
## Alternatives Considered
### Database-Only (No Caching)
**Rejected because:**
- High database load for read-heavy traffic
- Slower response times
- Database as bottleneck for scaling
### Memcached
**Rejected because:**
- Less feature-rich than Redis
- No data persistence
- No built-in data structures
### Application-Level Caching Only
**Rejected because:**
- Not shared across application instances
- Memory per-instance overhead
- Cache cold on restart
## Implementation Details
### EnhancedCacheService
```python
# backend/apps/core/services/enhanced_cache_service.py
from django.core.cache import caches


class EnhancedCacheService:
    """Comprehensive caching service with multiple cache backends."""

    def __init__(self):
        # Bind the configured backends; only "default" is exercised in this excerpt.
        self.default_cache = caches["default"]

    def cache_queryset(self, cache_key, queryset_func, timeout=3600, **kwargs):
        """Cache an expensive queryset result, computing it on a miss."""
        cached = self.default_cache.get(cache_key)
        if cached is None:
            result = queryset_func(**kwargs)
            self.default_cache.set(cache_key, result, timeout)
            return result
        return cached

    def invalidate_pattern(self, pattern):
        """Invalidate cache keys matching a glob-style pattern (django-redis only)."""
        if hasattr(self.default_cache, 'delete_pattern'):
            return self.default_cache.delete_pattern(pattern)
```
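A typical call site wraps the queryset in a callable and lets the service decide whether to hit the database; the key and query below are illustrative.

```python
# Hypothetical usage of EnhancedCacheService; the key follows the naming convention above.
cache_service = EnhancedCacheService()

operating_parks = cache_service.cache_queryset(
    "thrillwiki:park:list:operating",
    lambda: list(Park.objects.operating()),
    timeout=3600,  # QuerySet layer TTL
)
```

`invalidate_pattern` relies on `delete_pattern`, a django-redis extension that scans Redis for matching keys, so broad patterns such as `geo:*` are comparatively expensive and best reserved for infrequent events like location changes.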
### Cache Warming
```python
# Proactive cache warming for common queries
class CacheWarmer:
    """Context manager for batch cache warming."""

    def __init__(self):
        self.cache_service = EnhancedCacheService()

    def warm_popular_parks(self):
        parks = Park.objects.operating()[:100]
        for park in parks:
            self.cache_service.warm_cache(
                f'park:{park.id}:detail',
                # Bind `park` as a default so each callable serializes its own park.
                lambda park=park: ParkSerializer(park).data,
                timeout=3600,
            )
```
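Warming is typically triggered from a scheduled job. A minimal management-command sketch (the command name and module path are hypothetical) could look like this:

```python
# Hypothetical command: backend/apps/core/management/commands/warm_cache.py
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Pre-populate caches for the most commonly requested data."

    def handle(self, *args, **options):
        warmer = CacheWarmer()  # assumes CacheWarmer takes no constructor arguments
        warmer.warm_popular_parks()
        self.stdout.write(self.style.SUCCESS("Cache warming complete"))
```

Running it from cron or a Celery beat schedule keeps popular park pages hot without waiting for the first visitor after an invalidation.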
### Cache Monitoring
```python
from django_redis import get_redis_connection


class CacheMonitor:
    """Monitor cache performance and statistics."""

    def get_cache_stats(self):
        # Raw Redis connection for the "default" alias, via the django-redis helper.
        redis_client = get_redis_connection("default")
        info = redis_client.info()
        hits = info.get('keyspace_hits', 0)
        misses = info.get('keyspace_misses', 0)
        return {
            'used_memory': info.get('used_memory_human'),
            'hit_rate': hits / (hits + misses) * 100 if hits + misses > 0 else 0,
        }
```
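One way to consume these statistics is a periodic health check that warns when the hit rate drops; the threshold and logger name below are illustrative, not project conventions.

```python
import logging

logger = logging.getLogger("thrillwiki.cache")  # hypothetical logger name


def check_cache_health(minimum_hit_rate=80.0):
    """Log a warning when the Redis hit rate falls below the given percentage."""
    stats = CacheMonitor().get_cache_stats()
    if stats["hit_rate"] < minimum_hit_rate:
        logger.warning(
            "Cache hit rate %.1f%% is below %.1f%% (used memory: %s)",
            stats["hit_rate"], minimum_hit_rate, stats["used_memory"],
        )
    return stats
```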
## References
- [Django Redis](https://github.com/jazzband/django-redis)
- [Redis Documentation](https://redis.io/documentation)
- [Cache Invalidation Strategies](https://en.wikipedia.org/wiki/Cache_invalidation)