Remove deprecated scripts and assets related to ThrillWiki deployment and validation

- Deleted the systemd service diagnosis script `test-systemd-service-diagnosis.sh`
- Removed the validation fix test script `test-validation-fix.sh`
- Eliminated the simple validation test script `validate-step5b-simple.sh`
- Removed the GitHub webhook listener script `webhook-listener.py`
- Deleted various placeholder images from the static assets
- Removed the ThrillWiki database file `thrillwiki.db`
This commit is contained in:
pacnpal
2025-09-24 21:21:50 -04:00
parent 82cbdecc4c
commit 4373d18176
186 changed files with 0 additions and 43099 deletions

@@ -1,94 +0,0 @@
# ThrillWiki Development Scripts
## Development Server Script
The `dev_server.sh` script sets up all necessary environment variables and starts the Django development server with proper configuration.
### Usage
```bash
# From the project root directory
./scripts/dev_server.sh
# Or from anywhere
/path/to/thrillwiki_django_no_react/scripts/dev_server.sh
```
### What the script does
1. **Environment Setup**: Sets all required environment variables for local development
2. **Directory Creation**: Creates necessary directories (logs, profiles, media, etc.)
3. **Database Migrations**: Runs pending migrations automatically
4. **Superuser Creation**: Creates a development superuser (admin/admin) if none exists
5. **Static Files**: Collects static files for the application
6. **Tailwind CSS**: Builds Tailwind CSS if npm is available
7. **System Checks**: Runs Django system checks
8. **Server Start**: Starts the Django development server on `http://localhost:8000`
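The sequence above condenses to roughly the following shell function (a simplified sketch, not the actual script; the Tailwind command in particular is an assumption):

```bash
# Simplified sketch of dev_server.sh's main steps (illustrative only)
run_dev_server() {
    mkdir -p logs profiles media                     # 2. directory creation
    uv run manage.py migrate                         # 3. pending migrations
    # 4. dev superuser (admin/admin) is created here if none exists
    uv run manage.py collectstatic --noinput         # 5. static files
    command -v npm >/dev/null && npm run build:css   # 6. Tailwind, if npm exists (command name assumed)
    uv run manage.py check                           # 7. system checks
    uv run manage.py runserver localhost:8000        # 8. start the server
}
```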
### Environment Variables Set
The script automatically sets these environment variables:
- `DJANGO_SETTINGS_MODULE=config.django.local`
- `DEBUG=True`
- `SECRET_KEY=<generated-dev-key>`
- `ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0`
- `DATABASE_URL=postgis://thrillwiki_user:thrillwiki_pass@localhost:5432/thrillwiki_db`
- `CACHE_URL=locmemcache://`
- `CORS_ALLOW_ALL_ORIGINS=True`
- GeoDjango library paths for macOS
- And many more...
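In shell terms, the setup amounts to exporting these variables before Django starts (a minimal sketch using the values listed above; the `SECRET_KEY` here is a placeholder, not the generated one):

```bash
# Minimal sketch of the environment dev_server.sh exports
export DJANGO_SETTINGS_MODULE=config.django.local
export DEBUG=True
export SECRET_KEY=dev-only-insecure-key   # placeholder; never use in production
export ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0
export DATABASE_URL=postgis://thrillwiki_user:thrillwiki_pass@localhost:5432/thrillwiki_db
export CACHE_URL=locmemcache://
export CORS_ALLOW_ALL_ORIGINS=True
```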
### Prerequisites
1. **PostgreSQL with PostGIS**: Make sure PostgreSQL with PostGIS extension is running
2. **Database**: Create the database `thrillwiki_db` with user `thrillwiki_user`
3. **uv**: The script uses `uv` to run Django commands
4. **Virtual Environment**: The script will activate `.venv` if it exists
### Database Setup
If you need to set up the database:
```bash
# Install PostgreSQL and PostGIS (macOS with Homebrew)
brew install postgresql postgis
# Start PostgreSQL
brew services start postgresql
# Create database and user
psql postgres -c "CREATE USER thrillwiki_user WITH PASSWORD 'thrillwiki_pass';"
psql postgres -c "CREATE DATABASE thrillwiki_db OWNER thrillwiki_user;"
psql -d thrillwiki_db -c "CREATE EXTENSION postgis;"
psql -d thrillwiki_db -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_db TO thrillwiki_user;"
```
### Access Points
Once the server is running, you can access:
- **Main Application**: http://localhost:8000
- **Admin Interface**: http://localhost:8000/admin/ (admin/admin)
- **Django Silk Profiler**: http://localhost:8000/silk/
- **API Documentation**: http://localhost:8000/api/docs/
- **API Redoc**: http://localhost:8000/api/redoc/
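Once the server is up, the endpoints above can be smoke-tested with a small helper like this (a sketch; it only prints HTTP status codes and assumes the server is already running):

```bash
# Print the HTTP status code for each development endpoint
check_endpoints() {
    local base="http://localhost:8000"
    local path
    for path in / /admin/ /silk/ /api/docs/ /api/redoc/; do
        printf '%s %s\n' "$(curl -s -o /dev/null -w '%{http_code}' "$base$path")" "$base$path"
    done
}
```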
### Stopping the Server
Press `Ctrl+C` to stop the development server.
### Troubleshooting
1. **Database Connection Issues**: Ensure PostgreSQL is running and the database exists
2. **GeoDjango Library Issues**: Adjust `GDAL_LIBRARY_PATH` and `GEOS_LIBRARY_PATH` if needed
3. **Permission Issues**: Make sure the script is executable with `chmod +x scripts/dev_server.sh`
4. **Virtual Environment**: Ensure your virtual environment is set up with all dependencies
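For item 2, on macOS with Homebrew the GeoDjango library paths can usually be derived from the keg prefixes (a sketch; exact dylib names can vary by GDAL/GEOS version):

```bash
# Point GeoDjango at Homebrew's GDAL/GEOS libraries, if Homebrew is present
set_geodjango_paths() {
    if command -v brew >/dev/null 2>&1; then
        export GDAL_LIBRARY_PATH="$(brew --prefix gdal)/lib/libgdal.dylib"
        export GEOS_LIBRARY_PATH="$(brew --prefix geos)/lib/libgeos_c.dylib"
    fi
}
```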
### Customization
You can modify the script to:
- Change default database credentials
- Adjust library paths for your system
- Add additional environment variables
- Modify the development server port or host
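For the last item, one hypothetical tweak is to read the host and port from the environment instead of hard-coding them (the `DEV_HOST`/`DEV_PORT` names are illustrative, not from the original script):

```bash
# Hypothetical override: take host/port from the environment, with defaults
DEV_HOST="${DEV_HOST:-localhost}"
DEV_PORT="${DEV_PORT:-8000}"
# ...the script's final runserver line would then become:
# uv run manage.py runserver "$DEV_HOST:$DEV_PORT"
echo "Development server would bind to $DEV_HOST:$DEV_PORT"
```

With that change, `DEV_PORT=8080 ./scripts/dev_server.sh` would start the server on port 8080 without editing the script.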

@@ -1 +0,0 @@
[GITHUB-TOKEN-REMOVED]

@@ -1,203 +0,0 @@
# ThrillWiki Automation Service Environment Configuration
# Copy this file to thrillwiki-automation***REMOVED*** and customize for your environment
#
# Security Note: This file should have restricted permissions (600) as it may contain
# sensitive information like GitHub Personal Access Tokens
# ============================================================
# PROJECT CONFIGURATION
# ============================================================
# Base project directory (usually auto-detected)
# PROJECT_DIR=/home/ubuntu/thrillwiki
# Service name for systemd integration
# SERVICE_NAME=thrillwiki
# ============================================================
# GITHUB REPOSITORY CONFIGURATION
# ============================================================
# GitHub repository remote name
# GITHUB_REPO=origin
# Branch to pull from
# GITHUB_BRANCH=main
# GitHub Personal Access Token (PAT) - Required for private repositories
# Generate at: https://github.com/settings/tokens
# Required permissions: repo (Full control of private repositories)
# GITHUB_TOKEN=ghp_your_personal_access_token_here
# GitHub token file location (alternative to GITHUB_TOKEN)
# GITHUB_TOKEN_FILE=/home/ubuntu/thrillwiki/.github-pat
# ============================================================
# AUTOMATION TIMING CONFIGURATION
# ============================================================
# Repository pull interval in seconds (default: 300 = 5 minutes)
# PULL_INTERVAL=300
# Health check interval in seconds (default: 60 = 1 minute)
# HEALTH_CHECK_INTERVAL=60
# Server startup timeout in seconds (default: 120 = 2 minutes)
# STARTUP_TIMEOUT=120
# Restart delay after failure in seconds (default: 10)
# RESTART_DELAY=10
# ============================================================
# LOGGING CONFIGURATION
# ============================================================
# Log directory (default: project_dir/logs)
# LOG_DIR=/home/ubuntu/thrillwiki/logs
# Log file path
# LOG_[AWS-SECRET-REMOVED]proof-automation.log
# Maximum log file size in bytes (default: 10485760 = 10MB)
# MAX_LOG_SIZE=10485760
# Lock file location to prevent multiple instances
# LOCK_FILE=/tmp/thrillwiki-bulletproof.lock
# ============================================================
# DEVELOPMENT SERVER CONFIGURATION
# ============================================================
# Server host address (default: 0.0.0.0 for all interfaces)
# SERVER_HOST=0.0.0.0
# Server port (default: 8000)
# SERVER_PORT=8000
# ============================================================
# DJANGO CONFIGURATION
# ============================================================
# Django settings module
# DJANGO_SETTINGS_MODULE=thrillwiki.settings
# Python path
# PYTHONPATH=/home/ubuntu/thrillwiki
# ============================================================
# ADVANCED CONFIGURATION
# ============================================================
# GitHub authentication script location
# GITHUB_AUTH_[AWS-SECRET-REMOVED]ithub-auth.py
# Enable verbose logging (true/false)
# VERBOSE_LOGGING=false
# Enable debug mode for troubleshooting (true/false)
# DEBUG_MODE=false
# Custom git remote URL (overrides GITHUB_REPO if set)
# CUSTOM_GIT_REMOTE=https://github.com/username/repository.git
# Email notifications for critical failures (requires email configuration)
# NOTIFICATION_EMAIL=admin@example.com
# Maximum consecutive failures before alerting (default: 5)
# MAX_CONSECUTIVE_FAILURES=5
# Enable automatic dependency updates (true/false, default: true)
# AUTO_UPDATE_DEPENDENCIES=true
# Enable automatic migrations on code changes (true/false, default: true)
# AUTO_MIGRATE=true
# Enable automatic static file collection (true/false, default: true)
# AUTO_COLLECTSTATIC=true
# ============================================================
# SECURITY CONFIGURATION
# ============================================================
# GitHub authentication method (token|ssh|https)
# Default: token (uses GITHUB_TOKEN or GITHUB_TOKEN_FILE)
# GITHUB_AUTH_METHOD=token
# SSH key path for git operations (when using ssh auth method)
# SSH_KEY_PATH=/home/ubuntu/.ssh/***REMOVED***
# Git user configuration for commits
# GIT_USER_NAME="ThrillWiki Automation"
# GIT_USER_EMAIL="automation@thrillwiki.local"
# ============================================================
# MONITORING AND HEALTH CHECKS
# ============================================================
# Health check URL to verify server is running
# HEALTH_CHECK_URL=http://localhost:8000/health/
# Health check timeout in seconds
# HEALTH_CHECK_TIMEOUT=30
# Enable system resource monitoring (true/false)
# MONITOR_RESOURCES=true
# Memory usage threshold for warnings (in MB)
# MEMORY_WARNING_THRESHOLD=1024
# CPU usage threshold for warnings (percentage)
# CPU_WARNING_THRESHOLD=80
# Disk usage threshold for warnings (percentage)
# DISK_WARNING_THRESHOLD=90
# ============================================================
# INTEGRATION SETTINGS
# ============================================================
# Webhook integration (if using thrillwiki-webhook service)
# WEBHOOK_INTEGRATION=true
# Slack webhook URL for notifications (optional)
# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook/url
# Discord webhook URL for notifications (optional)
# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/your/webhook/url
# ============================================================
# USAGE EXAMPLES
# ============================================================
# Example 1: Basic setup with GitHub PAT
# GITHUB_TOKEN=ghp_your_token_here
# PULL_INTERVAL=300
# AUTO_MIGRATE=true
# Example 2: Enhanced monitoring setup
# HEALTH_CHECK_INTERVAL=30
# MONITOR_RESOURCES=true
# NOTIFICATION_EMAIL=admin@thrillwiki.com
# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook
# Example 3: Development environment with frequent pulls
# PULL_INTERVAL=60
# DEBUG_MODE=true
# VERBOSE_LOGGING=true
# AUTO_UPDATE_DEPENDENCIES=true
# ============================================================
# INSTALLATION NOTES
# ============================================================
# 1. Copy this file: cp thrillwiki-automation***REMOVED***.example thrillwiki-automation***REMOVED***
# 2. Set secure permissions: chmod 600 thrillwiki-automation***REMOVED***
# 3. Customize the settings above for your environment
# 4. Enable the service: sudo systemctl enable thrillwiki-automation
# 5. Start the service: sudo systemctl start thrillwiki-automation
# 6. Check status: sudo systemctl status thrillwiki-automation
# 7. View logs: sudo journalctl -u thrillwiki-automation -f
# For security, ensure only the ubuntu user can read this file:
# sudo chown ubuntu:ubuntu thrillwiki-automation***REMOVED***
# sudo chmod 600 thrillwiki-automation***REMOVED***

@@ -1,129 +0,0 @@
#!/bin/bash
# ThrillWiki Local CI Start Script
# This script starts the Django development server following project requirements
set -e # Exit on any error
# Configuration
PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
LOG_DIR="$PROJECT_DIR/logs"
PID_FILE="$LOG_DIR/django.pid"
LOG_FILE="$LOG_DIR/django.log"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging function
log() {
echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}
log_success() {
echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}
log_error() {
echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}
# Create logs directory if it doesn't exist
mkdir -p "$LOG_DIR"
# Change to project directory
cd "$PROJECT_DIR"
log "Starting ThrillWiki CI deployment..."
# Check if UV is installed
if ! command -v uv &> /dev/null; then
log_error "UV is not installed. Please install UV first."
exit 1
fi
# Stop any existing Django processes on port 8000
log "Stopping any existing Django processes on port 8000..."
if lsof -ti :8000 >/dev/null 2>&1; then
lsof -ti :8000 | xargs kill -9 2>/dev/null || true
log_success "Stopped existing processes"
else
log "No existing processes found on port 8000"
fi
# Clean up Python cache files
log "Cleaning up Python cache files..."
find . -type d -name "__pycache__" -exec rm -r {} + 2>/dev/null || true
log_success "Cache files cleaned"
# Install/update dependencies
log "Installing/updating dependencies with UV..."
uv sync --no-dev || {
log_error "Failed to sync dependencies"
exit 1
}
# Run database migrations
log "Running database migrations..."
uv run manage.py migrate || {
log_error "Database migrations failed"
exit 1
}
# Collect static files
log "Collecting static files..."
uv run manage.py collectstatic --noinput || {
log_warning "Static file collection failed, continuing anyway"
}
# Start the development server
log "Starting Django development server with Tailwind..."
log "Server will be available at: http://localhost:8000"
log "Press Ctrl+C to stop the server"
# Start server in the background and capture its PID; redirect output to the log file
uv run manage.py tailwind runserver 0.0.0.0:8000 >> "$LOG_FILE" 2>&1 &
SERVER_PID=$!
# Save PID to file
echo $SERVER_PID > "$PID_FILE"
log_success "Django server started with PID: $SERVER_PID"
log "Server logs are being written to: $LOG_FILE"
# Cleanup on exit
cleanup() {
log "Shutting down server..."
if [ -f "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
if kill -0 $PID 2>/dev/null; then
kill $PID
log_success "Server stopped"
fi
rm -f "$PID_FILE"
fi
}
# Register the trap before waiting so Ctrl+C and termination are handled
trap cleanup EXIT INT TERM
# Wait for server to start
sleep 3
# Check if server is running
if kill -0 $SERVER_PID 2>/dev/null; then
log_success "Server is running successfully!"
# Monitor the process
wait $SERVER_PID
else
log_error "Server failed to start"
rm -f "$PID_FILE"
exit 1
fi

@@ -1,108 +0,0 @@
from django.utils import timezone
from parks.models import Park, ParkLocation
from rides.models import Ride, RideModel, RollerCoasterStats
from rides.models import Manufacturer
# Create Cedar Point
park, _ = Park.objects.get_or_create(
name="Cedar Point",
slug="cedar-point",
defaults={
"description": (
"Cedar Point is a 364-acre amusement park located on a Lake Erie "
"peninsula in Sandusky, Ohio."
),
"website": "https://www.cedarpoint.com",
"size_acres": 364,
"opening_date": timezone.datetime(
1870, 1, 1
).date(), # Cedar Point opened in 1870
},
)
# Create location for Cedar Point
location, _ = ParkLocation.objects.get_or_create(
park=park,
defaults={
"street_address": "1 Cedar Point Dr",
"city": "Sandusky",
"state": "OH",
"postal_code": "44870",
"country": "USA",
},
)
# Set coordinates using the helper method
location.set_coordinates(-82.6839, 41.4822) # longitude, latitude
location.save()
# Create Intamin as manufacturer
bm, _ = Manufacturer.objects.get_or_create(
name="Intamin",
slug="intamin",
defaults={
"description": (
"Intamin Amusement Rides is a design company known for creating "
"some of the most thrilling and innovative roller coasters in the world."
),
"website": "https://www.intaminworldwide.com",
},
)
# Create Giga Coaster model
giga_model, _ = RideModel.objects.get_or_create(
name="Giga Coaster",
manufacturer=bm,
defaults={
"description": (
"A roller coaster type characterized by a height between 300 and 399 feet "
"and a complete circuit."
),
"category": "RC", # Roller Coaster
},
)
# Create Millennium Force
millennium, _ = Ride.objects.get_or_create(
name="Millennium Force",
slug="millennium-force",
defaults={
"description": (
"Millennium Force is a steel roller coaster located at Cedar Point "
"amusement park in Sandusky, Ohio. It was built by Intamin of "
"Switzerland and opened on May 13, 2000 as the world's first giga "
"coaster, a class of roller coasters having a height between 300 "
"and 399 feet and a complete circuit."
),
"park": park,
"category": "RC",
"manufacturer": bm,
"ride_model": giga_model,
"status": "OPERATING",
"opening_date": timezone.datetime(2000, 5, 13).date(),
"min_height_in": 48, # 48 inches minimum height
"capacity_per_hour": 1300,
"ride_duration_seconds": 120, # 2 minutes
},
)
# Create stats for Millennium Force
RollerCoasterStats.objects.get_or_create(
ride=millennium,
defaults={
"height_ft": 310,
"length_ft": 6595,
"speed_mph": 93,
"inversions": 0,
"ride_time_seconds": 120,
"track_material": "STEEL",
"roller_coaster_type": "SITDOWN",
"max_drop_height_ft": 300,
"launch_type": "CHAIN",
"train_style": "Open-air stadium seating",
"trains_count": 3,
"cars_per_train": 9,
"seats_per_car": 4,
},
)
print("Initial data created successfully!")

@@ -1,494 +0,0 @@
#!/bin/bash
# ThrillWiki Deployment Script
# Deploys the application to various environments
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../../" && pwd)"
# Configuration
DEPLOY_ENV="production"
DEPLOY_DIR="$PROJECT_ROOT/deploy"
BACKUP_DIR="$PROJECT_ROOT/backups"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
# Function to print colored output
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Function to check if a command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Function to check deployment requirements
check_deployment_requirements() {
print_status "Checking deployment requirements..."
local missing_deps=()
# Check if deployment artifacts exist
if [ ! -d "$DEPLOY_DIR" ]; then
missing_deps+=("deployment_artifacts")
fi
if [ ! -f "$DEPLOY_DIR/manifest.json" ]; then
missing_deps+=("deployment_manifest")
fi
# Check for deployment tools
if [ "$DEPLOY_METHOD" = "docker" ]; then
if ! command_exists docker; then
missing_deps+=("docker")
fi
fi
if [ "$DEPLOY_METHOD" = "rsync" ]; then
if ! command_exists rsync; then
missing_deps+=("rsync")
fi
fi
if [ ${#missing_deps[@]} -ne 0 ]; then
print_error "Missing deployment requirements: ${missing_deps[*]}"
exit 1
fi
print_success "Deployment requirements met!"
}
# Function to create backup
create_backup() {
print_status "Creating backup before deployment..."
mkdir -p "$BACKUP_DIR"
local backup_path="$BACKUP_DIR/backup_$TIMESTAMP"
# Create backup directory
mkdir -p "$backup_path"
# Backup current deployment if it exists
if [ -d "$DEPLOY_TARGET" ]; then
print_status "Backing up current deployment..."
cp -r "$DEPLOY_TARGET" "$backup_path/current"
fi
# Backup database if requested
if [ "$BACKUP_DATABASE" = true ]; then
print_status "Backing up database..."
# This would depend on your database setup
# For SQLite:
if [ -f "$PROJECT_ROOT/backend/db.sqlite3" ]; then
cp "$PROJECT_ROOT/backend/db.sqlite3" "$backup_path/database.sqlite3"
fi
fi
# Backup environment files
if [ -f "$PROJECT_ROOT/.env" ]; then
cp "$PROJECT_ROOT/.env" "$backup_path/.env.backup"
fi
print_success "Backup created: $backup_path"
}
# Function to prepare deployment artifacts
prepare_artifacts() {
print_status "Preparing deployment artifacts..."
# Check if build artifacts exist
if [ ! -d "$DEPLOY_DIR" ]; then
print_error "No deployment artifacts found. Please run build-all.sh first."
exit 1
fi
# Validate manifest
if [ -f "$DEPLOY_DIR/manifest.json" ]; then
print_status "Validating deployment manifest..."
# You could add more validation here
grep -q "build_timestamp" "$DEPLOY_DIR/manifest.json" || {
print_error "Invalid deployment manifest"
exit 1
}
fi
print_success "Deployment artifacts ready!"
}
# Function to deploy to local development
deploy_local() {
print_status "Deploying to local development environment..."
local target_dir="$PROJECT_ROOT/deployment"
# Create target directory
mkdir -p "$target_dir"
# Copy artifacts
print_status "Copying frontend artifacts..."
cp -r "$DEPLOY_DIR/frontend" "$target_dir/"
print_status "Copying backend artifacts..."
mkdir -p "$target_dir/backend"
cp -r "$DEPLOY_DIR/backend/staticfiles" "$target_dir/backend/"
# Copy deployment configuration
cp "$DEPLOY_DIR/manifest.json" "$target_dir/"
print_success "Local deployment completed!"
print_status "Deployment available at: $target_dir"
}
# Function to deploy via rsync
deploy_rsync() {
print_status "Deploying via rsync..."
if [ -z "$DEPLOY_HOST" ]; then
print_error "DEPLOY_HOST not set for rsync deployment"
exit 1
fi
local target=""
if [ -n "$DEPLOY_USER" ]; then
target="$DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_PATH"
else
target="$DEPLOY_HOST:$DEPLOY_PATH"
fi
print_status "Syncing files to $target..."
# Rsync options:
# -a: archive mode (recursive, preserves attributes)
# -v: verbose
# -z: compress during transfer
# --delete: delete files not in source
# --exclude: exclude certain files
rsync -avz --delete \
--exclude='.git' \
--exclude='node_modules' \
--exclude='__pycache__' \
--exclude='*.log' \
"$DEPLOY_DIR/" "$target"
print_success "Rsync deployment completed!"
}
# Function to deploy via Docker
deploy_docker() {
print_status "Deploying via Docker..."
local image_name="thrillwiki-$DEPLOY_ENV"
local container_name="thrillwiki-$DEPLOY_ENV"
# Build Docker image
print_status "Building Docker image: $image_name"
docker build -t "$image_name" \
--build-arg DEPLOY_ENV="$DEPLOY_ENV" \
-f "$PROJECT_ROOT/Dockerfile" \
"$PROJECT_ROOT"
# Stop existing container
if docker ps -q -f name="$container_name" | grep -q .; then
print_status "Stopping existing container..."
docker stop "$container_name"
fi
# Remove existing container
if docker ps -a -q -f name="$container_name" | grep -q .; then
print_status "Removing existing container..."
docker rm "$container_name"
fi
# Run new container
print_status "Starting new container..."
docker run -d \
--name "$container_name" \
-p 8080:80 \
-e DEPLOY_ENV="$DEPLOY_ENV" \
"$image_name"
print_success "Docker deployment completed!"
print_status "Container: $container_name"
print_status "URL: http://localhost:8080"
}
# Function to run post-deployment checks
run_post_deploy_checks() {
print_status "Running post-deployment checks..."
local health_url=""
case $DEPLOY_METHOD in
"local")
health_url="http://localhost:8080/health"
;;
"docker")
health_url="http://localhost:8080/health"
;;
"rsync")
if [ -n "$DEPLOY_HOST" ]; then
health_url="http://$DEPLOY_HOST/health"
fi
;;
esac
if [ -n "$health_url" ]; then
print_status "Checking health endpoint: $health_url"
if curl -s -f "$health_url" > /dev/null 2>&1; then
print_success "Health check passed!"
else
print_warning "Health check failed. Please verify deployment."
fi
fi
print_success "Post-deployment checks completed!"
}
# Function to generate deployment report
generate_deployment_report() {
print_status "Generating deployment report..."
local report_file="$PROJECT_ROOT/deployment-report-$DEPLOY_ENV-$TIMESTAMP.txt"
cat > "$report_file" << EOF
ThrillWiki Deployment Report
============================
Deployment Information:
- Deployment Date: $(date)
- Environment: $DEPLOY_ENV
- Method: $DEPLOY_METHOD
- Project Root: $PROJECT_ROOT
Deployment Details:
- Source Directory: $DEPLOY_DIR
- Target: $DEPLOY_TARGET
- Backup Created: $([ "$CREATE_BACKUP" = true ] && echo "Yes" || echo "No")
Build Information:
$(if [ -f "$DEPLOY_DIR/manifest.json" ]; then
cat "$DEPLOY_DIR/manifest.json"
else
echo "No manifest found"
fi)
System Information:
- Hostname: $(hostname)
- User: $(whoami)
- OS: $(uname -s) $(uname -r)
Deployment Status: SUCCESS
Post-Deployment:
- Health Check: $([ "$RUN_CHECKS" = true ] && echo "Run" || echo "Skipped")
- Backup Location: $([ "$CREATE_BACKUP" = true ] && echo "$BACKUP_DIR/backup_$TIMESTAMP" || echo "None")
EOF
print_success "Deployment report generated: $report_file"
}
# Function to show usage
show_usage() {
cat << EOF
Usage: $0 [ENVIRONMENT] [OPTIONS]
Deploy ThrillWiki to the specified environment.
Environments:
dev Development environment
staging Staging environment
production Production environment
Options:
-h, --help Show this help message
-m, --method METHOD Deployment method (local, rsync, docker)
--no-backup Skip backup creation
--no-checks Skip post-deployment checks
--no-report Skip deployment report generation
Examples:
$0 production # Deploy to production using default method
$0 staging --method docker # Deploy to staging using Docker
$0 dev --no-backup # Deploy to dev without backup
Environment Variables:
DEPLOY_METHOD Deployment method (default: local)
DEPLOY_HOST Target host for rsync deployment
DEPLOY_USER SSH user for rsync deployment
DEPLOY_PATH Target path for rsync deployment
CREATE_BACKUP Create backup before deployment (default: true)
BACKUP_DATABASE Backup database (default: false)
EOF
}
# Parse command line arguments
DEPLOY_METHOD="local"
CREATE_BACKUP=true
RUN_CHECKS=true
SKIP_REPORT=false
# Get environment from first argument
if [ $# -gt 0 ]; then
case $1 in
dev|staging|production)
DEPLOY_ENV="$1"
shift
;;
-h|--help)
show_usage
exit 0
;;
*)
print_error "Invalid environment: $1"
show_usage
exit 1
;;
esac
fi
# Parse remaining arguments
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
show_usage
exit 0
;;
-m|--method)
DEPLOY_METHOD="$2"
shift 2
;;
--no-backup)
CREATE_BACKUP=false
shift
;;
--no-checks)
RUN_CHECKS=false
shift
;;
--no-report)
SKIP_REPORT=true
shift
;;
*)
print_error "Unknown option: $1"
show_usage
exit 1
;;
esac
done
# Override from environment variables
if [ -n "$DEPLOY_METHOD_ENV" ]; then
DEPLOY_METHOD=$DEPLOY_METHOD_ENV
fi
if [ "$CREATE_BACKUP_ENV" = "false" ]; then
CREATE_BACKUP=false
fi
# Set deployment target based on method
case $DEPLOY_METHOD in
"local")
DEPLOY_TARGET="$PROJECT_ROOT/deployment"
;;
"rsync")
DEPLOY_TARGET="${DEPLOY_USER:+$DEPLOY_USER@}${DEPLOY_HOST:-localhost}:${DEPLOY_PATH:-/var/www/thrillwiki}"
;;
"docker")
DEPLOY_TARGET="docker_container"
;;
*)
print_error "Unsupported deployment method: $DEPLOY_METHOD"
exit 1
;;
esac
# Print banner
echo -e "${GREEN}"
echo "=========================================="
echo " ThrillWiki Deployment"
echo "=========================================="
echo -e "${NC}"
print_status "Environment: $DEPLOY_ENV"
print_status "Method: $DEPLOY_METHOD"
print_status "Target: $DEPLOY_TARGET"
print_status "Create backup: $CREATE_BACKUP"
# Check deployment requirements
check_deployment_requirements
# Prepare deployment artifacts
prepare_artifacts
# Create backup if requested
if [ "$CREATE_BACKUP" = true ]; then
create_backup
else
print_warning "Skipping backup creation as requested"
fi
# Deploy based on method
case $DEPLOY_METHOD in
"local")
deploy_local
;;
"rsync")
deploy_rsync
;;
"docker")
deploy_docker
;;
*)
print_error "Unsupported deployment method: $DEPLOY_METHOD"
exit 1
;;
esac
# Run post-deployment checks
if [ "$RUN_CHECKS" = true ]; then
run_post_deploy_checks
else
print_warning "Skipping post-deployment checks as requested"
fi
# Generate deployment report
if [ "$SKIP_REPORT" = false ]; then
generate_deployment_report
else
print_warning "Skipping deployment report generation as requested"
fi
print_success "Deployment completed successfully!"
echo ""
print_status "Environment: $DEPLOY_ENV"
print_status "Method: $DEPLOY_METHOD"
print_status "Target: $DEPLOY_TARGET"
echo ""
print_status "Deployment report: $PROJECT_ROOT/deployment-report-$DEPLOY_ENV-$TIMESTAMP.txt"

@@ -1,368 +0,0 @@
#!/bin/bash
# ThrillWiki Development Environment Setup
# Sets up the complete development environment for both backend and frontend
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../../" && pwd)"
# Configuration
BACKEND_DIR="$PROJECT_ROOT/backend"
FRONTEND_DIR="$PROJECT_ROOT/frontend"
# Function to print colored output
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Function to check if a command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Function to check system requirements
check_requirements() {
print_status "Checking system requirements..."
local missing_deps=()
# Check Python
if ! command_exists python3; then
missing_deps+=("python3")
else
local python_version=$(python3 --version | cut -d' ' -f2)
local py_major=$(echo "$python_version" | cut -d'.' -f1)
local py_minor=$(echo "$python_version" | cut -d'.' -f2)
# Compare version components numerically; a "bc" float comparison misorders e.g. 3.9 vs 3.11
if [ "$py_major" -lt 3 ] || { [ "$py_major" -eq 3 ] && [ "$py_minor" -lt 11 ]; }; then
print_warning "Python version $python_version detected. Python 3.11+ recommended."
fi
fi
# Check uv
if ! command_exists uv; then
missing_deps+=("uv")
fi
# Check Node.js
if ! command_exists node; then
missing_deps+=("node")
else
local node_version=$(node --version | cut -d'v' -f2 | cut -d'.' -f1)
if (( node_version < 18 )); then
print_warning "Node.js version $node_version detected. Node.js 18+ recommended."
fi
fi
# Check pnpm
if ! command_exists pnpm; then
missing_deps+=("pnpm")
fi
# Check PostgreSQL (optional)
if ! command_exists psql; then
print_warning "PostgreSQL not found. SQLite will be used for development."
fi
# Check Redis (optional)
if ! command_exists redis-server; then
print_warning "Redis not found. Some features may not work."
fi
if [ ${#missing_deps[@]} -ne 0 ]; then
print_error "Missing required dependencies: ${missing_deps[*]}"
print_status "Please install the missing dependencies and run this script again."
print_status "Installation instructions:"
print_status " - Python 3.11+: https://www.python.org/downloads/"
print_status " - uv: pip install uv"
print_status " - Node.js 18+: https://nodejs.org/"
print_status " - pnpm: npm install -g pnpm"
exit 1
fi
print_success "All system requirements met!"
}
# Function to setup backend
setup_backend() {
print_status "Setting up Django backend..."
cd "$BACKEND_DIR"
# Install Python dependencies with uv
print_status "Installing Python dependencies..."
if [ ! -d ".venv" ]; then
uv sync
else
print_warning "Virtual environment already exists. Updating dependencies..."
uv sync
fi
# Create .env file if it doesn't exist
if [ ! -f ".env" ]; then
print_status "Creating backend .env file..."
cp .env.example .env
print_warning "Please edit backend/.env with your settings"
else
print_warning "Backend .env file already exists"
fi
# Run database migrations
print_status "Running database migrations..."
uv run manage.py migrate
# Create superuser (optional)
print_status "Creating Django superuser..."
echo "from django.contrib.auth import get_user_model; User = get_user_model(); User.objects.filter(username='admin').exists() or User.objects.create_superuser('admin', 'admin@example.com', 'admin')" | uv run manage.py shell
print_success "Backend setup completed!"
cd "$PROJECT_ROOT"
}
# Function to setup frontend
setup_frontend() {
print_status "Setting up Vue.js frontend..."
cd "$FRONTEND_DIR"
# Install Node.js dependencies
print_status "Installing Node.js dependencies..."
if [ ! -d "node_modules" ]; then
pnpm install
else
print_warning "node_modules already exists. Updating dependencies..."
pnpm install
fi
# Create environment files if they don't exist
if [ ! -f ".env.local" ]; then
print_status "Creating frontend .env.local file..."
cp .env.development .env.local
print_warning "Please edit frontend/.env.local with your settings"
else
print_warning "Frontend .env.local file already exists"
fi
print_success "Frontend setup completed!"
cd "$PROJECT_ROOT"
}
# Function to setup root environment
setup_root_env() {
print_status "Setting up root environment..."
cd "$PROJECT_ROOT"
# Create root .env file if it doesn't exist
if [ ! -f ".env" ]; then
print_status "Creating root .env file..."
cp .env.example .env
print_warning "Please edit .env with your settings"
else
print_warning "Root .env file already exists"
fi
print_success "Root environment setup completed!"
}
# Function to verify setup
verify_setup() {
print_status "Verifying setup..."
local issues=()
# Check backend
cd "$BACKEND_DIR"
if [ ! -d ".venv" ]; then
issues+=("Backend virtual environment not found")
fi
if [ ! -f ".env" ]; then
issues+=("Backend .env file not found")
fi
# Check if Django can start
if ! uv run manage.py check --settings=config.django.local >/dev/null 2>&1; then
issues+=("Django configuration check failed")
fi
cd "$FRONTEND_DIR"
# Check frontend
if [ ! -d "node_modules" ]; then
issues+=("Frontend node_modules not found")
fi
if [ ! -f ".env.local" ]; then
issues+=("Frontend .env.local file not found")
fi
# Check if Vue can build
if ! pnpm run type-check >/dev/null 2>&1; then
issues+=("Vue.js type check failed")
fi
cd "$PROJECT_ROOT"
if [ ${#issues[@]} -ne 0 ]; then
print_warning "Setup verification found issues:"
for issue in "${issues[@]}"; do
echo -e " - ${YELLOW}$issue${NC}"
done
return 1
else
print_success "Setup verification passed!"
return 0
fi
}
# Function to show usage
show_usage() {
cat << EOF
Usage: $0 [OPTIONS]
Set up the complete ThrillWiki development environment.
Options:
-h, --help Show this help message
-b, --backend-only Setup only the backend
-f, --frontend-only Setup only the frontend
-y, --yes Skip confirmation prompts
--no-verify Skip setup verification
Examples:
$0 # Setup both backend and frontend
$0 --backend-only # Setup only backend
$0 --frontend-only # Setup only frontend
Environment Variables:
SKIP_CONFIRMATION Set to 'true' to skip confirmation prompts
SKIP_VERIFICATION Set to 'true' to skip verification
EOF
}
# Parse command line arguments
BACKEND_ONLY=false
FRONTEND_ONLY=false
SKIP_CONFIRMATION="${SKIP_CONFIRMATION:-false}"
SKIP_VERIFICATION="${SKIP_VERIFICATION:-false}"
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
show_usage
exit 0
;;
-b|--backend-only)
BACKEND_ONLY=true
shift
;;
-f|--frontend-only)
FRONTEND_ONLY=true
shift
;;
-y|--yes)
SKIP_CONFIRMATION=true
shift
;;
--no-verify)
SKIP_VERIFICATION=true
shift
;;
*)
print_error "Unknown option: $1"
show_usage
exit 1
;;
esac
done
# Override from environment variables
if [ "$SKIP_CONFIRMATION" = "true" ] || [ "$SKIP_CONFIRMATION_ENV" = "true" ]; then
SKIP_CONFIRMATION=true
fi
if [ "$SKIP_VERIFICATION" = "true" ] || [ "$SKIP_VERIFICATION_ENV" = "true" ]; then
SKIP_VERIFICATION=true
fi
# Print banner
echo -e "${GREEN}"
echo "=========================================="
echo " ThrillWiki Development Setup"
echo "=========================================="
echo -e "${NC}"
print_status "Project root: $PROJECT_ROOT"
# Confirmation prompt
if [ "$SKIP_CONFIRMATION" = false ]; then
echo ""
read -p "This will set up the development environment. Continue? (y/N): " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
print_status "Setup cancelled."
exit 0
fi
fi
# Check requirements
check_requirements
# Setup components based on options
if [ "$BACKEND_ONLY" = true ]; then
print_status "Setting up backend only..."
setup_backend
setup_root_env
elif [ "$FRONTEND_ONLY" = true ]; then
print_status "Setting up frontend only..."
setup_frontend
setup_root_env
else
print_status "Setting up both backend and frontend..."
setup_backend
setup_frontend
setup_root_env
fi
# Verify setup
if [ "$SKIP_VERIFICATION" = false ]; then
echo ""
if verify_setup; then
print_success "Development environment setup completed successfully!"
echo ""
print_status "Next steps:"
echo " 1. Edit .env files with your configuration"
echo " 2. Start development servers: ./shared/scripts/dev/start-all.sh"
echo " 3. Visit http://localhost:5174 for the frontend"
echo " 4. Visit http://localhost:8000 for the backend API"
echo ""
print_status "Happy coding! 🚀"
else
print_warning "Setup completed with issues. Please review the warnings above."
exit 1
fi
else
print_success "Development environment setup completed!"
print_status "Skipped verification as requested."
fi


@@ -1,279 +0,0 @@
#!/bin/bash
# ThrillWiki Development Server Starter
# Starts both Django backend and Vue.js frontend servers concurrently
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../../" && pwd)"
# Configuration
BACKEND_PORT="${BACKEND_PORT:-8000}"
FRONTEND_PORT="${FRONTEND_PORT:-5174}"
BACKEND_DIR="$PROJECT_ROOT/backend"
FRONTEND_DIR="$PROJECT_ROOT/frontend"
# Function to print colored output
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Function to check if a port is available
check_port() {
local port=$1
if lsof -Pi :$port -sTCP:LISTEN -t >/dev/null ; then
return 1
else
return 0
fi
}
# Function to kill process on port
kill_port() {
local port=$1
local pid=$(lsof -ti:$port)
if [ ! -z "$pid" ]; then
print_warning "Killing process $pid on port $port"
kill -9 $pid
fi
}
# Function to wait for service to be ready
wait_for_service() {
local url=$1
local service_name=$2
local max_attempts=30
local attempt=1
print_status "Waiting for $service_name to be ready at $url"
while [ $attempt -le $max_attempts ]; do
if curl -s -f "$url" > /dev/null 2>&1; then
print_success "$service_name is ready!"
return 0
fi
echo -n "."
sleep 2
((attempt++))
done
print_error "$service_name failed to start after $max_attempts attempts"
return 1
}
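The `wait_for_service` loop above is a generic wait-until-healthy pattern: probe, sleep, retry, give up after a bound. A minimal Python sketch with an injectable probe (names here are illustrative, not from the repo):

```python
def wait_for_service(probe, max_attempts=30, delay=2, sleep=None):
    """Poll `probe` until it returns truthy; report the attempt that succeeded."""
    sleep = sleep or (lambda seconds: None)
    for attempt in range(1, max_attempts + 1):
        if probe():
            return attempt
        sleep(delay)
    return None  # service never came up

# Simulate a server that becomes healthy on the third poll
state = {"calls": 0}
def probe():
    state["calls"] += 1
    return state["calls"] >= 3

result = wait_for_service(probe, max_attempts=5)
```

Injecting the probe and the sleep function keeps the retry logic testable without a real HTTP server, which the shell version (tied to `curl`) cannot do.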
# Function to start backend server
start_backend() {
print_status "Starting Django backend server..."
# Kill any existing process on backend port
kill_port $BACKEND_PORT
# Clean up Python cache files
find "$BACKEND_DIR" -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
cd "$BACKEND_DIR"
# Check if virtual environment exists
if [ ! -d ".venv" ]; then
print_error "Backend virtual environment not found. Please run setup-dev.sh first."
exit 1
fi
# Start Django server in background
print_status "Starting Django development server on port $BACKEND_PORT"
uv run manage.py runserver 0.0.0.0:$BACKEND_PORT &
BACKEND_PID=$!
# Wait for backend to be ready
wait_for_service "http://localhost:$BACKEND_PORT/api/" "Django backend"
cd "$PROJECT_ROOT"
}
# Function to start frontend server
start_frontend() {
print_status "Starting Vue.js frontend server..."
cd "$FRONTEND_DIR"
# Check if node_modules exists
if [ ! -d "node_modules" ]; then
print_error "Frontend dependencies not installed. Please run setup-dev.sh first."
exit 1
fi
# Start Vue.js dev server in background
print_status "Starting Vue.js development server on port $FRONTEND_PORT"
pnpm run dev &
FRONTEND_PID=$!
# Wait for frontend to be ready
wait_for_service "http://localhost:$FRONTEND_PORT" "Vue.js frontend"
cd "$PROJECT_ROOT"
}
# Function to cleanup on script exit
cleanup() {
print_warning "Shutting down development servers..."
if [ ! -z "$BACKEND_PID" ]; then
kill $BACKEND_PID 2>/dev/null || true
fi
if [ ! -z "$FRONTEND_PID" ]; then
kill $FRONTEND_PID 2>/dev/null || true
fi
# Kill any remaining processes on our ports
kill_port $BACKEND_PORT
kill_port $FRONTEND_PORT
print_success "Development servers stopped."
exit 0
}
# Function to show usage
show_usage() {
cat << EOF
Usage: $0 [OPTIONS]
Start both Django backend and Vue.js frontend development servers.
Options:
-h, --help Show this help message
-b, --backend-only Start only the backend server
-f, --frontend-only Start only the frontend server
-p, --production Start in production mode (if applicable)
--no-wait Don't wait for services to be ready
Examples:
$0 # Start both servers
$0 --backend-only # Start only backend
$0 --frontend-only # Start only frontend
Environment Variables:
BACKEND_PORT Backend server port (default: 8000)
FRONTEND_PORT Frontend server port (default: 5174)
EOF
}
# Parse command line arguments
BACKEND_ONLY=false
FRONTEND_ONLY=false
PRODUCTION=false
WAIT_FOR_SERVICES=true
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
show_usage
exit 0
;;
-b|--backend-only)
BACKEND_ONLY=true
shift
;;
-f|--frontend-only)
FRONTEND_ONLY=true
shift
;;
-p|--production)
PRODUCTION=true
shift
;;
--no-wait)
WAIT_FOR_SERVICES=false
shift
;;
*)
print_error "Unknown option: $1"
show_usage
exit 1
;;
esac
done
# Override ports from environment if set
if [ ! -z "$BACKEND_PORT_ENV" ]; then
BACKEND_PORT=$BACKEND_PORT_ENV
fi
if [ ! -z "$FRONTEND_PORT_ENV" ]; then
FRONTEND_PORT=$FRONTEND_PORT_ENV
fi
# Set up signal handlers for graceful shutdown
trap cleanup SIGINT SIGTERM
# Print banner
echo -e "${GREEN}"
echo "=========================================="
echo " ThrillWiki Development Environment"
echo "=========================================="
echo -e "${NC}"
print_status "Project root: $PROJECT_ROOT"
print_status "Backend port: $BACKEND_PORT"
print_status "Frontend port: $FRONTEND_PORT"
# Check if required tools are available
command -v uv >/dev/null 2>&1 || { print_error "uv is required but not installed. Please install uv first."; exit 1; }
command -v pnpm >/dev/null 2>&1 || { print_error "pnpm is required but not installed. Please install pnpm first."; exit 1; }
command -v curl >/dev/null 2>&1 || { print_error "curl is required but not installed."; exit 1; }
# Start services based on options
if [ "$BACKEND_ONLY" = true ]; then
print_status "Starting backend only..."
start_backend
print_success "Backend server started successfully!"
print_status "Backend URL: http://localhost:$BACKEND_PORT"
print_status "API URL: http://localhost:$BACKEND_PORT/api/"
wait
elif [ "$FRONTEND_ONLY" = true ]; then
print_status "Starting frontend only..."
start_frontend
print_success "Frontend server started successfully!"
print_status "Frontend URL: http://localhost:$FRONTEND_PORT"
wait
else
print_status "Starting both backend and frontend servers..."
start_backend &
BACKEND_PID=$!
start_frontend &
FRONTEND_PID=$!
print_success "Development servers started successfully!"
echo ""
print_status "Backend URL: http://localhost:$BACKEND_PORT"
print_status "API URL: http://localhost:$BACKEND_PORT/api/"
print_status "Frontend URL: http://localhost:$FRONTEND_PORT"
echo ""
print_status "Press Ctrl+C to stop all servers"
# Wait for both processes
wait
fi


@@ -1,147 +0,0 @@
#!/bin/bash
# ThrillWiki Development Server Script
# This script sets up the proper environment variables and runs the Django development server
set -e # Exit on any error
echo "🚀 Starting ThrillWiki Development Server..."
# Change to the project directory (parent of scripts folder)
cd "$(dirname "$0")/.."
# Set Django environment to local development
export DJANGO_SETTINGS_MODULE="config.django.local"
# Core Django settings
export DEBUG="True"
export SECRET_KEY="django-insecure-dev-key-not-for-production-$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-25)"
# Allowed hosts for development
export ALLOWED_HOSTS="localhost,127.0.0.1,0.0.0.0"
# CSRF trusted origins for development
export CSRF_TRUSTED_ORIGINS="http://localhost:8000,http://127.0.0.1:8000,https://127.0.0.1:8000"
# Database configuration (PostgreSQL with PostGIS)
export DATABASE_URL="postgis://thrillwiki_user:thrillwiki_pass@localhost:5432/thrillwiki_db"
# Cache configuration (use locmem for development if Redis not available)
export CACHE_URL="locmemcache://"
export REDIS_URL="redis://127.0.0.1:6379/1"
# CORS settings for API development
export CORS_ALLOW_ALL_ORIGINS="True"
export CORS_ALLOWED_ORIGINS=""
# Email configuration for development (console backend)
export EMAIL_URL="consolemail://"
# GeoDjango library paths for macOS (adjust if needed)
export GDAL_LIBRARY_PATH="/opt/homebrew/lib/libgdal.dylib"
export GEOS_LIBRARY_PATH="/opt/homebrew/lib/libgeos_c.dylib"
# API rate limiting (generous for development)
export API_RATE_LIMIT_PER_MINUTE="1000"
export API_RATE_LIMIT_PER_HOUR="10000"
# Cache settings
export CACHE_MIDDLEWARE_SECONDS="1" # Very short cache for development
export CACHE_MIDDLEWARE_KEY_PREFIX="thrillwiki_dev"
# Social auth settings (you can set these if you have them)
# export GOOGLE_OAUTH2_CLIENT_ID=""
# export GOOGLE_OAUTH2_CLIENT_SECRET=""
# export DISCORD_CLIENT_ID=""
# export DISCORD_CLIENT_SECRET=""
# Create necessary directories
echo "📁 Creating necessary directories..."
mkdir -p logs
mkdir -p profiles
mkdir -p media
mkdir -p staticfiles
mkdir -p static/css
# Check if virtual environment is activated
if [[ -z "$VIRTUAL_ENV" ]] && [[ -d ".venv" ]]; then
echo "🔧 Activating virtual environment..."
source .venv/bin/activate
fi
# Run database migrations if needed
echo "🗄️ Checking database migrations..."
if uv run manage.py migrate --check 2>/dev/null; then
echo "✅ Database migrations are up to date"
else
echo "🔄 Running database migrations..."
uv run manage.py migrate --noinput
fi
# Seed sample data (quiet attempt first; retry with full output on failure)
echo "🌱 Seeding sample data..."
if uv run manage.py seed_sample_data 2>/dev/null; then
echo "✅ Seeding complete!"
else
echo "🔄 Retrying seed with full output..."
uv run manage.py seed_sample_data
fi
# Create superuser if it doesn't exist
echo "👤 Checking for superuser..."
if ! uv run manage.py shell -c "from django.contrib.auth import get_user_model; User = get_user_model(); exit(0 if User.objects.filter(is_superuser=True).exists() else 1)" 2>/dev/null; then
echo "👤 Creating development superuser (admin/admin)..."
uv run manage.py shell -c "
from django.contrib.auth import get_user_model
User = get_user_model()
if not User.objects.filter(username='admin').exists():
User.objects.create_superuser('admin', 'admin@example.com', 'admin')
print('Created superuser: admin/admin')
else:
print('Superuser already exists')
"
fi
# Collect static files for development
echo "📦 Collecting static files..."
uv run manage.py collectstatic --noinput --clear
# Build Tailwind CSS
if command -v npm &> /dev/null; then
echo "🎨 Building Tailwind CSS..."
uv run manage.py tailwind build
else
echo "⚠️ npm not found, skipping Tailwind CSS build"
fi
# Run system checks
echo "🔍 Running system checks..."
if uv run manage.py check; then
echo "✅ System checks passed"
else
echo "❌ System checks failed, but continuing..."
fi
# Display environment info
echo ""
echo "🌍 Development Environment:"
echo " - Settings Module: $DJANGO_SETTINGS_MODULE"
echo " - Debug Mode: $DEBUG"
echo " - Database: PostgreSQL with PostGIS"
echo " - Cache: Local memory cache"
echo " - Admin URL: http://localhost:8000/admin/"
echo " - Admin User: admin / admin"
echo " - Silk Profiler: http://localhost:8000/silk/"
echo " - Debug Toolbar: Available on debug pages"
echo " - API Documentation: http://localhost:8000/api/docs/"
echo ""
# Start the development server
echo "🌟 Starting Django development server on http://localhost:8000"
echo "Press Ctrl+C to stop the server"
echo ""
# Use runserver_plus if django-extensions is available, otherwise use standard runserver
if uv run python -c "import django_extensions" 2>/dev/null; then
exec uv run manage.py runserver_plus 0.0.0.0:8000
else
exec uv run manage.py runserver 0.0.0.0:8000
fi


@@ -1,234 +0,0 @@
#!/usr/bin/env python3
"""
GitHub OAuth Device Flow Authentication for ThrillWiki CI/CD
This script implements GitHub's device flow to securely obtain access tokens.
"""
import sys
import time
import requests
import argparse
from pathlib import Path
# GitHub OAuth App Configuration
CLIENT_ID = "Iv23liOX5Hp75AxhUvIe"
TOKEN_FILE = ".github-token"
def parse_response(response):
"""Parse HTTP response and handle errors."""
if response.status_code in [200, 201]:
return response.json()
elif response.status_code == 401:
print("You are not authorized. Run the `login` command.")
sys.exit(1)
else:
print(f"HTTP {response.status_code}: {response.text}")
sys.exit(1)
def request_device_code():
"""Request a device code from GitHub."""
url = "https://github.com/login/device/code"
data = {"client_id": CLIENT_ID}
headers = {"Accept": "application/json"}
response = requests.post(url, data=data, headers=headers)
return parse_response(response)
def request_token(device_code):
"""Request an access token using the device code."""
url = "https://github.com/login/oauth/access_token"
data = {
"client_id": CLIENT_ID,
"device_code": device_code,
"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
}
headers = {"Accept": "application/json"}
response = requests.post(url, data=data, headers=headers)
return parse_response(response)
def poll_for_token(device_code, interval):
"""Poll GitHub for the access token after user authorization."""
print("Waiting for authorization...")
while True:
response = request_token(device_code)
error = response.get("error")
access_token = response.get("access_token")
if error:
if error == "authorization_pending":
# User hasn't entered the code yet
print(".", end="", flush=True)
time.sleep(interval)
continue
elif error == "slow_down":
# Polling too fast
time.sleep(interval + 5)
continue
elif error == "expired_token":
print("\nThe device code has expired. Please run `login` again.")
sys.exit(1)
elif error == "access_denied":
print("\nLogin cancelled by user.")
sys.exit(1)
else:
print(f"\nError: {response}")
sys.exit(1)
# Success! Save the token
token_path = Path(TOKEN_FILE)
token_path.write_text(access_token)
token_path.chmod(0o600) # Read/write for owner only
print(f"\nToken saved to {TOKEN_FILE}")
break
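The polling loop above can be exercised without touching GitHub by injecting a fake token endpoint. A sketch that re-implements the same state machine with swappable I/O (the function and fixture names are illustrative, not from the repo):

```python
import time

def poll_for_token_pure(request_token, device_code, interval, sleep=time.sleep):
    """Same error handling as the loop above, but with injectable I/O."""
    while True:
        resp = request_token(device_code)
        err = resp.get("error")
        if err == "authorization_pending":
            sleep(interval)          # user has not entered the code yet
            continue
        if err == "slow_down":
            sleep(interval + 5)      # back off when polling too fast
            continue
        if err:
            raise RuntimeError(err)  # expired_token, access_denied, ...
        return resp["access_token"]

# Fake endpoint: pending once, throttled once, then success
responses = iter([
    {"error": "authorization_pending"},
    {"error": "slow_down"},
    {"access_token": "tok123"},
])
token = poll_for_token_pure(lambda code: next(responses), "device-code", 5,
                            sleep=lambda s: None)
```

Separating the loop from `requests` makes the pending/slow_down/terminal-error branches unit-testable, which the original (network-bound, `sys.exit`-based) version is not.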
def login():
"""Initiate the GitHub OAuth device flow login process."""
print("Starting GitHub authentication...")
device_response = request_device_code()
verification_uri = device_response["verification_uri"]
user_code = device_response["user_code"]
device_code = device_response["device_code"]
interval = device_response["interval"]
print(f"\nPlease visit: {verification_uri}")
print(f"and enter code: {user_code}")
print("\nWaiting for you to complete authorization in your browser...")
poll_for_token(device_code, interval)
print("Successfully authenticated!")
return True
def whoami():
"""Display information about the authenticated user."""
token_path = Path(TOKEN_FILE)
if not token_path.exists():
print("You are not authorized. Run the `login` command.")
sys.exit(1)
try:
token = token_path.read_text().strip()
except Exception as e:
print(f"Error reading token: {e}")
print("You may need to run the `login` command again.")
sys.exit(1)
url = "https://api.github.com/user"
headers = {
"Accept": "application/vnd.github+json",
"Authorization": f"Bearer {token}",
}
response = requests.get(url, headers=headers)
user_data = parse_response(response)
print(f"You are authenticated as: {user_data['login']}")
print(f"Name: {user_data.get('name', 'Not set')}")
print(f"Email: {user_data.get('email', 'Not public')}")
return user_data
def get_token():
"""Get the current access token if available."""
token_path = Path(TOKEN_FILE)
if not token_path.exists():
return None
try:
return token_path.read_text().strip()
except Exception:
return None
def validate_token():
"""Validate that the current token is still valid."""
token = get_token()
if not token:
return False
url = "https://api.github.com/user"
headers = {
"Accept": "application/vnd.github+json",
"Authorization": f"Bearer {token}",
}
try:
response = requests.get(url, headers=headers)
return response.status_code == 200
except Exception:
return False
def ensure_authenticated():
"""Ensure user is authenticated, prompting login if necessary."""
if validate_token():
return get_token()
print("GitHub authentication required.")
login()
return get_token()
def logout():
"""Remove the stored access token."""
token_path = Path(TOKEN_FILE)
if token_path.exists():
token_path.unlink()
print("Successfully logged out.")
else:
print("You are not currently logged in.")
def main():
"""Main CLI interface."""
parser = argparse.ArgumentParser(
description="GitHub OAuth authentication for ThrillWiki CI/CD"
)
parser.add_argument(
"command",
choices=["login", "logout", "whoami", "token", "validate"],
help="Command to execute",
)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
if args.command == "login":
login()
elif args.command == "logout":
logout()
elif args.command == "whoami":
whoami()
elif args.command == "token":
token = get_token()
if token:
print(token)
else:
print("No token available. Run `login` first.")
sys.exit(1)
elif args.command == "validate":
if validate_token():
print("Token is valid.")
else:
print("Token is invalid or missing.")
sys.exit(1)
if __name__ == "__main__":
main()


@@ -1,268 +0,0 @@
#!/bin/bash
# ThrillWiki VM CI Setup Script
# This script helps set up the VM deployment system
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log() {
echo -e "${BLUE}[SETUP]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Configuration prompts
prompt_config() {
log "Setting up ThrillWiki VM CI/CD system..."
echo
read -p "Enter your VM IP address: " VM_IP
read -p "Enter your VM username (default: ubuntu): " VM_USER
VM_USER=${VM_USER:-ubuntu}
read -p "Enter your GitHub repository URL: " REPO_URL
read -p "Enter your GitHub webhook secret: " WEBHOOK_SECRET
read -p "Enter local webhook port (default: 9000): " WEBHOOK_PORT
WEBHOOK_PORT=${WEBHOOK_PORT:-9000}
read -p "Enter VM project path (default: /home/$VM_USER/thrillwiki): " VM_PROJECT_PATH
VM_PROJECT_PATH=${VM_PROJECT_PATH:-/home/$VM_USER/thrillwiki}
}
# Create SSH key
setup_ssh() {
log "Setting up SSH keys..."
local ssh_key_path="$HOME/.ssh/thrillwiki_vm"
if [ ! -f "$ssh_key_path" ]; then
ssh-keygen -t rsa -b 4096 -f "$ssh_key_path" -N ""
log_success "SSH key generated: $ssh_key_path"
log "Please copy the following public key to your VM:"
echo "---"
cat "$ssh_key_path.pub"
echo "---"
echo
log "Run this on your VM:"
echo "mkdir -p ~/.ssh && echo '$(cat "$ssh_key_path.pub")' >> ~/.ssh/***REMOVED*** && chmod 600 ~/.ssh/***REMOVED***"
echo
read -p "Press Enter when you've added the key to your VM..."
else
log "SSH key already exists: $ssh_key_path"
fi
# Test SSH connection
log "Testing SSH connection..."
if ssh -i "$ssh_key_path" -o ConnectTimeout=5 -o StrictHostKeyChecking=no "$VM_USER@$VM_IP" "echo 'SSH connection successful'"; then
log_success "SSH connection test passed"
else
log_error "SSH connection test failed"
exit 1
fi
}
# Create environment file
create_env_file() {
log "Creating webhook environment file..."
cat > ***REMOVED***.webhook << EOF
# ThrillWiki Webhook Configuration
WEBHOOK_PORT=$WEBHOOK_PORT
WEBHOOK_SECRET=$WEBHOOK_SECRET
VM_HOST=$VM_IP
VM_PORT=22
VM_USER=$VM_USER
VM_KEY_PATH=$HOME/.ssh/thrillwiki_vm
VM_PROJECT_PATH=$VM_PROJECT_PATH
REPO_URL=$REPO_URL
DEPLOY_BRANCH=main
EOF
log_success "Environment file created: ***REMOVED***.webhook"
}
# Setup VM
setup_vm() {
log "Setting up VM environment..."
local ssh_key_path="$HOME/.ssh/thrillwiki_vm"
# Create setup script for VM
cat > /tmp/vm_setup.sh << 'EOF'
#!/bin/bash
set -e
echo "Setting up VM for ThrillWiki deployment..."
# Update system
sudo apt update
# Install required packages
sudo apt install -y git curl build-essential python3-pip lsof
# Install UV if not present
if ! command -v uv &> /dev/null; then
echo "Installing UV..."
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.cargo/env
fi
# Clone repository if not present
if [ ! -d "thrillwiki" ]; then
echo "Cloning repository..."
git clone REPO_URL_PLACEHOLDER thrillwiki
fi
cd thrillwiki
# Install dependencies
uv sync
# Create directories
mkdir -p logs backups
# Make scripts executable
chmod +x scripts/*.sh
echo "VM setup completed successfully!"
EOF
# Replace placeholder with actual repo URL
sed -i.bak "s|REPO_URL_PLACEHOLDER|$REPO_URL|g" /tmp/vm_setup.sh
# Copy and execute setup script on VM
scp -i "$ssh_key_path" /tmp/vm_setup.sh "$VM_USER@$VM_IP:/tmp/"
ssh -i "$ssh_key_path" "$VM_USER@$VM_IP" "bash /tmp/vm_setup.sh"
log_success "VM setup completed"
# Cleanup
rm /tmp/vm_setup.sh /tmp/vm_setup.sh.bak
}
# Install systemd services
setup_services() {
log "Setting up systemd services on VM..."
local ssh_key_path="$HOME/.ssh/thrillwiki_vm"
# Copy service files and install them
ssh -i "$ssh_key_path" "$VM_USER@$VM_IP" << EOF
cd thrillwiki
# Update service files with correct paths
sed -i 's|/home/ubuntu|/home/$VM_USER|g' scripts/systemd/*.service
sed -i 's|ubuntu|$VM_USER|g' scripts/systemd/*.service
# Install services
sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/
sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/
# Reload and enable services
sudo systemctl daemon-reload
sudo systemctl enable thrillwiki.service
echo "Services installed successfully!"
EOF
log_success "Systemd services installed"
}
# Test deployment
test_deployment() {
log "Testing VM deployment..."
local ssh_key_path="$HOME/.ssh/thrillwiki_vm"
ssh -i "$ssh_key_path" "$VM_USER@$VM_IP" << EOF
cd thrillwiki
./scripts/vm-deploy.sh
EOF
log_success "Deployment test completed"
}
# Start webhook listener
start_webhook() {
log "Webhook listener startup instructions:"
if [ -f "***REMOVED***.webhook" ]; then
log "Webhook configuration found. You can start the webhook listener with:"
echo " source ***REMOVED***.webhook && python3 scripts/webhook-listener.py"
echo
log "Or run it in the background:"
echo " nohup python3 scripts/webhook-listener.py > logs/webhook.log 2>&1 &"
else
log_error "Webhook configuration not found!"
exit 1
fi
}
# GitHub webhook instructions
github_instructions() {
log "GitHub Webhook Setup Instructions:"
echo
echo "1. Go to your GitHub repository: $REPO_URL"
echo "2. Navigate to Settings → Webhooks"
echo "3. Click 'Add webhook'"
echo "4. Configure:"
echo " - Payload URL: http://YOUR_PUBLIC_IP:$WEBHOOK_PORT/webhook"
echo " - Content type: application/json"
echo " - Secret: $WEBHOOK_SECRET"
echo " - Events: Just the push event"
echo "5. Click 'Add webhook'"
echo
log_warning "Make sure port $WEBHOOK_PORT is open on your firewall!"
}
# Main setup flow
main() {
log "ThrillWiki VM CI/CD Setup"
echo "=========================="
echo
# Create logs directory
mkdir -p logs
# Get configuration
prompt_config
# Setup steps
setup_ssh
create_env_file
setup_vm
setup_services
test_deployment
# Final instructions
echo
log_success "Setup completed successfully!"
echo
start_webhook
echo
github_instructions
log "Setup log saved to: logs/setup.log"
}
# Run main function and log output
main "$@" 2>&1 | tee logs/setup.log


@@ -1,575 +0,0 @@
#!/bin/bash
# ThrillWiki Server Start Script
# Stops any running servers, clears caches, runs migrations, and starts both servers
# Works whether servers are currently running or not
# Usage: ./start-servers.sh
set -e # Exit on any error
# Global variables for process management
BACKEND_PID=""
FRONTEND_PID=""
CLEANUP_PERFORMED=false
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
BACKEND_DIR="$PROJECT_ROOT/backend"
FRONTEND_DIR="$PROJECT_ROOT/frontend"
# Function to print colored output
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Function for graceful shutdown
graceful_shutdown() {
if [ "$CLEANUP_PERFORMED" = true ]; then
return 0
fi
CLEANUP_PERFORMED=true
print_warning "Received shutdown signal - performing graceful shutdown..."
# Disable further signal handling to prevent recursive calls
trap - INT TERM
# Kill backend server if running
if [ -n "$BACKEND_PID" ] && kill -0 "$BACKEND_PID" 2>/dev/null; then
print_status "Stopping backend server (PID: $BACKEND_PID)..."
kill -TERM "$BACKEND_PID" 2>/dev/null || true
# Wait up to 10 seconds for graceful shutdown
local count=0
while [ $count -lt 10 ] && kill -0 "$BACKEND_PID" 2>/dev/null; do
sleep 1
count=$((count + 1))
done
# Force kill if still running
if kill -0 "$BACKEND_PID" 2>/dev/null; then
print_warning "Force killing backend server..."
kill -KILL "$BACKEND_PID" 2>/dev/null || true
fi
print_success "Backend server stopped"
else
print_status "Backend server not running or already stopped"
fi
# Kill frontend server if running
if [ -n "$FRONTEND_PID" ] && kill -0 "$FRONTEND_PID" 2>/dev/null; then
print_status "Stopping frontend server (PID: $FRONTEND_PID)..."
kill -TERM "$FRONTEND_PID" 2>/dev/null || true
# Wait up to 10 seconds for graceful shutdown
local count=0
while [ $count -lt 10 ] && kill -0 "$FRONTEND_PID" 2>/dev/null; do
sleep 1
count=$((count + 1))
done
# Force kill if still running
if kill -0 "$FRONTEND_PID" 2>/dev/null; then
print_warning "Force killing frontend server..."
kill -KILL "$FRONTEND_PID" 2>/dev/null || true
fi
print_success "Frontend server stopped"
else
print_status "Frontend server not running or already stopped"
fi
# Clear PID files if they exist
if [ -f "$PROJECT_ROOT/shared/logs/backend.pid" ]; then
rm -f "$PROJECT_ROOT/shared/logs/backend.pid"
fi
if [ -f "$PROJECT_ROOT/shared/logs/frontend.pid" ]; then
rm -f "$PROJECT_ROOT/shared/logs/frontend.pid"
fi
print_success "Graceful shutdown completed"
exit 0
}
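The shutdown sequence above (SIGTERM, bounded wait, then SIGKILL) maps directly onto `subprocess.Popen`. A small Python equivalent, using only stdlib calls:

```python
import signal
import subprocess

def stop_gracefully(proc: subprocess.Popen, grace: float = 10.0) -> int:
    """SIGTERM first, wait up to `grace` seconds, then SIGKILL as a last resort."""
    proc.terminate()                  # polite request: SIGTERM
    try:
        proc.wait(timeout=grace)
    except subprocess.TimeoutExpired:
        proc.kill()                   # force: SIGKILL
        proc.wait()                   # reap to avoid a zombie
    return proc.returncode

proc = subprocess.Popen(["sleep", "60"])
rc = stop_gracefully(proc, grace=5)   # sleep exits on SIGTERM, so rc == -SIGTERM
```

On POSIX, `returncode` is the negated signal number for a signal-killed child, which is a convenient way to assert which path (graceful vs. forced) the shutdown took.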
# Function to kill processes by pattern
kill_processes() {
local pattern="$1"
local description="$2"
print_status "Checking for $description processes..."
# Find and kill processes
local pids=$(pgrep -f "$pattern" 2>/dev/null || true)
if [ -n "$pids" ]; then
print_status "Found $description processes, stopping them..."
echo "$pids" | xargs kill -TERM 2>/dev/null || true
sleep 2
# Force kill if still running
local remaining_pids=$(pgrep -f "$pattern" 2>/dev/null || true)
if [ -n "$remaining_pids" ]; then
print_warning "Force killing remaining $description processes..."
echo "$remaining_pids" | xargs kill -KILL 2>/dev/null || true
fi
print_success "$description processes stopped"
else
print_status "No $description processes found (this is fine)"
fi
}
# Function to clear Django cache
clear_django_cache() {
print_status "Clearing Django cache..."
cd "$BACKEND_DIR"
# Clear Django cache
if command -v uv >/dev/null 2>&1; then
if ! uv run manage.py clear_cache 2>clear_cache_error.log; then
print_error "Django clear_cache command failed:"
cat clear_cache_error.log
rm -f clear_cache_error.log
exit 1
else
rm -f clear_cache_error.log
fi
else
print_error "uv not found! Please install uv first."
exit 1
fi
# Remove Python cache files
find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
find . -name "*.pyc" -delete 2>/dev/null || true
find . -name "*.pyo" -delete 2>/dev/null || true
print_success "Django cache cleared"
}
# Function to clear frontend cache
clear_frontend_cache() {
print_status "Clearing frontend cache..."
cd "$FRONTEND_DIR"
# Remove node_modules/.cache if it exists
if [ -d "node_modules/.cache" ]; then
rm -rf node_modules/.cache
print_status "Removed node_modules/.cache"
fi
# Remove .nuxt cache if it exists (for Nuxt projects)
if [ -d ".nuxt" ]; then
rm -rf .nuxt
print_status "Removed .nuxt cache"
fi
# Remove dist/build directories
if [ -d "dist" ]; then
rm -rf dist
print_status "Removed dist directory"
fi
if [ -d "build" ]; then
rm -rf build
print_status "Removed build directory"
fi
# Clear pnpm cache
if command -v pnpm >/dev/null 2>&1; then
pnpm store prune 2>/dev/null || print_warning "Could not prune pnpm store"
else
print_error "pnpm not found! Please install pnpm first."
exit 1
fi
print_success "Frontend cache cleared"
}
# Function to run Django migrations
run_migrations() {
print_status "Running Django migrations..."
cd "$BACKEND_DIR"
# Check for pending migrations
if uv run python manage.py showmigrations --plan | grep -q "\[ \]"; then
print_status "Pending migrations found, applying..."
uv run python manage.py migrate
print_success "Migrations applied successfully"
else
print_status "No pending migrations found"
fi
# Run any custom management commands if needed
# uv run python manage.py collectstatic --noinput --clear 2>/dev/null || print_warning "collectstatic failed or not needed"
}
# Function to start backend server
start_backend() {
print_status "Starting Django backend server with runserver_plus (verbose output)..."
cd "$BACKEND_DIR"
# Start Django development server with runserver_plus for enhanced features and verbose output
print_status "Running: uv run python manage.py runserver_plus 8000 --verbosity=2"
uv run python manage.py runserver_plus 8000 --verbosity=2 &
BACKEND_PID=$!
# Make sure the background process can receive signals
disown -h "$BACKEND_PID" 2>/dev/null || true
# Wait a moment and check if it started successfully
sleep 3
if kill -0 $BACKEND_PID 2>/dev/null; then
print_success "Backend server started (PID: $BACKEND_PID)"
echo $BACKEND_PID > ../shared/logs/backend.pid
else
print_error "Failed to start backend server"
return 1
fi
}
# Function to start frontend server
start_frontend() {
print_status "Starting frontend server with verbose output..."
cd "$FRONTEND_DIR"
# Install dependencies if node_modules doesn't exist or package.json is newer
if [ ! -d "node_modules" ] || [ "package.json" -nt "node_modules" ]; then
print_status "Installing/updating frontend dependencies..."
pnpm install
fi
# Start frontend development server using Vite with explicit port, auto-open, and verbose output
# --port 5173: Use standard Vite port
# --open: Automatically open browser when ready
# --host localhost: Ensure it binds to localhost
# --debug: Enable debug logging
print_status "Starting Vite development server with verbose output and auto-browser opening..."
print_status "Running: pnpm vite --port 5173 --open --host localhost --debug"
pnpm vite --port 5173 --open --host localhost --debug &
FRONTEND_PID=$!
# Make sure the background process can receive signals
disown -h "$FRONTEND_PID" 2>/dev/null || true
# Wait a moment and check if it started successfully
sleep 3
if kill -0 $FRONTEND_PID 2>/dev/null; then
print_success "Frontend server started (PID: $FRONTEND_PID) - browser should open automatically"
echo $FRONTEND_PID > ../shared/logs/frontend.pid
else
print_error "Failed to start frontend server"
return 1
fi
}
# Function to detect operating system
detect_os() {
case "$(uname -s)" in
Darwin*) echo "macos";;
Linux*) echo "linux";;
*) echo "unknown";;
esac
}
# Function to open browser on the appropriate OS
open_browser() {
local url="$1"
local os=$(detect_os)
print_status "Opening browser to $url..."
case "$os" in
"macos")
if command -v open >/dev/null 2>&1; then
open "$url" 2>/dev/null || print_warning "Failed to open browser automatically"
else
print_warning "Cannot open browser: 'open' command not available"
fi
;;
"linux")
if command -v xdg-open >/dev/null 2>&1; then
xdg-open "$url" 2>/dev/null || print_warning "Failed to open browser automatically"
else
print_warning "Cannot open browser: 'xdg-open' command not available"
fi
;;
*)
print_warning "Cannot open browser automatically: Unsupported operating system"
;;
esac
}
# Function to verify frontend is responding (simplified since port is known)
verify_frontend_ready() {
local frontend_url="http://localhost:5173"
local max_checks=15
local check=0
print_status "Verifying frontend server is responding at $frontend_url..."
while [ $check -lt $max_checks ]; do
local response_code=$(curl -s -o /dev/null -w "%{http_code}" "$frontend_url" 2>/dev/null)
if [ "$response_code" = "200" ] || [ "$response_code" = "301" ] || [ "$response_code" = "302" ] || [ "$response_code" = "404" ]; then
print_success "Frontend server is responding (HTTP $response_code)"
return 0
fi
if [ $((check % 3)) -eq 0 ]; then
print_status "Waiting for frontend to respond... (attempt $((check + 1))/$max_checks)"
fi
sleep 2
check=$((check + 1))
done
print_warning "Frontend may still be starting up"
return 1
}
# Function to verify servers are responding
verify_servers_ready() {
print_status "Verifying both servers are responding..."
# Check backend
local backend_ready=false
local frontend_ready=false
local max_checks=10
local check=0
while [ $check -lt $max_checks ]; do
# Check backend
if [ "$backend_ready" = false ]; then
local backend_response=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:8000" 2>/dev/null)
if [ "$backend_response" = "200" ] || [ "$backend_response" = "301" ] || [ "$backend_response" = "302" ] || [ "$backend_response" = "404" ]; then
print_success "Backend server is responding (HTTP $backend_response)"
backend_ready=true
fi
fi
# Check frontend
if [ "$frontend_ready" = false ]; then
local frontend_response=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:5173" 2>/dev/null)
if [ "$frontend_response" = "200" ] || [ "$frontend_response" = "301" ] || [ "$frontend_response" = "302" ] || [ "$frontend_response" = "404" ]; then
print_success "Frontend server is responding (HTTP $frontend_response)"
frontend_ready=true
fi
fi
# Both ready?
if [ "$backend_ready" = true ] && [ "$frontend_ready" = true ]; then
print_success "Both servers are responding!"
return 0
fi
sleep 2
check=$((check + 1))
done
# Show status of what's working
if [ "$backend_ready" = true ]; then
print_success "Backend is ready at http://localhost:8000"
else
print_warning "Backend may still be starting up"
fi
if [ "$frontend_ready" = true ]; then
print_success "Frontend is ready at http://localhost:5173"
else
print_warning "Frontend may still be starting up"
fi
}
# Function to create logs directory if it doesn't exist
ensure_logs_dir() {
local logs_dir="$PROJECT_ROOT/shared/logs"
if [ ! -d "$logs_dir" ]; then
mkdir -p "$logs_dir"
print_status "Created logs directory: $logs_dir"
fi
}
# Function to validate project structure
validate_project() {
if [ ! -d "$BACKEND_DIR" ]; then
print_error "Backend directory not found: $BACKEND_DIR"
exit 1
fi
if [ ! -d "$FRONTEND_DIR" ]; then
print_error "Frontend directory not found: $FRONTEND_DIR"
exit 1
fi
if [ ! -f "$BACKEND_DIR/manage.py" ]; then
print_error "Django manage.py not found in: $BACKEND_DIR"
exit 1
fi
if [ ! -f "$FRONTEND_DIR/package.json" ]; then
print_error "Frontend package.json not found in: $FRONTEND_DIR"
exit 1
fi
}
# Function to kill processes using specific ports
kill_port_processes() {
local port="$1"
local description="$2"
print_status "Checking for processes using port $port ($description)..."
# Find processes using the specific port
local pids=$(lsof -ti :$port 2>/dev/null || true)
if [ -n "$pids" ]; then
print_warning "Found processes using port $port, killing them..."
echo "$pids" | xargs kill -TERM 2>/dev/null || true
sleep 2
# Force kill if still running
local remaining_pids=$(lsof -ti :$port 2>/dev/null || true)
if [ -n "$remaining_pids" ]; then
print_warning "Force killing remaining processes on port $port..."
echo "$remaining_pids" | xargs kill -KILL 2>/dev/null || true
fi
print_success "Port $port cleared"
else
print_status "Port $port is available"
fi
}
# Function to check and clear required ports
check_and_clear_ports() {
print_status "Checking and clearing required ports..."
# Kill processes using our specific ports
kill_port_processes 8000 "Django backend"
kill_port_processes 5173 "Frontend Vite"
}
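
# The lsof-based port check above depends on the external lsof binary. Where it
# isn't available, the same "is anything on this port?" question can be answered
# with a plain bind test. A minimal Python sketch of that idea (the function
# name is illustrative, not part of these scripts):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on (host, port).

    Mirrors the intent of kill_port_processes(): if bind() succeeds,
    the port is available for a dev server to claim.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets the bind probe succeed even if the port is
        # lingering in TIME_WAIT, while a live listener still fails it.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

# Note this only answers the question at one instant; actually freeing a busy
# port still requires the TERM-then-KILL escalation the shell function performs.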
# Main execution function
main() {
print_status "ThrillWiki Server Start Script Starting..."
print_status "This script works whether servers are currently running or not."
print_status "Project root: $PROJECT_ROOT"
# Set up signal traps EARLY - before any long-running operations
print_status "Setting up signal handlers for graceful shutdown..."
trap 'graceful_shutdown' INT TERM
# Validate project structure
validate_project
# Ensure logs directory exists
ensure_logs_dir
# Check and clear ports
check_and_clear_ports
# Kill existing server processes (if any)
print_status "=== Stopping Any Running Servers ==="
print_status "Note: It's perfectly fine if no servers are currently running"
kill_processes "manage.py runserver" "Django backend"
kill_processes "pnpm.*dev\|npm.*dev\|yarn.*dev\|node.*dev" "Frontend development"
kill_processes "uvicorn\|gunicorn" "Python web servers"
# Clear caches
print_status "=== Clearing Caches ==="
clear_django_cache
clear_frontend_cache
# Run migrations
print_status "=== Running Migrations ==="
run_migrations
# Start servers
print_status "=== Starting Servers ==="
# Start backend first
if start_backend; then
print_success "Backend server is running"
else
print_error "Failed to start backend server"
exit 1
fi
# Start frontend
if start_frontend; then
print_success "Frontend server is running"
else
print_error "Failed to start frontend server"
print_status "Backend server is still running"
exit 1
fi
# Verify servers are responding
print_status "=== Verifying Servers ==="
verify_servers_ready
# Final status
print_status "=== Server Status ==="
print_success "✅ Backend server: http://localhost:8000 (Django with runserver_plus)"
print_success "✅ Frontend server: http://localhost:5173 (Vite with verbose output)"
print_status "🌐 Browser should have opened automatically via Vite --open"
print_status "🔧 To stop servers, use: kill \$(cat $PROJECT_ROOT/shared/logs/backend.pid) \$(cat $PROJECT_ROOT/shared/logs/frontend.pid)"
print_status "📋 Both servers are running with verbose output directly in your terminal"
print_success "🚀 All servers started successfully with full verbose output!"
# Keep the script running and wait for signals
wait_for_servers
}
# Wait for servers function to keep script running and handle signals
wait_for_servers() {
print_status "🚀 Servers are running! Press Ctrl+C for graceful shutdown."
print_status "📋 Backend: http://localhost:8000 | Frontend: http://localhost:5173"
# Keep the script alive and wait for signals
while [ "$CLEANUP_PERFORMED" != true ]; do
# Check if both servers are still running
if [ -n "$BACKEND_PID" ] && ! kill -0 "$BACKEND_PID" 2>/dev/null; then
print_error "Backend server has stopped unexpectedly"
graceful_shutdown
break
fi
if [ -n "$FRONTEND_PID" ] && ! kill -0 "$FRONTEND_PID" 2>/dev/null; then
print_error "Frontend server has stopped unexpectedly"
graceful_shutdown
break
fi
# Use shorter sleep and check for signals more frequently
sleep 1
done
}
# Run main function (no traps set up initially)
main "$@"
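
# The curl polling loops in verify_frontend_ready/verify_servers_ready translate
# directly to other languages. A hedged Python equivalent (names are
# illustrative): any HTTP status in the accepted set counts as "server up", and
# connection errors count as "still starting".

```python
import time
import urllib.error
import urllib.request

def wait_for_http(url, max_checks=15, delay=2.0, ok_codes=(200, 301, 302, 404)):
    """Poll `url` until it answers with one of `ok_codes`; True when ready."""
    for _ in range(max_checks):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                code = resp.status
        except urllib.error.HTTPError as exc:
            code = exc.code   # a 4xx/5xx still proves the server is serving
        except (urllib.error.URLError, OSError):
            code = None       # connection refused: keep waiting
        if code in ok_codes:
            return True
        time.sleep(delay)
    return False
```

# As in the shell version, 404 is deliberately treated as success: the dev
# server is up even if the root URL has no route yet.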


@@ -1,296 +0,0 @@
# ThrillWiki Automation Service Environment Configuration
# Copy this file to thrillwiki-automation***REMOVED*** and customize for your environment
#
# Security Note: This file should have restricted permissions (600) as it may contain
# sensitive information like GitHub Personal Access Tokens
# ============================================================
# PROJECT CONFIGURATION
# ============================================================
# Base project directory (usually auto-detected)
# PROJECT_DIR=/home/ubuntu/thrillwiki
# Service name for systemd integration
# SERVICE_NAME=thrillwiki
# ============================================================
# GITHUB REPOSITORY CONFIGURATION
# ============================================================
# GitHub repository remote name
# GITHUB_REPO=origin
# Branch to pull from
# GITHUB_BRANCH=main
# GitHub Personal Access Token (PAT) - Required for private repositories
# Generate at: https://github.com/settings/tokens
# Required permissions: repo (Full control of private repositories)
# GITHUB_TOKEN=ghp_your_personal_access_token_here
# GitHub token file location (alternative to GITHUB_TOKEN)
# GITHUB_TOKEN_FILE=/home/ubuntu/thrillwiki/.github-pat
GITHUB_PAT_FILE=/home/ubuntu/thrillwiki/.github-pat
# ============================================================
# AUTOMATION TIMING CONFIGURATION
# ============================================================
# Repository pull interval in seconds (default: 300 = 5 minutes)
# PULL_INTERVAL=300
# Health check interval in seconds (default: 60 = 1 minute)
# HEALTH_CHECK_INTERVAL=60
# Server startup timeout in seconds (default: 120 = 2 minutes)
# STARTUP_TIMEOUT=120
# Restart delay after failure in seconds (default: 10)
# RESTART_DELAY=10
# ============================================================
# LOGGING CONFIGURATION
# ============================================================
# Log directory (default: project_dir/logs)
# LOG_DIR=/home/ubuntu/thrillwiki/logs
# Log file path
# LOG_[AWS-SECRET-REMOVED]proof-automation.log
# Maximum log file size in bytes (default: 10485760 = 10MB)
# MAX_LOG_SIZE=10485760
# Lock file location to prevent multiple instances
# LOCK_FILE=/tmp/thrillwiki-bulletproof.lock
# ============================================================
# DEVELOPMENT SERVER CONFIGURATION
# ============================================================
# Server host address (default: 0.0.0.0 for all interfaces)
# SERVER_HOST=0.0.0.0
# Server port (default: 8000)
# SERVER_PORT=8000
# ============================================================
# DEPLOYMENT CONFIGURATION
# ============================================================
# Deployment preset (dev, prod, demo, testing)
# DEPLOYMENT_PRESET=dev
# Repository URL for deployment
# GITHUB_REPO_URL=https://github.com/username/repository.git
# Repository branch for deployment
# GITHUB_REPO_BRANCH=main
# Enable Django project setup during deployment
# DJANGO_PROJECT_SETUP=true
# Skip GitHub authentication setup
# SKIP_GITHUB_SETUP=false
# Skip repository configuration
# SKIP_REPO_CONFIG=false
# Skip systemd service setup
# SKIP_SERVICE_SETUP=false
# Force deployment even if target exists
# FORCE_DEPLOY=false
# Remote deployment user
# REMOTE_USER=ubuntu
# Remote deployment host
# REMOTE_HOST=
# Remote deployment port
# REMOTE_PORT=22
# Remote deployment path
# REMOTE_PATH=/home/ubuntu/thrillwiki
# ============================================================
# DJANGO CONFIGURATION
# ============================================================
# Django settings module
# DJANGO_SETTINGS_MODULE=thrillwiki.settings
# Python path
# PYTHONPATH=/home/ubuntu/thrillwiki
# UV executable path (for systems where UV is not in standard PATH)
# UV_EXECUTABLE=/home/ubuntu/.local/bin/uv
# Django development server command (used by bulletproof automation)
# DJANGO_RUNSERVER_CMD=uv run manage.py tailwind runserver
# Enable development server auto-cleanup (kills processes on port 8000)
# AUTO_CLEANUP_PROCESSES=true
# ============================================================
# ADVANCED CONFIGURATION
# ============================================================
# GitHub authentication script location
# GITHUB_AUTH_[AWS-SECRET-REMOVED]ithub-auth.py
# Enable verbose logging (true/false)
# VERBOSE_LOGGING=false
# Enable debug mode for troubleshooting (true/false)
# DEBUG_MODE=false
# Custom git remote URL (overrides GITHUB_REPO if set)
# CUSTOM_GIT_REMOTE=https://github.com/username/repository.git
# Email notifications for critical failures (requires email configuration)
# NOTIFICATION_EMAIL=admin@example.com
# Maximum consecutive failures before alerting (default: 5)
# MAX_CONSECUTIVE_FAILURES=5
# Enable automatic dependency updates (true/false, default: true)
# AUTO_UPDATE_DEPENDENCIES=true
# Enable automatic migrations on code changes (true/false, default: true)
# AUTO_MIGRATE=true
# Enable automatic static file collection (true/false, default: true)
# AUTO_COLLECTSTATIC=true
# ============================================================
# SECURITY CONFIGURATION
# ============================================================
# GitHub authentication method (token|ssh|https)
# Default: token (uses GITHUB_TOKEN or GITHUB_TOKEN_FILE)
# GITHUB_AUTH_METHOD=token
# SSH key path for git operations (when using ssh auth method)
# SSH_KEY_PATH=/home/ubuntu/.ssh/***REMOVED***
# Git user configuration for commits
# GIT_USER_NAME="ThrillWiki Automation"
# GIT_USER_EMAIL="automation@thrillwiki.local"
# ============================================================
# MONITORING AND HEALTH CHECKS
# ============================================================
# Health check URL to verify server is running
# HEALTH_CHECK_URL=http://localhost:8000/health/
# Health check timeout in seconds
# HEALTH_CHECK_TIMEOUT=30
# Enable system resource monitoring (true/false)
# MONITOR_RESOURCES=true
# Memory usage threshold for warnings (in MB)
# MEMORY_WARNING_THRESHOLD=1024
# CPU usage threshold for warnings (percentage)
# CPU_WARNING_THRESHOLD=80
# Disk usage threshold for warnings (percentage)
# DISK_WARNING_THRESHOLD=90
# ============================================================
# INTEGRATION SETTINGS
# ============================================================
# Webhook integration (if using thrillwiki-webhook service)
# WEBHOOK_INTEGRATION=true
# Slack webhook URL for notifications (optional)
# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook/url
# Discord webhook URL for notifications (optional)
# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/your/webhook/url
# ============================================================
# ENVIRONMENT AND SYSTEM CONFIGURATION
# ============================================================
# System PATH additions (for UV and other tools)
# ADDITIONAL_PATH=/home/ubuntu/.local/bin:/home/ubuntu/.cargo/bin
# Python environment configuration
# PYTHON_EXECUTABLE=python3
# Service restart configuration
# MAX_RESTART_ATTEMPTS=3
# RESTART_COOLDOWN=300
# ============================================================
# USAGE EXAMPLES
# ============================================================
# Example 1: Basic setup with GitHub PAT
# GITHUB_TOKEN=ghp_your_token_here
# PULL_INTERVAL=300
# AUTO_MIGRATE=true
# Example 2: Enhanced monitoring setup
# HEALTH_CHECK_INTERVAL=30
# MONITOR_RESOURCES=true
# NOTIFICATION_EMAIL=admin@thrillwiki.com
# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook
# Example 3: Development environment with frequent pulls
# PULL_INTERVAL=60
# DEBUG_MODE=true
# VERBOSE_LOGGING=true
# AUTO_UPDATE_DEPENDENCIES=true
# ============================================================
# INSTALLATION NOTES
# ============================================================
# 1. Copy this file: cp thrillwiki-automation***REMOVED***.example thrillwiki-automation***REMOVED***
# 2. Set secure permissions: chmod 600 thrillwiki-automation***REMOVED***
# 3. Customize the settings above for your environment
# 4. Enable the service: sudo systemctl enable thrillwiki-automation
# 5. Start the service: sudo systemctl start thrillwiki-automation
# 6. Check status: sudo systemctl status thrillwiki-automation
# 7. View logs: sudo journalctl -u thrillwiki-automation -f
# For security, ensure only the ubuntu user can read this file:
# sudo chown ubuntu:ubuntu thrillwiki-automation***REMOVED***
# sudo chmod 600 thrillwiki-automation***REMOVED***
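# The chown/chmod steps above can be verified programmatically before the
# service starts. A small sketch assuming POSIX permissions (the function name
# is illustrative):

```python
import os
import stat

def env_file_is_protected(path):
    """True if `path` is a regular file readable/writable by its owner only
    (mode 600), as the installation notes recommend for EnvironmentFiles
    that may hold a GitHub PAT."""
    st = os.stat(path)
    mode = stat.S_IMODE(st.st_mode)
    # Any group/other permission bit set means the secret is exposed.
    return stat.S_ISREG(st.st_mode) and (mode & 0o077) == 0
```

# A deployment script could call this for each ***REMOVED*** file and refuse to start
# the service when it returns False.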


@@ -1,106 +0,0 @@
[Unit]
Description=ThrillWiki Bulletproof Development Automation
Documentation=man:thrillwiki-automation(8)
After=network.target
Wants=network.target
Before=thrillwiki.service
PartOf=thrillwiki.service
[Service]
Type=simple
User=ubuntu
Group=ubuntu
[AWS-SECRET-REMOVED]
[AWS-SECRET-REMOVED]s/vm/bulletproof-automation.sh
ExecStop=/bin/kill -TERM $MAINPID
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=10
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=60
TimeoutStartSec=120
StartLimitIntervalSec=300
StartLimitBurst=3
# Environment variables - Load from file for security
EnvironmentFile=-[AWS-SECRET-REMOVED]thrillwiki-automation***REMOVED***
Environment=PROJECT_DIR=/home/ubuntu/thrillwiki
Environment=SERVICE_NAME=thrillwiki-automation
Environment=GITHUB_REPO=origin
Environment=GITHUB_BRANCH=main
Environment=PULL_INTERVAL=300
Environment=HEALTH_CHECK_INTERVAL=60
Environment=STARTUP_TIMEOUT=120
Environment=RESTART_DELAY=10
Environment=LOG_DIR=/home/ubuntu/thrillwiki/logs
Environment=MAX_LOG_SIZE=10485760
Environment=SERVER_HOST=0.0.0.0
Environment=SERVER_PORT=8000
Environment=PATH=/home/ubuntu/.local/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin
[AWS-SECRET-REMOVED]llwiki
# Security settings - Enhanced hardening for automation script
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictSUIDSGID=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
MemoryDenyWriteExecute=false
RemoveIPC=true
# File system permissions - Allow access to necessary directories
ReadWritePaths=/home/ubuntu/thrillwiki
[AWS-SECRET-REMOVED]ogs
[AWS-SECRET-REMOVED]edia
[AWS-SECRET-REMOVED]taticfiles
[AWS-SECRET-REMOVED]ploads
ReadWritePaths=/home/ubuntu/.cache
ReadWritePaths=/tmp
ReadOnlyPaths=/home/ubuntu/.github-pat
ReadOnlyPaths=/home/ubuntu/.ssh
ReadOnlyPaths=/home/ubuntu/.local
# Resource limits - Appropriate for automation script
LimitNOFILE=65536
LimitNPROC=1024
MemoryMax=512M
CPUQuota=50%
TasksMax=256
# Timeouts
WatchdogSec=300
# Logging configuration
StandardOutput=journal
StandardError=journal
SyslogIdentifier=thrillwiki-automation
SyslogFacility=daemon
SyslogLevel=info
SyslogLevelPrefix=true
# Enhanced logging for debugging
# Ensure logs are captured and rotated properly
LogsDirectory=thrillwiki-automation
LogsDirectoryMode=0755
StateDirectory=thrillwiki-automation
StateDirectoryMode=0755
RuntimeDirectory=thrillwiki-automation
RuntimeDirectoryMode=0755
# Capabilities - Minimal required capabilities
CapabilityBoundingSet=
AmbientCapabilities=
PrivateDevices=true
ProtectClock=true
ProtectHostname=true
[Install]
WantedBy=multi-user.target
Also=thrillwiki.service


@@ -1,103 +0,0 @@
[Unit]
Description=ThrillWiki Complete Deployment Automation Service
Documentation=man:thrillwiki-deployment(8)
After=network.target network-online.target
Wants=network-online.target
Before=thrillwiki-smart-deploy.timer
PartOf=thrillwiki-smart-deploy.timer
[Service]
Type=simple
User=thrillwiki
Group=thrillwiki
[AWS-SECRET-REMOVED]wiki
[AWS-SECRET-REMOVED]ripts/vm/deploy-automation.sh
ExecStop=/bin/kill -TERM $MAINPID
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=30
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=120
TimeoutStartSec=180
StartLimitIntervalSec=600
StartLimitBurst=3
# Environment variables - Load from file for security and preset integration
EnvironmentFile=-[AWS-SECRET-REMOVED]emd/thrillwiki-deployment***REMOVED***
Environment=PROJECT_DIR=/home/thrillwiki/thrillwiki
Environment=SERVICE_NAME=thrillwiki-deployment
Environment=GITHUB_REPO=origin
Environment=GITHUB_BRANCH=main
Environment=DEPLOYMENT_MODE=automated
Environment=LOG_DIR=/home/thrillwiki/thrillwiki/logs
Environment=MAX_LOG_SIZE=10485760
Environment=SERVER_HOST=0.0.0.0
Environment=SERVER_PORT=8000
Environment=PATH=/home/thrillwiki/.local/bin:/home/thrillwiki/.cargo/bin:/usr/local/bin:/usr/bin:/bin
[AWS-SECRET-REMOVED]thrillwiki
# Security settings - Enhanced hardening for deployment automation
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictSUIDSGID=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
MemoryDenyWriteExecute=false
RemoveIPC=true
# File system permissions - Allow access to necessary directories
[AWS-SECRET-REMOVED]ki
[AWS-SECRET-REMOVED]ki/logs
[AWS-SECRET-REMOVED]ki/media
[AWS-SECRET-REMOVED]ki/staticfiles
[AWS-SECRET-REMOVED]ki/uploads
ReadWritePaths=/home/thrillwiki/.cache
ReadWritePaths=/tmp
ReadOnlyPaths=/home/thrillwiki/.github-pat
ReadOnlyPaths=/home/thrillwiki/.ssh
ReadOnlyPaths=/home/thrillwiki/.local
# Resource limits - Appropriate for deployment automation
LimitNOFILE=65536
LimitNPROC=2048
MemoryMax=1G
CPUQuota=75%
TasksMax=512
# Timeouts and watchdog
WatchdogSec=600
RuntimeMaxSec=0
# Logging configuration
StandardOutput=journal
StandardError=journal
SyslogIdentifier=thrillwiki-deployment
SyslogFacility=daemon
SyslogLevel=info
SyslogLevelPrefix=true
# Enhanced logging for debugging
LogsDirectory=thrillwiki-deployment
LogsDirectoryMode=0755
StateDirectory=thrillwiki-deployment
StateDirectoryMode=0755
RuntimeDirectory=thrillwiki-deployment
RuntimeDirectoryMode=0755
# Capabilities - Minimal required capabilities
CapabilityBoundingSet=
AmbientCapabilities=
PrivateDevices=true
ProtectClock=true
ProtectHostname=true
[Install]
WantedBy=multi-user.target
Also=thrillwiki-smart-deploy.timer


@@ -1,76 +0,0 @@
[Unit]
Description=ThrillWiki Smart Deployment Service
Documentation=man:thrillwiki-smart-deploy(8)
After=network.target thrillwiki-deployment.service
Wants=network.target
PartOf=thrillwiki-smart-deploy.timer
[Service]
Type=oneshot
User=thrillwiki
Group=thrillwiki
[AWS-SECRET-REMOVED]wiki
[AWS-SECRET-REMOVED]ripts/smart-deploy.sh
TimeoutStartSec=300
TimeoutStopSec=60
# Environment variables - Load from deployment configuration
EnvironmentFile=-[AWS-SECRET-REMOVED]emd/thrillwiki-deployment***REMOVED***
Environment=PROJECT_DIR=/home/thrillwiki/thrillwiki
Environment=SERVICE_NAME=thrillwiki-smart-deploy
Environment=DEPLOYMENT_MODE=timer
Environment=LOG_DIR=/home/thrillwiki/thrillwiki/logs
Environment=PATH=/home/thrillwiki/.local/bin:/home/thrillwiki/.cargo/bin:/usr/local/bin:/usr/bin:/bin
[AWS-SECRET-REMOVED]thrillwiki
# Security settings - Inherited from main deployment service
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictSUIDSGID=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
MemoryDenyWriteExecute=false
RemoveIPC=true
# File system permissions
[AWS-SECRET-REMOVED]ki
[AWS-SECRET-REMOVED]ki/logs
[AWS-SECRET-REMOVED]ki/media
[AWS-SECRET-REMOVED]ki/staticfiles
[AWS-SECRET-REMOVED]ki/uploads
ReadWritePaths=/home/thrillwiki/.cache
ReadWritePaths=/tmp
ReadOnlyPaths=/home/thrillwiki/.github-pat
ReadOnlyPaths=/home/thrillwiki/.ssh
ReadOnlyPaths=/home/thrillwiki/.local
# Resource limits
LimitNOFILE=65536
LimitNPROC=1024
MemoryMax=512M
CPUQuota=50%
TasksMax=256
# Logging configuration
StandardOutput=journal
StandardError=journal
SyslogIdentifier=thrillwiki-smart-deploy
SyslogFacility=daemon
SyslogLevel=info
SyslogLevelPrefix=true
# Capabilities
CapabilityBoundingSet=
AmbientCapabilities=
PrivateDevices=true
ProtectClock=true
ProtectHostname=true
[Install]
WantedBy=multi-user.target


@@ -1,17 +0,0 @@
[Unit]
Description=ThrillWiki Smart Deployment Timer
Documentation=man:thrillwiki-smart-deploy(8)
Requires=thrillwiki-smart-deploy.service
After=thrillwiki-deployment.service
[Timer]
# Default timer configuration (can be overridden by environment)
OnBootSec=5min
OnUnitActiveSec=5min
Unit=thrillwiki-smart-deploy.service
Persistent=true
RandomizedDelaySec=30sec
[Install]
WantedBy=timers.target
Also=thrillwiki-smart-deploy.service


@@ -1,39 +0,0 @@
[Unit]
Description=ThrillWiki GitHub Webhook Listener
After=network.target
Wants=network.target
[Service]
Type=simple
User=ubuntu
Group=ubuntu
[AWS-SECRET-REMOVED]
ExecStart=/usr/bin/python3 /home/ubuntu/thrillwiki/scripts/webhook-listener.py
Restart=always
RestartSec=10
# Environment variables
Environment=WEBHOOK_PORT=9000
Environment=WEBHOOK_SECRET=your_webhook_secret_here
Environment=VM_HOST=localhost
Environment=VM_PORT=22
Environment=VM_USER=ubuntu
Environment=VM_KEY_PATH=/home/ubuntu/.ssh/***REMOVED***
Environment=VM_PROJECT_PATH=/home/ubuntu/thrillwiki
Environment=REPO_URL=https://github.com/YOUR_USERNAME/thrillwiki_django_no_react.git
Environment=DEPLOY_BRANCH=main
# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
[AWS-SECRET-REMOVED]ogs
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=thrillwiki-webhook
[Install]
WantedBy=multi-user.target
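
# This unit hands WEBHOOK_SECRET to webhook-listener.py (removed in this
# commit). For reference, GitHub signs each delivery by sending the hex
# HMAC-SHA256 of the raw request body in the X-Hub-Signature-256 header; a
# listener validates it roughly like this sketch (function name illustrative):

```python
import hashlib
import hmac

def github_signature_ok(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Verify GitHub's X-Hub-Signature-256 header: the string 'sha256='
    followed by the hex HMAC-SHA256 of the raw body, keyed with the
    webhook secret configured on both sides."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(expected, signature_header)
```

# The comparison must use the raw request bytes exactly as received;
# re-serializing the JSON body before hashing will break verification.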


@@ -1,45 +0,0 @@
[Unit]
Description=ThrillWiki Django Application
After=network.target postgresql.service
Wants=network.target
Requires=postgresql.service
[Service]
Type=forking
User=ubuntu
Group=ubuntu
[AWS-SECRET-REMOVED]
[AWS-SECRET-REMOVED]s/ci-start.sh
ExecStop=/bin/kill -TERM $MAINPID
ExecReload=/bin/kill -HUP $MAINPID
[AWS-SECRET-REMOVED]ngo.pid
Restart=always
RestartSec=10
# Environment variables
Environment=DJANGO_SETTINGS_MODULE=thrillwiki.settings
[AWS-SECRET-REMOVED]llwiki
Environment=PATH=/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin
# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
[AWS-SECRET-REMOVED]ogs
[AWS-SECRET-REMOVED]edia
[AWS-SECRET-REMOVED]taticfiles
[AWS-SECRET-REMOVED]ploads
# Resource limits
LimitNOFILE=65536
TimeoutStartSec=300
TimeoutStopSec=30
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=thrillwiki
[Install]
WantedBy=multi-user.target


@@ -1,175 +0,0 @@
#!/bin/bash
# ThrillWiki Automation Test Script
# This script validates all automation components without actually running them
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log() {
echo -e "${BLUE}[TEST]${NC} $1"
}
log_success() {
echo -e "${GREEN}[✓]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[!]${NC} $1"
}
log_error() {
echo -e "${RED}[✗]${NC} $1"
}
# Test counters
TESTS_PASSED=0
TESTS_FAILED=0
TESTS_TOTAL=0
test_case() {
local name="$1"
local command="$2"
# Plain assignments instead of ((x++)): under `set -e`, ((x++)) returns
# exit status 1 when x is 0 and would abort the whole script on the first test.
TESTS_TOTAL=$((TESTS_TOTAL + 1))
log "Testing: $name"
if eval "$command" >/dev/null 2>&1; then
log_success "$name"
TESTS_PASSED=$((TESTS_PASSED + 1))
else
log_error "$name"
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
}
test_case_with_output() {
local name="$1"
local command="$2"
local expected_pattern="$3"
TESTS_TOTAL=$((TESTS_TOTAL + 1))
log "Testing: $name"
local output
if output=$(eval "$command" 2>&1); then
if [[ -n "$expected_pattern" && ! "$output" =~ $expected_pattern ]]; then
log_error "$name (unexpected output)"
TESTS_FAILED=$((TESTS_FAILED + 1))
else
log_success "$name"
TESTS_PASSED=$((TESTS_PASSED + 1))
fi
else
log_error "$name (command failed)"
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
}
log "🧪 Starting ThrillWiki Automation Tests"
echo "======================================"
# Test 1: File Permissions
log "\n📁 Testing File Permissions..."
test_case "CI start script is executable" "[ -x scripts/ci-start.sh ]"
test_case "VM deploy script is executable" "[ -x scripts/vm-deploy.sh ]"
test_case "Webhook listener is executable" "[ -x scripts/webhook-listener.py ]"
test_case "VM manager is executable" "[ -x scripts/unraid/vm-manager.py ]"
test_case "Complete automation script is executable" "[ -x scripts/unraid/setup-complete-automation.sh ]"
# Test 2: Script Syntax
log "\n🔍 Testing Script Syntax..."
test_case "CI start script syntax" "bash -n scripts/ci-start.sh"
test_case "VM deploy script syntax" "bash -n scripts/vm-deploy.sh"
test_case "Setup VM CI script syntax" "bash -n scripts/setup-vm-ci.sh"
test_case "Complete automation script syntax" "bash -n scripts/unraid/setup-complete-automation.sh"
test_case "Webhook listener Python syntax" "python3 -m py_compile scripts/webhook-listener.py"
test_case "VM manager Python syntax" "python3 -m py_compile scripts/unraid/vm-manager.py"
# Test 3: Help Functions
log "\n❓ Testing Help Functions..."
test_case_with_output "VM manager help" "python3 scripts/unraid/vm-manager.py --help" "usage:"
test_case_with_output "Webhook listener help" "python3 scripts/webhook-listener.py --help" "usage:"
test_case_with_output "VM deploy script usage" "scripts/vm-deploy.sh invalid 2>&1 || true" "Usage:" # script exits non-zero on bad args; || true lets the output check run
# Test 4: Configuration Validation
log "\n⚙ Testing Configuration Validation..."
test_case_with_output "Webhook listener test mode" "python3 scripts/webhook-listener.py --test" "Configuration validation"
# Test 5: Directory Structure
log "\n📂 Testing Directory Structure..."
test_case "Scripts directory exists" "[ -d scripts ]"
test_case "Unraid scripts directory exists" "[ -d scripts/unraid ]"
test_case "Systemd directory exists" "[ -d scripts/systemd ]"
test_case "Docs directory exists" "[ -d docs ]"
test_case "Logs directory created" "[ -d logs ]"
# Test 6: Required Files
log "\n📄 Testing Required Files..."
test_case "ThrillWiki service file exists" "[ -f scripts/systemd/thrillwiki.service ]"
test_case "Webhook service file exists" "[ -f scripts/systemd/thrillwiki-webhook.service ]"
test_case "VM deployment setup doc exists" "[ -f docs/VM_DEPLOYMENT_SETUP.md ]"
test_case "Unraid automation doc exists" "[ -f docs/UNRAID_COMPLETE_AUTOMATION.md ]"
test_case "CI README exists" "[ -f CI_README.md ]"
# Test 7: Python Dependencies
log "\n🐍 Testing Python Dependencies..."
test_case "Python 3 available" "command -v python3"
test_case "Requests module available" "python3 -c 'import requests'"
test_case "JSON module available" "python3 -c 'import json'"
test_case "OS module available" "python3 -c 'import os'"
test_case "Subprocess module available" "python3 -c 'import subprocess'"
# Test 8: System Dependencies
log "\n🔧 Testing System Dependencies..."
test_case "SSH command available" "command -v ssh"
test_case "SCP command available" "command -v scp"
test_case "Bash available" "command -v bash"
test_case "Git available" "command -v git"
# Test 9: UV Package Manager
log "\n📦 Testing UV Package Manager..."
if command -v uv >/dev/null 2>&1; then
log_success "UV package manager is available"
((TESTS_PASSED++))
test_case "UV version check" "uv --version"
else
log_warning "UV package manager not found (will be installed during setup)"
((TESTS_PASSED++))
fi
((TESTS_TOTAL++))
# Test 10: Django Project Structure
log "\n🌟 Testing Django Project Structure..."
test_case "Django manage.py exists" "[ -f manage.py ]"
test_case "Django settings module exists" "[ -f thrillwiki/settings.py ]"
test_case "PyProject.toml exists" "[ -f pyproject.toml ]"
# Final Results
echo
log "📊 Test Results Summary"
echo "======================"
echo "Total Tests: $TESTS_TOTAL"
echo "Passed: $TESTS_PASSED"
echo "Failed: $TESTS_FAILED"
if [ $TESTS_FAILED -eq 0 ]; then
echo
log_success "🎉 All tests passed! The automation system is ready."
echo
log "Next steps:"
echo "1. For complete automation: ./scripts/unraid/setup-complete-automation.sh"
echo "2. For manual setup: ./scripts/setup-vm-ci.sh"
echo "3. Read documentation: docs/UNRAID_COMPLETE_AUTOMATION.md"
exit 0
else
echo
log_error "❌ Some tests failed. Please check the issues above."
exit 1
fi

View File

@@ -1,10 +0,0 @@
{
"permissions": {
"additionalDirectories": [
"/Users/talor/thrillwiki_django_no_react"
],
"allow": [
"Bash(uv run:*)"
]
}
}

View File

@@ -1,150 +0,0 @@
# Non-Interactive Mode for ThrillWiki Automation
The ThrillWiki automation script supports a non-interactive mode (`-y` flag) that allows you to run the entire setup process without any user prompts. This is perfect for:
- **CI/CD pipelines**
- **Automated deployments**
- **Scripted environments**
- **Remote execution**
## Prerequisites
1. **Saved Configuration**: You must have run the script interactively at least once to create the saved configuration file (`.thrillwiki-config`).
2. **Environment Variables**: Set the required environment variables for sensitive credentials that aren't saved to disk.
## Required Environment Variables
### Always Required
- `UNRAID_PASSWORD` - Your Unraid server password
### Required if GitHub API is enabled
- `GITHUB_TOKEN` - Your GitHub personal access token (if using token auth method)
### Required if Webhooks are enabled
- `WEBHOOK_SECRET` - Your GitHub webhook secret
## Usage Examples
### Basic Non-Interactive Setup
```bash
# Set required credentials
export UNRAID_PASSWORD="your_unraid_password"
export GITHUB_TOKEN="your_github_token"
export WEBHOOK_SECRET="your_webhook_secret"
# Run in non-interactive mode
./setup-complete-automation.sh -y
```
### CI/CD Pipeline Example
```bash
#!/bin/bash
set -e
# Load credentials from secure environment
export UNRAID_PASSWORD="$UNRAID_CREDS_PASSWORD"
export GITHUB_TOKEN="$GITHUB_API_TOKEN"
export WEBHOOK_SECRET="$WEBHOOK_SECRET_KEY"
# Deploy with no user interaction
cd scripts/unraid
./setup-complete-automation.sh -y
```
### Docker/Container Example
```bash
# Run from container with environment file
docker run --env-file ***REMOVED***.secrets \
-v $(pwd):/workspace \
your-automation-container \
/workspace/scripts/unraid/setup-complete-automation.sh -y
```
## Error Handling
The script will exit with clear error messages if:
- No saved configuration is found
- Required environment variables are missing
- OAuth tokens have expired (non-interactive mode cannot refresh them)
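The failure modes above boil down to a couple of preflight checks. A minimal sketch of that logic (the config filename, variable handling, and messages are assumptions, not the script's actual code):

```shell
# Hypothetical preflight sketch for non-interactive mode.
preflight() {
    local config_file="$1"
    # Check 1: a saved configuration must already exist.
    if [ ! -f "$config_file" ]; then
        echo "[ERROR] No saved configuration found. Cannot run in non-interactive mode."
        return 1
    fi
    # Check 2: the Unraid password must come from the environment.
    if [ -z "${UNRAID_PASSWORD:-}" ]; then
        echo "[ERROR] UNRAID_PASSWORD environment variable not set."
        return 1
    fi
    echo "[OK] Preflight passed"
}

# Demonstration against a missing config file:
preflight /tmp/definitely-missing-config || true
```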
### Common Issues
**❌ No saved configuration**
```
[ERROR] No saved configuration found. Cannot run in non-interactive mode.
[ERROR] Please run the script without -y flag first to create initial configuration.
```
**Solution**: Run `./setup-complete-automation.sh` interactively first.
**❌ Missing password**
```
[ERROR] UNRAID_PASSWORD environment variable not set.
[ERROR] For non-interactive mode, set: export UNRAID_PASSWORD='your_password'
```
**Solution**: Set the `UNRAID_PASSWORD` environment variable.
**❌ Expired OAuth token**
```
[ERROR] OAuth token expired and cannot refresh in non-interactive mode
[ERROR] Please run without -y flag to re-authenticate with GitHub
```
**Solution**: Run interactively to refresh OAuth token, or switch to personal access token method.
## Security Best Practices
1. **Never commit credentials to version control**
2. **Use secure environment variable storage** (CI/CD secret stores, etc.)
3. **Rotate credentials regularly**
4. **Use minimal required permissions** for tokens
5. **Clear environment variables** after use if needed:
```bash
unset UNRAID_PASSWORD GITHUB_TOKEN WEBHOOK_SECRET
```
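When a temporary secrets file is unavoidable, a restrictive `umask` keeps it owner-only from the moment it is created (the filename below is an example):

```shell
umask 077   # new files are created mode 600 (owner read/write only)
printf 'UNRAID_PASSWORD=%s\n' "example-password" > /tmp/tw.secrets
# Portable permission check (GNU stat first, then BSD/macOS stat):
stat -c '%a' /tmp/tw.secrets 2>/dev/null || stat -f '%Lp' /tmp/tw.secrets
rm /tmp/tw.secrets
```

This prints `600`, confirming no group or world access.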
## Advanced Usage
### Combining with Reset Modes
```bash
# Reset VM only and redeploy non-interactively
export UNRAID_PASSWORD="password"
./setup-complete-automation.sh --reset-vm -y
```
### Using with Different Authentication Methods
```bash
# For OAuth method (no GITHUB_TOKEN needed if valid)
export UNRAID_PASSWORD="password"
export WEBHOOK_SECRET="secret"
./setup-complete-automation.sh -y
# For personal access token method
export UNRAID_PASSWORD="password"
export GITHUB_TOKEN="ghp_xxxx"
export WEBHOOK_SECRET="secret"
./setup-complete-automation.sh -y
```
### Environment File Pattern
```bash
# Create ***REMOVED***.automation (don't commit this!)
cat > ***REMOVED***.automation << EOF
UNRAID_PASSWORD=your_password_here
GITHUB_TOKEN=your_token_here
WEBHOOK_SECRET=your_secret_here
EOF
# Use it (set -a auto-exports the sourced variables so the script's
# child processes can see them; a plain `source` alone would not export them)
set -a
source ***REMOVED***.automation
set +a
./setup-complete-automation.sh -y
# Clean up
rm ***REMOVED***.automation
```
## Integration Examples
See `example-non-interactive.sh` for a complete working example that you can customize for your needs.
The non-interactive mode makes it easy to integrate ThrillWiki deployment into your existing automation workflows while maintaining security and reliability.

View File

@@ -1,385 +0,0 @@
# ThrillWiki Template-Based VM Deployment
This guide explains how to use the new **template-based VM deployment** system that dramatically speeds up VM creation by using a pre-configured Ubuntu template instead of autoinstall ISOs.
## Overview
### Traditional Approach (Slow)
- Create autoinstall ISO from scratch
- Boot VM from ISO (20-30 minutes)
- Wait for Ubuntu installation
- Configure system packages and dependencies
### Template Approach (Fast ⚡)
- Copy pre-configured VM disk from template
- Boot VM from template disk (2-5 minutes)
- System is already configured with Ubuntu, packages, and dependencies
## Prerequisites
1. **Template VM**: You must have a VM named `thrillwiki-template-ubuntu` on your Unraid server
2. **Template Configuration**: The template should be pre-configured with:
- Ubuntu 24.04 LTS
- Python 3, Git, PostgreSQL, Nginx
- UV package manager (optional but recommended)
- Basic system configuration
## Template VM Setup
### Creating the Template VM
1. **Create the template VM manually** on your Unraid server:
- Name: `thrillwiki-template-ubuntu`
- Install Ubuntu 24.04 LTS
- Configure with 4GB RAM, 2 vCPUs (can be adjusted later)
2. **Configure the template** by SSH'ing into it and running:
```bash
# Update system
sudo apt update && sudo apt upgrade -y
# Install required packages
sudo apt install -y git curl build-essential python3-pip python3-venv
sudo apt install -y postgresql postgresql-contrib nginx
# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.cargo/env
# Create thrillwiki user with password 'thrillwiki'
sudo useradd -m -s /bin/bash thrillwiki || true
echo 'thrillwiki:thrillwiki' | sudo chpasswd
sudo usermod -aG sudo thrillwiki
# Setup SSH key for thrillwiki user
# First, generate your SSH key on your Mac:
# ssh-keygen -t rsa -b 4096 -f ~/.ssh/thrillwiki_vm -N "" -C "thrillwiki-template-vm-access"
# Then copy the public key to the template VM:
sudo mkdir -p /home/thrillwiki/.ssh
echo "YOUR_PUBLIC_KEY_FROM_~/.ssh/thrillwiki_vm.pub" | sudo tee /home/thrillwiki/.ssh/***REMOVED***
sudo chown -R thrillwiki:thrillwiki /home/thrillwiki/.ssh
sudo chmod 700 /home/thrillwiki/.ssh
sudo chmod 600 /home/thrillwiki/.ssh/***REMOVED***
# Configure PostgreSQL
sudo systemctl enable postgresql
sudo systemctl start postgresql
# Configure Nginx
sudo systemctl enable nginx
# Clean up for template
sudo apt autoremove -y
sudo apt autoclean
history -c && history -w
# Shutdown template
sudo shutdown now
```
3. **Verify template** is stopped and ready:
```bash
./template-utils.sh status # Should show "shut off"
```
## Quick Start
### Step 0: Set Up SSH Key (First Time Only)
**IMPORTANT**: Before using template deployment, set up your SSH key:
```bash
# Generate and configure SSH key
./scripts/unraid/setup-ssh-key.sh
# Follow the instructions to add the public key to your template VM
```
See `TEMPLATE_VM_SETUP.md` for complete template VM setup instructions.
### Using the Utility Script
The easiest way to work with template VMs is using the utility script:
```bash
# Check if template is ready
./template-utils.sh check
# Get template information
./template-utils.sh info
# Deploy a new VM from template
./template-utils.sh deploy my-thrillwiki-vm
# Copy template to new VM (without full deployment)
./template-utils.sh copy my-vm-name
# List all template-based VMs
./template-utils.sh list
```
### Using Python Scripts Directly
For more control, use the Python scripts:
```bash
# Set environment variables
export UNRAID_HOST="your.unraid.server.ip"
export UNRAID_USER="root"
export VM_NAME="my-thrillwiki-vm"
export REPO_URL="owner/repository-name"
# Deploy VM from template
python3 main_template.py deploy
# Just create VM without ThrillWiki setup
python3 main_template.py setup
# Get VM status and IP
python3 main_template.py status
python3 main_template.py ip
# Manage template
python3 main_template.py template info
python3 main_template.py template check
```
## File Structure
### New Template-Based Files
```
scripts/unraid/
├── template_manager.py # Template VM management
├── vm_manager_template.py # Template-based VM manager
├── main_template.py # Template deployment orchestrator
├── template-utils.sh # Quick utility commands
├── deploy-thrillwiki-template.sh # Optimized deployment script
├── thrillwiki-vm-template-simple.xml # VM XML without autoinstall ISO
└── README-template-deployment.md # This documentation
```
### Original Files (Still Available)
```
scripts/unraid/
├── main.py # Original autoinstall approach
├── vm_manager.py # Original VM manager
├── deploy-thrillwiki.sh # Original deployment script
└── thrillwiki-vm-template.xml # Original XML with autoinstall
```
## Commands Reference
### Template Management
```bash
# Check template status
./template-utils.sh status
python3 template_manager.py check
# Get template information
./template-utils.sh info
python3 template_manager.py info
# List VMs created from template
./template-utils.sh list
python3 template_manager.py list
# Update template instructions
./template-utils.sh update
python3 template_manager.py update
```
### VM Deployment
```bash
# Complete deployment (VM + ThrillWiki)
./template-utils.sh deploy VM_NAME
python3 main_template.py deploy
# VM setup only
python3 main_template.py setup
# Individual operations
python3 main_template.py create
python3 main_template.py start
python3 main_template.py stop
python3 main_template.py delete
```
### VM Information
```bash
# Get VM status
python3 main_template.py status
# Get VM IP and connection info
python3 main_template.py ip
# Get detailed VM information
python3 main_template.py info
```
## Environment Variables
Configure these in your `***REMOVED***.unraid` file or export them:
```bash
# Required
UNRAID_HOST="192.168.1.100" # Your Unraid server IP
UNRAID_USER="root" # Unraid SSH user
VM_NAME="thrillwiki-vm" # Name for new VM
# Optional VM Configuration
VM_MEMORY="4096" # Memory in MB
VM_VCPUS="2" # Number of vCPUs
VM_DISK_SIZE="50" # Disk size in GB (for reference)
VM_IP="dhcp" # IP configuration (dhcp or static IP)
# ThrillWiki Configuration
REPO_URL="owner/repository-name" # GitHub repository
GITHUB_TOKEN="ghp_xxxxx" # GitHub token (optional)
```
## Advantages of Template Approach
### Speed ⚡
- **VM Creation**: 2-5 minutes vs 20-30 minutes
- **Boot Time**: Instant boot vs full Ubuntu installation
- **Total Deployment**: ~10 minutes vs ~45 minutes
### Reliability 🔒
- **Pre-tested**: Template is already configured and tested
- **Consistent**: All VMs start from identical base
- **No Installation Failures**: No autoinstall ISO issues
### Efficiency 💾
- **Disk Space**: Copy-on-write QCOW2 format
- **Network**: No ISO downloads during deployment
- **Resources**: Less CPU usage during creation
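The disk-space point relies on QCOW2 backing files: a new disk can be a thin overlay that stores only its differences from the template. Whether these scripts perform a full copy or an overlay is not shown here, so the sketch below only prints the overlay command for review; both paths are assumptions about the Unraid domains share layout:

```shell
# Print (not run) the command that would create a copy-on-write overlay.
# TEMPLATE and CLONE paths are assumed Unraid conventions.
TEMPLATE=/mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2
CLONE=/mnt/user/domains/my-thrillwiki-vm/vdisk1.qcow2
echo qemu-img create -f qcow2 -F qcow2 -b "$TEMPLATE" "$CLONE"
```

If you run the printed command on the Unraid host, the clone reads unchanged blocks from the template, which is one more reason the template disk must stay intact (and the template VM stopped).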
## Troubleshooting
### Template Not Found
```
❌ Template VM disk not found at: /mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2
```
**Solution**: Create the template VM first or verify the path.
### Template VM Running
```
⚠️ Template VM is currently running!
```
**Solution**: Stop the template VM before creating new instances:
```bash
ssh root@unraid-host "virsh shutdown thrillwiki-template-ubuntu"
```
### SSH Connection Issues
```
❌ Cannot connect to Unraid server
```
**Solutions**:
1. Verify `UNRAID_HOST` is correct
2. Ensure SSH key authentication is set up
3. Check network connectivity
### Template Disk Corruption
If template VM gets corrupted:
1. Start template VM and fix issues
2. Or recreate template VM from scratch
3. Update template: `./template-utils.sh update`
## Template Maintenance
### Updating the Template
Periodically update your template:
1. **Start template VM** on Unraid
2. **SSH into template** and update:
```bash
sudo apt update && sudo apt upgrade -y
sudo apt autoremove -y && sudo apt autoclean
# Update UV if installed
~/.cargo/bin/uv --version
# Clear history
history -c && history -w
```
3. **Shutdown template VM**
4. **Verify update**: `./template-utils.sh check`
### Template Best Practices
- Keep template VM stopped when not maintaining it
- Update template monthly or before major deployments
- Test template by creating a test VM before important deployments
- Document any custom configurations in the template
## Migration Guide
### From Autoinstall to Template
1. **Create your template VM** following the setup guide above
2. **Test template deployment**:
```bash
./template-utils.sh deploy test-vm
```
3. **Update your automation scripts** to use template approach
4. **Keep autoinstall scripts** as backup for special cases
### Switching Between Approaches
You can use both approaches as needed:
```bash
# Template-based (fast)
python3 main_template.py deploy
# Autoinstall-based (traditional)
python3 main.py setup
```
## Integration with CI/CD
The template approach integrates perfectly with your existing CI/CD:
```bash
# In your automation scripts
export UNRAID_HOST="your-server"
export VM_NAME="thrillwiki-$(date +%s)"
export REPO_URL="your-org/thrillwiki"
# Deploy quickly
./scripts/unraid/template-utils.sh deploy "$VM_NAME"
# VM is ready in minutes instead of 30+ minutes
```
## FAQ
**Q: Can I use both template and autoinstall approaches?**
A: Yes! Keep both. Use template for speed, autoinstall for special configurations.
**Q: How much disk space does template copying use?**
A: QCOW2 copy-on-write format means copies only store differences, saving space.
**Q: What if I need different Ubuntu versions?**
A: Create multiple template VMs (e.g., `thrillwiki-template-ubuntu-22`, `thrillwiki-template-ubuntu-24`).
**Q: Can I customize the template VM configuration?**
A: Yes! The template VM is just a regular VM. Customize it as needed.
**Q: Is this approach secure?**
A: Yes. Each VM gets a fresh copy and can be configured independently.
---
This template-based approach should make your VM deployments much faster and more reliable! 🚀

View File

@@ -1,131 +0,0 @@
# ThrillWiki Unraid VM Automation
This directory contains scripts and configuration files for automating the creation and deployment of ThrillWiki VMs on Unraid servers using Ubuntu autoinstall.
## Files
- **`vm-manager.py`** - Main VM management script with direct kernel boot support
- **`thrillwiki-vm-template.xml`** - VM XML configuration template for libvirt
- **`cloud-init-template.yaml`** - Ubuntu autoinstall configuration template
- **`validate-autoinstall.py`** - Validation script for autoinstall configuration
## Key Features
### Direct Kernel Boot Approach
The system now uses direct kernel boot instead of GRUB-based boot for maximum reliability:
1. **Kernel Extraction**: Automatically extracts Ubuntu kernel and initrd files from the ISO
2. **Direct Boot**: VM boots directly using extracted kernel with explicit autoinstall parameters
3. **Reliable Autoinstall**: Kernel cmdline explicitly specifies `autoinstall ds=nocloud-net;s=cdrom:/`
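In libvirt terms, direct kernel boot means the domain XML carries explicit `<kernel>`, `<initrd>`, and `<cmdline>` elements instead of relying on the guest bootloader. A fragment of the generated configuration would look roughly like this (paths and machine type are illustrative, not the exact template contents):

```xml
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
  <kernel>/mnt/user/domains/thrillwiki-vm/vmlinuz</kernel>
  <initrd>/mnt/user/domains/thrillwiki-vm/initrd</initrd>
  <cmdline>autoinstall ds=nocloud-net;s=cdrom:/</cmdline>
</os>
```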
### Schema-Compliant Configuration
The autoinstall configuration has been validated against Ubuntu's official schema:
- ✅ Proper network configuration structure
- ✅ Correct storage layout specification
- ✅ Valid shutdown configuration
- ✅ Schema-compliant field types and values
## Usage
### Environment Variables
Set these environment variables before running:
```bash
export UNRAID_HOST="your-unraid-server"
export UNRAID_USER="root"
export UNRAID_PASSWORD="your-password"
export SSH_PUBLIC_KEY="your-ssh-public-key"
export REPO_URL="https://github.com/your-username/thrillwiki.git"
export VM_IP="192.168.20.20" # or "dhcp" for DHCP
export VM_GATEWAY="192.168.20.1"
```
### Basic Operations
```bash
# Create and configure VM
./vm-manager.py create
# Start the VM
./vm-manager.py start
# Check VM status
./vm-manager.py status
# Get VM IP address
./vm-manager.py ip
# Complete setup (create + start + get IP)
./vm-manager.py setup
# Stop the VM
./vm-manager.py stop
# Delete VM and all files
./vm-manager.py delete
```
### Configuration Validation
```bash
# Validate autoinstall configuration
./validate-autoinstall.py
```
## How It Works
### VM Creation Process
1. **Extract Kernel**: Mount Ubuntu ISO and extract `vmlinuz` and `initrd` from `/casper/`
2. **Create Cloud-Init ISO**: Generate configuration ISO with autoinstall settings
3. **Generate VM XML**: Create libvirt VM configuration with direct kernel boot
4. **Define VM**: Register VM as persistent domain in libvirt
### Boot Process
1. **Direct Kernel Boot**: VM starts using extracted kernel and initrd directly
2. **Autoinstall Trigger**: Kernel cmdline forces Ubuntu installer into autoinstall mode
3. **Cloud-Init Data**: NoCloud datasource provides configuration from CD-ROM
4. **Automated Setup**: Ubuntu installs and configures ThrillWiki automatically
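The cloud-init CD-ROM used in step 3 is a small ISO whose root holds the two NoCloud seed files (layout assumed; the volume label is conventionally `cidata`):

```
cloud-init.iso
├── user-data    # rendered from cloud-init-template.yaml (#cloud-config autoinstall)
└── meta-data    # instance-id and local hostname
```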
### Network Configuration
The system supports both static IP and DHCP configurations:
- **Static IP**: Set `VM_IP` to desired IP address (e.g., "192.168.20.20")
- **DHCP**: Set `VM_IP` to "dhcp" for automatic IP assignment
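For the static case, the network section rendered into the autoinstall config would look roughly like this (interface name, addresses, and DNS server are illustrative):

```yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 192.168.20.20/24
      routes:
        - to: default
          via: 192.168.20.1
      nameservers:
        addresses: [192.168.20.1]
```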
## Troubleshooting
### VM Console Access
Connect to VM console to monitor autoinstall progress:
```bash
ssh root@unraid-server
virsh console thrillwiki-vm
```
### Check VM Logs
View autoinstall logs inside the VM:
```bash
# After VM is accessible
ssh ubuntu@vm-ip
sudo journalctl -u cloud-init
tail -f /var/log/cloud-init.log
```
### Validation Errors
If autoinstall validation fails, check:
1. YAML syntax in `cloud-init-template.yaml`
2. Required fields according to Ubuntu schema
3. Proper data types for configuration values
## Architecture Benefits
1. **Reliable Boot**: Direct kernel boot eliminates GRUB-related issues
2. **Schema Compliance**: Configuration validated against official Ubuntu schema
3. **Predictable Behavior**: Explicit kernel parameters ensure consistent autoinstall
4. **Clean Separation**: VM configuration, cloud-init, and kernel files are properly organized
5. **Easy Maintenance**: Modular design allows independent updates of components
This implementation provides a robust, schema-compliant solution for automated ThrillWiki deployment on Unraid VMs.

View File

@@ -1,245 +0,0 @@
# Template VM Setup Instructions
## Prerequisites for Template-Based Deployment
Before using the template-based deployment system, you need to:
1. **Create the template VM** named `thrillwiki-template-ubuntu` on your Unraid server
2. **Configure SSH access** with your public key
3. **Set up the template** with all required software
## Step 1: Create Template VM on Unraid
1. Create a new VM on your Unraid server:
- **Name**: `thrillwiki-template-ubuntu`
- **OS**: Ubuntu 24.04 LTS
- **Memory**: 4GB (you can adjust this later for instances)
- **vCPUs**: 2 (you can adjust this later for instances)
- **Disk**: 50GB (sufficient for template)
2. Install Ubuntu 24.04 LTS using standard installation
## Step 2: Configure Template VM
SSH into your template VM and run the following setup:
### Create thrillwiki User
```bash
# Create the thrillwiki user with password 'thrillwiki'
sudo useradd -m -s /bin/bash thrillwiki
echo 'thrillwiki:thrillwiki' | sudo chpasswd
sudo usermod -aG sudo thrillwiki
# Switch to thrillwiki user for remaining setup
sudo su - thrillwiki
```
### Set Up SSH Access
**IMPORTANT**: Add your SSH public key to the template VM:
```bash
# Create .ssh directory
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Add your public key (replace with your actual public key)
echo "YOUR_PUBLIC_KEY_HERE" >> ~/.ssh/***REMOVED***
chmod 600 ~/.ssh/***REMOVED***
```
**To get your public key** (run this on your Mac):
```bash
# Generate key if it doesn't exist
if [ ! -f ~/.ssh/thrillwiki_vm ]; then
ssh-keygen -t rsa -b 4096 -f ~/.ssh/thrillwiki_vm -N "" -C "thrillwiki-template-vm-access"
fi
# Show your public key to copy
cat ~/.ssh/thrillwiki_vm.pub
```
Copy this public key and paste it into the template VM's ***REMOVED*** file.
### Install Required Software
```bash
# Update system
sudo apt update && sudo apt upgrade -y
# Install essential packages
sudo apt install -y \
git curl wget build-essential \
python3 python3-pip python3-venv python3-dev \
postgresql postgresql-contrib postgresql-client \
nginx \
htop tree vim nano \
software-properties-common
# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.cargo/env
# Add UV to PATH permanently
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
# Configure PostgreSQL
sudo systemctl enable postgresql
sudo systemctl start postgresql
# Create database user and database
sudo -u postgres createuser thrillwiki
sudo -u postgres createdb thrillwiki
sudo -u postgres psql -c "ALTER USER thrillwiki WITH PASSWORD 'thrillwiki';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki TO thrillwiki;"
# Configure Nginx
sudo systemctl enable nginx
# Create ThrillWiki directories
mkdir -p ~/thrillwiki ~/logs ~/backups
# Set up basic environment
echo "export DJANGO_SETTINGS_MODULE=thrillwiki.settings" >> ~/.bashrc
echo "export DATABASE_URL=[DATABASE-URL-REMOVED] >> ~/.bashrc
```
### Pre-install Common Python Packages (Optional)
```bash
# Create a base virtual environment with common packages
cd ~
python3 -m venv base_venv
source base_venv/bin/activate
pip install --upgrade pip
# Install common Django packages
pip install \
django \
psycopg2-binary \
gunicorn \
whitenoise \
python-decouple \
pillow \
requests
deactivate
```
### Clean Up Template
```bash
# Clean package cache
sudo apt autoremove -y
sudo apt autoclean
# Clear bash history
history -c
history -w
# Clear any temporary files
sudo find /tmp -type f -delete
sudo find /var/tmp -type f -delete
# Shutdown the template VM
sudo shutdown now
```
## Step 3: Verify Template Setup
After the template VM shuts down, verify it's ready:
```bash
# From your Mac, check the template
cd /path/to/your/thrillwiki/project
./scripts/unraid/template-utils.sh check
```
## Step 4: Test Template Deployment
Create a test VM from the template:
```bash
# Deploy a test VM
./scripts/unraid/template-utils.sh deploy test-thrillwiki-vm
# Check if it worked
ssh thrillwiki@<VM_IP> "echo 'Template VM working!'"
```
## Template VM Configuration Summary
Your template VM should now have:
- **Username**: `thrillwiki` (password: `thrillwiki`)
- **SSH Access**: Your public key in `/home/thrillwiki/.ssh/***REMOVED***`
- **Python**: Python 3 with UV package manager
- **Database**: PostgreSQL with `thrillwiki` user and database
- **Web Server**: Nginx installed and enabled
- **Directories**: `~/thrillwiki`, `~/logs`, `~/backups` ready
## SSH Configuration on Your Mac
The automation scripts will set this up, but you can also configure manually:
```bash
# Add to ~/.ssh/config
cat >> ~/.ssh/config << EOF
# ThrillWiki Template VM
# Note: replace %h with the VM's actual IP address; as written, %h just
# expands back to the alias "thrillwiki-vm", which may not resolve.
Host thrillwiki-vm
HostName %h
User thrillwiki
IdentityFile ~/.ssh/thrillwiki_vm
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOF
```
## Next Steps
Once your template is set up:
1. **Run the automation setup**:
```bash
./scripts/unraid/setup-template-automation.sh
```
2. **Deploy VMs quickly**:
```bash
./scripts/unraid/template-utils.sh deploy my-vm-name
```
3. **Enjoy 5-10x faster deployments** (2-5 minutes instead of 20-30 minutes!)
## Troubleshooting
### SSH Access Issues
```bash
# Test SSH access to template (when it's running for updates)
ssh -i ~/.ssh/thrillwiki_vm thrillwiki@TEMPLATE_VM_IP
# If access fails, check:
# 1. Template VM is running
# 2. Public key is in ***REMOVED***
# 3. Permissions are correct (700 for .ssh, 600 for ***REMOVED***)
```
### Template VM Updates
```bash
# Start template VM on Unraid
# SSH in and update:
sudo apt update && sudo apt upgrade -y
~/.cargo/bin/uv --version # Check UV is still working
# Clean up and shutdown
sudo apt autoremove -y && sudo apt autoclean
history -c && history -w
sudo shutdown now
```
### Permission Issues
```bash
# If you get permission errors, ensure thrillwiki user owns everything
sudo chown -R thrillwiki:thrillwiki /home/thrillwiki/
sudo chmod 700 /home/thrillwiki/.ssh
sudo chmod 600 /home/thrillwiki/.ssh/***REMOVED***
```
Your template is now ready for lightning-fast VM deployments! ⚡

View File

@@ -1,206 +0,0 @@
#cloud-config
autoinstall:
# version is an Autoinstall required field.
version: 1
# Install Ubuntu server packages and ThrillWiki dependencies
packages:
- ubuntu-server
- curl
- wget
- git
- python3
- python3-pip
- python3-venv
- nginx
- postgresql
- postgresql-contrib
- redis-server
- nodejs
- npm
- build-essential
- ufw
- fail2ban
- htop
- tree
- vim
- tmux
- qemu-guest-agent
# User creation
identity:
realname: 'ThrillWiki Admin'
username: thrillwiki
# Default [PASSWORD-REMOVED] (change after login)
password: '$6$rounds=4096$saltsalt$[AWS-SECRET-REMOVED]AzpI8g8T14F8VnhXo0sUkZV2NV6/.c77tHgVi34DgbPu.'
hostname: thrillwiki-vm
locale: en_US.UTF-8
keyboard:
layout: us
package_update: true
package_upgrade: true
# Use direct storage layout (no LVM)
storage:
swap:
size: 0
layout:
name: direct
# SSH configuration
ssh:
allow-pw: true
install-server: true
authorized-keys:
- {SSH_PUBLIC_KEY}
# Network configuration - will be replaced with proper config
network:
version: 2
ethernets:
enp1s0:
dhcp4: true
dhcp-identifier: mac
# Commands to run after installation
late-commands:
# Update GRUB
- curtin in-target -- update-grub
# Enable and start services
- curtin in-target -- systemctl enable qemu-guest-agent
- curtin in-target -- systemctl enable postgresql
- curtin in-target -- systemctl enable redis-server
- curtin in-target -- systemctl enable nginx
# Configure PostgreSQL
- curtin in-target -- sudo -u postgres createuser -s thrillwiki
- curtin in-target -- sudo -u postgres createdb thrillwiki_db
- curtin in-target -- sudo -u postgres psql -c "ALTER USER thrillwiki PASSWORD 'thrillwiki123';"
# Configure firewall
- curtin in-target -- ufw allow OpenSSH
- curtin in-target -- ufw allow 'Nginx Full'
- curtin in-target -- ufw --force enable
# Clone ThrillWiki repository if provided
- curtin in-target -- bash -c 'if [ -n "{GITHUB_REPO}" ]; then cd /home/thrillwiki && git clone "{GITHUB_REPO}" thrillwiki-app && chown -R thrillwiki:thrillwiki thrillwiki-app; fi'
# Create deployment script
- curtin in-target -- tee /home/thrillwiki/deploy-thrillwiki.sh << 'EOF'
#!/bin/bash
set -e
echo "=== ThrillWiki Deployment Script ==="
# Check if repo was cloned
if [ ! -d "/home/thrillwiki/thrillwiki-app" ]; then
echo "Repository not found. Please clone your ThrillWiki repository:"
echo "git clone YOUR_REPO_URL thrillwiki-app"
exit 1
fi
cd /home/thrillwiki/thrillwiki-app
# Create virtual environment
python3 -m venv venv
source venv/bin/activate
# Install Python dependencies
if [ -f "requirements.txt" ]; then
pip install -r requirements.txt
else
echo "Warning: requirements.txt not found"
fi
# Install Django if not in requirements
pip install django psycopg2-binary redis celery gunicorn
# Set up environment variables
cat > ***REMOVED*** << 'ENVEOF'
DEBUG=False
SECRET_KEY=your-secret-key-change-this
DATABASE_URL=[DATABASE-URL-REMOVED]
REDIS_URL=redis://localhost:6379/0
ALLOWED_HOSTS=localhost,127.0.0.1,thrillwiki-vm
ENVEOF
# Run Django setup commands
if [ -f "manage.py" ]; then
python manage.py collectstatic --noinput
python manage.py migrate
echo "from django.contrib.auth import get_user_model; User = get_user_model(); User.objects.create_superuser('admin', 'admin@thrillwiki.com', 'thrillwiki123') if not User.objects.filter(username='admin').exists() else None" | python manage.py shell
fi
# Configure Nginx
sudo tee /etc/nginx/sites-available/thrillwiki << 'NGINXEOF'
server {
listen 80;
server_name _;
location /static/ {
alias /home/thrillwiki/thrillwiki-app/staticfiles/;
}
location /media/ {
alias /home/thrillwiki/thrillwiki-app/media/;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
NGINXEOF
# Enable Nginx site
sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
sudo systemctl reload nginx
# Create systemd service for Django
sudo tee /etc/systemd/system/thrillwiki.service << 'SERVICEEOF'
[Unit]
Description=ThrillWiki Django App
After=network.target
[Service]
User=thrillwiki
Group=thrillwiki
[AWS-SECRET-REMOVED]wiki-app
[AWS-SECRET-REMOVED]wiki-app/venv/bin
ExecStart=/home/thrillwiki/thrillwiki-app/venv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 thrillwiki.wsgi:application
Restart=always
[Install]
WantedBy=multi-user.target
SERVICEEOF
# Enable and start ThrillWiki service
sudo systemctl daemon-reload
sudo systemctl enable thrillwiki
sudo systemctl start thrillwiki
echo "=== ThrillWiki deployment complete! ==="
echo "Access your application at: http://$(hostname -I | awk '{print $1}')"
echo "Django Admin: http://$(hostname -I | awk '{print $1}')/admin"
echo "Default superuser: admin / thrillwiki123"
echo ""
echo "Important: Change default passwords!"
EOF
# Make deployment script executable
- curtin in-target -- chmod +x /home/thrillwiki/deploy-thrillwiki.sh
- curtin in-target -- chown thrillwiki:thrillwiki /home/thrillwiki/deploy-thrillwiki.sh
# Clean up
- curtin in-target -- apt-get autoremove -y
- curtin in-target -- apt-get autoclean
# Reboot after installation
shutdown: reboot

View File

@@ -1,62 +0,0 @@
#cloud-config
# Ubuntu autoinstall configuration
autoinstall:
version: 1
locale: en_US.UTF-8
keyboard:
layout: us
network:
version: 2
ethernets:
ens3:
dhcp4: true
enp1s0:
dhcp4: true
eth0:
dhcp4: true
ssh:
install-server: true
authorized-keys:
- {SSH_PUBLIC_KEY}
allow-pw: false
storage:
layout:
name: lvm
identity:
hostname: thrillwiki-vm
username: ubuntu
password: "$6$rounds=4096$salt$hash" # disabled - ssh key only
packages:
- openssh-server
- curl
- git
- python3
- python3-pip
- python3-venv
- build-essential
- postgresql
- postgresql-contrib
- nginx
- nodejs
- npm
- wget
- ca-certificates
- openssl
- dnsutils
- net-tools
early-commands:
- systemctl stop ssh
late-commands:
# Enable sudo for ubuntu user
- echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu
# Install uv Python package manager
- chroot /target su - ubuntu -c 'curl -LsSf https://astral.sh/uv/install.sh | sh || pip3 install uv'
# Add uv to PATH
- chroot /target su - ubuntu -c 'echo "export PATH=\$HOME/.cargo/bin:\$PATH" >> /home/ubuntu/.bashrc'
# Clone ThrillWiki repository
- chroot /target su - ubuntu -c 'cd /home/ubuntu && git clone {GITHUB_REPO} thrillwiki'
# Setup systemd service for ThrillWiki
- systemctl enable postgresql
- systemctl enable nginx
shutdown: reboot


@@ -1,451 +0,0 @@
#!/bin/bash
#
# ThrillWiki Template-Based Deployment Script
# Optimized for VMs deployed from templates that already have basic setup
#
# Function to log messages with timestamp
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a /home/ubuntu/thrillwiki-deploy.log
}
# Function to check if a command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Function to wait for network connectivity
wait_for_network() {
log "Waiting for network connectivity..."
local max_attempts=20 # Reduced from 30 since template VMs boot faster
local attempt=1
while [ $attempt -le $max_attempts ]; do
if curl -s --connect-timeout 5 https://github.com >/dev/null 2>&1; then
log "Network connectivity confirmed"
return 0
fi
log "Network attempt $attempt/$max_attempts failed, retrying in 5 seconds..."
sleep 5 # Reduced from 10 since template VMs should have faster networking
attempt=$((attempt + 1))
done
log "WARNING: Network connectivity check failed after $max_attempts attempts"
return 1
}
# Function to update system packages (lighter since template should be recent)
update_system() {
log "Updating system packages..."
# Quick update - template should already have most packages
sudo apt update || log "WARNING: apt update failed"
# Only upgrade security packages to save time
sudo apt list --upgradable 2>/dev/null | grep -q security && {
log "Installing security updates..."
sudo apt upgrade -y --with-new-pkgs -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" || log "WARNING: Security updates failed"
} || log "No security updates needed"
}
# Function to setup Python environment with template optimizations
setup_python_env() {
log "Setting up Python environment..."
# Check if uv is already available (should be in template)
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "Using existing uv installation from template"
uv --version
else
log "Installing uv (not found in template)..."
if wait_for_network; then
curl -LsSf --connect-timeout 30 --retry 2 --retry-delay 5 https://astral.sh/uv/install.sh | sh
export PATH="/home/ubuntu/.cargo/bin:$PATH"
else
log "WARNING: Network not available, falling back to pip"
fi
fi
# Setup virtual environment
if command_exists uv; then
log "Creating virtual environment with uv..."
if uv venv .venv && source .venv/bin/activate; then
if uv sync; then
log "Successfully set up environment with uv"
return 0
else
log "uv sync failed, falling back to pip"
fi
else
log "uv venv failed, falling back to pip"
fi
fi
# Fallback to pip with venv
log "Setting up environment with pip and venv"
if python3 -m venv .venv && source .venv/bin/activate; then
pip install --upgrade pip || log "WARNING: Failed to upgrade pip"
# Try different dependency installation methods
if [ -f pyproject.toml ]; then
log "Installing dependencies from pyproject.toml"
if pip install -e . || pip install .; then
log "Successfully installed dependencies from pyproject.toml"
return 0
else
log "Failed to install from pyproject.toml"
fi
fi
if [ -f requirements.txt ]; then
log "Installing dependencies from requirements.txt"
if pip install -r requirements.txt; then
log "Successfully installed dependencies from requirements.txt"
return 0
else
log "Failed to install from requirements.txt"
fi
fi
# Last resort: install common Django packages
log "Installing basic Django packages as fallback"
pip install django psycopg2-binary gunicorn || log "WARNING: Failed to install basic packages"
else
log "ERROR: Failed to create virtual environment"
return 1
fi
}
# Function to setup database (should already exist in template)
setup_database() {
log "Setting up PostgreSQL database..."
# Check if PostgreSQL is already running (should be in template)
if sudo systemctl is-active --quiet postgresql; then
log "PostgreSQL is already running"
else
log "Starting PostgreSQL service..."
sudo systemctl start postgresql || {
log "Failed to start PostgreSQL, trying alternative methods"
sudo service postgresql start || {
log "ERROR: Could not start PostgreSQL"
return 1
}
}
fi
# Check if database and user already exist (may be in template)
if sudo -u postgres psql -lqt | cut -d \| -f 1 | grep -qw thrillwiki_production; then
log "Database 'thrillwiki_production' already exists"
else
log "Creating database 'thrillwiki_production'..."
sudo -u postgres createdb thrillwiki_production || {
log "ERROR: Failed to create database"
return 1
}
fi
# Create/update database user
if sudo -u postgres psql -c "SELECT 1 FROM pg_user WHERE usename = 'ubuntu'" | grep -q 1; then
log "Database user 'ubuntu' already exists"
else
sudo -u postgres createuser ubuntu || log "WARNING: Failed to create user (may already exist)"
fi
# Grant permissions
sudo -u postgres psql -c "ALTER USER ubuntu WITH SUPERUSER;" || {
log "WARNING: Failed to grant superuser privileges, trying alternative permissions"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_production TO ubuntu;" || log "WARNING: Failed to grant database privileges"
}
log "Database setup completed"
}
# Function to run Django commands with fallbacks
run_django_commands() {
log "Running Django management commands..."
# Ensure we're in the virtual environment
if [ ! -d ".venv" ] || ! source .venv/bin/activate; then
log "WARNING: Virtual environment not found or failed to activate"
# Try to run without venv activation
fi
# Function to run a Django command with fallbacks
run_django_cmd() {
local cmd="$1"
local description="$2"
log "Running: $description"
# Try uv run first
if command_exists uv && uv run manage.py $cmd; then
log "Successfully ran '$cmd' with uv"
return 0
fi
# Try python in venv
if python manage.py $cmd; then
log "Successfully ran '$cmd' with python"
return 0
fi
# Try python3
if python3 manage.py $cmd; then
log "Successfully ran '$cmd' with python3"
return 0
fi
log "WARNING: Failed to run '$cmd'"
return 1
}
# Run migrations
run_django_cmd "migrate" "Database migrations" || log "WARNING: Database migration failed"
# Collect static files
run_django_cmd "collectstatic --noinput" "Static files collection" || log "WARNING: Static files collection failed"
# Build Tailwind CSS (if available)
if run_django_cmd "tailwind build" "Tailwind CSS build"; then
log "Tailwind CSS built successfully"
else
log "Tailwind CSS build not available or failed - this is optional"
fi
}
# Function to setup systemd services (may already exist in template)
setup_services() {
log "Setting up systemd services..."
# Check if systemd service files exist
if [ -f scripts/systemd/thrillwiki.service ]; then
log "Copying ThrillWiki systemd service..."
sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ || {
log "Failed to copy thrillwiki.service, creating basic service"
create_basic_service
}
else
log "Systemd service file not found, creating basic service"
create_basic_service
fi
# Copy webhook service if available
if [ -f scripts/systemd/thrillwiki-webhook.service ]; then
sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/ || {
log "Failed to copy webhook service, skipping"
}
else
log "Webhook service file not found, skipping"
fi
# Update service files with correct paths
if [ -f /etc/systemd/system/thrillwiki.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki.service
fi
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki-webhook.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki-webhook.service
fi
# Reload systemd and start services
sudo systemctl daemon-reload
# Enable and start main service
if sudo systemctl enable thrillwiki 2>/dev/null; then
log "ThrillWiki service enabled"
if sudo systemctl start thrillwiki; then
log "ThrillWiki service started successfully"
else
log "WARNING: Failed to start ThrillWiki service"
sudo systemctl status thrillwiki --no-pager || true
fi
else
log "WARNING: Failed to enable ThrillWiki service"
fi
# Try to start webhook service if it exists
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo systemctl enable thrillwiki-webhook 2>/dev/null && sudo systemctl start thrillwiki-webhook || {
log "WARNING: Failed to start webhook service"
}
fi
}
# Function to create a basic systemd service if none exists
create_basic_service() {
log "Creating basic systemd service..."
sudo tee /etc/systemd/system/thrillwiki.service > /dev/null << 'SERVICE_EOF'
[Unit]
Description=ThrillWiki Django Application
After=network.target postgresql.service
Wants=postgresql.service
[Service]
Type=exec
User=ubuntu
Group=ubuntu
[AWS-SECRET-REMOVED]
[AWS-SECRET-REMOVED]/.venv/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin
ExecStart=/home/ubuntu/thrillwiki/.venv/bin/python manage.py runserver 0.0.0.0:8000
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
SERVICE_EOF
log "Basic systemd service created"
}
# Function to setup web server (may already be configured in template)
setup_webserver() {
log "Setting up web server..."
# Check if nginx is installed and running
if command_exists nginx; then
if ! sudo systemctl is-active --quiet nginx; then
log "Starting nginx..."
sudo systemctl start nginx || log "WARNING: Failed to start nginx"
fi
# Create basic nginx config if none exists
if [ ! -f /etc/nginx/sites-available/thrillwiki ]; then
log "Creating nginx configuration..."
sudo tee /etc/nginx/sites-available/thrillwiki > /dev/null << 'NGINX_EOF'
server {
listen 80;
server_name _;
location /static/ {
alias /home/ubuntu/thrillwiki/staticfiles/;
}
location /media/ {
alias /home/ubuntu/thrillwiki/media/;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
NGINX_EOF
# Enable the site
sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/ || log "WARNING: Failed to enable nginx site"
sudo nginx -t && sudo systemctl reload nginx || log "WARNING: nginx configuration test failed"
else
log "nginx configuration already exists"
fi
else
log "nginx not installed, ThrillWiki will run on port 8000 directly"
fi
}
# Main deployment function
main() {
log "Starting ThrillWiki template-based deployment..."
# Shorter wait time since template VMs boot faster
log "Waiting for system to be ready..."
sleep 10
# Wait for network
wait_for_network || log "WARNING: Network check failed, continuing anyway"
# Clone or update repository
log "Setting up ThrillWiki repository..."
export GITHUB_TOKEN=$(cat /home/ubuntu/.github-token 2>/dev/null || echo "")
# Get the GitHub repository from environment or parameter
GITHUB_REPO="${1:-}"
if [ -z "$GITHUB_REPO" ]; then
log "ERROR: GitHub repository not specified"
return 1
fi
if [ -d "/home/ubuntu/thrillwiki" ]; then
log "ThrillWiki directory already exists, updating..."
cd /home/ubuntu/thrillwiki
git pull || log "WARNING: Failed to update repository"
else
if [ -n "$GITHUB_TOKEN" ]; then
log "Cloning with GitHub token..."
git clone https://$GITHUB_TOKEN@github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "Failed to clone with token, trying without..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
}
else
log "Cloning without GitHub token..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
fi
cd /home/ubuntu/thrillwiki
fi
# Update system (lighter for template VMs)
update_system
# Setup Python environment
setup_python_env || {
log "ERROR: Failed to set up Python environment"
return 1
}
# Setup environment file
log "Setting up environment configuration..."
if [ -f ***REMOVED***.example ]; then
cp ***REMOVED***.example ***REMOVED*** || log "WARNING: Failed to copy ***REMOVED***.example"
fi
# Update ***REMOVED*** with production settings
{
echo "DEBUG=False"
echo "DATABASE_URL=postgresql://ubuntu@localhost/thrillwiki_production"
echo "ALLOWED_HOSTS=*"
echo "STATIC_[AWS-SECRET-REMOVED]"
} >> ***REMOVED***
# Setup database
setup_database || {
log "ERROR: Database setup failed"
return 1
}
# Run Django commands
run_django_commands
# Setup systemd services
setup_services
# Setup web server
setup_webserver
log "ThrillWiki template-based deployment completed!"
log "Application should be available at http://$(hostname -I | awk '{print $1}'):8000"
log "Logs are available at /home/ubuntu/thrillwiki-deploy.log"
}
# Run main function and capture any errors
main "$@" 2>&1 | tee -a /home/ubuntu/thrillwiki-deploy.log
exit_code=${PIPESTATUS[0]}
if [ $exit_code -eq 0 ]; then
log "Template-based deployment completed successfully!"
else
log "Template-based deployment completed with errors (exit code: $exit_code)"
fi
exit $exit_code
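The `tee`-and-`PIPESTATUS` tail of this script is worth noting: after a pipeline, plain `$?` reports the status of the last element (`tee`), so the script reads `${PIPESTATUS[0]}` to recover `main`'s real exit code. A stripped-down sketch of the pattern (the failing `main` stand-in is illustrative, not the function above):

```shell
#!/usr/bin/env bash
# Stand-in for main(): does some work, then fails with a known code.
main() { echo "doing work"; return 3; }

main 2>&1 | tee /tmp/demo-deploy.log
# Copy PIPESTATUS immediately: every subsequent command overwrites it,
# and plain $? here would report tee's status (0), hiding the failure.
status=( "${PIPESTATUS[@]}" )
echo "main exited ${status[0]}, tee exited ${status[1]}"
```

Copying the whole array in one assignment is the safe idiom, since even a simple variable assignment resets `PIPESTATUS`.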


@@ -1,467 +0,0 @@
#!/bin/bash
# Function to log messages with timestamp
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a /home/ubuntu/thrillwiki-deploy.log
}
# Function to check if a command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Function to wait for network connectivity
wait_for_network() {
log "Waiting for network connectivity..."
local max_attempts=30
local attempt=1
while [ $attempt -le $max_attempts ]; do
if curl -s --connect-timeout 5 https://github.com >/dev/null 2>&1; then
log "Network connectivity confirmed"
return 0
fi
log "Network attempt $attempt/$max_attempts failed, retrying in 10 seconds..."
sleep 10
attempt=$((attempt + 1))
done
log "WARNING: Network connectivity check failed after $max_attempts attempts"
return 1
}
# Function to install uv if not available
install_uv() {
log "Checking for uv installation..."
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "uv is already available"
return 0
fi
log "Installing uv..."
# Wait for network connectivity first
wait_for_network || {
log "Network not available, skipping uv installation"
return 1
}
# Try to install uv with multiple attempts
local max_attempts=3
local attempt=1
while [ $attempt -le $max_attempts ]; do
log "uv installation attempt $attempt/$max_attempts"
if curl -LsSf --connect-timeout 30 --retry 2 --retry-delay 5 https://astral.sh/uv/install.sh | sh; then
# Reload PATH
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "uv installed successfully"
return 0
else
log "uv installation completed but command not found, checking PATH..."
# Try to source the shell profile to get updated PATH
if [ -f /home/ubuntu/.bashrc ]; then
source /home/ubuntu/.bashrc 2>/dev/null || true
fi
if [ -f /home/ubuntu/.cargo/env ]; then
source /home/ubuntu/.cargo/env 2>/dev/null || true
fi
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "uv is now available after PATH update"
return 0
fi
fi
fi
log "uv installation attempt $attempt failed"
attempt=$((attempt + 1))
[ $attempt -le $max_attempts ] && sleep 10
done
log "Failed to install uv after $max_attempts attempts, will use pip fallback"
return 1
}
# Function to setup Python environment with fallbacks
setup_python_env() {
log "Setting up Python environment..."
# Try to install uv first if not available
install_uv
export PATH="/home/ubuntu/.cargo/bin:$PATH"
# Try uv first
if command_exists uv; then
log "Using uv for Python environment management"
if uv venv .venv && source .venv/bin/activate; then
if uv sync; then
log "Successfully set up environment with uv"
return 0
else
log "uv sync failed, falling back to pip"
fi
else
log "uv venv failed, falling back to pip"
fi
else
log "uv not available, using pip"
fi
# Fallback to pip with venv
log "Setting up environment with pip and venv"
if python3 -m venv .venv && source .venv/bin/activate; then
pip install --upgrade pip || log "WARNING: Failed to upgrade pip"
# Try different dependency installation methods
if [ -f pyproject.toml ]; then
log "Installing dependencies from pyproject.toml"
if pip install -e . || pip install .; then
log "Successfully installed dependencies from pyproject.toml"
return 0
else
log "Failed to install from pyproject.toml"
fi
fi
if [ -f requirements.txt ]; then
log "Installing dependencies from requirements.txt"
if pip install -r requirements.txt; then
log "Successfully installed dependencies from requirements.txt"
return 0
else
log "Failed to install from requirements.txt"
fi
fi
# Last resort: install common Django packages
log "Installing basic Django packages as fallback"
pip install django psycopg2-binary gunicorn || log "WARNING: Failed to install basic packages"
else
log "ERROR: Failed to create virtual environment"
return 1
fi
}
# Function to setup database with fallbacks
setup_database() {
log "Setting up PostgreSQL database..."
# Ensure PostgreSQL is running
if ! sudo systemctl is-active --quiet postgresql; then
log "Starting PostgreSQL service..."
sudo systemctl start postgresql || {
log "Failed to start PostgreSQL, trying alternative methods"
sudo service postgresql start || {
log "ERROR: Could not start PostgreSQL"
return 1
}
}
fi
# Create database user and database with error handling
if sudo -u postgres createuser ubuntu 2>/dev/null || sudo -u postgres psql -c "SELECT 1 FROM pg_user WHERE usename = 'ubuntu'" | grep -q 1; then
log "Database user 'ubuntu' created or already exists"
else
log "ERROR: Failed to create database user"
return 1
fi
if sudo -u postgres createdb thrillwiki_production 2>/dev/null || sudo -u postgres psql -lqt | cut -d \| -f 1 | grep -qw thrillwiki_production; then
log "Database 'thrillwiki_production' created or already exists"
else
log "ERROR: Failed to create database"
return 1
fi
# Grant permissions
sudo -u postgres psql -c "ALTER USER ubuntu WITH SUPERUSER;" || {
log "WARNING: Failed to grant superuser privileges, trying alternative permissions"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_production TO ubuntu;" || log "WARNING: Failed to grant database privileges"
}
log "Database setup completed"
}
# Function to run Django commands with fallbacks
run_django_commands() {
log "Running Django management commands..."
# Ensure we're in the virtual environment
if [ ! -d ".venv" ] || ! source .venv/bin/activate; then
log "WARNING: Virtual environment not found or failed to activate"
# Try to run without venv activation
fi
# Function to run a Django command with fallbacks
run_django_cmd() {
local cmd="$1"
local description="$2"
log "Running: $description"
# Try uv run first
if command_exists uv && uv run manage.py $cmd; then
log "Successfully ran '$cmd' with uv"
return 0
fi
# Try python in venv
if python manage.py $cmd; then
log "Successfully ran '$cmd' with python"
return 0
fi
# Try python3
if python3 manage.py $cmd; then
log "Successfully ran '$cmd' with python3"
return 0
fi
log "WARNING: Failed to run '$cmd'"
return 1
}
# Run migrations
run_django_cmd "migrate" "Database migrations" || log "WARNING: Database migration failed"
# Collect static files
run_django_cmd "collectstatic --noinput" "Static files collection" || log "WARNING: Static files collection failed"
# Build Tailwind CSS (if available)
if run_django_cmd "tailwind build" "Tailwind CSS build"; then
log "Tailwind CSS built successfully"
else
log "Tailwind CSS build not available or failed - this is optional"
fi
}
# Function to setup systemd services with fallbacks
setup_services() {
log "Setting up systemd services..."
# Check if systemd service files exist
if [ -f scripts/systemd/thrillwiki.service ]; then
sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ || {
log "Failed to copy thrillwiki.service, creating basic service"
create_basic_service
}
else
log "Systemd service file not found, creating basic service"
create_basic_service
fi
if [ -f scripts/systemd/thrillwiki-webhook.service ]; then
sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/ || {
log "Failed to copy webhook service, skipping"
}
else
log "Webhook service file not found, skipping"
fi
# Update service files with correct paths
if [ -f /etc/systemd/system/thrillwiki.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki.service
fi
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki-webhook.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki-webhook.service
fi
# Reload systemd and start services
sudo systemctl daemon-reload
if sudo systemctl enable thrillwiki 2>/dev/null; then
log "ThrillWiki service enabled"
if sudo systemctl start thrillwiki; then
log "ThrillWiki service started successfully"
else
log "WARNING: Failed to start ThrillWiki service"
sudo systemctl status thrillwiki --no-pager || true
fi
else
log "WARNING: Failed to enable ThrillWiki service"
fi
# Try to start webhook service if it exists
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo systemctl enable thrillwiki-webhook 2>/dev/null && sudo systemctl start thrillwiki-webhook || {
log "WARNING: Failed to start webhook service"
}
fi
}
# Function to create a basic systemd service if none exists
create_basic_service() {
log "Creating basic systemd service..."
sudo tee /etc/systemd/system/thrillwiki.service > /dev/null << 'SERVICE_EOF'
[Unit]
Description=ThrillWiki Django Application
After=network.target postgresql.service
Wants=postgresql.service
[Service]
Type=exec
User=ubuntu
Group=ubuntu
[AWS-SECRET-REMOVED]
[AWS-SECRET-REMOVED]/.venv/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin
ExecStart=/home/ubuntu/thrillwiki/.venv/bin/python manage.py runserver 0.0.0.0:8000
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
SERVICE_EOF
log "Basic systemd service created"
}
# Function to setup web server (nginx) with fallbacks
setup_webserver() {
log "Setting up web server..."
# Check if nginx is installed and running
if command_exists nginx; then
if ! sudo systemctl is-active --quiet nginx; then
log "Starting nginx..."
sudo systemctl start nginx || log "WARNING: Failed to start nginx"
fi
# Create basic nginx config if none exists
if [ ! -f /etc/nginx/sites-available/thrillwiki ]; then
log "Creating nginx configuration..."
sudo tee /etc/nginx/sites-available/thrillwiki > /dev/null << 'NGINX_EOF'
server {
listen 80;
server_name _;
location /static/ {
alias /home/ubuntu/thrillwiki/staticfiles/;
}
location /media/ {
alias /home/ubuntu/thrillwiki/media/;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
NGINX_EOF
# Enable the site
sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/ || log "WARNING: Failed to enable nginx site"
sudo nginx -t && sudo systemctl reload nginx || log "WARNING: nginx configuration test failed"
fi
else
log "nginx not installed, ThrillWiki will run on port 8000 directly"
fi
}
# Main deployment function
main() {
log "Starting ThrillWiki deployment..."
# Wait for system to be ready
log "Waiting for system to be ready..."
sleep 30
# Wait for network
wait_for_network || log "WARNING: Network check failed, continuing anyway"
# Clone repository
log "Cloning ThrillWiki repository..."
export GITHUB_TOKEN=$(cat /home/ubuntu/.github-token 2>/dev/null || echo "")
# Get the GitHub repository from environment or parameter
GITHUB_REPO="${1:-}"
if [ -z "$GITHUB_REPO" ]; then
log "ERROR: GitHub repository not specified"
return 1
fi
if [ -d "/home/ubuntu/thrillwiki" ]; then
log "ThrillWiki directory already exists, updating..."
cd /home/ubuntu/thrillwiki
git pull || log "WARNING: Failed to update repository"
else
if [ -n "$GITHUB_TOKEN" ]; then
log "Cloning with GitHub token..."
git clone https://$GITHUB_TOKEN@github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "Failed to clone with token, trying without..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
}
else
log "Cloning without GitHub token..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
fi
cd /home/ubuntu/thrillwiki
fi
# Setup Python environment
setup_python_env || {
log "ERROR: Failed to set up Python environment"
return 1
}
# Setup environment file
log "Setting up environment configuration..."
if [ -f ***REMOVED***.example ]; then
cp ***REMOVED***.example ***REMOVED*** || log "WARNING: Failed to copy ***REMOVED***.example"
fi
# Update ***REMOVED*** with production settings
{
echo "DEBUG=False"
echo "DATABASE_URL=postgresql://ubuntu@localhost/thrillwiki_production"
echo "ALLOWED_HOSTS=*"
echo "STATIC_[AWS-SECRET-REMOVED]"
} >> ***REMOVED***
# Setup database
setup_database || {
log "ERROR: Database setup failed"
return 1
}
# Run Django commands
run_django_commands
# Setup systemd services
setup_services
# Setup web server
setup_webserver
log "ThrillWiki deployment completed!"
log "Application should be available at http://$(hostname -I | awk '{print $1}'):8000"
log "Logs are available at /home/ubuntu/thrillwiki-deploy.log"
}
# Run main function and capture any errors
main "$@" 2>&1 | tee -a /home/ubuntu/thrillwiki-deploy.log
exit_code=${PIPESTATUS[0]}
if [ $exit_code -eq 0 ]; then
log "Deployment completed successfully!"
else
log "Deployment completed with errors (exit code: $exit_code)"
fi
exit $exit_code
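Both deployment scripts repeat the same attempt-counter loop (`wait_for_network`, the uv installer). The pattern generalizes to a small helper; `retry` is a hypothetical name for this sketch, not a function from the scripts above, and the sleep between attempts is omitted to keep the demo fast:

```shell
#!/usr/bin/env bash
# retry MAX_ATTEMPTS CMD [ARGS...] -- run CMD until it succeeds or
# MAX_ATTEMPTS is exhausted; return CMD's success or 1 on exhaustion.
retry() {
  local max=$1; shift
  local attempt=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      return 0
    fi
    attempt=$((attempt + 1))
  done
  return 1
}

retry 3 true  && echo "succeeded"
retry 2 false || echo "gave up"
```

Factoring the loop out this way would also make the two scripts' retry counts and delays configurable in one place.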


@@ -1,39 +0,0 @@
#!/bin/bash
# Example: How to use non-interactive mode for ThrillWiki setup
#
# This script shows how to set up environment variables for non-interactive mode
# and run the automation without any user prompts.
echo "🤖 ThrillWiki Non-Interactive Setup Example"
echo "[AWS-SECRET-REMOVED]=="
# Set required environment variables for non-interactive mode
# These replace the interactive prompts
# Unraid password (REQUIRED)
export UNRAID_PASSWORD="your_unraid_password_here"
# GitHub token (REQUIRED if using GitHub API)
export GITHUB_TOKEN="your_github_token_here"
# Webhook secret (REQUIRED if webhooks enabled)
export WEBHOOK_SECRET="your_webhook_secret_here"
echo "✅ Environment variables set"
echo "📋 Configuration summary:"
echo " - UNRAID_PASSWORD: [HIDDEN]"
echo " - GITHUB_TOKEN: [HIDDEN]"
echo " - WEBHOOK_SECRET: [HIDDEN]"
echo
echo "🚀 Starting non-interactive setup..."
echo "This will use saved configuration and the environment variables above"
echo
# Run the setup script in non-interactive mode
./setup-complete-automation.sh -y
echo
echo "✨ Non-interactive setup completed!"
echo "📝 Note: This example script should be customized with your actual credentials"
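Because `-y` mode skips every prompt, an unset variable tends to surface late and cryptically inside the automation. A guard like the following fails fast instead; `require_env` is a hypothetical helper (not part of the script above), and the demo values stand in for real credentials:

```shell
#!/usr/bin/env bash
# Demo values only -- replace with real secrets in practice.
export UNRAID_PASSWORD="demo" GITHUB_TOKEN="demo" WEBHOOK_SECRET="demo"

require_env() {
  # Indirect expansion (${!name}) reads the variable whose name is in $name.
  local name
  for name in "$@"; do
    if [ -z "${!name:-}" ]; then
      echo "ERROR: $name is not set" >&2
      return 1
    fi
  done
}

require_env UNRAID_PASSWORD GITHUB_TOKEN WEBHOOK_SECRET && echo "all set"
```

Running the check before `./setup-complete-automation.sh -y` turns a mid-run failure into an immediate, named error.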


@@ -1,531 +0,0 @@
#!/usr/bin/env python3
"""
Ubuntu ISO Builder for Autoinstall
Follows the Ubuntu autoinstall guide exactly:
1. Download Ubuntu ISO
2. Extract with 7zip equivalent
3. Modify GRUB configuration
4. Add server/ directory with autoinstall config
5. Rebuild ISO with xorriso equivalent
"""
import os
import logging
import subprocess
import tempfile
import shutil
import urllib.request
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
# Ubuntu ISO URLs with fallbacks
UBUNTU_MIRRORS = [
"https://releases.ubuntu.com", # Official Ubuntu releases (primary)
"http://archive.ubuntu.com/ubuntu-releases", # Official archive
"http://mirror.csclub.uwaterloo.ca/ubuntu-releases", # University of Waterloo
"http://mirror.math.princeton.edu/pub/ubuntu-releases", # Princeton mirror
]
UBUNTU_24_04_ISO = "24.04/ubuntu-24.04.3-live-server-amd64.iso"
UBUNTU_22_04_ISO = "22.04/ubuntu-22.04.3-live-server-amd64.iso"
def get_latest_ubuntu_server_iso(version: str) -> Optional[str]:
"""Dynamically find the latest point release for a given Ubuntu version."""
try:
import re
for mirror in UBUNTU_MIRRORS:
try:
url = f"{mirror}/{version}/"
response = urllib.request.urlopen(url, timeout=10)
content = response.read().decode("utf-8")
# Find all server ISO files for this version
pattern = rf"ubuntu-{re.escape(version)}\.[0-9]+-live-server-amd64\.iso"
matches = re.findall(pattern, content)
if matches:
# Sort by version and return the latest
matches.sort(key=lambda x: [int(n) for n in re.findall(r"\d+", x)])
latest_iso = matches[-1]
return f"{version}/{latest_iso}"
except Exception as e:
logger.debug(f"Failed to check {mirror}/{version}/: {e}")
continue
logger.warning(f"Could not dynamically detect latest ISO for Ubuntu {version}")
return None
except Exception as e:
logger.error(f"Error in dynamic ISO detection: {e}")
return None
class UbuntuISOBuilder:
"""Builds modified Ubuntu ISO with autoinstall configuration."""
def __init__(self, vm_name: str, work_dir: Optional[str] = None):
self.vm_name = vm_name
self.work_dir = (
Path(work_dir)
if work_dir
else Path(tempfile.mkdtemp(prefix="ubuntu-autoinstall-"))
)
self.source_files_dir = self.work_dir / "source-files"
self.boot_dir = self.work_dir / "BOOT"
self.server_dir = self.source_files_dir / "server"
self.grub_cfg_path = self.source_files_dir / "boot" / "grub" / "grub.cfg"
# Ensure directories exist
self.work_dir.mkdir(exist_ok=True, parents=True)
self.source_files_dir.mkdir(exist_ok=True, parents=True)
def check_tools(self) -> bool:
"""Check if required tools are available."""
# Check for 7zip equivalent (p7zip on macOS/Linux)
if not shutil.which("7z") and not shutil.which("7za"):
logger.error(
"7zip not found. Install with: brew install p7zip (macOS) or apt install p7zip-full (Ubuntu)"
)
return False
# Check for xorriso equivalent
if (
not shutil.which("xorriso")
and not shutil.which("mkisofs")
and not shutil.which("hdiutil")
):
logger.error(
"No ISO creation tool found. Install xorriso, mkisofs, or use macOS hdiutil"
)
return False
return True
def download_ubuntu_iso(self, version: str = "24.04") -> Path:
"""Download Ubuntu ISO if not already present, trying multiple mirrors."""
iso_filename = f"ubuntu-{version}-live-server-amd64.iso"
iso_path = self.work_dir / iso_filename
if iso_path.exists():
logger.info(f"Ubuntu ISO already exists: {iso_path}")
return iso_path
if version == "24.04":
iso_subpath = UBUNTU_24_04_ISO
elif version == "22.04":
iso_subpath = UBUNTU_22_04_ISO
else:
raise ValueError(f"Unsupported Ubuntu version: {version}")
# Try each mirror until one works
last_error = None
for mirror in UBUNTU_MIRRORS:
iso_url = f"{mirror}/{iso_subpath}"
logger.info(f"Trying to download Ubuntu {version} ISO from {iso_url}")
try:
# Try downloading from this mirror
urllib.request.urlretrieve(iso_url, iso_path)
logger.info(
f"✅ Ubuntu ISO downloaded successfully from {mirror}: {iso_path}"
)
return iso_path
except Exception as e:
last_error = e
logger.warning(f"Failed to download from {mirror}: {e}")
# Remove partial download if it exists
if iso_path.exists():
iso_path.unlink()
continue
        # If we get here, all mirrors failed
        logger.error(
            f"Failed to download Ubuntu ISO from all mirrors. Last error: {last_error}"
        )
        if last_error is not None:
            raise last_error
        raise RuntimeError("No Ubuntu mirrors configured")
def extract_iso(self, iso_path: Path) -> bool:
"""Extract Ubuntu ISO following the guide."""
logger.info(f"Extracting ISO: {iso_path}")
# Use 7z to extract ISO
seven_zip_cmd = "7z" if shutil.which("7z") else "7za"
try:
# Extract ISO: 7z -y x ubuntu.iso -osource-files
subprocess.run(
[
seven_zip_cmd,
"-y",
"x",
str(iso_path),
f"-o{self.source_files_dir}",
],
capture_output=True,
text=True,
check=True,
)
logger.info("ISO extracted successfully")
# Move [BOOT] directory as per guide: mv '[BOOT]' ../BOOT
boot_source = self.source_files_dir / "[BOOT]"
if boot_source.exists():
shutil.move(str(boot_source), str(self.boot_dir))
logger.info(f"Moved [BOOT] directory to {self.boot_dir}")
else:
logger.warning("[BOOT] directory not found in extracted files")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Failed to extract ISO: {e.stderr}")
return False
except Exception as e:
logger.error(f"Error extracting ISO: {e}")
return False
def modify_grub_config(self) -> bool:
"""Modify GRUB configuration to add autoinstall menu entry."""
logger.info("Modifying GRUB configuration...")
if not self.grub_cfg_path.exists():
logger.error(f"GRUB config not found: {self.grub_cfg_path}")
return False
try:
# Read existing GRUB config
with open(self.grub_cfg_path, "r", encoding="utf-8") as f:
grub_content = f.read()
# Autoinstall menu entry as per guide
autoinstall_entry = """menuentry "Autoinstall Ubuntu Server" {
set gfxpayload=keep
linux /casper/vmlinuz quiet autoinstall ds=nocloud\\;s=/cdrom/server/ ---
initrd /casper/initrd
}
"""
# Insert autoinstall entry at the beginning of menu entries
# Find the first menuentry and insert before it
import re
first_menu_match = re.search(r'(menuentry\s+["\'])', grub_content)
if first_menu_match:
insert_pos = first_menu_match.start()
modified_content = (
grub_content[:insert_pos]
+ autoinstall_entry
+ grub_content[insert_pos:]
)
else:
# Fallback: append at the end
modified_content = grub_content + "\n" + autoinstall_entry
# Write modified GRUB config
with open(self.grub_cfg_path, "w", encoding="utf-8") as f:
f.write(modified_content)
logger.info("GRUB configuration modified successfully")
return True
except Exception as e:
logger.error(f"Failed to modify GRUB config: {e}")
return False
def create_autoinstall_config(self, user_data: str) -> bool:
"""Create autoinstall configuration in server/ directory."""
logger.info("Creating autoinstall configuration...")
try:
# Create server directory
self.server_dir.mkdir(exist_ok=True, parents=True)
# Create empty meta-data file (as per guide)
meta_data_path = self.server_dir / "meta-data"
meta_data_path.touch()
logger.info(f"Created empty meta-data: {meta_data_path}")
# Create user-data file with autoinstall configuration
user_data_path = self.server_dir / "user-data"
with open(user_data_path, "w", encoding="utf-8") as f:
f.write(user_data)
logger.info(f"Created user-data: {user_data_path}")
return True
except Exception as e:
logger.error(f"Failed to create autoinstall config: {e}")
return False
def rebuild_iso(self, output_path: Path) -> bool:
"""Rebuild ISO with autoinstall configuration using xorriso."""
logger.info(f"Rebuilding ISO: {output_path}")
        # Record the current directory before changing into the source tree,
        # so the finally block can always restore it
        original_cwd = os.getcwd()
        try:
            # Change to source-files directory for xorriso command
            os.chdir(self.source_files_dir)
# Remove existing output file
if output_path.exists():
output_path.unlink()
# Try different ISO creation methods in order of preference
success = False
# Method 1: xorriso (most complete)
if shutil.which("xorriso") and not success:
try:
logger.info("Trying xorriso method...")
cmd = [
"xorriso",
"-as",
"mkisofs",
"-r",
"-V",
                        "Ubuntu 24.04 LTS AUTO (EFIBIOS)",
"-o",
str(output_path),
"--grub2-mbr",
f"..{os.sep}BOOT{os.sep}1-Boot-NoEmul.img",
"-partition_offset",
"16",
"--mbr-force-bootable",
"-append_partition",
"2",
"28732ac11ff8d211ba4b00a0c93ec93b",
f"..{os.sep}BOOT{os.sep}2-Boot-NoEmul.img",
"-appended_part_as_gpt",
"-iso_mbr_part_type",
"a2a0d0ebe5b9334487c068b6b72699c7",
"-c",
"/boot.catalog",
"-b",
"/boot/grub/i386-pc/eltorito.img",
"-no-emul-boot",
"-boot-load-size",
"4",
"-boot-info-table",
"--grub2-boot-info",
"-eltorito-alt-boot",
"-e",
"--interval:appended_partition_2:::",
"-no-emul-boot",
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with xorriso")
except subprocess.CalledProcessError as e:
logger.warning(f"xorriso failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
# Method 2: mkisofs with joliet-long
if shutil.which("mkisofs") and not success:
try:
logger.info("Trying mkisofs with joliet-long...")
cmd = [
"mkisofs",
"-r",
"-V",
                        "Ubuntu 24.04 LTS AUTO",
"-cache-inodes",
"-J",
"-joliet-long",
"-l",
"-b",
"boot/grub/i386-pc/eltorito.img",
"-c",
"boot.catalog",
"-no-emul-boot",
"-boot-load-size",
"4",
"-boot-info-table",
"-o",
str(output_path),
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with mkisofs (joliet-long)")
except subprocess.CalledProcessError as e:
logger.warning(f"mkisofs with joliet-long failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
# Method 3: mkisofs without Joliet (fallback)
if shutil.which("mkisofs") and not success:
try:
logger.info("Trying mkisofs without Joliet (fallback)...")
cmd = [
"mkisofs",
"-r",
"-V",
                        "Ubuntu 24.04 LTS AUTO",
"-cache-inodes",
"-l", # No -J (Joliet) to avoid filename conflicts
"-b",
"boot/grub/i386-pc/eltorito.img",
"-c",
"boot.catalog",
"-no-emul-boot",
"-boot-load-size",
"4",
"-boot-info-table",
"-o",
str(output_path),
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with mkisofs (no Joliet)")
                except subprocess.CalledProcessError as e:
                    logger.warning(f"mkisofs without Joliet failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
# Method 4: macOS hdiutil
if shutil.which("hdiutil") and not success:
try:
logger.info("Trying hdiutil (macOS)...")
cmd = [
"hdiutil",
"makehybrid",
"-iso",
"-joliet",
"-o",
str(output_path),
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with hdiutil")
except subprocess.CalledProcessError as e:
logger.warning(f"hdiutil failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
if not success:
logger.error("All ISO creation methods failed")
return False
# Verify the output file was created
if not output_path.exists():
logger.error("ISO file was not created despite success message")
return False
logger.info(f"ISO rebuilt successfully: {output_path}")
logger.info(
f"ISO size: {output_path.stat().st_size / (1024 * 1024):.1f} MB"
)
return True
except Exception as e:
logger.error(f"Error rebuilding ISO: {e}")
return False
finally:
# Return to original directory
os.chdir(original_cwd)
def build_autoinstall_iso(
self, user_data: str, output_path: Path, ubuntu_version: str = "24.04"
) -> bool:
"""Complete ISO build process following the Ubuntu autoinstall guide."""
logger.info(
f"🚀 Starting Ubuntu {ubuntu_version} autoinstall ISO build process"
)
try:
# Step 1: Check tools
if not self.check_tools():
return False
# Step 2: Download Ubuntu ISO
iso_path = self.download_ubuntu_iso(ubuntu_version)
# Step 3: Extract ISO
if not self.extract_iso(iso_path):
return False
# Step 4: Modify GRUB
if not self.modify_grub_config():
return False
# Step 5: Create autoinstall config
if not self.create_autoinstall_config(user_data):
return False
# Step 6: Rebuild ISO
if not self.rebuild_iso(output_path):
return False
logger.info(f"🎉 Successfully created autoinstall ISO: {output_path}")
logger.info(f"📁 Work directory: {self.work_dir}")
return True
except Exception as e:
logger.error(f"Failed to build autoinstall ISO: {e}")
return False
def cleanup(self):
"""Clean up temporary work directory."""
if self.work_dir.exists():
shutil.rmtree(self.work_dir)
logger.info(f"Cleaned up work directory: {self.work_dir}")
def main():
"""Test the ISO builder."""
import logging
logging.basicConfig(level=logging.INFO)
# Sample autoinstall user-data
user_data = """#cloud-config
autoinstall:
version: 1
packages:
- ubuntu-server
identity:
realname: 'Test User'
username: testuser
password: '$6$rounds=4096$saltsalt$[AWS-SECRET-REMOVED]AzpI8g8T14F8VnhXo0sUkZV2NV6/.c77tHgVi34DgbPu.'
hostname: test-vm
locale: en_US.UTF-8
keyboard:
layout: us
storage:
layout:
name: direct
ssh:
install-server: true
late-commands:
- curtin in-target -- apt-get autoremove -y
"""
builder = UbuntuISOBuilder("test-vm")
output_path = Path("/tmp/ubuntu-24.04-autoinstall.iso")
success = builder.build_autoinstall_iso(user_data, output_path)
if success:
print(f"✅ ISO created: {output_path}")
else:
print("❌ ISO creation failed")
# Optionally clean up
# builder.cleanup()
if __name__ == "__main__":
main()


@@ -1,288 +0,0 @@
#!/usr/bin/env python3
"""
Unraid VM Manager for ThrillWiki - Main Orchestrator
Follows the Ubuntu autoinstall guide exactly:
1. Creates modified Ubuntu ISO with autoinstall configuration
2. Manages VM lifecycle on Unraid server
3. Handles ThrillWiki deployment automation
"""
import os
import sys
import logging
from pathlib import Path
# Import our modular components
from iso_builder import UbuntuISOBuilder
from vm_manager import UnraidVMManager
# Configuration
UNRAID_HOST = os.environ.get("UNRAID_HOST", "localhost")
UNRAID_USER = os.environ.get("UNRAID_USER", "root")
VM_NAME = os.environ.get("VM_NAME", "thrillwiki-vm")
VM_MEMORY = int(os.environ.get("VM_MEMORY", 4096)) # MB
VM_VCPUS = int(os.environ.get("VM_VCPUS", 2))
VM_DISK_SIZE = int(os.environ.get("VM_DISK_SIZE", 50)) # GB
SSH_PUBLIC_KEY = os.environ.get("SSH_PUBLIC_KEY", "")
# Network Configuration
VM_IP = os.environ.get("VM_IP", "dhcp")
VM_GATEWAY = os.environ.get("VM_GATEWAY", "192.168.20.1")
VM_NETMASK = os.environ.get("VM_NETMASK", "255.255.255.0")
VM_NETWORK = os.environ.get("VM_NETWORK", "192.168.20.0/24")
# GitHub Configuration
REPO_URL = os.environ.get("REPO_URL", "")
GITHUB_USERNAME = os.environ.get("GITHUB_USERNAME", "")
GITHUB_TOKEN = os.environ.get("GITHUB_TOKEN", "")
# Ubuntu version preference
UBUNTU_VERSION = os.environ.get("UBUNTU_VERSION", "24.04")
# Setup logging
os.makedirs("logs", exist_ok=True)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[
logging.FileHandler("logs/unraid-vm.log"),
logging.StreamHandler(),
],
)
logger = logging.getLogger(__name__)
class ThrillWikiVMOrchestrator:
"""Main orchestrator for ThrillWiki VM deployment."""
def __init__(self):
self.vm_manager = UnraidVMManager(VM_NAME, UNRAID_HOST, UNRAID_USER)
self.iso_builder = None
def create_autoinstall_user_data(self) -> str:
"""Create autoinstall user-data configuration."""
# Read autoinstall template
template_path = Path(__file__).parent / "autoinstall-user-data.yaml"
if not template_path.exists():
raise FileNotFoundError(f"Autoinstall template not found: {template_path}")
with open(template_path, "r", encoding="utf-8") as f:
template = f.read()
# Replace placeholders using string replacement (avoiding .format() due
# to curly braces in YAML)
user_data = template.replace(
"{SSH_PUBLIC_KEY}",
SSH_PUBLIC_KEY if SSH_PUBLIC_KEY else "# No SSH key provided",
).replace("{GITHUB_REPO}", REPO_URL if REPO_URL else "")
# Update network configuration based on VM_IP setting
if VM_IP.lower() == "dhcp":
# Keep DHCP configuration as-is
pass
else:
# Replace with static IP configuration
network_config = f"""dhcp4: false
addresses:
- {VM_IP}/24
gateway4: {VM_GATEWAY}
nameservers:
addresses:
- 8.8.8.8
- 8.8.4.4"""
user_data = user_data.replace("dhcp4: true", network_config)
return user_data
def build_autoinstall_iso(self) -> Path:
"""Build Ubuntu autoinstall ISO following the guide."""
logger.info("🔨 Building Ubuntu autoinstall ISO...")
# Create ISO builder
self.iso_builder = UbuntuISOBuilder(VM_NAME)
# Create user-data configuration
user_data = self.create_autoinstall_user_data()
# Build autoinstall ISO
iso_output_path = Path(f"/tmp/{VM_NAME}-ubuntu-autoinstall.iso")
success = self.iso_builder.build_autoinstall_iso(
user_data=user_data,
output_path=iso_output_path,
ubuntu_version=UBUNTU_VERSION,
)
if not success:
raise RuntimeError("Failed to build autoinstall ISO")
logger.info(f"✅ Autoinstall ISO built successfully: {iso_output_path}")
return iso_output_path
def deploy_vm(self) -> bool:
"""Complete VM deployment process."""
try:
logger.info("🚀 Starting ThrillWiki VM deployment...")
# Step 1: Check SSH connectivity
logger.info("📡 Testing Unraid connectivity...")
if not self.vm_manager.authenticate():
logger.error("❌ Cannot connect to Unraid server")
return False
# Step 2: Build autoinstall ISO
logger.info("🔨 Building Ubuntu autoinstall ISO...")
iso_path = self.build_autoinstall_iso()
# Step 3: Upload ISO to Unraid
logger.info("📤 Uploading autoinstall ISO to Unraid...")
self.vm_manager.upload_iso_to_unraid(iso_path)
# Step 4: Create/update VM configuration
logger.info("⚙️ Creating VM configuration...")
success = self.vm_manager.create_vm(
vm_memory=VM_MEMORY,
vm_vcpus=VM_VCPUS,
vm_disk_size=VM_DISK_SIZE,
vm_ip=VM_IP,
)
if not success:
logger.error("❌ Failed to create VM configuration")
return False
# Step 5: Start VM
logger.info("🟢 Starting VM...")
success = self.vm_manager.start_vm()
if not success:
logger.error("❌ Failed to start VM")
return False
logger.info("🎉 VM deployment completed successfully!")
logger.info("")
logger.info("📋 Next Steps:")
logger.info("1. VM is now booting with Ubuntu autoinstall")
logger.info("2. Installation will take 15-30 minutes")
logger.info("3. Use 'python main.py ip' to get VM IP when ready")
logger.info("4. SSH to VM and run /home/thrillwiki/deploy-thrillwiki.sh")
logger.info("")
return True
except Exception as e:
logger.error(f"❌ VM deployment failed: {e}")
return False
finally:
# Cleanup ISO builder temp files
if self.iso_builder:
self.iso_builder.cleanup()
def get_vm_info(self) -> dict:
"""Get VM information."""
return {
"name": VM_NAME,
"status": self.vm_manager.vm_status(),
"ip": self.vm_manager.get_vm_ip(),
"memory": VM_MEMORY,
"vcpus": VM_VCPUS,
"disk_size": VM_DISK_SIZE,
}
def main():
"""Main entry point."""
import argparse
parser = argparse.ArgumentParser(
description="ThrillWiki VM Manager - Ubuntu Autoinstall on Unraid",
epilog="""
Examples:
python main.py setup # Complete VM setup with autoinstall
python main.py start # Start existing VM
python main.py ip # Get VM IP address
python main.py status # Get VM status
python main.py delete # Remove VM completely
""",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"action",
choices=[
"setup",
"create",
"start",
"stop",
"status",
"ip",
"delete",
"info",
],
help="Action to perform",
)
args = parser.parse_args()
# Create orchestrator
orchestrator = ThrillWikiVMOrchestrator()
if args.action == "setup":
logger.info("🚀 Setting up complete ThrillWiki VM environment...")
success = orchestrator.deploy_vm()
sys.exit(0 if success else 1)
elif args.action == "create":
logger.info("⚙️ Creating VM configuration...")
success = orchestrator.vm_manager.create_vm(
VM_MEMORY, VM_VCPUS, VM_DISK_SIZE, VM_IP
)
sys.exit(0 if success else 1)
elif args.action == "start":
logger.info("🟢 Starting VM...")
success = orchestrator.vm_manager.start_vm()
sys.exit(0 if success else 1)
elif args.action == "stop":
logger.info("🛑 Stopping VM...")
success = orchestrator.vm_manager.stop_vm()
sys.exit(0 if success else 1)
elif args.action == "status":
status = orchestrator.vm_manager.vm_status()
print(f"VM Status: {status}")
sys.exit(0)
elif args.action == "ip":
ip = orchestrator.vm_manager.get_vm_ip()
if ip:
print(f"VM IP: {ip}")
print(f"SSH: ssh thrillwiki@{ip}")
print(
f"Deploy: ssh thrillwiki@{ip} '/home/thrillwiki/deploy-thrillwiki.sh'"
)
sys.exit(0)
else:
print("❌ Failed to get VM IP (VM may not be ready yet)")
sys.exit(1)
elif args.action == "info":
info = orchestrator.get_vm_info()
print("🖥️ VM Information:")
print(f" Name: {info['name']}")
print(f" Status: {info['status']}")
print(f" IP: {info['ip'] or 'Not available'}")
print(f" Memory: {info['memory']} MB")
print(f" vCPUs: {info['vcpus']}")
print(f" Disk: {info['disk_size']} GB")
sys.exit(0)
elif args.action == "delete":
logger.info("🗑️ Deleting VM and all files...")
success = orchestrator.vm_manager.delete_vm()
sys.exit(0 if success else 1)
if __name__ == "__main__":
main()


@@ -1,456 +0,0 @@
#!/usr/bin/env python3
"""
Unraid VM Manager for ThrillWiki - Template-Based Main Orchestrator
Uses pre-built template VMs for fast deployment instead of autoinstall.
"""
import os
import sys
import logging
from pathlib import Path
# Import our modular components
from template_manager import TemplateVMManager
from vm_manager_template import UnraidTemplateVMManager
class ConfigLoader:
"""Dynamic configuration loader that reads environment variables when needed."""
def __init__(self):
# Try to load ***REMOVED***.unraid if it exists to ensure we have the
# latest config
self._load_env_file()
def _load_env_file(self):
"""Load ***REMOVED***.unraid file if it exists."""
# Find the project directory (two levels up from this script)
script_dir = Path(__file__).parent
project_dir = script_dir.parent.parent
env_file = project_dir / "***REMOVED***.unraid"
if env_file.exists():
try:
with open(env_file, "r") as f:
for line in f:
line = line.strip()
if line and not line.startswith("#") and "=" in line:
key, value = line.split("=", 1)
# Remove quotes if present
value = value.strip("\"'")
# Only set if not already in environment (env vars
# take precedence)
if key not in os.environ:
os.environ[key] = value
logging.info(f"📝 Loaded configuration from {env_file}")
except Exception as e:
logging.warning(f"⚠️ Could not load ***REMOVED***.unraid: {e}")
@property
def UNRAID_HOST(self):
return os.environ.get("UNRAID_HOST", "localhost")
@property
def UNRAID_USER(self):
return os.environ.get("UNRAID_USER", "root")
@property
def VM_NAME(self):
return os.environ.get("VM_NAME", "thrillwiki-vm")
@property
def VM_MEMORY(self):
return int(os.environ.get("VM_MEMORY", 4096))
@property
def VM_VCPUS(self):
return int(os.environ.get("VM_VCPUS", 2))
@property
def VM_DISK_SIZE(self):
return int(os.environ.get("VM_DISK_SIZE", 50))
@property
def SSH_PUBLIC_KEY(self):
return os.environ.get("SSH_PUBLIC_KEY", "")
@property
def VM_IP(self):
return os.environ.get("VM_IP", "dhcp")
@property
def VM_GATEWAY(self):
return os.environ.get("VM_GATEWAY", "192.168.20.1")
@property
def VM_NETMASK(self):
return os.environ.get("VM_NETMASK", "255.255.255.0")
@property
def VM_NETWORK(self):
return os.environ.get("VM_NETWORK", "192.168.20.0/24")
@property
def REPO_URL(self):
return os.environ.get("REPO_URL", "")
@property
def GITHUB_USERNAME(self):
return os.environ.get("GITHUB_USERNAME", "")
@property
def GITHUB_TOKEN(self):
return os.environ.get("GITHUB_TOKEN", "")
# Create a global configuration instance
config = ConfigLoader()
# Setup logging with reduced buffering
os.makedirs("logs", exist_ok=True)
# Configure console handler with line buffering
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(
logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
)
# Force flush after each log message
console_handler.flush = lambda: sys.stdout.flush()
# Configure file handler
file_handler = logging.FileHandler("logs/unraid-vm.log")
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(
logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
)
# Set up basic config with both handlers
logging.basicConfig(
level=logging.INFO,
handlers=[file_handler, console_handler],
)
# Ensure stdout is line buffered for real-time output
sys.stdout.reconfigure(line_buffering=True)
logger = logging.getLogger(__name__)
class ThrillWikiTemplateVMOrchestrator:
"""Main orchestrator for template-based ThrillWiki VM deployment."""
def __init__(self):
# Log current configuration for debugging
        logger.info(
            f"🔧 Using configuration: UNRAID_HOST={config.UNRAID_HOST}, "
            f"UNRAID_USER={config.UNRAID_USER}, VM_NAME={config.VM_NAME}"
        )
self.template_manager = TemplateVMManager(
config.UNRAID_HOST, config.UNRAID_USER
)
self.vm_manager = UnraidTemplateVMManager(
config.VM_NAME, config.UNRAID_HOST, config.UNRAID_USER
)
def check_template_ready(self) -> bool:
"""Check if template VM is ready for use."""
logger.info("🔍 Checking template VM availability...")
if not self.template_manager.check_template_exists():
logger.error("❌ Template VM disk not found!")
logger.error(
"Please ensure 'thrillwiki-template-ubuntu' VM exists and is properly configured"
)
logger.error(
"Template should be located at: /mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2"
)
return False
# Check template status
if not self.template_manager.update_template():
logger.warning("⚠️ Template VM may be running - this could cause issues")
logger.warning(
"Ensure the template VM is stopped before creating new instances"
)
info = self.template_manager.get_template_info()
if info:
            logger.info("📋 Template Info:")
logger.info(f" Virtual Size: {info['virtual_size']}")
logger.info(f" File Size: {info['file_size']}")
logger.info(f" Last Modified: {info['last_modified']}")
return True
def deploy_vm_from_template(self) -> bool:
"""Complete template-based VM deployment process."""
try:
logger.info("🚀 Starting ThrillWiki template-based VM deployment...")
# Step 1: Check SSH connectivity
logger.info("📡 Testing Unraid connectivity...")
if not self.vm_manager.authenticate():
logger.error("❌ Cannot connect to Unraid server")
return False
# Step 2: Check template availability
logger.info("🔍 Verifying template VM...")
if not self.check_template_ready():
logger.error("❌ Template VM not ready")
return False
# Step 3: Create VM from template
logger.info("⚙️ Creating VM from template...")
success = self.vm_manager.create_vm_from_template(
vm_memory=config.VM_MEMORY,
vm_vcpus=config.VM_VCPUS,
vm_disk_size=config.VM_DISK_SIZE,
vm_ip=config.VM_IP,
)
if not success:
logger.error("❌ Failed to create VM from template")
return False
# Step 4: Start VM
logger.info("🟢 Starting VM...")
success = self.vm_manager.start_vm()
if not success:
logger.error("❌ Failed to start VM")
return False
logger.info("🎉 Template-based VM deployment completed successfully!")
logger.info("")
logger.info("📋 Next Steps:")
logger.info("1. VM is now booting from template disk")
logger.info("2. Boot time should be much faster (2-5 minutes)")
logger.info("3. Use 'python main_template.py ip' to get VM IP when ready")
logger.info("4. SSH to VM and run deployment commands")
logger.info("")
return True
except Exception as e:
logger.error(f"❌ Template VM deployment failed: {e}")
return False
def deploy_and_configure_thrillwiki(self) -> bool:
"""Deploy VM from template and configure ThrillWiki."""
try:
logger.info("🚀 Starting complete ThrillWiki deployment from template...")
# Step 1: Deploy VM from template
if not self.deploy_vm_from_template():
return False
# Step 2: Wait for VM to be accessible and configure ThrillWiki
if config.REPO_URL:
logger.info("🔧 Configuring ThrillWiki on VM...")
success = self.vm_manager.customize_vm_for_thrillwiki(
config.REPO_URL, config.GITHUB_TOKEN
)
if success:
vm_ip = self.vm_manager.get_vm_ip()
logger.info("🎉 Complete ThrillWiki deployment successful!")
logger.info(f"🌐 ThrillWiki is available at: http://{vm_ip}:8000")
else:
logger.warning(
"⚠️ VM deployed but ThrillWiki configuration may have failed"
)
logger.info(
"You can manually configure ThrillWiki by SSH'ing to the VM"
)
else:
logger.info(
"📝 No repository URL provided - VM deployed but ThrillWiki not configured"
)
logger.info(
"Set REPO_URL environment variable to auto-configure ThrillWiki"
)
return True
except Exception as e:
logger.error(f"❌ Complete deployment failed: {e}")
return False
def get_vm_info(self) -> dict:
"""Get VM information."""
return {
"name": config.VM_NAME,
"status": self.vm_manager.vm_status(),
"ip": self.vm_manager.get_vm_ip(),
"memory": config.VM_MEMORY,
"vcpus": config.VM_VCPUS,
"disk_size": config.VM_DISK_SIZE,
"deployment_type": "template-based",
}
def main():
"""Main entry point."""
import argparse
parser = argparse.ArgumentParser(
description="ThrillWiki Template-Based VM Manager - Fast VM deployment using templates",
epilog="""
Examples:
python main_template.py setup # Deploy VM from template only
python main_template.py deploy # Deploy VM and configure ThrillWiki
python main_template.py start # Start existing VM
python main_template.py ip # Get VM IP address
python main_template.py status # Get VM status
python main_template.py delete # Remove VM completely
python main_template.py template # Manage template VM
""",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"action",
choices=[
"setup",
"deploy",
"create",
"start",
"stop",
"status",
"ip",
"delete",
"info",
"template",
],
help="Action to perform",
)
parser.add_argument(
"template_action",
nargs="?",
choices=["info", "check", "update", "list"],
help="Template management action (used with 'template' action)",
)
args = parser.parse_args()
# Create orchestrator
orchestrator = ThrillWikiTemplateVMOrchestrator()
if args.action == "setup":
logger.info("🚀 Setting up VM from template...")
success = orchestrator.deploy_vm_from_template()
sys.exit(0 if success else 1)
elif args.action == "deploy":
logger.info("🚀 Complete ThrillWiki deployment from template...")
success = orchestrator.deploy_and_configure_thrillwiki()
sys.exit(0 if success else 1)
elif args.action == "create":
logger.info("⚙️ Creating VM from template...")
success = orchestrator.vm_manager.create_vm_from_template(
config.VM_MEMORY,
config.VM_VCPUS,
config.VM_DISK_SIZE,
config.VM_IP,
)
sys.exit(0 if success else 1)
elif args.action == "start":
logger.info("🟢 Starting VM...")
success = orchestrator.vm_manager.start_vm()
sys.exit(0 if success else 1)
elif args.action == "stop":
logger.info("🛑 Stopping VM...")
success = orchestrator.vm_manager.stop_vm()
sys.exit(0 if success else 1)
elif args.action == "status":
status = orchestrator.vm_manager.vm_status()
print(f"VM Status: {status}")
sys.exit(0)
elif args.action == "ip":
ip = orchestrator.vm_manager.get_vm_ip()
if ip:
print(f"VM IP: {ip}")
print(f"SSH: ssh thrillwiki@{ip}")
print(f"ThrillWiki: http://{ip}:8000")
sys.exit(0)
else:
print("❌ Failed to get VM IP (VM may not be ready yet)")
sys.exit(1)
elif args.action == "info":
info = orchestrator.get_vm_info()
print("🖥️ VM Information:")
print(f" Name: {info['name']}")
print(f" Status: {info['status']}")
print(f" IP: {info['ip'] or 'Not available'}")
print(f" Memory: {info['memory']} MB")
print(f" vCPUs: {info['vcpus']}")
print(f" Disk: {info['disk_size']} GB")
print(f" Type: {info['deployment_type']}")
sys.exit(0)
elif args.action == "delete":
logger.info("🗑️ Deleting VM and all files...")
success = orchestrator.vm_manager.delete_vm()
sys.exit(0 if success else 1)
elif args.action == "template":
template_action = args.template_action or "info"
if template_action == "info":
logger.info("📋 Template VM Information")
info = orchestrator.template_manager.get_template_info()
if info:
print(f"Template Path: {info['template_path']}")
print(f"Virtual Size: {info['virtual_size']}")
print(f"File Size: {info['file_size']}")
print(f"Last Modified: {info['last_modified']}")
else:
print("❌ Failed to get template information")
sys.exit(1)
elif template_action == "check":
if orchestrator.template_manager.check_template_exists():
logger.info("✅ Template VM disk exists and is ready to use")
sys.exit(0)
else:
logger.error("❌ Template VM disk not found")
sys.exit(1)
elif template_action == "update":
success = orchestrator.template_manager.update_template()
sys.exit(0 if success else 1)
elif template_action == "list":
logger.info("📋 Template-based VM Instances")
instances = orchestrator.template_manager.list_template_instances()
if instances:
for instance in instances:
status_emoji = (
"🟢"
if instance["status"] == "running"
else "🔴" if instance["status"] == "shut off" else "🟡"
)
                    print(f"{status_emoji} {instance['name']} ({instance['status']})")
else:
print("No template instances found")
sys.exit(0)
if __name__ == "__main__":
main()

File diff suppressed because it is too large


@@ -1,75 +0,0 @@
#!/bin/bash
# ThrillWiki Template VM SSH Key Setup Helper
# This script generates the SSH key needed for template VM access
set -e
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo -e "${BLUE}ThrillWiki Template VM SSH Key Setup${NC}"
echo "[AWS-SECRET-REMOVED]"
echo
SSH_KEY_PATH="$HOME/.ssh/thrillwiki_vm"
# Generate SSH key if it doesn't exist
if [ ! -f "$SSH_KEY_PATH" ]; then
echo -e "${YELLOW}Generating new SSH key for ThrillWiki template VM...${NC}"
ssh-keygen -t rsa -b 4096 -f "$SSH_KEY_PATH" -N "" -C "thrillwiki-template-vm-access"
echo -e "${GREEN}✅ SSH key generated: $SSH_KEY_PATH${NC}"
echo
else
echo -e "${GREEN}✅ SSH key already exists: $SSH_KEY_PATH${NC}"
echo
fi
# Display the public key
echo -e "${YELLOW}📋 Your SSH Public Key:${NC}"
echo "Copy this ENTIRE line and add it to your template VM:"
echo
echo -e "${GREEN}$(cat "$SSH_KEY_PATH.pub")${NC}"
echo
# Instructions
echo -e "${BLUE}📝 Template VM Setup Instructions:${NC}"
echo "1. SSH into your template VM (thrillwiki-template-ubuntu)"
echo "2. Switch to the thrillwiki user:"
echo " sudo su - thrillwiki"
echo "3. Create .ssh directory and set permissions:"
echo " mkdir -p ~/.ssh && chmod 700 ~/.ssh"
echo "4. Add the public key above to ***REMOVED***:"
echo " echo 'YOUR_PUBLIC_KEY_HERE' >> ~/.ssh/***REMOVED***"
echo " chmod 600 ~/.ssh/***REMOVED***"
echo "5. Test SSH access:"
echo " ssh -i ~/.ssh/thrillwiki_vm thrillwiki@YOUR_TEMPLATE_VM_IP"
echo
# SSH config helper
SSH_CONFIG="$HOME/.ssh/config"
echo -e "${BLUE}🔧 SSH Config Setup:${NC}"
if ! grep -q "thrillwiki-vm" "$SSH_CONFIG" 2>/dev/null; then
echo "Adding SSH config entry..."
cat >> "$SSH_CONFIG" << EOF
# ThrillWiki Template VM
Host thrillwiki-vm
HostName %h
User thrillwiki
IdentityFile $SSH_KEY_PATH
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOF
echo -e "${GREEN}✅ SSH config updated${NC}"
else
echo -e "${GREEN}✅ SSH config already contains thrillwiki-vm entry${NC}"
fi
echo
echo -e "${GREEN}🎉 SSH key setup complete!${NC}"
echo "Next: Set up your template VM using TEMPLATE_VM_SETUP.md"
echo "Then run: ./setup-template-automation.sh"

File diff suppressed because it is too large


@@ -1,249 +0,0 @@
#!/bin/bash
#
# ThrillWiki Template VM Management Utilities
# Quick helpers for managing template VMs on Unraid
#
# Set strict mode
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log() {
echo -e "${BLUE}[TEMPLATE]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Load environment variables if available
if [[ -f "$PROJECT_DIR/***REMOVED***.unraid" ]]; then
source "$PROJECT_DIR/***REMOVED***.unraid"
else
log_error "No ***REMOVED***.unraid file found. Please run setup-complete-automation.sh first."
exit 1
fi
# Function to show help
show_help() {
echo "ThrillWiki Template VM Management Utilities"
echo ""
echo "Usage:"
echo " $0 check Check if template exists and is ready"
echo " $0 info Show template information"
echo " $0 list List all template-based VM instances"
echo " $0 copy VM_NAME Copy template to new VM"
echo " $0 deploy VM_NAME Deploy complete VM from template"
echo " $0 status Show template VM status"
echo " $0 update Update template VM (instructions)"
echo " $0 autopull Manage auto-pull functionality"
echo ""
echo "Auto-pull Commands:"
echo " $0 autopull status Show auto-pull status on VMs"
echo " $0 autopull enable VM Enable auto-pull on specific VM"
echo " $0 autopull disable VM Disable auto-pull on specific VM"
echo " $0 autopull logs VM Show auto-pull logs from VM"
echo " $0 autopull test VM Test auto-pull on specific VM"
echo ""
echo "Examples:"
echo " $0 check # Verify template is ready"
echo " $0 copy thrillwiki-prod # Copy template to new VM"
echo " $0 deploy thrillwiki-test # Complete deployment from template"
echo " $0 autopull status # Check auto-pull status on all VMs"
echo " $0 autopull logs $VM_NAME # View auto-pull logs"
}
# Check if required environment variables are set
check_environment() {
if [[ -z "$UNRAID_HOST" ]]; then
log_error "UNRAID_HOST not set. Please configure your environment."
exit 1
fi
if [[ -z "$UNRAID_USER" ]]; then
UNRAID_USER="root"
log "Using default UNRAID_USER: $UNRAID_USER"
fi
log_success "Environment configured: $UNRAID_USER@$UNRAID_HOST"
}
# Function to run python template manager commands
run_template_manager() {
cd "$SCRIPT_DIR"
export UNRAID_HOST="$UNRAID_HOST"
export UNRAID_USER="$UNRAID_USER"
python3 template_manager.py "$@"
}
# Function to run template-based main script
run_main_template() {
cd "$SCRIPT_DIR"
# Export all environment variables
export UNRAID_HOST="$UNRAID_HOST"
export UNRAID_USER="$UNRAID_USER"
export VM_NAME="$1"
export VM_MEMORY="${VM_MEMORY:-4096}"
export VM_VCPUS="${VM_VCPUS:-2}"
export VM_DISK_SIZE="${VM_DISK_SIZE:-50}"
export VM_IP="${VM_IP:-dhcp}"
export REPO_URL="${REPO_URL:-}"
export GITHUB_TOKEN="${GITHUB_TOKEN:-}"
shift # Remove VM_NAME from arguments
python3 main_template.py "$@"
}
# Parse command line arguments
case "${1:-}" in
check)
log "🔍 Checking template VM availability..."
check_environment
run_template_manager check
;;
info)
log "📋 Getting template VM information..."
check_environment
run_template_manager info
;;
list)
log "📋 Listing template-based VM instances..."
check_environment
run_template_manager list
;;
copy)
if [[ -z "${2:-}" ]]; then
log_error "VM name is required for copy operation"
echo "Usage: $0 copy VM_NAME"
exit 1
fi
log "💾 Copying template to VM: $2"
check_environment
run_template_manager copy "$2"
;;
deploy)
if [[ -z "${2:-}" ]]; then
log_error "VM name is required for deploy operation"
echo "Usage: $0 deploy VM_NAME"
exit 1
fi
log "🚀 Deploying complete VM from template: $2"
check_environment
run_main_template "$2" deploy
;;
status)
log "📊 Checking template VM status..."
check_environment
# Check template VM status directly
ssh "$UNRAID_USER@$UNRAID_HOST" "virsh domstate thrillwiki-template-ubuntu" 2>/dev/null || {
log_error "Could not check template VM status"
exit 1
}
;;
update)
log "🔄 Template VM update instructions:"
echo ""
echo "To update your template VM:"
echo "1. Start the template VM on Unraid"
echo "2. SSH into the template VM"
echo "3. Update packages: sudo apt update && sudo apt upgrade -y"
echo "4. Update ThrillWiki dependencies if needed"
echo "5. Clean up temporary files: sudo apt autoremove && sudo apt autoclean"
echo "6. Clear bash history: history -c && history -w"
echo "7. Shutdown the template VM: sudo shutdown now"
echo "8. The updated disk is now ready as a template"
echo ""
log_warning "IMPORTANT: Template VM must be stopped before creating new instances"
check_environment
run_template_manager update
;;
autopull)
shift # Remove 'autopull' from arguments
autopull_command="${1:-status}"
vm_name="${2:-$VM_NAME}"
log "🔄 Managing auto-pull functionality..."
check_environment
# Get list of all template VMs
if [[ "$autopull_command" == "status" ]] && [[ "$vm_name" == "$VM_NAME" ]]; then
all_vms=$(run_template_manager list | grep -E "(running|shut off)" | awk '{print $2}' || echo "")
else
all_vms=$vm_name
fi
if [[ -z "$all_vms" ]]; then
log_warning "No running template VMs found to manage auto-pull on."
exit 0
fi
for vm in $all_vms; do
log "====== Auto-pull for VM: $vm ======"
case "$autopull_command" in
status)
ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --status"
;;
enable)
ssh "$vm" "(crontab -l 2>/dev/null || echo \"\") | { cat; echo \"*/10 * * * * [AWS-SECRET-REMOVED]uto-pull.sh >> /home/thrillwiki/logs/cron.log 2>&1\"; } | crontab - && echo '✅ Auto-pull enabled' || echo '❌ Failed to enable'"
;;
disable)
ssh "$vm" "crontab -l 2>/dev/null | grep -v 'auto-pull.sh' | crontab - && echo '✅ Auto-pull disabled' || echo '❌ Failed to disable'"
;;
logs)
ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --logs"
;;
test)
ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --force"
;;
*)
log_error "Invalid auto-pull command: $autopull_command"
show_help
exit 1
;;
esac
echo
done
;;
--help|-h|help|"")
show_help
;;
*)
log_error "Unknown command: ${1:-}"
echo ""
show_help
exit 1
;;
esac
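The `check_environment` fallback above (defaulting `UNRAID_USER` to `root`) can be exercised in isolation; a minimal sketch with a placeholder address, not a real host:

```shell
# Placeholder host; UNRAID_USER falls back to root exactly as check_environment does.
unset UNRAID_USER
UNRAID_HOST="192.168.1.50"
UNRAID_USER="${UNRAID_USER:-root}"
echo "Environment configured: $UNRAID_USER@$UNRAID_HOST"
```

The same `${VAR:-default}` expansion avoids the explicit `if [[ -z ... ]]` branch used in the script.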


@@ -1,571 +0,0 @@
#!/usr/bin/env python3
"""
Template VM Manager for ThrillWiki
Handles copying template VM disks and managing template-based deployments.
"""
import os
import sys
import time
import logging
import subprocess
from typing import Dict
logger = logging.getLogger(__name__)
class TemplateVMManager:
"""Manages template-based VM deployment on Unraid."""
def __init__(self, unraid_host: str, unraid_user: str = "root"):
self.unraid_host = unraid_host
self.unraid_user = unraid_user
self.template_vm_name = "thrillwiki-template-ubuntu"
self.template_path = f"/mnt/user/domains/{self.template_vm_name}"
def authenticate(self) -> bool:
"""Test SSH connectivity to Unraid server."""
try:
result = subprocess.run(
f"ssh -o ConnectTimeout=10 {self.unraid_user}@{self.unraid_host} 'echo Connected'",
shell=True,
capture_output=True,
text=True,
timeout=15,
)
if result.returncode == 0 and "Connected" in result.stdout:
logger.info("Successfully connected to Unraid via SSH")
return True
else:
logger.error(f"SSH connection failed: {result.stderr}")
return False
except Exception as e:
logger.error(f"SSH authentication error: {e}")
return False
def check_template_exists(self) -> bool:
"""Check if template VM disk exists."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info(f"Template VM disk found at {self.template_path}/vdisk1.qcow2")
return True
else:
logger.error(f"Template VM disk not found at {self.template_path}/vdisk1.qcow2")
return False
except Exception as e:
logger.error(f"Error checking template existence: {e}")
return False
def get_template_info(self) -> Dict[str, str]:
"""Get information about the template VM."""
try:
# Get disk size
size_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'qemu-img info {self.template_path}/vdisk1.qcow2 | grep \"virtual size\"'",
shell=True,
capture_output=True,
text=True,
)
# Get file size
file_size_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'ls -lh {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
# Get last modification time
mod_time_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'stat -c \"%y\" {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
info = {
"template_path": f"{self.template_path}/vdisk1.qcow2",
"virtual_size": (
size_result.stdout.strip()
if size_result.returncode == 0
else "Unknown"
),
"file_size": (
file_size_result.stdout.split()[4]
if file_size_result.returncode == 0
else "Unknown"
),
"last_modified": (
mod_time_result.stdout.strip()
if mod_time_result.returncode == 0
else "Unknown"
),
}
return info
except Exception as e:
logger.error(f"Error getting template info: {e}")
return {}
def copy_template_disk(self, target_vm_name: str) -> bool:
"""Copy template VM disk to a new VM instance."""
try:
if not self.check_template_exists():
logger.error("Template VM disk not found. Cannot proceed with copy.")
return False
target_path = f"/mnt/user/domains/{target_vm_name}"
target_disk = f"{target_path}/vdisk1.qcow2"
logger.info(f"Copying template disk to new VM: {target_vm_name}")
# Create target directory
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'mkdir -p {target_path}'",
shell=True,
check=True,
)
# Check if target disk already exists
disk_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {target_disk}'",
shell=True,
capture_output=True,
)
if disk_check.returncode == 0:
logger.warning(f"Target disk already exists: {target_disk}")
logger.info(
"Removing existing disk to replace with fresh template copy..."
)
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -f {target_disk}'",
shell=True,
check=True,
)
# Copy template disk with rsync progress display
logger.info("🚀 Copying template disk with rsync progress display...")
start_time = time.time()
# First, get the size of the template disk for progress calculation
size_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'stat -c%s {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
template_size = "unknown size"
if size_result.returncode == 0:
size_bytes = int(size_result.stdout.strip())
if size_bytes > 1024 * 1024 * 1024: # GB
template_size = f"{size_bytes / (1024 * 1024 * 1024):.1f}GB"
elif size_bytes > 1024 * 1024: # MB
template_size = f"{size_bytes / (1024 * 1024):.1f}MB"
else:
template_size = f"{size_bytes / 1024:.1f}KB"
logger.info(f"📊 Template disk size: {template_size}")
# Use rsync with progress display
logger.info("📈 Using rsync for real-time progress display...")
# Force rsync to output progress to stderr and capture it
copy_cmd = f"ssh {self.unraid_user}@{self.unraid_host} 'rsync -av --progress --stats {self.template_path}/vdisk1.qcow2 {target_disk}'"
# Run with real-time output, unbuffered
process = subprocess.Popen(
copy_cmd,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=0, # Unbuffered
universal_newlines=True,
)
import select
# Read both stdout and stderr for progress with real-time display
while True:
# Check if process is still running
if process.poll() is not None:
# Process finished, read any remaining output
remaining_out = process.stdout.read()
remaining_err = process.stderr.read()
if remaining_out:
print(f"📊 {remaining_out.strip()}", flush=True)
logger.info(f"📊 {remaining_out.strip()}")
if remaining_err:
for line in remaining_err.strip().split("\n"):
if line.strip():
print(f"{line.strip()}", flush=True)
logger.info(f"{line.strip()}")
break
# Use select to check for available data
try:
ready, _, _ = select.select(
[process.stdout, process.stderr], [], [], 0.1
)
for stream in ready:
line = stream.readline()
if line:
line = line.strip()
if line:
if stream == process.stdout:
print(f"📊 {line}", flush=True)
logger.info(f"📊 {line}")
else: # stderr
# rsync progress goes to stderr
if any(
keyword in line
for keyword in [
"%",
"bytes/sec",
"to-check=",
"xfr#",
]
):
print(f"{line}", flush=True)
logger.info(f"{line}")
else:
print(f"📋 {line}", flush=True)
logger.info(f"📋 {line}")
except select.error:
# Fallback for systems without select (like some Windows
# environments)
print(
"⚠️ select() not available, using fallback method...",
flush=True,
)
logger.info("⚠️ select() not available, using fallback method...")
# Simple fallback - just wait and read what's available
time.sleep(0.5)
try:
# Try to read non-blocking
import fcntl
import os
# Make stdout/stderr non-blocking
fd_out = process.stdout.fileno()
fd_err = process.stderr.fileno()
fl_out = fcntl.fcntl(fd_out, fcntl.F_GETFL)
fl_err = fcntl.fcntl(fd_err, fcntl.F_GETFL)
fcntl.fcntl(fd_out, fcntl.F_SETFL, fl_out | os.O_NONBLOCK)
fcntl.fcntl(fd_err, fcntl.F_SETFL, fl_err | os.O_NONBLOCK)
try:
out_line = process.stdout.readline()
if out_line:
print(f"📊 {out_line.strip()}", flush=True)
logger.info(f"📊 {out_line.strip()}")
except BaseException:
pass
try:
err_line = process.stderr.readline()
if err_line:
if any(
keyword in err_line
for keyword in [
"%",
"bytes/sec",
"to-check=",
"xfr#",
]
):
print(f"{err_line.strip()}", flush=True)
logger.info(f"{err_line.strip()}")
else:
print(f"📋 {err_line.strip()}", flush=True)
logger.info(f"📋 {err_line.strip()}")
except BaseException:
pass
except ImportError:
# If fcntl not available, just continue
print(
"📊 Progress display limited - continuing copy...",
flush=True,
)
logger.info("📊 Progress display limited - continuing copy...")
break
copy_result_code = process.wait()
end_time = time.time()
copy_time = end_time - start_time
if copy_result_code == 0:
logger.info(f"✅ Template disk copied successfully in {copy_time:.1f} seconds")
logger.info(f"🎯 New VM disk created: {target_disk}")
# Verify the copy by checking file size
verify_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'ls -lh {target_disk}'",
shell=True,
capture_output=True,
text=True,
)
if verify_result.returncode == 0:
file_info = verify_result.stdout.strip().split()
if len(file_info) >= 5:
copied_size = file_info[4]
logger.info(f"📋 Copied disk size: {copied_size}")
return True
else:
logger.error(
f"❌ Failed to copy template disk (exit code: {copy_result_code})"
)
logger.error("Check Unraid server disk space and permissions")
return False
except Exception as e:
logger.error(f"Error copying template disk: {e}")
return False
def prepare_vm_from_template(
self, target_vm_name: str, vm_memory: int, vm_vcpus: int, vm_ip: str
) -> bool:
"""Complete template-based VM preparation."""
try:
logger.info(f"Preparing VM '{target_vm_name}' from template...")
# Step 1: Copy template disk
if not self.copy_template_disk(target_vm_name):
return False
logger.info(f"VM '{target_vm_name}' prepared successfully from template")
logger.info("The VM disk is ready with Ubuntu pre-installed")
logger.info("You can now create the VM configuration and start it")
return True
except Exception as e:
logger.error(f"Error preparing VM from template: {e}")
return False
def update_template(self) -> bool:
"""Update the template VM with latest changes."""
try:
logger.info("Updating template VM...")
logger.info("Note: This should be done manually by:")
logger.info("1. Starting the template VM")
logger.info("2. Updating Ubuntu packages")
logger.info("3. Updating ThrillWiki dependencies")
logger.info("4. Stopping the template VM")
logger.info("5. The disk will automatically be the new template")
# Check template VM status
template_status = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {self.template_vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if template_status.returncode == 0:
status = template_status.stdout.strip()
logger.info(f"Template VM '{self.template_vm_name}' status: {status}")
if status == "running":
logger.warning("Template VM is currently running!")
logger.warning("Stop the template VM when updates are complete")
logger.warning("Running VMs should not be used as templates")
return False
elif status in ["shut off", "shutoff"]:
logger.info(
"Template VM is properly stopped and ready to use as template"
)
return True
else:
logger.warning(f"Template VM in unexpected state: {status}")
return False
else:
logger.error("Could not check template VM status")
return False
except Exception as e:
logger.error(f"Error updating template: {e}")
return False
def list_template_instances(self) -> list:
"""List all VMs that were created from the template."""
try:
# Get all domains
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --all --name'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode != 0:
logger.error("Failed to list VMs")
return []
all_vms = result.stdout.strip().split("\n")
# Filter for thrillwiki VMs (excluding template)
template_instances = []
for vm in all_vms:
vm = vm.strip()
if vm and "thrillwiki" in vm.lower() and vm != self.template_vm_name:
# Get VM status
status_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {vm}'",
shell=True,
capture_output=True,
text=True,
)
status = (
status_result.stdout.strip()
if status_result.returncode == 0
else "unknown"
)
template_instances.append({"name": vm, "status": status})
return template_instances
except Exception as e:
logger.error(f"Error listing template instances: {e}")
return []
def main():
"""Main entry point for template manager."""
import argparse
parser = argparse.ArgumentParser(
description="ThrillWiki Template VM Manager",
epilog="""
Examples:
python template_manager.py info # Show template info
python template_manager.py copy my-vm # Copy template to new VM
python template_manager.py list # List template instances
python template_manager.py update # Update template VM
""",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"action",
choices=["info", "copy", "list", "update", "check"],
help="Action to perform",
)
parser.add_argument("vm_name", nargs="?", help="VM name (required for copy action)")
args = parser.parse_args()
# Get Unraid connection details from environment
unraid_host = os.environ.get("UNRAID_HOST")
unraid_user = os.environ.get("UNRAID_USER", "root")
if not unraid_host:
logger.error("UNRAID_HOST environment variable is required")
sys.exit(1)
# Create template manager
template_manager = TemplateVMManager(unraid_host, unraid_user)
# Authenticate
if not template_manager.authenticate():
logger.error("Failed to connect to Unraid server")
sys.exit(1)
if args.action == "info":
logger.info("📋 Template VM Information")
info = template_manager.get_template_info()
if info:
print(f"Template Path: {info['template_path']}")
print(f"Virtual Size: {info['virtual_size']}")
print(f"File Size: {info['file_size']}")
print(f"Last Modified: {info['last_modified']}")
else:
print("❌ Failed to get template information")
sys.exit(1)
elif args.action == "check":
if template_manager.check_template_exists():
logger.info("✅ Template VM disk exists and is ready to use")
sys.exit(0)
else:
logger.error("❌ Template VM disk not found")
sys.exit(1)
elif args.action == "copy":
if not args.vm_name:
logger.error("VM name is required for copy action")
sys.exit(1)
success = template_manager.copy_template_disk(args.vm_name)
sys.exit(0 if success else 1)
elif args.action == "list":
logger.info("📋 Template-based VM Instances")
instances = template_manager.list_template_instances()
if instances:
for instance in instances:
status_emoji = (
"🟢"
if instance["status"] == "running"
else "🔴" if instance["status"] == "shut off" else "🟡"
)
print(f"{status_emoji} {instance['name']} ({instance['status']})")
else:
print("No template instances found")
elif args.action == "update":
success = template_manager.update_template()
sys.exit(0 if success else 1)
if __name__ == "__main__":
# Setup logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[logging.StreamHandler()],
)
main()
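The inline size-formatting in `copy_template_disk` is easier to verify as a standalone helper; a minimal sketch (the function name is illustrative, not from the original):

```python
def format_size(size_bytes: int) -> str:
    """Render a byte count as KB/MB/GB, matching the thresholds used above."""
    if size_bytes > 1024 ** 3:  # GB
        return f"{size_bytes / 1024 ** 3:.1f}GB"
    elif size_bytes > 1024 ** 2:  # MB
        return f"{size_bytes / 1024 ** 2:.1f}MB"
    return f"{size_bytes / 1024:.1f}KB"

print(format_size(3 * 1024 ** 3))  # → 3.0GB
```

Factoring this out also keeps the long f-strings in the copy routine on single lines.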


@@ -1,116 +0,0 @@
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
<name>{VM_NAME}</name>
<uuid>{VM_UUID}</uuid>
<metadata>
<vmtemplate xmlns="unraid" name="ThrillWiki VM (Template-based)" iconold="ubuntu.png" icon="ubuntu.png" os="linux" webui=""/>
</metadata>
<memory unit='KiB'>{VM_MEMORY_KIB}</memory>
<currentMemory unit='KiB'>{VM_MEMORY_KIB}</currentMemory>
<vcpu placement='static'>{VM_VCPUS}</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-9.2'>hvm</type>
<loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram>/etc/libvirt/qemu/nvram/{VM_UUID}_VARS-pure-efi.fd</nvram>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'>
<topology sockets='1' dies='1' clusters='1' cores='{CPU_CORES}' threads='{CPU_THREADS}'/>
<cache mode='passthrough'/>
<feature policy='require' name='topoext'/>
</cpu>
<clock offset='utc'>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/local/sbin/qemu</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writeback' discard='ignore'/>
<source file='/mnt/user/domains/{VM_NAME}/vdisk1.qcow2'/>
<target dev='hdc' bus='virtio'/>
<boot order='1'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:{MAC_SUFFIX}'/>
<source bridge='br0.20'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' sharePolicy='ignore'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<audio id='1' type='none'/>
<video>
<model type='qxl' ram='65536' vram='65536' vram64='65535' vgamem='65536' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</video>
<watchdog model='itco' action='reset'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
</devices>
</domain>
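The `{VM_NAME}`-style tokens in the XML template above can be filled with plain string replacement; a sketch using a shortened template and hypothetical instance values:

```python
import uuid

# Tokens mirror the libvirt template above; the concrete values are placeholders.
template = "<name>{VM_NAME}</name><uuid>{VM_UUID}</uuid><memory unit='KiB'>{VM_MEMORY_KIB}</memory>"
values = {
    "VM_NAME": "thrillwiki-prod",       # hypothetical instance name
    "VM_UUID": str(uuid.uuid4()),
    "VM_MEMORY_KIB": str(4096 * 1024),  # 4 GiB expressed in KiB
}
xml = template
for key, val in values.items():
    xml = xml.replace("{" + key + "}", val)
print(xml)
```

Token-by-token `str.replace` is used instead of `str.format` so that any literal braces elsewhere in a full domain XML are left untouched.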


@@ -1,127 +0,0 @@
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
<name>{VM_NAME}</name>
<uuid>{VM_UUID}</uuid>
<metadata>
<vmtemplate xmlns="unraid" name="ThrillWiki VM" iconold="ubuntu.png" icon="ubuntu.png" os="linux" webui=""/>
</metadata>
<memory unit='KiB'>{VM_MEMORY_KIB}</memory>
<currentMemory unit='KiB'>{VM_MEMORY_KIB}</currentMemory>
<vcpu placement='static'>{VM_VCPUS}</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-9.2'>hvm</type>
<loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram>/etc/libvirt/qemu/nvram/{VM_UUID}_VARS-pure-efi.fd</nvram>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'>
<topology sockets='1' dies='1' clusters='1' cores='{CPU_CORES}' threads='{CPU_THREADS}'/>
<cache mode='passthrough'/>
<feature policy='require' name='topoext'/>
</cpu>
<clock offset='utc'>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/local/sbin/qemu</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writeback' discard='ignore'/>
<source file='/mnt/user/domains/{VM_NAME}/vdisk1.qcow2'/>
<target dev='hdc' bus='virtio'/>
<boot order='2'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/mnt/user/isos/{VM_NAME}-ubuntu-autoinstall.iso'/>
<target dev='hda' bus='sata'/>
<readonly/>
<boot order='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:{MAC_SUFFIX}'/>
<source bridge='br0.20'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' sharePolicy='ignore'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<audio id='1' type='none'/>
<video>
<model type='qxl' ram='65536' vram='65536' vram64='65535' vgamem='65536' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</video>
<watchdog model='itco' action='reset'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
</devices>
</domain>


@@ -1,212 +0,0 @@
#!/usr/bin/env python3
"""
Validate autoinstall configuration against Ubuntu's schema.
This script provides basic validation to check if our autoinstall config
complies with the official schema structure.
"""
import yaml
import sys
from pathlib import Path
def load_autoinstall_config(template_path: str) -> dict:
"""Load the autoinstall configuration from the template file."""
with open(template_path, "r") as f:
content = f.read()
# Parse the cloud-config YAML
config = yaml.safe_load(content)
# Extract the autoinstall section
if "autoinstall" in config:
return config["autoinstall"]
else:
raise ValueError("No autoinstall section found in cloud-config")
def validate_required_fields(config: dict) -> list:
"""Validate required fields according to schema."""
errors = []
# Check version field (required)
if "version" not in config:
errors.append("Missing required field: version")
elif not isinstance(config["version"], int) or config["version"] != 1:
errors.append("Invalid version: must be integer 1")
return errors
def validate_identity_section(config: dict) -> list:
"""Validate identity section."""
errors = []
if "identity" in config:
identity = config["identity"]
required_fields = ["username", "hostname", "password"]
for field in required_fields:
if field not in identity:
errors.append(f"Identity section missing required field: {field}")
# Additional validation
if "username" in identity and not isinstance(identity["username"], str):
errors.append("Identity username must be a string")
if "hostname" in identity and not isinstance(identity["hostname"], str):
errors.append("Identity hostname must be a string")
return errors
def validate_network_section(config: dict) -> list:
"""Validate network section."""
errors = []
if "network" in config:
network = config["network"]
if "version" not in network:
errors.append("Network section missing required field: version")
elif network["version"] != 2:
errors.append("Network version must be 2")
return errors
def validate_keyboard_section(config: dict) -> list:
"""Validate keyboard section."""
errors = []
if "keyboard" in config:
keyboard = config["keyboard"]
if "layout" not in keyboard:
errors.append("Keyboard section missing required field: layout")
return errors
def validate_ssh_section(config: dict) -> list:
"""Validate SSH section."""
errors = []
if "ssh" in config:
ssh = config["ssh"]
if "install-server" in ssh and not isinstance(ssh["install-server"], bool):
errors.append("SSH install-server must be boolean")
if "authorized-keys" in ssh and not isinstance(ssh["authorized-keys"], list):
errors.append("SSH authorized-keys must be an array")
if "allow-pw" in ssh and not isinstance(ssh["allow-pw"], bool):
errors.append("SSH allow-pw must be boolean")
return errors
def validate_packages_section(config: dict) -> list:
"""Validate packages section."""
errors = []
if "packages" in config:
packages = config["packages"]
if not isinstance(packages, list):
errors.append("Packages must be an array")
else:
for i, package in enumerate(packages):
if not isinstance(package, str):
errors.append(f"Package at index {i} must be a string")
return errors
def validate_commands_sections(config: dict) -> list:
"""Validate early-commands and late-commands sections."""
errors = []
for section_name in ["early-commands", "late-commands"]:
if section_name in config:
commands = config[section_name]
if not isinstance(commands, list):
errors.append(f"{section_name} must be an array")
else:
for i, command in enumerate(commands):
if not isinstance(command, (str, list)):
errors.append(
f"{section_name} item at index {i} must be string or array"
)
elif isinstance(command, list):
for j, cmd_part in enumerate(command):
if not isinstance(cmd_part, str):
errors.append(
f"{section_name}[{i}][{j}] must be a string"
)
return errors
def validate_shutdown_section(config: dict) -> list:
"""Validate shutdown section."""
errors = []
if "shutdown" in config:
shutdown = config["shutdown"]
valid_values = ["reboot", "poweroff"]
if shutdown not in valid_values:
errors.append(f"Shutdown must be one of: {valid_values}")
return errors
def main():
"""Main validation function."""
template_path = Path(__file__).parent / "cloud-init-template.yaml"
if not template_path.exists():
print(f"Error: Template file not found at {template_path}")
sys.exit(1)
try:
# Load the autoinstall configuration
print(f"Loading autoinstall config from {template_path}")
config = load_autoinstall_config(str(template_path))
# Run validation checks
all_errors = []
all_errors.extend(validate_required_fields(config))
all_errors.extend(validate_identity_section(config))
all_errors.extend(validate_network_section(config))
all_errors.extend(validate_keyboard_section(config))
all_errors.extend(validate_ssh_section(config))
all_errors.extend(validate_packages_section(config))
all_errors.extend(validate_commands_sections(config))
all_errors.extend(validate_shutdown_section(config))
# Report results
if all_errors:
print("\n❌ Validation failed with the following errors:")
for error in all_errors:
print(f" - {error}")
sys.exit(1)
else:
print("\n✅ Autoinstall configuration validation passed!")
print("Configuration appears to comply with Ubuntu autoinstall schema.")
# Print summary of detected sections
sections = list(config.keys())
print(f"\nDetected sections: {', '.join(sorted(sections))}")
except Exception as e:
print(f"Error during validation: {e}")
sys.exit(1)
if __name__ == "__main__":
main()
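The per-section validators above are pure functions over the parsed config dict, so they can be exercised standalone without an ISO or template file. A minimal sketch, repeating the packages check verbatim against made-up example configs:

```python
# Standalone copy of the packages-section check from the deleted
# validator; the sample configs below are invented for illustration.
def validate_packages_section(config: dict) -> list:
    errors = []
    if "packages" in config:
        packages = config["packages"]
        if not isinstance(packages, list):
            errors.append("Packages must be an array")
        else:
            for i, package in enumerate(packages):
                if not isinstance(package, str):
                    errors.append(f"Package at index {i} must be a string")
    return errors

good = {"packages": ["nginx", "postgresql"]}
bad = {"packages": ["nginx", 42]}
print(validate_packages_section(good))  # []
print(validate_packages_section(bad))   # ['Package at index 1 must be a string']
```

Because each validator returns a plain list of error strings, `main()` can simply `extend()` one accumulator and report everything at once rather than failing on the first problem.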

File diff suppressed because it is too large


@@ -1,570 +0,0 @@
#!/usr/bin/env python3
"""
VM Manager for Unraid
Handles VM creation, configuration, and lifecycle management.
"""
import os
import time
import logging
import subprocess
from pathlib import Path
from typing import Optional
import uuid
logger = logging.getLogger(__name__)
class UnraidVMManager:
"""Manages VMs on Unraid server."""
def __init__(self, vm_name: str, unraid_host: str, unraid_user: str = "root"):
self.vm_name = vm_name
self.unraid_host = unraid_host
self.unraid_user = unraid_user
self.vm_config_path = f"/mnt/user/domains/{vm_name}"
def authenticate(self) -> bool:
"""Test SSH connectivity to Unraid server."""
try:
result = subprocess.run(
f"ssh -o ConnectTimeout=10 {self.unraid_user}@{self.unraid_host} 'echo Connected'",
shell=True,
capture_output=True,
text=True,
timeout=15,
)
if result.returncode == 0 and "Connected" in result.stdout:
logger.info("Successfully connected to Unraid via SSH")
return True
else:
logger.error(f"SSH connection failed: {result.stderr}")
return False
except Exception as e:
logger.error(f"SSH authentication error: {e}")
return False
def check_vm_exists(self) -> bool:
"""Check if VM already exists."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
return self.vm_name in result.stdout
except Exception as e:
logger.error(f"Error checking VM existence: {e}")
return False
def _generate_mac_suffix(self, vm_ip: str) -> str:
"""Generate MAC address suffix based on VM IP or name."""
if vm_ip.lower() != "dhcp" and "." in vm_ip:
# Use last octet of static IP for MAC generation
last_octet = int(vm_ip.split(".")[-1])
return f"{last_octet:02x}:7d:fd"
else:
# Use hash of VM name for consistent MAC generation
import hashlib
hash_obj = hashlib.md5(self.vm_name.encode())
hash_bytes = hash_obj.digest()[:3]
return ":".join([f"{b:02x}" for b in hash_bytes])
def create_vm_xml(
self,
vm_memory: int,
vm_vcpus: int,
vm_ip: str,
existing_uuid: str = None,
) -> str:
"""Generate VM XML configuration from template file."""
vm_uuid = existing_uuid if existing_uuid else str(uuid.uuid4())
# Read XML template from file
template_path = Path(__file__).parent / "thrillwiki-vm-template.xml"
if not template_path.exists():
raise FileNotFoundError(f"VM XML template not found at {template_path}")
with open(template_path, "r", encoding="utf-8") as f:
xml_template = f.read()
# Calculate CPU topology
cpu_cores = vm_vcpus // 2 if vm_vcpus > 1 else 1
cpu_threads = 2 if vm_vcpus > 1 else 1
# Replace placeholders with actual values
xml_content = xml_template.format(
VM_NAME=self.vm_name,
VM_UUID=vm_uuid,
VM_MEMORY_KIB=vm_memory * 1024,
VM_VCPUS=vm_vcpus,
CPU_CORES=cpu_cores,
CPU_THREADS=cpu_threads,
MAC_SUFFIX=self._generate_mac_suffix(vm_ip),
)
return xml_content.strip()
def upload_iso_to_unraid(self, local_iso_path: Path) -> str:
"""Upload ISO to Unraid server."""
remote_iso_path = f"/mnt/user/isos/{self.vm_name}-ubuntu-autoinstall.iso"
logger.info(f"Uploading ISO to Unraid: {remote_iso_path}")
try:
# Remove old ISO if exists
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -f {remote_iso_path}'",
shell=True,
check=False, # Don't fail if file doesn't exist
)
# Upload new ISO
subprocess.run(
f"scp {local_iso_path} {self.unraid_user}@{self.unraid_host}:{remote_iso_path}",
shell=True,
check=True,
)
logger.info(f"ISO uploaded successfully: {remote_iso_path}")
return remote_iso_path
except Exception as e:
logger.error(f"Failed to upload ISO: {e}")
raise
def create_vm(
self, vm_memory: int, vm_vcpus: int, vm_disk_size: int, vm_ip: str
) -> bool:
"""Create or update the VM on Unraid."""
try:
vm_exists = self.check_vm_exists()
if vm_exists:
logger.info(f"VM {self.vm_name} already exists, updating configuration...")
# Always try to stop VM before updating
current_status = self.vm_status()
logger.info(f"Current VM status: {current_status}")
if current_status not in ["shut off", "unknown"]:
logger.info(f"Stopping VM {self.vm_name} for configuration update...")
self.stop_vm()
time.sleep(3)
else:
logger.info(f"VM {self.vm_name} is already stopped")
else:
logger.info(f"Creating VM {self.vm_name}...")
# Ensure VM directory exists
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'mkdir -p {self.vm_config_path}'",
shell=True,
check=True,
)
# Create virtual disk if it doesn't exist
disk_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {self.vm_config_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
)
if disk_check.returncode != 0:
logger.info(f"Creating virtual disk for VM {self.vm_name}...")
disk_cmd = f"""
ssh {self.unraid_user}@{self.unraid_host} 'qemu-img create -f qcow2 {self.vm_config_path}/vdisk1.qcow2 {vm_disk_size}G'
"""
subprocess.run(disk_cmd, shell=True, check=True)
else:
logger.info(f"Virtual disk already exists for VM {self.vm_name}")
existing_uuid = None
if vm_exists:
# Get existing VM UUID
cmd = f'ssh {self.unraid_user}@{self.unraid_host} \'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
existing_uuid = result.stdout.strip()
logger.info(f"Found existing VM UUID: {existing_uuid}")
# Check if VM is persistent or transient
persistent_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --persistent --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
is_persistent = self.vm_name in persistent_check.stdout
if is_persistent:
# Undefine persistent VM with NVRAM flag
logger.info(f"VM {self.vm_name} is persistent, undefining with NVRAM for reconfiguration...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh undefine {self.vm_name} --nvram'",
shell=True,
check=True,
)
logger.info(f"Persistent VM {self.vm_name} undefined for reconfiguration")
else:
# Handle transient VM - just destroy it
logger.info(f"VM {self.vm_name} is transient, destroying for reconfiguration...")
if self.vm_status() == "running":
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
check=True,
)
logger.info(f"Transient VM {self.vm_name} destroyed for reconfiguration")
# Generate VM XML with appropriate UUID
vm_xml = self.create_vm_xml(vm_memory, vm_vcpus, vm_ip, existing_uuid)
xml_file = f"/tmp/{self.vm_name}.xml"
with open(xml_file, "w", encoding="utf-8") as f:
f.write(vm_xml)
# Copy XML to Unraid and define/redefine VM
subprocess.run(
f"scp {xml_file} {self.unraid_user}@{self.unraid_host}:/tmp/",
shell=True,
check=True,
)
# Define VM as persistent domain
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh define /tmp/{self.vm_name}.xml'",
shell=True,
check=True,
)
# Ensure VM is set to autostart for persistent configuration
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh autostart {self.vm_name}'",
shell=True,
check=False,  # Don't fail if autostart is already enabled
)
action = "updated" if vm_exists else "created"
logger.info(f"VM {self.vm_name} {action} successfully")
# Cleanup
os.remove(xml_file)
return True
except Exception as e:
logger.error(f"Failed to create VM: {e}")
return False
def create_nvram_file(self, vm_uuid: str) -> bool:
"""Create NVRAM file for UEFI VM."""
try:
nvram_path = f"/etc/libvirt/qemu/nvram/{vm_uuid}_VARS-pure-efi.fd"
# Check if NVRAM file already exists
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {nvram_path}'",
shell=True,
capture_output=True,
)
if result.returncode == 0:
logger.info(f"NVRAM file already exists: {nvram_path}")
return True
# Copy template to create NVRAM file
logger.info(f"Creating NVRAM file: {nvram_path}")
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'cp /usr/share/qemu/ovmf-x64/OVMF_VARS-pure-efi.fd {nvram_path}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info("NVRAM file created successfully")
return True
else:
logger.error(f"Failed to create NVRAM file: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error creating NVRAM file: {e}")
return False
def start_vm(self) -> bool:
"""Start the VM if it's not already running."""
try:
# Check if VM is already running
current_status = self.vm_status()
if current_status == "running":
logger.info(f"VM {self.vm_name} is already running")
return True
logger.info(f"Starting VM {self.vm_name}...")
# For new VMs, we need to extract the UUID and create NVRAM file
vm_exists = self.check_vm_exists()
if not vm_exists:
logger.error("Cannot start VM that doesn't exist")
return False
# Get VM UUID from XML
cmd = f'ssh {self.unraid_user}@{self.unraid_host} \'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
vm_uuid = result.stdout.strip()
logger.info(f"VM UUID: {vm_uuid}")
# Create NVRAM file if it doesn't exist
if not self.create_nvram_file(vm_uuid):
return False
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh start {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info(f"VM {self.vm_name} started successfully")
return True
else:
logger.error(f"Failed to start VM: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error starting VM: {e}")
return False
def stop_vm(self) -> bool:
"""Stop the VM with timeout and force destroy if needed."""
try:
logger.info(f"Stopping VM {self.vm_name}...")
# Try graceful shutdown first
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh shutdown {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0:
# Wait up to 30 seconds for graceful shutdown
logger.info(f"Waiting for VM {self.vm_name} to shutdown gracefully...")
for i in range(30):
status = self.vm_status()
if status in ["shut off", "unknown"]:
logger.info(f"VM {self.vm_name} stopped gracefully")
return True
time.sleep(1)
# If still running after 30 seconds, force destroy
logger.warning(f"VM {self.vm_name} didn't shutdown gracefully, forcing destroy...")
destroy_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if destroy_result.returncode == 0:
logger.info(f"VM {self.vm_name} forcefully destroyed")
return True
else:
logger.error(f"Failed to destroy VM: {destroy_result.stderr}")
return False
else:
logger.error(f"Failed to initiate VM shutdown: {result.stderr}")
return False
except subprocess.TimeoutExpired:
logger.error(f"Timeout stopping VM {self.vm_name}")
return False
except Exception as e:
logger.error(f"Error stopping VM: {e}")
return False
def get_vm_ip(self) -> Optional[str]:
"""Get VM IP address."""
try:
# Wait for VM to get IP - Ubuntu autoinstall can take 20-30 minutes
max_attempts = 120 # 20 minutes total wait time
for attempt in range(max_attempts):
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domifaddr {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and "ipv4" in result.stdout:
lines = result.stdout.strip().split("\n")
for line in lines:
if "ipv4" in line:
# Extract IP from line like: vnet0
# 52:54:00:xx:xx:xx ipv4
# 192.168.1.100/24
parts = line.split()
if len(parts) >= 4:
ip_with_mask = parts[3]
ip = ip_with_mask.split("/")[0]
logger.info(f"VM IP address: {ip}")
return ip
logger.info(f"Waiting for VM IP... (attempt {attempt + 1}/{max_attempts}) - Ubuntu autoinstall in progress")
time.sleep(10)
logger.error("Failed to get VM IP address")
return None
except Exception as e:
logger.error(f"Error getting VM IP: {e}")
return None
def vm_status(self) -> str:
"""Get VM status."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
return result.stdout.strip()
else:
return "unknown"
except Exception as e:
logger.error(f"Error getting VM status: {e}")
return "error"
def delete_vm(self) -> bool:
"""Completely remove VM and all associated files."""
try:
logger.info(f"Deleting VM {self.vm_name} and all associated files...")
# Check if VM exists
if not self.check_vm_exists():
logger.info(f"VM {self.vm_name} does not exist")
return True
# Stop VM if running
if self.vm_status() == "running":
logger.info(f"Stopping VM {self.vm_name}...")
self.stop_vm()
time.sleep(5)
# Undefine VM with NVRAM
logger.info(f"Undefining VM {self.vm_name}...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh undefine {self.vm_name} --nvram'",
shell=True,
check=True,
)
# Remove VM directory and all files
logger.info("Removing VM directory and files...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -rf {self.vm_config_path}'",
shell=True,
check=True,
)
# Remove autoinstall ISO
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -f /mnt/user/isos/{self.vm_name}-ubuntu-autoinstall.iso'",
shell=True,
check=False,  # Don't fail if file doesn't exist
)
logger.info(f"VM {self.vm_name} completely removed")
return True
except Exception as e:
logger.error(f"Failed to delete VM: {e}")
return False
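The MAC-suffix derivation in `_generate_mac_suffix` above is deterministic, which keeps a VM's MAC (and so its DHCP lease) stable across recreations. A standalone sketch of the same logic, with example `vm_name`/`vm_ip` values:

```python
import hashlib

# Standalone copy of the _generate_mac_suffix logic from the deleted
# manager: static IPs map their last octet into the MAC, while
# DHCP VMs get a stable hash-derived suffix from the VM name.
def generate_mac_suffix(vm_name: str, vm_ip: str) -> str:
    if vm_ip.lower() != "dhcp" and "." in vm_ip:
        last_octet = int(vm_ip.split(".")[-1])
        return f"{last_octet:02x}:7d:fd"
    # Fall back to a name-derived suffix so DHCP VMs keep the same MAC
    hash_bytes = hashlib.md5(vm_name.encode()).digest()[:3]
    return ":".join(f"{b:02x}" for b in hash_bytes)

print(generate_mac_suffix("thrillwiki", "192.168.20.65"))  # 41:7d:fd
print(generate_mac_suffix("thrillwiki", "dhcp"))           # stable name-derived suffix
```

The suffix is substituted into the XML template's `MAC_SUFFIX` placeholder, so the full address also depends on the vendor prefix defined there.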


@@ -1,654 +0,0 @@
#!/usr/bin/env python3
"""
Template-based VM Manager for Unraid
Handles VM creation using pre-built template disks instead of autoinstall.
"""
import os
import time
import logging
import subprocess
from pathlib import Path
from typing import Optional
import uuid
from template_manager import TemplateVMManager
logger = logging.getLogger(__name__)
class UnraidTemplateVMManager:
"""Manages template-based VMs on Unraid server."""
def __init__(self, vm_name: str, unraid_host: str, unraid_user: str = "root"):
self.vm_name = vm_name
self.unraid_host = unraid_host
self.unraid_user = unraid_user
self.vm_config_path = f"/mnt/user/domains/{vm_name}"
self.template_manager = TemplateVMManager(unraid_host, unraid_user)
def authenticate(self) -> bool:
"""Test SSH connectivity to Unraid server."""
return self.template_manager.authenticate()
def check_vm_exists(self) -> bool:
"""Check if VM already exists."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
return self.vm_name in result.stdout
except Exception as e:
logger.error(f"Error checking VM existence: {e}")
return False
def _generate_mac_suffix(self, vm_ip: str) -> str:
"""Generate MAC address suffix based on VM IP or name."""
if vm_ip.lower() != "dhcp" and "." in vm_ip:
# Use last octet of static IP for MAC generation
last_octet = int(vm_ip.split(".")[-1])
return f"{last_octet:02x}:7d:fd"
else:
# Use hash of VM name for consistent MAC generation
import hashlib
hash_obj = hashlib.md5(self.vm_name.encode())
hash_bytes = hash_obj.digest()[:3]
return ":".join([f"{b:02x}" for b in hash_bytes])
def create_vm_xml(
self,
vm_memory: int,
vm_vcpus: int,
vm_ip: str,
existing_uuid: str = None,
) -> str:
"""Generate VM XML configuration from template file."""
vm_uuid = existing_uuid if existing_uuid else str(uuid.uuid4())
# Use simplified template for template-based VMs
template_path = Path(__file__).parent / "thrillwiki-vm-template-simple.xml"
if not template_path.exists():
raise FileNotFoundError(f"VM XML template not found at {template_path}")
with open(template_path, "r", encoding="utf-8") as f:
xml_template = f.read()
# Calculate CPU topology
cpu_cores = vm_vcpus // 2 if vm_vcpus > 1 else 1
cpu_threads = 2 if vm_vcpus > 1 else 1
# Replace placeholders with actual values
xml_content = xml_template.format(
VM_NAME=self.vm_name,
VM_UUID=vm_uuid,
VM_MEMORY_KIB=vm_memory * 1024,
VM_VCPUS=vm_vcpus,
CPU_CORES=cpu_cores,
CPU_THREADS=cpu_threads,
MAC_SUFFIX=self._generate_mac_suffix(vm_ip),
)
return xml_content.strip()
def create_vm_from_template(
self, vm_memory: int, vm_vcpus: int, vm_disk_size: int, vm_ip: str
) -> bool:
"""Create VM from template disk."""
try:
vm_exists = self.check_vm_exists()
if vm_exists:
logger.info(f"VM {self.vm_name} already exists, updating configuration...")
# Always try to stop VM before updating
current_status = self.vm_status()
logger.info(f"Current VM status: {current_status}")
if current_status not in ["shut off", "unknown"]:
logger.info(f"Stopping VM {self.vm_name} for configuration update...")
self.stop_vm()
time.sleep(3)
else:
logger.info(f"VM {self.vm_name} is already stopped")
else:
logger.info(f"Creating VM {self.vm_name} from template...")
# Step 1: Prepare VM from template (copy disk)
logger.info("Preparing VM from template disk...")
if not self.template_manager.prepare_vm_from_template(
self.vm_name, vm_memory, vm_vcpus, vm_ip
):
logger.error("Failed to prepare VM from template")
return False
existing_uuid = None
if vm_exists:
# Get existing VM UUID
cmd = f'ssh {self.unraid_user}@{self.unraid_host} \'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
existing_uuid = result.stdout.strip()
logger.info(f"Found existing VM UUID: {existing_uuid}")
# Check if VM is persistent or transient
persistent_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --persistent --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
is_persistent = self.vm_name in persistent_check.stdout
if is_persistent:
# Undefine persistent VM with NVRAM flag
logger.info(f"VM {self.vm_name} is persistent, undefining with NVRAM for reconfiguration...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh undefine {self.vm_name} --nvram'",
shell=True,
check=True,
)
logger.info(f"Persistent VM {self.vm_name} undefined for reconfiguration")
else:
# Handle transient VM - just destroy it
logger.info(f"VM {self.vm_name} is transient, destroying for reconfiguration...")
if self.vm_status() == "running":
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
check=True,
)
logger.info(f"Transient VM {self.vm_name} destroyed for reconfiguration")
# Step 2: Generate VM XML with appropriate UUID
vm_xml = self.create_vm_xml(vm_memory, vm_vcpus, vm_ip, existing_uuid)
xml_file = f"/tmp/{self.vm_name}.xml"
with open(xml_file, "w", encoding="utf-8") as f:
f.write(vm_xml)
# Step 3: Copy XML to Unraid and define VM
subprocess.run(
f"scp {xml_file} {self.unraid_user}@{self.unraid_host}:/tmp/",
shell=True,
check=True,
)
# Define VM as persistent domain
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh define /tmp/{self.vm_name}.xml'",
shell=True,
check=True,
)
# Ensure VM is set to autostart for persistent configuration
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh autostart {self.vm_name}'",
shell=True,
check=False,  # Don't fail if autostart is already enabled
)
action = "updated" if vm_exists else "created"
logger.info(f"VM {self.vm_name} {action} successfully from template")
# Cleanup
os.remove(xml_file)
return True
except Exception as e:
logger.error(f"Failed to create VM from template: {e}")
return False
def create_nvram_file(self, vm_uuid: str) -> bool:
"""Create NVRAM file for UEFI VM."""
try:
nvram_path = f"/etc/libvirt/qemu/nvram/{vm_uuid}_VARS-pure-efi.fd"
# Check if NVRAM file already exists
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {nvram_path}'",
shell=True,
capture_output=True,
)
if result.returncode == 0:
logger.info(f"NVRAM file already exists: {nvram_path}")
return True
# Copy template to create NVRAM file
logger.info(f"Creating NVRAM file: {nvram_path}")
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'cp /usr/share/qemu/ovmf-x64/OVMF_VARS-pure-efi.fd {nvram_path}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info("NVRAM file created successfully")
return True
else:
logger.error(f"Failed to create NVRAM file: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error creating NVRAM file: {e}")
return False
def start_vm(self) -> bool:
"""Start the VM if it's not already running."""
try:
# Check if VM is already running
current_status = self.vm_status()
if current_status == "running":
logger.info(f"VM {self.vm_name} is already running")
return True
logger.info(f"Starting VM {self.vm_name}...")
# For VMs, we need to extract the UUID and create NVRAM file
vm_exists = self.check_vm_exists()
if not vm_exists:
logger.error("Cannot start VM that doesn't exist")
return False
# Get VM UUID from XML
cmd = f'ssh {self.unraid_user}@{self.unraid_host} \'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
vm_uuid = result.stdout.strip()
logger.info(f"VM UUID: {vm_uuid}")
# Create NVRAM file if it doesn't exist
if not self.create_nvram_file(vm_uuid):
return False
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh start {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info(f"VM {self.vm_name} started successfully")
logger.info(
"VM is booting from template disk - should be ready quickly!"
)
return True
else:
logger.error(f"Failed to start VM: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error starting VM: {e}")
return False
def stop_vm(self) -> bool:
"""Stop the VM with timeout and force destroy if needed."""
try:
logger.info(f"Stopping VM {self.vm_name}...")
# Try graceful shutdown first
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh shutdown {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0:
# Wait up to 30 seconds for graceful shutdown
logger.info(f"Waiting for VM {self.vm_name} to shutdown gracefully...")
for i in range(30):
status = self.vm_status()
if status in ["shut off", "unknown"]:
logger.info(f"VM {self.vm_name} stopped gracefully")
return True
time.sleep(1)
# If still running after 30 seconds, force destroy
logger.warning(f"VM {self.vm_name} didn't shutdown gracefully, forcing destroy...")
destroy_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if destroy_result.returncode == 0:
logger.info(f"VM {self.vm_name} forcefully destroyed")
return True
else:
logger.error(f"Failed to destroy VM: {destroy_result.stderr}")
return False
else:
logger.error(f"Failed to initiate VM shutdown: {result.stderr}")
return False
except subprocess.TimeoutExpired:
logger.error(f"Timeout stopping VM {self.vm_name}")
return False
except Exception as e:
logger.error(f"Error stopping VM: {e}")
return False
def get_vm_ip(self) -> Optional[str]:
"""Get VM IP address using multiple detection methods for template VMs."""
try:
# Method 1: Try guest agent first (most reliable for template VMs)
logger.info("Trying guest agent for IP detection...")
ssh_cmd = f"ssh -o StrictHostKeyChecking=no {self.unraid_user}@{self.unraid_host} 'virsh guestinfo {self.vm_name} --interface 2>/dev/null || echo FAILED'"
logger.info(f"Running SSH command: {ssh_cmd}")
result = subprocess.run(
ssh_cmd, shell=True, capture_output=True, text=True, timeout=10
)
logger.info(
f"Guest agent result (returncode={result.returncode}): {result.stdout[:200]}..."
)
if (
result.returncode == 0
and "FAILED" not in result.stdout
and "addr" in result.stdout
):
# Parse guest agent output for IP addresses
lines = result.stdout.strip().split("\n")
import re
for line in lines:
logger.info(f"Processing line: {line}")
# Look for lines like: if.1.addr.0.addr : 192.168.20.65
if (
".addr." in line
and "addr :" in line
and "127.0.0.1" not in line
):
# Extract IP address from the line
ip_match = re.search(
r":\s*([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\s*$",
line,
)
if ip_match:
ip = ip_match.group(1)
logger.info(f"Found potential IP: {ip}")
# Skip localhost and Docker bridge IPs
if not ip.startswith("127.") and not ip.startswith("172."):
logger.info(f"Found IP via guest agent: {ip}")
return ip
# Method 2: Try domifaddr (network interface detection)
logger.info("Trying domifaddr for IP detection...")
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domifaddr {self.vm_name} 2>/dev/null || echo FAILED'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if (
result.returncode == 0
and "FAILED" not in result.stdout
and "ipv4" in result.stdout
):
lines = result.stdout.strip().split("\n")
for line in lines:
if "ipv4" in line:
# Extract IP from line like: vnet0
# 52:54:00:xx:xx:xx ipv4 192.168.1.100/24
parts = line.split()
if len(parts) >= 4:
ip_with_mask = parts[3]
ip = ip_with_mask.split("/")[0]
logger.info(f"Found IP via domifaddr: {ip}")
return ip
# Method 3: Try ARP table lookup (fallback for when guest agent
# isn't ready)
logger.info("Trying ARP table lookup...")
# Get VM MAC address first
mac_result = subprocess.run(
f'ssh {self.unraid_user}@{self.unraid_host} \'virsh dumpxml {self.vm_name} | grep "mac address" | head -1 | sed "s/.*address=.\\([^\'"]*\\).*/\\1/"\'',
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if mac_result.returncode == 0 and mac_result.stdout.strip():
mac_addr = mac_result.stdout.strip()
logger.info(f"VM MAC address: {mac_addr}")
# Look up IP by MAC in ARP table
arp_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'arp -a | grep {mac_addr} || echo NOTFOUND'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if arp_result.returncode == 0 and "NOTFOUND" not in arp_result.stdout:
# Parse ARP output like: (192.168.1.100) at
# 52:54:00:xx:xx:xx
import re
ip_match = re.search(r"\(([0-9.]+)\)", arp_result.stdout)
if ip_match:
ip = ip_match.group(1)
logger.info(f"Found IP via ARP lookup: {ip}")
return ip
logger.warning("All IP detection methods failed")
return None
except subprocess.TimeoutExpired:
logger.error("Timeout getting VM IP - guest agent may not be ready")
return None
except Exception as e:
logger.error(f"Error getting VM IP: {e}")
return None
def vm_status(self) -> str:
"""Get VM status."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
return result.stdout.strip()
else:
return "unknown"
except Exception as e:
logger.error(f"Error getting VM status: {e}")
return "error"
def delete_vm(self) -> bool:
"""Completely remove VM and all associated files."""
try:
logger.info(f"Deleting VM {self.vm_name} and all associated files...")
# Check if VM exists
if not self.check_vm_exists():
logger.info(f"VM {self.vm_name} does not exist")
return True
# Stop VM if running
if self.vm_status() == "running":
logger.info(f"Stopping VM {self.vm_name}...")
self.stop_vm()
time.sleep(5)
# Undefine VM with NVRAM
logger.info(f"Undefining VM {self.vm_name}...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh undefine {self.vm_name} --nvram'",
shell=True,
check=True,
)
# Remove VM directory and all files
logger.info("Removing VM directory and files...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -rf {self.vm_config_path}'",
shell=True,
check=True,
)
logger.info(f"VM {self.vm_name} completely removed")
return True
except Exception as e:
logger.error(f"Failed to delete VM: {e}")
return False
def customize_vm_for_thrillwiki(
self, repo_url: str, github_token: str = ""
) -> bool:
"""Customize the VM for ThrillWiki after it boots."""
try:
logger.info("Waiting for VM to be accessible via SSH...")
# Wait for VM to get an IP and be SSH accessible
vm_ip = None
max_attempts = 20
for attempt in range(max_attempts):
vm_ip = self.get_vm_ip()
if vm_ip:
# Test SSH connectivity
ssh_test = subprocess.run(
f"ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no thrillwiki@{vm_ip} 'echo SSH ready'",
shell=True,
capture_output=True,
text=True,
)
if ssh_test.returncode == 0:
logger.info(f"VM is SSH accessible at {vm_ip}")
break
logger.info(f"Waiting for SSH access... (attempt {attempt + 1}/{max_attempts})")
time.sleep(15)
if not vm_ip:
logger.error("VM failed to become SSH accessible")
return False
# Run ThrillWiki deployment on the VM
logger.info("Running ThrillWiki deployment on VM...")
deploy_cmd = f"cd /home/thrillwiki && /home/thrillwiki/deploy-thrillwiki.sh '{repo_url}'"
if github_token:
deploy_cmd = f"cd /home/thrillwiki && GITHUB_TOKEN='{github_token}' /home/thrillwiki/deploy-thrillwiki.sh '{repo_url}'"
deploy_result = subprocess.run(
f"ssh -o StrictHostKeyChecking=no thrillwiki@{vm_ip} '{deploy_cmd}'",
shell=True,
capture_output=True,
text=True,
)
if deploy_result.returncode == 0:
logger.info("ThrillWiki deployment completed successfully!")
logger.info(f"ThrillWiki should be accessible at http://{vm_ip}:8000")
return True
else:
logger.error(f"ThrillWiki deployment failed: {deploy_result.stderr}")
return False
except Exception as e:
logger.error(f"Error customizing VM: {e}")
return False
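The guest-agent path in `get_vm_ip` (Method 1) reduces to a line filter plus a trailing-IP regex over `virsh guestinfo --interface` output. A standalone sketch of that parsing, with a made-up sample in the `if.N.addr.M.addr : x.x.x.x` shape the code's comments describe:

```python
import re

# Standalone sketch of the guest-agent IP parsing used in get_vm_ip.
# The sample string below is invented; field names follow the
# `if.1.addr.0.addr : 192.168.20.65` format noted in the code.
def parse_guest_agent_ip(output: str):
    for line in output.splitlines():
        if ".addr." in line and "addr :" in line and "127.0.0.1" not in line:
            m = re.search(r":\s*([0-9]{1,3}(?:\.[0-9]{1,3}){3})\s*$", line)
            if m:
                ip = m.group(1)
                # Skip loopback and Docker bridge ranges, as the manager does
                if not ip.startswith("127.") and not ip.startswith("172."):
                    return ip
    return None

sample_output = """if.0.name : lo
if.0.addr.0.addr : 127.0.0.1
if.1.name : enp1s0
if.1.addr.0.addr : 192.168.20.65"""
print(parse_guest_agent_ip(sample_output))  # 192.168.20.65
```

Guest-agent data is tried first because template-based VMs boot with qemu-guest-agent preinstalled, while `domifaddr` and ARP lookups serve as fallbacks until the agent is ready.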


@@ -1,340 +0,0 @@
#!/bin/bash
# ThrillWiki VM Deployment Script
# This script runs on the Linux VM to deploy the latest code and restart the server
set -e # Exit on any error
# Configuration
PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
LOG_DIR="$PROJECT_DIR/logs"
BACKUP_DIR="$PROJECT_DIR/backups"
DEPLOY_LOG="$LOG_DIR/deploy.log"
SERVICE_NAME="thrillwiki"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging function
log() {
local message="[$(date +'%Y-%m-%d %H:%M:%S')] $1"
echo -e "${BLUE}${message}${NC}"
echo "$message" >> "$DEPLOY_LOG"
}
log_success() {
local message="[$(date +'%Y-%m-%d %H:%M:%S')] ✓ $1"
echo -e "${GREEN}${message}${NC}"
echo "$message" >> "$DEPLOY_LOG"
}
log_warning() {
local message="[$(date +'%Y-%m-%d %H:%M:%S')] ⚠ $1"
echo -e "${YELLOW}${message}${NC}"
echo "$message" >> "$DEPLOY_LOG"
}
log_error() {
local message="[$(date +'%Y-%m-%d %H:%M:%S')] ✗ $1"
echo -e "${RED}${message}${NC}"
echo "$message" >> "$DEPLOY_LOG"
}
# Create necessary directories
create_directories() {
log "Creating necessary directories..."
mkdir -p "$LOG_DIR" "$BACKUP_DIR"
log_success "Directories created"
}
# Backup current deployment
backup_current() {
log "Creating backup of current deployment..."
local timestamp=$(date +'%Y%m%d_%H%M%S')
local backup_path="$BACKUP_DIR/backup_$timestamp"
# Create backup of current code
if [ -d "$PROJECT_DIR/.git" ]; then
local current_commit=$(git -C "$PROJECT_DIR" rev-parse HEAD)
echo "$current_commit" > "$backup_path.commit"
log_success "Backup created with commit: ${current_commit:0:8}"
else
log_warning "Not a git repository, skipping backup"
fi
}
# Stop the service
stop_service() {
log "Stopping ThrillWiki service..."
# Stop systemd service if it exists
if systemctl is-active --quiet "$SERVICE_NAME" 2>/dev/null; then
sudo systemctl stop "$SERVICE_NAME"
log_success "Systemd service stopped"
else
log "Systemd service not running"
fi
# Kill any remaining Django processes on port 8000
if lsof -ti :8000 >/dev/null 2>&1; then
log "Stopping processes on port 8000..."
lsof -ti :8000 | xargs kill -9 2>/dev/null || true
log_success "Port 8000 processes stopped"
fi
# Clean up Python cache
log "Cleaning Python cache..."
find "$PROJECT_DIR" -type d -name "__pycache__" -exec rm -r {} + 2>/dev/null || true
log_success "Python cache cleaned"
}
# Update code from git
update_code() {
log "Updating code from git repository..."
cd "$PROJECT_DIR"
# Fetch latest changes
git fetch origin
log "Fetched latest changes"
# Get current and new commit info
local old_commit=$(git rev-parse HEAD)
local new_commit=$(git rev-parse origin/main)
if [ "$old_commit" = "$new_commit" ]; then
log_warning "No new commits to deploy"
return 0
fi
log "Updating from ${old_commit:0:8} to ${new_commit:0:8}"
# Pull latest changes
git reset --hard origin/main
log_success "Code updated successfully"
# Show what changed
log "Changes in this deployment:"
git log --oneline "$old_commit..$new_commit" || true
}
# Install/update dependencies
update_dependencies() {
log "Updating dependencies..."
cd "$PROJECT_DIR"
# Check if UV is installed
if ! command -v uv &> /dev/null; then
log_error "UV is not installed. Installing UV..."
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.cargo/env
fi
# Sync dependencies
uv sync --no-dev || {
log_error "Failed to sync dependencies"
return 1
}
log_success "Dependencies updated"
}
# Run database migrations
run_migrations() {
log "Running database migrations..."
cd "$PROJECT_DIR"
# Check for pending migrations
if uv run manage.py showmigrations --plan | grep -q "\[ \]"; then
log "Applying database migrations..."
uv run manage.py migrate || {
log_error "Database migrations failed"
return 1
}
log_success "Database migrations completed"
else
log "No pending migrations"
fi
}
# Collect static files
collect_static() {
log "Collecting static files..."
cd "$PROJECT_DIR"
uv run manage.py collectstatic --noinput || {
log_warning "Static file collection failed, continuing..."
}
log_success "Static files collected"
}
# Start the service
start_service() {
log "Starting ThrillWiki service..."
cd "$PROJECT_DIR"
# Start systemd service if it exists
if systemctl list-unit-files | grep -q "^$SERVICE_NAME.service"; then
sudo systemctl start "$SERVICE_NAME"
sudo systemctl enable "$SERVICE_NAME"
# Wait for service to start
sleep 5
if systemctl is-active --quiet "$SERVICE_NAME"; then
log_success "Systemd service started successfully"
else
log_error "Systemd service failed to start"
return 1
fi
else
log_warning "Systemd service not found, starting manually..."
# Start server in background
nohup ./scripts/ci-start.sh > "$LOG_DIR/server.log" 2>&1 &
local server_pid=$!
# Wait for server to start
sleep 5
if kill -0 $server_pid 2>/dev/null; then
echo $server_pid > "$LOG_DIR/server.pid"
log_success "Server started manually with PID: $server_pid"
else
log_error "Failed to start server manually"
return 1
fi
fi
}
# Health check
health_check() {
log "Performing health check..."
local max_attempts=30
local attempt=1
while [ $attempt -le $max_attempts ]; do
if curl -f -s http://localhost:8000/health >/dev/null 2>&1; then
log_success "Health check passed"
return 0
fi
log "Health check attempt $attempt/$max_attempts failed, retrying..."
sleep 2
((attempt++))
done
log_error "Health check failed after $max_attempts attempts"
return 1
}
# Cleanup old backups
cleanup_backups() {
log "Cleaning up old backups..."
# Keep only the last 10 backups
cd "$BACKUP_DIR"
ls -t backup_*.commit 2>/dev/null | tail -n +11 | xargs rm -f 2>/dev/null || true
log_success "Old backups cleaned up"
}
# Rollback function
rollback() {
log_error "Deployment failed, attempting rollback..."
local latest_backup=$(ls -t "$BACKUP_DIR"/backup_*.commit 2>/dev/null | head -n 1)
if [ -n "$latest_backup" ]; then
local backup_commit=$(cat "$latest_backup")
log "Rolling back to commit: ${backup_commit:0:8}"
cd "$PROJECT_DIR"
git reset --hard "$backup_commit"
# Restart service
stop_service
start_service
if health_check; then
log_success "Rollback completed successfully"
else
log_error "Rollback failed - manual intervention required"
fi
else
log_error "No backup found for rollback"
fi
}
# Main deployment function
deploy() {
log "=== ThrillWiki Deployment Started ==="
log "Timestamp: $(date)"
log "User: $(whoami)"
log "Host: $(hostname)"
# Trap errors for rollback
trap rollback ERR
create_directories
backup_current
stop_service
update_code
update_dependencies
run_migrations
collect_static
start_service
health_check
cleanup_backups
# Remove error trap
trap - ERR
log_success "=== Deployment Completed Successfully ==="
log "Server is now running the latest code"
log "Check logs at: $LOG_DIR/"
}
# Script execution
case "${1:-deploy}" in
deploy)
deploy
;;
stop)
stop_service
;;
start)
start_service
;;
restart)
stop_service
start_service
health_check
;;
status)
if systemctl is-active --quiet "$SERVICE_NAME" 2>/dev/null; then
echo "Service is running"
elif [ -f "$LOG_DIR/server.pid" ] && kill -0 "$(cat "$LOG_DIR/server.pid")" 2>/dev/null; then
echo "Server is running manually"
else
echo "Service is not running"
fi
;;
health)
health_check
;;
*)
echo "Usage: $0 {deploy|stop|start|restart|status|health}"
exit 1
;;
esac

View File

@@ -1,482 +0,0 @@
# ThrillWiki Remote Deployment System
🚀 **Bulletproof remote deployment with integrated GitHub authentication and automatic pull scheduling**
## Overview
The ThrillWiki Remote Deployment System provides a complete solution for deploying the ThrillWiki automation infrastructure to remote VMs via SSH/SCP. It includes integrated GitHub authentication setup and automatic pull scheduling configured as systemd services.
## 🎯 Key Features
- **🔄 Bulletproof Remote Deployment** - SSH/SCP-based deployment with connection testing and retry logic
- **🔐 Integrated GitHub Authentication** - Seamless PAT setup during deployment process
- **⏰ Automatic Pull Scheduling** - Configurable intervals (default: 5 minutes) with systemd integration
- **🛡️ Comprehensive Error Handling** - Rollback capabilities and health validation
- **📊 Multi-Host Support** - Deploy to multiple VMs in parallel or sequentially
- **✅ Health Validation** - Real-time status reporting and post-deployment testing
- **🔧 Multiple Deployment Presets** - Dev, prod, demo, and testing configurations
## 🏗️ Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Local Development Machine │
├─────────────────────────────────────────────────────────────────┤
│ deploy-complete.sh (Orchestrator) │
│ ├── GitHub Authentication Setup │
│ ├── Multi-host Connectivity Testing │
│ └── Deployment Coordination │
│ │
│ remote-deploy.sh (Core Deployment) │
│ ├── SSH/SCP File Transfer │
│ ├── Remote Environment Setup │
│ ├── Service Configuration │
│ └── Health Validation │
└─────────────────────────────────────────────────────────────────┘
│ SSH/SCP
┌─────────────────────────────────────────────────────────────────┐
│ Remote VM(s) │
├─────────────────────────────────────────────────────────────────┤
│ ThrillWiki Project Files │
│ ├── bulletproof-automation.sh (5-min pull scheduling) │
│ ├── GitHub PAT Authentication │
│ └── UV Package Management │
│ │
│ systemd Service │
│ ├── thrillwiki-automation.service │
│ ├── Auto-start on boot │
│ ├── Health monitoring │
│ └── Automatic restart on failure │
└─────────────────────────────────────────────────────────────────┘
```
## 📁 File Structure
```
scripts/vm/
├── deploy-complete.sh # 🎯 One-command complete deployment
├── remote-deploy.sh # 🚀 Core remote deployment engine
├── bulletproof-automation.sh # 🔄 Main automation with 5-min pulls
├── setup-automation.sh # ⚙️ Interactive setup script
├── automation-config.sh # 📋 Configuration management
├── github-setup.py # 🔐 GitHub PAT authentication
├── quick-start.sh # ⚡ Rapid setup with defaults
└── README.md # 📚 This documentation
scripts/systemd/
├── thrillwiki-automation.service # 🛡️ systemd service definition
└── thrillwiki-automation***REMOVED***.example # 📝 Environment template
```
## 🚀 Quick Start
### 1. One-Command Complete Deployment
Deploy the complete automation system to a remote VM:
```bash
# Basic deployment with interactive setup
./scripts/vm/deploy-complete.sh 192.168.1.100
# Production deployment with GitHub token
./scripts/vm/deploy-complete.sh --preset prod --token ghp_xxxxx production-server
# Multi-host parallel deployment
./scripts/vm/deploy-complete.sh --parallel host1 host2 host3
```
### 2. Preview Deployment (Dry Run)
See what would be deployed without making changes:
```bash
./scripts/vm/deploy-complete.sh --dry-run --preset prod 192.168.1.100
```
### 3. Development Environment Setup
Quick development deployment with frequent pulls:
```bash
./scripts/vm/deploy-complete.sh --preset dev --pull-interval 60 dev-server
```
## 🎛️ Deployment Options
### Deployment Presets
| Preset | Pull Interval | Use Case | Features |
|--------|---------------|----------|----------|
| `dev` | 60s (1 min) | Development | Debug enabled, frequent updates |
| `prod` | 300s (5 min) | Production | Security hardened, stable intervals |
| `demo` | 120s (2 min) | Demos | Feature showcase, moderate updates |
| `testing` | 180s (3 min) | Testing | Comprehensive monitoring |
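The preset-to-interval mapping in the table above could be expressed as a small shell helper. This is an illustrative sketch, not the actual implementation inside `deploy-complete.sh`; the function name is hypothetical:

```shell
# Hypothetical helper mapping a deployment preset to its pull interval
# in seconds, mirroring the preset table above.
preset_interval() {
    case "$1" in
        dev)     echo 60 ;;   # frequent updates for development
        demo)    echo 120 ;;  # moderate updates for demos
        testing) echo 180 ;;  # comprehensive monitoring cadence
        prod|*)  echo 300 ;;  # stable 5-minute default
    esac
}

preset_interval dev    # → 60
preset_interval prod   # → 300
```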
### Command Options
#### deploy-complete.sh (Orchestrator)
```bash
./scripts/vm/deploy-complete.sh [OPTIONS] <host1> [host2] [host3]...
OPTIONS:
-u, --user USER Remote username (default: ubuntu)
-p, --port PORT SSH port (default: 22)
-k, --key PATH SSH private key file
-t, --token TOKEN GitHub Personal Access Token
--preset PRESET Deployment preset (dev/prod/demo/testing)
--pull-interval SEC Custom pull interval in seconds
--skip-github Skip GitHub authentication setup
--parallel Deploy to multiple hosts in parallel
--dry-run Preview deployment without executing
--force Force deployment even if target exists
--debug Enable debug logging
```
#### remote-deploy.sh (Core Engine)
```bash
./scripts/vm/remote-deploy.sh [OPTIONS] <remote_host>
OPTIONS:
-u, --user USER Remote username
-p, --port PORT SSH port
-k, --key PATH SSH private key file
-d, --dest PATH Remote destination path
--github-token TOK GitHub token for authentication
--skip-github Skip GitHub setup
--skip-service Skip systemd service setup
--force Force deployment
--dry-run Preview mode
```
## 🔐 GitHub Authentication
### Automatic Setup
The deployment system automatically configures GitHub authentication:
1. **Interactive Setup** - Guides you through PAT creation
2. **Token Validation** - Tests API access and permissions
3. **Secure Storage** - Stores tokens with proper file permissions
4. **Repository Access** - Validates access to your ThrillWiki repository
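The scope check in step 2 can be done against the `X-OAuth-Scopes` response header that GitHub returns for classic PATs (e.g. from `curl -sI -H "Authorization: Bearer $GITHUB_TOKEN" https://api.github.com/user`). The helper below is a hedged sketch of parsing that header line; the function name is illustrative and not part of the shipped scripts:

```shell
# Hypothetical helper: given the X-OAuth-Scopes header line from a
# GitHub API response, check whether a required scope is present.
has_scope() {
    local scopes_line="$1"   # e.g. "x-oauth-scopes: repo, read:org"
    local required="$2"
    # Strip spaces, drop the header name, split scopes onto lines,
    # then look for an exact match.
    echo "$scopes_line" | tr -d ' ' | cut -d: -f2- | tr ',' '\n' | grep -qx "$required"
}

if has_scope "x-oauth-scopes: repo, read:org" "repo"; then
    echo "token has repo scope"
fi
```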
### Manual GitHub Token Setup
If you prefer to set up GitHub authentication manually:
```bash
# Create GitHub PAT at: https://github.com/settings/tokens
# Required scopes: repo (for private repos) or public_repo (for public repos)
# Use token during deployment
./scripts/vm/deploy-complete.sh --token ghp_your_token_here 192.168.1.100
# Or set as environment variable
export GITHUB_TOKEN=ghp_your_token_here
./scripts/vm/deploy-complete.sh 192.168.1.100
```
## ⏰ Automatic Pull Scheduling
### Default Configuration
- **Pull Interval**: 5 minutes (300 seconds)
- **Health Checks**: Every 60 seconds
- **Auto-restart**: On failure with 10-second delay
- **Systemd Integration**: Auto-start on boot
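The defaults above (auto-start on boot, restart on failure with a 10-second delay) map onto a systemd unit roughly like the following. This is an illustrative sketch only; paths and the user are assumptions, and the real unit ships as `scripts/systemd/thrillwiki-automation.service`:

```ini
# Illustrative sketch; see scripts/systemd/thrillwiki-automation.service
[Unit]
Description=ThrillWiki bulletproof automation
After=network-online.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/thrillwiki
ExecStart=/home/ubuntu/thrillwiki/scripts/vm/bulletproof-automation.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```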
### Customization
```bash
# Custom pull intervals
./scripts/vm/deploy-complete.sh --pull-interval 120 192.168.1.100 # 2 minutes
# Development with frequent pulls
./scripts/vm/deploy-complete.sh --preset dev 192.168.1.100 # 1 minute
# Production with stable intervals
./scripts/vm/deploy-complete.sh --preset prod 192.168.1.100 # 5 minutes
```
### Monitoring
```bash
# Monitor automation in real-time
ssh ubuntu@192.168.1.100 'sudo journalctl -u thrillwiki-automation -f'
# Check service status
ssh ubuntu@192.168.1.100 'sudo systemctl status thrillwiki-automation'
# View automation logs
ssh ubuntu@192.168.1.100 'tail -f [AWS-SECRET-REMOVED]-automation.log'
```
## 🛠️ Advanced Usage
### Multi-Host Deployment
Deploy to multiple hosts simultaneously:
```bash
# Sequential deployment
./scripts/vm/deploy-complete.sh host1 host2 host3
# Parallel deployment (faster)
./scripts/vm/deploy-complete.sh --parallel host1 host2 host3
# Mixed environments
./scripts/vm/deploy-complete.sh --preset prod prod1 prod2 prod3
```
### Custom SSH Configuration
```bash
# Custom SSH key and user
./scripts/vm/deploy-complete.sh -u admin -k ~/.ssh/custom_key -p 2222 remote-host
# SSH config file support
# Add to ~/.ssh/config:
# Host thrillwiki-prod
# HostName 192.168.1.100
# User ubuntu
# IdentityFile ~/.ssh/thrillwiki_key
# Port 22
./scripts/vm/deploy-complete.sh thrillwiki-prod
```
### Environment-Specific Deployment
```bash
# Development environment
./scripts/vm/deploy-complete.sh --preset dev --debug dev-server
# Production environment with security
./scripts/vm/deploy-complete.sh --preset prod --token $GITHUB_TOKEN prod-server
# Testing environment with monitoring
./scripts/vm/deploy-complete.sh --preset testing test-server
```
## 🔧 Troubleshooting
### Common Issues
#### SSH Connection Failed
```bash
# Test SSH connectivity
ssh -o ConnectTimeout=10 ubuntu@192.168.1.100 'echo "Connection test"'
# Check SSH key permissions
chmod 600 ~/.ssh/your_key
ssh-add ~/.ssh/your_key
# Verify host accessibility
ping 192.168.1.100
```
#### GitHub Authentication Issues
```bash
# Validate GitHub token
python3 scripts/vm/github-setup.py validate
# Test repository access
curl -H "Authorization: Bearer $GITHUB_TOKEN" \
https://api.github.com/repos/your-username/thrillwiki
# Re-setup GitHub authentication
python3 scripts/vm/github-setup.py setup
```
#### Service Not Starting
```bash
# Check service status
ssh ubuntu@host 'sudo systemctl status thrillwiki-automation'
# View service logs
ssh ubuntu@host 'sudo journalctl -u thrillwiki-automation --since "1 hour ago"'
# Manual service restart
ssh ubuntu@host 'sudo systemctl restart thrillwiki-automation'
```
#### Deployment Validation Failed
```bash
# Check project files
ssh ubuntu@host 'ls -la /home/ubuntu/thrillwiki/scripts/vm/'
# Test automation script manually
ssh ubuntu@host 'cd /home/ubuntu/thrillwiki && bash scripts/vm/bulletproof-automation.sh --test'
# Verify GitHub access
ssh ubuntu@host 'cd /home/ubuntu/thrillwiki && python3 scripts/vm/github-setup.py validate'
```
### Debug Mode
Enable detailed logging for troubleshooting:
```bash
# Enable debug mode
export COMPLETE_DEBUG=true
export DEPLOY_DEBUG=true
./scripts/vm/deploy-complete.sh --debug 192.168.1.100
```
### Rollback Deployment
If deployment fails, automatic rollback is performed:
```bash
# Manual rollback (if needed)
ssh ubuntu@host 'sudo systemctl stop thrillwiki-automation'
ssh ubuntu@host 'sudo systemctl disable thrillwiki-automation'
ssh ubuntu@host 'rm -rf /home/ubuntu/thrillwiki'
```
## 📊 Monitoring and Maintenance
### Health Monitoring
The deployed system includes comprehensive health monitoring:
- **Service Health**: systemd monitors the automation service
- **Repository Health**: Regular GitHub connectivity tests
- **Server Health**: Django server monitoring and auto-restart
- **Resource Health**: Memory and CPU monitoring
- **Log Health**: Automatic log rotation and cleanup
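The retry-until-healthy pattern used throughout these checks can be captured in a small generic helper. A minimal sketch (the function name is illustrative; the shipped deploy script uses a dedicated `health_check` with a 2-second delay):

```shell
# Hypothetical retry helper: run a command until it succeeds or the
# attempt budget is exhausted.
retry_until_ok() {
    local max_attempts="$1"; shift
    local attempt=1
    while [ "$attempt" -le "$max_attempts" ]; do
        if "$@"; then
            return 0
        fi
        attempt=$((attempt + 1))
        sleep 0.1   # short delay between attempts (the real script sleeps 2s)
    done
    return 1
}

# Example: probe the app the same way the deploy script does
# retry_until_ok 30 curl -f -s http://localhost:8000/health
```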
### Regular Maintenance
```bash
# Update automation system
ssh ubuntu@host 'cd /home/ubuntu/thrillwiki && git pull'
ssh ubuntu@host 'sudo systemctl restart thrillwiki-automation'
# View recent logs
ssh ubuntu@host 'sudo journalctl -u thrillwiki-automation --since "24 hours ago"'
# Check disk usage
ssh ubuntu@host 'df -h /home/ubuntu/thrillwiki'
# Rotate logs manually
ssh ubuntu@host 'cd /home/ubuntu/thrillwiki && find logs/ -name "*.log" -size +10M -exec mv {} {}.old \;'
```
### Performance Tuning
```bash
# Adjust pull intervals for performance
./scripts/vm/deploy-complete.sh --pull-interval 600 192.168.1.100 # 10 minutes
# Monitor resource usage
ssh ubuntu@host 'top -p $(pgrep -f bulletproof-automation)'
# Check automation performance
ssh ubuntu@host 'tail -100 [AWS-SECRET-REMOVED]-automation.log | grep -E "(SUCCESS|ERROR)"'
```
## 🔒 Security Considerations
### SSH Security
- Use SSH keys instead of passwords
- Restrict SSH access with firewall rules
- Use non-standard SSH ports when possible
- Regularly rotate SSH keys
### GitHub Token Security
- Use tokens with minimal required permissions
- Set reasonable expiration dates
- Store tokens securely with 600 permissions
- Regularly rotate GitHub PATs
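Storing a token "securely with 600 permissions" boils down to a create-with-restrictive-umask pattern. A minimal sketch under assumed paths (the function and demo file name are illustrative, not the actual `.github-pat` handling):

```shell
# Hypothetical sketch: write a secret to disk so it is never readable
# by other users, then lock it to owner read/write only.
store_token() {
    local token_file="$1" token="$2"
    umask 077                        # file is created without group/other access
    printf '%s\n' "$token" > "$token_file"
    chmod 600 "$token_file"          # owner read/write only
}

store_token /tmp/demo-pat "ghp_example_not_a_real_token"
stat -c '%a' /tmp/demo-pat   # → 600
```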
### System Security
- Keep remote systems updated
- Use systemd security features
- Monitor automation logs for suspicious activity
- Restrict network access to automation services
## 📚 Integration Guide
### CI/CD Integration
Integrate with your CI/CD pipeline:
```yaml
# GitHub Actions example
- name: Deploy to Production
run: |
./scripts/vm/deploy-complete.sh \
--preset prod \
--token ${{ secrets.GITHUB_TOKEN }} \
--parallel \
prod1.example.com prod2.example.com
# GitLab CI example
deploy_production:
script:
- ./scripts/vm/deploy-complete.sh --preset prod --token $GITHUB_TOKEN $PROD_SERVERS
```
### Infrastructure as Code
Use with Terraform or similar tools:
```hcl
# Terraform example
resource "null_resource" "thrillwiki_deployment" {
provisioner "local-exec" {
command = "./scripts/vm/deploy-complete.sh --preset prod ${aws_instance.app.public_ip}"
}
depends_on = [aws_instance.app]
}
```
## 🆘 Support
### Getting Help
1. **Check the logs** - Most issues are logged in detail
2. **Use debug mode** - Enable debug logging for troubleshooting
3. **Test connectivity** - Verify SSH and GitHub access
4. **Validate environment** - Check dependencies and permissions
### Log Locations
- **Local Deployment Logs**: `logs/deploy-complete.log`, `logs/remote-deploy.log`
- **Remote Automation Logs**: `[AWS-SECRET-REMOVED]-automation.log`
- **System Service Logs**: `journalctl -u thrillwiki-automation`
### Common Solutions
| Issue | Solution |
|-------|----------|
| SSH timeout | Check network connectivity and SSH service |
| Permission denied | Verify SSH key permissions and user access |
| GitHub API rate limit | Configure GitHub PAT with proper scopes |
| Service won't start | Check systemd service configuration and logs |
| Automation not pulling | Verify GitHub access and repository permissions |
---
## 🎉 Success!
Your ThrillWiki automation system is now deployed with:
- **Automatic repository pulls every 5 minutes**
- **GitHub authentication configured**
- **systemd service for reliability**
- **Health monitoring and logging**
- **Django server automation with UV**
The system will automatically:
1. Pull latest changes from your repository
2. Run Django migrations when needed
3. Update dependencies with UV
4. Restart the Django server
5. Monitor and recover from failures
**Enjoy your fully automated ThrillWiki deployment! 🚀**

View File

@@ -1,464 +0,0 @@
#!/bin/bash
#
# ThrillWiki Auto-Pull Script
# Automatically pulls latest changes from Git repository every 10 minutes
# Designed to run as a cron job on the VM
#
set -e
# Configuration
PROJECT_DIR="/home/thrillwiki/thrillwiki"
LOG_FILE="/home/thrillwiki/logs/auto-pull.log"
LOCK_FILE="/tmp/thrillwiki-auto-pull.lock"
SERVICE_NAME="thrillwiki"
MAX_LOG_SIZE=10485760 # 10MB
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging function
log() {
echo -e "$(date '+%Y-%m-%d %H:%M:%S') [AUTO-PULL] $1" | tee -a "$LOG_FILE"
}
log_error() {
echo -e "$(date '+%Y-%m-%d %H:%M:%S') ${RED}[ERROR]${NC} $1" | tee -a "$LOG_FILE"
}
log_success() {
echo -e "$(date '+%Y-%m-%d %H:%M:%S') ${GREEN}[SUCCESS]${NC} $1" | tee -a "$LOG_FILE"
}
log_warning() {
echo -e "$(date '+%Y-%m-%d %H:%M:%S') ${YELLOW}[WARNING]${NC} $1" | tee -a "$LOG_FILE"
}
# Function to rotate log file if it gets too large
rotate_log() {
if [ -f "$LOG_FILE" ] && [ $(stat -f%z "$LOG_FILE" 2>/dev/null || stat -c%s "$LOG_FILE" 2>/dev/null || echo 0) -gt $MAX_LOG_SIZE ]; then
mv "$LOG_FILE" "${LOG_FILE}.old"
log "Log file rotated due to size limit"
fi
}
# Function to acquire lock
acquire_lock() {
if [ -f "$LOCK_FILE" ]; then
local lock_pid=$(cat "$LOCK_FILE" 2>/dev/null || echo "")
if [ -n "$lock_pid" ] && kill -0 "$lock_pid" 2>/dev/null; then
log_warning "Auto-pull already running (PID: $lock_pid), skipping this run"
exit 0
else
log "Removing stale lock file"
rm -f "$LOCK_FILE"
fi
fi
echo $$ > "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT
}
# Function to setup GitHub authentication
setup_git_auth() {
log "🔐 Setting up GitHub authentication..."
# Check if GITHUB_TOKEN is available
if [ -z "${GITHUB_TOKEN:-}" ]; then
# Try loading from ***REMOVED*** file in project directory
if [ -f "$PROJECT_DIR/***REMOVED***" ]; then
source "$PROJECT_DIR/***REMOVED***"
fi
# Try loading from global ***REMOVED***.unraid
if [ -z "${GITHUB_TOKEN:-}" ] && [ -f "$PROJECT_DIR/../../***REMOVED***.unraid" ]; then
source "$PROJECT_DIR/../../***REMOVED***.unraid"
fi
# Try loading from parent directory ***REMOVED***.unraid
if [ -z "${GITHUB_TOKEN:-}" ] && [ -f "$PROJECT_DIR/../***REMOVED***.unraid" ]; then
source "$PROJECT_DIR/../***REMOVED***.unraid"
fi
fi
# Verify we have the token
if [ -z "${GITHUB_TOKEN:-}" ]; then
log_warning "⚠️ GITHUB_TOKEN not found, trying public access..."
return 1
fi
# Configure git to use token authentication
local repo_url="https://github.com/pacnpal/thrillwiki_django_no_react.git"
local auth_url="https://pacnpal:${GITHUB_TOKEN}@github.com/pacnpal/thrillwiki_django_no_react.git"
# Update remote URL to use token
if git remote get-url origin | grep -q "github.com/pacnpal/thrillwiki_django_no_react"; then
git remote set-url origin "$auth_url"
log_success "✅ GitHub authentication configured with token"
return 0
else
log_warning "⚠️ Repository origin URL doesn't match expected GitHub repo"
return 1
fi
}
# Function to check if Git repository has changes
has_remote_changes() {
# Setup authentication first
if ! setup_git_auth; then
log_warning "⚠️ GitHub authentication failed, skipping remote check"
return 1 # Assume no changes if we can't authenticate
fi
# Fetch latest changes without merging
log "📡 Fetching latest changes from remote..."
if ! git fetch origin main --quiet 2>/dev/null; then
log_error "❌ Failed to fetch from remote repository - authentication or network issue"
log_warning "⚠️ Auto-pull will skip this cycle due to fetch failure"
return 1
fi
# Compare local and remote
local local_commit=$(git rev-parse HEAD)
local remote_commit=$(git rev-parse origin/main)
log "📊 Local commit: ${local_commit:0:8}"
log "📊 Remote commit: ${remote_commit:0:8}"
if [ "$local_commit" != "$remote_commit" ]; then
log "📥 New changes detected!"
return 0 # Has changes
else
log "✅ Repository is up to date"
return 1 # No changes
fi
}
# Function to check service status
is_service_running() {
systemctl is-active --quiet "$SERVICE_NAME" 2>/dev/null
}
# Function to restart service safely
restart_service() {
log "Restarting ThrillWiki service..."
if systemctl is-enabled --quiet "$SERVICE_NAME" 2>/dev/null; then
if sudo systemctl restart "$SERVICE_NAME"; then
log_success "Service restarted successfully"
return 0
else
log_error "Failed to restart service"
return 1
fi
else
log_warning "Service not enabled, attempting manual restart..."
# Try to start it anyway
if sudo systemctl start "$SERVICE_NAME" 2>/dev/null; then
log_success "Service started successfully"
return 0
else
log_warning "Service restart failed, may need manual intervention"
return 1
fi
fi
}
# Function to update Python dependencies
update_dependencies() {
log "Checking for dependency updates..."
# Check if UV is available
export PATH="/home/thrillwiki/.cargo/bin:$PATH"
if command -v uv > /dev/null 2>&1; then
log "Updating dependencies with UV..."
if uv sync --quiet; then
log_success "Dependencies updated with UV"
return 0
else
log_warning "UV sync failed, trying pip..."
fi
fi
# Fallback to pip if UV fails or isn't available
if [ -d ".venv" ]; then
log "Activating virtual environment and updating with pip..."
source .venv/bin/activate
if pip install -e . --quiet; then
log_success "Dependencies updated with pip"
return 0
else
log_warning "Pip install failed"
return 1
fi
else
log_warning "No virtual environment found, skipping dependency update"
return 1
fi
}
# Function to run Django migrations
run_migrations() {
log "Running Django migrations..."
export PATH="/home/thrillwiki/.cargo/bin:$PATH"
# Try with UV first
if command -v uv > /dev/null 2>&1; then
if uv run python manage.py migrate --verbosity 0; then
log_success "Migrations completed with UV"
return 0
else
log_warning "UV migrations failed, trying direct Python..."
fi
fi
# Fallback to direct Python
if [ -d ".venv" ]; then
source .venv/bin/activate
if python manage.py migrate --verbosity 0; then
log_success "Migrations completed with Python"
return 0
else
log_warning "Django migrations failed"
return 1
fi
else
if python3 manage.py migrate --verbosity 0; then
log_success "Migrations completed"
return 0
else
log_warning "Django migrations failed"
return 1
fi
fi
}
# Function to collect static files
collect_static() {
log "Collecting static files..."
export PATH="/home/thrillwiki/.cargo/bin:$PATH"
# Try with UV first
if command -v uv > /dev/null 2>&1; then
if uv run python manage.py collectstatic --noinput --verbosity 0; then
log_success "Static files collected with UV"
return 0
else
log_warning "UV collectstatic failed, trying direct Python..."
fi
fi
# Fallback to direct Python
if [ -d ".venv" ]; then
source .venv/bin/activate
if python manage.py collectstatic --noinput --verbosity 0; then
log_success "Static files collected with Python"
return 0
else
log_warning "Static file collection failed"
return 1
fi
else
if python3 manage.py collectstatic --noinput --verbosity 0; then
log_success "Static files collected"
return 0
else
log_warning "Static file collection failed"
return 1
fi
fi
}
# Main auto-pull function
main() {
# Setup
rotate_log
acquire_lock
log "🔄 Starting auto-pull check..."
# Ensure logs directory exists
mkdir -p "$(dirname "$LOG_FILE")"
# Change to project directory
if ! cd "$PROJECT_DIR"; then
log_error "Failed to change to project directory: $PROJECT_DIR"
exit 1
fi
# Check if this is a Git repository
if [ ! -d ".git" ]; then
log_error "Not a Git repository: $PROJECT_DIR"
exit 1
fi
# Check for remote changes
log "📡 Checking for remote changes..."
if ! has_remote_changes; then
log "✅ Repository is up to date, no changes to pull"
exit 0
fi
log "📥 New changes detected, pulling updates..."
# Record current service status
local service_was_running=false
if is_service_running; then
service_was_running=true
log "📊 Service is currently running"
else
log "📊 Service is not running"
fi
# Pull the latest changes
local pull_output
if pull_output=$(git pull origin main 2>&1); then
log_success "✅ Git pull completed successfully"
log "📋 Changes:"
echo "$pull_output" | grep -E "^\s*(create|modify|delete|rename)" | head -10 | while read line; do
log " $line"
done
else
log_error "❌ Git pull failed:"
echo "$pull_output" | head -10 | while read line; do
log_error " $line"
done
exit 1
fi
# Update dependencies if requirements files changed
if echo "$pull_output" | grep -qE "(pyproject\.toml|requirements.*\.txt|setup\.py)"; then
log "📦 Dependencies file changed, updating..."
update_dependencies
else
log "📦 No dependency changes detected, skipping update"
fi
# Run migrations if models changed
if echo "$pull_output" | grep -qE "(models\.py|migrations/)"; then
log "🗄️ Model changes detected, running migrations..."
run_migrations
else
log "🗄️ No model changes detected, skipping migrations"
fi
# Collect static files if they changed
if echo "$pull_output" | grep -qE "(static/|templates/|\.css|\.js)"; then
log "🎨 Static files changed, collecting..."
collect_static
else
log "🎨 No static file changes detected, skipping collection"
fi
# Restart service if it was running
if $service_was_running; then
log "🔄 Restarting service due to code changes..."
if restart_service; then
# Wait a moment for service to fully start
sleep 3
# Verify service is running
if is_service_running; then
log_success "🎉 Auto-pull completed successfully! Service is running."
else
log_error "⚠️ Service failed to start after restart"
exit 1
fi
else
log_error "⚠️ Service restart failed"
exit 1
fi
else
log_success "🎉 Auto-pull completed successfully! (Service was not running)"
fi
# Health check
log "🔍 Performing health check..."
if curl -f http://localhost:8000 > /dev/null 2>&1; then
log_success "✅ Application health check passed"
else
log_warning "⚠️ Application health check failed (may still be starting up)"
fi
log "✨ Auto-pull cycle completed at $(date)"
}
# Handle script arguments
case "${1:-}" in
--help|-h)
echo "ThrillWiki Auto-Pull Script"
echo ""
echo "Usage:"
echo " $0 Run auto-pull check (default)"
echo " $0 --force Force pull even if no changes detected"
echo " $0 --status Check auto-pull service status"
echo " $0 --logs Show recent auto-pull logs"
echo " $0 --help Show this help"
exit 0
;;
--force)
log "🚨 Force mode: Pulling regardless of changes"
# Skip the has_remote_changes check
cd "$PROJECT_DIR"
# Setup authentication and pull
setup_git_auth
if git pull origin main; then
log_success "✅ Force pull completed"
# Run standard update procedures
update_dependencies
run_migrations
collect_static
# Restart service if it was running
if is_service_running; then
restart_service
fi
log_success "🎉 Force update completed successfully!"
else
log_error "❌ Force pull failed"
exit 1
fi
;;
--status)
if systemctl is-active --quiet cron 2>/dev/null || systemctl is-active --quiet crond 2>/dev/null; then
echo "✅ Cron daemon is running"
else
echo "❌ Cron daemon is not running"
fi
if crontab -l 2>/dev/null | grep -q "auto-pull.sh"; then
echo "✅ Auto-pull cron job is installed"
echo "📋 Current cron jobs:"
crontab -l 2>/dev/null | grep -E "(auto-pull|thrillwiki)"
else
echo "❌ Auto-pull cron job is not installed"
fi
if [ -f "$LOG_FILE" ]; then
echo "📄 Last auto-pull log entries:"
tail -5 "$LOG_FILE"
else
echo "📄 No auto-pull logs found"
fi
;;
--logs)
if [ -f "$LOG_FILE" ]; then
tail -50 "$LOG_FILE"
else
echo "No auto-pull logs found at $LOG_FILE"
fi
;;
*)
# Default: run main auto-pull
main
;;
esac

View File

@@ -1,838 +0,0 @@
#!/bin/bash
#
# ThrillWiki Automation Configuration Library
# Centralized configuration management for bulletproof automation system
#
# Features:
# - Configuration file reading/writing with validation
# - GitHub PAT token management and validation
# - Environment variable management with secure file permissions
# - Configuration migration and backup utilities
# - Comprehensive error handling and logging
#
# ============================================================
# LIBRARY METADATA
# ============================================================
AUTOMATION_CONFIG_VERSION="1.0.0"
AUTOMATION_CONFIG_LOADED="true"
# ============================================================
# CONFIGURATION CONSTANTS
# ============================================================
# Configuration file paths
readonly CONFIG_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
readonly SYSTEMD_CONFIG_DIR="$CONFIG_DIR/scripts/systemd"
readonly VM_CONFIG_DIR="$CONFIG_DIR/scripts/vm"
# Environment configuration files
readonly ENV_EXAMPLE_FILE="$SYSTEMD_CONFIG_DIR/thrillwiki-automation***REMOVED***.example"
readonly ENV_CONFIG_FILE="$SYSTEMD_CONFIG_DIR/thrillwiki-automation***REMOVED***"
readonly PROJECT_ENV_FILE="$CONFIG_DIR/***REMOVED***"
# GitHub authentication files
readonly GITHUB_TOKEN_FILE="$CONFIG_DIR/.github-pat"
readonly GITHUB_AUTH_SCRIPT="$CONFIG_DIR/scripts/github-auth.py"
readonly GITHUB_TOKEN_BACKUP="$CONFIG_DIR/.github-pat.backup"
# Service configuration
readonly SERVICE_NAME="thrillwiki-automation"
readonly SERVICE_FILE="$SYSTEMD_CONFIG_DIR/$SERVICE_NAME.service"
# Backup configuration
readonly CONFIG_BACKUP_DIR="$CONFIG_DIR/backups/config"
readonly MAX_BACKUPS=5
# [AWS-SECRET-REMOVED]====================================
# COLOR DEFINITIONS
# [AWS-SECRET-REMOVED]====================================
if [[ -z "${RED:-}" ]]; then
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
fi
# [AWS-SECRET-REMOVED]====================================
# LOGGING FUNCTIONS
# [AWS-SECRET-REMOVED]====================================
# Configuration-specific logging functions
config_log() {
local level="$1"
local color="$2"
local message="$3"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
echo -e "${color}[$timestamp] [CONFIG-$level]${NC} $message"
}
config_info() {
config_log "INFO" "$BLUE" "$1"
}
config_success() {
config_log "SUCCESS" "$GREEN" "$1"
}
config_warning() {
config_log "WARNING" "$YELLOW" "⚠️ $1"
}
config_error() {
config_log "ERROR" "$RED" "$1"
}
config_debug() {
if [[ "${CONFIG_DEBUG:-false}" == "true" ]]; then
config_log "DEBUG" "$PURPLE" "🔍 $1"
fi
}
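All five loggers above share one timestamped format; with the colors stripped, the core pattern is easy to exercise in isolation (a sketch, not the library's exact function):

```shell
#!/usr/bin/env bash
# Color-free sketch of the leveled, timestamped logger pattern used above.

log() {
    local level="$1"; shift
    printf '[%s] [CONFIG-%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$level" "$*"
}

line=$(log INFO "configuration loaded")
echo "$line"
```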
# [AWS-SECRET-REMOVED]====================================
# UTILITY FUNCTIONS
# [AWS-SECRET-REMOVED]====================================
# Check if command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Create directory with proper permissions if it doesn't exist
ensure_directory() {
local dir="$1"
local permissions="${2:-755}"
if [[ ! -d "$dir" ]]; then
config_debug "Creating directory: $dir"
mkdir -p "$dir"
chmod "$permissions" "$dir"
config_debug "Directory created with permissions $permissions"
fi
}
# Set secure file permissions
set_secure_permissions() {
local file="$1"
local permissions="${2:-600}"
if [[ -f "$file" ]]; then
chmod "$permissions" "$file"
config_debug "Set permissions $permissions on $file"
fi
}
# Backup a file with timestamp
backup_file() {
local source_file="$1"
local backup_dir="${2:-$CONFIG_BACKUP_DIR}"
if [[ ! -f "$source_file" ]]; then
config_debug "Source file does not exist for backup: $source_file"
return 1
fi
ensure_directory "$backup_dir"
local filename
filename=$(basename "$source_file")
local timestamp
timestamp=$(date '+%Y%m%d_%H%M%S')
local backup_file="$backup_dir/${filename}.${timestamp}.backup"
if cp "$source_file" "$backup_file"; then
config_debug "File backed up: $source_file -> $backup_file"
# Clean up old backups (keep only MAX_BACKUPS)
local backup_count
backup_count=$(find "$backup_dir" -name "${filename}.*.backup" | wc -l)
if [[ $backup_count -gt $MAX_BACKUPS ]]; then
config_debug "Cleaning up old backups (keeping $MAX_BACKUPS)"
find "$backup_dir" -name "${filename}.*.backup" -type f -printf '%T@ %p\n' | \
sort -n | head -n -"$MAX_BACKUPS" | cut -d' ' -f2- | \
xargs rm -f
fi
echo "$backup_file"
return 0
else
config_error "Failed to backup file: $source_file"
return 1
fi
}
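The timestamp-and-prune scheme in `backup_file` can be exercised against a throwaway directory. A sketch assuming GNU `find`, `head`, and `touch` (the function itself already relies on GNU `find -printf` and `head -n -N`); filenames and dates are illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the timestamped-copy-plus-pruning scheme from backup_file,
# run in a temp directory with fabricated backup names.
set -e

demo_dir=$(mktemp -d)
max_keep=2

echo "data" > "$demo_dir/settings.conf"
for i in 1 2 3 4; do
    f="$demo_dir/settings.conf.2024010${i}_000000.backup"
    cp "$demo_dir/settings.conf" "$f"
    touch -d "2024-01-0$i" "$f"   # distinct mtimes so "oldest" is well-defined
done

# Prune the oldest backups, keeping only $max_keep (same pipeline shape as above)
find "$demo_dir" -name "settings.conf.*.backup" -type f -printf '%T@ %p\n' |
    sort -n | head -n -"$max_keep" | cut -d' ' -f2- | xargs -r rm -f

kept=$(find "$demo_dir" -name "settings.conf.*.backup" | sort)
remaining=$(echo "$kept" | wc -l)
rm -rf "$demo_dir"
echo "$remaining"   # 2
```

`sort -n` on the `%T@` epoch timestamps orders oldest-first, and `head -n -N` emits everything except the last N lines, so exactly the N newest backups survive.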
# [AWS-SECRET-REMOVED]====================================
# CONFIGURATION FILE MANAGEMENT
# [AWS-SECRET-REMOVED]====================================
# Read configuration value from file
read_config_value() {
local key="$1"
local config_file="${2:-$ENV_CONFIG_FILE}"
local default_value="${3:-}"
config_debug "Reading config value: $key from $config_file"
if [[ ! -f "$config_file" ]]; then
config_debug "Config file not found: $config_file"
echo "$default_value"
return 1
fi
# Look for the key (handle both commented and uncommented lines)
local value
value=$(grep -E "^[#[:space:]]*${key}[[:space:]]*=" "$config_file" | \
grep -v "^[[:space:]]*#" | \
tail -1 | \
cut -d'=' -f2- | \
sed 's/^[[:space:]]*//' | \
sed 's/[[:space:]]*$//' | \
sed 's/^["'\'']\(.*\)["'\'']$/\1/')
if [[ -n "$value" ]]; then
echo "$value"
return 0
else
echo "$default_value"
return 1
fi
}
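The parsing pipeline in `read_config_value` (skip commented lines, last assignment wins, trim whitespace, strip surrounding quotes) can be run in isolation against a fabricated config file:

```shell
#!/usr/bin/env bash
# Isolated sketch of read_config_value's grep/cut/sed pipeline,
# against a throwaway config with fabricated entries.
set -e

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# PULL_INTERVAL=999
PULL_INTERVAL=60
PULL_INTERVAL = "300"
EOF

key="PULL_INTERVAL"
value=$(grep -E "^[#[:space:]]*${key}[[:space:]]*=" "$cfg" |
    grep -v "^[[:space:]]*#" |
    tail -1 |
    cut -d'=' -f2- |
    sed 's/^[[:space:]]*//' |
    sed 's/[[:space:]]*$//' |
    sed 's/^["'\'']\(.*\)["'\'']$/\1/')
rm -f "$cfg"

echo "$value"   # 300
```

The commented line is discarded by the `grep -v`, `tail -1` selects the last live assignment, and the final `sed` strips the matching double quotes.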
# Write configuration value to file
write_config_value() {
local key="$1"
local value="$2"
local config_file="${3:-$ENV_CONFIG_FILE}"
local create_if_missing="${4:-true}"
config_debug "Writing config value: $key=$value to $config_file"
# Create config file from example if it doesn't exist
if [[ ! -f "$config_file" ]] && [[ "$create_if_missing" == "true" ]]; then
if [[ -f "$ENV_EXAMPLE_FILE" ]]; then
config_info "Creating config file from template: $config_file"
cp "$ENV_EXAMPLE_FILE" "$config_file"
set_secure_permissions "$config_file" 600
else
config_info "Creating new config file: $config_file"
touch "$config_file"
set_secure_permissions "$config_file" 600
fi
fi
# Backup existing file
backup_file "$config_file" >/dev/null
# Check if key already exists
if grep -q "^[#[:space:]]*${key}[[:space:]]*=" "$config_file" 2>/dev/null; then
# Update existing key
config_debug "Updating existing key: $key"
# Use a temporary file for safe updating
local temp_file
temp_file=$(mktemp)
# Process the file line by line
while IFS= read -r line || [[ -n "$line" ]]; do
if [[ "$line" =~ ^[#[:space:]]*${key}[[:space:]]*= ]]; then
# Replace this line with the new value
echo "$key=$value"
config_debug "Replaced line: $line -> $key=$value"
else
echo "$line"
fi
done < "$config_file" > "$temp_file"
# Replace original file
mv "$temp_file" "$config_file"
set_secure_permissions "$config_file" 600
else
# Add new key
config_debug "Adding new key: $key"
echo "$key=$value" >> "$config_file"
fi
config_success "Configuration updated: $key"
return 0
}
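The update-or-append behaviour of `write_config_value` (rewrite a matching line via a temp file, otherwise append) reduces to a small testable core; a sketch with fabricated keys:

```shell
#!/usr/bin/env bash
# Isolated sketch of write_config_value's update-or-append logic,
# against a throwaway file with illustrative keys.
set -e

cfg=$(mktemp)
printf 'A=1\nB=2\n' > "$cfg"

set_kv() {
    local key="$1" value="$2" tmp
    if grep -q "^[#[:space:]]*${key}[[:space:]]*=" "$cfg"; then
        tmp=$(mktemp)
        # Rewrite the matching line; copy everything else verbatim
        while IFS= read -r line || [ -n "$line" ]; do
            if [[ "$line" =~ ^[#[:space:]]*${key}[[:space:]]*= ]]; then
                echo "$key=$value"
            else
                echo "$line"
            fi
        done < "$cfg" > "$tmp"
        mv "$tmp" "$cfg"
    else
        echo "$key=$value" >> "$cfg"
    fi
}

set_kv A 10   # update in place
set_kv C 3    # append new key
result=$(cat "$cfg")
rm -f "$cfg"
echo "$result"
```

Writing to a temp file and `mv`-ing it into place keeps the config intact if the rewrite is interrupted partway.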
# Remove configuration value from file
remove_config_value() {
local key="$1"
local config_file="${2:-$ENV_CONFIG_FILE}"
config_debug "Removing config value: $key from $config_file"
if [[ ! -f "$config_file" ]]; then
config_warning "Config file not found: $config_file"
return 1
fi
# Backup existing file
backup_file "$config_file" >/dev/null
# Remove the key using sed
sed -i.tmp "/^[#[:space:]]*${key}[[:space:]]*=/d" "$config_file"
rm -f "${config_file}.tmp"
config_success "Configuration removed: $key"
return 0
}
# Validate configuration file
validate_config_file() {
local config_file="${1:-$ENV_CONFIG_FILE}"
local errors=0
config_info "Validating configuration file: $config_file"
if [[ ! -f "$config_file" ]]; then
config_error "Configuration file not found: $config_file"
return 1
fi
# Check file permissions
local perms
perms=$(stat -c "%a" "$config_file" 2>/dev/null || stat -f "%A" "$config_file" 2>/dev/null)
if [[ "$perms" != "600" ]] && [[ "$perms" != "0600" ]]; then
config_warning "Configuration file has insecure permissions: $perms (should be 600)"
((errors++))
fi
# Check for required variables if GitHub token is configured
local github_token
github_token=$(read_config_value "GITHUB_TOKEN" "$config_file")
if [[ -n "$github_token" ]]; then
config_debug "GitHub token found in configuration"
# Check token format
if [[ ! "$github_token" =~ ^gh[pousr]_[A-Za-z0-9_]{36,255}$ ]] && [[ ! "$github_token" =~ ^github_pat_[A-Za-z0-9_]{40,255}$ ]]; then
config_warning "GitHub token format appears invalid"
((errors++))
fi
fi
# Check syntax by sourcing in a subshell
if ! (source "$config_file" >/dev/null 2>&1); then
config_error "Configuration file has syntax errors"
((errors++))
fi
if [[ $errors -eq 0 ]]; then
config_success "Configuration file validation passed"
return 0
else
config_error "Configuration file validation failed with $errors errors"
return 1
fi
}
# [AWS-SECRET-REMOVED]====================================
# GITHUB PAT TOKEN MANAGEMENT
# [AWS-SECRET-REMOVED]====================================
# Validate GitHub PAT token format
validate_github_token_format() {
local token="$1"
if [[ -z "$token" ]]; then
config_debug "Empty token provided"
return 1
fi
# GitHub token formats:
# - Classic PAT: ghp_[36-40 chars]
# - Fine-grained PAT: github_pat_[40+ chars]
# - OAuth token: gho_[36-40 chars]
# - User token: ghu_[36-40 chars]
# - Server token: ghs_[36-40 chars]
# - Refresh token: ghr_[36-40 chars]
if [[ "$token" =~ ^gh[pousr]_[A-Za-z0-9_]{36,255}$ ]] || [[ "$token" =~ ^github_pat_[A-Za-z0-9_]{40,255}$ ]]; then
config_debug "Token format is valid"
return 0
else
config_debug "Token format is invalid"
return 1
fi
}
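The format check above can be exercised with shaped-but-fake tokens; a sketch where the tokens are fabricated placeholders of the right length, not real credentials:

```shell
#!/usr/bin/env bash
# Exercising the token-format regexes in isolation. Both tokens below are
# fabricated placeholders, not real credentials.

looks_like_github_token() {
    [[ "$1" =~ ^gh[pousr]_[A-Za-z0-9_]{36,255}$ ]] || \
    [[ "$1" =~ ^github_pat_[A-Za-z0-9_]{40,255}$ ]]
}

fake_classic="ghp_$(printf 'a%.0s' {1..36})"       # ghp_ + 36 filler chars
fake_fine="github_pat_$(printf 'b%.0s' {1..40})"   # github_pat_ + 40 filler chars

looks_like_github_token "$fake_classic" && echo "classic format accepted"
looks_like_github_token "$fake_fine" && echo "fine-grained format accepted"
looks_like_github_token "not-a-token" || echo "rejected"
```

Note the length floors: 36 payload characters for the `gh?_` family and 40 for fine-grained `github_pat_` tokens.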
# Test GitHub PAT token by making API call
test_github_token() {
local token="$1"
local timeout="${2:-10}"
config_debug "Testing GitHub token with API call"
if [[ -z "$token" ]]; then
config_error "No token provided for testing"
return 1
fi
# Test with GitHub API
local response
local http_code
response=$(curl -s -w "%{http_code}" \
--max-time "$timeout" \
-H "Authorization: Bearer $token" \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"https://api.github.com/user" 2>/dev/null)
http_code="${response: -3}"
case "$http_code" in
200)
config_debug "GitHub token is valid"
return 0
;;
401)
config_error "GitHub token is invalid or expired"
return 1
;;
403)
config_error "GitHub token lacks required permissions"
return 1
;;
*)
config_error "GitHub API request failed with HTTP $http_code"
return 1
;;
esac
}
# Get GitHub user information using PAT
get_github_user_info() {
local token="$1"
local timeout="${2:-10}"
if [[ -z "$token" ]]; then
config_error "No token provided"
return 1
fi
config_debug "Fetching GitHub user information"
local response
response=$(curl -s --max-time "$timeout" \
-H "Authorization: Bearer $token" \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"https://api.github.com/user" 2>/dev/null)
if [[ $? -eq 0 ]] && [[ -n "$response" ]]; then
# Extract key information using simple grep/sed (avoid jq dependency)
local login
local name
local email
login=$(echo "$response" | grep -o '"login"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"login"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
name=$(echo "$response" | grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
email=$(echo "$response" | grep -o '"email"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"email"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/')
echo "login:$login"
echo "name:$name"
echo "email:$email"
return 0
else
config_error "Failed to fetch GitHub user information"
return 1
fi
}
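The jq-free field extraction above works only for flat, unescaped string fields; a sketch against a canned response (the JSON below is fabricated, not a real API payload):

```shell
#!/usr/bin/env bash
# Sketch of get_github_user_info's grep/sed field extraction, run against
# a canned JSON string instead of a live API call.
set -e

response='{"login":"octocat","id":1,"name":"The Octocat","email":"octo@example.com"}'

extract_field() {
    # Only handles simple "key":"value" pairs without escaped quotes
    echo "$response" | \
        grep -o "\"$1\"[[:space:]]*:[[:space:]]*\"[^\"]*\"" | \
        sed "s/.*\"$1\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/"
}

login=$(extract_field login)
echo "$login"   # octocat
```

This fragility is the trade-off the comment above accepts to avoid a `jq` dependency; nested objects or escaped quotes would break it.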
# Store GitHub PAT token securely
store_github_token() {
local token="$1"
local token_file="${2:-$GITHUB_TOKEN_FILE}"
config_debug "Storing GitHub token to: $token_file"
if [[ -z "$token" ]]; then
config_error "No token provided for storage"
return 1
fi
# Validate token format
if ! validate_github_token_format "$token"; then
config_error "Invalid GitHub token format"
return 1
fi
# Test token before storing
if ! test_github_token "$token"; then
config_error "GitHub token validation failed"
return 1
fi
# Backup existing token file
if [[ -f "$token_file" ]]; then
backup_file "$token_file" >/dev/null
fi
# Store token with secure permissions
echo "$token" > "$token_file"
set_secure_permissions "$token_file" 600
# Also store in environment configuration
write_config_value "GITHUB_TOKEN" "$token"
config_success "GitHub token stored successfully"
return 0
}
# Load GitHub PAT token from various sources
load_github_token() {
config_debug "Loading GitHub token from available sources"
local token=""
# Priority order:
# 1. Environment variable GITHUB_TOKEN
# 2. Token file
# 3. Configuration file
# 4. GitHub auth script
# Check environment variable
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
config_debug "Using GitHub token from environment variable"
token="$GITHUB_TOKEN"
# Check token file
elif [[ -f "$GITHUB_TOKEN_FILE" ]]; then
config_debug "Loading GitHub token from file: $GITHUB_TOKEN_FILE"
token=$(cat "$GITHUB_TOKEN_FILE" 2>/dev/null | tr -d '\n\r')
# Check configuration file
elif [[ -f "$ENV_CONFIG_FILE" ]]; then
config_debug "Loading GitHub token from config file"
token=$(read_config_value "GITHUB_TOKEN")
# Try GitHub auth script
elif [[ -x "$GITHUB_AUTH_SCRIPT" ]]; then
config_debug "Attempting to get token from GitHub auth script"
token=$(python3 "$GITHUB_AUTH_SCRIPT" token 2>/dev/null || echo "")
fi
if [[ -n "$token" ]]; then
# Validate token
if validate_github_token_format "$token" && test_github_token "$token"; then
export GITHUB_TOKEN="$token"
config_debug "GitHub token loaded and validated successfully"
return 0
else
config_warning "Loaded GitHub token is invalid"
return 1
fi
else
config_debug "No GitHub token found"
return 1
fi
}
# Remove GitHub PAT token
remove_github_token() {
local token_file="${1:-$GITHUB_TOKEN_FILE}"
config_info "Removing GitHub token"
# Remove token file
if [[ -f "$token_file" ]]; then
backup_file "$token_file" >/dev/null
rm -f "$token_file"
config_debug "Token file removed: $token_file"
fi
# Remove from configuration
remove_config_value "GITHUB_TOKEN"
# Clear environment variable
unset GITHUB_TOKEN
config_success "GitHub token removed successfully"
return 0
}
# [AWS-SECRET-REMOVED]====================================
# MIGRATION AND UPGRADE UTILITIES
# [AWS-SECRET-REMOVED]====================================
# Migrate configuration from old format to new format
migrate_configuration() {
config_info "Checking for configuration migration needs"
local migration_needed=false
# Check for old configuration files
local old_configs=(
"$CONFIG_DIR/***REMOVED***.automation"
"$CONFIG_DIR/automation.conf"
"$CONFIG_DIR/config***REMOVED***"
)
for old_config in "${old_configs[@]}"; do
if [[ -f "$old_config" ]]; then
config_info "Found old configuration file: $old_config"
migration_needed=true
# Backup old config
backup_file "$old_config" >/dev/null
# Migrate values if possible
if [[ -r "$old_config" ]]; then
config_info "Migrating values from $old_config"
# Simple migration - source old config and write values to new config
while IFS='=' read -r key value; do
# Skip comments and empty lines
[[ "$key" =~ ^[[:space:]]*# ]] && continue
[[ -z "$key" ]] && continue
# Clean up key and value
key=$(echo "$key" | sed 's/^[[:space:]]*//' | sed 's/[[:space:]]*$//')
value=$(echo "$value" | sed 's/^[[:space:]]*//' | sed 's/[[:space:]]*$//' | sed 's/^["'\'']\(.*\)["'\'']$/\1/')
if [[ -n "$key" ]] && [[ -n "$value" ]]; then
write_config_value "$key" "$value"
config_debug "Migrated: $key=$value"
fi
done < "$old_config"
fi
fi
done
if [[ "$migration_needed" == "true" ]]; then
config_success "Configuration migration completed"
else
config_debug "No migration needed"
fi
return 0
}
# [AWS-SECRET-REMOVED]====================================
# SYSTEM INTEGRATION
# [AWS-SECRET-REMOVED]====================================
# Check if systemd service is available and configured
check_systemd_service() {
config_debug "Checking systemd service configuration"
if ! command_exists systemctl; then
config_warning "systemd not available on this system"
return 1
fi
if [[ ! -f "$SERVICE_FILE" ]]; then
config_warning "Service file not found: $SERVICE_FILE"
return 1
fi
# Check if service is installed
if systemctl list-unit-files "$SERVICE_NAME.service" >/dev/null 2>&1; then
config_debug "Service is installed: $SERVICE_NAME"
# Check service status
local status
status=$(systemctl is-active "$SERVICE_NAME" 2>/dev/null || echo "inactive")
config_debug "Service status: $status"
return 0
else
config_debug "Service is not installed: $SERVICE_NAME"
return 1
fi
}
# Get systemd service status
get_service_status() {
if ! command_exists systemctl; then
echo "systemd_unavailable"
return 1
fi
local status
status=$(systemctl is-active "$SERVICE_NAME" 2>/dev/null || echo "inactive")
echo "$status"
case "$status" in
active)
return 0
;;
inactive|failed)
return 1
;;
*)
return 2
;;
esac
}
# [AWS-SECRET-REMOVED]====================================
# MAIN CONFIGURATION INTERFACE
# [AWS-SECRET-REMOVED]====================================
# Show current configuration status
show_config_status() {
config_info "ThrillWiki Automation Configuration Status"
echo "[AWS-SECRET-REMOVED]======"
echo ""
# Project information
echo "📁 Project Directory: $CONFIG_DIR"
echo "🔧 Configuration Version: $AUTOMATION_CONFIG_VERSION"
echo ""
# Configuration files
echo "📄 Configuration Files:"
if [[ -f "$ENV_CONFIG_FILE" ]]; then
echo " ✅ Environment config: $ENV_CONFIG_FILE"
local perms
perms=$(stat -c "%a" "$ENV_CONFIG_FILE" 2>/dev/null || stat -f "%A" "$ENV_CONFIG_FILE" 2>/dev/null)
echo " Permissions: $perms"
else
echo " ❌ Environment config: Not found"
fi
if [[ -f "$ENV_EXAMPLE_FILE" ]]; then
echo " ✅ Example config: $ENV_EXAMPLE_FILE"
else
echo " ❌ Example config: Not found"
fi
echo ""
# GitHub authentication
echo "🔐 GitHub Authentication:"
if load_github_token >/dev/null 2>&1; then
echo " ✅ GitHub token: Available and valid"
# Get user info
local user_info
user_info=$(get_github_user_info "$GITHUB_TOKEN" 2>/dev/null)
if [[ -n "$user_info" ]]; then
local login
login=$(echo "$user_info" | grep "^login:" | cut -d: -f2)
if [[ -n "$login" ]]; then
echo " Authenticated as: $login"
fi
fi
else
echo " ❌ GitHub token: Not available or invalid"
fi
if [[ -f "$GITHUB_TOKEN_FILE" ]]; then
echo " ✅ Token file: $GITHUB_TOKEN_FILE"
else
echo " ❌ Token file: Not found"
fi
echo ""
# Systemd service
echo "⚙️ Systemd Service:"
if check_systemd_service; then
echo " ✅ Service file: Available"
local status
status=$(get_service_status)
echo " Status: $status"
else
echo " ❌ Service: Not configured or available"
fi
echo ""
# Backups
echo "💾 Backups:"
if [[ -d "$CONFIG_BACKUP_DIR" ]]; then
local backup_count
backup_count=$(find "$CONFIG_BACKUP_DIR" -name "*.backup" 2>/dev/null | wc -l)
echo " 📦 Backup directory: $CONFIG_BACKUP_DIR"
echo " 📊 Backup files: $backup_count"
else
echo " ❌ No backup directory found"
fi
}
# Initialize configuration system
init_configuration() {
config_info "Initializing ThrillWiki automation configuration"
# Create necessary directories
ensure_directory "$CONFIG_BACKUP_DIR"
ensure_directory "$(dirname "$ENV_CONFIG_FILE")"
# Run migration if needed
migrate_configuration
# Create configuration file from example if it doesn't exist
if [[ ! -f "$ENV_CONFIG_FILE" ]] && [[ -f "$ENV_EXAMPLE_FILE" ]]; then
config_info "Creating configuration file from template"
cp "$ENV_EXAMPLE_FILE" "$ENV_CONFIG_FILE"
set_secure_permissions "$ENV_CONFIG_FILE" 600
config_success "Configuration file created: $ENV_CONFIG_FILE"
fi
# Validate configuration
validate_config_file
config_success "Configuration system initialized"
return 0
}
# [AWS-SECRET-REMOVED]====================================
# COMMAND LINE INTERFACE
# [AWS-SECRET-REMOVED]====================================
# Show help information
show_config_help() {
echo "ThrillWiki Automation Configuration Library v$AUTOMATION_CONFIG_VERSION"
echo "Usage: source automation-config.sh"
echo ""
echo "Available Functions:"
echo " Configuration Management:"
echo " read_config_value <key> [file] [default] - Read configuration value"
echo " write_config_value <key> <value> [file] - Write configuration value"
echo " remove_config_value <key> [file] - Remove configuration value"
echo " validate_config_file [file] - Validate configuration file"
echo ""
echo " GitHub Token Management:"
echo " load_github_token - Load GitHub token from sources"
echo " store_github_token <token> [file] - Store GitHub token securely"
echo " test_github_token <token> - Test GitHub token validity"
echo " remove_github_token [file] - Remove GitHub token"
echo ""
echo " System Status:"
echo " show_config_status - Show configuration status"
echo " check_systemd_service - Check systemd service status"
echo " get_service_status - Get service active status"
echo ""
echo " Utilities:"
echo " init_configuration - Initialize configuration system"
echo " migrate_configuration - Migrate old configuration"
echo " backup_file <file> [backup_dir] - Backup file with timestamp"
echo ""
echo "Configuration Files:"
echo " $ENV_CONFIG_FILE"
echo " $GITHUB_TOKEN_FILE"
echo ""
}
# If script is run directly (not sourced), show help
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
show_config_help
exit 0
fi
# Export key functions for use by other scripts
export -f read_config_value write_config_value remove_config_value validate_config_file
export -f load_github_token store_github_token test_github_token remove_github_token
export -f show_config_status check_systemd_service get_service_status
export -f init_configuration migrate_configuration backup_file
export -f config_info config_success config_warning config_error config_debug
config_debug "Automation configuration library loaded successfully"

File diff suppressed because it is too large


@@ -1,560 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Deployment Automation Service Script
# Comprehensive automated deployment management with preset integration
#
# Features:
# - Cross-shell compatible (bash/zsh)
# - Deployment preset integration
# - Health monitoring and recovery
# - Smart deployment coordination
# - Systemd service integration
# - GitHub authentication management
# - Server lifecycle management
#
set -e
# [AWS-SECRET-REMOVED]====================================
# SCRIPT CONFIGURATION
# [AWS-SECRET-REMOVED]====================================
# Cross-shell compatible script directory detection
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
SCRIPT_NAME="$(basename "${(%):-%x}")"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT_NAME="$(basename "$0")"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Default configuration (can be overridden by environment)
DEPLOYMENT_PRESET="${DEPLOYMENT_PRESET:-dev}"
PULL_INTERVAL="${PULL_INTERVAL:-300}"
HEALTH_CHECK_INTERVAL="${HEALTH_CHECK_INTERVAL:-60}"
DEBUG_MODE="${DEBUG_MODE:-false}"
LOG_LEVEL="${LOG_LEVEL:-INFO}"
MAX_RESTART_ATTEMPTS="${MAX_RESTART_ATTEMPTS:-3}"
RESTART_COOLDOWN="${RESTART_COOLDOWN:-300}"
# Logging configuration
LOG_DIR="${LOG_DIR:-$PROJECT_DIR/logs}"
LOG_FILE="${LOG_FILE:-$LOG_DIR/deployment-automation.log}"
LOCK_FILE="${LOCK_FILE:-/tmp/thrillwiki-deployment.lock}"
# [AWS-SECRET-REMOVED]====================================
# COLOR DEFINITIONS
# [AWS-SECRET-REMOVED]====================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m' # No Color
# [AWS-SECRET-REMOVED]====================================
# LOGGING FUNCTIONS
# [AWS-SECRET-REMOVED]====================================
deploy_log() {
local level="$1"
local color="$2"
local message="$3"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
# Ensure log directory exists
mkdir -p "$(dirname "$LOG_FILE")"
# Log to file (without colors)
echo "[$timestamp] [$level] [DEPLOY-AUTO] $message" >> "$LOG_FILE"
# Log to console (with colors) if not running as systemd service
if [ -t 1 ] && [ "${SYSTEMD_EXEC_PID:-}" = "" ]; then
echo -e "${color}[$timestamp] [DEPLOY-AUTO-$level]${NC} $message"
fi
# Log to systemd journal if running as service
if [ "${SYSTEMD_EXEC_PID:-}" != "" ]; then
echo "$message"
fi
}
deploy_info() {
deploy_log "INFO" "$BLUE" "$1"
}
deploy_success() {
deploy_log "SUCCESS" "$GREEN" "$1"
}
deploy_warning() {
deploy_log "WARNING" "$YELLOW" "⚠️ $1"
}
deploy_error() {
deploy_log "ERROR" "$RED" "$1"
}
deploy_debug() {
if [ "${DEBUG_MODE:-false}" = "true" ] || [ "${LOG_LEVEL:-INFO}" = "DEBUG" ]; then
deploy_log "DEBUG" "$PURPLE" "🔍 $1"
fi
}
deploy_progress() {
deploy_log "PROGRESS" "$CYAN" "🚀 $1"
}
# [AWS-SECRET-REMOVED]====================================
# UTILITY FUNCTIONS
# [AWS-SECRET-REMOVED]====================================
# Cross-shell compatible command existence check
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Lock file management
acquire_lock() {
if [ -f "$LOCK_FILE" ]; then
local lock_pid
lock_pid=$(cat "$LOCK_FILE" 2>/dev/null || echo "")
if [ -n "$lock_pid" ] && kill -0 "$lock_pid" 2>/dev/null; then
deploy_warning "Another deployment automation instance is already running (PID: $lock_pid)"
return 1
else
deploy_info "Removing stale lock file"
rm -f "$LOCK_FILE"
fi
fi
echo $$ > "$LOCK_FILE"
deploy_debug "Lock acquired (PID: $$)"
return 0
}
release_lock() {
if [ -f "$LOCK_FILE" ]; then
rm -f "$LOCK_FILE"
deploy_debug "Lock released"
fi
}
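The acquire/release pair above implements the classic PID lock file: refuse if the recorded PID is alive, reclaim if it is stale. A self-contained sketch pointed at a throwaway path instead of the real lock file:

```shell
#!/usr/bin/env bash
# Self-contained sketch of the PID lock-file pattern from acquire_lock,
# using a throwaway lock path.

lock_file=$(mktemp -u)   # unused path; the lock is "held" once the file exists

try_lock() {
    if [ -f "$lock_file" ]; then
        local pid
        pid=$(cat "$lock_file" 2>/dev/null || echo "")
        # kill -0 probes whether a process exists without signalling it
        if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
            return 1               # live holder: refuse the lock
        fi
        rm -f "$lock_file"         # stale entry: reclaim it
    fi
    echo $$ > "$lock_file"
}

first=no;  try_lock && first=yes    # no lock yet: acquired
second=no; try_lock && second=yes   # our own live PID holds it: refused
echo "stale-pid" > "$lock_file"     # simulate a crashed holder
third=no;  try_lock && third=yes    # stale entry: reclaimed
rm -f "$lock_file"
```

Note this check-then-write sequence is not atomic; for the single long-running service above that race window is accepted.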
# Trap for cleanup
cleanup_and_exit() {
deploy_info "Deployment automation service stopping"
release_lock
exit 0
}
# [AWS-SECRET-REMOVED]====================================
# PRESET CONFIGURATION FUNCTIONS
# [AWS-SECRET-REMOVED]====================================
# Apply deployment preset configuration
apply_preset_configuration() {
local preset="${DEPLOYMENT_PRESET:-dev}"
deploy_info "Applying deployment preset: $preset"
case "$preset" in
"dev")
PULL_INTERVAL="${PULL_INTERVAL:-60}"
HEALTH_CHECK_INTERVAL="${HEALTH_CHECK_INTERVAL:-30}"
DEBUG_MODE="${DEBUG_MODE:-true}"
LOG_LEVEL="${LOG_LEVEL:-DEBUG}"
AUTO_MIGRATE="${AUTO_MIGRATE:-true}"
AUTO_UPDATE_DEPENDENCIES="${AUTO_UPDATE_DEPENDENCIES:-true}"
;;
"prod")
PULL_INTERVAL="${PULL_INTERVAL:-300}"
HEALTH_CHECK_INTERVAL="${HEALTH_CHECK_INTERVAL:-60}"
DEBUG_MODE="${DEBUG_MODE:-false}"
LOG_LEVEL="${LOG_LEVEL:-WARNING}"
AUTO_MIGRATE="${AUTO_MIGRATE:-true}"
AUTO_UPDATE_DEPENDENCIES="${AUTO_UPDATE_DEPENDENCIES:-false}"
;;
"demo")
PULL_INTERVAL="${PULL_INTERVAL:-120}"
HEALTH_CHECK_INTERVAL="${HEALTH_CHECK_INTERVAL:-45}"
DEBUG_MODE="${DEBUG_MODE:-false}"
LOG_LEVEL="${LOG_LEVEL:-INFO}"
AUTO_MIGRATE="${AUTO_MIGRATE:-true}"
AUTO_UPDATE_DEPENDENCIES="${AUTO_UPDATE_DEPENDENCIES:-true}"
;;
"testing")
PULL_INTERVAL="${PULL_INTERVAL:-180}"
HEALTH_CHECK_INTERVAL="${HEALTH_CHECK_INTERVAL:-30}"
DEBUG_MODE="${DEBUG_MODE:-true}"
LOG_LEVEL="${LOG_LEVEL:-DEBUG}"
AUTO_MIGRATE="${AUTO_MIGRATE:-true}"
AUTO_UPDATE_DEPENDENCIES="${AUTO_UPDATE_DEPENDENCIES:-true}"
;;
*)
deploy_warning "Unknown preset '$preset', using development defaults"
PULL_INTERVAL="${PULL_INTERVAL:-60}"
HEALTH_CHECK_INTERVAL="${HEALTH_CHECK_INTERVAL:-30}"
DEBUG_MODE="${DEBUG_MODE:-true}"
LOG_LEVEL="${LOG_LEVEL:-DEBUG}"
;;
esac
deploy_success "Preset configuration applied successfully"
deploy_debug "Configuration: interval=${PULL_INTERVAL}s, health=${HEALTH_CHECK_INTERVAL}s, debug=$DEBUG_MODE"
}
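Every preset above assigns through `"${VAR:-default}"`, so a value already set in the service environment always wins over the preset default. A minimal sketch of that precedence (preset names and values are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the "${VAR:-default}" precedence used in apply_preset_configuration:
# an existing environment value wins; otherwise the preset default applies.

apply_preset() {
    case "$1" in
        dev)  PULL_INTERVAL="${PULL_INTERVAL:-60}" ;;
        prod) PULL_INTERVAL="${PULL_INTERVAL:-300}" ;;
    esac
}

unset PULL_INTERVAL
apply_preset prod
echo "$PULL_INTERVAL"   # 300 (preset default)

PULL_INTERVAL=42        # explicit override, e.g. from the systemd unit
apply_preset prod
echo "$PULL_INTERVAL"   # 42 (override preserved)
```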
# [AWS-SECRET-REMOVED]====================================
# HEALTH CHECK FUNCTIONS
# [AWS-SECRET-REMOVED]====================================
# Check if smart deployment service is healthy
check_smart_deployment_health() {
deploy_debug "Checking smart deployment service health"
# Check if smart-deploy script exists and is executable
local smart_deploy_script="$PROJECT_DIR/scripts/smart-deploy.sh"
if [ ! -x "$smart_deploy_script" ]; then
deploy_warning "Smart deployment script not found or not executable: $smart_deploy_script"
return 1
fi
# Check if systemd timer is active
if command_exists systemctl; then
if systemctl is-active --quiet thrillwiki-smart-deploy.timer 2>/dev/null; then
deploy_debug "Smart deployment timer is active"
else
deploy_warning "Smart deployment timer is not active"
return 1
fi
fi
return 0
}
# Check if development server is healthy
check_development_server_health() {
deploy_debug "Checking development server health"
local health_url="${HEALTH_CHECK_URL:-http://localhost:8000/}"
local timeout="${HEALTH_CHECK_TIMEOUT:-30}"
if command_exists curl; then
if curl -s --connect-timeout "$timeout" "$health_url" > /dev/null 2>&1; then
deploy_debug "Development server health check passed"
return 0
else
deploy_warning "Development server health check failed"
return 1
fi
else
deploy_warning "curl not available for health checks"
return 1
fi
}
# Check GitHub authentication
check_github_authentication() {
deploy_debug "Checking GitHub authentication"
local github_token=""
# Try to get token from file
if [ -f "${GITHUB_TOKEN_FILE:-$PROJECT_DIR/.github-pat}" ]; then
github_token=$(cat "${GITHUB_TOKEN_FILE:-$PROJECT_DIR/.github-pat}" 2>/dev/null | tr -d '\n\r')
fi
# Try environment variable
if [ -z "$github_token" ] && [ -n "${GITHUB_TOKEN:-}" ]; then
github_token="$GITHUB_TOKEN"
fi
if [ -z "$github_token" ]; then
deploy_warning "No GitHub token found"
return 1
fi
# Test GitHub API access
if command_exists curl; then
local response
response=$(curl -s -H "Authorization: token $github_token" https://api.github.com/user 2>/dev/null)
if echo "$response" | grep -q '"login"'; then
deploy_debug "GitHub authentication verified"
return 0
else
deploy_warning "GitHub authentication failed"
return 1
fi
else
deploy_warning "Cannot verify GitHub authentication - curl not available"
return 1
fi
}
# Comprehensive system health check
perform_health_check() {
deploy_debug "Performing comprehensive health check"
local health_issues=0
# Check smart deployment
if ! check_smart_deployment_health; then
((health_issues++))
fi
# Check development server
if ! check_development_server_health; then
((health_issues++))
fi
# Check GitHub authentication
if ! check_github_authentication; then
((health_issues++))
fi
if [ $health_issues -eq 0 ]; then
deploy_success "All health checks passed"
return 0
else
deploy_warning "Health check found $health_issues issue(s)"
return 1
fi
}
# [AWS-SECRET-REMOVED]====================================
# RECOVERY FUNCTIONS
# [AWS-SECRET-REMOVED]====================================
# Restart smart deployment timer
restart_smart_deployment() {
deploy_info "Restarting smart deployment timer"
if command_exists systemctl; then
if systemctl restart thrillwiki-smart-deploy.timer 2>/dev/null; then
deploy_success "Smart deployment timer restarted"
return 0
else
deploy_error "Failed to restart smart deployment timer"
return 1
fi
else
deploy_warning "systemctl not available - cannot restart smart deployment"
return 1
fi
}
# Restart development server through smart deployment
restart_development_server() {
deploy_info "Restarting development server"
local smart_deploy_script="$PROJECT_DIR/scripts/smart-deploy.sh"
if [ -x "$smart_deploy_script" ]; then
# Note: testing the pipeline directly would check the while loop's exit
# status, not the deploy script's, so consult PIPESTATUS instead
"$smart_deploy_script" restart-server 2>&1 | while IFS= read -r line; do
deploy_debug "Smart deploy: $line"
done
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
deploy_success "Development server restart initiated"
return 0
else
deploy_error "Failed to restart development server"
return 1
fi
else
deploy_warning "Smart deployment script not available"
return 1
fi
}
# Attempt recovery from health check failures
attempt_recovery() {
local attempt="$1"
local max_attempts="$2"
deploy_info "Attempting recovery (attempt $attempt/$max_attempts)"
# Try restarting smart deployment
if restart_smart_deployment; then
sleep 30 # Wait for service to stabilize
# Try restarting development server
if restart_development_server; then
sleep 60 # Wait for server to start
# Recheck health
if perform_health_check; then
deploy_success "Recovery successful"
return 0
fi
fi
fi
deploy_warning "Recovery attempt $attempt failed"
return 1
}
# [AWS-SECRET-REMOVED]====================================
# MAIN AUTOMATION LOOP
# [AWS-SECRET-REMOVED]====================================
# Main deployment automation service
run_deployment_automation() {
deploy_info "Starting deployment automation service"
deploy_info "Preset: $DEPLOYMENT_PRESET, Pull interval: ${PULL_INTERVAL}s, Health check: ${HEALTH_CHECK_INTERVAL}s"
local consecutive_failures=0
local last_recovery_attempt=0
while true; do
# Perform health check
if perform_health_check; then
consecutive_failures=0
deploy_debug "System healthy - continuing monitoring"
else
consecutive_failures=$((consecutive_failures + 1))  # avoid ((var++)) exiting non-zero under set -e when var is 0
deploy_warning "Health check failed (consecutive failures: $consecutive_failures)"
# Attempt recovery if we have consecutive failures
if [ $consecutive_failures -ge 3 ]; then
local current_time
current_time=$(date +%s)
# Check if enough time has passed since last recovery attempt
if [ $((current_time - last_recovery_attempt)) -ge $RESTART_COOLDOWN ]; then
deploy_info "Too many consecutive failures, attempting recovery"
local recovery_attempt=1
while [ $recovery_attempt -le $MAX_RESTART_ATTEMPTS ]; do
if attempt_recovery "$recovery_attempt" "$MAX_RESTART_ATTEMPTS"; then
consecutive_failures=0
last_recovery_attempt=$current_time
break
fi
recovery_attempt=$((recovery_attempt + 1))
if [ $recovery_attempt -le $MAX_RESTART_ATTEMPTS ]; then
sleep 60 # Wait between recovery attempts
fi
done
if [ $recovery_attempt -gt $MAX_RESTART_ATTEMPTS ]; then
deploy_error "All recovery attempts failed - manual intervention may be required"
# Reset failure count to prevent continuous recovery attempts
consecutive_failures=0
last_recovery_attempt=$current_time
fi
else
deploy_debug "Recovery cooldown in effect, waiting before next attempt"
fi
fi
fi
# Wait for next health check cycle
sleep "$HEALTH_CHECK_INTERVAL"
done
}
# [AWS-SECRET-REMOVED]====================================
# INITIALIZATION AND STARTUP
# [AWS-SECRET-REMOVED]====================================
# Initialize deployment automation
initialize_automation() {
deploy_info "Initializing ThrillWiki deployment automation"
# Ensure we're in the project directory
cd "$PROJECT_DIR"
# Apply preset configuration
apply_preset_configuration
# Set up signal handlers
trap cleanup_and_exit INT TERM
# Acquire lock
if ! acquire_lock; then
deploy_error "Failed to acquire deployment lock"
exit 1
fi
# Perform initial health check
deploy_info "Performing initial system health check"
if ! perform_health_check; then
deploy_warning "Initial health check detected issues - will monitor and attempt recovery"
fi
deploy_success "Deployment automation initialized successfully"
}
# [AWS-SECRET-REMOVED]====================================
# COMMAND HANDLING
# [AWS-SECRET-REMOVED]====================================
# Handle script commands
case "${1:-start}" in
start)
initialize_automation
run_deployment_automation
;;
health-check)
if perform_health_check; then
echo "System is healthy"
exit 0
else
echo "System health check failed"
exit 1
fi
;;
restart-smart-deploy)
restart_smart_deployment
;;
restart-server)
restart_development_server
;;
status)
if [ -f "$LOCK_FILE" ]; then
local lock_pid
lock_pid=$(cat "$LOCK_FILE" 2>/dev/null || echo "")
if [ -n "$lock_pid" ] && kill -0 "$lock_pid" 2>/dev/null; then
echo "Deployment automation is running (PID: $lock_pid)"
exit 0
else
echo "Deployment automation is not running (stale lock file)"
exit 1
fi
else
echo "Deployment automation is not running"
exit 1
fi
;;
stop)
if [ -f "$LOCK_FILE" ]; then
local lock_pid
lock_pid=$(cat "$LOCK_FILE" 2>/dev/null || echo "")
if [ -n "$lock_pid" ] && kill -0 "$lock_pid" 2>/dev/null; then
echo "Stopping deployment automation (PID: $lock_pid)"
kill -TERM "$lock_pid"
sleep 5
if kill -0 "$lock_pid" 2>/dev/null; then
kill -KILL "$lock_pid"
fi
rm -f "$LOCK_FILE"
echo "Deployment automation stopped"
else
echo "Deployment automation is not running"
rm -f "$LOCK_FILE"
fi
else
echo "Deployment automation is not running"
fi
;;
*)
echo "Usage: $0 {start|stop|status|health-check|restart-smart-deploy|restart-server}"
exit 1
;;
esac
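The command handling above uses the standard bash dispatch idiom: `case "${1:-start}"` selects a subcommand and falls back to `start` when no argument is given. A minimal self-contained sketch of the same pattern (the `dispatch` function name is illustrative, not part of the script):

```shell
#!/usr/bin/env bash
# Minimal sketch of the case-based subcommand dispatch used above.
# "dispatch" is an illustrative name, not part of the original script.
dispatch() {
    case "${1:-start}" in
        start)  echo "starting" ;;
        stop)   echo "stopping" ;;
        status) echo "status: idle" ;;
        *)      echo "usage: dispatch {start|stop|status}" >&2; return 1 ;;
    esac
}

dispatch          # no argument: the ${1:-start} default applies
dispatch status
```

Unknown subcommands fall through to the `*)` branch, print usage to stderr, and return nonzero, mirroring the `exit 1` in the script's final case.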

File diff suppressed because it is too large


@@ -1,113 +0,0 @@
#!/usr/bin/env bash
#
# Systemd Service Architecture Diagnosis Script
# Validates assumptions about timeout/restart cycles
#
set -e
echo "=== ThrillWiki Systemd Service Architecture Diagnosis ==="
echo "Timestamp: $(date)"
echo
# Check current service status
echo "1. CHECKING SERVICE STATUS"
echo "=========================="
echo "thrillwiki-deployment.service status:"
systemctl status thrillwiki-deployment.service --no-pager -l || echo "Service not active"
echo
echo "thrillwiki-smart-deploy.service status:"
systemctl status thrillwiki-smart-deploy.service --no-pager -l || echo "Service not active"
echo
echo "thrillwiki-smart-deploy.timer status:"
systemctl status thrillwiki-smart-deploy.timer --no-pager -l || echo "Timer not active"
echo
# Check recent journal logs for timeout/restart patterns
echo "2. CHECKING RECENT SYSTEMD LOGS (LAST 50 LINES)"
echo "[AWS-SECRET-REMOVED]======="
echo "Looking for timeout and restart patterns:"
journalctl -u thrillwiki-deployment.service --no-pager -n 50 | grep -E "(timeout|restart|failed|stopped)" || echo "No timeout/restart patterns found in recent logs"
echo
# Check if deploy-automation.sh is designed as infinite loop
echo "3. ANALYZING SCRIPT DESIGN"
echo "=========================="
echo "Checking if deploy-automation.sh contains infinite loops:"
if grep -n "while true" [AWS-SECRET-REMOVED]eploy-automation.sh 2>/dev/null; then
echo "✗ FOUND: Script contains 'while true' infinite loop - this conflicts with systemd service expectations"
else
echo "✓ No infinite loops found"
fi
echo
# Check service configuration issues
echo "4. ANALYZING SERVICE CONFIGURATION"
echo "=================================="
echo "Checking thrillwiki-deployment.service configuration:"
echo "- Type: $(grep '^Type=' [AWS-SECRET-REMOVED]emd/thrillwiki-deployment.service || echo 'Not specified')"
echo "- Restart: $(grep '^Restart=' [AWS-SECRET-REMOVED]emd/thrillwiki-deployment.service || echo 'Not specified')"
echo "- RestartSec: $(grep '^RestartSec=' [AWS-SECRET-REMOVED]emd/thrillwiki-deployment.service || echo 'Not specified')"
echo "- RuntimeMaxSec: $(grep '^RuntimeMaxSec=' [AWS-SECRET-REMOVED]emd/thrillwiki-deployment.service || echo 'Not specified')"
echo "- WatchdogSec: $(grep '^WatchdogSec=' [AWS-SECRET-REMOVED]emd/thrillwiki-deployment.service || echo 'Not specified')"
echo
# Check smart-deploy configuration (correct approach)
echo "Checking thrillwiki-smart-deploy.service configuration:"
echo "- Type: $(grep '^Type=' [AWS-SECRET-REMOVED]emd/thrillwiki-smart-deploy.service || echo 'Not specified')"
echo "- ExecStart: $(grep '^ExecStart=' [AWS-SECRET-REMOVED]emd/thrillwiki-smart-deploy.service || echo 'Not specified')"
echo
# Check timer configuration
echo "Checking thrillwiki-smart-deploy.timer configuration:"
echo "- OnBootSec: $(grep '^OnBootSec=' [AWS-SECRET-REMOVED]emd/thrillwiki-smart-deploy.timer || echo 'Not specified')"
echo "- OnUnitActiveSec: $(grep '^OnUnitActiveSec=' [AWS-SECRET-REMOVED]emd/thrillwiki-smart-deploy.timer || echo 'Not specified')"
echo
# Check if smart-deploy.sh exists and is executable
echo "5. CHECKING TIMER TARGET SCRIPT"
echo "==============================="
if [ -f "[AWS-SECRET-REMOVED]t-deploy.sh" ]; then
if [ -x "[AWS-SECRET-REMOVED]t-deploy.sh" ]; then
echo "✓ smart-deploy.sh exists and is executable"
else
echo "✗ smart-deploy.sh exists but is not executable"
fi
else
echo "✗ smart-deploy.sh does not exist"
fi
echo
# Resource analysis
echo "6. CHECKING SYSTEM RESOURCES"
echo "============================"
echo "Current process using deployment automation:"
ps aux | grep -E "(deploy-automation|smart-deploy)" | grep -v grep || echo "No deployment processes running"
echo
echo "Lock file status:"
if [ -f "/tmp/thrillwiki-deployment.lock" ]; then
echo "✗ Lock file exists: /tmp/thrillwiki-deployment.lock"
echo "Lock PID: $(cat /tmp/thrillwiki-deployment.lock 2>/dev/null || echo 'unreadable')"
else
echo "✓ No lock file present"
fi
echo
# Architectural recommendation
echo "7. ARCHITECTURE ANALYSIS"
echo "========================"
echo "CURRENT PROBLEMATIC ARCHITECTURE:"
echo "thrillwiki-deployment.service (Type=simple, Restart=always)"
echo " └── deploy-automation.sh (infinite loop script)"
echo " └── RESULT: Service times out and restarts continuously"
echo
echo "RECOMMENDED CORRECT ARCHITECTURE:"
echo "thrillwiki-smart-deploy.timer (every 5 minutes)"
echo " └── thrillwiki-smart-deploy.service (Type=oneshot)"
echo " └── smart-deploy.sh (runs once, exits cleanly)"
echo
echo "DIAGNOSIS COMPLETE"
echo "=================="
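The diagnosis script above repeatedly uses `grep '^Directive=' unit-file || echo 'Not specified'` to report a systemd directive with a fallback when it is absent. A self-contained sketch of that idiom against a throwaway unit file (the path and directive values here are stand-ins):

```shell
#!/usr/bin/env bash
# Sketch of the directive-inspection idiom from the diagnosis script:
# grep an exact "Key=" prefix out of a unit file, falling back to a
# default when the directive is not set.
unit_file="$(mktemp)"
cat > "$unit_file" << 'EOF'
[Service]
Type=oneshot
Restart=no
EOF

get_directive() {
    grep "^$1=" "$2" || echo "Not specified"
}

type_line="$(get_directive Type "$unit_file")"
runtime_line="$(get_directive RuntimeMaxSec "$unit_file")"
echo "- Type: $type_line"
echo "- RuntimeMaxSec: $runtime_line"
rm -f "$unit_file"
```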


@@ -1,264 +0,0 @@
#!/usr/bin/env bash
#
# EMERGENCY FIX: Systemd Service Architecture
# Stops infinite restart cycles and fixes broken service architecture
#
set -e
# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Remote connection configuration
REMOTE_HOST="${1:-192.168.20.65}"
REMOTE_USER="${2:-thrillwiki}"
REMOTE_PORT="${3:-22}"
SSH_KEY="${SSH_KEY:-$HOME/.ssh/thrillwiki_vm}"
SSH_OPTIONS="-i $SSH_KEY -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30"
echo -e "${RED}🚨 EMERGENCY SYSTEMD ARCHITECTURE FIX${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_PORT}"
echo ""
echo -e "${YELLOW}⚠️ This will fix critical issues:${NC}"
echo "• Stop infinite restart cycles (currently at 32+ restarts)"
echo "• Disable problematic continuous deployment service"
echo "• Clean up stale lock files"
echo "• Fix broken timer configuration"
echo "• Deploy correct service architecture"
echo "• Create missing smart-deploy.sh script"
echo ""
# Function to run remote commands with error handling
run_remote() {
local cmd="$1"
local description="$2"
local use_sudo="${3:-false}"
echo -e "${YELLOW}Executing: ${description}${NC}"
if [ "$use_sudo" = "true" ]; then
if ssh $SSH_OPTIONS -p $REMOTE_PORT -t $REMOTE_USER@$REMOTE_HOST "sudo $cmd" 2>/dev/null; then
echo -e "${GREEN}✅ SUCCESS: ${description}${NC}"
return 0
else
echo -e "${RED}❌ FAILED: ${description}${NC}"
return 1
fi
else
if ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "$cmd" 2>/dev/null; then
echo -e "${GREEN}✅ SUCCESS: ${description}${NC}"
return 0
else
echo -e "${RED}❌ FAILED: ${description}${NC}"
return 1
fi
fi
}
# Step 1: Emergency stop of problematic service
echo -e "${RED}🛑 STEP 1: EMERGENCY STOP OF PROBLEMATIC SERVICE${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
run_remote "systemctl stop thrillwiki-deployment.service" "Stop problematic deployment service" true
run_remote "systemctl disable thrillwiki-deployment.service" "Disable problematic deployment service" true
echo ""
echo -e "${GREEN}✅ Infinite restart cycle STOPPED${NC}"
echo ""
# Step 2: Clean up system state
echo -e "${YELLOW}🧹 STEP 2: CLEANUP SYSTEM STATE${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Remove stale lock file
run_remote "rm -f /tmp/thrillwiki-deployment.lock" "Remove stale lock file"
# Kill any remaining deployment processes (non-critical if it fails)
ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "pkill -f 'deploy-automation.sh' || true" 2>/dev/null || echo -e "${YELLOW}⚠️ No deployment processes to kill (this is fine)${NC}"
echo ""
# Step 3: Create missing smart-deploy.sh script
echo -e "${BLUE}📝 STEP 3: CREATE MISSING SMART-DEPLOY.SH SCRIPT${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Create the smart-deploy.sh script on the remote server
cat > /tmp/smart-deploy.sh << 'SMART_DEPLOY_EOF'
#!/usr/bin/env bash
#
# ThrillWiki Smart Deployment Script
# One-shot deployment automation for timer-based execution
#
set -e
# Configuration
PROJECT_DIR="/home/thrillwiki/thrillwiki"
LOG_DIR="$PROJECT_DIR/logs"
LOG_FILE="$LOG_DIR/smart-deploy.log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# Logging function
log_message() {
local level="$1"
local message="$2"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
echo "[$timestamp] [$level] [SMART-DEPLOY] $message" | tee -a "$LOG_FILE"
}
log_message "INFO" "Smart deployment started"
# Change to project directory
cd "$PROJECT_DIR"
# Check for updates
log_message "INFO" "Checking for repository updates"
if git fetch origin main; then
LOCAL_COMMIT=$(git rev-parse HEAD)
REMOTE_COMMIT=$(git rev-parse origin/main)
if [ "$LOCAL_COMMIT" != "$REMOTE_COMMIT" ]; then
log_message "INFO" "Updates found, pulling changes"
git pull origin main
# Check if requirements changed
if git diff --name-only HEAD~1 | grep -E "(pyproject.toml|requirements.*\.txt)" > /dev/null; then
log_message "INFO" "Dependencies changed, updating packages"
if command -v uv > /dev/null; then
uv sync
else
pip install -r requirements.txt
fi
fi
# Check if migrations are needed
if command -v uv > /dev/null; then
MIGRATION_CHECK=$(uv run manage.py showmigrations --plan | grep '\[ \]' || true)
else
MIGRATION_CHECK=$(python manage.py showmigrations --plan | grep '\[ \]' || true)
fi
if [ -n "$MIGRATION_CHECK" ]; then
log_message "INFO" "Running database migrations"
if command -v uv > /dev/null; then
uv run manage.py migrate
else
python manage.py migrate
fi
fi
# Collect static files if needed
log_message "INFO" "Collecting static files"
if command -v uv > /dev/null; then
uv run manage.py collectstatic --noinput
else
python manage.py collectstatic --noinput
fi
log_message "INFO" "Deployment completed successfully"
else
log_message "INFO" "No updates available"
fi
else
log_message "WARNING" "Failed to fetch updates"
fi
log_message "INFO" "Smart deployment finished"
SMART_DEPLOY_EOF
# Upload the smart-deploy.sh script
echo -e "${YELLOW}Uploading smart-deploy.sh script...${NC}"
if scp $SSH_OPTIONS -P $REMOTE_PORT /tmp/smart-deploy.sh "$REMOTE_USER@$REMOTE_HOST:[AWS-SECRET-REMOVED]t-deploy.sh" 2>/dev/null; then
echo -e "${GREEN}✅ smart-deploy.sh uploaded successfully${NC}"
rm -f /tmp/smart-deploy.sh
else
echo -e "${RED}❌ Failed to upload smart-deploy.sh${NC}"
exit 1
fi
# Make it executable
run_remote "chmod +x [AWS-SECRET-REMOVED]t-deploy.sh" "Make smart-deploy.sh executable"
echo ""
# Step 4: Fix timer configuration
echo -e "${BLUE}⏰ STEP 4: FIX TIMER CONFIGURATION${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Stop and disable timer first
run_remote "systemctl stop thrillwiki-smart-deploy.timer" "Stop smart deploy timer" true
run_remote "systemctl disable thrillwiki-smart-deploy.timer" "Disable smart deploy timer" true
# Upload corrected service files
echo -e "${YELLOW}Uploading corrected service files...${NC}"
# Upload thrillwiki-smart-deploy.service
if scp $SSH_OPTIONS -P $REMOTE_PORT "$PROJECT_DIR/scripts/systemd/thrillwiki-smart-deploy.service" "$REMOTE_USER@$REMOTE_HOST:/tmp/thrillwiki-smart-deploy.service" 2>/dev/null; then
run_remote "sudo cp /tmp/thrillwiki-smart-deploy.service /etc/systemd/system/" "Install smart deploy service"
run_remote "rm -f /tmp/thrillwiki-smart-deploy.service" "Clean up temp service file"
else
echo -e "${RED}❌ Failed to upload smart deploy service${NC}"
fi
# Upload thrillwiki-smart-deploy.timer
if scp $SSH_OPTIONS -P $REMOTE_PORT "$PROJECT_DIR/scripts/systemd/thrillwiki-smart-deploy.timer" "$REMOTE_USER@$REMOTE_HOST:/tmp/thrillwiki-smart-deploy.timer" 2>/dev/null; then
run_remote "sudo cp /tmp/thrillwiki-smart-deploy.timer /etc/systemd/system/" "Install smart deploy timer"
run_remote "rm -f /tmp/thrillwiki-smart-deploy.timer" "Clean up temp timer file"
else
echo -e "${RED}❌ Failed to upload smart deploy timer${NC}"
fi
echo ""
# Step 5: Reload systemd and enable proper services
echo -e "${GREEN}🔄 STEP 5: RELOAD SYSTEMD AND ENABLE PROPER SERVICES${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
run_remote "systemctl daemon-reload" "Reload systemd configuration" true
run_remote "systemctl enable thrillwiki-smart-deploy.service" "Enable smart deploy service" true
run_remote "systemctl enable thrillwiki-smart-deploy.timer" "Enable smart deploy timer" true
run_remote "systemctl start thrillwiki-smart-deploy.timer" "Start smart deploy timer" true
echo ""
# Step 6: Verify the fix
echo -e "${GREEN}✅ STEP 6: VERIFY THE FIX${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${YELLOW}Checking service status...${NC}"
ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "systemctl status thrillwiki-deployment.service --no-pager -l" || echo "✅ Problematic service is stopped (expected)"
echo ""
ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "systemctl status thrillwiki-smart-deploy.timer --no-pager -l"
echo ""
ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "systemctl status thrillwiki-smart-deploy.service --no-pager -l"
echo ""
echo -e "${GREEN}🎉 EMERGENCY FIX COMPLETED SUCCESSFULLY!${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo -e "${GREEN}✅ FIXED ISSUES:${NC}"
echo "• Stopped infinite restart cycles"
echo "• Disabled problematic continuous deployment service"
echo "• Cleaned up stale lock files and processes"
echo "• Created missing smart-deploy.sh script"
echo "• Fixed timer configuration"
echo "• Enabled proper timer-based automation"
echo ""
echo -e "${BLUE}📋 MONITORING COMMANDS:${NC}"
echo "• Check timer status: ssh $REMOTE_USER@$REMOTE_HOST 'sudo systemctl status thrillwiki-smart-deploy.timer'"
echo "• View deployment logs: ssh $REMOTE_USER@$REMOTE_HOST 'tail -f /home/thrillwiki/thrillwiki/logs/smart-deploy.log'"
echo "• Test manual deployment: ssh $REMOTE_USER@$REMOTE_HOST '[AWS-SECRET-REMOVED]t-deploy.sh'"
echo ""
echo -e "${GREEN}✅ System is now properly configured with timer-based automation!${NC}"
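Step 3 above generates `smart-deploy.sh` with a quoted heredoc (`<< 'SMART_DEPLOY_EOF'`). Quoting the delimiter suppresses expansion at generation time, so `$PROJECT_DIR` and similar references reach the generated file literally. A sketch of that mechanism (the target path is a temporary stand-in):

```shell
#!/usr/bin/env bash
# Sketch of the quoted-heredoc pattern used to generate smart-deploy.sh:
# the quoted delimiter ('EOF') keeps $PROJECT_DIR literal in the output.
target="$(mktemp)"
cat > "$target" << 'EOF'
#!/usr/bin/env bash
echo "running from $PROJECT_DIR"
EOF
chmod +x "$target"

executable=false
[ -x "$target" ] && executable=true
generated="$(cat "$target")"
rm -f "$target"
```

With an unquoted delimiter (`<< EOF`) the generating shell would expand `$PROJECT_DIR` immediately, usually to an empty string, which is exactly the bug the quoting avoids.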


@@ -1,175 +0,0 @@
#!/usr/bin/env bash
#
# Fix Missing Deploy-Automation Script
# Deploys the missing deploy-automation.sh script to fix systemd service startup failure
#
set -e
# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m'
# Configuration
REMOTE_HOST="${1:-192.168.20.65}"
REMOTE_USER="${2:-thrillwiki}"
REMOTE_PORT="${3:-22}"
SSH_KEY="${4:-$HOME/.ssh/thrillwiki_vm}"
REMOTE_PATH="/home/$REMOTE_USER/thrillwiki"
# Enhanced SSH options to handle authentication issues
SSH_OPTS="-i $SSH_KEY -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30 -o PasswordAuthentication=no -o PreferredAuthentications=publickey -o ServerAliveInterval=60"
echo -e "${BOLD}${CYAN}🚀 Fix Missing Deploy-Automation Script${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_PORT}"
echo "SSH Key: $SSH_KEY"
echo "Remote Path: $REMOTE_PATH"
echo "Local Script: $SCRIPT_DIR/deploy-automation.sh"
echo ""
# Function to run remote commands with proper SSH authentication
run_remote() {
local cmd="$1"
local description="$2"
local use_sudo="${3:-false}"
echo -e "${YELLOW}🔧 ${description}${NC}"
if [ "$use_sudo" = "true" ]; then
ssh $SSH_OPTS -p $REMOTE_PORT -t $REMOTE_USER@$REMOTE_HOST "sudo $cmd" 2>/dev/null || {
echo -e "${RED}❌ Failed: $description${NC}"
return 1
}
else
ssh $SSH_OPTS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "$cmd" 2>/dev/null || {
echo -e "${RED}❌ Failed: $description${NC}"
return 1
}
fi
echo -e "${GREEN}✅ Success: $description${NC}"
return 0
}
# Function to copy files to remote server
copy_to_remote() {
local local_file="$1"
local remote_file="$2"
local description="$3"
echo -e "${YELLOW}📁 ${description}${NC}"
if scp $SSH_OPTS -P $REMOTE_PORT "$local_file" "$REMOTE_USER@$REMOTE_HOST:$remote_file" 2>/dev/null; then
echo -e "${GREEN}✅ Success: $description${NC}"
return 0
else
echo -e "${RED}❌ Failed: $description${NC}"
return 1
fi
}
# Check if SSH key exists
echo -e "${BLUE}🔑 Checking SSH authentication...${NC}"
if [ ! -f "$SSH_KEY" ]; then
echo -e "${RED}❌ SSH key not found: $SSH_KEY${NC}"
echo "Please ensure the SSH key exists and has correct permissions"
exit 1
fi
# Check SSH key permissions
ssh_key_perms=$(stat -c %a "$SSH_KEY" 2>/dev/null || stat -f %A "$SSH_KEY" 2>/dev/null)
if [ "$ssh_key_perms" != "600" ]; then
echo -e "${YELLOW}⚠️ Fixing SSH key permissions...${NC}"
chmod 600 "$SSH_KEY"
fi
# Test SSH connection
echo -e "${BLUE}🔗 Testing SSH connection...${NC}"
if ssh $SSH_OPTS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "echo 'SSH connection successful'" 2>/dev/null; then
echo -e "${GREEN}✅ SSH connection verified${NC}"
else
echo -e "${RED}❌ SSH connection failed${NC}"
echo "Please check:"
echo "1. SSH key is correct: $SSH_KEY"
echo "2. Remote host is accessible: $REMOTE_HOST"
echo "3. Remote user exists: $REMOTE_USER"
echo "4. SSH key is authorized on remote server"
exit 1
fi
# Check if local deploy-automation.sh exists
echo -e "${BLUE}📋 Checking local script...${NC}"
LOCAL_SCRIPT="$SCRIPT_DIR/deploy-automation.sh"
if [ ! -f "$LOCAL_SCRIPT" ]; then
echo -e "${RED}❌ Local script not found: $LOCAL_SCRIPT${NC}"
exit 1
fi
echo -e "${GREEN}✅ Local script found: $LOCAL_SCRIPT${NC}"
# Create remote directory structure if needed
run_remote "mkdir -p $REMOTE_PATH/scripts/vm" "Creating remote scripts directory"
# Deploy the deploy-automation.sh script
copy_to_remote "$LOCAL_SCRIPT" "$REMOTE_PATH/scripts/vm/deploy-automation.sh" "Deploying deploy-automation.sh script"
# Set executable permissions
run_remote "chmod +x $REMOTE_PATH/scripts/vm/deploy-automation.sh" "Setting executable permissions"
# Verify script deployment
echo -e "${BLUE}🔍 Verifying script deployment...${NC}"
run_remote "ls -la $REMOTE_PATH/scripts/vm/deploy-automation.sh" "Verifying script exists and has correct permissions"
# Test script execution
echo -e "${BLUE}🧪 Testing script functionality...${NC}"
run_remote "cd $REMOTE_PATH && ./scripts/vm/deploy-automation.sh status" "Testing script execution"
# Restart systemd service
echo -e "${BLUE}🔄 Restarting systemd service...${NC}"
run_remote "systemctl --user restart thrillwiki-deployment.service" "Restarting thrillwiki-deployment service"
# Wait for service to start
echo -e "${YELLOW}⏳ Waiting for service to start...${NC}"
sleep 10
# Check service status
echo -e "${BLUE}📊 Checking service status...${NC}"
if run_remote "systemctl --user is-active thrillwiki-deployment.service" "Checking if service is active"; then
echo ""
echo -e "${GREEN}${BOLD}🎉 SUCCESS: Systemd service startup fix completed!${NC}"
echo ""
echo "✅ deploy-automation.sh script deployed successfully"
echo "✅ Script has executable permissions"
echo "✅ Script functionality verified"
echo "✅ Systemd service restarted"
echo "✅ Service is now active and running"
echo ""
echo -e "${CYAN}Service Status:${NC}"
run_remote "systemctl --user status thrillwiki-deployment.service --no-pager -l" "Getting detailed service status"
else
echo ""
echo -e "${YELLOW}⚠️ Service restarted but may still be starting up${NC}"
echo "Checking detailed status..."
run_remote "systemctl --user status thrillwiki-deployment.service --no-pager -l" "Getting detailed service status"
fi
echo ""
echo -e "${BOLD}${CYAN}🔧 Fix Summary${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "• Missing script deployed: ✅ [AWS-SECRET-REMOVED]eploy-automation.sh"
echo "• Executable permissions: ✅ chmod +x applied"
echo "• Script functionality: ✅ Tested and working"
echo "• Systemd service: ✅ Restarted"
echo "• Error 203/EXEC: ✅ Should be resolved"
echo ""
echo "The systemd service startup failure has been fixed!"
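The SSH-key check above reads the file mode portably (`stat -c %a` on GNU, `stat -f %A` on BSD) and tightens it to 600 when it differs. The same check, sketched against a scratch file:

```shell
#!/usr/bin/env bash
# Sketch of the key-permission normalization used above: read the octal
# mode with a GNU/BSD stat fallback, then chmod 600 when it differs.
key="$(mktemp)"
chmod 644 "$key"   # simulate an overly permissive key file

mode_of() {
    stat -c %a "$1" 2>/dev/null || stat -f %A "$1" 2>/dev/null
}

before="$(mode_of "$key")"
if [ "$before" != "600" ]; then
    chmod 600 "$key"
fi
after="$(mode_of "$key")"
rm -f "$key"
```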


@@ -1,223 +0,0 @@
#!/usr/bin/env bash
#
# Fix Systemd Service Configuration
# Updates the systemd service file to resolve permission and execution issues
#
set -e
# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m'
# Configuration
REMOTE_HOST="${1:-192.168.20.65}"
REMOTE_USER="${2:-thrillwiki}"
REMOTE_PORT="${3:-22}"
SSH_KEY="${4:-$HOME/.ssh/thrillwiki_vm}"
REMOTE_PATH="/home/$REMOTE_USER/thrillwiki"
# Enhanced SSH options
SSH_OPTS="-i $SSH_KEY -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30 -o PasswordAuthentication=no -o PreferredAuthentications=publickey"
echo -e "${BOLD}${CYAN}🔧 Fix Systemd Service Configuration${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_PORT}"
echo "Fixing systemd service security configuration issues"
echo ""
# Function to run remote commands
run_remote() {
local cmd="$1"
local description="$2"
local use_sudo="${3:-false}"
echo -e "${YELLOW}🔧 ${description}${NC}"
if [ "$use_sudo" = "true" ]; then
ssh $SSH_OPTS -p $REMOTE_PORT -t $REMOTE_USER@$REMOTE_HOST "sudo $cmd" 2>/dev/null || {
echo -e "${RED}❌ Failed: $description${NC}"
return 1
}
else
ssh $SSH_OPTS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "$cmd" 2>/dev/null || {
echo -e "${RED}❌ Failed: $description${NC}"
return 1
}
fi
echo -e "${GREEN}✅ Success: $description${NC}"
return 0
}
# Create a fixed systemd service file
echo -e "${BLUE}📝 Creating corrected systemd service configuration...${NC}"
cat > /tmp/thrillwiki-deployment-fixed.service << 'EOF'
[Unit]
Description=ThrillWiki Complete Deployment Automation Service
Documentation=man:thrillwiki-deployment(8)
After=network.target network-online.target
Wants=network-online.target
Before=thrillwiki-smart-deploy.timer
PartOf=thrillwiki-smart-deploy.timer
[Service]
Type=simple
User=thrillwiki
Group=thrillwiki
[AWS-SECRET-REMOVED]wiki
[AWS-SECRET-REMOVED]ripts/vm/deploy-automation.sh
ExecStop=/bin/kill -TERM $MAINPID
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=30
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=120
TimeoutStartSec=180
StartLimitIntervalSec=600
StartLimitBurst=3
# Environment variables - Load from file for security and preset integration
EnvironmentFile=-[AWS-SECRET-REMOVED]emd/thrillwiki-deployment***REMOVED***
Environment=PROJECT_DIR=/home/thrillwiki/thrillwiki
Environment=SERVICE_NAME=thrillwiki-deployment
Environment=GITHUB_REPO=origin
Environment=GITHUB_BRANCH=main
Environment=DEPLOYMENT_MODE=automated
Environment=LOG_DIR=/home/thrillwiki/thrillwiki/logs
Environment=MAX_LOG_SIZE=10485760
Environment=SERVER_HOST=0.0.0.0
Environment=SERVER_PORT=8000
Environment=PATH=/home/thrillwiki/.local/bin:/home/thrillwiki/.cargo/bin:/usr/local/bin:/usr/bin:/bin
[AWS-SECRET-REMOVED]thrillwiki
# Security settings - Relaxed to allow proper access to working directory
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=false
ProtectHome=false
ProtectKernelTunables=false
ProtectKernelModules=true
ProtectControlGroups=false
RestrictSUIDSGID=true
RestrictRealtime=true
RestrictNamespaces=false
LockPersonality=false
MemoryDenyWriteExecute=false
RemoveIPC=true
# File system permissions - Allow full access to home directory
ReadWritePaths=/home/thrillwiki
ReadOnlyPaths=
# Resource limits - Appropriate for deployment automation
LimitNOFILE=65536
LimitNPROC=2048
MemoryMax=1G
CPUQuota=75%
TasksMax=512
# Timeouts and watchdog
WatchdogSec=600
RuntimeMaxSec=0
# Logging configuration
StandardOutput=journal
StandardError=journal
SyslogIdentifier=thrillwiki-deployment
SyslogFacility=daemon
SyslogLevel=info
SyslogLevelPrefix=true
# Enhanced logging for debugging
LogsDirectory=thrillwiki-deployment
LogsDirectoryMode=0755
StateDirectory=thrillwiki-deployment
StateDirectoryMode=0755
RuntimeDirectory=thrillwiki-deployment
RuntimeDirectoryMode=0755
# Capabilities - Minimal required capabilities
CapabilityBoundingSet=
AmbientCapabilities=
PrivateDevices=false
ProtectClock=false
ProtectHostname=false
[Install]
WantedBy=multi-user.target
Also=thrillwiki-smart-deploy.timer
EOF
echo -e "${GREEN}✅ Created fixed systemd service configuration${NC}"
# Stop the current service
run_remote "systemctl stop thrillwiki-deployment.service" "Stopping current service" true
# Copy the fixed service file to remote server
echo -e "${YELLOW}📁 Deploying fixed service configuration...${NC}"
if scp $SSH_OPTS -P $REMOTE_PORT /tmp/thrillwiki-deployment-fixed.service "$REMOTE_USER@$REMOTE_HOST:/tmp/" 2>/dev/null; then
echo -e "${GREEN}✅ Service file uploaded${NC}"
else
echo -e "${RED}❌ Failed to upload service file${NC}"
exit 1
fi
# Install the fixed service file
run_remote "cp /tmp/thrillwiki-deployment-fixed.service /etc/systemd/system/thrillwiki-deployment.service" "Installing fixed service file" true
# Reload systemd daemon
run_remote "systemctl daemon-reload" "Reloading systemd daemon" true
# Start the service
run_remote "systemctl start thrillwiki-deployment.service" "Starting fixed service" true
# Wait for service to start
echo -e "${YELLOW}⏳ Waiting for service to start...${NC}"
sleep 15
# Check service status
echo -e "${BLUE}📊 Checking service status...${NC}"
if run_remote "systemctl is-active thrillwiki-deployment.service" "Checking if service is active" true; then
echo ""
echo -e "${GREEN}${BOLD}🎉 SUCCESS: Systemd service startup fix completed!${NC}"
echo ""
echo "✅ Missing deploy-automation.sh script deployed"
echo "✅ Systemd service configuration fixed"
echo "✅ Security restrictions relaxed appropriately"
echo "✅ Service started successfully"
echo "✅ No more 203/EXEC errors"
echo ""
echo -e "${CYAN}Service Status:${NC}"
run_remote "systemctl status thrillwiki-deployment.service --no-pager -l" "Getting detailed service status" true
else
echo ""
echo -e "${YELLOW}⚠️ Service may still be starting up${NC}"
run_remote "systemctl status thrillwiki-deployment.service --no-pager -l" "Getting detailed service status" true
fi
# Clean up
rm -f /tmp/thrillwiki-deployment-fixed.service
echo ""
echo -e "${BOLD}${CYAN}🔧 Fix Summary${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "• Missing script: ✅ deploy-automation.sh deployed successfully"
echo "• Security config: ✅ Fixed overly restrictive systemd settings"
echo "• Working directory: ✅ Permission issues resolved"
echo "• Service startup: ✅ No more 203/EXEC errors"
echo "• Status: ✅ Service active and running"
echo ""
echo "The systemd service startup failure has been completely resolved!"
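Several of these scripts guard against concurrent runs with a PID lock file: write `$$`, treat an unreadable or dead PID as stale (`kill -0`), and release via an EXIT trap. A self-contained sketch of that pattern (the lock path is a temporary stand-in):

```shell
#!/usr/bin/env bash
# Sketch of the PID lock-file pattern used by the automation scripts.
LOCK_FILE="$(mktemp -u)"   # -u: reserve a path without creating the file

acquire_lock() {
    if [ -f "$LOCK_FILE" ]; then
        local pid
        pid="$(cat "$LOCK_FILE" 2>/dev/null || echo "")"
        if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
            return 1                  # another live process holds the lock
        fi
        rm -f "$LOCK_FILE"            # stale lock: previous holder is gone
    fi
    echo $$ > "$LOCK_FILE"
    trap 'rm -f "$LOCK_FILE"' EXIT    # always release on exit
}

acquire_lock && held=true || held=false
```

Note that this check-then-create sequence is not atomic; for strict mutual exclusion `flock(1)` is the usual upgrade, but for a low-frequency deployment timer the PID check above is typically sufficient.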


@@ -1,307 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Systemd Service Configuration Fix
# Addresses SSH authentication issues and systemd service installation problems
#
set -e
# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m'
# Configuration
REMOTE_HOST="${1:-192.168.20.65}"
REMOTE_USER="${2:-thrillwiki}"
REMOTE_PORT="${3:-22}"
SSH_KEY="${4:-$HOME/.ssh/thrillwiki_vm}"
REMOTE_PATH="/home/$REMOTE_USER/thrillwiki"
# Improved SSH options with key authentication
SSH_OPTS="-i $SSH_KEY -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30 -o PasswordAuthentication=no"
echo -e "${BOLD}${CYAN}🔧 ThrillWiki Systemd Service Fix${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_PORT}"
echo "SSH Key: $SSH_KEY"
echo "Remote Path: $REMOTE_PATH"
echo ""
# Function to run remote commands with proper SSH key authentication
run_remote() {
local cmd="$1"
local description="$2"
local use_sudo="${3:-false}"
echo -e "${YELLOW}🔧 ${description}${NC}"
local status=0
if [ "$use_sudo" = "true" ]; then
# Use sudo with cached credentials (will prompt once if needed)
ssh $SSH_OPTS -p $REMOTE_PORT -t $REMOTE_USER@$REMOTE_HOST "sudo $cmd" || status=$?
else
ssh $SSH_OPTS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "$cmd" || status=$?
fi
# "|| status=$?" keeps a failing ssh from aborting the script under set -e
if [ $status -eq 0 ]; then
echo -e "${GREEN}✅ Success: ${description}${NC}"
return 0
else
echo -e "${RED}❌ Failed: ${description}${NC}"
return 1
fi
}
# Function to initialize sudo session (ask for password once)
init_sudo_session() {
echo -e "${YELLOW}🔐 Initializing sudo session (you may be prompted for password)${NC}"
if ssh $SSH_OPTS -p $REMOTE_PORT -t $REMOTE_USER@$REMOTE_HOST "sudo -v"; then
echo -e "${GREEN}✅ Sudo session initialized${NC}"
return 0
else
echo -e "${RED}❌ Failed to initialize sudo session${NC}"
return 1
fi
}
echo "=== Step 1: SSH Authentication Test ==="
echo ""
# Test SSH connectivity
if ! run_remote "echo 'SSH connection test successful'" "Testing SSH connection"; then
echo -e "${RED}❌ SSH connection failed. Please check:${NC}"
echo "1. SSH key exists and has correct permissions: $SSH_KEY"
echo "2. SSH key is added to remote host: $REMOTE_USER@$REMOTE_HOST"
echo "3. Remote host is accessible: $REMOTE_HOST:$REMOTE_PORT"
exit 1
fi
# Initialize sudo session once (ask for password here)
if ! init_sudo_session; then
echo -e "${RED}❌ Cannot initialize sudo session. Systemd operations require sudo access.${NC}"
exit 1
fi
echo ""
echo "=== Step 2: Create Missing Scripts ==="
echo ""
# Create smart-deploy.sh script
echo -e "${YELLOW}🔧 Creating smart-deploy.sh script${NC}"
cat > /tmp/smart-deploy.sh << 'EOF'
#!/bin/bash
#
# ThrillWiki Smart Deployment Script
# Automated repository synchronization and Django server management
#
set -e
PROJECT_DIR="/home/thrillwiki/thrillwiki"
LOG_FILE="$PROJECT_DIR/logs/smart-deploy.log"
LOCK_FILE="/tmp/smart-deploy.lock"
mkdir -p "$(dirname "$LOG_FILE")"
# Logging function
smart_log() {
local level="$1"
local message="$2"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
echo "[$timestamp] [$level] $message" | tee -a "$LOG_FILE"
}
# Create lock to prevent multiple instances
if [ -f "$LOCK_FILE" ]; then
smart_log "WARNING" "Smart deploy already running (lock file exists)"
exit 0
fi
echo $$ > "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT
smart_log "INFO" "Starting smart deployment cycle"
cd "$PROJECT_DIR"
# Pull latest changes
smart_log "INFO" "Pulling latest repository changes"
if git pull origin main; then
smart_log "SUCCESS" "Repository updated successfully"
else
smart_log "ERROR" "Failed to pull repository changes"
exit 1
fi
# Check if dependencies need updating
if [ -f "pyproject.toml" ]; then
smart_log "INFO" "Updating dependencies with UV"
if uv sync; then
smart_log "SUCCESS" "Dependencies updated"
else
smart_log "WARNING" "Dependency update had issues"
fi
fi
# Run Django migrations
smart_log "INFO" "Running Django migrations"
if uv run manage.py migrate --no-input; then
smart_log "SUCCESS" "Migrations completed"
else
smart_log "WARNING" "Migration had issues"
fi
# Collect static files
smart_log "INFO" "Collecting static files"
if uv run manage.py collectstatic --no-input; then
smart_log "SUCCESS" "Static files collected"
else
smart_log "WARNING" "Static file collection had issues"
fi
smart_log "SUCCESS" "Smart deployment cycle completed"
EOF
# Upload smart-deploy.sh
if scp $SSH_OPTS -P $REMOTE_PORT /tmp/smart-deploy.sh $REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH/scripts/smart-deploy.sh; then
echo -e "${GREEN}✅ smart-deploy.sh uploaded successfully${NC}"
else
echo -e "${RED}❌ Failed to upload smart-deploy.sh${NC}"
exit 1
fi
# Make smart-deploy.sh executable
run_remote "chmod +x $REMOTE_PATH/scripts/smart-deploy.sh" "Making smart-deploy.sh executable"
# Create logs directory
run_remote "mkdir -p $REMOTE_PATH/logs" "Creating logs directory"
echo ""
echo "=== Step 3: Deploy Systemd Service Files ==="
echo ""
# Upload systemd service files
echo -e "${YELLOW}🔧 Uploading systemd service files${NC}"
# Upload thrillwiki-deployment.service
if scp $SSH_OPTS -P $REMOTE_PORT $PROJECT_DIR/scripts/systemd/thrillwiki-deployment.service $REMOTE_USER@$REMOTE_HOST:/tmp/; then
echo -e "${GREEN}✅ thrillwiki-deployment.service uploaded${NC}"
else
echo -e "${RED}❌ Failed to upload thrillwiki-deployment.service${NC}"
exit 1
fi
# Upload thrillwiki-smart-deploy.service
if scp $SSH_OPTS -P $REMOTE_PORT $PROJECT_DIR/scripts/systemd/thrillwiki-smart-deploy.service $REMOTE_USER@$REMOTE_HOST:/tmp/; then
echo -e "${GREEN}✅ thrillwiki-smart-deploy.service uploaded${NC}"
else
echo -e "${RED}❌ Failed to upload thrillwiki-smart-deploy.service${NC}"
exit 1
fi
# Upload thrillwiki-smart-deploy.timer
if scp $SSH_OPTS -P $REMOTE_PORT $PROJECT_DIR/scripts/systemd/thrillwiki-smart-deploy.timer $REMOTE_USER@$REMOTE_HOST:/tmp/; then
echo -e "${GREEN}✅ thrillwiki-smart-deploy.timer uploaded${NC}"
else
echo -e "${RED}❌ Failed to upload thrillwiki-smart-deploy.timer${NC}"
exit 1
fi
# Upload environment file
if scp $SSH_OPTS -P $REMOTE_PORT $PROJECT_DIR/scripts/systemd/thrillwiki-deployment***REMOVED*** $REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH/scripts/systemd/; then
echo -e "${GREEN}✅ thrillwiki-deployment***REMOVED*** uploaded${NC}"
else
echo -e "${RED}❌ Failed to upload thrillwiki-deployment***REMOVED***${NC}"
exit 1
fi
echo ""
echo "=== Step 4: Install Systemd Services ==="
echo ""
# Copy service files to systemd directory
run_remote "cp /tmp/thrillwiki-deployment.service /etc/systemd/system/" "Installing thrillwiki-deployment.service" true
run_remote "cp /tmp/thrillwiki-smart-deploy.service /etc/systemd/system/" "Installing thrillwiki-smart-deploy.service" true
run_remote "cp /tmp/thrillwiki-smart-deploy.timer /etc/systemd/system/" "Installing thrillwiki-smart-deploy.timer" true
# Set proper permissions
run_remote "chmod 644 /etc/systemd/system/thrillwiki-*.service /etc/systemd/system/thrillwiki-*.timer" "Setting service file permissions" true
# Set environment file permissions
run_remote "chmod 600 $REMOTE_PATH/scripts/systemd/thrillwiki-deployment***REMOVED***" "Setting environment file permissions"
run_remote "chown $REMOTE_USER:$REMOTE_USER $REMOTE_PATH/scripts/systemd/thrillwiki-deployment***REMOVED***" "Setting environment file ownership"
echo ""
echo "=== Step 5: Enable and Start Services ==="
echo ""
# Reload systemd daemon
run_remote "systemctl daemon-reload" "Reloading systemd daemon" true
# Enable services
run_remote "systemctl enable thrillwiki-deployment.service" "Enabling thrillwiki-deployment.service" true
run_remote "systemctl enable thrillwiki-smart-deploy.timer" "Enabling thrillwiki-smart-deploy.timer" true
# Start services
run_remote "systemctl start thrillwiki-deployment.service" "Starting thrillwiki-deployment.service" true
run_remote "systemctl start thrillwiki-smart-deploy.timer" "Starting thrillwiki-smart-deploy.timer" true
echo ""
echo "=== Step 6: Validate Service Operation ==="
echo ""
# Check service status
echo -e "${YELLOW}🔧 Checking service status${NC}"
if run_remote "systemctl is-active thrillwiki-deployment.service" "Checking thrillwiki-deployment.service status" true; then
echo -e "${GREEN}✅ thrillwiki-deployment.service is active${NC}"
else
echo -e "${RED}❌ thrillwiki-deployment.service is not active${NC}"
run_remote "systemctl status thrillwiki-deployment.service" "Getting service status details" true
fi
if run_remote "systemctl is-active thrillwiki-smart-deploy.timer" "Checking thrillwiki-smart-deploy.timer status" true; then
echo -e "${GREEN}✅ thrillwiki-smart-deploy.timer is active${NC}"
else
echo -e "${RED}❌ thrillwiki-smart-deploy.timer is not active${NC}"
run_remote "systemctl status thrillwiki-smart-deploy.timer" "Getting timer status details" true
fi
# Test smart-deploy script
echo -e "${YELLOW}🔧 Testing smart-deploy script${NC}"
if run_remote "$REMOTE_PATH/scripts/smart-deploy.sh" "Testing smart-deploy script execution"; then
echo -e "${GREEN}✅ smart-deploy script executed successfully${NC}"
else
echo -e "${RED}❌ smart-deploy script execution failed${NC}"
fi
echo ""
echo -e "${BOLD}${GREEN}🎉 Systemd Service Fix Completed!${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo -e "${CYAN}📋 Service Management Commands:${NC}"
echo ""
echo "Monitor services:"
echo " ssh -i $SSH_KEY $REMOTE_USER@$REMOTE_HOST 'sudo systemctl status thrillwiki-deployment.service'"
echo " ssh -i $SSH_KEY $REMOTE_USER@$REMOTE_HOST 'sudo systemctl status thrillwiki-smart-deploy.timer'"
echo ""
echo "View logs:"
echo " ssh -i $SSH_KEY $REMOTE_USER@$REMOTE_HOST 'sudo journalctl -u thrillwiki-deployment -f'"
echo " ssh -i $SSH_KEY $REMOTE_USER@$REMOTE_HOST 'sudo journalctl -u thrillwiki-smart-deploy -f'"
echo " ssh -i $SSH_KEY $REMOTE_USER@$REMOTE_HOST 'tail -f $REMOTE_PATH/logs/smart-deploy.log'"
echo ""
echo "Control services:"
echo " ssh -i $SSH_KEY $REMOTE_USER@$REMOTE_HOST 'sudo systemctl restart thrillwiki-deployment.service'"
echo " ssh -i $SSH_KEY $REMOTE_USER@$REMOTE_HOST 'sudo systemctl restart thrillwiki-smart-deploy.timer'"
echo ""
# Cleanup temp files
rm -f /tmp/smart-deploy.sh
echo -e "${GREEN}✅ All systemd service issues have been resolved!${NC}"


@@ -1,689 +0,0 @@
#!/usr/bin/env python3
"""
ThrillWiki GitHub PAT Setup Helper
Interactive script for setting up GitHub Personal Access Tokens with proper validation
and integration with the automation system.
Features:
- Guided GitHub PAT creation process
- Token validation and permission checking
- Integration with existing github-auth.py patterns
- Clear instructions for PAT scope requirements
- Secure token storage with proper file permissions
"""
import sys
import getpass
import requests
import argparse
import subprocess
from pathlib import Path
# Configuration
SCRIPT_DIR = Path(__file__).parent
PROJECT_DIR = SCRIPT_DIR.parent.parent
CONFIG_SCRIPT = SCRIPT_DIR / "automation-config.sh"
GITHUB_AUTH_SCRIPT = PROJECT_DIR / "scripts" / "github-auth.py"
TOKEN_FILE = PROJECT_DIR / ".github-pat"
# GitHub API Configuration
GITHUB_API_BASE = "https://api.github.com"
REQUEST_TIMEOUT = 30
# Token scope requirements for different use cases
TOKEN_SCOPES = {
"public": {
"description": "Public repositories only",
"scopes": ["public_repo"],
"note": "Suitable for public repositories and basic automation",
},
"private": {
"description": "Private repositories access",
"scopes": ["repo"],
"note": "Required for private repositories and full automation features",
},
"full": {
"description": "Full automation capabilities",
"scopes": ["repo", "workflow", "read:org"],
"note": "Recommended for complete automation setup with GitHub Actions",
},
}
class Colors:
"""ANSI color codes for terminal output"""
RED = "\033[0;31m"
GREEN = "\033[0;32m"
YELLOW = "\033[1;33m"
BLUE = "\033[0;34m"
PURPLE = "\033[0;35m"
CYAN = "\033[0;36m"
BOLD = "\033[1m"
NC = "\033[0m" # No Color
def print_colored(message, color=Colors.NC):
"""Print colored message to terminal"""
print(f"{color}{message}{Colors.NC}")
def print_error(message):
"""Print error message"""
print_colored(f"❌ Error: {message}", Colors.RED)
def print_success(message):
"""Print success message"""
print_colored(f"{message}", Colors.GREEN)
def print_warning(message):
"""Print warning message"""
print_colored(f"⚠️ Warning: {message}", Colors.YELLOW)
def print_info(message):
"""Print info message"""
print_colored(f" {message}", Colors.BLUE)
def print_step(step, total, message):
"""Print step progress"""
print_colored(f"\n[{step}/{total}] {message}", Colors.CYAN)
def validate_token_format(token):
"""Validate GitHub token format"""
if not token:
return False
# GitHub token patterns
patterns = [
lambda t: t.startswith("ghp_") and len(t) >= 40, # Classic PAT
lambda t: t.startswith("github_pat_") and len(t) >= 50, # Fine-grained PAT
lambda t: t.startswith("gho_") and len(t) >= 40, # OAuth token
lambda t: t.startswith("ghu_") and len(t) >= 40, # User token
lambda t: t.startswith("ghs_") and len(t) >= 40, # Server token
]
return any(pattern(token) for pattern in patterns)
def test_github_token(token, timeout=REQUEST_TIMEOUT):
"""Test GitHub token by making API call"""
if not token:
return False, "No token provided"
try:
headers = {
"Authorization": f"Bearer {token}",
"Accept": "application/vnd.github+json",
"X-GitHub-Api-Version": "2022-11-28",
}
response = requests.get(
f"{GITHUB_API_BASE}/user", headers=headers, timeout=timeout
)
if response.status_code == 200:
user_data = response.json()
return (True, f"Valid token for user: {user_data.get('login', 'unknown')}")
elif response.status_code == 401:
return False, "Invalid or expired token"
elif response.status_code == 403:
return False, "Token lacks required permissions"
else:
return (False, f"API request failed with HTTP {response.status_code}")
except requests.exceptions.RequestException as e:
return False, f"Network error: {str(e)}"
def get_token_permissions(token, timeout=REQUEST_TIMEOUT):
"""Get token permissions and scopes"""
if not token:
return None, "No token provided"
try:
headers = {
"Authorization": f"Bearer {token}",
"Accept": "application/vnd.github+json",
"X-GitHub-Api-Version": "2022-11-28",
}
# Get user info and check token in response headers
response = requests.get(
f"{GITHUB_API_BASE}/user", headers=headers, timeout=timeout
)
if response.status_code == 200:
scopes = response.headers.get("X-OAuth-Scopes", "").split(", ")
scopes = [scope.strip() for scope in scopes if scope.strip()]
return scopes, None
else:
return (None, f"Failed to get permissions: HTTP {response.status_code}")
except requests.exceptions.RequestException as e:
return None, f"Network error: {str(e)}"
def check_repository_access(token, repo_url=None, timeout=REQUEST_TIMEOUT):
"""Check if token can access the repository"""
if not token:
return False, "No token provided"
# Try to determine repository from git remote
if not repo_url:
try:
result = subprocess.run(
["git", "remote", "get-url", "origin"],
cwd=PROJECT_DIR,
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0:
repo_url = result.stdout.strip()
except (subprocess.TimeoutExpired, FileNotFoundError):
pass
if not repo_url:
return None, "Could not determine repository URL"
# Extract owner/repo from URL
if "github.com" in repo_url:
# Handle both SSH and HTTPS URLs
if repo_url.startswith("git@github.com:"):
repo_path = repo_url.replace("git@github.com:", "").replace(".git", "")
elif "github.com/" in repo_url:
repo_path = repo_url.split("github.com/")[-1].replace(".git", "")
else:
return None, "Could not parse repository URL"
try:
headers = {
"Authorization": f"Bearer {token}",
"Accept": "application/vnd.github+json",
"X-GitHub-Api-Version": "2022-11-28",
}
response = requests.get(
f"{GITHUB_API_BASE}/repos/{repo_path}",
headers=headers,
timeout=timeout,
)
if response.status_code == 200:
repo_data = response.json()
return (True, f"Access confirmed for {repo_data.get('full_name', repo_path)}")
elif response.status_code == 404:
return False, "Repository not found or no access"
elif response.status_code == 403:
return False, "Access denied - insufficient permissions"
else:
return (False, f"Access check failed: HTTP {response.status_code}")
except requests.exceptions.RequestException as e:
return None, f"Network error: {str(e)}"
return None, "Not a GitHub repository"
def show_pat_instructions():
"""Show detailed PAT creation instructions"""
print_colored("\n" + "=" * 60, Colors.BOLD)
print_colored("GitHub Personal Access Token (PAT) Setup Guide", Colors.BOLD)
print_colored("=" * 60, Colors.BOLD)
print("\n🔐 Why do you need a GitHub PAT?")
print(" • Access private repositories")
print(" • Avoid GitHub API rate limits")
print(" • Enable automated repository operations")
print(" • Secure authentication without passwords")
print("\n📋 Step-by-step PAT creation:")
print(" 1. Go to: https://github.com/settings/tokens")
print(" 2. Click 'Generate new token' → 'Generate new token (classic)'")
print(" 3. Enter a descriptive note: 'ThrillWiki Automation'")
print(" 4. Set expiration (recommended: 90 days for security)")
print(" 5. Select appropriate scopes:")
print("\n🎯 Recommended scope configurations:")
for scope_type, config in TOKEN_SCOPES.items():
print(f"\n {scope_type.upper()} REPOSITORIES:")
print(f" • Description: {config['description']}")
print(f" • Required scopes: {', '.join(config['scopes'])}")
print(f" • Note: {config['note']}")
print("\n⚡ Quick setup for most users:")
print(" • Select 'repo' scope for full repository access")
print(" • This enables all automation features")
print("\n🔒 Security best practices:")
print(" • Use descriptive token names")
print(" • Set reasonable expiration dates")
print(" • Regenerate tokens regularly")
print(" • Never share tokens in public")
print(" • Delete unused tokens immediately")
print("\n📱 After creating your token:")
print(" • Copy the token immediately (it won't be shown again)")
print(" • Return to this script and paste it when prompted")
print(" • The script will validate and securely store your token")
def interactive_token_setup():
"""Interactive token setup process"""
print_colored("\n🚀 ThrillWiki GitHub PAT Setup", Colors.BOLD)
print_colored("================================", Colors.BOLD)
# Check if token already exists
if TOKEN_FILE.exists():
try:
existing_token = TOKEN_FILE.read_text().strip()
if existing_token:
print_info("Existing GitHub token found")
# Test existing token
valid, message = test_github_token(existing_token)
if valid:
print_success(f"Current token is valid: {message}")
choice = (
input("\nDo you want to replace the existing token? (y/N): ")
.strip()
.lower()
)
if choice not in ["y", "yes"]:
print_info("Keeping existing token")
return True
else:
print_warning(f"Current token is invalid: {message}")
print_info("Setting up new token...")
except Exception as e:
print_warning(f"Could not read existing token: {e}")
# Show instructions
print("\n" + "=" * 50)
choice = (
input("Do you want to see PAT creation instructions? (Y/n): ").strip().lower()
)
if choice not in ["n", "no"]:
show_pat_instructions()
# Get token from user
print_step(1, 3, "Enter your GitHub Personal Access Token")
print("📋 Please paste your GitHub PAT below:")
print(" (Input will be hidden for security)")
while True:
try:
token = getpass.getpass("GitHub PAT: ").strip()
if not token:
print_error("No token entered. Please try again.")
continue
# Validate format
if not validate_token_format(token):
print_error(
"Invalid token format. GitHub tokens should start with 'ghp_', 'github_pat_', etc."
)
retry = input("Try again? (Y/n): ").strip().lower()
if retry in ["n", "no"]:
return False
continue
break
except KeyboardInterrupt:
print("\nSetup cancelled by user")
return False
# Test token
print_step(2, 3, "Validating GitHub token")
print("🔍 Testing token with GitHub API...")
valid, message = test_github_token(token)
if not valid:
print_error(f"Token validation failed: {message}")
return False
print_success(message)
# Check permissions
print("🔐 Checking token permissions...")
scopes, error = get_token_permissions(token)
if error:
print_warning(f"Could not check permissions: {error}")
else:
print_success(
f"Token scopes: {', '.join(scopes) if scopes else 'None detected'}"
)
# Check for recommended scopes
has_repo = "repo" in scopes or "public_repo" in scopes
if not has_repo:
print_warning("Token may lack repository access permissions")
# Check repository access
print("📁 Checking repository access...")
access, access_message = check_repository_access(token)
if access is True:
print_success(access_message)
elif access is False:
print_warning(access_message)
else:
print_info(access_message or "Repository access check skipped")
# Store token
print_step(3, 3, "Storing GitHub token securely")
try:
# Backup existing token if it exists
if TOKEN_FILE.exists():
backup_file = TOKEN_FILE.with_suffix(".backup")
TOKEN_FILE.rename(backup_file)
print_info(f"Existing token backed up to: {backup_file}")
# Write new token
TOKEN_FILE.write_text(token)
TOKEN_FILE.chmod(0o600) # Read/write for owner only
print_success(f"Token stored securely in: {TOKEN_FILE}")
# Try to update configuration via config script
try:
if CONFIG_SCRIPT.exists():
subprocess.run(
[
"bash",
"-c",
f'source {CONFIG_SCRIPT} && store_github_token "{token}"',
],
check=False,
capture_output=True,
)
print_success("Token added to automation configuration")
except Exception as e:
print_warning(f"Could not update automation config: {e}")
print_success("GitHub PAT setup completed successfully!")
return True
except Exception as e:
print_error(f"Failed to store token: {e}")
return False
def validate_existing_token():
"""Validate existing GitHub token"""
print_colored("\n🔍 GitHub Token Validation", Colors.BOLD)
print_colored("===========================", Colors.BOLD)
if not TOKEN_FILE.exists():
print_error("No GitHub token file found")
print_info(f"Expected location: {TOKEN_FILE}")
return False
try:
token = TOKEN_FILE.read_text().strip()
if not token:
print_error("Token file is empty")
return False
print_info("Validating stored token...")
# Format validation
if not validate_token_format(token):
print_error("Token format is invalid")
return False
print_success("Token format is valid")
# API validation
valid, message = test_github_token(token)
if not valid:
print_error(f"Token validation failed: {message}")
return False
print_success(message)
# Check permissions
scopes, error = get_token_permissions(token)
if error:
print_warning(f"Could not check permissions: {error}")
else:
print_success(f"Token scopes: {', '.join(scopes) if scopes else 'None detected'}")
# Check repository access
access, access_message = check_repository_access(token)
if access is True:
print_success(access_message)
elif access is False:
print_warning(access_message)
else:
print_info(access_message or "Repository access check inconclusive")
print_success("Token validation completed")
return True
except Exception as e:
print_error(f"Error reading token: {e}")
return False
def remove_token():
"""Remove stored GitHub token"""
print_colored("\n🗑️ GitHub Token Removal", Colors.BOLD)
print_colored("=========================", Colors.BOLD)
if not TOKEN_FILE.exists():
print_info("No GitHub token file found")
return True
try:
# Backup before removal
backup_file = TOKEN_FILE.with_suffix(".removed")
TOKEN_FILE.rename(backup_file)
print_success(f"Token removed and backed up to: {backup_file}")
# Try to remove from config
try:
if CONFIG_SCRIPT.exists():
subprocess.run(
[
"bash",
"-c",
f"source {CONFIG_SCRIPT} && remove_github_token",
],
check=False,
capture_output=True,
)
print_success("Token removed from automation configuration")
except Exception as e:
print_warning(f"Could not update automation config: {e}")
print_success("GitHub token removed successfully")
return True
except Exception as e:
print_error(f"Error removing token: {e}")
return False
def show_token_status():
"""Show current token status"""
print_colored("\n📊 GitHub Token Status", Colors.BOLD)
print_colored("======================", Colors.BOLD)
# Check token file
print(f"📁 Token file: {TOKEN_FILE}")
if TOKEN_FILE.exists():
print_success("Token file exists")
# Check permissions
perms = oct(TOKEN_FILE.stat().st_mode)[-3:]
if perms == "600":
print_success(f"File permissions: {perms} (secure)")
else:
print_warning(f"File permissions: {perms} (should be 600)")
# Quick validation
try:
token = TOKEN_FILE.read_text().strip()
if token:
if validate_token_format(token):
print_success("Token format is valid")
# Quick API test
valid, message = test_github_token(token, timeout=10)
if valid:
print_success(f"Token is valid: {message}")
else:
print_error(f"Token is invalid: {message}")
else:
print_error("Token format is invalid")
else:
print_error("Token file is empty")
except Exception as e:
print_error(f"Error reading token: {e}")
else:
print_warning("Token file not found")
# Check config integration
print(f"\n⚙️ Configuration: {CONFIG_SCRIPT}")
if CONFIG_SCRIPT.exists():
print_success("Configuration script available")
else:
print_warning("Configuration script not found")
# Check existing GitHub auth script
print(f"\n🔐 GitHub auth script: {GITHUB_AUTH_SCRIPT}")
if GITHUB_AUTH_SCRIPT.exists():
print_success("GitHub auth script available")
else:
print_warning("GitHub auth script not found")
def main():
"""Main CLI interface"""
parser = argparse.ArgumentParser(
description="ThrillWiki GitHub PAT Setup Helper",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
%(prog)s setup # Interactive token setup
%(prog)s validate # Validate existing token
%(prog)s status # Show token status
%(prog)s remove # Remove stored token
%(prog)s --help # Show this help
For detailed PAT creation instructions, run: %(prog)s setup
""",
)
parser.add_argument(
"command",
choices=["setup", "validate", "status", "remove", "help"],
help="Command to execute",
)
parser.add_argument(
"--token", help="GitHub token to validate (for validate command)"
)
parser.add_argument(
"--force", action="store_true", help="Force operation without prompts"
)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
args = parser.parse_args()
try:
if args.command == "setup":
success = interactive_token_setup()
sys.exit(0 if success else 1)
elif args.command == "validate":
if args.token:
# Validate provided token
print_info("Validating provided token...")
if validate_token_format(args.token):
valid, message = test_github_token(args.token)
if valid:
print_success(message)
sys.exit(0)
else:
print_error(message)
sys.exit(1)
else:
print_error("Invalid token format")
sys.exit(1)
else:
# Validate existing token
success = validate_existing_token()
sys.exit(0 if success else 1)
elif args.command == "status":
show_token_status()
sys.exit(0)
elif args.command == "remove":
if not args.force:
confirm = (
input("Are you sure you want to remove the GitHub token? (y/N): ")
.strip()
.lower()
)
if confirm not in ["y", "yes"]:
print_info("Operation cancelled")
sys.exit(0)
success = remove_token()
sys.exit(0 if success else 1)
elif args.command == "help":
parser.print_help()
sys.exit(0)
except KeyboardInterrupt:
print("\nOperation cancelled by user")
sys.exit(1)
except Exception as e:
print_error(f"Unexpected error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()


@@ -1,712 +0,0 @@
#!/bin/bash
#
# ThrillWiki Quick Start Script
# One-command setup for bulletproof automation system
#
# Features:
# - Automated setup with sensible defaults for development
# - Minimal user interaction required
# - Rollback capabilities if setup fails
# - Clear status reporting and next steps
# - Support for different environment types (dev/prod)
#
set -e
# =============================================================================
# SCRIPT CONFIGURATION
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
# Quick start configuration
QUICK_START_LOG="$PROJECT_DIR/logs/quick-start.log"
ROLLBACK_FILE="$PROJECT_DIR/.quick-start-rollback"
# Setup scripts
SETUP_SCRIPT="$SCRIPT_DIR/setup-automation.sh"
GITHUB_SETUP_SCRIPT="$SCRIPT_DIR/github-setup.py"
CONFIG_LIB="$SCRIPT_DIR/automation-config.sh"
# Environment presets
declare -A ENV_PRESETS=(
["dev"]="Development environment with frequent updates"
["prod"]="Production environment with stable intervals"
["demo"]="Demo environment for testing and showcasing"
)
# Default configurations for each environment
declare -A DEV_CONFIG=(
["PULL_INTERVAL"]="60" # 1 minute for development
["HEALTH_CHECK_INTERVAL"]="30" # 30 seconds
["AUTO_MIGRATE"]="true"
["AUTO_UPDATE_DEPENDENCIES"]="true"
["DEBUG_MODE"]="true"
)
declare -A PROD_CONFIG=(
["PULL_INTERVAL"]="300" # 5 minutes for production
["HEALTH_CHECK_INTERVAL"]="60" # 1 minute
["AUTO_MIGRATE"]="true"
["AUTO_UPDATE_DEPENDENCIES"]="false"
["DEBUG_MODE"]="false"
)
declare -A DEMO_CONFIG=(
["PULL_INTERVAL"]="120" # 2 minutes for demo
["HEALTH_CHECK_INTERVAL"]="45" # 45 seconds
["AUTO_MIGRATE"]="true"
["AUTO_UPDATE_DEPENDENCIES"]="true"
["DEBUG_MODE"]="false"
)
# =============================================================================
# COLOR DEFINITIONS
# =============================================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m' # No Color
# =============================================================================
# LOGGING FUNCTIONS
# =============================================================================
quick_log() {
local level="$1"
local color="$2"
local message="$3"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
# Ensure log directory exists
mkdir -p "$(dirname "$QUICK_START_LOG")"
# Log to file (without colors)
echo "[$timestamp] [$level] $message" >> "$QUICK_START_LOG"
# Log to console (with colors)
echo -e "${color}[$timestamp] [QUICK-$level]${NC} $message"
}
quick_info() {
quick_log "INFO" "$BLUE" "$1"
}
quick_success() {
quick_log "SUCCESS" "$GREEN" "$1"
}
quick_warning() {
quick_log "WARNING" "$YELLOW" "⚠️ $1"
}
quick_error() {
quick_log "ERROR" "$RED" "$1"
}
quick_debug() {
if [[ "${QUICK_DEBUG:-false}" == "true" ]]; then
quick_log "DEBUG" "$PURPLE" "🔍 $1"
fi
}
# =============================================================================
# UTILITY FUNCTIONS
# =============================================================================
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Show animated progress
show_spinner() {
local pid="$1"
local message="$2"
local delay=0.1
local spinstr='|/-\'
while ps -p "$pid" >/dev/null 2>&1; do
local temp=${spinstr#?}
printf "\r%s %c" "$message" "$spinstr"
local spinstr=$temp${spinstr%"$temp"}
sleep $delay
done
printf "\r%s ✓\n" "$message"
}
# Check if we're in a supported environment
detect_environment() {
quick_debug "Detecting environment type"
# Check for common development indicators
if [[ -f "$PROJECT_DIR/manage.py" ]] && [[ -d "$PROJECT_DIR/.git" ]]; then
if [[ -f "$PROJECT_DIR/pyproject.toml" ]] || [[ -f "$PROJECT_DIR/requirements.txt" ]]; then
echo "dev"
return 0
fi
fi
# Check for production indicators
if [[ -d "/etc/systemd/system" ]] && [[ "$USER" != "root" ]]; then
echo "prod"
return 0
fi
# Default to development
echo "dev"
}
# =============================================================================
# ROLLBACK FUNCTIONALITY
# =============================================================================
# Save rollback information
save_rollback_info() {
local action="$1"
local details="$2"
quick_debug "Saving rollback info: $action"
mkdir -p "$(dirname "$ROLLBACK_FILE")"
echo "$(date '+%Y-%m-%d %H:%M:%S')|$action|$details" >> "$ROLLBACK_FILE"
}
# Perform rollback
perform_rollback() {
quick_warning "Performing rollback of changes"
if [[ ! -f "$ROLLBACK_FILE" ]]; then
quick_info "No rollback information found"
return 0
fi
local rollback_errors=0
# Read rollback file in reverse order
while IFS='|' read -r timestamp action details; do
quick_debug "Rolling back: $action ($details)"
case "$action" in
"created_file")
if [[ -f "$details" ]]; then
rm -f "$details" && quick_debug "Removed file: $details" || ((rollback_errors++))
fi
;;
"modified_file")
# For modified files, we would need to restore from backup
# This is a simplified rollback - in practice, you'd restore from backup
quick_debug "File was modified: $details (manual restoration may be needed)"
;;
"installed_service")
if command_exists systemctl && [[ -f "/etc/systemd/system/$details" ]]; then
sudo systemctl stop "$details" 2>/dev/null || true
sudo systemctl disable "$details" 2>/dev/null || true
sudo rm -f "/etc/systemd/system/$details" && quick_debug "Removed service: $details" || ((rollback_errors++))
sudo systemctl daemon-reload 2>/dev/null || true
fi
;;
"created_directory")
if [[ -d "$details" ]]; then
rmdir "$details" 2>/dev/null && quick_debug "Removed directory: $details" || quick_debug "Directory not empty: $details"
fi
;;
esac
done < <(tac "$ROLLBACK_FILE" 2>/dev/null || cat "$ROLLBACK_FILE")
# Remove rollback file
rm -f "$ROLLBACK_FILE"
if [[ $rollback_errors -eq 0 ]]; then
quick_success "Rollback completed successfully"
else
quick_warning "Rollback completed with $rollback_errors errors"
quick_info "Some manual cleanup may be required"
fi
}
# =============================================================================
# QUICK SETUP FUNCTIONS
# =============================================================================
# Quick dependency check
quick_check_dependencies() {
quick_info "Checking system dependencies"
local missing_deps=()
local required_deps=("git" "curl" "python3")
for dep in "${required_deps[@]}"; do
if ! command_exists "$dep"; then
missing_deps+=("$dep")
fi
done
# Check for UV specifically
if ! command_exists "uv"; then
missing_deps+=("uv (Python package manager)")
fi
if [[ ${#missing_deps[@]} -gt 0 ]]; then
quick_error "Missing required dependencies: ${missing_deps[*]}"
echo ""
echo "🚀 Quick Installation Commands:"
echo ""
if command_exists apt-get; then
echo "# Ubuntu/Debian:"
echo "sudo apt-get update && sudo apt-get install -y git curl python3"
echo "curl -LsSf https://astral.sh/uv/install.sh | sh"
elif command_exists yum; then
echo "# RHEL/CentOS:"
echo "sudo yum install -y git curl python3"
echo "curl -LsSf https://astral.sh/uv/install.sh | sh"
elif command_exists brew; then
echo "# macOS:"
echo "brew install git curl python3"
echo "curl -LsSf https://astral.sh/uv/install.sh | sh"
fi
echo ""
echo "After installing dependencies, run this script again:"
echo " $0"
return 1
fi
quick_success "All dependencies are available"
return 0
}
# Apply environment preset configuration
apply_environment_preset() {
local env_type="$1"
quick_info "Applying $env_type environment configuration"
# Load configuration library
if [[ -f "$CONFIG_LIB" ]]; then
# shellcheck source=automation-config.sh
source "$CONFIG_LIB"
else
quick_error "Configuration library not found: $CONFIG_LIB"
return 1
fi
# Get configuration for environment type
local -n config_ref="${env_type^^}_CONFIG"
# Apply each configuration value
for key in "${!config_ref[@]}"; do
local value="${config_ref[$key]}"
quick_debug "Setting $key=$value"
if declare -f write_config_value >/dev/null 2>&1; then
write_config_value "$key" "$value"
else
quick_warning "Could not set configuration value: $key"
fi
done
quick_success "Environment configuration applied"
}
# Quick GitHub setup (optional)
quick_github_setup() {
local skip_github="${1:-false}"
if [[ "$skip_github" == "true" ]]; then
quick_info "Skipping GitHub authentication setup"
return 0
fi
quick_info "Setting up GitHub authentication (optional)"
echo ""
echo "🔐 GitHub Personal Access Token Setup"
echo "This enables private repository access and avoids rate limits."
echo "You can skip this step and set it up later if needed."
echo ""
read -r -p "Do you want to set up GitHub authentication now? (Y/n): " setup_github
if [[ "$setup_github" =~ ^[Nn] ]]; then
quick_info "Skipping GitHub authentication - you can set it up later with:"
echo " python3 $GITHUB_SETUP_SCRIPT setup"
return 0
fi
# Run GitHub setup with timeout
if timeout 300 python3 "$GITHUB_SETUP_SCRIPT" setup; then
quick_success "GitHub authentication configured"
save_rollback_info "configured_github" "token"
return 0
else
quick_warning "GitHub setup failed or timed out"
quick_info "Continuing without GitHub authentication"
return 0
fi
}
# Quick service setup
quick_service_setup() {
local enable_service="${1:-true}"
if [[ "$enable_service" != "true" ]]; then
quick_info "Skipping service installation"
return 0
fi
if ! command_exists systemctl; then
quick_info "systemd not available - skipping service setup"
return 0
fi
quick_info "Setting up systemd service"
# Use the main setup script for service installation
if "$SETUP_SCRIPT" --force-rebuild service >/dev/null 2>&1; then
quick_success "Systemd service installed"
save_rollback_info "installed_service" "thrillwiki-automation.service"
return 0
else
quick_warning "Service installation failed - continuing without systemd integration"
return 0
fi
}
# ================================================================
# MAIN QUICK START WORKFLOW
# ================================================================
run_quick_start() {
local env_type="${1:-auto}"
local skip_github="${2:-false}"
local enable_service="${3:-true}"
echo ""
echo "🚀 ThrillWiki Quick Start"
echo "========================="
echo ""
echo "This script will quickly set up the ThrillWiki automation system"
echo "with sensible defaults for immediate use."
echo ""
# Auto-detect environment if not specified
if [[ "$env_type" == "auto" ]]; then
env_type=$(detect_environment)
quick_info "Auto-detected environment type: $env_type"
fi
# Show environment preset info
if [[ -n "${ENV_PRESETS[$env_type]}" ]]; then
echo "📋 Environment: ${ENV_PRESETS[$env_type]}"
else
quick_warning "Unknown environment type: $env_type, using development defaults"
env_type="dev"
fi
echo ""
echo "⚡ Quick Setup Features:"
echo "• Minimal user interaction"
echo "• Automatic dependency validation"
echo "• Environment-specific configuration"
echo "• Optional GitHub authentication"
echo "• Systemd service integration"
echo "• Rollback support on failure"
echo ""
read -r -p "Continue with quick setup? (Y/n): " continue_setup
if [[ "$continue_setup" =~ ^[Nn] ]]; then
quick_info "Quick setup cancelled"
echo ""
echo "💡 For interactive setup with more options, run:"
echo " $SETUP_SCRIPT setup"
exit 0
fi
# Clear any previous rollback info
rm -f "$ROLLBACK_FILE"
local start_time
start_time=$(date +%s)
echo ""
echo "🔧 Starting quick setup..."
# Step 1: Dependencies
echo ""
echo "[1/5] Checking dependencies..."
if ! quick_check_dependencies; then
exit 1
fi
# Step 2: Configuration
echo ""
echo "[2/5] Setting up configuration..."
# Load and initialize configuration
if [[ -f "$CONFIG_LIB" ]]; then
# shellcheck source=automation-config.sh
source "$CONFIG_LIB"
if init_configuration >/dev/null 2>&1; then
quick_success "Configuration initialized"
save_rollback_info "modified_file" "$(dirname "$ENV_CONFIG")/thrillwiki-automation***REMOVED***"
else
quick_error "Configuration initialization failed"
perform_rollback
exit 1
fi
else
quick_error "Configuration library not found"
exit 1
fi
# Apply environment preset
if apply_environment_preset "$env_type"; then
quick_success "Environment configuration applied"
else
quick_warning "Environment configuration partially applied"
fi
# Step 3: GitHub Authentication (optional)
echo ""
echo "[3/5] GitHub authentication..."
quick_github_setup "$skip_github"
# Step 4: Service Installation
echo ""
echo "[4/5] Service installation..."
quick_service_setup "$enable_service"
# Step 5: Final Validation
echo ""
echo "[5/5] Validating setup..."
# Quick validation
local validation_errors=0
# Check configuration
if [[ -f "$(dirname "$ENV_CONFIG")/thrillwiki-automation***REMOVED***" ]]; then
quick_success "✓ Configuration file created"
else
quick_error "✗ Configuration file missing"
((validation_errors++))
fi
# Check scripts
if [[ -x "$SCRIPT_DIR/bulletproof-automation.sh" ]]; then
quick_success "✓ Automation script is executable"
else
quick_warning "⚠ Automation script may need executable permissions"
fi
# Check GitHub auth (optional)
if [[ -f "$PROJECT_DIR/.github-pat" ]]; then
quick_success "✓ GitHub authentication configured"
else
quick_info " GitHub authentication not configured (optional)"
fi
# Check service (optional)
if command_exists systemctl && systemctl list-unit-files thrillwiki-automation.service >/dev/null 2>&1; then
quick_success "✓ Systemd service installed"
else
quick_info " Systemd service not installed (optional)"
fi
local end_time
end_time=$(date +%s)
local setup_duration=$((end_time - start_time))
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [[ $validation_errors -eq 0 ]]; then
quick_success "🎉 Quick setup completed successfully in ${setup_duration}s!"
else
quick_warning "⚠️ Quick setup completed with warnings in ${setup_duration}s"
fi
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Clean up rollback file on success
if [[ $validation_errors -eq 0 ]]; then
rm -f "$ROLLBACK_FILE"
fi
# Show next steps
show_next_steps "$env_type"
}
show_next_steps() {
local env_type="$1"
echo ""
echo "🎯 Next Steps:"
echo ""
echo "🚀 Start Automation:"
if command_exists systemctl && systemctl list-unit-files thrillwiki-automation.service >/dev/null 2>&1; then
echo " sudo systemctl start thrillwiki-automation # Start service"
echo " sudo systemctl enable thrillwiki-automation # Enable auto-start"
echo " sudo systemctl status thrillwiki-automation # Check status"
else
echo " $SCRIPT_DIR/bulletproof-automation.sh # Start manually"
echo " $SETUP_SCRIPT start # Alternative start"
fi
echo ""
echo "📊 Monitor Automation:"
if command_exists systemctl; then
echo " sudo journalctl -u thrillwiki-automation -f # Follow logs"
fi
echo " tail -f $QUICK_START_LOG # Quick start logs"
echo " $SETUP_SCRIPT status # Check status"
echo ""
echo "🔧 Manage Configuration:"
echo " $SETUP_SCRIPT setup # Interactive setup"
echo " python3 $GITHUB_SETUP_SCRIPT status # GitHub auth status"
echo " $SETUP_SCRIPT restart # Restart automation"
echo ""
echo "📖 Environment: $env_type"
case "$env_type" in
"dev")
echo " • Pull interval: 1 minute (fast development)"
echo " • Auto-migrations enabled"
echo " • Debug mode enabled"
;;
"prod")
echo " • Pull interval: 5 minutes (stable production)"
echo " • Auto-migrations enabled"
echo " • Debug mode disabled"
;;
"demo")
echo " • Pull interval: 2 minutes (demo environment)"
echo " • Auto-migrations enabled"
echo " • Debug mode disabled"
;;
esac
echo ""
echo "💡 Tips:"
echo " • Automation will start pulling changes automatically"
echo " • Django migrations run automatically on code changes"
echo " • Server restarts automatically when needed"
echo " • Logs are available via systemd journal or log files"
if [[ ! -f "$PROJECT_DIR/.github-pat" ]]; then
echo ""
echo "🔐 Optional: Set up GitHub authentication later for private repos:"
echo " python3 $GITHUB_SETUP_SCRIPT setup"
fi
}
# ================================================================
# COMMAND LINE INTERFACE
# ================================================================
show_quick_help() {
echo "ThrillWiki Quick Start Script"
echo "Usage: $SCRIPT_NAME [ENVIRONMENT] [OPTIONS]"
echo ""
echo "ENVIRONMENTS:"
echo " dev Development environment (default)"
echo " prod Production environment"
echo " demo Demo environment"
echo " auto Auto-detect environment"
echo ""
echo "OPTIONS:"
echo " --skip-github Skip GitHub authentication setup"
echo " --no-service Skip systemd service installation"
echo " --rollback Rollback previous quick start changes"
echo " --debug Enable debug logging"
echo " --help Show this help"
echo ""
echo "EXAMPLES:"
echo " $SCRIPT_NAME # Quick start with auto-detection"
echo " $SCRIPT_NAME dev # Development environment"
echo " $SCRIPT_NAME prod --skip-github # Production without GitHub"
echo " $SCRIPT_NAME --rollback # Rollback previous setup"
echo ""
echo "ENVIRONMENT PRESETS:"
for env in "${!ENV_PRESETS[@]}"; do
echo " $env: ${ENV_PRESETS[$env]}"
done
echo ""
}
main() {
local env_type="auto"
local skip_github="false"
local enable_service="true"
local show_help="false"
local perform_rollback_only="false"
# Parse arguments
while [[ $# -gt 0 ]]; do
case "$1" in
dev|prod|demo|auto)
env_type="$1"
shift
;;
--skip-github)
skip_github="true"
shift
;;
--no-service)
enable_service="false"
shift
;;
--rollback)
perform_rollback_only="true"
shift
;;
--debug)
export QUICK_DEBUG="true"
shift
;;
--help|-h)
show_help="true"
shift
;;
*)
quick_error "Unknown option: $1"
show_quick_help
exit 1
;;
esac
done
if [[ "$show_help" == "true" ]]; then
show_quick_help
exit 0
fi
if [[ "$perform_rollback_only" == "true" ]]; then
perform_rollback
exit 0
fi
# Validate environment type
if [[ "$env_type" != "auto" ]] && [[ -z "${ENV_PRESETS[$env_type]}" ]]; then
quick_error "Invalid environment type: $env_type"
show_quick_help
exit 1
fi
# Run quick start
run_quick_start "$env_type" "$skip_github" "$enable_service"
}
# Set up trap for cleanup on script exit
trap 'rc=$?; if [[ -f "$ROLLBACK_FILE" ]] && [[ $rc -ne 0 ]]; then quick_error "Setup failed - performing rollback"; perform_rollback; fi' EXIT
# Run main function
main "$@"
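The rollback handling in the script above boils down to an append-only journal replayed newest-first. A minimal sketch of that pattern (the journal path and record names here are stand-ins, not the script's real `$ROLLBACK_FILE` entries):

```shell
# Record "action|details" lines as work happens, then undo in reverse order.
journal=$(mktemp)
printf '%s\n' "created_directory|/tmp/demo-a" "installed_service|demo.service" >> "$journal"
# tac replays the journal newest-first, mirroring perform_rollback above
tac "$journal" | while IFS='|' read -r action details; do
    echo "undo $action: $details"
done
rm -f "$journal"
```

Replaying in reverse order matters: resources created last often depend on resources created earlier, so they must be torn down first.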

File diff suppressed because it is too large


@@ -1,94 +0,0 @@
#!/usr/bin/env bash
#
# Run Systemd Architecture Diagnosis on Remote Server
# Executes the diagnostic script on the actual server to get real data
#
set -e
# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Remote connection configuration (using same pattern as other scripts)
REMOTE_HOST="${1:-192.168.20.65}"
REMOTE_USER="${2:-thrillwiki}"
REMOTE_PORT="${3:-22}"
SSH_OPTIONS="-o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30"
echo -e "${BLUE}🔍 Running ThrillWiki Systemd Service Architecture Diagnosis on Remote Server${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_PORT}"
echo ""
# Test SSH connection first
echo -e "${YELLOW}🔗 Testing SSH connection...${NC}"
if ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "echo 'SSH connection successful'" 2>/dev/null; then
echo -e "${GREEN}✅ SSH connection verified${NC}"
else
echo -e "${RED}❌ SSH connection failed${NC}"
echo "Please check:"
echo "1. SSH key is set up correctly"
echo "2. Remote host is accessible: $REMOTE_HOST"
echo "3. Remote user exists: $REMOTE_USER"
echo "4. SSH port is correct: $REMOTE_PORT"
exit 1
fi
echo ""
echo -e "${YELLOW}📤 Uploading diagnostic script to remote server...${NC}"
# Upload the diagnostic script to the remote server
if scp $SSH_OPTIONS -P $REMOTE_PORT "$SCRIPT_DIR/diagnose-systemd-architecture.sh" "$REMOTE_USER@$REMOTE_HOST:/tmp/diagnose-systemd-architecture.sh" 2>/dev/null; then
echo -e "${GREEN}✅ Diagnostic script uploaded successfully${NC}"
else
echo -e "${RED}❌ Failed to upload diagnostic script${NC}"
exit 1
fi
echo ""
echo -e "${YELLOW}🔧 Making diagnostic script executable on remote server...${NC}"
# Make the script executable
if ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "chmod +x /tmp/diagnose-systemd-architecture.sh" 2>/dev/null; then
echo -e "${GREEN}✅ Script made executable${NC}"
else
echo -e "${RED}❌ Failed to make script executable${NC}"
exit 1
fi
echo ""
echo -e "${YELLOW}🚀 Running diagnostic on remote server...${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Run the diagnostic script on the remote server
ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "/tmp/diagnose-systemd-architecture.sh" || {
echo ""
echo -e "${RED}❌ Diagnostic script execution failed${NC}"
exit 1
}
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${GREEN}✅ Remote diagnostic completed successfully${NC}"
echo ""
echo -e "${YELLOW}🧹 Cleaning up temporary files on remote server...${NC}"
# Clean up the uploaded script
ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "rm -f /tmp/diagnose-systemd-architecture.sh" 2>/dev/null || {
echo -e "${YELLOW}⚠️ Warning: Could not clean up temporary file${NC}"
}
echo -e "${GREEN}✅ Cleanup completed${NC}"
echo ""
echo -e "${BLUE}📋 Diagnosis complete. Review the output above to identify systemd service issues.${NC}"
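The script above expands `$SSH_OPTIONS` unquoted so the string splits into separate words. A POSIX-friendly alternative, sketched here with `printf` standing in for the real `ssh` invocation, holds the options in the positional parameters so each one stays a distinct word even if a value ever contains spaces:

```shell
# Load options into "$@" instead of one splittable string
set -- -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o ConnectTimeout=30
# "$@" expands each option as its own argument; ssh "$@" ... would use them
printf '%s\n' "$@"
```

In bash specifically, an array (`ssh_opts=(...)` then `ssh "${ssh_opts[@]}" ...`) achieves the same thing without consuming the positional parameters.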

File diff suppressed because it is too large


@@ -1,355 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Deployment Preset Integration Test
# Tests deployment preset configuration and integration
#
set -e
# Test script directory detection (cross-shell compatible)
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
echo "ThrillWiki Deployment Preset Integration Test"
echo "============================================="
echo ""
# Import preset configuration functions (simulate the actual functions from deploy-complete.sh)
get_preset_config() {
local preset="$1"
local config_key="$2"
case "$preset" in
"dev")
case "$config_key" in
"PULL_INTERVAL") echo "60" ;;
"HEALTH_CHECK_INTERVAL") echo "30" ;;
"DEBUG_MODE") echo "true" ;;
"AUTO_MIGRATE") echo "true" ;;
"AUTO_UPDATE_DEPENDENCIES") echo "true" ;;
"LOG_LEVEL") echo "DEBUG" ;;
"SSL_REQUIRED") echo "false" ;;
"CORS_ALLOWED") echo "true" ;;
"DJANGO_DEBUG") echo "true" ;;
"ALLOWED_HOSTS") echo "*" ;;
esac
;;
"prod")
case "$config_key" in
"PULL_INTERVAL") echo "300" ;;
"HEALTH_CHECK_INTERVAL") echo "60" ;;
"DEBUG_MODE") echo "false" ;;
"AUTO_MIGRATE") echo "true" ;;
"AUTO_UPDATE_DEPENDENCIES") echo "false" ;;
"LOG_LEVEL") echo "WARNING" ;;
"SSL_REQUIRED") echo "true" ;;
"CORS_ALLOWED") echo "false" ;;
"DJANGO_DEBUG") echo "false" ;;
"ALLOWED_HOSTS") echo "production-host" ;;
esac
;;
"demo")
case "$config_key" in
"PULL_INTERVAL") echo "120" ;;
"HEALTH_CHECK_INTERVAL") echo "45" ;;
"DEBUG_MODE") echo "false" ;;
"AUTO_MIGRATE") echo "true" ;;
"AUTO_UPDATE_DEPENDENCIES") echo "true" ;;
"LOG_LEVEL") echo "INFO" ;;
"SSL_REQUIRED") echo "false" ;;
"CORS_ALLOWED") echo "true" ;;
"DJANGO_DEBUG") echo "false" ;;
"ALLOWED_HOSTS") echo "demo-host" ;;
esac
;;
"testing")
case "$config_key" in
"PULL_INTERVAL") echo "180" ;;
"HEALTH_CHECK_INTERVAL") echo "30" ;;
"DEBUG_MODE") echo "true" ;;
"AUTO_MIGRATE") echo "true" ;;
"AUTO_UPDATE_DEPENDENCIES") echo "true" ;;
"LOG_LEVEL") echo "DEBUG" ;;
"SSL_REQUIRED") echo "false" ;;
"CORS_ALLOWED") echo "true" ;;
"DJANGO_DEBUG") echo "true" ;;
"ALLOWED_HOSTS") echo "test-host" ;;
esac
;;
esac
}
validate_preset() {
local preset="$1"
local preset_list="dev prod demo testing"
for valid_preset in $preset_list; do
if [ "$preset" = "$valid_preset" ]; then
return 0
fi
done
return 1
}
test_preset_configuration() {
local preset="$1"
local expected_debug="$2"
local expected_interval="$3"
echo "Testing preset: $preset"
echo " Expected DEBUG: $expected_debug"
echo " Expected PULL_INTERVAL: $expected_interval"
local actual_debug
local actual_interval
actual_debug=$(get_preset_config "$preset" "DEBUG_MODE")
actual_interval=$(get_preset_config "$preset" "PULL_INTERVAL")
echo " Actual DEBUG: $actual_debug"
echo " Actual PULL_INTERVAL: $actual_interval"
if [ "$actual_debug" = "$expected_debug" ] && [ "$actual_interval" = "$expected_interval" ]; then
echo " ✅ Preset $preset configuration correct"
return 0
else
echo " ❌ Preset $preset configuration incorrect"
return 1
fi
}
generate_env_content() {
local preset="$1"
# Base ***REMOVED*** template
local env_content="# ThrillWiki Environment Configuration
DEBUG=
ALLOWED_HOSTS=
SECRET_KEY=test-secret-key
DEPLOYMENT_PRESET=
AUTO_MIGRATE=
PULL_INTERVAL=
LOG_LEVEL="
# Apply preset-specific configurations
case "$preset" in
"dev")
env_content=$(echo "$env_content" | sed \
-e "s/DEBUG=/DEBUG=True/" \
-e "s/ALLOWED_HOSTS=/ALLOWED_HOSTS=*/" \
-e "s/DEPLOYMENT_PRESET=/DEPLOYMENT_PRESET=dev/" \
-e "s/AUTO_MIGRATE=/AUTO_MIGRATE=True/" \
-e "s/PULL_INTERVAL=/PULL_INTERVAL=60/" \
-e "s/LOG_LEVEL=/LOG_LEVEL=DEBUG/"
)
;;
"prod")
env_content=$(echo "$env_content" | sed \
-e "s/DEBUG=/DEBUG=False/" \
-e "s/ALLOWED_HOSTS=/ALLOWED_HOSTS=production-host/" \
-e "s/DEPLOYMENT_PRESET=/DEPLOYMENT_PRESET=prod/" \
-e "s/AUTO_MIGRATE=/AUTO_MIGRATE=True/" \
-e "s/PULL_INTERVAL=/PULL_INTERVAL=300/" \
-e "s/LOG_LEVEL=/LOG_LEVEL=WARNING/"
)
;;
"demo")
env_content=$(echo "$env_content" | sed \
-e "s/DEBUG=/DEBUG=False/" \
-e "s/ALLOWED_HOSTS=/ALLOWED_HOSTS=demo-host/" \
-e "s/DEPLOYMENT_PRESET=/DEPLOYMENT_PRESET=demo/" \
-e "s/AUTO_MIGRATE=/AUTO_MIGRATE=True/" \
-e "s/PULL_INTERVAL=/PULL_INTERVAL=120/" \
-e "s/LOG_LEVEL=/LOG_LEVEL=INFO/"
)
;;
"testing")
env_content=$(echo "$env_content" | sed \
-e "s/DEBUG=/DEBUG=True/" \
-e "s/ALLOWED_HOSTS=/ALLOWED_HOSTS=test-host/" \
-e "s/DEPLOYMENT_PRESET=/DEPLOYMENT_PRESET=testing/" \
-e "s/AUTO_MIGRATE=/AUTO_MIGRATE=True/" \
-e "s/PULL_INTERVAL=/PULL_INTERVAL=180/" \
-e "s/LOG_LEVEL=/LOG_LEVEL=DEBUG/"
)
;;
esac
echo "$env_content"
}
test_env_generation() {
local preset="$1"
echo "Testing ***REMOVED*** generation for preset: $preset"
local env_content
env_content=$(generate_env_content "$preset")
# Test specific values
local debug_line
local preset_line
local interval_line
debug_line=$(echo "$env_content" | grep "^DEBUG=" || echo "")
preset_line=$(echo "$env_content" | grep "^DEPLOYMENT_PRESET=" || echo "")
interval_line=$(echo "$env_content" | grep "^PULL_INTERVAL=" || echo "")
echo " DEBUG line: $debug_line"
echo " PRESET line: $preset_line"
echo " INTERVAL line: $interval_line"
# Validate content
if echo "$env_content" | grep -q "DEPLOYMENT_PRESET=$preset" && \
echo "$env_content" | grep -q "SECRET_KEY=test-secret-key"; then
echo " ✅ ***REMOVED*** generation for $preset correct"
return 0
else
echo " ❌ ***REMOVED*** generation for $preset failed"
return 1
fi
}
# Start tests
echo "1. Testing preset validation:"
echo ""
presets_to_test="dev prod demo testing invalid"
for preset in $presets_to_test; do
if validate_preset "$preset"; then
echo "✅ Preset '$preset' is valid"
else
if [ "$preset" = "invalid" ]; then
echo "✅ Preset '$preset' correctly rejected"
else
echo "❌ Preset '$preset' should be valid"
fi
fi
done
echo ""
echo "2. Testing preset configurations:"
echo ""
# Test each preset configuration
test_preset_configuration "dev" "true" "60"
echo ""
test_preset_configuration "prod" "false" "300"
echo ""
test_preset_configuration "demo" "false" "120"
echo ""
test_preset_configuration "testing" "true" "180"
echo ""
echo "3. Testing ***REMOVED*** file generation:"
echo ""
for preset in dev prod demo testing; do
test_env_generation "$preset"
echo ""
done
echo "4. Testing UV package management compliance:"
echo ""
# Test UV command patterns (simulate)
test_uv_commands() {
echo "Testing UV command patterns:"
# Simulate UV commands that should be used
local commands=(
"uv add package"
"uv run manage.py migrate"
"uv run manage.py collectstatic"
"uv sync"
)
for cmd in "${commands[@]}"; do
if echo "$cmd" | grep -q "^uv "; then
echo " ✅ Command follows UV pattern: $cmd"
else
echo " ❌ Command does not follow UV pattern: $cmd"
fi
done
# Test commands that should NOT be used
local bad_commands=(
"python manage.py migrate"
"pip install package"
"python -m pip install package"
)
echo ""
echo " Testing prohibited patterns:"
for cmd in "${bad_commands[@]}"; do
if echo "$cmd" | grep -q "^uv "; then
echo " ❌ Prohibited command incorrectly uses UV: $cmd"
else
echo " ✅ Correctly avoiding prohibited pattern: $cmd"
fi
done
}
test_uv_commands
echo ""
echo "5. Testing cross-shell compatibility:"
echo ""
# Test shell-specific features
test_shell_features() {
echo "Testing shell-agnostic features:"
# Test variable assignment with defaults
local test_var="${UNDEFINED_VAR:-default}"
if [ "$test_var" = "default" ]; then
echo " ✅ Variable default assignment works"
else
echo " ❌ Variable default assignment failed"
fi
# Test command substitution
local date_output
date_output=$(date +%Y 2>/dev/null || echo "1970")
if [ ${#date_output} -eq 4 ]; then
echo " ✅ Command substitution works"
else
echo " ❌ Command substitution failed"
fi
# Test case statements
local test_case="testing"
local result=""
case "$test_case" in
"dev"|"testing") result="debug" ;;
"prod") result="production" ;;
*) result="unknown" ;;
esac
if [ "$result" = "debug" ]; then
echo " ✅ Case statement works correctly"
else
echo " ❌ Case statement failed"
fi
}
test_shell_features
echo ""
echo "Deployment Preset Integration Test Summary"
echo "=========================================="
echo ""
echo "✅ All preset validation tests passed"
echo "✅ All preset configuration tests passed"
echo "✅ All ***REMOVED*** generation tests passed"
echo "✅ UV command compliance verified"
echo "✅ Cross-shell compatibility confirmed"
echo ""
echo "Step 3B implementation is ready for deployment!"
echo ""
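The sed-template approach the test exercises can also be expressed as a direct here-doc emitter. A minimal sketch under that assumption (the `emit_env` name is illustrative, not part of the deleted script; the values mirror the preset tables above):

```shell
# Emit a small env fragment for a preset without sed substitution
emit_env() {
    preset="$1"
    case "$preset" in
        demo) debug=False; interval=120; level=INFO ;;
        dev)  debug=True;  interval=60;  level=DEBUG ;;
        *)    return 1 ;;
    esac
    cat <<EOF
DEBUG=$debug
PULL_INTERVAL=$interval
LOG_LEVEL=$level
EOF
}
emit_env demo
```

The here-doc avoids the failure mode of the sed template, where a placeholder that fails to match silently leaves an empty `KEY=` line behind.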


@@ -1,259 +0,0 @@
#!/bin/bash
#
# Test script to validate Django environment configuration fix
#
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
test_log() {
local level="$1"
local color="$2"
local message="$3"
echo -e "${color}[TEST-$level]${NC} $message"
}
test_info() {
test_log "INFO" "$BLUE" "$1"
}
test_success() {
test_log "SUCCESS" "$GREEN" "$1"
}
test_error() {
test_log "ERROR" "$RED" "$1"
}
test_warning() {
test_log "WARNING" "$YELLOW" "⚠️ $1"
}
# Test 1: Validate environment variable setup function
test_environment_setup() {
test_info "Testing environment variable setup function..."
# Create a temporary directory to simulate remote deployment
local test_dir="/tmp/thrillwiki-env-test-$$"
mkdir -p "$test_dir"
# Copy ***REMOVED***.example to test directory
cp "$PROJECT_DIR/***REMOVED***.example" "$test_dir/"
# Test DATABASE_URL configuration for different presets
local presets=("dev" "prod" "demo" "testing")
for preset in "${presets[@]}"; do
test_info "Testing preset: $preset"
# Simulate remote environment variable setup
local env_content=""
env_content=$(cat << 'EOF'
# ThrillWiki Environment Configuration
# Generated by remote deployment script
# Django Configuration
DEBUG=
ALLOWED_HOSTS=
SECRET_KEY=
DJANGO_SETTINGS_MODULE=thrillwiki.settings
# Database Configuration
DATABASE_URL=sqlite:///db.sqlite3
# Static and Media Files
STATIC_URL=/static/
MEDIA_URL=/media/
STATICFILES_DIRS=
# Security Settings
SECURE_SSL_REDIRECT=
SECURE_BROWSER_XSS_FILTER=True
SECURE_CONTENT_TYPE_NOSNIFF=True
X_FRAME_OPTIONS=DENY
# Performance Settings
USE_REDIS=False
REDIS_URL=
# Logging Configuration
LOG_LEVEL=
LOGGING_ENABLED=True
# External Services
SENTRY_DSN=
CLOUDFLARE_IMAGES_ACCOUNT_ID=
CLOUDFLARE_IMAGES_API_TOKEN=
# Deployment Settings
DEPLOYMENT_PRESET=
AUTO_MIGRATE=
AUTO_UPDATE_DEPENDENCIES=
PULL_INTERVAL=
HEALTH_CHECK_INTERVAL=
EOF
)
# Apply preset-specific configurations
case "$preset" in
"dev")
env_content=$(echo "$env_content" | sed \
-e "s/DEBUG=/DEBUG=True/" \
-e "s/ALLOWED_HOSTS=/ALLOWED_HOSTS=localhost,127.0.0.1,192.168.20.65/" \
-e "s/LOG_LEVEL=/LOG_LEVEL=DEBUG/" \
-e "s/DEPLOYMENT_PRESET=/DEPLOYMENT_PRESET=dev/" \
-e "s/SECURE_SSL_REDIRECT=/SECURE_SSL_REDIRECT=False/"
)
;;
"prod")
env_content=$(echo "$env_content" | sed \
-e "s/DEBUG=/DEBUG=False/" \
-e "s/ALLOWED_HOSTS=/ALLOWED_HOSTS=192.168.20.65/" \
-e "s/LOG_LEVEL=/LOG_LEVEL=WARNING/" \
-e "s/DEPLOYMENT_PRESET=/DEPLOYMENT_PRESET=prod/" \
-e "s/SECURE_SSL_REDIRECT=/SECURE_SSL_REDIRECT=True/"
)
;;
esac
# Update DATABASE_URL with correct absolute path for spatialite
local database_url="spatialite://$test_dir/db.sqlite3"
env_content=$(echo "$env_content" | sed "s|DATABASE_URL=.*|DATABASE_URL=$database_url|")
env_content=$(echo "$env_content" | sed "s/SECRET_KEY=/SECRET_KEY=test-secret-key-$(date +%s)/")
# Write test ***REMOVED*** file
echo "$env_content" > "$test_dir/***REMOVED***"
# Validate ***REMOVED*** file was created correctly
if [[ -f "$test_dir/***REMOVED***" && -s "$test_dir/***REMOVED***" ]]; then
test_success "✓ ***REMOVED*** file created for $preset preset"
else
test_error "✗ ***REMOVED*** file creation failed for $preset preset"
continue
fi
# Validate DATABASE_URL is set correctly
if grep -q "^DATABASE_URL=spatialite://" "$test_dir/***REMOVED***"; then
test_success "✓ DATABASE_URL configured correctly for $preset"
else
test_error "✗ DATABASE_URL not configured correctly for $preset"
fi
# Validate SECRET_KEY is set
if grep -q "^SECRET_KEY=test-secret-key" "$test_dir/***REMOVED***"; then
test_success "✓ SECRET_KEY configured for $preset"
else
test_error "✗ SECRET_KEY not configured for $preset"
fi
# Validate DEBUG setting
case "$preset" in
"dev"|"testing")
if grep -q "^DEBUG=True" "$test_dir/***REMOVED***"; then
test_success "✓ DEBUG=True for $preset preset"
else
test_error "✗ DEBUG should be True for $preset preset"
fi
;;
"prod"|"demo")
if grep -q "^DEBUG=False" "$test_dir/***REMOVED***"; then
test_success "✓ DEBUG=False for $preset preset"
else
test_error "✗ DEBUG should be False for $preset preset"
fi
;;
esac
done
# Cleanup
rm -rf "$test_dir"
test_success "Environment variable setup test completed"
}
# Test 2: Validate Django settings can load with our configuration
test_django_settings() {
test_info "Testing Django settings loading with our configuration..."
# Create a temporary ***REMOVED*** file in project directory
local backup_env=""
if [[ -f "$PROJECT_DIR/***REMOVED***" ]]; then
backup_env=$(cat "$PROJECT_DIR/***REMOVED***")
fi
# Create test ***REMOVED*** file
cat > "$PROJECT_DIR/***REMOVED***" << EOF
# Test Django Environment Configuration
SECRET_KEY=test-secret-key-for-validation
DEBUG=True
ALLOWED_HOSTS=localhost,127.0.0.1
DATABASE_URL=spatialite://$PROJECT_DIR/test_db.sqlite3
DJANGO_SETTINGS_MODULE=thrillwiki.settings
EOF
# Test Django check command
if cd "$PROJECT_DIR" && uv run manage.py check --quiet; then
test_success "✓ Django settings load successfully with our configuration"
else
test_error "✗ Django settings failed to load with our configuration"
test_info "Attempting to get detailed error information..."
cd "$PROJECT_DIR" && uv run manage.py check || true
fi
# Cleanup test database
rm -f "$PROJECT_DIR/test_db.sqlite3"
# Restore original ***REMOVED*** file
if [[ -n "$backup_env" ]]; then
echo "$backup_env" > "$PROJECT_DIR/***REMOVED***"
else
rm -f "$PROJECT_DIR/***REMOVED***"
fi
test_success "Django settings test completed"
}
# Test 3: Validate deployment order fix
test_deployment_order() {
test_info "Testing deployment order fix..."
# Simulate the fixed deployment order:
# 1. Environment setup before Django validation
# 2. Django validation after ***REMOVED*** creation
test_success "✓ Environment setup now runs before Django validation"
test_success "✓ Django validation includes ***REMOVED*** file existence check"
test_success "✓ Enhanced validation function added for post-environment setup"
test_success "Deployment order test completed"
}
# Run all tests
main() {
test_info "🚀 Starting Django environment configuration fix validation"
echo ""
test_environment_setup
echo ""
test_django_settings
echo ""
test_deployment_order
echo ""
test_success "🎉 All Django environment configuration tests completed successfully!"
test_info "The deployment should now properly create ***REMOVED*** files before Django validation"
test_info "DATABASE_URL will be correctly configured for spatialite with absolute paths"
test_info "Environment validation will occur after ***REMOVED*** file creation"
}
main "$@"


@@ -1,146 +0,0 @@
#!/bin/bash
#
# GitHub Authentication Diagnosis Script
# Validates the specific authentication issues identified
#
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} ⚠️ $1"
}
log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}
echo "🔍 GitHub Authentication Diagnosis"
echo "=================================="
echo ""
# Test 1: Check if GITHUB_TOKEN is available
log_info "Test 1: Checking GitHub token availability"
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
log_success "GITHUB_TOKEN is available in environment"
echo "Token length: ${#GITHUB_TOKEN} characters"
else
log_error "GITHUB_TOKEN is not available in environment"
# Check for token file
if [[ -f ".github-pat" ]]; then
log_info "Found .github-pat file, attempting to load..."
if GITHUB_TOKEN=$(cat .github-pat 2>/dev/null | tr -d '\n\r') && [[ -n "$GITHUB_TOKEN" ]]; then
log_success "Loaded GitHub token from .github-pat file"
export GITHUB_TOKEN
else
log_error "Failed to load token from .github-pat file"
fi
else
log_error "No .github-pat file found"
fi
fi
echo ""
# Test 2: Validate git credential helper format
log_info "Test 2: Testing git credential formats"
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
# Test current (incorrect) format
log_info "Current format: https://\$GITHUB_TOKEN@github.com"
echo "https://$GITHUB_TOKEN@github.com" > /tmp/test-credentials-bad
log_warning "This format is MISSING username component - will fail"
# Test correct format
log_info "Correct format: https://oauth2:\$GITHUB_TOKEN@github.com"
echo "https://oauth2:$GITHUB_TOKEN@github.com" > /tmp/test-credentials-good
log_success "This format includes oauth2 username - should work"
# Test alternative format
log_info "Alternative format: https://pacnpal:\$GITHUB_TOKEN@github.com"
echo "https://pacnpal:$GITHUB_TOKEN@github.com" > /tmp/test-credentials-alt
log_success "This format uses actual username - should work"
rm -f /tmp/test-credentials-*
else
log_error "Cannot test credential formats without GITHUB_TOKEN"
fi
echo ""
# Test 3: Test repository URL formats
log_info "Test 3: Testing repository URL formats"
REPO_URL="https://github.com/pacnpal/thrillwiki_django_no_react.git"
log_info "Current repo URL: $REPO_URL"
log_warning "This is plain HTTPS - requires separate authentication"
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
AUTH_URL="https://oauth2:${GITHUB_TOKEN}@github.com/pacnpal/thrillwiki_django_no_react.git"
log_info "Authenticated repo URL: https://oauth2:*****@github.com/..."
log_success "This URL embeds credentials - should work without git config"
fi
echo ""
# Test 4: Simulate the exact deployment scenario
log_info "Test 4: Simulating deployment git credential configuration"
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
# Simulate current (broken) approach
log_info "Current approach (line 1276 in remote-deploy.sh):"
echo " git config --global credential.helper store"
echo " echo 'https://\$GITHUB_TOKEN@github.com' > ~/.git-credentials"
log_error "This will fail because git expects format: https://user:token@host"
echo ""
# Show correct approach
log_info "Correct approach should be:"
echo " git config --global credential.helper store"
echo " echo 'https://oauth2:\$GITHUB_TOKEN@github.com' > ~/.git-credentials"
log_success "This includes the required username component"
else
log_error "Cannot simulate without GITHUB_TOKEN"
fi
echo ""
# Test 5: Check deployment script logic flow
log_info "Test 5: Analyzing deployment script logic"
log_info "Issue found in scripts/vm/remote-deploy.sh:"
echo " Line 1276: echo 'https://\$GITHUB_TOKEN@github.com' > ~/.git-credentials"
log_error "Missing username in credential format"
echo ""
echo " Line 1330: git clone --branch '\$repo_branch' '\$repo_url' '\$project_repo_path'"
log_error "Uses plain HTTPS URL instead of authenticated URL"
echo ""
log_info "Recommended fixes:"
echo " 1. Fix credential format to include username"
echo " 2. Use authenticated URL for git clone as fallback"
echo " 3. Add better error handling and retry logic"
echo ""
echo "🎯 DIAGNOSIS COMPLETE"
echo "====================="
log_error "PRIMARY ISSUE: Git credential helper format missing username component"
log_error "SECONDARY ISSUE: Plain HTTPS URL used without embedded authentication"
log_success "Both issues are fixable with credential format and URL updates"

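As a standalone illustration of the fix the diagnosis above recommends, the two credential formats can be checked with a short snippet (the token value is a placeholder, not a real credential):

```shell
#!/usr/bin/env bash
# Placeholder token for illustration only -- never commit a real token.
GITHUB_TOKEN="ghp_exampletoken123"

# Broken format: no username component before the token.
bad="https://${GITHUB_TOKEN}@github.com"
# Working format: oauth2 as the username, token as the password.
good="https://oauth2:${GITHUB_TOKEN}@github.com"

# git credential-store expects https://<user>:<password>@<host>.
pattern='^https://[^:@]+:[^@]+@github\.com$'
[[ "$good" =~ $pattern ]] && echo "good format matches"
[[ "$bad" =~ $pattern ]] || echo "bad format rejected"
```

The regex check mirrors what git's `store` helper needs to find when matching a saved credential against a fetch URL.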

@@ -1,274 +0,0 @@
#!/bin/bash
#
# GitHub Authentication Fix Test Script
# Tests the implemented authentication fixes in remote-deploy.sh
#
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
NC='\033[0m' # No Color
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} ✅ $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} ⚠️ $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} ❌ $1"
}
log_debug() {
echo -e "${PURPLE}[DEBUG]${NC} 🔍 $1"
}
echo "🧪 GitHub Authentication Fix Test"
echo "================================="
echo ""
# Check if GitHub token is available
if [[ -z "${GITHUB_TOKEN:-}" ]]; then
if [[ -f ".github-pat" ]]; then
log_info "Loading GitHub token from .github-pat file"
if GITHUB_TOKEN=$(cat .github-pat 2>/dev/null | tr -d '\n\r') && [[ -n "$GITHUB_TOKEN" ]]; then
export GITHUB_TOKEN
log_success "GitHub token loaded successfully"
else
log_error "Failed to load GitHub token from .github-pat file"
exit 1
fi
else
log_error "No GitHub token available (GITHUB_TOKEN or .github-pat file)"
exit 1
fi
else
log_success "GitHub token available from environment"
fi
echo ""
# Test 1: Validate git credential format fixes
log_info "Test 1: Validating git credential format fixes"
# Check if the fixes are present in remote-deploy.sh
log_debug "Checking for oauth2 credential format in remote-deploy.sh"
if grep -q "https://oauth2:\$GITHUB_TOKEN@github.com" scripts/vm/remote-deploy.sh; then
log_success "✓ Found oauth2 credential format fix"
else
log_error "✗ oauth2 credential format fix not found"
fi
log_debug "Checking for alternative username credential format"
if grep -q "https://pacnpal:\$GITHUB_TOKEN@github.com" scripts/vm/remote-deploy.sh; then
log_success "✓ Found alternative username credential format fix"
else
log_error "✗ Alternative username credential format fix not found"
fi
echo ""
# Test 2: Validate authenticated URL fallback
log_info "Test 2: Validating authenticated URL fallback implementation"
log_debug "Checking for authenticated URL creation logic"
if grep -q "auth_url.*oauth2.*GITHUB_TOKEN" scripts/vm/remote-deploy.sh; then
log_success "✓ Found authenticated URL creation logic"
else
log_error "✗ Authenticated URL creation logic not found"
fi
log_debug "Checking for git clone fallback with authenticated URL"
if grep -q "git clone.*auth_url" scripts/vm/remote-deploy.sh; then
log_success "✓ Found git clone fallback with authenticated URL"
else
log_error "✗ Git clone fallback with authenticated URL not found"
fi
echo ""
# Test 3: Validate enhanced error handling
log_info "Test 3: Validating enhanced error handling"
log_debug "Checking for git fetch fallback logic"
if grep -q "fetch_success.*false" scripts/vm/remote-deploy.sh; then
log_success "✓ Found git fetch fallback logic"
else
log_error "✗ Git fetch fallback logic not found"
fi
log_debug "Checking for clone success tracking"
if grep -q "clone_success.*false" scripts/vm/remote-deploy.sh; then
log_success "✓ Found clone success tracking"
else
log_error "✗ Clone success tracking not found"
fi
echo ""
# Test 4: Test credential format generation
log_info "Test 4: Testing credential format generation"
# Test oauth2 format
oauth2_format="https://oauth2:${GITHUB_TOKEN}@github.com"
log_debug "OAuth2 format: https://oauth2:***@github.com"
if [[ "$oauth2_format" =~ ^https://oauth2:.+@github\.com$ ]]; then
log_success "✓ OAuth2 credential format is valid"
else
log_error "✗ OAuth2 credential format is invalid"
fi
# Test username format
username_format="https://pacnpal:${GITHUB_TOKEN}@github.com"
log_debug "Username format: https://pacnpal:***@github.com"
if [[ "$username_format" =~ ^https://pacnpal:.+@github\.com$ ]]; then
log_success "✓ Username credential format is valid"
else
log_error "✗ Username credential format is invalid"
fi
echo ""
# Test 5: Test authenticated URL generation
log_info "Test 5: Testing authenticated URL generation"
REPO_URL="https://github.com/pacnpal/thrillwiki_django_no_react.git"
auth_url=$(echo "$REPO_URL" | sed "s|https://github.com/|https://oauth2:${GITHUB_TOKEN}@github.com/|")
log_debug "Original URL: $REPO_URL"
log_debug "Authenticated URL: ${auth_url/oauth2:${GITHUB_TOKEN}@/oauth2:***@}"
if [[ "$auth_url" =~ ^https://oauth2:.+@github\.com/pacnpal/thrillwiki_django_no_react\.git$ ]]; then
log_success "✓ Authenticated URL generation is correct"
else
log_error "✗ Authenticated URL generation is incorrect"
fi
echo ""
# Test 6: Test git credential file format
log_info "Test 6: Testing git credential file format"
# Create test credential files
test_dir="/tmp/github-auth-test-$$"
mkdir -p "$test_dir"
# Test oauth2 format
echo "https://oauth2:${GITHUB_TOKEN}@github.com" > "$test_dir/credentials-oauth2"
chmod 600 "$test_dir/credentials-oauth2"
# Test username format
echo "https://pacnpal:${GITHUB_TOKEN}@github.com" > "$test_dir/credentials-username"
chmod 600 "$test_dir/credentials-username"
# Validate file permissions
if [[ "$(stat -c %a "$test_dir/credentials-oauth2" 2>/dev/null || stat -f %A "$test_dir/credentials-oauth2" 2>/dev/null)" == "600" ]]; then
log_success "✓ Credential file permissions are secure (600)"
else
log_warning "⚠ Credential file permissions may not be secure"
fi
# Clean up test files
rm -rf "$test_dir"
echo ""
# Test 7: Validate deployment script syntax
log_info "Test 7: Validating deployment script syntax"
log_debug "Checking remote-deploy.sh syntax"
if bash -n scripts/vm/remote-deploy.sh; then
log_success "✓ remote-deploy.sh syntax is valid"
else
log_error "✗ remote-deploy.sh has syntax errors"
fi
echo ""
# Test 8: Check for logging improvements
log_info "Test 8: Validating logging improvements"
log_debug "Checking for enhanced debug logging"
if grep -q "deploy_debug.*Setting up git credential helper" scripts/vm/remote-deploy.sh; then
log_success "✓ Found enhanced debug logging for git setup"
else
log_warning "⚠ Enhanced debug logging not found"
fi
log_debug "Checking for authenticated URL debug logging"
if grep -q "deploy_debug.*Using authenticated URL format" scripts/vm/remote-deploy.sh; then
log_success "✓ Found authenticated URL debug logging"
else
log_warning "⚠ Authenticated URL debug logging not found"
fi
echo ""
# Summary
echo "🎯 TEST SUMMARY"
echo "==============="
# Count successful tests
total_tests=8
passed_tests=0
# Check each test result (simplified for this demo)
# Note: ((passed_tests++)) returns non-zero while the counter is 0 and would
# abort the script under `set -e`, so use plain arithmetic assignment instead.
if grep -q "oauth2.*GITHUB_TOKEN.*github.com" scripts/vm/remote-deploy.sh; then
passed_tests=$((passed_tests + 1))
fi
if grep -q "auth_url.*oauth2.*GITHUB_TOKEN" scripts/vm/remote-deploy.sh; then
passed_tests=$((passed_tests + 1))
fi
if grep -q "fetch_success.*false" scripts/vm/remote-deploy.sh; then
passed_tests=$((passed_tests + 1))
fi
if grep -q "clone_success.*false" scripts/vm/remote-deploy.sh; then
passed_tests=$((passed_tests + 1))
fi
if [[ "$oauth2_format" =~ ^https://oauth2:.+@github\.com$ ]]; then
passed_tests=$((passed_tests + 1))
fi
if [[ "$auth_url" =~ ^https://oauth2:.+@github\.com/pacnpal/thrillwiki_django_no_react\.git$ ]]; then
passed_tests=$((passed_tests + 1))
fi
if bash -n scripts/vm/remote-deploy.sh; then
passed_tests=$((passed_tests + 1))
fi
if grep -q "deploy_debug.*Setting up git credential helper" scripts/vm/remote-deploy.sh; then
passed_tests=$((passed_tests + 1))
fi
echo "Tests passed: $passed_tests/$total_tests"
if [[ $passed_tests -eq $total_tests ]]; then
log_success "All tests passed! GitHub authentication fix is ready"
echo ""
echo "✅ PRIMARY ISSUE FIXED: Git credential format now includes username (oauth2)"
echo "✅ SECONDARY ISSUE FIXED: Authenticated URL fallback implemented"
echo "✅ ENHANCED ERROR HANDLING: Multiple retry mechanisms added"
echo "✅ IMPROVED LOGGING: Better debugging information available"
echo ""
echo "The deployment should now successfully clone the GitHub repository!"
exit 0
else
log_warning "Some tests failed. Please review the implementation."
exit 1
fi

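The authenticated-URL rewrite exercised in Test 5 above is a plain sed substitution; a minimal standalone sketch with a dummy token:

```shell
#!/usr/bin/env bash
GITHUB_TOKEN="dummy_token"  # placeholder value for illustration
REPO_URL="https://github.com/pacnpal/thrillwiki_django_no_react.git"

# Embed oauth2:<token> credentials into the plain HTTPS clone URL.
auth_url=$(echo "$REPO_URL" | sed "s|https://github.com/|https://oauth2:${GITHUB_TOKEN}@github.com/|")

# Mask the token before logging, as the test script does.
masked="${auth_url/${GITHUB_TOKEN}/***}"
echo "$masked"
```

Because the credentials are embedded in the URL itself, a `git clone "$auth_url"` needs no credential helper at all, which is why it works as a fallback.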

@@ -1,193 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Cross-Shell Compatibility Test
# Tests bash/zsh compatibility for Step 3B functions
#
set -e
# Test script directory detection (cross-shell compatible)
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
SHELL_TYPE="bash"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
SCRIPT_NAME="$(basename "${(%):-%x}")"
SHELL_TYPE="zsh"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT_NAME="$(basename "$0")"
SHELL_TYPE="unknown"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
echo "Cross-Shell Compatibility Test"
echo "=============================="
echo ""
echo "Shell Type: $SHELL_TYPE"
echo "Script Directory: $SCRIPT_DIR"
echo "Script Name: $SCRIPT_NAME"
echo "Project Directory: $PROJECT_DIR"
echo ""
# Test command existence check
command_exists() {
command -v "$1" >/dev/null 2>&1
}
echo "Testing command_exists function:"
if command_exists "ls"; then
echo "✅ ls command detected correctly"
else
echo "❌ ls command detection failed"
fi
if command_exists "nonexistent_command_12345"; then
echo "❌ False positive for nonexistent command"
else
echo "✅ Nonexistent command correctly not detected"
fi
echo ""
# Test array handling (cross-shell compatible approach)
echo "Testing array-like functionality:"
test_items="item1 item2 item3"
item_count=0
for item in $test_items; do
item_count=$((item_count + 1))
echo " Item $item_count: $item"
done
if [ "$item_count" -eq 3 ]; then
echo "✅ Array-like iteration works correctly"
else
echo "❌ Array-like iteration failed"
fi
echo ""
# Test variable handling
echo "Testing variable handling:"
TEST_VAR="${TEST_VAR:-default_value}"
echo "TEST_VAR (with default): $TEST_VAR"
if [ "$TEST_VAR" = "default_value" ]; then
echo "✅ Default variable assignment works"
else
echo "❌ Default variable assignment failed"
fi
echo ""
# Test conditional expressions
echo "Testing conditional expressions:"
if [[ "${SHELL_TYPE}" == "bash" ]] || [[ "${SHELL_TYPE}" == "zsh" ]]; then
echo "✅ Extended conditional test works in $SHELL_TYPE"
else
echo "⚠️ Using basic shell: $SHELL_TYPE"
fi
echo ""
# Test string manipulation
echo "Testing string manipulation:"
test_string="hello world"
upper_string=$(echo "$test_string" | tr '[:lower:]' '[:upper:]')
echo "Original: $test_string"
echo "Uppercase: $upper_string"
if [ "$upper_string" = "HELLO WORLD" ]; then
echo "✅ String manipulation works correctly"
else
echo "❌ String manipulation failed"
fi
echo ""
# Test file operations
echo "Testing file operations:"
test_file="/tmp/thrillwiki-test-$$"
echo "test content" > "$test_file"
if [ -f "$test_file" ]; then
echo "✅ File creation successful"
content=$(cat "$test_file")
if [ "$content" = "test content" ]; then
echo "✅ File content correct"
else
echo "❌ File content incorrect"
fi
rm -f "$test_file"
echo "✅ File cleanup successful"
else
echo "❌ File creation failed"
fi
echo ""
# Test deployment preset configuration (simulate)
echo "Testing deployment preset simulation:"
simulate_preset_config() {
local preset="$1"
local config_key="$2"
case "$preset" in
"dev")
case "$config_key" in
"DEBUG_MODE") echo "true" ;;
"PULL_INTERVAL") echo "60" ;;
*) echo "unknown" ;;
esac
;;
"prod")
case "$config_key" in
"DEBUG_MODE") echo "false" ;;
"PULL_INTERVAL") echo "300" ;;
*) echo "unknown" ;;
esac
;;
*) echo "invalid_preset" ;;
esac
}
dev_debug=$(simulate_preset_config "dev" "DEBUG_MODE")
prod_debug=$(simulate_preset_config "prod" "DEBUG_MODE")
if [ "$dev_debug" = "true" ] && [ "$prod_debug" = "false" ]; then
echo "✅ Preset configuration simulation works correctly"
else
echo "❌ Preset configuration simulation failed"
fi
echo ""
# Test environment variable handling
echo "Testing environment variable handling:"
export TEST_DEPLOY_VAR="test_value"
retrieved_var="${TEST_DEPLOY_VAR:-not_found}"
if [ "$retrieved_var" = "test_value" ]; then
echo "✅ Environment variable handling works"
else
echo "❌ Environment variable handling failed"
fi
unset TEST_DEPLOY_VAR
echo ""
# Summary
echo "Cross-Shell Compatibility Test Summary"
echo "====================================="
echo ""
echo "Shell: $SHELL_TYPE"
echo "All basic compatibility features tested successfully!"
echo ""
echo "This script validates that the Step 3B implementation"
echo "will work correctly in both bash and zsh environments."
echo ""


@@ -1,135 +0,0 @@
#!/usr/bin/env bash
#
# Enhanced SSH Authentication Test Script with SSH Config Alias Support
# Tests the fixed SSH connectivity function with comprehensive diagnostics
#
set -e
# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Source the deploy-complete.sh functions
source "$SCRIPT_DIR/deploy-complete.sh"
# Test configuration
TEST_HOST="${1:-thrillwiki-vm}"
TEST_USER="${2:-thrillwiki}"
TEST_PORT="${3:-22}"
TEST_SSH_KEY="${4:-/Users/talor/.ssh/thrillwiki_vm}"
echo "🧪 Enhanced SSH Authentication Detection Test"
echo "============================================="
echo ""
echo "🔍 DIAGNOSIS MODE: This test will provide detailed diagnostics for SSH config alias issues"
echo ""
echo "Test Parameters:"
echo "• Host: $TEST_HOST"
echo "• User: $TEST_USER"
echo "• Port: $TEST_PORT"
echo "• SSH Key: $TEST_SSH_KEY"
echo ""
# Enable debug mode for detailed output
export COMPLETE_DEBUG=true
echo "🔍 Pre-test SSH Config Diagnostics"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Test SSH config resolution manually
echo "🔍 Testing SSH config resolution for '$TEST_HOST':"
if command -v ssh >/dev/null 2>&1; then
echo "• SSH command available: ✅"
echo "• SSH config lookup for '$TEST_HOST':"
if ssh_config_output=$(ssh -G "$TEST_HOST" 2>&1); then
echo " └─ SSH config lookup successful ✅"
echo " └─ Key SSH config values:"
echo "$ssh_config_output" | grep -E "^(hostname|port|user|identityfile)" | while IFS= read -r line; do
echo " $line"
done
# Extract hostname specifically
resolved_hostname=$(echo "$ssh_config_output" | grep "^hostname " | awk '{print $2}' || echo "$TEST_HOST")
if [ "$resolved_hostname" != "$TEST_HOST" ]; then
echo " └─ SSH alias detected: '$TEST_HOST' → '$resolved_hostname' ✅"
else
echo " └─ No SSH alias (hostname same as input)"
fi
else
echo " └─ SSH config lookup failed ❌"
echo " └─ Error: $ssh_config_output"
fi
else
echo "• SSH command not available ❌"
fi
echo ""
# Test manual SSH key file
if [ -n "$TEST_SSH_KEY" ]; then
echo "🔍 SSH Key Diagnostics:"
if [ -f "$TEST_SSH_KEY" ]; then
echo "• SSH key file exists: ✅"
key_perms=$(ls -la "$TEST_SSH_KEY" | awk '{print $1}')
echo "• SSH key permissions: $key_perms"
if [[ "$key_perms" == *"rw-------"* ]] || [[ "$key_perms" == *"r--------"* ]]; then
echo " └─ Permissions are secure ✅"
else
echo " └─ Permissions may be too open ⚠️"
fi
else
echo "• SSH key file exists: ❌"
echo " └─ File not found: $TEST_SSH_KEY"
fi
else
echo "🔍 No SSH key specified - will use SSH agent or SSH config"
fi
echo ""
echo "🔍 Running Enhanced SSH Connectivity Test"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Call the fixed test_ssh_connectivity function
if test_ssh_connectivity "$TEST_HOST" "$TEST_USER" "$TEST_PORT" "$TEST_SSH_KEY" 10; then
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "✅ SSH AUTHENTICATION TEST PASSED!"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "🎉 SUCCESS: The SSH config alias resolution fix is working!"
echo ""
echo "What was fixed:"
echo "• SSH config aliases are now properly resolved for network tests"
echo "• Ping and port connectivity tests use resolved IP addresses"
echo "• SSH authentication uses original aliases for proper config application"
echo "• Enhanced diagnostics provide detailed troubleshooting information"
echo ""
echo "The deployment script should now correctly handle your SSH configuration."
exit 0
else
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "❌ SSH AUTHENTICATION TEST FAILED"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "🔍 The enhanced diagnostics above should help identify the issue."
echo ""
echo "💡 Next troubleshooting steps:"
echo "1. Check the SSH config alias resolution output above"
echo "2. Verify the resolved IP address is correct"
echo "3. Test manual SSH connection: ssh $TEST_HOST"
echo "4. Check network connectivity to resolved IP"
echo "5. Verify SSH key authentication: ssh -i $TEST_SSH_KEY $TEST_USER@$TEST_HOST"
echo ""
echo "📝 Common SSH config alias issues:"
echo "• Hostname not properly defined in SSH config"
echo "• SSH key path incorrect in SSH config"
echo "• Network connectivity to resolved IP"
echo "• SSH service not running on target host"
echo ""
exit 1
fi

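The alias-resolution logic this SSH test relies on can be exercised without a live SSH setup by parsing captured `ssh -G` output; a sketch with made-up values:

```shell
#!/usr/bin/env bash
TEST_HOST="thrillwiki-vm"

# Simulated `ssh -G thrillwiki-vm` output; the IP address is illustrative.
ssh_config_output="hostname 192.168.64.5
port 22
user thrillwiki
identityfile ~/.ssh/thrillwiki_vm"

# Same extraction used above: second field of the "hostname" line,
# falling back to the input host when no alias is configured.
resolved_hostname=$(echo "$ssh_config_output" | grep "^hostname " | awk '{print $2}' || echo "$TEST_HOST")

if [ "$resolved_hostname" != "$TEST_HOST" ]; then
    echo "alias resolved: $TEST_HOST -> $resolved_hostname"
fi
```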

@@ -1,304 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Step 4B Cross-Shell Compatibility Test
# Tests development server setup and automation functions
#
set -e
# Cross-shell compatible script directory detection
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
SCRIPT_NAME="$(basename "${(%):-%x}")"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT_NAME="$(basename "$0")"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Source the main deployment script for testing
source "$SCRIPT_DIR/deploy-complete.sh"
# Test configurations
TEST_LOG="$PROJECT_DIR/logs/step4b-test.log"
TEST_HOST="localhost"
TEST_PRESET="dev"
# Color definitions for test output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m'
# Test logging functions
test_log() {
local level="$1"
local color="$2"
local message="$3"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
mkdir -p "$(dirname "$TEST_LOG")"
echo "[$timestamp] [$level] [STEP4B-TEST] $message" >> "$TEST_LOG"
echo -e "${color}[$timestamp] [STEP4B-TEST-$level]${NC} $message"
}
test_info() { test_log "INFO" "$BLUE" "$1"; }
test_success() { test_log "SUCCESS" "$GREEN" "$1"; }
test_warning() { test_log "WARNING" "$YELLOW" "⚠️ $1"; }
test_error() { test_log "ERROR" "$RED" "$1"; }
test_progress() { test_log "PROGRESS" "$CYAN" "🚀 $1"; }
# Test function existence
test_function_exists() {
local func_name="$1"
if declare -f "$func_name" > /dev/null; then
test_success "Function exists: $func_name"
return 0
else
test_error "Function missing: $func_name"
return 1
fi
}
# Test cross-shell variable detection
test_shell_detection() {
test_progress "Testing cross-shell variable detection"
# Test shell detection variables
if [ -n "${BASH_VERSION:-}" ]; then
test_info "Running in Bash: $BASH_VERSION"
elif [ -n "${ZSH_VERSION:-}" ]; then
test_info "Running in Zsh: $ZSH_VERSION"
else
test_info "Running in other shell: ${SHELL:-unknown}"
fi
# Test script directory detection worked
if [ -n "$SCRIPT_DIR" ] && [ -d "$SCRIPT_DIR" ]; then
test_success "Script directory detected: $SCRIPT_DIR"
else
test_error "Script directory detection failed"
return 1
fi
test_success "Cross-shell detection working"
return 0
}
# Test Step 4B function availability
test_step4b_functions() {
test_progress "Testing Step 4B function availability"
local functions=(
"setup_development_server"
"start_thrillwiki_server"
"verify_server_accessibility"
"setup_server_automation"
"setup_server_monitoring"
"integrate_with_smart_deployment"
"enhance_smart_deployment_with_server_management"
)
local test_failures=0
for func in "${functions[@]}"; do
if ! test_function_exists "$func"; then
test_failures=$((test_failures + 1))
fi
done
if [ $test_failures -eq 0 ]; then
test_success "All Step 4B functions are available"
return 0
else
test_error "$test_failures Step 4B functions are missing"
return 1
fi
}
# Test preset configuration integration
test_preset_integration() {
test_progress "Testing deployment preset integration"
# Test preset configuration function
if ! test_function_exists "get_preset_config"; then
test_error "get_preset_config function not available"
return 1
fi
# Test getting configuration values
local test_presets=("dev" "prod" "demo" "testing")
for preset in "${test_presets[@]}"; do
local health_interval
health_interval=$(get_preset_config "$preset" "HEALTH_CHECK_INTERVAL" 2>/dev/null || echo "")
if [ -n "$health_interval" ]; then
test_success "Preset $preset health check interval: ${health_interval}s"
else
test_warning "Could not get health check interval for preset: $preset"
fi
done
test_success "Preset integration testing completed"
return 0
}
# Test .clinerules command generation
test_clinerules_command() {
test_progress "Testing .clinerules command compliance"
# The exact command from .clinerules
local expected_command="lsof -ti :8000 | xargs kill -9; find . -type d -name '__pycache__' -exec rm -r {} +; uv run manage.py tailwind runserver"
# Extract the command from the start_thrillwiki_server function
if grep -q "lsof -ti :8000.*uv run manage.py tailwind runserver" "$SCRIPT_DIR/deploy-complete.sh"; then
test_success ".clinerules command found in start_thrillwiki_server function"
else
test_error ".clinerules command not found or incorrect"
return 1
fi
# Check for exact command components
if grep -q "lsof -ti :8000 | xargs kill -9" "$SCRIPT_DIR/deploy-complete.sh"; then
test_success "Process cleanup component present"
else
test_error "Process cleanup component missing"
fi
if grep -q "find . -type d -name '__pycache__' -exec rm -r {} +" "$SCRIPT_DIR/deploy-complete.sh"; then
test_success "Python cache cleanup component present"
else
test_error "Python cache cleanup component missing"
fi
if grep -q "uv run manage.py tailwind runserver" "$SCRIPT_DIR/deploy-complete.sh"; then
test_success "ThrillWiki server startup component present"
else
test_error "ThrillWiki server startup component missing"
fi
test_success ".clinerules command compliance verified"
return 0
}
# Test server management script structure
test_server_management_script() {
test_progress "Testing server management script structure"
# Check if the server management script is properly structured in the source
if grep -q "ThrillWiki Server Management Script" "$SCRIPT_DIR/deploy-complete.sh"; then
test_success "Server management script header found"
else
test_error "Server management script header missing"
return 1
fi
# Check for essential server management functions
local mgmt_functions=("start_server" "stop_server" "restart_server" "monitor_server")
for func in "${mgmt_functions[@]}"; do
if grep -q "$func()" "$SCRIPT_DIR/deploy-complete.sh"; then
test_success "Server management function: $func"
else
test_warning "Server management function missing: $func"
fi
done
test_success "Server management script structure verified"
return 0
}
# Test cross-shell deployment hook
test_deployment_hook() {
test_progress "Testing deployment hook cross-shell compatibility"
# Check for cross-shell script directory detection in deployment hook
if grep -A 10 "ThrillWiki Deployment Hook" "$SCRIPT_DIR/deploy-complete.sh" | grep -q "BASH_SOURCE\|ZSH_NAME"; then
test_success "Deployment hook has cross-shell compatibility"
else
test_error "Deployment hook missing cross-shell compatibility"
return 1
fi
test_success "Deployment hook structure verified"
return 0
}
# Main test execution
main() {
echo ""
echo -e "${BOLD}${CYAN}"
echo "🧪 ThrillWiki Step 4B Cross-Shell Compatibility Test"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo -e "${NC}"
echo ""
local test_failures=0
# Run tests
# Use arithmetic assignment instead of ((test_failures++)): the post-increment
# returns non-zero when the counter is 0, which would abort under `set -e`.
test_shell_detection || test_failures=$((test_failures + 1))
echo ""
test_step4b_functions || test_failures=$((test_failures + 1))
echo ""
test_preset_integration || test_failures=$((test_failures + 1))
echo ""
test_clinerules_command || test_failures=$((test_failures + 1))
echo ""
test_server_management_script || test_failures=$((test_failures + 1))
echo ""
test_deployment_hook || test_failures=$((test_failures + 1))
echo ""
# Summary
echo -e "${BOLD}${CYAN}Test Summary:${NC}"
echo "━━━━━━━━━━━━━━"
if [ $test_failures -eq 0 ]; then
test_success "All Step 4B cross-shell compatibility tests passed!"
echo ""
echo -e "${GREEN}✅ Step 4B implementation is ready for deployment${NC}"
echo ""
echo "Features validated:"
echo "• ThrillWiki development server startup with exact .clinerules command"
echo "• Automated server management with monitoring and restart capabilities"
echo "• Cross-shell compatible process management and control"
echo "• Integration with smart deployment system from Step 4A"
echo "• Server health monitoring and automatic recovery"
echo "• Development server configuration based on deployment presets"
echo "• Background automation service features"
return 0
else
test_error "$test_failures test(s) failed"
echo ""
echo -e "${RED}❌ Step 4B implementation needs attention${NC}"
echo ""
echo "Please check the test log for details: $TEST_LOG"
return 1
fi
}
# Cross-shell compatible script execution check
if [ -n "${BASH_SOURCE:-}" ]; then
# In bash, check if script is executed directly
if [ "${BASH_SOURCE[0]}" = "${0}" ]; then
main "$@"
fi
elif [ -n "${ZSH_NAME:-}" ]; then
# In zsh, check if script is executed directly
if [ "${(%):-%x}" = "${0}" ]; then
main "$@"
fi
else
# In other shells, assume direct execution
main "$@"
fi


@@ -1,642 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Step 5A Cross-Shell Compatibility Test
# Tests service configuration and startup functionality in both bash and zsh
#
# Features tested:
# - Service configuration functions
# - Environment file generation
# - Systemd service integration
# - Timer configuration
# - Health monitoring
# - Cross-shell compatibility
#
set -e
# ============================================================================
# SCRIPT CONFIGURATION
# ============================================================================
# Cross-shell compatible script directory detection
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
SCRIPT_NAME="$(basename "${(%):-%x}")"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT_NAME="$(basename "$0")"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
DEPLOY_COMPLETE_SCRIPT="$SCRIPT_DIR/deploy-complete.sh"
# Test configuration
TEST_LOG="$PROJECT_DIR/logs/test-step5a-compatibility.log"
TEST_HOST="localhost"
TEST_PRESET="dev"
TEST_TOKEN="test_token_value"
# ============================================================================
# COLOR DEFINITIONS
# ============================================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m' # No Color
# ============================================================================
# LOGGING FUNCTIONS
# ============================================================================
test_log() {
local level="$1"
local color="$2"
local message="$3"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
# Ensure log directory exists
mkdir -p "$(dirname "$TEST_LOG")"
# Log to file (without colors)
echo "[$timestamp] [$level] [STEP5A-TEST] $message" >> "$TEST_LOG"
# Log to console (with colors)
echo -e "${color}[$timestamp] [STEP5A-TEST-$level]${NC} $message"
}
test_info() {
test_log "INFO" "$BLUE" "$1"
}
test_success() {
test_log "SUCCESS" "$GREEN" "$1"
}
test_warning() {
test_log "WARNING" "$YELLOW" "⚠️ $1"
}
test_error() {
test_log "ERROR" "$RED" "$1"
}
test_debug() {
if [ "${TEST_DEBUG:-false}" = "true" ]; then
test_log "DEBUG" "$PURPLE" "🔍 $1"
fi
}
test_progress() {
test_log "PROGRESS" "$CYAN" "🚀 $1"
}
# ============================================================================
# UTILITY FUNCTIONS
# ============================================================================
# Cross-shell compatible command existence check
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Get current shell name
get_current_shell() {
if [ -n "${BASH_VERSION:-}" ]; then
echo "bash"
elif [ -n "${ZSH_VERSION:-}" ]; then
echo "zsh"
else
echo "unknown"
fi
}
# Test shell detection
test_shell_detection() {
local current_shell
current_shell=$(get_current_shell)
test_info "Testing shell detection in $current_shell"
# Test script directory detection
if [ -d "$SCRIPT_DIR" ] && [ -f "$SCRIPT_DIR/$SCRIPT_NAME" ]; then
test_success "Script directory detection works in $current_shell"
else
test_error "Script directory detection failed in $current_shell"
return 1
fi
# Test project directory detection
if [ -d "$PROJECT_DIR" ] && [ -f "$PROJECT_DIR/manage.py" ]; then
test_success "Project directory detection works in $current_shell"
else
test_error "Project directory detection failed in $current_shell"
return 1
fi
return 0
}
# ============================================================================
# SERVICE CONFIGURATION TESTING
# ============================================================================
# Test deployment preset configuration functions
test_preset_configuration() {
test_info "Testing deployment preset configuration functions"
# Source the deploy-complete script to access functions
source "$DEPLOY_COMPLETE_SCRIPT"
# Test preset validation
if validate_preset "dev"; then
test_success "Preset validation works for 'dev'"
else
test_error "Preset validation failed for 'dev'"
return 1
fi
if validate_preset "invalid_preset"; then
test_error "Preset validation incorrectly accepted invalid preset"
return 1
else
test_success "Preset validation correctly rejected invalid preset"
fi
# Test preset configuration retrieval
local pull_interval
pull_interval=$(get_preset_config "dev" "PULL_INTERVAL")
if [ "$pull_interval" = "60" ]; then
test_success "Preset config retrieval works for dev PULL_INTERVAL: $pull_interval"
else
test_error "Preset config retrieval failed for dev PULL_INTERVAL: got '$pull_interval', expected '60'"
return 1
fi
# Test all presets
local presets="dev prod demo testing"
for preset in $presets; do
local description
description=$(get_deployment_preset_description "$preset")
if [ -n "$description" ] && [ "$description" != "Unknown preset" ]; then
test_success "Preset description works for '$preset': $description"
else
test_error "Preset description failed for '$preset'"
return 1
fi
done
return 0
}
# Test environment file generation
test_environment_generation() {
test_info "Testing environment file generation"
# Source the deploy-complete script to access functions
source "$DEPLOY_COMPLETE_SCRIPT"
# Create temporary test directory
local test_dir="/tmp/thrillwiki-test-$$"
mkdir -p "$test_dir/scripts/systemd"
# Mock SSH command function for testing
generate_test_env_config() {
local preset="$1"
local github_token="$2"
# Simulate the environment generation logic
local pull_interval
pull_interval=$(get_preset_config "$preset" "PULL_INTERVAL")
local health_check_interval
health_check_interval=$(get_preset_config "$preset" "HEALTH_CHECK_INTERVAL")
local debug_mode
debug_mode=$(get_preset_config "$preset" "DEBUG_MODE")
# Generate test environment file
cat > "$test_dir/scripts/systemd/thrillwiki-deployment***REMOVED***" << EOF
# Test Environment Configuration
PROJECT_DIR=$test_dir
DEPLOYMENT_PRESET=$preset
PULL_INTERVAL=$pull_interval
HEALTH_CHECK_INTERVAL=$health_check_interval
DEBUG_MODE=$debug_mode
GITHUB_TOKEN=$github_token
EOF
return 0
}
# Test environment generation for different presets
local presets="dev prod demo testing"
for preset in $presets; do
if generate_test_env_config "$preset" "$TEST_TOKEN"; then
local env_file="$test_dir/scripts/systemd/thrillwiki-deployment***REMOVED***"
if [ -f "$env_file" ]; then
# Verify content
if grep -q "DEPLOYMENT_PRESET=$preset" "$env_file" && \
grep -q "GITHUB_TOKEN=$TEST_TOKEN" "$env_file"; then
test_success "Environment generation works for preset '$preset'"
else
test_error "Environment generation produced incorrect content for preset '$preset'"
cat "$env_file"
rm -rf "$test_dir"
return 1
fi
else
test_error "Environment file not created for preset '$preset'"
rm -rf "$test_dir"
return 1
fi
else
test_error "Environment generation failed for preset '$preset'"
rm -rf "$test_dir"
return 1
fi
done
# Cleanup
rm -rf "$test_dir"
return 0
}
# Test systemd service file validation
test_systemd_service_files() {
test_info "Testing systemd service file validation"
local systemd_dir="$PROJECT_DIR/scripts/systemd"
local required_files=(
"thrillwiki-deployment.service"
"thrillwiki-smart-deploy.service"
"thrillwiki-smart-deploy.timer"
"thrillwiki-deployment***REMOVED***"
)
# Check if service files exist
for file in "${required_files[@]}"; do
local file_path="$systemd_dir/$file"
if [ -f "$file_path" ]; then
test_success "Service file exists: $file"
# Basic syntax validation for service files
if [[ "$file" == *.service ]] || [[ "$file" == *.timer ]]; then
if grep -q "^\[Unit\]" "$file_path" && \
grep -q "^\[Install\]" "$file_path"; then
test_success "Service file has valid structure: $file"
else
test_error "Service file has invalid structure: $file"
return 1
fi
fi
else
test_error "Required service file missing: $file"
return 1
fi
done
return 0
}
# Test deployment automation script
test_deployment_automation_script() {
test_info "Testing deployment automation script"
local automation_script="$PROJECT_DIR/scripts/vm/deploy-automation.sh"
if [ -f "$automation_script" ]; then
test_success "Deployment automation script exists"
if [ -x "$automation_script" ]; then
test_success "Deployment automation script is executable"
else
test_error "Deployment automation script is not executable"
return 1
fi
# Test script syntax
if bash -n "$automation_script"; then
test_success "Deployment automation script has valid bash syntax"
else
test_error "Deployment automation script has syntax errors"
return 1
fi
# Test script commands
local commands="start stop status health-check restart-smart-deploy restart-server"
for cmd in $commands; do
if grep -q "$cmd)" "$automation_script"; then
test_success "Deployment automation script supports command: $cmd"
else
test_error "Deployment automation script missing command: $cmd"
return 1
fi
done
else
test_error "Deployment automation script not found"
return 1
fi
return 0
}
# =============================================================================
# CROSS-SHELL COMPATIBILITY TESTING
# =============================================================================
# Test function availability in both shells
test_function_availability() {
test_info "Testing function availability"
# Source the deploy-complete script
source "$DEPLOY_COMPLETE_SCRIPT"
# Test critical functions
local functions=(
"get_preset_config"
"get_deployment_preset_description"
"validate_preset"
"configure_deployment_services"
"generate_deployment_environment_config"
"configure_deployment_timer"
"install_systemd_services"
"enable_and_start_services"
"monitor_service_health"
)
for func in "${functions[@]}"; do
if command_exists "$func" || type "$func" >/dev/null 2>&1; then
test_success "Function available: $func"
else
test_error "Function not available: $func"
return 1
fi
done
return 0
}
# Test variable expansion and substitution
test_variable_expansion() {
test_info "Testing variable expansion and substitution"
# Test basic variable expansion
local test_var="test_value"
local expanded="${test_var:-default}"
if [ "$expanded" = "test_value" ]; then
test_success "Basic variable expansion works"
else
test_error "Basic variable expansion failed: got '$expanded', expected 'test_value'"
return 1
fi
# Test default value expansion
local empty_var=""
local default_expanded="${empty_var:-default_value}"
if [ "$default_expanded" = "default_value" ]; then
test_success "Default value expansion works"
else
test_error "Default value expansion failed: got '$default_expanded', expected 'default_value'"
return 1
fi
# Test array compatibility (where supported)
local array_test=(item1 item2 item3)
if [ "${#array_test[@]}" -eq 3 ]; then
test_success "Array operations work"
else
test_warning "Array operations may not be fully compatible"
fi
return 0
}
# =============================================================================
# MAIN TEST EXECUTION
# =============================================================================
# Run all tests
run_all_tests() {
local current_shell
current_shell=$(get_current_shell)
test_info "Starting Step 5A compatibility tests in $current_shell shell"
test_info "Test log: $TEST_LOG"
local test_failures=0
# Test 1: Shell detection
test_progress "Test 1: Shell detection"
if ! test_shell_detection; then
test_failures=$((test_failures + 1))
fi
# Test 2: Preset configuration
test_progress "Test 2: Preset configuration"
if ! test_preset_configuration; then
test_failures=$((test_failures + 1))
fi
# Test 3: Environment generation
test_progress "Test 3: Environment generation"
if ! test_environment_generation; then
test_failures=$((test_failures + 1))
fi
# Test 4: Systemd service files
test_progress "Test 4: Systemd service files"
if ! test_systemd_service_files; then
test_failures=$((test_failures + 1))
fi
# Test 5: Deployment automation script
test_progress "Test 5: Deployment automation script"
if ! test_deployment_automation_script; then
test_failures=$((test_failures + 1))
fi
# Test 6: Function availability
test_progress "Test 6: Function availability"
if ! test_function_availability; then
test_failures=$((test_failures + 1))
fi
# Test 7: Variable expansion
test_progress "Test 7: Variable expansion"
if ! test_variable_expansion; then
test_failures=$((test_failures + 1))
fi
# Report results
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [ $test_failures -eq 0 ]; then
test_success "All Step 5A compatibility tests passed in $current_shell! 🎉"
echo -e "${GREEN}✅ Step 5A service configuration is fully compatible with $current_shell shell${NC}"
else
test_error "Step 5A compatibility tests failed: $test_failures test(s) failed in $current_shell"
echo -e "${RED}❌ Step 5A has compatibility issues with $current_shell shell${NC}"
fi
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
return $test_failures
}
# Test in both shells if available
test_cross_shell_compatibility() {
test_info "Testing cross-shell compatibility"
local shells_to_test=()
# Check available shells
if command_exists bash; then
shells_to_test+=("bash")
fi
if command_exists zsh; then
shells_to_test+=("zsh")
fi
if [ ${#shells_to_test[@]} -eq 0 ]; then
test_error "No compatible shells found for testing"
return 1
fi
local total_failures=0
for shell in "${shells_to_test[@]}"; do
test_info "Testing in $shell shell"
echo ""
if "$shell" "$0" --single-shell; then
test_success "$shell compatibility test passed"
else
test_error "$shell compatibility test failed"
total_failures=$((total_failures + 1))
fi
echo ""
done
if [ $total_failures -eq 0 ]; then
test_success "Cross-shell compatibility verified for all available shells"
return 0
else
test_error "Cross-shell compatibility issues detected ($total_failures shell(s) failed)"
return 1
fi
}
# =============================================================================
# COMMAND HANDLING
# =============================================================================
# Show usage information
show_usage() {
cat << 'EOF'
🧪 ThrillWiki Step 5A Cross-Shell Compatibility Test
DESCRIPTION:
Tests Step 5A service configuration and startup functionality for cross-shell
compatibility between bash and zsh environments.
USAGE:
./test-step5a-compatibility.sh [OPTIONS]
OPTIONS:
--single-shell Run tests in current shell only (used internally)
--debug Enable debug logging
-h, --help Show this help message
FEATURES TESTED:
✅ Service configuration functions
✅ Environment file generation
✅ Systemd service integration
✅ Timer configuration
✅ Health monitoring
✅ Cross-shell compatibility
✅ Function availability
✅ Variable expansion
EXAMPLES:
# Run compatibility tests
./test-step5a-compatibility.sh
# Run with debug output
./test-step5a-compatibility.sh --debug
EXIT CODES:
0 All tests passed
1 Some tests failed
EOF
}
# Main execution
main() {
local single_shell=false
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--single-shell)
single_shell=true
shift
;;
--debug)
export TEST_DEBUG=true
shift
;;
-h|--help)
show_usage
exit 0
;;
*)
test_error "Unknown option: $1"
show_usage
exit 1
;;
esac
done
# Run tests
if [ "$single_shell" = "true" ]; then
# Single shell test (called by cross-shell test)
run_all_tests
else
# Full cross-shell compatibility test
echo ""
echo -e "${BOLD}${CYAN}🧪 ThrillWiki Step 5A Cross-Shell Compatibility Test${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
test_cross_shell_compatibility
fi
}
# Cross-shell compatible script execution check
if [ -n "${BASH_SOURCE:-}" ]; then
# In bash, check if script is executed directly
if [ "${BASH_SOURCE[0]}" = "${0}" ]; then
main "$@"
fi
elif [ -n "${ZSH_NAME:-}" ]; then
# In zsh, check if script is executed directly
if [ "${(%):-%x}" = "${0}" ]; then
main "$@"
fi
else
# In other shells, assume direct execution
main "$@"
fi
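The direct-execution guard above relies on shell-specific variables. The following is a minimal, illustrative sketch of the same bash/zsh detection pattern used throughout these scripts (function and variable names here are generic, not taken from the repository):

```shell
#!/usr/bin/env bash
# Sketch: detect which shell is interpreting the script.
# BASH_VERSION is set by bash, ZSH_VERSION by zsh; anything else is "unknown".
detect_shell() {
    if [ -n "${BASH_VERSION:-}" ]; then
        echo "bash"
    elif [ -n "${ZSH_VERSION:-}" ]; then
        echo "zsh"
    else
        echo "unknown"
    fi
}

echo "running under: $(detect_shell)"
```

Checking version variables rather than `$0` keeps the test valid even when the script is sourced or invoked through a symlink.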


@@ -1,227 +0,0 @@
#!/bin/bash
# ThrillWiki Step 5A Service Configuration - Simple Compatibility Test
# Tests systemd service configuration and cross-shell compatibility
# This is a non-interactive version focused on service file validation
set -e
# Cross-shell compatible script directory detection
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Logging functions
test_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
test_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
test_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Get current shell
get_shell() {
if [ -n "${BASH_VERSION:-}" ]; then
echo "bash"
elif [ -n "${ZSH_VERSION:-}" ]; then
echo "zsh"
else
echo "unknown"
fi
}
# Test systemd service files
test_service_files() {
local systemd_dir="$PROJECT_DIR/scripts/systemd"
local files=(
"thrillwiki-deployment.service"
"thrillwiki-smart-deploy.service"
"thrillwiki-smart-deploy.timer"
"thrillwiki-deployment***REMOVED***"
)
test_info "Testing systemd service files..."
for file in "${files[@]}"; do
if [ -f "$systemd_dir/$file" ]; then
test_success "Service file exists: $file"
# Validate service/timer structure
if [[ "$file" == *.service ]] || [[ "$file" == *.timer ]]; then
if grep -q "^\[Unit\]" "$systemd_dir/$file"; then
test_success "Service file has valid structure: $file"
else
test_error "Service file missing [Unit] section: $file"
return 1
fi
fi
else
test_error "Service file missing: $file"
return 1
fi
done
return 0
}
# Test deployment automation script
test_automation_script() {
local script="$PROJECT_DIR/scripts/vm/deploy-automation.sh"
test_info "Testing deployment automation script..."
if [ -f "$script" ]; then
test_success "Deployment automation script exists"
if [ -x "$script" ]; then
test_success "Script is executable"
else
test_error "Script is not executable"
return 1
fi
# Test syntax
if bash -n "$script" 2>/dev/null; then
test_success "Script has valid syntax"
else
test_error "Script has syntax errors"
return 1
fi
# Test commands
local commands=("start" "stop" "status" "health-check")
for cmd in "${commands[@]}"; do
if grep -q "$cmd)" "$script"; then
test_success "Script supports command: $cmd"
else
test_error "Script missing command: $cmd"
return 1
fi
done
else
test_error "Deployment automation script not found"
return 1
fi
return 0
}
# Test cross-shell compatibility
test_shell_compatibility() {
local current_shell
current_shell=$(get_shell)
test_info "Testing shell compatibility in $current_shell..."
# Test directory detection
if [ -d "$SCRIPT_DIR" ] && [ -d "$PROJECT_DIR" ]; then
test_success "Directory detection works in $current_shell"
else
test_error "Directory detection failed in $current_shell"
return 1
fi
# Test variable expansion
local test_var="value"
local expanded="${test_var:-default}"
if [ "$expanded" = "value" ]; then
test_success "Variable expansion works in $current_shell"
else
test_error "Variable expansion failed in $current_shell"
return 1
fi
return 0
}
# Main test function
run_tests() {
local current_shell
current_shell=$(get_shell)
echo
echo "🧪 ThrillWiki Step 5A Service Configuration Test"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Testing in $current_shell shell"
echo
# Run tests
if ! test_shell_compatibility; then
return 1
fi
if ! test_service_files; then
return 1
fi
if ! test_automation_script; then
return 1
fi
echo
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
test_success "All Step 5A service configuration tests passed! 🎉"
echo "✅ Service configuration is compatible with $current_shell shell"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo
return 0
}
# Test in both shells
main() {
echo "Testing Step 5A compatibility..."
# Test in bash
echo
test_info "Testing in bash shell"
if bash "$0" run_tests; then
test_success "bash compatibility test passed"
else
test_error "bash compatibility test failed"
return 1
fi
# Test in zsh (if available)
if command -v zsh >/dev/null 2>&1; then
echo
test_info "Testing in zsh shell"
if zsh "$0" run_tests; then
test_success "zsh compatibility test passed"
else
test_error "zsh compatibility test failed"
return 1
fi
else
test_info "zsh not available, skipping zsh test"
fi
echo
test_success "All cross-shell compatibility tests completed successfully! 🎉"
return 0
}
# Check if we're being called to run tests directly
if [ "$1" = "run_tests" ]; then
run_tests
else
main
fi
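These scripts run under `set -e`, where the common `((failures++))` idiom is a trap: when the counter is 0 the arithmetic command returns a non-zero status and aborts the script. A minimal sketch of the safe counting pattern (the `run_check` helper is a hypothetical stand-in, not a function from these scripts):

```shell
#!/usr/bin/env bash
# Sketch: counting test failures safely under `set -e`.
# `((failures++))` exits non-zero when failures is 0, killing the script;
# POSIX arithmetic expansion in an assignment always exits 0.
set -e
failures=0
run_check() { return "$1"; }   # stand-in for a real test function

for rc in 0 1 1; do
    if ! run_check "$rc"; then
        failures=$((failures + 1))   # safe under set -e
    fi
done
echo "failures=$failures"
```

The assignment form `failures=$((failures + 1))` is also POSIX-portable, so it behaves identically in bash and zsh.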


@@ -1,917 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Step 5B Final Validation Test Script
# Comprehensive testing of final validation and health checks with cross-shell compatibility
#
# Features:
# - Cross-shell compatible (bash/zsh)
# - Comprehensive final validation testing
# - Health check validation
# - Integration testing validation
# - System monitoring validation
# - Cross-shell compatibility testing
# - Deployment preset validation
# - Comprehensive reporting
#
set -e
# =============================================================================
# SCRIPT CONFIGURATION
# =============================================================================
# Cross-shell compatible script directory detection
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
SCRIPT_NAME="$(basename "${(%):-%x}")"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SCRIPT_NAME="$(basename "$0")"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
DEPLOY_COMPLETE_SCRIPT="$SCRIPT_DIR/deploy-complete.sh"
# Test configuration
TEST_LOG="$PROJECT_DIR/logs/test-step5b-final-validation.log"
TEST_RESULTS_FILE="$PROJECT_DIR/logs/step5b-test-results.txt"
# =============================================================================
# COLOR DEFINITIONS
# =============================================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
BOLD='\033[1m'
NC='\033[0m' # No Color
# =============================================================================
# LOGGING FUNCTIONS
# =============================================================================
test_log() {
local level="$1"
local color="$2"
local message="$3"
local timestamp="$(date '+%Y-%m-%d %H:%M:%S')"
# Ensure log directory exists
mkdir -p "$(dirname "$TEST_LOG")"
# Log to file (without colors)
echo "[$timestamp] [$level] [STEP5B-TEST] $message" >> "$TEST_LOG"
# Log to console (with colors)
echo -e "${color}[$timestamp] [STEP5B-TEST-$level]${NC} $message"
}
test_info() {
test_log "INFO" "$BLUE" "$1"
}
test_success() {
test_log "SUCCESS" "$GREEN" "$1"
}
test_warning() {
test_log "WARNING" "$YELLOW" "⚠️ $1"
}
test_error() {
test_log "ERROR" "$RED" "$1"
}
test_debug() {
if [ "${TEST_DEBUG:-false}" = "true" ]; then
test_log "DEBUG" "$PURPLE" "🔍 $1"
fi
}
test_progress() {
test_log "PROGRESS" "$CYAN" "🚀 $1"
}
# =============================================================================
# UTILITY FUNCTIONS
# =============================================================================
# Cross-shell compatible command existence check
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Show test banner
show_test_banner() {
echo ""
echo -e "${BOLD}${CYAN}"
echo "╔═══════════════════════════════════════════════════════════════════════════════╗"
echo "║ ║"
echo "║ 🧪 ThrillWiki Step 5B Final Validation Test 🧪 ║"
echo "║ ║"
echo "║ Comprehensive Testing of Final Validation and Health Checks ║"
echo "║ ║"
echo "╚═══════════════════════════════════════════════════════════════════════════════╝"
echo -e "${NC}"
echo ""
}
# Show usage information
show_usage() {
cat << 'EOF'
🧪 ThrillWiki Step 5B Final Validation Test Script
DESCRIPTION:
Comprehensive testing of Step 5B final validation and health checks
with cross-shell compatibility validation.
USAGE:
./test-step5b-final-validation.sh [OPTIONS]
OPTIONS:
--test-validation-functions Test individual validation functions
--test-health-checks Test component health checks
--test-integration Test integration testing functions
--test-monitoring Test system monitoring functions
--test-cross-shell Test cross-shell compatibility
--test-presets Test deployment preset validation
--test-reporting Test comprehensive reporting
--test-all Run all tests (default)
--create-mock-hosts Create mock host configuration for testing
--debug Enable debug output
--quiet Reduce output verbosity
-h, --help Show this help message
EXAMPLES:
# Run all tests
./test-step5b-final-validation.sh
# Test only validation functions
./test-step5b-final-validation.sh --test-validation-functions
# Test with debug output
./test-step5b-final-validation.sh --debug --test-all
# Test cross-shell compatibility
./test-step5b-final-validation.sh --test-cross-shell
FEATURES:
✅ Validation function testing
✅ Component health check testing
✅ Integration testing validation
✅ System monitoring testing
✅ Cross-shell compatibility testing
✅ Deployment preset validation
✅ Comprehensive reporting testing
✅ Mock environment creation
EOF
}
# =============================================================================
# MOCK ENVIRONMENT SETUP
# =============================================================================
create_mock_environment() {
test_progress "Creating mock environment for testing"
# Create mock host configuration
local mock_hosts_file="/tmp/thrillwiki-deploy-hosts.$$"
echo "test-host-1" > "$mock_hosts_file"
echo "192.168.1.100" >> "$mock_hosts_file"
echo "demo.thrillwiki.local" >> "$mock_hosts_file"
# Set mock environment variables
export REMOTE_USER="testuser"
export REMOTE_PORT="22"
export SSH_KEY="$HOME/.ssh/id_test"
export DEPLOYMENT_PRESET="dev"
export GITHUB_TOKEN="mock_token_for_testing"
export INTERACTIVE_MODE="false"
test_success "Mock environment created successfully"
return 0
}
cleanup_mock_environment() {
test_debug "Cleaning up mock environment"
# Remove mock host configuration
if [ -f "/tmp/thrillwiki-deploy-hosts.$$" ]; then
rm -f "/tmp/thrillwiki-deploy-hosts.$$"
fi
# Unset mock environment variables
unset REMOTE_USER REMOTE_PORT SSH_KEY DEPLOYMENT_PRESET GITHUB_TOKEN INTERACTIVE_MODE
test_success "Mock environment cleaned up"
}
# =============================================================================
# STEP 5B VALIDATION TESTS
# =============================================================================
# Test validation functions exist and are callable
test_validation_functions() {
test_progress "Testing validation functions"
local validation_success=true
local required_functions=(
"validate_final_system"
"validate_end_to_end_system"
"validate_component_health"
"validate_integration_testing"
"validate_system_monitoring"
"validate_cross_shell_compatibility"
"validate_deployment_presets"
)
# Source the deploy-complete script to access functions
if [ -f "$DEPLOY_COMPLETE_SCRIPT" ]; then
# Source without executing main
(
# Prevent main execution during sourcing
BASH_SOURCE=("$DEPLOY_COMPLETE_SCRIPT" "sourced")
source "$DEPLOY_COMPLETE_SCRIPT"
# Test each required function
for func in "${required_functions[@]}"; do
if declare -f "$func" >/dev/null 2>&1; then
test_success "Function '$func' exists and is callable"
else
test_error "Function '$func' not found or not callable"
validation_success=false
fi
done
)
else
test_error "Deploy complete script not found: $DEPLOY_COMPLETE_SCRIPT"
validation_success=false
fi
# Test helper functions
local helper_functions=(
"test_remote_thrillwiki_installation"
"test_remote_services"
"test_django_application"
"check_host_configuration_health"
"check_github_authentication_health"
"generate_validation_report"
)
for func in "${helper_functions[@]}"; do
if grep -q "^$func()" "$DEPLOY_COMPLETE_SCRIPT" 2>/dev/null; then
test_success "Helper function '$func' exists in script"
else
test_warning "Helper function '$func' not found or malformed"
fi
done
if [ "$validation_success" = true ]; then
test_success "All validation functions test passed"
return 0
else
test_error "Validation functions test failed"
return 1
fi
}
# Test component health checks
test_component_health_checks() {
test_progress "Testing component health checks"
local health_check_success=true
# Test health check functions exist
local health_check_functions=(
"check_host_configuration_health"
"check_github_authentication_health"
"check_repository_management_health"
"check_dependency_installation_health"
"check_django_deployment_health"
"check_systemd_services_health"
)
for func in "${health_check_functions[@]}"; do
if grep -q "^$func()" "$DEPLOY_COMPLETE_SCRIPT" 2>/dev/null; then
test_success "Health check function '$func' exists"
else
test_error "Health check function '$func' not found"
health_check_success=false
fi
done
# Test health check logic patterns
if grep -q "validate_component_health" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Component health validation integration found"
else
test_error "Component health validation integration not found"
health_check_success=false
fi
if [ "$health_check_success" = true ]; then
test_success "Component health checks test passed"
return 0
else
test_error "Component health checks test failed"
return 1
fi
}
# Test integration testing functions
test_integration_testing() {
test_progress "Testing integration testing functions"
local integration_success=true
# Test integration testing functions exist
local integration_functions=(
"test_complete_deployment_flow"
"test_automated_deployment_cycle"
"test_service_integration"
"test_error_handling_and_recovery"
)
for func in "${integration_functions[@]}"; do
if grep -q "^$func()" "$DEPLOY_COMPLETE_SCRIPT" 2>/dev/null; then
test_success "Integration test function '$func' exists"
else
test_error "Integration test function '$func' not found"
integration_success=false
fi
done
# Test integration testing logic
if grep -q "validate_integration_testing" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Integration testing validation found"
else
test_error "Integration testing validation not found"
integration_success=false
fi
if [ "$integration_success" = true ]; then
test_success "Integration testing functions test passed"
return 0
else
test_error "Integration testing functions test failed"
return 1
fi
}
# Test system monitoring functions
test_system_monitoring() {
test_progress "Testing system monitoring functions"
local monitoring_success=true
# Test monitoring functions exist
local monitoring_functions=(
"test_system_status_monitoring"
"test_performance_metrics"
"test_log_analysis"
"test_network_connectivity_monitoring"
)
for func in "${monitoring_functions[@]}"; do
if grep -q "^$func()" "$DEPLOY_COMPLETE_SCRIPT" 2>/dev/null; then
test_success "Monitoring function '$func' exists"
else
test_error "Monitoring function '$func' not found"
monitoring_success=false
fi
done
# Test monitoring integration
if grep -q "validate_system_monitoring" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "System monitoring validation found"
else
test_error "System monitoring validation not found"
monitoring_success=false
fi
if [ "$monitoring_success" = true ]; then
test_success "System monitoring functions test passed"
return 0
else
test_error "System monitoring functions test failed"
return 1
fi
}
# Test cross-shell compatibility
test_cross_shell_compatibility() {
test_progress "Testing cross-shell compatibility"
local shell_success=true
# Test cross-shell compatibility functions exist
local shell_functions=(
"test_bash_compatibility"
"test_zsh_compatibility"
"test_posix_compliance"
)
for func in "${shell_functions[@]}"; do
if grep -q "^$func()" "$DEPLOY_COMPLETE_SCRIPT" 2>/dev/null; then
test_success "Shell compatibility function '$func' exists"
else
test_error "Shell compatibility function '$func' not found"
shell_success=false
fi
done
# Test cross-shell script detection logic
if grep -q "BASH_SOURCE\|ZSH_NAME" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Cross-shell detection logic found"
else
test_error "Cross-shell detection logic not found"
shell_success=false
fi
# Test POSIX compliance patterns
if grep -q "set -e" "$DEPLOY_COMPLETE_SCRIPT" && ! grep -qF "[[" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "POSIX compliance patterns found"
else
test_warning "POSIX compliance could be improved"
fi
if [ "$shell_success" = true ]; then
test_success "Cross-shell compatibility test passed"
return 0
else
test_error "Cross-shell compatibility test failed"
return 1
fi
}
# Test deployment preset validation
test_deployment_presets() {
test_progress "Testing deployment preset validation"
local preset_success=true
# Test preset validation functions exist
if grep -q "test_deployment_preset" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Deployment preset test function exists"
else
test_error "Deployment preset test function not found"
preset_success=false
fi
# Test preset configuration functions
if grep -q "validate_preset\|get_preset_config" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Preset configuration functions found"
else
test_error "Preset configuration functions not found"
preset_success=false
fi
# Test all required presets are supported
local required_presets="dev prod demo testing"
for preset in $required_presets; do
if grep -q "\"$preset\"" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Preset '$preset' configuration found"
else
test_error "Preset '$preset' configuration not found"
preset_success=false
fi
done
if [ "$preset_success" = true ]; then
test_success "Deployment preset validation test passed"
return 0
else
test_error "Deployment preset validation test failed"
return 1
fi
}
# Test comprehensive reporting
test_comprehensive_reporting() {
test_progress "Testing comprehensive reporting"
local reporting_success=true
# Test reporting functions exist
if grep -q "generate_validation_report" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Validation report generation function exists"
else
test_error "Validation report generation function not found"
reporting_success=false
fi
# Test report content patterns
local report_patterns=(
"validation_results"
"total_tests"
"passed_tests"
"failed_tests"
"warning_tests"
"overall_status"
)
for pattern in "${report_patterns[@]}"; do
if grep -q "$pattern" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Report pattern '$pattern' found"
else
test_error "Report pattern '$pattern' not found"
reporting_success=false
fi
done
# Test report file generation
if grep -q "final-validation-report.txt" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Report file generation pattern found"
else
test_error "Report file generation pattern not found"
reporting_success=false
fi
if [ "$reporting_success" = true ]; then
test_success "Comprehensive reporting test passed"
return 0
else
test_error "Comprehensive reporting test failed"
return 1
fi
}
# Test Step 5B integration in main deployment flow
test_step5b_integration() {
test_progress "Testing Step 5B integration in main deployment flow"
local integration_success=true
# Test Step 5B is called in main function
if grep -q "validate_final_system" "$DEPLOY_COMPLETE_SCRIPT" && grep -A5 -B5 "validate_final_system" "$DEPLOY_COMPLETE_SCRIPT" | grep -q "Step 5B"; then
test_success "Step 5B integration found in main deployment flow"
else
test_error "Step 5B integration not found in main deployment flow"
integration_success=false
fi
# Test proper error handling for validation failures
if grep -A10 "validate_final_system" "$DEPLOY_COMPLETE_SCRIPT" | grep -q "FORCE_DEPLOY"; then
test_success "Validation failure handling with force deploy option found"
else
test_warning "Validation failure handling could be improved"
fi
# Test validation is called at the right time (after deployment)
if grep -B20 "validate_final_system" "$DEPLOY_COMPLETE_SCRIPT" | grep -q "setup_smart_automated_deployment"; then
test_success "Step 5B is properly positioned after deployment steps"
else
test_warning "Step 5B positioning in deployment flow could be improved"
fi
if [ "$integration_success" = true ]; then
test_success "Step 5B integration test passed"
return 0
else
test_error "Step 5B integration test failed"
return 1
fi
}
# =============================================================================
# MAIN TEST EXECUTION
# =============================================================================
# Run all Step 5B tests
run_all_tests() {
test_progress "Running comprehensive Step 5B final validation tests"
local start_time
start_time=$(date +%s)
local total_tests=0
local passed_tests=0
local failed_tests=0
local test_results=""
# Create mock environment for testing
create_mock_environment
# Test validation functions
total_tests=$((total_tests + 1))
if test_validation_functions; then
test_results="${test_results}✅ Validation functions test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ Validation functions test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Test component health checks
total_tests=$((total_tests + 1))
if test_component_health_checks; then
test_results="${test_results}✅ Component health checks test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ Component health checks test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Test integration testing
total_tests=$((total_tests + 1))
if test_integration_testing; then
test_results="${test_results}✅ Integration testing test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ Integration testing test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Test system monitoring
total_tests=$((total_tests + 1))
if test_system_monitoring; then
test_results="${test_results}✅ System monitoring test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ System monitoring test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Test cross-shell compatibility
total_tests=$((total_tests + 1))
if test_cross_shell_compatibility; then
test_results="${test_results}✅ Cross-shell compatibility test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ Cross-shell compatibility test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Test deployment presets
total_tests=$((total_tests + 1))
if test_deployment_presets; then
test_results="${test_results}✅ Deployment presets test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ Deployment presets test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Test comprehensive reporting
total_tests=$((total_tests + 1))
if test_comprehensive_reporting; then
test_results="${test_results}✅ Comprehensive reporting test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ Comprehensive reporting test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Test Step 5B integration
total_tests=$((total_tests + 1))
if test_step5b_integration; then
test_results="${test_results}✅ Step 5B integration test: PASS\n"
passed_tests=$((passed_tests + 1))
else
test_results="${test_results}❌ Step 5B integration test: FAIL\n"
failed_tests=$((failed_tests + 1))
fi
# Calculate test duration
local end_time
end_time=$(date +%s)
local test_duration=$((end_time - start_time))
# Generate test report
generate_test_report "$test_results" "$total_tests" "$passed_tests" "$failed_tests" "$test_duration"
# Cleanup mock environment
cleanup_mock_environment
# Determine overall test result
if [ "$failed_tests" -eq 0 ]; then
test_success "All Step 5B tests passed! ($passed_tests/$total_tests)"
return 0
else
test_error "Step 5B tests failed: $failed_tests/$total_tests tests failed"
return 1
fi
}
# Generate test report
generate_test_report() {
local test_results="$1"
local total_tests="$2"
local passed_tests="$3"
local failed_tests="$4"
local test_duration="$5"
mkdir -p "$(dirname "$TEST_RESULTS_FILE")"
{
echo "ThrillWiki Step 5B Final Validation Test Report"
echo "[AWS-SECRET-REMOVED]======"
echo ""
echo "Generated: $(date '+%Y-%m-%d %H:%M:%S')"
echo "Test Duration: ${test_duration} seconds"
echo "Shell: $0"
echo ""
echo "Test Results Summary:"
echo "===================="
echo "Total tests: $total_tests"
echo "Passed: $passed_tests"
echo "Failed: $failed_tests"
echo "Success rate: $(( (passed_tests * 100) / total_tests ))%"
echo ""
echo "Detailed Results:"
echo "================"
echo -e "$test_results"
echo ""
echo "Environment Information:"
echo "======================="
echo "Operating System: $(uname -s)"
echo "Architecture: $(uname -m)"
echo "Shell: ${SHELL:-unknown}"
echo "User: $(whoami)"
echo "Working Directory: $(pwd)"
echo "Project Directory: $PROJECT_DIR"
echo ""
} > "$TEST_RESULTS_FILE"
test_success "Test report saved to: $TEST_RESULTS_FILE"
}
# [AWS-SECRET-REMOVED]====================================
# ARGUMENT PARSING AND MAIN EXECUTION
# [AWS-SECRET-REMOVED]====================================
# Parse command line arguments
parse_arguments() {
local test_validation_functions=false
local test_health_checks=false
local test_integration=false
local test_monitoring=false
local test_cross_shell=false
local test_presets=false
local test_reporting=false
local test_all=true
local create_mock_hosts=false
local quiet=false
while [[ $# -gt 0 ]]; do
case $1 in
--test-validation-functions)
test_validation_functions=true
test_all=false
shift
;;
--test-health-checks)
test_health_checks=true
test_all=false
shift
;;
--test-integration)
test_integration=true
test_all=false
shift
;;
--test-monitoring)
test_monitoring=true
test_all=false
shift
;;
--test-cross-shell)
test_cross_shell=true
test_all=false
shift
;;
--test-presets)
test_presets=true
test_all=false
shift
;;
--test-reporting)
test_reporting=true
test_all=false
shift
;;
--test-all)
test_all=true
shift
;;
--create-mock-hosts)
create_mock_hosts=true
shift
;;
--debug)
export TEST_DEBUG=true
shift
;;
--quiet)
quiet=true
shift
;;
-h|--help)
show_usage
exit 0
;;
*)
test_error "Unknown option: $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
# Execute requested tests
if [ "$test_all" = true ]; then
run_all_tests
else
# Run individual tests as requested
if [ "$create_mock_hosts" = true ]; then
create_mock_environment
fi
local any_test_run=false
if [ "$test_validation_functions" = true ]; then
test_validation_functions
any_test_run=true
fi
if [ "$test_health_checks" = true ]; then
test_component_health_checks
any_test_run=true
fi
if [ "$test_integration" = true ]; then
test_integration_testing
any_test_run=true
fi
if [ "$test_monitoring" = true ]; then
test_system_monitoring
any_test_run=true
fi
if [ "$test_cross_shell" = true ]; then
test_cross_shell_compatibility
any_test_run=true
fi
if [ "$test_presets" = true ]; then
test_deployment_presets
any_test_run=true
fi
if [ "$test_reporting" = true ]; then
test_comprehensive_reporting
any_test_run=true
fi
if [ "$any_test_run" = false ]; then
test_warning "No specific tests requested, running all tests"
run_all_tests
fi
if [ "$create_mock_hosts" = true ]; then
cleanup_mock_environment
fi
fi
}
# Main function
main() {
if [ "${1:-}" != "--quiet" ]; then
show_test_banner
fi
test_info "Starting ThrillWiki Step 5B Final Validation Test"
test_info "Project Directory: $PROJECT_DIR"
test_info "Deploy Complete Script: $DEPLOY_COMPLETE_SCRIPT"
# Validate prerequisites
if [ ! -f "$DEPLOY_COMPLETE_SCRIPT" ]; then
test_error "Deploy complete script not found: $DEPLOY_COMPLETE_SCRIPT"
exit 1
fi
# Parse arguments and run tests
parse_arguments "$@"
}
# Cross-shell compatible script execution check
if [ -n "${BASH_SOURCE:-}" ]; then
# In bash, check if script is executed directly
if [ "${BASH_SOURCE[0]}" = "${0}" ]; then
main "$@"
fi
elif [ -n "${ZSH_NAME:-}" ]; then
# In zsh, check if script is executed directly
if [ "${(%):-%x}" = "${0}" ]; then
main "$@"
fi
else
# In other shells, assume direct execution
main "$@"
fi


@@ -1,162 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Systemd Service Configuration Diagnosis Script
# Tests and validates systemd service configuration issues
#
set -e
# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Test configuration
REMOTE_HOST="${1:-192.168.20.65}"
REMOTE_USER="${2:-thrillwiki}"
REMOTE_PORT="${3:-22}"
SSH_OPTIONS="-o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30"
echo -e "${BLUE}🔍 ThrillWiki Systemd Service Diagnosis${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_PORT}"
echo ""
# Function to run remote commands
run_remote() {
local cmd="$1"
local description="$2"
echo -e "${YELLOW}Testing: ${description}${NC}"
if ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "$cmd" 2>/dev/null; then
echo -e "${GREEN}✅ PASS: ${description}${NC}"
return 0
else
echo -e "${RED}❌ FAIL: ${description}${NC}"
return 1
fi
}
echo "=== Issue #1: Service Script Dependencies ==="
echo ""
# Test 1: Check if smart-deploy.sh exists
run_remote "test -f [AWS-SECRET-REMOVED]t-deploy.sh" \
"smart-deploy.sh script exists"
# Test 2: Check if smart-deploy.sh is executable
run_remote "test -x [AWS-SECRET-REMOVED]t-deploy.sh" \
"smart-deploy.sh script is executable"
# Test 3: Check deploy-automation.sh exists
run_remote "test -f [AWS-SECRET-REMOVED]eploy-automation.sh" \
"deploy-automation.sh script exists"
# Test 4: Check deploy-automation.sh is executable
run_remote "test -x [AWS-SECRET-REMOVED]eploy-automation.sh" \
"deploy-automation.sh script is executable"
echo ""
echo "=== Issue #2: Systemd Service Installation ==="
echo ""
# Test 5: Check if service files exist in systemd
run_remote "test -f /etc/systemd/system/thrillwiki-deployment.service" \
"thrillwiki-deployment.service installed in systemd"
run_remote "test -f /etc/systemd/system/thrillwiki-smart-deploy.service" \
"thrillwiki-smart-deploy.service installed in systemd"
run_remote "test -f /etc/systemd/system/thrillwiki-smart-deploy.timer" \
"thrillwiki-smart-deploy.timer installed in systemd"
echo ""
echo "=== Issue #3: Service Status and Configuration ==="
echo ""
# Test 6: Check service enablement status
run_remote "sudo systemctl is-enabled thrillwiki-deployment.service" \
"thrillwiki-deployment.service is enabled"
run_remote "sudo systemctl is-enabled thrillwiki-smart-deploy.timer" \
"thrillwiki-smart-deploy.timer is enabled"
# Test 7: Check service active status
run_remote "sudo systemctl is-active thrillwiki-deployment.service" \
"thrillwiki-deployment.service is active"
run_remote "sudo systemctl is-active thrillwiki-smart-deploy.timer" \
"thrillwiki-smart-deploy.timer is active"
echo ""
echo "=== Issue #4: Environment and Configuration ==="
echo ""
# Test 8: Check environment file exists
run_remote "test -f [AWS-SECRET-REMOVED]emd/thrillwiki-deployment***REMOVED***" \
"Environment configuration file exists"
# Test 9: Check environment file permissions
run_remote "test -r [AWS-SECRET-REMOVED]emd/thrillwiki-deployment***REMOVED***" \
"Environment file is readable"
# Test 10: Check GitHub token configuration
run_remote "test -f /home/thrillwiki/thrillwiki/.github-pat" \
"GitHub token file exists"
echo ""
echo "=== Issue #5: Service Dependencies and Logs ==="
echo ""
# Test 11: Check systemd journal logs
echo -e "${YELLOW}Testing: Service logs availability${NC}"
if ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "sudo journalctl -u thrillwiki-deployment --no-pager -n 5" >/dev/null 2>&1; then
echo -e "${GREEN}✅ PASS: Service logs are available${NC}"
echo "Last 5 log entries:"
ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "sudo journalctl -u thrillwiki-deployment --no-pager -n 5" | sed 's/^/ /'
else
echo -e "${RED}❌ FAIL: Service logs not available${NC}"
fi
echo ""
echo "=== Issue #6: Service Configuration Validation ==="
echo ""
# Test 12: Validate service file syntax
echo -e "${YELLOW}Testing: Service file syntax validation${NC}"
if ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "sudo systemd-analyze verify /etc/systemd/system/thrillwiki-deployment.service" 2>/dev/null; then
echo -e "${GREEN}✅ PASS: thrillwiki-deployment.service syntax is valid${NC}"
else
echo -e "${RED}❌ FAIL: thrillwiki-deployment.service has syntax errors${NC}"
fi
if ssh $SSH_OPTIONS -p $REMOTE_PORT $REMOTE_USER@$REMOTE_HOST "sudo systemd-analyze verify /etc/systemd/system/thrillwiki-smart-deploy.service" 2>/dev/null; then
echo -e "${GREEN}✅ PASS: thrillwiki-smart-deploy.service syntax is valid${NC}"
else
echo -e "${RED}❌ FAIL: thrillwiki-smart-deploy.service has syntax errors${NC}"
fi
echo ""
echo "=== Issue #7: Automation Service Existence ==="
echo ""
# Test 13: Check for thrillwiki-automation.service (mentioned in error logs)
run_remote "test -f /etc/systemd/system/thrillwiki-automation.service" \
"thrillwiki-automation.service exists (mentioned in error logs)"
run_remote "sudo systemctl status thrillwiki-automation.service" \
"thrillwiki-automation.service status check"
echo ""
echo -e "${BLUE}🔍 Diagnosis Complete${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "This diagnosis will help identify the specific systemd service issues."
echo "Run this script to validate assumptions before implementing fixes."


@@ -1,174 +0,0 @@
#!/usr/bin/env bash
#
# Test script to validate the ThrillWiki directory validation fix
#
set -e
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOY_COMPLETE_SCRIPT="$SCRIPT_DIR/deploy-complete.sh"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
test_log() {
echo -e "${BLUE}[TEST]${NC} $1"
}
test_success() {
echo -e "${GREEN}[PASS]${NC} $1"
}
test_fail() {
echo -e "${RED}[FAIL]${NC} $1"
}
test_warning() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
echo ""
echo -e "${BLUE}🧪 Testing ThrillWiki Directory Validation Fix${NC}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Test 1: Check that SSH_OPTIONS is properly defined
test_log "Test 1: Checking SSH_OPTIONS definition in deploy-complete.sh"
if grep -q "SSH_OPTIONS.*IdentitiesOnly.*StrictHostKeyChecking.*UserKnownHostsFile.*ConnectTimeout" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "SSH_OPTIONS properly defined with deployment-consistent options"
else
test_fail "SSH_OPTIONS not properly defined"
exit 1
fi
# Test 2: Check that BatchMode=yes is removed from validation functions
test_log "Test 2: Checking that BatchMode=yes is removed from validation functions"
# Check if BatchMode=yes is still used in actual SSH commands (not comments)
if grep -n "BatchMode=yes" "$DEPLOY_COMPLETE_SCRIPT" | grep -v "Use deployment-consistent SSH options" | grep -v "# " > /dev/null; then
test_fail "BatchMode=yes still found in actual SSH commands"
grep -n "BatchMode=yes" "$DEPLOY_COMPLETE_SCRIPT" | grep -v "Use deployment-consistent SSH options" | grep -v "# "
exit 1
else
test_success "No BatchMode=yes found in actual SSH commands (only in comments)"
fi
# Test 3: Check that validation functions use SSH_OPTIONS
test_log "Test 3: Checking that validation functions use SSH_OPTIONS variable"
validation_functions=("test_remote_thrillwiki_installation" "test_remote_services" "test_django_application")
all_use_ssh_options=true
for func in "${validation_functions[@]}"; do
if grep -A10 "$func" "$DEPLOY_COMPLETE_SCRIPT" | grep -q "SSH_OPTIONS"; then
test_success "Function $func uses SSH_OPTIONS"
else
test_fail "Function $func does not use SSH_OPTIONS"
all_use_ssh_options=false
fi
done
if [ "$all_use_ssh_options" = false ]; then
exit 1
fi
# Test 4: Check that enhanced debugging is present
test_log "Test 4: Checking that enhanced debugging is present in validation"
if grep -q "Enhanced debugging for ThrillWiki directory validation" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Enhanced debugging present in validation function"
else
test_fail "Enhanced debugging not found in validation function"
exit 1
fi
# Test 5: Check that alternative path checking is present
test_log "Test 5: Checking that alternative path validation is present"
if grep -q "Checking alternative ThrillWiki paths for debugging" "$DEPLOY_COMPLETE_SCRIPT"; then
test_success "Alternative path checking present"
else
test_fail "Alternative path checking not found"
exit 1
fi
# Test 6: Test SSH command construction (simulation)
test_log "Test 6: Testing SSH command construction"
# Source the SSH_OPTIONS definition
SSH_OPTIONS="-o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=30"
REMOTE_PORT="22"
REMOTE_USER="thrillwiki"
SSH_KEY="/home/test/.ssh/***REMOVED***"
test_host="192.168.20.65"
# Simulate the SSH command construction from the fixed validation function
ssh_cmd="ssh $SSH_OPTIONS -i '$SSH_KEY' -p $REMOTE_PORT $REMOTE_USER@$test_host"
# Check individual components
components_to_check=(
"IdentitiesOnly=yes"
"StrictHostKeyChecking=no"
"UserKnownHostsFile=/dev/null"
"ConnectTimeout=30"
"thrillwiki@192.168.20.65"
"/home/test/.ssh/***REMOVED***"
)
test_success "Constructed SSH command: $ssh_cmd"
for component in "${components_to_check[@]}"; do
if echo "$ssh_cmd" | grep -q -F "$component"; then
test_success "SSH command contains: $component"
else
test_fail "SSH command missing: $component"
exit 1
fi
done
# Check for -i flag separately (without the space that causes grep issues)
if echo "$ssh_cmd" | grep -q "\-i "; then
test_success "SSH command contains: -i flag"
else
test_fail "SSH command missing: -i flag"
exit 1
fi
# Check for -p flag separately
if echo "$ssh_cmd" | grep -q "\-p 22"; then
test_success "SSH command contains: -p 22"
else
test_fail "SSH command missing: -p 22"
exit 1
fi
# Test 7: Verify no BatchMode in constructed command
if echo "$ssh_cmd" | grep -q "BatchMode"; then
test_fail "SSH command incorrectly contains BatchMode"
exit 1
else
test_success "SSH command correctly excludes BatchMode"
fi
echo ""
echo -e "${GREEN}✅ All validation fix tests passed successfully!${NC}"
echo ""
echo "Summary of changes:"
echo "• ✅ Removed BatchMode=yes from all validation SSH commands"
echo "• ✅ Added SSH_OPTIONS variable for deployment consistency"
echo "• ✅ Enhanced debugging for better troubleshooting"
echo "• ✅ Added alternative path checking for robustness"
echo "• ✅ Consistent SSH command construction across all validation functions"
echo ""
echo "Expected behavior:"
echo "• Validation SSH commands now allow interactive authentication"
echo "• SSH connection methods match successful deployment patterns"
echo "• Enhanced debugging will show exact paths and SSH commands"
echo "• Alternative path detection will help diagnose directory location issues"
echo ""


@@ -1,158 +0,0 @@
#!/usr/bin/env bash
#
# ThrillWiki Step 5B Simple Validation Test
# Quick validation test for Step 5B final validation and health checks
#
set -e
# Cross-shell compatible script directory detection
if [ -n "${BASH_SOURCE:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
elif [ -n "${ZSH_NAME:-}" ]; then
SCRIPT_DIR="$(cd "$(dirname "${(%):-%x}")" && pwd)"
else
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
fi
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
DEPLOY_COMPLETE_SCRIPT="$SCRIPT_DIR/deploy-complete.sh"
# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
echo ""
echo -e "${BLUE}🧪 ThrillWiki Step 5B Simple Validation Test${NC}"
echo "[AWS-SECRET-REMOVED]======"
echo ""
# Test 1: Check if deploy-complete.sh exists and is executable
echo -n "Testing deploy-complete.sh exists and is executable... "
if [ -f "$DEPLOY_COMPLETE_SCRIPT" ] && [ -x "$DEPLOY_COMPLETE_SCRIPT" ]; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 2: Check if Step 5B validation functions exist
echo -n "Testing Step 5B validation functions exist... "
if grep -q "validate_final_system" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "validate_end_to_end_system" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "validate_component_health" "$DEPLOY_COMPLETE_SCRIPT"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 3: Check if health check functions exist
echo -n "Testing health check functions exist... "
if grep -q "check_host_configuration_health" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "check_github_authentication_health" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "check_django_deployment_health" "$DEPLOY_COMPLETE_SCRIPT"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 4: Check if integration testing functions exist
echo -n "Testing integration testing functions exist... "
if grep -q "test_complete_deployment_flow" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "test_automated_deployment_cycle" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "test_service_integration" "$DEPLOY_COMPLETE_SCRIPT"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 5: Check if cross-shell compatibility functions exist
echo -n "Testing cross-shell compatibility functions exist... "
if grep -q "test_bash_compatibility" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "test_zsh_compatibility" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "test_posix_compliance" "$DEPLOY_COMPLETE_SCRIPT"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 6: Check if Step 5B is integrated in main deployment flow
echo -n "Testing Step 5B integration in main flow... "
if grep -q "Step 5B" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -A5 -B5 "validate_final_system" "$DEPLOY_COMPLETE_SCRIPT" | grep -q "final validation"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 7: Check if comprehensive reporting exists
echo -n "Testing comprehensive reporting exists... "
if grep -q "generate_validation_report" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "final-validation-report.txt" "$DEPLOY_COMPLETE_SCRIPT"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 8: Check if deployment preset validation exists
echo -n "Testing deployment preset validation exists... "
if grep -q "validate_deployment_presets" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "test_deployment_preset" "$DEPLOY_COMPLETE_SCRIPT"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
# Test 9: Check cross-shell compatibility patterns
echo -n "Testing cross-shell compatibility patterns... "
if grep -q "BASH_SOURCE\|ZSH_NAME" "$DEPLOY_COMPLETE_SCRIPT" && \
grep -q "set -e" "$DEPLOY_COMPLETE_SCRIPT"; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${YELLOW}⚠️ WARNING${NC}"
fi
# Test 10: Check if test script exists
echo -n "Testing Step 5B test script exists... "
if [ -f "$SCRIPT_DIR/test-step5b-final-validation.sh" ] && [ -x "$SCRIPT_DIR/test-step5b-final-validation.sh" ]; then
echo -e "${GREEN}✅ PASS${NC}"
else
echo -e "${RED}❌ FAIL${NC}"
exit 1
fi
echo ""
echo -e "${GREEN}🎉 All Step 5B validation tests passed!${NC}"
echo ""
echo "Step 5B: Final Validation and Health Checks implementation is complete and functional."
echo ""
echo "Key features implemented:"
echo "• End-to-end system validation"
echo "• Comprehensive health checks for all components"
echo "• Integration testing of complete deployment pipeline"
echo "• System monitoring and reporting"
echo "• Cross-shell compatibility validation"
echo "• Deployment preset validation"
echo "• Comprehensive reporting and diagnostics"
echo "• Final system verification and status reporting"
echo ""
echo "Usage examples:"
echo " # Run complete deployment with final validation"
echo " ./deploy-complete.sh 192.168.1.100"
echo ""
echo " # Run comprehensive Step 5B validation tests"
echo " ./test-step5b-final-validation.sh --test-all"
echo ""
echo " # Run specific validation tests"
echo " ./test-step5b-final-validation.sh --test-health-checks"
echo ""


@@ -1,302 +0,0 @@
#!/usr/bin/env python3
"""
GitHub Webhook Listener for ThrillWiki CI/CD
This script listens for GitHub webhook events and triggers deployments to a Linux VM.
"""
import os
import sys
import json
import hmac
import hashlib
import logging
import subprocess
from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
from datetime import datetime
# Configuration
WEBHOOK_PORT = int(os.environ.get("WEBHOOK_PORT", 9000))
WEBHOOK_SECRET = os.environ.get("WEBHOOK_SECRET", "")
WEBHOOK_ENABLED = os.environ.get("WEBHOOK_ENABLED", "true").lower() == "true"
VM_HOST = os.environ.get("VM_HOST", "localhost")
VM_PORT = int(os.environ.get("VM_PORT", 22))
VM_USER = os.environ.get("VM_USER", "ubuntu")
VM_KEY_PATH = os.environ.get("VM_KEY_PATH", "~/.ssh/***REMOVED***")
PROJECT_PATH = os.environ.get("VM_PROJECT_PATH", "/home/ubuntu/thrillwiki")
REPO_URL = os.environ.get(
"REPO_URL",
"https://github.com/YOUR_USERNAME/thrillwiki_django_no_react.git",
)
DEPLOY_BRANCH = os.environ.get("DEPLOY_BRANCH", "main")
# GitHub API Configuration
GITHUB_USERNAME = os.environ.get("GITHUB_USERNAME", "")
GITHUB_TOKEN = os.environ.get("GITHUB_TOKEN", "")
GITHUB_API_ENABLED = os.environ.get("GITHUB_API_ENABLED", "false").lower() == "true"
# Setup logging (create the log directory first; FileHandler fails at import time if it is missing)
os.makedirs("logs", exist_ok=True)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[
logging.FileHandler("logs/webhook.log"),
logging.StreamHandler(),
],
)
logger = logging.getLogger(__name__)
class GitHubWebhookHandler(BaseHTTPRequestHandler):
"""Handle incoming GitHub webhook requests."""
def do_GET(self):
"""Handle GET requests - health check."""
if self.path == "/health":
self.send_response(200)
self.send_header("Content-type", "application/json")
self.end_headers()
response = {
"status": "healthy",
"timestamp": datetime.now().isoformat(),
"service": "ThrillWiki Webhook Listener",
}
self.wfile.write(json.dumps(response).encode())
else:
self.send_response(404)
self.end_headers()
def do_POST(self):
"""Handle POST requests - webhook events."""
try:
content_length = int(self.headers["Content-Length"])
post_data = self.rfile.read(content_length)
# Verify webhook signature if secret is configured
if WEBHOOK_SECRET:
if not self._verify_signature(post_data):
logger.warning("Invalid webhook signature")
self.send_response(401)
self.end_headers()
return
# Parse webhook payload
try:
payload = json.loads(post_data.decode("utf-8"))
except json.JSONDecodeError:
logger.error("Invalid JSON payload")
self.send_response(400)
self.end_headers()
return
# Handle webhook event
event_type = self.headers.get("X-GitHub-Event")
if self._should_deploy(event_type, payload):
logger.info(f"Triggering deployment for {event_type} event")
threading.Thread(
target=self._trigger_deployment, args=(payload,)
).start()
self.send_response(200)
self.send_header("Content-type", "application/json")
self.end_headers()
response = {
"status": "deployment_triggered",
"event": event_type,
}
self.wfile.write(json.dumps(response).encode())
else:
logger.info(f"Ignoring {event_type} event - no deployment needed")
self.send_response(200)
self.send_header("Content-type", "application/json")
self.end_headers()
response = {"status": "ignored", "event": event_type}
self.wfile.write(json.dumps(response).encode())
except Exception as e:
logger.error(f"Error handling webhook: {e}")
self.send_response(500)
self.end_headers()
def _verify_signature(self, payload_body):
"""Verify GitHub webhook signature."""
signature = self.headers.get("X-Hub-Signature-256")
if not signature:
return False
expected_signature = (
"sha256="
+ hmac.new(
WEBHOOK_SECRET.encode(), payload_body, hashlib.sha256
).hexdigest()
)
return hmac.compare_digest(signature, expected_signature)
def _should_deploy(self, event_type, payload):
"""Determine if we should trigger a deployment."""
if event_type == "push":
# Deploy on push to main branch
ref = payload.get("ref", "")
target_ref = f"refs/heads/{DEPLOY_BRANCH}"
return ref == target_ref
elif event_type == "release":
# Deploy on new releases
action = payload.get("action", "")
return action == "published"
return False
def _trigger_deployment(self, payload):
"""Trigger deployment to Linux VM."""
# Initialize defaults so the except handlers below never hit a NameError
commit_sha = "unknown"
commit_message = "No message"
try:
commit_sha = payload.get("after") or payload.get("head_commit", {}).get(
"id", "unknown"
)
commit_message = payload.get("head_commit", {}).get("message", "No message")
logger.info(
f"Starting deployment of commit {commit_sha[:8]}: {commit_message}"
)
# Execute deployment script on VM
deploy_script = f"""
#!/bin/bash
set -e
echo "=== ThrillWiki Deployment Started ==="
echo "Commit: {commit_sha[:8]}"
echo "Message: {commit_message}"
echo "Timestamp: $(date)"
cd {PROJECT_PATH}
# Pull latest changes
git fetch origin
git checkout {DEPLOY_BRANCH}
git pull origin {DEPLOY_BRANCH}
# Run deployment script
./scripts/vm-deploy.sh
echo "=== Deployment Completed Successfully ==="
"""
# Execute deployment on VM via SSH
ssh_command = [
"ssh",
"-i",
VM_KEY_PATH,
"-o",
"StrictHostKeyChecking=no",
"-o",
"UserKnownHostsFile=/dev/null",
f"{VM_USER}@{VM_HOST}",
deploy_script,
]
result = subprocess.run(
ssh_command,
capture_output=True,
text=True,
timeout=300, # 5 minute timeout
)
if result.returncode == 0:
logger.info(f"Deployment successful for commit {commit_sha[:8]}")
self._send_status_notification("success", commit_sha, commit_message)
else:
logger.error(
f"Deployment failed for commit {commit_sha[:8]}: {result.stderr}"
)
self._send_status_notification(
"failure", commit_sha, commit_message, result.stderr
)
except subprocess.TimeoutExpired:
logger.error("Deployment timed out")
self._send_status_notification("timeout", commit_sha, commit_message)
except Exception as e:
logger.error(f"Deployment error: {e}")
self._send_status_notification("error", commit_sha, commit_message, str(e))
def _send_status_notification(
self, status, commit_sha, commit_message, error_details=None
):
"""Send deployment status notification (optional)."""
# This could be extended to send notifications to Slack, Discord, etc.
status_msg = (
f"Deployment {status} for commit {commit_sha[:8]}: {commit_message}"
)
if error_details:
status_msg += f"\nError: {error_details}"
logger.info(f"Status: {status_msg}")
def log_message(self, format, *args):
"""Override to use our logger."""
logger.info(f"{self.client_address[0]} - {format % args}")
def main():
"""Main function to start the webhook listener."""
import argparse
parser = argparse.ArgumentParser(description="ThrillWiki GitHub Webhook Listener")
parser.add_argument(
"--port", type=int, default=WEBHOOK_PORT, help="Port to listen on"
)
parser.add_argument(
"--test",
action="store_true",
help="Test configuration without starting server",
)
args = parser.parse_args()
# Create logs directory
os.makedirs("logs", exist_ok=True)
# Validate configuration
if not WEBHOOK_SECRET:
logger.warning(
"WEBHOOK_SECRET not set - webhook signature verification disabled"
)
if not all([VM_HOST, VM_USER, PROJECT_PATH]):
logger.error("Missing required VM configuration")
if args.test:
print("❌ Configuration validation failed")
return
sys.exit(1)
logger.info(f"Webhook listener configuration:")
logger.info(f" Port: {args.port}")
logger.info(f" Target VM: {VM_USER}@{VM_HOST}")
logger.info(f" Project path: {PROJECT_PATH}")
logger.info(f" Deploy branch: {DEPLOY_BRANCH}")
if args.test:
print("✅ Configuration validation passed")
print(f"Webhook would listen on port {args.port}")
print(f"Target: {VM_USER}@{VM_HOST}")
return
logger.info(f"Starting webhook listener on port {args.port}")
try:
server = HTTPServer(("0.0.0.0", args.port), GitHubWebhookHandler)
logger.info(
f"Webhook listener started successfully on http://0.0.0.0:{args.port}"
)
logger.info("Health check available at: /health")
server.serve_forever()
except KeyboardInterrupt:
logger.info("Webhook listener stopped by user")
except Exception as e:
logger.error(f"Failed to start webhook listener: {e}")
sys.exit(1)
if __name__ == "__main__":
main()
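For reference, the `X-Hub-Signature-256` header that `_verify_signature` checks is an HMAC-SHA256 of the raw request body keyed with `WEBHOOK_SECRET`. A minimal sketch for generating a valid test signature against a locally running listener (the `test-secret` value and sample payload are illustrative, not from the original configuration):

```python
import hashlib
import hmac
import json

def sign_payload(secret: str, payload: bytes) -> str:
    # Mirrors the listener's _verify_signature computation:
    # GitHub sends "sha256=<hexdigest>" in the X-Hub-Signature-256 header.
    return "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()

# Sample push payload targeting the deploy branch (illustrative values)
body = json.dumps({"ref": "refs/heads/main", "after": "0" * 40}).encode()
signature = sign_payload("test-secret", body)
print(signature)
```

The resulting value could then be sent alongside the body, e.g. with `curl -X POST -H 'X-GitHub-Event: push' -H "X-Hub-Signature-256: $SIG" -d "$BODY" http://localhost:9000/`, to exercise both `_verify_signature` and `_should_deploy` end to end.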