feat: complete monorepo structure with frontend and shared resources

- Add complete backend/ directory with full Django application
- Add frontend/ directory with Vite + TypeScript setup ready for Next.js
- Add comprehensive shared/ directory with:
  - Complete documentation and memory-bank archives
  - Media files and avatars (letters, park/ride images)
  - Deployment scripts and automation tools
  - Shared types and utilities
- Add architecture/ directory with migration guides
- Configure pnpm workspace for monorepo development
- Update .gitignore to exclude .django_tailwind_cli/ build artifacts
- Preserve all historical documentation in shared/docs/memory-bank/
- Set up proper structure for full-stack development with shared resources
pacnpal
2025-08-23 18:40:07 -04:00
parent b0e0678590
commit d504d41de2
762 changed files with 142636 additions and 0 deletions


@@ -0,0 +1,10 @@
{
  "permissions": {
    "additionalDirectories": [
      "/Users/talor/thrillwiki_django_no_react"
    ],
    "allow": [
      "Bash(uv run:*)"
    ]
  }
}


@@ -0,0 +1,150 @@
# Non-Interactive Mode for ThrillWiki Automation
The ThrillWiki automation script supports a non-interactive mode (`-y` flag) that allows you to run the entire setup process without any user prompts. This is perfect for:
- **CI/CD pipelines**
- **Automated deployments**
- **Scripted environments**
- **Remote execution**
## Prerequisites
1. **Saved Configuration**: You must have run the script interactively at least once to create the saved configuration file (`.thrillwiki-config`).
2. **Environment Variables**: Set the required environment variables for sensitive credentials that aren't saved to disk.
## Required Environment Variables
### Always Required
- `UNRAID_PASSWORD` - Your Unraid server password
### Required if GitHub API is enabled
- `GITHUB_TOKEN` - Your GitHub personal access token (if using token auth method)
### Required if Webhooks are enabled
- `WEBHOOK_SECRET` - Your GitHub webhook secret
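If you want the run to fail fast, a minimal pre-flight sketch along these lines can confirm the variables are present before the script starts (check only the variables your configuration actually enables):
```bash
#!/bin/bash
# Hypothetical pre-flight check before running non-interactive mode.
missing=0
for var in UNRAID_PASSWORD GITHUB_TOKEN WEBHOOK_SECRET; do
    if [ -z "${!var}" ]; then
        echo "Missing required environment variable: $var" >&2
        missing=1
    fi
done
[ "$missing" -eq 0 ] && ./setup-complete-automation.sh -y
```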
## Usage Examples
### Basic Non-Interactive Setup
```bash
# Set required credentials
export UNRAID_PASSWORD="your_unraid_password"
export GITHUB_TOKEN="your_github_token"
export WEBHOOK_SECRET="your_webhook_secret"
# Run in non-interactive mode
./setup-complete-automation.sh -y
```
### CI/CD Pipeline Example
```bash
#!/bin/bash
set -e
# Load credentials from secure environment
export UNRAID_PASSWORD="$UNRAID_CREDS_PASSWORD"
export GITHUB_TOKEN="$GITHUB_API_TOKEN"
export WEBHOOK_SECRET="$WEBHOOK_SECRET_KEY"
# Deploy with no user interaction
cd scripts/unraid
./setup-complete-automation.sh -y
```
### Docker/Container Example
```bash
# Run from container with environment file
docker run --env-file ***REMOVED***.secrets \
-v $(pwd):/workspace \
your-automation-container \
/workspace/scripts/unraid/setup-complete-automation.sh -y
```
## Error Handling
The script will exit with clear error messages if:
- No saved configuration is found
- Required environment variables are missing
- OAuth tokens have expired (non-interactive mode cannot refresh them)
### Common Issues
**❌ No saved configuration**
```
[ERROR] No saved configuration found. Cannot run in non-interactive mode.
[ERROR] Please run the script without -y flag first to create initial configuration.
```
**Solution**: Run `./setup-complete-automation.sh` interactively first.
**❌ Missing password**
```
[ERROR] UNRAID_PASSWORD environment variable not set.
[ERROR] For non-interactive mode, set: export UNRAID_PASSWORD='your_password'
```
**Solution**: Set the `UNRAID_PASSWORD` environment variable.
**❌ Expired OAuth token**
```
[ERROR] OAuth token expired and cannot refresh in non-interactive mode
[ERROR] Please run without -y flag to re-authenticate with GitHub
```
**Solution**: Run interactively to refresh OAuth token, or switch to personal access token method.
## Security Best Practices
1. **Never commit credentials to version control**
2. **Use secure environment variable storage** (CI/CD secret stores, etc.)
3. **Rotate credentials regularly**
4. **Use minimal required permissions** for tokens
5. **Clear environment variables** after use if needed:
```bash
unset UNRAID_PASSWORD GITHUB_TOKEN WEBHOOK_SECRET
```
## Advanced Usage
### Combining with Reset Modes
```bash
# Reset VM only and redeploy non-interactively
export UNRAID_PASSWORD="password"
./setup-complete-automation.sh --reset-vm -y
```
### Using with Different Authentication Methods
```bash
# For OAuth method (no GITHUB_TOKEN needed if valid)
export UNRAID_PASSWORD="password"
export WEBHOOK_SECRET="secret"
./setup-complete-automation.sh -y
# For personal access token method
export UNRAID_PASSWORD="password"
export GITHUB_TOKEN="ghp_xxxx"
export WEBHOOK_SECRET="secret"
./setup-complete-automation.sh -y
```
### Environment File Pattern
```bash
# Create ***REMOVED***.automation (don't commit this!)
cat > ***REMOVED***.automation << EOF
UNRAID_PASSWORD=your_password_here
GITHUB_TOKEN=your_token_here
WEBHOOK_SECRET=your_secret_here
EOF
# Use it
source ***REMOVED***.automation
./setup-complete-automation.sh -y
# Clean up
rm ***REMOVED***.automation
```
## Integration Examples
See `example-non-interactive.sh` for a complete working example that you can customize for your needs.
The non-interactive mode makes it easy to integrate ThrillWiki deployment into your existing automation workflows while maintaining security and reliability.


@@ -0,0 +1,385 @@
# ThrillWiki Template-Based VM Deployment
This guide explains how to use the new **template-based VM deployment** system that dramatically speeds up VM creation by using a pre-configured Ubuntu template instead of autoinstall ISOs.
## Overview
### Traditional Approach (Slow)
- Create autoinstall ISO from scratch
- Boot VM from ISO (20-30 minutes)
- Wait for Ubuntu installation
- Configure system packages and dependencies
### Template Approach (Fast ⚡)
- Copy pre-configured VM disk from template
- Boot VM from template disk (2-5 minutes)
- System is already configured with Ubuntu, packages, and dependencies
## Prerequisites
1. **Template VM**: You must have a VM named `thrillwiki-template-ubuntu` on your Unraid server
2. **Template Configuration**: The template should be pre-configured with:
- Ubuntu 24.04 LTS
- Python 3, Git, PostgreSQL, Nginx
- UV package manager (optional but recommended)
- Basic system configuration
## Template VM Setup
### Creating the Template VM
1. **Create the template VM manually** on your Unraid server:
- Name: `thrillwiki-template-ubuntu`
- Install Ubuntu 24.04 LTS
- Configure with 4GB RAM, 2 vCPUs (can be adjusted later)
2. **Configure the template** by SSH'ing into it and running:
```bash
# Update system
sudo apt update && sudo apt upgrade -y
# Install required packages
sudo apt install -y git curl build-essential python3-pip python3-venv
sudo apt install -y postgresql postgresql-contrib nginx
# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.cargo/env
# Create thrillwiki user with password 'thrillwiki'
sudo useradd -m -s /bin/bash thrillwiki || true
echo 'thrillwiki:thrillwiki' | sudo chpasswd
sudo usermod -aG sudo thrillwiki
# Setup SSH key for thrillwiki user
# First, generate your SSH key on your Mac:
# ssh-keygen -t rsa -b 4096 -f ~/.ssh/thrillwiki_vm -N "" -C "thrillwiki-template-vm-access"
# Then copy the public key to the template VM:
sudo mkdir -p /home/thrillwiki/.ssh
echo "YOUR_PUBLIC_KEY_FROM_~/.ssh/thrillwiki_vm.pub" | sudo tee /home/thrillwiki/.ssh/***REMOVED***
sudo chown -R thrillwiki:thrillwiki /home/thrillwiki/.ssh
sudo chmod 700 /home/thrillwiki/.ssh
sudo chmod 600 /home/thrillwiki/.ssh/***REMOVED***
# Configure PostgreSQL
sudo systemctl enable postgresql
sudo systemctl start postgresql
# Configure Nginx
sudo systemctl enable nginx
# Clean up for template
sudo apt autoremove -y
sudo apt autoclean
history -c && history -w
# Shutdown template
sudo shutdown now
```
3. **Verify template** is stopped and ready:
```bash
./template-utils.sh status # Should show "shut off"
```
## Quick Start
### Step 0: Set Up SSH Key (First Time Only)
**IMPORTANT**: Before using template deployment, set up your SSH key:
```bash
# Generate and configure SSH key
./scripts/unraid/setup-ssh-key.sh
# Follow the instructions to add the public key to your template VM
```
See `TEMPLATE_VM_SETUP.md` for complete template VM setup instructions.
### Using the Utility Script
The easiest way to work with template VMs is using the utility script:
```bash
# Check if template is ready
./template-utils.sh check
# Get template information
./template-utils.sh info
# Deploy a new VM from template
./template-utils.sh deploy my-thrillwiki-vm
# Copy template to new VM (without full deployment)
./template-utils.sh copy my-vm-name
# List all template-based VMs
./template-utils.sh list
```
### Using Python Scripts Directly
For more control, use the Python scripts:
```bash
# Set environment variables
export UNRAID_HOST="your.unraid.server.ip"
export UNRAID_USER="root"
export VM_NAME="my-thrillwiki-vm"
export REPO_URL="owner/repository-name"
# Deploy VM from template
python3 main_template.py deploy
# Just create VM without ThrillWiki setup
python3 main_template.py setup
# Get VM status and IP
python3 main_template.py status
python3 main_template.py ip
# Manage template
python3 main_template.py template info
python3 main_template.py template check
```
## File Structure
### New Template-Based Files
```
scripts/unraid/
├── template_manager.py # Template VM management
├── vm_manager_template.py # Template-based VM manager
├── main_template.py # Template deployment orchestrator
├── template-utils.sh # Quick utility commands
├── deploy-thrillwiki-template.sh # Optimized deployment script
├── thrillwiki-vm-template-simple.xml # VM XML without autoinstall ISO
└── README-template-deployment.md # This documentation
```
### Original Files (Still Available)
```
scripts/unraid/
├── main.py # Original autoinstall approach
├── vm_manager.py # Original VM manager
├── deploy-thrillwiki.sh # Original deployment script
└── thrillwiki-vm-template.xml # Original XML with autoinstall
```
## Commands Reference
### Template Management
```bash
# Check template status
./template-utils.sh status
python3 template_manager.py check
# Get template information
./template-utils.sh info
python3 template_manager.py info
# List VMs created from template
./template-utils.sh list
python3 template_manager.py list
# Update template instructions
./template-utils.sh update
python3 template_manager.py update
```
### VM Deployment
```bash
# Complete deployment (VM + ThrillWiki)
./template-utils.sh deploy VM_NAME
python3 main_template.py deploy
# VM setup only
python3 main_template.py setup
# Individual operations
python3 main_template.py create
python3 main_template.py start
python3 main_template.py stop
python3 main_template.py delete
```
### VM Information
```bash
# Get VM status
python3 main_template.py status
# Get VM IP and connection info
python3 main_template.py ip
# Get detailed VM information
python3 main_template.py info
```
## Environment Variables
Configure these in your `***REMOVED***.unraid` file or export them:
```bash
# Required
UNRAID_HOST="192.168.1.100" # Your Unraid server IP
UNRAID_USER="root" # Unraid SSH user
VM_NAME="thrillwiki-vm" # Name for new VM
# Optional VM Configuration
VM_MEMORY="4096" # Memory in MB
VM_VCPUS="2" # Number of vCPUs
VM_DISK_SIZE="50" # Disk size in GB (for reference)
VM_IP="dhcp" # IP configuration (dhcp or static IP)
# ThrillWiki Configuration
REPO_URL="owner/repository-name" # GitHub repository
GITHUB_TOKEN="ghp_xxxxx" # GitHub token (optional)
```
## Advantages of Template Approach
### Speed ⚡
- **VM Creation**: 2-5 minutes vs 20-30 minutes
- **Boot Time**: Boots an already-installed system in seconds vs sitting through a full Ubuntu installation
- **Total Deployment**: ~10 minutes vs ~45 minutes
### Reliability 🔒
- **Pre-tested**: Template is already configured and tested
- **Consistent**: All VMs start from identical base
- **No Installation Failures**: No autoinstall ISO issues
### Efficiency 💾
- **Disk Space**: Copy-on-write QCOW2 format (see the sketch after this list)
- **Network**: No ISO downloads during deployment
- **Resources**: Less CPU usage during creation
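For illustration, a copy-on-write clone of the template disk can be made with `qemu-img`; the destination path and VM name below are assumptions, and the deployment scripts may copy the disk differently:
```bash
# Hypothetical sketch: create a QCOW2 overlay that shares the template's blocks
# and only stores what the new VM changes (run on the Unraid server).
TEMPLATE_DISK="/mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2"
NEW_VM_DISK="/mnt/user/domains/my-thrillwiki-vm/vdisk1.qcow2"

mkdir -p "$(dirname "$NEW_VM_DISK")"
qemu-img create -f qcow2 -b "$TEMPLATE_DISK" -F qcow2 "$NEW_VM_DISK"
qemu-img info "$NEW_VM_DISK"   # reports the backing file and virtual size
```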
## Troubleshooting
### Template Not Found
```
❌ Template VM disk not found at: /mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2
```
**Solution**: Create the template VM first or verify the path.
### Template VM Running
```
⚠️ Template VM is currently running!
```
**Solution**: Stop the template VM before creating new instances:
```bash
ssh root@unraid-host "virsh shutdown thrillwiki-template-ubuntu"
```
### SSH Connection Issues
```
❌ Cannot connect to Unraid server
```
**Solutions**:
1. Verify `UNRAID_HOST` is correct
2. Ensure SSH key authentication is set up
3. Check network connectivity (a quick check is sketched below)
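A quick connectivity and key check, assuming the same environment variables used elsewhere in this guide:
```bash
# Fails fast instead of hanging on a wrong host or missing SSH key.
ssh -o BatchMode=yes -o ConnectTimeout=5 "${UNRAID_USER:-root}@${UNRAID_HOST}" \
    'echo "SSH to Unraid OK on $(hostname)"' || echo "SSH check failed" >&2
```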
### Template Disk Corruption
If template VM gets corrupted:
1. Start template VM and fix issues
2. Or recreate template VM from scratch
3. Update template: `./template-utils.sh update`
## Template Maintenance
### Updating the Template
Periodically update your template (a disk-backup sketch follows these steps):
1. **Start template VM** on Unraid
2. **SSH into template** and update:
```bash
sudo apt update && sudo apt upgrade -y
sudo apt autoremove -y && sudo apt autoclean
# Update UV if installed
~/.cargo/bin/uv --version
# Clear history
history -c && history -w
```
3. **Shutdown template VM**
4. **Verify update**: `./template-utils.sh check`
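Before updating, it can be worth keeping a fallback copy of the template disk. A minimal sketch, assuming the default template path, run on the Unraid server while the template VM is stopped:
```bash
# Optional: back up the template disk so a bad update is recoverable.
TEMPLATE_DISK="/mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2"
cp --sparse=always "$TEMPLATE_DISK" "${TEMPLATE_DISK%.qcow2}.pre-update-$(date +%Y%m%d).qcow2"
```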
### Template Best Practices
- Keep template VM stopped when not maintaining it
- Update template monthly or before major deployments
- Test template by creating a test VM before important deployments
- Document any custom configurations in the template
## Migration Guide
### From Autoinstall to Template
1. **Create your template VM** following the setup guide above
2. **Test template deployment**:
```bash
./template-utils.sh deploy test-vm
```
3. **Update your automation scripts** to use template approach
4. **Keep autoinstall scripts** as backup for special cases
### Switching Between Approaches
You can use both approaches as needed:
```bash
# Template-based (fast)
python3 main_template.py deploy
# Autoinstall-based (traditional)
python3 main.py setup
```
## Integration with CI/CD
The template approach integrates perfectly with your existing CI/CD:
```bash
# In your automation scripts
export UNRAID_HOST="your-server"
export VM_NAME="thrillwiki-$(date +%s)"
export REPO_URL="your-org/thrillwiki"
# Deploy quickly
./scripts/unraid/template-utils.sh deploy "$VM_NAME"
# VM is ready in minutes instead of 30+ minutes
```
## FAQ
**Q: Can I use both template and autoinstall approaches?**
A: Yes! Keep both. Use template for speed, autoinstall for special configurations.
**Q: How much disk space does template copying use?**
A: QCOW2 copy-on-write format means copies only store differences, saving space.
**Q: What if I need different Ubuntu versions?**
A: Create multiple template VMs (e.g., `thrillwiki-template-ubuntu-22`, `thrillwiki-template-ubuntu-24`).
**Q: Can I customize the template VM configuration?**
A: Yes! The template VM is just a regular VM. Customize it as needed.
**Q: Is this approach secure?**
A: Yes. Each VM gets a fresh copy and can be configured independently.
---
This template-based approach should make your VM deployments much faster and more reliable! 🚀


@@ -0,0 +1,131 @@
# ThrillWiki Unraid VM Automation
This directory contains scripts and configuration files for automating the creation and deployment of ThrillWiki VMs on Unraid servers using Ubuntu autoinstall.
## Files
- **`vm-manager.py`** - Main VM management script with direct kernel boot support
- **`thrillwiki-vm-template.xml`** - VM XML configuration template for libvirt
- **`cloud-init-template.yaml`** - Ubuntu autoinstall configuration template
- **`validate-autoinstall.py`** - Validation script for autoinstall configuration
## Key Features
### Direct Kernel Boot Approach
The system now uses direct kernel boot instead of GRUB-based boot for maximum reliability:
1. **Kernel Extraction**: Automatically extracts Ubuntu kernel and initrd files from the ISO
2. **Direct Boot**: VM boots directly using extracted kernel with explicit autoinstall parameters
3. **Reliable Autoinstall**: Kernel cmdline explicitly specifies `autoinstall ds=nocloud-net;s=cdrom:/` (a quick way to verify this is sketched below)
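To confirm a VM was actually defined with direct kernel boot, the domain XML can be inspected from the Unraid host; the host and VM names below are assumptions:
```bash
# Check the defined domain for direct kernel boot parameters.
ssh root@unraid-server 'virsh dumpxml thrillwiki-vm' | grep -E '<kernel>|<initrd>|<cmdline>'
# Expect the extracted kernel/initrd paths and a <cmdline> containing:
#   autoinstall ds=nocloud-net;s=cdrom:/
```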
### Schema-Compliant Configuration
The autoinstall configuration has been validated against Ubuntu's official schema:
- ✅ Proper network configuration structure
- ✅ Correct storage layout specification
- ✅ Valid shutdown configuration
- ✅ Schema-compliant field types and values
## Usage
### Environment Variables
Set these environment variables before running:
```bash
export UNRAID_HOST="your-unraid-server"
export UNRAID_USER="root"
export UNRAID_PASSWORD="your-password"
export SSH_PUBLIC_KEY="your-ssh-public-key"
export REPO_URL="https://github.com/your-username/thrillwiki.git"
export VM_IP="192.168.20.20" # or "dhcp" for DHCP
export VM_GATEWAY="192.168.20.1"
```
### Basic Operations
```bash
# Create and configure VM
./vm-manager.py create
# Start the VM
./vm-manager.py start
# Check VM status
./vm-manager.py status
# Get VM IP address
./vm-manager.py ip
# Complete setup (create + start + get IP)
./vm-manager.py setup
# Stop the VM
./vm-manager.py stop
# Delete VM and all files
./vm-manager.py delete
```
### Configuration Validation
```bash
# Validate autoinstall configuration
./validate-autoinstall.py
```
## How It Works
### VM Creation Process
1. **Extract Kernel**: Mount the Ubuntu ISO and extract `vmlinuz` and `initrd` from `/casper/` (sketched below)
2. **Create Cloud-Init ISO**: Generate configuration ISO with autoinstall settings
3. **Generate VM XML**: Create libvirt VM configuration with direct kernel boot
4. **Define VM**: Register VM as persistent domain in libvirt
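The kernel extraction step amounts to roughly the following; the ISO and destination paths are examples, and `vm-manager.py` performs the equivalent automatically on the Unraid host:
```bash
# Hypothetical sketch of extracting the installer kernel and initrd from the ISO.
ISO=/mnt/user/isos/ubuntu-24.04-live-server-amd64.iso
DEST=/mnt/user/domains/thrillwiki-vm

mkdir -p "$DEST" /mnt/ubuntu-iso
mount -o loop,ro "$ISO" /mnt/ubuntu-iso
cp /mnt/ubuntu-iso/casper/vmlinuz "$DEST/vmlinuz"
cp /mnt/ubuntu-iso/casper/initrd "$DEST/initrd"
umount /mnt/ubuntu-iso
```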
### Boot Process
1. **Direct Kernel Boot**: VM starts using extracted kernel and initrd directly
2. **Autoinstall Trigger**: Kernel cmdline forces Ubuntu installer into autoinstall mode
3. **Cloud-Init Data**: NoCloud datasource provides configuration from CD-ROM
4. **Automated Setup**: Ubuntu installs and configures ThrillWiki automatically
### Network Configuration
The system supports both static IP and DHCP configurations:
- **Static IP**: Set `VM_IP` to desired IP address (e.g., "192.168.20.20")
- **DHCP**: Set `VM_IP` to "dhcp" for automatic IP assignment
## Troubleshooting
### VM Console Access
Connect to VM console to monitor autoinstall progress:
```bash
ssh root@unraid-server
virsh console thrillwiki-vm
```
### Check VM Logs
View autoinstall logs inside the VM:
```bash
# After VM is accessible
ssh ubuntu@vm-ip
sudo journalctl -u cloud-init
tail -f /var/log/cloud-init.log
```
### Validation Errors
If autoinstall validation fails, check:
1. YAML syntax in `cloud-init-template.yaml` (a quick check is shown below)
2. Required fields according to Ubuntu schema
3. Proper data types for configuration values
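Independently of `validate-autoinstall.py`, a quick YAML parse can catch syntax and indentation mistakes early (assumes PyYAML is available):
```bash
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" cloud-init-template.yaml \
    && echo "YAML syntax OK"
```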
## Architecture Benefits
1. **Reliable Boot**: Direct kernel boot eliminates GRUB-related issues
2. **Schema Compliance**: Configuration validated against official Ubuntu schema
3. **Predictable Behavior**: Explicit kernel parameters ensure consistent autoinstall
4. **Clean Separation**: VM configuration, cloud-init, and kernel files are properly organized
5. **Easy Maintenance**: Modular design allows independent updates of components
This implementation provides a robust, schema-compliant solution for automated ThrillWiki deployment on Unraid VMs.


@@ -0,0 +1,245 @@
# Template VM Setup Instructions
## Prerequisites for Template-Based Deployment
Before using the template-based deployment system, you need to:
1. **Create the template VM** named `thrillwiki-template-ubuntu` on your Unraid server
2. **Configure SSH access** with your public key
3. **Set up the template** with all required software
## Step 1: Create Template VM on Unraid
1. Create a new VM on your Unraid server:
- **Name**: `thrillwiki-template-ubuntu`
- **OS**: Ubuntu 24.04 LTS
- **Memory**: 4GB (you can adjust this later for instances)
- **vCPUs**: 2 (you can adjust this later for instances)
- **Disk**: 50GB (sufficient for template)
2. Install Ubuntu 24.04 LTS using standard installation
## Step 2: Configure Template VM
SSH into your template VM and run the following setup:
### Create thrillwiki User
```bash
# Create the thrillwiki user with password 'thrillwiki'
sudo useradd -m -s /bin/bash thrillwiki
echo 'thrillwiki:thrillwiki' | sudo chpasswd
sudo usermod -aG sudo thrillwiki
# Switch to thrillwiki user for remaining setup
sudo su - thrillwiki
```
### Set Up SSH Access
**IMPORTANT**: Add your SSH public key to the template VM:
```bash
# Create .ssh directory
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Add your public key (replace with your actual public key)
echo "YOUR_PUBLIC_KEY_HERE" >> ~/.ssh/***REMOVED***
chmod 600 ~/.ssh/***REMOVED***
```
**To get your public key** (run this on your Mac):
```bash
# Generate key if it doesn't exist
if [ ! -f ~/.ssh/thrillwiki_vm ]; then
ssh-keygen -t rsa -b 4096 -f ~/.ssh/thrillwiki_vm -N "" -C "thrillwiki-template-vm-access"
fi
# Show your public key to copy
cat ~/.ssh/thrillwiki_vm.pub
```
Copy this public key and paste it into the template VM's ***REMOVED*** file.
### Install Required Software
```bash
# Update system
sudo apt update && sudo apt upgrade -y
# Install essential packages
sudo apt install -y \
git curl wget build-essential \
python3 python3-pip python3-venv python3-dev \
postgresql postgresql-contrib postgresql-client \
nginx \
htop tree vim nano \
software-properties-common
# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.cargo/env
# Add UV to PATH permanently
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
# Configure PostgreSQL
sudo systemctl enable postgresql
sudo systemctl start postgresql
# Create database user and database
sudo -u postgres createuser thrillwiki
sudo -u postgres createdb thrillwiki
sudo -u postgres psql -c "ALTER USER thrillwiki WITH PASSWORD 'thrillwiki';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki TO thrillwiki;"
# Configure Nginx
sudo systemctl enable nginx
# Create ThrillWiki directories
mkdir -p ~/thrillwiki ~/logs ~/backups
# Set up basic environment
echo "export DJANGO_SETTINGS_MODULE=thrillwiki.settings" >> ~/.bashrc
echo "export DATABASE_URL=[DATABASE-URL-REMOVED] >> ~/.bashrc
```
### Pre-install Common Python Packages (Optional)
```bash
# Create a base virtual environment with common packages
cd ~
python3 -m venv base_venv
source base_venv/bin/activate
pip install --upgrade pip
# Install common Django packages
pip install \
django \
psycopg2-binary \
gunicorn \
whitenoise \
python-decouple \
pillow \
requests
deactivate
```
### Clean Up Template
```bash
# Clean package cache
sudo apt autoremove -y
sudo apt autoclean
# Clear bash history
history -c
history -w
# Clear any temporary files
sudo find /tmp -type f -delete
sudo find /var/tmp -type f -delete
# Shutdown the template VM
sudo shutdown now
```
## Step 3: Verify Template Setup
After the template VM shuts down, verify it's ready:
```bash
# From your Mac, check the template
cd /path/to/your/thrillwiki/project
./scripts/unraid/template-utils.sh check
```
## Step 4: Test Template Deployment
Create a test VM from the template:
```bash
# Deploy a test VM
./scripts/unraid/template-utils.sh deploy test-thrillwiki-vm
# Check if it worked
ssh thrillwiki@<VM_IP> "echo 'Template VM working!'"
```
## Template VM Configuration Summary
Your template VM should now have (a quick verification sketch follows this list):
- **Username**: `thrillwiki` (password: `thrillwiki`)
- **SSH Access**: Your public key in `/home/thrillwiki/.ssh/***REMOVED***`
- **Python**: Python 3 with UV package manager
- **Database**: PostgreSQL with `thrillwiki` user and database
- **Web Server**: Nginx installed and enabled
- **Directories**: `~/thrillwiki`, `~/logs`, `~/backups` ready
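A rough way to verify those items from your Mac while the template VM is still running (the IP below is an assumption; shut the template down again afterwards):
```bash
# Hypothetical verification of the template checklist.
TEMPLATE_IP=192.168.1.50   # replace with your template VM's IP

ssh -i ~/.ssh/thrillwiki_vm "thrillwiki@$TEMPLATE_IP" '
    echo "User: $(whoami)"
    ~/.cargo/bin/uv --version || echo "uv missing"
    psql "postgresql://thrillwiki:thrillwiki@localhost/thrillwiki" -c "SELECT 1;" || echo "database check failed"
    systemctl is-enabled nginx postgresql
    ls -d ~/thrillwiki ~/logs ~/backups
'
```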
## SSH Configuration on Your Mac
The automation scripts will set this up, but you can also configure manually:
```bash
# Add to ~/.ssh/config
cat >> ~/.ssh/config << EOF
# ThrillWiki Template VM
Host thrillwiki-vm
HostName %h
User thrillwiki
IdentityFile ~/.ssh/thrillwiki_vm
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOF
```
## Next Steps
Once your template is set up:
1. **Run the automation setup**:
```bash
./scripts/unraid/setup-template-automation.sh
```
2. **Deploy VMs quickly**:
```bash
./scripts/unraid/template-utils.sh deploy my-vm-name
```
3. **Enjoy 5-10x faster deployments** (2-5 minutes instead of 20-30 minutes!)
## Troubleshooting
### SSH Access Issues
```bash
# Test SSH access to template (when it's running for updates)
ssh -i ~/.ssh/thrillwiki_vm thrillwiki@TEMPLATE_VM_IP
# If access fails, check:
# 1. Template VM is running
# 2. Public key is in ***REMOVED***
# 3. Permissions are correct (700 for .ssh, 600 for ***REMOVED***)
```
### Template VM Updates
```bash
# Start template VM on Unraid
# SSH in and update:
sudo apt update && sudo apt upgrade -y
~/.cargo/bin/uv --version # Check UV is still working
# Clean up and shutdown
sudo apt autoremove -y && sudo apt autoclean
history -c && history -w
sudo shutdown now
```
### Permission Issues
```bash
# If you get permission errors, ensure thrillwiki user owns everything
sudo chown -R thrillwiki:thrillwiki /home/thrillwiki/
sudo chmod 700 /home/thrillwiki/.ssh
sudo chmod 600 /home/thrillwiki/.ssh/***REMOVED***
```
Your template is now ready for lightning-fast VM deployments! ⚡


@@ -0,0 +1,206 @@
#cloud-config
autoinstall:
# version is an Autoinstall required field.
version: 1
# Install Ubuntu server packages and ThrillWiki dependencies
packages:
- ubuntu-server
- curl
- wget
- git
- python3
- python3-pip
- python3-venv
- nginx
- postgresql
- postgresql-contrib
- redis-server
- nodejs
- npm
- build-essential
- ufw
- fail2ban
- htop
- tree
- vim
- tmux
- qemu-guest-agent
# User creation
identity:
realname: 'ThrillWiki Admin'
username: thrillwiki
# Default [PASSWORD-REMOVED] (change after login)
password: '$6$rounds=4096$saltsalt$[AWS-SECRET-REMOVED]AzpI8g8T14F8VnhXo0sUkZV2NV6/.c77tHgVi34DgbPu.'
hostname: thrillwiki-vm
locale: en_US.UTF-8
keyboard:
layout: us
package_update: true
package_upgrade: true
# Use direct storage layout (no LVM)
storage:
swap:
size: 0
layout:
name: direct
# SSH configuration
ssh:
allow-pw: true
install-server: true
authorized-keys:
- {SSH_PUBLIC_KEY}
# Network configuration - will be replaced with proper config
network:
version: 2
ethernets:
enp1s0:
dhcp4: true
dhcp-identifier: mac
# Commands to run after installation
late-commands:
# Update GRUB
- curtin in-target -- update-grub
# Enable and start services
- curtin in-target -- systemctl enable qemu-guest-agent
- curtin in-target -- systemctl enable postgresql
- curtin in-target -- systemctl enable redis-server
- curtin in-target -- systemctl enable nginx
# Configure PostgreSQL
- curtin in-target -- sudo -u postgres createuser -s thrillwiki
- curtin in-target -- sudo -u postgres createdb thrillwiki_db
- curtin in-target -- sudo -u postgres psql -c "ALTER USER thrillwiki PASSWORD 'thrillwiki123';"
# Configure firewall
- curtin in-target -- ufw allow OpenSSH
- curtin in-target -- ufw allow 'Nginx Full'
- curtin in-target -- ufw --force enable
# Clone ThrillWiki repository if provided
- curtin in-target -- bash -c 'if [ -n "{GITHUB_REPO}" ]; then cd /home/thrillwiki && git clone "{GITHUB_REPO}" thrillwiki-app && chown -R thrillwiki:thrillwiki thrillwiki-app; fi'
# Create deployment script
- curtin in-target -- tee /home/thrillwiki/deploy-thrillwiki.sh << 'EOF'
#!/bin/bash
set -e
echo "=== ThrillWiki Deployment Script ==="
# Check if repo was cloned
if [ ! -d "/home/thrillwiki/thrillwiki-app" ]; then
echo "Repository not found. Please clone your ThrillWiki repository:"
echo "git clone YOUR_REPO_URL thrillwiki-app"
exit 1
fi
cd /home/thrillwiki/thrillwiki-app
# Create virtual environment
python3 -m venv venv
source venv/bin/activate
# Install Python dependencies
if [ -f "requirements.txt" ]; then
pip install -r requirements.txt
else
echo "Warning: requirements.txt not found"
fi
# Install Django if not in requirements
pip install django psycopg2-binary redis celery gunicorn
# Set up environment variables
cat > ***REMOVED*** << 'ENVEOF'
DEBUG=False
SECRET_KEY=your-secret-key-change-this
DATABASE_URL=[DATABASE-URL-REMOVED]
REDIS_URL=redis://localhost:6379/0
ALLOWED_HOSTS=localhost,127.0.0.1,thrillwiki-vm
ENVEOF
# Run Django setup commands
if [ -f "manage.py" ]; then
python manage.py collectstatic --noinput
python manage.py migrate
echo "from django.contrib.auth import get_user_model; User = get_user_model(); User.objects.create_superuser('admin', 'admin@thrillwiki.com', 'thrillwiki123') if not User.objects.filter(username='admin').exists() else None" | python manage.py shell
fi
# Configure Nginx
sudo tee /etc/nginx/sites-available/thrillwiki << 'NGINXEOF'
server {
listen 80;
server_name _;
location /static/ {
alias /home/thrillwiki/thrillwiki-app/staticfiles/;
}
location /media/ {
alias /home/thrillwiki/thrillwiki-app/media/;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
NGINXEOF
# Enable Nginx site
sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
sudo systemctl reload nginx
# Create systemd service for Django
sudo tee /etc/systemd/system/thrillwiki.service << 'SERVICEEOF'
[Unit]
Description=ThrillWiki Django App
After=network.target
[Service]
User=thrillwiki
Group=thrillwiki
[AWS-SECRET-REMOVED]wiki-app
[AWS-SECRET-REMOVED]wiki-app/venv/bin
ExecStart=/home/thrillwiki/thrillwiki-app/venv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 thrillwiki.wsgi:application
Restart=always
[Install]
WantedBy=multi-user.target
SERVICEEOF
# Enable and start ThrillWiki service
sudo systemctl daemon-reload
sudo systemctl enable thrillwiki
sudo systemctl start thrillwiki
echo "=== ThrillWiki deployment complete! ==="
echo "Access your application at: http://$(hostname -I | awk '{print $1}')"
echo "Django Admin: http://$(hostname -I | awk '{print $1}')/admin"
echo "Default superuser: admin / thrillwiki123"
echo ""
echo "Important: Change default passwords!"
EOF
# Make deployment script executable
- curtin in-target -- chmod +x /home/thrillwiki/deploy-thrillwiki.sh
- curtin in-target -- chown thrillwiki:thrillwiki /home/thrillwiki/deploy-thrillwiki.sh
# Clean up
- curtin in-target -- apt-get autoremove -y
- curtin in-target -- apt-get autoclean
# Reboot after installation
shutdown: reboot


@@ -0,0 +1,62 @@
#cloud-config
# Ubuntu autoinstall configuration
autoinstall:
  version: 1
  locale: en_US.UTF-8
  keyboard:
    layout: us
  network:
    version: 2
    ethernets:
      ens3:
        dhcp4: true
      enp1s0:
        dhcp4: true
      eth0:
        dhcp4: true
  ssh:
    install-server: true
    authorized-keys:
      - {SSH_PUBLIC_KEY}
    allow-pw: false
  storage:
    layout:
      name: lvm
  identity:
    hostname: thrillwiki-vm
    username: ubuntu
    password: "$6$rounds=4096$salt$hash" # disabled - ssh key only
  packages:
    - openssh-server
    - curl
    - git
    - python3
    - python3-pip
    - python3-venv
    - build-essential
    - postgresql
    - postgresql-contrib
    - nginx
    - nodejs
    - npm
    - wget
    - ca-certificates
    - openssl
    - dnsutils
    - net-tools
  early-commands:
    - systemctl stop ssh
  late-commands:
    # Enable sudo for ubuntu user
    - echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu
    # Install uv Python package manager
    - chroot /target su - ubuntu -c 'curl -LsSf https://astral.sh/uv/install.sh | sh || pip3 install uv'
    # Add uv to PATH
    - chroot /target su - ubuntu -c 'echo "export PATH=\$HOME/.cargo/bin:\$PATH" >> /home/ubuntu/.bashrc'
    # Clone ThrillWiki repository
    - chroot /target su - ubuntu -c 'cd /home/ubuntu && git clone {GITHUB_REPO} thrillwiki'
    # Setup systemd service for ThrillWiki
    - systemctl enable postgresql
    - systemctl enable nginx
  shutdown: reboot


@@ -0,0 +1,451 @@
#!/bin/bash
#
# ThrillWiki Template-Based Deployment Script
# Optimized for VMs deployed from templates that already have basic setup
#
# Function to log messages with timestamp
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a /home/ubuntu/thrillwiki-deploy.log
}
# Function to check if a command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Function to wait for network connectivity
wait_for_network() {
log "Waiting for network connectivity..."
local max_attempts=20 # Reduced from 30 since template VMs boot faster
local attempt=1
while [ $attempt -le $max_attempts ]; do
if curl -s --connect-timeout 5 https://github.com >/dev/null 2>&1; then
log "Network connectivity confirmed"
return 0
fi
log "Network attempt $attempt/$max_attempts failed, retrying in 5 seconds..."
sleep 5 # Reduced from 10 since template VMs should have faster networking
attempt=$((attempt + 1))
done
log "WARNING: Network connectivity check failed after $max_attempts attempts"
return 1
}
# Function to update system packages (lighter since template should be recent)
update_system() {
log "Updating system packages..."
# Quick update - template should already have most packages
sudo apt update || log "WARNING: apt update failed"
# Only upgrade security packages to save time
sudo apt list --upgradable 2>/dev/null | grep -q security && {
log "Installing security updates..."
sudo apt upgrade -y --with-new-pkgs -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" || log "WARNING: Security updates failed"
} || log "No security updates needed"
}
# Function to setup Python environment with template optimizations
setup_python_env() {
log "Setting up Python environment..."
# Check if uv is already available (should be in template)
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "Using existing uv installation from template"
uv --version
else
log "Installing uv (not found in template)..."
if wait_for_network; then
curl -LsSf --connect-timeout 30 --retry 2 --retry-delay 5 https://astral.sh/uv/install.sh | sh
export PATH="/home/ubuntu/.cargo/bin:$PATH"
else
log "WARNING: Network not available, falling back to pip"
fi
fi
# Setup virtual environment
if command_exists uv; then
log "Creating virtual environment with uv..."
if uv venv .venv && source .venv/bin/activate; then
if uv sync; then
log "Successfully set up environment with uv"
return 0
else
log "uv sync failed, falling back to pip"
fi
else
log "uv venv failed, falling back to pip"
fi
fi
# Fallback to pip with venv
log "Setting up environment with pip and venv"
if python3 -m venv .venv && source .venv/bin/activate; then
pip install --upgrade pip || log "WARNING: Failed to upgrade pip"
# Try different dependency installation methods
if [ -f pyproject.toml ]; then
log "Installing dependencies from pyproject.toml"
if pip install -e . || pip install .; then
log "Successfully installed dependencies from pyproject.toml"
return 0
else
log "Failed to install from pyproject.toml"
fi
fi
if [ -f requirements.txt ]; then
log "Installing dependencies from requirements.txt"
if pip install -r requirements.txt; then
log "Successfully installed dependencies from requirements.txt"
return 0
else
log "Failed to install from requirements.txt"
fi
fi
# Last resort: install common Django packages
log "Installing basic Django packages as fallback"
pip install django psycopg2-binary gunicorn || log "WARNING: Failed to install basic packages"
else
log "ERROR: Failed to create virtual environment"
return 1
fi
}
# Function to setup database (should already exist in template)
setup_database() {
log "Setting up PostgreSQL database..."
# Check if PostgreSQL is already running (should be in template)
if sudo systemctl is-active --quiet postgresql; then
log "PostgreSQL is already running"
else
log "Starting PostgreSQL service..."
sudo systemctl start postgresql || {
log "Failed to start PostgreSQL, trying alternative methods"
sudo service postgresql start || {
log "ERROR: Could not start PostgreSQL"
return 1
}
}
fi
# Check if database and user already exist (may be in template)
if sudo -u postgres psql -lqt | cut -d \| -f 1 | grep -qw thrillwiki_production; then
log "Database 'thrillwiki_production' already exists"
else
log "Creating database 'thrillwiki_production'..."
sudo -u postgres createdb thrillwiki_production || {
log "ERROR: Failed to create database"
return 1
}
fi
# Create/update database user
if sudo -u postgres psql -c "SELECT 1 FROM pg_user WHERE usename = 'ubuntu'" | grep -q 1; then
log "Database user 'ubuntu' already exists"
else
sudo -u postgres createuser ubuntu || log "WARNING: Failed to create user (may already exist)"
fi
# Grant permissions
sudo -u postgres psql -c "ALTER USER ubuntu WITH SUPERUSER;" || {
log "WARNING: Failed to grant superuser privileges, trying alternative permissions"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_production TO ubuntu;" || log "WARNING: Failed to grant database privileges"
}
log "Database setup completed"
}
# Function to run Django commands with fallbacks
run_django_commands() {
log "Running Django management commands..."
# Ensure we're in the virtual environment
if [ ! -d ".venv" ] || ! source .venv/bin/activate; then
log "WARNING: Virtual environment not found or failed to activate"
# Try to run without venv activation
fi
# Function to run a Django command with fallbacks
run_django_cmd() {
local cmd="$1"
local description="$2"
log "Running: $description"
# Try uv run first
if command_exists uv && uv run manage.py $cmd; then
log "Successfully ran '$cmd' with uv"
return 0
fi
# Try python in venv
if python manage.py $cmd; then
log "Successfully ran '$cmd' with python"
return 0
fi
# Try python3
if python3 manage.py $cmd; then
log "Successfully ran '$cmd' with python3"
return 0
fi
log "WARNING: Failed to run '$cmd'"
return 1
}
# Run migrations
run_django_cmd "migrate" "Database migrations" || log "WARNING: Database migration failed"
# Collect static files
run_django_cmd "collectstatic --noinput" "Static files collection" || log "WARNING: Static files collection failed"
# Build Tailwind CSS (if available)
if run_django_cmd "tailwind build" "Tailwind CSS build"; then
log "Tailwind CSS built successfully"
else
log "Tailwind CSS build not available or failed - this is optional"
fi
}
# Function to setup systemd services (may already exist in template)
setup_services() {
log "Setting up systemd services..."
# Check if systemd service files exist
if [ -f scripts/systemd/thrillwiki.service ]; then
log "Copying ThrillWiki systemd service..."
sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ || {
log "Failed to copy thrillwiki.service, creating basic service"
create_basic_service
}
else
log "Systemd service file not found, creating basic service"
create_basic_service
fi
# Copy webhook service if available
if [ -f scripts/systemd/thrillwiki-webhook.service ]; then
sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/ || {
log "Failed to copy webhook service, skipping"
}
else
log "Webhook service file not found, skipping"
fi
# Update service files with correct paths
if [ -f /etc/systemd/system/thrillwiki.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki.service
fi
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki-webhook.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki-webhook.service
fi
# Reload systemd and start services
sudo systemctl daemon-reload
# Enable and start main service
if sudo systemctl enable thrillwiki 2>/dev/null; then
log "ThrillWiki service enabled"
if sudo systemctl start thrillwiki; then
log "ThrillWiki service started successfully"
else
log "WARNING: Failed to start ThrillWiki service"
sudo systemctl status thrillwiki --no-pager || true
fi
else
log "WARNING: Failed to enable ThrillWiki service"
fi
# Try to start webhook service if it exists
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo systemctl enable thrillwiki-webhook 2>/dev/null && sudo systemctl start thrillwiki-webhook || {
log "WARNING: Failed to start webhook service"
}
fi
}
# Function to create a basic systemd service if none exists
create_basic_service() {
log "Creating basic systemd service..."
sudo tee /etc/systemd/system/thrillwiki.service > /dev/null << 'SERVICE_EOF'
[Unit]
Description=ThrillWiki Django Application
After=network.target postgresql.service
Wants=postgresql.service
[Service]
Type=exec
User=ubuntu
Group=ubuntu
[AWS-SECRET-REMOVED]
[AWS-SECRET-REMOVED]/.venv/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin
ExecStart=/home/ubuntu/thrillwiki/.venv/bin/python manage.py runserver 0.0.0.0:8000
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
SERVICE_EOF
log "Basic systemd service created"
}
# Function to setup web server (may already be configured in template)
setup_webserver() {
log "Setting up web server..."
# Check if nginx is installed and running
if command_exists nginx; then
if ! sudo systemctl is-active --quiet nginx; then
log "Starting nginx..."
sudo systemctl start nginx || log "WARNING: Failed to start nginx"
fi
# Create basic nginx config if none exists
if [ ! -f /etc/nginx/sites-available/thrillwiki ]; then
log "Creating nginx configuration..."
sudo tee /etc/nginx/sites-available/thrillwiki > /dev/null << 'NGINX_EOF'
server {
listen 80;
server_name _;
location /static/ {
alias /home/ubuntu/thrillwiki/staticfiles/;
}
location /media/ {
alias /home/ubuntu/thrillwiki/media/;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
NGINX_EOF
# Enable the site
sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/ || log "WARNING: Failed to enable nginx site"
sudo nginx -t && sudo systemctl reload nginx || log "WARNING: nginx configuration test failed"
else
log "nginx configuration already exists"
fi
else
log "nginx not installed, ThrillWiki will run on port 8000 directly"
fi
}
# Main deployment function
main() {
log "Starting ThrillWiki template-based deployment..."
# Shorter wait time since template VMs boot faster
log "Waiting for system to be ready..."
sleep 10
# Wait for network
wait_for_network || log "WARNING: Network check failed, continuing anyway"
# Clone or update repository
log "Setting up ThrillWiki repository..."
export GITHUB_TOKEN=$(cat /home/ubuntu/.github-token 2>/dev/null || echo "")
# Get the GitHub repository from environment or parameter
GITHUB_REPO="${1:-}"
if [ -z "$GITHUB_REPO" ]; then
log "ERROR: GitHub repository not specified"
return 1
fi
if [ -d "/home/ubuntu/thrillwiki" ]; then
log "ThrillWiki directory already exists, updating..."
cd /home/ubuntu/thrillwiki
git pull || log "WARNING: Failed to update repository"
else
if [ -n "$GITHUB_TOKEN" ]; then
log "Cloning with GitHub token..."
git clone https://$GITHUB_TOKEN@github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "Failed to clone with token, trying without..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
}
else
log "Cloning without GitHub token..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
fi
cd /home/ubuntu/thrillwiki
fi
# Update system (lighter for template VMs)
update_system
# Setup Python environment
setup_python_env || {
log "ERROR: Failed to set up Python environment"
return 1
}
# Setup environment file
log "Setting up environment configuration..."
if [ -f ***REMOVED***.example ]; then
cp ***REMOVED***.example ***REMOVED*** || log "WARNING: Failed to copy ***REMOVED***.example"
fi
# Update ***REMOVED*** with production settings
{
echo "DEBUG=False"
echo "DATABASE_URL=postgresql://ubuntu@localhost/thrillwiki_production"
echo "ALLOWED_HOSTS=*"
echo "STATIC_[AWS-SECRET-REMOVED]"
} >> ***REMOVED***
# Setup database
setup_database || {
log "ERROR: Database setup failed"
return 1
}
# Run Django commands
run_django_commands
# Setup systemd services
setup_services
# Setup web server
setup_webserver
log "ThrillWiki template-based deployment completed!"
log "Application should be available at http://$(hostname -I | awk '{print $1}'):8000"
log "Logs are available at /home/ubuntu/thrillwiki-deploy.log"
}
# Run main function and capture any errors
main "$@" 2>&1 | tee -a /home/ubuntu/thrillwiki-deploy.log
exit_code=${PIPESTATUS[0]}
if [ $exit_code -eq 0 ]; then
log "Template-based deployment completed successfully!"
else
log "Template-based deployment completed with errors (exit code: $exit_code)"
fi
exit $exit_code


@@ -0,0 +1,467 @@
#!/bin/bash
# Function to log messages with timestamp
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a /home/ubuntu/thrillwiki-deploy.log
}
# Function to check if a command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Function to wait for network connectivity
wait_for_network() {
log "Waiting for network connectivity..."
local max_attempts=30
local attempt=1
while [ $attempt -le $max_attempts ]; do
if curl -s --connect-timeout 5 https://github.com >/dev/null 2>&1; then
log "Network connectivity confirmed"
return 0
fi
log "Network attempt $attempt/$max_attempts failed, retrying in 10 seconds..."
sleep 10
attempt=$((attempt + 1))
done
log "WARNING: Network connectivity check failed after $max_attempts attempts"
return 1
}
# Function to install uv if not available
install_uv() {
log "Checking for uv installation..."
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "uv is already available"
return 0
fi
log "Installing uv..."
# Wait for network connectivity first
wait_for_network || {
log "Network not available, skipping uv installation"
return 1
}
# Try to install uv with multiple attempts
local max_attempts=3
local attempt=1
while [ $attempt -le $max_attempts ]; do
log "uv installation attempt $attempt/$max_attempts"
if curl -LsSf --connect-timeout 30 --retry 2 --retry-delay 5 https://astral.sh/uv/install.sh | sh; then
# Reload PATH
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "uv installed successfully"
return 0
else
log "uv installation completed but command not found, checking PATH..."
# Try to source the shell profile to get updated PATH
if [ -f /home/ubuntu/.bashrc ]; then
source /home/ubuntu/.bashrc 2>/dev/null || true
fi
if [ -f /home/ubuntu/.cargo/env ]; then
source /home/ubuntu/.cargo/env 2>/dev/null || true
fi
export PATH="/home/ubuntu/.cargo/bin:$PATH"
if command_exists uv; then
log "uv is now available after PATH update"
return 0
fi
fi
fi
log "uv installation attempt $attempt failed"
attempt=$((attempt + 1))
[ $attempt -le $max_attempts ] && sleep 10
done
log "Failed to install uv after $max_attempts attempts, will use pip fallback"
return 1
}
# Function to setup Python environment with fallbacks
setup_python_env() {
log "Setting up Python environment..."
# Try to install uv first if not available
install_uv
export PATH="/home/ubuntu/.cargo/bin:$PATH"
# Try uv first
if command_exists uv; then
log "Using uv for Python environment management"
if uv venv .venv && source .venv/bin/activate; then
if uv sync; then
log "Successfully set up environment with uv"
return 0
else
log "uv sync failed, falling back to pip"
fi
else
log "uv venv failed, falling back to pip"
fi
else
log "uv not available, using pip"
fi
# Fallback to pip with venv
log "Setting up environment with pip and venv"
if python3 -m venv .venv && source .venv/bin/activate; then
pip install --upgrade pip || log "WARNING: Failed to upgrade pip"
# Try different dependency installation methods
if [ -f pyproject.toml ]; then
log "Installing dependencies from pyproject.toml"
if pip install -e . || pip install .; then
log "Successfully installed dependencies from pyproject.toml"
return 0
else
log "Failed to install from pyproject.toml"
fi
fi
if [ -f requirements.txt ]; then
log "Installing dependencies from requirements.txt"
if pip install -r requirements.txt; then
log "Successfully installed dependencies from requirements.txt"
return 0
else
log "Failed to install from requirements.txt"
fi
fi
# Last resort: install common Django packages
log "Installing basic Django packages as fallback"
pip install django psycopg2-binary gunicorn || log "WARNING: Failed to install basic packages"
else
log "ERROR: Failed to create virtual environment"
return 1
fi
}
# Function to setup database with fallbacks
setup_database() {
log "Setting up PostgreSQL database..."
# Ensure PostgreSQL is running
if ! sudo systemctl is-active --quiet postgresql; then
log "Starting PostgreSQL service..."
sudo systemctl start postgresql || {
log "Failed to start PostgreSQL, trying alternative methods"
sudo service postgresql start || {
log "ERROR: Could not start PostgreSQL"
return 1
}
}
fi
# Create database user and database with error handling
if sudo -u postgres createuser ubuntu 2>/dev/null || sudo -u postgres psql -c "SELECT 1 FROM pg_user WHERE usename = 'ubuntu'" | grep -q 1; then
log "Database user 'ubuntu' created or already exists"
else
log "ERROR: Failed to create database user"
return 1
fi
if sudo -u postgres createdb thrillwiki_production 2>/dev/null || sudo -u postgres psql -lqt | cut -d \| -f 1 | grep -qw thrillwiki_production; then
log "Database 'thrillwiki_production' created or already exists"
else
log "ERROR: Failed to create database"
return 1
fi
# Grant permissions
sudo -u postgres psql -c "ALTER USER ubuntu WITH SUPERUSER;" || {
log "WARNING: Failed to grant superuser privileges, trying alternative permissions"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_production TO ubuntu;" || log "WARNING: Failed to grant database privileges"
}
log "Database setup completed"
}
# Function to run Django commands with fallbacks
run_django_commands() {
log "Running Django management commands..."
# Ensure we're in the virtual environment
if [ ! -d ".venv" ] || ! source .venv/bin/activate; then
log "WARNING: Virtual environment not found or failed to activate"
# Try to run without venv activation
fi
# Function to run a Django command with fallbacks
run_django_cmd() {
local cmd="$1"
local description="$2"
log "Running: $description"
# Try uv run first
if command_exists uv && uv run manage.py $cmd; then
log "Successfully ran '$cmd' with uv"
return 0
fi
# Try python in venv
if python manage.py $cmd; then
log "Successfully ran '$cmd' with python"
return 0
fi
# Try python3
if python3 manage.py $cmd; then
log "Successfully ran '$cmd' with python3"
return 0
fi
log "WARNING: Failed to run '$cmd'"
return 1
}
# Run migrations
run_django_cmd "migrate" "Database migrations" || log "WARNING: Database migration failed"
# Collect static files
run_django_cmd "collectstatic --noinput" "Static files collection" || log "WARNING: Static files collection failed"
# Build Tailwind CSS (if available)
if run_django_cmd "tailwind build" "Tailwind CSS build"; then
log "Tailwind CSS built successfully"
else
log "Tailwind CSS build not available or failed - this is optional"
fi
}
# Function to setup systemd services with fallbacks
setup_services() {
log "Setting up systemd services..."
# Check if systemd service files exist
if [ -f scripts/systemd/thrillwiki.service ]; then
sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ || {
log "Failed to copy thrillwiki.service, creating basic service"
create_basic_service
}
else
log "Systemd service file not found, creating basic service"
create_basic_service
fi
if [ -f scripts/systemd/thrillwiki-webhook.service ]; then
sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/ || {
log "Failed to copy webhook service, skipping"
}
else
log "Webhook service file not found, skipping"
fi
# Update service files with correct paths
if [ -f /etc/systemd/system/thrillwiki.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki.service
fi
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki-webhook.service
sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki-webhook.service
fi
# Reload systemd and start services
sudo systemctl daemon-reload
if sudo systemctl enable thrillwiki 2>/dev/null; then
log "ThrillWiki service enabled"
if sudo systemctl start thrillwiki; then
log "ThrillWiki service started successfully"
else
log "WARNING: Failed to start ThrillWiki service"
sudo systemctl status thrillwiki --no-pager || true
fi
else
log "WARNING: Failed to enable ThrillWiki service"
fi
# Try to start webhook service if it exists
if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then
sudo systemctl enable thrillwiki-webhook 2>/dev/null && sudo systemctl start thrillwiki-webhook || {
log "WARNING: Failed to start webhook service"
}
fi
}
# Function to create a basic systemd service if none exists
create_basic_service() {
log "Creating basic systemd service..."
sudo tee /etc/systemd/system/thrillwiki.service > /dev/null << 'SERVICE_EOF'
[Unit]
Description=ThrillWiki Django Application
After=network.target postgresql.service
Wants=postgresql.service
[Service]
Type=exec
User=ubuntu
Group=ubuntu
[AWS-SECRET-REMOVED]
[AWS-SECRET-REMOVED]/.venv/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin
ExecStart=/home/ubuntu/thrillwiki/.venv/bin/python manage.py runserver 0.0.0.0:8000
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
SERVICE_EOF
log "Basic systemd service created"
}
# Function to setup web server (nginx) with fallbacks
setup_webserver() {
log "Setting up web server..."
# Check if nginx is installed and running
if command_exists nginx; then
if ! sudo systemctl is-active --quiet nginx; then
log "Starting nginx..."
sudo systemctl start nginx || log "WARNING: Failed to start nginx"
fi
# Create basic nginx config if none exists
if [ ! -f /etc/nginx/sites-available/thrillwiki ]; then
log "Creating nginx configuration..."
sudo tee /etc/nginx/sites-available/thrillwiki > /dev/null << 'NGINX_EOF'
server {
listen 80;
server_name _;
location /static/ {
alias /home/ubuntu/thrillwiki/staticfiles/;
}
location /media/ {
alias /home/ubuntu/thrillwiki/media/;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
NGINX_EOF
# Enable the site
sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/ || log "WARNING: Failed to enable nginx site"
sudo nginx -t && sudo systemctl reload nginx || log "WARNING: nginx configuration test failed"
fi
else
log "nginx not installed, ThrillWiki will run on port 8000 directly"
fi
}
# Main deployment function
main() {
log "Starting ThrillWiki deployment..."
# Wait for system to be ready
log "Waiting for system to be ready..."
sleep 30
# Wait for network
wait_for_network || log "WARNING: Network check failed, continuing anyway"
# Clone repository
log "Cloning ThrillWiki repository..."
export GITHUB_TOKEN=$(cat /home/ubuntu/.github-token 2>/dev/null || echo "")
# Get the GitHub repository from environment or parameter
GITHUB_REPO="${1:-}"
if [ -z "$GITHUB_REPO" ]; then
log "ERROR: GitHub repository not specified"
return 1
fi
if [ -d "/home/ubuntu/thrillwiki" ]; then
log "ThrillWiki directory already exists, updating..."
cd /home/ubuntu/thrillwiki
git pull || log "WARNING: Failed to update repository"
else
if [ -n "$GITHUB_TOKEN" ]; then
log "Cloning with GitHub token..."
git clone https://$GITHUB_TOKEN@github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "Failed to clone with token, trying without..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
}
else
log "Cloning without GitHub token..."
git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || {
log "ERROR: Failed to clone repository"
return 1
}
fi
cd /home/ubuntu/thrillwiki
fi
# Setup Python environment
setup_python_env || {
log "ERROR: Failed to set up Python environment"
return 1
}
# Setup environment file
log "Setting up environment configuration..."
if [ -f ***REMOVED***.example ]; then
cp ***REMOVED***.example ***REMOVED*** || log "WARNING: Failed to copy ***REMOVED***.example"
fi
# Update ***REMOVED*** with production settings
{
echo "DEBUG=False"
echo "DATABASE_URL=postgresql://ubuntu@localhost/thrillwiki_production"
echo "ALLOWED_HOSTS=*"
echo "STATIC_[AWS-SECRET-REMOVED]"
} >> ***REMOVED***
# Setup database
setup_database || {
log "ERROR: Database setup failed"
return 1
}
# Run Django commands
run_django_commands
# Setup systemd services
setup_services
# Setup web server
setup_webserver
log "ThrillWiki deployment completed!"
log "Application should be available at http://$(hostname -I | awk '{print $1}'):8000"
log "Logs are available at /home/ubuntu/thrillwiki-deploy.log"
}
# Run main function and capture any errors
main "$@" 2>&1 | tee -a /home/ubuntu/thrillwiki-deploy.log
exit_code=${PIPESTATUS[0]}
if [ $exit_code -eq 0 ]; then
log "Deployment completed successfully!"
else
log "Deployment completed with errors (exit code: $exit_code)"
fi
exit $exit_code

View File

@@ -0,0 +1,39 @@
#!/bin/bash
# Example: How to use non-interactive mode for ThrillWiki setup
#
# This script shows how to set up environment variables for non-interactive mode
# and run the automation without any user prompts.
echo "🤖 ThrillWiki Non-Interactive Setup Example"
echo "[AWS-SECRET-REMOVED]=="
# Set required environment variables for non-interactive mode
# These replace the interactive prompts
# Unraid password (REQUIRED)
export UNRAID_PASSWORD="your_unraid_password_here"
# GitHub token (REQUIRED if using GitHub API)
export GITHUB_TOKEN="your_github_token_here"
# Webhook secret (REQUIRED if webhooks enabled)
export WEBHOOK_SECRET="your_webhook_secret_here"
echo "✅ Environment variables set"
echo "📋 Configuration summary:"
echo " - UNRAID_PASSWORD: [HIDDEN]"
echo " - GITHUB_TOKEN: [HIDDEN]"
echo " - WEBHOOK_SECRET: [HIDDEN]"
echo
echo "🚀 Starting non-interactive setup..."
echo "This will use saved configuration and the environment variables above"
echo
# Run the setup script in non-interactive mode
./setup-complete-automation.sh -y
echo
echo "✨ Non-interactive setup completed!"
echo "📝 Note: This example script should be customized with your actual credentials"

View File

@@ -0,0 +1,531 @@
#!/usr/bin/env python3
"""
Ubuntu ISO Builder for Autoinstall
Follows the Ubuntu autoinstall guide exactly:
1. Download Ubuntu ISO
2. Extract with 7zip equivalent
3. Modify GRUB configuration
4. Add server/ directory with autoinstall config
5. Rebuild ISO with xorriso equivalent
"""
import os
import logging
import subprocess
import tempfile
import shutil
import urllib.request
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
# Ubuntu ISO URLs with fallbacks
UBUNTU_MIRRORS = [
"https://releases.ubuntu.com", # Official Ubuntu releases (primary)
"http://archive.ubuntu.com/ubuntu-releases", # Official archive
"http://mirror.csclub.uwaterloo.ca/ubuntu-releases", # University of Waterloo
"http://mirror.math.princeton.edu/pub/ubuntu-releases", # Princeton mirror
]
UBUNTU_24_04_ISO = "24.04/ubuntu-24.04.3-live-server-amd64.iso"
UBUNTU_22_04_ISO = "22.04/ubuntu-22.04.3-live-server-amd64.iso"
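# Pinned point-release paths used as defaults; Canonical retires old point releases over time, so these may need bumping (get_latest_ubuntu_server_iso below can discover the current one).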
def get_latest_ubuntu_server_iso(version: str) -> Optional[str]:
"""Dynamically find the latest point release for a given Ubuntu version."""
try:
import re
for mirror in UBUNTU_MIRRORS:
try:
url = f"{mirror}/{version}/"
response = urllib.request.urlopen(url, timeout=10)
content = response.read().decode("utf-8")
# Find all server ISO files for this version
pattern = rf"ubuntu-{
re.escape(version)}\.[0-9]+-live-server-amd64\.iso"
matches = re.findall(pattern, content)
if matches:
# Sort by version and return the latest
matches.sort(key=lambda x: [int(n) for n in re.findall(r"\d+", x)])
latest_iso = matches[-1]
return f"{version}/{latest_iso}"
except Exception as e:
logger.debug(f"Failed to check {mirror}/{version}/: {e}")
continue
logger.warning(f"Could not dynamically detect latest ISO for Ubuntu {version}")
return None
except Exception as e:
logger.error(f"Error in dynamic ISO detection: {e}")
return None
class UbuntuISOBuilder:
"""Builds modified Ubuntu ISO with autoinstall configuration."""
def __init__(self, vm_name: str, work_dir: Optional[str] = None):
self.vm_name = vm_name
self.work_dir = (
Path(work_dir)
if work_dir
else Path(tempfile.mkdtemp(prefix="ubuntu-autoinstall-"))
)
self.source_files_dir = self.work_dir / "source-files"
self.boot_dir = self.work_dir / "BOOT"
self.server_dir = self.source_files_dir / "server"
self.grub_cfg_path = self.source_files_dir / "boot" / "grub" / "grub.cfg"
# Ensure directories exist
self.work_dir.mkdir(exist_ok=True, parents=True)
self.source_files_dir.mkdir(exist_ok=True, parents=True)
def check_tools(self) -> bool:
"""Check if required tools are available."""
# Check for 7zip equivalent (p7zip on macOS/Linux)
if not shutil.which("7z") and not shutil.which("7za"):
logger.error(
"7zip not found. Install with: brew install p7zip (macOS) or apt install p7zip-full (Ubuntu)"
)
return False
# Check for xorriso equivalent
if (
not shutil.which("xorriso")
and not shutil.which("mkisofs")
and not shutil.which("hdiutil")
):
logger.error(
"No ISO creation tool found. Install xorriso, mkisofs, or use macOS hdiutil"
)
return False
return True
def download_ubuntu_iso(self, version: str = "24.04") -> Path:
"""Download Ubuntu ISO if not already present, trying multiple mirrors."""
iso_filename = f"ubuntu-{version}-live-server-amd64.iso"
iso_path = self.work_dir / iso_filename
if iso_path.exists():
logger.info(f"Ubuntu ISO already exists: {iso_path}")
return iso_path
if version == "24.04":
iso_subpath = UBUNTU_24_04_ISO
elif version == "22.04":
iso_subpath = UBUNTU_22_04_ISO
else:
raise ValueError(f"Unsupported Ubuntu version: {version}")
# Try each mirror until one works
last_error = None
for mirror in UBUNTU_MIRRORS:
iso_url = f"{mirror}/{iso_subpath}"
logger.info(f"Trying to download Ubuntu {version} ISO from {iso_url}")
try:
# Try downloading from this mirror
urllib.request.urlretrieve(iso_url, iso_path)
logger.info(
f"✅ Ubuntu ISO downloaded successfully from {mirror}: {iso_path}"
)
return iso_path
except Exception as e:
last_error = e
logger.warning(f"Failed to download from {mirror}: {e}")
# Remove partial download if it exists
if iso_path.exists():
iso_path.unlink()
continue
# If we get here, all mirrors failed
logger.error(
f"Failed to download Ubuntu ISO from all mirrors. Last error: {last_error}"
)
raise last_error
def extract_iso(self, iso_path: Path) -> bool:
"""Extract Ubuntu ISO following the guide."""
logger.info(f"Extracting ISO: {iso_path}")
# Use 7z to extract ISO
seven_zip_cmd = "7z" if shutil.which("7z") else "7za"
try:
# Extract ISO: 7z -y x ubuntu.iso -osource-files
subprocess.run(
[
seven_zip_cmd,
"-y",
"x",
str(iso_path),
f"-o{self.source_files_dir}",
],
capture_output=True,
text=True,
check=True,
)
logger.info("ISO extracted successfully")
# Move [BOOT] directory as per guide: mv '[BOOT]' ../BOOT
boot_source = self.source_files_dir / "[BOOT]"
if boot_source.exists():
shutil.move(str(boot_source), str(self.boot_dir))
logger.info(f"Moved [BOOT] directory to {self.boot_dir}")
else:
logger.warning("[BOOT] directory not found in extracted files")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Failed to extract ISO: {e.stderr}")
return False
except Exception as e:
logger.error(f"Error extracting ISO: {e}")
return False
def modify_grub_config(self) -> bool:
"""Modify GRUB configuration to add autoinstall menu entry."""
logger.info("Modifying GRUB configuration...")
if not self.grub_cfg_path.exists():
logger.error(f"GRUB config not found: {self.grub_cfg_path}")
return False
try:
# Read existing GRUB config
with open(self.grub_cfg_path, "r", encoding="utf-8") as f:
grub_content = f.read()
# Autoinstall menu entry as per guide
autoinstall_entry = """menuentry "Autoinstall Ubuntu Server" {
set gfxpayload=keep
linux /casper/vmlinuz quiet autoinstall ds=nocloud\\;s=/cdrom/server/ ---
initrd /casper/initrd
}
"""
# Insert autoinstall entry at the beginning of menu entries
# Find the first menuentry and insert before it
import re
first_menu_match = re.search(r'(menuentry\s+["\'])', grub_content)
if first_menu_match:
insert_pos = first_menu_match.start()
modified_content = (
grub_content[:insert_pos]
+ autoinstall_entry
+ grub_content[insert_pos:]
)
else:
# Fallback: append at the end
modified_content = grub_content + "\n" + autoinstall_entry
# Write modified GRUB config
with open(self.grub_cfg_path, "w", encoding="utf-8") as f:
f.write(modified_content)
logger.info("GRUB configuration modified successfully")
return True
except Exception as e:
logger.error(f"Failed to modify GRUB config: {e}")
return False
def create_autoinstall_config(self, user_data: str) -> bool:
"""Create autoinstall configuration in server/ directory."""
logger.info("Creating autoinstall configuration...")
try:
# Create server directory
self.server_dir.mkdir(exist_ok=True, parents=True)
# Create empty meta-data file (as per guide)
meta_data_path = self.server_dir / "meta-data"
meta_data_path.touch()
logger.info(f"Created empty meta-data: {meta_data_path}")
# Create user-data file with autoinstall configuration
user_data_path = self.server_dir / "user-data"
with open(user_data_path, "w", encoding="utf-8") as f:
f.write(user_data)
logger.info(f"Created user-data: {user_data_path}")
return True
except Exception as e:
logger.error(f"Failed to create autoinstall config: {e}")
return False
def rebuild_iso(self, output_path: Path) -> bool:
"""Rebuild ISO with autoinstall configuration using xorriso."""
logger.info(f"Rebuilding ISO: {output_path}")
try:
# Change to source-files directory for xorriso command
original_cwd = os.getcwd()
os.chdir(self.source_files_dir)
# Remove existing output file
if output_path.exists():
output_path.unlink()
# Try different ISO creation methods in order of preference
success = False
# Method 1: xorriso (most complete)
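# The flags below aim to reproduce the stock Ubuntu ISO's hybrid BIOS/UEFI layout: the MBR and EFI boot images extracted into ../BOOT are re-attached, and both El Torito boot entries are recreated.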
if shutil.which("xorriso") and not success:
try:
logger.info("Trying xorriso method...")
cmd = [
"xorriso",
"-as",
"mkisofs",
"-r",
"-V",
f"Ubuntu 24.04 LTS AUTO (EFIBIOS)",
"-o",
str(output_path),
"--grub2-mbr",
f"..{os.sep}BOOT{os.sep}1-Boot-NoEmul.img",
"-partition_offset",
"16",
"--mbr-force-bootable",
"-append_partition",
"2",
"28732ac11ff8d211ba4b00a0c93ec93b",
f"..{os.sep}BOOT{os.sep}2-Boot-NoEmul.img",
"-appended_part_as_gpt",
"-iso_mbr_part_type",
"a2a0d0ebe5b9334487c068b6b72699c7",
"-c",
"/boot.catalog",
"-b",
"/boot/grub/i386-pc/eltorito.img",
"-no-emul-boot",
"-boot-load-size",
"4",
"-boot-info-table",
"--grub2-boot-info",
"-eltorito-alt-boot",
"-e",
"--interval:appended_partition_2:::",
"-no-emul-boot",
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with xorriso")
except subprocess.CalledProcessError as e:
logger.warning(f"xorriso failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
# Method 2: mkisofs with joliet-long
if shutil.which("mkisofs") and not success:
try:
logger.info("Trying mkisofs with joliet-long...")
cmd = [
"mkisofs",
"-r",
"-V",
f"Ubuntu 24.04 LTS AUTO",
"-cache-inodes",
"-J",
"-joliet-long",
"-l",
"-b",
"boot/grub/i386-pc/eltorito.img",
"-c",
"boot.catalog",
"-no-emul-boot",
"-boot-load-size",
"4",
"-boot-info-table",
"-o",
str(output_path),
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with mkisofs (joliet-long)")
except subprocess.CalledProcessError as e:
logger.warning(f"mkisofs with joliet-long failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
# Method 3: mkisofs without Joliet (fallback)
if shutil.which("mkisofs") and not success:
try:
logger.info("Trying mkisofs without Joliet (fallback)...")
cmd = [
"mkisofs",
"-r",
"-V",
f"Ubuntu 24.04 LTS AUTO",
"-cache-inodes",
"-l", # No -J (Joliet) to avoid filename conflicts
"-b",
"boot/grub/i386-pc/eltorito.img",
"-c",
"boot.catalog",
"-no-emul-boot",
"-boot-load-size",
"4",
"-boot-info-table",
"-o",
str(output_path),
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with mkisofs (no Joliet)")
except subprocess.CalledProcessError as e:
logger.warning(f"mkisofs without Joliet failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
# Method 4: macOS hdiutil
if shutil.which("hdiutil") and not success:
try:
logger.info("Trying hdiutil (macOS)...")
cmd = [
"hdiutil",
"makehybrid",
"-iso",
"-joliet",
"-o",
str(output_path),
".",
]
subprocess.run(cmd, capture_output=True, text=True, check=True)
success = True
logger.info("✅ ISO created with hdiutil")
except subprocess.CalledProcessError as e:
logger.warning(f"hdiutil failed: {e.stderr}")
if output_path.exists():
output_path.unlink()
if not success:
logger.error("All ISO creation methods failed")
return False
# Verify the output file was created
if not output_path.exists():
logger.error("ISO file was not created despite success message")
return False
logger.info(f"ISO rebuilt successfully: {output_path}")
logger.info(
f"ISO size: {output_path.stat().st_size / (1024 * 1024):.1f} MB"
)
return True
except Exception as e:
logger.error(f"Error rebuilding ISO: {e}")
return False
finally:
# Return to original directory
os.chdir(original_cwd)
def build_autoinstall_iso(
self, user_data: str, output_path: Path, ubuntu_version: str = "24.04"
) -> bool:
"""Complete ISO build process following the Ubuntu autoinstall guide."""
logger.info(
f"🚀 Starting Ubuntu {ubuntu_version} autoinstall ISO build process"
)
try:
# Step 1: Check tools
if not self.check_tools():
return False
# Step 2: Download Ubuntu ISO
iso_path = self.download_ubuntu_iso(ubuntu_version)
# Step 3: Extract ISO
if not self.extract_iso(iso_path):
return False
# Step 4: Modify GRUB
if not self.modify_grub_config():
return False
# Step 5: Create autoinstall config
if not self.create_autoinstall_config(user_data):
return False
# Step 6: Rebuild ISO
if not self.rebuild_iso(output_path):
return False
logger.info(f"🎉 Successfully created autoinstall ISO: {output_path}")
logger.info(f"📁 Work directory: {self.work_dir}")
return True
except Exception as e:
logger.error(f"Failed to build autoinstall ISO: {e}")
return False
def cleanup(self):
"""Clean up temporary work directory."""
if self.work_dir.exists():
shutil.rmtree(self.work_dir)
logger.info(f"Cleaned up work directory: {self.work_dir}")
def main():
"""Test the ISO builder."""
import logging
logging.basicConfig(level=logging.INFO)
# Sample autoinstall user-data
user_data = """#cloud-config
autoinstall:
version: 1
packages:
- ubuntu-server
identity:
realname: 'Test User'
username: testuser
password: '$6$rounds=4096$saltsalt$[AWS-SECRET-REMOVED]AzpI8g8T14F8VnhXo0sUkZV2NV6/.c77tHgVi34DgbPu.'
hostname: test-vm
locale: en_US.UTF-8
keyboard:
layout: us
storage:
layout:
name: direct
ssh:
install-server: true
late-commands:
- curtin in-target -- apt-get autoremove -y
"""
builder = UbuntuISOBuilder("test-vm")
output_path = Path("/tmp/ubuntu-24.04-autoinstall.iso")
success = builder.build_autoinstall_iso(user_data, output_path)
if success:
print(f"✅ ISO created: {output_path}")
else:
print("❌ ISO creation failed")
# Optionally clean up
# builder.cleanup()
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,288 @@
#!/usr/bin/env python3
"""
Unraid VM Manager for ThrillWiki - Main Orchestrator
Follows the Ubuntu autoinstall guide exactly:
1. Creates modified Ubuntu ISO with autoinstall configuration
2. Manages VM lifecycle on Unraid server
3. Handles ThrillWiki deployment automation
"""
import os
import sys
import logging
from pathlib import Path
# Import our modular components
from iso_builder import UbuntuISOBuilder
from vm_manager import UnraidVMManager
# Configuration
UNRAID_HOST = os.environ.get("UNRAID_HOST", "localhost")
UNRAID_USER = os.environ.get("UNRAID_USER", "root")
VM_NAME = os.environ.get("VM_NAME", "thrillwiki-vm")
VM_MEMORY = int(os.environ.get("VM_MEMORY", 4096)) # MB
VM_VCPUS = int(os.environ.get("VM_VCPUS", 2))
VM_DISK_SIZE = int(os.environ.get("VM_DISK_SIZE", 50)) # GB
SSH_PUBLIC_KEY = os.environ.get("SSH_PUBLIC_KEY", "")
# Network Configuration
VM_IP = os.environ.get("VM_IP", "dhcp")
VM_GATEWAY = os.environ.get("VM_GATEWAY", "192.168.20.1")
VM_NETMASK = os.environ.get("VM_NETMASK", "255.255.255.0")
VM_NETWORK = os.environ.get("VM_NETWORK", "192.168.20.0/24")
# GitHub Configuration
REPO_URL = os.environ.get("REPO_URL", "")
GITHUB_USERNAME = os.environ.get("GITHUB_USERNAME", "")
GITHUB_TOKEN = os.environ.get("GITHUB_TOKEN", "")
# Ubuntu version preference
UBUNTU_VERSION = os.environ.get("UBUNTU_VERSION", "24.04")
# Setup logging
os.makedirs("logs", exist_ok=True)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[
logging.FileHandler("logs/unraid-vm.log"),
logging.StreamHandler(),
],
)
logger = logging.getLogger(__name__)
class ThrillWikiVMOrchestrator:
"""Main orchestrator for ThrillWiki VM deployment."""
def __init__(self):
self.vm_manager = UnraidVMManager(VM_NAME, UNRAID_HOST, UNRAID_USER)
self.iso_builder = None
def create_autoinstall_user_data(self) -> str:
"""Create autoinstall user-data configuration."""
# Read autoinstall template
template_path = Path(__file__).parent / "autoinstall-user-data.yaml"
if not template_path.exists():
raise FileNotFoundError(f"Autoinstall template not found: {template_path}")
with open(template_path, "r", encoding="utf-8") as f:
template = f.read()
# Replace placeholders using string replacement (avoiding .format() due
# to curly braces in YAML)
user_data = template.replace(
"{SSH_PUBLIC_KEY}",
SSH_PUBLIC_KEY if SSH_PUBLIC_KEY else "# No SSH key provided",
).replace("{GITHUB_REPO}", REPO_URL if REPO_URL else "")
# Update network configuration based on VM_IP setting
if VM_IP.lower() == "dhcp":
# Keep DHCP configuration as-is
pass
else:
# Replace with static IP configuration
network_config = f"""dhcp4: false
addresses:
- {VM_IP}/24
gateway4: {VM_GATEWAY}
nameservers:
addresses:
- 8.8.8.8
- 8.8.4.4"""
user_data = user_data.replace("dhcp4: true", network_config)
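# NOTE: this assumes the replacement block lines up with the template's YAML indentation; a mismatched indent would produce an invalid network section in the autoinstall user-data.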
return user_data
def build_autoinstall_iso(self) -> Path:
"""Build Ubuntu autoinstall ISO following the guide."""
logger.info("🔨 Building Ubuntu autoinstall ISO...")
# Create ISO builder
self.iso_builder = UbuntuISOBuilder(VM_NAME)
# Create user-data configuration
user_data = self.create_autoinstall_user_data()
# Build autoinstall ISO
iso_output_path = Path(f"/tmp/{VM_NAME}-ubuntu-autoinstall.iso")
success = self.iso_builder.build_autoinstall_iso(
user_data=user_data,
output_path=iso_output_path,
ubuntu_version=UBUNTU_VERSION,
)
if not success:
raise RuntimeError("Failed to build autoinstall ISO")
logger.info(f"✅ Autoinstall ISO built successfully: {iso_output_path}")
return iso_output_path
def deploy_vm(self) -> bool:
"""Complete VM deployment process."""
try:
logger.info("🚀 Starting ThrillWiki VM deployment...")
# Step 1: Check SSH connectivity
logger.info("📡 Testing Unraid connectivity...")
if not self.vm_manager.authenticate():
logger.error("❌ Cannot connect to Unraid server")
return False
# Step 2: Build autoinstall ISO
logger.info("🔨 Building Ubuntu autoinstall ISO...")
iso_path = self.build_autoinstall_iso()
# Step 3: Upload ISO to Unraid
logger.info("📤 Uploading autoinstall ISO to Unraid...")
self.vm_manager.upload_iso_to_unraid(iso_path)
# Step 4: Create/update VM configuration
logger.info("⚙️ Creating VM configuration...")
success = self.vm_manager.create_vm(
vm_memory=VM_MEMORY,
vm_vcpus=VM_VCPUS,
vm_disk_size=VM_DISK_SIZE,
vm_ip=VM_IP,
)
if not success:
logger.error("❌ Failed to create VM configuration")
return False
# Step 5: Start VM
logger.info("🟢 Starting VM...")
success = self.vm_manager.start_vm()
if not success:
logger.error("❌ Failed to start VM")
return False
logger.info("🎉 VM deployment completed successfully!")
logger.info("")
logger.info("📋 Next Steps:")
logger.info("1. VM is now booting with Ubuntu autoinstall")
logger.info("2. Installation will take 15-30 minutes")
logger.info("3. Use 'python main.py ip' to get VM IP when ready")
logger.info("4. SSH to VM and run /home/thrillwiki/deploy-thrillwiki.sh")
logger.info("")
return True
except Exception as e:
logger.error(f"❌ VM deployment failed: {e}")
return False
finally:
# Cleanup ISO builder temp files
if self.iso_builder:
self.iso_builder.cleanup()
def get_vm_info(self) -> dict:
"""Get VM information."""
return {
"name": VM_NAME,
"status": self.vm_manager.vm_status(),
"ip": self.vm_manager.get_vm_ip(),
"memory": VM_MEMORY,
"vcpus": VM_VCPUS,
"disk_size": VM_DISK_SIZE,
}
def main():
"""Main entry point."""
import argparse
parser = argparse.ArgumentParser(
description="ThrillWiki VM Manager - Ubuntu Autoinstall on Unraid",
epilog="""
Examples:
python main.py setup # Complete VM setup with autoinstall
python main.py start # Start existing VM
python main.py ip # Get VM IP address
python main.py status # Get VM status
python main.py delete # Remove VM completely
""",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"action",
choices=[
"setup",
"create",
"start",
"stop",
"status",
"ip",
"delete",
"info",
],
help="Action to perform",
)
args = parser.parse_args()
# Create orchestrator
orchestrator = ThrillWikiVMOrchestrator()
if args.action == "setup":
logger.info("🚀 Setting up complete ThrillWiki VM environment...")
success = orchestrator.deploy_vm()
sys.exit(0 if success else 1)
elif args.action == "create":
logger.info("⚙️ Creating VM configuration...")
success = orchestrator.vm_manager.create_vm(
VM_MEMORY, VM_VCPUS, VM_DISK_SIZE, VM_IP
)
sys.exit(0 if success else 1)
elif args.action == "start":
logger.info("🟢 Starting VM...")
success = orchestrator.vm_manager.start_vm()
sys.exit(0 if success else 1)
elif args.action == "stop":
logger.info("🛑 Stopping VM...")
success = orchestrator.vm_manager.stop_vm()
sys.exit(0 if success else 1)
elif args.action == "status":
status = orchestrator.vm_manager.vm_status()
print(f"VM Status: {status}")
sys.exit(0)
elif args.action == "ip":
ip = orchestrator.vm_manager.get_vm_ip()
if ip:
print(f"VM IP: {ip}")
print(f"SSH: ssh thrillwiki@{ip}")
print(
f"Deploy: ssh thrillwiki@{ip} '/home/thrillwiki/deploy-thrillwiki.sh'"
)
sys.exit(0)
else:
print("❌ Failed to get VM IP (VM may not be ready yet)")
sys.exit(1)
elif args.action == "info":
info = orchestrator.get_vm_info()
print("🖥️ VM Information:")
print(f" Name: {info['name']}")
print(f" Status: {info['status']}")
print(f" IP: {info['ip'] or 'Not available'}")
print(f" Memory: {info['memory']} MB")
print(f" vCPUs: {info['vcpus']}")
print(f" Disk: {info['disk_size']} GB")
sys.exit(0)
elif args.action == "delete":
logger.info("🗑️ Deleting VM and all files...")
success = orchestrator.vm_manager.delete_vm()
sys.exit(0 if success else 1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,456 @@
#!/usr/bin/env python3
"""
Unraid VM Manager for ThrillWiki - Template-Based Main Orchestrator
Uses pre-built template VMs for fast deployment instead of autoinstall.
"""
import os
import sys
import logging
from pathlib import Path
# Import our modular components
from template_manager import TemplateVMManager
from vm_manager_template import UnraidTemplateVMManager
class ConfigLoader:
"""Dynamic configuration loader that reads environment variables when needed."""
def __init__(self):
# Try to load ***REMOVED***.unraid if it exists to ensure we have the
# latest config
self._load_env_file()
def _load_env_file(self):
"""Load ***REMOVED***.unraid file if it exists."""
# Find the project directory (two levels up from this script)
script_dir = Path(__file__).parent
project_dir = script_dir.parent.parent
env_file = project_dir / "***REMOVED***.unraid"
if env_file.exists():
try:
with open(env_file, "r") as f:
for line in f:
line = line.strip()
if line and not line.startswith("#") and "=" in line:
key, value = line.split("=", 1)
# Remove quotes if present
value = value.strip("\"'")
# Only set if not already in environment (env vars
# take precedence)
if key not in os.environ:
os.environ[key] = value
logging.info(f"📝 Loaded configuration from {env_file}")
except Exception as e:
logging.warning(f"⚠️ Could not load ***REMOVED***.unraid: {e}")
@property
def UNRAID_HOST(self):
return os.environ.get("UNRAID_HOST", "localhost")
@property
def UNRAID_USER(self):
return os.environ.get("UNRAID_USER", "root")
@property
def VM_NAME(self):
return os.environ.get("VM_NAME", "thrillwiki-vm")
@property
def VM_MEMORY(self):
return int(os.environ.get("VM_MEMORY", 4096))
@property
def VM_VCPUS(self):
return int(os.environ.get("VM_VCPUS", 2))
@property
def VM_DISK_SIZE(self):
return int(os.environ.get("VM_DISK_SIZE", 50))
@property
def SSH_PUBLIC_KEY(self):
return os.environ.get("SSH_PUBLIC_KEY", "")
@property
def VM_IP(self):
return os.environ.get("VM_IP", "dhcp")
@property
def VM_GATEWAY(self):
return os.environ.get("VM_GATEWAY", "192.168.20.1")
@property
def VM_NETMASK(self):
return os.environ.get("VM_NETMASK", "255.255.255.0")
@property
def VM_NETWORK(self):
return os.environ.get("VM_NETWORK", "192.168.20.0/24")
@property
def REPO_URL(self):
return os.environ.get("REPO_URL", "")
@property
def GITHUB_USERNAME(self):
return os.environ.get("GITHUB_USERNAME", "")
@property
def GITHUB_TOKEN(self):
return os.environ.get("GITHUB_TOKEN", "")
# Create a global configuration instance
config = ConfigLoader()
# Setup logging with reduced buffering
os.makedirs("logs", exist_ok=True)
# Configure console handler with line buffering
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(
logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
)
# Force flush after each log message
console_handler.flush = lambda: sys.stdout.flush()
# Configure file handler
file_handler = logging.FileHandler("logs/unraid-vm.log")
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(
logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
)
# Set up basic config with both handlers
logging.basicConfig(
level=logging.INFO,
handlers=[file_handler, console_handler],
)
# Ensure stdout is line buffered for real-time output
sys.stdout.reconfigure(line_buffering=True)
logger = logging.getLogger(__name__)
class ThrillWikiTemplateVMOrchestrator:
"""Main orchestrator for template-based ThrillWiki VM deployment."""
def __init__(self):
# Log current configuration for debugging
logger.info(f"🔧 Using configuration: UNRAID_HOST={config.UNRAID_HOST}, UNRAID_USER={config.UNRAID_USER}, VM_NAME={config.VM_NAME}")
self.template_manager = TemplateVMManager(
config.UNRAID_HOST, config.UNRAID_USER
)
self.vm_manager = UnraidTemplateVMManager(
config.VM_NAME, config.UNRAID_HOST, config.UNRAID_USER
)
def check_template_ready(self) -> bool:
"""Check if template VM is ready for use."""
logger.info("🔍 Checking template VM availability...")
if not self.template_manager.check_template_exists():
logger.error("❌ Template VM disk not found!")
logger.error(
"Please ensure 'thrillwiki-template-ubuntu' VM exists and is properly configured"
)
logger.error(
"Template should be located at: /mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2"
)
return False
# Check template status
if not self.template_manager.update_template():
logger.warning("⚠️ Template VM may be running - this could cause issues")
logger.warning(
"Ensure the template VM is stopped before creating new instances"
)
info = self.template_manager.get_template_info()
if info:
logger.info(f"📋 Template Info:")
logger.info(f" Virtual Size: {info['virtual_size']}")
logger.info(f" File Size: {info['file_size']}")
logger.info(f" Last Modified: {info['last_modified']}")
return True
def deploy_vm_from_template(self) -> bool:
"""Complete template-based VM deployment process."""
try:
logger.info("🚀 Starting ThrillWiki template-based VM deployment...")
# Step 1: Check SSH connectivity
logger.info("📡 Testing Unraid connectivity...")
if not self.vm_manager.authenticate():
logger.error("❌ Cannot connect to Unraid server")
return False
# Step 2: Check template availability
logger.info("🔍 Verifying template VM...")
if not self.check_template_ready():
logger.error("❌ Template VM not ready")
return False
# Step 3: Create VM from template
logger.info("⚙️ Creating VM from template...")
success = self.vm_manager.create_vm_from_template(
vm_memory=config.VM_MEMORY,
vm_vcpus=config.VM_VCPUS,
vm_disk_size=config.VM_DISK_SIZE,
vm_ip=config.VM_IP,
)
if not success:
logger.error("❌ Failed to create VM from template")
return False
# Step 4: Start VM
logger.info("🟢 Starting VM...")
success = self.vm_manager.start_vm()
if not success:
logger.error("❌ Failed to start VM")
return False
logger.info("🎉 Template-based VM deployment completed successfully!")
logger.info("")
logger.info("📋 Next Steps:")
logger.info("1. VM is now booting from template disk")
logger.info("2. Boot time should be much faster (2-5 minutes)")
logger.info("3. Use 'python main_template.py ip' to get VM IP when ready")
logger.info("4. SSH to VM and run deployment commands")
logger.info("")
return True
except Exception as e:
logger.error(f"❌ Template VM deployment failed: {e}")
return False
def deploy_and_configure_thrillwiki(self) -> bool:
"""Deploy VM from template and configure ThrillWiki."""
try:
logger.info("🚀 Starting complete ThrillWiki deployment from template...")
# Step 1: Deploy VM from template
if not self.deploy_vm_from_template():
return False
# Step 2: Wait for VM to be accessible and configure ThrillWiki
if config.REPO_URL:
logger.info("🔧 Configuring ThrillWiki on VM...")
success = self.vm_manager.customize_vm_for_thrillwiki(
config.REPO_URL, config.GITHUB_TOKEN
)
if success:
vm_ip = self.vm_manager.get_vm_ip()
logger.info("🎉 Complete ThrillWiki deployment successful!")
logger.info(f"🌐 ThrillWiki is available at: http://{vm_ip}:8000")
else:
logger.warning(
"⚠️ VM deployed but ThrillWiki configuration may have failed"
)
logger.info(
"You can manually configure ThrillWiki by SSH'ing to the VM"
)
else:
logger.info(
"📝 No repository URL provided - VM deployed but ThrillWiki not configured"
)
logger.info(
"Set REPO_URL environment variable to auto-configure ThrillWiki"
)
return True
except Exception as e:
logger.error(f"❌ Complete deployment failed: {e}")
return False
def get_vm_info(self) -> dict:
"""Get VM information."""
return {
"name": config.VM_NAME,
"status": self.vm_manager.vm_status(),
"ip": self.vm_manager.get_vm_ip(),
"memory": config.VM_MEMORY,
"vcpus": config.VM_VCPUS,
"disk_size": config.VM_DISK_SIZE,
"deployment_type": "template-based",
}
def main():
"""Main entry point."""
import argparse
parser = argparse.ArgumentParser(
description="ThrillWiki Template-Based VM Manager - Fast VM deployment using templates",
epilog="""
Examples:
python main_template.py setup # Deploy VM from template only
python main_template.py deploy # Deploy VM and configure ThrillWiki
python main_template.py start # Start existing VM
python main_template.py ip # Get VM IP address
python main_template.py status # Get VM status
python main_template.py delete # Remove VM completely
python main_template.py template # Manage template VM
""",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"action",
choices=[
"setup",
"deploy",
"create",
"start",
"stop",
"status",
"ip",
"delete",
"info",
"template",
],
help="Action to perform",
)
parser.add_argument(
"template_action",
nargs="?",
choices=["info", "check", "update", "list"],
help="Template management action (used with 'template' action)",
)
args = parser.parse_args()
# Create orchestrator
orchestrator = ThrillWikiTemplateVMOrchestrator()
if args.action == "setup":
logger.info("🚀 Setting up VM from template...")
success = orchestrator.deploy_vm_from_template()
sys.exit(0 if success else 1)
elif args.action == "deploy":
logger.info("🚀 Complete ThrillWiki deployment from template...")
success = orchestrator.deploy_and_configure_thrillwiki()
sys.exit(0 if success else 1)
elif args.action == "create":
logger.info("⚙️ Creating VM from template...")
success = orchestrator.vm_manager.create_vm_from_template(
config.VM_MEMORY,
config.VM_VCPUS,
config.VM_DISK_SIZE,
config.VM_IP,
)
sys.exit(0 if success else 1)
elif args.action == "start":
logger.info("🟢 Starting VM...")
success = orchestrator.vm_manager.start_vm()
sys.exit(0 if success else 1)
elif args.action == "stop":
logger.info("🛑 Stopping VM...")
success = orchestrator.vm_manager.stop_vm()
sys.exit(0 if success else 1)
elif args.action == "status":
status = orchestrator.vm_manager.vm_status()
print(f"VM Status: {status}")
sys.exit(0)
elif args.action == "ip":
ip = orchestrator.vm_manager.get_vm_ip()
if ip:
print(f"VM IP: {ip}")
print(f"SSH: ssh thrillwiki@{ip}")
print(f"ThrillWiki: http://{ip}:8000")
sys.exit(0)
else:
print("❌ Failed to get VM IP (VM may not be ready yet)")
sys.exit(1)
elif args.action == "info":
info = orchestrator.get_vm_info()
print("🖥️ VM Information:")
print(f" Name: {info['name']}")
print(f" Status: {info['status']}")
print(f" IP: {info['ip'] or 'Not available'}")
print(f" Memory: {info['memory']} MB")
print(f" vCPUs: {info['vcpus']}")
print(f" Disk: {info['disk_size']} GB")
print(f" Type: {info['deployment_type']}")
sys.exit(0)
elif args.action == "delete":
logger.info("🗑️ Deleting VM and all files...")
success = orchestrator.vm_manager.delete_vm()
sys.exit(0 if success else 1)
elif args.action == "template":
template_action = args.template_action or "info"
if template_action == "info":
logger.info("📋 Template VM Information")
info = orchestrator.template_manager.get_template_info()
if info:
print(f"Template Path: {info['template_path']}")
print(f"Virtual Size: {info['virtual_size']}")
print(f"File Size: {info['file_size']}")
print(f"Last Modified: {info['last_modified']}")
else:
print("❌ Failed to get template information")
sys.exit(1)
elif template_action == "check":
if orchestrator.template_manager.check_template_exists():
logger.info("✅ Template VM disk exists and is ready to use")
sys.exit(0)
else:
logger.error("❌ Template VM disk not found")
sys.exit(1)
elif template_action == "update":
success = orchestrator.template_manager.update_template()
sys.exit(0 if success else 1)
elif template_action == "list":
logger.info("📋 Template-based VM Instances")
instances = orchestrator.template_manager.list_template_instances()
if instances:
for instance in instances:
status_emoji = (
"🟢"
if instance["status"] == "running"
else "🔴" if instance["status"] == "shut off" else "🟡"
)
print(f"{status_emoji} {instance['name']} ({instance['status']})")
else:
print("No template instances found")
sys.exit(0)
if __name__ == "__main__":
main()

File diff suppressed because it is too large

View File

@@ -0,0 +1,75 @@
#!/bin/bash
# ThrillWiki Template VM SSH Key Setup Helper
# This script generates the SSH key needed for template VM access
set -e
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo -e "${BLUE}ThrillWiki Template VM SSH Key Setup${NC}"
echo "[AWS-SECRET-REMOVED]"
echo
SSH_KEY_PATH="$HOME/.ssh/thrillwiki_vm"
# Generate SSH key if it doesn't exist
if [ ! -f "$SSH_KEY_PATH" ]; then
echo -e "${YELLOW}Generating new SSH key for ThrillWiki template VM...${NC}"
ssh-keygen -t rsa -b 4096 -f "$SSH_KEY_PATH" -N "" -C "thrillwiki-template-vm-access"
echo -e "${GREEN}✅ SSH key generated: $SSH_KEY_PATH${NC}"
echo
else
echo -e "${GREEN}✅ SSH key already exists: $SSH_KEY_PATH${NC}"
echo
fi
# Display the public key
echo -e "${YELLOW}📋 Your SSH Public Key:${NC}"
echo "Copy this ENTIRE line and add it to your template VM:"
echo
echo -e "${GREEN}$(cat "$SSH_KEY_PATH.pub")${NC}"
echo
# Instructions
echo -e "${BLUE}📝 Template VM Setup Instructions:${NC}"
echo "1. SSH into your template VM (thrillwiki-template-ubuntu)"
echo "2. Switch to the thrillwiki user:"
echo " sudo su - thrillwiki"
echo "3. Create .ssh directory and set permissions:"
echo " mkdir -p ~/.ssh && chmod 700 ~/.ssh"
echo "4. Add the public key above to ***REMOVED***:"
echo " echo 'YOUR_PUBLIC_KEY_HERE' >> ~/.ssh/***REMOVED***"
echo " chmod 600 ~/.ssh/***REMOVED***"
echo "5. Test SSH access:"
echo " ssh -i ~/.ssh/thrillwiki_vm thrillwiki@YOUR_TEMPLATE_VM_IP"
echo
# SSH config helper
SSH_CONFIG="$HOME/.ssh/config"
echo -e "${BLUE}🔧 SSH Config Setup:${NC}"
if ! grep -q "thrillwiki-vm" "$SSH_CONFIG" 2>/dev/null; then
echo "Adding SSH config entry..."
cat >> "$SSH_CONFIG" << EOF
# ThrillWiki Template VM
Host thrillwiki-vm
HostName %h
User thrillwiki
IdentityFile $SSH_KEY_PATH
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
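# NOTE: HostName %h just repeats the alias; replace it with the template VM's IP once the address is known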
EOF
echo -e "${GREEN}✅ SSH config updated${NC}"
else
echo -e "${GREEN}✅ SSH config already contains thrillwiki-vm entry${NC}"
fi
echo
echo -e "${GREEN}🎉 SSH key setup complete!${NC}"
echo "Next: Set up your template VM using TEMPLATE_VM_SETUP.md"
echo "Then run: ./setup-template-automation.sh"

File diff suppressed because it is too large

View File

@@ -0,0 +1,249 @@
#!/bin/bash
#
# ThrillWiki Template VM Management Utilities
# Quick helpers for managing template VMs on Unraid
#
# Set strict mode
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
log() {
echo -e "${BLUE}[TEMPLATE]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Load environment variables if available
if [[ -f "$PROJECT_DIR/***REMOVED***.unraid" ]]; then
source "$PROJECT_DIR/***REMOVED***.unraid"
else
log_error "No ***REMOVED***.unraid file found. Please run setup-complete-automation.sh first."
exit 1
fi
# Function to show help
show_help() {
echo "ThrillWiki Template VM Management Utilities"
echo ""
echo "Usage:"
echo " $0 check Check if template exists and is ready"
echo " $0 info Show template information"
echo " $0 list List all template-based VM instances"
echo " $0 copy VM_NAME Copy template to new VM"
echo " $0 deploy VM_NAME Deploy complete VM from template"
echo " $0 status Show template VM status"
echo " $0 update Update template VM (instructions)"
echo " $0 autopull Manage auto-pull functionality"
echo ""
echo "Auto-pull Commands:"
echo " $0 autopull status Show auto-pull status on VMs"
echo " $0 autopull enable VM Enable auto-pull on specific VM"
echo " $0 autopull disable VM Disable auto-pull on specific VM"
echo " $0 autopull logs VM Show auto-pull logs from VM"
echo " $0 autopull test VM Test auto-pull on specific VM"
echo ""
echo "Examples:"
echo " $0 check # Verify template is ready"
echo " $0 copy thrillwiki-prod # Copy template to new VM"
echo " $0 deploy thrillwiki-test # Complete deployment from template"
echo " $0 autopull status # Check auto-pull status on all VMs"
echo " $0 autopull logs $VM_NAME # View auto-pull logs"
exit 0
}
# Check if required environment variables are set
check_environment() {
if [[ -z "$UNRAID_HOST" ]]; then
log_error "UNRAID_HOST not set. Please configure your environment."
exit 1
fi
if [[ -z "$UNRAID_USER" ]]; then
UNRAID_USER="root"
log "Using default UNRAID_USER: $UNRAID_USER"
fi
log_success "Environment configured: $UNRAID_USER@$UNRAID_HOST"
}
# Function to run python template manager commands
run_template_manager() {
cd "$SCRIPT_DIR"
export UNRAID_HOST="$UNRAID_HOST"
export UNRAID_USER="$UNRAID_USER"
python3 template_manager.py "$@"
}
# Function to run template-based main script
run_main_template() {
cd "$SCRIPT_DIR"
# Export all environment variables
export UNRAID_HOST="$UNRAID_HOST"
export UNRAID_USER="$UNRAID_USER"
export VM_NAME="$1"
export VM_MEMORY="${VM_MEMORY:-4096}"
export VM_VCPUS="${VM_VCPUS:-2}"
export VM_DISK_SIZE="${VM_DISK_SIZE:-50}"
export VM_IP="${VM_IP:-dhcp}"
export REPO_URL="${REPO_URL:-}"
export GITHUB_TOKEN="${GITHUB_TOKEN:-}"
shift # Remove VM_NAME from arguments
python3 main_template.py "$@"
}
# Parse command line arguments
case "${1:-}" in
check)
log "🔍 Checking template VM availability..."
check_environment
run_template_manager check
;;
info)
log "📋 Getting template VM information..."
check_environment
run_template_manager info
;;
list)
log "📋 Listing template-based VM instances..."
check_environment
run_template_manager list
;;
copy)
if [[ -z "${2:-}" ]]; then
log_error "VM name is required for copy operation"
echo "Usage: $0 copy VM_NAME"
exit 1
fi
log "💾 Copying template to VM: $2"
check_environment
run_template_manager copy "$2"
;;
deploy)
if [[ -z "${2:-}" ]]; then
log_error "VM name is required for deploy operation"
echo "Usage: $0 deploy VM_NAME"
exit 1
fi
log "🚀 Deploying complete VM from template: $2"
check_environment
run_main_template "$2" deploy
;;
status)
log "📊 Checking template VM status..."
check_environment
# Check template VM status directly
ssh "$UNRAID_USER@$UNRAID_HOST" "virsh domstate thrillwiki-template-ubuntu" 2>/dev/null || {
log_error "Could not check template VM status"
exit 1
}
;;
update)
log "🔄 Template VM update instructions:"
echo ""
echo "To update your template VM:"
echo "1. Start the template VM on Unraid"
echo "2. SSH into the template VM"
echo "3. Update packages: sudo apt update && sudo apt upgrade -y"
echo "4. Update ThrillWiki dependencies if needed"
echo "5. Clean up temporary files: sudo apt autoremove && sudo apt autoclean"
echo "6. Clear bash history: history -c && history -w"
echo "7. Shutdown the template VM: sudo shutdown now"
echo "8. The updated disk is now ready as a template"
echo ""
log_warning "IMPORTANT: Template VM must be stopped before creating new instances"
check_environment
run_template_manager update
;;
autopull)
shift # Remove 'autopull' from arguments
autopull_command="${1:-status}"
vm_name="${2:-$VM_NAME}"
log "🔄 Managing auto-pull functionality..."
check_environment
# Get list of all template VMs
if [[ "$autopull_command" == "status" ]] && [[ "$vm_name" == "$VM_NAME" ]]; then
all_vms=$(run_template_manager list | grep -E "(running|shut off)" | awk '{print $2}' || echo "")
else
all_vms=$vm_name
fi
if [[ -z "$all_vms" ]]; then
log_warning "No running template VMs found to manage auto-pull on."
exit 0
fi
for vm in $all_vms; do
log "====== Auto-pull for VM: $vm ======"
case "$autopull_command" in
status)
ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --status"
;;
enable)
ssh "$vm" "(crontab -l 2>/dev/null || echo \"\") | { cat; echo \"*/10 * * * * [AWS-SECRET-REMOVED]uto-pull.sh >> /home/thrillwiki/logs/cron.log 2>&1\"; } | crontab - && echo '✅ Auto-pull enabled' || echo '❌ Failed to enable'"
;;
disable)
ssh "$vm" "crontab -l 2>/dev/null | grep -v 'auto-pull.sh' | crontab - && echo '✅ Auto-pull disabled' || echo '❌ Failed to disable'"
;;
logs)
ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --logs"
;;
test)
ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --force"
;;
*)
log_error "Invalid auto-pull command: $autopull_command"
show_help
exit 1
;;
esac
echo
done
;;
--help|-h|help|"")
show_help
;;
*)
log_error "Unknown command: ${1:-}"
echo ""
show_help
;;
esac

View File

@@ -0,0 +1,571 @@
#!/usr/bin/env python3
"""
Template VM Manager for ThrillWiki
Handles copying template VM disks and managing template-based deployments.
"""
import os
import sys
import time
import logging
import subprocess
from typing import Dict
logger = logging.getLogger(__name__)
class TemplateVMManager:
"""Manages template-based VM deployment on Unraid."""
def __init__(self, unraid_host: str, unraid_user: str = "root"):
self.unraid_host = unraid_host
self.unraid_user = unraid_user
self.template_vm_name = "thrillwiki-template-ubuntu"
self.template_path = f"/mnt/user/domains/{self.template_vm_name}"
def authenticate(self) -> bool:
"""Test SSH connectivity to Unraid server."""
try:
result = subprocess.run(
f"ssh -o ConnectTimeout=10 {self.unraid_user}@{self.unraid_host} 'echo Connected'",
shell=True,
capture_output=True,
text=True,
timeout=15,
)
if result.returncode == 0 and "Connected" in result.stdout:
logger.info("Successfully connected to Unraid via SSH")
return True
else:
logger.error(f"SSH connection failed: {result.stderr}")
return False
except Exception as e:
logger.error(f"SSH authentication error: {e}")
return False
def check_template_exists(self) -> bool:
"""Check if template VM disk exists."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info(f"Template VM disk found at {self.template_path}/vdisk1.qcow2")
return True
else:
logger.error(f"Template VM disk not found at {self.template_path}/vdisk1.qcow2")
return False
except Exception as e:
logger.error(f"Error checking template existence: {e}")
return False
def get_template_info(self) -> Dict[str, str]:
"""Get information about the template VM."""
try:
# Get disk size
size_result = subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'qemu-img info {
self.template_path}/vdisk1.qcow2 | grep \"virtual size\"'",
shell=True,
capture_output=True,
text=True,
)
# Get file size
file_size_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'ls -lh {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
# Get last modification time
mod_time_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'stat -c \"%y\" {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
info = {
"template_path": f"{
self.template_path}/vdisk1.qcow2",
"virtual_size": (
size_result.stdout.strip()
if size_result.returncode == 0
else "Unknown"
),
"file_size": (
file_size_result.stdout.split()[4]
if file_size_result.returncode == 0
else "Unknown"
),
"last_modified": (
mod_time_result.stdout.strip()
if mod_time_result.returncode == 0
else "Unknown"
),
}
return info
except Exception as e:
logger.error(f"Error getting template info: {e}")
return {}
def copy_template_disk(self, target_vm_name: str) -> bool:
"""Copy template VM disk to a new VM instance."""
try:
if not self.check_template_exists():
logger.error("Template VM disk not found. Cannot proceed with copy.")
return False
target_path = f"/mnt/user/domains/{target_vm_name}"
target_disk = f"{target_path}/vdisk1.qcow2"
logger.info(f"Copying template disk to new VM: {target_vm_name}")
# Create target directory
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'mkdir -p {target_path}'",
shell=True,
check=True,
)
# Check if target disk already exists
disk_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {target_disk}'",
shell=True,
capture_output=True,
)
if disk_check.returncode == 0:
logger.warning(f"Target disk already exists: {target_disk}")
logger.info(
"Removing existing disk to replace with fresh template copy..."
)
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -f {target_disk}'",
shell=True,
check=True,
)
# Copy template disk with rsync progress display
logger.info("🚀 Copying template disk with rsync progress display...")
start_time = time.time()
# First, get the size of the template disk for progress calculation
size_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'stat -c%s {self.template_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
text=True,
)
template_size = "unknown size"
if size_result.returncode == 0:
size_bytes = int(size_result.stdout.strip())
if size_bytes > 1024 * 1024 * 1024: # GB
template_size = f"{size_bytes /
(1024 *
1024 *
1024):.1f}GB"
elif size_bytes > 1024 * 1024: # MB
template_size = f"{size_bytes / (1024 * 1024):.1f}MB"
else:
template_size = f"{size_bytes / 1024:.1f}KB"
logger.info(f"📊 Template disk size: {template_size}")
# Use rsync with progress display
logger.info("📈 Using rsync for real-time progress display...")
# Force rsync to output progress to stderr and capture it
copy_cmd = f"ssh {
self.unraid_user}@{
self.unraid_host} 'rsync -av --progress --stats {
self.template_path}/vdisk1.qcow2 {target_disk}'"
# Run with real-time output, unbuffered
process = subprocess.Popen(
copy_cmd,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=0, # Unbuffered
universal_newlines=True,
)
import select
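# select() lets us poll stdout and stderr together without blocking, so rsync's progress lines are relayed as soon as they arrive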
# Read both stdout and stderr for progress with real-time display
while True:
# Check if process is still running
if process.poll() is not None:
# Process finished, read any remaining output
remaining_out = process.stdout.read()
remaining_err = process.stderr.read()
if remaining_out:
print(f"📊 {remaining_out.strip()}", flush=True)
logger.info(f"📊 {remaining_out.strip()}")
if remaining_err:
for line in remaining_err.strip().split("\n"):
if line.strip():
print(f"{line.strip()}", flush=True)
logger.info(f"{line.strip()}")
break
# Use select to check for available data
try:
ready, _, _ = select.select(
[process.stdout, process.stderr], [], [], 0.1
)
for stream in ready:
line = stream.readline()
if line:
line = line.strip()
if line:
if stream == process.stdout:
print(f"📊 {line}", flush=True)
logger.info(f"📊 {line}")
else: # stderr
# rsync progress goes to stderr
if any(
keyword in line
for keyword in [
"%",
"bytes/sec",
"to-check=",
"xfr#",
]
):
print(f"{line}", flush=True)
logger.info(f"{line}")
else:
print(f"📋 {line}", flush=True)
logger.info(f"📋 {line}")
except select.error:
# Fallback for systems without select (like some Windows
# environments)
print(
"⚠️ select() not available, using fallback method...",
flush=True,
)
logger.info("⚠️ select() not available, using fallback method...")
# Simple fallback - just wait and read what's available
time.sleep(0.5)
try:
# Try to read non-blocking
import fcntl
import os
# Make stdout/stderr non-blocking
fd_out = process.stdout.fileno()
fd_err = process.stderr.fileno()
fl_out = fcntl.fcntl(fd_out, fcntl.F_GETFL)
fl_err = fcntl.fcntl(fd_err, fcntl.F_GETFL)
fcntl.fcntl(fd_out, fcntl.F_SETFL, fl_out | os.O_NONBLOCK)
fcntl.fcntl(fd_err, fcntl.F_SETFL, fl_err | os.O_NONBLOCK)
try:
out_line = process.stdout.readline()
if out_line:
print(f"📊 {out_line.strip()}", flush=True)
logger.info(f"📊 {out_line.strip()}")
except BaseException:
pass
try:
err_line = process.stderr.readline()
if err_line:
if any(
keyword in err_line
for keyword in [
"%",
"bytes/sec",
"to-check=",
"xfr#",
]
):
print(f"{err_line.strip()}", flush=True)
logger.info(f"{err_line.strip()}")
else:
print(f"📋 {err_line.strip()}", flush=True)
logger.info(f"📋 {err_line.strip()}")
except BaseException:
pass
except ImportError:
# If fcntl not available, just continue
print(
"📊 Progress display limited - continuing copy...",
flush=True,
)
logger.info("📊 Progress display limited - continuing copy...")
break
copy_result_code = process.wait()
end_time = time.time()
copy_time = end_time - start_time
if copy_result_code == 0:
logger.info(f"✅ Template disk copied successfully in {copy_time:.1f} seconds")
logger.info(f"🎯 New VM disk created: {target_disk}")
# Verify the copy by checking file size
verify_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'ls -lh {target_disk}'",
shell=True,
capture_output=True,
text=True,
)
if verify_result.returncode == 0:
file_info = verify_result.stdout.strip().split()
if len(file_info) >= 5:
copied_size = file_info[4]
logger.info(f"📋 Copied disk size: {copied_size}")
return True
else:
logger.error(
f"❌ Failed to copy template disk (exit code: {copy_result_code})"
)
logger.error("Check Unraid server disk space and permissions")
return False
except Exception as e:
logger.error(f"Error copying template disk: {e}")
return False
def prepare_vm_from_template(
self, target_vm_name: str, vm_memory: int, vm_vcpus: int, vm_ip: str
) -> bool:
"""Complete template-based VM preparation."""
try:
logger.info(f"Preparing VM '{target_vm_name}' from template...")
# Step 1: Copy template disk
if not self.copy_template_disk(target_vm_name):
return False
logger.info(f"VM '{target_vm_name}' prepared successfully from template")
logger.info("The VM disk is ready with Ubuntu pre-installed")
logger.info("You can now create the VM configuration and start it")
return True
except Exception as e:
logger.error(f"Error preparing VM from template: {e}")
return False
def update_template(self) -> bool:
"""Update the template VM with latest changes."""
try:
logger.info("Updating template VM...")
logger.info("Note: This should be done manually by:")
logger.info("1. Starting the template VM")
logger.info("2. Updating Ubuntu packages")
logger.info("3. Updating ThrillWiki dependencies")
logger.info("4. Stopping the template VM")
logger.info("5. The disk will automatically be the new template")
# Check template VM status
template_status = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {self.template_vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if template_status.returncode == 0:
status = template_status.stdout.strip()
logger.info(f"Template VM '{self.template_vm_name}' status: {status}")
if status == "running":
logger.warning("Template VM is currently running!")
logger.warning("Stop the template VM when updates are complete")
logger.warning("Running VMs should not be used as templates")
return False
elif status in ["shut off", "shutoff"]:
logger.info(
"Template VM is properly stopped and ready to use as template"
)
return True
else:
logger.warning(f"Template VM in unexpected state: {status}")
return False
else:
logger.error("Could not check template VM status")
return False
except Exception as e:
logger.error(f"Error updating template: {e}")
return False
def list_template_instances(self) -> list:
"""List all VMs that were created from the template."""
try:
# Get all domains
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --all --name'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode != 0:
logger.error("Failed to list VMs")
return []
all_vms = result.stdout.strip().split("\n")
# Filter for thrillwiki VMs (excluding template)
template_instances = []
for vm in all_vms:
vm = vm.strip()
if vm and "thrillwiki" in vm.lower() and vm != self.template_vm_name:
# Get VM status
status_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {vm}'",
shell=True,
capture_output=True,
text=True,
)
status = (
status_result.stdout.strip()
if status_result.returncode == 0
else "unknown"
)
template_instances.append({"name": vm, "status": status})
return template_instances
except Exception as e:
logger.error(f"Error listing template instances: {e}")
return []
def main():
"""Main entry point for template manager."""
import argparse
parser = argparse.ArgumentParser(
description="ThrillWiki Template VM Manager",
epilog="""
Examples:
python template_manager.py info # Show template info
python template_manager.py copy my-vm # Copy template to new VM
python template_manager.py list # List template instances
python template_manager.py update # Update template VM
""",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"action",
choices=["info", "copy", "list", "update", "check"],
help="Action to perform",
)
parser.add_argument("vm_name", nargs="?", help="VM name (required for copy action)")
args = parser.parse_args()
# Get Unraid connection details from environment
unraid_host = os.environ.get("UNRAID_HOST")
unraid_user = os.environ.get("UNRAID_USER", "root")
if not unraid_host:
logger.error("UNRAID_HOST environment variable is required")
sys.exit(1)
# Create template manager
template_manager = TemplateVMManager(unraid_host, unraid_user)
# Authenticate
if not template_manager.authenticate():
logger.error("Failed to connect to Unraid server")
sys.exit(1)
if args.action == "info":
logger.info("📋 Template VM Information")
info = template_manager.get_template_info()
if info:
print(f"Template Path: {info['template_path']}")
print(f"Virtual Size: {info['virtual_size']}")
print(f"File Size: {info['file_size']}")
print(f"Last Modified: {info['last_modified']}")
else:
print("❌ Failed to get template information")
sys.exit(1)
elif args.action == "check":
if template_manager.check_template_exists():
logger.info("✅ Template VM disk exists and is ready to use")
sys.exit(0)
else:
logger.error("❌ Template VM disk not found")
sys.exit(1)
elif args.action == "copy":
if not args.vm_name:
logger.error("VM name is required for copy action")
sys.exit(1)
success = template_manager.copy_template_disk(args.vm_name)
sys.exit(0 if success else 1)
elif args.action == "list":
logger.info("📋 Template-based VM Instances")
instances = template_manager.list_template_instances()
if instances:
for instance in instances:
status_emoji = (
"🟢"
if instance["status"] == "running"
else "🔴" if instance["status"] == "shut off" else "🟡"
)
print(f"{status_emoji} {instance['name']} ({instance['status']})")
else:
print("No template instances found")
elif args.action == "update":
success = template_manager.update_template()
sys.exit(0 if success else 1)
if __name__ == "__main__":
# Setup logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[logging.StreamHandler()],
)
main()

View File

@@ -0,0 +1,116 @@
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
<name>{VM_NAME}</name>
<uuid>{VM_UUID}</uuid>
<metadata>
<vmtemplate xmlns="unraid" name="ThrillWiki VM (Template-based)" iconold="ubuntu.png" icon="ubuntu.png" os="linux" webui=""/>
</metadata>
<memory unit='KiB'>{VM_MEMORY_KIB}</memory>
<currentMemory unit='KiB'>{VM_MEMORY_KIB}</currentMemory>
<vcpu placement='static'>{VM_VCPUS}</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-9.2'>hvm</type>
<loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram>/etc/libvirt/qemu/nvram/{VM_UUID}_VARS-pure-efi.fd</nvram>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'>
<topology sockets='1' dies='1' clusters='1' cores='{CPU_CORES}' threads='{CPU_THREADS}'/>
<cache mode='passthrough'/>
<feature policy='require' name='topoext'/>
</cpu>
<clock offset='utc'>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/local/sbin/qemu</emulator>
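    <!-- Template-based variant: vdisk1.qcow2 is a copy of the pre-built template disk
         (see template_manager.py), so no autoinstall CD-ROM is attached and the VM
         boots straight from the virtio disk (boot order 1). -->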
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writeback' discard='ignore'/>
<source file='/mnt/user/domains/{VM_NAME}/vdisk1.qcow2'/>
<target dev='hdc' bus='virtio'/>
<boot order='1'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:{MAC_SUFFIX}'/>
<source bridge='br0.20'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' sharePolicy='ignore'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<audio id='1' type='none'/>
<video>
<model type='qxl' ram='65536' vram='65536' vram64='65535' vgamem='65536' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</video>
<watchdog model='itco' action='reset'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
</devices>
</domain>

View File

@@ -0,0 +1,127 @@
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
<name>{VM_NAME}</name>
<uuid>{VM_UUID}</uuid>
<metadata>
<vmtemplate xmlns="unraid" name="ThrillWiki VM" iconold="ubuntu.png" icon="ubuntu.png" os="linux" webui=""/>
</metadata>
<memory unit='KiB'>{VM_MEMORY_KIB}</memory>
<currentMemory unit='KiB'>{VM_MEMORY_KIB}</currentMemory>
<vcpu placement='static'>{VM_VCPUS}</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-9.2'>hvm</type>
<loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram>/etc/libvirt/qemu/nvram/{VM_UUID}_VARS-pure-efi.fd</nvram>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'>
<topology sockets='1' dies='1' clusters='1' cores='{CPU_CORES}' threads='{CPU_THREADS}'/>
<cache mode='passthrough'/>
<feature policy='require' name='topoext'/>
</cpu>
<clock offset='utc'>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/local/sbin/qemu</emulator>
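    <!-- Autoinstall variant: the Ubuntu autoinstall ISO boots first (boot order 1);
         once installation completes the VM boots from the virtio disk (boot order 2). -->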
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='writeback' discard='ignore'/>
<source file='/mnt/user/domains/{VM_NAME}/vdisk1.qcow2'/>
<target dev='hdc' bus='virtio'/>
<boot order='2'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/mnt/user/isos/{VM_NAME}-ubuntu-autoinstall.iso'/>
<target dev='hda' bus='sata'/>
<readonly/>
<boot order='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:{MAC_SUFFIX}'/>
<source bridge='br0.20'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' sharePolicy='ignore'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<audio id='1' type='none'/>
<video>
<model type='qxl' ram='65536' vram='65536' vram64='65535' vgamem='65536' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</video>
<watchdog model='itco' action='reset'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
</devices>
</domain>

View File

@@ -0,0 +1,212 @@
#!/usr/bin/env python3
"""
Validate autoinstall configuration against Ubuntu's schema.
This script provides basic validation to check if our autoinstall config
complies with the official schema structure.
"""
import yaml
import sys
from pathlib import Path
def load_autoinstall_config(template_path: str) -> dict:
"""Load the autoinstall configuration from the template file."""
with open(template_path, "r") as f:
content = f.read()
# Parse the cloud-config YAML
config = yaml.safe_load(content)
# Extract the autoinstall section
if "autoinstall" in config:
return config["autoinstall"]
else:
raise ValueError("No autoinstall section found in cloud-config")
def validate_required_fields(config: dict) -> list:
"""Validate required fields according to schema."""
errors = []
# Check version field (required)
if "version" not in config:
errors.append("Missing required field: version")
elif not isinstance(config["version"], int) or config["version"] != 1:
errors.append("Invalid version: must be integer 1")
return errors
def validate_identity_section(config: dict) -> list:
"""Validate identity section."""
errors = []
if "identity" in config:
identity = config["identity"]
required_fields = ["username", "hostname", "password"]
for field in required_fields:
if field not in identity:
errors.append(f"Identity section missing required field: {field}")
# Additional validation
if "username" in identity and not isinstance(identity["username"], str):
errors.append("Identity username must be a string")
if "hostname" in identity and not isinstance(identity["hostname"], str):
errors.append("Identity hostname must be a string")
return errors
def validate_network_section(config: dict) -> list:
"""Validate network section."""
errors = []
if "network" in config:
network = config["network"]
if "version" not in network:
errors.append("Network section missing required field: version")
elif network["version"] != 2:
errors.append("Network version must be 2")
return errors
def validate_keyboard_section(config: dict) -> list:
"""Validate keyboard section."""
errors = []
if "keyboard" in config:
keyboard = config["keyboard"]
if "layout" not in keyboard:
errors.append("Keyboard section missing required field: layout")
return errors
def validate_ssh_section(config: dict) -> list:
"""Validate SSH section."""
errors = []
if "ssh" in config:
ssh = config["ssh"]
if "install-server" in ssh and not isinstance(ssh["install-server"], bool):
errors.append("SSH install-server must be boolean")
if "authorized-keys" in ssh and not isinstance(ssh["authorized-keys"], list):
errors.append("SSH authorized-keys must be an array")
if "allow-pw" in ssh and not isinstance(ssh["allow-pw"], bool):
errors.append("SSH allow-pw must be boolean")
return errors
def validate_packages_section(config: dict) -> list:
"""Validate packages section."""
errors = []
if "packages" in config:
packages = config["packages"]
if not isinstance(packages, list):
errors.append("Packages must be an array")
else:
for i, package in enumerate(packages):
if not isinstance(package, str):
errors.append(f"Package at index {i} must be a string")
return errors
def validate_commands_sections(config: dict) -> list:
"""Validate early-commands and late-commands sections."""
errors = []
for section_name in ["early-commands", "late-commands"]:
if section_name in config:
commands = config[section_name]
if not isinstance(commands, list):
errors.append(f"{section_name} must be an array")
else:
for i, command in enumerate(commands):
if not isinstance(command, (str, list)):
errors.append(
f"{section_name} item at index {i} must be string or array"
)
elif isinstance(command, list):
for j, cmd_part in enumerate(command):
if not isinstance(cmd_part, str):
errors.append(
f"{section_name}[{i}][{j}] must be a string"
)
return errors
def validate_shutdown_section(config: dict) -> list:
"""Validate shutdown section."""
errors = []
if "shutdown" in config:
shutdown = config["shutdown"]
valid_values = ["reboot", "poweroff"]
if shutdown not in valid_values:
errors.append(f"Shutdown must be one of: {valid_values}")
return errors
def main():
"""Main validation function."""
template_path = Path(__file__).parent / "cloud-init-template.yaml"
if not template_path.exists():
print(f"Error: Template file not found at {template_path}")
sys.exit(1)
try:
# Load the autoinstall configuration
print(f"Loading autoinstall config from {template_path}")
config = load_autoinstall_config(str(template_path))
# Run validation checks
all_errors = []
all_errors.extend(validate_required_fields(config))
all_errors.extend(validate_identity_section(config))
all_errors.extend(validate_network_section(config))
all_errors.extend(validate_keyboard_section(config))
all_errors.extend(validate_ssh_section(config))
all_errors.extend(validate_packages_section(config))
all_errors.extend(validate_commands_sections(config))
all_errors.extend(validate_shutdown_section(config))
# Report results
if all_errors:
print("\n❌ Validation failed with the following errors:")
for error in all_errors:
print(f" - {error}")
sys.exit(1)
else:
print("\n✅ Autoinstall configuration validation passed!")
print("Configuration appears to comply with Ubuntu autoinstall schema.")
# Print summary of detected sections
sections = list(config.keys())
print(f"\nDetected sections: {', '.join(sorted(sections))}")
except Exception as e:
print(f"Error during validation: {e}")
sys.exit(1)
if __name__ == "__main__":
main()

File diff suppressed because it is too large

View File

@@ -0,0 +1,570 @@
#!/usr/bin/env python3
"""
VM Manager for Unraid
Handles VM creation, configuration, and lifecycle management.
"""
import os
import time
import logging
import subprocess
from pathlib import Path
from typing import Optional
import uuid
logger = logging.getLogger(__name__)
class UnraidVMManager:
"""Manages VMs on Unraid server."""
def __init__(self, vm_name: str, unraid_host: str, unraid_user: str = "root"):
self.vm_name = vm_name
self.unraid_host = unraid_host
self.unraid_user = unraid_user
self.vm_config_path = f"/mnt/user/domains/{vm_name}"
def authenticate(self) -> bool:
"""Test SSH connectivity to Unraid server."""
try:
result = subprocess.run(
f"ssh -o ConnectTimeout=10 {self.unraid_user}@{self.unraid_host} 'echo Connected'",
shell=True,
capture_output=True,
text=True,
timeout=15,
)
if result.returncode == 0 and "Connected" in result.stdout:
logger.info("Successfully connected to Unraid via SSH")
return True
else:
logger.error(f"SSH connection failed: {result.stderr}")
return False
except Exception as e:
logger.error(f"SSH authentication error: {e}")
return False
def check_vm_exists(self) -> bool:
"""Check if VM already exists."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
return self.vm_name in result.stdout
except Exception as e:
logger.error(f"Error checking VM existence: {e}")
return False
def _generate_mac_suffix(self, vm_ip: str) -> str:
"""Generate MAC address suffix based on VM IP or name."""
if vm_ip.lower() != "dhcp" and "." in vm_ip:
# Use last octet of static IP for MAC generation
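            # e.g. vm_ip "192.168.20.65" -> last octet 65 (0x41) -> suffix "41:7d:fd"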
last_octet = int(vm_ip.split(".")[-1])
return f"{last_octet:02x}:7d:fd"
else:
# Use hash of VM name for consistent MAC generation
import hashlib
hash_obj = hashlib.md5(self.vm_name.encode())
hash_bytes = hash_obj.digest()[:3]
return ":".join([f"{b:02x}" for b in hash_bytes])
def create_vm_xml(
self,
vm_memory: int,
vm_vcpus: int,
vm_ip: str,
existing_uuid: str = None,
) -> str:
"""Generate VM XML configuration from template file."""
vm_uuid = existing_uuid if existing_uuid else str(uuid.uuid4())
# Read XML template from file
template_path = Path(__file__).parent / "thrillwiki-vm-template.xml"
if not template_path.exists():
raise FileNotFoundError(f"VM XML template not found at {template_path}")
with open(template_path, "r", encoding="utf-8") as f:
xml_template = f.read()
# Calculate CPU topology
cpu_cores = vm_vcpus // 2 if vm_vcpus > 1 else 1
cpu_threads = 2 if vm_vcpus > 1 else 1
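        # e.g. vm_vcpus=4 -> 2 cores x 2 threads; vm_vcpus=1 -> 1 core x 1 thread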
# Replace placeholders with actual values
xml_content = xml_template.format(
VM_NAME=self.vm_name,
VM_UUID=vm_uuid,
VM_MEMORY_KIB=vm_memory * 1024,
VM_VCPUS=vm_vcpus,
CPU_CORES=cpu_cores,
CPU_THREADS=cpu_threads,
MAC_SUFFIX=self._generate_mac_suffix(vm_ip),
)
return xml_content.strip()
def upload_iso_to_unraid(self, local_iso_path: Path) -> str:
"""Upload ISO to Unraid server."""
        remote_iso_path = f"/mnt/user/isos/{self.vm_name}-ubuntu-autoinstall.iso"
logger.info(f"Uploading ISO to Unraid: {remote_iso_path}")
try:
# Remove old ISO if exists
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -f {remote_iso_path}'",
shell=True,
check=False, # Don't fail if file doesn't exist
)
# Upload new ISO
subprocess.run(
f"scp {local_iso_path} {self.unraid_user}@{self.unraid_host}:{remote_iso_path}",
shell=True,
check=True,
)
logger.info(f"ISO uploaded successfully: {remote_iso_path}")
return remote_iso_path
except Exception as e:
logger.error(f"Failed to upload ISO: {e}")
raise
def create_vm(
self, vm_memory: int, vm_vcpus: int, vm_disk_size: int, vm_ip: str
) -> bool:
"""Create or update the VM on Unraid."""
try:
vm_exists = self.check_vm_exists()
if vm_exists:
                logger.info(f"VM {self.vm_name} already exists, updating configuration...")
# Always try to stop VM before updating
current_status = self.vm_status()
logger.info(f"Current VM status: {current_status}")
if current_status not in ["shut off", "unknown"]:
                    logger.info(f"Stopping VM {self.vm_name} for configuration update...")
self.stop_vm()
time.sleep(3)
else:
logger.info(f"VM {self.vm_name} is already stopped")
else:
logger.info(f"Creating VM {self.vm_name}...")
# Ensure VM directory exists
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'mkdir -p {self.vm_config_path}'",
shell=True,
check=True,
)
# Create virtual disk if it doesn't exist
disk_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {self.vm_config_path}/vdisk1.qcow2'",
shell=True,
capture_output=True,
)
if disk_check.returncode != 0:
logger.info(f"Creating virtual disk for VM {self.vm_name}...")
disk_cmd = f"""
ssh {self.unraid_user}@{self.unraid_host} 'qemu-img create -f qcow2 {self.vm_config_path}/vdisk1.qcow2 {vm_disk_size}G'
"""
subprocess.run(disk_cmd, shell=True, check=True)
else:
                logger.info(f"Virtual disk already exists for VM {self.vm_name}")
existing_uuid = None
if vm_exists:
# Get existing VM UUID
                cmd = (
                    f'ssh {self.unraid_user}@{self.unraid_host} '
                    f'\'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
                )
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
existing_uuid = result.stdout.strip()
logger.info(f"Found existing VM UUID: {existing_uuid}")
# Check if VM is persistent or transient
persistent_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --persistent --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
is_persistent = self.vm_name in persistent_check.stdout
if is_persistent:
# Undefine persistent VM with NVRAM flag
                    logger.info(
                        f"VM {self.vm_name} is persistent, undefining with NVRAM for reconfiguration..."
                    )
                    subprocess.run(
                        f"ssh {self.unraid_user}@{self.unraid_host} 'virsh undefine {self.vm_name} --nvram'",
                        shell=True,
                        check=True,
                    )
                    logger.info(f"Persistent VM {self.vm_name} undefined for reconfiguration")
else:
# Handle transient VM - just destroy it
                    logger.info(
                        f"VM {self.vm_name} is transient, destroying for reconfiguration..."
                    )
if self.vm_status() == "running":
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
check=True,
)
                    logger.info(f"Transient VM {self.vm_name} destroyed for reconfiguration")
# Generate VM XML with appropriate UUID
vm_xml = self.create_vm_xml(vm_memory, vm_vcpus, vm_ip, existing_uuid)
xml_file = f"/tmp/{self.vm_name}.xml"
with open(xml_file, "w", encoding="utf-8") as f:
f.write(vm_xml)
# Copy XML to Unraid and define/redefine VM
subprocess.run(
f"scp {xml_file} {self.unraid_user}@{self.unraid_host}:/tmp/",
shell=True,
check=True,
)
# Define VM as persistent domain
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh define /tmp/{self.vm_name}.xml'",
shell=True,
check=True,
)
# Ensure VM is set to autostart for persistent configuration
subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'virsh autostart {
self.vm_name}'",
shell=True,
check=False, # Don't fail if autostart is already enabled
)
action = "updated" if vm_exists else "created"
logger.info(f"VM {self.vm_name} {action} successfully")
# Cleanup
os.remove(xml_file)
return True
except Exception as e:
logger.error(f"Failed to create VM: {e}")
return False
def create_nvram_file(self, vm_uuid: str) -> bool:
"""Create NVRAM file for UEFI VM."""
try:
nvram_path = f"/etc/libvirt/qemu/nvram/{vm_uuid}_VARS-pure-efi.fd"
# Check if NVRAM file already exists
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {nvram_path}'",
shell=True,
capture_output=True,
)
if result.returncode == 0:
logger.info(f"NVRAM file already exists: {nvram_path}")
return True
# Copy template to create NVRAM file
logger.info(f"Creating NVRAM file: {nvram_path}")
result = subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'cp /usr/share/qemu/ovmf-x64/OVMF_VARS-pure-efi.fd {nvram_path}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info("NVRAM file created successfully")
return True
else:
logger.error(f"Failed to create NVRAM file: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error creating NVRAM file: {e}")
return False
def start_vm(self) -> bool:
"""Start the VM if it's not already running."""
try:
# Check if VM is already running
current_status = self.vm_status()
if current_status == "running":
logger.info(f"VM {self.vm_name} is already running")
return True
logger.info(f"Starting VM {self.vm_name}...")
# For new VMs, we need to extract the UUID and create NVRAM file
vm_exists = self.check_vm_exists()
if not vm_exists:
logger.error("Cannot start VM that doesn't exist")
return False
# Get VM UUID from XML
            cmd = (
                f'ssh {self.unraid_user}@{self.unraid_host} '
                f'\'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
            )
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
vm_uuid = result.stdout.strip()
logger.info(f"VM UUID: {vm_uuid}")
# Create NVRAM file if it doesn't exist
if not self.create_nvram_file(vm_uuid):
return False
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh start {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info(f"VM {self.vm_name} started successfully")
return True
else:
logger.error(f"Failed to start VM: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error starting VM: {e}")
return False
def stop_vm(self) -> bool:
"""Stop the VM with timeout and force destroy if needed."""
try:
logger.info(f"Stopping VM {self.vm_name}...")
# Try graceful shutdown first
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh shutdown {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0:
# Wait up to 30 seconds for graceful shutdown
                logger.info(f"Waiting for VM {self.vm_name} to shutdown gracefully...")
for i in range(30):
status = self.vm_status()
if status in ["shut off", "unknown"]:
logger.info(f"VM {self.vm_name} stopped gracefully")
return True
time.sleep(1)
# If still running after 30 seconds, force destroy
                logger.warning(
                    f"VM {self.vm_name} didn't shutdown gracefully, forcing destroy..."
                )
destroy_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if destroy_result.returncode == 0:
logger.info(f"VM {self.vm_name} forcefully destroyed")
return True
else:
                    logger.error(f"Failed to destroy VM: {destroy_result.stderr}")
return False
else:
                logger.error(f"Failed to initiate VM shutdown: {result.stderr}")
return False
except subprocess.TimeoutExpired:
logger.error(f"Timeout stopping VM {self.vm_name}")
return False
except Exception as e:
logger.error(f"Error stopping VM: {e}")
return False
def get_vm_ip(self) -> Optional[str]:
"""Get VM IP address."""
try:
# Wait for VM to get IP - Ubuntu autoinstall can take 20-30 minutes
max_attempts = 120 # 20 minutes total wait time
for attempt in range(max_attempts):
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domifaddr {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and "ipv4" in result.stdout:
                    lines = result.stdout.strip().split("\n")
for line in lines:
if "ipv4" in line:
# Extract IP from line like: vnet0
# 52:54:00:xx:xx:xx ipv4
# 192.168.1.100/24
parts = line.split()
if len(parts) >= 4:
ip_with_mask = parts[3]
ip = ip_with_mask.split("/")[0]
logger.info(f"VM IP address: {ip}")
return ip
                logger.info(
                    f"Waiting for VM IP... (attempt {attempt + 1}/{max_attempts}) - Ubuntu autoinstall in progress"
                )
time.sleep(10)
logger.error("Failed to get VM IP address")
return None
except Exception as e:
logger.error(f"Error getting VM IP: {e}")
return None
def vm_status(self) -> str:
"""Get VM status."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
return result.stdout.strip()
else:
return "unknown"
except Exception as e:
logger.error(f"Error getting VM status: {e}")
return "error"
def delete_vm(self) -> bool:
"""Completely remove VM and all associated files."""
try:
            logger.info(f"Deleting VM {self.vm_name} and all associated files...")
# Check if VM exists
if not self.check_vm_exists():
logger.info(f"VM {self.vm_name} does not exist")
return True
# Stop VM if running
if self.vm_status() == "running":
logger.info(f"Stopping VM {self.vm_name}...")
self.stop_vm()
time.sleep(5)
# Undefine VM with NVRAM
logger.info(f"Undefining VM {self.vm_name}...")
subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'virsh undefine {
self.vm_name} --nvram'",
shell=True,
check=True,
)
# Remove VM directory and all files
logger.info(f"Removing VM directory and files...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -rf {self.vm_config_path}'",
shell=True,
check=True,
)
# Remove autoinstall ISO
subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'rm -f /mnt/user/isos/{
self.vm_name}-ubuntu-autoinstall.iso'",
shell=True,
check=False, # Don't fail if file doesn't exist
)
logger.info(f"VM {self.vm_name} completely removed")
return True
except Exception as e:
logger.error(f"Failed to delete VM: {e}")
return False

View File

@@ -0,0 +1,654 @@
#!/usr/bin/env python3
"""
Template-based VM Manager for Unraid
Handles VM creation using pre-built template disks instead of autoinstall.
"""
import os
import time
import logging
import subprocess
from pathlib import Path
from typing import Optional
import uuid
from template_manager import TemplateVMManager
logger = logging.getLogger(__name__)
class UnraidTemplateVMManager:
"""Manages template-based VMs on Unraid server."""
def __init__(self, vm_name: str, unraid_host: str, unraid_user: str = "root"):
self.vm_name = vm_name
self.unraid_host = unraid_host
self.unraid_user = unraid_user
self.vm_config_path = f"/mnt/user/domains/{vm_name}"
self.template_manager = TemplateVMManager(unraid_host, unraid_user)
def authenticate(self) -> bool:
"""Test SSH connectivity to Unraid server."""
return self.template_manager.authenticate()
def check_vm_exists(self) -> bool:
"""Check if VM already exists."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
return self.vm_name in result.stdout
except Exception as e:
logger.error(f"Error checking VM existence: {e}")
return False
def _generate_mac_suffix(self, vm_ip: str) -> str:
"""Generate MAC address suffix based on VM IP or name."""
if vm_ip.lower() != "dhcp" and "." in vm_ip:
# Use last octet of static IP for MAC generation
last_octet = int(vm_ip.split(".")[-1])
return f"{last_octet:02x}:7d:fd"
else:
# Use hash of VM name for consistent MAC generation
import hashlib
hash_obj = hashlib.md5(self.vm_name.encode())
hash_bytes = hash_obj.digest()[:3]
return ":".join([f"{b:02x}" for b in hash_bytes])
def create_vm_xml(
self,
vm_memory: int,
vm_vcpus: int,
vm_ip: str,
existing_uuid: str = None,
) -> str:
"""Generate VM XML configuration from template file."""
vm_uuid = existing_uuid if existing_uuid else str(uuid.uuid4())
# Use simplified template for template-based VMs
template_path = Path(__file__).parent / "thrillwiki-vm-template-simple.xml"
if not template_path.exists():
raise FileNotFoundError(f"VM XML template not found at {template_path}")
with open(template_path, "r", encoding="utf-8") as f:
xml_template = f.read()
# Calculate CPU topology
cpu_cores = vm_vcpus // 2 if vm_vcpus > 1 else 1
cpu_threads = 2 if vm_vcpus > 1 else 1
# Replace placeholders with actual values
xml_content = xml_template.format(
VM_NAME=self.vm_name,
VM_UUID=vm_uuid,
VM_MEMORY_KIB=vm_memory * 1024,
VM_VCPUS=vm_vcpus,
CPU_CORES=cpu_cores,
CPU_THREADS=cpu_threads,
MAC_SUFFIX=self._generate_mac_suffix(vm_ip),
)
return xml_content.strip()
def create_vm_from_template(
self, vm_memory: int, vm_vcpus: int, vm_disk_size: int, vm_ip: str
) -> bool:
"""Create VM from template disk."""
try:
vm_exists = self.check_vm_exists()
if vm_exists:
                logger.info(f"VM {self.vm_name} already exists, updating configuration...")
# Always try to stop VM before updating
current_status = self.vm_status()
logger.info(f"Current VM status: {current_status}")
if current_status not in ["shut off", "unknown"]:
                    logger.info(f"Stopping VM {self.vm_name} for configuration update...")
self.stop_vm()
time.sleep(3)
else:
logger.info(f"VM {self.vm_name} is already stopped")
else:
logger.info(f"Creating VM {self.vm_name} from template...")
# Step 1: Prepare VM from template (copy disk)
logger.info("Preparing VM from template disk...")
if not self.template_manager.prepare_vm_from_template(
self.vm_name, vm_memory, vm_vcpus, vm_ip
):
logger.error("Failed to prepare VM from template")
return False
existing_uuid = None
if vm_exists:
# Get existing VM UUID
                cmd = (
                    f'ssh {self.unraid_user}@{self.unraid_host} '
                    f'\'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
                )
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
existing_uuid = result.stdout.strip()
logger.info(f"Found existing VM UUID: {existing_uuid}")
# Check if VM is persistent or transient
persistent_check = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --persistent --all | grep {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
is_persistent = self.vm_name in persistent_check.stdout
if is_persistent:
# Undefine persistent VM with NVRAM flag
                    logger.info(
                        f"VM {self.vm_name} is persistent, undefining with NVRAM for reconfiguration..."
                    )
                    subprocess.run(
                        f"ssh {self.unraid_user}@{self.unraid_host} 'virsh undefine {self.vm_name} --nvram'",
                        shell=True,
                        check=True,
                    )
                    logger.info(f"Persistent VM {self.vm_name} undefined for reconfiguration")
else:
# Handle transient VM - just destroy it
                    logger.info(
                        f"VM {self.vm_name} is transient, destroying for reconfiguration..."
                    )
if self.vm_status() == "running":
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
check=True,
)
                    logger.info(f"Transient VM {self.vm_name} destroyed for reconfiguration")
# Step 2: Generate VM XML with appropriate UUID
vm_xml = self.create_vm_xml(vm_memory, vm_vcpus, vm_ip, existing_uuid)
xml_file = f"/tmp/{self.vm_name}.xml"
with open(xml_file, "w", encoding="utf-8") as f:
f.write(vm_xml)
# Step 3: Copy XML to Unraid and define VM
subprocess.run(
f"scp {xml_file} {self.unraid_user}@{self.unraid_host}:/tmp/",
shell=True,
check=True,
)
# Define VM as persistent domain
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh define /tmp/{self.vm_name}.xml'",
shell=True,
check=True,
)
# Ensure VM is set to autostart for persistent configuration
subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'virsh autostart {
self.vm_name}'",
shell=True,
check=False, # Don't fail if autostart is already enabled
)
action = "updated" if vm_exists else "created"
            logger.info(f"VM {self.vm_name} {action} successfully from template")
# Cleanup
os.remove(xml_file)
return True
except Exception as e:
logger.error(f"Failed to create VM from template: {e}")
return False
def create_nvram_file(self, vm_uuid: str) -> bool:
"""Create NVRAM file for UEFI VM."""
try:
nvram_path = f"/etc/libvirt/qemu/nvram/{vm_uuid}_VARS-pure-efi.fd"
# Check if NVRAM file already exists
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {nvram_path}'",
shell=True,
capture_output=True,
)
if result.returncode == 0:
logger.info(f"NVRAM file already exists: {nvram_path}")
return True
# Copy template to create NVRAM file
logger.info(f"Creating NVRAM file: {nvram_path}")
result = subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'cp /usr/share/qemu/ovmf-x64/OVMF_VARS-pure-efi.fd {nvram_path}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info("NVRAM file created successfully")
return True
else:
logger.error(f"Failed to create NVRAM file: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error creating NVRAM file: {e}")
return False
def start_vm(self) -> bool:
"""Start the VM if it's not already running."""
try:
# Check if VM is already running
current_status = self.vm_status()
if current_status == "running":
logger.info(f"VM {self.vm_name} is already running")
return True
logger.info(f"Starting VM {self.vm_name}...")
# For VMs, we need to extract the UUID and create NVRAM file
vm_exists = self.check_vm_exists()
if not vm_exists:
logger.error("Cannot start VM that doesn't exist")
return False
# Get VM UUID from XML
            cmd = (
                f'ssh {self.unraid_user}@{self.unraid_host} '
                f'\'virsh dumpxml {self.vm_name} | grep "<uuid>" | sed "s/<uuid>//g" | sed "s/<\\/uuid>//g" | tr -d " "\''
            )
result = subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0 and result.stdout.strip():
vm_uuid = result.stdout.strip()
logger.info(f"VM UUID: {vm_uuid}")
# Create NVRAM file if it doesn't exist
if not self.create_nvram_file(vm_uuid):
return False
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh start {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
logger.info(f"VM {self.vm_name} started successfully")
logger.info(
"VM is booting from template disk - should be ready quickly!"
)
return True
else:
logger.error(f"Failed to start VM: {result.stderr}")
return False
except Exception as e:
logger.error(f"Error starting VM: {e}")
return False
def stop_vm(self) -> bool:
"""Stop the VM with timeout and force destroy if needed."""
try:
logger.info(f"Stopping VM {self.vm_name}...")
# Try graceful shutdown first
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh shutdown {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0:
# Wait up to 30 seconds for graceful shutdown
                logger.info(f"Waiting for VM {self.vm_name} to shutdown gracefully...")
for i in range(30):
status = self.vm_status()
if status in ["shut off", "unknown"]:
logger.info(f"VM {self.vm_name} stopped gracefully")
return True
time.sleep(1)
# If still running after 30 seconds, force destroy
                logger.warning(
                    f"VM {self.vm_name} didn't shutdown gracefully, forcing destroy..."
                )
destroy_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh destroy {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if destroy_result.returncode == 0:
logger.info(f"VM {self.vm_name} forcefully destroyed")
return True
else:
                    logger.error(f"Failed to destroy VM: {destroy_result.stderr}")
return False
else:
                logger.error(f"Failed to initiate VM shutdown: {result.stderr}")
return False
except subprocess.TimeoutExpired:
logger.error(f"Timeout stopping VM {self.vm_name}")
return False
except Exception as e:
logger.error(f"Error stopping VM: {e}")
return False
def get_vm_ip(self) -> Optional[str]:
"""Get VM IP address using multiple detection methods for template VMs."""
try:
# Method 1: Try guest agent first (most reliable for template VMs)
logger.info("Trying guest agent for IP detection...")
            ssh_cmd = (
                f"ssh -o StrictHostKeyChecking=no {self.unraid_user}@{self.unraid_host} "
                f"'virsh guestinfo {self.vm_name} --interface 2>/dev/null || echo FAILED'"
            )
logger.info(f"Running SSH command: {ssh_cmd}")
result = subprocess.run(
ssh_cmd, shell=True, capture_output=True, text=True, timeout=10
)
logger.info(
f"Guest agent result (returncode={result.returncode}): {result.stdout[:200]}..."
)
if (
result.returncode == 0
and "FAILED" not in result.stdout
and "addr" in result.stdout
):
# Parse guest agent output for IP addresses
lines = result.stdout.strip().split("\n")
import re
for line in lines:
logger.info(f"Processing line: {line}")
# Look for lines like: if.1.addr.0.addr : 192.168.20.65
if (
".addr." in line
and "addr :" in line
and "127.0.0.1" not in line
):
# Extract IP address from the line
ip_match = re.search(
r":\s*([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\s*$",
line,
)
if ip_match:
ip = ip_match.group(1)
logger.info(f"Found potential IP: {ip}")
# Skip localhost and Docker bridge IPs
if not ip.startswith("127.") and not ip.startswith("172."):
logger.info(f"Found IP via guest agent: {ip}")
return ip
# Method 2: Try domifaddr (network interface detection)
logger.info("Trying domifaddr for IP detection...")
result = subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'virsh domifaddr {
self.vm_name} 2>/dev/null || echo FAILED'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if (
result.returncode == 0
and "FAILED" not in result.stdout
and "ipv4" in result.stdout
):
lines = result.stdout.strip().split("\n")
for line in lines:
if "ipv4" in line:
# Extract IP from line like: vnet0
# 52:54:00:xx:xx:xx ipv4 192.168.1.100/24
parts = line.split()
if len(parts) >= 4:
ip_with_mask = parts[3]
ip = ip_with_mask.split("/")[0]
logger.info(f"Found IP via domifaddr: {ip}")
return ip
# Method 3: Try ARP table lookup (fallback for when guest agent
# isn't ready)
logger.info("Trying ARP table lookup...")
# Get VM MAC address first
mac_result = subprocess.run(
                f'ssh {self.unraid_user}@{self.unraid_host} \'virsh dumpxml {self.vm_name} | grep "mac address" | head -1 | sed "s/.*address=.\\([^\'"]*\\).*/\\1/"\'',
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if mac_result.returncode == 0 and mac_result.stdout.strip():
mac_addr = mac_result.stdout.strip()
logger.info(f"VM MAC address: {mac_addr}")
# Look up IP by MAC in ARP table
arp_result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'arp -a | grep {mac_addr} || echo NOTFOUND'",
shell=True,
capture_output=True,
text=True,
timeout=10,
)
if arp_result.returncode == 0 and "NOTFOUND" not in arp_result.stdout:
# Parse ARP output like: (192.168.1.100) at
# 52:54:00:xx:xx:xx
import re
ip_match = re.search(r"\(([0-9.]+)\)", arp_result.stdout)
if ip_match:
ip = ip_match.group(1)
logger.info(f"Found IP via ARP lookup: {ip}")
return ip
logger.warning("All IP detection methods failed")
return None
except subprocess.TimeoutExpired:
logger.error("Timeout getting VM IP - guest agent may not be ready")
return None
except Exception as e:
logger.error(f"Error getting VM IP: {e}")
return None
def vm_status(self) -> str:
"""Get VM status."""
try:
result = subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {self.vm_name}'",
shell=True,
capture_output=True,
text=True,
)
if result.returncode == 0:
return result.stdout.strip()
else:
return "unknown"
except Exception as e:
logger.error(f"Error getting VM status: {e}")
return "error"
def delete_vm(self) -> bool:
"""Completely remove VM and all associated files."""
try:
            logger.info(f"Deleting VM {self.vm_name} and all associated files...")
# Check if VM exists
if not self.check_vm_exists():
logger.info(f"VM {self.vm_name} does not exist")
return True
# Stop VM if running
if self.vm_status() == "running":
logger.info(f"Stopping VM {self.vm_name}...")
self.stop_vm()
time.sleep(5)
# Undefine VM with NVRAM
logger.info(f"Undefining VM {self.vm_name}...")
subprocess.run(
f"ssh {
self.unraid_user}@{
self.unraid_host} 'virsh undefine {
self.vm_name} --nvram'",
shell=True,
check=True,
)
# Remove VM directory and all files
logger.info(f"Removing VM directory and files...")
subprocess.run(
f"ssh {self.unraid_user}@{self.unraid_host} 'rm -rf {self.vm_config_path}'",
shell=True,
check=True,
)
logger.info(f"VM {self.vm_name} completely removed")
return True
except Exception as e:
logger.error(f"Failed to delete VM: {e}")
return False
def customize_vm_for_thrillwiki(
self, repo_url: str, github_token: str = ""
) -> bool:
"""Customize the VM for ThrillWiki after it boots."""
try:
logger.info("Waiting for VM to be accessible via SSH...")
# Wait for VM to get an IP and be SSH accessible
vm_ip = None
max_attempts = 20
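            # 20 attempts x 15 s sleep = up to ~5 minutes waiting for SSH to come up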
for attempt in range(max_attempts):
vm_ip = self.get_vm_ip()
if vm_ip:
# Test SSH connectivity
ssh_test = subprocess.run(
f"ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no thrillwiki@{vm_ip} 'echo SSH ready'",
shell=True,
capture_output=True,
text=True,
)
if ssh_test.returncode == 0:
logger.info(f"VM is SSH accessible at {vm_ip}")
break
                logger.info(f"Waiting for SSH access... (attempt {attempt + 1}/{max_attempts})")
time.sleep(15)
if not vm_ip:
logger.error("VM failed to become SSH accessible")
return False
# Run ThrillWiki deployment on the VM
logger.info("Running ThrillWiki deployment on VM...")
deploy_cmd = f"cd /home/thrillwiki && /home/thrillwiki/deploy-thrillwiki.sh '{repo_url}'"
if github_token:
deploy_cmd = f"cd /home/thrillwiki && GITHUB_TOKEN='{github_token}' /home/thrillwiki/deploy-thrillwiki.sh '{repo_url}'"
deploy_result = subprocess.run(
f"ssh -o StrictHostKeyChecking=no thrillwiki@{vm_ip} '{deploy_cmd}'",
shell=True,
capture_output=True,
text=True,
)
if deploy_result.returncode == 0:
logger.info("ThrillWiki deployment completed successfully!")
logger.info(f"ThrillWiki should be accessible at http://{vm_ip}:8000")
return True
else:
                logger.error(f"ThrillWiki deployment failed: {deploy_result.stderr}")
return False
except Exception as e:
logger.error(f"Error customizing VM: {e}")
return False